Tagged: ars technica

  • richardmitnick 9:06 am on October 11, 2018 Permalink | Reply
    Tags: ars technica, Turbulence unsolved, Werner Heisenberg

    From ars technica: “Turbulence, the oldest unsolved problem in physics” 

    Ars Technica
    From ars technica

    10/10/2018
    Lee Phillips

    The flow of water through a pipe is still in many ways an unsolved problem.

    Werner Heisenberg won the 1932 Nobel Prize for helping to found the field of quantum mechanics and developing foundational ideas like the Copenhagen interpretation and the uncertainty principle. The story goes that he once said that, if he were allowed to ask God two questions, they would be, “Why quantum mechanics? And why turbulence?” Supposedly, he was pretty sure God would be able to answer the first question.

    Werner Heisenberg from German Federal Archives

    The quote may be apocryphal, and there are different versions floating around. Nevertheless, it is true that Heisenberg banged his head against the turbulence problem for several years.

    His thesis advisor, Arnold Sommerfeld, assigned the turbulence problem to Heisenberg simply because he thought none of his other students were up to the challenge—and this list of students included future luminaries like Wolfgang Pauli and Hans Bethe. But Heisenberg’s formidable math skills, which allowed him to make bold strides in quantum mechanics, only afforded him a partial and limited success with turbulence.

    Nearly 90 years later, the effort to understand and predict turbulence remains of immense practical importance. Turbulence factors into the design of much of our technology, from airplanes to pipelines, and it factors into predicting important natural phenomena such as the weather. But because our understanding of turbulence has remained largely ad hoc and limited, the development of technology that interacts significantly with fluid flows has long been forced to be conservative and incremental. If only we became masters of this ubiquitous phenomenon of nature, these technologies might be free to evolve in more imaginative directions.

    An undefined definition

    Here is the point at which you might expect us to explain turbulence, ostensibly the subject of the article. Unfortunately, physicists still don’t agree on how to define it. It’s not quite as bad as “I know it when I see it,” but it’s not the best defined idea in physics, either.

    So for now, we’ll make do with a general notion and try to make it a bit more precise later on. The general idea is that turbulence involves the complex, chaotic motion of a fluid. A “fluid” in physics talk is anything that flows, including liquids, gases, and sometimes even granular materials like sand.

    Turbulence is all around us, yet it’s usually invisible. Simply wave your hand in front of your face, and you have created incalculably complex motions in the air, even if you can’t see it. Motions of fluids are usually hidden to the senses except at the interface between fluids that have different optical properties. For example, you can see the swirls and eddies on the surface of a flowing creek but not the patterns of motion beneath the surface. The history of progress in fluid dynamics is closely tied to the history of experimental techniques for visualizing flows. But long before the advent of the modern technologies of flow sensors and high-speed video, there were those who were fascinated by the variety and richness of complex flow patterns.

    One of the first to visualize these flows was scientist, artist, and engineer Leonardo da Vinci, who combined keen observational skills with unparalleled artistic talent to catalog turbulent flow phenomena. Back in 1509, Leonardo was not merely drawing pictures. He was attempting to capture the essence of nature through systematic observation and description. In this figure, we see one of his studies of wake turbulence, the development of a region of chaotic flow as water streams past an obstacle.

    For turbulence to be considered a solved problem in physics, we would need to be able to demonstrate that we can start with the basic equation describing fluid motion and then solve it to predict, in detail, how a fluid will move under any particular set of conditions. That we cannot do this in general is the central reason that many physicists consider turbulence to be an unsolved problem.

    I say “many” because some think it should be considered solved, at least in principle. Their argument is that calculating turbulent flows is just an application of Newton’s laws of motion, albeit a very complicated one; we already know Newton’s laws, so everything else is just detail. Naturally, I hold the opposite view: the proof is in the pudding, and this particular pudding has not yet come out right.

    The lack of a complete and satisfying theory of turbulence based on classical physics has even led to suggestions that a full account requires some quantum mechanical ingredients: that’s a minority view, but one that can’t be discounted.

    An example of why turbulence is said to be an unsolved problem is that we can’t generally predict the speed at which an orderly, non-turbulent (“laminar”) flow will make the transition to a turbulent flow. We can do pretty well in some special cases—this was one of the problems that Heisenberg had some success with—but, in general, our rules of thumb for predicting the transition speeds are summaries of experiments and engineering experience.

    There are many phenomena in nature that illustrate the often sudden transformation from a calm, orderly flow to a turbulent flow.
    The transition to turbulence. Credit: Dr. Gary Settles

    The figure above is a nice illustration of this transition phenomenon. It shows the hot air rising from a candle flame, using a 19th-century visualization technique that makes gases of different densities look different. Here, the air heated by the candle is less dense than the surrounding atmosphere.

    For another turbulent transition phenomenon familiar to anyone who frequents the beach, consider gentle, rolling ocean waves that become complex and foamy as they approach the shore and “break.” In the open ocean, wind-driven waves can also break if the windspeed is high or if multiple waves combine to form a larger one.

    For another visual aid, there is a centuries-old tradition in Japanese painting of depicting turbulent, breaking ocean waves. In these paintings, the waves are not merely part of the landscape but the main subjects. These artists seemed to be mainly concerned with conveying the beauty and terrible power of the phenomenon, rather than, as was Leonardo, being engaged in a systematic study of nature. One of the most famous Japanese artworks, and an iconic example of this genre, is Hokusai’s “Great Wave,” a woodblock print published in 1831.

    Hokusai’s “Great Wave.”

    For one last reason to consider turbulence an unsolved problem, turbulent flows exhibit a wide range of interesting behavior in time and space. Most of these have been discovered by measurement, not predicted, and there’s still no satisfying theoretical explanation for them.

    Simulation

    Reasons for and against “mission complete” aside, why is the turbulence problem so hard? The best answer comes from looking at both the history and current research directed at what Richard Feynman once called “the most important unsolved problem of classical physics.”

    The most commonly used formula for describing fluid flow is the Navier-Stokes equation. This is the equation you get if you apply Newton’s second law of motion, F = ma (force = mass × acceleration), to a fluid with simple material properties, excluding elasticity, memory effects, and other complications. Complications like these arise when we try to accurately model the flows of paint, polymers, and some biological fluids such as blood (many other substances also violate the assumptions behind the Navier-Stokes equation). But for water, air, and other simple liquids and gases, it’s an excellent approximation.
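    The article never writes the equation out, but for reference the standard textbook form for an incompressible fluid of constant density is:

        \rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0

    Here u is the velocity field, p the pressure, ρ the density, μ the viscosity, and f any body force such as gravity. The (u · ∇)u term, in which the velocity field acts on itself, is the nonlinearity discussed below.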

    The Navier-Stokes equation is difficult to solve because it is nonlinear. This word is thrown around quite a bit, but here it means something specific. You can build up a complicated solution to a linear equation by adding up many simple solutions. An example you may be aware of is sound: the equation for sound waves is linear, so you can build up a complex sound by adding together many simple sounds of different frequencies (“harmonics”). Elementary quantum mechanics is also linear; the Schrödinger equation allows you to add together solutions to find a new solution.

    But fluid dynamics doesn’t work this way: the nonlinearity of the Navier-Stokes equation means that you can’t build solutions by adding together simpler solutions. This is part of the reason that Heisenberg’s mathematical genius, which served him so well in helping to invent quantum mechanics, was put to such a severe test when it came to turbulence.
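    A quick way to see why superposition fails (a schematic aside, not part of the original article): apply the quadratic advective term to the sum of two candidate solutions,

        (\mathbf{u}_1 + \mathbf{u}_2) \cdot \nabla (\mathbf{u}_1 + \mathbf{u}_2) = \mathbf{u}_1 \cdot \nabla \mathbf{u}_1 + \mathbf{u}_2 \cdot \nabla \mathbf{u}_2 + \mathbf{u}_1 \cdot \nabla \mathbf{u}_2 + \mathbf{u}_2 \cdot \nabla \mathbf{u}_1

    The last two cross terms are left over, so the sum of two solutions is not, in general, a solution.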

    Heisenberg was forced to make various approximations and assumptions to make any progress with his thesis problem. Some of these were hard to justify; for example, the applied mathematician Fritz Noether (a brother of Emmy Noether) raised prominent objections to Heisenberg’s turbulence calculations for decades before finally admitting that they seemed to be correct after all.

    (The situation was so hard to resolve that Heisenberg himself said, while he thought his methods were justified, he couldn’t find the flaw in Fritz Noether’s reasoning, either!)

    The cousins of the Navier-Stokes equation that are used to describe more complex fluids are also nonlinear, as is a simplified form, the Euler equation, that omits the effects of friction. There are cases where a linear approximation does work well, such as flow at extremely slow speeds (imagine honey flowing out of a jar), but this excludes most problems of interest including turbulence.

    Who’s down with CFD?

    Despite the near impossibility of finding mathematical solutions to the equations for fluid flows under realistic conditions, science still needs to get some kind of predictive handle on turbulence. For this, scientists and engineers have turned to the only option available when pencil and paper failed them—the computer. These researchers are trying to make the most of modern hardware to put a dent in one of the most demanding applications for numerical computing: calculating turbulent flows.

    The need to calculate these chaotic flows has benefited from (and been a driver of) improvements in numerical methods and computer hardware almost since the first giant computers appeared. The field is called computational fluid dynamics, often abbreviated as CFD.

    Early in the history of CFD, engineers and scientists applied straightforward numerical techniques to try to directly approximate solutions to the Navier-Stokes equations. This involves dividing space into a grid and calculating the fluid variables (pressure, velocity) at each grid point. The huge range of spatial scales immediately makes this approach expensive: the flow features must be resolved accurately from the largest scales (meters for pipes, thousands of kilometers for weather) down to nearly the molecular scale. Even if you cut off the length scale at the small end at millimeters or centimeters, you will still need millions of grid points.
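    To make that cost concrete, here is a minimal Python sketch (not from the article; the domain size and grid spacing are made-up illustrative numbers) of the bookkeeping a uniform grid requires:

        # Illustrative only: storage for a uniform 2D grid, not a working CFD solver.
        # The domain size and grid spacing below are hypothetical numbers.
        import numpy as np

        Lx, Ly = 1.0, 1.0                  # domain size in meters (assumed)
        dx = 1e-3                          # 1 mm spacing, the "small end" cutoff
        nx, ny = int(Lx / dx), int(Ly / dx)

        u = np.zeros((nx, ny))             # x-velocity at each grid point
        v = np.zeros((nx, ny))             # y-velocity at each grid point
        p = np.zeros((nx, ny))             # pressure at each grid point

        print(f"{nx * ny:,} points per field")   # 1,000,000 points for this modest 2D domain

    A 3D domain at the same resolution would need a billion points per field, which is why the gridding strategies described next matter so much.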

    A possible grid for calculating the flow over an airfoil.

    One approach to getting reasonable accuracy with a manageable-sized grid begins with the realization that there are often large regions where not much is happening. Put another way, in regions far away from solid objects or other disturbances, the flow is likely to vary slowly in both space and time. All the action is elsewhere; the turbulent areas are usually found near objects or interfaces.

    A non-uniform grid for calculating the flow over an airfoil.

    If we take another look at our airfoil and imagine a uniform flow beginning at the left and passing over it, it can be more efficient to concentrate the grid points near the object, especially at the leading and trailing edges, and not “waste” grid points far away from the airfoil. The figure above shows one possible gridding for simulating this problem.

    This is the simplest type of 2D non-uniform grid, containing nothing but straight lines. The state of the art in nonuniform grids is called adaptive mesh refinement (AMR), where the mesh, or grid, actually changes and adapts to the flow during the simulation. This concentrates grid points where they are needed, not wasting them in areas of nearly uniform flow. Research in this field is aimed at optimizing the grid generation process while minimizing the artificial effects of the grid on the solution. Here it’s used in a NASA simulation of the flow around an oscillating rotor blade. The color represents vorticity, a quantity related to angular momentum.

    Using AMR to simulate the flow around a rotor blade. Credit: Neal M. Chaderjian, NASA/Ames

    The above image shows the computational grid, rendered as blue lines, as well as the airfoil and the flow solution, showing how the grid adapts itself to the flow. (The grid points are so close together at the areas of highest grid resolution that they appear as solid blue regions.) Despite the efficiencies gained by the use of adaptive grids, simulations such as this are still computationally intensive; a typical calculation of this type occupies 2,000 compute cores for about a week.
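    As a very rough sketch of the refinement decision at the heart of AMR (the criterion, threshold, and function names below are hypothetical; production solvers like the one used for this NASA simulation are far more sophisticated):

        # Schematic AMR refinement pass: flag cells where the flow is "interesting"
        # (here, where vorticity magnitude exceeds a hypothetical threshold) so that
        # only those cells get subdivided into finer cells on the next level.
        import numpy as np

        def cells_to_refine(vorticity: np.ndarray, threshold: float) -> np.ndarray:
            """Return a boolean mask of cells whose vorticity exceeds the threshold."""
            return np.abs(vorticity) > threshold

        def refinement_report(vorticity: np.ndarray, threshold: float = 1.0) -> None:
            flags = cells_to_refine(vorticity, threshold)
            # A real code would now subdivide the flagged cells and interpolate the
            # solution onto them; here we only report how concentrated the effort is.
            print(f"refining {int(flags.sum())} of {flags.size} cells")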

    Dimitri Mavriplis and his collaborators at the Mavriplis CFD Lab at the University of Wyoming have made available several videos of their AMR simulations.

    AMR simulation of flow past a sphere. Credit: Mavriplis CFD Lab

    Above is a frame from a video of a simulation of the flow past an object; the video is useful for getting an idea of how the AMR technique works, because it shows how the computational grid tracks the flow features.

    This work is an example of how state-of-the-art numerical techniques are capable of capturing some of the physics of the transition to turbulence, illustrated in the image of candle-heated air above.

    Another approach to getting the most out of finite computer resources involves making alterations to the equation of motion, rather than, or in addition to, altering the computational grid.

    Since the first direct numerical simulations of the Navier-Stokes equations were begun at Los Alamos in the late 1950s, the problem of the vast range of spatial scales has been attacked by some form of modeling of the flow at small scales. In other words, the actual Navier-Stokes equations are solved for motion on the medium and large scales, but, below some cutoff, a statistical or other model is substituted.

    The idea is that the interesting dynamics occur at larger scales, and grid points are placed to cover these. But the “subgrid” motions that happen between the grid points mainly just dissipate energy, or turn motion into heat, so they don’t need to be tracked in detail. This approach is also called large-eddy simulation (LES), the term “eddy” standing in for a flow feature at a particular length scale.
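    As one concrete example of such a subgrid closure (the article does not name a particular model), the classic Smagorinsky model replaces the unresolved eddies with an effective “eddy viscosity” tied to the grid spacing:

        \nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\, \bar{S}_{ij} \bar{S}_{ij}}

    Here Δ is the grid (filter) width, S̄_ij the strain rate of the resolved flow, and C_s an empirical constant of order 0.1 to 0.2; the extra viscosity drains energy from the smallest resolved scales much as the missing subgrid motions would.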

    The development of subgrid modeling, although it began with the beginning of CFD, is an active area of research to this day. This is because we always want to get the most bang for the computer buck. No matter how powerful the computer, a sophisticated numerical technique that allows us to limit the required grid resolution will enable us to handle more complex problems.

    There are several other prominent approaches to modeling fluid flows on computers, some of which do not make use of grids at all. Perhaps the most successful of these is the technique called “smoothed particle hydrodynamics,” which, as its name suggests, models the fluid as a collection of computational “particles,” which are moved around without the use of a grid. The “smoothed” in the name comes from the smooth interpolations between particles that are used to derive the fluid properties at different points in space.
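    In its textbook form (again, a standard formula rather than anything specific to this article), the smoothing is a kernel-weighted sum over neighboring particles: any field A at position r is estimated as

        A(\mathbf{r}) \approx \sum_j \frac{m_j}{\rho_j} \, A_j \, W(|\mathbf{r} - \mathbf{r}_j|, h)

    where m_j, ρ_j, and A_j are the mass, density, and field value carried by particle j, W is the smoothing kernel, and h is the smoothing length that sets how far a particle’s influence extends.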

    Theory and experiment

    Despite the impressive (and ever-improving) ability of fluid dynamicists to calculate complex flows with computers, the search for a better theoretical understanding of turbulence continues, for computers can only calculate flow solutions in particular situations, one case at a time. Only through the use of mathematics do physicists feel that they’ve achieved a general understanding of a group of related phenomena. Luckily, there are a few main theoretical approaches to turbulence, each with interesting phenomena it seeks to explain.

    Only a few exact solutions of the Navier-Stokes equations are known; these describe simple, laminar flows (and certainly not turbulent flows of any kind). For flow between two flat plates, the velocity is zero at the boundaries and reaches a maximum halfway between them. This parabolic flow profile (shown below) solves the equations, something that has been known for over a century. Laminar flow in a pipe is similar, with the maximum velocity occurring at the center.

    Exact solution for flow between plates.
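    For the record, the parabolic profile shown above has a simple closed form (standard plane Poiseuille flow, assuming plates separated by a distance h and a constant pressure gradient G = -dp/dx):

        u(y) = \frac{G}{2\mu} \, y \, (h - y), \qquad 0 \le y \le h

    The velocity vanishes at the walls y = 0 and y = h and peaks at the midplane y = h/2, exactly as sketched.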

    The interesting thing about this parabolic solution, and similar exact solutions, is that they are valid (mathematically speaking) at any flow velocity, no matter how high. However, experience shows that while this works at low speeds, the flow breaks up and becomes turbulent at some moderate “critical” speed. Using mathematical methods to try to find this critical speed is part of what Heisenberg was up to in his thesis work.

    Theorists describe what’s happening here by using the language of stability theory. Stability theory is the examination of the exact solutions to the Navier-Stokes equation and their ability to survive “perturbations,” which are small disturbances added to the flow. These disturbances can be in the form of boundaries that are less than perfectly smooth, variations in the pressure driving the flow, etc.

    The idea is that, while the low-speed solution is valid at any speed, near a critical speed another solution also becomes valid, and nature prefers that second, more complex solution. In other words, the simple solution has become unstable and is replaced by a second one. As the speed is ramped up further, each solution gives way to a more complicated one, until we arrive at the chaotic flow we call turbulence.

    In the real world, this will always happen, because perturbations are always present—and this is why laminar flows are much less common in everyday experience than turbulence.

    Experiments to directly observe these instabilities are delicate, because the distance between the first instability and the onset of full-blown turbulence is usually quite small. You can see a version of the process in the figure above, showing the transition to turbulence in the heated air column above a candle. The straight column is unstable, but it takes a while before the sinuous instability grows large enough for us to see it as a visible wiggle. Almost as soon as this happens, the cascade of instabilities piles up, and we see a sudden explosion into turbulence.

    Another example of the common pattern is in the next illustration, which shows the typical transition to turbulence in a flow bounded by a single wall.

    Transition to turbulence in a wall-bounded flow. NASA.

    We can again see an approximately periodic disturbance to the laminar flow begin to grow, and after just a few wavelengths the flow suddenly becomes turbulent.

    Capturing, and predicting, the transition to turbulence is an ongoing challenge for simulations and theory; on the theoretical side, the effort begins with stability theory.

    In fluid flows close to a wall, the transition to turbulence can take a somewhat different form. As in the other examples illustrated here, small disturbances get amplified by the flow until they break down into chaotic, turbulent motion. But the turbulence does not involve the entire fluid, instead confining itself to isolated spots, which are surrounded by calm, laminar flow. Eventually, more spots develop, enlarge, and ultimately merge, until the entire flow is turbulent.

    The fascinating thing about these spots is that, somehow, the fluid can enter them, undergo a complex, chaotic motion, and emerge calmly as a non-turbulent, organized flow on the other side. Meanwhile, the spots persist as if they were objects embedded in the flow and attached to the boundary.


    Turbulent spot experiment: pressure fluctuation. (Credit: Katya Casper et al., Sandia National Labs)

    Despite a succession of first-rate mathematical minds puzzling over the Navier-Stokes equation since it was written down almost two centuries ago, exact solutions still are rare and cherished possessions, and basic questions about the equation remain unanswered. For example, we still don’t know whether the equation has solutions in all situations. We’re also not sure if its solutions, which supposedly represent the real flows of water and air, remain well-behaved and finite, or whether some of them blow up with infinite energies or become unphysically unsmooth.

    The scientist who can settle this, either way, has a cool million dollars waiting for them—this is one of the seven unsolved “Millennium Prize” mathematical problems set by the Clay Mathematics Institute.

    Fortunately, there are other ways to approach the theory of turbulence, some of which don’t depend on the knowledge of exact solutions to the equations of motion. The study of the statistics of turbulence uses the Navier-Stokes equation to deduce average properties of turbulent flows without trying to solve the equations exactly. It addresses questions like, “if the velocity of the flow here is so and so, then what is the probability that the velocity one centimeter away will be within a certain range?” It also answers questions about the average of quantities such as the resistance encountered when trying to push water through a pipe, or the lifting force on an airplane wing.

    These are the quantities of real interest to the engineer, who has little use for the physicist’s or mathematician’s holy grail of a detailed, exact description.

    It turns out that the one great obstacle in the way of a statistical approach to turbulence theory is, once again, the nonlinear term in the Navier-Stokes equation. When you use this equation to derive another equation for the average velocity at a single point, it contains a term involving something new: the velocity correlation between two points. When you derive the equation for this velocity correlation, you get an equation with yet another new term: the velocity correlation involving three points. This process never ends, as the diabolical nonlinear term keeps generating higher-order correlations.

    The need to somehow terminate, or “close,” this infinite sequence of equations is known as the “closure problem” in turbulence theory and is still the subject of active research. Very briefly, to close the equations you need to step outside of the mathematical procedure and appeal to a physically motivated assumption or approximation.
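    A standard way to see where the extra unknowns come from (not spelled out in the article) is to split the velocity into a mean and a fluctuation, u = ū + u′, and average the Navier-Stokes equation. The nonlinear term then leaves behind a correlation of fluctuations that the mean flow alone cannot determine:

        \frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j} = -\frac{1}{\rho} \frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j} - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}

    The new quantity, the Reynolds stress \overline{u_i' u_j'}, needs its own equation, which introduces triple correlations, and so on without end; that is the closure problem in equation form.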

    Despite its difficulty, some type of statistical solution to the fluid equations is essential for describing the many phenomena of fully developed turbulence. Turbulence need not be merely a random, featureless expanse of roiling fluid; in fact, it is usually more interesting than that. One of the most intriguing phenomena is the existence of persistent, organized structures within a violent, chaotic flow environment. We are all familiar with magnificent examples of these in the form of the storms on Jupiter, recognizable, even iconic, features that last for years, embedded within a highly turbulent flow.

    More down-to-Earth examples occur in almost any real-world case of a turbulent flow—in fact, experimenters have to take great pains if they want to create a turbulent flow field that is truly homogeneous, without any embedded structure.

    In the image below of a turbulent wake behind a cylinder, and in the earlier image of the transition to turbulence in a wall-bounded flow, you can see the echoes of the wave-like disturbance that precedes the onset of fully developed turbulence: a periodicity that persists even as the flow becomes chaotic.

    Cyclones at Jupiter’s north pole. NASA, JPL-Caltech, SwRI, ASI, INAF, JIRAM.

    Wake behind a cylinder. Joseph Straccia et al. (CC BY-NC-ND)

    When your basic governing equation is very hard to solve or even to simulate, it’s natural to look for a more tractable equation or model that still captures most of the important physics. Much of the theoretical effort to understand turbulence is of this nature.

    We’ve mentioned subgrid models above, used to reduce the number of grid points required in a numerical simulation. Another approach to simplifying the Navier-Stokes equation is a class of models called “shell models.” Roughly speaking, in these models you take the Fourier transform of the Navier-Stokes equation, leading to a description of the fluid as a large number of interacting waves at different wavelengths. Then, in a systematic way, you discard most of the waves, keeping just a handful of significant ones. You can then calculate, using a computer or, with the simplest models, by hand, the mode interactions and the resulting turbulent properties. While, naturally, much of the physics is lost in these types of models, they allow some aspects of the statistical properties of turbulence to be studied in situations where the full equations cannot be solved.
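    The common starting point for these models (a standard construction, summarized here rather than taken from the article) is the Fourier representation of the velocity field, after which only one representative amplitude is kept per logarithmically spaced band, or “shell,” of wavenumbers:

        \mathbf{u}(\mathbf{x}, t) = \sum_{\mathbf{k}} \hat{\mathbf{u}}_{\mathbf{k}}(t) \, e^{i \mathbf{k} \cdot \mathbf{x}}, \qquad k_n = k_0 \lambda^n, \quad \lambda > 1

    Keeping a single complex variable u_n(t) per shell k_n, with the nonlinear term coupling only neighboring shells, reduces the problem to a handful of ordinary differential equations, which is what makes it tractable by computer or, in the simplest cases, by hand.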

    Occasionally, we hear about the “end of physics”—the idea that we are approaching the stage where all the important questions will be answered, and we will have a theory of everything. But from another point of view, the fact that such a commonplace phenomenon as the flow of water through a pipe is still in many ways an unsolved problem means that we are unlikely to ever reach a point that all physicists will agree is the end of their discipline. There remains enough mystery in the everyday world around us to keep physicists busy far into the future.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 1:03 pm on July 8, 2017 Permalink | Reply
    Tags: "Answers in Genesis", A great test case, ars technica

    From ars technica: “Creationist sues national parks, now gets to take rocks from Grand Canyon” a Test Case Too Good to be True 

    Ars Technica
    ars technica

    7/7/2017
    Scott K. Johnson


    “Alternative facts” aren’t new. Young-Earth creationist groups like Answers in Genesis believe the Earth is no more than 6,000 years old despite actual mountains of evidence to the contrary, and they’ve been playing the “alternative facts” card for years. In lieu of conceding incontrovertible geological evidence, they sidestep it by saying, “Well, we just look at those facts differently.”

    Nowhere is this more apparent than the Grand Canyon, which young-Earth creationist groups have long been enamored with. A long geologic record (spanning almost 2 billion years, in total) is on display in the layers of the Grand Canyon thanks to the work of the Colorado River. But many creationists instead assert that the canyon’s rocks—in addition to the spectacular erosion that reveals them—are actually the product of the Biblical “great flood” several thousand years ago.

    Andrew Snelling, who got a PhD in geology before joining Answers in Genesis, continues working to interpret the canyon in a way that is consistent with his views. In 2013, he requested permission from the National Park Service to collect some rock samples in the canyon for a new project to that end. The Park Service can grant permits for collecting material, which is otherwise illegal.

    Snelling wanted to collect rocks from structures in sedimentary formations known as “soft-sediment deformation”—basically, squiggly disturbances of the layering that occur long before the sediment solidifies into rock. While solid rock layers can fold (bend) on a larger scale under the right pressures, young-Earth creationists assert that all folds are soft sediment structures, since forming them doesn’t require long periods of time.

    The National Park Service sent Snelling’s proposal out for review, having three academic geologists who study the canyon look at it. Those reviews were not kind. None felt the project provided any value to justify the collection. One reviewer, the University of New Mexico’s Karl Karlstrom, pointed out that examples of soft-sediment deformation can be found all over the place, so Snelling didn’t need to collect rock from a national park. In the end, Snelling didn’t get his permit.

    In May, Snelling filed a lawsuit alleging that his rights had been violated, as he believed his application had been denied by a federal agency because of his religious views. The complaint cites, among other things, President Trump’s executive order on religious freedom.

    That lawsuit was withdrawn by Snelling on June 28. According to a story in The Australian, Snelling withdrew his suit because the National Park Service has relented and granted him his permit. He will be able to collect about 40 fist-sized samples, provided that he makes the data from any analyses freely available.

    Not that anything he collects will matter. “Even if I don’t find the evidence I think I will find, it wouldn’t assault my core beliefs,” Snelling told The Australian. “We already have evidence that is consistent with a great flood that swept the world.”

    Again, in actuality, that hypothesis is in conflict with the entirety of Earth’s surface geology.

    Snelling says he will publish his results in a peer-reviewed scientific journal. That likely means Answers in Genesis’ own Answers Research Journal, of which he is editor-in-chief.

    See the full article here .

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:07 am on June 18, 2017 Permalink | Reply
    Tags: ars technica, Molybdenum isotopes serve as a marker of the source material for our Solar System, Tungsten acts as a timer for events early in the Solar System’s history, U Münster

    From ars technica: “New study suggests Jupiter’s formation divided Solar System in two” 

    Ars Technica
    ars technica

    6/17/2017
    John Timmer

    NASA

    Gas giants like Jupiter have to grow fast. Newborn stars are embedded in a disk of gas and dust that goes on to form planets. But the ignition of the star releases energy that drives away much of the gas within a relatively short time. Thus, producing something like Jupiter involved a race to gather material before it was pushed out of the Solar System entirely.

    Simulations have suggested that Jupiter could have won this race by quickly building a massive, solid core that was able to start drawing in nearby gas. But, since we can’t look at the interior of Jupiter to see whether it’s solid, finding evidence to support these simulations has been difficult. Now, a team at the University of Münster has discovered some relevant evidence [PNAS] in an unexpected location: the isotope ratios found in various meteorites. These suggest that the early Solar System was quickly divided in two, with the rapidly forming Jupiter creating the dividing line.


    Divide and conquer

    Based on details of their composition, we already knew that meteorites formed from more than one pool of material in the early Solar System. The new work extends that by looking at specific elements: tungsten and molybdenum. Molybdenum isotopes serve as a marker of the source material for our Solar System, determining what type of star contributed that material. Tungsten acts as a timer for events early in the Solar System’s history, as it’s produced by a radioactive decay with a half-life of just under nine million years.
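    The “timer” works through the usual exponential decay law; with a half-life t½ just under nine million years (in practice the hafnium-182 to tungsten-182 system used as an early Solar System chronometer), the parent isotope is essentially exhausted within a few tens of millions of years, so the tungsten isotope ratios only record what happened early on:

        N(t) = N_0 \, 2^{-t/t_{1/2}} = N_0 \, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}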

    While we have looked at tungsten and molybdenum in a number of meteorite populations before, the German team extended that work to iron-rich meteorites. These are thought to be fragments of the cores of planetesimals that formed early in the Solar System’s history. In many cases, these bodies went on to contribute to building the first planets.

    The chemical composition of meteorites had suggested a large number of different classes produced as different materials solidified at different distances from the Sun. But the new data suggests that, from the perspective of these isotopes, everything falls into just two classes: carbonaceous and noncarbonaceous.

    These particular isotopes tell us a few things. One is that the two populations probably have a different formation history. The molybdenum data indicates that material was added to the Solar System as it was forming, material that originated from a different type of source star. (One way to visualize this is to think of our Solar System as forming in two steps: first, from the debris of a supernova, then later we received additional material ejected by a red giant star.) And, because the two populations are so distinct, it appears that the later addition of material didn’t spread throughout the entire Solar System. If the later material had spread, you’d find some objects with intermediate compositions.

    A second thing that’s clear from the tungsten data is that the two classes of objects condensed at two different times. This suggests the noncarbonaceous bodies were forming from one to two million years into the Solar System’s history, while carbonaceous materials condensed later, from two to three million years.

    Putting it together

    To explain this, the authors suggest that the Solar System was divided early in its history, creating two different reservoirs of material. “The most plausible mechanism to efficiently separate two disk reservoirs for an extended period,” they suggest, “is the accretion of a giant planet in between them.” That giant planet, obviously, would be Jupiter.

    Modeling indicates that Jupiter would need to be 20 Earth masses to physically separate the two reservoirs. And the new data suggest that a separation had to take place by a million years into the Solar System’s history. All of which means that Jupiter had to grow very large, very quickly. This would be large enough for Jupiter to start accumulating gas well before the newly formed Sun started driving the gas out of the disk. By the time Jupiter grew to 50 Earth masses, it would create a permanent physical separation between the two parts of the disk.

    The authors suggest that the quick formation of Jupiter may have partially starved the inner disk of material, as it prevented material from flowing in from the outer areas of the planet-forming disk. This could explain why the inner Solar System lacks any “super Earths,” larger planets that would have required more material to form.

    Overall, the work does provide some evidence for a quick formation of Jupiter, probably involving a solid core. Other researchers are clearly going to want to check both the composition of additional meteorites and the behavior of planet formation models to see whether the results hold together. But the overall finding of two distinct reservoirs of material in the early Solar System seems to be very clear in their data, and those reservoirs will have to be explained one way or another.

    See the full article here .

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 7:53 am on June 10, 2017 Permalink | Reply
    Tags: ars technica, Common Crawl, Implicit Association Test (IAT), Princeton researchers discover why AI become racist and sexist, Word-Embedding Association Test (WEAT)

    From ars technica: “Princeton researchers discover why AI become racist and sexist” 

    Ars Technica
    ars technica

    4/19/2017
    Annalee Newitz

    Study of language bias has implications for AI as well as human cognition.


    Ever since Microsoft’s chatbot Tay started spouting racist commentary after 24 hours of interacting with humans on Twitter, it has been obvious that our AI creations can fall prey to human prejudice. Now a group of researchers has figured out one reason why that happens. Their findings shed light on more than our future robot overlords, however. They’ve also worked out an algorithm that can actually predict human prejudices based on an intensive analysis of how people use English online.

    The implicit bias test

    Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus—created by millions of people typing away online—might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes.

    People taking the IAT are asked to put words into two categories. The longer it takes for the person to place a word in a category, the less they associate the word with the category. (If you’d like to take an IAT, there are several online at Harvard University.) IAT is used to measure bias by asking people to associate random words with categories like gender, race, disability, age, and more. Outcomes are often unsurprising: for example, most people associate women with family, and men with work. But that obviousness is actually evidence for the IAT’s usefulness in discovering people’s latent stereotypes about each other. (It’s worth noting that there is some debate among social scientists about the IAT’s accuracy.)

    Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. The “word-embedding” part of the test comes from a project at Stanford called GloVe, which packages words together into “vector representations,” basically lists of associated terms. So the word “dog,” if represented as a word-embedded vector, would be composed of words like puppy, doggie, hound, canine, and all the various dog breeds. The idea is to get at the concept of dog, not the specific word. This is especially important if you are working with social stereotypes, where somebody might be expressing ideas about women by using words like “girl” or “mother.” To keep things simple, the researchers limited each concept’s vector representation to 300 dimensions.

    To see how concepts get associated with each other online, the WEAT looks at a variety of factors to measure their “closeness” in text. At a basic level, Caliskan told Ars, this means how many words apart the two concepts are, but it also accounts for other factors like word frequency. After going through an algorithmic transform, closeness in the WEAT is equivalent to the time it takes for a person to categorize a concept in the IAT. The further apart the two concepts, the more distantly they are associated in people’s minds.
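    A minimal Python sketch of this kind of association measure, assuming word vectors are already available as NumPy arrays (the function names and tiny vocabulary here are hypothetical, and the published WEAT statistic is more elaborate than this):

        # Illustrative association score between two concepts, given word vectors.
        # Not the published WEAT code; the vectors and word lists are placeholders.
        import numpy as np

        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def association(targets, attributes, vectors) -> float:
            """Mean cosine similarity between two sets of words (two 'concepts')."""
            sims = [cosine(vectors[t], vectors[a])
                    for t in targets for a in attributes
                    if t in vectors and a in vectors]
            return float(np.mean(sims)) if sims else 0.0

        # Hypothetical usage with 300-dimensional vectors:
        # vectors = load_glove_vectors()        # placeholder loader
        # bias = (association(["woman"], ["family"], vectors)
        #         - association(["woman"], ["career"], vectors))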

    The WEAT worked beautifully to discover biases that the IAT had found before. “We adapted the IAT to machines,” Caliskan said. And what that tool revealed was that “if you feed AI with human data, that’s what it will learn. [The data] contains biased information from language.” That bias will affect how the AI behaves in the future, too. As an example, Caliskan made a video (see above) where she shows how the Google Translate AI actually mistranslates words into the English language based on stereotypes it has learned about gender.

    Imagine an army of bots unleashed on the Internet, replicating all the biases that they learned from humanity. That’s the future we’re looking at if we don’t build some kind of corrective for the prejudices in these systems.

    A problem that AI can’t solve

    Though Caliskan and her colleagues found language was full of biases based on prejudice and stereotypes, it was also full of latent truths as well. In one test, they found strong associations between the concept of woman and the concept of nursing. This reflects a truth about reality, which is that nursing is a majority female profession.

    “Language reflects facts about the world,” Caliskan told Ars. She continued:

    “Removing bias or statistical facts about the world will make the machine model less accurate. But you can’t easily remove bias, so you have to learn how to work with it. We are self-aware, we can decide to do the right thing instead of the prejudiced option. But machines don’t have self awareness. An expert human might be able to aid in [the AIs’] decision-making process so the outcome isn’t stereotyped or prejudiced for a given task.”

    The solution to the problem of human language is… humans. “I can’t think of many cases where you wouldn’t need a human to make sure that the right decisions are being made,” concluded Caliskan. “A human would know the edge cases for whatever the application is. Once they test the edge cases they can make sure it’s not biased.”

    So much for the idea that bots will be taking over human jobs. Once we have AIs doing work for us, we’ll need to invent new jobs for humans who are testing the AIs’ results for accuracy and prejudice. Even when chatbots get incredibly sophisticated, they are still going to be trained on human language. And since bias is built into language, humans will still be necessary as decision-makers.

    In a recent paper for Science about their work, the researchers say the implications are far-reaching. “Our findings are also sure to contribute to the debate concerning the Sapir Whorf hypothesis,” they write. “Our work suggests that behavior can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages.” If you watched the movie Arrival, you’ve probably heard of Sapir Whorf—it’s the hypothesis that language shapes consciousness. Now we have an algorithm that suggests this may be true, at least when it comes to stereotypes.

    Caliskan said her team wants to branch out and try to find as-yet-unknown biases in human language. Perhaps they could look for patterns created by fake news or look into biases that exist in specific subcultures or geographical locations. They would also like to look at other languages, where bias is encoded very differently than it is in English.

    “Let’s say in the future, someone suspects there’s a bias or stereotype in a certain culture or location,” Caliskan mused. “Instead of testing with human subjects first, which takes time, money, and effort, they can get text from that group of people and test to see if they have this bias. It would save so much time.”

    See the full article here .

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 4:44 pm on May 16, 2017 Permalink | Reply
    Tags: ars technica

    From ars technica: “Atomic clocks and solid walls: New tools in the search for dark matter” 

    Ars Technica
    ars technica

    5/15/2017
    Jennifer Ouellette

    An atomic clock based on a fountain of atoms. NSF

    Countless experiments around the world are hoping to reap scientific glory for the first detection of dark matter particles. Usually, they do this by watching for dark matter to bump into normal matter or by slamming particles into other particles and hoping for some dark stuff to pop out. But what if the dark matter behaves more like a wave?

    That’s the intriguing possibility championed by Asimina Arvanitaki, a theoretical physicist at the Perimeter Institute in Waterloo, Ontario, Canada, where she holds the Aristarchus Chair in Theoretical Physics—the first woman to hold a research chair at the institute. Detecting these hypothetical dark matter waves requires a bit of experimental ingenuity. So she and her collaborators are adapting a broad range of radically different techniques to the search: atomic clocks and resonating bars originally designed to hunt for gravitational waves—and even lasers shined at walls in hopes that a bit of dark matter might seep through to the other side.

    “Progress in particle physics for the last 50 years has been focused on colliders, and rightfully so, because whenever we went to a new energy scale, we found something new,” says Arvanitaki. That focus is beginning to shift. To reach higher and higher energies, physicists must build ever-larger colliders—an expensive proposition when funding for science is in decline. There is now more interest in smaller, cheaper options. “These are things that usually fit in the lab, and the turnaround time for results is much shorter than that of the collider,” says Arvanitaki, admitting, “I’ve done this for a long time, and it hasn’t always been popular.”

    The end of the WIMP?

    While most dark matter physicists have focused on hunting for weakly interacting massive particles, or WIMPs, Arvanitaki is one of a growing number who are focusing on less well-known alternatives, such as axions—hypothetical ultralight particles with masses that could be as little as ten thousand trillion trillion times smaller than the mass of the electron. The masses of WIMPs, by contrast, would be larger than the mass of the proton.

    Cosmology gave us very good reason to be excited about WIMPs and focus initial searches in their mass range, according to David Kaplan, a theorist at Johns Hopkins University (and producer of the 2013 documentary Particle Fever). But the WIMP’s dominance in the field to date has also been due, in part, to excitement over the idea of supersymmetry. That model requires every known particle in the Standard Model—whether fermion or boson—to have a superpartner that is heavier and in the opposite class. So an electron, which is a fermion, would have a boson superpartner called the selectron, and so on.

    Physicists suspect one or more of those unseen superpartners might make up dark matter. Supersymmetry predicts not just the existence of dark matter, but how much of it there should be. That fits neatly within a WIMP scenario. Dark matter could be any number of things, after all, and the supersymmetry mass range seemed like a good place to start the search, given the compelling theory behind it.

    But in the ensuing decades, experiment after experiment has come up empty. With each null result, the parameter space where WIMPs might be lurking shrinks. This makes distinguishing a possible signal from background noise in the data increasingly difficult.

    “We’re about to bump up against what’s called the ‘neutrino floor,’” says Kaplan. “All the technology we use to discover WIMPs will soon be sensitive to random neutrinos flying through the Universe. Once it gets there, it becomes a much messier signal and harder to see.”

    Particles are waves

    Despite its momentous discovery of the Higgs boson in 2012, the Large Hadron Collider has yet to find any evidence of supersymmetry. So we shouldn’t wonder that physicists are turning their attention to alternative dark matter candidates outside of the mass ranges of WIMPs. “It’s now a fishing expedition,” says Kaplan. “If you’re going on a fishing expedition, you want to search as broadly as possible, and the WIMP search is narrow and deep.”

Enter Asimina Arvanitaki—“Mina” for short. She grew up in a small Greek village called Koklas and, since her parents were teachers, there was no shortage of books around the house. Arvanitaki excelled in math and physics—at a very young age, she calculated the time light takes to travel from the Earth to the Sun. While she briefly considered becoming a car mechanic in high school because she loved cars, she decided, “I was more interested in why things are the way they are, not in how to make them work.” So she majored in physics instead.

    Similar reasoning convinced her to switch her graduate-school focus at Stanford from experimental condensed matter physics to theory: she found her quantum field theory course more scintillating than any experimental results she produced in the laboratory.

    Central to Arvanitaki’s approach is a theoretical reimagining of dark matter as more than just a simple particle. A peculiar quirk of quantum mechanics is that particles exhibit both particle- and wave-like behavior, so we’re really talking about something more akin to a wavepacket, according to Arvanitaki. The size of those wave packets is inversely proportional to their mass. “So the elementary particles in our theory don’t have to be tiny,” she says. “They can be super light, which means they can be as big as the room or as big as the entire Universe.”
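To put rough numbers on that claim (my arithmetic, not Arvanitaki’s): a dark matter particle bound to the galaxy moves at roughly 230 km/s, so its wave packet is of order the de Broglie wavelength λ = h/(mv). Here is a minimal sketch with purely illustrative masses:

```python
# Back-of-the-envelope sketch (illustrative numbers, not from the article):
# the de Broglie wavelength lambda = h / (m * v) sets the rough size of an
# ultralight dark matter wave packet.  For scale, the article's "ten thousand
# trillion trillion times smaller than the electron" is ~5.1e5 eV / 1e28,
# i.e. about 5e-23 eV.

H = 6.626e-34            # Planck constant, J*s
EV_TO_KG = 1.783e-36     # 1 eV/c^2 expressed in kilograms
LIGHT_YEAR_M = 9.461e15  # meters per light year
V = 2.3e5                # assumed galactic virial speed, m/s

def de_broglie_wavelength_m(mass_ev: float) -> float:
    """Wavelength in meters for a particle of the given mass (in eV/c^2)."""
    return H / (mass_ev * EV_TO_KG * V)

for mass_ev in (1e-4, 1e-12, 1e-22):   # heavier axion, lighter axion, "fuzzy" dark matter
    lam = de_broglie_wavelength_m(mass_ev)
    print(f"m = {mass_ev:.0e} eV -> lambda ~ {lam:.1e} m (~{lam / LIGHT_YEAR_M:.1e} ly)")
```

The heavier illustrative mass gives a room-sized wave; push the mass low enough and the wavelength grows to galactic or even cosmological scales.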

    Axions fit the bill as a dark matter candidate, but they interact so weakly with regular matter that they cannot be produced in colliders. Arvanitaki has proposed several smaller experiments that might succeed in detecting them in ways that colliders cannot.

    Walls, clocks, and bars

One of her experiments relies on atomic clocks—the most accurate timekeeping devices we have, in which the natural oscillation frequencies of atoms serve the same purpose as the pendulum in a grandfather clock. An average wristwatch loses roughly one second every year; the best atomic clocks are so precise that they would lose only about one second over the entire age of the Universe.
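That comparison can be checked in one line (rounded numbers, my arithmetic rather than the article’s): one second over the age of the Universe is a fractional error of roughly two parts in a billion billion.

```python
# Quick check with rounded numbers: losing one second over the ~13.8-billion-
# year age of the Universe corresponds to a fractional timing error of ~2e-18.

AGE_OF_UNIVERSE_S = 13.8e9 * 3.156e7   # age of the Universe in seconds
print(f"fractional error ~ {1.0 / AGE_OF_UNIVERSE_S:.1e}")   # ~2.3e-18
```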

    Within her theoretical framework, dark matter particles (including axions) would behave like waves and oscillate at specific frequencies determined by the mass of the particles. Dark matter waves would cause the atoms in an atomic clock to oscillate as well. The effect is very tiny, but it should be possible to see such oscillations in the data. A trial search of existing data from atomic clocks came up empty, but Arvanitaki suspects that a more dedicated analysis would prove more fruitful.
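The frequency being searched for is fixed by the particle’s mass through f = mc²/h, so lighter particles oscillate more slowly. A minimal sketch with illustrative masses (my choices, not values from Arvanitaki’s papers):

```python
# Minimal sketch (illustrative masses): an ultralight dark matter field of mass
# m oscillates at f = m*c^2/h, so a clock-comparison search looks for a tiny
# periodic modulation at that frequency.

PLANCK_EV_S = 4.136e-15    # Planck constant in eV*s, so f[Hz] = m[eV] / h
SECONDS_PER_YEAR = 3.156e7

for mass_ev in (1e-22, 1e-18, 1e-15):
    f_hz = mass_ev / PLANCK_EV_S
    print(f"m = {mass_ev:.0e} eV -> f ~ {f_hz:.1e} Hz, "
          f"period ~ {1.0 / f_hz / SECONDS_PER_YEAR:.1e} years")
```

The lightest candidates oscillate with periods of a year or more, which is part of why long, dedicated analyses of clock data are needed.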

Then there are so-called “Weber bars,” solid aluminum cylinders that Arvanitaki says should ring like a tuning fork if a dark matter wavelet hits them at just the right frequency. The bars get their name from physicist Joseph Weber, who used them in the 1960s to search for gravitational waves. He claimed to have detected those waves, but nobody could replicate his findings, and his scientific reputation never quite recovered from the controversy.

    Weber died in 2000, but chances are he’d be pleased that his bars have found a new use. Since we don’t know the precise frequency of the dark matter particles we’re hunting, Arvanitaki suggests building a kind of xylophone out of Weber bars. Each bar would be tuned to a different frequency to scan for many different frequencies at once.
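As a rough guide to what that xylophone might look like: the fundamental longitudinal mode of a thin bar sits near f = v/(2L), where v is the speed of sound in the metal. The sketch below assumes aluminum bars and an arbitrary ladder of target frequencies; none of these numbers come from Arvanitaki’s proposal.

```python
# Hedged sketch: thin-rod approximation for the fundamental longitudinal mode,
# f = v_sound / (2 * L).  Picking a ladder of target frequencies then fixes the
# bar lengths -- the "xylophone" idea.  All numbers are illustrative.

V_SOUND_ALUMINUM = 5100.0   # m/s, approximate thin-rod sound speed in aluminum

def bar_length_m(target_hz: float) -> float:
    """Bar length whose fundamental longitudinal mode sits at target_hz."""
    return V_SOUND_ALUMINUM / (2.0 * target_hz)

for f in (500, 1000, 2000, 4000, 8000):   # Hz, illustrative targets
    print(f"f = {f:5d} Hz -> L ~ {bar_length_m(f):.2f} m")
```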

    Walking through walls

    Yet another inventive approach involves sending axions through walls. Photons (light) can’t pass through walls—shine a flashlight onto a wall, and someone on the other side won’t be able to see that light. But axions are so weakly interacting that they can pass through a solid wall. Arvanitaki’s experiment exploits the fact that it should be possible to turn photons into axions and then reverse the process to restore the photons. Place a strong magnetic field in front of that wall and then shine a laser onto it. Some of the photons will become axions and pass through the wall. A second magnetic field on the other side of the wall then converts those axions back into photons, which should be easily detected.
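There is a standard back-of-the-envelope for such “light shining through walls” setups, and it shows why the experiment needs strong magnets and enormous photon counts. The formula and the numbers below are the generic textbook estimate, not figures from Arvanitaki’s proposal: in a magnet of field B and length L, a photon converts to an axion with probability roughly (gBL/2)², and a regenerated photon requires two conversions.

```python
# Generic light-shining-through-walls estimate (textbook scaling, illustrative
# numbers).  In natural units, P(photon -> axion) ~ (g * B * L / 2)**2 for a
# near-massless axion, and regeneration behind the wall costs a second factor.

TESLA_TO_EV2 = 195.35       # 1 tesla expressed in natural units (eV^2)
METER_TO_INV_EV = 5.068e6   # 1 meter expressed in natural units (1/eV)

def conversion_probability(g_inv_gev: float, b_tesla: float, l_meters: float) -> float:
    """Single photon<->axion conversion probability (massless-axion limit)."""
    g = g_inv_gev * 1e-9              # coupling: GeV^-1 -> eV^-1
    b = b_tesla * TESLA_TO_EV2        # field: T -> eV^2
    l = l_meters * METER_TO_INV_EV    # length: m -> 1/eV
    return (g * b * l / 2.0) ** 2

# Illustrative values: g = 1e-10 GeV^-1, a 5 T field, 10 m of magnet.
p = conversion_probability(1e-10, 5.0, 10.0)
print(f"single conversion ~ {p:.1e}, full photon regeneration ~ {p**2:.1e}")
```

Numbers of order 10⁻³⁵ are why these searches lean on intense lasers, long integration times, and, in some designs, resonant optical cavities on both sides of the wall.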

    This is a new kind of dark matter detection relying on small, lab-based experiments that are easier to perform (and hence easier to replicate). They’re also much cheaper than setting up detectors deep underground or trying to produce dark matter particles at the LHC—the biggest, most complicated scientific machine ever built, and the most expensive.

    “I think this is the future of dark matter detection,” says Kaplan, although both he and Arvanitaki are adamant that this should complement, not replace, the many ongoing efforts to hunt for WIMPs, whether deep underground or at the LHC.

    “You have to look everywhere, because there are no guarantees. This is what research is all about,” says Arvanitaki. “What we think is correct, and what Nature does, may be two different things.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition
    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 6:51 pm on February 14, 2017 Permalink | Reply
Tags: ars technica, Caltech Palomar Intermediate Palomar Transient Factory

    From ars technica: “Observations catch a supernova three hours after it exploded” 

    Ars Technica
    ars technica

    BRIGHT AND EARLY Scientists caught an early glimpse of an exploding star in the galaxy NGC7610 (shown before the supernova). Light from the explosion revealed that gas (orange) surrounded the star, indicating that the star spurted out gas in advance of the blast.

    The remains of an earlier Type II supernova. NASA

    The skies are full of transient events. If you don’t happen to have a telescope pointed at the right place at the right time, you can miss anything from the transit of a planet to the explosion of a star. But thanks to the development of automated survey telescopes, the odds of getting lucky have improved considerably.

    In October of 2013, the telescope of the intermediate Palomar Transient Factory worked just as expected, capturing a sudden brightening that turned out to reflect the explosion of a red supergiant in a nearby galaxy.

Caltech Palomar Intermediate Palomar Transient Factory telescope at the Samuel Oschin Telescope at Palomar Observatory, located in San Diego County, California, United States

    The first images came from within three hours of the supernova itself, and followup observations tracked the energy released as it blasted through the nearby environment. The analysis of the event was published on Monday in Nature Physics, and it suggests the explosion followed shortly after the star ejected large amounts of material.

    This isn’t the first supernova we’ve witnessed as it happened; the Kepler space telescope captured two just as the energy of the explosion of the star’s core burst through the surface. By comparison, observations three hours later are relative latecomers. But SN 2013fs (as it was later termed) provided considerably more detail, as followup observations were extensive and covered all wavelengths, from X-rays to the infrared.

Critically, spectroscopy began within six hours of the explosion. This technique separates the light according to its wavelength, allowing researchers to identify the presence of specific atoms based on the colors of light they absorb. In this case, the spectroscopy picked up the presence of atoms such as oxygen and helium that had lost most of their electrons. The presence of these heavily ionized oxygen atoms surged for several hours, then was suddenly cut off 11 hours later.

The authors explain this behavior by positing that the red supergiant ejected a significant amount of material before it exploded. The light from the explosion then swept through the vicinity, eventually catching up with the material and stripping the electrons off its atoms. The sudden cutoff came when the light exited the far side of the material, allowing the gas to return to a lower energy state, where it stayed until the physical debris of the explosion slammed into it about five days later.

Since the light of the explosion travels at the speed of light (duh), we know how far away the material was: six light hours, or roughly the Sun-Pluto distance. Some blurring in the spectroscopy also indicates that the material was moving at about 100 kilometers per second. Based on its speed and its distance from the star that ejected it, the researchers could calculate when it was ejected: less than 500 days before the explosion. The total mass of the material also suggests that the star was losing about 0.1 percent of the Sun’s mass per year.
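A quick sanity check of the distance figure (rounded constants; my arithmetic, not the paper’s analysis):

```python
# Rounded-constant check: a six-hour light-travel delay corresponds to a
# distance of roughly the Sun-Pluto scale.

C_KM_S = 3.0e5    # speed of light, km/s
AU_KM = 1.496e8   # one astronomical unit, km

distance_km = C_KM_S * 6 * 3600          # six hours of light travel
print(f"distance ~ {distance_km:.2e} km ~ {distance_km / AU_KM:.0f} AU")
# Pluto's orbit spans roughly 30-49 AU, so ~43 AU is indeed Sun-Pluto territory.
```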

    Separately, the authors estimate that it is unlikely there is a single star in our galaxy with the potential to be less than 500 days from explosion, so we probably won’t be able to look at an equivalent star—assuming we knew how to identify it.

    Large stars like red supergiants do sporadically eject material, so there’s always the possibility that the ejection-explosion series occurred by chance. But this isn’t the first supernova we’ve seen where explosion material has slammed into a shell of material that had been ejected earlier. Indeed, the closest red supergiant, Betelgeuse, has a stable shell of material a fair distance from its surface.

What could cause these ejections? For most of their relatively short lives, these giant stars fuse light elements, each of which is present in sufficient amounts to burn for millions of years. But once they start to shift to heavier elements, higher rates of fusion are needed to counteract gravity, which constantly pulls the material of the core inward. As a result, the core undergoes major rearrangements as it changes fuels, sometimes within a span of a couple of years. It’s possible, suggests an accompanying perspective by astronomer Norbert Langer, that these rearrangements propagate to the surface and force the ejection of matter.

    For now, we’ll have to explore this possibility using models of the interiors of giant stars. But with enough survey telescopes in operation, we may have more data to test the idea against before too long.

    Nature Physics, 2017. DOI: 10.1038/NPHYS4025

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition


     
  • richardmitnick 7:59 pm on December 28, 2016 Permalink | Reply
Tags: ars technica, How humans survived in the barren Atacama Desert 13000 years ago

From ars technica: “How humans survived in the barren Atacama Desert 13,000 years ago” (revised to include more of the optical telescopes sited in the Atacama)

    Ars Technica
    ars technica

    12/28/2016
    Annalee Newitz

    The Atacama Desert today is barren, its sands encrusted with salt. And yet there were thriving human settlements there 12,000 years ago.
    Vallerio Pilar

    Home of:

ESO/La Silla

ESO/VLT at Cerro Paranal, Chile

ESO/NRAO/NAOJ ALMA Array in Chile, in the Atacama at the Chajnantor plateau, at 5,000 metres

    Cerro Tololo Inter-American Observatory
    Blanco 4.0-m Telescope
    SOAR 4.1-m Telescope
    Gemini South 8.1-m Telescope

    When humans first arrived in the Americas, roughly 18,000 to 20,000 years ago, they traveled by boat along the continents’ shorelines. Many settled in coastal regions or along rivers that took them inland from the sea. Some made it all the way down to Chile quite quickly; there’s evidence for a human settlement there from more than 14,000 years ago at a site called Monte Verde. Another settlement called Quebrada Maní, dating back almost 13,000 years, was recently discovered north of Monte Verde in one of the most arid deserts in the world: the Atacama, whose salt-encrusted sands repel even the hardiest of plants. It seemed an impossible place for early humans to settle, but now we understand how they did it.

At a presentation during the American Geophysical Union meeting this month, UC Berkeley environmental science researcher Marco Pfeiffer explained how he and his team investigated the Atacama Desert’s deep environmental history. Beneath the desert’s salt crust, they found a buried layer of plant and animal remains between 9,000 and 17,000 years old. There were freshwater plants and mosses, as well as snails and plants that prefer brackish water. It quickly became obvious that this land had not always been desert—what Pfeiffer and his colleagues saw suggested wetlands fed by fresh water.

Chile’s early archaeological sites, named and dated. The yellow area shows the extent of the Atacama Desert’s hyperarid core. Also note the surrounding mountains, which block many rainy weather systems. Quaternary Science Reviews

But where could this water have come from? The high mountains surrounding the Atacama are a major barrier to weather systems that bring rain, which is partly why the area is lifeless today. Maybe, they reasoned, the water came from the mountains themselves. Based on previous studies, they already knew that rainfall in the area during that 9,000- to 17,000-year-old window was six times higher than today’s average. So they used a computer model to figure out how all that water would have drained off the mountain peaks to form streams and pools in the Atacama. “We saw that water must have been accumulating,” Pfeiffer said. As a result, the desert bloomed into a marshy ecosystem that could easily have supported a number of human settlements.

    Indeed, Pfeiffer says that his team has found evidence of human settlements in Atacama’s surrounding flatlands, which they are still investigating. Now that they understand climate change in the region, Pfeiffer added, it will be easier for archaeologists to account for the oddly large population in the area. The history of humanity in the Americas isn’t just the story of vanished peoples—it’s also the tale of lost ecosystems.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 9:03 am on July 25, 2016 Permalink | Reply
Tags: ars technica, Final International Technology Roadmap for Semiconductors (ITRS)

    From ars technica: “Transistors will stop shrinking in 2021, but Moore’s law will live on” 

    Ars Technica
    ars technica

    25/7/2016
    Sebastian Anthony

    A 22nm Haswell wafer, with a pin for scale. No image credit

    Transistors will stop shrinking after 2021, but Moore’s law will probably continue, according to the final International Technology Roadmap for Semiconductors (ITRS).

The ITRS—which has been produced almost annually by a collaboration of most of the world’s major semiconductor companies since 1993—is about as authoritative as it gets when it comes to predicting the future of computing. The 2015 roadmap will, however, be its last.

    The most interesting aspect of the ITRS is that it tries to predict what materials and processes we might be using in the next 15 years. The idea is that, by collaborating on such a roadmap, the companies involved can sink their R&D money into the “right” technologies.

    For example, despite all the fuss surrounding graphene and carbon nanotubes a few years back, the 2011 ITRS predicted that it would still be at least 10 to 15 years before they were actually used in memory or logic devices. Germanium and III-V semiconductors, though, were predicted to be only five to 10 years away. Thus, if you were deciding where to invest your R&D money, you might opt for III-V rather than nanotubes (which appears to be what Intel and IBM are doing).

The latest and last ITRS focuses on two key points: that it will no longer be economically viable to shrink transistors after 2021—and, pray tell, what might be done to keep Moore’s law going despite transistors reaching their minimum size. (Remember, Moore’s law simply predicts a doubling of transistor density within a given integrated circuit, not the size or performance of those transistors.)

    The first problem has been known about for a long while. Basically, starting at around the 65nm node in 2006, the economic gains from moving to smaller transistors have been slowly dribbling away. Previously, moving to a smaller node meant you could cram tons more chips onto a single silicon wafer, at a reasonably small price increase. With recent nodes like 22 or 14nm, though, there are so many additional steps required that it costs a lot more to manufacture a completed wafer—not to mention additional costs for things like package-on-package (PoP) and through-silicon vias (TSV) packaging.

    This is the primary reason that the semiconductor industry has been whittled from around 20 leading-edge logic-manufacturing companies in 2000, down to just four today: Intel, TSMC, GlobalFoundries, and Samsung. (IBM recently left the business by selling its fabs to GloFo.)

    A diagram showing future transistor topologies, from Applied Materials (which makes the machines that actually create the various layers/features on a die). Gate-all-around is shown at the top.

    The second problem—how to keep increasing transistor density—has a couple of likely solutions. First, ITRS expects that chip makers and designers will begin to move away from FinFET in 2019, towards gate-all-around transistor designs. Then, a few years later, these transistors will become vertical, with the channel fashioned out of some kind of nanowire. This will allow for a massive increase in transistor density, similar to recent advances in 3D V-NAND memory.

    The gains won’t last for long though, according to ITRS: by 2024 (so, just eight years from now), we will once again run up against a thermal ceiling. Basically, there is a hard limit on how much heat can be dissipated from a given surface area. So, as chips get smaller and/or denser, it eventually becomes impossible to keep the chip cool. The only real solution is to completely rethink chip packaging and cooling. To begin with, we’ll probably see microfluidic channels that increase the effective surface area for heat transfer. But after that, as we stack circuits on top of each other, we’ll need something even fancier. Electronic blood, perhaps?
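A toy calculation makes the problem concrete. The numbers below are purely illustrative (they are not from the ITRS), but they show why packing the same or more logic into a smaller footprint, or stacking it vertically, drives up the heat flux the package has to remove:

```python
# Purely illustrative numbers: heat flux through the package footprint,
# power / area, for a fixed logic workload arranged in different ways.

cases = [
    ("one large die",                            100.0, 4.0),   # watts, cm^2 footprint
    ("same power on a small, dense die",         100.0, 1.0),
    ("four such dies stacked on that footprint", 400.0, 1.0),
]
for label, power_w, footprint_cm2 in cases:
    print(f"{label:42s}: {power_w / footprint_cm2:6.0f} W/cm^2")
```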

    The final ITRS is one of the most beastly reports I’ve ever seen, spanning seven different sections and hundreds of pages and diagrams. Suffice it to say I’ve only touched on a tiny portion of the roadmap here. There are large sections on heterogeneous integration, and also some important bits on connectivity (semiconductors play a key role in modulating optical and radio signals).

    Here’s what ASML’s EUV lithography machine may eventually look like. Pretty large, eh?

    I’ll leave you with one more important short-term nugget, though. We are fast approaching the cut-off date for choosing which lithography and patterning techs will be used for commercial 7nm and 5nm logic chips.

As you may know, extreme ultraviolet (EUV) lithography has been waiting in the wings for years now, never quite reaching full readiness due to its extremely high power usage and some resolution concerns. In the meantime, chip makers have fallen back on increasing levels of multiple patterning—multiple lithographic exposures, which increase manufacturing time (and costs).

Now, however, directed self-assembly (DSA)—where the patterns assemble themselves—is also getting very close to readiness. If either technology is to be chosen over multiple patterning for 7nm logic, the ITRS says it will need to prove its readiness in the next few months.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 10:13 am on July 22, 2016 Permalink | Reply
Tags: ars technica

From ars technica: “Gravity doesn’t care about quantum spin” 

    Ars Technica
    ars technica

    7/16/2016
    Chris Lee

    An atomic clock based on a fountain of atoms. NSF

Physics, as you may have read before, is based on two wildly successful theories. On the grand scale, galaxies, planets, and all the other big stuff dance to the tune of gravity. But, like your teenage daughter, all the little stuff stares in bewildered embarrassment at gravity’s dancing. Quantum mechanics is the only beat the little stuff is willing to get down to. Unlike teenage rebellion, though, no one claims to understand what keeps relativity and quantum mechanics from getting along.

Because we refuse to believe that these two theories are separate, physicists are constantly trying to find a way to fit them together. Part and parcel of creating a unifying model is finding evidence of a connection between gravity and quantum mechanics. For example, showing that the gravitational force experienced by a particle depends on the particle’s internal quantum state would be a great sign of a deeper connection between the two theories. The latest attempt to show this uses a new way to look for coupling between gravity and the quantum property called spin.

    I’m free, free fallin’

    One of the cornerstones of general relativity is that objects move in straight lines through a curved spacetime. So, if two objects have identical masses and are in free fall, they should follow identical trajectories. And this is what we have observed since the time of Galileo (although I seem to recall that Galileo’s public experiment came to an embarrassing end due to differences in air resistance).

    The quantum state of an object doesn’t seem to make a difference. However, if there is some common theory that underlies general relativity and quantum mechanics, at some level, gravity probably has to act differently on different quantum states.

    To see this effect means measuring very tiny differences in free fall trajectories. Until recently, that was close to impossible. But it may be possible now thanks to the realization of Bose-Einstein condensates. The condensates themselves don’t necessarily provide the tools we need, but the equipment used to create a condensate allows us to manipulate clouds of atoms with exquisite precision. This precision is the basis of a new free fall test from researchers in China.

    Surge like a fountain, like tide

    The basic principle behind the new work is simple. If you want to measure acceleration due to gravity, you create a fountain of atoms and measure how long it takes for an atom to travel from the bottom of the fountain to the top and back again. As long as you know the starting velocity of the atoms and measure the time accurately, then you can calculate the force due to gravity. To do that, you need to impart a well-defined momentum to the cloud at a specific time.
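In the simplest, idealized picture that calculation is just projectile kinematics: an atom launched straight up at speed v0 returns after t = 2v0/g. A minimal sketch (my numbers, ignoring every experimental subtlety the article goes on to describe):

```python
# Idealized fountain kinematics: round-trip time t = 2 * v0 / g, so measuring
# t (and knowing v0) gives the local gravitational acceleration.

def g_from_fountain(launch_speed_m_s: float, round_trip_s: float) -> float:
    """Infer g from the launch speed and the measured round-trip time."""
    return 2.0 * launch_speed_m_s / round_trip_s

v0 = 3.0                        # assumed launch speed, m/s
t = 2.0 * v0 / 9.80665          # pretend this is the measured round trip
print(f"round trip ~ {t:.4f} s -> g ~ {g_from_fountain(v0, t):.5f} m/s^2")
```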

    Quantum superposition

    Superposition is nothing more than addition for waves. Let’s say we have two sets of waves that overlap in space and time. At any given point, a trough may line up with a peak, their peaks may line up, or anything in between. Superposition tells us how to add up these waves so that the result reconstructs the patterns that we observe in nature.

    Then you need to measure the transit time. This is done using the way quantum states evolve in time, which also means you need to prepare the cloud of atoms in a precisely defined quantum state.

    If I put the cloud into a superposition of two states, then that superposition will evolve in time. What do I mean by that? Let’s say that I set up a superposition between states A and B. Now, when I take a measurement, I won’t get a mixture of A and B; I only ever get A or B. But the probability of obtaining A (or B) oscillates in time. So at one moment, the probability might be 50 percent, a short time later it is 75 percent, then a little while later it is 100 percent. Then it starts to fall until it reaches zero and then it starts to increase again.

This oscillation has a regular period that is defined by the environment. So, under controlled circumstances, I set up the superposition state as the atomic cloud drifts out the top of the fountain, and a certain time later, I make a measurement. Each atom reports either state A or state B. The ratio of A results to B results tells me how much time has passed for the atoms and, therefore, what the force of gravity was during their time in the fountain.
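A toy version of that readout, under heavy simplifying assumptions (a single two-state fringe with a known, fixed period; real atom interferometry is far more involved), might look like this:

```python
import math

# Toy model: the probability of measuring state A oscillates as
# P_A(t) = (1 + cos(2*pi*t / T)) / 2 with a known period T, so the measured
# A fraction pins down the elapsed time (modulo a period and the cosine's
# symmetry -- real experiments track the phase continuously).

T = 1.0e-3        # assumed oscillation period, seconds
t_true = 2.6e-4   # elapsed time we pretend not to know

p_a = 0.5 * (1.0 + math.cos(2.0 * math.pi * t_true / T))   # fraction reporting A
t_recovered = (T / (2.0 * math.pi)) * math.acos(2.0 * p_a - 1.0)
print(f"A fraction = {p_a:.3f} -> recovered t = {t_recovered:.2e} s")
```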

    Once you have that working, the experiment is dead simple (he says in the tone of someone who is confident he will never have to actually build the apparatus or perform the experiment). Essentially, you take your atomic cloud and choose a couple of different atomic states. Place the atoms in one of those states and measure the free fall time. Then repeat the experiment for the second state. Any difference, in this ideal case, is due to gravity acting differently on the two quantum states. Simple, right?

    Practically speaking, this is kind-a-sorta really, really difficult.

    I feel like I’m spinnin’

    Obviously, you have to choose a pair of quantum states to compare. In the case of our Chinese researchers, they chose to test for coupling between gravity and a particle’s intrinsic angular momentum, called spin. This choice makes sense because we know that in macroscopic bodies, the rotation of a body (in other words, its angular momentum) modifies the local gravitational field. So, depending on the direction and magnitude of the angular momentum, the local gravitational field will be different. Maybe we can see this classical effect in quantum states, too?

    However, quantum spin is, confusingly, not related to the rotation of a body. Indeed, if you calculate how fast an electron needs to rotate in order to generate its spin angular momentum, you’ll come up with a ridiculous number (especially if you take the idea of the electron being a point particle seriously). Nevertheless, particles like electrons and protons, as well as composite particles like atoms, have intrinsic spin angular momentum. So, an experiment comparing the free fall of particles with the same spin, but oriented in different directions, makes perfect sense.

    Except for one thing: magnetic fields. The spin of a particle is also coupled to its magnetic moment. That means that if there are any changes in the magnetic field around the atom fountain, the atomic cloud will experience a force due to these variations. Since the researchers want to measure a difference between two spin states that have opposite orientations, this is bad. They will always find that the two spin populations have different fountain trajectories, but the difference will largely be due to variations in the magnetic field, rather than to differences in gravitational forces.

    So the story of this research is eliminating stray magnetic fields. Indeed, the researchers spend most of their paper describing how they test for magnetic fields before using additional electromagnets to cancel out stray fields. They even invented a new measurement technique that partially compensates for any remaining variations in the magnetic fields. To a large extent, the researchers were successful.

    So, does gravity care about your spin?

    Short answer: no. The researchers obtained a null result, meaning that, to within the precision of their measurements, there was no detectable difference in atomic free falls when atoms were in different spin states.

    But this is really just the beginning of the experiment. We can expect even more sensitive measurements from the same researchers within the next few years. And the strategies that they used to increase accuracy can be transferred to other high-precision measurements.

    Physical Review Letters, 2016, DOI: 10.1103/PhysRevLett.117.023001

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 2:16 pm on November 9, 2015 Permalink | Reply
Tags: ars technica

    From ars technica: “Finally some answers on dark energy, the mysterious master of the Universe” 

    Ars Technica
    ars technica

    Nov 5, 2015
    Eric Berger

U Texas McDonald Observatory Hobby-Eberly 9.1-meter Telescope
U Texas McDonald Observatory Hobby-Eberly 9.1-meter Telescope interior

    Unless you’re an astrophysicist, you probably don’t sit around thinking about dark energy all that often. That’s understandable, as dark energy doesn’t really affect anyone’s life. But when you stop to ponder dark energy, it’s really rather remarkable. This mysterious force, which makes up the bulk of the Universe but was only discovered 17 years ago, somehow is blasting the vast cosmos apart at ever-increasing rates.

    Astrophysicists do sit around and think about dark energy a lot. And they’re desperate for more information about it as, right now, they have essentially two data points. One shows the Universe in its infancy, at 380,000 years old, thanks to observations of the cosmic microwave background radiation. And by pointing their telescopes into the sky and looking about, they can measure the present expansion rate of the Universe.

But astronomers would desperately like to know what happened in between the Big Bang and now. Is dark energy constant, or does it change over time? Or, more crazily still, might it be about to undergo some kind of phase change and turn everything into ice, as ice-nine did in Kurt Vonnegut’s novel Cat’s Cradle? Probably not, but really, no one knows.

    The Plan

    Fortunately astronomers in West Texas have a $42 million plan to use the world’s fourth largest optical telescope to get some answers. Until now, the 9-meter Hobby-Eberly telescope at McDonald Observatory has excelled at observing very distant objects, but this has necessitated a narrow field of view. However, with a clever new optical system, astronomers have expanded the telescope’s field of view by a factor of 120, to nearly the size of a full Moon. The next step is to build a suite of spectrographs and, using 34,000 optical fibers, wire them into the focal plane of the telescope.

    “We’re going to make this 3-D map of the Universe,” Karl Gebhardt, a professor of astronomy at the University of Texas at Austin, told Ars. “On this giant map, for every image that we take, we’ll get that many spectra. No other telescope can touch this kind of information.”

    With this detailed information about the location and age of objects in the sky, astronomers hope to gain an understanding of how dark energy affected the expansion rate of the Universe 5 billion to 10 billion years ago. There are many theories about what dark energy might be and how the expansion rate has changed over time. Those theories make predictions that can now be tested with actual data.

    In Texas, there’s a fierce sporting rivalry between the Longhorns in Austin and Texas A&M Aggies in College Station. But in the field of astronomy and astrophysics the two universities have worked closely together. And perhaps no one is more excited than A&M’s Nick Suntzeff about the new data that will come down over the next four years from the Hobby-Eberly telescope.

Suntzeff is best known for co-founding the High-Z Supernova Search Team along with Brian Schmidt, one of the two research groups that discovered dark energy in 1998. This startling observation that the expansion rate of the Universe was in fact accelerating upended physicists’ understanding of the cosmos. They continue to grapple with understanding the mysterious force—hence the enigmatic appellation dark energy—that could be causing this acceleration.

    Dawn of the cosmos

When scientists observe quantum mechanical systems, they see tiny energy fluctuations. They think these same fluctuations occurred at the very dawn of the Universe, Suntzeff explained to Ars. And as the early Universe expanded, so did these fluctuations. Then, at about one second after the Big Bang, when the temperature of the Universe was about 10 billion kelvin, these fluctuations were essentially imprinted onto dark matter. From then on, this dark matter (whatever it actually is) responded only to the force of gravity.

    Meanwhile, normal matter and light were also filling the Universe, and they were more strongly affected by electromagnetism than gravity. As the Universe expanded, this light and matter rippled outward at the speed of sound. Then, at 380,000 years, Suntzeff said these sound waves “froze,” leaving the cosmic microwave background.

    These ripples, frozen with respect to one another, expanded outward as the Universe likewise grew. They can still be faintly seen today—many galaxies are spaced apart by about 500 million light years, the size of the largest ripples. But what happened between this freezing long ago, and what astronomers see today, is a mystery.
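For reference, that ripple scale is the baryon acoustic oscillation scale of roughly 150 megaparsecs; a one-line unit conversion (standard factors, not numbers from this article) shows why it is quoted as about 500 million light years:

```python
# Standard unit conversion: ~150 Mpc is about 490 million light years.
MPC_TO_MLY = 3.2616   # one megaparsec in millions of light years
print(f"{150 * MPC_TO_MLY:.0f} million light years")
```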

    The Texas experiment will allow astronomers to fill in some of that gap. They should be able to tease apart the two forces acting upon the expansion of the Universe. There’s the gravitational clumping, due to dark matter, which is holding back expansion. Then there’s the acceleration due to dark energy. Because the Universe’s expansion rate is now accelerating, dark energy appears to be dominating now. But is it constant? And when did it overtake dark matter’s gravitational pull?

    “I like to think of it sort of as a flag,” Suntzeff said. “We don’t see the wind, but we know the strength of the wind by the way the flag ripples in the breeze. The same with the ripples. We don’t see dark energy and dark matter, but we see how they push and pull the ripples over time, and therefore we can measure their strengths over time.”

The Universe’s end?

Funding for the $42 million experiment at McDonald Observatory, called HETDEX for Hobby-Eberly Telescope Dark Energy Experiment, will come from three sources: one-third from the state of Texas, one-third from the federal government, and one-third from private foundations.

    The telescope is in the Davis Mountains of West Texas, which provide some of the darkest and clearest skies in the continental United States. The upgraded version took its first image on July 29. Completing the experiment will take three or four years, but astronomers expect to have a pretty good idea about their findings within the first year.

    If dark energy is constant, then our Universe has a dark, lonely future, as most of what we can now observe will eventually disappear over the horizon at speeds faster than that of light. But if dark energy changes over time, then it is hard to know what will happen, Suntzeff said. One unlikely scenario—among many, he said—is a phase transition. Dark energy might go through some kind of catalytic change that would propagate through the Universe. Then it might be game over, which would be a nice thing to know about in advance.

    Or perhaps not.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     