Tagged: ars technica

  • richardmitnick 1:03 pm on July 8, 2017 Permalink | Reply
    Tags: "Answers in Genesis", A great test case, , ars technica, ,   

    From ars technica: “Creationist sues national parks, now gets to take rocks from Grand Canyon” a Test Case Too Good to be True 

    Ars Technica

    7/7/2017
    Scott K. Johnson


    “Alternative facts” aren’t new. Young-Earth creationist groups like Answers in Genesis believe the Earth is no more than 6,000 years old despite actual mountains of evidence to the contrary, and they’ve been playing the “alternative facts” card for years. In lieu of conceding incontrovertible geological evidence, they sidestep it by saying, “Well, we just look at those facts differently.”

    Nowhere is this more apparent than the Grand Canyon, which young-Earth creationist groups have long been enamored with. A long geologic record (spanning almost 2 billion years, in total) is on display in the layers of the Grand Canyon thanks to the work of the Colorado River. But many creationists instead assert that the canyon’s rocks—in addition to the spectacular erosion that reveals them—are actually the product of the Biblical “great flood” several thousand years ago.

    Andrew Snelling, who got a PhD in geology before joining Answers in Genesis, continues working to interpret the canyon in a way that is consistent with his views. In 2013, he requested permission from the National Park Service to collect some rock samples in the canyon for a new project to that end. The Park Service can grant permits for collecting material, which is otherwise illegal.

    Snelling wanted to collect rocks from structures in sedimentary formations known as “soft-sediment deformation”—basically, squiggly disturbances of the layering that occur long before the sediment solidifies into rock. While solid rock layers can fold (bend) on a larger scale under the right pressures, young-Earth creationists assert that all folds are soft sediment structures, since forming them doesn’t require long periods of time.

    The National Park Service sent Snelling’s proposal out for review, having three academic geologists who study the canyon look at it. Those reviews were not kind. None felt the project provided any value to justify the collection. One reviewer, the University of New Mexico’s Karl Karlstrom, pointed out that examples of soft-sediment deformation can be found all over the place, so Snelling didn’t need to collect rock from a national park. In the end, Snelling didn’t get his permit.

    In May, Snelling filed a lawsuit alleging that his rights had been violated, as he believed his application had been denied by a federal agency because of his religious views. The complaint cites, among other things, President Trump’s executive order on religious freedom.

    That lawsuit was withdrawn by Snelling on June 28. According to a story in The Australian, Snelling withdrew his suit because the National Park Service has relented and granted him his permit. He will be able to collect about 40 fist-sized samples, provided that he makes the data from any analyses freely available.

    Not that anything he collects will matter. “Even if I don’t find the evidence I think I will find, it wouldn’t assault my core beliefs,” Snelling told The Australian. “We already have evidence that is consistent with a great flood that swept the world.”

    Again, in actuality, that hypothesis is in conflict with the entirety of Earth’s surface geology.

    Snelling says he will publish his results in a peer-reviewed scientific journal. That likely means Answers in Genesis’ own Answers Research Journal, of which he is editor-in-chief.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition
    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 9:07 am on June 18, 2017 Permalink | Reply
    Tags: ars technica, Molybdenum isotopes serve as a marker of the source material for our Solar System, Tungsten acts as a timer for events early in the Solar System’s history, U Münster

    From ars technica: “New study suggests Jupiter’s formation divided Solar System in two” 

    Ars Technica

    6/17/2017
    John Timmer

    Image credit: NASA

    Gas giants like Jupiter have to grow fast. Newborn stars are embedded in a disk of gas and dust that goes on to form planets. But the ignition of the star releases energy that drives away much of the gas within a relatively short time. Thus, producing something like Jupiter involved a race to gather material before it was pushed out of the Solar System entirely.

    Simulations have suggested that Jupiter could have won this race by quickly building a massive, solid core that was able to start drawing in nearby gas. But, since we can’t look at the interior of Jupiter to see whether it’s solid, finding evidence to support these simulations has been difficult. Now, a team at the University of Münster has discovered some relevant evidence [PNAS] in an unexpected location: the isotope ratios found in various meteorites. These suggest that the early Solar System was quickly divided in two, with the rapidly forming Jupiter creating the dividing line.


    Divide and conquer

    Based on details of their composition, we already knew that meteorites formed from more than one pool of material in the early Solar System. The new work extends that by looking at specific elements: tungsten and molybdenum. Molybdenum isotopes serve as a marker of the source material for our Solar System, determining what type of star contributed that material. Tungsten acts as a timer for events early in the Solar System’s history, as it’s produced by a radioactive decay with a half life of just under nine million years.
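
    To make the “timer” idea concrete, here is a minimal Python sketch of the exponential-decay arithmetic involved. The article doesn’t name the isotopes; the figures below assume the standard hafnium-tungsten chronometer (182Hf decaying to 182W with a half-life of about 8.9 million years), which matches the “just under nine million years” quoted above.

```python
# Minimal sketch of how a short-lived radioactive "timer" works. The article
# doesn't name the isotopes; the standard hafnium-tungsten chronometer
# (182Hf -> 182W, half-life ~8.9 million years) matches the figure quoted above.

HALF_LIFE_MYR = 8.9  # assumed half-life in millions of years

def fraction_remaining(t_myr, half_life_myr=HALF_LIFE_MYR):
    """Fraction of the parent isotope still undecayed after t_myr million years."""
    return 0.5 ** (t_myr / half_life_myr)

# A planetesimal that solidified at 1 Myr locked in more live parent isotope
# than one that solidified at 3 Myr, so their decay products differ measurably.
for t in (1, 2, 3, 9, 30):
    print(f"t = {t:>2} Myr: {fraction_remaining(t):.3f} of the parent isotope remains")
```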

    While we have looked at tungsten and molybdenum in a number of meteorite populations before, the German team extended that work to iron-rich meteorites. These are thought to be fragments of the cores of planetesimals that formed early in the Solar System’s history. In many cases, these bodies went on to contribute to building the first planets.

    The chemical composition of meteorites had suggested a large number of different classes produced as different materials solidified at different distances from the Sun. But the new data suggests that, from the perspective of these isotopes, everything falls into just two classes: carbonaceous and noncarbonaceous.

    These particular isotopes tell us a few things. One is that the two populations probably have a different formation history. The molybdenum data indicates that material was added to the Solar System as it was forming, material that originated from a different type of source star. (One way to visualize this is to think of our Solar System as forming in two steps: first, from the debris of a supernova, then later we received additional material ejected by a red giant star.) And, because the two populations are so distinct, it appears that the later addition of material didn’t spread throughout the entire Solar System. If the later material had spread, you’d find some objects with intermediate compositions.

    A second thing that’s clear from the tungsten data is that the two classes of objects condensed at two different times. This suggests the noncarbonaceous bodies were forming from one to two million years into the Solar System’s history, while carbonaceous materials condensed later, from two to three million years.

    Putting it together

    To explain this, the authors suggest that the Solar System was divided early in its history, creating two different reservoirs of material. “The most plausible mechanism to efficiently separate two disk reservoirs for an extended period,” they suggest, “is the accretion of a giant planet in between them.” That giant planet, obviously, would be Jupiter.

    Modeling indicates that Jupiter would need to be 20 Earth masses to physically separate the two reservoirs. And the new data suggest that a separation had to take place by a million years into the Solar System’s history. All of which means that Jupiter had to grow very large, very quickly. This would be large enough for Jupiter to start accumulating gas well before the newly formed Sun started driving the gas out of the disk. By the time Jupiter grew to 50 Earth masses, it would create a permanent physical separation between the two parts of the disk.

    The authors suggest that the quick formation of Jupiter may have partially starved the inner disk of material, as it prevented material from flowing in from the outer areas of the planet-forming disk. This could explain why the inner Solar System lacks any “super Earths,” larger planets that would have required more material to form.

    Overall, the work does provide some evidence for a quick formation of Jupiter, probably involving a solid core. Other researchers are clearly going to want to check both the composition of additional meteorites and the behavior of planet formation models to see whether the results hold together. But the overall finding of two distinct reservoirs of material in the early Solar System seems to be very clear in their data, and those reservoirs will have to be explained one way or another.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 7:53 am on June 10, 2017 Permalink | Reply
    Tags: ars technica, Common Crawl, Implicit Association Test (IAT), Princeton researchers discover why AI become racist and sexist, Word-Embedding Association Test (WEAT)

    From ars technica: “Princeton researchers discover why AI become racist and sexist” 

    Ars Technica

    4/19/2017
    Annalee Newitz

    Study of language bias has implications for AI as well as human cognition.


    Ever since Microsoft’s chatbot Tay started spouting racist commentary after 24 hours of interacting with humans on Twitter, it has been obvious that our AI creations can fall prey to human prejudice. Now a group of researchers has figured out one reason why that happens. Their findings shed light on more than our future robot overlords, however. They’ve also worked out an algorithm that can actually predict human prejudices based on an intensive analysis of how people use English online.

    The implicit bias test

    Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus—created by millions of people typing away online—might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes.

    People taking the IAT are asked to put words into two categories. The longer it takes for the person to place a word in a category, the less they associate the word with the category. (If you’d like to take an IAT, there are several online at Harvard University.) IAT is used to measure bias by asking people to associate random words with categories like gender, race, disability, age, and more. Outcomes are often unsurprising: for example, most people associate women with family, and men with work. But that obviousness is actually evidence for the IAT’s usefulness in discovering people’s latent stereotypes about each other. (It’s worth noting that there is some debate among social scientists about the IAT’s accuracy.)

    Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. The “word-embedding” part of the test comes from a project at Stanford called GloVe, which packages words together into “vector representations,” basically lists of associated terms. So the word “dog,” if represented as a word-embedded vector, would be composed of words like puppy, doggie, hound, canine, and all the various dog breeds. The idea is to get at the concept of dog, not the specific word. This is especially important if you are working with social stereotypes, where somebody might be expressing ideas about women by using words like “girl” or “mother.” To keep things simple, the researchers limited each concept to 300 vectors.

    To see how concepts get associated with each other online, the WEAT looks at a variety of factors to measure their “closeness” in text. At a basic level, Caliskan told Ars, this means how many words apart the two concepts are, but it also accounts for other factors like word frequency. After going through an algorithmic transform, closeness in the WEAT is equivalent to the time it takes for a person to categorize a concept in the IAT. The further apart the two concepts, the more distantly they are associated in people’s minds.
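
    As a rough illustration of the kind of association measure the WEAT builds on, here is a toy Python sketch using cosine similarity between word vectors. The vectors are invented for illustration (the real test uses GloVe’s 300-dimensional embeddings), and the actual WEAT adds an effect size and permutation test on top of this basic score.

```python
import numpy as np

# Toy sketch of a WEAT-style association score: cosine similarity between word
# vectors, then the difference in mean similarity to two attribute sets. The
# vectors are invented for illustration; the real test uses GloVe's
# 300-dimensional embeddings and adds an effect size and permutation test.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """Mean similarity of word vector w to set A minus its mean similarity to set B."""
    return float(np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B]))

# Hypothetical embeddings for a target word and two attribute sets.
vec = {
    "nurse":  np.array([0.90, 0.10, 0.20]),
    "woman":  np.array([0.80, 0.20, 0.10]),
    "mother": np.array([0.85, 0.15, 0.20]),
    "man":    np.array([0.10, 0.90, 0.20]),
    "father": np.array([0.20, 0.85, 0.10]),
}

female = [vec["woman"], vec["mother"]]
male = [vec["man"], vec["father"]]
print("nurse (female - male association):", round(association(vec["nurse"], female, male), 3))
```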

    The WEAT worked beautifully to discover biases that the IAT had found before. “We adapted the IAT to machines,” Caliskan said. And what that tool revealed was that “if you feed AI with human data, that’s what it will learn. [The data] contains biased information from language.” That bias will affect how the AI behaves in the future, too. As an example, Caliskan made a video (see above) where she shows how the Google Translate AI actually mistranslates words into the English language based on stereotypes it has learned about gender.

    Imagine an army of bots unleashed on the Internet, replicating all the biases that they learned from humanity. That’s the future we’re looking at if we don’t build some kind of corrective for the prejudices in these systems.

    A problem that AI can’t solve

    Though Caliskan and her colleagues found language was full of biases based on prejudice and stereotypes, it was also full of latent truths. In one test, they found strong associations between the concept of woman and the concept of nursing. This reflects a truth about reality: nursing is a majority-female profession.

    “Language reflects facts about the world,” Caliskan told Ars. She continued:

    “Removing bias or statistical facts about the world will make the machine model less accurate. But you can’t easily remove bias, so you have to learn how to work with it. We are self-aware; we can decide to do the right thing instead of the prejudiced option. But machines don’t have self-awareness. An expert human might be able to aid in [the AIs’] decision-making process so the outcome isn’t stereotyped or prejudiced for a given task.”

    The solution to the problem of human language is… humans. “I can’t think of many cases where you wouldn’t need a human to make sure that the right decisions are being made,” concluded Caliskan. “A human would know the edge cases for whatever the application is. Once they test the edge cases they can make sure it’s not biased.”

    So much for the idea that bots will be taking over human jobs. Once we have AIs doing work for us, we’ll need to invent new jobs for humans who are testing the AIs’ results for accuracy and prejudice. Even when chatbots get incredibly sophisticated, they are still going to be trained on human language. And since bias is built into language, humans will still be necessary as decision-makers.

    In a recent paper for Science about their work, the researchers say the implications are far-reaching. “Our findings are also sure to contribute to the debate concerning the Sapir-Whorf hypothesis,” they write. “Our work suggests that behavior can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages.” If you watched the movie Arrival, you’ve probably heard of Sapir-Whorf—it’s the hypothesis that language shapes consciousness. Now we have an algorithm that suggests this may be true, at least when it comes to stereotypes.

    Caliskan said her team wants to branch out and try to find as-yet-unknown biases in human language. Perhaps they could look for patterns created by fake news or look into biases that exist in specific subcultures or geographical locations. They would also like to look at other languages, where bias is encoded very differently than it is in English.

    “Let’s say in the future, someone suspects there’s a bias or stereotype in a certain culture or location,” Caliskan mused. “Instead of testing with human subjects first, which takes time, money, and effort, they can get text from that group of people and test to see if they have this bias. It would save so much time.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 4:44 pm on May 16, 2017 Permalink | Reply
    Tags: ars technica, Atomic Clocks

    From ars technica: “Atomic clocks and solid walls: New tools in the search for dark matter” 

    Ars Technica

    5/15/2017
    Jennifer Ouellette

    An atomic clock based on a fountain of atoms. NSF

    Countless experiments around the world are hoping to reap scientific glory for the first detection of dark matter particles. Usually, they do this by watching for dark matter to bump into normal matter or by slamming particles into other particles and hoping for some dark stuff to pop out. But what if the dark matter behaves more like a wave?

    That’s the intriguing possibility championed by Asimina Arvanitaki, a theoretical physicist at the Perimeter Institute in Waterloo, Ontario, Canada, where she holds the Aristarchus Chair in Theoretical Physics—the first woman to hold a research chair at the institute. Detecting these hypothetical dark matter waves requires a bit of experimental ingenuity. So she and her collaborators are adapting a broad range of radically different techniques to the search: atomic clocks and resonating bars originally designed to hunt for gravitational waves—and even lasers shined at walls in hopes that a bit of dark matter might seep through to the other side.

    “Progress in particle physics for the last 50 years has been focused on colliders, and rightfully so, because whenever we went to a new energy scale, we found something new,” says Arvanitaki. That focus is beginning to shift. To reach higher and higher energies, physicists must build ever-larger colliders—an expensive proposition when funding for science is in decline. There is now more interest in smaller, cheaper options. “These are things that usually fit in the lab, and the turnaround time for results is much shorter than that of the collider,” says Arvanitaki, admitting, “I’ve done this for a long time, and it hasn’t always been popular.”

    The end of the WIMP?

    While most dark matter physicists have focused on hunting for weakly interacting massive particles, or WIMPs, Arvanitaki is one of a growing number who are focusing on less well-known alternatives, such as axions—hypothetical ultralight particles with masses that could be as little as ten thousand trillion trillion times smaller than the mass of the electron. The masses of WIMPs, by contrast, would be larger than the mass of the proton.
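
    For a sense of scale, here is the back-of-envelope arithmetic behind that comparison, reading “ten thousand trillion trillion” as 10^28 and taking the electron’s rest energy as roughly 511 keV.

```python
# Back-of-envelope arithmetic for the mass comparison in the text, reading
# "ten thousand trillion trillion" as 1e28 and using the electron's rest
# energy of roughly 511 keV.

ELECTRON_MASS_EV = 511e3                 # electron rest-mass energy, eV
suppression = 1e4 * 1e12 * 1e12          # ten thousand x trillion x trillion = 1e28

axion_scale_ev = ELECTRON_MASS_EV / suppression
print(f"Axion-like mass scale: ~{axion_scale_ev:.0e} eV")   # roughly 5e-23 eV
```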

    Cosmology gave us very good reason to be excited about WIMPs and focus initial searches in their mass range, according to David Kaplan, a theorist at Johns Hopkins University (and producer of the 2013 documentary Particle Fever). But the WIMP’s dominance in the field to date has also been due, in part, to excitement over the idea of supersymmetry. That model requires every known particle in the Standard Model—whether fermion or boson—to have a superpartner that is heavier and in the opposite class. So an electron, which is a fermion, would have a boson superpartner called the selectron, and so on.

    Physicists suspect one or more of those unseen superpartners might make up dark matter. Supersymmetry predicts not just the existence of dark matter, but how much of it there should be. That fits neatly within a WIMP scenario. Dark matter could be any number of things, after all, and the supersymmetry mass range seemed like a good place to start the search, given the compelling theory behind it.

    But in the ensuing decades, experiment after experiment has come up empty. With each null result, the parameter space where WIMPs might be lurking shrinks. This makes distinguishing a possible signal from background noise in the data increasingly difficult.

    “We’re about to bump up against what’s called the ‘neutrino floor,’” says Kaplan. “All the technology we use to discover WIMPs will soon be sensitive to random neutrinos flying through the Universe. Once it gets there, it becomes a much messier signal and harder to see.”

    Particles are waves

    Despite its momentous discovery of the Higgs boson in 2012, the Large Hadron Collider has yet to find any evidence of supersymmetry. So we shouldn’t wonder that physicists are turning their attention to alternative dark matter candidates outside of the mass ranges of WIMPs. “It’s now a fishing expedition,” says Kaplan. “If you’re going on a fishing expedition, you want to search as broadly as possible, and the WIMP search is narrow and deep.”

    Enter Asimina Arvanitaki—“Mina” for short. She grew up in a small Greek village called Koklas and, since her parents were teachers, there was no shortage of books around the house. Arvanitaki excelled in math and physics—at a very young age, she calculated the time light takes to travel from the Earth to the Sun. While she briefly considered becoming a car mechanic in high school because she loved cars, she decided, “I was more interested in why things are the way they are, not in how to make them work.” So she majored in physics instead.

    Similar reasoning convinced her to switch her graduate-school focus at Stanford from experimental condensed matter physics to theory: she found her quantum field theory course more scintillating than any experimental results she produced in the laboratory.

    Central to Arvanitaki’s approach is a theoretical reimagining of dark matter as more than just a simple particle. A peculiar quirk of quantum mechanics is that particles exhibit both particle- and wave-like behavior, so we’re really talking about something more akin to a wavepacket, according to Arvanitaki. The size of those wave packets is inversely proportional to their mass. “So the elementary particles in our theory don’t have to be tiny,” she says. “They can be super light, which means they can be as big as the room or as big as the entire Universe.”

    Axions fit the bill as a dark matter candidate, but they interact so weakly with regular matter that they cannot be produced in colliders. Arvanitaki has proposed several smaller experiments that might succeed in detecting them in ways that colliders cannot.

    Walls, clocks, and bars

    One of her experiments relies on atomic clocks—the most accurate timekeeping devices we have, in which the natural frequency oscillations of atoms serve the same purpose as the pendulum in a grandfather clock. An average wristwatch loses roughly one second every year; the best atomic clocks are so precise that they would lose only about one second over the age of the Universe.
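
    A quick bit of arithmetic puts those two timekeeping claims side by side as fractional errors, taking the age of the Universe to be roughly 13.8 billion years (a figure not quoted in the article).

```python
# Rough fractional-error comparison of the two timekeeping claims above, taking
# the age of the Universe as ~13.8 billion years (a figure not quoted in the text).

SECONDS_PER_YEAR = 365.25 * 24 * 3600
AGE_OF_UNIVERSE_S = 13.8e9 * SECONDS_PER_YEAR

wristwatch_error = 1.0 / SECONDS_PER_YEAR      # loses ~1 second per year
atomic_clock_error = 1.0 / AGE_OF_UNIVERSE_S   # loses ~1 second per age of the Universe

print(f"Wristwatch fractional error:   ~{wristwatch_error:.1e}")    # ~3e-8
print(f"Atomic clock fractional error: ~{atomic_clock_error:.1e}")  # ~2e-18
```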

    Within her theoretical framework, dark matter particles (including axions) would behave like waves and oscillate at specific frequencies determined by the mass of the particles. Dark matter waves would cause the atoms in an atomic clock to oscillate as well. The effect is very tiny, but it should be possible to see such oscillations in the data. A trial search of existing data from atomic clocks came up empty, but Arvanitaki suspects that a more dedicated analysis would prove more fruitful.
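
    The oscillation frequency in this picture is fixed by the particle’s mass through E = mc² = hf. The sketch below plugs in an assumed ultralight mass of 10^-22 eV, purely for illustration; the article doesn’t commit to a specific value.

```python
# The oscillation frequency of a dark matter "wave" follows from E = m*c^2 = h*f.
# The 1e-22 eV mass below is an assumed, illustrative ultralight value; the
# article does not quote a specific number.

H_EV_S = 4.135667696e-15   # Planck constant, eV*s

def oscillation_frequency_hz(mass_ev):
    return mass_ev / H_EV_S

m = 1e-22                                  # assumed axion-like mass, eV
f = oscillation_frequency_hz(m)
period_years = 1 / f / (365.25 * 86400)
print(f"m = {m:.0e} eV  ->  f ~ {f:.1e} Hz  (period ~ {period_years:.1f} years)")
```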

    Then there are so-called “Weber bars,” which are solid aluminum cylinders that Arvanitaki says should ring like a tuning fork should a dark matter wavelet hit them at just the right frequency. The bars get their name from physicist Joseph Weber, who used them in the 1960s to search for gravitational waves. He claimed to have detected those waves, but nobody could replicate his findings, and his scientific reputation never quite recovered from the controversy.

    Weber died in 2000, but chances are he’d be pleased that his bars have found a new use. Since we don’t know the precise frequency of the dark matter particles we’re hunting, Arvanitaki suggests building a kind of xylophone out of Weber bars. Each bar would be tuned to a different frequency to scan for many different frequencies at once.

    Walking through walls

    Yet another inventive approach involves sending axions through walls. Photons (light) can’t pass through walls—shine a flashlight onto a wall, and someone on the other side won’t be able to see that light. But axions are so weakly interacting that they can pass through a solid wall. Arvanitaki’s experiment exploits the fact that it should be possible to turn photons into axions and then reverse the process to restore the photons. Place a strong magnetic field in front of that wall and then shine a laser onto it. Some of the photons will become axions and pass through the wall. A second magnetic field on the other side of the wall then converts those axions back into photons, which should be easily detected.

    This is a new kind of dark matter detection relying on small, lab-based experiments that are easier to perform (and hence easier to replicate). They’re also much cheaper than setting up detectors deep underground or trying to produce dark matter particles at the LHC—the biggest, most complicated scientific machine ever built, and the most expensive.

    “I think this is the future of dark matter detection,” says Kaplan, although both he and Arvanitaki are adamant that this should complement, not replace, the many ongoing efforts to hunt for WIMPs, whether deep underground or at the LHC.

    “You have to look everywhere, because there are no guarantees. This is what research is all about,” says Arvanitaki. “What we think is correct, and what Nature does, may be two different things.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 6:51 pm on February 14, 2017 Permalink | Reply
    Tags: ars technica, Caltech Palomar Intermediate Palomar Transient Factory

    From ars technica: “Observations catch a supernova three hours after it exploded” 

    Ars Technica

    BRIGHT AND EARLY Scientists caught an early glimpse of an exploding star in the galaxy NGC7610 (shown before the supernova). Light from the explosion revealed that gas (orange) surrounded the star, indicating that the star spurted out gas in advance of the blast.

    The remains of an earlier Type II supernova. NASA

    The skies are full of transient events. If you don’t happen to have a telescope pointed at the right place at the right time, you can miss anything from the transit of a planet to the explosion of a star. But thanks to the development of automated survey telescopes, the odds of getting lucky have improved considerably.

    In October of 2013, the telescope of the intermediate Palomar Transient Factory worked just as expected, capturing a sudden brightening that turned out to reflect the explosion of a red supergiant in a nearby galaxy.

    Caltech Palomar Intermediate Palomar Transient Factory telescope at the Samuel Oschin Telescope at Palomar Observatory, located in San Diego County, California, United States

    The first images came from within three hours of the supernova itself, and followup observations tracked the energy released as it blasted through the nearby environment. The analysis of the event was published on Monday in Nature Physics, and it suggests the explosion followed shortly after the star ejected large amounts of material.

    This isn’t the first supernova we’ve witnessed as it happened; the Kepler space telescope captured two just as the energy of the explosion of the star’s core burst through the surface. By comparison, observations three hours later are relative latecomers. But SN 2013fs (as it was later termed) provided considerably more detail, as followup observations were extensive and covered all wavelengths, from X-rays to the infrared.

    Critically, spectroscopy began within six hours of the explosion. This technique separates the light according to its wavelength, allowing researchers to identify the presence of specific atoms based on the colors of light they absorb. In this case, the spectroscopy picked up the presence of atoms such as oxygen and helium that had lost most of their electrons. The presence of these heavily ionized oxygen atoms surged for several hours, then was suddenly cut off 11 hours later.

    The authors explain this behavior by positing that the red supergiant ejected a significant amount of material before it exploded. The light from the explosion then swept through the vicinity, eventually catching up with the material and stripping the electrons off its atoms. The sudden cutoff came when the light exited out the far side of the material, allowing it to return to a lower energy state, where it stayed until the physical debris of the explosion slammed into it about five days later.

    Since the light of the explosion is moving at the speed of light (duh), we know how far away the material was: six light hours, or roughly the Sun-Pluto distance. Some blurring in the spectroscopy also indicates that it was moving at about 100 kilometers a second. Based on its speed and its distance from the star that ejected it, the researchers could calculate when it was ejected: less than 500 days before the explosion. The total mass of the material also suggests that the star was losing about 0.1 percent of the Sun’s mass a year.
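
    Here is the same back-of-envelope arithmetic in Python. Note that with these rounded inputs (exactly six light-hours, exactly 100 km/s) the travel time comes out at several hundred days—the same order of magnitude as, though not identical to, the paper’s tighter sub-500-day estimate.

```python
# Back-of-envelope check of the distances and timescales quoted above. With
# these rounded inputs (exactly six light-hours, exactly 100 km/s) the travel
# time comes out at several hundred days -- the same order of magnitude as,
# though not identical to, the paper's tighter sub-500-day estimate.

C = 2.998e8                        # speed of light, m/s
distance_m = C * 6 * 3600          # six light-hours, in meters
sun_pluto_m = 5.9e12               # mean Sun-Pluto distance, m, for comparison

v_ejecta = 100e3                   # ~100 km/s, from the spectral blurring
travel_time_days = distance_m / v_ejecta / 86400

print(f"Six light-hours: {distance_m:.2e} m (Sun-Pluto ~ {sun_pluto_m:.1e} m)")
print(f"Time for 100 km/s material to cover that distance: ~{travel_time_days:.0f} days")
```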

    Separately, the authors estimate that, at any given time, it is unlikely that even a single star in our galaxy is within 500 days of exploding, so we probably won’t be able to watch an equivalent star in the act—even assuming we knew how to identify it.

    Large stars like red supergiants do sporadically eject material, so there’s always the possibility that the ejection-explosion series occurred by chance. But this isn’t the first supernova we’ve seen where explosion material has slammed into a shell of material that had been ejected earlier. Indeed, the closest red supergiant, Betelgeuse, has a stable shell of material a fair distance from its surface.

    What could cause these ejections? For most of their relatively short lives, these giant stars are fusing relatively light elements, each of which is present in sufficient amounts to burn for millions of years. But once they start to shift to heavier elements, higher rates of fusion are needed to counteract gravity, which constantly pulls the core’s material inward. As a result, the core undergoes major rearrangements as it changes fuels, sometimes within a span of a couple of years. It’s possible, suggests an accompanying perspective by astronomer Norbert Langer, that these rearrangements propagate to the surface and force the ejection of matter.

    For now, we’ll have to explore this possibility using models of the interiors of giant stars. But with enough survey telescopes in operation, we may have more data to test the idea against before too long.

    Nature Physics, 2017. DOI: 10.1038/NPHYS4025

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition


     
  • richardmitnick 7:59 pm on December 28, 2016 Permalink | Reply
    Tags: ars technica, How humans survived in the barren Atacama Desert 13000 years ago

    From ars technica: “How humans survived in the barren Atacama Desert 13,000 years ago” Revised to include more optical telescopes

    Ars Technica

    12/28/2016
    Annalee Newitz

    The Atacama Desert today is barren, its sands encrusted with salt. And yet there were thriving human settlements there 12,000 years ago. Image credit: Vallerio Pilar

    Home of:

    ESO/LaSilla

    ESO/VLT at Cerro Paranal, Chile

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    Cerro Tololo Inter-American Observatory
    Blanco 4.0-m Telescope
    SOAR 4.1-m Telescope
    Gemini South 8.1-m Telescope

    When humans first arrived in the Americas, roughly 18,000 to 20,000 years ago, they traveled by boat along the continents’ shorelines. Many settled in coastal regions or along rivers that took them inland from the sea. Some made it all the way down to Chile quite quickly; there’s evidence for a human settlement there from more than 14,000 years ago at a site called Monte Verde. Another settlement called Quebrada Maní, dating back almost 13,000 years, was recently discovered north of Monte Verde in one of the most arid deserts in the world: the Atacama, whose salt-encrusted sands repel even the hardiest of plants. It seemed an impossible place for early humans to settle, but now we understand how they did it.

    At a presentation during the American Geophysical Union meeting this month, UC Berkeley environmental science researcher Marco Pfeiffer explained how he and his team investigated the Atacama desert’s deep environmental history. Beneath the desert’s salt crust, they found a buried layer of plant and animal remains between 9,000 and 17,000 years old. There were freshwater plants and mosses, as well as snails and plants that prefer brackish water. Quickly it became obvious this land had not always been desert—what Pfeiffer and his colleagues saw suggested wetlands fed by fresh water.

    Chile’s early archaeological sites, named and dated. The yellow area shows the extension of the Atacama Desert hyperarid core. Also note the surrounding mountains that block many rainy weather systems. Quaternary Science Reviews

    But where could this water have come from? The high mountains surrounding the Atacama are a major barrier to weather systems that bring rain, which is partly why the area is lifeless today. Maybe, they reasoned, the water came from the mountains themselves. Based on previous studies, they already knew that rainfall in the area was six times higher than today’s average in that 9,000- to 17,000-years-ago range. So they used a computer model to figure out how all that water would have drained off the mountain peaks to form streams and pools in the Atacama. “We saw that water must have been accumulating,” Pfeiffer said. As a result, the desert bloomed into a marshy ecosystem which could easily have supported a number of human settlements.

    Indeed, Pfeiffer says that his team has found evidence of human settlements in Atacama’s surrounding flatlands, which they are still investigating. Now that they understand climate change in the region, Pfeiffer added, it will be easier for archaeologists to account for the oddly large population in the area. The history of humanity in the Americas isn’t just the story of vanished peoples—it’s also the tale of lost ecosystems.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 9:03 am on July 25, 2016 Permalink | Reply
    Tags: ars technica, Final International Technology Roadmap for Semiconductors (ITRS)

    From ars technica: “Transistors will stop shrinking in 2021, but Moore’s law will live on” 

    Ars Technica

    7/25/2016
    Sebastian Anthony

    A 22nm Haswell wafer, with a pin for scale.

    Transistors will stop shrinking after 2021, but Moore’s law will probably continue, according to the final International Technology Roadmap for Semiconductors (ITRS).

    The ITRS—which has been produced almost annually by a collaboration of most of the world’s major semiconductor companies since 1993—is about as authoritative as it gets when it comes to predicting the future of computing. The 2015 roadmap will, however, be its last.

    The most interesting aspect of the ITRS is that it tries to predict what materials and processes we might be using in the next 15 years. The idea is that, by collaborating on such a roadmap, the companies involved can sink their R&D money into the “right” technologies.

    For example, despite all the fuss surrounding graphene and carbon nanotubes a few years back, the 2011 ITRS predicted that it would still be at least 10 to 15 years before they were actually used in memory or logic devices. Germanium and III-V semiconductors, though, were predicted to be only five to 10 years away. Thus, if you were deciding where to invest your R&D money, you might opt for III-V rather than nanotubes (which appears to be what Intel and IBM are doing).

    The latest and last ITRS focuses on two key issues: the fact that it will no longer be economically viable to shrink transistors after 2021—and, pray tell, what might be done to keep Moore’s law going once transistors reach that minimum size. (Remember, Moore’s law simply predicts a doubling of transistor density within a given integrated circuit, not the size or performance of those transistors.)

    The first problem has been known about for a long while. Basically, starting at around the 65nm node in 2006, the economic gains from moving to smaller transistors have been slowly dribbling away. Previously, moving to a smaller node meant you could cram tons more chips onto a single silicon wafer, at a reasonably small price increase. With recent nodes like 22 or 14nm, though, there are so many additional steps required that it costs a lot more to manufacture a completed wafer—not to mention additional costs for things like package-on-package (PoP) and through-silicon vias (TSV) packaging.
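
    A toy comparison makes the economics concrete. The numbers below are invented for illustration, not real foundry figures; the point is only that cost per die rises whenever wafer cost grows faster than the number of dies per wafer.

```python
# Toy illustration (made-up numbers, not real foundry figures) of why denser
# nodes stopped automatically lowering cost per chip: extra patterning steps
# can raise the cost of a finished wafer faster than the die count rises.

def cost_per_die(wafer_cost_usd, dies_per_wafer):
    return wafer_cost_usd / dies_per_wafer

older_node = cost_per_die(wafer_cost_usd=3000, dies_per_wafer=500)   # hypothetical mature node
newer_node = cost_per_die(wafer_cost_usd=7000, dies_per_wafer=900)   # hypothetical leading edge

print(f"Older node: ${older_node:.2f} per die")
print(f"Newer node: ${newer_node:.2f} per die (more dies per wafer, yet costlier per die)")
```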

    This is the primary reason that the semiconductor industry has been whittled from around 20 leading-edge logic-manufacturing companies in 2000, down to just four today: Intel, TSMC, GlobalFoundries, and Samsung. (IBM recently left the business by selling its fabs to GloFo.)

    A diagram showing future transistor topologies, from Applied Materials (which makes the machines that actually create the various layers/features on a die). Gate-all-around is shown at the top.

    The second problem—how to keep increasing transistor density—has a couple of likely solutions. First, ITRS expects that chip makers and designers will begin to move away from FinFET in 2019, towards gate-all-around transistor designs. Then, a few years later, these transistors will become vertical, with the channel fashioned out of some kind of nanowire. This will allow for a massive increase in transistor density, similar to recent advances in 3D V-NAND memory.

    The gains won’t last for long though, according to ITRS: by 2024 (so, just eight years from now), we will once again run up against a thermal ceiling. Basically, there is a hard limit on how much heat can be dissipated from a given surface area. So, as chips get smaller and/or denser, it eventually becomes impossible to keep the chip cool. The only real solution is to completely rethink chip packaging and cooling. To begin with, we’ll probably see microfluidic channels that increase the effective surface area for heat transfer. But after that, as we stack circuits on top of each other, we’ll need something even fancier. Electronic blood, perhaps?

    The final ITRS is one of the most beastly reports I’ve ever seen, spanning seven different sections and hundreds of pages and diagrams. Suffice it to say I’ve only touched on a tiny portion of the roadmap here. There are large sections on heterogeneous integration, and also some important bits on connectivity (semiconductors play a key role in modulating optical and radio signals).

    Here’s what ASML’s EUV lithography machine may eventually look like. Pretty large, eh?

    I’ll leave you with one more important short-term nugget, though. We are fast approaching the cut-off date for choosing which lithography and patterning techs will be used for commercial 7nm and 5nm logic chips.

    As you may know, extreme ultraviolet (EUV) has been waiting in the wings for years now, never quite reaching full readiness due to its extremely high power usage and some resolution concerns. In the meantime, chip makers have fallen back on increasing levels of multiple patterning—multiple lithographic exposures, which increase manufacturing time (and costs).

    Now, however, directed self-assembly (DSA)—where the patterns assemble themselves—is also getting very close to readiness. If either technology wants to be used over multiple patterning for 7nm logic, the ITRS says they will need to prove their readiness in the next few months.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 10:13 am on July 22, 2016 Permalink | Reply
    Tags: ars technica

    From ars technica: “Gravity doesn’t care about quantum spin”

    Ars Technica

    7/16/2016
    Chris Lee

    An atomic clock based on a fountain of atoms. NSF

    Physics, as you may have read before, is based around two wildly successful theories. On the grand scale, galaxies, planets, and all the other big stuff dance to the tune of gravity. But, like your teenage daughter, all the little stuff stares in bewildered embarrassment at gravity’s dancing. Quantum mechanics is the only beat the little stuff is willing to get down to. Unlike teenage rebellion, though, no one claims to understand what keeps relativity and quantum mechanics from getting along.

    Because we refuse to believe that these two theories are separate, physicists are constantly trying to find a way to fit them together. Part and parcel of creating a unifying model is finding evidence of a connection between gravity and quantum mechanics. For example, showing that the gravitational force experienced by a particle depended on the particle’s internal quantum state would be a great sign of a deeper connection between the two theories. The latest attempt to show this uses a new way to look for coupling between gravity and the quantum property called spin.

    I’m free, free fallin’

    One of the cornerstones of general relativity is that objects move in straight lines through a curved spacetime. So, if two objects have identical masses and are in free fall, they should follow identical trajectories. And this is what we have observed since the time of Galileo (although I seem to recall that Galileo’s public experiment came to an embarrassing end due to differences in air resistance).

    The quantum state of an object doesn’t seem to make a difference. However, if there is some common theory that underlies general relativity and quantum mechanics, at some level, gravity probably has to act differently on different quantum states.

    To see this effect means measuring very tiny differences in free fall trajectories. Until recently, that was close to impossible. But it may be possible now thanks to the realization of Bose-Einstein condensates. The condensates themselves don’t necessarily provide the tools we need, but the equipment used to create a condensate allows us to manipulate clouds of atoms with exquisite precision. This precision is the basis of a new free fall test from researchers in China.

    Surge like a fountain, like tide

    The basic principle behind the new work is simple. If you want to measure acceleration due to gravity, you create a fountain of atoms and measure how long it takes for an atom to travel from the bottom of the fountain to the top and back again. As long as you know the starting velocity of the atoms and measure the time accurately, you can calculate the acceleration due to gravity. To do that, you need to impart a well-defined momentum to the cloud at a specific time.
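    As a back-of-the-envelope illustration (mine, not the paper’s analysis): simple ballistics says an atom launched straight up at speed v0 returns after a time T = 2 * v0 / g, so a precise clock plus a known launch speed gives you g. A minimal Python sketch with invented numbers:

        # Toy fountain timing: for an atom launched straight up at speed v0,
        # ballistic flight gives a round-trip time T = 2 * v0 / g, so measuring
        # T (with v0 known) yields the local acceleration g. The numbers below
        # are invented for illustration; they are not from the experiment.

        def g_from_fountain(v0_m_per_s: float, round_trip_s: float) -> float:
            """Infer gravitational acceleration from launch speed and round-trip time."""
            return 2.0 * v0_m_per_s / round_trip_s

        v0 = 3.0        # launch speed, m/s (hypothetical)
        T = 0.6118      # measured round-trip time, s (hypothetical)
        print(f"inferred g = {g_from_fountain(v0, T):.4f} m/s^2")  # ~9.807 m/s^2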

    Quantum superposition

    Superposition is nothing more than addition for waves. Let’s say we have two sets of waves that overlap in space and time. At any given point, a trough may line up with a peak, their peaks may line up, or anything in between. Superposition tells us how to add up these waves so that the result reconstructs the patterns that we observe in nature.
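    In code, that addition is literally just summing the waves point by point. Here is a tiny sketch (numpy, with arbitrary wave parameters of my choosing):

        import numpy as np

        # Superposition is pointwise addition. Where peaks line up the sum is
        # large (constructive interference); where a peak meets a trough they
        # cancel (destructive interference). Frequencies and phases are arbitrary.
        x = np.linspace(0.0, 2.0 * np.pi, 1000)
        wave_a = np.sin(5.0 * x)
        aligned = wave_a + np.sin(5.0 * x)            # peaks on peaks
        opposed = wave_a + np.sin(5.0 * x + np.pi)    # peaks on troughs

        print(np.max(np.abs(aligned)))   # ~2.0: the waves reinforce
        print(np.max(np.abs(opposed)))   # ~0.0: the waves cancel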

    Then you need to measure the transit time. This is done using the way quantum states evolve in time, which also means you need to prepare the cloud of atoms in a precisely defined quantum state.

    If I put the cloud into a superposition of two states, then that superposition will evolve in time. What do I mean by that? Let’s say that I set up a superposition between states A and B. Now, when I take a measurement, I won’t get a mixture of A and B; I only ever get A or B. But the probability of obtaining A (or B) oscillates in time. So at one moment, the probability might be 50 percent, a short time later it is 75 percent, then a little while later it is 100 percent. Then it starts to fall until it reaches zero and then it starts to increase again.

    This oscillation has a regular period that is defined by the environment. So, under controlled circumstances, I set the superposition state as the atomic cloud drifts out the top of the fountain, and at a certain time later, I make a measurement. Each atom reports either state A or state B. The ratio of the amount of A and B tells me how much time has passed for the atoms, and, therefore, what the force of gravity was during their time in the fountain.

    Once you have that working, the experiment is dead simple (he says in the tone of someone who is confident he will never have to actually build the apparatus or perform the experiment). Essentially, you take your atomic cloud and choose a couple of different atomic states. Place the atoms in one of those states and measure the free fall time. Then repeat the experiment for the second state. Any difference, in this ideal case, is due to gravity acting differently on the two quantum states. Simple, right?

    Practically speaking, this is kind-a-sorta really, really difficult.
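    Setting those practical difficulties aside, the idealized procedure can be sketched numerically. This is my own toy illustration, not the group’s analysis: the fraction of atoms found in state A follows a cos^2 oscillation in the elapsed time, so inverting that fraction recovers the flight time for each of the two internal states, and the two recovered times can then be compared. The oscillation period, atom counts, and the injected timing offset are all invented.

        import numpy as np

        rng = np.random.default_rng(seed=1)

        T_OSC = 1.0e-3        # assumed oscillation period of the superposition, s
        N_ATOMS = 10_000_000  # detected atoms per run (made up)

        def fraction_in_a(elapsed_s: float) -> float:
            """Simulate counting atoms: P(A) oscillates as cos^2(pi * t / T_OSC)."""
            p_a = np.cos(np.pi * elapsed_s / T_OSC) ** 2
            return rng.binomial(N_ATOMS, p_a) / N_ATOMS

        def elapsed_from_fraction(frac_a: float) -> float:
            """Invert the cos^2 curve, assuming the elapsed time sits in the
            first half-period so the inversion is unambiguous."""
            return T_OSC * np.arccos(np.sqrt(frac_a)) / np.pi

        # Hypothetical flight times for the two internal states. The real
        # measurement found no difference; the tiny offset here exists only so
        # the recovery step has something to recover.
        t_state_one = 0.400e-3
        t_state_two = 0.400e-3 + 1.0e-6

        dt = elapsed_from_fraction(fraction_in_a(t_state_two)) - \
             elapsed_from_fraction(fraction_in_a(t_state_one))
        print(f"recovered timing difference ~ {dt:.1e} s (injected: 1.0e-06 s)")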

    I feel like I’m spinnin’

    Obviously, you have to choose a pair of quantum states to compare. In the case of our Chinese researchers, they chose to test for coupling between gravity and a particle’s intrinsic angular momentum, called spin. This choice makes sense because we know that in macroscopic bodies, the rotation of a body (in other words, its angular momentum) modifies the local gravitational field. So, depending on the direction and magnitude of the angular momentum, the local gravitational field will be different. Maybe we can see this classical effect in quantum states, too?

    However, quantum spin is, confusingly, not related to the rotation of a body. Indeed, if you calculate how fast an electron needs to rotate in order to generate its spin angular momentum, you’ll come up with a ridiculous number (especially if you take the idea of the electron being a point particle seriously). Nevertheless, particles like electrons and protons, as well as composite particles like atoms, have intrinsic spin angular momentum. So, an experiment comparing the free fall of particles with the same spin, but oriented in different directions, makes perfect sense.

    Except for one thing: magnetic fields. The spin of a particle is also coupled to its magnetic moment. That means that if there are any changes in the magnetic field around the atom fountain, the atomic cloud will experience a force due to these variations. Since the researchers want to measure a difference between two spin states that have opposite orientations, this is bad. They will always find that the two spin populations have different fountain trajectories, but the difference will largely be due to variations in the magnetic field, rather than to differences in gravitational forces.

    So the story of this research is eliminating stray magnetic fields. Indeed, the researchers spend most of their paper describing how they test for magnetic fields before using additional electromagnets to cancel out stray fields. They even invented a new measurement technique that partially compensates for any remaining variations in the magnetic fields. To a large extent, the researchers were successful.

    So, does gravity care about your spin?

    Short answer: no. The researchers obtained a null result, meaning that, to within the precision of their measurements, there was no detectable difference in atomic free falls when atoms were in different spin states.

    But this is really just the beginning of the experiment. We can expect even more sensitive measurements from the same researchers within the next few years. And the strategies that they used to increase accuracy can be transferred to other high-precision measurements.

    Physical Review Letters, 2016, DOI: 10.1103/PhysRevLett.117.023001

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition
    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 2:16 pm on November 9, 2015 Permalink | Reply
    Tags: ars technica, , , ,   

    From ars technica: “Finally some answers on dark energy, the mysterious master of the Universe” 

    Ars Technica
    ars technica

    Nov 5, 2015
    Eric Berger

    U Texas McDonald Observatory Hobby-Eberly 9.1 meter Telescope
    U Texas McDonald Observatory Hobby-Eberly 9.1 meter Telescope interior

    Unless you’re an astrophysicist, you probably don’t sit around thinking about dark energy all that often. That’s understandable, as dark energy doesn’t really affect anyone’s life. But when you stop to ponder dark energy, it’s really rather remarkable. This mysterious force, which makes up the bulk of the Universe but was only discovered 17 years ago, somehow is blasting the vast cosmos apart at ever-increasing rates.

    Astrophysicists do sit around and think about dark energy a lot. And they’re desperate for more information about it as, right now, they have essentially two data points. One shows the Universe in its infancy, at 380,000 years old, thanks to observations of the cosmic microwave background radiation. And by pointing their telescopes into the sky and looking about, they can measure the present expansion rate of the Universe.

    But astronomers would desperately like to know what happened in between the Big Bang and now. Is dark energy constant, or does it change over time? Or, more crazily still, might it be about to undergo some kind of phase change and turn everything into ice, as ice-nine did in Kurt Vonnegut’s novel Cat’s Cradle? Probably not, but really, no one knows.

    The Plan

    Fortunately astronomers in West Texas have a $42 million plan to use the world’s fourth largest optical telescope to get some answers. Until now, the 9-meter Hobby-Eberly telescope at McDonald Observatory has excelled at observing very distant objects, but this has necessitated a narrow field of view. However, with a clever new optical system, astronomers have expanded the telescope’s field of view by a factor of 120, to nearly the size of a full Moon. The next step is to build a suite of spectrographs and, using 34,000 optical fibers, wire them into the focal plane of the telescope.

    “We’re going to make this 3-D map of the Universe,” Karl Gebhardt, a professor of astronomy at the University of Texas at Austin, told Ars. “On this giant map, for every image that we take, we’ll get that many spectra. No other telescope can touch this kind of information.”

    With this detailed information about the location and age of objects in the sky, astronomers hope to gain an understanding of how dark energy affected the expansion rate of the Universe 5 billion to 10 billion years ago. There are many theories about what dark energy might be and how the expansion rate has changed over time. Those theories make predictions that can now be tested with actual data.

    In Texas, there’s a fierce sporting rivalry between the Longhorns in Austin and Texas A&M Aggies in College Station. But in the field of astronomy and astrophysics the two universities have worked closely together. And perhaps no one is more excited than A&M’s Nick Suntzeff about the new data that will come down over the next four years from the Hobby-Eberly telescope.

    Suntzeff is best known for co-founding the High-Z Supernova Search Team along with Brian Schmidt, one of two research groups that discovered dark energy in 1998. This startling observation that the expansion rate of the Universe was in fact accelerating upended physicists’ understanding of the cosmos. They continue to grapple with understanding the mysterious force—hence the enigmatic appellation dark energy—that could be causing this acceleration.

    Dawn of the cosmos

    When scientists observe quantum systems, they see tiny energy fluctuations. They think these same fluctuations occurred at the very dawn of the Universe, Suntzeff explained to Ars. And as the early Universe expanded, so did these fluctuations. Then, at about 1 second, when the temperature of the Universe was about 10 billion degrees Kelvin, these fluctuations were essentially imprinted onto dark matter. From then on, this dark matter (whatever it actually is) responded only to the force of gravity.

    Meanwhile, normal matter and light were also filling the Universe, and they were more strongly affected by electromagnetism than gravity. As the Universe expanded, this light and matter rippled outward at the speed of sound. Then, at 380,000 years, Suntzeff said these sound waves “froze,” leaving the cosmic microwave background.

    These ripples, frozen with respect to one another, expanded outward as the Universe likewise grew. They can still be faintly seen today—many galaxies are spaced apart by about 500 million light years, the size of the largest ripples. But what happened between this freezing long ago, and what astronomers see today, is a mystery.

    The Texas experiment will allow astronomers to fill in some of that gap. They should be able to tease apart the two forces acting upon the expansion of the Universe. There’s the gravitational clumping, due to dark matter, which is holding back expansion. Then there’s the acceleration due to dark energy. Because the Universe’s expansion rate is now accelerating, dark energy appears to be dominating now. But is it constant? And when did it overtake dark matter’s gravitational pull?

    “I like to think of it sort of as a flag,” Suntzeff said. “We don’t see the wind, but we know the strength of the wind by the way the flag ripples in the breeze. The same with the ripples. We don’t see dark energy and dark matter, but we see how they push and pull the ripples over time, and therefore we can measure their strengths over time.”
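    One common benchmark for the “when did it overtake” question can be worked out on the back of an envelope, assuming a flat universe with constant dark energy and the usual round present-day density fractions (roughly 30 percent matter, 70 percent dark energy; standard textbook values, not HETDEX results):

        # Matter dilutes as the cube of the scale factor a (a = 1 today), while a
        # constant dark energy density does not dilute at all. The two were equal
        # when OMEGA_M * a**-3 == OMEGA_L, i.e. a_eq = (OMEGA_M / OMEGA_L) ** (1/3).
        OMEGA_M = 0.3   # matter (mostly dark matter), fraction of today's total
        OMEGA_L = 0.7   # dark energy, fraction of today's total

        a_eq = (OMEGA_M / OMEGA_L) ** (1.0 / 3.0)
        z_eq = 1.0 / a_eq - 1.0
        print(f"dark energy overtook matter at a ~ {a_eq:.2f} (redshift z ~ {z_eq:.2f})")
        # -> a ~ 0.75, z ~ 0.33: only a few billion years ago. HETDEX looks much
        #    further back, to an era when dark energy was still the underdog, to
        #    test whether it really behaves like a constant.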

    The universe’s end?

    Funding for the $42 million experiment at McDonald Observatory, called HETDEX for Hobby-Eberly Telescope Dark Energy Experiment, will come from three different sources: one-third from the state of Texas, one-third from the federal government, and one-third from private foundations.

    The telescope is in the Davis Mountains of West Texas, which provide some of the darkest and clearest skies in the continental United States. The upgraded version took its first image on July 29. Completing the experiment will take three or four years, but astronomers expect to have a pretty good idea about their findings within the first year.

    If dark energy is constant, then our Universe has a dark, lonely future, as most of what we can now observe will eventually disappear over the horizon at speeds faster than that of light. But if dark energy changes over time, then it is hard to know what will happen, Suntzeff said. One unlikely scenario—among many, he said—is a phase transition. Dark energy might go through some kind of catalytic change that would propagate through the Universe. Then it might be game over, which would be a nice thing to know about in advance.

    Or perhaps not.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 9:27 am on October 9, 2015 Permalink | Reply
    Tags: ars technica, , , Space Shuttle and Centaur   

    From ars technica: “A deathblow to the Death Star: The rise and fall of NASA’s Shuttle-Centaur” 

    Ars Technica
    ars technica

    Oct 9, 2015
    Emily Carney

    In January 1986, astronaut Rick Hauck approached his STS-61F crew four months before their mission was scheduled to launch. The shuttle Challenger was set to deploy the Ulysses solar probe on a trajectory to Jupiter, utilizing a liquid-fueled Centaur G-Prime stage. While an upcoming launch should be an exciting time for any astronaut, Hauck’s outlook was anything but optimistic. As he spoke to his crew, his tone was grave. Recounting the moment in a 2003 Johnson Space Center (JSC) oral history, he couldn’t recall the exact quote, but the message remained clear.

    “NASA is doing business different from the way it has in the past. Safety is being compromised, and if any of you want to take yourself off this flight, I will support you.”

    Hauck wasn’t just spooked by the lax approach that eventually led to the Challenger explosion. Layered on top of that concern was the planned method of sending Ulysses away from Earth. The Centaur was fueled by a combustible mix of liquid hydrogen and oxygen, and it would be carried to orbit inside the shuttle’s payload bay.

    The unstoppable shuttle

    Hauck’s words may have seemed shocking, but they were prescient. In the early 1980s, the space shuttle seemed unstoppable. Technically called the US Space Transportation System program, the shuttle was on the verge of entering what was being called its “Golden Age” in 1984. The idea of disaster seemed remote. As experience with the craft grew, nothing seemed to have gone wrong (at least nothing the public was aware of). It seemed nothing could go wrong.

    In 1985, the program enjoyed a record nine successful spaceflights, and NASA was expected to launch a staggering 15 missions in 1986. The manifest for 1986 was beyond ambitious, including but not limited to a Department of Defense mission into a polar orbit from Vandenberg Air Force Base, the deployment of the Hubble telescope to low Earth orbit, and the delivery of two craft destined for deep space: Galileo and Ulysses.

    The space shuttle had been touted as part space vehicle and part “cargo bus,” something that would make traveling to orbit routine. The intense schedule suggested it would finally fulfill the promise that had faded during the wait for its long-delayed maiden flight in April 1981. As astronaut John Young, who commanded that historic first flight, stated in his book Forever Young, “When we finished STS-1, it was clear we had to make the space shuttle what we hoped it could be—a routine access-to-space vehicle.”

    To meet strict deadlines, however, safety was starting to slide. Following the last test flight (STS-4, completed in July 1982), crews no longer wore pressure suits during launch and reentry, making shuttle flights look as “routine” as airplane rides. The shuttle had no ejection capability at the time, so its occupants were committed to the launch through the bitter end.

    Yet by mid-1985, the space shuttle program had already experienced several near-disasters. Critics of the program had long fretted over the design of the system, which boasted two segmented solid rocket boosters and an external tank. The boosters were already noted to have experienced “blow by” in the O-rings of their joints, which could leak hot exhaust out the sides of the structure. It was an issue that would later come to the forefront in a horrific display during the Challenger disaster.

    But there were other close calls that the public was largely unaware of. In late July 1985, the program had experienced an “Abort to Orbit” condition during the launch of STS-51F, commanded by Gordon Fullerton. A center engine had failed en route to space, which should normally call for the shuttle’s immediate return. Instead, a quick call was made by Booster Systems Engineer Jenny Howard to “inhibit main engine limits,” which may have prevented another engine from failing, possibly saving the orbiter Challenger and its seven-man crew. (The mission did reach orbit, but a lower one than planned.)


    Video: Howard makes the call to push the engines past their assigned limits.

    People who followed things closely recognized the problems. The “Space Shuttle” section of Jane’s Spaceflight Directory 1986 (which was largely written the year before) underscored the risky nature of the early program: “The narrow safety margins and near disasters during the launch phase are already nearly forgotten, save by those responsible for averting actual disaster.”

    The push for Shuttle-Centaur

    All of those risks existed when the shuttle was simply carrying an inert cargo to orbit. Shuttle-Centaur, the high-energy solution intended to propel Galileo and Ulysses into space, was anything but inert.

    Shuttle-Centaur was born from a desire to send heavier payloads on a direct trajectory to deep space targets from America’s flagship space vehicles.

    Centaur-2A upper stage of an Atlas IIA

    The Centaur rocket was older than NASA itself. According to a 2012 NASA History article, the US Air Force teamed up with General Dynamics/Astronautics Corp. to develop a rocket stage that could be carried to orbit and then ignite to propel heavier loads into space. In 1958 the proposal was accepted by the government’s Advanced Research Projects Agency, and the upper stage that would become Centaur began its development.

    The first successful flight of a Centaur (married to an Atlas booster) was made on November 27, 1963. While the launch vehicle carried no payload, it did demonstrate that a liquid hydrogen/liquid oxygen upper stage worked. In the years since, the Centaur has helped propel a wide variety of spacecraft to deep-space destinations. Both Voyagers 1 and 2 received a much-needed boost from their Centaur stages en route to the Solar System’s outer planets and beyond.

    Voyager 1

    General Dynamics was tasked with adapting the rocket stage so it could be taken to orbit on the shuttle. A Convair/General Dynamics poster from this period read enthusiastically, “In 1986, we’re going to Jupiter…and we need your help.” The artwork on the poster appeared retro-futuristic, boasting a spacecraft propelled by a silvery rocket stage that looked like something out of a sci-fi fantasy novel or Omni magazine. In the distance, a space shuttle—payload bay doors open—hovered over an exquisite Earth-scape.

    General Dynamics’ artistic rendering of Shuttle-Centaur, with optimistic text about a 1986 target date for launch. The San Diego Air & Space Museum Archives on Flickr.

    A 1984 paper titled Shuttle Centaur Project Perspective, written by Edwin T. Muckley of NASA’s Lewis (now Glenn) Research Center, suggested that Jupiter would be the first of many deep-space destinations. Muckley was optimistic about the technology’s reach: “It’s expected to meet the demands of a wide range of users including NASA, the DOD, private industry, and the European Space Agency (ESA).”

    The paper went on to describe the two different versions of the liquid-fueled rocket, meant to be cradled inside the orbiters’ payload bays. “The initial version, designated G-Prime, is the larger of the two, with a length of 9.1 m (30 ft.). This vehicle will be used to launch the Galileo and International Solar Polar Missions (ISPM) [later called Ulysses] to Jupiter in May 1986.”

    According to Muckley, the shorter version, Centaur G, was to be used to launch DOD payloads, the Magellan spacecraft to Venus, and TDRSS [tracking and data relay satellite system] missions. He added optimistically, “…[It] is expected to provide launch services well into the 1990s.”

    Magellan

    Dennis Jenkins’ book Space Shuttle: The History of the National Space Transportation System, the First 100 Missions discussed why Centaur became seen as desirable for use on the shuttle in the 1970s and early 1980s. A booster designed specifically for the shuttle called the Inertial Upper Stage (developed by Boeing) did not have enough power to directly deliver deep-space payloads (this solid stage would be used for smaller satellites such as TDRSS hardware). As the author explained, “First and most important was that Centaur was more powerful and had the ability to propel a payload directly to another planet. Second, Centaur was ‘gentler’—solid rockets had a harsh initial thrust that had the potential to damage the sensitive instruments aboard a planetary payload.”

    However, the Centaur aboard the shuttle also had its drawbacks. First, it required changes in the way the shuttle operated. The crew had to be reduced to four in order to accommodate a heavier payload and a perilously thin-skinned, liquid-fueled rocket stage inside the shuttle’s payload bay. And the added weight meant that the shuttle could only be sent to its lowest possible orbit.

    In addition, during launch, the space shuttle’s main engines (SSMEs) would be taxed unlike any other time in program history. Even with smaller crews and the food-prep galley removed from the middeck, the shuttle’s main engines would have to be throttled up to an unheard-of 109-percent thrust level to deliver the shuttle, payload, and its crew to orbit. The previous “maximum” had been 104 percent.

    But the risks of the shuttle launch were only a secondary concern. “The perceived advantage of the IUS [Inertial Upper Stage] over the Centaur was safety—LH2 [liquid hydrogen] presented a significant challenge,” Jenkins noted. “Nevertheless, NASA decided to accept the risk and go with the Centaur.”

    While a host of unknowns remained concerning launching a volatile, liquid-fueled rocket stage on the back of a space shuttle armed with a liquid-filled tank and two solid rocket boosters, NASA and its contractors galloped full speed toward a May 1986 launch deadline for both spacecraft. The project would be helmed by NASA’s Lewis. It was decided that the orbiters Challenger and Discovery would be modified to carry Centaur (the then-new orbiter Atlantis was delivered with Centaur capability) with launch pad modifications taking place at the Kennedy Space Center and Vandenberg.

    The “Death Star” launches

    The launch plan was dramatic: two shuttles, Challenger and Atlantis, were to be on Pads 39B and 39A in mid-1986, carrying Ulysses and Galileo, each mated to a Centaur G-Prime stage. The turnaround was also to be especially quick: these launches would take place within five days of one another.

    The commander of the first shuttle mission, John Young, was known for his laconic sense of humor. He began to refer to missions 61F (Ulysses) and 61G (Galileo) as the “Death Star” missions. He wasn’t entirely joking.

    The thin-skinned Centaur posed a host of risks to the crews. In an AmericaSpace article, space historian Ben Evans pointed out that gaseous hydrogen would periodically have to be “bled off” to keep its tank within pressure limits. However, if too much hydrogen was vented, the Centaur would not have enough propellant left to send its payload on the trek to Jupiter. Time was of the essence, and the crews would be under considerable stress. Their first deployment opportunities would occur a mere seven hours post-launch, and three deployment “windows” were scheduled.

    The venting itself posed its own problems. There was a concern about the position of the stage’s vents, which were located near the exhaust ports for the shuttles’ Auxiliary Power Units—close enough that some worried venting could cause an explosion.

    Another big concern involved what would happen if the shuttle had to dump the stage’s liquid fuel prior to performing a Return-to-Launch-Site (RTLS) abort or a Transatlantic (TAL) abort. There was worry that the fuel would “slosh” around in the payload bay, rendering the shuttle uncontrollable. (There were also worries about the feasibility of these abort plans with a normal shuttle cargo, but that’s another story.)

    These concerns filtered down to the crews. According to Evans, astronaut John Fabian was originally meant to be on the crew of 61G, but he resigned partly due to safety concerns surrounding Shuttle-Centaur. “He spent enough time with the 61G crew to see a technician clambering onto the Centaur with an untethered wrench in his back pocket and another smoothing out a weld, then accidentally scarring the booster’s thin skin with a tool,” the historian wrote. “In Fabian’s mind, it was bad enough that the Shuttle was carrying a volatile booster with limited redundancy, without adding new worries about poor quality control oversight and a lax attitude towards safety.”

    Astronauts John Fabian and Dave Walker pose in front of what almost became their “ride” during a Shuttle-Centaur rollout ceremony in mid-1985. NASA/Glenn Research Center

    STS-61F’s commander, Hauck, had also developed misgivings about Shuttle-Centaur. In the 2003 JSC oral history, he bluntly discussed the unforgiving nature of his mission:

    “…[If] you’ve got a return-to-launch-site abort or a transatlantic abort and you’ve got to land, and you’ve got a rocket filled with liquid oxygen, liquid hydrogen in the cargo bay, you’ve got to get rid of the liquid oxygen and liquid hydrogen, so that means you’ve got to dump it while you’re flying through this contingency abort. And to make sure that it can dump safely, you need to have redundant parallel dump valves, helium systems that control the dump valves, software that makes sure that contingencies can be taken care of. And then when you land, here you’re sitting with the Shuttle-Centaur in the cargo bay that you haven’t been able to dump all of it, so you’re venting gaseous hydrogen out this side, gaseous oxygen out that side, and this is just not a good idea.”

    Even as late as January 1986, Hauck and his crew were still working out issues with the system’s helium-actuated dump valves. He related, “…[It] was clear that the program was willing to compromise on the margins in the propulsive force being provided by the pressurized helium… I think it was conceded this was going to be the riskiest mission the Shuttle would have flown up to that point.”

    Saved by disaster

    Within weeks, the potential crisis was derailed dramatically by an actual crisis, one that was etched all over the skies of central Florida on an uncharacteristically cold morning. On January 28, 1986, Challenger—meant to hoist Hauck, his crew, Ulysses, and its Shuttle-Centaur in May—was destroyed shortly after its launch, its crew of seven a total loss. On that ill-fated mission, safety had been dangerously compromised, with the shuttle launched after a brutal cold snap that had made the boosters’ O-rings inflexible and primed to fail.

    It became clear NASA had to develop a different attitude toward risk management. Keeping risks as low as possible meant putting Shuttle-Centaur on the chopping block. In June 1986, a Los Angeles Times article announced the death-blow to the Death Star.

    “The National Aeronautics and Space Administration Thursday canceled development of a modified Centaur rocket that it had planned to carry into orbit aboard the space shuttle and then use to fire scientific payloads to Jupiter and the Sun. NASA Administrator James C. Fletcher said the Centaur ‘would not meet safety criteria being applied to other cargo or elements of the space shuttle system.’ His decision came after urgent NASA and congressional investigations of potential safety problems following the Jan. 28 destruction of the shuttle Challenger 73 seconds after launch.”

    Astronauts Rick Hauck, John Fabian, and Dave Walker pose by a Shuttle-Centaur stage in mid-1985 during a rollout ceremony. Hauck and Fabian both had misgivings about Shuttle-Centaur. The San Diego Air & Space Museum Archives on Flickr.

    After a long investigation and many ensuing changes, the space shuttle made its return to flight with STS-26 (helmed by Hauck) in September 1988. Discovery and the rest of the fleet boasted redesigned solid rocket boosters with added redundancy. In addition, crews had a “bailout” option if something went wrong during launch, and they wore pressure suits during ascent and reentry for the first time since 1982.

    Galileo was successfully deployed from Atlantis (STS-34) using an IUS in October 1989, while Ulysses utilized an IUS and PAM-S (Payload Assist Module) to begin its journey following its deployment from Discovery (STS-41) in October 1990.

    Galileo

    As for Shuttle-Centaur? Relegated to the history books as a “what if,” a model now exists at the US Space and Rocket Center in Huntsville, Alabama. It still looks every inch the shiny, sci-fi dream depicted in posters and artists’ renderings back in the 1980s. However, this “Death Star” remains on terra firma, representing what Jim Banke described as the “naive arrogance” of the space shuttle’s Golden Age.

    Additional sources

    Hitt, D., & Smith, H. (2014). Bold they rise: The space shuttle early years, 1972 – 1986. Lincoln, NE: University of Nebraska Press.
    Jenkins, D. R. (2012). Space shuttle: The history of the national space transportation system, the first 100 missions. Cape Canaveral, FL: Published by author.
    Turnill, R. (Ed.). (1986). Jane’s spaceflight directory (2nd ed.). London, England: Jane’s Publishing Company Limited.
    Young, J. W., & Hansen, J. R. (2012). Forever young: A life of adventure in air and space. Gainesville, FL: University Press of Florida.
    Dawson, V., & Bowles, M.D. (2004). Taming liquid hydrogen: The Centaur upper stage rocket, 1958 – 2002. Washington, D.C.: National Aeronautics and Space Administration.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     