Tagged: ars technica

  • richardmitnick 9:03 am on July 25, 2016 Permalink | Reply
    Tags: ars technica, Final International Technology Roadmap for Semiconductors (ITRS)

    From ars technica: “Transistors will stop shrinking in 2021, but Moore’s law will live on” 

    Ars Technica

    25/7/2016
    Sebastian Anthony

    A 22nm Haswell wafer, with a pin for scale. No image credit

    Transistors will stop shrinking after 2021, but Moore’s law will probably continue, according to the final International Technology Roadmap for Semiconductors (ITRS).

    The ITRS—which has been produced almost annually by a collaboration of most of the world’s major semiconductor companies since 1993—is about as authoritative as it gets when it comes to predicting the future of computing. The 2015 roadmap, however, will be its last.

    The most interesting aspect of the ITRS is that it tries to predict what materials and processes we might be using in the next 15 years. The idea is that, by collaborating on such a roadmap, the companies involved can sink their R&D money into the “right” technologies.

    For example, despite all the fuss surrounding graphene and carbon nanotubes a few years back, the 2011 ITRS predicted that it would still be at least 10 to 15 years before they were actually used in memory or logic devices. Germanium and III-V semiconductors, though, were predicted to be only five to 10 years away. Thus, if you were deciding where to invest your R&D money, you might opt for III-V rather than nanotubes (which appears to be what Intel and IBM are doing).

    The latest and last ITRS focuses on two key areas: the fact that it will no longer be economically viable to shrink transistors after 2021—and, pray tell, what might be done to keep Moore’s law going despite transistors reaching their minimum size. (Remember, Moore’s law simply predicts a doubling of transistor density within a given integrated circuit, not the size or performance of those transistors.)
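
    To make that parenthetical concrete, here is a minimal sketch of the doubling arithmetic in Python. The starting density and the two-year cadence are illustrative assumptions, not figures from the ITRS.

    def projected_density(start_per_mm2, years, cadence_years=2.0):
        """Transistor density after `years`, assuming one doubling every `cadence_years`."""
        return start_per_mm2 * 2 ** (years / cadence_years)

    # Assumed starting point of ~15 million transistors/mm^2, projected six years out:
    print(f"{projected_density(15e6, 6):.2e} transistors/mm^2")  # ~1.20e+08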

    The first problem has been known about for a long while. Basically, starting at around the 65nm node in 2006, the economic gains from moving to smaller transistors have been slowly dribbling away. Previously, moving to a smaller node meant you could cram tons more chips onto a single silicon wafer, at a reasonably small price increase. With recent nodes like 22 or 14nm, though, there are so many additional steps required that it costs a lot more to manufacture a completed wafer—not to mention additional costs for things like package-on-package (PoP) and through-silicon via (TSV) packaging.

    This is the primary reason that the semiconductor industry has been whittled from around 20 leading-edge logic-manufacturing companies in 2000, down to just four today: Intel, TSMC, GlobalFoundries, and Samsung. (IBM recently left the business by selling its fabs to GloFo.)

    A diagram showing future transistor topologies, from Applied Materials (which makes the machines that actually create the various layers/features on a die). Gate-all-around is shown at the top.

    The second problem—how to keep increasing transistor density—has a couple of likely solutions. First, ITRS expects that chip makers and designers will begin to move away from FinFET in 2019, towards gate-all-around transistor designs. Then, a few years later, these transistors will become vertical, with the channel fashioned out of some kind of nanowire. This will allow for a massive increase in transistor density, similar to recent advances in 3D V-NAND memory.

    The gains won’t last for long though, according to ITRS: by 2024 (so, just eight years from now), we will once again run up against a thermal ceiling. Basically, there is a hard limit on how much heat can be dissipated from a given surface area. So, as chips get smaller and/or denser, it eventually becomes impossible to keep the chip cool. The only real solution is to completely rethink chip packaging and cooling. To begin with, we’ll probably see microfluidic channels that increase the effective surface area for heat transfer. But after that, as we stack circuits on top of each other, we’ll need something even fancier. Electronic blood, perhaps?
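
    As a rough illustration of that thermal ceiling, the Python sketch below simply divides power by the area available for cooling; the wattage and die area are made-up numbers, not ITRS figures, and stacking dies multiplies the heat that a single surface has to shed.

    def heat_flux_w_per_cm2(power_w, cooled_area_cm2, stacked_layers=1):
        """Heat flux through one cooling surface shared by `stacked_layers` dies."""
        return power_w * stacked_layers / cooled_area_cm2

    print(heat_flux_w_per_cm2(100, 1.5))                    # ~67 W/cm^2, roughly a desktop CPU
    print(heat_flux_w_per_cm2(100, 1.5, stacked_layers=4))  # ~267 W/cm^2 for the same chips stacked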

    The final ITRS is one of the most beastly reports I’ve ever seen, spanning seven different sections and hundreds of pages and diagrams. Suffice it to say I’ve only touched on a tiny portion of the roadmap here. There are large sections on heterogeneous integration, and also some important bits on connectivity (semiconductors play a key role in modulating optical and radio signals).

    Here’s what ASML’s EUV lithography machine may eventually look like. Pretty large, eh?

    I’ll leave you with one more important short-term nugget, though. We are fast approaching the cut-off date for choosing which lithography and patterning techs will be used for commercial 7nm and 5nm logic chips.

    As you may know, extreme ultraviolet (EUV) has been waiting in the wings for years now, never quite reaching full readiness due to its extremely high power usage and some resolution concerns. In the meantime, chip makers have fallen back on increasing levels of multiple patterning—multiple lithographic exposures, which increase manufacturing time (and costs).

    Now, however, directed self-assembly (DSA)—where the patterns assemble themselves—is also getting very close to readiness. If either technology is to be used instead of multiple patterning for 7nm logic, the ITRS says it will need to prove its readiness in the next few months.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition
    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 10:13 am on July 22, 2016 Permalink | Reply
    Tags: ars technica

    From ars technica: “Gravity doesn’t care about quantum spin”

    Ars Technica

    7/16/2016
    Chris Lee

    An atomic clock based on a fountain of atoms. NSF

    Physics, as you may have read before, is based around two wildly successful theories. On the grand scale, galaxies, planets, and all the other big stuff dance to the tune of gravity. But, like your teenage daughter, all the little stuff stares in bewildered embarrassment at gravity’s dancing. Quantum mechanics is the only beat the little stuff is willing to get down to. Unlike teenage rebellion, though, no one claims to understand what keeps relativity and quantum mechanics from getting along.

    Because we refuse to believe that these two theories are separate, physicists are constantly trying to find a way to fit them together. Part and parcel of creating a unifying model is finding evidence of a connection between gravity and quantum mechanics. For example, showing that the gravitational force experienced by a particle depended on the particle’s internal quantum state would be a great sign of a deeper connection between the two theories. The latest attempt to show this uses a new way to look for coupling between gravity and the quantum property called spin.

    I’m free, free fallin’

    One of the cornerstones of general relativity is that objects move in straight lines through a curved spacetime. So, if two objects have identical masses and are in free fall, they should follow identical trajectories. And this is what we have observed since the time of Galileo (although I seem to recall that Galileo’s public experiment came to an embarrassing end due to differences in air resistance).

    The quantum state of an object doesn’t seem to make a difference. However, if there is some common theory that underlies general relativity and quantum mechanics, at some level, gravity probably has to act differently on different quantum states.

    To see this effect means measuring very tiny differences in free fall trajectories. Until recently, that was close to impossible. But it may be possible now thanks to the realization of Bose-Einstein condensates. The condensates themselves don’t necessarily provide the tools we need, but the equipment used to create a condensate allows us to manipulate clouds of atoms with exquisite precision. This precision is the basis of a new free fall test from researchers in China.

    Surge like a fountain, like tide

    The basic principle behind the new work is simple. If you want to measure acceleration due to gravity, you create a fountain of atoms and measure how long it takes for an atom to travel from the bottom of the fountain to the top and back again. As long as you know the starting velocity of the atoms and measure the time accurately, then you can calculate the force due to gravity. To do that, you need to impart a well-defined momentum to the cloud at a specific time.
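
    A minimal sketch of that arithmetic in Python, with assumed numbers rather than anything from the actual experiment: atoms launched straight up at speed v0 take t = 2·v0/g to come back, so timing the round trip gives g.

    def g_from_fountain(v0_m_per_s, round_trip_s):
        """Infer gravitational acceleration from launch speed and round-trip time."""
        return 2.0 * v0_m_per_s / round_trip_s

    # Assumed example: a 4 m/s launch and a measured 0.8157 s round trip.
    print(g_from_fountain(4.0, 0.8157))  # ~9.807 m/s^2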

    Quantum superposition

    Superposition is nothing more than addition for waves. Let’s say we have two sets of waves that overlap in space and time. At any given point, a trough may line up with a peak, their peaks may line up, or anything in between. Superposition tells us how to add up these waves so that the result reconstructs the patterns that we observe in nature.

    Then you need to measure the transit time. This is done using the way quantum states evolve in time, which also means you need to prepare the cloud of atoms in a precisely defined quantum state.

    If I put the cloud into a superposition of two states, then that superposition will evolve in time. What do I mean by that? Let’s say that I set up a superposition between states A and B. Now, when I take a measurement, I won’t get a mixture of A and B; I only ever get A or B. But the probability of obtaining A (or B) oscillates in time. So at one moment, the probability might be 50 percent, a short time later it is 75 percent, then a little while later it is 100 percent. Then it starts to fall until it reaches zero and then it starts to increase again.

    This oscillation has a regular period that is defined by the environment. So, under controlled circumstances, I set the superposition state as the atomic cloud drifts out the top of the fountain, and at a certain time later, I make a measurement. Each atom reports either state A or state B. The ratio of the amount of A and B tells me how much time has passed for the atoms, and, therefore, what the force of gravity was during their time in the fountain.
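
    Here is a toy version of that idea in Python, assuming a simple cos² oscillation and made-up numbers; it only illustrates the logic, not the analysis actually used in the paper.

    import numpy as np

    def p_state_a(t_s, f_hz):
        """Probability of finding an atom in state A, assuming P = cos^2(pi*f*t)."""
        return np.cos(np.pi * f_hz * t_s) ** 2

    f = 10.0         # assumed oscillation frequency, Hz
    t_true = 0.020   # assumed elapsed time, s (kept within the first half-period)
    measured_fraction = p_state_a(t_true, f)

    # Invert the known oscillation to recover how much time has passed:
    t_recovered = np.arccos(np.sqrt(measured_fraction)) / (np.pi * f)
    print(round(measured_fraction, 3), round(t_recovered, 4))  # 0.655 0.02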

    Once you have that working, the experiment is dead simple (he says in the tone of someone who is confident he will never have to actually build the apparatus or perform the experiment). Essentially, you take your atomic cloud and choose a couple of different atomic states. Place the atoms in one of those states and measure the free fall time. Then repeat the experiment for the second state. Any difference, in this ideal case, is due to gravity acting differently on the two quantum states. Simple, right?

    Practically speaking, this is kind-a-sorta really, really difficult.

    I feel like I’m spinnin’

    Obviously, you have to choose a pair of quantum states to compare. In the case of our Chinese researchers, they chose to test for coupling between gravity and a particle’s intrinsic angular momentum, called spin. This choice makes sense because we know that in macroscopic bodies, the rotation of a body (in other words, its angular momentum) modifies the local gravitational field. So, depending on the direction and magnitude of the angular momentum, the local gravitational field will be different. Maybe we can see this classical effect in quantum states, too?

    However, quantum spin is, confusingly, not related to the rotation of a body. Indeed, if you calculate how fast an electron needs to rotate in order to generate its spin angular momentum, you’ll come up with a ridiculous number (especially if you take the idea of the electron being a point particle seriously). Nevertheless, particles like electrons and protons, as well as composite particles like atoms, have intrinsic spin angular momentum. So, an experiment comparing the free fall of particles with the same spin, but oriented in different directions, makes perfect sense.

    Except for one thing: magnetic fields. The spin of a particle is also coupled to its magnetic moment. That means that if there are any changes in the magnetic field around the atom fountain, the atomic cloud will experience a force due to these variations. Since the researchers want to measure a difference between two spin states that have opposite orientations, this is bad. They will always find that the two spin populations have different fountain trajectories, but the difference will largely be due to variations in the magnetic field, rather than to differences in gravitational forces.

    So the story of this research is eliminating stray magnetic fields. Indeed, the researchers spend most of their paper describing how they test for magnetic fields before using additional electromagnets to cancel out stray fields. They even invented a new measurement technique that partially compensates for any remaining variations in the magnetic fields. To a large extent, the researchers were successful.

    So, does gravity care about your spin?

    Short answer: no. The researchers obtained a null result, meaning that, to within the precision of their measurements, there was no detectable difference in atomic free falls when atoms were in different spin states.

    But this is really just the beginning of the experiment. We can expect even more sensitive measurements from the same researchers within the next few years. And the strategies that they used to increase accuracy can be transferred to other high-precision measurements.

    Physical Review Letters, 2016, DOI: 10.1103/PhysRevLett.117.023001

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 2:16 pm on November 9, 2015 Permalink | Reply
    Tags: ars technica

    From ars technica: “Finally some answers on dark energy, the mysterious master of the Universe” 

    Ars Technica

    Nov 5, 2015
    Eric Berger

    U Texas McDonald Observatory Hobby-Eberly 9.1 meter Telescope
    U Texas McDonald Observatory Hobby-Eberly 9.1 meter Telescope Interior

    Unless you’re an astrophysicist, you probably don’t sit around thinking about dark energy all that often. That’s understandable, as dark energy doesn’t really affect anyone’s life. But when you stop to ponder dark energy, it’s really rather remarkable. This mysterious force, which makes up the bulk of the Universe but was only discovered 17 years ago, somehow is blasting the vast cosmos apart at ever-increasing rates.

    Astrophysicists do sit around and think about dark energy a lot. And they’re desperate for more information about it as, right now, they have essentially two data points. One shows the Universe in its infancy, at 380,000 years old, thanks to observations of the cosmic microwave background radiation. And by pointing their telescopes into the sky and looking about, they can measure the present expansion rate of the Universe.

    But astronomers would desperately like to know what happened in between the Big Bang and now. Is dark energy constant, or is it accelerating? Or, more crazily still, might it be about to undergo some kind of phase change and turn everything into ice, as ice-nine did in Kurt Vonnegut’s novel Cat’s Cradle? Probably not, but really, no one knows.

    The Plan

    Fortunately astronomers in West Texas have a $42 million plan to use the world’s fourth largest optical telescope to get some answers. Until now, the 9-meter Hobby-Eberly telescope at McDonald Observatory has excelled at observing very distant objects, but this has necessitated a narrow field of view. However, with a clever new optical system, astronomers have expanded the telescope’s field of view by a factor of 120, to nearly the size of a full Moon. The next step is to build a suite of spectrographs and, using 34,000 optical fibers, wire them into the focal plane of the telescope.

    “We’re going to make this 3-D map of the Universe,” Karl Gebhardt, a professor of astronomy at the University of Texas at Austin, told Ars. “On this giant map, for every image that we take, we’ll get that many spectra. No other telescope can touch this kind of information.”

    With this detailed information about the location and age of objects in the sky, astronomers hope to gain an understanding of how dark energy affected the expansion rate of the Universe 5 billion to 10 billion years ago. There are many theories about what dark energy might be and how the expansion rate has changed over time. Those theories make predictions that can now be tested with actual data.

    In Texas, there’s a fierce sporting rivalry between the Longhorns in Austin and Texas A&M Aggies in College Station. But in the field of astronomy and astrophysics the two universities have worked closely together. And perhaps no one is more excited than A&M’s Nick Suntzeff about the new data that will come down over the next four years from the Hobby-Eberly telescope.

    Suntzeff is best known for co-founding the High-Z Supernova Search Team along with Brian Schmidt, one of two research groups that discovered dark energy in 1998. This startling observation that the expansion rate of the Universe was in fact accelerating upended physicists’ understanding of the cosmos. They continue to grapple with understanding the mysterious force—hence the enigmatic appellation dark energy—that could be causing this acceleration.

    Dawn of the cosmos

    When scientists observe quantum systems, they see tiny energy fluctuations. They think these same fluctuations occurred at the very dawn of the Universe, Suntzeff explained to Ars. And as the early Universe expanded, so did these fluctuations. Then, at about 1 second, when the temperature of the Universe was about 10 billion Kelvin, these fluctuations were essentially imprinted onto dark matter. From then on, this dark matter (whatever it actually is) responded only to the force of gravity.

    Meanwhile, normal matter and light were also filling the Universe, and they were more strongly affected by electromagnetism than gravity. As the Universe expanded, this light and matter rippled outward at the speed of sound. Then, at 380,000 years, Suntzeff said these sound waves “froze,” leaving the cosmic microwave background.

    These ripples, frozen with respect to one another, expanded outward as the Universe likewise grew. They can still be faintly seen today—many galaxies are spaced apart by about 500 million light years, the size of the largest ripples. But what happened between this freezing long ago, and what astronomers see today, is a mystery.

    The Texas experiment will allow astronomers to fill in some of that gap. They should be able to tease apart the two forces acting upon the expansion of the Universe. There’s the gravitational clumping, due to dark matter, which is holding back expansion. Then there’s the acceleration due to dark energy. Because the Universe’s expansion rate is now accelerating, dark energy appears to be dominating now. But is it constant? And when did it overtake dark matter’s gravitational pull?

    “I like to think of it sort of as a flag,” Suntzeff said. “We don’t see the wind, but we know the strength of the wind by the way the flag ripples in the breeze. The same with the ripples. We don’t see dark energy and dark matter, but we see how they push and pull the ripples over time, and therefore we can measure their strengths over time.”

    The universe’s end?

    Funding for the $42 million experiment at McDonald Observatory, called HETDEX for Hobby-Eberly Telescope Dark Energy Experiment, will come from three different sources: one-third from the state of Texas, one-third from the federal government, and a third from private foundations.

    The telescope is in the Davis Mountains of West Texas, which provide some of the darkest and clearest skies in the continental United States. The upgraded version took its first image on July 29. Completing the experiment will take three or four years, but astronomers expect to have a pretty good idea about their findings within the first year.

    If dark energy is constant, then our Universe has a dark, lonely future, as most of what we can now observe will eventually disappear over the horizon at speeds faster than that of light. But if dark energy changes over time, then it is hard to know what will happen, Suntzeff said. One unlikely scenario—among many, he said—is a phase transition. Dark energy might go through some kind of catalytic change that would propagate through the Universe. Then it might be game over, which would be a nice thing to know about in advance.

    Or perhaps not.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 9:27 am on October 9, 2015 Permalink | Reply
    Tags: ars technica, Space Shuttle and Centaur

    From ars technica: “A deathblow to the Death Star: The rise and fall of NASA’s Shuttle-Centaur” 

    Ars Technica

    Oct 9, 2015
    Emily Carney

    In January 1986, astronaut Rick Hauck approached his STS-61F crew four months before their mission was scheduled to launch. The shuttle Challenger was set to deploy the Ulysses solar probe on a trajectory to Jupiter, utilizing a liquid-fueled Centaur G-Prime stage. While an upcoming launch should be an exciting time for any astronaut, Hauck’s mood was anything but optimistic. As he spoke to his crew, his tone was grave. Recounting the moment in a 2003 Johnson Space Center (JSC) oral history, he couldn’t recall his exact words, but the message remained clear.

    “NASA is doing business different from the way it has in the past. Safety is being compromised, and if any of you want to take yourself off this flight, I will support you.”

    Hauck wasn’t just spooked by the lax approach that eventually led to the Challenger explosion. Layered on top of that concern was the planned method of sending Ulysses away from Earth. The Centaur was fueled by a combustible mix of liquid hydrogen and oxygen, and it would be carried to orbit inside the shuttle’s payload bay.

    The unstoppable shuttle

    Hauck’s words may have seemed shocking, but they were prescient. In the early 1980s, the space shuttle seemed unstoppable. Technically called the US Space Transportation System program, the shuttle was on the verge of entering what was being called its “Golden Age” in 1984. The idea of disaster seemed remote. As experience with the craft grew, nothing seemed to have gone wrong (at least nothing the public was aware of). It seemed nothing could go wrong.

    In 1985, the program enjoyed a record nine successful spaceflights, and NASA was expected to launch a staggering 15 missions in 1986. The manifest for 1986 was beyond ambitious, including but not limited to a Department of Defense mission into a polar orbit from Vandenberg Air Force Base, the deployment of the Hubble telescope to low Earth orbit, and the delivery of two craft destined for deep space: Galileo and Ulysses.

    The space shuttle had been touted as part space vehicle and part “cargo bus,” something that would make traveling to orbit routine. The intense schedule suggested it would finally fulfill the promise that had faded during the wait for its long-delayed maiden flight in April 1981. As astronaut John Young, who commanded that historic first flight, stated in his book Forever Young, “When we finished STS-1, it was clear we had to make the space shuttle what we hoped it could be—a routine access-to-space vehicle.”

    To meet strict deadlines, however, safety was starting to slide. Following the last test flight (STS-4, completed in July 1982), crews no longer wore pressure suits during launch and reentry, making shuttle flights look as “routine” as airplane rides. The shuttle had no ejection capability at the time, so its occupants were committed to the launch through the bitter end.

    Yet by mid-1985, the space shuttle program had already experienced several near-disasters. Critics of the program had long fretted over the design of the system, which boasted two segmented solid rocket boosters and an external tank. The boosters were already noted to have experienced “blow by” in the O-rings of their joints, which could leak hot exhaust out the sides of the structure. It was an issue that would later come to the forefront in a horrific display during the Challenger disaster.

    But there were other close calls that the public was largely unaware of. In late July 1985, the program had experienced an “Abort to Orbit” condition during the launch of STS-51F, commanded by Gordon Fullerton. A center engine had failed en route to space, which should normally call for the shuttle’s immediate return. Instead, a quick call was made by Booster Systems Engineer Jenny Howard to “inhibit main engine limits,” which may have prevented another engine from failing, possibly saving the orbiter Challenger and its seven-man crew. (The mission did reach orbit, but a lower one than planned.)


    Howard makes the call to push the engines past their assigned limits.

    People who followed things closely recognized the problems. The “Space Shuttle” section of Jane’s Spaceflight Directory 1986 (which was largely written the year before) underscored the risky nature of the early program: “The narrow safety margins and near disasters during the launch phase are already nearly forgotten, save by those responsible for averting actual disaster.”

    The push for Shuttle-Centaur

    All of those risks existed when the shuttle was simply carrying an inert cargo to orbit. Shuttle-Centaur, the high-energy solution intended to propel Galileo and Ulysses into space, was anything but inert.

    Shuttle-Centaur was born from a desire to send heavier payloads on a direct trajectory to deep space targets from America’s flagship space vehicles.

    Centaur-2A upper stage of an Atlas IIA

    The Centaur rocket was older than NASA itself. According to a 2012 NASA History article, the US Air Force teamed up with General Dynamics/Astronautics Corp. to develop a rocket stage that could be carried to orbit and then ignite to propel heavier loads into space. In 1958 the proposal was accepted by the government’s Advanced Research Projects Agency, and the upper stage that would become Centaur began its development.

    The first successful flight of a Centaur (married to an Atlas booster) was made on November 27, 1963. While the launch vehicle carried no payload, it did demonstrate that a liquid hydrogen/liquid oxygen upper stage worked. In the years since, the Centaur has helped propel a wide variety of spacecraft to deep-space destinations. Both Voyagers 1 and 2 received a much-needed boost from their Centaur stages en route to the Solar System’s outer planets and beyond.

    Voyager 1

    General Dynamics was tasked with adapting the rocket stage so it could be taken to orbit on the shuttle. A Convair/General Dynamics poster from this period read enthusiastically, “In 1986, we’re going to Jupiter…and we need your help.” The artwork on the poster appeared retro-futuristic, boasting a spacecraft propelled by a silvery rocket stage that looked like something out of a sci-fi fantasy novel or Omni magazine. In the distance, a space shuttle—payload bay doors open—hovered over an exquisite Earth-scape.

    General Dynamics’ artistic rendering of Shuttle-Centaur, with optimistic text about a 1986 target date for launch.
    The San Diego Air & Space Museum Archives on Flickr.

    A 1984 paper titled Shuttle Centaur Project Perspective, written by Edwin T. Muckley of NASA’s Lewis (now Glenn) Research Center, suggested that Jupiter would be the first of many deep-space destinations. Muckley wrote optimistically of the technology: “It’s expected to meet the demands of a wide range of users including NASA, the DOD, private industry, and the European Space Agency (ESA).”

    The paper went on to describe the two different versions of the liquid-fueled rocket, meant to be cradled inside the orbiters’ payload bays. “The initial version, designated G-Prime, is the larger of the two, with a length of 9.1 m (30 ft.). This vehicle will be used to launch the Galileo and International Solar Polar Missions (ISPM) [later called Ulysses] to Jupiter in May 1986.”

    According to Muckley, the shorter version, Centaur G, was to be used to launch DOD payloads, the Magellan spacecraft to Venus, and TDRSS [tracking and data relay satellite system] missions. He added optimistically, “…[It] is expected to provide launch services well into the 1990s.”

    Magellan

    Dennis Jenkins’ book Space Shuttle: The History of the National Space Transportation System, the First 100 Missions discussed why Centaur became seen as desirable for use on the shuttle in the 1970s and early 1980s. A booster designed specifically for the shuttle called the Inertial Upper Stage (developed by Boeing) did not have enough power to directly deliver deep-space payloads (this solid stage would be used for smaller satellites such as TDRSS hardware). As the author explained, “First and most important was that Centaur was more powerful and had the ability to propel a payload directly to another planet. Second, Centaur was ‘gentler’—solid rockets had a harsh initial thrust that had the potential to damage the sensitive instruments aboard a planetary payload.”

    However, the Centaur aboard the shuttle also had its drawbacks. First, it required changes in the way the shuttle operated. Crews needed to be reduced to four in order to fit a heavier payload and a perilously thin-skinned, liquid-fueled rocket stage inside a space shuttle’s payload bay. And the added weight meant that the shuttle could only be sent to its lowest possible orbit.

    In addition, during launch, the space shuttles’ main engines (SSMEs) would be taxed unlike any other time in program history. Even with smaller crews and the mid-deck food-prep galley removed, the shuttle’s main engines would have to be throttled up to an unheard-of 109-percent thrust level to deliver the shuttle, payload, and its crew to orbit. The previous “maximum” had been 104 percent.

    But the risks of the shuttle launch were only a secondary concern. “The perceived advantage of the IUS [Inertial Upper Stage] over the Centaur was safety—LH2 [liquid hydrogen] presented a significant challenge,” Jenkins noted. “Nevertheless, NASA decided to accept the risk and go with the Centaur.”

    While a host of unknowns remained concerning launching a volatile, liquid-fueled rocket stage on the back of a space shuttle armed with a liquid-filled tank and two solid rocket boosters, NASA and its contractors galloped full speed toward a May 1986 launch deadline for both spacecraft. The project would be helmed by NASA’s Lewis. It was decided that the orbiters Challenger and Discovery would be modified to carry Centaur (the then-new orbiter Atlantis was delivered with Centaur capability) with launch pad modifications taking place at the Kennedy Space Center and Vandenberg.

    The “Death Star” launches

    The launch plan was dramatic: two shuttles, Challenger and Atlantis, were to be on Pads 39B and 39A in mid-1986, carrying Ulysses and Galileo, each linked to the Shuttle-Centaur. The turnaround was also to be especially quick: these launches would take place within five days of one another.

    The commander of the first shuttle mission, John Young, was known for his laconic sense of humor. He began to refer to missions 61F (Ulysses) and 61G (Galileo) as the “Death Star” missions. He wasn’t entirely joking.

    The thin-skinned Centaur posed a host of risks to the crews. In an AmericaSpace article, space historian Ben Evans pointed out that gaseous hydrogen would periodically have to be “bled off” to keep its tank within pressure limits. However, if too much hydrogen was vented, the payloads would not have enough fuel to make their treks to Jupiter. Time was of the essence, and the crews would be under considerable stress. Their first deployment opportunities would occur a mere seven hours post-launch, and three deployment “windows” were scheduled.

    The venting itself posed its own problems. There was a concern about the position of the stage’s vents, which were located near the exhaust ports for the shuttles’ Auxiliary Power Units—close enough that some worried venting could cause an explosion.

    Another big concern involved what would happen if the shuttle had to dump the stage’s liquid fuel prior to performing a Return-to-Launch-Site (RTLS) abort or a Transatlantic (TAL) abort. There was worry that the fuel would “slosh” around in the payload bay, rendering the shuttle uncontrollable. (There were also worries about the feasibility of these abort plans with a normal shuttle cargo, but that’s another story.)

    These concerns filtered down to the crews. According to Evans, astronaut John Fabian was originally meant to be on the crew of 61G, but he resigned partly due to safety concerns surrounding Shuttle-Centaur. “He spent enough time with the 61G crew to see a technician clambering onto the Centaur with an untethered wrench in his back pocket and another smoothing out a weld, then accidentally scarring the booster’s thin skin with a tool,” the historian wrote. “In Fabian’s mind, it was bad enough that the Shuttle was carrying a volatile booster with limited redundancy, without adding new worries about poor quality control oversight and a lax attitude towards safety.”

    Astronauts John Fabian and Dave Walker pose in front of what almost became their “ride” during a Shuttle-Centaur rollout ceremony in mid-1985.
    NASA/Glenn Research Center

    STS-61F’s commander, Hauck, had also developed misgivings about Shuttle-Centaur. In the 2003 JSC oral history, he bluntly discussed the unforgiving nature of his mission:

    “…[If] you’ve got a return-to-launch-site abort or a transatlantic abort and you’ve got to land, and you’ve got a rocket filled with liquid oxygen, liquid hydrogen in the cargo bay, you’ve got to get rid of the liquid oxygen and liquid hydrogen, so that means you’ve got to dump it while you’re flying through this contingency abort. And to make sure that it can dump safely, you need to have redundant parallel dump valves, helium systems that control the dump valves, software that makes sure that contingencies can be taken care of. And then when you land, here you’re sitting with the Shuttle-Centaur in the cargo bay that you haven’t been able to dump all of it, so you’re venting gaseous hydrogen out this side, gaseous oxygen out that side, and this is just not a good idea.”

    Even as late as January 1986, Hauck and his crew were still working out issues with the system’s helium-actuated dump valves. He related, “…[It] was clear that the program was willing to compromise on the margins in the propulsive force being provided by the pressurized helium… I think it was conceded this was going to be the riskiest mission the Shuttle would have flown up to that point.”

    Saved by disaster

    Within weeks, the potential crisis was derailed dramatically by an actual crisis, one that was etched all over the skies of central Florida on an uncharacteristically cold morning. On January 28, 1986, Challenger—meant to hoist Hauck, his crew, Ulysses, and its Shuttle-Centaur in May—was destroyed shortly after its launch, its crew of seven a total loss. On that ill-fated mission, safety had been dangerously compromised, with the shuttle launching following a brutal cold snap that made the boosters’ o-rings inflexible and primed to fail.

    It became clear NASA had to develop a different attitude toward risk management. Keeping risks as low as possible meant putting Shuttle-Centaur on the chopping block. In June 1986, a Los Angeles Times article announced the death-blow to the Death Star.

    “The National Aeronautics and Space Administration Thursday canceled development of a modified Centaur rocket that it had planned to carry into orbit aboard the space shuttle and then use to fire scientific payloads to Jupiter and the Sun. NASA Administrator James C. Fletcher said the Centaur ‘would not meet safety criteria being applied to other cargo or elements of the space shuttle system.’ His decision came after urgent NASA and congressional investigations of potential safety problems following the Jan. 28 destruction of the shuttle Challenger 73 seconds after launch.”

    Astronauts Rick Hauck, John Fabian, and Dave Walker pose by a Shuttle-Centaur stage in mid-1985 during a rollout ceremony. Hauck and Fabian both had misgivings about Shuttle-Centaur. The San Diego Air & Space Museum Archives on Flickr.

    After a long investigation and many ensuing changes, the space shuttle made its return to flight with STS-26 (helmed by Hauck) in September 1988. Discovery and the rest of the fleet boasted redesigned solid rocket boosters with added redundancy. In addition, crews had a “bailout” option if something went wrong during launch, and they wore pressure suits during ascent and reentry for the first time since 1982.

    Galileo was successfully deployed from Atlantis (STS-34) using an IUS in October 1989, while Ulysses utilized an IUS and PAM-S (Payload Assist Module) to begin its journey following its deployment from Discovery (STS-41) in October 1990.

    Galileo

    As for Shuttle-Centaur? Relegated to the history books as a “what if,” a model now exists at the US Space and Rocket Center in Huntsville, Alabama. It still looks every inch the shiny, sci-fi dream depicted in posters and artists’ renderings back in the 1980s. However, this “Death Star” remains on terra firma, representing what Jim Banke described as the “naive arrogance” of the space shuttle’s Golden Age.

    Additional sources

    Hitt, D., & Smith, H. (2014). Bold they rise: The space shuttle early years, 1972 – 1986. Lincoln, NE: University of Nebraska Press.
    Jenkins, D. R. (2012). Space shuttle: The history of the national space transportation system, the first 100 missions. Cape Canaveral, FL: Published by author.
    Turnill, R. (Ed.). (1986). Jane’s spaceflight directory (2nd ed.). London, England: Jane’s Publishing Company Limited.
    Young, J. W., & Hansen, J. R. (2012). Forever young: A life of adventure in air and space. Gainesville, FL: University Press of Florida.
    Dawson, V., & Bowles, M.D. (2004). Taming liquid hydrogen: The Centaur upper stage rocket, 1958 – 2002. Washington, D.C.: National Aeronautics and Space Administration.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 10:41 am on August 26, 2015 Permalink | Reply
    Tags: ars technica

    From ars technica: “Quantum dots may be key to turning windows into photovoltaics” 

    Ars Technica

    Aug 26, 2015
    John Timmer

    Some day, this might generate electricity. Flickr user Ricardo Wang

    While wind may be one of the most economical power sources out there, photovoltaic solar energy has a big advantage: it can go small. While wind gets cheaper as turbines grow larger, the PV hardware scales down to fit wherever we have infrastructure. In fact, simply throwing solar on our existing building stock could generate a very large amount of carbon-free electricity.

    But that also highlights solar’s weakness: we have to install it after the infrastructure is in place, and that installation adds considerably to its cost. Now, some researchers have come up with some hardware that could allow photovoltaics to be incorporated into a basic building component: windows. The solar windows would filter out a small chunk of the solar spectrum and convert roughly a third of it to electricity.

    As you’re probably aware, photovoltaic hardware has to absorb light in order to work, and a typical silicon panel appears black. So, to put any of that hardware (and its supporting wiring) into a window that doesn’t block the view is rather challenging. One option is to use materials that only capture a part of the solar spectrum, but these tend to leave the light that enters the building with a distinctive tint.

    The new hardware takes a very different approach. The entire window is filled with a diffuse cloud of quantum dots that absorb almost all of the solar spectrum. As a result, the “glass” portion of things simply dims the light passing through the window slightly. (The quantum dots are actually embedded in a transparent polymer, but that could be embedded in or coat glass.) The end result is what optics people call a neutral density filter, something often used in photography. In fact, tests with the glass show that the light it transmits meets the highest standards for indoor lighting.

    Of course, simply absorbing the light doesn’t help generate electricity. And, in fact, the quantum dots aren’t used to generate the electricity. Instead, the authors generated quantum dots made of copper, indium, and selenium, covered in a layer of zinc sulfide. (The authors note that there are no toxic metals involved here.) These dots absorb light across a broad band of spectrum, but re-emit it at a specific wavelength in the infrared. The polymer they’re embedded in acts as a waveguide to take many of the photons to the thin edge of the glass.

    And here’s where things get interesting: the wavelength of infrared the quantum dots emit happens to be very efficiently absorbed by a silicon photovoltaic device. So, if you simply place these devices along the edges of the glass, they’ll be fed a steady diet of photons.

    The authors model the device’s behavior and find that nearly half the infrared photons end up being fed to the photovoltaic devices (equal amounts get converted to heat or escape the window entirely). It’s notable that the devices are small, though (about 12cm squares)—larger panes would presumably allow even more photons to escape.

    The authors tested a few of the devices, one that filtered out 20 percent of the sunlight and one that only captured 10 percent. The low-level filter sent about one percent of the incident light to the sides, while the darker one sent over three percent.
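
    For a rough sense of scale, here is a back-of-envelope estimate in Python combining the article’s figures with an assumed efficiency for the edge-mounted silicon cells; the PV efficiency is my assumption, not a number from the paper.

    to_edges_fraction = 0.03      # a bit over 3% of incident light reached the edges of the darker window
    assumed_pv_efficiency = 0.20  # assumed efficiency of the edge-mounted silicon cells (not from the paper)

    print(f"~{to_edges_fraction * assumed_pv_efficiency:.1%} of incident sunlight becomes electricity")  # ~0.6%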

    There will be losses in the conversion to electricity as well, so this isn’t going to come close to competing with a dedicated panel on a sunny roof. Which is fine, because it’s simply not meant to. Any visit to a major city will serve as a good reminder that we’re regularly building giant walls of glass that currently reflect vast amounts of sunlight, blinding or baking (or both!) the city’s inhabitants on a sunny day. If we could cheaply harvest a bit of that instead, we’re ahead of the game.

    Nature Nanotechnology, 2015. DOI: 10.1038/NNANO.2015.178 (About DOIs).

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 6:54 am on July 28, 2015 Permalink | Reply
    Tags: ars technica

    From Ars Technica: “Inside the world’s quietest room” 

    Ars Technica

    Jul 28, 2015
    Sebastian Anthony

    In a hole, on some bedrock a few miles outside central Zurich, there lived a spin-polarised scanning electron microscope. Not a nasty, dirty, wet hole: it was a nanotech hole, and that means quiet. And electromagnetically shielded. And vibration-free. And cool.

    When you want to carry out experiments at the atomic scale—when you want to pick up a single atom and move it to the other end of a molecule—it requires incredibly exacting equipment. That equipment, though, is worthless without an equally exacting laboratory to put it in. If you’re peering down the (figurative) barrel of a microscope at a single atom, you need to make sure there are absolutely no physical vibrations at all, or you’ll just get a blurry image. Similarly, atoms really don’t like to sit still: you don’t want to spend a few hours setting up a transmission electron microscope (TEM), only to have a temperature fluctuation or EM field imbue the atoms with enough energy to start jumping around of their own accord.

    One solution, as you have probably gathered from the introduction to this story, is to build a bunker deep underground, completely from scratch, with every facet of the project simulated, designed, and built with a singular purpose in mind: to block out the outside world entirely. That’s exactly what IBM Research did back in 2011, when it opened the Binnig and Rohrer Nanotechnology Center.

    The Center, which is located just outside Zurich in Rüschlikon, cost about €80 million (£60 million, $90 million) to build, which includes equipment costs of around €27 million (£20 million, $30 million). IBM constructed and owns the building, but IBM Research and ETH Zurich have shared use of the building and equipment. ETH and IBM collaborate on a lot of research, especially on nanoscale stuff.

    The entrance hall to the Binnig and Rohrer Nanotechnology Center.

    Deep below the Center there are six quiet rooms—or, to put it another way, rooms that are almost completely devoid of any kind of noise, from acoustic waves to physical vibrations to electromagnetic radiation. Each room is dedicated to a different nanometre-scale experiment: in one room, I was shown a Raman microscope, which is used for “fingerprinting” molecules; in another, a giant TEM, which is like an optical microscope, but it uses a beam of electrons instead of light to resolve details as small as 0.09nm. Every room is eerily deadened and quiet, in stark contrast to the hulking silhouette of the multi-million-pound apparatus sitting in the middle of it. After investigating a few rooms, I notice that my phone is uncharacteristically lifeless. “That’s the nickel-iron box that encases every room,” my guide informs me.

    It’s impossible to go into every design feature of the noise-free rooms, but I’ll run through the most important and the most interesting. For a start, the rooms are built directly on the bedrock, significantly reducing vibrations from a nearby road and an underground train. Then, the walls of each room are clad with the aforementioned nickel-iron alloy, screening against most external electromagnetic fields, including those produced by experiments in nearby rooms. There are dozens of external sources of EM radiation, but the strongest are generated by mobile phone masts, overhead power lines, and the (electric) underground train, all of which would play havoc with IBM’s nanoscale experiments.

    Internally, most rooms are divided in two: there’s a small antechamber, which is where the human controller sits, and then the main space with the actual experiment/equipment. Humans generate around 100 watts of heat, and not inconsiderable amounts of noise and vibration, so it’s best to keep them away from experiments while they’re running.

    To provide even more isolation, there are two separate floors in each room: one suspended floor for the scientists to walk on, and another separate floor that only the equipment sits on. The latter isn’t actually a floor: it’s a giant (up-to-68-ton) concrete block that rests on active air suspension. Any vibrations that make it through the bedrock, or that come from large trucks rumbling by, are damped in real time by the air suspension.

    We’re not done yet! To minimise acoustic noise (i.e. sound), the rooms are lined with acoustically absorbent material. Furthermore, if an experiment has noisy ancillary components (a vacuum pump, electrical transformer, etc.), they are placed in another room away from the main apparatus, so that they’re physically and audibly isolated.

    And finally, there’s some very clever air conditioning that’s quiet, generates minimal air flux, and is capable of keeping the temperature in the rooms very stable. In every room, the suspended floor (the human-designated bit) is perforated with holes. Cold air slowly seeps out of these holes, rises to the ceiling, and is then sucked out. The air flow was hardly noticeable, except on my ankles: in a moment of unwarranted hipness earlier that morning, I had decided to wear boat shoes without socks.

    That’s about it for the major, physical features of IBM Research’s quiet rooms, but there are two other bits that are pretty neat. First, the whole place is lit with LEDs, driven by a DC power supply that is far enough away that its EM emissions don’t interfere. Second, each room is equipped with three pairs of Helmholtz coils, oriented so that they cover the X, Y, and Z axes. These coils are tuned to cancel out any residual magnetic fields (the Earth’s field, for example) that haven’t already been damped by the various other shields.
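
    For a sense of scale, here is a minimal sketch of how a Helmholtz pair cancels an ambient field. The coil radius, turn count, and 50 µT target below are my own illustrative numbers, not IBM's actual design; the point is just the standard field formula for an ideal pair and the modest current it implies.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A


def helmholtz_field(n_turns, current, radius):
    """Axial field (tesla) at the centre of an ideal Helmholtz pair."""
    return (4 / 5) ** 1.5 * MU0 * n_turns * current / radius


def nulling_current(b_ambient, n_turns, radius):
    """Current needed for the pair to produce an equal-and-opposite field."""
    return b_ambient * radius / ((4 / 5) ** 1.5 * MU0 * n_turns)


# Hypothetical numbers: cancel a 50 microtesla field (roughly the Earth's)
# with a 1 m radius pair of 100-turn coils.
i_null = nulling_current(50e-6, n_turns=100, radius=1.0)
print(f"{i_null:.2f} A")                  # ~0.56 A
print(helmholtz_field(100, i_null, 1.0))  # ~5e-05 T, matching the field to cancel
```

    An active compensation system would typically drive coils like these from magnetometer feedback so that slowly varying fields are nulled continuously, but the static formula above gives the order of magnitude.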

    3
    Labelled images of IBM’s noise-free labs, showing various important features

    Just how quiet are the rooms?

    So, after all that effort—each of the six rooms cost about €1.4 million to build, before equipment—just how quiet are the rooms below the Binnig and Rohrer Nanotechnology Center? Let’s break it down by the type of noise.

    The temperature at waist height in the rooms is set to 21 degrees Celsius, with a stability of 0.1°C per hour (i.e. it would take at least an hour for the temperature to drift from 21°C to 21.1°C).

    Electromagnetic fields produced by AC sources are damped to less than 3 nT (nanotesla)—or about 1,500 times weaker than the magnetic field produced by a fridge magnet. Fields from DC sources are damped to 20 nT.

    The vibration damping is probably the most impressive: for the equipment on the concrete pedestals, movement is reduced to less than 300nm/s at 1Hz, and less than 10nm/s above 100Hz. These are well below the specs of NIST’s Advanced Measurement Laboratory in Maryland, USA.

    Somewhat ironically for the world’s quietest rooms, the weakest link is acoustic noise. Even though the rooms themselves are shielded from outside noises, and the acoustically absorbent material does a good job of stopping internal sound waves dead, there’s no avoiding the quiet hum of some of the machines or the slight susurration of the ventilation system.

    The acoustic noise level in the rooms is always below 30 dB, dipping down as low as 21 dB if there isn’t a noisy experiment running. In human terms, the rooms were definitely quiet, but not so quiet that I could feel my sanity slipping away, or anything crazy like that. I was a little bit disappointed that I couldn’t hear my various internal organs shifting around, truth be told.

    Why did IBM build six of these rooms?

    “You’re only as good as your tools.” It’s a trite, overused statement, but in this case it perfectly describes why IBM and ETH Zurich spent so many millions of euros on the quiet rooms.

    Big machines like the TEM or spin-SEM need to be kept very still, with as little outside interference as possible: if you can’t stay within the machine’s nominal operational parameters, you’re not going to get much scientifically useful data out of it.

    On the flip side, however, if you surpass the machine’s optimal parameters—if you reduce the amount of vibration, noise, etc. beyond the “recommended specs”—then you can produce images and graphs with more resolution than even the manufacturer thought possible.

    IBM Research’s spin-SEM, for example, used to be located in the basement of the main building, on a normal concrete floor. After it was relocated to the quiet rooms, the lead scientist who uses the spin-SEM said its resolution is 2-3 times better (an utterly huge gain, in case you were wondering).

    For much the same reason, my guide said that “several tooling manufacturers” have contacted IBM Research to ask if they can test their equipment in the noise-free labs: they want to see just how well it will perform under near-perfect conditions.

    The best story, though, I saved for last. Back in the ’80s and ’90s, before the Center was built, the IBM researchers didn’t have a specialised nanotechnology facility: they just worked in their labs, which were usually located down in the basement. When Gerd Binnig and Heinrich Rohrer invented the scanning tunnelling microscope (STM)—an achievement that would later net them a Nobel prize—they worked in the dead of night to minimise vibrations from the nearby road and other outside interference.

    After the new building was finished—which, incidentally, is named after Binnig and Rohrer—my guide spoke to some IBM retirees who had just finished inspecting the noise-free rooms. “We wish we’d had these rooms back in the ’80s and ’90s, so that we didn’t have to work at 3am,” they said.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 11:36 am on July 14, 2015 Permalink | Reply
    Tags: ars technica, , , ,   

    From ars technica: “Huge population of “Ultra-Dark Galaxies” discovered” 

    Ars Technica
    ars technica

    Jul 11, 2015
    Xaq Rzetelny

    1

    About 321 million light-years away from us is the Coma Cluster, a massive grouping of more than 1,000 galaxies.

    2
    A Sloan Digital Sky Survey/Spitzer Space Telescope mosaic of the Coma Cluster in long-wavelength infrared (red), short-wavelength infrared (green), and visible light. The many faint green smudges are dwarf galaxies in the cluster.
    Credit: NASA/JPL-Caltech/GSFC/SDSS

    Some of its galaxies are a little unusual, however: they’re incredibly dim. So dim, in fact, that they have earned the title of “Ultra-Dark Galaxies” (UDGs). (The term is actually “Ultra-Diffuse Galaxies”, as their visible matter is thinly spread, though “ultra-dark” has been used by some sources and, let’s face it, sounds a lot better). These galaxies were first identified earlier this year, in a study that found 47 of them.

    Dimness isn’t necessarily unusual in a galaxy. Most of a galaxy’s light comes from its stars, so the smaller a galaxy is (and thus the fewer stars it has), the dimmer it will be. We’ve found many dwarf galaxies that are significantly dimmer than their larger cousins.

    What was so unusual about these 47 is that they’re not small enough to account for their dimness. In fact, many of them are roughly the size of our own Milky Way (ranging in diameter from 1.5 to 4.6 kiloparsecs, compared with the Milky Way’s roughly 3.6) but have only roughly one thousandth of the Milky Way’s stars. The authors of the recent study interpret this to mean that these galaxies must be even more dominated by dark matter than are ordinary galaxies.
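
    To put that dimness in rough numbers: assuming broadly similar stellar populations, so that total luminosity scales with the number of stars, a galaxy of similar size with a thousandth of the stars spreads a thousandth of the light over a comparable area. Here is a minimal sketch of the corresponding surface-brightness deficit (my own illustrative calculation, not from the paper).

```python
import math


def magnitude_difference(luminosity_ratio):
    """Magnitude difference for a given luminosity (or surface-brightness) ratio."""
    return 2.5 * math.log10(luminosity_ratio)


# ~1/1000 of the Milky Way's stars spread over a similar area:
print(magnitude_difference(1000))  # ~7.5 magnitudes per square arcsecond fainter
```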

    Finding the dark

    Intrigued by this tantalizing observation, a group of researchers constructed a more detailed study. Using archival data from the 8.2-meter Subaru telescope, they examined the sky region in question and discovered more UDGs—854 of them. Given that the images they were working with don’t cover the full cluster, the researchers estimated that there should be roughly 1,000 UDGs visible in the cluster altogether.

    NAOJ Subaru Telescope
    NAOJ Subaru Telescope interior
    NAOJ/Subaru

    There are a lot of small caveats to this conclusion. First of all, it’s not certain that all these galaxies are actually in the Coma Cluster, as some might just be along the same line of sight. However, it’s very likely that most of them do lie within the cluster. If the UDGs aren’t part of the cluster, then they’re probably a typical sample of what we’d observe in any patch of sky the same size as the Subaru observation. If that’s true, then the Universe has an absurdly high number of UDGs, and we should have seen more of them already.

    In this particular patch of sky, the concentration of UDGs is stronger towards the center of the Coma Cluster. While that doesn’t prove they’re part of the cluster, it’s strongly suggestive.

    Dark tug-of-war

    The dim galaxies’ relationship to the cluster probably has something to do with the mechanism that made the UDGs so dark in the first place. These galaxies would have had an ample supply of gas with which to make stars, so something must have prevented that from happening. This could be because the gas was somehow stripped from its galaxy or because something cut off a supply of gas from elsewhere.

    The dense environment in the cluster might be responsible for this. Gravitational interactions can pull the galaxies apart or strip them of their gas. These encounters can also deplete the gas near the galaxies, cutting off the inflow of new material. Since there are plenty of galaxies swirling around in the dense cluster, there are plenty of opportunities for this to happen to an unfortunate galaxy. The victims of these vampiric attacks might become dark, losing their ability to form stars. Neither living nor dead, these bodies still roam the Universe, perhaps waiting to strip unsuspecting galaxies of their gas.

    But unlike those bitten by movie vampires, the galaxies have a way to fight back. Rather than letting their blood (or in this case gas) get sucked away, the galaxy’s own gravity can hang onto it. And since most of a galaxy’s mass comes in the form of dark matter, the mysterious substance is pretty important in the tug-of-war over the galaxy’s star-forming material. The more dark matter a galaxy’s got, the more likely it will be able to hold onto its material when other galaxies pass by.

    “We believe that something invisible must be protecting the fragile star systems of these galaxies, something with a high mass,” said Jin Koda, an astrophysicist with Stony Brook University and the paper’s lead author. “That ‘something’ is very likely an excessive amount of dark matter.”

    The role dark matter plays in this struggle is useful for researchers here on Earth. If they want to find out how much dark matter one of these UDGs has, all they have to do is look at how much material the galaxy has held onto. While the results of an encounter between galaxies are complicated and dependent on many factors, this technique can at least give them a rough idea.

    Close to the core

    Near the core of the Coma Cluster, there’s a higher density of galaxies, and so many more opportunities for galaxies to lose their gas in encounters. Tidal forces are much stronger there, and as such it takes more dark matter to continue to hold onto material.

    The earlier study’s smaller sample of UDGs didn’t see any of them very close to the core, and it seemed safe to assume any potential UDGs deeper in had been ripped apart entirely. That provided a clue as to the amount of dark matter these galaxies contain: not enough to hold them together in the core. The authors of that study used this information to put an upper limit on the percentage of dark matter in the UDGs, but it was very high—up to 98 percent. But even galaxies with 98 percent dark matter shouldn’t survive in the rough center of the cluster.

    Thus, in the new study, researchers didn’t expect to find UDGs any closer to the core. But they did. These galaxies are less clearly resolved because, in the cluster’s center, more interference from background objects mucks up the view. But assuming they have been correctly identified, they’ve got even more dark matter than the previous estimate: greater than 99 percent. There can be no doubt these UDGs live up to their (unofficial) name, as everything else the UDG includes—stars, black holes, planets, gas—makes up less than one percent of the galaxy’s mass.

    Into the dark

    The discovery of so many dark galaxies in the Coma Cluster is a stride forward in the exploration of these objects. (Note: some of the objects included in the study had been previously discovered and were included in galaxy catalogs, but they were inconsistently classified, with many of them not identified as UDGs at all). The study’s large sample size, compared with the earlier one, strengthens its conclusions and also provides a more detailed picture of how these dark galaxies come to be.

    Many questions remain for future work to address, however. It’s still not known exactly how many of the objects identified in the study are actually part of the Coma Cluster, though it is likely that most are. Another question is whether the Coma Cluster’s UDG distribution is typical of other clusters, which will determine how well the findings of this study can be extrapolated elsewhere in the Universe. Modeling should also provide a more detailed look into the complex interactions of galaxies in the cluster, including the exact mechanisms responsible for the creation of UDGs.

    And crucially, UDGs offer an excellent opportunity to observe and study dark matter. Situations like this one, where dark matter’s interactions with baryonic (ordinary) matter can be observed, are ripe for study.

    “This discovery of dark galaxies may be the tip of the iceberg,” said Dr. Koda. “We may find more if we look for fainter galaxies embedded in a large amount of dark matter with the Subaru Telescope, and additional observations may expose this hidden side of the Universe.”

    The Astrophysical Journal Letters, 2015. DOI: 10.1088/2041-8205/807/1/L2 (About DOIs)

    Surprisingly, the institution responsible for this research is not named, nor are we given the names of the team members and their affiliations.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition
    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 10:55 am on March 18, 2015 Permalink | Reply
    Tags: ars technica, , ,   

    From ars technica: “Shining an X-Ray torch on quantum gravity” 

    Ars Technica
    ars technica

    Mar 17, 2015
    Chris Lee

    1
    This free electron laser could eventually provide a test of quantum gravity. BNL

    Quantum mechanics has been successful beyond the wildest dreams of its founders. The lives and times of atoms, governed by quantum mechanics, play out before us on the grand stage of space and time. And the stage is an integral part of the show, bending and warping around the actors according to the rules of general relativity. The actors—atoms and molecules—respond to this shifting stage, but they have no influence on how it warps and flows around them.

    This is puzzling to us. Why is it such a one-directional thing: general relativity influences quantum mechanics, but quantum mechanics has no influence on general relativity? It’s a puzzle that is born of human expectation rather than evidence. We expect that, since quantum mechanics is punctuated by sharp jumps, somehow space and time should do the same.

    There’s also the expectation that, if space and time acted a bit more quantum-ish, then the equations of general relativity would be better behaved. In general relativity, it is possible to bend space and time infinitely sharply. This is something we simply cannot understand: what would infinitely bent space look like? To most physicists, it looks like something that cannot actually be real, indicating a problem with the theory. Might this be where the actors influence the stage?

    Quantum mechanics and relativity on the clock

    To try and catch the actors modifying the stage requires the most precise experiments ever devised. Nothing we have so far will get us close, so a new idea from a pair of German physicists is very welcome. They focus on what’s perhaps the most promising avenue for detecting quantum influences on space-time: time-dilation experiments. Modern clocks rely on the quantum nature of atoms to measure time. And the flow of time depends on relative speed and gravitational acceleration. Hence, we can test general relativity, special relativity, and quantum mechanics all in the same experiment.

    To get an idea of how this works, let’s take a look at the traditional atomic clock. In an atomic clock, we carefully prepare some atoms in a predefined superposition state: that is, the atom is prepared such that it has a fifty percent chance of being in state A and a fifty percent chance of being in state B. As time passes, the environment around the atom forces the superposition state to change. At some later point, it will have a seventy-five percent chance of being in state A; even later, it will certainly be in state A. Keep on going, however, and the chance of being in state A starts to shrink, and it continues to do so until the atom is certainly in state B. Provided that the atom is undisturbed, these oscillations will continue.
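
    As a cartoon of that behaviour (an idealized, isolated two-level atom with transition frequency ν0, ignoring decoherence and the details of real clock protocols), one simple functional form consistent with the description above is:

```latex
P_A(t) = \tfrac{1}{2}\bigl[\,1 + \sin(2\pi\nu_0 t)\,\bigr], \qquad P_B(t) = 1 - P_A(t)
```

    The probability starts at 50/50, climbs to certainty in state A, swings over to certainty in state B, and repeats with period 1/ν0. For caesium, 9,192,631,770 of these transition-frequency oscillations define one SI second.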

    These periodic oscillations provide the perfect ticking clock. We simply define the period of an oscillation to be our base unit of time. To couple this to general relativity measurements is, in principle, rather simple. Build two clocks and place them beside each other. At a certain moment, we start counting ticks from both clocks. When one clock reaches a thousand (for instance), we compare the number of ticks from the two clocks. If we have done our job right, both clocks should have reached a thousand ticks.

    If we shoot one clock into space, however, and perform the same experiment, relativity demands that the clock in orbit record more ticks than the clock on Earth. The way we record the passing of time is via a phenomenon that is purely quantum in nature, while the passage of time itself is modified by gravity. These experiments work really well. But at present, they are not sensitive enough to detect any deviation from either quantum mechanics or general relativity.
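
    To get a feel for the size of the effect, here is a minimal weak-field sketch of the rate difference between a ground clock and a clock in a circular orbit. The orbit radius is my own illustrative choice (a GPS-like orbit); the article doesn't specify one, and the calculation ignores the ground clock's own rotation with the Earth and higher-order terms.

```python
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m
C = 299_792_458.0          # speed of light, m/s


def daily_offset_microseconds(orbit_radius):
    """Approximate rate difference, orbiting clock minus ground clock, in microseconds/day."""
    gravitational = GM_EARTH * (1 / R_EARTH - 1 / orbit_radius) / C**2  # orbit clock runs fast
    v_orbit = math.sqrt(GM_EARTH / orbit_radius)
    kinematic = v_orbit**2 / (2 * C**2)                                 # orbit clock runs slow
    return (gravitational - kinematic) * 86_400 * 1e6


# A GPS-like orbit radius of ~26,561 km gains roughly 38 microseconds per day.
print(daily_offset_microseconds(2.6561e7))
```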

    Going nuclear

    That’s where the new ideas come in. The researchers propose, essentially, to create something similar to an atomic clock, but instead of tracking the oscillation of atomic states, they want to track nuclear states. Usually, when I discuss atoms, I ignore the nucleus entirely. Yes, it is there, but I only really care about the influence the nucleus has on the energetic states of the electrons that surround it. However, in one key way the nucleus is just like the electron cloud that surrounds it: it has its own set of energetic states. It is possible to excite nuclear states (using X-Ray radiation) and, afterwards, they will return to the ground state by emitting an X-Ray.

    So let’s imagine that we have a crystal of silver sitting on the surface of the Earth. The silver atoms all experience a slightly different flow of time because the atoms at the top of the crystal are further away from the center of the Earth compared to the atoms at the bottom of the crystal.

    To kick things off, we send in a single X-Ray photon, which is absorbed by the crystal. This is where the awesomeness of quantum mechanics puts on sunglasses and starts dancing. We don’t know which silver atom absorbed the photon, so we have to consider that all of them absorbed a tiny fraction of the photon. This shared absorption now means that all of the silver atoms enter a superposition state of having absorbed and not absorbed a photon. This superposition state changes with time, just like in an atomic clock.

    In the absence of an outside environment, all the silver atoms will change in lockstep. And when the photon is re-emitted from the crystal, all the atoms will contribute to that emission. So each atom behaves as if it is emitting a partial photon. These photons add together, and a single photon flies off in the same direction as the absorbed photon had been traveling. Essentially, because all the atoms are in lockstep, the charge oscillations that emit the photon add up in phase only in the direction that the absorbed photon was flying.

    Gravity, though, causes the atoms to fall out of lockstep. So when the time comes to emit, the charge oscillations are all slightly out of phase with each other. But they are not random: those at the top of the crystal are just slightly ahead of those at the bottom of the crystal. As a result, the direction for which the individual contributions add up in phase is not in the same direction as the flight path of the absorbed photon, but at a very slight angle.

    How big is this angle? That depends on the size of the crystal and how long it takes the environment to randomize the emission process. For a crystal of silver atoms that is less than 1mm thick, the angle could be as large as 100 micro-degrees, which is small but probably measurable.

    Spinning crystals

    That, however, is only the beginning of a seam of clever. If the crystal is placed on the outside of a cylinder and rotated during the experiment, then the top atoms of the crystal are moving faster than the bottom, meaning that the time-dilation experienced at the top of the crystal is greater than that at the bottom. This has exactly the same effect as placing the crystal in a gravitational field, but now the strength of that field is governed by the rate of rotation.

    In any case, spinning a 10mm diameter cylinder very fast (70,000 revolutions per second) vastly increases the angular deflection. For silver, for instance, it reaches 90 degrees. With such a large signal, even smaller deviations from the predictions of general relativity should be detectable in the lab. Importantly, these deviations happen on very small length scales, where we would normally start thinking about quantum effects in matter. Experiments like these may even be sensitive enough to see the influence of quantum mechanics on space and time.
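
    As a back-of-the-envelope sketch of why the spinning geometry helps, compare the fractional rate difference across the crystal in the two setups. The 1mm crystal thickness below is my own assumption (the article only says the crystal is less than 1mm thick); the cylinder diameter and rotation rate are the figures quoted above.

```python
import math

C = 299_792_458.0   # speed of light, m/s
G = 9.81            # surface gravity, m/s^2
THICKNESS = 1e-3    # assumed crystal thickness, m

# Static crystal: gravitational rate difference between top and bottom is g*h/c^2.
static_shift = G * THICKNESS / C**2

# Crystal on a 10mm diameter cylinder spinning at 70,000 rev/s: v(r) = omega*r,
# so the differential kinematic shift across the thickness is ~omega^2 * r * dr / c^2.
omega = 2 * math.pi * 70_000
radius = 5e-3
spinning_shift = omega**2 * radius * THICKNESS / C**2

print(f"static:   {static_shift:.1e}")                   # ~1.1e-19
print(f"spinning: {spinning_shift:.1e}")                 # ~1.1e-11
print(f"boost:    {spinning_shift / static_shift:.0e}")  # ~1e8
```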

    A physical implementation of this experiment will be challenging but not impossible. The biggest issue is probably the X-Ray source and doing single photon experiments in the X-Ray regime. Following that, the crystals need to be extremely pure, and something called a coherent state needs to be created within them. This is certainly not trivial. Given that it took atomic physicists a long time to achieve this for electronic transitions, I think it will take a lot more work to make it happen at X-Ray frequencies.

    On the upside, free electron lasers have come a very long way, and they offer much better control over beam intensities and stability. This is, hopefully, the sort of challenge that beam-line scientists live for.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 11:07 am on March 9, 2015 Permalink | Reply
    Tags: ars technica,   

    From ars technica: “Imaging a supernova with neutrinos” 

    Ars Technica
    ars technica

    Mar 4, 2015
    John Timmer

    1
    Two men in a rubber raft inspect the wall of photodetectors of the partly filled Super-Kamiokande neutrino detector. (BNL)

    There are lots of ways to describe how rarely neutrinos interact with normal matter. Duke’s Kate Scholberg, who works on them, provided yet another. A 10 mega-electronvolt (MeV) gamma ray will, on average, go through 20 centimeters of carbon before it’s absorbed; a 10 MeV neutrino will go a light year. “It’s called the weak interaction for a reason,” she quipped, referring to the weak-force-generated processes that produce and absorb these particles.
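
    Those mean free paths follow from λ = 1/(nσ), where n is the number density of target nuclei and σ is the interaction cross-section. As a rough sketch (assuming a graphite-like carbon density and treating the quoted distances as literal mean free paths), you can back out the implied cross-sections:

```python
AVOGADRO = 6.022e23        # atoms per mole
LIGHT_YEAR_CM = 9.461e17   # one light year in centimetres


def number_density(density_g_cm3, atomic_mass):
    """Target nuclei per cubic centimetre."""
    return density_g_cm3 * AVOGADRO / atomic_mass


def implied_cross_section(mean_free_path_cm, n):
    """Cross-section (cm^2) implied by lambda = 1 / (n * sigma)."""
    return 1.0 / (n * mean_free_path_cm)


n_carbon = number_density(2.2, 12.0)                   # assumed graphite-like density
print(implied_cross_section(20.0, n_carbon))           # gamma ray: ~5e-25 cm^2
print(implied_cross_section(LIGHT_YEAR_CM, n_carbon))  # neutrino:  ~1e-41 cm^2
```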

    But there’s one type of event that produces so many of these elusive particles that we can’t miss it: a core-collapse supernova, which occurs when a star can no longer produce enough energy to counteract the pull of gravity. We typically spot these through the copious amounts of light they produce, but in energetic terms, that’s just a rounding error: Scholberg said that 99 percent of the gravitational energy of the supernova goes into producing neutrinos.

    Within instants of the start of the collapse, gravity forces electrons and protons to fuse, producing neutrons and releasing neutrinos. While the energy that goes into producing light gets held up by complicated interactions with the outer shells of the collapsing star, neutrinos pass right through any intervening matter. Most of them do, at least; there are so many produced that their rare interactions collectively matter, though our supernova models haven’t quite settled on how yet.

    But our models do say that, if we could detect them all, we’d see their flavors (neutrinos come in three of them) change over time, and distinct patterns of emission during the star’s infall, accretion of matter, and then post-supernova cooling. Black hole formation would create a sudden stop to their emission, so they could provide a unique window into the events. Unfortunately, there’s the issue of too few of them interacting with our detectors to learn much.

    The last nearby supernova, SN 1987a, saw a burst of 20 electron antineutrinos detected about 2.5 hours before the light from the explosion became visible.

    2
    Remnant of SN 1987A seen in light overlays of different spectra. ALMA data (radio, in red) shows newly formed dust in the center of the remnant. Hubble (visible, in green) and Chandra (X-ray, in blue) data show the expanding shock wave.

    ALMA Array

    NASA Hubble Telescope
    Hubble

    NASA Chandra Telescope
    Chandra

    (Scholberg quipped that the Super-Kamiokande detector “generated orders of magnitude more papers than neutrinos.”) But researchers weren’t looking for this, so the burst was only recognized after the fact.

    Super-Kamiokande experiment Japan
    Super-Kamiokande detector

    That’s changed now. Researchers can go to a Web page hosted by Brookhaven National Lab and have an alert sent to them if any of a handful of detectors pick up a burst of neutrinos. (The Daya Bay, IceCube, and Super-Kamiokande detectors are all part of this program.) When the next burst of neutrinos arrives, astronomers will be alert and searching for the source.

    Daya Bay
    Daya Bay

    ICECUBE neutrino detector
    IceCube

    “The neutrinos are coming!” Scholberg said. “The supernovae have already happened, their wavefronts are on their way.” She said estimates are that there are three core collapse supernovae in our neighborhood each century and, by that measure, “we’re due.”

    If that supernova has occurred in the galactic core, it will put on quite a show. Rather than detecting individual events, the entire area of ice monitored by the IceCube detector will end up glowing. The Super-Kamiokande detector will see 10,000 individual neutrinos; “It will light up like a Christmas tree,” Scholberg said.

    It’ll be an impressive show, and it’s one that I’m sure most physicists (along with me) hope happens in their lifetimes. But if it takes a little time, the show may be even better. There are apparently plans afoot to build a “Hyper-Kamiokande,” which would be able to detect 100,000 neutrinos from a galactic core supernova. Imagine how many papers that would produce.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 9:02 pm on October 5, 2014 Permalink | Reply
    Tags: ars technica, ,   

    From ars technica: “Exploring the monstrous creatures at the edges of the dark matter map” 

    Ars Technica
    ars technica

    Sept 30 2014
    Matthew Francis

    So far, we’ve focused on the simplest dark matter models, consisting of one type of object and minimal interactions among individual dark matter particles. However, that’s not how ordinary matter behaves: the interactions among different particle types enable the existence of atoms, molecules, and us. Maybe the same sort of thing is true for dark matter, which could be subject to new forces acting primarily between particles.

    Some theories describe a kind of “dark electromagnetism” where particles carry charges like electricity, but they’re governed by a force that doesn’t influence electrons and the like. Just as normal electromagnetism describes light, these models include “dark photons,” which sound like something from the last season of Star Trek: The Next Generation (after the writers ran out of ideas).

    elec
    Diagram of a solenoid and its magnetic field lines. The shape of all lines is correct according to the laws of electrodynamics.

    Like many WDM candidates, dark photons would be difficult—if not impossible—to detect directly, but if they exist, they would carry energy away from interacting dark matter systems. That would be detectable by its effect on things like the structure of neutron stars and other compact astronomical bodies. Observations of these objects would let researchers place some stringent limits on the strength of dark forces. Another consequence is that dark forces would tend to turn spherical galactic halos into flatter, more disk-like structures. Since we don’t see that in real galaxies, there are strong constraints on how much dark forces can affect dark matter motion.

    som
    The “Sombrero” galaxy shows that matter interacting with itself flattens into disks. Dark matter doesn’t seem to do that, limiting the strength of possible interactions between particles.
    NASA, ESA, and The Hubble Heritage Team (STScI/AURA)

    NASA Hubble Telescope
    NASA/ESA Hubble

    Another side effect of dark forces is that there should be dark antimatter and dark matter-antimatter annihilation. The results of such interactions could include ordinary photons, another intriguing hint in the wake of observations of excess gamma-rays, possibly due to dark matter annihilation in the Milky Way and other galaxies.

    What’s cooler than cold dark matter?

    While most low-mass particles are “hot,” a hypothetical particle known as the axion is an exception. Axions were first predicted as a solution to a thorny problem in the physics of the strong nuclear force, but certain properties make them appealing as dark matter candidates. Mainly, they are electrically neutral and don’t interact directly with ordinary matter except through gravity.

    Axions are also very low-mass (at least in one proposed version), but unlike hot dark matter, they “condensed” in the early Universe into a slow, thick soup. In other words, they behave much like cold dark matter, but without the large mass usually implied by the term.

    Axions aren’t part of the Standard Model, but in a sense they’re a minimally invasive addition. Unlike supersymmetry, which involves adding one particle for each type in the Standard Model, axions are just one particle type, albeit one with some unique properties. (To be fair, these aren’t mutually exclusive concepts: it’s possible both SUSY particles and axions are real, and some versions of SUSY even include a hypothetical partner for axions.)

    sm
    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Supersymmetry standard model
    Standard Model of Supersymmetry

    Like WDM, axions don’t interact directly with ordinary matter. But according to theory, in a strong magnetic field, axions and photons can oscillate into each other, switching smoothly between particle types. That means axions could be created all the time near black holes, neutron stars, or other places with intense magnetic fields—possibly including superconducting circuits here on Earth. This is how experiments hunt for axions, most notably the Axion Dark Matter eXperiment (ADMX).

    So far, no experiment has turned up axions, at least of the type we’d expect to see. Particle physics has a lot of wiggle-room for possibilities, so it’s too soon to say no axions exist, but axion partisans are disappointed. A universe with axions makes more sense than one without, but it wouldn’t be the first time something that really seemed to be a good idea didn’t quite work out.

    A physicist’s fear

    Long as it is becoming, this list is far from complete. We’ve excluded exotic particles with sufficiently tiny electric charges to be nearly invisible, weird (but unlikely) interactions that change the character of known particles under special circumstances, plus a number of other possibilities. One interesting candidate is jokingly known as the WIMPzilla, which consists of one or more particle types more than a trillion times the mass of a proton. These would have been born at a much earlier era than WIMPs, when the Universe was even hotter. Because they are so much heavier, WIMPzillas can be rarer and interact more readily with normal matter, but—as with other more exotic candidates—they aren’t really considered to be a strong possibility.

    godzilla
    If the leading ideas for dark matter don’t hold up to experimental scrutiny, then we’ve definitely sailed off the map into the unknown.
    Castle Gallery, College of New Rochelle

    And more non-WIMP dark matter candidates seem to crop up every year, though many are implausible enough they won’t garner much attention even from other theorists. However, each guess—even unlikely ones—can help us understand what dark matter can be, and what it can’t.

    We’ve also omitted a whole other can of worms known as “modified gravity”—a proposition that the matter we see is all there is, and the observational phenomena that don’t make sense can be explained by a different theory of gravity. So far, no modified gravity model has reproduced all the observed phenomena attributed to dark matter, though of course that doesn’t say it can never happen.

    To put it another way: most astronomers and cosmologists accept that dark matter exists because it’s the simplest explanation that accounts for all the observational data. If you want a more grumpy description, you could say that dark matter is the worst idea, except for all the other options.

    Of course, Nature is sly. Perhaps more than one of these dark matter candidates is out there. A world with both axions and WIMPs—motivated as they are by different problems arising from the Standard Model—would be confounding but not beyond reason. Given the unexpected zoo of normal particles discovered in the 20th century, maybe we’ll be pleasantly surprised; after all, wouldn’t it be nice if several of our hypotheses were simultaneously correct for once? (I’m a both/and kind of guy.) More than one type might also help explain why we have yet to see any dark matter in our detectors so far. If a substantial fraction of dark matter particles is made of axions, then the density of WIMPs or WDM must be correspondingly lower and vice versa.

    But a bigger worry lurks in the minds of many researchers. Maybe dark matter doesn’t interact with ordinary matter at all, and it doesn’t annihilate in a way we can detect easily. Then the “dark sector” is removed from anything we can probe experimentally, and that’s an upsetting thought. Researchers would have a hard time explaining how such particles came to be after the Big Bang, but worse: without a way to study their properties in the lab, we would be stuck with the kind of phenomenology we have now. Dark matter would be perpetually assigned to placeholder status.

    In old maps made by European cartographers, distant lands were sometimes shown populated by monstrous beings. Today of course, everyone knows that those lands are inhabited by other human beings and creatures that, while sometimes strange, aren’t the monsters of our imagination. Our hope is that the monstrous beings of our theoretical space imaginings will some day seem ordinary, too, and “dark matter” will be part of physics as we know it.

    See the full article here.


    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo
    Lenovo

    Dell
    Dell

     