
  • richardmitnick 4:36 pm on October 30, 2014

    From livescience: “Ancient Stone Circles in Mideast Baffle Archaeologists” 


    October 30, 2014
    Owen Jarus

    Huge stone circles in the Middle East have been imaged from above, revealing details of structures that have been shrouded in mystery for decades.

    Archaeologists in Jordan have taken high-resolution aerial images of 11 ancient “Big Circles,” all but one of which are around 400 meters (1,312 feet) in diameter. Why they are so similar is unknown, but the similarity seems “too close to be a coincidence,” said researcher David Kennedy.

    The Big Circles (as archaeologists call them) were built with low stone walls that are no more than a few feet high. The circles originally contained no openings, and people would have had to hop over the walls in order to get inside.

    Their purpose is unknown, and archaeologists are unsure when these structures were built. Analysis of the photographs, as well as artifacts found on the ground, suggest the circles date back at least 2,000 years, but they may be much older. They could even have been constructed in prehistoric times, before writing was invented, scientists say.

    Though the Big Circles were first spotted by aircraft in the 1920s, little research has focused on these structures, and many scientists are not even aware of their existence, something these archaeologists hope the new aerial images will help to change.

    The “most important contribution is simply to collect and make known a large group of rather remarkable sites,” writes Kennedy, a professor at the University of Western Australia, in an article published recently in the journal Zeitschrift für Orient-Archäologie.

    In addition to the 11 photographed circles, researchers have identified another similar circle in Jordan, which appears to have been only partially completed, Kennedy noted. Old satellite imagery also reveals two circles, one in Jordan and another in Syria, which have both been destroyed. The circle in Syria was destroyed within the last decade and the one in Jordan a few decades ago. A separate research team, from Durham University, investigated the Syria circle before it was completely gone.

    While there are many smaller stone circles in the Middle East, what makes these 11 Big Circles stand out is their large size and ancient age, Kennedy said.

    Kennedy has been leading the Aerial Archaeology in Jordan Project (AAJ) since 1997 and also co-directs the Aerial Photographic Archive for Archaeology in the Middle East (APAAME).

    Building the Big Circles

    The circles would not have been hard to build, Kennedy said. They were constructed mainly with local rocks, and a dozen people working hard could potentially complete a Big Circle in a week, Kennedy told Live Science in an email.

    The area near the Azraq Oasis in Jordan has hundreds of wheels, large structures made of stone that date back at least 2,000 years.

    Another cluster of wheels found near the Azraq Oasis.

    Cairns, or piles of stones, are often found associated with the wheels, sometimes circling the perimeter and other times in among the spokes.

    However, building the circles in a precise shape would have taken some planning. “In the case of those circles that [are] near-precise circles, it would have required at least one person as ‘architect,'” Kennedy said, adding that this architect could simply have tied a long rope to a post and walked in a circle, marking the ground as he or she moved around. “That would also explain the glitches [in the circles] where the land was uneven,” as the architect wouldn’t have been able to keep walking in a perfect circle at those spots.
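    The rope-and-post layout Kennedy describes is simple geometry: every mark lies exactly one rope-length from the central post. A minimal sketch (illustrative only, not from the article) of the positions such an architect would mark:

```python
import math

def stake_positions(radius_m, n_marks):
    """Points marked by an 'architect' walking a taut rope of length
    radius_m around a central post, at n_marks equal intervals."""
    return [
        (radius_m * math.cos(2 * math.pi * k / n_marks),
         radius_m * math.sin(2 * math.pi * k / n_marks))
        for k in range(n_marks)
    ]

# Laying out a ~400-meter-diameter Big Circle with 36 marks:
marks = stake_positions(200.0, 36)
```

    Wherever the ground is uneven the rope goes slack or snags, which is exactly Kennedy’s explanation for the glitches in the real circles.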

    The purpose of the Big Circles is a mystery, Kennedy said. It seems unlikely that they were originally used as corrals, as the walls were no more than a few feet high, the circles contain no structures that would have helped maintain an animal herd and there’s no need for animal corrals to have such a precise shape, he said.

    One of the circles contains three cairns, or rock piles, on its edges that may have been used for burial. However, Kennedy said, “my inference is that the cairns [were built] later, when the enclosure was no longer significant.”

    Solving the circle mystery

    In order to solve the mystery, archaeologists must conduct more actual fieldwork, Kennedy said, noting that aerial images are helpful but can’t replace excavation.

    Archaeologists Graham Philip and Jennie Bradbury, both with Durham University in England, have examined a Big Circle they found near Homs in Syria. While the circle was “badly damaged” when the researchers found it, they completed their fieldwork before land development completely destroyed the structure.

    This Big Circle was positioned in such a way that it could give someone standing inside it a “panoramic” view of a basin that would have held crops and settlements, the researchers reported in a 2010 paper in the journal Levant. This “may have played an important part in the location of the enclosure,” the two archaeologists wrote in the Levant article.

    Recent satellite imagery shows that the circle near Homs is now virtually destroyed, Kennedy wrote.

    Megalithic landscape

    While the purpose of the Big Circles remains unknown, the research by Kennedy and his team shows that the creations were part of a landscape rich in stone structures.

    His team has found thousands of stone structures in Jordan and the broader Middle East. They come in a variety of shapes, including “Wheels” (circular structures with spokes radiating out); “Kites” (stone structures that forced animals to run into a kill zone); “Pendants” (lines of stone cairns that run from burials); and “Walls” (mysterious structures that meander across the landscape for more than a mile, or up to several thousand meters, and have no apparent practical use).

    The aerial photography program his team is conducting, combined with satellite imagery from sites like Google Earth, has led to many discoveries, Kennedy said. “As soon as you get up a few hundred feet, it all comes into focus. You can suddenly see the shape of what you’ve been looking at,” Kennedy said in a YouTube video made by Google as part of their Search Stories series.

    See the full article, with more images, here.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 4:21 pm on October 30, 2014

    From “Existence of a group of ‘quiet’ quasars confirmed” 


    Oct 29, 2014
    Provided by Institute of Astrophysics of Andalusia

    Aeons ago, the universe was different: mergers of galaxies were common and gigantic black holes with masses equivalent to billions of times that of the Sun formed in their nuclei. As they captured the surrounding gas, these black holes emitted energy. Known as quasars, these very distant and tremendously high energy objects have local relatives with much lower energy whose existence raises numerous questions: are there also such “quiet” quasars at much larger distances? Are the latter dying versions of the former or are they completely different?

    An artist’s view of the heart of a quasar. Credit: NASA

    Light from distant quasars takes billions of years to reach us, so when we detect it we are actually looking at the universe as it was a long time ago. “Astronomers have always wanted to compare past and present, but it has been almost impossible because at great distances we can only see the brightest objects, and nearby such objects no longer exist,” says Jack W. Sulentic, astronomer at the Institute of Astrophysics of Andalusia (IAA-CSIC), who is leading the research. “Until now we have compared very luminous distant quasars with weaker ones close by, which is tantamount to comparing household light bulbs with the lights in a football stadium.” Now we are able to detect the household light bulbs very far away in the distant past.

    The more distant, the more luminous?

    Quasars appear to evolve with distance: the farther away one looks, the brighter they are. This could indicate that quasars extinguish over time, or it could be the result of a simple observational bias masking a different reality: that gigantic, rapidly evolving quasars, most of them already extinct, coexist with a quiet population that evolves at a much slower rhythm but which our technological limitations have not yet allowed us to study.
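    The observational bias described above can be illustrated with a toy flux-limited survey (a hypothetical simulation, not from the article): even when the underlying population never changes, only the intrinsically brightest sources remain detectable at large distances, so the average detected object looks more luminous the farther away one looks.

```python
import random

def mean_observed_luminosity(distance, flux_limit, n=100_000, seed=1):
    """Toy flux-limited survey: at distance d, only sources with
    luminosity above flux_limit * d**2 are detected (inverse-square
    law), so the observed mean luminosity rises with distance."""
    random.seed(seed)  # same underlying population at every distance
    lums = [random.lognormvariate(0.0, 1.0) for _ in range(n)]
    seen = [L for L in lums if L > flux_limit * distance**2]
    return sum(seen) / len(seen)

near = mean_observed_luminosity(1.0, 0.1)
far = mean_observed_luminosity(5.0, 0.1)
# far > near: the distant sample looks intrinsically brighter,
# even though the population is identical.
```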

    To solve this riddle it was necessary to look for low-luminosity quasars at enormous distances and to compare their characteristics with those of nearby quasars of equal luminosity, something thus far almost impossible to do because it requires observing objects about a hundred times weaker than those we are used to studying at those distances.

    The tremendous light-gathering power of the GTC telescope has recently enabled Sulentic and his team to obtain, for the first time, spectroscopic data from distant, low-luminosity quasars similar to typical nearby ones. The data are reliable enough to establish essential parameters such as chemical composition, the mass of the central black hole and the rate at which it absorbs matter.

    Gran Telescopio Canarias
    Gran Telescopio Canarias interior

    “We have been able to confirm that, indeed, apart from the highly energetic and rapidly evolving quasars, there is another population that evolves slowly. This population of quasars appears to follow the quasar main sequence discovered by Sulentic and colleagues in 2000. There does not even seem to be a strong relation between this type of quasar, which we see in our environment, and those ‘monsters’ that started to glow more than ten billion years ago,” says Ascensión del Olmo, another IAA-CSIC researcher taking part in the study.

    They have, nonetheless, found differences in this population of quiet quasars. “The local quasars present a higher proportion of heavy elements such as aluminium, iron or magnesium than their distant relatives, which most likely reflects enrichment by the birth and death of successive generations of stars,” says Jack W. Sulentic (IAA-CSIC). “This result is an excellent example of the new perspectives on the universe which the new 10-meter class of telescopes such as GTC are yielding,” the researcher concludes.

    See the full article here.





  • richardmitnick 3:48 pm on October 30, 2014
    Tags: Paleobiology

    From NYT: “From Ancient DNA, a Clearer Picture of Europeans Today” 

    New York Times

    The New York Times

    OCT. 30, 2014
    Carl Zimmer

    About 50,000 years ago, humans from Africa first set foot in Europe. They hunted woolly mammoths and other big game — sometimes to extinction. Eventually, they began grazing livestock and raising crops.

    They chopped down forests and drained swamps, turning villages into towns, then cities and capitals of empires. But even as they altered the Continent, Europeans changed, too.

    Their skin and hair grew lighter. They gained genetic traits particular to the regions in which they lived: Northern Europeans, for example, grew taller than Southern Europeans.

    Until now, scientists have learned about evolution on the Continent mostly by looking at living Europeans. But advances in biotechnology have made it possible to begin extracting entire genomes from the bones of ancestors who lived thousands of years ago. Their genomes are like time machines, allowing scientists to see bits of European history playing out over thousands of years.

    Recently David Reich, a geneticist at Harvard Medical School, and his colleagues analyzed the genomes of nine ancient Europeans. Eight belonged to hunter-gatherers who lived about 8,000 years ago, seven in what is now Sweden and one in Luxembourg. The ninth came from a farmer who lived 7,000 years ago in present-day Germany.

    The scientists compared these genomes with those of living Europeans. As they reported last month in Nature, the study revealed something scientists never knew: Europeans today have genes from three very different populations.

    The oldest of these populations were the first Europeans, who appear to have lived as hunter-gatherers. The second were farmers who expanded into Europe about 8,500 years ago from the Near East.

    But most living Europeans also carry genes from a third population, which appears to have arrived more recently. Dr. Reich and his colleagues found the closest match in DNA taken from a 24,000-year-old individual in Siberia, suggesting that the third wave of immigrants hailed from north Eurasia. The ancient Europeans that the scientists studied did not share this North Eurasian DNA. They concluded that this third wave must have moved into Europe after 7,000 years ago.

    Last week, another team of scientists based at University College Dublin reported data from an even bigger haul of ancient European genomes — 13, all told. While Dr. Reich and his colleagues studied ancient Europeans separated by hundreds of miles, the Dublin team focused on just one region in Central Europe called the Great Hungarian Plain.

    The people whose genomes the scientists retrieved lived on the plain at various times between 7,700 years ago and 2,800 years ago.

    “What’s really exciting here is to have a transect through time,” said Johannes Krause, a co-director of the Max Planck Institute for the Science of Human History in Jena, Germany, who was not involved in the study. “It’s the first time that’s been done.”

    Archaeological digs have revealed evidence of farming on the plain as long as 8,000 years ago. People there raised crops like barley, and raised cattle and other livestock. Shards of pottery show that they consumed milk.

    The oldest genomes retrieved from human remains in the area — one from a man and one from a woman — date back to the dawn of agriculture on the plain. The woman’s DNA showed that she belonged to the ancient farming population documented by Dr. Reich and his colleagues.

    The man, however, did not have the genes of a farmer. He belonged to the oldest population of hunter-gatherers.

    “The archaeological information isn’t enough to say whether he was married to a local farmer,” said Ron Pinhasi, an archaeologist at University College Dublin and a co-author of the new study. It may even be that the man’s skull was a trophy of some sort, Dr. Pinhasi added.

    Archaeologists have found that early farming culture didn’t change drastically for the next 3,700 years. But about 4,000 years ago, the Bronze Age arrived. People started using bronze tools, trading over longer networks and moving into fortified towns.

    Dr. Pinhasi and his colleagues found that the era also brought a sudden shift in human DNA. A new population arrived on the Great Hungarian Plain, and Dr. Reich believes he knows who they were: the northern Eurasians.

    “It’s very exciting,” he said. “It documents that by this time in Central Europe, this Eastern influence had already arrived.”

    At the start of the Bronze Age, life settled down on the plain for a thousand years. But then came the Iron Age, bringing another shift in culture — and genes.

    People began traveling across the plain by horse-drawn chariots and wagons, and the genomes from 2,800 years ago show that the people of the Bronze Age had begun to be supplanted by a new Iron Age population. These are the people most closely related to living Hungarians.

    In the new study, Dr. Pinhasi and his colleagues also surveyed individual genes known to have changed over the course of European history.

    Today, for example, people in Hungary tend to have light skin and light brown hair, and half of them carry a mutation that lets them digest milk as adults. It took thousands of years for the genes for these traits to appear on the Great Hungarian Plain, the scientists found.

    The hunter-gatherer who lived 7,700 years ago, for example, probably had black hair and dark skin, along with blue eyes. His genes suggest that he also probably couldn’t digest milk — not surprising, since he came from a population that didn’t raise livestock.

    The ancient farmer woman, on the other hand, probably had dark brown hair and brown eyes. But like the hunter-gatherers, she lacked the genetic mutation for digesting milk.

    A 7,700-year-old skeleton of a woman found in Hungary has yielded DNA. Scientists have found that she belonged to a wave of early farmers who moved into Europe from the Near East. Credit Ron Pinhasi

    It is not until 6,400 years ago that the scientists find the first genetic evidence on the Great Hungarian Plain for light brown hair. And the milk mutation appeared even later, just 3,100 years ago.

    It is possible that these new genes and others were brought to the plain by successive waves of immigrants. But natural selection probably played a role in making these genes pervasive.

    Genetic mutations that enable people to drink milk as adults, for example, could have helped them survive famines. In cow-herding cultures, scientists have found, the milk-drinking mutation led to a 10 percent increase in the number of children.

    If that’s true, then for 4,600 years people on the Great Hungarian Plain were milking cows but lacked the ability to digest milk. Dr. Pinhasi suggested that they only used milk at first to make cheese and yogurt, which would have been easier to digest.
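    The arithmetic behind that spread is worth making concrete. Under a simple haploid selection model (a deliberate simplification for illustration, not the study’s analysis), carriers who leave 10 percent more children per generation come to dominate a population within a few thousand years:

```python
def allele_frequency(p0, s, generations):
    """Haploid selection sketch: carriers leave (1 + s) times as many
    children per generation; track the carrier fraction p."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
    return p

# Starting from 1 percent carriers with a 10 percent advantage,
# after 100 generations (roughly 2,500 years):
p = allele_frequency(0.01, 0.10, 100)  # carriers are now the vast majority
```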

    Daniel G. Bradley, a geneticist at Trinity College Dublin and co-author of the new study, predicted more unexpected results would emerge as scientists gather more ancient DNA in Europe.

    “The past is going to be a different country,” he said, “and it’s going to surprise us.”

    See the full article here.




  • richardmitnick 3:34 pm on October 30, 2014

    From Hubble: “Hubble Sees ‘Ghost Light’ From Dead Galaxies” 

    NASA Hubble Telescope


    October 30, 2014
    Ray Villard
    Space Telescope Science Institute, Baltimore

    NASA’s Hubble Space Telescope has picked up the faint, ghostly glow of stars ejected from ancient galaxies that were gravitationally ripped apart several billion years ago. The mayhem happened 4 billion light-years away, inside an immense collection of nearly 500 galaxies nicknamed “Pandora’s Cluster,” also known as Abell 2744.

    The scattered stars are no longer bound to any one galaxy, and drift freely between galaxies in the cluster. By observing the light from the orphaned stars, Hubble astronomers have assembled forensic evidence that suggests as many as six galaxies were torn to pieces inside the cluster over a stretch of 6 billion years.

    Massive galaxy cluster Abell 2744, nicknamed Pandora’s Cluster, takes on a ghostly look where total starlight has been artificially colored blue in this Hubble view.
    Image Credit: NASA/ESA/IAC/HFF Team, STScI

    Computer modeling of the gravitational dynamics among galaxies in a cluster suggests that galaxies as big as our Milky Way Galaxy are the likely candidates as the source of the stars. The doomed galaxies would have been pulled apart like taffy if they plunged through the center of a galaxy cluster where gravitational tidal forces are strongest. Astronomers have long hypothesized that the light from scattered stars should be detectable after such galaxies are disassembled. However, the predicted “intracluster” glow of stars is very faint and was therefore a challenge to identify.

    “The Hubble data revealing the ghost light are important steps forward in understanding the evolution of galaxy clusters,” said Ignacio Trujillo of The Instituto de Astrofísica de Canarias (IAC), Santa Cruz de Tenerife, Spain. “It is also amazingly beautiful in that we found the telltale glow by utilizing Hubble’s unique capabilities.”

    The team estimates that the combined light of about 200 billion outcast stars contributes approximately 10 percent of the cluster’s brightness.

    “The results are in good agreement with what has been predicted to happen inside massive galaxy clusters,” said Mireia Montes of the IAC, lead author of the paper published in the Oct. 1 issue of the Astrophysical Journal.

    Because these extremely faint stars are brightest at near-infrared wavelengths of light, the team emphasized that this type of observation could only be accomplished with Hubble’s infrared sensitivity to extraordinarily dim light.

    Hubble measurements determined that the phantom stars are rich in heavier elements like oxygen, carbon, and nitrogen. This means the scattered stars must be second or third-generation stars enriched with the elements forged in the hearts of the universe’s first-generation stars. Spiral galaxies – like the ones believed to be torn apart — can sustain ongoing star formation that creates chemically-enriched stars.

    Weighing more than 4 trillion solar masses, Abell 2744 is a target in the Frontier Fields program. This ambitious three-year effort teams Hubble and NASA’s other Great Observatories to look at select massive galaxy clusters to help astronomers probe the remote universe. Galaxy clusters are so massive that their gravity deflects light passing through them, magnifying, brightening, and distorting light in a phenomenon called gravitational lensing. Astronomers exploit this property of space to use the clusters as a zoom lens to magnify the images of far-more-distant galaxies that otherwise would be too faint to be seen.

    Montes’ team used the Hubble data to probe the environment of the foreground cluster itself. There are five other Frontier Fields clusters in the program, and the team plans to look for the eerie “ghost light” in these clusters, too.

    See the full article here.

    Another Hubble view of Abell 2744

    Abell 2744, nicknamed Pandora’s Cluster, imaged 22 June 2011. The galaxies in the cluster make up less than five percent of its mass. The gas (around 20 percent) is so hot that it shines only in X-rays (coloured red in this image). The distribution of invisible dark matter (making up around 75 percent of the cluster’s mass) is coloured here in blue.

    The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI), a free-standing science center located on the campus of The Johns Hopkins University and operated by the Association of Universities for Research in Astronomy (AURA) for NASA, conducts Hubble science operations.

    ESA50 Logo large

    AURA Icon




  • richardmitnick 2:58 pm on October 30, 2014

    From LBL: “Lord of the Microrings” 

    Berkeley Logo

    Berkeley Lab

    October 30, 2014
    Lynn Yarris (510) 486-5375

    A significant breakthrough in laser technology has been reported by the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley. Scientists led by Xiang Zhang, a physicist with joint appointments at Berkeley Lab and UC Berkeley, have developed a unique microring laser cavity that can produce single-mode lasing even from a conventional multi-mode laser cavity. This ability to provide single-mode lasing on demand holds ramifications for a wide range of applications including optical metrology and interferometry, optical data storage, high-resolution spectroscopy and optical communications.

    “Losses are typically undesirable in optics but, by deliberately exploiting the interplay between optical loss and gain based on the concept of parity-time symmetry, we have designed a microring laser cavity that exhibits intrinsic single-mode lasing regardless of the gain spectral bandwidth,” says Zhang, who directs Berkeley Lab’s Materials Sciences Division and is UC Berkeley’s Ernest S. Kuh Endowed Chair Professor. “This approach also provides an experimental platform to study parity-time symmetry and phase transition phenomena that originated from quantum field theory yet have been inaccessible so far in experiments. It can fundamentally broaden optical science at both semi-classical and quantum levels.”

    Xiang Zhang, director of Berkeley Lab’s Materials Sciences Division. (Photo by Roy Kaltschmidt)

    Zhang, who also directs the National Science Foundation’s Nano-scale Science and Engineering Center, and is a member of the Kavli Energy NanoSciences Institute at Berkeley, is the corresponding author of a paper in Science that describes this work. The paper is titled “Single-Mode Laser by Parity-time Symmetry Breaking.” Co-authors are Liang Feng, Zi Jing Wong, Ren-Min Ma and Yuan Wang.

    A laser cavity or resonator is the mirrored component of a laser in which light reflected multiple times yields a standing wave at certain resonance frequencies called modes. Laser cavities typically support multiple modes because their dimensions are much larger than optical wavelengths. Competition between modes limits the optical gain in amplitude and results in random fluctuations and instabilities in the emitted laser beams.

    “For many applications, single-mode lasing is desirable for its stable operation, better beam quality, and easier manipulation,” Zhang says. “Light emission from a single-mode laser is monochromatic with low phase and intensity noises, but creating sufficiently modulated optical gain and loss to obtain single-mode lasing has been a challenge.”

    Scanning electron microscope image of the fabricated PT symmetry microring laser cavity.

    While mode manipulation and selection strategies have been developed to achieve single-mode lasing, each of these strategies has only been applicable to specific configurations. The microring laser cavity developed by Zhang’s group is the first successful concept for a general design. The key to their success is using the concept of the breaking of parity-time (PT) symmetry. The law of parity-time symmetry dictates that the properties of a system, like a beam of light, remain the same even if the system’s spatial configuration is reversed, like a mirror image, or the direction of time runs backward. Zhang and his group discovered a phenomenon called “thresholdless parity-time symmetry breaking” that provides them with unprecedented control over the resonant modes of their microring laser cavity, a critical requirement for emission control in laser physics and applications.
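    The threshold behavior can be seen in the textbook two-mode PT model (a standard toy model, not the paper’s actual microring design): two coupled modes with balanced gain and loss keep purely real frequencies until the gain/loss contrast g exceeds the coupling k, at which point one mode acquires net amplification.

```python
import cmath

def coupled_mode_eigvals(omega, g, k):
    """Eigenfrequencies of the two-mode PT Hamiltonian
    H = [[omega + i*g, k], [k, omega - i*g]],
    which are omega +/- sqrt(k**2 - g**2)."""
    root = cmath.sqrt(k**2 - g**2)
    return omega + root, omega - root

# Below threshold (g < k): both eigenfrequencies are real (PT symmetric).
sym_hi, sym_lo = coupled_mode_eigvals(1.0, 0.3, 0.5)

# Above threshold (g > k): a conjugate pair appears; the mode with
# positive imaginary part is net-amplified and can dominate lasing.
amplified, damped = coupled_mode_eigvals(1.0, 0.5, 0.3)
```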

    Liang Feng

    “Thresholdless PT symmetry breaking means that our light beam undergoes symmetry breaking once the gain/loss contrast is introduced, no matter how large this contrast is,” says Liang Feng, lead author of the Science paper, a former postdoc in Zhang’s group and now an assistant professor at the University at Buffalo. “In other words, the threshold for PT symmetry breaking is zero gain/loss contrast.”

    Zhang, Feng and the other members of the team were able to exploit the phenomenon of thresholdless PT symmetry breaking through the fabrication of a unique microring laser cavity. This cavity consists of bilayered structures of chromium/germanium arranged periodically in the azimuthal direction on top of a microring resonator made from an indium-gallium-arsenide-phosphide compound on a substrate of indium phosphide. The diameter of the microring is 9 micrometers.
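    For a sense of scale, a ring cavity resonates when an integer number of wavelengths fits around its circumference, and the spacing between adjacent modes (the free spectral range) is the speed of light divided by the optical path length. A rough estimate for the 9-micrometer ring (the group index of 3.5 is an assumed value for the InGaAsP material, not given in the article):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def ring_mode_frequencies(diameter_m, n_eff, m_values):
    """Resonances of a ring cavity: m wavelengths fit around the ring,
    so f_m = m * c / (n_eff * pi * d)."""
    length = math.pi * diameter_m
    return [m * C / (n_eff * length) for m in m_values]

def free_spectral_range(diameter_m, n_g):
    """Spacing between adjacent ring modes, c / (n_g * pi * d)."""
    return C / (n_g * math.pi * diameter_m)

# Assumed group index ~3.5 for the 9-micrometer InGaAsP ring:
fsr = free_spectral_range(9e-6, 3.5)  # on the order of 3 THz
```

    A terahertz-scale mode spacing is why such a small ring supports only a handful of modes within the gain bandwidth, which makes it a natural platform for mode selection.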

    “The introduced rotational symmetry in our microring resonator is continuous, mimicking an infinite system,” says Feng. “The counterintuitive discovery we made is that PT symmetry does not hold even at an infinitesimal gain/loss modulation when a system is rotationally symmetric. This was not observed in previous one-dimensional PT modulation systems because those finite systems did not support any continuous symmetry operations.”

    Using the continuous rotational symmetry of their microring laser cavity to facilitate thresholdless PT symmetry breaking, Zhang, Feng and their collaborators are able to delicately manipulate optical gain and loss in such a manner as to ultimately yield single-mode lasing.

    “PT symmetry breaking means an optical mode can be gain-dominant for lasing, whereas PT symmetry means all the modes remain passive,” says Zi-Jing Wong, co-lead author and a graduate student in Zhang’s group. “With our microring laser cavity, we facilitate a desired mode in PT symmetry breaking, while keeping all other modes PT symmetric. Although PT symmetry breaking by itself cannot guarantee single-mode lasing, when acting together with PT symmetry for all other modes, it facilitates single-mode lasing.”

    In their Science paper, the researchers suggest that single-mode lasing through PT-symmetry breaking could pave the way to next generation optoelectronic devices for communications and computing as it enables the independent manipulation of multiple laser beams without the “crosstalk” problems that plague today’s systems. Their microring laser cavity concept might also be used to engineer optical modes in a typical multi-mode laser cavity to create a desired lasing mode and emission pattern.

    “Our microring laser cavities could also replace the large laser boxes that are routinely used in labs and industry today,” Feng says. “Moreover, the demonstrated single-mode operation regardless of gain spectral bandwidth may create a laser chip carrying trillions of informational signals at different frequencies. This would make it possible to shrink a huge datacenter onto a tiny photonic chip.”

    This research was supported by the Office of Naval Research MURI program.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal




  • richardmitnick 12:12 pm on October 30, 2014

    From FNAL- “Frontier Science Result: CDF A charming result” 

    Fermilab is an enduring source of strength for the U.S. contribution to scientific research worldwide.

    Thursday, Oct. 30, 2014
    Diego Tonelli and Andy Beretvas

    Physicists gave funny names to the heavy quark cousins of those that make up ordinary matter: charm, strange, bottom, top. The Standard Model predicts that the laws governing the decays of strange, charm and bottom quarks differ if particles are replaced with antiparticles and observed in a mirror. This difference, CP violation in particle physics lingo, has been established for strange and bottom quarks. But for charm quarks the differences are so tiny that no one has observed them so far. Observing differences larger than predictions could provide much sought-after indications of new phenomena.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    A team of CDF scientists searched for these tiny differences by analyzing millions of decays of charm particles into pairs of charged kaons and pions, sifting through roughly a thousand trillion proton-antiproton collisions from the full CDF Run II data set. They studied CP violation by looking at whether the difference between the numbers of charm and anticharm decays occurring in each chunk of decay time varies with decay time itself.

    The results have a tiny uncertainty (two parts per thousand) but show no evidence of CP violation, as seen in the upper figure. The small residual decay asymmetry, which is constant in decay time, is due to the asymmetric layout of the detector. The combined result for charm decays into a pair of kaons and a pair of pions is the CP asymmetry parameter AΓ, which equals -0.12 ± 0.12 percent. The results are consistent with the current best determinations; combined with them, they will tighten the constraints excluding new phenomena in nature.

    These plots show the effective lifetime asymmetries as a function of decay time for D → K+K- (top) and D → π+π- (bottom) samples. Results of the fits not allowing for (dotted red line) and allowing for (solid blue line) CP violation are overlaid.
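    The kind of fit described above can be illustrated with a toy calculation. Every number below is invented purely for illustration and is not CDF data: per-bin asymmetries between charm and anticharm counts are formed as a function of decay time and a straight line is fitted; a nonzero slope would signal CP violation, while a constant offset only reflects detector asymmetries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decay-time bins (in units of the charm lifetime) and event counts.
# Real analyses use millions of decays; these numbers are made up.
t = np.linspace(0.5, 5.0, 10)                 # bin centres
true_slope = -0.0012                          # a hypothetical -A_Gamma
n_charm = 1e6 * np.exp(-t)                    # falling exponential spectrum
n_anti = n_charm * (1 - 2 * true_slope * t)   # tiny time-dependent offset

# Poisson-fluctuate the counts, then form the per-bin asymmetry
n_charm = rng.poisson(n_charm)
n_anti = rng.poisson(n_anti)
asym = (n_charm - n_anti) / (n_charm + n_anti)

# A nonzero slope of asymmetry vs decay time would signal CP violation
slope, offset = np.polyfit(t, asym, 1)
print(f"fitted slope = {slope:.5f}, offset = {offset:.5f}")
```

    With realistic statistics the fitted slope carries an uncertainty of the same order as the effect itself, which is why per-mille precision requires enormous samples.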

    See the full article here.

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.




  • richardmitnick 11:58 am on October 30, 2014

    From LC Newsline: “The future of Higgs physics” 

    Linear Collider Collaboration

    30 October 2014
    Joykrit Mitra

    In 2012, the ATLAS and CMS experiments at CERN’s Large Hadron Collider announced the discovery of the Higgs boson. The Higgs was expected to be the final piece of the particular jigsaw that is the Standard Model of particle physics, and its discovery was a monumental event.

    Event recorded with the CMS detector in 2012 at a proton-proton centre of mass energy of 8 TeV. The event shows characteristics expected from the decay of the SM Higgs boson to a pair of photons (dashed yellow lines and green towers). Image: L. Taylor, CMS collaboration /CERN

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    CERN CMS New

    But more precise studies of it are needed than the LHC is able to provide. That is why, years earlier, a machine like the International Linear Collider had been envisioned as a Higgs factory, and the Higgs discovery set the stage for its possible construction.

    ILC schematic

    Over the years, instruments for probing the universe have become more sophisticated. More refined data has hinted that aspects of the Standard Model are incomplete. If built, a machine such as the ILC will help reveal how wide a gulf there is between the universe and our understanding of it by probing the Higgs to unprecedented levels. And perhaps, as some physicists think, it will uproot the Standard Model and make way for an entirely new physics.

    In the textbook version, the Higgs boson is a single particle, and its alleged progenitor, the mysterious Higgs field that pervades every point in the universe, is a single field. But this theory is still to be tested.

    “We don’t know whether the Higgs field is one field or many fields,” said Michael Peskin of SLAC’s Theoretical Physics Group. “We’re just now scratching the surface at the LHC.”

    The LHC collides proton beams together, and the collision environment is not a clean one. Protons are made up of quarks and gluons, and in an LHC collision it’s really these many component parts – not the larger proton – that interact. During a collision, there are simply too many components in the mix to determine the initial energies of each one. Without knowing them, it’s not possible to precisely calculate properties of the particles generated from the collision. Furthermore, Higgs events at the LHC are exceptionally rare, and there is so much background that the amount of data that scientists have to sift through to glean information on the Higgs is astronomical.

    “There are many ways to produce an event that looks like the Higgs at the LHC,” Peskin said. “Lots of other things happen that look exactly like what you’re trying to find.”

    The ILC, on the other hand, would collide electrons and positrons, which are themselves fundamental particles with no component parts. Scientists would know their precise initial energy states, and there would be significantly fewer distractions from the measurement standpoint. The ILC is designed to accelerate particle beams up to energies of 250 billion electronvolts, extendable eventually to 500 billion electronvolts. The higher the particles’ energies, the more Higgs events they produce. It’s the best possible scenario for probing the Higgs.

    If the ILC is built, physicists will first want to test whether the Higgs particle discovered at the LHC indeed has the properties predicted by the Standard Model. To do this, they plan to study Higgs couplings with known subatomic particles. The higher a particle’s mass, the proportionally stronger its coupling ought to be with the Higgs boson. The ILC will be sensitive enough to detect and accurately measure Higgs couplings with light particles, for instance with charm quarks. Such a coupling can be detected at the LHC in principle but is very difficult to measure accurately.
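    The mass-proportionality of these couplings can be made concrete with the textbook Standard Model relation y_f = √2·m_f/v, where v ≈ 246 GeV is the vacuum expectation value of the Higgs field. The quark and lepton masses below are rough reference values used only for illustration:

```python
# Standard Model fermion couplings to the Higgs scale with mass:
#   y_f = sqrt(2) * m_f / v, with v ~ 246 GeV.
V = 246.0
masses = {"charm": 1.27, "tau": 1.777, "bottom": 4.18, "top": 173.0}  # GeV

for name, m in masses.items():
    y = (2 ** 0.5) * m / V
    print(f"{name:>6}: m = {m:7.2f} GeV, Yukawa y ~ {y:.4f}")
```

    The charm coupling comes out roughly a hundred times smaller than the top coupling, which is why measuring it demands the clean environment described above.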

    The ILC can also help measure the exact lifetime of the Higgs boson. The more particles the Higgs couples to, the faster it decays and disappears. A difference between the measured lifetime and the lifetime projected from the Standard Model could reveal what fraction of possible particles, or of the Higgs’ interactions with them, we’ve actually discovered.
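    A minimal sketch of the lifetime argument: the lifetime is ħ divided by the total decay width, and the total width is the sum of all partial widths, so any undetected channel shortens the lifetime. The 10 percent invisible fraction used below is purely hypothetical:

```python
# Lifetime from total decay width: tau = hbar / Gamma_total,
# where Gamma_total is the sum of all partial widths.
HBAR = 6.582e-25           # GeV * s
GAMMA_SM = 4.1e-3          # GeV; predicted SM total Higgs width (~4.1 MeV)

tau_sm = HBAR / GAMMA_SM
print(f"SM lifetime: {tau_sm:.2e} s")

# Hypothetical undetected channel carrying 10% of the total width:
gamma_new = GAMMA_SM / 0.9   # total width if 10% of decays are invisible
tau_new = HBAR / gamma_new
print(f"with 10% invisible width: {tau_new:.2e} s")
```

    A measured lifetime shorter than the Standard Model projection would point to exactly the kind of hidden decay channel Campbell describes below.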

    “Maybe the Higgs interacts with something new that is very hard to detect at a hadron collider, for example if it cannot be observed directly, like neutrinos,” speculated John Campbell of Fermilab’s Theoretical Physics Department.

    These investigations could yield some surprises. Unexpected vagaries in measurement could point to yet undiscovered particles, which in turn would indicate that the Standard Model is incomplete. The Standard Model also has predictions for the coupling between two Higgs bosons, and physicists hope to study this as well to check if there are indeed multiple kinds of Higgs particles.

    “It could be that the Higgs boson is only a part of the story, and it has explained what’s happened at colliders so far,” Campbell said. “The self-coupling of the Higgs is there in the Standard Model to make it self-consistent. If not the Higgs, then some other thing has to play that role that self-couplings play in the model. Other explanations could also provide dark matter candidates, but it’s all speculation at this point.”

    3D plot showing how dark matter distribution in our universe has grown clumpier over time. (Image: NASA, ESA, R. Massey from California Institute of Technology)

    The Standard Model has been very self-consistent so far, but some physicists think it isn’t entirely valid. It ignores the universe’s accelerating expansion caused by dark energy, as well as the mysterious dark matter that still allows matter to clump together and galaxies to form. There is speculation about the existence of undiscovered mediator particles that might be exchanged between dark matter and the Higgs field. The Higgs particle could be a likely gateway to this unknown physics.

    With the LHC set to be operational again next year, an optimistic possibility is that a new particle or two might be dredged out from trillions of collision events in the near future. If built, the ILC would be able to build on such discoveries, just as in case of the Higgs boson, and provide a platform for more precise investigation.

    The collaboration between a hadron collider like the LHC and an electron-positron collider of the scale of the ILC could uncover new territories to be explored and help map them with precision, making particle physics that much richer.

    See the full article here.

    The Linear Collider Collaboration is an organisation that brings the two most likely candidates, the Compact Linear Collider Study (CLIC) and the International Linear Collider (ILC), together under one roof. Headed by former LHC Project Manager Lyn Evans, it strives to coordinate the research and development work that is being done for accelerators and detectors around the world and to take the linear collider project to the next step: a decision that it will be built, and where.

    Some 2000 scientists – particle physicists, accelerator physicists, engineers – are involved in the ILC or in CLIC, and often in both projects. They work on state-of-the-art detector technologies, new acceleration techniques, the civil engineering aspect of building a straight tunnel of at least 30 kilometres in length, a reliable cost estimate and many more aspects that projects of this scale require. The Linear Collider Collaboration ensures that synergies between the two friendly competitors are used to the maximum.





  • richardmitnick 4:34 am on October 30, 2014

    From Astrobiology Magazine: “Planetary Atmospheres a Key to Assessing Possibilities for Life” 

    Oct 30, 2014
    No Writer Credit

    A planetary atmosphere is a delicate thing. On Earth, we are familiar with the ozone hole — a tear in our upper atmosphere caused by human-created chemicals that thin away the ozone. Threats to an atmosphere, however, can also come from natural causes.

    Earth’s atmosphere likely changed from a hydrogen- and helium-heavy one to the nitrogen and oxygen mix we see today. Credit: NASA

    If a big enough asteroid smacks into a planet, it can strip the atmosphere away. Radiation from a star can also make an atmosphere balloon, causing its lighter elements to escape into space.

    Understanding how permanent an atmosphere is, where it came from and, most importantly, what it is made of is key to determining whether a planet outside our solar system is habitable. Our instruments aren’t yet sophisticated enough to look at atmospheres surrounding Earth-sized planets, but astronomers are starting to gather data on larger worlds to do comparative studies.

    One such example was recently accepted by The Astrophysical Journal and is available now as a preprint on arXiv. The astronomers created models of planetary formation and then simulated atmospheric stripping, the process by which a young star’s radiation can push lighter elements out into space.

    Next, the team compared their findings to data gathered from NASA’s planet-hunting Kepler Space Telescope. The researchers predict that the atmospheric mass of the planets Kepler found is, in some cases, far greater than the thin veneer of air covering Earth.

    NASA Kepler Telescope

    Co-author Christoph Mordasini, who studies planet and star formation at the Max Planck Institute for Astronomy in Heidelberg, Germany, cautioned there is likely an observational bias with the Kepler data.

    “Kepler systems are so compact, with the planets closer to their star than in our solar system,” said Mordasini.

    Astronomers are still trying to understand why.

    “Maybe some of these objects formed early in their system’s history, in the presence of lots of gas and dust,” he said. “This would have made their atmospheres relatively massive compared to Earth. Our planet probably only formed when the gas was already gone, so it could not form a similar atmosphere.”

    Blowing gas away

    Planetary systems come to be in a cloud of gas and dust, the theory goes. If enough mass gathers in a part of the cloud, that section collapses and creates a star surrounded by a thin disk. When the star ignites, its radiative force will gradually clear the area around it of any debris.

    Over just a few million years, the hydrogen and helium in the disk surrounding the star partially spirals onto the star, while the rest gets pushed farther and farther out into space. Proto-Earth likely had a hydrogen-rich atmosphere at this stage, but over time (with processes such as vulcanism, comet impacts, and biological activity) its atmosphere gradually changed to the nitrogen and oxygen composition we see today.

    Kepler’s data has shown other differences from our own solar system, where there is a vast size difference between Earth and the next-biggest planet, Neptune, whose radius is almost four times Earth’s. This means there’s a big dividing line, when it comes to size, between terrestrial planets and gas giants in our solar system.

    This global view of the surface of Venus is centered at 180 degrees east longitude. Magellan synthetic aperture radar mosaics from the first cycle of Magellan mapping are mapped onto a computer-simulated globe to create this image. Data gaps are filled with Pioneer Venus Orbiter data, or a constant mid-range value. Simulated color is used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. Credit: NASA/JPL

    In Kepler surveys (as well as surveys from other planet-hunting telescopes), scientists have found more of a gradient. There are other planetary systems out there with planets in between Earth’s and Neptune’s sizes, which are sometimes called “super-Earths” or “mini-Neptunes.” Whether planets of this size are habitable is up for debate.

    “The gap between the Earth’s and Uranus’ or Neptune’s size, and also in their composition, doesn’t exist in extrasolar planets. So, what we see in the Solar System is not the rule,” Mordasini said.

    The planets that Kepler has picked up, however, tend to be massive and closer to their star, and are therefore easier to detect. They pass more frequently across the face of their parent star, making them more easily spotted from Earth.

    The size implies that they managed to grab their disk’s primordial hydrogen and helium atmosphere before it got blown away. Hydrogen and helium are light elements, so a star’s radiation would puff up the hydrogen and helium atmosphere far more than what we see on Earth, with its heavier elements.

    What does this mean? The team predicts that in some cases, when astronomers measure the radius of a planet, that measurement also includes a bulky atmosphere. In other words, the planet underneath could be a lot smaller than Kepler’s measurements suggest.
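    A rough sketch of why a bulky atmosphere inflates the measured radius: transit surveys infer a planet’s size from the fractional dip in starlight, depth = (R_planet/R_star)², so the radius they report includes any extended envelope. The planet sizes below are invented for illustration:

```python
# Transit depth scales with the square of the planet-to-star radius ratio,
# so an extended hydrogen envelope deepens the dip the telescope sees.
R_SUN = 696_340.0   # km
R_EARTH = 6_371.0   # km

def transit_depth(r_planet_km, r_star_km=R_SUN):
    """Fractional flux drop during a central transit."""
    return (r_planet_km / r_star_km) ** 2

rocky_core = 1.5 * R_EARTH        # hypothetical rocky body
with_atmosphere = 2.5 * R_EARTH   # same body plus an extended H/He envelope

print(f"core alone:      depth = {transit_depth(rocky_core):.2e}")
print(f"with atmosphere: depth = {transit_depth(with_atmosphere):.2e}")
```

    In this toy case the envelope nearly triples the transit depth, even though the solid planet is unchanged.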

    These estimates assume that the planet has an iron core and silicate mantle, just like the Earth, but orbits its parent star about 10 times closer than we orbit ours. If the atmosphere is more massive (even an atmosphere amounting to 1 percent of the planet’s mass would be many thousands of times more massive than Earth’s), it creates more pressure on the surface.

    “It depends, but you can imagine this pressure is comparable to the deepest parts of the Earth’s ocean. Additionally, these atmospheres can be isolating and insulating for heat, so it’s also very hot on the surface,” Mordasini said.

    High temperatures on Earth are known to destroy amino acids, the building blocks of carbon-based life.

    Delicate atmosphere

    The atmosphere may be more massive, but it is also delicate. It wouldn’t take too much of a push to send hydrogen, the lightest element, away from the planet and into space.

    A habitable zone planet, Kepler-69c, in an artist’s impression. The world is probably an inhospitable “super-Venus,” but then again, it might be habitable, depending on the character of its atmosphere. Credit: NASA Ames/JPL-Caltech

    Young stars, like the Sun in its youth, are especially active in X-rays and ultraviolet radiation. When these forms of light hit a planetary atmosphere, they tend to heat it up. Since heating expands gases, the atmosphere grows. An atmosphere can expand so far that part of it becomes unbound from the planet’s gravity and escapes into space.
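    The physics of this thermal escape can be sketched by comparing a molecule’s typical thermal speed with the planet’s escape velocity: light hydrogen moves fastest and is lost first, while heavier molecular nitrogen stays well bound. Earth values are used below, with a roughly assumed exospheric temperature:

```python
import math

# Escape velocity vs mean thermal speed of a gas molecule.
G = 6.674e-11        # m^3 kg^-1 s^-2
K_B = 1.381e-23      # J/K
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def thermal_speed(mass_kg, temp_k):
    """Mean thermal speed sqrt(3kT/m) of a gas molecule."""
    return math.sqrt(3 * K_B * temp_k / mass_kg)

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)   # ~11.2 km/s for Earth
m_h = 1.67e-27    # atomic hydrogen, kg
m_n2 = 4.65e-26   # molecular nitrogen, kg
T_EXO = 1000.0    # rough exospheric temperature, K

print(f"escape velocity:  {v_escape / 1e3:.1f} km/s")
print(f"H  thermal speed: {thermal_speed(m_h, T_EXO) / 1e3:.1f} km/s")
print(f"N2 thermal speed: {thermal_speed(m_n2, T_EXO) / 1e3:.1f} km/s")
```

    Hydrogen’s mean speed is a large fraction of Earth’s escape velocity, so the fast tail of its velocity distribution steadily leaks away; nitrogen’s is roughly a tenth of it, which is why our heavier atmosphere persists.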

    In our own solar system, for example, Mars likely lost its hydrogen to space over time while a heavier kind of hydrogen (called deuterium) remained behind. A new NASA orbiting spacecraft, Mars Atmosphere and Volatile Evolution (MAVEN), has just arrived at the Red Planet to study atmospheric escape in action today, and researchers will try to extrapolate that knowledge to other worlds.


    By contrast, the planet Venus is an example of an exceptionally persistent atmosphere. The mostly carbon dioxide atmosphere is so thick today that the planet is completely shrouded in clouds. Underneath is a hellish environment: the spacecraft that have made it there have survived only a few minutes in the 864 degrees Fahrenheit (462 degrees Celsius) heat at the surface. It is widely presumed that atmospheres like that of Venus would be too hot for carbon-based life.

    Why Venus, Mars and Earth are so different in their atmospheric composition and history is among the questions puzzling astronomers today. Understanding atmospheric escape on each of these worlds will be helpful, scientists say.

    “How strong atmospheric escape is depends on fundamental properties such as mass or planetary orbit,” Mordasini said. “We found out for giant planets like Jupiter, the operation is typically not as strong.”

    Future work of the team includes considering atmospheres that are not made of hydrogen or helium, which could bring researchers a step closer to understanding how different types of elements work on planets. Eventually, this could feed into models predicting habitability.

    See the full article here.





  • richardmitnick 4:11 am on October 30, 2014

    From livescience: “70,000-Year-Old Mammoth Skeleton Uncovered in Idaho” 


    October 28, 2014
    Megan Gannon

    The skeleton of a mammoth was discovered this month on the banks of a reservoir in Idaho. Paleontologists have rescued part of its skull and a tusk, but there could be a lot more buried below the surface.

    Excavators raced rising water levels to unearth the exposed mammoth fossil. Credit: Bureau of Reclamation Photo by Dave Walsh

    “We may even have a complete mammoth,” said Mary Thompson, a vertebrate paleontologist and senior collections manager at the Idaho Museum of Natural History. “This is very unique for us.”

    Every year, when water levels drop in Idaho’s American Falls Reservoir, teams of paleontologists and volunteers with the Bureau of Reclamation walk the beaches in search of fossils. The ancient bones of camels, Bison latifrons, giant ground sloths, saber-toothed cats and other extinct Ice Age beasts sometimes poke out of the freshly eroded reservoir banks.

    The fossil was discovered by a volunteer with the Bureau of Reclamation who was scanning the reservoir beaches for freshly revealed fossils. (Credit: Bureau of Reclamation Photo by Dave Walsh)

    Earlier this month, one volunteer stumbled upon the mammoth fossil on a cliff face about 30 feet (9 meters) below the reservoir’s high-water mark. Thompson said she could tell it was from a mammoth as soon as she got the pictures in her email inbox. She and a team of students and volunteers mounted a quick, two-and-a-half-day excavation to dig up the bones as they raced rising water levels.

    “I’ve been here since 1990, and we haven’t gotten anything this complete from that site since then,” Thompson told Live Science. “Out of this area, we have one other complete mammoth.”

    This photo, taken on Oct. 16, shows Idaho State University students Casey Dooms and Jeff Castro brushing the mammoth fossil clean on the edge of American Falls Reservoir in southeastern Idaho. Some pieces of the skeleton were excavated over the course of two and a half days. (Credit: Bureau of Reclamation Photo by Dave Walsh)

    The excavators used plaster casts to remove most of the mammoth’s right tusk, which was about 7.5 inches (19 centimeters) in diameter. They also found part of its skull, a chunk of its mandible and two jagged upper molars. The specimen was transferred to the Idaho Museum of Natural History at Idaho State University in Pocatello.

    From the rings in the tusk, the researchers estimated that the mammoth was 16 years old — a fully grown adult — when it died. Based on the age of the surrounding sediments, Thompson thinks the mammoth must have been buried on its right side more than 72,000 years ago.

    The unexcavated parts of the mammoth were covered up with geotextile and soil after the short dig. Thompson said she hopes to bring a team back next year with ground-penetrating radar tools to get a better idea of what actually lies below the surface. If they are dealing with a full mammoth skeleton or even a partial skeleton, the team might need to figure out a way to get a backhoe down the steep reservoir bank to help with the excavation. This month, they did all the heavy lifting and digging by hand and with shovels.

    See the full article here.




  • richardmitnick 6:29 pm on October 29, 2014

    From LLNL: “Tiny carbon nanotube pores make big impact “ 

    Lawrence Livermore National Laboratory

    Oct. 29, 2014

    Anne M Stark

    A team led by Lawrence Livermore scientists has created a new kind of ion channel consisting of short carbon nanotubes, which can be inserted into synthetic bilayers and live cell membranes to form tiny pores that transport water, protons, small ions and DNA.

    These carbon nanotube “porins” have significant implications for future health care and bioengineering applications. Nanotube porins eventually could be used to deliver drugs to the body, serve as a foundation of novel biosensors and DNA sequencing applications, and be used as components of synthetic cells.

    Researchers have long been interested in developing synthetic analogs of biological membrane channels that could replicate the high efficiency and extreme selectivity for transporting ions and molecules typically found in natural systems. However, these efforts have always run into the difficulties of working with synthetic materials, and the results never matched the capabilities of biological proteins.

    Unlike a pill, which is absorbed slowly and delivered to the entire body, carbon nanotubes can pinpoint an exact area to treat without harming surrounding organs.

    “Many good and efficient drugs that treat diseases of one organ are quite toxic to another,” said Aleksandr Noy, an LLNL biophysicist who led the study and is the senior author on the paper appearing in the Oct. 30 issue of the journal Nature. “This is why delivery to a particular part of the body and only releasing it there is much better.”

    From left: Lawrence Livermore National Laboratory scientists Aleksandr Noy, Kyunghoon Kim and Jia Geng hold up a model of a carbon nanotube that can be inserted into live cells, which can pinpoint an exact area to treat without harming other organs. Photo by Julie Russell.

    The Lawrence Livermore team, together with colleagues at the Molecular Foundry at Lawrence Berkeley National Laboratory, the University of California’s Merced and Berkeley campuses, and the University of the Basque Country in Spain, created a much more efficient, biocompatible membrane pore channel out of a carbon nanotube (CNT), a straw-like molecule that consists of a rolled-up graphene sheet.

    This research showed that despite their structural simplicity, CNT porins display many characteristic behaviors of natural ion channels: they spontaneously insert into the membranes, switch between metastable conductance states, and display characteristic macromolecule-induced blockades. The team also found that, just like in the biological channels, local channel and membrane charges could control the ionic conductance and ion selectivity of the CNT porins.
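    The macromolecule-induced blockades mentioned above are detected in practice as transient dips in the ionic current through the pore. A toy sketch of that kind of event counting, with entirely made-up current values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nanopore experiments spot translocation events (e.g. DNA threading a
# CNT porin) as transient drops in ionic current.  Toy trace: a noisy
# open-pore baseline with a few rectangular blockades.
baseline, blockade, noise = 100.0, 60.0, 2.0   # picoamps (made-up)
trace = np.full(5000, baseline)
for start in (800, 2400, 4100):
    trace[start:start + 150] = blockade
trace += rng.normal(0, noise, trace.size)

# Flag samples below a midpoint threshold; each contiguous run is one event.
below = trace < (baseline + blockade) / 2
edges = np.diff(below.astype(int))
n_events = int((edges == 1).sum())
print(f"detected {n_events} blockade events")
```

    Real analyses also measure each event’s depth and duration, which carry information about the translocating molecule; this sketch only counts the threshold crossings.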

    “We found that these nanopores are a promising biomimetic platform for developing cell interfaces, studying transport in biological channels, and creating biosensors,” Noy said. “We are thinking about CNT porins as a first truly versatile synthetic nanopore that can create a range of applications in biology and materials science.”

    “Taken together, our findings establish CNT porins as a promising prototype of a synthetic membrane channel with inherent robustness toward biological and chemical challenges and exceptional biocompatibility that should prove valuable for bionanofluidic and cellular interface applications,” said Jia Geng, a postdoc who is the first co-author of the paper.

    Kyunghoon Kim, a postdoc and another co-author, added: “We also expect that our CNT porins could be modified with synthetic ‘gates’ to dramatically alter their selectivity, opening up exciting possibilities for their use in synthetic cells, drug delivery and biosensing.”

    Other LLNL researchers include Ramya Tunuguntla, Kang Rae Cho, Dayannara Munoz and Morris Wang. The team members performed some of the work at the Molecular Foundry, a DOE user facility as a part of its user project.

    See the full article here.

    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration


