Tagged: Physics

  • richardmitnick 4:29 pm on July 29, 2015 Permalink | Reply
    Tags: Physics

    From Physics: “Viewpoint: Sky Survey Casts Light on the Dark Universe” 

    Physics

    July 29, 2015
    Catherine Heymans, Institute for Astronomy, University of Edinburgh

    The Dark Energy Survey has generated a map of invisible dark matter by observing tiny gravitationally induced distortions in the images of distant galaxies

    1
    As the light from distant galaxies travels to us, it passes by massive structures filled with invisible dark matter (shown here as gray spheres). The gravity from this dark matter warps the surrounding spacetime, causing distortions in the perceived shapes of the background galaxies. Each galaxy becomes slightly elongated (more elliptical), depending on the distribution of dark matter along its light path. Galaxies near to each other in the sky (as shown on the left side) are distorted equally, making them appear more aligned. By measuring this alignment, astronomers can infer the size and location of massive structures (dotted circles), and thereby construct a map of the dark matter. APS/Alan Stonebraker; galaxy images from STScI/AURA, NASA, ESA, and the Hubble Heritage Team

    If you talked to any cosmologist today, you would most likely witness two conflicting emotions. The first is quite self-congratulatory, since the community has collectively nailed down the precise quantities of each component in the Universe, using multiple independent observations. The second is one of panic, as they admit that the major dark constituents of the Universe, though inferred to exist, remain elusive. The Dark Energy Survey (DES) is one of three optical imaging surveys in this decade’s hot competition to uncover the true nature of the dark side of our Universe. One way to demonstrate the future potential of these surveys is to identify and map dense clumps of dark matter through the distortions they cause in the perceived shapes of background galaxies. Chihway Chang of the Swiss Federal Institute of Technology (ETH) in Zurich, and Vinu Vikram of Argonne National Laboratory, Illinois, present a map of dark matter from the first 3% of data from DES, which is observing the Southern Sky over a full five-year mission [1, 2]. This very first glimpse reveals just how powerful this new sky survey will become in its quest to understand dark energy, which is driving the accelerating expansion of the Universe and thereby affecting the distribution of the dark matter that they have mapped.

    Dark matter is invisible, but it makes up 84% of all the matter in the Universe [3]. We only infer its presence because of the effect it has on both the light and matter that we can see. In the early Universe, the dominant clumps of dark matter gravitationally attract the normal matter that will go on to form the galaxies that we can see today. Dark matter really dictates where and when galaxies form in the Universe.

    Dark energy constitutes 68.5% of the total cosmic energy density of the Universe, apparently providing extra fuel to accelerate the post-Big Bang expansion of the Universe. This recent rapid expansion makes it harder for large-scale structures of matter to form and grow. Scientists hope to uncover the origin of dark energy by observing how it affects the growth of structures in the Universe over time.

    Einstein’s theory of general relativity tells us that mass curves spacetime. As light travels towards us from the distant Universe, its path is bent as it passes clumps of matter. Imagine light emitted from two neighboring distant galaxies (see Fig. 1). As these two rays of light traverse the Universe, over billions of light years, they will both pass by the same structures of matter. Every time the paths of the light rays are bent during this journey, the images of those neighboring galaxies that we observe become distorted in the same way. In the most extreme cases, this “gravitational lensing” results in the galaxies being stretched out into long arcs. But in the majority of situations, the lensing effect is more subtle, causing galaxies to appear more elongated, or elliptical, in one direction. This results in an apparent alignment of galaxies in the same part of the sky. The more matter that the light has passed, the stronger the resulting alignment will be. The amount of distortion, and hence alignment, is usually quite small, so to see it we need to take some sort of statistical average of all the distant galaxies in a small patch of sky. With no matter, we would expect to find that our average of randomly oriented galaxies would look like a circle (i.e., with no preferred orientation). With matter, however, we’ll find our average of lensed-aligned galaxies to be an ellipse.
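
    A toy numerical sketch of that statistical averaging (in Python), not the DES pipeline: the galaxy count, the 1% distortion, and the simple “observed shape = intrinsic shape + lensing distortion” model are illustrative assumptions only.

        # Toy model: each galaxy's shape as a complex ellipticity e1 + i*e2.
        # Randomly oriented intrinsic shapes average toward zero; a common
        # lensing distortion ("shear") survives the averaging.
        import numpy as np

        rng = np.random.default_rng(42)

        n_gal = 20000                    # galaxies in one small patch of sky (assumed)
        shear = 0.01 + 0.0j              # assumed 1%-level lensing distortion
        intrinsic = 0.25 * np.exp(2j * np.pi * rng.random(n_gal))   # random orientations

        observed = intrinsic + shear     # simplified: observed shape = intrinsic + shear
        estimate = observed.mean()       # statistical average over the patch
        error = observed.std() / np.sqrt(n_gal)

        print(f"recovered distortion ~ {estimate.real:.4f} +/- {error:.4f} (input 0.0100)")

    With tens of thousands of galaxies the randomly oriented intrinsic shapes average away, leaving an estimate close to the assumed 1% distortion; this is the sense in which the alignment is measured statistically.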

    We can think of the induced galaxy alignment as a faint signature that dark matter has written across the cosmos to tell us exactly where it is and how much of it there is. It’s precisely this signature that the Dark Energy Survey has observed with the 4-meter Blanco telescope in Chile.

    CTIO Victor M Blanco 4m Telescope
    CTIO Victor M Blanco 4m Telescope interior
    CTIO/ Victor M Blanco telescope

    In two companion papers, Chang and Vikram and their colleagues present imaging data of a contiguous area of sky spanning 139 square degrees. This is roughly the area of a patch of sky covered by your hand held at arm’s length. By analyzing these images containing 2 million distant galaxies, the researchers are able to map dark matter across this region of space.

    The DES dark matter map is not the first of its kind. Several pioneering analyses have come before it, most notably the Canada-France-Hawaii Telescope [CFHT] Lensing Survey [4], which used four times as many distant galaxies as DES to map an area of similar size at higher resolution.

    CFHT
    CFHT Interior
    CFHT

    The two teams have reached the same conclusions, though: The luminous matter that we can see is housed within the dark matter structures that we cannot see, and this dark matter forms a cosmic web of filaments, knots, and voids. As it continues collecting and analyzing data, DES will be able to map how these dark matter structures evolve over time.

    At this point we should address just how challenging this observational measurement is. The change that we wish to detect in galaxy ellipticity induced by the dark matter is of order 1%, an almost imperceptible amount considering the difference between a circle and a line would be a change in ellipticity of 100%. When the light from these distant galaxies reaches Earth, the atmosphere, telescope, and camera induce an additional distortion in the ellipticity of the observed galaxies of the order 10%. Luckily there are stars in our own Milky Way Galaxy that act as point sources, allowing us to model this terrestrial distortion. Much of the painstaking work presented by Vikram and Chang convinces us that the correction that they apply is sufficiently accurate, and furthermore, that the design of DES is optimized to minimize this source of error in their analysis.
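
    A heavily simplified cartoon of that star-based correction: real pipelines model the full point-spread function across the field, whereas the sketch below just subtracts an average instrumental ellipticity measured from stars. All numbers are invented for illustration.

        # Cartoon only: real analyses model the full point-spread function, not a
        # single ellipticity offset. All numbers below are made up for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        psf = 0.10 + 0.0j                # ~10% instrumental/atmospheric distortion
        signal = 0.01 + 0.0j             # ~1% lensing signal we are after

        # Stars are point sources, so their measured shapes trace the distortion alone.
        stars = psf + 0.01 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
        psf_model = stars.mean()

        galaxies = 0.25 * np.exp(2j * np.pi * rng.random(50000)) + signal + psf
        corrected = galaxies - psf_model # naive subtraction of the modelled distortion

        print(f"mean ellipticity before: {galaxies.mean().real:.3f}, after: {corrected.mean().real:.3f}")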

    The next few years will be extremely exciting for dark Universe enthusiasts. When the Dark Energy Survey completes its data collection, it will be able to map dark matter over 5000 square degrees using gravitational lensing. Two rival surveys—the European Kilo-Degree Survey [5], and the Hyper Suprime-Cam survey [6]—will both image a smaller area, totaling 1500 square degrees of the cosmos, but with higher precision and depth. These three international collaborations will use their pinpointing of dark matter to confront different cosmological theories that try to explain the mysterious accelerating expansion of our Universe. The acceleration, fueled by dark energy, affects the clumpiness of the dark matter distribution. Mapping mass across cosmic time therefore enables the measurement of dark energy properties at different epochs. A measurement that revealed a time evolution in the dark energy would rule out the current baseline theory that dark energy arises from some universal cosmological constant. The work by Vikram and Chang is just the thrilling start. Expect to see great advances in our understanding as the data collected by these three major new facilities grows.

    This research is published in Physical Review Letters and Physical Review D.

    References

    1. V. Vikram et al., “Wide-Field Lensing Mass Maps from DES Science Verification Data: Methodology and Detailed Analysis,” Phys. Rev. D 92, 022006 (2015).
    2. C. Chang et al., “Wide-Field Lensing Mass Maps from DES Science Verification Data,” Phys. Rev. Lett. 115, 051301 (2015).
    3. P. A. R. Ade et al. (Planck Collaboration), “Planck 2015 Results. XIII. Cosmological Parameters,” arXiv:1502.01589.
    4. L. Van Waerbeke et al., “CFHTLenS: Mapping the Large-Scale Structure with Gravitational Lensing,” Mon. Not. R. Astron. Soc. 433, 3373 (2013).
    5. J. T. A. de Jong et al., “The Kilo-Degree Survey,” The Messenger 154, 44 (2013).
    6. S. Miyazaki et al., “Properties of Weak Lensing Clusters Detected on Hyper Suprime-Cam 2.3 Square Degree Field,” Astrophys. J. 807, 22 (2015).

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

     
  • richardmitnick 4:04 pm on July 28, 2015 Permalink | Reply
    Tags: Physics

    From BNL: “New Computer Model Could Explain how Simple Molecules Took First Step Toward Life” 

    Brookhaven Lab

    July 28, 2015
    Alasdair Wilkins

    Two Brookhaven researchers developed a theoretical model to explain the origins of self-replicating molecules

    1
    Brookhaven researchers Sergei Maslov (left) and Alexei Tkachenko developed a theoretical model to explain molecular self-replication.

    Nearly four billion years ago, the earliest precursors of life on Earth emerged. First, small, simple molecules, or monomers, banded together to form larger, more complex molecules, or polymers. Then those polymers developed a mechanism that allowed them to self-replicate and pass their structure on to future generations.

    We wouldn’t be here today if molecules had not made that fateful transition to self-replication. Yet despite the fact that biochemists have spent decades searching for the specific chemical process that can explain how simple molecules could make this leap, we still don’t really understand how it happened.

    Now Sergei Maslov, a computational biologist at the U.S. Department of Energy’s Brookhaven National Laboratory and adjunct professor at Stony Brook University, and Alexei Tkachenko, a scientist at Brookhaven’s Center for Functional Nanomaterials (CFN), have taken a different, more conceptual approach. They’ve developed a model that explains how monomers could very rapidly make the jump to more complex polymers. And what their model points to could have intriguing implications for CFN’s work in engineering artificial self-assembly at the nanoscale. Their work is published in the July 28, 2015 issue of The Journal of Chemical Physics.

    To understand their work, let’s consider the most famous organic polymer, and the carrier of life’s genetic code: DNA. This polymer is composed of long chains of specific monomers called nucleotides, of which the four kinds are adenine, thymine, guanine, and cytosine (A, T, G, C). In a DNA double helix, each specific nucleotide pairs with another: A with T, and G with C. Because of this complementary pairing, it would be possible to put a complete piece of DNA back together even if just one of the two strands was intact.
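
    A minimal sketch of that complementary pairing, with an invented sequence (the convention that the partner strand is read in the reverse direction is ignored here for simplicity):

        # Rebuild the partner strand from one intact strand via base pairing.
        PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

        def complement(strand: str) -> str:
            return "".join(PAIR[base] for base in strand)

        intact = "ATGCCGTA"
        print(intact, "pairs with", complement(intact))   # -> TACGGCAT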

    While DNA has become the molecule of choice for encoding biological information, its close cousin RNA likely played this role at the dawn of life. This is known as the RNA world hypothesis, and it’s the scenario that Maslov and Tkachenko considered in their work.

    A single complete RNA strand that guides such reassembly is called a template strand, and the use of a template to piece together monomer fragments is known as template-assisted ligation. This concept is at the crux of their work. They asked whether that piecing together of complementary monomer chains into more complex polymers could occur not as the healing of a broken polymer, but rather as the formation of something new.

    “Suppose we don’t have any polymers at all, and we start with just monomers in a test tube,” explained Tkachenko. “Will that mixture ever find its way to make those polymers? The answer is rather remarkable: Yes, it will! You would think there is some chicken-and-egg problem—that, in order to make polymers, you already need polymers there to provide the template for their formation. Turns out that you don’t really.”

    Instilling memory

    2
    A schematic drawing of template-assisted ligation, shown in this model to give rise to autocatalytic systems. No image credit.

    Maslov and Tkachenko’s model imagines some kind of regular cycle in which conditions change in a predictable fashion—say, the transition between night and day. Imagine a world in which complex polymers break apart during the day, then repair themselves at night. The presence of a template strand means that the polymer reassembles itself precisely as it was the night before. That self-replication process means the polymer can transmit information about itself from one generation to the next. That ability to pass information along is a fundamental property of life.

    “The way our system replicates from one day cycle to the next is that it preserves a memory of what was there,” said Maslov. “It’s relatively easy to make lots of long polymers, but they will have no memory. The template provides the memory. Right now, we are solving the problem of how to get long polymer chains capable of memory transmission from one unit to another to select a small subset of polymers out of an astronomically large number of solutions.”

    According to Maslov and Tkachenko’s model, a molecular system only needs a very tiny percentage of more complex molecules—even just dimers, or pairs of monomers joined together—to start merging into the longer chains that will eventually become self-replicating polymers. This neatly sidesteps one of the most vexing puzzles of the origins of life: Self-replicating chains likely need to be very specific sequences of at least 100 paired monomers, yet the odds of 100 such pairs randomly assembling themselves in just the right order are practically zero.
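
    A back-of-the-envelope count (not a figure from the paper) shows why: with four kinds of monomer, the number of distinct 100-unit sequences is

        \[ 4^{100} = 2^{200} \approx 1.6 \times 10^{60}, \]

    so the chance of blind assembly producing one particular self-replicating sequence is roughly one in 10^60.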

    “If conditions are right, there is what we call a first-order transition, where you go from this soup of completely dispersed monomers to this new solution where you have these long chains appearing,” said Tkachenko. “And we now have this mechanism for the emergence of these polymers that can potentially carry information and transmit it downstream. Once this threshold is passed, we expect monomers to be able to form polymers, taking us from the primordial soup to a primordial soufflé.”

    While the model’s concept of template-assisted ligation does describe how DNA—as well as RNA—repairs itself, Maslov and Tkachenko’s work doesn’t require that either of those was the specific polymer for the origin of life.

    “Our model could also describe a proto-RNA molecule. It could be something completely different,” Maslov said.

    Order from disorder

    The fact that Maslov and Tkachenko’s model doesn’t require the presence of a specific molecule speaks to their more theoretical approach.

    “It’s a different mentality from what a biochemist would do,” said Tkachenko. “A biochemist would be fixated on specific molecules. We, being ignorant physicists, tried to work our way from a general conceptual point of view, as there’s a fundamental problem.”

    That fundamental problem is the second law of thermodynamics, which states that systems tend toward increasing disorder and lack of organization. The formation of long polymer chains from monomers is the precise opposite of that.

    “How do you start with the regular laws of physics and get to these laws of biology, which make things run backward, which make things more complex rather than less complex?” Tkachenko queried. “That’s exactly the jump that we want to understand.”

    Applications in nanoscience

    The work is an outgrowth of efforts at the Center for Functional Nanomaterials, a DOE Office of Science User Facility, to use DNA and other biomolecules to direct the self-assembly of nanoparticles into large, ordered arrays. While CFN doesn’t typically focus on these kinds of primordial biological questions, Maslov and Tkachenko’s modeling work could help CFN scientists engaged in cutting-edge nanoscience research to engineer even larger and more complex assemblies using nanostructured building blocks.

    “There is a huge interest in making engineered self-assembled structures, so we were essentially thinking about two problems at once,” said Tkachenko. “One is relevant to biologists, and the second asks whether we can engineer a nanosystem that will do what our model does.”

    The next step will be to determine whether template-aided ligation can allow polymers to begin undergoing the evolutionary changes that characterize life as we know it. While this first round of research involved relatively modest computational resources, that next phase will require far more involved models and simulations.

    Maslov and Tkachenko’s work has solved the problem of how long polymer chains capable of information transmission from one generation to the next could emerge from the world of simple monomers. Now they are turning their attention to how such a system could naturally narrow itself down from exponentially many polymers to only a select few with desirable sequences.

    “What we needed to show here was that this template-based ligation does result in a set of polymer chains, starting just from monomers,” said Tkachenko. “So the next question we will be asking is whether, because of this template-based merger, we will be able to see specific sequences that will be more ‘fit’ than others. So this work sets the stage for the shift to the Darwinian phase.”

    This work was supported by the DOE Office of Science.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 9:45 am on July 28, 2015 Permalink | Reply
    Tags: Physics, Surface-plasmon-polaritons

    From ETH: “Smaller, faster, cheaper” 

    ETH Zurich bloc

    ETH Zurich

    28.07.2015
    Oliver Morsch

    1
    Colourized electron microscope image of a micro-modulator made of gold. In the slit in the centre of the picture light is converted into plasmon polaritons, modulated and then re-converted into light pulses. (Photo: Haffner et al. Nature Photonics)

    Transmitting large amounts of data, such as those needed to keep the internet running, requires high-performance modulators that turn electric signals into light signals. Researchers at ETH Zurich have now developed a modulator that is a hundred times smaller than conventional models.

    In February 1880, in his laboratory in Washington, the American inventor Alexander Graham Bell developed a device which he himself called his greatest achievement, greater even than the telephone: the “photophone”. Bell’s idea to transmit spoken words over large distances using light was the forerunner of a technology without which the modern internet would be unthinkable. Today, huge amounts of data are sent incredibly fast through fibre-optic cables as light pulses. For that purpose they first have to be converted from electrical signals, which are used by computers and telephones, into optical signals. In Bell’s day it was a simple, very thin mirror that turned sound waves into modulated light. Today’s electro-optic modulators are more complicated, but they do have one thing in common with their distant ancestor: at several centimeters they are still rather large, especially when compared with electronic devices that can be as small as a few micrometers.

    In a seminal paper in the scientific journal Nature Photonics, Juerg Leuthold, professor of photonics and communications at ETH Zurich, and his colleagues now present a novel modulator that is a hundred times smaller and that can, therefore, be easily integrated into electronic circuits. Moreover, the new modulator is considerably cheaper and faster than common models, and it uses far less energy.

    The plasmon trick

    For this sleight of hand the researchers led by Leuthold and his doctoral student Christian Haffner, who contributed to the development of the modulator, use a technical trick. In order to build the smallest possible modulator they first need to focus a light beam whose intensity they want to modulate into a very small volume. The laws of optics, however, dictate that such a volume cannot be smaller than the wavelength of the light itself. Modern telecommunications use laser light with a wavelength of one and a half micrometers, which accordingly is the lower limit for the size of a modulator.

    In order to beat that limit and to make the device even smaller, the light is first turned into so-called surface-plasmon-polaritons. Plasmon-polaritons are a combination of electromagnetic fields and electrons that propagate along a surface of a metal strip. At the end of the strip they are converted back to light once again. The advantage of this detour is that plasmon-polaritons can be confined in a much smaller space than the light they originated from.

    Refractive index changed from the outside

    In order to control the power of the light that exits the device, and thus to create the pulses necessary for data transfer, the researchers use the working principle of an interferometer. For instance, a laser beam can be split into two arms by a beam splitter and recombined with a beam combiner. The light waves then overlap (they “interfere”) and strengthen or weaken each other, depending on their relative phase in the two arms of the interferometer. A change in phase can result from a difference in the refractive index, which determines the speed of the waves. If one arm contains a material whose refractive index can be changed from the outside, the relative phase of the two waves can be controlled and hence the interferometer can be used as a light modulator.
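
    The textbook Mach-Zehnder relation behind this can be written as a small Python sketch; the arm length and refractive-index changes below are illustrative assumptions, not the parameters of the ETH device.

        # Ideal, lossless Mach-Zehnder interferometer: output power follows
        # cos^2(delta_phi / 2), with the phase difference set by the induced
        # refractive-index change. Arm length and index values are illustrative.
        import math

        def transmission(delta_n: float, arm_length_m: float, wavelength_m: float) -> float:
            delta_phi = 2 * math.pi * delta_n * arm_length_m / wavelength_m
            return math.cos(delta_phi / 2) ** 2

        wavelength = 1.5e-6              # ~1.5 micrometre telecom wavelength, as in the article
        arm = 10e-6                      # assumed micrometre-scale active section

        for dn in (0.0, 0.0375, 0.075):  # sweeps the phase difference from 0 to pi
            print(f"index change {dn:.4f} -> relative output {transmission(dn, arm, wavelength):.2f}")

    Sweeping the induced index change from zero to the value giving a phase difference of π takes the output from full transmission to darkness, which is the on/off modulation used to encode data.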

    In the modulator developed by the ETH researchers it is not light beams, but rather plasmon-polaritons that are sent through an interferometer that is only half a micrometer wide. By applying a voltage the refractive index and hence the velocity of the plasmons in one arm of the interferometer can be varied, which in turn changes their amplitude of oscillation at the exit. After that, the plasmons are re-converted into light, which is fed into a fibre optic cable for further transmission.

    Faster communication with less energy

    Literature reference

    Haffner C et al.: All-plasmonic Mach-Zehnder modulator enabling optical high-speed communication at the microscale. Nature Photonics, 27 July 2015, doi: 10.1038/nphoton.2015.127

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ETH Zurich campus
    ETH Zurich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zurich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers, it offers an inspiring working environment; to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zurich, underlining the excellent reputation of the university.

     
  • richardmitnick 9:37 am on July 18, 2015 Permalink | Reply
    Tags: Buckminsterfullerene, Physics

    From NOVA: “Long-Lasting “Spaceballs” Solve Century-Old Astronomy Puzzle” 

    PBS NOVA

    NOVA

    17 Jul 2015
    Anna Lieb

    1
    Buckminsterfullerene

    Nearly 100 years ago, a graduate student named Mary Lea Heger observed contaminated starlight. As the light traveled to her telescope, it was interacting with great clouds of—something—in the spaces between stars. No one could figure out what that something was, until now.

    When you pass sunlight through a prism, it separates into different colors, which correspond to different wavelengths. Astronomers like Heger often analyze incoming light by looking at something called a spectrum, which is a bit like looking at the colorful end of a prism. A spectrum shows you the relative strengths of the different wavelengths of the light comprising your sample. If the light interacts with something before it gets to you—say, a cloud of gas—the spectrum will change, because the gas absorbs some wavelengths more than others.

    Heger’s spectra had an unusual pattern of absorption features that didn’t match any known substance, so no one could figure out what those interstellar clouds were made of. The features, referred to as “diffuse interstellar bands” (DIBs), remained mysterious for many decades.

    In 1985, scientists accidentally discovered a strange molecule called buckminsterfullerene. Commonly known as a “buckyball,” this substance consists of 60 carbon atoms in a soccer-ball shaped arrangement. John Maier, a chemist at the University of Basel in Switzerland, and his collaborators suspected that buckyballs might be part of the strange signal coming from Heger’s interstellar medium, but they needed to know how buckyballs behave in space—a challenging thing to measure here on Earth.

    Here’s Elizabeth Gibney, reporting for Nature News:

    Maier’s team analysed that behaviour by measuring the light-absorption of buckyballs at a temperature of near-absolute zero and in an extremely high vacuum, achieved by trapping the ions using electric fields, in a buffer of neutral helium gas. “It was so technically challenging to create conditions such as in interstellar space that it took 20 years of experimental development,” says Maier.

    Maier’s team published the work in the journal Nature this week, helping to shed light on what the Nature commentary calls “one of the longest-standing mysteries of modern astronomy.” Not only do the results show that buckminsterfullerene makes up part of the mysterious interstellar medium, but they also suggest these “spaceballs” are stable enough to last millions of years as they wander far and wide through space.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 11:51 am on July 17, 2015 Permalink | Reply
    Tags: Physics

    From FNAL: “Frontier Science Result: MINERvA - Neutrinos in nuclei: studying group effects of interactions” 

    FNAL Home


    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    July 17, 2015

    1
    Joel Mousseau, University of Florida

    These results were presented by the author at a recent Joint Experimental-Theoretical Physics Seminar. Mousseau’s presentation is available online.

    2
    These plots show the neutrino deep inelastic scattering cross-section ratios for iron (top) and lead (bottom) versus the fractional momentum of the struck quark (Bjorken-x), for MINERvA data (black points) and the prediction (red line).

    Physics is a holistic science in which we consider not only the individual parts but also how these parts combine into groups. Nucleons, or protons and neutrons, combine in groups to form atomic nuclei. The differences between how free nucleons behave and how nucleons inside a nucleus (bound nucleons) behave are called nuclear effects.

    In the past, scientists have measured nuclear effects using beams of high-energy electrons. These high-energy beams allow electrons to interact with the quarks contained inside nucleons and nuclei, an interaction called deep inelastic scattering, or DIS. Scientists can now also bombard nuclei with neutrinos, which can also induce deep inelastic scattering. Studying these interactions can help us understand the behavior of quarks.

    Using a beam of neutrinos, MINERvA has performed the first neutrino DIS analysis in the energy range of 5 to 50 GeV.

    FNAL MINERvA
    MINERvA

    Neutrinos and electrons interact with quarks within the nucleus differently; we do not expect nuclear effects in neutrino DIS will be the same as electron DIS.

    MINERvA observes DIS interactions by measuring the cross section, or probability, of a neutrino interacting with quarks inside bound nucleons as a function of a property called Bjorken-x. Bjorken-x is proportional to the momentum of the quark that was struck inside the nucleon.
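
    For reference, the standard deep-inelastic-scattering definition (textbook kinematics, not specific to this analysis): with Q² the squared four-momentum transferred by the neutrino, M the nucleon mass and ν the energy transferred to the nucleon,

        \[ x_{\mathrm{Bj}} = \frac{Q^2}{2 M \nu}, \]

    which in the parton picture is the fraction of the nucleon’s momentum carried by the struck quark, running between 0 and 1 for a free nucleon.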

    MINERvA took data on neutrino interactions with carbon, iron and lead nuclei. We compared these data to a theoretical model that assumes the nuclear effects for both neutrino and electron interactions are the same.

    We found that the data did not agree with the assumption in the lowest Bjorken-x values (0.1 to 0.2 — see figures) for lead. Further, the cross section for lead at those values differs significantly from those for carbon or iron. We say that the nuclear effect is enhanced in that region for lead.

    This enhancement was seen in a previous MINERvA inclusive analysis that considered all kinds of interactions together — without singling out deep inelastic scattering.

    In contrast, the model in the largest Bjorken-x range (0.4 to 0.75) agrees very well with data. This is intriguing, since the cause of nuclear effects in this region is not well understood. Whatever underlying physics governs behavior in this region, it appears to be the same for neutrinos and electrons.

    This information is very valuable in building new models of this mysterious effect. Understanding these effects is a priority for MINERvA, and they will be studied more extensively using data we are currently collecting, taken at higher energies and with higher statistics. These data will be invaluable in resolving the theoretical puzzles at large and small Bjorken-x.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 1:54 pm on July 16, 2015 Permalink | Reply
    Tags: Physics, Weyl Points

    From MIT: “Long-sought phenomenon finally detected” 


    MIT News

    July 16, 2015
    David L. Chandler | MIT News Office

    1
    The gyroid surface with a dime on top. Image: Ling Lu and Qinghui Yan

    Weyl points, first predicted in 1929, observed for the first time.

    Part of a 1929 prediction by physicist Hermann Weyl — of a kind of massless particle that features a singular point in its energy spectrum called the “Weyl point” — has finally been confirmed by direct observation for the first time, says an international team of physicists led by researchers at MIT. The finding could lead to new kinds of high-power single-mode lasers and other optical devices, the team says.

    For decades, physicists thought that the subatomic particles called neutrinos were, in fact, the massless particles that Weyl had predicted — a possibility that was ultimately eliminated by the 1998 discovery that neutrinos do have a small mass. While thousands of scientific papers have been written about the theoretical particles, until this year there had seemed little hope of actually confirming their existence.

    “Every single paper written about Weyl points was theoretical, until now,” says Marin Soljačić, a professor of physics at MIT and the senior author of a paper published this week in the journal Science confirming the detection. (Another team of researchers at Princeton University and elsewhere independently made a different detection of Weyl particles; their paper appears in the same issue of Science.)

    Ling Lu, a research scientist at MIT and lead author of that team’s paper, says the elusive points can be thought of as equivalent to theoretical entities known as magnetic monopoles. These do not exist in the real world: They would be the equivalent of cutting a bar magnet in half and ending up with separate north and south magnets, whereas what really happens is you end up with two shorter magnets, each with two poles. But physicists often carry out their calculations in terms of momentum space (also called reciprocal space) rather than ordinary three-dimensional space, Lu explains, and in that framework magnetic monopoles can exist — and their properties match those of Weyl points.

    The achievement was made possible by a novel use of a material called a photonic crystal. In this case, Lu was able to calculate precise measurements for the construction of a photonic crystal predicted to produce the manifestation of Weyl points — with dimensions and precise angles between arrays of holes drilled through the material, a configuration known as a gyroid structure. This prediction was then proved correct by a variety of sophisticated measurements that exactly matched the characteristics expected for such points.

    Some kinds of gyroid structures exist in nature, Lu points out, such as in certain butterfly wings. In such natural occurrences, gyroids are self-assembled, and their structure was already known and understood.

    Two years ago, researchers had predicted that by breaking the symmetries of a kind of mathematical surface called a “gyroid” in a certain way, it might be possible to generate Weyl points — but realizing that prediction required the team to calculate and build their own materials. In order to make these easier to work with, the crystal was designed to operate at microwave frequencies, but the same principles could be used to make a device that would work with visible light, Lu says. “We know a few groups that are trying to do that,” he says.

    A number of applications could take advantage of these new findings, Soljačić says. For example, photonic crystals based on this design could be used to make large-volume single-mode laser devices. Usually, Soljačić says, when you scale up a laser, there are many more modes for the light to follow, making it increasingly difficult to isolate the single desired mode for the laser beam, and drastically limiting the quality of the laser beam that can be delivered.

    But with the new system, “No matter how much you scale it up, there are very few possible modes,” he says. “You can scale it up as large as you want, in three dimensions, unlike other optical systems.”

    That issue of scalability in optical systems is “quite fundamental,” Lu says; this new approach offers a way to circumvent it. “We have other applications in mind,” he says, to take advantage of the device’s “optical selectivity in a 3-D bulk object.” For example, a block of material could allow only one precise angle and color of light to pass through, while all others would be blocked.

    “This is an interesting development, not just because Weyl points have been experimentally observed, but also because they endow the photonic crystals which realize them with unique optical properties,” says Ashvin Vishwanath, a professor of physics at the University of California at Berkeley who was not involved in this research. “Professor Soljačić’s group has a track record of rapidly converting new science into creative devices with industry applications, and I am looking forward to seeing how Weyl photonic crystals evolve.”

    Besides Lu and Soljačić, the team included Zhiyu Wang, Dexin Ye, and Lixin Ran of Zhejiang University in China and, at MIT, assistant professor of physics Liang Fu and John Joannopoulos, the Francis Wright Davis Professor of Physics and director of the Institute for Soldier Nanotechnologies (ISN). The work was supported by the U.S. Army through the ISN, the Department of Energy, the National Science Foundation, and the Chinese National Science Foundation.

    See the full article here.
    The Princeton University article is here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 1:44 pm on July 15, 2015 Permalink | Reply
    Tags: Physics

    From FNAL: “Mu2e’s opportunistic run on the Open Science Grid” 

    FNAL Home

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    July 15, 2015
    Hanah Chang

    1
    To conduct full event simulations, Mu2e requires time on more than one computing grid. This graphic shows, by the size of each area, the fraction of the recent Mu2e simulation production through Fermigrid, the University of Nebraska, CMS computing at Caltech, MIT and Fermilab, the ATLAS Midwest and Michigan Tier-2s (MWT2 and AGLT2), Syracuse University (SU-OSG), and other sites — all accessed through the Open Science Grid. No image credit

    Scientists in Fermilab’s Mu2e collaboration are facing a challenging task: In order to get DOE approval to build their experiment and take data, they must scale up the simulations used to design their detector.

    FNAL Mu2e experiment
    Mu2E

    Their aim is to complete this simulation campaign, as they call it, in time for the next DOE critical-decision review, which Mu2e hopes will give the green light to proceed with experiment construction and data taking. The team estimated that they would need the computing capacity of about 4,000 CPUs for four to five months (followed by a much smaller need for the rest of the year). Because of the large size of the campaign and the limited computing resources at Fermilab, which are shared among all the lab’s experiments, the Mu2e team adapted their workflow and data management systems to run a majority of the simulations at sites other than Fermilab. They then ran simulations across the Open Science Grid using distributed high-throughput computer facilities.
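
    A rough sense of scale (a back-of-the-envelope estimate, not an official Mu2e figure), assuming the 4,000 CPUs run continuously for about four and a half months:

        # Rough scale of the request: 4,000 CPUs running flat out for ~4.5 months.
        cpus = 4000
        days = 4.5 * 30                  # assumed ~135 days
        total_cpu_hours = cpus * 24 * days
        print(f"~{total_cpu_hours / 1e6:.1f} million CPU-hours total, ~{cpus * 24:,} CPU-hours per day")

    That baseline works out to roughly 13 million CPU-hours in total, or about 96,000 CPU-hours per day, so the “many hundreds of thousands of CPU hours per day” quoted below means the Open Science Grid delivered well beyond the original estimate.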

    Mu2e scientist Andrei Gaponenko explained that last year, Mu2e used more than their allocation of computing by using any and all available CPU cycles not used by other experiments locally on FermiGrid. The experiment decided to continue this concept on the Open Science Grid, or OSG, by running “opportunistically” on as many available remote computing resources as possible.

    “There were some technical hurdles to overcome,” Gaponenko said. Not only did the scripts have to be able to see the Mu2e software, but also all of the remote sites — more than 25 — had to be able to run this software, which was originally installed at Fermilab. Further, the local operating system software needed to be compatible.

    “A lot of people worked very hard to make this possible,” he said. Members of the OSG Production Support team helped support the endeavor — getting Mu2e authorized to run at the remote sites and helping debug problems with the job processing and data handling. Members of the Scientific Computing Division supported the experiment’s underlying scripts, software and data management tools.

    The move to use OSG proved valuable, even with the inevitable hurdles of starting something new.

    “As Mu2e experimenters, we are pilot users on OSG, and we are grabbing cycles opportunistically whenever we can. We had issues, but we solved them,” said Rob Kutschke, Mu2e analysis coordinator. “While we did not expect things to work perfectly the first time, very quickly we were able to get many hundreds of thousands of CPU hours per day.”

    Ray Culbertson, Mu2e production coordinator, agreed.

    “We exceeded our baseline goals, met the stretch goals and will continue to maintain schedule,” Culbertson said.

    Ken Herner, a member of the support team in the Scientific Computing Division that helped the experimenters port their applications to OSG, hopes that Mu2e will serve as an example for more experiments that currently conduct their event processing computing locally at Fermilab.

    “The important thing is demonstrating to other experiments here that it can work and it can work really well,” Herner said. “Ideally, this sort of running should become the norm. What you really want is to just submit the job, and if it runs on site, great. And if it runs off site, great — just give me as many resources as possible.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 11:58 am on July 14, 2015 Permalink | Reply
    Tags: Physics

    From CERN: “CERN’s LHCb experiment reports observation of exotic pentaquark particles” 

    CERN New Masthead

    14 Jul 2015
    No Writer Credit

    Today, the LHCb experiment at CERN’s Large Hadron Collider has reported the discovery of a class of particles known as pentaquarks. The collaboration has submitted a paper reporting these findings to the journal Physical Review Letters.

    “The pentaquark is not just any new particle,” said LHCb spokesperson Guy Wilkinson. “It represents a way to aggregate quarks, namely the fundamental constituents of ordinary protons and neutrons, in a pattern that has never been observed before in over fifty years of experimental searches. Studying its properties may allow us to understand better how ordinary matter, the protons and neutrons from which we’re all made, is constituted.”

    Our understanding of the structure of matter was revolutionized in 1964 when American physicist, Murray Gell-Mann, proposed that a category of particles known as baryons, which includes protons and neutrons, are comprised of three fractionally charged objects called quarks, and that another category, mesons, are formed of quark-antiquark pairs. Gell-Mann was awarded the Nobel Prize in physics for this work in 1969. This quark model also allows the existence of other quark composite states, such as pentaquarks composed of four quarks and an antiquark. Until now, however, no conclusive evidence for pentaquarks had been seen.

    LHCb researchers looked for pentaquark states by examining the decay of a baryon known as Λb (Lambda b) into three other particles: a J/ψ (J-psi), a proton and a charged kaon. Studying the spectrum of masses of the J/ψ and the proton revealed that intermediate states were sometimes involved in their production. These have been named Pc(4450)+ and Pc(4380)+, the former being clearly visible as a peak in the data, with the latter being required to describe the data fully.

    “Benefitting from the large data set provided by the LHC, and the excellent precision of our detector, we have examined all possibilities for these signals, and conclude that they can only be explained by pentaquark states”, says LHCb physicist Tomasz Skwarnicki of Syracuse University.

    “More precisely the states must be formed of two up quarks, one down quark, one charm quark and one anti-charm quark.”

    Earlier experiments that have searched for pentaquarks have proved inconclusive. Where the LHCb experiment differs is that it has been able to look for pentaquarks from many perspectives, with all pointing to the same conclusion. It’s as if the previous searches were looking for silhouettes in the dark, whereas LHCb conducted the search with the lights on, and from all angles. The next step in the analysis will be to study how the quarks are bound together within the pentaquarks.

    “The quarks could be tightly bound,” said LHCb physicist Liming Zhang of Tsinghua University, “or they could be loosely bound in a sort of meson-baryon molecule, in which the meson and baryon feel a residual strong force similar to the one binding protons and neutrons to form nuclei.”

    More studies will be needed to distinguish between these possibilities, and to see what else pentaquarks can teach us. The new data that LHCb will collect in LHC run 2 will allow progress to be made on these questions.

    3
    2
    Illustration of the possible layout of the quarks in a pentaquark particle such as those discovered at LHCb. The five quarks might be tightly bonded (left). They might also be assembled into a meson (one quark and one antiquark) and a baryon (three quarks), weakly bound together. © CERN

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Meet CERN in a variety of places:

    Cern Courier

    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New
    ALICE
    CERN ALICE New

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New

    LHC

    CERN LHC New
    CERN LHC Grand Tunnel

    LHC particles

    Quantum Diaries

     
  • richardmitnick 10:59 am on June 9, 2015 Permalink | Reply
    Tags: Physics

    From NYT: “A Crisis at the Edge of Physics” 

    New York Times

    The New York Times

    JUNE 5, 2015
    ADAM FRANK and MARCELO GLEISER

    1
    Gérard DuBois

    DO physicists need empirical evidence to confirm their theories?

    You may think that the answer is an obvious yes, experimental confirmation being the very heart of science. But a growing controversy at the frontiers of physics and cosmology suggests that the situation is not so simple.

    A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”

    Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility.

    How did we get to this impasse? In a way, the landmark detection three years ago of the elusive Higgs boson particle by researchers at the Large Hadron Collider marked the end of an era.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    Predicted about 50 years ago, the Higgs particle is the linchpin of what physicists call the “standard model” of particle physics, a powerful mathematical theory that accounts for all the fundamental entities in the quantum world (quarks and leptons) and all the known forces acting between them apart from gravity (electromagnetism and the strong and weak nuclear forces).

    4
    Standard Model of Particle Physics. The diagram shows the elementary particles of the Standard Model (the Higgs boson, the three generations of quarks and leptons, and the gauge bosons), including their names, masses, spins, charges, chiralities, and interactions with the strong, weak and electromagnetic forces. It also depicts the crucial role of the Higgs boson in electroweak symmetry breaking, and shows how the properties of the various particles differ in the (high-energy) symmetric phase (top) and the (low-energy) broken-symmetry phase (bottom).

    But the standard model, despite the glory of its vindication, is also a dead end. It offers no path forward to unite its vision of nature’s tiny building blocks with the other great edifice of 20th-century physics: Einstein’s cosmic-scale description of gravity. Without a unification of these two theories — a so-called theory of quantum gravity — we have no idea why our universe is made up of just these particles, forces and properties. (We also can’t know how to truly understand the Big Bang, the cosmic event that marked the beginning of time.)

    This is where the specter of an evidence-independent science arises. For most of the last half-century, physicists have struggled to move beyond the standard model to reach the ultimate goal of uniting gravity and the quantum world. Many tantalizing possibilities (like the often-discussed string theory) have been explored, but so far with no concrete success in terms of experimental validation.

    Today, the favored theory for the next step beyond the standard model is called supersymmetry (which is also the basis for string theory). Supersymmetry predicts the existence of a “partner” particle for every particle that we currently know.

    Supersymmetry standard model
    Standard Model of Supersymmetry

    It doubles the number of elementary particles of matter in nature. The theory is elegant mathematically, and the particles whose existence it predicts might also explain the universe’s unaccounted-for “dark matter.” As a result, many researchers were confident that supersymmetry would be experimentally validated soon after the Large Hadron Collider became operational.

    That’s not how things worked out, however. To date, no supersymmetric particles have been found. If the Large Hadron Collider cannot detect these particles, many physicists will declare supersymmetry — and, by extension, string theory — just another beautiful idea in physics that didn’t pan out.

    But many won’t. Some may choose instead to simply retune their models to predict supersymmetric particles at masses beyond the reach of the Large Hadron Collider’s power of detection — and that of any foreseeable substitute.

    Implicit in such a maneuver is a philosophical question: How are we to determine whether a theory is true if it cannot be validated experimentally? Should we abandon it just because, at a given level of technological capacity, empirical support might be impossible? If not, how long should we wait for such experimental machinery before moving on: ten years? Fifty years? Centuries?

    Consider, likewise, the cutting-edge theory in physics that suggests that our universe is just one universe in a profusion of separate universes that make up the so-called multiverse. This theory could help solve some deep scientific conundrums about our own universe (such as the so-called fine-tuning problem), but at considerable cost: Namely, the additional universes of the multiverse would lie beyond our powers of observation and could never be directly investigated. Multiverse advocates argue nonetheless that we should keep exploring the idea — and search for indirect evidence of other universes.

    The opposing camp, in response, has its own questions. If a theory successfully explains what we can detect but does so by positing entities that we can’t detect (like other universes or the hyperdimensional superstrings of string theory) then what is the status of these posited entities? Should we consider them as real as the verified particles of the standard model? How are scientific claims about them any different from any other untestable — but useful — explanations of reality?

    Recall the epicycles, the imaginary circles that Ptolemy used and formalized around A.D. 150 to describe the motions of planets. Although Ptolemy had no evidence for their existence, epicycles successfully explained what the ancients could see in the night sky, so they were accepted as real. But they were eventually shown to be a fiction, more than 1,500 years later. Are superstrings and the multiverse, painstakingly theorized by hundreds of brilliant scientists, anything more than modern-day epicycles?

    Just a few days ago, scientists restarted investigations with the Large Hadron Collider, after a two-year hiatus. Upgrades have made it even more powerful, and physicists are eager to explore the properties of the Higgs particle in greater detail. If the upgraded collider does discover supersymmetric particles, it will be an astonishing triumph of modern physics. But if nothing is found, our next steps may prove to be difficult and controversial, challenging not just how we do science but what it means to do science at all.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:15 am on June 5, 2015 Permalink | Reply
    Tags: Physics

    From livescience: “The 9 Biggest Unsolved Mysteries in Physics” 2012 but Really Worth Your Time 

    Livescience

    July 03, 2012
    Natalie Wolchover

    In 1900, the British physicist Lord Kelvin is said to have pronounced: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” Within three decades, quantum mechanics and Einstein’s theory of relativity had revolutionized the field. Today, no physicist would dare assert that our physical knowledge of the universe is near completion. To the contrary, each new discovery seems to unlock a Pandora’s box of even bigger, even deeper physics questions. These are our picks for the most profound open questions of all.

    What is dark energy?

    1
    Credit: NASA

    No matter how astrophysicists crunch the numbers, the universe simply doesn’t add up. Even though gravity is pulling inward on space-time — the “fabric” of the cosmos — it keeps expanding outward faster and faster. To account for this, astrophysicists have proposed an invisible agent that counteracts gravity by pushing space-time apart. They call it dark energy. In the most widely accepted model of dark energy, it is a “cosmological constant” (usually denoted by the Greek capital letter lambda, Λ): an inherent property of space itself, which has “negative pressure” driving space apart. As space expands, more space is created, and with it, more dark energy. Based on the observed rate of expansion, scientists know that the sum of all the dark energy must make up more than 70 percent of the total contents of the universe. But no one knows how to look for it.
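
    For context (standard cosmology, not part of the original article): in the Friedmann equation for the expansion rate H = ȧ/a of a universe with matter density ρ, spatial curvature k and cosmological constant Λ,

        \[ H^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3}, \]

    the Λ term does not dilute as space expands, so once the matter density has fallen far enough it dominates and drives accelerated expansion.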

    What is dark matter?

    Credit: ESO/L. Calçada

    Evidently, about 84 percent of the matter in the universe does not absorb or emit light. “Dark matter,” as it is called, cannot be seen directly, and particle searches for it, both direct and indirect, have so far come up empty. Instead, dark matter’s existence and properties are inferred from its gravitational effects on visible matter, radiation and the structure of the universe. This shadowy substance is thought to pervade the outskirts of galaxies, and may be composed of “weakly interacting massive particles,” or WIMPs. Worldwide, several detectors are on the lookout for WIMPs, but so far, not one has been found.

    Why is there an arrow of time?

    Credit: Image via Shutterstock

    Time moves forward because a property of the universe called “entropy,” roughly defined as the level of disorder, only increases, and so there is no way to reverse a rise in entropy after it has occurred. The fact that entropy increases is essentially a matter of statistics: there are vastly more disordered arrangements of particles than ordered ones, so as things change, they tend to fall into disarray. But the underlying question here is, why was entropy so low in the past? Put differently, why was the universe so ordered at its beginning, when a huge amount of energy was crammed together in a small amount of space?
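    The statistical statement behind that argument is Boltzmann’s entropy formula (standard statistical mechanics, not spelled out in the article):

\[
S = k_{B} \ln W
\]

    where W counts the microscopic arrangements of particles compatible with a given macroscopic state and k_B is Boltzmann’s constant. Disordered states correspond to enormously larger W than ordered ones, so a system shuffling randomly among its arrangements almost always moves toward higher S.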

    Are there parallel universes?

    Credit: Image via Shutterstock

    Astrophysical data suggests space-time might be “flat,” rather than curved, and thus that it goes on forever. If so, then the region we can see (which we think of as “the universe”) is just one patch in an infinitely large “quilted multiverse.” At the same time, the laws of quantum mechanics dictate that there are only a finite number of possible particle configurations within each cosmic patch (10^10^122 distinct possibilities). So, with an infinite number of cosmic patches, the particle arrangements within them are forced to repeat — infinitely many times over. This means there are infinitely many parallel universes: cosmic patches exactly the same as ours (containing someone exactly like you), as well as patches that differ by just one particle’s position, patches that differ by two particles’ positions, and so on down to patches that are totally different from ours.

    Is there something wrong with that logic, or is its bizarre outcome true? And if it is true, how might we ever detect the presence of parallel universes?
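    Whatever one makes of that argument, the counting step itself is just the pigeonhole principle. Below is a toy illustration in Python; the grid size and cell states are invented stand-ins, since the real count of 10^10^122 configurations is far beyond any computation.

```python
import random

# Pigeonhole toy model: if each cosmic "patch" can only take finitely many
# particle configurations, then sampling more patches than there are
# configurations forces at least two patches to be identical.
N_CELLS = 4          # pretend a patch is a 4-cell grid (invented for the example)
STATES_PER_CELL = 2  # each cell either empty or occupied
n_configurations = STATES_PER_CELL ** N_CELLS  # 16 possible patch configurations

random.seed(0)
patches = [
    tuple(random.randrange(STATES_PER_CELL) for _ in range(N_CELLS))
    for _ in range(n_configurations + 1)  # one more patch than configurations
]

# With 17 patches drawn from only 16 possibilities, a repeat is unavoidable.
assert len(set(patches)) < len(patches)
print(f"{len(patches)} patches, {len(set(patches))} distinct: repeats are forced")
```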

    Why is there more matter than antimatter?

    Credit: Image via Shutterstock

    The question of why there is so much more matter than its oppositely charged twin, antimatter, is actually a question of why anything exists at all. One assumes the universe would treat matter and antimatter symmetrically, and thus that, at the moment of the Big Bang, equal amounts of matter and antimatter should have been produced. But if that had happened, there would have been a total annihilation of both: protons would have canceled with antiprotons, electrons with anti-electrons (positrons), neutrons with antineutrons, and so on, leaving behind a dull sea of photons in a matterless expanse. For some reason, there was excess matter that didn’t get annihilated, and here we are. For this, there is no accepted explanation.

    What is the fate of the universe?

    Credit: Creative Commons Attribution-Share Alike 3.0 Unported | Bjarmason

    The fate of the universe strongly depends on a factor of unknown value: Ω, a measure of the density of matter and energy throughout the cosmos. If Ω is greater than 1, then space-time would be “closed” like the surface of an enormous sphere. If there is no dark energy, such a universe would eventually stop expanding and would instead start contracting, eventually collapsing in on itself in an event dubbed the “Big Crunch.” If the universe is closed but there is dark energy, the spherical universe would expand forever.

    Alternatively, if Ω is less than 1, then the geometry of space would be “open” like the surface of a saddle. In this case, its ultimate fate is the “Big Freeze” followed by the “Big Rip”: first, the universe’s outward acceleration would tear galaxies and stars apart, leaving all matter frigid and alone. Next, the acceleration would grow so strong that it would overwhelm the effects of the forces that hold atoms together, and everything would be wrenched apart.

    If Ω = 1, the universe would be flat, extending like an infinite plane in all directions. If there is no dark energy, such a planar universe would expand forever but at a continually decelerating rate, approaching a standstill. If there is dark energy, the flat universe ultimately would experience runaway expansion leading to the Big Rip.
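    A minimal sketch in Python that simply encodes the three cases laid out above; the function name and inputs are illustrative, not anything from the article or from a real cosmology library.

```python
def fate_of_universe(omega: float, has_dark_energy: bool) -> str:
    """Map the density parameter and dark-energy assumption to the fates above."""
    if omega > 1:  # closed, sphere-like geometry
        return "expands forever" if has_dark_energy else "recollapses in a Big Crunch"
    if omega < 1:  # open, saddle-like geometry
        return "Big Freeze followed by a Big Rip"
    # omega == 1: flat geometry
    return ("runaway expansion ending in a Big Rip" if has_dark_energy
            else "expands forever, decelerating toward a standstill")

# Observations put omega very close to 1, and dark energy appears to be present:
print(fate_of_universe(1.0, True))
```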

    Que sera, sera.

    How do measurements collapse quantum wavefunctions?

    Credit: John D. Norton

    In the strange realm of electrons, photons and the other fundamental particles, quantum mechanics is law. Particles don’t behave like tiny balls, but rather like waves that are spread over a large area. Each particle is described by a “wavefunction,” or probability distribution, which tells what its location, velocity, and other properties are more likely to be, but not what those properties are. The particle actually has a range of values for all the properties, until you experimentally measure one of them — its location, for example — at which point the particle’s wavefunction “collapses” and it adopts just one location.

    But how and why does measuring a particle make its wavefunction collapse, producing the concrete reality that we perceive to exist? The issue, known as the measurement problem, may seem esoteric, but our understanding of what reality is, or if it exists at all, hinges upon the answer.
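    To make the paragraph’s language concrete, here is a toy simulation of the textbook recipe (the Born rule plus a by-hand “collapse”) for a two-state system. It sketches the standard calculational procedure only; it does not, and cannot, answer the measurement problem itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# A qubit-like superposition a|0> + b|1>, normalized so |a|^2 + |b|^2 = 1.
state = np.array([1.0, 1.0j]) / np.sqrt(2)

# Born rule: the probability of each outcome is the squared amplitude.
probabilities = np.abs(state) ** 2
probabilities /= probabilities.sum()  # guard against rounding error

outcome = rng.choice(len(state), p=probabilities)

# "Collapse": after measurement, the state is replaced by the basis vector found.
collapsed = np.zeros_like(state)
collapsed[outcome] = 1.0

print("P(0), P(1):", probabilities)
print("measured outcome:", outcome, "-> post-measurement state:", collapsed)
```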

    Is string theory correct?

    Credit: Creative Commons | Lunch

    When physicists assume all the elementary particles are actually one-dimensional loops, or “strings,” each of which vibrates at a different frequency, physics gets much easier. String theory allows physicists to reconcile the laws governing particles, called quantum mechanics, with the laws governing space-time, called general relativity, and to unify the four fundamental forces of nature into a single framework. But the problem is, string theory can only work in a universe with 10 or 11 dimensions: three large spatial ones, six or seven compacted spatial ones, and a time dimension. The compacted spatial dimensions — as well as the vibrating strings themselves — are about a billionth of a trillionth of the size of an atomic nucleus. There’s no conceivable way to detect anything that small, and so there’s no known way to experimentally validate or invalidate string theory.
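    As a quick arithmetic check on that size claim (the reference numbers below are standard approximate values, not taken from the article):

```python
# "A billionth of a trillionth of the size of an atomic nucleus"
nucleus_size_m = 1e-15               # typical nuclear scale, about 1 femtometer
ratio = 1e-9 * 1e-12                 # a billionth of a trillionth
implied_string_scale_m = nucleus_size_m * ratio  # ~1e-36 m

planck_length_m = 1.6e-35            # the scale usually quoted for strings
print(f"implied string scale: {implied_string_scale_m:.1e} m")
print(f"Planck length:        {planck_length_m:.1e} m")
# The two agree to within roughly an order of magnitude, which is all such a
# figure of speech is meant to convey.
```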

    Is there order in chaos?

    Physicists can’t exactly solve the set of equations that describes the behavior of fluids, from water to air to all other liquids and gases. In fact, it isn’t known whether a general solution of the so-called Navier-Stokes equations even exists, or, if there is a solution, whether it describes fluids everywhere, or contains inherently unknowable points called singularities. As a consequence, the nature of chaos is not well understood. Physicists and mathematicians wonder, is the weather merely difficult to predict, or inherently unpredictable? Does turbulence transcend mathematical description, or does it all make sense when you tackle it with the right math?

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     