
  • richardmitnick 11:43 am on May 21, 2016
    Tags: NOVA

    From NOVA: “The Quest for a Simple Cancer Test” 



    19 May 2016
    Jeffrey Perkel

    Embedded in a small translucent wafer measuring just under an inch on a side, the spiraling coils—like neatly packed iPod earbuds—aren’t much to look at.

    But judging on appearance alone would sell short the brainchild of Chwee Teck Lim of National University Singapore and Jongyoon Han of the Massachusetts Institute of Technology. Those coils sift through millions upon millions of blood cells for faintly detectable indicators of a solid tumor lurking in a patient’s body—the handful of cancer cells that are often found circulating in the blood. Called circulating tumor cells, these cells may well be the seeds of distant metastases, which are responsible for 90% of all cancer deaths.

    Over the past several years, researchers and clinicians have become increasingly fixated on these circulating cells as cellular canaries-in-the-coalmine, indicators of distant disease. The blood of cancer patients is chock-full of potentially telling molecules, and researchers and clinicians are hotly investigating these materials for their efficacy as indicators and predictors of illness, disease progression, response to treatment, and even relapse.

    Soon, a simple blood test could reveal whether a person has cancer.

    For patients with cancer, such tests could provide a welcome respite from painful, invasive, and sometimes dangerous biopsies that typically are used to track and diagnose disease—a fact reflected in the terminology often used to describe the new assays: liquid biopsy. For researchers and clinicians, they provide a noninvasive and repeatable way to monitor how a disease changes over time, even in cases when the tumor itself is inaccessible.

    And unlike the finger-stick testing used by the embattled company Theranos, which recently voided two years of results from its proprietary blood-testing machines, the liquid biopsy methods being researched and developed by teams of scientists around the world use standard blood-drawing techniques and have been subject to peer review.

    In the short term, researchers hope to use liquid biopsies to monitor tumor relapse, track a tumor’s response to targeted therapies, and match patients with the treatments most likely to be effective—the very essence of “personalized medicine.” But longer term, some envision tapping the blood for early diagnosis to catch tumors long before symptoms start, the time when they’re most responsive to treatment.

    For now, most such promises are just that: promises. With the exception of one FDA-approved test, a handful of lab-developed diagnostics, and a slew of clinical trials, few cancer patients today are benefitting from liquid biopsies. But many are betting they soon will be. Liquid biopsies, says Daniel Haber, director of the Massachusetts General Hospital (MGH) Cancer Center, “currently are aspirational—they don’t yet exist in that they’re not part of routine care. But they have the possibility to become so.”

    Revealing Information

    Despite the name, liquid biopsies are not exactly an alternative to solid tissue biopsies, says Mehmet Toner, a professor of biomedical engineering at MGH who studies circulating tumor cells. Patients who are first diagnosed with cancer via a liquid biopsy would likely still undergo a tissue biopsy, both to confirm the diagnosis and to guide treatment.

    But liquid biopsies do provide molecular intel that might otherwise be impossible to obtain—for instance, in the treatment of metastatic disease. Oncologists typically biopsy patients with metastatic disease only once, to confirm the diagnosis, says Keith Flaherty, director of the Henri and Belinda Termeer Center for Targeted Therapies at the MGH Cancer Center. But such a test reveals the genetics of the cancer only at the sampled site. Many patients harbor multiple metastases, some in relatively inaccessible locations like the lungs, brain, or bones, and each may contain cells with different genetic signatures and drug susceptibilities. “Liquid biopsies provide an aggregate assessment of a cancer population,” he says.

    Today, says Max Diehn, an assistant professor of radiation oncology at the Stanford University School of Medicine, oncologists can get a read on how a patient responds to therapy using a handful of protein biomarkers found in blood, urine, or other biofluids, such as prostate-specific antigen (PSA) in the case of prostate cancer, or using noninvasive imaging technologies like magnetic resonance imaging (MRI) or computed tomography (CT). But those tests often fall short. Many biomarkers aren’t specific enough to be useful, and imaging is relatively expensive and insensitive. Also, not everything that appears to be a tumor on a scan actually is. And, Flaherty notes, imaging studies reveal little or no molecular information about the tumor itself, information that’s useful in guiding the treatment.

    In contrast, liquid biopsies can reveal not only whether patients are responding to treatment, but also catch game-changing genetic alterations in real time. In one recent study, Nicholas Turner of the Institute of Cancer Research in London and his colleagues examined circulating tumor DNA (ctDNA), or tumor DNA that’s floating free in the bloodstream, in women with metastatic breast cancer. They were looking for mutations in the estrogen receptor gene, ESR1. Breast cancer patients previously treated with so-called aromatase inhibitors often develop ESR1 mutations that render their tumors resistant to two potential treatments: hormonal therapies that target the estrogen receptor, and further use of aromatase inhibitors, which block the production of estrogen. Turner’s team detected ESR1-mutant ctDNA in 18 of 171 women tested (10.5%), and those women’s tumors tended to progress more rapidly on aromatase inhibitors than did the tumors of women who lacked such mutations. Those findings had no impact on the patients in the study—the women were analyzed retrospectively—but they suggest that prospective ctDNA analysis could be used to steer treatment toward different therapeutic strategies.

    Viktor Adalsteinsson of the Broad Institute of MIT and Harvard, whose group has sequenced more than a thousand liquid biopsy genomes, calls the ESR1 study “promising and illuminating.” At the moment, he says, such data are not being actively used to influence patient treatment, at least not in the Boston area. But Jesse Boehm, associate director of the Broad Cancer Program, says he thinks it could take as little as two years for that to change. “I’ve been here at the Broad for ten years, and I don’t think I’ve ever seen another project grow from scientific concept to potentially game-changing so quickly,” he says.

    Varied Approaches

    Liquid biopsies generally come in one of three forms. One, ctDNA—Adalsteinsson’s material of choice—is the easiest to study, but also the most limited as it relies on probing short snippets of DNA in the bloodstream for a collection of known mutations. The blood is full of DNA, as all cells jettison their nuclear material when they die, so researchers must identify those fragments that are specifically diagnostic of disease. While the genetic mutations behind some prominent cancers have been identified, many more have not. Also, not all genetic changes are revealed in the DNA itself, says Klaus Pantel, director of the Institute of Tumor Biology at the University Medical Center Hamburg-Eppendorf.

    A second class of liquid biopsy focuses on tiny membrane-encapsulated packages of RNA and protein called exosomes. Exosomes provide researchers a glimpse of cancer cells’ gene expression patterns, meaning they can reveal differences that are invisible at the DNA level. But, because both normal and cancerous cells release exosomes, the trick, as with ctDNA, is to isolate and characterize those few particles that stem from the tumor itself.

    The third counts circulating tumor cells, or CTCs. They are not found in healthy individuals, but neither are they prevalent even in very advanced cases, accounting for perhaps one to 100 cells per billion blood cells, according to Lim. Researchers can simply count the cells, as CTC abundance tends to track with prognosis.
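    Lim’s one-in-a-billion figure is easier to grasp with a quick back-of-the-envelope calculation. The sketch below assumes a standard 7.5 mL blood-collection tube and roughly 5 billion cells per milliliter (mostly red cells); both numbers are illustrative assumptions, not figures from the article.

```python
# Rough scale of the CTC search problem, using Lim's estimate of
# 1 to 100 CTCs per billion blood cells. The draw volume and
# cells-per-mL figures are illustrative assumptions.
CELLS_PER_ML = 5e9   # ~5 billion cells/mL, dominated by red cells
DRAW_ML = 7.5        # a standard blood-collection tube

total_cells = CELLS_PER_ML * DRAW_ML
low = total_cells * (1 / 1e9)     # 1 CTC per billion cells
high = total_cells * (100 / 1e9)  # 100 CTCs per billion cells

print(f"Cells in a {DRAW_ML} mL draw: {total_cells:.2e}")
print(f"Expected CTCs in the tube: roughly {low:.0f} to {high:.0f}")
```

    In other words, a device like Lim and Han’s spiral chip has to pull somewhere between a few dozen and a few thousand cells out of tens of billions.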

    But there’s much more that CTCs can do, Pantel says. “You can analyze the DNA, the RNA, and the protein, and you can put the cells in culture, so you can get some information on responsiveness to drugs.” Stefanie Jeffrey, a professor of surgery at Stanford University School of Medicine, has purified CTCs and demonstrated that individual breast cancer CTCs express different genes than the immortalized breast cancer cells typically used in drug development. That, she says, “raises questions” about the way potential drugs are currently evaluated in the early stages of development.

    Similarly, Toner and Haber have developed a device called the CTC-iChip to count and enrich CTCs from whole blood. The size of a CD—indeed, the chips are fabricated using high-throughput CD manufacturing technology—these devices take whole blood, filter out the red cells, platelets, and white blood cells, and keep what’s left, including CTCs. The team has used this device to evaluate hundreds of individual CTCs from breast, pancreatic, and prostate tumor patients to identify possible ways to selectively kill those cells.

    Elsewhere, Caroline Dive, a researcher at the University of Manchester, has even injected CTCs isolated from patients with small-cell lung cancer into mice. The resulting tumors exhibit the same drug sensitivities as the starting human tumors, providing a platform that could be used to better identify treatment options.

    A Range of Uses

    According to Lim, liquid biopsies have five potential applications: early disease detection, cancer staging, treatment monitoring, personalized treatment, and post-cancer surveillance. Of those, most agree, the likely near-term applications are personalized treatment and treatment monitoring. The most difficult is early detection.

    Among other things, early detection requires testing thousands of early-stage patients and healthy volunteers to demonstrate that the tests are sufficiently sensitive to detect cancer early yet specific enough to avoid false positives. A widely adopted assay that was, say, 90% specific could yield perhaps millions of false positives, Pantel says. “I’m sure that’s fantastic for the lawyers, but not for the patients.”
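    Pantel’s arithmetic is worth spelling out. Here is a minimal sketch; the size of the screened, cancer-free population is a hypothetical round number, not a figure from the article.

```python
# Why specificity dominates in population screening: even a
# 90%-specific test flags 10% of healthy people as positive.
# The screened-population size is an assumed round number.
screened = 10_000_000   # hypothetical cancer-free people screened
specificity = 0.90      # fraction of healthy people correctly cleared

false_positives = screened * (1 - specificity)
print(f"False positives among {screened:,} healthy people: "
      f"{false_positives:,.0f}")
```

    At that rate, a million healthy people would face needless follow-up testing for every ten million screened, which is why a screening test needs specificity far above 90%.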

    Still, researchers have begun demonstrating the possibility. In one 2014 study describing a new method for analyzing ctDNA, Diehn, the Stanford radiation oncologist, and his colleague, Ash Alizadeh, an assistant professor of medical oncology also at Stanford, showed that their assay could detect half of the stage I non-small-cell lung cancer samples it was confronted with, and 100% of tumors stage II and above. That’s despite the fact that ctDNA fragments are only about 170 bases long—a very short stretch—and disappear from the blood within about 30 minutes. “There’s constant cell turnover in tumors,” Diehn says. “There’s always some cells dying, and that’s what lets you detect it.”

    In another study, Nickolas Papadopoulos, a professor of oncology and pathology at the Johns Hopkins School of Medicine, and his colleagues surveyed the ctDNA content of 185 individuals across 15 different types of advanced cancer. For some tumor types, including bladder, colorectal, and ovarian, they found ctDNA in every patient tested; other tumors, such as glioblastomas, were more difficult to pick up. “It made sense,” Papadopoulos says. “These tumors are beyond the blood-brain barrier…and they do not shed DNA into the circulation.” In later studies, the team demonstrated that some tumors are more easily found in bodily fluids other than blood. Certain head and neck cancers are readily detected in saliva, for example, and some urogenital cancers can be detected in urine. But in their initial survey, Papadopoulos and his colleagues also tested blood plasma for the ability to detect localized (that is, non-metastatic) tumors, identifying disease in between about half and three-fourths of individuals.

    Though 50% sensitivity isn’t perfect, it’s better than nothing, Papadopoulos says, especially for cancers of the ovaries and pancreas. “Right now, we get 0% of them because there’s no screening test for these cancers.”

    In the meantime, researchers are focusing on personalized therapy. Alizadeh and Diehn, for instance, have tested patients with stage IV metastatic non-small cell lung cancer, a grave diagnosis, who had been taking erlotinib, a drug that targets specific mutations in the EGFR gene. Over time, all patients develop resistance to these drugs, half of them via a new mutation, Diehn says. Diehn and Alizadeh have begun looking for that mutation in the ctDNA of patients whose disease progresses, or returns, as such tumors can be specifically targeted by a new drug, osimertinib. “It’s been shown in a couple of studies that such patients then have a good response rate,” Diehn says, with the median “progression-free survival” doubling from about ten months to 20.

    Toward the Clinic

    Most scientists working on liquid biopsies agree that the technology itself is mature. What’s needed to make a difference in patients’ lives is clinical evidence of sensitivity, selectivity, and efficacy.

    Fortunately, they’re working on it. According to the National Institutes of Health’s clinical trials database, clinicaltrials.gov, over 350 trials are currently studying the use of liquid biopsies in cancer detection, identification, or treatment.

    One recent trial, published in April in JAMA Oncology, examined the ability of ctDNA analysis to detect key mutations in two genes associated with treatment decisions, response, and resistance in non-small cell lung cancer. The 180-patient prospective trial determined that the method could detect the majority (64%–86%) of the tested mutations, with no false-positive readings in most cases. Results were returned on average within three days, compared with 12 to 27 days for solid-tissue biopsy. The technique is ready for clinical use, the authors concluded.

    In an ongoing trial, Pantel and his colleagues are focusing on a breast cancer-associated protein called HER2. Several anticancer therapies specifically target HER2-positive tumors, including trastuzumab and lapatinib. The trial is looking for instances of HER2-expressing CTCs in patients with metastatic breast cancer whose original tumor did not express HER2. About 20% of HER2-negative tumors meet that criterion, Pantel says, but before liquid biopsies became an option, there was really no way to find them. Now, his team is testing “whether the change to HER2-positive CTCs is a good predictor for response to HER2-targeted therapy.” If it is, it could unlock potential treatments for patients.

    In another trial, Flaherty, the center director at MGH, and his colleagues are using a series of liquid biopsies in several hundred patients with metastatic melanoma to determine if they could retrospectively predict drug resistance by monitoring for mutations in a particular gene.

    In the meantime, diagnostics firms are developing assays of their own. Currently, there is only one FDA-approved liquid biopsy test on the market in the United States. But there also are a growing handful of lab-developed assays for specific genetic mutations available and several more in development.

    Early cancer screening is farther out, and while many researchers still express skepticism, the application received a high-profile boost in January when sequencing firm Illumina announced it was launching a spinoff company called Grail. The company, which has already raised some $100 million in funding, will leverage “very deep sequencing” to identify rare ctDNA mutations, and plans to launch a “pan-cancer” screening test by 2019.

    Only time will tell, though, whether Grail or any other company is able to fundamentally alter how patients are treated for cancer. But one thing is certain, Flaherty says: Genetic testing, however it is done, only addresses the diagnostics side of the personalized medicine challenge; progress is also required on the drug development side. After all, what good is a test if there’s no way to act on it?

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 12:05 pm on May 19, 2016
    Tags: Complex Life May Have Emerged One Billion Years Earlier Than We Thought, NOVA

    From NOVA: “Complex Life May Have Emerged One Billion Years Earlier Than We Thought” 



    18 May 2016
    Allison Eck

    For billions of years, life remained stagnant—stuck in a mode of single-celled simplicity.

    But the latter part of this long stretch of time, a period known as the “boring billion” for its presumably low levels of oxygen, may not have been so boring after all.

    A team of researchers, including Andrew Knoll of Harvard University and Shixing Zhu of the China Geological Survey in Tianjin, has discovered fossil evidence of what appear to be multi-celled eukaryotes—organisms in which differentiated cells contain a nucleus with genetic material—in the Yanshan region of China’s Hebei province. The fossils were preserved in mudstone and are dated to 1.56 billion years ago. The scientists published* their findings in the journal Nature Communications.

    Previously, scientists believed that insufficient oxygen would have stifled organismic growth during the “boring billion.” They also hadn’t found any eukaryotic fossils of similar size that were more than 635 million years old—about 100 million years before the Cambrian Explosion, the rapid proliferation of complex plant and animal life.

    But if this new report is true, it could push that 635-million-year mark back by another billion years.

    A fragment of the granular texture shown in the Yanshan fossils.

    The team says that the variety of fossils is an indication of complexity—a third of the 167 samples are in one of four different shapes, and exhibit a fine, regular cell structure. But while the team is using these details to claim that the leap from single-celled to multi-celled organisms happened earlier than once thought, some scientists are skeptical. They argue that these specimens are merely “colonies” of single-celled bacteria.

    Here’s Paul Rincon, reporting for BBC News:

    Prof Knoll told BBC News: “It looks like the leap from single cells to simple multi-cellularity is easy—in relative terms. It was done many times (over the course of evolution) and this really cements the case that it was done early in the history of eukaryotes.”

    The study is very much up for debate—and that matters not just for the history of life but also for the early evolution of Earth’s chemistry. If Earth was somehow more suitable for multi-cellular life 1.56 billion years ago, we may have to rewrite our planet’s origin story.

    *Science paper:
    Decimetre-scale multicellular eukaryotes from the 1.56-billion-year-old Gaoyuzhuang Formation in North China

    See the full article here.


  • richardmitnick 8:37 pm on May 16, 2016
    Tags: Beyond Antibiotics, NOVA

    From NOVA: “Beyond Antibiotics” 



    05 May 2016
    Jenny Morber
    Photo credits: Paenigenome/Wikipedia (CC BY-SA), Oxford Nanopore Technologies

    Late on the night of December 26, just three days after I had given birth to my son and two days after receiving two blood transfusions for complications, I was back in the hospital. This time, I had gone straight to the emergency room. There we were: me, my days-old baby, and my husband. The ER doctors put us in a separate room to try to isolate us from the contagion filling the hospital floor. But a few hours in, my husband started vomiting. He was asked to leave if he did not wish to become a patient. My baby could not stay with me. It was dangerous just having him there. I was alone.

    I had left the hospital with my baby on Christmas Day. As best I can recall, the discharge nurses had instructed me to call back if I developed a fever over 101˚ F. The day I checked into the ER, our thermometer had read just a few tenths of a degree over.

    The doctors knew I had recently been in the hospital, and they knew that I had lost a lot of blood. But they had no idea which bacterial strain was causing the fever, so they hedged their bets, giving me two different antibiotics—one for MRSA (methicillin-resistant Staphylococcus aureus), a common antibiotic-resistant strain of staph bacteria, and another antibiotic for the common infection culprit E. coli. When my hospital roommate heard I was on a drug for MRSA, she requested to be moved. I lay in bed alone, ordered hospital food, and pumped tainted breast milk.

    In the days that followed, I started eating ice in the hopes that it would trick the nurses’ thermometer and they would let me go home. I fooled no one. This low-grade fever would keep me in the hospital for twice as long as my massive blood loss did following birth.

    My two hospital stays illustrate how the script for medical care has flipped over the last several decades. In many ways, a hospital-acquired infection has become more serious than an uncontrolled bleed. Bacteria and other pathogens are developing multi-drug resistance, and our last, best strategies are failing.

    You may have heard the rumblings. Doctors have been warned not to over-prescribe antibiotics. Consumers have been admonished against antibacterial soaps and creams. We can now buy meat and dairy marked “animals not treated with antibiotics or hormones” in the supermarket. But those measures are only stop-gaps. They will only, perhaps, slow the pace of resistance.

    For years, researchers and doctors have known this. Responsible antibiotic use isn’t enough to win the pathogen war—it “reflects an alarming lack of respect for the incredible power of microbes,” wrote a group of infectious disease experts from across the U.S. in a 2008 “Call to Action” paper*. After all, they write, microbes have been evolving and adapting for 3.5 billion years. Thanks to their combination of genetic plasticity and rapid generation time—they can undergo as many as 500,000 generations during one of ours—they are especially good at overcoming evolutionary obstacles.
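    The 500,000-generations figure is easy to sanity-check. The sketch below assumes a fast-growing microbe dividing roughly every 25 minutes and a human generation of about 25 years; both are illustrative round numbers, not values from the paper.

```python
# Sanity check on "as many as 500,000 bacterial generations during
# one of ours." Doubling time and human generation length are
# illustrative assumptions.
MINUTES_PER_YEAR = 365 * 24 * 60
human_generation_min = 25 * MINUTES_PER_YEAR  # ~25 years, in minutes
doubling_time_min = 25                        # fast-growing microbe

generations = human_generation_min / doubling_time_min
print(f"Bacterial generations in 25 years: about {generations:,.0f}")
```

    With those inputs the count comes out just over 500,000, consistent with the article’s figure.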

    Antibiotic resistance is just another byproduct of those abilities. It has evolved because patients don’t complete a full course of drugs or because animals receive drugs they don’t really need or because a college kid slathers his apartment in antibacterial spray. Antibiotics kill most but not all of the bacteria they encounter. The strongest ones live. These reproduce and pass on their advantages, and sometimes they get together and swap genes. Eventually, the resistant types grow very, very resistant.

    It wouldn’t be a problem if bacteria weren’t evolving resistance faster than we have been able to respond. New antibiotics are difficult to produce, and they don’t make as much profit as drugs for chronic disease, so there has been a dearth of investment. “Why it feels like it’s happening right now is that there aren’t really new antibiotics coming down the pipeline,” says Dr. Carmen Cordova, a microbiologist who works for the nonprofit Natural Resources Defense Council. “We don’t have really a plan B.”

    Three years ago, science journalist Maryn McKenna published an award-winning article in which she imagines a modern environment devoid of antibiotics. It’s a return to the medical dark ages in which illnesses like tuberculosis, pneumonia, and meningitis are death sentences. It would mean that burn victims, surgery patients, laboring mothers, and those undergoing chemotherapy would have to worry constantly about succumbing to infection.

    President Barack Obama has called antibiotic resistance “one of the most pressing public health issues facing the world today.” A conservative estimate from a British project called the Review on Antimicrobial Resistance states that, if left unchecked, antibiotic resistance will cause 10 million human deaths per year—more than the number of people who currently die from cancer and diabetes combined.

    The threat is looming. Late last year researchers identified bacteria resistant to a last-resort antibiotic in Chinese raw chicken and pork, slaughterhouse pigs, and hospital patients. The resistance gene has since been identified in bacteria across Asia and Europe.

    But we are fighting back. President Obama has asked Congress for $1.2 billion over five years for developing new diagnostic tools, creating a database of antibiotic resistant diseases, and funding research to better understand drug resistance. It joins other, ongoing efforts to identify promising new candidates. In labs and universities around the world, researchers are hard at work identifying, testing, and perfecting strategies that go well beyond what, today, we call antibiotics.

    Knocking Out Communications

    We used to think that bacteria were dumb. We thought they ate, pooped, divided, and not much else. We now know that this story is much too simple. Bacteria communicate, keep tabs on their environment, and respond and react. They ask how many others are around and how they are doing, and only when a certain number have congregated do most pathogenic bacteria turn virulent. The danger is in numbers.

    Of course, sometimes they gain the upper hand despite our best efforts. But what if there was a way to hide them from one another? What if we could shut down their party before it starts? That’s the idea behind quorum sensing inhibitors.

    Paenibacillus vortex uses cell-to-cell communication to form colonies with complex shapes.

    Quorum sensing inhibitors (QSIs) are molecules designed to interfere with pathogen communication. When bacteria go looking for friends, they put out small molecules like little flags, saying, “I am here!” For the past two decades, researchers have been working to develop strategies that interfere with every step of the process, by halting production of these flags, obscuring them so that other bacteria cannot recognize them, or blocking responses when they are recognized.

    The results, so far, have been promising. According to Vipin Kalia, a researcher at the Institute of Genomics and Integrative Biology in Delhi, India, such quorum sensing inhibitors are less likely to lead to bacterial resistance because, “Antibiotics create a pressure on bacteria because their survival is under threat. QSIs are not threatening their existence and survival.” Several of these inhibitors have been shown to reduce virulence in animals, Kalia says, and two have made it to clinical trials.

    Still, none are currently used to treat human diseases, and while it may be more difficult for bacteria to develop resistance to these inhibitors, it is not impossible. In fact, it may already be happening. In a 2014 paper in the journal Microbial Ecology, Kalia and his colleagues write that “evidence is accumulating that bacteria may develop resistance to QSIs.” It appears that communication is important enough to bacteria that they have several different channels. If one path is blocked, they try to use another. “Apparently” the researchers write, “bacteria do not even need to undergo any genetic change to withstand quorum sensing inhibitors.”

    QSIs may buy us some time, but will it be enough? Fortunately, there are other options.

    Creating an Inhospitable Environment

    The development of new antibiotics has often relied on tweaking existing drugs. It’s a simpler approach than developing an entirely new class of antibiotics, but the modifications are often small enough that bacteria can adapt relatively easily. Fortunately, plants, animals, fungi, and other microbes have been battling it out long before we arrived, and they have evolved a few good tricks.

    One class, antimicrobial peptides (AMPs), was first identified in silk moths and is now known to be part of the innate immune response in almost all organisms, from algae and plants to the entire animal kingdom. AMPs exploit differences between the membranes of a host cell and a bacterium to selectively target only the harmful invader. Bacterial membranes tend to be negatively charged, while mammalian cell membranes tend to be neutral. The positively charged peptides glom onto the bacterial membrane and punch holes in it. Rather than having to recognize specific targets on the cell membrane, as traditional antibiotics do, AMPs grab any bacterial membrane that doesn’t belong and shoot it full of holes.

    AMPs happen to be much more resistant to bacterial adaptations. Despite their ancient origins, AMPs remain effective weapons today. “Bacteria can become resistant to antibiotics by simple modifications of the receptor where the drug attacks. To become resistant to AMPs…they need to change their entire membrane chemistry,” says Karen Lienkamp, a junior research group leader working on AMPs at the University of Freiburg in Germany.

    Right now, researchers are working on making materials that have synthetic versions of AMPs, or SAMPs, on the surface. These specially engineered surfaces help reduce microbe contamination on hospital equipment, in the air, and on clothing. Specifically, they prevent biofilms—clumps of bacteria that form protective nets around themselves. “Biofilms are the ‘root of all evil,’ ” Lienkamp says. “Studies with regular antibiotics have shown that you need up to 1,000 times the concentration to kill bacteria that are encapsulated in a biofilm.”

    Other strategies to prevent bacteria from taking up residence on hospital surfaces include extra-slippery materials and materials that shed layers to prevent buildup. Yet as promising as they are, anti-fouling surfaces are only useful for prevention. They won’t help someone who already has rampant infection. What we really need are ways to identify what ails us, and quickly, with customizable drugs to treat them.

    Faster Identification

    Justin O’Grady is a lecturer in medical microbiology at the University of East Anglia in the United Kingdom. Last year, he and others published a paper in the journal Nature Biotechnology about a technology that can take a sample of blood and, in six to eight hours, identify the bacterium causing an infection. In hospitals and labs today, that typically takes two to five days.

    The device is a gene sequencer called the MinION, which reads a bacterium’s genetic signature within minutes. It’s small enough to plug into a computer’s USB port, and at around $1,000 for the necessary equipment, it doesn’t cost nearly as much as traditional laboratory-grade gene sequencers. Moreover, the device, when given another day or so to process the sample, can also identify which genes in an invading bacterium are responsible for antibiotic resistance. “This would be a personalized medicine approach to antibiotic treatment,” O’Grady says.

    A MinION sequencer.

    While other molecular identification technologies exist, sequencing technologies like the MinION are broad-spectrum microbe detectors. “The advantage of sequencing for this approach is that you get an unbiased diagnosis,” O’Grady says. “You don’t need to know anything about what pathogen might be in there.”
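Conceptually, sequencing-based identification works by matching DNA fragments read from the sample against reference genomes and reporting the best hit. The toy sketch below illustrates the idea with shared k-mer counting; the reference snippets are invented for the example, and real pipelines for MinION data are far more sophisticated.

```python
# Toy illustration of reference-based pathogen identification: count how many
# short substrings (k-mers) of a sequencing read appear in each reference
# genome. The reference fragments below are made up for illustration.

def kmers(seq, k=5):
    """Return the set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical reference fragments, keyed by species name.
REFERENCES = {
    "E. coli":   "ATGGCGTTACCGGATTACGATCGGTAC",
    "S. aureus": "TTGACCGGAATCCGTTAAGCCGATTAA",
}

def classify(read, k=5):
    """Score each reference by shared k-mers; return (best match, all scores)."""
    read_kmers = kmers(read, k)
    scores = {name: len(read_kmers & kmers(ref, k))
              for name, ref in REFERENCES.items()}
    return max(scores, key=scores.get), scores

# A read copied from the fake E. coli fragment scores highest there.
best, scores = classify("GCGTTACCGGATTACG")
print(best, scores)
```

On real data the same idea scales to databases of thousands of genomes, which is what gives sequencing its “unbiased diagnosis” property: no prior guess about the pathogen is needed.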

    If something like the MinION had been available when I was in the hospital, my experience would probably have been quite different. It wouldn’t have taken doctors five days to determine the root cause, getting me home sooner and saving both the hospital and the insurance company a significant amount of money. Nor would I have been given ineffective antibiotics, sparing the bacteria days of selection pressure in which to evolve further resistance.

    Instead of doctors treating their patients with best-guess antibiotics while they wait days for culture results to come back, sequencing technologies like these will tell them what is going on by the time the nurse comes around with the second dose. “If we can change the way that we prescribe antibiotics, we can improve antibiotic stewardship and we can improve patient management at the same time,” O’Grady says. “The patient receives better treatment quicker, and society benefits because we keep our potent antibiotics for those who need them most.”

    Building a Library

    The most promising new antibiotics may, counterintuitively, come from the bacterial domain itself. Though nature’s bacterial library could hold a multitude of undiscovered antibiotic recipes, sifting through its diversity is a daunting task: Most bacteria found in the environment—99%—will not grow on a petri dish in traditional cell culture.

    We may soon be able to unlock that library, though, thanks to the work of Kim Lewis, director of the Antimicrobial Discovery Center at Northeastern University in Boston. Lewis reported* last year in the journal Nature that he and his colleagues found a way to culture bacteria from the soil, which have been notoriously difficult to grow in the lab because they rely on signals and molecules from neighboring bacteria to grow.

    Lewis and his colleagues decided that rather than try to painstakingly recreate those conditions, they would just bring a little soil back to the lab. They started by collecting a small scoop of soil, rinsing it in water, mixing the water into culture medium, and then squirting the mixture into a device they developed called the iChip. They then placed the iChip into a bucket of the same soil kept in the lab. After a month they pulled the iChip out, cultured the bacteria, and observed what was growing. The bacteria were still growing in agar—not their ideal environment—but they were happier after the month spent back in the soil.

    “Ten percent of uncultured bacteria from the natural environment require growth factors from neighboring bacteria,” Lewis says. Their soil vacation seems to make the bacteria stronger, more adaptable, and more likely to grow on the iChip.

    Eager to build their library, Lewis and his colleagues began collecting soil from their back yards. If someone went on vacation, they were given a kit to bring back a sample. One collaborator took a serendipitous trip to Maine and returned with a completely new genus of bacteria, one that produced an incredibly effective antibiotic, which they named teixobactin. In experiments with human cells and in mice, teixobactin proved exceptionally effective at killing Clostridium difficile (a resistant bacterium that causes severe intestinal infections and is often most effectively treated with fecal transplants) and Staphylococcus aureus, the bacterium whose resistant strains are called MRSA.

    Teixobactin must clear many hurdles before it can become a drug offered in the doctor’s office. It will need to be formulated to remain active inside the human body, toxicology tests will need to ensure that it does not cause nasty side effects, and experiments will need to determine which other medicines it may interact with.

    Lewis is working with a company to improve teixobactin’s solubility, and he estimates that it will take two years to get to clinical trials, which will then take at least another three years. So, even in the best-case scenario, teixobactin will not be helping us fight off antibiotic-resistant disease for at least the next five years.

    In the Meantime

    Of course, these are not the only promising technologies being developed to prevent, fight, and treat antibiotic-resistant diseases. Bacteria-targeting viruses, gene editing, nanoparticles, and shotgun-like strategies using multiple drugs are all brimming with potential.

    But each of these needs time—time for research, development, and optimization. In the meantime we will have to trust that our current technologies and common-sense prevention provide the window we need. (Please, everyone, wash your hands.)

    As for my brush with an antibiotic-resistant infection, I eventually returned home to my husband and children. On the fifth day, lab results revealed why the drugs were not working. I did not have MRSA. I did have E. coli, but the strain I had was resistant to the drug they were giving me. Neither of the antibiotics was having any effect.

    The doctors had made their best guesses, but those guesses had been wrong. The hospital let me leave with a new antibiotic and a tube inserted into a vein close to my heart. Don’t let that tube get dirty, they warned me, or the infection could kill you. Don’t get air in the line, they warned me, or that could kill you too.

    To say I was careful is an understatement. Three times a day I fed my newborn, put him down to sleep, and followed the hour-long procedure. Years of work in a nanotech laboratory had taught me how to be meticulous. After two weeks, a nurse came and removed the line next to my heart. The treatment had worked. The bacteria I acquired at the hospital had not evolved resistance to every weapon in our arsenal. At least, not yet.

    *Science paper:
    The Epidemic of Antibiotic-Resistant Infections: A Call to Action for the Medical Community from the Infectious Diseases Society of America

    There are other science papers referenced in this article, but no links were provided. I have requested the links. I will update this post with whatever I get in the way of links.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 7:59 pm on May 16, 2016 Permalink | Reply
    Tags: NOVA, Scientists Built a Giant Molecule That Could Fight Nearly Any Viral Infection

    From NOVA: “Scientists Built a Giant Molecule That Could Fight Nearly Any Viral Infection” 



    16 May 2016
    Allison Eck

    Simian virus 40, a virus found in both monkeys and humans. No image credit.

    The influenza virus. CDC/ Dr. Erskine. L. Palmer; Dr. M. L. Martin via Flickr

    Viruses have eluded our best efforts to fight them off. They mutate much more quickly than bacteria, and most anti-viral drugs that do keep symptoms at bay need to be administered for the rest of a patient’s life.

    But now, researchers may have discovered a workaround: a macromolecule that’s swift and nimble enough to tackle virtually any virus that crosses its path. The scientists, from both IBM and the Institute of Bioengineering and Nanotechnology in Singapore, recently published their findings* in the journal Macromolecules.

    The team concentrated their efforts on the similarities between viruses. Here’s Claire Maldarelli, writing for Popular Science:

    A group of researchers at IBM and the Institute of Bioengineering and Nanotechnology in Singapore sought to understand what makes all viruses alike.

    For their study, the researchers ignored the viruses’ RNA and DNA, which could be key areas to target, but because they change from virus to virus and also mutate, it’s very difficult to target them successfully.

    Instead, the researchers focused on glycoproteins, which sit on the outside of all viruses and attach to cells in the body, allowing the viruses to do their dirty work by infecting cells and making us sick. Using that knowledge, the researchers created a macromolecule, which is basically one giant molecule made of smaller subunits. This macromolecule has key factors that are crucial in fighting viruses. First, it’s able to attract viruses towards itself using electrostatic charges. Once the virus is close, the macromolecule attaches to the virus and makes the virus unable to attach to healthy cells. Then it neutralizes the virus’ acidity levels, which makes it less able to replicate.

    The researchers found that the molecules did in fact latch onto a number of viruses’ glycoproteins (including those of the Ebola and dengue viruses) and reduced the number of viruses in their lab experiments. A sugar in the molecules was also able to bind to healthy immune cells that, in turn, destroyed the virus more efficiently.

    If the technique plays out as expected in further experiments, this lone molecule could someday be responsible for ridding humankind of the worst viral infections—from Ebola, to Zika, to the flu. That will take a while, though, and some scientists caution that universal antivirals may be dangerous, anyway—they could upset our immune systems in ways we don’t currently anticipate. Still, this macromolecule is a proof of concept that powerful antiviral drugs are not completely out of reach.

    *Science paper:
    Cooperative Orthogonal Macromolecular Assemblies with Broad Spectrum Antiviral Activity, High Selectivity, and Resistance Mitigation

    See the full article here.


  • richardmitnick 11:36 am on May 16, 2016 Permalink | Reply
    Tags: NOVA

    From NOVA: “Revealing the Universe’s Mysterious Dark Age” 



    06 Apr 2016 [They just put this in social media]
    Marcus Woo

    The universe wasn’t always like this. Today it’s filled with glittering galaxies, scattered across space like city lights seen from above. But there was a time when all was dark. Really dark.

    Dark Ages Universe ESO

    A time-lapse visualization of what the cosmic web’s emergence might have looked like. No image credit

    First, a very brief history of time: from the Big Bang, the universe burst onto the scene as a tiny but glowing inferno of energy. Immediately, it expanded and cooled, dimming into darkness as particles condensed out of the hot soup like droplets of morning dew. Electrons and protons coalesced into atoms, which formed stars, galaxies, planets, and eventually us.

    Inflationary Universe. NASA/WMAP

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    But a crucial piece still eludes scientists. It’s a gap of several hundred million years that was filled with darkness—a darkness both literal and metaphorical. Astronomers call this period the dark ages, a time that’s not just bereft of illumination, but also devoid of data.

    The Big Bang left a glowing imprint on the entire sky called the cosmic microwave background…

    Cosmic Microwave Background per ESA/Planck

    …representing the universe when it was 380,000 years old. Increasingly precise measurements of this radiation have revealed unprecedented details about the earliest cosmic moments. But from then until the emergence of galaxies big and bright enough for today’s telescopes, scientists don’t have any information. Ever mysterious, these dark ages are the final frontier of cosmology.

    And it’s a fundamental frontier. It represents the universe’s most formative years, when it matured from a primordial soup to the cosmos we recognize today.

    Even without much direct data about this era, researchers have made great strides with theory and computer models, simulating the universe through the birth of the first stars. Soon, they may be able to put those theories to the test. In a few years, a suite of new telescopes with new capabilities will start peering into the darkness, and for the first time, astronomers will reach into the unknown.

    The Final Frontier

    Considering that it’s the entire universe they’re trying to understand, cosmologists have done a pretty good job. Increasingly powerful telescopes have allowed them to peer to greater distances, and because the light takes so long to reach the telescopes, astronomers can see farther back in time, capturing snapshots of a universe only a few hundred million years old, just as it emerged from the dark ages. Given that the universe is now 13.7 billion years old, that’s like taking a picture of the cosmos as a toddler.

    That makes the cosmic microwave background, or CMB, like a detailed ultrasound. This radiation contains the first photons that escaped the yoke of the universe’s primordial plasma. When the universe was a sea of radiation and particles, photons couldn’t travel freely because they kept running into electrons. But about 380,000 years after the Big Bang, the universe had cooled enough that protons were able to lasso electrons into orbit to form hydrogen atoms. Without electrons in their way, the newly liberated photons could now fly through the cosmos and, more than 13 billion years later, enter the detectors of instruments like the Planck satellite, giving cosmologists the earliest picture of the universe.

    But from this point on, until the universe was a few hundred million years old—the limit of today’s telescopes—astronomers have nothing. It’s as if they have a photo album documenting a person’s entire life, with pictures of young adulthood, adolescence, childhood, and even before birth, but nothing from when the person learned to talk or walk—years of drastic changes.

    That doesn’t mean astronomers have no clue about this period. “People have thought about the first stars since the 1950s,” says Volker Bromm, a professor of astronomy at the University of Texas, Austin. “But they were very speculative because we did not know enough cosmology.” Not until the 1980s did researchers develop more accurate theories that incorporated dark matter, the still-unknown type of particle or particles that comprises about 85% of the matter in the universe. But the first key breakthrough came in 1992, when NASA’s COBE satellite mapped the CMB’s faint temperature variations for the first time, collecting basic but crucial data about what the universe was like at the very beginning—the so-called initial conditions of the cosmos. Theorists such as Martin Rees, now the Astronomer Royal of the United Kingdom, and Avi Loeb, a professor of astrophysics at Harvard, realized you could plug these numbers into the equations that govern how the first gas clouds and stars could form. “You could feed them into a computer simulation,” Loeb says. “It’s a well-defined problem.”

    Both Rees and Loeb would influence Bromm, then a graduate student at Yale. Rees’s early work in the 1980s, in particular, inspired Tom Abel, who was a visiting scientist during the 1990s at the University of Illinois, Urbana-Champaign. Independently, Abel and Bromm would make some of the first computer models of their kind to simulate the first stars. “That really opened the field,” Loeb says. “When I started, there were maybe one or a few people even willing to discuss this subject.”

    Theorists like Bromm and Abel, now a professor at Stanford, have since pieced together a blow-by-blow account of the dark ages. Here’s how they think it all went down.

    Then There Was Light

    In the earliest days, during the time that we see in the CMB, the entire universe was bright and as hot as the surface of the sun. But the universe kept expanding and cooling, and after nearly 15 million years, it was as cool as room temperature. “In principle, if there were planets back then, you could’ve had life on them if they had liquid water on their surface,” Loeb says. The temperature continued to fall, and the infrared radiation that suffused the universe lengthened, shifting to radio waves. “Once you cool even further, the universe became a very dark place,” Loeb says. The dark ages had officially begun.
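Loeb’s room-temperature remark follows from a simple scaling: the background radiation temperature is today’s measured CMB temperature of about 2.725 K multiplied by (1 + z), where z is the redshift. A rough check (the matter-dominated age scaling used below is a crude assumption, which is why it lands near, rather than exactly on, the 15-million-year figure):

```python
# Back-of-envelope check: at what redshift was the universe at room temperature?
# Radiation temperature scales as T(z) = T0 * (1 + z).
T0 = 2.725        # K, CMB temperature measured today
T_ROOM = 300.0    # K, "room temperature"

z = T_ROOM / T0 - 1
print(f"redshift z ≈ {z:.0f}")          # roughly z ~ 110

# Rough age at that redshift, assuming matter domination: t ∝ (1 + z)^(-3/2).
# This is a crude approximation, hence only order-of-magnitude agreement
# with the "nearly 15 million years" in the text.
T_NOW_MYR = 13.7e3                      # present age in Myr (article's figure)
t_myr = T_NOW_MYR * (1 + z) ** -1.5
print(f"age at that time ≈ {t_myr:.0f} million years")
```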

    Meanwhile, the simulations show, things began to stir. The universe was bumpy, with regions of slightly higher and lower densities, which grew from the random quantum fluctuations that emerged in the Big Bang. These denser regions coaxed dark matter to start clumping together, forming a network of sheets and filaments that crisscrossed the universe. At the intersections, denser globs of dark matter formed. Once these roundish halos grew to about 10,000 times the mass of the Sun, Abel says—a few tens of millions of years after the Big Bang—they had enough gravity to corral hydrogen atoms into the first gas clouds.

    Those clouds could then accumulate more gas, heating up to hundreds of degrees. The heat generated enough pressure to prevent further contraction. Soon, the clouds settled into enormous, but rather dull, balls of gas about 100 light years in diameter, Abel says.

    But if the dark matter halos reached masses 100,000 times that of the sun, they could accrue enough gas that the clouds could heat up to about 1000 degrees—and that’s when things got interesting. The surplus energy allowed hydrogen atoms to merge two at a time and form hydrogen molecules—picture two balls attached with a spring. When two hydrogen molecules collide, they vibrate and emit photons that carry away energy.

    When that happens, the molecules convert vibrational energy—heat—into radiation that’s lost into space. These interactions cooled the gas, slowing down the molecules and allowing the clouds to collapse. As the clouds grew denser, their temperatures and pressures soared, igniting nuclear fusion. That’s how the first stars were born.

    These first stars, which formed by the time the universe was a couple hundred million years old, were much bigger than those in today’s universe. By the early 2000s, Abel’s simulations, which he says are the most realistic and advanced yet, showed that the first stars weighed about 30 to 300 times the mass of the sun. Using different techniques and algorithms, Bromm says he arrived at a similar answer. For the first time, researchers had a good idea as to what the first objects in the universe were like.

    Massive stars consume fuel like gas-guzzling SUVs. They live fast and die young, collapsing into supernovae after only a few million years.

    Supernova remnant Crab nebula. NASA/ESA Hubble

    NASA/ESA Hubble Telescope

    In cosmic timescales, that’s the blink of an eye. “You really want to think of fireworks at these early times,” Abel says. “Just flashing everywhere.”

    In general, the first stars were sparse, separated by thousands of light years. Over the next couple hundred million years, though, guided by the clustering of dark matter, the stars started grouping together to form baby galaxies. During this cosmic dawn, as astronomers call it, galaxies merged with one another and became bigger galaxies. Only after billions and billions of years would they grow into those like our own Milky Way, with hundreds of billions of stars.

    Lifting the Fog

    But there’s more to the story. The first stars shone in many wavelengths, and especially strongly in ultraviolet. The universe’s expansion would’ve stretched this light to visible and infrared wavelengths, which many of our best telescopes are designed to detect. Problem is, during the time of the first stars, a thick fog of neutral hydrogen gas blanketed the whole universe. This gas absorbed shorter-wavelength ultraviolet light, obscuring the view from telescopes. Fortunately, though, this fog would soon lift.

    “This state of affairs can’t last for very long,” says Richard Ellis, an astronomer at the European Southern Observatory in Germany.

    ESO 50 Large

    “These ultraviolet photons have sufficient energy to break apart the hydrogen atom back into an electron and a proton.” The hydrogen was ionized, each atom split into a free proton and electron that could no longer absorb ultraviolet light. The gas was now transparent.

    During this so-called period of reionization, galaxies continued to grow, producing more ultraviolet light that ionized the hydrogen surrounding them, clearing out holes in the fog. “You can imagine the hydrogen like Swiss cheese,” Loeb says. Those bubbles grew, and by the time the universe was around 800 million years old, the ultraviolet radiation ionized the hydrogen between the galaxies, leaving the entire cosmos clear and open to the gaze of telescopes. The dark ages were over, revealing a universe that looked more or less like it does today.

    Seeing into the Dark

    Of course, many details have to be worked out. Astronomers like Ellis are focusing on the latter stages of the dark ages, using the most powerful telescopes to extract clues about this reionization epoch.

    One big question has been whether the ultraviolet light from early galaxies was enough to ionize the whole universe. If it wasn’t, astronomers would have to find another exotic source—like black holes that blast powerful, ionizing jets of radiation—that would have finished the job.

    To find the answer, Ellis and a team of astronomers stretched the Hubble Space Telescope to its limits, extracting as much light as possible from one small patch of sky. These observations reached some of the most distant corners of the universe, discovering some of the earliest galaxies ever seen, during the heart of this reionization era. Their observations suggested that galaxies—large populations of small galaxies, in particular—did seem to have enough ultraviolet light to ionize the universe. Maybe nothing exotic is needed.

    NASA/ESA Hubble Deep Field

    But to know exactly how it happened, astronomers need new telescopes, like the James Webb Space Telescope set for launch in 2018.

    NASA/ESA/CSA Webb Telescope annotated

    “With the current facilities, it’s just an imponderable,” Ellis says. “We don’t have the power to study these galaxies in any detail.”

    Other astronomers are focusing not on the galaxies, but the hydrogen fog itself. It turns out that the spins of a hydrogen atom’s proton and electron can flip-flop in direction. When the spins go from being aligned to unaligned, the atom releases radiation at a wavelength of 21 centimeters, or 8.27 inches, a telltale signal of neutral hydrogen that astronomers call the 21-cm line. The expanding universe would have stretched this signal to the point where it became a collection of radio waves. The more distant the source of light, the more the radiation gets stretched. By using arrays of radio telescopes to measure the extent of this stretching, astronomers can map the distribution of hydrogen at different points in time. They could then track how those holes in the gas grew and grew until the gas was all ionized.
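The stretching is multiplicative: a signal emitted at 21 cm from redshift z arrives at 21 cm × (1 + z), or equivalently at 1420 MHz / (1 + z). A quick sketch (z = 8 is just an illustrative redshift within the reionization era, not a value from the article):

```python
# The hydrogen spin-flip line: rest wavelength ~21.1 cm, rest frequency ~1420.4 MHz.
# Cosmic expansion stretches it: lambda_obs = lambda_rest * (1 + z),
# nu_obs = nu_rest / (1 + z).
LAMBDA_REST_CM = 21.1
NU_REST_MHZ = 1420.4

def observed(z):
    """Observed wavelength (cm) and frequency (MHz) of 21-cm emission from redshift z."""
    return LAMBDA_REST_CM * (1 + z), NU_REST_MHZ / (1 + z)

# An illustrative redshift within the reionization era:
lam, nu = observed(8.0)
print(f"observed: {lam:.0f} cm, {nu:.0f} MHz")   # ~190 cm, ~158 MHz
```

Measuring the signal across a range of observed frequencies is what lets arrays like HERA reconstruct the hydrogen at a sequence of cosmic times.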

    “It’s surveying the volume of the universe on a scale that you can’t imagine doing in any way other than through this method—it’s really quite incredible,” says Aaron Parsons, an astronomer at the University of California, Berkeley, who’s leading a project called HERA, which will consist of 352 radio antennae in South Africa.

    NSF HERA, South Africa

    Once online, the telescope could give an unprecedented view of reionization. “You can almost imagine making a movie of how the first stars and galaxies formed, how they interacted, heated up, ionized, and turned into the galaxies we recognize today.”

    Other telescopes like LOFAR in the Netherlands and the Murchison Widefield Array in Australia will make similar measurements.


    ASTRON LOFAR Radio Antenna Bank

    SKA Murchison Widefield Array

    But HERA will be more sensitive, Parsons says. And already with 19 working antennae in place, it might be closest to success, adds Loeb, who isn’t part of the HERA team. “Within a couple years, we should have the first detection of the 21-cm line from this epoch of reionization, which would be fantastic because it would allow us to see the environmental effect of ultraviolet radiation from the first stars and first galaxies on the rest of the universe.”

    This kind of data is crucial for informing computer models like the kind that Abel and Bromm have developed. But despite their successes, theorists are at the point where they need data to test whether their models are accurate.

    Unfortunately, that data won’t include pictures of the first stars. Even the most powerful telescopes won’t be able to see the brightest of them: the first galaxies contain only a few hundred stars and are just too small and faint. “We’ll come ever closer,” Abel says. “It’s very difficult to imagine we’ll actually see those in the near future, but we’ll see their brighter cousins.”

    In fact, the darkest of times, during the couple hundred million years between the CMB and the appearance of the first stars, may always remain beyond astronomers’ grasp. “We currently don’t have any idea of how you could get any direct information about that period,” he says.

    Still, new telescopes over the next few decades promise to reveal much of the dark ages and whether the story theorists are telling is true or even more fantastic than they had thought. “Even though I’m a theorist, I’m modest enough to acknowledge the fact that nature is sometimes more imaginative than we are,” Loeb says. “I’m open to surprises.”

    See the full article here.


  • richardmitnick 9:38 am on May 15, 2016 Permalink | Reply
    Tags: NOVA

    From NOVA: “A String of Supernovae May Have Caused Earth’s Recent Ice Ages” 



    07 Apr 2016 [I think I missed this one]
    Conor Gearin

    You may have thought Earth was safe from supernovae, the giant explosions from dying stars. Astronomers are now saying we should think again.

    Two teams of scientists reached the same surprising conclusion: a cluster of supernova blasts rained radioactive iron on the Earth. One of them blew up as recently as 1.5 million years ago.

    The road to this discovery began in 1999, when scientists took samples from the ocean’s crust, in a layer of rock that formed 2.2 million years ago. A radioactive isotope called iron-60 turned up in the samples. It’s an unstable isotope with a short half-life, which means any atoms found today must be recent arrivals. Had it been around when the planet took shape, it would have decayed long ago. Instead, it must have landed in the crust from outer space.
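That half-life is “short” relative to Earth’s 4.5-billion-year age: for iron-60 it is roughly 2.6 million years (a commonly cited laboratory value, not a figure from the article). The standard decay law then shows why primordial iron-60 would be long gone while atoms deposited 2.2 million years ago mostly survive:

```python
# Exponential decay: fraction of atoms remaining after time t is 0.5 ** (t / t_half).
T_HALF_MYR = 2.6   # iron-60 half-life in millions of years (assumed value)

def fraction_remaining(t_myr):
    """Fraction of an initial iron-60 sample left after t_myr million years."""
    return 0.5 ** (t_myr / T_HALF_MYR)

# Deposited 2.2 million years ago: more than half still present.
print(fraction_remaining(2.2))       # ~0.56

# Present since Earth formed ~4,500 million years ago: effectively zero.
print(fraction_remaining(4500.0))
```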

    An illustration of the radioactive iron cloud from one of the supernovae. No image credit.

    Until recently, astronomers thought that this iron came from just one nearby supernova from an unknown direction. But the two new studies suggest that, in fact, a string of supernovae went off over the past 10 million years—just a blink of an eye in galactic terms. And two of them were near enough to hit Earth with radioactive particles.

    Here’s Daniel Clery, reporting for Science Magazine:

    “Their analysis suggests that two of those supernovae were close enough and recent enough to have contributed to iron-60 on Earth: the first 2.3 million years ago, the second 0.8 million years later; both about 300 light-years from Earth.”

    “We may never be able to identify [the individual stars],” comments Neil Gehrels, an astrophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, “but we can see the regions of intense star formation” where they lived and died.

    These supernovae weren’t close enough to incinerate life on Earth. But all that iron in the air may have changed the planet’s climate. The timing of the explosions happens to correspond to the transition into the Pleistocene epoch, the icy glacial period dominated by mastodons, mammoths, and saber-toothed cats. It’s possible that the radioactive particles in the atmosphere could have seeded clouds—thereby increasing cloud cover—and contributed to the ice ages of the Pleistocene.

    To sort out exactly where in the galaxy the explosions came from, it might take new missions to the moon. Since Earth’s moon has no atmosphere, particles from supernovae could still be positioned exactly as they were when they first fell. Astrophysicists could use iron-60 samples from the moon to point the way to their source.

    See the full article here.


  • richardmitnick 11:39 am on May 14, 2016 Permalink | Reply
    Tags: NOVA

    From NOVA: “A Quantum Computer Has Been Hooked Up to the Cloud For the First Time” 



    04 May 2016
    Allison Eck

    You can now entangle qubits directly from your smartphone.

    A team at IBM has announced today that it has hooked up a quantum processor—housed at the IBM T.J. Watson Research Center in New York—to the cloud. For the first time in history, non-scientists and scientists alike can run quantum experiments from their desktop or mobile devices.

    “It’s really about starting to have a new community of quantum learners,” said Jay Gambetta, manager of the Theory of Quantum Computing and Information Group at IBM. “We’re trying to take the mysteriousness out of quantum.”

    The five-qubit processor is maintained at a temperature of 15 millikelvin. That’s 180 times colder than outer space.

    IBM is calling the cloud-based quantum platform the IBM Research Quantum Experience (which consists of a simulator as well as the live processor), and it’s a step in the direction of creating a universal quantum computer: one that can perform any calculation that is in the realm of what quantum mechanics predicts. No such computer exists today, but IBM suspects that researchers will find the means to develop one within the next decade.

    Quantum computing is a complicated beast compared to classical computing. Classical computers use bits to process information, where a bit represents either a zero or a one. Quantum computing, on the other hand, employs qubits—which represent either a zero, a one, or a superposition of both.

    IBM’s quantum computer holds five superconducting qubits, a relatively small number. The most expensive modern-day classical computer could emulate only a 30- or 40-qubit system, the researchers say. So it’s not as though IBM’s cloud-based quantum processor is going to solve anything that scientists can’t already figure out using a classical computer. Instead, the strength of IBM’s processor is derived from its use as an educational tool—anyone who is curious can experiment, play with real qubits, and explore tutorials related to quantum computing.
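The 30-to-40-qubit ceiling comes down to memory: a classical simulation of n qubits must store 2^n complex amplitudes, so the requirement doubles with every added qubit. A quick illustration (16 bytes per amplitude assumes double-precision complex numbers):

```python
# Full state-vector simulation of n qubits stores 2**n complex amplitudes.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory (bytes) for the state vector, at double-precision complex."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (5, 30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n:2d} qubits -> {gib:.6g} GiB")
# 30 qubits fit in a workstation's RAM (16 GiB); 50 qubits would need about
# a million times more, which is why classical emulation stops around 30-40.
```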

    In addition, scientists who access the processor will be able to use it to develop a better intuition for quantum computing. “We’ll know more about nature itself when we understand these algorithms,” Gambetta said. Specifically, experts can become more skilled at parsing quantum “noise”—the uncertainty inherent in the physical characteristics of quantum systems. If they can minimize that uncertainty—the flukes that cause a quantum computer to malfunction—in a small, five-qubit processor, then they can scale those lessons to create stronger quantum computers in the future.

    Eventually, with 50- to 100-qubit processors, scientists may be able to deduce the complex behavior of molecules using quantum computing. They could even make significant strides in artificial intelligence, big-data processing, and more.

    IBM’s announcement also marks the launch of the IBM Research Frontiers Institute, a consortium of organizations from various industries (including Samsung and Honda) that plans to collaborate on ground-breaking computing technologies. As classical computing becomes less relevant and Moore’s law starts to fade, such projects will become even more necessary. As Gambetta noted, the amount we know about quantum computing now is similar to what we knew about classical computing in the 1950s and 60s. It’s back to square one.

    “Everything you know about computing, you have to relearn it,” he said.



    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 1:03 pm on May 13, 2016 Permalink | Reply
    Tags: , , , NOVA, quasiparticle collider   

    From NOVA: “Quasiparticle Collider Could Illuminate Mysteries of Superconductivity” 



    When a marten—a small, weasel-like animal—crawled inside a transformer and shut down the Large Hadron Collider, it highlighted the risks of giant science experiments. The bigger the facility, the more chances for the unexpected. Physicists use the Large Hadron Collider (LHC), a 17-mile vacuum tube buried under Geneva, Switzerland, to speed up subatomic particles to near the speed of light and smash them together.

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC, 17 mile circular accelerator at CERN

    But a team of scientists has developed a marten-proof way to collide particles—using a device the size of a tabletop.

    Requiring no more than a laser and a tiny crystal, the technique studies a different kind of particle than the LHC does: the quasiparticle. Quasiparticles are, in essence, disturbances that form in a material and that can be classified—and modeled—as particles in their own right (even though they are not actual particles). For example, an electron quasiparticle is made up of an electron moving through a medium (in this particular study, a semiconductor crystal), plus the perturbations its negative charge causes in neighboring electrons and atomic nuclei.

    A quantum dot—a nanoscale semiconductor device that tightly packs electrons and electron holes—glows a specific color when it is hit with any kind of light. The team’s new quasiparticle collider could help scientists develop more efficient light-emitting tools, beyond what the quantum dot can do.

    Another example of a quasiparticle—a counterpart to the electron quasiparticle—is the electron hole, defined as the lack of an electron in a space otherwise filled with electrons. In other words, a “hole” quasiparticle is equivalent to the positively charged gap left behind by an electron on the go. So although quasiparticles are a little different from what we usually think of as particles, they’re sort of like air bubbles moving through water. Here’s Elizabeth Gibney, reporting for Nature News:

    It is intuitive for physicists to think in terms of quasiparticles, in the same way that it makes sense to follow a moving bubble in water, rather than trying to chart every molecule that surrounds it, says Mackillo Kira, a physicist at the University of Marburg in Germany and co-author of a report on the quasiparticle collider, published in Nature.

    Usually, the electron quasiparticle and the hole quasiparticle are bound up as a compound quasiparticle called an exciton. Their opposite charges pull them together. But with powerful laser pulses, physicists can cleave the exciton back into its component parts, which rush away from each other. Then they swing back and collide at high speed, producing light particles called photons. The physicists are able to detect the photons, which let them study what happened in the quasiparticle collision.

    Those photons could hold the secrets of how quasiparticles are structured. Though they’re only around for tiny fractions of a second, quasiparticles are an important part of physics. Since quasiparticles form when light is emitted, the new technique could illuminate a way to build better solar cells or to study strange forms of matter such as superconductors, since so-called Bogoliubov quasiparticles represent half of the electron pairs required for superconductivity.


  • richardmitnick 6:41 am on April 30, 2016 Permalink | Reply
    Tags: , , NOVA, SN 1006 supernova   

    From NOVA: “Ancient Philosophers Help Scientists Decode Brightest Supernova on Record” 



    29 Apr 2016
    Allison Eck

    In April of the year 1006 A.D., spectators around the world were treated to a resplendent display—a supernova, now called SN 1006, that reportedly shone brighter than Venus.

    Astronomers from China, Japan, Europe, and the Middle East all looked to the sky and wrote down what they saw. A few months later, the fireball faded from view.

    Now, Ralph Neuhäuser, an astrophysicist at Friedrich Schiller University Jena in Germany, has uncovered clues* about the supernova (in apparent magnitude, the brightest in recorded history) in the writings of the Persian scholar Ibn Sina, also known as Avicenna.

    SN1006, NASA Chandra 2011

    Here’s Jesse Emspak, reporting for National Geographic:

    “One section of his multipart opus Kitab al-Shifa, or “Book of Healing,” makes note of a transient celestial object that changed color and “threw out sparks” as it faded away. According to Neuhäuser and his colleagues, this object—long mistaken for a comet—is really a record of SN 1006, which Ibn Sina could have witnessed when he lived in northern Iran.

    While SN 1006 was relatively well documented at the time, the newly discovered text adds some detail not seen in other reports. According to the team’s translation, Ibn Sina saw the supernova start out as a faint greenish yellow, twinkle wildly at its peak brightness, then become a whitish color before it ultimately vanished.”

    This text illustrates an evolution of color unlike anything described in other accounts of the event. Scientists use this category of supernova—called Type Ia—to calculate distances across the universe: they are “standard candles” that emit the same amount of light energy no matter how far they are from Earth. By comparing a supernova’s perceived brightness with its known intrinsic brightness, researchers can pin down how far away it—and the ghostly nebula it leaves behind—is located in space. Thus, perceived changes in hue and brightness during an individual supernova event can help scientists refine the “standard candle” approach.
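The standard-candle comparison boils down to the inverse-square law, usually written as the distance modulus. A brief sketch with illustrative numbers (the magnitudes below are hypothetical, not measurements of SN 1006):

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance modulus: m - M = 5 * log10(d / 10 pc), solved for d."""
    return 10 ** ((apparent_mag - absolute_mag) / 5 + 1)

# Type Ia supernovae peak near absolute magnitude M of about -19.3.
# A hypothetical one observed at apparent magnitude m = +10.7 sits at:
d = distance_parsecs(10.7, -19.3)
print(f"{d:.2e} parsecs")   # ten million parsecs, roughly 10 megaparsecs
```

The dimmer the supernova appears relative to its fixed intrinsic brightness, the farther away it must be.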

    When two stars orbit each other and one of them collapses into a small but dense white dwarf, the white dwarf pulls gas from its partner star; eventually, the white dwarf will explode as a Type Ia supernova.

    Sag A* NASA Chandra X-Ray Observatory 23 July 2014, the supermassive black hole at the center of the Milky Way

    SN 1006 appears to have worked differently: in this case, two white dwarfs revolved around one another—then both lost energy in the form of gravitational waves and collided. Unusual supernovae like SN 1006 help scientists understand the full spectrum of supernova characteristics.

    Still, some of Neuhäuser’s colleagues say that, while interesting, the color evolution Ibn Sina described may not actually be that useful. Ibn Sina would have observed SN 1006 close to the horizon, so the colors he saw might have been distorted by atmospheric effects. What’s potentially more telling is another source that Neuhäuser has uncovered—writings from the historian al-Yamani of Yemen that suggest the supernova happened earlier than previously thought. Taken together, these accounts from a millennium ago will aid scientists’ efforts to decipher the universe.

    *Science paper:
    An Arabic report about supernova SN 1006 by Ibn Sina (Avicenna)


  • richardmitnick 4:58 pm on April 17, 2016 Permalink | Reply
    Tags: , NOVA,   

    From NOVA: “Can Quantum Computing Reveal the True Meaning of Quantum Mechanics?” 



    24 Jun 2015 [NOVA just put this up in social media.]
    Scott Aaronson

    Quantum mechanics says not merely that the world is probabilistic, but that it uses rules of probability that no science fiction writer would have had the imagination to invent. These rules involve complex numbers, called “amplitudes,” rather than just probabilities (which are real numbers between 0 and 1). As long as a physical object isn’t interacting with anything else, its state is a huge wave of these amplitudes, one for every configuration that the system could be found in upon measuring it. Left to itself, the wave of amplitudes evolves in a linear, deterministic way. But when you measure the object, you see some definite configuration, with a probability equal to the squared absolute value of its amplitude. The interaction with the measuring device “collapses” the object to whichever configuration you saw.
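The measurement rule just described—probability equal to the squared absolute value of the amplitude, followed by "collapse"—can be mimicked classically in a few lines. This is a toy sampler, not a simulation of real quantum dynamics:

```python
import random

def measure(amplitudes):
    """Born rule: observe configuration i with probability |a_i|^2,
    then collapse the state onto the observed configuration."""
    probs = [abs(a) ** 2 for a in amplitudes]
    r, running = random.random(), 0.0
    for i, p in enumerate(probs):
        running += p
        if r < running:
            break
    collapsed = [0.0] * len(amplitudes)
    collapsed[i] = 1.0   # all the weight now sits on what was seen
    return i, collapsed

# Complex amplitudes are fine: phases vanish when magnitudes are squared.
state = [0.6, 0.8j]          # outcome probabilities 0.36 and 0.64
outcome, state = measure(state)
```

What this sketch cannot capture is the linear, deterministic evolution between measurements—the part where amplitudes interfere, which is where all the quantum strangeness lives.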

    Those, more or less, are the alien laws that explain everything from hydrogen atoms to lasers and transistors, and from which no hint of an experimental deviation has ever been found, from the 1920s until today. But could this really be how the universe operates? Is the “bedrock layer of reality” a giant wave of complex numbers encoding potentialities—until someone looks? And what do we mean by “looking,” anyway?

    Could quantum computing help reveal what the laws of quantum mechanics really mean? Adapted from an image by Flickr user Politropix under a Creative Commons license.

    There are different interpretive camps within quantum mechanics, which have squabbled with each other for generations, even though, by design, they all lead to the same predictions for any experiment that anyone can imagine doing. One interpretation is Many Worlds, which says that the different possible configurations of a system (when far enough apart) are literally parallel universes, with the “weight” of each universe given by its amplitude.

    Multiverse. Image credit: public domain, retrieved from https://pixabay.com/

    In this view, the whole concept of measurement—and of the amplitude waves collapsing on measurement—is a sort of illusion, playing no fundamental role in physics. All that ever happens is linear evolution of the entire universe’s amplitude wave—including a part that describes the atoms of your body, which (the math then demands) “splits” into parallel copies whenever you think you’re making a measurement. Each copy would perceive only itself and not the others. While this might surprise people, Many Worlds is seen by many (certainly by its proponents, who are growing in number) as the conservative option: the one that adds the least to the bare math.

    A second interpretation is Bohmian mechanics, which agrees with Many Worlds about the reality of the giant amplitude wave, but supplements it with a “true” configuration that a physical system is “really” in, regardless of whether or not anyone measures it. The amplitude wave pushes around the “true” configuration in a way that precisely matches the predictions of quantum mechanics. A third option is Niels Bohr’s original “Copenhagen Interpretation,” which says—but in many more words!—that the amplitude wave is just something in your head, a tool you use to make predictions. In this view, “reality” doesn’t even exist prior to your making a measurement of it—and if you don’t understand that, well, that just proves how mired you are in outdated classical ways of thinking, and how stubbornly you insist on asking illegitimate questions.

    But wait: if these interpretations (and others that I omitted) all lead to the same predictions, then how could we ever decide which one is right? More pointedly, does it even mean anything for one to be right and the others wrong, or are these just different flavors of optional verbal seasoning on the same mathematical meat? In his recent quantum mechanics textbook, the great physicist Steven Weinberg reviews the interpretive options, ultimately finding all of them wanting. He ends with the hope that new developments in physics will give us better options. But what could those new developments be?

    In the last few decades, the biggest new thing in quantum mechanics has been the field of quantum computing and information. The goal here, you might say, is to “put the giant amplitude wave to work”: rather than obsessing over its true nature, simply exploit it to do calculations faster than is possible classically, or to help with other information-processing tasks (like communication and encryption). The key insight behind quantum computing was articulated by Richard Feynman in 1982: to write down the state of n interacting particles each of which could be in either of two states, quantum mechanics says you need 2^n amplitudes, one for every possible configuration of all n of the particles. Chemists and physicists have known for decades that this can make quantum systems prohibitively difficult to simulate on a classical computer, since 2^n grows so rapidly as a function of n.
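Feynman's observation is easy to check with back-of-the-envelope arithmetic. Assuming 16 bytes per complex amplitude (a common double-precision layout; the exact figure is an assumption here):

```python
# Memory required to store the full amplitude vector for n two-state
# particles: 2^n complex amplitudes at ~16 bytes each.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"n = {n:2d}: {amplitudes:>16,} amplitudes, ~{gib:,.0f} GiB")
```

A 30-particle state already needs about 16 GiB, and every additional particle doubles the requirement—which is why classical simulation hits a wall somewhere in the tens of qubits.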

    But if so, then why not build computers that would themselves take advantage of giant amplitude waves? If nothing else, such computers could be useful for simulating quantum physics! What’s more, in 1994, Peter Shor discovered that such a machine would be useful for more than physical simulations: it could also be used to factor large numbers efficiently, and thereby break most of the cryptography currently used on the Internet. Genuinely useful quantum computers are still a ways away, but experimentalists have made dramatic progress, and have already demonstrated many of the basic building blocks.

    I should add that, for my money, the biggest application of quantum computers will be neither simulation nor codebreaking, but simply proving that this is possible at all! If you like, a useful quantum computer would be the most dramatic demonstration imaginable that our world really does need to be described by a gigantic amplitude wave, that there’s no way around that, no simpler classical reality behind the scenes. It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.

    But if quantum computers provide a new regime in which to probe quantum mechanics, that raises an even broader question: could the field of quantum computing somehow clear up the generations-old debate about the interpretation of quantum mechanics? Indeed, could it do that even before useful quantum computers are built?

    At one level, the answer seems like an obvious “no.” Quantum computing could be seen as “merely” a proposed application of quantum mechanics as that theory has existed in physics books for generations. So, to whatever extent all the interpretations make the same predictions, they also agree with each other about what a quantum computer would do. In particular, if quantum computers are built, you shouldn’t expect any of the interpretive camps I listed before to concede that its ideas were wrong. (More likely that each camp will claim its ideas were vindicated!)

    At another level, however, quantum computing makes certain aspects of quantum mechanics more salient—for example, the fact that it takes 2^n amplitudes to describe n particles—and so might make some interpretations seem more natural than others. Indeed that prospect, more than any application, is why quantum computing was invented in the first place. David Deutsch, who’s considered one of the two founders of quantum computing (along with Feynman), is a diehard proponent of the Many Worlds interpretation, and saw quantum computing as a way to convince the world (at least, this world!) of the truth of Many Worlds. Here’s how Deutsch put it in his 1997 book “The Fabric of Reality”:

    “Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?”

    As you might imagine, not all researchers agree that a quantum computer would be “psychological evidence” for Many Worlds, or even that the two things have much to do with each other. Yes, some researchers reply, a quantum computer would take exponential resources to simulate classically (using any known algorithm), but all the interpretations agree about that. And more pointedly: thinking of the branches of a quantum computation as parallel universes might lead you to imagine that a quantum computer could solve hard problems in an instant, by simply “trying each possible solution in a different universe.” That is, indeed, how most popular articles explain quantum computing, but it’s also wrong!

    The issue is this: suppose you’re facing some arbitrary problem—like, say, the Traveling Salesman problem, of finding the shortest path that visits a collection of cities—that’s hard because of a combinatorial explosion of possible solutions. It’s easy to program your quantum computer to assign every possible solution an equal amplitude. At some point, however, you need to make a measurement, which returns a single answer. And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!

    For this reason, the only hope for a quantum-computing advantage comes from interference: the key aspect of amplitudes that has no classical counterpart, and indeed, that taught physicists that the world has to be described with amplitudes in the first place. Interference is customarily illustrated by the double-slit experiment, in which we shoot a photon at a screen with two slits in it, and then observe where the photon lands on a second screen behind it. What we find is that there are certain “dark patches” on the second screen where the photon never appears—and yet, if we close one of the slits, then the photon can appear in those patches. In other words, decreasing the number of ways for the photon to get somewhere can increase the probability that it gets there! According to quantum mechanics, the reason is that the amplitude for the photon to land somewhere can receive a positive contribution from the first slit, and a negative contribution from the second. In that case, if both slits are open, then the two contributions cancel each other out, and the photon never appears there at all. (Because the probability is the amplitude squared, both negative and positive amplitudes correspond to positive probabilities.)
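The cancellation in the double-slit experiment is plain complex arithmetic: add the amplitude contributions first, square last. The numbers here are chosen purely for illustration:

```python
# Amplitude contributions from the two slits at one "dark patch":
slit1 = 0.5        # positive contribution via slit 1
slit2 = -0.5       # negative contribution via slit 2

# Both slits open: amplitudes add BEFORE squaring, so they cancel.
p_both_open = abs(slit1 + slit2) ** 2    # the photon never lands here
# Close slit 2: only one contribution remains, and the patch lights up.
p_slit1_only = abs(slit1) ** 2

print(p_both_open, p_slit1_only)   # 0.0 0.25
```

Classical probabilities, being non-negative, could only ever add; it is the signed (in general, complex) amplitudes that make "fewer paths, higher probability" possible.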

    Likewise, when designing algorithms for quantum computers, the goal is always to choreograph things so that, for each wrong answer, some of the contributions to its amplitude are positive and others are negative, so on average they cancel out, leaving an amplitude close to zero. Meanwhile, the contributions to the right answer’s amplitude should reinforce each other (being, say, all positive, or all negative). If you can arrange this, then when you measure, you’ll see the right answer with high probability.

    It was precisely by orchestrating such a clever interference pattern that Peter Shor managed to devise his quantum algorithm for factoring large numbers. To do so, Shor had to exploit extremely specific properties of the factoring problem: it was not just a matter of “trying each possible divisor in a different parallel universe.” In fact, an important 1994 theorem of Bennett, Bernstein, Brassard, and Vazirani shows that what you might call the “naïve parallel-universe approach” never yields an exponential speed improvement. The naïve approach can reveal solutions in only the square root of the number of steps that a classical computer would need, an important phenomenon called the Grover speedup. But that square-root advantage turns out to be the limit: if you want to do better, then like Shor, you need to find something special about your problem that lets interference reveal its answer.
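Shor's choreography is too involved for a snippet, but the Grover square-root speedup mentioned above can be reproduced with a small classical state-vector simulation (plain Python, no quantum hardware—just the linear algebra the algorithm prescribes):

```python
import math

def grover_success_probability(n_items, target, iterations):
    """Simulate Grover search over n_items by tracking the amplitude vector."""
    amp = [1 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[target] = -amp[target]             # oracle: flip target's sign
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]      # diffusion: reflect about mean
    return amp[target] ** 2

N = 1024
k = round(math.pi / 4 * math.sqrt(N))   # ~25 iterations, on the order of sqrt(N)
print(k, grover_success_probability(N, 0, k))
```

After only about √N steps the target's probability is essentially 1, whereas a classical search needs ~N/2 lookups on average—and running more iterations overshoots, so the probability falls again rather than improving.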

    What are the implications of these facts for Deutsch’s argument that only Many Worlds can explain how a quantum computer works? At the least, we should say that the “exponential cornucopia of parallel universes” almost always hides from us, revealing itself only in very special interference experiments where all the “universes” collaborate, rather than any one of them shouting above the rest. But one could go even further. One could say: To whatever extent the parallel universes do collaborate in a huge interference pattern to reveal (say) the factors of a number, to that extent they never had separate identities as “parallel universes” at all—even according to the Many Worlds interpretation! Rather, they were just one interfering, quantum-mechanical mush. And from a certain perspective, all the quantum computer did was to linearly transform the way in which we measured that mush, as if we were rotating it to see it from a more revealing angle. Conversely, whenever the branches do act like parallel universes, Many Worlds itself tells us that we only observe one of them—so from a strict empirical standpoint, we could treat the others (if we liked) as unrealized hypotheticals. That, at least, is the sort of reply a modern Copenhagenist might give, if she wanted to answer Deutsch’s argument on its own terms.

    There are other aspects of quantum information that seem more “Copenhagen-like” than “Many-Worlds-like”—or at least, for which thinking about “parallel universes” too naïvely could lead us astray. So for example, suppose Alice sends n quantum-mechanical bits (or qubits) to Bob, then Bob measures qubits in any way he likes. How many classical bits can Alice transmit to Bob that way? If you remember that n qubits require 2^n amplitudes to describe, you might conjecture that Alice could achieve an incredible information compression—“storing one bit in each parallel universe.” But alas, an important result called Holevo’s Theorem says that, because of the severe limitations on what Bob learns when he measures the qubits, such compression is impossible. In fact, by sending n qubits to Bob, Alice can reliably communicate only n bits (or 2n bits, if Alice and Bob shared quantum correlations in advance), essentially no better than if she’d sent the bits classically. So for this task, you might say, the amplitude wave acts more like “something in our heads” (as the Copenhagenists always said) than like “something out there in reality” (as the Many-Worlders say).

    But the Many-Worlders don’t need to take this lying down. They could respond, for example, by pointing to other, more specialized communication problems, which it’s been proven Alice and Bob can solve using exponentially fewer qubits than classical bits. Here’s one example of such a problem, drawing on a 1999 theorem of Ran Raz and a 2010 theorem of Boaz Klartag and Oded Regev: Alice knows a vector in a high-dimensional space, while Bob knows two orthogonal subspaces. Promised that the vector lies in one of the two subspaces, can Bob figure out which one holds the vector? Quantumly, Alice can encode the components of her vector as amplitudes—in effect, squeezing n numbers into exponentially fewer qubits. And crucially, after receiving those qubits, Bob can measure them in a way that doesn’t reveal everything about Alice’s vector, but does reveal which subspace it lies in, which is the one thing Bob wanted to know.

    So, do the Many Worlds become “real” for these special problems, but retreat back to being artifacts of the math for ordinary information transmission?

    To my mind, one of the wisest replies came from the mathematician and quantum information theorist Boris Tsirelson, who said: “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” In other words, this is a new ontological category, one that our pre-quantum intuitions simply don’t have a good slot for. From this perspective, the contribution of quantum computing is to delineate for which tasks the giant amplitude wave acts “real and Many-Worldish,” and for which other tasks it acts “formal and Copenhagenish.” Quantum computing can give both sides plenty of fresh ammunition, without handing an obvious victory to either.

    So then, is there any interpretation that flat-out doesn’t fare well under the lens of quantum computing? While some of my colleagues will strongly disagree, I’d put forward Bohmian mechanics as a candidate. Recall that David Bohm’s vision was of real particles, occupying definite positions in ordinary three-dimensional space, but which are jostled around by a giant amplitude wave in a way that perfectly reproduces the predictions of quantum mechanics. A key selling point of Bohm’s interpretation is that it restores the determinism of classical physics: all the uncertainty of measurement, we can say in his picture, arises from lack of knowledge of the initial conditions. I’d describe Bohm’s picture as striking and elegant—as long as we’re only talking about one or two particles at a time.

    But what happens if we try to apply Bohmian mechanics to a quantum computer—say, one that’s running Shor’s algorithm to factor a 10,000-digit number, using hundreds of thousands of particles? We can do that, but if we do, talking about the particles’ “real locations” will add spectacularly little insight. The amplitude wave, you might say, will be “doing all the real work,” with the “true” particle positions bouncing around like comically-irrelevant fluff. Nor, for that matter, will the bouncing be completely deterministic. The reason for this is technical: it has to do with the fact that, while particles’ positions in space are continuous, the 0’s and 1’s in a computer memory (which we might encode, for example, by the spins of the particles) are discrete. And one can prove that, if we want to reproduce the predictions of quantum mechanics for discrete systems, then we need to inject randomness at many times, rather than only at the beginning of the universe.

    But it gets worse. In 2005, I proved a theorem that says that, in any theory like Bohmian mechanics, if you wanted to calculate the entire trajectory of the “real” particles, you’d need to solve problems that are thought to be intractable even for quantum computers. One such problem is the so-called collision problem, where you’re given a cryptographic hash function (a function that maps a long message to a short “hash value”) and asked to find any two messages with the same hash. In 2002, I proved that, at least if you use the “naïve parallel-universe” approach, any quantum algorithm for the collision problem requires at least ~H^(1/5) steps, where H is the number of possible hash values. (This lower bound was subsequently improved to ~H^(1/3) by Yaoyun Shi, exactly matching an upper bound of Brassard, Høyer, and Tapp.) By contrast, if (with godlike superpower) you could somehow see the whole histories of Bohmian particles, you could solve the collision problem almost instantly.
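The collision problem itself is easy to state in code. Below is a classical birthday-style search against a deliberately truncated hash—the 2-byte truncation is an illustrative device to make collisions findable, and the quantum lower bounds above count oracle queries, not the steps of this brute-force loop:

```python
import hashlib

def tiny_hash(message: bytes) -> bytes:
    """Stand-in hash with a 2-byte output (H = 65536 possible values),
    truncated so that collisions are easy to find by brute force."""
    return hashlib.sha256(message).digest()[:2]

def find_collision(limit=100_000):
    """Birthday search: remember every hash seen until one repeats.
    Expected to succeed after roughly sqrt(H), i.e. ~256, messages."""
    seen = {}
    for i in range(limit):
        message = str(i).encode()
        h = tiny_hash(message)
        if h in seen:
            return seen[h], message   # two different messages, same hash
        seen[h] = message
    return None

m1, m2 = find_collision()
assert m1 != m2 and tiny_hash(m1) == tiny_hash(m2)
```

Classically, the birthday trick already beats naïve search; the point of the theorem above is that quantum mechanics squeezes only a further polynomial factor out of this problem, while full Bohmian trajectories would crack it outright.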

    What makes this interesting is that, if you ask to see the locations of Bohmian particles at any one time, you won’t find anything that you couldn’t have easily calculated with a standard, garden-variety quantum computer. It’s only when you ask for the particles’ locations at multiple times—a question that Bohmian mechanics answers, but that ordinary quantum mechanics rejects as meaningless—that you’re able to see multiple messages with the same hash, and thereby solve the collision problem.

    My conclusion is that, if you believe in the reality of Bohmian trajectories, you believe that Nature does even more computational work than a quantum computer could efficiently simulate—but then it hides the fruits of its labor where no one can ever observe it. Now, this sits uneasily with a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. (Admittedly, some people would probably argue that the Many Worlds interpretation violates my “aftershave principle” even more flagrantly than Bohmian mechanics does! But that depends, in part, on what we count as “observation”: just our observations, or also the observations of any parallel-universe doppelgängers?)

    Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing. In the end, asking how quantum computing affects the interpretation of quantum mechanics is sort of like asking how classical computing affects the debate about whether the mind is a machine. In both cases, there was a range of philosophical positions that people defended before a technology came along, and most of those positions still have articulate defenders after the technology. So, by that standard, the technology can’t be said to have “resolved” much! Yet the technology is so striking that even the idea of it—let alone the thing itself—can shift the terms of the debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived. This might, more generally, be the main way technology affects philosophy.

