Tagged: Applied Research

  • richardmitnick 10:18 am on March 21, 2017 Permalink | Reply
    Tags: Applied Research, Cell phone technology, DNA biomarkers, Intercalator dyes, , , UCLA researchers make DNA detection portable affordable using cellphones   

    From UCLA: “UCLA researchers make DNA detection portable, affordable using cellphones” 

    UCLA bloc

    UCLA

    March 20, 2017
    Matthew Chin

    System achieved comparable results to equipment costing tens of thousands of dollars more.

    The combined dye/cellphone reader system achieved comparable results to equipment costing tens of thousands of dollars more. Dino Di Carlo/UCLA

    Researchers at UCLA have developed an improved method to detect the presence of DNA biomarkers of disease that is compatible with use outside of a hospital or lab setting. The new technique leverages the sensors and optics of cellphones to read light produced by a new detector dye mixture that reports the presence of DNA molecules with a signal more than 10 times brighter.

    Nucleic acids, such as DNA or RNA, are used in tests for infectious diseases, genetic disorders, cancer mutations that can be targeted by specific drugs, and fetal abnormalities. The samples used in standard diagnostic tests typically contain only tiny amounts of a disease’s related nucleic acids. To assist optical detection, clinicians amplify the number of nucleic acids, making them easier to find with fluorescent dyes.

    Both the amplification and the optical detection steps have in the past required costly and bulky equipment, largely limiting their use to laboratories.

    In a study published online in the journal ACS Nano, researchers from three UCLA entities — the Henry Samueli School of Engineering and Applied Science, the California NanoSystems Institute, and the David Geffen School of Medicine — showed how to take detection out of the lab at a fraction of the cost.

    The collaborative team of researchers included lead author Janay Kong, a UCLA Ph.D. student in bioengineering; Qingshan Wei, a post-doctoral researcher in electrical engineering; Aydogan Ozcan, Chancellor’s Professor of Electrical Engineering and Bioengineering; Dino Di Carlo, professor of bioengineering and mechanical and aerospace engineering; and Omai Garner, assistant professor of pathology and medicine at the David Geffen School of Medicine at UCLA.

    The UCLA researchers focused on the challenges with low-cost optical detection. Small changes in light emitted from molecules that associate with DNA, called intercalator dyes, are used to identify DNA amplification, but these dyes are unstable and their changes are too dim for standard cellphone camera sensors.

    But the team discovered an additive that stabilized the intercalator dyes and generated a large increase in fluorescent signal above the background light level, enabling the test to be integrated with inexpensive cellphone-based detection methods. The combined novel dye/cellphone reader system achieved comparable results to equipment costing tens of thousands of dollars more.

    To adapt a cellphone to detect the light produced from dyes associated with amplified DNA while those samples are in standard laboratory containers, such as well plates, the team developed a cost-effective, field-portable fiber optic bundle. The fibers in the bundle routed the signal from each well in the plate to a unique location of the camera sensor area. This handheld reader is able to provide comparable results to standard benchtop readers, but at a fraction of the cost, which the authors suggest is a promising sign that the reader could be applied to other fluorescence-based diagnostic tests.
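
    The article does not include the reader's analysis software, but the processing it describes is straightforward to sketch. The hypothetical Python snippet below assumes a calibration that maps each well's fiber to a spot on the camera sensor, averages the pixel intensity around each spot, and calls a well positive when its signal rises a few standard deviations above a background well; none of the names or thresholds come from the study.

```python
import numpy as np

def read_wells(image, fiber_spots, background, radius=10, k=3.0):
    """Estimate per-well fluorescence from one cellphone camera frame.

    image       : 2-D grayscale array from the phone camera
    fiber_spots : dict of well name -> (row, col) where that well's fiber
                  lands on the sensor (hypothetical calibration data)
    background  : dict with "mean" and "std" from an empty control well
    radius      : pixel radius averaged around each spot
    k           : flag a well as amplified if it exceeds background by k sigma
    """
    rows, cols = np.indices(image.shape)
    results = {}
    for well, (r, c) in fiber_spots.items():
        mask = (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
        mean_intensity = float(image[mask].mean())
        results[well] = {
            "mean": mean_intensity,
            "amplified": mean_intensity > background["mean"] + k * background["std"],
        }
    return results

# Toy frame: one bright (amplified) well and one dark (negative) well.
rng = np.random.default_rng(0)
frame = 20.0 + rng.normal(0.0, 2.0, size=(480, 640))
frame[100:120, 100:120] += 200.0
wells = {"A1": (110, 110), "A2": (300, 500)}
print(read_wells(frame, wells, background={"mean": 20.0, "std": 2.0}))
```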

    “Currently nucleic acid amplification tests have issues generating a stable and high signal, which often necessitates the use of calibration dyes and samples which can be limiting for point-of-care use,” Di Carlo said. “The unique dye combination overcomes these issues and is able to generate a thermally stable signal, with a much higher signal to noise ratio. The DNA amplification curves we see look beautiful — without any of the normalization and calibration, which is usually performed, to get to the point that we start at.”

    Additionally, the authors emphasized that the dye combinations discovered should be able to be used universally to detect any nucleic acid amplification, allowing for their use in a multitude of other amplification approaches and tests.

    As a proof of concept, the team demonstrated the approach using a process called loop-mediated isothermal amplification, or LAMP, with DNA from lambda phage as the target molecule; they now plan to adapt the assay to complex clinical samples and nucleic acids associated with pathogens such as influenza.

    The newest demonstration is part of a suite of technologies developed by the UCLA team aimed at democratizing disease diagnosis, including low-cost optical readout and diagnostics based on consumer-electronic devices, microfluidic-based automation, and molecular assays leveraging DNA nanotechnology.

    This interdisciplinary work was supported through a team science grant from the National Science Foundation Emerging Frontiers in Research and Innovation program.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    UCLA Campus

    For nearly 100 years, UCLA has been a pioneer, persevering through impossibility, turning the futile into the attainable.

    We doubt the critics, reject the status quo and see opportunity in dissatisfaction. Our campus, faculty and students are driven by optimism. It is not naïve; it is essential. And it has fueled every accomplishment, allowing us to redefine what’s possible, time after time.

    This can-do perspective has brought us 12 Nobel Prizes, 12 Rhodes Scholarships, more NCAA titles than any university and more Olympic medals than most nations. Our faculty and alumni helped create the Internet and pioneered reverse osmosis. And more than 100 companies have been created based on technology developed at UCLA.

     
  • richardmitnick 7:47 am on March 21, 2017 Permalink | Reply
    Tags: Applied Research, Australian funding agency turns a blind eye to evidence,   

    From Nature: “Australian funding agency turns a blind eye to evidence” 

    Nature Mag
    Nature Index

    7 March 2017
    Adrian Barnett

    Nature Picture Library / Alamy Stock Photo

    Australia’s main science funder is not taking an evidence-based approach in reforms to its system of funding allocation. Instead, it is courting expert opinion, the lowest level of evidence quality, to guide its decision.

    The National Health and Medical Research Council (NHMRC) in Australia funds research for the benefit of human health, usually by providing financial resources for scientists to collect data, run experiments and examine the evidence.

    But, in trying to find the best way to allocate funds, the NHMRC has so far failed to cite any published experiments on funding. A 64-page consultation paper released last year that proposed three alternative models — funding individuals, teams or ideas — did not include a single citation to published work.

    More than 300 expert opinions were summarised and published last week. Not surprisingly, “the feedback provided was diverse with no clear preference for one of the alternative models”.

    The review, prompted by rising application numbers, falling success rates and plummeting morale amongst researchers, is welcome. But a summary of the current evidence may have led to a more useful discussion.

    Studies have been carried out that could inform an evidence-based discussion of funding policy, and are directly pertinent to the three proposed models. For example, a study of a new funding system in France, which examined more than €10 billion in research funding over five years, found that funding younger applicants had a much larger impact on research output.

    Another study in Canada revealed that it was more effective to divide funding into multiple small grants rather than a few large ones. Two $1 million grants generally produce more research than one $2 million grant. Other studies have examined the important question of whether current funding systems discourage innovative applications.
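
    The Canadian finding reflects diminishing returns to grant size. As a purely illustrative toy model (not the model used in that study), suppose research output grows with the square root of the grant amount; splitting a fixed budget then beats concentrating it:

```python
import math

def toy_output(grant_dollars):
    # Toy assumption only: research output grows sub-linearly (square root)
    # with grant size, i.e. each extra dollar buys a bit less than the last.
    return math.sqrt(grant_dollars)

one_large = toy_output(2_000_000)
two_small = 2 * toy_output(1_000_000)
print(f"one $2M grant : {one_large:.0f} output units")
print(f"two $1M grants: {two_small:.0f} output units")
print(f"ratio         : {two_small / one_large:.2f}")   # sqrt(2), about 1.41
```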

    There is even evidence from within the NHMRC. Our research group worked with the NHMRC in 2013 to conduct a randomised trial examining the level of agreement between independent peer reviewers of grant applications, and found that concurrence was higher when reviewers assessed people rather than projects.

    This has a clear policy implication, suggesting a strong case for putting more money into people as that money would more reliably go to high-quality research.

    Without systematically examining the evidence we are left to rely on expert opinion, which can be wrong. For example, many have predicted that a funding round with no deadlines would create an avalanche of applications. But a recent trial by the National Science Foundation in the United States found a 59% reduction in applications.

    The studies of funding mentioned here are the tip of the iceberg. A group in the United Kingdom recently searched for published evidence on the best designs for funding systems and found more than 1,700 references.

    Ignoring evidence means we risk changing Australia’s national funding policy to a system that has already been trialled and failed elsewhere. The opinions of the research community are important when making such a big change, but researchers also appreciate seeing published evidence.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 11:42 am on March 16, 2017 Permalink | Reply
    Tags: Applied Research, , , , Great Barrier Reef is dying   

    From EarthSky: “Great Barrier Reef is dying” 

    EarthSky

    March 16, 2017
    Deborah Byrd

    Bleached coral in 2016 on the northern Great Barrier Reef. Image via Terry Hughes et al./Nature.

    The Great Barrier Reef – the world’s largest reef system – is being increasingly affected by climate change, according to the authors of a cover story in the March 15, 2017 issue of the peer-reviewed journal Nature. Large sections of the reef are now dead, these scientists report. Marine biologist Terry Hughes of the ARC Centre of Excellence for Coral Reef Studies led a group that examined changes in the geographic footprint – that is, the area affected – of mass bleaching events on the Great Barrier Reef over the last two decades. They used aerial and underwater survey data combined with satellite-derived measurements of sea surface temperature. Editors at Nature reported:

    “They show that the cumulative footprint of multiple bleaching events has expanded to encompass virtually all of the Great Barrier Reef, reducing the number and size of potential refuges [for fish and other creatures that live in the reef]. The 2016 bleaching event proved the most severe, affecting 91% of individual reefs.”

    The NY Times published this map on March 15, 2017, based on information from the ARC Centre of Excellence for Coral Reef Studies. It shows that individual reefs in each region of the Great Barrier Reef lost different amounts of coral in 2016. Numbers show the range of loss for the middle 50% of observations in each region. Study authors told the NY Times this level of destruction wasn’t expected for another 30 years.

    Hughes and colleagues said in their study [Nature]:

    “During 2015–2016, record temperatures triggered a pan-tropical episode of coral bleaching, the third global-scale event since mass bleaching was first documented in the 1980s …

    The distinctive geographic footprints of recurrent bleaching on the Great Barrier Reef in 1998, 2002 and 2016 were determined by the spatial pattern of sea temperatures in each year. Water quality and fishing pressure had minimal effect on the unprecedented bleaching in 2016, suggesting that local protection of reefs affords little or no resistance to extreme heat. Similarly, past exposure to bleaching in 1998 and 2002 did not lessen the severity of bleaching in 2016.

    Consequently, immediate global action to curb future warming is essential to secure a future for coral reefs.”

    According to the website CoralWatch.org:

    Many stressful environmental conditions can lead to bleaching; however, elevated water temperatures due to global warming have been found to be the major cause of the massive bleaching events observed in recent years. As the sea temperatures cool during winter, corals that have not starved may overcome a bleaching event and recover their [symbiotic dinoflagellates (algae)].

    However, even if they survive, their reproductive capacity is reduced, leading to long-term damage to reef systems.

    In March 2016, researchers could see bleached coral in the northern Great Barrier Reef from the air. Image via James Kerry/ARC Center of Excellence for Coral Reef Studies.

    Bottom line: Authors of a cover story published on March 15, 2017 in the journal Nature called for action to curb warming, to help save coral reefs.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 10:14 am on March 16, 2017 Permalink | Reply
    Tags: Applied Research, , , , Deep-sea corals, Desmophyllum dianthus, , Study: Cold Climates and Ocean Carbon Sequestration, Why the earth goes through periodic climate change   

    From Caltech: “Study: Cold Climates and Ocean Carbon Sequestration” 

    Caltech Logo

    Caltech

    03/14/2017

    Robert Perkins
    (626) 395-1862
    rperkins@caltech.edu

    Tony Wang (left) and Jess Adkins (right) with samples of Desmophyllum dianthus fossils.

    Deep-sea corals reveal why atmospheric carbon was reduced during colder time periods

    We know a lot about how carbon dioxide (CO2) levels can drive climate change, but how about the way that climate change can cause fluctuations in CO2 levels? New research from an international team of scientists reveals one of the mechanisms by which a colder climate was accompanied by depleted atmospheric CO2 during past ice ages.

    The overall goal of the work is to better understand how and why the earth goes through periodic climate change, which could shed light on how man-made factors could affect the global climate.

    Earth’s average temperature has naturally fluctuated by about 4 to 5 degrees Celsius over the course of the past million years as the planet has cycled in and out of glacial periods. During that time, the earth’s atmospheric CO2 levels have fluctuated between roughly 180 and 280 parts per million (ppm) every 100,000 years or so. (In recent years, man-made carbon emissions have boosted that concentration up to over 400 ppm.)

    About 10 years ago, researchers noticed a close correspondence between the fluctuations in CO2 levels and in temperature over the last million years. When the earth is at its coldest, the amount of CO2 in the atmosphere is also at its lowest. During the most recent ice age, which ended about 11,000 years ago, global temperatures were 5 degrees Celsius lower than they are today, and atmospheric CO2 concentrations were at 180 ppm.

    Using a library of more than 10,000 deep-sea corals collected by Caltech’s Jess Adkins, an international team of scientists has shown that periods of colder climates are associated with higher phytoplankton efficiency and a reduction in nutrients in the surface of the Southern Ocean (the ocean surrounding the Antarctic), which is related to an increase in carbon sequestration in the deep ocean. A paper about their research appears the week of March 13 in the online edition of the Proceedings of the National Academy of Sciences.

    “It is critical to understand why atmospheric CO2 concentration was lower during the ice ages. This will help us understand how the ocean will respond to ongoing anthropogenic CO2 emissions,” says Xingchen (Tony) Wang, lead author of the study. Wang was a graduate student at Princeton while conducting the research in the lab of Daniel Sigman, Dusenbury Professor of Geological and Geophysical Sciences. He is now a Simons Foundation Postdoctoral Fellow on the Origins of Life at Caltech.

    There is 60 times more carbon in the ocean than in the atmosphere—partly because the ocean is so big. The mass of the world’s oceans is roughly 270 times greater than that of the atmosphere. As such, the ocean is the greatest regulator of carbon in the atmosphere, acting as both a sink and a source for atmospheric CO2.

    Biological processes are the main driver of CO2 absorption from the atmosphere to the ocean. Just like photosynthesizing trees and plants on land, plankton at the surface of the sea turn CO2 into sugars that are eventually consumed by other creatures. As the sea creatures who consume those sugars—and the carbon they contain—die, they sink to the deep ocean, where the carbon is locked away from the atmosphere for a long time. This process is called the “biological pump.”

    A healthy population of phytoplankton helps lock away carbon from the atmosphere. In order to thrive, phytoplankton need nutrients—notably, nitrogen, phosphorus, and iron. In most parts of the modern ocean, phytoplankton deplete all of the available nutrients in the surface ocean, and the biological pump operates at maximum efficiency.

    However, in the modern Southern Ocean, there is a limited amount of iron—which means that there are not enough phytoplankton to fully consume the nitrogen and phosphorus in the surface waters. When there is less living biomass, there is also less that can die and sink to the bottom—which results in a decrease in carbon sequestration. The biological pump is not currently operating as efficiently as it theoretically could.

    To track the efficiency of the biological pump over the span of the past 40,000 years, Adkins and his colleagues collected more than 10,000 fossils of the coral Desmophyllum dianthus.

    Why coral? Two reasons: first, as it grows, coral accretes a skeleton around itself, precipitating calcium carbonate (CaCO3) and other trace elements (including nitrogen) out of the water around it. That process creates a rocky record of the chemistry of the ocean. Second, coral can be precisely dated using a combination of radiocarbon and uranium dating.

    “Finding a few centimeter-tall fossil corals 2,000 meters deep in the ocean is no trivial task,” says Adkins, Smits Family Professor of Geochemistry and Global Environmental Science at Caltech.

    Adkins and his colleagues collected coral from the relatively narrow (500-mile) gap known as the Drake Passage between South America and Antarctica (among other places). Because the Southern Ocean flows around Antarctica, all of its waters funnel through that gap—making the samples Adkins collected a robust record of the water throughout the Southern Ocean.

    Wang analyzed the ratios of two isotopes of nitrogen atoms in these corals – nitrogen-14 (14N, the most common variety of the atom, with seven protons and seven neutrons in its nucleus) and nitrogen-15 (15N, which has an extra neutron). When phytoplankton consume nitrogen, they prefer 14N to 15N. As a result, there is a correlation between the ratio of nitrogen isotopes in sinking organic matter (which the corals then eat as it falls to the seafloor) and how much nitrogen is being consumed in the surface ocean—and, by extension, the efficiency of the biological pump.

    A higher amount of 15N in the fossils indicates that the biological pump was operating more efficiently at that time. An analogy would be monitoring what a person eats in their home. If they are eating more of their less-liked foods, then one could assume that the amount of food in their pantry is running low.
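
    The measurement itself is usually reported in standard delta notation, comparing the sample’s 15N/14N ratio with that of atmospheric N2. The snippet below is a generic sketch of that conversion (the conventional formula, not code from the study); the example ratio is hypothetical.

```python
R_AIR = 0.0036765  # 15N/14N of atmospheric N2, the conventional reference

def delta_15n(ratio_sample):
    """Convert a measured 15N/14N ratio to delta-15N in per mil."""
    return (ratio_sample / R_AIR - 1.0) * 1000.0

# Hypothetical measurement: a coral-bound ratio slightly enriched in 15N
# relative to air gives a positive delta value, consistent with more complete
# nitrate consumption by phytoplankton at the surface.
print(round(delta_15n(0.003695), 1))   # about +5 per mil
```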

    Indeed, Wang found that higher amounts of 15N were present in fossils corresponding to the last ice age, indicating that the biological pump was operating more efficiently during that time. As such, the evidence suggests that colder climates allow more biomass to grow in the surface Southern Ocean—likely because colder climates experience stronger winds, which can blow more iron into the Southern Ocean from the continents. That biomass consumes carbon, then dies and sinks, locking it away from the atmosphere.

    Adkins and his colleagues plan to continue probing the coral library for further details about the cycles of ocean chemistry changes over the past several hundred thousand years.

    The study is titled “Deep-sea coral evidence for lower Southern Ocean surface nitrate concentrations during the last ice age.” Coauthors include scientists from Caltech, Princeton University, Pomona College, the Max Planck Institute for Chemistry in Germany, University of Bristol, and ETH Zurich in Switzerland. This research was funded by the National Science Foundation, Princeton University, the European Research Council, and the Natural Environment Research Council.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings

     
  • richardmitnick 9:07 am on March 14, 2017 Permalink | Reply
    Tags: Applied Research, , EU funding formula revealed, ,   

    From Nature Index: “EU funding formula revealed” 

    Nature Mag
    Nature Index

    Andrew Paterson / Alamy Stock Photo

    Three factors have a significant influence on whether a research institution will apply for and win a prestigious Horizon 2020 grant, researchers have found.

    A review of successful applications from Norway has found that prior participation in EU funding programs, existing research funding and an organization’s reputation added the most weight.

    About one in seven applications submitted by Norwegian research institutions was awarded funding, according to the study, led by Simen Enger, a social scientist at the University of Oslo. Out of 130 universities, public research organizations or hospitals that applied, 62 institutions were successful.

    While Enger said his study only reviewed Norwegian institutions and focused on success at an institutional level, he believes the findings offer insights for policy-makers and institutions seeking EU funding in the future. The research was published in the journal Scientometrics.
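
    The paper’s exact statistical model is not reproduced here, but institution-level analyses of this kind are often framed as a regression of grant success on candidate predictors. The sketch below is a hypothetical illustration using scikit-learn with made-up numbers, not data or code from Enger’s study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely hypothetical institution-level table -- NOT data from the study.
# Predictors: prior EU participation (0/1), national funding (normalised),
# citation impact (normalised). Outcome: won a Horizon 2020 grant (0/1).
X = np.array([
    [1, 0.8, 1.2],
    [1, 0.5, 0.9],
    [0, 0.2, 0.4],
    [0, 0.6, 0.3],
    [1, 0.9, 1.5],
    [0, 0.1, 0.2],
])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)
# Estimated success probability for a new institution with prior EU
# participation, middling national funding and moderately cited papers.
print(model.predict_proba([[1, 0.5, 0.8]])[0, 1])
```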

    Reputation

    A strong reputation, which the authors measured by the number of citations garnered by an institution’s papers, was one of the biggest predictors of an institution being awarded a grant.

    In contrast, they found no significant correlation between productivity, a measure of the number of publications, and success of applications.

    “High numbers of citations mean that the paper is structured in a way that is actually usable. A good proposal has the same quality,” said Francesco Pilla, a researcher from University College Dublin, who has Horizon 2020 funding for his air pollution research.

    But Mohand Kechadi, a computer scientist at University College Dublin, who has evaluated EU proposals, said early-career researchers may not have had time to accumulate citations. “What’s very important in an applicant’s CV is the quality of publications in relation to the project research field,” he says.

    Persistence

    Organizations that had previously gained EU funding were more likely to apply to Horizon 2020, and also more likely to win a grant.

    The authors suggest that taking part in EU-funded collaborations means organizations learn about the application procedure, EU research priorities, and the quality of potential partners.

    Richard Butler, a paleontologist at the University of Birmingham, who won an ERC Starting Grant from Horizon 2020, said reaching the interview stage in a previous funding round strengthened his subsequent application and interview performance.

    National funding

    While the availability of national funding within an organization’s own country did not influence the probability of obtaining a grant, it boosted the likelihood an organization would apply.

    The authors suggest that national funding may give institutions more resources to support their application, such as professional help to draft an application or travel grants to network with potential collaborators.

    However, inadequate national funding is also a catalyst for many researchers to apply for EU funding.

    In the United Kingdom, uncertainty surrounds how researchers who wish to apply or are recipients of Horizon 2020 funding will be affected by the country’s exit from the EU. According to a Digital Science report published last year, the UK gained more than £8 billion in EU research funding between 2006 and 2015, making it the second-highest recipient after Germany. “For the moment, researchers in the UK are continuing to apply for funding as normal”, says Butler.

    The Horizon 2020 programme is undergoing a mid-term review to evaluate its effectiveness. The outcome is due at the end of 2017.

    Enger’s current research is reviewing Horizon 2020 applications from various EU countries over a longer time frame. It will also consider the influence of international collaborative networks on levels of participation and success.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 9:44 am on March 10, 2017 Permalink | Reply
    Tags: Applied Research, Carbon Fibre,   

    From CSIRO: “Carbon fibre coup: Secret recipes and super strength” 

    CSIRO bloc

    Commonwealth Scientific and Industrial Research Organisation

    20th February 2017 [Where has this been hiding?]
    Rachael Vorwerk

    Slightly more elaborate than a pasta maker: this machine helps us to create a new carbon fibre mix. No image credit

    If you enjoy watching motor racing, you’ve no doubt heard the commentators talk a lot about carbon fibre. And if racing doesn’t tickle your fancy, you’ve most likely flown on a plane or driven in a car with carbon fibre components – in fact, carbon fibre is used in civil engineering, the military, cars and aerospace just to name a few areas. This material of the future combines high rigidity, tensile strength and chemical resistance with low weight. It’s far stronger than steel at just a fraction of steel’s weight.

    But did you know that the recipe needed to make the precursor (the material you need to make before you can start manufacturing carbon fibre) is a closely guarded secret? Only a handful of companies around the world can create this precursor (polymer goo) from scratch.

    Our researchers, together with researchers from Deakin University, are now members of this elite club of secret recipe makers. They worked out a way to reverse engineer the material and cracked the secret code to make a new carbon fibre mix – the first time this has ever been done in Australia – and it’s likely to be the strongest, lightest version of carbon fibre in the world!

    Carbon fibre & the secret recipe

    So, just how do you make carbon fibre?

    Well, if you’ve ever made pasta, you’ll probably understand how to make carbon fibre!

    The first step in making pasta is to make a dough out of the freshest, best ingredients. This isn’t too dissimilar to the “dough” needed for carbon fibre, that is, the precursor.

    Next, to produce the carbon fibre, the precursor is fed through wet spinning lines. This is like kneading the pasta dough: just as dough goes through a pasta maker, the polymer goo is stretched into thin, long strands. Polymer goes into the wet spinning line and comes out as 500–12,000 separate strands – all finer than human hair (think angel hair pasta instead of spaghetti).

    The strands are stretched on rollers to ensure consistency, stabilised in a series of solutions, and even get a steam bath along the way. Then the little strands of carbon fibre angel hair are wound onto a spool, which is taken back to the carboniser (kind of like an oven, but a lot more technical!). It changes the polymer’s molecular structure, getting rid of the hydrogen and realigning carbon atoms to make the finished product stronger. It’s this alignment that gives carbon fibre its amazing strength and rigidity.

    Al dente!

    What’s next for carbon fibre?

    We’ve launched a brand new carbon fibre facility with Deakin. It was custom built in Italy by a company specialising in the carbon fibre industry; in fact, they liked our design so much that they built another for their own factory!

    Because these amazing researchers were able to reverse engineer this secret recipe, we’re now testing what could be the next generation of carbon fibre. Remember how we said it was aligned molecular structure that gave carbon fibre its strength? Well, we’ve created a way to control a substance’s molecular structure. This means we have more control over our carbon fibre and can potentially make it stronger than ever before.

    Carbon fibre isn’t our only love – we’re working hard to be innovators in other manufacturing industries; you can find out more about them here.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 8:14 am on March 10, 2017 Permalink | Reply
    Tags: Applied Research, , Time Crystals, Time crystals' latest quantum weirdness   

    From COSMOS: “‘Time crystals’ latest quantum weirdness” 

    Cosmos Magazine bloc

    COSMOS

    10 March 2017
    Richard A Lovett

    From this article
    Pixabay

    [Other depictions:

    Scientists have confirmed a brand new phase of matter: time crystals – ScienceAlert

    Popular Mechanics

    Berkeley scientists unveil new form of matter known as time crystals

    Enough of that.]

    Two American teams of scientists have independently created the world’s first “time crystals”, but don’t order up a trip on the TARDIS anytime soon, because the crystals in question have nothing to do with time travel.

    Both sets of research have been published in Nature.

    The quest to crystallize time

    [Only one article is popping up. Totally unacceptable from Nature.]

    Here is one in Physical Review Letters.

    “I’m not responsible for its name,” laughs Mikhail Lukin, a physicist at Harvard University, Cambridge, Massachusetts, lead author on one of the papers.

    Chetan Nayak, principal researcher at Microsoft’s Station Q and a professor of physics at the University of California, Santa Barbara, puts it more simply. “What they observed is a new state of matter,” he says.

    Nayak is responsible for a third paper in the journal, explaining the significance of the discovery.

    What’s unique about the crystals, Lukin says, is that they have properties that repeat over time in a manner analogous to the way the atoms in crystal lattices repeat over space.

    Repeating phenomena, of course, aren’t a big deal. “Every year we have spring, summer, and fall,” Lukin notes.

    But most repeating phenomena are easily altered. An AC electrical current, for example, can be changed by altering the spin rate of the dynamo that produces it. The length of the Earth’s seasons would change if, heaven forbid, a giant asteroid hit us, altering our orbit.

    To understand time crystals, we need to start by considering liquids and gases. In these, Lukin says, molecules are uniformly distributed in a way that makes one point in the liquid or gas basically the same as all other points.

    But in crystals, atoms are arranged in repeating patterns that mean that once you know the position of one atom, you can pinpoint the locations of all the others. Furthermore, crystals are rigid. If you bash on one, you aren’t going to see one atom move one way, while another moves a different way, as would happen if you sloshed a tub of water or let the air out of a balloon.

    Crystals are common to our normal understanding of nature. Time crystals aren’t. In fact, it was only recently that anyone even hypothesized they might exist.

    Their atoms operate in a sort of time-array, as opposed to a physical array. The time crystal created by Lukin’s team was a synthetic black diamond, meaning that it was a diamond with a million or so “nitrogen vacancy” impurities — so many they made it appear black.

    The electrons in these impurities have spins: they can react to electromagnetic pulses by flipping 180 degrees, analogous to what happens to nuclei in the human body during magnetic resonance imaging.

    Normally, you would expect the spins to flip back and forth in synchronisation with the pulse. But that is not what happened. Instead, when Lukin’s team tried it with their black diamond, the spins flipped only once for every two or three pulses.

    Shivaji Sondhi, a theoretical physicist at Princeton University in New Jersey, who was part of the team that in 2015 first theorised that such crystals might be possible, compares the effect to repeatedly squeezing on a sponge.

    “When you release the sponge, you expect it to resume its shape,” he says. “Imagine that it only resumes its shape every second squeeze, even though you are applying the same force each time.”
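
    A deliberately oversimplified way to picture the subharmonic response: a spin flipped by an ideal 180-degree pulse on every drive cycle only returns to its starting orientation every second pulse, so its state repeats at twice the drive period. The toy sketch below shows just that bookkeeping; it does not capture the interactions and disorder that make the real systems’ doubled (or tripled) period rigid against imperfect pulses, which is the experiments’ key result.

```python
# Toy bookkeeping only: an ideal 180-degree pulse flips a single spin each
# drive period, so its magnetization repeats every TWO pulses -- a subharmonic
# of the drive. The real experiments show this doubled (or tripled) period
# stays rigid even for imperfect pulses, thanks to interactions and disorder,
# which this toy does not model.
n_pulses = 8
magnetization = 1.0            # spin starts "up"
history = [magnetization]
for _ in range(n_pulses):
    magnetization = -magnetization   # ideal pi pulse
    history.append(magnetization)

print(history)   # [1, -1, 1, -1, ...]: the pattern repeats every 2 drive periods
```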

    In the second study, a team led by Christopher Monroe, a physicist at the University of Maryland, used a chain of 14 charged ytterbium ions, but got essentially the same result.

    Furthermore, the scientists found, varying the incoming electromagnetic pulse didn’t particularly alter the response. In other words, the time crystal’s response was stable, not strongly affected by variances that would normally scramble it and rapidly lead to disorder.

    Applications are up in the air. “It’s very early days,” says Nayak. “I think applications will become more clear as we expand the contexts in which we can create time crystals.”

    One possibility is that this might be used in futuristic quantum computers. “What a time crystal is doing is manipulating quantum information in a periodic manner,” says Nayak. “That’s potentially useful for quantum information processing.”

    Lukin says that another potential application is in developing sensing instruments capable of working on very small scales. These instruments could be designed with numerous tiny time crystals, tightly packed.

    The crystals would react to electrical or magnetic impulses in their local environment, but would not be easily perturbed by whatever is going on nearby. “We believe these will enable new approaches for [what are] basically quantum sensors,” Lukin says.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 10:46 am on March 8, 2017 Permalink | Reply
    Tags: , Applied Research, California Fault System Could Produce Magnitude 7.3 Quake, , , Newport-Inglewood/Rose Canyon fault mostly offshore but never more than four miles from the San Diego Orange County and Los Angeles County coast,   

    From Eos: “California Fault System Could Produce Magnitude 7.3 Quake” 

    AGU bloc

    AGU
    Eos news bloc

    Eos

    Mar 7, 2017

    A new study finds rupture of the offshore Newport-Inglewood/Rose Canyon fault that runs from San Diego to Los Angeles is possible.

    A Scripps research vessel tows a hydrophone array used to collect high-resolution bathymetric data to better understand offshore California faults. Credit: Scripps Institution of Oceanography, UC San Diego

    A fault system that runs from San Diego to Los Angeles is capable of producing up to magnitude 7.3 earthquakes if the offshore segments rupture and a 7.4 if the southern onshore segment also ruptures, according to a new study led by Scripps Institution of Oceanography at the University of California San Diego.

    The Newport-Inglewood and Rose Canyon faults had been considered separate systems but the study shows that they are actually one continuous fault system running from San Diego Bay to Seal Beach in Orange County, then on land through the Los Angeles basin.

    “This system is mostly offshore but never more than four miles from the San Diego, Orange County, and Los Angeles County coast,” said study lead author Valerie Sahakian, who performed the work during her doctorate at Scripps and is now a postdoctoral fellow with the U.S. Geological Survey in Menlo Park, California. “Even if you have a high 5- or low 6-magnitude earthquake, it can still have a major impact on those regions which are some of the most densely populated in California.”

    The new study was accepted for publication in the Journal of Geophysical Research: Solid Earth, a journal of the American Geophysical Union.

    In the new study, researchers processed data from previous seismic surveys and supplemented it with high-resolution bathymetric data gathered offshore by Scripps researchers between 2006 and 2009 and seismic surveys conducted aboard former Scripps research vessels New Horizon and Melville in 2013. The disparate data have different resolution scales and depths of penetration, providing a “nested survey” of the region. This nested approach allowed the scientists to define the fault architecture at an unprecedented scale and thus to create magnitude estimates with more certainty.

    Locations of NIRC fault zone as observed in seismic profiles. Credit: AGU/Journal of Geophysical Research: Solid Earth

    They identified four segments of the strike-slip fault that are broken up by what geoscientists call stepovers, points where the fault is horizontally offset. Scientists generally consider stepovers wider than three kilometers more likely to inhibit ruptures along entire faults and instead contain them to individual segments—creating smaller earthquakes. Because the stepovers in the Newport-Inglewood/Rose Canyon (NIRC) fault are two kilometers wide or less, the Scripps-led team considers a rupture of all the offshore segments to be possible, said Neal Driscoll, a geophysicist at Scripps and co-author of the new study.

    The team used two estimation methods to derive the maximum potential of a rupture of the entire fault, including both onshore and offshore portions. Both methods yielded estimates between magnitude 6.7 and magnitude 7.3 to 7.4.
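
    The study’s two estimation methods are not spelled out in this summary, but magnitude estimates of this kind are commonly obtained from empirical scaling between rupture length and moment magnitude. As a hedged illustration (not necessarily the relation Sahakian and colleagues used), the widely cited Wells and Coppersmith (1994) regression for strike-slip faults gives:

```python
import math

def wells_coppersmith_strike_slip(rupture_length_km):
    """Moment magnitude from surface rupture length (km) for strike-slip
    faults, using the Wells & Coppersmith (1994) empirical regression
    M = 5.16 + 1.12 * log10(L). Shown for illustration; the study's own
    estimation methods may differ."""
    return 5.16 + 1.12 * math.log10(rupture_length_km)

# Hypothetical example: a continuous ~100 km rupture comes out near M 7.4.
print(round(wells_coppersmith_strike_slip(100.0), 1))
```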

    The fault system most famously hosted a 6.4-magnitude quake in Long Beach, California, that killed 115 people in 1933. Researchers have also found evidence of earlier earthquakes of indeterminate size on onshore portions of the fault: at the northern end of the fault system, there have been between three and five ruptures in the last 11,000 years. At the southern end, there is evidence of a quake that took place roughly 400 years ago and little significant activity for 5,000 years before that.

    Driscoll has recently collected long sediment cores along the offshore portion of the fault to date previous ruptures along the offshore segments, but the work was not part of this study.

    “Further study is warranted to improve the current understanding of hazard and potential ground shaking posed to urban coastal areas from Tijuana to Los Angeles from the NIRC fault,” the study concludes.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 10:44 am on March 6, 2017 Permalink | Reply
    Tags: Applied Research, , , Metabolism matters, Somites   

    From EMBL: “Metabolism matters” 

    EMBL European Molecular Biology Laboratory bloc

    European Molecular Biology Laboratory

    2 March 2017
    Sonia Furtado Neves

    Scientists can now study glycolysis in embryos, thanks to the EMBL scientists’ pyruvate sensor. IMAGE: Vinay Bulusu/EMBL

    Cells at different stages of differentiation get energy in different ways, a new approach developed at EMBL shows.

    Life requires energy. The strategy a cell uses to obtain that energy can influence not only how fast it multiplies but also a variety of other processes, like which of its genes are turned on. This process – called metabolism – is challenging to track in time and space, so it has not been studied much in developing embryos. Alexander Aulehla shares how, in work published this week in Developmental Cell, his lab is starting to fill that gap.

    What did you find?

    To study energy metabolism during development, we looked at how somites – the parts of the embryo that will eventually give rise to the vertebral column and striated muscles – are formed in the mouse embryo. Somites develop from presomitic mesoderm (PSM). But cells in the PSM don’t all become somite cells at once. The tail bud, at one end of the PSM, contains cells that are still in a stem-cell-like state, while at the other end, cells are differentiating into somites.

    We found that cells at different points of the PSM generate energy in different ways. Undifferentiated cells in the tail bud showed a higher rate of glycolysis – they get their energy mainly by breaking down glucose. In contrast, cells that are about to differentiate had a higher rate of respiration. And there’s a gradient between these two extremes, with more glycolysis the closer you get to the tail bud.

    The sensor developed by the EMBL scientists changes colour when cells have higher pyruvate levels (right). IMAGE: Vinay Bulusu/EMBL

    How did you do it?

    This is one of those projects that was really only possible at EMBL. Vinay Bulusu had an EMBL Interdisciplinary Postdoc fellowship (EIPOD) to work in my lab and Carsten Schultz’s lab. That allowed us to say “let’s try something quite daring: can we visualise pyruvate (an important product of glycolysis) in a mouse embryo?” Working with the Schultz lab, Vinay was able to design a sensor (called a FRET sensor) that measures the amount of pyruvate, not just in cells in a dish, but in an embryo. Then we created a transgenic mouse line that expresses that FRET sensor in all cells and used those mice to analyse pyruvate levels during PSM development using real-time imaging. Thanks to a collaboration with Uwe Sauer’s lab at ETH Zurich, we were also able to use mass spectrometry to directly measure other products of glycolysis, to confirm that there’s a higher rate of glycolysis in the tail bud. It will be exciting to see how others can now use the FRET sensor mouse line in different contexts.
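
    FRET sensors of this kind are typically read out ratiometrically: the relative emission of the sensor’s two fluorophores shifts as pyruvate binds, so the acceptor/donor intensity ratio can be mapped pixel by pixel. The snippet below is a minimal, generic sketch of that readout with made-up arrays; it is not the EMBL group’s analysis pipeline.

```python
import numpy as np

def fret_ratio_map(acceptor, donor, background=0.0, eps=1e-6):
    """Pixel-wise FRET ratio (acceptor emission / donor emission).

    acceptor, donor : 2-D arrays from the sensor's two emission channels
    background      : camera offset subtracted from both channels
    The ratio shifts where more pyruvate is bound, so the map is a relative
    (not absolute) picture of pyruvate levels across the tissue.
    """
    a = np.clip(acceptor - background, 0.0, None)
    d = np.clip(donor - background, 0.0, None)
    return a / (d + eps)

# Made-up two-channel "image": the ratio is higher in the left half of the frame.
donor = np.full((4, 8), 100.0)
acceptor = np.full((4, 8), 50.0)
acceptor[:, :4] = 90.0
print(fret_ratio_map(acceptor, donor).round(2))
```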

    What questions does this raise?

    The main question this raises is ‘why?’ Why do PSM cells, which all seem to proliferate at a similar rate, get energy in different ways? We have evidence this could be linked to – and even influence – the signaling machineries that control differentiation. A companion study published alongside ours confirms this link and we are now investigating exactly how this happens at the molecular level.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    EMBL campus

    EMBL is Europe’s flagship laboratory for the life sciences, with more than 80 independent groups covering the spectrum of molecular biology. EMBL is international, innovative and interdisciplinary – its 1800 employees, from many nations, operate across five sites: the main laboratory in Heidelberg, and outstations in Grenoble; Hamburg; Hinxton, near Cambridge (the European Bioinformatics Institute), and Monterotondo, near Rome. Founded in 1974, EMBL is an inter-governmental organisation funded by public research monies from its member states. The cornerstones of EMBL’s mission are: to perform basic research in molecular biology; to train scientists, students and visitors at all levels; to offer vital services to scientists in the member states; to develop new instruments and methods in the life sciences and actively engage in technology transfer activities, and to integrate European life science research. Around 200 students are enrolled in EMBL’s International PhD programme. Additionally, the Laboratory offers a platform for dialogue with the general public through various science communication activities such as lecture series, visitor programmes and the dissemination of scientific achievements.

     
  • richardmitnick 9:45 am on March 6, 2017 Permalink | Reply
    Tags: Applied Research, , , Fault Slip Potential (FSP) tool, , Stanford scientists develop new tool to reduce risk of triggering manmade earthquakes   

    From Stanford: “Stanford scientists develop new tool to reduce risk of triggering manmade earthquakes” 

    Stanford University Name
    Stanford University

    February 27, 2017
    Ker Than

    A new software tool can help reduce the risk of triggering manmade earthquakes by calculating the probability that oil and gas production activities will trigger slip in nearby faults.

    A new, freely available software tool developed by Stanford scientists will enable energy companies and regulatory agencies to calculate the probability of triggering manmade earthquakes from wastewater injection and other activities associated with oil and gas production.

    “Faults are everywhere in the Earth’s crust, so you can’t avoid them. Fortunately, the majority of them are not active and pose no hazard to the public. The trick is to identify which faults are likely to be problematic, and that’s what our tool does,” said Mark Zoback, professor of geophysics at Stanford’s School of Earth, Energy & Environmental Sciences. Zoback developed the approach with his graduate student Rall Walsh.

    Four wells increase pressure in nearby faults. If a fault is stable, it is green. If a fault is pushed toward slipping, it is colored yellow or red depending on how sensitive it is, how much pressure is put on it, operational uncertainties and the tolerance of the operator. (Image credit: Courtesy Rall Walsh)

    Oil and gas operations can generate significant quantities of “produced water” – brackish water that needs to be disposed of through deep injection to protect drinking water. Energy companies also dispose of water that flows back after hydraulic fracturing in the same way. This process can increase pore pressure – the pressure of groundwater trapped within the tiny spaces inside rocks in the subsurface – which, in turn, increases the pressure on nearby faults, causing them to slip and release seismic energy in the form of earthquakes.

    The Fault Slip Potential (FSP) tool that Walsh and Zoback developed uses three key pieces of information to help determine the probability of a fault being pushed to slip. The first is how much wastewater injection will increase pore pressure at a site. The second is knowledge of the stresses acting in the earth. This information is obtained from monitoring earthquakes or already drilled wells in the area. The final piece of information is knowledge of pre-existing faults in the area. Such information typically comes from data collected by oil and gas companies as they explore for new resources.
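
    This is not the FSP code itself (which the article says will be downloadable from SCITS.stanford.edu); the sketch below only illustrates the deterministic core of such a calculation, a Mohr-Coulomb slip criterion asking how much extra pore pressure would bring a given fault to failure. The stress values, friction coefficient and units are hypothetical, and the real tool wraps a calculation like this in a probabilistic treatment of the uncertain inputs.

```python
def extra_pressure_to_slip(sigma_n, tau, pore_pressure, mu=0.6, cohesion=0.0):
    """Additional pore pressure (same units as the stresses) needed to bring a
    fault to Mohr-Coulomb failure: slip when tau >= cohesion + mu * (sigma_n - Pp).

    sigma_n       : total normal stress resolved on the fault plane
    tau           : shear stress resolved on the fault plane
    pore_pressure : current pore pressure
    mu            : friction coefficient (0.6 is a common assumption)
    cohesion      : fault cohesion (often taken as zero)
    A small return value means the fault is already close to slipping.
    """
    critical_pp = sigma_n - (tau - cohesion) / mu    # pore pressure at failure
    return max(0.0, critical_pp - pore_pressure)

# Hypothetical stresses in MPa: this fault needs about 2 MPa more pressure to slip.
print(round(extra_pressure_to_slip(sigma_n=60.0, tau=30.0, pore_pressure=8.0), 1))
```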

    Testing the tool

    Zoback and Walsh have started testing their FSP tool in Oklahoma, which has experienced a sharp rise in the number of earthquakes since 2009, due largely to wastewater injection operations. Their analysis suggests that some wastewater injection wells in Oklahoma were unwittingly placed near stressed faults already primed to slip.

    “Our tool provides a quantitative probabilistic approach for identifying at-risk faults so that they can be avoided,” Walsh said. “Our aim is to make using this tool the first thing that’s done before an injection well is drilled.”

    Regulators could also use the tool to identify areas where proposed injection activities could prove problematic so that enhanced monitoring efforts can be implemented.

    The FSP software program will be made freely available for download at SCITS.stanford.edu on March 2.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     