Tagged: Applied Research & Technology

  • richardmitnick 4:14 pm on July 16, 2017
    Tags: Applied Research & Technology, Comparing competitiveness, Is America’s digital leadership on the wane?, Just five companies – Comcast Spectrum Verizon CenturyLink and AT&T – serve more than 80 percent of wired-internet customers in the US, The US is stalling out, We looked not only at current conditions but also at how fast those conditions are changing

    From The Conversation: “Is America’s digital leadership on the wane?” Simple Answer Yes 

    The Conversation

    July 16, 2017
    Bhaskar Chakravorti

    American leadership in technology innovation and economic competitiveness is at risk if U.S. policymakers don’t take crucial steps to protect the country’s digital future. The country that gave the world the internet and the very concept of the disruptive startup could find its role in the global innovation economy slipping from reigning incumbent to a disrupted has-been.

    My research, conducted with Ravi Shankar Chaturvedi, investigates our increasingly digital global society, in which physical interactions – in communications, social and political exchange, commerce, media and entertainment – are being displaced by electronically mediated ones. Our most recent report, “Digital Planet 2017: How Competitiveness and Trust in Digital Economies Vary Across the World,” confirms that the U.S. is on the brink of losing its long-held global advantage in digital innovation.

    Our yearlong study examined factors that influence innovation, such as economic conditions, governmental backing, startup funding, research and development spending and entrepreneurial talent across 60 countries. We found that while the U.S. has a very advanced digital environment, the pace of American investment and innovation is slowing. Other countries – not just major powers like China, but also smaller nations like New Zealand, Singapore and the United Arab Emirates – are building significant public and private efforts that we expect to become foundations for future generations of innovation and successful startup businesses.

    Based on our findings, I believe that rolling back net neutrality rules [NYT] will jeopardize the digital startup ecosystem that has created value for customers, wealth for investors and globally recognized leadership for American technology companies and entrepreneurs. The digital economy in the U.S. is already on the verge of stalling; failing to protect an open internet [freepress] would further erode the United States’ digital competitiveness, making a troubling situation even worse.

    Comparing 60 countries’ digital economies. Harvard Business Review, used and reproducible by permission, CC BY-ND.

    Comparing competitiveness

    In the U.S., the reins of internet connectivity are tightly controlled. Just five companies – Comcast, Spectrum, Verizon, CenturyLink and AT&T – serve more than 80 percent of wired-internet customers. What those companies provide is both slower and more expensive than in many countries around the world. Ending net neutrality, as the Trump administration has proposed, would give internet providers even more power, letting them decide which companies’ innovations can reach the public, and at what costs and speeds.

    However, our research shows that the U.S. doesn’t need more limits on startups. Rather, it should work to revive the creative energy that has been America’s gift to the digital planet. For each of the 60 countries we examined, we combined 170 factors – including elements that measure technological infrastructure, government policies and economic activity – into a ranking we call the Digital Evolution Index.

    To evaluate a country’s competitiveness, we looked not only at current conditions, but also at how fast those conditions are changing. For example, we noted not only how many people have broadband internet service, but also how quickly access is becoming available to more of a country’s population. And we observed not just how many consumers are prepared to buy and sell online, but whether this readiness to transact online is increasing each year and by how much.

    The countries formed four major groups (a rough classification sketch follows the list):

    “Stand Out” countries can be considered the digital elite; they are both highly digitally evolved and advancing quickly.
    “Stall Out” countries have reached a high level of digital evolution, but risk falling behind due to a slower pace of progress and would benefit from a heightened focus on innovation.
    “Break Out” countries score relatively low for overall digital evolution, but are evolving quickly enough to suggest they have the potential to become strong digital economies.
    “Watch Out” countries are neither well advanced nor improving rapidly. They have a lot of work to do, both in terms of infrastructure development and innovation.
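
    To make the level-versus-momentum idea behind these four groups concrete, here is a minimal classification sketch in Python. The country names, scores and cut-off values are invented placeholders, not the actual Digital Evolution Index data or thresholds.

```python
# Illustrative sketch of a level-vs-momentum classification like the one
# described above. All scores and thresholds are made up for illustration;
# they are NOT the Digital Evolution Index data or methodology.

from dataclasses import dataclass

@dataclass
class Country:
    name: str
    evolution_score: float   # current state of digital evolution, 0-100
    momentum: float          # rate of change of that score, e.g. % per year

def classify(c: Country, level_cut: float = 50.0, momentum_cut: float = 2.5) -> str:
    """Place a country in one of the four zones described in the article."""
    high_level = c.evolution_score >= level_cut
    high_momentum = c.momentum >= momentum_cut
    if high_level and high_momentum:
        return "Stand Out"
    if high_level and not high_momentum:
        return "Stall Out"
    if not high_level and high_momentum:
        return "Break Out"
    return "Watch Out"

# Hypothetical example values, chosen only to exercise each branch.
sample = [
    Country("Singapore-like", 80, 4.0),
    Country("US-like", 75, 1.5),
    Country("India-like", 40, 5.0),
    Country("Low-level, low-momentum", 30, 0.5),
]

for c in sample:
    print(f"{c.name:28s} -> {classify(c)}")
```

    In the real index, the level axis is a weighted combination of roughly 170 indicators and the momentum axis is measured over several years; the sketch only captures the two-axis quadrant idea.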

    The US is stalling out

    The picture that emerges for the U.S. is not a pretty one. Although it is the 10th-most digitally advanced country today, its progress is slowing. It is close to joining the major EU countries and the Nordic nations in a club of nations that are, digitally speaking, stalling out.

    The “Stand Out” countries are setting new global standards of high states of evolution and high rates of change, and exploring various innovations such as self-driving cars or robot policemen. New Zealand, for example, is investing in a superior telecommunications system and adopting forward-looking policies that create incentives for entrepreneurs. Singapore plans to invest more than US$13 billion in high-tech industries by 2020. The United Arab Emirates has created free-trade zones and is transforming the city of Dubai into a “smart city,” linking sensors and government offices with residents and visitors to create an interconnected web of transportation, utilities and government services.

    India’s smartphone market – a key element of internet connectivity there – is growing rapidly. Shailesh Andrade/Reuters.

    The “Break Out” countries, many in Asia, are typically not as advanced as others at present, but are catching up quickly, and are on pace to surpass some of today’s “Stand Out” nations in the near future. For example, China – the world’s largest retail and e-commerce market, with the world’s largest number of people using the internet – has the fastest-changing digital economy. Another “Break Out” country is India, which is already the world’s second-largest smartphone market. Though only one-fifth of its 1.3 billion people have online access today, by 2030, some estimates suggest, 1 billion Indians will be online.

    By contrast, the U.S. is on the edge between “Stand Out” and “Stall Out.” One reason is that the American startup economy is slowing down: Private startups are attracting huge investments, but those investments aren’t paying off when the startups are either acquired by bigger companies or go public on the stock markets.

    Investors, business leaders and policymakers need to take a more realistic look at the best way to profit from innovation, balancing efforts toward both huge results and modest ones. They may need to recall the lesson from the founding of the internet itself: If government invests in key aspects of digital infrastructure, either directly or by creating subsidies and tax incentives, that lays the groundwork for massive private investment and innovation that can transform the economy.

    In addition, investments in Asian digital startups have exceeded those in the U.S. for the first time. According to CB Insights and PwC, US$19.3 billion in venture capital from sources around the world was invested in Asian tech startups in the second quarter of 2017, while the U.S. had $18.4 billion in new investment over the same period.

    This is consistent with our findings that Asian high-momentum countries are the ones in the “Break Out” zone; these countries are the ones most exciting for investors. Over time, the U.S.-Asia gap could widen; both money and talent could migrate to digital hot spots elsewhere, such as China and India, or smaller destinations, such as Singapore and New Zealand.

    For the country that gave the world the foundations of the digital economy and a president who seems perpetually plugged in, falling behind would, indeed, be a disgrace.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high-quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues, and hopefully to allow for a better quality of public discourse and conversation.

     
  • richardmitnick 2:16 pm on July 16, 2017
    Tags: Applied Research & Technology, Oxford University, Vaccine development

    From Oxford: “The world’s longest endurance event” Sep 19, 2016 But Worth It, It is Important for All of Us to Understand This 

    U Oxford bloc

    Oxford University

    Sep 19, 2016
    Tom Calver

    Oxford University’s Old Road Campus Research Building. Credit: OUImages / Rob Judges.

    The challenge

    “We started the first trial in 2002 and we finished the efficacy trial in 2013.”

    Eleven years of trials. That was on top of the time taken to develop the thing in the first place. That’s a long time.

    Le Mans takes 24 hours. The Marathon des Sables, billed as the toughest footrace on earth, is over in six days. The Vendee Globe round the world yacht race can take 150 or so days. These are all considered defining tests of human endurance.

    Vaccine development may not be as physically demanding, but to keep plugging away for twenty years surely requires mental resilience. For Le Mans, the MdS and the Vendee Globe, the finish line is known, fixed; for vaccines, new data may move the finish line at any time. Imagine getting halfway round the world to discover that not only has the end moved two hundred miles further away, but you’re heading in the wrong direction…

    So I decided to ask three Oxford scientists with a wealth of vaccine knowledge about what they did and why they kept doing it.


    What happens in a biomedical lab? Filmed in Oxford University’s Old Road Campus Research Building.

    The first was Helen McShane, Professor of Vaccinology, specialist in TB, who began by outlining the vaccine development process.

    Helen McShane, Professor of Vaccinology. Credit: John Cairns.

    Professor Andrew Pollard, director of the Oxford Vaccine Group. No image credit.

    Professor Adrian Hill, director of the Jenner Institute. Credit: John Cairns.

    ‘We design and construct vaccines. We then test them in different animal models. When they look good we then move them into early clinical testing. And when they look really good we then move them into testing through my collaborations in science in Africa.’

    Put it like that and it sounds quite straightforward. Just along from Professor McShane is Professor Adrian Hill, director of Oxford’s dedicated vaccine research centre, the Jenner Institute. He colours in that outline, drawing on his own experience in malaria vaccine research.

    ‘From the idea to licensing a vaccine is a very long road. You can divide it into two very large parts. One is pre-clinical — before you start vaccinating people — and the second part is clinical, which is all the clinical trials you would do to go from your first human vaccinee to convincing a regulator that the vaccine was safe and effective. Each of those will take years.

    ‘In a way it’s easier to describe the clinical development because it happens in three phases. Phase one, which tends to be a safety trial to show that the vaccine doesn’t do any harm. You typically measure some immune responses. Then in phase two you try to get the dose optimised. You figure out how many immunisations you need to give, what the interval between them should be — the immunisation regime — and you test the vaccine in a lot more people and often in different settings. You might in malaria do that in the country of origin of the vaccine, in our case the UK, and, say, in a West African population where there is malaria, and then you’d go on from adults, where you typically test a vaccine first, to the group you really want to immunise, which in the case of malaria is young infants. That takes time, because you can’t just one day immunise a lot of adults and the next week immunise babies — you have to age de-escalate carefully.

    ‘And all the time you’re monitoring safety because any serious adverse event that pops up at any stage of the clinical process can flag the end for that vaccine candidate. So that would typically take five or ten years of clinical development — if everything goes well.

    ‘Of course one of the challenges with public sector vaccine development, where you’re raising money to fund your next clinical trial all the time, is that every time you’ve done a trial you have to persuade the next group of funders or the next funding agency that the results are promising enough to justify going forward.

    ‘So there’s continuous review and unfortunately the amount of money required to keep going gets larger and larger. A phase one trial is typically maybe half a million pounds, a phase two trial is millions of pounds and a phase three trial is many tens of millions of pounds, so people have to be more and more confident at every stage that this is going to be a real product that will save lives.

    ‘On the pre-clinical side of things, it’s faster but it’s more complex. You have many more decisions to make in early stage research, for example — what antigen you will use in your malaria vaccine — there are thousands. What way you will deliver the antigen? That’s what we call immunogen design — how will you design the vaccine so that you get a strong immune response that will last for a long time? If it doesn’t last for a long time, your vaccine won’t work for very long.

    ‘Other questions are very relevant, for example, are you going to be able to manufacture this vaccine, not just to do a trial but eventually to vaccinate maybe 100 million babies around the world every year to stop them getting malaria. That isn’t going to be possible for every type of vaccine you think of, and for some where it is possible, it will be too expensive, because we can’t sell a malaria vaccine for 100 dollars; it has to be cheaper than that because someone else is going to be paying. So there are lots of considerations, there’s lots of debate. You must test your vaccines in animals, for several reasons: to establish safety, and to look at the immune response that’s produced to see if it works in small animals. If it doesn’t, you’re probably not going to be supported to go further into human testing. So we have actually more people doing pre-clinical malaria research than we have doing malaria clinical trials.’

    Given in such detail, it sounds daunting enough. Yet Professor McShane adds another detail — a critical one. To trial a candidate vaccine requires volunteers, people willing literally to roll up their sleeves and be injected with a new biological agent, or in the case of the later stages of her MVA85A TB vaccine study, willing to volunteer their baby.

    ‘The first study you do is what’s called a ‘phase one — first in man’ and that is literally the first time anything has been tested in man. Those are very small studies, typically 12–20 subjects. We then test it in other populations that are relevant for TB vaccines, so for example we would look at safety and immunogenicity in what we call latently infected people or people with HIV, who are more likely to get TB. Those trials would also take 12–20 subjects.

    ‘We would then move to phase 2a studies in the developing world where TB is prevalent. Here the numbers go up a bit so you then vaccinate 50 to 100 subjects. One of the important target populations for a TB vaccine is infants — because they get a lot of TB. If you give a vaccine to an infant the best case scenario is that you give it at the same time as all their other vaccines because then you minimise additional visits to the clinic and you’re more likely to vaccinate more children. To do that you have to make sure the new vaccine you’re adding doesn’t interfere with the existing vaccines, so you do what’s known as a non-interference study. We did one with 300 babies in the Gambia.

    ‘And then there’s a big jump between those studies and what we call the phase 2b efficacy trials and for those studies you need very big numbers because although there’s a lot of TB in the world, in any population the incidence — new cases per year — is actually quite low. So you need a high-incidence population because you need as many new cases as possible — because that’s your measure of efficacy: you’re counting the number of new cases in a given time period in your vaccine arm versus your placebo arm. So in our efficacy trial we had 3000 babies.’
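
    The headline number from such an efficacy trial boils down to comparing incidence between the two arms. The sketch below shows that calculation with invented case counts; these are not figures from the MVA85A trial, and real analyses also account for person-time at risk and statistical uncertainty.

```python
# Minimal sketch of how vaccine efficacy is estimated from an efficacy trial:
# compare the incidence of new cases in the vaccine arm with the placebo arm.
# The numbers below are invented for illustration only.

def vaccine_efficacy(cases_vaccine: int, n_vaccine: int,
                     cases_placebo: int, n_placebo: int) -> float:
    """Efficacy = 1 - (incidence in vaccine arm / incidence in placebo arm)."""
    incidence_vaccine = cases_vaccine / n_vaccine
    incidence_placebo = cases_placebo / n_placebo
    return 1.0 - incidence_vaccine / incidence_placebo

# Hypothetical trial: 1,500 infants per arm, 30 cases among placebo recipients,
# 21 cases among vaccine recipients.
ve = vaccine_efficacy(cases_vaccine=21, n_vaccine=1500,
                      cases_placebo=30, n_placebo=1500)
print(f"Estimated vaccine efficacy: {ve:.0%}")   # -> 30%
```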

    More common diseases might require fewer volunteers than Professor McShane’s 3000, but even then you’ll need hundreds of people to agree to take part. Without enough volunteers, you have no clinical trials; without clinical trials, you have no progress. Yet, not only are there volunteers, some of those in Oxford who take part in early trials choose to return again and again to support research.

    A mile away is Professor Andrew Pollard, director of the Oxford Vaccine Group, which specialises in paediatric vaccines. He echoes the issue of volunteer recruitment.

    ‘In trials the delivery is an enormous undertaking because of the cost, the logistics and the regulation and just getting out there to get volunteers in to do very complex experiments with them.

    ‘What’s exciting about being at Oxford is we can do it and it works. There’s such a good machine and wonderful people you can press a button and do a study like VAST: We import a vaccine from India, give it to volunteers, challenge them a month later with a pathogen that in some parts of the world is killing people, and at the end of it have a readout that helps inform global policy. You can’t really beat that.’

    The outcome

    So that’s the practical challenge. But is it worth it? Does all that work finally pay off?

    Not always.

    The three researchers seem sanguine about it. Professor McShane wryly notes: ‘Inevitably there are lots of ups and downs along the way.’

    ‘With the first four volunteers we vaccinated with MVA85A, we saw enormously strong responses to that vaccine, much more than we were expecting, and I still remember the day my postdoc walked round the lab with the results and showed them to me and I thought he must have made a mistake. They were twenty times higher than we were expecting the responses to be.’

    Professor Pollard calmly tells me that the majority of vaccines in most fields that have been developed don’t even step into phase one, and many of those that do don’t progress much from there:

    ‘We don’t understand the immune system quite well enough to know why, when it doesn’t work, it doesn’t. The reality of day to day doing research is that there is lots of stuff that doesn’t work, which is a bit frustrating.

    ‘There is a huge challenge to get to phase I because of the costs of manufacturing a vaccine to the standards required by regulators… but for those candidates that look promising, further development is very much driven by the likely size of the market and by whether the large investment required to get to phase III can be recouped.’

    I push each of them on this — surely they feel more than slight frustration?

    Professor Hill describes a negative result as ‘like losing a football game only worse because this football game has sometimes lasted two or three years — if not more. And you finally get it into the clinic, do an efficacy trial and there’s no efficacy.

    ‘I’ve sat in this chair and just stared at a piece of paper which says that vaccine isn’t working at all and that’s a bad experience.’

    Professor McShane describes her experience with MVA85A. After the joy of those unexpectedly positive results, the vaccine progressed to that trial in 3000 babies:

    ‘Eight of us — the eight people who led the trial — pretty much locked ourselves in a hotel room in Cape Town and got the results. None of us had seen the results and we were literally handed a folder with all the results in at 9 o’clock on a Monday morning. And we all sat and looked at the results together and all looked at each other.

    ‘Five years of an efficacy trial and ten years of vaccine development, to discover that the vaccine, whilst safe, had not improved protection against TB compared to BCG alone. I was enormously disappointed; pleased it was safe, pleased that there was absolutely no evidence that we had done any harm with this vaccine, but clearly it hadn’t worked.’

    Adrian Hill describes the first field trial of a vaccine his team did in West Africa in 2001. It was the first time they’d really tested their cellular immunity approach. It showed no significant efficacy.

    However, he notes that the study revealed firstly that it was not going to be quick and easy and secondly that they were a long way from getting there and would need to do a lot of optimisation. It was, he realised, going to be a long road. The first attempt at a malaria vaccine was produced in 1908, and we still don’t have a vaccine for the disease.

    Yet there are successes. Andrew Pollard points out that half of the vaccines in the UK immunisation schedule have trials from Oxford underpinning their development:

    ‘You look at the UK immunisations schedule and we’ve more than doubled the number of vaccines we give to children compared with the 90s, which is fantastic for public health. Look at the mortality data for children and they are less likely to die now than twenty years ago. There are probably many reasons for that but vaccines are part of it. There are measurable effects in the population; it’s much more dramatic in the developing world where there are huge reductions in the death rate but there is still a measurable fall in childhood mortality even in this country. I feel quite privileged to have been part of that process.’

    The motivation

    In all the discussions it is clear that the challenges are significant and the outcomes variable. So why?

    With timescales of decades offering only uncertain success, where do they find the strength to keep going? Discover the vaccine for X and you can look back and say it was all worth it. But what keeps you going when it looks like you won’t get there?

    The answers I get show that this framing is too simple: it’s tempting to look at vaccine development in a binary way — you develop a vaccine and it either works or it doesn’t.

    None of the researchers take that view. Compare these three answers describing the response to trials that have not shown a vaccine to work:

    Adrian Hill: ‘The motivation is firstly to understand why it didn’t work and secondly to realise that by it not working you have learned something that is usually pretty important.’

    Andrew Pollard: ‘We’ve learned something that means we don’t pursue that avenue in the future or we know how to do something that we didn’t know before — we’ve learnt some of the technologies that were needed.’

    Helen McShane: ‘It’s my view that we will make progress in this field by iteratively doing trials, getting results, feeding them back, using those results to improve the animal models, improve the immunological markers that we use and improve the design of our next generation vaccines.’

    Everything is an opportunity to learn, and it seems to me that by learning as much as possible, whatever the headline outcome, researchers can soften the blow that comes from it.


    Helen McShane on fighting TB in South Africa.

    Helen McShane’s story of that South African TB trial illustrates that. She described what happened after the initial — disappointing — result.

    ‘I guess we got to Monday afternoon and it took us a few hours to recover from the disappointment collectively as a group and then we realised that yes, this was disappointing but TB vaccines are a difficult field because we don’t have a correlate, we don’t know which — if any — of the animal models predict human efficacy — the only way we can test if a vaccine is going to work is to do these trials.

    ‘I think we were all and are all utterly committed to learning as much as we can from this study — and my lab has just finished conducting an enormous correlate analysis on the blood samples taken from all of those babies to try to find out more about the immune response we should try to induce — so that we can move on and design the next vaccine. You have to just carry on — TB remains a very important cause of death and disease throughout the world.

    ‘Even in that first week when we got that efficacy trial result I was starting to think well we could do this instead. That’s the joy of science really — that there are always other things to try. We will have five or six projects ongoing; the nature of science is that some of those will work and some of those won’t, but as long as they don’t all bomb at once then there’s always something to keep you going. There’s always another idea, there’s always something you can do to make it better. I guess it’s just got me hooked.’

    That excitement — the sense of enquiry, discovery and potential — is also clear when I speak to Andrew Pollard. He tells me how technology, such as gene sequencing, means that we can understand much more about both the pathogens and the immune response to them.

    ‘It’s almost that we’re getting to the top of a hill — we’re not quite sure what’s over the top of the hill in the way that we use technology to understand immune responses. We’re getting vast amounts of data from genomes, transcriptomes and so on. It feels like we’re developing this enormous new multi-parameter dataset that will help us understand the immune system and how complex it is.

    ‘That’s exciting, although our problem is processing that amount of data to turn it into something useful, but it feels like when we get over that it is just going to transform the way we use that information to design the next generation of vaccines that hopefully will be safer — we’ll know what gives you a sore arm or a fever and we’ll stop those things from happening — and perhaps one day we will also be able to design out risks for developing rare serious side effects. So if you can identify all those aspects and at the same time find what makes really good immune responses that protect you, future vaccines could be a single shot that protects you for life.’

    Do not be fooled into thinking that this is the scientific excitement of academics divorced from reality. Professor McShane walks across the road from her office each week, to where Dr McShane runs an HIV clinic.

    ‘It’s very grounding because I go from this wonderful academic stimulating environment where every day I have fascinating conversations. I have this wonderful team of people and this wonderful team of collaborators that I have really interesting conversations with — so then I just go and get my feet put back on the ground and see real life at its rawest.’

    When I ask Professor Pollard about highs and lows, I do so expecting him to talk about moments in research.

    ‘It’s the people stuff which is the best and the worst. The science is what gets you out of bed in the morning. The worst things are some of the personal tragedies that happen to the staff that you work with.

    ‘It’s not the science, which is par for the course — you know, your ups and downs: you get the excitement of the paper coming out or the discovery of something, your work being used to inform policy decisions. For me, it’s the student getting their DPhil and launching on their career or sometimes leaving Oxford and getting their first job after they’ve finished. So it’s the people things that are the bit that matter the most.’

    It seems vaccine research has two drivers: Science and humanity. There’s no doubt that all three experienced researchers are still excited by the journey of discovery they are on. However, there’s also no doubt that they are motivated by the desire to do something good for people. In the end, it’s something Helen McShane said that sticks most in the mind, part of her answer when asked how she kept going.

    The University

    I ask Adrian Hill whether it takes a certain type of person to do vaccinology.

    ‘I think we are self-selected. Just to raise money is tough going — success rates are low. I think you can see the pathway by which something should work and feel you can make it work.’

    That determination to make it work is how in the last sixteen years Oxford has become a key centre for vaccine research. In 1999, when Adrian Hill began trialling his first malaria vaccine, Oxford had no other vaccine makers, although there was a small unit testing vaccines made by other people. However, Professor Hill was supported by long-time Oxford partner The Wellcome Trust.

    In many ways that reflects a wider development. Despite it being 220 years since Edward Jenner’s first vaccination, vaccinology has only been seen as a research discipline in its own right in the last twenty to thirty years.

    It used to be that companies made vaccines and some universities did trials. That model has changed, driven by the increasing complexity of vaccine development when faced with diseases like malaria and HIV. The intensive, prolonged programmes of research required are very unlikely to get approval from investors. As Adrian Hill puts it:

    ‘If we’re ever going to crack these tough vaccines like cancer, like HIV, like TB, you need a substantial effort in major research universities.’

    At the same time, major funders like the Wellcome Trust and the Gates Foundation have focussed on global inequality and the paradox that science can put men on the moon yet not stop babies dying of common infectious diseases on earth. In the last fifteen years this large scale funding for global health has focussed on vaccination because of its cost effectiveness as a healthcare intervention.

    This has led to a realisation that vaccinology is a form of translational research that you can carry out in a university from fundamental science through manufacturing into clinical development and even to late stage clinical trials. All of that can be done at Oxford, from the structural testing of possible antigens to clinical trials using the University’s network of overseas units.

    In 2005, an agreement saw the Jenner Institute relocate to Oxford, with a statement of intent that it would be judged on its ability to develop and test new vaccine candidates. Meanwhile, the Oxford Vaccine Group has continued to develop its speciality in paediatric and maternal vaccines.

    The University provides the core facilities that allow researchers to pursue vaccines for diseases from RSV to cancer. These include an in-house clinical manufacturing facility where batches of trial vaccines can be produced and where processes can be developed to ensure that an effective vaccine can be made in a way that will scale up to meet global demand.

    It is not just physical facilities however. Oxford’s research governance and oversight of clinical trials ensures that clinical research is of a quality that protects patients, reassures regulators and delivers robust results.

    Beyond these practicalities, this development means that there is a critical mass of scientists who can exchange ideas and learn from each other. Little wonder that some of the best researchers wanting to get into the field choose to do so at Oxford.

    When they do, they will doubtless face similar situations to Professors Hill, McShane and Pollard — unexpectedly good and bad results, breakthroughs and apparent dead ends, days when progress seems swift and days when it seems reversed.

    Their endurance events will build on those that have gone before and those going on now. They will learn from each other, draw support from each other and assist each other. Finish lines will move, sometimes closer, sometimes further away. New discoveries will offer more routes, new data will offer deeper understanding. They will keep going until — eventually — there will be vaccines for TB, for malaria, for RSV and even for cancer.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    U Oxford campus

    Oxford is a collegiate university, consisting of the central University and colleges. The central University is composed of academic departments and research centres, administrative departments, libraries and museums. The 38 colleges are self-governing and financially independent institutions, which are related to the central University in a federal system. There are also six permanent private halls, which were founded by different Christian denominations and which still retain their Christian character.

    The different roles of the colleges and the University have evolved over time.

     
  • richardmitnick 12:46 pm on July 16, 2017
    Tags: Applied Research & Technology, Broad Institute of MIT and Harvard, Moran Yassour, The infant microbiome

    From Broad: Women in STEM- “Meet a Broadie: Moran Yassour, Microbiome Maven” 

    Broad Institute

    7.16.17
    No writer credit

    Moran Yassour

    Credit: BroadIgnite

    Moran Yassour, a postdoctoral researcher in the labs of Eric Lander and Ramnik Xavier at the Broad Institute, is a pioneer in one of biology’s hottest fields: the human microbiome. She’s researching how the circumstances of our birth and early life influence the origin and development of the microbes in our gut. Support from the BroadIgnite community has allowed her to investigate the differences in the gut bacteria between children born by C-section and those born vaginally. Here, she shares more about her research. The interview has been condensed and edited for clarity.

    How did you come to study the infant microbiome? My mother is a computer science teacher, and I always loved genetics. When I was looking for undergrad programs, I came across a program that combined computer science and life science. I really enjoyed it and stayed in the same program for my master’s and Ph.D.

    When I started my postdoc, I knew that I wanted to be in a field that’s a little bit more translational—something I could easily explain to my grandmother, or a stranger in the elevator, and they could understand what I’m doing and why it’s cool.

    I started working on gut microbiome samples of adult patients with inflammatory bowel disease (IBD), but we also had a collaboration with a Finnish group with a cohort of children who got lots of antibiotics. I thought that was super interesting, because I had two young children at the time (a third is now on the way).

    One day I was sitting in day care, and I realized that there are so many things that are different between them. Clearly, they’re going to share a lot of microbes because they’re all licking the same toys, but they have so many different eating habits. So it started me thinking about all the diversity that we see among children of the same age group, even among the same classroom in daycare.

    What is the goal of your BroadIgnite project? In the Finnish data, we saw a pattern that was known before: kids born by C-section have a different microbe signature than kids born by vaginal delivery. What was really interesting and novel, though, was that 20 percent of kids born by vaginal delivery had the C-section microbial signature. We didn’t have the data to explain it. At some point, when I kept complaining that we don’t have all the things that could be relevant, I realized that we should just try to establish a new cohort that would have all the data we were missing.

    Together with Dr. Caroline Mitchell (an OBGYN at MGH) we enrolled 190 families that came to deliver at MGH labor and delivery, and we collected samples from the kids and from different niches of the mother’s body. Now that most of the samples have been sequenced, we can get a better understanding of the differences in the microbial signatures. We can investigate questions like: do we see less transmission of bacterial strains from mother to child in C-section births? And can we identify the bacteria impacted the most?

    What else might influence a baby’s microbiome? We have a project looking at breast milk versus infant formula. The third most common component in breast milk is a class of sugars called human milk oligosaccharides. There are 200 different types of these sugars, and each mother can have a different combination of these sugars in her milk. But the baby itself does not have the enzymes to break these down—basically the mother is feeding the baby’s gut bacteria.

    In formula, none of these sugars are present, partly because they’re very expensive to make. But we also don’t know which sugars to add. We want to understand what the minimal and necessary set is that we can use to supplement formula that would best mimic breast milk. And so we’re trying to understand which bacteria could grow on which sugar, and which bacterial genes enable this potential growth for each sugar.

    It also turns out that cow’s milk allergy is almost twice as prevalent in kids who are exclusively formula fed as in kids who are breastfed. Formula is based on cow’s milk, so it could just be that they get a lot of exposure to cow’s milk protein if they’re exclusively eating formula. On the other hand, we know that exclusively formula-fed babies have different gut bacteria. So that’s what we’re investigating with the Food Allergy Science Initiative, with a cohort of 180 kids, 90 of whom developed milk allergy and 90 of whom did not.

    What role did BroadIgnite play? Many young scientists lack confidence, so when other people think what you’re studying is important and that the methods you’re using are interesting, then that’s fun. The BroadIgnite funding was a really nice boost. It’s an honor to belong to such an extraordinary group of scientists.

    Furthermore, I think that two strong advantages of the BroadIgnite funding are that I could get the funding started very fast, which helped me establish my new cohort, and that the preliminary results from these samples were instrumental in receiving a large NIH grant to further support my projects.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Broad Institute Campus

    The Eli and Edythe L. Broad Institute of Harvard and MIT is founded on two core beliefs:

    This generation has a historic opportunity and responsibility to transform medicine by using systematic approaches in the biological sciences to dramatically accelerate the understanding and treatment of disease.
    To fulfill this mission, we need new kinds of research institutions, with a deeply collaborative spirit across disciplines and organizations, and having the capacity to tackle ambitious challenges.

    The Broad Institute is essentially an “experiment” in a new way of doing science, empowering this generation of researchers to:

    Act nimbly. Encouraging creativity often means moving quickly, and taking risks on new approaches and structures that often defy conventional wisdom.
    Work boldly. Meeting the biomedical challenges of this generation requires the capacity to mount projects at any scale — from a single individual to teams of hundreds of scientists.
    Share openly. Seizing scientific opportunities requires creating methods, tools and massive data sets — and making them available to the entire scientific community to rapidly accelerate biomedical advancement.
    Reach globally. Biomedicine should address the medical challenges of the entire world, not just advanced economies, and include scientists in developing countries as equal partners whose knowledge and experience are critical to driving progress.

    Harvard University

    MIT Widget

     
  • richardmitnick 10:22 pm on July 15, 2017
    Tags: Applied Research & Technology, MEET SURF, U Washington Majorana

    Meet SURF-Sanford Underground Research Facility, South Dakota, USA 

    SURF logo
    Sanford Underground levels

    THIS POST IS DEDICATED TO CONSTANCE WALTER, Communications Director, fantastic writer, AND MATT KAPUST, Creative Services Developer, master photographer, FOR THEIR TIRELESS EFFORTS IN KEEPING US INFORMED ABOUT PROGRESS FOR SCIENCE IN SOUTH DAKOTA, USA.

    Sanford Underground Research facility

    The SURF story in pictures:

    SURF-Sanford Underground Research Facility


    SURF Above Ground

    SURF Out with the Old


    SURF An Empty Slate


    SURF Carving New Space


    SURF Shotcreting


    SURF Bolting and Wire Mesh


    SURF Outfitting Begins


    SURF circular wooden frame was built to form a concrete ring to hold the 72,000-gallon (272,549 liters) water tank that would house the LUX dark matter detector


    SURF LUX water tank was transported in pieces and welded together in the Davis Cavern


    SURF Ground Support


    SURF Dedicated to Science


    SURF Building a Ship in a Bottle


    SURF Tight Spaces


    SURF Ready for Science


    SURF Entrance Before Outfitting


    SURF Entrance After Outfitting


    SURF Common Corridor


    SURF Davis


    SURF Davis A World Class Site


    SURF Davis a Lab Site


    SURF DUNE LBNF Caverns at Sanford Lab


    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA


    FNAL DUNE Argon tank at SURF

    U Washington LUX Xenon experiment at SURF


    SURF Before Majorana


    U Washington Majorana Demonstrator Experiment at SURF

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    About us.
    The Sanford Underground Research Facility in Lead, South Dakota, advances our understanding of the universe by providing laboratory space deep underground, where sensitive physics experiments can be shielded from cosmic radiation. Researchers at the Sanford Lab explore some of the most challenging questions facing 21st century physics, such as the origin of matter, the nature of dark matter and the properties of neutrinos. The facility also hosts experiments in other disciplines—including geology, biology and engineering.

    The Sanford Lab is located at the former Homestake gold mine, which was a physics landmark long before being converted into a dedicated science facility. Nuclear chemist Ray Davis earned a share of the Nobel Prize for Physics in 2002 for a solar neutrino experiment he installed 4,850 feet underground in the mine.

    Homestake closed in 2003, but the company donated the property to South Dakota in 2006 for use as an underground laboratory. That same year, philanthropist T. Denny Sanford donated $70 million to the project. The South Dakota Legislature also created the South Dakota Science and Technology Authority to operate the lab. The state Legislature has committed more than $40 million in state funds to the project, and South Dakota also obtained a $10 million Community Development Block Grant to help rehabilitate the facility.

    In 2007, after the National Science Foundation named Homestake as the preferred site for a proposed national Deep Underground Science and Engineering Laboratory (DUSEL), the South Dakota Science and Technology Authority (SDSTA) began reopening the former gold mine.

    In December 2010, the National Science Board decided not to fund further design of DUSEL. However, in 2011 the Department of Energy, through the Lawrence Berkeley National Laboratory, agreed to support ongoing science operations at Sanford Lab, while investigating how to use the underground research facility for other longer-term experiments. The SDSTA, which owns Sanford Lab, continues to operate the facility under that agreement with Berkeley Lab.

    The first two major physics experiments at the Sanford Lab are 4,850 feet underground in an area called the Davis Campus, named for the late Ray Davis. The Large Underground Xenon (LUX) experiment is housed in the same cavern excavated for Ray Davis’s experiment in the 1960s.
    LUX/Dark matter experiment at SURF

    In October 2013, after an initial run of 80 days, LUX was determined to be the most sensitive detector yet to search for dark matter—a mysterious, yet-to-be-detected substance thought to be the most prevalent matter in the universe. The Majorana Demonstrator experiment, also on the 4850 Level, is searching for a rare phenomenon called “neutrinoless double-beta decay” that could reveal whether subatomic particles called neutrinos can be their own antiparticle. Detection of neutrinoless double-beta decay could help determine why matter prevailed over antimatter. The Majorana Demonstrator experiment is adjacent to the original Davis cavern.

    Another major experiment, the Long Baseline Neutrino Experiment (LBNE), a collaboration between Fermi National Accelerator Laboratory (Fermilab) and Sanford Lab, is in the preliminary design stages. The project got a major boost last year when Congress approved and the president signed an Omnibus Appropriations bill that will fund LBNE operations through FY 2014. Called the “next frontier of particle physics,” LBNE will follow neutrinos as they travel 800 miles through the earth, from Fermilab in Batavia, Ill., to Sanford Lab.

    Fermilab LBNE
    LBNE

     
  • richardmitnick 4:46 pm on July 15, 2017
    Tags: Applied Research & Technology, Laser communication to the orbit, Quantum cryptography

    From Max Planck Gesellschaft: “Quantum communication with a satellite” 

    Max Planck Gesellschaft

    July 10, 2017
    Prof. Dr. Gerd Leuchs
    Max Planck Institute for the Science of Light, Erlangen
    Phone: +49 9131 7133-100
    Fax: +49 9131 7133-109
    gerd.leuchs@mpl.mpg.de

    What started out as exotic research in physics laboratories could soon change the global communication of sensitive data: quantum cryptography. Interest in this technique has grown rapidly over the last two years or so. The most recent work in this field, which a team headed by Christoph Marquardt and Gerd Leuchs at the Max Planck Institute for the Science of Light in Erlangen is now presenting, is set to heighten the interest of telecommunications companies, banks and governmental institutions even further. This is due to the fact that the physicists collaborating with the company Tesat-Spacecom and the German Aerospace Center have now created one precondition for using quantum cryptography to communicate over large distances as well without any risk of interception. They measured the quantum states of light signals which were transmitted from a geostationary communication satellite 38,000 kilometres away from Earth. The physicists are therefore confident that a global interception-proof communications network based on established satellite technology could be set up within only a few years.

    More versatile than originally thought: A part of the Alphasat I-XL was actually developed to demonstrate data transmission between the Earth observation satellites of the European Copernicus project and Earth, but has now helped a group including researchers from the Max Planck Institute for the Science of Light to test the measurement of quantum states after they have been transmitted over a distance of 38,000 kilometres. © ESA.

    Sensitive data from banks, state institutions or the health sector, for example, must not fall into unauthorized hands. Although modern encryption techniques are far advanced, they can be cracked in many cases if significant, commensurate efforts are expended. And conventional encryption methods would hardly represent a challenge for the quantum computers of the future. While scientists used to think that the realization of such a computer was still a very long way off, considerable progress in the recent past has now raised physicists’ hopes. “A quantum computer could then also crack the data being stored today,” as Christoph Marquardt, leader of a research group at the Max Planck Institute for the Science of Light, states. “And this is why we are harnessing quantum cryptography now in order to develop a secure data transfer method.”

    Quantum mechanics protects a key against spies

    In quantum cryptography, two parties exchange a secret key, which can be used to encrypt messages. Unlike established public key encryption methods, this method cannot be cracked as long as the key does not fall into the wrong hands. In order to prevent this from happening, the two parties send each other keys in the form of quantum states in laser pulses. The laws of quantum mechanics protect a key from spies here, because any eavesdropping attempt will inevitably leave traces in the signals, which sender and recipient will detect immediately. This is because reading quantum information equates to a measurement on the light pulse, which inevitably changes the quantum state of the light.
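
    To illustrate why an eavesdropper cannot go unnoticed, here is a toy simulation of an intercept-resend attack on a simple discrete-variable (BB84-style) key exchange. It is a pedagogical sketch only; the Erlangen experiment uses a continuous-variable scheme based on amplitude and phase modulation, and all parameters below are invented.

```python
# Toy illustration of why eavesdropping on a quantum key leaves traces.
# Simplified discrete-variable (BB84-style) simulation, NOT the
# continuous-variable amplitude/phase scheme described in the article.

import random

def run(n_pulses: int = 20000, eavesdropper: bool = False) -> float:
    """Return the error rate Alice and Bob see on pulses where their bases match."""
    errors = 0
    kept = 0
    for _ in range(n_pulses):
        alice_bit = random.randint(0, 1)
        alice_basis = random.randint(0, 1)

        # State travelling towards Bob: (bit value, basis it is encoded in).
        bit, basis = alice_bit, alice_basis

        if eavesdropper:
            # Intercept-resend attack: Eve measures in a random basis and resends.
            eve_basis = random.randint(0, 1)
            if eve_basis != basis:
                bit = random.randint(0, 1)   # wrong basis -> random outcome
            basis = eve_basis                # Eve resends in her own basis

        bob_basis = random.randint(0, 1)
        bob_bit = bit if bob_basis == basis else random.randint(0, 1)

        # Sifting: keep only pulses where Alice's and Bob's bases agree.
        if bob_basis == alice_basis:
            kept += 1
            if bob_bit != alice_bit:
                errors += 1
    return errors / kept

print(f"Error rate without eavesdropper: {run(eavesdropper=False):.1%}")  # ~0%
print(f"Error rate with eavesdropper:    {run(eavesdropper=True):.1%}")   # ~25%
```

    Without the eavesdropper the error rate on matching-basis pulses is essentially zero; with the intercept-resend attack it jumps to roughly 25 percent, which is exactly the kind of trace that alerts sender and recipient.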

    In the laboratory and over short distances quantum key distribution already works rather well via optical fibres that are used in optical telecommunications technology. Over large distances the weak and sensitive quantum signals need to be refreshed, which is difficult for reasons similar to those that prevent the laser pulses from being intercepted unnoticed. Christoph Marquardt and his colleagues are therefore relying on the transmission of quantum states via the atmosphere, between Earth and satellites to be precise, to set up a global communications network that is protected by quantum cryptography.

    Laser communication to the orbit: The infrared image shows the ground station for the communication with the Alphasat I-XL satellite 38,000 kilometres away. The receiver sends an infrared laser beam in the direction of the orbit so that the satellite can find it. Since the beam is scattered by a higher atmospheric layer, it appears as a larger spot. © Imran Khan, MPI for the Science of Light.

    In their current publication [Optica], the researchers showed that this can largely be based on existing technology. Using a measuring device on the Canary Island of Tenerife, they detected the quantum properties of laser pulses which the Alphasat I-XL communications satellite had transmitted to Earth. The satellite circles Earth on a geostationary orbit and therefore appears to stand still in the sky. The satellite, which was launched in 2013, carries laser communication equipment belonging to the European Space Agency ESA. The company Tesat-Spacecom, headquartered in Backnang near Stuttgart, developed the technology in collaboration with the German Aerospace Center as part of the European Copernicus project for Earth observation, which is funded by the German Federal Ministry for Economic Affairs and Energy.


    ESA Sentinels (Copernicus)

    While Alphasat I-XL was never intended for quantum communication, “we found out at some stage, however, that the data transmission of the satellite worked according to the same principle as that of our laboratory experiments,” explains Marquardt, “which is by modulating the amplitude and phase of the light waves.” The amplitude is a measure for the intensity of the light waves and the mutual shift of two waves can be determined with the aid of the phase.

    The laser beam is 800 metres wide after travelling 38,000 kilometres

    For conventional data transmission, the modulation of the amplitude, for example, is made particularly large. This makes it easier to read out in the receiver and guarantees a clear signal. Marquardt and his colleagues were striving to achieve the exact opposite, however: in order to get down to the quantum level with the laser pulses, they have to greatly reduce the amplitude.

    The signal, which is therefore already extremely weak, is attenuated a great deal more as it is being transmitted to Earth. The largest loss occurs due to the widening of the laser beam. After 38,000 kilometres, it has a diameter of 800 metres at the ground, while the diameter of the mirror in the receiving station is a mere 27 centimetres. Further receiving mirrors, which uninvited listeners could use to eavesdrop on the communication, could easily be accommodated in a beam which is widened to such an extent. The quantum cryptography procedure, however, takes this into account. In a simple picture it exploits the fact that a photon – which is what the signals of quantum communication employ – can only be measured once completely: either with the measuring apparatus of the lawful recipient or the eavesdropping device of the spy. The exact location where a photon is registered within the beam diameter, however, is still left to chance.
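
    A back-of-the-envelope estimate of the geometric loss implied by those two numbers follows; it assumes, purely for illustration, that the power is spread uniformly over the 800-metre spot. A real beam profile is not uniform, so this is only an order-of-magnitude figure.

```python
# Rough estimate of the geometric loss from beam widening, using the figures
# quoted above: an 800 m wide beam at the ground and a 27 cm receiving mirror.
# Assumes (for illustration only) a uniformly illuminated spot.

import math

beam_diameter_m = 800.0      # spot size on the ground after 38,000 km
mirror_diameter_m = 0.27     # receiving telescope aperture

collected_fraction = (mirror_diameter_m / beam_diameter_m) ** 2  # area ratio
loss_db = -10 * math.log10(collected_fraction)

print(f"Fraction of the beam collected: {collected_fraction:.2e}")  # ~1.1e-07
print(f"Geometric loss: about {loss_db:.0f} dB")                    # ~69 dB
```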

    The experiment carried out at the beginning of 2016 was successful despite the greatly attenuated signal, because the scientists found out that the properties of the signals received on the ground came very close to the limit of quantum noise. The noise of laser light is the term physicists use to describe variations in the detected photons. Some of this irregularity is caused by the inadequacies of the transmitting and receiving equipment or turbulences in the atmosphere, and can therefore be avoided in principle. Other variations result from the laws of quantum physics – more precisely the uncertainty principle – according to which amplitude and phase of the light cannot be specified simultaneously to any arbitrary level of accuracy.

    Quantum cryptography can use established satellite technology

    Since the transmission with the aid of the Tesat system already renders the quantum properties of the light pulses measurable, this technique can be used as the basis on which to develop satellite-based quantum cryptography. “We were particularly impressed by this because the satellite had not been designed for quantum communication,” as Marquardt explains.

    Together with their colleagues from Tesat and other partners, the Erlangen physicists now want to develop a new satellite specifically customized for the needs of quantum cryptography. Since they can largely build on tried and tested technology, the development should take much less time than a completely new design. Their main tasks are to develop an onboard computer designed for quantum communication and to make the quantum mechanical random number generator that supplies the cryptographic key space-proof.

    Quantum cryptography, which started out as an exotic playground for physicists, has thus come quite close to practical application. The race for the first operational secure system is in full swing. Countries such as Japan, Canada, the USA and China in particular are funneling a lot of money into this research. “The conditions for our research have changed completely,” explains Marquardt. “At the outset, we attempted to whet industry’s appetite for such a method; today they are coming to us without prompting and asking for practicable solutions.” These could become reality in the next five to ten years.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Max Planck Society is Germany’s most successful research organization. Since its establishment in 1948, no fewer than 18 Nobel laureates have emerged from the ranks of its scientists, putting it on a par with the best and most prestigious research institutions worldwide. The more than 15,000 publications each year in internationally renowned scientific journals are proof of the outstanding research work conducted at Max Planck Institutes – and many of those articles are among the most-cited publications in the relevant field.

    What is the basis of this success? The scientific attractiveness of the Max Planck Society is based on its understanding of research: Max Planck Institutes are built up solely around the world’s leading researchers. They themselves define their research subjects and are given the best working conditions, as well as free rein in selecting their staff. This is the core of the Harnack principle, which dates back to Adolph von Harnack, the first president of the Kaiser Wilhelm Society, which was established in 1911. This principle has been successfully applied for nearly one hundred years. The Max Planck Society continues the tradition of its predecessor institution with this structural principle of the person-centered research organization.

    The currently 83 Max Planck Institutes and facilities conduct basic research in the service of the general public in the natural sciences, life sciences, social sciences, and the humanities. Max Planck Institutes focus on research fields that are particularly innovative, or that are especially demanding in terms of funding or time requirements. And their research spectrum is continually evolving: new institutes are established to find answers to seminal, forward-looking scientific questions, while others are closed when, for example, their research field has been widely established at universities. This continuous renewal preserves the scope the Max Planck Society needs to react quickly to pioneering scientific developments.

     
  • richardmitnick 12:33 pm on July 15, 2017 Permalink | Reply
    Tags: Applied Research & Technology, , , Inspiration comes from advances in semiconductor manufacturing, , , Provide an alternate path for sight and sound to be delivered directly to the brain, Rice team developing flat microscope for the brain, Rice University, Will focus first on vision

    From Rice: “Rice team developing flat microscope for the brain” 

    Rice U bloc

    Rice University

    July 12, 2017
    Mike Williams

    1
    Rice University engineers have built a lab prototype of a flat microscope they are developing as part of DARPA’s Neural Engineering System Design project. The microscope will sit on the surface of the brain, where it will detect optical signals from neurons in the cortex. The goal is to provide an alternate path for sight and sound to be delivered directly to the brain. (Credit: Rice University)

    Rice University engineers are building a flat microscope, called FlatScope [TM], and developing software that can decode and trigger neurons on the surface of the brain.

    Their goal as part of a new government initiative is to provide an alternate path for sight and sound to be delivered directly to the brain.

    The project is part of a $65 million effort announced this week by the federal Defense Advanced Research Projects Agency (DARPA) to develop a high-resolution neural interface. Among many long-term goals, the Neural Engineering System Design (NESD) program hopes to compensate for a person’s loss of vision or hearing by delivering digital information directly to parts of the brain that can process it.

    Members of Rice’s Electrical and Computer Engineering Department will focus first on vision. They will receive $4 million over four years to develop an optical hardware and software interface. The optical interface will detect signals from modified neurons that generate light when they are active. The project is a collaboration with the Yale University-affiliated John B. Pierce Laboratory led by neuroscientist Vincent Pieribone.

    Current probes that monitor and deliver signals to neurons — for instance, to treat Parkinson’s disease or epilepsy — are extremely limited, according to the Rice team. “State-of-the-art systems have only 16 electrodes, and that creates a real practical limit on how well we can capture and represent information from the brain,” Rice engineer Jacob Robinson said.

    Robinson and Rice colleagues Richard Baraniuk, Ashok Veeraraghavan and Caleb Kemere are charged with developing a thin interface that can monitor and stimulate hundreds of thousands and perhaps millions of neurons in the cortex, the outermost layer of the brain.

    “The inspiration comes from advances in semiconductor manufacturing,” Robinson said. “We’re able to create extremely dense processors with billions of elements on a chip for the phone in your pocket. So why not apply these advances to neural interfaces?”

    Kemere said some teams participating in the multi-institution project are investigating devices with thousands of electrodes to address individual neurons. “We’re taking an all-optical approach where the microscope might be able to visualize a million neurons,” he said.

    That requires neurons to be visible. Pieribone’s Pierce Lab is gathering expertise in bioluminescence — think fireflies and glowing jellyfish — with the goal of programming neurons with proteins that release a photon when triggered. “The idea of manipulating cells to create light when there’s an electrical impulse is not extremely far-fetched in the sense that we are already using fluorescence to measure electrical activity,” Robinson said.

    The scope under development is a cousin to Rice’s FlatCam, developed by Baraniuk and Veeraraghavan to eliminate the need for bulky lenses in cameras. The new project would make FlatCam even flatter, small enough to sit between the skull and cortex without putting additional pressure on the brain, and with enough capacity to sense and deliver signals from perhaps millions of neurons to a computer.

    Alongside the hardware, Rice is modifying FlatCam algorithms to handle data from the brain interface.
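
    For readers curious how a lensless, flat imager can form pictures at all: mask-based cameras of this general type record a scrambled projection of the scene and recover the image computationally, typically by solving a regularized linear inverse problem. The toy sketch below illustrates that general idea only; it is not Rice's FlatCam or FlatScope code, and the transfer matrix, scene and regularization weight are made-up stand-ins.

    # Toy lensless-imaging reconstruction: measurements y = A x + noise, with the
    # scene x recovered by Tikhonov-regularized least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    n_scene, n_meas = 64, 96                               # flattened scene pixels, sensor readings

    A = rng.standard_normal((n_meas, n_scene))             # stand-in for a calibrated transfer matrix
    x_true = rng.random(n_scene)                           # hypothetical scene
    y = A @ x_true + 0.01 * rng.standard_normal(n_meas)    # noisy measurements

    lam = 1e-2                                             # assumed regularization weight
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

    rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"relative reconstruction error: {rel_err:.3f}")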

    “The microscope we’re building captures three-dimensional images, so we’ll be able to see not only the surface but also to a certain depth below,” Veeraraghavan said. “At the moment we don’t know the limit, but we hope we can see 500 microns deep in tissue.”

    “That should get us to the dense layers of cortex where we think most of the computations are actually happening, where the neurons connect to each other,” Kemere said.

    A team at Columbia University is tackling another major challenge: the ability to wirelessly power and gather data from the interface.

    In its announcement, DARPA described its goals for the implantable package. “Part of the fundamental research challenge will be developing a deep understanding of how the brain processes hearing, speech and vision simultaneously with individual neuron-level precision and at a scale sufficient to represent detailed imagery and sound,” according to the agency. “The selected teams will apply insights into those biological processes to the development of strategies for interpreting neuronal activity quickly and with minimal power and computational resources.”

    “It’s amazing,” Kemere said. “Our team is working on three crazy challenges, and each one of them is pushing the boundaries. It’s really exciting. This particular DARPA project is fun because they didn’t just pick one science-fiction challenge: They decided to let it be DARPA-hard in multiple dimensions.”

    Baraniuk is the Victor E. Cameron Professor of Electrical and Computer Engineering. Robinson, Veeraraghavan and Kemere are assistant professors of electrical and computer engineering.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Rice U campus

    In his 1912 inaugural address, Rice University president Edgar Odell Lovett set forth an ambitious vision for a great research university in Houston, Texas; one dedicated to excellence across the range of human endeavor. With this bold beginning in mind, and with Rice’s centennial approaching, it is time to ask again what we aspire to in a dynamic and shrinking world in which education and the production of knowledge will play an even greater role. What shall our vision be for Rice as we prepare for its second century, and how ought we to advance over the next decade?

    This was the fundamental question posed in the Call to Conversation, a document released to the Rice community in summer 2005. The Call to Conversation asked us to reexamine many aspects of our enterprise, from our fundamental mission and aspirations to the manner in which we define and achieve excellence. It identified the pressures of a constantly changing and increasingly competitive landscape; it asked us to assess honestly Rice’s comparative strengths and weaknesses; and it called on us to define strategic priorities for the future, an effort that will be a focus of the next phase of this process.

     
  • richardmitnick 4:14 pm on July 14, 2017 Permalink | Reply
    Tags: Applied Research & Technology, , , TSRI   

    From TSRI: “San Diego Team Tests Best Delivery Mode for Potential HIV Vaccine”

    Scripps
    Scripps Research Institute

    July 17, 2017
    Gina Kirchweger

    For decades, HIV has successfully evaded all efforts to create an effective vaccine, but researchers at The Scripps Research Institute (TSRI) and the La Jolla Institute for Allergy and Immunology (LJI) are steadily inching closer. Their latest study, published in a recent issue of Immunity, demonstrates that optimizing the mode and timing of vaccine delivery is crucial to inducing a protective immune response in a preclinical model.

    More than any other factor, administering the vaccine candidate subcutaneously and increasing the time intervals between immunizations improved the efficacy of the experimental vaccine and reliably induced neutralizing antibodies. Neutralizing antibodies are a key component of an effective immune response: they latch onto and inactivate invading viruses before the viruses can gain a foothold in the body, and they have been notoriously difficult to generate against HIV.

    “This study is an important staging point on the long journey toward an HIV vaccine,” says TSRI Professor Dennis R. Burton, Ph.D., who is also scientific director of the International AIDS Vaccine Initiative (IAVI) Neutralizing Antibody Center and of the National Institutes of Health’s Center for HIV/AIDS Vaccine Immunology and Immunogen Discovery (CHAVI-ID) at TSRI. “The vaccine candidates we worked with here are probably the most promising prototypes out there, and one will go into people in 2018,” says Burton.

    “There had been a lot of big question marks and this study was designed to get as many answers as possible before we go into human clinical trials,” adds senior co-author Shane Crotty, Ph.D., a professor in LJI’s Division of Vaccine Discovery. “We are confident that our results will be predictive going forward.”

    HIV has faded from the headlines, mainly because the development of antiretroviral drugs has turned AIDS into a chronic, manageable disease. Yet, only about half of the roughly 36.7 million people currently infected with HIV worldwide are able to get the medicines they need to control the virus. At the same time, the rate of new infections has remained stubbornly high, emphasizing the need for a preventive vaccine.

    The latest findings are the culmination of years of collaborative and painstaking research by a dozen research teams centered on the development, improvement, and study of artificial protein trimers that faithfully mimic a protein spike found on the viral surface. At the core of this effort is the CHAVI-ID immunogen working group, comprising TSRI’s own William R. Schief, Ph.D., Andrew B. Ward, Ph.D., Ian A. Wilson, D.Phil., and Richard T. Wyatt, Ph.D., in addition to Crotty and Burton. This group of laboratories, in collaboration with Darrell J. Irvine, Ph.D., professor at MIT, and Rogier W. Sanders, Ph.D., professor at the University of Amsterdam, provided the cutting-edge immunogens tested in the study.

    The recombinant trimers, or SOSIPs as they are called, were unreliable in earlier, smaller studies conducted in non-human primates. Non-human primates, and especially rhesus macaques, are considered the most appropriate pre-clinical model for HIV vaccine studies, because their immune system most closely resembles that of humans.

    “The animals’ immune responses, although the right kind, weren’t very robust and a few didn’t respond at all,” explains Colin Havenar-Daughton, Ph.D., a scientific associate in the Crotty lab. “That caused significant concern that the immunogen wouldn’t consistently trigger an effective immune response in all individuals in a human clinical trial.”

    In an effort to reliably induce a neutralizing antibody response, the collaborators tested multiple variations of the trimers and immunization protocols side-by-side to determine the best strategy going forward. Crotty and Burton and their colleagues teamed up with Professor Dan Barouch, M.D., Ph.D., Director of the Center for Virology and Vaccine Research at Beth Israel Deaconess Medical Center, who coordinated the immunizations.

    The design of the study was largely guided by what the collaborators had learned in a previous study via fine needle sampling of the lymph nodes, where the scientists observed follicular helper T cells helping to direct the maturation of antibody-producing B cells. Administering the vaccine subcutaneously rather than by the more conventional intramuscular route, and spacing the injections 8 weeks apart instead of the more common 4-6 weeks, reliably induced a strong functional immune response in all animals.

    Using an osmotic pump to slowly release the vaccine over a period of two weeks resulted in the highest neutralizing antibody titers ever measured following SOSIP immunizations in non-human primates. While osmotic pumps are not a practical way to deliver vaccines, they illustrate an important point. “Depending on how we gave the vaccine, there was a bigger difference due to immunization route than we would have predicted,” says Matthias Pauthner, a graduate student in Burton’s lab and the study’s co-lead author. “We can help translate what we know now into the clinic.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Scripps Research Institute (TSRI), one of the world’s largest, private, non-profit research organizations, stands at the forefront of basic biomedical science, a vital segment of medical research that seeks to comprehend the most fundamental processes of life. Over the last decades, the institute has established a lengthy track record of major contributions to the betterment of health and the human condition.

    The institute — which is located on campuses in La Jolla, California, and Jupiter, Florida — has become internationally recognized for its research into immunology, molecular and cellular biology, chemistry, neurosciences, autoimmune diseases, cardiovascular diseases, virology, and synthetic vaccine development. Particularly significant is the institute’s study of the basic structure and design of biological molecules; in this arena TSRI is among a handful of the world’s leading centers.

    The institute’s educational programs are also first rate. TSRI’s Graduate Program is consistently ranked among the best in the nation in its fields of biology and chemistry.

     
  • richardmitnick 2:32 pm on July 14, 2017 Permalink | Reply
    Tags: Applied Research & Technology, , , , ,   

    From temblor: “M=4.2 earthquake in Oklahoma widely felt throughout Midwest” 

    1

    temblor

    July 14, 2017
    David Jacobson

    1
    Shaking from today’s M=4.2 earthquake was widely felt in Oklahoma’s capital of Oklahoma City.

    At 8:47 a.m. local time this morning, a M=4.2 earthquake struck central Oklahoma between the cities of Oklahoma City, Tulsa, and Stillwater. This was followed by five aftershocks, the largest of which was a M=3.8. As of 10 a.m. local time, there had been over 1,500 felt reports from the mainshock on the USGS website, from all over the state of Oklahoma and even from Wichita, Kansas, over 200 km away. So far there are no reports of damage, and significant damage is unlikely given the quake’s moderate magnitude. Additionally, the USGS PAGER system estimates that economic losses should remain extremely minimal, and any fatalities are very unlikely.

    2
    This Temblor map shows the location of today’s M=4.2 earthquake in Oklahoma. This quake was widely felt throughout the state, and was also felt in 4 other surrounding states based on USGS felt reports.

    According to the USGS, today’s earthquake occurred at a depth of 9.3 km and was right-lateral strike-slip in nature. This depth is relatively deep for Oklahoma, but still within the range frequently seen. Based on the fault map in the Temblor map above and the quake’s strike-slip mechanism, the earthquake appears to have occurred on an unmapped fault in the region. However, the orientation of the structure on which the quake struck is consistent with the regional compression direction outlined in Walsh and Zoback, 2016. Also labeled in the Temblor map is the large northeast-southwest-trending Wilzetta Fault. Based on this fault’s strike (northeast-southwest) and the regional compression, it is not at the preferred orientation to undergo a large rupture. This is fortunate, because given its length it is capable of producing a large-magnitude earthquake. Instead, faults oriented similarly to the one on which today’s quake occurred have a higher likelihood of rupturing.

    Although most people think of Oklahoma earthquakes as induced, according to Walsh and Zoback, 2016, there are no high-output disposal wells in the area around today’s earthquake. It is possible that more wells have been put in over the last two years, but this is unlikely, since wastewater disposal has been limited around the state following the 2016 M=5.8 Pawnee earthquake. Therefore, today’s quake may have been more natural in origin than many that occur in the state.

    Because Temblor does not factor induced seismicity into the Hazard Rank, we must turn to the Petersen et al., 2017 study, in which both natural and induced seismicity are factored into the likelihood of damage. The map below shows the chance of damage from an earthquake in 2017 for the entire country. What may be eye-opening is that Oklahoma City has a higher likelihood of experiencing earthquake damage this year than both San Francisco and Los Angeles.

    3
    The 2017 seismic hazard forecast map reveals that Oklahoma City actually has a higher threat of experiencing a damaging earthquake than San Francisco and Los Angeles. (Figure from Petersen et al., 2017)

    References
    USGS

    F. Rall Walsh, III, and Mark D. Zoback, Probabilistic assessment of potential fault slip related to injection-induced earthquakes: Application to north-central Oklahoma, USA, 2016, Geology, doi:10.1130/G38275.1

    Mark D. Petersen, Charles S. Mueller, Morgan P. Moschetti, Susan M. Hoover, Allison M. Shumway, Daniel E. McNamara, Robert A. Williams, Andrea L. Llenos, William L. Ellsworth, Andrew J. Michael, Justin L. Rubinstein, Arthur F. McGarr, and Kenneth S. Rukstales, 2017 One-Year Seismic-Hazard Forecast for the Central and Eastern United States from Induced and Natural Earthquakes, Seismological Research Letters, March 2017; 88 (2A), DOI: 10.1785/0220170005

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    You can help many citizen scientists in detecting earthquakes and getting the data to emergency services in affected areas.
    QCN bloc

    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    BOINCLarge

    BOINC WallPaper

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
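
    As a rough illustration of how a volunteer machine might decide that “strong new motion” warrants a trigger, here is a generic short-term-average over long-term-average (STA/LTA) detector of the kind commonly used in seismology. This is a sketch only, not QCN's actual client code; the window lengths, threshold and synthetic signal are made-up values.

    # Generic trailing STA/LTA trigger sketch (illustrative parameters, not QCN's).
    import numpy as np

    def trailing_mean(x, n):
        """Mean of the most recent n samples (fewer near the start of the record)."""
        c = np.cumsum(np.insert(x, 0, 0.0))
        idx = np.arange(1, x.size + 1)
        lengths = np.minimum(idx, n)
        return (c[idx] - c[idx - lengths]) / lengths

    def sta_lta_triggers(accel, fs, sta_s=1.0, lta_s=30.0, threshold=4.0):
        """Sample indices where the STA/LTA ratio first rises above the threshold."""
        a = np.abs(accel - np.mean(accel))                  # crude DC/gravity removal
        ratio = trailing_mean(a, int(sta_s * fs)) / (trailing_mean(a, int(lta_s * fs)) + 1e-12)
        return np.flatnonzero((ratio[1:] >= threshold) & (ratio[:-1] < threshold)) + 1

    # Synthetic test: quiet sensor noise, then a burst of "shaking" after t = 60 s.
    fs = 50                                                 # samples per second
    t = np.arange(0, 120, 1 / fs)
    accel = 0.002 * np.random.randn(t.size)
    accel[t > 60] += 0.05 * np.sin(2 * np.pi * 3 * t[t > 60])

    print("first trigger at t =", sta_lta_triggers(accel, fs)[0] / fs, "seconds")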

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing and developed at UC Berkeley, is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake Catcher Network map
    QCN Quake Catcher Network map

    Earthquake country is beautiful and enticing

    Almost everything we love about areas like the San Francisco bay area, the California Southland, Salt Lake City against the Wasatch range, Seattle on Puget Sound, and Portland, is brought to us by the faults. The faults have sculpted the ridges and valleys, and down-dropped the bays, and lifted the mountains which draw us to these western U.S. cities. So, we enjoy the fruits of the faults every day. That means we must learn to live with their occasional spoils: large but infrequent earthquakes. Becoming quake resilient is a small price to pay for living in such a great part of the world, and it is achievable at modest cost.

    A personal solution to a global problem

    Half of the world’s population lives near active faults, but most of us are unaware of this. You can learn if you are at risk and protect your home, land, and family.

    Temblor enables everyone in the continental United States, and many parts of the world, to learn their seismic, landslide, tsunami, and flood hazard. We help you determine the best way to reduce the risk to your home with proactive solutions.

    Earthquake maps, soil liquefaction, landslide zones, cost of earthquake damage

    In our iPhone and Android and web app, Temblor estimates the likelihood of seismic shaking and home damage. We show how the damage and its costs can be decreased by buying or renting a seismically safe home or retrofitting an older home.

    Please share Temblor with your friends and family to help them, and everyone, live well in earthquake country.

    Temblor is free and ad-free, and is a 2017 recipient of a highly competitive Small Business Innovation Research (‘SBIR’) grant from the U.S. National Science Foundation.

    ShakeAlert: Earthquake Early Warning

    The U. S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications by 2018.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds, depending on the distance to the epicenter of the earthquake. For very large events like those expected on the San Andreas fault zone or the Cascadia subduction zone the warning time could be much longer because the affected area is much larger. ShakeAlert can give enough time to slow and stop trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.
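
    As a back-of-envelope illustration of where those warning times come from: the warning window is roughly the S-wave travel time minus the P-wave travel time, less whatever delay the system needs to detect, locate and alert. The wave speeds and the 5-second latency below are typical assumed values, not ShakeAlert specifications.

    # Rough warning-time estimate from assumed, typical crustal wave speeds.
    vp_km_s = 6.0      # assumed P-wave speed
    vs_km_s = 3.5      # assumed S-wave speed
    latency_s = 5.0    # assumed detection, location and alerting delay

    for distance_km in (20, 50, 100, 200):
        warning_s = distance_km / vs_km_s - distance_km / vp_km_s - latency_s
        print(f"{distance_km:>4} km from the epicenter: ~{max(warning_s, 0):.0f} s of warning")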

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications by 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” test users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California. This “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic fail-over if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities
    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

     
  • richardmitnick 2:08 pm on July 14, 2017 Permalink | Reply
    Tags: Applied Research & Technology, , , , ,   

    From temblor: “M=6.4 earthquake strikes off the coast of Papua New Guinea” 

    1

    temblor

    July 13, 2017
    David Jacobson

    1
    Today’s M=6.4 earthquake in Papua New Guinea struck near the island of New Ireland, in the eastern part of the country. (Photo from: Simon’s Jam Jar)

    At 3:36 a.m. local time, a M=6.4 earthquake struck Papua New Guinea just off the island of New Ireland. The eastern part of the country is sparsely populated, meaning people were exposed only to light or weaker shaking. Because of this, damage and fatalities are unlikely, and so far none have been reported. Another reason strong shaking was not felt is that the earthquake occurred offshore at a depth of 47 km, according to the USGS (the European-Mediterranean Seismological Centre assigned it a depth of 40 km). Based on the USGS focal mechanism, this earthquake was thrust in nature. While compressional earthquakes are common in this region given the proximity to the New Britain Trench, the strike of today’s earthquake makes it hard to reconcile with subduction at that trench.

    2
    This Temblor map shows the location of today’s earthquake in Papua New Guinea. While this earthquake was compressional in nature, based on the quake’s strike, it was likely not associated with subduction at the New Britain Trench.

    In the region around today’s earthquake, much of the seismicity is dominated by the subduction of the Australian Plate. North of the New Britain Trench, the Pacific Plate has been broken up into numerous microplates, all of which are being pushed in various directions. In the USGS map below, relative plate motions are shown, illustrating the complex dynamics of the region. Because of these plate motions, strike-slip and extensional earthquakes are also common. Nonetheless, large subduction zone earthquakes, including a M=7.9 in December 2016, are the events that cause the most damage and fatalities.

    3
    This map from the USGS shows historical seismicity and relative plate motions in the region around today’s M=6.4 earthquake (yellow star). What this map illustrates is that rapid deformation and high rates of seismicity are due to relative plate motions exceeding 100 mm/yr. In this map, one can see that the majority of quakes are associated with subduction at the New Britain Trench. (Map from USGS)

    Based on the Global Earthquake Activity Rate (GEAR) model, which is available in Temblor, today’s earthquake should not be considered surprising. This model uses global strain rates and seismicity since 1977 to forecast the likely earthquake magnitude in your lifetime anywhere on earth. From this model, shown in the figure below, one can see that a M=7.75+ earthquake is likely in your lifetime in this area. Such a large magnitude is likely because the area is undergoing rapid deformation, with plate motions of upwards of 100 mm/yr. Should there be any large aftershocks (so far there is only one, a M=4.8, in our catalog), we will update this post.

    4
    This Temblor map shows the Global Earthquake Activity Rate (GEAR) model for the region around Papua New Guinea. This model uses global strain rates and seismicity since 1977 to forecast the likely earthquake magnitude in your lifetime anywhere on earth. From this model, one can see that today’s M=6.4 earthquake should not be considered surprising as a M=7.75+ quake is possible.

    References [No links provided.]
    USGS
    European-Mediterranean Seismological Centre

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition


     
  • richardmitnick 1:34 pm on July 14, 2017 Permalink | Reply
    Tags: A micro-scale wireless network in the brain enabling us to communicate directly with neurons on a scale that hasn’t previously been possible, Applied Research & Technology, Brown has a tradition of innovative multidisciplinary research in brain science, Brown University, , , Neurograins, , They aim to create a “cortical intranet” of tens of thousands of wireless micro-devices — each about the size of a grain of table salt, To develop a wireless neural prosthetic system that can help people who have lost sensory function due to injury or illness, Will require state-of-the-art microscale semiconductor technology   

    From Brown: “Brown to receive up to $19M to engineer next-generation brain-computer interface” 

    Brown University
    Brown University

    [THIS POST IS DEDICATED TO E.B.M., ABOUT TO COMMENCE HIS YEARS AT THIS GREAT INSTITUTION]

    July 10, 2017
    Kevin Stacey
    kevin_stacey@brown.edu

    1
    Neurograins
    A research team led by Brown University professor Arto Nurmikko aims to develop a wireless neural prosthetic system that can help people who have lost sensory function due to injury or illness. Nurmikko Lab / Brown University.

    The project aims to develop a wireless neural prosthetic system made up of thousands of implantable microdevices that could deepen understanding of the brain and lead to new medical therapies.

    With a grant of up to $19 million from the Defense Advanced Research Projects Agency (DARPA), Brown University will lead a collaboration to develop a fully implantable wireless brain interface system able to record and stimulate neural activity with unprecedented detail and precision.

    The international team of engineers, neuroscientists and physicians involved in the project envisions an approach to neural interfaces that is unlike any available today. They aim to create a “cortical intranet” of tens of thousands of wireless micro-devices — each about the size of a grain of table salt — that can be safely implanted onto or into the cerebral cortex, the outer layer of the brain. The implants, dubbed “neurograins,” will operate independently, interfacing with the brain at the level of a single neuron. The activity of the devices will be coordinated wirelessly by a central communications hub in the form of a thin electronic patch worn on the skin or implanted beneath it.

    The system will be designed to have both “read-out” and “write-in” capabilities. It will be able to record neural activity, helping to deepen scientists’ understanding of how the brain processes stimuli from the outside world. It will also have the capability to stimulate neural activity through tiny electrical pulses, a function researchers hope to eventually use in human clinical research aimed at restoring brain function lost to injury or disease.

    “What we’re developing is essentially a micro-scale wireless network in the brain enabling us to communicate directly with neurons on a scale that hasn’t previously been possible,” said Arto Nurmikko, L. Herbert Ballou University Professor of Engineering at Brown and the project’s principal investigator. “The understanding of the brain we can get from such a system will hopefully lead to new therapeutic strategies involving neural stimulation of the brain, which we can implement with this new neurotechnology.”

    The research team will include researchers from Brown, IMEC (a Belgian microtechnology institute), Massachusetts General Hospital, Stanford University, the University of California, Berkeley, the University of California, San Diego, the mobile telecommunications firm Qualcomm, and the Wyss Center for Bio and Neuroengineering in Geneva. The funding, to be distributed over four years, comes from DARPA’s new Neural Engineering System Design (NESD) program, which is aimed at developing new devices “able to provide advanced signal resolution and data-transfer bandwidth between the brain and electronics.”

    At Brown, the work will build on decades of research in neuroengineering and brain-computer interfaces, computational neuroscience and clinical therapeutics through the Brown Institute for Brain Science, the University’s Warren Alpert Medical School and its School of Engineering.

    “Brown has a tradition of innovative multidisciplinary research in brain science, especially with projects that have the potential to transform lives through technology-assisted repair of neurological injuries,” said Jill Pipher, vice president for research at Brown. “This new grant enables a group of outstanding Brown researchers to develop leading-edge technology and solve new computational problems in a quest to understand human brain functionality at a totally new scale.”

    Four Brown faculty members will serve as co-investigators on the project. Leigh Hochberg is a professor of engineering and one of the leaders of the BrainGate consortium, which develops and tests brain-computer interfaces through ongoing human clinical trials. David Borton is an assistant professor of engineering who previously worked with Nurmikko to develop the first fully implantable brain sensor that could transmit information wirelessly. Larry Larson, Sorensen Family Dean of Engineering, is a leader in semiconductor microwave technology and wireless communication. Wilson Truccolo, an assistant professor of neuroscience, has developed unique theoretical and computational approaches to decoding and encoding neural signals from cortical microcircuits. Each will lend their expertise to the project alongside the team’s experts from leading institutions in the U.S. and abroad, with additional collaboration with companies such as software developer Intel Nervana.

    “This is an ambitious project that will require a convergence of expertise from across disciplines,” Larson said. “We work very hard to make the School of Engineering the kind of place where these kinds of projects thrive, and we’re very much looking forward to the work ahead of us.”

    2
    The new system will use microdevices to both “read out” and “write in” neural information. No image credit.

    New challenges, new technologies

    The project will involve many daunting technical challenges, Nurmikko said, which include completing development of the tiny neurograin sensors and coordinating their activity.

    “We need to make the neurograins small enough to be minimally invasive but with extraordinary technical sophistication, which will require state-of-the-art microscale semiconductor technology,” Nurmikko said. “Additionally, we have the challenge of developing the wireless external hub that can process the signals generated by large populations of spatially distributed neurograins at the same time. This is probably the hardest endeavor in my career.”

    Then there’s the challenge of dealing with all of the data the system produces. Current state-of-the-art brain-computer interfaces sample the activity of 100 or so neurons. For this project, the team wants to start at 1,000 neurons and build from there up to 100,000.

    “When you increase the number of neurons tenfold, you increase the amount of data you need to manage by much more than that because the brain operates through nested and interconnected circuits,” Nurmikko said. “So this becomes an enormous big data problem for which we’ll need to develop new computational neuroscience tools.”
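
    A back-of-envelope sketch of that scaling, using assumed numbers (the sampling rate, bit depth and the simple pairwise-interaction count below are illustrative, not project specifications):

    # Illustrative scaling of raw data rate and pairwise interactions with channel count.
    sample_rate_hz = 20_000      # assumed per-channel sampling rate
    bits_per_sample = 12         # assumed ADC resolution

    for n_neurons in (100, 1_000, 10_000, 100_000):
        raw_mbit_s = n_neurons * sample_rate_hz * bits_per_sample / 1e6
        n_pairs = n_neurons * (n_neurons - 1) // 2          # possible neuron pairs to analyze
        print(f"{n_neurons:>7} channels: ~{raw_mbit_s:>9,.0f} Mbit/s raw, {n_pairs:,} neuron pairs")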

    The team will first apply new technologies to the sensory and auditory function in mammals. The level of detail expected from the neurograin system, the researchers say, should yield an entirely new level of understanding of sensory processes in the brain.

    “We aim to be able to read out from the brain how it processes, for example, the difference between touching a smooth, soft surface and a rough, hard one and then apply microscale electrical stimulation directly to the brain to create proxies of such sensation,” Nurmikko said. “Similarly, we aim to advance our understanding of how the brain processes and makes sense of the many complex sounds we listen to every day, which guide our vocal communication in a conversation and stimulate the brain to directly experience such sounds.”

    The Brown-led team is one of six research teams to be awarded grants from DARPA under the NESD program, which was launched last year. Other awarded projects will be led by researchers at Columbia University, the Foundation for Vision and Understanding, John B. Pierce Laboratory, Paradromics and the University of California, Berkeley.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    Welcome to Brown

    Brown U Robinson Hall
    Located in historic Providence, Rhode Island and founded in 1764, Brown University is the seventh-oldest college in the United States. Brown is an independent, coeducational Ivy League institution comprising undergraduate and graduate programs, plus the Alpert Medical School, School of Public Health, School of Engineering, and the School of Professional Studies.

    With its talented and motivated student body and accomplished faculty, Brown is a leading research university that maintains a particular commitment to exceptional undergraduate instruction.

    Brown’s vibrant, diverse community consists of 6,000 undergraduates, 2,000 graduate students, 400 medical school students, more than 5,000 summer, visiting and online students, and nearly 700 faculty members. Brown students come from all 50 states and more than 100 countries.

    Undergraduates pursue bachelor’s degrees in more than 70 concentrations, ranging from Egyptology to cognitive neuroscience. Anything’s possible at Brown—the university’s commitment to undergraduate freedom means students must take responsibility as architects of their courses of study.

     