Tagged: Neuroscience

  • richardmitnick 9:31 am on May 4, 2019
    Tags: "Putting vision models to the test", Neuroscience, Study shows that artificial neural networks can be used to drive brain activity.

    From MIT News: “Putting vision models to the test” 


    May 2, 2019
    Anne Trafton

    A computer model of vision created by MIT neuroscientists designed these images that can stimulate very high activity in individual neurons. Image: Pouya Bashivan

    Study shows that artificial neural networks can be used to drive brain activity.

    MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain’s visual cortex.

    Using their current best model of the brain’s visual neural network, the researchers designed a new way to precisely control individual neurons and populations of neurons in the middle of that network. In an animal study, the team then showed that the information gained from the computational model enabled them to create images that strongly activated specific brain neurons of their choosing.

    The findings suggest that the current versions of these models are similar enough to the brain that they could be used to control brain states in animals. The study also helps to establish the usefulness of these vision models, which have generated vigorous debate over whether they accurately mimic how the visual cortex works, says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.

    “People have questioned whether these models provide understanding of the visual system,” he says. “Rather than debate that in an academic sense, we showed that these models are already powerful enough to enable an important new application. Whether you understand how the model works or not, it’s already useful in that sense.”

    MIT postdocs Pouya Bashivan and Kohitij Kar are the lead authors of the paper, which appears in the May 2 online edition of Science.

    Neural control

    Over the past several years, DiCarlo and others have developed models of the visual system based on artificial neural networks. Each network starts out with an arbitrary architecture consisting of model neurons, or nodes, that can be connected to each other with different strengths, also called weights.

    The researchers then train the models on a library of more than 1 million images. As the researchers show the model each image, along with a label for the most prominent object in the image, such as an airplane or a chair, the model learns to recognize objects by changing the strengths of its connections.
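
    The learning step described above can be pictured with a toy stand-in. This is not the authors' model (the real networks are deep convolutional nets trained on more than a million labeled images); it is a minimal sketch of the same idea, a classifier that learns to recognize categories by adjusting its connection weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(images, labels, n_classes, lr=0.5, epochs=200):
    """Learn weights W by reducing cross-entropy on (image, label) pairs."""
    n, d = images.shape
    W = rng.normal(0, 0.01, size=(d, n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        p = softmax(images @ W)
        grad = images.T @ (p - onehot) / n   # gradient of the cross-entropy loss
        W -= lr * grad                       # "changing the strengths of its connections"
    return W

# Toy data: two easily separable "object classes" of 4-pixel images.
X = np.vstack([rng.normal(-1, 0.3, (50, 4)), rng.normal(1, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
W = train(X, y, n_classes=2)
accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
```

    The single-layer classifier and the toy data are illustrative choices; only the weight-update loop corresponds to the training procedure described in the article.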

    It’s difficult to determine exactly how the model achieves this kind of recognition, but DiCarlo and his colleagues have previously shown that the “neurons” within these models produce activity patterns very similar to those seen in the animal visual cortex in response to the same images.

    In the new study, the researchers wanted to test whether their models could perform some tasks that previously have not been demonstrated. In particular, they wanted to see if the models could be used to control neural activity in the visual cortex of animals.

    “So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” Bashivan says. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”

    To achieve this, the researchers first created a one-to-one map of neurons in the brain’s visual area V4 to nodes in the computational model. They did this by showing images to animals and to the models, and comparing their responses to the same images. There are millions of neurons in area V4, but for this study, the researchers created maps for subpopulations of five to 40 neurons at a time.
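
    One plausible way to build such a map (an assumption about the procedure, not the authors' code) is by response correlation: show the same image set to the recorded neurons and to the model, then assign each neuron to the model node whose responses across images correlate with it most strongly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_nodes, n_neurons = 200, 50, 8

# Synthetic stand-ins: model node responses, and "recorded" neurons that are
# noisy copies of a hidden subset of nodes (the ground-truth mapping).
node_resp = rng.normal(size=(n_images, n_nodes))
true_map = rng.choice(n_nodes, size=n_neurons, replace=False)
neuron_resp = node_resp[:, true_map] + 0.1 * rng.normal(size=(n_images, n_neurons))

def map_neurons(neurons, nodes):
    """Return, for each neuron, the index of its best-correlated model node."""
    zn = (neurons - neurons.mean(0)) / neurons.std(0)
    zm = (nodes - nodes.mean(0)) / nodes.std(0)
    corr = zn.T @ zm / len(zn)          # neurons-by-nodes correlation matrix
    return corr.argmax(axis=1)

recovered = map_neurons(neuron_resp, node_resp)
```

    With low recording noise, the correlation map recovers the hidden assignment exactly; real V4 data would be far noisier, which is one reason the study worked with small subpopulations at a time.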

    “Once each neuron has an assignment, the model allows you to make predictions about that neuron,” DiCarlo says.

    The researchers then set out to see if they could use those predictions to control the activity of individual neurons in the visual cortex. The first type of control, which they called “stretching,” involves showing an image that will drive the activity of a specific neuron far beyond the activity usually elicited by “natural” images similar to those used to train the neural networks.

    The researchers found that when they showed animals these “synthetic” images, which are created by the models and do not resemble natural objects, the target neurons did respond as expected. On average, the neurons showed about 40 percent more activity in response to these images than when they were shown natural images like those used to train the model. This kind of control has never been reported before.
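
    A minimal sketch of the "stretching" idea, assuming the synthetic images are produced by gradient ascent on a model unit's activation. Here the "network" is a single toy rectified-linear unit so the gradient is exact; the real work optimized images through a deep network.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=64)                     # the target unit's input weights

def activation(img):
    return max(0.0, float(w @ img))         # rectified linear response

img = rng.normal(scale=0.1, size=64)        # start from a faint random image
before = activation(img)
for _ in range(100):
    img += 0.01 * w                         # gradient of w @ img w.r.t. img is w
    img = np.clip(img, -1, 1)               # keep "pixels" in a valid range
after = activation(img)
```

    The optimized input drives the unit far beyond its response to the random starting image, which is the in-model analogue of a synthetic image out-driving natural ones.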

    “That they succeeded in doing this is really amazing. It’s as if, for that neuron at least, its ideal image suddenly leaped into focus. The neuron was suddenly presented with the stimulus it had always been searching for,” says Aaron Batista, an associate professor of bioengineering at the University of Pittsburgh, who was not involved in the study. “This is a remarkable idea, and to pull it off is quite a feat. It is perhaps the strongest validation so far of the use of artificial neural networks to understand real neural networks.”

    In a similar set of experiments, the researchers attempted to generate images that would drive one neuron maximally while also keeping the activity in nearby neurons very low, a more difficult task. For most of the neurons they tested, the researchers were able to enhance the activity of the target neuron with little increase in the surrounding neurons.
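
    The harder objective can be sketched the same way (details assumed): score a candidate image by the target unit's response minus a penalty on the other units, so optimization raises the target while holding its neighbors down.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(5, 64))                # 5 toy model units, 64-pixel "images"
target = 0

# Ascent direction for the linearized objective: raise the target unit,
# penalize the mean of the other units (gradient taken before the rectifier).
g = W[target] - np.delete(W, target, axis=0).mean(axis=0)

img = rng.normal(scale=0.1, size=64)
for _ in range(200):
    img += 0.01 * g

resp = np.maximum(W @ img, 0.0)             # rectified responses of all 5 units
```

    After optimization the target unit responds strongly while the others stay near zero, mirroring the "drive one neuron, suppress its neighbors" result.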

    “A common trend in neuroscience is that experimental data collection and computational modeling are executed somewhat independently, resulting in very little model validation, and thus no measurable progress. Our efforts bring back to life this ‘closed loop’ approach, engaging model predictions and neural measurements that are critical to the success of building and testing models that will most resemble the brain,” Kar says.

    Measuring accuracy

    The researchers also showed that they could use the model to predict how neurons of area V4 would respond to synthetic images. Most previous tests of these models have used the same type of naturalistic images that were used to train the model. The MIT team found that the models were about 54 percent accurate at predicting how the brain would respond to the synthetic images, compared to nearly 90 percent accuracy when natural images were used.

    “In a sense, we’re quantifying how accurate these models are at making predictions outside the domain where they were trained,” Bashivan says. “Ideally the model should be able to predict accurately no matter what the input is.”

    The researchers now hope to improve the models’ accuracy by allowing them to incorporate the new information they learn from seeing the synthetic images, which was not done in this study.

    This kind of control could be useful for neuroscientists who want to study how different neurons interact with each other, and how they might be connected, the researchers say. Further in the future, this approach could potentially be useful for treating mood disorders such as depression. The researchers are now working on extending their model to the inferotemporal cortex, which feeds into the amygdala, which is involved in processing emotions.

    “If we had a good model of the neurons that are engaged in experiencing emotions or causing various kinds of disorders, then we could use that model to drive the neurons in a way that would help to ameliorate those disorders,” Bashivan says.

    The research was funded by the Intelligence Advanced Research Projects Agency, the MIT-IBM Watson AI Lab, the National Eye Institute, and the Office of Naval Research.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 2:29 pm on November 4, 2017
    Tags: Brain waves, Neuroscience

    From INVERSE: For All of my Friends in Neuroscience: “Nobody Knows Where Brainwaves Come From” 



    August 7, 2017
    Rafi Letzter

    Wub-wub-wub-wub. Brainwaves are electromagnetic proof that we are alive. Decades of research have shown that these pulses of electrical potential reflect events at the root of our impulses and thoughts. As such, they underlie one of humanity’s weightiest moral decisions: deciding whether or not a person is officially dead. If a person goes 30 minutes without producing brainwaves, even a functioning heartbeat can’t convince doctors they’re alive.

    But as much as brainwaves loom in our understanding of the brain, not a single scientist has any idea where they come from.

    At least one researcher, Michael X. Cohen, Ph.D., an assistant professor at the Donders Institute for Brain, Cognition, and Behavior in the Netherlands, thinks it’s time to fix that. In an April op-ed in the journal Trends in Neurosciences, Cohen argued that the time has come for researchers to figure out what those brainwaves they’ve been recording for decades are really all about.

    “This is maybe the most important question for neuroscience right now,” he said to Inverse, but he added that it will be a challenge to convince his colleagues that it matters at all.

    Today, as Facebook races to read your brainwaves, roboticists use them to develop mind-control systems, and cybersecurity experts scramble to protect yours from hackers, it’s clear that Cohen’s sense of urgency is justified.

    Connecting brainwaves to neuron behavior is the next great challenge in neuroscience. No image credit

    What we do know about brainwaves is that when doctors stick silver chloride dots to a person’s scalp and hook the connected electrodes up to an electroencephalography (EEG) machine, the curves that appear on its screen represent the electrical activity inside our skulls. The German neuroscientist Hans Berger spotted the first type of brainwave — alpha waves — back in 1924.

    Researchers soon discovered more of these strange oscillations. There’s the slow, powerful delta wave, which shows up when we’re in deep sleep. There’s the low spikes of the theta wave, whose functions remain largely mysterious. Faster and even stranger is the gamma wave, which some researchers suspect plays a role in consciousness.
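
    These bands are conventionally defined by frequency range, so telling them apart in a recording is a matter of comparing spectral power. An illustrative sketch (band edges vary slightly across the literature; the ones below are common conventions):

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}

def dominant_band(signal, fs):
    """Return the canonical band holding the most spectral power in `signal`
    (a 1-D array sampled at fs Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
    return max(band_power, key=band_power.get)

# A pure 10 Hz oscillation should register as an alpha rhythm.
fs = 256
t = np.arange(0, 2, 1 / fs)
alpha_like = np.sin(2 * np.pi * 10 * t)
```

    Real EEG analysis adds windowing, artifact rejection, and per-channel processing; this only shows the band-power bookkeeping behind the names.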

    These waves are at the root of our understanding of the shape and structure of human thought, as well as the methods doctors use to figure out how brains break down. It’s thought that alpha waves, for example, are a sign the brain is inhibiting certain mental systems to free up bandwidth for other tasks, like sleeping or imagining. But where do these waves come from in the first place?

    So far, there’s been no satisfactory answer to this question, but Cohen is determined to find it.

    An alpha brainwave resembles a sine wave. No image credit.

    As one of the world’s leading researchers on the brain’s electrical activity, he hooks people up to EEG machines to figure out how their brains behave when they see a bird, think through a complex decision, or feel sad. But Cohen is the first to admit that what’s lacking in his research is context. Not understanding how those patterns relate to the actual meat of the brain — neurons firing or not firing, getting excited, or shutting down — leaves a huge mystery right at the center of brainwave neuroscience, he says.

    “Over time it started bothering me more and more,” Cohen told Inverse. “There’s so much complexity going on at smaller spatial scales, and we have literally no fucking clue how to get from this big spatial scale to this smaller spatial scale.”

    Part of the reason why it’s so hard to understand neuroscience research in the context of the brain, Cohen explains, is because neuroscientists themselves work in discrete, isolated sub-fields based on how big a chunk of the brain they study. Researchers studying the brain at the smallest level peel open individual neurons and watch the proteins inside them fold. Microcircuit neuroscientists map out the connections between neurons. Cohen zooms out a little further, connecting electrical patterns and human thought, rarely concerning himself with single cells or small groups of neurons.

    But as we begin to fully grasp how complex the brain really is, Cohen says, it’s increasingly imperative to find a way to bridge the research that happens at the macro and micro scales. Finally understanding brainwaves, he says, could be the key to doing so.

    No image caption or credit.

    That’s because brainwaves pulse at every single level of the brain, from the tiniest neuron to the entire 3-pound organ. “If you’re recording from just one neuron, you’ll see oscillations,” Cohen says, using the scientific term for wobbling brainwaves.

    “If you’re recording from a small ensemble of neurons, you’ll see them. And if you’re recording from tens of millions of neurons, you’ll see oscillations.”

    For Cohen, brainwaves are the common thread that can unify neuroscience. Most research, however, deals only with the electrical activity produced by tens of millions of neurons at a time, which is the highest resolution a typical EEG machine can capture without needlessly cutting into an innocent study subject’s head. The problem is that this big, rough EEG research in humans isn’t very compatible with the intricate, neuron-scale research done in lab rats. Consequently, we have plenty of information about the brain’s parts but no understanding of how they work together as a whole.

    “It’s the difference between ‘What do Americans like?’ and ‘What does any individual American like?’” Cohen said. “And that’s a huge difference — between what any individual does and what you can say as a generality about an entire culture.”

    While we know that all that electrical activity is the result of charged chemicals sloshing around in our brains in rhythmic, patterned waves, that doesn’t tell us anything about the most important question: Why they’re generated in the first place.

    “The problem with these answers is that they’re totally meaningless from a neuroscience perspective,” Cohen says. “These answers tell you about how it’s physically possible, how the universe is constructed such that we can make these measurements. But there’s a totally different question, which is, what do these measurements mean? What do they tell us about the kinds of computations that are taking place in the brain? And that’s a huge explanatory gap.”

    Despite some puzzlement from fellow scientists, Cohen plans to collect brainwave data from rodents. No image credit.

    There are a few ways to bridge that gap. Scientists like those at the Blue Brain Project in Switzerland are trying to do so by building a computerized brain simulation that’s detailed enough to include the whole organ, as well as individual neurons, which they hope can reveal a kind of cell activity that would cause different kinds of common EEG patterns to appear. The one huge challenge to this approach, however, is that there’s no computer that can simulate a brain’s computations in real time; just a millisecond of one neuron’s time in a simulation can take 10 seconds of real-world time for a computer to figure out. It’s certainly possible, but doing so would cost billions of dollars.

    Cohen’s plan, which relies on real-world experiments, is much simpler.

    Since you can’t cut open a human brain and start sticking electrodes in there to record activity (even in “human rights-challenged places,” Cohen says), he’s relying on rodents instead. But what makes his work different is that he’s hooking those rodents up to EEG machines, which researchers don’t usually do. “They say, why are you wasting your time recording EEG from rats? EEG is for when you don’t have access to the brain, so you record from outside,” he says.

    But rodents have brainwaves, too, and their data can provide much-needed insight into how to bridge the neuron-brainwave divide. His experiments will create two huge data sets that researchers can cross-reference to figure out how neuron function and EEG behavior relate to one another. With the help of some deep-learning algorithms, they’ll then pore over that data to build a map of how individual sparks of neural activity add up to recognizable brainwaves. If Cohen’s experiments are very successful, his team will be able to look at a rodent’s EEG and predict — with what he hopes is more than 98 percent accuracy — exactly how the neural circuits are behaving in its brain.
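
    The cross-referencing step can be pictured with a toy stand-in (the proposed pipeline uses deep learning on paired recordings, which is not reproduced here): fit a map from EEG-derived features to a measure of circuit activity on one part of the data, then check how well it predicts the held-out rest.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_features = 300, 6

# Synthetic paired data: EEG features (e.g. band powers per trial) and a
# circuit-activity measure that depends on them linearly, plus noise.
eeg_features = rng.normal(size=(n_trials, n_features))
true_w = rng.normal(size=n_features)
circuit_activity = eeg_features @ true_w + 0.05 * rng.normal(size=n_trials)

train_X, test_X = eeg_features[:200], eeg_features[200:]
train_y, test_y = circuit_activity[:200], circuit_activity[200:]

# Least-squares fit on the training trials, evaluated on held-out trials.
w_hat, *_ = np.linalg.lstsq(train_X, train_y, rcond=None)
pred = test_X @ w_hat
r = np.corrcoef(pred, test_y)[0, 1]
```

    A linear model on fabricated data is, of course, a best-case cartoon; the point is only the structure of the experiment, two paired data sets and a predictive map between them.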

    “I think we’re not that far away from breakthroughs. Some of these kinds of questions are not so difficult to answer, it’s just that no one has really looked,” he said. But he admits that he’s worried that the segmentation of neuroscience research will get in the way of this whole-brain approach.

    “So this is very terrifying for me and also very difficult, because I have very little experience in the techniques that I think are necessary,” he said.

    Having to admit on his grant applications that his work would employ unfamiliar techniques he has never used made it difficult to get funding, but Cohen ultimately received a grant from the European Union. Now, with the aid of a lab fully staffed with experts in rodent brains, Cohen is ready to get to work.

    Soon enough, we might finally get some answers to one of the oldest and strangest mysteries in neuroscience: where all those wub-wubs really come from and what they really mean.

    See the full article here.


  • richardmitnick 3:34 pm on July 31, 2017
    Tags: 3-D microscope gives Johns Hopkins scientists a clearer view, Neuroscience

    From Hopkins: “3-D microscope gives Johns Hopkins scientists a clearer view” 


    Jill Rosen

    Light-sheet technology allows researchers like Kavli Neuroscience Discovery Institute fellow Audrey Branch to observe how cells, ducts, or veins connect without damaging the cells in the sample. Image credit: Will Kirk / Homewood Photography.

    Audrey Branch is trying to learn more about aging by studying old and young brains. Specifically, she’s interested in how cells connect to form memories and what might be going wrong with those connections when older people start to forget things.

    Until recently, getting at that question meant months of tedious specimen preparation. And even then, the very prep that made getting a glimpse of the brain’s core possible—slicing what’s already tiny into thousands of pieces—very likely destroyed the delicate connections the Johns Hopkins neuroscientist needed to see.

    That changed this spring when a new, three-dimensional microscope arrived at the university’s Homewood campus, a cutting-edge tool that not only condenses what had been months of work into just hours, but allows researchers unprecedented views of organs, tissue, and even live specimens.

    Just practicing with it, Branch knew it was a game-changer. She cried when she saw the first pictures of a mouse brain, its individual neurons glowing red, and its spindly dendrites, too—showing quite clearly the links between those cells.

    “It feels so amazing to see the brain in a way that no one has ever seen it before,” she said. “It’s pretty much the greatest thing I’ve ever experienced in science.”

    The selective plane fluorescence light-sheet microscope arrived on campus in April, one of the first in operation on the East Coast and the only one in Maryland. Purchased with a grant from the National Institutes of Health, it cost $360,000.

    Unlike other microscopes, this one illuminates specimens from the side, shooting two perfectly aligned planes of light across an object, illuminating a wafer-thin slice of the whole while the camera captures the image—thousands of times over as the specimen moves through the light. When the images are displayed together, the result is a three-dimensional image or video clip of the full object, sort of like the more familiar CAT scan.
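
    The reconstruction step can be pictured very simply (a simplification: real light-sheet software also registers and deskews the slices). Each camera frame is one illuminated plane, and stacking the frames yields a 3-D volume that can then be projected or resliced at will:

```python
import numpy as np

def assemble_volume(slices):
    """Stack 2-D planes (all the same shape) into a z-y-x volume."""
    return np.stack(slices, axis=0)

# Fake acquisition: 100 planes of 64x64 pixels as the sample moves through
# the light sheet.
planes = [np.random.rand(64, 64) for _ in range(100)]
volume = assemble_volume(planes)

# A maximum-intensity projection, one common way to view such a volume.
mip = volume.max(axis=0)
```

    The stack-of-planes structure is also why the article's CAT-scan analogy fits: both build a volume from a sequence of thin cross-sections.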

    The technology is very new, but Michael McCaffery, director of the university’s Integrated Imaging Center, expects researchers everywhere will be using it within a few years. Just among the Johns Hopkins community, word of the light sheet is already out and scientists have been lining up to use it—even if that requires the minor inconvenience of bringing specimens over from the medical campus.

    “People really want to use this,” McCaffery said. “It fills a niche that until now was unavailable at Hopkins. Simply, there was no instrument that allowed a researcher to take a whole organ, brain, or cardiac muscle, and image them in three-dimensions, in their entirety.”

    The light sheet is the latest advance in modern microscopy—a world that’s been evolving since fluorescence microscopy became the standard in the 1960s. Now, most researchers use confocal microscopes, which use lasers to illuminate a sample point by point—only extremely tiny samples will work—then create computerized images, pixel by pixel.

    Confocals produce vivid, high-resolution images, but the sample size limitations—nothing thicker than about 70 microns, which is about as wide as a strand of human hair—severely constrained scientists.

    The new light sheet allows samples of up to 12 to 15 millimeters, or about half an inch. Researchers can study much larger samples, even entire organs. And because the samples don’t have to be cut up, researchers like Branch who are interested in how cells, ducts, or veins connect have a chance to observe them, unspoiled.

    “It’s a very big deal for researchers, particularly those interested in the science of connectomics,” McCaffery said. “Mapping the neuronal connections of the brain is the holy grail of neurology.”

    It’s certainly Branch’s holy grail.

    Branch is a Kavli Neuroscience Discovery Institute fellow working in the Krieger School of Arts and Sciences. She wants to know how newborn neurons, which are key to making memories, connect to other cells in the brain—and how those connections might change as people age.

    Scientists know the number of newborn neurons declines with age, and that likely has something to do with why short-term memory also declines. What Branch wants to do is audit these newborn cells in a young brain, determining how many there are, where they are, and what other cells they communicate with. She can compare that with an older brain and possibly see which connections have broken when memory loss occurs. If she can target the broken connections, there could be a way to treat the area with a drug and stop or slow cognitive decline.

    Branch has been practicing on the light sheet with mouse brains, and she plans to formally investigate her hypothesis with rat brains, which are bigger and more human-like.

    If she didn’t have the light sheet, Branch would have to slice the brain, which is about the size of an olive pit, into tissue-thin sections—about 250 pieces. Each slice would need to be stained, mounted onto a slide, and then imaged. Each of those images would need to be manually assembled into a composite to approximate the whole.

    All of this work would take about a month. Since Branch’s experiment involves 30 brains, it would take her about two and a half years, “if,” she says, “that’s all I did day in and day out.”

    Worse yet, by slicing the brain, she would lose most of the newborn neurons she needed to find, and probably all of the connections. She figures if she had marked 50 newborn neurons, she’d be lucky to find five.

    “It would be impossible to find the connections,” she says. “And it would be impossible to get an idea of who each of those cells is talking to. Maybe it’s not important, but I’m guessing that’s not the case. Neurons in isolation aren’t interesting; it’s who they’re talking to, it’s how they’re wired.

    “I was just going to have to estimate. I’d have missed a lot of the picture, and that’s all anyone’s been able to do.”

    Guy Bar-Klein, a neuroscientist working in the Hal Dietz Lab at the School of Medicine, has been crossing town to spend time at Homewood’s Dunning Hall with the light sheet to study blood vessels in the heart and brain, hoping to better understand what causes aneurysms.

    Without the light-sheet technology, his view would be limited to a minuscule section of tissue, much too small to get a true sense of its vasculature. Now, he has been looking at samples with intact blood vessels, making it possible to spot and track aneurysms—and possibly pinpoint the underlying issues that caused them to form.

    “It’s very exciting,” Bar-Klein said. “I think it gives us a very substantial advantage in understanding the signaling involved in aneurysm formation.”

    Michael Noë, a pathology resident who studies pancreatic cancer, hopes the light sheet’s three-dimensional perspective will allow him to see relationships between tumors and the surrounding nerves and blood vessels. Tumors often grow around nerves, and Noë expects the new perspective of cancerous ducts and nerves could shed light on why.

    “For almost 200 years, pathologists looked at tissue the same way,” he says. “Three-dimensional is almost a whole new world for us. There is a lot of excitement in the department of pathology to apply this technology for the first time to human samples.”

    Before researchers can view tissue of any sort with the light-sheet, their samples must be treated to make them translucent, so the microscope’s light can pass through and create an image. Noë has developed a protocol for clearing human tissue and tumors, work he’s hoping to publish.

    Branch expects to have 3-D images of all 30 of her rat brains in three to six months.

    She’ll see every newborn neuron. She’ll see each dendrite. And hopefully, she’ll find answers – she already knows she’ll find more questions.

    “The technology makes it easier to have confidence about our findings,” she says. “It also opens up an opportunity to ask even more questions — things that before, we didn’t even know we could ask.”

    See the full article here.


    Johns Hopkins Campus

    The Johns Hopkins University opened in 1876, with the inauguration of its first president, Daniel Coit Gilman. “What are we aiming at?” Gilman asked in his installation address. “The encouragement of research … and the advancement of individual scholars, who by their excellence will advance the sciences they pursue, and the society where they dwell.”

    The mission laid out by Gilman remains the university’s mission today, summed up in a simple but powerful restatement of Gilman’s own words: “Knowledge for the world.”

    What Gilman created was a research university, dedicated to advancing both students’ knowledge and the state of human knowledge through research and scholarship. Gilman believed that teaching and research are interdependent, that success in one depends on success in the other. A modern university, he believed, must do both well. The realization of Gilman’s philosophy at Johns Hopkins, and at other institutions that later attracted Johns Hopkins-trained scholars, revolutionized higher education in America, leading to the research university system as it exists today.

  • richardmitnick 10:36 am on June 28, 2017
    Tags: channelrhodopsin-2, Neuroscience, Purkinje cells

    From U Washington: “Study shines light on brain cells that control movement” 


    Michael McCarthy
    Media contact:
    Leila Gray

    In this image of neurons in the cerebellum of the brain, the yellow cells are Purkinje cells in which the channelrhodopsin-2 gene is being produced. Horwitz Lab/UW Medicine

    UW Medicine researchers have developed a technique for inserting a gene into specific cell types in the adult brain in an animal model.

    Recent work shows that the approach can be used to alter the function of brain circuits and change behavior. The study appears in the journal Neuron in the NeuroResources section.

    Gregory Horwitz, associate professor of physiology and biophysics at the University of Washington School of Medicine in Seattle, led the research team. He said that the approach will allow scientists to better understand what roles select cell types play in the brain’s complex circuitry.

    Researchers hope that the approach might someday lead to developing treatments for conditions, such as epilepsy, that might be curable by activating a small group of cells.

    “The brain is made up of a mix of many cell types performing different functions. One of the big challenges for neuroscience is finding ways to study the function of specific cell types selectively without affecting the function of other cell types nearby,” Horwitz said. “Our study shows it is possible to selectively target a specific cell type in an adult brain using this technique and affect behavior nearly instantly.”

    In their study, Horwitz and his colleagues at the Washington National Primate Research Center in Seattle inserted a gene into cells in the cerebellum, a small structure located at the back of the brain and tucked under the brain’s larger cerebrum.

    The cerebellum’s primary function is controlling motor movements. Disorders of the cerebellum generally lead to often disabling loss of coordination. Recent research suggests the cerebellum may also be important in learning and may be involved in such conditions as autism and schizophrenia.

    The cells the scientists selected to study are called Purkinje cells. These cells, named after their discoverer, Czech anatomist Jan Evangelista Purkinje, are some of the largest in the human brain. They typically make connections with hundreds of other brain cells.

    “The Purkinje cell is a mysterious cell,” said Horwitz. “It’s one of the biggest and most elaborate neurons and it processes signals from hundreds of thousands of other brain cells. We know it plays a critical role in movement and coordination. We just don’t know how.”

    The gene they inserted, called channelrhodopsin-2, encodes for a light-sensitive protein that inserts itself into the brain cell’s membrane. When exposed to light, it allows ions – tiny charged particles – to pass through the membrane. This triggers the brain cell to fire.
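
    The mechanism can be sketched with an illustrative (not biophysically fitted) integrate-and-fire simulation: while light opens the channelrhodopsin-2 channels, inward current depolarizes the membrane until the cell fires; in the dark, the cell stays quiet. All parameter values below are arbitrary choices for the sketch.

```python
import numpy as np

def simulate(light_on, dt=0.1, tau=10.0, threshold=1.0, light_current=0.3):
    """Return spike times (ms) for a leaky integrate-and-fire cell.

    light_on: boolean array, one entry per time step, True while illuminated.
    """
    v, spikes = 0.0, []
    for i, lit in enumerate(light_on):
        current = light_current if lit else 0.0   # channel current only under light
        v += dt * (-v / tau + current)            # leaky integration of the current
        if v >= threshold:
            spikes.append(i * dt)
            v = 0.0                               # reset the membrane after a spike
    return spikes

steps = 2000                                       # 200 ms at dt = 0.1 ms
light = np.zeros(steps, dtype=bool)
light[500:1500] = True                             # light pulse from 50 ms to 150 ms
spikes = simulate(light)
```

    The cell fires repeatedly during the light pulse and not at all otherwise, which is the behavior the researchers exploited, with light delivered through a fine optical fiber setting the firing rate.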

    The technique, called optogenetics, is commonly used to study brain function in mice. But in these studies, the gene must be introduced into the embryonic mouse cell.

    “This ‘transgenic’ approach has proved invaluable in the study of the brain,” Horwitz said. “But if we are someday going to use it to treat disease, we need to find a way to introduce the gene later in life, when most neurological disorders appear.”

    The challenge for his research team was how to introduce channelrhodopsin-2 into a specific cell type in an adult animal. To achieve this, they used a modified virus that carried the gene for channelrhodopsin-2 along with a segment of DNA called a promoter. The promoter stimulates the cell to start expressing the gene and make the channelrhodopsin-2 membrane protein. To make sure the gene was expressed only by Purkinje cells, the researchers used a promoter that is strongly active in Purkinje cells, called L7/Pcp2.

    In their paper, the researchers reported that by painlessly injecting the modified virus into a small area of the cerebellum of rhesus macaque monkeys, the channelrhodopsin-2 was taken up exclusively by the targeted Purkinje cells. The researchers then showed that when they exposed the treated cells to light through a fine optical fiber, they were able to stimulate the cells to fire at different rates and affect the animals’ motor control.

    Horwitz said that Purkinje cells were more likely than other cells to produce the channelrhodopsin-2 membrane protein because they drive the L7/Pcp2 promoter more strongly.

    “This experiment demonstrates that you can engineer a viral vector with this specific promoter sequence and target a specific cell type,” he said. “The promoter is the magic. Next, we want to use other promoters to target other cell types involved in other types of behaviors.”

    Horwitz’s coauthors were lead author Yasmine El-Shamayleh, a postdoctoral fellow; Yoshiko Kojima, an acting instructor; and Robijanto Soetedjo, a UW School of Medicine research associate professor of physiology and biophysics. All are researchers at the Washington National Primate Research Center.

    This study was funded by National Institutes of Health grants to the researchers; an NIH Office of Research Infrastructure Programs grant to the Washington National Primate Research Center; and a National Eye Institute Center Core Grant for Vision Research to the University of Washington School of Medicine.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The University of Washington is one of the world’s preeminent public universities. Our impact on individuals, on our region, and on the world is profound — whether we are launching young people into a boundless future or confronting the grand challenges of our time through undaunted research and scholarship. Ranked number 10 in the world in Shanghai Jiao Tong University rankings and educating more than 54,000 students annually, our students and faculty work together to turn ideas into impact and in the process transform lives and our world.
    So what defines us — the students, faculty and community members at the University of Washington? Above all, it’s our belief in possibility and our unshakable optimism. It’s a connection to others, both near and far. It’s a hunger that pushes us to tackle challenges and pursue progress. It’s the conviction that together we can create a world of good. Join us on the journey.

  • richardmitnick 4:36 pm on August 24, 2016 Permalink | Reply
    Tags: , In unstable times the brain reduces cell production to help cope, Neuroscience,   

    From Princeton: “In unstable times, the brain reduces cell production to help cope” 

    Princeton University

    August 24, 2016
    Morgan Kelly

    People who experience job loss, divorce, death of a loved one or any number of life’s upheavals often adopt coping mechanisms to make the situation less traumatic.

    While these strategies manifest as behaviors, a Princeton University and National Institutes of Health study suggests that our response to stressful situations originates from structural changes in our brain that allow us to adapt to turmoil.

    A study conducted with adult rats showed that the brains of animals faced with disruptions in their social hierarchy produced far fewer new neurons in the hippocampus, the part of the brain responsible for certain types of memory and stress regulation. Rats exhibiting this lack of brain-cell growth, or neurogenesis, reacted to the surrounding upheaval by favoring the company of familiar rats over that of unknown rats, according to a paper published in The Journal of Neuroscience.

    A Princeton University and National Institutes of Health study suggests that our response to stressful situations originates from structural changes in our brain that allow us to adapt to turmoil. Adult rats with disruptions in their social hierarchy produced far fewer new neurons in the hippocampus, the part of the brain responsible for certain types of memory and stress regulation. They also reacted to the disruption by favoring the company of familiar rats. Their behavior manifested six weeks after social disruption, during which time brain-cell growth, or neurogenesis, had decreased by 50 percent. The photo shows adult hippocampal neurons that are less than two weeks old. (Image courtesy of Maya Opendak, New York University)

    The research is among the first to show that adult neurogenesis — or the lack thereof — has an active role in shaping social behavior and adaptation, said first author Maya Opendak, who received her Ph.D. in neuroscience from Princeton in 2015 and conducted the research as a graduate student. The preference for familiar rats may be an adaptive behavior triggered by the reduction in neuron production, she said.

    “Adult-born neurons are thought to have a role in responding to novelty, and the hippocampus participates in resolving conflicts between different goals for use in decision-making,” said Opendak, who is now a postdoctoral research fellow of child and adolescent psychology at the New York University School of Medicine.

    “Data from this study suggest that the reward of social novelty may be altered,” she said. “Indeed, sticking with a known partner rather than approaching a stranger may be beneficial in some circumstances.”

    The findings also show that behavioral responses to instability may be more measured than scientists have come to expect, explained senior author Elizabeth Gould, Princeton’s Dorman T. Warren Professor of Psychology and department chair. Gould and her co-authors were surprised that the disrupted rats did not display any of the stereotypical signs of mental distress such as anxiety or memory loss, she said.

    “Even in the face of what appears to be a very disruptive situation, there was not a negative pathological response but a change that could be viewed as adaptive and beneficial,” said Gould, who also is a professor of neuroscience in the Princeton Neuroscience Institute (PNI).

    “We thought the animals would be more anxious, but we were making our prediction based on all the bias in the field that social disruption is always negative,” she said. “This research highlights the fact that organisms, including humans, are typically resilient in response to disruption and social instability.”

    Co-authors on the paper include: Lily Offit, who received her bachelor’s degree in psychology and neuroscience from Princeton in 2015 and is now a research assistant at Columbia University Medical Center; Patrick Monari, a research specialist in PNI; Timothy Schoenfeld, a postdoctoral researcher at the National Institutes of Health (NIH) who received his Ph.D. in psychology and neuroscience from Princeton in 2012; Anup Sonti, an NIH researcher; and Heather Cameron, an NIH principal investigator of neuroplasticity.

    The study is unusual for mimicking the true social structure of rats, Gould said. Rats live in structured societies that contain a single dominant male. The researchers placed the rats in groups of four males and two females in a large enclosure known as a visible burrow system. They then monitored the groups until the dominant rat in each one emerged and was identified. After a few days, the alpha rats of two communities were swapped, which reignited the contest for dominance in each group.

    The rats from disrupted hierarchies displayed their preference for familiar fellows six weeks after those turbulent times, during which time neurogenesis had decreased by 50 percent, Opendak said. (Any neurons generated during the time of instability would take four to six weeks to be incorporated into the hippocampus’ circuitry, she said.)

    When the researchers chemically restored adult neurogenesis in these rats, however, the animals’ interest in unknown rats returned to pre-disruption levels. At the same time, the researchers inhibited neuron growth in “naïve” transgenic rats that had not experienced social disruption. They found that the mere cessation of neurogenesis produced the same results as social disruption, particularly a preference for spending time with familiar rats.

    “These results show that the reduction in new neurons is directly responsible for social behavior, something that hasn’t been shown before,” Gould said. The exact mechanism behind how lower neuron growth led to the behavior change is not yet clear, she said.

    Bruce McEwen, professor of neuroendocrinology at The Rockefeller University, said that the research is a “major step forward” in efforts to explore the role of the dentate gyrus — a part of the hippocampus — in social behavior and antidepressant efficacy.

    “The ventral dentate gyrus, where they found these effects, is now implicated in mood-related behaviors and the response to antidepressants,” said McEwen, who is familiar with the research but had no role in it.

    “The connection to social behavior shown here is an important addition because social withdrawal is a key aspect of depression in humans, and the anterior hippocampus in humans is the homolog of the ventral hippocampus in rodents,” McEwen said. “Although there is no ‘animal model’ of human depression, the individual behaviors such as social avoidance, and brain changes such as neurogenesis, have been very useful in elucidating brain mechanisms in human depression.”

    At this point, the extent to which the exact mechanism and behavioral changes the researchers observed in the rats would apply to humans is unknown, Gould and Opendak said. The study’s overall conclusion, however, that social disruption and instability lead to neurological changes that help us to better cope is likely universal, they said.

    “Most people do experience some disruption in their lives, and resilience is the most typical response,” Gould said. “After all, if organisms always responded to stress with depression and anxiety, it’s unlikely early humans would have made it because life in the wild is very stressful.”

    “For people who are exposed to social disruption frequently, our animal model suggests that these life events may be accompanied by long-term changes in brain function and social behavior,” Opendak said. “Although we hope that our findings may guide research on the mechanisms of resilience in humans, it is important as always to exercise caution when extrapolating these data across species.”

    The paper, Lasting Adaptations In Social Behavior Produced By Social Disruption And Inhibition of Adult Neurogenesis, was published June 29 in The Journal of Neuroscience. This work was supported by the National Institute of Mental Health (NIMH).

    See the full article here.


    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.


  • richardmitnick 11:03 am on August 23, 2016 Permalink | Reply
    Tags: , , Neuroscience,   

    From Scripps: “A Look Deep Inside the Human Brain Reveals a Surprise” 

    Scripps Research Institute

    No writer credit

    In the field of neuroscience, researchers often make the assumption that information they obtain from a tiny brain sample is true for the entire brain.

    Now, a team of scientists at The Scripps Research Institute (TSRI), the University of California, San Diego (UC San Diego) and Illumina, Inc., has completed the first large-scale assessment of the way thousands of single neuronal nuclei transcribe genetic information into the messenger RNAs that guide protein production (“transcription”), revealing a surprising diversity in the process. The findings could improve both our understanding of the brain’s normal functioning and of how it’s damaged by diseases such as Alzheimer’s, Parkinson’s, ALS and depression.

    [The study is published in Science.]

    The researchers accomplished this feat by isolating and analyzing 3,200 single human neurons, more than 10 times the number analyzed in prior publications, drawn from six Brodmann Areas (larger brain regions with distinct functional roles).

    “Through a wonderful scientific collaboration, we found an enormous amount of transcriptomic diversity from cell to cell that will be relevant to understanding the normal brain and its diseases such as Alzheimer’s, Parkinson’s, ALS and depression,” said TSRI Professor and neuroscientist Jerold Chun, who co-led the study with bioengineers Kun Zhang and Wei Wang of UC San Diego and Jian-Bing Fan of Illumina.

    While parts of the cerebral cortex look different under a microscope – with different cell shapes and densities that form cortical layers and larger regions having functional roles called “Brodmann Areas” – most researchers treat neurons as a fairly uniform group in their studies. “From a tiny brain sample, researchers often make assumptions that obtained information is true for the entire brain,” said Dr. Chun.

    But the brain isn’t like other organs, Dr. Chun explained. There’s a growing understanding that individual brain cells are unique, and a possibility has been that the microscopic differences among cerebral cortical areas may also reflect unique transcriptomic differences – i.e., differences in the expressed genes, or messenger RNAs (mRNAs), which carry copies of the DNA code outside the nucleus and determine which proteins the cell makes.

    With the help of newly developed tools to isolate and sequence individual cell nuclei (where a cell’s genetic material is housed), the researchers deciphered the minute quantities of mRNA within each nucleus. The analysis identified 16 neuronal subtypes, and various combinations of those subtypes tended to cluster in particular cortical layers and Brodmann Areas, helping explain why these regions look and function differently.

    Neurons exhibited anticipated similarities, yet also many differences in their transcriptomic profiles, revealing single neurons with shared, as well as unique, characteristics that likely lead to differences in cellular function.
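Subtype discovery of this kind typically follows a standard single-cell workflow: log-transform the nucleus-by-gene count matrix, reduce its dimensionality, then cluster the profiles. The sketch below runs that generic pipeline on simulated counts with two artificial subtypes rather than the study's 16; it illustrates the general approach, not the paper's actual analysis, and all names and parameters are invented for the example:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Simulate a small nucleus-by-gene count matrix with two artificial
# "subtypes": subtype B overexpresses 10 marker genes.
rng = np.random.default_rng(0)
subtype_a = rng.poisson(lam=5.0, size=(50, 100))   # 50 nuclei x 100 genes
subtype_b = rng.poisson(lam=5.0, size=(50, 100))
subtype_b[:, :10] += 30                            # marker-gene overexpression

counts = np.vstack([subtype_a, subtype_b]).astype(float)
log_counts = np.log1p(counts)                      # variance-stabilizing transform

# Reduce dimensionality, then cluster the nuclei into putative subtypes.
embedding = PCA(n_components=10, random_state=0).fit_transform(log_counts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)

# Nuclei simulated from the same subtype should mostly share a label.
print(np.bincount(labels[:50]), np.bincount(labels[50:]))
```

In real data the number of subtypes is not known in advance, so studies like this one rely on more elaborate clustering and validate the resulting groups against known marker genes.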

    “Now we can actually point to an enormous amount of molecular heterogeneity in single neurons of the brain,” said Gwendolyn E. Kaeser, a UC San Diego Biomedical Sciences Graduate Program student studying in Dr. Chun’s lab at TSRI and co-first author of the study.

    Interestingly, some of these differences in gene expression have roots in very early brain development taking place before birth. The researchers found markers on some neurons showing that they originated from a specific region of fetal brain called the ganglionic eminence, which generates inhibitory neurons destined for the cerebral cortex. These neurons may have particular relevance to developmental brain disorders.

    In future studies, the researchers hope to investigate how single-neuron DNA and mRNA differ in single neurons, groups and between human brains – and how these may be influenced by factors such as stress, medications or disease.

    See the full article here.


    The Scripps Research Institute (TSRI), one of the world’s largest, private, non-profit research organizations, stands at the forefront of basic biomedical science, a vital segment of medical research that seeks to comprehend the most fundamental processes of life. Over the last decades, the institute has established a lengthy track record of major contributions to the betterment of health and the human condition.

    The institute — which is located on campuses in La Jolla, California, and Jupiter, Florida — has become internationally recognized for its research into immunology, molecular and cellular biology, chemistry, neurosciences, autoimmune diseases, cardiovascular diseases, virology, and synthetic vaccine development. Particularly significant is the institute’s study of the basic structure and design of biological molecules; in this arena TSRI is among a handful of the world’s leading centers.

    The institute’s educational programs are also first rate. TSRI’s Graduate Program is consistently ranked among the best in the nation in its fields of biology and chemistry.

  • richardmitnick 6:54 am on July 28, 2016 Permalink | Reply
    Tags: , , Neuroscience,   

    From Science: “Neurons get fresh ‘batteries’ after stroke” 



    Jul. 27, 2016
    Emily Underwood

    When neurons are damaged, cells called astrocytes (above) may assist by donating cellular power plants called mitochondria, a new study suggests. GerryShaw/Wikimedia Commons

    If your car’s battery dies, you might call on roadside assistance—or a benevolent bystander—for a jump. When damaged neurons lose their “batteries,” energy-generating mitochondria, they call on a different class of brain cells, astrocytes, for a boost, a new study suggests. These cells respond by donating extra mitochondria to the floundering neurons. The finding, still preliminary, might lead to novel ways to help people recover from stroke or other brain injuries, scientists say.

    “This is a very interesting and important study because it describes a new mechanism whereby astrocytes may protect neurons,” says Reuven Stein, a neurobiologist at The Rabin Institute of Neurobiology in Tel Aviv, Israel, who was not involved in the study.

    To keep up with the energy-intensive work of transmitting information throughout the brain, neurons need a lot of mitochondria, the power plants that produce the molecular fuel—ATP—that keeps cells alive and working. Mitochondria must be replaced often in neurons, in a process of self-replication called fission—the organelles were originally microbes captured inside a cell as part of a symbiosis. But if mitochondria are damaged or if they can’t keep up with a cell’s needs, energy supplies can run out, killing the cell.

    In 2014, researchers published the first evidence that cells can transfer mitochondria in the brain—but it seemed more a matter of throwing out the trash. When neurons expel damaged mitochondria, astrocytes swallow them and break them down. Eng Lo and Kazuhide Hayakawa, both neuroscientists at Massachusetts General Hospital in Charlestown, wondered whether the transfer could go the other way as well—perhaps astrocytes donated working mitochondria to neurons in distress. Research by other groups supported that idea: A 2012 study, for example, found that stem cells from bone marrow can donate mitochondria to lung cells after severe injury.

    To find out whether this kind of donation was taking place in the brain, Lo and Hayakawa teamed up with researchers in Beijing to test whether astrocytes could be coaxed into expelling healthy, working mitochondria. Previous studies hinted that astrocytes may pick up on neurons’ “help me” signals using an enzyme called CD38, Lo says. The enzyme, produced throughout the body in response to injury or damage, is also made by astrocytes. When Lo and colleagues genetically engineered mice to produce excess CD38, astrocytes from the rodents—extracted and deposited into fluid-filled dishes—expelled large numbers of still-functional mitochondrial particles. Researchers then dumped the mitochondria-rich fluid into another dish containing dying mouse neurons, and found that the cells did, in fact, absorb the mitochondria within 24 hours. The recharged neurons also grew new branches, lived longer, and had higher levels of ATP than cells not receiving the replacement batteries, suggesting that the astrocytes’ mitochondria were beneficial.

    Next, the team needed to determine whether the same phenomenon happens in living animals. So they subjected live, anesthetized mice to a strokelike injury and then injected damaged brain regions with astrocyte-derived mitochondria. After 24 hours, scientists killed the mice, cut into their brains, and examined the tissue microscopically. They saw that the mouse neurons had not only absorbed the mitochondria, but also had significantly higher levels of molecules known to promote survival in distressed cells than did mice that had not received the mitochondrial cocktail.

    Finally, the team tested whether CD38 was necessary for the transfer. They injected mice with short segments of RNA designed to interfere with the enzyme’s function. Mice that received the treatment after their simulated “strokes” had far fewer astrocytic mitochondria in their neurons. The rodents also fared twice as badly on neurological tests compared with ones in which CD38 was unblocked, the team reports today in Nature. Lo emphasizes that the work is merely a “proof-of-concept study,” but adds that the outcomes of the neurological tests “tells you [the enzyme] is clinically relevant.”

    Given that CD38 plays many important roles throughout the body, including in the immune system, the data are “way too preliminary” to start pursuing drugs that would increase or alter its activity, cautions Frances Lund, a microbiologist at the University of Alabama at Birmingham. It’s not clear, for example, whether the transfer of mitochondria was caused by, or merely correlated with, CD38 levels, she says.

    Still, Jun Chen, a neurobiologist at the University of Pittsburgh in Pennsylvania, is hopeful that the finding could lead to new treatments for diseases attributed to mitochondrial dysfunction. Parkinson’s disease, for example, is a neurodegenerative disorder strongly associated with mitochondrial dysfunction, in which dopamine-producing neurons in certain brain regions die en masse. If the new research pans out, he says, clinicians may one day be able to deliver healthy mitochondria into sick, but still viable, neurons.

    See the full article here.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.


  • richardmitnick 10:10 am on June 29, 2016 Permalink | Reply
    Tags: , , NeuroFab, Neuroscience, New Stanford engineering tools record electrical activity of cells, , Stanford Neurosciences Institute   

    From Stanford: “New Stanford engineering tools record electrical activity of cells” 

    Stanford University

    June 28, 2016
    Amy Adams

    When asked for the biggest idea that would transform neuroscience, Stanford mechanical engineer Nicholas Melosh came up with this: using engineering tools he and his colleagues were developing to record the electrical activity of cells.

    Stanford researchers Greg Pitner and Matt Abramian finalize sample preparation in the Neurofab, mounting the cell culture vessel to the suspended wafer. (Image credit: L.A. Cicero)

    If it works, Melosh’s big idea could help neuroscientists improve on devices that interface with the brain – such as those currently used to relieve symptoms of Parkinson’s disease – screen drugs for electrophysiology side effects, and understand with greater precision the electrical currents that underlie our thoughts, behaviors and memories.

    To make this idea a reality, Melosh, an associate professor of materials science and engineering and of photon science, founded the NeuroFab as an initiative of the Stanford Neurosciences Institute. It was one of seven interdisciplinary “Big Ideas” initiatives intended to tackle fundamental problems in neuroscience.

    “Eventually we’d like to create a toolset that would impact many neuroscience labs,” he said.

    Building a bridge

    The NeuroFab serves as both a physical space where engineers and neuroscientists can collaborate on new tools and an intellectual space with regular meetings to discuss ideas.

    Melosh said neuroscience and engineering have a lot in common.

    “Neural activity is electrical in nature and is a natural fit for engineers,” he said. But the language, cultures and skills of the two groups have been hard to bridge.

    John Huguenard is one of the neuroscientists on the other side of that bridge.

    “There is a cultural difference between engineers and biologists,” said Huguenard, a Stanford professor of neurology and neurological sciences.

    “When they start talking about device characteristics, it is lost on us. Similarly, when we say there are different types of neurons with very different properties, it’s lost on them.”

    Huguenard studies widespread neural activity in the brain, such as what occurs in sleep patterns or epilepsy.

    “This work requires us to record from many parts of the brain simultaneously,” he said, something that has been challenging with existing technology.

    Currently, neuroscientists have two primary methods to record cellular electrical activity. One is highly accurate, but can only record from one cell at a time, inevitably killing the cell within about two hours. The other can record long-term from an array of cells, but is not very sensitive.

    So far, teams within the NeuroFab are experimenting with a variety of approaches for reading electrical signals. Two of the more well-developed ones involve conductive nanomaterials, either in the form of nanopillars, which poke up into cells from below, or arrays of linear nanotubes that pass through cells like a bead on a string.

    Other approaches involve the optical recording of electrical fields, massively parallel interfaces based on computer chips, and membrane-fusing electrodes.


    Early in the NeuroFab’s existence, Melosh started talking with Philip Wong, a Stanford professor of electrical engineering, whose lab is developing ways of using highly conductive carbon nanotubes for next-generation computer chips. Wong brought in graduate student Gregory Pitner, who had experience with techniques for making those carbon nanotubes in a variety of configurations.

    “We grow them aligned, we grow them dense, we grow them consistently and reproducibly,” said Pitner, who believes that the dense, aligned nanotubes could provide a conductive surface for cells to grow on.

    That initial design changed when Pitner got a lesson in cell biology from Matthew Abramian, a postdoctoral fellow in Huguenard’s lab.

    Cells, Abramian explained, are surrounded by a halo of sugars and proteins and it’s these molecules that are in contact with a lab dish, not the cell itself. To get access to electrical changes within the cell, Pitner learned that his nanotubes needed to be suspended in a way that would allow the cell to encompass and incorporate the tube.

    “There are all sorts of practical details to understand about cell behavior that go way beyond high school biology,” Pitner said.

    The new design has small troughs for the cells to grow in, each containing a single nanotube leading out to a recording station. With that new design, the pair needed to find the right cell type to test whether the idea works.

    If it succeeds, they envision being able to record from any kind of conductive cell including different types of neurons or heart muscle.

    “This would be a device where you can get data on hundreds of neurons at one time,” Abramian said.

    Cultural surprises

    Abramian said the pace of biology came as a surprise to engineers.

    “Engineers just think we’ll grow some cells and the next day we’re going to record,” he said. “That’s not how it works at all.”

    Cells can take weeks to grow, and then they don’t always have the anticipated properties, he added.

    By contrast, Abramian said he was amazed by the amount of control engineers have over their designs.

    “They have methods for building really intricate devices, and they can test them right away,” he said.

    By including faculty and students from many disciplines, Pitner said, the NeuroFab helps bridge these gaps in knowledge and expertise.

    “These kinds of relationships are almost impossible to create in a vacuum,” he said.

    Melosh said he hopes tools developed in the NeuroFab will enable bidirectional communication with neurons in a dish, and eventually the brain, which may start to unlock secrets of the brain by measuring from many places at once.

    See the full article here.


    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


  • richardmitnick 1:13 pm on January 22, 2016 Permalink | Reply
    Tags: , , Montreal Neurological Institute goes "Open science", Neuroscience   

    From AAAS: “Montreal institute going ‘open’ to accelerate science” 



    Jan. 21, 2016
    Brian Owens

    The Montreal Neurological Institute plans to free up its findings, including data that point to connections between brain regions communicating at different neural rhythms. SÉBASTIEN DERY, MCCONNELL BRAIN IMAGING CENTRE, MONTREAL NEUROLOGICAL INSTITUTE

    Guy Rouleau, the director of McGill University’s Montreal Neurological Institute (MNI) and Hospital in Canada, is frustrated with how slowly neuroscience research translates into treatments. “We’re doing a really shitty job,” he says. “It’s not because we’re not trying; it has to do with the complexity of the problem.”

    So he and his colleagues at the renowned institute decided to try a radical solution. Starting this year, any work done there will conform to the principles of the “open-science” movement—all results and data will be made freely available at the time of publication, for example, and the institute will not pursue patents on any of its discoveries. Although some large-scale initiatives like the government-funded Human Genome Project have made all data completely open, MNI will be the first scientific institute to follow that path, Rouleau says.

    “It’s an experiment; no one has ever done this before,” he says. The intent is that neuroscience research will become more efficient if duplication is reduced and data are shared more widely and earlier. Opening access to the tissue samples in MNI’s biobank and to its extensive databank of brain scans and other data will have a major impact, Rouleau hopes. “We think that it is a way to accelerate discovery and the application of neuroscience.”

    After a year of consultations among the institute’s staff, pretty much everyone—about 70 principal investigators and 600 other scientific faculty and staff—has agreed to take part, Rouleau says. Over the next 6 months, individual units will hash out the details of how each will ensure that its work lives up to guiding principles for openness that the institute has developed. They include freely providing all results, data, software, and algorithms; and requiring collaborators from other institutions to also follow the open principles.

    Staff at the institute were generally in favor of the plan, according to Lesley Fellows, a neurologist at MNI, though there were concerns about how to implement some aspects of it—such as how to protect patient confidentiality, and whether there would be sufficient financial support. Yet there is a “moral imperative,” according to Fellows, for research to be shared as openly as possible.

    “While the scale of ‘open’ that can be pursued right now may vary across research areas and will certainly depend on the resources that can be brought to bear, the practical challenges seem worth contending with,” she says. Participation is voluntary, and researchers can pursue patents on their own, but MNI will not pay the fees or help with the paperwork.

    Advocates of open science have welcomed MNI’s move. Brian Nosek, a psychologist and director of the Center for Open Science at the University of Virginia in Charlottesville, says he is “very impressed” with the institute’s plans. “It’s clear they are looking to move the organization towards the ideals of science,” he says.

    Nosek says the decision to eschew patents is especially intriguing. “I haven’t seen others do that before,” he says. But it’s not something that will necessarily work in other scientific fields, like engineering, Nosek predicts. “There is lots of debate in the life sciences now about what should and should not be patented, but that may not translate across disciplines smoothly.”

    Rouleau concedes that the patent ban might mean MNI has to forgo some future licensing income. But he says the kind of early-stage science that the institute does is not really worth protecting. “There is a fair amount of patenting by people at the institute, but the outcomes have not been very useful,” he says, adding that the institute would rather provide data that others could use to develop patentable medicines. “It comes down to what is the reason for our existence? It’s to accelerate science, not to make money.”

    The insistence that any organization or institute that collaborates with MNI will also have to follow open-science principles for that project could help to spread the approach, says Dan Gezelter, a chemist and open-science advocate at the University of Notre Dame in South Bend, Indiana. “It’s a little bit viral. I’ve never seen that before,” he says. Nosek agrees. “There is little that is more powerful in changing behavior than peer pressure,” he says.

    MNI is developing metrics to monitor its open-science experiment and determine whether it has the hoped-for impact. Officials will look at participation by the institute’s own staff, how much their open resources are being used by other researchers, and whether new products or therapies are being developed more quickly. “In 5 years,” Rouleau says, “we’ll be able to say ‘these things worked, and these things didn’t.’”

    See the full article here.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition


  • richardmitnick 6:02 pm on December 11, 2015 Permalink | Reply
    Tags: , Neuroscience,   

    From NOVA: “Newly Discovered ‘Stop Neurons’ Could Save Your Life” 



    11 Dec 2015
    Margaux Phares

    Neuroscientists have known since the 1960s which nerves tell a person’s legs to step off the curb to cross the street. But until now, they had no idea which ones hold the person back to avoid getting hit by a car.

    By stimulating nerve cells with light, a group of neuroscientists at the Karolinska Institutet in Stockholm both identified the aptly named “stop neurons” and observed how they work in walking mice. The team used a “bottom-up approach” to explore how the spinal cord, lower in the chain of neural command, communicates with the brain stem, which is higher in the chain.

    “Stop neurons” tell our bodies to stop moving. Photo credits: O. Bendorf/Flickr (CC BY-NC-ND), Julien Bouvier.

    Nerve cells that give rise to other functions we do not consciously think about, like breathing and keeping balance, are located in the same area—effectively, as coauthor Ole Kiehn puts it, “one big mess of integrated networks.”

    To find the stop neurons, Kiehn and Julien Bouvier first modified a mouse’s brain stem to be sensitive to light stimulation, then sliced it into smaller and smaller segments, removing parts until light no longer stimulated the tissue. From this, the researchers pinpointed a cluster of “stop neurons” extending down into the spinal cord that, when stimulated, tell the spinal cord to halt locomotion.

    What particularly surprised Bouvier was that “those stop cells are excitatory.” In order to stop motion, the cells need to be stimulated. It’s not enough to simply interrupt the locomotion signal.

    Bouvier compares it to driving a car. As long as you press the gas pedal, your car will move forward. Going into the study, scientists thought that releasing the pedal, gradually muting the instructions to keep walking, would eventually stop the car. “But what we found was a brake pedal used only to stop,” Bouvier said.
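    The logic of that finding can be summed up in a toy state update. This is a minimal illustrative sketch, not the study’s actual model: the function name and the simple on/off dynamics are invented here to show why an excitatory brake differs from merely releasing the gas.

```python
def step_locomotion(walking, drive, stop_neuron_active):
    """Return whether the animal is walking on the next time step.

    Toy rule set: an active (excitatory) stop command always halts;
    a drive signal starts or sustains walking; with neither input,
    the locomotor rhythm simply persists rather than fading out.
    """
    if stop_neuron_active:   # the "brake pedal": excitation is required to stop
        return False
    if drive:                # the "gas pedal" keeps the animal moving
        return True
    return walking           # no input: the rhythm carries on by itself


# Releasing the gas alone does not stop the walk...
state = True
state = step_locomotion(state, drive=False, stop_neuron_active=False)
print(state)  # True: still walking

# ...but exciting the stop neurons does.
state = step_locomotion(state, drive=False, stop_neuron_active=True)
print(state)  # False: stopped
```

    The point of the sketch is the asymmetry: stopping is not the absence of the go signal but a positive command of its own.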

    Watching the pathway unfold in mice supported their earlier findings. When the researchers pulsed light on stop neurons, the mice came to a stop. Light did not have an effect on mice that had blocked stop neurons—instead of stopping, they kept walking.

    A mouse rigged for an optogenetics experiment is given a blue activation signal.

    Interestingly, the mice that could stop did so smoothly, finishing the step they were about to take. This behavior is very different from freezing, an all-over muscle contraction in response to fear. Bouvier said the smooth stopping allows animals to “keep posture,” making them less likely to fall or lose balance.

    The study, published in the November issue of Cell, is a step toward understanding how the body controls marching orders at the neural level and beyond the muscular level. Thomas Knopfel, a professor of neuroscience at Imperial College London, thinks Bouvier’s study “might be a step forward with medical problems associated with the brain and spinal cord.”

    Leg paralysis from a damaged nerve can disrupt communication between the brain and spinal cord. Knopfel speculated that an implantable device could be connected to this injured nerve, which could help patch this faulty circuit and help a patient learn to move his or her leg again. The same technology the researchers at Karolinska used—called optogenetics—could be used to make this device.

    Kiehn speculated that stop neuron activity might contribute to motor symptoms of Parkinson’s disease. One common symptom of late stage Parkinson’s is an involuntary “freezing gait.” Kiehn thinks this could be a sign that the locomotion “start signal” does not work properly, or that stop neurons may be less active than normal. Future tests will involve trying to identify these neurons in diseased mice.

    Bouvier has further questions about stop neurons and about how the brain stem controls the spinal cord. Among them: Are these neurons a “general brake for all behaviors?”

    We may not consciously think about it every time we start and stop walking, but locomotion is the output of many brain activities. Stop neurons are a critical link in this chain of command; they are the neural brake pedal that spares drivers from having to slam theirs in the crosswalk. “Even though movement may sound like a boring, noncognitive behavior, it is really one of the most important behaviors,” Kiehn said.

    See the full article here.


    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.
