Tagged: Stanford University

  • richardmitnick 7:13 pm on December 6, 2014 Permalink | Reply
    Tags: Stanford University

    From Stanford: “Stanford engineers take big step toward using light instead of wires inside computers” 


    December 2, 2014
    Chris Cesare

    Stanford engineers have designed and built a prism-like device that can split a beam of light into different colors and bend the light at right angles, a development that could eventually lead to computers that use optics, rather than electricity, to carry data.

    They describe what they call an “optical link” in an article in Scientific Reports.

    The optical link is a tiny slice of silicon etched with a pattern that resembles a bar code. When a beam of light is shined at the link, two different wavelengths (colors) of light split off at right angles to the input, forming a T shape. This is a big step toward creating a complete system for connecting computer components with light rather than wires.

    This tiny slice of silicon, etched in Jelena Vuckovic’s lab at Stanford with a pattern that resembles a bar code, is one step on the way toward linking computer components with light instead of wires.

    “Light can carry more data than a wire, and it takes less energy to transmit photons than electrons,” said electrical engineering Professor Jelena Vuckovic, who led the research.

    In previous work her team developed an algorithm that did two things: It automated the process of designing optical structures and it enabled them to create previously unimaginable, nanoscale structures to control light.

    Now, she and lead author Alexander Piggott, a doctoral candidate in electrical engineering, have employed that algorithm to design, build and test a link compatible with current fiber optic networks.

    Creating a silicon prism

    The Stanford structure was made by etching a tiny bar code pattern into silicon that split waves of light like a small-scale prism. The team engineered the effect using a subtle understanding of how the speed of light changes as it moves through different materials.

    What we call the speed of light is how fast light travels in a vacuum. Light travels a bit more slowly in air and even more slowly in water. This speed difference is why a straw in a glass of water looks bent at the surface.

    A property of materials called the index of refraction characterizes the difference in speed. The higher the index, the more slowly light will travel in that material. Air has an index of refraction of nearly 1 and water of 1.3. Infrared light travels through silicon even more slowly: it has an index of refraction of 3.5.
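
    To put those numbers in perspective, the speed of light in a material is simply the vacuum speed divided by the index of refraction, v = c/n. The short sketch below is added here purely for illustration (it is not part of the Stanford work) and runs that arithmetic for the three indices quoted above:

    # Illustrative only: convert the refractive indices quoted above into light speeds.
    C = 299_792_458  # speed of light in vacuum, metres per second

    for material, n in [("air", 1.0), ("water", 1.3), ("silicon, infrared", 3.5)]:
        v = C / n
        print(f"{material}: n = {n}, v = {v:.2e} m/s")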

    The Stanford algorithm designed a structure that alternated strips of silicon and gaps of air in a specific way. The device takes advantage of the fact that as light passes from one medium to the next, some light is reflected and some is transmitted. When light traveled through the silicon bar code, the reflected light interfered with the transmitted light in complicated ways.

    The algorithm designed the bar code to use this subtle interference to direct one wavelength to go left and a different wavelength to go right, all within a tiny silicon chip eight microns long.

    Both 1300-nanometer light and 1550-nanometer light, corresponding to the O-band and C-band wavelengths widely used in fiber optic networks, were beamed at the device from above. The bar code-like structure redirected C-band light one way and O-band light the other, right on the chip.

    Convex optimization

    The researchers designed these bar code patterns already knowing their desired function. Since they wanted C-band and O-band light routed in opposite directions, they let the algorithm design a structure to achieve it.

    “We wanted to be able to let the software design the structure of a particular size given only the desired inputs and outputs for the device,” Vuckovic said.

    To design their device they adapted concepts from convex optimization, a mathematical approach to solving complex problems such as stock market trading. With help from Stanford electrical engineering Professor Stephen Boyd, an expert in convex optimization, they discovered how to automatically create novel shapes at the nanoscale to cause light to behave in specific ways.

    “For many years, nanophotonics researchers made structures using simple geometries and regular shapes,” Vuckovic said. “The structures you see produced by this algorithm are nothing like what anyone has done before.”

    The algorithm began its work with a simple design of just silicon. Then, through hundreds of tiny adjustments, it found better and better bar code structures for producing the desired output light.

    Previous designs of nanophotonic structures were based on regular geometric patterns and the designer’s intuition. The Stanford algorithm can design this structure in just 15 minutes on a laptop computer.
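
    The article does not spell out how those hundreds of tiny adjustments are made (the real method rests on convex optimization and full electromagnetic simulations). Purely as a hedged illustration of the iterative improve-and-keep loop described above, here is a toy sketch; the objective function, strip count and scores are stand-ins, not the Stanford software:

    import random

    # Stand-in objective: in the real problem this would be an electromagnetic simulation
    # scoring how well a silicon/air bar code routes 1300 nm light one way and 1550 nm
    # light the other. Here we simply reward matching an arbitrary target pattern so the
    # loop has something to optimize.
    TARGET = [random.randint(0, 1) for _ in range(64)]

    def performance(pattern):
        return sum(p == t for p, t in zip(pattern, TARGET))

    def optimize(n_iters=2000):
        pattern = [1] * len(TARGET)          # simple starting design: all silicon
        best = performance(pattern)
        for _ in range(n_iters):
            i = random.randrange(len(pattern))
            pattern[i] ^= 1                  # tiny adjustment: swap silicon and air at one strip
            score = performance(pattern)
            if score > best:
                best = score                 # keep the better "bar code"
            else:
                pattern[i] ^= 1              # otherwise undo the tweak
        return pattern, best

    final_pattern, final_score = optimize()
    print("matched", final_score, "of", len(TARGET), "strips")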

    They have also used this algorithm to design a wide variety of other devices, like the super-compact “Swiss cheese” structures that route light beams to different outputs not based on their color, but based on their mode, i.e., based on how they look. For example, a light beam with a single lobe in the cross-section goes to one output, and a double-lobed beam (looking like two rivers flowing side by side) goes to the other output. Such a mode router is just as important as the bar code color splitter, as different modes are also used in optical communications to transmit information.

    The algorithm is the key. It gives researchers a tool to create optical components to perform specific functions, and in many cases such components didn’t even exist before. “There’s no way to analytically design these kinds of devices,” Piggott said.

    Media Contact

    Tom Abate, School of Engineering: (650) 736-2245, tabate@stanford.edu

    Dan Stober, Stanford News Service: (650) 721-6965, dstober@stanford.edu

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


     
  • richardmitnick 6:44 pm on September 22, 2014 Permalink | Reply
    Tags: Stanford University

    From Stanford: “Stanford researchers create ‘evolved’ protein that may stop cancer from spreading” 


    September 21, 2014
    Tom Abate

    Experimental therapy stopped the metastasis of breast and ovarian cancers in lab mice, pointing toward a safe and effective alternative to chemotherapy.

    A team of Stanford researchers has developed a protein therapy that disrupts the process that causes cancer cells to break away from original tumor sites, travel through the bloodstream and start aggressive new growths elsewhere in the body.

    This process, known as metastasis, can cause cancer to spread with deadly effect.

    “The majority of patients who succumb to cancer fall prey to metastatic forms of the disease,” said Jennifer Cochran, an associate professor of bioengineering who describes a new therapeutic approach in Nature Chemical Biology.

    Today doctors try to slow or stop metastasis with chemotherapy, but these treatments are unfortunately not very effective and have severe side effects.

    The Stanford team seeks to stop metastasis, without side effects, by preventing two proteins – Axl and Gas6 – from interacting to initiate the spread of cancer.

    Axl proteins stand like bristles on the surface of cancer cells, poised to receive biochemical signals from Gas6 proteins.

    When two Gas6 proteins link with two Axls, the signals that are generated enable cancer cells to leave the original tumor site, migrate to other parts of the body and form new cancer nodules.

    To stop this process Cochran used protein engineering to create a harmless version of Axl that acts like a decoy. This decoy Axl latches on to Gas6 proteins in the bloodstream and prevents them from linking with and activating the Axls present on cancer cells.

    In collaboration with Professor Amato Giaccia, co-director of the Radiation Biology Program in the Stanford Cancer Center, the researchers gave intravenous treatments of this bioengineered decoy protein to mice with aggressive breast and ovarian cancers.

    Jennifer Cochran and Amato Giaccia are members of a team of researchers who have developed an experimental therapy to treat metastatic cancer.

    Mice in the breast cancer treatment group had 78 percent fewer metastatic nodules than untreated mice. Mice with ovarian cancer had a 90 percent reduction in metastatic nodules when treated with the engineered decoy protein.

    “This is a very promising therapy that appears to be effective and nontoxic in preclinical experiments,” Giaccia said. “It could open up a new approach to cancer treatment.”

    Giaccia and Cochran are scientific advisors to Ruga Corp., a biotech startup in Palo Alto that has licensed this technology from Stanford. Further preclinical and animal tests must be done before determining whether this therapy is safe and effective in humans.

    Greg Lemke, of the Molecular Neurobiology Laboratory at the Salk Institute, called this “a prime example of what bioengineering can do” to open up new therapeutic approaches to treat metastatic cancer.

    “One of the remarkable things about this work is the binding affinity of the decoy protein,” said Lemke, a noted authority on Axl and Gas6 who was not part of the Stanford experiments.

    “The decoy attaches to Gas6 up to a hundredfold more effectively than the natural Axl,” Lemke said. “It really sops up Gas6 and takes it out of action.”
    Directed evolution

    The Stanford approach is grounded in the fact that all biological processes are driven by the interaction of proteins, the molecules that fit together in lock-and-key fashion to perform all the tasks required for living things to function.

    In nature, proteins evolve over millions of years. But bioengineers have developed ways to accelerate the process of improving these tiny parts using a technology called directed evolution. This particular application was the subject of the doctoral thesis of Mihalis Kariolis, a bioengineering graduate student in Cochran’s lab.

    Using genetic manipulation, the Stanford team created millions of slightly different DNA sequences. Each DNA sequence coded for a different variant of Axl.

    The researchers then used high-throughput screening to evaluate over 10 million Axl variants. Their goal was to find the variant that bound most tightly to Gas6.
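
    Conceptually, this screening step is a search for the best binder in an enormous random library. The toy sketch below uses hypothetical sequences and an invented scoring function (in the lab, each variant’s affinity for Gas6 is measured, not computed); it is included only to show the shape of the generate-and-select loop:

    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def mutate(seq, n_mutations=2):
        # Mimic the DNA-level diversification step with random point mutations.
        seq = list(seq)
        for _ in range(n_mutations):
            seq[random.randrange(len(seq))] = random.choice(AMINO_ACIDS)
        return "".join(seq)

    def binding_affinity(variant):
        # Stand-in score; the real screen measures each variant's binding to Gas6.
        return variant.count("W") + variant.count("Y")

    def screen(parent, library_size=10_000):
        # Generate a library of variants and keep the tightest binder:
        # one in-silico round of directed evolution.
        library = (mutate(parent) for _ in range(library_size))
        return max(library, key=binding_affinity)

    best_variant = screen("MKTLILAVLLGSGSA" * 4)  # hypothetical parent sequence
    print(best_variant)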

    Kariolis made other tweaks to enable the bioengineered decoy to remain in the bloodstream longer and also to tighten its grip on Gas6, rendering the decoy interaction virtually irreversible.

    Yu Rebecca Miao, a postdoctoral scholar in Giaccia’s lab, designed the testing in animals and worked with Kariolis to administer the decoy Axl to the lab mice. They also did comparison tests to show that sopping up Gas6 resulted in far fewer secondary cancer nodules.

    Irimpan Mathews, a protein crystallography expert at SLAC National Accelerator Laboratory, joined the research effort to help the team better understand the binding mechanism between the Axl decoy and Gas6.

    Protein crystallography captures the interaction of two proteins in a solid form, allowing researchers to take X-ray-like images of how the atoms in each protein bind together. These images showed molecular changes that allowed the bioengineered Axl decoy to bind Gas6 far more tightly than the natural Axl protein.
    Next steps

    Years of work lie ahead to determine whether this protein therapy can be approved to treat cancer in humans. Bioprocess engineers must first scale up production of the Axl decoy to generate pure material for clinical tests. Clinical researchers must then perform additional animal tests in order to win approval for and to conduct human trials. These are expensive and time-consuming steps.

    But these early, hopeful results suggest that the Stanford approach could become a nontoxic way to fight metastatic cancer.

    Glenn Dranoff, a professor of medicine at Harvard Medical School and a leading researcher at the Dana-Farber Cancer Institute, reviewed an advance copy of the Stanford paper but was otherwise unconnected with the research. “It is a beautiful piece of biochemistry and has some nuances that make it particularly exciting,” Dranoff said, noting that tumors often have more than one way to ensure their survival and propagation.

    Axl has two protein cousins, Mer and Tyro3, that can also promote metastasis. Mer and Tyro3 are also activated by Gas6.

    “So one therapeutic decoy might potentially affect all three related proteins that are critical in cancer development and progression,” Dranoff said.

    Erinn Rankin, a postdoctoral fellow in the Giaccia lab, carried out proof of principle experiments that paved the way for this study.

    Other co-authors on the Nature Chemical Biology paper include Douglas Jones, a former doctoral student, and Shiven Kapur, a postdoctoral scholar, both of Cochran’s lab, who contributed to the protein engineering and structural characterization, respectively.

    Cochran said Stanford’s support for interdisciplinary research made this work possible.

    Stanford ChEM-H (Chemistry, Engineering & Medicine for Human Health) provided seed funds that allowed Cochran and Mathews to collaborate on protein structural studies.

    The Stanford Wallace H. Coulter Translational Research Grant Program, which supports collaborations between engineers and medical researchers, supported the efforts of Cochran and Giaccia to apply cutting-edge bioengineering techniques to this critical medical need.

    See the full article here.

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


    ScienceSprings relies on technology from MAINGEAR computers, Lenovo and Dell.

     
  • richardmitnick 8:38 am on September 12, 2014 Permalink | Reply
    Tags: Stanford University

    From Stanford: “Stanford engineers help describe key mechanism in energy and information storage” 


    September 11, 2014
    Bjorn Carey

    By observing how hydrogen is absorbed into individual palladium nanocubes, Stanford materials scientists have detailed a key step in storing energy and information in nanomaterials. The work could inform research that leads to longer-lasting batteries or higher-capacity memory devices.

    The palladium nanocubes viewed through a transmission electron microscope. Each black dot is a palladium atom.

    The ideal energy or information storage system is one that can charge and discharge quickly, has a high capacity and can last forever. Nanomaterials show promise for meeting these criteria, but scientists are only beginning to understand the mechanisms by which they work.

    Now, a team of Stanford materials scientists and engineers has provided new insight into the storage mechanism of nanomaterials that could facilitate development of improved batteries and memory devices.

    The team, led by Jennifer Dionne, assistant professor of materials science and engineering at Stanford, and consisting of Andrea Baldi, Tarun Narayan and Ai Leen Koh, studied how metallic nanoparticles composed of palladium absorbed and released hydrogen atoms.

    Scientists have previously studied hydrogen absorption in ensembles of metallic nanoparticles, but this approach makes it difficult to infer how individual nanoparticles behave. The new study reveals that behavior by measuring the hydrogen content of individual palladium nanoparticles exposed to increasing pressures of hydrogen gas.

    The group’s experimental findings are consistent with a mechanism recently proposed for energy storage in lithium ion batteries, underscoring the work’s relevance to the broader scientific community. The work is detailed online in the journal Nature Materials.

    The finding was made possible by the use of a specialized transmission electron microscope (TEM) that allowed the team to detect, with near atomic-scale resolution, the process by which hydrogen entered the nanomaterial.

    “Electron microscopy must ordinarily be conducted in high vacuum,” said co-author Ai Leen Koh, a research scientist with the Stanford Nano Shared Facilities. “But the unique capabilities of Stanford’s environmental TEM obviate this requirement, enabling the study of individual nanoparticles both in vacuum and while immersed in a reactive gas.”
    Stretching metal

    The researchers synthesized palladium nanocubes and then dispersed them onto a very thin membrane. After placing the membrane in the TEM, the engineers flowed hydrogen gas past the palladium nanoparticles and gradually increased its pressure.

    At sufficiently high pressures of hydrogen, the gas molecules dissociate on the surface of the nanocubes and individual hydrogen atoms enter the spaces between the palladium atoms. Interestingly, the absorption and desorption processes appear to be quite sudden.

    “You can think of it like popcorn,” said co-lead author Tarun Narayan, a graduate student in Dionne’s group. “It’s a very binary process, and a pretty sharp transition. Either the hydrogen is in the palladium or it’s not, and it enters and leaves at predictable pressures. And that’s quite important for a good energy storage system.”

    As the hydrogen enters the palladium nanostructure, the material’s volume increases by about 10 percent. This expansion significantly alters the way in which the particle interacts with the electron beam; this disruption indicates the amount of hydrogen absorbed. Because the nanocubes are single-crystalline and effectively “unbound” from the membrane, the researchers were able to study and measure the storage mechanism in unprecedented detail.

    “You have to stretch the palladium to put the hydrogen inside, but you have to pay energy to make it stretch,” said Andrea Baldi, a postdoctoral researcher in Dionne’s group. “Knowing that cost is very important for any battery designs, and because our nanostructures are not glued to a substrate, we’re able to quantify that stretch more accurately than ever before.”
    Next up: Palladium spheres

    Despite the stress of repeated expansion and contraction, the nanocrystals of palladium were not damaged by hydrogen absorption and desorption, as usually happens in larger specimens.

    “At the nanoscale, materials behave quite differently than they do in bulk,” said Dionne, the senior author. “Their increased surface area to volume ratio can significantly impact their mechanical flexibility and, consequently, their ability to charge and discharge ions or atoms.”

    In particular, this research indicates that nanoparticles can load hydrogen more easily, and at much lower pressures, than bulk materials. Further, because they can accommodate more elastic strain, the formation of defects in these materials is suppressed.

    “Our results suggest that particles in this size regime don’t develop defects even if charged and discharged with hydrogen multiple times,” Narayan said. “Other researchers are starting to see this in lithium ion battery research as well, and we think a lot of what we’ve learned can be applied to that research.”

    Because of its fast storage speeds, stability and ease of loading, the hydrogenation of palladium is an excellent model system for studying general energy and information storage mechanisms. Palladium, however, is not a likely material for widespread energy storage – it is too heavy and expensive. Yet the researchers believe the results could be replicated with other systems involving storing hydrogen in metals.

    The next steps involve applying the newly developed single-particle method to a wide range of nanostructures – spheres and rods, for example – to study how storage can be affected by the shape, size and crystallinity of a nanoparticle. Furthermore, they plan to use the electron microscope to determine exactly where atoms or ions are preferentially absorbed within a single nanoparticle.

    Dionne is an affiliate of the Stanford Institute for Materials and Energy Sciences (SIMES) and SLAC National Accelerator Laboratory. The research was supported by funding from the National Science Foundation, the Air Force Office of Scientific Research, the U.S. Department of Energy, a Young Energy Scientist Fellowship and a Hellman Fellowship.

    See the full article here.

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


    ScienceSprings relies on technology from MAINGEAR computers, Lenovo and Dell.

     
  • richardmitnick 7:01 pm on August 6, 2014 Permalink | Reply
    Tags: Stanford University

    From Stanford University: “Incredible cartilage” 


    Sara Wykes

    Focusing on gristle in the effort to improve joint replacements

    Constance Chu was a medical student observing a surgery performed by her teacher when she caught her first glimpse of human articular cartilage, the smooth, glistening coating that covers the ends of bones as they meet at the ankle, knee and hip.

    Illustration by Jon Han
    “You only have one chance at this,” her teacher, Henry Mankin, MD, chief of orthopaedic surgery at Massachusetts General Hospital, told her. “If you damage this cartilage, it doesn’t grow back.”

    This was the early ’90s, and Mankin was considered one of the 20th century’s leaders in research on cartilage — especially articular cartilage, which was thought to be incapable of recovery from injury because it lacks nerves and blood, the body’s two most important tools for healing. Its basic metabolism was believed to be so slow that the tissue was considered nearly inert. With that set of characteristics, the only hope for damaged joints was to replace them with something artificial.

    Although she started her career replacing joints with artificial materials, Chu is now a Stanford professor of orthopaedic surgery, treating the kind of cartilage and ligament injuries that typically lead to joint replacement. She is convinced, however, that articular cartilage can heal itself. She and several Stanford colleagues are researching ways to predict and track the damage to this all-important bone protector, to find new approaches to its repair and to stem the rapidly rising flood of people whose joints are wearing out.

    “The next generation of orthopedic devices,” says William Maloney, MD, professor and chair of orthopaedics at Stanford, “is going to be biologic in nature: protein and cells, not metal and plastic.”

    Cartilage research has only recently gained wider interest. In fact, when she was a young researcher looking for ways to grow cartilage from stem cells and to capture images of articular cartilage behavior, Chu says, “people were acting like I was crazy. Now everybody wants to be able to do it.”

    Understanding articular cartilage is at the heart of that next generation of orthopaedic devices, pushed by a rapidly rising need for joint replacement. Many people — 27 million of them in the United States — are familiar with the pain caused by damaged articular cartilage, otherwise known as osteoarthritis. That condition is the primary impetus for the knee and hip replacements already given to more than 7 million Americans. Osteoarthritis is distinctly age-related, so the aging of the 49- to 68-year-old baby boomers — now about 15 percent of the population and estimated to rise to nearly 20 percent by 2030 — will push even higher the numbers for osteoarthritis and the joint replacements that usually follow.

    Just last year, another 800,000 knees and hips were replaced. Joint replacement numbers are rising so fast that the American Academy of Orthopedic Surgeons projects that by 2030 the combined demand for hip and knee replacements may outstrip the availability of surgeons to perform the procedures.

    The current plastic and metal replacement parts are good but not perfect, and don’t function as well as a normal joint. Ultimately many of the implants must themselves be replaced. The metal alloys in implants can corrode; plastics, too, will wear out. And metal particles shed by some implants can destroy healthy tissue or cause poisoning.

    Cost is also a driver. In 2005, orthopaedic-implant costs in the United States were $5 billion, double what they had been in 2002. Now, nearly half of Medicare’s annual $20 billion tally for implanted medical device coverage is spent on orthopaedics. Effective prevention or earlier biologic treatment might reduce the rate of replacements and the subsequent cost of those surgeries.

    Orthopaedists are now aiming their work at the key puzzle of how bones and articular cartilage behave. Articular cartilage is perhaps the most challenging component of developing new biologic devices for joints. Most of us might look to our bones as the workhorse of our skeleton, but it is articular cartilage that, ounce for ounce, does the most with the least. Generally no thicker than a dime, it helps our joints remain strong against forces that with each step can add up to three times our body weight.

    No small job, that. The average adult takes 1.2 million steps annually. Stair climbing triples the load joints bear. Mankin and two co-authors of a 2005 lecture on articular cartilage called it the biggest contributor to the “extraordinary functional capacities” of the joints it protects, allowing those joints to move with a level of friction less than any artificial substitute, putting to scorn all machinery, including the metal joint replacements then available.

    The five zones of articular cartilage’s internal architecture are a marvel of functional design — a series of distinctively different cellular arrangements that control and direct water, the main component of articular cartilage. That water acts as the primary weight-bearing element in the cartilage. The cartilage’s layers — some horizontal, some vertical and some in random array — work with the cells’ biochemical reactions to manipulate water within cartilage. “Mother Nature did a brilliant job of engineering,” says Jason Dragoo, MD, associate professor of orthopaedic surgery at Stanford, “to the point that it is difficult to re-create. This is one of the body’s most complex tissues.”

    If researchers succeed in re-creating articular cartilage, it won’t be the first time that a natural substance has been chosen to replace a damaged joint. The first experiments in joint replacement began in the late 19th century with a German physician who used ivory to replace a young woman’s knee. He had already tried aluminum, wood, glass and nickel-plated steel. In the 1930s, an American doctor tested a tempered glass called Pyrex before finding a chrome-cobalt alloy to be more stable.

    The surgery has evolved since the first total knee replacement in 1968. Surgeons make a long incision from about 2 inches above the knee to about 2 inches below. The surgeon cleans and prepares the ends of the thighbone and the top of the shinbone to accommodate the replacement parts. The thighbone is capped with a metal covering that mimics its old, rounded end. Into the top of the shinbone, surgeons insert a stem that will support a circular, plate-shaped metal covering. On top of that covering rests a similarly shaped layer of plastic whose upper surface is curved inward to accept the rounded end of the thighbone. The back of the kneecap is fitted with a metal or ceramic button. With those components in place, the thighbone is rotated around on the shinbone’s tray, with the patella in place to cover the joint.

    The great hope is that insight into the biology of cartilage will allow damaged cartilage to revive, making such drastic intervention unnecessary.

    Clinical trials are taking place around the world to test implants made of materials designed to stimulate new bone and cartilage formation. Many of these materials, however, are created from cadaver tissue, which isn’t easy to come by. Treatments that rely on the patient’s own cells to make replacement cartilage are also plentiful, though not very successful so far, Maloney says. It will take another decade before cell-based cartilage repair will protect joints well enough for any activity that stresses our knees and hips beyond basic movement, he says.

    Later this year, Dragoo plans to start testing a knee joint repair treatment that uses stem cells from the fat pad under the kneecap as a repair material. He will harvest those cells using minimally invasive instruments, put them in a centrifuge to concentrate them, add biologic glue made from blood, and insert that mix into the cartilage defect. “We think the fat pad is there for a reason,” Dragoo says. “We’re taking an immature cell and supplying it with the right environment in the hopes that it stays a cartilage cell.”

    Even more reliable, Dragoo says, will be the ability to instruct a 3-D printer to re-create articular cartilage. That may be possible in a couple of years on a small scale to test as a repair for the pothole version of cartilage defects. “And when we can treat potholes,” Dragoo says, “then we can resurface the whole street.”

    Other Stanford researchers are focused not only on better understanding how to work with transplanted forms of cartilage replacements, but also on how to prevent and predict cartilage damage. Marc Safran, MD, professor of orthopaedic surgery, has years of experience treating athletes’ cartilage injuries. He and Garry Gold, MD, a professor of radiology, are studying the knees of marathon runners using imaging to capture what the stress of running does to cartilage, and to investigate ways to prevent such damage. Safran has also been identifying the anatomic differences that make someone more vulnerable to joint damage. “If we can prevent this damage from happening, that will be the real key.”

    That kind of research is valuable because articular cartilage can be damaged by more than just the aging process. If the contact points of the knee joint are altered, then the cartilage’s protective barrier no longer makes contact properly. And the most often damaged element of that arrangement is the anterior cruciate ligament, known more colloquially as the ACL. Young athletes who tear their ACLs set in motion a deterioration of cartilage that can lead to early osteoarthritis and early joint replacement.

    Cartilage and the discs between vertebrae in the spine have many similarities. Another professor of orthopaedic surgery, Serena Hu, MD, has focused on the discs of the spine, searching for new ways to preserve disc strength and function. “By the time a patient comes in with a worn-out disc,” Hu says, “it’s too late to repair or regenerate it. We want to be able to predict if someone with early degenerated, non-painful discs is likely to develop more-degenerated, painful discs. Understanding more about the genetics of disc degeneration will help us determine who will benefit from early intervention.” She has also seen in her research that movement of the spine reduces deterioration. “I’ve always believed that you should stay active,” she says.

    When Chu started her career, she was one of only a few researchers working on articular cartilage, but now she has plenty of collaborators. In fact, the International Cartilage Repair Society, formed in 1997, now has more than 1,300 members in 64 countries. At Stanford, Chu and Tom Andriacchi, PhD, a professor of mechanical engineering and of orthopaedic surgery, are studying how abnormal movement patterns damage articular cartilage. She is working with radiologist Gold on the next generation of MRI techniques to detect cartilage behavior. And she is collaborating with Bill Robinson, MD, PhD, an associate professor of medicine, to develop a blood test “to give us an idea of what is going on with articular cartilage without having to do imaging,” she says.

    But her longest-running project, funded since 2006 by the National Institutes of Health, seeks a way to diagnose osteoarthritis noninvasively before joints start hurting. The key is to recognize damage inside cartilage before the tissue is beyond repair. The current method for diagnosis, arthroscopy, is a surgical procedure in which a camera is inserted inside the joint. She has been looking for a noninvasive alternative.

    So Chu is thrilled by the results of one of her recent experiments, published this summer. The study examined the ability of a new imaging technique called ultrashort echo time MRI mapping to assess cartilage health. It was a small study of 42 subjects: 31 with ACL tears and 11 uninjured. It showed that the MRI method was able to detect damage, and something far more exciting, something that her mentor told her more than 20 years ago was impossible — that articular cartilage could recover. It took time, but after a new type of ACL reconstruction and a year of rest, most of the subjects’ injured cartilage did heal.

    See the full article here.

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


    ScienceSprings relies on technology from MAINGEAR computers, Lenovo and Dell.

     
  • richardmitnick 2:53 pm on August 5, 2014 Permalink | Reply
    Tags: Stanford University

    From Stanford University: “Powerful tool could unlock secrets of Earth’s interior ocean” 


    July 28, 2014
    Ker Than

    A new way of determining the hydrogen content in mantle rocks could lead to improved estimates of Earth’s interior water and a better understanding of our planet’s early evolution.

    Graduate student Suzanne Birner and former postdoctoral researcher Lars Hansen collect structural data in the Josephine Peridotite in Oregon. (Megan D’Errico)

    A new technique for determining the hydrogen content of mantle rocks could lead to more precise estimates of how much water is contained in Earth’s deep interior and an improved understanding of our planet’s early evolution.

    The rocks that make up the planet’s mantle, which extends from about 20 to 1,800 miles beneath the Earth’s surface, harbor hydrogen atoms within their crystal structures. Scientists estimate that if all of that hydrogen were converted to water – by combining with the oxygen that is naturally found in the planet’s interior – it would equal between half and four times as much water as is found in all of the Earth’s oceans combined. “Scientists used to think that there was not a lot of water inside the Earth because mantle minerals weren’t thought to be able to contain much water,” said Jessica Warren, an assistant professor in the department of Geological and Environmental Sciences at Stanford University.

    In the late 1980s, however, scientists realized that minerals that were considered anhydrous, or lacking in water, actually can contain hydrogen atoms, but only at concentrations of several parts per million. “That sounds minuscule, but if you multiply that by the volume of the mantle, it’s a very significant amount of water,” Warren said.

    The amount of water contained in mantle rocks is known to influence geological processes such as volcanic eruptions. “The amount of water present within the Earth controls how explosive a volcanic eruption will be,” Warren said, “because during an eruption, there is a rapid pressure change and water dissolved in the magma is released as gas.”

    Many scientists also suspect that mantle water directly influences the shift of the continents over geologic timescales. “The amount of water in the mantle controls its viscosity, or resistance to flow, and some scientists have argued that without water inside the Earth, you would not have plate tectonics,” Warren said.

    A better understanding of how much water is locked away inside Earth could also help constrain models of our planet’s early evolution. “One long-standing question is how much water did our planet contain when it formed?” Warren said. “If we don’t know how much water is within the Earth today, it’s hard to project back to the past and model the early Earth and understand its formation.”

    One reason that the estimates for how much water is inside the Earth vary so widely is that the mineral that scientists have traditionally used to estimate mantle water concentrations, called olivine, loses water over time. “Hydrogen diffuses out of olivine very quickly,” Warren explained. “Just the process of being transported from the mantle to the Earth’s surface results in water loss, so it’s difficult to estimate how much water an olivine sample once held.”

    In a recent study, Warren and Erik Hauri, a geochemist at the Carnegie Institution of Washington, propose using pyroxene–the second-most abundant mantle mineral after olivine–as a proxy for estimating mantle water.

    The pair analyzed several samples of peridotite, a rock that contains both olivine and different types of pyroxenes, which were collected from the seafloors of the Arctic and Indian Oceans and from a unique field site in Oregon. “When tectonic plates collide together, a slice of very deep material can get pushed up onto the crust,” Warren said. “At the field site in Oregon, we can actually walk around on what used to be the mantle.”

    By comparing pyroxenes in the field rocks with samples that had been synthesized in the lab, Warren and Hauri concluded that pyroxene retains water better than olivine. The pair suggests that pyroxenes could be a “powerful tool” for estimating the concentration and location of water bound in minerals in the upper mantle. “Our results suggest that pyroxene does not have olivine’s water-loss problem,” Warren said.

    It may be a while, however, before scientists can use pyroxenes to settle the question of just how much water is contained in the Earth’s mantle. “It’s a complicated calculation,” Warren said, “and we are still a long way off from actually being able to perform that estimate.”

    The pair’s research was published earlier this year in the Journal of Geophysical Research: Solid Earth and was recently featured in Eos, a publication of the American Geophysical Union.

    See the full article here.

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


    ScienceSprings relies on technology from MAINGEAR computers, Lenovo and Dell.

     
  • richardmitnick 8:09 am on July 24, 2014 Permalink | Reply
    Tags: Stanford University

    From physicsworld.com: “Plasmonic chip diagnoses diabetes” 

    Jul 23, 2014
    Belle Dumé

    A plasmonic chip that can diagnose type-1 diabetes (T1D) has been unveiled by researchers at Stanford University in the US. The chip is capable of detecting diabetes-related biomarkers such as insulin-specific autoantibodies and could be used in hospitals and doctors’ surgeries as a quick and simple way to detect early-stage T1D.

    Diabetes could affect nearly 370 million people worldwide by 2030, according to the World Health Organization. More worrying still, diabetes is now the second most common chronic disease in children. For reasons that are still unclear, the rate of T1D (also known as autoimmune diabetes) in children is increasing by about 3% every year, with a projected increase of a staggering 70% between 2004 and 2020.

    Although T1D was once thought of as being exclusively a childhood disease, around a quarter of individuals now contract it as adults. The rate of type-2 diabetes (T2D) (also called metabolic or diet-induced diabetes), normally seen in overweight adults, has also alarmingly escalated in children since the early 1990s, in part because of the global obesity epidemic. Until quite recently, it was fairly simple to distinguish between T1D and T2D because the diseases had occurred in different groups of people. However, this is becoming more and more difficult because the groups are beginning to overlap. The main problem is that existing diagnostic tests are slow and expensive, and it would be better to detect diabetes as early as possible to ensure the best possible treatment.
    Higher concentration of autoantibodies

    T1D is different from T2D in that patients with the disorder have a much higher concentration of autoantibodies. These are produced by the body and work against one or more pancreatic islet antigens such as insulin, glutamic acid decarboxylase and/or tyrosine phosphatase. Detecting these autoantibodies, and especially those against insulin (which are the first to appear), is therefore a good way to detect T1D. Again, standard tests are not very efficient and even the most widely used technique, radioimmunoassay (RIA) with targeted antigens, is far from ideal because it is slow and relies on toxic radioisotopes.

    In an attempt to overcome these problems, the Stanford researchers have developed an autoantibody test that is more reliable, simple and faster than RIA and similar tests. It comprises a microarray of islet antigens arranged on a plasmonic gold (pGOLD) chip. It can be used to diagnose T1D by detecting the interaction of autoantibodies in a small blood sample with insulin, GAD65 and IA-2, and potentially new biomarkers of the disease. It works with just 2 µL of whole human blood (from a finger-prick sample, for example) and results can be obtained in the same day.

    Good as gold: detecting diabetes with plasmons

    Enhancing the fluorescence emission

    The team, led by Hongjie Dai, made its pGOLD chip by uniformly coating glass slides with gold nanoparticles that have a surface plasmon resonance in the near-infrared part of the electromagnetic spectrum. Plasmons are collective oscillations of the conduction electrons on the surfaces of the nanoparticles. They allow the nanoparticles to act like tiny antennas, absorbing light at certain resonant frequencies and transferring it efficiently to nearby molecules.

    The result can be a large boost in the fluorescence of the molecule, and the researchers have shown that the pGOLD chip is capable of enhancing the fluorescence emission of near-infrared tags of biological molecules by around 100 times. Together with Brian Feldman’s group, the researchers robotically printed the islet antigens in triplicate spots onto the plasmonic gold slide to create a chip containing a microarray of antigens.

    “We tested our device by applying 2 µL of human serum or blood (diluted by 10 or 100 times) to it,” explains Dai. “If the sample contains autoantibodies that match one or more of the islet antigens on the chip, those antibodies bind to the specific antigens, which are then tagged by a secondary antibody with a near-infrared dye to make the islet spots brightly fluoresce.”
    Antibody detected at much lower concentrations

    The samples came from Feldman’s patients who had new-onset diabetes. They were tested against non-diabetic controls at Stanford University Medical Center.

    The antigen spots fluoresce 100 times more brightly thanks to the plasmonic gold substrate, which allows the antibody to be detected at much lower concentrations (down to just 1 femtomolar) than if ordinary gold were to be employed in the microarray platform.

    “We believe that our technology will be able to address the current clinical need for improved diabetes diagnostics,” Dai says. “The pGOLD platform is also being commercialized by a new start-up company, Nirmidas Biotech, based in San Francisco, aimed at better detecting proteins for a range of research and diagnostic applications. It might even be able to detect biomarkers for other diseases such as heart disease with ultrahigh sensitivity.”

    The researchers describe their plasmonic chip in Nature Medicine.

    This article first appeared on nanotechweb.org

    See the full article here.

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 4:05 pm on July 10, 2014 Permalink | Reply
    Tags: Stanford University

    From SLAC: “Uncertainty Gives Scientists New Confidence in Search for Novel Materials “ 



    July 10, 2014
    Andrew Gordon, agordon@slac.stanford.edu, (650) 926-2282

    Scientists at Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory have found a way to estimate uncertainties in computer calculations that are widely used to speed the search for new materials for industry, electronics, energy, drug design and a host of other applications. The technique, reported in the July 11 issue of Science, should quickly be adopted in studies that produce some 30,000 scientific papers per year.

    “Over the past 10 years our ability to calculate the properties of materials and chemicals, such as reactivity and mechanical strength, has increased enormously. It’s totally exploded,” said Jens Nørskov, a professor at SLAC and Stanford and director of the SUNCAT Center for Interface Science and Catalysis, who led the research.

    “As more and more researchers use computer simulations to predict which materials have the interesting properties we’re looking for – part of a process called ‘materials by design’ – knowing the probability for error in these calculations is essential,” he said. “It tells us exactly how much confidence we can put in our results.”

    Nørskov and his colleagues have been at the forefront of developing this approach, using it to find better and cheaper catalysts to speed ammonia synthesis and generate hydrogen gas for fuel, among other things. But the technique they describe in the paper can be broadly applied to all kinds of scientific studies.

    This image shows the results of calculations aimed at determining which of six chemical elements would make the best catalyst for promoting an ammonia synthesis reaction. Researchers at SLAC and Stanford used Density Functional Theory (DFT) to calculate the strength of the bond between nitrogen atoms and the surfaces of the catalysts. The bond strength, plotted on the horizontal axis, is a key factor in determining the reaction speed, plotted on the vertical axis. Based on thousands of these calculations, which yielded a range of results (colored dots) that reveal the uncertainty involved, researchers estimated an 80 percent chance that ruthenium (Ru, in red) will be a better catalyst than iron (Fe, in orange). (Andrew Medford and Aleksandra Vojvodic/SUNCAT, Callie Cullum)

    Speeding the Material Design Cycle

    The set of calculations involved in this study is known as DFT, for Density Functional Theory. It predicts bond energies between atoms based on the principles of quantum mechanics. DFT calculations allow scientists to predict hundreds of chemical and materials properties, from the electronic structures of compounds to density, hardness, optical properties and reactivity.

    Because researchers use approximations to simplify the calculations – otherwise they’d take too much computer time – each of these calculated material properties could be off by a fairly wide margin.

    To estimate the size of those errors, the team applied a statistical method: They calculated each property thousands of times, each time tweaking one of the variables to produce slightly different results. That variation in results represents the possible range of error.
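
    In other words, the error bar comes from an ensemble of repeated calculations with perturbed model parameters. A minimal sketch of that bookkeeping, with a toy stand-in for the DFT step and invented numbers, might look like this:

    import random
    import statistics

    def calculated_property(perturbation):
        # Stand-in for one DFT calculation of some property, e.g. a bond energy in eV.
        # In the real workflow the approximations in the functional are what get tweaked;
        # here a toy linear model with made-up coefficients keeps the sketch runnable.
        return -0.58 + 0.35 * perturbation

    def estimate_with_uncertainty(n_samples=2000):
        # Recompute the property many times with slightly different parameters and
        # report the spread of the results as the estimated error.
        samples = [calculated_property(random.gauss(0.0, 1.0)) for _ in range(n_samples)]
        return statistics.mean(samples), statistics.stdev(samples)

    mean, sigma = estimate_with_uncertainty()
    print(f"property = {mean:.2f} +/- {sigma:.2f} eV")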

    “Even with the estimated uncertainties included, when we compared the calculated properties of different materials we were able to see clear trends,” said Andrew J. Medford, a graduate student with SUNCAT and first author of the study. “We could predict, for instance, that ruthenium would be a better catalyst for synthesizing ammonia than cobalt or nickel, and say what the likelihood is of our prediction being right.”

    An Essential New Tool for Thousands of Studies

    DFT calculations are used in the Materials Genome Initiative to search through millions of solids and compounds, and they are also widely used in drug design, said Kieron Burke, a professor of chemistry and physics at the University of California-Irvine who was not involved in the study.

    “There were roughly 30,000 papers published last year using DFT,” he said. “I believe the technique they’ve developed will become absolutely necessary for these kinds of calculations in all fields in a very short period of time.”

    Thomas Bligaard, a senior staff scientist in charge of theoretical method development at SUNCAT, said the team has a lot of work ahead in implementing these ideas, especially in calculations attempting to make predictions of new phenomena or new functional materials.

    Other researchers involved in the study were Jess Wellendorff, Aleksandra Vojvodic, Felix Studt, and Frank Abild-Pedersen of SUNCAT and Karsten W. Jacobsen of the Technical University of Denmark. Funding for the research came from the DOE Office of Science.

    See the full article here.

    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 12:12 pm on June 24, 2014 Permalink | Reply
    Tags: Stanford University

    From SLAC Lab: “SLAC, Stanford Scientists Play Key Roles in Confirming Cosmic Inflation” 



    March 19, 2014
    Glennda Chui

    Chao-Lin Kuo and Kent Irwin Helped Develop Technology for Imaging Gravitational Waves

    Two scientists at Stanford University and SLAC National Accelerator Laboratory made key contributions to the discovery of the first direct evidence for cosmic inflation – the rapid expansion of the infant universe in the first trillionth of a trillionth of a trillionth of a second after the Big Bang.

    Chao-Lin Kuo is one of four co-leaders of the BICEP2 collaboration that announced the discovery on Monday. An assistant professor at SLAC and Stanford, he led the development of the BICEP2 detector and is building the BICEP3 follow-on experiment in his Stanford lab for deployment at the South Pole later this year.

    Chao-Lin Kuo at the South Pole research station where the BICEP2 experiment operated from 2010 to 2012. (Photo courtesy of Chao-Lin Kuo)


    Kent Irwin invented the type of sensor used in BICEP2 as a graduate student at Stanford, adapted it for X-ray experiments and studies of the cosmos during a 20-year career at the National Institute of Standards and Technology, and returned to SLAC and Stanford as a professor in September to lead a major initiative in sensor development.

    Kent Irwin (Matt Beardsley/SLAC)

    Both are members of the Kavli Institute for Particle Physics and Astrophysics (KIPAC), which is jointly run by SLAC and Stanford.

    “It’s exciting that the same technology I developed as a grad student to search for tiny particles of dark matter is also being used to do research on the scale of the universe and to study the practical world of batteries, materials and biology in between,” Irwin said. His group is working toward installing a version of the BICEP2 sensors at SLAC’s X-ray light sources – Stanford Synchrotron Radiation Lightsource (SSRL) and Linac Coherent Light Source (LCLS) – as well as at a planned LCLS upgrade.

    Searching for Ripples in Space-time

    BICEP is a series of experiments that began operating at the South Pole in January 2006, taking advantage of the cold, clear, dry conditions to look for a faint, swirling polarization of light in the Cosmic Microwave Background (CMB) radiation. The light in the CMB dates back to 380,000 years after the Big Bang; before that, the early universe was opaque and no light could get through.

    The cosmic microwave background as mapped by ESA’s Planck mission

    But some theories predicted that gravitational waves – ripples in space-time – would have been released in the first tiny fraction of a second after the Big Bang, as the universe expanded exponentially in what is known as “cosmic inflation.” If that were the case, scientists might be able to detect the imprint of those waves in the form of a slight swirling pattern known as “B-mode polarization” in the CMB.

    On Monday, researchers from the BICEP2 experiment, which ran from January 2010 through December 2012, announced that they had found that smoking-gun signature, confirming the rapid inflation that had been theorized more than 30 years ago by Alan Guth and later modified by Andrei Linde, a Russian theorist who is now at Stanford.

    Building a Better Detector

    Kuo started working on BICEP1 as a postdoctoral researcher at Caltech in 2003. The circuitry in the experiment’s detectors was all made by hand. For the next-generation detector, BICEP2, the collaborating scientists wanted something that could be mass-produced in larger quantities, allowing them to pack more sensors into the array and collect data 10 times faster. So Kuo also started designing that technology, which used photolithography – a standard tool for making computer chips – to print sensors onto high-resolution circuit boards.

    The sun sets behind BICEP2 (in the foreground) and the South Pole Telescope (in the background). (Steffen Richter, Harvard University)

    The BICEP2 detector shown in this electron-beam micrograph works by converting the light from the cosmic microwave background into heat. A titanium film held at its transition to the superconducting state acts as a sensitive thermometer for measuring this heat. The sensors are cooled to just 0.25 degrees above absolute zero to minimize thermal noise. (Anthony Turner, JPL)

    In 2008 Kuo arrived at SLAC and Stanford and began working on the next-generation experiment, BICEP3, for which he is principal investigator. Scheduled for deployment at the South Pole later this year, BICEP3 will look at a larger patch of the sky and collect data 10 times faster than its predecessor; it’s also more sensitive and more compact.

    SLAC took on a bigger role in this research in October 2013 by awarding up to $2 million in Laboratory Directed Research and Development funding over three years for the “KIPAC Initiative for Cosmic Inflation,” with Kuo as principal investigator. The grant establishes a large-scale Cosmic Microwave Background program at the lab, with part of the funding going toward BICEP3, and has a goal of establishing KIPAC as a premier institute for the study of cosmic inflation. There are also plans to establish a comprehensive development, integration, and testing center at SLAC for technologies to further explore the CMB, which holds clues not only to gravitational waves and cosmic inflation but also to dark matter, dark energy and the nature of the neutrino.

    A Fancy Thermometer for Tiny Signals

    Kent Irwin entered the picture in the early 1990s, while a graduate student in the laboratory of Stanford/SLAC Professor Blas Cabrera. There he invented the superconducting Transition Edge Sensor, or TES, for the Cryogenic Dark Matter Search, which is trying to detect incoming particles of dark matter in a former iron mine in Minnesota. When he moved to NIST, he and his team adapted the technology for other uses and also developed a very sensitive way to read out the signal from the sensors with devices known as SQUID multiplexers.

    Printing TES devices on circuit boards and using the SQUID multiplexers to read them out made it possible to create large TES arrays and greatly expanded their applications in astronomy, nuclear non-proliferation, materials analysis and homeland defense. It was also the key factor in allowing the BICEP team to expand the number of detectors in its experiments from 98 in BICEP1 to 500 in BICEP2, and it opens the path to even larger arrays that will greatly increase the sensitivity of future experiments.

    A TES is “basically a very fancy thermometer,” Irwin says. “We’re measuring the power coming from the CMB.” The TES receives a microwave signal from an antenna and translates it into heat; the heat then warms a piece of metal that’s chilled to the point where it hovers on the edge of being superconducting – conducting electricity with 100 percent efficiency and no resistance. When a material is at this edge, a tiny bit of incoming heat causes a disproportionately large change in resistance, giving scientists a very sensitive way to measure small temperature changes. The TES devices for BICEP2 were built at NASA’s Jet Propulsion Laboratory, and Irwin’s team at NIST made the SQUID multiplexers.
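    To get a feel for why sitting on the superconducting transition makes such a sensitive thermometer, here is a minimal numerical sketch. It is a toy model with illustrative numbers, not the BICEP2 readout code: resistance is treated as a steep smooth step around a transition temperature, and the same tiny temperature rise is applied on and off the transition.

```python
import numpy as np

# Toy model of a transition edge sensor (TES): resistance rises steeply
# from zero (superconducting) to its normal value over a very narrow
# temperature range. All numbers are illustrative.
T_c = 0.5          # transition temperature in kelvin
width = 1e-4       # effective transition width in kelvin
R_normal = 0.1     # normal-state resistance in ohms

def resistance(T):
    """Smooth step from superconducting (R ~ 0) to normal (R ~ R_normal)."""
    return R_normal / (1.0 + np.exp(-(T - T_c) / width))

dT = 10e-6  # a 10-microkelvin temperature rise from absorbed microwave power

on_edge = resistance(T_c + dT) - resistance(T_c)
off_edge = resistance(T_c + 0.01 + dT) - resistance(T_c + 0.01)
print(f"on the transition:  dR = {on_edge*1e3:.3f} milliohm")
print(f"off the transition: dR = {off_edge*1e3:.9f} milliohm")
```

    The disproportionately large resistance change on the transition edge, compared with essentially no change away from it, is the effect Irwin describes.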

    The Road Ahead

    Looking ahead, CMB researchers in the United States developed a roadmap leading to a fourth-generation experiment as part of last year’s Snowmass Summer Study, which lays out a long-term direction for the national high energy physics research program. That experiment would deploy hundreds of thousands of detector sensors and stare at a much broader swath of the cosmos at an estimated cost of roughly $100 million.

    “These are incredibly exciting times, with theory, technology and experiment working hand in hand to give us an increasingly clear picture of the very first moments of the universe,” said SLAC Lab Director Chi-Chang Kao. “I want to congratulate everyone in the many collaborating institutions who made this spectacular result possible. We at SLAC are looking forward to continuing to invest and work in this area as part of our robust cosmology program.”

    See the full article here.

    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 5:49 am on June 3, 2014 Permalink | Reply
    Tags: , , Stanford University,   

    From The Kavli Institute at Stanford: “Solving big questions requires big computation” 

    KavliFoundation

    The Kavli Foundation

    Understanding the origins of our solar system, the future of our planet, or humanity itself requires complex calculations run on high-power computers.

    A common thread among research efforts across Stanford’s many disciplines is the growing use of sophisticated algorithms, run by brute computing power, to solve big questions.

    In Earth sciences, computer models of climate change or carbon sequestration help drive policy decisions, and in medicine computation is helping unravel the complex relationship between our DNA and disease risk. Even in the social sciences, computation is being used to identify relationships between social networks and behaviors, work that could influence educational programs.


    “There’s really very little research that isn’t dependent on computing,” says Ann Arvin, vice provost and dean of research. Arvin helped support the recently opened Stanford Research Computing Center (SRCC) located at SLAC National Accelerator Laboratory, which expands the available research computing space at Stanford. The building’s green technology also reduces the energy used to cool the servers, lowering the environmental costs of carrying out research.

    “Everyone we’re hiring is computational, and not at a trivial level,” says Stanford Provost John Etchemendy, who provided an initial set of servers at the facility. “It is time that we have this facility to support those faculty.”

    Here are just a few examples of how Stanford faculty are putting computers to work to crack the mysteries of our origins, our planet and ourselves.

    Myths once explained our origins. Now we have algorithms.

    Our Origins

    Q: How did the universe form?

    For thousands of years, humans have looked to the night sky and created myths to explain the origins of the planets and stars. The real answer could soon come from the elegant computer simulations conducted by Tom Abel, an associate professor of physics at Stanford.

    Cosmologists face an ironic conundrum. By studying the current universe, we have gained a tremendous understanding of what occurred in the fractions of a second after the Big Bang, and how the first 400,000 years created the ingredients – gases, energy, etc. – that would eventually become the stars, planets and everything else. But we still don’t know what happened after those early years to create what we see in the night sky.

    “It’s the perfect problem for a physicist, because we know the initial conditions very well,” says Abel, who is also director of the Kavli Institute for Particle Astrophysics and Cosmology at SLAC. “If you know the laws of physics correctly, you should be able to exactly calculate what will happen next.”

    Easier said than done. Abel’s calculations must incorporate the laws of chemistry, atomic physics, gravity, how atoms and molecules radiate, gas and fluid dynamics and interactions, the forces associated with dark matter and so on. Those processes must then be simulated out over the course of hundreds of millions, and eventually billions, of years. Further complicating matters, a single galaxy holds one billion moving stars, and the simulation needs to consider their interactions in order to create an accurate prediction of how the universe came to be.
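    As a very rough illustration of why this is so demanding, the sketch below integrates a handful of gravitating particles by direct summation, the simplest possible N-body scheme: every particle feels every other, so the cost per step grows as N². It is a toy with illustrative units, no gas, chemistry or radiation, and is not the code Abel's group runs.

```python
import numpy as np

# Minimal gravitational N-body sketch with leapfrog time stepping.
# Real cosmological codes add gas dynamics, chemistry, radiation and dark
# matter, and use far cleverer algorithms; this only shows the brute-force
# direct-summation core that makes large N so expensive (O(N^2) per step).
rng = np.random.default_rng(0)
N = 200                      # particles (illustrative; real runs use billions)
G, soft, dt = 1.0, 0.05, 1e-3

pos = rng.normal(size=(N, 3))
vel = np.zeros((N, 3))
mass = np.full(N, 1.0 / N)

def accelerations(pos):
    # Pairwise separations, softened to avoid singularities at close approach.
    d = pos[None, :, :] - pos[:, None, :]          # d[i, j] = pos_j - pos_i
    r2 = (d ** 2).sum(-1) + soft ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                  # no self-force
    return G * (d * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

acc = accelerations(pos)
for step in range(100):                            # a few leapfrog steps
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = accelerations(pos)
    vel += 0.5 * dt * acc
```

    Real simulations replace this brute-force sum with much smarter algorithms precisely because the direct cost is unaffordable at a billion particles.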

    “Any of the advances we make will come from writing smarter algorithms,” Abel says. “The key point of the new facility is it will allow for rapid turnaround, which will allow us to constantly develop and refine and validate new algorithms. And this will help us understand how the very first things were formed in the universe.” —Bjorn Carey //

    Q: How did we evolve?

    The human genome is essentially a gigantic data set. Deep within each person’s six billion data points are minute variations that tell the story of human evolution, and provide clues to how scientists can combat modern-day diseases.

    To better understand the causes and consequences of these genetic variations, Jonathan Pritchard, a professor of genetics and of biology, writes computer programs that can investigate those links. “Genetic variation affects how cells work, both in healthy variation and in response to disease,” Pritchard says. How that variation displays itself – in appearance or how cells work – and whether natural selection favors those changes within a population drives evolution.

    Consider, for example, variation in the gene that codes for lactase, an enzyme that allows mammals to digest milk. Most mammals turn off the lactase gene after they’ve been weaned from their mother’s milk. In populations that have historically revolved around dairy farming, however, Pritchard’s algorithms have helped elucidate signals of strong selection since the advent of agriculture for variants that keep lactase active throughout life, allowing people to digest milk as adults. There has been similarly strong selection on skin pigmentation in non-Africans, favoring variants that allow better synthesis of vitamin D in regions where people are exposed to less sunlight.
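    The flavor of such a scan can be sketched in a few lines: compare allele frequencies between two populations and flag the loci that are unusually differentiated. The sketch below uses simulated frequencies and a crude F_ST-style statistic; it is only an illustration of the idea, not Pritchard’s actual method.

```python
import numpy as np

# Toy "selection scan": compare allele frequencies between two populations and
# flag SNPs with unusually strong differentiation. Simulated data, crude
# statistic -- an illustration of the idea, not the group's real machinery.
rng = np.random.default_rng(1)
n_snps = 10_000
p1 = rng.uniform(0.05, 0.95, n_snps)                        # population 1 frequencies
p2 = np.clip(p1 + rng.normal(0, 0.02, n_snps), 0.01, 0.99)  # population 2: mostly drift

# Plant a few SNPs where "selection" has shifted population 2 strongly.
selected = rng.choice(n_snps, 5, replace=False)
p1[selected] = rng.uniform(0.2, 0.5, 5)
p2[selected] = p1[selected] + 0.4

p_bar = (p1 + p2) / 2
fst = (p1 - p2) ** 2 / (4 * p_bar * (1 - p_bar))            # variance-based estimator

top = np.argsort(fst)[-5:]
print("planted:", sorted(selected.tolist()), "top-5 flagged:", sorted(top.tolist()))
```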

    The algorithms and machine learning methods Pritchard used have the potential to yield powerful medical insights. Studying variations in how genes are regulated within a population could reveal how and where particular proteins bind to DNA, or which genes are turned on in different cell types – information that could help in designing novel therapies. These inquiries generate hundreds of thousands of data sets and can require tens of thousands of hours of computer time to parse.

    Pritchard is bracing for an even bigger explosion of data; as genome sequencing technologies become less expensive, he expects the number of individually sequenced genomes to jump by as much as a hundredfold in the next few years. “Storing and analyzing vast amounts of data is a fundamental challenge that all genomics groups are dealing with,” says Pritchard, who is a member of Stanford Bio-X.

    “Having access to SRCC will make our inquiries go easier and more quickly, and we can move on faster to making the next discovery.” —Bjorn Carey //
    7 billion people live on Earth. Computers might help us survive ourselves.

    Our Planet
    Q: How can we predict future climates?

    There is no lab large enough to conduct experiments on the global-scale interactions between air, water and land that control Earth’s climate, so Stanford’s Noah Diffenbaugh and his students use supercomputers.

    Computer simulations reveal that if human emissions of greenhouse gases continue at their current pace, global warming over the next century is likely to occur faster than any global-scale shift recorded in the past 65 million years. This will increase the likelihood and severity of droughts, heat waves, heavy downpours and other extreme weather events.

    Climate scientists must incorporate into their predictions a growing number of data streams – including direct measurements as well as remote-sensing observations from satellites, aircraft-based sensors, and ground-based arrays.

    “That takes a lot of computing power, especially as we try to figure out how to use newer unstructured forms of data, such as from mobile sensors,” says Diffenbaugh, an associate professor of environmental Earth system science and a senior fellow at the Stanford Woods Institute for the Environment.

    Diffenbaugh’s team plans to use the increased computing resources available at SRCC to simulate air circulation patterns at the kilometer-scale over multiple decades. This has rarely been attempted before, and could help scientists answer questions such as how the recurring El Niño ocean circulation pattern interacts with elevated atmospheric carbon dioxide levels to affect the occurrence of tornadoes in the United States.

    “We plan to use the new computing cluster to run very large high-resolution simulations of climate over regions like the U.S. and India,” Diffenbaugh says. One of the most important benefits of SRCC, however, is not one that can be measured in computing power or cycles.

    “Perhaps most importantly, the new center is bringing together scholars from across campus who are using similar methodologies to figure out new solutions to existing problems, and hopefully to tackle new problems that we haven’t imagined yet.” —Ker Than //

    Q: How can we predict if climate solutions work?

    The capture and trapping of carbon dioxide gas deep underground is one of the most viable options for mitigating the effects of global warming, but only if we can understand how that stored gas interacts with the surrounding structures.

    Hamdi Tchelepi, a professor of energy resources engineering, uses supercomputers to study interactions between injected CO2 gas and the complex rock-fluid system in the subsurface.

    “Carbon sequestration is not a simple reversal of the technology that allows us to extract oil and gas. The physics involved is more complicated, ranging from the micro-scale of sand grains to extremely large geological formations that may extend hundreds of kilometers, and the timescales are on the order of centuries, not decades,” says Tchelepi, who is also the co-director of the Stanford Center for Computational Earth and Environmental Sciences (CEES).

    For example, modeling how a large plume of CO2 injected into the ground migrates and settles within the subsurface, and whether it might escape from the injection site to affect the air quality of a faraway city, can require the solving of tens of millions of equations simultaneously. SRCC will help augment the high computing power already available to Stanford Earth scientists and students through CEES, and will serve as a testing ground for custom algorithms developed by CEES researchers to simulate complex physical processes.
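    Where do all those equations come from? Every grid cell in the model contributes unknowns at every time step. The sketch below advances a one-dimensional advection-diffusion model of a concentration plume with made-up parameters; real CEES simulators solve coupled, nonlinear multi-phase flow in three dimensions, which is how the counts reach tens of millions.

```python
import numpy as np

# Minimal 1-D advection-diffusion sketch for a concentration plume.
# One equation per grid cell per time step; in 3-D, with realistic physics,
# that quickly becomes tens of millions of unknowns. All numbers illustrative.
nx, L = 400, 1000.0            # grid cells, domain length in metres
dx = L / nx
u, D = 1e-3, 1e-2              # advection speed (m/s) and diffusivity (m^2/s)
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # stable explicit time step

c = np.zeros(nx)
c[nx // 10] = 1.0              # injected slug of CO2 (arbitrary units)

for _ in range(5000):
    grad = (c[1:] - c[:-1]) / dx                 # gradients at cell faces
    flux = u * c[:-1] - D * grad                 # upwind advection + diffusion
    c[1:-1] += dt / dx * (flux[:-1] - flux[1:])  # update interior cells

print(f"plume centre of mass: {np.sum(c * np.arange(nx) * dx) / np.sum(c):.1f} m")
```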

    Tchelepi, who is also affiliated with the Precourt Institute for Energy, says people are often surprised to learn the role that supercomputing plays in modern Earth sciences, but Earth scientists use more computer resources than almost anybody except the defense industry, and their computing needs can influence the designs of next-generation hardware.

    “Earth science is about understanding the complex and ever-changing dynamics of flowing air, water, oil, gas, CO2 and heat. That’s a lot of physics, requiring extensive computing resources to model.” —Ker Than //
    Q: How can we build more efficient energy networks?

    When folks crank their air conditioners during a heat wave, you can almost hear the electric grid moan. The sudden, larger-than-average demand for electricity can stress electric plants, and energy providers scramble to redistribute the load or ask industrial users to temporarily shut down. To handle those sudden spikes in use more efficiently, Ram Rajagopal, an assistant professor of civil and environmental engineering, used supercomputers to analyze the energy usage patterns of 200,000 anonymous households and businesses in Northern California, and from that data he developed a model that could tune consumer demand and lead to a more flexible “smart grid.”

    Today, utility companies base forecasts on a 24-hour cycle that aggregates millions of households. Not surprisingly, power use peaks in the morning and evening, when people are at home. But when Rajagopal looked at 1.6 billion hourly data points, he found dramatic variations.

    Some households conformed to the norm and others didn’t. This forms the statistical underpinning for a new way to price and purchase power – by aggregating as few as a thousand customers into a unit with a predictable usage pattern. “If we want to thwart global warming we need to give this technology to communities,” says Rajagopal. Some consumers might want to pay whatever it costs to stay cool on hot days, others might conserve or defer demand to get price breaks. “I’m talking about neighborhood power that could be aligned to your beliefs,” says Rajagopal.
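    A toy version of that aggregation idea is sketched below: cluster synthetic 24-hour load profiles into behavior types and compare how predictable one home is versus the group it belongs to. The profiles, noise levels and cluster count are invented for illustration and bear no relation to the actual Northern California data.

```python
import numpy as np

# Toy version of grouping customers into units with predictable load shapes.
# Real smart-meter studies use billions of hourly readings; this sketch
# clusters synthetic 24-hour profiles and shows that an aggregate of similar
# homes is far more predictable than any single home.
rng = np.random.default_rng(7)
hours = np.arange(24)

# Two invented behaviour types: evening-peaking homes, daytime businesses.
evening = 1.0 + 0.8 * np.exp(-((hours - 19) ** 2) / 8.0)
daytime = 1.0 + 0.8 * np.exp(-((hours - 13) ** 2) / 18.0)
profiles = np.vstack([
    evening + rng.normal(0, 0.3, (500, 24)),
    daytime + rng.normal(0, 0.3, (500, 24)),
])

def kmeans(X, k, iters=20):
    """Plain k-means; initialised with the first and last profiles for simplicity."""
    centres = X[[0, -1]][:k].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centres[None]) ** 2).sum(-1).argmin(axis=1)
        centres = np.vstack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

labels, centres = kmeans(profiles, k=2)

# Predictability: one home versus the 500-home group it was generated from.
single_err = np.abs(profiles[0] - evening).mean()
group_err = np.abs(profiles[:500].mean(axis=0) - evening).mean()
print(f"mean hourly error, one home:       {single_err:.3f}")
print(f"mean hourly error, 500-home group: {group_err:.3f}")
```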

    Establishing a responsive smart grid and creative energy economies will become even more important as solar and wind energy – which face hourly supply limitations due to Mother Nature – become a larger slice of the energy pie. —Tom Abate //

    Know thyself. Let computation help.

    Ourselves

    Q: How does our DNA make us who we are?

    Our DNA is sometimes referred to as our body’s blueprint, but it’s really more of a sketch. Sure, it determines a lot of things, but so do the viruses and bacteria swarming our bodies, our encounters with environmental chemicals that lodge in our tissues and the chemical stew that ensues when our immune system responds to disease states.

    All of this taken together – our DNA, the chemicals, the antibodies coursing through our veins and so much more – determines our physical state at any point in time. And all that information makes for a lot of data if, like genetics professor Michael Snyder, you collected it 75 times over the course of four years.

    Snyder is a proponent of what he calls “personal omics profiling,” or the study of all that makes up our person, and he’s starting with himself. “What we’re collecting is a detailed molecular portrait of a person throughout time,” he says.

    So far, he’s turning out to be a pretty interesting test case. In one round of assessment he learned that he was becoming diabetic and was able to control the condition long before it would have been detected through a periodic medical exam.

    If personal omics profiling is going to go mainstream, serious computing will be required to tease out which of the myriad tests Snyder’s team currently runs give meaningful information and should be part of routine screening. Snyder’s sampling alone has already generated half a petabyte of data – roughly enough raw information to fill a dishwasher-size rack of servers.

    Right now, that data and the computer power required to understand it reside on campus, but new servers will be located at SRCC. “I think you are going to see a lot more projects like this,” says Snyder, who is also a Stanford Bio-X affiliate and a member of the Stanford Cancer Center.

    “Computing is becoming increasingly important in medicine.” —Amy Adams //

    Q: How do we learn to read?

    A love letter, with all of its associated emotions, conveys its message with the same set of squiggly letters as a newspaper, novel or an instruction manual. How our brains learn to interpret a series of lines and curves into language that carries meaning or imparts knowledge is something psychology Professor Brian Wandell has been trying to understand.

    Wandell hopes to tease out differences between the brain scans of kids learning to read normally and those who are struggling, and use that information to find the right support for kids who need help. “As we acquire information about the outcome of different reading interventions we can go back to our database to understand whether there is some particular profile in the child that works better with intervention 1, and a second profile that works better with intervention 2,” says Wandell, a Stanford Bio-X member who is also the Isaac and Madeline Stein Family Professor and professor, by courtesy, of electrical engineering.

    His team developed a way of scanning kids’ brains with magnetic resonance imaging, then knitting the million collected samples together with complex algorithms that reveal how the nerve fibers connect different parts of the brain. “If you try to do this on your laptop, it will take half a day or more for each child,” he says. Instead, he uses powerful computers to reveal specific brain changes as kids learn to read.

    Wandell is associate director of the Stanford Neurosciences Institute, where he is leading the effort to develop a computing strategy – one that involves making use of SRCC rather than including computing space in their planned new building. He says one advantage of having faculty share computing space and systems is to speed scientific progress.

    “Our hope for the new facility is that it gives us the chance to set the standards for a better environment for sharing computations and data, spreading knowledge rapidly through the community,” he says.

    Q: How do we work effectively together?

    There comes a time in every person’s life when it becomes easy to settle for the known relationship, for better or for worse, rather than seek out new ties with those who better inspire creativity and ensure success.

    Or so finds Daniel McFarland, professor of education and, by courtesy, of organizational behavior, who has studied how academic collaborations form and persist. McFarland and his own collaborators tracked signs of academic ties such as when Stanford faculty co-authored a paper, cited the same publications or got a grant together. Armed with 15 years of collaboration output on 3,000 faculty members, they developed a computer model of how networks form and strengthen over time.

    “Social networks are large, interdependent forms of data that quickly confront limits of computing power, and especially so when we study network evolution,” says McFarland.

    Their work has shown that once academic relationships have been established, they tend to continue out of habit, regardless of whether they are the most productive fit. He argues that successful academic programs or businesses should work to bring new members into collaborations and also spark new ties to prevent more senior people from falling back on known but less effective relationships. At the same time, he comes down in favor of retreats and team-building exercises to strengthen existing good collaborations.
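    As a cartoon of that “habit” dynamic, the sketch below lets simulated researchers either repeat an existing tie (more likely the more it has been used) or reach out to someone new, and then counts how many ties end up heavily reused. It is an invented toy, not McFarland’s statistical model of the Stanford collaboration data.

```python
import random
from collections import defaultdict

# Toy model of collaboration "habit": each year every researcher either repeats
# an existing tie (more likely the more it has been used before) or reaches out
# to someone new. A cartoon of the dynamic described above, nothing more.
random.seed(0)
n, years, explore = 100, 15, 2.0     # researchers, years, weight of trying someone new
strength = defaultdict(float)        # frozenset({i, j}) -> past joint projects

for year in range(years):
    for i in range(n):
        partners = [(tie, w) for tie, w in strength.items() if i in tie]
        total = sum(w for _, w in partners)
        if partners and random.random() < total / (total + explore):
            # Habit: repeat an old tie, weighted by how often it has been used.
            r, acc = random.uniform(0, total), 0.0
            for tie, w in partners:
                acc += w
                if acc >= r:
                    strength[tie] += 1
                    break
        else:
            j = random.choice([k for k in range(n) if k != i])
            strength[frozenset((i, j))] += 1

repeats = sum(1 for w in strength.values() if w > 3)
print(f"{len(strength)} ties formed; {repeats} of them used more than three times")
```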

    McFarland’s work has implications for Stanford’s many interdisciplinary programs. He has found that collaborations across disciplines often fall apart due in part to the distant ties between researchers. “To form and sustain these ties, pairs of colleagues must interact frequently to share knowledge,” he writes. “This is perhaps why interdisciplinary centers may be useful organizational means of corralling faculty and promoting continued distant collaborations.” —Amy Adams //

    Q: What can computers tell us about how our body works?

    As you sip your morning cup of coffee, the caffeine makes its way to your cells, slots into a receptor site on the cells’ surface and triggers a series of reactions that jolt you awake. A similar process takes place when Zantac provides relief for stomach ulcers, or when chemical signals produced in the brain travel cell-to-cell through your nervous system to your heart, telling it to beat.

    In each of these instances, a drug or natural chemical is activating a cell’s G-protein coupled receptor (GPCR), the cellular target of roughly half of all known drugs, says Vijay Pande, a professor of chemistry and, by courtesy, of structural biology and of computer science at Stanford. This exchange is a complex one, though. In order for caffeine or any other molecule to influence a cell, it must fit snugly into the receptor site, which consists of 4,000 atoms and transforms between an active and inactive configuration. Current imaging technologies are unable to view that transformation, so Pande has been simulating it using his Folding@Home distributed computer network.

    So far, Pande’s group has demonstrated a few hundred microseconds of the receptor’s transformation. Although that’s an extraordinarily long chunk of time compared to similar techniques, Pande is looking forward to accessing the SRCC to investigate the basic biophysics of GPCR and other proteins. Greater computing power, he says, will allow his team to simulate larger molecules in greater detail, simulate folding sequences for longer periods of time and visualize multiple molecules as they interact. It might even lead to atom-level simulations of processes at the scale of an entire cell. All of this knowledge could be applied to computationally design novel drugs and therapies.
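    One very reduced way to picture what such simulations estimate is a two-state model of a receptor hopping between inactive and active configurations. The sketch below uses invented rate constants and a simple Markov-chain simulation; it is nothing like the atomistic molecular dynamics run on Folding@Home, but it shows how dwell times in each state emerge from many small random steps.

```python
import numpy as np

# Cartoon of a receptor hopping between inactive (0) and active (1) states,
# modelled as a two-state Markov chain with invented rate constants.
rng = np.random.default_rng(3)
dt = 1e-9                      # 1 ns time step (illustrative)
k_on, k_off = 1e5, 5e5         # inactive->active and active->inactive rates, 1/s
p_on, p_off = k_on * dt, k_off * dt

steps = 2_000_000              # ~2 ms of toy "simulation"
u = rng.random(steps)          # pre-drawn uniform random numbers
state, dwell, dwells_active = 1, 0, []
for t in range(steps):
    dwell += 1
    if state == 1 and u[t] < p_off:
        dwells_active.append(dwell * dt)   # record how long it stayed active
        state, dwell = 0, 0
    elif state == 0 and u[t] < p_on:
        state, dwell = 1, 0

print(f"mean active dwell time: {np.mean(dwells_active)*1e6:.1f} microseconds "
      f"(expected {1/k_off*1e6:.1f})")
```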

    “Having more computer power can dramatically change every aspect of what we can do in my lab,” says Pande, who is also a Stanford Bio-X affiliate. “Much like having more powerful rockets could radically change NASA, access to greater computing power will let us go way beyond where we can go routinely today.” —Bjorn Carey //

    See the full article here.

    The Kavli Foundation, based in Oxnard, California, is dedicated to the goals of advancing science for the benefit of humanity and promoting increased public understanding and support for scientists and their work.

    The Foundation’s mission is implemented through an international program of research institutes, professorships, and symposia in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics as well as prizes in the fields of astrophysics, nanoscience, and neuroscience.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 4:50 pm on May 22, 2014 Permalink | Reply
    Tags: , , , , , Stanford University   

    From SLAC Lab: “Stanford Researchers Discover Immune System’s Rules of Engagement” 


    SLAC Lab

    May 22, 2014
    Media Contacts:
    K. Chris Garcia, Department of Molecular & Cellular Physiology, Stanford School of Medicine: (650) 498-7111, kcgarcia@stanford.edu
    Dan Stober, Stanford News Service: (650) 721-6965, dstober@stanford.edu
    Andy Freeberg, SLAC National Accelerator Laboratory: (650) 926-4359, afreeberg@slac.stanford.edu

    Study finds surprising similarities in the way immune system defenders bind to disease-causing invaders.

    A study led by researchers at Stanford’s School of Medicine reveals how T cells, the immune system’s foot soldiers, respond to an enormous number of potential health threats.

    Stanford School of Medicine researchers, working with scientists at the SLAC National Accelerator Laboratory, have made discoveries about the ways in which T cell receptors (shown in bright red) recognize invaders in the body. (Eric Smith and K. Christopher Garcia / Stanford University)

    X-ray studies at the Department of Energy’s SLAC National Accelerator Laboratory, combined with Stanford biological studies and computational analysis, revealed remarkable similarities in the structure of binding sites, which allow a given T cell to recognize many different invaders that provoke an immune response.

    T-cells use their receptors (red) to recognize different peptides (blue and yellow) presented on the surface of cells, a key mechanism to detect and combat infection. (Eric Smith and K. Christopher Garcia/Stanford University)

    This illustration shows the binding sites of a T-cell receptor (highlighted red) and a peptide (orange). Similarities in binding sites allow T-cells to bind to many different peptides. (Eric Smith and K. Christopher Garcia, Stanford University)

    The research demonstrates a faster, more reliable way to identify large numbers of antigens, the targets of the immune response, which could speed the discovery of disease treatments. It also may lead to a better understanding of what T cells recognize when fighting cancers and why they are triggered to attack healthy cells in autoimmune diseases such as diabetes and multiple sclerosis.

    “Until now, it often has been a real mystery which antigens T cells are recognizing; there are whole classes of disease where we don’t have this information,” said Michael Birnbaum, a graduate student who led the research at the School of Medicine in the laboratory of K. Christopher Garcia, the study’s senior author and a professor of molecular and cellular physiology and of structural biology.

    “Now it’s far more feasible to take a T cell that is important in a disease or autoimmune disorder and figure out what antigens it will respond to,” Birnbaum said.

    T cells are triggered into action by protein fragments, called peptides, displayed on a cell’s surface. In the case of an infected cell, peptide antigens from a pathogen can trigger a T cell to kill the infected cell. The research provides a sort of rulebook that can be used with high success to track down antigens likely to activate a given T cell, easing a bottleneck that has constrained such studies.

    Combination Approach

    In the study, researchers exposed a handful of mouse and human T-cell receptors to hundreds of millions of peptides, and found hundreds of peptides that bound to each type. Then they compiled and compared the detailed sequence – the order of the chemical building blocks – of the peptides that bound to each T-cell receptor.

    From that sample set, which represents just a tiny fraction of all peptides, a detailed computational analysis identified other likely binding matches. Researchers compared the 3-D structures of T cells and their unique receptors bound to different peptides at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL).

    “The X-ray work at SSRL was a key breakthrough in the study,” Birnbaum said. “Very different peptides aligned almost perfectly with remarkably similar binding sites. It took us a while to figure out this structural similarity was a common feature, not an oddity – that a vast number of unique peptides could be recognized in the same way.”

    Researchers also checked the sequencing of the peptides that were known to bind with a given T cell and found striking similarities there, too.

    “T-cell receptors are ‘cross-reactive,’ but in fairly limited ways. Like a multilingual person who can speak Spanish and French but can’t understand Japanese, a receptor can engage with a broad set of peptides related to one another,” Birnbaum said.
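    A simple way to see how a set of known binders can predict new ones is a position weight matrix: tally which amino acids appear at each position among the binders, then score candidate peptides against those preferences. The sketch below does this with arbitrary example peptides (not sequences from the paper); the study’s actual analysis was considerably more sophisticated.

```python
import numpy as np

# Sketch of predicting new binders from a set of peptides known to bind a
# receptor, using a simple position weight matrix (PWM). The peptides below
# are arbitrary examples chosen only to illustrate the idea.
AA = "ACDEFGHIKLMNPQRSTVWY"
binders = ["SIINFEKL", "SIYNFEKL", "SIINFEKM", "AIINFEKL", "SIINYEKL"]
L = len(binders[0])

# Position-specific amino-acid frequencies with a small pseudocount.
counts = np.full((L, len(AA)), 0.5)
for pep in binders:
    for pos, aa in enumerate(pep):
        counts[pos, AA.index(aa)] += 1
pwm = np.log(counts / counts.sum(axis=1, keepdims=True) * len(AA))  # log-odds vs uniform

def score(pep):
    """Higher scores mean the peptide better matches the binding motif."""
    return sum(pwm[pos, AA.index(aa)] for pos, aa in enumerate(pep))

print(score("SIINFEKL"), score("GLCTLVAM"))   # motif-matching vs unrelated peptide
```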

    Impact on Biomedical Science

    Finding out whether a given peptide activates a specific T-cell receptor has been a historically piecemeal process with a 20 to 30 percent success rate, involving burdensome hit-and-miss studies of biological samples. “This latest research provides a framework that can improve the success rate to as high as 90 percent,” Birnbaum said.

    “This is an important illustration of how SSRL’s X-ray-imaging capabilities allow researchers to get detailed structural information on technically very challenging systems,” said Britt Hedman, professor of photon science and science director at SSRL. “To understand the factors behind T-cell-receptor binding to peptides will have major impact on biomedical developments, including vaccine design and immunotherapy.”

    Additional contributors included the laboratories of Mark Davis, the Burt and Marion Avery Family Professor at Stanford School of Medicine, and Kai Wucherpfennig at the Dana-Farber Cancer Institute and Harvard University. The research was supported by the National Institutes of Health and the Howard Hughes Medical Institute. SSRL is a scientific user facility supported by DOE’s Office of Science.

    See the full article here.

    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.


    ScienceSprings is powered by MAINGEAR computers

     