Tagged: Robotics

  • richardmitnick 5:02 pm on May 19, 2017 Permalink | Reply
Tags: 3D-printed Soft Four Legged Robot Can Walk on Sand and Stone, Robotics

    From UCSD: “3D-printed Soft Four Legged Robot Can Walk on Sand and Stone” 

UC San Diego

    May 17, 2017
    Ioana Patringenaru

UC San Diego Jacobs School of Engineering mechanical engineering graduate student Dylan Drotman from the Tolley Lab with the 3D-printed, four-legged robot being presented at the 2017 IEEE International Conference on Robotics and Automation (ICRA). The entire photo set is on Flickr. Photo credit: UC San Diego Jacobs School of Engineering / David Baillot

    Engineers at the University of California San Diego have developed the first soft robot that is capable of walking on rough surfaces, such as sand and pebbles. The 3D-printed, four-legged robot can climb over obstacles and walk on different terrains.

    Researchers led by Michael Tolley, a mechanical engineering professor at the University of California San Diego, will present the robot at the IEEE International Conference on Robotics and Automation from May 29 to June 3 in Singapore. The robot could be used to capture sensor readings in dangerous environments or for search and rescue.

    The breakthrough was possible thanks to a high-end printer that allowed researchers to print soft and rigid materials together within the same components. This made it possible for researchers to design more complex shapes for the robot’s legs.

    Bringing together soft and rigid materials will help create a new generation of fast, agile robots that are more adaptable than their predecessors and can safely work side by side with humans, said Tolley. The idea of blending soft and hard materials into the robot’s body came from nature, he added. “In nature, complexity has a very low cost,” Tolley said. “Using new manufacturing techniques like 3D printing, we’re trying to translate this to robotics.”

3D printing soft and rigid robots rather than relying on molds to manufacture them is much cheaper and faster, Tolley pointed out. So far, soft robots have only been able to shuffle or crawl on the ground without being able to lift their legs. This robot is actually able to walk.

    Researchers successfully tested the tethered robot on large rocks, inclined surfaces and sand (see video). The robot also was able to transition from walking to crawling into an increasingly confined space, much like a cat wiggling into a crawl space.

    Dylan Drotman, a Ph.D. student at the Jacobs School of Engineering at UC San Diego, led the effort to design the legs and the robot’s control systems. He also developed models to predict how the robot would move, which he then compared to how the robot actually behaved in a real-life environment.

    How it’s made

    The legs are made up of three parallel, connected sealed inflatable chambers, or actuators, 3D-printed from a rubber-like material. The chambers are hollow on the inside, so they can be inflated. On the outside, the chambers are bellowed, which allows engineers to better control the legs’ movements. For example, when one chamber is inflated and the other two aren’t, the leg bends. The legs are laid out in the shape of an X and connected to a rigid body.

The robot’s gait depends on the timing, the amount of pressure and the order in which the chambers in its four legs are inflated. The robot’s walking behavior in real life also closely matched the researchers’ predictions, which will allow engineers to make better-informed decisions when designing soft robots.
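To make that concrete, here is a minimal sketch, in Python, of how a gait could be encoded as a timed sequence of chamber inflations. This is not the UCSD team’s code: the chamber indices, pressures and timings are invented, and set_pressure stands in for whatever valve driver the tethered control board actually exposes.

```python
# A minimal gait sketch under the assumptions stated above.
import time

# Each leg has three bellowed chambers; inflating one chamber while the
# other two stay deflated bends the leg in that direction.
GAIT_CYCLE = [
    # (leg, chamber, pressure_kPa, hold_s) -- all values hypothetical
    (0, 1, 55, 0.4),
    (2, 1, 55, 0.4),
    (1, 0, 55, 0.4),
    (3, 0, 55, 0.4),
]

def set_pressure(leg: int, chamber: int, kpa: float) -> None:
    """Placeholder for a valve/pump driver on the control board."""
    print(f"leg {leg} chamber {chamber} -> {kpa} kPa")

def walk(cycles: int) -> None:
    """Step through the inflation sequence; order and timing set the gait."""
    for _ in range(cycles):
        for leg, chamber, kpa, hold in GAIT_CYCLE:
            set_pressure(leg, chamber, kpa)   # inflate to bend the leg
            time.sleep(hold)                  # hold the pose
            set_pressure(leg, chamber, 0)     # vent before the next step

walk(3)
```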

    The current quadruped robot prototype is tethered to an open source board and an air pump. Researchers are now working on miniaturizing both the board and the pump so that the robot can walk independently. The challenge here is to find the right design for the board and the right components, such as power sources and batteries, Tolley said.

    3D Printed Soft Actuators for a Legged Robot Capable of Navigating Unstructured Terrain

    Authors: Dylan Drotman, Saurabh Jadhav, Mahmood Karimi, Philip deZonia, Michael T. Tolley

    This work is supported by the UC San Diego Frontiers of Innovation Scholarship Program and the Office of Naval Research grant number N000141712062.

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


The University of California, San Diego (also referred to as UC San Diego or UCSD) is a public research university located in the La Jolla area of San Diego, California, in the United States. The university occupies 2,141 acres (866 ha) near the coast of the Pacific Ocean, with the main campus resting on approximately 1,152 acres (466 ha). Established in 1960 near the pre-existing Scripps Institution of Oceanography, UC San Diego is the seventh oldest of the 10 University of California campuses and offers over 200 undergraduate and graduate degree programs, enrolling about 22,700 undergraduate and 6,300 graduate students. UC San Diego is one of America’s Public Ivy universities, a designation that recognizes top public research universities in the United States. UC San Diego was ranked 8th among public universities and 37th among all universities in the United States, and rated the 18th Top World University by U.S. News & World Report’s 2015 rankings.

     
  • richardmitnick 3:48 pm on May 18, 2017 Permalink | Reply
Tags: C-LEARN, Robotics

    From MIT: “Teaching robots to teach other robots” 

MIT News

    May 10, 2017
    Adam Conner-Simons | CSAIL

    MIT doctoral candidate Claudia Pérez-D’Arpino discusses her work teaching the Optimus robot to perform various tasks, including picking up a bottle. Photo: Jason Dorfman/MIT CSAIL

    CSAIL approach allows robots to learn a wider range of tasks using some basic knowledge and a single demo.

    Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or via motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.

    Both methods have drawbacks. Robots that learn from demonstration can’t easily transfer one skill they’ve learned to another situation and remain accurate. On the other hand, motion planning systems that use sampling or optimization can adapt to these changes but are time-consuming, since they usually have to be hand-coded by expert programmers.

    Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently developed a system that aims to bridge the two techniques: C-LEARN, which allows noncoders to teach robots a range of tasks simply by providing some information about how objects are typically manipulated and then showing the robot a single demo of the task.

    Importantly, this enables users to teach robots skills that can be automatically transferred to other robots that have different ways of moving — a key time- and cost-saving measure for companies that want a range of robots to perform similar actions.

    “By combining the intuitiveness of learning from demonstration with the precision of motion-planning algorithms, this approach can help robots do new types of tasks that they haven’t been able to learn before, like multistep assembly using both of their arms,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah.

    The team tested the system on Optimus, a new two-armed robot designed for bomb disposal that they programmed to perform tasks such as opening doors, transporting objects, and extracting objects from containers. In simulations they showed that Optimus’ learned skills could be seamlessly transferred to Atlas, CSAIL’s 6-foot-tall, 400-pound humanoid robot.

    A paper describing C-LEARN was recently accepted to the IEEE International Conference on Robotics and Automation (ICRA), which takes place May 29 to June 3 in Singapore.

    How it works

    With C-LEARN the user first gives the robot a knowledge base of information on how to reach and grasp various objects that have different constraints. (The C in C-LEARN stands for “constraints.”) For example, a tire and a steering wheel have similar shapes, but to attach them to a car, the robot has to configure its arms differently to move them. The knowledge base contains the information needed for the robot to do that.

    The operator then uses a 3-D interface to show the robot a single demonstration of the specific task, which is represented by a sequence of relevant moments known as “keyframes.” By matching these keyframes to the different situations in the knowledge base, the robot can automatically suggest motion plans for the operator to approve or edit as needed.

    “This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Pérez-D’Arpino. “We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”

    One challenge was that existing constraints that could be learned from demonstrations weren’t accurate enough to enable robots to precisely manipulate objects. To overcome that, the researchers developed constraints inspired by computer-aided design (CAD) programs that can tell the robot if its hands should be parallel or perpendicular to the objects it is interacting with.
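To make the keyframe-matching step concrete, here is a minimal Python sketch. It is not the CSAIL implementation; the template fields, shape labels and constraint strings are illustrative assumptions about what such a knowledge base might contain.

```python
# Sketch of matching demonstrated keyframes against a knowledge base.
from dataclasses import dataclass

@dataclass
class Template:
    name: str        # e.g. a canned grasp/approach motion
    constraint: str  # CAD-style constraint: "parallel" or "perpendicular"
    shape: str       # rough object class the template applies to

KNOWLEDGE_BASE = [
    Template("grasp-rim", "perpendicular", "cylinder"),
    Template("grasp-face", "parallel", "box"),
]

@dataclass
class Keyframe:
    object_shape: str  # what perception reported at this demo moment

def suggest_plan(demo):
    """Map each demonstrated keyframe to the best-matching template;
    the operator can then approve or edit the suggestion."""
    plan = []
    for kf in demo:
        matches = [t for t in KNOWLEDGE_BASE if t.shape == kf.object_shape]
        if not matches:
            raise ValueError(f"no template for shape {kf.object_shape!r}")
        plan.append(matches[0])
    return plan

print(suggest_plan([Keyframe("cylinder"), Keyframe("box")]))
```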

The team also showed that the robot performed even better when it collaborated with humans. While the robot successfully executed tasks 87.5 percent of the time on its own, it did so 100 percent of the time when it had an operator who could correct minor errors related to the robot’s occasional inaccurate sensor measurements.

    “Having a knowledge base is fairly common, but what’s not common is integrating it with learning from demonstration,” says Dmitry Berenson, an assistant professor at the University of Michigan’s Electrical Engineering and Computer Science Department. “That’s very helpful, because if you are dealing with the same objects over and over again, you don’t want to then have to start from scratch to teach the robot every new task.”

    Applications

    The system is part of a larger wave of research focused on making learning-from-demonstration approaches more adaptive. If you’re a robot that has learned to take an object out of a tube from a demonstration, you might not be able to do it if there’s an obstacle in the way that requires you to move your arm differently. However, a robot trained with C-LEARN can do this, because it does not learn one specific way to perform the action.

“It’s good for the field that we’re moving away from directly imitating motion, toward actually trying to infer the principles behind the motion,” Berenson says. “By using these learned constraints in a motion planner, we can make systems that are far more flexible than those which just try to mimic what’s being demonstrated.”

Shah says that advanced learning-from-demonstration (LfD) methods could prove important in time-sensitive scenarios such as bomb disposal and disaster response, where robots are currently teleoperated at the level of individual joint movements.

    “Something as simple as picking up a box could take 20-30 minutes, which is significant for an emergency situation,” says Pérez-D’Arpino.

    C-LEARN can’t yet handle certain advanced tasks, such as avoiding collisions or planning for different step sequences for a given task. But the team is hopeful that incorporating more insights from human learning will give robots an even wider range of physical capabilities.

    “Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah. “It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multiarm and multistep tasks necessary for assembly manufacturing and ship or aircraft maintenance.”

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 1:05 pm on March 24, 2017 Permalink | Reply
    Tags: "Tree on a chip", , may be used to make small robots move., Microfluidic device generates passive hydraulic power, , Robotics   

From MIT: “Engineers design ‘tree-on-a-chip’” 

MIT News

    March 20, 2017
    Jennifer Chu

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants. Courtesy of the researchers

    Microfluidic device generates passive hydraulic power, may be used to make small robots move.

    Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

    Now engineers at MIT and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

    Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

    “The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

    Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

    A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing small-scale hydraulic robots that could perform actions similar to much bigger robots like Boston Dynamics’ BigDog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

    “For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

    The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

    The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.

    “This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

    In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

    It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.
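That argument can be written down as a back-of-the-envelope calculation. The sketch below is not the authors’ model; it simply applies the standard van ’t Hoff relation (osmotic pressure π = cRT) with hypothetical membrane parameters to show why flow persists only while something replenishes the sugar.

```python
# Toy osmotic-pump calculation under the stated assumptions.
R = 8.314    # gas constant, J/(mol K)
T = 298.0    # temperature, K
Lp = 1e-12   # membrane permeability, m/(s Pa)  (hypothetical)
A = 1e-6     # membrane area, m^2               (hypothetical)

def osmotic_flow(c_phloem: float, c_xylem: float) -> float:
    """Water flow (m^3/s) from xylem into phloem, driven by the
    osmotic pressure difference pi = c * R * T."""
    delta_pi = (c_phloem - c_xylem) * R * T
    return Lp * A * delta_pi

# Without a sugar source the flow dilutes the phloem and the pump
# stalls; a source that replenishes sugar holds the gradient steady.
print(f"{osmotic_flow(c_phloem=300.0, c_xylem=0.0):.2e} m^3/s")
```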

    Running on sugar

    With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

    To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

    With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

    “As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

    Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

    “If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency.

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 4:35 am on December 27, 2016 Permalink | Reply
Tags: Robotics

    From U Michigan: “Robotics building design approved, including space for Ford” 

University of Michigan

September 15, 2016 [When, oh when, will U Michigan figure out the benefits of social media?]
    Nicole Casal Moore


    Robotic technologies for air, sea and roads, for factories, hospitals and homes will have tailored lab space in the University of Michigan’s planned Robotics Laboratory.

    Today, the U-M Board of Regents approved the schematic design for the $75 million facility, which is slated for the northeast corner of North Campus in the College of Engineering.

The 140,000-square-foot building will house a three-story fly zone for autonomous aerial vehicles, an outdoor obstacle course for walking ’bots, and high-bay garage space for self-driving cars, among other features. And in a unique collaboration, Ford Motor Co. will provide funding to add a fourth floor that it will lease for dedicated space where Ford researchers will eventually be based. The shared space grows a long-standing and broad partnership between U-M and Ford that includes projects to advance a variety of technologies such as driverless and connected vehicles.

Construction is scheduled to begin after a comprehensive College of Engineering fundraising effort and to be completed in the winter of 2020.


    When the building opens, U-M will become one of an elite few universities with a dedicated robotics facility. It will be the only university whose lab is down the road from a proving ground for driverless and connected vehicles. Mcity, U-M’s simulated urban and suburban environment for safe, controlled testing of advanced mobility vehicles and technologies, is located a half mile from the Robotics Laboratory site.

    “The University of Michigan has long been a global leader in robotics and our new facility will give our faculty members room to reach for world-changing advances and set them in motion,” said Professor Alec Gallimore, the Robert J. Vlasic Dean of Engineering. “Robots have come a long way from programmed machines bolted to the factory floor. Today they move through the world around us. They communicate and interact with each other and with us. They’re making our work, our travel, and our lives easier, more efficient and safer.”


    Fifteen professors will be core robotics faculty members when the facility opens, and more than 35 across the university are working in the field. They are developing prosthetic limbs that could one day be controlled by the brain, an autonomous wheelchair that can sense obstacles and avoid them, efficient walking robots that have the potential to assist in search or rescue operations, and self-driving and connected cars designed to transform transportation, among other innovations.

    Most of the core faculty members conduct their research on an actual robot, which is unique to U-M.

    “What makes us special is that most of us here do both robotics theory and hardware,” said Jessy Grizzle, the Elmer G. Gilbert Distinguished University Professor and the Jerry W. and Carol L. Levin Professor of Engineering.

    “Many places with strong robotics reputations are computer science-dominated and they don’t test their theories on machines to the extent that we do. At U-M, most of our faculty members have an in-house robot. We put our algorithms in motion.”


    Grizzle has been named director of robotics at U-M. He came to the university in 1987 as a feedback control theorist, but quickly expanded his research into other areas. Among his achievements is the development of a theoretically sound and efficient method for control of bipedal robot locomotion, which resulted in the world’s fastest two-legged running robot with knees. He was also a key player in pioneering a model-based programming approach to the control of hybrid electric vehicles that is rapidly becoming an industry standard. The approach takes into account the random fluctuations in traffic patterns to make these vehicles as efficient as possible.

    “This new facility will give us cutting-edge lab space to test our theories on a broader scale, and in a collaborative environment that invites the exchange of ideas,” Grizzle said.


The building’s schematic design shows a sleek, slate gray and silver façade integrated into the environment in a style described as “machine in the garden.” In addition to the specialized labs, it will include two large shared lab spaces, a start-up style open collaboration area, offices for 30 faculty members and more than 100 graduate students and postdoctoral researchers, and two classrooms. U-M offers interdisciplinary master’s and PhD degrees in robotics.

    A grand atrium will be flanked by glass walls that serve as windows into high-tech labs and a museum for retired robots. Public and school tours will be available.

    The partnership that puts Ford engineers on the fourth floor is designed to enrich opportunities for collaborative research, as well as educational opportunities for students to gain hands-on experiences.

    “With the new building’s proximity to Mcity, Ford and U-M are poised to accelerate the development of autonomous vehicles,” said Ken Washington, Ford vice president of research and advanced engineering. “This co-located lab on the U-M campus will magnify and deepen a collaborative research effort that is already unprecedented in scale.”

With a decade-long history, the Ford/U-M Innovation Alliance has led to nearly 200 collaborative research projects. Its joint autonomous vehicle project is the largest university research effort Ford has sponsored on any campus, and the largest industry-funded individual research project at U-M. The technical innovations Ford and U-M produce through it are intended to deliver order-of-magnitude reductions in traffic deaths and collisions.

    Ford today announced that U-M assistant professors Matthew Johnson-Roberson and Ram Vasudevan will lead the joint Ford/U-M autonomous vehicle research project going forward. Johnson-Roberson is in the Department of Naval Architecture and Marine Engineering and Vasudevan is in the Department of Mechanical Engineering.

    The robotics building project is expected to provide an average of 66 on-site construction jobs.

    Grizzle will take part in a Reddit Science AMA (ask me anything) about his work with bipedal robots on Wednesday, Sept. 28 from 1-2 PM ET. Watch for more details on the day of the AMA.

    About Michigan Engineering: The University of Michigan College of Engineering is one of the top engineering schools in the country. Eight academic departments are ranked in the nation’s top 10 — some twice for different programs. Its research budget is one of the largest of any public university. Its faculty and students are making a difference at the frontiers of fields as diverse as nanotechnology, sustainability, healthcare, national security and robotics. They are involved in spacecraft missions across the solar system, and have developed partnerships with automotive industry leaders to transform transportation. Its entrepreneurial culture encourages faculty and students alike to move their innovations beyond the laboratory and into the real world to benefit society. Its alumni base of more than 75,000 spans the globe.

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


The University of Michigan (U-M, UM, UMich, or U of M), frequently referred to simply as Michigan, is a public research university located in Ann Arbor, Michigan, United States. Originally founded in 1817 in Detroit as the Catholepistemiad, or University of Michigania, 20 years before the Michigan Territory officially became a state, the University of Michigan is the state’s oldest university. The university moved to Ann Arbor in 1837 onto 40 acres (16 ha) of what is now known as Central Campus. Since its establishment in Ann Arbor, the university campus has expanded to include more than 584 major buildings with a combined area of more than 34 million gross square feet (781 acres or 3.16 km²), and has two satellite campuses located in Flint and Dearborn. The University was one of the founding members of the Association of American Universities.

Considered one of the foremost research universities in the United States, the university has very high research activity and its comprehensive graduate program offers doctoral degrees in the humanities, social sciences, and STEM fields (Science, Technology, Engineering and Mathematics) as well as professional degrees in business, medicine, law, pharmacy, nursing, social work and dentistry. Michigan’s body of living alumni (as of 2012) comprises more than 500,000. Besides academic life, Michigan’s athletic teams compete in Division I of the NCAA and are collectively known as the Wolverines. They are members of the Big Ten Conference.

     
  • richardmitnick 10:52 am on December 8, 2016 Permalink | Reply
Tags: Robotics

    From JPL: “From Monterey Bay to Europa” 

JPL-Caltech

    November 30, 2016
    Andrew Good
    Jet Propulsion Laboratory, Pasadena, Calif.
    818-393-2433
    andrew.c.good@jpl.nasa.gov

JPL’s Steve Chien with several of the underwater drones used in a research project earlier this year. Chien, along with his research collaborators, is developing artificial intelligence for these drones. Image Credit: NASA/JPL-Caltech.

    If you think operating a robot in space is hard, try doing it in the ocean.

    Saltwater can corrode your robot and block its radio signals.

    Kelp forests can tangle it up, and you might not get it back.

    Sharks will even try to take bites out of its wings.

    The ocean is basically a big obstacle course of robot death. Despite this, robotic submersibles have become critical tools for ocean research. While satellites can study the ocean surface, their signals can’t penetrate the water. A better way to study what’s below is to look beneath yourself — or send a robot in your place.

    That’s why a team of researchers from NASA and other institutions recently visited choppy waters in Monterey Bay, California. Their ongoing research is developing artificial intelligence for submersibles, helping them track signs of life below the waves. Doing so won’t just benefit our understanding of Earth’s marine environments; the team hopes this artificial intelligence will someday be used to explore the icy oceans believed to exist on moons like Europa. If confirmed, these oceans are thought to be some of the most likely places to host life in the outer solar system.

    A fleet of six coordinated drones was used to study Monterey Bay. The fleet roved for miles seeking out changes in temperature and salinity. To plot their routes, forecasts of these ocean features were sent to the drones from shore.

    The drones also sensed how the ocean actively changed around them. A major goal for the research team is to develop artificial intelligence that seamlessly integrates both kinds of data.
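The sketch below shows, in Python, one simple way a shore-side forecast and in-situ measurements could be blended into a navigation decision. It is a toy, not the team’s software: the feature scores, blending weights and waypoint labels are invented for illustration.

```python
# Blending a (stale) forecast with fresh onboard measurements.
def choose_next_waypoint(forecast, measured, candidates):
    """Pick the candidate waypoint with the strongest expected ocean
    feature; in-situ data outweighs the older shore-side forecast."""
    w_measured, w_forecast = 0.7, 0.3   # hypothetical weights
    def score(wp):
        return (w_measured * measured.get(wp, 0.0)
                + w_forecast * forecast.get(wp, 0.0))
    return max(candidates, key=score)

forecast = {"A": 0.8, "B": 0.4, "C": 0.6}   # e.g. front strength, 0-1
measured = {"B": 0.9}                       # sensed a front near B
print(choose_next_waypoint(forecast, measured, ["A", "B", "C"]))  # -> B
```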

    “Autonomous drones are important for ocean research, but today’s drones don’t make decisions on the fly,” said Steve Chien, one of the research team’s members. Chien leads the Artificial Intelligence Group at NASA’s Jet Propulsion Laboratory, Pasadena, California. “In order to study unpredictable ocean phenomena, we need to develop submersibles that can navigate and make decisions on their own, and in real-time. Doing so would help us understand our own oceans — and maybe those on other planets.”

Other members of the research team hail from Caltech in Pasadena; the Monterey Bay Aquarium Research Institute in Moss Landing, California; Woods Hole Oceanographic Institution in Woods Hole, Massachusetts; and Remote Sensing Solutions in Barnstable, Massachusetts.

    If successful, this project could lead to submersibles that can plot their own course as they go, based on what they detect in the water around them. That could change how we collect data, while also developing the kind of autonomy needed for planetary exploration, said Andrew Thompson, assistant professor of environmental science and engineering at Caltech.

    “Our goal is to remove the human effort from the day-to-day piloting of these robots and focus that time on analyzing the data collected,” Thompson said. “We want to give these submersibles the freedom and ability to collect useful information without putting a hand in to correct them.”

    At the smallest levels, marine life exists as “biocommunities.” Nutrients in the water are needed to support plankton; small fish follow the plankton; big fish follow them. Find the nutrients, and you can follow the breadcrumb trail to other marine life.

    But that’s easier said than done. Those nutrients are swept around by ocean currents, and can change direction suddenly. Life under the sea is constantly shifting in every direction, and at varying scales of size.

    “It’s all three dimensions plus time,” Chien said about the challenges of tracking ocean features. “Phenomena like algal blooms are hundreds of kilometers across. But small things like dinoflagellate clouds are just dozens of meters across.”

    It might be easy for a fish to track these features, but it’s nearly impossible for an unintelligent robot.

    “Truly autonomous fleets of robots have been a holy grail in oceanography for decades,” Thompson said. “Bringing JPL’s exploration and AI experience to this problem should allow us to lay the groundwork for carrying out similar activities in more challenging regions, like Earth’s polar regions and even oceans on other planets.”

    The recent field work at Monterey Bay was funded by JPL and Caltech’s Keck Institute for Space Studies (KISS). Additional research is planned in the spring of 2017.

    For more information about this research, visit:

    http://kiss.caltech.edu/new_website/techdev/seafloor/seafloor.html

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge [1], on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.


     
  • richardmitnick 8:44 am on April 28, 2016 Permalink | Reply
Tags: Robotics

    From Stanford: “Maiden voyage of Stanford’s humanoid robotic diver recovers treasures from King Louis XIV’s wrecked flagship” 

Stanford University

    OceanOne, a new humanoid robotic diver from Stanford, explores a 17th century shipwreck. (Image credit: Frederic Osada and Teddy Seguin/DRASSM)

    April 27, 2016
    Bjorn Carey


Access mp4 video here.
    Video by Kurt Hickman

    Oussama Khatib held his breath as he swam through the wreck of La Lune, 100 meters below the Mediterranean. The flagship of King Louis XIV sank here in 1664, 20 miles off the southern coast of France, and no human had touched the ruins – or the countless treasures and artifacts the ship once carried – in the centuries since.

    With guidance from a team of skilled deep-sea archaeologists who had studied the site, Khatib, a professor of computer science at Stanford, spotted a grapefruit-size vase. He hovered precisely over the vase, reached out, felt its contours and weight, and stuck a finger inside to get a good grip. He swam over to a recovery basket, gently laid down the vase and shut the lid. Then he stood up and high-fived the dozen archaeologists and engineers who had been crowded around him.

    This entire time Khatib had been sitting comfortably in a boat, using a set of joysticks to control OceanOne, a humanoid diving robot outfitted with human vision, haptic force feedback and an artificial brain – in essence, a virtual diver.

    When the vase returned to the boat, Khatib was the first person to touch it in hundreds of years. It was in remarkably good condition, though it showed every day of its time underwater: The surface was covered in ocean detritus, and it smelled like raw oysters. The team members were overjoyed, and when they popped bottles of champagne, they made sure to give their heroic robot a celebratory bath.

    The expedition to La Lune was OceanOne’s maiden voyage. Based on its astonishing success, Khatib hopes that the robot will one day take on highly skilled underwater tasks too dangerous for human divers, as well as open up a whole new realm of ocean exploration.

    “OceanOne will be your avatar,” Khatib said. “The intent here is to have a human diving virtually, to put the human out of harm’s way. Having a machine that has human characteristics that can project the human diver’s embodiment at depth is going to be amazing.”
    Anatomy of a robo-mermaid

    The concept for OceanOne was born from the need to study coral reefs deep in the Red Sea, far below the comfortable range of human divers. No existing robotic submarine can dive with the skill and care of a human diver, so OceanOne was conceived and built from the ground up, a successful marriage of robotics, artificial intelligence and haptic feedback systems.

    OceanOne looks something like a robo-mermaid. Roughly five feet long from end to end, its torso features a head with stereoscopic vision that shows the pilot exactly what the robot sees, and two fully articulated arms. The “tail” section houses batteries, computers and eight multi-directional thrusters.

The body looks nothing like conventional boxy robotic submersibles, but it’s the hands that really set OceanOne apart. Each fully articulated wrist is fitted with force sensors that relay haptic feedback to the pilot’s controls, so the human can feel whether the robot is grasping something firm and heavy, or light and delicate. (Eventually, each finger will be covered with tactile sensors.) The ’bot’s brain also reads the data and makes sure that its hands keep a firm grip on objects, but that they don’t damage things by squeezing too tightly. In addition to exploring shipwrecks, this makes the robot adept at delicate tasks such as coral reef research and precisely placing underwater sensors.

    “You can feel exactly what the robot is doing,” Khatib said. “It’s almost like you are there; with the sense of touch you create a new dimension of perception.”
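A toy version of that grip behavior, assuming a simple proportional rule and invented force thresholds (this is not OceanOne’s firmware), might look like:

```python
# Squeeze toward a firm hold, but back off before crushing anything.
def grip_step(measured_force, target=5.0, max_force=8.0, gain=0.2):
    """Return a finger-closing command (positive = tighten)."""
    if measured_force > max_force:      # protect delicate artifacts
        return -gain * (measured_force - max_force)
    return gain * (target - measured_force)

force = 0.0
for _ in range(30):
    force += 2.0 * grip_step(force)   # toy model: tightening raises force
print(f"settled grip force: {force:.2f} N")   # converges near 5 N
```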

    The pilot can take control at any moment, but most frequently won’t need to lift a finger. Sensors throughout the robot gauge current and turbulence, automatically activating the thrusters to keep the robot in place. And even as the body moves, quick-firing motors adjust the arms to keep its hands steady as it works. Navigation relies on perception of the environment, from both sensors and cameras, and these data run through smart algorithms that help OceanOne avoid collisions. If it senses that its thrusters won’t slow it down quickly enough, it can quickly brace for impact with its arms, an advantage of a humanoid body build.
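For the station-keeping behavior, here is a minimal one-dimensional sketch with invented gains; the real vehicle coordinates many thrusters and sensors across six degrees of freedom.

```python
# A single PD loop countering a transient "current" disturbance.
def station_keep(pos, vel, target=0.0, kp=40.0, kd=80.0):
    """PD thruster force (N) pushing the robot back toward target."""
    return kp * (target - pos) - kd * vel

pos, vel, dt, mass = 0.0, 0.0, 0.05, 40.0
for step in range(200):                       # 10 s of simulated time
    current = 6.0 if step < 40 else 0.0       # current shoves for 2 s
    accel = (station_keep(pos, vel) + current) / mass
    vel += accel * dt
    pos += vel * dt
print(f"final offset: {pos:.4f} m")           # settles back near zero
```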

    A human touch

    The humanoid form also means that when OceanOne dives alongside actual humans, its pilot can communicate through hand gestures during complex tasks or scientific experiments. Ultimately, though, Khatib designed OceanOne with an eye toward getting human divers out of harm’s way. Every aspect of the robot’s design is meant to allow it to take on tasks that are either dangerous – deep-water mining, oil-rig maintenance or underwater disaster situations like the Fukushima Daiichi power plant – or simply beyond the physical limits of human divers.

“We connect the human to the robot in a very intuitive and meaningful way. The human can provide intuition and expertise and cognitive abilities to the robot,” Khatib said. “The two bring together an amazing synergy. The human and robot can do things in areas too dangerous for a human, while the human is still there.”

    Khatib was forced to showcase this attribute while recovering the vase. As OceanOne swam through the wreck, it wedged itself between two cannons. Firing the thrusters in reverse wouldn’t extricate it, so Khatib took control of the arms, motioned for the bot to perform a sort of pushup, and OceanOne was free.

The expedition to La Lune was made possible in large part thanks to the efforts of Michel L’Hour, the director of underwater archaeology research in France’s Ministry of Culture. Previous remote studies of the shipwreck conducted by L’Hour’s team made it possible for OceanOne to navigate the site. Vincent Creuze of the Université de Montpellier in France commanded the support underwater vehicle that provided third-person visuals of OceanOne and held its support tether at a safe distance.

    Several students played key roles in OceanOne’s success, including graduate students Gerald Brantner, Xiyang Yeh, Boyeon Kim and Brian Soe, who joined Khatib in France for the expedition, as well as Shameek Ganguly, Mikael Jorda, and a number of undergraduate and graduate students. Khatib also drew on the expertise of Mark Cutkosky, a professor of mechanical engineering, for designing and building the robotic arms.

    Next month, OceanOne will return to the Stanford campus, where Khatib and his students will continue iterating on the platform. The prototype robot is a fleet of one, but Khatib hopes to build more units, which would work in concert during a dive.

    In addition to Stanford, the development of the robot was supported by Meka Robotics and the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.

See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


     
  • richardmitnick 4:33 pm on March 3, 2016 Permalink | Reply
Tags: Robotics

    From Cornell: “Light-up skin stretches boundaries of robotics” 

Cornell University

    March 3, 2016
    Tom Fleischman
    cunews@cornell.edu

The research group of Rob Shepherd, assistant professor of mechanical and aerospace engineering, has developed a highly stretchable electroluminescent skin capable of stretching to nearly six times its original size while still emitting light. The group’s work is documented in a paper published online March 3 in the journal Science. Credit: Chris Larson

    A health care robot that displays a patient’s temperature and pulse, and even reacts to a patient’s mood.

    An autonomous vehicle with an information display interface that can be changed based on the passenger’s needs.

    Even in this age of smartphones and other electronics wonders, these ideas sound quite futuristic. But a team of Cornell graduate students – led by Rob Shepherd, assistant professor of mechanical and aerospace engineering – has developed an electroluminescent “skin” that stretches to more than six times its original size while still emitting light. The discovery could lead to significant advances in health care, transportation, electronic communication and other areas.

    “This material can stretch with the body of a soft robot, and that’s what our group does,” Shepherd said, noting that the material has two key properties: “It allows robots to change their color, and it also allows displays to change their shape.”

    This hyper-elastic light-emitting capacitor (HLEC), which can endure more than twice the strain of previously tested stretchable displays, consists of layers of transparent hydrogel electrodes sandwiching a dielectric (insulating) elastomer sheet. The elastomer changes luminance and capacitance (the ability to store an electrical charge) when stretched, rolled and otherwise deformed.
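The ideal parallel-plate relation C = ε₀ε_r·A/d is enough to see why stretching raises capacitance: the sheet’s area grows while its thickness shrinks. The sketch below uses that idealization with hypothetical elastomer properties; it is not an analysis from the Science paper.

```python
# Capacitance of an idealized stretched elastomer layer.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def capacitance(eps_r, area, thickness):
    """Parallel-plate capacitance, C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area / thickness

# Hypothetical layer: 10 mm x 10 mm, 1 mm thick, relative permittivity 3.
eps_r, L0, W0, d0 = 3.0, 1e-2, 1e-2, 1e-3

# Incompressible uniaxial stretch by factor s: length x s, width and
# thickness each x 1/sqrt(s), so C grows linearly with s in this model.
for s in (1.0, 2.0, 5.0):
    area = (L0 * s) * (W0 / math.sqrt(s))
    d = d0 / math.sqrt(s)
    print(f"stretch x{s}: C = {capacitance(eps_r, area, d):.2e} F")
```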

    “We can take these pixels that change color and put them on these robots, and now we have the ability to change their color,” Shepherd said. “Why is that important? For one thing, when robots become more and more a part of our lives, the ability for them to have emotional connection with us will be important. So to be able to change their color in response to mood or the tone of the room we believe is going to be important for human-robot interactions.”

In addition to its ability to emit light under strains of greater than 480 percent, the group’s HLEC was shown to be capable of being integrated into a soft robotic system. Three six-layer HLEC panels were bound together to form a crawling soft robot, with the top four layers making up the light-up skin and the bottom two the pneumatic actuators.

    The chambers were alternately inflated and deflated, with the resulting curvature creating an undulating, “walking” motion.

    Shepherd credited a group of four graduate students – Bryan Peele, Chris Larson, Shuo Li and Sanlin Robinson – with coming up with the idea for the material. All but Li were in Shepherd’s Rheology and Processing of Soft Materials class in spring 2014, when the seeds for this discovery were planted.

    “They would say something like, ‘OK, we have a single pixel that can stretch 500 percent in length.’ And so I’d say, ‘That’s cool, but what is the application for it?’” Shepherd said. “And that’s the biggest thing – you can have something cool, but you need to find a reason to use it.”

    In addition to the four graduate students, all members of the Shepherd Group, contributors included Massimo Tottaro, Lucia Beccai and Barbara Mazzolai of the Italian Institute of Technology’s Center for Micro-BioRobotics, a world leader in robotics study. Shepherd met Beccai and Mazzolai at a conference two years ago; this was their first research collaboration.

The group’s paper, “Highly Stretchable Electroluminescent Skin for Optical Signaling and Tactile Sensing,” is published in the March 3 online edition of the journal Science.

    Although Shepherd admitted to “not being very fashion-forward,” another application involves wearable electronics. While wearable technology today involves putting hard electronics onto a soft base (think Apple Watch or Fitbit), this discovery paves the way for devices that fully conform to the wearer’s shape.

    “You could have a rubber band that goes around your arm that also displays information,” Larson said. “You could be in a meeting and have a rubber band-like device on your arm and could be checking your email. That’s obviously in the future, but that’s the direction we’re looking in.”

    The Shepherd Group has also developed a lightweight, stretchable material with the consistency of memory foam, with the potential for use in prosthetic body parts, artificial organs and soft robotics.

    The group’s latest work was supported by a grant from the Army Research Office, a 2015 award from the Air Force Office of Scientific Research, and two grants from the National Science Foundation.

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    Once called “the first American university” by educational historian Frederick Rudolph, Cornell University represents a distinctive mix of eminent scholarship and democratic ideals. Adding practical subjects to the classics and admitting qualified students regardless of nationality, race, social circumstance, gender, or religion was quite a departure when Cornell was founded in 1865.

    Today’s Cornell reflects this heritage of egalitarian excellence. It is home to the nation’s first colleges devoted to hotel administration, industrial and labor relations, and veterinary medicine. Both a private university and the land-grant institution of New York State, Cornell University is the most educationally diverse member of the Ivy League.

    On the Ithaca campus alone nearly 20,000 students representing every state and 120 countries choose from among 4,000 courses in 11 undergraduate, graduate, and professional schools. Many undergraduates participate in a wide range of interdisciplinary programs, play meaningful roles in original research, and study in Cornell programs in Washington, New York City, and the world over.

     
  • richardmitnick 7:45 am on October 27, 2015 Permalink | Reply
Tags: Robotics

    From TUM: “Robots learn to walk” 

Technische Universität München

October 27, 2015

    Contact:
    Dr. Daniel Renjewski
    Technical University of Munich (TUM)
    Chair of Robotics and Embedded Systems (Prof. Alois Knoll)
    +49 (0)89 289 18133
    daniel.renjewski@tum.de

    Prof. Jonathan Hurst
    Oregon State University
    Associate Professor of Mechanical Engineering
    +1-541-737-7010
    jonathan.hurst@oregonstate.edu

    Springs instead of muscles: The “ATRIAS” robot walks like a human

Illustration of “ATRIAS” by Mikhail Jones


Download the mp4 video here.

    Humanoid robots are intended to become more and more like people. But walking on two legs – one of the characteristic features of the human being – continues to pose particular problems to these machines. Dr. Daniel Renjewski of the Technical University of Munich (TUM), together with his colleagues at Oregon State University, has developed a robot whose gait comes closer than ever before to that of humans. The results of their study could also be used to develop better prostheses.

    When we walk, we are not consciously aware of the structure of the ground. Our body has the ability to automatically compensate for small uneven patches without tripping or coming to a standstill. Walking robots, such as the humanoid “Asimo” from Japan, closely resemble humans in appearance, but tend to walk slowly and stiffly. They also use up a lot of energy in the process.

ASIMO

Humans and animals do not think about walking, explains Dr. Daniel Renjewski of the Chair of Robotics and Embedded Systems at TUM. “The intelligence lies in the mechanics.” Tendons and muscles cushion the impact of any uneven patches in the ground. “When we walk, one might say that we fall from one step into the other,” says Renjewski. This means that our gait is sometimes unstable. Were we to interrupt our stride in mid-movement, we would fall.

    Previous walking robots: stable but stiff

    This type of dynamic movement is difficult to control in a robot designed using conventional approaches. To guarantee that the machine is always stable and does not fall, the engineers therefore have to measure where the robot is located at every point in time, and how its center of gravity shifts as it moves. The price for this accurate steering is that its movements are by necessity controlled and stiff. In most cases, the machines walk on level terrain in the laboratory and are only required to avoid defined obstacles.

The goal for Renjewski and his colleagues at Oregon State University was to develop a robot whose gait is the same as that of a human. The name they gave to this robot, which they describe in a report in the journal IEEE Transactions on Robotics, was “ATRIAS” (Assume The Robot Is A Sphere).

    Theory and practice

    The development of ATRIAS is based on the so-called spring-mass model, first presented in 1989. It describes the underlying principle of walking on two legs. In this model, the entire mass of the body is concentrated into one point, which is attached to a massless spring. The spring is a simplified representation of the muscles, bones and tendons on which the forces generated during walking act in the real world.

    The researchers had to make a few more adjustments to enable them to implement this theoretical model technically: in reality, the springs also have mass, and motors are needed to compensate for unavoidable damping in the system.
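A minimal numerical version of the spring-mass idea is sketched below, reduced to its simplest case: vertical motion of a point mass on a massless leg spring during one stance phase. The parameters are invented, not ATRIAS’s; the point is only that this idealized “leg” already produces a compliant, energy-conserving bounce.

```python
# One stance phase of a 1-D spring-mass hopper.
m = 80.0     # body mass, kg           (hypothetical)
k = 2.0e4    # leg spring stiffness, N/m
L0 = 1.0     # rest leg length, m
g = 9.81

y, vy, dt, t = L0, -0.6, 1e-4, 0.0     # touch down moving downward
while y < L0 or vy < 0:                # until the spring pushes us off
    spring = k * max(L0 - y, 0.0)      # the spring pushes, never pulls
    vy += ((spring - m * g) / m) * dt  # m*y'' = k*(L0 - y) - m*g
    y += vy * dt
    t += dt
print(f"stance lasted {t:.3f} s, takeoff speed {vy:.2f} m/s")
```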

    Keeping steady

    ATRIAS has three motors in each leg for this purpose. Two of the motors act directly on the two leg springs. The third motor ensures lateral stability. ATRIAS’s legs make up ten percent of its total mass, to get as close as possible to the theoretical lack of mass.

Trials showed that it walks three times more efficiently than other human-sized biped robots. Even outside disturbances, such as being hit by a ball or walking over rough terrain, cannot unbalance it. Prof. Jonathan Hurst of Oregon State University, the initiator of the study, is sure that this type of locomotion will catch on in walking robots of the future. If the technology is improved even further, these robots could, for instance, be used to assist in firefighting.

    Better prostheses

    However, these research results are also meaningful for people. In the next research step, Renjewski, who moved to the TUM in May of this year, is working on transferring these findings to robots for gait rehabilitation and prostheses.

Original publication: Daniel Renjewski, Alexander Spröwitz, Andrew Peekema, Mikhail Jones, Jonathan Hurst: Exciting Engineered Passive Dynamics in a Bipedal Robot; IEEE Transactions on Robotics, Volume 31, Issue 5; DOI: 10.1109/TRO.2015.2473456

    The work was supported by the National Science Foundation, the Defense Advanced Research Projects Agency and the Human Frontier Science Program.

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


    Technische Universität München (TUM) is one of Europe’s top universities. It is committed to excellence in research and teaching, interdisciplinary education and the active promotion of promising young scientists. The university also forges strong links with companies and scientific institutions across the world. TUM was one of the first universities in Germany to be named a University of Excellence. Moreover, TUM regularly ranks among the best European universities in international rankings.

     
  • richardmitnick 8:27 am on August 21, 2015 Permalink | Reply
Tags: Robotics

    From Caltech: “Crush, the RoboSub, Places in International Competition” 

Caltech

    08/20/2015
    Lori Dajose

Crush, the RoboSub. Credit: Caltech Robotics Team

    The Caltech Robotics Team—composed of 30 Caltech undergrads and recent alumni—placed fourth in the 18th Annual International RoboSub Competition, held July 20–26 in San Diego, California. The competition, hosted by the Association for Unmanned Vehicle Systems International (AUVSI) Foundation and cosponsored by the U.S. Office of Naval Research, challenges teams of student engineers to perform realistic missions with autonomous underwater vehicles (AUVs) in an underwater environment. Thirty-seven teams from across the globe competed in this year’s event.

    The challenge was to build a robotic submarine that could autonomously navigate an obstacle course, completing tasks such as driving through a gate, bumping into colored buoys, shooting torpedoes through holes, and dropping markers into designated bins. The only human involvement during the competition was the initial placement of the vehicle into the water.

    The Caltech team was divided into three groups, responsible for the mechanical, electrical, and software systems of the robot, which they named Crush. A fourth group managed the team’s fund-raising and outreach efforts. The mechanical team, led by Edward Fouad, a senior in mechanical engineering, was responsible for building grippers, a propulsion system, and a pressure hull to house the robot’s electronics. The autonomous capabilities of the robot were programmed from scratch by the software team, led by Kushal Agarwal, a junior in computer science. The electrical team, led by Torkom Pailevanian, a senior in electrical engineering, designed an inertial measurement unit consisting of gyroscopes and accelerometers that allow the robot to orient itself in 3-D space.
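    A common way to fuse such gyroscope and accelerometer data is a complementary filter: integrate the gyro for short-term responsiveness and nudge the estimate toward the gravity direction reported by the accelerometer to cancel drift. This single-axis Python sketch is a generic illustration, not the team’s actual filter.

```python
import math

def complementary_update(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One filter step: blend the gyro-integrated pitch with the
    pitch implied by the measured gravity vector. alpha close to 1
    trusts the gyro short-term; (1 - alpha) slowly corrects drift."""
    pitch_gyro = pitch + gyro_rate * dt           # integrate angular rate
    pitch_accel = math.atan2(accel_x, accel_z)    # gravity-referenced pitch
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

# Toy run: a level, stationary sensor whose gyro has a small bias.
pitch = 0.0
for _ in range(1000):                             # 10 s at 100 Hz
    pitch = complementary_update(pitch, gyro_rate=0.01,   # rad/s bias
                                 accel_x=0.0, accel_z=9.81, dt=0.01)
print(f"pitch after 10 s: {pitch:.4f} rad (pure integration would give 0.1)")
```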

    Started in 1998, the Annual RoboSub Competition is designed to introduce young students to high-tech STEM fields such as maritime robotics. This year’s team from Caltech was led by Justin Koch, who graduated in June with his BS in mechanical engineering, and advised by Joel Burdick, the Richard L. and Dorothy M. Hayman Professor of Mechanical Engineering and Bioengineering.

    “Last year, as a first-year team, we placed seventh overall and were awarded Best New Entry,” says Koch. “I’m definitely very excited with how we did as only a second-year team!”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

     
  • richardmitnick 5:07 pm on August 15, 2015 Permalink | Reply
    Tags: NASA Swamp Works, Robotics   

    From NASA: “Extreme Access Flyer to Take Planetary Exploration Airborne” 

    NASA

    July 30, 2015
    Steven Siceloff

    A prototype built to test Extreme Access Flyer systems in different environments.
    Credits: NASA/Swamp Works

    Swamp Works engineers at NASA’s Kennedy Space Center in Florida are inventing a flying robotic vehicle that can gather samples on other worlds in places inaccessible to rovers. The vehicles – similar to quad-copters but designed for the thin atmosphere of Mars and the airless voids of asteroids and the moon – would use a lander as a base to replenish batteries and propellants between flights.

    “This is a prospecting robot,” said Rob Mueller, senior technologist for advanced projects at Swamp Works. “The first step in being able to use resources on Mars or an asteroid is to find out where the resources are. They are most likely in hard-to-access areas where there is permanent shadow. Some of the crater walls are angled 30 degrees or more, and that’s far too steep for a traditional rover to navigate and climb.”

    The machines being built fall under the name Extreme Access Flyers, and their designers intend to create vehicles that can travel into the shaded regions of a crater and pull out small amounts of soil to see whether it holds the water-ice promised by readings from orbiting spacecraft. Running on propellants made from resources on the distant worlds, the machines would be able to execute hundreds of explorative sorties during their mission. They also would be small enough for a lander to bring several of them to the surface at once, so if one fails, the mission isn’t lost.

    If that sounds a lot like a job for a quad-copter, it kind of is. On Earth, a quad-copter with its four rotors and outfitted with a digger or sampling device of some sort would be able to execute many missions with no problem. On other worlds, though, the machine would require very large rotors since the atmosphere on Mars is thin and there is no air on an asteroid or the moon. Also, the flyer would have to operate autonomously, figuring out on its own where it is and where it is going since there is no GPS to help it navigate and the communications delays are too large to control it directly from Earth.
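    How much larger those rotors must be follows from the standard thrust relation T ≈ C_T·ρ·A·(ΩR)²: at a fixed thrust coefficient and tip speed, the required disk area scales inversely with air density. A back-of-the-envelope sketch, with assumed representative densities rather than mission figures:

```python
import math

# Rotor-disk scaling from T = C_T * rho * A * (tip_speed)^2, holding
# the thrust coefficient and tip speed fixed. Densities are assumed
# representative values.
rho_earth = 1.225  # kg/m^3, sea level
rho_mars = 0.020   # kg/m^3, near the Martian surface

area_ratio = rho_earth / rho_mars        # disk area must grow ~60x
radius_ratio = math.sqrt(area_ratio)     # radius grows as the square root
print(f"disk area: ~{area_ratio:.0f}x larger on Mars; "
      f"rotor radius: ~{radius_ratio:.1f}x larger")
```

    On an airless asteroid or the moon, ρ is effectively zero and no rotor size suffices, which is why the design moves to thrusters.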

    Cold-gas jets using oxygen or water vapor will take on the lifting and maneuvering duties performed by rotors on Earth. For navigation, the team is programming the flyer to recognize terrain and landmarks, guiding itself to areas chosen by controllers on Earth or even scouting out the best sampling sites on its own.
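    Cold-gas valves are typically on/off rather than throttleable, so controllers for such jets are often bang-bang with a dead band: fire a jet only when the error, led by a velocity term, leaves a tolerance window. This one-axis Python sketch uses assumed values throughout and is not NASA’s flight code.

```python
# Dead-band (bang-bang) control for one translational axis, the kind
# of logic often paired with on/off cold-gas valves. All parameters
# are assumptions for illustration.
THRUST = 2.0      # N per jet when its valve is open
MASS = 5.0        # kg
DEAD_BAND = 0.02  # m; no firing while the led error is inside this band
K_RATE = 0.5      # s; velocity lead term that damps the limit cycle

def jet_command(error, velocity):
    """Return -1, 0, or +1: which opposing jet (if any) to fire."""
    led_error = error + K_RATE * velocity
    if led_error > DEAD_BAND:
        return -1
    if led_error < -DEAD_BAND:
        return +1
    return 0

# Toy simulation: start 0.5 m from the target point.
x, v, dt = 0.5, 0.0, 0.01
for _ in range(2000):                       # 20 s
    a = jet_command(x, v) * THRUST / MASS
    v += a * dt
    x += v * dt
print(f"offset after 20 s: {x:+.3f} m")
```

    The dead band trades pointing accuracy for propellant: a wider band fires the valves less often, which matters when every gram of gas must be made or carried.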

    “It would have enough propellant to fly for a number of minutes on Mars or on the moon, hours on an asteroid,” said DuPuis.

    For the sampling itself, designers currently envision a modular approach that would let the flyer carry one tool at a time to a sample area and gather about seven grams of material per trip. That’s enough for instruments to analyze and, over the course of many flights, enough to give Earth-bound scientists a complete geological picture of an area.

    It’s work that would have been too complicated to research even five years ago, particularly with off-the-shelf components. Now, though, the advent of autonomous flight controllers, laser guidance and mapping systems, combined with innovations in 3-D printing, makes a successful prototype flyer much more likely. A partnership with Embry-Riddle Aeronautical University and Honeybee Robotics Spacecraft Mechanisms is also providing more expertise.

    “The flight control systems of commercially available small, unmanned multi-rotor aerial vehicles are not too dissimilar to a spacecraft controller,” said Mike DuPuis, co-investigator of the Extreme Access Flyer project. “That was the starting point for developing a controller.”

    In the Swamp Works laboratory, the team has assembled several models designed to test aspects of the final machine. A large quad-copter about five feet across that uses ducted fans is about the size of the prototype the team has in mind for an operational mission in space. It has been tested at the planetary-surface analog site built for the Morpheus lander project at the north end of the Shuttle Landing Facility’s runway.

    A smaller ducted-fan flyer, about the size of a person’s palm, is routinely flown inside a 10-by-10-foot cube to test software and control abilities. Another, built primarily with asteroid exploration in mind, is suspended inside a gimbal device that lets it maneuver much as it would in zero gravity, using high-pressure nitrogen cold-gas thrusters to tilt and spin while the team judges its behavior in a simulated world on a computer that shows what its flight around an asteroid would look like.

    The team started at a low level of technological readiness two years ago and is steadily pushing the mission and design closer to a state where it can be made into a flight-ready craft.

    The uses for the sampling vehicle may not be solely extraterrestrial, Mueller said. On Earth, an aerial vehicle that can pull a few grams of dirt from an area potentially brimming with toxins would be very valuable for first responders or those researching a new area who do not want to risk humans. Mueller said the effects of a nuclear radiation leak on surrounding areas, for example, could be measured with soil gathered quickly by a vehicle like the Extreme Access Flyer.

    “We’re an innovations lab, so in everything we do, we try to come up with new solutions,” Mueller said.

    In addition to scouting craters for water and other elements that can be processed into fuel for large spacecraft and air for humans, the flyer would be capable of exploring lava tubes, which are known to exist on Mars and the moon and are found in many volcanic areas on Earth. Because some are thought to be 30 feet or more in diameter, an Extreme Access Flyer could navigate them autonomously during a robotic precursor mission and find a safe place for astronauts to shelter during their journey to Mars.

    “You could put a whole habitat inside a lava tube to shelter astronauts from radiation, thermal extremes, weather and micrometeorites,” Mueller said.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The National Aeronautics and Space Administration (NASA) is the agency of the United States government that is responsible for the nation’s civilian space program and for aeronautics and aerospace research.

    President Dwight D. Eisenhower established the National Aeronautics and Space Administration (NASA) in 1958 with a distinctly civilian (rather than military) orientation encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA). The new agency became operational on October 1, 1958.

    Since that time, most U.S. space exploration efforts have been led by NASA, including the Apollo moon-landing missions, the Skylab space station, and later the Space Shuttle. Currently, NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle and Commercial Crew vehicles. The agency is also responsible for the Launch Services Program (LSP) which provides oversight of launch operations and countdown management for unmanned NASA launches. Most recently, NASA announced a new Space Launch System that it said would take the agency’s astronauts farther into space than ever before and lay the cornerstone for future human space exploration efforts by the U.S.

    NASA science is focused on better understanding Earth through the Earth Observing System, advancing heliophysics through the efforts of the Science Mission Directorate’s Heliophysics Research Program, exploring bodies throughout the Solar System with advanced robotic missions such as New Horizons, and researching astrophysics topics, such as the Big Bang, through the Great Observatories (Hubble, Chandra, Spitzer) and associated programs. NASA shares data with various national and international organizations, such as the Greenhouse Gases Observing Satellite operated by JAXA.

     