Tagged: Robotics

  • richardmitnick 10:33 am on February 19, 2020
    Tags: Robotics

    From JHU HUB: “By studying snakes, engineers learn how to build better robots” 

    From JHU HUB

    2.18.20
    Chanapa Tantibanchachai
    chanapa@jhu.edu
    Office phone 443-997-5056
    Cell phone 928-458-9656

    Johns Hopkins mechanical engineers design a snake robot based on the climbing technique of the kingsnake that could help advance search-and-rescue technology

    Kingsnake, the model for this work.

    Snakes live in diverse environments ranging from unbearably hot deserts to lush tropical forests. But regardless of their habitat, they are able to slither up trees, rocks, and shrubbery with ease. By studying how the creatures move, a team of Johns Hopkins engineers has created a snake robot that can nimbly and stably climb large steps.

    The team’s new findings, published in the Journal of Experimental Biology and Royal Society Open Science, could advance the creation of search-and-rescue robots that can successfully navigate treacherous terrain.

    “We look to these creepy creatures for movement inspiration because they’re already so adept at stably scaling obstacles in their day-to-day lives,” says Chen Li, an assistant professor of mechanical engineering at Johns Hopkins University and the papers’ senior author. “Hopefully our robot can learn how to bob and weave across surfaces just like snakes.”


    Snake Robot Locomotion

    Previous studies had mainly observed snake movements on flat surfaces and rarely examined their movement in 3D terrain, except on trees, says Li. These studies did not account for the real-life large obstacles snakes encounter, such as rubble and debris, that a search-and-rescue robot would similarly have to climb over.

    Li’s team first studied how the variable kingsnake, a snake commonly found living in both deserts and pine-oak forests, climbed steps in Li’s Terradynamics Lab. The lab melds engineering, biology, and physics to study animal movements for tips and tricks to build more versatile robots.

    “These snakes have to regularly travel across boulders and fallen trees; they’re the masters of movement and there’s much we can learn from them,” Li says.

    Li and his team ran a series of experiments that varied step height and surface friction to observe how the snakes contorted their bodies in response to these barriers. They found that snakes partitioned their bodies into three movement sections: the front and rear sections wriggled back and forth on the horizontal surfaces like a wave, while the section between remained stiff, suspended in the air to bridge the height of the step. The wriggling portions, they noticed, provided stability that kept the snake from tipping over.
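    To make the three-section gait concrete, here is a minimal Python sketch (not the researchers’ code; the segment count, wave amplitude, and wavelength are illustrative assumptions). The front and rear sections propagate a lateral wave while a block of middle segments is held rigid to bridge the step:

import math

# A minimal sketch of the three-section gait described above: front and
# rear segments oscillate laterally while a middle section stays rigid
# to span the step. Parameters are illustrative, not from the paper.
def gait_offsets(n_segments, bridge_start, bridge_len, t,
                 amplitude=1.0, wavelength=6.0, speed=2.0):
    """Return a lateral offset for each body segment at time t."""
    offsets = []
    for i in range(n_segments):
        if bridge_start <= i < bridge_start + bridge_len:
            offsets.append(0.0)  # stiff, suspended bridge section
        else:
            phase = 2 * math.pi * i / wavelength - speed * t
            offsets.append(amplitude * math.sin(phase))  # traveling wave
    return offsets

# As the snake climbs, the rigid bridge section travels toward the tail:
for t in range(3):
    print(gait_offsets(n_segments=12, bridge_start=4 + t, bridge_len=3, t=t))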

    Image credit: Will Kirk / Johns Hopkins University

    As the snakes moved onto the step, these three body movement sections traveled down the snake’s body. As more and more of the snake reached the top of the step, its front body section would get longer and its rear section would get shorter while the middle body section remained the height of the step, suspended vertically.

    If the steps got taller and more slippery, the snakes would move more slowly and wriggle their front and rear body less to maintain stability.

    After analyzing their videos and noting how snakes climbed steps in the lab, Qiyuan Fu, a graduate student in Li’s lab, created a robot to mimic the animals’ movements.

    At first, the robot snake had difficulty staying stable on large steps and often wobbled and flipped over or got stuck on the steps. To address these issues, the researchers inserted a suspension system (like that in a vehicle) into each body segment so it could compress against the surface when needed. After this, the snake robot was less wobbly, more stable, and climbed steps as high as 38% of its body length with a nearly 100% success rate.
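    The suspension idea can be sketched as a per-segment spring-damper that maintains a positive contact force against the surface; the stiffness, damping, and travel values below are illustrative assumptions, not numbers from the paper:

# Sketch of a vehicle-style suspension for one robot segment: a linear
# spring-damper that lets the segment compress against the surface so
# contact (and traction) is maintained. Constants are illustrative.
def suspension_force(compression, compression_rate,
                     stiffness=200.0, damping=15.0, max_travel=0.02):
    """Normal force (N) from a spring-damper suspension.

    compression: distance the suspension is compressed (m), clipped to
    the available travel; positive means pressed into the surface.
    """
    x = min(max(compression, 0.0), max_travel)
    force = stiffness * x + damping * compression_rate
    return max(force, 0.0)  # the surface can push but not pull

print(suspension_force(0.01, 0.05))  # 2.75 N of contact force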

    Compared to snake robots from other studies, Li’s snake robot was speedier and more stable than all but one, and even came close to mimicking the actual snake’s speed. One downside of the added body suspension system, however, was that the robot required more electricity.

    “The animal is still far superior, but these results are promising for the field of robots that can travel across large obstacles,” adds Li.

    Next, the team will test and improve the snake robot for even more complex 3D terrain with more unstructured large obstacles.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    About the Hub
    We’ve been doing some thinking — quite a bit, actually — about all the things that go on at Johns Hopkins. Discovering the glue that holds the universe together, for example. Or unraveling the mysteries of Alzheimer’s disease. Or studying butterflies in flight to fine-tune the construction of aerial surveillance robots. Heady stuff, and a lot of it.

    In fact, Johns Hopkins does so much, in so many places, that it’s hard to wrap your brain around it all. It’s too big, too disparate, too far-flung.

    We created the Hub to be the news center for all this diverse, decentralized activity, a place where you can see what’s new, what’s important, what Johns Hopkins is up to that’s worth sharing. It’s where smart people (like you) can learn about all the smart stuff going on here.

    At the Hub, you might read about cutting-edge cancer research or deep-trench diving vehicles or bionic arms. About the psychology of hoarders or the delicate work of restoring ancient manuscripts or the mad motor-skills brilliance of a guy who can solve a Rubik’s Cube in under eight seconds.

    There’s no telling what you’ll find here because there’s no way of knowing what Johns Hopkins will do next. But when it happens, this is where you’ll find it.

    The Johns Hopkins University opened in 1876, with the inauguration of its first president, Daniel Coit Gilman. “What are we aiming at?” Gilman asked in his installation address. “The encouragement of research … and the advancement of individual scholars, who by their excellence will advance the sciences they pursue, and the society where they dwell.”

    The mission laid out by Gilman remains the university’s mission today, summed up in a simple but powerful restatement of Gilman’s own words: “Knowledge for the world.”

    What Gilman created was a research university, dedicated to advancing both students’ knowledge and the state of human knowledge through research and scholarship. Gilman believed that teaching and research are interdependent, that success in one depends on success in the other. A modern university, he believed, must do both well. The realization of Gilman’s philosophy at Johns Hopkins, and at other institutions that later attracted Johns Hopkins-trained scholars, revolutionized higher education in America, leading to the research university system as it exists today.

     
  • richardmitnick 10:50 am on October 30, 2019
    Tags: "Self-transforming robot blocks jump; spin; flip; and identify each other", , M-blocks, , Robotics   

    From MIT News: “Self-transforming robot blocks jump, spin, flip, and identify each other” 


    From MIT News

    October 30, 2019
    Rachel Gordon | MIT CSAIL

    One modular robotic cube snaps into place with the rest of the M-Blocks. Image: Jason Dorfman/MIT CSAIL

    Robots developed at MIT’s Computer Science and Artificial Intelligence Laboratory can self-assemble to form various structures, with applications including inspection.

    Swarms of simple, interacting robots have the potential to unlock stealthy abilities for accomplishing complex tasks. Getting these robots to achieve a true hive-like mind of coordination, though, has proved to be a hurdle.

    In an effort to change this, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with a surprisingly simple scheme: self-assembling robotic cubes that can climb over and around one another, leap through the air, and roll across the ground.

    Six years after the project’s first iteration, the robots can now “communicate” with each other using a barcode-like system on each face of the block that allows the modules to identify each other. The autonomous fleet of 16 blocks can now accomplish simple tasks or behaviors, such as forming a line, following arrows, or tracking light.

    Inside each modular “M-Block” is a flywheel that moves at 20,000 revolutions per minute, using angular momentum when the flywheel is braked. On each edge and every face are permanent magnets that let any two cubes attach to each other.
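    Only the 20,000 rpm figure above comes from the article; assuming a plausible flywheel inertia and braking time, a back-of-the-envelope calculation shows how braking converts stored angular momentum into the impulsive torque that pivots a cube:

import math

rpm = 20_000                       # from the article
omega = rpm * 2 * math.pi / 60     # flywheel speed in rad/s (~2094)
I_wheel = 2.0e-5                   # flywheel inertia, kg*m^2 (assumed)
L = I_wheel * omega                # stored angular momentum

brake_time = 0.01                  # braking duration, s (assumed)
torque = L / brake_time            # average torque delivered to the cube

print(f"angular momentum: {L:.4f} kg*m^2/s")
print(f"average braking torque: {torque:.2f} N*m")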

    While the cubes can’t be manipulated quite as easily as, say, those from the video game “Minecraft,” the team envisions strong applications in inspection, and eventually disaster response. Imagine a burning building where a staircase has disappeared. In the future, you can envision simply throwing M-Blocks on the ground, and watching them build out a temporary staircase for climbing up to the roof, or down to the basement to rescue victims.

    “M stands for motion, magnet, and magic,” says MIT Professor and CSAIL Director Daniela Rus. “’Motion,’ because the cubes can move by jumping. ‘Magnet,’ because the cubes can connect to other cubes using magnets, and once connected they can move together and connect to assemble structures. ‘Magic,’ because we don’t see any moving parts, and the cube appears to be driven by magic.”

    While the mechanism is quite intricate on the inside, the exterior is just the opposite, which enables more robust connections. Beyond inspection and rescue, the researchers also imagine using the blocks for things like gaming, manufacturing, and health care.

    “The unique thing about our approach is that it’s inexpensive, robust, and potentially easier to scale to a million modules,” says CSAIL PhD student John Romanishin, lead author on a new paper [IROS] about the system. “M-Blocks can move in a general way. Other robotic systems have much more complicated movement mechanisms that require many steps, but our system is more scalable.”

    Romanishin wrote the paper alongside Rus and undergraduate student John Mamish of the University of Michigan. They will present the paper on M-Blocks at IEEE’s International Conference on Intelligent Robots and Systems in November in Macau.

    Previous modular robot systems typically tackle movement using unit modules with small robotic arms known as external actuators. These systems require a lot of coordination for even the simplest movements, with multiple commands for one jump or hop.

    On the communication side, other attempts have involved the use of infrared light or radio waves, which can quickly get clunky: If you have lots of robots in a small area and they’re all trying to send each other signals, it opens up a messy channel of conflict and confusion.


    Back in 2013, the team built out their mechanism for M-Blocks. They created six-faced cubes that move about using something called “inertial forces.” This means that, instead of using moving arms that help connect the structures, the blocks have a mass inside of them which they “throw” against the side of the module, which causes the block to rotate and move.

    Each module can move in four cardinal directions when placed on any one of the six faces, which results in 24 different movement directions. Without little arms and appendages sticking out of the blocks, it’s a lot easier for them to stay free of damage and avoid collisions.
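    The direction count is simple arithmetic, six resting faces times four pivot directions; a few lines of Python enumerate the 24 (face, direction) moves:

from itertools import product

faces = ["+x", "-x", "+y", "-y", "+z", "-z"]     # face resting on the ground
directions = ["north", "south", "east", "west"]  # direction the cube pivots
moves = list(product(faces, directions))
print(len(moves))  # 6 * 4 = 24 distinct movement directions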

    With the physical hurdles tackled, a critical challenge persisted: how could these cubes communicate and reliably identify the configuration of neighboring modules?

    Romanishin came up with algorithms designed to help the robots accomplish simple tasks, or “behaviors,” which led the team to the idea of a barcode-like system through which the robots can sense the identity and face of the other blocks they’re connected to.

    In one experiment, the team had the modules form a line from a random structure and watched whether they could determine the specific way they were connected to each other. If a module wasn’t in the line, it had to pick a direction and roll that way until it ended up at the end of the line.

    Essentially, the blocks used the configuration of their connections to guide the motions they chose, and 90 percent of the M-Blocks succeeded in getting into a line.
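    As a toy illustration of that behavior (a centralized simplification, not the authors’ distributed algorithm), the sketch below packs blocks on a one-dimensional rail into a contiguous line, with each out-of-place block rolling to a free cell at the end of the line:

def form_line(positions):
    """Pack blocks into a contiguous line starting at the leftmost block.

    positions: sorted list of distinct integer cells. Each block outside
    the target span rolls toward the line and parks in the nearest free
    cell, mimicking "pick a direction and roll to the end".
    """
    start = min(positions)
    target = list(range(start, start + len(positions)))
    free = [c for c in target if c not in positions]
    movers = [p for p in positions if p not in target]
    for m in movers:
        cell = min(free, key=lambda c: abs(c - m))  # nearest free end cell
        free.remove(cell)
    return target

print(form_line([0, 1, 2, 5, 9]))  # -> [0, 1, 2, 3, 4]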

    The team notes that building out the electronics was very challenging, especially when trying to fit intricate hardware inside such a small package. To make M-Block swarms a larger reality, the team wants just that: more and more robots, making bigger swarms with stronger capabilities for building various structures.

    The project was supported, in part, by the National Science Foundation and Amazon Robotics.

    See the full article here.


    Please help promote STEM in your local schools.


    STEM Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 1:24 pm on October 2, 2019
    Tags: Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer; cosmology; brain imaging and materials science among others., Bigger and better telescopes and accelerators and of course supercomputers on which they could run larger multiscale simulations, Robotics, The influx of massive data sets and the computing power to sift sort and analyze it., The size of the simulations we are running is so big the problems that we are trying to solve are getting bigger so that these AI methods can no longer be seen as a luxury but as must-have technology.

    From Argonne Leadership Computing Facility: “Artificial Intelligence: Transforming science, improving lives” 

    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    September 30, 2019
    Mary Fitzpatrick
    John Spizzirri

    Commitment to developing artificial intelligence (AI) as a national research strategy in the United States may have unequivocally defined 2019 as the Year of AI — particularly at the federal level, more specifically throughout the U.S. Department of Energy (DOE) and its national laboratory complex.

    In February, the White House established the Executive Order on Maintaining American Leadership in Artificial Intelligence (American AI Initiative) to expand the nation’s leadership role in AI research. Its goals are to fuel economic growth, enhance national security and improve quality of life.

    The initiative injects substantial and much-needed research dollars into federal facilities across the United States, promoting technology advances and innovation and enhancing collaboration with nongovernment partners and allies abroad.

    In response, DOE has made AI — along with exascale supercomputing and quantum computing — a major element of its $5.5 billion scientific R&D budget and established the Artificial Intelligence and Technology Office, which will serve to coordinate AI work being done across the DOE.

    At DOE facilities like Argonne National Laboratory, researchers have already begun using AI to design better materials and processes, safeguard the nation’s power grid, accelerate treatments for brain trauma and cancer, and develop next-generation microelectronics for applications in AI-enabled devices.

    Over the last two years, Argonne has made significant strides toward implementing its own AI initiative. Leveraging the Laboratory’s broad capabilities and world-class facilities, it has set out to explore and expand new AI techniques, encourage collaboration, automate traditional research methods as well as lab facilities, and drive discovery.

    In July, it hosted an AI for Science town hall, the first of four such events that also included Oak Ridge and Lawrence Berkeley national laboratories and DOE’s Office of Science.


    Engaging nearly 350 members of the AI community, the town hall served to stimulate conversation around expanding the development and use of AI, while addressing critical challenges by using the initiative framework called AI for Science.

    “AI for Science requires new research and infrastructure, and we have to move a lot of data around and keep track of thousands of models,” says Rick Stevens, Associate Laboratory Director for Argonne’s Computing, Environment and Life Sciences (CELS) Directorate and a professor of computer science at the University of Chicago.

    Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences, is helping to develop the CANDLE computer architecture on the patient level, which is meant to help guide drug treatment choices for tumors based on a much wider assortment of data than currently used.

    “How do we distribute this production capability to thousands of people? We need to have system software with different capabilities for AI than for simulation software to optimize workflows. And these are just a few of the issues we have to begin to consider.”

    The conversation has just begun and continues through Laboratory-wide talks and events, such as a recent AI for Science workshop aimed at growing interest in AI capabilities through technical hands-on sessions.

    Argonne also will host DOE’s Innovation XLab Artificial Intelligence Summit in Chicago, meant to showcase the assets and capabilities of the national laboratories and facilitate an exchange of information and ideas between industry, universities, investors and end-use customers with Lab innovators and experts.
    What exactly is AI?

    Ask any number of researchers to define AI and you’re bound to get — well, first, a long pause and perhaps a chuckle — a range of answers from the more conventional “utilizing computing to mimic the way we interpret data but at a scale not possible by human capability” to “a technology that augments the human brain.”

    Taken together, AI might well be viewed as a multi-component toolbox that enables computers to learn, recognize patterns, solve problems, explore complex datasets and adapt to changing conditions — much like humans, but one day, maybe better.

    While the definitions and the tools may vary, the goals remain the same: utilize or develop the most advanced AI technologies to more effectively address the most pressing issues in science, medicine and technology, and accelerate discovery in those areas.

    At Argonne, AI has become a critical tool for modeling and prediction across almost all areas where the Laboratory has significant domain expertise: chemistry, materials, photon science, environmental and manufacturing sciences, biomedicine, genomics and cosmology.

    A key component of Argonne’s AI toolbox is a technique called machine learning and its derivatives, such as deep learning. The latter is built on neural networks comprising many layers of artificial neurons that learn internal representations of data, mimicking human information-gathering-processing systems like the brain.

    “Deep learning is the use of multi-layered neural networks to do machine learning, a program that gets smarter or more accurate as it gets more data to learn from. It’s very successful at learning to solve problems,” says Stevens.
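    To make that definition concrete, here is a minimal multi-layered network in NumPy: stacked affine layers with a nonlinearity between them, which is all the “deep” refers to. The weights are random and untrained; this sketches the structure only and is not CANDLE or any Argonne code:

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass through a stack of fully connected layers."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)              # hidden layer: affine + nonlinearity
    return h @ weights[-1] + biases[-1]  # linear output layer

# a 4 -> 8 -> 8 -> 1 network with random, untrained weights
sizes = [4, 8, 8, 1]
Ws = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

print(mlp_forward(rng.normal(size=(3, 4)), Ws, bs))  # 3 example outputs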

    A staunch supporter of AI, particularly deep learning, Stevens is principal investigator on a multi-institutional effort developing the deep neural network application CANDLE (CANcer Distributed Learning Environment), which integrates deep learning with novel data, modeling and simulation techniques to accelerate cancer research.

    Coupled with the power of Argonne’s forthcoming exascale computer Aurora — which has the capacity to deliver a billion billion calculations per second — the CANDLE environment will enable a more personalized and effective approach to cancer treatment.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    And that is just a small sample of AI’s potential in science. Currently, all across Argonne, researchers are involved in more than 60 AI-related investigations, many of them driven by machine learning.

    Argonne Distinguished Fellow Valerie Taylor’s work looks at how applications execute on computers and large-scale, high-performance computing systems. Using machine learning, she and her colleagues model an execution’s behavior and then use that model to provide feedback on how to best modify the application for better performance.

    “Better performance may be shorter execution time or, using generated metrics such as energy, it may be reducing the average power,” says Taylor, director of Argonne’s Mathematics and Computer Science (MCS) division. “We use statistical analysis to develop the models and identify hints on how to modify the application.”

    Materials scientists are exploring the use of machine learning to optimize models of complex material properties in the discovery and design of new materials that could benefit energy storage, electronics, renewable energy resources and additive manufacturing, to name just a few areas.

    And still more projects address complex transportation and vehicle efficiency issues by enhancing engine design, minimizing road congestion, increasing energy efficiency and improving safety.

    Beyond the deep

    Beyond deep learning, there are many subfields of AI that people have been working on for years, notes Stevens. “And while machine learning now dominates, something else might emerge as a strength.”

    Natural language processing, for example, is commercially recognizable as voice-activated technologies — think Siri — and on-the-fly language translators. Beyond those capabilities, it can review, analyze and summarize information about a given topic from journal articles, reports and other publications, and extract and coalesce select information from massive and disparate datasets.

    Immersive visualization can place us into 3D worlds of our own making, interject objects or data into our current reality or improve upon human pattern recognition. Argonne researchers have found application for virtual and augmented reality in the 3D visualization of complicated data sets and the detection of flaws or instabilities in mechanical systems.

    And of course, there is robotics — a program started at Argonne in the late 1940s and rebooted in 1999 — that is just beginning to take advantage of Argonne’s expanding AI toolkit, whether to conduct research in a specific domain or improve upon its more utilitarian use in decommissioning nuclear power plants.

    Until recently, according to Stevens, AI has been a loose collection of methods using very different underlying mechanisms, and the people using them weren’t necessarily communicating their progress or potentials with one another.

    But with a federal initiative in hand and a Laboratory-wide vision, that is beginning to change.

    Among those trying to find new ways to collaborate and combine these different AI methods is Marius Stan, a computational scientist in Argonne’s Applied Materials division (AMD) and a senior fellow at both the University of Chicago’s Consortium for Advanced Science and Engineering and the Northwestern-Argonne Institute for Science and Engineering.

    Stan leads a research area called Intelligent Materials Design that focuses on combining different elements of AI to discover and design new materials and to optimize and control complex synthesis and manufacturing processes.

    Work on the latter has created a collaboration between Stan and colleagues in the Applied Materials and Energy Systems divisions, and the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

    Merging machine learning and computer vision with the Flame Spray Pyrolysis technology at Argonne’s Materials Engineering Research Facility, the team has developed AI “intelligent software” that can optimize the manufacturing process in real time.

    “Our idea was to use the AI to better understand and control in real time — first in a virtual, experimental setup, then in reality — a complex synthesis process,” says Stan.

    Automating the process makes it safer and much faster than human-led operation. Even more intriguing is the possibility that the AI might identify materials with better properties than the researchers would.

    What drove us to AI?

    Whether or not they concur on a definition, most researchers will agree that the impetus for the escalation of AI in scientific research was the influx of massive data sets and the computing power to sift, sort and analyze it.

    Not only was the push coming from big corporations brimming with user data, but the tools that drive science were getting more expansive — bigger and better telescopes and accelerators and of course supercomputers, on which they could run larger, multiscale simulations.

    “The size of the simulations we are running is so big, the problems that we are trying to solve are getting bigger, so that these AI methods can no longer be seen as a luxury, but as must-have technology,” notes Prasanna Balaprakash, a computer scientist in MCS and ALCF.

    Data and compute size also drove the convergence of more traditional techniques, such as simulation and data analysis, with machine and deep learning. Where analysis of data generated by simulation would eventually lead to changes in an underlying model, that data is now being fed back into machine learning models and used to guide more precise simulations.

    “More or less anybody who is doing large-scale computation is adopting an approach that puts machine learning in the middle of this complex computing process and AI will continue to integrate with simulation in new ways,” says Stevens.

    “And where the majority of users are in theory-modeling-simulation, they will be integrated with experimentalists on data-intense efforts. So the population of people who will be part of this initiative will be more diverse.”

    But while AI is leading to faster time-to-solution and more precise results, the number of data points, parameters and iterations required to get to those results can still prove monumental.

    Focused on the automated design and development of scalable algorithms, Balaprakash and his Argonne colleagues are developing new types of AI algorithms and methods to more efficiently solve large-scale problems that deal with different ranges of data. These additions are intended to make existing systems scale better on supercomputers, like those housed at the ALCF, a necessity in light of exascale computing.

    “We are developing an automated machine learning system for a wide range of scientific applications, from analyzing cancer drug data to climate modeling,” says Balaprakash. “One way to speed up a simulation is to replace the computationally expensive part with an AI-based predictive model that can make the simulation faster.”
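    The surrogate idea Balaprakash describes can be sketched in a few lines: fit a cheap model to input/output pairs from an expensive simulation, then query the model where speed matters. The “simulation” below is a stand-in toy function, and scikit-learn is an assumed dependency:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_simulation(x):
    """Stand-in for a costly physics code."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(500, 2))
y_train = expensive_simulation(X_train)

# train the cheap surrogate on simulation input/output pairs
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

X_new = rng.uniform(-1, 1, size=(5, 2))
print("simulation:", expensive_simulation(X_new).round(3))
print("surrogate: ", surrogate.predict(X_new).round(3))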

    Industry support

    The AI techniques that are expected to drive discovery are only as good as the tech that drives them, making collaboration between industry and the national labs essential.

    “Industry is investing a tremendous amount in building up AI tools,” says Taylor. “Their efforts shouldn’t be duplicated, but they should be leveraged. Also, industry comes in with a different perspective, so by working together, the solutions become more robust.”

    Argonne has long had relationships with computing manufacturers to deliver a succession of ever-more-powerful machines to handle the exponential growth in data size and simulation scale. Its most recent partnership is with semiconductor chip manufacturer Intel and supercomputer manufacturer Cray to develop the exascale machine Aurora.

    But the Laboratory is also collaborating with a host of other industrial partners in the development or provision of everything from chip design to deep learning-enabled video cameras.

    One of these, Cerebras, is working with Argonne to test a first-of-its-kind AI accelerator that provides a 100–500 times improvement over existing AI accelerators. As its first U.S. customer, Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer, cosmology, brain imaging and materials science, among others.

    The National Science Foundation-funded Array of Things, a partnership between Argonne, the University of Chicago and the City of Chicago, actively seeks commercial vendors to supply technologies for its edge computing network of programmable, multi-sensor devices.

    But Argonne and the other national labs are not the only ones to benefit from these collaborations. Companies understand the value in working with such organizations, recognizing that the AI tools developed by the labs, combined with the kinds of large-scale problems they seek to solve, offer industry unique benefits in terms of business transformation and economic growth, explains Balaprakash.

    “Companies are interested in working with us because of the type of scientific applications that we have for machine learning,” he adds. “What we have is so diverse, it makes them think a lot harder about how to architect a chip or design software for these types of workloads and science applications. It’s a win-win for both of us.”

    AI’s future, our future

    “There is one area where I don’t see AI surpassing humans any time soon, and that is hypotheses formulation,” says Stan, “because that requires creativity. Humans propose interesting projects and for that you need to be creative, make correlations, propose something out of the ordinary. It’s still human territory but machines may soon take the lead.

    “It may happen,” he says, and adds that he’s working on it.

    In the meantime, Argonne researchers continue to push the boundaries of existing AI methods and forge new components for the AI toolbox. Deep learning techniques like neuromorphic algorithms, which exhibit the adaptive nature of insects in an equally small computational space, can be used at the “edge,” where there are few computing resources, as in cell phones or urban sensors.

    An optimization technique called neural architecture search, in which one neural network improves another, is helping to automate the development of deep-learning-based predictive models in several scientific and engineering domains, such as cancer drug discovery and weather forecasting using supercomputers.
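    In its simplest form such a search can be reduced to sampling candidate layer layouts, training each briefly, and keeping the best, as in the random-search sketch below (a simplification: the article alludes to one network improving another, which this does not implement; scikit-learn is an assumed dependency):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]           # toy regression target
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_score, best_arch = -np.inf, None
for _ in range(10):                               # sample 10 architectures
    depth = int(rng.integers(1, 4))               # 1-3 hidden layers
    arch = tuple(int(rng.choice([8, 16, 32])) for _ in range(depth))
    model = MLPRegressor(hidden_layer_sizes=arch, max_iter=500,
                         random_state=0).fit(X_tr, y_tr)
    score = model.score(X_val, y_val)             # validation R^2
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch}, validation R^2 = {best_score:.3f}")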

    Just as big data and better computational tools drove the convergence of simulation, data analysis and visualization, the introduction of the exascale computer Aurora into the Argonne complex of leadership-class tools and experts will only serve to accelerate the evolution of AI and witness its full assimilation into traditional techniques.

    The tools may change, the definitions may change, but AI is here to stay as an integral part of the scientific method and our lives.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

    Argonne Lab Campus

     
  • richardmitnick 12:18 pm on August 20, 2019
    Tags: "GRASP Lab’s high-flying robots", Autonomous airborne robots which can link together and work together to complete tasks and break apart., David Saldaña, From a remote computer Saldaña can send a command to eight robots simultaneously to perform a certain task., GRASP- General Robotics; Automation; Sensing Perception Laboratory, Penn Engineering Research and Collaboration Hub, , Robotics, Saldaña also spends much of his time designing the shape and structure of the modular robots., The typical workflow for creating the modular robots is to create a mathematical model; write computer code; and run simulations before testing a new program out on actual drones.   

    From Penn Today: “GRASP Lab’s high-flying robots” 


    From Penn Today

    August 19, 2019

    Credits:
    Gina Vitale, Writer
    Eric Sucar, Photographer

    Postdoctoral researcher David Saldaña is working on algorithms and designs for autonomous airborne robots that can link together, break apart, and work together to complete tasks.

    David Saldaña, a postdoctoral researcher with the GRASP lab, works on flying robots that can self-assemble into specific structures or align themselves around an object to lift it up.

    In the Penn Engineering Research and Collaboration Hub, there is a wide-open space with high ceilings and a padded floor. All around it are aisles of soldering equipment, propped-up prototypes, and metal parts of many shapes and sizes. Nestled on the third floor of the looming Pennovation Center building in the Grays Ferry neighborhood, it’s the perfect venue for robotics research.

    This is where Saldaña, a member of the General Robotics, Automation, Sensing & Perception (GRASP) Laboratory, and his collaborators in the School of Engineering and Applied Science perform test flights with some of his robots. Although the designs vary in size, the newest square prototype is about the size of a shoebox. Each can be remote controlled like most drones, but when he activates several of them at once, they autonomously come together in the air.

    From a remote computer, Saldaña can send a command to eight robots simultaneously to perform a certain task such as getting into a formation of four across, two wide. The robots rise, communicate with each other to determine their spots in the two rows, align, and snap together using the magnets on their corners. When he tells them to disassemble, one coupling at a time, they break apart by angling down in opposite directions similar to a pencil being snapped in half.
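    The spot-assignment step can be viewed as an assignment problem: given current robot positions and the goal slots of a four-across, two-wide formation, match each robot to a slot at minimum total cost. Real systems negotiate this among the robots; the sketch below uses SciPy’s Hungarian solver as a centralized stand-in:

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
robots = rng.uniform(0, 5, size=(8, 2))            # current (x, y) positions
slots = np.array([[x, y] for y in (0.0, 1.0) for x in range(4)],
                 dtype=float)                      # 4 across, 2 wide

# cost = squared distance from every robot to every slot
cost = ((robots[:, None, :] - slots[None, :, :]) ** 2).sum(axis=2)
robot_idx, slot_idx = linear_sum_assignment(cost)  # optimal matching

for r, s in zip(robot_idx, slot_idx):
    print(f"robot {r} -> slot ({slots[s][0]:.0f}, {slots[s][1]:.0f})")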

    The typical workflow for creating the modular robots is to create a mathematical model, write computer code, and run simulations before testing a new program out on actual drones. Saldaña also spends much of his time designing the shape and structure of the modular robots.

    If you ask Saldaña, there are a lot of problems that can be solved by robots. With a little more honing, robots like this could have major real-world applications. Saldaña uses the example of a remote area where a bridge has collapsed, leaving people stranded during a natural disaster. These robots, which are small and easy to transport, could self-assemble and hover where the bridge used to stand, providing a pathway for stranded people to safely cross.

    One of the reasons this isn’t already common practice is because autonomous control of robots is a hard task in obstacle-filled environments like the outdoors. Operating hundreds of them in those situations is no small challenge.

    “Autonomous control of a single robot is difficult,” he says. “Somebody’s going to hit the robot. It has to avoid obstacles. Simple tasks, like moving from point A to point B, can be very complicated if you do it in a forest. So that’s for a single robot, but now when you have 100 robots, the things become even more complicated. The problem is even harder. And that’s what we tried to solve here.”

    Some examples of the prototypes made and tested by the ModLab.

    Another part of the challenge is making sure the robots are communicating sufficiently with each other. Because they don’t have predetermined positions for a given formation, they must figure out which unit can go in each spot based on current position. Then, once together, they must move smoothly as a single unit.

    “It’s like when you walk with someone, and you are holding on to the other person. It’s not that easy to walk,” Saldaña says. “That’s the same with these robots. Now one robot is holding the other, and if they don’t coordinate, they crash.”

    Looking to the future, Saldaña says another application could be package delivery. Delivery drones typically grasp objects with a mechanical claw-like mechanism, but Saldaña’s robots can both grasp and lift an object by forming a shell around it and rising upward.

    Saldaña shows how four of these robots fly down to a coffee cup on the floor and arrange themselves in a diamond pattern around it, touching each other only at their magnetized edges. When they are formed closely enough that the lip of the coffee cup extends just over the top of them, they gently rise, lifting the coffee cup and transporting it to a preprogrammed destination.


    “We can also transport soda, beers, depending on what you want,” Saldaña jokes. Right now, the prototypes haven’t yet moved beyond lifting small beverage cups. But as their designs are continually refined, there is potential for lifting even bigger objects.

    With a number of scientific publications under his belt, including his most recent work published in Robotics and Automation Letters, Saldaña emphasizes the collaborative nature of his project, with support from his advisors Vijay Kumar and Mark Yim, along with the ModLab and the many graduate students he’s worked with during his two years at Penn.

    “David’s work is a great example of what the GRASP Lab and Penn Engineering are all about,” says Kumar, the dean of Penn Engineering. “When you bring people with lots of different ideas and skills together, that’s where real innovation occurs.”

    In his time as a postdoctoral researcher, Saldaña has made the flying robots smaller and more efficient through several iterations. This fall, as he leaves to accept a position at Lehigh University, he hopes to continue to collaborate with GRASP, now as an alumnus.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    U Penn campus

    Academic life at Penn is unparalleled, with 100 countries and every U.S. state represented in one of the Ivy League’s most diverse student bodies. Consistently ranked among the top 10 universities in the country, Penn enrolls 10,000 undergraduate students and welcomes an additional 10,000 students to our world-renowned graduate and professional schools.

    Penn’s award-winning educators and scholars encourage students to pursue inquiry and discovery, follow their passions, and address the world’s most challenging problems through an interdisciplinary approach.

     
  • richardmitnick 8:32 am on August 19, 2019
    Tags: 2019 RoboCup Millennium Challenge, According to IDC the global robotics market was worth $151 billion in 2018 and that’s expected to double to $315.5 billion by 2021., Robotics

    From CSIROscope: “Cashing in: Australia’s role in $1trn robotic revolution” 


    From CSIROscope

    19 August 2019
    Adrian Turner

    Fifteen international teams from Australia, Brazil, China, Germany, Iran, Japan and Portugal recently descended on Sydney for the 2019 RoboCup Millennium Challenge. Eleven fully autonomous virtual robots known as “agents” played on each team, without the assistance of a remote control and in compliance with FIFA rules. The nail-biting final came down to the wire, with an Australian team emerging victorious over the 2018 world champions with seconds to spare.

    But this was more than a game: it highlighted Australia’s strengths in robotics and the speed with which the field is evolving.

    Robots of the team NomadZ (ETH Zurich) of Switzerland, first and second from left, and the Australian Runswift team (University of New South Wales), right, challenge for the ball during a soccer match.

    According to IDC, the global robotics market was worth $151 billion in 2018, and that’s expected to more than double to $315.5 billion by 2021. Robots are used today in wide-ranging fields such as precision agriculture, mining, medical procedures, construction, biosecurity, transportation and even for companionship.

    Advancements in robotics have been accompanied by a fear that robots and automation will take our jobs along the way. While there are short-term risks, with forecasts of 40 per cent of jobs potentially being displaced, it’s not clear that there will be an overall reduction in the number of jobs over time. The World Economic Forum suggests that the opposite will occur. In their Future of Jobs 2018 report, the authors concluded that while automation technologies including artificial intelligence could see 75 million jobs displaced globally, 133 million new roles may emerge as companies shake up their division of labour between humans and machines, translating to a net 58 million new jobs created by 2022.

    A recent report by AlphaBeta estimates that automation could boost Australia’s productivity and national income by up to $2.2 trillion by 2030 and result in improved health and safety, the development of new products and services, new types of jobs and new business models. In the same report, AlphaBeta concluded that by 2025 automation in manufacturing could increase by 6 per cent, along with an 11 per cent reduction in injuries, while wages for non-automatable tasks rise 20 per cent.

    The key to unlocking economic and societal benefit from robotics will be to have robots do things that were not possible or economical before. Take caring for an ageing population that is forecast to live longer but with a smaller workforce to support it: the math doesn’t add up without new methods of care to keep people out of hospitals and in their homes longer. Or supporting children with autism to develop social interaction and communication skills with Kaspar, a social robot being trialled by researchers at the University of New South Wales and CSIRO. Robots can help with dangerous jobs too. CSIRO’s Data61 spinout Emesent develops drones capable of travelling in GPS-denied environments utilising 3D LiDAR technology. They travel down mineshafts to safely inspect hard-to-access areas of underground mines, so people don’t have to.

    On the other side of the world, a Harvard University group has spent the last 12 years creating a robotic bee capable of partially untethered flight, powered by artificial muscles beating the wings 120 times a second. The ultimate objective of the program is to create a robobee swarm for use in natural disasters and artificial pollination, given the devastating effects of colony collapse disorder on bee populations and consequently food pollination. The US Department of Agriculture estimates that of the 1,400 crops grown for food, 80 per cent depend on pollination, and globally pollination services are likely worth more than $3 trillion.

    Robotic advancements

    Advancement in robotics is accelerating. Robots will increasingly evolve from isolated machines to being seamlessly integrated with our environments and each other. When one robot encounters an obstacle or new context and learns from it, the entire network of robots can instantaneously learn.

    Other advancements include the use of more tactile skins with embedded pressure sensors, and more flexible sensors. A team of engineers from the University of Delaware has created flexible carbon nanotube coatings on fibres including cotton and wool, resulting in shape-forming, flexible and pressure-sensitive skins. Just as with the robobee, there are also advancements in collaborative robots, or cobots, that can be used for resilient search-and-rescue operations among other things.

    We are also witnessing improvements in dexterity. California-based Intuitive Surgical has developed a robot allowing a surgeon to control three fully articulated instruments to treat deep-seated damaged or diseased tissues or organs. Robots that can unfold are also being developed, along with soft robotics that will be important for applications involving contact with people. The challenge until recently has been a lack of actuators, or artificial muscles, that can replicate the versatility of real muscles. Advancements are being made, with one design made from inexpensive materials reportedly able to lift 200 times its weight. Another compelling advancement is in augmenting our own muscles via wearable robots, or exoskeletons. Applications today range from helping prevent workplace injury to helping people function more fully after spinal cord damage or strokes.

    Australia can benefit substantially from robotics in areas like managing environmental threats, maintaining vital urban infrastructure, maximising crop yields in drought-affected regions, transportation, and supporting law enforcement. Australia was the first country to automate its ports and mine sites, and we have strong university capabilities at QUT and Sydney University, among others. Today there are about 1,100 robotics companies in the country, and CSIRO’s Data61 recently opened the largest robotic motion-capture facility in the southern hemisphere.

    The question of how Australia can capitalise on the trillion-dollar artificial intelligence and robotics revolution will be the focal point of the upcoming D61+LIVE conference in Sydney this October. Like all other industry-creation opportunities in front of us right now, this one is perishable, and the way to maximise the benefit as a country is to be a global leader in parts of it. The Australian RoboCup team has shown us how it’s done. Game on.

    See the full article here.



    Please help promote STEM in your local schools.

    STEM Education Coalition

    SKA/ASKAP radio telescope at the Murchison Radio-astronomy Observatory (MRO) in Mid West region of Western Australia

    So what can we expect these new radio projects to discover? We have no idea, but history tells us that they are almost certain to deliver some major surprises.

    Making these new discoveries may not be so simple. Gone are the days when astronomers could just notice something odd as they browse their tables and graphs.

    Nowadays, astronomers are more likely to be distilling their answers from carefully-posed queries to databases containing petabytes of data. Human brains are just not up to the job of making unexpected discoveries in these circumstances, and instead we will need to develop “learning machines” to help us discover the unexpected.

    With the right tools and careful insight, who knows what we might find.

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 7:59 am on July 22, 2019
    Tags: "For Climbing Robots, A tiny climbing robot rolls up a wall gripping with fishhooks - technology adapted from LEMUR's gripping feet., Ice Worm moves by scrunching and extending its joints like an inchworm., , RoboSimian can walk on four legs crawl move like an inchworm and slide on its belly., Robotics, The climbing robot LEMUR, the Sky's the Limit"   

    From NASA JPL-Caltech: “For Climbing Robots, the Sky’s the Limit” 


    From NASA JPL-Caltech

    July 10, 2019

    Arielle Samuelson
    Jet Propulsion Laboratory, Pasadena, Calif.
    818-354-0307
    arielle.a.samuelson@jpl.nasa.gov

    The climbing robot LEMUR rests after scaling a cliff in Death Valley, California. The robot uses special gripping technology that has helped lead to a series of new, off-roading robots that can explore other worlds. Credit: NASA/JPL-Caltech

    A tiny climbing robot rolls up a wall, gripping with fishhooks – technology adapted from LEMUR’s gripping feet. Credit: NASA/JPL-Caltech

    RoboSimian can walk on four legs, crawl, move like an inchworm and slide on its belly. In this photo it stands on the Devil’s Golf Course in Death Valley, California, for field testing with engineer Brendan Chamberlain-Simon. Credit: NASA/JPL-Caltech

    Ice Worm climbs an icy wall like an inchworm, an adaptation of LEMUR’s design. Credit: NASA/JPL-Caltech

    Robots can drive on the plains and craters of Mars, but what if we could explore cliffs, polar caps and other hard-to-reach places on the Red Planet and beyond? Designed by engineers at NASA’s Jet Propulsion Laboratory in Pasadena, California, a four-limbed robot named LEMUR (Limbed Excursion Mechanical Utility Robot) can scale rock walls, gripping with hundreds of tiny fishhooks in each of its 16 fingers and using artificial intelligence (AI) to find its way around obstacles. In its last field test in Death Valley, California, in early 2019, LEMUR chose a route up a cliff while scanning the rock for ancient fossils from the sea that once filled the area.

    LEMUR was originally conceived as a repair robot for the International Space Station. Although the project has since concluded, it helped lead to a new generation of walking, climbing and crawling robots. In future missions to Mars or icy moons, robots with AI and climbing technology derived from LEMUR could aid in the search for similar signs of life. Those robots are being developed now, honing technology that may one day be part of future missions to distant worlds.

    A Mechanical Worm for Icy Worlds

    How does a robot navigate a slippery, icy surface? For Ice Worm, the answer is one inch at a time. Adapted from a single limb of LEMUR, Ice Worm moves by scrunching and extending its joints like an inchworm. The robot climbs ice walls by drilling one end at a time into the hard surface. It can use the same technique to stabilize itself while taking scientific samples, even on a precipice. The robot also has LEMUR’s AI, enabling it to navigate by learning from past mistakes. To hone its technical skills, JPL project lead Aaron Parness tests Ice Worm on glaciers in Antarctica and ice caves on Mount St. Helens so that it can one day contribute to science on Earth and more distant worlds: Ice Worm is part of a generation of projects being developed to explore the icy moons of Saturn and Jupiter, which may have oceans under their frozen crusts.
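    The scrunch-and-extend cycle reads naturally as a tiny state machine; the sketch below is illustrative only, with the anchoring steps standing in for Ice Worm’s drilling action:

def inchworm_cycle(front, back, stride=0.1):
    """One scrunch-and-extend cycle; returns the new (front, back) positions.

    Anchoring is implicit: the end that is not moving is assumed drilled
    into the ice, as Ice Worm anchors one end at a time.
    """
    front += stride  # anchor back end, extend the free front end forward
    back += stride   # anchor front end, contract the free back end upward
    return front, back

front, back = 0.3, 0.0
for step in range(4):
    front, back = inchworm_cycle(front, back)
    print(f"step {step}: back={back:.1f} m, front={front:.1f} m")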



    A Robotic Ape on the Tundra

    Ice Worm isn’t the only approach being developed for icy worlds like Saturn’s moon Enceladus, where geysers at the south pole blast liquid into space. A rover in this unpredictable world would need to be able to move on ice and silty, crumbling ground. RoboSimian is being developed to meet that challenge.

    Originally built as a disaster-relief robot for the Defense Advanced Research Projects Agency (DARPA), it has been modified to move in icy environments. Nicknamed “King Louie” after the character in “The Jungle Book,” RoboSimian can walk on four legs, crawl, move like an inchworm and slide on its belly like a penguin. It has the same four limbs as LEMUR, but JPL engineers replaced its gripping feet with springy wheels made from music wire (the kind of wire found in a piano). Flexible wheels help King Louie roll over uneven ground, which would be essential in a place like Enceladus.

    Tiny Climbers

    Micro-climbers are wheeled vehicles small enough to fit in a coat pocket but strong enough to scale walls and survive falls up to 9 feet (3 meters). Developed by JPL for the military, some micro-climbers use LEMUR’s fishhook grippers to cling to rough surfaces, like boulders and cave walls. Others can scale smooth surfaces, using technology inspired by a gecko’s sticky feet. The gecko adhesive, like the lizard it’s named for, relies on microscopic angled hairs that generate van der Waals forces – atomic forces that cause “stickiness” if both objects are in close proximity.

    Enhancing this gecko-like stickiness, the robots’ hybrid wheels also use an electrical charge to cling to walls (the same phenomenon makes your hair stick to a balloon after you rub it on your head). JPL engineers created the gecko adhesive for the first generation of LEMUR, using van der Waals forces to help it cling to metal walls, even in zero gravity. Micro-climbers with this adhesive or gripping technology could repair future spacecraft or explore hard-to-reach spots on the Moon, Mars and beyond.
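    For a sense of scale, electroadhesion can be approximated as a charged parallel plate, F = eps0 * eps_r * A * V^2 / (2 * d^2); the pad size, voltage, gap and dielectric constant below are illustrative assumptions, not JPL specifications:

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electroadhesion_force(area_m2, volts, gap_m, eps_r=3.0):
    """Attractive force (N) of an electroadhesive pad, parallel-plate model."""
    return EPS0 * eps_r * area_m2 * volts**2 / (2 * gap_m**2)

# a 5 cm x 5 cm pad at 1 kV across a ~50 micron dielectric gap
print(f"{electroadhesion_force(0.0025, 1_000, 50e-6):.1f} N")  # ~13 N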

    Ocean to Asteroid Grippers

    Just as astronauts train underwater for spacewalks, technology built for ocean exploration can be a good prototype for missions to places with nearly zero gravity. The Underwater Gripper is one of the gripping hands from LEMUR, with the same 16 fingers and 250 fishhooks for grasping irregular surfaces. It could one day be sent for operations on an asteroid or other small body in the solar system. For now, it’s attached to the underwater research vessel Nautilus operated by the Ocean Exploration Trust off the coast of Hawaii, where it helps take deep ocean samples from more than a mile below the surface.

    A Cliff-Climbing Mini-Helicopter

    The small, solar-powered helicopter accompanying NASA’s Mars 2020 rover will fly in short bursts as a technology demonstration, paving the way for future flying missions at the Red Planet. But JPL engineer Arash Kalantari isn’t content to simply fly; he’s developing a concept for a gripper that could allow a flying robot to cling to Martian cliffsides. The perching mechanism is adapted from LEMUR’s design: It has clawed feet with embedded fishhooks that grip rock much like a bird clings to a branch. While there, the robot would recharge its batteries via solar panels, giving it the freedom to roam and search for evidence of life.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    NASA JPL Campus

    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge, on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.

     
  • richardmitnick 9:19 am on July 5, 2019 Permalink | Reply
    Tags: , Robotics, SpaceBok robot   

    From European Space Agency: “Jumping space robot ‘flies’ like a spacecraft” 

    From European Space Agency

    Image: SpaceBok jumping in simulated lunar gravity

    4 July 2019

    Astronauts on the Moon found themselves hopping around, rather than simply walking. Switzerland’s SpaceBok planetary exploration robot has followed their example, launching all four legs off the ground during tests at ESA’s technical heart.

    SpaceBok is a quadruped robot designed and built by a Swiss student team from ETH Zurich and ZHAW Zurich. It is currently being tested using robotic facilities at ESA’s ESTEC technical centre in the Netherlands.

    Work is proceeding under the leadership of PhD student Hendrik Kolvenbach from ETH Zurich’s Robotic Systems Lab, currently based at ESTEC. The robot is being used to investigate the potential of ‘dynamic walking’ to get around in low gravity environments.

    Hendrik explains: “Instead of static walking, where at least three legs stay on the ground at all times, dynamic walking allows for gaits with full flight phases during which all legs stay off the ground. Animals make use of dynamic gaits due to their efficiency, but until recently, the computational power and algorithms required for control made it challenging to realise them on robots.

    “For the lower gravity environments of the Moon, Mars or asteroids, jumping off the ground like this turns out to be a very efficient way to get around.”
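
    The static-versus-dynamic distinction can be made concrete with a toy gait-timing model. The sketch below is purely illustrative (it is not SpaceBok’s controller, and the phase offsets and duty factors are assumptions): it counts the minimum number of feet on the ground over one gait cycle, showing that an evenly phased static walk never drops below three, while a bounding gait with short stances has a true flight phase.

        # Toy gait-timing comparison; phase offsets and duty factors are
        # illustrative assumptions, not SpaceBok's actual gait parameters.

        def feet_down(t, offsets, duty):
            """Which of the four legs touch the ground at cycle phase t (0..1)."""
            return [((t - o) % 1.0) < duty for o in offsets]

        static_walk = dict(offsets=[0.0, 0.25, 0.5, 0.75], duty=0.8)     # long stances
        dynamic_bound = dict(offsets=[0.0, 0.0, 0.45, 0.45], duty=0.25)  # short stances

        for name, gait in [("static walk", static_walk), ("dynamic bound", dynamic_bound)]:
            min_support = min(sum(feet_down(k / 100.0, **gait)) for k in range(100))
            print(f"{name}: minimum feet on the ground = {min_support}")
        # static walk   -> 3 (always statically supported)
        # dynamic bound -> 0 (a true flight phase, as described in the quote above)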

    Image: Hendrik Kolvenbach with SpaceBok

    Image: Simulating low-gravity conditions

    “Astronauts moving in the one-sixth gravity of the Moon adopted jumping instinctively. SpaceBok could potentially go up to 2 m high in lunar gravity, although such a height poses new challenges. Once it comes off the ground the legged robot needs to stabilise itself to come down again safely – it’s basically behaving like a mini-spacecraft at this point,” says team member Alexander Dietsche.

    “So what we’ve done is harness one of the methods a conventional satellite uses to control its orientation, called a reaction wheel. It can be accelerated and decelerated to trigger an equal and opposite reaction in SpaceBok itself,” explains team member Philip Arm.

    “Additionally, SpaceBok’s legs incorporate springs to store energy during landing and release it at take-off, significantly reducing the energy needed to achieve those jumps,” adds another team member, Benjamin Sun.
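
    A minimal single-axis sketch shows how the reaction wheel idea works. All inertias and gains below are invented for illustration, since SpaceBok’s actual parameters aren’t given here: accelerating the wheel one way torques the free-floating body the other way, and a simple PD law can null the tilt before touchdown.

        # Minimal single-axis reaction-wheel sketch for the flight phase.
        # Inertias and gains are illustrative assumptions, not SpaceBok's.
        import math

        I_BODY = 0.5        # body moment of inertia about pitch axis (kg*m^2), assumed
        I_WHEEL = 0.01      # reaction wheel moment of inertia (kg*m^2), assumed
        KP, KD = 40.0, 8.0  # PD gains, hand-tuned for this toy model
        DT = 0.001          # integration step (s)

        theta = math.radians(20.0)  # body tipped 20 degrees at take-off
        omega = 0.0                 # body angular rate (rad/s)
        wheel_rate = 0.0            # wheel angular rate (rad/s)

        for _ in range(1000):  # about one second of flight
            torque = -(KP * theta + KD * omega)     # PD torque applied to the body
            wheel_rate += (-torque / I_WHEEL) * DT  # wheel takes the opposite torque
            omega += (torque / I_BODY) * DT
            theta += omega * DT

        print(f"tilt after 1 s: {math.degrees(theta):.3f} deg")  # settles toward zero
        print(f"wheel rate:     {wheel_rate:.3f} rad/s")         # returns toward zero too

    Because the wheel and body always receive equal and opposite torques, total angular momentum stays at zero: as the tilt is corrected, the wheel spins back down on its own.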

    The team is slowly increasing the height of the robot’s repetitive jumps, up to 1.3 m in simulated lunar gravity conditions so far.

    Test rigs have been set up to simulate various gravity environments, mimicking not only lunar conditions but also the very low gravities of asteroids. The lower the gravity the longer the flight phase can be for each robot jump, but effective control is needed for both take-off and landing.
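
    That relationship is plain ballistics: for a vertical take-off speed v, the apex height is v^2 / (2g) and the flight time is 2v / g. In the quick check below, the 2.05 m/s take-off speed is an assumed figure, chosen only so the lunar apex matches the roughly 1.3 m jumps reported above.

        # Plain ballistics for a hop: apex h = v^2 / (2g), flight time t = 2v / g.
        # The 2.05 m/s take-off speed is an assumed value chosen so the lunar
        # apex matches the ~1.3 m jumps reported above.

        v0 = 2.05  # vertical take-off speed (m/s), assumed
        for name, g in [("Earth", 9.81), ("Moon", 1.62), ("asteroid", 0.005)]:
            apex = v0**2 / (2.0 * g)
            flight = 2.0 * v0 / g
            print(f"{name:8s} g={g:6.3f} m/s^2  apex={apex:7.2f} m  flight={flight:7.1f} s")

    The asteroid row makes the control problem vivid: in vanishingly low gravity, even a gentle hop turns into minutes of free flight, all of which must end in a controlled landing.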

    To simulate the vanishingly low gravity of asteroids, the SpaceBok team made use of the flattest floor in the Netherlands – a 4.8 x 9 m epoxy floor smoothed to an overall flatness within 0.8 mm, called the Orbital Robotics Bench for Integrated Technology (ORBIT), part of ESA’s Orbital Robotics and Guidance Navigation and Control Laboratory.

    Image: Robot mounted sideways

    SpaceBok was placed on its side, then attached to a free-floating platform to reproduce zero-G conditions in two dimensions. When jumping off a wall, its reaction wheel allowed it to twirl around mid-jump and land feet first on the opposite wall of the chamber – as if it were hopping along a scaled-down low-gravity surface.

    Hendrik added: “The testing went sufficiently well that we even used SpaceBok to play a live-action game of Pong, the video game classic.”

    Image: SpaceBok robot

    Testing will continue in more realistic conditions, with jumps made over obstacles, hilly terrain, and realistic soil, eventually moving outdoors.

    Hendrik is studying at ESTEC through ESA’s Networking Partnering Initiative, intended to harness advanced academic research for space applications.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 22 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight program includes human spaceflight, mainly through participation in the International Space Station program; the launch and operation of unmanned exploration missions to other planets and the Moon; Earth observation; science; telecommunication; maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana; and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, Netherlands; Earth observation missions at ESRIN in Frascati, Italy; ESA Mission Control (ESOC) in Darmstadt, Germany; the European Astronaut Centre (EAC), which trains astronauts for future missions, in Cologne, Germany; and the European Space Astronomy Centre in Villanueva de la Cañada, Spain.

     
  • richardmitnick 9:34 am on May 5, 2019 Permalink | Reply
    Tags: "America's infrastructure is like a third-world country" said Ray LaHood transportation secretary under President Obama., , But the next generation of these machines it seems clear will gain more autonomy and machine learning technologies, In short the infrastructure robots are coming; in fact some of them are already here., Infrastructure spending, Robotics, Robots and artificial intelligence can help us build the infrastructure we need here and around the world., The early infrastructure robots don't use much AI.   

    From WIRED: “Spend Part of the $2 Trillion Infrastructure Plan on Robots” 

    From WIRED

    5.5.19
    Gretchen Greene

    Image credit: Alexis Rosenfeld/Getty Images

    This week, the Democrats and President Trump are talking about a $2 trillion infrastructure plan, a figure in line with the American Society of Civil Engineers’ estimates of the country’s needs, but it isn’t clear where the money will come from or whether a bipartisan plan will actually move forward.

    The ASCE’s 2017 report card gave America’s infrastructure a D+, with scant progress over the last 20 years. “America’s infrastructure is like a third-world country,” said Ray LaHood, transportation secretary under President Obama. If we don’t make a major infrastructure investment, our enormous infrastructure needs will just keep growing. We need good new ideas to make the most of whatever money is approved by the federal government or local governments.

    New technologies are threatening jobs, but they also offer the possibility of completing projects we otherwise couldn’t afford, minimizing disruption, improving safety, and optimizing systems in ways humans working alone could not. Robots and artificial intelligence can help us build the infrastructure we need, here and around the world. In short, the infrastructure robots are coming; in fact, some of them are already here.

    In Minnesota, spider-like bridge inspection drones crawl along high abutments and into narrow gaps while hovering drones inspect the undersides of bridge decks; their access is better, cheaper, and safer, with less disruption to traffic. Gas pipe repair robots allow utility crews in Boston, New York City, and Edinburgh, Scotland, to finish a job in a third of the time, without digging up the street at every joint or interrupting service, because the robots can safely work inside pressurized lines. In Saudi Arabia and Mexico, water pipe inspection robots are inserted at one fire hydrant, carried along by the water flow, and captured with a net at another hydrant down the line, reporting the locations of leaks a tenth to a third the size of the smallest leaks older methods could find. In Connecticut, drones are replacing low-flying helicopters for power line inspections.

    In Oslo, Norway, submarine drones are mapping the landscape of underwater garbage: old tires and toys, plastic bags and the carcasses of abandoned cars, so boats with cranes and human divers can be deployed to clean up the fjords.

    In Fukushima, Japan, engineers have embarked on a half-century project that one expert called more challenging than putting a man on the moon: designing and building robots that can operate in an extremely challenging environment to find, recover, and seal the lost radioactive fuel in the biggest nuclear plant disaster cleanup effort in history.

    The early infrastructure robots don’t use much AI. They are remotely controlled or tethered, relaying video to human operators to interpret, carrying tools a human operator can use from a distance, and relying on a human operator to tell them where to go. Their genius lies in their ability to squeeze into small spaces, levitate in the sky, or dive into the water and survive in harsh environments, going places humans can’t go easily, safely, cheaply, or at all.

    But the next generation of these machines, it seems clear, will gain more autonomy, adopting computer vision, autonomous vehicle navigation and machine learning technologies. Semi-autonomous drones and robots are in testing and early commercial deployment for inspection of industrial assets, land surveying and sidewalk snow clearing.

    It’s not a big step to imagine robots creeping through the gas and water lines all day and night, mapping their own course, quietly fixing leaks, docking at charging and maintenance stations as needed, like a Roomba underground. Above ground robots could patrol the roads, the power grid and the waterways, cleaning up trash and fixing potholes, electrical wires and bridges or reporting what they can’t fix, directing a human crew to the spot.

    Machine learning software systems are learning to predict code violations, safety incidents, mechanical failures and natural disasters, directing robotic or human resources to intervene. They are being used for fire code, health code and industrial safety inspection prioritization in Pittsburgh, New York, New Orleans, Boston, Chicago and British Columbia, Canada. Rolls Royce is testing machine learning to predict engine failures. The oil and gas industry is automating the detection of serious pipeline corrosion, adding machine learning to the pipeline robot pigs it has used for decades. British Columbia is trying to predict elevator problems. Pittsburgh is trying to predict landslides on roads.
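
    The internals of these systems aren’t described here, but the common pattern is ordinary supervised learning: featurize each asset, train on historical failures, then rank new assets by predicted risk so crews inspect the riskiest first. A hypothetical sketch on synthetic data follows; every feature, label rule, and number in it is invented for illustration and describes none of the systems above.

        # Hypothetical risk-ranking sketch on synthetic data; the features,
        # label rule, and numbers are all invented for illustration.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Features per pipe segment: age (years), wall loss (%), pressure (bar)
        X = rng.uniform([0, 0, 1], [80, 40, 10], size=(500, 3))
        # Synthetic ground truth: older, thinner, higher-pressure pipes fail more
        risk = 0.03 * X[:, 0] + 0.05 * X[:, 1] + 0.2 * X[:, 2]
        y = (risk + rng.normal(0.0, 0.5, 500) > 3.5).astype(int)

        model = LogisticRegression(max_iter=1000).fit(X, y)

        # Score a handful of new segments and inspect the riskiest first
        segments = rng.uniform([0, 0, 1], [80, 40, 10], size=(5, 3))
        for seg, p in sorted(zip(segments, model.predict_proba(segments)[:, 1]),
                             key=lambda t: -t[1]):
            print(f"age={seg[0]:4.1f} y  wall loss={seg[1]:4.1f} %  p(fail)={p:.2f}")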

    Robots, sensors, and machine learning are being used to direct water to where we want it before it ever hits a pipeline, and to reduce pollution. Tech startups in Boston and San Francisco are using sensors and machine learning to create hyperlocal air quality and weather data and predictions. In crowded industrial cities such as Guangzhou, China, pollution-detecting airborne drones help law enforcement identify which factories should be punished for emissions. China has used chemical-carrying drones to disperse smog and to make rain, and is considering the creation of a vast network of fuel-burning chambers, planes, drones, and artillery, guided by real-time data from satellites, to seed clouds over the Tibetan plateau, the source of most of Asia’s biggest rivers, an area three times the size of Spain.

    Advances in robotics, hardware and artificial intelligence have combined to make a new vision possible for how infrastructure maintenance and repair is carried out. More importantly, they offer a vision for how we might be able to afford to do the work we can’t put off forever.

    There’s a rising army of robots, ready to serve.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:19 pm on April 12, 2019 Permalink | Reply
    Tags: , , , , , , , Robotics,   

    From University of New South Wales: “Sky’s the limit: celebrating engineering that’s out of this world” 

    From University of New South Wales

    12 Apr 2019
    Cecilia Duong

    Researchers from UNSW Engineering are harnessing new technologies to help build Australia’s space future.

    An impression of the UNSW CubeSat in orbit. Image: Jamie Tufrey

    On International Day of Human Space Flight – an annual celebration of the beginning of the space era for mankind that’s designed to reaffirm the important contribution of space science and technology in today’s world – UNSW Engineering is looking at some of its own space-related research highlights.

    Whether it’s finding ways to mine water on the Moon or developing space cells with the highest efficiencies, researchers from UNSW Engineering are harnessing new technologies to help build Australia’s space future. Our student-led projects, such as BLUEsat and the American Institute of Aeronautics and Astronautics (AIAA) Rocketry project, are also providing students with real-world experience in multidisciplinary space engineering projects, continuing to promote space technology in Australia.

    Here are a few highlights of how UNSW Engineering research is innovating both on Earth and in space.

    Mining water on the Moon
    Image: Shutterstock

    A team of UNSW engineers has put together a multi-university, agency, and industry project team to investigate the possibilities of mining water on the Moon to produce rocket fuel.

    Find out more.

    Satellite solar technology comes down to Earth
    Solar cells used in space are achieving higher efficiencies than those used at ground level, and now there are ways to have them working on Earth without breaking the bank.

    Researchers from the School of Photovoltaic and Renewable Energy Engineering are no strangers to setting new records for solar cell efficiency, but Associate Professor Ned Ekins-Daukes has made it his mission to develop space cells with the highest efficiencies at the lowest weight.

    Find out more.

    Students shine in off-world robotics competition
    UNSW’s Off-World Robotics team – part of the long-running BLUEsat student-led project – achieved their best placing in the competition to date.

    A team of eight UNSW Engineering students came eighth in the European Rover Challenge (ERC) in Poland, one of the world’s biggest international space and robotics events, defeating 57 teams from around the globe.

    Find out more.

    Exploring a little-understood region above Earth
    Associate Professor Elias Aboutanios with UNSW-EC0. Photo: Grant Turner

    UNSW-EC0, a CubeSat built by a team led by Australian Centre for Space Engineering Research (ACSER) deputy director Associate Professor Elias Aboutanios, is studying the atomic composition of the thermosphere using an on-board ion neutral mass spectrometer.

    Find out more.

    Rocketing into an internship
    Third-year Aerospace Engineering student Sam Wilkinson scored an internship at Rocket Lab in New Zealand.

    Third-year Aerospace Engineering student Sam Wilkinson describes how he landed an internship at an international aerospace company that works with organisations such as NASA, without going through the usual application process.

    Find out more.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to UNSW Australia (The University of New South Wales), one of Australia’s leading research and teaching universities. At UNSW, we take pride in the broad range and high quality of our teaching programs. Our teaching gains strength and currency from our research activities, strong industry links and our international nature; UNSW has a strong regional and global engagement.

    In developing new ideas and promoting lasting knowledge we are creating an academic environment where outstanding students and scholars from around the world can be inspired to excel in their programs of study and research. Partnerships with both local and global communities allow UNSW to share knowledge, debate and research outcomes. UNSW’s public events include concert performances, open days and public forums on issues such as the environment, healthcare and global politics. We encourage you to explore the UNSW website so you can find out more about what we do.

     
  • richardmitnick 12:02 pm on February 20, 2019 Permalink | Reply
    Tags: "Robots track moving objects with unprecedented precision", , , RFID tags, Robotics   

    From MIT News: “Robots track moving objects with unprecedented precision” 

    From MIT News

    February 18, 2019
    Rob Matheson

    System uses RFID tags to home in on targets; could benefit robotic manufacturing, collaborative drones, and other applications.

    MIT Media Lab researchers are using RFID tags to help robots home in on moving objects with unprecedented speed and accuracy, potentially enabling greater collaboration in robotic packaging and assembly and among swarms of drones. Photo courtesy of the researchers.

    A novel system developed at MIT uses RFID tags to help robots home in on moving objects with unprecedented speed and accuracy. The system could enable greater collaboration and precision by robots working on packaging and assembly, and by swarms of drones carrying out search-and-rescue missions.

    In a paper being presented next week at the USENIX Symposium on Networked Systems Design and Implementation, the researchers show that robots using the system can locate tagged objects within 7.5 milliseconds, on average, and with an error of less than a centimeter.

    In the system, called TurboTrack, an RFID (radio-frequency identification) tag can be applied to any object. A reader sends a wireless signal that reflects off the RFID tag and other nearby objects, and rebounds to the reader. An algorithm sifts through all the reflected signals to find the RFID tag’s response. Final computations then leverage the RFID tag’s movement — even though this usually decreases precision — to improve its localization accuracy.

    The researchers say the system could replace computer vision for some robotic tasks. As with its human counterpart, computer vision is limited by what it can see, and it can fail to notice objects in cluttered environments. Radio frequency signals have no such restrictions: They can identify targets without visualization, within clutter and through walls.

    To validate the system, the researchers attached one RFID tag to a cap and another to a bottle. A robotic arm located the cap and placed it onto the bottle, held by another robotic arm. In another demonstration, the researchers tracked RFID-equipped nanodrones during docking, maneuvering, and flying. In both tasks, the system was as accurate and fast as traditional computer-vision systems, while working in scenarios where computer vision fails, the researchers report.

    “If you use RF signals for tasks typically done using computer vision, not only do you enable robots to do human things, but you can also enable them to do superhuman things,” says Fadel Adib, an assistant professor and principal investigator in the MIT Media Lab, and founding director of the Signal Kinetics Research Group. “And you can do it in a scalable way, because these RFID tags are only 3 cents each.”

    In manufacturing, the system could enable robot arms to be more precise and versatile in, say, picking up, assembling, and packaging items along an assembly line. Another promising application is using handheld “nanodrones” for search and rescue missions. Nanodrones currently use computer vision and image-stitching methods for localization. These drones often get confused in chaotic areas, lose each other behind walls, and can’t uniquely identify each other. This all limits their ability to, say, spread out over an area and collaborate to search for a missing person. Using the researchers’ system, nanodrones in swarms could better locate each other, for greater control and collaboration.

    “You could enable a swarm of nanodrones to form in certain ways, fly into cluttered environments, and even environments hidden from sight, with great precision,” says first author Zhihong Luo, a graduate student in the Signal Kinetics Research Group.

    The other Media Lab co-authors on the paper are visiting student Qiping Zhang, postdoc Yunfei Ma, and Research Assistant Manish Singh.

    Super resolution

    Adib’s group has been working for years on using radio signals for tracking and identification purposes, such as detecting contamination in bottled foods, communicating with devices inside the body, and managing warehouse inventory.

    Similar systems have attempted to use RFID tags for localization tasks. But these come with trade-offs in either accuracy or speed. To be accurate, it may take them several seconds to find a moving object; to increase speed, they lose accuracy.

    The challenge was achieving both speed and accuracy simultaneously. To do so, the researchers drew inspiration from an imaging technique called “super-resolution imaging.” These systems stitch together images from multiple angles to achieve a finer-resolution image.

    “The idea was to apply these super-resolution systems to radio signals,” Adib says. “As something moves, you get more perspectives in tracking it, so you can exploit the movement for accuracy.”

    The system combines a standard RFID reader with a “helper” component that’s used to localize radio frequency signals. The helper shoots out a wideband signal comprising multiple frequencies, building on a modulation scheme used in wireless communication, called orthogonal frequency-division multiplexing.

    The system captures all the signals rebounding off objects in the environment, including the RFID tag. One of those reflections is specific to the tag: an RFID tag reflects and absorbs the incoming signal in a distinctive pattern, corresponding to bits of 0s and 1s, that the system can recognize.

    Because these signals travel at the speed of light, the system can compute a “time of flight” — measuring distance by calculating the time it takes a signal to travel between a transmitter and receiver — to gauge the location of the tag, as well as the other objects in the environment. But this provides only a ballpark localization figure, not subcentimeter precision.
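
    The time-of-flight arithmetic itself is one line of code; the hard part is timing precision. The toy calculation below (illustrative values only, not TurboTrack’s actual signal processing) also shows why raw time of flight gives only a ballpark figure: one centimeter of range corresponds to roughly 67 picoseconds of round-trip time.

        # Time-of-flight ranging: distance = c * t_round_trip / 2. Values are
        # illustrative; this is not TurboTrack's actual signal processing.

        C = 299_792_458.0  # speed of light (m/s)

        def distance_from_tof(t_round_trip):
            """Range to a reflector given the round-trip travel time (s)."""
            return C * t_round_trip / 2.0

        print(f"20 ns round trip -> {distance_from_tof(20e-9):.3f} m")  # ~3.0 m

        # Why raw ToF is only a ballpark: 1 cm of range is ~67 ps of timing.
        delta_t = 2.0 * 0.01 / C
        print(f"1 cm of range = {delta_t * 1e12:.0f} ps of round-trip time")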

    Leveraging movement

    To zoom in on the tag’s location, the researchers developed what they call a “space-time super-resolution” algorithm.

    The algorithm combines the location estimations for all rebounding signals, including the RFID signal, which it determined using time of flight. Using some probability calculations, it narrows down that group to a handful of potential locations for the RFID tag.

    As the tag moves, its signal angle slightly alters — a change that also corresponds to a certain location. The algorithm then can use that angle change to track the tag’s distance as it moves. By constantly comparing that changing distance measurement to all other distance measurements from other signals, it can find the tag in a three-dimensional space. This all happens in a fraction of a second.

    “The high-level idea is that, by combining these measurements over time and over space, you get a better reconstruction of the tag’s position,” Adib says.
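
    The paper’s algorithm isn’t reproduced in this article, but the combine-over-time-and-space intuition can be sketched as a simple weighted-hypothesis filter: keep a cloud of candidate tag positions, shift it with the tag’s (assumed known) motion, and re-weight it with every new range measurement from every receiver, so the ambiguity shrinks as the tag moves. The receiver layout, noise level, and motion model below are all assumptions for illustration.

        # Sketch of fusing range measurements over time and space; this is an
        # illustrative weighted-hypothesis filter, not the paper's algorithm.
        import numpy as np

        rng = np.random.default_rng(1)
        receivers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # assumed layout (m)
        true_path = [np.array([2.0, 1.0 + 0.05 * k]) for k in range(20)]  # moving tag

        candidates = rng.uniform(0.0, 4.0, size=(5000, 2))  # position hypotheses
        weights = np.ones(len(candidates)) / len(candidates)
        sigma = 0.05  # assumed ranging noise (m)

        for pos in true_path:
            candidates += np.array([0.0, 0.05])  # advance hypotheses by known motion
            for rx in receivers:
                meas = np.linalg.norm(pos - rx) + rng.normal(0.0, sigma)  # noisy range
                pred = np.linalg.norm(candidates - rx, axis=1)
                weights *= np.exp(-0.5 * ((pred - meas) / sigma) ** 2)  # re-weight
            weights /= weights.sum()  # renormalize each step to avoid underflow

        estimate = (weights[:, None] * candidates).sum(axis=0)
        print("estimated tag position:", np.round(estimate, 2))  # near true (2.0, 1.95)

    Each single measurement only constrains the tag to a circle around one receiver; it is the accumulation of many such constraints, from several receivers and across the tag’s motion, that collapses the cloud to a tight estimate.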

    The work was sponsored, in part, by the National Science Foundation.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     