Google Links 14 Robotic Arms For Collaborative Learning

The field of robotics has come a long way over the past few decades, with impressive recent advances from companies such as Boston Dynamics and institutions like KAIST. Google is also working to bring robotics to a new level, specifically by bridging the gap between the sensorimotor skills of humans and robots. Earlier this week, Google's researchers revealed their latest creation: 14 separate robot arms designed to learn and improve their sensorimotor skills by sharing their experience.

Although robots in general can be extremely precise, those that exhibit such qualities usually follow the "Sense-Plan-Act" formula rather than acting on real sensorimotor skills. Robots of this type use their sensors to gather information, use that information to build a world model and plan their next move, and then act according to that model. As a result, despite their precision in controlled environments such as production lines, these robots fail to showcase human-like sensorimotor skills in natural environments. They are programmed to recognize objects and act according to a strict set of rules or algorithms, and cannot quickly adapt to and overcome new obstacles.
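The three stages described above can be sketched in a few lines of toy Python. Everything here is a hypothetical stand-in for real robot hardware; the point is only that sensing happens once, so a plan computed from that single snapshot can never react to a scene that changes mid-execution:

```python
# Minimal sketch of the open-loop "Sense-Plan-Act" cycle.
# All functions and values are illustrative stand-ins, not a real robot API.

def sense(sensor_reading):
    """Build a world model from a single batch of sensor data."""
    return {"object_position": sensor_reading}

def plan(world_model):
    """Compute a fixed motion plan from the frozen world model."""
    return ["move_to", world_model["object_position"], "close_gripper"]

def act(plan_steps):
    """Execute the plan step by step, with no further sensing."""
    return [f"executed: {step}" for step in plan_steps]

world = sense((0.4, 0.1, 0.2))   # one snapshot of the scene
steps = plan(world)              # the plan is fixed from here on
log = act(steps)                 # if the object moves now, the grasp fails
```

If the object is nudged after `sense` returns, the arm still executes the stale plan, which is exactly the brittleness the article attributes to this paradigm.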

To overcome these problems, Google's researchers working on robotic hand-eye coordination and grasping linked 14 separate robot arms and set them to learning simple tasks, such as grasping objects in cluttered environments. Each day, the researchers had the robot arms pick up objects from boxes at random, and at the end of every day they collected the data and used it to train a deep convolutional neural network. After more than 800,000 grasp attempts, the researchers began observing the "beginnings of intelligent reactive behaviors". The robot arms not only became better at picking up objects from a cluttered environment, but were also able to observe their own grippers and self-correct their motion in real time, in contrast to the "Sense-Plan-Act" paradigm. Through practice, the robots developed different techniques for picking up hard and soft objects, and they also showcased pre-grasp behaviors, such as pushing certain objects aside in order to isolate a single item from a group and grasp it more easily. It's worth noting that these actions were not pre-programmed; they are the result of combining learning with continuous feedback and control.
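The continuous feedback loop described above can be illustrated with a toy closed-loop sketch. Instead of planning once, a learned scoring function is queried at every control step, so the arm keeps correcting itself as the scene changes. The `grasp_score` function below is a one-dimensional stand-in for Google's deep convolutional network, and the drifting target simulates an object being nudged mid-grasp; none of this reflects the researchers' actual code:

```python
import random

def grasp_score(image, command):
    """Toy surrogate for a learned network scoring (image, motor command) pairs.
    Here it simply prefers commands that move the gripper toward the object."""
    gripper, target = image
    return -abs((gripper + command) - target)

def servo_step(image, candidates):
    """Pick the motor command the (learned) model scores highest right now."""
    return max(candidates, key=lambda c: grasp_score(image, c))

def run_grasp(gripper=0.0, target=1.0, steps=20):
    random.seed(0)  # deterministic for the example
    for _ in range(steps):
        # Sample a handful of candidate motor commands each control step.
        cmds = [random.uniform(-0.2, 0.2) for _ in range(16)]
        cmd = servo_step((gripper, target), cmds)
        gripper += cmd                          # execute one small motion...
        target += random.uniform(-0.01, 0.01)   # ...while the scene keeps changing
    return abs(gripper - target)                # residual error after servoing

error = run_grasp()
```

Because the "camera image" is re-read before every command, the loop tracks the moving object and the residual error stays small, which is the essential difference from the one-shot plan in the Sense-Plan-Act sketch.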

Copyright ©2019 Android Headlines. All Rights Reserved
About the Author

Mihai Matei

Senior Staff Writer
Mihai has written for Androidheadlines since 2016 and is a Senior Writer for the site. Mihai has a background in the arts and owned a couple of small businesses in the late 2000s, namely an interior design firm and a clothing manufacturing line. He dabbled in real estate for a short while and has worked as a tech news writer for several publications since 2011. He has always had an appreciation for silicon-based technology and hopes it will contribute to a better humanity. Contact him at [email protected]