The field of robotics has come a long way over the past few decades, and lately there have been numerous impressive advancements from companies such as Boston Dynamics and KAIST. Google is also working on bringing robotics to a new level; specifically, the company is invested in bridging the gap between the sensorimotor skills of humans and robots. Earlier this week, Google’s researchers revealed their latest creation: 14 separate robot arms designed to learn and improve their sensorimotor skills by sharing their experience.
Although robots, in general, can be extremely precise, those that exhibit such qualities usually follow the “Sense-Plan-Act” formula rather than acting on genuine sensorimotor skills. These robots use their sensors to gather information, use that information to build a world model and plan their next move, and lastly act according to that model. This means that despite their precision in controlled environments such as production lines, in natural environments these robots fail to showcase human-like sensorimotor skills. They are programmed to recognize objects and to act on a strict set of rules or algorithms, and cannot quickly adapt and overcome new obstacles.
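The “Sense-Plan-Act” loop described above can be sketched in a few lines. This is a toy illustration, not code from any real robot controller; all names are made up for the example. The key property is that the plan is computed once from a snapshot of the world and then executed open-loop:

```python
# Minimal sketch of the "Sense-Plan-Act" control loop.
# All function names and data structures are illustrative assumptions.

def sense(environment):
    """Gather a one-time snapshot of the world from (simulated) sensors."""
    return {"object_position": environment["object_position"]}

def plan(world_model):
    """Compute a fixed motion plan from the world model; the plan is
    never revised once execution begins."""
    target = world_model["object_position"]
    return [target]  # a one-step "path" to the object

def act(plan_steps):
    """Execute the plan open-loop: no feedback is consulted mid-motion."""
    return plan_steps[-1]  # gripper ends wherever the plan said

environment = {"object_position": (3, 4)}
world_model = sense(environment)
motion_plan = plan(world_model)
final_position = act(motion_plan)
print(final_position)  # → (3, 4)

# The weakness: if the object moves after sense(), the plan is stale,
# and act() still drives the gripper to the old, incorrect target.
environment["object_position"] = (5, 1)
```

This rigidity is exactly why such robots excel on production lines, where nothing moves unexpectedly, but struggle in natural environments.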
To overcome these problems, Google’s researchers working on robotic hand-eye coordination and grasping linked 14 separate robot arms and set them to learning simple tasks such as grasping objects in cluttered environments. Each day, the researchers had the robot arms pick up objects from boxes at random, and at the end of every day they collected the data and used it to train a deep convolutional neural network. After more than 800,000 grasp attempts, the researchers began observing the “beginnings of intelligent reactive behaviors”. The robot arms not only became better at picking objects out of a cluttered environment, but were also able to observe their own grippers and self-correct their motion in real time (a departure from the “Sense-Plan-Act” paradigm). Through practice, the robots developed different techniques for picking up hard and soft objects, and they also showcased pre-grasp behaviors, such as pushing certain objects aside in order to isolate a single item from a group and grasp it more easily. It’s worth noting that these actions were not pre-programmed; they are the result of linking learning with continuous feedback and control.
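The real-time self-correction described above comes from closing the loop: instead of planning once, the controller re-observes the scene (including the gripper itself) on every step and corrects its motion. Below is a hedged toy sketch of that idea; the actual system works on camera images through a deep convolutional network, whereas this example uses bare coordinates and a hand-written stepping rule:

```python
# Toy sketch of closed-loop, feedback-driven control, in contrast to
# the open-loop "Sense-Plan-Act" pipeline. Purely illustrative.

def observe(environment, gripper):
    """Every control step re-senses the world AND the gripper's own pose."""
    return environment["object_position"], gripper

def step_toward(gripper, target):
    """Move at most one unit along each axis toward the target."""
    def clamp(d):
        return max(-1, min(1, d))
    return (gripper[0] + clamp(target[0] - gripper[0]),
            gripper[1] + clamp(target[1] - gripper[1]))

environment = {"object_position": (3, 4)}
gripper = (0, 0)
for t in range(10):
    target, gripper_seen = observe(environment, gripper)
    gripper = step_toward(gripper_seen, target)
    if t == 2:
        # The object is nudged mid-motion; unlike Sense-Plan-Act,
        # the loop self-corrects because it re-observes on every step.
        environment["object_position"] = (5, 1)

print(gripper)  # → (5, 1): the gripper tracked the moved object
```

In the research itself, the “policy” inside this loop is not a hand-written rule but a neural network trained on the pooled grasp attempts of all 14 arms, which is what makes the shared-experience setup so effective.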