
Google's AI teaches robots how to move by watching dogs

Google researchers have developed an AI system that learns from animal movements to give robots greater mobility, according to a preprint paper and blog post released this week. The co-authors believe their approach could support the development of robots that perform real-world tasks, such as moving materials between multi-level warehouses and fulfillment centers.

The team's framework takes a motion capture clip of an animal, a dog in this case, and uses reinforcement learning, a training technique that drives software agents toward goals via rewards, to train a control policy. By supplying the system with different reference motions, the researchers were able to “teach” a four-legged Unitree Laikago robot a range of behaviors, from fast walking (up to 2.6 miles per hour) to hops and turns.
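In outline, this means running a simulated robot alongside a reference clip and rewarding the policy for staying close to the clip at each timestep. The sketch below illustrates that loop with a toy simulator and a hand-tuned tracking policy; all of the function names, dynamics, and numbers here are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def reference_pose(t):
    """Toy stand-in for a motion-capture frame at time t (12 joint angles)."""
    return np.sin(0.1 * t + np.arange(12))

def step_sim(pose, action):
    """Toy simulator: the action nudges the robot's joint angles."""
    return pose + 0.1 * action

def imitation_reward(pose, ref):
    """Reward is high when the simulated pose tracks the reference pose."""
    return float(np.exp(-np.sum((pose - ref) ** 2)))

def rollout(policy_gain, horizon=100):
    """Run one episode, rewarding the policy for imitating the clip."""
    pose = reference_pose(0)
    total = 0.0
    for t in range(1, horizon):
        ref = reference_pose(t)
        action = policy_gain * (ref - pose)  # simple tracking policy
        pose = step_sim(pose, action)
        total += imitation_reward(pose, ref)
    return total
```

A policy that tracks the reference (`policy_gain=5.0`) accumulates a higher imitation return than one that ignores it (`policy_gain=0.0`); in the real system, reinforcement learning discovers such a policy rather than having it hand-written.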

To validate their approach, the researchers first compiled a dataset of real dogs performing a range of skills. (Most of the training took place in a physics simulation so that the poses of the reference motions could be tracked precisely.) They then used the different motions in the reward function, which describes how agents ought to behave, to train a simulated robot to imitate the motion skills, using around 200 million samples.
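A common shape for such a reward, used in imitation-learning work of this kind, is a weighted sum of exponentiated tracking errors, so the reward stays bounded and is highest when the simulated robot matches the reference exactly. The weights and scale factors below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def tracking_reward(pose, vel, ref_pose, ref_vel,
                    w_pose=0.6, w_vel=0.4, k_pose=2.0, k_vel=0.1):
    """Imitation-style reward: exponentiated pose and velocity tracking
    errors, combined with weights that sum to 1 so the reward lies in (0, 1]."""
    r_pose = np.exp(-k_pose * np.sum((pose - ref_pose) ** 2))
    r_vel = np.exp(-k_vel * np.sum((vel - ref_vel) ** 2))
    return w_pose * r_pose + w_vel * r_vel

# Perfect tracking yields the maximum reward of 1.0.
ref_p, ref_v = np.zeros(12), np.zeros(12)
print(tracking_reward(ref_p, ref_v, ref_p, ref_v))  # 1.0
```

Any deviation from the reference pose or velocity pushes the reward below 1, which is what steers the learned policy toward the demonstrated motion.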

 Google robot simulation

However, simulators generally offer only a coarse approximation of the real world. To bridge the gap, the researchers used an adaptation technique that randomizes the dynamics in simulation, varying physical quantities such as the robot's mass and friction. These values were mapped by an encoder to a numerical representation, i.e. a latent encoding, which was passed as an input to the robot's control policy. When deploying the policy on a real robot, the researchers removed the encoder and searched directly for a set of latent variables that let the robot perform its skills successfully.
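The randomization-plus-encoder idea can be sketched as follows: each training episode draws fresh dynamics parameters, an encoder compresses them into a latent vector, and the policy conditions on that latent alongside its observation. The parameter ranges, the placeholder projection standing in for the learned encoder, and the dummy policy are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ranges over which the simulator's dynamics are randomized each episode
# (illustrative ranges, not the paper's).
DYNAMICS_RANGES = {
    "mass_kg":  (8.0, 15.0),
    "friction": (0.4, 1.2),
}

def sample_dynamics():
    """Draw one random set of dynamics parameters for an episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in DYNAMICS_RANGES.items()}

def encode(params, latent_dim=4):
    """Placeholder for the learned encoder that maps dynamics parameters
    to a latent vector; here just a fixed projection squashed by tanh."""
    raw = np.array([params[k] for k in sorted(params)])
    return np.tanh(np.resize(raw, latent_dim))

def policy(observation, latent):
    """Dummy policy conditioned on both the observation and the latent."""
    x = np.concatenate([observation, latent])
    return np.tanh(x.mean()) * np.ones(12)  # 12-DoF action

params = sample_dynamics()
z = encode(params)
action = policy(np.zeros(30), z)
```

Because the policy must succeed under many different latents during training, it learns behavior that can be modulated by the latent, which is what makes the deployment-time search over latents possible.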


The team says it was able to adapt a policy to the real world using less than 8 minutes of real data collected across roughly 50 trials. It also showed that the real robot learned to imitate a range of dog motions, including pacing and trotting, as well as artist-animated keyframe motions such as a dynamic turn.
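The deployment-time adaptation amounts to a search over the latent vector using only the return measured on the real robot. The paper's actual adaptation method is more sophisticated; the hill-climbing loop below is a simple stand-in, and the synthetic "real robot" objective is an assumption made so the sketch is runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_robot_return(z, z_true=np.array([0.3, -0.5, 0.8, 0.1])):
    """Toy stand-in for one real-world trial with latent z: higher return
    means the robot walked longer/better. z_true is a synthetic optimum."""
    return -float(np.sum((z - z_true) ** 2))

def adapt_latent(n_trials=50, latent_dim=4, step=0.3):
    """Hill-climbing over the latent: keep perturbations that improve the
    measured return, mirroring the ~50 real-world trials reported."""
    z = np.zeros(latent_dim)
    best = real_robot_return(z)
    for _ in range(n_trials):
        cand = z + step * rng.normal(size=latent_dim)
        score = real_robot_return(cand)
        if score > best:
            z, best = cand, score
    return z, best

z, best = adapt_latent()
```

Since each trial here corresponds to a short rollout on hardware, keeping the trial count around 50 is what bounds the real-world data requirement to a few minutes.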

"We show that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire [of] behaviors for legged robots," the co-authors wrote in the paper. "By incorporating sample-efficient domain adaptation techniques into the training process, our system can learn adaptive policies in simulation that can then be quickly adapted for real-world deployment."


The control policy was not perfect: owing to algorithmic and hardware limitations, it could not learn highly dynamic behaviors such as large jumps and runs, and it was not as stable as the best manually designed controllers. (Across 5 episodes, for a total of 15 trials per method, the real robot fell on average after 6 seconds when pacing, 5 seconds when trotting backwards, 9 seconds when turning, and 10 seconds when hopping.) The researchers leave improving the controller's robustness, and developing frameworks that can learn from other sources of motion data such as video clips, to future work.
