Recreating human walking and running in robots has long been a challenge, but a group of researchers has now made strides toward overcoming it. They have developed a method that combines central pattern generators (CPGs) with deep reinforcement learning to mimic human motion more effectively. This approach not only replicates walking and running movements but also generates new movements for frequencies where no training data exist, enables smooth transitions between different motions, and allows robots to adapt to unstable surfaces.
On April 15, 2024, the details of this breakthrough were published in the journal IEEE Robotics and Automation Letters.
Walking and running involve biological redundancies that enable humans to adjust to different environments and change their speed. Replicating these complex movements in robots is a significant challenge due to their intricacy and variability.
Current models struggle in unfamiliar or difficult environments because they are typically designed to converge on a single correct solution. Human motion, by contrast, draws on a wide repertoire of possible movements, so there is rarely one uniquely efficient way to move.
Deep reinforcement learning (DRL) has emerged as a solution to this problem by utilizing neural networks to handle complex tasks and learn from sensory inputs. While DRL offers powerful learning capabilities, it can be computationally intensive, especially in systems with high degrees of freedom.
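The paper's own training setup is not reproduced here, but the core idea behind reinforcement learning, adjusting a policy's parameters in the direction that makes rewarded actions more likely, can be sketched on a toy problem. The example below is a minimal REINFORCE-style update on a two-armed bandit; all names and reward values are hypothetical, and real DRL replaces the single `logit` parameter with a deep network mapping sensory inputs to actions:

```python
import math, random

random.seed(0)

# Minimal policy-gradient sketch (not the authors' method).
# A single parameter encodes the preference for arm 1 over arm 0.
logit = 0.0
lr = 0.1                 # learning rate (hypothetical value)
reward_prob = [0.2, 0.8] # arm 1 pays off more often (toy values)

def policy():
    """Sigmoid of the logit: P(choose arm 1)."""
    return 1.0 / (1.0 + math.exp(-logit))

for _ in range(2000):
    p1 = policy()
    action = 1 if random.random() < p1 else 0
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    # REINFORCE: for a sigmoid policy, d(log pi)/d(logit) = action - p1,
    # so rewarded actions pull the logit toward repeating them.
    logit += lr * reward * (action - p1)
```

After training, `policy()` should exceed 0.5, i.e., the agent has learned to favor the better-paying arm purely from reward feedback, without ever being told which arm is better.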
Another approach is imitation learning, where robots mimic human motion data to learn tasks. Although effective in stable environments, imitation learning falters when faced with novel situations. Its narrow scope of learned behaviors limits its adaptability.
Professor Mitsuhiro Hayashibe from Tohoku University’s Graduate School of Engineering explains, “We combined imitation learning with CPG-like controllers and applied deep learning to a reflex neural network supporting the CPGs to overcome the limitations of both approaches.”
CPGs are neural circuits in the spinal cord that generate rhythmic muscle activity, while reflex circuits provide feedback for adjusting movements. By integrating CPG and reflex structures, the adaptive imitated CPG (AI-CPG) method achieves superior adaptability and stability in motion generation.
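To make the CPG concept concrete, a CPG can be modeled as a phase oscillator that produces a rhythmic drive signal (for example, a joint-angle setpoint), with a reflex-like feedback term that nudges its phase in response to sensory input. The sketch below is a generic illustration of that principle, not the authors' AI-CPG architecture; all function names and parameters are hypothetical:

```python
import math

def cpg_step(phase, amplitude, dt=0.01, freq_hz=1.0,
             target_amp=1.0, feedback=0.0):
    """One Euler integration step of a minimal phase-oscillator CPG.

    `feedback` stands in for a reflex signal (e.g., from foot contact)
    that speeds up or slows down the rhythm to adapt to the terrain.
    """
    phase += dt * (2.0 * math.pi * freq_hz + feedback)
    amplitude += dt * (target_amp - amplitude)  # relax toward target
    return phase % (2.0 * math.pi), amplitude

def rhythmic_output(phase, amplitude):
    """Rhythmic drive signal generated by the oscillator state."""
    return amplitude * math.sin(phase)
```

Run open-loop (feedback = 0), this produces a steady sinusoidal rhythm; feeding sensory-driven corrections into `feedback` is what lets the rhythm adapt, which is the role the article attributes to the reflex network supporting the CPGs.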
This breakthrough represents a significant advancement in generating human-like movements in robotics, with enhanced environmental adaptability. The research team included members from Tohoku University and the Swiss Federal Institute of Technology in Lausanne.