Picture a slime-like robot that can alter its shape effortlessly to fit through tight spaces, potentially used in the human body to extract unwanted objects.
Although such a robot currently only exists in the confines of a lab, researchers are actively developing flexible robots for applications in healthcare, wearable technology, and industrial systems.
But how do you control a soft robot that has no joints or limbs to actuate, and that can change its shape at will? MIT researchers are working to address this challenge.
They have created a control algorithm that can autonomously learn how to manipulate a reconfigurable robot to accomplish a specific task, even when the task requires the robot to morph multiple times. The team has also designed a simulator to test control algorithms for deformable robots on various shape-changing tasks.
Their approach successfully completed all eight tasks they evaluated, surpassing other algorithms. The technique excelled particularly in complex tasks. For example, in one test, the robot had to shrink its height, grow two small legs to navigate a narrow pipe, then retract the legs and extend its torso to open the pipe’s lid.
While reconfigurable soft robots are still in the early stages of development, this method could potentially lead to versatile robots that can adapt their shapes for a range of tasks.
“When people think of soft robots, they often imagine robots that are elastic and return to their original form. Our robot, however, can actually change its morphology. It’s remarkable that our method has been so successful given the novelty of the concept,” says Boyuan Chen, an EECS graduate student and co-author of the research.
Chen’s co-authors include lead author Suning Huang, an undergraduate student at Tsinghua University, Huazhe Xu, an assistant professor at Tsinghua University, and senior author Vincent Sitzmann, an assistant professor of EECS at MIT. The research will be presented at the International Conference on Learning Representations.
Controlling dynamic motion
Traditionally, scientists teach robots to perform tasks using reinforcement learning, a trial-and-error process where the robot is rewarded for actions that bring it closer to a goal.
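The trial-and-error loop described above can be sketched in a few lines. The toy example below (generic tabular Q-learning, not the MIT team's algorithm) has an agent on a number line learn, from rewards alone, to walk toward a goal; the goal position, reward scheme, and hyperparameters are all illustrative assumptions.

```python
import random

def train(goal=5, episodes=2000, epsilon=0.2, alpha=0.5, gamma=0.9, seed=0):
    """Toy reinforcement learning: reward actions that move closer to a goal."""
    rng = random.Random(seed)
    q = {}  # maps (state, action) -> estimated long-term reward
    for _ in range(episodes):
        state = 0
        for _ in range(20):
            # Sometimes try a random action (explore), otherwise pick the
            # action with the highest learned value (exploit).
            if rng.random() < epsilon:
                action = rng.choice([-1, 1])
            else:
                action = max([-1, 1], key=lambda a: q.get((state, a), 0.0))
            nxt = state + action
            # The reward signal: +1 for moving closer to the goal, -1 otherwise.
            reward = 1.0 if abs(goal - nxt) < abs(goal - state) else -1.0
            best_next = max(q.get((nxt, a), 0.0) for a in (-1, 1))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

# After enough trial and error, the greedy policy walks straight to the goal.
q = train()
state = 0
for _ in range(5):
    state += max([-1, 1], key=lambda a: q.get((state, a), 0.0))
```

The same loop structure scales up to real robots, but as the article explains next, it breaks down when the "action" is thousands of coupled muscle commands rather than a single step.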
This method works well for robots with consistent, well-defined moving parts like a gripper with fingers. However, shape-shifting robots controlled by magnetic fields can deform and elongate their entire bodies.
“When a robot has thousands of small muscle pieces to control, traditional learning methods become difficult to apply,” explains Chen.
To address this challenge, the researchers approached the problem differently. Instead of controlling each tiny muscle individually, their algorithm first learns to control groups of adjacent muscles that work together.
Once the algorithm has explored different actions by focusing on muscle groups, it refines the process to optimize the action plan it has learned. This coarse-to-fine approach allows the algorithm to efficiently control the robot.
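The two-stage idea can be illustrated with a toy optimization problem. In this sketch (our illustration of the coarse-to-fine concept, not the paper's exact algorithm), a coarse stage finds one shared action per group of 100 adjacent "muscles," and a fine stage then refines individual muscles starting from that coarse plan; the target pattern, group size, and random-search optimizer are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_muscles, group = 1000, 100
# Hypothetical goal: match a smooth target actuation pattern.
target = np.sin(np.linspace(0, 3 * np.pi, n_muscles))

def loss(action):
    return float(np.mean((action - target) ** 2))

baseline_loss = loss(np.zeros(n_muscles))

# Stage 1 (coarse): adjacent muscles in each group share one action value,
# shrinking the search space from 1,000 dimensions to 10.
coarse = np.zeros(n_muscles // group)
for _ in range(200):
    candidate = coarse + 0.1 * rng.standard_normal(coarse.shape)
    if loss(np.repeat(candidate, group)) < loss(np.repeat(coarse, group)):
        coarse = candidate
action = np.repeat(coarse, group)
coarse_loss = loss(action)

# Stage 2 (fine): refine individual muscles around the coarse solution.
for _ in range(200):
    candidate = action + 0.05 * rng.standard_normal(n_muscles)
    if loss(candidate) < loss(action):
        action = candidate
```

The coarse stage does most of the work because a few group-level values already capture the broad shape of the plan; the fine stage only has to polish local details.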
“Taking a random action can lead to a significant change in outcome because you are controlling several muscles at once. This coarse-to-fine methodology is key,” says Sitzmann.
The researchers treat the robot’s action space as an image: a machine-learning model generates a 2D action space that covers the robot and its surroundings. They simulate robot motion using the material point method, in which the simulated area is covered by points and overlaid with a grid.
By designing their algorithm to understand the correlations between nearby action points, the researchers can predict the robot’s movements more efficiently.
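One simple way to see why spatial correlation helps: if actions live on a 2D grid like pixels in an image, generating them at low resolution and interpolating up guarantees that nearby points receive similar commands. The sketch below demonstrates that idea with bilinear upsampling; it is our illustration of the concept, not the authors' model, and the grid sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
coarse_res, full_res = 4, 32

# A low-resolution "action image": one value per coarse grid cell.
coarse_actions = rng.standard_normal((coarse_res, coarse_res))

def upsample(img, size):
    """Bilinear upsampling: each fine grid point blends its nearest
    coarse neighbors, so adjacent actions stay close by construction."""
    src = np.linspace(0, img.shape[0] - 1, size)
    i0 = np.floor(src).astype(int)
    i1 = np.minimum(i0 + 1, img.shape[0] - 1)
    t = src - i0
    rows = img[i0] * (1 - t)[:, None] + img[i1] * t[:, None]
    return rows[:, i0] * (1 - t) + rows[:, i1] * t

fine_actions = upsample(coarse_actions, full_res)

# Neighboring actions on the fine grid change much more gradually than
# neighboring values on the coarse grid.
fine_step = np.max(np.abs(np.diff(fine_actions, axis=1)))
coarse_step = np.max(np.abs(np.diff(coarse_actions, axis=1)))
```

Because nearby actions are correlated, the learner effectively searches a much smaller space than the full grid of independent per-point commands.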
Building a simulator
To test their approach, the researchers developed a simulation environment called DittoGym.
DittoGym includes eight tasks that evaluate a reconfigurable robot’s ability to change shape dynamically. For instance, one task requires the robot to elongate and curve its body to maneuver around obstacles, while another task involves changing shape to mimic letters of the alphabet.
“Our task selection in DittoGym combines generic reinforcement learning benchmarks with the specific requirements of reconfigurable robots. Each task is designed to test important properties such as long-horizon explorations, environmental analysis, and interaction with objects,” explains Huang. “We believe these tasks provide a comprehensive assessment of the flexibility of reconfigurable robots and the effectiveness of our reinforcement learning approach.”
Their algorithm outperformed baseline methods and was the only technique capable of completing multistage tasks that required shape changes.
“The strong correlation between action points close to each other is crucial for the success of our method,” says Chen.
While it may be some time before shape-shifting robots are used in real-world applications, Chen and his team hope their work inspires other researchers to explore reconfigurable soft robots and consider utilizing 2D action spaces for complex control problems.