A team of Korean engineering researchers has developed quadrupedal robot technology capable of traversing steps and uneven terrain without visual or tactile sensors. Even in dire conditions such as darkness or dense smoke, where visual confirmation is impossible, the technology maintains stable movement. Professor Hyun Myung's research team at the Urban Robotics Lab in the School of Electrical Engineering, KAIST, has developed a walking-robot control technology called "DreamWaQ" that enables robust blind locomotion in diverse, unstructured environments.

Conventional walking-robot controllers rely on kinematics and/or dynamics models, a model-based control approach. In atypical settings such as uneven fields, however, terrain features must be sensed quickly for the robot to remain stable while walking, and this has traditionally depended on the ability to visually survey the surrounding environment.

In contrast, Professor Hyun Myung's team developed a controller based on deep reinforcement learning (RL), which rapidly computes appropriate control commands for each motor of the walking robot and is trained on data from a variety of simulated environments. Whereas existing learned controllers typically require separate adaptation before running on a real robot, this controller can be applied to diverse walking robots without additional tuning.

The DreamWaQ controller consists of a context estimation network, which estimates ground and robot information, and a policy network, which generates optimal control commands. The context-aided estimator network implicitly estimates ground information and explicitly estimates the robot's state from inertial and joint measurements. These estimates are fed into the policy network to produce precise control commands. Both networks are trained simultaneously in simulation; an illustrative sketch of this design appears below.

The context-aided estimator network is trained through supervised learning, while the policy network is trained with an actor-critic architecture, a deep RL method. In simulation, where the surrounding terrain is fully known, the critic network evaluates the policy of the actor network, which can only infer the terrain implicitly; the second sketch below illustrates this asymmetric setup.

Remarkably, the entire learning process takes roughly an hour on a GPU-equipped PC, and the real robot carries only the trained actor network. Without any direct view of its surroundings, the robot uses only its inertial measurement unit (IMU) and joint-angle measurements to "imagine" environments like those it encountered in simulation. When it meets an obstacle such as a stair, it infers the terrain the moment its foot makes contact, adapting quickly and sending appropriate control commands to each motor for agile locomotion.

The DreamWaQer robot has demonstrated these capabilities both in the laboratory and outdoors, navigating curbs, speed bumps, tree roots, and gravel fields. It even climbed staircases whose steps were as high as two-thirds of its body height, showcasing its adaptability.
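For readers curious how such a controller fits together, here is a minimal, illustrative sketch in PyTorch of the two-network design described above: a context-aided estimator that turns a short history of IMU and joint readings into an explicit state estimate plus an implicit terrain latent, and a policy that maps these to per-motor commands. All names and dimensions (ContextEstimator, OBS_DIM, LATENT_DIM, and so on) are assumptions for illustration, not values taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative dimensions; the real DreamWaQ sizes are not given in the article.
OBS_DIM = 45     # one proprioceptive frame: IMU + joint angles/velocities + last action
HIST_LEN = 5     # short history of past observations fed to the estimator
LATENT_DIM = 16  # implicit terrain/context embedding
VEL_DIM = 3      # explicitly estimated body velocity
ACT_DIM = 12     # one command per joint motor of a quadruped

class ContextEstimator(nn.Module):
    """Maps a history of proprioceptive observations to (explicit state, implicit context)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(OBS_DIM * HIST_LEN, 128), nn.ELU(),
            nn.Linear(128, 64), nn.ELU(),
        )
        self.vel_head = nn.Linear(64, VEL_DIM)        # explicit robot-state estimate
        self.latent_head = nn.Linear(64, LATENT_DIM)  # implicit terrain context

    def forward(self, obs_history):
        h = self.encoder(obs_history.flatten(1))
        return self.vel_head(h), self.latent_head(h)

class Policy(nn.Module):
    """Actor: current observation + estimator outputs -> joint position targets."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + VEL_DIM + LATENT_DIM, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, ACT_DIM),
        )

    def forward(self, obs, vel_est, latent):
        return self.net(torch.cat([obs, vel_est, latent], dim=-1))

# Inference as deployed on the robot: proprioception only, no cameras or foot sensors.
estimator, policy = ContextEstimator(), Policy()
obs_history = torch.zeros(1, HIST_LEN, OBS_DIM)  # rolling buffer of IMU/joint readings
obs = obs_history[:, -1]
vel_est, latent = estimator(obs_history)
joint_targets = policy(obs, vel_est, latent)     # sent to the 12 leg motors
```

Note that only the estimator and actor appear here, matching the article's point that the deployed robot carries no critic and no exteroceptive sensing.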
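The asymmetric training setup can be sketched in the same spirit, reusing the estimator and policy above. The estimator's explicit output is regressed against simulator ground truth (the supervised part), the actor is updated with a policy-gradient surrogate, and the critic, which exists only in simulation, receives privileged terrain information the real robot never sees. This is a deliberate simplification under assumed names (PRIV_DIM, batch keys); the actual method uses a full actor-critic RL update with additional machinery (clipping, advantage estimation, a training objective for the implicit latent) omitted here.

```python
import torch
import torch.nn as nn

# Privileged information that exists only in simulation (illustrative size).
PRIV_DIM = 187  # e.g. a heightmap of the terrain around the robot

class Critic(nn.Module):
    """Scores states using privileged terrain data the deployed actor never sees."""
    def __init__(self, obs_dim=45):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + PRIV_DIM, 256), nn.ELU(),
            nn.Linear(256, 1),
        )

    def forward(self, obs, terrain):
        return self.net(torch.cat([obs, terrain], dim=-1)).squeeze(-1)

def compute_losses(batch, estimator, policy, critic, action_std=0.3):
    """One simplified loss pass; a real RL update adds clipping, GAE, entropy, etc."""
    # Supervised branch: the simulator knows the true body velocity, so the
    # estimator's explicit output can be regressed against it directly.
    vel_est, latent = estimator(batch["obs_history"])
    est_loss = nn.functional.mse_loss(vel_est, batch["true_vel"])

    # Actor branch: acts only from proprioception plus the estimated context.
    # (Here the implicit latent is shaped by the policy gradient; the paper's
    # training signal for it may differ.)
    mean = policy(batch["obs"], vel_est, latent)
    dist = torch.distributions.Normal(mean, action_std)
    logp = dist.log_prob(batch["actions"]).sum(-1)
    ratio = torch.exp(logp - batch["old_logp"])
    actor_loss = -(ratio * batch["advantages"]).mean()  # unclipped surrogate

    # Critic branch: trained with privileged terrain info, used only in simulation.
    value = critic(batch["obs"], batch["terrain_heights"])
    critic_loss = nn.functional.mse_loss(value, batch["returns"])

    return est_loss + actor_loss + critic_loss
```

The design choice worth noticing is the asymmetry: because the critic is discarded after training, it can consume arbitrarily rich simulator state, while the actor is constrained to the signals available on the physical robot.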
Regardless of the environment, the research team verified the robot's stable locomotion at speeds ranging from a slow 0.3 m/s to a relatively fast 1.0 m/s.

The study, titled "DreamWaQ: Learning Robust Quadrupedal Locomotion With Implicit Terrain Imagination via Deep Reinforcement Learning," was led by doctoral student I Made Aswin Nahrendra as first author, with Byeongho Yu as co-author. It has been accepted for presentation at the upcoming IEEE International Conference on Robotics and Automation (ICRA), to be held in London.