
Humanoid robots can now dance much more realistically, and even move like us, thanks to a new software framework for tracking human movements.
Developed by researchers at UC San Diego, UC Berkeley, MIT and Nvidia, Exbody2 is a technology that enables humanoid robots to perform realistic movements based on detailed scans of humans and motion-tracking visualizations.
The researchers hope that by mimicking human movements more accurately, future humanoid robots will be able to perform a much wider range of tasks. For example, the teaching method could help robots operate in roles that require fine movement, such as picking items from shelves or moving carefully around humans and other machines.
Exbody2 works by taking simulated motions based on human motion-capture scans and converting them into usable motion data for robots to replicate. The framework allows robots to reproduce complex movements while staying stable, and to adapt to different tasks without the need for extensive retraining.
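The paper does not publish this conversion step as code, but the general idea of turning captured human joint angles into targets a robot can actually execute can be sketched as below. The joint names, limits and clamping strategy here are illustrative assumptions, not Exbody2's actual pipeline.

```python
import numpy as np

# Hypothetical joint limits for a humanoid arm, in radians.
# Real robots publish these in their model files; values here are invented.
JOINT_LIMITS = {
    "shoulder_pitch": (-2.0, 2.0),
    "shoulder_roll": (-1.5, 1.5),
    "elbow": (0.0, 2.5),
}

def retarget_frame(human_angles: dict) -> dict:
    """Map one frame of human joint angles onto the robot,
    clamping anything the hardware cannot reach."""
    robot_frame = {}
    for joint, angle in human_angles.items():
        lo, hi = JOINT_LIMITS[joint]
        robot_frame[joint] = float(np.clip(angle, lo, hi))
    return robot_frame

def retarget_clip(human_clip: list) -> list:
    """Convert a whole motion-capture clip (a list of frames)
    into robot-executable joint targets."""
    return [retarget_frame(frame) for frame in human_clip]

# Example: a two-frame snippet of made-up human motion data.
clip = [
    {"shoulder_pitch": 0.4, "shoulder_roll": 1.9, "elbow": 1.1},
    {"shoulder_pitch": 0.6, "shoulder_roll": 2.1, "elbow": 1.3},
]
print(retarget_clip(clip))  # shoulder_roll is clamped to 1.5
```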
Related: 8 of the strangest robots in the world right now
All of this is taught using reinforcement learning, a subset of machine learning in which a model is fed large amounts of data so that it learns to take the optimal action in a given situation. Outputs simulated by the researchers are assigned a positive or negative score to "reward" the model for producing the desired outcome: in this case, replicating motion accurately without compromising the robot's stability.
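Exbody2's exact reward terms are specified in the paper; the sketch below is only a generic illustration of the scoring idea described above, with made-up weights and a simple "tracking accuracy minus instability penalty" structure.

```python
import numpy as np

def reward(robot_pose: np.ndarray,
           target_pose: np.ndarray,
           torso_tilt: float) -> float:
    """Score one simulation step: high when the robot matches the
    reference motion, penalized when it risks falling over.
    The weights and exp() shaping are illustrative, not the
    paper's actual reward terms."""
    tracking_error = np.linalg.norm(robot_pose - target_pose)
    tracking_reward = np.exp(-tracking_error)   # equals 1.0 when perfect
    stability_penalty = 0.5 * abs(torso_tilt)   # upright torso preferred
    return float(tracking_reward - stability_penalty)

# A step that tracks the reference closely while staying upright
# scores higher than one that matches poorly or leans heavily.
good = reward(np.array([0.5, 1.0]), np.array([0.5, 1.0]), torso_tilt=0.05)
bad = reward(np.array([0.9, 0.2]), np.array([0.5, 1.0]), torso_tilt=0.6)
print(good > bad)  # True
```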
The framework can also take short motion clips, such as a few seconds of dancing, and synthesize new frames of reference motion from them, allowing the robot to sustain movements for longer periods of time.
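The paper describes this synthesis at a much higher level; one crude way to picture the idea is looping a short clip and crossfading each repetition into the next so the reference trajectory never jumps. The toy sketch below uses arbitrary frame counts and blend lengths and is not the framework's actual method.

```python
import numpy as np

def extend_clip(clip: np.ndarray, repeats: int, blend: int) -> np.ndarray:
    """Loop a short motion clip (frames x joints), crossfading the last
    `blend` frames of each repetition into the first `blend` frames of
    the next so the reference motion stays continuous."""
    out = clip.copy()
    for _ in range(repeats - 1):
        w = np.linspace(0.0, 1.0, blend)[:, None]      # blend weights 0 -> 1
        seam = (1 - w) * out[-blend:] + w * clip[:blend]
        out = np.concatenate([out[:-blend], seam, clip[blend:]])
    return out

# A 60-frame clip (roughly 2 seconds at 30 fps) with 8 joints,
# repeated 5 times, yields a much longer reference motion.
short = np.random.rand(60, 8)
long_motion = extend_clip(short, repeats=5, blend=10)
print(long_motion.shape)  # (260, 8)
```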
Dance with a robot
In a video posted to YouTube, robots trained through Exbody2 dance, spar and exercise alongside human subjects. Additionally, a robot mimics a researcher's movements in real time using additional code called "Hybrid Analytical Inverse Kinematics," developed by the Machine Vision and Intelligence Group at Shanghai Jiao Tong University.
Currently, Exbody2's dataset focuses primarily on upper-body movements. In a study uploaded to the preprint server arXiv on December 17, 2024, the researchers behind the framework explained that this was due to concerns that introducing too many movements in the lower half of the robot would cause instability.
"Overly simplified tasks can limit the trained policy's ability to generalize to new situations, while overly complex tasks can exceed the robot's operational capabilities and lead to ineffective learning outcomes," they wrote. "Therefore, part of preparing the dataset involves excluding or altering entries characterized by intricate lower-body movements beyond the robot's capabilities."
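What excluding such entries might look like in practice can be sketched as a simple filter over the motion clips. The joint layout, the way lower-body motion is measured and the threshold below are all invented for illustration; the paper does not specify its curation process as code.

```python
import numpy as np

# Indices of lower-body joints in each pose vector (illustrative layout).
LOWER_BODY = [6, 7, 8, 9, 10, 11]   # e.g. hips, knees, ankles
MAX_LOWER_RANGE = 0.8               # radians; an invented threshold

def lower_body_range(clip: np.ndarray) -> float:
    """How much the lower-body joints move over a clip (frames x joints),
    measured as the largest per-joint range of motion."""
    lower = clip[:, LOWER_BODY]
    return float((lower.max(axis=0) - lower.min(axis=0)).max())

def curate(dataset: list) -> list:
    """Drop clips whose lower-body motion exceeds what the robot
    can track; keep the rest unchanged."""
    return [c for c in dataset if lower_body_range(c) <= MAX_LOWER_RANGE]

# Toy dataset: an upper-body arm swing passes, a deep knee bend is dropped.
calm = np.zeros((100, 12)); calm[:, 0] = np.linspace(0, 1.5, 100)
squat = np.zeros((100, 12)); squat[:, 7] = np.linspace(0, 2.0, 100)
print(len(curate([calm, squat])))  # 1
```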
The researchers' dataset contains more than 2,800 movements, 1,919 of which come from the Archive of Motion Capture As Surface Shapes (AMASS) dataset. AMASS is a large dataset of human movements intended for non-commercial deep learning, in which neural networks are trained to identify or reproduce patterns; it includes more than 11,000 individual human motions and over 40 hours of detailed motion data.
While Exbody2 has proven effective at replicating humanlike movements in humanoid robots, the team relied on manually curating the dataset to ensure that only appropriate information was fed into the framework, which remains a bottleneck. The researchers suggest that automated dataset collection could help streamline this process in the future.