FIP enables real-time, accurate motion capture with daily clothes by fusing flex and inertial sensors.
What if our clothes could capture our body motion accurately? This paper introduces Flexible Inertial Poser (FIP), a novel motion capture system built on daily garments with two elbow-attached flex sensors and four Inertial Measurement Units (IMUs). However, the inevitable sensor displacements in loose wearables significantly degrade joint tracking accuracy. To address this, we identify the distinct characteristics of the flex and inertial sensor displacements and, based on these observations, develop a Displacement Latent Diffusion Model and a Physics-informed Calibrator to compensate for them, yielding a substantial improvement in motion capture accuracy. Notably, our system outperforms state-of-the-art (SOTA) real-time pose estimation methods, with a significant advance in elbow joint tracking. FIP opens up opportunities for ubiquitous human-computer interaction and diverse interactive applications such as the Metaverse, rehabilitation, and fitness analysis.
The prototyping process: (a) Assemble the sensors by soldering; (b) Cut the fabric into pieces according to the patterns; (c) Integrate the assembled sensors into the fabric through heat pressing and sew the fabric pieces into the garment.
The fabrication of FIP garments follows a structured process that integrates sensors seamlessly into daily wear. The system incorporates two flex sensors and four IMUs into a loose-fitting jacket, ensuring both comfort and accuracy. Sensors are embedded using heat pressing, and electrical components are routed through flexible wiring channels.
The fabrication process consists of three stages: sensor assembly, fabric cutting, and garment integration. First, the flex sensors and IMUs are soldered onto their respective connection boards, forming the sensor network. Next, the garment is designed with specialized fabric patterns, ensuring proper placement of sensors. Finally, sensors are heat-pressed onto the fabric, with wiring seamlessly integrated to maintain garment flexibility and comfort.
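To make the sensor network concrete, below is a minimal, hypothetical acquisition loop for the two flex sensors and four IMUs. The port name, baud rate, and packet layout are our own illustrative assumptions, not FIP's actual firmware protocol.

```python
# Hypothetical acquisition loop for the FIP sensor network (2 flex sensors,
# 4 IMUs). Port, baud rate, and packet layout are illustrative assumptions.
import struct
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200
# Assumed packet: 2 float32 flex bend readings + 4 quaternions (4 float32 each).
PACKET_FMT = "<2f16f"
PACKET_SIZE = struct.calcsize(PACKET_FMT)

def read_frame(link: serial.Serial):
    """Read one synchronized frame of flex + IMU readings."""
    raw = link.read(PACKET_SIZE)
    if len(raw) != PACKET_SIZE:
        raise IOError("incomplete packet")
    values = struct.unpack(PACKET_FMT, raw)
    flex = values[:2]  # elbow bend readings
    quats = [values[2 + 4 * i : 6 + 4 * i] for i in range(4)]  # IMU (w, x, y, z)
    return flex, quats

with serial.Serial(PORT, BAUD, timeout=1.0) as link:
    flex, quats = read_frame(link)
    print("flex:", flex, "imu0 quat:", quats[0])
```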
Our design prioritizes wearability and durability, making FIP suitable for real-world applications such as motion tracking in the Metaverse, rehabilitation, and fitness monitoring.
FIP's motion capture algorithm consists of three core modules: a Displacement Latent Diffusion Model (DLDM) that generates diverse sensor-displacement data for training, a Physics-informed Calibrator that compensates for the primary displacement of the flex sensors, and a Pose Fusion Predictor that fuses the calibrated flex readings with IMU data to estimate body pose.
These components work in synergy to enhance robustness, ensuring precise motion tracking even in loose-fitting garments.
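As a minimal sketch of how the inference-time modules could be structured, assuming PyTorch: the class names, layer choices, and dimensions (rotation-matrix IMU inputs, 6D SMPL pose outputs) are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative module skeletons; the calibrator body and fusion network
# are placeholders, not FIP's published architecture.
import torch
import torch.nn as nn

class PhysicsInformedCalibrator(nn.Module):
    """Compensates the primary displacement of the elbow flex sensors."""
    def forward(self, flex_raw: torch.Tensor) -> torch.Tensor:
        # Placeholder: a physics-based mapping from raw bend readings to
        # displacement-corrected elbow angles would go here.
        return flex_raw

class PoseFusionPredictor(nn.Module):
    """Fuses calibrated flex readings with IMU orientations into SMPL pose."""
    def __init__(self, flex_dim=2, imu_dim=4 * 9, pose_dim=24 * 6):
        super().__init__()
        self.net = nn.GRU(flex_dim + imu_dim, 256, batch_first=True)
        self.head = nn.Linear(256, pose_dim)

    def forward(self, flex: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        # flex: (batch, time, 2); imu: (batch, time, 36) flattened rotations.
        h, _ = self.net(torch.cat([flex, imu], dim=-1))
        return self.head(h)  # per-frame SMPL pose parameters
```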
Pipeline Overview. Left: For data preparation, we first use a simulated body-fabric model to synthesize real-time IMU displacement. Then, to train a robust pose predictor, we train a Displacement Latent Diffusion Model (DLDM) to generate data diverse enough to cover the real-world distribution. Finally, we train a Pose Fusion Predictor on simulated flex sensor data and the generated IMU data, supervised by SMPL pose. Right: In the testing phase, flex sensor readings are first passed through our Physics-informed Calibrator to address the Primary Displacement; the calibrated readings are then fed, together with IMU data, into the pre-trained Pose Fusion Predictor.
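Reusing the classes from the sketch above, the pipeline's data flow could look roughly like this; `dldm.sample(...)` is a hypothetical stand-in for the DLDM generator, and the MSE loss is a placeholder for the paper's actual supervision on SMPL pose.

```python
# Rough data flow for the pipeline above, reusing the classes from the
# previous sketch. All names here are illustrative assumptions.
import torch
import torch.nn.functional as F

calibrator = PhysicsInformedCalibrator()
predictor = PoseFusionPredictor()

def train_step(sim_flex, sim_imu, dldm, smpl_pose_gt, optimizer):
    """One training step: augment simulated IMU data with DLDM-generated
    displacements, then supervise the predictor with ground-truth pose."""
    imu_displaced = dldm.sample(sim_imu)  # diverse displaced IMU data
    pose_pred = predictor(sim_flex, imu_displaced)
    loss = F.mse_loss(pose_pred, smpl_pose_gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def infer(flex_raw, imu):
    """Testing phase: calibrate raw flex readings for the Primary
    Displacement, then fuse with IMU data in the pre-trained predictor."""
    with torch.no_grad():
        return predictor(calibrator(flex_raw), imu)
```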
Qualitative results: our approach outperforms all SOTA methods in motion capture, with a clear advantage in elbow joint tracking.
Our FIP system achieves state-of-the-art motion capture performance, significantly reducing joint tracking errors relative to prior real-time methods, with the clearest gains at the elbow.
These results highlight FIP’s potential for applications in human-computer interaction, rehabilitation, and immersive virtual environments.
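For reference, motion capture evaluations of this kind typically report joint position and joint angle errors; the sketch below shows standard formulations of these metrics. The paper's actual evaluation numbers are not reproduced here.

```python
# Standard mocap error metrics (for context only). `pred` and `gt` are
# joint positions of shape (frames, joints, 3), in meters.
import numpy as np

def mpjpe_cm(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-joint position error, in centimeters."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean() * 100.0)

def joint_angle_error_deg(pred_rot: np.ndarray, gt_rot: np.ndarray) -> float:
    """Mean geodesic angle (degrees) between predicted and ground-truth
    joint rotation matrices of shape (frames, joints, 3, 3)."""
    rel = np.einsum("...ij,...kj->...ik", pred_rot, gt_rot)  # pred @ gt^T
    trace = np.trace(rel, axis1=-2, axis2=-1)
    ang = np.arccos(np.clip((trace - 1.0) / 2.0, -1.0, 1.0))
    return float(np.degrees(ang).mean())
```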
We demonstrate applications of our Clothes-based MoCap system in various human-computer interaction scenarios, including virtual and augmented reality (VR/AR), rehabilitation, and fitness analysis, leveraging the system's robustness, accessibility, and comfort.
Applications of our approach: (a) Metaverse, (b) rehabilitation, (c) fitness analysis.