Enhancing Domain Generalization in 3D Human Pose Estimation: A Dual-Augmentor Framework

Authors

  • Aryan Gupta, University of Jaipur, India
  • Meera Patel, University of Jaipur, India

Abstract

Achieving robust 3D human pose estimation across diverse domains remains a significant challenge due to variations in environments, subjects, and capture conditions. This paper presents a Dual-Augmentor Framework designed to enhance domain generalization in 3D human pose estimation. The framework combines two complementary strategies: (1) a Data Augmentation Module, comprising a Style Augmentor and a Pose Augmentor, that diversifies training data through synthetic transformations and domain-specific variations, and (2) a Model Augmentation Module that employs an ensemble of models with varied architectures and training regimes to improve adaptability. Within the Data Augmentation Module, the Style Augmentor diversifies the appearance of training samples to simulate a range of visual conditions, while the Pose Augmentor generates realistic pose variations to enrich the pose distribution; together they yield a more robust training set from which the model can learn domain-invariant features. Extensive experiments on multiple benchmark datasets show that the Dual-Augmentor Framework significantly reduces cross-domain estimation errors compared to state-of-the-art methods, providing a robust solution for deploying 3D human pose estimation models in real-world applications with varying domain characteristics.
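
For concreteness, the sketch below shows one way the two augmentors could be composed at the data level, assuming a PyTorch pipeline and a 2D-to-3D lifting estimator. The class names, the yaw-rotation pose perturbation, and the orthographic re-projection are illustrative assumptions made for this example, not the implementation described in the paper.

    import math
    import random
    import torch

    class StyleAugmentor:
        """Appearance-level augmentation: perturbs image statistics while
        leaving the pose annotation untouched (assumes pixel values in [0, 1])."""
        def __call__(self, image: torch.Tensor) -> torch.Tensor:
            # image: (3, H, W); channel-wise brightness/contrast-style jitter
            # as a stand-in for richer appearance variation.
            scale = 1.0 + 0.1 * torch.randn(3, 1, 1)
            shift = 0.05 * torch.randn(3, 1, 1)
            return (image * scale + shift).clamp(0.0, 1.0)

    class PoseAugmentor:
        """Pose-level augmentation: rotates and rescales the 3D skeleton,
        then re-derives the 2D keypoints so the input/target pair for a
        2D-to-3D lifting model stays consistent."""
        def __call__(self, pose_3d: torch.Tensor):
            # pose_3d: (J, 3) root-relative joint positions.
            theta = random.uniform(-math.pi, math.pi)      # random yaw angle
            c, s = math.cos(theta), math.sin(theta)
            rot_y = torch.tensor([[c, 0.0, s],
                                  [0.0, 1.0, 0.0],
                                  [-s, 0.0, c]])
            scale = 1.0 + random.gauss(0.0, 0.05)          # mild global rescale
            aug_3d = (pose_3d @ rot_y.T) * scale
            aug_2d = aug_3d[:, :2]                         # simplified orthographic projection
            return aug_2d, aug_3d

In such a setup, the style-augmented image keeps its original pose labels, while the pose-augmented 2D/3D pair is regenerated jointly, so both augmentation paths preserve input-target consistency.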

Published

2024-05-12

Section

Articles