GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields

Published in arXiv, 2024

Abstract

Recent advances in Neural Radiance Fields (NeRF) have demonstrated promising results in 3D scene representation, including 3D human representation. However, these representations often lack information about the underlying human pose and structure, which is crucial for AR/VR applications and games. In this paper, we introduce a novel approach, termed GHNeRF, designed to address these limitations by learning the 2D/3D joint locations of human subjects within a NeRF representation. GHNeRF uses a pre-trained 2D encoder to extract essential human features from 2D images, which are then incorporated into the NeRF framework to encode human biomechanical features. This allows our network to learn biomechanical features, such as joint locations, simultaneously with human geometry and texture. To assess the effectiveness of our method, we conduct a comprehensive comparison with state-of-the-art human NeRF techniques and joint estimation algorithms. Our results show that GHNeRF achieves state-of-the-art results in near real-time.

Download paper here

Recommended citation:

@misc{dey2024ghnerf,
      title={GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields},
      author={Arnab Dey and Di Yang and Rohith Agaram and Antitza Dantcheva and Andrew I. Comport and Srinath Sridhar and Jean Martinet},
      year={2024},
      eprint={2404.06246},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}