Re-HOLD: Video Hand Object Interaction Reenactment via adaptive Layout-instructed Diffusion Model

CVPR 2025

Yingying Fan1, Quanwei Yang2, Kaisiyuan Wang3*, Hang Zhou3, Yingying Li3, Haocheng Feng3, Errui Ding3, Yu Wu1*, Jingdong Wang3
1School of Computer Science, Wuhan University, 2University of Science and Technology of China, 3Department of Computer Vision Technology (VIS), Baidu Inc.

An overview of our proposed framework, Re-HOLD.

We propose a two-branch framework consisting of a Reference U-Net and a Denoising U-Net. The Reference U-Net takes a reference object image and encodes its texture, while the Denoising U-Net takes the noisy latent and layout guidance as input for the diffusion process. To enhance the quality of HOI generation, we adopt an HOI Restoration Module together with a hand memory bank for restoring hand information; an object memory bank is designed to store fine-grained object information.
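The data flow above can be sketched as follows. This is a minimal illustrative sketch, not the actual implementation: the feature dimensions, the nearest-neighbour memory-bank retrieval, and all function bodies are placeholder assumptions standing in for the real U-Nets and attention-based modules.

```python
import numpy as np

# Hypothetical dimensions -- illustrative only, not taken from the paper.
LATENT_C, H, W = 4, 32, 32
FEAT_DIM = 8

def reference_unet(ref_object_img):
    """Stand-in for the Reference U-Net: encodes object texture into a feature."""
    # Placeholder: a global-average "texture" feature per channel.
    return ref_object_img.mean(axis=(1, 2))  # shape: (FEAT_DIM,)

def retrieve(memory_bank, query):
    """Nearest-neighbour lookup, a stand-in for memory-bank retrieval."""
    dists = np.linalg.norm(memory_bank - query, axis=1)
    return memory_bank[np.argmin(dists)]

def denoising_unet(noisy_latent, layout_guidance, ref_feat, hand_feat, obj_feat):
    """Stand-in for the Denoising U-Net conditioned on layout and features."""
    cond = (ref_feat + hand_feat + obj_feat)[:LATENT_C, None, None]
    # Placeholder "denoising step": inject layout and conditioning signals.
    return noisy_latent + 0.1 * layout_guidance + 0.1 * cond

# One conceptual denoising step.
rng = np.random.default_rng(0)
ref_img = rng.normal(size=(FEAT_DIM, 64, 64))     # reference object image
hand_bank = rng.normal(size=(16, FEAT_DIM))       # hand memory bank
obj_bank = rng.normal(size=(16, FEAT_DIM))        # object memory bank
latent = rng.normal(size=(LATENT_C, H, W))        # noisy latent
layout = rng.normal(size=(LATENT_C, H, W))        # layout guidance maps

ref_feat = reference_unet(ref_img)
hand_feat = retrieve(hand_bank, ref_feat)
obj_feat = retrieve(obj_bank, ref_feat)
out = denoising_unet(latent, layout, ref_feat, hand_feat, obj_feat)
print(out.shape)  # (4, 32, 32)
```

The sketch only conveys the conditioning topology: texture features from the reference branch and retrieved memory-bank entries jointly condition the denoising branch, which additionally receives layout guidance.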

Abstract

Current digital human studies focusing on lip-syncing and body movement are no longer sufficient to meet growing industrial demand, while human video generation techniques that support interaction with real-world environments (e.g., objects) have not been well investigated. Human hand synthesis is already an intricate problem; generating objects in contact with hands, along with their interactions, is even more challenging, especially when the objects vary noticeably in size and shape.

To cope with these issues, we present a novel video Reenactment framework focusing on Human-Object Interaction (HOI) via an adaptive Layout-instructed Diffusion model (Re-HOLD). Our key insight is to employ specialized layout representations for hands and objects, respectively. These representations enable effective disentanglement of hand modeling and object adaptation to diverse motion sequences. To further improve the generation quality of HOI, we design an interactive textural enhancement module for both hands and objects by introducing two independent memory banks. We also propose a layout-adjusting strategy for the cross-object reenactment scenario, which adaptively corrects unreasonable layouts caused by diverse object sizes during inference.

Comprehensive qualitative and quantitative evaluations demonstrate that our proposed framework significantly outperforms existing methods.

Video

BibTeX

@inproceedings{fan2025ReHOLD,
  author    = {Yingying Fan and Quanwei Yang and Kaisiyuan Wang and Hang Zhou and Yingying Li and Haocheng Feng and Errui Ding and Yu Wu and Jingdong Wang},
  title     = {Re-HOLD: Video Hand Object Interaction Reenactment via adaptive Layout-instructed Diffusion Model},
  booktitle = {CVPR},
  year      = {2025},
}