
Overview Video

Abstract

We tackle the challenging problem of human-object interaction (HOI) detection. Existing methods either recognize the interaction of each human-object pair in isolation or perform joint inference based on complex appearance-based features. In this paper, we leverage an abstract spatial-semantic representation to describe each human-object pair and aggregate the contextual information of the scene via a dual relation graph (one human-centric and one object-centric). Our proposed dual relation graph effectively captures discriminative cues from the scene to resolve ambiguity from local predictions. Our model is conceptually simple and leads to favorable results compared to the state-of-the-art HOI detection algorithms on two large-scale benchmark datasets.

Results

More iterations of feature aggregation lead to more accurate predictions. The human-centric and object-centric subgraphs in the spatial-semantic stream propagate contextual information to produce increasingly accurate HOI predictions.
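The iterative aggregation described above can be sketched in code. The snippet below is a minimal illustrative sketch, not the authors' implementation: it assumes each human-object pair has a feature vector, builds the human-centric subgraph from pairs sharing the same human and the object-centric subgraph from pairs sharing the same object, and alternates attention-weighted message passing over the two subgraphs for a few iterations. The dot-product attention and residual update are simplifying assumptions for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate(node_feat, neighbor_feats):
    """Attention-weighted aggregation of neighbor features onto one node.
    Dot-product attention and a residual update are illustrative choices,
    not necessarily the paper's exact formulation."""
    if len(neighbor_feats) == 0:
        return node_feat
    scores = np.array([node_feat @ n for n in neighbor_feats])
    weights = softmax(scores)
    context = sum(w * n for w, n in zip(weights, neighbor_feats))
    return node_feat + context  # residual update with aggregated context

def dual_relation_graph(pair_feats, human_ids, object_ids, num_iters=2):
    """Iteratively refine per-pair spatial-semantic features.
    Human-centric subgraph: pairs sharing the same human exchange messages.
    Object-centric subgraph: pairs sharing the same object exchange messages."""
    feats = [f.copy() for f in pair_feats]
    for _ in range(num_iters):
        new_feats = []
        for i, f in enumerate(feats):
            same_human = [feats[j] for j in range(len(feats))
                          if j != i and human_ids[j] == human_ids[i]]
            same_object = [feats[j] for j in range(len(feats))
                           if j != i and object_ids[j] == object_ids[i]]
            f = aggregate(f, same_human)    # human-centric pass
            f = aggregate(f, same_object)   # object-centric pass
            new_feats.append(f)
        feats = new_feats
    return feats
```

Running more iterations lets contextual evidence from other pairs in the scene (e.g. the same person riding, holding, or looking at different objects) flow into each pair's representation before the final interaction classifier is applied.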


HOI detection results on V-COCO test set.


HOI detection results on HICO-DET test set.

Paper


DRG: Dual Relation Graph for Human-Object Interaction Detection

Citation

Chen Gao, Jiarui Xu, Yuliang Zou, and Jia-Bin Huang. "DRG: Dual Relation Graph for Human-Object Interaction Detection", in European Conference on Computer Vision (ECCV), 2020.

BibTex

@inproceedings{Gao-ECCV-DRG,
  author    = {Gao, Chen and Xu, Jiarui and Zou, Yuliang and Huang, Jia-Bin},
  title     = {DRG: Dual Relation Graph for Human-Object Interaction Detection},
  booktitle = {Proc. European Conference on Computer Vision (ECCV)},
  year      = {2020},
}

Acknowledgments

This code follows the implementation architecture of maskrcnn-benchmark, iCAN, and No-Frills. We thank Meng-Li Shih for providing this website template.