arXiv:2603.07966

Listening with the Eyes: Benchmarking Egocentric Co-Speech Grounding across Space and Time

Published on Mar 9

Abstract

In situated collaboration, speakers often use intentionally underspecified deictic commands (e.g., "pass me that"), whose referent becomes identifiable only by aligning speech with a brief co-speech pointing stroke. However, many embodied benchmarks admit language-only shortcuts, allowing MLLMs to perform well without learning the audio-visual alignment required by deictic interaction. To bridge this gap, we introduce Egocentric Co-Speech Grounding (EcoG), where grounding is executable only if an agent jointly predicts What, Where, and When. To operationalize this, we present EcoG-Bench, an evaluation-only bilingual (EN/ZH) diagnostic benchmark of 811 egocentric clips with dense spatial annotations and millisecond-level stroke supervision, organized under a Progressive Cognitive Evaluation protocol. Benchmarking state-of-the-art MLLMs reveals a severe executability gap: while human subjects achieve near-ceiling performance on EcoG-Bench (96.9% strict Eco-Accuracy), the best native video-audio setting remains low (Gemini-3-Pro: 17.0%). Moreover, in a diagnostic ablation, replacing the native video-audio interface with timestamped frame samples and externally verified ASR (with word-level timing) substantially improves the same model (17.0% to 42.9%). Overall, EcoG-Bench provides a strict, executable testbed for event-level speech-gesture binding and suggests that multimodal interfaces may bottleneck the observability of temporal alignment cues, independently of model reasoning.
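
The abstract does not spell out how strict Eco-Accuracy is scored beyond requiring the What, Where, and When predictions to be jointly correct. A minimal sketch, assuming a referent label for What, a bounding box for Where, and a stroke interval for When, scores a clip as a hit only when all three components match; the field names, thresholds, and helpers below are illustrative assumptions, not the paper's evaluation code.

from dataclasses import dataclass

@dataclass
class EcoExample:
    """One clip's annotation or prediction. Field names are assumed, not from the paper."""
    what: str                                  # referent label, e.g. "red mug"
    where: tuple[float, float, float, float]   # bounding box (x1, y1, x2, y2)
    when: tuple[float, float]                  # pointing-stroke interval in seconds

def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def temporal_iou(a, b):
    """Intersection-over-union of two (start, end) intervals in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def strict_eco_accuracy(preds, golds, box_thr=0.5, time_thr=0.5):
    """Fraction of clips where What, Where, AND When all match.
    The 0.5 thresholds are placeholders; the paper's criteria may differ."""
    hits = sum(
        p.what == g.what
        and box_iou(p.where, g.where) >= box_thr
        and temporal_iou(p.when, g.when) >= time_thr
        for p, g in zip(preds, golds)
    )
    return hits / len(golds)

The all-or-nothing conjunction is what makes the metric "strict": partial credit on What or Where alone never counts, which is consistent with the large human-model gap reported above.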

Get this paper in your agent:

hf papers read 2603.07966
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
