SWITCH: Benchmarking Interaction and Verification on Real-World Interfaces in Lifelong Embodied Agents
⚠️ Dataset Note: This repository hosts the 30% public subset of the full SWITCH-Basic v1 benchmark. It is intended for public exploration, preliminary evaluation, and community feedback.
Overview
SWITCH-Basic covers the collection and annotation of real-world Tangible Computer Interface (TCI) interaction data, which we systematically structure into five distinct tasks. These tasks are designed to evaluate models across three crucial capability dimensions: Perception/Spatial Reasoning, Causal Reasoning/Planning, and Verification.
We conduct a comprehensive evaluation of state-of-the-art large multimodal models (LMMs) on this benchmark, providing a detailed analysis of their strengths and limitations and thereby offering insights to guide future model development for real-world interactive tasks. Furthermore, we leverage the benchmark to evaluate advanced generative models such as Veo3. By comparing generated videos against ground truth, we show that current generative models still have significant room for improvement in logical consistency and fine-grained interaction for real-world use, underscoring the importance of SWITCH's target scenarios.
Key Tasks Supported
This dataset provides annotations to support the following core tasks:
- Task-Aware Visual Question Answering (VQA): Composed of two complementary sub-tasks:
- (a) UI State Recognition: assessing whether the model can recognize and describe the current state of TCI elements within the scene.
- (b) Goal-Oriented Reasoning: testing whether the model can interpret the purpose and outcome of actions, reasoning about whether these interactions successfully achieve the intended task goals.
- Semantic UI Comprehension: Tests whether a model can accurately localize and interpret actionable UI elements in cluttered or dynamic settings, reasoning about their spatial and functional relationships while inferring human intent.
- Action Generation: Evaluates a model's ability to infer intent and plan executable, context-aware action sequences.
- (a) UI Action Identification: Detect the relevant interaction region, recognize its affordances, and predict the appropriate mode of interaction.
- (b) Action Execution Planning: Generate the necessary physical actions to perform the intended TCI operation.
- State Transition Prediction: Evaluates causal reasoning and short-term prediction.
- (a) UI-State Transition: Predict changes in the visual or functional state of TCI elements after an action.
- (b) Environment-State Transition: Predict corresponding updates in the surrounding physical or visual environment.
- (c) Coupled Transition: Reason about interdependent updates where TCI and environment states jointly change.
- Result Verification:
- (a) Verification Planning: Test whether the model can infer what actions or checks are required to verify the outcome of a previous operation.
- (b) Expected State Prediction: Assess whether the model can predict what the expected state should look like after a successful interaction.
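All five tasks above reduce to question answering over media inputs, so a minimal evaluation loop only needs to compare predicted answer choices against the annotated ones, grouped by task. The sketch below is illustrative only; the record keys (task, answer, prediction) are assumptions for the example, not the actual vqa.json schema:

```python
from collections import defaultdict

def per_task_accuracy(records):
    """Compute accuracy grouped by task name.

    Each record is a dict with hypothetical keys:
      'task'       - task identifier, e.g. 'action'
      'answer'     - ground-truth choice label
      'prediction' - the model's chosen label
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["task"]] += 1
        if r["prediction"] == r["answer"]:
            correct[r["task"]] += 1
    return {t: correct[t] / total[t] for t in total}

# Toy example with made-up records
records = [
    {"task": "action", "answer": "A", "prediction": "A"},
    {"task": "action", "answer": "B", "prediction": "C"},
    {"task": "ui_grounding", "answer": "D", "prediction": "D"},
]
print(per_task_accuracy(records))  # {'action': 0.5, 'ui_grounding': 1.0}
```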
Data Visualization (Local HTML Viewer)
To help researchers better understand the dataset structure and visually inspect samples without writing parsing scripts, we provide a local HTML viewer out of the box.
How to use:
- Clone or download this dataset repository to your local machine.
- Open viewer.html (for Chinese) or viewer_eng.html (for English) directly in your web browser.
- Browse the dataset conveniently!
Dataset Structure
The sub-dataset repository is organized as follows:
switchBasic_release0212/
├── action/ # Action Task
│ ├── img2txt/ # Modality input format
│ │ ├── imgs/ # Image assets for this specific task & format
│ │ └── vqa.json # Annotation file containing QA pairs and metadata
│ ├── img2video/
│ │ ├── imgs/ # Input image assets
│ │ ├── videos/ # Output video assets
│ │ └── vqa.json # Annotation file
│ ├── video2txt/ # Other modality formats...
│ └── video2video/
├── final_state/ # State Transition Prediction task
├── ui_grounding/ # Semantic UI Comprehension task
├── verification_action/ # Result Verification (Action) task
├── verification_state/ # Result Verification (State) task
├── vqa_state/ # Task-Aware VQA (State Recognition)
├── vqa_task/ # Task-Aware VQA (Goal-Oriented Reasoning)
├── viewer.html # Visualization (Local HTML Viewer) in Chinese
├── viewer_eng.html # Visualization (Local HTML Viewer) in English
└── README.md
Each task folder contains subfolders defining the input/output modalities (e.g., img2txt: image as the question, text choices as answers; img2video: image as the question, video choices as answers). Inside each of these format folders, you will find the corresponding media assets (imgs/, videos/) and the vqa.json file containing the detailed annotations.
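Given that layout, annotations can be loaded with plain JSON parsing, resolving media paths relative to the format folder that holds the vqa.json. The snippet below builds a tiny synthetic annotation file so it is runnable as-is; the field names (question, choices, answer, image) are assumptions for illustration and should be checked against the real vqa.json:

```python
import json
import os
import tempfile

# Build a tiny synthetic annotation file mirroring the layout above
# (field names are illustrative; inspect the real vqa.json for the schema).
root = tempfile.mkdtemp()
fmt_dir = os.path.join(root, "action", "img2txt")
os.makedirs(os.path.join(fmt_dir, "imgs"))
sample = [{"question": "Which button was pressed?",
           "choices": ["A", "B", "C", "D"],
           "answer": "B",
           "image": "imgs/0001.jpg"}]
with open(os.path.join(fmt_dir, "vqa.json"), "w") as f:
    json.dump(sample, f)

# Load the annotations and resolve each media path against the format folder
with open(os.path.join(fmt_dir, "vqa.json")) as f:
    annotations = json.load(f)
for item in annotations:
    item["image_path"] = os.path.join(fmt_dir, item["image"])
    print(item["question"], "->", item["answer"])
```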
Leaderboard & Full Set Evaluation
Please visit the SWITCH-Basic v1 Leaderboard Space for more details on the latest model results.
License
This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. It is restricted to academic research and non-commercial purposes only.
📖 Citation
If you refer to this version of the benchmark, please refer to it as SWITCH-Basic v1. If you use SWITCH scenarios or data in your research, please cite:
@article{switch2025,
  title={{SWITCH}: {B}enchmarking Modeling and Handling of Tangible Interfaces in Long-horizon Embodied Scenarios},
  author={Jieru Lin and Zhiwei Yu and B{\"o}rje F. Karlsson},
  journal={arXiv preprint arXiv:2511.17649},
  year={2025}
}
Contributing & Contact
We welcome contributions and feedback! Please feel free to submit issues or pull requests. For questions or inquiries, please reach out to the BAAI-Agents team.