Whom to Query for What: Adaptive Group Elicitation via Multi-Turn LLM Interactions
Abstract
An adaptive group elicitation framework combines LLM-based information gain scoring with graph neural networks to improve population-level predictions under budget constraints.
Eliciting information to reduce uncertainty about latent group-level properties from surveys and other collective assessments requires allocating limited questioning effort under real costs and missing data. Although large language models enable adaptive, multi-turn interactions in natural language, most existing elicitation methods optimize what to ask with a fixed respondent pool, and do not adapt respondent selection or leverage population structure when responses are partial or incomplete. To address this gap, we study adaptive group elicitation, a multi-round setting where an agent adaptively selects both questions and respondents under explicit query and participation budgets. We propose a theoretically grounded framework that combines (i) an LLM-based expected information gain objective for scoring candidate questions with (ii) heterogeneous graph neural network propagation that aggregates observed responses and participant attributes to impute missing responses and guide per-round respondent selection. This closed-loop procedure queries a small, informative subset of individuals while inferring population-level responses via structured similarity. Across three real-world opinion datasets, our method consistently improves population-level response prediction under constrained budgets, including a >12% relative gain on CES at a 10% respondent budget.
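The abstract's question-scoring objective is expected information gain (EIG): ask the question whose answer is expected to shrink posterior entropy over the latent group property the most. The sketch below is a minimal, self-contained illustration with a discrete prior and hand-written answer likelihoods; the paper's actual estimator is LLM-based, and the `prior`/`likelihoods` structures here are assumptions for demonstration only.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_information_gain(prior, likelihoods):
    """EIG of one candidate question about a latent group state z.

    prior: list, P(z) over latent states
    likelihoods: likelihoods[z][a] = P(answer a | state z)
    Returns H(prior) - E_a[H(posterior | a)].
    """
    n_answers = len(likelihoods[0])
    # Marginal probability of each answer under the prior.
    p_answer = [sum(prior[z] * likelihoods[z][a] for z in range(len(prior)))
                for a in range(n_answers)]
    # Expected posterior entropy after observing the answer.
    expected_posterior = 0.0
    for a in range(n_answers):
        if p_answer[a] == 0:
            continue
        posterior = [prior[z] * likelihoods[z][a] / p_answer[a]
                     for z in range(len(prior))]
        expected_posterior += p_answer[a] * entropy(posterior)
    return entropy(prior) - expected_posterior

# A question whose answers separate the states scores higher than
# one whose answers are uninformative about them.
prior = [0.5, 0.5]
informative = [[0.9, 0.1], [0.1, 0.9]]
uninformative = [[0.5, 0.5], [0.5, 0.5]]
print(expected_information_gain(prior, informative) >
      expected_information_gain(prior, uninformative))  # True
```

Under this objective, the agent scores each candidate question per round and asks the highest-EIG one, subject to the query budget.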
Community
🧠 When we try to understand a group's true preferences, the challenge is not only what to ask, but whom to query for what.
We study a new problem setting:
Adaptive Group Elicitation
Under real costs and missing data, the system must dynamically decide:
❓ Which question to ask
👥 Which individuals to query
🌐 How to leverage population structure to infer unobserved responses
Most prior work optimizes only question selection,
assuming a fixed respondent pool.
But in practice:
• 📊 Individuals differ in their contribution to uncertainty reduction
• 🔗 Population structure induces correlated responses
• 🧩 Observations are sparse and incomplete
We propose a framework that combines:
🧠 LLM-based expected information gain for scoring candidate questions
🌐 Heterogeneous GNN propagation to aggregate responses and attributes
🎯 Per-round adaptive respondent selection under explicit budgets
By querying a small, informative subset of individuals,
the model infers population-level responses through structured similarity.
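The loop described above can be sketched in miniature: impute the missing responses of unqueried people from their observed neighbors, then spend the round's respondent budget on the people whose neighborhoods carry the least observed evidence. The similarity-weighted averaging below is a simplified stand-in for the paper's heterogeneous GNN propagation, and all names here are illustrative, not the authors' API.

```python
def impute_and_select(responses, similarity, budget):
    """One round of a simplified adaptive group elicitation loop.

    responses: dict person -> observed answer (in [0, 1])
    similarity: similarity[i][j] = pairwise similarity weight
    budget: number of additional respondents to query this round
    """
    n = len(similarity)
    observed = set(responses)
    imputed, evidence = {}, {}
    for i in range(n):
        if i in observed:
            continue
        weights = [(similarity[i][j], responses[j]) for j in observed]
        total = sum(w for w, _ in weights)
        # How much observed evidence flows to person i via similarity.
        evidence[i] = total
        # Similarity-weighted average of observed neighbors
        # (stand-in for GNN message passing); 0.5 if no evidence.
        imputed[i] = (sum(w * a for w, a in weights) / total
                      if total > 0 else 0.5)
    # Query the people whose responses we can least infer from others.
    to_query = sorted(evidence, key=evidence.get)[:budget]
    return imputed, to_query

# Person 0 is observed; person 2 is weakly connected to everyone
# observed, so it is the most informative one to query next.
similarity = [[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]]
imputed, to_query = impute_and_select({0: 1.0}, similarity, budget=1)
print(to_query)  # the person with the weakest observed neighborhood
```

Each round, the selected respondents' answers are added to `responses`, and imputation plus selection repeats until the participation budget is spent.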
Core insight:
Group-level elicitation is a budget-constrained, structure-aware uncertainty reduction problem,
not merely a survey design task.
📄 Paper: https://arxiv.org/pdf/2602.14279
💻 Code: https://github.com/ZDCSlab/Group-Adaptive-Elicitation
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Cold-Start Personalization via Training-Free Priors from Structured World Models (2026)
- Don't Always Pick the Highest-Performing Model: An Information Theoretic View of LLM Ensemble Selection (2026)
- The PROPER Approach to Proactivity: Benchmarking and Advancing Knowledge Gap Navigation (2026)
- Do Reasoning Models Ask Better Questions? A Formal Information-Theoretic Analysis on Multi-Turn LLM Games (2026)
- Optimal Budgeted Adaptation of Large Language Models (2026)
- Causal Preference Elicitation (2026)
- Think When Needed: Model-Aware Reasoning Routing for LLM-based Ranking (2026)