🪩 DISCO: Diversifying Sample Condensation for Efficient Model Evaluation

Tübingen AI Center, University of Tübingen, Parameter Lab

TLDR: Evaluate your LLMs on benchmarks like MMLU at 1% of the cost.

Banner image

Imbalance: More evaluation budget is spent on less informative samples in test sets.

Abstract

Evaluating modern machine learning models has become prohibitively expensive. Benchmarks such as LMMs-Eval and HELM demand thousands of GPU hours per model. Costly evaluation reduces inclusivity, slows the cycle of innovation, and worsens environmental impact. The typical approach follows two steps. First, select an anchor subset of data. Second, train a mapping from the accuracy on this subset to the final test result. The drawback is that anchor selection depends on clustering, which can be complex and sensitive to design choices. We argue that promoting diversity among samples is not essential; what matters is to select samples that maximise diversity in model responses. Our method, Diversifying Sample Condensation (DISCO), selects the top-k samples on which models disagree the most. It relies on greedy, sample-wise statistics rather than global clustering, which makes the approach conceptually simpler. From a theoretical view, inter-model disagreement provides an information-theoretically optimal rule for such greedy selection. Empirically, DISCO outperforms prior methods, achieving state-of-the-art results in performance prediction across MMLU, HellaSwag, Winogrande, and ARC.

Problem statement: efficient evaluation

Problem overview. We aim to select an evaluation set that is much smaller than the original one, while keeping the estimated model performance as close as possible to the performance on the full test set.
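To make "as close as possible" concrete, the results below report two quantities: the mean absolute error between estimated and true accuracies (in percentage points) and the Spearman rank correlation between the estimated and true model rankings. A minimal sketch of how these could be computed; the function and variable names are illustrative, not from the DISCO codebase:

import numpy as np
from scipy.stats import spearmanr

def estimation_quality(true_acc, est_acc):
    # true_acc, est_acc: per-model accuracies in [0, 1], measured on the
    # full test set vs. estimated from the condensed subset.
    true_acc = np.asarray(true_acc, dtype=float)
    est_acc = np.asarray(est_acc, dtype=float)
    mae_pp = 100.0 * np.mean(np.abs(true_acc - est_acc))  # error in %p
    rank_corr, _ = spearmanr(true_acc, est_acc)           # ranking agreement
    return mae_pp, rank_corr

# Example: three models, full-set accuracy vs. subset-estimated accuracy.
print(estimation_quality([0.62, 0.71, 0.55], [0.60, 0.73, 0.57]))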

Method details

DISCO overview. First, we select the most informative samples from the evaluation dataset. Second, we predict the performance of unseen models from their signatures, i.e., their outputs on the selected samples.
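A minimal sketch of this two-step pipeline, assuming a pool of source models whose per-sample predictions and full-test accuracies are already available. The disagreement score below (entropy of the answer distribution across models) is an illustrative stand-in for the paper's Predictive Diversity Score (PDS), not its exact definition, and all names are hypothetical:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def select_topk_by_disagreement(preds, k=100):
    # preds: (n_models, n_samples) array of predicted answer indices from
    # the source models. Each sample is scored by how much the models
    # disagree on it; the top-k samples form the condensed set.
    n_models, n_samples = preds.shape
    scores = np.empty(n_samples)
    for j in range(n_samples):
        _, counts = np.unique(preds[:, j], return_counts=True)
        p = counts / n_models
        scores[j] = -(p * np.log(p)).sum()   # high entropy = high disagreement
    return np.argsort(scores)[::-1][:k]

def fit_performance_predictor(correct, acc_full, anchor_idx):
    # correct: (n_models, n_samples) 0/1 correctness matrix of the source
    # models; acc_full: their accuracies on the full test set. A model's
    # "signature" is its 0/1 outputs on the selected samples.
    signatures = correct[:, anchor_idx]
    return KNeighborsRegressor(n_neighbors=5).fit(signatures, acc_full)

# A new model is then run only on the k selected samples, and
# predictor.predict(new_signature[None, :]) estimates its full-benchmark accuracy.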

Results: language benchmarks

DISCO compression of the MMLU, HellaSwag, Winogrande and ARC datasets. We compare DISCO against baselines. For each dataset, we reduce the test set to 100 data points, achieving inference-cost reductions of up to 99.3% (MMLU) and 99.0% (HellaSwag). The main metrics are the mean absolute error (MAE), measured in %p difference in accuracy, and the Spearman rank correlation (Rank) between the true and the estimated model ranking. Main observations: (1) The model signature is an effective basis for performance estimation. (2) Condensing the dataset into the top-k diversifying samples (e.g. according to the Predictive Diversity Score, PDS) lets DISCO achieve the state of the art in test-set compression.

Results: ImageNet

DISCO compression of the ImageNet validation set. We evaluate how well DISCO generalises to the computer-vision domain. The main metrics are the mean absolute error (MAE), measured in %p difference in accuracy, and the Spearman rank correlation (Rank) between the true and the estimated model ranking. Main observations: (1) As in the language experiments, the model signature is an effective basis for performance estimation. (2) Adding PDS-based selection on top improves performance further.

Results: compression rates

MMLU performance estimation vs. compression rate. We report the mean absolute error (MAE), measured in %p difference in accuracy, and the Spearman rank correlation between the true and the estimated model ranking. At 100 samples, the results match the main results table. Main observations: DISCO achieves a better efficiency–precision trade-off than the baselines across compression rates. At extreme compression rates, kNN is a better choice of regressor than the random forest (RF).
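The kNN-versus-RF comparison above comes down to which regressor maps signatures to accuracy. A self-contained toy sketch of swapping the two in scikit-learn; the data and hyperparameters are synthetic and illustrative, not those used in the paper:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
signatures = rng.integers(0, 2, size=(80, 100)).astype(float)  # 80 toy "models", 100 samples
acc_full = signatures.mean(axis=1)                             # toy stand-in for full-set accuracy

for name, reg in [("kNN", KNeighborsRegressor(n_neighbors=5)),
                  ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    reg.fit(signatures[:60], acc_full[:60])                    # fit on "source" models
    err = 100 * np.abs(reg.predict(signatures[60:]) - acc_full[60:]).mean()
    print(f"{name}: MAE = {err:.2f} %p on held-out toy models")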

BibTeX


@misc{rubinstein2025discodiversifyingsamplecondensation,
    title={DISCO: Diversifying Sample Condensation for Efficient Model Evaluation},
    author={Alexander Rubinstein and Benjamin Raible and Martin Gubri and Seong Joon Oh},
    year={2025},
    eprint={2510.07959},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2510.07959},
}