MLSys 2026 FlashInfer-Bench Challenge Dataset

This repository contains the FlashInfer-Bench dataset for the MLSys 2026 Kernel Generation Challenge.

This dataset is intended to be used with the FlashInfer-Bench benchmark system.

It follows the FlashInfer Trace Schema. To use the dataset in the competition, please refer to our starter kit.

Download

Use the following commands to download the dataset:

git lfs install
git clone https://huggingface.co/datasets/flashinfer-ai/mlsys26-contest

Set the environment variable so that FlashInfer-Bench can find the dataset:

export FIB_DATASET_PATH=/path/to/mlsys26-contest
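
If you prefer to script the download in Python, the huggingface_hub client can do the same thing. The snippet below is a minimal sketch and assumes the huggingface_hub package is installed; it is not part of the official starter kit.

import os
from huggingface_hub import snapshot_download

# Download the dataset repository (equivalent to the git clone above).
local_dir = snapshot_download(
    repo_id="flashinfer-ai/mlsys26-contest",
    repo_type="dataset",
)

# Point FlashInfer-Bench at the local copy for the current process,
# equivalent to the export FIB_DATASET_PATH=... line above.
os.environ["FIB_DATASET_PATH"] = local_dir
print(local_dir)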

Tasks

This dataset contains the definitions and workloads for these kernels:

  • Fused Mixture of Experts (MoE)
  • Gated Delta Network (GDN)
  • DeepSeek Sparse Attention (DSA)

Dataset Structure

The dataset is organized as follows:

mlsys26-contest/
├── definitions/
└── workloads/
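
To check that the clone is complete, a short Python sketch can walk both directories, assuming FIB_DATASET_PATH is set as shown in the Download section:

import os
from pathlib import Path

# Walk the two directories and list every file, relative to the dataset root.
# Assumes FIB_DATASET_PATH points at the cloned mlsys26-contest directory.
root = Path(os.environ["FIB_DATASET_PATH"])
for subdir in ("definitions", "workloads"):
    for path in sorted((root / subdir).rglob("*")):
        if path.is_file():
            print(path.relative_to(root))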

These components are provided in the dataset:

  • Definition: describes the input, output, and computation logic of a kernel task.
  • Workload: describes the inputs to a definition captured during real inference. Workloads are used to benchmark the Solution you provide.

During benchmarking, these components should be provided or generated:

  • Solution: provided by participants, your implementation of the kernel task.
  • Trace: generated by FlashInfer-Bench, the performance and correctness results of your solution on the workloads.
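
Before writing a Solution, it can help to peek at what a workload record contains. The sketch below is only illustrative: it assumes the workload files are JSON or JSON-Lines documents (in line with the FlashInfer Trace Schema) and prints the top-level keys of the first record it finds.

import json
import os
from pathlib import Path

# Peek at a single workload file. The .json extension and the JSON /
# JSON-Lines layout are assumptions about the on-disk format; adjust
# this if the files are stored differently.
root = Path(os.environ["FIB_DATASET_PATH"])
files = sorted((root / "workloads").rglob("*.json"))
if not files:
    raise SystemExit("no .json files found under workloads/")

text = files[0].read_text()
try:
    record = json.loads(text)                  # a single JSON document
except json.JSONDecodeError:
    record = json.loads(text.splitlines()[0])  # JSON Lines: take the first record
if isinstance(record, list) and record:
    record = record[0]                         # JSON array: take the first record

print(files[0].relative_to(root))
print(list(record) if isinstance(record, dict) else record)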