Commit 7d6ee37 · 1 Parent(s): 6b53e92
Committed by davanstrien (HF Staff) and Claude Opus 4.5

Add finepdfs-stats.py - Polars streaming aggregation demo


Computes aggregate statistics on FinePDFs datasets using Polars
streaming without downloading the full dataset.

Features:
- Supports both finepdfs-edu (49.5M rows) and finepdfs (476M rows)
- --lang flag for language+script selection (70+ languages)
- --show-plan to display Polars query optimization
- --limit for quick testing
- Uploads results to HF Hub with auto-generated dataset_info
- Timing summary included in output

Demonstrates polars#25521 which reduced API calls from 139 → 1.
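Under the hood the script just builds a lazy Polars query over the Hub-hosted Parquet files and collects it with the streaming engine. A minimal sketch of that pattern (the hf:// glob and column names follow the finepdfs-edu layout the script targets; it is not a substitute for the full script below):

```python
import polars as pl

# Lazily scan one language subset straight from the Hub; nothing is downloaded
# up front. The glob mirrors the path the script builds for --lang eng_Latn.
lf = pl.scan_parquet(
    "hf://datasets/HuggingFaceFW/finepdfs-edu/data/eng_Latn/train/*.parquet"
)

# Aggregate metadata columns only: projection pushdown means the large `text`
# column is never read, and the streaming engine keeps memory use flat.
stats = (
    lf.group_by("language")
    .agg(
        pl.len().alias("doc_count"),
        pl.col("token_count").sum().alias("total_tokens"),
    )
    .collect(engine="streaming")
)
print(stats)
```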

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>

Files changed (1)
  1. finepdfs-stats.py +543 -0
finepdfs-stats.py ADDED
@@ -0,0 +1,543 @@
+ # /// script
+ # requires-python = ">=3.12"
+ # dependencies = [
+ #     "polars>=1.31.0",
+ #     "huggingface-hub",
+ #     "datasets",
+ # ]
+ # ///
+ """
+ Compute aggregate statistics on FinePDFs datasets using Polars streaming.
+
+ Demonstrates the new Polars HF Hub integration (polars#25521) which reduces
+ API calls from 139 → 1 for datasets like finepdfs-edu, enabling efficient
+ streaming aggregation without downloading the full dataset.
+
+ Supported datasets:
+ - HuggingFaceFW/finepdfs-edu (49.5M rows, 350B tokens) - educational subset
+ - HuggingFaceFW/finepdfs (476M rows, 3T tokens) - full dataset
+
+ This script computes:
+ - Per-language statistics (doc count, token totals, avg LID scores)
+ - Per-extractor statistics
+ - Per-dump statistics
+ - Global summary metrics
+
+ The result is a small summary DataFrame that can be uploaded as a new dataset.
+
+ Example usage:
+     # List available language+script combinations
+     uv run finepdfs-stats.py --list-languages
+
+     # Compute stats for English (default: finepdfs-edu)
+     uv run finepdfs-stats.py
+
+     # Process French documents
+     uv run finepdfs-stats.py --lang fra_Latn
+
+     # Use full finepdfs dataset (476M rows)
+     uv run finepdfs-stats.py --source-dataset HuggingFaceFW/finepdfs
+
+     # Show query plan before execution
+     uv run finepdfs-stats.py --show-plan --limit 1000
+
+     # Limit to first N rows for testing
+     uv run finepdfs-stats.py --limit 10000
+
+     # Save results and upload to HF
+     uv run finepdfs-stats.py --output-repo username/finepdfs-edu-stats
+
+     # Run on HF Jobs (CPU is sufficient, no GPU needed)
+     hf jobs uv run finepdfs-stats.py \\
+         -s HF_TOKEN \\
+         -e HF_XET_HIGH_PERFORMANCE=1 \\
+         -- --output-repo username/finepdfs-edu-stats
+
+     # Or run from a URL
+     hf jobs uv run \\
+         -s HF_TOKEN \\
+         -e HF_XET_HIGH_PERFORMANCE=1 \\
+         "https://huggingface.co/datasets/uv-scripts/data-stats/raw/main/finepdfs-stats.py" \\
+         -- --output-repo username/finepdfs-edu-stats
+
+ Why Polars scan_parquet?
+ - Lazy evaluation: builds query plan without loading data
+ - Streaming execution: processes data in chunks, constant memory
+ - Native HF Hub support: hf://datasets/... paths just work
+ - Optimized API calls: PR #25521 reduced API calls 10-100x for HF datasets
+
+ Performance tips:
+ - Set HF_XET_HIGH_PERFORMANCE=1 to maximize network/disk utilization
+ - Use --limit for quick tests before running on full dataset
+ - Use --show-plan to see Polars query optimization (projection pushdown)
+ """
+
+ import argparse
+ import logging
+ import os
+ import sys
+ import time
+ from pathlib import Path
+
+ import polars as pl
+ from datasets import Dataset
+ from huggingface_hub import HfApi, create_repo, list_repo_tree, login
+
+ logging.basicConfig(
+     level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
+ )
+ logger = logging.getLogger(__name__)
+
+ # Common language+script codes for finepdfs-edu
+ COMMON_LANGUAGES = {
+     "eng_Latn": "English (Latin script)",
+     "fra_Latn": "French (Latin script)",
+     "deu_Latn": "German (Latin script)",
+     "spa_Latn": "Spanish (Latin script)",
+     "por_Latn": "Portuguese (Latin script)",
+     "ita_Latn": "Italian (Latin script)",
+     "nld_Latn": "Dutch (Latin script)",
+     "pol_Latn": "Polish (Latin script)",
+     "rus_Cyrl": "Russian (Cyrillic script)",
+     "zho_Hans": "Chinese (Simplified)",
+     "zho_Hant": "Chinese (Traditional)",
+     "jpn_Jpan": "Japanese",
+     "kor_Hang": "Korean",
+     "ara_Arab": "Arabic",
+     "hin_Deva": "Hindi (Devanagari)",
+ }
+
+
+ def list_available_languages(dataset_id: str) -> list[str]:
+     """List available language subsets in the dataset."""
+     try:
+         tree = list_repo_tree(dataset_id, path_in_repo="data", repo_type="dataset")
+         languages = [
+             item.path.replace("data/", "")
+             for item in tree
+             if item.path.startswith("data/")
+             and "/" not in item.path.replace("data/", "")
+         ]
+         return sorted(languages)
+     except Exception as e:
+         logger.warning(f"Could not list languages: {e}")
+         return list(COMMON_LANGUAGES.keys())
+
+
+ def compute_language_stats(df: pl.LazyFrame) -> pl.DataFrame:
+     """Compute per-language statistics."""
+     return (
+         df.group_by("language")
+         .agg(
+             pl.len().alias("doc_count"),
+             pl.col("token_count").sum().alias("total_tokens"),
+             pl.col("token_count").mean().alias("avg_tokens"),
+             pl.col("token_count").median().alias("median_tokens"),
+             pl.col("token_count").min().alias("min_tokens"),
+             pl.col("token_count").max().alias("max_tokens"),
+             pl.col("page_average_lid_score").mean().alias("avg_lid_score"),
+             pl.col("is_truncated").sum().alias("truncated_count"),
+             pl.col("minhash_cluster_size").mean().alias("avg_cluster_size"),
+             pl.col("duplicate_count").sum().alias("total_duplicates"),
+         )
+         .sort("doc_count", descending=True)
+         .collect(engine="streaming")
+     )
+
+
+ def compute_extractor_stats(df: pl.LazyFrame) -> pl.DataFrame:
+     """Compute per-extractor statistics."""
+     return (
+         df.group_by("extractor")
+         .agg(
+             pl.len().alias("doc_count"),
+             pl.col("token_count").sum().alias("total_tokens"),
+             pl.col("token_count").mean().alias("avg_tokens"),
+             pl.col("is_truncated").sum().alias("truncated_count"),
+             pl.col("page_average_lid_score").mean().alias("avg_lid_score"),
+         )
+         .sort("doc_count", descending=True)
+         .collect(engine="streaming")
+     )
+
+
+ def compute_dump_stats(df: pl.LazyFrame) -> pl.DataFrame:
+     """Compute per-dump statistics."""
+     return (
+         df.group_by("dump")
+         .agg(
+             pl.len().alias("doc_count"),
+             pl.col("token_count").sum().alias("total_tokens"),
+             pl.col("token_count").mean().alias("avg_tokens"),
+         )
+         .sort("doc_count", descending=True)
+         .collect(engine="streaming")
+     )
+
+
+ def compute_global_stats(df: pl.LazyFrame) -> pl.DataFrame:
+     """Compute global summary statistics."""
+     return df.select(
+         pl.len().alias("total_docs"),
+         pl.col("token_count").sum().alias("total_tokens"),
+         pl.col("token_count").mean().alias("avg_tokens"),
+         pl.col("token_count").median().alias("median_tokens"),
+         pl.col("token_count").std().alias("std_tokens"),
+         pl.col("is_truncated").sum().alias("truncated_docs"),
+         pl.col("is_truncated").mean().alias("truncation_rate"),
+         pl.col("minhash_cluster_size").mean().alias("avg_cluster_size"),
+         pl.col("duplicate_count").sum().alias("total_duplicates"),
+         pl.col("language").n_unique().alias("unique_languages"),
+         pl.col("extractor").n_unique().alias("unique_extractors"),
+         pl.col("dump").n_unique().alias("unique_dumps"),
+     ).collect(engine="streaming")
+
+
+ def create_readme(
+     args,
+     global_stats: pl.DataFrame,
+     timings: dict[str, float],
+ ) -> str:
+     """Create README content for the stats dataset."""
+     stats = global_stats.to_dicts()[0]
+     lang_name = COMMON_LANGUAGES.get(args.lang, args.lang)
+     total_time = sum(timings.values())
+
+     # Format timings table
+     timing_rows = "\n".join(f"| {name} | {t:.2f}s |" for name, t in timings.items())
+
+     return f"""---
+ tags:
+ - statistics
+ - polars
+ - finepdfs-edu
+ license: odc-by
+ ---
+
+ # Statistics for {args.source_dataset} ({lang_name})
+
+ Aggregate statistics computed using Polars streaming on the [{args.source_dataset}](https://huggingface.co/datasets/{args.source_dataset}) dataset.
+
+ ## Performance
+
+ Processed **{stats.get("total_docs", 0):,} documents** in **{total_time:.2f} seconds**.
+
+ | Step | Time |
+ |------|------|
+ {timing_rows}
+ | **Total** | **{total_time:.2f}s** |
+
+ > Speed comes from Polars only reading metadata columns (not the `text` column),
+ > thanks to Parquet's columnar format and lazy evaluation.
+
+ ## How This Was Generated
+
+ This dataset demonstrates **Polars streaming aggregation** with HuggingFace Hub integration.
+ Thanks to [polars#25521](https://github.com/pola-rs/polars/pull/25521), `scan_parquet`
+ with `hf://` paths now uses far fewer API calls (139 → 1 for finepdfs-edu).
+
+ ```bash
+ uv run finepdfs-stats.py --lang {args.lang} --output-repo {args.output_repo or "username/stats"}
+ ```
+
+ ## Global Summary
+
+ | Metric | Value |
+ |--------|-------|
+ | Language | {lang_name} (`{args.lang}`) |
+ | Total Documents | {stats.get("total_docs", "N/A"):,} |
+ | Total Tokens | {stats.get("total_tokens", "N/A"):,} |
+ | Average Tokens/Doc | {stats.get("avg_tokens", 0):,.0f} |
+ | Truncated Documents | {stats.get("truncated_docs", 0):,} ({stats.get("truncation_rate", 0) * 100:.1f}%) |
+ | Unique Languages | {stats.get("unique_languages", "N/A")} |
+ | Unique Extractors | {stats.get("unique_extractors", "N/A")} |
+ | Unique Dumps | {stats.get("unique_dumps", "N/A")} |
+
+ ## Configs
+
+ - `global_stats` - Overall summary (1 row)
+ - `language_stats` - Per-language aggregations
+ - `extractor_stats` - Per-extractor aggregations
+ - `dump_stats` - Per-dump aggregations
+
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Load all configs
+ global_stats = load_dataset("{args.output_repo or "username/stats"}", "global_stats")
+ lang_stats = load_dataset("{args.output_repo or "username/stats"}", "language_stats")
+ extractor_stats = load_dataset("{args.output_repo or "username/stats"}", "extractor_stats")
+ dump_stats = load_dataset("{args.output_repo or "username/stats"}", "dump_stats")
+ ```
+
+ ## Source
+
+ - **Dataset**: [{args.source_dataset}](https://huggingface.co/datasets/{args.source_dataset})
+ - **Language**: {args.lang}
+ - **Script**: [finepdfs-stats.py](https://huggingface.co/datasets/uv-scripts/data-stats)
+ """
+
+
+ def main():
+     parser = argparse.ArgumentParser(
+         description="Compute aggregate statistics on HF datasets using Polars streaming",
+         formatter_class=argparse.RawDescriptionHelpFormatter,
+         epilog=__doc__,
+     )
+
+     parser.add_argument(
+         "--source-dataset",
+         type=str,
+         default="HuggingFaceFW/finepdfs-edu",
+         help="Source dataset: HuggingFaceFW/finepdfs-edu (49.5M rows) or HuggingFaceFW/finepdfs (476M rows)",
+     )
+
+     parser.add_argument(
+         "--show-plan",
+         action="store_true",
+         help="Show Polars query plan before execution (demonstrates optimization)",
+     )
+
+     parser.add_argument(
+         "--lang",
+         type=str,
+         default="eng_Latn",
+         help="Language+script code to process, e.g., eng_Latn, fra_Latn, zho_Hans (default: eng_Latn)",
+     )
+
+     parser.add_argument(
+         "--list-languages",
+         action="store_true",
+         help="List available language+script codes and exit",
+     )
+
+     parser.add_argument(
+         "--limit",
+         type=int,
+         help="Limit to first N rows (for testing)",
+     )
+
+     parser.add_argument(
+         "--output-repo",
+         type=str,
+         help="HuggingFace dataset repository to upload results",
+     )
+
+     parser.add_argument(
+         "--output-dir",
+         type=str,
+         default="./stats_output",
+         help="Local directory for output files (default: ./stats_output)",
+     )
+
+     parser.add_argument(
+         "--hf-token",
+         type=str,
+         help="HuggingFace API token (or set HF_TOKEN env var)",
+     )
+
+     parser.add_argument(
+         "--private",
+         action="store_true",
+         help="Make the output dataset private",
+     )
+
+     args = parser.parse_args()
+
+     # Check for high-performance mode
+     if os.environ.get("HF_XET_HIGH_PERFORMANCE"):
+         logger.info("High-performance mode enabled (HF_XET_HIGH_PERFORMANCE=1)")
+
+     # List languages mode
+     if args.list_languages:
+         print(f"Available language+script codes for {args.source_dataset}:\n")
+         print("Common languages:")
+         for code, name in COMMON_LANGUAGES.items():
+             print(f" {code:12} - {name}")
+         print("\nFetching full list from HF Hub...")
+         all_langs = list_available_languages(args.source_dataset)
+         print(f"\nAll available ({len(all_langs)} total):")
+         for lang in all_langs[:30]:  # Show first 30
+             name = COMMON_LANGUAGES.get(lang, "")
+             print(f" {lang:12} {name}")
+         if len(all_langs) > 30:
+             print(f" ... and {len(all_langs) - 30} more")
+         sys.exit(0)
+
+     # Build the parquet path
+     source_path = (
+         f"hf://datasets/{args.source_dataset}/data/{args.lang}/train/*.parquet"
+     )
+     logger.info(f"Scanning: {source_path}")
+     logger.info(f"Language: {args.lang} ({COMMON_LANGUAGES.get(args.lang, 'unknown')})")
+
+     # Create lazy frame - this doesn't load any data yet!
+     logger.info("Creating lazy query plan...")
+     df = pl.scan_parquet(source_path)
+
+     # Apply limit if specified
+     if args.limit:
+         logger.info(f"Limiting to first {args.limit:,} rows")
+         df = df.head(args.limit)
+
+     # Show query plan if requested
+     if args.show_plan:
+         # Build a sample query to show the plan
+         sample_query = df.select(
+             pl.len(),
+             pl.col("token_count").sum(),
+             pl.col("language").n_unique(),
+         )
+         print("\nQuery Plan (showing Polars optimization):")
+         print("=" * 60)
+         print(sample_query.explain())
+         print("=" * 60)
+         print("\nNote: Polars uses projection pushdown - only reads columns needed!")
+         print("The 'text' column is never loaded, making this very fast.\n")
+
+     # Create output directory
+     output_dir = Path(args.output_dir)
+     output_dir.mkdir(parents=True, exist_ok=True)
+
+     # Track timings
+     timings: dict[str, float] = {}
+
+     # Compute statistics (streaming execution happens here)
+     logger.info("Computing global statistics...")
+     start = time.perf_counter()
+     global_stats = compute_global_stats(df)
+     timings["Global stats"] = time.perf_counter() - start
+     print("\nGlobal Statistics:")
+     print(global_stats)
+     global_stats.write_parquet(output_dir / "global_stats.parquet")
+
+     logger.info("Computing per-language statistics...")
+     start = time.perf_counter()
+     # Need to re-scan since we consumed the lazy frame
+     df = pl.scan_parquet(source_path)
+     if args.limit:
+         df = df.head(args.limit)
+     lang_stats = compute_language_stats(df)
+     timings["Language stats"] = time.perf_counter() - start
+     print(f"\nLanguage Statistics ({len(lang_stats)} languages):")
+     print(lang_stats.head(20))
+     lang_stats.write_parquet(output_dir / "language_stats.parquet")
+
+     logger.info("Computing per-extractor statistics...")
+     start = time.perf_counter()
+     df = pl.scan_parquet(source_path)
+     if args.limit:
+         df = df.head(args.limit)
+     extractor_stats = compute_extractor_stats(df)
+     timings["Extractor stats"] = time.perf_counter() - start
+     print("\nExtractor Statistics:")
+     print(extractor_stats)
+     extractor_stats.write_parquet(output_dir / "extractor_stats.parquet")
+
+     logger.info("Computing per-dump statistics...")
+     start = time.perf_counter()
+     df = pl.scan_parquet(source_path)
+     if args.limit:
+         df = df.head(args.limit)
+     dump_stats = compute_dump_stats(df)
+     timings["Dump stats"] = time.perf_counter() - start
+     print(f"\nDump Statistics ({len(dump_stats)} dumps):")
+     print(dump_stats.head(20))
+     dump_stats.write_parquet(output_dir / "dump_stats.parquet")
+
+     # Print timing summary
+     total_time = sum(timings.values())
+     print("\nTiming Summary:")
+     print("-" * 30)
+     for name, t in timings.items():
+         print(f" {name}: {t:.2f}s")
+     print("-" * 30)
+     print(f" Total: {total_time:.2f}s")
+
+     logger.info(f"Results saved to: {output_dir}")
+
+     # Upload to HF Hub if requested
+     if args.output_repo:
+         hf_token = args.hf_token or os.environ.get("HF_TOKEN")
+         if hf_token:
+             login(token=hf_token)
+
+         api = HfApi(token=hf_token)
+
+         logger.info(f"Creating dataset repository: {args.output_repo}")
+         try:
+             create_repo(
+                 args.output_repo,
+                 repo_type="dataset",
+                 private=args.private,
+                 token=hf_token,
+             )
+             logger.info(f"Created new dataset: {args.output_repo}")
+         except Exception as e:
+             if "already exists" in str(e).lower():
+                 logger.info(f"Dataset {args.output_repo} already exists, updating...")
+             else:
+                 raise
+
+         # Upload each stats DataFrame as a separate config using datasets
+         configs = {
+             "global_stats": global_stats,
+             "language_stats": lang_stats,
+             "extractor_stats": extractor_stats,
+             "dump_stats": dump_stats,
+         }
+
+         for config_name, df in configs.items():
+             logger.info(f"Uploading {config_name}...")
+             ds = Dataset.from_polars(df)
+             ds.push_to_hub(
+                 args.output_repo,
+                 config_name=config_name,
+                 token=hf_token,
+                 private=args.private,
+             )
+
+         # Create and upload README
+         readme_content = create_readme(args, global_stats, timings)
+         api.upload_file(
+             path_or_fileobj=readme_content.encode(),
+             path_in_repo="README.md",
+             repo_id=args.output_repo,
+             repo_type="dataset",
+             token=hf_token,
+         )
+
+         dataset_url = f"https://huggingface.co/datasets/{args.output_repo}"
+         logger.info(f"Dataset uploaded: {dataset_url}")
+         print(f"\nResults uploaded to: {dataset_url}")
+
+
+ if __name__ == "__main__":
+     if len(sys.argv) == 1:
+         print("FinePDFs Statistics - Polars Streaming Demo")
+         print("=" * 45)
+         print("\nCompute aggregate statistics on FinePDFs datasets")
+         print("using Polars streaming - no need to download the full dataset!\n")
+         print("Example commands:\n")
+         print("# List available languages:")
+         print("uv run finepdfs-stats.py --list-languages\n")
+         print("# Quick test with 10k rows:")
+         print("uv run finepdfs-stats.py --limit 10000\n")
+         print("# Show query plan (see Polars optimization):")
+         print("uv run finepdfs-stats.py --show-plan --limit 1000\n")
+         print("# Process English (default: finepdfs-edu):")
+         print("uv run finepdfs-stats.py\n")
+         print("# Use full finepdfs dataset (476M rows):")
+         print("uv run finepdfs-stats.py --source-dataset HuggingFaceFW/finepdfs\n")
+         print("# Save results to HF Hub:")
+         print("uv run finepdfs-stats.py --output-repo username/finepdfs-edu-stats\n")
+         print("# Run on HF Jobs (CPU, with high-performance transfers):")
+         print("hf jobs uv run finepdfs-stats.py \\")
+         print(" -s HF_TOKEN \\")
+         print(" -e HF_XET_HIGH_PERFORMANCE=1 \\")
+         print(" -- --output-repo username/stats")
+         sys.exit(0)
+
+     main()