ModernCamemBERT-bio-large
ModernCamemBERT-bio is available in two sizes: base (150M parameters) and large (350M parameters).
Model Summary
ModernCamemBERT-bio-large is the Large variant of our French biomedical encoder, built by continued pretraining of ModernCamemBERT-large with a CLM detour recipe. Instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM. This produces lasting representational changes that improve downstream biomedical performance by +1.2 pp on average over the standard MLM continued-pretraining baseline across 8 French biomedical tasks (winning on 6 of 8).
| Property | Value |
|---|---|
| Architecture | ModernBERT |
| Parameters | 350M |
| Layers | 28 |
| Hidden size | 1024 |
| Attention heads | 16 |
| Context length | 8,192 tokens |
| Language | French |
| Base model | almanach/moderncamembert-large |
Usage
You can use this model with the transformers library (v4.48.0+):
```bash
pip install -U "transformers>=4.48.0"
```
If your GPU supports it, install Flash Attention for best efficiency:
```bash
pip install flash-attn
```
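For example, loading in bf16 with Flash Attention 2 enabled (a minimal sketch; assumes a CUDA GPU and a working flash-attn install):

```python
import torch
from transformers import AutoModelForMaskedLM

# Sketch: load the encoder in bf16 with Flash Attention 2 on a CUDA GPU
model = AutoModelForMaskedLM.from_pretrained(
    "almanach/ModernCamemBERT-bio-large",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to("cuda")
```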
Masked Language Modeling
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "almanach/ModernCamemBERT-bio-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "Le patient présente une <mask> aiguë du myocarde."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Find the masked position and take the highest-scoring token at that position
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
```
Fine-tuning (Classification, NER, etc.)
```python
from transformers import AutoTokenizer, AutoModel

model_id = "almanach/ModernCamemBERT-bio-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "Compte rendu d'hospitalisation du patient admis pour décompensation cardiaque."
inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
outputs = model(**inputs)

# outputs.last_hidden_state: [batch, seq_len, 1024]
```
Note: ModernCamemBERT-bio does not use token type IDs. You can omit the token_type_ids parameter.
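As a starting point for fine-tuning, here is a minimal sketch for token classification (NER). The label set, dataset, and hyperparameters are placeholders for illustration, not values from this card or the paper:

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          Trainer, TrainingArguments)

model_id = "almanach/ModernCamemBERT-bio-large"
labels = ["O", "B-DISEASE", "I-DISEASE"]  # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

args = TrainingArguments(
    output_dir="moderncamembert-bio-ner",
    learning_rate=3e-5,              # typical encoder fine-tuning value, not from the paper
    per_device_train_batch_size=8,
    num_train_epochs=3,
    bf16=True,
)

# Supply your own tokenized datasets with word-aligned labels:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```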
Training
Data
| Corpus | Tokens | Description |
|---|---|---|
| MC-Bio | 7B | Quality-filtered French biomedical text (scientific articles, drug leaflets, clinical guidelines) |
| MCQA | 2B | Medical question-answer pairs |
| E3C | 400M | Clinical cases from journals and theses |
| EMEA | 600M | Pharmaceutical documents (European Medicines Agency) |
| Total | 10B | |
Methodology
ModernCamemBERT-bio-large is trained in two phases, initialized from ModernCamemBERT-large:
- Phase 1 (CLM detour, 25B tokens): The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions) deeply modifies early transformer layers for domain adaptation.
- Phase 2 (MLM decay, 2.5B tokens): Bidirectional attention is restored, and the model is trained with masked language modeling at a 15% masking rate. The learning rate decays from its peak to 10% of the peak following a 1-sqrt schedule.
Both phases use the same data mix (27.5B tokens total). Training used AdamW (lr=2e-4, beta1=0.9, beta2=0.98), bf16 mixed precision, global batch size of 384 sequences (~3.1M tokens), on 4× H100 80GB GPUs with Composer. Total training time: ~81 hours wall-clock (324 GPU-h, 9.25 kg CO₂eq).
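For reference, a rough sketch of the 1-sqrt decay shape mentioned above, assuming the rate falls from the peak to 10% of the peak as a linear function of sqrt(progress); the exact schedule and its implementation are defined in the paper and training config:

```python
import math

def one_sqrt_decay(step, total_decay_steps, peak_lr=2e-4, final_frac=0.10):
    """Illustrative 1-sqrt decay from peak_lr down to final_frac * peak_lr.

    Assumed form: lr(t) = peak * (1 - (1 - final_frac) * sqrt(t / T));
    the schedule actually used for training is described in the paper.
    """
    progress = min(step / total_decay_steps, 1.0)
    return peak_lr * (1.0 - (1.0 - final_frac) * math.sqrt(progress))

print(one_sqrt_decay(0, 1000))     # ≈ 2e-4 at the start of the decay
print(one_sqrt_decay(1000, 1000))  # ≈ 2e-5 (10% of peak) at the end
```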
Why a CLM Detour?
CLM supervises every token position, producing dense gradient updates that deeply modify early transformer layers. These changes persist through the MLM decay phase, even when the decay matches the CLM phase in length. The Large model retains 67.2% CKA divergence from its MLM counterpart, compared to 56.5% for Base, showing that the effect scales with model capacity. See our paper for the full mechanistic analysis.
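For readers who want to run a similar analysis, here is a minimal sketch of standard linear CKA between two sets of token representations. The layers, data, and protocol behind the numbers above are described in the paper; "CKA divergence" is read here as 1 - CKA, which is an assumption of this sketch:

```python
import torch

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape [n_tokens, dim]."""
    Xc = X - X.mean(dim=0, keepdim=True)   # center each feature column
    Yc = Y - Y.mean(dim=0, keepdim=True)
    num = torch.linalg.matrix_norm(Yc.T @ Xc) ** 2          # ||Yc^T Xc||_F^2
    den = torch.linalg.matrix_norm(Xc.T @ Xc) * torch.linalg.matrix_norm(Yc.T @ Yc)
    return (num / den).item()

# cka_divergence = 1.0 - linear_cka(reps_clm_detour, reps_mlm_baseline)
```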
Evaluation
French biomedical benchmark results (8 tasks, 9 seeds per model, macro-averaged F1):
| Model | Ctx | FrACCO-30 | FrACCO-100 | CANTEMIST | DISTEMIST | MedDialog | DiaMed | EMEA | Medline | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| ModernCamemBERT-bio-large | 8192 | 80.7 | 65.4 | 74.4 | 30.4 | 64.5 | 64.8 | 70.3 | 63.1 | 64.2 |
| MLM baseline Large (ours) | 8192 | 79.4 | 63.3 | 72.6 | 29.1 | 64.5 | 61.2 | 70.4 | 63.5 | 63.0 |
| ModernCamemBERT-bio-base | 8192 | 74.8 | 60.1 | 71.0 | 25.5 | 63.6 | 67.4 | 68.6 | 61.9 | 61.6 |
| ModernCamemBERT | 8192 | 70.1 | 55.3 | 63.3 | 20.2 | 60.6 | 56.4 | 68.0 | 59.7 | 56.7 |
| DrBERT | 512 | 53.0 | 35.6 | 37.9 | 21.4 | 63.6 | 57.0 | 69.6 | 62.8 | 50.1 |
ModernCamemBERT-bio-large achieves 64.2% average F1, the highest score among French biomedical models (+1.2pp over MLM Large, 6/8 task wins).
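As an illustration of how such scores are typically aggregated (macro-averaged F1 per run, then averaged over seeds); the exact evaluation harness and metric definitions per task are described in the paper:

```python
import numpy as np
from sklearn.metrics import f1_score

def seed_averaged_macro_f1(runs):
    """runs: list of (y_true, y_pred) label lists, one pair per random seed."""
    scores = [f1_score(y_true, y_pred, average="macro") for y_true, y_pred in runs]
    return float(np.mean(scores)), float(np.std(scores))
```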
Intended Use
This model is designed for French biomedical and clinical NLP tasks:
- Named entity recognition (diseases, chemicals, procedures)
- Document classification (clinical specialties, ICD coding)
- Multilabel classification on long clinical documents
- Information extraction from clinical reports, drug leaflets, and scientific articles
The 8,192-token context is critical for long clinical documents (discharge summaries, oncology reports) that are truncated by 512-token models. The Large size provides improved performance over Base at the cost of higher compute requirements.
Related Models
| Model | Language | Parameters |
|---|---|---|
| ModernBERT-bio-base | English | 149M |
| ModernBERT-bio-large | English | 396M |
| ModernCamemBERT-bio-base | French | 150M |
| ModernCamemBERT-bio-large | French | 350M |
Limitations
- Trained on French biomedical text; not suitable for other languages without further adaptation.
- Encoder model: produces contextualized representations, does not generate text.
- Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations.
License
Apache 2.0
Citation
```bibtex
@article{touchent2026clmdetour,
  title={A Causal Language Modeling Detour Improves Encoder Continued Pretraining},
  author={Touchent, Rian and de la Clergerie, {\'E}ric},
  year={2026},
  journal={arXiv preprint}
}
```
Acknowledgments
This work was performed using HPC resources from GENCI-IDRIS (Grant 2024-AD011014393R2).