ClinicalEncoder: a Diagnosable ColBERT for medical texts
Collection
In this collection, you will find other released models and datasets for ClinicalEncoder25, our brand new retrieval and reasoning model for healthcare.
ClinicalMap25-for-SnomedCT provides a dense, token-level mapping of SnomedCT clinical concepts to the ClinicalEncoder25 embedding space. This resource enables fast, interpretable semantic search and reasoning over medical terminology, directly compatible with the ClinicalEncoder25 model.
| File Name | Description |
|---|---|
| `clinical_map_25_for_sct_concepts.pth` | 707,584 rows (707,574 concepts + 10 padding) of 1024 8-bit floats, normalized for cosine similarity. |
| `sct_concepts.txt` | 707,574 SnomedCT concept names, one per line. May contain duplicates. |
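Because each row is L2-normalized, a plain dot product between rows is already a cosine similarity. A minimal self-contained check on random data (not the released file) illustrating this:

```python
import torch

x = torch.randn(5, 1024)
x = x / x.norm(dim=1, keepdim=True)   # L2-normalize, as the released rows are
sims = x @ x.T                        # dot product of unit vectors == cosine similarity

# Every row has cosine similarity 1.0 with itself (up to float error)
assert torch.allclose(sims.diagonal(), torch.ones(5), atol=1e-4)
```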
# %%
import torch
# %%
# Load the embeddings (8-bit floats)
concept_vectors = torch.load(
    "clinical_map_25_for_sct_concepts.pth", map_location="cpu", weights_only=True
)
# If your hardware doesn't support FP8 matmul, cast to float16 first:
# concept_vectors = concept_vectors.to(torch.float16)
# %%
# Load concept names
with open("sct_concepts.txt", "r", encoding="utf-8") as f:
    concept_names = [line.strip() for line in f]
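Since `sct_concepts.txt` may contain duplicate names, it can help to build an index from each name to all of the embedding rows that carry it. A sketch using a hypothetical in-memory sample; in practice, iterate over the `concept_names` list loaded above:

```python
from collections import defaultdict

# Hypothetical sample; replace with the real `concept_names` list.
sample_names = ["Myocardial infarction", "Asthma", "Myocardial infarction"]

name_to_rows = defaultdict(list)
for row, name in enumerate(sample_names):
    name_to_rows[name].append(row)

# A duplicated name maps to every embedding row that carries it
print(name_to_rows["Myocardial infarction"])  # [0, 2]
```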
# %%
# Load the ClinicalEncoder25 model
from transformers import AutoModel, AutoTokenizer
MODEL_NAME = "Parallia/ClinicalEncoder25-Diagnosable-Colbert-L2-for-medical-texts"
model = AutoModel.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# %%
# Tokenize your document
doc = "The patient suffers from PAPA syndrome. The patient therefore takes NSAIDs daily, as prophylaxis."
doc_inputs = tokenizer(
doc,
return_tensors="pt",
add_special_tokens=True,
)
doc_tokens = [
tokenizer.decode([tid], clean_up_tokenization_spaces=False)
for tid in doc_inputs["input_ids"][0].tolist()
]
# %%
# Encode your document
with torch.no_grad():
doc_inputs = {k: v.to(model.device) for k, v in doc_inputs.items()}
doc_outputs = model(**doc_inputs)
doc_vectors = doc_outputs.last_hidden_state[0]
# Normalize the token embeddings for easier cosine similarity scoring
doc_vectors = (doc_vectors / doc_vectors.norm(dim=1, keepdim=True).clamp_min(1e-12))
# Ensure the query and embeddings are on the same device, and use the same dtype
doc_vectors = doc_vectors.to(concept_vectors.device).to(concept_vectors.dtype)
# %%
# Compute cosine similarity
if concept_vectors.dtype != torch.float8_e4m3fn:
similarities = torch.mm(doc_vectors, concept_vectors.t())
else:
    unit_vector = torch.tensor(1.0, device=concept_vectors.device, dtype=torch.float32)
    # torch._scaled_mm is a private API whose exact signature varies across
    # PyTorch versions; scale_a/scale_b must be float32 scalar tensors.
    similarities = torch._scaled_mm(
        doc_vectors, concept_vectors.T,
        scale_a=unit_vector, scale_b=unit_vector,
        out_dtype=torch.float16,
    )
# %%
# Retrieve concept names of the best matches per token, and display their similarity scores
how_many_to_display = 3
# Skip the [CLS] token at position 0, keeping i aligned with the similarity rows
for i, token in enumerate(doc_tokens[1:], start=1):
    print(f"\nToken #{i} '{token}':")
    top_indices = similarities[i].topk(how_many_to_display, largest=True, sorted=True).indices
    for idx in top_indices:
        print(f"- {concept_names[idx]} ({similarities[i, idx].item():.3f})")
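ColBERT-style retrieval pools these token-level scores into one document-level score per concept via MaxSim: each concept keeps its best similarity across the document's tokens. A minimal sketch on hypothetical data standing in for the `similarities` matrix above:

```python
import torch

# Hypothetical stand-in for the `similarities` matrix computed above
# (rows = document tokens, columns = concepts).
similarities = torch.tensor([[0.1, 0.9, 0.3],
                             [0.7, 0.2, 0.4]])

# MaxSim pooling: each concept's score is its best match over all tokens
doc_level_scores = similarities.max(dim=0).values   # shape: [num_concepts]
best = doc_level_scores.topk(2)
print(best.indices.tolist())  # [1, 0]: concepts ranked by document-level score
```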
This dataset is released under the CC-BY-NC 4.0 license. For commercial use, please obtain a license. You might also need a SnomedCT license if you intend to map the output of this script to the SnomedCT ontology.