Introducing EuroBERT: A High-Performance Multilingual Encoder Model
device=["cuda:0", "cuda:1"] or device=["cpu"]*4 on the model.predict or model.rank calls.dataset_id, e.g. dataset_id="lightonai/NanoBEIR-de" for the German benchmark.output_scores=True to get similarity scores returned. This can be useful for some distillation losses!backend="onnx" or backend="openvino" when initializing a SparseEncoder to get started, but I also included utility functions for optimization, dynamic quantization, and static quantization, plus benchmarks.n-tuple-scores output format from mine_hard_negativesgather_across_devices=True to load in-batch negatives from the other devices too! Essentially a free lunch, pretty big impact potential in my evals.transformers, and you install trackio with pip install trackio, then your experiments will also automatically be tracked locally with trackio. Just open up localhost and have a look at your losses/evals, no logins, no metric uploading.