MEDUSA 0.1 — Medieval European Documents Unified System for Automated text recognition
MEDUSA is a family of Vision Language Models (VLMs) fine-tuned for multilingual medieval handwritten text recognition (HTR) at the line level. These models were developed at the École nationale des chartes – PSL and submitted to the ICDAR 2026 Competition on Multilingual Medieval Handwritten Text Recognition (CMMHWR).
This repository contains two model variants:
| Model | Base | Version | Notes |
|---|---|---|---|
| MEDUSA-4B-0.1 | Qwen3.5-4B | 0.1 | Model used at competition submission |
| MEDUSA-9B-0.1 | Qwen3.5-9B | 0.1 | Best model at competition submission |
Note on versioning. The `0.1` variants correspond exactly to the models evaluated in the ICDAR 2026 competition.
System report
For full details on data, training procedure, and results, see the accompanying [system report](https://enc.hal.science/hal-05600991).
Languages and scripts
MEDUSA was trained on a corpus of over 640,000 lines spanning more than twenty repositories, covering the following language families and scripts:
- Romance / Latin: Old French (`fro`), Occitan (`pro`), Old Italian (`ita`), Old Spanish (`osp`), Catalan (`cat`), Old Portuguese (`opor`), Navarrese (`nav`), Latin (`lat`), Venetian (`vec`), Galician (`glg`)
- Germanic: Middle High German (`gmh`), Middle Low German (`gml`), Old Icelandic (`ice`), Middle English (`enm`), Middle Dutch (`dum`), Old English (`ang`), Old Norwegian (`non`), Swedish (`swe`)
- Celtic: Welsh (`wlm`), Old Irish (`gle`)
- Slavic: Old Czech (`cze`), Old Polish (`pol`)
The manuscripts date roughly from the 9th to the 15th century.
Results (ICDAR 2026 CMMHWR)
Unweighted average CER (%) and WER (%) on internal and official competition test sets. Lower is better.
| Model | Internal CER | Internal WER | Task 1 CER | Task 2 CER | Task 3 CER |
|---|---|---|---|---|---|
| kraken-CATMuS 1.6.0 (baseline) | 17.3 | 53.5 | 9.29 | 7.91 | 25.9 |
| MEDUSA-4B 0.1 | 14.7 | 44.5 | 8.15 | 5.60 | 12.0 |
| MEDUSA-9B 0.1 | 13.2 | 42.6 | 8.03 | 5.24 | 10.8 |
Intended use
These models are designed for line-level HTR on pre-segmented medieval manuscript images. They are not page-level OCR systems: they expect a cropped image of a single text line as input and return the transcription of that line.
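As an illustration of the expected input, the minimal sketch below crops a single line from a page scan using a hard-coded bounding box. In practice the coordinates would come from a layout analysis step (e.g. an ALTO `<TextLine>` element); all file names and coordinates here are assumptions for the example.

```python
from PIL import Image

# Load the full page scan (file name is illustrative).
page = Image.open("page_scan.jpg").convert("RGB")

# Bounding box (left, top, right, bottom) of one text line, e.g. taken from
# an ALTO <TextLine> element produced by a segmenter. Values are made up.
line_bbox = (120, 340, 1850, 410)

# The resulting crop is the kind of image MEDUSA expects as input.
line_image = page.crop(line_bbox)
line_image.save("line_0001.jpg")
```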
The models target CATMuS transcription guidelines, which govern abbreviation expansion, allograph normalisation, and the character set used.
Usage with DocWorkflow
The recommended way to use MEDUSA is via DocWorkflow, the document analysis framework developed at the École nationale des chartes. DocWorkflow handles ALTO XML input/output, line image extraction, batching, CATMuS post-processing, and scoring in a unified pipeline.
Installation
```bash
git clone https://github.com/TheoMoins/DocWorkflow
cd DocWorkflow
pip install -e .
```
Configuration file
Create a YAML config file (e.g., `medusa_inference.yml`):

```yaml
run_name: "Medusa0.1Line-9B"
output_dir: "results"
device: "cuda"
save_image: true

data:
  test: "path/to/your/alto/data"  # directory with ALTO XML + image pairs

tasks:
  htr:
    type: VLMLineHTR
    config:
      use_metadata: true
      # Path to the MEDUSA weights (a local copy, or e.g. the Hugging Face
      # repository ENC-PSL/Medusa0.1Line-9B).
      model_name: 'outputs/Medusa0.1Line-9B'
      device_map: "auto"
      max_new_tokens: 128
      line_batch_size: 8
      max_pixels: 401408
      prompt: >
        Transcribe the handwritten text in this line image.
        Keep abbreviations as written, do not expand them.
        Modernize word segmentation (split or join words following modern usage).
        Use only u and i, never v and j, regardless of the original or modern usage.
        Do not record allographic variants, use standard letter forms.
        Output ONLY the transcription.
```
Tip. If your dataset uses dataset-specific transcription conventions, you can provide a `conventions.yml` file alongside your ALTO data; DocWorkflow will automatically inject the conventions into the prompt via a `{conventions}` placeholder.
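The exact schema of `conventions.yml` is defined by DocWorkflow; the snippet below is only a hypothetical illustration of the idea (the key name and wording are assumptions, not taken from the DocWorkflow documentation):

```yaml
# Hypothetical conventions.yml -- the real schema is defined by DocWorkflow.
conventions: >
  Keep abbreviations and superscript letters exactly as written.
  Transcribe long s as a regular s and do not mark line-final hyphenation.
```

Whatever text is configured here would replace the `{conventions}` placeholder in the prompt at inference time.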
Running inference
```bash
docworkflow -c medusa_inference.yml predict -t htr -d test
```
Direct usage with transformers
The models can also be used directly outside of DocWorkflow, though the CATMuS post-processing step will need to be applied manually if desired.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "ENC-PSL/Medusa0.1Line-9B"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# Input must be a pre-segmented image of a single text line.
image = Image.open("path/to/line_image.jpg").convert("RGB")
prompt = "Transcribe the handwritten text in this line image.\nOutput ONLY the transcription."

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image", "image": image},
        ],
    }
]

inputs = processor.apply_chat_template(
    [messages],
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Keep only the newly generated tokens (drop the prompt part of the sequence).
trimmed = generated_ids[0][inputs["input_ids"].shape[1]:]
transcription = processor.decode(trimmed, skip_special_tokens=True).strip()
print(transcription)
```
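To transcribe many lines with the same model, the sketch below simply wraps the generation call from the snippet above in a helper and applies it to every image in a directory. It reuses `processor`, `model`, and `prompt` from the previous snippet; the directory layout (one cropped line image per file under `lines/`) is an assumption.

```python
from pathlib import Path

def transcribe_line(image):
    # Same call sequence as the single-line example above, wrapped for reuse.
    messages = [{"role": "user", "content": [
        {"type": "text", "text": prompt},
        {"type": "image", "image": image},
    ]}]
    inputs = processor.apply_chat_template(
        [messages], tokenize=True, add_generation_prompt=True,
        return_dict=True, enable_thinking=False, return_tensors="pt",
    ).to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    trimmed = out[0][inputs["input_ids"].shape[1]:]
    return processor.decode(trimmed, skip_special_tokens=True).strip()

# Assumed layout: one pre-segmented line crop per file.
for path in sorted(Path("lines/").glob("*.jpg")):
    line = Image.open(path).convert("RGB")
    print(path.name, transcribe_line(line))
```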
Important. The model is optimised for the prompt given above; results may degrade if the prompt is modified. The model outputs raw text; to enforce CATMuS compliance (character whitelist, allograph normalisation), apply the post-processing step provided in DocWorkflow (`src/tasks/htr/postprocessing.py`).
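For reference, here is a minimal post-processing sketch. It is not the DocWorkflow implementation: the whitelist and mappings below are illustrative placeholders derived from the prompt above (only u/i, never v/j), and the actual rules should be taken from `src/tasks/htr/postprocessing.py`.

```python
# Minimal sketch of CATMuS-style post-processing -- NOT the DocWorkflow
# implementation; the whitelist and mappings below are illustrative only.
ALLOWED = set("abcdefghilmnopqrstuxyz ABCDEFGHILMNOPQRSTUXYZ0123456789.,;:'-")

# Map letters to the forms prescribed by the prompt (u/i instead of v/j);
# extend as required by the CATMuS guidelines.
NORMALISE = str.maketrans({"v": "u", "V": "U", "j": "i", "J": "I"})

def postprocess(transcription: str) -> str:
    text = transcription.translate(NORMALISE)
    # Drop any character outside the (illustrative) whitelist.
    return "".join(c for c in text if c in ALLOWED)

print(postprocess("Iohannes vidit"))  # -> "Iohannes uidit"
```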
CATMuS conventions
Transcriptions follow the CATMuS guidelines.
Citation
If you use MEDUSA in your research, please cite:
```bibtex
@unpublished{moins:hal-05600991,
  TITLE = {{MEDUSA 0.1: Medieval European Documents Unified System for Automated text recognition System Report for the ICDAR 2026 Competition on Multilingual Medieval Handwritten Text Recognition}},
  AUTHOR = {Moins, Th{\'e}o and Cafiero, Florian and Camps, Jean-Baptiste and Conte, Lilla and Guidi, Emilie and Hensley, Brenna and Kapitan, Katarzyna and Macedo, Carolina and Peratello, Paola and Vermaas, Cecile and Vidal-Gor{\`e}ne, Chahan},
  URL = {https://enc.hal.science/hal-05600991},
  NOTE = {working paper or preprint},
  YEAR = {2026},
  MONTH = Apr,
  PDF = {https://enc.hal.science/hal-05600991v1/file/MEDUSA__MEDieval_Universal_Script_Analysis-6.pdf},
  HAL_ID = {hal-05600991},
  HAL_VERSION = {v1},
}
```
Funding
Funded by the European Union (ERC, LostMA, 101117408). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
This work has received support under the Major Research Program "CultureLab", launched by PSL Research University and implemented by the ANR under the reference ANR-10-IDEX-0001.
This work benefited from French state aid managed by the Agence Nationale de la Recherche under France 2030, reference ANR-23-IACL-0008.
Biblissima+ benefits from French state aid managed by the ANR under the Programme d'investissements d'avenir, integrated into France 2030, reference ANR-21-ESRE-0005.
This work was granted access to the HPC resources of IDRIS under the allocation 2026-AD011015914R1 made by GENCI.