How to use AdaMLLab/mmBERT-Turkish-Quality-Classifier with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="AdaMLLab/mmBERT-Turkish-Quality-Classifier", trust_remote_code=True)
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("AdaMLLab/mmBERT-Turkish-Quality-Classifier", trust_remote_code=True, dtype="auto")
```

A text quality classifier for Turkish pretraining data, trained from mmBERT-small.
This model implements the FineWeb2-HQ approach (Messmer et al., 2025) but uses mmBERT as the encoder for improved Turkish understanding.
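In outline, the FineWeb2-HQ recipe scores every candidate document with the quality classifier and keeps only the highest-scoring fraction for pretraining. A minimal sketch of that filtering step, assuming documents have already been scored (the function name and the keep fraction below are illustrative assumptions, not values from the paper):

```python
# Sketch of score-based corpus filtering. Each item in scored_docs is a
# (text, score) pair produced by the quality classifier. The 10% default
# keep fraction is an illustrative assumption, not taken from FineWeb2-HQ.
def filter_by_quality(scored_docs, keep_fraction=0.1):
    """Return the texts of the top keep_fraction of documents by score."""
    ranked = sorted(scored_docs, key=lambda pair: pair[1], reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return [text for text, _ in ranked[:keep]]
```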
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AdaMLLab/mmBERT-Turkish-Quality-Classifier")
result = classifier("Türkçe metin burada")  # "Turkish text here"
```
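The pipeline returns a list with one dict per input text, each holding a label and a confidence score. A hedged sketch of turning that output into a keep/drop decision for filtering (the 0.5 threshold is an assumption; check the model's actual label config before relying on it):

```python
# Hypothetical post-processing of a text-classification pipeline result.
# The threshold is an illustrative assumption, not documented by this model card.
def is_high_quality(result, threshold=0.5):
    """result: [{"label": str, "score": float}] as returned by the pipeline."""
    top = result[0]  # the pipeline returns one dict per input text
    return top["score"] >= threshold
```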
Citation:

```bibtex
@misc{alrashed2025mixminmatch,
  title={Mix, MinHash, and Match: Cross-Source Agreement for Multilingual Pretraining Datasets},
  author={Sultan Alrashed and Francesco Orabona},
  year={2025},
  eprint={2512.18834v2},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.18834v2},
}
```
Base model: jhu-clsp/mmBERT-small