# SpellBench — Linguistic Character-Level Evaluation Benchmark
SpellBench is a benchmark for evaluating how well language models handle character-level and word-level linguistic operations. It includes 29,700 items across 14 tasks, with granular tracking of language and script for each sample.

Most LLMs operate at the token level and struggle with tasks that require reasoning about individual characters: spelling, reversing, counting letters, and so on. SpellBench provides a standardized way to measure this across diverse scripts, including cases with diacritics (tashkeel), tonal marks, and transliterations.
## Quick Start

```python
from datasets import load_dataset

# Load the test split for evaluation
test_ds = load_dataset("omneity-labs/spellbench", split="test")

# Load the train split for few-shot prompting or fine-tuning
train_ds = load_dataset("omneity-labs/spellbench", split="train")
```
## Dataset Splits

| Split | Items | Words per task per language |
|---|---|---|
| train | 14,850 | 50 |
| test | 14,850 | 50 |
| total | 29,700 | — |
The splits are generated by partitioning the word pool for each language 50/50 before generating task items, so a word that appears in a test example is never seen in any training example for that language.
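That invariant is cheap to verify. Below is a minimal sketch with inline stand-in rows; in practice you would replace `train_rows` and `test_rows` with the loaded HF splits:

```python
# Sketch: check that no (language, word) pair is shared between splits.
# `train_rows` / `test_rows` stand in for the loaded HF splits.
train_rows = [
    {"metadata": {"language": "English", "word": "gregarious"}},
    {"metadata": {"language": "French", "word": "fenêtre"}},
]
test_rows = [
    {"metadata": {"language": "English", "word": "strawberry"}},
]

def word_pool(rows):
    """Set of (language, word) pairs appearing in a split."""
    return {(r["metadata"]["language"], r["metadata"]["word"]) for r in rows}

overlap = word_pool(train_rows) & word_pool(test_rows)
assert not overlap, f"leaked words: {overlap}"
```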
## Tasks

SpellBench currently contains 14 implemented tasks across two categories.

### Word-Level (9 tasks)

| Task | Description | Example Input | Example Expected | Script Restriction |
|---|---|---|---|---|
| `spell` | Spell letter by letter with dashes | hello | h-e-l-l-o | all scripts |
| `reverse` | Reverse the characters | hello | olleh | all scripts |
| `word_length` | Count characters | hello | 5 | all scripts |
| `first_letter` | First character | hello | h | all scripts |
| `last_letter` | Last character | hello | o | all scripts |
| `is_palindrome` | Check if palindrome | racecar | true | all scripts |
| `vowel_count` | Count Latin vowels (a, e, i, o, u) | hello | 2 | Latin only |
| `consonant_count` | Count Latin consonants | hello | 3 | Latin only |
| `remove_vowels` | Strip Latin vowels | hello | hll | Latin only |
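For reference, the word-level transforms above are simple enough to sketch in a few lines of Python. This is illustrative only: these operate on Python's Unicode code points, and the benchmark's own generator may treat combining marks differently.

```python
# Illustrative reference implementations of the word-level tasks.
VOWELS = set("aeiou")

WORD_TASKS = {
    "spell": lambda w: "-".join(w),
    "reverse": lambda w: w[::-1],
    "word_length": lambda w: str(len(w)),
    "first_letter": lambda w: w[0],
    "last_letter": lambda w: w[-1],
    "is_palindrome": lambda w: str(w == w[::-1]).lower(),
    # Latin-only tasks:
    "vowel_count": lambda w: str(sum(c.lower() in VOWELS for c in w)),
    "consonant_count": lambda w: str(sum(c.isalpha() and c.lower() not in VOWELS for c in w)),
    "remove_vowels": lambda w: "".join(c for c in w if c.lower() not in VOWELS),
}

assert WORD_TASKS["spell"]("hello") == "h-e-l-l-o"
assert WORD_TASKS["is_palindrome"]("racecar") == "true"
assert WORD_TASKS["vowel_count"]("hello") == "2"
```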
### Sentence-Level (5 tasks)

| Task | Description | Example Input | Example Expected |
|---|---|---|---|
| `word_count` | Count words | the quick brown fox | 4 |
| `sentence_reverse` | Reverse word order | the quick brown fox | fox brown quick the |
| `longest_word` | Find longest word | the quick brown fox | quick |
| `shortest_word` | Find shortest word | the quick brown fox | the |
| `alphabetical_order` | Sort words A→Z | the quick brown fox | brown fox quick the |
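The sentence-level tasks can be sketched the same way. Whitespace tokenization is assumed here, and ties in word length resolve to the first occurrence, which matches the examples in the table:

```python
# Illustrative reference implementations of the sentence-level tasks.
SENTENCE_TASKS = {
    "word_count": lambda s: str(len(s.split())),
    "sentence_reverse": lambda s: " ".join(reversed(s.split())),
    "longest_word": lambda s: max(s.split(), key=len),   # first longest wins ties
    "shortest_word": lambda s: min(s.split(), key=len),  # first shortest wins ties
    "alphabetical_order": lambda s: " ".join(sorted(s.split())),
}

s = "the quick brown fox"
assert SENTENCE_TASKS["word_count"](s) == "4"
assert SENTENCE_TASKS["longest_word"](s) == "quick"
```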
## Item Format

Each item in `test.jsonl` is a JSON object:

```json
{
  "id": "test_spell_00042",
  "task": "spell",
  "input": "strawberry",
  "expected": "s-t-r-a-w-b-e-r-r-y",
  "metadata": {
    "language": "English",
    "script": "Latin",
    "word": "strawberry"
  }
}
```

- `id`: Unique identifier
- `task`: Task name
- `input`: The input to present to the model
- `expected`: The ground-truth answer
- `metadata`: Includes `language` and `script` for per-language analysis
## Evaluation Heuristics

To handle the conversational nature of LLMs, the provided evaluation scripts (`run_eval_transformers.py`, `run_eval_openai.py`) use a multi-stage extraction and normalization pipeline rather than simple string matching:

### 1. Answer Extraction

Before comparison, the scripts attempt to isolate the model's intent:

- **JSON Parsing**: If the model outputs a JSON block, it extracts the value from the `"answer"` or `"result"` keys.
- **Numeric Selection**: For counting tasks (e.g., `word_length`), it extracts the last standalone number in the response to ignore conversational filler.
- **Boolean Mapping**: Detects "true/yes" or "false/no" and maps them to canonical `true` or `false`.
- **Character Isolation**: For letter-based tasks, it looks for characters inside single/double quotes or standalone characters.
- **Order Mapping**: For `compare_lengths`, it maps linguistic descriptions like "the second word" to canonical markers (`second`).
- **Fallback**: Defaults to the last non-empty line of the response.
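As an illustration, the numeric, boolean, and fallback rules might look roughly like this. This is a simplified sketch, not the actual extraction logic in the runner scripts, which also handle JSON blocks and quoted characters:

```python
import re

def extract_answer(task, response):
    """Simplified sketch of the extraction heuristics."""
    text = response.strip()
    if task in {"word_length", "word_count", "vowel_count", "consonant_count"}:
        numbers = re.findall(r"\b\d+\b", text)
        if numbers:
            return numbers[-1]            # last standalone number
    if task == "is_palindrome":
        if re.search(r"\b(true|yes)\b", text, re.IGNORECASE):
            return "true"
        if re.search(r"\b(false|no)\b", text, re.IGNORECASE):
            return "false"
    lines = [l for l in text.splitlines() if l.strip()]
    return lines[-1].strip() if lines else ""  # fallback: last non-empty line
```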
### 2. Normalization

Once an answer is extracted, it is normalized to ensure fairness:

- **Case Insensitivity**: All comparisons are case-normalized.
- **Collection Normalization**: For tasks returning lists (e.g., `unique_letters`), the script sorts the characters and ignores separator differences (commas vs. spaces) to compare set content rather than formatting.
- **Whitespace Stripping**: Leading and trailing whitespace is stripped strictly.
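The normalization step can be sketched similarly. Again, this is a simplified illustration rather than the runner scripts' actual code, with `unique_letters` standing in for any collection-valued task:

```python
def normalize(task, answer):
    """Toy normalization mirroring the rules above."""
    a = answer.strip().lower()                # whitespace + case normalization
    if task == "unique_letters":              # collection-valued tasks
        letters = [c for c in a if c.isalpha()]
        return "".join(sorted(letters))       # separator- and order-agnostic
    return a

assert normalize("spell", "  H-E-L-L-O ") == "h-e-l-l-o"
assert normalize("unique_letters", "e, h, l, o") == normalize("unique_letters", "h l e o")
```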
## Prompts

Reference prompt templates for each task are in `data/prompts.json`. These are suggestions — you can use your own prompts. Each task has 3 template variations:

```json
{
  "spell": {
    "description": "Spell a word letter by letter, separated by dashes",
    "prompts": [
      "Spell the word '{input}' letter by letter, separating each letter with a dash.",
      "Break the word '{input}' into individual letters separated by hyphens.",
      "List each character in '{input}' one by one, using dashes between them."
    ],
    "expected_format": "h-e-l-l-o"
  }
}
```
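For example, one might sample a template variation and fill in an item's input like this. The inline `prompts` dict mirrors the `data/prompts.json` entry shown above; how the runners actually pick a variation is not specified here:

```python
import random

prompts = {
    "spell": {
        "prompts": [
            "Spell the word '{input}' letter by letter, separating each letter with a dash.",
            "Break the word '{input}' into individual letters separated by hyphens.",
            "List each character in '{input}' one by one, using dashes between them.",
        ]
    }
}

def render_prompt(task, item_input, rng=random):
    """Pick one of the task's template variations and fill in the input."""
    template = rng.choice(prompts[task]["prompts"])
    return template.format(input=item_input)

print(render_prompt("spell", "hello"))
```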
## Language & Script Coverage

SpellBench is designed to be a truly multilingual benchmark, moving beyond English-centric character evaluation. It currently covers 29,700 items across 24+ language-script combinations, including:
### Supported Languages & Scripts
| Script | Languages |
|---|---|
| Latin | English, French, German, Spanish, Indonesian, Swahili, Yoruba |
| Arabic | Arabic (Modern Standard), Arabic (MSA with Tashkeel/Diacritics), Persian (Farsi) |
| Arabizi | Moroccan Arabizi, Egyptian Arabizi (incorporating numbers like 2, 3, 5, 7, 9) |
| Romanized | Romanized Russian, Romanized Japanese (Hepburn) |
| Cyrillic | Russian, Bulgarian |
| Devanagari | Hindi, Marathi |
| Other Scripts | Greek, Armenian, Georgian, Korean (Hangul), Thai, Hebrew |
### Special Features

- **Tashkeel (Diacritics)**: A dedicated Arabic split where every word includes full vocalization. This tests whether models can "see" through diacritics to identify core letters or accurately count the diacritics themselves.
- **Arabizi (transliteration with numbers)**: Modern Arabic dialects written in Latin script using numerals to represent sounds not found in English (e.g., `3` for 'ayn, `7` for ha). This is a unique challenge for tokenizers.
- **Tonal Marks & Diacritics**: Extensive coverage of Yoruba (tonal marks) and European accents (French, German, Spanish, etc.).
- **Romanization**: Tests how models handle transliterated text (Russian, Japanese), which often has ambiguous tokenization boundaries.
### Per-Language Analysis

Every sample in the dataset includes `language` and `script` metadata. We recommend reporting accuracy scores broken down by script to identify where models may have "blind spots" due to their training data or vocabulary constraints.
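Such a breakdown takes only a few lines. Here is a sketch, assuming you have collected per-item results carrying the item's metadata plus a boolean `correct` field (the `correct` field is your evaluation output, not part of the dataset itself):

```python
from collections import defaultdict

def accuracy_by_script(results):
    """Aggregate per-script accuracy from scored items.

    Each result dict is assumed to hold the item's `metadata` and a
    boolean `correct` field produced by your evaluation loop.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in results:
        script = r["metadata"]["script"]
        totals[script] += 1
        hits[script] += r["correct"]
    return {s: hits[s] / totals[s] for s in totals}

sample = [
    {"metadata": {"script": "Latin"}, "correct": True},
    {"metadata": {"script": "Latin"}, "correct": False},
    {"metadata": {"script": "Arabic"}, "correct": True},
]
assert accuracy_by_script(sample) == {"Latin": 0.5, "Arabic": 1.0}
```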
## Running the Evaluation

### With a HuggingFace Transformers model

```shell
python run_eval_transformers.py \
  --model "Qwen/Qwen3.5-0.8B" \
  --tasks spell,reverse,word_length \
  --output results.json
```
### With an OpenAI-compatible API

```shell
python run_eval_openai.py \
  --base-url "https://api.openai.com/v1" \
  --model "gpt-5" \
  --tasks spell,reverse \
  --output results.json
```
See the runner scripts for full options.
## Regenerating the Dataset

The dataset is deterministic (`seed=42`). To regenerate:

```shell
python generate.py          # generates data/
python generate.py --check  # verify only
```
## Citation

```bibtex
@misc{spellbench2026,
  title={SpellBench: Can LLMs spell? A Character-Level Linguistic Evaluation Benchmark},
  author={Omar Kamali},
  year={2026},
  url={https://github.com/omneity-labs/spellbench}
}
```
## License
Code is MIT licensed. Data is CC BY-SA licensed.