Language Models and City Classification (CIS 5300)
Dataset Description
This dataset supports two tasks for learning about n-gram language models:
1. Shakespeare Text Generation
Raw text from Shakespeare's collected works (~4.5 MB, 167K lines) for training character-level n-gram language models. Students build models of increasing order (n=1 through n=7) and generate Shakespeare-style text.
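A minimal sketch of what a character-level n-gram generator looks like (the function names and padding symbol are illustrative, not part of the assignment starter code):

```python
import random
from collections import Counter, defaultdict

def train_char_ngram(text, n):
    """Count (n-1)-character histories and the characters that follow them."""
    counts = defaultdict(Counter)
    padded = "~" * (n - 1) + text  # "~" pads the start of the text
    for i in range(len(text)):
        history = padded[i:i + n - 1]
        counts[history][padded[i + n - 1]] += 1
    return counts

def generate(counts, n, length=100, seed=0):
    """Sample characters one at a time, conditioned on the last n-1 characters."""
    rng = random.Random(seed)
    out = "~" * (n - 1)
    for _ in range(length):
        dist = counts.get(out[-(n - 1):])
        if not dist:
            break  # unseen history: stop (a real model would back off or smooth)
        chars, freqs = zip(*dist.items())
        out += rng.choices(chars, weights=freqs, k=1)[0]
    return out[n - 1:]
```

Trained on the full corpus, higher values of n produce increasingly Shakespeare-like output.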
2. City Name Classification
Classify city names by country of origin using character-level language models. The intuition: city names from different countries exhibit distinctive character patterns (e.g., German city names often end in "-burg" or "-stadt", while Chinese city names have different character distributions).
Countries: Afghanistan, China, Germany, Finland, France, India, Iran, Pakistan, South Africa
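One way to frame the classifier, sketched here with an add-one smoothed character bigram model per country (the class name and toy training names below are hypothetical, not from the dataset):

```python
import math
from collections import Counter, defaultdict

class CharBigramLM:
    """Add-one smoothed character bigram model over a fixed alphabet."""
    def __init__(self, names, alphabet):
        self.alphabet = alphabet
        self.counts = defaultdict(Counter)
        for name in names:
            padded = "^" + name + "$"  # "^"/"$" mark the name boundaries
            for a, b in zip(padded, padded[1:]):
                self.counts[a][b] += 1

    def logprob(self, name):
        padded = "^" + name + "$"
        V = len(self.alphabet) + 2  # alphabet plus the two boundary symbols
        total = 0.0
        for a, b in zip(padded, padded[1:]):
            c = self.counts[a]
            total += math.log((c[b] + 1) / (sum(c.values()) + V))
        return total

def classify(name, models):
    """Pick the country whose model assigns the name the highest log-probability."""
    return max(models, key=lambda country: models[country].logprob(name))
```

Training one such model per country on the train split and taking the argmax over log-probabilities gives a simple but effective baseline.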
Usage
City Classification Task
from datasets import load_dataset
# Load city name classification data
cities = load_dataset("CCB/cis5300-language-models", "cities")
print(cities)
# DatasetDict({
# train: Dataset({features: ['city_name', 'country_code', 'country'], num_rows: 12392})
# validation: Dataset({features: [...], num_rows: 1548})
# test: Dataset({features: [...], num_rows: 1554})
# })
print(cities["train"][0])
# {'city_name': 'shewah', 'country_code': 'af', 'country': 'Afghanistan'}
Shakespeare Corpus
from huggingface_hub import hf_hub_download
# Download Shakespeare text for language modeling
path = hf_hub_download("CCB/cis5300-language-models", "shakespeare.txt", repo_type="dataset")
with open(path) as f:
text = f.read()
print(f"Shakespeare corpus: {len(text):,} characters")
Dataset Structure
Cities Config
| Split | Examples | Description |
|---|---|---|
| train | 12,392 | City names with country labels for training |
| validation | 1,548 | Labeled validation set |
| test | 1,554 | Labeled test set for evaluation |
Fields:
| Field | Type | Description |
|---|---|---|
| city_name | string | Name of the city |
| country_code | string | Two-letter country code (af, cn, de, fi, fr, in, ir, pk, za) |
| country | ClassLabel | Full country name |
Supplementary Files
| File | Size | Description |
|---|---|---|
| shakespeare.txt | 4.5 MB | Complete Shakespeare works for training language models |
| shakespeare_sonnets.txt | 9 KB | Shakespeare's sonnets (additional training data) |
| nytimes_article.txt | 5 KB | News article for cross-domain perplexity evaluation |
| cities_test_unlabeled.csv | 20 KB | Unlabeled test cities (for student predictions) |
Source
City name data sourced from GeoNames (CC BY 4.0). Shakespeare text from Project Gutenberg (public domain).
Intended Use
This dataset is used for Homework 3 in CIS 5300: Natural Language Processing at the University of Pennsylvania. Students:
- Build character-level n-gram language models on Shakespeare
- Implement smoothing (add-k) and evaluate with perplexity
- Use interpolation to combine models of different orders
- Apply language models to classify city names by country of origin
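The smoothing, perplexity, and interpolation steps above can be sketched for a character bigram model as follows (function names, the choice of k, and the interpolation weight are illustrative assumptions, not the assignment's specification):

```python
import math
from collections import Counter, defaultdict

def bigram_counts(text):
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def addk_prob(counts, a, b, vocab, k=0.5):
    """P(b | a) with add-k smoothing over a fixed vocabulary."""
    return (counts[a][b] + k) / (sum(counts[a].values()) + k * len(vocab))

def interpolated_prob(uni, bi, a, b, vocab, lam=0.7, k=0.5):
    """Weighted mixture of the bigram and unigram estimates."""
    p_uni = (uni[b] + k) / (sum(uni.values()) + k * len(vocab))
    return lam * addk_prob(bi, a, b, vocab, k) + (1 - lam) * p_uni

def perplexity(counts, text, vocab, k=0.5):
    """Per-character perplexity: exp of the average negative log-probability."""
    logp = sum(math.log(addk_prob(counts, a, b, vocab, k))
               for a, b in zip(text, text[1:]))
    return math.exp(-logp / (len(text) - 1))
```

Lower perplexity on held-out text indicates a better-fitting model; the cross-domain nytimes_article.txt file exists precisely so students can see perplexity rise on out-of-domain text.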
Citation
@misc{cis5300-language-models,
title = {CIS 5300 Language Models Dataset},
author = {Callison-Burch, Chris},
year = {2026},
publisher = {University of Pennsylvania},
}