T-pro-it-2.0-GGUF

🚨 Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.

This repository contains T-pro-it-2.0 converted to the GGUF format with llama.cpp.
See the original BF16 model here: t-tech/T-pro-it-2.0.

πŸ“Š Benchmarks

TBD

Available quantisations

Recommendation: choose the highest-quality quantisation that fits your hardware (VRAM / RAM).

| Filename (β†’ -gguf) | Quant method | Bits | Size (GB) |
|--------------------|--------------|------|-----------|
| t-pro-it-2.0-q4_k_m | Q4_K_M | 4 | 19.8 |
| t-pro-it-2.0-q5_k_s | Q5_K_S | 5 | 22.6 |
| t-pro-it-2.0-q5_0 | Q5_0 | 5 | 22.6 |
| t-pro-it-2.0-q5_k_m | Q5_K_M | 5 | 23.2 |
| t-pro-it-2.0-q6_k | Q6_K | 6 | 26.9 |
| t-pro-it-2.0-q8_0 | Q8_0 | 8 | 34.8 |

Size figures assume no GPU off-loading. Off-loading lowers RAM usage and uses VRAM instead.
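To sanity-check which quantisation fits your hardware, a back-of-the-envelope estimate of file size from parameter count and bits per weight is often enough. The effective bits-per-weight figures below are rough assumptions (K-quants store some tensors at higher precision), so real files differ by up to a gigabyte or so:

```python
# Rough GGUF size estimate: parameters * effective bits-per-weight / 8.
# The bpw values are assumptions, not official llama.cpp numbers.

PARAMS_B = 32.8  # ~33B parameters for this model

EFFECTIVE_BPW = {  # assumed effective bits per weight
    "Q4_K_M": 4.85,
    "Q5_K_S": 5.5,
    "Q5_K_M": 5.7,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
}

def estimate_size_gb(params_billion: float, bpw: float) -> float:
    """File size in GB: params * bits, divided by 8 bits per byte."""
    return params_billion * bpw / 8

for quant, bpw in EFFECTIVE_BPW.items():
    print(f"{quant}: ~{estimate_size_gb(PARAMS_B, bpw):.1f} GB")
```

The estimates land within a few hundred megabytes of the sizes in the table above; remember to leave headroom for the KV cache on top of the weights.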

Quickstart

llama.cpp

Check out our llama.cpp documentation for a more detailed usage guide.

We recommend cloning llama.cpp and building it by following the official guide; we track the latest version of llama.cpp. The demonstration below assumes you are running commands from the llama.cpp repository directory.

./llama-cli -hf t-tech/T-pro-it-2.0-GGUF:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --presence-penalty 1.0 -c 40960 -n 32768 --no-context-shift
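llama.cpp also ships llama-server, which exposes an OpenAI-compatible HTTP API. A minimal Python sketch of talking to it with the same sampling settings as the CLI flags above (the URL, port, and prompt are assumptions; adjust to your local setup):

```python
# Sketch: request body for llama-server's OpenAI-compatible
# /v1/chat/completions endpoint, mirroring the llama-cli sampling flags
# above (--temp 0.6, --presence-penalty 1.0). URL/port are assumptions.
import json
import urllib.request

SERVER_URL = "http://localhost:8080/v1/chat/completions"  # default llama-server port

def build_chat_request(prompt: str) -> dict:
    """Assemble the JSON body; sampling values mirror the CLI example."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
        "presence_penalty": 1.0,
        "max_tokens": 2048,
    }

def send(body: dict) -> dict:
    """POST the body to a running llama-server (server must be up)."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_chat_request("Напиши короткое стихотворение о море.")
print(json.dumps(body, ensure_ascii=False))
# send(body) returns the completion once llama-server is running.
```

Start the server with `./llama-server -hf t-tech/T-pro-it-2.0-GGUF:Q8_0 --jinja` (same model reference as the CLI example) before calling `send`.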

ollama

Check out our ollama documentation for a more detailed usage guide.

You can run T-pro-it-2.0 with one command:

ollama run t-tech/T-pro-it-2.0:q8_0

See also t-tech ollama homepage.

Switching Between Thinking and Non-Thinking Mode

You can add /think and /no_think to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
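The toggle is plain text manipulation: append the tag to the latest message before sending it. A small sketch, assuming an OpenAI-style message format (the helper function is hypothetical, only the `/think` and `/no_think` tags are the documented switch):

```python
# Sketch: toggling thinking mode per turn by appending /think or
# /no_think to a user message. The helper is hypothetical; the tags
# themselves are the documented mechanism.

def with_mode(content: str, thinking: bool) -> dict:
    """Return a user message with the appropriate mode tag appended."""
    tag = "/think" if thinking else "/no_think"
    return {"role": "user", "content": f"{content} {tag}"}

messages = [
    with_mode("Реши уравнение x^2 - 5x + 6 = 0.", thinking=True),
    # ... the model's (reasoned) reply would be appended here ...
    with_mode("Теперь просто назови корни.", thinking=False),
]

for m in messages:
    print(m["content"])
```

Because the model follows the most recent instruction, the second turn above is answered without a reasoning trace even though the first turn used thinking mode.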

πŸ“– Citation

If you use this model in your research or projects, please cite:

@inproceedings{stoianov-etal-2026-pro,
    title = "{T}-pro 2.0: An Efficient {R}ussian Hybrid-Reasoning Model and Playground",
    author = "Stoianov, Dmitrii  and
      Taranets, Danil  and
      Tsymboi, Olga  and
      Latypov, Ramil  and
      Dautov, Almaz  and
      Kruglikov, Vladislav  and
      Surkov, Nikita  and
      Abramov, German  and
      Gein, Pavel  and
      Abulkhanov, Dmitry  and
      Gashkov, Mikhail  and
      Zelenkovskiy, Viktor  and
      Batalov, Artem  and
      Medvedev, Aleksandr  and
      Potapov, Anatolii",
    editor = "Croce, Danilo  and
      Leidner, Jochen  and
      Moosavi, Nafise Sadat",
    booktitle = "Proceedings of the 19th Conference of the {E}uropean Chapter of the {A}ssociation for {C}omputational {L}inguistics (Volume 3: System Demonstrations)",
    month = mar,
    year = "2026",
    address = "Rabat, Morocco",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2026.eacl-demo.22/",
    doi = "10.18653/v1/2026.eacl-demo.22",
    pages = "297--319",
    ISBN = "979-8-89176-382-1"
   }
Model details

- Model size: 33B params
- Architecture: qwen3
- Format: GGUF (4-, 5-, 6-, and 8-bit quantisations)
- Base model: Qwen/Qwen3-32B