# starcoder-7b-agent-0.2-gguf

GGUF conversion of Colby/starcoder-7b-agent-0.2, a round-2 LoRA fine-tune of Colby/starcoder-7b-agent-0.1-merged (round 1 with its LoRA merged into the base weights).

Trained on Roman1111111/claude-opus-4.6-10000x, togethercomputer/CoderForge-Preview, Crownelius/Opus-4.6-Reasoning-3300x, and Colby/starcoder-agent-format-sft.

Chat format: StarCoderChat with and / blocks.

## Quantizations

| File | Format | Size |
|------|--------|------|
| starcoder-7b-agent-0.2-f16.gguf | FP16 | ~14 GB |
| starcoder-7b-agent-0.2-q8_0.gguf | Q8_0 | ~7 GB |
| starcoder-7b-agent-0.2-q5_k_m.gguf | Q5_K_M | ~5 GB |
| starcoder-7b-agent-0.2-q4_k_m.gguf | Q4_K_M | ~4 GB |
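The sizes above roughly follow parameter count times effective bits per weight. A minimal sketch, assuming approximate effective bit rates for llama.cpp quant types (the exact rates vary slightly by tensor mix):

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8.
# The bits-per-weight values are approximations, not exact specs.
PARAMS = 7e9
BPW = {"F16": 16.0, "Q8_0": 8.5, "Q5_K_M": 5.5, "Q4_K_M": 4.85}

def approx_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate file size in GB for a given quantization rate."""
    return params * bits_per_weight / 8 / 1e9

for name, bpw in BPW.items():
    print(f"{name}: ~{approx_gb(bpw):.1f} GB")
```

This reproduces the ~14/~7/~5/~4 GB figures in the table to within rounding.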

## Ollama usage

```shell
hf download Colby/starcoder-7b-agent-0.2-gguf starcoder-7b-agent-0.2-q4_k_m.gguf
ollama create starcoder-agent:7b -f Modelfile.starcoder-agent
ollama run starcoder-agent:7b
```
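The `ollama create` step expects a Modelfile next to the downloaded GGUF. A minimal sketch of what `Modelfile.starcoder-agent` could contain; the repo's actual Modelfile may differ, and the parameter values here are assumptions:

```
# Modelfile.starcoder-agent — illustrative sketch, not the repo's actual file.
# Points Ollama at the locally downloaded Q4_K_M quant.
FROM ./starcoder-7b-agent-0.2-q4_k_m.gguf

# Conservative sampling for agent/code tasks (assumed values).
PARAMETER temperature 0.2
```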
Model details:

- Format: GGUF
- Model size: 7B params
- Architecture: starcoder

