raincandy-u/TinyChat
How to use raincandy-u/TinyChat-1776K with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="raincandy-u/TinyChat-1776K")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("raincandy-u/TinyChat-1776K")
model = AutoModelForCausalLM.from_pretrained("raincandy-u/TinyChat-1776K")
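As a quick smoke test, the pipeline can be called directly. A minimal sketch, assuming the <A>/<B>/<end> dialogue format described later in this card:

# Minimal sketch: generate a short reply with the pipeline above.
# The prompt follows the <A>/<B>/<end> format shown later in this card.
result = pipe("<A>Hi, Tom. How are you? <end>\n<B>", max_new_tokens=32)
print(result[0]["generated_text"])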
How to use raincandy-u/TinyChat-1776K with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "raincandy-u/TinyChat-1776K"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "raincandy-u/TinyChat-1776K",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
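Because the endpoint is OpenAI-compatible, it can also be called from Python. A minimal sketch, assuming the openai client package is installed (not part of the original card):

# Minimal sketch: call the vLLM server with the openai client
# (pip install openai; "EMPTY" is the usual placeholder key for local servers)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.completions.create(
    model="raincandy-u/TinyChat-1776K",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)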
How to use raincandy-u/TinyChat-1776K with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "raincandy-u/TinyChat-1776K" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "raincandy-u/TinyChat-1776K",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Alternatively, launch the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "raincandy-u/TinyChat-1776K" \
--host 0.0.0.0 \
--port 30000
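Whichever way the server was launched, the same OpenAI-compatible endpoint is available on port 30000 and can be called from Python. A minimal sketch using requests (any HTTP client works; this choice is an assumption, not from the original card):

# Minimal sketch: the same completion request from Python
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "raincandy-u/TinyChat-1776K",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])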
How to use raincandy-u/TinyChat-1776K with Docker Model Runner:
docker model run hf.co/raincandy-u/TinyChat-1776K
A tiny LM trained from scratch on the TinyChat dataset.
The aim is to achieve natural responses with the smallest possible model. It was trained on a dataset of English conversations at the level of a 3-year-old child.
Note: The model has no world knowledge, so you should not ask it intellectual questions.
The model uses the following Llama-style configuration:

from transformers import AutoConfig

config = AutoConfig.for_model(
    model_type="llama",
    hidden_size=192,
    intermediate_size=640,
    num_attention_heads=16,
    num_hidden_layers=3,
    num_key_value_heads=4,
    tie_word_embeddings=True,
    vocab_size=2048,
    max_position_embeddings=256,
)
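For reference, instantiating this config gives a parameter count that matches the "1776K" in the model name. A minimal sketch (not from the original card):

# Minimal sketch: build a randomly initialized LlamaForCausalLM from the
# config above and count its parameters.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_config(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e3:.0f}K parameters")  # ~1777K with tied embeddings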
Prompt format (speakers are tagged <A>/<B>, and each turn ends with <end>):

<A>Hi, Tom. How are you? <end>
<B>I'm fine, thank you. And you? <end>
<A>Fine. What's your favorite color? <end>
<B>My favorite color is black. <end>
<A>Do you like cats? <end>
<B>
Example output:
Yes, I do. I like it too. They are good for me.
Sampling parameters:

top_k=40,
top_p=0.8,
temperature=1
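Putting the prompt format and sampling parameters together, a minimal end-to-end sketch (truncating the output at <end> is an assumption, not from the original card):

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("raincandy-u/TinyChat-1776K")
model = AutoModelForCausalLM.from_pretrained("raincandy-u/TinyChat-1776K")

# Dialogue in the <A>/<B>/<end> format shown above
prompt = (
    "<A>Hi, Tom. How are you? <end>\n"
    "<B>I'm fine, thank you. And you? <end>\n"
    "<A>Fine. What's your favorite color? <end>\n"
    "<B>My favorite color is black. <end>\n"
    "<A>Do you like cats? <end>\n"
    "<B>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_k=40,
    top_p=0.8,
    temperature=1.0,
)
# Decode only the newly generated tokens
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
print(completion.split("<end>")[0].strip())  # assumption: stop at first <end>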