This is a LoRA fine-tuned version of Microsoft's Phi-2 model, trained on 500 examples from the yahma/alpaca-cleaned instruction dataset.
howtomakepplragequit: working on scalable, efficient LLM training for real-world instruction-following.
Training setup:
- bitsandbytes for efficient memory use
- 500 examples from the yahma/alpaca-cleaned dataset
- the Hugging Face Trainer

Example instruction from the dataset: "Give three tips to improve time management."
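Below is a minimal sketch of how such a run could be set up with peft, bitsandbytes, and the Hugging Face Trainer. The hyperparameters (LoRA rank and alpha, epochs, batch size, sequence length) are illustrative assumptions, not the values used for this checkpoint, and the optional Alpaca `input` field is omitted for brevity.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
tokenizer.pad_token = tokenizer.eos_token  # Phi-2 ships without a pad token

# Load the base model in 4-bit via bitsandbytes to keep memory use low
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                           bnb_4bit_compute_dtype=torch.float16),
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; rank and alpha here are assumed values
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# 500 examples, formatted as Alpaca-style instruction/response prompts
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:500]")

def tokenize(example):
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="phi2-lora-instruct",
                           num_train_epochs=3, per_device_train_batch_size=4),
    train_dataset=dataset,
    # mlm=False makes the collator pad batches and copy input_ids into labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```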
To use this model in your own project:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer, moving the model to the GPU
model = AutoModelForCausalLM.from_pretrained("howtomakepplragequit/phi2-lora-instruct").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("howtomakepplragequit/phi2-lora-instruct")

# Prompt in the Alpaca instruction format the model was trained on
input_text = "### Instruction:\nExplain how machine learning works.\n\n### Response:"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

# Generate up to 100 new tokens and decode the completion
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
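If the repository hosts only the LoRA adapter weights rather than a merged checkpoint, loading through peft is the usual route; it reads the base model name from the adapter config and applies the adapter on top. A sketch, assuming peft is installed:

```python
from peft import AutoPeftModelForCausalLM

# Downloads the base model named in the adapter config, then applies the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained("howtomakepplragequit/phi2-lora-instruct").to("cuda")
```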
Base model: microsoft/phi-2