LLM Course documentation
Ungraded quiz
So far, this chapter has covered a lot of ground! Don’t worry if you didn’t grasp all the details; now it’s time to reflect on what you’ve learned so far with a quiz.
This quiz is ungraded, so you can try it as many times as you want. If you struggle with some questions, follow the tips and revisit the material. You’ll be quizzed on this material again in the certification exam.
1. Explore the Hub and look for the roberta-large-mnli checkpoint. What task does it perform?
2. What will the following code return?
from transformers import pipeline
ner = pipeline("ner", grouped_entities=True)
ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")

3. What should replace … in this code sample?
from transformers import pipeline
filler = pipeline("fill-mask", model="bert-base-cased")
result = filler("...")

4. Why will this code fail?
from transformers import pipeline
classifier = pipeline("zero-shot-classification")
result = classifier("This is a course about the Transformers library")
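If you want to experiment while answering the questions above, it may help to recall the general shape of a pipeline call. The sketch below uses the sentiment-analysis task as a neutral example (the checkpoint name shown is one commonly used for that task; any compatible checkpoint would do), so it doesn't give away any of the quiz answers:

```python
from transformers import pipeline

# Create a pipeline from a task name and (optionally) a model checkpoint,
# then call it on raw text. Every pipeline in the quiz follows this pattern.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("I love the LLM course!")
print(result)  # a list with one dict containing a "label" and a "score"
```

Trying each quiz snippet in a notebook this way, and reading the error messages you get, is a good way to check your intuitions before looking at the answers.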