LLM Course documentation

Ungraded quiz


So far, this chapter has covered a lot of ground! Don’t worry if you didn’t grasp all the details — this is a good time to reflect on what you’ve learned so far with a quiz.

This quiz is ungraded, so you can try it as many times as you want. If you struggle with some questions, follow the tips and revisit the material. You’ll be quizzed on this material again in the certification exam.

1. Explore the Hub and look for the roberta-large-mnli checkpoint. What task does it perform?

2. What will the following code return?

from transformers import pipeline

ner = pipeline("ner", grouped_entities=True)
ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")

3. What should replace … in this code sample?

from transformers import pipeline

filler = pipeline("fill-mask", model="bert-base-cased")
result = filler("...")

4. Why will this code fail?

from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier("This is a course about the Transformers library")

5. What does “transfer learning” mean?

6. True or false? A language model usually does not need labels for its pretraining.

7. Select the sentence that best describes the terms “model”, “architecture”, and “weights”.

8. Which of these types of models would you use for completing prompts with generated text?

9. Which of these types of models would you use for summarizing texts?

10. Which of these types of models would you use for classifying text inputs according to certain labels?

11. What are possible sources of the bias observed in a model?
