
Pre-trained embedding usage in NLP - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is a pre-trained embedding in NLP?
A pre-trained embedding is a set of word or token vectors learned from a large text dataset before being used in a new task. It helps represent words as numbers that capture their meanings and relationships.
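The "capture their meanings and relationships" part can be sketched with cosine similarity. The tiny 3-dimensional vectors below are invented purely for illustration (real pre-trained embeddings typically have 50-300+ dimensions), but the idea is the same: related words point in similar directions.

```python
import math

# Toy "embeddings" -- values are made up for illustration only.
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words sit closer in vector space than unrelated ones.
related = cosine_similarity(embeddings["king"], embeddings["queen"])
unrelated = cosine_similarity(embeddings["king"], embeddings["apple"])
print(related > unrelated)  # True
```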
beginner
Why use pre-trained embeddings instead of training from scratch?
Pre-trained embeddings save time and resources because they already capture useful language patterns. They improve model performance, especially when you have limited data for your task.
beginner
Name two popular pre-trained embedding models.
Two popular pre-trained embedding models are Word2Vec and GloVe. Both learn word vectors from large text corpora but use different training methods.
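GloVe distributes its pre-trained vectors as plain text, one word per line followed by its vector components. A minimal loader might look like the sketch below; the in-memory sample stands in for a real file such as `glove.6B.50d.txt`, and the numbers are made up.

```python
import io

# Tiny stand-in for a GloVe-format file ("word v1 v2 ..." per line).
glove_like_file = io.StringIO(
    "the 0.1 0.2 0.3\n"
    "cat 0.4 0.5 0.6\n"
    "sat 0.7 0.8 0.9\n"
)

def load_vectors(fileobj):
    """Parse 'word v1 v2 ...' lines into a {word: [floats]} dict."""
    vectors = {}
    for line in fileobj:
        word, *values = line.split()
        vectors[word] = [float(v) for v in values]
    return vectors

vectors = load_vectors(glove_like_file)
print(len(vectors), vectors["cat"])  # 3 [0.4, 0.5, 0.6]
```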
intermediate
How do you use pre-trained embeddings in a neural network?
You load the pre-trained vectors and use them as the initial weights for the embedding layer in your neural network. You can keep them fixed or allow fine-tuning during training.
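The answer above can be sketched without any framework: an embedding layer is essentially a weight matrix indexed by token id, and "using pre-trained vectors as initial weights" means copying those vectors into its rows. In PyTorch this corresponds to `nn.Embedding.from_pretrained(...)`; the vocabulary and vectors below are invented for illustration.

```python
# Pre-trained vectors, one per word (made-up 2-dimensional values).
pretrained = {
    "i":    [0.1, 0.3],
    "like": [0.5, 0.2],
    "nlp":  [0.9, 0.4],
}

# 1. Build a vocabulary mapping each token to an integer id.
vocab = {word: idx for idx, word in enumerate(pretrained)}

# 2. Copy the pre-trained vectors into the weight matrix, one row per id.
embedding_matrix = [pretrained[word] for word in vocab]

# 3. The embedding "layer" is then just a row lookup by token id.
def embed(tokens):
    return [embedding_matrix[vocab[t]] for t in tokens]

print(embed(["i", "like", "nlp"]))
```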
intermediate
What is fine-tuning in the context of pre-trained embeddings?
Fine-tuning means updating the pre-trained embedding weights slightly during your task's training to better fit your specific data and improve performance.
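The frozen-versus-fine-tuned distinction comes down to whether the embedding weights receive gradient updates. A real framework would compute the gradient by backpropagation; in this hypothetical sketch a made-up gradient stands in, so the only moving part is the `trainable` flag.

```python
embedding = [0.5, 0.2]       # one pre-trained word vector (invented values)
gradient = [0.1, -0.05]      # pretend gradient from the task loss
learning_rate = 0.1

def train_step(vector, grad, trainable):
    if not trainable:
        # Frozen: the pre-trained values pass through unchanged.
        return list(vector)
    # Fine-tuning: nudge the pre-trained vector toward the task data.
    return [v - learning_rate * g for v, g in zip(vector, grad)]

frozen = train_step(embedding, gradient, trainable=False)
tuned = train_step(embedding, gradient, trainable=True)
print(frozen)  # unchanged
print(tuned)   # slightly adjusted
```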
What is the main benefit of using pre-trained embeddings?
A. They always produce random vectors
B. They require more data to train
C. They replace the need for a neural network
D. They reduce training time and improve performance
Which of these is NOT a popular pre-trained embedding model?
A. RandomForest
B. Word2Vec
C. GloVe
D. FastText
What does fine-tuning pre-trained embeddings involve?
A. Freezing the embeddings so they don't change
B. Using embeddings only for testing
C. Updating embeddings during training to fit new data
D. Deleting embeddings after training
How are pre-trained embeddings usually integrated into a model?
A. As the output layer
B. As the initial weights of the embedding layer
C. As the loss function
D. As the optimizer
Which statement about pre-trained embeddings is true?
A. They capture semantic relationships between words
B. They always require training from scratch
C. They cannot be used with neural networks
D. They are only useful for image data
Explain what pre-trained embeddings are and why they are useful in NLP tasks.
Think about how embeddings represent words and how pre-training helps.
Describe how you would use pre-trained embeddings in a neural network model and the role of fine-tuning.
Consider the embedding layer and training process.