Recall & Review
beginner
What is the main purpose of a sequence-to-sequence (seq2seq) architecture?
Seq2seq models transform one sequence into another, like translating a sentence from one language to another or summarizing text.
beginner
Name the two main parts of a sequence-to-sequence model.
The encoder, which reads and understands the input sequence, and the decoder, which generates the output sequence step-by-step.
intermediate
How does the encoder in a seq2seq model work?
The encoder processes the input sequence and compresses its information into a fixed-size context vector that summarizes the input for the decoder.
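The "compress into a fixed-size context vector" idea can be sketched in a few lines. Real encoders use RNNs or Transformers; this toy version just averages token embeddings, and all names (`embedding`, `encode`, the sizes) are illustrative, not from any library.

```python
import numpy as np

# Toy "encoder": turns a variable-length input sequence into a single
# fixed-size context vector by averaging token embeddings.
rng = np.random.default_rng(0)
VOCAB_SIZE, HIDDEN = 10, 4
embedding = rng.normal(size=(VOCAB_SIZE, HIDDEN))  # one row per token id

def encode(token_ids):
    """Compress a sequence of token ids into one HIDDEN-dim context vector."""
    vectors = embedding[token_ids]   # (seq_len, HIDDEN)
    return vectors.mean(axis=0)      # fixed size, regardless of seq_len

context = encode([1, 3, 5, 7])
print(context.shape)  # (4,) — same shape whether the input had 4 or 40 tokens
```

The key property to notice: the output shape is always `(HIDDEN,)` no matter how long the input is, which is exactly what lets the decoder consume it.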
intermediate
What role does the decoder play in a seq2seq model?
The decoder uses the context vector to generate the output sequence one element at a time, often using previous outputs as input for the next step.
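The "one element at a time, using previous outputs" loop looks roughly like this. The linear scoring layer, greedy argmax, and token ids here are illustrative choices, not a specific framework's API.

```python
import numpy as np

# Toy "decoder": generates an output sequence step by step, conditioning
# each step on the context vector and the previously generated token.
rng = np.random.default_rng(1)
VOCAB_SIZE, HIDDEN = 10, 4
out_embedding = rng.normal(size=(VOCAB_SIZE, HIDDEN))
W = rng.normal(size=(2 * HIDDEN, VOCAB_SIZE))  # scores from [context; prev token]
EOS = 0  # assumed end-of-sequence token id

def decode(context, max_len=5, start_token=1):
    tokens, prev = [], start_token
    for _ in range(max_len):
        state = np.concatenate([context, out_embedding[prev]])
        next_token = int(np.argmax(state @ W))  # greedy pick of next token
        if next_token == EOS:
            break
        tokens.append(next_token)
        prev = next_token  # feed the output back in as the next input
    return tokens

output = decode(rng.normal(size=HIDDEN))
print(output)
```

Note the feedback loop: `prev = next_token` is what "using previous outputs as input for the next step" means in practice (teacher forcing replaces this with the ground-truth token during training).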
advanced
Why is the attention mechanism important in seq2seq models?
Attention lets the decoder focus on different parts of the input sequence at each step, improving accuracy, especially for long sequences where a single fixed-size context vector becomes a bottleneck.
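The standard recipe for this focusing behavior is dot-product attention with a softmax, sketched below. The sizes are toy and the function name is illustrative; real models learn the query and key projections.

```python
import numpy as np

# Toy attention: instead of one fixed context vector, the decoder scores
# every encoder state against its current query and takes a weighted sum.
def attention(query, encoder_states):
    """query: (HIDDEN,); encoder_states: (seq_len, HIDDEN)."""
    scores = encoder_states @ query              # one score per input position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax: weights sum to 1
    return weights @ encoder_states, weights     # weighted context vector

rng = np.random.default_rng(2)
states = rng.normal(size=(6, 4))  # 6 input positions, hidden size 4
context, weights = attention(rng.normal(size=4), states)
print(weights.round(2))  # higher weight = the decoder "focuses" there
```

Because the weights are recomputed at every decoding step, each output token gets its own context vector, which is what removes the fixed-vector bottleneck for long inputs.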
What does the encoder in a seq2seq model produce?
The encoder compresses the input sequence into a context vector that the decoder uses to generate output.
Which part of a seq2seq model generates the output sequence?
The decoder produces the output sequence step-by-step using the context vector.
Why is attention used in seq2seq models?
Attention allows the decoder to look at different parts of the input sequence for better output.
In seq2seq, what is typically fed into the decoder at each step?
The decoder uses the previous output and context vector to predict the next token.
Which task is a common use case for seq2seq models?
Seq2seq models are widely used for translating sentences from one language to another.
Explain how the encoder and decoder work together in a sequence-to-sequence model.
Think of the encoder as reading a story and the decoder as retelling it in another language.
Describe the purpose and benefit of the attention mechanism in seq2seq architectures.
Imagine trying to translate a long sentence by focusing on one word at a time.