Imagine you have a list of search results from a simple keyword match. Why might you want to re-rank these results using a machine learning model?
Think about how simple keyword matching might miss the best answers.
Re-ranking helps by using a model to score results based on deeper relevance, improving the order users see.
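The idea above can be sketched in a few lines. Everything below is illustrative: `keyword_match` and `model_score` are hypothetical stand-ins (a real system would use a trained relevance model, not word overlap).

```python
# Two-stage retrieval sketch: keyword matching finds candidates,
# then a model score reorders them by deeper relevance.

def keyword_match(query, docs):
    """First pass: keep documents sharing at least one word with the query."""
    q_words = set(query.lower().split())
    return [d for d in docs if q_words & set(d.lower().split())]

def model_score(query, doc):
    """Stand-in for a learned relevance model: here, the fraction of
    query words appearing in the document."""
    q_words = set(query.lower().split())
    return len(q_words & set(doc.lower().split())) / len(q_words)

docs = [
    "python sorting basics",
    "how to sort a list in python",
    "gardening tips",
]
query = "sort list python"
candidates = keyword_match(query, docs)
reranked = sorted(candidates, key=lambda d: model_score(query, d), reverse=True)
print(reranked[0])  # "how to sort a list in python" matches all query words
```

The point is the two-stage pattern: a cheap first pass narrows the candidate set, and the (more expensive) scorer only re-orders that short list.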
What is the output of the following Python code that re-ranks a list of documents by their scores?
docs = ['doc1', 'doc2', 'doc3']
scores = [0.3, 0.9, 0.5]
ranked_docs = [doc for _, doc in sorted(zip(scores, docs), reverse=True)]
print(ranked_docs)
Look at how sorting with reverse=True orders scores from highest to lowest.
The code pairs each document with its score via zip, then sorts the pairs descending on score, so it prints ['doc2', 'doc3', 'doc1']: doc2 (0.9) first, then doc3 (0.5), then doc1 (0.3).
You want to re-rank search results by understanding the meaning of queries and documents. Which model type is best suited for this?
Think about models that understand language context deeply.
Transformer models like BERT capture semantic meaning and are effective for re-ranking tasks.
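A common way to apply such models is the cross-encoder pattern: the query and each candidate document are fed to the model jointly, and it outputs one relevance score per pair. The sketch below uses a hypothetical `cross_encoder_score` stub in place of an actual BERT model, to keep the example self-contained.

```python
# Cross-encoder re-ranking pattern: score each (query, doc) pair
# jointly, then sort candidates by the resulting scores.

def cross_encoder_score(query, doc):
    # Placeholder: a real cross-encoder (e.g., a fine-tuned BERT) would
    # encode "[CLS] query [SEP] doc [SEP]" and output a relevance logit.
    # Here we just count shared words so the sketch runs as-is.
    return len(set(query.split()) & set(doc.split()))

def rerank(query, docs):
    scored = [(cross_encoder_score(query, d), d) for d in docs]
    return [d for s, d in sorted(scored, key=lambda p: p[0], reverse=True)]

docs = ["cheap flights to paris", "paris travel guide", "rust compiler internals"]
print(rerank("travel guide paris", docs)[0])  # "paris travel guide"
```

Cross-encoders are accurate but slow, since every candidate requires a full forward pass, which is why they are typically applied only to a short list from a cheaper first-stage retriever.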
When training a neural re-ranking model on a small dataset, which hyperparameter is most important to tune to prevent overfitting?
Regularization helps models generalize better on small data.
Dropout randomly disables neurons during training, reducing overfitting risk.
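A minimal sketch of (inverted) dropout in pure Python, assuming the common convention of scaling surviving activations by 1/(1-p) during training so that expected activations are unchanged at inference:

```python
import random

def dropout(activations, p=0.5, training=True, seed=None):
    """Inverted dropout: zero each unit with probability p during training,
    scaling survivors by 1/(1-p); at inference time it is the identity."""
    if not training or p == 0.0:
        return list(activations)
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]

layer = [0.2, 0.8, 0.5, 0.1]
print(dropout(layer, p=0.5, seed=0))   # some units zeroed, rest scaled by 2
print(dropout(layer, training=False))  # unchanged at inference time
```

Because each training step sees a different random subnetwork, no single neuron can dominate, which is what reduces overfitting on small datasets.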
You have two re-ranking models. Model A has a Mean Reciprocal Rank (MRR) of 0.75, Model B has an MRR of 0.65. What does this tell you?
MRR measures how high the first correct answer appears on average.
A higher MRR means the model places the first correct answer closer to the top on average, so Model A (0.75) ranks correct answers higher than Model B (0.65).
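The metric itself is simple to compute: for each query, take the reciprocal of the rank of the first relevant result, then average over queries. A minimal sketch (the query data below is made up for illustration):

```python
def mean_reciprocal_rank(ranked_results, relevant):
    """MRR: mean of 1/rank of the first relevant document across queries.
    ranked_results[i] is the ranked list for query i; relevant[i] is its
    correct answer. Queries where the answer never appears contribute 0."""
    total = 0.0
    for results, rel in zip(ranked_results, relevant):
        for rank, doc in enumerate(results, start=1):
            if doc == rel:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

# Two queries: correct answer at rank 1 and rank 2 -> (1 + 1/2) / 2 = 0.75
runs = [["a", "b", "c"], ["x", "y", "z"]]
gold = ["a", "y"]
print(mean_reciprocal_rank(runs, gold))  # 0.75
```

So an MRR of 0.75 corresponds, on average, to the first correct answer sitting between rank 1 and rank 2.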