Agentic AI · ~20 mins

Long-term memory with vector stores in Agentic AI - Practice Problems & Coding Challenges

Challenge: 5 Problems
🎖️ Vector Memory Master: get all challenges correct to earn this badge. Test your skills under time pressure!
🧠 Conceptual (intermediate)
How does a vector store help with long-term memory in AI agents?

Imagine you have an AI agent that needs to remember important information from past conversations. How does using a vector store help the agent keep and find this information later?

A. It deletes old memories automatically to keep only the newest information.
B. It saves all conversations as plain text files for the agent to read later.
C. It stores information as numbers in a way that lets the agent quickly find similar past memories based on meaning.
D. It compresses the data into a single number to save space but loses details.
💡 Hint

Think about how the agent finds related information, not just storing raw text.
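As background, a vector store keeps each memory as a numeric embedding and retrieves by similarity rather than exact text match. A minimal sketch with NumPy; the hand-written vectors and memory texts here are illustrative assumptions, since a real system would produce embeddings with a trained model:

```python
import numpy as np

# Toy "embeddings" for three past memories (in practice these come
# from an embedding model, not hand-written numbers).
memories = ["user likes hiking", "user owns a cat", "user works remotely"]
memory_vectors = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.0],
    [0.0, 0.1, 0.9],
])

def retrieve(query_vector, vectors, texts):
    # Cosine similarity: dot product of L2-normalized vectors.
    q = query_vector / np.linalg.norm(query_vector)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q
    return texts[int(np.argmax(scores))]

# A query vector pointing in the "hiking" direction retrieves that memory.
query = np.array([0.8, 0.2, 0.1])
print(retrieve(query, memory_vectors, memories))  # user likes hiking
```

The key point is that retrieval is by meaning (direction in vector space), not by keyword lookup.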

Predict Output (intermediate)
Output of similarity search in a vector store

Given the following code snippet using a vector store, what will be the output?

from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# Stored vectors representing past memories
memory_vectors = np.array([[1, 0], [0, 1], [1, 1]])

# New query vector
query_vector = np.array([[0.9, 0.1]])

# Compute similarity scores
scores = cosine_similarity(query_vector, memory_vectors)

# Find index of most similar memory
most_similar_index = np.argmax(scores)
print(most_similar_index)
A. 0
B. 1
C. 2
D. None of the above
💡 Hint

Look at which stored vector is closest to the query vector in direction.
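For small 2-D vectors, it can help to compute cosine similarity by hand. A quick sketch on generic vectors (not the ones in the question): a vector near the x-axis scores close to 1.0 against the x-axis unit vector and close to 0.0 against the y-axis unit vector.

```python
import numpy as np

def cos_sim(a, b):
    # cos(theta) = a·b / (|a| |b|): 1.0 means same direction, 0.0 orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

v = np.array([0.95, 0.05])
print(round(cos_sim(v, np.array([1, 0])), 3))  # near 1.0
print(round(cos_sim(v, np.array([0, 1])), 3))  # near 0.0
```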

Model Choice (advanced)
Choosing the best embedding model for long-term memory

You want to build a long-term memory system for an AI agent that remembers conversations. Which embedding model is best to create vectors that capture semantic meaning for diverse topics?

A. A pretrained transformer-based model like BERT that creates context-aware embeddings
B. A simple bag-of-words model that counts word frequency
C. A one-hot encoding model that marks presence or absence of words
D. A random vector generator that assigns random numbers to texts
💡 Hint

Think about which model understands word meaning in context.
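One way to see why count-based models fall short: a bag-of-words vector depends only on word counts, so sentences with the same words in a different order map to identical vectors. A minimal stdlib sketch (toy vocabulary chosen for illustration):

```python
from collections import Counter

def bag_of_words(text, vocab):
    # Count-based vector: one slot per vocabulary word.
    # Word order, and therefore context, is thrown away.
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

vocab = ["the", "dog", "bit", "man"]
a = bag_of_words("the dog bit the man", vocab)
b = bag_of_words("the man bit the dog", vocab)
print(a == b)  # True: opposite meanings, identical vectors
```

A context-aware transformer embedding would assign these two sentences different vectors, which is what makes such models suitable for semantic memory.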

Hyperparameter (advanced)
Effect of vector dimension size on memory performance

When creating vector embeddings for long-term memory, what is the effect of increasing the vector dimension size?

A. It always improves memory retrieval speed and accuracy without downsides.
B. It causes the vector store to crash due to memory overflow.
C. It reduces the quality of embeddings because vectors become too sparse.
D. It can improve the detail captured but may slow down search and require more storage.
💡 Hint

Consider trade-offs between detail and resource use.
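The storage side of this trade-off is easy to quantify: raw storage grows linearly with dimension, and brute-force search cost grows with it as well. A rough sketch assuming float32 vectors and ignoring index overhead (sizes chosen for illustration):

```python
import numpy as np

def store_bytes(n_vectors, dim, dtype=np.float32):
    # Raw storage for a dense vector store: vectors x dims x bytes-per-value.
    return n_vectors * dim * np.dtype(dtype).itemsize

for dim in (128, 768, 3072):
    mb = store_bytes(1_000_000, dim) / 1e6
    print(f"dim={dim}: {mb:.0f} MB for 1M vectors")
```

Going from 128 to 3072 dimensions multiplies both storage and per-query dot-product work by 24x, which is why higher-dimensional embeddings are a trade-off rather than a free win.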

🔧 Debug (expert)
Why does this vector store similarity search return wrong results?

Review the code below. The similarity search returns unexpected results. What is the main issue?

import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

memory_vectors = np.array([[1, 0], [0, 1], [1, 1]])
query_vector = np.array([0.9, 0.1])

scores = []
for vec in memory_vectors:
    scores.append(cosine_similarity(query_vector, vec))

most_similar_index = np.argmax(scores)
print(most_similar_index)
A. The cosine_similarity function does not handle vector inputs correctly and returns a scalar instead of an array.
B. The code is correct and will return the expected most similar index.
C. The cosine_similarity function is correct, but the memory_vectors array should be normalized first.
D. The query_vector should be reshaped to 2D before computing similarity to match memory_vectors shape.
💡 Hint

Check how the cosine similarity is computed for each vector.
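When debugging similarity code like this, one useful technique is to compare the loop against an independent vectorized computation: if the two agree, the per-vector function is behaving as intended. A sketch of that sanity check, reusing the data from the question:

```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

memory_vectors = np.array([[1, 0], [0, 1], [1, 1]])
query_vector = np.array([0.9, 0.1])

# Loop version, as in the question: one scalar score per stored vector.
loop_scores = np.array([cosine_similarity(query_vector, v) for v in memory_vectors])

# Independent vectorized version: normalize rows, then one matrix-vector product.
normed = memory_vectors / np.linalg.norm(memory_vectors, axis=1, keepdims=True)
vec_scores = normed @ (query_vector / np.linalg.norm(query_vector))

print(np.allclose(loop_scores, vec_scores))  # True if the loop is correct
```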