MongoDB · query · ~20 mins

Schema design for write-heavy workloads in MongoDB - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
Choosing the right schema for frequent writes

You have a MongoDB collection that receives thousands of writes per second. Which schema design approach helps reduce write conflicts and improves write performance?

A. Use a schema with small, flat documents and avoid embedding large arrays, to reduce document size and lock contention.
B. Use a normalized schema with references to separate documents to avoid large document sizes.
C. Embed all related data in a single document to minimize the number of writes.
D. Store all data in a single large document to reduce the number of documents and simplify queries.
💡 Hint

Think about how MongoDB locks documents during writes and how document size affects write speed.
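For intuition, here is a minimal mongosh sketch of the two write patterns the hint contrasts. Collection and field names are illustrative, and this assumes a running MongoDB deployment:

```javascript
// Small, flat document per write: each insert touches a compact document,
// so WiredTiger's document-level concurrency control stays cheap.
db.page_views.insertOne({ page: "/home", userId: 42, at: new Date() })

// Contrast: pushing into one ever-growing embedded array concentrates
// every write on the same large document.
db.pages.updateOne(
  { _id: "/home" },
  { $push: { views: { userId: 42, at: new Date() } } }
)
```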

🔍 Query Result
intermediate
Result of updating embedded documents in a write-heavy schema

Consider a MongoDB collection where user profiles embed an array of login timestamps. You run this update:

db.users.updateOne({ _id: 1 }, { $push: { logins: ISODate() } })

What is the expected effect on write performance if the logins array grows very large?

A. Write performance degrades because the entire document must be rewritten when the array grows large.
B. Write performance remains stable because MongoDB only updates the array element.
C. Write performance improves because the array is indexed automatically.
D. Write performance is unaffected because MongoDB uses row-level locking.
💡 Hint

Consider how MongoDB handles document updates and the impact of large arrays on document size.
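To build intuition for the hint, the snippet below is a toy Node.js simulation (not a real MongoDB benchmark): it counts the bytes that would be rewritten if the whole document is serialized on every push, using JSON size as a rough stand-in for BSON size.

```javascript
// Toy model: if each $push rewrites the full document, the cost of the
// i-th write is proportional to the document size after i pushes, so the
// cumulative rewrite cost grows roughly quadratically with array length.
function bytesRewritten(numLogins) {
  const doc = { _id: 1, logins: [] };
  let total = 0;
  for (let i = 0; i < numLogins; i++) {
    doc.logins.push("2024-01-01T00:00:00Z"); // stand-in for ISODate()
    total += JSON.stringify(doc).length;     // full-document rewrite cost
  }
  return total;
}

// Doubling the number of pushes roughly quadruples cumulative rewrite cost.
console.log(bytesRewritten(2000) / bytesRewritten(1000));
```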

📝 Syntax
advanced
Identify the correct schema design for high write throughput

Which MongoDB schema design below best supports a write-heavy workload where each write is a new event that must be stored quickly?

A. Store all events in a single document with an ever-growing array of events.
B. Embed events inside user documents and update the entire user document on each event.
C. Use a capped collection with a fixed size to store events, overwriting old events automatically.
D. Store each event as a separate small document in a collection with an index on the event timestamp.
💡 Hint

Think about how document size and indexing affect write speed and storage.
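As a hedged mongosh sketch of two event-storage patterns mentioned in the options (names are illustrative; requires a live deployment):

```javascript
// One-document-per-event pattern: every write is a small insert, and the
// timestamp index supports time-range queries without rewriting old data.
db.events.createIndex({ ts: 1 })
db.events.insertOne({ type: "login", userId: 42, ts: new Date() })

// A capped collection also gives fast, append-style writes, but it silently
// discards the oldest documents once the fixed size limit is reached.
db.createCollection("recent_events", { capped: true, size: 100 * 1024 * 1024 })
```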

🔧 Debug
advanced
Why does this write-heavy schema cause slow writes?

A MongoDB collection stores user activity logs embedded inside user documents as arrays. Over time, writes slow down significantly. What is the most likely cause?

A. MongoDB does not support arrays inside documents, causing errors.
B. The arrays grow too large, causing document rewrites and increased disk I/O.
C. The collection lacks an index on the user ID field.
D. The server is running out of RAM due to too many indexes.
💡 Hint

Consider how MongoDB handles document updates and the impact of large embedded arrays.
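A common refactor for this situation, shown as a minimal mongosh sketch (collection and field names are illustrative; requires a live deployment):

```javascript
// Instead of growing an array inside each user document on every event...
db.users.updateOne(
  { _id: 42 },
  { $push: { activity: { action: "view", at: new Date() } } }
)

// ...a separate collection keeps each write small and bounded, and an index
// still makes per-user activity lookups fast.
db.user_activity.insertOne({ userId: 42, action: "view", at: new Date() })
db.user_activity.createIndex({ userId: 1, at: -1 })
```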

🧠 Conceptual
expert
Optimizing schema for write-heavy workloads with sharding

You have a MongoDB cluster with sharding enabled to handle a write-heavy workload. Which shard key choice best supports even write distribution and avoids write hotspots?

A. Use a monotonically increasing field like a timestamp as the shard key.
B. Use a hashed value of a frequently updated field as the shard key.
C. Use a compound shard key combining user ID and a random suffix.
D. Use a constant value as the shard key for all documents.
💡 Hint

Think about how shard keys affect data distribution and write load balancing.
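For reference, a hedged mongosh sketch of sharding an events collection (the namespace `app.events` is illustrative; requires a sharded cluster):

```javascript
// A hashed shard key scatters monotonically increasing values (ObjectIds,
// timestamps) across chunks, so inserts spread over all shards.
sh.shardCollection("app.events", { _id: "hashed" })

// By contrast, a range shard key on a monotonically increasing field such as
// { ts: 1 } routes every new insert to the chunk holding the current maximum
// value — a single "hot" shard becomes the write bottleneck.
```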