What if your database could speed up queries while still keeping data accurate? How do you balance that trade-off?
Why Denormalization Trade-offs in MongoDB? - Purpose & Use Cases
Imagine you have a big notebook where you write down all your friends' phone numbers and addresses. Whenever a friend moves or changes their number, you have to find every page where their info appears and update it manually.
This manual updating is slow and error-prone. You might forget to change some pages, leaving wrong or outdated info behind. It's frustrating and wastes a lot of time.
Denormalization lets you store some repeated information together in one place, so you don't have to jump around to find it. This speeds up reading data and reduces the chance of missing updates, but you must be careful to keep the copies in sync.
// Without denormalization: every order needs a second lookup to attach customer info
db.orders.find({}).forEach(order => {
  order.customer = db.customers.findOne({ _id: order.customerId });
});

// With denormalization: customer info already lives inside each order document
db.orders.find({});

Denormalization makes your database faster to read and simpler to query by storing related data together, but it requires thoughtful updates to keep the copies accurate.
An online store keeps customer details inside each order record to quickly show order history without extra lookups, improving user experience during busy sales.
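To make the "keep the copies in sync" caveat concrete, here is a minimal in-memory sketch in plain Node.js. The collection names, IDs, and the `updateCustomer` helper are illustrative, not part of any MongoDB API; in a real deployment you would update the `customers` document and then run an `updateMany` on `orders` to refresh every embedded copy.

```javascript
// Illustrative in-memory model: a "customers" collection and an "orders"
// collection that embeds a denormalized copy of each order's customer.
const customers = [{ _id: 1, name: "Ada", city: "London" }];
const orders = [
  { _id: 101, customerId: 1, customer: { name: "Ada", city: "London" } },
  { _id: 102, customerId: 1, customer: { name: "Ada", city: "London" } },
];

// When a customer changes, every embedded copy must change too, or reads
// will return stale data. In MongoDB this would be one update on the
// customers collection plus an updateMany on orders.
function updateCustomer(customerId, changes) {
  const customer = customers.find(c => c._id === customerId);
  Object.assign(customer, changes);
  for (const order of orders) {
    if (order.customerId === customerId) {
      Object.assign(order.customer, changes);
    }
  }
}

updateCustomer(1, { city: "Paris" });
console.log(orders.every(o => o.customer.city === "Paris")); // true
```

Forgetting the second half of this update (the loop over orders) is exactly the inconsistency risk the notebook analogy describes.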
Manual updates across many places are slow and error-prone.
Denormalization stores related data together to speed up reads.
It requires careful updates to avoid inconsistent data.