In microservices architecture, bounded contexts define clear boundaries for domain models. Which of the following best describes the Shared Kernel relationship between two bounded contexts?
Think about when two teams need to work together on a small part of the domain.
The Shared Kernel pattern means two bounded contexts share a small, well-defined part of the domain model and collaborate closely to keep it consistent.
You have two bounded contexts: Order Management and Inventory. The Order Management context needs to know when inventory levels change but should not block its operations waiting for Inventory. Which integration pattern fits best?
Consider loose coupling and asynchronous communication.
Event-driven integration allows Inventory to publish changes asynchronously, so Order Management can react without blocking.
You have two bounded contexts: Payments and Notifications. Payments has high traffic during sales, while Notifications has steady, low traffic. What is the best approach to scaling these contexts?

Think about independent scaling and resource optimization.
Deploying bounded contexts as separate microservices allows scaling each independently based on their load.
Between two bounded contexts, Billing and Customer Support, Billing is the authoritative source of customer data. Customer Support must use Billing's data but cannot change it. Which relationship fits best and what is a key tradeoff?
Consider who controls the data and who must adapt.
In the Conformist relationship, the downstream context (Customer Support) adopts the upstream context's (Billing's) model as-is and must conform to its changes; the key tradeoff is tight coupling to a model the downstream team does not control.
In a microservices system with bounded contexts communicating via asynchronous events, the Inventory service publishes stock updates. The Order service consumes these events to update availability. If Inventory publishes 1000 events per second and network latency plus processing adds 200ms delay per event, estimate the maximum delay before Order sees the latest stock update.
Focus on per-event delay, not total event volume backlog.
The maximum staleness is the per-event latency: about 200ms. Because events are processed continuously (pipelined) rather than queued behind one another, the 1000 events/second throughput does not add to the delay as long as the consumer keeps up with that rate; each update reaches Order roughly 200ms after Inventory publishes it.