Consider a Spring Boot application that fetches data from a slow database. Why does adding caching improve the application's performance?
Think about where cached data is stored and how it affects database access.
Caching keeps frequently used data in fast-access memory, so the application doesn't need to query the database every time. This reduces latency and improves performance.
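The pattern that @Cacheable automates can be sketched in plain Java (a minimal cache-aside illustration with hypothetical names; no Spring involved):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the cache-aside pattern that @Cacheable automates:
// check fast in-memory storage first, hit the slow source only on a miss.
public class CacheAside {
    private final Map<Integer, String> cache = new HashMap<>();
    public int dbQueries = 0; // counts trips to the "slow database"

    // Stand-in for a slow repository call
    private String loadFromDatabase(int id) {
        dbQueries++;
        return "user-" + id;
    }

    public String getUser(int id) {
        // Serve from memory when possible; fall back to the database on a miss
        return cache.computeIfAbsent(id, this::loadFromDatabase);
    }
}
```

Repeated requests for the same id cost one database trip in total, which is where the latency win comes from.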
In a Spring Boot app using caching, what occurs when a cached value expires and a request for that data comes in?
Think about how caching frameworks handle expired entries.
When a cached value expires, the next request triggers a fresh data fetch from the source, and the cache is updated with this new data.
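The expire-then-refetch behavior can be sketched as a small TTL cache in plain Java (an illustration, not Spring's implementation; the clock is a field so expiry is deterministic):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of time-to-live expiry: an expired entry is treated as a miss,
// triggering a fresh fetch that overwrites the stale value.
public class TtlCache {
    private static class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<Integer, Entry> cache = new HashMap<>();
    private final long ttlMillis;
    public int fetches = 0;   // counts trips to the backing source
    public long now = 0;      // injectable clock for deterministic tests

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    private String fetchFromSource(int id) {
        fetches++;
        return "user-" + id + "@t" + now;
    }

    public String get(int id) {
        Entry e = cache.get(id);
        if (e == null || e.expiresAt <= now) {
            // missing or expired: refresh from the source and re-cache
            e = new Entry(fetchFromSource(id), now + ttlMillis);
            cache.put(id, e);
        }
        return e.value;
    }
}
```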
Given a Spring Boot service with caching enabled on method getUserById, the following calls happen in order:
- getUserById(1)
- getUserById(2)
- getUserById(1)
- getUserById(3)
- getUserById(2)
How many cache hits occur?
Remember that the first call for each ID loads data into cache; subsequent calls for the same ID hit the cache.
Calls 1, 2, and 4 load data into cache (no hits). Calls 3 and 5 hit the cache for IDs 1 and 2 respectively, so 2 cache hits total.
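The counting above can be checked by replaying the call sequence against a simple map-based cache (plain Java, not Spring):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Replays a sequence of getUserById calls against a simple cache
// and counts how many of them are hits.
public class HitCounter {
    public static int countHits(List<Integer> ids) {
        Map<Integer, String> cache = new HashMap<>();
        int hits = 0;
        for (int id : ids) {
            if (cache.containsKey(id)) {
                hits++;                       // already loaded: cache hit
            } else {
                cache.put(id, "user-" + id);  // first access: load (miss)
            }
        }
        return hits;
    }
}
```

Running the sequence 1, 2, 1, 3, 2 yields 2 hits, matching the answer.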
Look at this Spring Boot method with caching:

@Cacheable("users")
public User getUser(int id) {
    return userRepository.findById(id).orElse(null);
}

Despite calls to getUser, the cache never stores data. What is the likely cause?
Think about how Spring AOP proxies work with caching annotations.
Spring caching is applied through proxies that intercept calls arriving from outside the bean. If getUser is invoked from another method in the same class (self-invocation via this), the call bypasses the proxy, so the @Cacheable logic never runs and nothing is cached.
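The bypass can be demonstrated with a plain JDK dynamic proxy (a simplified stand-in for Spring's AOP proxy; the interceptor just counts invocations where @Cacheable logic would run):

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates why self-invocation bypasses proxy-based caching: only calls
// that enter through the proxy are intercepted; `this.method()` inside the
// target object goes straight to the implementation.
public class ProxyBypass {
    public interface UserService {
        String getUser(int id);
        String getUserViaSelf(int id); // internally calls this.getUser(id)
    }

    public static class UserServiceImpl implements UserService {
        public String getUser(int id) { return "user-" + id; }
        public String getUserViaSelf(int id) { return getUser(id); } // bypasses proxy
    }

    public static final AtomicInteger intercepted = new AtomicInteger();

    public static UserService proxied() {
        UserService target = new UserServiceImpl();
        return (UserService) Proxy.newProxyInstance(
            UserService.class.getClassLoader(),
            new Class<?>[] { UserService.class },
            (proxy, method, args) -> {
                intercepted.incrementAndGet(); // where caching logic would run
                return method.invoke(target, args);
            });
    }
}
```

Calling getUserViaSelf through the proxy is intercepted once, but the internal getUser call inside it is not, which mirrors why self-invoked @Cacheable methods never populate the cache.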
Which option correctly caches the result only if user.isActive() returns true?
Remember that Spring cache SpEL expressions require '#' before parameter names, and that 'unless' vetoes caching when its expression evaluates to true.
Option D uses 'unless' with the negated SpEL expression '!#user.isActive()'. Since 'unless' skips caching when its expression is true, negating isActive() means the result is stored only when the user is active.
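The 'unless' semantics can be sketched in plain Java (an illustration under the assumption that option D is roughly @Cacheable(value = "users", unless = "!#user.isActive()"); the predicate below plays the role of the SpEL expression):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntFunction;
import java.util.function.Predicate;

// Sketch of `unless` semantics: the method always runs on a miss, and the
// result is stored only when the `unless` predicate evaluates to false --
// analogous to @Cacheable(value = "users", unless = "!#user.isActive()").
public class UnlessCache {
    public record User(int id, boolean active) {
        public boolean isActive() { return active; }
    }

    private final Map<Integer, User> cache = new HashMap<>();
    private final Predicate<User> unless; // skip caching when this is true

    public UnlessCache(Predicate<User> unless) { this.unless = unless; }

    public User get(int id, IntFunction<User> loader) {
        User cached = cache.get(id);
        if (cached != null) return cached;
        User result = loader.apply(id);
        if (!unless.test(result)) {
            cache.put(id, result); // cache only when `unless` is false
        }
        return result;
    }

    public boolean isCached(int id) { return cache.containsKey(id); }
}
```

With unless = u -> !u.isActive(), active users are cached and inactive users are returned but never stored, matching the intent of option D.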