Tenifs

Solutions for Redis cache avalanche, cache breakdown, and cache penetration

Cache Avalanche

Cache avalanche: when a large amount of cached data expires at the same time, or Redis itself goes down, the flood of requests that can no longer be served from the cache hits the database directly. The sudden spike in load can, in severe cases, crash the database and bring the whole system down.

Solutions:

  • Stagger expiration times: avoid giving a large batch of keys the same TTL. Add a random jitter to each expiration time so the keys do not all expire at once.
  • Mutex lock: if the requested data is not in Redis, acquire a mutex so that only one request rebuilds the cache (reads the database and writes the result back to Redis) at a time, releasing the lock once the cache is built. Requests that fail to acquire the lock either wait for the lock to be released and then re-read the cache, or return a null/default value immediately. Give the lock a timeout: if the holder crashes or blocks without releasing it, other requests would otherwise wait forever and the whole system would become unresponsive.
  • Background cache updates: business threads no longer update the cache. Cached keys get no expiration time ("permanently valid"), and a background thread refreshes them on a schedule.
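The first solution can be sketched as a small helper that adds random jitter to a base TTL before calling `EXPIRE`/`SET ... EX`. The function name and the 10% jitter ratio are illustrative choices, not part of any Redis API:

```python
import random

def jittered_ttl(base_ttl: int, jitter_ratio: float = 0.1) -> int:
    """Return base_ttl plus a random jitter of up to jitter_ratio * base_ttl.

    Spreading expirations out prevents a batch of keys cached at the same
    moment from all expiring together and stampeding the database.
    """
    jitter = random.randint(0, int(base_ttl * jitter_ratio))
    return base_ttl + jitter

# e.g. a key cached with a nominal 1-hour TTL actually expires
# somewhere between 3600 and 3960 seconds from now:
ttl = jittered_ttl(3600)
```

You would pass the returned value wherever you currently pass a fixed TTL.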

Cache Breakdown

Cache breakdown: when a hot key in the cache expires, the large number of requests concentrated on that key all miss the cache at once and go straight to the database, and the high concurrency can overwhelm it.

Solutions:

  • Mutex lock: ensure that only one thread rebuilds the cache for the key at a time. Requests that fail to acquire the lock either wait for it to be released or return a null/default value.
  • No expiration for hot data: do not set a TTL on hot keys and refresh them asynchronously in the background, or have a background thread refresh the value and extend the TTL shortly before it would expire.
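The mutex approach can be illustrated with an in-process sketch. A real deployment would use a distributed lock in Redis itself (e.g. `SET lock_key token NX EX 10`); here a plain `threading.Lock` and a dict stand in for the lock and the cache, which is an assumption made purely for the demonstration:

```python
import threading
import time

cache = {}                       # stands in for Redis
rebuild_lock = threading.Lock()  # stands in for a distributed lock (SET ... NX EX)
db_reads = 0                     # counts how many requests reach the "database"

def load_from_db(key):
    """Simulated slow database query."""
    global db_reads
    db_reads += 1
    time.sleep(0.05)
    return f"value-of-{key}"

def get(key):
    value = cache.get(key)
    if value is not None:
        return value
    # Cache miss: only one thread rebuilds; the rest block here, then re-check.
    with rebuild_lock:
        value = cache.get(key)   # double-check after acquiring the lock
        if value is None:
            value = load_from_db(key)
            cache[key] = value
    return value

# 10 concurrent requests for the same expired hot key:
threads = [threading.Thread(target=get, args=("hot",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db_reads)  # 1 — only a single request hit the database
```

The double-check after acquiring the lock is what keeps the waiting threads from each querying the database in turn once the lock is released.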

Cache Penetration

Cache penetration: when the requested data exists neither in the cache nor in the database, the cache miss leads to a database query that also finds nothing, so no cache entry can be built to serve subsequent requests. A flood of such requests puts the load directly on the database and can crash it.

Solutions:

  • Request validation: validate request parameters at the entry point and reject illegal requests immediately, so they never reach the cache or the database.
  • Cache null values or default values: when a query misses both the cache and the database, store a null or default value in the cache so that subsequent requests for the same key are answered from the cache without touching the database.
  • Bloom filter: on each request, first check a Bloom filter to quickly determine whether the data could exist. If the filter says it does not, skip the database query entirely, protecting the database.
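To make the Bloom filter idea concrete, here is a minimal in-memory sketch. Production systems would more likely use the RedisBloom module (`BF.ADD`/`BF.EXISTS`); the class below, its sizes, and its SHA-256-based hashing are illustrative assumptions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, small false-positive rate."""

    def __init__(self, size_bits=8192, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False means "definitely absent": skip the database entirely.
        return all((self.bits[pos // 8] >> (pos % 8)) & 1
                   for pos in self._positions(item))

# Populate the filter with every key that exists in the database,
# then consult it before each lookup:
bf = BloomFilter()
bf.add("user:1")
bf.add("user:2")
print(bf.might_contain("user:1"))  # True — key may exist, go on to cache/DB
```

A `True` answer still requires a cache/database lookup (false positives are possible), but a `False` answer is definitive, which is what stops penetration traffic.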