System Design – Caching
Q1. What is Caching in terms of System Design?
Answer: Caching is the practice of storing frequently accessed data in a temporary, fast-access store called a cache, so that upcoming requests can be served from the cache instead of going back to the server. Caching reduces backend server load, speeds up data access, and lowers latency.
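At its core, the pattern is a lookup-before-fetch. Here is a minimal sketch in Python; the plain dictionary cache and fetch_from_server are hypothetical stand-ins for a real cache store and backend call:

```python
# Minimal sketch of the core caching idea: serve repeated requests
# from a local store instead of going back to the server each time.
cache = {}  # temporary storage: key -> cached value

def fetch_from_server(key):
    # Hypothetical stand-in for an expensive backend/database call.
    return f"value-for-{key}"

def get(key):
    if key in cache:                    # cache hit: no server round trip
        return cache[key]
    value = fetch_from_server(key)      # cache miss: go to the server once
    cache[key] = value                  # store for upcoming requests
    return value

print(get("user:1"))  # miss -> hits the server
print(get("user:1"))  # hit  -> served from the cache
```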
Q2. At what levels can caching be applied?
Answer: Caching can be applied at the hardware level (e.g., the cache on the motherboard), at the software level (e.g., by the operating system), and at the application level (e.g., a browser caching web pages). Caching can be applied on both the client side and the server side.
Q3. How does caching fit into a system design?
Answer: The caching layer usually sits between the request senders and the actual compute or database servers. It consists of one or more dedicated servers that absorb requests so they do not have to hit the compute and database servers directly.
For example, with a load balancer sending requests to database servers, the cache is another set of servers placed between these two layers. Requests go to the caching layer first; if the data is not found there, they proceed to the actual servers. The caching layer is mostly used for reads, although caching is possible for both reads and writes, as sketched below.
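A rough sketch of that layered flow, using a hypothetical CacheLayer class in front of a Database stand-in; reads fall through to the database on a miss, and writes here update both stores (a write-through style, one of several possible write policies):

```python
# Sketch of a caching layer sitting between request handlers and the
# database. Database is a hypothetical stand-in for the real store.
class Database:
    def __init__(self):
        self.rows = {}

    def read(self, key):
        return self.rows.get(key)

    def write(self, key, value):
        self.rows[key] = value

class CacheLayer:
    def __init__(self, db):
        self.db = db
        self.cache = {}

    def read(self, key):
        if key in self.cache:            # hit: request never reaches the DB
            return self.cache[key]
        value = self.db.read(key)        # miss: fall through to the DB
        if value is not None:
            self.cache[key] = value
        return value

    def write(self, key, value):
        self.db.write(key, value)        # write goes to the source of truth
        self.cache[key] = value          # write-through: keep cache in sync

layer = CacheLayer(Database())
layer.write("order:7", {"status": "shipped"})
print(layer.read("order:7"))  # served from the cache
```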
Q4. What are the factors to consider while preparing a caching strategy for the systems?
Answer: Below are the factors to consider while designing the caching layer in system design.
Cache size: A larger cache means more memory consumption, while a smaller one means lower performance, frequent cache misses, and more data movement. Hence, the size should be balanced carefully based on the requirements.
Cache Eviction Policy: Based on your access patterns, you decide when, how, and which entries to evict from the cache when it is full. LRU, MRU, LFU, and FIFO are some common strategies.
Expiration Policy: This decides when data should be considered expired, allowing stale data to be removed. Expiration can be based on age (a time-to-live, or TTL) or on specific triggers (see the TTL sketch after this list).
Consistency: The effort needed to keep the cache consistent, i.e., keeping data in sync between the cache and the original store when updates happen, is a crucial consideration. Data that is read frequently but written rarely can, if kept in the cache, increase performance manifold. However, these decisions depend on the system's requirements.
Hotspots: A hotspot is data that is accessed repeatedly. Identifying hotspots with monitoring methods and tools, and keeping them available in the cache, can increase performance manifold.
Caching for distributed systems: As the system, and with it the caching layer, becomes more distributed, factors such as data partitioning, replication, and consistency play a crucial role in deciding the caching strategy.
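To make the expiration policy concrete, here is a minimal age-based (TTL) sketch; the 60-second TTL and the helper names are arbitrary choices for the example:

```python
# Minimal sketch of age-based expiration (TTL): entries older than
# ttl_seconds are treated as expired and removed on access.
import time

ttl_seconds = 60        # arbitrary TTL chosen for the example
cache = {}              # key -> (value, stored_at)

def put(key, value):
    cache[key] = (value, time.monotonic())

def get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, stored_at = entry
    if time.monotonic() - stored_at > ttl_seconds:
        del cache[key]  # stale: evict and report a miss
        return None
    return value
```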
Q5. What are caching strategies, or more precisely, the cache eviction strategies?
Answer: Below are the different eviction strategies and their descriptions. The caching layer can be thought of as a hash map of keys plus the actual data. We need to decide which cache items to remove and when. Not only the cached data but also the keys or metadata pointing to it must be organized around outgoing data (entries being evicted) and incoming data (new entries being added to the cache).
LRU: LRU stands for Least Recently Used. The least recently used data is evicted first when the cache is full (see the sketch after this list).
MRU: MRU stands for Most Recently Used. This evicts the most recently used data first when the cache is full.
LFU: LFU stands for Least Frequently Used. This evicts the least frequently used data first when the cache is full.
FIFO: FIFO stands for First In, First Out. This evicts the oldest items first when the cache is full.
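As a concrete example of one of these policies, here is a minimal LRU cache sketch built on Python's OrderedDict; the capacity of 3 is arbitrary:

```python
# Minimal LRU cache sketch: OrderedDict keeps insertion order, so the
# least recently used key is always at the front after move_to_end calls.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity=3):  # capacity is arbitrary for the example
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)   # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = LRUCache()
for k in ("a", "b", "c"):
    cache.put(k, k.upper())
cache.get("a")           # "a" becomes most recently used
cache.put("d", "D")      # cache is full: "b" is the LRU entry and is evicted
print(cache.get("b"))    # None
```

OrderedDict is convenient here because move_to_end and popitem(last=False) give constant-time recency tracking and eviction.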
Q6. When is caching useful?
Answer: Caching is very useful when the same data is read frequently, or when some temporary inconsistency is acceptable in exchange for better read performance. It is also helpful when the cost of retrieving (or even writing) data is high, since serving it from the cache lowers that cost.
Q7. What are the trade-offs of caching?
Answer: Caching requires extra servers, extra design effort, and makes the overall system more complex. Eviction strategies must be designed correctly, or the cache may become useless. There is also the possibility of serving stale data, depending on the caching strategy and the requirements placed on the system.
Q8. Which technologies are most widely used for caching?
Answer: Some of the most widely used caching technologies are Redis (REmote DIctionary Server), Memcached, Varnish Cache, Apache Ignite, Hazelcast, Ehcache, Nginx Caching, Amazon ElastiCache.
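As a quick illustration, here is a sketch of basic Redis usage, assuming a Redis server running on localhost and the redis-py client; the key name and the 300-second TTL are arbitrary example values:

```python
# Sketch of basic Redis usage with the redis-py client: set a value with
# a TTL and read it back. Assumes a Redis server running on localhost.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("session:42", "alice", ex=300)  # ex=300: expire after 300 seconds
print(r.get("session:42"))            # "alice" (or None once expired)
```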
Q9. What are the important factors to consider when choosing a caching technology?
Answer: The factors to consider are your own requirements, budget, performance needs, scalability, consistency, data model, and the programming language your organization is well versed in.
Q10. What are CDNs?
Answer: CDN stands for Content Delivery Network. As the name suggests, it is a network of servers distributed across geographies. These servers cache and replicate content across different parts of the world. When a user makes a request, the nearest CDN server takes it up, reducing latency, removing the burden from the origin server, and increasing performance.
CDNs usually cache both static and dynamic content, such as web pages, images, videos, HTML, CSS, JavaScript, etc.
CDNs also provide enhanced security features such as Distributed Denial of Service (DDoS) protection, Web Application Firewall (WAF) capabilities, and SSL/TLS encryption.
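Origin servers usually tell a CDN what may be cached, and for how long, through standard HTTP cache headers. A minimal sketch, assuming Flask as the web framework and an arbitrary one-day max-age:

```python
# Sketch: an origin server marking a response as cacheable by a CDN
# for one day via the Cache-Control header. Flask is an assumed choice.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/logo.png")
def logo():
    resp = make_response(b"...image bytes...")  # placeholder body
    resp.headers["Cache-Control"] = "public, max-age=86400"  # cacheable for 1 day
    return resp
```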
Q11. Which CDNs are the most widely used?
Answer: The CDNs that are most widely used are Cloudflare, Amazon CloudFront, Akamai, Fastly, and Google Cloud CDN.