Memorystore Redis vs Memcached: Which Caching Layer Should You Use

Ben Makansi
February 27, 2026

Memorystore is GCP's managed in-memory caching service, and it supports two engines: Redis and Memcached. They are both open source caching technologies, but they are not interchangeable. Redis is a more featureful data structure server. Memcached is a simpler, leaner key-value cache. The Associate Cloud Engineer exam expects you to know the differences and pick the right one for a given scenario. This article covers what each one is, what they share, and how the exam tests the choice.

It does not cover the deep performance tuning of either engine, every available configuration option, or how to design a cache invalidation strategy. The goal is the working understanding the Associate Cloud Engineer exam expects.

What Memorystore is for

Memorystore is a caching layer. The reason caching exists is that pulling data from a disk-based database is slower than pulling it from memory. Sometimes a lot slower. If you can keep frequently-accessed data in memory, you can serve reads much faster and take pressure off the database underneath.

The canonical examples for the Associate Cloud Engineer exam are a news website caching frequently-accessed articles, a game leaderboard handling thousands of reads and writes per second, an e-commerce platform storing user session data like cart contents, and API response caching. The common thread is "we have data that is read often, where the freshest copy lives somewhere slower, and we need millisecond-or-better access."

Memorystore is fully managed, so you do not run the cache yourself. You provision an instance, point your application at it, and let GCP handle the operations.
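Provisioning is a one-line gcloud command per engine. A sketch with made-up instance names and sizes; substitute your own project, region, and capacity (flag names are from the current gcloud CLI, so check the docs before relying on them):

```shell
# A Redis instance: Standard Tier adds a replica for high availability.
gcloud redis instances create my-cache \
    --size=1 --region=us-central1 --tier=standard

# A Memcached instance: capacity comes from node count times per-node memory.
gcloud memcache instances create my-mc-cache \
    --node-count=2 --node-cpu=1 --node-memory=1GB --region=us-central1
```

Your application then connects to the instance's internal IP like any other Redis or Memcached endpoint.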

Memcached: simple, fast, distributed

Memcached is the older and simpler of the two. It is a pure key-value store. You put a key in, you get a value back. Values are opaque blobs. There is no data type system, no persistence, no replication.

Memcached's strengths are speed and easy horizontal scaling. You add more nodes, the keys get distributed across them, and capacity grows. It is the right fit when the cache content is genuinely disposable, when you need to scale to a lot of memory, and when the access pattern is straightforward key-value lookups.
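The scaling model is just "hash the key, pick a node." A minimal illustration of that placement logic (node names are made up, and this uses simple modulo placement; real clients typically use consistent hashing so adding a node moves only a fraction of the keys):

```python
import hashlib

def pick_node(key: str, nodes: list[str]) -> str:
    """Map a cache key onto one of the Memcached nodes by hashing.

    Modulo placement for illustration only; production clients
    prefer consistent hashing for stabler key-to-node mappings.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["cache-node-0", "cache-node-1", "cache-node-2"]  # hypothetical names
placement = {k: pick_node(k, nodes) for k in ["user:42", "article:7", "session:abc"]}
print(placement)
```

The point is that no node knows about any other node; the client does all the routing, which is why adding nodes grows capacity so cheaply.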

If a question mentions a session cache that just needs to hold opaque blobs, or a simple lookup cache where every cache miss can be tolerated, Memcached is fine.

Redis: data structures, persistence, more capabilities

Redis is more like a feature-rich in-memory database that happens to be excellent at caching. It supports rich data types: strings, lists, sets, sorted sets, hashes, streams. It supports atomic operations on those types, like incrementing a counter or pushing onto a list, which are useful for things that are not just plain key-value lookups.

Most importantly for the exam, Redis supports persistence and replication. Persistence means the cache can survive a restart, because Redis can write its state to disk. Replication means you can have a primary and a replica for high availability. Memcached does neither of these.

The canonical leaderboard example, a game with thousands of reads and writes per second, is a good fit for Redis specifically, because leaderboards need sorted set semantics that Memcached does not provide. If the data structure or the durability matters, Redis is the answer.
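To see what sorted set semantics actually buy you, here is an in-process stand-in for a Redis sorted set. In Redis each method below is a single atomic command (noted in the comments); with Memcached you would have to rebuild all of this client-side. A sketch, not a client library:

```python
class Leaderboard:
    """In-process stand-in for a Redis sorted set (illustration only)."""

    def __init__(self):
        self.scores: dict[str, float] = {}

    def incr(self, player: str, points: float) -> float:
        # Redis equivalent: ZINCRBY leaderboard points player
        self.scores[player] = self.scores.get(player, 0.0) + points
        return self.scores[player]

    def top(self, n: int) -> list[tuple[str, float]]:
        # Redis equivalent: ZREVRANGE leaderboard 0 n-1 WITHSCORES
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:n]

lb = Leaderboard()
lb.incr("alice", 50)
lb.incr("bob", 30)
lb.incr("alice", 10)
print(lb.top(2))  # alice first with 60, then bob with 30
```

Redis keeps the set ordered on every write and makes each operation atomic, which is exactly what a high-write leaderboard needs.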

Persistence, in practice

Redis persistence is one of the cleanest distinguishing features. With Memorystore for Redis in the Standard Tier, you get replication for high availability. With RDB persistence enabled, the data also survives restarts because Redis periodically snapshots its state to disk.
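Persistence is configured at the instance level. A sketch of what enabling RDB snapshots looks like, with a hypothetical instance name (flag names and period values are from the current gcloud CLI, so verify against the docs):

```shell
# Snapshot state to disk every hour so the cache survives a restart.
gcloud redis instances create durable-cache \
    --size=1 --region=us-central1 --tier=standard \
    --persistence-mode=rdb --rdb-snapshot-period=1h
```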

This matters when the cache contents are not entirely disposable. A session store that needs to survive a restart so users do not get logged out. A leaderboard that should not be wiped when the cache instance restarts. These are Redis use cases.

If the cache contents are genuinely disposable, Memcached is fine and may even be cheaper.

What the exam tests

The exam scenarios for Memorystore break into a few patterns.

If you see a scenario describing a need for high availability, persistence, or rich data structures like sorted sets and lists, the answer is Memorystore for Redis. The phrase "must survive a restart" or "needs failover" is the strongest signal.

If you see a scenario describing a simple key-value cache that prioritizes raw scale and where cache misses are acceptable, the answer is Memorystore for Memcached. The phrase "simple distributed cache" or "session blobs" is the signal.

If you see a scenario describing a leaderboard, a sorted set, a counter, or any operation more sophisticated than plain GET and SET, the answer is Redis. Memcached does not support those primitives.

If a scenario describes general caching without specific features, either is a defensible answer, but Redis is more often the recommended default because it is more capable.

One more thing: it is not a database

Memorystore is a caching layer. It is not a primary store. The data in it is a copy of data that lives somewhere else, and the cache exists to make access to that data faster. If the cache is wiped, the underlying data is still in the source of truth.

If a question asks where to permanently store data, Memorystore is the wrong answer regardless of engine. Use Cloud SQL, Firestore, Bigtable, or Cloud Storage for that. Memorystore is for the layer between the application and the source of truth, not for replacing it.
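That "layer between the application and the source of truth" role is usually the cache-aside pattern: read the cache first, fall back to the database on a miss, and repopulate. Sketched here with in-process dicts standing in for Memorystore and the database (all names hypothetical):

```python
cache: dict[str, str] = {}                      # stands in for Memorystore
database = {"article:7": "Full article text"}   # stands in for Cloud SQL, etc.

def get_article(key: str):
    # 1. Try the cache first -- the fast path.
    if key in cache:
        return cache[key]
    # 2. On a miss, read the source of truth.
    value = database.get(key)
    # 3. Repopulate the cache so the next read is fast.
    if value is not None:
        cache[key] = value
    return value

print(get_article("article:7"))  # miss: reads the database, fills the cache
print(get_article("article:7"))  # hit: served from the cache
```

Notice that wiping the cache loses nothing: every read falls back to the database and the cache refills itself, which is exactly why Memorystore is never the permanent store.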

Bottom line

Redis is the more capable option, with data structures, persistence, and replication. Memcached is the simpler option, for plain key-value caching at scale. The Associate Cloud Engineer exam picks the engine based on the requirements in the scenario: persistence, data structures, and HA point to Redis. Plain key-value with disposable data points to Memcached.

My Associate Cloud Engineer course covers Memorystore alongside the rest of the GCP database and caching services the Associate Cloud Engineer exam tests.
