# Caching Strategy (Deep Dive)
Adamondo's caching architecture is designed for extreme performance and reliability. It uses a tiered approach combining a singleton service for manual caching and an ES6 Proxy-based wrapper for automated repository caching.
## Infrastructure & Resilience
We use Redis (via ioredis) as our centralized data store. The setup is optimized for "fail-fast" behavior to ensure that a Redis outage never cascades into a total application failure.
### ioredis Configuration
- **Max Retries:** `maxRetriesPerRequest: 1` ensures that if Redis is slow or unresponsive, a request fails quickly instead of leaving callers hanging.
- **Retry Strategy:** Exponential backoff for reconnects, starting at 50ms and capping at 2000ms.
- **Connection Monitoring:** The `RedisService` emits logs for `connect`, `error`, and `close` events, while maintaining an internal `isConnected` flag so cache operations are bypassed gracefully during downtime.
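The retry curve described above can be sketched as a pure function. ioredis calls `retryStrategy` with the attempt count and uses the returned number as the delay in milliseconds; the 50ms base and 2000ms cap come from the text, while the surrounding client options are only an illustration:

```typescript
// Exponential backoff: 50ms, 100ms, 200ms, ... capped at 2000ms.
// ioredis invokes this with times = 1, 2, 3, ... on each reconnect attempt.
const retryStrategy = (times: number): number =>
  Math.min(50 * 2 ** (times - 1), 2000);

// Sketch of how it plugs into the client (other options assumed):
// const client = new Redis(process.env.REDIS_URL!, {
//   maxRetriesPerRequest: 1,
//   retryStrategy,
// });
```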
## The RedisService Singleton

Located at `apps/backend/src/infrastructure/redis/RedisService.ts`, this singleton provides the core API for cache interaction.
### Advanced Serialization Strategy
Redis only supports string-based values. To maintain type safety and object structure, we use a sophisticated JSON serialization strategy.
#### The Date Object Challenge
Standard JSON.stringify converts Date objects to ISO strings (e.g., "2024-03-21T...Z"). However, our GraphQL schema requires numeric timestamps (epochs) for many fields to ensure consistency across timezones.
- **`set(key, value)`:** Uses a custom replacer function that detects `Date` instances and converts them to numeric timestamps via `.getTime()`.
- **`get(key)`:** Uses a custom reviver function that watches a predefined list of "date fields" (e.g. `checkIn`, `checkOut`, `createdAt`). If a numeric value is found in one of these fields, it is restored to a full `Date` object.
- **GraphQL Harmony:** Restored `Date` objects have their `.toJSON()` method overridden to return an epoch string. This ensures that when GraphQL serializes the result, it stays a numeric string rather than reverting to ISO format.
```typescript
// Custom reviver example (inside get(key)): restore epoch numbers
// appearing in known date fields back into Date objects.
const dateFields = ['checkIn', 'checkOut', 'createdAt'];

return JSON.parse(data, (k, v) => {
  if (dateFields.includes(k) && typeof v === 'number') {
    const date = new Date(v);
    // Override toJSON so GraphQL serializes the epoch string, not ISO format.
    date.toJSON = function () { return this.getTime().toString(); };
    return date;
  }
  return v;
});
```
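The write path has a matching replacer with one subtlety: by the time `JSON.stringify` invokes the replacer, `Date.prototype.toJSON` has already turned the value into an ISO string, so the replacer must read the raw value via `this[key]`. A minimal sketch (field names and the `dateReplacer` name are illustrative):

```typescript
// Custom replacer used by set(key, value): serialize Date instances as epoch ms.
// Must be a regular function: `this` is the object holding the property, and
// this[k] is the value *before* toJSON converted it to an ISO string.
function dateReplacer(this: any, k: string, v: unknown): unknown {
  const raw = this[k];
  return raw instanceof Date ? raw.getTime() : v;
}

const payload = { checkIn: new Date(1700000000000), guest: 'ada' };
const serialized = JSON.stringify(payload, dateReplacer);
// serialized === '{"checkIn":1700000000000,"guest":"ada"}'
```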
## Automated Caching: CacheWrapper

The most powerful part of our caching layer is the CacheWrapper (`apps/backend/src/infrastructure/cache/CacheWrapper.ts`). It uses an ES6 Proxy to automatically intercept calls to repositories.
### Proxy Interception Logic
When a repository is wrapped, every method call is analyzed:
#### 1. Automatic Query Caching
Methods starting with `find`, `get`, `exists`, or `count` are treated as queries.
- **Key Generation:** `cache:{prefix}:{methodName}:{JSON_args}`. This ensures that different arguments (e.g. `listingId: 1` vs `listingId: 2`) generate unique cache entries.
- **Flow:** Check Redis → if hit, return → if miss, call DB → save to Redis → return.
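The key scheme can be sketched as a small helper (the `buildCacheKey` name is illustrative; the real CacheWrapper builds keys internally):

```typescript
// Build a deterministic cache key from repository prefix, method name, and args.
// JSON.stringify(args) makes distinct argument lists produce distinct keys.
// Note: JSON.stringify is sensitive to object property order, so callers should
// pass arguments in a consistent shape.
function buildCacheKey(prefix: string, methodName: string, args: unknown[]): string {
  return `cache:${prefix}:${methodName}:${JSON.stringify(args)}`;
}

const key = buildCacheKey('listing', 'findById', [{ listingId: 1 }]);
// key === 'cache:listing:findById:[{"listingId":1}]'
```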
#### 2. Mutation-Triggered Invalidation
Methods starting with `create`, `update`, `delete`, `save`, or `sync` are treated as mutations.
- **Invalidation:** When a mutation is called, the wrapper automatically triggers a `delByPrefix` for all keys related to that repository (e.g. clearing all `cache:listing:*` keys).
- **Async Execution:** Invalidation happens in the background (with a `.catch()` handler) so that it doesn't add latency to the client's write operation.
## Performance-First Invalidation
Manual cache clearing is handled via `delByPrefix(prefix)`. To prevent performance bottlenecks on large Redis datasets, we avoid the blocking `KEYS` command.
- **SCAN Streams:** We use `client.scanStream({ match: 'prefix*', count: 100 })`, which iterates through the keyspace in small batches and keeps the Redis server responsive.
- **Pipelining:** Each batch of keys is deleted through a `pipeline()`. This reduces the number of network round-trips to Redis, allowing us to clear thousands of keys with only a handful of commands on the wire.
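A simplified version of the routine follows. ioredis's `scanStream` is a Node stream of key batches, which is async-iterable, so a narrow interface models just the calls used here; for brevity the sketch issues one `DEL` per batch where the real code funnels batches through `pipeline()`:

```typescript
// Minimal shape of the ioredis calls used below (a sketch, not the full API).
interface ScanClient {
  scanStream(opts: { match: string; count: number }): AsyncIterable<string[]>;
  del(...keys: string[]): Promise<number>;
}

// Delete every key matching `prefix*` without blocking Redis the way KEYS would.
async function delByPrefix(client: ScanClient, prefix: string): Promise<number> {
  let deleted = 0;
  for await (const batch of client.scanStream({ match: `${prefix}*`, count: 100 })) {
    if (batch.length > 0) {
      // Real implementation: queue these DELs on a pipeline() to cut round-trips.
      deleted += await client.del(...batch);
    }
  }
  return deleted;
}
```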
## Usage & Prefixes
Standard key prefixes used throughout the ecosystem:
| Prefix | Usage |
|---|---|
| `cache:listing:*` | Listing details, availability, and pricing rules. |
| `cache:user:*` | User profiles and permissions. |
| `cache:reservation:*` | Confirmed bookings and status. |
| `itinerary:*` | Composite views for guest itineraries. |
| `auth:session:*` | Session tokens and OTP states (short TTL). |
## Debugging & Maintenance
- **Cache Bypass:** If `REDIS_URL` is missing from the environment, the `RedisService` logs a warning and disables caching without throwing errors.
- **TTL Policies:**
  - Standard query: 3600s (1 hour).
  - Volatile/auth data: 300s–600s.
  - Critical config: 86400s (24 hours).
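These policies can be centralized as constants (a sketch; the `CACHE_TTL` name and keys are illustrative, not taken from the codebase):

```typescript
// Cache TTLs in seconds, mirroring the policies listed above.
const CACHE_TTL = {
  standardQuery: 3600,   // 1 hour
  volatileAuth: 300,     // lower bound; some auth data uses up to 600s
  criticalConfig: 86400, // 24 hours
} as const;
```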