The Comprehensive Application of Caching Strategies
Basic Concepts of Caching Strategies
Caching strategies are one of the key methods to enhance application performance, optimizing response speed by reducing redundant computations and network requests. In Koa2, caching can be applied at multiple levels, including in-memory caching, HTTP caching, and database query caching. Different scenarios require different caching strategies, and making the right choice can significantly improve system throughput.
In-memory caching is the most straightforward implementation, suitable for storing frequently accessed and relatively static data. For example, using an LRU-algorithm-based caching library such as lru-cache:
const LRU = require('lru-cache')
// Note: lru-cache v7+ exports { LRUCache } and renames maxAge to ttl
const cache = new LRU({
  max: 500,
  maxAge: 1000 * 60 * 5 // 5 minutes
})
// Middleware example: serve repeated requests from memory
app.use(async (ctx, next) => {
  const key = ctx.url
  const cached = cache.get(key)
  if (cached) {
    ctx.body = cached
    return
  }
  await next()
  // Only cache successful GET responses
  if (ctx.method === 'GET' && ctx.status === 200) {
    cache.set(key, ctx.body)
  }
})
HTTP Cache Header Settings
The HTTP protocol provides a robust caching control mechanism, allowing precise control over caching behavior on clients and proxy servers through response headers. In Koa2, these headers can be flexibly set via middleware:
app.use(async (ctx, next) => {
  await next()
  if (ctx.status === 200 && ctx.method === 'GET') {
    ctx.set('Cache-Control', 'public, max-age=3600')
    ctx.set('ETag', generateETag(ctx.body))
  }
})
Combining strong caching (Cache-Control/Expires) with conditional requests (ETag/If-None-Match) can effectively reduce network traffic. When a resource is unchanged, a 304 Not Modified response saves significant bandwidth:
app.use(async (ctx, next) => {
  const ifNoneMatch = ctx.headers['if-none-match']
  const etag = generateETagForRequest(ctx)
  ctx.set('ETag', etag)
  if (ifNoneMatch === etag) {
    // Resource unchanged: respond 304 with no body
    ctx.status = 304
    return
  }
  await next()
})
Database Query Caching
ORM-layer caching avoids repeatedly executing the same queries. Note that core Sequelize does not ship query caching; it is added via plugins such as sequelize-transparent-cache, which expose an API conceptually like this:
// Illustrative only: the cache/cacheKey options come from a caching
// plugin, not from Sequelize itself
const result = await Model.findAll({
  where: { status: 'active' },
  cache: true,
  cacheKey: 'active_users'
})
For complex queries, manual caching logic can be implemented:
app.use(async (ctx, next) => {
  // Assumes a promise-based Redis client (e.g. ioredis)
  const cacheKey = `query_${ctx.path}_${JSON.stringify(ctx.query)}`
  const cached = await redis.get(cacheKey)
  if (cached) {
    ctx.body = JSON.parse(cached)
    return
  }
  await next()
  if (ctx.status === 200) {
    await redis.setex(cacheKey, 300, JSON.stringify(ctx.body))
  }
})
Page-Level Caching Strategies
Full-page caching is suitable for pages with infrequently changing content. A Koa2 middleware can implement simple HTML caching:
const pageCache = {}
app.use(async (ctx, next) => {
  if (ctx.method !== 'GET' || !ctx.accepts('html')) {
    return next()
  }
  const url = ctx.url
  // Serve from cache while the entry is younger than 60 seconds
  if (pageCache[url] && Date.now() - pageCache[url].timestamp < 60000) {
    ctx.type = 'text/html'
    ctx.body = pageCache[url].html
    return
  }
  await next()
  if (ctx.status === 200 && ctx.type === 'text/html') {
    // Note: this plain object grows without bound; prefer an LRU in production
    pageCache[url] = {
      html: ctx.body,
      timestamp: Date.now()
    }
  }
})
For dynamic content, consider fragment caching (e.g., Edge Side Includes, ESI) or component-level caching solutions.
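As a rough illustration of component-level fragment caching, the sketch below caches the rendered HTML of individual page fragments in a Map; `renderPage` and the sidebar markup are hypothetical stand-ins for real render functions:

```javascript
// Minimal fragment cache: each fragment is rendered at most once per TTL window.
const fragments = new Map()

async function cachedFragment(key, ttlMs, render) {
  const hit = fragments.get(key)
  if (hit && Date.now() - hit.at < ttlMs) {
    return hit.html // still fresh: reuse the rendered fragment
  }
  const html = await render()
  fragments.set(key, { html, at: Date.now() })
  return html
}

// Hypothetical usage: the sidebar is cached for 60s, the greeting stays dynamic
async function renderPage(user) {
  const sidebar = await cachedFragment('sidebar', 60000, async () => '<aside>links</aside>')
  return `<main>Hello ${user}</main>${sidebar}`
}
```

The dynamic part of the page is re-rendered on every request, while the expensive fragment is shared across users until its TTL expires.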
Cache Invalidation and Update Strategies
Cache invalidation is one of the most challenging aspects of caching systems. Common strategies include:
- Time-based expiration: Set a fixed cache duration.
- Active invalidation: Immediately clear related caches when data changes.
- Version control: Force updates via URL or parameter versioning.
An example of active invalidation in Koa2:
// Clear related cache entries after a data update
router.post('/articles/:id', async (ctx) => {
  await updateArticle(ctx.params.id, ctx.request.body)
  await redis.del(`article_${ctx.params.id}`)
  await redis.del('article_list')
  ctx.status = 204
})
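The version-control strategy listed above can be sketched with a per-resource version counter: readers embed the current version in the cache key, so bumping the version on write makes stale entries unreachable (the resource names here are illustrative):

```javascript
// Versioned cache keys: invalidation is just incrementing the version.
const versions = new Map()

function cacheKeyFor(resource) {
  const v = versions.get(resource) || 1
  return `${resource}:v${v}`
}

function bumpVersion(resource) {
  versions.set(resource, (versions.get(resource) || 1) + 1)
}
```

Old entries are never read again after a bump and can simply age out of the store via their TTL, which avoids having to enumerate and delete them.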
For scenarios with complex dependencies, a publish-subscribe pattern can notify cache invalidation:
const redis = require('redis')
const sub = redis.createClient()
const pub = redis.createClient()
// Subscribe to the invalidation channel (node_redis v3 API shown;
// v4 uses sub.subscribe(channel, listener) after connect())
sub.subscribe('cache_invalidate')
sub.on('message', (channel, key) => {
  cache.del(key)
})
// Publish the affected key when data changes
pub.publish('cache_invalidate', `user_${userId}`)
Distributed Caching Practices
In clustered environments, in-memory caches require shared storage. Redis is a common distributed caching solution:
// Using ioredis, whose commands return promises
const Redis = require('ioredis')
const client = new Redis()
app.use(async (ctx, next) => {
  const key = `view_${ctx.path}`
  const cached = await client.get(key)
  if (cached) {
    ctx.body = JSON.parse(cached)
    return
  }
  await next()
  if (ctx.status === 200) {
    await client.setex(key, 60, JSON.stringify(ctx.body))
  }
})
Cache avalanche protection can be achieved with randomized expiration times:
function getCacheTTL() {
  const base = 3600 // 1 hour
  const random = Math.floor(Math.random() * 600) // plus 0-10 minutes of jitter
  return base + random // TTL in seconds
}
Cache Performance Monitoring
Comprehensive monitoring helps optimize caching strategies. Key metrics to collect include:
- Cache hit rate
- Cache load time
- Memory usage
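Hit rate is simply hits divided by total lookups; a small helper (illustrative) makes the metric concrete:

```javascript
// Cache hit rate as a fraction of all lookups (0 when there is no traffic yet).
function hitRate(stats) {
  const total = stats.hits + stats.misses
  return total === 0 ? 0 : stats.hits / total
}
```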
A simple monitoring implementation with Koa2 middleware:
const cacheStats = {
  hits: 0,
  misses: 0
}
app.use(async (ctx, next) => {
  const start = Date.now()
  const key = ctx.url
  const cached = cache.get(key)
  if (cached) {
    cacheStats.hits++
    ctx.body = cached
    ctx.set('X-Cache', 'HIT')
    return
  }
  cacheStats.misses++
  ctx.set('X-Cache', 'MISS')
  await next()
  if (ctx.status === 200) {
    cache.set(key, ctx.body)
  }
  ctx.set('X-Cache-Time', `${Date.now() - start}ms`)
})
// Expose stats endpoint
router.get('/cache-stats', (ctx) => {
  ctx.body = cacheStats
})
Advanced Caching Patterns
For specialized scenarios, consider more advanced caching patterns:
- Write-through: write to the cache and the database together, keeping them consistent.
- Read-through: Load from the database on cache misses.
- Cache warming: Preload hot data at startup.
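A minimal write-through sketch, using a Map and an in-memory object as stand-ins for a real cache and database:

```javascript
// Write-through: every write goes to the backing store and the cache together,
// so subsequent reads never see stale cached data.
const cacheStore = new Map()
const database = {}

async function writeThrough(key, value) {
  database[key] = value      // persist first...
  cacheStore.set(key, value) // ...then keep the cache in sync
}
```

The trade-off is extra write latency (two writes per update) in exchange for reads that can always trust the cache.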
A Koa2 example of read-through:
// ctx.params is only populated on router routes, so this belongs on the router
router.get('/users/:id', async (ctx) => {
  const key = `user_${ctx.params.id}`
  let data = await redis.get(key)
  if (data) {
    data = JSON.parse(data)
  } else {
    // Cache miss: load from the database, then populate the cache
    data = await db.query('SELECT * FROM users WHERE id = ?', [ctx.params.id])
    if (data) {
      await redis.setex(key, 3600, JSON.stringify(data))
    }
  }
  ctx.body = data
})
Cache warming can be performed at application startup:
async function warmUpCache() {
  const hotData = await db.query('SELECT * FROM products ORDER BY views DESC LIMIT 100')
  await Promise.all(
    hotData.map(item =>
      redis.setex(`product_${item.id}`, 86400, JSON.stringify(item))
    )
  )
}
// A Koa app does not emit 'listening'; run warm-up once the HTTP server is up
app.listen(3000, () => {
  warmUpCache().catch(console.error)
})
Caching and Security Considerations
Caching can introduce security risks, so precautions are necessary:
- Sensitive data should not be cached.
- Keep user-private responses out of shared caches.
- Prevent cache poisoning attacks.
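One common poisoning vector is unkeyed input: if the cache key ignores parts of the request that influence the response, an attacker can poison a shared entry. A defensive sketch (the allowlist here is illustrative) builds keys only from query parameters the handler actually uses:

```javascript
// Build cache keys from an explicit allowlist of query parameters,
// so unexpected input cannot influence a shared cache entry.
const ALLOWED_PARAMS = ['page', 'sort'] // hypothetical allowlist

function safeCacheKey(path, query) {
  const parts = ALLOWED_PARAMS
    .filter(name => query[name] !== undefined)
    .map(name => `${name}=${query[name]}`)
  return parts.length ? `${path}?${parts.join('&')}` : path
}
```

Any parameter outside the allowlist simply cannot vary the cached entry, closing off that class of key manipulation.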
Handling user-private caches in Koa2:
app.use(async (ctx, next) => {
  const userToken = ctx.cookies.get('token')
  // Key the cache per user; prefer hashing the token or using a user id
  // over embedding the raw token in the key
  const cacheKey = userToken ? `${userToken}_${ctx.url}` : ctx.url
  const cached = cache.get(cacheKey)
  if (cached) {
    ctx.body = cached
    return
  }
  await next()
  if (ctx.status === 200) {
    cache.set(cacheKey, ctx.body)
  }
})
For API responses, the Vary header tells caches to store a separate variant per negotiated representation (e.g., per encoding or language):
ctx.set('Vary', 'Accept-Encoding, Accept-Language')