When using the Spring Cache annotation CacheEvict backed by Redis, and the allEntries = true bulk-clear option is enabled, the default implementation looks up the keys to delete with the Redis KEYS command. How can this be switched to SCAN?
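For context, here is a minimal sketch of the kind of caller that triggers this behavior; the cache name and method are hypothetical, not from the original project. With allEntries = true, Spring clears the whole cache rather than a single key:

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    // Hypothetical example: evicting the entire "user" cache.
    // By default the Redis cache writer resolves the entries to delete
    // with the KEYS command before issuing DEL.
    @CacheEvict(cacheNames = "user", allEntries = true)
    public void reloadAllUsers() {
        // ... refresh the underlying data ...
    }
}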
Check the version of the spring-data-redis dependency used in the project. If it is 2.6.x or later, you can configure it as follows:
@Bean
public CacheManager cacheManager(RedisConnectionFactory redisConnectionFactory, CacheProperties cacheProperties) {
    Map<String, RedisCacheConfiguration> configMap = new HashMap<>(16);
    cacheProperties.getTtls().forEach((k, v) -> configMap.put(k, RedisCacheConfiguration.defaultCacheConfig()
            .computePrefixWith(name -> name + SEPARATOR)
            .serializeKeysWith(SerializationPair.fromSerializer(RedisSerializer.string()))
            .serializeValuesWith(SerializationPair.fromSerializer(cacheSerializer))
            .entryTtl(Duration.ofSeconds(v))));
    return new RedisCacheManager(
            RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory, BatchStrategies.scan(cacheProperties.getScanBatchSize())),
            RedisCacheConfiguration.defaultCacheConfig()
                    .computePrefixWith(name -> name + SEPARATOR)
                    .serializeKeysWith(SerializationPair.fromSerializer(RedisSerializer.string()))
                    .serializeValuesWith(SerializationPair.fromSerializer(cacheSerializer))
                    .entryTtl(Duration.ofSeconds(cacheProperties.getDefaultTtl())),
            configMap);
}
In this code I declare the CacheManager that handles the project's cache configuration; this manager usually has to be declared manually anyway. The key part is the line that builds the RedisCacheWriter used by the manager:

RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory, BatchStrategies.scan(cacheProperties.getScanBatchSize()))
// If no BatchStrategy is passed here, the default is the keys-based strategy
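The snippet above also refers to a few project-specific pieces: a CacheProperties bean (the project's own class, not Spring Boot's org.springframework.boot.autoconfigure.cache.CacheProperties) exposing getTtls, getScanBatchSize and getDefaultTtl, plus a cacheSerializer and a SEPARATOR constant defined elsewhere. Purely as an assumption, to make the example self-contained, such a properties class might look roughly like this:

import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;

// Hypothetical properties holder; the prefix, field names and defaults are assumptions,
// not part of the original project.
@ConfigurationProperties(prefix = "app.cache")
public class CacheProperties {

    /** Per-cache TTLs in seconds, keyed by cache name. */
    private Map<String, Long> ttls = new HashMap<>();

    /** COUNT hint passed to SCAN when clearing a cache. */
    private int scanBatchSize = 1000;

    /** Fallback TTL in seconds for caches without an explicit entry. */
    private long defaultTtl = 3600;

    public Map<String, Long> getTtls() { return ttls; }
    public void setTtls(Map<String, Long> ttls) { this.ttls = ttls; }

    public int getScanBatchSize() { return scanBatchSize; }
    public void setScanBatchSize(int scanBatchSize) { this.scanBatchSize = scanBatchSize; }

    public long getDefaultTtl() { return defaultTtl; }
    public void setDefaultTtl(long defaultTtl) { this.defaultTtl = defaultTtl; }
}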
If the version is lower than 2.6.x, it is slightly more involved. The configuration still goes in the same place where the cacheManager bean is declared above, but this time a custom RedisCacheWriter has to be introduced:
import org.springframework.data.redis.cache.CacheStatistics;
import org.springframework.data.redis.cache.CacheStatisticsCollector;
import org.springframework.data.redis.cache.RedisCacheWriter;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.Cursor;
import org.springframework.data.redis.core.ScanOptions;
import org.springframework.lang.NonNull;
import org.springframework.lang.Nullable;
import org.springframework.util.Assert;

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.HashSet;
import java.util.Set;

/**
 * Custom Redis cache writer that uses SCAN instead of KEYS when clearing a cache.
 * <p>
 * The code is simplified: it does not support the write lock or cache statistics.
 * Adjust it if you need either of those features.
 */
public class CustomizedCacheWriter implements RedisCacheWriter {

    private final RedisCacheWriter defaultWriter;
    private final RedisConnectionFactory connectionFactory;

    public CustomizedCacheWriter(@NonNull RedisCacheWriter defaultWriter, @NonNull RedisConnectionFactory connectionFactory) {
        this.defaultWriter = defaultWriter;
        this.connectionFactory = connectionFactory;
    }

    @Override
    public void put(String name, byte[] key, byte[] value, @Nullable Duration ttl) {
        defaultWriter.put(name, key, value, ttl);
    }

    @Override
    @Nullable
    public byte[] get(String name, byte[] key) {
        return defaultWriter.get(name, key);
    }

    @Override
    @Nullable
    public byte[] putIfAbsent(String name, byte[] key, byte[] value, @Nullable Duration ttl) {
        return defaultWriter.putIfAbsent(name, key, value, ttl);
    }

    @Override
    public void remove(String name, byte[] key) {
        defaultWriter.remove(name, key);
    }

    @Override
    public void clean(String name, byte[] pattern) {
        Assert.notNull(name, "Name must not be null!");
        Assert.notNull(pattern, "Pattern must not be null!");
        try (RedisConnection connection = connectionFactory.getConnection();
             Cursor<byte[]> cursor = connection.scan(
                     ScanOptions.scanOptions().count(1000).match(new String(pattern, StandardCharsets.UTF_8)).build())) {
            // Collect matching keys via SCAN instead of blocking the server with KEYS.
            Set<byte[]> ks = new HashSet<>();
            while (cursor.hasNext()) {
                ks.add(cursor.next());
            }
            byte[][] keys = ks.toArray(new byte[0][]);
            // The original KEYS-based approach, for comparison:
            // byte[][] keys = Optional.ofNullable(connection.keys(pattern)).orElse(Collections.emptySet()).toArray(new byte[0][]);
            if (keys.length > 0) {
                connection.del(keys);
            }
        }
    }

    @Override
    public void clearStatistics(String name) {
        defaultWriter.clearStatistics(name);
    }

    @Override
    public RedisCacheWriter withStatisticsCollector(CacheStatisticsCollector cacheStatisticsCollector) {
        return defaultWriter.withStatisticsCollector(cacheStatisticsCollector);
    }

    @Override
    public CacheStatistics getCacheStatistics(String cacheName) {
        return defaultWriter.getCacheStatistics(cacheName);
    }
}
The class above is a custom RedisCacheWriter implementation. It simply wraps the default RedisCacheWriter and delegates everything except the clean method, where scan replaces the original keys-based lookup.
Because the project does not use the write lock or the statistics feature, the clean method is considerably simplified; if you rely on either of those, you will need to handle them separately.
Finally, adjust the cacheManager declaration to use the custom writer:
RedisCacheWriter cacheWriter = RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory);
return new RedisCacheManager(new CustomizedCacheWriter(cacheWriter, redisConnectionFactory),
        RedisCacheConfiguration.defaultCacheConfig()
                .computePrefixWith(name -> name + SEPARATOR)
                .serializeKeysWith(SerializationPair.fromSerializer(RedisSerializer.string()))
                .serializeValuesWith(SerializationPair.fromSerializer(cacheSerializer))
                .entryTtl(Duration.ofSeconds(cacheProperties.getDefaultTtl())), configMap);
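To sanity-check the wiring, here is a minimal sketch (the component and cache name are hypothetical) that clears a cache through the CacheManager. Cache.clear() is what @CacheEvict(allEntries = true) ends up calling, so it goes through the clean method of the custom writer and issues SCAN plus DEL instead of KEYS:

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

// Hypothetical smoke test for the custom writer; the "user" cache name is an assumption.
@Component
public class CacheClearSmokeTest {

    private final CacheManager cacheManager;

    public CacheClearSmokeTest(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    public void clearUserCache() {
        Cache cache = cacheManager.getCache("user");
        if (cache != null) {
            // Goes through CustomizedCacheWriter.clean(...), i.e. SCAN + DEL.
            cache.clear();
        }
    }
}

While running it you can watch the commands on the server with redis-cli monitor and confirm that SCAN is issued instead of KEYS.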