Spring Cache Optimization

Preface

Caching is an indispensable part of web projects. It reduces the load on the database and improves both the stability and the response time of the server.

Spring Cache

Spring Cache is the caching framework that ships with the Spring Framework. It has several implementations, of which the Redis-based one is the most commonly used. Its core annotations are @CacheConfig, @Cacheable, @CachePut, and @CacheEvict; if you are not familiar with them, the official documentation explains them in detail: https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#spring-integration . When you have the time, reading the official Spring documentation is far more efficient than hunting for articles online.
A few words about @CacheConfig. This annotation has four attributes: cacheNames specifies the cache names, which lets entries be stored by module in the cache; keyGenerator names a key-generator bean and is ignored when an explicit cache key is given; cacheManager names the Spring-managed cache manager to use, falling back to the default one when left empty; cacheResolver names a custom cache resolver bean for resolving the target caches and is mutually exclusive with cacheManager.
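
For readers new to these annotations, here is a minimal sketch of how they are typically combined on a service class. The class, method, and package names are illustrative only, and SysUser is assumed to be the simple (id, name, address) model used by the test code later in this article.

package com.cube.share.cache.sample;

import com.cube.share.cache.model.SysUser;
import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

/**
 * Illustrative sketch of the core Spring Cache annotations.
 */
@Service
@CacheConfig(cacheNames = "sysUser") // shared cache name for every method in this class
public class UserQueryService {

    @Cacheable(key = "#id") // the result is stored in Redis under "sysUser::{id}"
    public SysUser getById(Integer id) {
        // stub standing in for a database lookup
        return new SysUser(id, "name" + id, "address" + id);
    }

    @CachePut(key = "#sysUser.id") // the method always runs; the cached entry is refreshed with the return value
    public SysUser update(SysUser sysUser) {
        return sysUser;
    }

    @CacheEvict(key = "#id") // removes the single entry "sysUser::{id}"
    public void deleteById(Integer id) {
        // stub standing in for a database delete
    }
}
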
Spring Cache is extremely easy to use: when storing entries, the cache key can be customized freely with Spring EL expressions. However, it has two drawbacks:

  • When @CacheEvict is used with allEntries = true, the entries are removed from Redis with the KEYS command. KEYS has O(N) time complexity, so a large number of cached keys causes noticeable blocking; for that reason the command is usually disabled on production Redis servers, and the eviction then fails with an error.
    Take a look at the clean method of DefaultRedisCacheWriter:
    @Override
    public void clean(String name, byte[] pattern) {

        Assert.notNull(name, "Name must not be null!");
        Assert.notNull(pattern, "Pattern must not be null!");

        execute(name, connection -> {

            boolean wasLocked = false;

            try {

                if (isLockingCacheWriter()) {
                    doLock(name, connection);
                    wasLocked = true;
                }
                // the KEYS command
                byte[][] keys = Optional.ofNullable(connection.keys(pattern)).orElse(Collections.emptySet())
                        .toArray(new byte[0][]);

                if (keys.length > 0) {
                    statistics.incDeletesBy(name, keys.length);
                    connection.del(keys);
                }
            } finally {

                if (wasLocked && isLockingCacheWriter()) {
                    doUnlock(name, connection);
                }
            }

            return "OK";
        });
    }

  • Regarding the cacheManager attribute: if no cache manager is specified, the globally declared one is used and the expiration time of the cache cannot be adjusted; if one is specified, a cache manager has to be created by hand and handed over to Spring, so cache managers cannot be assigned dynamically.
    Both drawbacks are addressed here. First, the KEYS command is replaced with SCAN; although SCAN is also O(N) overall, it walks the key space in batches driven by a cursor and a count hint, so it does not block Redis for long stretches. Second, after the application starts, cacheManager beans are generated dynamically by scanning annotations, which lets different cache modules use different expiration times without creating a RedisCacheManager by hand.
Rewriting DefaultRedisCacheWriter

DefaultRedisCacheWriter is the default Redis cache writer provided by Spring Cache; it encapsulates the logic for adding, removing, updating, and reading cache entries. Because it is not declared public, a new Redis cache writer was written instead; most of the code is identical to DefaultRedisCacheWriter, and only the clean method is changed.

package com.cube.share.cache.writer;

import org.springframework.dao.PessimisticLockingFailureException;
import org.springframework.data.redis.cache.CacheStatistics;
import org.springframework.data.redis.cache.CacheStatisticsCollector;
import org.springframework.data.redis.cache.RedisCacheWriter;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisStringCommands.SetOption;
import org.springframework.data.redis.core.Cursor;
import org.springframework.data.redis.core.ScanOptions;
import org.springframework.data.redis.core.types.Expiration;
import org.springframework.lang.Nullable;
import org.springframework.util.Assert;

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Function;

/**
 * @author poker.li
 * @date 2021/7/17 13:20
 * <p>
 * Custom RedisCacheWriter implementation; overrides the clean() method of DefaultRedisCacheWriter, replacing the KEYS command with SCAN
 * <p>
 */
@SuppressWarnings({"WeakerAccess", "unused"})
public class IRedisCacheWriter implements RedisCacheWriter {


    private final RedisConnectionFactory connectionFactory;
    private final Duration sleepTime;
    private final CacheStatisticsCollector statistics;

    /**
     * @param connectionFactory must not be {@literal null}.
     */
    public IRedisCacheWriter(RedisConnectionFactory connectionFactory) {
        this(connectionFactory, Duration.ZERO);
    }

    /**
     * @param connectionFactory must not be {@literal null}.
     * @param sleepTime         sleep time between lock request attempts. Must not be {@literal null}. Use {@link Duration#ZERO}
     *                          to disable locking.
     */
    public IRedisCacheWriter(RedisConnectionFactory connectionFactory, Duration sleepTime) {
        this(connectionFactory, sleepTime, CacheStatisticsCollector.none());
    }

    /**
     * @param connectionFactory        must not be {@literal null}.
     * @param sleepTime                sleep time between lock request attempts. Must not be {@literal null}. Use {@link Duration#ZERO}
     *                                 to disable locking.
     * @param cacheStatisticsCollector must not be {@literal null}.
     */
    public IRedisCacheWriter(RedisConnectionFactory connectionFactory, Duration sleepTime,
                             CacheStatisticsCollector cacheStatisticsCollector) {

        Assert.notNull(connectionFactory, "ConnectionFactory must not be null!");
        Assert.notNull(sleepTime, "SleepTime must not be null!");
        Assert.notNull(cacheStatisticsCollector, "CacheStatisticsCollector must not be null!");

        this.connectionFactory = connectionFactory;
        this.sleepTime = sleepTime;
        this.statistics = cacheStatisticsCollector;
    }


    @Override
    public CacheStatistics getCacheStatistics(String cacheName) {
        return statistics.getCacheStatistics(cacheName);
    }


    @Override
    public void clearStatistics(String name) {
        statistics.reset(name);
    }

    @Override
    public RedisCacheWriter withStatisticsCollector(CacheStatisticsCollector cacheStatisticsCollector) {
        return new IRedisCacheWriter(connectionFactory, sleepTime, cacheStatisticsCollector);
    }

    @Override
    public void put(String name, byte[] key, byte[] value, @Nullable Duration ttl) {

        Assert.notNull(name, "Name must not be null!");
        Assert.notNull(key, "Key must not be null!");
        Assert.notNull(value, "Value must not be null!");

        execute(name, connection -> {

            if (shouldExpireWithin(ttl)) {
                connection.set(key, value, Expiration.from(ttl.toMillis(), TimeUnit.MILLISECONDS), SetOption.upsert());
            } else {
                connection.set(key, value);
            }

            return "OK";
        });
    }


    @Override
    public byte[] get(String name, byte[] key) {

        Assert.notNull(name, "Name must not be null!");
        Assert.notNull(key, "Key must not be null!");

        return execute(name, connection -> connection.get(key));
    }


    @Override
    public byte[] putIfAbsent(String name, byte[] key, byte[] value, @Nullable Duration ttl) {

        Assert.notNull(name, "Name must not be null!");
        Assert.notNull(key, "Key must not be null!");
        Assert.notNull(value, "Value must not be null!");

        return execute(name, connection -> {

            if (isLockingCacheWriter()) {
                doLock(name, connection);
            }

            try {
                //noinspection ConstantConditions
                if (connection.setNX(key, value)) {

                    if (shouldExpireWithin(ttl)) {
                        connection.pExpire(key, ttl.toMillis());
                    }
                    return null;
                }

                return connection.get(key);
            } finally {

                if (isLockingCacheWriter()) {
                    doUnlock(name, connection);
                }
            }
        });
    }


    @Override
    public void remove(String name, byte[] key) {

        Assert.notNull(name, "Name must not be null!");
        Assert.notNull(key, "Key must not be null!");

        execute(name, connection -> connection.del(key));
    }

    @Override
    public void clean(String name, byte[] pattern) {

        Assert.notNull(name, "Name must not be null!");
        Assert.notNull(pattern, "Pattern must not be null!");

        execute(name, connection -> {

            boolean wasLocked = false;

            try {

                if (isLockingCacheWriter()) {
                    doLock(name, connection);
                    wasLocked = true;
                }

                // use the SCAN command instead of KEYS
                Cursor<byte[]> cursor = connection.scan(ScanOptions.scanOptions().match(new String(pattern)).count(1000).build());
                Set<byte[]> byteSet = new HashSet<>();
                while (cursor.hasNext()) {
                    byteSet.add(cursor.next());
                }

                byte[][] keys = byteSet.toArray(new byte[0][]);

                if (keys.length > 0) {
                    statistics.incDeletesBy(name, keys.length);
                    connection.del(keys);
                }
            } finally {

                if (wasLocked && isLockingCacheWriter()) {
                    doUnlock(name, connection);
                }
            }

            return "OK";
        });
    }

    /**
     * Explicitly set a write lock on a cache.
     *
     * @param name the name of the cache to lock.
     */
    void lock(String name) {
        execute(name, connection -> doLock(name, connection));
    }

    /**
     * Explicitly remove a write lock from a cache.
     *
     * @param name the name of the cache to unlock.
     */
    void unlock(String name) {
        executeLockFree(connection -> doUnlock(name, connection));
    }

    private Boolean doLock(String name, RedisConnection connection) {
        return connection.setNX(createCacheLockKey(name), new byte[0]);
    }

    @SuppressWarnings("UnusedReturnValue")
    private Long doUnlock(String name, RedisConnection connection) {
        return connection.del(createCacheLockKey(name));
    }

    private boolean doCheckLock(String name, RedisConnection connection) {
        //noinspection ConstantConditions
        return connection.exists(createCacheLockKey(name));
    }

    /**
     * @return {@literal true} if {@link RedisCacheWriter} uses locks.
     */
    private boolean isLockingCacheWriter() {
        return !sleepTime.isZero() && !sleepTime.isNegative();
    }

    private <T> T execute(String name, Function<RedisConnection, T> callback) {

        try (RedisConnection connection = connectionFactory.getConnection()) {

            checkAndPotentiallyWaitUntilUnlocked(name, connection);
            return callback.apply(connection);
        }
    }

    private void executeLockFree(Consumer<RedisConnection> callback) {

        try (RedisConnection connection = connectionFactory.getConnection()) {
            callback.accept(connection);
        }
    }

    private void checkAndPotentiallyWaitUntilUnlocked(String name, RedisConnection connection) {

        if (!isLockingCacheWriter()) {
            return;
        }

        try {

            while (doCheckLock(name, connection)) {
                Thread.sleep(sleepTime.toMillis());
            }
        } catch (InterruptedException ex) {

            // Re-interrupt current thread, to allow other participants to react.
            Thread.currentThread().interrupt();

            throw new PessimisticLockingFailureException(String.format("Interrupted while waiting to unlock cache %s", name),
                    ex);
        }
    }

    private static boolean shouldExpireWithin(@Nullable Duration ttl) {
        return ttl != null && !ttl.isZero() && !ttl.isNegative();
    }

    private static byte[] createCacheLockKey(String name) {
        return (name + "~lock").getBytes(StandardCharsets.UTF_8);
    }
}
Custom cache annotations to replace the Spring Cache annotations
  • @ICacheConfig
package com.cube.share.cache.anonotation;

import org.springframework.cache.annotation.CacheConfig;
import org.springframework.core.annotation.AliasFor;

import java.lang.annotation.*;
import java.util.concurrent.TimeUnit;

/**
 * @author poker.li
 * @date 2021/7/17 16:08
 * <p>
 * Cache configuration annotation based on {@link org.springframework.cache.annotation.CacheConfig}
 */
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@CacheConfig
@Inherited
public @interface ICacheConfig {

    /**
     * Cache name prefix; use it to place the caches of different modules in separate locations,
     * shown as separate blocks in Redis. For a given cache key key="11235813", the actual Redis key is "sysUser::11235813"
     */
    @AliasFor(annotation = CacheConfig.class, attribute = "cacheNames")
    String[] cacheNames() default {};

    /**
     * Key generator for cache keys
     */
    @AliasFor(annotation = CacheConfig.class, attribute = "keyGenerator")
    String keyGenerator() default "";

    /**
     * Cache manager. If not specified, the default cache manager is used; to customize the cache expiration time,
     * this attribute must be set and must be unique, so that a new RedisCacheManager is created (the bean name is the value of cacheManager)
     */
    @AliasFor(annotation = CacheConfig.class, attribute = "cacheManager")
    String cacheManager() default "";

    @AliasFor(annotation = CacheConfig.class, attribute = "cacheResolver")
    String cacheResolver() default "";

    /**
     * Whether null values may be cached
     */
    boolean allowCachingNullValues() default false;

    /**
     * Time-to-live of the cache; a value less than or equal to 0 means the entries never expire
     */
    int expire() default 8;

    /**
     * Time unit of the cache TTL
     */
    TimeUnit timeUnit() default TimeUnit.HOURS;

    /**
     * Whether the cache should be transaction-aware.
     * Defaults to true: cache put/evict operations run only after the transaction commits successfully
     */
    boolean transactionAware() default true;
}

  • @ICache
package com.cube.share.cache.anonotation;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.core.annotation.AliasFor;

import java.lang.annotation.*;

/**
 * @author poker.li
 * @date 2021/7/17 17:08
 * <p>
 * Caching annotation based on {@link Cacheable}
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@SuppressWarnings("SpringCacheNamesInspection")
@Cacheable
public @interface ICache {

    /**
     * Cache name (prefix of the cache key); for example, when set to "sysUser",
     * a cache key key="11235813" is stored in Redis under the actual key "sysUser::11235813"
     */
    @AliasFor(annotation = Cacheable.class, attribute = "value")
    String[] value() default {};

    /**
     * Cache name (prefix of the cache key)
     */
    @AliasFor(annotation = Cacheable.class, attribute = "cacheNames")
    String[] cacheNames() default {};

    /**
     * Cache key
     */
    @AliasFor(annotation = Cacheable.class, attribute = "key")
    String key() default "";

    /**
     * Key generator for cache keys
     */
    @AliasFor(annotation = Cacheable.class, attribute = "keyGenerator")
    String keyGenerator() default "";

    /**
     * Cache manager; the default cache manager is used when not specified
     */
    @AliasFor(annotation = Cacheable.class, attribute = "cacheManager")
    String cacheManager() default "";

    @AliasFor(annotation = Cacheable.class, attribute = "cacheResolver")
    String cacheResolver() default "";

    /**
     * Condition, as a Spring EL expression, that decides whether the result is cached
     */
    @AliasFor(annotation = Cacheable.class, attribute = "condition")
    String condition() default "";

    /**
     * Evaluated against the method's result to veto caching; for example,
     * unless = "#result == null" means the result is cached only when it is not null
     */
    @AliasFor(annotation = Cacheable.class, attribute = "unless")
    String unless() default "";

    /**
     * Whether calls should be synchronized; if set to true, concurrent calls with the same key are serialized
     */
    @AliasFor(annotation = Cacheable.class, attribute = "sync")
    boolean sync() default false;

}

  • @ICachePut
package com.cube.share.cache.anonotation;

import org.springframework.cache.annotation.CachePut;
import org.springframework.core.annotation.AliasFor;

import java.lang.annotation.*;

/**
 * @author poker.li
 * @date 2021/7/17 17:33
 * <p>
 * Cache-update annotation based on {@link CachePut}
 */
@SuppressWarnings("SpringCacheNamesInspection")
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@CachePut
public @interface ICachePut {

    /**
     * Cache name (prefix of the cache key); for example, when set to "sysUser",
     * a cache key key="11235813" is stored in Redis under the actual key "sysUser::11235813"
     */
    @AliasFor(annotation = CachePut.class, attribute = "value")
    String[] value() default {};

    /**
     * Cache name (prefix of the cache key)
     */
    @AliasFor(annotation = CachePut.class, attribute = "cacheNames")
    String[] cacheNames() default {};

    /**
     * Cache key
     */
    @AliasFor(annotation = CachePut.class, attribute = "key")
    String key() default "";

    /**
     * Cache manager
     */
    @AliasFor(annotation = CachePut.class, attribute = "cacheManager")
    String cacheManager() default "";

    @AliasFor(annotation = CachePut.class, attribute = "cacheResolver")
    String cacheResolver() default "";

    /**
     * Condition, as a Spring EL expression, that decides whether the result is cached
     */
    @AliasFor(annotation = CachePut.class, attribute = "condition")
    String condition() default "";

    /**
     * Evaluated against the method's result to veto caching; for example,
     * unless = "#result == null" means the result is cached only when it is not null
     */
    @AliasFor(annotation = CachePut.class, attribute = "unless")
    String unless() default "";
}

  • @ICacheEvict
package com.cube.share.cache.anonotation;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.core.annotation.AliasFor;

import java.lang.annotation.*;

/**
 * @author cube.li
 * @date 2021/7/17 21:23
 * @description Cache eviction annotation based on {@link org.springframework.cache.annotation.CacheEvict}
 */
@SuppressWarnings({"SingleElementAnnotation", "SpringCacheNamesInspection"})
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@CacheEvict
public @interface ICacheEvict {

    /**
     * Cache name (prefix of the cache key); for example, when set to "sysUser",
     * a cache key key="11235813" is stored in Redis under the actual key "sysUser::11235813"
     */
    @AliasFor(annotation = CacheEvict.class, attribute = "value")
    String[] value() default {};

    /**
     * Cache name (prefix of the cache key)
     */
    @AliasFor(annotation = CacheEvict.class, attribute = "cacheNames")
    String[] cacheNames() default {};

    /**
     * Cache key
     */
    @AliasFor(annotation = CacheEvict.class, attribute = "key")
    String key() default "";

    /**
     * Cache manager
     */
    @AliasFor(annotation = CacheEvict.class, attribute = "cacheManager")
    String cacheManager() default "";

    @AliasFor(annotation = CacheEvict.class, attribute = "cacheResolver")
    String cacheResolver() default "";

    /**
     * Condition, as a Spring EL expression, that decides whether the eviction is performed
     */
    @AliasFor(annotation = CacheEvict.class, attribute = "condition")
    String condition() default "";

    /**
     * Whether to remove every entry under the specified cacheNames;
     * if set to false, only the configured key is removed
     */
    @AliasFor(annotation = CacheEvict.class, attribute = "allEntries")
    boolean allEntries() default false;

    /**
     * Whether to evict the cache before the method is invoked. Defaults to false: the cache is evicted only after the method completes successfully;
     * if set to true, the cache is evicted before the call, regardless of whether the method succeeds
     */
    @AliasFor(annotation = CacheEvict.class, attribute = "beforeInvocation")
    boolean beforeInvocation() default false;

}

Of the four annotations above, only @ICacheConfig actually re-wraps the native @CacheConfig by adding three attributes; the other three are merely aliases of the corresponding native Spring Cache annotations, kept around in case they need to be extended later.
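
As a quick, hedged illustration of those three extra attributes, a service-level configuration might look like the sketch below; the cache name, manager name, and service class are made up for this example and are not part of the sample project.

package com.cube.share.cache.sample;

import com.cube.share.cache.anonotation.ICache;
import com.cube.share.cache.anonotation.ICacheConfig;
import com.cube.share.cache.model.SysUser;
import org.springframework.stereotype.Service;

import java.util.concurrent.TimeUnit;

/**
 * Illustrative only: entries under "shortLived" expire after 30 minutes, null results may be cached,
 * and because the cacheManager name is unique, a dedicated RedisCacheManager is registered at startup
 * (see the processor introduced two sections below).
 */
@Service
@ICacheConfig(cacheNames = "shortLived",
        cacheManager = "shortLivedCacheManager",
        expire = 30,
        timeUnit = TimeUnit.MINUTES,
        allowCachingNullValues = true)
public class ShortLivedQueryService {

    @ICache(key = "#id")
    public SysUser getById(Integer id) {
        return new SysUser(id, "name" + id, "address" + id);
    }
}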

Specifying the default RedisCacheManager configuration
package com.cube.share.cache.config;

import com.cube.share.cache.writer.IRedisCacheWriter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.cache.RedisCacheWriter;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;

/**
 * @author poker.li
 * @date 2021/7/17 14:07
 * <p>
 * Redis cache configuration
 */
@Configuration
@ConditionalOnProperty(prefix = "ICache", name = "enabled", havingValue = "true")
@EnableCaching
public class RedisCacheConfig {

    @Bean
    @Primary
    public RedisCacheManager redisCacheManager(RedisConnectionFactory redisConnectionFactory) {
        RedisCacheConfiguration cacheConfiguration = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(8))
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()))
                .disableCachingNullValues();
        RedisCacheManager.RedisCacheManagerBuilder builder = RedisCacheManager.RedisCacheManagerBuilder
                .fromConnectionFactory(redisConnectionFactory)
                .cacheWriter(redisCacheWriter(redisConnectionFactory));
        return builder.transactionAware()
                .cacheDefaults(cacheConfiguration).build();
    }

    @Bean
    public RedisCacheWriter redisCacheWriter(RedisConnectionFactory redisConnectionFactory) {
        return new IRedisCacheWriter(redisConnectionFactory);
    }
}

If no RedisCacheManager is specified, the RedisCacheManager configured above is used as the default cache manager; it sets the cache expiration time to 8 hours.

Generating RedisCacheManager dynamically and handing it to Spring
package com.cube.share.cache.processor;

import com.cube.share.cache.anonotation.ICacheConfig;
import com.cube.share.cache.constant.RedisCacheConstant;
import com.cube.share.cache.writer.IRedisCacheWriter;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.config.ConstructorArgumentValues;
import org.springframework.beans.factory.support.DefaultListableBeanFactory;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.springframework.lang.NonNull;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import java.time.Duration;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;

/**
 * @author poker.li
 * @date 2021/7/20 11:50
 */
@Component
@SuppressWarnings("unused")
@ConditionalOnProperty(prefix = "ICache", name = "enabled", havingValue = "true")
public class CacheManagerProcessor implements BeanFactoryAware, ApplicationContextAware {

    private DefaultListableBeanFactory beanFactory;

    private ApplicationContext applicationContext;

    @Resource(type = IRedisCacheWriter.class)
    private IRedisCacheWriter redisCacheWriter;

    private Set<String> cacheManagerNameSet = new HashSet<>();

    @PostConstruct
    public void registerCacheManager() {
        cacheManagerNameSet.add(RedisCacheConstant.DEFAULT_CACHE_MANAGER_BEAN_NAME);
        // collect all beans annotated with @ICacheConfig
        Map<String, Object> annotatedBeanMap = this.applicationContext.getBeansWithAnnotation(ICacheConfig.class);
        // read the @ICacheConfig annotation on each bean
        Set<Map.Entry<String, Object>> entrySet = annotatedBeanMap.entrySet();
        for (Map.Entry<String, Object> entry : entrySet) {
            Object instance = entry.getValue();
            ICacheConfig iCacheConfig = instance.getClass().getAnnotation(ICacheConfig.class);
            registerRedisCacheManagerBean(iCacheConfig);
        }
    }

    @Override
    public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
        this.beanFactory = (DefaultListableBeanFactory) beanFactory;
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    private void registerRedisCacheManagerBean(ICacheConfig annotation) {
        final String cacheManagerName = annotation.cacheManager();
        if (StringUtils.isBlank(cacheManagerName)) {
            return;
        }

        if (!cacheManagerNameSet.contains(cacheManagerName)) {
            RootBeanDefinition definition = new RootBeanDefinition(RedisCacheManager.class);
            ConstructorArgumentValues argumentValues = new ConstructorArgumentValues();
            argumentValues.addIndexedArgumentValue(0, redisCacheWriter);
            argumentValues.addIndexedArgumentValue(1, getRedisCacheConfiguration(annotation));
            definition.setConstructorArgumentValues(argumentValues);
            beanFactory.registerBeanDefinition(cacheManagerName, definition);

            if (annotation.transactionAware()) {
                // make the dynamically registered manager transaction-aware
                RedisCacheManager currentManager = applicationContext.getBean(cacheManagerName, RedisCacheManager.class);
                currentManager.setTransactionAware(true);
            }
        }
    }

    @NonNull
    private RedisCacheConfiguration getRedisCacheConfiguration(ICacheConfig annotation) {
        final boolean allowCachingNullValues = annotation.allowCachingNullValues();
        final int expire = annotation.expire();
        final TimeUnit timeUnit = annotation.timeUnit();
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()));
        if (!allowCachingNullValues) {
            config = config.disableCachingNullValues();
        }
        if (expire > 0) {
            Duration duration = getDuration(expire, timeUnit);
            config = config.entryTtl(duration);
        }
        return config;
    }

    @NonNull
    private RedisCacheManager getRedisCacheManager(ICacheConfig annotation) {
        return RedisCacheManager.RedisCacheManagerBuilder
                .fromCacheWriter(redisCacheWriter)
                .transactionAware()
                .cacheDefaults(getRedisCacheConfiguration(annotation))
                .build();
    }

    @NonNull
    private Duration getDuration(int expire, TimeUnit timeUnit) {
        switch (timeUnit) {
            case DAYS:
                return Duration.ofDays(expire);
            case HOURS:
                return Duration.ofHours(expire);
            case MINUTES:
                return Duration.ofMinutes(expire);
            case SECONDS:
                return Duration.ofSeconds(expire);
            case MILLISECONDS:
                return Duration.ofMillis(expire);
            case NANOSECONDS:
                return Duration.ofNanos(expire);
            default:
                throw new IllegalArgumentException("Illegal Redis Cache Expire TimeUnit!");
        }
    }
}

After the container starts, beans annotated with @ICacheConfig are scanned and a matching RedisCacheManager is generated for each cacheManager attribute they declare.
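
As a quick sanity check, the dynamically registered managers can be looked up straight from the ApplicationContext. This is only a sketch: it assumes a Spring Boot test context with JUnit 5, a reachable Redis instance, and the cacheManager names declared by the test services in the next section.

package com.cube.share.cache.processor;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.ApplicationContext;
import org.springframework.data.redis.cache.RedisCacheManager;

import static org.junit.jupiter.api.Assertions.assertTrue;

@SpringBootTest
class CacheManagerRegistrationTest {

    @Autowired
    private ApplicationContext applicationContext;

    @Test
    void dynamicCacheManagersAreRegistered() {
        // these bean names come from the @ICacheConfig annotations on the test services
        assertTrue(applicationContext.containsBean("sysDepartmentCacheManager"));
        assertTrue(applicationContext.containsBean("sysLogCacheManager"));
        assertTrue(applicationContext.getBean("sysDepartmentCacheManager") instanceof RedisCacheManager);
    }
}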

Testing
package com.cube.share.cache.service;

import com.cube.share.cache.anonotation.ICache;
import com.cube.share.cache.anonotation.ICacheConfig;
import com.cube.share.cache.anonotation.ICacheEvict;
import com.cube.share.cache.anonotation.ICachePut;
import com.cube.share.cache.model.SysDepartment;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

/**
 * @author poker.li
 * @date 2021/7/20 14:35
 */
@Service
@Slf4j
@ICacheConfig(cacheNames = "sysDepartment", cacheManager = "sysDepartmentCacheManager", expire = -1)
public class SysDepartmentService {

    @ICache(key = "#a0")
    public SysDepartment getById(Integer id) {
        return new SysDepartment(id, "departmentName" + id, "departmentAlias" + id);
    }

    @ICachePut(key = "#p0?.id", condition = "#p0 != null")
    public SysDepartment update(SysDepartment sysDepartment) {
        return sysDepartment;
    }

    @ICacheEvict(key = "#p0")
    public void deleteById(Integer id) {
        log.debug("删除: {}", id);
    }
}
package com.cube.share.cache.service;

import com.cube.share.cache.anonotation.ICache;
import com.cube.share.cache.anonotation.ICacheConfig;
import com.cube.share.cache.anonotation.ICachePut;
import com.cube.share.cache.model.SysLog;
import org.springframework.stereotype.Service;

/**
 * @author cube.li
 * @date 2021/7/20 23:27
 * @description
 */
@Service
@ICacheConfig(cacheNames = "sysLog", cacheManager = "sysLogCacheManager", expire = 1)
public class SysLogServiceImpl implements SysLogService {

    @Override
    @ICache(key = "#id")
    public SysLog getById(Integer id) {
        return new SysLog(id, "operation" + id);
    }

    @Override
    @ICachePut(key = "#p0.id", condition = "#p0?.id != null")
    public SysLog update(SysLog sysLog) {
        return sysLog;
    }
}
package com.cube.share.cache.service;

import com.cube.share.cache.anonotation.ICache;
import com.cube.share.cache.anonotation.ICacheConfig;
import com.cube.share.cache.anonotation.ICacheEvict;
import com.cube.share.cache.anonotation.ICachePut;
import com.cube.share.cache.model.SysUser;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

/**
 * @author poker.li
 * @date 2021/7/17 15:28
 */
@Service
@ICacheConfig(cacheNames = "sysUser")
@Slf4j
public class SysUserService {

    @ICache(key = "#p0")
    public SysUser getById(Integer id) {
        return new SysUser(id, "name" + id, "address" + id);
    }

    @ICachePut(key = "#sysUser.id")
    public SysUser update(SysUser sysUser) {
        return sysUser;
    }

    @ICacheEvict(allEntries = true)
    public void deleteById(Integer id) {
        log.debug("删除 {}", id);
    }
}

Enable caching in the configuration file:

spring:
  redis:
    host: 127.0.0.1
    ssl: false
    port: 6379
    database: 1
    connect-timeout: 1000
    lettuce:
      pool:
        max-active: 10
        max-wait: -1
        min-idle: 0
        max-idle: 20
server:
  port: 8899

ICache:
  enabled: true

Write a few unit tests and take a look at the data in Redis:
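
Such a test might look like the sketch below. It is an assumption-laden example: it expects the Redis instance from the YAML above to be reachable, JUnit 5 to be on the classpath, and the auto-configured StringRedisTemplate to point at the same database as the cache.

package com.cube.share.cache.service;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.StringRedisTemplate;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

@SpringBootTest
class SysUserServiceCacheTest {

    @Autowired
    private SysUserService sysUserService;

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Test
    void getByIdStoresTheResultUnderTheModulePrefix() {
        assertNotNull(sysUserService.getById(11235813));
        // RedisCache builds keys as "<cacheName>::<key>", so the entry lands in the sysUser module
        assertEquals(Boolean.TRUE, stringRedisTemplate.hasKey("sysUser::11235813"));
    }
}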


[Figure: cache entries stored by module]
[Figure: global expiration time]
[Figure: custom expiration time (1)]
[Figure: custom expiration time (2)]

The test results show that when a cacheManager is specified, a corresponding RedisCacheManager is generated dynamically; otherwise the default cache manager is used.
Sample code: https://gitee.com/li-cube/share/tree/master/cache/src
