1. Preface
Why did this requirement come up? The database originally had many tables, none of them particularly large. As the business grew, two tables became very big, and because of their size, queries against them got slow. These two tables clearly have to be split. The simplest approach would be to split tables only, but the plan calls for 4 databases with 8 shards of each table per database, which means 16 physical tables per database and 64 in total. I'm not sure a single MySQL instance could cope with all of that, so it's better to shard across databases as well; that also makes it easier to scale out later.
So the requirement is: shard these two big tables across databases and tables, and leave all the other small tables in the original database untouched. That means switching between multiple data sources: ShardingSphere handles only the sharded tables, and a separate data source serves the rest.
Because of the business logic, the two big tables are closely related and are always modified together. If their rows could be routed to the same database by some business field, there would be no distributed-transaction problem at all. Unfortunately, things didn't work out that way: the same operations also update other small tables, so a distributed transaction is unavoidable...
A quick rant here: ShardingSphere's compatibility with other components is not great. I spent a whole afternoon without getting the integration to work and in the end had to lean on other people's examples. Stupid!!!
2. Setup
First, my pom file. Don't change the versions casually: 4.x has far too many bugs (or rather, too many bugs when integrating with other components). Also, all of my data sources are managed by Sharding-JDBC, even the one that is not sharded; the unsharded data source simply gets no sharding rules.
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <!-- Be careful with mybatis-plus: it causes all kinds of errors -->
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
        <version>2.1.3</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.18</version>
    </dependency>
    <!-- Druid data source -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.1.16</version>
    </dependency>
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
        <version>4.1.0</version>
    </dependency>
    <!-- required when using BASE transactions -->
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>sharding-transaction-base-seata-at</artifactId>
        <version>4.1.0</version>
    </dependency>
    <!-- Seata -->
    <dependency>
        <groupId>io.seata</groupId>
        <artifactId>seata-all</artifactId>
        <version>1.3.0</version>
    </dependency>
</dependencies>
Next, my application.properties:
server.port=10833
#mybatis (plain MyBatis, not mybatis-plus)
mybatis.mapper-locations=classpath:mapper/*.xml
# print SQL statements
spring.shardingsphere.props.sql.show=true
# data source names
spring.shardingsphere.datasource.names=master0,master1,normal
spring.shardingsphere.datasource.master0.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.master0.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.master0.url=jdbc:mysql://localhost:3306/to_data_0?characterEncoding=utf-8
spring.shardingsphere.datasource.master0.username=root
spring.shardingsphere.datasource.master0.password=root
spring.shardingsphere.datasource.master1.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.master1.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.master1.url=jdbc:mysql://localhost:3306/to_data_1?characterEncoding=utf-8
spring.shardingsphere.datasource.master1.username=root
spring.shardingsphere.datasource.master1.password=root
spring.shardingsphere.datasource.normal.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.normal.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.normal.url=jdbc:mysql://localhost:3306/member?characterEncoding=utf-8
spring.shardingsphere.datasource.normal.username=root
spring.shardingsphere.datasource.normal.password=root
# the user table is not sharded: pin it to the normal data source
spring.shardingsphere.sharding.tables.user.actual-data-nodes=normal.user
# shard databases by so
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=so
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=master$->{so % 2}
spring.shardingsphere.sharding.binding-tables=too,to_waybill
# shard the too table by so
spring.shardingsphere.sharding.tables.too.actual-data-nodes=master$->{0..1}.too_$->{0..1}
spring.shardingsphere.sharding.tables.too.table-strategy.inline.sharding-column=so
spring.shardingsphere.sharding.tables.too.table-strategy.inline.algorithm-expression=too_$->{so % 2}
# shard the to_waybill table by so
spring.shardingsphere.sharding.tables.to_waybill.actual-data-nodes=master$->{0..1}.to_waybill_$->{0..1}
spring.shardingsphere.sharding.tables.to_waybill.table-strategy.inline.sharding-column=so
spring.shardingsphere.sharding.tables.to_waybill.table-strategy.inline.algorithm-expression=to_waybill_$->{so % 2}
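To make the routing concrete, take a row with so = 143: 143 % 2 = 1, so the database strategy sends it to master1 (to_data_1) and the table strategy sends it to too_1. A to_waybill row with the same so likewise lands in to_waybill_1 of the same database, which is why the two tables are declared as binding tables. A row with so = 20 (20 % 2 = 0) would land in master0.too_0 instead.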
Enable Sharding-JDBC transactions, and remember to configure a transaction manager:
package com.snowflake1.test.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

import javax.sql.DataSource;

@Configuration
@EnableTransactionManagement
public class TransactionConfiguration {

    @Bean
    public PlatformTransactionManager txManager(final DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean
    public JdbcTemplate jdbcTemplate(final DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}
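The DataSource injected above is the sharding data source that the starter builds from the properties, and the mapper interfaces used in the next snippet also need to be discovered by MyBatis. As a minimal sketch (not from the original project; the class name and packages are placeholders), the bootstrap class could look like this:

package com.snowflake1.test;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
// Placeholder package: point this at wherever the MyBatis mappers live,
// or annotate each mapper interface with @Mapper instead.
@MapperScan("com.snowflake1.test.mapper")
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}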
Now operate on different databases inside a single transaction:
@Transactional
@ShardingTransactionType(TransactionType.BASE) // BASE here means the Seata AT transaction
public void sharding() {
    // writes routed to the to_data_0 / to_data_1 databases
    Too to1 = new Too(143, 20210305L, "one");
    Too to2 = new Too(153, 20210305L, "two");
    Too to3 = new Too(163, 20210305L, "three");
    toMapper.insert(to1);
    toMapper.insert(to2);
    toMapper.insert(to3);

    ToWaybill too1 = new ToWaybill(143, 20210305L, "one");
    ToWaybill too2 = new ToWaybill(153, 20210305L, "two");
    ToWaybill too3 = new ToWaybill(163, 20210305L, "three");
    toWaybillMapper.insert(too1);
    toWaybillMapper.insert(too2);
    toWaybillMapper.insert(too3);

    // write to the member database
    userMapper.insert(new User(199L, "ss", "ss", 3));

    // throw to force a rollback across all three databases
    throw new CIMException("");
}
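For reference, toMapper, toWaybillMapper and userMapper are ordinary MyBatis mappers. Below is a minimal sketch of what the Too entity and its mapper could look like; the field and column names (so, bizNo/biz_no, remark) are my own guesses from the constructor calls, not the article's actual schema, and the real project maps SQL in XML files under classpath:mapper/ rather than with annotations:

// Too.java (placeholder package)
package com.snowflake1.test.entity;

public class Too {

    private Integer so;   // sharding column
    private Long bizNo;
    private String remark;

    public Too(Integer so, Long bizNo, String remark) {
        this.so = so;
        this.bizNo = bizNo;
        this.remark = remark;
    }

    // getters/setters omitted for brevity; MyBatis needs the getters to bind #{so}, #{bizNo}, #{remark}
}

// TooMapper.java (placeholder package)
package com.snowflake1.test.mapper;

import com.snowflake1.test.entity.Too;
import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Mapper;

@Mapper
public interface TooMapper {

    // Insert against the logical table "too"; Sharding-JDBC rewrites it to
    // too_0 / too_1 on the database chosen by so % 2.
    @Insert("INSERT INTO too (so, biz_no, remark) VALUES (#{so}, #{bizNo}, #{remark})")
    int insert(Too record);
}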
Then create a seata.conf file under the project's classpath:
client {
    application.id = example                       ## unique application id
    transaction.service.group = my_test_tx_group   ## the transaction group this application belongs to
}
One more thing I'm not entirely sure about: I've seen others also place registry.conf and file.conf files in their own project, so I did the same. Note that the my_test_tx_group above has to match the vgroupMapping key in file.conf below.
registry.conf
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "file"
  loadBalance = "RandomLoadBalance"
  loadBalanceVirtualNodes = 10
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"
  file {
    name = "file.conf"
  }
}
file.conf
transport {
  # tcp, udt, unix-domain-socket
  type = "TCP"
  # NIO, NATIVE
  server = "NIO"
  # enable heartbeat
  heartbeat = true
  # thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size, will not be used for UDT
    boss-thread-size = 1
    # auto, default pin or 8
    worker-thread-size = 8
  }
}

service {
  vgroupMapping.my_test_tx_group = "default"
  # only supported when registry.type=file; please don't set multiple addresses
  default.grouplist = "127.0.0.1:8091"
  # degrade, currently not supported
  enableDegrade = false
  # disable seata
  disableGlobalTransaction = false
}

client {
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
}
Then start the Seata server: just download the Seata binary distribution and run it. Also remember that Seata AT mode needs an undo_log table in every participating business database; the DDL script can be found in the Seata project.
Finally, start the project and test it: you can see that a two-phase commit really happens. Then force an exception and the rollback shows up in the logs.
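As a minimal way to exercise this (my own sketch, not from the article: ShardingService is a placeholder name for whatever service class holds sharding(), JUnit 5 comes from spring-boot-starter-test, and it assumes CIMException is a RuntimeException so the transaction actually rolls back):

package com.snowflake1.test;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import static org.junit.jupiter.api.Assertions.assertThrows;

@SpringBootTest
class ShardingTransactionTest {

    @Autowired
    private ShardingService shardingService; // placeholder for the service holding sharding()

    @Test
    void rollbackAcrossAllDatabases() {
        // sharding() throws at the end, so none of the rows inserted into
        // to_data_0, to_data_1 or member should remain after this call
        assertThrows(RuntimeException.class, () -> shardingService.sharding());
    }
}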
3. Questions
Isn't everything above, including the unsharded data source, managed by ShardingSphere? Yes! When I tried a separate multi-data-source setup I ran into problems and never got it working, so I simply left everything under ShardingSphere...
4. References
Too many to list; this post is stitched together from quite a few of them. A deeper dive into ShardingSphere and Seata is still needed.