1. Introduction
The HBase server does not expose separate update or delete interfaces; on the server side these operations are all treated as writes (an update writes a new version of the cell, and a delete writes a tombstone marker). The update and delete flows are therefore identical to the write flow, so the analysis below walks through the write path using the put operation as an example.
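To make this concrete, here is a minimal client-side sketch using the standard HBase client API; the table, family and qualifier names are made up for illustration. From the client's point of view, both the "update" and the "delete" are just mutations shipped to the region server:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteIsEverything {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("demo"))) { // hypothetical table
      byte[] row = Bytes.toBytes("row-1");
      byte[] cf  = Bytes.toBytes("cf");
      // "Update": just another Put, writing a new version of the cell
      Put update = new Put(row).addColumn(cf, Bytes.toBytes("q"), Bytes.toBytes("v2"));
      table.put(update);
      // "Delete": a mutation that ends up writing a tombstone marker on the server
      Delete delete = new Delete(row);
      table.delete(delete);
    }
  }
}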
2. The HTable class
2.1 The put method of HTable:
@Override
public void put(final Put put) throws IOException {
  getBufferedMutator().mutate(put);
  if (autoFlush) {
    flushCommits();
  }
}

@Override
public void put(final List<Put> puts) throws IOException {
  getBufferedMutator().mutate(puts);
  if (autoFlush) {
    flushCommits();
  }
}
<1> getBufferedMutator().mutate(put);
BufferedMutator getBufferedMutator() throws IOException {
  // mutator: the client-side write buffer
  if (mutator == null) {
    // Create the mutator instance, passing the table name, thread pool,
    // write buffer size and maximum KeyValue size
    this.mutator = (BufferedMutatorImpl) connection.getBufferedMutator(
        new BufferedMutatorParams(tableName)
            .pool(pool)
            .writeBufferSize(tableConfiguration.getWriteBufferSize())
            .maxKeyValueSize(tableConfiguration.getMaxKeyValueSize())
    );
  }
  return mutator;
}
connection.getBufferedMutator actually delegates to ConnectionManager's getBufferedMutator method, which in the end constructs and returns a new BufferedMutatorImpl. The maxKeyValueSize here is the maximum allowed size of a single Cell. BufferedMutatorImpl holds the following fields:
protected ClusterConnection connection; // non-final so can be overridden in test
private final TableName tableName;
private volatile Configuration conf;
@VisibleForTesting
final ConcurrentLinkedQueue<Mutation> writeAsyncBuffer = new ConcurrentLinkedQueue<Mutation>();
@VisibleForTesting
AtomicLong currentWriteBufferSize = new AtomicLong(0);
private long writeBufferSize;
private final int maxKeyValueSize;
private boolean closed = false;
private final ExecutorService pool;
@VisibleForTesting
protected AsyncProcess ap; // non-final so can be overridden in test
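As an aside, the same BufferedMutatorParams knobs that getBufferedMutator sets internally are also available through the public Connection.getBufferedMutator API. A minimal usage sketch, with an illustrative table name and illustrative sizes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.BufferedMutatorParams;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DirectBufferedMutator {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // writeBufferSize / maxKeyValueSize values are illustrative
    BufferedMutatorParams params = new BufferedMutatorParams(TableName.valueOf("demo"))
        .writeBufferSize(4 * 1024 * 1024)  // flush threshold in bytes
        .maxKeyValueSize(1024 * 1024);     // reject any single Cell larger than this
    try (Connection conn = ConnectionFactory.createConnection(conf);
         BufferedMutator mutator = conn.getBufferedMutator(params)) {
      for (int i = 0; i < 10000; i++) {
        mutator.mutate(new Put(Bytes.toBytes("row-" + i))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
      }
      mutator.flush(); // push out anything still sitting in writeAsyncBuffer
    }
  }
}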
Calling mutate with the put request essentially just places the request into writeAsyncBuffer:
long toAddSize = 0;
for (Mutation m : ms) {
  // For a Put, validate that no single Cell exceeds the configured maximum size
  if (m instanceof Put) {
    validatePut((Put) m);
  }
  // Accumulate the heap size occupied by this mutation
  toAddSize += m.heapSize();
}

// This behavior is highly non-intuitive... it does not protect us against
// 94-incompatible behavior, which is a timing issue because hasError, the below code
// and setter of hasError are not synchronized. Perhaps it should be removed.
if (ap.hasError()) {
  // A previous asynchronous submission has failed:
  // account for the new mutations (AtomicLong, so the update is atomic),
  // add them to the buffer (ConcurrentLinkedQueue, thread-safe and ordered),
  // then flush immediately regardless of whether currentWriteBufferSize has
  // reached the threshold, and wait synchronously for the result
  currentWriteBufferSize.addAndGet(toAddSize);
  writeAsyncBuffer.addAll(ms);
  backgroundFlushCommits(true);
} else {
  currentWriteBufferSize.addAndGet(toAddSize);
  writeAsyncBuffer.addAll(ms);
}

// Now try and queue what needs to be queued.
while (currentWriteBufferSize.get() > writeBufferSize) {
  backgroundFlushCommits(false);
}
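The pattern is easier to see in isolation. Below is a self-contained sketch, not HBase code, that mirrors the same accumulate-then-flush idea: a ConcurrentLinkedQueue buffer whose size is tracked with an AtomicLong and which is drained whenever the threshold is exceeded:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// Illustration only: mirrors the accumulate-then-flush logic of the mutate snippet above.
public class ThresholdBuffer {
  private final ConcurrentLinkedQueue<byte[]> buffer = new ConcurrentLinkedQueue<>();
  private final AtomicLong currentSize = new AtomicLong(0);
  private final long flushThreshold;

  public ThresholdBuffer(long flushThreshold) {
    this.flushThreshold = flushThreshold;
  }

  public void add(byte[] payload) {
    buffer.add(payload);
    currentSize.addAndGet(payload.length);
    // Keep flushing until we are back under the threshold,
    // just like the while-loop at the end of mutate()
    while (currentSize.get() > flushThreshold) {
      flush();
    }
  }

  private void flush() {
    List<byte[]> batch = new ArrayList<>();
    byte[] item;
    while ((item = buffer.poll()) != null) {
      batch.add(item);
      currentSize.addAndGet(-item.length);
    }
    // In HBase this batch would be handed to AsyncProcess.submit(...)
    System.out.println("flushing " + batch.size() + " items");
  }

  public static void main(String[] args) {
    ThresholdBuffer b = new ThresholdBuffer(16);
    for (int i = 0; i < 10; i++) {
      b.add(new byte[5]); // each add contributes 5 "bytes" of buffered size
    }
  }
}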
To summarize put: getBufferedMutator().mutate(put) is called first. Inside mutate, once the buffered size reaches the writeBufferSize threshold, the requests are handed to AsyncProcess for asynchronous submission; if the AsyncProcess has already recorded an error from a previous asynchronous submission, the flush is performed synchronously and without waiting for the threshold. Then comes the branch if (autoFlush) { flushCommits(); }: when autoFlush is true, flushCommits() forces a flush through AsyncProcess immediately, regardless of whether the buffer has reached the threshold.
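In practice a 1.x client can exploit this by disabling autoFlush and flushing explicitly, so that puts accumulate in writeAsyncBuffer instead of being pushed out on every call. A minimal sketch: setAutoFlushTo and flushCommits are the 1.x HTable API (removed in 2.0), the cast assumes the 1.x client where getTable returns an HTable, and the table/column names are illustrative.

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedPuts {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Assumption: in the 1.x client, getTable returns an HTable under the hood
      HTable table = (HTable) conn.getTable(TableName.valueOf("demo"));
      table.setAutoFlushTo(false); // buffer puts client-side instead of flushing per call
      List<Put> puts = new ArrayList<>();
      for (int i = 0; i < 1000; i++) {
        puts.add(new Put(Bytes.toBytes("row-" + i))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v" + i)));
      }
      table.put(puts);      // goes into writeAsyncBuffer; flushed when writeBufferSize is exceeded
      table.flushCommits(); // force out whatever is still buffered
      table.close();
    }
  }
}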
2.2 The backgroundFlushCommits method
When backgroundFlushCommits is called with false it performs an asynchronous submission; when called with true it performs a synchronous one. The synchronous path is rarely taken (only after an asynchronous submission has already failed). In both cases the actual submission is carried out by the AsyncProcess field ap, and the key call is ap.submit(tableName, buffer, true, null, false);
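As a rough mental model only, the sketch below mirrors that control flow: submit asynchronously by default, and block for the results when the call is synchronous or a previous submission has failed. It deliberately uses a plain ExecutorService and Future rather than the real AsyncProcess API:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustration of the control flow only: async by default, blocking on the
// synchronous path (or after an error). Not the real AsyncProcess API.
public class FlushSketch {
  private final ExecutorService pool = Executors.newFixedThreadPool(4);
  private final ConcurrentLinkedQueue<String> writeBuffer = new ConcurrentLinkedQueue<>();
  private final List<Future<?>> inFlight = new ArrayList<>();
  private volatile boolean hasError = false;

  void backgroundFlush(boolean synchronous) throws Exception {
    List<String> batch = new ArrayList<>();
    String m;
    while ((m = writeBuffer.poll()) != null) {
      batch.add(m);
    }
    if (!batch.isEmpty()) {
      // analogous to ap.submit(tableName, buffer, true, null, false)
      inFlight.add(pool.submit(() -> send(batch)));
    }
    if (synchronous || hasError) {
      // Synchronous path: wait for every in-flight batch and surface failures
      for (Future<?> f : inFlight) {
        f.get();
      }
      inFlight.clear();
    }
  }

  private void send(List<String> batch) {
    System.out.println("sending batch of " + batch.size());
  }

  public static void main(String[] args) throws Exception {
    FlushSketch s = new FlushSketch();
    for (int i = 0; i < 5; i++) s.writeBuffer.add("put-" + i);
    s.backgroundFlush(false); // async: returns without waiting
    s.backgroundFlush(true);  // sync: blocks until all in-flight batches complete
    s.pool.shutdown();
  }
}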
The locateRegion method is not covered here; a dedicated post will analyze it later. After each action has been located, the actions are grouped by their target server into a Map<ServerName, MultiAction<Row>> actionsByServer.
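Conceptually the grouping step looks like the sketch below, which uses the public RegionLocator API to resolve each row to its server. The real client does this inside AsyncProcess via locateRegion and builds MultiAction objects per server; the table name here is illustrative.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class GroupByServer {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("demo"))) {
      List<Put> puts = new ArrayList<>();
      for (int i = 0; i < 100; i++) {
        puts.add(new Put(Bytes.toBytes("row-" + i))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
      }
      // Group the mutations by the region server that owns each row,
      // analogous to building Map<ServerName, MultiAction<Row>> actionsByServer
      Map<ServerName, List<Put>> actionsByServer = new HashMap<>();
      for (Put p : puts) {
        ServerName sn = locator.getRegionLocation(p.getRow()).getServerName();
        actionsByServer.computeIfAbsent(sn, k -> new ArrayList<>()).add(p);
      }
      actionsByServer.forEach((sn, batch) ->
          System.out.println(sn + " -> " + batch.size() + " puts"));
    }
  }
}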
Next comes the multi-threaded RPC submission. The call chain inside AsyncProcess is: submit -> submitMultiActions -> sendMultiAction -> getNewMultiActionRunnable -> the run method of SingleServerRequestRunnable.
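The tail of that chain essentially wraps each per-server batch in a Runnable and hands it to the client thread pool. A generic sketch of that fan-out (not the real SingleServerRequestRunnable, which also builds the MultiServerCallable and the RPC caller):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FanOutSketch {
  // One Runnable per target server, submitted to a shared pool, mirroring how
  // sendMultiAction schedules one SingleServerRequestRunnable per server.
  static void sendMultiAction(Map<String, List<String>> actionsByServer, ExecutorService pool) {
    for (Map.Entry<String, List<String>> entry : actionsByServer.entrySet()) {
      final String server = entry.getKey();
      final List<String> batch = entry.getValue();
      pool.submit(() -> {
        // In HBase this is where the MultiServerCallable is built and the RPC is issued
        System.out.println("RPC to " + server + " with " + batch.size() + " actions");
      });
    }
  }

  public static void main(String[] args) {
    Map<String, List<String>> actionsByServer = new HashMap<>();
    actionsByServer.put("rs1.example.com,16020,1", Arrays.asList("put-1", "put-2"));
    actionsByServer.put("rs2.example.com,16020,1", Arrays.asList("put-3"));
    ExecutorService pool = Executors.newFixedThreadPool(4);
    sendMultiAction(actionsByServer, pool);
    pool.shutdown();
  }
}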
3. Summary
(1) Put operations are added to the writeAsyncBuffer queue; once the conditions are met (autoFlush is on, or the buffered size exceeds the writeBufferSize threshold), they are submitted asynchronously in batches via AsyncProcess.
(2) Before submission, each rowkey is resolved to the region server it belongs to via HConnection's locateRegion method, and the rowkeys are then grouped by HRegionLocation.
(3) Using multiple threads, a MultiServerCallable<Row> is built for each HRegionLocation and executed through rpcCallerFactory.<MultiResponse> newCaller(); leaving aside failure retries and error handling, the client side of the write ends here.
Related post: source-code analysis of the HBase put flow