In the previous post we left a loose end: we never covered `poolUpdater`. This post picks it up, starting as usual from the initialization method:
```java
public void init() {
    if (inited) {
        return;
    }
    synchronized (this) {
        if (inited) {
            return;
        }
        if (intervalSeconds < 10) {
            LOG.warn("CAUTION: Purge interval has been set to " + intervalSeconds
                    + ". This value should NOT be too small.");
        }
        if (intervalSeconds <= 0) {
            intervalSeconds = DEFAULT_INTERVAL;
        }
        executor = Executors.newScheduledThreadPool(1);
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                LOG.debug("Purging the DataSource Pool every " + intervalSeconds + "s.");
                try {
                    removeDataSources();
                } catch (Exception e) {
                    LOG.error("Exception occurred while removing DataSources.", e);
                }
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
        inited = true;
    }
}
```
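The guard-and-schedule pattern in `init()` — a double-checked `inited` flag plus a single-threaded `ScheduledExecutorService` — is worth isolating. Here is a minimal, self-contained sketch; the class and field names are illustrative, not Druid's:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LazyScheduler {
    private static final int DEFAULT_INTERVAL = 60;
    private final AtomicInteger runs = new AtomicInteger();
    private volatile boolean inited = false;
    private ScheduledExecutorService executor;
    private int intervalSeconds;

    public LazyScheduler(int intervalSeconds) {
        this.intervalSeconds = intervalSeconds;
    }

    /** Returns true only on the call that actually performed the initialization. */
    public boolean init() {
        if (inited) {
            return false; // fast path: already initialized, no lock taken
        }
        synchronized (this) {
            if (inited) {
                return false; // another thread initialized while we waited for the lock
            }
            if (intervalSeconds <= 0) {
                intervalSeconds = DEFAULT_INTERVAL; // fall back to a sane default
            }
            executor = Executors.newScheduledThreadPool(1);
            executor.scheduleAtFixedRate(runs::incrementAndGet,
                    intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
            inited = true;
            return true;
        }
    }

    public int getIntervalSeconds() { return intervalSeconds; }
    public boolean isInited()       { return inited; }
    public void shutdown()          { if (executor != null) executor.shutdownNow(); }
}
```

Calling `init()` twice schedules only one task; without the `inited = true` write inside the synchronized block, every call would spawn another scheduler.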
The logic of `init()` is straightforward: it explicitly validates `intervalSeconds` (warning when the value is below 10 seconds, and falling back to the default when it is not positive), then starts a scheduled task that calls `removeDataSources` every `intervalSeconds` seconds to check whether the nodes marked for deletion can actually be removed. Let's look at `removeDataSources` next:
```java
public void removeDataSources() {
    if (nodesToDel == null || nodesToDel.isEmpty()) {
        return;
    }
    try {
        lock.lock();
        Map<String, DataSource> map = highAvailableDataSource.getDataSourceMap();
        Set<String> copySet = new HashSet<String>(nodesToDel);
        for (String nodeName : copySet) {
            LOG.info("Start removing Node " + nodeName + ".");
            if (!map.containsKey(nodeName)) {
                LOG.info("Node " + nodeName + " is NOT existed in the map.");
                cancelBlacklistNode(nodeName);
                continue;
            }
            DataSource ds = map.get(nodeName);
            if (ds instanceof DruidDataSource) {
                DruidDataSource dds = (DruidDataSource) ds;
                int activeCount = dds.getActiveCount(); // CAUTION, activeCount MAYBE changed!
                if (activeCount > 0) {
                    LOG.warn("Node " + nodeName + " is still running [activeCount=" + activeCount
                            + "], try next time.");
                    continue;
                } else {
                    LOG.info("Close Node " + nodeName + " and remove it.");
                    try {
                        dds.close();
                    } catch (Exception e) {
                        LOG.error("Exception occurred while closing Node " + nodeName
                                + ", just remove it.", e);
                    }
                }
            }
            map.remove(nodeName); // Remove the node directly if it is NOT a DruidDataSource.
            cancelBlacklistNode(nodeName);
        }
    } catch (Exception e) {
        LOG.error("Exception occurred while removing DataSources.", e);
    } finally {
        lock.unlock();
    }
}
```
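The core of `removeDataSources` is a snapshot-and-retry idiom: iterate over a copy of `nodesToDel` (so the loop can safely shrink the underlying set), close only the nodes with zero active connections, and leave busy ones marked for the next round. A stripped-down sketch of just that idiom, with a plain counter standing in for `DruidDataSource#getActiveCount` (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

public class NodePurger {
    private final ReentrantLock lock = new ReentrantLock();
    private final Set<String> nodesToDel = new HashSet<>();
    // node -> active connection count, a stand-in for the real DataSource map
    private final Map<String, Integer> activeCounts = new HashMap<>();

    public void markForDeletion(String node, int activeCount) {
        nodesToDel.add(node);
        activeCounts.put(node, activeCount);
    }

    /** Remove every marked node whose active count is 0; busy nodes wait for the next round. */
    public void purge() {
        if (nodesToDel.isEmpty()) {
            return;
        }
        lock.lock();
        try {
            // Iterate over a snapshot so removals inside the loop are safe.
            for (String node : new HashSet<>(nodesToDel)) {
                Integer active = activeCounts.get(node);
                if (active == null || active == 0) {
                    activeCounts.remove(node); // the real code closes the DataSource here
                    nodesToDel.remove(node);
                }
                // active > 0: keep it marked, retry on the next scheduled run
            }
        } finally {
            lock.unlock();
        }
    }

    public boolean contains(String node) { return activeCounts.containsKey(node); }
    public boolean isMarked(String node) { return nodesToDel.contains(node); }
}
```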
As you can see, its main job is to iterate over the `nodesToDel` set, call `getActiveCount` on each `DruidDataSource` to get the number of active connections, and, once that number is 0, call `close` and remove the node. Besides `poolUpdater`, there is one more method we skipped earlier, `createNodeMap`. Let's look at it now:
```java
private void createNodeMap() {
    if (nodeListener == null) {
        // Compatiable with the old version.
        // Create a FileNodeListener to watch the dataSourceFile.
        FileNodeListener listener = new FileNodeListener();
        listener.setFile(dataSourceFile);
        listener.setPrefix(propertyPrefix);
        nodeListener = listener;
    }
    nodeListener.setObserver(poolUpdater);
    nodeListener.init();
    nodeListener.update(); // Do update in the current Thread at the startup
}
```
This mainly sets up the `nodeListener` of `HighAvailableDataSource`. The default listener is `FileNodeListener`, which here is simply configured with the `dataSourceFile` and the property prefix. Then comes `setObserver`: this is a classic observer pattern, with `nodeListener` as the observable and `poolUpdater` as the observer. Let's look at the listener's `update` method:
```java
public void update(List<NodeEvent> events) {
    if (events != null && !events.isEmpty()) {
        this.lastUpdateTime = new Date();
        NodeEvent[] arr = new NodeEvent[events.size()];
        for (int i = 0; i < events.size(); i++) {
            arr[i] = events.get(i);
        }
        this.setChanged();
        this.notifyObservers(arr);
    }
}
```
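Both sides hinge on `java.util.Observable`: `setChanged()` must be called before `notifyObservers(...)`, otherwise the notification is silently dropped. A minimal end-to-end demo of that wiring (`Observable` has been deprecated since Java 9 but still works, and it is what Druid builds on; the event strings below are stand-ins for `NodeEvent[]`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Observable;
import java.util.Observer;

@SuppressWarnings("deprecation") // java.util.Observable/Observer are deprecated since Java 9
public class ObserverDemo {
    /** Plays the role of NodeListener: the observable side. */
    static class Listener extends Observable {
        void publish(String[] events) {
            setChanged();            // without this, notifyObservers() is a no-op
            notifyObservers(events); // pushes the events to every registered observer
        }
    }

    /** Plays the role of PoolUpdater: the observer side. */
    static class Updater implements Observer {
        final List<String> received = new ArrayList<>();

        @Override
        public void update(Observable o, Object arg) {
            if (arg instanceof String[]) {
                received.addAll(Arrays.asList((String[]) arg));
            }
        }
    }

    public static List<String> run() {
        Listener listener = new Listener();
        Updater updater = new Updater();
        listener.addObserver(updater); // what setObserver does under the hood
        listener.publish(new String[] {"ADD node1", "DELETE node2"});
        return updater.received;
    }
}
```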
The last two lines are the key part: `notifyObservers` tells every registered observer that events have occurred and passes the `events` array along. Now let's look at the observer's `update` method:
```java
/**
 * Process the given NodeEvent[]. Maybe add / delete some nodes.
 */
@Override
public void update(Observable o, Object arg) {
    if (!(o instanceof NodeListener)) {
        return;
    }
    if (arg == null || !(arg instanceof NodeEvent[])) {
        return;
    }
    NodeEvent[] events = (NodeEvent[]) arg;
    if (events.length <= 0) {
        return;
    }
    try {
        LOG.info("Waiting for Lock to start processing NodeEvents.");
        lock.lock();
        LOG.info("Start processing the NodeEvent[" + events.length + "].");
        for (NodeEvent e : events) {
            if (e.getType() == NodeEventTypeEnum.ADD) {
                addNode(e);
            } else if (e.getType() == NodeEventTypeEnum.DELETE) {
                deleteNode(e);
            }
        }
    } catch (Exception e) {
        LOG.error("Exception occurred while updating Pool.", e);
    } finally {
        lock.unlock();
    }
}
```
Here it takes the events that were just passed in, parses them, and updates the `dataSourceMap` of `HighAvailableDataSource` by adding or deleting nodes. Next, let's look at the default `nodeListener` implementation, `FileNodeListener`, starting with its `init` method:
```java
@Override
public void init() {
    super.init();
    if (intervalSeconds <= 0) {
        intervalSeconds = 60;
    }
    executor = Executors.newScheduledThreadPool(1);
    executor.scheduleAtFixedRate(new Runnable() {
        @Override
        public void run() {
            LOG.debug("Checking file " + file + " every " + intervalSeconds + "s.");
            if (!lock.tryLock()) {
                LOG.info("Can not acquire the lock, skip this time.");
                return;
            }
            try {
                update();
            } catch (Exception e) {
                LOG.error("Can NOT update the node list.", e);
            } finally {
                lock.unlock();
            }
        }
    }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
}
```
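One detail worth noticing above is `tryLock()`: a tick that cannot get the lock skips the round instead of blocking, so slow refreshes never pile up behind each other. A small sketch of that skip-if-busy idiom — using a `Semaphore` rather than Druid's `ReentrantLock` so the busy case can be demonstrated from a single thread (the counters are illustrative):

```java
import java.util.concurrent.Semaphore;

public class SkippingTask {
    private final Semaphore busy = new Semaphore(1); // at most one refresh at a time
    private int updates = 0;
    private int skips = 0;

    /** Run one tick: skip (instead of waiting) when a refresh is already in progress. */
    public void tick() {
        if (!busy.tryAcquire()) {
            skips++; // another refresh holds the permit; try again next interval
            return;
        }
        try {
            updates++; // the real listener calls update() here
        } finally {
            busy.release();
        }
    }

    public Semaphore getSemaphore() { return busy; }
    public int getUpdates() { return updates; }
    public int getSkips()   { return skips; }
}
```

The design choice matters for periodic jobs: with a blocking `lock()`, a refresh that takes longer than the interval would queue up redundant runs; skipping keeps the schedule drift-free.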
This is very similar to `poolUpdater`: it starts a scheduled task that repeatedly calls `update`. That `update` method first calls `refresh` to generate the events, then calls the `update(List<NodeEvent>)` method we saw above to notify the observers. Let's look at the `refresh` logic first:
```java
@Override
public List<NodeEvent> refresh() {
    Properties originalProperties = PropertiesUtils.loadProperties(file);
    List<String> nameList = PropertiesUtils.loadNameList(originalProperties, getPrefix());
    Properties properties = new Properties();
    for (String n : nameList) {
        String url = originalProperties.getProperty(n + ".url");
        String username = originalProperties.getProperty(n + ".username");
        String password = originalProperties.getProperty(n + ".password");
        if (url == null || url.isEmpty()) {
            LOG.warn(n + ".url is EMPTY! IGNORE!");
            continue;
        } else {
            properties.setProperty(n + ".url", url);
        }
        if (username == null || username.isEmpty()) {
            LOG.debug(n + ".username is EMPTY. Maybe you should check the config.");
        } else {
            properties.setProperty(n + ".username", username);
        }
        if (password == null || password.isEmpty()) {
            LOG.debug(n + ".password is EMPTY. Maybe you should check the config.");
        } else {
            properties.setProperty(n + ".password", password);
        }
    }
    List<NodeEvent> events = NodeEvent.getEventsByDiffProperties(getProperties(), properties);
    if (events != null && !events.isEmpty()) {
        LOG.info(events.size() + " different(s) detected.");
        setProperties(properties);
    }
    return events;
}
```
Here it first obtains the nameList of all configured data sources. The extraction logic is: scan the initial `Properties` for every key containing `.url` and take the part before it as the node name. For a file like the one below, the nameList would be `aaa` and `bbb`:

```properties
aaa.url=***
bbb.url=***
```
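Given a file like the one above, the name extraction — and the snapshot diff that `refresh` performs next — can be sketched as follows. This is a simplification: Druid's real `PropertiesUtils.loadNameList` also honors the configured prefix, and `NodeEvent.getEventsByDiffProperties` produces real event objects rather than the plain strings used here.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.TreeMap;

public class NodeConfigParser {
    /** Extract node names: every distinct prefix that has a "<name>.url" property. */
    public static List<String> loadNameList(Properties props) {
        List<String> names = new ArrayList<>();
        for (String key : props.stringPropertyNames()) {
            if (key.endsWith(".url")) {
                names.add(key.substring(0, key.length() - ".url".length()));
            }
        }
        Collections.sort(names);
        return names;
    }

    /** Diff two snapshots: names only in the new one are ADDs, names only in the old one are DELETEs. */
    public static Map<String, String> diff(Properties oldProps, Properties newProps) {
        Set<String> oldNames = new HashSet<>(loadNameList(oldProps));
        Set<String> newNames = new HashSet<>(loadNameList(newProps));
        Map<String, String> events = new TreeMap<>();
        for (String n : newNames) {
            if (!oldNames.contains(n)) {
                events.put(n, "ADD");
            }
        }
        for (String n : oldNames) {
            if (!newNames.contains(n)) {
                events.put(n, "DELETE");
            }
        }
        return events;
    }
}
```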
It then iterates over the nameList, parses each name's configuration (url, username, password) into a fresh `Properties`, and finally calls `NodeEvent.getEventsByDiffProperties` to diff the old and new snapshots and generate the events. In the same spirit, let's look at `ZookeeperNodeListener`, starting with its `init` method:
```java
@Override
public void init() {
    checkParameters();
    super.init();
    if (client == null) {
        client = CuratorFrameworkFactory.builder()
                .canBeReadOnly(true)
                .connectionTimeoutMs(5000)
                .connectString(zkConnectString)
                .retryPolicy(new RetryForever(10000))
                .sessionTimeoutMs(30000)
                .build();
        client.start();
        privateZkClient = true;
    }
    cache = new PathChildrenCache(client, path, true);
    cache.getListenable().addListener(new PathChildrenCacheListener() {
        @Override
        public void childEvent(CuratorFramework client, PathChildrenCacheEvent event) throws Exception {
            try {
                LOG.info("Receive an event: " + event.getType());
                lock.lock();
                PathChildrenCacheEvent.Type eventType = event.getType();
                switch (eventType) {
                    case CHILD_REMOVED:
                        updateSingleNode(event, NodeEventTypeEnum.DELETE);
                        break;
                    case CHILD_ADDED:
                        updateSingleNode(event, NodeEventTypeEnum.ADD);
                        break;
                    case CONNECTION_RECONNECTED:
                        refreshAllNodes();
                        break;
                    default:
                        // CHILD_UPDATED
                        // INITIALIZED
                        // CONNECTION_LOST
                        // CONNECTION_SUSPENDED
                        LOG.info("Received a PathChildrenCacheEvent, IGNORE it: " + event);
                }
            } finally {
                lock.unlock();
                LOG.info("Finish the processing of event: " + event.getType());
            }
        }
    });
    try {
        // Use BUILD_INITIAL_CACHE to force build cache in the current Thread.
        // We don't use POST_INITIALIZED_EVENT, so there's no INITIALIZED event.
        cache.start(PathChildrenCache.StartMode.BUILD_INITIAL_CACHE);
    } catch (Exception e) {
        LOG.error("Can't start PathChildrenCache", e);
    }
}
```
Druid uses the Curator library to operate on ZooKeeper nodes. `init` first registers a listener for child-node events, handling `CHILD_REMOVED`, `CHILD_ADDED` and `CONNECTION_RECONNECTED`. When `CHILD_REMOVED` or `CHILD_ADDED` fires, the following method is called:
```java
private void updateSingleNode(PathChildrenCacheEvent event, NodeEventTypeEnum type) {
    ChildData data = event.getData();
    String nodeName = getNodeName(data);
    List<String> names = new ArrayList<String>();
    names.add(getPrefix() + "." + nodeName);
    Properties properties = getPropertiesFromChildData(data);
    List<NodeEvent> events = NodeEvent.generateEvents(properties, names, type);
    if (events.isEmpty()) {
        return;
    }
    if (type == NodeEventTypeEnum.ADD) {
        getProperties().putAll(properties);
    } else {
        for (String n : properties.stringPropertyNames()) {
            getProperties().remove(n);
        }
    }
    super.update(events);
}
```
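The bookkeeping in the middle — `putAll` on ADD, key-by-key `remove` on DELETE — is what keeps the listener's cached `Properties` in sync with ZooKeeper. A toy version of just that bookkeeping, with string event types standing in for `NodeEventTypeEnum`:

```java
import java.util.Properties;

public class NodePropertyStore {
    private final Properties properties = new Properties();

    /** Apply one node event: ADD merges the node's properties, DELETE drops them. */
    public void applyEvent(String type, Properties nodeProps) {
        if ("ADD".equals(type)) {
            properties.putAll(nodeProps);          // merge every key the new node brings
        } else if ("DELETE".equals(type)) {
            for (String key : nodeProps.stringPropertyNames()) {
                properties.remove(key);            // drop every key belonging to the node
            }
        }
    }

    public Properties getProperties() { return properties; }
}
```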
This method receives the event type and the `PathChildrenCacheEvent`, uses them to build Druid's own `NodeEvent`s, updates the cached properties accordingly, and finally notifies all the observers.