At work I ran into a problem collecting and displaying Ambari metrics: I needed to add some Python scripts to aggregate and organize the metric data produced by the platform's scripts, which meant understanding part of the ambari-collector flow. I read through the relevant code briefly and wrote up this analysis so I don't forget it later. All code below is abridged and for reference only.
- First, let's look at main.py:
```python
from core.controller import Controller

def main(argv=None):
  # Allow Ctrl-C
  stop_handler = bind_signal_handlers()
  server_process_main(stop_handler)

def server_process_main(stop_handler, scmStatus=None):
  if scmStatus is not None:
    scmStatus.reportStartPending()

  config = Configuration()
  _init_logging(config)
  controller = Controller(config, stop_handler)
  logger.info('Starting Server RPC Thread: %s' % ' '.join(sys.argv))
  controller.start()

  print "Server out at: " + SERVER_OUT_FILE
  print "Server log at: " + SERVER_LOG_FILE
  save_pid(os.getpid(), PID_OUT_FILE)

  if scmStatus is not None:
    scmStatus.reportStarted()

  # The controller thread finishes when the stop event is signaled
  controller.join()
  remove_file(PID_OUT_FILE)
  pass
```
As the code above shows, server_process_main instantiates the Controller class and calls controller.start() to spin up a thread. Let's look at the Controller code (abridged):
```python
from emitter import Emitter

class Controller(threading.Thread):
  def __init__(self, config, stop_handler):
    # Process initialization code
    threading.Thread.__init__(self)
    self.emitter = Emitter(self.config, self.application_metric_map, stop_handler)

  def run(self):
    self.start_emitter()

    Timer(1, self.addsecond).start()
    while True:
      if (self.event_queue.full()):
        logger.warn('Event Queue full!! Suspending further collections.')
      else:
        self.enqueue_events()
      pass
      if 0 == self._stop_handler.wait(self.sleep_interval):
        logger.info('Shutting down Controller thread')
        break

    if not self._t is None:
      self._t.cancel()
      self._t.join(5)

    self.emitter.join(5)
    pass

  def start_emitter(self):
    self.emitter.start()
```
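A side note on the shutdown mechanics: Controller (and Emitter, shown below) both loop doing work and then block on the shared stop handler for one interval; in the Ambari code, `wait()` returning 0 signals shutdown. A minimal Python 3 sketch of the same pattern using a plain `threading.Event` (whose `wait()` instead returns True once set; all names here are illustrative, not Ambari's):

```python
import threading

class Worker(threading.Thread):
    """Loops until the shared stop event fires, mirroring the
    wait-on-stop-handler loop used by Controller and Emitter."""

    def __init__(self, stop_event, interval=0.05):
        threading.Thread.__init__(self)
        self.stop_event = stop_event
        self.interval = interval
        self.iterations = 0

    def run(self):
        while True:
            self.iterations += 1          # one round of "work"
            # Event.wait() returns True once the event is set -> shut down
            if self.stop_event.wait(self.interval):
                break

stop_event = threading.Event()
worker = Worker(stop_event)
worker.start()
stop_event.set()        # signal shutdown, as the stop handler would
worker.join(5)
print(worker.is_alive())  # → False
```

Waiting on the event instead of `time.sleep()` is what lets every thread notice shutdown within at most one interval.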
In run(), start_emitter() is called, which invokes emitter.start() on the Emitter instance created in __init__. Next, let's look at the Emitter code:
```python
class Emitter(threading.Thread):
  COLLECTOR_URL = "xxxxx"
  RETRY_SLEEP_INTERVAL = 5
  MAX_RETRY_COUNT = 3

  def __init__(self, config, application_metric_map, stop_handler):
    threading.Thread.__init__(self)
    self.lock = threading.Lock()
    self.collector_address = config.get_server_address()
    self.send_interval = config.get_send_interval()
    self._stop_handler = stop_handler
    self.application_metric_map = application_metric_map

  def run(self):
    while True:
      try:
        self.submit_metrics()
      except Exception, e:
        logger.warn('Unable to emit events. %s' % str(e))
      pass
      if 0 == self._stop_handler.wait(self.send_interval):
        logger.info('Shutting down Emitter thread')
        return
      pass

  def submit_metrics(self):
    retry_count = 0
    # This call will acquire lock on the map and clear contents before returning
    # After configured number of retries the data will not be sent to the
    # collector
    json_data = self.application_metric_map.flatten(None, True)
    if json_data is None:
      logger.info("Nothing to emit, resume waiting.")
      return
    pass

    response = None
    while retry_count < self.MAX_RETRY_COUNT:
      try:
        response = self.push_metrics(json_data)
      except Exception, e:
        logger.warn('Error sending metrics to server. %s' % str(e))
      pass

      if response and response.getcode() == 200:
        retry_count = self.MAX_RETRY_COUNT
      else:
        logger.warn("Retrying after {0} ...".format(self.RETRY_SLEEP_INTERVAL))
        retry_count += 1

      # Wait for the service stop event instead of sleeping blindly
      if 0 == self._stop_handler.wait(self.RETRY_SLEEP_INTERVAL):
        return
      pass
    pass

  def push_metrics(self, data):
    headers = {"Content-Type" : "application/json", "Accept" : "*/*"}
    server = self.COLLECTOR_URL.format(self.collector_address.strip())
    logger.info("server: %s" % server)
    logger.debug("message to sent: %s" % data)
    req = urllib2.Request(server, data, headers)
    response = urllib2.urlopen(req, timeout=int(self.send_interval - 10))
    if response:
      logger.debug("POST response from server: retcode = {0}".format(response.getcode()))
      logger.debug(str(response.read()))
    pass
    return response
```
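The retry loop in submit_metrics is worth isolating: it bounds attempts with MAX_RETRY_COUNT and, between attempts, waits on the stop event rather than calling time.sleep, so shutdown is never delayed. A Python 3 sketch of the same strategy (push and the stop event here are stand-ins, not Ambari code):

```python
import threading

MAX_RETRY_COUNT = 3
RETRY_SLEEP_INTERVAL = 0.01   # shortened for demonstration

def submit_with_retries(push, stop_event):
    """Try push() up to MAX_RETRY_COUNT times; between attempts,
    wait on the stop event instead of sleeping blindly."""
    attempts = 0
    while attempts < MAX_RETRY_COUNT:
        attempts += 1
        try:
            if push():
                return True          # collector accepted the data
        except Exception:
            pass                     # swallow and retry, as Emitter does
        if stop_event.wait(RETRY_SLEEP_INTERVAL):
            return False             # service is shutting down
    return False                     # data dropped after max retries

calls = []
def flaky_push():
    calls.append(1)
    if len(calls) < 3:
        raise IOError("collector unreachable")
    return True

print(submit_with_retries(flaky_push, threading.Event()))  # → True
print(len(calls))  # → 3
```

Note the trade-off the comment in submit_metrics spells out: after the configured retries the batch is simply dropped, because flatten already cleared the map.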
As the Emitter code shows, run() calls submit_metrics(), and here comes the key part: the heart of that function is json_data = self.application_metric_map.flatten(None, True). application_metric_map is an instance of the ApplicationMetricMap class (Emitter does not inherit from it; the instance is passed into __init__), so let's look at the ApplicationMetricMap code:
```python
def flatten(self, application_id = None, clear_once_flattened = False):
  with self.lock:
    timeline_metrics = { "metrics" : [] }
    local_metric_map = {}

    if application_id:
      if self.app_metric_map.has_key(application_id):
        local_metric_map = { application_id : self.app_metric_map[application_id] }
      else:
        logger.info("application_id: {0}, not present in the map.".format(application_id))
    else:
      local_metric_map = self.app_metric_map.copy()
    pass

    for appId, metrics in local_metric_map.iteritems():
      for metricId, metricData in dict(metrics).iteritems():
        # Create a timeline metric object
        timeline_metric = {
          "hostname" : self.hostname if appId == "HOST" else "",
          "metricname" : metricId,
          "appid" : appId,
          "instanceid" : "",
          "starttime" : self.get_start_time(appId, metricId),
          "metrics" : metricData
        }
        timeline_metrics[ "metrics" ].append( timeline_metric )
      pass
    pass

    json_data = json.dumps(timeline_metrics) if len(timeline_metrics[ "metrics" ]) > 0 else None

    if clear_once_flattened:
      self.app_metric_map.clear()
    pass

    return json_data
  pass
```
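To make the reshaping in flatten concrete, here is a Python 3 sketch that applies the same transformation to a hand-built map (the hostname, app ids, and values are made up, and min() of the timestamps stands in for get_start_time):

```python
import json

hostname = "node1.example.com"           # made-up hostname
app_metric_map = {                       # {appId: {metricId: {ts: value}}}
    "HOST": {"cpu_user": {1700000000000: 12.5, 1700000010000: 13.0}},
    "hbase": {"regions": {1700000000000: 42}},
}

timeline_metrics = {"metrics": []}
for app_id, metrics in app_metric_map.items():
    for metric_id, metric_data in metrics.items():
        timeline_metrics["metrics"].append({
            # only HOST metrics carry the hostname, as in flatten
            "hostname": hostname if app_id == "HOST" else "",
            "metricname": metric_id,
            "appid": app_id,
            "instanceid": "",
            # the real code uses get_start_time(); min timestamp stands in
            "starttime": min(metric_data),
            "metrics": metric_data,
        })

json_data = json.dumps(timeline_metrics)
print(len(timeline_metrics["metrics"]))   # → 2
```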
As flatten shows, its job is mostly to merge and aggregate the collected data into a new structure. But the first time start_emitter() runs from Controller, flatten produces nothing, because self.app_metric_map has not been populated yet. Looking back at Controller.run(), there is the line self.enqueue_events(); as the name suggests, it enqueues collection events, and following the call chain eventually leads to process_service_collection_event:
```python
def process_service_collection_event(self, event):
  startTime = int(round(time() * 1000))
  metrics = None
  path = os.path.abspath('.')
  for root, dirs, files in os.walk("%s/libs/" % path):
    appid = event.get_group_name().split('_')[0]
    metricgroup = event.get_group_name().split('_')[1]
    if ("%s_metrics.sh" % appid) in filter(lambda x: ".sh" in x, files):
      metrics = {appid: self.service_info.get_service_metrics(appid, metricgroup)}
    else:
      logger.warn('have no %s modules' % appid)

  if metrics:
    for item in metrics:
      self.application_metric_map.put_metric(item, metrics[item], startTime)
  pass
```
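The mapping from an event's group name to the backing script can be sketched in isolation (the group name here is made up; the real names depend on what ships under libs/):

```python
def script_for_group(group_name):
    """Split '<appid>_<metricgroup>' the way
    process_service_collection_event does, and derive the
    shell script name expected under libs/."""
    appid = group_name.split('_')[0]
    metricgroup = group_name.split('_')[1]
    return appid, metricgroup, "%s_metrics.sh" % appid

print(script_for_group("hbase_regionserver"))
# → ('hbase', 'regionserver', 'hbase_metrics.sh')
```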
process_service_collection_event runs each service's collection script, gathers the results into the metrics variable, and then calls self.application_metric_map.put_metric(item, metrics[item], startTime). This application_metric_map is an instance of the ApplicationMetricMap class, which contains the following function:
```python
def put_metric(self, application_id, metric_id_to_value_map, timestamp):
  with self.lock:
    for metric_name, value in metric_id_to_value_map.iteritems():
      metric_map = self.app_metric_map.get(application_id)
      if not metric_map:
        metric_map = { metric_name : { timestamp : value } }
        self.app_metric_map[ application_id ] = metric_map
      else:
        metric_id_map = metric_map.get(metric_name)
        if not metric_id_map:
          metric_id_map = { timestamp : value }
          metric_map[ metric_name ] = metric_id_map
        else:
          metric_map[ metric_name ].update( { timestamp : value } )
        pass
      pass
  pass
```
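The nesting that put_metric builds (appId → metric name → timestamp → value, merging new samples on repeat calls) can be reproduced with plain dicts in a few lines of Python 3 (app and metric names here are made up):

```python
app_metric_map = {}

def put_metric(application_id, metric_id_to_value_map, timestamp):
    """Merge one round of samples into the nested
    {appId: {metric: {timestamp: value}}} structure,
    mirroring ApplicationMetricMap.put_metric."""
    for metric_name, value in metric_id_to_value_map.items():
        metric_map = app_metric_map.setdefault(application_id, {})
        metric_map.setdefault(metric_name, {})[timestamp] = value

put_metric("hbase", {"regions": 40}, 1700000000000)
put_metric("hbase", {"regions": 42}, 1700000010000)  # same metric, new ts
print(app_metric_map["hbase"]["regions"])
# → {1700000000000: 40, 1700000010000: 42}
```

Each collection round adds one timestamped sample per metric; flatten later drains this structure under the same lock.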
put_metric, then, takes the data collected from the scripts and builds up the final app_metric_map structure. Controller invokes the collection path in an endless loop; it just has not produced data yet the first time start_emitter() runs. Once data has been collected from the scripts, the real emit happens: the flattened JSON is POSTed via urllib2 to the metrics collector on port 6188, and the data ultimately lands in HBase.
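Putting it all together, the upload itself is just an HTTP POST of the flattened JSON. A Python 3 sketch with urllib follows; the collector hostname is made up, and the port 6188 path /ws/v1/timeline/metrics is what an AMS collector conventionally exposes, so verify it against your deployment. The request is only constructed here, not actually sent:

```python
import json
import urllib.request

collector = "metrics-collector.example.com"   # assumed hostname
url = "http://%s:6188/ws/v1/timeline/metrics" % collector

# One timeline metric in the shape flatten() produces
payload = json.dumps({"metrics": [{
    "hostname": "node1.example.com",
    "metricname": "cpu_user",
    "appid": "HOST",
    "instanceid": "",
    "starttime": 1700000000000,
    "metrics": {"1700000000000": 12.5},
}]}).encode("utf-8")

req = urllib.request.Request(
    url, data=payload,
    headers={"Content-Type": "application/json", "Accept": "*/*"})
# urllib.request.urlopen(req, timeout=10) would perform the upload
print(req.get_method())  # → 'POST'
```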