Storm Built-in Metrics

The builtin metrics instrument Storm itself.

Reporting Rate

The reporting interval for the built-in metrics is configured with topology.builtin.metrics.bucket.size.secs. Setting this value very low can overload the consumer side (Storm runs a MetricsConsumerBolt internally to consume the built-in metrics), so change it with care. If unset, it defaults to 60 seconds.
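
The following is a minimal sketch of how these settings are typically applied when building the topology configuration; the fragment is illustrative, but Config.TOPOLOGY_BUILTIN_METRICS_BUCKET_SIZE_SECS and LoggingMetricsConsumer are part of Storm's public API.

import org.apache.storm.Config;
import org.apache.storm.metric.LoggingMetricsConsumer;

Config conf = new Config();
// Report built-in metrics every 60 seconds (the default). Smaller values put
// more load on the MetricsConsumerBolt instances that consume these metrics.
conf.put(Config.TOPOLOGY_BUILTIN_METRICS_BUCKET_SIZE_SECS, 60);
// Built-in metrics only go somewhere if a consumer is registered;
// LoggingMetricsConsumer ships with Storm and writes them to the worker logs.
conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);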

Tuple Counting Metrics

There are several different metrics related to counting what a bolt or spout does to a tuple. These include things like emitting, transferring, acking, and failing of tuples.

In general all of these tuple count metrics are **randomly sub-sampled** unless otherwise stated. This means that the counts you see both on the UI and from the built-in metrics are not necessarily exact. In fact by default we sample only 5% of the events and estimate the total number of events from that. The sampling percentage is configurable per topology through the topology.stats.sample.rate config. Setting it to 1.0 will make the counts exact, but be aware that the more events we sample the slower your topology will run (as the metrics are counted in the same code path as tuples are processed). This is why we have a 5% sample rate as the default.
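
If exact counts matter more than throughput, the sample rate can be raised on the topology Config (a sketch, assuming conf is the same Config used to submit the topology; Config.TOPOLOGY_STATS_SAMPLE_RATE maps to topology.stats.sample.rate):

import org.apache.storm.Config;

Config conf = new Config();
// Count every tuple event instead of estimating from a 5% sample.
// Expect some throughput cost, since counting sits on the tuple-processing path.
conf.put(Config.TOPOLOGY_STATS_SAMPLE_RATE, 1.0);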

The tuple counting metrics are generally reported to the metrics consumers as maps unless explicitly stated otherwise. They break down each count for finer grained reporting.
The keys to these maps fall into two categories "${stream_name}" or "${upstream_component}:${stream_name}". The former is used for all spout metrics and for outgoing bolt metrics (__emit-count and __transfer-count). The latter is used for bolt metrics that deal with incoming tuples.

So for a word count topology the count bolt might show something like the following for the __ack-count metric

{
    "split:default": 80080
}

But the spout instead would show something like the following for the __ack-count metric.

{
    "default": 12500
}

__ack-count

For bolts it is the number of incoming tuples that had the ack method called on them. For spouts it is the number of tuples trees that were fully acked. See Guaranteeing Message Processing for more information about what a tuple tree is. If acking is disabled this metric is still reported, but it is not really meaningful.

__fail-count

For bolts this is the number of incoming tuples that had the fail method called on them. For spouts this is the number of tuple trees that failed. Tuple trees may fail from timing out or because a bolt called fail on it. The two are not separated out by this metric.

__emit-count

This is the total number of times the emit method was called to send a tuple. This is the same for both bolts and spouts.

__transfer-count

This is the total number of tuples transferred to a downstream bolt/spout for processing. This number will not always match __emit-count. If nothing is registered to receive a tuple downstream the number will be 0 even if tuples were emitted. Similarly if there are multiple downstream consumers it may be a multiple of the number emitted. The grouping also can play a role if it sends the tuple to multiple instances of a single bolt downstream.

__execute-count

This count metric is bolt specific. It counts the number of times that a bolt's execute method was called.
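
To make concrete which actions feed which counters, here is a minimal, illustrative bolt (Storm 2.x API assumed; the class name and stream layout are made up for this sketch). Each call to execute adds to __execute-count, each emit to __emit-count (and __transfer-count if something downstream is subscribed), each ack to __ack-count, and each fail to __fail-count.

import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class UpperCaseBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {                           // counted in __execute-count
        try {
            String word = input.getString(0);
            collector.emit(input, new Values(word.toUpperCase())); // counted in __emit-count
            collector.ack(input);                                 // counted in __ack-count
        } catch (Exception e) {
            collector.fail(input);                                // counted in __fail-count
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}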

Tuple Latency Metrics

Similar to the tuple counting metrics, Storm also collects average latency metrics for bolts and spouts. These follow the same structure as the bolt/spout maps and are sub-sampled in the same way as well. In all cases the latency is measured in milliseconds.

__complete-latency

The complete latency is just for spouts. It is the average amount of time it took for ack or fail to be called for a tuple after it was emitted. If acking is disabled this metric is likely to be blank or 0 for all values, and should be ignored.

__execute-latency

This is just for bolts. It is the average amount of time that the bolt spent in the call to the execute method. The higher this gets, the lower the throughput of tuples per bolt instance.

__process-latency

This is also just for bolts. It is the average amount of time between when execute was called to start processing a tuple, to when it was acked or failed by the bolt. If your bolt is a very simple bolt and the processing is synchronous then __process-latency and __execute-latency should be very close to one another, with process latency being slightly smaller. If you are doing a join or have asynchronous processing then it may take a while for a tuple to be acked so the process latency would be higher than the execute latency.

__skipped-max-spout-ms

This metric records how much time a spout was idle because more tuples than topology.max.spout.pending were still outstanding. This is the total time in milliseconds, not the average amount of time and is not sub-sampled.
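
A sketch of the setting that drives this metric, assuming conf is the topology Config (the value 500 is arbitrary):

// Allow at most 500 un-acked tuple trees per spout task before the spout
// is throttled; time spent throttled shows up in __skipped-max-spout-ms.
conf.setMaxSpoutPending(500);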

__skipped-throttle-ms

This metric records how much time a spout was idle because back-pressure indicated that downstream queues in the topology were too full. This is the total time in milliseconds, not the average amount of time and is not sub-sampled.

__skipped-inactive-ms

This metric records how much time a spout was idle because the topology was deactivated. This is the total time in milliseconds, not the average amount of time and is not sub-sampled.

Error Reporting Metrics

Storm also collects error reporting metrics for bolts and spouts.

__reported-error-count

This metric records how many errors were reported by a spout/bolt. It is the total number of times the reportError method was called.
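
As an illustration, this metric is typically driven from a catch block in a bolt. The following is a sketch: riskyOperation is a hypothetical helper, while reportError, ack, and fail are standard OutputCollector calls.

try {
    riskyOperation(input);       // hypothetical helper for this sketch
    collector.ack(input);
} catch (Exception e) {
    collector.reportError(e);    // adds 1 to __reported-error-count
    collector.fail(input);       // also adds 1 to __fail-count
}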

Queue Metrics

Each bolt or spout instance in a topology has a receive queue and a send queue. Each worker also has a queue for sending messages to other workers. All of these have metrics that are reported.

The receive queue metrics are reported under the __receive name and send queue metrics are reported under the __sendqueue name for the given bolt/spout they are a part of. The metrics for the queue that sends messages to other workers are under the __transfer metric name for the system bolt (__system).

They all have the form.

{
    "arrival_rate_secs": 1229.1195171893523,
    "overflow": 0,
    "read_pos": 103445,
    "write_pos": 103448,
    "sojourn_time_ms": 2.440771591407277,
    "capacity": 1024,
    "population": 19
    "tuple_population": 200
}

In storm we sometimes batch multiple tuples into a single entry in the disruptor queue. This batching is an optimization that has been in storm in some form since the beginning, but the metrics did not always reflect this so be careful with how you interpret the metrics and pay attention to which metrics are for tuples and which metrics are for entries in the disruptor queue. The __receive and __transfer queues can have batching but the __sendqueue should not.

arrival_rate_secs is an estimation of the number of tuples that are inserted into the queue in one second, although it is actually the dequeue rate.
The sojourn_time_ms is calculated from the arrival rate and is an estimate of how many milliseconds each tuple sits in the queue before it is processed.
Prior to STORM-2621 (v1.1.1, v1.2.0, and v2.0.0) these were the rate of entries, not of tuples.

A disruptor queue has a set maximum number of entries. If the regular queue fills up an overflow queue takes over. The number of tuple batches stored in this overflow section are represented by the overflow metric. Storm also does some micro batching of tuples for performance/efficiency reasons so you may see the overflow with a very small number in it even if the queue is not full.

read_pos and write_pos are internal disruptor accounting numbers. You can think of them almost as the total number of entries written (write_pos) or read (read_pos) since the queue was created. They allow for integer overflow so if you use them please take that into account.

capacity is the maximum number of entries in the disruptor queue. population is the number of entries currently filled in the queue.

tuple_population is the number of tuples currently in the queue as opposed to the number of entries. This was added at the same time as STORM-2621 (v1.1.1, v1.2.0, and v2.0.0).
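
As an illustrative (not authoritative) way to watch these queue metrics, a custom metrics consumer can pick the __receive data point out of the stream and compute the queue's fill ratio from population and capacity. The class below is a sketch against the IMetricsConsumer interface (Storm 2.x signatures assumed); the logging and field handling are assumptions.

import java.util.Collection;
import java.util.Map;
import org.apache.storm.metric.api.IMetricsConsumer;
import org.apache.storm.task.IErrorReporter;
import org.apache.storm.task.TopologyContext;

public class QueueFillLogger implements IMetricsConsumer {

    @Override
    public void prepare(Map<String, Object> topoConf, Object registrationArgument,
                        TopologyContext context, IErrorReporter errorReporter) {
        // no setup needed for this sketch
    }

    @Override
    public void handleDataPoints(TaskInfo taskInfo, Collection<DataPoint> dataPoints) {
        for (DataPoint dp : dataPoints) {
            if ("__receive".equals(dp.name) && dp.value instanceof Map) {
                Map<?, ?> queue = (Map<?, ?>) dp.value;
                Object population = queue.get("population");
                Object capacity = queue.get("capacity");
                if (population instanceof Number && capacity instanceof Number) {
                    double fill = ((Number) population).doubleValue()
                                / ((Number) capacity).doubleValue();
                    System.out.printf("%s[%d] receive queue %.1f%% full%n",
                        taskInfo.srcComponentId, taskInfo.srcTaskId, fill * 100.0);
                }
            }
        }
    }

    @Override
    public void cleanup() {
    }
}

It would be registered on the topology Config just like any other metrics consumer, for example conf.registerMetricsConsumer(QueueFillLogger.class, 1).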

System Bolt (Worker) Metrics

The System Bolt __system provides lots of metrics for different worker wide things. The one metric not described here is the __transfer queue metric, because it fits with the other disruptor metrics described above.

Be aware that the __system bolt is an actual bolt so regular bolt metrics described above also will be reported for it.

Receive (NettyServer)

__recv-iconnection reports stats for the netty server on the worker. This is what gets messages from other workers. It is of the form

{
    "dequeuedMessages": 0,
    "enqueued": {
      "/127.0.0.1:49952": 389951
    }
}

dequeuedMessages is a throwback to older code where there was an internal queue between the server and the bolts/spouts. That is no longer the case and the value can be ignored.
enqueued is a map between the address of the remote worker and the number of tuples that were sent from it to this worker.

Send (Netty Client)

The __send-iconnection metric holds information about all of the clients for this worker. It is of the form

{
    NodeInfo(node:7decee4b-c314-41f4-b362-fd1358c985b3-127.0.01, port:[6701]): {
        "reconnects": 0,
        "src": "/127.0.0.1:49951",
        "pending": 0,
        "dest": "localhost/127.0.0.1:6701",
        "sent": 420779,
        "lostOnSend": 0
    }
}

The value is a map where the key is a NodeInfo class for the downstream worker it is sending messages to. This is the SupervisorId + port. The value is another map with the fields

  • src What host/port this client has used to connect to the receiving worker.
  • dest What host/port this client has connected to.
  • reconnects the number of reconnections that have happened.
  • pending the number of messages that have not been sent. (This corresponds to messages, not tuples)
  • sent the number of messages that have been sent. (This is messages not tuples)
  • lostOnSend. This is the number of messages that were lost because of connection issues. (This is messages not tuples).

JVM Memory

JVM memory usage is reported through memory/nonHeap for off heap memory and memory/heap for on heap memory. These values come from the MemoryUsage mxbean. Each of the metrics are reported as a map with the following keys, and values returned by the corresponding java code.

  • maxBytes: memUsage.getMax()
  • committedBytes: memUsage.getCommitted()
  • initBytes: memUsage.getInit()
  • usedBytes: memUsage.getUsed()
  • virtualFreeBytes: memUsage.getMax() - memUsage.getUsed()
  • unusedBytes: memUsage.getCommitted() - memUsage.getUsed()
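
For reference, the same numbers can be reproduced locally with the standard java.lang.management API that these metrics are read from (a minimal sketch; the class name and printing are illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    public static void main(String[] args) {
        // memory/heap comes from this bean; memory/nonHeap from getNonHeapMemoryUsage().
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("maxBytes         = " + heap.getMax());
        System.out.println("committedBytes   = " + heap.getCommitted());
        System.out.println("initBytes        = " + heap.getInit());
        System.out.println("usedBytes        = " + heap.getUsed());
        System.out.println("virtualFreeBytes = " + (heap.getMax() - heap.getUsed()));
        System.out.println("unusedBytes      = " + (heap.getCommitted() - heap.getUsed()));
    }
}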

JVM Garbage Collection

The exact GC metric name depends on the garbage collector that your worker uses. The data is all collected from ManagementFactory.getGarbageCollectorMXBeans() and the name of the metrics is "GC/" followed by the name of the returned bean with white space removed. The reported metrics are just

  • count the number of gc events that happened and
  • timeMs the total number of milliseconds that were spent doing gc.

Please refer to the JVM documentation for more details.
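
The naming rule above can be reproduced with the same MXBeans (a minimal sketch; the whitespace stripping mirrors the rule stated in the text, and the class name is illustrative):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcNames {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // e.g. a bean named "G1 Young Generation" becomes "GC/G1YoungGeneration"
            String metricName = "GC/" + gc.getName().replaceAll("\\s", "");
            System.out.println(metricName
                + " count=" + gc.getCollectionCount()
                + " timeMs=" + gc.getCollectionTime());
        }
    }
}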

JVM Misc

  • threadCount is the number of threads currently in the JVM.

Uptime

  • uptimeSecs reports the number of seconds the worker has been up for.
  • newWorkerEvent is 1 when a worker is first started and 0 all other times. This can be used to tell when a worker has crashed and is restarted.
  • startTimeSecs is when the worker started, in seconds since the epoch.

This article is taken from GitHub, written by the Storm authors; since the official documentation has not yet been updated, it is reposted here.
