Report of MQTT Broker Benchmark Test (Template)

1. Brokers

Single Node:

RabbitMQ 3.3.5 release
Mosquitto 1.4.10 release
New MQTT

Cluster:

RabbitMQ 3.3.5 release [4, 8] nodes
New MQTT [4] nodes

2. MQTT in the LineWorks server (NCS)

2.1 MQTT usage in NCS

There are two kinds of MQTT clients: publishers and subscribers. In the LineWorks server the two roles are strictly separated: the API servers (fewer than 20 in total) act as publishers, while the LineWorks clients (about 10,000 in the NCS environment) act as subscribers.
Figure 1 shows the MQTT usage in the NCS environment.

Figure 1: MQTT in the NCS environment
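The role split can be sketched with a minimal in-process model. The per-user topic naming and the scaled-down counts below are illustrative assumptions, not the actual NCS topic scheme:

```python
# Toy model of the NCS usage pattern: a handful of API-server
# publishers fan messages out to many per-user subscriber topics.
class TinyBroker:
    def __init__(self):
        self.subs = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # deliver to every subscriber of this topic
        for cb in self.subs.get(topic, []):
            cb(topic, payload)

broker = TinyBroker()
received = []

# ~10,000 subscribers, one topic per user (scaled down to 100 here)
for uid in range(100):
    broker.subscribe(f"user/{uid}", lambda t, p: received.append((t, p)))

# one of the (fewer than 20) API-server publishers notifies one user
broker.publish("user/42", b"new-message")
```

Only the subscriber of `user/42` receives the message; the other 99 topics are untouched, which is the fan-out pattern the benchmark below exercises at scale.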

2.2 Publish-count statistics on NCS

date | qos1 total | qos1 max tps | qos0 total | qos0 max tps
---|---|---|---|---
20170118 | 4807211 | 630 | 545497 | 42
20170116 | 4760298 | 685 | 508954 | 58
20170113 | 4761518 | 684 | 534527 | 52

Figure 2: statistics for one day
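A quick back-of-the-envelope check on the 20170118 QoS1 row shows how bursty the traffic is: the daily average rate is far below the recorded peak, which is why the benchmark sizes load by peak rate rather than by daily volume.

```python
# Average QoS1 publish rate on 20170118, from the table above
qos1_total = 4807211          # messages published that day
seconds_per_day = 86400
avg_tps = qos1_total / seconds_per_day
print(round(avg_tps, 1))      # ~55.6 msg/s on average, vs. a 630 TPS peak
```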

3. Test scenario

3.1 Focus on real LineWorks usage

We designed this test scenario based on the statistics of MQTT usage in the NCS environment. The goal of the benchmark is to evaluate the impact of the number of subscribers on the MQTT server, in terms of the delivered throughput (message rate on the subscriber side), the CPU usage of the server, and the time required to transmit a message from a publisher to a subscriber, i.e. the message transmission latency. There should be no limitation caused by the clients (affecting each other) or by the network.

The scalability test starts with a minimum of 10,000 subscribers and tries to reach a maximum of 100,000 subscribers. The publishers send messages at a steady rate that is proportional to the number of subscribers.

The tests do not try to reach the maximum message throughput. The goal is to show how the server scales with the number of subscribers, with each publisher sending at a fixed rate.

Figure 3: test environment
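The result table in section 4 keeps the publish rate proportional to the subscriber count (0.5k msg/s per 5k subscribers, i.e. a 1:10 ratio); that inferred ratio lets a scalability plan be generated for each step. The `plan` helper below is hypothetical, for illustration only:

```python
# Generate one load step from the subscriber count (in thousands),
# using the 1:10 publish-rate ratio inferred from the result table.
def plan(subscribers_k):
    return {
        "subscribers": subscribers_k * 1000,
        "publish_rate": subscribers_k * 100,  # msg/s: 1 per 10 subscribers
    }

steps = [plan(k) for k in (10, 40, 100)]
print(steps[0])  # {'subscribers': 10000, 'publish_rate': 1000}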

3.2 Machine information

The machines used for the benchmark were all created on ncloud; they share the configuration listed in the table below.

OS | Processor | RAM
---|---|---
CentOS 7.2 | Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz × 4 | 8 GB

3.3 Test steps

A test is executed as a sequence of 3 steps:

  1. During the first step, every thread launches the publishers and the subscribers. Connections are opened.
  2. The second step starts after a time delay so that the first step can be completed by every thread. Each thread publishes a message to the topic; as the total number of threads is known, each thread can maintain a countdown and start publishing messages when the countdown reaches zero. Messages are published asynchronously, each message being sent by a different publisher. The message production lasts 5 minutes, which proved long enough to reach a steady state. The second step ends when all the messages have been received by the subscribers; the test therefore also checks that no message is lost by the server. The delivered throughput, CPU usage, and message transmission latency are measured after a warm-up period of 1 minute. The latency is measured for every published message: a timestamp is added to the payload of each message, and the latency shown in the graphs is the average over all messages published after the warm-up period.
  3. The third step starts after a time delay so as not to affect the measurements taken during the second step. During this step, subscribers unsubscribe and connections are closed.
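The timestamp-based latency measurement in step 2 can be sketched as follows. The JSON payload framing is an assumption; the actual harness format is not shown in this report:

```python
import json, time

def make_payload(body):
    """Wrap the application body with a send timestamp."""
    return json.dumps({"sent_at": time.time(), "body": body}).encode()

def latency_ms(payload, now=None):
    """Latency from the embedded send timestamp to receipt time `now`."""
    msg = json.loads(payload)
    now = time.time() if now is None else now
    return (now - msg["sent_at"]) * 1000.0

# a 10-byte payload body, matching the smallest size in the result table
p = make_payload("x" * 10)
```

In the real test, `latency_ms` would run on the subscriber side for every received message, and only samples after the 1-minute warm-up would be averaged.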

4. Test results

broker | subscriber count (k) | publish rate (k msg/s) | payload size (bytes) | qos | cpu (per node) | cpu total (%) | latency (ms) | annotation
---------|--------------|-----|----------|--------|-------|-------|------------|----------
mosquitto| 5 | 0.5 |10 | 1 | 100% | 100 | 180 | cpu max
rabbitmq| 5 | 0.5 |10 | 1 | 107% | 107 | 200 | connection max
rabbitmq| 5 | 0.5 |1000 | 1 | 109% | 109 | 200 | connection max
rabbitmq| 5 | 0.5 |10000 | 1 | 112% | 112 | 220 | connection max
new mqtt| 5 | 0.5 |10 | 1 | 48.8% | 48.8 | 40 |
new mqtt| 5 | 0.5 |1000 | 1 | 52.4% | 52.4 | 40 |
new mqtt| 5 | 0.5 |10000 | 1 | 57.1% | 57.1 | 50 |
new mqtt| 10 | 1 |10 | 1 | 90.3% | 90.3 | 45 |
new mqtt| 10 | 1 |1000 | 1 | 98.9% | 98.9 | 45 |
new mqtt| 10 | 1 |10000 | 1 | 106.6% | 106.6 | 50 |
rabbitmq| 10 | 1 |10 | 1 | 154% * 4 | 616 | 250 |
rabbitmq| 10 | 1 |1000 | 1 | 160% * 4 | 640 | 250 |
rabbitmq| 10 | 1 |10000 | 1 | 167% * 4 | 668 | 300 |
rabbitmq| 40 | 4 |10 | 1 | 130% *8 | 1040 | 1000 | delay max
new mqtt| 40 | 4 |10 | 1 | 137% * 4 | 548 | 40 |
new mqtt| 40 | 4 |1000 | 1 | 149% * 4 | 596 | 40 |
new mqtt| 40 | 4 |10000 | 1 | 160% * 4 | 640 | 50 |
new mqtt| 100 | 10 |10 | 1 | 198.0% * 4 | 792 | 50 |
new mqtt| 100 | 10 |1000 | 1 | 255.2% * 4 | 1020 | 50 |
new mqtt| 100 | 10 |10000 | 1 | 277% * 4 | 1108 | 60 |
mosquitto| 5 | 0.5 |10 | 0 | 91% | 91 | 37 | cpu max
rabbitmq| 5 | 0.5 |10 | 0 | 78% | 78 | 57 | connection max
rabbitmq| 5 | 0.5 |1000 | 0 | 80% | 80 | 60 | connection max
rabbitmq| 5 | 0.5 |10000 | 0 | 85% | 85 | 60 | connection max
new mqtt| 5 | 0.5 |10 | 0 | 34.2% | 34.2 | 30 |
new mqtt| 5 | 0.5 |1000 | 0 | 38.7% | 38.7 | 30 |
new mqtt| 5 | 0.5 |10000 | 0 | 42.2% | 42.2 | 40 |
new mqtt| 10 | 1 |10 | 0 | 64.9% | 64.9 | 30 |
new mqtt| 10 | 1 |1000 | 0 | 71.6% | 71.6 | 30 |
new mqtt| 10 | 1 |10000 | 0 | 79.7% | 79.7 | 40 |
rabbitmq| 10 | 1 |10 | 0 | 66% * 4 | 264 | 60 |
rabbitmq| 10 | 1 |1000 | 0 | 75% * 4 | 300 | 60 |
rabbitmq| 10 | 1 |10000 | 0 | 84% * 4 | 336 | 60 |
rabbitmq| 40 | 4 |10 | 0 | 112% * 8 | 896 | 1000 | delay max
new mqtt| 40 | 4 |10 | 0 | 102% * 4 | 408 | 35 |
new mqtt| 40 | 4 |1000 | 0 | 111% * 4 | 444 | 40 |
new mqtt| 40 | 4 |10000 | 0 | 118% * 4 | 472 | 50 |
new mqtt| 100 | 10 |10 | 0 | 166% * 4 | 664 | 50 |
new mqtt| 100 | 10 |1000 | 0 | 211.8% * 4 | 847 | 50 |
new mqtt| 100 | 10 |10000 | 0 | 248.7% *4 | 995 | 55 |
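The "cpu total" column is simply the per-node CPU multiplied by the node count. A small helper (hypothetical, written here only to make the rows checkable) makes that explicit:

```python
# Compute total CPU from a cell like "154% * 4" (per-node % times
# node count) or a single-node value like "107%".
def cpu_total(cell):
    parts = cell.replace("%", "").split("*")
    per_node = float(parts[0])
    nodes = int(parts[1]) if len(parts) > 1 else 1
    return per_node * nodes

print(cpu_total("154% * 4"))  # 616.0, matching the rabbitmq 10k/qos1 row
```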

4.1 QoS 1 (acknowledgement, not retained)

(Charts: qos1_cpu.PNG, qos1_delay.PNG — CPU usage and latency under QoS 1)

4.2 QoS 0 (no acknowledgement, retained)

(Charts: qos0_cpu.PNG, qos0_delay.PNG — CPU usage and latency under QoS 0)

4.3 Discussion

  1. Mosquitto
  • Mosquitto runs as a single node and uses only one CPU core; with 5k subscribers the CPU reaches almost 100%.
  2. RabbitMQ
  • When 40k users subscribe across 4 nodes, the latency keeps increasing.
  • A single RabbitMQ node could establish no more than 6,821 subscriber connections, as the following output shows:
```
[irteam@dev-chenzhaoyu1.ncl ~]$ mosquitto_pub -h 10.113.236.145 -p 1884 -t 123 -q 0 -m hahaha
Error: Connection refused

[irteam@test-mqtt-cluster003.ncl ~]$ netstat -ant | grep 1884 | grep EST | wc -l
6849

{file_descriptors,[{total_limit,81820},
                   {total_used,6821},
                   {sockets_limit,73636},
                   {sockets_used,6819}]},
```
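For monitoring, the `total_used` figure can be pulled out of the `rabbitmqctl status` output shown above with a small script. This is a sketch; the Erlang-term fragment is copied verbatim from the snippet:

```python
import re

# file_descriptors fragment as reported by `rabbitmqctl status`
status = """{file_descriptors,[{total_limit,81820},
                    {total_used,6821},
                    {sockets_limit,73636},
                    {sockets_used,6819}]},"""

# extract the number of file descriptors currently in use
used = int(re.search(r"\{total_used,(\d+)\}", status).group(1))
print(used)  # 6821
```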

  3. New MQTT
  • New MQTT uses fewer resources than the other brokers.
  • 4 nodes can hold 100k subscribers (10k TPS).