Overview
This test measures how many concurrent publish (push) or play (pull) streams a single SRS node can handle.
Host 1:
Cloud server, Shanghai
CPU: Intel(R) Xeon(R) Gold 6161 CPU @ 2.20GHz
MEM: 64 GB
Host 2:
Cloud server, Beijing
CPU: Intel(R) Xeon(R) Gold 6161 CPU @ 2.20GHz
MEM: 64 GB
Test method: use srs-bench, the benchmark tool that ships with SRS.
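For reference, a typical way to obtain and build the srs-bench binaries used below is sketched here; the repository URL and build steps are my assumption of the standard workflow, not taken from the original test notes.

```
# Assumed standard srs-bench build (not shown in the original notes).
git clone https://github.com/ossrs/srs-bench.git
cd srs-bench
./configure && make
# The benchmark tools used below are produced under ./objs/:
#   objs/sb_rtmp_publish, objs/sb_rtmp_load, objs/sb_http_load
```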
Bandwidth test
Measured with iperf3.
Shanghai:
```
$ iperf3 -s -p 12345 -i 1
-----------------------------------------------------------
Server listening on 12345
-----------------------------------------------------------
```
Beijing:
```
$ iperf3 -c xxx.xxx.xxx.219 -p12345 -i 1 -t 30
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  16.0 MBytes   135 Mbits/sec  4174    652 KBytes
[  4]   1.00-2.00   sec  21.2 MBytes   178 Mbits/sec  3352   1011 KBytes
[  4]   2.00-3.00   sec  26.2 MBytes   220 Mbits/sec  1293   5.66 KBytes
[  4]   3.00-4.00   sec  45.0 MBytes   378 Mbits/sec  1368   2.62 MBytes
[  4]   4.00-5.00   sec  25.0 MBytes   210 Mbits/sec  2496   1.96 MBytes
[  4]   5.00-6.00   sec  42.5 MBytes   357 Mbits/sec  1203   2.60 MBytes
[  4]   6.00-7.00   sec  33.8 MBytes   283 Mbits/sec  2117    990 KBytes
[  4]   7.00-8.00   sec  33.8 MBytes   283 Mbits/sec     0   1.02 MBytes
[  4]   8.00-9.00   sec  35.0 MBytes   294 Mbits/sec    13    824 KBytes
[  4]   9.00-10.00  sec  27.5 MBytes   231 Mbits/sec    63    860 KBytes
[  4]  10.00-11.00  sec  31.2 MBytes   262 Mbits/sec     0    940 KBytes
[  4]  11.00-12.00  sec  31.2 MBytes   262 Mbits/sec     0    998 KBytes
[  4]  12.00-13.00  sec  35.0 MBytes   294 Mbits/sec     0   1.01 MBytes
[  4]  13.00-14.00  sec  35.0 MBytes   294 Mbits/sec     0   1.04 MBytes
[  4]  14.00-15.00  sec  35.0 MBytes   294 Mbits/sec     0   1.05 MBytes
[  4]  15.00-16.00  sec  36.2 MBytes   304 Mbits/sec     1   1.07 MBytes
[  4]  16.00-17.00  sec  37.5 MBytes   315 Mbits/sec     0   1.10 MBytes
[  4]  17.00-18.00  sec  38.8 MBytes   325 Mbits/sec     0   1.13 MBytes
[  4]  18.00-19.00  sec  30.0 MBytes   252 Mbits/sec    96    894 KBytes
[  4]  19.00-20.00  sec  30.0 MBytes   252 Mbits/sec     1    932 KBytes
[  4]  20.00-21.00  sec  31.2 MBytes   262 Mbits/sec     0    956 KBytes
[  4]  21.00-22.00  sec  32.5 MBytes   273 Mbits/sec     0    983 KBytes
[  4]  22.00-23.00  sec  32.5 MBytes   273 Mbits/sec     0   1008 KBytes
[  4]  23.00-24.00  sec  35.0 MBytes   294 Mbits/sec     0   1.01 MBytes
[  4]  24.00-25.00  sec  35.0 MBytes   294 Mbits/sec     0   1.03 MBytes
[  4]  25.00-26.00  sec  36.2 MBytes   304 Mbits/sec     0   1.06 MBytes
[  4]  26.00-27.00  sec  37.5 MBytes   315 Mbits/sec     0   1.15 MBytes
[  4]  27.00-28.00  sec  41.2 MBytes   346 Mbits/sec     0   1.27 MBytes
[  4]  28.00-29.00  sec  32.5 MBytes   273 Mbits/sec   226   1018 KBytes
[  4]  29.00-30.00  sec  36.2 MBytes   304 Mbits/sec     0   1.09 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   996 MBytes   279 Mbits/sec  16403             sender
[  4]   0.00-30.00  sec   993 MBytes   278 Mbits/sec                  receiver

iperf Done.
```
Beijing:
```
$ iperf3 -s -p 12345 -i 1
-----------------------------------------------------------
Server listening on 12345
-----------------------------------------------------------
```
Shanghai:
```
$ iperf3 -c xxx.xxx.xxx.103 -p12345 -i 1 -t 30
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.48 MBytes  12.4 Mbits/sec     1   56.6 KBytes
[  4]   1.00-2.00   sec  2.49 MBytes  20.9 Mbits/sec     1   80.6 KBytes
[  4]   2.00-3.00   sec  3.48 MBytes  29.2 Mbits/sec     1    106 KBytes
[  4]   3.00-4.00   sec  4.29 MBytes  36.0 Mbits/sec     1    130 KBytes
[  4]   4.00-5.00   sec  4.91 MBytes  41.2 Mbits/sec     0    157 KBytes
[  4]   5.00-6.00   sec  6.46 MBytes  54.2 Mbits/sec     0    184 KBytes
[  4]   6.00-7.00   sec  6.84 MBytes  57.3 Mbits/sec     0    212 KBytes
[  4]   7.00-8.00   sec  8.20 MBytes  68.8 Mbits/sec     0    238 KBytes
[  4]   8.00-9.00   sec  8.76 MBytes  73.5 Mbits/sec     0    266 KBytes
[  4]   9.00-10.00  sec  9.82 MBytes  82.4 Mbits/sec     0    293 KBytes
[  4]  10.00-11.00  sec  11.7 MBytes  98.0 Mbits/sec     0    356 KBytes
[  4]  11.00-12.00  sec  14.4 MBytes   120 Mbits/sec     0    467 KBytes
[  4]  12.00-13.00  sec  19.8 MBytes   166 Mbits/sec     0    602 KBytes
[  4]  13.00-14.00  sec  23.8 MBytes   199 Mbits/sec     0    772 KBytes
[  4]  14.00-15.00  sec  30.0 MBytes   252 Mbits/sec     0    977 KBytes
[  4]  15.00-16.00  sec  37.5 MBytes   315 Mbits/sec     1   1.06 MBytes
[  4]  16.00-17.00  sec  38.8 MBytes   325 Mbits/sec     0   1.09 MBytes
[  4]  17.00-18.00  sec  40.0 MBytes   336 Mbits/sec     0   1.11 MBytes
[  4]  18.00-19.00  sec  36.2 MBytes   304 Mbits/sec   152    846 KBytes
[  4]  19.00-20.00  sec  31.2 MBytes   262 Mbits/sec     4    911 KBytes
[  4]  20.00-21.00  sec  31.2 MBytes   262 Mbits/sec     0    938 KBytes
[  4]  21.00-22.00  sec  33.8 MBytes   283 Mbits/sec     0    964 KBytes
[  4]  22.00-23.00  sec  33.8 MBytes   283 Mbits/sec     0    991 KBytes
[  4]  23.00-24.00  sec  35.0 MBytes   294 Mbits/sec     0   1018 KBytes
[  4]  24.00-25.00  sec  37.5 MBytes   315 Mbits/sec     0   1.02 MBytes
[  4]  25.00-26.00  sec  36.2 MBytes   304 Mbits/sec     0   1.05 MBytes
[  4]  26.00-27.00  sec  32.5 MBytes   273 Mbits/sec    82    813 KBytes
[  4]  27.00-28.00  sec  30.0 MBytes   252 Mbits/sec     0    905 KBytes
[  4]  28.00-29.00  sec  33.8 MBytes   283 Mbits/sec     0    973 KBytes
[  4]  29.00-30.00  sec  33.8 MBytes   283 Mbits/sec     0   1022 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   678 MBytes   189 Mbits/sec   243             sender
[  4]   0.00-30.00  sec   676 MBytes   189 Mbits/sec                 receiver

iperf Done.
```
Once TCP stabilizes, the link delivers roughly 300 Mbps between the two hosts.
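As a rough back-of-the-envelope check (my own arithmetic, not part of the original notes), dividing the measured 300 Mbps by the nominal 200 kbps per stream gives an upper bound of about 1500 concurrent streams on this link before any protocol overhead; the publish tests below hit the ceiling somewhat earlier than that.

```
# Upper bound on concurrent 200 kbps streams over a ~300 Mbps link, ignoring RTMP/TCP overhead.
echo "300 * 1000 / 200" | bc   # => 1500
```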
RTMP publish test
Start an SRS server in Beijing and publish streams to it from Shanghai with srs-bench.
Both the SRS server and srs-bench run on a single core each, so to make sure srs-bench does not hit its own CPU limit before the server does, two srs-bench processes are used for publishing.
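The SRS configuration used for the test is not shown; a minimal sketch consistent with the RTMP port (2019) and vhost (long.test.com) seen in the commands below might look like this, where max_connections is assumed to have been raised above the default so that more than 1000 clients can connect:

```
# Assumed minimal srs.conf for this test (the real config is not given).
listen              2019;
max_connections     3000;   # assumed: raised so >1000 concurrent clients are allowed
daemon              on;

# vhost referenced by ?vhost=long.test.com in the publish/play URLs
vhost long.test.com {
}
```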
First, publish 750 streams:
```
$ ./objs/sb_rtmp_publish -i doc/source.200kbps.768x320.flv -c 750 -s 10 -r rtmp://xxx.xxx.xxx.103:2019/live/livestream_{i}?vhost=long.test.com
[2019-10-27 15:58:55.938] [report] [16398] threads:750 alive:750 duration:30 tduration:0 nread:0.79 nwrite:137.69 tasks:750 etasks:0 stasks:0 estasks:0
[2019-10-27 15:59:25.938] [report] [16398] threads:750 alive:750 duration:60 tduration:0 nread:0.39 nwrite:163.65 tasks:750 etasks:0 stasks:0 estasks:0
[2019-10-27 15:59:55.938] [report] [16398] threads:750 alive:750 duration:90 tduration:0 nread:0.26 nwrite:164.61 tasks:750 etasks:0 stasks:0 estasks:0
[2019-10-27 16:00:25.938] [report] [16398] threads:750 alive:750 duration:120 tduration:0 nread:0.20 nwrite:172.49 tasks:750 etasks:0 stasks:0 estasks:0
[2019-10-27 16:00:55.938] [report] [16398] threads:750 alive:750 duration:150 tduration:0 nread:0.16 nwrite:174.96 tasks:750 etasks:0 stasks:0 estasks:0
[2019-10-27 16:01:25.938] [report] [16398] threads:750 alive:750 duration:180 tduration:0 nread:0.13 nwrite:179.18 tasks:750 etasks:0 stasks:0 estasks:0
[2019-10-27 16:01:55.938] [report] [16398] threads:750 alive:750 duration:210 tduration:0 nread:0.11 nwrite:184.73 tasks:750 etasks:0 stasks:0 estasks:0
[2019-10-27 16:02:25.938] [report] [16398] threads:750 alive:750 duration:240 tduration:0 nread:0.10 nwrite:182.79 tasks:750 etasks:0 stasks:0 estasks:0
[2019-10-27 16:02:55.939] [report] [16398] threads:750 alive:750 duration:270 tduration:0 nread:0.09 nwrite:183.50 tasks:750 etasks:0 stasks:0 estasks:0
```
Server CPU usage:
```
$ top
top - 15:57:51 up 243 days,  1:29,  4 users,  load average: 0.35, 0.16, 0.10
Tasks: 212 total,   2 running, 210 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.2 us,  0.5 sy,  0.0 ni, 96.7 id,  0.0 wa,  0.0 hi,  0.6 si,  0.0 st
KiB Mem : 65807864 total,  2352216 free,  1588392 used, 61867256 buff/cache
KiB Swap:  4194300 total,  4192468 free,     1832 used. 61573296 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 9548 root      20   0  639468 318264   1924 R  50.5  0.5  22:03.87 srs
 9554 root      20   0  437456 221308   1928 S  15.6  0.3  11:34.34 srs
   10 root      20   0       0      0      0 S   0.3  0.0 236:57.63 rcu_sched
    1 root      20   0  199248   3912   2436 S   0.0  0.0  68:38.96 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0   1:33.02 ksoftirqd/0
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    6 root      20   0       0      0      0 S   0.0  0.0   0:03.29 kworker/u32:0
```
Server network and disk I/O (dstat):
```
$ dstat
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  2   0  97   0   0   1|   0     0 |  24M  730k|   0     0 |  12k 7223
  2   0  97   0   0   1|   0     0 |  23M  715k|   0     0 |  11k 7190
  2   1  97   0   0   1|   0   112k|  20M  675k|   0     0 |  11k 7399
  2   0  97   0   0   1|   0     0 |  21M  684k|   0     0 |  11k 6847
  2   0  97   0   0   1|   0     0 |  23M  695k|   0     0 |  11k 7115
  2   0  97   0   0   1|   0     0 |  23M  710k|   0     0 |  11k 7168
  2   0  97   0   0   1|   0     0 |  24M  714k|   0     0 |  11k 7377
  2   1  97   0   0   1|   0     0 |  27M  743k|   0     0 |  12k 6799
  2   0  97   0   0   1|   0    24k|  30M  783k|   0     0 |  13k 7247
  2   0  97   0   0   1|   0     0 |  33M  813k|   0     0 |  13k 7354
  2   1  97   0   0   1|   0     0 |  36M  833k|   0     0 |  14k 8435
  2   0  97   0   0   1|   0     0 |  37M  861k|   0     0 |  14k 7925
```
Then publish another 250 streams, bringing the total to 1000:
```
$ ./objs/sb_rtmp_publish -i doc/source.200kbps.768x320.flv -c 250 -s 10 -r rtmp://xxx.xxx.xxx.103:2019/live/livestream1_{i}?vhost=long.test.com
[2019-10-27 16:28:10.008] [report] [16481] threads:250 alive:250 duration:1110 tduration:0 nread:0.01 nwrite:62.53 tasks:262 etasks:12 stasks:0 estasks:0
[2019-10-27 16:28:40.008] [report] [16481] threads:250 alive:250 duration:1140 tduration:0 nread:0.01 nwrite:62.37 tasks:262 etasks:12 stasks:0 estasks:0
[2019-10-27 16:29:10.008] [report] [16481] threads:250 alive:250 duration:1170 tduration:0 nread:0.01 nwrite:62.39 tasks:262 etasks:12 stasks:0 estasks:0
```
Server CPU usage:
```
top - 16:26:27 up 243 days,  1:57,  4 users,  load average: 0.84, 0.73, 0.61
Tasks: 212 total,   1 running, 211 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.0 us,  0.5 sy,  0.0 ni, 95.7 id,  0.0 wa,  0.0 hi,  0.8 si,  0.0 st
KiB Mem : 65807864 total,  2342924 free,  1593668 used, 61871272 buff/cache
KiB Swap:  4194300 total,  4192468 free,     1832 used. 61567380 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 9548 root      20   0  639468 318264   1924 S  69.4  0.5  39:42.44 srs
15063 root      20   0  155140   5848   4500 S   1.0  0.0   0:00.03 sshd
15065 root      20   0  116496   3088   1648 S   0.7  0.0   0:00.02 bash
   10 root      20   0       0      0      0 S   0.3  0.0 237:00.91 rcu_sched
   80 root      20   0       0      0      0 S   0.3  0.0 560:22.55 ksoftirqd/14
```
Server network and disk I/O (dstat):
```
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  3   0  96   0   0   1|   0     0 |  32M  990k|   0     0 |  14k 5548
  3   0  96   0   0   1|   0    20k|  31M  994k|   0     0 |  14k 5763
  3   0  96   0   0   1|   0     0 |  32M  986k|   0     0 |  14k 5636
  3   0  96   0   0   1|   0     0 |  32M  979k|   0     0 |  14k 5777
  3   0  96   0   0   1|   0  8192B|  31M  981k|   0     0 |  14k 5762
  4   0  95   0   0   1|   0     0 |  32M  989k|   0     0 |  14k 5615
  4   1  95   0   0   1|   0  3008k|  33M 1011k|   0     0 |  15k 5571
  3   1  96   0   0   1|   0     0 |  35M 1051k|   0     0 |  15k 5471
  3   0  96   0   0   1|   0     0 |  37M 1066k|   0     0 |  15k 5289
  3   1  95   0   0   1|  44k 4096B|  38M 1079k|   0     0 |  15k 5712
  3   0  96   0   0   1|   0     0 |  39M 1086k|   0     0 |  15k 5389
```
Increase to 1200 streams in total:
```
$ ./objs/sb_rtmp_publish -i doc/source.200kbps.768x320.flv -c 200 -s 10 -r rtmp://xxx.xxx.xxx.103:2019/live/livestream2_{i}?vhost=long.test.com
[2019-10-27 16:33:21.150] [report] [16689] threads:200 alive:199 duration:60 tduration:0 nread:0.11 nwrite:39.88 tasks:208 etasks:9 stasks:0 estasks:0
[2019-10-27 16:33:51.150] [report] [16689] threads:200 alive:200 duration:90 tduration:0 nread:0.08 nwrite:41.05 tasks:219 etasks:19 stasks:0 estasks:0
[2019-10-27 16:34:21.150] [report] [16689] threads:200 alive:200 duration:120 tduration:0 nread:0.06 nwrite:42.49 tasks:219 etasks:19 stasks:0 estasks:0
```
Server CPU usage:
```
top - 16:32:01 up 243 days,  2:03,  4 users,  load average: 0.98, 0.85, 0.68
Tasks: 213 total,   2 running, 211 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.5 us,  0.5 sy,  0.0 ni, 95.1 id,  0.0 wa,  0.0 hi,  0.9 si,  0.0 st
KiB Mem : 65807864 total,  2326480 free,  1607988 used, 61873396 buff/cache
KiB Swap:  4194300 total,  4192468 free,     1832 used. 61551888 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 9548 root      20   0  675468 321256   1924 R  88.4  0.5  44:02.00 srs
   10 root      20   0       0      0      0 S   0.3  0.0 237:01.58 rcu_sched
 1060 root      20   0   20.1g 421956  13468 S   0.3  0.6 308:47.03 java
 5359 root      20   0  112800   4328   3300 S   0.3  0.0   9:57.35 sshd
 6052 root      20   0   13220    796    592 S   0.3  0.0 238:27.05 rngd
```
Server network and disk I/O (dstat):
```
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  3   1  95   0   0   1|   0     0 |  35M 1218k|   0     0 |  16k 3206
  3   0  95   0   0   1|   0     0 |  35M 1235k|   0     0 |  17k 3217
  3   1  95   0   0   1|   0     0 |  35M 1224k|   0     0 |  17k 3194
  4   0  95   0   0   1|   0     0 |  35M 1216k|   0     0 |  17k 3381
  4   1  95   0   0   1|   0    52k|  35M 1223k|   0     0 |  17k 3177
  4   1  95   0   0   1|   0     0 |  35M 1247k|   0     0 |  17k 3289
  4   0  95   0   0   1|   0     0 |  35M 1218k|   0     0 |  17k 3346
  3   0  95   0   0   1|   0     0 |  35M 1229k|   0     0 |  17k 3093
  4   1  95   0   0   1|   0     0 |  35M 1207k|   0     0 |  17k 3226
  3   0  95   0   0   1|   0    60k|  35M 1238k|   0     0 |  17k 3097
  4   0  95   0   0   1|   0  8192B|  35M 1209k|   0     0 |  17k 3612
  3   1  95   0   0   1|   0     0 |  35M 1219k|   0     0 |  16k 3216
```
Conclusion: when publishing 200 kbps streams, bandwidth is the first resource to be exhausted, at roughly 1000 concurrent publishers, with SRS at about 70% of one CPU core.
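As a sanity check on that conclusion (my own arithmetic from the dstat output above): with roughly 1200 incoming streams the server receives about 35 MB/s, i.e. around 230 kbps per stream on the wire, so the nominal 200 kbps source plus RTMP/TCP overhead lands close to the ~300 Mbps ceiling measured earlier.

```
# Per-stream wire rate estimated from dstat: ~35 MB/s received across ~1200 publishers.
echo "scale=1; 35 * 8 * 1000 / 1200" | bc   # => ~233.3 kbps per stream
```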
RTMP play test
First publish a single stream:
```
$ ./objs/sb_rtmp_publish -i doc/source.200kbps.768x320.flv -c 1 -s 10 -r rtmp://xxx.xxx.xxx.103:2019/live/livestream?vhost=long.test.com
```
Then pull 1000 streams directly:
```
$ ./objs/sb_rtmp_load -c 1000 -s 10 -r rtmp://xxx.xxx.xxx.103:2019/live/livestream?vhost=long.test.com
[2019-10-27 16:42:00.428] [report] [16756] threads:1000 alive:1000 duration:30 tduration:0 nread:172.40 nwrite:0.93 tasks:1000 etasks:0 stasks:0 estasks:0
[2019-10-27 16:42:30.428] [report] [16756] threads:1000 alive:1000 duration:60 tduration:0 nread:212.69 nwrite:0.46 tasks:1000 etasks:0 stasks:0 estasks:0
```
Server CPU usage:
```
top - 16:40:27 up 243 days,  2:11,  4 users,  load average: 0.14, 0.42, 0.56
Tasks: 213 total,   2 running, 211 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.5 us,  0.3 sy,  0.0 ni, 98.5 id,  0.0 wa,  0.0 hi,  0.7 si,  0.0 st
KiB Mem : 65807864 total,  2276144 free,  1648396 used, 61883324 buff/cache
KiB Swap:  4194300 total,  4192468 free,     1832 used. 61505084 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 9548 root      20   0  675468 321276   1944 R  13.6  0.5  47:13.14 srs
 6572 root      20   0 1949000  44236  14768 S   1.7  0.1 955:10.49 containerd
 4921 root      20   0   24576   8708   2380 S   0.3  0.0  85:26.68 srs
 6052 root      20   0   13220    796    592 S   0.3  0.0 238:28.16 rngd
    1 root      20   0  199248   3912   2436 S   0.0  0.0  68:39.47 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kthreadd
```
Server network and disk I/O (dstat):
```
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  0   0  99   0   0   1|   0     0 |1309k   46M|   0     0 |  14k  880
  1   0  98   0   0   1|   0     0 |1312k   45M|   0     0 |  14k  913
  0   0  99   0   0   1|   0     0 |1167k   44M|   0     0 |  13k  897
  1   0  99   0   0   1|   0     0 |1236k   45M|   0     0 |  12k  810
  1   0  99   0   0   1|   0   148k|1309k   47M|   0     0 |  13k  911
  0   0  99   0   0   1|   0     0 |1296k   45M|   0     0 |  13k  817
  0   0  99   0   0   1|   0     0 |1269k   42M|   0     0 |  13k  843
  0   0  99   0   0   1|   0     0 |1195k   44M|   0     0 |  12k  894
  0   0  99   0   0   1|   0     0 |1192k   42M|   0     0 |  12k  812
  1   0  99   0   0   1|   0    20k|1271k   49M|   0     0 |  12k  795
  0   0  99   0   0   1|   0     0 |1243k   41M|   0     0 |  13k  949
  1   0  98   0   0   1|   0     0 |1320k   48M|   0     0 |  13k  909
  0   0  99   0   0   1|   0     0 |1254k   45M|   0     0 |  13k  858
```
Conclusion: when playing 200 kbps streams, roughly 1000 concurrent players generate about 45 MB/s of outbound traffic with SRS at around 13% of one CPU core, so the limit is again determined by bandwidth.
FLV play test
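Pulling the stream as HTTP-FLV requires SRS's built-in HTTP server and HTTP-FLV remuxing to be enabled. The real configuration is again not shown, so this is only a sketch consistent with the port 3019 used below:

```
# Assumed HTTP-FLV part of srs.conf (not shown in the original notes).
http_server {
    enabled         on;
    listen          3019;
}
vhost long.test.com {
    http_remux {
        enabled     on;
        mount       [vhost]/[app]/[stream].flv;
    }
}
```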
First publish a single stream:
```
$ ./objs/sb_rtmp_publish -i doc/source.200kbps.768x320.flv -c 1 -s 10 -r rtmp://xxx.xxx.xxx.103:2019/live/livestream?vhost=long.test.com
```
Then pull 1000 streams directly:
```
$ ./objs/sb_http_load -c 1000 -s 10 -r http://long.test.com:3019/live/livestream.flv
[2019-10-27 18:05:31.713] [report] [17375] threads:1000 alive:1000 duration:240 tduration:0 nread:238.23 nwrite:0.00 tasks:1000 etasks:0 stasks:0 estasks:0
[2019-10-27 18:06:01.713] [report] [17375] threads:1000 alive:1000 duration:270 tduration:0 nread:242.30 nwrite:0.00 tasks:1000 etasks:0 stasks:0 estasks:0
[2019-10-27 18:06:31.713] [report] [17375] threads:1000 alive:1000 duration:300 tduration:0 nread:244.52 nwrite:0.00 tasks:1000 etasks:0 stasks:0 estasks:0
```
Server CPU usage:
```
top - 18:04:10 up 243 days,  3:35,  4 users,  load average: 0.10, 0.17, 0.13
Tasks: 211 total,   2 running, 209 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.2 us,  0.9 sy,  0.0 ni, 97.1 id,  0.0 wa,  0.0 hi,  0.8 si,  0.0 st
KiB Mem : 65807864 total,  2559192 free,  1223956 used, 62024716 buff/cache
KiB Swap:  4194300 total,  4192468 free,     1832 used. 61930044 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18700 root      20   0  340136 107528   2000 R  41.5  0.2   3:15.25 srs
 6572 root      20   0 1949000  43996  14768 S   1.7  0.1 955:25.07 containerd
  966 root      20   0  122040   1492    888 S   0.3  0.0 174:11.47 wrapper
 9695 root      20   0  161976   2372   1600 R   0.3  0.0   0:29.07 top
    1 root      20   0  199248   3912   2436 S   0.0  0.0  68:40.50 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0   1:33.02 ksoftirqd/0
```
Server network and disk I/O (dstat):
```
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  1   1  97   0   0   1|   0     0 |1138k   42M|   0     0 |  13k 1447
  1   1  97   0   0   1|   0     0 |1121k   42M|   0     0 |  13k 1417
  1   1  97   0   0   1|   0     0 |1108k   41M|   0     0 |  13k 1436
  1   1  97   0   0   1|   0    28k|1110k   44M|   0     0 |  13k 1381
  1   1  97   0   0   1|   0     0 |1150k   44M|   0     0 |  13k 1300
  1   1  97   0   0   1|   0     0 |1190k   42M|   0     0 |  13k 1306
  1   1  98   0   0   1|   0     0 |1086k   42M|   0     0 |  13k 1320
  1   1  98   0   0   1|   0     0 |1186k   44M|   0     0 |  13k 1298
  1   1  98   0   0   1|   0    68k|1181k   41M|   0     0 |  14k 1348
  1   1  98   0   0   1|   0     0 | 853k   33M|   0     0 |  11k 1255
  1   1  98   0   0   0|   0     0 | 812k   37M|   0     0 |  10k 1149
  3   1  96   0   0   0|   0     0 | 876k   41M|   0     0 |  11k 1308
```
Conclusion: similar to RTMP, roughly 1000 players of the 200 kbps stream generate about 43 MB/s of outbound traffic with SRS at around 41% CPU, so the limit is still bandwidth, but the CPU usage is clearly much higher than for RTMP playback.
Summary
Whether publishing or playing, the real bottleneck in these tests is bandwidth. Moreover, SRS used only a single CPU core throughout, so a large amount of CPU headroom remained.
Since srs-bench does not support publishing FLV, FLV publishing was not tested for now.
------------------------------------------------------------
Addendum (2019-11-01): LAN test
The previous round was limited by the roughly 300 Mbps WAN link, so bandwidth was always the bottleneck. This round is run on a LAN with much more bandwidth available.
VM1: 192.168.90.40
VM2: 192.168.90.45
The host CPU is an Intel i7-8700 at 3.2 GHz.
Bandwidth test
VM1:
```
$ iperf3 -s -p 12345 -i 1
-----------------------------------------------------------
Server listening on 12345
-----------------------------------------------------------
```
VM2:
```
$ iperf3 -c 192.168.90.40 -p12345 -i 1 -t 30
Connecting to host 192.168.90.40, port 12345
[  4] local 192.168.90.43 port 50380 connected to 192.168.90.40 port 12345
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   111 MBytes   932 Mbits/sec    3    402 KBytes
[  4]   1.00-2.00   sec   113 MBytes   948 Mbits/sec    0    581 KBytes
[  4]   2.00-3.00   sec   111 MBytes   934 Mbits/sec    0    713 KBytes
[  4]   3.00-4.00   sec   111 MBytes   933 Mbits/sec    0    826 KBytes
[  4]   4.00-5.00   sec   111 MBytes   933 Mbits/sec    0    923 KBytes
[  4]   5.00-6.00   sec   111 MBytes   933 Mbits/sec    0   1014 KBytes
[  4]   6.00-7.00   sec   111 MBytes   933 Mbits/sec    0   1.07 MBytes
[  4]   7.00-8.00   sec   112 MBytes   944 Mbits/sec    0   1.14 MBytes
[  4]   8.00-9.00   sec   111 MBytes   933 Mbits/sec    0   1.21 MBytes
```
The bandwidth is close to gigabit.
Publish/play test results
CPU: i7-8700 @ 3.2 GHz; link bandwidth: 933 Mbits/sec.

| Test | Per-stream bitrate | Concurrent streams | CPU usage | Bandwidth used |
|---|---|---|---|---|
| RTMP publish test | 200kbps | 4000 | 100% | 114M |
| RTMP play test | 200kbps | 4000 | 20%~40% | 114M |
| FLV play test | 200kbps | 4000 | 60%~80% | 114M |
| RTMP publish test | 1000kbps | 500 | 40%~50% | 60M~80M |
| RTMP publish test | 1000kbps | 1000 | 75%~80% | 114M |
| RTMP play test | 1000kbps | 500 | 13%~15% | 60M~80M |
| RTMP play test | 1000kbps | 1000 | 15%~17% | 114M |
| FLV play test | 1000kbps | 500 | 18%~20% | 60M~80M |
| FLV play test | 1000kbps | 1000 | 45%~50% | 114M |
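
One final check on these numbers (my own arithmetic, assuming the "114M" column is MB/s as reported by dstat): 114 MB/s spread across 4000 streams of the 200 kbps source is about 228 kbps per stream on the wire, i.e. the ~933 Mbps link is effectively saturated. That is consistent with bandwidth being the limit in every 114M row while CPU still has headroom in most of them.

```
# 114 MB/s across 4000 x 200 kbps streams (assuming "114M" means MB/s, as in dstat).
echo "scale=1; 114 * 8 * 1000 / 4000" | bc   # => 228.0 kbps per stream ~= link saturation
```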