
30k connections, 4.5k packets/sec, below 0.1 sec delivery time.


Artur Hefczyc (Tigase Team)
Added over 4 years ago


Results

The subject line briefly summarizes the load tests I ran last weekend on the Tigase server. Below is a detailed description of the tests and the environment. Let me start from the end: the results.

Here is a link to a screenshot of the server statistics web page. There is much more to tell, however. I was also monitoring CPU and RAM usage during the tests. So here is a table with all the numbers, and later on the numbers will be explained in detail.

Test results (statistics screenshot)

Concurrent user connections     30 000
Average packets per second      4 426
Average packet delivery time    25 ms
Average CPU usage               40%
Max RAM usage                   340 MB
Total test time                 2h, 24m
Number of packets processed     31 365 985




Test environment

Initially, 4 laptops were used:



          Tigase Server         C2S Sim 1          C2S Sim 2          C2S Sim 3
RAM       1.5 GB                1 GB               1.25 GB            370 MB
CPU       Centrino Duo 1.6 GHz  Centrino 1.7 GHz   Centrino 1.3 GHz   P3 350 MHz
OS        Gentoo Linux          Gentoo Linux       Gentoo Linux       Gentoo Linux
Kernel    2.6.20                2.6.20             2.6.20             2.6.20
JVM       Sun 1.6.0_02          Sun 1.6.0_02       Sun 1.6.0_02       Sun 1.6.0_02
TCP/IPs   30 000                12 500             17 500             --
DB        MySQL 5.0.44          --                 --                 --
HTTP      Apache 2.2            --                 --                 --
Monitor   get-stats.sh          --                 --                 --




TCP/IPs is the number of connections handled by the machine, DB is the database running on the machine, HTTP is the web server used to display online statistics, and Monitor is the software used to retrieve statistics during the tests.

Unfortunately the fourth laptop (Sim 3) broke down during the tests and I had to exclude it. Thus the final test run was executed on 3 machines.

As a database I used MySQL with the default configuration, running on the same machine as the Tigase server. There were 200 003 user accounts registered.

Tested software: Tigase server 3.0.2-b696 on Gentoo Linux with Sun JVM 1.6.0_02.

Details

Average CPU usage 40% - what is this?
The CPU usage was not constant during the test, as can be seen on the chart on the left. The usage depended directly on the packet traffic. Because of the client-side software configuration, a message was sent on each connection every few seconds. Sometimes a packet was sent on 10 thousand connections and sometimes on as few as 500. Thus the packet traffic was variable, so the load was variable too: sometimes the server was processing around 10 thousand packets per second and sometimes as few as 500.
The server was also running with a profiler attached to it, to better monitor the server activity. That affected CPU usage as well.

The server was running with the parameters -server -Xms200M -Xmx600M, which means it started with 200 MB of memory allocated and was allowed to use up to 600 MB.
It never reached the allowed maximum. The maximum memory it used was about 350 MB, and it is very likely it would run with a 300 MB limit. That could significantly impact CPU usage because of garbage collection, though.

The server activity was monitored by the profiler attached to the running process and by a utility gathering statistics at regular intervals - every 10 seconds.
The utility collected server statistics and also measured the server response time for each request. The total response time was usually below 25 milliseconds, including user authentication and statistics data collection.
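The statistics utility itself (get-stats.sh) is not shown here, but the sampling idea can be sketched in Python: call a request function at a fixed interval and record each call's duration. The dummy_request below is a hypothetical stand-in, not the real stats request:

```python
import time

def measure(request, samples=3, interval=0.01):
    """Call `request` every `interval` seconds and return each call's
    duration in milliseconds (the tests used a 10-second interval)."""
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        request()
        timings.append((time.monotonic() - start) * 1000)
        time.sleep(interval)
    return timings

def dummy_request():
    # Stand-in for the real statistics request made by get-stats.sh.
    time.sleep(0.005)  # pretend the server answers in ~5 ms

for ms in measure(dummy_request):
    print(f"response time: {ms:.1f} ms")
```

In the actual tests the request also included user authentication, which is why the full round trip is what was timed, not just the statistics fetch.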

Conclusions

The goal of these tests was to see how many concurrent connections the Tigase server can handle, so all connections were established on plain sockets without SSL or TLS.

The results clearly show that an increased number of connections doesn't impact the service significantly. The real load factor is the traffic generated by the clients, so in theory we could generate 10 thousand packets per second of traffic on a single connection as well as on 30 thousand connections. The increased number of connections doesn't impact server response time either.

The Tigase server uses about 10 kB of RAM for each connection, so in theory 1 GB of RAM should be enough to handle 100 thousand connections, assuming traffic of up to 10 000 packets/second. And this is the goal for the next load tests I am going to execute. The main constraint on testing more than 30 000 connections is the client-side environment: I need either more client-side machines or to improve my testing code to consume fewer resources.
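The memory estimate above is simple arithmetic; here is a quick sketch of it (the 10 kB per-connection figure comes from these tests, the rest is just unit conversion):

```python
# Back-of-the-envelope check: ~10 kB of RAM per connection.
BYTES_PER_CONNECTION = 10 * 1024      # ~10 kB, figure observed in the tests
connections = 100_000                 # target for the next load tests

total_bytes = BYTES_PER_CONNECTION * connections
total_gib = total_bytes / (1024 ** 3)
print(f"{connections} connections -> {total_gib:.2f} GiB")  # roughly 1 GB
```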

Of course, the server could probably handle much higher traffic on a machine with a better spec than the laptops I was testing on.