cluster_nodes info question

Raylin lin
Added almost 5 years ago

Hi all,

I have two nodes in a cluster: node A (honestyserver, 192.168.1.120) and node B (imserver, 192.168.1.125), both configured with --virt-hosts = 192.168.1.120. After both nodes started, I ran a Tsung test. Two minutes in, select * from cluster_nodes; showed node A at more than 99% mem_usage while node B was at only about 8%. How can I solve this? Thank you.

mysql> select * from cluster_nodes;
+---------------+------------------------------------------------------------------+---------------------+------+------------+-----------+
| hostname      | password                                                         | last_update         | port | cpu_usage  | mem_usage |
+---------------+------------------------------------------------------------------+---------------------+------+------------+-----------+
| honestyserver | 9d80998d620a0be5c0b67ddb33db33e731d88ed6e3554382c058b78d94b9e7c6 | 2014-03-31 18:38:46 | 5277 |   64.05863 | 99.646866 |
| imserver      | 32b9b148ff7755b8726fb887f4f6b2c84d58ef03c073fc48c0c371cb2433cf6a | 2014-03-22 18:07:34 | 5277 | 0.11052166 |  8.118886 |
+---------------+------------------------------------------------------------------+---------------------+------+------------+-----------+
2 rows in set (0.00 sec)

Replies (10)

Added by Artur Hefczyc (Tigase Team) almost 5 years ago

Can you provide us with your init.properties configuration file?

Added by Raylin lin almost 5 years ago

Artur Hefczyc wrote:

Can you provide us with your init.properties configuration file?

node A:

config-type=--gen-config-def
--admins=admin@192.168.1.120
--virt-hosts=192.168.1.120
--debug=server,xmpp.XMPPIOService,db,debug
--user-db=mysql
--user-db-uri=jdbc:mysql://192.168.1.120:3306/tigasedb?user=admin&password=111111&useUnicode=true&characterEncoding=UTF-8
--cluster-connect-all=true
--cluster-mode=true
--cross-domain-policy-file=/etc/tigase/cross-domain-policy.xml
--bosh-extra-headers-file=/etc/tigase/bosh-extra-headers-file.txt
--bosh-ports=5280
--sm-plugins=+jabber:iq:topicUser,+jabber:iq:conversation,+jabber:iq:storeLastMessage
muc/muc-lock-new-room[B]=false
--comp-class-1=tigase.muc.MUCComponent
--comp-name-1=muc
--comp-name-2=pubsub
--comp-class-2=tigase.pubsub.PubSubComponent
--monitoring=jmx:9050,http:9080,snmp:9060

node B:

config-type=--gen-config-def
--admins=admin@192.168.1.120
--virt-hosts=192.168.1.120
--debug=server,xmpp.XMPPIOService,db
--user-db=mysql
--user-db-uri=jdbc:mysql://192.168.1.125:3306/tigasedb?user=admin&password=111111&useUnicode=true&characterEncoding=UTF-8
--sm-plugins=+jabber:iq:topicUser,+jabber:iq:conversation,+jabber:iq:storeLastMessage
--cluster-connect-all=true
--cluster-mode=true
--cross-domain-policy-file=/etc/tigase/cross-domain-policy.xml
--bosh-extra-headers-file=/etc/tigase/bosh-extra-headers-file.txt
--bosh-ports=5280
muc/muc-lock-new-room[B]=false
--comp-class-1=tigase.muc.MUCComponent
--comp-name-1=muc
--comp-name-2=pubsub
--comp-class-2=tigase.pubsub.PubSubComponent
--monitoring=jmx:9050,http:9080,snmp:9060

We use MySQL 5.5 master-slave replication (192.168.1.120 as master, 192.168.1.125 as slave).

Added by Artur Hefczyc (Tigase Team) almost 5 years ago

Thank you for the configuration files. There are a few issues with your configuration, but none of them causes the problem with Tsung you describe, which is probably related to the Tsung configuration itself.

  1. You must not use an IP address as the virtual host; please create a virtual host name which is not an IP address (see the sketch after this list).

  2. Both cluster nodes must connect to the same database. In fact, you should have the exact same init.properties file on all cluster nodes.

  3. --cluster-connect-all=true is not necessary; remove it.

  4. Components (MUC, PubSub, etc.) will not work this way in cluster mode. To use them in a cluster setup, you need either a VirtualComponent, an external component, or ACS with real clustering for MUC and PubSub. If you do not test MUC or PubSub and do not plan to use them, do not bother setting them up.
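Putting points 1-3 together, a corrected init.properties (identical on both nodes) might look like the sketch below. This is an illustration only: xmpp.example.com is a placeholder virtual host name, and both nodes point at the same database on 192.168.1.120.

config-type=--gen-config-def
--admins=admin@xmpp.example.com
--virt-hosts=xmpp.example.com
--user-db=mysql
--user-db-uri=jdbc:mysql://192.168.1.120:3306/tigasedb?user=admin&password=111111&useUnicode=true&characterEncoding=UTF-8
--cluster-mode=true
--bosh-ports=5280
--monitoring=jmx:9050,http:9080,snmp:9060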

In the Tsung configuration, make sure you list all cluster nodes as servers, so Tsung connects to all nodes and distributes the load. Most likely, what happens in your test scenario is that Tsung connects to only one cluster node, hence the whole load lands on that one machine.
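A minimal sketch of the relevant fragment of a Tsung scenario, assuming the default client port 5222 is open on both nodes; Tsung picks a server from this list for each simulated user:

<servers>
  <server host="192.168.1.120" port="5222" type="tcp"/>
  <server host="192.168.1.125" port="5222" type="tcp"/>
</servers>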

Added by Artur Hefczyc (Tigase Team) almost 5 years ago

We use MySQL 5.5 master-slave replication (192.168.1.120 as master, 192.168.1.125 as slave).

As Tigase needs read-write access to the DB, you should connect all cluster nodes to the master DB.
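In this setup that means node B's --user-db-uri should point at the master on 192.168.1.120 as well, i.e. both nodes share the URI node A already uses:

--user-db-uri=jdbc:mysql://192.168.1.120:3306/tigasedb?user=admin&password=111111&useUnicode=true&characterEncoding=UTF-8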

Added by Raylin lin almost 5 years ago

Thank you, Artur, for your replies.

I have two more questions:

  (1) Tsung lists all cluster nodes as servers and connects to all of them itself. But if we test with a regular client (Spark or aSmack), does the client need to decide by itself which node to connect to? I thought all it needs to do is connect to the node whose virtual host name is set in --virt-hosts.

  (2) Does --virt-hosts support more than one virtual host name, e.g. --virt-hosts = nodeA,nodeB?

Thank you.

Added by Artur Hefczyc (Tigase Team) almost 5 years ago

Raylin lin wrote:

  (1) Tsung lists all cluster nodes as servers and connects to all of them itself. But if we test with a regular client (Spark or aSmack), does the client need to decide by itself which node to connect to? I thought all it needs to do is connect to the node whose virtual host name is set in --virt-hosts.

Yes, this is correct. The client needs to know only the virtual host name; it does not need to know the cluster nodes' names or IP addresses. However, Tsung is somewhat limited here, and preparing a proper testing environment this way is cumbersome. Therefore it is easier to give Tsung a list of cluster nodes so the load is distributed.

  1. The most popular and easiest method for distributing users across all cluster nodes is DNS round robin: your virtual domain resolves to the addresses of all cluster nodes, so each time a client asks DNS for your virtual domain's address, it gets a different cluster node in return (see the zone-file sketch after this list).

  2. Another option is to use Tigase's built-in load balancer, but not all clients support it.
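For illustration, a round-robin setup in a BIND-style zone file could look like this, assuming a placeholder virtual host name xmpp.example.com; the resolver rotates through the A records, spreading clients across both nodes:

xmpp.example.com.    IN A    192.168.1.120
xmpp.example.com.    IN A    192.168.1.125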

  (2) Does --virt-hosts support more than one virtual host name, e.g. --virt-hosts = nodeA,nodeB?

Yes, you can put multiple virtual host names here, but please note that virtual host names are NOT cluster node names.
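As a hypothetical example with placeholder domains, a comma-separated list serves several virtual hosts from the same cluster:

--virt-hosts = example.com,im.example.com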

Added by Raylin lin almost 5 years ago

Artur Hefczyc wrote:

The most popular and easiest method for distributing users across all cluster nodes is DNS round robin: your virtual domain resolves to the addresses of all cluster nodes, so each time a client asks DNS for your virtual domain's address, it gets a different cluster node in return.

DNS round robin sounds good. Would nginx be suitable for this job?

For example:

upstream tigase {
    server 192.168.1.120:5227 weight=1;
    server 192.168.1.125:5227 weight=1;
}

Added by Artur Hefczyc (Tigase Team) almost 5 years ago

Raylin lin wrote:

DNS round robin sounds good.

Yes, this is the preferred solution.

Would nginx be suitable for this job?


Well, using an HTTP proxy of some sort is a third option. There is, however, a scalability problem with it: the proxy can become a bottleneck, and one proxy can serve at most about 64k users towards a single cluster node, since each proxied connection consumes a source port. If you can use multiple proxies, then it should work OK.

Added by Raylin lin almost 5 years ago

Hi, Artur:

Someone recommended LVS (Linux Virtual Server) to me. Is LVS suitable for a Tigase cluster? Thank you.

Added by Artur Hefczyc (Tigase Team) almost 5 years ago

I do not have any experience with LVS. Maybe somebody else does?
