newbie question with muc config

cheng huang
Added about 5 years ago

hi all,

I tried MUC in Tigase but failed (with tigase-server-5.2.0-beta3-b3269.tar.gz).

My init.properties is :

config-type=--gen-config-def

--admins=admin@jabber.your-great.net

--virt-hosts = jabber.your-great.net

--debug=server

--user-db=mysql

--user-db-uri=jdbc:mysql://10.15.107.76:3306/tigasedb?user=root&password=111111&useUnicode=true&characterEncoding=UTF-8

--comp-name-1 = muc

--comp-class-1 = tigase.muc.MUCComponent

With the PSI client, "service discovery" shows only one line, "jabber.your-great.net" (without Multi User Chat).

Is there anything wrong or missing in my MUC config?


Replies (10)

Added by Wojciech Kapcia TigaseTeam about 5 years ago

The package you used is the minimal one and doesn't contain any components; the MUC component is bundled in tigase-server-5.2.0-beta3-b3269-dist-max.tar.gz - you can simply copy the necessary library from it.

Added by cheng huang about 5 years ago

Thanks for your help.

MUC worked well with *****-dist-max.tar.gz. :)

But there seems to be something wrong with MUC in cluster mode:

A two-node cluster is deployed, with one client logged in on each node.

Client A created a chatroom and invited client B to join. Client B received the invitation and joined.

After that, neither client could see the other in the chatroom, and neither received the other's messages there.

Meanwhile, normal one-to-one chat between client A and client B works fine.

Is there anything else that needs to be set to use MUC in cluster mode?

Added by Wojciech Kapcia TigaseTeam about 5 years ago

Actually, simply deploying MUC in a cluster environment won't work (if the clients end up on different nodes) because MUC doesn't support clustering. There are two solutions:

Virtual components for the cluster mode

Load balancing external components in cluster mode

Added by cheng huang about 5 years ago

Thanks for your help!

I tried the first one:

Virtual components for the cluster mode.

Logged in to the server with the virtual component, the client failed to create a chatroom.

The client showed:

"S2S - Incorrect destination address - one of local virtual hosts or components."

The Tigase node with the real MUC component has hostname 76.work.

The Tigase node with the virtual component (hostname: 71.work) used this init.properties:

config-type=--gen-config-def

--admins=admin@jabber.your-great.net

--virt-hosts = jabber.your-great.net

--cluster-mode=true

--cluster-nodes=76.work,71.work

--debug=server

--user-db=mysql

--user-db-uri=jdbc:mysql://10.15.107.76:3306/tigasedb?user=root&password=111111&useUnicode=true&characterEncoding=UTF-8

--comp-name-1 = muc

--comp-class-1 = tigase.cluster.VirtualComponent

muc/redirect-to=muc@76.work.your-great.net

muc/disco-name=Multi User Chat

muc/disco-node=

muc/disco-type=text

muc/disco-category=conference

muc/disco-features=http://jabber.org/protocol/muc

I guess I misunderstood "muc/redirect-to" or some other settings, but I cannot find detailed descriptions of them.


Added by Artur Hefczyc TigaseTeam about 5 years ago

As the domain you have to put the exact domain name of the cluster node where the real MUC is located. In the --cluster-nodes property you used 76.work as a cluster node, so if this is correct (the cluster nodes connect correctly), your virtual MUC config should be:

muc/redirect-to=muc@76.work
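Folding that correction back into the virtual-component stanza posted earlier, the node without the real MUC would look roughly like this (a sketch reusing the hostnames and disco values from this thread, not an authoritative config):

```properties
# Virtual MUC component on the node that does NOT run the real MUC (71.work)
--comp-name-1 = muc
--comp-class-1 = tigase.cluster.VirtualComponent
# Redirect to the MUC component on the cluster node that really runs it;
# 76.work is the node name used in --cluster-nodes
muc/redirect-to=muc@76.work
muc/disco-name=Multi User Chat
muc/disco-type=text
muc/disco-category=conference
muc/disco-features=http://jabber.org/protocol/muc
```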

Added by cheng huang about 5 years ago

It works! Thank you very much! :)

I will try "Load balancing external components in cluster mode" later.

Added by cheng huang about 5 years ago

In my opinion, "Load balancing external components in cluster mode" is a better approach than "Virtual components for the cluster mode".

So I tried to build a cluster with two Tigase servers and one MUC component to see how it works.

Following: http://www.tigase.org/content/load-balancing-external-components-cluster-mode

The Tigase servers are deployed on 76.work and 71.work; the MUC component is deployed on 72.work.

All machines can ping each other with hostname.

The init.properties on the Tigase servers is:

config-type=--gen-config-def

--admins=admin@work

--virt-hosts = work

--cluster-mode=true

--cluster-nodes=76.work,71.work

--debug=server

--user-db=mysql

--user-db-uri=jdbc:mysql://10.15.107.76:3306/tigasedb?user=root&password=111111&useUnicode=true&characterEncoding=UTF-8

--comp-name-1 = ext

--comp-class-1 = tigase.server.ext.ComponentProtocol

--external = muc.work:muc-secret:listern:5270:work:accept:ReceiverBareJidLB

The init.properties on the MUC component is:

config-type=--gen-config-comp

--admins=admin@work

--virt-hosts = work

--user-db=mysql

--user-db-uri=jdbc:mysql://10.15.107.76:3306/tigasedb?user=root&password=111111&useUnicode=true&characterEncoding=UTF-8

--comp-name-1 = muc

--comp-class-1 = tigase.muc.MUCComponent

--external = muc.work:muc-secret:connect:5270:71.work:accept,76.work

But MUC doesn't work...

tigase.log.0 on the MUC component shows nothing related.

tigase.log.0 on the Tigase server shows:

2013-10-28 16:53:00.201 [pool-7-thread-7] ConnectionManager$1.run() FINE: Reconnecting service for component: s2s, to remote host: muc.work on port: 5,269

2013-10-28 16:53:00.206 [ConnectionOpenThread] ConnectionManager$ConnectionListenerImpl.accept() FINEST: Accept called for service: work@muc.work

2013-10-28 16:53:00.206 [ConnectionOpenThread] ConnectionManager$ConnectionListenerImpl.accept() FINEST: Problem reconnecting the service: CID: work@muc.work, null, type: connect, Socket: null, jid: null, cid: work@muc.work

2013-10-28 16:53:00.206 [ConnectionOpenThread] CIDConnections.checkOpenConnections() FINEST: Scheduling task for openning a new connection for: work@muc.work

2013-10-28 16:53:00.206 [pool-8-thread-7] CIDConnections$2.run() FINEST: Running scheduled task for openning a new connection for: work@muc.work

2013-10-28 16:53:00.207 [pool-8-thread-7] CIDConnections.openOutgoingConnections() FINEST: Checking DNS for host: muc.work for: work@muc.work

2013-10-28 16:53:00.207 [pool-8-thread-7] CIDConnections.initNewConnection() FINEST: STARTING new connection: work@muc.work

2013-10-28 16:53:00.207 [pool-8-thread-7] CIDConnections.initNewConnection() FINEST: work@muc.work connection params: {cid=work@muc.work, ifc=[Ljava.lang.String;@7d4a914f, local-hostname=work, port-no=5269, remote-hostname=muc.work, remote-ip=180.168.41.175, s2s-connection-key=S2S: null, socket=plain, srv-type=_xmpp-server._tcp, type=connect}

2013-10-28 16:53:00.207 [pool-8-thread-7] ConnectionManager.reconnectService() FINER: Reconnecting service for: s2s, scheduling next try in 2secs, cid: work@muc.work

It seems the Tigase server tries to connect to "muc.work" instead of "72.work". "muc.work" is not configured anywhere, and the server resolves it to a wrong IP, "180.168.41.175".

I think the config on the Tigase servers should not tie the MUC component to just one node, as we may deploy several MUC component nodes later.

Thanks for help!

Added by Wojciech Kapcia TigaseTeam about 5 years ago

On the MUC component machine, please try the following config:

--external=muc.work:muc-secret:connect:5270:work;71.work;76.work:accept

Also, on the main server machine you have a typo in the external configuration: "listern" (it should be "listen").
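Putting both fixes together, the two --external lines would presumably end up as follows (a sketch reusing the hostnames and the muc-secret password from this thread; "listen" replaces the "listern" typo):

```properties
# On the Tigase server machines (76.work, 71.work): listen for the component
--external = muc.work:muc-secret:listen:5270:work:accept:ReceiverBareJidLB
# On the MUC component machine (72.work): connect out to the cluster nodes
--external = muc.work:muc-secret:connect:5270:work;71.work;76.work:accept
```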

Added by cheng huang about 5 years ago

Is the config content at http://www.tigase.org/content/load-balancing-external-components-cluster-mode out of date?

I followed the example on that page, but it failed.

Following Kapcia's suggestion, the whole system now works very well.

Clients logged in on different Tigase servers can chat in the chatroom.

Thanks for all help!

Added by Wojciech Kapcia TigaseTeam about 5 years ago

Apologies, the documentation was outdated and I've updated it.
