ClusteredRoomStrategyV2 throwing errors on MUC creation

Clint Fenton
Added over 2 years ago

Hi,

We have enabled MUC clustering between two EC2 instances and it all seems to work fine with the default clustering strategy. No errors, results as expected.

However, when we enable tigase.muc.cluster.ClusteredRoomStrategyV2, we see these errors in the logs:

[out_7-cl-comp] AbstractMessageReceiver$QueueListener.run() SEVERE: [out_7-cl-comp] Exception during packet processing: from=null, to=null, DATA=muc@ip-172-x-x-25.us-west-1.compute.internalmuc@ip-172-x-x-26.us-west-1.compute.internalownerinfo___3b53fd34-5473-48d0-8393-aa25cf46a000@chat.dev.domain.com0926b457-4c79-4780-b418-bc31a319bc27@muc.chat.dev.domain.commuc@ip-172-x-x-25.us-west-1.compute.internal, SIZE=754, XMLNS=tigase:cluster, PRIORITY=CLUSTER, PERMISSION=NONE, TYPE=set

java.lang.NullPointerException

at tigase.muc.cluster.AbstractClusteredRoomStrategy$c.executeCommand(SourceFile:552)

at tigase.cluster.ClusterController.handleClusterPacket(ClusterController.java:109)

at tigase.cluster.ClusterConnectionManager.processOutPacket(ClusterConnectionManager.java:351)

at tigase.server.AbstractMessageReceiver$QueueListener.run(AbstractMessageReceiver.java:1444)

These occur roughly 50% of the time when a new MUC room is created. They don't appear to have any negative effect on the state of the newly created room, but they are logged as SEVERE, so I would like to understand what might be causing them and whether we should be concerned.

We are on tigase-server-7.0.1-b3810 and using the JARs included with that distribution for clustering.


init.properties
----------------
--comp-name-1 = muc
--comp-class-1 = tigase.muc.cluster.MUCComponentClustered
--comp-name-2 = proxy
--comp-class-2 = tigase.socks5.Socks5ProxyComponent
--virt-hosts = chat.dev.domain.com
--user-db-uri = jdbc:mysql://xxx
--user-db = mysql
--user-repo-pool-size=128
--admins = xxx@chat.dev.domain.com
--cluster-mode = true
--ssl-container-class=tigase.io.SSLContextContainer
--c2s-ports=5222,5223,443,80
config-type = --gen-config-def
bosh/connections/ports[i] = 5280, 5281
bosh/connections/5281/socket = ssl
bosh/connections/5281/type = accept
muc/modules/presences[S]=tigase.muc.modules.PresenceModuleNoBroadcast
muc/muc-lock-new-room[B]=false
muc/history-db=none
muc/default_room_config/muc#roomconfig_persistentroom=false
muc/default_room_config/muc#roomconfig_publicroom=false
muc/default_room_config/tigase#presence_filtering=true
muc/default_room_config/tigase#presence_filtered_affiliations=none
muc/muc-allow-chat-states[B]=true
muc/muc-strategy-class[S]=tigase.muc.cluster.ClusteredRoomStrategyV2
--cm-see-other-host=none
--cm-traffic-throttling = xmpp:0:0:disc,bin:0:0:disc
--cm-ht-traffic-throttling = xmpp:0:0:disc,bin:0:0:disc
bosh/concurrent-requests[I] = 8
bosh/max-inactivity[L] = 30
ws2s/connections/ports[i]=5290,5291
ws2s/connections/5291/socket=ssl
ws2s/connections/5291/type=accept

Replies (2)


Added by Andrzej Wójcik over 2 years ago

In ClusteredRoomStrategyV2 we notify other cluster nodes about changes to room configuration (room creation is one of these changes). This strategy requires that rooms are persistent (from https://projects.tigase.org/projects/tigase-acs-muc/wiki/ClusteredRoomStrategyV2):

In this strategy MUC rooms are persistent and each room is hosted on every node. Every node contains a full list of rooms (as they are persistent).

This is required because room configuration is synchronized between nodes by reading the configuration from the database.

However, your configuration contains the following line:

muc/default_room_config/muc#roomconfig_persistentroom=false

which prevents the MUC component from saving the room during the initial configuration save, and that is what causes this error.

This may also lead to minor issues with synchronization of room configuration between cluster nodes. I would recommend removing the line that disables persistence of MUC rooms from your config, as persistence of MUC rooms is required for this clustering strategy to work properly.
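For reference, the relevant MUC section would then look roughly like this (a sketch based only on the configuration posted above, with the roomconfig_persistentroom override dropped and all other values left unchanged):

init.properties (MUC section)
----------------
muc/modules/presences[S]=tigase.muc.modules.PresenceModuleNoBroadcast
muc/muc-lock-new-room[B]=false
muc/history-db=none
muc/default_room_config/muc#roomconfig_publicroom=false
muc/default_room_config/tigase#presence_filtering=true
muc/default_room_config/tigase#presence_filtered_affiliations=none
muc/muc-allow-chat-states[B]=true
muc/muc-strategy-class[S]=tigase.muc.cluster.ClusteredRoomStrategyV2

With the override removed, newly created rooms fall back to the component's default persistence behaviour, so the strategy can read their configuration back from the database on every node.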


Added by Clint Fenton over 2 years ago

Thanks! Updating our config as described resolved the errors.
