
We have an IBM MQ v8 setup with one high-volume, non-persistent queue and many consumers (50+) on that queue. The large number of consumers is needed to keep up with the volume of messages being published to the queue.

What we now notice is that the queue manager is not distributing the messages evenly over those consumers. A few consumers get up to 300 messages per minute, while many others only get a few messages per minute (<10). Meanwhile, there are many messages on the queue and the queue depth is steadily increasing. CPU and memory on the consumer side are not a problem; utilization of both is below 50%.

Can someone explain how the IBM MQ queue manager distributes messages over multiple consumers? And is it possible to influence this, on either the server or the consumer side, so that messages are distributed more evenly over the available consumers?

Added after Mark Taylor's response

The challenge we face is that more than 10,000 messages are added to the queue per minute and we're not able to consume them fast enough. Our current setup is a simple consumer running in a Docker container, and we scale by running multiple containers. Running 12 consumers (Docker containers) does increase the overall throughput; running 50+ consumers does not add any throughput. Each consumer is simple (sketched in code after the list):

1. Connect to the queue manager
2. Connect to the queue
3. While true:
   - Get a message from the queue
   - Process the message (commenting this out does not increase overall performance)
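
In code, the loop is roughly the following (a minimal sketch using the IBM MQ classes for Java, MQI style; host, port, channel, queue manager and queue names are placeholders, not our real configuration):

```java
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class SimpleConsumer {
    public static void main(String[] args) throws MQException {
        MQEnvironment.hostname = "mqhost";       // placeholder
        MQEnvironment.port = 1414;               // placeholder
        MQEnvironment.channel = "APP.SVRCONN";   // placeholder

        // 1. Connect to the queue manager
        MQQueueManager qmgr = new MQQueueManager("QM1"); // placeholder

        // 2. Connect to the queue, opened for shared input so 50+ consumers can read it
        MQQueue queue = qmgr.accessQueue("APP.QUEUE",
                CMQC.MQOO_INPUT_SHARED | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_NO_SYNCPOINT
                | CMQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = 5000; // wait up to 5 s for a message

        // 3. Get/process loop
        while (true) {
            MQMessage msg = new MQMessage();
            try {
                queue.get(msg, gmo);
                // process(msg)  <- application logic; removing it changes nothing
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) {
                    continue; // wait interval expired, just poll again
                }
                throw e;
            }
        }
    }
}
```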

How can we achieve higher message consumption throughput? Would it help, for example, if within one container we connected to the queue manager once and then had multiple threads use that same queue manager connection to access the queue and get messages? Or should we even reuse the queue object across multiple threads?
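
To make the question concrete, this is the kind of pattern we are considering (again only a sketch with placeholder names, assuming the IBM MQ classes for JMS): one Connection per container, and a separate Session plus MessageConsumer per thread, since JMS sessions are single-threaded and must not be shared:

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ThreadedConsumer {
    public static void main(String[] args) throws JMSException {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "mqhost");    // placeholder
        cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);                // placeholder
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN"); // placeholder
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");   // placeholder
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        // One connection per container ...
        Connection conn = cf.createConnection();
        conn.start();

        int threads = 8; // tune per container
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    // ... but a Session and MessageConsumer per thread
                    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageConsumer consumer =
                            session.createConsumer(session.createQueue("APP.QUEUE")); // placeholder
                    while (true) {
                        Message msg = consumer.receive(5000);
                        if (msg != null) {
                            // process(msg)
                        }
                    }
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}
```

Whether this beats simply running more single-threaded containers presumably depends on where the bottleneck really is (network round trips, conversation sharing on the SVRCONN channel, contention on the queue itself), which is what we are trying to understand.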

Regards,

Gero

  • Are you certain the messages are all non-persistent? You state "non-persistent queue", but the queue-level setting DEFPSIST is just a default; the persistence of a message is determined by the application at PUT time, and it cannot change as the message passes through subsequent queues. An app really has four choices at PUT time: 1. Persistent, 2. Non-Persistent, 3. Use Queue Default Persistence, 4. Don't specify anything at all, in which case it behaves the same as if #3 were specified. (The sketch after these comments illustrates the PUT-time choice.)
    – JoshMc
    Commented Jun 8, 2018 at 18:58
  • With persistent messages put outside of syncpoint there can be lock contention on the queue that can cause slow-consumer behavior, which is why I am asking if you are sure they are really non-persistent.
    – JoshMc
    Commented Jun 8, 2018 at 18:58
  • I checked the properties of messages received from the queue and the messages are not persistent (Persistence: 0).
    – Gero
    Commented Jun 11, 2018 at 8:50
  • What API do you use to access MQ, e.g. MQI, JMS, something else? Is your app multi-threaded? If so, how many threads are consuming in each container? A possible improvement would be to set SHARECNV(1) on the SVRCONN channel you connect to, or make the client-side equivalent change.
    – JoshMc
    Commented Dec 19, 2019 at 14:33
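
To illustrate the PUT-time choice described in the first comment above, a minimal JMS sketch (the connection factory is assumed to be configured as in the question's sketches; the queue name is a placeholder):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class PersistenceDemo {
    // cf is any configured ConnectionFactory, e.g. built as in the question's sketches
    static void demo(ConnectionFactory cf) throws JMSException {
        Connection conn = cf.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue q = session.createQueue("APP.QUEUE"); // placeholder

        // PUT side: the producing app decides persistence (choices #1/#2 above);
        // omit setDeliveryMode and the JMS-level default (PERSISTENT) applies.
        // (Choice #3, "use queue default", needs an IBM-specific destination
        // option that is not shown here.)
        MessageProducer producer = session.createProducer(q);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        producer.send(session.createTextMessage("hello"));

        // GET side: verify what actually arrived (Persistence: 0 = non-persistent)
        MessageConsumer consumer = session.createConsumer(q);
        Message m = consumer.receive(5000);
        if (m != null) {
            System.out.println("persistent = "
                    + (m.getJMSDeliveryMode() == DeliveryMode.PERSISTENT));
        }
        conn.close();
    }
}
```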

1 Answer


MQ's default behaviour is to give messages to the MOST RECENT getter. That generally improves performance, as that process is most likely to be "hot" (in the processor cache). So you should not expect equal distribution of messages. If you are seeing one application get most of the messages, that implies it is regularly finishing one message before another is available for retrieval: it rejoins the queue of waiters before the next message arrives.

There are many aspects that affect overall performance, including transactionality, retrieval criteria, contention and so on, so it's not really possible to say what your problem is, or whether changing that default distribution algorithm (there is an undocumented tuning parameter that reverses the queue of waiters) would help. And having client connections, where the waiting is really done by the "proxy" SVRCONN processes and threads, makes it more complicated.

  • Thanks for the explanation. I added a bit more detail; I hope you can explain how we can best scale up message consumption throughput.
    – Gero
    Commented Jun 8, 2018 at 5:51
