RabbitMQ: persistent message with Topic exchange

I am very new to RabbitMQ.

I have set up a 'topic' exchange. The consumers may be started after the publisher. I'd like the consumers to be able to receive messages that have been sent before they were up, and that were not consumed yet.

The exchange is set up with the following parameters:

exchange_type => 'topic'
durable => 1
auto_delete => 0
passive => 0

The messages are published with this parameter:

delivery_mode => 2

Consumers use get() to retrieve the messages from the exchange.

Unfortunately, any message published before a consumer was up is lost. I have tried different combinations of these parameters.

I guess my problem is that the exchange does not hold messages. Maybe I need to have a queue between the publisher and the consumer, but this does not seem to work with a 'topic' exchange, where messages are routed by a key.

Any idea how I should proceed? I use the Perl binding Net::RabbitMQ (which shouldn't matter) and RabbitMQ 2.2.0.


You need a durable queue to store messages if there are no connected consumers available to process the messages at the time they are published.

An exchange doesn't store messages, but a queue can. The confusing part is that exchanges can be marked as "durable", but all that really means is that the exchange itself will still be there after a broker restart; it does not mean that messages sent to that exchange are automatically persisted.
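
To make the distinction concrete, here is a minimal sketch using pika (Python, since that is what the example further down uses); the names 'events' and 'sensor_events' and the routing keys are made up for illustration. It is the durable queue, bound to the durable topic exchange, that holds the persistent message until a consumer turns up:

    #!/usr/bin/env python
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    # Durable topic exchange: survives a broker restart, but stores no messages itself
    channel.exchange_declare(exchange='events', exchange_type='topic',
                             durable=True, auto_delete=False)

    # Durable queue bound to the exchange: this is what actually holds the messages
    channel.queue_declare(queue='sensor_events', durable=True)
    channel.queue_bind(queue='sensor_events', exchange='events', routing_key='sensor.#')

    # Persistent message (delivery_mode=2): kept in the durable queue until consumed
    channel.basic_publish(exchange='events', routing_key='sensor.temperature',
                          body='21.5',
                          properties=pika.BasicProperties(delivery_mode=2))

    connection.close()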

Given that, here are two options:

  1. Perform an administrative step before you start your publishers to create the queue(s) yourself. You could use the web UI or the command-line tools to do this. Make sure you create them as durable queues so that they will store any messages routed to them even if there are no active consumers.
  2. Assuming your consumers are coded to always declare (and therefore auto-create) their exchanges and queues on startup (and that they declare them as durable), just run all your consumers at least once before starting any publishers. That will ensure that all your queues get created correctly. You can then shut down the consumers until they're really needed, because the queues will persistently store any future messages routed to them (see the sketch at the end of this answer).

I would go for #1. There may not be many steps to perform, and you could always script them so that they can be repeated. Plus, if all your consumers are going to pull from the same single queue (rather than have a dedicated queue each), it's really a minimal piece of administrative overhead.

Queues are something to be managed and controlled properly. Otherwise you could end up with rogue consumers declaring durable queues, using them for a few minutes but never again. Soon after, you'll have a permanently growing queue with nothing reducing its size, and an impending broker apocalypse.
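
If you go with option #2 instead, each consumer could declare the same durable objects itself on startup before fetching. A minimal sketch, reusing the made-up names from the sketch above and the get()-style retrieval mentioned in the question:

    #!/usr/bin/env python
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    # Re-declaring with identical arguments is idempotent, so it is safe on every start
    channel.exchange_declare(exchange='events', exchange_type='topic',
                             durable=True, auto_delete=False)
    channel.queue_declare(queue='sensor_events', durable=True)
    channel.queue_bind(queue='sensor_events', exchange='events', routing_key='sensor.#')

    # Pull a single message; basic_get returns (None, None, None) when the queue is empty
    method, properties, body = channel.basic_get(queue='sensor_events', auto_ack=False)
    if method is not None:
        print(body)
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection.close()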


As mentioned by Brian, an exchange does not store messages; it is mainly responsible for routing messages to other exchanges or queues. If the exchange is not bound to a queue, then all messages sent to that exchange will be 'lost'.

You should not need to declare fixed client queues in the publisher script, since that might not be scalable. Queues can be created dynamically by your publishers, and messages can be routed to them internally using exchange-to-exchange bindings.

RabbitMQ supports exchange-to-exchange bindings that allow for topology flexibility, decoupling and other benefits. You can read more at RabbitMQ Exchange to Exchange Bindings [AMQP].

(Diagram: example RabbitMQ exchange-to-exchange binding topology)

Here is example Python code that creates the exchange-to-exchange binding, with durable queues so that messages are persisted even when no consumer is present:

    #!/usr/bin/env python
    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    # Entry exchange used by all producers (external producers included) to send messages
    channel.exchange_declare(exchange='data_gateway',
                             exchange_type='fanout',
                             durable=True,
                             auto_delete=False)

    # Processing exchange that routes messages to the various queues; for internal use only
    channel.exchange_declare(exchange='data_distributor',
                             exchange_type='topic',
                             durable=True,
                             auto_delete=False)

    # Bind the external/producer-facing exchange to the internal exchange
    channel.exchange_bind(destination='data_distributor', source='data_gateway')

    # Create durable queues bound to the data_distributor exchange
    channel.queue_declare(queue='trade_db', durable=True)
    channel.queue_declare(queue='trade_stream_service', durable=True)
    channel.queue_declare(queue='ticker_db', durable=True)
    channel.queue_declare(queue='ticker_stream_service', durable=True)
    channel.queue_declare(queue='orderbook_db', durable=True)
    channel.queue_declare(queue='orderbook_stream_service', durable=True)

    # Bind each queue to the exchange with the correct routing key,
    # so messages are stored even when no consumer is present
    channel.queue_bind(queue='orderbook_db', exchange='data_distributor', routing_key='*.*.orderbook')
    channel.queue_bind(queue='orderbook_stream_service', exchange='data_distributor', routing_key='*.*.orderbook')
    channel.queue_bind(queue='ticker_db', exchange='data_distributor', routing_key='*.*.ticker')
    channel.queue_bind(queue='ticker_stream_service', exchange='data_distributor', routing_key='*.*.ticker')
    channel.queue_bind(queue='trade_db', exchange='data_distributor', routing_key='*.*.trade')
    channel.queue_bind(queue='trade_stream_service', exchange='data_distributor', routing_key='*.*.trade')

    connection.close()
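
For completeness, a publisher against this topology might look like the sketch below; the routing key ('exchange1.btcusd.ticker') and the message body are only assumptions chosen to match the '*.*.ticker' bindings above. Because the message is published with delivery_mode=2 and lands in durable queues, it is kept even if no consumer is running at the time:

    #!/usr/bin/env python
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    # Publish to the gateway exchange; the fanout copies the message to data_distributor,
    # whose topic bindings route it into the durable queues by its routing key.
    # The routing key below is an assumption matching the '*.*.ticker' binding.
    channel.basic_publish(exchange='data_gateway',
                          routing_key='exchange1.btcusd.ticker',
                          body='{"last": 42000.0}',
                          properties=pika.BasicProperties(delivery_mode=2))

    connection.close()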
    