RabbitMQ Publishing Messages in LHDI

RabbitMQ's Management Console is a staple when troubleshooting an AMQP service environment. As a standard feature of RabbitMQ, the Management Console is typically available on port 15672 on the same host as the RabbitMQ service itself, which listens on port 5672 by default. As a security consideration, we don't typically enable it outside of development environments.
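If you only need the console from your own workstation, a kubectl port-forward straight to the pod is an alternative that avoids exposing the port on the service at all. A minimal sketch, using the credentials and the pod name established in the steps below:

# forward local port 15672 directly to the management console in the RabbitMQ pod
kubectl port-forward -n va-abd-rrd-dev "$rabbitmq_pod_name" 15672:15672

# the console UI and its HTTP API are then reachable locally:
curl -s -u user:bitnami http://localhost:15672/api/overview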

Expose the management console port:

When testing against Dev, you can connect to the RabbitMQ instance with a pod shell session and submit requests directly to RabbitMQ's Management API via curl. Start by confirming that you can access the RabbitMQ Management Console on port 15672:

# search the dev namespace for the active rabbitmq pod name:
kubectl get pods -n va-abd-rrd-dev | grep rabbitmq

# store the pod name in a shell variable, for example:
rabbitmq_pod_name=vro-rabbitmq

# once you have the correct pod name, use it to establish a pod shell session:
kubectl exec -it -n va-abd-rrd-dev $rabbitmq_pod_name -c rabbitmq--dev -- sh -c "(bash || ash || sh)"

# back on your workstation, confirm that the management console is ready for use by checking that the service exposes port 15672:
kubectl describe svc vro-rabbitmq -n va-abd-rrd-dev

# if not, the following command patches the service's port mapping to allow access on 15672:
kubectl patch svc vro-rabbitmq -n va-abd-rrd-dev --type merge -p '{"spec": {"ports": [{"port": 5672, "targetPort": 5672, "protocol": "TCP", "name": "http"}, {"port": 15672, "targetPort": 15672, "protocol": "TCP", "name": "console" }] } }'
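To verify that the patch took effect, you can read the ports straight off the service; both 5672 and 15672 should be listed:

kubectl get svc vro-rabbitmq -n va-abd-rrd-dev -o jsonpath='{.spec.ports[*].port}'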

# when submitting requests to the Management API while connected directly to the RabbitMQ pod, use localhost:
mgmt_host="http://localhost:15672"

# if connected to another pod within the namespace, use RabbitMQ's internal IP instead; find it under the pod's networking information in `kubectl describe pod` or the information panel in Lens, e.g.:
# mgmt_host="http://172.20.103.91:15672"
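The pod IP can also be pulled directly with kubectl rather than read out of `describe pod`:

# print the RabbitMQ pod's cluster-internal IP
kubectl get pod "$rabbitmq_pod_name" -n va-abd-rrd-dev -o jsonpath='{.status.podIP}'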

# for request/response style queues, a domain service publishes to a request queue
# and listens for the response on a reply queue that it creates and manages. For our
# curl-based mock domain service, first ensure that the reply exchange and queue exist:

curl -si -u user:bitnami -X PUT "$mgmt_host/api/exchanges/%2F/reply" \
--header 'Content-Type: application/json' \
--data-raw '{
    "type":"direct",
    "auto_delete":false,
    "durable":true,
    "internal":false,
    "arguments":{}
}'

curl -si -u user:bitnami -X PUT "$mgmt_host/api/queues/%2F/reply" \
--header 'Content-Type: application/json' \
--data-raw '{
    "auto_delete": false,
    "durable": true,
    "arguments": {}
}'
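Nothing above binds the reply queue to the reply exchange. Replies routed through the default exchange reach the queue by name, but if the responding service publishes to the reply exchange instead, a binding is needed as well. A sketch using the Management API's bindings endpoint, assuming the queue name doubles as the routing key:

# bind queue "reply" to exchange "reply" with routing key "reply"
curl -si -u user:bitnami -X POST "$mgmt_host/api/bindings/%2F/e/reply/q/reply" \
--header 'Content-Type: application/json' \
--data-raw '{
    "routing_key": "reply",
    "arguments": {}
}'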

# publish a request message to the bipApiExchange exchange; the reply_to and
# correlation_id properties tell the responding service where to send its reply
curl -s -u user:bitnami -H "Accept: application/json" -H "Content-Type: application/json" \
-X POST "$mgmt_host/api/exchanges/%2F/bipApiExchange/publish" \
-d '{
    "properties": {
        "delivery_mode": 2,
        "headers": {},
        "reply_to": "reply",
        "correlation_id": "9666958"
    },
    "routing_key": "getClaimDetailsQueue",
    "payload": "{ \"claimId\": 9666958 }",
    "payload_encoding": "string"
}'
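# the publish endpoint responds with a routed flag: {"routed":true} means a bound
# queue accepted the message; false usually means no binding matched the routing key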

# in general: create an exchange with a PUT request to "$mgmt_host/api/exchanges/${vhost}/${name}",
# and create a queue with a PUT request to "$mgmt_host/api/queues/${vhost}/${name}"


# to retrieve the contents of a queue:
curl -si -u user:bitnami -X POST "$mgmt_host/api/queues/%2F/reply/get" -d '{"count":1,"ackmode":"ack_requeue_true","encoding":"auto"}'
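The ack_requeue_true ackmode reads the message but leaves it on the queue; to consume it destructively, switch to ack_requeue_false:

# read one message and remove it from the queue
curl -si -u user:bitnami -X POST "$mgmt_host/api/queues/%2F/reply/get" -d '{"count":1,"ackmode":"ack_requeue_false","encoding":"auto"}'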
