VRO RabbitMQ Strategy
The goal of this page is to identify the strategy that VRO implements for Partner Teams' integrations with VRO microservices via RabbitMQ. This document includes:
- Current Drawio Document
- VRO RabbitMQ Declaration Rules
- VRO & Partner Team Responsibilities
- Exchange / Queue Declarations and Bindings
- Global Exchange / Queue Policies
- Request / Response Model
- Stream / Push Model
- Partner Team Recommendations
For more information on RabbitMQ specifics or AMQP protocol concepts, consult the documentation on the RabbitMQ website.
The diagram below was created in Drawio, and represents the current use of RabbitMQ in the VRO microservices architecture.
When declaring exchanges or queues for any service, the following naming convention must be used:
- Format: <my_service>.<specific_purpose>.<more_narrow>
- Case: snake_case
Examples:
- Requests made to the svc-bip-api to get claim details would be declared with this exchange and queue combo:
  - Exchange: svc_bip_api.requests
  - Queue: svc_bip_api.get_claim_details
- An argument could be made for svc_bip_api.requests.get_claim_details as the queue name; however, this queue is already bound to the exchange svc_bip_api.requests, so the prefix is redundant.
Queues and Exchanges should exist for a specific reason.
The main service is responsible for declaring persistent exchanges for a specifically purposed message to be routed through. For example, svc-bip-api should be responsible for declaring its request exchange svc_bip_api.requests.
The service that will be consuming from any particular queue is responsible for declaring that queue. For example, svc-bip-api should be responsible for declaring queues on its request exchange svc_bip_api.requests for each of the queues it will be accepting requests on, e.g. svc_bip_api.get_claim_details, svc_bip_api.get_claim_contentions, etc. This forces services to declare queues for their specific purposes.
Another example is svc-bgs-api declaring a queue to send healthcheck requests to and another queue to pull those responses from, in an effort to verify the service's connection to RabbitMQ. The healthcheck reply queue is declared by the service using it.
If a service needs to know only of the existence of a queue or exchange, it should declare the resource passively. This means that if the resource does not exist, the service will fail to connect to it. This is in contrast to the active declarations in the rules above, where the resource is created if it does not exist. By using passive declaration, the declaration rules above are enforced via code.
For example, the EP Merge application performs passive declarations of request queues/exchanges created by the svc-bip-api
to field requests to downstream endpoints.
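A minimal sketch of a passive declaration, assuming Python with the pika client (the connection setup and error handling below are illustrative assumptions, not VRO's actual code):

```python
import pika
from pika.exceptions import ChannelClosedByBroker

# Illustrative connection only; a real service would read broker settings from config.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

try:
    # passive=True only verifies that the resource already exists; nothing is created.
    channel.exchange_declare(exchange="svc_bip_api.requests", passive=True)
    channel.queue_declare(queue="svc_bip_api.get_claim_details", passive=True)
except ChannelClosedByBroker:
    # The broker closes the channel (404 not_found) if the resource is missing,
    # surfacing the fact that the owning service has not declared it yet.
    raise RuntimeError("svc-bip-api has not declared its exchange/queue on the broker")
```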
Services should have their own dead letter exchange, and only if necessary. The DLX must be created via policy. Due to the Consumer Queue Declaration rule, it is only necessary for a service to also declare its own dead letter queue on that exchange if it will be consuming from that queue.
Exchanges and Queues should not share global policies. Due to the way policies are applied to any resource (queues/exchanges), only one policy can be applied to a given resource at a time. These policies are set in the definitions.json file that is loaded upon deployment of the rabbitmq service. More information on policies can be found here.
The following properties are required to be assigned on declaration:
- type (exchanges only)
- auto-delete
- durable
- exclusive (queues only)
- passive-declare
Due to the way declarations are made, optional parameters should be applied via policy to reduce the burden of deployment to a distributed environment. The main reason for setting some of these properties via policy is deployability. For example, if message-ttl is set at declaration time for a specific queue, then every service that declares that queue for use must also set that property on declaration. If two or more services using that queue do not declare it identically, then only the first one to declare it will be able to connect to it, and the others will fail with a 406 (see here).
Let's say that we change all services to match the new configuration; then all those services will have to be redeployed, which we just went through (painfully) with the dead letter configurations. Additionally, queues/exchanges will need to be manually deleted if they were declared with autodelete=false, or in any cases where autodelete=true but the queue never had a consumer or the exchange never had any bindings (meaning they will not auto-delete themselves).
The docs say that it is recommended that all optional parameters are set via policy:
Optional queue arguments can be set in a couple of ways:
- To groups of queues using policies (recommended)
- On a per-queue basis when a queue is declared by a client
- For the x-queue-type argument, using a default queue type
The former option is more flexible, non-intrusive, does not require application modifications and redeployments. Therefore it is highly recommended for most users.
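As an illustrative excerpt of what a queue policy in definitions.json can look like (the policy name, pattern, and values below are assumptions for the sake of example, not VRO's actual policy definitions):

```json
{
  "policies": [
    {
      "vhost": "/",
      "name": "svc_bip_api_queue_policy",
      "pattern": "^svc_bip_api\\.",
      "apply-to": "queues",
      "priority": 0,
      "definition": {
        "message-ttl": 60000,
        "dead-letter-exchange": "svc_bip_api.dlx"
      }
    }
  ]
}
```

Because only one policy applies to a given queue, related settings (TTL, dead-lettering, limits) are combined into a single policy definition rather than split across several overlapping ones.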
Additionally, there are a handful of properties that are better set at the message level on publish as opposed to at the policy level, message-ttl being one of them. By setting this at the client level, the client (publisher) is in charge of how long it should wait for that message to be consumed.
For example, the EP Merge app will add a message-ttl via the expires = 0 property to every request it makes to the BIP and BGS services. This means that if the BIP or BGS service is not available to consume the message, the message is dropped or dead-lettered immediately, and EP Merge assumes it failed and continues processing the merge appropriately. If another application were willing to wait longer, it could set it to 60 seconds or whatever is appropriate for that application's logic.
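For instance, with a Python client like pika (an assumption here; partner applications may use any AMQP client), the per-message TTL is carried in the AMQP expiration property:

```python
import pika

# The AMQP per-message TTL is the "expiration" property, a string in milliseconds.
# "0" means the message expires immediately unless a consumer is ready to receive it,
# mirroring the EP Merge behavior described above; "60000" would wait 60 seconds.
properties = pika.BasicProperties(
    content_type="application/json",
    expiration="0",
)
```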
VRO is responsible for:
- declaring exchanges and their type (direct exchange, fanout exchange, topic exchange, or headers exchange)
- declaring request queues
- binding request queues to the exchanges
- setting all global exchange / queue policies (Dead-letter exchanges, TTL, Queue Length Limit, etc.)
- setting any VRO application queue/exchange parameters that differ from VRO global policies via declarations
Exchanges:
- must be declared durable (exchange will survive a broker restart)
- must be auto-deleted (exchange is deleted when the last queue is unbound from it)
Queues:
- must be declared durable (queue will survive a broker restart)
- must be auto-deleted (a queue that has had at least one consumer will be deleted automatically when the last consumer has unsubscribed (disconnected))
- must not be exclusive (queue is able to have more than a single connection besides the declaring service)
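As an illustration of these requirements, here is a minimal sketch in Python with pika (VRO services may use other client libraries, and the routing key is assumed here to match the queue name):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Request exchange: durable and auto-delete, as required above.
channel.exchange_declare(
    exchange="svc_bip_api.requests",
    exchange_type="direct",
    durable=True,
    auto_delete=True,
)

# Request queue: durable, auto-delete, and not exclusive so that connections
# other than the declaring service's can publish to and consume from it.
channel.queue_declare(
    queue="svc_bip_api.get_claim_details",
    durable=True,
    auto_delete=True,
    exclusive=False,
)

# Bind the request queue to the request exchange.
channel.queue_bind(
    queue="svc_bip_api.get_claim_details",
    exchange="svc_bip_api.requests",
    routing_key="svc_bip_api.get_claim_details",
)
```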
In order for partner teams to communicate via RabbitMQ with VRO's microservices, queues and exchanges must be declared identically to the VRO microservice that will be processing requests and supplying responses. Failure to do so will result in the applications' inability to consume or publish to an exchange/queue and, depending on the RabbitMQ client library, a runtime exception.
- must match VRO declarations for exchanges
- must not publish messages to the default exchange
- must match VRO declarations for queues
- must set any application response queue parameters that differ from VRO global policies via declarations
- must declare response queues as necessary
- must bind response queues to the appropriate exchanges
- must provide the reply_to property in the request message
- must correlate requests made to responses received by using the correlation_id property in the request message
  - must also keep track of those correlation_ids
- must implement client time-out or retry policies for requests where an expected response was never received
- must provide any other message-level properties in the request message, such as delivery_mode (see here for more info on message-level properties)
- must implement any sort of publisher acknowledgements if desired
- must provide manual message consumer acknowledgements, rejections, or negative acknowledgements to the server upon receiving a response message
- must validate the correlation_id if consuming a message as a response to a request
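Below is a sketch of how several of these responsibilities (manual acknowledgements, correlation_id tracking and validation) might look in Python with pika. The queue name, tracking set, and connection parameters are hypothetical.

```python
import json

import pika

# Hypothetical tracking set of correlation_ids for requests this application
# has published and is still awaiting responses for.
pending_correlation_ids = set()


def on_response(ch, method, properties, body):
    # Validate the correlation_id against outstanding requests before treating
    # the message as a response to one of our requests.
    if properties.correlation_id not in pending_correlation_ids:
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
        return
    pending_correlation_ids.discard(properties.correlation_id)
    response = json.loads(body)
    print(response["statusCode"], response["statusMessage"])
    # Manual consumer acknowledgement, as required above.
    ch.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.basic_consume(
    queue="my_partner_app.responses",  # hypothetical response queue declared by the partner team
    on_message_callback=on_response,
    auto_ack=False,  # acknowledgements are sent manually in on_response
)
channel.start_consuming()
```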
TBD See policies. Consider
In order to allow multiple partner teams to utilize downstream VRO microservices, the request and response model for each of VRO's microservices through RabbitMQ shall be consistent. Keep in mind, publishing with the correct message properties and payload is the partner team application's responsibility. It is also the responsibility of each VRO microservice to validate requests before processing, or, in the case of a pass-through service like svc-bip-api, pass the request as-is to a downstream service and report the response as reported by the downstream service.
The Request / Response Model shall be implemented only on direct exchanges.
The following structure shall be used for publishing requests to VRO microservices via RabbitMQ:
- content_type="application/json"
- app_id - name of calling application
- reply_to - name of response queue if a response is desired
- correlation_id - the string representation of a UUID assigned to a request
  - will also be returned with the response
  - used by the requestor to correlate each response to its original request
The required payload is determined by the VRO microservice to which the requests are routed.
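A sketch of publishing a request with these properties, assuming Python with pika and hypothetical application/queue names (my_partner_app, my_partner_app.responses); the payload is a placeholder, since the real payload is defined by the target microservice:

```python
import json
import uuid

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

correlation_id = str(uuid.uuid4())
properties = pika.BasicProperties(
    content_type="application/json",
    app_id="my_partner_app",              # hypothetical name of the calling application
    reply_to="my_partner_app.responses",  # hypothetical response queue, if a response is desired
    correlation_id=correlation_id,        # returned with the response for correlation
)
channel.basic_publish(
    exchange="svc_bip_api.requests",
    routing_key="svc_bip_api.get_claim_details",
    body=json.dumps({"claimId": 1234}),   # placeholder payload; the target service defines the schema
    properties=properties,
)
# Keep track of correlation_id so the matching response can be validated later.
```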
- content_type="application/json"
- correlation_id - the string representation of a UUID used to correlate the request for which this response was made, if the request contained a correlation_id
- app_id - id of the application returning the response
- other message properties set by RabbitMQ (see here)
VRO microservices are responsible for determining the majority of the body of the response. The only required fields are the integer value statusCode and the string value statusMessage for any type of request. The statusCode should represent a typical REST response code or the VRO microservice's defined values. The statusMessage should be a string representation of the statusCode for quick readability of the response.
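For completeness, here is a hedged sketch of the service side publishing such a response. Python with pika is assumed purely for illustration, and the response exchange name is an assumption, since responses are routed through whatever exchange the partner's response queue is bound to:

```python
import json

import pika


def handle_request(ch, method, properties, body):
    # ... process the request, then build the required fields ...
    response = {"statusCode": 201, "statusMessage": "CREATED"}

    if properties.reply_to:
        ch.basic_publish(
            exchange="svc_bip_api.responses",  # assumed exchange the reply_to queue is bound to
            routing_key=properties.reply_to,
            body=json.dumps(response),
            properties=pika.BasicProperties(
                content_type="application/json",
                app_id="svc_bip_api",
                correlation_id=properties.correlation_id,  # echoed back for the requestor
            ),
        )

    # Acknowledge the request message after the response has been published.
    ch.basic_ack(delivery_tag=method.delivery_tag)
```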
If additional information regarding the status is desired, the VRO microservice is responsible for adding the optional field messages alongside statusCode and statusMessage. The messages field is an array of objects, each containing the following fields:
- key - required string - represents the error key identification
- severity - required string - represents the severity of the error
- status - optional integer - represents the HTTP status code
- httpStatus - optional string - text representation of the response's HTTP status
- text - optional string - text for more information
- timestamp - optional string - ISO-8601 date-time of when the error occurred
The messages construct is useful for services like the svc-bip-api, where the response might be passed through directly from a downstream service.
{
"statusCode":201,
"statusMessage":"CREATED"
}
{
"statusCode":500,
"statusMessage":"INTERNAL_SERVER_ERROR"
}
{
"statusCode":500,
"statusMessage":"INTERNAL_SERVER_ERROR",
"messages": [
{
"key":"java.class.where.error.happened",
"severity":"FATAL",
"status":500,
"httpStatus":"INTERNAL_SERVER_ERROR",
"text":"Something happened downstream...",
"timestamp":"2023-12-04T12:35:12Z"
}
]
}
{
"statusCode":200,
"statusMessage":"OK"
"results":[
{
"fieldName":"thing1",
"intVal":1
},
{
"fieldName":"thing1",
"intVal":1
}
]
}
TBD - be sure to mention "The Stream / Push Model shall be implemented only on fanout exchanges."
Partner teams are welcome to implement their own solutions, provided that those solutions adhere to the information presented above. If there are any questions or concerns, contact the VRO team.
Partner teams implementing their application in Python using the Request / Response Model can follow this strategy by utilizing the hoppy library for asynchronous request/response patterns, with a client that has configurable RabbitMQ connection parameters, retry and timeout policies, queue and exchange declarations, and more!