Health data assessment containers
The code for health data assessment and analysis is in the python-service folder. Each folder is designed to run inside its own Docker container and communicate with other parts of the application using RabbitMQ. In general, the assessment services loop through a Veteran's health data (medications, observations, conditions, procedures, etc.) and pull out anything relevant to a claimed contention ("relevant" in this case means anything that may help an RVSR rate a claim). The assessment services also provide summary statistics as metadata that gets stored in the DB. Refer to Plan to Deploy for more general information on the "Queue-Processor" architecture.
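As a rough sketch of the assessment pass described above (the record fields, relevance criteria, and summary keys here are illustrative assumptions, not the actual VRO implementation):

```python
# Hypothetical sketch of one assessment pass: filter a Veteran's health
# records down to entries relevant to a contention and compute summary
# statistics. Field names and the keyword set are illustrative only.
RELEVANT_TERMS = {"hypertension", "hypertensive"}  # illustrative keyword set

def assess(records):
    relevant = [
        r for r in records
        if any(term in r.get("text", "").lower() for term in RELEVANT_TERMS)
    ]
    summary = {"totalRecords": len(records), "relevantCount": len(relevant)}
    return {"evidence": relevant, "evidenceSummary": summary}

records = [
    {"text": "Essential hypertension", "date": "2021-03-01"},
    {"text": "Seasonal allergies", "date": "2020-07-14"},
]
result = assess(records)
```

In the real services, the relevance criteria are condition-specific (codesets, value thresholds, date windows) rather than keyword matching.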
This container handles all claims with contentions for hypertension (VASRD code 7101). The container sets up two queues: health-assess.hypertension for v1 functionality and health-sufficiency-assess.hypertension for v2. The v2 queue is named this way because it returns a flag describing the sufficiency of evidence for a given claim (the logic is described in 905). The health-sufficiency-assess.hypertension queue leads to a function designed to handle combined data from MAS and Lighthouse in a robust fashion. The assessment service also formats dates for the PDF, since it already parses dates as part of its core functionality. Medications, blood pressure readings, and conditions are sorted by date before being returned as an evidence object. Blood pressure readings without a parseable date are not considered algorithmically, because there is no feasible way to determine whether they meet the date requirements (short of a lot of error-prone custom string parsing, and the vast majority of data flowing through the VRO will have parseable dates). Medications and conditions without a date are appended to the end of the sorted list of dated objects.
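A minimal sketch of that sorting behavior, assuming ISO-formatted date strings and illustrative field names (this is not the actual service code):

```python
from datetime import datetime

def parse_date(value):
    """Return a datetime for an ISO 'YYYY-MM-DD' string, or None if unparseable."""
    try:
        return datetime.strptime(value, "%Y-%m-%d")
    except (TypeError, ValueError):
        return None

def sort_with_undated_last(entries):
    """Sort entries chronologically; entries without a parseable date go last."""
    dated = [e for e in entries if parse_date(e.get("date"))]
    undated = [e for e in entries if not parse_date(e.get("date"))]
    dated.sort(key=lambda e: parse_date(e["date"]))
    return dated + undated

medications = [
    {"description": "Benazepril", "date": "2021-04-06"},
    {"description": "Amlodipine", "date": None},
    {"description": "Lisinopril", "date": "2019-11-23"},
]
ordered = sort_with_undated_last(medications)
# ordered: Lisinopril (2019), then Benazepril (2021), then the undated Amlodipine
```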
Some keys in the request message are validated before any algorithms run, to alleviate the need to constantly catch errors. Some data cleaning is assumed to be done by the upstream data-collection services. To accommodate both MAS and Lighthouse as data sources, most dates are not assumed to always be in the correct format; for example, some primary dates from MAS could be missing from OCR data, in which case a secondary, or "partial", date is used instead. For the hypertension queues, bp_readings is required to be present, and each blood pressure reading must contain a diastolic and a systolic value. The Cerberus Python package is used as a lightweight validator.
To determine sufficiency, a collection of algorithms analyzes the patient data to reach a few decision points. The patient health data includes diagnosis information as codeable concepts in ICD or SNOMED. The ICD-10 codes {"I10", "401.0", "401.1", "401.9"} are used to filter the complete patient record down to just the relevant objects[^1]. In addition, blood pressure measurements are filtered by date and value.
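A sketch of that code-based filtering step, using the code set cited above (the record structure is an illustrative simplification of the FHIR condition data, not the real shape):

```python
# ICD-10 codes from the set cited in the text; record structure is an
# illustrative simplification of the FHIR condition resources.
HYPERTENSION_CODES = {"I10", "401.0", "401.1", "401.9"}

def filter_relevant_conditions(conditions):
    return [c for c in conditions if c.get("code") in HYPERTENSION_CODES]

conditions = [
    {"code": "I10", "text": "Essential (primary) hypertension"},
    {"code": "401.9", "text": "Unspecified essential hypertension"},
    {"code": "J45.909", "text": "Unspecified asthma"},
]
relevant = filter_relevant_conditions(conditions)
# keeps the I10 and 401.9 records, drops the asthma record
```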
More information on the logic to determine sufficiency can be found in 905.
This container handles claims with contentions for asthma (VASRD code 6602). On startup it creates a queue, health-assess.asthma, which is only used by v1 endpoints.
The other assessment folders are included in the Gradle build only if it is given a special property and the Docker Compose profile is set to prototype. Use the following commands to build and run those containers:
```sh
./gradlew -PenablePrototype build check docker
export COMPOSE_PROFILES=prototype
```
There is unit testing for each assessment service in service-python/tests/assessclaim.
To test additional conditions, the VASRD code needs to be added to the dpToDomains object in svc-lighthouse-api/src/main/java/gov/va/vro/abddataaccess/service/FhirClient.java. The dpToDomains map determines which resources are pulled from the Lighthouse Patient Health API and later sent to the assessment services.
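As a purely hypothetical illustration of the shape of that mapping (the real dpToDomains is Java code in FhirClient.java, and the actual domain lists per code may differ):

```python
# Hypothetical Python analogue of dpToDomains: each VASRD diagnostic code
# maps to the FHIR resource domains to request from the Lighthouse Patient
# Health API. Contents here are illustrative, not the real map.
DP_TO_DOMAINS = {
    "7101": ["MedicationRequest", "Observation", "Condition"],  # hypertension
    "6602": ["MedicationRequest", "Condition"],                 # asthma
}

domains_for_hypertension = DP_TO_DOMAINS["7101"]
```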
[^1]: This set of ICD-10 codes is located in assessclaim/src/lib/codesets/.