
Fix Certifying Auditee field names in historical data #3402

Open · sambodeme opened this issue Feb 13, 2024 · 5 comments · May be fixed by #4418

sambodeme (Contributor) commented Feb 13, 2024

While preparing the data migration documentation, it was noted that AUDITEENAME was incorrectly used instead of AUDITEECERTIFYNAME, and AUDITEETITLE was used instead of AUDITEECERTIFYTITLE. It was determined that this will have a low impact on the disseminated reports as it does not affect the financial aspect of the audit reports and only affects the auditee certifying information. However, because this still introduces some data inaccuracies in the reports in production, it must be addressed.

Possible solutions:

  1. Correct the issue in the source records and re-run dissemination for all historic reports.
  2. Fix the issue on the dissemination side (general table) following a change record approach.
sambodeme added the eng label Feb 13, 2024
github-project-automation bot moved this to Triage in FAC Feb 13, 2024
danswick moved this from Triage to Available in FAC Apr 10, 2024
danswick changed the title from "Fix Certifying Auditee Info" to "Fix Certifying Auditee field names in historical data" Apr 10, 2024
danswick (Contributor) commented
My initial reaction is that option 1 seems preferable. Option 2 would be overwritten if we ever had to re-run dissemination for some other reason.

Another possible approach would be to handle this during intake-to-dissemination: check for both field names and pass the appropriate one on to the disseminated record.
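
A minimal sketch of that fallback, assuming the incoming record is a dict keyed by the Census column names (the function name is hypothetical):

```python
# Minimal sketch: prefer the certifying fields when present, and fall back
# to the plain auditee fields otherwise. Field access is illustrative, not
# the actual FAC intake schema.
def resolve_certifying_auditee(record: dict) -> tuple[str, str]:
    name = record.get("AUDITEECERTIFYNAME") or record.get("AUDITEENAME", "")
    title = record.get("AUDITEECERTIFYTITLE") or record.get("AUDITEETITLE", "")
    return name, title
```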

jadudm (Contributor) commented Apr 11, 2024

Option 1 is very expensive in time. Re-running a year of data on a single-core cloud.gov instance takes approximately two weeks.

A third option:

  1. Bring the singleauditreport records in to the production tables.
  2. Add administrative API coverage for that table.
  3. Add an administrative API endpoint that allows for updating records (with appropriate change-management tracking, per @sambodeme's comment).
  4. Add the ability to re-run dissemination on a record via API.

We could, in this way, begin doing data curation via API, eliminating some of the challenges of trying to do all of this as GH Actions.
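
To make the shape of this concrete, here is a rough sketch of what such a stand-alone curation tool might look like. The base URL, endpoints, payloads, and header are all assumptions for illustration; none of this API exists yet:

```python
# Hypothetical sketch of a stand-alone curation tool driving an admin API.
# The URL, endpoints, and payload are illustrative assumptions only.
import os

import requests

API_BASE = "https://fac-admin.example.gov/api"  # placeholder, not the real FAC API
WRITE_KEY = os.environ["CURATION_WRITE_KEY"]    # a "blessed" write key

def update_and_redisseminate(report_id: str, fixes: dict) -> None:
    headers = {"Authorization": f"Bearer {WRITE_KEY}"}
    # Update the source-of-truth record; change tracking happens server-side.
    resp = requests.patch(
        f"{API_BASE}/singleauditchecklist/{report_id}",
        json=fixes,
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    # Re-run dissemination for just this record.
    resp = requests.post(f"{API_BASE}/disseminate/{report_id}", headers=headers, timeout=30)
    resp.raise_for_status()
```

Tools in this shape could be tested against lower environments and then pointed at production with a write key, which is the point of item 6 below.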

I've just suggested an entirely new idea that needs discussion, I think. I'm wrestling with/thinking this way because:

  1. We need a "source of truth" somewhere, which is our singleauditchecklist. We want the change made there, to avoid the problem @danswick pointed out.
  2. We have a difficult time working with production data, because of our controls. (This is a good thing, but it is something that forces us to do work in lower environments and then manually move data in.)
  3. We have controls built into the admin API that could let us "bless" one or more keys as "write" keys, and in doing so, limit the possibility of "accidents" against the source data.
  4. The API can provide the logging and change tracking we need.
  5. We have no easy way to re-disseminate a record, even though it should be a cheap and easy operation.
  6. We have multiple other (similar) issues we need to address, and we need to make data curation less expensive. The API moves data curation into the space of stand-alone Python (or, any language) tools that we can all test and validate against lower environments, and run with confidence against production.

Probably a few other things. I have no idea how this would play with the existing migration tooling... it might be a non-starter. But, for ongoing curation work, this might be worth discussing?

danswick (Contributor) commented
We'll tackle this as part of the next batch of curation work.

sambodeme self-assigned this Oct 22, 2024
sambodeme moved this from Backlog to In Progress in FAC Oct 23, 2024
sambodeme linked a pull request Oct 25, 2024 that will close this issue
sambodeme (Contributor, Author) commented
As I began reviewing the code and scaffolding the necessary logic to fix the auditee name and title (see ticket #3402), I realized that data curation might be needed to address issues with historical records migrated from Census data. This could happen for various reasons, including bugs in the migration algorithm that were not identified at the time and are now surfacing (or may surface in the future). Additionally, there might be a need to update records in the FAC databases regardless of their origin; this typically occurs when the FAC team modifies intake validation rules, leaving existing records that no longer validate against the new rules without updates.

When data curation involves historical records, fixing these issues will often require accessing raw data from the historical Census records table and reusing logic from the census_historical_migration app. The reason for reusing this logic is to maintain consistency, such as the way we handled missing values during migration by replacing them with the GSA_MIGRATION placeholder.
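
For example, a curation helper would want to normalize missing values exactly the way the migration did. A minimal sketch, assuming the placeholder is the literal string GSA_MIGRATION (the helper name is hypothetical):

```python
# Minimal sketch of the consistency concern: missing values should get the
# same GSA_MIGRATION placeholder the census_historical_migration app used.
GSA_MIGRATION = "GSA_MIGRATION"

def normalize_historical_value(raw_value: str | None) -> str:
    # Mirror the migration's handling of missing values: substitute the
    # placeholder rather than leaving the field blank.
    if raw_value is None or not raw_value.strip():
        return GSA_MIGRATION
    return raw_value.strip()
```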

This situation raises questions about how and where to organize the data curation code. Should we create a new app (data-curation) within the Django project and consolidate all data curation work there? This approach has the advantage of centralizing all data curation efforts in one place but may lead to the new app becoming too dependent on others, such as the census_historical_migration or audit apps.

Alternatively, should we include a curation section within each app (one for the census_historical_migration app and one for the audit app)? This approach would make the apps more self-contained and loosely coupled, reducing dependencies between them. However, it also means the curation logic would be spread across multiple apps.

jadudm (Contributor) commented Oct 28, 2024

Thinking out loud...

The historical_migration code assumes:

  1. The data coming in is formatted as a submission, and
  2. It will need either transformation or permission to go through "incorrectly."

It would be heavyweight, but could all curation be implemented as

  1. Generate a new submission with the updated/fixed data,
  2. Pass it through the migration code

Such that the migration code is, for all intents and purposes, the only place we do this work?

(This is a third option. I haven't thought about how odd or heavyweight it might turn out to be.)
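
In outline, that option might look like the following sketch, where every helper is hypothetical; the point is only the shape of the flow:

```python
# Sketch of the "re-submit through migration" option. All helpers here are
# assumed, not real FAC functions.
def curate_via_migration(report_id: str, fixes: dict) -> None:
    submission = build_submission_from_record(report_id)  # hypothetical helper
    submission.update(fixes)                # apply the corrected fields
    run_historical_migration(submission)    # assumed migration entry point
```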

My intuition/assumptions so far have been that having a curation app would be the way to go.

For this particular issue, I've been assuming a management command would have access to both the sac and the census_historical tables. Therefore, the operation is basically:

  1. Loop:
     1. Visit each sac.
     2. Get the correct names from the historical tables.
     3. With curation_tracking:
        1. Make the change in the sac.
        2. Save the sac.
     4. Re-disseminate the record.

That is, I've been assuming we have 1) the current record and 2) the historical record in hand for all curation work, and therefore each action looks more like a management command that is probably only run once?
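
A rough sketch of such a one-off management command, assuming model names, a curation_tracking context manager, and a re-dissemination hook that may not match the actual FAC codebase:

```python
# Rough sketch of the one-off management command described above. The model
# names, query, curation_tracking context manager, and redisseminate() hook
# are assumptions based on this thread, not the actual FAC codebase.
from contextlib import contextmanager

from django.core.management.base import BaseCommand


@contextmanager
def curation_tracking(sac, reason):
    # Stub: the real version would write a change-tracking record.
    yield


class Command(BaseCommand):
    help = "One-off: fix certifying auditee name/title on migrated records."

    def handle(self, *args, **options):
        # Import paths are assumptions about the FAC codebase.
        from audit.models import SingleAuditChecklist
        from census_historical_migration.models import CensusGen

        migrated = SingleAuditChecklist.objects.filter(data_source="CENSUS")  # assumed marker
        for sac in migrated:
            hist = CensusGen.objects.get(report_id=sac.report_id)
            with curation_tracking(sac, reason="Fix #3402"):
                # Use the *certify* columns, not AUDITEENAME/AUDITEETITLE.
                sac.auditee_certify_name = hist.AUDITEECERTIFYNAME
                sac.auditee_certify_title = hist.AUDITEECERTIFYTITLE
                sac.save()
            redisseminate(sac)  # hypothetical re-dissemination hook
```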
