MuckRock scraper enhancements #105

Open · 6 tasks

josh-chamberlain opened this issue Nov 13, 2024 · 1 comment

Comments

@josh-chamberlain (Contributor) commented Nov 13, 2024

Context

Requirements

  • The biggest hurdle for import was handling agencies that weren't found. This is related to "Endpoint to match agencies for submission" (data-sources-app#550).
    • More of the agencies might have matched if the full state name were replaced with the two-letter state code ("Massachusetts" → "MA"); see the first sketch after this list.
  • We should try to assign a record_type. For the use of force CSV, some of the sources were Use of Force Reports and some were Policies & Contracts. I set everything with "policy" or "policies" in the name to Policies & Contracts and everything else to Use of Force Reports.
    • Because policies are general and may apply to many record types, my idea is that we check for the presence of "policy" / "policies", then fall back to the record type we searched for (sketched after this list).
  • I added the generic description "A completed MuckRock records request." to all of them, but we can probably do slightly better. Descriptions can be brief and should just aim to sum up the source (see the templating sketch after this list).
  • Next time we run this, we're going to get duplicates! We should filter out already-submitted sources with the unique URL endpoint: https://data-sources-v2.pdap.dev/api/check/unique-url
    • For speed and efficiency of API usage, we could also keep a log of URLs we've collected in the past in the repo (sketched after this list).
  • We could also include an optional submitter_contact_info property containing an email; that way, people get credit for their work! If we use automation in the future, we can use something like [email protected] so we have that kind of clue.
  • The CSV could have a simpler schema (see the DictWriter sketch after this list):
    • name
    • agency_described
    • record_type
    • description
    • source_url
    • readme_url
    • agency_supplied
    • supplying_entity
    • agency_originated
    • originating_agency
    • access_type
    • data_portal_type
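
For the state-name matching point, a minimal sketch of the normalization step. The `normalize_state` name is hypothetical, and the third-party `us` package is just one convenient option; a hand-rolled dict of the 50 states plus DC would work equally well:

```python
# pip install us  -- small third-party helper for US state lookups;
# a hard-coded {"Massachusetts": "MA", ...} dict is a fine substitute.
import us


def normalize_state(value: str) -> str:
    """Replace a full state name with its two-letter code ("Massachusetts" -> "MA").

    Values that already look like codes, or that don't match any state,
    are passed through unchanged so no data is lost.
    """
    state = us.states.lookup(value.strip())
    return state.abbr if state else value.strip()


assert normalize_state("Massachusetts") == "MA"
```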
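A sketch of the record_type heuristic described above; the function name and the `searched_record_type` parameter are illustrative:

```python
import re


def infer_record_type(name: str, searched_record_type: str) -> str:
    """Anything with "policy"/"policies" in the name is a policy document;
    everything else falls back to the record type we searched MuckRock for.
    """
    if re.search(r"\bpolic(y|ies)\b", name, re.IGNORECASE):
        return "Policies & Contracts"
    return searched_record_type


assert infer_record_type("Use of Force Policy 2021", "Use of Force Reports") == "Policies & Contracts"
assert infer_record_type("UOF incident log", "Use of Force Reports") == "Use of Force Reports"
```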
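For the descriptions, one low-effort improvement over the fixed string is a short template that folds in what we already know about each request; `build_description` is a hypothetical helper:

```python
def build_description(agency_name: str, record_type: str) -> str:
    """Brief, sums up the source, and still cheap to generate."""
    return (
        f"A completed MuckRock records request to {agency_name}, "
        f"filed here as {record_type}."
    )
```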
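For duplicate filtering, a sketch that checks a repo-tracked URL log before hitting the API. The request/response shape of the unique-url endpoint (query parameter name, `duplicates` key) is assumed here and should be confirmed against the data-sources-app API docs:

```python
import json
from pathlib import Path

import requests

CHECK_URL = "https://data-sources-v2.pdap.dev/api/check/unique-url"
SEEN_URLS_LOG = Path("seen_urls.json")  # hypothetical log file tracked in the repo


def load_seen_urls() -> set:
    return set(json.loads(SEEN_URLS_LOG.read_text())) if SEEN_URLS_LOG.exists() else set()


def is_new_url(url: str, seen: set) -> bool:
    """Cheap local check first, then the authoritative API check."""
    if url in seen:
        return False
    # Assumed contract: GET with a `url` query parameter, JSON body with a
    # list of matching sources under "duplicates" -- verify against the API.
    response = requests.get(CHECK_URL, params={"url": url}, timeout=30)
    response.raise_for_status()
    return not response.json().get("duplicates", [])


def save_seen_urls(seen: set) -> None:
    SEEN_URLS_LOG.write_text(json.dumps(sorted(seen), indent=2))
```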
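And a sketch of writing rows under the simplified schema with `csv.DictWriter` (file name illustrative):

```python
import csv

FIELDNAMES = [
    "name", "agency_described", "record_type", "description",
    "source_url", "readme_url", "agency_supplied", "supplying_entity",
    "agency_originated", "originating_agency", "access_type",
    "data_portal_type",
]


def write_submissions(rows, path="submissions.csv"):
    """Write scraped sources using the simplified schema above.

    DictWriter's default extrasaction="raise" means any row carrying an
    unexpected key fails immediately, which catches schema drift early.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        writer.writeheader()
        writer.writerows(rows)
```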
josh-chamberlain transferred this issue from Police-Data-Accessibility-Project/scrapers Nov 13, 2024
@maxachis (Collaborator) commented:

My first priority with this is to do some refactoring, both to clean up the scraper and to give myself a better understanding of the ins and outs of its logic.
