Contact Details
[email protected]
URL/DOI
N/A
Data License Identifier
N/A
Data Location
s3://maap-ops-workspace/shared/gsfc_landslides/FEDSoutput-s3-conus/
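As a quick freshness check on this location, the newest objects under the prefix can be listed with boto3. A minimal sketch, assuming AWS credentials with read access to the bucket:

```python
# Sketch: list the newest objects under the FEDS output prefix to confirm
# the pipeline is still writing. Assumes AWS credentials with read access.
import boto3

BUCKET = "maap-ops-workspace"
PREFIX = "shared/gsfc_landslides/FEDSoutput-s3-conus/"

s3 = boto3.client("s3")
objects = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    objects.extend(page.get("Contents", []))

# Print the five most recently modified keys.
for obj in sorted(objects, key=lambda o: o["LastModified"], reverse=True)[:5]:
    print(obj["LastModified"], obj["Key"])
```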
Size Estimate
N/A
Number of Items
N/A
Description
These are the EIS Fire layers being exported to the API; see the quickstart at https://nasa-impact.github.io/veda-docs/notebooks/quickstarts/wfs.html
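For reference, a minimal read of one of these layers through an OGC API - Features request, in the spirit of the linked quickstart. The endpoint and collection id below are placeholders; take the real values from the notebook:

```python
# Sketch: fetch a handful of features from the API (OGC API - Features).
# FEATURES_API and COLLECTION are placeholders -- take the real values from
# the wfs quickstart notebook linked above.
import requests

FEATURES_API = "https://example-features-api"  # hypothetical endpoint
COLLECTION = "public.eis_fire_perimeter"       # hypothetical collection id

resp = requests.get(
    f"{FEATURES_API}/collections/{COLLECTION}/items",
    params={"limit": 5},
    timeout=30,
)
resp.raise_for_status()
for feature in resp.json()["features"]:
    print(feature["id"], list(feature["properties"])[:5])
```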
Collection Creation Notebook
N/A
Item Creation Notebook
N/A
Checklist
rio cogeo validate
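If it helps, the same check the checklist's rio cogeo validate command performs can also be run from Python via rio-cogeo's API. Note this only applies to raster (COG) assets, and the path below is a placeholder:

```python
# Sketch: the checklist's `rio cogeo validate` step, run via rio-cogeo's
# Python API instead of the CLI. Only relevant for raster (COG) assets;
# the file path is a placeholder.
from rio_cogeo.cogeo import cog_validate

is_valid, errors, warnings = cog_validate("example_asset.tif")
print("valid:", is_valid)
print("errors:", errors)
print("warnings:", warnings)
```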
Any additional info you think is relevant, possibly including spatial or temporal subset if applicable?
I am looking for more understanding of, and control over, what data gets fed into the API. Right now I am a little hazy on the workflow, and I want some redundancy with @ranchodeluxe, who has been doing all the API support up until now. These are the situations that come up in which I would like enough tooling/agency over the ingest to respond:
Situation 1: "The API isn't updating -- why?"
This has come up a few times. I can help check on the data generation on the back end, but I don't have enough transparency to check on any of the issues that crop up after the data get generated. This is usually time sensitive, because we only catch it when we go look in the data for some big fire.
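One concrete check that would help with this situation is asking the API for the newest feature it is serving and comparing that against the newest output on S3. A sketch, where the endpoint, collection id, and datetime property name are all assumptions:

```python
# Sketch: "is the API stale?" -- find the newest datetime the API is serving.
# FEATURES_API, COLLECTION, and the "t" property are assumptions; compare the
# result against the newest file under the S3 prefix listed above.
from datetime import datetime, timedelta, timezone

import requests

FEATURES_API = "https://example-features-api"  # hypothetical endpoint
COLLECTION = "public.eis_fire_perimeter"       # hypothetical collection id

resp = requests.get(
    f"{FEATURES_API}/collections/{COLLECTION}/items",
    params={"limit": 1, "sortby": "-t"},  # assumes the API supports sortby
    timeout=30,
)
resp.raise_for_status()
latest = resp.json()["features"][0]["properties"]["t"]

# Parse the ISO timestamp; assume UTC if the value carries no timezone.
latest_dt = datetime.fromisoformat(latest.replace("Z", "+00:00"))
if latest_dt.tzinfo is None:
    latest_dt = latest_dt.replace(tzinfo=timezone.utc)

age = datetime.now(timezone.utc) - latest_dt
print(f"newest feature is {age} old ({latest})")
if age > timedelta(days=1):
    print("Looks stale -- the problem is likely after data generation.")
```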
Situation 2: "We need to add a new region to the API real fast"
If a new place starts having an extreme fire season, we try to point the algorithm there. We have gotten a lot of help with spinning up our algorithm in new places, but I (at least) am still in the dark about how to then get that data into the API as a new feature layer. This is also usually time sensitive, because we are trying to spin up measurements of the fire season as it is happening.
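To make this ask concrete, the sketch below lists the kind of information the ingest side would presumably need for a new region. Everything in it (endpoint, payload shape, paths) is hypothetical; the real VEDA mechanism may be a config file or an Airflow DAG rather than an HTTP call:

```python
# Sketch: the information a new-region feature layer would presumably need to
# reach the ingest side. The endpoint and payload shape are hypothetical --
# the real VEDA mechanism may be a config file or DAG, not an HTTP call.
import requests

new_layer = {
    "collection": "public.eis_fire_perimeter_newregion",  # hypothetical id
    "source": (
        "s3://maap-ops-workspace/shared/gsfc_landslides/"
        "FEDSoutput-s3-newregion/"  # hypothetical output prefix
    ),
    "geometry_type": "Polygon",
    "datetime_property": "t",  # assumed datetime column
}

resp = requests.post(
    "https://example-ingest-api/register",  # hypothetical endpoint
    json=new_layer,
    timeout=30,
)
resp.raise_for_status()
print("registered:", new_layer["collection"])
```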
Situation 3: "We want to label and organize the data differently before we export it to the public"
There are some columns that make sense to keep around for researchers but are too much information for the API. Also, we are working with more public-facing systems now (FIRMS) and may need to tweak our column names, or generate new columns, now that we are working with a different community. We could use more opportunities to manage how the data change between data generation and the API.
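This step is straightforward to prototype independent of where it ends up in the pipeline: a small pre-export pass that drops researcher-only columns and renames the rest. A sketch with GeoPandas; all column names are made up for illustration:

```python
# Sketch: curate columns between data generation and API export.
# All column names here are made up -- substitute the real FEDS schema.
import geopandas as gpd

gdf = gpd.read_file("feds_output.gpkg")  # hypothetical local copy of one output

# Drop columns that matter to researchers but are too much detail for the API.
gdf = gdf.drop(columns=["internal_qc_flag", "pixel_ids"], errors="ignore")

# Rename to the vocabulary a public-facing audience (e.g. FIRMS) expects.
gdf = gdf.rename(columns={"t": "datetime", "farea": "fire_area_km2"})

gdf.to_file("feds_output_public.geojson", driver="GeoJSON")
```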
Tagging @eorland (who is also interested in this) and @smohiudd.
To Do
@mccabete: Can you set up a time today or tomorrow with me (and maybe one intern)? Then we can pair on doing this work in v2 and v3. This should help flesh out your questions above, and we can backfill any existing documentation as part of this.