
Automate update of metrics #42

Open
clyne opened this issue Jul 28, 2022 · 13 comments

clyne (Collaborator) commented Jul 28, 2022

Metrics for the landing page are currently generated manually; they should be automated where possible. In particular, the number of publications and the number of downloads should be automated.
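For illustration, the automation could be as small as a script that regenerates a metrics data file the landing page reads; a scheduled job would then commit the result. A minimal sketch in Python, where the data/metrics.yml path and its keys are placeholders rather than files in this repository:

```python
# update_metrics.py -- hypothetical helper that rewrites the metrics data
# file read by the landing page, so a scheduled job can keep it current.
import yaml  # PyYAML

def write_metrics(path: str, publications: int, downloads: int) -> None:
    """Overwrite the metrics file with freshly computed numbers."""
    with open(path, "w") as f:
        yaml.safe_dump({"publications": publications, "downloads": downloads}, f)

if __name__ == "__main__":
    # The real counts would come from the scrapers discussed below.
    write_metrics("data/metrics.yml", publications=0, downloads=0)
```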

clyne (Collaborator, Author) commented Aug 16, 2022

see also #12

jukent (Collaborator) commented Aug 16, 2022

Hi @clyne, can I have elevated permissions on this repository so I can add a PAT to the settings for the GitHub Actions needed to do this? Or could you add a PAT yourself?
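For reference, once the PAT is stored as a repository secret, a workflow can use it in place of the default GITHUB_TOKEN so that its pushes can trigger downstream workflows such as a site rebuild. A sketch, where the workflow path, the secret name METRICS_PAT, and the update_metrics.py script are all placeholders:

```yaml
# .github/workflows/update-metrics.yml -- illustrative sketch; the secret
# METRICS_PAT would be added under Settings > Secrets and variables > Actions.
name: Update metrics
on:
  workflow_dispatch:
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          token: ${{ secrets.METRICS_PAT }}  # PAT instead of GITHUB_TOKEN
      - run: python update_metrics.py  # hypothetical update script
      - run: |
          git config user.name "github-actions"
          git config user.email "github-actions@github.com"
          git commit -am "Update metrics" || echo "No changes"
          git push
```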

clyne (Collaborator, Author) commented Aug 16, 2022

done

jukent (Collaborator) commented Aug 22, 2022

I have this working for publications now!!

I reached out to Virginia to get the SIParCS info (#30) which I can then update similarly.

However, I'm not sure how to do the downloads metrics. @clyne Could you post some relevant links to where that info might be? Do we want the total downloads of every VAST package?

clyne (Collaborator, Author) commented Aug 22, 2022

> I have this working for publications now!!

Awesome!!

> I reached out to Virginia to get the SIParCS info (#30) which I can then update similarly.
>
> However, I'm not sure how to do the downloads metrics. @clyne Could you post some relevant links to where that info might be? Do we want the total downloads of every VAST package?

Hmm. That is a question for @erogluorhan. Orhan, is it possible to script the collection of conda download metrics here?

erogluorhan (Collaborator) commented:

> Hmm. That is a question for @erogluorhan. Orhan, is it possible to script the collection of conda download metrics here?

Hi @jukent, we have https://github.com/NCAR/geocat_conda_metrics/blob/main/scrape_metrics.py for our own scraping purposes, covering several GeoCAT tools distributed in various channels on Conda. I believe a similar scraper would work to read metrics for VAST tools. Please have a look at it and let me know if you have any questions.
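For anyone following along, the core of such a scraper is a call to the public anaconda.org web API, which reports per-file download counts for each package. A minimal sketch, where the channel and package names are placeholders:

```python
# Sketch of a conda download-count scraper, in the spirit of
# NCAR/geocat_conda_metrics/scrape_metrics.py.
import requests

def conda_downloads(channel: str, package: str) -> int:
    """Sum download counts across all files of a package on anaconda.org."""
    resp = requests.get(
        f"https://api.anaconda.org/package/{channel}/{package}", timeout=30
    )
    resp.raise_for_status()
    # Each entry in "files" carries an "ndownloads" counter.
    return sum(f.get("ndownloads", 0) for f in resp.json().get("files", []))

if __name__ == "__main__":
    print(conda_downloads("ncar", "geocat-comp"))  # placeholder names
```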

jukent (Collaborator) commented Aug 23, 2022

Thanks, this helps a lot! I'll look more closely at it tomorrow.

jukent (Collaborator) commented Aug 23, 2022

@NihanthCW could you give me a list of any relevant VAPOR packages to include?

jukent (Collaborator) commented Aug 25, 2022

I created a YAML file of packages in data/packages.yml that just has the package name, team, and repository URL. Do we want to turn this into a viewable page as well, with a brief summary of each package, its current release number, maybe a logo, and a documentation link? If so, I will need information from the project leads.
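For context, a file with those three fields might look like the following; the entries here are illustrative, not the actual contents of data/packages.yml:

```yaml
# data/packages.yml -- illustrative structure
- name: geocat-comp
  team: GeoCAT
  repository: https://github.com/NCAR/geocat-comp
- name: vapor
  team: VAPOR
  repository: https://github.com/NCAR/VAPOR
```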

NihanthCW (Collaborator) commented:

> @NihanthCW could you give me a list of any relevant VAPOR packages to include?

I don't think we have any additional packages. The upcoming Python API could be one candidate to include once it is released. I'll keep you posted on that.

erogluorhan (Collaborator) commented:

Hi @jukent, I think the automation of scraping metrics is still a work in progress, but I'd like to offer a few suggestions FWIW:

  • In the GeoCAT case, we needed to record the daily scraped metrics in a database for future cases where we'd want to run analyses over date intervals. For the VAST webpage, however, I don't think you need to record anything in a database. Instead, a daily-running workflow that scrapes up-to-date metrics from Conda for each package and then updates the metrics numbers on the website would suffice (see the workflow sketch after this list). If this makes sense, you can remove the DB-related lines from the scraper Python file and elsewhere.
  • That being said, I don't think you need an on-demand scraping workflow either, since a daily automated workflow would be sufficient for the website.
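A daily workflow of the kind described above could be a single scheduled GitHub Actions job; a sketch, where the workflow path and the scripts/scrape_metrics.py location are placeholders:

```yaml
# .github/workflows/daily-metrics.yml -- illustrative sketch
name: Daily metrics scrape
on:
  schedule:
    - cron: "0 6 * * *"  # once a day at 06:00 UTC
jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install requests pyyaml
      # Hypothetical script: scrape Conda download counts and rewrite the
      # metrics file the website reads -- no database involved.
      - run: python scripts/scrape_metrics.py
      - run: |
          git config user.name "github-actions"
          git config user.email "github-actions@github.com"
          git commit -am "Daily metrics update" || echo "No changes"
          git push
```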

jukent (Collaborator) commented Aug 29, 2022

Thanks @erogluorhan, I believe I've done these steps. I'm going to work on the environment failure and will reach out if I hit a roadblock.

jukent (Collaborator) commented Sep 8, 2022

I was having issues with binstar-client. It looks like that functionality has now moved into anaconda-client, so I'm trying that now. Posting the documentation for reference:
https://github.com/Anaconda-Platform/anaconda-client/tree/master/binstar_client/utils
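For the record, the same package metadata can be fetched through anaconda-client's Python API rather than raw HTTP; a sketch, assuming get_server_api is still exposed from binstar_client.utils as in the linked tree, with placeholder owner and package names:

```python
# Fetching package info via anaconda-client (formerly binstar-client);
# assumes `pip install anaconda-client`.
from binstar_client.utils import get_server_api

api = get_server_api()  # anonymous access suffices for public packages
info = api.package("ncar", "geocat-comp")  # placeholder owner/package
# The returned metadata mirrors the web API, including per-file counts.
print(sum(f.get("ndownloads", 0) for f in info.get("files", [])))
```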
