
Check dev experience on starting an issue #26

Open
rndquu opened this issue Jun 28, 2024 · 49 comments

@rndquu
Member

rndquu commented Jun 28, 2024

There are certain kinds of tasks which must be completed only by experienced developers.

Check the following issues for example:

It would be great to know that the collaborator solving the above issues has prior experience with Solidity and Ethereum.

A possible solution could be to use ChatGPT to parse a collaborator's GitHub or CV to make sure a contributor is experienced enough to /start an issue.

@0x4007
Member

0x4007 commented Jun 29, 2024

github or CV

Toss out CVs; I never believe them. GitHub, though, has proof of work. I am a big fan of using GitHub for this purpose.

make sure a contributor is experienced enough to /start an issue.

How can you determine this?

  1. Number of repositories containing Solidity.
  2. Number of commits containing Solidity lines of code.
  3. Number of Solidity files committed.
  4. Number of contracts deployed from their address on chain.

In theory we can make configs for all of these, but it would be optimal to choose a single strategy and focus on that for this plugin.
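For illustration, here is a hypothetical per-language config shape covering the four signals above; the field names and example values are assumptions, not an agreed schema:

```typescript
// Hypothetical per-language experience thresholds; names/values are illustrative only.
interface ExperienceThresholds {
  minReposWithLanguage?: number;   // 1. repositories containing the language
  minCommitsWithLanguage?: number; // 2. commits containing lines in the language
  minFilesCommitted?: number;      // 3. files of the language committed
  minContractsDeployed?: number;   // 4. contracts deployed from their address on chain
}

// Keyed by language name so other languages can be gated the same way.
type ExperienceConfig = Record<string, ExperienceThresholds>;

const exampleConfig: ExperienceConfig = {
  Solidity: { minCommitsWithLanguage: 250, minContractsDeployed: 1 },
};
```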

@rndquu
Member Author

rndquu commented Jun 29, 2024

Toss out CVs; I never believe them. GitHub, though, has proof of work. I am a big fan of using GitHub for this purpose.

Actually, feeding a CV into ChatGPT is much easier than parsing a contributor's GitHub, where, as you've already mentioned, we would have to count the number of Solidity commits, etc...

@0x4007
Member

0x4007 commented Jun 29, 2024

People lie on CVs all the time. It's useless data compared to a portfolio of work.

@gentlementlegen
Member

I think within a CV you can write anything. GitHub is reliable, maybe we could try using the statistics? https://docs.github.com/en/rest/metrics/statistics?apiVersion=2022-11-28
I also think that LinkedIn could be a way, but this will require users to have a LinkedIn and agree to share that data.

@Keyrxng
Member

Keyrxng commented Jul 12, 2024

We could leverage Gitroll (there is a commercial version too), which basically creates a CV based on their GitHub history.

My profile scan

That was a public scan which does not include private contributions, you can log in and do a personal one for more info.

It's not very fast at completing scans, although once a scan has been completed you can search for it. So first see if a scan exists; if not, fire one (scanning my profile takes 3-5 minutes).

Double-checking the search function: it may have been removed. When I used this before there was no commercial version, so things have changed a bit.

Also, I don't know how valid it is, but it's a good point of reference if we implement something in-house:

Senior-level Full-Stack Developer
Overall Rating: A (8.69)
Above 97% of people
Top 12% in GB


https://github-readme-stats.vercel.app/api/top-langs/?username=keyrxng

We could fetch top-language stats from this endpoint and parse it for the Solidity percentage. It seems brittle, but I'd expect anyone with a meaningful amount of pushed Solidity code to show more than 15-20%.
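As a rough alternative sketch, the same percentage could be computed directly from the REST API rather than parsing the readme-stats card; octokit.rest.repos.listForUser and listLanguages are standard Octokit methods, while skipping forks and the byte-count aggregation are assumptions:

```typescript
import { Octokit } from "@octokit/rest";

// Rough sketch: a user's Solidity share across their own (non-fork) public repos.
async function solidityShare(octokit: Octokit, username: string): Promise<number> {
  const repos = await octokit.paginate(octokit.rest.repos.listForUser, {
    username,
    per_page: 100,
  });

  const totals: Record<string, number> = {};
  for (const repo of repos.filter((r) => !r.fork)) {
    // Byte counts per language, as reported by GitHub for each repository.
    const { data: languages } = await octokit.rest.repos.listLanguages({
      owner: repo.owner.login,
      repo: repo.name,
    });
    for (const [lang, bytes] of Object.entries(languages)) {
      totals[lang] = (totals[lang] ?? 0) + bytes;
    }
  }

  const allBytes = Object.values(totals).reduce((a, b) => a + b, 0);
  return allBytes === 0 ? 0 : (100 * (totals["Solidity"] ?? 0)) / allBytes;
}
```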

@gentlementlegen
Member

Didn't know about Gitroll, it's pretty fun. The risk is that it takes into account forks and I know some people just keep forking repos for some reason, I don't know if it takes into account the lines that you actually wrote. But that could be a starting point which might save tons of development hours.

@Keyrxng
Member

Keyrxng commented Jul 12, 2024

Didn't know about Gitroll, it's pretty fun.

Surprised you missed it; it had 15 minutes of fame on Twitter and everyone was doing it, but that was some time ago.

The risk is that it takes into account forks and I know some people just keep forking repos for some reason, I don't know if it takes into account the lines that you actually wrote.

https://gitroll.io/profile/uOp67oGeYgBNu5MjHSCmHHoqY0qV2/repos

(35 detected · 10 scannable)

As far as I can tell it detects forks but does not count them as code you've authored. The repos you see in that link are only my repos so I think it only accounts for code you've authored in your own repos.

But that could be a starting point which might save tons of development hours.

But yeah, that's why I mentioned it. I doubt it suits our needs out of the box, but it's something to work towards maybe.

I know some people just keep forking repos

Ah, I see what you mean now: download the source code and push as their own, suddenly they are a 10x. I get you.

@Keyrxng
Member

Keyrxng commented Jul 12, 2024

Well either way,

download the source code and push as their own, suddenly they are a 10x

we only have the contributor's public contributions and repos available (unless the bot has private repo access for a user via the authToken?), which is susceptible to the same sort of manipulation.

If we only have public data, we are doing a disservice to eligible devs who push private web3 code, which is pretty commonplace for a lot of web3 projects (the blue chips are always public).


  1. Is this XP-Gating specifically for blockchain/web3 tasks, or is this the official XP-Gate plugin?
  2. Why isn't this an extension of the /start|stop command instead of being built as a standalone plugin?
  3. Would this later be refactored to use the real XP system based on earnings for XP-gating (or is this the XP system), or is that another plugin?

@Keyrxng
Member

Keyrxng commented Jul 27, 2024

/start


ubiquibot bot commented Jul 27, 2024

! Too many assigned issues, you have reached your max limit

@Keyrxng
Member

Keyrxng commented Jul 27, 2024

@gentlementlegen can you price this with a few hours? I've spent a couple on it so far and have a couple more to spend expanding the test suite and optimizing/refining things.

@0x4007
Member

0x4007 commented Jul 29, 2024

@gentlementlegen can you price this with a few hours? I've spent a couple on it so far and have a couple more to spend expanding the test suite and optimizing/refining things.

Seems like this is roughly a day task

@0x4007
Member

0x4007 commented Jul 29, 2024

@rndquu do you have any remarks on your vision and how granular the guarding will be? Ref: ubiquity-os-marketplace/command-start-stop#17 (comment)

I think labeling is nice to have because even our devpool.directory supports this right now. You can see a "UI/UX" and "Research" task.

I suppose that in the config, the partner should be able to associate an arbitrary label name to a type of code for the check. This is obviously restricted to coding tasks, but seems straightforward to implement if GitHub recognizes code types for the statistics.

(screenshot)

@rndquu
Member Author

rndquu commented Jul 30, 2024

@rndquu do you have any remarks on your vision and how granular the guarding will be? Ref: ubiquity-os-marketplace/command-start-stop#17 (comment)

Granular enough to reduce the number of false positives.

Not sure how @Keyrxng is going to implement this plugin but at first glance we could:

  1. Fetch original issue repository languages via https://docs.github.com/en/rest/repos/repos?apiVersion=2022-11-28#list-repository-languages (this eliminates the need for issue labels)
  2. Fetch languages from repositories where a contributor once committed
  3. Match them using either GPT or some heuristics that make sense (a sketch of one such heuristic follows below)
(screenshot)

I don't think the first version of this plugin must be super accurate, but it must be accurate enough to allow assigning this issue only to somebody with Solidity experience.
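A minimal sketch of one way step 3 could look, assuming steps 1 and 2 have already produced language-to-bytes maps for the issue repository and for the contributor; the function name and the default threshold are illustrative, not agreed values:

```typescript
// Illustrative matching heuristic for step 3; threshold and naming are assumptions.
function hasRelevantExperience(
  issueRepoLanguages: Record<string, number>,
  contributorLanguages: Record<string, number>,
  minSharePercent = 10
): boolean {
  const contributorTotal = Object.values(contributorLanguages).reduce((a, b) => a + b, 0);
  if (contributorTotal === 0) return false;

  // Take the issue repo's dominant language and require it to make up at least
  // `minSharePercent` of the contributor's own code.
  const sorted = Object.entries(issueRepoLanguages).sort((a, b) => b[1] - a[1]);
  if (sorted.length === 0) return true; // no language data on the repo; don't block

  const [dominantLanguage] = sorted[0];
  const share = (100 * (contributorLanguages[dominantLanguage] ?? 0)) / contributorTotal;
  return share >= minSharePercent;
}
```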

@Keyrxng
Member

Keyrxng commented Jul 30, 2024

@rndquu the open PR implements things similarly but in reverse and meets the requirements of the spec; it just needs to be refined a little.

  1. Scan user code statistics (as seen in my README), not the repo
  2. Have a repo-config-defined item mostImportantLanguage: {Solidity: 25} with a threshold. Since V1 aims primarily to restrict Solidity tasks, this works fine, as only a couple of Solidity-heavy repos exist.

This approach gives us configurability at the repo level but not the org level, although a default baseline could be set (age > 1 year && ts > 5, etc.); a possible config shape is sketched below.
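For illustration, a hypothetical shape for that config item and the optional baseline (the real config would live in YAML; the field names here are assumptions):

```typescript
// Hypothetical config shape; the real plugin config is YAML, this only shows fields/units.
interface XpGuardConfig {
  // Language name -> minimum share (1-100) of the user's code statistics.
  mostImportantLanguage: Record<string, number>;
  // Optional default baseline applied when no repo-level entry exists.
  baseline?: {
    minAccountAgeYears?: number;               // e.g. age > 1 year
    minLanguageShare?: Record<string, number>; // e.g. { TypeScript: 5 }
  };
}

const example: XpGuardConfig = {
  mostImportantLanguage: { Solidity: 25 },
  baseline: { minAccountAgeYears: 1, minLanguageShare: { TypeScript: 5 } },
};
```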


The reason I chose not to use the repo code stats for V1 is because of situations like the one in the screenshot below. It's not a problem for the Solidity repos, but it will be a problem in others and will need to be addressed specifically, as off the top of my head I don't know how we'd handle it elegantly.

(screenshot)


Overall, I think the current PR meets the requirements for V1. V2 will likely revolve around the real XP system, which would probably follow the same mapped config setup as labels/tags, so it should be thought out and tasked separately I think.

@gentlementlegen
Member

@Keyrxng What unit is it exactly when you say Solidity: 25 for example?

@Keyrxng
Member

Keyrxng commented Jul 31, 2024

@Keyrxng What unit is it exactly when you say Solidity: 25 for example?

1-100, maybe it can be made clearer what sort of threshold it is.

rndquu's language stats: (screenshot)

molecula451's: (screenshot)

gitcoindev's: (screenshot)

0x4007's: (screenshot)

Mine: (screenshot)


I had only looked at mine and @rndquu's stats in QA before, but looking over more of them makes me doubt that this is going to be as effective as I first thought. V1 will probably need to do some manual user repo parsing as well.

Since newcomers won't have any tasks to compare against, I think that if it's possible to list open/merged user PRs that are Solidity-based (via octokit.search), that would be a good indicator if any exist. Failing that, we check their own repos... maybe count how many commits they actually authored.

Any thoughts?

@gentlementlegen
Member

gentlementlegen commented Jul 31, 2024

This seems to be the percentage of a certain language you are mostly using across your repos, isn't it? Which means a beginner who only did one project in TypeScript would get a TypeScript: 100? Or did I misunderstand those numbers.

@Keyrxng
Member

Keyrxng commented Jul 31, 2024

This seems to be the percentage of a certain language you are mostly using across your repos, isn't it? Which means a beginner who only did one project in TypeScript would get a TypeScript: 100? Or did I misunderstand those numbers.

No, that sounds right AFAIK, and it's why I also included the other markers at first, as they'd help balance out that scenario and similar ones. It looks like this will need at least a little manual validation of the user's PRs/repos anyway, but without a concrete XP-based system we are up against it, as there are lots of ways to spoof your GitHub stats and data.

@0x4007
Member

0x4007 commented Jul 31, 2024

I think we should get all of the user's commits and determine which languages they are committing in. We can set hard limits for the "ranks". Example:

You need 1000 commits containing TypeScript code to be "pro rank" and 250 to be "intermediate" or "mid" rank.

We can do a ton of requests because we have a six hour runtime.
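A minimal sketch of those hard limits as a rank function; the thresholds are only the example numbers from this comment, not agreed values:

```typescript
// Maps a per-language commit count onto a rank using fixed thresholds.
type Rank = "none" | "intermediate" | "pro";

function rankFor(commitsWithLanguage: number): Rank {
  if (commitsWithLanguage >= 1000) return "pro";         // example: 1000 commits -> "pro"
  if (commitsWithLanguage >= 250) return "intermediate"; // example: 250 commits -> "intermediate"
  return "none";
}
```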

@Keyrxng
Member

Keyrxng commented Jul 31, 2024

We can do a ton of requests because we have a six hour runtime.

Not really ideal for users waiting any length of time for a response; plus we are bound by the fact that command-start-stop is a worker plugin, not an action plugin.

The endpoint I'm using to gather user stats is open source and self-hostable if we want to go down that route and keep the plugin fast.

It could actually kill two birds with one stone re: bringing in more devs. I don't know how exactly, but it could be leveraged for that purpose somehow. It would be similar to having our own https://gitroll.io/

@Keyrxng
Member

Keyrxng commented Jul 31, 2024

Cache might be useful, then we only need to run it once per user. We can rerun if they previously were not high enough level?

I thought of this, and we might be able to get away with it for one user with the worker, but if it's a team then that's potentially tens of thousands of commits. I'm making assumptions here obviously and will only know after testing, but I expect it to be problematic.

From the little work I've done on the faucet, I read that worker limits, while they appear to be time-based, are more memory-based than anything else. I have less experience here than any of you folks obviously, but if that is your opinion also then that may be a separate issue.

I don't think this is a good idea to maintain all this infra for this plugin.

Agreed, it's not ideal for just this alone, so I will proceed with other suggestions.

@rndquu
Member Author

rndquu commented Jul 31, 2024

@Keyrxng What unit is it exactly when you say Solidity: 25 for example?

1-100, maybe it can be made clearer what sort of threshold it is.

rndquu's language stats: (screenshot)

As far as I understand, those stats are taken from commits, not from forked repositories, so it requires quite an effort to spoof them.

Anyway, I think it's enough for v1. Setting a label like Solidity: 10 for ubiquity/ubiquity-dollar#927 should be enough for an initial pre-screen.
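A hypothetical parser for labels of that form, where the number is read as the minimum share (1-100) of the language in the user's stats; purely illustrative:

```typescript
// Parses labels like "Solidity: 10" into a language + threshold pair.
function parseLanguageLabel(label: string): { language: string; threshold: number } | null {
  const match = label.match(/^([A-Za-z+#]+):\s*(\d{1,3})$/);
  if (!match) return null;
  return { language: match[1], threshold: Number(match[2]) };
}

// parseLanguageLabel("Solidity: 10") -> { language: "Solidity", threshold: 10 }
```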

@0x4007
Member

0x4007 commented Aug 1, 2024

@Keyrxng this can run async in the background via the GitHub action runner. Here is a user flow:

  1. User self assigns via /start
  2. Bot assigns them
  3. Bot runs a "background check"
  4. A few minutes later the bot unassigns with a descriptive warning message that tags them "@user not enough xp" etc

Then the action runner can check all their commits at its leisure.

@Keyrxng
Member

Keyrxng commented Aug 1, 2024

poor guy probs gets tagged thousands of times per week 😂

Then the action runner can check all their commits at its leisure.

So should command-start-stop be both a worker (for the initial response) and then dispatch to the workflow within the same repo?

Or should it be a separate plugin which runs after and has its own config, etc.?


I feel like it sort of defeats the purpose to have a rapid assignment comment and then potentially eject them a minute or two later. By that point they may have gone ahead and forked repos / checked out branches, etc.

So maybe the assignment comment needs to be updated so we inform them ahead of time that they are being XP-checked and are temporarily assigned until it's verified?

@Keyrxng
Member

Keyrxng commented Aug 1, 2024

I just had a thought based on this issue: we should also build in a core-team or similar check so we can restrict issues that seem like a good fit for it.

@0x4007 It would be easy enough to build into the open PR if you could define a label schema

  • Internal: (biz-dev), restricted to teams, as an idea

@0x4007
Member

0x4007 commented Aug 1, 2024

Separate plugin. It's literally a couple of minutes max of wasted effort. I think this is acceptable. Can post a warning while it works. Certainly not perfect but it seems like an acceptable trade off.

core-team or similar check

Private repo is sufficient.

@0x4007
Member

0x4007 commented Sep 8, 2024

@Keyrxng this can run async in the background via the GitHub action runner. Here is a user flow:

  1. User self assigns via /start
  2. Bot assigns them
  3. Bot runs a "background check"
  4. A few minutes later the bot unassigns with a descriptive warning message that tags them "@user not enough xp" etc

Then the action runner can check all their commits at its leisure.

I just realized with this plugin enabled we should reply with...

# Please wait... 
# Analyzing your profile to see if you qualify for this task.

...comment before assigning. If they pass, then assign. If not, then edit the message saying that they require more experience. Perhaps something like

! You need more TypeScript projects on your GitHub profile in order to be eligible for this task. 

It could also be really interesting to include a gif of a loader spinner for some of these transient comments.
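A rough sketch of that post-then-edit flow using plain Octokit calls; the qualification check itself is stubbed out, and how this wires into the plugin is an assumption:

```typescript
import { Octokit } from "@octokit/rest";

// Post a "please wait" comment, run the check, then either assign or edit the comment.
async function guardedAssign(
  octokit: Octokit,
  params: { owner: string; repo: string; issue_number: number; username: string },
  qualifies: () => Promise<boolean> // stub: the actual experience check
) {
  const { data: comment } = await octokit.rest.issues.createComment({
    owner: params.owner,
    repo: params.repo,
    issue_number: params.issue_number,
    body: "# Please wait...\n# Analyzing your profile to see if you qualify for this task.",
  });

  if (await qualifies()) {
    await octokit.rest.issues.addAssignees({
      owner: params.owner,
      repo: params.repo,
      issue_number: params.issue_number,
      assignees: [params.username],
    });
  } else {
    await octokit.rest.issues.updateComment({
      owner: params.owner,
      repo: params.repo,
      comment_id: comment.id,
      body: "! You need more TypeScript projects on your GitHub profile in order to be eligible for this task.",
    });
  }
}
```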

@Keyrxng
Member

Keyrxng commented Sep 8, 2024

I just realized with this plugin enabled we should reply with...

Review has taken me in the direction of this running async after /start. So with this change we'd need to have it run on the same command as in the start-stop manifest and in parallel to /start; we'd then update the comment during the issues.assigned event.

However, if the self-assign checks fail then that event won't fire, so would we delete the comment from within the /start logic, or add a listener for the error comment from /start and then remove the comment in this plugin?

Maybe we add a config item to start-stop like xp-guard-enabled, have /start handle the initial comments, and then just update the comment from here?

If they pass, then assign. If not, then edit the message saying that they require more experience. Perhaps something like

So this should run before /start and it should pass the results of each xp-check for each user into /start so it knows whether to assign them or not?

It could also be really interesting to include a gif of a loader spinner for some of these transient comments.

That would be cool, why not task it out and make an on-brand logo loader?

@0x4007
Member

0x4007 commented Sep 8, 2024

It would be more elegant to match the font size and have a small inline spinner. Below is a test


# Please wait... 
# Analyzing your profile to see if you qualify for this task.

(screenshot)

@Keyrxng
Member

Keyrxng commented Sep 8, 2024

Looks cool, sort of Tron vibes, and you can size it with an img element.

(screenshot)

This below doesn't render; we should probably watch out for this in conversation-rewards?

<img>
<!--- ---!>
</img>


ubiquity-os bot commented Sep 12, 2024

@Keyrxng, this task has been idle for a while. Please provide an update.

4 similar comments

@Keyrxng
Member

Keyrxng commented Sep 27, 2024

Looks like the repo for this may have been deleted, as I cannot find it: https://github.com/ubiquibot/task-xp-guard/pull/1

https://github.com/ubq-testing/task_xp_guard - with no repo to PR against, I'm unsure what to do here, as I'm aware I shouldn't be creating so many new repos.

@0x4007
Member

0x4007 commented Sep 27, 2024

You make your own repo. On your own org. We copy it when it's finished.

@Keyrxng
Member

Keyrxng commented Sep 27, 2024

The most recent PR was deleted so I can't reference the conversation, so I want to clarify and summarize here, as I am currently refactoring my working approach to use your commit-counting strategy.

In short, I should collect a user's commits and then reduce them down to a tally of file extensions, counting each extension at most once per commit regardless of how many times it appears in that commit. "Deletions" should be excluded from the tally.

  1. I need to collect the user's repos. I assume GraphQL with something like:

query userRepositories($login: String!) {
  user(login: $login) {
    repositories(first: 100, ownerAffiliations: [OWNER, COLLABORATOR], isFork: false) {
      nodes {
        name
        url
      }
    }
  }
}

  2. Now that I have the repos, I can use octokit.paginate(octokit.repos.listCommits) to collect all of the commits from each repo, or GraphQL to obtain the linear commit history.
  3. I can filter out all non-assignee commits, and then I need to call octokit.repos.getCommit on each commit, as listCommits is a thin payload and doesn't contain the things we need to parse.
  4. I accumulate from the getCommit responses and the end result is the user's experience (see the sketch below).
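A minimal sketch of that accumulation step, assuming the commits have already been collected as owner/repo/ref tuples; the tally rules are the ones described above (each extension counted once per commit, deletions skipped), but this is not final plugin code:

```typescript
import { Octokit } from "@octokit/rest";

// Reduce a user's commits to a tally of file extensions, one count per extension per commit.
async function tallyExtensions(
  octokit: Octokit,
  commits: { owner: string; repo: string; ref: string }[]
): Promise<Record<string, number>> {
  const tally: Record<string, number> = {};

  for (const { owner, repo, ref } of commits) {
    const { data } = await octokit.rest.repos.getCommit({ owner, repo, ref });
    const seen = new Set<string>();
    for (const file of data.files ?? []) {
      if (file.status === "removed") continue; // ignore pure deletions
      const ext = file.filename.includes(".") ? file.filename.split(".").pop()! : file.filename;
      seen.add(ext);
    }
    for (const ext of seen) {
      tally[ext] = (tally[ext] ?? 0) + 1;
    }
  }
  return tally;
}
```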

I'm at the accumulation stage right now and using @0x4007 as the account to test with.

Total commits fetched:  1261
Covering repos:  31
Fetching commit details for 0x4007/nv
Fetching commit details for RameshPR/VR01B
Fetching commit details for 0x4007/nv2
Fetching commit details for 0x4007/proxy-handler
Fetching commit details for 0x4007/google-apps-scripts-general
Fetching commit details for 0x4007/nvcrm
Fetching commit details for 0x4007/lrdscx-stripe
Fetching commit details for 0x4007/begin-app
Fetching commit details for 0xWildhare/SBTToken
Fetching commit details for 0x4007/uad-ui-archived
Fetching commit details for 0x4007/uniswap-lp-apr
Fetching commit details for 0x4007/opensea-cashback
Fetching commit details for 0x4007/hardhat-typescript-starter
Fetching commit details for Steveantor/msgtester
Fetching commit details for ubiquity/github-labels-processor
Fetching commit details for 0x4007/0x4007
Fetching commit details for 0x4007/scraper-parent-test
Fetching commit details for 0x4007/metamask-cache-test
Fetching commit details for 0x4007/parcel-ethers-test
Fetching commit details for 0x4007/the-grid
Fetching commit details for 0x4007/ubiquibot-sandbox
Fetching commit details for ubiquity-os/ubiquity-os-kernel
Fetching commit details for 0x4007/blog.pavlovcik.com
Fetching commit details for 0x4007/link-pulls
Fetching commit details for 0x4007/hue
Fetching commit details for 0x4007/clone-all-repositories
Fetching commit details for ubiquity-os-marketplace/conversation-rewards
Fetching commit details for 0x4007/cx-linkedin-scraper
Fetching commit details for powerhouse-inc/powerhouse-mirror
Fetching commit details for 0x4007/pull-time-estimator
Fetching commit details for ubiquity-os/ubiquity-os-plugin-installer
Experience via commits:  {
  js: 166,
  md: 176,
  htaccess: 1,
  conf: 2,
  array: 3,
  html: 140,
  htm: 50,
  css: 293,
  json: 1197,
  LICENSE: 5,
  php: 6,
  'service/error_log': 2,
  error_log: 2,
  ts: 2858,
  gitignore: 95,
  map: 19,
  lock: 174,
  npmignore: 3,
  eslintrc: 70,
  MD: 2,
  sh: 65,
  png: 859,
  svg: 2,
  txt: 18,
  tsbuildinfo: 3,
  contracts: 11,
  tsx: 305,
  mp4: 6,
  DS_Store: 16,
  eot: 26,
  'eot?': 10,
  ttf: 30,
  woff: 26,
  gitmodules: 7,
  prettierignore: 16,
  eslintignore: 6,
  yml: 349,
  log: 58,
  prettierrc: 41,
  eslintcache: 3,
  babelrc: 6,
  example: 19,
  solhintignore: 1,
  sol: 1,
  nvmrc: 20,
  commitlintrc: 10,
  cache: 2,
  csv: 2,
  SingletonLock: 3,
  'Default/Extension Scripts/LOG': 2,
  old: 77,
  ldb: 7,
  db: 15,
  DevToolsActivePort: 2,
  SingletonCookie: 2,
  SingletonSocket: 2,
  Variations: 2,
  RunningChromeVersion: 1,
  pma: 3,
  'cache/ShaderCache/data_3': 1,
  lockb: 10,
  toml: 5,
  cjs: 9,
  mjs: 4,
  editorconfig: 2,
  ico: 5,
  license: 2,
  jpg: 2,
  mkv: 2,
  gitconfig: 1,
  'github/CODEOWNERS': 6,
  parcelrc: 1,
  terserrc: 1
}
Time taken:  319540.2787
Total getCommit API calls made:  1261

I've removed hundreds of bad entries manually (left a couple in) and will need to add handling for them. Assuming all three aspects are acceptable (1. repo collection, 2. tally output, 3. time taken), how are we transforming these numbers into stats? Just determining each language's weight across the number of different languages?

In a team scenario the call count and time taken could easily 2-3x, so it eats a huge chunk of our rate limit every time a task is started, and the delay is ~5 minutes per contributor. If we try to make things go faster we'll hit the secondary rate limit, which will knock out all other plugins.

I think a cache was mentioned, but I'm not sure how that would work; when would we update a cache entry - every n tasks?

GraphQL does not expose what we need, or it does but not in the way we need it: it exposes the tree with all files, and we can pinpoint files by path via the tree at that point in time, but it doesn't contain the files changed in the commit. I've been using the explorer for manual introspection, and GPT is also telling me it's not possible; maybe we are both wrong, but I don't think this is a feasible approach without removing the calls to getCommit.

I tried to use Blame and Contribution Graph data from GraphQL too but all roads lead nowhere.

@0x4007
Member

0x4007 commented Sep 28, 2024

Maybe we can optimize the rate limits in a different way. For now we can let the job run slowly, and as for the cache, we just need to store the totals somewhere.

The first run will be the heaviest; maybe a separate app can handle that. Later runs could "sync" from the last time it ran. For example, since you just checked today, if I start a task tomorrow it will only check my last day of commits, which should be a relatively tiny amount.
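A minimal sketch of that incremental sync, assuming a cached timestamp from the previous run; listCommits natively supports the author and since filters:

```typescript
import { Octokit } from "@octokit/rest";

// Only list commits authored since the cached timestamp; undefined means a full first run.
async function commitsSince(
  octokit: Octokit,
  owner: string,
  repo: string,
  author: string,
  lastCheckedAt?: string // ISO timestamp from the cache, if any
) {
  return octokit.paginate(octokit.rest.repos.listCommits, {
    owner,
    repo,
    author,
    since: lastCheckedAt,
    per_page: 100,
  });
}
```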

Seems like the stats you aggregated look pretty great. They show I'm pretty experienced with TypeScript, which is what I expected.

There are a lot of non-extensions in there which should be fixed up later. Also I'm not seeing CSS, which is unexpected.

We also eventually should read private repos as well by authenticating through ubiquity-os

@Keyrxng
Member

Keyrxng commented Sep 28, 2024

I'm sorry, I do not understand how that approach is better/more effective than what I had implemented, if the primary goal is detecting the languages which appear most across a user's commits in repos they own and have collaborated on. My approach indexed the same repos and obtained the same final result (that you know TS) in under 10s and used <10 API calls.

My comment from before:
(screenshot)


we just need to store the totals somewhere.

Where would that be then - a Supabase DB for this plugin, or another storage solution?

For example, since you just checked today, if I start a task tomorrow it will only check my last day of commits, which should be a relatively tiny amount.

So we'd check commits from the beginning of time on the first run, save the final output and the date we last ran a check, and then any subsequent usage of /start would parse commits across all repos from the last check date and update the storage with the new stats?
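A hypothetical shape for such a cache entry, independent of whichever storage backend is chosen:

```typescript
// Assumed cache record: the tally plus the timestamp of the last run.
interface XpCacheEntry {
  username: string;
  lastCheckedAt: string; // ISO timestamp of the previous full or incremental run
  extensionTally: Record<string, number>; // e.g. { ts: 2858, sol: 1, ... }
}
```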

@0x4007
Member

0x4007 commented Sep 28, 2024

Ideally git-based storage, but for now I guess Supabase or whatever works.

@Keyrxng
Member

Keyrxng commented Sep 28, 2024

maybe a separate app can handle that

Not ideal, as partners will need to create an additional app just to handle rate limits from a single plugin, wouldn't they? Or would it be our token that's used across all partners?

The first run will be the heaviest,

So if we are looking at a 5-minute delay on the first run, should we really not assign the task right away, or are we assigning right away?

We also eventually should read private repos as well by authenticating through ubiquity-os

This doesn't work with an additional app unless it too had private access. Wouldn't this plugin need the APP_ID and APP_PRIVATE_KEY to be able to do that, which is a no-no for plugins? Or use OAuth, which isn't feasible in this context?

I just realized with this plugin enabled we should reply with...comment before assigning. If they pass, then assign. If not, then edit the message saying that they require more experience. Perhaps something like

As I understand this comment ^

  • if this plugin is not installed, assignment happens as normal
  • if this plugin is installed, we comment first, do our checks, and then run /start

@Keyrxng
Member

Keyrxng commented Sep 28, 2024

if this plugin is not installed, assignment happens as normal
if this plugin is installed, we comment first, do our checks, and then run /start

These two plugins were decoupled and run async as two separate plugins in the chain, meaning they should both receive the payload at the same time. /start is going to finish first 9 times out of 10, so in order to avoid assigning them when this plugin is active, what can we do?

Well, as two separate plugins that receive the same payload, it's possible that this plugin forwards the same payload onto the /start worker, which is beneficial if we can stop /start from running via the command and instead have it run when it receives a payload directly from this plugin. Then it's sort of like it's running async.

Otherwise the only way I can see it happening is if /start becomes dependent on this plugin forwarding its results via the kernel, by using a chained config entry containing xp-guard > command-start.


This is what the log above looks like converted via gpt-4o:

| Extension | Count | Percentage |
| --- | --- | --- |
| ts | 2858 | 51.33% |
| json | 1197 | 21.50% |
| png | 859 | 15.43% |
| yml | 349 | 6.27% |
| tsx | 305 | 5.48% |

Your TS score here is lower than the 80% in the picture because more "languages" are included than in my approach. I guess we can address language/extension-specific handling in another task, minus the obvious ones that I'll exclude before merge.

@Keyrxng
Member

Keyrxng commented Sep 28, 2024

We also eventually should read private repos as well

I linked https://gitroll.io/ before and we could do something similar

  • create a UI that the user logs in to via GitHub OAuth
  • we index all of their repos with their own token and store the result, which this plugin would fetch from.

pros:

  • we use the contributor's rate limit
  • we have private repo access
  • the user runs it whenever they want to update their own stats (likely following a rejection)
  • Improves the execution speed of this plugin and removes getCommit calls
  • allows this to remain async without affecting /start too much

cons:

  • requires more infra
  • requires we modify our /start flow such that if an XP entry is not found for a contributor we must refer them to the UI in order to scan

@Keyrxng
Member

Keyrxng commented Oct 23, 2024

removing my assignment until the spec is made clearer

@0x4007
Member

0x4007 commented Oct 26, 2024

removing my assignment until the spec is made clearer

@rndquu rfc

@rndquu
Member Author

rndquu commented Dec 9, 2024

removing my assignment until the spec is made clearer

@rndquu rfc

We need a dead simple "dev experience checker" as described here
