
Remove cache API response #64

Open
oscarssanchez opened this issue Feb 2, 2022 · 0 comments

oscarssanchez commented Feb 2, 2022

Per @dkotter

I was able to download Yoast 7.5.3 from here: https://github.com/Yoast/wordpress-seo/releases/tag/7.5.3. Running a scan on that does properly flag that as being insecure and needing to be updated.

That said, the concern here was that maybe there's some issue around caching. We do use a transient to cache the API response (see https://github.com/10up/wpcli-vulnerability-scanner/blob/develop/wpcli-vulnerability-scanner.php#L943). You could easily have a scenario where a scan is run, the API responses are cached, and then a new vulnerability is reported before that cache expires. The next time you run a scan, it wouldn't make a fresh API request and would use the cached data instead, giving you a stale, incorrect result.
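For reference, here's a minimal sketch of the transient-caching pattern being described; the function name, transient key, and request details are illustrative, not the plugin's actual code:

```php
<?php
// Illustrative sketch of the caching pattern described above; names are
// hypothetical, not the plugin's actual identifiers. The real WPScan API
// also requires an Authorization token header, omitted here.
function fetch_plugin_vulnerabilities( $slug ) {
	$cache_key = 'vuln_scan_' . $slug;

	// While the transient is alive, the cached response is returned as-is,
	// so a scan inside that window never makes a fresh API request.
	$cached = get_transient( $cache_key );
	if ( false !== $cached ) {
		return $cached;
	}

	$response = wp_remote_get( 'https://wpscan.com/api/v3/plugins/' . $slug );
	if ( is_wp_error( $response ) ) {
		return false;
	}

	$data = json_decode( wp_remote_retrieve_body( $response ), true );

	// Cached for one hour, matching the window described in this comment.
	set_transient( $cache_key, $data, HOUR_IN_SECONDS );

	return $data;
}
```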

But that transient is only cached for a single hour, so I don't think that scenario would happen very often. Looking at the original report here, Yoast 7.5.3 is being used and the vulnerability there was reported in November of 2018 (while the report here comes from April 2021). It seems unlikely, then, that this would be the result of stale data, though since every setup is slightly different, it's possible there's some weird caching situation.

I think we could easily add a --no-cache flag to our commands, bypassing this cache check. But in my opinion, I'd suggest we remove that caching altogether. I understand that the free plan from WP Scan has fairly small limits, so caching helps reduce the likelihood that those limits are reached, but this seems like a tool where we would always want fresh data and never cached data. This isn't something that runs regularly from user requests; it's a command that is either run manually or triggered by something like cron. So I can't think of a reason why we wouldn't want to just remove those caching lines.
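A rough sketch of what the suggested --no-cache flag could look like in a WP-CLI command, reusing the hypothetical fetch_plugin_vulnerabilities() from the sketch above (command and key names are again illustrative):

```php
<?php
// Hypothetical sketch of a --no-cache flag; the command and transient key
// names are illustrative, not the plugin's actual code.
function plugin_scan_command( $args, $assoc_args ) {
	$slug = $args[0];

	// WP-CLI passes flags in via $assoc_args; get_flag_value() returns
	// the supplied default when the flag is absent.
	if ( \WP_CLI\Utils\get_flag_value( $assoc_args, 'no-cache', false ) ) {
		// Drop the transient so the next fetch hits the API directly.
		delete_transient( 'vuln_scan_' . $slug );
	}

	$data = fetch_plugin_vulnerabilities( $slug );
	// ... compare the installed version against the reported vulnerabilities.
}
```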

All that said, I'm still not sure that is the root cause of this reported issue. Again, I can definitely see this causing issues and it does need to be fixed, but it seems unlikely that it caused the original reported issue here, just based on timelines.

Originally posted by @dkotter in #54 (comment)

This makes sense to me ^
