Add option to always dump %INC regardless of exit code #1
David Golden [email protected] writes:

And remember the impact of HARNESS_OPTIONS=j9

-- andreas
Good point. That could get tricky, but it has prompted me to consider the bigger issue of how distributions are supposed to influence test scheduling. See Perl-Toolchain-Gang/Test-Harness#33 for an example.
What I like most about Test::PrereqsFromMeta is that it follows the declarations in META.yml. It's in the spirit of divide and conquer: whether something is declared is one thing; relying on the declarations is another. If authors do not declare, they get the pieces they asked for. If they do declare, the system works better. Nothing to worry about. We are already quite good at declaring dependencies: two years ago I had to write more than one bug report about undeclared dependencies per day; now we're down to about one per week.

And optional dependencies? Aren't they supported in the META.yml spec already? So if they are declared there, we already have enough information.

I also like DiagINC. It could help pinpoint cases where the declared modules deviate from the ones actually used. I like them both so much that I wish they were one, like:

    use Test::DiagINC 'frommeta'; # just in t/00-all_prereqs.t

What do others think?
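Until such a combined interface exists, a standalone t/00-all_prereqs.t could approximate the idea. Here is a minimal sketch using the real CPAN::Meta API, assuming the test runs from the dist root; the 'frommeta' option itself is hypothetical and this is not Test::DiagINC's actual interface:

    # Sketch: load every declared runtime prereq, then report what
    # actually ended up in %INC.
    use strict;
    use warnings;
    use Test::More;
    use CPAN::Meta;

    my $meta    = CPAN::Meta->load_file('META.json');    # or META.yml
    my @modules = $meta->effective_prereqs
                       ->requirements_for( runtime => 'requires' )
                       ->required_modules;

    for my $mod ( grep { $_ ne 'perl' } @modules ) {
        ok( eval "require $mod; 1", "loaded declared prereq $mod" )
            or diag $@;
    }

    diag "$_ => $INC{$_}" for sort keys %INC;    # the DiagINC half

    done_testing;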
@andk points out that having the delta between passing and failing tests is helpful in pinpointing problem modules.
Dumping %INC on every test regardless of exit would accomplish this. Compared to Test::PrereqsFromMeta, it would be more verbose (every test instead of once), but it would be more correct, accounting for conditional module usage.
For AUTOMATED_TESTING, it might be worth it.
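A minimal sketch of what that could look like, assuming an END block in a helper module loaded by every .t file; the AUTOMATED_TESTING guard and the output format here are assumptions, not Test::DiagINC's actual behavior:

    # Sketch: always dump %INC at process exit, regardless of exit code.
    # Placement in a shared test helper is an assumption for illustration.
    END {
        if ( $ENV{AUTOMATED_TESTING} ) {
            require Test::More;
            Test::More::diag("%INC at exit of $0:");
            Test::More::diag("  $_ => $INC{$_}") for sort keys %INC;
        }
    }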
The other alternative would be to somehow collect the data across .t runs and present it in a consolidated way afterwards. For example, dump the data to files in .diaginc and then have a .t file that consolidates and spews it. (These could, of course, be conditional on AUTOMATED_TESTING.) A sketch of both halves follows.
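This is a hedged sketch only: the one-file-per-test naming scheme and the consolidating test's file name (zz-diaginc.t, chosen to sort last) are illustrative assumptions, not an existing convention.

    # In each .t file (or a shared helper): record %INC on exit.
    END {
        if ( $ENV{AUTOMATED_TESTING} ) {
            mkdir '.diaginc';
            ( my $name = $0 ) =~ s{[/\\]}{-}g;    # e.g. t/basic.t -> t-basic.t
            if ( open my $fh, '>', ".diaginc/$name.inc" ) {
                print {$fh} "$_\n" for sort keys %INC;
            }
        }
    }

    # A late-sorting test (say, t/zz-diaginc.t) that consolidates and reports:
    use strict;
    use warnings;
    use Test::More;

    my %seen;
    for my $file ( glob '.diaginc/*.inc' ) {
        open my $fh, '<', $file or next;
        chomp( my @entries = <$fh> );
        $seen{$_}++ for @entries;
    }
    diag "$_ (seen in $seen{$_} test files)" for sort keys %seen;
    pass 'consolidated %INC report';
    done_testing;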