This is a New Feature Suggestion.

A decade ago, when Python 3 adoption was a challenge, someone produced a giant table showing the top 1,000 or so PyPI packages (by download count) and whether each supported Python 3. It reassured developers that there would be broad-based support, and it focused attention on the projects that didn't, so people could see where work needed to be done.
Buildozer, p4a and kivy-ios should have a similar chart, showing which PyPI packages work on iOS and Android.
Showing that something "works" is hard; showing that a project can build against it may have to serve as a proxy.
I see this tool as a scheduled CI process that is running pretty much constantly.
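As a sketch only: if the tool were hosted on GitHub Actions (one plausible choice; the workflow name and cron schedule here are assumptions, not a proposal detail), the scheduling piece might look like:

```yaml
# Hypothetical GitHub Actions trigger: run the build matrix nightly,
# with a manual trigger for ad-hoc runs.
name: package-build-matrix
on:
  schedule:
    - cron: "0 3 * * *"   # nightly at 03:00 UTC
  workflow_dispatch: {}   # allow manual runs as well
```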
It would maintain a list of packages of interest, populated from several sources: known recipe targets, a manually curated list of packages users had previously shown interest in, a manually curated list of known issues (i.e. recipes that no longer run, or recipes that have been requested but never implemented), and any package that has appeared on the top PyPI packages list.
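A minimal sketch of assembling that list, recording why each package is present (all source sets and names below are illustrative placeholders, not real data):

```python
# Hypothetical source sets; in practice these would be loaded from the
# recipe repositories, curated files, and a top-PyPI-packages feed.
RECIPE_TARGETS = {"numpy", "pillow", "kivy"}   # known recipe targets
USER_REQUESTS = {"requests", "pillow"}         # packages users asked about
KNOWN_ISSUES = {"pycrypto"}                    # broken or never-implemented recipes
TOP_PYPI = {"requests", "numpy", "urllib3"}    # top PyPI packages by downloads

def packages_of_interest():
    """Union of all sources, mapping each package to the reason(s) it is listed."""
    reasons = {}
    for source, names in [
        ("recipe", RECIPE_TARGETS),
        ("user-request", USER_REQUESTS),
        ("known-issue", KNOWN_ISSUES),
        ("top-pypi", TOP_PYPI),
    ]:
        for name in names:
            reasons.setdefault(name, []).append(source)
    return reasons
```

Keeping the reason per package matters later, when the report explains why a failing package is on the list at all.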
A test would consist of naively specifying the package in a buildspec and building a hello-world style project on each supported architecture of Android and iOS (and perhaps other platforms). Because some recipe names don't match the PyPI package name, a mapping may be needed to get the spec right.
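The name-mapping and per-architecture matrix could be sketched as below (the mapping entry and architecture lists are illustrative assumptions, not authoritative):

```python
# Hypothetical mapping from PyPI name to recipe name; real entries would
# come from the python-for-android and kivy-ios recipe directories.
RECIPE_NAME_MAP = {
    "pillow": "Pillow",   # example of a case where the names differ
}

# Assumed per-platform architectures to build against.
ARCHS = {
    "android": ["arm64-v8a", "armeabi-v7a", "x86_64"],
    "ios": ["arm64"],
}

def requirements_line(package):
    """Produce the buildspec requirements line for a naive hello-world build."""
    recipe = RECIPE_NAME_MAP.get(package, package)
    return f"requirements = python3,kivy,{recipe}"

def build_matrix(package):
    """One (platform, arch) build per supported architecture."""
    return [(p, a) for p, archs in ARCHS.items() for a in archs]
```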
For each package, the tool would record why it is on the list; the last time it was tested successfully (per platform), along with the configuration data (Python version, Buildozer version, p4a version, and package version); and the last time it was tested unsuccessfully, with similar details and perhaps the build log.
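One possible shape for that per-package record (the field names are my own guesses at what the tool would need, not a settled schema):

```python
from dataclasses import dataclass, field

@dataclass
class BuildConfig:
    """The configuration a test ran against."""
    python_version: str
    buildozer_version: str
    p4a_version: str
    package_version: str

@dataclass
class PackageStatus:
    """Everything the tool tracks about one package."""
    name: str
    reasons: list                                   # why it is on the list
    last_success: dict = field(default_factory=dict)  # platform -> (timestamp, BuildConfig)
    last_failure: dict = field(default_factory=dict)  # platform -> (timestamp, BuildConfig, log_url)
```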
An HTML page would be generated showing these statistics. For failing cases, a link to the associated Issue (if any) would be included.
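The report generation could be as simple as rendering one table row per (package, platform) result, linking failures to their issue, along these lines (a sketch; the row layout is assumed):

```python
import html

def render_row(name, platform, ok, issue_url=None):
    """Render one HTML table row; failing cells link to the associated issue."""
    status = "pass" if ok else "fail"
    cell = status
    if not ok and issue_url:
        cell = f'<a href="{html.escape(issue_url)}">{status}</a>'
    return (
        f"<tr><td>{html.escape(name)}</td>"
        f"<td>{html.escape(platform)}</td>"
        f"<td>{cell}</td></tr>"
    )
```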
That would be nice. Right now I'm having an issue where my app doesn't open on my Android device, and I don't know if it's a problem with my spec file, my code, or the modules, as it's a complex app that uses several of them. With a list of "approved" packages I'd at least have one less thing to troubleshoot.