Join crate-ci org? #34
Looks like there is a cargo-kcov. Maybe we can join efforts?
I can't speak for @roblabla, but as the author of the doc-upload portion of this library, I'd be happy to move that into the crate-ci org under whatever OSI-approved license you'd prefer.

I've started to think that it'd be best to split that functionality out, actually. I have at least one project where I'd like to use the upload to GitHub Pages but I don't really want to do coverage (it's highly exploratory right now). @epage, if I break out the doc upload functionality into, say,
I've added your blog post to my list of sources to pull from when writing my centralized docs (see crate-ci/crate-ci.github.io#2). That'd be great if you want to get your tool up on the org.

Without an org, I can see it being easy to bundle everything up in one repo. My hope was that these things would split up.

So to check my understanding: doc-upload is meant to upload to GitHub Pages? And you feel that renaming it to

Could you clarify some things for me?
Yeah, probably, unless I can make it generic enough that it works with other service providers.
Mainly that I know how to use GitHub Pages.
Docs.rs only generates docs on publish, and only for one configuration of your crate. With the more manual approach, you can generate docs for multiple configurations, branches, and include documentation that isn't just the API docs as well. Multi-crate workspaces really benefit from workspace doc -- all the crates are linked in the left sidebar [example]. Also search crosses crates that way.
Being able to contribute to the docs from multiple branches. If you have (say) two branches,
Thanks for the clarifications. Sounds like I have good sound bites for when organizing that part of the documentation :)
I'd be happy to move the cargo-travis crate to the crate-ci org as well. I read your rust2018 post, and I very much agree with the points outlined \o/. I've been less active than I would have liked, and this crate could use some more love.

Concerning the renaming of the commands, that's a fine idea, but I'm afraid that the potential breaking of people's builds might make it problematic. Right now, the recommended way to install cargo-travis is to simply get the latest version at the start of each build. If we remove the

Regarding

This begs the question of which approach is best. I do not have the answer. Using cargo as a dependency certainly brings several disadvantages: it makes the build slower as it has to recompile cargo, and because of the different cargo version from the host, it will also recompile the project even if it was previously built. But it does make the code look saner, as we can simply reuse the relatively clean Cargo API instead of using subcommands and parsing the output.

And thanks for reaching out and championing this crate-ci project :D
Is it parsing human-readable output or machine-formatted metadata? Human-readable is a no-go in my mind. I'm unsure which is better between metadata and depending on cargo.
Yeah, we'd have to have a new crate name to avoid breaking people. What are your thoughts about our kcov and coveralls being separate crates?
Well, cargo-coveralls just delegates the coveralls implementation to kcov (it's really just passing the

I'm not sure how
What are your thoughts on separating the two so that someone can run

What I'm thinking of is decoupling how we gather coverage data from what service we upload to. This means we could migrate our recommendation away from kcov but reuse the same upload mechanism. Or, say we find another service is better than coveralls and migrate to it, we just change out the upload mechanism.

Maybe that's just not ergonomic enough and we should just merge it all into
Let me reiterate this: there is no coveralls-specific code in this crate. All we do is tell kcov that the user requested the kcov coverage be uploaded to coveralls. There is no coveralls support we could break out. As such, I think the hypothetical crate

For coverage tools that don't natively support coveralls, an upload script would be required. For tools that do, such as kcov, the native support should be used.
Oh, thanks for clarifying. I thought you meant your

Ok, yeah, I agree about dumping
pinging @kennytm for cargo-kcov; we shouldn't ignore that work. On linking cargo versus calling:
I think that the json of

For an example, I ran

Results gist: https://gist.github.com/CAD97/f0e0cc45472544b32ed0b552cf230885

Lines of note:
All of the test binaries run are available by looking through the produced artifacts for
@CAD97 ... I forgot why that requirement was written 😂. Probably it was meant to be

I prefer not to link to cargo as a library simply because compiling it takes so long.

One could also create a
I'm fine parsing data meant for machines. Seems reasonable to create a crate for it. It'd be nice to avoid linking; the build times aren't a concern for me because we should be uploading prebuilt binaries to GitHub and using those in our CIs.
So it looks like we are

Is this correct? What are the next steps for moving forward? Should we move one of them independently of figuring out the merge? Which?
For ghp-upload, I've just got to gather the time to pull it out, which should happen this weekend. (And to make sure it practices the best practices that we're going to be preaching!) Since I'm putting in the work to split it from cargo-travis, I also want to do what I can to decouple the Travis-based assumptions baked into the current design, relying more on the git interface.

I've got a few ideas that could apply to coverage, mainly on how to parse
Feel free to pull it out and improve it later :)
btw I've now created a gitter channel
@CAD97 you may get a more definite response if you file an issue on the cargo repository or post on internals.rust-lang.org.
https://users.rust-lang.org/t/how-stable-is-cargos-message-format-json/15662/2 apparently, it's stable @CAD97 |
And that's why I went and posted it to users.rust-lang.org :) |
Currently working on adding cargo message-format parsing capabilities to cargo_metadata (have to use it in another project). I'll probably refactor cargo-travis to use that afterwards. |
Is the goal of that PR to make

If so, sounds similar to https://github.com/crate-ci/escargot/
The goal is to make
IMO, the best thing would be for escargot to piggyback on cargo-metadata, which would provide the low-level structs, while escargot would provide the high-level APIs.
I'm curious, how do you plan to support xargo?
Would love an issue / PR for this :). Granted, if this isn't fixed by the time I'm integrating escargot into stager, then I'll probably be implementing it myself.
As I also just posted in oli-obk/cargo_metadata#45: the reason escargot does not include struct definitions is to ensure clients have the control they need for reducing overhead.
Granted, it's probably not worth it with all the other overhead in these kinds of operations.
The parse_message_stream function in cargo_metadata just takes an

I could probably PR a polling system into escargot if I have some time. I'll make an issue when I'm not on the phone.

As for the parsing, I feel like this is super premature optimization. It's not like cargo generates that much data, and the messages don't have that many fields. I'd need to benchmark, but I really doubt any app would suffer from this overhead. Not to mention, without a stream parser, the current implementation basically allocates the whole thing. And since the serialization format is JSON, partial parsing isn't going to help perf all that much.

Meanwhile, there is a very real cost in usability. Every user ends up having to redefine essentially the same structs, running rustc manually to discover what the fields are, etc.
Ok, so it seems like your proposed solution is to only solve the parsing but not how the stream is created. In a way, this makes

I'm still curious how you plan for your tool to work with both xargo and cargo, or if it already does, how it does it.

FYI, it took me a sec to realize you were referring to a function you were adding.
I know; I admitted that. I also just wasn't up for taking the time to create the structs. Thanks for taking the time to do it!
Yeah, defining the schema somewhere is really useful; that is the usability problem to solve. I was also viewing my usage of escargot as an experiment in what static data queries might be like. For a while, I was looking into whether it was possible to implement something like JSONPath with serde, for constructing specific queries. It never ended up going anywhere.
For cargo-travis, I'll likely add a `--cargo-invocation` argument that allows specifying whether to run `cargo build` or `xargo build` or `cargo xbuild` or ... Those will obviously need to support the `--message-format=json` arg (and implement it properly). I already have a fork of xargo that makes it understand and properly handle `--message-format=json`.

The other tool I'm writing this for is specifically to cross-compile binaries for the Nintendo Switch, so it hardcodes xargo. See https://github.com/MegatonHammer/linkle/blob/master/src/bin/cargo-nro.rs

Maybe a simple way to fix the usability problem would be to have a doctest/example in escargot showing how to use the cargo_metadata types I'm adding to parse the messages. That way, escargot can focus on making things fast where it can, while still making the common case simple to use.
Ok. This is good stuff for me to consider for my work on stager, so I appreciate it! Also, gives me ideas on how to maybe modify
Once things are in and settled down for cargo metadata, I might experiment with how to output both. |
Started creating issues:
This is waaay overdue, but I started work porting to cargo-metadata in #64. I ended up not using
Understandable. Could you open an issue on escargot about `CompilationMode` not being `Test` and what impact that has?

btw, even if you don't want to use escargot's
Also, you might find clap-cargo useful. I'm using it in
Yup, I've been looking at clap-cargo for some time. I'm using docopt though, so I'd first need to switch to clap/structopt, which will likely come in a subsequent PR.

I'll open an issue on escargot once I figure out whether the `CompilationMode` stuff has an impact or if I can just use

And I don't need to parse the test output, thankfully, since I only run the tests to gather coverage information, so I won't need escargot's JSON message definitions. But thanks :D
Correct, it does not currently build or run the doctests. IIRC
Would you be willing to join forces and move development of your crate to the crate-ci org?

I think this quote from killercup explains the value of a shared org:

You can see more about my goal for this new org in my rust2018 post. We still need to work out the governance model, but you can maintain as much control over your crate as you wish.

Specifically for this project, a thing I think could be nice: `cargo-coverage` renamed to `cargo-kcov`, to recognize there are multiple ways to gather coverage information and to allow them to be installed.