This repository has been archived by the owner on Apr 16, 2020. It is now read-only.

Package Manager usecase benchmarks for #ipfs-test-infra project #80

Open
andrew opened this issue Jul 18, 2019 · 5 comments

Comments

@andrew (Collaborator) commented Jul 18, 2019

Related to #76, #77 and #79

One of the Infrastructure/IPFSaaS team's goals this quarter is to be able to test candidate IPFS releases against real-world use-case benchmarks so that regressions are caught early.

Package managers are one of those use cases; we should package up some of these benchmarks for the @ipfs/wg-infrastructure team to integrate into their test suite.

@meiqimichelle (Contributor) commented

We can also explore whether we should do this work as part of https://github.com/ipfs/benchmarks //cc @alanshaw

@alanshaw commented

The test scenarios we have are documented at https://github.com/ipfs/benchmarks/tree/master/tests#nodejs-and-go. Happy to talk with either of you about augmenting them for your needs.

@andrew (Collaborator, Author) commented Jul 23, 2019

There are two kinds of things to think about here:

  • tests - integration-level tests that cover our use cases and either pass or fail
  • benchmarks - measurements of the performance of running our use cases over time, to ensure there are no regressions (see the sketch below)
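
Below is a minimal sketch of what the benchmark side of this could look like. It is an illustration only: the existing ipfs/benchmarks suite is written in Node.js, the `./dataset` path and the 10-minute budget are hypothetical, and the sketch assumes nothing beyond an `ipfs` binary on PATH and the stock `ipfs add -r -Q` command.

```go
// benchmark_add.go: sketch of a package-manager "add" benchmark.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	dataset := "./dataset" // hypothetical: a snapshot of package-manager tarballs

	start := time.Now()
	// `ipfs add -r -Q` recursively adds the directory and prints only the root CID.
	out, err := exec.Command("ipfs", "add", "-r", "-Q", dataset).CombinedOutput()
	if err != nil {
		log.Fatalf("ipfs add failed: %v\n%s", err, out)
	}
	elapsed := time.Since(start)

	// A benchmark records the measurement over time; an integration test would
	// instead assert a pass/fail condition (the 10-minute budget is illustrative).
	fmt.Printf("added %s in %s (root: %s)", dataset, elapsed, out)
	if elapsed > 10*time.Minute {
		log.Println("possible regression: add exceeded the illustrative time budget")
	}
}
```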

@hsanjuan (Member) commented

Note that @ipfs/wg-infrastructure is not working on test suites. I'm not sure the pipeline being set up by @jimpick, @alanshaw, etc. can accommodate adding TBs of data. Maybe you will need a separate pipeline. Also, it's worth checking whether the problems seen with multiple 1 TB repositories can already be reproduced with 1 GB test sets.

The bottleneck is probably disk throughput on write. This will become slower with flatfs and probably stays reasonably constant with badger (assuming enough memory). All the tests listed by @alanshaw probably capture a lot of metrics which, if improved, will also help with adding large repos.

Adding tracing to the "add" process is probably the best first step to get an idea of what is taking longest.
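
As a starting point for that kind of measurement, here is a minimal sketch that times `ipfs add` against repos initialised with different datastore backends. The `./dataset` path is hypothetical, and the `flatfs`/`badgerds` profile names should be verified against the go-ipfs release under test; otherwise the sketch only uses `ipfs init --profile <name>` and `ipfs add -r -Q` from the stock CLI.

```go
// compare_datastores.go: time `ipfs add` against flatfs and badger repos.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

// runIPFS runs the ipfs CLI against a dedicated repo by setting IPFS_PATH.
func runIPFS(repo string, args ...string) ([]byte, error) {
	cmd := exec.Command("ipfs", args...)
	cmd.Env = append(os.Environ(), "IPFS_PATH="+repo)
	return cmd.CombinedOutput()
}

func main() {
	dataset := "./dataset" // hypothetical package-manager snapshot

	for _, profile := range []string{"flatfs", "badgerds"} {
		repo, err := os.MkdirTemp("", "ipfs-bench-"+profile)
		if err != nil {
			log.Fatal(err)
		}

		// `ipfs init --profile <name>` selects the datastore backend for the repo.
		if out, err := runIPFS(repo, "init", "--profile", profile); err != nil {
			log.Fatalf("init (%s): %v\n%s", profile, err, out)
		}

		start := time.Now()
		if out, err := runIPFS(repo, "add", "-r", "-Q", dataset); err != nil {
			log.Fatalf("add (%s): %v\n%s", profile, err, out)
		}
		fmt.Printf("%s: added %s in %s\n", profile, dataset, time.Since(start))

		os.RemoveAll(repo)
	}
}
```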

@jimpick commented Jul 31, 2019

The existing ipfs/benchmarks setup uses a single bare-metal minion for the tests, I believe. But I'm also working on https://github.com/libp2p/testlab, which sets up a cluster ... we could design a test-target cluster with the resources required for some heavier tests.
