Use pytest-benchmark to run a better benchmark #691
Comments
I just came across
For the GC I'm wondering, could it be important to leave that on? Let's say you are doing something in the code that causes a lot of cycles, therefore triggering a lot of GC runs and making it slow. If it was possible to not create these cycles by changing the code you could speed things up, but you wouldn't know this if GC was disabled. I'm just thinking off the top of my head here, and don't even have code in mind as an example, so it may not make sense to even consider this.
Per my understanding, you raise some very good points about things like concurrent tests. As for GC, I agree, by default it makes sense to leave GC enabled for the benchmarks you're considering.
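As an illustration of that trade-off, here is a minimal sketch (a hypothetical example, not code from this repo or from pytest-benchmark) showing how a cycle-heavy function can time differently with the collector on versus off. Note that timeit disables GC by default, so the first measurement re-enables it explicitly:

```python
import timeit


def make_cycles(n: int = 10_000) -> None:
    # Build pairs of lists that reference each other, so only the cyclic
    # garbage collector can ever reclaim them.
    for _ in range(n):
        a, b = [], []
        a.append(b)
        b.append(a)


# timeit turns GC off by default, so re-enable it for the first measurement.
with_gc = timeit.timeit(make_cycles, setup="import gc; gc.enable()", number=50)
without_gc = timeit.timeit(make_cycles, setup="import gc; gc.disable()", number=50)

print(f"GC enabled : {with_gc:.3f}s")
print(f"GC disabled: {without_gc:.3f}s")
```

If the GC-enabled number is much worse, that is exactly the kind of signal a benchmark run with GC disabled would hide.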
OK, this all sounds good to me; I'd definitely merge a PR for this. We may want to wait until after I finish with #687 because it will be a breaking change. I believe I have the code done; I'm just working on finalizing the documentation. Once it is done I would like to benchmark both clients against the official client. There should be no change in async performance. I'm curious to see the performance of the synchronous client vs the official client. My expectation is that performance between the two will be roughly equal, but I'd like to verify that. If you are interested and have time it would be nice to set this up on the
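For what it's worth, one way such a comparison could be laid out with pytest-benchmark is sketched below; the imports, client names, URL, and search call are placeholders, not the actual APIs of either package:

```python
import pytest

from official_client import OfficialClient  # placeholder import
from sync_client import SyncClient          # placeholder import

URL, KEY, INDEX, QUERY = "http://127.0.0.1:7700", "masterKey", "movies", "botman"


@pytest.mark.benchmark(group="search")
def test_search_official(benchmark):
    client = OfficialClient(URL, KEY)
    # pytest-benchmark repeatedly calls the wrapped callable and records timings.
    benchmark(lambda: client.index(INDEX).search(QUERY))


@pytest.mark.benchmark(group="search")
def test_search_sync(benchmark):
    client = SyncClient(URL, KEY)
    benchmark(lambda: client.index(INDEX).search(QUERY))
```

Putting both tests in the same benchmark group makes the side-by-side numbers show up in a single results table.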
Awesome, I'll follow along anyway and see your progress on these branches/issues! And will be in touch if I have time.
@sanders41 Here's a very nice snippet that Neo4j's async client maintainer used (source):

```python
# Imports added for completeness; the original snippet assumes they are in scope.
import asyncio

import pytest_asyncio


@pytest_asyncio.fixture
async def aio_benchmark(benchmark, event_loop):
    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            # Wrap the coroutine so pytest-benchmark times a sync callable
            # that drives it to completion on the test's event loop.
            @benchmark
            def _():
                return event_loop.run_until_complete(func(*args, **kwargs))
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper
```

Usage:

```python
async def some_async_function_to_test(some_async_fixture):
    ...


def test_benchmark_please(some_async_fixture, aio_benchmark):
    aio_benchmark(some_async_function_to_test, some_async_fixture)
```
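One caveat, as far as I know: newer pytest-asyncio releases deprecate the `event_loop` fixture, so a variant that owns its own loop may age better. A sketch of that idea (untested against this repo):

```python
# Variant sketch (assumption: you want to avoid the deprecated `event_loop`
# fixture). The fixture creates and owns a loop for one benchmarked test.
import asyncio

import pytest


@pytest.fixture
def aio_benchmark(benchmark):
    loop = asyncio.new_event_loop()

    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            @benchmark
            def _():
                return loop.run_until_complete(func(*args, **kwargs))
        else:
            benchmark(func, *args, **kwargs)

    yield _wrapper
    loop.close()
```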
The full code is in
For asyncio itself I think we will be fine because I'm using
I ran the current benchmarks with the new code and, as expected, I saw no difference in the async client performance. I did see slightly better performance from my sync client than the official client. This surprised me, as I expected the performance to be the same, and I can't explain why. I expect the difference is just by chance, though I did see the difference over multiple runs. Since performance in the async client came out as expected I am going to go forward with the PR.
Awesome, sounds like a plan 😎. So you'll be maintaining both the sync and async Python clients yourself? Or would this be something the Meilisearch team will take over from you long term?
I will continue maintaining this package myself. I am also a maintainer (though not a Meilisearch employee) on the official Meilisearch Python client, and will continue with that. I have seen people who use both this async package and the official package. The APIs between these two packages are slightly different, so by providing both options here those people can work with the exact same API in their different applications, the only difference being whether they await or not.
@prrao87 the release of the version 2 client is done now. I left the old-style benchmarking because I wanted you to be able to get credit for your idea if you want to implement it. If not, no pressure.
Hey @sanders41 looks good, and the suffix
I have also been pretty busy lately and haven't had the chance to try out any of the other benchmark packages yet.
Sounds good, let's revisit next week then!
Hi @sanders41, I've tried writing the
Otherwise, I think we can close this issue and focus on bigger priorities. Thanks!
Ok, I'll take a look and see if I can come up with a good way to do it.
If I may chime in, why not use https://codspeed.io? It is used by pydantic.
codspeed is something I have wanted to look at, but just haven't had time. The tricky part here would be with it running in CI. Slowdowns would need to be looked at further to see if it is code under our control that caused the slowdown, and not network issues or Meilisearch itself that slowed things down. What may make sense is to add it to the nightly tests instead of making it a standard part of CI, possibly with an option to also trigger a run manually. That way, if we are specifically working on speed in code we control, we can kick it off to test how we are doing.
Hi @sanders41, I've been using `pytest-benchmark` lately to run better benchmarks for Python functions on databases, and it depends on `pytest` underneath. I've also checked out `rich-bench`, but it's not as feature-rich as `pytest-benchmark`, specifically lacking these two features:

- x warmup cycles before recording timing numbers: this is important when running DB benchmarks, because many of them involve optimizations when you have a warm cache, so warmup makes for a fairer comparison. `pytest-benchmark` offers this as an added argument.
- Disabling the garbage collector while timing, which you can do in `pytest-benchmark` by just enabling a flag.

I'm not sure if you've found other better libraries out there to do this, but the main reason I think it matters is that `time.time()` isn't as good as `time.perf_counter()`, which is the default system clock used to measure these timings in `pytest-benchmark`, and I find that it's quite robust over a range of settings.

Was wondering on your thoughts on this, and if you were looking to accept a PR!
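For reference, a minimal sketch of what those two knobs look like with pytest-benchmark; the workload function here is a placeholder, not code from either client:

```python
import time


def placeholder_workload() -> None:
    # Stand-in for a real call against a warm Meilisearch index.
    time.sleep(0.001)


def test_placeholder_workload(benchmark):
    # Pedantic mode gives explicit control over warmup rounds and measured rounds.
    benchmark.pedantic(placeholder_workload, rounds=20, warmup_rounds=5)
```

The same behaviour is also available globally from the command line, e.g. `pytest --benchmark-warmup=on --benchmark-warmup-iterations=5 --benchmark-disable-gc`.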