Build cmd in rust
## Motivation / Description
The second CPU-intensive part of request processing is building
the cmd.

Also, instead of building dicts of flags, we can use a single
flags object, which also simplifies the API of the lower-level
commands.

I chose to still build a single flags object, but we could
explore building one flags object per meta-command, since the
flags they support differ, and that could lead to a more
type-safe low-level implementation.
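
As a rough sketch of the difference (the class and field names below are illustrative only, not the actual `RequestFlags` definition in `meta_memcache.protocol`):

```python
from dataclasses import dataclass
from typing import Dict, Optional, Set


# Before: low-level commands took several loosely typed flag containers,
# e.g. a set of flags plus dicts of int-valued and token-valued flags.
def build_cmd_before(
    flags: Optional[Set[bytes]] = None,
    int_flags: Optional[Dict[bytes, int]] = None,
    token_flags: Optional[Dict[bytes, bytes]] = None,
) -> bytes: ...


# After: a single flags object is passed down, so the low-level API takes
# one argument and the native build_cmd can read typed fields directly.
@dataclass
class RequestFlagsSketch:
    return_value: bool = False
    cache_ttl: Optional[int] = None
    opaque: Optional[bytes] = None


def build_cmd_after(flags: Optional[RequestFlagsSketch] = None) -> bytes: ...
```

The per-meta-command variant mentioned above would instead define one such flags type per command, trading the single shared object for stricter per-command typing.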

## Performance
* Initial:
  * multithreaded: Overall: 110779.55 RPS / 9.03 us/req
  * singlethreaded: Overall: 111545.63 RPS / 8.96 us/req
* Rust only for response parsing:
  * multithreaded: Overall: 245898.34 RPS / 4.07 us/req
  * singlethreaded: Overall: 246165.19 RPS / 4.06 us/req
* Now (Rust also for build_cmd):
  * multithreaded: Overall: 319587.03 RPS / 3.13 us/req
  * singlethreaded: Overall: 323101.77 RPS / 3.10 us/req
bisho committed Nov 20, 2023
1 parent 8358df9 commit e4da606
Showing 23 changed files with 307 additions and 617 deletions.
4 changes: 1 addition & 3 deletions src/meta_memcache/__init__.py
@@ -33,17 +33,15 @@
 )
 from meta_memcache.protocol import (
     Conflict,
-    Flag,
-    IntFlag,
     Key,
     MetaCommand,
     Miss,
     NotStored,
     ServerVersion,
     ResponseFlags,
+    RequestFlags,
     SetMode,
     Success,
-    TokenFlag,
     Value,
 )
 from meta_memcache.routers.default import DefaultRouter
8 changes: 4 additions & 4 deletions src/meta_memcache/cache_client.py
@@ -1,4 +1,4 @@
-from typing import Callable, Iterable, Optional, Tuple
+from typing import Callable, Iterable, Optional

 from meta_memcache.base.base_cache_client import BaseCacheClient
 from meta_memcache.commands.high_level_commands import HighLevelCommandsMixin
@@ -25,7 +25,7 @@ def cache_client_from_servers(
     servers: Iterable[ServerAddress],
     connection_pool_factory_fn: Callable[[ServerAddress], ConnectionPool],
     serializer: Optional[BaseSerializer] = None,
-    key_encoder_fn: Callable[[Key], Tuple[bytes, bool]] = default_key_encoder,
+    key_encoder_fn: Callable[[Key], bytes] = default_key_encoder,
     raise_on_server_error: bool = True,
 ) -> CacheApi:
     executor = DefaultExecutor(
@@ -48,7 +48,7 @@ def cache_client_with_gutter_from_servers(
     gutter_ttl: int,
     connection_pool_factory_fn: Callable[[ServerAddress], ConnectionPool],
     serializer: Optional[BaseSerializer] = None,
-    key_encoder_fn: Callable[[Key], Tuple[bytes, bool]] = default_key_encoder,
+    key_encoder_fn: Callable[[Key], bytes] = default_key_encoder,
     raise_on_server_error: bool = True,
 ) -> CacheApi:
     executor = DefaultExecutor(
@@ -76,7 +76,7 @@ def ephemeral_cache_client_from_servers(
     max_ttl: int,
     connection_pool_factory_fn: Callable[[ServerAddress], ConnectionPool],
     serializer: Optional[BaseSerializer] = None,
-    key_encoder_fn: Callable[[Key], Tuple[bytes, bool]] = default_key_encoder,
+    key_encoder_fn: Callable[[Key], bytes] = default_key_encoder,
     raise_on_server_error: bool = True,
 ) -> CacheApi:
     executor = DefaultExecutor(
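
With the signature change above, a custom `key_encoder_fn` now returns only the encoded key bytes instead of a `Tuple[bytes, bool]`. A minimal sketch of an encoder under the new signature, assuming `Key.key` holds the key string; the hashing fallback for oversized keys is purely illustrative, not the behavior of `default_key_encoder`:

```python
import hashlib

from meta_memcache import Key


def hashing_key_encoder(key: Key) -> bytes:
    """Encode a cache key, hashing keys over memcache's 250-byte limit."""
    raw = key.key.encode("utf-8")
    if len(raw) <= 250:
        return raw
    # Deterministic digest keeps oversized keys usable as memcache keys.
    return hashlib.blake2b(raw, digest_size=24).hexdigest().encode("ascii")
```

Such an encoder would be passed as `key_encoder_fn=hashing_key_encoder` to any of the factory functions shown in this file.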