Performance #134

Open

laurentcau opened this issue Jan 28, 2022 · 0 comments


laurentcau commented Jan 28, 2022

Hi,

I'm investigating how to improve our current internal file format to add more flexibility.
It is basically a tree representation where each node can contain some data.
So I implemented an unqlite version that uses one key/value pair for each tree node (a few bytes) and a second one for its data.
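For reference, here is a minimal sketch of the write path I'm describing; the key scheme (`node:<id>` / `data:<id>`) and the record layout are hypothetical, just to illustrate the two-records-per-node approach:

```c
#include <stdio.h>
#include "unqlite.h"

/* Hypothetical fixed-size tree-node record: parent id plus payload size. */
typedef struct {
    int parent;            /* parent node id */
    unsigned int data_len; /* payload size in bytes */
} node_rec;

int store_node(unqlite *db, int id, const node_rec *node,
               const void *data, unqlite_int64 data_len)
{
    char key[32];
    int rc;

    /* One small key/value pair for the tree node itself... */
    snprintf(key, sizeof(key), "node:%d", id);
    rc = unqlite_kv_store(db, key, -1, node, sizeof(*node));
    if (rc != UNQLITE_OK) return rc;

    /* ...and a second pair for the node's payload. */
    snprintf(key, sizeof(key), "data:%d", id);
    return unqlite_kv_store(db, key, -1, data, data_len);
}
```

So with 10000 nodes, the write pass comes down to 20000 unqlite_kv_store calls.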
Here are the results:
10000 nodes with 1024 bytes => 10240000 bytes
unqlite: 36511744 bytes W:404017us R:210433us
legacy: 10420043 bytes W:12735us R:11907us
file size: 3.5x
write time: 31.7x
read time: 17.67x

I think 4096 is closer to the internal unqlite chunk size, so let's try it:
10000 nodes with 4096 bytes => 40960000 bytes
unqlite: 89309184 bytes W:850054us R:455387us
legacy: 41140043 bytes W:30292us R:20585us
file size: 2.2x
write time: 28.06x
read time: 22.12x
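The read path that the R timings measure is essentially the symmetric fetch; again a minimal sketch with the same hypothetical key scheme:

```c
#include <stdio.h>
#include "unqlite.h"

/* Fetch a node's payload back into buf. On entry *buf_len is the buffer
   capacity; unqlite_kv_fetch updates it with the number of bytes copied.
   Assumes the hypothetical "data:<id>" key scheme from the sketch above. */
int load_data(unqlite *db, int id, void *buf, unqlite_int64 *buf_len)
{
    char key[32];
    snprintf(key, sizeof(key), "data:%d", id);
    return unqlite_kv_fetch(db, key, -1, buf, buf_len);
}
```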

So I'm quite disappointed with the results, for both file size and time.
I understand there will be some overhead in file size and time, but in this case it looks like far too much.
Any comments?
