Summary
As discussed in #57, this is an issue worth looking into in more detail. Putting the plots side by side, we see that when we have lots of peers that appear offline (seen for less than 10% of the time), latency goes up (red rectangles), while the opposite happens when the number of offline peers goes down (green rectangle); a sketch of this "offline" classification follows below. The DHT Lookup Latency seems to have gone down and, although I'm not terribly proud that we didn't have time to look into this in more detail, I'll close this issue, as it doesn't seem like an alarming case.
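For clarity on the "offline" classification used above: a peer counts as mostly offline if it was seen in fewer than 10% of the crawls in the observation window. The toy Go sketch below illustrates that bookkeeping; the record type and the crawl counts are made up for illustration and do not reflect the crawler's actual output format.

```go
package main

import "fmt"

// crawlObservation is a hypothetical record: for each peer, how many of the
// window's crawls saw it online.
type crawlObservation struct {
	PeerID     string
	SeenCrawls int
}

// countMostlyOffline counts peers seen in less than 10% of crawls, i.e. the
// population the latency increase appears to correlate with.
func countMostlyOffline(obs []crawlObservation, totalCrawls int) int {
	n := 0
	for _, o := range obs {
		if float64(o.SeenCrawls)/float64(totalCrawls) < 0.10 {
			n++
		}
	}
	return n
}

func main() {
	obs := []crawlObservation{
		{"peerA", 1},  // seen in 1 of 48 crawls -> mostly offline
		{"peerB", 40}, // stable peer
		{"peerC", 3},
	}
	fmt.Println("mostly-offline peers:", countMostlyOffline(obs, 48))
}
```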
Context
We've been observing a slight increase in the DHT Lookup Latency since around mid-June 2023. The increase is on the order of ~10% and is captured in our measurement plots at: https://probelab.io/ipfskpi/#dht-lookup-performance-long-plot. This is a tracking issue to identify the cause of the latency increase.
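For reference, the metric being tracked is, roughly, the wall-clock time it takes a client to walk the DHT for a key. The Go sketch below shows one way to time such a lookup with go-libp2p and go-libp2p-kad-dht; it is a minimal illustration, not ProbeLab's actual measurement pipeline, and it assumes a recent go-libp2p-kad-dht where GetClosestPeers returns a slice of peers and GetDefaultBootstrapPeerAddrInfos is available.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Spin up a plain libp2p host acting as a DHT client.
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Join the public IPFS DHT in client mode, seeded with the default bootstrappers.
	d, err := dht.New(ctx, h,
		dht.Mode(dht.ModeClient),
		dht.BootstrapPeers(dht.GetDefaultBootstrapPeerAddrInfos()...),
	)
	if err != nil {
		panic(err)
	}
	if err := d.Bootstrap(ctx); err != nil {
		panic(err)
	}
	time.Sleep(10 * time.Second) // crude wait for the routing table to fill

	// Time a full DHT walk for an arbitrary key: this is roughly the
	// "lookup latency" being discussed (excluding any record retrieval).
	start := time.Now()
	peers, err := d.GetClosestPeers(ctx, "some-arbitrary-lookup-key")
	if err != nil {
		panic(err)
	}
	fmt.Printf("lookup returned %d peers in %s\n", len(peers), time.Since(start))
}
```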
Evidence
The short-term latency graph is at: https://probelab.io/ipfsdht/#dht-lookup-performance-overall-plot.
Observing the CDFs of the DHT Lookup latency across different regions over time, we see a clear move towards the right of the plot for several regions, most notably for eu-central, but also ap-south-1 and af-south-1 (in Week 27). The relevant weekly reports are linked below (a short sketch of how this shift can be quantified follows the links).

Week 24 (2023-06-12 to 2023-06-18): https://github.com/plprobelab/network-measurements/tree/master/reports/2023/calendar-week-24/ipfs#dht-performance
Week 25 (2023-06-19 to 2023-06-25): https://github.com/plprobelab/network-measurements/tree/master/reports/2023/calendar-week-25/ipfs#dht-performance
Week 26 (2023-06-26 to 2023-07-02): https://github.com/plprobelab/network-measurements/tree/master/reports/2023/calendar-week-26/ipfs#dht-performance
Week 27 (2023-07-03 to 2023-07-09): https://github.com/plprobelab/network-measurements/tree/master/reports/2023/calendar-week-27/ipfs#dht-performance
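To make the "shift to the right" concrete: a CDF moving right means the latency percentiles are increasing. The Go sketch below (synthetic numbers, not the actual report data) shows how weekly latency samples could be reduced to p50/p90 values for a week-over-week comparison.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0..1) of a sample of latencies.
func percentile(samples []time.Duration, p float64) time.Duration {
	if len(samples) == 0 {
		return 0
	}
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(p * float64(len(sorted)-1))
	return sorted[idx]
}

func main() {
	// Hypothetical lookup samples for one region in two calendar weeks.
	weeks := map[string][]time.Duration{
		"week-24 eu-central": {450 * time.Millisecond, 600 * time.Millisecond, 800 * time.Millisecond, 1200 * time.Millisecond},
		"week-27 eu-central": {500 * time.Millisecond, 700 * time.Millisecond, 950 * time.Millisecond, 1400 * time.Millisecond},
	}
	for name, samples := range weeks {
		fmt.Printf("%s: p50=%s p90=%s\n", name, percentile(samples, 0.5), percentile(samples, 0.9))
	}
	// A CDF moving right corresponds to these percentiles increasing week over week.
}
```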
Thoughts
The latency seems to be heading back down, but we're not sure if there's a specific reason for this behaviour. Some thoughts:
- kubo-v0.21.0-rc1 and later releases at the end of June: https://github.com/ipfs/kubo/releases/tag/v0.21.0-rc1. There doesn't seem to be anything in there that could affect performance, other than "Saving previously seen nodes for later bootstrapping" (https://github.com/ipfs/kubo/blob/release-v0.21.0/docs/changelogs/v0.21.md#saving-previously-seen-nodes-for-later-bootstrapping), but even in this case the original bootstrappers are not removed (see the sketch at the end of this section).
- kubo-v0.20.0: https://github.com/ipfs/kubo/releases#boxo-under-the-covers. Not sure if something in there could affect performance (?)
- The majority of nodes still use kubo-v0.18 as per: https://probelab.io/ipfsdht/#kubo-version-distribution, but there are about 3.5k nodes in v0.20.0 and v0.21.0, which could be enough to cause this slight increase.
- … eu-central node.

Any other thoughts @Jorropo @aschmahmann @lidel @hacdias?
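Regarding the "Saving previously seen nodes for later bootstrapping" point above: the reasoning is that previously seen peers are added alongside, not instead of, the hard-coded bootstrappers. Below is a simplified Go sketch of that idea; it illustrates the concept only, it is not kubo's actual implementation, and the helper names are made up.

```go
package main

import (
	"context"
	"time"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
)

// bootstrapSet illustrates the idea behind the v0.21 change: previously seen
// peers are appended to the bootstrap set, and the default bootstrappers are
// never removed.
func bootstrapSet(defaults, previouslySeen []peer.AddrInfo) []peer.AddrInfo {
	out := append([]peer.AddrInfo(nil), defaults...)
	seen := make(map[peer.ID]bool, len(defaults))
	for _, p := range defaults {
		seen[p.ID] = true
	}
	for _, p := range previouslySeen {
		if !seen[p.ID] {
			out = append(out, p)
			seen[p.ID] = true
		}
	}
	return out
}

// connectAll dials each bootstrap candidate on a best-effort basis.
func connectAll(ctx context.Context, h host.Host, peers []peer.AddrInfo) {
	for _, p := range peers {
		cctx, cancel := context.WithTimeout(ctx, 10*time.Second)
		_ = h.Connect(cctx, p) // failures are expected for churned peers
		cancel()
	}
}

func main() {
	ctx := context.Background()
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// defaults would come from the configured bootstrap list; previouslySeen
	// from a persisted snapshot of earlier peers (both elided here).
	var defaults, previouslySeen []peer.AddrInfo
	connectAll(ctx, h, bootstrapSet(defaults, previouslySeen))
}
```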