Currently we only use the expected_dk metric after finishing the GET query, to decide where to store data.
However, there is a possibility that the first response to a GET query contains nodes that are all Sybil nodes, and all closer to the target than any other nodes. Because of the way the query works, we will then only consult these Sybil nodes, and thus the responders in ClosestNodes will all be Sybil.
Usually nothing is affected by this, since most of the time expected_dk falls at the 20th node anyway.
But when the current DHT size estimate is 0 because there are no previous queries, we might query too many nodes in parallel at once, which could cause too many packets to be dropped by our router. So we need to check #38.
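One way to mitigate the zero-estimate case is to fall back to a small, fixed parallelism when no DHT size estimate exists yet. The sketch below is purely illustrative, assuming hypothetical names (`parallelism_for`, `DEFAULT_PARALLELISM`, `MAX_PARALLELISM`) that are not part of the actual codebase:

```rust
// Hypothetical sketch: cap the number of nodes queried in parallel when the
// DHT size estimate is 0 (i.e. no previous queries to learn from).
// All names and constants here are illustrative assumptions.

const DEFAULT_PARALLELISM: usize = 3; // conservative alpha when we know nothing
const MAX_PARALLELISM: usize = 16;    // hard cap to avoid router packet drops

/// Choose how many nodes to query in parallel given the current DHT size
/// estimate; 0 means "no estimate yet".
fn parallelism_for(dht_size_estimate: usize) -> usize {
    if dht_size_estimate == 0 {
        // No estimate: use a small, safe default instead of flooding
        // the router with parallel packets.
        DEFAULT_PARALLELISM
    } else {
        // With an estimate, scale logarithmically but stay under the cap.
        ((dht_size_estimate as f64).log2().ceil() as usize)
            .clamp(DEFAULT_PARALLELISM, MAX_PARALLELISM)
    }
}

fn main() {
    assert_eq!(parallelism_for(0), DEFAULT_PARALLELISM);
    assert_eq!(parallelism_for(1024), 10);
    assert!(parallelism_for(10_000_000) <= MAX_PARALLELISM);
    println!("parallelism(0) = {}", parallelism_for(0));
}
```

The exact scaling rule matters less than the fallback: without any estimate, the query should never exceed a conservative default.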
The only way this could happen is if your routing table is poisoned ... maybe we can add some checks for keeping buckets diverse, ID- and IP-wise ... but not in the query.