Is IPFS serving the closest copy of cached content? #18

Open · yiannisbot opened this issue Sep 6, 2022 · 4 comments

Comments

@yiannisbot (Member)

I'm wondering what would be the outcome of the following experiment.

  • A publisher publishes a file from a US-based node.
  • An EU-based node requests and fetches the file and then either pins it permanently or provides it temporarily.
  • At this point, the provider records should include the PeerIDs of both the US- and the EU-based nodes.
  • Another EU-based peer then requests the same file.

Have we verified that they will receive the EU-based copy? @dennis-tra did we look into this aspect for the experiments we reported here: https://gateway.ipfs.io/ipfs/bafybeidbzzyvjuzuf7yjet27sftttod5fowge3nzr3ybz5uxxldsdonozq ?

The third step above would also be worth a look, i.e., do both PeerIDs end up in all the provider records published in the system? And if not, in what fraction of the records do both peers appear?
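
For reference, here is a minimal Go sketch of how one could check this from the client side: it lists the PeerIDs appearing in the provider records for a given CID, so one can see whether both publishers show up. It assumes go-libp2p and go-libp2p-kad-dht in client mode, and takes the CID as a command-line argument; it is a sketch, not part of any existing tooling.

```go
// Hypothetical sketch: list the peers found in the provider records for a CID.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/ipfs/go-cid"
	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/core/peer"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// DHT client mode: we only look up provider records, we don't serve them.
	kad, err := dht.New(ctx, h, dht.Mode(dht.ModeClient))
	if err != nil {
		panic(err)
	}

	// Connect to the default bootstrap peers so the lookup can proceed.
	for _, maddr := range dht.DefaultBootstrapPeers {
		if pi, err := peer.AddrInfoFromP2pAddr(maddr); err == nil {
			_ = h.Connect(ctx, *pi)
		}
	}

	c, err := cid.Decode(os.Args[1]) // e.g. CID_x from the experiment above
	if err != nil {
		panic(err)
	}

	// Every AddrInfo corresponds to a provider record found in the DHT;
	// with count=0 the lookup returns as many providers as it can find.
	for p := range kad.FindProvidersAsync(ctx, c, 0) {
		fmt.Println("provider:", p.ID)
	}
}
```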

@dennis-tra (Contributor)

That would be a great study! We did not look into that in our paper and won't be able to deduce that information from the corresponding datasets :/

You know this, but for completeness: in that paper we published content, resolved the provider records, fetched the content from multiple locations, and stopped there. At that point, as you said, the provider records should include the PeerIDs of all nodes that fetched the content. We would have had to request the data again and track where it was served from - which we didn't do 🤷‍♂️

@mxinden (Contributor) commented Sep 7, 2022

> Have we verified that they will receive the EU-based copy?

IPFS has no mechanism that explicitly favours providers by geography, right? The closest thing to it, I guess, is that IPFS would favour the closer node over time due to lower latency and higher throughput. Not because of an explicit rule, but simply because it is faster to retrieve data from the closer node, so overall more data ends up being retrieved from it. Am I missing something?
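
To make the "implicitly, over time" point concrete, here is a hypothetical sketch (not Bitswap internals): it just compares round-trip times to the two providers using go-libp2p's ping protocol. peerUS and peerEU are placeholder IDs assumed to be reachable; the closer node simply tends to win comparisons like this, and therefore serves more of the data.

```go
package sketch

import (
	"context"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/p2p/protocol/ping"
)

// fasterOf returns whichever of the two providers answers a single ping with
// the lower round-trip time. Nothing here selects peers by geography; the
// closer node simply tends to respond faster.
func fasterOf(ctx context.Context, h host.Host, peerUS, peerEU peer.ID) peer.ID {
	rtt := func(p peer.ID) time.Duration {
		res := <-ping.Ping(ctx, h, p) // one ping round trip
		if res.Error != nil {
			return time.Hour // treat unreachable peers as "very slow"
		}
		return res.RTT
	}
	if rtt(peerEU) <= rtt(peerUS) {
		return peerEU
	}
	return peerUS
}
```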

@yiannisbot (Member, Author)

Thanks for the input @mxinden!

> ... over time

I guess you mean here that it would favour the fastest peer at the Bitswap level, right? I.e., if it establishes a connection to both providers and figures out that one is faster than the other - but do we reach that stage? 😁

A few things to find out here:

  • When publisher_2 advertises content CID_x (previously advertised by publisher_1), in what fraction of the (20) provider records initially published by publisher_1 does publisher_2's PeerID end up?
  • who does the client connect to, if they have the PeerID of both publisher_1 and publisher_2?
    • Before connecting to either of the publishers, the client would have to walk the DHT again to do the mapping PeerID -> multiaddress. Contacting both is the preferred way, performance-wise, but adds extra load to DHT servers.
  • The ideal way to proceed, I would argue, is to eventually connect to both publishers and start the transfer through Bitswap. After the first few blocks, the client can identify the faster of the two publishers, continue with that one and prune the other Bitswap session (see the sketch after this list). Overhead stays low and content is delivered faster.
    • One alternative would be to identify the geographically closest peer (from the multiaddress) and proceed with that one only. This most likely (though not necessarily) means content is delivered faster, and it avoids setting up an extra Bitswap connection.
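
A minimal sketch of that "race, then prune" idea, assuming a hypothetical Provider abstraction that stands in for one Bitswap session per publisher (the real session logic lives inside Bitswap and is not exposed like this):

```go
package sketch

import (
	"context"
	"time"
)

// Provider is a hypothetical stand-in for a per-publisher Bitswap session.
type Provider interface {
	FetchFirstBlocks(ctx context.Context, n int) error // request the first n blocks
	Close()                                            // prune the session
}

// pickFaster starts the transfer with both publishers, keeps whichever one
// delivers the first few blocks first, and prunes the other session.
func pickFaster(ctx context.Context, a, b Provider) Provider {
	providers := []Provider{a, b}
	winner := make(chan int, 2)
	for i, p := range providers {
		go func(i int, p Provider) {
			cctx, cancel := context.WithTimeout(ctx, 10*time.Second)
			defer cancel()
			if err := p.FetchFirstBlocks(cctx, 4); err == nil {
				winner <- i // this provider delivered the first blocks
			} else {
				winner <- -1 // too slow or unreachable
			}
		}(i, p)
	}
	for n := 0; n < 2; n++ {
		if i := <-winner; i >= 0 {
			providers[1-i].Close() // prune the slower Bitswap session
			return providers[i]    // continue the transfer with the faster one
		}
	}
	return nil // neither publisher delivered the first blocks in time
}
```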

Curious to find out what happens in practice :)

Of course, for that optimisation to have a performance impact, it would require that "enough" content in IPFS is stored on more than one peer - which is what we've been asked about and have been discussing, @dennis-tra :) But I agree it's a great study to do. We should probably craft an RFM out of this.

@dennis-tra (Contributor) commented Sep 8, 2022

> After the first few blocks, the client can identify the faster of the two publishers, continue with that one and prune the other Bitswap session. Overhead stays low and content is delivered faster.

I can also imagine a mechanism where the client load-balances the traffic between the two if the upload bandwidth of one provider doesn't saturate the client's download bandwidth, e.g., requesting one part of the graph from one provider and the other part from the other. Does this already happen?
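
Roughly what I mean, as a hypothetical sketch (real Bitswap want-list management is more involved; this just splits the outstanding blocks of the DAG round-robin between the two providers):

```go
package sketch

import "github.com/ipfs/go-cid"

// splitWantlist divides the outstanding blocks between two providers so that
// neither one has to saturate the client's download bandwidth on its own.
func splitWantlist(blocks []cid.Cid) (forA, forB []cid.Cid) {
	for i, c := range blocks {
		if i%2 == 0 {
			forA = append(forA, c)
		} else {
			forB = append(forB, c)
		}
	}
	return forA, forB
}
```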

> Of course, for that optimisation to have a performance impact, it would require that "enough" content in IPFS is stored on more than one peer

I think the hydra-boosters could be a good source for determining the statistics around that. They store the provider records in DynamoDB, and we could simply count the provider records that have 1, 2, 3, ..., n providing peers. This should be a statistically significant indicator of the distribution.
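
For example, assuming the records can be exported as (CID, PeerID) pairs (the actual DynamoDB schema may differ), the counting itself is a small aggregation:

```go
package main

import "fmt"

// record is a hypothetical (CID, PeerID) pair as it might be exported from the
// hydra-boosters' provider-record store.
type record struct {
	CID    string
	PeerID string
}

// providerHistogram counts, for each number n of distinct providing peers,
// how many CIDs have exactly n providers.
func providerHistogram(records []record) map[int]int {
	peersPerCID := make(map[string]map[string]struct{})
	for _, r := range records {
		if peersPerCID[r.CID] == nil {
			peersPerCID[r.CID] = make(map[string]struct{})
		}
		peersPerCID[r.CID][r.PeerID] = struct{}{}
	}
	hist := make(map[int]int)
	for _, peers := range peersPerCID {
		hist[len(peers)]++
	}
	return hist
}

func main() {
	// Toy input only, to show the shape of the output.
	fmt.Println(providerHistogram([]record{
		{CID: "bafy...a", PeerID: "12D3...US"},
		{CID: "bafy...a", PeerID: "12D3...EU"},
		{CID: "bafy...b", PeerID: "12D3...US"},
	})) // map[1:1 2:1]
}
```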
