Collator Protocol Revamp Draft #616
Comments
Thanks for this writeup! It would make a good RFC once the fellowship/OpenGov goes live on Polkadot. Let's consider supporting collators which collate for several parachains simultaneously. A few reasons:
On another point: In light of the research I've been doing on blockspace regions, I believe that collators (or even collations themselves) should tag parachain blocks with the region they are intended to occupy. We should support this in the initial design. The goal should be for parachains to be able to hold an unbounded number of regions, and for validators not to have to do the messy work of figuring out which parachain blocks are assigned to which regions (as the chain extension logic for parachains has to take this into account anyway). In the last post in that forum thread I suggested that regions should have a unique identifier, which validators receiving announcements can check to see if the region is in surplus or will be soon.
Slightly related: #3168. Also consider merging all tightly coupled backing subsystems into one with encapsulated modules and nice separation of concerns - clear parent/child relationships.
Async backing parameters (max depth and such) should take into account how many cores are assigned to a parachain.
Update: The original ticket still holds, except for the following:

Drop idea of relayers

It is no longer the goal that other nodes can provide a collation for you. Reasoning:
New Reputation Mechanism

We will use a much faster learning algorithm and are thus also able to punt on the bootstrapping improvements.

Faster learning

Building up a good reputation used to happen locally on each validator, due to the lack of consensus data, meaning each validator has to learn individually. But since RFC-0103 we have the means to do better. We will introduce another `UMPSignal` variant:

```rust
pub enum UMPSignal {
    /// Already existing `SelectCore`.
    SelectCore(CoreSelector, ClaimQueueOffset),
    /// Signal to the relay chain a peer that should be considered good:
    ApprovedPeer(PeerId),
}
```

A collator can provide that `ApprovedPeer` signal. The relay chain will now emit events for those approved peers whenever a candidate got backed successfully. Validators will scrape the chain for those events, and thus all validators will learn about those approved peers whenever the chain produces a block.

Reputation Management

Increasing reputation can now be entirely sourced from those relay chain events. Every time you see such an event for a parachain, you increase the reputation of that peer by 1 for that parachain. We would decrease reputation whenever a peer wasted our time: advertised a collation that could not be fetched or turned out invalid. In those cases we would decrease the peer's reputation by the number of cores available on the system times 2. Increases happen system wide, while decreases happen per core/validator. Reputations are positive numbers; we don't drop below 0. We do want negative reputation changes to be more aggressive than positive ones (factor 2 above), to prevent censoring collators from pushing out new colleagues.

Staying connected while not advertising a collation should not be punished. We could simply allow enough connections to keep all connections to peers with good reputations open, e.g. 2000 would allow each parachain to have up to a thousand good nodes. This might work; an alternative is to disconnect connections without pending advertisements in a random fashion when running out of connections.

Reputations will be stored in a persisted LRU, with a size of 1000 per parachain. When multiple connected peers advertise a collation for some para id, we will always fetch from the peer with the highest reputation (modulo cheap sanity checks - e.g. via `parent_head_data_hash`: if the candidate forms a fork, we can ignore it right away).
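The bookkeeping described above can be sketched as follows. This is an illustrative model, not the actual polkadot-sdk code: `u64` stands in for `PeerId`, the type and method names are made up, and the persisted-LRU aspect is omitted.

```rust
use std::collections::HashMap;

/// Hypothetical per-parachain reputation table (names are illustrative).
#[derive(Default)]
struct ParaReputations {
    /// `u64` is a stand-in for `PeerId`.
    scores: HashMap<u64, u64>,
}

impl ParaReputations {
    /// System-wide increase: an `ApprovedPeer` event was scraped from a
    /// backed candidate, so every validator bumps this peer by 1.
    fn on_approved_peer_event(&mut self, peer: u64) {
        *self.scores.entry(peer).or_insert(0) += 1;
    }

    /// Local decrease: the peer wasted our time (failed fetch or invalid
    /// collation). Penalty is `num_cores * 2`, saturating at 0 since
    /// reputations never go negative.
    fn on_wasted_time(&mut self, peer: u64, num_cores: u64) {
        let s = self.scores.entry(peer).or_insert(0);
        *s = s.saturating_sub(num_cores * 2);
    }

    /// Unknown peers default to a score of 0.
    fn score(&self, peer: u64) -> u64 {
        self.scores.get(&peer).copied().unwrap_or(0)
    }
}
```

The asymmetry (increase by 1, decrease by cores times 2) is what keeps censoring collators from profitably pushing out new colleagues.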
We're adding this into existing backing events, not adding new events, right? I'd kinda assumed the PVF knew the block builder already when I suggested outputting this id, but yeah, fine, it's some new input instead. I'm now worried the PVF should've more control over block building though, but maybe that's another topic. 1000 parachain nodes should be overkill, but likely your LRU data structure reallocates more memory when required, with 1000 being some hard limit?
I think it is fine in this case; not sure what you mean by the PVF having more control over block building.
Requiring the collator advertising the collation to also be the block author would be a really good simplification. We introduce an additional PVF entry point (
So, how will new parachain nodes be able to join this reputation system? If the answer is "the parachain will need to handle this somehow", you are just moving the problem to someone else.
If you already have downloaded the entire POV, you can also directly call into |
I was assuming this happens after the block is imported. If we do check and stop execution immediately, we should also check that the peer id provided in the inherent is the one of the collator that built the block. Currently a malicious collator needs to import the block before they can create a PoV with their own peer id in the inherent to gain some more reputation, so this check makes that impossible. But if we implement transaction streaming or any kind of consensus on what goes in the next block, a malicious collator could build the same block without waiting for it to execute and potentially provide the collation fast enough to gain some free reputation.
Imported where? I'm confused.
I don't get this. Where do we check this?
A malicious collator doesn't have the private key of the expected author and thus cannot sign any valid block that the validation function will accept. They cannot just add some inherent, because that would change the hash of the block and thus the signature verification of the block would fail.
The same as above. The malicious collator doesn't have the private key of the expected author and cannot just add some inherent.
You learn faster because you learn good peers from the entire network, not just from your own backing. If a collation makes it through anywhere in the network, everyone learns about that good peer immediately.
The very first block will be best effort - just as it is now (always). Once the collator brought one block through, it is good. Another possibility is cooperation with an existing collator. There is no requirement that that output
Not sure what this would gain us, apart from preventing the above. You provided a good candidate, you get the opportunity to bump the reputation of some peer. Put differently: why would we want to turn down a good collation just because we received it from a different peer?
Could be a useful additional enhancement. If that entry point has a much shorter timeout than the actual PVF, we could more cheaply try out peers with a low reputation (0), to give them an opportunity to gain a reputation. In practice this should not really be an issue though: conceptually, for any non-forky parachain (e.g. Aura based) there is only one eligible block producer, hence only one collator can advertise a collation and then also be able to provide it. Not providing it will cause a reputation drop, hence you cannot DoS a reputation-0 node by getting preferred without losing reputation of your own. (Reputation losses must be larger than reputation increases for providing good collations.) By assuming block producers are PoV providers, the reputation system is made easier, because we no longer need to pick lower scored peers to give them an opportunity. We only need to pick them if no higher scored peer is advertising a collation for that chain.
Once ready, we can also use this for improving the bootstrapping story. Technically #6173 could also replace the mechanism suggested here; I am undecided whether that is a good idea:
We should be careful that reputation still permits archive nodes to download from availability. A useful infrastructure model for a parachain is: there exist parachain archive nodes, which the parachain team has high confidence never disappear, in part because they never make blocks or output anything. There are on-chain rewards which encourage parachain collators, but if they all go away or become malicious, then the parachain team or others can create new collators from the state held by the archive node. I guess this requires having some reserved collator slots too, which folks have always imagined doing by PoW, but should probably be done by delegation via some certificate.
Any update on this? I believe currently collators don't properly disconnect from validators, because I see these logs in the validators when a parachain runs out of core time:
It stays like this forever, with the same peer trying to connect and getting banned exactly every 1 second. Should I create a separate issue? Or any advice on how to fix it? Although, since it keeps connecting forever, the collator doesn't actually get banned, so maybe this is not a problem? Not sure.
@tmpolaczyk please create a separate ticket. This one is just for discussing the proposed collator protocol design updates. |
With this revamp, I don't think we're solving the big problem of a malicious collator being able to regenerate their PeerId very easily after getting a bad reputation.
So how is this not leading to centralisation?
Board (to be filled): https://github.com/orgs/paritytech/projects/68
Note: Spam resilient accepting of messages is generally useful. So ideally we abstract that part and make it useful in other places as well. E.g. for offchain XCMP on collators.
Scope
In addition to the collator protocol changes below, it would be good to take the opportunity to revamp the whole backing pipeline. Those subsystems are very much interconnected, and merging them into one subsystem with a hierarchical pipeline structure would allow for better code encapsulation, improving maintainability and making the code easier to reason about. E.g. quite a few types would no longer need to be exposed globally, but could be the interface of concrete modules, making it
(a) immediately clear to the reader what the scope and purpose of those types is;
(b) possible to better guarantee invariants: fields of the type can be protected and invariants can be guaranteed by the interface.
Current Status
The collator protocol as implemented has not aged very well. Code quality is modest: it lacks structure, separation of concerns and an overarching design. Apart from that, the connection management for collators needs improvements.
Requirements and Goals of the Collator Protocol
The purpose of the collator protocol is to provide the means for collators to deliver collations to validators. What complicates matters is that collators are not trusted and, even worse, unknown. Any computer on the internet can pretend to be a collator and try to consume validator resources. One important ramification of this is that collations are not sent by collators directly; they announce them first, and then the validator can decide whether to fetch them via request/response or not.
Therefore it is not that easy for nodes to just eat all the bandwidth; still, with lots of nodes they could open up lots of connections, causing DoS this way.
Another concern is nodes announcing a collation, but then, when the validator tries to fetch it, either delivering data very slowly or sending invalid data. In both cases the node would waste time and resources, which could also be used for DoS.
We would like to establish a persistent reputation system for the collator protocol, where nodes can gain the "trust" of validators over time and other collators can then get their collations to the validator via those "trusted" nodes. I am quoting the word "trust" here because obviously we never trust anybody; what I mean is that the validator will be willing to invest more resources in those proven peers than in others.
Even a persistent reputation system, where only some nodes need to be able to connect, needs to be bootstrapped. Matters are worse for on-demand parachains, as there the bootstrapping could take even longer. For this reason there will also be the possibility of paying fees and locking DOT on the relay chain to register `PeerId`s, which will then be in that preferred/trusted set from the very beginning.

Collator Side
The protocol on the collator side should take care of the following:
- Connection management for any `ParaId` we are a collator for.

Currently connection management is tied to collations. If a collation is provided via `DistributeCollation`, we try to establish the required connections. This should be changed so that connection management is independent of actually provided collations. We should be instructed to establish connections for some set of `ParaId`s. By separating concerns here, we can connect in advance before we even have a collation and thus speed things up: once we receive the collation we can immediately advertise it.

If a collation is provided without a request to establish connections first, we can still try to connect on the fly. This means this could be a backwards compatible change.
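The separation could look roughly like the sketch below. The message and struct names are hypothetical (not the actual subsystem messages), `u32` stands in for `ParaId`, and the actual advertising logic is elided:

```rust
use std::collections::HashSet;

/// Illustrative collator-side messages, assuming connection management
/// is driven independently of collation distribution.
enum CollatorSideMessage {
    /// Pre-establish connections to backing validators for these paras,
    /// before any collation exists.
    ConnectFor { para_ids: Vec<u32> },
    /// Provide a collation for distribution.
    DistributeCollation { para_id: u32, pov_hash: [u8; 32] },
}

#[derive(Default)]
struct CollatorSide {
    /// Paras we maintain connections for.
    connected: HashSet<u32>,
}

impl CollatorSide {
    fn handle(&mut self, msg: CollatorSideMessage) {
        match msg {
            CollatorSideMessage::ConnectFor { para_ids } => {
                // Connect in advance, so we can advertise immediately
                // once a collation arrives.
                self.connected.extend(para_ids);
            }
            CollatorSideMessage::DistributeCollation { para_id, pov_hash: _ } => {
                // Connect on the fly if no `ConnectFor` was issued first
                // (the backwards compatible path).
                self.connected.insert(para_id);
                // ... advertise the collation to the backing group ...
            }
        }
    }
}
```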
In general it should be possible for a single node to collate for multiple parachains at the same time, and at least to provide collations for other collators as well. The goal is: if there is at least one "good" node able to connect to validators, there should be a path for collators to get their collation to validators. While persistent reputations are per `ParaId`, being a collator/collation providing node for multiple parachains does not directly benefit your score, but you gain a slight advantage, as your connection might already be established when the next parachain you are collating for comes up. Being a collation providing node for multiple parachains is a rather weak goal; if it complicates matters we can drop it. Separating collation production from providing it to validators is a must, though.

Requirements of projects like https://www.tanssi.network/ should be taken into account as well.
Validator Side
Retrieving collations from nodes/collators while being robust against the following DoS attacks:
Separating PoV production from providing a PoV
Maintaining liveness becomes a lot easier if we don't require the collator producing a collation to also be the node advertising it to the backing group. With this, instead of requiring all honest nodes to be able to connect, we reduce the problem to: "Some honest node needs to be able to connect."
Connection Management
We keep a persisted `HashMap` of known peers for each `ParaId`. This `HashMap` is keyed by the `PeerId` of the connected peer and stores a reputation score. We will keep in memory those `HashMap`s for any currently relevant assignments; let's call them score maps.

By default we will accept any incoming connection, up to a certain threshold of already established connections, where we will start becoming picky: once the threshold is reached, we will look up the `PeerId` of incoming connection requests in our score maps to get the score of the peer; if not found, we default to a score of 0. We will then only accept the connection if its score is higher than the connection score (see below) of any established connection, and we will disconnect that lower scored connection (if there are multiple, the lowest scored). If the connecting peer is scored lower than all the live connections, we will reject the connection. In case the connecting peer has the same score as the lowest scored connection, we decide in some random fashion (reservoir sampling) whether to reject it or terminate one of those low scored existing connections. Connections having a collation fetch in progress will be immune.

Given that we will always only utilize relevant score maps at any given point in time, we will automatically stop accepting connections from peers associated with parachains we currently don't care about - iff there is pressure on available connection slots. This is different from the status quo, where we immediately drop connections if the announced `ParaId` is not relevant right now.

Put differently: connection management on the validator side is changed to be very permissive if available slots allow for it. We allow any incoming connection and also keep them as long as there is no scarcity; only then will we disconnect based on score, which is time dependent based on current assignments.
Assuming a suitable score management, this should allow connections of at least some honest peers.
Robust Collation Fetching Problem
Once peers have successfully connected, they will usually advertise collations. Now we are facing the next problem and DoS vector: from which peer will we actually fetch the collation? Collations are heavy; we expect them to take several seconds to download. If we picked a bad peer, it could provide a very slow transmission, making us run into a timeout and need to try the next peer, which might do the same. Torrent style fetching of PoVs could actually help there, as we could then start fetching different chunks from all peers advertising the same collation in parallel, without wasting resources. If a peer does not deliver a particular chunk, we can cheaply request it from another peer.
For simplicity we will not start out with torrent style fetching though, but would like to resort to simpler means. One thing that immediately comes to mind is using the peer's score also for deciding from whom to fetch. The problem: if we simply only fetch from the peer with the highest score, this leads to only one peer building up a score (if always available).
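The naive highest-score policy just described can be sketched as below (`u64` stands in for `PeerId`); its drawback is precisely that the same top-scored advertiser is picked every time:

```rust
/// Pick the fetch target among peers advertising the same collation:
/// simply the highest-scored advertiser. Cheap sanity checks (e.g.
/// fork filtering via `parent_head_data_hash`) are assumed to have
/// already removed ineligible advertisements.
fn pick_fetch_peer(advertisers: &[(u64, u64)]) -> Option<u64> {
    advertisers
        .iter()
        .max_by_key(|&&(_, score)| score)
        .map(|&(peer, _)| peer)
}
```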
Solving this problem is harder and ties in strongly with how we determine the score, see section "Determining the Score".

Connection Score

Connections, when established, get the score as determined by the entry in the current score maps for the connecting peer (the peer score), but then it will change over time: at some point the current assignments go out of scope and the score maps get replaced. For peers only serving one parachain this will usually mean the connection score drops to 0; if the peer is serving multiple parachains, it will be its score as found in the now current score maps.

Peer Score
Proof of Work
We don't know anything about collators. In fact, any libp2p node on the internet can try to connect to backing validators pretending to be a collator. This is problematic, because they would eat up available connection slots, potentially preventing any real collators from connecting. The underlying problem is that being a libp2p node (having a `PeerId`) is very cheap, hence it is easy to issue millions of connection requests. What we need to be able to do is churn through them as quickly as possible. By simply looking up the `PeerId` in those score maps, we have a very fast way of dropping low scored peers. Therefore, to be able to cause DoS, you will need to build up a good score for your `PeerId`, which requires effort and is no longer cheap. In essence we are introducing a proof of work concept, but with the work being useful (advertising and providing collations).

Determining the Score
Finding a good algorithm for determining the peer score is challenging. We have a few conflicting requirements:
For solving (2), we could increase a peer's score on each valid/expected advertisement, if a corresponding collation was successfully fetched and validated from any peer. The problem here is that this would violate (1), as we would potentially advance scores for a lot of peers at the same time. This would likely still be fine for connection management, but not for collation fetching: e.g. assume 100 bad peers in the set and we try one after the other, every time with a timeout of a few seconds - this would be a significant service degradation, which is also easy to keep up, as we can only punish on actual fetches. The good news is that a complete DoS over longer periods of time could be avoided, but an attacker could still mess with block times quite a bit.
For solving (1) we could only advance score on successful collation fetches after they passed validation. We can think of at least two potential issues with this approach:
The compromise, I am proposing, should work with async backing:
This way, lower scored peers get an opportunity ramping up their score, but at the same time malicious actors will hardly affect quality of service. We could even make the system adaptive, by keeping track of the current error rate: If it reaches some threshold we could fallback to parallel fetches: Instead of trying a low scored peer first and then falling back to a high score peer, we download from both at the same time. The purpose of the download from the lower scored peer is only for advancing its score.
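The adaptive fallback could be sketched as follows; `ERROR_THRESHOLD`, the type and the function names are made up for illustration, and `u64` stands in for `PeerId`:

```rust
/// Hypothetical tuning constant: error rate at which we switch from
/// sequential to parallel fetching.
const ERROR_THRESHOLD: f64 = 0.2;

#[derive(Debug)]
enum FetchPlan {
    /// Sequential: try the low-scored advertiser first (letting it earn
    /// score), falling back to the high-scored one on failure/timeout.
    TryLowFirst { low: u64, high: u64 },
    /// Parallel: download from both at once; the low-scored fetch only
    /// serves to advance that peer's score, so service quality holds.
    Parallel { low: u64, high: u64 },
}

fn plan_fetch(low: u64, high: u64, observed_error_rate: f64) -> FetchPlan {
    if observed_error_rate >= ERROR_THRESHOLD {
        FetchPlan::Parallel { low, high }
    } else {
        FetchPlan::TryLowFirst { low, high }
    }
}
```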
The downside of this algorithm is that it learns very slowly when under attack. Good seeding would help in those situations, see the next section.
Not providing an advertised collation in time when requested, will lower the peer's reputation the same amount as it would have been increased when providing it. Given that we only score up and down on providing/not providing valid collations that amount can likely be 1 score point.
Note: With asynchronous backing it becomes a lot easier to provide lots of collations (bounded, but not just 1) which are valid, but garbage in the sense that only one of them will end up getting backed on chain. Therefore it might make sense to keep track of candidates and only reward providers once we saw them backed on chain. The scraping for this is already in place.
Seeding and on-demand parachains
As mentioned, the system learns rather slowly, especially when under attack. To make the system more robust it makes sense to offer the possibility of seeding the score map. We can provide an extrinsic for registering `PeerId`s on chain, which will then be used to seed the score map. By requiring a fee and a deposit for this registration, DoS attempts would become very expensive.

Especially in the light of shared collator sets, where collators offer their services to multiple paras, it might make sense to even make this weighted: e.g. there can be a high QoS collator service provider who asks for rather high fees, but in exchange accepts higher opportunity costs via a larger deposit - which grants it a higher initial score.
Alternatives we punt on for now
While being able to churn quickly through connection requests should get us a long way, there is actually an even more resilient way of having "good" connections: have the validator connect to the collator! This is super robust and virtually not DoS-able; the problem is that we have the requirement that the collator set should be open and opaque to validators. For this reason we punt on this possibility for now.
What could be sensible in a future iteration is, once we reached our connection threshold and we are dropping connection requests, we could try to establish a few connections to nodes with a known high score ourselves.
Relation to substrate peer set slots
In a first version we will implement this purely in the Polkadot code base in the collator protocol itself. We will set the limit on incoming connections to a quite high value (e.g. 3000) and then start connection dropping at a fraction of that value. Later on we might want to think of pushing this to the substrate level itself as then we could drop connections faster - e.g. right at connection establishment.
Relevant Assignments
Validators have to keep track of their core assignments. They should always maintain a view of currently relevant parachains (as given by currently active leaves) and also any upcoming parachains. We should then make sure to always have the score maps relevant to those relevant assignments loaded.
For a start we will not differentiate between current and upcoming assignments and assume they are equally important, which should be good enough: it might seem that current assignments are more important, but in fact those might already have provided a collation, while the upcoming ones could profit a lot from early connection establishment, especially during an attack, where it might take a while to establish "good" connections.
Code Structure
WIP
When and how a connection should be established should be separated from when and how to produce/provide a collation:
Cumulus Changes
To fully utilize the new protocol, parachains should gossip blocks within their own network and have multiple collators/nodes provide the collation to the backing group.
The providing node does not even have to be a collator. Technically it would be possible to have service providers maintaining a good reputation with validators to offer their "collating" services to multiple parachains.
Previous Work on this
Anything still relevant should have made it into this ticket; regardless, for reference there exists an older variant of this ticket.
Alternative discussion on connection management leading to the solution proposed here.
Implementation steps
Draft, just notes for me to not forget things: