New nodes acceptance process #50
Comments
I think the criteria you set here are great. Correct me if I'm wrong, but it seems there's still a manual component to this system in terms of submission and review? How about completely automating it? Essentially:
How does one associate metadata with their validator? Moderation? Thoughts?
While Bitcoin's consensus protocol calls for a different flavor of node dashboard, I like earn.com's dashboard/leaderboard quite a lot. It uses metrics to quantitatively calculate which nodes are the most beneficial to the Bitcoin network, and it has an automated process for discovering nodes.

Implementation code: link. It also includes a handy tool to make sure your node can be detected by the crawler.

Moderation? Since quorum sets are built around trust, I agree that we wouldn't want to let anyone claim to be a Lightyear node (which is where the metadata comes in), and we do want to differentiate between well-performing and badly performing nodes. However, I'd support letting users moderate for themselves. If we have accurate stats showing uptime and historical performance, along with some metadata, I think that would enable people to make informed quorum-set decisions.
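To make the "let users moderate for themselves" idea concrete, here is a minimal sketch of the kind of uptime-based leaderboard described above. All names (`NodeStats`, `leaderboard`) are hypothetical illustrations, not the earn.com implementation; it assumes a crawler has already recorded a reachable/unreachable result per probe.

```python
from dataclasses import dataclass, field

@dataclass
class NodeStats:
    """Hypothetical per-node record filled in by a crawler."""
    name: str
    checks: list = field(default_factory=list)  # True = node answered that probe

    def uptime(self) -> float:
        """Fraction of probes the node answered (0.0 if never probed)."""
        if not self.checks:
            return 0.0
        return sum(self.checks) / len(self.checks)

def leaderboard(nodes):
    """Rank nodes by measured uptime, best first."""
    return sorted(nodes, key=lambda n: n.uptime(), reverse=True)

a = NodeStats("validator-a", [True] * 99 + [False])      # 99% uptime
b = NodeStats("validator-b", [True] * 95 + [False] * 5)  # 95% uptime
print([n.name for n in leaderboard([a, b])])  # ['validator-a', 'validator-b']
```

With accurate stats like these published alongside the metadata, users could weigh nodes themselves when building quorum sets rather than relying on a central moderator.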
For validator details, essentially this: stellar/stellar-protocol#111
Closing in favor of #107
We've been receiving quite a lot of PRs adding new nodes, but some of them go unmaintained after merging. I've been thinking about the process and came up with something like this:

- The node's metadata must be published in the organization's `stellar.toml` file, and the validator's host should be a subdomain of the organization's root domain. If the data matches, we merge the PR and deploy.
- If a new node's uptime is below `t` (as trial) percent of the time, we remove the node from the list and block the organization from adding a new node for the next 3 (`b` as block) months.
- If an established node's uptime falls below `q` (as quarter uptime), we remove the node and block adding it back for the next 3 (`b`) months.

I selected `t` to be 97% because it's around one day of downtime in a 30-day period. I guess these people would still be learning how to run a node, so it sounds fair to allow downtime that sums up to 24 hours. `q` is higher (99%), but it's also around one day in a 90-day period; we should require high uptime for production nodes. `b` is 3 months, as it should be long enough to make sure people care about their nodes' uptime, but not so long that it discourages them from adding their nodes back.

It's not the final process, just a starting proposal. Thoughts?
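The proposed thresholds could be checked automatically. Below is a minimal sketch of such a check, assuming `t` = 97%, `q` = 99%, and `b` = 3 months as proposed; `evaluate_node` is a hypothetical helper, not existing tooling, and months are approximated as 30 days.

```python
from datetime import date, timedelta

TRIAL_UPTIME = 0.97    # t: minimum uptime for new nodes (~1 day down per 30)
QUARTER_UPTIME = 0.99  # q: minimum uptime for established nodes (~1 day down per 90)
BLOCK_MONTHS = 3       # b: how long a removed node stays blocked from re-adding

def evaluate_node(uptime: float, in_trial: bool, today: date):
    """Return (keep, blocked_until). Hypothetical sketch of the proposed policy."""
    threshold = TRIAL_UPTIME if in_trial else QUARTER_UPTIME
    if uptime >= threshold:
        return True, None
    # Uptime dropped below the threshold: remove the node and block
    # re-adding it for roughly BLOCK_MONTHS (30-day months assumed here).
    return False, today + timedelta(days=30 * BLOCK_MONTHS)

keep, blocked_until = evaluate_node(0.95, in_trial=True, today=date(2018, 1, 1))
print(keep, blocked_until)  # False 2018-04-01
```

The same function covers both cases: a trial node is judged against `t` and an established node against the stricter `q`, with the same `b`-month block on failure.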