perf: Add AppendProtocols for an allocation-free way to get the protocols #188
base: master
Conversation
While looking at Kubo benchmarks, I noticed that if you are using connection filters, you allocate ~200 MiB/s in the `Protocols` code path. This new method lets callers pass in a preallocated slice, which is reused instead of allocating a new one.

```
goos: linux
goarch: amd64
pkg: github.com/multiformats/go-multiaddr
cpu: AMD Ryzen 5 3600 6-Core Processor
BenchmarkProtocols-12          3779694   312.0 ns/op   640 B/op   1 allocs/op
BenchmarkAppendProtocols-12   26105854   43.13 ns/op     0 B/op   0 allocs/op
```
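For illustration, here is a minimal sketch of how the new method could be used from a hot loop. The append-style signature `AppendProtocols(ps []Protocol) []Protocol` and the address literals are my assumptions, not taken from the diff:

```go
package main

import (
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	addrs := []ma.Multiaddr{
		ma.StringCast("/ip4/127.0.0.1/tcp/4001"),
		ma.StringCast("/ip6/::1/tcp/4002"),
	}

	// Scratch slice reused across iterations: after the first call the
	// backing array is large enough, so (assuming an append-style API)
	// no further allocations are needed.
	var buf []ma.Protocol
	for _, addr := range addrs {
		buf = addr.AppendProtocols(buf[:0]) // proposed method; signature assumed
		for _, p := range buf {
			fmt.Println(p.Name)
		}
	}
}
```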
force-pushed from 37d7f3e to 5bd7ff1
What’s the code path that’s calling `Protocols()`?
```
$ rgrep -IE "\.Protocols\(\)" # I've filtered the relevant ones (where it's a multiaddr)
vendor/github.com/libp2p/go-libp2p/p2p/host/routed/routed.go: if addr.Protocols()[0].Code != ma.P_P2P {
vendor/github.com/libp2p/go-libp2p/p2p/protocol/identify/id.go: protos := a.Protocols()
vendor/github.com/libp2p/go-libp2p/p2p/protocol/identify/id.go: if protosMatch(protos, ga.Protocols()) {
vendor/github.com/libp2p/go-libp2p/p2p/net/swarm/swarm_dial.go: protos := addr.Protocols()
vendor/github.com/libp2p/go-libp2p/p2p/net/swarm/swarm_transport.go: protocols := a.Protocols()
vendor/github.com/libp2p/go-libp2p/p2p/net/swarm/swarm_transport.go: protocols := a.Protocols()
vendor/github.com/libp2p/go-libp2p/p2p/net/swarm/swarm_transport.go: protocols := t.Protocols()
vendor/github.com/multiformats/go-multiaddr-fmt/patterns.go: ok, rem := ptrn.partialMatch(a.Protocols())
vendor/github.com/multiformats/go-multiaddr-fmt/patterns.go: pcs := a.Protocols()
vendor/github.com/multiformats/go-multiaddr/README.md:m1.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/net.go: p1s := match.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/net.go: p2s := a.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/ip.go: p := m.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/convert.go: protos := maddr.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/resolve.go: if ia.Protocols()[0].Code != resolve.Protocols()[0].Code {
core/commands/id.go: info.Protocols = node.PeerHost.Mux().Protocols()
core/corehttp/metrics.go: for _, proto := range conns[0].RemoteMultiaddr().Protocols() {
```

I lost yesterday's profile, where it was very bad (it allocated 7 GiB in 30 s).
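As a point of comparison, the first call site above can already avoid the slice allocation today with this package's existing `ForEach` helper, which walks components without building a `[]Protocol`. A sketch, not code from the PR:

```go
// Replaces `addr.Protocols()[0].Code != ma.P_P2P` without allocating the slice.
isP2P := false
ma.ForEach(addr, func(c ma.Component) bool {
	isP2P = c.Protocol().Code == ma.P_P2P
	return false // stop after the first component
})
if !isP2P {
	// same branch as before
}
```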
(The first one is the second-biggest item on the profile.)

Ideally the API would have an iterator of some sort, like:

```go
type Multiaddr interface {
	ForEachProtocol(f func(Protocol))
}
```

However, this does not solve anything, because Multiaddr is an interface, and virtual calls always leak all of their arguments (the Go compiler is not very good at inter-procedural optimizations).
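A minimal, self-contained illustration of that escape-analysis point (my sketch, unrelated to this repo's code): when the callback crosses an interface-method boundary, the compiler cannot devirtualize the call, so the closure and anything it captures are moved to the heap.

```go
package main

type protoIter interface {
	ForEachProtocol(f func(code int))
}

type addrList []int // stand-in concrete type

func (a addrList) ForEachProtocol(f func(code int)) {
	for _, c := range a {
		f(c)
	}
}

// Through the interface the compiler cannot see the concrete method, so it
// must assume f escapes; the closure and the captured n are heap-allocated.
func countVirtual(it protoIter) int {
	n := 0
	it.ForEachProtocol(func(int) { n++ })
	return n
}

// On the concrete type the method and closure can inline away, and n never
// leaves the stack (check with: go build -gcflags=-m).
func countConcrete(a addrList) int {
	n := 0
	a.ForEachProtocol(func(int) { n++ })
	return n
}

func main() {
	_ = countVirtual(addrList{4, 6})
	_ = countConcrete(addrList{4, 6})
}
```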
The iterator doesn't need to be on the interface. We already have a `ForEach` helper in this package that iterates components without allocating. My suspicion is that we can get allocs down by >90% by fixing a handful of hot code paths like the two mentioned above. @Jorropo, would you like to give that a try? Happy to review PRs!
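One lightweight way to verify each fixed call site, as a sketch (the test harness below is mine, not from the thread): `testing.AllocsPerRun` reports the average number of allocations per call, so a fix can be confirmed without a full heap profile.

```go
package multiaddr_test

import (
	"testing"

	ma "github.com/multiformats/go-multiaddr"
)

func TestProtocolsAllocs(t *testing.T) {
	addr := ma.StringCast("/ip4/127.0.0.1/tcp/4001")
	// Baseline: Protocols() allocates the returned slice on every call.
	allocs := testing.AllocsPerRun(1000, func() {
		_ = addr.Protocols()
	})
	t.Logf("Protocols: %.1f allocs/op", allocs)
}
```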