
perf: Add AppendProtocols for an allocation-free way to get the protocols #188

Open
wants to merge 1 commit into base: master

Conversation

@Jorropo (Contributor) commented Jan 4, 2023

While looking at Kubo benchmarks, I noticed that if you use connection filters, you allocate ~200 MiB/s in the Protocols code path.

This new method lets the caller pass a preallocated slice, which is reused instead of allocating a new one.

```
goos: linux
goarch: amd64
pkg: github.com/multiformats/go-multiaddr
cpu: AMD Ryzen 5 3600 6-Core Processor
BenchmarkProtocols-12          	 3779694	       312.0 ns/op	     640 B/op	       1 allocs/op
BenchmarkAppendProtocols-12    	26105854	        43.13 ns/op	       0 B/op	       0 allocs/op
```
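The PR names the method but the exact signature is not shown here, so the sketch below only illustrates the append-style convention it presumably follows (cf. strconv.AppendInt), using simplified stand-in types rather than the real go-multiaddr API:

```go
package main

import "fmt"

// Protocol is a simplified stand-in for multiaddr's Protocol type
// (hypothetical, for illustration only).
type Protocol struct {
	Name string
	Code int
}

// addr is a toy stand-in for a parsed multiaddr.
type addr struct{ protos []Protocol }

// AppendProtocols appends the address's protocols to dst and returns
// the resulting slice. Because it appends rather than allocates, the
// caller can reuse dst's backing array across calls.
func (a *addr) AppendProtocols(dst []Protocol) []Protocol {
	return append(dst, a.protos...)
}

func main() {
	a := &addr{protos: []Protocol{{"ip4", 4}, {"tcp", 6}}}

	// Preallocate once, then reuse across iterations.
	buf := make([]Protocol, 0, 8)
	for i := 0; i < 3; i++ {
		buf = a.AppendProtocols(buf[:0]) // reuse backing array
		fmt.Println(len(buf), buf[0].Name)
	}
}
```

Passing `buf[:0]` keeps the backing array, so after warm-up the loop runs with zero allocations, which is what the second benchmark line above measures.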

@Jorropo self-assigned this Jan 4, 2023
@marten-seemann (Contributor)

What’s the code path that’s calling Protocols? I’ve come across (but not fixed) quite a few cases where using Protocols was a suboptimal choice to begin with.

@marten-seemann marten-seemann self-requested a review January 4, 2023 10:24
@Jorropo (Contributor, Author) commented Jan 4, 2023

@marten-seemann

```
$ rgrep -IE "\.Protocols\(\)" # I've filtered the relevant ones (where it's a multiaddr)
vendor/github.com/libp2p/go-libp2p/p2p/host/routed/routed.go:		if addr.Protocols()[0].Code != ma.P_P2P {
vendor/github.com/libp2p/go-libp2p/p2p/protocol/identify/id.go:	protos := a.Protocols()
vendor/github.com/libp2p/go-libp2p/p2p/protocol/identify/id.go:		if protosMatch(protos, ga.Protocols()) {
vendor/github.com/libp2p/go-libp2p/p2p/net/swarm/swarm_dial.go:		protos := addr.Protocols()
vendor/github.com/libp2p/go-libp2p/p2p/net/swarm/swarm_transport.go:	protocols := a.Protocols()
vendor/github.com/libp2p/go-libp2p/p2p/net/swarm/swarm_transport.go:	protocols := a.Protocols()
vendor/github.com/libp2p/go-libp2p/p2p/net/swarm/swarm_transport.go:	protocols := t.Protocols()
vendor/github.com/multiformats/go-multiaddr-fmt/patterns.go:	ok, rem := ptrn.partialMatch(a.Protocols())
vendor/github.com/multiformats/go-multiaddr-fmt/patterns.go:	pcs := a.Protocols()
vendor/github.com/multiformats/go-multiaddr/README.md:m1.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/net.go:	p1s := match.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/net.go:		p2s := a.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/ip.go:	p := m.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/convert.go:	protos := maddr.Protocols()
vendor/github.com/multiformats/go-multiaddr/net/resolve.go:		if ia.Protocols()[0].Code != resolve.Protocols()[0].Code {
core/commands/id.go:		info.Protocols = node.PeerHost.Mux().Protocols()
core/corehttp/metrics.go:		for _, proto := range conns[0].RemoteMultiaddr().Protocols() {
```

I lost yesterday's profile, where it was very bad (7 GiB allocated in 30 s).
It seems I was wrong about the filtering code. Here is one where Protocols allocates 80 MiB in 30 s:

```
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                           38.52MB 47.83% |   github.com/libp2p/go-libp2p/p2p/net/swarm.(*dialWorker).loop
                                           36.02MB 44.72% |   github.com/multiformats/go-multiaddr/net.ResolveUnspecifiedAddress
   80.55MB  5.90% 12.31%    80.55MB  5.90%                | github.com/multiformats/go-multiaddr.(*multiaddr).Protocols
----------------------------------------------------------+-------------
    1.50MB  0.11% 75.01%    45.02MB  3.30%                | github.com/libp2p/go-libp2p/p2p/net/swarm.(*dialWorker).loop
                                           38.52MB 85.56% |   github.com/multiformats/go-multiaddr.(*multiaddr).Protocols
----------------------------------------------------------+-------------
                                           55.02MB   100% |   github.com/multiformats/go-multiaddr/net.ResolveUnspecifiedAddresses
       1MB 0.073% 75.63%    55.02MB  4.03%                | github.com/multiformats/go-multiaddr/net.ResolveUnspecifiedAddress
                                           36.02MB 65.47% |   github.com/multiformats/go-multiaddr.(*multiaddr).Protocols
                                               8MB 14.54% |   github.com/multiformats/go-multiaddr.Split.func1
----------------------------------------------------------+-------------
```

(The first entry is the second-biggest item on the profile.)

Ideally the API would have an iterator of some sort, like:

```go
type Multiaddr interface {
	ForEachProtocol(f func(Protocol))
}
```

However, this does not solve anything, because Multiaddr is an interface, and arguments to virtual calls always escape to the heap (the Go compiler is not very good at inter-procedural optimisations).

@marten-seemann (Contributor)

The iterator doesn't need to be on the interface. We already have a ForEach function at the package level.
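As a rough sketch of how a callback-based walk avoids materializing the whole slice: go-multiaddr's package-level helper has the shape `ForEach(m Multiaddr, cb func(Component) bool)`, but the types below are hypothetical stand-ins, not the real API.

```go
package main

import "fmt"

// Protocol and Component are simplified stand-ins for the go-multiaddr
// types of the same names (hypothetical, for illustration only).
type Protocol struct {
	Code int
	Name string
}

type Component struct{ proto Protocol }

func (c Component) Protocol() Protocol { return c.proto }

type multiaddr struct{ comps []Component }

// forEach mirrors the shape of go-multiaddr's package-level ForEach:
// it walks the components in order and stops early when cb returns
// false, so callers never need the full []Protocol slice.
func forEach(m multiaddr, cb func(Component) bool) {
	for _, c := range m.comps {
		if !cb(c) {
			return
		}
	}
}

func main() {
	m := multiaddr{comps: []Component{
		{Protocol{4, "ip4"}}, {Protocol{6, "tcp"}},
	}}

	// Inspect only the first component, without allocating a slice:
	first := ""
	forEach(m, func(c Component) bool {
		first = c.Protocol().Name
		return false // stop after the first component
	})
	fmt.Println(first)
}
```

For the common `Protocols()[0]` pattern in the call sites listed above, go-multiaddr also exposes SplitFirst, which returns just the leading component; that may be the simpler fix there.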

net.ResolveUnspecifiedAddress is an excellent example of how we can optimize (in many cases) without adding a new method to the interface: it only ever uses Protocols()[0]. The same applies to the call to that function in Swarm.filterKnownUndialables.

My suspicion is that we can get allocations down by >90% by fixing a handful of hot code paths like the two mentioned above. @Jorropo, would you like to give that a try? Happy to review PRs!

@Jorropo Jorropo removed their assignment Mar 4, 2024