
Reduce allocations performed by the Logger returned from log.With. #28

Merged (2 commits, Apr 20, 2015)

Conversation

ChrisHines (Member)

The old method performed O(n) allocations per Log call; the new method performs O(1), where n is the number of nested With contexts.

(Note: this PR also adds NewDiscardLogger() to the API. It is used in the new benchmarks, but also has other uses, so I thought it should be exported.)
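For context, here is a minimal sketch of what a discard logger can look like. This is an assumption for illustration only, not necessarily how NewDiscardLogger is implemented in this PR; it satisfies the package's Logger interface and drops every event, which is exactly what allocation benchmarks need:

type discardLogger struct{}

// Log drops all key/value pairs and never fails.
func (discardLogger) Log(keyvals ...interface{}) error { return nil }

// NewDiscardLogger returns a Logger that discards all log events.
func NewDiscardLogger() Logger { return discardLogger{} }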

benchmark                           old ns/op     new ns/op     delta
BenchmarkJSONLoggerSimple           7075          7005          -0.99%
BenchmarkJSONLoggerContextual       10195         9345          -8.34%
BenchmarkDiscard                    386           388           +0.52%
BenchmarkOneWith                    729           728           -0.14%
BenchmarkTwoWith                    1191          932           -21.75%
BenchmarkTenWith                    8735          2416          -72.34%
BenchmarkPrefixLoggerSimple         2002          2015          +0.65%
BenchmarkPrefixLoggerContextual     4186          3993          -4.61%

benchmark                           old allocs     new allocs     delta
BenchmarkJSONLoggerSimple           19             19             +0.00%
BenchmarkJSONLoggerContextual       30             28             -6.67%
BenchmarkDiscard                    3              3              +0.00%
BenchmarkOneWith                    4              4              +0.00%
BenchmarkTwoWith                    5              4              -20.00%
BenchmarkTenWith                    13             4              -69.23%
BenchmarkPrefixLoggerSimple         5              5              +0.00%
BenchmarkPrefixLoggerContextual     13             11             -15.38%

benchmark                           old bytes     new bytes     delta
BenchmarkJSONLoggerSimple           812           812           +0.00%
BenchmarkJSONLoggerContextual       1158          1159          +0.09%
BenchmarkDiscard                    64            64            +0.00%
BenchmarkOneWith                    128           128           +0.00%
BenchmarkTwoWith                    224           192           -14.29%
BenchmarkTenWith                    2144          704           -67.16%
BenchmarkPrefixLoggerSimple         145           145           +0.00%
BenchmarkPrefixLoggerContextual     403           402           -0.25%

tsenart (Contributor) commented Apr 18, 2015

I'm not immediately seeing where the With method would be called in a tight loop, or even repeatedly. Could you please enlighten me?

Inline review comment on the diff:

    keyvals []interface{}
}

func (l *withLogger) Log(kvs ...interface{}) error {

Member

Can we keep parameter names consistent? i.e. here it should be keyvals ...interface{}

Member Author

Yes, sorry about that. Fixed.

ChrisHines (Member, Author)

@tsenart The problem isn't that With would be called in a tight loop, but that it will be used to build a few (several?) layers of context. The current implementation of With builds a stack of closures that is flattened in every call to Log. Each step of flattening involves an additional slice allocation. This PR flattens the context once during With to avoid incurring the cost for every subsequent call to Log.
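To make the O(n) claim concrete, here is a self-contained sketch of the closure-stacking approach being replaced. The names (loggerFunc, oldWith) are illustrative, not the pre-PR code verbatim; the point is that a logger built from n nested With calls pays one slice allocation per layer on every Log call:

package main

import "fmt"

type Logger interface {
    Log(keyvals ...interface{}) error
}

type loggerFunc func(...interface{}) error

func (f loggerFunc) Log(keyvals ...interface{}) error { return f(keyvals...) }

// oldWith wraps the previous logger in a closure; Log re-flattens the
// context at every layer, allocating a fresh slice each time.
func oldWith(logger Logger, keyvals ...interface{}) Logger {
    return loggerFunc(func(kvs ...interface{}) error {
        return logger.Log(append(keyvals, kvs...)...) // allocates at this layer on every call
    })
}

func main() {
    var l Logger = loggerFunc(func(kvs ...interface{}) error {
        fmt.Println(kvs...)
        return nil
    })
    l = oldWith(l, "lvl", "info")
    l = oldWith(l, "host", "hostname")
    l.Log("msg", "hello") // two nested layers -> two append allocations per call
}

Flattening the context once in With, as this PR does, turns that into a single append per Log call regardless of nesting depth.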

Here is some benchmark code that I think represents a typical scenario (let me know if you think this is worth adding to this PR).

func BenchmarkTypicalWith(b *testing.B) {
    logger := log.NewDiscardLogger()
    logger = log.With(logger, "lvl", "info")
    logger = log.With(logger, "host", "hostname")
    logger = log.With(logger, "svc", "svcname")
    b.ReportAllocs()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        l := log.With(logger, "client", "client-id", "item", "item-id")
        l.Log("step", 1)
        l.Log("step", 2, "param", "arg")
        l.Log("step", 3, "err", errors.New("error text"))
    }
}

I get the results shown below, which I think are significant, especially in light of Peter's statement in the discussion of PR #16 that he "... own[s] and operate[s] services that can become CPU bound based on log output ...." In my experience, logging impacts CPU primarily through heap allocations and formatting into the final []byte for output. The formatting can sometimes be done asynchronously, but the allocations this benchmark measures would directly impact response time for a request.

benchmark                old ns/op     new ns/op     delta
BenchmarkTypicalWith     10760         6825          -36.57%

benchmark                old allocs     new allocs     delta
BenchmarkTypicalWith     33             23             -30.30%

benchmark                old bytes     new bytes     delta
BenchmarkTypicalWith     2520          1624          -35.56%

tsenart (Contributor) commented Apr 19, 2015

@ChrisHines: That makes sense now. Thanks for the detailed explanation.

Inline review comment on the diff:

}

func (l *withLogger) Log(keyvals ...interface{}) error {
    return l.logger.Log(append(l.keyvals, keyvals...)...)

Contributor

Is this thread safe?

Member Author

Yes, l.keyvals is capacity-sliced when each withLogger is created; see lines 33 and 50. The code passes TestWithConcurrent with the race detector and multiple cores enabled.
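For anyone unfamiliar with three-index slice expressions, here is a small standalone demonstration of why capping the capacity with keyvals[:n:n] makes the later append safe. The variable names are illustrative, not from the PR:

package main

import "fmt"

func main() {
    base := make([]interface{}, 2, 8) // extra capacity beyond len
    base[0], base[1] = "lvl", "info"

    shared := base       // len 2, cap 8: append may write into base's backing array
    capped := base[:2:2] // len 2, cap 2: append must allocate a fresh array

    _ = append(shared, "k", "v") // reuses base[2:4]; racy if done from multiple goroutines
    _ = append(capped, "k", "v") // copies to a new array; base is never modified

    fmt.Println(cap(shared), cap(capped)) // 8 2
}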

Contributor

👍

The hunk under discussion in log/log.go:

    // Using the extra capacity without copying risks a data race that
    // would violate the Logger interface contract.
    n := len(keyvals)
    return &withLogger{
        logger:  logger,
        keyvals: keyvals[:n:n],
    }
}

type withLogger struct {
    logger  Logger
    keyvals []interface{}
}

func (l *withLogger) Log(keyvals ...interface{}) error {
    return l.logger.Log(append(l.keyvals, keyvals...)...)
}


peterbourgon (Member)
Nice! Thanks!

peterbourgon merged commit 0fba027 into go-kit:master on Apr 20, 2015.
ChrisHines deleted the with-fewer-allocs branch on Apr 22, 2015.
guycook pushed a commit referencing this pull request to codelingo/kit on Oct 12, 2016.