Possible data race in the logger? #1487

Open
zhengshi1998 opened this issue Nov 29, 2024 · 8 comments
Labels: bug Something isn't working

@zhengshi1998

I'm using the v1.9.7 (2024-11-11) release.

After building go2rtc with the race detector enabled (go build -race && ./go2rtc), some race reports are printed:

17:38:13.843 INF go2rtc platform=linux/amd64 revision=mod.aa12ae0 version=1.9.7
17:38:13.843 INF config path=/home/shizheng/go-benchmarks/go2rtc-1.9.7/go2rtc.yaml
17:38:13.844 ERR [rtsp] listen error=listen tcp :8554: bind: address already in use
17:38:13.844 INF [webrtc] listen addr=:8555/tcp
17:38:13.846 ERR [api] listen error=listen tcp :1984: bind: address already in use
==================
WARNING: DATA RACE
Read at 0x00c000002180 by goroutine 7:
  github.com/AlexxIT/go2rtc/internal/app.(*circularBuffer).Write()
      /home/shizheng/go-benchmarks/go2rtc-1.9.7/internal/app/log.go:120 +0x92
  github.com/rs/zerolog.LevelWriterAdapter.WriteLevel()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/writer.go:27 +0x6f
  github.com/rs/zerolog.(*LevelWriterAdapter).WriteLevel()
      <autogenerated>:1 +0x1f
  github.com/rs/zerolog.multiLevelWriter.WriteLevel()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/writer.go:98 +0xe9
  github.com/rs/zerolog.(*multiLevelWriter).WriteLevel()
      <autogenerated>:1 +0x84
  github.com/rs/zerolog.(*Event).write()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/event.go:80 +0x2b2
  github.com/rs/zerolog.(*Event).msg()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/event.go:151 +0x439
  github.com/rs/zerolog.(*Event).Msg()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/event.go:110 +0xba
  github.com/AlexxIT/go2rtc/internal/api.listen()
      /home/shizheng/go-benchmarks/go2rtc-1.9.7/internal/api/api.go:89 +0x91
  github.com/AlexxIT/go2rtc/internal/api.Init.gowrap1()
      /home/shizheng/go-benchmarks/go2rtc-1.9.7/internal/api/api.go:72 +0x4d

Previous write at 0x00c000002180 by main goroutine:
  github.com/AlexxIT/go2rtc/internal/app.(*circularBuffer).Write()
      /home/shizheng/go-benchmarks/go2rtc-1.9.7/internal/app/log.go:142 +0x525
  github.com/rs/zerolog.LevelWriterAdapter.WriteLevel()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/writer.go:27 +0x6f
  github.com/rs/zerolog.(*LevelWriterAdapter).WriteLevel()
      <autogenerated>:1 +0x1f
  github.com/rs/zerolog.multiLevelWriter.WriteLevel()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/writer.go:98 +0xe9
  github.com/rs/zerolog.(*multiLevelWriter).WriteLevel()
      <autogenerated>:1 +0x84
  github.com/rs/zerolog.(*Event).write()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/event.go:80 +0x2b2
  github.com/rs/zerolog.(*Event).msg()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/event.go:151 +0x439
  github.com/rs/zerolog.(*Event).Msg()
      /home/shizheng/golang/pkg/mod/github.com/rs/[email protected]/event.go:110 +0x5da
  github.com/AlexxIT/go2rtc/internal/webrtc.Init()
      /home/shizheng/go-benchmarks/go2rtc-1.9.7/internal/webrtc/webrtc.go:56 +0x58e
  main.main()
      /home/shizheng/go-benchmarks/go2rtc-1.9.7/main.go:53 +0x78

Goroutine 7 (running) created at:
  github.com/AlexxIT/go2rtc/internal/api.Init()
      /home/shizheng/go-benchmarks/go2rtc-1.9.7/internal/api/api.go:72 +0x8d1
  main.main()
      /home/shizheng/go-benchmarks/go2rtc-1.9.7/main.go:45 +0x64
==================

This looks like it is caused by the logger in go2rtc/internal/app/log.go. The func (b *circularBuffer) Write(p []byte) (n int, err error) method is not protected by a mutex, so during the initialization phase, when multiple goroutines write to the buffer concurrently, data races can appear.
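For concreteness, here is a minimal sketch of what a mutex-protected Write could look like (illustrative names and chunk size, not the project's actual code):

```go
// Minimal sketch of a mutex-protected circular buffer Write.
// Names and chunkSize are illustrative, not go2rtc's actual code.
package app

import "sync"

const chunkSize = 1 << 16 // assumed chunk size, for illustration only

type circularBuffer struct {
	mu     sync.Mutex
	chunks [][]byte
	r, w   int
}

func newCircularBuffer() *circularBuffer {
	return &circularBuffer{chunks: [][]byte{make([]byte, 0, chunkSize)}}
}

func (b *circularBuffer) Write(p []byte) (n int, err error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	// the chunk-rotation logic from internal/app/log.go would go here, unchanged
	b.chunks[b.w] = append(b.chunks[b.w], p...)
	return len(p), nil
}
```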

AlexxIT added the question (Further information is requested) label on Nov 29, 2024
@AlexxIT (Owner) commented Nov 29, 2024

The automated tooling does see a possible error here. As the person who wrote this code, I don't see any real error here.
A mutex would add unnecessary stalling. It should not be allowed in a function such as a logger.
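Whether that stalling actually matters could be measured rather than assumed. A minimal benchmark sketch (a hypothetical test file, not part of go2rtc) of the uncontended per-write cost of taking a mutex:

```go
// Hypothetical benchmark, not part of go2rtc: measures the uncontended
// cost of appending a log line to an in-memory buffer with and without a mutex.
package app

import (
	"sync"
	"testing"
)

var sink []byte // keeps the buffer alive across iterations

func BenchmarkWriteUnlocked(b *testing.B) {
	buf := make([]byte, 0, 1<<20)
	line := []byte("17:38:13.843 INF example log line\n")
	for i := 0; i < b.N; i++ {
		if len(buf)+len(line) > cap(buf) {
			buf = buf[:0] // crude wrap-around, for measurement only
		}
		buf = append(buf, line...)
	}
	sink = buf
}

func BenchmarkWriteLocked(b *testing.B) {
	var mu sync.Mutex
	buf := make([]byte, 0, 1<<20)
	line := []byte("17:38:13.843 INF example log line\n")
	for i := 0; i < b.N; i++ {
		mu.Lock()
		if len(buf)+len(line) > cap(buf) {
			buf = buf[:0]
		}
		buf = append(buf, line...)
		mu.Unlock()
	}
	sink = buf
}
```

Run with go test -bench . in the package; measuring contention from many goroutines would need a parallel benchmark on top of this.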

@zhengshi1998 (Author)

Is it possible that in go2rtc/internal/app/log.go, MemoryLog is initialized as the logging buffer and then used by multiple goroutines without synchronization?

@AlexxIT (Owner) commented Nov 29, 2024

Of course this function is called simultaneously from different goroutines. In very rare cases this can result in slightly garbled log output, but in my opinion it will never lead to a panic or anything really scary. This is intentional behavior; logging should not slow down the program.

I don't like the race warnings either, but I don't know how to get rid of them without using mutexes.
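For reference, zerolog's own answer here is still a mutex: zerolog.SyncWriter wraps a writer so that every write is serialized. A minimal sketch of putting it in front of a shared in-memory buffer (illustrative names, not go2rtc's actual logger setup):

```go
// Minimal sketch, not go2rtc's actual logger setup. zerolog.SyncWriter wraps
// an io.Writer so every write goes through a mutex, which silences the race
// at the cost of some contention.
package main

import (
	"os"

	"github.com/rs/zerolog"
)

func main() {
	var memoryLog myBuffer // stands in for go2rtc's circularBuffer / MemoryLog

	multi := zerolog.MultiLevelWriter(
		zerolog.ConsoleWriter{Out: os.Stdout},
		zerolog.SyncWriter(&memoryLog), // serialized access to the shared buffer
	)
	log := zerolog.New(multi).With().Timestamp().Logger()
	log.Info().Msg("hello")
}

// myBuffer is a trivial stand-in for the shared in-memory log buffer.
type myBuffer struct{ data []byte }

func (b *myBuffer) Write(p []byte) (int, error) {
	b.data = append(b.data, p...)
	return len(p), nil
}
```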

@zhengshi1998 (Author)

I see, thanks for your reply. Should I treat it as a "benign race" (a race that has no effect on the program's behavior)?

@AlexxIT (Owner) commented Nov 29, 2024

Try analyzing the code and see whether it can lead to significant errors. In my opinion, no:

go2rtc/internal/app/log.go, lines 116 to 144 at d7cdc8b:

```go
func (b *circularBuffer) Write(p []byte) (n int, err error) {
	n = len(p)
	// check if chunk has size
	if len(b.chunks[b.w])+n > chunkSize {
		// increase write chunk index
		if b.w++; b.w == cap(b.chunks) {
			b.w = 0
		}
		// check overflow
		if b.r == b.w {
			// increase read chunk index
			if b.r++; b.r == cap(b.chunks) {
				b.r = 0
			}
		}
		// check if current chunk exists
		if b.w == len(b.chunks) {
			// allocate new chunk
			b.chunks = append(b.chunks, make([]byte, 0, chunkSize))
		} else {
			// reset len of current chunk
			b.chunks[b.w] = b.chunks[b.w][:0]
		}
	}
	b.chunks[b.w] = append(b.chunks[b.w], p...)
	return
}
```

@zhengshi1998 (Author) commented Nov 29, 2024

Maybe this can cause an out-of-range access?

Suppose that at some point b.w = cap(b.chunks) - 1 and two goroutines enter the Write method concurrently.

Both goroutines pass the check if len(b.chunks[b.w])+n > chunkSize at line 120. At line 122, goroutine 1 executes b.w++; before goroutine 1 evaluates the check b.w == cap(b.chunks), goroutine 2 also executes b.w++. Now b.w = cap(b.chunks) + 1, so both goroutines fail the check b.w == cap(b.chunks). b.w is therefore never reset to 0 and stays at cap(b.chunks) + 1.

Later, at line 142, accessing b.chunks[b.w] is out of range, which panics at runtime.

Does this make sense?
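For illustration only, here is a standalone toy program (assumed names, not go2rtc code) that mirrors this interleaving: run under go run -race it reports the data race, and with an unlucky interleaving it panics with an index-out-of-range error.

```go
// Toy reproduction of the interleaving described above: two goroutines
// increment a shared write index without synchronization, so the
// wrap-around check can be skipped and the index can land past the
// end of the slice.
package main

import (
	"fmt"
	"sync"
)

func main() {
	chunks := make([][]byte, 4)
	w := len(chunks) - 1 // start at the last chunk, as in the scenario above

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// mirrors: if b.w++; b.w == cap(b.chunks) { b.w = 0 }
			if w++; w == cap(chunks) {
				w = 0
			}
			_ = append(chunks[w], 'x') // may panic: index out of range
		}()
	}
	wg.Wait()
	fmt.Println("w =", w)
}
```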

AlexxIT added the bug (Something isn't working) label and removed the question (Further information is requested) label on Nov 29, 2024
@AlexxIT (Owner) commented Nov 29, 2024

Looks like you're right.

AlexxIT self-assigned this on Nov 29, 2024
@zhengshi1998 (Author)

Thank you for your response!
