Does hey write EOF to remaining parallel connections opened when desired number of requests gets answered by server? #308

Open
miguelpais opened this issue Jul 19, 2023 · 0 comments

I have the following net.Conn read logic inside a simple handler for HTTP connections I've written:

func (r RequestReader) ReadHttpRequest(reader io.Reader) (string, error) {
	var request []byte
	var buffer = make([]byte, 1024)

	for {
		nRead, err := reader.Read(buffer)
		if err == io.EOF {
			if len(request) > 0 {
				break
			} else {
				return "", errors.New("end of file got before content")
			}
		}
		if err != nil {
		return "", errors.New("reading request failed")
		}

		request = append(request, buffer[:nRead]...)

		// Check the accumulated request, not just the latest chunk, so a
		// terminator split across two reads is still detected.
		if detectEndOfHttpRequest(request) {
			break
		}
	}

	return string(request), nil
}

func detectEndOfHttpRequest(buffer []byte) bool {
	// strings.Contains looks for the full blank-line sequence;
	// strings.IndexAny would match any single '\r' or '\n' instead.
	return strings.Contains(string(buffer), "\r\n\r\n")
}

The function above gets called by the goroutine handler of every new incoming connection to the HTTP server, like so:

func SpawnHandler(readChannel <-chan net.Conn, routeDispatcher *routing.RouteDispatcher) {
	for {
		select {
		case conn := <-readChannel:
			reader := requestReader.RequestReader{}
			request, err := reader.ReadHttpRequest(conn)
			if err != nil {
				fmt.Printf("Could not read request, error was %s, closing connection...\n", err)
				conn.Close()
				continue
			}

			// do something with request
		}
	}
}
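A side note on the handler loop: a for/select with a single case behaves the same as ranging over the channel, and the range form also exits cleanly once the channel is closed and drained. A small sketch of the equivalence:

```go
package main

import "fmt"

func main() {
	jobs := make(chan int, 3)
	jobs <- 1
	jobs <- 2
	close(jobs)

	// Equivalent to: for { select { case v := <-jobs: ... } },
	// but the loop ends when the channel is closed and drained.
	for v := range jobs {
		fmt.Println(v)
	}
}
```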

Currently I have 20 workers reading from readChannel, which is a buffered channel with capacity 20.
readChannel is fed from the main HTTP server loop:

func (h HttpServer) Serve(host, path string) {
	connection, err := net.Listen("tcp", "0.0.0.0:8000")
	if err != nil { /* ... */ }

	fmt.Println("Accepting connections...")
	for {
		clientConnection, err := connection.Accept()
		if err != nil { /* ... */ }

		select {
		case h.connectionsQueue <- clientConnection:
		default:
			// respond with 429 TOO MANY REQUESTS
		}
	}
}

I'm testing this server using hey. The curious thing is that this code prints:

Could not read request, error was end of file got before content,

for at least 20 requests when testing with 40 concurrent senders issuing 1000 requests in total, like so:

hey -n 1000 -c 40 -q 100 -m GET http://localhost:8000/ -disable-keepalive

If there is no parallelism (-c set to 1), this problem never occurs. So the question is: how does parallelism over connections that are not kept alive make my RequestReader read EOF for an incoming request before any actual HTTP request content is written?

Is it a bug on my side? Does hey sometimes create a connection and close it when some timeout is reached (not likely, since hey's default timeout is 20s and the whole command finishes in about one second)? Or is an EOF sometimes expected if we happen to read before the client has actually written anything?

Surprisingly, hey still reports all requests as successful, even though I closed the connections whose reads hit EOF without ever writing a response:

Summary:
  Total:	0.2556 secs
  Slowest:	0.0064 secs
  Fastest:	0.0003 secs
  Average:	0.0034 secs
  Requests/sec:	3911.9073

  Total data:	29000 bytes
  Size/request:	29 bytes

Response time histogram:
  0.000 [1]	|
  0.001 [93]	|■■■■■■■■■■■■■
  0.001 [20]	|■■■
  0.002 [6]	|■
  0.003 [63]	|■■■■■■■■■
  0.003 [215]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.004 [278]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.005 [147]	|■■■■■■■■■■■■■■■■■■■■■
  0.005 [105]	|■■■■■■■■■■■■■■■
  0.006 [47]	|■■■■■■■
  0.006 [25]	|■■■■


Latency distribution:
  10% in 0.0011 secs
  25% in 0.0030 secs
  50% in 0.0035 secs
  75% in 0.0042 secs
  90% in 0.0050 secs
  95% in 0.0054 secs
  99% in 0.0063 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0013 secs, 0.0003 secs, 0.0064 secs
  DNS-lookup:	0.0006 secs, 0.0000 secs, 0.0025 secs
  req write:	0.0001 secs, 0.0000 secs, 0.0018 secs
  resp wait:	0.0009 secs, 0.0001 secs, 0.0021 secs
  resp read:	0.0001 secs, 0.0000 secs, 0.0023 secs

Status code distribution:
  [200]	1000 responses