varnishreload returns CLI communication error (hdr) #4238

Open
hamadodene opened this issue Nov 28, 2024 · 7 comments
@hamadodene

Expected Behavior

I expected that after running the varnishreload command, a new VCL would be created and set as active.

Current Behavior

After modifying a configuration and running the varnishreload command, I receive the following error:

usr/sbin/varnishreload
Command: varnishadm -n '' -- vcl.load reload_20241128_080011_3196976 "/etc/varnish/default.vcl"

CLI communication error (hdr)
 varnishadm -n '' -- vcl.load reload_20241128_080011_3196976 "/etc/varnish/default.vcl"
Already a VCL named reload_20241128_080011_3196976
Command failed with error code 106
 varnishadm vcl.list
available   auto   cold    0   reload_20241127_152118_2878672
active      auto   warm   80   reload_20241127_155003_2888428
available   auto   cold    0   reload_20241128_073631_3177805
available   auto   cold    0   reload_20241128_074826_3188938
available   auto   warm    0   reload_20241128_080011_3196976

As you can see, reload_20241128_080011_3196976 is available but not set as active, and my new configuration does not seem to be reloaded.

The CLI appears to be up and running correctly.

LISTEN 0 10 127.0.0.1:6082 0.0.0.0:*

 varnishadm ping
PONG 1732780987 1.0
telnet 127.0.0.1 6082
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
107 59
yuhamqqkftqnybqlrenhmgsikyiqvvbw

Authentication required.

Possible Solution

A possible workaround that we tried, and that works, is to use vcl.use.
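For reference, a minimal sketch of that manual workaround, using the VCL name from the vcl.list output above. The commands are echoed rather than executed here, since they require a running varnishd instance:

```shell
# Manual workaround sketch: activate the VCL that vcl.load already created.
# Commands are printed instead of run, because they need a live varnishd.
vcl_name="reload_20241128_080011_3196976"  # name taken from the vcl.list output above
echo "varnishadm vcl.use $vcl_name"        # switch traffic to the already-loaded VCL
echo "varnishadm vcl.list"                 # verify it is now marked 'active'
```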

Steps to Reproduce (for bugs)

No response

Context

My new configuration is not reloaded.

Varnish Cache version

varnish-7.5.0-1.el8.x86_64

Operating system

Red Hat Enterprise Linux release 8.9 (Ootpa)

Source of binary packages used (if any)

No response

@gquintard
Member

Hi! Is it reproducible, or was it a one-time thing?

@hamadodene
Author

Hi @gquintard ,
I don't know how to reproduce the issue. I have another identical cluster with the same version running on the same OS version, but I can't reproduce the same problem there.

Currently, I encounter the error every time I run the varnishreload command.

@dridi
Member

dridi commented Nov 28, 2024

The VCL probably took too much time to compile; you can increase the timeout:

varnishreload -t 10 ...

@hamadodene
Author

hamadodene commented Nov 28, 2024

@dridi I can't see any option -t:

/sbin/varnishreload: illegal option -- t
Error: wrong usage.

Usage: /sbin/varnishreload [-l <label>] [-m <max>] [-n <workdir>]
           [-p <prefix>] [-w <warmup>] [<file>]
       /sbin/varnishreload -h

Reload and use a VCL on a running Varnish instance.

Available options:
-h           : show this help and exit
-l <label>   : name of the VCL label to reload
-m <max>     : maximum number of available reloads to leave behind
-n <workdir> : specify the name or directory for the varnishd instance
-p <prefix>  : prefix for the name of the loaded and discarded VCLs
-w <warmup>  : the number of seconds between load and use operations

@dridi
Member

dridi commented Nov 28, 2024

You can take the script from the 7.6.0 release to get the -t option: https://github.com/varnishcache/pkg-varnish-cache/blob/7.6/systemd/varnishreload
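With the 7.6.0 script in place, the reload can then be retried with a longer CLI timeout. A sketch, echoed rather than executed since it needs a running varnishd; the value 30 is a hypothetical choice:

```shell
# Sketch: raise the CLI timeout via the -t option added in the 7.6.0
# varnishreload script. Printed instead of run, as it needs a live varnishd.
timeout_s=30  # hypothetical value; pick something larger than your VCL compile time
echo "varnishreload -t $timeout_s"
```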

@hamadodene
Author

Thanks, @dridi! Using the -t option worked perfectly. We'll update our instance programmatically in the future, but for now, I've manually updated the script.

@dridi
Member

dridi commented Nov 28, 2024

I'd like to keep this ticket open to reflect on ergonomics. It appears to be difficult to conclude that a timeout occurred when faced with such a failure.

@dridi dridi self-assigned this Nov 28, 2024