Stream2 START and END messages should be pushed to all receivers #7
Comments
I don't think this would help, because one PULL client could get two of the start/end messages.
It's not clear to me when this would happen, since the ZMQ docs state:
I have not encountered this situation so far (the messages are correctly round-robin dispatched). Did you? If it can happen, then I agree that this would not help.
Hmm. I may be wrong, but I don't think it is guaranteed, and it depends on the load of the workers. Although it might just be the initial messages, before the server realises that there are two clients, after which it balances out.
It is what I suspect when reading
where
Depending on the implementation (byte-weighted fair?), the fair-queuing might interfere with the dispatching of the messages. Taking this into account in our design is not trivial. Do you use multiple clients (multiple processes) at DLS? |
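To make the concern above concrete, here is a pure-Python sketch (no ZeroMQ involved, just a simulation assuming strict message-weighted round-robin, as the discussion suggests) of why a single START or END message reaches only one of the connected receivers:

```python
from itertools import cycle

def dispatch(messages, n_receivers):
    """Simulate ZeroMQ PUSH round-robin: each message goes to exactly
    one receiver, cycling through connected peers in order."""
    queues = [[] for _ in range(n_receivers)]
    receiver = cycle(range(n_receivers))
    for msg in messages:
        queues[next(receiver)].append(msg)
    return queues

# One START, three images, one END, distributed over three receivers:
stream = ["START", "img0", "img1", "img2", "END"]
print(dispatch(stream, 3))
# → [['START', 'img2'], ['img0', 'END'], ['img1']]
```

Only the first receiver sees START and only the second sees END; the third sees neither, which is why some form of inter-receiver coordination (or duplicated control messages) is needed.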
We do, but ever since the first Eiger we have had a single application that sits between the detector and the multiple writers. This is because we wanted to write in sequential blocks of 1000 frames (for example; it is configurable), so we have never had to deal with this problem (if it is a problem). I would be very surprised if fair-queuing in the context of ZMQ were byte-weighted rather than message-weighted. So it might be OK, but you will definitely have to handle the issue of late joiners. This is easy to reproduce as a problem if you have a PUSH server with two PULL clients and recv immediately after connection. I have written some simple Python scripts to test, and it appears that as long as there is a pause between connect and sending data, the two clients get the same number of messages. So I think I am wrong on this - sorry about that. You might need to stress test this to convince yourself.

```python
# push.py
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUSH)
socket.bind("tcp://127.0.0.1:5557")
while True:
    input()
    for num in range(1000):
        print(f"Sending {num}")
        data = {"num": num}
        socket.send_json(data)
```

```python
# pull.py
import zmq

context = zmq.Context()
socket = context.socket(zmq.PULL)
socket.connect("tcp://127.0.0.1:5557")
count = 0
while True:
    data = socket.recv_json()
    print(f"Received {data['num']} - {count}")
    count += 1
```

Output:

and
No need to be sorry, and thank you so much for investigating! For now, I'll add the necessary interprocess communication to synchronize on start and end messages. I'll report if we find that some images are not round-robin dispatched.
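The synchronization mentioned above could take many forms; as one hedged sketch (all names here are hypothetical, and real receivers would be separate processes using IPC rather than threads), whichever receiver pulls the START message publishes it, and the others block until it has been seen:

```python
import threading

class StartSync:
    """Hypothetical helper: the receiver that actually gets START
    publishes it; the other receivers wait until it has been seen."""

    def __init__(self):
        self._started = threading.Event()
        self._meta = None

    def publish_start(self, metadata):
        # Called by the one receiver that pulled the START message.
        self._meta = metadata
        self._started.set()

    def wait_for_start(self, timeout=None):
        # Called by receivers that only ever see image messages.
        if not self._started.wait(timeout):
            raise TimeoutError("no START message observed")
        return self._meta

sync = StartSync()

def receiver(got_start):
    if got_start:
        sync.publish_start({"series": 42})
    else:
        meta = sync.wait_for_start(timeout=5)
        print("received START metadata:", meta)

workers = [threading.Thread(target=receiver, args=(i == 0,)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

The same pattern maps onto `multiprocessing.Event` or a small PUB/SUB socket between receiver processes.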
It would make the receivers' logic much easier (no inter-receiver communication required) if all PULL sockets got a copy of the START and END messages. For this, the DCU would need to push n x START and END messages (where n is the number of receivers connected). Currently, the behavior is the following for an acquisition of 3 images with 3 receivers:
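Under the same round-robin assumption as above, the proposed scheme can be sketched as follows (a simulation, not actual ZMQ code): the sender emits n copies of START, then the images, then n copies of END, so each of the n receivers ends up with exactly one START and one END.

```python
from itertools import cycle

def dispatch_with_duplicated_control(images, n):
    """Simulate the proposed behaviour: n START copies, the images,
    then n END copies, distributed round-robin over n receivers."""
    stream = ["START"] * n + images + ["END"] * n
    queues = [[] for _ in range(n)]
    rr = cycle(range(n))
    for msg in stream:
        queues[next(rr)].append(msg)
    return queues

# 3 images, 3 receivers: every receiver gets one START and one END.
print(dispatch_with_duplicated_control(["img0", "img1", "img2"], 3))
# → [['START', 'img0', 'END'], ['START', 'img1', 'END'], ['START', 'img2', 'END']]
```

Note this only works cleanly when the image count is a multiple of n (or when receivers tolerate control messages arriving out of phase with the images), since any offset shifts which copy of END each receiver sees.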