Add req_size and resp_size to PDU. #491
Conversation
These will be updated when we create the RPC request and when we receive the RPC response bytes, respectively. The req and resp sizes contain the RPC header, the NFS header and optional data bytes. Users can query the req and resp sizes using the two newly added APIs rpc_pdu_get_req_size() and rpc_pdu_get_resp_size().
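As a sketch of the shape of these accessors (the function names are from this change; the return type and exact prototypes below are assumptions):

#include <stdint.h>

struct rpc_pdu;

/* Assumed prototypes; only the names come from this PR.           */
/* Size of the outgoing request: RPC header + NFS header + data.   */
uint32_t rpc_pdu_get_req_size(struct rpc_pdu *pdu);
/* Size of the received response, valid once the reply arrives.    */
uint32_t rpc_pdu_get_resp_size(struct rpc_pdu *pdu);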
Not super excited about this. Please have a look at the stats-callback branch. I have proposed a mechanism to register a callback function used to report statistics data for each sent/received PDU. This uses a structure that can be expanded with additional fields without the need to add new functions to the API. This allows applications to register to get a callback for every PDU with useful data about the PDU, such as size, direction, xid, which procedure it is, whether it was successful or not, and the response time in milliseconds. Is this workable for your use case? |
I have implemented stats collection via a simple callback and a dedicated data structure we can extend when adding additional information that a consumer might be interested in. See examples/nfs-stats-cb.c for an example of how to use it. |
Took a look. I agree with your concern about needing to add more APIs, but this can be easily addressed by defining a stats structure which can be returned by a single API, rpc_pdu_get_stats(struct rpc_pdu *). The stats structure can be extended w/o needing new APIs for new stats that we may want. The pdu in rpc_pdu_get_stats() means that these are per-request stats, as opposed to the per-rpc_context stats which are returned by rpc_get_stats() w/o the pdu.

The other concern, about the stats being valid only in the callback context, is IMO not so significant. With proper documentation this discipline is something the user can easily maintain. Note that they would need the response stats only inside the callback, so that's the most natural place to call rpc_pdu_get_stats() for querying the response stats.

The callback model puts an extra burden on the application to map the stats to the request to which they correspond. The user has to maintain an additional data structure to map the callback to their request, especially since the callback is set per rpc_context and not per request. Yes, we can call rpc_set_stats_cb() before every rpc_nfs3_*_task() call, but that doesn't look neat either. So I think it affects usability.

An additional benefit we get from the API approach that I propose is that a lot of the context is known to the caller, so most of those fields (direction/prog/vers/proc) are not really needed.

In general, I find this will not be usable for my use case in its current shape. I'll continue with the change in my fork for our current development. Thanks for your inputs and help!
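A minimal sketch of the single-API shape proposed here (the struct layout and field names are illustrative assumptions; only rpc_pdu_get_stats(struct rpc_pdu *) and the idea of an extensible structure come from this comment):

#include <stdint.h>

/* Illustrative only: the point is that new stats extend the struct
 * without adding new API entry points.                             */
struct rpc_pdu_stats {
        uint32_t xid;           /* transaction id of the request    */
        uint64_t req_size;      /* bytes sent: headers plus data    */
        uint64_t resp_size;     /* bytes received                   */
        uint64_t response_time; /* round-trip time in milliseconds  */
};

/* Per-request stats, as opposed to the per-context rpc_get_stats(). */
const struct rpc_pdu_stats *rpc_pdu_get_stats(struct rpc_pdu *pdu);
|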
Define a small structure, my_stats, to hold the fields you care about (see the sketch below). Then every time you want the stats for the *_task() you just use this pattern: register the stats callback with your my_stats as private data, then issue the request. Inside "your callback" you then just reference the content of my_stats. That is functionally equivalent to calling a rpc_get_pdu_stats() function from the callback. You could even use something similar for the nfs_*() functions, though since the nfs-to-rpc mapping is a 1-to-many mapping, an nfs_* request would result in a series of multiple rpc calls.
my_stats cannot be a global or a per-rpc_context structure. It has to be a per-request structure, since even the issue-path stats callback will populate it, and that can be called by multiple application threads. Overall, I feel the callback approach for stats is not natural and intuitive. |
You could use one my_stats for each rpc context. It doesn't matter how many application threads you have that call rpc_..._task(); they only marshal the data and then enqueue it. Sending the data, receiving the reply, populating rpc->pdu, and invoking the callback into the application are all done from the context of the service thread, in particular rpc_process_reply(). Regardless, I have added a function rpc_get_pdu_stats() that can extract the exact same data from the callback function. |
There are two places where rpc->stats_cb() is called. One in the call direction and one in the reply direction. |
Yes, you are correct on that. The callback for CALLs will be invoked when the packet is enqueued, in the application thread context. The stats for CALLs are probably a lot less useful than those for REPLIES, so most applications, and probably your application too, will mostly care about the replies. If you are not interested in any of the CALLs you can just have an early return, if (data->direction == CALL) { return; }, in the callback, or don't use the stats callback at all and just use rpc_get_pdu_stats() from your rpc-callback. See it as there being two ways to get this data: the registered stats callback, or rpc_get_pdu_stats() called from inside your rpc-callback.
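For instance, the filtering variant might look like this (a sketch; the direction field and the CALL constant are assumed from the discussion in this thread):

/* Skip the CALL-direction invocations entirely. */
static void stats_cb(struct rpc_context *rpc, void *data,
                     void *private_data)
{
        struct rpc_pdu_stats *s = data;

        if (s->direction == CALL) {
                return; /* only interested in the REPLY stats */
        }
        /* ... record resp_size, response_time, etc. ... */
}
|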
We could change when the stats_cb callback is triggered for CALLs to instead be invoked when the PDU has been written to the socket, and thus invoke it from the service thread instead of the current application thread. |
Sorry, have been caught up in some project deadlines. |
Understood. But please, if you can, when you do check-ins that you think might be useful for general purposes, shoot me an email and I will evaluate and merge them into my master branch. I would LOVE to hear some actual performance numbers, i.e. how hard you can drive it and sustain transfers. |
Definitely, I'll let you know about any changes that may be of general interest. |
No need for anything fancy, just a "I just added something to my repo that might be useful, have a look." Also let me know if there is something you want me to add that would be useful to you. I can add it to my master and then you can backport it to your stable/frozen branch. |
Btw I've sent you multiple emails with PR links. Not sure if you saw them. Checking here since you didn't respond there. |
I saw them. Thanks. I will look through them later today.
|
These will be updated when we create the RPC request and when we receive the RPC response bytes, respectively. The req and resp sizes contain the RPC header, the NFS header and optional data bytes.
This is for supporting mountstats-style "avg bytes sent" and "avg bytes received".
Users can query the req and resp sizes using the two newly added APIs:
rpc_pdu_get_req_size()
rpc_pdu_get_resp_size()
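A sketch of the intended use, computing the mountstats-style averages (the accumulators and the point where the pdu pointer becomes available are assumptions; only the two accessor names come from this change):

/* Accumulators for "avg bytes sent" / "avg bytes received". */
static uint64_t total_req_bytes, total_resp_bytes, num_ops;

/* Call once per completed request, with that request's pdu. */
static void account_pdu(struct rpc_pdu *pdu)
{
        total_req_bytes  += rpc_pdu_get_req_size(pdu);
        total_resp_bytes += rpc_pdu_get_resp_size(pdu);
        num_ops++;
}

/* avg bytes sent     = total_req_bytes  / num_ops */
/* avg bytes received = total_resp_bytes / num_ops */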