Switch from pvDatabase to SharedPV #485

Draft
mdavidsaver wants to merge 1 commit into master

Conversation

mdavidsaver
Contributor

Quick migration from pvDatabaseCPP to pvas::SharedPV from pvAccessCPP.

http://epics-base.github.io/pvAccessCPP/group__pvas.html

I have only done some quick testing using cs-studio. I think this will behave ~the same as before, except for pvDatabase-specific handling of pvRequest.

May be interesting to test pvRequest "pipeline=true" behavior.
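
For reference, here is a minimal sketch of the pvas::SharedPV serving pattern this migration relies on. It is not the actual ADCore patch: the provider name, PV name, and NTScalar type are placeholders (the plugin serves an NTNDArray), and it assumes normativeTypesCPP is available for the NT builder.

#include <pv/bitSet.h>
#include <pv/nt.h>
#include <pv/serverContext.h>
#include <pva/server.h>
#include <pva/sharedstate.h>

namespace pvd = epics::pvData;
namespace pva = epics::pvAccess;

int main() {
    // A "mailbox" SharedPV caches the last posted value and serves it to all clients.
    pvas::SharedPV::shared_pointer pv(pvas::SharedPV::buildMailbox());

    // StaticProvider maps PV names to SharedPV instances.
    pvas::StaticProvider provider("demo");  // placeholder provider name
    provider.add("demo:value", pv);         // placeholder PV name

    // Run a PVA server exposing this provider.
    pva::ServerContext::shared_pointer server(pva::ServerContext::create(
        pva::ServerContext::Config().provider(provider.provider())));

    // Define the type and initial value, then open the PV.
    pvd::PVStructurePtr top(epics::nt::NTScalar::createBuilder()
                                ->value(pvd::pvInt)
                                ->addTimeStamp()
                                ->createPVStructure());
    pv->open(*top);

    // Subsequent updates: modify fields, mark them in a BitSet, then post().
    pvd::BitSet changed;
    pvd::PVIntPtr value(top->getSubFieldT<pvd::PVInt>("value"));
    value->put(42);
    changed.set(value->getFieldOffset());
    pv->post(*top, changed);

    server->run(0); // 0 == run until shut down
    return 0;
}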

@MarkRivers
Member

Would it also make sense to change pvaDriver to use your new PVA code?
https://github.com/areaDetector/pvaDriver

@mdavidsaver
Contributor Author

pvaDriver currently uses the lower-level pvAccessCPP APIs (it doesn't depend on pvDatabase). I'm not sure switching to pvac is worthwhile in isolation.

From what I can tell, with pvaDriver the conversion to NDArray happens on a shared PVA worker thread. In some situations this could cause anomalous latency. I would recommend at some point re-designing pvaDriver so that this work happens on some other thread (e.g. the port worker). Switching to pvac might make this easier (cf. this example of offloading monitor callbacks; a rough sketch follows below).
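
This is not pvaDriver's actual code, just a sketch of the offloading pattern using the pvac client API: monitorEvent() runs on the shared PVA worker thread and only signals a worker thread, which then drains the monitor queue and would do the expensive NTNDArray handling. The channel name and the processing body are placeholders.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

#include <pv/pvData.h>
#include <pva/client.h>

namespace pvd = epics::pvData;

struct OffloadedMonitor : public pvac::ClientChannel::MonitorCallback {
    pvac::Monitor mon;
    std::mutex lock;
    std::condition_variable wakeup;
    bool pending = false, done = false;

    // Runs on the shared PVA worker thread: only flag that data is ready.
    void monitorEvent(const pvac::MonitorEvent& evt) override {
        std::lock_guard<std::mutex> G(lock);
        if (evt.event == pvac::MonitorEvent::Data)
            pending = true;
        else if (evt.event == pvac::MonitorEvent::Fail ||
                 evt.event == pvac::MonitorEvent::Cancel)
            done = true;
        wakeup.notify_one();
    }

    // Runs on our own thread: drain the monitor queue and do the heavy work here.
    void worker() {
        std::unique_lock<std::mutex> G(lock);
        while (!done) {
            wakeup.wait(G, [this]() { return pending || done; });
            pending = false;
            G.unlock();
            while (mon.poll()) {
                // Placeholder for NTNDArray -> NDArray conversion etc.
                std::cout << "uniqueId "
                          << mon.root->getSubFieldT<pvd::PVInt>("uniqueId")->get()
                          << "\n";
            }
            G.lock();
        }
    }
};

int main() {
    pvac::ClientProvider provider("pva");
    pvac::ClientChannel chan(provider.connect("DET:Array")); // placeholder PV name

    OffloadedMonitor cb;
    cb.mon = chan.monitor(&cb);
    std::thread T(&OffloadedMonitor::worker, &cb);
    T.join(); // runs until the subscription fails or is cancelled
    return 0;
}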

In the long run, I see migration to PVXS as the way forward. I did this PR as a stepping stone because updating NTNDArrayConverter looked like more effort than I had time for right now.

@mp49
Contributor

mp49 commented Jul 6, 2022

I've tested the pipeline=true behavior and it seems fine now: I get a single update for every image generated. I also tested the default request type and queueSize=100, and both are fine.

I also tested:
pvget -vvv -m -r "record[pipeline=true, queueSize=10]field()" ST99:Det:Det2:PV1:Array
and that prints out 3 updates and then stops.

The only other problem I found was that the timestamp and codec information is missing. For example, with compression enabled:

codec_t codec
        string name 
        any parameters
            (none)
    long compressedSize 585
    long uncompressedSize 4096
    dimension_t[] dimension
        dimension_t 
            int size 64
            int offset 0
            int fullSize 64
            int binning 1
            boolean reverse false
        dimension_t 
            int size 64
            int offset 0
            int fullSize 64
            int binning 1
            boolean reverse false
    int uniqueId 493
    time_t dataTimeStamp <undefined>              
        long secondsPastEpoch 0
        int nanoseconds 0
        int userTag 0

@mp49
Contributor

mp49 commented Jul 6, 2022

With this patch I am able to run at higher frame rates before dropping images. I tested using pvaDriver with the default request type (no pipeline, default queueSize). These tests were run for 1 minute on a VM with 2 cores, with 128x128 UInt8 images. The machine is not heavily loaded. The CPU use values are for the IOC simulating the images and running the PVA plugin.

First, testing without the patch:

Frame Rate    CPU Use    Lost Frames
100Hz         15%        0.1%
200Hz         28%        1.7%
500Hz         60%        2.5%

And the same test with the patch to the PVA plugin:

Frame Rate    CPU Use    Lost Frames
100Hz         15%        0.0%
200Hz         30%        0.0%
500Hz         68%        0.1%

The CPU use is slightly higher, but I could run at higher frame rates without losing images.

@mdavidsaver
Contributor Author

... The only other problem I found was that the timestamp and codec information is missing.

It seems I inadvertently provided an example of how complicated PVD is. To mark a changed bit for every field, I was iterating over the vector returned by PVStructure::getPVFields(), but this contains only the direct child fields (e.g. timeStamp, but not timeStamp.userTag). Instead it is necessary to iterate over the numeric field offsets and call getSubFieldT() on each one.

This should be fixed now.
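
For anyone hitting the same issue, the pattern described above looks roughly like the following sketch (not the literal commit): walk the numeric field offsets under the top-level structure so that nested fields such as timeStamp.userTag get their bits set too.

#include <pv/bitSet.h>
#include <pv/pvData.h>

namespace pvd = epics::pvData;

// Mark every field under 'top', nested fields included, as changed.
static void markAllFields(const pvd::PVStructurePtr& top, pvd::BitSet& changed)
{
    // The offset of 'top' itself is excluded; getNextFieldOffset() is one past
    // the last field contained anywhere inside it.
    for (size_t offset = top->getFieldOffset() + 1;
         offset < top->getNextFieldOffset(); offset++)
    {
        // getSubFieldT() throws if the offset were somehow out of range.
        pvd::PVFieldPtr fld(top->getSubFieldT<pvd::PVField>(offset));
        changed.set(fld->getFieldOffset()); // equivalent to changed.set(offset)
    }
}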

@mp49
Contributor

mp49 commented Jul 7, 2022

I can confirm it fixes the problem with the timestamp and codec information:

codec_t codec
        string name 
        any parameters
            int  5
    long compressedSize 262144
    long uncompressedSize 262144
    dimension_t[] dimension
        dimension_t 
            int size 512
            int offset 0
            int fullSize 512
            int binning 1
            boolean reverse false
        dimension_t 
            int size 512
            int offset 0
            int fullSize 512
            int binning 1
            boolean reverse false
    int uniqueId 3200
    time_t dataTimeStamp 2022-07-07 10:30:22.817  
        long secondsPastEpoch 1657204222
        int nanoseconds 817045569
        int userTag 0

I did more testing with pipeline=true. It seems to work for a while, then stops sending out updates. If I do this:

pvget -vvv -m -r "record[pipeline=true]field(uniqueId)" ST99:Det:Det2:PV1:Array

and then produce images at 100Hz, it stops printing out the uniqueId after a few seconds. If I restart the client, it's fine again for a few seconds. I saw the same effect when I modified pvaDriver to use pipeline=true.

I ran the same test without pipeline=true and it seems to run forever (I tested for a few minutes at 100Hz).
