Posts by Pierre Chatelier

    Hi,

    I appreciate you trying to deal with my reluctance to change the image stream.

    I have never used the "ImageMonitor"; I don't even know how to read it from the SDK. Where is the documentation on the SDK side?


    Your proposal is interesting, but it only generates timestamps for TTL events, and I won't be able to match them with images. I think it can be improved, though. Tell me if I am wrong, but could I instead gather in the ImageMonitor:

    -the current 64b value of the FG counter (pseudo-timestamp), which would be updated at each image

    -the current 64b image number (which will be, I hope, the same imgNr as in the APC callback (frameindex_t imgNr, struct fg_apc_data* data))

    -the 64b value of the FG counter when the last TTL event occurred. This value would be updated only on a TTL event.


    This way, I could

    -read the monitor FIFO from my APC callback, so that I will always get reliable information and will support frame loss easily

    -instead of reading "FG_TIMESTAMP_LONG", I could use the 64b pseudo-timestamp from the monitor

    -detect that an event occurred without relying on the EventToHost at all


    Is that correct?

    Would it impact performance a lot?
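
    To make that concrete, here is roughly the kind of APC callback I have in mind. Since I don't know how the ImageMonitor is read from the SDK, the monitorReadU64() helper below is only a placeholder for that call; the rest is just the logic I would build around the three 64b values:

```cpp
#include <cstdint>
#include <fgrab_struct.h>     // Fg_Struct, frameindex_t (Framegrabber SDK)

// Placeholder for whatever SDK call finally exposes the ImageMonitor values;
// 'which' selects one of the three 64b values published by the applet.
// NOT a real SDK function.
uint64_t monitorReadU64(Fg_Struct* fg, int which);

enum { MON_FG_COUNTER = 0,        // FG counter latched at each image (pseudo-timestamp)
       MON_IMAGE_NUMBER = 1,      // image number (hopefully identical to imgNr below)
       MON_LAST_TTL_COUNTER = 2   // FG counter latched on the last TTL event
};

// The SDK lets the user define the content of struct fg_apc_data.
struct fg_apc_data {
    Fg_Struct* fg;
    uint64_t   lastSeenTtlCounter;
};

int myApcCallback(frameindex_t imgNr, struct fg_apc_data* data)
{
    const uint64_t pseudoTimestamp = monitorReadU64(data->fg, MON_FG_COUNTER);
    const uint64_t imageNumber     = monitorReadU64(data->fg, MON_IMAGE_NUMBER);
    const uint64_t ttlCounter      = monitorReadU64(data->fg, MON_LAST_TTL_COUNTER);

    // A TTL event occurred since the previous frame iff the latched counter changed.
    const bool ttlEventOccurred = (ttlCounter != data->lastSeenTtlCounter);
    data->lastSeenTtlCounter = ttlCounter;

    // Tag this frame if the event was latched at or before this image's pseudo-timestamp.
    const bool tagged = ttlEventOccurred && (ttlCounter <= pseudoTimestamp);

    // ... hand (imgNr, imageNumber, tagged) over to the application ...
    (void)imgNr; (void)imageNumber; (void)tagged;
    return 0;
}
```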


    Additional questions

    -why do you mention the clock being ns/10 and the output being 10*ns?

    -why do you cast the 1x64b to 2x32b before sending it to the monitor?

    Quote

    Indeed the timestamp is not generated by the FG. It is a timestamp by the host PC and not reliable to do any assumptions concerning the trigger pulses. In fact the times of receiving DMA image transfers and such events can totally get mixed up.

    Right, this is indeed bad news for me, because timing is highly critical in my applications. I will have to reconsider many things.

    At least it is unambiguous: you should really document that in capital letters.


    I am not sure I understand your proposal about the latch counter: can you show me an example design?


    Anyway, please note that if I cannot attach a timestamp to an event, then the timestamp information is not that useful. In that case, a single bit for "rising edge detected during image" and another bit for "falling edge detected during image" may be enough.


    And as you saw in my first post, I am not sure about how to handle that in the FG pipeline. It's not easy to explain what I have in mind, but let's try.


    -If I store an "event occurred" bit in the image, I will miss it in case of frame loss, whereas "EventToHost" has the great advantage of being delivered anyway. It would be solved if I could attach information to the "EventToHost" channel.


    -If I use a register to store "event occurred" and insert it in the first pixel after the "memory buffer", then I think there is a conflict, because a new event could occur during the insertion. That new event should be associated with the next image while I am still marking the current one.


    -If I use a register to store "event occurred" in the tail of the image (last pixel), is that better? I really don't like footers; I would much prefer headers.


    -If I use a register to store "event occurred in image N" in the first pixel of image N+1, I cannot figure out how to handle the reset properly either.


    -An awful solution (for me) would be to add another DmaToPC in order to send bytes, instead of an "event", when the tag signal occurs. Those bytes could be the FG counter used as a timestamp. But that is a huge refactoring of my acquisition process, and in some applets I already use several DMAs to handle images, so I would have to add a custom option letting the user select whether a "DMA port" is a regular one or one "to handle events". Booh.


    Any help is welcome.

    This post is rather long, but it is easy to follow.


    0) The context

    1) I observe a problem that I thought should not occur

    2) I try to understand the scenario leading to the observation

    3) I try to find a workaround, either with a small fix or by totally reconsidering the design; I don't know yet, it depends on 2)


    0) First, let me explain in a few lines the purpose of the whole thing:

    I have two cameras, each one connected to a frame grabber (VQ8-CXP6D) in a single host PC. The two frame grabbers share an OptoTrigger. I have a custom (but very simple) applet running in the frame grabbers (see attachment). The cameras are synchronized by the same external signal generator (~1000 fps), plugged on Pin0 of the applet. An external TTL can also be raised at any time towards Pin1 of the applet (then redirected as an "EventToHost"). The host PC catches the event and, based on the timestamp of the event, "marks" the matching frame as "tagged". There is no performance problem and no frames are lost; I use very efficient APC frame grabbing with the SDK.


    1) Now, the problem

    The idea is that the TTL should tag images that were taken at the exact same time in the real world. But I can see that this is not the case. If I record a sequence (well, one sequence per camera) in which a lightbulb is observed, both sequences will (as expected) have a tagged image (which can be artificially considered time T0), but while camera C1 sees the lightbulb lighting up at frame T1, camera C2 can see it lighting up at frame T2. I have seen T2 = T1 + dT with dT ranging from 0 to 10 frames, while I expected T1 = T2 every time.


    2) I tried to understand how it could happen.

    Regarding the applet design (see attachment), I assume that the events on Pin1 are always delivered before the images. When I receive the event in my callback, I look at the timestamp (which is, by the way, not in fg_event_info.timestamp[0] but in fg_event_info.timestamp[1]; I don't know why, but that's not the problem) and I push that timestamp into an "events-timestamps queue" (one queue per frame grabber). In the frame-grabbing APC, when a frame is received, I look at the frame timestamp; if the events-timestamps queue (of the same FG) is not empty and the frame timestamp is greater than the first event timestamp, I "tag" the image and pop the first event timestamp from its queue. I really thought that was perfect.
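
    For reference, that matching logic is essentially the following (one instance per frame grabber; the event timestamp comes from fg_event_info.timestamp[1] and the frame timestamp from FG_TIMESTAMP_LONG, as described above):

```cpp
#include <cstdint>
#include <deque>
#include <mutex>

// One instance per frame grabber.
struct EventMatcher {
    std::deque<uint64_t> eventTimestamps;  // the "events-timestamps queue"
    std::mutex           mutex;

    // Called from the event callback with fg_event_info.timestamp[1].
    void onTtlEvent(uint64_t eventTimestamp) {
        std::lock_guard<std::mutex> lock(mutex);
        eventTimestamps.push_back(eventTimestamp);
    }

    // Called from the frame-grabbing APC with the frame timestamp
    // (FG_TIMESTAMP_LONG); returns true if this frame must be "tagged".
    bool onFrame(uint64_t frameTimestamp) {
        std::lock_guard<std::mutex> lock(mutex);
        if (!eventTimestamps.empty() && frameTimestamp > eventTimestamps.front()) {
            eventTimestamps.pop_front();
            return true;  // tag the image and pop the event
        }
        return false;
    }
};
```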

    First question: are the timestamps reliable? In the documentation, I could finally find the brief mention "It is a high-performance query counter and related to Microsoft's query performance counter for Windows®." Does that mean that the timestamp is not generated on the FG but on the host PC? What does it imply in my use case, where I want to catch the event when it occurs (TTL on Pin1) and not when it is received by the SDK (which happens, as far as I understand, "some time" later, since it must also be transmitted to the host PC and dispatched to the registered callbacks)?


    3) What kind of workaround can I imagine?

    Tagging images is not an easy task. I really want to avoid modifying the image pixels to store information, or even worse (for me), adding data at the end of the image. Even if I wanted to do so, I have not yet been able to produce an efficient VA design for that, because:

    -if the event occurs during the acquisition of image I, it seems more reasonable to tag image I+1 and let the software handle that fixed shift

    -if we tag image I+1, it means that a signalCounter used to take decisions should not be reset at the frame level, but should rather be stored in a register, the register being reset after it has been successfully inserted into the next image. Not that easy (for me) to implement in VA

    -the "FG_IMAGE_TAG" seems not to be designed for that at all, I am not sure how it could be used


    What kind of advice (or solutions) can you give me?

    Files

    • cxp-sync.png


    Hello,

    I don't really understand.


    A standard applet allows FG_GRAY, FG_GRAY10, FG_GRAY12, FG_GRAY16 at least, and 12b is not a problem with them.

    But you claim that a custom VisualApplets design will only support FG_GRAY, FG_GRAY16, FG_COL24 and FG_COL48.


    First, this is surprising, since the byte stream is valid anyway: if I emit a 12b stream, I get a 12b stream, whatever the FG_FORMAT value. Why would it be limited in that case? This is just incorrect information returned by the applet.


    Second, it works with FG_GRAY10: if I set up a 10b stream, FG_FORMAT is set to FG_GRAY10 and my client software is informed that it must be interpreted as 10b. Why would 10b be different from 12b?

    Some more information:
    If I set the camera operator to Mono10 and adapt the design to 10b, the Output operator FG_FORMAT appears, as expected, as FG_GRAY10.

    If I set the camera operator to Mono12 and cast the bit width to 10b at the end, the Output operator FG_FORMAT appears, as expected, as FG_GRAY10.

    If I set the camera operator to Mono10 and cast the bit width to 12b for the rest of the design, the Output operator falls back to the incorrect FG_GRAY instead of the expected FG_GRAY12.

    (VisualApplets 3.1.2, build 77127)

    [edit]

    The standard Acq_SingleCXP6x4AreaGray *does* allow FG_GRAY12, but I really need a specific design.

    I have a very simple applet for a 1280x1024@12bits camera.
    It compiles correctly, I can flash a board with it.

    But when I try to use it with RT MicroDisplay or my own software, the FG_FORMAT of the "Output" Operator always appears as a grayed, disabled FG_GRAY instead of FG_GRAY12.

    Thus, my software interprets it as an 8b image instead of unpacking it from 12b, and the result is wrong.
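
    For clarity, my client software does nothing fancy: it just queries FG_FORMAT from the applet and chooses the unpacking accordingly, roughly like this (error handling omitted):

```cpp
#include <fgrab_struct.h>     // Fg_Struct
#include <fgrab_prototyp.h>   // Fg_getParameter
#include <fgrab_define.h>     // FG_FORMAT, FG_GRAY, FG_GRAY10, FG_GRAY12, FG_GRAY16

// Number of significant bits per pixel, as reported by the applet's Output operator.
int bitsPerPixelFromApplet(Fg_Struct* fg, unsigned int dmaPort)
{
    int format = 0;
    Fg_getParameter(fg, FG_FORMAT, &format, dmaPort);
    switch (format) {
        case FG_GRAY10: return 10;
        case FG_GRAY12: return 12;   // what I expect for this Mono12 design
        case FG_GRAY16: return 16;
        case FG_GRAY:                // what the custom applet wrongly reports
        default:        return 8;
    }
}
```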


    Is there any reason for that?

    CXP-Mono12-1280x1024-synchro+trigger.va

    But unlike LoadCoefficients, "UpdateROI" cannot be set to 0, so it is always set to 1.

    I think that, here again, this is a documentation problem:

    I have just checked that the CoefficientBuffer ROI does indeed seem to be taken into account (as far as the fps is concerned) after writing 1 into UpdateROI, *even if the value is already 1*.

    Can you confirm?

    If that is the case, I also strongly suggest that the documentation of UpdateROI be updated to explain it!


    [edit]

    For instance, in my own GUI, since UpdateROI has only a single possible value, no GUI event is raised when I rewrite "1" in the associated NumericUpDown control, and thus I do not propagate the write to Sgc_setIntegerValue(). That's why I got tricked!

    It means that I should call Sgc_setxxxValue() even if the new value is the same as the current one.
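
    A minimal sketch of what I mean for UpdateROI, assuming it is reached as a named applet parameter (the parameter name below is the one from my own design and is only illustrative):

```cpp
#include <fgrab_struct.h>     // Fg_Struct
#include <fgrab_prototyp.h>   // Fg_getParameterIdByName, Fg_setParameter
#include <fgrab_define.h>     // FG_OK

// Always write 1 into UpdateROI, even if the GUI thinks the value is unchanged,
// so that the CoefficientBuffer actually reloads its ROI settings.
bool applyCoefficientBufferRoi(Fg_Struct* fg, unsigned int dmaPort)
{
    // Illustrative parameter name; the real one depends on the design.
    const int updateRoiId =
        Fg_getParameterIdByName(fg, "Device1_Process0_CoefficientBuffer_UpdateROI");
    int one = 1;
    return Fg_setParameter(fg, updateRoiId, &one, dmaPort) == FG_OK;
}
```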

    I have got an additional question.

    If I want to set an ROI, I dynamically set the width and height of the:

    -camera Genicam parameters (mandatory)

    -applet ImageBuffer width and height (optional, but better bandwidth)

    -applet SelectROI width and height (optional, but better bandwidth)

    -applet CoefficientBuffer XLength and YLength (and offset) (required for Shading to be meaningful)

    and it kind of works (see the sketch below), but...
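
    For reference, here is roughly how I apply those settings today; all the node and parameter names below are the ones of my own camera and design, so treat them as illustrative only:

```cpp
#include <fgrab_struct.h>     // Fg_Struct
#include <fgrab_prototyp.h>   // Fg_getParameterIdByName, Fg_setParameter
#include <fgrab_define.h>     // FG_OK
#include <siso_genicam.h>     // SgcCameraHandle, Sgc_setIntegerValue

bool applyRoi(Fg_Struct* fg, SgcCameraHandle* camera, unsigned int dmaPort,
              int width, int height)
{
    // 1) camera GenICam parameters (mandatory)
    Sgc_setIntegerValue(camera, "Width",  width);
    Sgc_setIntegerValue(camera, "Height", height);

    // 2)-4) applet parameters; the names depend on the design (illustrative here)
    auto setAppletInt = [&](const char* name, int value) {
        const int id = Fg_getParameterIdByName(fg, name);
        return Fg_setParameter(fg, id, &value, dmaPort) == FG_OK;
    };
    bool ok = true;
    ok &= setAppletInt("Device1_Process0_ImageBuffer_XLength",       width);
    ok &= setAppletInt("Device1_Process0_ImageBuffer_YLength",       height);
    ok &= setAppletInt("Device1_Process0_SelectROI_XLength",         width);
    ok &= setAppletInt("Device1_Process0_SelectROI_YLength",         height);
    ok &= setAppletInt("Device1_Process0_CoefficientBuffer_XLength", width);
    ok &= setAppletInt("Device1_Process0_CoefficientBuffer_YLength", height);
    return ok;
}
```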


    Regarding the bandwidth: since the "real" BufferWidth and BufferHeight of the CoefficientBuffer operator are static and set to 5120x5120 (actually 320x5120, because of the tricky use of the width when using 4x(64b@2x) per file), I have observed that my fps is limited by those hard-coded "full frame" dimensions.

    For instance, in 3520x2000, while it should be ~250 fps (which is what I observe in an applet without shading), I am limited to 75 fps here.
    I think it would work if I built a specific version of the applet with a CoefficientBuffer operator adapted to 3520x2000 (i.e. 220x2000), but that is not very handy.

    Am I right in my investigation? Is there another solution to use a customizable ROI at a higher fps when using a CoefficientBuffer operator?

    Thanks for your help.
    I summarize here the various pieces of information from this thread:


    -The current version of the Appendix.Device Resources states:

    "Due to the shared bandwidth architecture, the applet developer should utilize all 256 bits of the operator’s memory interface (RAM Data Width)"

    But the "256" here is just a specific case, the real "RAM Data Width" may be different (and found in the same documentation for each board model type), and it is 512 for the MA-VCX-QP. (that's why I tried 64b@4x at first instead of 64b@8x for the output of the CoefficientBuffers)
    I suggest that you modify a little that sentence of the doc.

    -For a board using the shared memory, it is indeed written in the documentation that all RAM operators should use the same data width for performance, but I did not realize that it implied "artificially" parallelizing up my 8b@32x camera stream to 8b@64x for the InfiniteSource ImageBuffer storage. It is perfectly logical in hindsight, but not obvious at first. You may also want to emphasize that point in the documentation.


    -The current version of CoefficientBuffer (VA 3.1.2) has technical limitations that make it tricky to use at full bandwidth. Your discussion thread "CoefficientBuffer: Maximum performance..." is really important.
    I suggest that you include it in the VA documentation (or work on a new, less tricky, CoefficientBuffer operator!).


    -In the shared memory model, no performance gain will be observed by splitting a CoefficientBuffer into several CoefficientBuffer operators, as long as the full data width is properly used.

    Success!


    Setting the CoefficientBuffers to 8x(64b@2x) is OK. Now I have my 50 fps. (And I have already written and tested the program that splits my coefficient .tif files into 128b block parts.)


    But I still don't understand why SyncToMax is needed instead of SyncToMin since all image sizes are identical.


    Some other compilations are still going on; I will update the thread with the other results to make an exhaustive report.

    CXP-Mono8-5120x5120+shading+bpr+gpi23.jpg

    The design with the 8x(64b@2x) coefficients has not been compiled yet; I just posted a screen capture of your applet while the compilation is running.

    This is just an informative report showing that, for now, I get figures similar to yours.

    To avoid overflow, I found that the speed limit is a period of 83950, which is ~1488 Hz for the 125 MHz clock (125 000 000 / 83 950 ≈ 1488.98).
    On your machine you seem to achieve 1600 Hz.
    As soon as I have the Shading applet compiled, I will report the figures here.


    QuadCXP_Shading_BRudde_FakeCAM2.JPG

    Hi,


    During the compilation, I can already provide you with some information:

    First, the performance of my board

    MA-VCX-QP-performance.JPG


    Second, the test of your applet:

    QuadCXP_Shading_BRudde_FakeCAM.JPG

    As you can see, it is around 1500 fps rather than 1600 fps.

    I don't understand two things:

    -I couldn't run your applet under MicroDisplay without my camera being detected (if no camera is found, I cannot start the applet; the buttons are disabled)

    -The ROI(ImageBuffer) > FillLevel is at 75%, so is the FIFO full?!


    I have checked that just setting the input ImageBuffer to 512b/clock is not enough.

    CXP-Mono8-5120x5120+shading+bpr+gpi22.jpg


    CXP-Mono8-5120x5120+shading+bpr+gpi21.jpg

    A CXP camera can usually be configured either in "free run" mode, or as a slave of the trigger signal.

    You should look into its GenICam configuration.

    With MicroDisplay, you can use Tools > GenicamExplorer and look for these nodes:

    Acquisition Control > Acquisition Mode (something like "Continuous" means "free run", and the camera uses its "Acquisition Control > Acquisition Frame Rate"). If you set the acquisition mode to "CoaXPress", the camera can be slaved to triggers coming from the OptoTrigger, for instance. Some cameras may also offer "Single Frame" or "Line 0" when using native ports (not CoaXPress) for the trigger, as documented by the camera manufacturer.


    For some cameras, it will rather be "Acquisition Control > Trigger Mode" and "Acquisition Control > Trigger Source".
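
    The same settings can also be applied programmatically through the SGC interface. A small sketch, with the enumeration setter left as a placeholder (I don't have its exact SGC name at hand) and node names/values that still depend on your camera:

```cpp
#include <siso_genicam.h>   // SgcCameraHandle (Framegrabber SDK GenICam interface)

// Placeholder: use whichever SGC call your SDK version provides for writing
// enumeration nodes; the node names and values below depend on the camera
// and are only illustrative.
void setEnumNode(SgcCameraHandle* camera, const char* node, const char* value);

// Free run: the camera paces itself with its own acquisition frame rate.
void configureFreeRun(SgcCameraHandle* camera)
{
    setEnumNode(camera, "AcquisitionMode", "Continuous");
}

// Triggered, variant 1 (as described above): the acquisition mode itself
// selects CoaXPress triggering, so the camera is slaved to the OptoTrigger.
void configureTriggeredViaAcquisitionMode(SgcCameraHandle* camera)
{
    setEnumNode(camera, "AcquisitionMode", "CoaXPress");
}

// Triggered, variant 2 (other cameras): explicit trigger mode/source nodes.
void configureTriggeredViaTriggerNodes(SgcCameraHandle* camera)
{
    setEnumNode(camera, "TriggerMode",   "On");
    setEnumNode(camera, "TriggerSource", "CoaXPress");  // or a native line, per the camera manual
}
```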


    CXP-trigger.jpg