This post is rather long, but it should be easy to follow:
0) The context.
1) I observe a problem that I thought should not occur.
2) I try to understand the scenario leading to the observation.
3) I try to find a workaround, either a small fix or a complete redesign; I don't know yet, it depends on 2).
0) First, let me explain the purpose of the whole setup in a few lines:
I have two cameras, each connected to its own frame grabber (VQ8-CXP6D) in a single host PC. The two frame grabbers share an OptoTrigger. A custom (but very simple) applet runs in the frame grabbers (see attachment). The cameras are synchronized by the same external signal generator (~1000 fps) wired to Pin0 of the applet. An external TTL can also be raised at any time on Pin1 of the applet (and is then forwarded as an "EventToHost"). The host PC catches the event and, based on the timestamp of the event, "marks" the matching frame as "tagged". There is no performance problem and no frames are lost; I use a very efficient Apc frame grabbing with the SDK (sketched below).
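For reference, the acquisition setup looks roughly like this. This is a minimal sketch written from memory of the Fg SDK (fglib5): the applet file name, buffer sizes and timeout are placeholders, and the exact `FgApcControl` fields, flags and signatures should be checked against the SDK headers.

```cpp
#include <fgrab_prototyp.h>   // Fg SDK (fglib5) main header

// Apc callback: called by the SDK for each completed frame.
extern "C" int onFrame(frameindex_t imgNr, struct fg_apc_data *data) {
    // ... query the frame timestamp and match it against the pending
    // event timestamps (see the matching sketch further below) ...
    return 0;
}

int main() {
    // One Fg_Struct per grabber; "DualCamApplet.dll" is a placeholder.
    Fg_Struct *fg = Fg_Init("DualCamApplet.dll", 0 /* board index */);

    // Placeholder buffer pool: 64 MiB split over 64 buffers.
    dma_mem *mem = Fg_AllocMemEx(fg, 64 * 1024 * 1024, 64);

    struct FgApcControl ctrl = {};
    ctrl.version = 0;
    ctrl.func    = onFrame;
    ctrl.data    = nullptr;          // user data passed to the callback
    ctrl.flags   = FG_APC_DEFAULTS;
    ctrl.timeout = 1000;             // ms

    Fg_registerApcHandler(fg, 0 /* DMA port */, &ctrl, FG_APC_CONTROL_BASIC);
    Fg_AcquireEx(fg, 0 /* DMA port */, GRAB_INFINITE, ACQ_STANDARD, mem);
    // ... event callback registration for Pin1 omitted here ...
}
```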
1) Now, the problem:
The idea is that the TTL should tag images that were taken at the exact same time in the real world, but I can see that this is not the case. If I record a sequence (well, one sequence per camera) in which a light bulb is observed, both sequences will, as expected, contain a tagged image (which can artificially be considered time T0). But while camera C1 sees the light bulb light up at frame T1, camera C2 can see it light up at frame T2. I have seen T2 = T1 + dT with dT ranging from 0 to 10 frames, while I expected T1 = T2 every time.
2) I tried to understand how it could happen.
Given the applet design (see attachment), I assume that the events on Pin1 are always delivered before the corresponding images. When I receive the event in my callback, I look at the timestamp (which is, by the way, not in fg_event_info.timestamp[0] but in fg_event_info.timestamp[1]; I don't know why, but that's not the problem here) and push it into an "event-timestamps queue" (one queue per frame grabber). In the frame-grabbing Apc, when a frame is received, I look at the frame timestamp; if the event-timestamps queue of the same FG is not empty and the frame timestamp is greater than the first event timestamp, I "tag" the image and pop that event timestamp from the queue (see the sketch below). I really thought that was perfect.
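For clarity, here is a minimal sketch of that matching logic as described above. The class and method names are mine, and the frame timestamp is assumed to come from the SDK's per-buffer query for the given image number.

```cpp
#include <cstdint>
#include <deque>
#include <mutex>

// One matcher per frame grabber: the event callback pushes event
// timestamps, the Apc pops them against incoming frame timestamps.
class EventMatcher {
public:
    // Called from the event callback (Pin1 "EventToHost").
    void onEvent(uint64_t eventTs) {
        std::lock_guard<std::mutex> lock(m_);
        events_.push_back(eventTs);
    }

    // Called from the Apc for each frame; returns true if the frame
    // should be tagged. frameTs is the per-buffer timestamp queried
    // from the SDK for this image number.
    bool onFrame(uint64_t frameTs) {
        std::lock_guard<std::mutex> lock(m_);
        if (!events_.empty() && frameTs > events_.front()) {
            events_.pop_front();   // consume the matched event
            return true;           // tag this image
        }
        return false;
    }

private:
    std::mutex m_;
    std::deque<uint64_t> events_;
};
```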
First question: are the timestamps reliable? In the documentation, I finally found the brief mention "It is a high-performance query counter and related to Microsoft's query performance counter for Windows®." Does that mean that the timestamp is not generated on the FG but on the host PC? What does that imply in my use case, where I want to catch the event when it occurs (TTL on Pin1), not when it is received by the SDK (which, as far as I understand, happens "some time" later, since the event must also be transmitted to the host PC and prepared for delivery to the registered callbacks)?
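If the timestamps really are QueryPerformanceCounter ticks stamped on the host (which the documentation only hints at), one way to get a feeling for the slack would be to convert tick deltas to microseconds and log the spread between each event timestamp and the timestamp of the frame it ends up tagging. A sketch, assuming the raw values are QPC ticks:

```cpp
#include <windows.h>
#include <cstdint>
#include <cstdio>

// Convert a raw tick delta to microseconds, assuming the SDK
// timestamps are QueryPerformanceCounter ticks.
static double ticksToUs(int64_t dTicks) {
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);   // ticks per second
    return 1e6 * static_cast<double>(dTicks)
               / static_cast<double>(freq.QuadPart);
}

// In the Apc, after matching an event to a frame:
void logSlack(uint64_t eventTs, uint64_t frameTs) {
    double slackUs = ticksToUs(static_cast<int64_t>(frameTs - eventTs));
    // At ~1000 fps one frame is ~1000 us: if the slack regularly
    // exceeds that, the event timestamp cannot identify a unique frame.
    std::printf("event -> frame slack: %.1f us\n", slackUs);
}
```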
3) What kind of workaround can I imagine?
Tagging images is not an easy task. I really want to avoid modifying the image pixels to store information, or, even worse (for me), appending data at the end of the image. Even if I wanted to do so, I have not yet been able to produce an efficient VA design for it, because:
- if the event occurs during the acquisition of image I, it seems more reasonable to tag image I+1 and let the software handle that fixed shift (see the sketch after this list);
- if we tag image I+1, it means that a signalCounter used to take decisions should not be reset at frame level, but rather stored in a register, the register being reset only after it has been successfully inserted into the next image. Not that easy (for me) to implement in VA;
- "FG_IMAGE_TAG" does not seem to be designed for that at all; I am not sure how it could be used.
What advice (or solutions) can you give me?