Capture a Framegrabber IO Event

  • In my application, linescan cameras are acquiring and sending 2D images to my host PC where they are concatenated. The final image length needs to be as long as the camera trigger photoeye signal is ON. Once the photoeye turns OFF, the camera should complete the final 2D image acquisition. I need the host PC to know when the final 2D image has been sent OR when the Trigger photoeye signal turns OFF so that it can determine the last acquired image and finalize the image concatenation. I can see two possible ways to do this:

    1. Use an acquisition enable signal (i.e. analyze a Trigger 5 board input signal) to acquire images and send a value out with the 2D images that represents a serial number. All 2D images with the same serial number are considered part of the final image. Increment a counter (i.e. serial number) every time the photoeye signal goes from low to high (i.e. new image). The problem here is that the host PC is still waiting for whatever grabs come from the camera and does not "know" when the final 2D image has been sent until the next low-to-high transition. To overcome this, if there is a way for the framegrabber to send out one last image on a high-to-low transition and signal that it is the final image then the host PC could know that the sequence of images to concatenate is finished. But how to do this?
    2. Use an event signal in the host PC that monitors the Trigger 5 board input signal of the photoeye. If the photoeye goes from high-to-low (trailing edge of part), then tell the grab worker that image concatenation is complete. If it goes from low-to-high, tell the grab background worker that an image is being acquired. This could be done with serialization/counters as well. But looking at the SDK documentation, I do not see how to capture the IO events from the Trigger 5 board in the host PC.
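    To illustrate option 1, here is a rough host-side sketch of the serial-number grouping (the `Frame` struct and function names are hypothetical placeholders, not SDK types), including the latency problem: the last group only completes when the next serial arrives.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical host-side grouping for option 1: frames tagged with a serial
// number belong to the same final image; a change in serial means the
// previous group is complete and can be concatenated.
struct Frame {
    uint32_t serial;  // incremented by the grabber on each low-to-high photoeye edge
    int lines;        // number of 1D lines in this 2D chunk
};

// Returns the total line count of each completed group. Without an explicit
// "final image" flag, a group only completes when the NEXT serial number
// shows up -- exactly the "host PC does not know" problem described above.
std::vector<int> completedGroupSizes(const std::vector<Frame>& frames) {
    std::vector<int> sizes;
    int current = 0;
    for (std::size_t i = 0; i < frames.size(); ++i) {
        if (i > 0 && frames[i].serial != frames[i - 1].serial) {
            sizes.push_back(current);  // previous group is finished
            current = 0;
        }
        current += frames[i].lines;
    }
    // The in-progress group ('current') is intentionally NOT pushed:
    // the host cannot yet know that it is complete.
    return sizes;
}
```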

    I appreciate your thoughts on this. Thanks!

  • Thank you for your detailed question.

    Below you can find a single design that answers both 1. and 2.


    The VA design LineScan_Context_Feedback_BRudde_v01.va includes all necessary details for determining the end of a 1D image stream on the basis of an appended 32-bit metadata flag:


    BRudde_Line1D.png


    Additionally, you can register a software callback on the gate start (rising) and end (falling) edge if required. This is implemented in front of the EventToHost operator.


    After building an overlay bit representing the last pixel of a gate, this flag is appended as a 32-bit meta flag:

    BRudde_Line1D_2.png


    The whole flag-bit image is therefore reduced to the last pixel of the image:

    BRudde_Line1D_3.png


    In the Runtime/SDK folder %SISODIR5%\SDKExamplesNew\examples\fglib\Events\ you can find the C++ source code to access and register the required software callbacks.

    Please enjoy ;-)

    Björn Rudde
    Senior Technical Sales and
    Field Application Engineer

    In order to simulate the above VA design, please insert an image longer than 1024 lines.

    A longer image will force several output frames, each including a 4-byte "last line" representing:

    • 0xFFFFFFFF if it is the last image of the single gate's image stream
    • 0x00000000 for the first or an intermediate image of the transfer
    • How it looks in the VA simulation:
      pasted-from-clipboard.png
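    On the host side, the appended 4-byte flag could be checked along these lines (a sketch; the function name and buffer layout interpretation are my assumptions based on the flag description above):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical host-side check: each transferred frame carries the appended
// 32-bit meta flag as its last 4 bytes. 0xFFFFFFFF marks the last frame of
// the gate's image stream; 0x00000000 marks a first or intermediate frame.
bool isLastFrameOfGate(const uint8_t* buffer, std::size_t bufferSizeBytes) {
    if (bufferSizeBytes < 4)
        return false;  // malformed: no room for the appended flag
    uint32_t flag = 0;
    std::memcpy(&flag, buffer + bufferSizeBytes - 4, sizeof(flag));
    // Byte order does not matter here: both defined values
    // (0xFFFFFFFF and 0x00000000) read the same in either endianness.
    return flag == 0xFFFFFFFFu;
}
```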

    Björn Rudde
    Senior Technical Sales and
    Field Application Engineer

    Testing this in VA gives the expected behavior, thank you. In my application, I apply a smoothing filter that requires the image to be at least the size of the kernel. If an image is acquired that is shorter than the filter can process, what is the best way to handle this in VA? In my application, I could probably just "ignore" the processed result while still sending the final image data (the last few pixel rows, basically) to the host PC with the "final image" flag. Is there a way to check whether the incoming image is large enough to be processed and to choose not to process it based on its size?

  • ... Is there a way to check whether the incoming image is large enough to be processed and to choose not to process it based on its size?

    This would require waiting for the last line of an image, which would cause a full frame of latency and require a memory operator.

    Since a VA design always works as a pipeline that should be able to handle the full camera bandwidth, this decision is best made (based on a VA-inserted binary flag or similar) in the related software process behind the VA processing.
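    In the host process, such a decision could look like the sketch below (the names and the pass-through policy are my assumptions, not part of the VA design; the idea is just to keep short frames' raw pixels for concatenation instead of filtering or dropping them):

```cpp
#include <cassert>

// Hypothetical host-side decision behind the VA pipeline: a frame shorter
// than the smoothing kernel cannot be filtered, so its raw pixel data is
// appended to the concatenated image unprocessed; all other frames are
// filtered first.
enum class FrameAction { FilterAndAppend, AppendRaw };

FrameAction decideAction(int imageHeightLines, int kernelHeightLines) {
    return imageHeightLines >= kernelHeightLines
               ? FrameAction::FilterAndAppend
               : FrameAction::AppendRaw;
}
```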

    Björn Rudde
    Senior Technical Sales and
    Field Application Engineer