Posts by B.Ru

    Dear rvdbijl,

    is the Ironman board limited to 4 DMA and 4 Imagebuffer instances? VA 3.2.0 won't let me add a 5th one in my project and pass Design Rules Level 1. The resource index won't go over 3 for either one

    The number of DMA transfers is limited to 4. You can use InsertImage/AppendImage or InsertLine/AppendLine to show two images below or next to each other; please note that these Insert*/Append* approaches require some synchronization memory for serializing lines or images. For two results/images that exist at the same time, please use InsertLine/AppendLine and an ImageFiFo (size: 1 line per input) to handle the waiting time while the lines are forwarded one after the other. An image serialization is normally "cheaper" in terms of hardware resources than an additional DMA channel.

    The number of ImageBuffer instances (real frame grabber hardware RAM modules) is limited to 4.

    Dear customer,

    Concerning your question: if you want to run mDisplay with a single camera and multiple DMAs,

    please make the following setting:

    mDisplay -> Tools -> Settings -> Check "Ignore Camclock status" and apply:


    That may help to start all DMAs according to your requirements.

    If that does not show the expected behaviour, please paste your VA design here if possible.

    Thank you.

    Dear Theo,

    The Overflow parameter of the ImageBuffer operator is not stable enough to get a reliable feedback.

    That cannot be solved by a parameter translation itself.

    This VA design will be a reliable approach:


    It measures the number of pixels going into the FIFO and leaving it.

    This way the current fill level can be observed while the FIFO is the first memory behind the camera (infinite source = enabled) and sits directly in front of the first RAM-related memory operator (infinite source = disabled).

    So if the first RAM memory operator is not accepting the arriving data, this can be detected.

    Additionally there is an EventToHost operator that sends an overflow event in case the small FIFO is getting filled up.

    A progressive detection based on OK, critical, and error fill levels is used.
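    The fill-level bookkeeping described above can be sketched in a few lines of host-side Python. This is a hedged illustration only: the threshold ratios and the FIFO capacity are assumptions for the example, not values from the VA design.

```python
# Illustrative sketch: classify a FIFO fill level the way the design does,
# by comparing pixels written into the FIFO against pixels read out.
# Threshold ratios below are example assumptions, not values from the design.

def fifo_fill_level(pixels_in: int, pixels_out: int) -> int:
    """Current fill level = pixels that entered minus pixels that left."""
    return pixels_in - pixels_out

def classify(fill: int, capacity: int,
             critical_ratio: float = 0.5, error_ratio: float = 0.9) -> str:
    """Progressive detection: OK, critical, or error fill level."""
    if fill >= capacity * error_ratio:
        return "error"
    if fill >= capacity * critical_ratio:
        return "critical"
    return "OK"

capacity = 1024  # assumed small FIFO capacity in pixels
print(classify(fifo_fill_level(1_000_000, 999_900), capacity))  # OK
print(classify(fifo_fill_level(1_000_000, 999_000), capacity))  # error
```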

    Dear Pierre,

    I hope I understand or interpret your question correctly:

    The DMA transfer (DmaToPC's FG_FORMAT) reports the wrong bit depth?

    • 8 bit instead of 12?
    • Can you please check if the camera is sending 8/10/12 bit as expected?

    In case the camera sends 8 bit while the design expects 12, the representation of the data would look wrong in a 12-bit visualization.
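    As a quick sanity check of why that looks wrong: an 8-bit pixel value placed unscaled into a 12-bit range only reaches a small fraction of the output scale, so the image appears far too dark. A minimal sketch (the function name is mine, for illustration):

```python
# Illustrative sketch: 8-bit data interpreted on a 12-bit scale appears dark,
# because 255 (8-bit white) is only a small fraction of 4095 (12-bit white).

def as_fraction_of_range(value: int, bit_depth: int) -> float:
    """Express a pixel value as a fraction of the full scale at a bit depth."""
    return value / ((1 << bit_depth) - 1)

white_8bit = 255
print(as_fraction_of_range(white_8bit, 8))   # full white on an 8-bit scale
print(as_fraction_of_range(white_8bit, 12))  # ~6% brightness on a 12-bit scale
```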

    Dear Theo,

    Thank you for this hint.

    I forwarded your input to the product and development management already.

    As far as I know, the naming of the runtime/SDK CXP acquisition applets Acq_QuadCXPx1* (which work with four single-link cameras) and the naming of the VisualApplets operator CXPQuadCamera come from two independent naming conventions.

    Both use "Quad", but the runtime applet name refers to supporting 4 cameras, while the VisualApplets (VA) operator is intended for 4 CXP links representing a single camera.

    Do you have a better naming for these CXP*Camera operators?

    Maybe: CXP6x1Camera, CXP6x2Camera, CXP6x4Camera


    ...does not affect the parallelism, I hope, but it might, correct?

    The different VA CXP camera operators support parallelisms above the minimal one on their output node, which is based on the maximum interface bandwidth and the format mode setting (dynamic/static affects this). The parallelism in combination with the system clock defines the maximum bandwidth of a single link. When you select the CXPQuadCamera operator you will have a default parallelism of 32. In case your project requires the CXPSingleCamera, it will be 8.

    The current version of our CXPQuadCamera operator supports CXP x1, x2, and x4 link configurations during runtime, and as words of solace I can tell you: a single-link (x1) or dual-link (x2) CXP camera will work with the CXPQuadCamera (CXPx4) VA operator... 8o

    One more hint on something related to that:

    The minimal parallelism of the camera operator will always be used inside it and may affect the resolution of the image going into your link, which may have an optionally different parallelism.

    As an example for this:

    The camera delivers 1024 x 1024 pixels in a CXP6-4 configuration into the CXPQuadCamera operator.

    The operator has a minimal parallelism of 20 and a default of 32.

    So internally the image gets a line length of 1040 pixels (1024 rounded up to the next multiple of 20). At the output the parallelism becomes 32, which acts exactly like a ParallelUp operator, so you get a line length of 1056 pixels into the link.
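    The padding arithmetic above can be sketched as a small rounding helper (the function name is mine, for illustration):

```python
# Illustrative sketch: a line length is rounded up to the next multiple of
# the parallelism at each stage, which is where the extra pixels come from.

def pad_to_parallelism(line_length: int, parallelism: int) -> int:
    """Round a line length up to the next multiple of the parallelism."""
    return -(-line_length // parallelism) * parallelism  # ceiling division

internal = pad_to_parallelism(1024, 20)   # minimal parallelism inside the operator
output = pad_to_parallelism(internal, 32) # output parallelism (like ParallelUp)
print(internal, output)  # 1040 1056
```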

    Setting the maximum image width to the correct value is mandatory in this case.

    You can simply use the ImageStatistics operator directly behind the camera operator to investigate this.

    After that, an ImageBuffer will help to select the real region of interest of the image.

    The parallelism does not affect the image height directly.

    Ok, I am now completely off topic ... :huh:


    The remove image procedure inserts a delay of exactly one full frame.

    The previously inserted FIFO was able to store a single line.

    Because of that there was a deadlock as soon as the first line ran into the SYNC(module16)...

    Below you can see a screenshot of the solution and the VA source code:


    Download VA Design:

    That version will work as expected.

    The VA simulation does not take care of the synchronization in hardware;

    that detail needs to be handled by the developer at design time.

    For explanation:

    The remove image approach causes a delay of a full frame in the upper link (by-pass) and requires a memory for a full frame.

    ... Is there a way to check whether the incoming image is large enough to be processed and to choose not to process it based on its size?

    This would require waiting for the last line of an image, which would cause a full-frame latency and require a memory operator.

    Since a VA design always works as a pipeline that should be able to handle the full camera bandwidth, it is better to make this decision (based on a VA-inserted binary flag or similar) in the related software process behind the VA processing.

    In order to simulate the above VA design please insert an image longer than 1024 lines.

    A longer image will force several output frames, each including a "last line" of 4 bytes, representing:

    • 0xFFFFFFFF if it is the last image of the single gate image stream
    • 0x00000000 in case of a first or intermediate image of the transfer

    How it looks in VA simulation:
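    On the host side, the appended 4-byte flag can be read from the end of each received frame. This is only a hedged sketch: the little-endian byte order and the frame layout (flag in the last 4 bytes) are assumptions you should check against your actual DMA format.

```python
import struct

# Illustrative sketch: parse the appended 32-bit "last line" flag from the
# end of a received frame buffer. Endianness and buffer layout are assumed.

LAST_OF_GATE = 0xFFFFFFFF

def is_last_image_of_gate(frame: bytes) -> bool:
    """Read the 32-bit flag appended as the last 4 bytes of the frame."""
    (flag,) = struct.unpack_from("<I", frame, len(frame) - 4)
    return flag == LAST_OF_GATE

intermediate = bytes(64) + struct.pack("<I", 0x00000000)
final = bytes(64) + struct.pack("<I", LAST_OF_GATE)
print(is_last_image_of_gate(intermediate), is_last_image_of_gate(final))  # False True
```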

    Thank you for your detailed question.

    Below you can find a single design giving the answer to 1. and 2.

    The VA design includes all necessary details for determining the end of a 1D image stream on the basis of an added 32-bit meta data flag:


    Additionally you can register a software callback on the gate start/rising and end/falling edge if required.

    This is implemented in front of the EventToHost operator.

    After building an overlay bit representing the last pixel of a gate, this flag is used as an appended 32-bit meta flag:


    Therefore the whole flag bit image is reduced to the last pixel of the image:


    In the Runtime/SDK folder %SISODIR5%\SDKExamplesNew\examples\fglib\Events\ you can find the C++ source code to access and register the required software callbacks:

    Please enjoy ;-)

    ... But the Coordinate_X(Y) images are 2D images and the COG_X(Y) data is 1D. Any suggestions on this one?

    Writing all COG X/Y coordinates found by a BlobAnalysis_2D directly into an artificial image is not possible so far. The output of the BlobAnalysis is a list of object features, in which the COGs are not sorted by X/Y position. Since these list entries resulting from a BlobAnalysis are not sorted, it is not possible to reconstruct an image on this basis, because an image is produced linearly: pixel by pixel and line by line, starting at the top left corner.

    Solving this requires some kind of memory. As long as the COG X/Y coordinates describe a small image, the Histogram operator can be used. Use CreateBlankImage and SyncToMax to resize the object list to the target resolution, use the Y coordinate as MSBits and the X coordinate as LSBits, and scale the output into a 2-dimensional image using SplitLine. This will only work if the required X and Y bits for the ranges together are 16 bits or less: that is the maximum number of bits for a histogram.

    This would mean that the required resulting image size is 256x256 or less.
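    The pack-then-histogram idea above can be sketched in software (the function name is mine; the VA design does this with Histogram and SplitLine operators, not with Python):

```python
# Illustrative sketch of the idea: pack Y into the most significant 8 bits
# and X into the least significant 8 bits, count the packed values, and
# reshape the 65536-bin histogram into a 256 x 256 hit-count image.

def cogs_to_image(cogs, size=256):
    """Build a size x size hit-count image from unsorted (x, y) COGs."""
    histogram = [0] * (size * size)
    for x, y in cogs:
        histogram[(y << 8) | x] += 1        # Y = MSBits, X = LSBits (size 256)
    # Reshape: one run of `size` bins per image line (like SplitLine does)
    return [histogram[row * size:(row + 1) * size] for row in range(size)]

image = cogs_to_image([(10, 20), (10, 20), (200, 5)])
print(image[20][10], image[5][200])  # 2 1
```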

    Example VA design for that idea (source):


    You can simulate with your image and check the result.

    Since the maximum resolution is 256 x 256 pixels in this case, please have a look into the H-Box (norm_COG_XY_List) to see how the area of the object is used to normalize the COG coordinates and limit them to an 8-bit value range: