Posts by Bjoer.Rude

    Dear Ryuji Narita,

    My first idea would be to use the VA design below (VA design, download):


    This is only a quick guess, but it may be a solution.

    From my point of view, two small FIFOs were missing that are needed to avoid a deadlock condition.

    These FIFOs are named MiniBuffer_A/B.

    Please tell me if this is a valid fix, because I did not have the time to verify it at runtime.

    My recommendation would be to serialize the processing behind the Blob Analysis, based on the operators mentioned in the previous post.
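    Since I could not verify it at runtime, here is a toy software model of why the small FIFOs help (plain Python; the structure, latency and depth values are invented for illustration, not taken from the actual design): in a split/sync structure where one branch has internal latency, the fast branch needs a buffer at least as deep as that latency, otherwise backpressure throttles the whole pipeline (and in hardware can lock it up completely).

```python
from collections import deque

def simulate(fifo_depth, latency=4, steps=40):
    """Toy model of a split/sync pipeline (not real VisualApplets code).

    A source feeds branch A (direct, buffered by a small FIFO in the
    role of MiniBuffer_A) and branch B (a fixed-latency pipeline) in
    lockstep; a sync stage consumes one item from each branch per step.
    Returns the number of items delivered within the step budget."""
    fifo_a = deque()              # the small FIFO on the fast branch
    pipe_b = [None] * latency     # branch B modeled as a shift register
    delivered = sent = 0
    for _ in range(steps):
        # sync: fire when branch A has data and branch B's output is valid
        if fifo_a and pipe_b[0] is not None:
            fifo_a.popleft()
            pipe_b[0] = None
            delivered += 1
        # advance branch B by one tick when its output slot is free
        if pipe_b[0] is None:
            pipe_b = pipe_b[1:] + [None]
        # source: may only push when BOTH branches can accept data
        if len(fifo_a) < fifo_depth and pipe_b[-1] is None:
            fifo_a.append(sent)
            pipe_b[-1] = sent
            sent += 1
    return delivered

# A FIFO at least as deep as the branch latency sustains full throughput;
# a shallow one throttles the whole pipeline through backpressure.
```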

    Dear Lily,

    Thank you for your interesting request.

    You mentioned "we can get them" concerning some of the details.

    If that means that you already have some VA sources for this, could you please attach them to this thread?

    That would help me add the requested details to an existing VA design.

    If you need Max(n) for each pixel n per line, you can use the ColMax operator.

    You can build a loop for this on the basis of TxImageLink and RxImageLink, and use ReSyncToLine to get the result of the previously processed line for comparison.
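    The idea of the loop can be sketched in software like this (a NumPy analogue of the behavior, not the operator API; the function name is mine): keep the result of the previously processed line and compare each new line against it.

```python
import numpy as np

def running_column_max(image):
    """Per-column running maximum, line by line: each new line is
    compared with the result of the previously processed line, which
    models what the TxImageLink/RxImageLink loop with ReSyncToLine does."""
    best = image[0].copy()
    rows = [best.copy()]
    for line in image[1:]:
        best = np.maximum(best, line)   # compare with previous line's result
        rows.append(best.copy())
    return np.vstack(rows)
```

After the last line, the result row holds the column maximum of the whole frame.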

    Dear Ryuji,

    Please send or attach a valid image for simulation.

    Instead of using a MultiInput sync, you can serialize all the data and investigate the single stream instead.

    That would require far fewer resources and will not cause a deadlock in case of unexpected data.

    After the sync you are using an ADD on the synchronized inputs; you could use a FrameSum, RowSum, or Count operator instead. I would recommend FrameSum.
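    To illustrate the difference in plain NumPy (a sketch of the data shapes involved, not the operator API): a pixel-wise ADD produces a full image again, while FrameSum/RowSum reduce the stream to one value per frame or per line, which is much cheaper to carry downstream.

```python
import numpy as np

frame = np.arange(12).reshape(3, 4)   # a toy 3x4 frame

pixel_add = frame + frame             # ADD: output is still a full image
frame_sum = int(frame.sum())          # FrameSum analogue: one value per frame
row_sums = frame.sum(axis=1)          # RowSum analogue: one value per line
```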

    Dear Arjun,

    The MultiPulse mode of the SignalDelay operator is static in character:

    the number of delay ticks is fixed after synthesis.

    SignalDelay in MultiPulse mode will not reproduce the pulse duration as expected if the Tick input is downscaled.

    So it is recommended to use only the edge (SignalEdge) and apply the duration later.

    The supported delay duration itself depends on the applied tick frequency.

    With a 10-bit delay counter, for example, up to 1023 delay ticks are possible (a wider counter allows more).

    Using 1000 delay ticks at a tick input frequency of 1 kHz delays by exactly 1 second.

    If you present finput = 4 Hz, the output frequency will be 4 Hz too, but delayed by the configured number of ticks, where one tick lasts 1/ftick.

    So finput can be delayed with the precision of ftick.

    So if you want to use different delays, you can modify ftick during runtime instead of the SignalDelay delay parameter.

    ftick is normally generated by the Generate operator.
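    The arithmetic above can be sketched like this (plain Python; the function names are mine, not operator parameters):

```python
def delay_seconds(delay_ticks, ftick_hz):
    """Delay produced by a fixed tick count at a given tick frequency."""
    return delay_ticks / ftick_hz

def ftick_for_delay(delay_ticks, target_s):
    """Tick frequency needed to reach a target delay when the tick
    count is fixed after synthesis."""
    return delay_ticks / target_s

MAX_TICKS_10BIT = 2**10 - 1   # a 10-bit counter allows up to 1023 ticks
```

For example, 1000 ticks at ftick = 1 kHz give exactly 1 s; to reach 0.5 s with the same fixed 1000 ticks, raise ftick to 2 kHz at runtime.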

    The pulse width in the sketch below is not valid if the tick frequency is below the system clock.

    So please use only the rising edge of the pulses, not the full pulse as shown below:


    Hi Pierre,

    Thank you for your input.

    I think that here again, this is a documentation problem:

    I have just checked that the CoefficientsBuffer ROI indeed seems to be taken into account (regarding the fps) after writing 1 into UpdateROI, *even if the value is already 1*.

    Do you confirm?

    Should that be the case, I also strongly suggest that the documentation of UpdateROI be updated to explain this!

    I forwarded your hint concerning the CoefficientBuffer in order to improve and extend our documentation.

    Hi Pierre,

    When using the CoefficientBuffer, please make sure that only the required image dimension is read out of it before synchronizing the coefficients with the real image data. If more data is read than required, bandwidth is wasted.

    You need to write a 1 into its parameter UpdateROI to apply the changes during runtime.

    If you want to optimize or simplify the parameter access and the configuration itself, consider using the Parameter Library. This way you can implement, for example, a single parameter that handles all width/height settings.

    Only the maximum buffer width and height are static. The ROI values can be either static or dynamic, depending on your preset at design time. If you set them to dynamic, they can be changed during runtime, so you can apply the required ROI changes at runtime and reach the requested bandwidth without rebuilding.
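    A back-of-the-envelope sketch of the bandwidth effect (Python; all dimensions and byte widths below are invented for illustration, not taken from your design):

```python
def coefficient_read_mb_s(roi_w, roi_h, fps, bytes_per_coeff):
    """MB/s read from the coefficient buffer for a given ROI and frame rate."""
    return roi_w * roi_h * bytes_per_coeff * fps / 1e6

full_buffer = coefficient_read_mb_s(4096, 4096, 50, 8)   # reading everything
roi_only    = coefficient_read_mb_s(2048, 2048, 50, 8)   # reading only the ROI
# reading the full buffer instead of the half-size ROI costs 4x the bandwidth
```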

    Hi Pierre,

    Congratulations, that is fantastic!

    Success !

    Setting the CoefficientBuffers to 8x(64b@2x) is OK. Now I have my 50 fps. (And I have already written and tested the program that splits my coefficient .tif files into 128b block parts.)

    But I still don't understand why SyncToMax is needed instead of SyncToMin since all image sizes are identical.

    SyncToMin and SyncToMax behave identically for images of the same size, but:

    If one of the image sources delivers a larger image, the output link only supports the minimum required image dimension. That is how I ran into my testing issue yesterday: I did not see that the CoefficientImage was too large...

    So I always prefer SyncToMax in case of identical sizes...
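    A toy NumPy model of the size handling I mean (a sketch, not the real operator behavior): min-style synchronization crops to the common size, so a source-size mismatch passes silently, while max-style synchronization makes the mismatch visible in the output dimensions.

```python
import numpy as np

def sync_to_min(a, b):
    """Crop both inputs to the common minimum size: a mismatch in the
    source sizes passes silently."""
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    return a[:h, :w], b[:h, :w]

def sync_to_max(a, b):
    """Zero-pad both inputs to the common maximum size: a mismatch
    becomes visible in the output dimensions."""
    h, w = max(a.shape[0], b.shape[0]), max(a.shape[1], b.shape[1])
    def pad(x):
        out = np.zeros((h, w), dtype=x.dtype)
        out[:x.shape[0], :x.shape[1]] = x
        return out
    return pad(a), pad(b)
```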

    Hi Pierre,

    Concerning your first question: if you want to run mDisplay without a camera, go to

    mDisplay -> Tools -> Settings -> Check "Ignore Camclock status" and apply:


    With this setting, the connection to a camera is not checked before starting the acquisition.

    Concerning your second question:

    The ImageBuffer FillLevel needs to stay below 75% (better 0%) when investigating the maximum supported bandwidth.

    Otherwise the delivered data cannot be processed/transferred fast enough.

    Please use the CoefficientBuffer the way it is used in my VA design.

    The reason for this is the internal memory bandwidth handling of that operator.

    More details on that can be found in a different thread.

    We need full performance of 12.2 GB/s here, so the link needs to be:

    8 Link, Par 2, 12201 MB/s, 512 MiB
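    As a quick sanity check in Python (the thresholds are the rules of thumb from this post; the function name is mine):

```python
def bandwidth_ok(fill_level_pct, link_mb_s, required_mb_s):
    """Rules of thumb from above: the ImageBuffer FillLevel should stay
    below 75 % (ideally 0 %), and the link rating must cover the
    required throughput."""
    return fill_level_pct < 75 and link_mb_s >= required_mb_s

# the "8 Link, Par 2" rating of 12201 MB/s against the 12.2 GB/s target:
ok = bandwidth_ok(0, 12201, 12200)
```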

    Thank you.

    This is your design result [Test video].

    1. Yes. It is external trigger mode.

    2. In your design the trigger signal is periodic.

    3. TriggerMismatch.Status is jumping between 9 and 10.

    In this video we can see that the external line trigger is working as expected when we send it.

    This is showing that the line trigger and the camera response work together.

    But there are two questions now:

    • Why is there a mismatch of 9 to 10 lines?
      • What kind of sensor (Dual-Line, Multi-Line, TDI, ...) is used?
        For example, a TDI with 8 stages could be the reason why 8 triggers are required before the first line feedback...
    • Why do we receive lines while not sending triggers?
      • This may be a camera feature, but I have no explanation for that.

    From my point of view, both questions are something the camera manual or the camera manufacturer's support can answer. Is it possible to mention the camera name/manufacturer?