Posts by B.Ru

    Hello JSuriya,


    The DMAFromPC operator only exists for "pretty old" VD1 frame grabbers.

    The mE5 marathon/ironman does not support this operator for co-processor functionality.

    While the DMA approach could potentially support GB/s in the downstream direction (host PC memory to grabber), there is a non-DMA way to upload images at around 10 MB/s using an alternative register interface instead of DMA.
    What data rates do you expect for your application?
    I know that 10 MB/s is pretty limited.
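
    As a rough back-of-the-envelope check of what 10 MB/s means in practice (a minimal sketch; the 1024 x 1024 x 8 bit image size is only an assumed example, not a value from your application):

        # Rough estimate of the achievable upload rate via the register interface.
        # The 1024 x 1024 x 8 bit image size is an assumed example value.
        upload_rate_bytes_per_s = 10e6              # ~10 MB/s non-DMA upload path
        image_bytes = 1024 * 1024 * 1               # assumed 1024 x 1024 image, 8 bit/pixel
        uploads_per_second = upload_rate_bytes_per_s / image_bytes
        print(f"~{uploads_per_second:.1f} image uploads per second")   # roughly 9.5/s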


    Best regards,

    Dear Pier,


    Please use the newer MicroDisplayX tool of the most current runtime version.
    This version enables more fine-grained synchronization:
    within this new tool you can start cameras and DMAs independently.

    So you can start the camera streams first and then the corresponding DMA.


    The runtime SDK/API already enables independent start and stop of camera streams and DMAs.
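
    As a rough sketch of the DMA side in the SDK (assuming the SiSoPyInterface Python wrapper shipped with the Framegrabber SDK; the applet name, board index and buffer sizes are placeholders, and the exact function names/signatures should be checked against the SDK examples of your runtime version):

        # Hypothetical sketch: start/stop a DMA independently of the camera stream.
        # Names follow the SiSoPyInterface wrapper of the Framegrabber SDK; please
        # verify them against the examples shipped with your runtime version.
        import SiSoPyInterface as s

        fg = s.Fg_Init("YourApplet.hap", 0)          # assumed applet name, board 0
        nb_buffers = 8
        buffer_size = 1024 * 1024                    # assumed image size in bytes
        mem = s.Fg_AllocMemEx(fg, nb_buffers * buffer_size, nb_buffers)

        # ... start the camera stream(s) first via the camera configuration interface ...

        s.Fg_AcquireEx(fg, 0, s.GRAB_INFINITE, s.ACQ_STANDARD, mem)   # start DMA 0
        # ... grab / process images ...
        s.Fg_stopAcquireEx(fg, 0, mem, 0)                             # stop DMA 0 again
        s.Fg_FreeMemEx(fg, mem)
        s.Fg_FreeGrabber(fg)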


    Does this help in your case?


    Best regards,

    Dear Pier,


    You can start the GE camera streams independently of the processes.
    Just start both camera streams manually and it will work fine.

    If the selector is set to 1, camera 0 is streaming but I am waiting for a stream from camera 1..

    Best regards,

    Dear Saito-san,


    Decreasing the number of objects within the binary input image itself will help to increase the speed/throughput of the Blob 2D analysis. The according setting within the Blob Analysis operator will not help to increase the speed/throughput:


    output_frame_size
    Type:    static parameter
    Default: specify_max_no_of_objects
    Range:   {maximum_required, specify_max_no_of_objects}

    The maximum number of objects which may theoretically populate an image is: input image width * input image height / 2 (equivalent to a checkerboard pattern using a 4-connected neighborhood). This pattern is quite unlikely. Therefore, you can configure the maximum number of objects in the images via this parameter.
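
    As a quick worked example of this formula (the 1024 x 1024 resolution is just an assumed value):

        # Theoretical worst-case object count: a 4-connected checkerboard pattern.
        width, height = 1024, 1024          # assumed input image resolution
        max_objects = width * height // 2
        print(max_objects)                  # 524288 objects in the worst case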


    The table above shows the answer to your question (2) below.

    2. decrease the detect object number of blob 2d operator

    Best regards,

    Dear Saito-san,


    If it is possible to post your VA design here, I could have a look into it and give you some more detailed feedback.
    A simulation image would be beneficial as well.
    If you cannot share the VA design here, you can send it to me by email.

    Best regards,

    Dear Saito-san,


    If the output link of the blob analysis is blocked by the downstream processing, this can limit the throughput and only look like a bottleneck in the blob analysis, while it is not actually caused by the blob analysis itself.

    Best regards,

    Dear Saito-san,

    If you know any way to speed up blob 2d operator, I'd like to know it.

    Using the maximum possible parallelism at the BLOB2D operator enables maximum throughput.
    A high number of small objects (object noise) in the binary image may cause a backlog/bottleneck.
    Applying a mild erosion followed by dilation (i.e. an opening) reduces the number of objects; see the sketch below for a quick offline check.
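
    If you want to estimate offline how much such an opening reduces the object count on one of your binary simulation images before changing the VA design, a small scipy sketch like the following can help (the file name, the single-channel input and the 3x3 kernel are assumptions):

        # Offline estimate: how much does a mild 3x3 opening reduce the blob count?
        # "binary_test_image.png" is an assumed single-channel test image.
        import numpy as np
        from scipy import ndimage
        from imageio.v3 import imread

        binary = imread("binary_test_image.png") > 0      # binarize the test image
        opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))

        _, n_before = ndimage.label(binary)               # default: 4-connected labeling
        _, n_after = ndimage.label(opened)
        print(f"objects before: {n_before}, after opening: {n_after}")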

    Best regards,

    Dear Baoxi,


    Here you can see how to convert a monochrome stream to a false-color RGB representation using the color map 'VIRIDIS' as a VA data stream:


    pasted-from-clipboard.png


    Here you can find the corresponding VA file: ColorMap.va
    It uses the color map 'VIRIDIS' as target.


    Here you can find common color maps as RGB values (one file per component) in text files that can be loaded into the LUT operator via its file interface: ColoMaps_As_RGB_txts.zip

    For each 24 bit/RGB888 color map you will find three .txt files in the ZIP subfolder:
    one *_R.txt for the 8 bit red component, one *_G.txt for green, and one *_B.txt for blue.
    Each contains 256 8 bit values [0..255].
    Within the LUT operator you will find a "File" button for loading the values from a text file, one value per line.
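
    If you want to generate such component files for further colormaps yourself, a small matplotlib/numpy sketch along these lines should do (the colormap name and the output file names are just examples following the convention above):

        # Export a matplotlib colormap as three 8 bit component files
        # (one value per line), matching the *_R.txt / *_G.txt / *_B.txt convention.
        import numpy as np
        import matplotlib.pyplot as plt

        name = "viridis"                               # any matplotlib colormap name
        cmap = plt.get_cmap(name, 256)                 # sample 256 entries
        rgb = (cmap(np.arange(256))[:, :3] * 255).round().astype(np.uint8)

        for idx, component in enumerate("RGB"):
            np.savetxt(f"{name}_{component}.txt", rgb[:, idx], fmt="%d")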


    Here is an overview of the colormaps, which are taken from matplotlib 3.4.2 (https://matplotlib.org/stable/…s.html?highlight=colormap) plus some extra ones:


    colormaps.png


    Most of these colormaps can be chosen within VA simulation probes:
    pasted-from-clipboard.png


    Best regards,

    Hi,


    Yes, you can use a LUT to translate grey values into pseudo 24bit color in VA.

    I would recommend getting the colormap from numpy/matplotlib as a text file.


    Within the VA simulation you can directly use the color map feature in the simulation probe.
    You only need to enable "color map".

    Best regards,

    Dear Baoxi,


    You can use the built-in ImageBuffer Width/Height + X/Y Offset parameters to apply a region of interest to the image.
    There are more operators that can apply a region of interest, such as DynamicROI or ImageBufferMultiROI(dyn) ...


    If the region of interest is not rectangular, please let us know.


    Best regards,

    Dear Sangrae_Kim,


    You would like to use a CameraLink camera in CL Medium or Full mode.
    Since the CL Medium and CL Full camera operators do not support 14 bit directly, it is required to acquire 6x8 bit (Medium), 8x8 bit (Full), or 10x8 bit (Deca, Full 10-tap) and assemble the 14 bit data from the 8 bit portions.


    The sensor tap geometry requires a line-based re-arrangement and can be solved using BlockRAM.


    So the following steps are required:

    • Decide on the CL mode: Medium/Full/Deca
    • Based on that, assemble 14 bit per pixel from the 8 bit portions per tap (see the sketch below)
    • Then implement the sensor tap geometry
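
    The 14 bit assembly itself is only a concatenation of tap values. The sketch below illustrates the principle; which tap carries the low byte and which carries the upper 6 bits is an assumption here and has to be taken from the camera manual:

        # Principle of re-assembling one 14 bit pixel from two 8 bit tap values.
        # The assignment of taps to low/high bits is camera specific (see manual);
        # this split is only an assumed example.
        def assemble_14bit(low8: int, high8: int) -> int:
            high6 = high8 & 0x3F               # only 6 of the 8 bits are used here
            return (high6 << 8) | low8         # 14 bit result, range 0..16383

        print(assemble_14bit(0xFF, 0x3F))      # 16383, the maximum 14 bit value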

    As soon as you have decided which CL mode is the preferred choice, please post an acquired image of that mode (8 bit applet) here. That image will look really strange due to the 8 bit acquisition and the missing sensor tap geometry re-sorting. Please be aware that the resolution will be different in 8 bit when acquiring the camera's 14 bit mode.


    Based on that image and the precise description from the camera's manual, we can take the next step.

    Thank you.


    Best regards,

    Dear Pierre,


    It would be simpler...

    Could you post a design for that ?

    A very similar approach: use CreateBlankImage + the H-Box FG_Time, but then append the result at the end of the next frame via InsertImage.
    Switching between the different cases is required:
    when a trigger is received, InsertImage should forward the data; in all other cases it should not, or it should mark it correspondingly.


    Best regards,

    Hello Pierre,


    Concerning your questions:


    Where is the doc on the SDK side ?

    • I can provide some code for this and adapt the data format together with you, based on what we design.

    This way, I could

    -read the Monitor FIFO from my apc callback, so that I will always be able to get reliable information and will support frame loss easily

    -instead of reading "FG_TIMESTAMP_LONG", I could use the 64b pseudo-timestamp from the monitor

    -detect that an event occured without relying on the EventToHost at all

    Is it correct ?

    • You can retrieve the event-related dataset (FG_Time and the things we add ...)
    • You can read the "current FG_Time" from the frame grabber when you receive the APC.

    Would it impact performances a lot ?

    • This will not affect the performance of the applet or the software (SW) in practice. The SW needs to read those values, but register reads are really fast: I would estimate a response/read time of ~300 µs.
    • The amount of required FPGA resources is minimal.
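
    As a rough feasibility estimate (only the ~300 µs read time comes from the numbers above; the number of reads per frame and the frame rate are assumed example values):

        # Rough estimate of the software time spent on register reads per frame.
        read_time_s = 300e-6          # ~300 us per register read (estimate from above)
        reads_per_frame = 3           # assumed: FG time + counter + status
        frame_rate_hz = 100.0         # assumed trigger/frame rate
        sw_time_per_frame_ms = reads_per_frame * read_time_s * 1e3
        sw_time_per_second_ms = sw_time_per_frame_ms * frame_rate_hz
        print(f"{sw_time_per_frame_ms:.1f} ms per frame, "
              f"{sw_time_per_second_ms:.0f} ms per second of SW time")   # 0.9 ms / 90 ms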

    We should do a VA coaching session of ~1 h; there we will find a nice solution for your request. We can add a lot of additional details:

    • Trigger/Image Count difference within VA
    • Image receive start time
    • DMA transfer start
    • Generate Trigger-Event for rising/falling edge


    Best regards,

    Hello Pierre,


    Thank you for your detailed explanation of the requirements.


    -An awful solution (for me) would be to add another DmaToPC in order to send bytes instead of "event" in case of tag signal. ...


    Any help is welcome.

    really want to avoid modifying the image pixels to store information, or even worse (for me), adding data at the end of the image.

    From my point of view, adding a header or footer to a DMA transfer is a possible solution, but since you prefer not to use this approach, I would like to make the following proposal:


    There is a simple and reliable way of getting FG related timestamps:


    You can use the ImageMonitor operator...

    FGevent_Time.png


    Above you can see an example of event generation and frame grabber (FG) time stamp transfer.
    You register a software callback so that one of your functions gets called when the edge on the trigger input occurs.


    A very precise FG timestamp is sampled for each trigger here and waits in a FIFO until the ImageMonitor (link to docu) reads it. You can add more data blocks (image number, counters, ...) and use several of these in the design. The operator is named "Image"Monitor, but here we simply use it as a data "structure".


    The event data (FG time stamp) can be read through the register/ImageMonitor interface. If you want some sample C++ code for that, feel free to contact me.
    FGevent_Clocker.png


    Above you can see how an FG time can be generated using 64 bit... You can modify the data type to your personal preference.
    The current FG time is exposed via GetStatus on the register interface.


    VA DESIGN: FG_TimeStamps_TimeDataForEvent.va


    In case of questions do not hesitate to contact me.


    Best regards,
    Björn

    Hello,


    If two asynchronous signals need to be made synchronous to each other, they require exactly the same mean frequency, so that no single pulse gets lost.
    If the frequencies are identical and there is no drift causing a phase jump, you could use the RsFlipFlop...
    That would be the simplest approach.

    Normally you will see a different situation, especially when the external signal changes its frequency or there is some "distortion" on the signal.

    Then more complex approaches come into play:

    One starting point could be the SignalToPeriod operator:
    Signal.SignalToPeriod


    Measure both periods, find the max/min/mean using filters (PixelNeighbour/Kernel) and then go for Signal.PeriodToSignal...


    For the measurement itself you can use the rising or falling edge.
    The same applies to a phase A/B encoder that you need to sync to a certain generator.

    Please mention some more details and we can focus on a more suitable approach...


    Best regards,