Posts by B.Ru

    Dear pangfengjiang,


    The data will come into your design as a stream, so there is no need to "jump" to a specific row; it arrives over the link line by line. If you want to remove a specific line, you should use the operator RemoveLine. Since both inputs of RemoveLine need to be synchronous to each other, you should compute the (un-normalized) average gray value as a line sum using RowSum. Since only the last pixel of each RowSum output line carries the un-normalized average (the sum of all line pixels), a RemovePixel is needed to isolate that result. With an additional SYNC you can control the RemoveLine. Both inputs of RemoveLine/RemovePixel need to be in sync. Independent of the text, the design below shows all details and operators:


    pasted-from-clipboard.png


    VA Design: LineSelectionAverage.va

    Since the line length is known, the sum does not need to be divided; it can be used "un-normalized".
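    For reference, here is a minimal host-side sketch (numpy) of what the stream computes; the test image and threshold are assumptions for illustration:

        import numpy as np

        # Assumed 8-bit test image and average-gray-value threshold; the VA
        # design makes the same per-line decision, but streaming and without
        # ever dividing by the line width.
        image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
        threshold = 100

        # RowSum: the last pixel of each output line carries the line sum.
        row_sums = image.astype(np.uint32).sum(axis=1)   # un-normalized averages

        # With a constant line length, comparing the sum against
        # width * threshold is equivalent to comparing the average against
        # threshold; this flag is what drives RemoveLine.
        keep = row_sums > image.shape[1] * threshold
        filtered = image[keep]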


    Best regards,

    Thanks so much. Just one question: why can some operators specify a resource type (ImplementationType) while others cannot?

    And when ImplementationType = AUTO, what is the principle VA uses to allocate resources?

    Dear Bingnan Liu,


    In case of "ImplementationType = AUTO", VisualApplets (VA) decides which resource will be used. Only when an FPGA resource conflict is reported by the DRC, or becomes obvious otherwise, does a specific discussion about modifying the "ImplementationType" become necessary. A good example would be: running out of logic resources and shifting several operators to ALUs/DSPs in order to save logic/LUTs.


    Only specific operators have the ImplementationType feature, mostly the ones that require a lot of resources and have an alternative implementation.


    Please do not hesitate to post a VA design here; we can go through it together and give some recommendations on how to make beneficial modifications.


    Best regards,

    Hello Pangfengjiang,

    A solid approach for this is:
    pasted-from-clipboard.png

    Full VA Design can be downloaded here:
    SignalDuration Averaging.va

    The signal periods are measured behind a debouncer, which removes noise spikes.
    Then a kernel over the last 20 values is built.
    The resulting sum of the 20 periods is divided by 20.
    The first 19 (incomplete) averages are removed, so that each period after the first 20 gets an average (see the sketch below).
    Finally, a signal whose period is the average of the last 20 periods is generated, which is the tricky part:

    - The period shows up as a single-tick pulse; SignalWidth stretches it to half the period duration.
    Please have a look at the documentation of PeriodToSignal and WidthToSignal to fully understand this.
    The pulse duration is half of the period.
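    As a plain host-side sketch of the averaging steps (numpy stands in for the kernel, division, and removal operators; the period data is an assumption):

        import numpy as np

        # Assumed period measurements (in ticks) behind the debouncer.
        periods = np.random.randint(900, 1100, size=100)

        window = 20
        # Kernel over the last 20 values: a moving sum ...
        sums = np.convolve(periods, np.ones(window, dtype=int), mode="valid")
        # ... divided by 20 gives the moving average.
        averages = sums // window

        # "valid" mode already drops the first 19 positions, mirroring the
        # removal of the incomplete averages in the design.
        assert len(averages) == len(periods) - (window - 1)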

    Best regards,

    Hello JSuriya,


    The DMAFromPC operator only exists for "pretty old" VD1 frame grabbers.

    The mE5 marathon/ironman does not support this operator for co-processor functionality.

    While the DMA approach could potentially support GB/s in the downstream direction (host PC memory to grabber), there is a non-DMA way to upload images at around 10 MB/s, using an alternative register interface instead of DMA.
    What data rates do you expect for your application?
    I know that 10 MB/s is pretty limited.
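    To put that number into perspective, a rough back-of-the-envelope check (the frame size is an illustrative assumption):

        # Feasibility check for the ~10 MB/s register-interface upload path.
        frame_bytes = 1024 * 1024              # assumed 1 MPx at 8 bit per pixel
        upload_rate = 10e6                     # bytes per second
        frames_per_second = upload_rate / frame_bytes
        print(f"{frames_per_second:.1f} uploads/s")   # ~9.5 frames per second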


    Best regards,

    Dear Pier,


    Please use the newer MicroDisplayX tool from the most current runtime version.
    This version enables more fine-grained synchronization:
    within this new tool you can start cameras and DMAs independently.

    So you can start the camera streams first and then the corresponding DMAs.


    The runtime SDK/API already enables independent starting and stopping of camera streams and DMAs.


    Does this help in your case?


    Best regards,

    Dear Pier,


    You can start the GigE camera streams independently of the processes.
    Just start both camera streams manually and it will work.

    If the selector is on 1, camera 0 is streaming, but I'm waiting for a stream from camera 1...

    Best regards,

    Dear Saito-san,


    Decreasing the number of objects within the binary input image itself will help to increase the speed/throughput of the Blob 2D analysis. The following setting within the Blob-Analysis operator will not help to increase the speed/throughput:


    output_frame_size
    Type:    static parameter
    Default: specify_max_no_of_objects
    Range:   {maximum_required, specify_max_no_of_objects}

    The maximum number of objects which may theoretically populate an image is: input image width * input image height / 2 (equal to a checkerboard pattern using 4-connected neighborhood). This pattern is quite unlikely. Therefore, you can configure the maximum number of objects in the images via this parameter.
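    As a quick worked example of that upper bound (the resolution is an illustrative assumption):

        # Theoretical worst case: a 4-connected checkerboard pattern.
        width, height = 2048, 1024           # assumed input image resolution
        max_objects = width * height // 2    # = 1,048,576 objects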


    The table above shows the answer to your question (2), quoted below.

    2. decrease the detected object number of the Blob 2D operator

    Best regards,

    Dear Saito-san,


    In case it is possible to post your VA design here, I could have a look into it and give you some more detailed feedback.
    A simulation image alongside it would be beneficial.
    If you cannot share the VA design here, you can send it to me by email.

    Best regards,

    Dear Saito-san,


    In case the output link of the blob analysis is blocked by the processing further downstream, this can limit throughput and merely look like a bottleneck in the blob analysis, while it is not caused by the blob analysis itself.

    Best regards,

    Dear Saito-san,

    If you know any way to speed up the Blob 2D operator, I'd like to know it.

    Using the maximum possible parallelism at the Blob2D operator enables maximum throughput.
    A high number of small objects (object noise) in the binary image may cause a backlog/bottleneck.
    Applying a mild erosion followed by dilation (an opening) reduces the number of objects; see the sketch below.
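    A minimal sketch of such an opening on the host side (scipy, assuming a binary mask; in the VA design the equivalent would be an erosion followed by a dilation in front of Blob2D):

        import numpy as np
        from scipy import ndimage

        # Assumed binary input with sparse single-pixel noise objects.
        mask = np.random.rand(480, 640) > 0.999

        # Mild opening: erosion followed by dilation with a 3x3 structuring
        # element; single-pixel noise vanishes, larger objects keep their shape.
        opened = ndimage.binary_opening(mask, structure=np.ones((3, 3), bool))

        print(ndimage.label(mask)[1], "->", ndimage.label(opened)[1], "objects")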

    Best regards,

    Dear Baoxi,


    Here you can see how to convert monochrome data to a false-color RGB representation (color map 'VIRIDIS') as a VA data stream:


    pasted-from-clipboard.png


    Here you can find the corresponding VA file: ColorMap.va
    It uses color map 'VIRIDIS' as the target.


    Here you can find common color maps as RGB values (one text file per component) that can be loaded into the LUT operator via its file interface: ColoMaps_As_RGB_txts.zip

    For each 24-bit/RGB888 color map you can find 3 txt files in the ZIP subfolder:
    one *_R.txt for the 8-bit red component, *_G.txt representing green, and *_B.txt for blue.
    Each contains 256 8-bit values [0..255].
    Within the LUT operator you can find a "File" button for loading the values from a text file, one value per line.
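    In case you want to generate such component files yourself, here is a sketch using matplotlib (the output file names are illustrative):

        import numpy as np
        from matplotlib import cm

        # Sample the 'viridis' colormap at 256 positions and split it into
        # three 8-bit component files, one decimal value per line, matching
        # the format the LUT operator's "File" loader expects.
        rgba = cm.get_cmap("viridis", 256)(np.arange(256))
        rgb = (rgba[:, :3] * 255 + 0.5).astype(np.uint8)

        for column, name in zip(rgb.T, ("R", "G", "B")):
            np.savetxt(f"viridis_{name}.txt", column, fmt="%d")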


    Here you have an overview of colormaps that are taken from matplotlib 3.4.2 (https://matplotlib.org/stable/…s.html?highlight=colormap)

    and some extra ones:


    colormaps.png


    Most of these colormaps can be chosen within VA simulation probes:
    pasted-from-clipboard.png


    Best regards,

    Hi,


    Yes, you can use a LUT to translate grey values into pseudo-color 24-bit RGB in VA.

    I would recommend getting the colormap from numpy/matplotlib as a text file.


    Within the VA simulation you can directly use the color-map feature in the simulation probe.
    You only need to enable "color map".
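    On the host side the same translation is a plain table lookup; a sketch, assuming component files like the ones in the ZIP above (the file names are hypothetical):

        import numpy as np

        gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # assumed mono8 image

        # One 256-entry table per color component, one value per line.
        lut_r = np.loadtxt("viridis_R.txt", dtype=np.uint8)
        lut_g = np.loadtxt("viridis_G.txt", dtype=np.uint8)
        lut_b = np.loadtxt("viridis_B.txt", dtype=np.uint8)

        # Pseudo color: every gray value indexes the three component tables.
        rgb = np.stack([lut_r[gray], lut_g[gray], lut_b[gray]], axis=-1)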

    Best regards,

    Dear Baoxi,


    You can use the built-in ImageBuffer Width/Height + X/Y Offset parameters to apply a region of interest to the image; see the sketch below.
    There are more operators that can apply a region of interest, such as DynamicROI or ImageBufferMultiROI(dyn) ...
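    The parameters map to a plain crop; as a host-side sketch (the parameter values are assumptions):

        import numpy as np

        image = np.zeros((1024, 1280), dtype=np.uint8)   # assumed full frame

        # ImageBuffer-style ROI: X/Y Offset plus Width/Height.
        x_off, y_off, roi_w, roi_h = 100, 50, 640, 480
        roi = image[y_off:y_off + roi_h, x_off:x_off + roi_w]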


    If the region of interest is not rectangular, please let us know.


    Best regards,

    Dear Sangrae_Kim,


    You would like to use a CameraLink camera in CL Medium or Full mode.
    Since the CL Medium and CL Full camera operators do not support 14 bit directly, it is required to acquire 6x8 bit (Medium), 8x8 bit (Full), or 10x8 bit (Deca, Full 10-tap) and to assemble the 14-bit data from the 8-bit parts.


    The sensor tap geometry amounts to a line-based re-arrangement and can be solved using BlockRAM.


    So the following steps are required:

    • Decide on the CL mode: Medium/Full/Deca
    • Based on that, assemble 14 bit per pixel from the 8-bit parts per tap (see the sketch below)
    • Then implement the sensor tap geometry
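    As an illustration of the assembly step only (the bit layout is an assumption; the camera manual defines the real tap-to-bit mapping):

        import numpy as np

        # Assumed: each 14-bit pixel arrives as a low byte on one tap and the
        # upper 6 bits on another tap.
        low = np.random.randint(0, 256, 1024, dtype=np.uint16)   # bits 0..7
        high = np.random.randint(0, 64, 1024, dtype=np.uint16)   # bits 8..13

        # Re-assembly; in VA this corresponds to shift and bitwise-OR arithmetic.
        pixels_14bit = (high << 8) | low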

    As soon as you have decided which CL mode is the preferred choice, please post an image acquired in that mode (8-bit applet) here. That image will look really strange due to the 8-bit depth and the missing sensor tap geometry re-sorting. Please be aware that the resolution reported in 8 bit will be different when the camera is acquiring in its 14-bit mode.


    Based on that image and the precise description from the camera's manual, we can take the next step.

    Thank you.


    Best regards,