Posts by B.Ru

    Dear DXS,

I have seen the RS485 operator within the documentation, but I have no real example for it. I did some successful testing on this operator some years ago, and it was working as expected with its 3 states.
    (…are%20Platform.RS485.html )

    But there are also some other ways of serial communication possible.
RS232, for example, can be done with the native GPO and GPI operators; it only requires a voltage-level adaptation from TTL to RS232 levels.

    I will ask within R&D if there is some more documentation on the RS485 operator.
    Thank you.

    Best regards,

    Dear DXS,

The approach mentioned by CarmenZ is completely correct, but it includes some software interaction.

The SelectROI operator parameters (width, height, offset X/Y) cannot be changed within the FPGA without software interaction, but the DynamicROI operator enables this within the FPGA.

For other requirements we have similar operators; some of them rely on software interaction for their parameters, others have dynamic inputs for control. So the question is what exactly you would like to modify, and I can name the correct VA operators for it. Some have software parameters, others have control inputs and receive their input directly via data streams.

    Best regards,


    Hello DXS,

I looked through the Python code and I see only one potential issue (CXP port mask & speed).

    FG/DMA Setup is ok.

    Simulator works.

    Camera/SGC Setup is fine.

From my point of view I would guess that the camera expects a trigger or similar but does not receive one, so no image is produced. Since the camera simulator is not updated with the camera's resolution, you can observe "old" memory content from the frame-grabber memory on the right. That is "old" data, not an actual live image from the camera.

You double-check the error codes returned from most functions, but is an error reported by one of the Sgc_* functions during runtime?

Please check whether the port mask fits portMask = 0b1111 or is as expected/required:

    portMask = s.PORTMASK0

Please set the speed to speed = s.CXP_SPEED_12500


The speed is currently None; maybe the automatic approach detects the wrong speed in that case, or the camera does not support it...
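As a plain-Python illustration (not the SDK itself — the real constants such as s.PORTMASK0 and s.CXP_SPEED_12500 come from the SiSo Python bindings), this is what a 4-bit CXP port mask like 0b1111 encodes: one bit per port.

```python
# Illustration only: decode a 4-bit CXP port mask into enabled port indices.
# The actual mask constants come from the SDK; here we use a plain integer.

def enabled_ports(port_mask: int, num_ports: int = 4) -> list:
    """Return the indices of the ports enabled in the bit mask."""
    return [p for p in range(num_ports) if port_mask & (1 << p)]

print(enabled_ports(0b1111))  # all four ports: [0, 1, 2, 3]
print(enabled_ports(0b0001))  # only port 0: [0]
```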

    Best regards,


    Dear pangfengjiang,

The data will come into your design as a stream, so there is no need to "jump" to a specific row; it will come through the link line by line. If you want to remove a specific line, you should use the RemoveLine operator. Since both inputs of RemoveLine need to be synchronous to each other, you should sum the gray values of each line using RowSum. Only the last line pixel of the RowSum output represents the un-normalized average value (the sum of all line pixels), so a RemovePixel needs to be used to isolate that result. With an additional SYNC you can control the RemoveLine. Both inputs of RemoveLine/RemovePixel need to be in sync. Independent of the text, the design below will show all details and operators:


    VA Design :

Since the line length is known, the sum does not need to be divided; it can be used "un-normalized".
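As a pure-Python sketch of what the RowSum/RemovePixel/RemoveLine chain computes (the threshold and image values are example assumptions, not part of the design above):

```python
# Sketch of the VA chain: RowSum accumulates pixel values along each line;
# only its last output pixel carries the full line sum (RemovePixel keeps
# just that pixel). Since the line length is known, the un-normalized sum
# can stand in for the average. The threshold is a hypothetical criterion.

LINE_LENGTH = 4
THRESHOLD_SUM = 100 * LINE_LENGTH  # i.e. average gray value >= 100

def keep_line(line):
    """Emulate RowSum + RemovePixel: reduce a line to its pixel sum,
    then decide (as RemoveLine would) whether the line is kept."""
    line_sum = sum(line)           # last RowSum pixel == full line sum
    return line_sum >= THRESHOLD_SUM

image = [[200, 210, 190, 205],    # bright line -> kept
         [10, 5, 20, 15]]         # dark line   -> removed
filtered = [line for line in image if keep_line(line)]
```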

    Best regards,

Thanks so much. Just one question: why can some operators specify the resource type (ImplementationType), while others cannot?

And when ImplementationType = AUTO, what is the principle VA uses to allocate resources?

    Dear Bingnan Liu,

In the case of "ImplementationType = AUTO", VisualApplets (VA) decides which resource will be used. Only when an FPGA resource conflict is shown by the DRC, or becomes obvious otherwise, does a specific discussion of the "ImplementationType" setting become necessary. A good example would be running out of logic resources and shifting several operators to ALUs/DSPs in order to save logic/LUTs.

Only specific operators have the ImplementationType feature, mostly the ones that require a lot of resources and have an alternative implementation.

    Please do not hesitate to post a VA design over here and we can go through it together and give some recommendations on how to make beneficial modifications.

    Best regards,

    Hello Pangfengjiang,

A solid approach for this is:

    Full VA Design can be downloaded here:

The signal periods are measured behind a debouncer; the debouncer makes sure to remove noise spikes.
Then a kernel environment over the last 20 values is taken.
The resulting sum of all periods is divided by 20.
The first 19 averages are removed, so that each value after the first 20 periods is a true average.
Generating a period from the average of the last 20 periods is the tricky part:

- The period shows up as a single tick pulse; SignalWidth gives it a duration of half the period.
Please have a look into the documentation of PeriodToSignal and WidthToSignal to fully understand this.
The pulse duration is half of the period.
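The averaging part of the steps above can be sketched in plain Python (window size 20 as described; the sample values are assumed example data, and the real design does this on the FPGA signal stream):

```python
from collections import deque

# Sliding-kernel average over the last 20 debounced period measurements:
# sum the window, divide by 20, and discard the first 19 outputs so every
# emitted value is a true average over 20 periods.

KERNEL = 20

def moving_average(periods):
    window = deque(maxlen=KERNEL)
    averages = []
    for p in periods:
        window.append(p)
        if len(window) == KERNEL:      # skip the first 19 samples
            averages.append(sum(window) / KERNEL)
    return averages

samples = [1000] * 19 + [1020] * 21    # e.g. periods in microseconds
avgs = moving_average(samples)          # first value: (19*1000 + 1020)/20
```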

    Best regards,

    Hello JSuriya,

    The DMAFromPC operator does only exist for "pretty old" VD1 frame grabbers.

The mE5 marathon/ironman does not support this operator for co-processor functionality.

While the DMA approach could potentially support GB/s in the downstream direction (host PC memory to grabber), there is a non-DMA way to upload images at around 10 MB/s using an alternative register interface instead of DMA.
    What data-rates do you expect for your application?
    I know that 10 MB/s is pretty limited.

    Best regards,

    Dear Pier,

    Please use the newer MicroDisplayX tool of the most current runtime version.
    This version enables a more detailed synchronization:
    Within this new tool you can start cameras and DMAs independently.

    So you can start the camera streams first and then the corresponding DMA.

The runtime SDK/API already enables independent start and stop of camera streams and DMAs.

    Does this help in your case?

    Best regards,

    Dear Pier,

You can start the GE camera streams independently of the processes.
Just start both camera streams manually and it will work.

If the selector is on 1, camera 0 is streaming, but I'm waiting for a stream from camera 1..

    Best regards,

    Dear Saito-san,

Decreasing the number of objects within the binary input image itself will help increase the speed/throughput of the Blob 2D analysis. The corresponding setting within the Blob-Analysis operator will not help to increase the speed/throughput:

Type: static parameter
Default: specify_max_no_of_objects
Range: {maximum_required, specify_max_no_of_objects}

The maximum number of objects which may theoretically populate an image is: input image width * input image height / 2 (equal to a checkerboard pattern using 4-connected neighborhood). This pattern is quite unlikely. Therefore, you can configure the maximum number of objects in the images via this parameter.
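That worst-case bound is simple to compute; for example, for an assumed 1024 x 1024 input:

```python
# Theoretical worst case quoted above: a 4-connected checkerboard pattern
# yields width * height / 2 separate single-pixel objects.

def max_blob_objects(width: int, height: int) -> int:
    return (width * height) // 2

print(max_blob_objects(1024, 1024))  # 524288
```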

The table above shows the answer to your question (2) below.

    2. decrease the detect object number of blob 2d operator

    Best regards,

    Dear Saito-san,

In case it is possible to post your VA design here, I could have a look into it and give you more detailed feedback.
A simulation image would be beneficial as well.
If you cannot share the VA design here, you can send it to me by email.

    Best regards,

    Dear Saito-san,

In case the output link of the blob analysis is blocked by the further processing after its output, this can result in limited throughput and only look like a bottleneck, while it is not actually caused by the blob analysis itself.

    Best regards,

    Dear Saito-san,

    If you know any way to speed up blob 2d operator, I'd like to know it.

Using the maximum possible parallelism at the Blob2D operator enables maximum throughput.
A high number of small (noise) objects in the binary image may cause a backlog/bottleneck.
Applying a mild erosion followed by a dilation (an opening) reduces the number of objects.
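A minimal sketch of that opening step in plain Python (in a real VA design this is done with the erosion/dilation operators on the image stream, not in software; the 3x3 neighborhood and the test image are assumptions):

```python
# Minimal 3x3 binary opening (erosion, then dilation), illustrating how
# single-pixel noise objects are removed before the Blob2D stage.

def _morph(img, require_all):
    """3x3 erosion (require_all=True) or dilation (require_all=False),
    clipping the neighborhood at the image border."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = int(all(neigh)) if require_all else int(any(neigh))
    return out

def opening(img):
    return _morph(_morph(img, True), False)   # erode, then dilate

noisy = [[0, 0, 0, 0],
         [0, 1, 0, 0],     # isolated single-pixel "object"
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
cleaned = opening(noisy)   # the lone pixel is removed
```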

    Best regards,

    Dear Baoxi,

Here you can see how to convert a monochrome stream to a false-color RGB representation using the color map 'VIRIDIS' as a VA data stream:


    Here you can find corresponding VA file:
    It is using color map 'VIRIDIS' as target.

    Here you can find common color maps as RGB values (one file per component) in text files that can be loaded into LUT operator by file interface:

Related to each 24-bit/RGB888 color map you can find 3 txt files in the ZIP subfolder:
one *_R.txt for the 8-bit red component, *_G.txt representing green and *_B.txt for blue.
Each includes 256 different 8-bit values [0..255].
Within the LUT operator you can find a "File" button for loading the values from a text file; one value per line.
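To illustrate the file format (one 8-bit value per line, 256 lines per component), here is a small sketch that writes and reads back such a file; the file name and the identity-ramp values are assumptions, a real map such as VIRIDIS would come from the ZIP above:

```python
import tempfile
from pathlib import Path

# Write a LUT component file in the described format (256 values,
# one per line) and read it back, as the LUT "File" loader would.

values = list(range(256))                  # 8-bit identity ramp (example)

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "example_R.txt"     # hypothetical file name
    path.write_text("\n".join(str(v) for v in values) + "\n")
    loaded = [int(line) for line in path.read_text().splitlines()]
```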

    Here you have an overview of colormaps that are taken from matplotlib 3.4.2 (…s.html?highlight=colormap)

    and some extra ones:


    Most of these colormaps can be chosen within VA simulation probes:

    Best regards,