Posts by B.Ru

    Dear Stefan,

    All data in VA designs is handled as images, and all image data is forwarded into the memory of the host system.
    A DMA transfer returns a pointer to linear memory in the host system.
    By using an operator like InsertImage it is possible to attach an additional data block to the end of an image.

    Example: a camera image of 1024x1024 pixels at 8 bit is processed and additional meta-data is collected; simply add the meta-data to the end of the image with the InsertImage and AppendImage operators. On the host side the data pointer is returned: the first 1024x1024 bytes are image data, while the additional data is the meta block. If you take care to align the meta-data properly, using 64/32-bit types, the corresponding data can easily be addressed by pointer arithmetic or structs.
    The memory pointer returned by the DMA transfer points to the memory block containing the image and the appended data; variable data lengths can be handled by evaluating the reported DMA transfer length. Casting the memory blocks to the correct data types is required.
    So, within VA, the image data is processed as designed, while the side-band information flows through a parallel path that is connected to the DMA by using the InsertImage and AppendImage operators. Converting the side-band meta-data to a software-friendly structure can be done with CastBitWidth, CastParallel and similar operators.
    While the DMA transfer handles everything as an image, the meta-data is unchanged and simply requires a cast and some pointer arithmetic.
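    On the host side, splitting the DMA buffer can be sketched like this (a minimal sketch; the meta-data layout, field names and sizes are assumptions for illustration, not part of any actual VA design):

```python
import struct

WIDTH, HEIGHT = 1024, 1024
IMAGE_BYTES = WIDTH * HEIGHT            # 8 bit per pixel

# Hypothetical meta block appended via InsertImage/AppendImage:
# a 64-bit frame counter and a 32-bit timestamp (names/layout are assumptions).
META_FMT = "<QI"                        # little-endian uint64 + uint32
META_SIZE = struct.calcsize(META_FMT)

def split_dma_buffer(buf):
    """Split one raw DMA buffer into pixel bytes and the appended meta block."""
    if len(buf) < IMAGE_BYTES + META_SIZE:
        raise ValueError("DMA transfer shorter than image + meta-data")
    pixels = buf[:IMAGE_BYTES]                        # first 1024*1024 bytes
    frame_id, timestamp = struct.unpack_from(META_FMT, buf, IMAGE_BYTES)
    return pixels, frame_id, timestamp

# Synthetic buffer standing in for a real DMA transfer:
raw = bytes(IMAGE_BYTES) + struct.pack(META_FMT, 42, 123456)
pixels, frame_id, timestamp = split_dma_buffer(raw)
```

    With a proper 64/32-bit alignment of the meta block, the same split works with a single `struct` format string or, in C, a cast to a packed struct at offset `IMAGE_BYTES`.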
    Best regards,

    For the aimed bandwidth: I would like to cut the ROI to 1280x300 (0.384 megapixels), and the processing speed should ideally reach 1000 frames per second. So the required data rate would be 384 Mpixel/s.


    As for the current frame rate: theoretically, with parallelism 1, it should be 125 MHz at 113 fps; but when I adjusted the ROI in GenICam to 1280x300, even though I set 113 fps, overflow still occurred (detected by the Overflow operator, explained in the forum post). So I guess there is something wrong with my design.


    Could anyone help with my design?

    Dear Bingnan Liu,

    If you could post your VA design over here, I could have a look into it and explain what is going "wrong" or what needs to be changed. The connected camera configuration would be a welcome detail, since it may affect the bandwidth considerations.
    Best regards,

    Hello everyone,

    As Oliver stated above, it is very likely related to the high bandwidth.
    If it works at low bandwidth, it is just a question of the parallelism used.
    To reduce the bandwidth within the design, even for a single image, you can use a lower CXP connection speed: CXP-3 or CXP-6 for example, instead of CXP-12.
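    The bandwidth figures from the question above can be sanity-checked quickly (a rough sketch; the 125 MHz design clock and 8-bit pixels are assumptions taken from the thread):

```python
import math

# Numbers from the thread (assumed): 1280x300 ROI, 8 bit, 1000 fps target,
# and a 125 MHz VA design clock moving one pixel per cycle at parallelism 1.
ROI_W, ROI_H = 1280, 300
TARGET_FPS = 1000
DESIGN_CLOCK_HZ = 125_000_000

pixels_per_frame = ROI_W * ROI_H                       # 384_000
required_pixel_rate = pixels_per_frame * TARGET_FPS    # 384_000_000 pixels/s

# Minimum parallelism needed to sustain 1000 fps at this clock:
min_parallelism = math.ceil(required_pixel_rate / DESIGN_CLOCK_HZ)   # -> 4
```

    So at parallelism 1 this ROI/frame-rate combination cannot fit through a 125 MHz design clock; a parallelism of at least 4 would be needed on the bottleneck path.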

    If you post your VA design over here we can look deeper into it and apply some changes.


    Best regards,

    Björn

    Hello Daniel,


    Yes, there is an option to make the CXP12 grabber work with that example.

    I would recommend using the CXP12 Impulse CX4S-HEP board as a target platform.

    You can load the example into VA, change the target platform and apply the required changes.

    That should be straightforward.


    Best regards,

    Björn

    Hello,


    Within an O-Type network (only O-Type operators) you can use the IsLastPixel operator - configured to frame - together with a Register to keep the required value. The O-Type nature does not require any synchronization.

    The init value of the register can be used as "not known" indicator for the first image.

    O-Type networks will not cause data-loss.
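    In software terms, the IsLastPixel/Register combination behaves roughly like this (an illustrative sketch; using -1 as the "not known" init value is an assumption):

```python
# Software analogue of IsLastPixel (mode: frame) + Register (sketch only):
# latch each frame's last pixel value; until the first frame has completed,
# the register still holds its init value, meaning "not known".
NOT_KNOWN = -1                   # assumed Register init value

def last_pixel_register(frames):
    """frames: iterable of pixel sequences; yields the register content
    that is valid while each frame streams in."""
    register = NOT_KNOWN
    for frame in frames:
        yield register           # value available during this frame
        register = frame[-1]     # IsLastPixel fires on the final pixel

values = list(last_pixel_register([[1, 2, 3], [7, 8, 9]]))
```

    The first frame sees the init value; every later frame sees the last pixel of the previous frame, without any synchronization requirement.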


    At the same time you use a parallelism-down at the beginning of the design, right after the first buffer.
    Due to this bottleneck the ImageBuffer is very likely to overflow, causing a behaviour that is very similar to the one observed.
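    How quickly such a bottleneck overflows the buffer can be estimated with a quick calculation (all numbers here are assumptions for illustration, not read from the actual design):

```python
# Sketch: a buffer fed faster than it is drained fills at the rate difference.
CLOCK_HZ = 125_000_000
P_IN, P_OUT = 4, 1                    # parallelism before/after the parallelism-down
BUFFER_BYTES = 256 * 1024 * 1024      # assumed ImageBuffer capacity, 8 bit/pixel

rate_in = CLOCK_HZ * P_IN             # bytes/s entering the buffer
rate_out = CLOCK_HZ * P_OUT           # bytes/s leaving through the bottleneck

# Under sustained input the fill level grows at (rate_in - rate_out),
# so the buffer overflows after roughly:
seconds_to_overflow = BUFFER_BYTES / (rate_in - rate_out)   # well under a second
```

    Even a large on-board buffer only absorbs such a mismatch for a fraction of a second of sustained input; the parallelism after the buffer has to carry the full rate.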


    Best regards,

    Dear DXS,


    I have seen the RS485 operator within the documentation, but I have no real example for it. I did some successful testing on this operator some years ago and it was working as expected, with 3 states.
    ( https://docs.baslerweb.com/vis…are%20Platform.RS485.html )


    But there are also some other ways of serial communication possible.
    RS232, for example, can be done with the native GPO and GPI operators; it only requires a voltage-level adaptation from TTL to RS232 levels.


    I will ask within R&D if there is some more documentation on the RS485 operator.
    Thank you.


    Best regards,

    Dear DXS,


    The approach mentioned by CarmenZ is completely correct, but it includes some software interaction.

    The SelectROI operator parameters (width, height, offset X/Y) cannot be changed within the FPGA without software interaction, but the DynamicROI operator enables this within the FPGA.

    For other requirements we have similar operators; some of them rely on software interaction for the parameters, others have dynamic inputs for control. So the question is what exactly you would like to modify, and I can mention the correct VA operators for it. Some have software parameters, others have control inputs and take direct input via data streams.


    Best regards,

    Björn

    Hello DXS,


    I looked through the Python code and I see only one potential issue (CXP port mask & speed).


    FG/DMA Setup is ok.

    Simulator works.

    Camera/SGC Setup is fine.


    From my point of view I would guess that the camera is expecting a trigger or similar but does not receive one, so no image is produced. Since the camera simulator is not updated concerning its resolution, you can observe "old" memory content from within the frame-grabber memory on the right. That is "old" data, not an actual live image from the camera.


    You double-check the error codes returned from most functions, but is an error reported by one of the Sgc_* functions during runtime?


    Please check whether the port mask fits, e.g. portMask = 0b1111, or is as expected/required; currently it is:

    portMask = s.PORTMASK0


    Please set the speed explicitly to speed = s.CXP_SPEED_12500; currently it is:

    speed = s.LINK_SPEED_NONE

    The speed is currently none; maybe the automatic approach detects the wrong speed in that case, or the camera does not support it...


    Best regards,

    Björn

    Dear pangfengjiang,


    The data will come into your design as a stream, so there is no need to "jump" to a specific row; it will come through the link line by line. If you want to remove a specific line you should use the RemoveLine operator. Since both inputs of RemoveLine need to be synchronous to each other, you should build the average gray value by summing the line with RowSum. Since only the last line pixel of the RowSum output represents the un-normalized average value - the sum of all line pixels - a RemovePixel needs to be used to focus on that result. With an additional SYNC you can control the RemoveLine. Both inputs of RemoveLine/RemovePixel need to be in sync. Independent of the text, the design below shows all details and operators:


    [Screenshot: pasted-from-clipboard.png]


    VA Design : LineSelectionAverage.va

    Since the line length is known, the sum does not need to be divided; it can be used "un-normalized".
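    The network's behaviour can be mirrored in plain software terms (an illustrative sketch; the threshold and frame size are made up):

```python
# Software analogue of RowSum -> RemovePixel -> RemoveLine (sketch only):
# keep a line when its un-normalized gray-value sum exceeds a threshold.
def filter_lines(image, threshold):
    """image: list of rows, each a list of 8-bit pixel values."""
    kept = []
    for row in image:
        row_sum = sum(row)           # RowSum: the last pixel carries this sum
        if row_sum > threshold:      # this decision drives RemoveLine
            kept.append(row)
    return kept

frame = [[10] * 8, [200] * 8, [0] * 8]       # three 8-pixel lines
bright = filter_lines(frame, threshold=800)  # keeps only the middle line
```

    Since every line has the same known length, comparing the raw sum against a pre-scaled threshold avoids the division entirely, exactly as in the VA design.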


    Best regards,

    Thanks so much. Just one question: why can some operators specify a resource type (ImplementationType), while others cannot?

    And when ImplementationType = AUTO, what is the principle VA follows to allocate resources?

    Dear Bingnan Liu,


    In case of "ImplementationType = AUTO", VisualApplets (VA) decides which resource will be used. Only when an FPGA resource conflict is shown by the DRC, or becomes obvious otherwise, does a specific discussion of the "ImplementationType" setting become necessary. A good example would be: running out of logic resources and shifting several operators to ALUs/DSPs in order to save logic/LUTs.


    Only specific operators have the ImplementationType feature, mostly the ones that require a lot of resources and have an alternative.


    Please do not hesitate to post a VA design over here and we can go through it together and give some recommendations on how to make beneficial modifications.


    Best regards,

    Hello Pangfengjiang,

    A solid approach for this is:
    [Screenshot: pasted-from-clipboard.png]

    Full VA Design can be downloaded here:
    SignalDuration Averaging.va

    The signal periods are measured behind a debouncer; the debouncer makes sure noise spikes are removed.
    Then a kernel environment over the last 20 values is taken.
    The resulting sum of all periods is divided by 20.
    The first 19 averages are removed, to get an average for each period after the first 20.
    Finally a period with the average of the last 20 periods is generated, but this part is tricky:

    - The period shows up as a single tick pulse and gets half the period as duration via SignalWidth.
    Please have a look into the documentation of PeriodToSignal and WidthToSignal to fully understand this.
    The pulse duration is half of the period.
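    The averaging chain can also be sketched in software terms (illustrative only; the constant period input is made up):

```python
from collections import deque

# Sketch of the measurement chain: average the last 20 debounced periods
# and suppress the first 19 results, as the VA design does.
N = 20

def running_period_average(periods, n=N):
    """Yield the average of the last n period measurements,
    starting once n values are available (the first n-1 are dropped)."""
    window = deque(maxlen=n)
    for p in periods:
        window.append(p)
        if len(window) == n:
            yield sum(window) / n          # division by 20 in the design

# 25 identical periods of 1000 ticks; the generated output pulse
# would then be half of the average period (via SignalWidth).
averages = list(running_period_average([1000] * 25))
```

    The kernel environment in VA corresponds to the sliding window here; dropping the first 19 outputs ensures only fully populated windows produce an average.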

    Best regards,