Posts by Johannes Trein

    Hi


    PCO edge cameras can be used in a 12(16) bit rolling shutter mode. To use the camera in this mode, a special applet is required, because a 12 bit Camera Link format was not supported at that time: the applet has to unpack the bits, sort the two sensor areas, and perform a 12 to 16 bit unpacking.


    To use the camera with a V-Series frame grabber you need to:

    - use the attached VA design example

    - use the Silicon Software runtime API, as the V-Series grabber is not supported by PCO CamWare

    - use the PCO SDK for serial communication with the camera. This is mandatory, as the camera needs to be parameterized and started before every use (see the sketch below this list).
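
    A minimal sketch of this start-up sequence, using PCO SDK function names as I remember them from pco.sdk; please verify names and headers against the PCO SDK manual:

    ```cpp
    // Minimal sketch: parameterize and start a pco.edge via the PCO SDK.
    // Function and header names from pco.sdk as remembered; check the manual.
    #include <windows.h>
    #include "SC2_CamExport.h"   // PCO SDK export header (name may vary)

    int main()
    {
        HANDLE cam = nullptr;
        if (PCO_OpenCamera(&cam, 0) != 0)      // open the first camera
            return 1;
        // ... set bit depth, readout mode, trigger, etc. here ...
        PCO_ArmCamera(cam);                    // apply the settings
        PCO_SetRecordingState(cam, 1);         // start the camera
        // ... grab via the Silicon Software runtime API ...
        PCO_SetRecordingState(cam, 0);         // stop the camera again
        PCO_CloseCamera(cam);
        return 0;
    }
    ```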


    The VA design file does the following; all processing is done according to the documentation from PCO.


    As no 12 bit format is defined in Camera Link, the applet unpacks 2 pixels out of 3 bytes according to the information and format definition from PCO. To unpack the data, we need to follow this scheme:

    pasted-from-clipboard.png
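
    For illustration, here is the unpacking in host code. I assume a Mono12Packed-style layout (byte 0 = P0[11:4], byte 1 = P1[3:0]:P0[3:0], byte 2 = P1[11:4]); the authoritative bit positions are in PCO's format definition shown above:

    ```cpp
    // Unpack 2 x 12 bit pixels from every 3 bytes (Mono12Packed-style
    // layout assumed; verify against PCO's format definition).
    #include <cstdint>
    #include <vector>

    std::vector<uint16_t> unpack12(const uint8_t* src, size_t pixelCount)
    {
        std::vector<uint16_t> dst(pixelCount);
        for (size_t i = 0; i + 1 < pixelCount; i += 2, src += 3) {
            dst[i]     = (uint16_t(src[0]) << 4) | (src[1] & 0x0F); // P0
            dst[i + 1] = (uint16_t(src[2]) << 4) | (src[1] >> 4);   // P1
        }
        return dst;
    }
    ```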


    The sensor is read out in two vertically aligned zones. Next, we have to resort the lines from both zones. The applet follows ‘Mode A: Top-Center/Bottom-Center’.

    pasted-from-clipboard.png
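
    A sketch of the resorting, under the assumption that in this mode the lines arrive in pairs, both zones starting at the sensor center, so pair i maps to output rows H/2 - 1 - i (top zone) and H/2 + i (bottom zone); please verify the order against PCO's documentation shown above:

    ```cpp
    // Resort the interleaved two-zone readout into a top-to-bottom image
    // (line arrival order is an assumption; see lead-in above).
    #include <cstdint>
    #include <cstring>

    void resortModeA(const uint16_t* in, uint16_t* out, int width, int height)
    {
        for (int i = 0; i < height / 2; ++i) {
            const uint16_t* top    = in + (2 * i)     * width; // top-zone line i
            const uint16_t* bottom = in + (2 * i + 1) * width; // bottom-zone line i
            std::memcpy(out + (height / 2 - 1 - i) * width, top,    width * sizeof(uint16_t));
            std::memcpy(out + (height / 2 + i)     * width, bottom, width * sizeof(uint16_t));
        }
    }
    ```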


    The 12 bit values are 16 bit values compressed by the following scheme.

    pasted-from-clipboard.png

    To unpack, we simply need to do the reverse operation:

    pasted-from-clipboard.png
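
    In host code, the reverse operation boils down to a 4096-entry lookup table; the table contents implement the inverse of PCO's compression curve shown above and have to be filled from PCO's documentation (not reproduced here):

    ```cpp
    // 12 -> 16 bit unpacking via lookup table. Fill 'lut' with the inverse
    // of PCO's compression curve from the documentation.
    #include <cstddef>
    #include <cstdint>

    static uint16_t lut[4096];   // lut[v12] = decompressed 16 bit value

    void unpackTo16(const uint16_t* in12, uint16_t* out16, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            out16[i] = lut[in12[i] & 0x0FFF];
    }
    ```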


    PCO edge 5.5 in 12 bit rolling shutter mode.va



    Johannes

    Hi Theo


    Operator ImageBuffer is a line buffer. So the operator will not store the complete image before readout, only one line, or more if it is blocked at the output. Therefore you can use an image size which is larger than the RAM assigned to one of these operators.

    Basically, if there is no reason to store the full image, it is better not to, in order to reduce the latency.

    See the Memory library documentation (library.Memory.html) in your VisualApplets 3.2 installation to find out which memory operator has a line or frame delay.


    The mE5-VCX-QP has a RAM data width of 512 bit. So you should use as much of the data width as possible, to speed up the DRAM in the shared memory platform and not to waste memory. As you are using 24 bit per pixel, a parallelism of 21 is good in your case (see the calculation below). A following SelectROI after the reduction of the parallelism will remove the dummy pixels.
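
    To make the numbers explicit, a small calculation (plain C++, nothing grabber-specific):

    ```cpp
    // 512 bit RAM word at 24 bit per pixel:
    //   512 / 24 = 21.33 -> parallelism 21
    //   21 * 24 = 504 bits used, only 8 bits per RAM word are wasted
    #include <cstdio>

    int main()
    {
        const int ramWidth = 512, bitsPerPixel = 24;
        const int parallelism = ramWidth / bitsPerPixel;  // integer division -> 21
        std::printf("parallelism %d: %d of %d bits used\n",
                    parallelism, parallelism * bitsPerPixel, ramWidth);
        return 0;
    }
    ```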


    Johannes

    Hi JSuria


    let me add some ideas:

    - in microDisplay -> Tools -> Settings, check "Ignore Camclock status"

    - in microDisplay -> Tools -> Settings, uncheck "Use GenICam Camera parameter"

    - start the acquisition in microDisplay FIRST

    - then start the acquisition of the three cameras in the GenICam Explorer

    - finally, start the trigger


    If your process has multiple DMAs, ensure that all of them are started.


    I hope that one of these ideas helps you. Basically, your explanations are all correct, and we have brought this to work in microDisplay before. In an SDK application it is no problem, as the cameras are started manually anyway.


    Johannes

    Dear Saito-san


    please see the attached VA file, which outputs the average of a bouncy encoder signal, similar to the explanations given by Carmen above.

    You need to add another DIV by 512 to get a 512 times higher frequency (see the sketch below).
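
    The idea in host code, assuming the averaged encoder period is available as a tick count (the VA file does the same with operators):

    ```cpp
    // Frequency multiplication by period division: if one averaged encoder
    // period is 'periodTicks' long, emitting a pulse every periodTicks / 512
    // ticks yields a 512 times higher output frequency.
    #include <cstdint>

    uint32_t pulseIntervalTicks(uint32_t periodTicks)
    {
        return periodTicks / 512;   // the additional DIV by 512
    }
    ```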


    When I have time I will make a more detailed explanation of this example in another post.


    Johannes

    Hi Arjun


    Xilinx shows some system recommendations on their web pages, e.g. https://www.xilinx.com/product…-tools/vivado/memory.html


    For the mE5 marathon FPGAs, Xilinx states that a maximum of 3 GB RAM is used. For the future frame grabbers which are currently in development, Xilinx states a peak memory usage of 14 GB.


    Xilinx does not recommend any specific CPU or speed. The build process for recent FPGAs uses all cores.


    For VisualApplets you can use any recent standard PC. Simulation can be sped up with fast SSDs and a fast CPU. Multiple cores are barely used by VisualApplets.


    Johannes

    Hello Oliver


    with the new JPEG operator released in VA 3.2.0 (see New high speed JPEG operator and examples (VA 3.2.0)) you don't need to generate the header anymore. So there are no specific JPEG functions required in the SDK sample: simply grab the data and write it to a file with the extension .jpg (see the sketch below). If you want to decode the image, you will of course need a decoder. You can use any you like, such as the JpegBitmapDecoder coming with Microsoft C#.
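
    A minimal sketch of the grab-and-write part, assuming the applet with the JPEG operator is already loaded; function and parameter names are from the Silicon Software runtime API as I remember them ("myJpegApplet.hap" is only a placeholder), so please check them against the SDK documentation:

    ```cpp
    // Grab one JPEG frame via the Silicon Software runtime API and write it
    // to disk. The DMA transfer length is the actual JPEG size.
    #include <cstdio>
    #include <fgrab_prototyp.h>
    #include <fgrab_define.h>

    int main()
    {
        Fg_Struct* fg = Fg_Init("myJpegApplet.hap", 0); // placeholder applet name
        if (!fg) return 1;

        const unsigned int dma = 0;
        dma_mem* mem = Fg_AllocMemEx(fg, 16 * 1024 * 1024, 4); // 4 buffers
        Fg_AcquireEx(fg, dma, GRAB_INFINITE, ACQ_STANDARD, mem);

        frameindex_t pic = Fg_getLastPicNumberBlockingEx(fg, 1, dma, 10, mem);
        if (pic > 0) {
            void* data = Fg_getImagePtrEx(fg, pic, dma, mem);
            size_t len = 0; // actual JPEG size; check the documented value type
            Fg_getParameterEx(fg, FG_TRANSFER_LEN, &len, dma, mem, pic);
            FILE* f = std::fopen("frame.jpg", "wb");
            std::fwrite(data, 1, len, f);
            std::fclose(f);
        }
        Fg_stopAcquireEx(fg, dma, mem, 0);
        Fg_FreeMemEx(fg, mem);
        Fg_FreeGrabber(fg);
        return 0;
    }
    ```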


    See C# wrapper documentation.


    Let me know if this helps you.


    Johannes

    Hi Cam


    That would be a nice application. The solution strongly depends on bandwidth requirements.


    The most obvious solution would be the following:

    - store each input image in DRAM using operator FrameBufferRandomRd

    - use CreateImage, ModuloCount, DIV, maybe SIN, COS, etc. to generate the read columns and rows

    - read from DRAM


    An example using FrameBufferRandomRd together with CreateBlankImage can be found HERE. It is a more complex task but can give you some idea.


    How is the "new line" defined? Could you use a LUT to store point and gradient? The sketch below shows the idea.
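
    A host-side sketch of the coordinate generation, assuming the "new line" is given by a start point and a gradient as suggested above. In the applet, ModuloCount/DIV (and SIN/COS for angles) would generate the running index; here it is a plain loop:

    ```cpp
    // Sample n pixels along a line defined by start point (x0, y0) and
    // gradient (dx, dy) via random reads, like FrameBufferRandomRd does.
    #include <cmath>
    #include <cstdint>
    #include <vector>

    std::vector<uint16_t> readLine(const uint16_t* img, int width, int height,
                                   float x0, float y0, float dx, float dy, int n)
    {
        std::vector<uint16_t> out(n);
        for (int t = 0; t < n; ++t) {
            int x = int(std::lround(x0 + t * dx));
            int y = int(std::lround(y0 + t * dy));
            if (x < 0) x = 0; else if (x >= width)  x = width  - 1; // clamp
            if (y < 0) y = 0; else if (y >= height) y = height - 1;
            out[t] = img[y * width + x];   // random read from the frame buffer
        }
        return out;
    }
    ```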


    Johannes

    Hi Janek


    In the example Scaling a Line Scan Image you can find a design which scales in x-direction using a lookup table. The difficult part of the implementation is the x-direction, as we have to overcome some difficulties at higher parallelisms. The y-direction is simpler, as you can do it as a predecessor of the x-scaling.

    To be honest, the example is not the easiest to understand, as it is strongly optimized for a higher parallelism. The sketch below shows the basic LUT idea without these optimizations.
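
    As an illustration (nearest-neighbor mapping assumed, which is my simplification, not necessarily what the example uses):

    ```cpp
    // LUT-based x-scaling: for every output pixel the LUT stores the source
    // x coordinate (nearest neighbor).
    #include <cstdint>
    #include <vector>

    std::vector<uint32_t> buildScaleLut(uint32_t inWidth, uint32_t outWidth)
    {
        std::vector<uint32_t> lut(outWidth);
        for (uint32_t x = 0; x < outWidth; ++x)
            lut[x] = x * inWidth / outWidth;   // nearest-neighbor mapping
        return lut;
    }

    void scaleLineX(const uint16_t* in, uint16_t* out,
                    const std::vector<uint32_t>& lut)
    {
        for (size_t x = 0; x < lut.size(); ++x)
            out[x] = in[lut[x]];               // one LUT read per output pixel
    }
    ```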


    I don't recommend using the examples Geometric Transformation and Distortion Correction. They would solve your problem but are implemented for more complex transformations. Your implementation should use BRAM instead of DRAM.


    Let us know your input parallelism (bandwidth requirement) and the maximum resolution of the image which has to be scaled. Maybe we can find another example, as we have done plenty of these implementations.


    Johannes

    Dear Saito san


    we are aware of this type of request. Unfortunately, AcquisitionApplets use a lot of additional software, which makes it impossible to rebuild their functionality with VisualApplets.


    At this moment your request cannot be solved. The required processing functions of the AcquisitionApplets have to be rebuilt with VisualApplets.


    Johannes

    By the way, the design I provided has been successfully synthesized, and it reports timeout at runtime instead of timing errors during synthesis.

    Oh sorry. I did not see that.


    Are you using microDisplay? I have an idea: as you are using GigE Vision, both cameras need to be started at the same time and be 100% synchronous. microDisplay can only start one camera when you click the "play" button inside the program.


    If you check the buffer fill levels I think that one of them will be in overflow while the other one will be empty.


    There are two solutions:

    1. Use the new version of microDisplay called microDisplayX. It is included in runtime 5.7.0 and you can find it in %sisodir5%/bin/microDisplayX.exe. There is a synchronization option which can be used to start all cameras at the same time.
    2. If you want to use the old microDisplay, you need to start the GenICam Explorer in parallel and manually start the second camera by writing to the parameter "StartAcquisition".

    Note: If you set both cameras to the same frame rate, they will still be a little asynchronous. Sooner or later the buffers will be filled. You need to externally trigger the cameras to be 100% synchronous.


    Johannes

    Hi Michael


    you wrote

    Thus, the pixel at (0,0) would be the least significant byte of the pixel, and the pixel at (1,0) would be the most significant byte. The pixel at (1,0) would have the incorrect endianness, so it would need to be “flipped.”

    Hm. I can't see from your CL spec image that it is flipped.

    8-Bit Mono Full 8 taps mode also has a ninth 8-bit pixel which does not exist in 16-Bit Mono Full 4 Taps mode, so this must be sent to the trash.

    If you set the VisualApplets operator to 8 Tap, it will not use the ninth pixel.


    I think the solution is very simple:


    • Use the FullGrayCamera operator set to 8 Tap static, with the Max. Image Width on its output link doubled compared to the camera width.
    • Use operator CastParallel to get 16 bit pixels (see the sketch after this list).
    • Use an adaptation of the Tap Sorting examples to sort the four taps.
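
    A sketch of what the CastParallel step does with two adjacent 8 bit taps, following the layout you quoted (even x = least significant byte, odd x = most significant byte):

    ```cpp
    // Merge pairs of 8 bit taps into 16 bit pixels, LSB first.
    #include <cstddef>
    #include <cstdint>

    void mergeTaps(const uint8_t* taps8, uint16_t* pixels16, size_t pixelCount)
    {
        for (size_t i = 0; i < pixelCount; ++i)
            pixels16[i] = uint16_t(taps8[2 * i]) |            // LSB at (2i, y)
                          (uint16_t(taps8[2 * i + 1]) << 8);  // MSB at (2i+1, y)
    }
    ```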

    I've added a VA sample. It is a little simpler compared to the examples in the VA installation directory but does the same with a little more FPGA resources.


    Johannes