Posts by Johannes Trein

    Hi


    what you are requesting is an IIR filter instead of an FIR filter. The problem here is that the result of the current step is needed before the next pixel can be processed. Any parallelism in a pipeline stage is therefore impossible, which makes this operation very slow.
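
    For illustration, this is what the feedback looks like in software (a minimal Python sketch; the first-order filter and its coefficient are assumptions, not your concrete filter):

```python
# y[n] = alpha * x[n] + (1 - alpha) * y[n-1]
# Each output depends on the PREVIOUS output, so the loop is strictly
# sequential -- unlike an FIR filter, it cannot be unrolled across pixels.
def iir_first_order(pixels, alpha=0.25):
    y = 0.0
    out = []
    for x in pixels:                      # feedback: y carries state
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out
```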


    A direct approach does not exist in VisualApplets, but you can use loops for your requirement. Instead of the existing loop examples, where lines or frames are processed, you will need to process a single pixel inside the loop.


    Before going into detail you should be aware of the bandwidth limitation: because of the feedback, only a fraction of the FPGA clock rate will be possible.


    Johannes

    Hi Pier


    I added a Dummy FIFO with InfiniteSource = Enable to the design. See attached.


    Using SplitLine creates "End Of Line" markers which need to be inserted into the stream. As cameras cannot be stopped, the DRC will throw an error. However, our AppendLine operator will remove exactly these markers and generate a gap, so we need a dummy FIFO to trick the DRC.


    Johannes

    Hi Pier,


    there is a very simple solution for this: use TrgBoxLine and control the sequence length instead of the image height. To do so, append all camera lines into a single very long line and convert to 1D. Now TrgBoxLine can select the desired lines, i.e. frames. Afterwards, convert back to 2D and restore the image width.
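
    Conceptually, the append/select/reshape chain does this (a rough numpy analogy, not the VisualApplets operators themselves; the frame geometry is made up):

```python
import numpy as np

# Hypothetical stream: 4 frames of 8 lines x 16 pixels arriving line by line.
lines = np.arange(4 * 8 * 16).reshape(4 * 8, 16)

stream_1d = lines.reshape(-1)        # append all lines -> one very long 1D line
frame_len = 8 * 16                   # "sequence length" = pixels per frame
k = 2                                # select e.g. the third frame
selected = stream_1d[k * frame_len:(k + 1) * frame_len]

frame = selected.reshape(8, 16)      # convert back to 2D, restore image width
```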


    See attached.


    There are much smoother options, but this one is the simplest and easiest to use.


    Johannes

    Hi Pier


    as Simon wrote, the bit position depends on the data packing. But assuming a byte-by-byte order, you simply need to acquire using an 8 bit mono format and use CastParallel so that you get a 46 bit link.

    I would suggest using an AcquisitionApplets 8 bit gray applet and setting the ROI width and height so that you can see the triangulation data. Check the first eight bytes to get an idea where your data is located.
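
    If you dump a grabbed frame to disk, peeking at those bytes is trivial (a generic Python sketch; "frame.raw" is only a placeholder for your dump file):

```python
# Print the first eight bytes of a raw frame dump in hex.
with open("frame.raw", "rb") as f:    # placeholder file name
    head = f.read(8)
print(" ".join(f"{b:02x}" for b in head))
```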


    Johannes

    While I was thinking about this again I figured out that my assumption is wrong. I wrote:

    "This will most likely generate blocking signals at the SYNC input and cause ImageBuffer overflows."

    The blocking signals will be generated at the second input, so PulseCounter would be stopped although it cannot be stopped. So the implementation might cause a wrong synchronization, but it cannot explain the errors you see.


    Anyway, please still comment on my first two paragraphs, quoted here:

    "... so you are saying the SignalGate operators switch between whole frames and not lines. The error is then very unlikely at this position. Still, you have to be very careful that the buffers get an even distribution of even and odd images."


    "Could you check the fill levels of all ImageBuffer operators during acquisition? Is any operator's fill level > 50%?"

    Johannes

    Hello Arjun


    so you are saying the SignalGate operators switch between whole frames and not lines. The error is then very unlikely at this position. Still, you have to be very careful that the buffers get an even distribution of even and odd images.


    Could you check the fill levels of all ImageBuffer operators during acquisition? Is any operator's fill level > 50%?


    I think there is an error in module DocID. PulseCount will generate a pixel with every clock cycle. The SYNC operator performs a 0D to 2D synchronization, so one pixel is required at both inputs. This will most likely generate blocking signals at the SYNC input and cause ImageBuffer overflows. However, you are saying it works sometimes; I would assume it cannot work at all, so my assumption might be incomplete.

    [screenshot of the design region in question]


    I changed the design. See attached.


    Let us know your results.


    Johannes

    Hi


    PCO edge cameras can be used in a 12(16) bit rolling shutter mode. To use the camera in this mode, a special applet is required to unpack the bits (a 12 bit Camera Link format was not supported at that time), sort the two sensor areas and do a 12 to 16 bit unpacking.


    To use the camera with a V-Series frame grabber you need to:

    - use the attached VA design example

    - use the Silicon Software runtime API, as the V-Series grabber is not supported by PCO CamWare

    - use the PCO SDK for serial communication with the camera. This is mandatory, as the camera needs to be parameterized and started on every use.


    The VA design file does the following; the processing is done according to the PCO documentation.


    As no 12 bit format is defined in Camera Link, the applet unpacks 2 pixels out of 3 bytes according to the information and format definition from PCO. To unpack the data we need to follow this scheme:

    [figure: 2-pixels-from-3-bytes packing scheme, from the PCO documentation]
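
    In software the unpacking would look roughly like this (a Python sketch; the exact nibble order is defined in the figure above and not reproduced here, so the little-endian layout below is an assumption):

```python
def unpack_12bit_pairs(raw: bytes) -> list[int]:
    # Two 12 bit pixels per 3 bytes; nibble order is ASSUMED little-endian.
    pixels = []
    for i in range(0, len(raw) - 2, 3):
        b0, b1, b2 = raw[i], raw[i + 1], raw[i + 2]
        pixels.append(b0 | ((b1 & 0x0F) << 8))   # pixel 0: low byte + low nibble
        pixels.append((b1 >> 4) | (b2 << 4))     # pixel 1: high nibble + high byte
    return pixels
```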


    The sensor is read out in two vertically aligned zones. Next, we have to re-sort the lines from both zones. The applet follows 'Mode A: Top-Center/Bottom-Center'.

    [figure: line sorting of the two sensor zones, Mode A: Top-Center/Bottom-Center]


    The 12 bit values are 16 bit values compressed by the following scheme.

    [figure: 16 to 12 bit compression scheme, from the PCO documentation]

    To unpack, we simply need to do the reverse operation:

    [figure: 12 to 16 bit unpacking scheme (reverse operation)]


    PCO edge 5.5 in 12 bit rolling shutter mode.va



    Johannes

    Hi Theo


    Operator ImageBuffer is a line buffer. The operator will not store the complete image before readout, only one line (or more if it is blocked at the output). So with this operator you can use an image size which is larger than the RAM.

    Basically, unless there is a reason to store the full image, it is better not to, as this reduces the latency.

    See file:///Q:/Silicon%20Software/VA/Releases/VisualApplets_3.2/Docu/en/manuals/content/library.Memory.html to find out which memory operators have a line delay and which have a frame delay.


    The mE5-VCX-QP has a RAM data width of 512 bit. You should use as much of the data width as possible to speed up the DRAM in the shared memory platform and not waste memory. As you are using 24 bit per pixel, a parallelism of 21 is good in your case (21 × 24 = 504 of 512 bits). A subsequent SelectROI after the reduction of the parallelism will remove the dummy pixels.
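
    The arithmetic behind that choice, as a quick Python check (values from this thread):

```python
ram_width = 512                # mE5-VCX-QP DRAM data width in bits
bits_per_pixel = 24

parallelism = ram_width // bits_per_pixel   # 21 pixels per RAM word
used = parallelism * bits_per_pixel         # 504 bits carry data
print(parallelism, used, ram_width - used)  # 21 504 8 -> only 8 bits wasted
```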


    Johannes

    Hi JSuria


    let me add some ideas:

    - in microDisplay -> Tools -> Settings, enable "Ignore Camclock status"

    - in microDisplay -> Tools -> Settings, disable "Use GenICam Camera parameter"

    - start the acquisition in microDisplay FIRST

    - after that, start the acquisition of the three cameras in the GenICam Explorer

    - finally, start the trigger


    If your process has multiple DMAs, ensure that all DMAs are started.


    I hope one of these ideas helps you. Basically your explanations are all correct, and we have gotten this to work in microDisplay before. In an SDK application it is no problem, as the cameras are started manually anyway.


    Johannes

    Dear Saito-san


    please see the attached VA file, which outputs the average of a bouncing encoder signal, similar to the explanations given by Carmen above.

    You need to add another DIV by 512 to get a 512 times higher frequency.
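
    The idea behind that DIV, in numbers (a hypothetical sketch; the tick values are made up, the attached file defines the real scaling):

```python
# Multiplying the encoder frequency by 512 means emitting one pulse
# every measured_period / 512 clock ticks.
measured_period_ticks = 125_000                       # hypothetical averaged period
output_period_ticks = measured_period_ticks // 512    # the extra DIV by 512
print(output_period_ticks)                            # 244 -> 512x the frequency
```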


    When I have time I will write a more detailed explanation of this example in another post.


    Johannes

    Hi Arjun


    Xilinx publishes system recommendations on their web pages, e.g. https://www.xilinx.com/product…-tools/vivado/memory.html


    For mE5 marathon FPGAs, Xilinx states that a maximum of 3 GB RAM is used. For the future frame grabbers which are currently in development, Xilinx states a peak memory usage of 14 GB.


    Xilinx does not recommend any specific CPU or speed. The build process for recent FPGAs uses all cores.


    For VisualApplets you can use any recent standard PC. Simulation can be sped up with fast SSDs and a fast CPU. Multiple cores are barely used by VisualApplets.


    Johannes

    Hello Oliver


    with the new JPEG operator released in VA 3.2.0 (see New high speed JPEG operator and examples (VA 3.2.0)) you don't need to generate the header anymore. So no specific JPEG functions are required in the SDK sample: simply grab the data and write it to a file with the extension .jpg. If you want to decode the image, you will of course need a decoder; you can use any you like, such as the JpegBitmapDecoder that comes with Microsoft's .NET.
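
    A minimal sketch of the "just write it out" part (Python for brevity; get_grabbed_buffer() is a placeholder for your actual SDK grab code, not a real API call):

```python
def save_jpeg(buf: bytes, path: str = "frame.jpg") -> None:
    # The operator already delivers a complete JPEG stream, header included.
    with open(path, "wb") as f:
        f.write(buf)

# save_jpeg(get_grabbed_buffer())   # placeholder usage
```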


    See the C# wrapper documentation.


    Let me know if this helps you.


    Johannes

    Hi Cam


    That would be a nice application. The solution strongly depends on the bandwidth requirements.


    The most obvious solution would be the following (see the sketch after this list):

    - store each input image in DRAM using operator FrameBufferRandomRd

    - use CreateImage, ModuloCount, DIV, maybe SIN, COS, etc. to generate the read columns and rows

    - read from DRAM
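
    As a conceptual sketch of the address generation (numpy stands in for the operators; the line parameters are made up):

```python
import numpy as np

x0, y0 = 10.0, 20.0      # hypothetical start point of the "new line"
dx, dy = 0.8, 0.6        # hypothetical direction / gradient
n = 256                  # output pixels along the line

t = np.arange(n)
cols = np.round(x0 + t * dx).astype(int)   # read column per output pixel
rows = np.round(y0 + t * dy).astype(int)   # read row per output pixel

# frame[rows, cols] would be the random read from the buffered image.
```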


    An example using FrameBufferRandomRd together with CreateBlankImage can be found HERE. It is a more complex task, but it can give you an idea.


    How is the "new line" defined? Could you use a LUT to store point and gradient?


    Johannes

    Hi Janek


    In the example Scaling a Line Scan Image you can find a design that scales in x-direction using a lookup table. The difficult part of the implementation is the x-direction, as we have to overcome some difficulties at higher parallelisms. The y-direction is simpler, as you can do it as a predecessor of the x-scaling.

    To be honest, the example is not the easiest to understand, as it is strongly optimized for a higher parallelism.
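
    Stripped of the parallelism, the LUT idea itself is simple (a sequential Python sketch with made-up widths, nearest-neighbour only):

```python
import numpy as np

src_width, dst_width = 2048, 1024

# The LUT stores, for every output pixel, which input pixel to read.
lut = (np.arange(dst_width) * src_width // dst_width).astype(int)

line = np.random.randint(0, 255, src_width)   # one dummy line scan line
scaled = line[lut]                            # x-scaling by table lookup
```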


    I don't recommend using the examples Geometric Transformation and Distortion Correction. They would solve your problem but are implemented for more complex transformations. Your implementation should use BRAM instead of DRAM.


    Let us know your input parallelism (bandwidth requirement) and the maximum resolution of the image which has to be scaled. Maybe we can find another example, as we have done plenty of these implementations.


    Johannes