Posts by Johannes Trein

    In addition to the great posts above on the use of fixed-point arithmetic:


    Notation: u(5,3) means unsigned, with 5 integer and 3 fractional bits.


    There are simple rules for fixed-point arithmetic.

    For additions and subtractions you need the same number of fractional bits at both inputs. The result will have the same number of fractional bits, and the number of integer bits increases by one.

    So u(5,3) + u(9,3) will become u(10,3)

    e.g. 1.5 + 2.25 = 3.75 --> 12 + 18 = 30


    For multiplication and division the number of fractional bits at the inputs can differ. For multiplication, the number of fractional bits in the result is the sum of the input fractional bits, and the number of integer bits is the sum of the input integer bits.

    So u(5,3) * u(2,8) will become u(7,11)

    e.g. 1.5 * 2.25 = 3.375 --> 12 * 576 = 6912
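    The rules above can be sketched in a few lines of Python (an illustration only; in VisualApplets the bit widths are fixed in hardware):

    ```python
    # A value in u(i, f) format is stored as the integer round(value * 2**f).

    def to_fixed(value, frac_bits):
        """Encode a real value as an unsigned fixed-point integer."""
        return round(value * (1 << frac_bits))

    def from_fixed(raw, frac_bits):
        """Decode a fixed-point integer back to a real value."""
        return raw / (1 << frac_bits)

    # Addition: both inputs use 3 fractional bits; the result keeps 3.
    a = to_fixed(1.5, 3)    # 12 in u(5,3)
    b = to_fixed(2.25, 3)   # 18 in u(9,3)
    s = a + b               # 30 -> 3.75 in u(10,3)

    # Multiplication: fractional bits add up (3 + 8 = 11).
    c = to_fixed(1.5, 3)    # 12 in u(5,3)
    d = to_fixed(2.25, 8)   # 576 in u(2,8)
    p = c * d               # 6912 -> 3.375 in u(7,11)

    print(s, from_fixed(s, 3))     # 30 3.75
    print(p, from_fixed(p, 11))    # 6912 3.375
    ```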


    Johannes

    Hi Theo


    see some examples of the alternative implementation to ImageSequence, as Björn mentioned, using FrameBufferRandomRd in the HDR examples: https://docs.baslerweb.com/liv…gh%20Dynamic%20Range.html


    Copy &amp; paste of the old ImageSequence operator into a CXP project is allowed, as the operator was available for CXP in earlier versions. But it is very likely that you will get a timing error. Moreover, the operator can only be used with a parallelism of four, which makes it very inefficient with the shared-memory concept.


    Johannes

    Hello Sangrae Kim


    you will need to do the de-Bayering before the distortion correction, so your bandwidth will become three times higher.

    An implementation on the VQ4-GE is possible, but the bandwidth might be limited. So it totally depends on the bandwidth requirements and on the distortion factor, i.e. whether pixels shift by several pixels or just by between 0 and 2 pixels.


    Johannes

    Hi Jesse


    I guess you get an overflow in your design.

    CXP6 x2 can deliver at maximum 1200 MB/s. If you reduce the parallelism to 8, you can only process 1000 MP/s at maximum. Even if there are gaps between the images, you cannot process this burst speed.
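    The numbers can be checked with a quick back-of-the-envelope calculation (assuming a 125 MHz design clock, which is an assumption on my side and depends on the applet):

    ```python
    # Rough bandwidth check: pixels processed per second vs. link peak rate.
    link_bandwidth_mb_s = 1200      # CXP6 x2 peak payload, 8 bit per pixel
    clock_mhz = 125                 # assumed FPGA design clock
    parallelism = 8                 # pixels processed per clock cycle

    processed_mp_s = clock_mhz * parallelism   # 1000 MP/s

    # True: the pipeline cannot absorb the camera's burst rate
    print(processed_mp_s < link_bandwidth_mb_s)
    ```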


    So I changed the order of your operators: first do the calculation and remove all unnecessary lines. Next, place a FIFO, which then only has to store a single line instead of a whole frame. After that you can reduce the parallelism, if the mean camera bandwidth allows it.


    See attached.


    Johannes

    Hi


    what you are requesting is an IIR filter instead of an FIR filter. The problem here is that the calculation of the current step has to be finished before the next pixel can be processed. Any parallelism in a pipeline stage is therefore impossible. This makes the operation very slow.
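    The sequential dependency can be sketched like this (using a first-order low-pass as a stand-in; the actual filter in your design may differ):

    ```python
    # Why an IIR filter blocks pipelining: each output depends on the
    # previous output, so pixels cannot be processed in parallel.
    # First-order low-pass: y[n] = a*y[n-1] + (1-a)*x[n]

    def iir_lowpass(pixels, a=0.5):
        y = 0.0
        out = []
        for x in pixels:            # strictly sequential: y feeds back into itself
            y = a * y + (1 - a) * x
            out.append(y)
        return out

    print(iir_lowpass([8, 8, 8, 8]))  # [4.0, 6.0, 7.0, 7.5]
    ```

    Every iteration needs the result of the previous one, which is exactly why only one pixel per loop iteration is possible in hardware.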


    A direct approach in VisualApplets does not exist, but you can use loops for your requirement. Instead of the existing loop examples, where lines or frames are processed, you will need to process a single pixel inside the loop.


    Before going into detail you should be aware of the bandwidth limitation: only a fraction of the FPGA clock rate will be possible.


    Johannes

    Hi Pier


    I added a Dummy FIFO with InfiniteSource = Enable to the design. See attached.


    Using SplitLine will create "end of line" markers which need to be inserted into the stream. As cameras cannot be stopped, the DRC will throw an error. However, our AppendLine operator will remove exactly these markers and generate a gap. So we need a dummy FIFO to trick the DRC.


    Johannes

    Hi Pier,


    there is a very simple solution for this: use TrgBoxLine and control the sequence length instead of the image height. To do so, you need to append all camera lines into a single very long line and convert it to 1D. Now TrgBoxLine can select the desired lines, i.e. frames. Finally, convert back to 2D and restore the image width.


    See attached.


    There are smoother options, but this one is the simplest and easiest to use.


    Johannes

    Hi Pier


    as Simon wrote, the bit position depends on the data packing. But assuming a byte-by-byte order, you simply need to acquire using an 8 bit mono format and use CastParallel so that you get a 46 bit link.

    I would suggest using an 8 bit gray AcquisitionApplet and setting the ROI width and height so that you can see the triangulation data. Check the first eight bytes to get an idea where your data is located.


    Johannes

    While I was thinking about this again I figured out that my assumption is wrong.

    I wrote that this would most likely generate blocking signals at the SYNC input and cause ImageBuffer overflows. However, the blocking signals will be generated at the second input, so PulseCounter would be stopped although it cannot be stopped. The implementation might cause a wrong synchronization, but it cannot explain the errors you see.


    Anyway, please comment on my first two points.


    Johannes

    Hello Arjun


    so you are saying the SignalGate operators switch between whole frames, not lines. Then an error at this position is very unlikely. Still, you have to be very careful that you get an even distribution of even and odd images for the buffers.


    Could you check the fill levels of all ImageBuffer operators during acquisition? Is any operator's fill level > 50%?


    I think there is an error in module DocID. PulseCount will generate a pixel with every clock cycle. The SYNC operator performs a 0D-to-2D synchronization, so one pixel is required at both inputs. This will most likely generate blocking signals at the SYNC input and cause ImageBuffer overflows. However, you are saying it works sometimes; I would assume it cannot work at all, so my assumption might be incomplete.

    pasted-from-clipboard.png


    I changed the design. See attached.


    Let us know your results.


    Johannes

    Hi


    PCO edge cameras can be used in a 12(16) bit rolling shutter mode. To use the camera in this mode, a special applet is required which unpacks the bits (as a 12 bit Camera Link format was not supported at that time), sorts the two sensor areas, and does a 12 to 16 bit unpacking.


    To use the camera with a V-Series frame grabber you need to:

    - use the attached VA design example

    - use the Silicon Software runtime API, as the V-Series grabber is not supported by PCO CamWare

    - use the PCO SDK for serial communication with the camera. This is mandatory, as the camera needs to be parameterized and started before every use.


    The VA design file does the following; the processing follows PCO's documentation.


    As no 12 bit format is defined in Camera Link, the applet unpacks 2 pixels out of 3 bytes according to the information and format definition from PCO. To unpack the data, we need to follow this scheme:

    pasted-from-clipboard.png


    The sensor is read out in two vertically aligned zones. Next, we have to resort the lines from both zones. The applet follows 'Mode A: Top-Center/Bottom-Center'.

    pasted-from-clipboard.png
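    A hedged sketch of the resorting, assuming the two zones arrive interleaved (top line, bottom line, next top line, next bottom line, ...); the exact interleaving must be checked against the PCO documentation:

    ```python
    # Rebuild the frame from an interleaved top/bottom line stream, as in
    # 'Mode A: Top-Center/Bottom-Center' (interleaving order is an assumption).

    def resort_mode_a(lines):
        top = lines[0::2]             # zone read from the top edge downwards
        bottom = lines[1::2]          # zone read from the bottom edge upwards
        return top + bottom[::-1]     # bottom zone must be reversed

    # Example with 6 line indices arriving in stream order 0, 5, 1, 4, 2, 3:
    print(resort_mode_a([0, 5, 1, 4, 2, 3]))   # [0, 1, 2, 3, 4, 5]
    ```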


    The 12 bit values contain 16 bit values, packed by the following compression scheme.

    pasted-from-clipboard.png

    To unpack, we simply need to do the reverse operation:

    pasted-from-clipboard.png
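    The 2-pixels-from-3-bytes step can be sketched as follows. The bit order here (first byte holds the high 8 bits of pixel 0, the middle byte is shared) is an assumption; the authoritative layout is the PCO format definition shown above:

    ```python
    # Unpack a byte stream holding 12 bit pixels, 2 pixels per 3 bytes.
    # Byte/bit order is an assumption and must be verified against PCO's docs.

    def unpack_12bit(data):
        pixels = []
        for i in range(0, len(data) - 2, 3):
            b0, b1, b2 = data[i], data[i + 1], data[i + 2]
            pixels.append((b0 << 4) | (b1 >> 4))        # pixel 0: 12 bits
            pixels.append(((b1 & 0x0F) << 8) | b2)      # pixel 1: 12 bits
        return pixels

    print(unpack_12bit(bytes([0xAB, 0xCD, 0xEF])))      # [2748, 3567] = [0xABC, 0xDEF]
    ```

    The subsequent 12 to 16 bit expansion would then apply the inverse of PCO's compression curve to each unpacked value.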


    PCO edge 5.5 in 12 bit rolling shutter mode.va



    Johannes

    Hi Theo


    The ImageBuffer operator is a line buffer. The operator will not store the complete image before readout, only one line, or more if it is blocked at the output. So you can use an image size which is larger than the RAM assigned to one of these operators.

    Basically, unless there is a reason to store the full image, it is better not to, in order to reduce latency.

    See file:///Q:/Silicon%20Software/VA/Releases/VisualApplets_3.2/Docu/en/manuals/content/library.Memory.html to find out which memory operator has a line or frame delay.


    The mE5-VCX-QP has a RAM data width of 512 bit. You should use as much of the data width as possible to speed up the DRAM in the shared-memory platform and to avoid wasting memory. As you are using 24 bit per pixel, a parallelism of 21 is good in your case. A SelectROI following the parallelism reduction will remove the dummy pixels.
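    The arithmetic behind the parallelism of 21:

    ```python
    # 21 is the largest pixel count whose total width still fits into
    # a 512 bit RAM word at 24 bit per pixel.
    ram_width = 512
    bits_per_pixel = 24

    parallelism = ram_width // bits_per_pixel   # 21
    used = parallelism * bits_per_pixel         # 504 of 512 bits used

    print(parallelism, used)                    # 21 504
    ```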


    Johannes