Posts by Johannes Trein

The blob analysis operators determine features of objects in binary images, such as the area, center of gravity, contour length, and the coordinates of the minimal paraxial rectangle that fits over the objects in the image. These coordinates are also called the bounding box. As you can see, the operator outputs features, not images. To extract an ROI from the original image that fits exactly over an object, the bounding box coordinates need to be applied in a second step using operator ImageBufferMultiRoI.
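To make the listed features concrete, here is a minimal host-side Python sketch (an illustration, not the FPGA operator) that labels 4-connected blobs in a binary image and derives the area, center of gravity, and bounding box of each blob:

```python
from collections import deque

def blob_features(img):
    """Label 4-connected blobs in a binary image (list of 0/1 rows) and
    return per-blob features: area, center of gravity, bounding box."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                # flood-fill one blob, accumulating its features
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                area, sum_x, sum_y = 0, 0, 0
                x0, y0, x1, y1 = sx, sy, sx, sy
                while q:
                    y, x = q.popleft()
                    area += 1
                    sum_x += x
                    sum_y += y
                    x0, x1 = min(x0, x), max(x1, x)
                    y0, y1 = min(y0, y), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                blobs.append({
                    "area": area,
                    "cog": (sum_x / area, sum_y / area),
                    "bbox": (x0, y0, x1, y1),  # minimal paraxial rectangle
                })
    return blobs
```

The `bbox` tuples are exactly the coordinates you would feed into the second, ROI-cutting step.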


For area scan images and line scan images with dedicated image triggers this is easy to implement. However, for line scan applications acquiring images of infinite height and arbitrary object positions it is more difficult, as an object could lie at the cutting position between two images.


This example shows how to extract blob ROIs from the data of line scan cameras. To overcome the limitation explained above, we use two overlapping sliding windows as the input of the Blob_Analysis_2D operator.
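The overlap and deduplication idea can be modeled on the host in one dimension (a deliberate simplification, not the actual two-window design): the stream is cut into overlapping windows, and each window only reports blobs that genuinely start in its exclusive region and genuinely end inside the window, so a blob sitting on a cutting position is reported exactly once:

```python
def detect_runs(data, height, overlap):
    """1-D model of the overlapping-sliding-window trick.

    The stream `data` (0/1 samples) is cut into windows of `height` samples
    that overlap by `overlap` samples.  Each window reports only runs of 1s
    that (a) truly start inside its exclusive (non-overlapped) region and
    (b) truly end inside the window, so every run is reported exactly once.
    This requires `overlap` to be at least as large as the longest run.
    """
    step = height - overlap  # window advance per iteration
    n = len(data)
    found = []
    start = 0
    while start < n:
        end = min(start + height, n)
        i = start
        while i < end:
            if data[i] and (i == 0 or not data[i - 1]):  # true run start
                j = i
                while j + 1 < end and data[j + 1]:
                    j += 1
                complete = (j + 1 == n) or not data[j + 1]  # true run end
                if complete and start <= i < start + step:  # this window owns it
                    found.append((i, j))
                i = j + 1
            else:
                i += 1
        start += step
    return found
```

A run crossing a window boundary is skipped by the earlier window (it does not end there) and picked up complete by a later one, mirroring how an object at the cutting position is caught by the overlapping window.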


The example is available in two versions. The simplified version does not handle the dummy pixels caused by the parallelism granularity and does not append meta information about the ROI size and sequence index; this is done in the advanced version. For beginners, the simplified version is easier to understand before looking at the advanced version.


The designs can be fully simulated with the attached image.

    pasted-from-clipboard.png

Operator CoefficientBuffer does not use the maximum available DRAM bandwidth and memory size of the frame grabber platform in every configuration. I made a list of configurations and measured the corresponding bandwidth in hardware:


    Grabber         Configuration     Bandwidth     Max. Usable Size
    mE5-MA-VCL      Grabber Max        6400 MB/s    512 MiB
    mE5-MA-VCL      1 Link, Par 1       999 MB/s*   128 MiB
    mE5-MA-VCL      1 Link, Par 2      1524 MB/s    128 MiB
    mE5-MA-VCL      1 Link, Par 4      1524 MB/s    128 MiB
    mE5-MA-VCL      1 Link, Par 8      1524 MB/s    128 MiB
    mE5-MA-VCL      2 Link, Par 1      1999 MB/s*   256 MiB
    mE5-MA-VCL      2 Link, Par 2      3048 MB/s    256 MiB
    mE5-MA-VCL      2 Link, Par 4      3048 MB/s    256 MiB
    mE5-MA-VCL      2 Link, Par 8      3048 MB/s    256 MiB
    mE5-MA-VCL      4 Link, Par 1      3999 MB/s*   512 MiB
    mE5-MA-VCL      4 Link, Par 2      6097 MB/s    512 MiB
    mE5-MA-VCL      4 Link, Par 4      6097 MB/s    512 MiB
    mE5-MA-VCL      4 Link, Par 8      6097 MB/s    512 MiB
    mE5-MA-VCX-QP   Grabber Max       12800 MB/s    512 MiB
    mE5-MA-VCX-QP   1 Link, Par 1       999 MB/s*    64 MiB
    mE5-MA-VCX-QP   1 Link, Par 2      1525 MB/s     64 MiB
    mE5-MA-VCX-QP   1 Link, Par 4      1525 MB/s     64 MiB
    mE5-MA-VCX-QP   1 Link, Par 8      1525 MB/s     64 MiB
    mE5-MA-VCX-QP   4 Link, Par 1      3999 MB/s*   256 MiB
    mE5-MA-VCX-QP   4 Link, Par 2      6101 MB/s    256 MiB
    mE5-MA-VCX-QP   4 Link, Par 4      6101 MB/s    256 MiB
    mE5-MA-VCX-QP   4 Link, Par 8      6101 MB/s    256 MiB
    mE5-MA-VCX-QP   8 Link, Par 1      7999 MB/s*   512 MiB
    mE5-MA-VCX-QP   8 Link, Par 2     12201 MB/s    512 MiB
    mE5-MA-VCX-QP   8 Link, Par 4     12201 MB/s    512 MiB
    mE5-MA-VCX-QP   8 Link, Par 8     12201 MB/s    512 MiB


    *) Limited by Link configuration, not by DRAM

    Green = possible maximum obtained


    The listed size is valid for one operator. If you combine multiple operators, you can use larger total memory sizes.

    The bandwidth represents the speed of one operator. The operator always occupies the maximum bandwidth of the hardware. So if its bandwidth is not equal to the maximum of the platform, the operator "wastes" the difference: other DRAM operators cannot use the unused bandwidth.

    There is one exception: If the link limits the bandwidth there is some bandwidth left for other operators.

    In most cases you need only a single operator running at maximum speed. As you can see from the table above, the bandwidth is limited with a single link, so you need to use multiple links and combine the data into a single link. The coefficient data needs to be distributed over multiple files in this case. Depending on the way you combine the links (operators MergePixel, MergeParallel, InsertLine), the memory layout will differ.


    The following screenshot shows a very simple link combination.


    pasted-from-clipboard.png


    Check out the VisualApplets documentation for memory configurations of the specific grabbers. LINK

    Hi

    If you want to implement large dynamic filters, you need to separate them in x- and y-direction and implement the dynamic coefficients using LUT or Mult operators.

    However, if you want to implement a mean or average filter, you can follow a different approach. Instead of using a kernel and calculating the sum for each pixel, you can implement a rolling average for the columns and rows. To do this, you need to compare the image with a shifted copy of itself. We simply use CreateImage and an insertion for the black edges.
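As an illustration of the rolling-average idea, here is a numpy sketch (a host-side model, not the VisualApplets design itself) that builds a k x k box mean from running sums, i.e. from the difference between the image's cumulative sum and a copy of it shifted by k; the zero padding plays the role of the black edges inserted with CreateImage:

```python
import numpy as np

def box_mean(img, k):
    """Separable k x k box (mean) filter built from running sums.

    Along each axis the window sum is obtained by subtracting a copy of the
    cumulative sum shifted by k, i.e. the image is compared with a shifted
    copy of itself.  Edges are zero-padded, output has the input size, and
    position (y, x) holds the mean of the k x k window ending there.
    """
    def running_sum_1d(a, axis):
        c = np.cumsum(a, axis=axis, dtype=np.int64)
        shifted = np.roll(c, k, axis=axis)
        # zero the wrapped-around part so the first k sums are plain prefix sums
        idx = [slice(None)] * a.ndim
        idx[axis] = slice(0, k)
        shifted[tuple(idx)] = 0
        return c - shifted  # sum of the last k samples at each position

    s = running_sum_1d(np.asarray(img, dtype=np.int64), axis=1)  # rows
    s = running_sum_1d(s, axis=0)                                # columns
    return s / (k * k)
```

The key property, as in the design, is that the cost per pixel is constant regardless of the filter size k, because only one add and one subtract are needed per axis.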


    Check the attached design: forum.silicon.software/index.php?attachment/14/

    There are many comments and explanations within. The design is kept simple. For special cases modifications and extensions might be required.

    pasted-from-clipboard.png

    What is a flat field correction / shading correction?

    • Subtract fixed pattern noise from camera images (FPN correction)
    • Correct the photo response non-uniformity (PRNU) of the sensor
    • Correct the shading inhomogeneity due to lens and lighting
    • An individual correction value is defined for every pixel of the sensor image.

    Formula:

    I*(x,y) = (I(x,y) – Offset(x,y)) · Gain(x,y)

    • I = incoming camera image
    • I* = output image
    • Offset = FPN correction value
    • Gain = PRNU or shading correction value

    pasted-from-clipboard.png


    The flat field correction uses fixed point arithmetic:

    • Calculation accuracy needs to be sufficient for the project requirements
    • Fixed point correction values are used

    8 Bit camera example:

    pasted-from-clipboard.png

    --> We define 8 bit for the offset and 8 bit for the gain correction value --> 2 bytes of correction values are required for each sensor pixel
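As a host-side model of this 8-bit case, the following Python sketch applies the correction in fixed point. The gain layout used here (unsigned 8 bit with 6 fractional bits, i.e. a gain range of [0, 4)) is an assumption for illustration only, not necessarily the format chosen in the design:

```python
def ffc_8bit(pixel, offset, gain_q, frac_bits=6):
    """Apply I* = (I - Offset) * Gain in 8-bit fixed-point arithmetic.

    Assumed (hypothetical) layout: offset is an unsigned 8-bit value; gain
    is an unsigned 8-bit value with `frac_bits` fractional bits, so
    gain_q = round(gain * 2**frac_bits).  The result saturates to 8 bit.
    """
    diff = max(pixel - offset, 0)             # clamp negative FPN result to 0
    corrected = (diff * gain_q) >> frac_bits  # fixed-point multiply
    return min(corrected, 255)                # saturate to the 8-bit range
```

For example, with gain 1.5 the stored coefficient is gain_q = 96, and a pixel of 100 with offset 20 corrects to (100 - 20) * 96 / 64 = 120.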


    12 Bit camera example:

    pasted-from-clipboard.png

    --> We define 16 bit for the offset and 16 bit for the gain correction value --> 4 bytes of correction values are required for each sensor pixel

    VisualApplets Implementation

    pasted-from-clipboard.png


    Performance

    • The FFC implementation can be scaled to any desired performance
    • --> Use a sufficient parallelism. For example, parallelism 32 on microEnable 5 frame grabbers results in 2000 MPixel/s
    • --> Use a fast DRAM to store the correction values.
      • See the target device resources to obtain the DRAM architecture of your frame grabber LINK
      • Parameterize the CoefficientBuffer for high-speed implementations.
    • Example ironman: input 2000 MPixel/s, 2 bytes of correction values per pixel --> 4000 MB/s DRAM bandwidth requirement --> 2 DRAMs
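The ironman arithmetic can be written out as a small helper. The per-DRAM bandwidth of 2000 MB/s is inferred from the example above (4000 MB/s needing 2 DRAMs), so treat it as an assumption and check it against your platform:

```python
import math

def drams_needed(pixel_rate_mpix, bytes_per_pixel, dram_bw_mb=2000):
    """Return the DRAM bandwidth (MB/s) required for streaming the
    correction values and the number of DRAM banks needed to sustain it.
    The default per-bank bandwidth of 2000 MB/s is an assumption taken
    from the ironman example."""
    required_mb = pixel_rate_mpix * bytes_per_pixel
    return required_mb, math.ceil(required_mb / dram_bw_mb)

# ironman example: 2000 MPixel/s input, 2 bytes of coefficients per pixel
bw, n = drams_needed(2000, 2)  # -> 4000 MB/s, 2 DRAMs
```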

    pasted-from-clipboard.png
    Calculation of the Correction Values

    • The calculation of the correction values should be done on the host PC in a software application (C++, Matlab, Halcon, …)
    • For FPN correction values
      • Acquire a dark image i.e. cover the lens
      • Offset(x,y) = DarkImage(x,y)
    • For PRNU or shading correction values
      • Acquire a grayscale image. Pixel values should not be in saturation but as bright as possible.
      • Define a reference gray value that white pixels are corrected to.
      • Gain(x,y) = ReferenceValue / (GrayImage(x,y) – DarkImage(x,y))
    • Use a Silicon Software AcquisitionApplet to record the RAW dark and gray camera images. See LINK
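The two formulas above translate directly into a few lines of numpy. This is a sketch of the host-side calculation; the reference value of 200.0 is just a placeholder, and the clamping of the denominator is a simple guard added here against dead pixels:

```python
import numpy as np

def ffc_coefficients(dark, gray, reference=200.0):
    """Compute per-pixel FFC coefficients from a dark and a gray image,
    following the formulas above:

        Offset(x,y) = DarkImage(x,y)
        Gain(x,y)   = ReferenceValue / (GrayImage(x,y) - DarkImage(x,y))
    """
    dark = np.asarray(dark, dtype=np.float64)
    gray = np.asarray(gray, dtype=np.float64)
    offset = dark
    denom = np.maximum(gray - dark, 1.0)  # guard against division by zero
    gain = reference / denom
    return offset, gain
```

The resulting offset and gain arrays would then be quantized to the fixed-point format chosen above before being loaded into the CoefficientBuffer.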

    pasted-from-clipboard.png


    The attached VisualApplets design “SimulationOnly_MakeFFCCoefficients.va” is a simulation-only design. It can be used to load the dark and gray images and calculate the offset and gain correction values, which can then be loaded into operator CoefficientBuffer in the FPGA implementation design “Shading2D_VCX-QP_highSpeed.va”. --> No software program is required to calculate the offset and gain if you use the VisualApplets simulation to get the correction values. Otherwise you can use any software like Matlab, Halcon, C++ etc. to get the values.

    pasted-from-clipboard.png


    Next you can apply the generated correction values.

    pasted-from-clipboard.png

    Results

    The following two images show the camera image before and after FFC correction.

    pasted-from-clipboard.png

    Attachments

    The attached ZIP file contains VA Design files and simulation images.

    VisualApplets_FlatFieldCorrection.zip