Posts by Johannes Trein

    Hello cn


you need to update VisualApplets instead of the runtime. As mentioned, VisualApplets version 3.2 will generate correct SDK examples.


    However, of course you can manually correct the generated source code. Here is one example:


    Wrong:

    C
    rc = Fg_getParameterWithType(fg, Device1_Process0_AppletProperties_VisualAppletsVersion_Id, Device1_Process0_AppletProperties_VisualAppletsVersion, 0);

    Corrected:

    C
    rc = Fg_getParameterWithType(fg, Device1_Process0_AppletProperties_VisualAppletsVersion_Id, Device1_Process0_AppletProperties_VisualAppletsVersion, 0, FG_PARAM_TYPE_CHAR_PTR);


    Johannes

    Hi cn


we recently updated the SDK generator with VisualApplets version 3.1.2. See the Release Notes:

    Quote


    SDK code generation has been updated so that generated code compiles with recent versions of the runtime API. (8841)

    There was an error in

    Code
    fgrab_prototyp.h

for reading and writing string (char[]) parameters. The runtime was therefore changed some time ago, and the SDK generator in VisualApplets had to be adapted accordingly.
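For illustration, reading such a string parameter with the current runtime could look like the following sketch (the buffer size and error handling are my own assumptions; the parameter ID is taken from the generated example above):

C
/* Sketch only (buffer size and error handling are assumptions): with the
   changed runtime, string (char[]) parameters are read via
   Fg_getParameterWithType() using the explicit type FG_PARAM_TYPE_CHAR_PTR.
   Needs <stdio.h> for printf(). */
char vaVersion[256] = { 0 };
int rc = Fg_getParameterWithType(fg, Device1_Process0_AppletProperties_VisualAppletsVersion_Id, vaVersion, 0 /* DMA index */, FG_PARAM_TYPE_CHAR_PTR);
if (rc != FG_OK)
    printf("reading the parameter failed: %s\n", Fg_getLastErrorDescription(fg));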


    Johannes

    Hello Cam


    we have a VisualApplets example for shading correction which also uses an offset correction. See LINK

    This example and the corrections we usually do are based on the idea of saving the dark noise as an image file to the PC and uploading it back to a correction value buffer on the frame grabber.
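Just to illustrate the principle, the correction itself is a pixel-wise operation against the stored reference image; a minimal host-side sketch of an offset correction, assuming 8-bit pixels, would be:

C
/* Minimal sketch of the offset (dark-noise) correction idea, assuming 8-bit
   pixels: subtract the stored dark/reference value pixel by pixel and clamp
   the result at zero. */
static unsigned char offset_correct(unsigned char in, unsigned char dark)
{
    return (unsigned char)((in > dark) ? (in - dark) : 0);
}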


If I understand your explanation correctly, you want to directly use a camera image, e.g. the first image after acquisition start, and store it in a buffer on the frame grabber. I made a simple VisualApplets example for this. You can select a camera image for "re-use" in a loop.

Open the design and start the simulation for e.g. 10 images. The built-in image simulator will generate a first image with pixel values of 100. After this a rolling pattern is generated. The design will use this first image as the reference. Note that you need to load any 1024x1024 image into Sim Source "Any dummy image as PixelToImage has no simulation", as PixelToImage cannot be simulated.




The design will not apply the correction in a given area, e.g. the first four pixels. The implementation is quite simple.



    See VA file attached.


    Johannes

It seems that I misunderstood the use of LoadCoefficients. It must be set to 0 and then to 1 to trigger the loading. The documentation is not clear on this point.

    Hi Pierre


which version of microDisplay are you using? In fact you simply need to rewrite the value 1 to LoadCoefficients. My suspicion is that the GUI of microDisplayX (the new microDisplay of runtime 5.7) does not rewrite the parameter if the value did not change.
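If you want to trigger the load from your own SDK code instead of the GUI, a minimal sketch could look like this (the generated parameter ID name is hypothetical; use the one generated for your design):

C
/* Minimal sketch; the parameter ID Device1_Process0_Shading_LoadCoefficients_Id
   is hypothetical. Writing 0 and then 1 guarantees the value change that
   triggers the coefficient load. */
int off = 0, on = 1;
Fg_setParameter(fg, Device1_Process0_Shading_LoadCoefficients_Id, &off, 0 /* DMA index */);
Fg_setParameter(fg, Device1_Process0_Shading_LoadCoefficients_Id, &on,  0 /* DMA index */);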


    Johannes

    Hi all

I made a quick sample of a VA design to downscale the input image in the vertical Y-direction by a fractional factor. For example, an input image of 1024 lines and a downscale factor of 1.5 will result in an image of 683 lines.


The attached design uses a parameter translation so that you can directly enter the fractional downscale value in the H-Box.



The implementation uses a kernel to access the current and the previous line simultaneously. Using the remainder of the division between the Y-coordinate and the downscale factor, the applet decides whether a line shall be deleted or kept, and it uses the remainder to interpolate between the two successive lines.
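To illustrate the rule, here is a host-side model (a sketch under the assumption of single-component float lines, not the actual VA kernel):

C
/* Host-side model of the line selection/interpolation rule (a sketch, not the
   VA kernel). For each input line y the remainder of y / factor decides
   whether an output line is emitted; the remainder is also the weight for
   blending the previous and the current line. Needs <math.h> for fmodf(). */
static int downscale_y(const float *in, float *out,
                       int width, int height, float factor /* 1 < factor <= height */)
{
    int o = 0;
    for (int y = 0; y < height; ++y) {
        float r = fmodf((float)y, factor);          /* remainder of y / factor */
        if (r < 1.0f) {                             /* keep one line per factor interval */
            for (int x = 0; x < width; ++x)
                out[o * width + x] = (y == 0)
                    ? in[x]                                    /* first line: plain copy */
                    : (1.0f - r) * in[(y - 1) * width + x]     /* blend previous line ... */
                      + r * in[y * width + x];                 /* ... with current line   */
            ++o;
        }
    }
    return o;   /* e.g. 1024 input lines with factor 1.5 give 683 output lines */
}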


Note that the DownscaleFactor must be between 1 and the image height. A factor of less than 1, i.e. an enlargement of the image, is not possible.



    See attached.


    Johannes

    Hi Jayasuriya


    what type of shrinking do you need?


    - Binning?

- Line dropping / decimation?

    - Downsampling by a constant integer value and calculation of the mean value of the lines? (Operator SampleDn)

    - Downsampling by a fractional number and bilinear interpolation?

    Johannes

    Hello Jayasuriya


    your implementation is correct but there are other options as well which might be a little more efficient.


    1. Use operator ColorTransform


This operator can be used with fractional numbers; internally they are converted to fixed-point values. As you only need one component, set the other matrix coefficients to 0.


2. Use RGB2YUV. This operator actually performs an RGB to YCbCr conversion, which is what you need.


3. Same as your implementation, but multiply with a power-of-two value instead of 1000. This way you can use a right shift instead of a DIV operator; DIV requires many resources.
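To make option 3 concrete, here is a small sketch (assuming 8-bit RGB input; the weights are one possible choice) of the power-of-two variant:

C
/* Sketch of option 3, assuming 8-bit RGB input. With weights that sum to a
   power of two (77 + 150 + 29 = 256) the division turns into a right shift:
   Y ~ (77*R + 150*G + 29*B) >> 8, instead of dividing by 1000. */
static unsigned char rgb_to_gray(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)((77u * r + 150u * g + 29u * b) >> 8);
}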


    See the attached design.


Moreover, it is important to set all dynamic parameters to static to save FPGA resources. Furthermore, on mE4 a Scale operator can be replaced by HWMULT to use the FPGA's hard-coded multipliers. On mE5 use Mult; the program will automatically decide between LUTs and DSPs for the multiplication.


    Johannes

    Replacing an older microEnable III or microEnable IV with a microEnable 5 requires some attention. This thread lists the differences between the frame grabber generations to give you a guideline for the replacement.

For discussions about porting VisualApplets implementations to the new generation, please search or open a thread in the forum VisualApplets Functionality and Documentation.

    Choose a replacement frame grabber

Depending on the interface I suggest the following replacement options:

• Camera Link: microEnable 5 marathon ACL
• GigE Vision: no discontinuation notice, the product is still available
• LVDS: no replacement available

    Hardware Changes

    In the following I've listed the major changes in hardware

• Cable: Camera Link SDR: The microEnable 5 ACL uses Camera Link SDR connectors instead of MDR, so you might need to replace the cable as well.
• Cable length: In general the microEnable 5 series allows longer cable lengths than the older frame grabber series. However, ensure that the new SDR cable can be run at the required pixel clock.
• PCIe: The microEnable 5 uses PCI Express x4 Gen 2. If you want to use the frame grabber in an x1 slot, ensure that it can physically accept x4 cards. The frame grabber can be used in Gen 1 slots at slower speeds without any problems.
• Power and Temperature: The microEnable 5 uses passive cooling. The typical power dissipation is 12 W. The operating temperature has been increased to 50°C; with an airflow of 100 linear feet per minute you can even go up to 60°C. Compare the environmental conditions with those of the board you are using. Check the hardware documentation for a detailed listing: LINK
• PoCL: The microEnable 5 ACL supports Power over Camera Link (PoCL). You need to enable this feature in microDiagnostics if you want to use it. The frame grabber does not need an extra power connector; it uses the PCIe power for PoCL.
• CLIO: A replacement for CLIO is not available.
• GPIOs: The marathon frame grabbers are extended by a front GPIO connector, so you can replace the additional GPIO card with the front GPIOs. However, you can still use all microEnable 4 GPIO cards (trigger boards) as well as the new Opto Trigger 5 with the extension port of the marathon frame grabbers.

    Check the hardware documentation for more information: LINK

    Runtime Software

• Version: You need Silicon Software runtime version 5. We currently recommend the latest release (February 2019: RT 5.6.1).
• Operating System: The runtime is currently available for Windows 7 or higher and for Linux. Check the supported operating systems in LINK.
  Windows XP, Windows Vista and QNX are not supported anymore.
• Applet load: On microEnable 5 you cannot use the functions Fg_Init or Fg_InitEx to load an applet onto the frame grabber. Before using these functions you need to flash the required applets to the frame grabber using the program microDiagnostics or from the command line; see the SDK sketch after this list. New frame grabbers usually are not pre-flashed with the applets of a specific runtime.
• microDisplay Features: The wizards "Lookup Table Parameters", "Shading Parameters" and "White Balancing" are disabled for microEnable 5.
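As a minimal SDK sketch for the applet load behavior described above (the applet name is only an example; yours will differ):

C
/* Minimal sketch; the applet name "Acq_SingleBaseAreaGray" is only an example.
   On microEnable 5 the applet must already have been flashed with
   microDiagnostics; Fg_Init() then selects the flashed applet, it does not
   load it onto the board. */
Fg_Struct *fg = Fg_Init("Acq_SingleBaseAreaGray", 0 /* board index */);
if (fg == NULL) {
    /* initialization fails e.g. if the requested applet is not flashed */
    return -1;
}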

    AcquisitionApplets


The available AcquisitionApplets cover most of the functions of the old microEnable III and microEnable IV applets. Features which have been changed or discontinued are:

    • Shading Correction / Flat Field Correction is not available
• The KneeLUT is replaced by full LUTs, which allow higher precision but require a different configuration if you want to set the elements individually.
    • Spatial Correction is not available
    • The Bayer De-Mosaicing filters have been replaced by higher quality implementations.
• The camera simulator now uses pixels per second instead of frame-grabber-internal taps per second for its speed setting.
    • The ImageTag is not available. Overflows can be detected with the callback event system.

To find the right replacement AcquisitionApplet, check the documentation. For Camera Link use the microEnable 5 ACL documentation: LINK


Please contact support@silicon-software.de or sales@silicon-software.de for direct contact.


    Johannes

    Now that I have the camera I could do further tests.


    In fact the camera sends data in the following format: RGBG...

If you set an AOI of 2048 x 1024 the camera will transfer 2048 x 2048 to the frame grabber. So in contrast to the GenICam PFNC, the two stages are separated into two lines. The simple solution is to use an AppendLine operator in VisualApplets.
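For illustration, here is a small host-side model of what AppendLine does in this design (a sketch of the idea, not the operator implementation):

C
/* Host-side model of the AppendLine idea (a sketch, not the VA operator):
   two consecutive transferred lines (the two sensor stages) are concatenated
   into one output line of double width, so a 2048 x 2048 transfer becomes a
   4096 x 1024 image. Needs <string.h> for memcpy(). */
static void append_lines(const unsigned char *in, unsigned char *out,
                         int width, int height_in /* = 2 x requested height */)
{
    for (int y = 0; y < height_in / 2; ++y) {
        memcpy(out + (size_t)y * 2 * width,         in + (size_t)(2 * y)     * width, (size_t)width);
        memcpy(out + (size_t)y * 2 * width + width, in + (size_t)(2 * y + 1) * width, (size_t)width);
    }
}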


The test patterns of the camera do not seem to output the RGBG data, so I used the sensor image and set a digital gain for red, green and blue to allocate the colors to the correct positions. By acquiring a RAW image I could verify the algorithm with the VisualApplets simulation.



The HAP runs fine in microDisplay. No parameters need to be changed if you use an ROI of 2048 x 1024.

TestBlue.PNG TestGreen.PNG


    Attached you will find the updated VA file and the RAW image file for simulation.


    Johannes

    Hello Jayasuriya


first of all I would like to point you to this post: How to acquire images by turns using multi cameras? Maybe you will find some answers there. Among other information, you can read the following:

... as you are using a GigE Vision camera, the camera explicitly needs to be started. microDisplay will do that for you when you click the Play button. However, it will start camera 0 with DMA 0, camera 1 with DMA 1, etc. As you only have a single DMA, camera 1 (you call it camera 2) will not be started by microDisplay.

Solution: Open the GenICam Explorer and connect to the second camera. Search for the parameter StartAcquisition and click Execute. This will start the second camera, and you can then acquire the images with microDisplay.


You won't have that type of problem when you integrate the frame grabber with the API using the SDK, as you need code to start the cameras anyway. If you are using a 3rd-party interface like Halcon you need to check this.


In your case your design will work if you change the operator SourceSelector only between two frames. So you need to stop all cameras, let the pipeline run empty, change the SourceSelector source and restart, let's say, two out of three cameras.


If a camera is disabled asynchronously to your trigger, for example because of a failure, you won't be able to switch to your fallback solution, especially if this happens in the middle of a frame transfer. The operators can only switch at the end of a frame.


    Maybe you can further explain the expected camera issues so that I can think about a workaround.


    Johannes

    Hi Theo

Quote

So my main question is: why is the Bit Width not a part of the table above? Would you still think I should use more than one link?

    The table already assumes a bit width of 64 to get to those values. I need to add more details to the post.


    I know the operator does not perform very well. Thank you for the feedback. I will forward it.


Tip: It is difficult to prepare the CoefficientBuffer input files if you are using multiple links. If you don't want to write software just for testing, you can use the VisualApplets simulation to generate the coefficient files.


    An alternative to CoefficientBuffer is RamLUT.


    Johannes

    Hi Pierre


    if you use the VisualApplets simulation the program will display the correct image width. So please check the simulation output image width and verify that you are using the same setting for the display size.


    Now that you have shown a screenshot of the VA design I can see that you are using a parallelism of 20.


    From your information I can guess the following parameters:


- Camera ROI width: 1184

-> The applet's camera operator will extend this to 1200

- Defined ROI in both ImageBuffer operators: 1200 (as 1184 is not possible)

- SelectROI XLength: 1184

-> SelectROI will cut out 1184 pixels. As the output parallelism is 20, the lines are extended with dummy pixels to 1200

-> DMA buffer image width = 1200


For 1152 you would need to set the ImageBuffer to 1160, which will then also be the DMA buffer image width.
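The widths above simply follow from rounding up to the next multiple of the parallelism; as a small sketch:

C
/* Sketch of the padding rule: line widths are rounded up to the next multiple
   of the parallelism (20 in this design), e.g. round_up(1184, 20) == 1200 and
   round_up(1152, 20) == 1160. */
static unsigned round_up(unsigned width, unsigned parallelism)
{
    return ((width + parallelism - 1) / parallelism) * parallelism;
}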


    Maybe you use different settings in ImageBuffer?


    Why are you using SelectROI anyway? It is only reasonable to use it if you put a PARALLELdn operator in front of it.


Neither of the screenshots shows a width that is a multiple of 20.


I can only assist further if you attach the VA design and the images, as well as full documentation of all settings.


    Johannes

    Hello community


Today I have a question for you. I want to use a Dalsa Linea Color GigE in RAW mode. This is a bilinear line scan camera with the following pattern:

    Sensor layout of a bilinear line scan camera with color pattern Red/BlueFollowedByBlue/Red_GreenFollowedByGreen


    I need to set the camera to the pixel format BiColorRGBG8 but I don't know the exact output format of the camera.


    Here is the camera manual: https://info.teledynedalsa.com…eries%202k%20and%204K.pdf


    My questions are:

1. Is the camera sending an image of Width x Height, and does the image composing take place in the camera?

2. What is the pixel format of the camera? The RB line first, then the GG line, or pixel 0 RGBG, pixel 1 RGBG, etc.?

3. What is the line width? The CL version of the Dalsa P4 sends the GG line after the RB line, so the LVAL is 2 x width.


I modified the Silicon Software example http://www.siliconsoftware.de/…near%20Bayer%20RB_GG.html for the VQ4, but without an answer to the questions above I cannot finish the implementation. See attached.


    Maybe someone has an idea?


    Thanks

    Johannes

    Hi Pierre


I've read your post several times now but still don't have a definite answer. Let me explain the behavior of Select_ROI:

- If the input image width is larger than the defined offset + width, the operator will cut the ROI.

- If the input image width is smaller, the operator will not extend the width and will pass the data through to the output.

- If you select an offset or width which cannot be divided by the used parallelism, the operator has to add dummy pixels to the end of each line.
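As a small illustration (a host-side model of these three rules, not the operator implementation), the resulting output line width can be estimated like this:

C
/* Host-side model of the three Select_ROI rules above (a sketch): cut to the
   ROI, never extend a smaller input, and pad the line to the next multiple of
   the parallelism with dummy pixels. */
static int select_roi_width(int in_width, int offset, int roi_width, int parallelism)
{
    int w = in_width - offset;      /* pixels available behind the offset */
    if (w > roi_width)
        w = roi_width;              /* rule 1: cut the ROI */
    if (w < 0)
        w = 0;                      /* rule 2: never extend, just pass through what is there */
    return ((w + parallelism - 1) / parallelism) * parallelism;  /* rule 3: add dummy pixels */
}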


    You wrote: "... and also modify the FG_WIDTH ..."

In a VisualApplets applet you don't have FG_WIDTH. You can

- configure all operators changing the width, like ImageBuffer and Select_ROI

- define the DMA output image size, which is used for PC buffer allocation and the display window size.

    So please let me know what you mean by FG_WIDTH.


On a DMA transfer the line length information is not included, so the PC cannot know whether the lines have a width of 1184 or 1152. It simply needs to match the settings.

If you have a mismatch, it looks like your attached image: the display size does not correspond to the transfer size. Some pixels are shifted to the next line because there are too many pixels, or shifted to the previous line because some pixels are missing.


Can you see the problem in the VisualApplets simulation?


Could you add your VA design, or an extract of it, so I can have a look?


    Johannes