Posts by kevinh

    Hello Michael,

    Thank you - that would be much appreciated. If you are able to create a minimal version of your applet that still triggers the same error, we can investigate it further.

    From your screenshot I see that you have two cameras and four DMAs. I assume that two DMAs transport the original images while the other two carry a downsampled output, correct?


    Best Regards,
    Kevin

    Hello MiNe,

    I am sorry to hear that you encountered this issue. Have you already contacted Basler Support about this case? That way we avoid two threads tackling the same issue. This forum is focused on VisualApplets design questions; while the Runtime is part of the same environment, I cannot give you a direct answer on this.

    Additionally, did you create the applet yourself, or was it provided to you? If you can provide the .hap file, we can try to reproduce the bug on our side.



    I will ask the appropriate engineers and report back to you once I know more.


    Best Regards,

    Kevin

    Hi Johan,

    Are you working with a fixed frame rate? If so, the time between frames is constant and you can use that to derive the time in ms.
    Otherwise you must use the signal library to measure the elapsed ms. Note that this gives you not the time of the frame but the exact ms time of the pixel. It might be easier to read the timestamp from a GenICam node and then, after retrieving the image with the C++ / Python SDK, write the time in ms onto the image.

    Nevertheless, it should be possible, though very tricky, to do this directly on the frame grabber.
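    To illustrate the fixed-frame-rate case, a minimal sketch in plain Python (host-side arithmetic, not SDK code):

```python
# At a fixed frame rate the timestamp of frame N is simple arithmetic;
# no framegrabber logic is needed (illustrative sketch, not SDK code).

def frame_time_ms(frame_index: int, fps: float) -> float:
    """Milliseconds elapsed since acquisition start for a given frame index."""
    return frame_index * 1000.0 / fps

# At 50 FPS each frame is 20 ms apart:
print(frame_time_ms(0, 50.0))   # 0.0
print(frame_time_ms(5, 50.0))   # 100.0
```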


    Best Regards,

    Kevin

    To answer your question: no, you will not need 10 LUTs; one LUT can store all 10 digits. However, one LUT can only store 2^16 pixels, which limits how large the digits may be in the image.

    So if you only want to use one LUT: 2^16 / 10 = 6553, meaning one digit may not take up more than 6553 pixels. Digits are usually not square but taller than they are wide, so I would go with a maximum of 60*100 (width*height); smaller dimensions work fine as well.
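    The sizing above as a quick sanity check in plain Python (the 60x100 digit size is the suggestion from the text):

```python
# One LUT holds 2**16 entries; storing the ten digit bitmaps 0-9 leaves
# at most 2**16 // 10 pixels per digit.

LUT_ENTRIES = 2 ** 16
DIGITS = 10

pixels_per_digit = LUT_ENTRIES // DIGITS
print(pixels_per_digit)  # 6553

# The suggested 60x100 (width x height) digit fits: 60 * 100 = 6000 <= 6553.
assert 60 * 100 <= pixels_per_digit
```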

    Best Regards,

    Kevin

    Dear Johan,

    Attached you can find an example design that writes the image numbers 0-2 into the top left corner. This is just a starting point; for larger numbers you must adapt the logic, but it showcases the general idea.

    I also attached a Python script that generates the numbers 0-2 to fill the LUT. You can find everything in the attached .zip.

    Note that this is not exactly an overlay but an overwrite. If you wish to overlay instead, you can work with color or alpha channels (for alpha channels you must use the super-pixel format).
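    For illustration, a minimal sketch of how such a LUT generator could look. This is not the attached script; the 16x16 digit size and the tiny hand-drawn 3x5 font are assumptions made for this sketch:

```python
# Build LUT contents for the digits 0-2 as 16x16 binary bitmaps.
# A tiny hand-drawn 3x5 font is scaled up with nearest-neighbour sampling;
# a real generator could render any font instead.

FONT_3X5 = {
    0: ["111", "101", "101", "101", "111"],
    1: ["010", "110", "010", "010", "111"],
    2: ["111", "001", "111", "100", "111"],
}

def digit_bitmap(d, size=16):
    """Scale the 3x5 glyph of digit d up to size x size (values 0 or 255)."""
    rows = FONT_3X5[d]
    h, w = len(rows), len(rows[0])
    bmp = []
    for y in range(size):
        src_row = rows[y * h // size]
        bmp.append([255 if src_row[x * w // size] == "1" else 0
                    for x in range(size)])
    return bmp

# Flatten digit after digit into one list, as a LUT initialisation expects:
lut = [px for d in (0, 1, 2) for row in digit_bitmap(d) for px in row]
print(len(lut))  # 3 digits * 16 * 16 = 768 entries
```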


    If you wish to have Basler implement a solution for you, feel free to contact your sales representative.


    Best Regards,

    Kevin


    Dear Johan,

    That's not what I meant. Instead of using a single pixel, use a LUT in which images of the digits are stored, e.g. at a size of 16x16, and then replace the pixels of your incoming data stream with the corresponding frame numbers according to the generated LUT addresses and frame indices.

    Best Regards,

    Kevin

    Dear Johan,


    There are two different ways to achieve this:

    Using the CPU during acquisition:

    1. Use the FramegrabberSDK and some other library such as OpenCV to embed the frame counter into the image after it was received by the host ( https://www.tutorialspoint.com…in-opencv-using-cplusplus ). The benefit is that you keep both the original image data without the counter and a copy with it. The downside is that the counter is not embedded in the image on the grabber itself, so it will not show up in MicroDisplayX.

    Using VisualApplets / FPGA:


    2. You could embed the logic into the frame grabber by using a combination of ImageNumber/ModuloCount, IF, Coordinate_X, Coordinate_Y and a LUT. In the LUT you save the pixel representations of the digits 0-9. With Coordinate_X and Coordinate_Y you get the pixel positions and can overwrite them with the content needed for the frame.


    What should be the highest value of the counter? 9, 99, 999, 9999, 99999? Depending on this you must adapt your logic to make more room and address the LUT differently.
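    As a software model of that addressing (the 16x16 digit size and the 5-digit counter are assumptions for this sketch, not values from your design):

```python
# For a pixel at (x, y) inside the counter region, pick the right decimal
# digit of the frame number and compute the matching address in the digit
# LUT -- this is what the IF/Coordinate_X/Coordinate_Y/LUT combination on
# the FPGA has to implement.

DIGIT_W = DIGIT_H = 16
NUM_DIGITS = 5  # supports counters up to 99999

def lut_address(x, y, frame_number):
    """Return the LUT address for pixel (x, y), or None outside the overlay."""
    if not (0 <= y < DIGIT_H and 0 <= x < NUM_DIGITS * DIGIT_W):
        return None
    position = x // DIGIT_W                      # which of the 5 digit slots
    digit = (frame_number // 10 ** (NUM_DIGITS - 1 - position)) % 10
    return digit * DIGIT_W * DIGIT_H + y * DIGIT_W + (x % DIGIT_W)

# Frame 42 is rendered as "00042"; pixel (0, 0) lies in the leading '0':
print(lut_address(0, 0, 42))   # 0
# Pixel (64, 0) lies in the last slot, the '2': address 2 * 256 = 512
print(lut_address(64, 0, 42))  # 512
```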

    Post-processing:

    3. MicroDisplayX also shows the frame number under the acquired images; it is not directly on the frame, but you can read it there as well. Additionally, if you acquire a sequence of images with a large enough buffer, you can save the images to disk, and each image gets a frame counter in its name. You could then write a small script in your preferred language that iterates over the images and puts the image number onto them.
    pasted-from-clipboard.png
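    A minimal sketch of that post-processing script idea. The file-name pattern is a hypothetical example, and the actual stamping routine (e.g. with OpenCV or Pillow) is only a placeholder:

```python
# Iterate over saved frames, extract the frame counter from each file name,
# and hand it to a stamping routine. The "acq_000042.tif" naming scheme is
# an assumption; adapt FRAME_RE to the names your acquisition tool produces.

import re
from pathlib import Path

FRAME_RE = re.compile(r"_(\d+)\.tif$")  # e.g. "acq_000042.tif" -> 42

def frame_numbers(folder):
    """Return (path, frame_number) pairs for saved images, sorted by number."""
    pairs = []
    for path in Path(folder).glob("*.tif"):
        m = FRAME_RE.search(path.name)
        if m:
            pairs.append((path, int(m.group(1))))
    return sorted(pairs, key=lambda p: p[1])

# for path, n in frame_numbers("captures"):
#     stamp_number_onto_image(path, n)  # hypothetical overlay routine
```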



    I hope that one of these solutions helps you achieve your desired goal.


    Best Regards,

    Kevin

    Hello everyone,

    We at Basler started an Innovation Project to offer Python programming for FPGAs on our hardware. In the background VisualApplets is used, but you can now drive it with Python.

    If you wish to know more about this topic you can check out this link: https://www.baslerweb.com/en/i…/fpga-programming-python/


    If you wish to discuss this topic, please contact me directly or use the contact link at the bottom of the page linked above.

    Best Regards,


    Kevin

    Hi,

    Sorry for the late reply.

    Please try / investigate:


    1. What is the size of the file on disk? If it is 0 KB or 1 KB, the data is lost, probably due to synchronization issues with OneDrive.
    2. VisualApplets 3.4.0 was just released; you can download it here: https://www2.baslerweb.com/en/…anguage=all;version=3.4.0 and check whether this version works.
    3. I do not know the target platform of your design. If your platform is no longer supported by VisualApplets or was installed by a platform installer, you may need to install the platform first.
    4. If none of these steps help, you may upload a file here (or send it via mail) and we will see whether we can track down the issue and hopefully retrieve your designs.


    Best Regards,

    Kevin

    Dear Bingnan,

    Sorry for the late reply. Please try the following steps:

    1. Check at which frame rate the overflow no longer occurs (best to start at 1 FPS and then increase until it overflows).
    2. Besides changing the camera parameters, you must also adapt the "XLength" and "YLength" values of the FullResBuf in your design to match the RoI of the camera; otherwise the buffer adds dummy pixels, which can cause an overflow.
    pasted-from-clipboard.png

    Best Regards,

    Kevin

    Hi Simon,

    Sadly this is not possible to do with a TCL command. However, here are two workarounds that may work:

    You can read out the TargetPath variable from VisualApplets.ini. Depending on how you installed VisualApplets, you may find VisualApplets.ini under C:\ProgramData\Basler\VisualApplets\<VERSION> or in your installation folder. Under this path, a subfolder is created for each platform, and there the .hap files are stored under the name of their .va file.
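    A minimal Python sketch of reading that variable; the section that holds TargetPath inside the .ini is an assumption, so the helper simply scans all sections:

```python
# Read the TargetPath variable from VisualApplets.ini with configparser.
# We do not assume a particular section name -- check the actual .ini on
# your machine; this helper returns the first TargetPath it finds.

from configparser import ConfigParser
from pathlib import Path

def read_target_path(ini_file):
    """Return TargetPath from the .ini as a Path, or None if absent."""
    parser = ConfigParser()
    parser.read(ini_file)
    for section in parser.sections():
        if parser.has_option(section, "TargetPath"):
            return Path(parser.get(section, "TargetPath"))
    return None
```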

    Another approach is to set a different variable, such as the UserLibDir variable, to the same location as the target path in your system settings.

    Then you could use the following syntax to obtain files:


    set designHapPath [GetAbsolutePath "%UserLibDir%/../Designs/<PlatformName>/<DesignName>.hap"]




    I hope this helps to solve your problem.



    Alternatively, you may contact me via e-mail; if you are interested, we can look at whether your task would be easier to solve with the help of PyWizion.


    Best Regards,

    Kevin


    Hello Bingnan,

    I am currently quite busy, so answers may take a while or may not cover every detail.


    To your question: "1. The trigger example seems to be receiving an external trigger signal, then triggering the camera. How can I make the output-triggering signal controllable?"

    If you open the "Hierarchical Box" you will find the following layout:

    pasted-from-clipboard.png

    In the "Select" operator you can choose the trigger mode; in your case, the setting you are looking for is "1", so that the signal from "Generator" is used as the trigger.

    If you go deeper into the "Generator" you can see the following:

    pasted-from-clipboard.png

    With the "Period" operator you can then adjust your signal. Please read the text boxes for further instructions on how to adjust the trigger signal.

    To your second question:
    "2. With the "Overflow", is that means I can use a larger template? Or it is limited by the resources the "Overflow" controls transfer bits but no effect on this?"

    The Overflow operator cuts off an image when a memory element is not capable of storing it.
    As you can see in your MicroDisplay screenshot, Overflow shows a "1". This means the memory cannot store the required data at the chosen bandwidth.


    The following screenshots show areas where your design has bottlenecks:
    pasted-from-clipboard.png

    This results in your processing pipeline only being able to handle 125 MegaPixel/s, since the SYNC operator creates dummy pixels.

    Also, after this HBox you have a ParallelDn that limits your bandwidth.

    This is all the help I can give you at the moment. Please try to remove those bottlenecks, e.g. with a ParallelUp operator before the SYNC operator.

    Best Regards


    Kevin



    Hello Bingnan,

    I looked at your design, and the first thing I noticed was the ParallelDn right after the ImageBuffer at the beginning. I do not know how fast your camera is set, but if you go down to a parallelism of 1, only one pixel per clock cycle is transferred at a rate of 125 MHz.

    If your camera has a resolution of 1280x860, you are transferring about 1.1 MegaPixel per frame. Thus you will only be able to reach a frame rate of about 113 FPS.

    I updated your design and removed the ParallelDn operator. Furthermore, I added an Overflow operator. This operator has a property called "OverflowOccured". You can watch this parameter in MicroDisplay(X); if it is "1", an overflow occurred, meaning that your buffers are not large enough or that some other bottleneck exists in your design.

    A bottleneck that is often overlooked is the DMA transfer rate, which is 1800 MB/s for the mE5 VCX QP. So the maximum you can achieve at your resolution is about 1635 Hz (1800 MB/s divided by 1280*860 Byte per frame), assuming you use the full DMA bandwidth. The PCIe interface can transport up to 128 bits in parallel, so with an 8-bit image you can use a parallelism of 16. In your design it was still 1, due to the ParallelDn, which is why only 102 FPS could be achieved.



    Regarding your question to the triggers:

    The standard acquisition applet has these functionalities built in; when you use your own custom applet, you are responsible for implementing them. You can find an example for the mE5 VCX QP in your VisualApplets installation folder:


    <VASINSTALLDIR>\Examples\Processing\Trigger\me5-MA-VCX-QP\Area


    You can copy the trigger functionality and the camera from there.



    Best Regards and a nice weekend


    Kevin