Posts by kevinh

    Hello Jesse,

    Sorry for the late reply, let me try to answer your questions.

I simulated your design with my own simulated images, which have a resolution of 4096x3072 after debayering, so the following numbers will differ on your side depending on the input image used.

If you take a look at the SimProbes JpegGrayR, JpegGrayG and JpegGrayB, you will notice that each has a size of, for example, 1,186,120 bytes. This times three (one per channel) is 3,558,360 bytes.

If you now take a look at the probe "JpegColor", you will notice a size of 1,269,096 bytes (roughly 2.8 times smaller), which means you get a much higher compression using the JPEG color encoder.

Simply put, the algorithm inside the JPEG color encoder differs vastly from the approach of using a SplitComponents operator and three separate JPEG_Encoder operators, so the outputs differ as well.

The JPEG encoder works in blocks, both the color encoder and the grayscale encoder; however, the color encoder works with the YCbCr format and some internal downscaling. If you need more information on this, please let me know and I will ask the appropriate engineers at Basler.
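As a rough illustration of why the YCbCr path compresses better, here is a small Python sketch. It assumes 4:2:0 chroma subsampling, which is common for color JPEG; the exact internal downscaling used by the operator may differ.

# Back-of-the-envelope sample count, assuming 4:2:0 chroma subsampling
# (an assumption; the operator's exact internal downscaling may differ).
width, height = 4096, 3072

# Three grayscale encoders: every channel keeps full resolution.
gray_samples = 3 * width * height

# YCbCr 4:2:0: full-resolution luma plus two quarter-resolution chroma planes.
ycbcr_samples = width * height + 2 * (width // 2) * (height // 2)

print(gray_samples / ycbcr_samples)  # 2.0 -> half the raw samples before DCT and quantization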

Also, your results may differ with images that contain more color information; you could, for example, create a chessboard with different color patterns.

Lastly, the color JPEG encoder is better optimized for resources. If you use three grayscale JPEG encoders, you will need roughly:

LUT: 25,759
FF: 27,396
BRAM: 135
Embedded ALU: 141
RAMLUT: 3,075

But the JPEG color encoder only uses:

LUT: 13,220
FF: 19,681
BRAM: 93
Embedded ALU: 55
RAMLUT: 1,630


To summarize: the two implementations take very different approaches to encoding the color information and are therefore hard to compare.

    Best Regards

    Kevin



    Hi dxs,

Sadly, I do not have any information about the RES485 operator, and there is no example for it.


To interact with the GPIs, please use the GPI or GPO operators.


Best,

Kevin

    Hi HeZhi,

Can you give some more details about your setup? E.g. which frame grabber are you using, and how much data do you need to process?

Is the data already available *before* the acquisition, e.g. some data for a flat field correction, or is it data coming directly from the camera? If you need to store large amounts of data that are available before the acquisition, please use DRAM operators such as RAMLuts or CoefficientBuffers. If you need to buffer data, please use ImageBuffers/FrameBufferRandomRead or something similar.

If this does not help, you can try to lower the frame rate on the camera, or on the frame grabber by using an ImageValve.

Furthermore, if you are able to, it would help to tell us more about your application, or even to share the design here (if that is okay for you). Then we can check whether we see a solution that won't require PC memory.


Lastly, to answer your question directly: currently you can write data from the frame grabber to the PC memory via DMA, which is the typical use case. However, accessing the PC memory from the frame grabber is not supported on newer platforms and has a really low bandwidth. If this is the only solution for your project, please contact our sales team about a custom project.


    Best Regards,

    Kevin

    Hi pangfengjiang,

I would need a few more details about that issue; however, this looks more like a support case than an actual VisualApplets issue.

But let me try to understand your problem a little better, so I have some questions:


    "Hardware: VCL acquisition card, line scan 8K camera (base mode)"

You mention a VCL card, but an acquisition card would be an ACL. I just want to make sure that it really is a VCL card, correct?

    "... I developed a function (stroboscopic function) to control LED lights and collect images on VA"

This functionality was programmed in VisualApplets?

    "I found that an acquisition card can also control the brightness and darkness of the LED without turning on VA"

Do you mean an ACL card with its standard acquisition applets?


    "The other cards cannot control the brightness of the LED light. What is the reason for this and how to deal with it."

Which other cards do you mean: other VCL cards, or other ACL cards?




Can you maybe explain in a little more detail what your desired result is?

Hi dxs,

In MicroDisplayX you have the possibility to execute "Save Current Sequence". This will save the last images.
The number of images depends on your chosen number of buffers (Tools -> Settings -> Number of Buffers).

After this, you could use some kind of third-party software or Python scripts using OpenCV ( https://stackoverflow.com/ques…e-out-of-images-in-python ) to stack them together into a video.
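For example, a minimal sketch along those lines; the folder name, file pattern, codec and frame rate are placeholders you would adapt:

import glob
import cv2

# Frames previously saved via "Save Current Sequence" (path/pattern assumed).
frames = sorted(glob.glob("sequence/*.tif"))
height, width = cv2.imread(frames[0]).shape[:2]

writer = cv2.VideoWriter(
    "out.avi",
    cv2.VideoWriter_fourcc(*"MJPG"),  # Motion-JPEG codec
    30.0,                             # frame rate: adapt to your acquisition
    (width, height),
)
for path in frames:
    writer.write(cv2.imread(path))    # expects 8-bit BGR frames
writer.release()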


While there is no example of putting the received images into an .avi file in the SDK, you can find appropriate functions here:

    https://imaworx.netlify.app/iolibrt.html#writing-video-files


Alternatively, you can also access the memory directly via the pointer returned by Fg_getImagePtrEx and then work with OpenCV or similar alternatives to write the received image data to video.

    https://stackoverflow.com/ques…webcam-output-to-avi-file



Best Regards

    Kevin

Hi pangfengjiang,

Thank you for reaching out. I will discuss this internally and will come back asap.

Please be aware that this forum is not for giving full solutions to specific problems. It is for giving pointers and helping to understand tricky situations, similar to Stack Overflow.

    We offer trainings and full implementation as a service, but as mentioned above I will discuss this internally.


    Best Regards,

    Kevin

    Hi Bingnan,

    Sorry for the late reply.


It is hard to do a remote check, but I am assuming the following: since you are using the Cyclone-1HS-3500 with a resolution of 1280x860, you should adapt your Maximum Image Width and your ImageBuffer accordingly. You can do so by clicking on the link after the camera operator. Furthermore, in the ImageBuffer operator, adapt the XLength to 1280.

I hope this helps!

    Best Regards

    Kevin

    Hi Bingnan,

In the netlist generation step you can see that the hardware resources on the FPGA are exhausted.

[screenshot: resource usage in the netlist generation step]



None of the resources should be over 100%.


Below that you will also find a list of operators and elements that require too many resources.

Additionally, you can check in the top bar:

[screenshot: FPGA resources view in the top bar]

There you will find a tabular view of the required resources that can also be sorted.

Lastly, you can right-click on each operator or H-Box and click on "FPGA Resources" to see how many resources are needed by this operator or H-Box. If it is grayed out, you need to run DRC2 first.


If we take a look at the first resource list, we can see that a lot of it is used in NCC/Sigma_R:

[screenshot: resource usage of NCC/Sigma_R]




The kernel size is 22x22, which is quite large. Would it be possible to downsample the input images, and therefore downsample the mask to 11x11?

Also, the input parallelism is 8. Combined with the kernel size, this means 22x22x8 pixels are computed in parallel. If you put a ParallelDN operator before the NCC H-Box, you can save a lot of resources.
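To make the effect of both changes concrete, here is a back-of-the-envelope Python sketch. It counts the pixel operations active in parallel as a rough proxy for resource usage; the real cost depends on the operator implementation.

# Rough proxy: number of pixel operations active in parallel.
def parallel_ops(kernel_size, parallelism):
    return kernel_size * kernel_size * parallelism

before = parallel_ops(22, 8)  # 3872 with the current design
after = parallel_ops(11, 1)   # 121 after downsampling and ParallelDN
print(before / after)         # 32.0 -> about 32x fewer parallel operations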



Lastly, there are some other things that could be done to distribute the resource usage: in some arithmetic operators you can choose the ImplementationType.

[screenshot: ImplementationType property of an arithmetic operator]

Here you can see that there is still some Embedded ALU left, so it would be helpful to not use LUTs for these operators.


By applying all of the proposed ideas, I was able to reduce the FPGA resource usage on the LUTs from the previous 766% to 105%, and below 100% on all the others.

[screenshot: resource usage after the changes]

The applet is attached; however, it won't produce the desired results, since the kernels are now 11x11 and I did not adapt any of them.

You may take this as a reference to further improve your design.

    Best Regards

    Kevin

    Hi IhShin,

    Sorry for the late reply.


    "Normal" depends on the camera you are using. If the camera provides CL_PixelClock, LVAL and FVAL then a 7 is normal. However, it really depends on the camera that you are using if these are available. If your third party camera does only provide LVAL then the "2" is normal.

    For questions regarding cameras you may also contact: https://www.baslerweb.com/en/sales-support/support-contact/

    Best Regards,

    Kevin




    Hi IhShin,

The FG_CAMSTATUS_EXTENDED parameter provides extended information on the pixel clock from the camera. The value has 8 bits, and each bit provides different information.

So, 7 would be 0000 0111, 2 would be 0000 0010, and 1 would be 0000 0001 in binary.

    The following link may help to understand what the parameters mean: https://docs.baslerweb.com/fra…tml#FG_CAMSTATUS_EXTENDED


• 0 = CameraClk, provided by the CameraLink interface. Shows if the CL pixel clock is available.
• 1 = CameraLval, provided by the CameraLink interface. Shows if CameraLink LVAL is available, representing a line being transferred into the frame grabber.
• 2 = CameraFval, provided by the CameraLink interface. Shows if CameraLink FVAL is available, representing frames being transferred into the frame grabber. Not relevant for standard line scan applets.
• 3 = Camera CC1 signal, NOT provided by this frame grabber.
• 4 = ExTrg / external trigger, NOT provided by this frame grabber.
• 5 = BufferOverflow
• 6 = BufStatus, LSB
• 7 = BufStatus, MSB

So, for the value 7: CL_PixelClock is available, LVAL is available, and FVAL is available (the first three bits are one); all other bits are zero.

For the value 2 it means that CL_PixelClock is not available, LVAL is available, and FVAL is not available.
For the value 1 it means that CL_PixelClock is available, and LVAL and FVAL are not available.
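If you want to decode the value programmatically, a small sketch could look like this. The flag names follow the list above; how you read the status value itself (e.g. via the Fg SDK) is left out here.

# Bit positions follow the FG_CAMSTATUS_EXTENDED list above.
FLAGS = [
    "CameraClk", "CameraLval", "CameraFval", "CameraCC1",
    "ExTrg", "BufferOverflow", "BufStatus_LSB", "BufStatus_MSB",
]

def decode(status):
    """Return the names of all bits set in the 8-bit status value."""
    return [name for bit, name in enumerate(FLAGS) if status & (1 << bit)]

print(decode(7))  # ['CameraClk', 'CameraLval', 'CameraFval']
print(decode(2))  # ['CameraLval']
print(decode(1))  # ['CameraClk']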


    Best Regards,
    Kevin


    Hello,

I was wondering how parallelism and RGB work together, and how the pixels are arranged when the parallelism is greater than 1 while using the color flavor FL_RGB.

So, I created a very simple design that shows how the RGB components can be merged/split and even be used as neighboring pixels in a grayscale image with the MergeParallel operator.

This applet does not have any real functionality, but it might come in handy.


    Feel free to ask questions if you like!


    Best Regards,

    Kevin

    Hi Bingnan,

No, the parallelism is usually in the "width". This means that if you have a parallelism of 8, 8 pixels are transported in parallel. These pixels are usually the ones next to each other in one line.

If you set the height in the module properties of the SplitImage operator to 1, it will divide an image with N lines into N images.

The parallelism will stay, so if you have a parallelism of 8, you will need to split the link into 8 parallel paths and then combine the logic with an OR operator at the end; see the sketch below.
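As a rough software analogy of that structure (the threshold test is just a stand-in for whatever per-pixel logic your design computes):

import numpy as np

# One line of pixels, transported 8 at a time (parallelism 8).
line = np.arange(64)
for cycle in line.reshape(-1, 8):   # each row models one clock cycle
    # Per-path logic on each of the 8 parallel pixels...
    hits = cycle > 60               # stand-in for your actual condition
    # ...combined into a single result, like the OR operator at the end.
    if hits.any():
        print("condition met in cycle:", cycle)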

Something similar is done in the example design that I attached before.

    Best Regards,

    Kevin

    Hi Bingnan,

I think the operator you are searching for is called "SplitImage". In the module properties you can set the Height to 1; thus the image will be split into separate lines.

I will attach an example design. Note that you need to adapt it to your input image stream and to your parallelism (it currently works with a parallelism of 2).

    Best Regards

    Kevin

    Note: This was not tested on any hardware.

    Files

• ToLine.va (6.29 kB)

    Hi Bingnan,

The Y-Coordinate operator will give you the Y coordinate of each pixel; as you correctly mentioned, it only counts in the Y direction. There is also an X-Coordinate operator which counts in the X direction.

Please notice that in your current design you are overwriting the image data that comes from the camera, because the Y-Coordinate operator only gives you the information about the pixel index. If you want to use both, I suggest using a Branch operator before it.

Regarding your question about the parallelism: if you check the documentation of PixelToSignal, you will see the following table:

[screenshot: PixelToSignal link parameter table]

Here you can see that the input link I requires a parallelism of 1.

    Here are some pointers that may help you:

ParallelDN -> set it to 1.

SplitParallel -> here you could split the parallel pixels into links that each have a parallelism of 1 and then process them further.

    It all depends on what your use case exactly is.


    Best Regards

    Kevin

    Dear pangfengjiang,

Sorry for the delayed reply; due to the holidays, many people are on vacation.

As MatthiasR already suggested, you can check the examples at the given link. I think the example "GeometricTransformation_ImageMoments.va" shows you how to rotate an image. If you already know the angle and the center of gravity, you can check the HBox "CoordinateTransformation" within the HBox "GeometricTransformation".

However, since the image is that large, it cannot fit into RAM. Maybe duplicate the rotating part and split the image into two parts by using every even/odd pixel index; the idea is illustrated below.
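In software terms, the even/odd split would look like the following sketch (in the VA design you would build this with operators, e.g. by filtering on the X coordinate):

import numpy as np

img = np.arange(4 * 8).reshape(4, 8)  # stand-in for the large input image

even = img[:, 0::2]  # pixels with even x index
odd = img[:, 1::2]   # pixels with odd x index

# Each half is only half as wide, so each duplicated rotation path
# needs only half the RAM; the halves are recombined afterwards.
print(even.shape, odd.shape)  # (4, 4) (4, 4)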

    Feel free to contact our sales team for a custom solution.


    Best regards

    Kevin

    Dear Bingnan,

Thank you for reaching out. The ImageTrigger box is for accessing the camera. My question is: what exactly do you mean by a trigger signal? Do you want to trigger the camera again if you detect the circle, or do you want to access the GPIO pins of the frame grabber? If it is the latter, I have attached a small VA example on how to access these.

Please let me know what you need in detail.

    Greetings,

    Kevin

    Files