Posts by kevinh

    Hi Jesse,

    I did some testing with OpenCV JPEG compression.

    Firstly, I am attaching a design that has a chess board generator with uneven tiles, creating a test environment that has no artifacts in it. I adapted your original design; you can use it for testing, but I doubt that it will work in hardware.

    Furthermore, here is a short Python snippet for doing the testing I did:
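    Roughly, it looks like this (a minimal sketch; the tile sizes, the quality value and the output handling are just examples):

        import cv2
        import numpy as np

        # Chess board with uneven tile sizes (13x17 px), so the tile
        # edges do not line up with the JPEG 8x8 block grid.
        checker = np.indices((40, 32)).sum(axis=0) % 2
        img = np.kron(checker, np.ones((13, 17))).astype(np.uint8) * 255
        img = img[:512, :512]

        # Encode/decode with OpenCV's JPEG implementation.
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 90])
        dec = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)

        # The artifacts are the difference between input and round trip.
        diff = cv2.absdiff(img, dec)
        cv2.imwrite("jpeg_artifacts.png", diff)

        # The image encoded by Visual Applets can be compared the same way:
        # va = cv2.imread("va_output.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder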






    With this snippet I created the following plot:





    [Plot: comparison of the JPEG artifacts introduced by Visual Applets and by OpenCV]



    This is a visualization of the different artifacts introduced by Visual Applets and OpenCV, respectively.
    The two implementations differ, so the artifacts they produce look different.

    However, what is more important: in both cases, with OpenCV as well as with Visual Applets, the JPEG artifacts differ between the horizontal and the vertical direction.


    Therefore, your observation that the artifacts are more visible on horizontal edges than on vertical edges does not show that there is a bug in the implementation; it can be explained by the loss of information inherent to JPEG compression.




    For further questions on this matter, please contact our support.



    Best Regards,

    Kevin

    Hi Jesse,

    Thank you for your input.

    The subsampling you implemented does the right job; however, it happens before the JPEG compression, so the effect may still be introduced during the JPEG compression itself.
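    As a side note: if you want to rule out the encoder-side subsampling in your OpenCV reference, recent OpenCV versions (4.5.1 and later, if I remember correctly) let you control it explicitly. A small sketch (the file name and quality value are placeholders):

        import cv2

        img = cv2.imread("test.png")  # placeholder input image
        # Force 4:4:4, i.e. no chroma subsampling inside the encoder,
        # so any remaining artifacts stem from the DCT quantization alone.
        params = [cv2.IMWRITE_JPEG_QUALITY, 95,
                  cv2.IMWRITE_JPEG_SAMPLING_FACTOR,
                  cv2.IMWRITE_JPEG_SAMPLING_FACTOR_444]
        ok, buf = cv2.imencode(".jpg", img, params)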

    Lastly, did you use the output during simulation (from the green box), or did you build your application, put it onto the frame grabber, and save the image from MicroDisplay or a C++ application?


    Best Regards,

    Kevin

    Hi Jesse,

    We are currently investigating this further.

    Just to double check:

    1. When you used C# for encoding, did you use subsampling as well?

    2. What did you use to decode the JPEG image? Did you write your own script?

    3. Did you decode the JPEG in the simulation or in hardware?


    Best Regards

    Kevin

    Hi Jesse,

    We use 4:2:2 subsampling, as shown in the picture below:

    [Diagram: common chroma subsampling ratios (4:4:4, 4:2:2, 4:2:0)]



    This means that we filter out some information in the horizontal direction, but not in the vertical direction, which may explain the behaviour you observed.
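    If you want to reproduce that effect on the PC side, a minimal sketch with OpenCV could look like this (purely illustrative, this is not our hardware implementation; the file names are placeholders):

        import cv2
        import numpy as np

        def subsample_422(bgr):
            """Simulate 4:2:2 subsampling: halve the chroma resolution
            horizontally, keep it untouched vertically."""
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
            y, cr, cb = cv2.split(ycrcb)
            h, w = cr.shape
            # Downsample the chroma planes horizontally only, then scale
            # them back up so the image can be reassembled.
            cr = cv2.resize(cv2.resize(cr, (w // 2, h)), (w, h))
            cb = cv2.resize(cv2.resize(cb, (w // 2, h)), (w, h))
            return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)

        img = cv2.imread("input.png")  # placeholder file name
        cv2.imwrite("subsampled_422.png", subsample_422(img))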

    If you need another form of subsampling, you may contact your appropriate sales manager with a custom request.


    Best Regards,

    Kevin

    Hi Jesse,

    I will ask the appropriate engineers at Basler and will get back to you as soon as I have the information available.
    Please be patient; due to holidays this may take some time.

    What I can tell you at the moment is that, depending on the input parallelism, different block sizes are used. This means that 8x8 blocks are not used in all cases.

    Best Regards,

    Kevin

    Hi,

    Calculating how large the output image will be is quite difficult. The JPEG algorithm works by storing different cosine frequencies, which means that the size of the output depends on the image you want to encode. Images with a repeating pattern (e.g. a chess board) compress much better than images with high variance and many edges. The worst possible input for JPEG is probably an image consisting of pure color noise.
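    You can see this effect directly with a quick experiment (a sketch using OpenCV; the exact byte counts will differ from what the frame grabber produces):

        import cv2
        import numpy as np

        # Same resolution, very different JPEG sizes: a regular pattern
        # versus pure noise (roughly the worst case for DCT compression).
        board = (((np.indices((512, 512)).sum(axis=0) // 64) % 2) * 255).astype(np.uint8)
        noise = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

        for name, image in (("chess board", board), ("noise", noise)):
            ok, buf = cv2.imencode(".jpg", image, [cv2.IMWRITE_JPEG_QUALITY, 90])
            print(name, len(buf), "bytes")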

    To understand JPEG better, I highly recommend Computerphile's two videos on that topic (Part 1 and Part 2).

    Most importantly, JPEG compression is NOT lossless, which means you will never get the exact same image back after encoding/decoding, even with JPEGQuality set to 100. You will always introduce some so-called "JPEG artifacts". These artifacts are most likely to occur at edges. You can read about it here: https://en.wikipedia.org/wiki/Compression_artifact . This also explains the artifacts you experience in your image.
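    You can verify this in a few lines (a sketch with OpenCV):

        import cv2
        import numpy as np

        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 100])
        dec = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
        print(np.array_equal(img, dec))  # almost always False: JPEG stays lossy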

    Lastly, you can try it with different simulated images. What I did was simply use a Coordinate_X operator followed by a CastBitWidth set to 8 to simulate a grayscale image. I attached my experimental synthetic image, so you may get the same size results as I did.


    Best Regards

    Kevin

    Hi,

    At the moment there is nothing planned in this direction.

    We are aware that Python 3.6 has reached EOL; if there is a strong enough customer need, we will implement support for newer Python versions.

    Do you have a specific reason for not using 3.6? If so, please tell us, and we will let you know how we will proceed.

    Best Regards,

    Kevin

    Hello Jesse,

    Sorry for the late reply; let me try to answer your questions.

    I simulated your design with my own simulated images, which have a resolution of 4096x3072 after debayering, so the following numbers will differ on your side depending on the input image used.

    If you take a look at the SimProbes JpegGrayR, JpegGrayG and JpegGrayB, you will notice that each has a size of, for example, 1,186,120; times three (one per channel) that is 3,558,360.

    If you now take a look at the probe "JpegColor", you will notice a size of 1,269,096, which means that you get a much higher compression using the JPEG color encoder.

    Simply put, the algorithm inside the JPEG color encoder differs vastly from the combination of a SplitComponents operator and three separate JPEG_Encoder operators, resulting in different output sizes.

    The JPEG encoder works in blocks, both the color encoder and the grayscale encoder; however, the color encoder works with the YCbCr format and some internal downscaling. If you need more information on this, please let me know and I will ask the appropriate engineers at Basler.

    Also, your results may differ with images that contain more color information; you could try to create a chess board with different color patterns.

    Lastly, the color JPEG encoder is better optimized for resources. If you use three JPEG encoders, you will need roughly:

    LUT: 25,759
    FFs: 27,396
    BRAM: 135
    Embedded ALU: 141
    RAMLUT: 3,075

    The JPEG color encoder, in contrast, only uses:

    LUT: 13,220
    FFs: 19,681
    BRAM: 93
    Embedded ALU: 55
    RAMLUT: 1,630


    To summarize: the two implementations take very different approaches to encoding the color information and are therefore hard to compare.

    Best Regards

    Kevin



    Hi dxs,

    Sadly, I do not have any further information about the RES485 operator, and there is no example for it.


    To interact with the GPIs, please use the GPI or GPO operator.


    Best Regards,

    Kevin

    Hi HeZhi,

    Can you give some more details about your setup? E.g., which frame grabber are you using, and how much data do you need to process?

    Is the data already available *before* the acquisition, e.g. coefficients for a flat field correction, or is it data coming directly from the camera? If you need to store large amounts of data that are available before the acquisition, please use DRAM operators such as RAMLuts or CoefficientBuffers. If you need to buffer data, please use ImageBuffers/FrameBufferRandomRead or something similar.

    If this does not help, you can try to lower the framerate on the camera or on the framegrabber by using an ImageValve.

    Furthermore, if you are able to, it would help to tell us more about your application, or even share the design here (if that is okay for you). Then we can check whether we see a solution that won't require PC memory.


    Lastly, to answer your question directly: currently you can write data from the frame grabber to the PC memory via DMA, which is the typical use case. However, accessing the PC memory from the frame grabber is not supported on newer platforms and has a really low bandwidth. If this is the only solution for your project, please contact our sales team about a custom project.


    Best Regards,

    Kevin

    Hi pangfengjiang,

    I would need a few more details about the issue; however, this looks more like a support case than an actual Visual Applets issue.

    But let me try to understand your problem a little better. I have some questions:


    "Hardware: VCL acquisition card, line scan 8K camera (base mode)"

    This is a VCL card; an acquisition card would be an ACL. I just want to make sure that it is really a VCL card, correct?

    "... I developed a function (stroboscopic function) to control LED lights and collect images on VA"

    This functionality was programmed in Visual Applets?

    "I found that an acquisition card can also control the brightness and darkness of the LED without turning on VA"

    Do you mean an ACL Card with its standard acquisition applets?


    "The other cards cannot control the brightness of the LED light. What is the reason for this and how to deal with it."

    Which other cards are meant by that? Other VCL cards, or other ACL cards?




    Can you explain in a little more detail what your desired result is?

    Hi dxs,

    In MicroDisplayX you have the possibility to execute "Save Current Sequence". This will save the last images.
    The number of images depends on your chosen number of buffers (Tools -> Settings -> Number of Buffers).

    After this you could use some kind of third-party software or Python scripts using OpenCV ( https://stackoverflow.com/ques…e-out-of-images-in-python ) to stack them together into a video.
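    Such a script could look roughly like this (a sketch; the file pattern, frame rate and codec are just example choices):

        import cv2
        import glob

        # Stack the saved images into a video.
        frames = sorted(glob.glob("sequence/*.png"))  # images saved from MicroDisplayX
        h, w = cv2.imread(frames[0]).shape[:2]

        writer = cv2.VideoWriter("out.avi",
                                 cv2.VideoWriter_fourcc(*"MJPG"), 25.0, (w, h))
        for f in frames:
            writer.write(cv2.imread(f))
        writer.release()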


    While there is no example of putting the images received via the SDK into an .avi file, you can find appropriate functions here:

    https://imaworx.netlify.app/iolibrt.html#writing-video-files


    Alternatively, you can also access the memory directly via the pointer returned by Fg_getImagePtrEx and then use OpenCV or similar alternatives to write the received image data to video.

    https://stackoverflow.com/ques…webcam-output-to-avi-file



    Best Regards,

    Kevin

    Hi pangfengjiang,

    Thank you for reaching out. I will discuss this internally and come back to you asap.

    Please be aware that this forum is not for providing full solutions to specific problems. It is for giving pointers and helping to understand tricky situations, similar to Stack Overflow.

    We offer trainings and full implementations as a service, but as mentioned above, I will discuss this internally.


    Best Regards,

    Kevin