Posts by roeger

    Hi Oliver,


    I did an applet for this camera long ago (I think it was like 2 years ago). I still remember the problems I had building it.

    First off, you need to do the sorting of the packages into an image yourself. The camera sends the lines in packages with R, G, or B content. That is why you need to activate the HeaderO output of the Single operator; it is the only operator you can use here. The Dual operator has a built-in scheme that is NOT compatible with this camera.


    I'm not sure if I can share the design I built with you, but some screenshots to give you the idea are no problem.

    pasted-from-clipboard.png

    You need to get the color and line information from the header and append the lines of the corresponding color to one image line.

    I placed the line-number information in a second kernel component to react to it later.
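    To make the sorting idea concrete, here is what the design does, written out as host-side C. This is only a conceptual sketch: the packet fields, sizes, and layout are assumptions for illustration, not the camera's real protocol.

        #include <stdint.h>
        #include <string.h>

        #define LINE_WIDTH 4096   /* example values, not the real sensor size */
        #define IMG_HEIGHT 3000

        enum color { RED = 0, GREEN = 1, BLUE = 2 };

        typedef struct {
            enum color color;             /* color flag from the package header (HeaderO) */
            uint32_t   line;              /* line number from the package header */
            uint16_t   data[LINE_WIDTH];  /* one line of one color component */
        } line_packet;

        /* one plane per color, addressed by the header line number */
        static uint16_t planes[3][IMG_HEIGHT][LINE_WIDTH];

        void sort_packet(const line_packet *p)
        {
            /* append the line to the plane selected by the header color, at the
               row given by the header line number (the spatial correction) */
            if (p->line < IMG_HEIGHT)
                memcpy(planes[p->color][p->line], p->data, sizeof p->data);
        }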


    pasted-from-clipboard.png


    Here you see my buffer. It does a spatial correction based on the line numbers I get from the package header.


    The design I did had one big flaw: once you ran at more than 70 fps (more than the DMA could transport), it ran into an overflow and couldn't recover from that.


    Full bandwidth of this camera is impossible; the grabber is limited by its PCIe speed (PCIe Gen 2 x4). But if that is not relevant and slower speeds are OK, it should be possible. Be warned, though: it's a lot of effort.


    Best regards,

    Björn

    Hi Arjun,


    the first camera you linked is a GigE camera. Therefore the CLHS framegrabber won't support it.

    The second camera can be supported, but has restrictions. These restrictions are:


    1. Not the full link speed: the camera uses the full CLHS bandwidth, but the card only supports PCIe Gen 2 x4 and is therefore too slow to transmit all the data (some rough numbers follow below this list).


    2. The camera uses spatial correction in a way that is hard to implement in VisualApplets. I would not recommend doing this if you are not an expert user.


    3. Older firmware didn't fully comply with the CLHS standard, therefore we had to make some assumptions in our custom applet about the structure of the CLHS packages. I was told the current firmware has been corrected, so this could now be made more elegant.
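    To put rough numbers on point 1 (my own back-of-the-envelope figures, not datasheet values): PCIe Gen 2 runs at 5 GT/s per lane with 8b/10b encoding, so

        4 lanes x 5 GT/s x 8/10 = 16 Gbit/s = 2 GB/s raw,

    and after protocol overhead something like 1.6-1.8 GB/s actually reaches host memory. A camera using, say, 7 CLHS lanes of the X protocol (10.3125 Gbit/s per lane, 64b/66b encoded, so about 1.25 GB/s each) can source around 8.75 GB/s -- several times what the card can move.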


    So, let me sum it up: yes, we have developed an applet to get the second camera to work with our framegrabber, but it has strict limitations, which is why we won't release it to the broad public. You may use the camera, but it won't be easy out of the box.


    Best regards,

    Björn Röger

    As for the DMA, the design doesn't have the full bandwidth, since the ParDn will slow it down a bit. But you could also handle the image data as gray to achieve the full bandwidth.

    In that case the design wouldn't make any sense, since then I could simply output the image directly, just knowing that I have to reserve buffers that are 3x as wide.


    I hope that solves your issue!

    Best regards,

    Björn

    Hi Jayasuriya,


    the format you describe is a 1x8 configuration. This means you only have 1 zone and get 8 pixels at the same time. The only difficulty here is that there is no RGB camera operator for Full. But you may use the Gray operator instead and just sort the corresponding pixels according to your camera (R -> G -> B).
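    To illustrate the sorting (only the concept; in VisualApplets you would build this with a few recast/select operators): assuming the camera interleaves the components as R, G, B, R, G, B, ... across the taps -- check your camera's tap geometry, this is an assumption -- regrouping the gray stream into RGB pixels is just indexing:

        #include <stddef.h>
        #include <stdint.h>

        typedef struct { uint8_t r, g, b; } rgb8;

        /* regroup a flat interleaved gray stream (R,G,B,R,G,B,...) into RGB pixels */
        void regroup_rgb(const uint8_t *gray, rgb8 *out, size_t n_pixels)
        {
            for (size_t i = 0; i < n_pixels; ++i) {
                out[i].r = gray[3 * i + 0];
                out[i].g = gray[3 * i + 1];
                out[i].b = gray[3 * i + 2];
            }
        }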


    I would do it in the following way:

    Dear Silverfly,


    I think the approach you took is not well suited to solving your problem. Sync expects 2 sources that can be balanced somehow. So what happens here is: if you get an unbalanced situation (one camera is sending while the other one is quiet), you will run into a stuck state at the Sync, since one input is on hold and therefore blocks all other inputs.

    The rest is just the camera buffer filling up, and you are stuck.


    I would suggest you do the following change:

    SelectCam.jpg


    Keep in mind to set SourceSelector.InfiniteSource = Enabled (for the buffer, this setting then needs to be disabled).


    I hope that helps you solve your problem.


    Best regards,

    Björn

    Dear Theo,


    I just changed the platform and rebuilt the applet. Can you try it with the hap file I put here?


    As for the format: yes, you are right. FG_FORMAT is treated as a packed format. For microDisplay it is simply a hint on how it should handle the data. If you write an SDK program you can still stream out 12 bit packed and afterwards interpret it correctly.


    In that case you could set the output format, for example, to 8 bit and calculate the width like newWidth = width * 12/8.

    Then you grab the bigger image and open the content with anything that interprets the data as a 12 bit packed format.
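    If you want to check the result in your own code, unpacking looks roughly like this. I assume here the common packing where two 12 bit pixels share three bytes (first pixel = byte 0 plus the low nibble of byte 1, second pixel = high nibble of byte 1 plus byte 2); verify which packing your applet actually outputs:

        #include <stddef.h>
        #include <stdint.h>

        /* unpack 12 bit packed data (3 bytes per 2 pixels) into 16 bit values;
           assumes an even number of pixels */
        void unpack12(const uint8_t *in, uint16_t *out, size_t n_pixels)
        {
            for (size_t i = 0; i + 1 < n_pixels; i += 2) {
                const uint8_t *p = in + (i / 2) * 3;
                out[i]     = (uint16_t)(p[0] | ((p[1] & 0x0F) << 8));
                out[i + 1] = (uint16_t)((p[1] >> 4) | (p[2] << 4));
            }
        }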


    But I'm really curious why it is not working at your end.


    Best regards,

    Björn

    Hi Theo,


    I made a small example applet and the matching configuration to show how you can display it in microDisplay.

    To get the image you need to set the Global Access attribute first, then the width and pixel format according to the image you want to capture (my image is 1024 pixels in 12 bit RGB).


    The applet I used is the following (I did it on a VF2 platform, but actually the platform doesn't matter).


    In this applet it was not possible to connect the 12 bit directly to the DMA; that's why I did a bit of recasting and rearranging of the bits so that they match the DMA in a byte format. It is basically cosmetics ;)


    Best regards,

    Björn

    Hi Aaron,


    I think you only have a display issue.

    You could either configure microDisplay via FG_WIDTH and FG_HEIGHT to the dimensions you desire, or you could set the max dimensions to those values. You have set it to 8Kx6. This means microDisplay expects the image to be of this size and sets the dimensions accordingly.

    To avoid that, you can use SetDimension between your Blob_Analysis box and the BlobOutput.
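    For completeness, setting the dimensions from an SDK program looks roughly like this (the Fg calls are from the Silicon Software SDK as I remember them; treat the sketch as a starting point, not reference code):

        #include <stdio.h>
        #include <fgrab_prototyp.h>   /* Silicon Software Fg SDK */

        /* set the DMA output dimensions; microDisplay does the same internally */
        int set_dims(Fg_Struct *fg, unsigned int dma, int width, int height)
        {
            if (Fg_setParameter(fg, FG_WIDTH,  &width,  dma) < 0 ||
                Fg_setParameter(fg, FG_HEIGHT, &height, dma) < 0) {
                fprintf(stderr, "%s\n", Fg_getLastErrorDescription(fg));
                return -1;
            }
            return 0;
        }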


    Best regards,

    Björn

    Hi Mike,


    please have a look at "Maximum Datarate | JPEG_Encoder_Gray" in this forum. In that thread there is an example of how to encode a color image with multiple Gray encoders. The example is for CL and CXP.


    The basic concept is:


    1. debayer the image

    2. convert the RGB image to YUV (see the sketch after this list)

    (3. subsample the U and V components if required)

    4. encode each component using a Gray encoder

    5. write a header and the output of each encoded stream into a file (this is done in software)
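    As a reference for step 2, this is the standard full-range BT.601 conversion that JPEG expects, written out in C (in the applet it is a small arithmetic network, but the coefficients are the same):

        #include <stdint.h>

        static uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

        /* full-range BT.601 RGB -> YCbCr, as used by JPEG */
        void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                          uint8_t *y, uint8_t *cb, uint8_t *cr)
        {
            *y  = clamp8((int)( 0.299   * r + 0.587   * g + 0.114   * b));
            *cb = clamp8((int)(-0.16874 * r - 0.33126 * g + 0.5     * b + 128));
            *cr = clamp8((int)( 0.5     * r - 0.41869 * g - 0.08131 * b + 128));
        }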


    Best regards,

    Björn

    Hi Theo,


    if you want to show a 12 bit image in microDisplay, you can do that with the following steps:


    1. Set the Global Access attribute (in the "Miscellaneous" section) to "Read Write Change".

    2. Now you can change the DMA "PixelFormat" to the desired format, for example "gray 12 bit".


    Now microDisplay will interpret the image data accordingly.

    Best regards,

    Björn

    Hi Theo,

    unfortunately it's not that easy to solve this problem. The processing module in our standard applets actually is simply a lookup table. The values for this lookup table are calculated in software and then written to the FPGA.


    So what you would need to do then is give the customer a routine that they can use to load the values accordingly.
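    As a sketch of what such a routine could look like (the gamma curve and the 12-bit-to-8-bit table are just examples; how the values are written to the FPGA depends on the applet's parameter interface, so that part is only indicated):

        #include <math.h>
        #include <stdint.h>

        #define LUT_SIZE 4096   /* 12 bit input, example value */

        /* calculate the lookup table values in software, here a simple gamma curve */
        void build_gamma_lut(uint8_t lut[LUT_SIZE], double gamma)
        {
            for (int i = 0; i < LUT_SIZE; ++i)
                lut[i] = (uint8_t)(255.0 * pow((double)i / (LUT_SIZE - 1), 1.0 / gamma) + 0.5);
            /* ...then write lut[] to the FPGA via the applet's LUT parameter */
        }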

    If you want I think we can help you with that.


    Best regards,

    Björn

    Dear Jayasuriya,


    I can offer one other solution you could try. In this design the "Register" always gets the highest value. The CreateBlankImage generates an image with the dimensions 1x1 pixel. Using a SyncToMin, the result of RowMax is synchronized to that 1-pixel image at the end of a line.


    It's not possible to simulate this kind of design (it uses signal operations). But you can try it. I think this should also do the job. :)


    Best regards,

    Björn

    Dear Jayasuriya,


    I think it's more a question of which version of VisualApplets you are using than of the platform. The operator is relatively new and was included in VisualApplets 3.06. If your version is older than that, you can update to a newer version.

    The link to get the update is either in the product information (you may register for that newsletter at productinfo@silicon-software.de) or you can send an email to our support (support@silicon-software.de) to get the link for the current version (VA 3.1).


    Best regards,

    Björn


    Dear JSuriya,


    the problem can be solved quite easily. You figured right: RowMax gives the maximum value of all pixels seen so far in the corresponding row. So the last pixel is always the absolute maximum of the row. The task at hand is therefore to get the last pixel and remove all other pixels. To get the last pixel you may use the operator IsLastPixel (library: Synchronization). Using this operator you don't need to know how long the row is.

    I modified the test design you attached accordingly.
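    For reference, this is what the RowMax + IsLastPixel combination computes, written out in C: a running maximum along the row, of which only the last pixel -- the absolute row maximum -- is kept.

        #include <stddef.h>
        #include <stdint.h>

        uint16_t row_maximum(const uint16_t *row, size_t width)
        {
            uint16_t running_max = 0;      /* RowMax: max of the pixels seen so far */
            for (size_t x = 0; x < width; ++x)
                if (row[x] > running_max)
                    running_max = row[x];
            return running_max;            /* IsLastPixel keeps only this last value */
        }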


    The part concerning the ResynctoLine I didn't get. Can you explain that in more detail?


    Best regards,

    Björn

    Hi Michael,

    I had a look at your example. Yes, you could do it that way, but I think it's too complicated like that.

    The intention behind this design is to get each ROI into a different buffer. This is possible using just 1 DMA. In your SDK program you just need to reserve 4 buffers at the end, and you will get each ROI into a separate buffer (ROI_test_simple.va).

    In microDisplay you then just need to run a sequence of 4 images.
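    The SDK side could look roughly like this (buffer size and DMA index are placeholders, and the Fg calls are from the Silicon Software SDK as I remember them -- please check them against the SDK documentation):

        #include <fgrab_prototyp.h>

        #define N_BUFFERS 4

        int grab_rois(Fg_Struct *fg, unsigned int dma, size_t roi_bytes)
        {
            /* 4 buffers -> ROI n lands in buffer n */
            dma_mem *mem = Fg_AllocMemEx(fg, N_BUFFERS * roi_bytes, N_BUFFERS);
            if (mem == NULL)
                return -1;

            if (Fg_AcquireEx(fg, dma, N_BUFFERS, ACQ_STANDARD, mem) < 0)
                return -1;

            /* wait (up to 10 s) until all 4 ROIs have arrived */
            frameindex_t last = Fg_getLastPicNumberBlockingEx(fg, N_BUFFERS, dma, 10, mem);
            for (frameindex_t i = 1; i <= last; ++i) {
                void *roi = Fg_getImagePtrEx(fg, i, dma, mem);
                /* ...process ROI i... */
                (void)roi;
            }

            Fg_stopAcquireEx(fg, dma, mem, 0);
            Fg_FreeMemEx(fg, mem);
            return 0;
        }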

    If you want to combine the 4 ROIs into 1 image and show them at the same time in microDisplay, then you need a more complex applet.

    I added 2 versions here:

    1. The first one simply appends all ROIs after each other (ROI_test_simpleAppend.va).

    append.jpg

    2. The second appends them in a square (ROI_test_simpleAppendSquare.va).

    square.jpg

    I hope that will give you a hint on how you could implement it for your needs.


    Best regards,

    Björn

    Hello Michael,


    Sorry, but this operator was released by accident. Please do not use it. The operator is not yet fully functional. It was developed to support multiple streams, which so far are not supported by most cameras.


    If the camera sends the images sequentially, you need to set the applet parameters for one ROI and capture n pictures to get all ROIs. If you have some logic in your applet you could also merge them into one bigger image; in this case the size depends on the ROIs. That's why I asked for the applet you intend to use :)


    Best regards,

    Björn

    Hello Michael,


    microDisplay can only show one image, or one ROI, at a time. So I guess your applet does the merging of the ROIs into one bigger image that you want to see in microDisplay.

    From the image Mroi5 I would say maybe a buffer ran full in this process. So I would check whether your buffers are big enough for the combining actions.

    If you have a VA file that you can share, I can have a look and maybe give a better hint :)


    Best regards,

    Björn

    Hi again,


    I looked at your design. I have some hints for you:


    1. Try to avoid a parallel-down ahead of your buffer. You may start with a parallelism of 8 at your SingleCamera operator and remove the To8P operator. If you use a parallel-down on a faster input without buffering, you will lose data.

    2. You may use the same construct as I provided in my second example. For that, remove the buffer (RAM) from Capture. Then add a FIFO holding 1 line in EdgexFilters after Projection_V. This FIFO and RAM1 then need to be set to InfiniteSource enabled.

    3. I'm not sure if you need SYNC3 in EdgexFilters. It is only needed if the images on the different links have different sizes.


    I hope this will help you with the design.

    Best regards,

    Björn

    Hi Jesse,


    First, the error: simply remove the "RefYOffset" operator. This operator is not essential; it just makes the handling easier.

    This operator is part of the VisualApplets parameter translation library. This library allows you to build your own interface, so the customer gets a defined interface and does not have to care that much about how things are done inside the H-Box.


    This library requires a license that you don't have at the moment.


    You will get the same result by writing the value you would have written to Y_Offset directly to get_last_line/YOffsetCmp.Number.
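    From an SDK program that would look roughly like this; the parameter path is a guess following the usual "Device1_Process0_..." naming scheme, so look up the real name of your design's parameter first:

        #include <fgrab_prototyp.h>

        int set_y_offset(Fg_Struct *fg, int value)
        {
            /* resolve the design parameter by name; path below is assumed */
            int id = Fg_getParameterIdByName(fg,
                "Device1_Process0_get_last_line_YOffsetCmp_Number");
            if (id < 0)
                return -1;
            return Fg_setParameter(fg, id, &value, 0 /* DMA index */);
        }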


    As for your second question:

    Quote

    When I use a "M" or "P" type applet with synchronization.

    How can I calculate the buffer or FIFO size?

    In this case, why LineBuffer need to add after module28?

    Why the source image need add a RAM?

    Let me first say something on the "Why do I need a buffer?" question.

    Buffers are always used to store data you want to use at a later moment in time. For the source image that means you need to store all lines ahead of the line that you want to process in the second path. If you do not do that, these lines are lost when you need them.


    So, rule of thumb: you need a buffer whenever there can be a situation where one path needs to wait for results from another path/operator.


    (That's why you always need at least one buffer or FIFO. The DMA operator is unpredictable in how long transfers take, since that depends on the system and its workload.)


    As for the LineBuffer: it is needed because the image data from the camera operator is an "infinite source"; this means you cannot stop it. Sometimes processing needs more time than just one clock cycle. In this case the processing path ahead of this module is paused for a while. This cannot be done if the source is an "infinite source", since it cannot be stopped.

    This is where the FIFO helps. It buffers incoming pixels in case there is a pause signal from one of the processing units downstream.


    How to calculate the size? That's a tricky question. You need to consider your image pipeline.

    "What is the worst case that can happen?"

    I set the buffer to 1 line because at most 1 line will ever be stored. So even if the PC is not taking the data in time and a later calculation in the pipe is too slow, there won't be any data loss.

    This is actually more than needed. Typically you won't pause that long; I think a FIFO of just a few pixels would also do.
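    A worked example with assumed numbers: with 4096 pixels per line at a parallelism of 8, a 1-line FIFO is 4096 / 8 = 512 FIFO words deep. Since the source delivers one word per clock cycle while a downstream pause is active, the sizing rule is simply

        FIFO depth (words) >= longest stall (clock cycles),

    so a pipeline that never pauses for more than 64 cycles would already get away with a 64-word FIFO (64 x 8 = 512 pixels).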


    I hope that makes it a bit clearer.

    Best regards,

    Björn