Posts by roeger

    Dear Theo,


    I just changed the platform and rebuilt the applet. Can you try it with the hap file I put here?


    As for the format: yes, you are right. FG_FORMAT is treated as a packed format. For microDisplay it is simply a hint on how it should handle the data. If you write an SDK program you can still stream out 12 bit packed and afterwards interpret it correctly.


    In that case you could, for example, set the output format to 8 bit and calculate the width as newWidth = width * 12/8.

    Then you grab the bigger image and open the content with anything that interprets the data as a 12 bit packed format.
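
    Just to make the idea concrete, here is a minimal plain-C sketch of the reinterpretation (no SDK calls; it assumes the common little-endian packing where two 12 bit pixels share three bytes — please check the actual bit order of your applet):

    Code
    #include <stdint.h>
    #include <stddef.h>

    /* Unpack a 12 bit packed buffer (two pixels per three bytes) into
     * 16 bit values. packedLen is the raw byte count, i.e. width*12/8. */
    void unpack12(const uint8_t *packed, size_t packedLen, uint16_t *out)
    {
        for (size_t i = 0, o = 0; i + 3 <= packedLen; i += 3, o += 2) {
            out[o]     = (uint16_t)(packed[i] | ((packed[i + 1] & 0x0F) << 8));
            out[o + 1] = (uint16_t)((packed[i + 1] >> 4) | ((unsigned)packed[i + 2] << 4));
        }
    }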


    But I'm really curious why it is not working at your end.


    Best regards,

    Björn

    Hi Theo,


    I made a small example applet and the configuration to show how you can display it in microDisplay.

    To get the image you need to set the Global Access attribute first, then the width and pixel format according to the image you want to capture (my image is 1024 pixels in 12 bit RGB).


    The applet I used is the following (I built it on a VF2 platform, but the platform doesn't actually matter).


    In this applet it was not possible to connect the 12 bit directly to the DMA, so I did a bit of recasting and rearranging of the bits so that they match the DMA in a byte format. It is basically cosmetics ;)


    Best regards,

    Björn

    Hi Aaron,


    I think you only have a display issue.

    You could either configure microDisplay via FG_WIDTH and FG_HEIGHT to the dimensions you desire, or you could set the max dimensions to those values. You have set it to 8Kx6, which means microDisplay expects the image to be of this size and sets the dimensions accordingly.

    To avoid that you can use SetDimension between your Blob_Analysis box and the BlobOutput.
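
    If you go the FG_WIDTH/FG_HEIGHT route from the SDK instead, a minimal sketch could look like this (C; assuming the standard Fg_setParameter call from the runtime — the DMA index and dimensions here are just placeholders):

    Code
    #include <fgrab_prototyp.h>  /* runtime SDK header */

    int set_dimensions(Fg_Struct *fg, unsigned int dma)
    {
        int width  = 1024;  /* placeholder: the dimensions your applet really outputs */
        int height = 1;
        if (Fg_setParameter(fg, FG_WIDTH, &width, dma) != FG_OK)
            return -1;
        if (Fg_setParameter(fg, FG_HEIGHT, &height, dma) != FG_OK)
            return -1;
        return 0;
    }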


    Best regards,

    Björn

    Hi Mike,


    Please have a look at "Maximum Datarate | JPEG_Encoder_Gray" in this forum. In that thread there is an example of how to use the gray encoder to encode a color image using multiple encoders. The example is for CL and CXP.


    The basic concept is:


    1. Debayer the image

    2. Convert the RGB image to YUV

    (3. Subsample the U and V components if required)

    4. Encode each component using a gray encoder

    5. Write a header and the output of each encoded stream into a file (this is done in software)
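
    For step 2, here is a minimal sketch of an RGB-to-YUV conversion (plain C, an integer BT.601 YCbCr approximation; this is only a generic illustration, not the exact arithmetic of the VA operators or the encoder):

    Code
    #include <stdint.h>

    /* Convert one 8 bit RGB pixel to YCbCr (BT.601, full-range approximation). */
    void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                    uint8_t *y, uint8_t *u, uint8_t *v)
    {
        int yi = ( 77 * r + 150 * g +  29 * b) / 256;        /* 0.299 0.587 0.114 */
        int ui = (-43 * r -  85 * g + 128 * b) / 256 + 128;  /* Cb */
        int vi = (128 * r - 107 * g -  21 * b) / 256 + 128;  /* Cr */
        *y = (uint8_t)(yi < 0 ? 0 : yi > 255 ? 255 : yi);
        *u = (uint8_t)(ui < 0 ? 0 : ui > 255 ? 255 : ui);
        *v = (uint8_t)(vi < 0 ? 0 : vi > 255 ? 255 : vi);
    }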


    Best regards,

    Björn

    Hi Theo,


    if you want to show a 12 bit image in microDisplay, you can do so with the following steps:


    1. Set the Global Access attribute (in the "Miscellaneous" section) to "Read Write Change".

    2. Now you can change the DMA "PixelFormat" to the desired format, for example "gray 12 Bit".


    Now microDisplay will interpret the image data accordingly.

    Best regards,

    Björn

    Hi Theo,

    unfortunately it's not that easy to solve this problem. The Processing module in our standard applets is actually simply a lookup table. The values for this lookup table are calculated in software and then written to the FPGA.


    So what you would need to do is give the customer a routine that they can use to load the values accordingly.

    If you want, I think we can help you with that.
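
    Just to illustrate what such a routine computes, here is a rough sketch (plain C; the gamma curve is only an assumed example of a processing function, and how the table is then written to the applet depends on its parameter interface):

    Code
    #include <math.h>
    #include <stdint.h>

    /* Fill a 256-entry 8 bit lookup table with a gamma curve.
     * The resulting values would then be written to the applet's
     * LUT parameter via the SDK. */
    void compute_gamma_lut(uint8_t lut[256], double gamma)
    {
        for (int i = 0; i < 256; ++i)
            lut[i] = (uint8_t)(255.0 * pow(i / 255.0, 1.0 / gamma) + 0.5);
    }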


    Best regards,

    Björn

    Dear Jayasuriya,


    I can offer one other solution you could try. In this design the "Register" always gets the highest value. CreateBlankImage generates an image with dimensions of 1x1 pixel. Using a SyncToMin, the result of RowMax is synchronized to that 1-pixel image at the end of a line.


    It's not possible to simulate this kind of design (it uses signal operations), but you can still try it. I think this should also do the job. :)


    Best regards,

    Björn

    Dear Jayasuriya,


    I think it's more a question of which version of VisualApplets you are using than of the platform. The operator is relatively new and was introduced in VisualApplets 3.06. If your version is older than that, you can update to a newer version.

    The link to get the update is either in the product information (you may register for that newsletter at productinfo@silicon-software.de) or you can send an e-mail to our support (support@silicon-software.de) to get the link for the current version (VA 3.1).


    Best regards,

    Björn


    Dear JSuriya,


    the problem can be solved quite easily. You figured right: RowMax gives the maximum value of all pixels seen so far in the corresponding row, so the last pixel always carries the absolute maximum of the row. The task at hand is therefore to get the last pixel and remove all other pixels. To get the last pixel you may use the operator IsLastPixel (library: Synchronization). Using this operator you don't need to know how long the row is.

    I modified the test design you attached accordingly.


    The part concerning the ResynctoLine I didn't get. Can you explain that in more detail?


    Best regards,

    Björn

    Hi Michael,

    I had a look at your example. Yes, you could do it that way, but I think it's too complicated like that.

    The intention behind this design is to get each ROI into a different buffer. This is possible using just 1 DMA. In your SDK program you just need to reserve 4 buffers at the end, and you will get each ROI in a separate buffer (ROI_test_simple.va).
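
    From the SDK side, that buffer setup could look like this minimal sketch (C; I am assuming the usual Fg_AllocMemEx/Fg_AcquireEx calls from the runtime, and the DMA index and ROI size are placeholders):

    Code
    #include <fgrab_prototyp.h>

    /* Reserve 4 buffers so that each ROI lands in its own buffer. */
    int grab_four_rois(Fg_Struct *fg, size_t roiBytes)
    {
        const frameindex_t nBuffers = 4;
        dma_mem *mem = Fg_AllocMemEx(fg, roiBytes * nBuffers, nBuffers);
        if (mem == NULL)
            return -1;
        if (Fg_AcquireEx(fg, 0 /* DMA index */, nBuffers, ACQ_STANDARD, mem) != FG_OK)
            return -1;
        /* ... wait for the 4 frames, then access buffer i via
           Fg_getImagePtrEx(fg, i + 1, 0, mem) ... */
        Fg_stopAcquireEx(fg, 0, mem, 0);
        Fg_FreeMemEx(fg, mem);
        return 0;
    }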

    You then just need to run a sequence of 4 images in microDisplay.

    If you want to combine the 4 ROIs into 1 image and show them at the same time in microDisplay, then you need a more complex applet.

    I added 2 versions here:

    1. One simply appends all ROIs after each other (ROI_test_simpleAppend.va).

    append.jpg

    2. One appends them in a square (ROI_test_simpleAppendSquare.va).

    square.jpg

    I hope this gives you a hint on how you could implement it for your needs.


    best regards,

    Björn

    Hello Michael,


    Sorry, but this operator was released by accident. Please do not use it. The operator is not yet fully functional. It was developed to support multiple streams, which most cameras do not support so far.


    If the camera sends the images sequentially, you need to set the applet parameters for one ROI and capture n pictures to get all ROIs. If you have some logic in your applet you could also merge them into one bigger image; in this case the size depends on the ROIs. That's why I asked for the applet you intend to use :)


    Best regards,

    Björn

    Hello Michael,


    microDisplay can only show one image, or one ROI, at a time. So I guess your applet merges the ROIs into one bigger image that you want to see in microDisplay.

    From the image Mroi5 I would say that maybe a buffer ran full in this process. So I would check whether your buffers are big enough for the combining steps.

    If you have a VA file that you can share, I can have a look and maybe give a better hint :)


    Best regards,

    Björn

    Hi again,


    I looked at your design. I have some hints for you:


    1. Try to avoid a parallel-down ahead of your buffer. You may start with a parallelism of 8 at your SingleCamera operator and remove the To8P operator. If you use a parallel-down on a faster input without buffering, you will lose data.

    2. You may use the same construct as I provided in my second example. For that, remove the buffer (RAM) from Capture, then add a FIFO holding 1 line in EdgexFilters after Projection_V. This FIFO and RAM1 then need to be set to InfiniteSource enabled.

    3. I'm not sure if you need SYNC3 in EdgexFilters. It is only needed if the images on the different links have different sizes.


    I hope this will help you with the design.

    Best regards,

    Björn

    Hi Jesse,


    First, the error: simply remove the "RefYOffset" operator. This operator is not essential; it just makes the handling easier.

    This operator is part of the VisualApplets Parametertranslations library. This library allows you to build your own interface, so the customer gets a defined interface and does not have to care much about how things are done inside the H-Box.


    This library requires a license that you don't have at the moment.


    You will get the same result by writing the value you wrote to Y_Offset directly to get_last_line/YOffsetCmp.Number.
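
    From the SDK side this direct write could look like the following sketch (C; the fully qualified parameter name depends on your design — the prefix shown is only an assumed example, so please check the exact name, e.g. in microDisplay):

    Code
    #include <fgrab_prototyp.h>

    int write_y_offset(Fg_Struct *fg, int yOffset)
    {
        /* Resolve the parameter ID of the Number register inside the H-Box.
           The "Device1_Process0_" prefix is just an assumed example of the
           naming pattern; verify it for your applet. */
        int id = Fg_getParameterIdByName(fg,
            "Device1_Process0_get_last_line_YOffsetCmp_Number");
        if (id < 0)
            return -1;
        return Fg_setParameter(fg, id, &yOffset, 0) == FG_OK ? 0 : -1;
    }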


    As for your second question:

    Quote

    When I use an "M" or "P" type applet with synchronization,

    how can I calculate the buffer or FIFO size?

    In this case, why does a LineBuffer need to be added after module28?

    Why does the source image need a RAM?

    Let me first say something on the "Why do I need a buffer?" question.

    Buffers are always used to store data that you want to use at a later moment in time. For the source image that means you need to store all lines ahead of the line that you want to process in the second path. If you do not do that, these lines are lost by the time you need them.


    So, rule of thumb: you need a buffer whenever there can be a situation in which one path needs to wait for results from another path/operator.


    (That's why you always need at least one buffer or FIFO. The timing of the DMA operator is unpredictable, since how long transfers take depends on the system and its workload.)


    As for the LineBuffer: it is needed because the image data from the camera operator is an "infinite source"; this means you cannot stop it. Sometimes processing needs more time than just one clock cycle. In this case the processing path ahead of this module is paused for a while. This cannot be done if the source is an "infinite source", since it cannot be stopped.

    This is where the FIFO helps. It buffers incoming pixels in case there is a pause signal from one of the downstream processing units.


    How to calculate the size? That's a tricky question. You need to consider your image pipeline.

    "What is the worst case that can happen?"

    I set the buffer to 1 line, because at most 1 line will ever be stored. So even if the PC does not fetch the data in time and a calculation later in the pipe is too slow, there won't be any data loss.

    This is actually more than needed; typically you won't pause that long. I think a few pixels would also do.
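
    As a worked example (with assumed numbers): for a line of 1024 pixels at 8 bit per pixel, a 1-line FIFO holds 1024 bytes; at a parallelism of 8 that corresponds to 128 clock cycles of pause tolerance, which is usually far more than any downstream stall lasts.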


    I hope that makes it a bit clearer.

    Best regards,

    Björn

    Hi Jesse,

    I'm not sure if I understood your problem correctly, but for the design you posted here I would offer this optimized version:


    1. From my understanding one buffer is redundant and can simply be left out.

    2. For the SelectROI you can also use the H-Box I put there. In this H-Box all lines except the one you specified with Y_Offset are removed. That is usually more efficient if you only need an ROI in the Y direction, since the operator needs to take the X direction into account as well and therefore needs more resources.


    In the modified file I put there you will find a second process with my modifications.


    Does that help you or do you need something else?

    Best regards,

    Björn

    Hi Mike,


    there are two different concepts for working with our frame grabbers.

    These are:

    1. Applet (let's take it as an extension to the firmware)

    2. Software SDK


    The first one acts as an extension to the basic functions of the hardware. The second one focuses on using features that are implemented either in our firmware or by the user in an applet.


    As for CL-Ser: that is an interface that is already available in every CL frame grabber.

    Each CL device needs slightly different handling. That's why there is a standard defining how to manage these different devices. So you don't need to implement a low-level version yourself but can use an already existing interface.

    OK, so much for the theory.


    As for the "how": there is an example in your runtime installation:

    %SISODIR5%\CameraLink\ClSer


    You may look at this to get an idea of how to handle serial communication in CL.
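
    Just for orientation, here is a minimal sketch of the standard Camera Link serial API that such an example builds on (C; the command string, port index, and timeout are placeholders — please check the ClSer example for the exact usage on your system):

    Code
    #include <clser.h>   /* Camera Link standard serial API */
    #include <string.h>

    int send_cl_command(unsigned int portIndex, const char *cmd)
    {
        void *serRef = NULL;
        if (clSerialInit(portIndex, &serRef) != 0)  /* 0 == CL_ERR_NO_ERR */
            return -1;
        unsigned int len = (unsigned int)strlen(cmd);
        int rc = clSerialWrite(serRef, (char *)cmd, &len, 1000 /* ms timeout */);
        clSerialClose(serRef);
        return rc == 0 ? 0 : -1;
    }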


    Did that answer your question?

    Best regards,

    Björn

    Hi Jesse,


    since the design you are trying to build is for a marathon card, I would suggest you use Vivado instead of ISE. Vivado will greatly reduce your build times, since it has better optimization capabilities than ISE.


    As for your question of what to do if there is only ISE (e.g. for an Ironman card):

    You can try to reduce the parallelism of the LUT and ROM operators to a maximum of 2 (if you need more, you can split them into multiple instances). This will cost a bit more resources, but it will reduce the clocking in these operators to the normal clock only. Otherwise they will also use a 2x faster clock, which can make routing more complicated.


    I hope that helps a bit.

    Best regards,

    Björn

    Files

    • Buildtime.jpg


    Hi Theo,

    I have attached a pre-version of the example for multi-encoder usage in VA. Currently there is a VA file for:

    1. CL Full Gray with 4 encoders

    2. CL Full Bayer with 6 encoders

    3. CXP Gray using 4 encoders

    I still haven't finished the CXP color version with 6 encoders (the design didn't fit); I still hope I can find a solution to squeeze it in somehow. Anyway, in the Example.zip you will find the VA files and the modified JpegCreator.h/C++ project.

    The upload limit prevented me from including the jpeg-lib. You will need that one as well (it is included in the old example).


    I hope that gives you a starting point.


    This is the example: Example.zip


    PS: I will add the documentation and missing parts as soon as I find a way to get them small enough ;)

    best regards,

    Björn

    Hi Simon,


    I use TCL for applet generation. For that I created a folder with TCL scripts. When I run my base script, I first set the working directory to the folder the script is in, and after that I work relative to that path.

    You can do that using this code in your main TCL script:

    Code
    cd [file dirname [info script]]

    After that, changing to any known folder can be done relative to the script's path using the cd command. For example, if your image folder were one folder up and named image:


    Code
    cd ../image

    I hope that helps you achieve your goal.


    best regards

    Björn