Posts by Johannes Trein

    Hello Jayasuriya

    first of all I would like to point you to this post: How to acquire images by turns using multi cameras? Maybe you will find some answers there. Among other information, you can read the following:

    ... as you are using a GigE Vision camera, the camera explicitly needs to be started. microDisplay will do that for you when you click the Play button. However, it will start camera 0 with DMA 0, camera 1 with DMA 1, etc. As you only have a single DMA, camera 1 (you call it camera 2) will not be started by microDisplay.

    Solution: Open the GenICam explorer and connect to the second camera. Search for the parameter StartAcquisition and click Execute. This will start the second camera and you can acquire images with microDisplay.

    You won't have this type of problem when you integrate the frame grabber via the API using the SDK, as you need code to start the cameras anyway. If you are using a 3rd-party interface like Halcon, you need to check its behavior.

    In your case, your design will work if you can change the operator SourceSelector only between two frames. So you need to stop all cameras, let the pipeline run empty, change the SourceSelector source, and restart, let's say, two out of three cameras.

    If a camera is disabled asynchronously to your trigger, for example because of a failure, you won't be able to switch to your fallback solution, especially if it happens in the middle of a frame transfer. The operators can only switch at the end of a frame.

    Maybe you can further explain the expected camera issues so that I can think about a workaround.


    Hi Theo

    You asked: "So my main question is why is the Bit Width not a part of the table above? Would you still think I should use more than one link?"

    The table already assumes a bit width of 64 to arrive at those values. I will add more details to the post.

    I know the operator does not perform very well. Thank you for the feedback. I will forward it.

    Tip: It is difficult to prepare the CoefficientBuffer input files if you are using multiple links. If you don't want to write software for testing, you can use the VisualApplets simulation to generate the coefficient files.

    An alternative to CoefficientBuffer is RamLUT.


    Hi Pierre

    if you use the VisualApplets simulation, the program will display the correct image width. So please check the simulation output image width and verify that you are using the same setting for the display size.

    Now that you have shown a screenshot of the VA design I can see that you are using a parallelism of 20.

    From your information I can guess the following parameters:

    - Camera ROI width: 1184

    -> The applet's camera operator will extend this to 1200

    - Defined ROI in both ImageBuffer operators: 1200 (as 1184 is not possible)

    - SelectROI XLength: 1184

    -> SelectROI will cut out 1184 pixels. As the output parallelism is 20, the lines are extended by dummy pixels to 1200

    -> DMA buffer image width = 1200

    For 1152 you need to set the ImageBuffer to 1160, which will also be the DMA buffer image width.
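    The padding arithmetic above can be sketched in a few lines of Python. This is purely an illustration of the rule, not part of VisualApplets: line widths are rounded up to the next multiple of the parallelism by appending dummy pixels.

```python
import math

def padded_width(width: int, parallelism: int) -> int:
    """Round a line width up to the next multiple of the parallelism.

    Operators like SelectROI append dummy pixels so that every line
    consists of a whole number of parallel words.
    """
    return math.ceil(width / parallelism) * parallelism

# With a parallelism of 20:
print(padded_width(1184, 20))  # 1200
print(padded_width(1152, 20))  # 1160
```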

    Maybe you use different settings in ImageBuffer?

    Why are you using SelectROI anyway? It only makes sense if you place a PARALLELdn operator in front of it.

    Both of the screenshots do not show a multiple of 20.

    I can only further assist if you attach the VA design and the images as well as a full documentation of all settings.


    Hello community

    Today I have a question for you: I want to use a Dalsa Linea Color GigE in RAW mode. This is a bilinear line scan camera with the following pattern:

    Sensor layout of a bilinear line scan camera with color pattern Red/BlueFollowedByBlue/Red_GreenFollowedByGreen

    I need to set the camera to the pixel format BiColorRGBG8 but I don't know the exact output format of the camera.

    Here is the camera manual:…eries%202k%20and%204K.pdf

    My questions are:

    1. Is the camera sending an image of Width and Height and the image composing takes place in the camera?

    2. What is the pixel format of the camera? RB line first, then GG line, or pixel 0 RGBG, pixel 1 RGBG, etc.?

    3. What is the line width? The CL version of the Dalsa P4 sends the GG line after the RB line, so the LVAL is 2 x width.

    I adapted the Silicon Software example…near%20Bayer%20RB_GG.html to the VQ4, but without an answer to the questions above I cannot finish the implementation. See attached.

    Maybe someone has an idea?



    Hi Pierre

    I've read your post several times now but still don't have a definitive answer. Let me explain the behavior of Select_ROI:

    - If the input image width is larger than the defined offset + width, the operator will cut out the ROI.

    - If the input image width is smaller, the operator will not extend the width and will pass the data to the output unchanged.

    - If you select an offset or width which cannot be divided by the used parallelism, the operator has to add dummy pixels at the end of each line.

    You wrote: "... and also modify the FG_WIDTH ..."

    In a VisualApplets applet you don't have FG_WIDTH. You can define

    - the width in all operators that change it, like ImageBuffer and Select_ROI

    - the DMA output image size, which is used for PC buffer allocation and the display window size.

    So please let me know what you mean by FG_WIDTH.

    On a DMA transfer, the line length information is not included, so the PC cannot know whether the lines have a width of 1184 or 1152. The setting simply needs to match the transfer.

    If you have a mismatch, it looks like your attached image. From the image we can see that the display size does not correspond to the transfer size: some pixels are shifted to the next line because there are too many pixels per line, or shifted to the previous line because some pixels are missing.
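    A small Python sketch (purely illustrative, not how the runtime is implemented) shows how such a width mismatch shifts pixels between display lines:

```python
def display_lines(data, display_width):
    """Split a flat DMA pixel stream into display lines.

    The DMA stream carries no per-line width information, so the viewer
    simply cuts the stream every `display_width` pixels. If that does not
    match the real transfer width, pixels drift into neighboring lines.
    """
    return [data[i:i + display_width] for i in range(0, len(data), display_width)]

# Three lines of real width 1200 in the stream, displayed at width 1184.
# Each pixel value encodes its true position: y * 1200 + x.
stream = [y * 1200 + x for y in range(3) for x in range(1200)]
lines = display_lines(stream, 1184)
# The second display line no longer starts at a real line start:
print(lines[1][0])  # 1184, i.e. 16 pixels of line 0 leak into display line 1
```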

    Can you see the problem in the VisualApplets simulation?

    Could you attach your VA design, or an extract of it, so I can have a look?


    Dear Silverfly

    as you are using a GigE Vision camera, the camera explicitly needs to be started. microDisplay will do that for you when you click the Play button. However, it will start camera 0 with DMA 0, camera 1 with DMA 1, etc. As you only have a single DMA, camera 1 (you call it camera 2) will not be started by microDisplay.

    Solution: Open the GenICam explorer and connect to the second camera. Search for the parameter StartAcquisition and click Execute. This will start the second camera and you can acquire images with microDisplay.

    You won't have this type of problem when you integrate the frame grabber via the API using the SDK, as you need code to start the cameras anyway. If you are using a 3rd-party interface like Halcon, you need to check its behavior.



    Hello Jayasuriya

    in VA 3.1.1 we released examples for Tap Geometry sorting using BRAM instead of the ImageBufferSC.

    See the example documentation here:…20Geometry%20Sorting.html

    I guess you are using a microEnable IV frame grabber? If you have enough BRAM you can use one of the solutions from the link. Otherwise it is possible to use two ImageBufferSC operators, a FIFO, and insert-line as well as append-line logic.

    You wrote: "We tried using cast parallel but it is reading out as expected."

    What do you mean by that? Do you mean "... but it is NOT reading out as expected"? In this case, try the two ImageBufferSC operators.


    VisualApplets can be used to transfer serial data over the IOs of the frame grabber. Application areas are for example

    • Controlling eject mechanisms like valves for bulk material sorting.
    • Outputting object x and y positions, for example for laser welding, to control the position of piezo-controlled mirrors.
    • Outputting values to digital-to-analog converters, for example to control a laser intensity.
    • etc.

    VisualApplets does not have dedicated operators for this kind of communication. Instead, the existing operators are used to generate the output stream or to analyze a stream at a digital input of the frame grabber. The implementations can be adapted to many requirements and protocols.

    Many applications can be solved. For more complex requirements like complex protocols with handshakes a custom solution might be necessary.

    The following VisualApplets design shows a simple approach for serial output and serial input. I hope that other designs and protocols will follow after this one.

    Serial Protocol

    In our example I defined a simple protocol.


    We transfer 8 bit data words. The serial stream is in the format

    • 1 Start bit (high)
    • 8 Data bit
    • 1 Parity bit (XOR of all data bits)
    • 1 Stop bit (high)

    After a gap of at least two clock cycles, the next data word can follow.

    The CTS (Clear to Send) signal is sent back from the receiver to the sender. It is 1 if new data can be received and 0 if the input buffer is full.
    Moreover, in this example the CTS signal is used for re-synchronization. Suppose the data receiver is out of sync and a start bit is not detected as the start of the data. The CTS signal is disabled from time to time (e.g. once a second) for more than the period of one data transfer, i.e. 1 + 8 + 1 + 1 + 2 clock cycles. The sender will detect this re-synchronization gap and can start from the beginning.

    Maximum Bitrate

    I could test the link with a bitrate of up to 20 Mbit/s when directly connecting the extension IOs of one frame grabber to a second frame grabber using a short cable of 10 cm length. The maximum speed depends on the electrical characteristics of your cable.

    Applet Function

    In this demo we simply transfer generated image data from one frame grabber process to the second frame grabber process over an external loopback cable, i.e. we transfer image data between process 0 and process 1. We then compare the DMA output of both frame grabbers. If a parity error is detected, the output pixel will be red.






    Serial Output:

    The serial output box consists of three H-Boxes and has two parameters.

    The first parameter is the bitrate, which has to match the receiver. The second parameter is the source for the CTS signal of the demo. You can either use an external input or an FPGA-internal loop, so you don't need to connect any cables to test the applet.


    In OutputBuffer the data is first buffered in a small BRAM FIFO. The most important box is SerialDataStream. Here the 8 bit data values are converted into the serial data stream. Check the following explanations and screenshot.


    First, the 8 bit data is extended by the start bit, parity bit, and stop bit plus 2 dummy gap bits, so our 8 bit value is extended to 13 bits. Next, every pixel is converted into a single image line, i.e. 1 pixel per line.

    Now we need to serialize the 13 bit parallel data. This can easily be done using a CastParallel and a PARALLELdn operator. After the PARALLELdn, the 13 bits are transferred one after another, starting with the LSB.

    The following CTS_Valve H-Box contains an ImageValve operator to only allow the output of data if the CTS signal is valid.

    After the valve we could directly output the data. However, at this point the data would be output at the speed of the FPGA clock, which is usually 125 MHz, so we would have a bitrate of 125 Mbit/s. As we need a slower bitrate, we simply replicate every bit, i.e. every pixel, using operator PixelReplicator to get the desired bitrate.
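    The replication factor is simply the ratio of the clock frequency to the target bitrate. A tiny Python sketch of the idea behind PixelReplicator, assuming an integer ratio:

```python
def replicate_bits(bits, clk_hz, bitrate_hz):
    """Hold every bit for clk_hz / bitrate_hz clock cycles.

    This models slowing a bit stream down from the FPGA clock rate to
    the desired serial bitrate by repeating each bit.
    """
    factor = clk_hz // bitrate_hz   # assumes clk_hz is a multiple of bitrate_hz
    return [b for b in bits for _ in range(factor)]

# 125 MHz design clock, 12.5 Mbit/s target bitrate -> each bit repeated 10 times
out = replicate_bits([1, 0, 1], 125_000_000, 12_500_000)
print(len(out))  # 30
```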

    Now the data generation is complete and the stream can be converted to a signal and output on one of the GPOs. This is done in H-Box GPO_Output.

    Serial Input

    The serial input box of process 1 now needs to receive the signals, analyze the integrity and output the data. Again, this H-Box has parameters to set the baudrate and the source for the serial data.


    In GPI_Input the physical input is selected and output as a signal to StartDetectionFilter. Here the signal is analyzed for a start condition, i.e. a rising edge which is supposed to be the start bit. Once it has detected a rising edge, the implementation will record all bits of the sequence. If the receiver is asynchronous to the sender, for example because it was started after the sender, it might falsely detect a bit as the start bit. To re-synchronize, the CTS signal adds gaps from time to time which automatically cause a re-synchronization. See the explanation above.

    In H-Box DataExtract the start bit condition and the defined bitrate are used to latch the data into a new empty line using CreateBlankImage and PixelToImage.


    In DataErrorDetect the serial data is analyzed for start bit, stop bit, and correct parity. The output of this H-Box is still a serial stream of the data, with a second bit which must have the value 0 in the last pixel of the line, indicating that no error was detected.


    Finally, in H-Box Parallelize we can parallelize the serial data. This is simply done using a PARALLELup operator. Now we can remove the start, parity, and stop bits to get our 8 bit data value. Bit 1 contains the error flag, which is output on a second link.


    H-Box OutputBuffer includes a BRAM FIFO. If the processing after the serial input is stopped, this buffer could overflow and some serial data could get lost. To avoid this, the fill level of the buffer is measured, and at a certain point the CTS output signal is set to low, i.e. not clear to send, so that the sender will stop sending data.


    This is continued in CTS_Output. Here the Resync gap is added to the stream.


    Applet Usage in microDisplay

    The applet is pre-configured to be used directly in microDisplay. It will use the internal loop and the internal image pattern generator, so you can directly load the applet and start it in microDisplay.


    First you can see that both DMA channels transfer data. DMA 0 is coming from process 0 and transfers the original data. DMA 1 shows the data from process 1, i.e. the data transferred over the serial interface and protocol. As you can see, the data is correct. We get about 24.1 fps. At an image size of 256x256 this results in about 1.5 MByte/s, which is reasonable for the configured bitrate: 1 / (1.5 MPixel/s x 13 bit) ≈ 51 ns per bit, close to the 48 ns bit period.

    If you change the bitrate in either the sender or the receiver, you can see that the data becomes incorrect. Once you correct it, the re-synchronization works and data integrity is restored. (Obviously the frame generation puts pixels at the wrong positions, as we do not transfer frame parameters like the frame start.)

    Next we want to have a look at the data. For this we use our logic analyzer applet which had been presented in this forum thread: LINK


    In the waveform we see the serial data on channel 0 and the CTS signal on channel 1. As you can see, the CTS re-synchronization gap is about 2 data word periods long. After the rising edge of the CTS signal, the sender starts sending data.

    On channel 2 we can see a debug output. It shows the start condition detection. On channel 3 we can see the position where data is latched.

    In the drawing the bitrate was reduced to a bit period of 480 ns.

    Attached you can find the VA file. You need VisualApplets 3.1 to use it. If you don't have a license for the Parameters library or the debugging operator library, you need to remove those operators. The serial communication will still work correctly.

    Hello VisualApplets community

    Today I would like to present one of the most complex VisualApplets implementations ever made: a logic analyzer fully programmed in VisualApplets using graphical output visualization and text overlays. The applet can be used in microDisplay without any extra software or hardware. The implementation is open source, so you can include it in your projects or, if you are keen, dig in to understand and modify it.

    Quick Start: Download the attached HAP file and load it to a marathon VCL. Done!

    Table of Contents:

    1. Features
    2. Application Areas
    3. Installation, Usage and Requirements
    4. Implementation

    1. Features

    The following screenshot shows the usage in microDisplay:


    The applet has the following functions:

    • latching of digital signals at the frame grabber IOs or internal sources
    • Function generators
      • define the period
      • sequence length
      • downscale
      • delays
    • output of the signals to the frame grabber IOs
    • Logic Analyzer functionality - the usage is in the style of any other software or hardware logic analyzer -> easy to use
      • edge sensitive trigger
      • resolution in the range of 16ns to 32sec
      • 8 channels with arbitrary selection of signals
      • text output for settings and cursor positions
    • statistics module to count signals and measure min/max periods

    2. Application Areas

    Debugging applets and system setups requires a logic analyzer to answer questions like

    • What does my signal look like?
    • Is there an encoder jitter?
    • Is my trigger output correct?
    • Is the number of trigger inputs the same as the outputs? Is there a frame response for every trigger?

    You have two options to use the logic analyzer:

    1. Use the ready-made applet and connect the frame grabber IOs to your encoder, or use a crosslink cable to monitor the signals of a second frame grabber. The following pictures show examples of crosslink cables. We can send you instructions on how to make them on request.
    2. Use the logic analyzer as a VisualApplets hierarchical box and integrate it into your own design.

    Application Example 1: Emulate an Encoder Signal and monitor the Behavior of an AcquisitionApplets

    One possible application is the emulation of encoder signals. The generators in the applet can emulate an encoder signal with two traces A and B. Moreover an image trigger signal can be emulated.

    Using the crosslink cables you can then send the emulated signals to your device under test e.g. an AcquisitionApplets. Configure the applet to your requirements.

    Next, monitor the applet's LVAL and FVAL signals to see if the triggering works correctly.


    Application Example 2: Monitor the Correct Function of the Light Controller Applet

    In the following screenshot you can see the output of an applet I made recently. The applet's intention is to control 3 different lights from one trigger with different exposure times. By monitoring the signals it could be ensured that the applet works correctly.


    3. Installation, Usage and Requirements

    Attached you can find a HAP file "HardwareTest_LA_V32.hap" for mE5 marathon VCL frame grabbers. You can directly copy the file into your runtime %SISODIR5%\Hardware Applets\mE5-MA-VCL directory, flash it to the frame grabber and load it in microDisplay.

    The applet makes extensive use of the operators from the Parameters library, so it can be used easily.

    All relevant parameters to setup the applet are in the parameter tree category "Parameters". Do not change any of the implementation parameters.


    In the following a short explanation of the parameters is listed:

    • Parameters -> Logic_Analyzer
      In this category all relevant parameters to set up the logic analyzer are included. The values are very similar to those of software and hardware logic analyzers, so usage is easy.
      • TriggerMode
        Start = Continuous grab on each trigger. After a timeout with no triggers, the applet will run in untriggered free-run mode
        Stop = Stop of the triggering. No more signals are captured
        Single Shot = The logic analyzer will wait for the next trigger condition. As soon as it has recorded the sequence after the condition, the recording is stopped. You need to stop and restart the trigger mode.
      • TriggerTimeout
        Time after which the logic analyzer switches to FreeRun mode. Only available in TriggerMode = Start
      • TriggerEdge
        Rising or falling edge.
      • Trigger Channel
        Select the trigger channel. Each trigger condition is marked with the white arrow in the waveform window.
      • xDiv
        Scales the time of an x-division, i.e. the duration of one of the 10 waveform areas
      • Cursor 1 and Cursor 2
        Positions of the two blue cursor lines and the measured times.
      • Shift
        Shifts the trigger position to the beginning or end of the waveform window
      • Channels
        For all of the 8 channels you can arbitrarily select a source. An explanation of the sources can be found below.
    • Parameters -> Inputs
      Define the polarity of the digital inputs and set a debounce time.
    • Parameters -> Generators
      The generators are used for internal signal generation. You can use up to four independent frequency generators. Define the period and sequence length for each generator.
    • Parameters -> Pulse_Form
      The generated signals can get a pulse width, delay and downscaling.
    • Parameters -> Outputs
      Arbitrary mapping of the generated signals or digital inputs to the digital frame grabber outputs is possible.
    • Parameters -> Statistics
      The applet includes 8 statistics modules which can count pulses and measure the current, minimum, and maximum period as well as the signal width. This is, for example, a very good tool to detect encoder jitter.
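      As an illustration of what such a statistics module computes (a hypothetical sketch, not the applet's actual implementation): given timestamps of rising edges, the pulse count and the current/min/max period follow directly, and a large spread between min and max indicates encoder jitter.

```python
def period_stats(edge_times_ns):
    """Derive pulse count and current/min/max period from rising-edge timestamps.

    A large difference between 'min' and 'max' points to encoder jitter.
    """
    periods = [b - a for a, b in zip(edge_times_ns, edge_times_ns[1:])]
    return {
        "pulses": len(edge_times_ns),
        "current": periods[-1],   # most recent period
        "min": min(periods),
        "max": max(periods),
    }

edges = [0, 1000, 2005, 2995, 4000]   # ~1 us encoder period with jitter
print(period_stats(edges))  # {'pulses': 5, 'current': 1005, 'min': 990, 'max': 1005}
```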
    • Other Parameters
      The logic analyzer was added to the existing frame grabber test applet. For a description of this applet check the applet documentation in…es%20hardware%20test.html

    If you want to include the H-Box into your own designs simply copy it and connect signal sources. The output needs to be directly connected to a DMA channel.


    It has all parameters of the logic analyzer category explained above.

    Note the following requirements:

    • At minimum, Runtime 5.6.1 is required. With Runtime 5.6, microDisplay will not be fast enough.
    • If you want to build the HAP file on your own you need
      • VisualApplets 3.1 or higher (use VisualApplets 3.2 to set units in the parameter translations and references)
      • Parameters library extension

    4. Implementation

    Attached you can find the VisualApplets files. As mentioned, it is one of the most advanced applets ever made with VisualApplets. And for sure, VisualApplets was not made for this type of FPGA implementation. However, it shows the great, almost unlimited possibilities of VisualApplets.

    Even though it is one of the most complex VisualApplets designs (2272 modules), the resource consumption is low. The H-Box LogicAnalyzer only requires 6300 LUTs, 37 BRAMs, and 2 ALUs.

    Hi Saito

    Welcome to the VisualApplets forum!

    On a DMA transfer, the width of each individual line is discarded; only the frame data is stored in the PC memory. Using FG_TRANSFER_LEN, the number of bytes of the transfer can be obtained, but any line length information is lost.

    So information about the image dimensions has to be appended as a trailer to the image.
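    For illustration, such a trailer could be parsed on the PC side like this. The 8-byte little-endian layout (width, height as 32-bit words at the end of the buffer) is purely an assumption for this sketch; the real layout depends on how you build the trailer in your VA design.

```python
import struct

def split_trailer(buffer: bytes):
    """Split a DMA buffer into pixel data and an assumed 8-byte trailer.

    Assumed layout: the last 8 bytes hold width and height as two
    little-endian unsigned 32-bit integers.
    """
    width, height = struct.unpack_from("<II", buffer, len(buffer) - 8)
    return buffer[:-8], width, height

# A fake 6x4 frame (24 pixel bytes) followed by the dimension trailer:
frame = bytes(range(12)) * 2 + struct.pack("<II", 6, 4)
pixels, w, h = split_trailer(frame)
print(w, h)  # 6 4
```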

    Solution 1:

    In the post Extract ROIs from linescan images using a Blob Analysis, images of different sizes are cropped using the ImageBufferMultiRoIDyn operator. I think most of your questions are answered in that thread.

    Solution 2:

    Instead of the solution presented above, you can measure the current image width and height just before the DMA transfer.



    See the attached VA design.


    In microDisplay, variable image dimensions cannot be displayed. Because microDisplay is a test-only tool, you could pad each image to a constant size with SyncToMax and SyncToMin. However, this is only for testing as it will require more DMA bandwidth:


    Let me know if you have further questions.


    Hi Pierre,

    Welcome to the forum!

    In VisualApplets, an acquisition process is reset and restarted when the acquisition is started or stopped.

    A process without a DMA channel will be started immediately when the applet is initialized.

    Unfortunately the AppendImage operator has no reset parameter or clear input, so the only way to reset its concatenation is to restart the process, i.e. restart the acquisition.

    It might be possible to rebuild the function using operator SignalGate. But this needs exact verification in hardware and, as signals are used, it cannot be simulated in VisualApplets. The timing is crucial. I made a short sketch to show the idea. I am not sure this will work, but I am confident there can be a workaround.


    We have another thread where the reset of processes is discussed. See: Reseting operators, image buffer and signals


    A similar but much simpler example uses the same camera input but produces one interleaved BGR output and one IR output in separate DMA channels.

    The attached VA design can be built directly and used in microDisplay. The attached example shows how to synchronously grab the images and write RGB and gray TIFF files to disk.



    Enable the built-in simulator so you don't need a camera.



    Hello Sangrae Kim

    welcome to the forum.

    There are two examples for bilinear line scan cameras in VisualApplets 3.1.

    Advantages of the Bayer De-Mosaicing in the frame grabber instead of the camera are:

    • Higher bandwidth: Transfer the RAW data over Camera Link and expand the data inside the frame grabber, using the high-speed DMA channel
    • Higher quality: The frame grabber can do very high quality Bayer de-mosaicing. With VisualApplets you can use any algorithm you like.

    We also have AcquisitionApplets for these cameras to use them on A-Series.

    You are asking about the camera EV71YC4CCP1605-BA0 from E2V. This CXP camera can only output RGB color, not RAW Bayer data. In this case you can directly use the RGB data from the camera and don't need color interpolation. Unfortunately, the advantages of the FPGA processing on the frame grabber cannot be used with this camera. See the camera's manual, User Manual ELIIXA+ 16K/8K CXP COLOR, Section 2.2.

    Best regards