Posts by Johannes Trein

    Dear Silverfly


    As you are using a GigE Vision camera, the camera explicitly needs to be started. microDisplay will do that for you when you click the Play button. However, it starts camera 0 with DMA 0, camera 1 with DMA 1, and so on. As your applet only has a single DMA channel, camera 1 (which you call camera 2) will not be started by microDisplay.

    Solution: Open the GenICam Explorer and connect to the second camera. Search for the parameter StartAcquisition and click Execute. This will start the second camera, and you can then acquire its images with microDisplay.


    You won't have this type of problem when you integrate the frame grabber into your own application using the SDK, as your code needs to start the cameras anyway. If you are using a third-party interface like HALCON, you need to check how it handles this.


    BR

    Johannes

    Hello Jayasuriya


    In VA 3.1.1 we released examples for Tap Geometry Sorting using BRAM instead of the ImageBufferSC.


    See the example documentation here: http://www.siliconsoftware.de/…20Geometry%20Sorting.html


    I guess you are using a microEnable IV frame grabber? If you have enough BRAM, you can use one of the solutions from the link. Otherwise, it is possible to use two ImageBufferSC operators, a FIFO, and insert line as well as append line.


    We tried using cast parallel but it is reading out as expected.

    What do you mean by that? Do you mean "... but it is NOT reading out as expected"? In that case, try the two ImageBufferSC operators.


    Johannes

    VisualApplets can be used to transfer serial data over the IOs of the frame grabber. Application areas are, for example:

    • Controlling eject mechanisms, such as valves for bulk material sorting.
    • Outputting object x and y positions, for example for laser welding or to control the position of piezo-controlled mirrors.
    • Outputting values to digital-to-analog converters, for example to control a laser intensity.
    • etc.

    VisualApplets does not have dedicated operators for this kind of communication. Instead, the existing operators are used to generate the output stream or to analyze a stream at a digital input of the frame grabber. The implementations can be adapted to many requirements and protocols.

    Many applications can be solved this way. For more complex requirements, such as protocols with handshakes, a custom solution might be necessary.


    The following VisualApplets design shows a simple approach for serial output and serial input. I hope that other designs and protocols will follow after this one.

    Serial Protocol

    In our example I defined a simple protocol.

    pasted-from-clipboard.png


    We transfer 8 bit data words. The serial stream is in the format

    • 1 Start bit (high)
    • 8 Data bits
    • 1 Parity bit (XOR of all data bits)
    • 1 Stop bit (high)

    After a gap of at least two clock cycles, the next data word can follow.


    The CTS (Clear To Send) signal is sent backwards from the receiver to the sender. It is 1 if new data can be received and 0 if the input buffer is full.
    Moreover, in this example the CTS signal is used for re-synchronization. Suppose the receiver is out of sync and a start bit is not detected as the start of a data word. The CTS signal is therefore disabled from time to time (e.g. once a second) for more than the period of one data transfer, i.e. 1 + 8 + 1 + 1 + 2 clock cycles. The sender detects this re-synchronization gap and can start again from the beginning.
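
    To make the frame layout concrete, here is a small software model of how one 8 bit word maps onto the 13 bits on the wire. This is only an illustration, and the data bit order (LSB first after the start bit) is my assumption; the real implementation is built from VisualApplets operators, not C code.

```c
#include <stdint.h>

/* Bit 0 is the first bit on the wire:
 *   1 start bit (1), 8 data bits, 1 parity bit (XOR of the data bits),
 *   1 stop bit (1), 2 gap bits (0)  ->  13 bits per 8 bit word. */
#define FRAME_BITS 13

static uint16_t frame_word(uint8_t data)
{
    uint8_t parity = 0;
    for (int i = 0; i < 8; ++i)
        parity ^= (uint8_t)((data >> i) & 1u);

    uint16_t frame = 0;
    frame |= 1u;                      /* bit 0: start bit = 1             */
    frame |= (uint16_t)data << 1;     /* bits 1..8: data, LSB first       */
    frame |= (uint16_t)parity << 9;   /* bit 9: parity                    */
    frame |= (uint16_t)1u << 10;      /* bit 10: stop bit = 1             */
    /* bits 11 and 12 stay 0: the two gap cycles                          */
    return frame;
}
```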

    Maximum Bitrate

    I could test the link with a bit rate of up to 20 MHz when directly connecting the extension IOs of one frame grabber to a second frame grabber using a short cable of 10 cm length. The maximum speed depends on the electrical characteristics of your cable.


    Applet Function

    In this demo we simply transfer generated image data from one frame grabber process to the second one, i.e. between process 0 and process 1, over an external loopback cable. We then compare the DMA output of both processes. If a parity error is detected, the output pixel is shown in red.

    pasted-from-clipboard.png

    Process0:

    pasted-from-clipboard.png


    Process1:

    pasted-from-clipboard.png


    Serial Output:

    The serial output box consists of three H-Boxes and has two parameters.

    The first parameter is the bit rate, which has to match the receiver's setting. The second parameter is the source of the CTS signal for the demo. You can either use an external input or an FPGA-internal loop, so you don't need to connect any cables to test the applet.

    pasted-from-clipboard.png


    In OutputBuffer the data is first buffered in a small BRAM FIFO. The most important box is SerialDataStream. Here the 8 bit data values are converted into the serial data stream. See the following explanations and screenshot.

    pasted-from-clipboard.png


    First, the 8 bit data is extended by the start bit, the parity bit and the stop bit plus two dummy gap bits, so our 8 bit value is extended to 13 bit. Next, every pixel is converted into a single line, i.e. one pixel per image line.

    Now we need to serialize the 13 bit parallel data. This can easily be done using a CastParallel and a PARALLELdn operator. After the PARALLELdn, the 13 bits are transferred one after another, starting with the LSB.

    The following CTS_Valve H-Box contains an ImageValve operator to only allow the output of data if the CTS signal is valid.

    After the valve we could directly output the data. However, at this point the data would be output at the speed of the FPGA clock, which usually is 125 MHz, so we would have a bit rate of 125 MHz. As we need a slower bit rate, we simply replicate every bit, i.e. every pixel, using the operator PixelReplicator to get the desired bit rate.
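
    As a side note, the replication factor is simply the design clock multiplied by the desired bit period. A tiny sketch of that arithmetic with the numbers used in this post (125 MHz clock, 48 ns bit period):

```c
/* Number of clock cycles each serial bit is held on the output.
 * Example: a 125 MHz design clock and a 48 ns bit period give a factor of 6. */
static unsigned replication_factor(double clock_hz, double bit_period_s)
{
    return (unsigned)(clock_hz * bit_period_s + 0.5); /* 125e6 * 48e-9 = 6 */
}
```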

    Now the data generation is complete; the data can be converted to a signal and output on one of the GPOs. This is done in H-Box GPO_Output.

    Serial Input

    The serial input box of process 1 now needs to receive the signals, check their integrity and output the data. Again, this H-Box has parameters to set the baud rate and the source of the serial data.

    pasted-from-clipboard.png


    In GPI_Input the physical input is selected and output as a signal to StartDetectionFilter. Here the signal is analyzed for a start condition, i.e. a rising edge which is assumed to be the start bit. Once a rising edge has been detected, the implementation records all bits of the sequence. If the receiver is asynchronous to the sender, for example because it was started after the sender, it might falsely detect a data bit as the start bit. To re-synchronize, the CTS signal adds a gap from time to time, which automatically causes a re-synchronization. See the explanation above.
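
    As a rough software model of this start detection and bit recording (only an illustration: it samples once per bit period, while the real design samples at the FPGA clock; FRAME_BITS is the 13 bit frame length from the sketch above):

```c
#include <stdint.h>

typedef struct {
    int      receiving;   /* 1 while a frame is being recorded            */
    int      bit_index;   /* next bit slot (0..FRAME_BITS-1)              */
    uint16_t shift;       /* collected bits, bit 0 = first bit received   */
    int      last_level;  /* previously sampled line level                */
} rx_state_t;

/* Call once per bit period with the sampled line level (0 or 1).
 * Returns 1 and fills *frame when a complete frame has been collected. */
static int rx_sample(rx_state_t *s, int level, uint16_t *frame)
{
    int done = 0;
    if (!s->receiving && level && !s->last_level) { /* rising edge = start bit */
        s->receiving = 1;
        s->bit_index = 0;
        s->shift = 0;
    }
    if (s->receiving) {
        s->shift |= (uint16_t)(level & 1) << s->bit_index++;
        if (s->bit_index == FRAME_BITS) {
            *frame = s->shift;
            s->receiving = 0;
            done = 1;
        }
    }
    s->last_level = level;
    return done;
}
```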

    In H-Box DataExtract the start bit condition and the defined bit rate are used to latch the data into a new empty line using CreateBlankImage and PixelToImage.

    pasted-from-clipboard.png


    In DataErrorDetect the serial data is now checked for the start bit, the stop bit and correct parity. The output of this H-Box is still a serial stream of the data, together with a second bit which has to have the value 0 in the last pixel of the line, indicating that no error was detected.

    pasted-from-clipboard.png


    Finally, in H-Box Parallelize we parallelize the serial data. This is simply done using a PARALLELup operator. Now we can remove the start, parity and stop bits to get our 8 bit data value. Bit 1 contains the error flag, which is output on a second link.

    pasted-from-clipboard.png
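
    Continuing the software model from above, the checks done in DataErrorDetect and Parallelize correspond to something like the following. Again, this only illustrates the frame layout defined earlier and is not the applet itself:

```c
#include <stdbool.h>
#include <stdint.h>

/* Check start bit, stop bit and parity of a received 13 bit frame
 * (bit 0 = first bit on the wire) and extract the 8 bit data value.
 * Returns true if no error was detected. */
static bool decode_word(uint16_t frame, uint8_t *data_out)
{
    bool    start_ok  = (frame & 1u) != 0;               /* start bit must be 1 */
    uint8_t data      = (uint8_t)((frame >> 1) & 0xFFu); /* 8 data bits         */
    uint8_t parity_rx = (uint8_t)((frame >> 9) & 1u);
    bool    stop_ok   = ((frame >> 10) & 1u) != 0;       /* stop bit must be 1  */

    uint8_t parity = 0;
    for (int i = 0; i < 8; ++i)
        parity ^= (uint8_t)((data >> i) & 1u);

    *data_out = data;
    return start_ok && stop_ok && (parity == parity_rx);
}
```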

    H-Box OutputBuffer includes a BRAM FIFO. If the processing after the serial input is stopped, this buffer could overflow and some serial data could get lost. To avoid this, the fill level of the buffer is monitored; above a certain threshold the CTS output signal is set to low, i.e. not clear to send, so that the sender stops sending data.

    pasted-from-clipboard.png

    This is continued in CTS_Output. Here the Resync gap is added to the stream.

    pasted-from-clipboard.png


    Applet Usage in microDisplay

    The applet is pre-configured to be directly used in microDisplay. It will use the internal loop and the internal image pattern generator. So you can directly load the applet and start it in microDisplay.


    pasted-from-clipboard.png


    First you can see that both DMA channels transfer data. DMA 0 comes from process 0 and transfers the original data. DMA 1 shows the data from process 1, i.e. the data transferred over the serial interface and protocol. As you can see, the data is correct. We get about 24.1 fps. At an image size of 256x256 this corresponds to roughly 1.5 MByte/s, which fits the configured bit period of 48 ns: 1 / (1.5 MPixel/s * 13 bit) ≈ 51 ns per bit.
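
    If you want to re-check these numbers, here is a tiny calculation sketch. It ignores the periodic CTS re-sync gaps, which is why the measured 24.1 fps stays slightly below the theoretical value:

```c
#include <stdio.h>

int main(void)
{
    const double bit_period_s   = 48e-9;        /* configured bit period        */
    const double bits_per_word  = 13.0;         /* start + data + parity + stop + gap */
    const double pixels_per_img = 256.0 * 256.0;

    double words_per_s = 1.0 / (bit_period_s * bits_per_word); /* ~1.6 MWord/s */
    double fps         = words_per_s / pixels_per_img;         /* ~24.5 fps    */
    printf("theoretical maximum: %.1f fps\n", fps);
    return 0;
}
```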

    If you change the bit rate in either the sender or the receiver, you can see that the data becomes incorrect. Once you correct it, the re-synchronization works and data integrity is restored. (Obviously the frame generation puts pixels at wrong positions, as we do not transfer frame parameters like the frame start.)


    Next we want to have a look at the data. For this we use our logic analyzer applet which had been presented in this forum thread: LINK

    pasted-from-clipboard.png


    In the waveform, channel 0 shows the serial data and channel 1 the CTS signal. As you can see, the CTS re-synchronization gap is about two data word periods long. After the rising edge of the CTS signal, the sender starts sending data.

    On channel 2 we can see a debug output. It shows the start condition detection. On channel 3 we can see the position where data is latched.

    For this recording the bit rate was reduced to a bit period of 480 ns.


    Attached you can find the VA file. You need VisualApplets 3.1 to use it. If you don't have a license for the Parameters library or the debugging operator library, you need to remove those operators. The serial communication will still work correctly.

    Hello VisualApplets community


    Today I would like to present one of the most complex VisualApplets implementations ever made: a logic analyzer fully programmed in VisualApplets, with graphical output visualization and text overlays. The applet can be used entirely in microDisplay without any extra software or hardware. The implementation is open source, so you can include it in your own projects or, if you are keen, dig in to understand and modify it.


    Quick Start: Download the attached HAP file and load it to a marathon VCL. Done!


    Table of Contents:

    1. Features
    2. Application Areas
    3. Installation, Usage and Requirements
    4. Implementation

    1. Features

    The following screenshot shows the usage in microDisplay:

    pasted-from-clipboard.png


    The applet has the following functions:

    • latching of digital signals at the frame grabber IOs or internal sources
    • Function generators
      • define the period
      • sequence length
      • downscale
      • delays
    • output of the signals to the frame grabber IOs
    • Logic Analyzer functionality - the usage follows the style of any other software or hardware logic analyzer, which makes it easy to use
      • edge sensitive trigger
      • resolution in the range of 16ns to 32sec
      • 8 channels with arbitrary selection of signals
      • text output for settings and cursor positions
    • statistics module to count signals and measure min/max periods

    2. Application Areas

    Debugging applets and system setups requires a logic analyzer to answer questions like:

    • What does my signal look like?
    • Is there an encoder jitter?
    • Is my trigger output correct?
    • Is the number of trigger inputs the same as the outputs? Is there a frame response for every trigger?

    You have two options to use the logic analyzer:

    1. Use the ready-made applet and connect the frame grabber IOs to your encoder, or use a crosslink cable to monitor the signals of a second frame grabber. The following pictures show examples of crosslink cables. We can send you instructions on how to make them on request.
      LoopbackFrontIO.png LoopbackInternal.png
    2. Use the logic analyzer as a VisualApplets hierarchical box and integrate it into your own design.

    Application Example 1: Emulate an Encoder Signal and Monitor the Behavior of an AcquisitionApplet

    One possible application is the emulation of encoder signals. The generators in the applet can emulate an encoder signal with two traces A and B. Moreover an image trigger signal can be emulated.

    Using the crosslink cables you can then send the emulated signals to your device under test, e.g. an AcquisitionApplet. Configure the applet to your requirements.

    Next, monitor the applet's LVAL and FVAL signals to see if the triggering works correctly.

    Application1_encoderEmulation.png


    Application Example 2: Monitor the Correct Function of the Light Controller Applet

    In the following screenshot you can see the output of an applet I made recently. Its purpose is to control three different lights from one trigger with different exposure times. By monitoring the signals it could be verified that the applet works correctly.

    pasted-from-clipboard.png

    3. Installation, Usage and Requirements

    Attached you can find a HAP file "HardwareTest_LA_V32.hap" for mE5 marathon VCL frame grabbers. You can directly copy the file into your runtime %SISODIR5%\Hardware Applets\mE5-MA-VCL directory, flash it to the frame grabber and load it in microDisplay.


    The applet makes extensive use of the operators from the Parameters library, so it can be used easily.

    All relevant parameters to setup the applet are in the parameter tree category "Parameters". Do not change any of the implementation parameters.

    pasted-from-clipboard.png


    In the following, a short explanation of the parameters is given:

    • Parameters -> Logic_Analzer
      This category contains all relevant parameters for setting up the logic analyzer. The values are very similar to those of software and hardware logic analyzers, so usage is easy.
      • TriggerMode
        Start = continuous grab on each trigger. After a timeout without triggers, the applet runs in untriggered free-run mode.
        Stop = triggering is stopped. No more signals are captured.
        Single Shot = the logic analyzer waits for the next trigger condition. As soon as the sequence after the condition has been recorded, recording stops. You need to stop and restart the trigger mode.
      • TriggerTimeout
        Time after which the logic analyzer switches to FreeRun mode. Only available in TriggerMode = Start.
      • TriggerEdge
        Rising or falling edge.
      • Trigger Channel
        Select the trigger channel. Each trigger condition is marked with the white arrow in the waveform window.
      • xDiv
        Scales the time of one x-division, i.e. the duration of one of the 10 waveform areas.
      • Cursor 1 and Cursor 2
        Positions of the two blue cursor lines and the measured times.
      • Shift
        Shift the trigger position to the beginning or end of the waveform window
      • Channels
        For each of the 8 channels you can arbitrarily select a source. An explanation of the sources can be found below.
    • Parameters -> Inputs
      Define the polarity of the digital inputs and set a debounce time.
    • Parameters -> Generators
      The generators are used for internal signal generation. You can use up to four independent frequency generators. Define the period and sequence length for each generator.
    • Parameters -> Pulse_Form
      The generated signals can get a pulse width, delay and downscaling.
    • Parameters -> Outputs
      Arbitrary mapping of the generated signals or digital inputs to the digital frame grabber outputs is possible.
    • Parameters -> Statistics
      The applet includes 8 statistics modules which can count pulses and measure the current, minimum and maximum period as well as the signal width. This is, for example, a very good way to detect encoder jitter.
    • Other Parameters
      The logic analyzer was added to the existing frame grabber test applet. For a description of this applet check the applet documentation in http://www.siliconsoftware.de/…es%20hardware%20test.html

    If you want to include the H-Box into your own designs simply copy it and connect signal sources. The output needs to be directly connected to a DMA channel.

    pasted-from-clipboard.png

    It has all parameters of the logic analyzer category explained above.


    Note the following requirements:

    • At minimum Runtime 5.6.1 is required. In Runtime 5.6 microDisplay will not be fast enough.
    • If you want to build the HAP file on your own you need
      • VisualApplets 3.1 or higher (use VisualApplets 3.2 to set units in the parameter translates and references)
      • Parameters library extension

    4. Implementation

    Attached you can find the VisualApplets files. As mentioned, this is one of the most advanced applets ever made with VisualApplets, and VisualApplets was certainly not made for this type of FPGA implementation. However, it shows the great, almost unlimited possibilities you have with VisualApplets.


    Even though it is one of the most complex VisualApplets designs (2272 modules), the resource consumption is low. The H-Box LogicAnalyzer requires only 6300 LUTs, 37 BRAMs and 2 ALUs.

    Hi Saito

    Welcome to the VisualApplets forum!


    During a DMA transfer the width of each individual line is discarded; only the frame data is stored in PC memory. Using FG_TRANSFER_LEN, the number of bytes of the transfer can be obtained, but any line length information is lost.
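
    For illustration, a host-side sketch of how the byte count can be queried with the runtime SDK. Treat the exact function signatures and the value type expected for FG_TRANSFER_LEN as assumptions and check them against the SDK headers:

```c
#include <stddef.h>
#include "fgrab_prototyp.h"   /* Silicon Software runtime SDK */
#include "fgrab_define.h"

/* Sketch: query the byte count of the frame that just arrived on DMA 0.
 * fg and mem are assumed to come from the usual Fg_Init / Fg_AllocMemEx /
 * Fg_AcquireEx setup. */
static size_t get_transfer_len(Fg_Struct *fg, dma_mem *mem, frameindex_t lastPicNr)
{
    frameindex_t picNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr + 1,
                                                       0 /* DMA index */,
                                                       10 /* timeout */, mem);
    size_t transferLen = 0;   /* check the expected value type in the headers */
    if (picNr > 0)
        Fg_getParameterEx(fg, FG_TRANSFER_LEN, &transferLen,
                          0 /* DMA index */, mem, picNr);
    return transferLen;       /* bytes of this transfer; line widths are not included */
}
```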


    So information about the image dimensions has to be appended to the image as a trailer.


    Solution 1:

    In the post "Extract ROIs from linescan images using a Blob Analysis", images of different sizes are cropped using the ImageBufferMultiRoIDyn operator. I think most of your questions are answered in that thread.


    Solution 2:

    Instead of the solution presented above, you can measure the current image width and height just before the DMA transfer.

    pasted-from-clipboard.png

    pasted-from-clipboard.png

    See the attached VA design.
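
    If your design appends the measured width and height at the end of the transferred data, the application can read them back after the transfer. The trailer layout below (two 32 bit values as the last 8 bytes) is purely hypothetical and only meant as an illustration; adapt it to whatever your design actually appends:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical trailer: the VA design appends width and height as two
 * 32-bit values at the very end of the DMA buffer. */
typedef struct {
    uint32_t width;
    uint32_t height;
} img_trailer_t;

static img_trailer_t read_trailer(const uint8_t *buf, size_t transfer_len)
{
    img_trailer_t t;
    memcpy(&t, buf + transfer_len - sizeof t, sizeof t);
    return t;  /* image payload = transfer_len - sizeof t bytes */
}
```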


    microDisplay:

    In microDisplay, variable image dimensions cannot be displayed. Because microDisplay is only a test tool, you could expand each image to a constant size with SyncToMax and SyncToMin. However, this is only for testing, as it requires more DMA bandwidth:

    pasted-from-clipboard.png


    Let me know if you have further questions.


    Johannes

    Hi Pierre,


    Welcome to the forum!

    In VisualApplets, an acquisition process is reset and restarted when the acquisition is started or stopped.

    A process without a DMA channel will be started immediately when the applet is initialized.


    Unfortunately, the AppendImage operator has no reset parameter or clear input, so the only way to reset its concatenation is to restart the process, i.e. restart the acquisition.


    It might be possible to rebuild the function using the operator SignalGate. But this needs exact verification in hardware and, as signals are used, it cannot be simulated in VisualApplets. The timing is crucial. I made a short sketch to show the idea. I am not sure this will work, but I am confident that a workaround is possible.

    pasted-from-clipboard.png


    We have another thread where the reset of processes is discussed. See: Reseting operators, image buffer and signals


    Johannes

    A similar but much simpler example uses the same camera input but provides one interleaved BGR output and one IR output on separate DMA channels.


    The attached VA design can be built directly and used in microDisplay. The attached example shows how to synchronously grab the images and write RGB and gray TIFF files to disk.
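
    This is not the attached example itself, just a sketch of the idea of synchronous grabbing from the two DMA channels with the runtime SDK. The function signatures are assumed from the usual Fg_getLastPicNumberBlockingEx / Fg_getImagePtrEx calls and should be checked against the SDK headers:

```c
#include "fgrab_prototyp.h"   /* Silicon Software runtime SDK */

/* Sketch: fetch the same frame number from DMA 0 (BGR) and DMA 1 (IR)
 * so the two outputs stay synchronized. fg, memBGR and memIR are assumed
 * to be set up with Fg_AllocMemEx and started with Fg_AcquireEx. */
static void grab_pair(Fg_Struct *fg, dma_mem *memBGR, dma_mem *memIR,
                      frameindex_t *lastNr)
{
    frameindex_t nr = Fg_getLastPicNumberBlockingEx(fg, *lastNr + 1,
                                                    0 /* DMA 0 */, 10, memBGR);
    if (nr <= 0)
        return;

    /* wait until the same frame number is available on DMA 1 */
    if (Fg_getLastPicNumberBlockingEx(fg, nr, 1 /* DMA 1 */, 10, memIR) >= nr) {
        void *bgr = Fg_getImagePtrEx(fg, nr, 0, memBGR);
        void *ir  = Fg_getImagePtrEx(fg, nr, 1, memIR);
        /* ... write bgr as an RGB TIFF and ir as a gray TIFF here ... */
        (void)bgr;
        (void)ir;
    }
    *lastNr = nr;
}
```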


    pasted-from-clipboard.png


    pasted-from-clipboard.png


    Enable the built-in simulator so you don't need a camera.


    BR

    Johannes

    Hello Sangrae Kim


    welcome to the forum.


    There are two examples for bilinear linescan cameras in VisualApplets 3.1


    Advantages of the Bayer De-Mosaicing in the frame grabber instead of the camera are:

    • Higher bandwidth: transfer the RAW Bayer data over Camera Link and expand it to full color only inside the frame grabber, where the high-speed DMA channel carries it to the PC.
    • Higher quality: the frame grabber can do a very high quality Bayer de-mosaicing. With VisualApplets you can use any algorithm you like.

    We also have AcquisitionApplets for these cameras to use them on A-Series.


    You are asking about camera EV71YC4CCP1605-BA0 from E2V. This CXP camera can only output RGB color, not RAW Bayer data. In this case you can directly use the RGB data from the camera and don't need any color interpolation. Unfortunately, the advantages of FPGA processing on the frame grabber cannot be used with this camera. See the camera manual "User Manual ELIIXA+ 16K/8K CXP Color", Section 2.2.


    Best regards

    Johannes

    Hi Aron,


    microDisplay can only display images in which all lines have the same length. This is the line length you define at the DmaToPC operator, which by default is taken from the link properties.

    The only option you have is to expand the line length of the blob output to the line length of the image. However, this generates an enormous data overhead in the DMA transfer, so I do not recommend this solution.

    Anyway for demo purposes in microDisplay it might be OK. See the attachment

    pasted-from-clipboard.png


    Johannes

    Hi Aron


    A DMA transfer always discards the line length information. The software application needs to know the image dimensions and the pixel format of the transferred image. microDisplay uses the link properties of the VisualApplets design to get this information.

    In your case you used SetDimension and defined the output format to 8192 x 6 @ 8 bit.

    pasted-from-clipboard.png


    If you change the dimensions to 2 x 65536 microDisplay will display your results as you need.

    pasted-from-clipboard.png

    Note that this change does not alter anything in the implementation or the DMA transfer; it only changes the way microDisplay shows the DMA result. The number of bytes transferred stays the same.

    Johannes

    Hi Lothar


    You can use 12 bit DMA output on marathon and ironman frame grabbers without limitations. On marathon, the product of parallelism and bit width has to be a multiple of 8.

    So parallelism 2, 4 and 8 are allowed, among others. Parallelism 16 is not allowed, as 16 x 12 bit would exceed the 128 bit DMA width. To use the maximum bandwidth of the marathon, you need to cast the pixels to 8 bit and then use PARALLELdn to reach parallelism 16 (16 x 8 bit = 128 bit).

    pasted-from-clipboard.png


    The PARALLELdn might add dummy pixels at the end of each line. You can append all image lines into a single line to avoid this.

    pasted-from-clipboard.png


    microDisplay cannot show this result correctly, as it displays the data as 8 bit instead of 12 bit, and the full image appears as a single line.
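
    On the PC side the packed 12 bit data therefore has to be unpacked by the application. A minimal sketch; the nibble order used here is an assumption and should be verified against the actual packing, e.g. with a test pattern:

```c
#include <stddef.h>
#include <stdint.h>

/* Unpack two 12 bit pixels from every 3 bytes of the DMA buffer.
 * Assumed layout: pixel 0 = byte0 + low nibble of byte1,
 *                 pixel 1 = high nibble of byte1 + byte2. */
static void unpack_12bit(const uint8_t *src, uint16_t *dst, size_t pixel_pairs)
{
    for (size_t i = 0; i < pixel_pairs; ++i) {
        uint8_t b0 = src[3 * i + 0];
        uint8_t b1 = src[3 * i + 1];
        uint8_t b2 = src[3 * i + 2];
        dst[2 * i + 0] = (uint16_t)(b0 | ((b1 & 0x0Fu) << 8));        /* bits 0..11 */
        dst[2 * i + 1] = (uint16_t)((b1 >> 4) | ((uint16_t)b2 << 4)); /* bits 0..11 */
    }
}
```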


    Internal note: FR 8543


    BR

    Johannes