Posts by B.Ru

    Hi,


    Yes, you can use a LUT to translate grey values into 24-bit pseudo-color in VA.

    I would recommend getting the colormap from numpy/matplotlib as a text file.
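
    Just to illustrate the idea, here is a host-side sketch of the mapping such a LUT performs (not VA operator code; the struct and function names below are my own):

        #include <array>
        #include <cstdint>

        // Illustrative sketch: an 8-bit grey value indexes a 256-entry RGB table,
        // producing a 24-bit pseudo-color pixel. In VA the same table would be
        // loaded into the LUT operator (e.g. exported from matplotlib as text).
        struct Rgb { std::uint8_t r, g, b; };

        inline Rgb pseudoColor(std::uint8_t grey, const std::array<Rgb, 256>& lut)
        {
            return lut[grey];
        }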


    Within the VA simulation you can directly use the color map feature in the simulation probe.
    You only need to enable "color map".

    Best regards,

    Dear Baoxi,


    You can use the built-in ImageBuffer Width/Height + X/Y Offset parameters to apply a region of interest to the image.
    There are more operators that can apply a region of interest, such as DynamicROI or ImageBufferMultiROI(dyn) ...


    If the region of interest is not rectangular, please let us know.


    Best regards,

    Dear Sangrae_Kim,


    You would like to use a CameraLink camera in CL Medium or Full mode.
    Since the CL Medium and CL Full camera operators do not support 14 bit directly, it is required to acquire 6x8 bit (Medium), 8x8 bit (Full) or 10x8 bit (Deca, Full 10-tap) and assemble the 14-bit data from those 8-bit portions.
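
    Just to illustrate the bit assembly (a rough sketch only; the actual byte/bit order is camera specific and has to be taken from the camera's manual, and in the applet this is done with VA operators rather than code):

        #include <cstdint>

        // Example: one 14-bit pixel assembled from two 8-bit tap values,
        // low 8 bits from the first byte, upper 6 bits from the second byte.
        inline std::uint16_t assemble14(std::uint8_t lowByte, std::uint8_t highByte)
        {
            return static_cast<std::uint16_t>(lowByte) |
                   (static_cast<std::uint16_t>(highByte & 0x3Fu) << 8);
        }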


    The sensor tap geometry requires a line-based re-arrangement, which can be solved using BlockRAM.


    So the following steps are required:

    • Decide on the CL mode: Medium/Full/Deca
    • Based on that, assemble 14 bit per pixel from the 8-bit portions per tap
    • Then implement the sensor tap geometry

    As soon as you have decided which CL mode is the preferred choice, please post an image acquired in that mode (with an 8-bit applet) here. That image will look really strange due to the 8-bit depth and the missing sensor tap geometry re-sorting. Please be aware that the resolution reported in the 8-bit applet will differ when the camera is acquiring in its 14-bit mode.


    Based on that image and the precise description from the camera's manual we can take the next step.

    Thank you.


    Best regards,

    Dear Pierre,


    It would be simpler...

    Could you post a design for that ?

    A very similar approach: use CreateBlankImage + an H-Box with FG_Time, but then append it at the end of the next frame via InsertImage.
    Switching between the different cases is required:
    if a trigger was received, InsertImage should forward it; in all other cases it should not, or the frame should be marked correspondingly.


    Best regards,

    Hello Pierre,


    Concerning your questions:


    Where is the doc on the SDK side ?

    • I can provide some code for this and adapt the data format together with you, based on what we design.

    This way, I could

    -read the Monitor FIFO from my apc callback, so that I will always be able to get reliable information and will support frame loss easily

    -instead of reading "FG_TIMESTAMP_LONG", I could use the 64b pseudo-timestamp from the monitor

    -detect that an event occurred without relying on the EventToHost at all

    Is it correct ?

    • you can retrieve the event-related dataset (FG_Time and the things we add ...)
    • you can read the "current FG_Time" from the framegrabber when you receive the APC.

    Would it impact performances a lot ?

    • This will not affect the performance of the applet or the software (SW) in practice. The SW needs to read those values, but register reads are really fast: I would estimate a response/read time of ~300 µs (see the sketch below).
    • The amount of required FPGA resources is minimal.
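
    As a minimal sketch of the host side (assuming the standard Fg library APC mechanism; the callback, struct members and variable names below are placeholders, and the design-specific monitor registers are left out):

        #include <fgrab_struct.h>
        #include <fgrab_prototyp.h>
        #include <fgrab_define.h>
        #include <cstdint>
        #include <cstdio>

        // User-defined payload handed to the APC callback.
        struct fg_apc_data {
            Fg_Struct*   fg;        // grabber handle
            unsigned int dmaIndex;  // DMA port the APC is registered on
        };

        // Called by the runtime for each transferred frame.
        static int myApcCallback(frameindex_t picNr, struct fg_apc_data* apc)
        {
            std::uint64_t fgTime = 0;  // 64-bit frame grabber time stamp
            if (Fg_getParameter(apc->fg, FG_TIMESTAMP_LONG, &fgTime, apc->dmaIndex) == FG_OK)
                std::printf("frame %lld, FG time %llu\n",
                            (long long)picNr, (unsigned long long)fgTime);
            return 0;
        }

        // Registration (error handling omitted):
        //   struct FgApcControl ctrl;
        //   ctrl.version = 0;
        //   ctrl.func    = myApcCallback;
        //   ctrl.data    = &apcData;
        //   ctrl.timeout = 4;
        //   ctrl.flags   = FG_APC_DEFAULTS;
        //   Fg_registerApcHandler(fg, dmaIndex, &ctrl, FG_APC_CONTROL_BASIC);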

    We should go into a VA coaching session of ~1h and we will find a nice solution for your request. We can add a lot of additional details:

    • Trigger/Image Count difference within VA
    • Image receive start time
    • DMA transfer start
    • Generate Trigger-Event for rising/falling edge


    Best regards,

    Hello Pierre,


    Thank you for your detailed explanation of the requirements.


    -An awful solution (for me) would be to add another DmaToPC in order to send bytes instead of "event" in case of tag signal. ...


    Any help is welcome.

    really want to avoid modifying the image pixels to store information, or even worse (for me), adding data at the end of the image.

    From my point of view adding a header or footer to a DMA Transfer is a possible solution,

    but since you do not like/prefer this approach I would like to make the following proposal:


    There is a simple and reliable way of getting FG related timestamps:


    You can use the ImageMonitor operator...

    FGevent_Time.png


    Above you can see an example of event generation and frame grabber (FG) time stamp transfer.
    You register a software callback so that you have a function that gets called when the edge on the trigger input occurs.


    A very precise FG timestamp is sampled for each trigger and waits in a FIFO until the ImageMonitor (see the documentation) reads it. You can add more data blocks (image number, counters, ...) and use several of these in the design. The operator is named ImageMonitor, but here we simply use it as a data "structure".


    The event data (FG time stamp) can be read through register/ImageMonitor interface. If you want to have some sample C++ code for that feel free to contact me.
    FGevent_Clocker.png


    Above you can see how an FG time can be generated using a 64-bit counter... You can modify the data type to your personal preference.
    The current FG time is exposed via GetStatus to the register interface.
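
    As a rough host-side sketch of what to do with that 64-bit value (the design clock frequency used here is only an assumption and depends on the platform/applet; adjust it to the counter settings of your design):

        #include <cstdint>

        // Convert the 64-bit tick counter to microseconds, assuming the counter
        // increments once per design clock cycle at 125 MHz (platform dependent).
        constexpr double kDesignClockHz = 125.0e6;

        inline double ticksToMicroseconds(std::uint64_t ticks)
        {
            return static_cast<double>(ticks) / kDesignClockHz * 1.0e6;
        }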


    VA DESIGN: FG_TimeStamps_TimeDataForEvent.va


    In case of questions do not hesitate to contact me.


    Best regards,
    Björn

    Hello,


    If two asynchronous signals need to be made synchronous to each other, they require exactly the same mean frequency, so that no pulse gets lost.
    If the frequencies are identical and there is no drift causing a phase jump, you could use the RsFlipFlop...
    That would be the simplest approach.

    Normally you will see a different situation, especially when the external signal changes its frequency or there is some "distortion" on the signal.

    Then more complex approaches come into place:

    One starting point could be the SignalToPeriod operator:
    Signal.SignalToPeriod


    Measure both periods, find the max/min/mean using filters (PixelNeighbour/Kernel) and then go for Signal.PeriodToSignal...


    For the measurement itself you can use rising or falling edge.
    The same applies to a phase A/B encoder that you need to sync to a certain generator.

    Please provide some more details and we can focus on a more suitable approach...


    Best regards,

    Here is a screenshot sequence...


    1) Design with connected Probe "Sequence" of interest:


    pasted-from-clipboard.png


    2) Go to the Simulation dialog, enter the required number N of processing cycles, and press Start to proceed:


    pasted-from-clipboard.png


    3) Close the simulation dialog and switch into the probe view by double-clicking it:


    pasted-from-clipboard.png


    4) Click on an image within the "Sequence Viewer" at the bottom and press Ctrl+A:


    pasted-from-clipboard.png


    5) Right-click within the "Sequence Viewer" and select "Save" to save all images...

    The selected name will get a counting image number N as an extension, in the format "filename%5d", N.


    Best regards,

    Hi Oliver and Jesse,


    It was a pleasure to discuss the VA design approach with both of you.

    One additional hint:

    • In case the ImageBufferMultiROI(dyn) is not flexible enough to generate additional kinds of overlapping image regions, you could potentially use the FrameBufferRandomRead operator. One trade-off is the limitation to a one-to-one relationship between input and output images, but it generally enables an additional left/right shift including overlap.

    One application would be auto-follow of a certain image region.


    Best regards,

    Hi Jesse Lin,


    Thank you for your feedback.

    And I try to use other methods to remove unwanted frames, but the buffer will overflow. Camera line rate = 10 kHz.


    Please give me some advice.


    Based on your input the line width is 8k = 8192 pixels per line and you see a fill level of 75% at a line rate of 10 kHz.
    A certain fill level is OK, but we need to look into it based on the resulting/required bandwidth.
    One possible bottleneck is the parallelism within the links, especially:

    • Memory Interface
    • DMA performance

    The CXP camera interface is specified for up to 2 GB/s:


    pasted-from-clipboard.png


    Since you nearly double the bandwidth due to the overlap, please consider using a higher (up to double) parallelism in the links.


    The DMA will accept a connection of up to 16x 8 bit = 2 GB/s theoretically; in practice it will be limited to about 1800 MB/s.
    The memory interface can handle up to 11.2 GB/s (shared between all RAM modules), and read and write accesses add up.


    Looking into your design, I would recommend the approach below to generate the overlap on the basis of a single RAM:


    pasted-from-clipboard.png

    That would save RAM modules/FPGA resources, and you do not need to write the same data twice...

    Related VA design: LineScan_OverlappingAreaImages.va


    Based on the ROI settings within BufferX2 you can reduce the overlapping part easily...
    I have seen that you use a more complex approach for deciding when to overlap, but that is something you can adapt into this design.
    Just consider using the DynamicROI operator if you require a more dynamic approach to the above functionality.
    ImageBufferMultiROIdyn would be a second valid candidate.
    Enjoy.


    Best regards,

    Hi Jesse,

    Please be aware that a following image is required to run the second (lower) channel of "module26".

    Taken from:

    https://docs.baslerweb.com/liv…nization.InsertImage.html


    "The operator InsertImage multiplexes a number of n input links I[0] .. I[n-1] into the output link O. Thus the operator outputs the input images of all inputs in sequential order at O.

    The operator forwards the input images to the output one after the other. First, input link I[0] is processed, next link I[1]. After link I[n-1] the operator starts with the next image at I[0] again etc. The operator waits until an image at a currently selected input is present. Therefore it is not possible to skip inputs."...
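
    As a conceptual model of that round-robin behaviour (this only illustrates the documented semantics, not the operator's implementation; all names below are my own):

        #include <cstddef>
        #include <optional>
        #include <queue>
        #include <vector>

        using Image = std::vector<unsigned char>;

        // One "step" of InsertImage: inputs are served strictly in order; if the
        // currently selected input has no image, nothing is forwarded and the
        // operator keeps waiting for that input - it can never skip an input.
        std::optional<Image> insertImageStep(std::vector<std::queue<Image>>& inputs,
                                             std::size_t& current)
        {
            if (inputs[current].empty())
                return std::nullopt;                  // blocked on this input
            Image out = std::move(inputs[current].front());
            inputs[current].pop();
            current = (current + 1) % inputs.size();  // advance round-robin
            return out;
        }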


    Best regards,

    Björn

    Hi,


    You can find the solution for removing the unwanted deadlock as a screenshot below and the design here: mE5MA_VCL_Overlap_fixBRudde.va

    pasted-from-clipboard.png


    Since InsertImage will only forward a single channel at a time, the other channels are in a wait (blocked, inhibit, stop, ...) state and require buffering, caused by all channels being sourced by the same M-type ("Sync" in here).
    So RAM1 needs to be in front of module 13.
    Please be aware that a following image is required to run the second (lower) channel of "module26".


    Best regards,
    Björn

    Hi Kevin,

    2) Can a speed up be achieved by splitting up an image into several ImageBufferMultiROIs?

    Using multiple parallel RAMs would give a potential speed-up. Please be aware that a shared-memory concept behaves differently on the different VA platforms. Additionally, the speed-up could be achieved by overlapping regions.
    If you like we can focus on this topic during a conceptual coaching... Feel free to contact me for this.


    Best regards,