Posts by cwalker888

    Testing this in VA gives the expected behavior -- thank you. In my application, I apply a smoothing filter that requires the image to be at least as tall as the kernel. If an acquired image is too short for the filter to process, what is the best way to handle this in VA? In my case, I could probably just "ignore" the processed result while still sending the final image data (the last few pixel rows, basically) to the host PC with the "final image" flag. Is there a way to check whether the incoming image is large enough to be processed, and to choose not to process it based on its size?
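
    To illustrate the behavior I am after, here is a host-side C# sketch only -- the real question is how to express this guard in the applet; ProcessFrame and the Smooth stub are hypothetical names:

    static byte[,] ProcessFrame(byte[,] frame, int kernelRows)
    {
        // Guard: if the acquired frame has fewer rows than the kernel,
        // skip the filter and pass the raw rows through unprocessed
        // (they would still be sent with the "final image" flag).
        if (frame.GetLength(0) < kernelRows)
            return frame;
        return Smooth(frame, kernelRows);
    }

    // Stand-in for the real smoothing filter (hypothetical).
    static byte[,] Smooth(byte[,] frame, int kernelRows) => frame;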

    In my application, linescan cameras acquire and send 2D images to my host PC, where they are concatenated. The final image must be as long as the camera-trigger photoeye signal is ON. Once the photoeye turns OFF, the camera should complete the final 2D image acquisition. I need the host PC to know when the final 2D image has been sent OR when the trigger photoeye signal turns OFF, so that it can identify the last acquired image and finalize the concatenation. I can see two possible ways to do this:

    1. Use an acquisition-enable signal (i.e. analyze a Trigger 5 board input signal) to gate acquisition, and send a serial number out with each 2D image. All 2D images with the same serial number are considered part of the same final image. Increment the counter (i.e. the serial number) every time the photoeye signal goes from low to high (i.e. a new image starts). The problem is that the host PC just waits for whatever grabs come from the camera and does not "know" that the final 2D image has been sent until the next low-to-high transition. To overcome this, if the framegrabber could send out one last image on the high-to-low transition, flagged as the final image, the host PC would know that the sequence of images to concatenate is finished (see the sketch after this list). But how to do this?
    2. Use an event in the host PC that monitors the Trigger 5 board input signal from the photoeye. If the photoeye goes from high to low (trailing edge of the part), tell the grab worker that image concatenation is complete. If it goes from low to high, tell the grab background worker that an image is being acquired. This could be combined with a serial number/counter as well. But looking at the SDK documentation, I do not see how to capture the I/O events from the Trigger 5 board in the host PC.
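
    For option 1, the host-side grouping logic I have in mind would look roughly like this C# sketch. It assumes each incoming frame arrives tagged with a serial number and a final-image flag (producing that tag and flag in the applet is exactly the open question); FrameReceived and FinalizeImage are hypothetical hooks in my grab worker:

    using System.Collections.Generic;

    class ImageConcatenator
    {
        private int currentSerial = -1;
        private readonly List<byte[]> parts = new List<byte[]>();

        // Called for every incoming 2D frame (hypothetical grab-worker hook).
        public void FrameReceived(int serial, bool isFinal, byte[] pixels)
        {
            // A new serial number means a new part started: flush the old image.
            if (serial != currentSerial && parts.Count > 0)
                FinalizeImage();

            currentSerial = serial;
            parts.Add(pixels);

            // An explicit final-image flag avoids waiting for the next
            // low-to-high transition of the photoeye.
            if (isFinal)
                FinalizeImage();
        }

        private void FinalizeImage()
        {
            // Concatenate 'parts' into one long image and hand it off (not shown).
            parts.Clear();
        }
    }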

    I appreciate your thoughts on this. Thanks!

    Thank you for providing the OpenCV code. I am using the C# wrapper and am having trouble converting the SisoImage data type into a Mat or a Bitmap image type.


    C++ Code: (your code works)

    char* dataBuffer = (char*)Fg_getImagePtrEx(fg, lastPicNr, 0, pMem0); // getBufferOfPhoto();

    Mat any_img(Size(width, height), CV_8UC1, dataBuffer, Mat::AUTO_STEP);

    imshow( "Display window raw", any_img );


    C# Code: (my code which does not work)

    SisoImage sisoImage = SiSoCsRt.Fg_getImagePtrEx( data.fg, (int)( imgNr ), data.port, data.mem);

    IntPtr dataBuffer = sisoImage.asPtr();

    Image<Gray, Byte> imgToDisplay = new Image<Gray, Byte>( 512, 2048 );

    imgToDisplay.Ptr = dataBuffer;

    CvInvoke.imshow( "Display window raw", imgToDisplay);


    I also tried the following to get a Bitmap, but this did not work either, as ImageConverter cannot convert from the raw buffer pointer:

    Image img = (Bitmap)(new ImageConverter()).ConvertFrom(dataBuffer);
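
    For reference, the direction I am now considering is to wrap the raw pointer in an Emgu CV Mat header instead of assigning Image.Ptr. A minimal sketch, assuming the buffer is tightly packed 8-bit grayscale and assuming the 512x2048 geometry from my code above:

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    IntPtr dataBuffer = sisoImage.asPtr();
    int width = 512, height = 2048;   // assumed geometry, as in my Image<Gray, Byte> above

    // Wrap the buffer in a Mat header without copying; step = width because
    // an 8-bit mono row is 'width' bytes with no padding (my assumption).
    Mat frame = new Mat(height, width, DepthType.Cv8U, 1, dataBuffer, width);

    CvInvoke.Imshow("Display window raw", frame);
    CvInvoke.WaitKey(1);   // let HighGui paint the window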


    Also, SisoCsRt.DrawBuffer() can write images to a display created with SisoCsRt.CreateDisplay(), but is it possible to draw directly to a Panel object on a Form?
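
    To frame that last question: the fallback I know would be ordinary WinForms drawing once the frame has been converted to a System.Drawing.Bitmap, along these lines (a sketch; 'panel' is a hypothetical Panel on my Form):

    using System.Drawing;
    using System.Windows.Forms;

    static void PaintFrame(Panel panel, Bitmap frame)
    {
        using (Graphics g = panel.CreateGraphics())
        {
            // Scale the grabbed frame to fill the panel's client area.
            g.DrawImage(frame, panel.ClientRectangle);
        }
    }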

    Bjorn, this is very helpful. I walked through your applet design; it makes sense and is clever. Thank you!


    As for using the Coordinate_X(Y) operators, I spent a couple of hours this morning looking through example applets and trying to figure out how to use the COG_X(Y) data to build a 2D image where white pixels mark the CoG of each detected blob. However, I keep running into issues. I think the problem is mostly that the COG_X(Y) data is a variable-length 1D stream and does not match the image size. For example, my thought was to create a blank image, derive Coordinate_X and Coordinate_Y images from it, and then write something like IF (Coordinate_X = COG_X AND Coordinate_Y = COG_Y) THEN 1 ELSE 0. But the Coordinate_X(Y) images are 2D images while the COG_X(Y) data is 1D. Any suggestions on this one?
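
    As a host-side illustration of the result I am after (C# only, not applet code; cogX/cogY are assumed to hold the blob CoGs from the blob analysis):

    static byte[,] CogImage(int[] cogX, int[] cogY, int width, int height)
    {
        var img = new byte[height, width];   // blank (black) image
        for (int i = 0; i < cogX.Length; i++)
            img[cogY[i], cogX[i]] = 255;     // white pixel at each blob CoG
        return img;
    }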

    My incoming image consists of a uniform background and many thousands of dark particles. I first find the blobs in the image and get the center of gravity in X and Y of each blob. What I would like to do is then create a "density map" of these blobs as an image output. For instance, if my image is 16000x1000 in size and contains 50,000 blobs, my goal is to create an output image of 160x10, where each 100x100 region of the image is compacted to a single pixel (i.e. the image shrinks by 100x100 = 10,000x). Each resulting pixel value should then be the count of blobs in the corresponding 100x100 region. Note that I do not want to count blob areas, just the center of gravity of each blob.


    For instance, if I have 10 blobs in the first 100x100 pixels of the image, then I want the pixel at (0,0) of my output image to have a value of 10. If there are 56 blobs in the next 100x100 region of the image, then pixel (1,0) of my output image should have a value of 56.


    I am new to Visual Applets and it is unclear to me how to do this efficiently. I think the design requires me to first create an image buffer representing a black 160x10 image, then loop through the found blobs' CoGs and increment the output pixel values according to where each blob is located. I attached an example picture to this post that shows blobs in a 10x10 image with the CoG of each blob as a red dot. The red dots are then accumulated over each 2x2 region (i.e. the 100x100 regions in the example above), and the resulting output is a 5x5 count, or density, map.

    Example blob density map.jpg
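
    For what it's worth, the host-side equivalent of the accumulation I want would be a simple binning loop like this C# sketch (the applet-side implementation is the open question; cogX/cogY and the geometry are assumptions matching the example above):

    using System;

    static ushort[,] BuildDensityMap(int[] cogX, int[] cogY,
                                     int imgWidth, int imgHeight, int bin)
    {
        int outW = imgWidth / bin;    // 16000 / 100 = 160
        int outH = imgHeight / bin;   // 1000  / 100 = 10
        var map = new ushort[outH, outW];
        for (int i = 0; i < cogX.Length; i++)
        {
            int bx = Math.Min(cogX[i] / bin, outW - 1);   // clamp CoGs on the edge
            int by = Math.Min(cogY[i] / bin, outH - 1);
            map[by, bx]++;            // each CoG adds one count to its 100x100 cell
        }
        return map;
    }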


    Any thoughts on the best way to implement this?

    I am new to using Silicon Software VA and hardware. I have an me5VCL board and a Dalsa Linea 16K linescan camera, which supports GenICam. I installed the SDK/Runtime 5.7 and ran the microDisplayX application. When I press the "Start Full Camera Discovery" button, I get the error "Error Executing Gs_restartCxpDiscoveryCycle(...): service connection lost (5)". The board has the default applet "Acq_SingleFullLineGray" loaded, but the camera shows up as "Generic Camera" and I cannot see any of the GenICam parameters.