My incoming image consists of a uniform background and many (thousands of) dark particles. I first find the blobs in the image and obtain the center of gravity in X and Y of each blob. What I would like to do is then create a "density map" of these blobs as an image output. For instance, if my image is 16000x1000 in size and contains 50,000 blobs, my goal is to create an output image of 160x10 where each 100x100 region of the image is compacted to a single pixel (i.e., the image is reduced in size by 100x100 = 10,000x). Each resulting pixel value should then be the count of blobs in the corresponding 100x100 region of the image. Note that I do not want to count the areas of the blobs, just the center of gravity of each blob.
For instance, if I have 10 blobs in the first 100x100 pixels of the image, then I want the pixel at (0,0) of my output image to have a value of 10. If there are 56 blobs in the next 100x100 region of the image, then pixel (1,0) of my output image should have a value of 56.
I am new to Visual Applets and it is unclear to me how to do this efficiently. I think the design requires me to first create an image buffer representing a black image that is 160x10 in size. Then I need to loop through the found blobs' CoGs and accumulate the buffer (output) image pixel values depending on where each blob is located. I attached an example picture to this post that shows blobs in a 10x10 image and the CoG of each blob as red dots. The red dots are then accumulated over each 2x2 region (analogous to the 100x100 regions in the example above), and the resulting output is a 5x5 count, or density, image.
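To make the intended computation concrete, here is a minimal NumPy sketch of what I want as a reference model (this is just to illustrate the math, not Visual Applets code; the function name and tile size parameter are my own for illustration):

```python
import numpy as np

def density_map(cogs_x, cogs_y, img_w=16000, img_h=1000, tile=100):
    """Bin blob centers of gravity into tile x tile regions and count per region.

    cogs_x, cogs_y: blob CoG coordinates in pixels.
    Returns a (img_h // tile, img_w // tile) array of blob counts.
    """
    out_w, out_h = img_w // tile, img_h // tile  # e.g. 160 x 10
    density = np.zeros((out_h, out_w), dtype=np.uint32)
    # Integer division maps each CoG to its tile index.
    tx = (np.asarray(cogs_x) // tile).astype(int)
    ty = (np.asarray(cogs_y) // tile).astype(int)
    # Unbuffered accumulation so repeated indices each add 1.
    np.add.at(density, (ty, tx), 1)
    return density
```

So with 10 blobs whose CoGs fall in the first 100x100 region, `density[0, 0]` would be 10, matching the example above. The question is how to express this accumulation efficiently in a Visual Applets design.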
Any thoughts on the best way to implement this?