Thank you for your reply.
I want to apply pseudo-color processing to grayscale images. The color model I have in mind is the jet colormap from MATLAB; MATLAB's jet function is shown below.
function J = jet(m)
if nargin < 1
    m = size(get(gcf,'colormap'),1);
end
n = ceil(m/4);
u = [(1:n)/n ones(1,n-1) (n:-1:1)/n]';   % up-ramp, plateau, down-ramp
g = ceil(n/2) - (mod(m,4)==1) + (1:length(u))';
r = g + n;                               % red profile shifted right of green
b = g - n;                               % blue profile shifted left of green
g(g>m) = [];
r(r>m) = [];
b(b<1) = [];
J = zeros(m,3);
J(r,1) = u(1:length(r));
J(g,2) = u(1:length(g));
J(b,3) = u(end-length(b)+1:end);
forum.silicon.software/index.php?attachment/453/
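For experimenting outside MATLAB, the function above can be ported directly. The following C++ sketch is my own port (not MATLAB or Silicon Software code, and the name jetColormap is just illustrative); it builds the same m-entry table of RGB values in [0, 1]:

```cpp
#include <array>
#include <vector>

// Build an m-entry jet-style colormap, following the MATLAB code above:
// a triangular profile u (up-ramp, plateau, down-ramp) is assigned to the
// blue, green and red channels at offsets of n entries, and indices that
// fall outside 1..m are discarded.
std::vector<std::array<double, 3>> jetColormap(int m) {
    int n = (m + 3) / 4;                             // ceil(m/4)
    std::vector<double> u;
    for (int i = 1; i <= n; ++i) u.push_back(double(i) / n);
    for (int i = 0; i < n - 1; ++i) u.push_back(1.0);
    for (int i = n; i >= 1; --i) u.push_back(double(i) / n);

    int base = (n + 1) / 2 - (m % 4 == 1 ? 1 : 0);   // ceil(n/2) - (mod(m,4)==1)
    std::vector<std::array<double, 3>> J(m, {0.0, 0.0, 0.0});
    for (int k = 0; k < (int)u.size(); ++k) {
        int g = base + 1 + k;                        // 1-based index, as in MATLAB
        int r = g + n;
        int b = g - n;
        if (r >= 1 && r <= m) J[r - 1][0] = u[k];
        if (g >= 1 && g <= m) J[g - 1][1] = u[k];
        if (b >= 1 && b <= m) J[b - 1][2] = u[k];
    }
    return J;
}
```

For a fixed display depth you would call this once, e.g. jetColormap(256), and use the result as a lookup table indexed by the gray value.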
Because I don't know much about MATLAB's jet function, and I'm not sure whether it can be reproduced in VisualApplets, I also plan to use a simple piecewise function for the pseudo-color conversion.
For an input grayscale image, each pixel I(i,j) is transformed according to its gray value:
- 0 to 63: R = 0, G = 4 * I(i,j), B = 255
- 64 to 127: R = 0, G = 255, B = 255 - 4 * (I(i,j) - 64)
- 128 to 191: R = 4 * (I(i,j) - 128), G = 255, B = 0
- 192 to 255: R = 255, G = 255 - 4 * (I(i,j) - 192), B = 0
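The four segments can be sketched as a small lookup function. This is a hand-written illustration (the name grayToPseudoColor is mine), and it reads the second segment as B = 255 - 4*(I(i,j) - 64), which keeps the blue channel continuous at the segment borders:

```cpp
#include <array>
#include <cstdint>

// Four-segment rainbow pseudo-color mapping for an 8-bit gray value.
// All intermediate results stay within 0..255, so no clamping is needed.
std::array<uint8_t, 3> grayToPseudoColor(uint8_t v) {
    int r, g, b;
    if (v < 64)       { r = 0;             g = 4 * v;               b = 255; }
    else if (v < 128) { r = 0;             g = 255;                 b = 255 - 4 * (v - 64); }
    else if (v < 192) { r = 4 * (v - 128); g = 255;                 b = 0; }
    else              { r = 255;           g = 255 - 4 * (v - 192); b = 0; }
    return { (uint8_t)r, (uint8_t)g, (uint8_t)b };
}
```

In a VisualApplets design the same mapping would typically be realized with comparison/branch operators or a precomputed LUT rather than C++ code; the function above is only meant to pin down the arithmetic.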
I tried to write a pseudo color VA file (as shown in the attachment), but it didn't work very well. I would appreciate it if you could help me to point out the problem.
I hope you can help me solve this problem. Thank you very much. And I'm really sorry for any trouble caused by the language barrier.
Best wishes
Danna
]]>I'm currently desperate. In my applet I am dealing with very large images (width: 15360 pixels). It often happens that the building blocks of VisualApplets only allow 11 bit (1024 pixels). Is there a possibility to increase the bit width?
I noticed this with the building blocks:
- CreateBlankImage
- CoefficientBuffer
Thanks for the help.
]]>Thank you for your request.
You can use the VisualApplets example "GeometricTransformation_FrameBufferRandomRead.va" for upscaling the image in both directions by a factor of 1.17. This example is DRAM-based, but it may solve your problem:
Right-mouse-click on the module "GeometricTransformation" and set the following parameters under Properties:
Thank you for your question. We use this procedure because only integer values are allowed for the parameters in VisualApplets.
The reference angles chi and alpha are multiplied by a factor of 1024/90 to convert degrees to the fixed-point parameter value,
e.g. 90° = 1024 and 27.77° = 316.
The multiplication of the distance by a factor of 2^2 = 4 is likewise due to the fixed-point arithmetic and corresponds to the sub-pixel accuracy.
This means: 2^2 = 4 corresponds to a distance of 1. If you want to set a distance Z of 0.5, you use the value 2 in VisualApplets.
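The two conversions can be written down as small helpers; the function names here are just for illustration:

```cpp
#include <cmath>

// Angle in degrees -> integer fixed-point parameter, with 90 degrees == 1024.
int angleToParam(double degrees) {
    return (int)std::lround(degrees * 1024.0 / 90.0);
}

// Distance -> integer fixed-point parameter with 2 bits of sub-pixel
// accuracy, so a distance of 1 corresponds to the value 2^2 = 4.
int distanceToParam(double distance) {
    return (int)std::lround(distance * 4.0);
}
```

For example, angleToParam(27.77) gives 316 and distanceToParam(0.5) gives 2, matching the values quoted above.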
]]>Thank you for your request. Our support team will contact you.
]]>Thank you for your update.
I have confirmed the function in "GeometricTransformation_FrameBufferRandomRead.va".
Thanks.
]]>In the log I found:
That at least indicates that 4 PCIe lanes are used, with a PCIe payload size of 128 bytes.
Given these facts, your system should deliver more than 600 MB/s of DMA performance.
You can run a DMA performance test in microDisplay manually if you want to:
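As a plausibility check, the expected figure can be estimated from the link parameters. The lane rate and packet overhead below are illustrative assumptions (PCIe Gen1 with roughly 250 MB/s per lane after 8b/10b encoding and roughly 24 bytes of TLP overhead per 128-byte payload); the board's actual link generation is not stated in the log:

```cpp
// Rough DMA bandwidth estimate: raw link bandwidth multiplied by the
// payload efficiency of the packet format. Lane rate and overhead are
// assumptions for illustration only.
double estimateDmaBandwidthMBs(int lanes, double laneMBs,
                               int payloadBytes, int overheadBytes) {
    double raw = lanes * laneMBs;
    double efficiency = (double)payloadBytes / (payloadBytes + overheadBytes);
    return raw * efficiency;
}
```

With 4 lanes at 250 MB/s and a 128-byte payload this lands around 840 MB/s, comfortably above the 600 MB/s quoted above; a measured value far below that would point to a configuration or host-side problem.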
Yes, I understood and thank you for your answer. I found the solution thanks to your information.
]]>You can simply adapt the distortion approach to the scaling details.
]]>Dear zhuomuke,
In case you would go for a mE5-MA-VCX-QP platform, you could save 2 of the 6 image buffers due to the shared-memory approach of this platform. So you would require 4 buffers and have 4 available. That would be a perfect fit.
Would it be useful to post some C++ code on how to interpret the 12-bit data correctly in the context of a display example?
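For illustration, such code could look like the following sketch. It assumes the 12-bit values arrive unpacked in the low 12 bits of 16-bit words (a common DMA transfer format, but check the actual output format of the applet) and simply keeps the 8 most significant of the 12 bits for an 8-bit display:

```cpp
#include <cstddef>
#include <cstdint>

// Interpret 12-bit pixel data stored one value per 16-bit word and reduce
// it to 8 bit for display: mask to the 12 valid bits, then drop the
// 4 least significant bits.
void convert12To8(const uint16_t* src, uint8_t* dst, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        uint16_t v = src[i] & 0x0FFF;  // keep the 12 valid bits
        dst[i] = (uint8_t)(v >> 4);    // 12 -> 8 bit
    }
}
```

If the applet instead packs two 12-bit pixels into three bytes, an unpacking step would be needed first; the shift-based reduction stays the same.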
Please check the DMA for the VA Design manually in microDisplay.
So you have to build the design with a CreateBlankImage approach in place of a camera.
Using SourceSelector would enable you to include the camera and the simulator at the same time.
The CreateBlankImage operator would deliver as much data as the DMA can transfer.
In microDisplay(X) you can see the obtained bandwidth and use this as a good indicator for the possible DMA performance.
In case of further questions on this do not hesitate to contact me.
The amount of FPGA-external RAM resources is currently restricted to a maximum of 4.
In case of symmetric configurations and synchronization between the cameras, an approach with InsertLine and AppendLine plus ImageFIFOs directly behind the cameras may make it possible to write all camera image stream data into a single RAM resource. That is additionally a question of the required bandwidth, because a single buffer then needs to handle all 4 camera streams at the same time.
]]>Hm. I can't see that it is flipped from your CL spec image.
Upon looking at it again, I can see you are correct. I must have been confused.
If you set the VisualApplets operator to 8 taps, it will not use the ninth pixel.
I was wondering about that. It seems odd that there would be more pixels than taps.
I've added a VA sample. It is a little simpler than the examples in the VA installation directory but does the same, using a little more FPGA resources.
Thank you very much, this will be very helpful! I really appreciate the help
I should consider the camera's peak transmission rate.
Thank you for your detailed guidance.
You want to output the pulse at the end of the frame, correct? So if the blob ends at line 545, you still want the output at the end of the frame, correct?
YES
And now it's achieved by using "IsLastPixel".
thank you so much!
Oliver
]]>If you don't want the value of GetStatus to be overwritten by the next image, you can do the following:
use a FIFO, use ImageValve with SetSignalStatus at the gate input, then use GetStatus.
You can now read the values one after another. The software needs to be fast enough to avoid a FIFO overflow.
Operator ImageMonitor can also be used for this.
BR
Johannes
I tested nvJPEG library.
The results are after decoding on the GPU (50% JPEG compression).
** Test Result : 2048 x 2048 = 150fps , 2048 x 1024 = 295fps
For the information,
The nvJPEG image buffer is planar, consisting of separate R, G and B channels.
So we need to merge the channels again on the CPU side to get a 24-bit image.
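A minimal CPU-side merge can be sketched as follows (a hand-written example, not nvJPEG API code; if I remember correctly, nvJPEG can also emit interleaved output directly via formats such as NVJPEG_OUTPUT_RGBI, which would avoid this pass entirely and is worth checking before optimizing here):

```cpp
#include <cstddef>
#include <cstdint>

// Merge three planar 8-bit channels into one interleaved 24-bit RGB
// buffer; dst must hold pixels * 3 bytes.
void mergePlanarToRGB(const uint8_t* r, const uint8_t* g, const uint8_t* b,
                      uint8_t* dst, size_t pixels) {
    for (size_t i = 0; i < pixels; ++i) {
        dst[3 * i + 0] = r[i];
        dst[3 * i + 1] = g[i];
        dst[3 * i + 2] = b[i];
    }
}
```

Because the loop is memory-bound, splitting it across threads over row ranges is a straightforward way to speed it up if the single-threaded merge turns out to be the bottleneck in Step 4.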
I don't have enough experience with GPU processing and multi-core processing.
Step 1 : FPGA(encoding) Got Full speed (2048 x 2048 = 150fps)
Step 2: GPU(decoding) Got Full speed (2048 x 2048 = 150fps)
** Without copying memory from GPU to host
Step 3: Copy the image data of each channel from GPU to host memory (decreased frame rate).
** Copying just the red channel: 2048 x 2048 = 140 fps; copying the green and blue channels as well decreases the frame rate further.
Step 4: RGB Color merge
This is the test result so far.
I need more analysis of Steps 3 & 4 to increase the speed.
Thanks.
]]>