Dear Saito-san,
thank you for the question. You may use a "LineMemory" operator as it is used in the VisualApplets example "TapSorting_8X_1Y.va" under "Examples\Processing\Geometry\TapGeometrySorting" in the VisualApplets installation directory.
Dear Saito-san,
thank you for your question.
I had a look at the description of the Halcon function "fit_surface_first_order" (https://www.mvtec.com/doc/halc…_surface_first_order.html): this function approximates a first-order plane to a gray-value object by minimizing the distance between the gray values and the plane. The gray-value moments are also calculated.
I think such functionality can be implemented in VisualApplets using the loop functionality and a geometric scaling function within this loop to approximate a plane to an object.
Under https://www.mvtec.com/doc/halc…n/moments_gray_plane.html I found a similar Halcon function: "moments_gray_plane". It also calculates the gray-value image moments and approximates a plane; only the method for the plane calculation is different. Judging from the description, it may be possible to implement it in VisualApplets without the loop functionality. Is this method also of interest for the customer?
Dear JSuriya,
thank you for your request.
Do you refer to the Visual Applets example "PrintInspection_Blob.va"?
Could you please explain "reference points"? Do you mean rotation, translation, and scaling with respect to a certain point (e.g., the center of gravity)?
Of course, instead of rotating around the center of gravity you can choose any other point in the image to perform the transformation.
Dear Saito-san,
thank you for your request.
Maybe a combination of following operators is of interest for you:
1. SignalEdge (RisingEdge): A pulse of one clock cycle is generated when a rising edge of the input signal is detected.
2. SignalToWidth: The operator measures the pulse width of the signal at the input (= 1 in this case). The result is output as a 0D pixel value at the output link.
3. Multiplication by a user-defined factor, which is the period of the output signal.
4. PeriodToSignal: The operator generates a periodic signal at its output. The period time is controlled by the pixel value at input link I. Thus the operator converts the input stream values into a signal with the period provided by the pixel values. The high time of the output pulses is one tick.
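The chain of operators above can be sketched in software as follows. This is only a conceptual illustration of the signal flow, not VisualApplets code; all function names are my own, and the real operators work on streaming signals rather than Python lists:

```python
def rising_edges(signal):
    """SignalEdge (RisingEdge): emit a one-cycle pulse at each 0 -> 1 transition."""
    prev, pulses = 0, []
    for s in signal:
        pulses.append(1 if (s == 1 and prev == 0) else 0)
        prev = s
    return pulses

def pulse_widths(signal):
    """SignalToWidth: measure the width (in ticks) of each high pulse."""
    widths, count = [], 0
    for s in signal:
        if s == 1:
            count += 1
        elif count > 0:
            widths.append(count)
            count = 0
    if count > 0:
        widths.append(count)
    return widths

def period_to_signal(periods, high_time=1):
    """PeriodToSignal: emit a periodic signal; high time is one tick per period."""
    out = []
    for p in periods:
        out.extend([1] * high_time + [0] * (p - high_time))
    return out

# Example: measure a pulse of width 3, then generate an output signal
# whose period is the measured width scaled by a user-defined factor of 2.
signal = [0, 1, 1, 1, 0, 0]
widths = pulse_widths(signal)        # [3]
periods = [w * 2 for w in widths]    # step 3: multiplication by a factor
print(period_to_signal(periods))     # [1, 0, 0, 0, 0, 0]
```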
In the design "2020-2-27-LSTCA.va" you can furthermore save some FPGA resources: the SYNC operators in front of the division operations in the modules "Lstca/aver", "Lstca/image2", and "Lstca/K2" are not necessary.
Under https://docs.baslerweb.com/liv…t/rules%20of%20links.html
you can find some details on the synchronization rules.
Dear Danna,
thank you for your questions.
1. Concerning your question from February 23rd:
Please find attached the design "j_030320.va": in this design, an example of calculating with so-called fractional bits is implemented.
For calculations in VisualApplets you can only use integer values. When you want to perform a division or other mathematical operations with values smaller than 1, you can use a "trick".
For example, input pixel value I = 2:
- Division by 20: 2/20 = 0.1
- Problem: only integer values can be processed
- Solution: multiply the input by 2^x (e.g., x = 10); we call these fractional bits
- 2 * 2^10 / 20 = 102.4 => the value 102 (with 10 fractional bits) is the result of the division operation
- 102 corresponds to 102/2^10, which is ~0.1
The more fractional bits you use, the more accurate the result of the division operation is.
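The fractional-bit trick above can be sketched in a few lines of integer-only Python (the function name is my own; in VisualApplets the shift would be a multiplication or a SHIFT operator in the design):

```python
FRACTIONAL_BITS = 10  # x = 10; more fractional bits -> more accurate result

def fixed_point_divide(value, divisor, frac_bits=FRACTIONAL_BITS):
    """Integer-only division: shift the numerator left by frac_bits first.

    The integer result carries frac_bits fractional bits, i.e. it
    represents result / 2**frac_bits.
    """
    return (value << frac_bits) // divisor

result = fixed_point_divide(2, 20)       # 2 * 1024 // 20 = 102
print(result)                            # 102
print(result / 2**FRACTIONAL_BITS)       # 0.099609375, i.e. ~0.1
```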
2. Concerning your request from March 3rd (build problems of the hap file):
The design clock is set to 312 MHz in the design "2020-2-27-LSTCA.va". This is causing the timing error.
Please reduce the design clock frequency to 125 MHz.
Dear JSuriya,
".. You can connect the images from different processes using operators "RxImageLink" and "TxImageLink" ..."
I apologize: this information is not correct. You can only exchange signals between the independent processes via "RxSignalLink"/"TxSignalLink", not the image data.
(See "Interprocess communication" under https://docs.baslerweb.com/liv…multiple%20processes.html)
In the attachment you find the design "multiProcesses_030320.va" for stitching the images of the three cameras. Please test it.
Dear JSuriya,
thank you for your request.
"..But I need all the cameras to be in a same process so that I can append the images from the cameras for further processing ..."
You can connect the images from different processes using operators "RxImageLink" and "TxImageLink".
Concerning the problem with the image acquisition when more than one camera is contained in one process:
Maybe the reason for your problem is described under https://docs.baslerweb.com/liv…ultiple%20processes.html:
The cameras in the same process have to be started and stopped simultaneously, as all DMAs of a process always have to be started together.
The processes are independent and can be started and stopped individually.
Dear Matthias R,
thank you for your question. Could you send a snippet of your VisualApplets design for the problematic situations, so we can sketch a possible solution?
Modifying the parallelism and the number of kernel components could help in such situations.
The image size of the CoefficientBuffer operator is limited by the DRAM size. This may be a reason for the problems you observe. The CreateBlankImage operator does not have this restriction.
Dear Danna,
thank you for your request.
Please find attached a VisualApplets design in which following processing is implemented:
Input: a gray pixel I(I,J).
- If I(I,J) < 64: R = 0, G = 4 * I(I,J), B = 255
- If 63 < I(I,J) < 128: R = 0, G = 255, B = 255 - 4 * (I(I,J) - 64)
- If 127 < I(I,J) < 192: R = 4 * (I(I,J) - 128), G = 255, B = 0
- If 191 < I(I,J): R = 255, G = 255 - 4 * (I(I,J) - 192), B = 0
The only difference from the color processing you have described is the case 63 < I(I,J) < 128: in order to be sure the result stays within the 8-bit range (at most 255), the value 128 was substituted by 64.
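For reference, the mapping above can be written as a small software model (a sketch for checking the expected RGB values, not the VisualApplets implementation itself):

```python
def gray_to_rgb(i):
    """Map an 8-bit gray value i to an (R, G, B) triple per the rules above.

    Every channel stays in the 8-bit range 0..255 for i in 0..255.
    """
    if i < 64:
        return (0, 4 * i, 255)
    elif i < 128:
        return (0, 255, 255 - 4 * (i - 64))
    elif i < 192:
        return (4 * (i - 128), 255, 0)
    else:
        return (255, 255 - 4 * (i - 192), 0)

print(gray_to_rgb(0))    # (0, 0, 255)   -> blue
print(gray_to_rgb(128))  # (0, 255, 0)   -> green
print(gray_to_rgb(255))  # (255, 3, 0)   -> red
```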
I hope this design is helpful for you.
Dear Danna,
could you send your request in English? I cannot read your question.
Thank you!
Dear Janek,
thank you for your request.
You can use the VisualApplets example "GeometricTransformation_FrameBufferRandomRead.va" for upscaling the image in both image directions by a factor of 1.17. This example is DRAM-based, but it may solve your problem:
Right-click on the module "GeometricTransformation" and set the following parameters under its properties:
Dear Jayasuriya,
thank you for your question. We use this procedure because only integer values are allowed for the parameters in VisualApplets.
The reference angles chi and alpha are multiplied by a factor of 1024/90 to convert degrees to the fixed-point parameter value,
i.e., 90° = 1024 and 27.77° ≈ 316.
The multiplication of the distance by a factor of 2^2 = 4 is also due to the fixed-point arithmetic and corresponds to the subpixel accuracy.
This means: the value 2^2 = 4 corresponds to a distance of 1. If you would like to set a distance Z of 0.5, we use the value 2 in VisualApplets.
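The two conversions can be summarized in a small helper sketch (function names are my own, used only to illustrate the scaling):

```python
ANGLE_SCALE = 1024 / 90   # 90 degrees map to the integer parameter value 1024
DISTANCE_FRAC_BITS = 2    # 2^2 = 4 -> quarter-pixel (subpixel) accuracy

def angle_to_param(degrees):
    """Convert an angle in degrees to the integer fixed-point parameter."""
    return round(degrees * ANGLE_SCALE)

def distance_to_param(distance):
    """Convert a (possibly fractional) distance to the integer parameter."""
    return round(distance * 2**DISTANCE_FRAC_BITS)

print(angle_to_param(90))      # 1024
print(angle_to_param(27.77))   # 316
print(distance_to_param(0.5))  # 2
```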
Dear Sangrae Kim,
thank you for your request. Our support team will contact you.
Dear Sangae Kim,
thank you for your request.
You can use the VisualApplets example "GeometricTransformation_FrameBufferRandomRead.va" for upscaling the image in both image directions by a factor of 2.
Right-click on the module "GeometricTransformation" and set the following parameters under its properties:
Dear Jayasuriya,
thank you for your question:
"...
I am having doubts in PrintInspection_blob.va(example applet). How to find center of gravity of object,angle alpha,angle chi and distance Z...."
Please find attached a sketch of the patterns A and B (parts of the object) and the center of gravity (COG) of the object.
If you do not want to rotate around the center of gravity, you can choose any other point to rotate around. You can select any pattern on the objects. I recommend selecting patterns which appear only once in the object and are mostly rotation invariant.
In the image in the attachment also the angles alpha, chi and the distance Z between pattern A and the COG are sketched.
Calculation of the angle alpha,
with x_A, y_A: x and y coordinates of pattern A,
x_B, y_B: x and y coordinates of pattern B,
x_COG, y_COG: x and y coordinates of the COG:
alpha = arctan((y_A - y_COG)/(x_COG - x_A)) - chi
Calculation of the angle chi:
chi = arctan((y_A - y_B)/(x_B - x_A))
Calculation of the distance Z:
Z = sqrt((x_COG - x_A)^2 + (y_A - y_COG)^2)
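The three formulas can be checked with a small Python sketch. Note that I use atan2 instead of a plain arctan of the quotient, which gives the same result in the first quadrant but also handles the other quadrants and x_COG = x_A; the sign conventions follow the formulas above (image y typically grows downward):

```python
import math

def chi(x_A, y_A, x_B, y_B):
    """Rotation angle chi from the positions of patterns A and B."""
    return math.atan2(y_A - y_B, x_B - x_A)

def alpha(x_A, y_A, x_B, y_B, x_COG, y_COG):
    """Angle alpha between pattern A and the COG, relative to chi."""
    return math.atan2(y_A - y_COG, x_COG - x_A) - chi(x_A, y_A, x_B, y_B)

def distance_Z(x_A, y_A, x_COG, y_COG):
    """Euclidean distance Z between pattern A and the COG."""
    return math.hypot(x_COG - x_A, y_A - y_COG)

# Example: A at (0, 0), B at (1, 0), COG at (3, -4)
print(chi(0, 0, 1, 0))         # 0.0  (A and B on a horizontal line)
print(distance_Z(0, 0, 3, -4)) # 5.0
```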
Please also see http://www.siliconsoftware.de/…%20Inspection%20Blob.html for a detailed documentation of the design "PrintInspection_Blob.va".
The design "PrintInspection_ImageMoments.va" may also be of interest for you:
In this design, the rotation angle and position shift of an object are not calculated pattern-based, but based on image moments
(see e.g. https://en.wikipedia.org/wiki/Image_moment).
Concerning your question
"...
In ''split fraction and integer bit'' hierarchical box which conversion is used to convert 959 into 15712255 (input to 'is in range' operator)..."
In the hierarchical box "LimitCoordinateValues" in module "SplitFractionalAndIntegerBit", the coordinates are limited to the maximum input image coordinates. Due to the geometric transformation performed in module "InverseTransformation" in module "geometricTransformation", 14 fractional bits remain.
This means the maximum coordinate of 959 corresponds to the limit value 959 * 2^14 - 1 = 15712255.
I hope these explanations help you.
Best regards
Carmen Z
Dear Danna,
before changing from a higher bit width to a lower bit width with the operator "CastBitWidth", I always recommend using the operator "ClipHigh" first. This operator sets all input values above a defined maximum to that maximum value.
If you are completely sure that your input values do not exceed 255 (= 8-bit range), the ClipHigh operator is not necessary.
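The following sketch shows why the clipping matters (these are software models of the operator behavior, not the operators themselves; CastBitWidth here is modeled as keeping only the lowest bits):

```python
def clip_high(value, maximum=255):
    """ClipHigh model: saturate all values above the defined maximum."""
    return min(value, maximum)

def cast_bit_width(value, bits=8):
    """CastBitWidth model: keep only the lowest `bits` bits of the value."""
    return value & ((1 << bits) - 1)

print(cast_bit_width(clip_high(300)))  # 255  (saturated first, then cast)
print(cast_bit_width(300))             # 44   (wrap-around without clipping)
```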
Dear Danna,
thank you for your question.
If you want to change the init value in module "StartUp_Value" from a 16-bit value to an 8-bit value: please also change the "ClipHigh" value in module "cast" to 255 and "CastBitWidth" to 8 bit, and adapt the bit depth of the Const operators in module "dynamicDivider" to less than or equal to 8 bit.
best regards
Carmen
Dear silverfly,
in your design "pmp_two_cameras" the address generation in the boxes "ImageSequence" is not correct.
To adapt the "ImageSequence" box from the example "ExposureFusion.va" (under "Examples\Processing\HDR_ImageComposition\ExposureFusion\ExposureFusion.va" in your VisualApplets installation directory)
to your image dimensions, right-click on the box "ExposureFusion", click on "Properties", and set the image dimensions
to 600x800. This automatically sets all the relevant parameters in the box "ImageSequence". Now copy the "ImageSequence" box into your design.
The two cameras should run with the same frame rate. But as Johannes has written:
"..
Note: If you set both cameras to the same frame rate they will still be a little asynchronous. Sooner or later the buffers will be filled. You need to externally trigger the cameras to be 100% synchronous ..."
best regards
Carmen Z
Dear Pier,
thank you for your question.
Yes, the GetStatus operator can be used for this purpose.
This operator provides a register for reading the number of counted pixels.