CN115131587A - Template matching method of gradient vector features based on edge contour - Google Patents

Template matching method of gradient vector features based on edge contour

Info

Publication number
CN115131587A
CN115131587A (application CN202211050817.0A)
Authority
CN
China
Prior art keywords
image
template
gradient
angle
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211050817.0A
Other languages
Chinese (zh)
Inventor
曲东升
刘卓欣
陈辉
李长峰
张继
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Mingseal Robotic Technology Co Ltd
Original Assignee
Changzhou Mingseal Robotic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Mingseal Robotic Technology Co Ltd
Priority to CN202211050817.0A
Publication of CN115131587A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267: Segmentation of patterns in the image field, by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/30: Noise filtering
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a template matching method based on gradient vector features of edge contours. The method comprises: acquiring image data of a glue path and a template workpiece with a camera, converting the data into a grayscale image, and preprocessing it; performing threshold segmentation of the glue path and the template workpiece on the planar two-dimensional image, selecting the region in which the workpiece template is to be created, removing non-edge-contour regions, and creating a template image; scaling and rotating the template image as required, padding the template images, creating a measurement rectangle, and extracting the features of all template images; obtaining and outputting the mechanical coordinates of the Mark points in the robot base coordinate system; and combining the Mark point mechanical coordinates with the dispensing-needle-to-robot-base calibration transformation matrix to guide the glue path operation. The method can meet the special requirements of some users for the template matching algorithm of an online dispenser, while the improved edge extraction reduces the feature noise introduced by the painting operation.

Description

Template matching method of gradient vector features based on edge contour
Technical Field
The invention relates to the technical field of template matching, in particular to a template matching method based on gradient vector characteristics of edge contours.
Background
In the dispensing industry, production equipment on the workshop floor must cope with inconsistent product positions caused by factors such as incoming material variation and product placement. The traditional teach-and-position approach follows a fixed path and cannot adapt to changes in product position, whereas a two-dimensional machine-vision-based approach can flexibly generate the corresponding dispensing trajectory as the product position changes. An edge contour template matching scheme therefore needs to be developed to guide the motion control of the dispenser's actuator.
HALCON is a comprehensive standard machine vision algorithm package with a widely used integrated machine vision development environment; it saves product cost and shortens the software development cycle, and its flexible architecture facilitates the rapid development of machine vision, medical imaging, and image analysis applications.
However, the current shape template matching scheme based on the commercial HALCON vision library is paid and closed-source, and cannot satisfy highly non-standard customization requirements, such as the special requirements some users have for the template matching algorithm of an online dispenser. Meanwhile, when selecting the ROI region during development based on the open-source vision library OpenCV, regions of no interest inside the ROI must be removed. After painting over these regions of the original template image, shape feature points that do not exist in the original image may appear along the border between the painted region and the original image region; in this case, extracting image edge feature points directly with the standard open-source Canny operator may produce a large number of feature-point noise points.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art.
Therefore, the invention provides a template matching method based on gradient vector features of edge contours. Built on edge contour template extraction with the open-source vision library OpenCV, it can meet the special requirements of some users for the template matching algorithm of an online dispenser, and it improves edge extraction to reduce the feature noise introduced by the painting operation.
The template matching method based on gradient vector features of edge contours according to the embodiment of the invention relies on the open-source vision library OpenCV and comprises the following steps:
step 1, acquiring image data of a glue path and a template workpiece with a camera, converting the image data into a grayscale image, and preprocessing the grayscale image;
step 2, based on step 1, performing threshold segmentation of the glue path and the template workpiece on the planar two-dimensional image, selecting the region in which the workpiece template is to be created, and removing non-edge-contour regions; that is, since only the edge contour of the workpiece is needed when creating the template image, invalid edge contour information outside the target rectangular region and inside the target region must be removed in the coordinate system of the original image, and the template image is then created;
step 3, according to the template image created in step 2, scaling and rotating the template image as required, padding the scaled and/or rotated template images, and creating the measurement rectangle; when the measurement rectangle is created by the padding method, denoising the spurious edge feature points that do not exist in the original image, extracting the features of all template images, and storing the image data of each pyramid layer in the file that stores the corresponding features using the image pyramid method;
step 4, when local or global adjustment of the Mark points is needed, first setting the Mark point adjustment parameters and saving the adjusted Mark point parameters; then searching for the target template, and when the matching result between the target template and the template image meets the requirement, generating the measurement rectangle from the measurement rectangle parameters created in step 3 while generating the corresponding Mark point sequence from the glue path features of the example workpiece; then obtaining the adjusted Mark point parameters, adjusting the Mark points with the corresponding sequence numbers according to the set Mark point adjustment parameters, and outputting the adjusted Mark points; finally, obtaining and outputting the mechanical coordinates of the Mark points in the robot base coordinate system from the Mark point sequence generated from the example workpiece, or the adjusted Mark point sequence, combined with the camera calibration result;
when local or global adjustment of the Mark points is not needed, obtaining and outputting the mechanical coordinates of the Mark points in the robot base coordinate system directly from the Mark point sequence generated from the example workpiece, or the adjusted Mark point sequence, combined with the camera calibration result;
during actual operation, the picture data stream of the object to be matched is first obtained; once a picture of the object to be matched is acquired, the improved similarity is used to compute and search for the highest-scoring object, returning its position, angle, and scale deviation, and the actual coordinates of the target workpiece on the operating equipment are returned according to the camera calibration result; the actual coordinates on the operating equipment are the mechanical coordinates of the Mark points;
step 5, combining the actual coordinates of the operating equipment output in step 4 with the dispensing-needle-to-robot-base calibration transformation matrix to guide the glue path operation.
The method has the following advantages: (1) the ideas of shape template matching, multi-sample template matching, and ROI region processing are combined into an edge contour template matching algorithm module; the region to be removed inside the ROI is polygon-painted using the open-source vision library OpenCV, so the insignificant interior shape feature points are removed and only feature points with salient edge contour features are extracted;
(2) by improving the hysteresis threshold method of the Canny operator, the method uses the lower bound of the image gradient at the boundary between the painted region and the original image as the lower threshold bound of the hysteresis method; this computed lower bound markedly improves the denoising of the feature noise points generated by the painting operation;
(3) the invention adopts instruction-set operations, the pyramid method, and the construction of pre-response maps and their lookup tables, which markedly accelerates matching; by building the pre-response maps and lookup tables, the time cost of the matching process is shifted to the mapping process, greatly shortening the matching time and effectively reducing the total time required for template matching.
According to an embodiment of the present invention, in step 2, the template image is created as follows:
step 21, inputting an image: when the template is created, image data must first be input; this image data is obtained by using a camera to acquire image data of the template workpiece on the product glue path, converting it into a grayscale image, and preprocessing it;
step 22, image preprocessing: selecting a reasonable ROI (region of interest) in the input image, and denoising the ROI image using the fourth-order Gaussian filtering algorithm of the open-source vision library OpenCV;
step 23, calculating the image gradient direction: according to the image gradient, saving the position, gradient direction, and gradient magnitude of the image's edge-point pixels;
step 24, constructing an image pyramid: storing the position, gradient direction, and gradient magnitude values of the image's edge-point pixels in the image pyramid in a fixed data format;
step 25, judging whether the angle and angle-step requirements are met: rotating the ROI image by the required angle and angle step, and extracting and saving the feature point coordinates and gradient magnitudes;
step 26, outputting the file storing the template features: the rotated image data are stored in the feature file in the prescribed format, and template creation is complete once the feature data for all required angles have been saved to the corresponding file.
According to an embodiment of the present invention, in step 22, the region to be removed inside the ROI is polygon-painted, and the insignificant shape feature points inside it are removed.
According to an embodiment of the invention, in step 25, while the angle and angle-step requirements are met, the angle is incremented by the angle step and the check is repeated, with the feature point coordinates and gradient magnitudes extracted and saved, until the requirements are no longer met;
once the angle and angle-step requirements are not met, the feature point coordinates and gradient magnitudes are extracted and saved directly.
According to an embodiment of the present invention, in step 4, the target template is matched as follows:
step 41, inputting an image to be matched: selecting and inputting the image to be matched;
step 42, image preprocessing: preprocessing the image to be matched;
step 43, calculating the image gradient direction: calculating the gradient direction of each pixel of the obtained image;
step 44, constructing an image pyramid: constructing an image pyramid from the computed per-pixel gradient directions, building a corresponding gradient-direction pyramid for each pyramid level;
step 45, calculating the similarity: the similarity is computed against the file of template features saved during template creation;
step 46, searching for the best match: corresponding lookup tables are built for eight directions; the pixel region with the highest similarity score is found from the lookup table of the target object in the image to be matched and the lookup table obtained from the template image; starting from the region found in the lookup table at the top pyramid layer, the corresponding position is restored on the image at each layer, the corresponding pixel position is found on the original image, and layer-by-layer matching finally yields the precise position of the template object on the image to be matched;
step 47, completing template matching: when template matching completes, the required result is returned.
According to an embodiment of the present invention, in step 46, the eight directions are 0, 45, 90, 135, 180, 225, 270, and 315 degrees.
According to an embodiment of the present invention, in step 47, when no matching point is found, a no-target result with similarity 0 is returned.
According to an embodiment of the present invention, in step 4, the Mark point adjustment parameters comprise an adjustment direction and an adjustment value.
According to one embodiment of the invention, the edge noise caused by painting the image is processed as follows:
Step 1, before edge detection, denoising with a 5×5 Gaussian filter kernel;
Step 2, calculating the gradient magnitude and gradient direction of the image;
Step 3, applying non-maximum suppression to all pixels to filter out non-edge pixels and sharpen the boundaries, i.e., a pixel is kept only where its gradient intensity is a local maximum and is set to zero otherwise;
Step 4, setting two thresholds using the double-threshold improved hysteresis method: pixels with gradient above the upper threshold are taken as definite edges, and pixels below the lower threshold are taken as definite non-edges.
According to an embodiment of the invention, in Step 4, the lower threshold is set above the gradient values of the painted pixels.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a gray scale image of a pre-processed workpiece;
FIG. 3 is a pre-ROI area processing training diagram;
FIG. 4 is a diagram of training after ROI area processing;
FIG. 5 is an edge feature point effect diagram after ROI area processing;
FIG. 6 is a diagram of the effect after denoising;
FIG. 7 is a schematic diagram of a creation process without ROI area processing;
FIG. 8 is a schematic diagram of the creation process through ROI area processing;
FIG. 9 is a graph showing the effect of template matching without ROI processing;
FIG. 10 is a diagram showing the effect of the template matching result after ROI processing;
FIG. 11 is a template creation flow diagram;
fig. 12 is a target template matching flow chart.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The template matching method based on gradient vector features of edge contours of the embodiment of the present invention is described in detail below with reference to the accompanying drawings.
Referring to fig. 1, the template matching method based on gradient vector features of edge contours relies on an open source vision library OpenCV, and includes the following steps:
step 1, acquiring image data of a glue path and a template workpiece with a camera, converting the image data into a grayscale image, and preprocessing the grayscale image;
step 2, based on step 1, performing threshold segmentation of the glue path and the template workpiece on the planar two-dimensional image, selecting the region in which the workpiece template is to be created, and removing non-edge-contour regions; that is, since only the edge contour of the workpiece is needed when creating the template image, invalid edge contour information outside the target rectangular region and inside the target region must be removed in the coordinate system of the original image, and the template image is then created;
step 3, according to the template image created in step 2, scaling and rotating the template image as required, padding the scaled and/or rotated template images, and creating the measurement rectangle; when the measurement rectangle is created by the padding method, denoising the spurious edge feature points that do not exist in the original image, extracting the features of all template images, and storing the image data of each pyramid layer (mainly the coordinates of the edge contour points, the image gradient magnitudes, and the image gradient directions) in the yaml file that stores the corresponding features using the image pyramid method;
step 4, when local or global adjustment of the Mark points is needed, first setting the Mark point adjustment parameters (comprising the adjustment direction, the adjustment value, and the like) and saving the adjusted Mark point parameters; then searching for the target template, and when the matching result between the target template and the template image meets the requirement, generating the measurement rectangle from the measurement rectangle parameters created in step 3 while generating the corresponding Mark point sequence from the glue path features of the example workpiece; then obtaining the adjusted Mark point parameters, adjusting the Mark points with the corresponding sequence numbers according to the set adjustment parameters (adjustment direction, adjustment value, and the like), and outputting the Mark points; finally, obtaining and outputting the mechanical coordinates of the Mark points in the robot base coordinate system from the Mark point sequence generated from the example workpiece, or the adjusted Mark point sequence, combined with the camera calibration result;
when local or global adjustment of the Mark points is not needed, obtaining and outputting the mechanical coordinates of the Mark points in the robot base coordinate system directly from the Mark point sequence generated from the example workpiece, or the adjusted Mark point sequence, combined with the camera calibration result;
that is, during actual operation, the picture data stream of the object to be matched is first obtained; once a picture of the object to be matched is acquired, the improved similarity is used to compute and search for the highest-scoring object, returning its position, angle, and scale deviation, and the actual coordinates of the target workpiece on the operating equipment are returned according to the camera calibration result; the actual coordinates on the operating equipment are the mechanical coordinates of the Mark points;
step 5, combining the actual coordinates of the operating equipment output in step 4 with the dispensing-needle-to-robot-base calibration transformation matrix to guide the glue path operation. A minimal coordinate-transform sketch follows.
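The sketch below illustrates step 5 only, under an assumed 2D affine hand-eye calibration; the matrix and offset values are illustrative placeholders, not real calibration results.

```python
import numpy as np

# Sketch of step 5, under an assumed 2D affine hand-eye calibration:
# map a matched Mark point from pixel coordinates into robot base
# coordinates, then apply the needle offset. All values are placeholders.
cam_to_base = np.array([[0.02, 0.00, 120.5],   # assumed pixel-to-mm calibration
                        [0.00, 0.02, -45.0],
                        [0.00, 0.00,   1.0]])
needle_offset = np.array([1.2, -0.8])          # assumed needle-to-camera offset, mm

mark_px = np.array([812.0, 604.0, 1.0])        # matched Mark point, homogeneous pixels
mark_base = cam_to_base @ mark_px              # Mark point in robot base coordinates
dispense_xy = mark_base[:2] + needle_offset    # where the dispensing needle must go
```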
Referring to fig. 2, the camera captures the contour of the glue path portion of the workpiece together with the workpiece image data, and a grayscale image of the workpiece is generated through operations such as gray-level scaling and format conversion.
The edge contour feature points of the workpiece are then extracted; the row and column coordinates and the gradient magnitudes of the workpiece contour feature points are obtained and stored in the corresponding data structure.
Referring to fig. 4, the edge contour feature points of the workpiece processed by the ROI are extracted.
Referring to fig. 4 and 5, the number of edges of the polygon to be painted, the coordinates of each vertex, and the array pointers storing the vertex coordinates are input; the contour feature points of the template image are then extracted directly, and the result contains additional feature noise points generated by the painting operation. A minimal sketch of this painting step follows.
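A minimal sketch of the painting step with OpenCV's fillPoly; the file name and vertex values are illustrative stand-ins.

```python
import cv2
import numpy as np

# Minimal sketch of the painting step: fill the unwanted polygonal sub-region
# of the ROI with a flat gray level so its interior contours disappear.
roi = cv2.imread("template_roi.png", cv2.IMREAD_GRAYSCALE)
polygon = np.array([[60, 40], [180, 40], [180, 150], [60, 150]], np.int32)

painted = roi.copy()
cv2.fillPoly(painted, [polygon], int(np.median(roi)))  # paint over the region
```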
The lower bound of the image gradient threshold is then calculated at the junction of the painted region and the original template region; the row and column coordinates and gradient magnitudes of the contour feature points are obtained with the improved Canny operator's hysteresis threshold method and stored in the corresponding data structure, which effectively reduces the interference of feature noise points. A sketch of this thresholding idea follows.
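The sketch below illustrates the improved lower bound, assuming the painted image and polygon from the previous sketch; the 2:1 high:low ratio is an assumption, not a value stated by the patent.

```python
import cv2
import numpy as np

# Sketch of the improved hysteresis lower bound: sample the gradient
# magnitude along the painted polygon's border and take the largest value
# there as the Canny lower threshold, so the artificial edges created by
# painting fall below it.
painted = cv2.imread("painted_roi.png", cv2.IMREAD_GRAYSCALE)  # stand-in
polygon = np.array([[60, 40], [180, 40], [180, 150], [60, 150]], np.int32)

gx = cv2.Sobel(painted, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(painted, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)

border = np.zeros_like(painted)
cv2.polylines(border, [polygon], True, 255, thickness=3)  # painted boundary mask
low = float(magnitude[border > 0].max())                  # hysteresis lower bound
high = 2.0 * low                                          # assumed high:low ratio

edges = cv2.Canny(painted, low, high, L2gradient=True)    # border noise suppressed
```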
Multi-sample training is then performed on the template image according to the set parameters, such as the number of pyramid layers, the angle range, the angle step, and the scale range, and steps 2-4 are repeated until all template features required for matching have been extracted.
See fig. 7 and 8 for the training process.
Referring to fig. 9 and 10, the image to be matched and the features stored in the above steps are input. Matching starts from the stored feature positions at the top layer of the image pyramid and proceeds downward, layer by layer, until the matching-result coordinates on the original image to be matched are returned; the center of the padded rectangular matching region with the best similarity score is located quickly, and the scale and angle change of the target on the image to be matched are determined from the image ID obtained by matching.
Referring to fig. 11, in step 2, the specific process of creating the template image is as follows:
step 21, inputting an image: when the template is created, image data must first be input; this image data is obtained by using the camera to acquire image data of the template workpiece on the product glue path, converting it into a grayscale image, and preprocessing it;
step 22, image preprocessing (including ROI selection and denoising): selecting a reasonable ROI (region of interest) in the input image and denoising the ROI image with the fourth-order Gaussian filtering algorithm; specifically, polygon painting is applied to the region to be removed inside the ROI, and the insignificant shape feature points inside it are removed;
step 23, calculating the image gradient direction: according to the image gradient, saving the position, gradient direction, and gradient magnitude of the image's edge-point pixels;
step 24, constructing an image pyramid: storing the position, gradient direction, and gradient magnitude values of the image's edge-point pixels in the image pyramid in a fixed data format;
step 25, judging whether the angle and angle-step requirements are met: rotating the ROI image by the required angle and angle step, and extracting and saving the feature point coordinates and gradient magnitudes; specifically, while the angle and angle-step requirements are met, the angle is incremented by the angle step and the check is repeated, with feature point coordinates and gradient magnitudes extracted and saved, until the requirements are no longer met; once they are not met, the feature point coordinates and gradient magnitudes are extracted and saved directly;
step 26, outputting the yaml file storing the template features: the rotated image data are stored in the feature file in the prescribed format, and template creation is complete once the feature data for all required angles have been saved to the corresponding file. A minimal code sketch of this flow follows.
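A minimal sketch of steps 21-26 under assumed parameters; the file names, angle settings, and fixed Canny thresholds are illustrative stand-ins, not the invention's exact values.

```python
import cv2
import numpy as np

# Rotate the template over the angle range, extract edge points plus gradient
# data at each angle, and persist one entry per angle in a yaml feature file.
template = cv2.imread("template_roi.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape
fs = cv2.FileStorage("template_features.yaml", cv2.FILE_STORAGE_WRITE)

angle, angle_step, angle_end = 0.0, 5.0, 360.0
while angle < angle_end:                                 # step 25: angle loop
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    view = cv2.warpAffine(template, rot, (w, h))
    edges = cv2.Canny(view, 50, 150, L2gradient=True)
    ys, xs = np.nonzero(edges)                           # edge-point positions
    gx = cv2.Sobel(view, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(view, cv2.CV_32F, 0, 1, ksize=3)
    key = "angle_%03d" % int(angle)
    fs.write(key + "_points", np.stack([xs, ys], axis=1).astype(np.int32))
    fs.write(key + "_magnitude", cv2.magnitude(gx, gy)[ys, xs].reshape(-1, 1))
    fs.write(key + "_direction", np.arctan2(gy, gx)[ys, xs].reshape(-1, 1))
    angle += angle_step                                  # step 25: self-increment
fs.release()                                             # step 26: feature file done
```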
Referring to fig. 12, in step 4, the specific matching procedure of the target template is:
step 41, inputting an image to be matched: selecting and inputting the image to be matched;
step 42, image preprocessing: preprocessing the image to be matched, e.g. filtering and noise reduction;
step 43, calculating the image gradient direction: calculating the gradient direction of each pixel of the obtained image;
step 44, constructing an image pyramid: constructing an image pyramid from the computed per-pixel gradient directions, building a corresponding gradient-direction pyramid for each pyramid level;
step 45, calculating (linearizing) the similarity: the similarity is computed against the yaml file of template features saved during template creation;
step 46, searching for the best match (pre-response-map matching strategy): corresponding lookup tables are built for eight directions; the pixel region with the highest similarity score is found from the lookup table of the target object in the image to be matched and the lookup table obtained from the template image; starting from the region found in the lookup table at the top pyramid layer, the corresponding position is restored on the image at each layer down to the original image, and layer-by-layer matching finally yields the precise position of the template object on the image to be matched; the eight directions are 0, 45, 90, 135, 180, 225, 270, and 315 degrees;
step 47, completing template matching (returning similarity, angle ID, and the like): when template matching completes, the required results are returned, such as matching time, similarity, image position, and rotation angle. A simplified sketch of the lookup-table idea follows.
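As an illustration of the pre-response-map idea in steps 45-46, the sketch below quantizes gradient directions into the eight bins above and scores by table lookup instead of trigonometry; the names are illustrative, and the full pyramid search is omitted for brevity.

```python
import numpy as np

N_DIR = 8
bin_angles = np.arange(N_DIR) * (2 * np.pi / N_DIR)  # 0, 45, ..., 315 degrees

# lookup[i, j] = |cos(bin_i - bin_j)|: the response of template direction i
# against image direction j, built once and reused for every pixel.
lookup = np.abs(np.cos(bin_angles[:, None] - bin_angles[None, :]))

def quantize(direction):
    """Map gradient directions in radians to one of the 8 bins."""
    return np.round(direction / (2 * np.pi / N_DIR)).astype(int) % N_DIR

def similarity(template_dirs, image_dirs):
    """Mean lookup-table response between paired template/image directions."""
    return lookup[quantize(template_dirs), quantize(image_dirs)].mean()
```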
The following points are noted about the image pyramid:
(1) The image pyramid is a multi-scale representation of an image, used mainly for image segmentation. Generally, the bottom of the pyramid is the unprocessed original image, and after processing, the top of the pyramid is a low-resolution version of the original image. The higher the level, the smaller the image and the lower its resolution.
(2) Common image pyramids include the Gaussian pyramid and the Laplacian pyramid. The Gaussian pyramid is the set of progressively smaller images produced by Gaussian downsampling; the Laplacian pyramid reconstructs the sampled image to restore the original. That is, the Gaussian pyramid downsamples and compresses the image, the Laplacian pyramid reconstructs and restores it, and the two are used together.
(3) The image pyramid markedly reduces the complexity of the edge contour matching algorithm. In general, the complexity of shape matching is governed by three key factors: the width w and height h of the target image ROI, and the number n of selected key feature points; i.e., the running efficiency of shape matching depends on the ROI size w × h and the feature point count n. To improve matching efficiency, the complexity O(w × h × n) should be minimized, so a smaller ROI and fewer feature points shorten the running time without affecting matching accuracy. The image pyramid optimizes both at once, which is why it improves algorithm efficiency.
(4) The higher the pyramid level, the blurrier the image, the lower the resolution, and the less detail remains. More pyramid layers are therefore not always better: the key features of the image must not be lost during Gaussian downsampling. In practice the number of pyramid layers is usually set to 3 or 4. In short, in the pyramid template-matching workflow, the approximate position of the matching target is determined in the top image; the images then correspond layer by layer downward, and after the best matching position is found in each layer, the coordinates are refined at the corresponding position in the next, higher-resolution layer, until the target position is finally matched at the bottom of the pyramid, i.e., on the original image (see the sketch after this list).
(5) The image pyramid greatly reduces the time complexity of edge contour matching and improves matching efficiency, and it is widely used in the machine vision industry.
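A minimal pyramid-construction sketch with OpenCV, using an assumed 3 extra levels and a stand-in file name.

```python
import cv2

# Gaussian downsampling for the coarse-to-fine search described above.
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
pyramid = [image]
for _ in range(3):                      # levels 1..3, each at half resolution
    pyramid.append(cv2.pyrDown(pyramid[-1]))
# Matching starts on pyramid[-1] (coarsest) and refines the found position
# on each finer level, ending on pyramid[0], the original image.
```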
Edge contour template matching involves two parts: extraction of shape feature points, and matching of those feature points. The purpose of extracting shape feature points is to convert the abstract shape information contained in the edge contour features into digital information (numerical values, vectors, or other computable symbols) that a computer can process; whether the extracted digital feature points retain the salient shape information of the template is crucial to the matching result, so feature point extraction requires comparatively heavy effort in specific applications. Feature point matching computes, through the defined similarity measure, the correspondence between the current local part and the overall shape; the similarity result reflects the degree of match with the template image. The premise of efficient feature point extraction is a reasonable ROI (region of interest): the ideal ROI contains all feature points needed for template matching and excludes all non-template salient edge contour feature points. This requires selecting a large ROI and then removing the local regions that contain irrelevant edge features. In practice the ROI is usually selected twice on the template image: the first selection is a larger ROI containing all salient feature points of the template image, and the second selects and removes the irrelevant regions inside it. The feature points are then extracted.
The hysteresis threshold method of the improved Canny operator extracts, from the feature points obtained by first-order gradient computation with the high-threshold Sobel operator and the feature points obtained by second-order gradient computation with the low-threshold Sobel operator, those that are connected to each other, and removes feature points whose pixel values are below the set contour threshold, yielding the final feature points.
Image gradients are an image feature that is salient and robust to illumination changes and noise. Moreover, image gradients are often the only reliable image feature for some untextured objects. Considering only the gradient direction makes the computation more robust to contrast changes, and taking the absolute value of the cosine of the gradient direction correctly handles occluded object boundaries, so the features extracted from the target object are unaffected by background illumination changes.
The gradient direction is calculated as follows:

$\mathrm{ori}(I,(x,y)) = \arctan\!\left(\frac{f \ast S_y}{f \ast S_x}\right)$  (1)

wherein the symbols in formula (1) have the following meanings:
I represents the image to be matched;
(x, y) represents a position coordinate in the image I to be matched;
ori(I, (x, y)) represents the gradient direction, in radians, at position (x, y) in the image I of the object to be detected;
f represents the discrete function of the original image;
$f \ast S_x$ represents the convolution of the discrete image function f with the Sobel filter kernel $S_x$ in the x direction, where
$S_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}$;
$f \ast S_y$ represents the convolution of the discrete image function f with the Sobel filter kernel $S_y$ in the y direction, where
$S_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$.
the gradient magnitude is calculated as follows:
Figure 666638DEST_PATH_IMAGE013
(2)
wherein, the meaning that each symbol in formula (2) represents is specifically as follows:
Figure 778951DEST_PATH_IMAGE014
representing the gradient magnitude.
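As a concrete illustration, the sketch below evaluates formulas (1) and (2) with OpenCV; the file name is a stand-in.

```python
import cv2
import numpy as np

# Convolve the image with the Sobel kernels S_x and S_y, then take the
# direction (arctan2 is used for numerical stability) and the Euclidean norm.
f = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
fx = cv2.Sobel(f, cv2.CV_32F, 1, 0, ksize=3)   # f convolved with S_x
fy = cv2.Sobel(f, cv2.CV_32F, 0, 1, ksize=3)   # f convolved with S_y

ori = np.arctan2(fy, fx)                       # formula (1), in radians
mag = np.sqrt(fx ** 2 + fy ** 2)               # formula (2)
```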
The similarity measure is based on a modified version of the Steger algorithm, and formula (3) below is robust to background clutter, small translations, and deformations.
The original similarity is calculated as follows:

$\varepsilon(I, T, c) = \sum_{r \in P} \left|\cos\!\big(\mathrm{ori}(I', r) - \mathrm{ori}(I, c + r)\big)\right|$  (3)

wherein the symbols in formula (3) have the following meanings:
$\varepsilon$ represents the similarity measure;
I represents the image to be matched;
T represents the template size value;
c represents a position coordinate on the image;
r represents a displacement relative to position c;
I' represents the template image;
P denotes the list of positions r to be considered in the stored template image I';
ori(I', r) represents the gradient direction, in radians, at position r in the template image I' of the object to be detected;
ori(I, c + r) represents the gradient direction at displacement r from position c in the image I to be matched.
This similarity measure can effectively handle objects of arbitrary shape.
Based on this similarity measure, an improvement is made to enhance robustness to deformation and displacement: for each gradient direction on the target, the most similar direction in the input image is sought within a neighborhood of the associated gradient position.
The improved similarity is calculated as follows:

$\varepsilon(I, T, c) = \sum_{r \in P} \left( \max_{t \in R(c+r)} \left|\cos\!\big(\mathrm{ori}(I', r) - \mathrm{ori}(I, t)\big)\right| \right)$  (4)

wherein the symbols in formula (4) have the following meanings:
$\varepsilon$ represents the similarity measure;
I represents the image to be matched;
T represents the template size value;
c represents a position coordinate on the image;
r represents a displacement relative to position c;
I' represents the template image;
P denotes the list of positions r to be considered in the stored template image I';
R(c + r) represents the neighborhood of size T centered at position c + r in the input image;
t represents a position in the current neighborhood of the input image.
The neighborhood R of size T centered at position c + r in the input image is defined as:

$R(c+r) = \left[c + r - \tfrac{T}{2},\; c + r + \tfrac{T}{2}\right] \times \left[c + r - \tfrac{T}{2},\; c + r + \tfrac{T}{2}\right]$  (5)

wherein the symbols in formula (5) have the meanings given above. For each gradient, the local neighborhood is thus aligned precisely to the associated gradient location.
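A minimal sketch of formula (4), assuming dense orientation maps as inputs; the function and argument names and the neighborhood size T = 5 are illustrative choices.

```python
import numpy as np

def improved_similarity(tmpl_pts, tmpl_ori, img_ori, c, T=5):
    """tmpl_pts: (n, 2) integer offsets r; tmpl_ori: (n,) template directions;
    img_ori: HxW image gradient directions; c: (row, col) candidate position."""
    h, w = img_ori.shape
    half = T // 2
    score = 0.0
    for (dr, dc), o in zip(tmpl_pts, tmpl_ori):
        r0, c0 = int(c[0] + dr), int(c[1] + dc)
        window = img_ori[max(r0 - half, 0):min(r0 + half + 1, h),
                         max(c0 - half, 0):min(c0 + half + 1, w)]
        if window.size:
            score += np.abs(np.cos(o - window)).max()  # best response in R(c+r)
    return score / max(len(tmpl_pts), 1)
```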
To remove the influence of edge feature points of non-ROI regions inside the ROI on feature point extraction, the non-ROI regions are painted over. After painting, the image data scale is still large, so the algorithm must reduce the data scale of the image using the image pyramid method and edge detection theory, without over-processing and losing the salient edge features of the original image.
The edge noise caused by painting the image is processed as follows:
Step 1, before edge detection, denoise with a 5×5 Gaussian filter kernel;
Step 2, calculate the gradient magnitude and gradient direction of the image;
Step 3, apply non-maximum suppression to all pixels to filter out non-edge pixels and sharpen the boundaries, i.e., a pixel is kept only where its gradient intensity is a local maximum and is set to zero otherwise;
Step 4, set two thresholds using the double-threshold improved hysteresis method. The Canny operator normally treats pixels between the two thresholds that are connected to an edge as edge points, so the lower threshold bound is set above the gradient values of the painted pixels, reducing as far as possible the edge feature noise caused by the ROI-region processing operation.
The invention is applicable to many industries requiring machine-vision recognition and positioning, such as the dispensing industry. Polygon painting is applied with the open-source vision library OpenCV to the regions that must be removed inside the ROI, so that the insignificant interior shape feature points are removed and only feature points with salient edge contour features are extracted. The hysteresis threshold method of the Canny operator is improved: the lower bound of the image gradient at the boundary between the painted region and the original image is used as the lower threshold bound of the hysteresis method, and this computed lower bound markedly improves the denoising of the feature noise points generated by the painting operation.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent replacement or modification of the technical solutions and inventive concepts of the present invention that a person skilled in the art could readily conceive within the technical scope disclosed herein shall fall within the protection scope of the present invention.

Claims (10)

1. A template matching method based on gradient vector features of edge contours, relying on the open-source vision library OpenCV, characterized by comprising the following steps:
step 1, acquiring image data of a glue path and a template workpiece with a camera, converting the image data into a grayscale image, and preprocessing the grayscale image;
step 2, based on step 1, performing threshold segmentation of the glue path and the template workpiece on the planar two-dimensional image, selecting the region in which the workpiece template is to be created, and removing non-edge-contour regions; that is, since only the edge contour of the workpiece is needed when creating the template image, invalid edge contour information outside the target rectangular region and inside the target region must be removed in the coordinate system of the original image, and the template image is then created;
step 3, according to the template image created in step 2, scaling and rotating the template image as required, padding the scaled and/or rotated template images, and creating the measurement rectangle; when the measurement rectangle is created by the padding method, denoising the spurious edge feature points that do not exist in the original image, extracting the features of all template images, and storing the image data of each pyramid layer in the file that stores the corresponding features using the image pyramid method;
step 4, when local or global adjustment of the Mark points is needed, first setting the Mark point adjustment parameters and saving the adjusted Mark point parameters; then searching for the target template, and when the matching result between the target template and the template image meets the requirement, generating the measurement rectangle from the measurement rectangle parameters created in step 3 while generating the corresponding Mark point sequence from the glue path features of the example workpiece; then obtaining the adjusted Mark point parameters, adjusting the Mark points with the corresponding sequence numbers according to the set Mark point adjustment parameters, and outputting the Mark points; finally, obtaining and outputting the mechanical coordinates of the Mark points in the robot base coordinate system from the Mark point sequence generated from the example workpiece, or the adjusted Mark point sequence, combined with the camera calibration result;
when local or global adjustment of the Mark points is not needed, obtaining and outputting the mechanical coordinates of the Mark points in the robot base coordinate system directly from the Mark point sequence generated from the example workpiece, or the adjusted Mark point sequence, combined with the camera calibration result;
that is, during actual operation, the picture data stream of the object to be matched is first obtained; once a picture of the object to be matched is acquired, the improved similarity is used to compute and search for the highest-scoring object, returning its position, angle, and scale deviation, and the actual coordinates of the target workpiece on the operating equipment are returned according to the camera calibration result; the actual coordinates on the operating equipment are the mechanical coordinates of the Mark points;
and step 5, combining the actual coordinates of the operating equipment output in step 4 with the dispensing-needle-to-robot-base calibration transformation matrix to guide the glue path operation.
2. The template matching method based on gradient vector features of edge contours as claimed in claim 1, wherein in step 2 the template image is created as follows:
step 21, inputting an image: when the template is created, image data must first be input; this image data is obtained by using a camera to acquire image data of the template workpiece on the product glue path, converting it into a grayscale image, and preprocessing it;
step 22, image preprocessing: selecting a reasonable ROI (region of interest) in the input image, and denoising the ROI image using the fourth-order Gaussian filtering algorithm of the open-source vision library OpenCV;
step 23, calculating the image gradient direction: according to the image gradient, saving the position, gradient direction, and gradient magnitude of the image's edge-point pixels;
step 24, constructing an image pyramid: storing the position, gradient direction, and gradient magnitude values of the image's edge-point pixels in the image pyramid in a fixed data format;
step 25, judging whether the angle and angle-step requirements are met: rotating the ROI image by the required angle and angle step, and extracting and saving the feature point coordinates and gradient magnitudes;
step 26, outputting the file storing the template features: the rotated image data are stored in the feature file in the prescribed format, and template creation is complete once the feature data for all required angles have been saved to the corresponding file.
3. The template matching method based on gradient vector features of edge contours as claimed in claim 2, wherein in step 22 the region to be removed inside the ROI is polygon-painted, and the insignificant shape feature points inside it are removed.
4. The template matching method based on gradient vector features of edge contours as claimed in claim 2, wherein in step 25, while the angle and angle-step requirements are met, the angle is incremented by the angle step and the check is repeated, with the feature point coordinates and gradient magnitudes extracted and saved, until the requirements are no longer met;
and once the angle and angle-step requirements are not met, the feature point coordinates and gradient magnitudes are extracted and saved directly.
5. The template matching method based on gradient vector features of edge contours as claimed in claim 2, wherein in step 4 the target template is matched as follows:
step 41, inputting an image to be matched: selecting and inputting the image to be matched;
step 42, image preprocessing: preprocessing the image to be matched;
step 43, calculating the image gradient direction: calculating the gradient direction of each pixel of the obtained image;
step 44, constructing an image pyramid: constructing an image pyramid from the computed per-pixel gradient directions, building a corresponding gradient-direction pyramid for each pyramid level;
step 45, calculating the similarity: the similarity is computed against the file of template features saved during template creation;
step 46, searching for the best match: corresponding lookup tables are built for eight directions; the pixel region with the highest similarity score is found from the lookup table of the target object in the image to be matched and the lookup table obtained from the template image; starting from the region found in the lookup table at the top pyramid layer, the corresponding position is restored on the image at each layer, the corresponding pixel position is found on the original image, and layer-by-layer matching finally yields the precise position of the template object on the image to be matched;
step 47, completing template matching: when template matching completes, the required result is returned.
6. The template matching method based on gradient vector features of edge contours as claimed in claim 5, wherein in step 46 the eight directions are 0, 45, 90, 135, 180, 225, 270, and 315 degrees.
7. The template matching method based on gradient vector features of edge contours as claimed in claim 5, wherein in step 47, when no matching point is found, a no-target result with similarity 0 is returned.
8. The template matching method based on gradient vector features of edge contours as claimed in claim 1, wherein in step 4 the Mark point adjustment parameters comprise an adjustment direction and an adjustment value.
9. The template matching method based on gradient vector features of edge contours as claimed in claim 3, wherein the edge noise caused by painting the image is processed as follows:
Step 1, before edge detection, denoising with a 5×5 Gaussian filter kernel;
Step 2, calculating the gradient magnitude and gradient direction of the image;
Step 3, applying non-maximum suppression to all pixels to filter out non-edge pixels and sharpen the boundaries, i.e., a pixel is kept only where its gradient intensity is a local maximum and is set to zero otherwise;
Step 4, setting two thresholds: pixels with gradient above the upper threshold are taken as definite edges, and pixels below the lower threshold are taken as definite non-edges.
10. The template matching method based on gradient vector features of edge contours as claimed in claim 9, wherein in Step 4 the lower threshold is set above the gradient values of the painted pixels.
CN202211050817.0A 2022-08-30 2022-08-30 Template matching method of gradient vector features based on edge contour Pending CN115131587A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211050817.0A CN115131587A (en) 2022-08-30 2022-08-30 Template matching method of gradient vector features based on edge contour

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211050817.0A CN115131587A (en) 2022-08-30 2022-08-30 Template matching method of gradient vector features based on edge contour

Publications (1)

Publication Number Publication Date
CN115131587A (en) 2022-09-30

Family

ID=83387768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211050817.0A Pending CN115131587A (en) 2022-08-30 2022-08-30 Template matching method of gradient vector features based on edge contour

Country Status (1)

Country Link
CN (1) CN115131587A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102654902A (en) * 2012-01-16 2012-09-05 江南大学 Contour vector feature-based embedded real-time image matching method
CN108256394A (en) * 2016-12-28 2018-07-06 中林信达(北京)科技信息有限责任公司 A kind of method for tracking target based on profile gradients
CN114193460A (en) * 2022-02-16 2022-03-18 常州铭赛机器人科技股份有限公司 Rubber road guiding and positioning method based on three-dimensional vision and Mark self-compensation
CN114821114A (en) * 2022-03-28 2022-07-29 南京业恒达智能系统股份有限公司 Groove cutting robot image processing method based on visual system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330877A (en) * 2022-10-13 2022-11-11 常州铭赛机器人科技股份有限公司 Mutual copying method for operation programs of same machine
CN115526881A (en) * 2022-10-18 2022-12-27 深圳市安仕新能源科技有限公司 Battery cell polarity detection method and device based on image modeling
CN116486126A (en) * 2023-06-25 2023-07-25 合肥联宝信息技术有限公司 Template determination method, device, equipment and storage medium
CN116486126B (en) * 2023-06-25 2023-10-27 合肥联宝信息技术有限公司 Template determination method, device, equipment and storage medium
CN117173389A (en) * 2023-08-23 2023-12-05 无锡芯智光精密科技有限公司 Visual positioning method of die bonder based on contour matching
CN117173389B (en) * 2023-08-23 2024-04-05 无锡芯智光精密科技有限公司 Visual positioning method of die bonder based on contour matching
CN116939376A (en) * 2023-09-14 2023-10-24 长春理工大学 Four-camera simultaneous polarization imaging system and method based on stokes vector
CN116939376B (en) * 2023-09-14 2023-12-22 长春理工大学 Four-camera simultaneous polarization imaging system and method based on stokes vector

Similar Documents

Publication Publication Date Title
CN115131587A (en) Template matching method of gradient vector features based on edge contour
Romero-Ramirez et al. Speeded up detection of squared fiducial markers
CN109978839B (en) Method for detecting wafer low-texture defects
CN111474184B (en) AOI character defect detection method and device based on industrial machine vision
CN110866924B (en) Line structured light center line extraction method and storage medium
US6421458B2 (en) Automated inspection of objects undergoing general affine transformation
EP0853293B1 (en) Subject image extraction method and apparatus
CN110021024B (en) Image segmentation method based on LBP and chain code technology
CN107452030B (en) Image registration method based on contour detection and feature matching
CN111401266B (en) Method, equipment, computer equipment and readable storage medium for positioning picture corner points
CN113706464B (en) Printed matter appearance quality detection method and system
CN111553949A (en) Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
CN114529459A (en) Method, system and medium for enhancing image edge
CN106296587B (en) Splicing method of tire mold images
Chalimbaud et al. Embedded active vision system based on an FPGA architecture
CN112435223A (en) Target detection method, device and storage medium
CN116704516A (en) Visual inspection method for water-soluble fertilizer package
CN108447092B (en) Method and device for visually positioning marker
CN111027538A (en) Container detection method based on instance segmentation model
CN113627210A (en) Method and device for generating bar code image, electronic equipment and storage medium
CN115112098B (en) Monocular vision one-dimensional two-dimensional measurement method
CN114998347B (en) Semiconductor panel corner positioning method and device
CN111340040A (en) Paper character recognition method and device, electronic equipment and storage medium
CN115661110A (en) Method for identifying and positioning transparent workpiece
CN114092499A (en) Medicine box dividing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220930)