WO2021017361A1 - Template matching algorithm based on edge and gradient feature - Google Patents

Template matching algorithm based on edge and gradient feature

Info

Publication number
WO2021017361A1
Authority
WO
WIPO (PCT)
Prior art keywords
edge
layer
pyramid
angle
matching
Prior art date
Application number
PCT/CN2019/123994
Other languages
French (fr)
Chinese (zh)
Inventor
邢述达
郭晓锋
余章卫
Original Assignee
苏州中科全象智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 苏州中科全象智能科技有限公司 filed Critical 苏州中科全象智能科技有限公司
Publication of WO2021017361A1 publication Critical patent/WO2021017361A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Definitions

  • This application relates to an industrial machine vision image processing technology, for example, to a template matching algorithm based on edge and gradient features.
  • Template matching is one of the core and most challenging problems of machine vision, and it is widely used in fields such as target recognition, workpiece positioning and target extraction from video.
  • The purpose of template matching is to find an object similar to the target in the image and obtain its position.
  • This method involves many image processing techniques, such as image preprocessing, image segmentation, and similarity evaluation.
  • In the related art, the methods used in industry fall mainly into two categories: feature-based and grayscale-based matching algorithms.
  • Feature-based algorithms mainly include those based on key points, on edges, and on primitive models such as lines and arcs.
  • Grayscale-based template matching is time-consuming and can hardly withstand rotation and scaling, so it is generally not used where timing requirements are strict.
  • Template matching based on key points is very sensitive to occlusion, depends strongly on key-point detection, and has poor robustness to interference.
  • Model matching algorithms based on primitives such as lines and arcs have certain advantages, but primitive modeling and parameter tuning are difficult. It is therefore necessary to develop a matching algorithm with low development complexity, reliable matching and strong real-time performance.
  • This application provides a template matching algorithm based on edge and gradient features, which takes the generalized Hough transform as its basis and combines it with a pyramid model to reduce computational complexity while providing strong interference resistance, rotation invariance, and tolerance to partial occlusion and scaling.
  • This application provides a template matching algorithm based on edge and gradient features.
  • The matching process includes an offline stage and an online stage.
  • In the offline stage, the template image undergoes processing that includes at least edge extraction and gradient calculation, and the R-table of the generalized Hough transform is generated.
  • In the online stage, edge extraction, gradient calculation and coarse-to-fine matching are performed on the input target image; according to the angle and position, the R-table is used to obtain the values in the voting space of the generalized Hough transform, and the target position is obtained.
  • The processing in the offline stage includes:
  • image pyramid generation: a pyramid model is used to layer the template image to obtain images of each layer from high to low;
  • sub-pixel edge extraction: a method based on the local area effect is used to obtain the single boundary of the sub-pixel edge of each pyramid layer image, and the boundary direction is obtained at the same time;
  • before the R-table of the generalized Hough transform is constructed, edge filtering is also performed.
  • The edges are chain-coded and a first threshold is preset, and an eight-neighborhood search of edge angles is carried out point by point: a point whose angle lies within the range of the angles of the preceding and following edge points is retained; a point whose angle deviates from the preceding or the following edge point by less than the first threshold is retained; a point whose angle deviates from both by more than the first threshold is removed.
  • The processing in the online stage includes:
  • image pyramid generation: a pyramid model is used to layer the input target image to obtain images of each layer from high to low;
  • sub-pixel edge extraction: a method based on the local area effect is used to obtain the single boundary of the sub-pixel edge of each pyramid layer image, and the boundary direction is obtained at the same time;
  • layered coarse matching: the generalized Hough transform is applied at the highest layer of the image pyramid, and 360° full-template matching with a default scale factor of 1 is performed to obtain the rough position and angle (x, y, theta) of the target element at the highest layer; the matching result of the layer above is then used to match downward layer by layer to the bottom layer, searching within a neighborhood of the result of the layer above, to obtain the position and angle of the target element at every layer other than the highest;
  • fine matching: based on the position and angle obtained by coarse matching, the Hough-space result, which follows a Gaussian distribution, is obtained; a surface is fitted to the positions and angles obtained at each layer and the global maximum is taken, giving the final output for the element in the target image.
  • The pyramid model used to layer the target image is a combination of a Gaussian pyramid and a mean pyramid, and both the Gaussian convolution and the mean filtering use 3*3 convolution kernels.
  • In the edge extraction process, an optimal T-point automatic threshold calculation based on the gradient histogram is used to assign a threshold to the extracted boundary.
  • The fitted surface uses a quadratic spherical fitting equation, and the local extremum is computed within the range of the selected points to obtain the sub-pixel coordinates.
  • The template matching algorithm of this application combines a pyramid model with an optimized search strategy; it resists interference, achieves rotation invariance, and has a certain ability to withstand scaling and occlusion.
  • Moreover, the computational load of the generalized Hough transform is greatly reduced, and the matching speed is increased many times over.
  • Figure 1 is a schematic diagram of the generalized Hough transform.
  • Figure 2 is a schematic flow chart of the template matching algorithm of this application.
  • Figure 3 is a schematic diagram of edge screening in the offline phase.
  • Figure 4 is a schematic diagram of the range of coarse matching in the online phase.
  • Figure 5 shows a template image and the angle iron in it that serves as the target element.
  • Figure 6 shows a target image and elements in it that are occluded, scaled, and rotated.
  • In view of the shortcomings of template-matching algorithms in the related art, this application is devoted to improving industrial machine vision image processing technology and, drawing on practical experience and creative work, proposes a template matching algorithm based on edge and gradient features.
  • This template matching algorithm has the advantages of strong interference resistance, rotation invariance, and tolerance to partial occlusion and scaling.
  • The core algorithm of the template matching is the generalized Hough transform.
  • The generalized Hough transform describes a shape in the form of a table, as shown in Figure 1.
  • The parameters mainly include the angle φ of the edge normal vector, the vector r pointing from the edge point to the center, and the angle ψ of the vector r. Once the coordinates of the edge points of a shape are stored in such a table, the shape is fully determined; whether it is a straight line (in fact a line segment), a circle, an ellipse or any other geometric shape, it can be handled in the same way.
  • The generalized Hough transform introduces correlation between the parameters, so that when only translation and rotation (no scaling) are present, a single parameter has to be traversed.
  • The three parameters describing the image edge are the center coordinates (horizontal and vertical) of the shape, the rotation angle (relative to the reference shape), and the vector from the edge to the center point.
  • The key step of template matching is to obtain the angle and vector table in the offline stage.
  • In the online stage, the voting positions are obtained from the vector table according to the angle and position, and the points whose votes exceed a certain threshold are the candidate points of the matching target.
  • The matching process mainly includes an offline stage and an online stage, as shown in FIG. 2, which shows the flow of one embodiment of the template matching algorithm of the present application.
  • The main function of the offline stage is to process the template image and generate the R-table of the generalized Hough transform.
  • The process may include image pyramid generation, stable sub-pixel edge extraction, and construction of the R-table of the generalized Hough transform.
  • The image pyramid is generated by using a Gaussian pyramid model to layer the template image and obtain images of each layer from high to low; the Gaussian pyramid model preserves edges well, and using a 3*3 convolution kernel keeps the speed requirements in check.
  • Sub-pixel edge extraction uses the local area effect to obtain the single boundary of the sub-pixel edge of each pyramid layer image, while obtaining the boundary direction; this algorithm has the advantages of high boundary localization accuracy and small angle calculation error.
  • After edge extraction, this application performs an eight-neighborhood search of the edge angles point by point: a point whose angle lies within the range of the angles of the preceding and following edge points is retained; a point whose angle deviates from the preceding or the following edge point by less than the first threshold is retained; a point whose angle deviates from both by more than the first threshold is removed.
  • The search procedure may be as follows: the edge is chain-coded and the first threshold is preset; if the computed angle of the current point lies within the range of the angles of the preceding and following edge points, the point is kept (point 2). Otherwise, threshold filtering is applied: if the deviation between the current angle and the preceding edge point is within the first threshold, the point is kept (point 4); when the condition is not met, the point is compared with the following edge point, and when both deviations exceed the first threshold, the point is removed (point 5).
  • In the online stage, edge extraction, gradient calculation and coarse-to-fine matching are performed on the input target image.
  • According to the angle and position, the R-table is used to obtain the values in the voting space of the generalized Hough transform and thereby the target position. The online stage mainly includes image pyramid generation, edge extraction and gradient calculation, brute-force matching at the highest pyramid layer, downward layer-by-layer matching, and fine matching.
  • The overall flow is shown in Figure 2.
  • The image pyramid uses a Gaussian pyramid model, a mean pyramid model or a combination of the two, and the edge extraction algorithm supports automatic threshold setting.
  • Fine matching fits a surface to the results (including position and angle) within the variation range of the coarse-matching result and takes the global maximum, which is the fine-matching result.
  • The processing in the online stage includes: using a combination of a mean pyramid and a Gaussian pyramid to layer the input target image and obtain images of each layer from high to low.
  • The Gaussian pyramid preserves edges well.
  • The mean pyramid is fast to compute and also suppresses noise well, but it blurs edges. Therefore, in this application the Gaussian pyramid is used when edges are extracted, and the mean pyramid is used for the top-down matching.
  • Both the Gaussian convolution and the mean filtering use 3*3 convolution kernels, which achieves a good balance between speed and accuracy.
  • Sub-pixel edge gradients and the automatic threshold are obtained as in the offline stage: the single boundary of the sub-pixel edge is obtained based on the local area effect, and the direction of the boundary is obtained at the same time.
  • The coarse matching of this application keeps the angle within plus or minus 2 degrees; accordingly, the angle is quantized to 1 degree and divided directly into 360 bins, with the angle specified in the range 0 to 360 degrees and the floor of the double-precision angle value used as the bin index. Because the extracted boundary would otherwise require a manually specified threshold, this application uses an optimal T-point automatic threshold computed from the gradient histogram to assign a threshold to the extracted boundary.
  • At the highest layer of the image pyramid, the generalized Hough transform is used to perform 360° full-template matching with a default scale factor of 1, giving the rough position and angle (x, y, theta) of the target element at the highest layer; the matching result of the layer above is then used to match downward layer by layer to the bottom layer, searching within a neighborhood (in both position and angle) of the result of the layer above, to obtain the position and angle of the target element at every layer other than the highest.
  • The fitted surface uses a quadratic spherical fitting equation, and the local extremum is computed within the range of the selected points to obtain the sub-pixel coordinates.
  • The template matching algorithm of the present application, combined with the pyramid model and the optimized search strategy, has a small computational load, resists interference, achieves rotation invariance, and has a certain ability to withstand scaling and occlusion.
  • The angle iron is visible in the template image and the target image shown in Figure 5 and Figure 6.
  • The matched targets are partially occluded and appear at arbitrary angles.
  • The parameters of the test image and of the PC are listed in the following table:
  • Image: RGB, 1000*600; PC: Intel Core i7 processor, 3.7 GHz CPU frequency
  • The test results show that the generalized Hough transform of the related art requires 734 ms, whereas the template matching algorithm of the present application requires only 74 ms; the matching speed is increased many times over.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A template matching algorithm based on edge and gradient features. A matching process comprises an offline stage and an online stage. The offline stage corresponds to processing, comprising at least edge extraction and gradient computation, of a template image, and generation of an R-table of generalized Hough transform; and the online stage corresponds to edge extraction, gradient computation and coarse-to-fine matching processing of an input target image, acquisition, according to an angle and a position and by means of the R-table, of a value in a voting space of generalized Hough transform, and acquisition of a target position. The template matching algorithm combines a pyramid model with an optimized search strategy, has an anti-interference capability, can achieve rotational invariance, and has a certain capability to resist scaling and occlusion; moreover, the computation amount of generalized Hough transform is greatly reduced, and the matching rate is increased manyfold.

Description

A template matching algorithm based on edge and gradient features
This disclosure claims priority to Chinese patent application No. 201910699607.6, filed with the Chinese Patent Office on July 31, 2019; the entire content of the above application is incorporated into this disclosure by reference.
Technical Field
This application relates to industrial machine vision image processing technology, and for example to a template matching algorithm based on edge and gradient features.
Background
Template matching is one of the core and most challenging problems of machine vision, and it is widely used in fields such as target recognition, workpiece positioning and target extraction from video. The purpose of template matching is to find an object similar to the target in an image and obtain its position. The method involves many image processing techniques, such as image preprocessing, image segmentation and similarity evaluation.
In the related art, the methods used in industry fall mainly into two categories: feature-based and grayscale-based matching algorithms. Feature-based algorithms mainly include those based on key points, on edges, and on primitive models such as lines and arcs.
Grayscale-based template matching is time-consuming and can hardly withstand rotation and scaling, so it is generally not used where timing requirements are strict. Template matching based on key points is very sensitive to occlusion, depends strongly on key-point detection, and has poor robustness to interference. Model matching algorithms based on primitives such as lines and arcs have certain advantages, but primitive modeling and parameter tuning are difficult. It is therefore necessary to develop a matching algorithm with low development complexity, reliable matching and strong real-time performance.
Summary of the Invention
This application provides a template matching algorithm based on edge and gradient features, which takes the generalized Hough transform as its basis and combines it with a pyramid model to reduce computational complexity while providing strong interference resistance, rotation invariance, and tolerance to partial occlusion and scaling.
This application provides a template matching algorithm based on edge and gradient features. The matching process includes an offline stage and an online stage. In the offline stage, the template image undergoes processing that includes at least edge extraction and gradient calculation, and the R-table of the generalized Hough transform is generated. In the online stage, edge extraction, gradient calculation and coarse-to-fine matching are performed on the input target image; according to the angle and position, the R-table is used to obtain the values in the voting space of the generalized Hough transform, and the target position is obtained.
Optionally, the processing in the offline stage includes:
image pyramid generation: a pyramid model is used to layer the template image to obtain images of each layer from high to low;
sub-pixel edge extraction: a method based on the local area effect is used to obtain the single boundary of the sub-pixel edge of each pyramid layer image, and the boundary direction is obtained at the same time;
construction of the R-table of the generalized Hough transform: the obtained features are rotated and scaled, where a 360°-rotated R-table is built directly for the topmost image, and an R-table without rotation is built for the images other than the topmost one.
Optionally, before the R-table of the generalized Hough transform is constructed, edge filtering is also performed: the edges are chain-coded and a first threshold is preset, and an eight-neighborhood search of edge angles is carried out point by point; a point whose angle lies within the range of the angles of the preceding and following edge points is retained, a point whose angle deviates from the preceding or the following edge point by less than the first threshold is retained, and a point whose angle deviates from both by more than the first threshold is removed.
Optionally, the processing in the online stage includes:
image pyramid generation: a pyramid model is used to layer the input target image to obtain images of each layer from high to low;
sub-pixel edge extraction: a method based on the local area effect is used to obtain the single boundary of the sub-pixel edge of each pyramid layer image, and the boundary direction is obtained at the same time;
layered coarse matching: the generalized Hough transform is applied at the highest layer of the image pyramid, and 360° full-template matching with a default scale factor of 1 is performed to obtain the rough position and angle (x, y, theta) of the target element at the highest layer; the matching result of the layer above is then used to match downward layer by layer to the bottom layer, searching within a neighborhood of the result of the layer above, to obtain the position and angle of the target element at every layer other than the highest;
fine matching: based on the position and angle obtained by coarse matching, the Hough-space result, which follows a Gaussian distribution, is obtained; a surface is fitted to the positions and angles obtained at each layer and its global maximum is taken, giving the final output for the element in the target image.
Optionally, the pyramid model used to layer the target image is a combination of a Gaussian pyramid and a mean pyramid, and both the Gaussian convolution and the mean filtering use 3*3 convolution kernels.
Optionally, in the edge extraction process, an optimal T-point automatic threshold calculation based on the gradient histogram is used to assign a threshold to the extracted boundary.
Optionally, the fitted surface uses a quadratic spherical fitting equation, and the local extremum is computed within the range of the selected points to obtain the sub-pixel coordinates.
Compared with the related art, the template matching algorithm of this application combines a pyramid model with an optimized search strategy; it resists interference, achieves rotation invariance, and has a certain ability to withstand scaling and occlusion. Moreover, the computational load of the generalized Hough transform is greatly reduced, and the matching speed is increased many times over.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the principle of the generalized Hough transform.
Figure 2 is a schematic flow chart of the template matching algorithm of this application.
Figure 3 is a schematic diagram of edge filtering in the offline stage.
Figure 4 is a schematic diagram of the search range of coarse matching in the online stage.
Figure 5 shows a template image and the angle iron in it that serves as the target element.
Figure 6 shows a target image and elements in it that are occluded, scaled and rotated.
Detailed Description
The innovative features and embodiments of the technical solution of this application are described in further detail below in conjunction with the accompanying drawings.
In view of the various shortcomings of template-matching algorithms in the related art, this application is devoted to improving industrial machine vision image processing technology and, drawing on practical experience and creative work, proposes a template matching algorithm based on edge and gradient features, which has the advantages of strong interference resistance, rotation invariance, and tolerance to partial occlusion and scaling.
This application discloses a template matching algorithm based on edge and gradient features; the core matching algorithm is the generalized Hough transform. The generalized Hough transform describes a shape in the form of a table, as shown in Figure 1. The parameters mainly include the angle φ of the edge normal vector, the vector r pointing from the edge point to the center, and the angle ψ of the vector r. Once the coordinates of the edge points of a shape are stored in such a table, the shape is fully determined; whether it is a straight line (in fact a line segment), a circle, an ellipse or any other geometric shape, it can be handled in the same way. The generalized Hough transform introduces correlation between the parameters, so that when only translation and rotation (no scaling) are present, a single parameter has to be traversed. The three parameters describing the image edge are the center coordinates (horizontal and vertical) of the shape, the rotation angle (relative to the reference shape), and the vector from the edge to the center point. First, the radial vectors from the edge points of the reference shape to its center are stored; then the gradient direction of an edge point of the shape being searched (expressed as an angle relative to the coordinate axes) is used as an index to find the corresponding radial vectors, and adding such a vector to the edge point completes one vote. Consequently, the only parameter that needs to be traversed is the rotation angle, as shown in the following table:
Index    ψ        r_i
0        0        {r_i : ψ_i = 0}
1        dψ       {r_i : ψ_i = dψ}
2        2*dψ     {r_i : ψ_i = 2*dψ}
...      ...      ...
To build the R-table including scaling, only two parameters need to be traversed. In template matching the key steps are: in the offline stage, obtain the angle and vector table; in the online detection stage, use the vector table according to angle and position to obtain the voting positions, and the points whose votes exceed a certain threshold are the candidate points of the matching target.
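A minimal Python sketch of the R-table construction (offline) and the voting (online) described above is given below. It covers only the base case without rotation or scaling; the 1-degree angle bin width, the NumPy data structures and the function names are assumptions made here for illustration, not the application's implementation.

    import numpy as np

    def build_r_table(edge_points, gradient_angles, center, d_psi=np.deg2rad(1.0)):
        # Offline stage: for each quantized gradient angle, store the vectors
        # from the edge points of the reference shape to its center.
        n_bins = int(round(2 * np.pi / d_psi))
        r_table = [[] for _ in range(n_bins)]
        for (x, y), phi in zip(edge_points, gradient_angles):
            r = (center[0] - x, center[1] - y)  # vector: edge point -> center
            r_table[int(phi / d_psi) % n_bins].append(r)
        return r_table

    def vote(edge_points, gradient_angles, r_table, shape, d_psi=np.deg2rad(1.0)):
        # Online stage: each edge point of the target image casts votes for the
        # candidate center positions given by the stored radial vectors.
        acc = np.zeros(shape, dtype=np.int32)
        n_bins = len(r_table)
        for (x, y), phi in zip(edge_points, gradient_angles):
            for rx, ry in r_table[int(phi / d_psi) % n_bins]:
                cx, cy = int(round(x + rx)), int(round(y + ry))
                if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                    acc[cy, cx] += 1
        return acc  # points whose votes exceed a threshold are candidate targets

Rotation and scaling would add outer loops over the candidate angles and scale factors, with one R-table (or one rotated copy) per angle, as described for the top pyramid layer below.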
To improve the matching speed, this application uses a pyramid model to reduce computational complexity. The matching process mainly includes an offline stage and an online stage, as shown in Figure 2, which shows the flow of one embodiment of the template matching algorithm of this application.
The main function of the offline stage is to process the template image and generate the R-table of the generalized Hough transform. The process may include image pyramid generation, stable sub-pixel edge extraction, and construction of the R-table of the generalized Hough transform.
In one embodiment, for image pyramid generation, a Gaussian pyramid model is used to layer the template image to obtain images of each layer from high to low. The Gaussian pyramid model preserves edges well, and using a 3*3 convolution kernel keeps the speed requirements in check.
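A minimal Python sketch of such a Gaussian pyramid follows. The application only specifies the 3*3 kernel size, so the binomial kernel weights, the number of layers and the use of OpenCV for the convolution are assumptions.

    import cv2
    import numpy as np

    # 3*3 Gaussian (binomial) kernel; the exact weights are an assumption.
    GAUSS_3X3 = np.array([[1, 2, 1],
                          [2, 4, 2],
                          [1, 2, 1]], dtype=np.float32) / 16.0

    def gaussian_pyramid(image, n_layers=4):
        # layers[0] is the original image, layers[-1] the highest (smallest) layer.
        layers = [image.astype(np.float32)]
        for _ in range(1, n_layers):
            smoothed = cv2.filter2D(layers[-1], -1, GAUSS_3X3)
            layers.append(smoothed[::2, ::2])  # subsample by 2 in each direction
        return layers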
For sub-pixel edge extraction, a method based on the local area effect is used to obtain the single boundary of the sub-pixel edge of each pyramid layer image, and the boundary direction is obtained at the same time; this algorithm has the advantages of high boundary localization accuracy and small angle calculation error.
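The local-area-effect sub-pixel extractor itself is not detailed in the text. As a hedged stand-in, the sketch below only shows how integer edge locations and their gradient directions, the quantities the R-table and the voting rely on, could be obtained from simple Sobel gradients; the magnitude threshold and the Sobel kernel size are assumptions.

    import cv2
    import numpy as np

    def edge_points_with_direction(gray, mag_threshold):
        # Stand-in for the sub-pixel, single-boundary extractor of the application:
        # returns pixel-level edge locations and their gradient directions.
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = np.hypot(gx, gy)
        ys, xs = np.nonzero(mag > mag_threshold)
        angles = np.arctan2(gy[ys, xs], gx[ys, xs]) % (2 * np.pi)
        return list(zip(xs, ys)), angles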
Edge filtering: during the computation, noise can cause erroneous results in local regions. To avoid this problem, this application performs an eight-neighborhood search of the edge angles point by point after edge extraction: a point whose angle lies within the range of the angles of the preceding and following edge points is retained; a point whose angle deviates from the preceding or the following edge point by less than a first threshold is retained; a point whose angle deviates from both by more than the first threshold is removed. As shown in Figure 3, the search procedure may be as follows: the edge is chain-coded and the first threshold is preset; if the computed angle of the current point lies within the range of the angles of the preceding and following edge points, the point is kept (point 2 in the figure). Otherwise, threshold filtering is applied: if the deviation between the current angle and the angle of the preceding edge point is within the first threshold, the point is kept (point 4); if the condition is not met, the point is compared with the following edge point, and when both deviations exceed the first threshold the point is removed (point 5).
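The edge-screening rule can be sketched as follows for a chain-coded edge, i.e. an ordered list of edge points with their angles. The 20-degree threshold value and the angle-difference helper are assumptions, and the in-range test handles angle wrap-around only approximately.

    import numpy as np

    def angle_diff(a, b):
        # Smallest absolute difference between two angles, in radians.
        d = abs(a - b) % (2 * np.pi)
        return min(d, 2 * np.pi - d)

    def filter_edge_chain(angles, first_threshold=np.deg2rad(20)):
        # Keep a point if its angle lies between the angles of its neighbors on
        # the chain, or deviates from at least one neighbor by less than the
        # threshold; remove it if it deviates from both by more than the threshold.
        if len(angles) < 3:
            return [False] * len(angles)
        keep = []
        for i in range(1, len(angles) - 1):
            prev_a, cur_a, next_a = angles[i - 1], angles[i], angles[i + 1]
            in_range = min(prev_a, next_a) <= cur_a <= max(prev_a, next_a)
            near_prev = angle_diff(cur_a, prev_a) < first_threshold
            near_next = angle_diff(cur_a, next_a) < first_threshold
            keep.append(in_range or near_prev or near_next)
        return [False] + keep + [False]  # end points lack a full neighborhood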
The R-table of the generalized Hough transform is then constructed by rotating and scaling the obtained features; a 360°-rotated R-table is built directly for the topmost image, and an R-table without rotation is built for the images other than the topmost one.
In the online stage, edge extraction, gradient calculation and coarse-to-fine matching are performed on the input target image, and the R-table is used, according to the angle and position, to obtain the values in the voting space of the generalized Hough transform and thereby the target position. The online stage mainly includes image pyramid generation, edge extraction and gradient calculation, brute-force matching at the highest pyramid layer, downward layer-by-layer matching, and fine matching. The overall flow is shown in Figure 2. The image pyramid uses a Gaussian pyramid model, a mean pyramid model or a combination of the two, and the edge extraction algorithm supports automatic threshold setting. After template matching is performed at the highest pyramid layer, the rough position of the object at that layer is obtained; when moving to the next layer, the search can be restricted to a limited range around the parameters already obtained, so that full matching is unnecessary, which greatly reduces computational complexity and running time. Fine matching fits a surface to the results (including position and angle) within the variation range of the coarse-matching result and takes the global maximum, which is the fine-matching result.
In some embodiments, the processing in the online stage includes: using a combination of a mean pyramid and a Gaussian pyramid to layer the input target image and obtain images of each layer from high to low. The Gaussian pyramid preserves edges well; the mean pyramid is fast to compute and also suppresses noise well, but it blurs edges. Therefore, in this application the Gaussian pyramid is used when edges are extracted, and the mean pyramid is used for the top-down matching. Both the Gaussian convolution and the mean filtering use 3*3 convolution kernels, which achieves a good balance between speed and accuracy.
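To illustrate the combination (Gaussian smoothing where edges are extracted, box filtering for the layers used only in top-down matching), a mean pyramid with the same 3*3 kernel size could be sketched as follows; the subsampling step and the layer count are assumptions.

    import cv2

    def mean_pyramid(image, n_layers=4):
        # Box-filter pyramid used for the top-down matching passes.
        layers = [image]
        for _ in range(1, n_layers):
            layers.append(cv2.blur(layers[-1], (3, 3))[::2, ::2])
        return layers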
Sub-pixel edge gradients and the automatic threshold algorithm: as in the offline stage, the single boundary of the sub-pixel edge is obtained based on the local area effect, and the direction of the boundary is obtained at the same time. The coarse matching of this application keeps the angle within plus or minus 2 degrees; accordingly, the angle is quantized to 1 degree and divided directly into 360 bins, with the angle specified in the range 0 to 360 degrees and the floor of the double-precision angle value used as the bin index. Because the extracted boundary would otherwise require a manually specified threshold, this application uses an optimal T-point automatic threshold computed from the gradient histogram to assign a threshold to the extracted boundary.
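The exact T-point formulation is not given in the text. One common reading of the idea, sketched below under that assumption, splits the (typically unimodal, decaying) gradient-magnitude histogram into two straight-line segments and takes the break point with the smallest total least-squares error as the threshold.

    import numpy as np

    def t_point_threshold(grad_mag, n_bins=256):
        # Pick a gradient threshold as the break point that best splits the
        # gradient-magnitude histogram into two straight-line segments.
        hist, edges = np.histogram(grad_mag, bins=n_bins)
        x = np.arange(n_bins, dtype=np.float64)
        y = hist.astype(np.float64)

        def sse(xs, ys):
            if len(xs) < 2:
                return 0.0
            coeff = np.polyfit(xs, ys, 1)  # least-squares straight line
            return float(np.sum((np.polyval(coeff, xs) - ys) ** 2))

        errors = [sse(x[:t], y[:t]) + sse(x[t:], y[t:]) for t in range(2, n_bins - 2)]
        t_best = int(np.argmin(errors)) + 2
        return edges[t_best]  # threshold in gradient-magnitude units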
As shown in Figure 4, layered coarse matching is performed first. At the highest layer of the image pyramid, the generalized Hough transform is used to perform 360° full-template matching with a default scale factor of 1, giving the rough position and angle (x, y, theta) of the target element at the highest layer. The matching result of the layer above is then used to match downward, layer by layer, to the bottom layer, searching within a neighborhood (in both position and angle) of the result of the layer above, to obtain the position and angle of the target element at every layer other than the highest.
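The layered coarse matching can be sketched as below. Here match_full and match_local are placeholder callables standing for the generalized Hough voting over the whole top-layer image (all 360 angles, scale fixed at 1) and over a small window around a propagated result; the 5-pixel position window is an assumption, while the roughly plus-or-minus 2-degree angle band comes from the text.

    def coarse_match(pyramid_layers, r_tables, match_full, match_local,
                     pos_window=5, angle_window=2):
        # pyramid_layers[-1] / r_tables[-1] belong to the highest (smallest) layer.
        # Full 360-degree matching is done only at the top; every lower layer is
        # searched only in a small window around the result propagated from above.
        x, y, theta = match_full(pyramid_layers[-1], r_tables[-1])
        for level in range(len(pyramid_layers) - 2, -1, -1):
            x, y = 2 * x, 2 * y  # coordinates double when moving one level down
            x, y, theta = match_local(pyramid_layers[level], r_tables[level],
                                      x, y, theta, pos_window, angle_window)
        return x, y, theta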
Fine matching is then performed. Based on the position and angle obtained by coarse matching, the Hough-space result, which follows a Gaussian distribution, is obtained; a surface is fitted to the positions and angles obtained at each layer and the global maximum is taken, giving the final output for the element in the target image. The fitted surface uses a quadratic spherical fitting equation, and the local extremum is computed within the range of the selected points to obtain the sub-pixel coordinates.
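A hedged sketch of the peak refinement in the spatial dimensions: a quadratic surface is least-squares fitted to the vote scores around the coarse maximum, and its stationary point gives the sub-pixel position. The application's "quadratic spherical fitting equation" is rendered here as a general two-dimensional quadratic, the 5*5 window is an assumption, and the angle would be refined analogously.

    import numpy as np

    def subpixel_peak(acc, cx, cy, half=2):
        # Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to the votes around the
        # coarse maximum (cx, cy) and return the sub-pixel extremum position.
        ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
        patch = acc[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(np.float64)
        A = np.column_stack([xs.ravel() ** 2, ys.ravel() ** 2, (xs * ys).ravel(),
                             xs.ravel(), ys.ravel(), np.ones(xs.size)])
        a, b, c, d, e, f = np.linalg.lstsq(A, patch.ravel(), rcond=None)[0]
        # Stationary point: solve the 2x2 linear system grad(z) = 0.
        dx, dy = np.linalg.solve(np.array([[2 * a, c], [c, 2 * b]]), [-d, -e])
        return cx + dx, cy + dy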
From the detailed description of the embodiments of this application and the accompanying drawings, it can be seen that the template matching algorithm of this application, combined with the pyramid model and the optimized search strategy, has a small computational load, resists interference, achieves rotation invariance, and has a certain ability to withstand scaling and occlusion. The angle iron is visible in the template image and the target image shown in Figure 5 and Figure 6; the matched targets are partially occluded and appear at arbitrary angles. The parameters of the test image and of the PC are listed in the following table:
Image parameters              Computer parameters
Type: RGB image               Processor: Intel Core i7
Size: 1000*600                CPU frequency: 3.7 GHz
The test results show that the generalized Hough transform of the related art requires 734 ms, whereas the template matching algorithm of this application requires only 74 ms; the matching speed is increased many times over.

Claims (7)

  1. A template matching algorithm based on edge and gradient features, wherein the matching process includes an offline stage and an online stage; in the offline stage, the template image undergoes processing that includes at least edge extraction and gradient calculation, and an R-table of the generalized Hough transform is generated; and in the online stage, edge extraction, gradient calculation and coarse-to-fine matching are performed on an input target image, and according to the angle and position, the R-table is used to obtain values in the voting space of the generalized Hough transform to obtain the target position.
  2. The template matching algorithm based on edge and gradient features according to claim 1, wherein the processing in the offline stage includes:
    image pyramid generation: a pyramid model is used to layer the template image to obtain images of each layer from high to low;
    sub-pixel edge extraction: a method based on the local area effect is used to obtain the single boundary of the sub-pixel edge of each pyramid layer image, and the boundary direction is obtained at the same time;
    construction of the R-table of the generalized Hough transform: the obtained features are rotated and scaled, where a 360°-rotated R-table is built directly for the topmost image, and an R-table without rotation is built for the images other than the topmost one.
  3. The template matching algorithm based on edge and gradient features according to claim 2, wherein before constructing the R-table of the generalized Hough transform, edge filtering is also performed: the edges are chain-coded and a first threshold is preset, and an eight-neighborhood search of edge angles is carried out point by point; a point whose angle lies within the range of the angles of the preceding and following edge points is retained, a point whose angle deviates from the preceding or the following edge point by less than the first threshold is retained, and a point whose angle deviates from both by more than the first threshold is removed.
  4. The template matching algorithm based on edge and gradient features according to claim 1, wherein the processing in the online stage includes:
    image pyramid generation: a pyramid model is used to layer the input target image to obtain images of each layer from high to low;
    sub-pixel edge extraction: a method based on the local area effect is used to obtain the single boundary of the sub-pixel edge of each pyramid layer image, and the boundary direction is obtained at the same time;
    layered coarse matching: the generalized Hough transform is applied at the highest layer of the image pyramid, and 360° full-template matching with a default scale factor of 1 is performed to obtain the rough position and angle (x, y, theta) of the target element at the highest layer; the matching result of the layer above is used to match downward layer by layer to the bottom layer, searching within a neighborhood of the result of the layer above, to obtain the position and angle of the target element at every layer other than the highest;
    fine matching: based on the position and angle obtained by coarse matching, the Hough-space result, which follows a Gaussian distribution, is obtained; a surface is fitted to the positions and angles obtained at each layer and the global maximum is taken, giving the final output for the element in the target image.
  5. The template matching algorithm based on edge and gradient features according to claim 4, wherein the pyramid model used to layer the target image is a combination of a Gaussian pyramid and a mean pyramid, and both the Gaussian convolution and the mean filtering use 3*3 convolution kernels.
  6. The template matching algorithm based on edge and gradient features according to claim 4, wherein in the edge extraction process an optimal T-point automatic threshold calculation based on the gradient histogram is used to assign a threshold to the extracted boundary.
  7. The template matching algorithm based on edge and gradient features according to claim 4, wherein the fitted surface uses a quadratic spherical fitting equation, and the local extremum is computed within the range of the selected points to obtain the sub-pixel coordinates.
PCT/CN2019/123994 2019-07-31 2019-12-09 Template matching algorithm based on edge and gradient feature WO2021017361A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910699607.6 2019-07-31
CN201910699607.6A CN110472674B (en) 2019-07-31 2019-07-31 Template matching algorithm based on edge and gradient characteristics

Publications (1)

Publication Number Publication Date
WO2021017361A1

Family

ID=68508400

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/123994 WO2021017361A1 (en) 2019-07-31 2019-12-09 Template matching algorithm based on edge and gradient feature

Country Status (2)

Country Link
CN (1) CN110472674B (en)
WO (1) WO2021017361A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033640A (en) * 2021-03-16 2021-06-25 深圳棱镜空间智能科技有限公司 Template matching method, device, equipment and computer readable storage medium
CN113159103A (en) * 2021-02-24 2021-07-23 广东拓斯达科技股份有限公司 Image matching method, image matching device, electronic equipment and storage medium
CN113888634A (en) * 2021-09-28 2022-01-04 湖北瑞兴达特种装备科技股份有限公司 Method and system for positioning and detecting target on special-shaped complex surface
CN115588109A (en) * 2022-09-26 2023-01-10 苏州大学 Image template matching method, device, equipment and application
CN115791791A (en) * 2022-11-14 2023-03-14 中国科学院沈阳自动化研究所 Visual detection method for liquid crystal panel packing scrap
CN116958514A (en) * 2023-09-20 2023-10-27 中国空气动力研究与发展中心高速空气动力研究所 Sub-pixel positioning method for shock wave position of optical image
CN117078730A (en) * 2023-10-12 2023-11-17 资阳建工建筑有限公司 Anti-protruding clamp registration method based on template matching
CN117237441A (en) * 2023-11-10 2023-12-15 湖南科天健光电技术有限公司 Sub-pixel positioning method, sub-pixel positioning system, electronic equipment and medium

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472674B (en) * 2019-07-31 2023-07-18 苏州中科全象智能科技有限公司 Template matching algorithm based on edge and gradient characteristics
CN111079803B (en) * 2019-12-02 2023-04-07 易思维(杭州)科技有限公司 Template matching method based on gradient information
CN111079802B (en) * 2019-12-02 2023-04-07 易思维(杭州)科技有限公司 Matching method based on gradient information
CN111931786B (en) * 2020-06-23 2022-02-01 联宝(合肥)电子科技有限公司 Image processing method and device and computer readable storage medium
CN113160236B (en) * 2020-11-30 2022-11-22 齐鲁工业大学 Image identification method for shadow shielding of photovoltaic cell
CN112818989B (en) * 2021-02-04 2023-10-03 成都工业学院 Image matching method based on gradient amplitude random sampling
CN113409344A (en) * 2021-05-11 2021-09-17 深圳市汇川技术股份有限公司 Template information acquisition method, device and computer-readable storage medium
CN113758439A (en) * 2021-08-23 2021-12-07 武汉理工大学 Method and device for measuring geometric parameters on line in hot ring rolling forming process
CN114792373B (en) * 2022-04-24 2022-11-25 广东天太机器人有限公司 Visual identification spraying method and system of industrial robot
CN114926488A (en) * 2022-05-23 2022-08-19 华南理工大学 Workpiece positioning method based on generalized Hough model and improved pyramid search acceleration
CN115409890B (en) * 2022-11-02 2023-03-24 山东大学 Self-defined mark detection method and system based on MSR and generalized Hough transform
CN116579928B (en) * 2023-07-14 2023-10-03 苏州优备精密智能装备股份有限公司 Sub-precision template matching method based on scaling, angle and pixel space

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2863335A1 (en) * 2012-08-28 2015-04-22 Tencent Technology Shenzhen Company Limited Method, device and storage medium for locating feature points on human face
CN105930858A (en) * 2016-04-06 2016-09-07 吴晓军 Fast high-precision geometric template matching method enabling rotation and scaling functions
CN110008966A (en) * 2019-04-08 2019-07-12 湖南博睿基电子科技有限公司 One kind being based on polar quick SIFT feature point extracting method
CN110472674A (en) * 2019-07-31 2019-11-19 苏州中科全象智能科技有限公司 A kind of template matching algorithm based on edge and Gradient Features

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5581309B2 (en) * 2008-03-24 2014-08-27 スー カン,ミン Information processing method for broadcast service system, broadcast service system for implementing the information processing method, and recording medium related to the information processing method
CN103456005B (en) * 2013-08-01 2016-05-25 华中科技大学 Generalised Hough transform image matching method based on local invariant geometric properties
CN105046684B (en) * 2015-06-15 2017-09-29 华中科技大学 A kind of image matching method based on polygon generalised Hough transform
CN105261012A (en) * 2015-09-25 2016-01-20 上海瑞伯德智能系统科技有限公司 Template matching method based on Sobel vectors

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2863335A1 (en) * 2012-08-28 2015-04-22 Tencent Technology Shenzhen Company Limited Method, device and storage medium for locating feature points on human face
CN105930858A (en) * 2016-04-06 2016-09-07 吴晓军 Fast high-precision geometric template matching method enabling rotation and scaling functions
CN110008966A (en) * 2019-04-08 2019-07-12 湖南博睿基电子科技有限公司 One kind being based on polar quick SIFT feature point extracting method
CN110472674A (en) * 2019-07-31 2019-11-19 苏州中科全象智能科技有限公司 A kind of template matching algorithm based on edge and Gradient Features

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159103A (en) * 2021-02-24 2021-07-23 广东拓斯达科技股份有限公司 Image matching method, image matching device, electronic equipment and storage medium
CN113159103B (en) * 2021-02-24 2023-12-05 广东拓斯达科技股份有限公司 Image matching method, device, electronic equipment and storage medium
CN113033640B (en) * 2021-03-16 2023-08-15 深圳棱镜空间智能科技有限公司 Template matching method, device, equipment and computer readable storage medium
CN113033640A (en) * 2021-03-16 2021-06-25 深圳棱镜空间智能科技有限公司 Template matching method, device, equipment and computer readable storage medium
CN113888634A (en) * 2021-09-28 2022-01-04 湖北瑞兴达特种装备科技股份有限公司 Method and system for positioning and detecting target on special-shaped complex surface
CN115588109A (en) * 2022-09-26 2023-01-10 苏州大学 Image template matching method, device, equipment and application
CN115588109B (en) * 2022-09-26 2023-06-06 苏州大学 Image template matching method, device, equipment and application
CN115791791A (en) * 2022-11-14 2023-03-14 中国科学院沈阳自动化研究所 Visual detection method for liquid crystal panel packing scrap
CN116958514A (en) * 2023-09-20 2023-10-27 中国空气动力研究与发展中心高速空气动力研究所 Sub-pixel positioning method for shock wave position of optical image
CN116958514B (en) * 2023-09-20 2023-12-05 中国空气动力研究与发展中心高速空气动力研究所 Sub-pixel positioning method for shock wave position of optical image
CN117078730A (en) * 2023-10-12 2023-11-17 资阳建工建筑有限公司 Anti-protruding clamp registration method based on template matching
CN117078730B (en) * 2023-10-12 2024-01-23 资阳建工建筑有限公司 Anti-protruding clamp registration method based on template matching
CN117237441A (en) * 2023-11-10 2023-12-15 湖南科天健光电技术有限公司 Sub-pixel positioning method, sub-pixel positioning system, electronic equipment and medium
CN117237441B (en) * 2023-11-10 2024-01-30 湖南科天健光电技术有限公司 Sub-pixel positioning method, sub-pixel positioning system, electronic equipment and medium

Also Published As

Publication number Publication date
CN110472674A (en) 2019-11-19
CN110472674B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
WO2021017361A1 (en) Template matching algorithm based on edge and gradient feature
WO2021047684A1 (en) Active contour- and deep learning-based automatic segmentation method for fuzzy boundary image
CN103886589B (en) Object-oriented automated high-precision edge extracting method
CN108898610A (en) A kind of object contour extraction method based on mask-RCNN
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
Deschaud et al. Point cloud non local denoising using local surface descriptor similarity
CN110688947B (en) Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
US20120269443A1 (en) Method, apparatus, and program for detecting facial characteristic points
CN106446773B (en) Full-automatic robust three-dimensional face detection method
CN108416801B (en) Har-SURF-RAN characteristic point matching method for stereoscopic vision three-dimensional reconstruction
CN111767960A (en) Image matching method and system applied to image three-dimensional reconstruction
CN111860501B (en) High-speed rail height adjusting rod falling-out fault image identification method based on shape matching
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN109272521A (en) A kind of characteristics of image fast partition method based on curvature analysis
CN105046278B (en) The optimization method of Adaboost detection algorithm based on Haar feature
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN107292869A (en) Image Speckle detection method based on anisotropic Gaussian core and gradient search
CN105678241A (en) Cascaded two dimensional image face attitude estimation method
CN111709893B (en) ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment
Li et al. Adaptive bilateral smoothing for a point-sampled blade surface
CN116503462A (en) Method and system for quickly extracting circle center of circular spot
CN117611525A (en) Visual detection method and system for abrasion of pantograph slide plate
CN108520494B (en) SAR image and visible light image registration method based on structural condition mutual information
CN115035326B (en) Radar image and optical image accurate matching method
Ye et al. Improved edge detection algorithm of high-resolution remote sensing images based on fast guided filter

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939522

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939522

Country of ref document: EP

Kind code of ref document: A1