CN111340134B - A Fast Template Matching Method Based on Local Dynamic Warping - Google Patents


Info

Publication number
CN111340134B
CN111340134B (application CN202010164515.0A)
Authority
CN
China
Prior art keywords
image
test
similarity
template
template image
Prior art date
Legal status
Active
Application number
CN202010164515.0A
Other languages
Chinese (zh)
Other versions
CN111340134A (en)
Inventor
王禹林
刘�文
段裕刚
查文彬
李恒
Current Assignee
Nanjing Yuqiyuan Intelligent Equipment Technology Co ltd
Nanjing University of Science and Technology
Beijing Institute of Electronic System Engineering
Original Assignee
Nanjing Yuqiyuan Intelligent Equipment Technology Co ltd
Nanjing University of Science and Technology
Beijing Institute of Electronic System Engineering
Priority date
Filing date
Publication date
Application filed by Nanjing Yuqiyuan Intelligent Equipment Technology Co ltd, Nanjing University of Science and Technology, and Beijing Institute of Electronic System Engineering
Priority to CN202010164515.0A
Publication of CN111340134A
Application granted
Publication of CN111340134B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention discloses a fast template matching method based on local dynamic time warping, applicable to workpiece positioning, industrial sorting, target tracking, and similar fields. The steps are as follows: extract the feature vectors of the template image and of each test sub-image with the improved ring projection method (IRPT); estimate a coarse similarity to screen out candidate test sub-images; compute the similarity and scaling factor of each candidate with the proposed local dynamic time warping method (LDTW); take the test sub-image with the highest similarity and, based on its scaling factor, crop the smallest region containing the target object at the corresponding position of the test image; finally, compute the rotation angle of that region with the orientation code method (OC). Compared with the prior art, the invention needs only a single template image to compute the scaling factor and rotation angle, solving the problem that conventional algorithms require a large library of template images covering many combinations of scaling factors and rotation angles, and thereby greatly simplifying the algorithm.

Description

A Fast Template Matching Method Based on Local Dynamic Warping

Technical Field

The invention relates to the technical field of machine vision positioning, and in particular to a fast template matching method based on local dynamic time warping.

Background Art

The template matching algorithm is a key technology in machine vision: given a template image, it identifies and locates similar test sub-images (i.e., target workpieces) in a test image, and is widely used in workpiece positioning systems and product quality inspection systems. As working conditions grow more complex, higher demands are placed on the real-time performance and robustness of template matching algorithms.

Conventional template matching methods face challenges such as scaling, rotation, noise, and illumination changes, and good solutions are still lacking. For example, Patent Document 1 (CN105046271A) discloses a template-matching-based method for positioning and detecting MELF components: a large number of template images are generated by rotating and scaling the original template image, and each is matched against the test image one by one to recover the rotation angle and scaling factor, but the algorithmic complexity is high. Patent Document 2 (CN108805220A) discloses a fast template matching algorithm based on gradient integration, which combines image pyramids, contour-point extraction, and gradient integration to reduce redundant computation, but it still relies on a large set of template images covering combinations of scaling and rotation to identify the workpiece's scaling factor and rotation angle. Patent Document 3 (CN102254181A) discloses a multi-order differential ring-template matching and tracking method that computes the rotation angle of the target object from a ring-template matching criterion, but it cannot compute the scaling factor of the target object and is not robust to illumination changes. In summary, there is still no template matching method that can compute the rotation angle and the scaling factor simultaneously with low algorithmic complexity and good robustness.

Summary of the Invention

To improve the real-time performance and robustness of the template matching algorithm while simplifying it, the present invention provides a fast template matching method based on local dynamic time warping.

The technical scheme adopted by the present invention is as follows:

A fast template matching method based on local dynamic time warping, comprising the following steps:

Step 1. Traverse the test image, extract test sub-images of the same size as the template image, and use the ring projection algorithm to extract the ring projection feature vectors of each test sub-image and of the template image.

Step 2. Taking the ring projection feature vectors obtained in Step 1 as input, compute a coarse similarity between each test sub-image and the template image, and select the test sub-images whose similarity exceeds a first threshold as candidate test sub-images.

Step 3. For each candidate test sub-image obtained in Step 2, use the local dynamic time warping algorithm to compute the similarity and the image scaling factor by locally aligning the curve profiles of the ring projection feature vectors.

Step 4. After the traversal of the test image is complete, take the maximum similarity. If it is greater than or equal to a third threshold, the coordinates of the corresponding test sub-image give the target position; at the same time, using the corresponding scaling factor, crop from the test image the smallest region containing the target object.

Step 5. Use the orientation code algorithm to extract the orientation code feature vectors of the smallest region obtained in Step 4 and of the template image, then compute the rotation angle of the image from these vectors, finally obtaining the target position, scaling factor, and rotation angle.

Preferably, the ring projection algorithm in Step 1 is as follows: denote the template image size as M×N, establish a polar coordinate system with the template image center (x0, y0) as origin, denote any pixel as T(r, θ), and denote the ring projection feature vector as IRPT,

[Equation image: Figure GDA0003762432890000021]

where

[Equation image: Figure GDA0003762432890000022]

Rmax = min(M/2, N/2), s(r) is the number of pixels on the ring of radius r, and Tmin(r, θ) is the minimum of all pixel intensities on that ring.
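The IRPT formulas themselves survive only as equation images in this record, so the exact expression is not recoverable here. The sketch below is an illustrative assumption, not the patent's verbatim formula: it takes each IRPT element to be the mean of (pixel intensity minus the ring's minimum Tmin) over the ring of radius r, which uses exactly the quantities named in the text and yields the illumination robustness claimed later.

```python
import numpy as np

def irpt_features(img):
    """Illustrative IRPT sketch (assumed form, not the patent's exact
    formula): element r is the mean of (pixel - ring minimum) over the
    ring of radius r around the image centre."""
    M, N = img.shape
    cy, cx = M // 2, N // 2                      # centre (x0, y0)
    r_max = min(M // 2, N // 2)                  # Rmax = min(M/2, N/2)
    ys, xs = np.indices(img.shape)
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).astype(int)
    feat = np.zeros(r_max)
    for radius in range(r_max):
        ring = img[r == radius]                  # the s(r) pixels on this ring
        if ring.size:
            feat[radius] = np.mean(ring - ring.min())  # subtract Tmin(r, θ)
    return feat
```

Subtracting the ring minimum makes the vector invariant to a constant brightness offset, and averaging over whole rings makes it invariant to rotation about the centre.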

Preferably, the coarse similarity between a test sub-image and the template image in Step 2 is computed as follows: denote the ring projection feature vector extracted from the test sub-image as S, denote the one extracted from the template image as T, and denote the coarse similarity as Kc (the larger Kc is, the more similar the images are),

[Equation image: Figure GDA0003762432890000023]

where n is the dimension of the vector X and S[0:m/2] denotes the first m/2 dimensions of the feature vector S.
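The Kc formula is likewise an equation image in this record. A hypothetical stand-in consistent with the surrounding text is a zero-mean normalized cross-correlation restricted to the first m/2 dimensions of the vectors (the inner rings), which the embodiment says suppresses background introduced by scaling:

```python
import numpy as np

def coarse_similarity(S, T):
    """Hypothetical coarse similarity Kc: normalized cross-correlation
    over the first m/2 dimensions only, since the text says S[0:m/2]
    is used to filter background noise caused by image scaling. The
    patent's exact formula is an image and may differ."""
    m = min(len(S), len(T)) // 2
    s = np.asarray(S[:m], float)
    t = np.asarray(T[:m], float)
    s = s - s.mean()
    t = t - t.mean()
    denom = np.linalg.norm(s) * np.linalg.norm(t)
    return float(np.dot(s, t) / denom) if denom else 0.0
```

With this stand-in, identical vectors score 1; sub-images scoring below the first threshold β1 (0.35 to 0.5) would be discarded.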

Preferably, the first threshold in Step 2 is denoted β1, with 0.35 ≤ β1 ≤ 0.5.

Preferably, Method 1 and Method 2 in Step 3 both compute the similarity and the image scaling factor by finding the optimal local matching between the ring projection feature vectors of the candidate test sub-image and the template image.

Preferably, one specific method for Step 3 is:

Step 3.1a. Denote the ring projection feature vector extracted from the candidate test sub-image as S and the one extracted from the template image as T, with dimensions ms and mt respectively. Create a distance matrix D and an accumulated distance matrix Dacc, both of dimension mt × ms, and initialize the distance function DIS = |x - y|;

Step 3.2a. Use the distance function DIS to compute the distance between every pair of elements of S and T, obtaining the distance matrix D, and then copy D into the accumulated distance matrix Dacc;

Step 3.3a. Update every element of the accumulated distance matrix Dacc with the following formula; after the update, the accumulated distance matrix Dacc is obtained,

[Equation image: Figure GDA0003762432890000031]

Step 3.4a. In the last column of Dacc, search from bottom to top for the element with the smallest value; denote its value temp1 and its position (i1, ms);

Step 3.5a. In the last row of Dacc, search from right to left for the element with the smallest value; denote its value temp2 and its position (mt, i2);

Step 3.6a. Denote the similarity of feature vectors S and T as Ks and the scaling factor as K:

If temp1 is less than or equal to temp2, then

[Equation image: Figure GDA0003762432890000032]

If temp1 is greater than temp2, then

[Equation image: Figure GDA0003762432890000033]
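Steps 3.1a through 3.5a above are fully specified in the text, and their accumulation rule matches the standard dynamic time warping recurrence, used as a stand-in here since the patent's update formula is an image. The closing Ks and K formulas of step 3.6a also survive only as images, so this sketch stops at the quantities temp1, i1, temp2, i2 that those formulas consume:

```python
import numpy as np

def ldtw_align(S, T):
    """Sketch of LDTW method 1, steps 3.1a-3.5a.

    D[i, j] = |T[i] - S[j]| (DIS = |x - y|) is accumulated with the
    standard DTW recurrence (assumed; the patent's formula is an
    image), then the cheapest end points are searched on the last
    column (bottom-up) and the last row (right-to-left)."""
    S = np.asarray(S, float)
    T = np.asarray(T, float)
    mt, ms = len(T), len(S)
    D = np.abs(T[:, None] - S[None, :])          # distance matrix, mt x ms
    Dacc = D.copy()
    for i in range(mt):
        for j in range(ms):
            if i == 0 and j == 0:
                continue
            prev = []
            if i > 0:
                prev.append(Dacc[i - 1, j])
            if j > 0:
                prev.append(Dacc[i, j - 1])
            if i > 0 and j > 0:
                prev.append(Dacc[i - 1, j - 1])
            Dacc[i, j] = D[i, j] + min(prev)
    col = Dacc[:, -1]                            # last column, bottom-up
    i1 = mt - 1 - int(np.argmin(col[::-1]))
    temp1 = float(col[i1])
    row = Dacc[-1, :]                            # last row, right-to-left
    i2 = ms - 1 - int(np.argmin(row[::-1]))
    temp2 = float(row[i2])
    return temp1, i1, temp2, i2
```

Intuitively, when the test sub-image is a scaled version of the template, the cheap end point lands before the end of the longer feature vector, and the matched lengths are what the image-only Ks and K formulas turn into a similarity and a scaling factor.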

Preferably, another specific method for Step 3 is:

Step 3.1b. Smooth and denoise the feature curve with a Gaussian filter. Denote the RPT feature vector as f(x) and the Gaussian function as g(x, σ); the filtered RPT feature vector F(x) is

[Equation image: Figure GDA0003762432890000034]

Step 3.2b. Denote the convolution kernel as T; the discrete slope curve sequence F′(x) is

[Equation image: Figure GDA0003762432890000035]

Step 3.3b. Denote the slope curve sequences obtained from the template image and the test sub-image as T′ and S′ respectively, and denote the scaling factor as k. Initialize the scaling-factor search range [k1, k2] and the scaling-factor precision (step size) k′. Compute the similarity Ks corresponding to each scaling factor k with the formula below; the scaling factor k corresponding to the maximum similarity [equation image: Figure GDA0003762432890000038] is the sought scaling factor K,

[Equation image: Figure GDA0003762432890000036]

where nmax = min(t, k×t), [equation image: Figure GDA0003762432890000037], and β2 is the second threshold, with 10 ≤ β2 ≤ 15.
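Method 2 can be sketched as follows. The smoothing, slope, and scoring formulas are all images in this record, so the sketch makes two labeled assumptions: `np.gradient` stands in for the slope-extracting convolution kernel T, and the score Ks is hypothetically taken as the fraction of the nmax = min(t, k×t) compared points whose slope difference falls within β2:

```python
import numpy as np

def slope_curve(f, sigma=2.0):
    """Steps 3.1b/3.2b sketch: Gaussian-smooth the RPT curve, then
    take its discrete slope (np.gradient stands in for the patent's
    image-only convolution kernel T)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return np.gradient(np.convolve(np.asarray(f, float), g, mode="same"))

def best_scale(Tp, Sp, k1=0.5, k2=2.0, kstep=0.05, beta2=12.0):
    """Step 3.3b sketch: scan k over [k1, k2] in steps of kstep (the
    precision k') and score how well the template slope curve Tp,
    resampled by k, overlays the test slope curve Sp. The scoring
    formula is an image in the source; the fraction-within-beta2 score
    used here is a hypothetical stand-in."""
    Tp = np.asarray(Tp, float)
    Sp = np.asarray(Sp, float)
    t = len(Tp)
    best_k, best_ks = k1, -1.0
    for k in np.arange(k1, k2 + 1e-9, kstep):
        n_max = int(min(t, k * t))               # nmax = min(t, k*t)
        idx = np.arange(n_max)
        j = np.minimum(np.rint(idx / k).astype(int), t - 1)  # resample Tp
        jj = np.minimum(idx, len(Sp) - 1)
        diff = np.abs(Tp[j] - Sp[jj])
        ks = float(np.mean(diff <= beta2)) if n_max else 0.0
        if ks > best_ks:
            best_ks, best_k = ks, float(k)
    return best_k, best_ks
```

Working on slope curves rather than raw intensities removes any constant brightness offset before the scale search, which is consistent with the illumination robustness claimed for LDTW.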

Preferably, the third threshold in Step 4 is denoted β3, with 0.55 ≤ β3 ≤ 0.7.

Preferably, the orientation code algorithm in Step 5 is a sector sampling method: the image is divided into n sector regions, and the mean of all pixel intensities within each sector becomes one element of the orientation code feature vector, yielding a feature vector associated with the rotation angle.

Preferably, Step 5 specifically comprises:

Step 5.1. Denote the size of the input image I as M×N and establish a polar coordinate system with the image center (x0, y0) as origin, so that any pixel can be expressed as I(r, θ). Initialize the angular precision θ′ of the orientation code method; the orientation code feature vector OC is computed as follows:

[Equation image: Figure GDA0003762432890000041]

where

[Equation image: Figure GDA0003762432890000042]

rmax = min(M/2, N/2), and sr is the number of pixels falling within the sector;

Step 5.2. The input image is the template image, denoted T. Each θ corresponds to one orientation code feature vector, so the computation of Step 5.1 yields the 360°/θ′ orientation code feature vectors of the template image. The feature vector corresponding to angle θ is denoted [Figure GDA0003762432890000043] and computed as follows:

[Equation image: Figure GDA0003762432890000044]

where nmax = 360°/θ′ - 1;

Step 5.3. The input image is the smallest region, denoted S. The computation of Step 5.1 yields the single orientation code feature vector of the smallest region, denoted [Figure GDA0003762432890000045] and computed as follows:

[Equation image: Figure GDA0003762432890000046]

Step 5.4. Compute the similarity K(θ, 0°) between the feature vector [Figure GDA0003762432890000047] corresponding to each θ and the feature vector [Figure GDA0003762432890000048] of the smallest region, using the following formula:

[Equation image: Figure GDA0003762432890000049]

where

[Equation image: Figure GDA00037624328900000410]

and β4 is the fourth threshold, with 15 ≤ β4 ≤ 25;

Step 5.5. Find the maximum similarity K(θ, 0°); the corresponding θ is the counterclockwise rotation angle of the smallest region relative to the template image, computed as follows:

[Equation image: Figure GDA00037624328900000411]
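Steps 5.1 through 5.5 can be sketched as follows. The per-sector averaging follows the text, while the similarity formula (an image involving β4) is replaced here by a plain correlation over cyclic shifts, so this illustrates the mechanism rather than the patent's exact scoring:

```python
import numpy as np

def oc_vector(img, step_deg=1.0):
    """Step 5.1 sketch: split the image into 360/step_deg angular
    sectors about its centre; each element of the orientation code
    vector is the mean intensity of one sector (the exact formula is
    an image in the source)."""
    M, N = img.shape
    cy, cx = (M - 1) / 2.0, (N - 1) / 2.0
    r_max = min(M, N) / 2.0
    ys, xs = np.indices(img.shape)
    r = np.hypot(ys - cy, xs - cx)
    theta = (np.degrees(np.arctan2(ys - cy, xs - cx)) + 360.0) % 360.0
    n_bins = int(round(360.0 / step_deg))
    bins = np.minimum((theta / step_deg).astype(int), n_bins - 1)
    oc = np.zeros(n_bins)
    for b in range(n_bins):
        sector = img[(bins == b) & (r <= r_max)]
        oc[b] = sector.mean() if sector.size else 0.0
    return oc

def rotation_angle(template, roi, step_deg=1.0):
    """Steps 5.2-5.5 sketch: rotating an image cyclically shifts its
    OC vector, so the best cyclic shift of the template vector against
    the ROI vector gives the rotation angle (the sign convention
    depends on the image coordinate frame). Plain correlation replaces
    the patent's image-only similarity with threshold beta4."""
    t = oc_vector(template, step_deg)
    s = oc_vector(roi, step_deg)
    scores = [np.dot(np.roll(t, k), s) for k in range(len(t))]
    return int(np.argmax(scores)) * step_deg
```

Because every candidate shift is scored independently, this search parallelizes trivially, matching the parallelism claim made for the OC stage below.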

The beneficial effects of the present invention are:

(1) Conventional template matching methods require a large number of template images covering many combinations of scaling and rotation to build a template library, and must match every template image against the test image one by one to identify the workpiece's scaling factor and rotation angle; this matching process is very time-consuming, and its accuracy depends on the step sizes chosen for the scaling factor and rotation angle. The present invention solves this problem with the proposed Local Dynamic Time Warping (LDTW) algorithm: a single template image suffices to compute the scaling factor and the target position simultaneously, after which the Orientation Codes (OC) algorithm computes the rotation angle. Compared with conventional template matching methods, the invention has very low algorithmic complexity and effectively improves the real-time performance of recognition and positioning. In addition, the proposed LDTW method is robust to illumination changes and therefore more stable.

(2) Conventional template matching methods typically suffer from noise and illumination changes, leading to unstable positioning accuracy, which can easily cause production accidents on industrial sites. The present invention extracts features with the proposed Improved Ring Projection Transformation (IRPT) algorithm, and these features are robust to noise.

(3) The algorithm of the present invention can be parallelized both while traversing the test image to compute the similarity between test sub-images and the template image and while computing the rotation angle from the OC feature vectors, meeting the real-time requirements of industrial applications.

Brief Description of the Drawings

Fig. 1 is a flowchart of the template matching method in an embodiment of the present invention.

Fig. 2 is a schematic diagram of the visual image processing procedure in an embodiment of the present invention (Method 1).

Fig. 3 is a schematic diagram of the visual image processing procedure in an embodiment of the present invention (Method 2).

Fig. 4 shows the results of handling various scaling factors and rotation angles with a single template image in an embodiment of the present invention (Method 1).

Fig. 5 shows the results of handling various scaling factors and rotation angles with a single template image in an embodiment of the present invention (Method 2).

Detailed Description of the Embodiments

The present invention is further described below with reference to the embodiments and the accompanying drawings. This embodiment provides a fast template matching method based on local dynamic time warping, as shown in Figs. 1 to 3, with the following steps.

Step S1. Traverse the test image, extract test sub-images of the same size as the template image, and use the ring projection algorithm to extract the ring projection feature vectors of each test sub-image and of the template image.

S1.1. Traverse the test image from top to bottom and left to right, crop test sub-images of the same size as the template image, and take the pixel coordinates of each sub-image's center point as its position coordinates.

S1.2. Use the proposed Improved Ring Projection Transformation (IRPT) algorithm to extract the IRPT feature vectors of the test sub-images and the template image. Specifically, denote the template image size as M×N, establish a polar coordinate system with the template image center (x0, y0) as origin, denote any pixel as T(r, θ), and denote the ring projection feature vector as IRPT,

[Equation image: Figure GDA0003762432890000061]

where

[Equation image: Figure GDA0003762432890000062]

Rmax = min(M/2, N/2), s(r) is the number of pixels on the ring of radius r, and Tmin(r, θ) is the minimum of all pixel intensities on that ring.

Step S2. Taking the ring projection feature vectors obtained in S1 as input, compute the coarse similarity between each test sub-image and the template image, and select the test sub-images whose similarity exceeds the first threshold, denoted β1, as candidate test sub-images.

The coarse similarity is computed as follows: denote the ring projection feature vector extracted from a test sub-image as S, denote the one extracted from the template image as T, and denote the coarse similarity as Kc (the larger Kc is, the more similar the images are),

[Equation image: Figure GDA0003762432890000063]

where n is the dimension of the vector X and S[0:m/2] denotes the first m/2 dimensions of the feature vector S.

In general, if β1 is too large, correct candidate test sub-images are wrongly filtered out; if it is too small, the preliminary screening loses its effect; hence 0.35 ≤ β1 ≤ 0.5. If Kc is greater than or equal to the threshold β1, the test sub-image is listed as a candidate; otherwise its similarity is set to 0. It is worth noting that computing the similarity with only the first m/2 dimensions of the feature vector S helps filter out the background noise introduced by image scaling.

Step S3. For each candidate test sub-image obtained in S2, based on the rotation invariance and global contour invariance of the IRPT feature vector, use the Local Dynamic Time Warping (LDTW) algorithm to compute the similarity and image scaling factor by locally aligning the curve profiles of the ring projection feature vectors. This embodiment proposes two implementations, Method 1 and Method 2; in essence, both compute the similarity and the image scaling factor by finding the optimal local matching between the ring projection feature vectors of the candidate test sub-image and the template image.

Method 1 is specifically:

S3.1a. Denote the ring projection feature vector extracted from the candidate test sub-image as S and the one extracted from the template image as T, with dimensions ms and mt respectively. Create a distance matrix D and an accumulated distance matrix Dacc, both of dimension mt × ms, and initialize the distance function DIS = |x - y|.

S3.2a. Use the distance function DIS to compute the distance between every pair of elements of S and T, obtaining the distance matrix D, and then copy D into the accumulated distance matrix Dacc.

S3.3a. Update every element of the accumulated distance matrix Dacc with the following formula; after the update, the accumulated distance matrix Dacc is obtained,

[Equation image: Figure GDA0003762432890000071]

S3.4a. In the last column of Dacc, search from bottom to top for the element with the smallest value; denote its value temp1 and its position (i1, ms).

S3.5a. In the last row of Dacc, search from right to left for the element with the smallest value; denote its value temp2 and its position (mt, i2).

S3.6a. Denote the similarity of feature vectors S and T as Ks and the scaling factor as K:

If temp1 is less than or equal to temp2, then

[Equation image: Figure GDA0003762432890000072]

If temp1 is greater than temp2, then

[Equation image: Figure GDA0003762432890000073]

Method 2 is specifically:

S3.1b. Smooth and denoise the feature curve with a Gaussian filter. Denote the RPT feature vector as f(x) and the Gaussian function as g(x, σ). The filtered RPT feature vector F(x) is:

[Equation image: Figure GDA0003762432890000074]

S3.2b. Denote the convolution kernel as T; the discrete slope curve sequence F′(x) is:

[Equation image: Figure GDA0003762432890000075]

S3.3b. Denote the slope curve sequences obtained from the template image and the test sub-image as T′ and S′ respectively, and denote the scaling factor as k. Initialize the scaling-factor search range [k1, k2] and the scaling-factor precision (step size) k′. Compute the similarity Ks corresponding to each scaling factor k with the formula below; the scaling factor k corresponding to the maximum similarity [equation image: Figure GDA0003762432890000076] is the sought scaling factor K,

[Equation image: Figure GDA0003762432890000081]

where nmax = min(t, k×t), [equation image: Figure GDA0003762432890000082], and β2 is the second threshold, with 10 ≤ β2 ≤ 15.

Step S4. After the test image has been traversed, take the maximum similarity value. If this maximum is greater than or equal to the third set threshold, the coordinates of the corresponding test sub-image give the target position, and the smallest region containing the target object (ROI, region of interest) is cropped from the test image according to the corresponding scaling factor; otherwise, the test image contains no target workpiece. The third set threshold is recorded as β3; a β3 that is too large or too small leads to false matches, so typically 0.55 ≤ β3 ≤ 0.7.
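Step S4 reduces to an argmax over the per-position similarities followed by a threshold test and a crop whose size scales with the matched factor. A minimal sketch; the candidate-list structure, the function name, and the choice of ROI size as template size times k are assumptions:

```python
import numpy as np

def locate_roi(test_img, candidates, template_shape, beta3=0.6):
    """candidates: list of (similarity, (row, col), k) tuples, one per
    test sub-image. Returns (roi, top_left) for the best match, or None
    if the best similarity is below the third set threshold beta3."""
    best = max(candidates, key=lambda c: c[0])
    similarity, (r, c), k = best
    if similarity < beta3:
        return None                      # no target workpiece in the test image
    # Crop the smallest region containing the target; scaling the
    # template size by k is an assumption about the patent's crop rule.
    h = int(round(template_shape[0] * k))
    w = int(round(template_shape[1] * k))
    return test_img[r:r + h, c:c + w], (r, c)
```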

Step S5. Use the orientation code (OC) algorithm to extract OC feature vectors from the minimal region (ROI) obtained in S4 and from the template image, then compute the rotation angle of the image from these OC feature vectors, finally obtaining the target position, scaling factor, and rotation angle.

The orientation code algorithm is a sector-sampling method: the image is divided into n sector regions, and the intensities of all pixels within each sector are averaged to form one element of the OC feature vector, yielding an OC feature vector that is associated with the rotation angle. Specifically:

S5.1. Record the size of the input image as M×N. Establish a polar coordinate system with the image center (x0, y0) as the origin, so that any pixel can be represented as T(r, θ). Initialize the angle calculation precision θ′ of the orientation code method, taking θ′ = 1°. The OC feature vector is computed as follows:

(Equation given as image: Figure GDA0003762432890000083)

where

(Equation given as image: Figure GDA0003762432890000084)

rmax = min(M/2, N/2), and sr is the number of pixels falling within the sector.
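The sector sampling of S5.1 can be implemented directly from its verbal description: pixels within radius rmax = min(M/2, N/2) of the image center are binned by polar angle into 360°/θ′ sectors and their intensities averaged. A sketch assuming simple arctan2 binning (the exact formula is an equation image in the original):

```python
import numpy as np

def orientation_code(img, theta_step_deg=1.0):
    """S5.1 sketch: average pixel intensity per angular sector around the
    image center; pixels beyond r_max = min(M/2, N/2) are ignored."""
    M, N = img.shape
    y0, x0 = (M - 1) / 2.0, (N - 1) / 2.0
    yy, xx = np.mgrid[0:M, 0:N]
    r = np.hypot(yy - y0, xx - x0)
    theta = np.degrees(np.arctan2(yy - y0, xx - x0)) % 360.0
    r_max = min(M / 2.0, N / 2.0)
    inside = r <= r_max
    n_sectors = int(round(360.0 / theta_step_deg))
    # Bin each in-radius pixel into its sector and average per sector
    sector = np.minimum((theta[inside] / theta_step_deg).astype(int),
                        n_sectors - 1)
    counts = np.bincount(sector, minlength=n_sectors)          # s_r per sector
    sums = np.bincount(sector, weights=img[inside].astype(float),
                       minlength=n_sectors)
    oc = np.zeros(n_sectors)
    nonzero = counts > 0
    oc[nonzero] = sums[nonzero] / counts[nonzero]
    return oc
```

On a constant image every sector averages to the same intensity, which is a quick sanity check that the binning covers the disk.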

S5.2. The input image is the template image, recorded as T. Each angle θ corresponds to one OC feature vector, so the computation of step 5.1 yields the 360°/θ′ OC feature vectors of the template image. The feature vector corresponding to angle θ is expressed as

(Equation given as image: Figure GDA0003762432890000085)

and calculated as follows:

(Equation given as image: Figure GDA0003762432890000086)

where nmax = 360°/θ′ − 1.

S5.3. The input image is the minimal region, recorded as S. The computation of step 5.1 yields the single OC feature vector of the minimal region, expressed as

(Equation given as image: Figure GDA0003762432890000091)

and calculated as follows:

(Equation given as image: Figure GDA0003762432890000092)

S5.4. Compute the similarity K(θ,0°) between the feature vector corresponding to each θ,

(Equation given as image: Figure GDA0003762432890000093)

and the feature vector

(Equation given as image: Figure GDA0003762432890000094)

using the following formula:

(Equation given as image: Figure GDA0003762432890000095)

where

(Equation given as image: Figure GDA0003762432890000096)

and β4 is the fourth set threshold. A β4 that is too large tends to disable the rotation-angle computation, while one that is too small harms noise robustness; based on practical tuning experience, β4 is usually set to 15-25.

S5.5. Find the maximum similarity K(θ,0°); the corresponding θ is the counterclockwise rotation angle of the minimal region relative to the template image, computed as follows:

(Equation given as image: Figure GDA0003762432890000097)
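Steps S5.2 through S5.5 exploit the fact that rotating an image by θ cyclically shifts its orientation code vector. The per-angle similarity of S5.4 is given only as an equation image involving the threshold β4, so the sketch below uses a thresholded element-agreement score as a stand-in, which is an assumption rather than the patent's exact formula; the shift direction of np.roll relative to counterclockwise rotation is likewise assumed.

```python
import numpy as np

def rotation_angle(oc_template, oc_roi, theta_step_deg=1.0, beta4=20.0):
    """S5.2-S5.5 sketch: compare the ROI's orientation code vector with
    every cyclic shift of the template's vector; the best-matching shift
    gives the rotation angle. The score (fraction of elements differing
    by at most beta4) is an assumed stand-in for the patent's K(theta,0)."""
    n = len(oc_template)
    best_theta, best_K = 0.0, -1.0
    for shift in range(n):
        shifted = np.roll(oc_template, shift)   # template rotated by shift * theta'
        K = float(np.mean(np.abs(shifted - oc_roi) <= beta4))
        if K > best_K:
            best_K, best_theta = K, shift * theta_step_deg
    return best_theta, best_K
```

Feeding back a cyclically shifted copy of the template vector recovers the shift exactly, which is the degenerate noise-free case.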

Step S6. Use the target position, scaling factor, and rotation angle obtained in S5 for tasks such as workpiece positioning, industrial sorting, or target tracking.

Obviously, the above embodiments are merely examples given to illustrate the present invention and do not limit its implementations. Those of ordinary skill in the art can make changes or modifications of other forms on the basis of the above description; it is neither necessary nor possible to enumerate all implementations here. Obvious changes or modifications derived from the essential spirit of the present invention still fall within its protection scope.

Claims (5)

1. A fast template matching method based on local dynamic warping is characterized by comprising the following steps:
step 1, traversing the test graph, extracting a test subgraph with the same size as the template image, and extracting ring projection characteristic vectors of the test subgraph and the template image by using a ring projection algorithm;
the ring projection algorithm is as follows: the size of the template image is recorded as M×N, a polar coordinate system is established with the template image center (x0, y0) as the origin, any pixel is denoted T(r, θ), and the ring projection feature vector is denoted IRPT,

(Equation given as image: Figure FDA0003766697960000011)

wherein

(Equation given as image: Figure FDA0003766697960000012)

Rmax = min(M/2, N/2), s(r) is the number of pixels on the circle of radius r, and Tmin(r, θ) is the minimum of all pixel intensities on that circle;
step 2, taking the ring projection feature vector obtained in the step 1 as input, calculating the roughly estimated similarity between the test subgraph and the template image, screening out the test subgraphs with the similarity larger than a first set threshold value, and listing the test subgraphs as candidate test subgraphs;
step 3, aiming at the candidate test subgraph obtained in the step 2, calculating the similarity and the image scaling coefficient by using a local dynamic warping algorithm and locally aligning the curve outline of the ring projection feature vector, namely, searching the optimal local matching relation of the candidate test subgraph and the ring projection feature vector of the template image and calculating the similarity and the image scaling coefficient;
a specific method of the step 3 is as follows:
step 3.1a, recording the ring projection feature vector extracted from the candidate test subgraph as S and the ring projection feature vector extracted from the template image as T, and inputting S and T, whose dimensions are ms and mt respectively; creating a distance matrix D and a cumulative distance matrix Dacc, both of dimension mt×ms, and initializing the distance function DIS = |x − y|;
step 3.2a, calculating the distance between each element of the feature vector S and each element of the feature vector T using the distance function DIS to obtain the distance matrix D, and then assigning the distance matrix D to the cumulative distance matrix Dacc;
step 3.3a, updating each element of the cumulative distance matrix Dacc using the following formula to obtain the updated Dacc:

(Equation given as image: Figure FDA0003766697960000013)
step 3.4a, for the last column of the cumulative distance matrix Dacc, searching from bottom to top for the element with the smallest value, recording this value as temp1 and the position of this element as (i1, ms);
step 3.5a, for the last row of the cumulative distance matrix Dacc, searching from right to left for the element with the smallest value, recording this value as temp2 and the position of this element as (i2, ms);
step 3.6a, recording the similarity of the feature vectors S and T as Ks and the scaling factor as K;
if temp1 is less than or equal to temp2, then

(Equation given as image: Figure FDA0003766697960000014)

if temp1 is greater than temp2, then

(Equation given as image: Figure FDA0003766697960000021)
the other specific method of step 3 is as follows:
step 3.1b, smoothing and denoising the feature curve by Gaussian filtering, wherein the ring projection feature vector is recorded as f(x), the Gaussian function is recorded as g(x, σ), and the filtered ring projection feature vector F(x) is

(Equation given as image: Figure FDA0003766697960000022)

step 3.2b, recording the convolution kernel as T, the discrete slope curve sequence F′(x) then being

(Equation given as image: Figure FDA0003766697960000023)

step 3.3b, recording the slope curve sequences obtained from the template image and the test subgraph as T′ and S′ respectively, recording the scaling factor as k, initializing the scaling-factor calculation range as [k1, k2] with calculation precision (step size) k′, and calculating the similarity Ks corresponding to each scaling factor k using the following formula, the scaling factor k corresponding to the maximum similarity

(Equation given as image: Figure FDA0003766697960000026)

being the desired scaling factor K,

(Equation given as image: Figure FDA0003766697960000024)

wherein nmax = min(t, k×t),

(Equation given as image: Figure FDA0003766697960000025)

and β2 is the second set threshold, with 10 ≤ β2 ≤ 15;
Step 4, after traversing the test chart, taking the maximum value of the similarity, if the maximum value of the similarity is greater than or equal to a third set threshold value, determining the coordinate of the corresponding test subgraph as the target position, and cutting out the minimum area containing the target object from the test chart according to the corresponding scaling coefficient;
step 5, extracting the direction code characteristic vectors of the minimum region and the template image obtained in the step 4 by using a direction code algorithm, and then calculating the rotation angle of the image based on the direction code characteristic vectors to finally obtain a target position, a scaling coefficient and the rotation angle; the direction code algorithm is a sector sampling method, namely, an image is divided into n sector areas, all pixel intensity values in the sector areas are averaged and used as an element of a direction code feature vector, and therefore a direction code feature vector related to a rotation angle is obtained.
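The ring projection extraction recited in step 1 of claim 1 bins pixels by integer radius around the image center; per the claim's definitions it involves s(r), the pixel count on each circle, and Tmin, the minimum intensity on the circle. The exact combination in the claim's equation image is not recoverable from the text, so the sketch below uses the mean min-subtracted intensity per ring as a plausible stand-in:

```python
import numpy as np

def ring_projection(img):
    """Ring projection sketch: one statistic per integer radius around
    the image center, up to R_max = min(M/2, N/2). The mean of
    (intensity - ring minimum) per ring is an assumed stand-in for the
    claim's equation-image formula."""
    M, N = img.shape
    y0, x0 = (M - 1) / 2.0, (N - 1) / 2.0
    yy, xx = np.mgrid[0:M, 0:N]
    r = np.hypot(yy - y0, xx - x0).round().astype(int)
    R_max = int(min(M / 2.0, N / 2.0))
    feat = np.zeros(R_max + 1)
    for radius in range(R_max + 1):
        ring = img[r == radius].astype(float)
        if ring.size:                  # s(r) = ring.size pixels on this circle
            feat[radius] = np.mean(ring - ring.min())
    return feat
```

Because each ring statistic is independent of angle, the resulting vector is rotation invariant, which is what makes it usable for the coarse matching of step 2.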
2. The fast template matching method based on local dynamic warping as claimed in claim 1, wherein the algorithm for the roughly estimated similarity between the test subgraph and the template image in step 2 is as follows: the ring projection feature vector extracted from the test subgraph is recorded as S, the ring projection feature vector extracted from the template image is recorded as T, and the roughly estimated similarity between the test subgraph and the template image is recorded as Kc, a larger Kc indicating more similar images,

(Equation given as image: Figure FDA0003766697960000031)

where n is the dimension of vector X, and S[0 : m/2] is the first m/2 dimensions of the feature vector S.
3. The fast template matching method based on local dynamic warping as claimed in claim 1 or 2, wherein the first set threshold in step 2 is β1, with 0.35 ≤ β1 ≤ 0.5.
4. The fast template matching method based on local dynamic warping as claimed in claim 1, wherein the third set threshold β3 in step 4 satisfies 0.55 ≤ β3 ≤ 0.7.
5. The fast template matching method based on local dynamic warping as claimed in claim 1, wherein the step 5 specifically comprises:
step 5.1, recording the size of the input image I as M×N and establishing a polar coordinate system with the image center (x0, y0) as the origin, so that any pixel can be represented as I(r, θ); initializing the angle calculation precision θ′ of the orientation code method, the orientation code feature vector OC being calculated as follows:

(Equation given as image: Figure FDA0003766697960000032)

wherein

(Equation given as image: Figure FDA0003766697960000033)

rmax = min(M/2, N/2), and sr is the number of pixels falling within the sector area;
step 5.2, the input image being the template image, recorded as T, each θ corresponding to one orientation code feature vector, so that the calculation method in step 5.1 yields the 360°/θ′ orientation code feature vectors of the template image, the feature vector corresponding to angle θ being expressed as

(Equation given as image: Figure FDA0003766697960000034)

and calculated as follows:

(Equation given as image: Figure FDA0003766697960000035)

wherein nmax = 360°/θ′ − 1;
Step 5.3, the input image is a minimum area and is marked as S, and a direction code characteristic vector corresponding to the minimum area is obtained by using the calculation method in the step 5.1 and is expressed as
Figure FDA0003766697960000036
The calculation formula is as follows:
Figure FDA0003766697960000037
step 5.4, calculating the similarity K(θ,0°) between the feature vector corresponding to each θ,

(Equation given as image: Figure FDA0003766697960000038)

and the feature vector

(Equation given as image: Figure FDA0003766697960000039)

using the following formula:

(Equation given as image: Figure FDA0003766697960000041)

wherein

(Equation given as image: Figure FDA0003766697960000042)

and β4 is the fourth set threshold, with 15 ≤ β4 ≤ 25;
Step 5.5, find the biggest similarity K (θ,0°) And the corresponding theta is the counterclockwise rotation angle of the minimum area relative to the template picture, and the calculation formula is as follows:
Figure FDA0003766697960000043
CN202010164515.0A 2020-03-11 2020-03-11 A Fast Template Matching Method Based on Local Dynamic Warping Active CN111340134B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010164515.0A CN111340134B (en) 2020-03-11 2020-03-11 A Fast Template Matching Method Based on Local Dynamic Warping


Publications (2)

Publication Number Publication Date
CN111340134A CN111340134A (en) 2020-06-26
CN111340134B true CN111340134B (en) 2022-09-06

Family

ID=71182292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010164515.0A Active CN111340134B (en) 2020-03-11 2020-03-11 A Fast Template Matching Method Based on Local Dynamic Warping

Country Status (1)

Country Link
CN (1) CN111340134B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950623B (en) * 2021-03-29 2024-08-02 云印技术(深圳)有限公司 Mark identification method and system
CN115166721B (en) * 2022-09-05 2023-04-07 湖南众天云科技有限公司 Radar and GNSS information calibration fusion method and device in roadside sensing equipment
CN117011369B (en) * 2023-08-15 2024-11-12 合肥图迅电子科技有限公司 Chip reference point positioning method, device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559749A (en) * 2018-12-24 2019-04-02 苏州思必驰信息科技有限公司 Combined decoding method and system for speech recognition system
CN110136160A (en) * 2019-05-13 2019-08-16 南京大学 A Fast Image Matching Method Based on Circular Projection

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7813558B2 (en) * 2005-01-11 2010-10-12 Nec Corporation Template matching method, template matching apparatus, and recording medium that records program for it

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559749A (en) * 2018-12-24 2019-04-02 苏州思必驰信息科技有限公司 Combined decoding method and system for speech recognition system
CN110136160A (en) * 2019-05-13 2019-08-16 南京大学 A Fast Image Matching Method Based on Circular Projection

Also Published As

Publication number Publication date
CN111340134A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN110866924B (en) Line structured light center line extraction method and storage medium
CN109544456B (en) Panoramic environment perception method based on fusion of 2D image and 3D point cloud data
CN110148162B (en) Heterogeneous image matching method based on composite operator
CN108122256B (en) A method of it approaches under state and rotates object pose measurement
CN111340134B (en) A Fast Template Matching Method Based on Local Dynamic Warping
WO2019042232A1 (en) Fast and robust multimodal remote sensing image matching method and system
CN106981077B (en) Infrared image and visible light image registration method based on DCE and LSS
CN108225319B (en) Monocular vision rapid relative pose estimation system and method based on target characteristics
CN107169972B (en) Non-cooperative target rapid contour tracking method
CN104867126A (en) Method for registering synthetic aperture radar image with change area based on point pair constraint and Delaunay
CN107452030A (en) Method for registering images based on contour detecting and characteristic matching
CN107862319B (en) Heterogeneous high-light optical image matching error eliminating method based on neighborhood voting
CN115471682A (en) An Image Matching Method Based on SIFT Fusion ResNet50
CN110516528A (en) A moving target detection and tracking method based on moving background
CN116862960A (en) Workpiece morphology point cloud registration method, device, equipment and storage medium
CN111709893B (en) ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment
CN114549400A (en) Image identification method and device
WO2023130842A1 (en) Camera pose determining method and apparatus
CN113192095B (en) A Corner Detection Method Based on Parallelogram Diagonals
Paffenholz et al. Geo-referencing point clouds with transformational and positional uncertainties
CN109829502B (en) Image pair efficient dense matching method facing repeated textures and non-rigid deformation
Kang et al. A robust image matching method based on optimized BaySAC
CN112001954A (en) An underwater PCA-SIFT image matching method based on polar curve constraints
Dryanovski et al. Real-time pose estimation with RGB-D camera
Koutaki et al. Fast and high accuracy pattern matching using multi-stage refining eigen template

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant