JP3074292B2 - Template matching processing method - Google Patents

Template matching processing method

Info

Publication number
JP3074292B2
JP3074292B2 JP08312517A JP31251796A
Authority
JP
Japan
Prior art keywords
cross
correlation
pixel
correlation value
template matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP08312517A
Other languages
Japanese (ja)
Other versions
JPH10124666A (en)
Inventor
隆 徳山
知生 野木
伸一 小柳
明生 立石
光俊 赤井
Original Assignee
大同電機工業株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大同電機工業株式会社 filed Critical 大同電機工業株式会社
Priority to JP08312517A priority Critical patent/JP3074292B2/en
Publication of JPH10124666A publication Critical patent/JPH10124666A/en
Application granted granted Critical
Publication of JP3074292B2 publication Critical patent/JP3074292B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Landscapes

  • Image Analysis (AREA)

Description

【発明の詳細な説明】DETAILED DESCRIPTION OF THE INVENTION

【0001】[0001]

【産業上の利用分野】本発明は、画像処理に於ける、画
像間パターンマッチング処理方法に関するものである。
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a method for pattern matching between images in image processing.

【0002】[0002]

【従来の技術】近年、あらゆる分野で画像処理応用技術
が導入される中、得られた画像データを総合的に解析
し、高精度かつ多角的な特徴を抽出する処理手法が種々
試みられている。
2. Description of the Related Art In recent years, as image processing technology has been introduced in every field, various processing methods have been attempted that comprehensively analyze the acquired image data and extract features with high accuracy and from multiple angles.

【0003】FAの分野では、画像間の対応付けを求め
るパターンマッチング処理の用途が多く、特に、テンプ
レート画像とテクスチャー画像の類似度を走査、演算
し、対象物の位置を求めるテンプレートマッチング処理
は、物の位置決め、移動体の追跡、三次元形状計測等の
処理に用いられる、最も一般的な手法である。
In the field of FA (factory automation), pattern matching processing for finding correspondences between images has many uses. In particular, template matching, which scans and computes the similarity between a template image and a texture image to find the position of an object, is the most common method, used for processes such as positioning objects, tracking moving bodies, and three-dimensional shape measurement.

【0004】ここで、テンプレートマッチング処理につ
いて説明する。図5に示す様に、求める対象物を含むテ
ンプレート画像データg(x,y)、テクスチャー画像
データf(x,y)とすると、正規化相互相関R(x,
y)は、画像データg(x,y)の中心座標を(k,
ι)とすると、式にて計算される。この演算をテクス
チャ画像全体の画素に対して施し、正規化相互相関値が
最大となる画素位置を求める。
Here, the template matching process will be described. As shown in FIG. 5, let g(x, y) be the template image data containing the target object, and f(x, y) the texture image data. With the center coordinates of g(x, y) taken as (k, ι), the normalized cross-correlation R(x, y) is computed by the equation. This computation is applied to the pixels of the entire texture image, and the pixel position at which the normalized cross-correlation value is maximal is found.
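As a hedged sketch (not the patent's implementation; NumPy and the function name are assumptions), the scan just described — normalized cross-correlation at every pixel position, then taking the maximum — can be written as:

```python
import numpy as np

def ncc_scan(texture, template):
    """Slide the template over the texture image, computing the
    normalized cross-correlation R at every valid position, and return
    R plus the position of its maximum (original-pixel accuracy)."""
    th, tw = template.shape
    H, W = texture.shape
    g = template - template.mean()
    g_norm = np.sqrt((g * g).sum())
    # -1.0 marks positions where the correlation is undefined (flat window)
    R = np.full((H - th + 1, W - tw + 1), -1.0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            win = texture[y:y + th, x:x + tw]
            f = win - win.mean()
            denom = np.sqrt((f * f).sum()) * g_norm
            if denom > 0:
                R[y, x] = (f * g).sum() / denom
    best = np.unravel_index(np.argmax(R), R.shape)  # (y, x) of maximal R
    return R, best
```

The double loop makes the cost of scanning every pixel explicit, which is the computational burden the later paragraphs address.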

【0005】式で演算された正規化相互相関値の分布
を図6に示す。この結果、最も相関の高い位置を元画素
精度で求めることが出来る。
FIG. 6 shows the distribution of the normalized cross-correlation values computed by the equation. As a result, the position with the highest correlation can be found with original-pixel accuracy.

【0006】次に対象物の位置をサブピクセルの精度で
求める場合について説明する。先に示したテンプレート
画像データg(x,y)、テクスチャ画像データf
(x,y)に濃度補間処理を施し、サブピクセルオーダ
ーの濃淡画像データを作り、式を用いて同様の演算を
補間画素全体に対して行えば良い。ここで、図7を用い
て、濃度補間処理の一例を説明する。元画素画像データ
の位置をu,v座標で表し、座標位置を(u,v)、
(u,v+1)、(u+1,v)、(u+1,v+1)
とし、濃度値を各々f(u,v)、f(u,v+1)、
f(u+1,v)、f(u+1,v+1)とし、濃度補
間後の(u0,v0)の位置の濃度をf(u0,v0)
とすると、濃度値f(u0,v0)は式にて演算され
る。 f(u0,v0)=f(u,v)(1−α)(1−β)+f(u+1,v)α(1−β)+f(u,v+1)(1−α)β+f(u+1,v+1)αβ………
Next, the case where the position of the object is found with sub-pixel accuracy will be described. Density interpolation is applied to the template image data g(x, y) and texture image data f(x, y) shown above to create grayscale image data on the order of sub-pixels, and the same computation may then be performed over all of the interpolated pixels using the equation. Here, an example of the density interpolation process will be described with reference to FIG. 7. Let the positions of the original pixel image data be expressed in (u, v) coordinates, with the four coordinate positions (u, v), (u, v+1), (u+1, v), (u+1, v+1) and respective density values f(u, v), f(u, v+1), f(u+1, v), f(u+1, v+1), and let f(u0, v0) denote the density at the position (u0, v0) after density interpolation. The density value f(u0, v0) is then computed by the equation:

f(u0, v0) = f(u, v)(1−α)(1−β) + f(u+1, v)α(1−β) + f(u, v+1)(1−α)β + f(u+1, v+1)αβ ………
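For concreteness, the bilinear density interpolation of the equation above can be written directly (a sketch; the function name is invented for illustration):

```python
def bilinear_density(f, u, v, alpha, beta):
    """Density at the interpolated position (u0, v0) = (u + alpha, v + beta),
    computed from the four surrounding original pixels exactly as in the
    equation above. `f` is indexed f[u][v]."""
    return (f[u][v] * (1 - alpha) * (1 - beta)
            + f[u + 1][v] * alpha * (1 - beta)
            + f[u][v + 1] * (1 - alpha) * beta
            + f[u + 1][v + 1] * alpha * beta)
```

At α = β = 0 this reduces to f(u, v), and at α = β = 0.5 it gives the average of the four corner densities, as expected of a bilinear weighting.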

【0007】α及びβは、元画像データ間を例えば各々
10等分(10×10=100倍補間)して補間する場
合は、図8に示すように0.1、0.2〜1という様に
変化させ、各点の濃度値を求める。以上の結果、最も相
関の高い位置をサブピクセル精度で求めることが出来
る。図9に従来のサブピクセル精度での対応点を求める
ための処理フローの一例を示す。
When interpolating by dividing the spacing between original pixels into, for example, ten equal parts in each direction (10 × 10 = 100-fold interpolation), α and β are varied as 0.1, 0.2, … up to 1 as shown in FIG. 8, and the density value at each point is obtained. As a result of the above, the position with the highest correlation can be found with sub-pixel accuracy. FIG. 9 shows an example of a conventional processing flow for finding corresponding points with sub-pixel accuracy.
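To make the cost of this conventional approach concrete (an illustrative sketch, not the patent's code): sweeping α and β over a 10 × 10 grid produces 100 interpolated density values for every original pixel cell.

```python
def interpolated_grid(f, u, v, steps=10):
    """Conventional approach sketched in FIG. 8: divide the unit pixel
    spacing into `steps` equal parts per axis, producing steps * steps
    interpolated densities per original pixel cell (10 x 10 = 100-fold
    data growth). Illustrative helper, not the patent's code."""
    grid = []
    for i in range(steps):           # alpha = 0.0, 0.1, ..., 0.9
        for j in range(steps):       # beta  = 0.0, 0.1, ..., 0.9
            a, b = i / steps, j / steps
            d = (f[u][v] * (1 - a) * (1 - b)
                 + f[u + 1][v] * a * (1 - b)
                 + f[u][v + 1] * (1 - a) * b
                 + f[u + 1][v + 1] * a * b)
            grid.append(((u + a, v + b), d))
    return grid
```

Every one of these 100 points must then be fed through the correlation scan, which is exactly the data growth the next paragraph identifies as the bottleneck.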

【0008】[0008]

【発明が解決しようとする課題】以上説明した様に、テ
ンプレートマッチング処理で用いる画像データは多く、
サブピクセル精度で対応位置を求めようとした場合、元
画素のデータ量が、補間倍率(例えば、0.1ピクセル
精度では、10×10=100倍)により拡大される。
また、テクスチャ画像の全ての点に於いて、対応付け走
査を施すので、テンプレート画像パターンと最も良く一
致する場所を探し出すには、かなり長い演算時間を必要
とする問題が生じる。
As described above, the template matching process uses a large amount of image data. When the corresponding position is to be found with sub-pixel accuracy, the amount of original-pixel data is multiplied by the interpolation factor (for example, 10 × 10 = 100 times for 0.1-pixel accuracy). In addition, since the correspondence scan is applied at every point of the texture image, the problem arises that a considerably long computation time is needed to find the location that best matches the template image pattern.

【0009】これらの問題の解決策として、超高速動作
のCPUの導入や複数MPUによる並列分散処理構成等
のハードウェア的対応が考えられるが、処理装置のコス
ト上昇及び大型化につながることになり、得策ではな
い。一方、ソフトウェア的な解決策としては、相互相関
演算を高速フーリェ変換を用いて行う方法や、相互相関
演算の部分和が、あるしきい値を超えた時点で、演算を
中断させ、演算効率を上げるSSDA法(Sequen
tial Similarity Detection
Algorithm)や、飛び越し走査で対応範囲を
絞り込み、次にその範囲内を全て走査する粗密2段階の
サーチ方法等が提案さているが、位置検出精度を上げる
には濃度補間倍率を上げる他はなく、それに共ない、画
像データも増大し、結局の所、演算時間が伸びてしまう
という問題が生じる。
As solutions to these problems, hardware measures such as introducing an ultra-high-speed CPU or a parallel distributed processing configuration with multiple MPUs are conceivable, but these lead to increased cost and size of the processing device and are not a good strategy. On the software side, proposals include performing the cross-correlation computation with the fast Fourier transform; the SSDA (Sequential Similarity Detection Algorithm) method, which raises computational efficiency by aborting the computation once its partial sum exceeds a certain threshold; and a two-stage coarse-to-fine search that narrows the candidate range by interlaced scanning and then scans everything within that range. However, the only way to raise the position detection accuracy is to raise the density interpolation factor; with it the image data also grows, so in the end the problem remains that the computation time increases.

【0010】本発明は上記実情に鑑みなされたもので、
画像間の対応位置を求めるテンプレートマッチング処理
に於いて、濃度補間処理をせずに、極めて短時間でサブ
ピクセル精度の対応位置を求めるテンプレートマッチン
グ処理方法を提供することを目的とする。
The present invention has been made in view of the above circumstances, and its object is to provide a template matching processing method that, in template matching for finding corresponding positions between images, finds the corresponding position with sub-pixel accuracy in an extremely short time without performing density interpolation.

【0011】[0011]

【課題を解決するための手段】テレビカメラで撮像され
た濃淡画像間の対応付け走査により求められた、元画素
データによる最大相互相関値と、当該最大相互相関値価
素位置の周囲の画素位置の相互相関値の差、或いは、傾
きを利用して、サブピクセル精度の対応位置を求める様
にしたものである。
The corresponding position is found with sub-pixel accuracy by using the difference, or the slope, between the maximum cross-correlation value based on the original pixel data, obtained by the correspondence scan between grayscale images captured by a television camera, and the cross-correlation values at the pixel positions surrounding the maximum-value pixel position.

【0012】[0012]

【作用】元画素データによる正規化相互相関演算で求め
た、最大相互相関値と、当該最大相互相関値画素位置の
周囲の画素位置の相互相関値の差、或いは、傾きを利用
して、サブピクセル精度の対応点を求めることが出来る
ので、従来の様に濃度補間処理を施した大きな画像デー
タに対して、テンプレートマッチング処理を行う必要が
なくなる。
Since corresponding points can be found with sub-pixel accuracy by using the difference, or the slope, between the maximum cross-correlation value obtained by the normalized cross-correlation computation on the original pixel data and the cross-correlation values at the pixel positions surrounding the maximum-value pixel position, it is no longer necessary to perform template matching on the large image data produced by density interpolation as in the conventional method.

【0013】[0013]

【実施例】以下、本発明の実施例について、図面を用い
て説明する。図1は、本実施例のテンプレートマッチン
グ処理フローで、本処理フローの順序に従って説明す
る。まず、テレビカメラからのアナログ映像信号を取り
込み、順次A/D変換する。次にデジタイズされた取り
込み画像データ上で、図5に示すように、対象物を含む
テンプレート画像データg(x,y)と、テクスチャ画
像f(x,y)の領域を設定する。次に正規化相互相関
演算について説明すると、画像データg(x,y)の中
心座標を(k,ι)とすると、正規化相互相関R(x,
y)は、式にて計算される。この演算をテクスチャ画
像全体の画素に対して施し、正規化相互相関値が最大と
なる画素位置を求める。式で演算された正規化相互相
関値の三次元分布モデルを図2に示す。x軸、y軸は、
画素の座標軸、z軸は、各画素位置での相互相関値を表
す。図中W1〜W9が各画素位置での相互相関値であ
り、高低差で相関の度合を表している。相互相関値の最
も高い位置はW2の値を持つ画素であり、この画素内に
サブピクセル精度の対応点が存在することになる。
Embodiments of the present invention will be described below with reference to the drawings. FIG. 1 shows the template matching processing flow of this embodiment, which will be described in the order of the flow. First, the analog video signal from a television camera is captured and sequentially A/D converted. Next, on the digitized captured image data, the region of the template image data g(x, y) containing the object and the region of the texture image f(x, y) are set as shown in FIG. 5. The normalized cross-correlation computation is as follows: with the center coordinates of the image data g(x, y) taken as (k, ι), the normalized cross-correlation R(x, y) is computed by the equation. This computation is applied to the pixels of the entire texture image, and the pixel position with the maximal normalized cross-correlation value is found. FIG. 2 shows a three-dimensional distribution model of the normalized cross-correlation values computed by the equation. The x and y axes are the pixel coordinate axes, and the z axis represents the cross-correlation value at each pixel position. In the figure, W1 to W9 are the cross-correlation values at the respective pixel positions, and the height difference represents the degree of correlation. The position with the highest cross-correlation value is the pixel with value W2, and the sub-pixel-accuracy corresponding point lies within this pixel.

【0014】次にサブピクセル精度の対応点を求める方
法について説明する。図3は、図2の相互相関値を、最
大相互相関値W2を持つ画素の周りのx軸、y軸方向4
点に集めたモデル図で、この状態からサブピクセル精度
の対応点を求める。まず、x軸方向に於けるサブピクセ
ル精度の対応点を求める方法について、図4を用いて説
明する。W2は最大相互相関値、W1、W3は両隣の画素の相互相関値であり、θ1はW1とW2を結ぶ線分と、W1よりOW2に垂直に下ろした線分とのなす角、θ2はW3とW2を結ぶ線分とW3よりOW2に垂直に下ろした線分とのなす角である。サブピクセル精度の対応点は、θ1=θ2が成り立つ位置にOW2があることであるから(図中点線で示す)、式で求められる。tanθ1=tanθ2 より
Next, a method of finding the corresponding point with sub-pixel accuracy will be described. FIG. 3 is a model diagram in which the cross-correlation values of FIG. 2 are collected at the four points in the x- and y-axis directions surrounding the pixel with the maximum cross-correlation value W2; from this state the sub-pixel-accuracy corresponding point is found. First, the method of finding the corresponding point with sub-pixel accuracy in the x-axis direction will be described with reference to FIG. 4. W2 is the maximum cross-correlation value, W1 and W3 are the cross-correlation values of the pixels on either side, θ1 is the angle between the line segment connecting W1 and W2 and the perpendicular dropped from W1 onto OW2, and θ2 is the angle between the line segment connecting W3 and W2 and the perpendicular dropped from W3 onto OW2. Since the sub-pixel-accuracy corresponding point is where OW2 lies at the position satisfying θ1 = θ2 (shown by the dotted line in the figure), it is found by the equation. From tan θ1 = tan θ2:
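The equation itself does not survive in this text. As a hedged sketch, a widely used closed form for an equal-angle (symmetric-triangle) peak fit over the three samples W1, W2, W3 is the following; it is an assumption consistent with the θ1 = θ2 construction, not a quotation of the patent's formula:

```python
def subpixel_offset(w_left, w_peak, w_right):
    """Equal-angle (symmetric-triangle) peak estimator.

    Fits two lines of equal and opposite slope through the three
    correlation samples (spaced one pixel apart) and returns the apex
    position as an offset in (-0.5, 0.5] pixels from the peak sample.
    Assumed closed form -- not quoted from the patent."""
    if w_left == w_right:
        return 0.0          # symmetric neighbours: peak is at the sample
    if w_right > w_left:
        # the steeper line passes through w_left and w_peak
        return 0.5 * (w_right - w_left) / (w_peak - w_left)
    # the steeper line passes through w_right and w_peak
    return 0.5 * (w_right - w_left) / (w_peak - w_right)
```

For example, the samples −0.3, 0.7, 0.3 taken one pixel apart from a triangular peak whose apex sits at +0.3 recover an offset of exactly 0.3.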

【0015】同様にy軸方向に於けるサブピクセル精度
の対応点は、W2、W4、W5を用いて、式で求めら
れる。
Similarly, the corresponding point with sub-pixel accuracy in the y-axis direction is found by an equation using W2, W4, and W5.

【0016】以上の結果から、サブピクセル精度の対応点は画素中心から(Διx,Διy)だけずれた位置に存在することが求められる。仮に元画素の最大相互相関値W2と、当該最大相互相関値画素位置の周囲の画素位置の相互相関値W1、W3、W4、W5のいずれかとの差が微小(ある定数以下、或いは、零)である場合は、量子化誤差等を考慮して、最大相互相関値画素及び、最大相互相関値との差が微小な相互相関値を持つ画素を含む領域の周囲の画素位置の相互相関値を用いて同様の計算を実施すれば良い。また、計算式から明らかな様に、位置検出精度は、最大相互相関値W2と、当該最大相互相関値画素位置の周囲の画素位置の相互相関値W1、W3、W4、W5との差、或いは、傾きがあれば、量子化誤差等を考慮に入れなければ、いくらでも高めることが可能である。
From the above results, the sub-pixel-accuracy corresponding point is found to lie at a position shifted from the pixel center by (Διx, Διy). If the difference between the maximum cross-correlation value W2 of the original pixels and any of the cross-correlation values W1, W3, W4, W5 at the surrounding pixel positions is minute (below a certain constant, or zero), then, in consideration of quantization error and the like, the same calculation may be performed using the cross-correlation values at the pixel positions surrounding the region containing the maximum-value pixel and any pixels whose cross-correlation values differ only minutely from the maximum. Also, as is clear from the calculation formulas, as long as there is a difference, or a slope, between the maximum cross-correlation value W2 and the cross-correlation values W1, W3, W4, W5 at the surrounding pixel positions, the position detection accuracy can be raised without limit, if quantization error and the like are not taken into account.
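Putting the pieces together, the x- and y-direction estimates (using W1/W3 horizontally and W4/W5 vertically around W2) give a complete sub-pixel peak locator. The sketch below assumes the equal-angle closed form and a separable treatment of the two axes; names and structure are illustrative, not quoted from the patent:

```python
def subpixel_peak_2d(R, py, px):
    """Given the correlation surface R (list of rows) and the integer
    peak position (py, px), apply a 1-D equal-angle estimator
    independently along x (left/right neighbours) and y (up/down
    neighbours) and return the sub-pixel peak position (y, x)."""
    def est(wl, wp, wr):
        # equal-angle (symmetric-triangle) closed form, assumed
        if wl == wr:
            return 0.0
        if wr > wl:
            return 0.5 * (wr - wl) / (wp - wl)
        return 0.5 * (wr - wl) / (wp - wr)

    dx = est(R[py][px - 1], R[py][px], R[py][px + 1])
    dy = est(R[py - 1][px], R[py][px], R[py + 1][px])
    return py + dy, px + dx
```

Note that this uses only the five values W1..W5 already computed by the original-pixel scan, so no density-interpolated image ever has to be built, which is the core of the claimed speed advantage.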

【0017】[0017]

【発明の効果】上述した様に本発明は、元画素のデータ
より求めた相互相関値を利用するので、従来の様に濃度
補間処理を施した、大きな画像データを扱う必要がな
く、極めて高速かつ高精度な両像間のマッチングを低コ
ストでコンパクトな装置で実現出来るという大きな効果
を有する。
As described above, the present invention uses cross-correlation values computed from the original pixel data, so there is no need to handle the large image data produced by density interpolation as in the conventional method; it has the great advantage that extremely fast and highly accurate matching between the two images can be realized with a low-cost, compact device.

【図面の簡単な説明】[Brief description of the drawings]

【図1】実施例の処理フローFIG. 1 is a processing flow of an embodiment.

【図2】相互相関の分布の3次元モデル図FIG. 2 is a three-dimensional model diagram of a cross-correlation distribution.

【図3】サブピクセル精度の対応位置を求めるためのモ
デル図
FIG. 3 is a model diagram for obtaining a corresponding position of sub-pixel accuracy.

【図4】x軸方向の対応位置を求める方法を説明するた
めの図
FIG. 4 is a diagram for explaining a method of obtaining a corresponding position in the x-axis direction.

【図5】相互相関を説明するための図FIG. 5 is a diagram for explaining a cross-correlation.

【図6】相互相関の分布を示す図FIG. 6 is a diagram showing distribution of cross-correlation.

【図7】濃度補間を説明するための図FIG. 7 is a diagram for explaining density interpolation;

【図8】濃度補間データを説明するための図FIG. 8 is a diagram for explaining density interpolation data;

【図9】従来のサブピクセル精度の対応点を求めるため
の処理フロー
FIG. 9 is a processing flow for obtaining a corresponding point with conventional sub-pixel accuracy.

───────────────────────────────────────────────────── フロントページの続き (72)発明者 赤井 光俊 東京都千代田区岩本町3丁目11番5号 大同電機工業株式会社内 審査官 安田 太 (56)参考文献 特開 昭64−82279(JP,A) 特開 平7−43309(JP,A) 特開 平4−194702(JP,A) (58)調査した分野(Int.Cl.7,DB名) G06T 7/00 — Continuation of front page: (72) Inventor: Mitsutoshi Akai, 3-11-5 Iwamotocho, Chiyoda-ku, Tokyo, c/o Daido Electric Industry Co., Ltd. Examiner: Futoshi Yasuda. (56) References: JP-A-64-82279; JP-A-7-43309; JP-A-4-194702. (58) Fields searched (Int. Cl.7, DB name): G06T 7/00

Claims (1)

(57)【特許請求の範囲】(57) [Claims] 【請求項1】テレビカメラからのアナログ映像信号をデ
ジタイズし、演算素子により計測、解析処理を行う画像
処理装置の前記、計測、解析処理の内、画像間の対応付
走査を行うための相互相関演算と、相互相関演算結果の
最大相互相関値から対応位置を求めるテンプレートマッ
チング処理に於いて、前記、相互相関演算により求めら
れた、元画素データによる最大相互相関値と、当該最大
相互相関値画素位置の周辺画素の相互相関値との差、成
いは傾きを利用して、サブピクセル精度の対応位置を求
めることを特徴とするテンプレートマッチング処理方
法。
1. A template matching processing method in an image processing apparatus that digitizes an analog video signal from a television camera and performs measurement and analysis processing by means of an arithmetic element, wherein, within said measurement and analysis processing, in the cross-correlation computation for the correspondence scan between images and the template matching process that finds the corresponding position from the maximum cross-correlation value of the cross-correlation results, the corresponding position is found with sub-pixel accuracy by using the difference, or the slope, between the maximum cross-correlation value based on the original pixel data obtained by said cross-correlation computation and the cross-correlation values of the pixels surrounding the maximum-value pixel position.
JP08312517A 1996-10-18 1996-10-18 Template matching processing method Expired - Lifetime JP3074292B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP08312517A JP3074292B2 (en) 1996-10-18 1996-10-18 Template matching processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP08312517A JP3074292B2 (en) 1996-10-18 1996-10-18 Template matching processing method

Publications (2)

Publication Number Publication Date
JPH10124666A JPH10124666A (en) 1998-05-15
JP3074292B2 true JP3074292B2 (en) 2000-08-07

Family

ID=18030185

Family Applications (1)

Application Number Title Priority Date Filing Date
JP08312517A Expired - Lifetime JP3074292B2 (en) 1996-10-18 1996-10-18 Template matching processing method

Country Status (1)

Country Link
JP (1) JP3074292B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011188376A (en) * 2010-03-10 2011-09-22 Toshiba Corp Image processing apparatus and image processing method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7082225B2 (en) 2001-08-28 2006-07-25 Nippon Telegraph And Telephone Corporation Two dimensional image recording and reproducing scheme using similarity distribution
US7599512B2 (en) 2003-01-14 2009-10-06 Tokyo Institute Of Technology Multi-parameter highly-accurate simultaneous estimation method in image sub-pixel matching and multi-parameter highly-accurate simultaneous estimation program
US8041123B2 (en) 2005-03-03 2011-10-18 Pioneer Corporation Template matching processing apparatus and method, hologram reproducing apparatus and method, and computer program


Also Published As

Publication number Publication date
JPH10124666A (en) 1998-05-15

Similar Documents

Publication Publication Date Title
US6801653B1 (en) Information processing apparatus and method as well as medium
US6690842B1 (en) Apparatus and method for detection and sub-pixel location of edges in a digital image
US8098963B2 (en) Resolution conversion apparatus, method and program
CN111462198B (en) Multi-mode image registration method with scale, rotation and radiation invariance
JP3054682B2 (en) Image processing method
US4985765A (en) Method and apparatus for picture motion measurement whereby two pictures are correlated as a function of selective displacement
JP3074292B2 (en) Template matching processing method
US7057614B2 (en) Information display system and portable information terminal
KR100640761B1 (en) Method of extracting 3 dimension coordinate of landmark image by single camera
JPH0875454A (en) Range finding device
JPH06160047A (en) Pattern matching method
JPH0695340B2 (en) Image matching method
JPH07168941A (en) Picture processor
CN109767390A (en) A kind of digital picture of block parallel disappears image rotation method
JP3516117B2 (en) Image processing method and apparatus
JP2703454B2 (en) Image pattern matching method
JPH09330403A (en) Template matching method
JPH0410074A (en) Picture pattern inclination detecting method
JP3080097B2 (en) Parallel line figure extraction method
CN117729444A (en) Image quality improving method, device and optical system
JP3747595B2 (en) Arc position estimation method
CN111209835A (en) Improved SURF mobile robot image matching method
JPH07311814A (en) Tilt angle estimating method for document image
JP2001033224A (en) Apparatus and method for generating three-dimensional image
JPH08279048A (en) Image search device