JPS6126191A - Matching method of thick and thin pictures - Google Patents

Matching method of thick and thin pictures

Info

Publication number
JPS6126191A
JPS6126191A
Authority
JP
Japan
Prior art keywords
data
thick
pictures
storage device
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP14843684A
Other languages
Japanese (ja)
Inventor
Koichi Ejiri
公一 江尻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to JP14843684A priority Critical patent/JPS6126191A/en
Publication of JPS6126191A publication Critical patent/JPS6126191A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

PURPOSE: To calculate the similarity or distance between grayscale images photographed under different conditions, by obtaining the density gradient vector of each pixel of the grayscale images to be compared and summing the differences of the density gradient vectors between corresponding pixels. CONSTITUTION: A picture input device 20 inputs grayscale image data, a storage device 21 stores the data, and an arithmetic unit 22 detects the density gradient vector of each pixel of the grayscale image. The coordinates of a pixel of interest are set, the density data of the pixel of interest and its adjacent pixels are read from the storage device 21 to obtain their differences, and the vector sum is computed to give the density gradient vector of the pixel of interest; a storage device 23 stores the results produced by the arithmetic unit 22. A matching operation circuit 25 performs the matching operation between the density gradient image data of the input grayscale image stored in the storage device 23 and the density gradient image data of a reference grayscale image stored in a storage device 24. That is, the distance or similarity between the two images can be calculated by summing the vector differences between their corresponding pixels.

Description

[Detailed Description of the Invention] [Technical Field] The present invention relates to a grayscale image matching method.

[Prior Art]

The conventional grayscale image matching method uses the density difference between grayscale images as the measure of similarity.

With such a method, even a simple uniform shift in density level produces a large sum of differences, and the correct matching point cannot be detected. For example, the density levels of the grayscale image of Fig. 2 and the grayscale image of Fig. 3 differ by an overall shift of 2, yet the two are clearly very similar (the numbers in Figs. 2 and 3 are density levels). Nevertheless, superimposing these two grayscale images does not yield a small sum of density differences.

Thus, the conventional method requires that the two grayscale images be photographed under identical or similar conditions.

To relax this constraint, methods that associate the local maxima and minima of image density with each other have also been proposed, but none achieves a sufficient effect. For example, the grayscale image of Fig. 2 has one maximum, whereas the apparently similar grayscale image of Fig. 3 has two maxima, so the similarity between the two comes out low.

[Object]

An object of the present invention is to provide a method for matching grayscale images photographed under different conditions.

[Configuration]

The grayscale image matching method of the present invention is characterized in that the density gradient vector of each pixel of the two grayscale images to be compared is obtained, and the similarity or distance between the two images is calculated by summing the differences of the density gradient vectors between corresponding pixels.

That is, for the grayscale images of Figs. 2 and 3, density gradient images such as those shown in Figs. 4 and 5, respectively, are obtained, and the two are superimposed and matched. The arrows in Figs. 4 and 5 are unit vectors pointing in the direction of the maximum density gradient of the original image; here they are shown in the eight directions of Fig. 6(B). The present invention is described in detail below.
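The eight-direction representation of Fig. 6(B) can be illustrated with a small sketch. Since the figure is not reproduced here, the numbering of the directions (counterclockwise starting from "right") is an assumption:

```python
import math

# Fig. 6(B) shows eight direction unit vectors, one per 45 degrees.
# The figure itself is not reproduced, so this counterclockwise
# numbering starting from "right" (index 0) is an assumption.
NUM_DIRECTIONS = 8

def quantize_direction(gx: float, gy: float) -> int:
    """Index (0..7) of the 45-degree direction closest to the gradient (gx, gy)."""
    angle = math.atan2(gy, gx)  # -pi .. pi
    return round(angle / (2 * math.pi / NUM_DIRECTIONS)) % NUM_DIRECTIONS
```

A gradient pointing straight right maps to index 0, straight up to index 2, and so on around the circle.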

Fig. 1 is a schematic block diagram showing one embodiment of the present invention. In the figure, 20 is an image input device for inputting grayscale image data, and 21 is a storage device for storing that grayscale image data.

22 is an arithmetic unit that detects the density gradient vector of each pixel of the grayscale image. Fig. 7 is a flowchart of this process; Fig. 6(A) illustrates a pixel of interest and its adjacent pixels, and Fig. 6(B) illustrates the direction unit vectors.

First, the coordinates (storage address) of the pixel of interest are set (step 30), and it is determined whether the density gradient vector x̄ of the pixel of interest x is still undefined (step 31). If it has already been defined, the pixel of interest is moved to the next coordinates. If it is undefined, the density data of the pixel of interest x and its adjacent pixels a, b, c, d, e, f, g, h are read from the storage device 21, and the following calculations are performed (step 32).

ā = (a − x) ē₁ / |a − x|
b̄ = (b − x) ē₂ / |b − x|
  ⋮
h̄ = (h − x) ē₈ / |h − x|

where ē₁, …, ē₈ are the direction unit vectors of Fig. 6(B), each pointing from the pixel of interest x toward the corresponding adjacent pixel. The denominator of each expression may be replaced by 1 or some other constant when the quality of the grayscale image is sufficiently good and stable.

The density gradient vector x̄ of the pixel of interest x is then obtained by computing the vector sum of ā, b̄, …, h̄ found in this way (step 33). It is then determined whether all of ā, b̄, …, h̄ are undefined (step 34); if they are not, it is determined whether all pixels have been processed (step 35), and if not, processing proceeds to the next pixel.

Note that the method of obtaining the density gradient vector is not limited to the one described above and may be modified as appropriate.
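Steps 32 and 33 compute, for each adjacent pixel p, the term (p − x)ē/|p − x| — i.e. the sign of the density difference times a direction unit vector — and sum the eight terms. A minimal Python sketch of that computation follows; the pairing of the neighbors a..h with particular directions is an assumption, since Fig. 6 is not reproduced:

```python
import math

INV_SQRT2 = 1 / math.sqrt(2)

# 8-neighbor offsets (dy, dx) paired with their direction unit vectors
# (ex, ey).  Rows grow downward in the image, so ey > 0 means "up".
# The assignment of neighbors a..h to directions 1..8 is assumed.
NEIGHBORS = [
    ((0, 1), (1.0, 0.0)),                    # right
    ((-1, 1), (INV_SQRT2, INV_SQRT2)),       # up-right
    ((-1, 0), (0.0, 1.0)),                   # up
    ((-1, -1), (-INV_SQRT2, INV_SQRT2)),     # up-left
    ((0, -1), (-1.0, 0.0)),                  # left
    ((1, -1), (-INV_SQRT2, -INV_SQRT2)),     # down-left
    ((1, 0), (0.0, -1.0)),                   # down
    ((1, 1), (INV_SQRT2, -INV_SQRT2)),       # down-right
]

def gradient_vector(image, y, x):
    """Density gradient vector of interior pixel (y, x): the vector sum of
    (p - x) * e_i / |p - x| over the eight neighbors p (steps 32-33).
    A neighbor with p == x contributes nothing."""
    center = image[y][x]
    gx = gy = 0.0
    for (dy, dx), (ex, ey) in NEIGHBORS:
        diff = image[y + dy][x + dx] - center
        if diff != 0:
            s = diff / abs(diff)  # (p - x)/|p - x| is just the sign
            gx += s * ex
            gy += s * ey
    return gx, gy
```

With the denominator set to a constant instead, the sign `s` would simply be replaced by the raw difference `diff` (scaled by that constant).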

Returning to Fig. 1, 23 is a storage device for storing the processing results of the arithmetic unit 22. 24 is a storage device for storing the density gradient image data of the reference grayscale image; in this embodiment, the density gradient image data of the reference image can be written through the same path as that of the input grayscale image with which it is to be compared. 25 is a matching operation circuit, which performs the matching operation between the density gradient image data of the input grayscale image stored in the storage device 23 and the density gradient image data of the reference grayscale image stored in the storage device 24. That is, the distance (or similarity) between the two images is calculated by summing the vector differences between their corresponding pixels. Specifically, the difference in direction between the vectors is counted as the distance unit.
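Counting the direction difference between corresponding vectors as the distance unit might be sketched as follows, assuming the gradients have already been quantized to the eight directions of Fig. 6(B); the exact counting rule is not spelled out in the text, so this is an illustrative reading:

```python
def direction_distance(d1: int, d2: int) -> int:
    """Angular difference between two of the 8 quantized directions,
    counted in 45-degree steps (0..4), used here as the distance unit."""
    diff = abs(d1 - d2) % 8
    return min(diff, 8 - diff)

def matching_distance(dirs_a, dirs_b):
    """Distance between two direction images of equal size: the sum of
    per-pixel direction differences (smaller means more similar)."""
    return sum(
        direction_distance(a, b)
        for row_a, row_b in zip(dirs_a, dirs_b)
        for a, b in zip(row_a, row_b)
    )
```

Sliding one image over the other and taking the offset with the minimum such distance would then give the matching point.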

Individual vectors may nevertheless be unstable. To stabilize them, the density gradient vectors determined by the process of Fig. 7 may be averaged within a small window. Such averaging of density gradient vectors is particularly desirable when handling remote sensing images.
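The suggested stabilization can be read as a plain box average over the gradient field; the sketch below makes that assumption, and the window size is hypothetical:

```python
def smooth_gradients(grad, win=1):
    """Average each gradient vector over a (2*win+1) x (2*win+1) window
    (a simple box filter; at the borders, only the part of the window
    that fits inside the field is used)."""
    h, w = len(grad), len(grad[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            sx = sy = n = 0.0
            for yy in range(max(0, y - win), min(h, y + win + 1)):
                for xx in range(max(0, x - win), min(w, x + win + 1)):
                    gx, gy = grad[yy][xx]
                    sx += gx
                    sy += gy
                    n += 1
            row.append((sx / n, sy / n))
        out.append(row)
    return out
```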

[Effects]

As is clear from the above description, the grayscale image matching method of the present invention has the following effects.

(i) Because the method is hardly affected by an overall shift in density level, grayscale images taken under different photographing conditions can be matched.

(ii) More accurate matching than before is possible even for grayscale images with shading or uneven illumination.

(iii) The method can also be effectively applied to matching three-dimensional grayscale images.

[Brief Description of the Drawings]

Fig. 1 is a schematic block diagram showing one embodiment of the present invention; Figs. 2 and 3 are schematic diagrams of apparently similar grayscale images; Figs. 4 and 5 are schematic diagrams showing the density gradient images of the grayscale images of Figs. 2 and 3, respectively; Fig. 6(A) is a diagram of the 3×3 pixel array centered on the pixel of interest; Fig. 6(B) is an explanatory diagram of the direction unit vectors; and Fig. 7 is a flowchart of the process for obtaining the density gradient vector of each pixel.

20: image input device; 21, 23, 24: storage devices; 22: arithmetic unit; 25: matching operation circuit.

Claims (1)

[Claims]

(1) A grayscale image matching method, characterized in that the density gradient vector of each pixel of each of the two grayscale images to be compared is obtained, and the similarity or distance between the two grayscale images is calculated by summing the differences of the density gradient vectors between corresponding pixels.
JP14843684A 1984-07-17 1984-07-17 Matching method of thick and thin pictures Pending JPS6126191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP14843684A JPS6126191A (en) 1984-07-17 1984-07-17 Matching method of thick and thin pictures

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP14843684A JPS6126191A (en) 1984-07-17 1984-07-17 Matching method of thick and thin pictures

Publications (1)

Publication Number Publication Date
JPS6126191A true JPS6126191A (en) 1986-02-05

Family

ID=15452747

Family Applications (1)

Application Number Title Priority Date Filing Date
JP14843684A Pending JPS6126191A (en) 1984-07-17 1984-07-17 Matching method of thick and thin pictures

Country Status (1)

Country Link
JP (1) JPS6126191A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010256992A (en) * 2009-04-21 2010-11-11 Nippon Telegr & Teleph Corp <Ntt> Device, method and program for generating image
JP2012113362A (en) * 2010-11-19 2012-06-14 Toshiba Corp Image processing method, image processing apparatus, and image processing program
US8682060B2 (en) 2010-11-19 2014-03-25 Kabushiki Kaisha Toshiba Image processing method, apparatus, and computer program product
