JPS59111576A - Window size determining method in object recognizing system - Google Patents

Window size determining method in object recognizing system

Info

Publication number
JPS59111576A
JPS59111576A (application JP57222554A)
Authority
JP
Japan
Prior art keywords
distance
distance rank
window size
rank
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP57222554A
Other languages
Japanese (ja)
Inventor
Atsushi Kuno
敦司 久野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omron Corp
Original Assignee
Tateisi Electronics Co
Omron Tateisi Electronics Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tateisi Electronics Co, Omron Tateisi Electronics Co filed Critical Tateisi Electronics Co
Priority to JP57222554A priority Critical patent/JPS59111576A/en
Publication of JPS59111576A publication Critical patent/JPS59111576A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To execute labeling efficiently in object recognition processing by deriving a dimension number from the number sequences that express the correspondence between inter-feature-point distances and a distance rank model, and determining the window size on the basis of that dimension number. CONSTITUTION: For the object to be detected, the coordinates of each feature point Q1-QN are calculated. An N-dimensional distance rank space is assumed, with each distance rank of the distance rank model as an axis. For each point Q in this distance rank space, a distance rank vector V is derived. The dimension number of the distance rank vectors is initialized to q, and the partial difference value of the distance rank vectors is calculated between feature points. A distance rank matrix having these partial difference values as elements is derived and compared with the unit matrix. As a result, the window size w is determined from the distance rank for the dimension number q.

Description

[Detailed Description of the Invention]

<Technical Field of the Invention> The present invention relates to an object recognition system that recognizes the position, orientation, and the like of an object to be detected by focusing on points involved in the object's shape features, such as its corners (hereinafter "feature points"), and by assigning labels that associate the feature points of an arbitrarily oriented object with specific feature points of an object model within a window of a predetermined field of view (hereinafter this process is called "labeling"). In particular, the present invention provides a window size determining method for setting the size of this window to the necessary minimum.

<Background of the Invention> In this type of object recognition system, a method has recently been proposed in which the feature points of the object are labeled within a window having an appropriate field of view. If the window size is larger than necessary, data processing becomes complicated and inefficient; if it is too small, labeling of the feature points becomes impossible. The window size must therefore be set to the necessary minimum. Conventionally, however, the labeling process was first executed with a tentative window size, and when labeling failed, the window size was reset (for example, enlarged) and the labeling process executed again. This wasted labeling work and markedly lowered processing efficiency.

<Objective of the Invention> The present invention provides a novel window size determining method that sets the window size before feature-point labeling is performed, with the aim of making the labeling process in object recognition more efficient.

<Configuration and Effects of the Invention> To achieve the above objective, a distance rank model is formed in advance by arranging, for each feature point of the object model, the distances to the other feature points. For each feature point of the object to be detected, its distances to the other feature points are expressed as a number sequence describing their correspondence with the distance rank model. The number of dimensions at which all of these sequences become mutually distinct is then found, and the window size is determined on the basis of this dimension number.

According to the present invention, the minimum window size required for the labeling process can be set in advance, eliminating the conventional resetting of the window size and re-execution of labeling. The invention thus achieves its objective, greatly improving the efficiency of the labeling process in an object recognition system.

<Description of an Embodiment> Fig. 1 shows a conveyance line for the objects to be detected. The objects 2 on belt conveyor 1 are picked up one by one by a robot 3 installed at the downstream end of the line and transferred to the next handling or machining step.

Fig. 2 shows the object model 2A, which has a total of 18 feature points P1, P2, ..., P18, each forming a right angle in the planar shape.

The objects 2 face in arbitrary directions on belt conveyor 1. The fingertip section 4 of robot 3 rotates according to the orientation of each object 2, applies its finger pieces 4a and 4b to fixed locations on the object 2, grips it, and picks it up.

The orientation of each object 2 is checked with the apparatus of Fig. 3 according to the present invention, and the rotation direction, angle, and so on of the robot fingertip section 4 are controlled accordingly. In Fig. 3, a camera device 5 is disposed at an upstream position of belt conveyor 1; it images the planar shape of the object 2 and detects the feature points contained within a predetermined window.

In Fig. 4, a window W with a square field of view is shown by broken lines; in the illustrated example, a total of two feature points, including Q5, fall within the field of view of window W. The output of camera device 5 is taken in, through an interface 6, by arithmetic control means 7 of a microcomputer or the like (hereinafter simply "CPU"), and the position of each feature point is stored in an image memory 8 as coordinate data. The CPU 7 controls the reading and writing of these data, decodes and executes the various programs for object recognition, checks the orientation of the object 2, and on that basis controls the operation of a robot control device 9.

In addition to these programs, the memory 10 in Fig. 3 stores reference data on all feature points P1 to P18 of the object model 2A and on the distances between the feature points (hereinafter called the distance rank model).

Fig. 5 shows the configuration of the distance rank model.

The distance ranks in the figure are all of the distance values between feature points of the object model 2A, arranged in ascending order as d1, d2, ..., di, ..., dN.

A label set is the set of feature points associated with each distance rank; the sets are written χ(d1), χ(d2), ..., χ(di), ..., χ(dN).
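As a concrete illustration, the distance rank model of Fig. 5 (ascending distinct distances d1 < ... < dN plus their label sets χ(di)) could be built as sketched below; the function name, point representation, and merging tolerance are assumptions for illustration, not part of the patent.

```python
import itertools

def build_distance_rank_model(model_points, tol=1e-6):
    """Sketch of the patent's Fig. 5 distance rank model:
    all pairwise feature-point distances sorted ascending
    (d1 < d2 < ... < dN), plus the label set chi(di) of the
    feature points that participate in each distance rank."""
    ranks = []        # distinct distance values d1..dN
    label_sets = []   # chi(d1)..chi(dN): sets of point indices
    for (i, p), (j, q) in itertools.combinations(enumerate(model_points), 2):
        d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
        # merge into an existing rank if the distance matches within tol
        for k, dk in enumerate(ranks):
            if abs(d - dk) <= tol:
                label_sets[k] |= {i, j}
                break
        else:
            ranks.append(d)
            label_sets.append({i, j})
    # sort ranks (and their label sets) into ascending order
    order = sorted(range(len(ranks)), key=lambda k: ranks[k])
    return [ranks[k] for k in order], [label_sets[k] for k in order]
```

For a unit square, for example, this yields two ranks (the side length and the diagonal), each with all four corner points in its label set.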

The present invention provides a new method of setting the size of the window W (in this embodiment, its height and width) prior to the feature-point labeling process; the concrete method is shown in the operational flow of Fig. 6.

First, in step 11, the CPU 7 calculates the coordinates of each feature point Q1, Q2, ..., Qi, ..., QN of the object 2 imaged by camera device 5.

Now let the distance ranks in the distance rank model be d1, d2, ..., di, ..., dN (where d1 < d2 < ... < di < ... < dN), and consider an N-dimensional distance rank space with each distance rank as an axis.

Next, for a point Q in this distance rank space, consider its distance rank vector V, whose components can be written (v1, v2, ..., vN). Each element vi is set to 1 if the point has the corresponding distance rank of the distance rank model, and to 0 if it does not.

Thus, in step 12, the CPU 7 obtains the distance rank vector Vi for each feature point Qi (i = 1, 2, ..., N). Vi can be written

Vi = (v(i,1), v(i,2), ..., v(i,N)).

Next, in step 13, the dimension number q (where 1 ≤ q ≤ N) of the distance rank vectors involved in determining the window size is initialized; then, in step 14, the CPU 7 calculates, for the distance rank vector Vr of feature point Qr, the partial difference value D(r,s;q) with respect to the distance rank vector Vs of another feature point Qs, by the following formula (1).

In formula (1), v(r,k) and v(s,k) are the elements of the vector components (v(r,1), v(r,2), ..., v(r,N)) of Vr and (v(s,1), v(s,2), ..., v(s,N)) of Vs.

The symbol ⊕ denotes exclusive OR, the overbar denotes logical negation, and Π (k = 1 to q) denotes the product of the terms.
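Formula (1) itself appears only as an image in the original; from the operators just described and the worked value D(r,s;3) = 1·1·0 = 0 below, a plausible reconstruction is

$$D(r,s;q) \;=\; \prod_{k=1}^{q} \overline{v(r,k) \oplus v(s,k)},$$

so that D(r,s;q) = 1 exactly when Vr and Vs agree in all of their first q components, and 0 as soon as any of those components differ.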

For the example values, D(r,s;3) = 1·1·0 = 0. This means that, at distance rank dimension number q = 3, Vr and Vs have different component elements.

Thus, along the route of steps 14, 15, and 16, the partial difference value D(r,s;q) is calculated for all feature points Q1, Q2, ..., Qr, ..., Qs, ..., QN (r, s = 1, 2, ..., N). The decision of step 15 then becomes "YES", and the distance rank matrix [R(q)], represented by the following N-row, N-column matrix, is obtained.
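Under the reconstruction of D(r,s;q) as a product of negated exclusive ORs over the first q components (an assumption, since the patent's formula is an image), the partial difference values and the matrix [R(q)] could be computed as:

```python
def partial_difference(vr, vs, q):
    """D(r,s;q): 1 iff the first q components of the two distance
    rank vectors are identical, e.g. D = 1*1*0 = 0 for vectors
    that differ in their third component at q = 3."""
    d = 1
    for k in range(q):
        d *= 0 if (vr[k] ^ vs[k]) else 1  # negated XOR of component k
    return d

def distance_rank_matrix(vectors, q):
    """[R(q)]: N x N matrix of D(r,s;q) over all feature-point pairs."""
    n = len(vectors)
    return [[partial_difference(vectors[r], vectors[s], q)
             for s in range(n)] for r in range(n)]
```

With the triangle vectors [1,1,0], [1,0,1], [0,1,1], the matrix is already the identity at q = 2, since every pair differs within its first two components.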

Then, in step 17, the CPU 7 checks whether the distance rank matrix [R(q)] satisfies the following equation.

Here [E] denotes the unit (identity) matrix.

In the above, when [R(q)] ≠ [E], the matrix contains an element with partial difference value D(r,s;q) = 1 (r ≠ s); in this case the decision of step 17 is "NO", the dimension number q is incremented by 1 in step 18, and the processing of steps 14, 15, and 16 is executed again. When [R(q)] = [E], the flow proceeds to step 19, where the CPU 7 calculates the window size w by the following formula (2).

w = 2 · dq · (1 + α)  ... (2)

In formula (2), dq is the q-th distance rank and α is the tolerance allowed on distance rank dq; Fig. 7 shows the relationship between the window size w and the area Xq of distance rank dq.
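Putting steps 13 to 19 of the Fig. 6 flow together, a minimal sketch of the whole window-size determination might look like this; the function names are assumptions, and the prefix comparison stands in for the reconstructed D(r,s;q):

```python
def window_size(vectors, ranks, alpha):
    """Find the smallest dimension number q for which [R(q)] equals
    the identity matrix [E] -- i.e. every pair of distinct feature
    points already differs within its first q vector components --
    then compute w = 2 * d_q * (1 + alpha)."""
    def same_prefix(vr, vs, q):
        # D(r,s;q) = 1 iff the first q components agree
        return all(vr[k] == vs[k] for k in range(q))

    n = len(vectors)
    for q in range(1, len(ranks) + 1):
        # [R(q)] = [E] means D(r,s;q) == 1 exactly when r == s
        if all(same_prefix(vectors[r], vectors[s], q) == (r == s)
               for r in range(n) for s in range(n)):
            return 2.0 * ranks[q - 1] * (1 + alpha)  # d_q: q-th rank
    # no q distinguishes all points; fall back to the largest rank
    return 2.0 * ranks[-1] * (1 + alpha)
```

For the 3-4-5 triangle vectors above and α = 0.1, the vectors become pairwise distinct at q = 2, giving w = 2 · 4 · 1.1 = 8.8.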

After determining the window size by formula (2), the CPU 7 executes the feature-point labeling process within the field of view of the window.

[Brief Description of the Drawings]

Fig. 1 is an explanatory diagram showing a conveyance line for the objects to be detected; Fig. 2 is a plan view of the object model; Fig. 3 is a circuit block diagram of an example apparatus embodying the present invention; Fig. 4 is a plan view showing an object and the field of view of the window; Fig. 5 is an explanatory diagram showing the distance rank model; Fig. 6 is a flowchart showing the window size determining method according to the present invention; and Fig. 7 is an explanatory diagram showing the relationship between the window size and the area of a distance rank.

2 ... object to be detected; 2A ... object model; W ... window; w ... window size

Claims (1)

[Claims] 1. In an object recognition system that assigns, to each feature point within a window of a predetermined field of view in an image of an object to be detected, the label of a corresponding feature point of an object model, a window size determining method characterized in that: a distance rank model is formed in advance by arranging, for each feature point of the object model, the distances to the other feature points; the correspondence between the distance rank model and the distances from each feature point of the object to the other feature points is calculated as a number sequence; the number of dimensions at which all of the sequences become mutually distinct is found; and the window size is determined on the basis of that dimension number.
JP57222554A 1982-12-17 1982-12-17 Window size determining method in object recognizing system Pending JPS59111576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP57222554A JPS59111576A (en) 1982-12-17 1982-12-17 Window size determining method in object recognizing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP57222554A JPS59111576A (en) 1982-12-17 1982-12-17 Window size determining method in object recognizing system

Publications (1)

Publication Number Publication Date
JPS59111576A true JPS59111576A (en) 1984-06-27

Family

ID=16784263

Family Applications (1)

Application Number Title Priority Date Filing Date
JP57222554A Pending JPS59111576A (en) 1982-12-17 1982-12-17 Window size determining method in object recognizing system

Country Status (1)

Country Link
JP (1) JPS59111576A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007260370A (en) * 2006-03-27 2007-10-11 Takahide Sogi Set of wide-mouthed sake bottle and sake cup


Similar Documents

Publication Publication Date Title
US11393103B2 (en) Target tracking method, device, system and non-transitory computer readable medium
US9626551B2 (en) Collation apparatus and method for the same, and image searching apparatus and method for the same
US8345988B2 (en) Method and apparatus for recognizing 3-D objects
WO2015165365A1 (en) Facial recognition method and system
CN113111844B (en) Operation posture evaluation method and device, local terminal and readable storage medium
CN110096929A (en) Object Detection Based on Neural Network
US11803585B2 (en) Method and apparatus for searching for an image and related storage medium
CN112668629A (en) Intelligent warehousing method, system, equipment and storage medium based on picture identification
CN111461113A (en) A large-angle license plate detection method based on deformed plane object detection network
AU2020294190B2 (en) Image processing method and apparatus, and electronic device
CN110929555B (en) Face recognition method and electronic device using same
CN114139620B (en) Template-based image similarity matching method
TW201635197A (en) Face recognition method and system
CN115147885B (en) Face shape comparison method, device, equipment and storage medium
JP2015007919A (en) Program, apparatus and method for realizing highly accurate geometric verification between images of different viewpoints
CN110070490A (en) Image split-joint method and device
JPS59111576A (en) Window size determining method in object recognizing system
Zhu et al. Robust image registration for power equipment using large-gap fracture contours
Rana et al. Real Time Deep Learning based Face Recognition System Using Raspberry PI
WO2022196551A1 (en) Authentication system and tracking system
JP2019174868A (en) Object tracking apparatus, method and program
Duan et al. Zero-Shot 3D Pose Estimation of Unseen Object by Two-step RGB-D Fusion
JP7207396B2 (en) Information processing device, information processing method, and program
CN110210343B (en) Big data face recognition method and system and readable storage medium thereof
Nel et al. An integrated sign language recognition system