JPH0345898A - Image identifying and tracing apparatus - Google Patents

Image identifying and tracing apparatus

Info

Publication number
JPH0345898A
JPH0345898A (application JP18130389A)
Authority
JP
Japan
Prior art keywords
target
image
feature
circuit
shape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP18130389A
Other languages
Japanese (ja)
Other versions
JP2693586B2 (en)
Inventor
Shinichi Kuroda
伸一 黒田
Koichi Sasagawa
耕一 笹川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Priority to JP18130389A priority Critical patent/JP2693586B2/en
Publication of JPH0345898A publication Critical patent/JPH0345898A/en
Application granted granted Critical
Publication of JP2693586B2 publication Critical patent/JP2693586B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Abstract

PURPOSE: To continue tracking a target satisfactorily, even in noisy images and under various jamming environments, by employing in the target candidate extraction circuit a shape extraction filter that extracts specific shapes, such as circles, ellipses, and parallel shapes, from a grayscale image.

CONSTITUTION: A target candidate extraction circuit 31 consists of an edge information extraction circuit 32 and a shape extraction filter 33. The image signal obtained from an image sensor 1 is converted into a grayscale image signal by an image input circuit 21, and the circuit 32 extracts from this signal the edge intensity and the direction of grayscale change at each pixel. Using the edge intensity and the direction data quantized into eight directions, the filter 33 extracts a preset specific shape such as a circle, ellipse, or parallel shape. For each region of the target candidates output by the filter 33, a feature extraction circuit 41 computes feature quantities such as centroid position and area, which are stored in a feature data storage unit 62. A correspondence evaluation unit 63 uses the stored feature quantities of the candidates in each of the successive images to evaluate their correspondence and determine the target region.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Field of Industrial Application] The present invention relates to an image identification/tracking apparatus, and in particular to improvements for raising target identification and tracking performance.

[Prior Art] Fig. 5 shows the configuration of a conventional image identification/tracking apparatus disclosed in, for example, Japanese Patent Application No. 63-251139. In the figure, (1) is an image sensor; (2) is an A/D converter that converts the image signal obtained by the image sensor (1) into a grayscale image signal; (3) is a binarization circuit that binarizes the grayscale image signal at a certain threshold; (4) is an area measurement circuit that measures the area of each region, such as a target, binarized by the binarization circuit (3); (5) is a radar device that measures the distance to the target; (6) is a judgment circuit that receives the grayscale image signal from the A/D converter (2), the area data from the area measurement circuit (4), and the ranging data from the radar device (5), and judges whether a region is the target; (7) is a target extraction circuit that, from the outputs of the binarization circuit (3) and the judgment circuit (6), produces a target binary image signal having level 1 only in the target region; and (8) is a centroid position measurement circuit that measures the centroid position of the target region from the target binary image signal and outputs the target position data as a tracking signal.

Next, the operation will be described. Image data captured by the image sensor (1) is converted into a grayscale image signal by the A/D converter (2). The binarization circuit (3) binarizes the target regions of this signal, and the area measurement circuit (4) measures the area of each target region. The judgment circuit (6) receives the area data, the grayscale image signal, and the distance data to the target obtained by the radar device (5).

Now let S be the area of a target, A its brightness level, and R the distance data. When searching for a target, if the target is distant, S is sufficiently small and A must be at or above a threshold value, so the judgment circuit (6) selects as the target region the region where A/S is largest. When the target is at medium to close range, the judgment circuit (6) selects the region where A×S is largest, in order to avoid false detection of minute high-brightness IR flares and the like.
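As a minimal sketch of this prior-art judgment rule — the candidate records and parameter names here are illustrative assumptions, not the patent's implementation:

```python
# Sketch of the prior-art brightness/area judgment: far targets maximize
# A/S (small but bright), near targets maximize A*S (avoid tiny IR flares).
# Candidate dicts with keys "S" (area) and "A" (brightness) are assumed.
def select_target(candidates, distance, far_threshold, min_brightness):
    if distance > far_threshold:
        # Distant target: must exceed the brightness floor, then pick max A/S.
        eligible = [c for c in candidates if c["A"] >= min_brightness]
        return max(eligible, key=lambda c: c["A"] / c["S"], default=None)
    # Medium/close range: pick max A*S so minute bright flares lose out.
    return max(candidates, key=lambda c: c["A"] * c["S"], default=None)
```

Note how the same candidate list yields different winners depending on range, which is exactly why a lost ranging signal degrades this scheme.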

The target data judged by the judgment circuit (6) is converted by the target extraction circuit (7) into a target binary image signal having level 1 only in the target region, and the centroid position measurement circuit (8) computes the centroid position of the target region and outputs it as a tracking signal.

[Problems to Be Solved by the Invention] Since the conventional image tracking apparatus is configured as described above, it judges the target region using only the ranging data, the post-binarization area data, and the brightness information. Consequently, when the radar device is jammed and correct ranging data cannot be obtained, false detections of minute high-brightness IR flares and the like occur. Moreover, because the only information extracted from the image data is area and brightness, with no judgment based on shape information, separation from background noise such as the horizon and clouds, which appear bright in infrared images, and from decoy heat sources is insufficient.

This invention was made to solve the above problems. Its object is to obtain an image identification/tracking apparatus that uses shape information to separate out IR flares, background noise such as the horizon and clouds, and decoy heat sources, and that can identify and track a target well even in noisy images and under various jamming environments.

[Means for Solving the Problems] The image identification/tracking apparatus according to this invention comprises: a target candidate extraction circuit that outputs target candidates, made up of an edge information extraction circuit that takes the output of an image input circuit as input and extracts the edge intensity and the direction of grayscale change, and a shape extraction filter that extracts specific shapes of circles, ellipses, and parallel shapes from the output of the edge information extraction circuit; and a judgment circuit that determines the target region, made up of a feature data storage unit that stores the feature quantities of the target candidates over a plurality of successive images obtained from the feature extraction circuit, and a correspondence evaluation unit that evaluates the correspondence of the target candidates on the basis of the feature data of the candidates' area and centroid position between the successive images and the motion information of the displacement between already-matched candidates.

[Function] The shape extraction filter of this invention extracts at high speed a preset specific shape, such as a circle, ellipse, or parallel shape, from the grayscale image, and the judgment circuit determines the target region from the relations among the extraction results over successive images. Judgment using shape information and motion information thus becomes possible, enabling good identification and tracking.

[Embodiment] An embodiment of this invention will now be described with reference to the drawings. In Fig. 1, (1) is an image sensor; (21) is an image input circuit that quantizes the image signal obtained from the image sensor (1) into multiple levels for each pixel; (32) is an edge information extraction circuit that extracts the edge intensity, the direction of grayscale change, and so on from the grayscale image; (33) is a shape extraction filter that extracts a preset specific shape, such as a circle, ellipse, or parallel shape, from the edge intensity and the direction-of-change data; (31) is a target candidate extraction circuit made up of the edge information extraction circuit (32) and the shape extraction filter (33); (41) is a feature extraction circuit that extracts feature quantities such as area and centroid position for each target candidate region; (62) is a feature data storage unit that stores the feature quantities obtained by the feature extraction circuit (41) for a plurality of successive images; (63) is a correspondence evaluation unit that evaluates the correspondence of the target candidates from their feature data across the successive images; and (61) is a judgment circuit made up of the feature data storage unit (62) and the correspondence evaluation unit (63).

Next, the operation will be described. The image signal obtained from the image sensor (1) is converted into a grayscale image signal by the image input circuit (21). The edge information extraction circuit (32) extracts the edge intensity and the direction of grayscale change at each pixel from this signal. This is carried out by, for example, a 3-by-3 spatial product-sum operation. When the image data X(m, n) under processing and the weight coefficients W(i, j) are arranged as shown in Fig. 2(a), the result Y(m, n) of the spatial product-sum operation is expressed by the following equation:

Y(m, n) = Σ_{i=−1..1} Σ_{j=−1..1} W(i, j)·X(m+i, n+j)

Here, using the two kinds of weight coefficients shown in Fig. 2(b) as W to obtain the horizontal variation Sx and the vertical variation Sy, the edge intensity and the direction of grayscale change are given respectively by

edge intensity = |Sx| + |Sy|,   direction = tan⁻¹(Sy/Sx).
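The product-sum step above can be sketched as follows. Sobel-like weights stand in for the coefficients of Fig. 2(b), which are not reproduced in the text, so the exact kernel values are an assumption:

```python
import numpy as np

# Sketch of 3x3 spatial product-sum edge extraction with Sobel-like
# weights (the actual coefficients of Fig. 2(b) are assumed, not given).
WX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal variation Sx
WY = WX.T                                            # vertical variation Sy

def edge_info(img):
    """Return (intensity, direction) with direction quantized to 8 bins."""
    h, w = img.shape
    sx = np.zeros((h, w))
    sy = np.zeros((h, w))
    for m in range(1, h - 1):
        for n in range(1, w - 1):
            patch = img[m - 1:m + 2, n - 1:n + 2]
            sx[m, n] = np.sum(WX * patch)   # product-sum Y(m, n) with WX
            sy[m, n] = np.sum(WY * patch)   # product-sum Y(m, n) with WY
    intensity = np.abs(sx) + np.abs(sy)     # |Sx| + |Sy|
    angle = np.arctan2(sy, sx)              # tan^-1(Sy/Sx), quadrant-aware
    direction = np.round(angle / (np.pi / 4)).astype(int) % 8  # 8 directions
    return intensity, direction
```

The eight-way quantization of the angle is what the shape extraction filter consumes in the next step.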

The shape extraction filter (33) extracts a preset specific shape, such as a circle, ellipse, or parallel shape, using the edge intensity and the direction-of-change data quantized into eight directions. The extraction principle of the shape extraction filter (33) is explained with Fig. 3, which illustrates the extraction of a circle. As shown in Fig. 3(a), the directions of grayscale change at the edge points making up a circle all point toward its center. Accordingly, as shown in Fig. 3(b), from each edge point a spoke (a line element) of length L is generated at a distance S along the direction of grayscale change. As shown in Fig. 3(c), when spokes are generated from every direction, they intersect near the center of the circle. The shape extraction filter judges the presence or absence of a circle by examining this intersection pattern; S corresponds to the size of the circle to be extracted, and L to its tolerance. When extracting a parallel shape, the filter uses the property that the edge directions from the two facing parallel lines are opposite, and extracts the center line of the parallel lines. By restricting the shape and size to be extracted in this way, the shape extraction filter simultaneously uses grayscale information (edge intensity and direction) together with the added shape information, achieving stable extraction that is robust to noise. In infrared images showing heat distributions in particular, targets often appear as circles, ellipses, or parallel shapes, so this is an effective technique for extracting the target region. Furthermore, since shape information is used, the influence of minute high-brightness IR flares, clouds, the horizon, and the like can be removed.
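The spoke mechanism can be sketched as an accumulator: each strong edge pixel casts votes along its gradient direction starting at radius S over a tolerance L, and peaks in the vote map mark circle centers. S and L follow the text; the vote-map formulation and thresholding are illustrative assumptions:

```python
import numpy as np

# Sketch of spoke-based circle detection over an 8-direction edge map.
DIRS = [(np.cos(k * np.pi / 4), np.sin(k * np.pi / 4)) for k in range(8)]

def spoke_votes(intensity, direction, S, L, thresh):
    """Accumulate spoke votes; local maxima approximate circle centers."""
    h, w = intensity.shape
    votes = np.zeros((h, w))
    ys, xs = np.nonzero(intensity > thresh)
    for y, x in zip(ys, xs):
        dx, dy = DIRS[direction[y, x]]
        for r in range(S, S + L):          # spoke of length L starting at S
            cy = int(round(y + r * dy))
            cx = int(round(x + r * dx))
            if 0 <= cy < h and 0 <= cx < w:
                votes[cy, cx] += 1
    return votes
```

Because only edges whose directions converge at the chosen radius reinforce one another, isolated bright noise (an IR flare, a horizon segment) produces no concentrated peak.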

As methods of extracting specific shapes, the present inventors have already completed two inventions: an "image shape recognition system" (Japanese Patent Application No. 62-330960), and, for high-speed execution of the shape extraction filter, a "spoke register generation circuit" (Japanese Patent Application No. 63-255102). The present invention relates to an application of these.

For the target candidates obtained by the shape extraction filter (33), the feature extraction circuit (41) computes feature quantities such as centroid position and area for each candidate region. The feature quantities computed for each image are stored in the feature data storage unit (62) for each of a plurality of successive images. The correspondence evaluation unit (63) then uses the feature quantities of each target candidate in each image held in the feature data storage unit (62) to evaluate the correspondence of the candidates and determine the target region.

Let Q_i(t) denote the target candidates obtained from the image at a time t, (x_i(t), y_i(t)) their centroid positions, and S_i(t) their areas, where i = 1, ..., N_t and N_t is the total number of target candidates extracted from the image at time t.

For explanation, consider evaluating the correspondence across the four successive frames at times t_j, t_{j+1}, t_{j+2}, and t_{j+3}. For the target candidates Q_a(t_k) and Q_b(t_l) [a = 1, ..., N_{t_k}; b = 1, ..., N_{t_l}; t_k ≠ t_l; t_k, t_l = t_j, ..., t_{j+3}], every pairing of candidates is evaluated over all six combinations of two frames chosen from the four (t_j and t_{j+1}, t_j and t_{j+2}, t_j and t_{j+3}, t_{j+1} and t_{j+2}, t_{j+1} and t_{j+3}, t_{j+2} and t_{j+3}). The presence or absence of a correspondence is judged from the distance D_ab between the candidates and the difference M_ab of their areas, given by the following equations.

D_ab² = [x_b(t_l) − x_a(t_k)]² + [y_b(t_l) − y_a(t_k)]²  ...(2)

M_ab = S_b(t_l) − S_a(t_k)  ...(3)

When D_ab and M_ab are both smaller than certain thresholds T1 and T2 respectively, Q_a(t_k) and Q_b(t_l) are taken to correspond. Further, when two or more target candidates correspond to a single candidate between the same pair of frames, the candidate pair that minimizes the product of D_ab and M_ab is selected.
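The two-frame test of Eqs. (2) and (3), together with the product tie-break, can be sketched as follows; representing candidates as (x, y, S) tuples is an illustrative assumption:

```python
import math

# Sketch of two-frame correspondence: match when centroid distance D_ab
# and area difference M_ab are both under thresholds T1, T2; resolve
# conflicts on one candidate by the smallest D_ab * M_ab product.
def match_frames(frame_k, frame_l, T1, T2):
    """frame_k, frame_l: lists of (x, y, S). Returns matched index pairs."""
    pairs = []
    for a, (xa, ya, sa) in enumerate(frame_k):
        for b, (xb, yb, sb) in enumerate(frame_l):
            d = math.hypot(xb - xa, yb - ya)   # D_ab, Eq. (2)
            m = abs(sb - sa)                   # |M_ab|, Eq. (3)
            if d < T1 and m < T2:
                pairs.append((d * m, a, b))
    pairs.sort()                               # smallest D*M product first
    used_a, used_b, matches = set(), set(), []
    for _, a, b in pairs:
        if a not in used_a and b not in used_b:  # enforce one-to-one matching
            matches.append((a, b))
            used_a.add(a)
            used_b.add(b)
    return matches
```

Running this over each of the six frame pairs yields the correspondence graph whose linkage is then evaluated across the four frames.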

Next, the linkage of the correspondences over the four frames is evaluated: whether correspondence holds between every pair of consecutive frames (t_j and t_{j+1}, t_{j+1} and t_{j+2}, t_{j+2} and t_{j+3}); or in only three of the four frames (the four cases (t_j, t_{j+1}, t_{j+2}), (t_j, t_{j+1}, t_{j+3}), (t_j, t_{j+2}, t_{j+3}), (t_{j+1}, t_{j+2}, t_{j+3})); or in only two of the four frames.

Naturally, a candidate that can be matched across the most frames is judged to be the target region.

The flow of processing will be explained using Fig. 4. The feature data of the target candidates extracted from the images at the times t_j, ..., t_{j+3} are stored in the feature data storage unit (62).

Fig. 4 shows a case where the numbers of target candidates extracted from the images at the times t_j, ..., t_{j+3} are 5, 6, 3, and 4 respectively. The correspondence between pairs of frames is first evaluated using Eqs. (2) and (3). Suppose that, as shown in the figure, correspondences are found between Q3(t_j) and Q2(t_{j+1}), between Q5(t_j) and Q6(t_{j+1}), and between Q2(t_{j+1}) and Q1(t_{j+3}). Examining the linkage over the four frames, the candidates Q3(t_j), Q2(t_{j+1}), and Q1(t_{j+3}), which correspond over three frames, are judged to be the target region and output. That is, by examining the correspondence between successive frames, good tracking is possible even when there are frames in which the target cannot be detected because of the influence of clouds and the like. As for the target candidates in the next frame t_{j+4}, it suffices to examine only their correspondence with the already-detected target region, or they may be evaluated over t_{j+1} to t_{j+4} in the same way as above.

Further, once the target region has been found and is being tracked, the displacements V_x and V_y per frame in the X and Y directions may be computed and the following equation used in place of Eq. (2), so as to exploit the motion information:

D_ab² = [x_b(t_l) − (x_a(t_k) + V_x(t_l − t_k))]² + [y_b(t_l) − (y_a(t_k) + V_y(t_l − t_k))]²  ...(4)

In the above embodiment, the case where the correspondence evaluation unit evaluates the correspondence between two frames and the linkage over four frames was described, but the evaluation may be carried out over any number of frames.

Also, since target candidates are extracted by approximating the target shape as a circle, ellipse, parallel shape, or the like, there is the problem that when the target region grows, for example as the target approaches, the shape can no longer be approximated by a circle, ellipse, or parallel shape, and the target candidate cannot be extracted.

This is handled by having the judgment circuit (61) detect that the area of the target region has exceeded a fixed value and having the image input circuit (21) shrink the image in response to the area increase (the control indicated by the dashed arrow in Fig. 1), so that tracking continues. Note that the image reduction is easily realized by thinning out pixels or by merging neighboring pixels, for example taking the average of 2×2 pixels as one pixel.
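The 2×2-averaging reduction can be sketched in a few lines; the NumPy reshape formulation is an illustrative choice:

```python
import numpy as np

# Sketch of the image-reduction step: shrink the image by averaging each
# 2x2 block of neighboring pixels into one pixel, so a grown target fits
# the fixed-size shape filter again.
def shrink2x2(img):
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]      # drop an odd trailing row/column
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Pixel thinning (e.g. `img[::2, ::2]`) is the even cheaper alternative mentioned in the text, at the cost of discarding rather than averaging the neighboring samples.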

[Effects of the Invention] As described above, according to this invention, the target candidate extraction circuit uses a shape extraction filter that extracts specific shapes such as circles, ellipses, and parallel shapes from the grayscale image, and the judgment circuit is configured to evaluate the correspondence of the target candidates on the basis of the candidates' area and centroid position information and motion information across a plurality of successive images. The invention therefore has the effect that the target can be identified well even in noisy images, and that target tracking can be continued well even under various jamming environments.

[Brief Description of the Drawings]

Fig. 1 is a block diagram showing the configuration of an image identification/tracking apparatus according to an embodiment of this invention; Fig. 2 is an explanatory diagram of the spatial product-sum operation that performs the edge information extraction; Fig. 3 is an explanatory diagram of the extraction principle of the shape extraction filter; Fig. 4 is an explanatory diagram showing the processing in the judgment circuit; and Fig. 5 is a block diagram showing the configuration of a conventional image tracking apparatus. In the figures, (1) is an image sensor; (2) is an A/D converter; (21) is an image input circuit; (3) is a binarization circuit; (31) is a target candidate extraction circuit; (32) is an edge information extraction circuit; (33) is a shape extraction filter; (4) is an area measurement circuit; (41) is a feature extraction circuit; (5) is a radar device; (6) and (61) are judgment circuits; (62) is a feature data storage unit; (63) is a correspondence evaluation unit; (7) is a target extraction circuit; and (8) is a centroid position measurement circuit. The same reference numerals in the figures denote the same or corresponding parts.

Claims (1)

[Claims] In an image identification/tracking apparatus comprising an image input circuit that quantizes the image signal obtained from an image sensor into multiple levels for each pixel, a target candidate extraction circuit that extracts target candidates from the output of the image input circuit by binarization or the like, a feature extraction circuit that extracts feature quantities of area and centroid position for each region of the target candidates, and a judgment circuit that determines a target region by evaluating the feature quantities, the apparatus being characterized by comprising: a target candidate extraction circuit that outputs target candidates and is made up of an edge information extraction circuit that takes the output of the image input circuit as input and extracts the edge intensity and the direction of grayscale change, and a shape extraction filter that extracts specific shapes of circles, ellipses, and parallel shapes from the output of the edge information extraction circuit; and a judgment circuit for determining the target region, made up of a feature data storage unit that stores the feature quantities of the target candidates in a plurality of successive images obtained from the feature extraction circuit, and a correspondence evaluation unit that evaluates the correspondence of the target candidates on the basis of the feature data of the candidates' area and centroid position information between the successive images and the motion information of the displacement between already-matched target candidates.
JP18130389A 1989-07-13 1989-07-13 Image identification / tracking device Expired - Fee Related JP2693586B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP18130389A JP2693586B2 (en) 1989-07-13 1989-07-13 Image identification / tracking device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP18130389A JP2693586B2 (en) 1989-07-13 1989-07-13 Image identification / tracking device

Publications (2)

Publication Number Publication Date
JPH0345898A true JPH0345898A (en) 1991-02-27
JP2693586B2 JP2693586B2 (en) 1997-12-24

Family

ID=16098320

Family Applications (1)

Application Number Title Priority Date Filing Date
JP18130389A Expired - Fee Related JP2693586B2 (en) 1989-07-13 1989-07-13 Image identification / tracking device

Country Status (1)

Country Link
JP (1) JP2693586B2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09331519A (en) * 1996-06-12 1997-12-22 Matsushita Electric Ind Co Ltd Automatic monitor equipment
JP2007183999A (en) * 1998-04-07 2007-07-19 Omron Corp Image processing apparatus and method, medium with program for image processing recorded thereon, and inspection apparatus
JP2010210212A (en) * 2009-03-12 2010-09-24 Toshiba Corp Object identification device
JP2011080890A (en) * 2009-10-08 2011-04-21 Toshiba Corp Object identification device
CN102222534A (en) * 2011-03-10 2011-10-19 中国原子能科学研究院 Beam shutter for single event effect ground accelerator simulation test
WO2022038757A1 (en) * 2020-08-21 2022-02-24 三菱電機株式会社 Target identification device, target identification method, and target identification program


Also Published As

Publication number Publication date
JP2693586B2 (en) 1997-12-24

Similar Documents

Publication Publication Date Title
KR100519782B1 (en) Method and apparatus for detecting people using a stereo camera
US9294665B2 (en) Feature extraction apparatus, feature extraction program, and image processing apparatus
US20110293190A1 (en) Image processing for change detection
EP2339507B1 (en) Head detection and localisation method
JP2006071471A (en) Moving body height discrimination device
JP2008046903A (en) Apparatus and method for detecting number of objects
JP4389602B2 (en) Object detection apparatus, object detection method, and program
CN101316371B (en) Flame detecting method and device
CN107818583A (en) Cross searching detection method and device
CN102013007A (en) Apparatus and method for detecting face
JPH0345898A (en) Image identifying and tracing apparatus
US5193127A (en) Method and device for detecting patterns adapted automatically for the level of noise
Hadi et al. Fusion of thermal and depth images for occlusion handling for human detection from mobile robot
JP2011090708A (en) Apparatus and method for detecting the number of objects
JP2004295416A (en) Image processing apparatus
JPS6126189A (en) Extracting method of edge
JPH0620054A (en) Method and apparatus for decision of pattern data
JP2009205695A (en) Apparatus and method for detecting the number of objects
JPH067171B2 (en) Moving object detection method
JPH09282460A (en) Automatic target recognizing device
JPH01177682A (en) Graphic recognizing device
US20020191850A1 (en) Real time object localization and recognition from silhouette images
JP3091356B2 (en) Moving object detection method and apparatus
JP2007164288A (en) Target object identifying device
JP6332702B2 (en) Space recognition device

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees