JPH04301571A - Apparatus for recognizing environment of moving vehicle - Google Patents

Apparatus for recognizing environment of moving vehicle

Info

Publication number
JPH04301571A
JPH04301571A JP3065854A JP6585491A
Authority
JP
Japan
Prior art keywords
feature
amount
image input
parallel
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP3065854A
Other languages
Japanese (ja)
Inventor
Hiroyuki Takahashi
弘行 高橋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mazda Motor Corp
Original Assignee
Mazda Motor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mazda Motor Corp filed Critical Mazda Motor Corp
Priority to JP3065854A priority Critical patent/JPH04301571A/en
Publication of JPH04301571A publication Critical patent/JPH04301571A/en
Withdrawn legal-status Critical Current

Abstract

PURPOSE: To reduce the amount of processing and to extract moving objects and calculate their amount of movement at high speed.
CONSTITUTION: A target such as a preceding vehicle or a white line on the road is captured as an image by a right image input unit 1 and a left image input unit 2, built from CCD cameras or the like installed so that their central axes are parallel. Feature detection units 3 and 4 extract edges through filters from the images picked up by the right image input unit 1 and the left image input unit 2, respectively, and send the extracted features to a difference unit 5. The difference unit 5 compares the features obtained by the feature detection units 3 and 4 and, as a result, produces time-series data for the features. These time-series data are stored in an image storage unit 6, and finally a feature tracking unit 7 observes only the prescribed parallel feature pairs and calculates their amount of movement.

Description

[Detailed Description of the Invention]

[0001]

[Field of Industrial Application] The present invention relates to an environment recognition apparatus for a moving vehicle that recognizes moving objects by image processing.

[0002]

[Prior Art] Conventional methods for extracting moving objects, in particular moving obstacles on the road such as pedestrians and vehicles, use stereoscopic parallax and motion vectors. These methods either compute the amount of movement from the result of viewing the target in stereo, or define a motion vector and apply stereo matching to the features that carry that motion vector.

[0003]

[Problem to Be Solved by the Invention] However, the conventional approaches described above, which compute the amount of movement from stereo results or define a motion vector and apply stereo matching to the features carrying it, require an enormous amount of computation, making it difficult to mount them on an actual vehicle from the standpoint of processing speed.

[0004]

[Means for Solving the Problem] The present invention was made to solve the problems described above, and as a means for doing so comprises the following configuration: stereo image input means consisting of left and right cameras arranged so that their central axes are parallel; means for extracting the edges of each of the images input by the left and right cameras; means for obtaining a difference image of the left and right edge-extraction results; means for extracting parallel line pairs from the difference image; and means for extracting moving objects based on the motion of the parallel line pairs.

[0005]

[Operation] With the above configuration, moving objects can be extracted and their amount of movement calculated at high speed.

[0006]

[Embodiment] A preferred embodiment of the present invention is described in detail below with reference to the accompanying drawings. FIG. 1 is a block diagram showing the overall configuration of an environment recognition apparatus for a moving vehicle (hereinafter, the apparatus) according to an embodiment of the present invention. In the figure, a right image input unit 1 and a left image input unit 2 are built from, for example, CCD cameras or the like; in this apparatus, the two cameras capture the target, such as a preceding vehicle or a white line on the road. The two cameras are installed so that their central axes are parallel, suppressing as far as possible any change in the left and right images on the imaging plane due to tilt.
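The reason a parallel-axis arrangement lets distant objects be discarded later (paragraph 0009) follows from stereo geometry: disparity falls off as 1/Z. A minimal numeric sketch, with hypothetical focal-length and baseline values that are not taken from the patent:

```python
def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Horizontal disparity d = f * B / Z for two parallel-axis cameras:
    f = focal length in pixels, B = baseline between the optical centres,
    Z = distance to the object along the optical axis."""
    return focal_px * baseline_m / depth_m

# A nearby vehicle shows a usable disparity; a distant one almost none.
near = disparity_px(700.0, 0.3, 10.0)    # 21 px
far = disparity_px(700.0, 0.3, 200.0)    # ~1 px
```

With these assumed numbers, an object 20x farther away produces 20x less disparity, which is why it falls below the spacing band used later and is ignored as a feature.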

[0007] The feature detection units 3 and 4 each contain, for example, a Gaussian-Laplacian filter for extracting features such as the edges described later from the images taken by the right image input unit 1 and the left image input unit 2, respectively, and send the extracted features to the difference unit 5. The difference unit 5 compares the features obtained by the feature detection units 3 and 4, producing time-series data for the features as a result. These time-series data are stored in the image storage unit 6. In FIG. 1, t to t+n denote the n items of time-series data stored in the image storage unit 6.
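The Gaussian-Laplacian (LoG) edge extraction performed in units 3 and 4 can be sketched as follows. This is a minimal illustration using SciPy's `gaussian_laplace`, not the patent's implementation; `sigma` and the zero-crossing magnitude threshold are assumed parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def extract_edges(image, sigma=1.5, threshold=1e-4):
    """Mark edge pixels as zero crossings of the Laplacian-of-Gaussian
    response whose magnitude exceeds a small threshold (to suppress
    numerical noise in flat regions)."""
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    sign = log > 0
    zc = np.zeros(log.shape, dtype=bool)
    # A zero crossing is a sign change between horizontal or vertical
    # neighbours of the filtered image.
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return zc & (np.abs(log) > threshold)
```

Applied to a synthetic vertical step edge, the detector responds in a narrow band around the intensity boundary, which is the behaviour the later parallel-line matching relies on.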

[0008] From the image data stored and accumulated in the image storage unit 6, the feature tracking unit 7 finally attends only to prescribed features and calculates their amount of movement. Next, the feature tracking method of the apparatus of this embodiment is described. FIG. 2 is a schematic flowchart showing the feature-tracking procedure in the apparatus of this embodiment. In the figure, in step S1 the target image is input through the right image input unit 1 and the left image input unit 2, and in step S2 edges are extracted using the above-mentioned filters built into the feature detection units 3 and 4. Then, in step S3, as the comparison of the left and right image features, the difference unit 5 computes the difference between the features. In obtaining this difference, features whose difference between the right and left images is large, and features with no difference at all, are not matched; attention is restricted only to features that satisfy these conditions and can be approximated by parallel lines (this is called the parallel hypothesis).
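The differencing of step S3 can be illustrated with binary edge maps: edges that coincide in both images (no difference, i.e. effectively zero disparity) cancel, leaving only displaced edge pairs as candidates. A hedged sketch assuming boolean edge maps as inputs:

```python
import numpy as np

def edge_difference(left_edges, right_edges):
    """Difference of two binary edge maps: a pixel survives when exactly
    one of the stereo images has an edge there.  Edges that overlap in
    both images (distant objects with ~zero disparity) vanish, as do
    pixels with no edge in either image."""
    return np.logical_xor(left_edges, right_edges)
```

For example, a vertical edge at column 3 in the left map and its displaced counterpart at column 5 in the right map both survive, whereas two identical maps produce an empty difference image.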

[0009] Normally, the movement of an object, seen from the observer's side, is a parallel displacement of the object, and this parallelism is not lost even when the state of motion becomes complicated; the method exploits this fact. Concretely, as shown in FIG. 3, for an object edge obtained from the right image (shown as a dash-dotted line) and the corresponding object edge in the left image (shown as a solid line), whether matching is performed is decided by whether the spacing d of their parallel portions lies within a prescribed range. In other words, only edges whose parallel-line spacing lies within a fixed range are tracked as features. Put differently, attention is paid to objects whose parallax lies within a fixed range; a distant object, when captured as images by the two cameras, shows little or no parallax and is therefore ignored as a feature.
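The spacing test of steps S4/S5 — accept a left/right edge pair only when its parallel spacing d lies within a band — can be sketched for vertical edges as a column-pairing search. The band limits `d_min` and `d_max` here are assumed values, not figures from the patent:

```python
import numpy as np

def parallel_pairs(left_edges, right_edges, d_min=2, d_max=20):
    """Pair vertical edge columns of the left image with columns of the
    right image whose spacing d = x_left - x_right falls inside
    [d_min, d_max]; pairs outside the band are rejected (step S5)."""
    left_cols = np.flatnonzero(left_edges.any(axis=0))
    right_cols = np.flatnonzero(right_edges.any(axis=0))
    pairs = []
    for xl in left_cols:
        for xr in right_cols:
            d = xl - xr
            if d_min <= d <= d_max:
                pairs.append((int(xl), int(xr), int(d)))
    return pairs
```

With this sign convention a zero or negative spacing — a distant object, or an impossible match — never enters the candidate list, which is the pruning that keeps the processing load low.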

[0010] In step S4 of FIG. 2, the spacing of the parallel portions described above is tested: if the difference is within the prescribed value, that is, if the spacing d of the parallel portions is within the fixed range, the feature is stored in the image storage unit 6 as one that can be approximated by parallel lines. If the spacing of the parallel portions is not within the fixed range, the parallel hypothesis is judged not to hold, and the feature is deleted in step S5.

[0011] In step S7, it is judged whether the time-series data of the features tracked and accumulated in the image storage unit 6 have reached a prescribed amount; if not, the process returns to step S1. When the prescribed time-series data have been obtained, the process proceeds to step S8, and the amount of movement of the features is calculated from those time-series data.

[0012] FIG. 4 is an example of the time-series data, showing a parallel line pair, i.e. a feature, moving across the screen. By tracking this parallel line pair, the amount of movement of the target object can be calculated. As explained above, according to this embodiment, installing two cameras in parallel and tracking as features the parallel line pairs obtained by taking the difference of the left and right image features extracted through the filters reduces the amount of processing and speeds up the extraction of moving objects and the calculation of their amount of movement.
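The movement calculation of step S8 can be illustrated by fitting the tracked pair's position over the stored frames; a least-squares slope gives the image velocity in pixels per frame. This is an assumed formulation for illustration, not the patent's specific computation:

```python
import numpy as np

def movement_per_frame(track):
    """track: sequence of (frame_index, column) positions of one tracked
    parallel line pair, e.g. drawn from the n stored frames t..t+n.
    Returns the least-squares image velocity in pixels per frame."""
    t = np.asarray([p[0] for p in track], dtype=float)
    x = np.asarray([p[1] for p in track], dtype=float)
    slope, _intercept = np.polyfit(t, x, 1)  # degree-1 fit: x ~ slope*t + b
    return float(slope)
```

A pair drifting steadily across the screen, e.g. positions 10, 12, 14, 16 over four frames, yields a velocity of 2 pixels per frame.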

[0013]

[Effects of the Invention] As explained above, according to the present invention, attending only to features that can be approximated by parallel lines in the image reduces the amount of processing and speeds up the extraction of moving objects and the calculation of their amount of movement.

[Brief Description of the Drawings]

[FIG. 1] A block diagram showing the overall configuration of an environment recognition apparatus for a moving vehicle according to an embodiment of the present invention;

[FIG. 2] A schematic flowchart showing the feature-tracking procedure in the apparatus of the embodiment;

[FIG. 3] A diagram showing the parallel line pairs corresponding to the left and right images obtained by edge extraction;

[FIG. 4] A diagram showing an example of time-series data of features.

[Explanation of Reference Numerals]

1  Right image input unit
2  Left image input unit
3, 4  Feature detection units
5  Difference unit
6  Image storage unit
7  Feature tracking unit

Claims (1)

[Claims]
[Claim 1] An environment recognition apparatus for a moving vehicle, comprising: stereo image input means consisting of left and right cameras arranged so that their central axes are parallel; means for extracting the edges of each of the images input by the left and right cameras; means for obtaining a difference image of the left and right edge-extraction results; means for extracting parallel line pairs from the difference image; and means for extracting moving objects based on the motion of the parallel line pairs.
JP3065854A 1991-03-29 1991-03-29 Apparatus for recognizing environment of moving vehicle Withdrawn JPH04301571A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3065854A JPH04301571A (en) 1991-03-29 1991-03-29 Apparatus for recognizing environment of moving vehicle

Publications (1)

Publication Number Publication Date
JPH04301571A true JPH04301571A (en) 1992-10-26

Family

ID=13299018

Family Applications (1)

Application Number Title Priority Date Filing Date
JP3065854A Withdrawn JPH04301571A (en) 1991-03-29 1991-03-29 Apparatus for recognizing environment of moving vehicle

Country Status (1)

Country Link
JP (1) JPH04301571A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004060402A1 (en) * 2004-12-14 2006-07-13 Adc Automotive Distance Control Systems Gmbh Method and device for determining a vehicle speed
US8140214B2 (en) * 2004-12-14 2012-03-20 Conti Temic Microelectronic Gmbh Method and device for determining the speed of a vehicle

Legal Events

Date Code Title Description
A300 Application deemed to be withdrawn because no request for examination was validly filed

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 19980514