JPH03282709A - Environment recognition device for mobile vehicle - Google Patents

Environment recognition device for mobile vehicle

Info

Publication number
JPH03282709A
JPH03282709A (application JP2081404A)
Authority
JP
Japan
Prior art keywords
image
vehicle
candidate point
moving object
mobile body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2081404A
Other languages
Japanese (ja)
Other versions
JP2829934B2
Inventor
Shoichi Maruya
丸屋 祥一
Atsushi Kutami
篤 久田見
Hiroyuki Takahashi
弘行 高橋
Current Assignee
Mazda Motor Corp
Original Assignee
Mazda Motor Corp
Priority date
Filing date
Publication date
Application filed by Mazda Motor Corp
Priority to JP2081404A
Publication of JPH03282709A
Application granted
Publication of JP2829934B2
Anticipated expiration
Legal status: Expired - Lifetime

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Processing (AREA)

Abstract

PURPOSE: To speed up recognition of whether the object corresponding to a moving-object candidate point is moving, by narrowing down the candidate points on the basis of edges extracted from two prescribed frames. CONSTITUTION: Travel-path area extracting means 1 and 2 extract the travel-path area by binarizing an input image, and an edge extracting means 3 extracts horizontal edges of the image, proceeding from bottom to top within the extracted travel-path area. A moving-object candidate point extracting means 7 narrows down the candidate points on the basis of the size of the extracted edges, and moving-object recognizing means 5, 6 and 7 recognize that a candidate point belongs to a moving object on the basis of its displacement between the two prescribed frames and the displacement of the own vehicle. Because edge extraction by image processing narrows down the candidate points in advance, a moving object ahead of the own vehicle can be recognized from these two displacements with little computation and in a short time.

Description

DETAILED DESCRIPTION OF THE INVENTION

(Field of Industrial Application) The present invention relates to an environment recognition device for a mobile vehicle that recognizes moving objects by means of image processing.

(Prior Art) Conventionally, when a moving object is recognized by image processing, two consecutive frames are compared to identify the same object in both, and it is then determined whether that object is moving. To do so, various feature quantities are extracted from each frame and processed, and the results are compared and matched to decide whether the objects are identical.

(Problem to Be Solved by the Invention) In the conventional approach described above, however, deciding whether two objects are identical requires computing, comparing and matching many feature quantities drawn from each frame, which demands an enormous amount of computation and time.

(Means for Solving the Problem) The present invention has been made to solve the above problem, and to that end it comprises the following configuration.

Specifically, the device comprises: travel-path area extracting means for extracting the travel-path area by binarizing an input image; edge extracting means for extracting horizontal edges of the image, proceeding from bottom to top within the travel-path area extracted by the travel-path area extracting means; moving-object candidate point extracting means for narrowing down moving-object candidate points on the basis of the size of the edges extracted by the edge extracting means; and moving-object recognizing means for recognizing that a candidate point belongs to a moving object on the basis of the displacement of the candidate point between two prescribed frames and the displacement of the own vehicle.

(Operation) With the above configuration, edge extraction by image processing narrows down the moving-object candidate points, and a moving object ahead of the own vehicle can be recognized from the displacement of the candidate points between two prescribed frames and the displacement of the own vehicle.

(Embodiment) A preferred embodiment of the present invention is described in detail below with reference to the accompanying drawings.

Fig. 1 is a block diagram of an environment recognition device for a mobile vehicle according to one embodiment of the present invention.

In Fig. 1, the environment recognition device for a mobile vehicle (hereinafter called the recognition device) receives an image of the scene ahead of the own vehicle from an image input unit 1 consisting of a video camera or the like, and an image binarization unit 2 binarizes this input image. An edge extraction unit 3 then extracts edges from the binarized image by the method described later, and the edge-extracted image is stored in an image storage unit 4.

A sensor unit 5 consists of a vehicle speed sensor 5a and a direction sensor 5b; the vehicle speed sensor 5a detects the speed of the own vehicle, and the direction sensor 5b detects its direction of travel. These detection results are sent to a movement amount calculation unit 6, which computes the displacement of the own vehicle.
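The displacement computation in the movement amount calculation unit 6 can be sketched as follows. The patent does not give the formula, so a simple dead-reckoning step over the interval Δt is assumed; the function and parameter names are illustrative, not from the patent.

```python
import math

def ego_displacement(speed_mps, heading_rad, dt_s):
    """Integrate the own vehicle's motion over the interval dt_s.

    speed_mps and heading_rad stand in for the outputs of the vehicle
    speed sensor 5a and direction sensor 5b (illustrative assumption).
    Returns the travel distance and its lateral/forward components.
    """
    distance = speed_mps * dt_s            # scalar travel distance
    dx = distance * math.sin(heading_rad)  # lateral component
    dy = distance * math.cos(heading_rad)  # forward component
    return distance, dx, dy

# A vehicle travelling straight ahead (heading 0) at 10 m/s for 0.5 s:
dist, dx, dy = ego_displacement(10.0, 0.0, 0.5)
```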

A main calculation unit 7 reads the edge-extracted images stored in the image storage unit 4 and extracts moving-object candidate points. As described later, the main calculation unit 7 extracts candidate points from two consecutive frames and computes the displacement of each candidate point between the two frames. From this displacement and the displacement of the own vehicle supplied by the movement amount calculation unit 6, it recognizes moving objects ahead of the own vehicle and presents the result visually or audibly on a display unit 8.

The method by which the recognition device of this embodiment recognizes moving objects is described in detail below.

Figs. 2(A) and 2(B) illustrate how white lines on the road are extracted from an image taken ahead of the own vehicle. Fig. 2(A) is an image captured by the image input unit 1 before image processing, and Fig. 2(B) is the image of Fig. 2(A) after binarization by the image binarization unit 2.

The threshold T used by the image binarization unit 2 is obtained by taking the area A at the lower center of the frame in Fig. 2(A) as a road area and computing the mean gray level t0 of area A; T is then given by the following formula.

T = t0 + α ... (1)

where α = f(t0).

Binarizing the image of Fig. 2(A) with the threshold T obtained from formula (1) extracts the white lines on the road, yielding the hatched area S shown in Fig. 2(B).
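As a sketch of this binarization step, the following assumes a grayscale frame represented as a 2-D list and a set of sampled road pixels from area A. The form of α = f(t0) is not specified in the patent, so a simple proportional term is used purely for illustration.

```python
def road_threshold(image, region, alpha_fn=lambda t0: 0.1 * t0):
    """Compute the binarization threshold T = t0 + alpha (formula (1)).

    `image` is a 2-D list of gray levels; `region` is a list of
    (row, col) pixels sampled from area A at the lower center of the
    frame.  alpha_fn is an assumed stand-in for f(t0).
    """
    values = [image[r][c] for r, c in region]
    t0 = sum(values) / len(values)  # mean gray level of road area A
    return t0 + alpha_fn(t0)

def binarize(image, threshold):
    """Pixels brighter than T (e.g. white lane lines) map to 1, road to 0."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

# Tiny synthetic frame: dark road (gray 50) with a bright line (gray 200).
frame = [[50, 50, 200, 50],
         [50, 50, 200, 50]]
T = road_threshold(frame, [(1, 0), (1, 1), (1, 3)])  # sampled road pixels
mask = binarize(frame, T)
```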

The edge extraction unit 3 then extracts horizontal edges of the image within the area S obtained as above, using the mask shown in Fig. 3. That is, as shown in Fig. 4(A), when an object 21 casting a shadow 21a and an object 20 casting a shadow 20a are present in the area S, scanning for edges from the bottom of the frame upward extracts as edges the places where the upper side is darker than the lower side, i.e. where the image changes from light to dark (21b and 20b in Fig. 4(B), described later). Measuring the horizontal extent of each extracted portion, that is, the length of the edge, gives the actual size of the object; edges longer than a fixed value are therefore kept as moving-object candidate points, while the rest are discarded as noise. The result is the image shown in Fig. 4(B), which is stored in the image storage unit 4 as the image after edge extraction and candidate-point narrowing.

Next, after a predetermined time (denoted Δt) has elapsed since the image of Fig. 4(A) was taken, the image input unit 1 again captures an image of the scene ahead of the own vehicle (shown in Fig. 4(C)), horizontal edges are extracted from it by the same method as above, and the result is stored in the image storage unit 4 as the image after edge extraction. As a result, the edges 22b and 23b shown in Fig. 4(D) are extracted as the moving-object candidate points of that frame. For convenience, the white lines on the road are drawn into the edge-extracted images here.

The method of deciding whether the object corresponding to a moving-object candidate point obtained by the above image processing is actually moving or stationary is described next.

Fig. 5 shows the positional relationship between two objects moving on a plane and an object stationary with respect to the plane. In the figure, a television camera 31 is fixed to the own vehicle 30 so as to photograph a predetermined area ahead in the direction of travel (the arrow in the figure), including the moving object 21 and the stationary object 20 located ahead of the own vehicle.

Suppose that at time t1 the own vehicle 30 and the forward moving object 21 are at the positions shown by dotted lines on the plane 40, and at time t2, after the interval Δt has elapsed, they are at the positions shown by solid lines. Since, as described above, the vehicle speed sensor 5a and direction sensor 5b of the sensor unit 5 constantly monitor the speed and direction of the own vehicle, the movement amount calculation unit 6 computes the displacement Δl of the own vehicle from that speed, that direction, and the interval Δt. The stationary object 20, on the other hand, stays at a fixed place on the plane regardless of the passage of time. If the image taken at time t1 is Fig. 4(A) and the image taken at time t2 is Fig. 4(C), the images after horizontal edge extraction are Figs. 4(B) and 4(D), respectively, as described above.

Here it is assumed that the two frames taken from the own vehicle moving at a given speed are separated by a time interval Δt long enough for the movement of a photographed stationary object on the screen to be recognizable.

Since these edge-extracted images are stored in the image storage unit 4, the main calculation unit 7 compares the images of times t1 and t2 and determines whether candidate points common to both exist. That is, from the images of Figs. 4(B) and 4(D), the main calculation unit 7 identifies 20b, 21b, 22b and 23b as candidate points common to both images; of these, the edges 20b and 23b have changed position between the two frames, while the edges 21b and 22b have not moved on the screen. The main calculation unit 7 therefore further checks whether the displacement of the own vehicle computed by the movement amount calculation unit 6 agrees with the on-screen displacement of the edges 20b and 23b; if the two agree, it concludes that the edges 20b and 23b were extracted from the shadow of the same stationary object within the area S, i.e. on the road. It also concludes that the edges 21b and 22b, which it judged not to have moved on the screen, were extracted from a moving object ahead of the own vehicle. It follows that 20 in Fig. 4(A) and 23 in Fig. 4(C) are the same stationary object, and that 21 and 22 are the same moving object.
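This comparison can be sketched as follows. The mapping from the own vehicle's displacement to the expected on-screen shift of a stationary point depends on the camera geometry, which the patent does not detail, so it is passed in here as an assumed function; the labels and tolerance are illustrative.

```python
def classify_candidates(matches, expected_shift, tol=2.0):
    """Split matched candidate points into stationary and moving.

    `matches` maps a label to (y_t1, y_t2), the screen row of the same
    candidate edge in the two frames.  `expected_shift(y)` gives the
    on-screen displacement the ego motion alone would cause for a
    stationary point first seen at row y (assumed camera model).
    """
    stationary, moving = [], []
    for label, (y1, y2) in matches.items():
        observed = y2 - y1
        if abs(observed - expected_shift(y1)) <= tol:
            stationary.append(label)  # motion explained by ego motion
        else:
            moving.append(label)      # residual motion: a moving object
    return stationary, moving

# Assume ego motion shifts stationary road features 10 px down-screen.
stat, mov = classify_candidates(
    {"edge20": (120, 130),   # moved ~10 px: shadow of a stationary object
     "edge21": (80, 80)},    # no screen motion: object moving with us
    expected_shift=lambda y: 10.0)
```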

Further, by comparing Fig. 4(B) with Fig. 4(D), it becomes clear from the on-screen displacement of the edges 20b and 23b and the displacement of the own vehicle computed by the movement amount calculation unit 6 that the edges 20b and 23b are edges of the shadow of a stationary object on the road. If the edges 21b and 22b have moved on the screen by an amount sufficient to judge them to be edges of the shadow of a moving object ahead of the own vehicle, the main calculation unit 7 can also compute the speed of that forward moving object from the on-screen displacement of the edges 21b and 22b and the displacement of the own vehicle computed by the movement amount calculation unit 6.

That is, when the own vehicle is moving at constant speed, if the edge 22b of Fig. 4(D) lies above the edge 21b of Fig. 4(B) on the screen, the forward moving object 21 is travelling faster than the own vehicle 30; in the opposite case, the forward moving object 21 is travelling slower than the own vehicle 30. Needless to say, if the edge 21b of Fig. 4(B) and the edge 22b of Fig. 4(D) occupy the same position on the screen, the forward moving object 21 is travelling at the same speed as the own vehicle.
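This sign rule can be captured directly. Image rows are assumed to grow downward, so "above on the screen" means a smaller row index; the function name is illustrative.

```python
def relative_motion(y_t1, y_t2):
    """Compare the edge position of a forward vehicle across two frames
    taken while the own vehicle moves at constant speed.  Rows grow
    downward, so a smaller row number is higher on the screen."""
    if y_t2 < y_t1:
        return "faster"      # edge moved up-screen: pulling away from us
    if y_t2 > y_t1:
        return "slower"      # edge moved down-screen: we are closing in
    return "same speed"      # no screen motion: matching our speed
```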

As explained above, according to this embodiment, points of change from light to dark are extracted as edges from each of two consecutively captured images and narrowed down, and a calculation based on the displacement of the edges between the frames and the displacement of the own vehicle decides whether an object ahead of the own vehicle is moving; moving objects on the road can therefore be recognized quickly and easily.

In the embodiment above, edge extraction is performed on the basis of the light and dark values of the binarized image in order to search for moving-object candidate points, but edge extraction may also be performed using a color image.

That is, the shadow directly beneath a vehicle appears clearly even without direct sunlight, and it does not carry the color components of the road: its R, G and B components are present in roughly equal amounts. By analyzing the color components of shadows in the image, the shadow directly beneath a vehicle can therefore easily be distinguished from shadows whose color components resemble those of the surrounding road, allowing more accurate extraction of moving-object candidate points.
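A simple chromaticity test in the spirit of this remark might look as follows. Both thresholds are illustrative assumptions, not values from the patent.

```python
def is_vehicle_shadow(pixel, max_spread=15, max_brightness=80):
    """Heuristic sketch: a shadow cast directly beneath a vehicle is
    dark and nearly achromatic (R, G and B present in about equal
    amounts), while road-tinted shadows keep a color cast."""
    r, g, b = pixel
    achromatic = max(r, g, b) - min(r, g, b) <= max_spread
    dark = max(r, g, b) <= max_brightness
    return achromatic and dark
```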

(Effects of the Invention) As explained above, according to the present invention, narrowing down the moving-object candidate points on the basis of edges extracted from two prescribed frames speeds up the recognition of whether the object corresponding to a candidate point is moving.

[Brief Description of the Drawings]

Fig. 1 is a block diagram of an environment recognition device for a mobile vehicle according to one embodiment of the present invention; Fig. 2(A) shows the original image captured by the image input unit; Fig. 2(B) shows the image after binarization by the image binarization unit; Fig. 3 shows the mask used to extract horizontal edges of the image within the area S shown in Fig. 2(B); Fig. 4(A) shows an image taken at a time t1; Fig. 4(B) shows the image of Fig. 4(A) after horizontal edge extraction; Fig. 4(C) shows an image taken after Δt has elapsed from time t1; Fig. 4(D) shows the image of Fig. 4(C) after horizontal edge extraction; and Fig. 5 shows the positional relationship between two objects moving on a plane and an object stationary on the plane.

In the figures: 1 ... image input unit; 2 ... image binarization unit; 3 ... edge extraction unit; 4 ... image storage unit; 5 ... sensor unit; 6 ... movement amount calculation unit; 7 ... main calculation unit; 8 ... display unit; 20 ... stationary object; 21 ... moving object; 30 ... own vehicle; 31 ... television camera.

Claims (1)

[Claims] An environment recognition device for a mobile vehicle provided with image input means for recognizing the outside world, comprising: travel-path area extracting means for extracting the travel-path area by binarizing the input image; edge extracting means for extracting horizontal edges of the image, proceeding from bottom to top within the travel-path area extracted by the travel-path area extracting means; moving-object candidate point extracting means for narrowing down moving-object candidate points on the basis of the size of the edges extracted by the edge extracting means; and moving-object recognizing means for recognizing that a moving-object candidate point belongs to a moving object on the basis of the displacement of the candidate point between two prescribed frames and the displacement of the own vehicle.
JP2081404A 1990-03-30 1990-03-30 Mobile vehicle environment recognition device Expired - Lifetime JP2829934B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2081404A JP2829934B2 (en) 1990-03-30 1990-03-30 Mobile vehicle environment recognition device

Publications (2)

Publication Number Publication Date
JPH03282709A (en) 1991-12-12
JP2829934B2 JP2829934B2 (en) 1998-12-02

Family

ID=13745385


Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8917169B2 (en) 1993-02-26 2014-12-23 Magna Electronics Inc. Vehicular vision system
JPH07162846A (en) * 1993-12-07 1995-06-23 Sumitomo Electric Ind Ltd Follow-up recognizing device for object
US8993951B2 (en) 1996-03-25 2015-03-31 Magna Electronics Inc. Driver assistance system for a vehicle
US8842176B2 (en) 1996-05-22 2014-09-23 Donnelly Corporation Automatic vehicle exterior light control
JPH11328365A (en) * 1998-05-14 1999-11-30 Toshiba Corp Device and method for monitoring image
US9436880B2 (en) 1999-08-12 2016-09-06 Magna Electronics Inc. Vehicle vision system
JP2003309843A (en) * 2002-04-12 2003-10-31 Honda Motor Co Ltd Obstacle warning device
US10683008B2 (en) 2002-05-03 2020-06-16 Magna Electronics Inc. Vehicular driving assist system using forward-viewing camera
US10351135B2 (en) 2002-05-03 2019-07-16 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9555803B2 (en) 2002-05-03 2017-01-31 Magna Electronics Inc. Driver assistance system for vehicle
US11203340B2 (en) 2002-05-03 2021-12-21 Magna Electronics Inc. Vehicular vision system using side-viewing camera
US9171217B2 (en) 2002-05-03 2015-10-27 Magna Electronics Inc. Vision system for vehicle
US10118618B2 (en) 2002-05-03 2018-11-06 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9834216B2 (en) 2002-05-03 2017-12-05 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9643605B2 (en) 2002-05-03 2017-05-09 Magna Electronics Inc. Vision system for vehicle
US9008369B2 (en) 2004-04-15 2015-04-14 Magna Electronics Inc. Vision system for vehicle
US10306190B1 (en) 2004-04-15 2019-05-28 Magna Electronics Inc. Vehicular control system
US11847836B2 (en) 2004-04-15 2023-12-19 Magna Electronics Inc. Vehicular control system with road curvature determination
US9736435B2 (en) 2004-04-15 2017-08-15 Magna Electronics Inc. Vision system for vehicle
US9428192B2 (en) 2004-04-15 2016-08-30 Magna Electronics Inc. Vision system for vehicle
US9948904B2 (en) 2004-04-15 2018-04-17 Magna Electronics Inc. Vision system for vehicle
US10015452B1 (en) 2004-04-15 2018-07-03 Magna Electronics Inc. Vehicular control system
US11503253B2 (en) 2004-04-15 2022-11-15 Magna Electronics Inc. Vehicular control system with traffic lane detection
US10110860B1 (en) 2004-04-15 2018-10-23 Magna Electronics Inc. Vehicular control system
US9191634B2 (en) 2004-04-15 2015-11-17 Magna Electronics Inc. Vision system for vehicle
US10187615B1 (en) 2004-04-15 2019-01-22 Magna Electronics Inc. Vehicular control system
US9609289B2 (en) 2004-04-15 2017-03-28 Magna Electronics Inc. Vision system for vehicle
US8818042B2 (en) 2004-04-15 2014-08-26 Magna Electronics Inc. Driver assistance system for vehicle
US10462426B2 (en) 2004-04-15 2019-10-29 Magna Electronics Inc. Vehicular control system
US10735695B2 (en) 2004-04-15 2020-08-04 Magna Electronics Inc. Vehicular control system with traffic lane detection
JP4506299B2 (en) * 2004-06-17 2010-07-21 トヨタ自動車株式会社 Vehicle periphery monitoring device
JP2006004173A (en) * 2004-06-17 2006-01-05 Toyota Motor Corp Vehicle periphery monitoring device
US10787116B2 (en) 2006-08-11 2020-09-29 Magna Electronics Inc. Adaptive forward lighting system for vehicle comprising a control that adjusts the headlamp beam in response to processing of image data captured by a camera
US11148583B2 (en) 2006-08-11 2021-10-19 Magna Electronics Inc. Vehicular forward viewing image capture system
US11396257B2 (en) 2006-08-11 2022-07-26 Magna Electronics Inc. Vehicular forward viewing image capture system
US10071676B2 (en) 2006-08-11 2018-09-11 Magna Electronics Inc. Vision system for vehicle
US11623559B2 (en) 2006-08-11 2023-04-11 Magna Electronics Inc. Vehicular forward viewing image capture system
US9440535B2 (en) 2006-08-11 2016-09-13 Magna Electronics Inc. Vision system for vehicle
US11951900B2 (en) 2006-08-11 2024-04-09 Magna Electronics Inc. Vehicular forward viewing image capture system

Also Published As

Publication number Publication date
JP2829934B2 (en) 1998-12-02

Similar Documents

Publication Publication Date Title
US10941019B2 (en) User detection system and image processing device
US8175806B2 (en) Car navigation system
US20060140447A1 (en) Vehicle-monitoring device and method using optical flow
JP2007179386A (en) Method and apparatus for recognizing white line
JPH03282709A (en) Environment recognition device for mobile vehicle
JPH05298591A (en) Object recognition device
JP3812384B2 (en) Leading vehicle recognition device
JPH03282707A (en) Environment recognition device for mobile vehicle
JP2004086417A (en) Method and device for detecting pedestrian on zebra crossing
JP3157958B2 (en) Leading vehicle recognition method
JPH07244717A (en) Travel environment recognition device for vehicle
JPH10320559A (en) Traveling path detector for vehicle
JP2962799B2 (en) Roadside detection device for mobile vehicles
JPH0997335A (en) Vehicle recognition device and traffic flow measuring instrument
RU2262661C2 (en) Method of detecting moving vehicle
JP2946620B2 (en) Automatic number reading device with speed measurement function
JPH0520593A (en) Travelling lane recognizing device and precedence automobile recognizing device
JPH05290293A (en) Vehicle head detector
Rosebrock et al. Using the shadow as a single feature for real-time monocular vehicle pose determination
JPS63292386A (en) Counting device for moving object
JPH0514892A (en) Image monitor device
Maftuna et al. MODERN SYSTEMS FOR RECOGNIZING VEHICLE LICENSE PLATES
EP3611655A1 (en) Object detection based on analysis of a sequence of images
JPH05303638A (en) Device for extracting parallel lines
JP2000222671A (en) Vehicle detection device