JP4005679B2 - Ambient environment recognition device for autonomous vehicles - Google Patents

Ambient environment recognition device for autonomous vehicles

Info

Publication number
JP4005679B2
JP4005679B2
Authority
JP
Japan
Prior art keywords
ground
dimensional object
parallax
distance image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP31852397A
Other languages
Japanese (ja)
Other versions
JPH11149557A (en)
Inventor
至 瀬田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Subaru Corp
Original Assignee
Fuji Jukogyo KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuji Jukogyo KK filed Critical Fuji Jukogyo KK
Priority to JP31852397A priority Critical patent/JP4005679B2/en
Publication of JPH11149557A publication Critical patent/JPH11149557A/en
Application granted granted Critical
Publication of JP4005679B2 publication Critical patent/JP4005679B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Landscapes

  • Measurement Of Optical Distance (AREA)
  • Image Processing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a surrounding environment recognition device for an autonomous vehicle that recognizes the surrounding environment by stereo-processing a pair of images captured by a stereo camera.
[0002]
[Prior art]
Development of autonomous vehicles that travel autonomously while performing unmanned work in various fields, such as grass cutting and lawn mowing on golf courses, riverbed embankments, and parks, or pesticide spraying in orchards, has been actively pursued.
[0003]
Such autonomous vehicles obtain the information needed for travel control from various sensors, such as photoelectric and ultrasonic sensors, or are guided by an induction cable. Some sensors, however, require guides or beacons to be laid out, and induction cables are easily cut, costly, and hard to maintain.
[0004]
For this reason, techniques that recognize the surrounding environment by image processing and use the result for travel control have been developed in recent years. For example, Japanese Patent Application Laid-Open No. 63-314616 discloses a technique that performs autonomous travel control by detecting the vehicle's own position from stereo images of the traveling direction captured by two cameras.
[0005]
[Problems to be solved by the invention]
In autonomous travel through orchards and similar terrain, the travelable area must be judged, and the vehicle body controlled, with regard not only to three-dimensional obstacles such as trees and branches but also to depressions, hills, and steps. A pair of images (a reference image and a comparison image) captured under conditions of strong contrast between light and dark must therefore be stereo-processed to grasp both three-dimensional objects such as trees and the ground shape in three dimensions.
[0006]
However, when a high-contrast image is adjusted to a balanced brightness, the luminance of the ground portion tends to flatten out (becoming very similar to its surroundings) because of the camera's limited dynamic range. In conventional stereo processing, which matches the two images by the city block distance computed from the luminance difference between a small region of the reference image and a small region of the comparison image, the city block distance then hardly varies, making it difficult to detect the ground separately from three-dimensional objects.
[0007]
Conversely, if the brightness of the entire image is adjusted to suit the ground, the ground can be detected from the captured image but other three-dimensional objects become hard to detect; moreover, even a slight mismatch in the gain (brightness) adjustment of the two cameras causes frequent parallax detection errors.
[0008]
The present invention was made in view of the above circumstances, and its object is to provide a surrounding environment recognition device for an autonomous vehicle that detects both three-dimensional objects and the ground without error from a pair of stereo images captured in a high-contrast environment, obtaining three-dimensional information on the objects and the ground simultaneously.
[0009]
[Means for Solving the Problems]
According to the first aspect of the present invention, there is provided a surrounding environment recognition device for an autonomous vehicle that carries a stereo camera consisting of a pair of cameras and recognizes the surrounding environment by stereo-processing a pair of images captured by the stereo camera. The device comprises: means for generating an object-based distance image in which, for a pair of captured images containing three-dimensional objects and the ground, mutually corresponding regions are found from their luminance difference with the ground excluded, and the distance information to the three-dimensional objects obtained from the parallax of the corresponding regions is digitized; means for generating a ground-based distance image in which mutually corresponding ground regions of the pair of captured images are found from differences in luminance change and the ground distance information obtained from the parallax of the corresponding regions is digitized; and means for combining the object-based distance image with the ground-based distance image to generate, for the pair of captured images, a distance image that digitizes distance information over the entire screen.
[0010]
According to a second aspect of the present invention, in the first aspect, the parallax is estimated from the distance of an arbitrary point on the screen computed on the assumption that the ground is a plane, the position of the corresponding region is inferred from this estimate, and a predetermined range around the inferred position is set as the parallax detection range for the ground.
[0011]
That is, in the present invention, for a pair of images containing three-dimensional objects and the ground captured by a stereo camera consisting of two cameras, mutually corresponding regions are found from their luminance difference with the ground excluded, and an object-based distance image is generated by digitizing the distance information to the objects obtained from the parallax of those regions; at the same time, mutually corresponding ground regions are found from differences in luminance change, and a ground-based distance image is generated by digitizing the ground distance information obtained from their parallax. The object-based and ground-based distance images are then combined into a distance image that digitizes distance information over the entire screen.
[0012]
At this time, it is desirable to estimate the parallax from the distance of an arbitrary point on the screen computed on the assumption that the ground is a plane, infer the position of the corresponding region, and set a predetermined range around the inferred position as the parallax detection range for the ground.
[0013]
DETAILED DESCRIPTION OF THE INVENTION
Embodiments of the present invention are described below with reference to the drawings. FIGS. 1 to 7 relate to one embodiment: FIG. 1 is a configuration diagram of the surrounding environment recognition device; FIG. 2 is a configuration diagram of the ground stereo processing unit; FIG. 3 shows an example of an object-based distance image generated from an original image; FIG. 4 shows the positional relationship between the camera and a plane; FIG. 5 illustrates the parallax detection range; FIG. 6 shows an example of a ground-based distance image generated from an original image; and FIG. 7 shows an output distance image.
[0014]
In FIG. 1, reference numeral 1 denotes a surrounding environment recognition device mounted on an autonomous vehicle, for example a work robot that performs unmanned tasks such as pesticide spraying in an orchard; it processes images of the surroundings to recognize obstacles and the vehicle's own position, and assists autonomous movement with attitude control matched to the terrain.
[0015]
The surrounding environment recognition device 1 comprises a stereo camera 10 consisting of a pair of cameras 10a and 10b, a three-dimensional object stereo processing unit 20, a ground stereo processing unit 30, a data selection unit 40, a distance image memory 50, a recognition unit 60, and so on. A pair of original images (a reference image and a comparison image) captured in a high-contrast environment such as an orchard is distributed to two processing paths, the object stereo processing unit 20 and the ground stereo processing unit 30, which run in parallel; integrating the two results allows three-dimensional obstacles such as trees and branches to be recognized simultaneously with ground features such as depressions, hills, and steps.
[0016]
Specifically, the object stereo processing unit 20 stereo-matches the pair of images captured by the stereo camera 10 to generate an object-based distance image. In this process, strong edge detection is first applied to the original images, which contain both the ground and three-dimensional objects such as trees, so that the ground is excluded; then, for each small region of the edge image, for example a 4 × 4 pixel region, the city block distance C1 of equation (1) below is computed from the luminance differences between pixels.
[0017]
C1 = Σ|Ai − Bi| … (1)
where Ai is the luminance of the i-th pixel in the small region of the reference image, Bi is the luminance of the i-th pixel in the small region of the comparison image, and Σ denotes the sum over i = 0 to n (the number of pixels in the region). The small region of the comparison image that minimizes the city block distance C1 of equation (1) is taken as the location corresponding to the small region of the reference image, the pixel shift between the corresponding regions caused by the distance to the object (the parallax) is determined, and three-dimensional image information (a distance image) is generated by digitizing the distance information to the three-dimensional object obtained from this parallax.
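For illustration, the block matching of equation (1) can be sketched in Python as follows. This is a minimal reconstruction under assumptions, not the patented implementation: the function and array names, the 32-pixel search width, and the left-to-right disparity convention are illustrative.

```python
import numpy as np

def match_block_c1(ref, cmp_img, y, x, max_disp=32, block=4):
    """Disparity of one 4x4 reference block by minimizing C1 = sum(|Ai - Bi|)."""
    a = ref[y:y + block, x:x + block].astype(np.int32)
    best_d, best_c1 = 0, np.inf
    for d in range(max_disp + 1):                 # slide along the epipolar line
        if x + d + block > cmp_img.shape[1]:
            break
        b = cmp_img[y:y + block, x + d:x + d + block].astype(np.int32)
        c1 = np.abs(a - b).sum()                  # equation (1)
        if c1 < best_c1:
            best_c1, best_d = c1, d
    return best_d
```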
[0018]
That is, this object-oriented stereo processing applies stronger edge detection than normal stereo processing (described in detail in, for example, Japanese Patent Application Laid-Open No. H5-114099 by the present applicant), so that an object-based distance image like that of FIG. 3 is generated from the original image, yielding distance data for three-dimensional objects alone with the ground excluded.
[0019]
On the other hand, as shown in FIG. 2, the ground stereo processing unit 30 comprises a parallax detection range setting unit 31, a luminance-differential city block distance calculation unit 32, a parallax detection unit 33, and the like; it stereo-matches the pair of images captured by the stereo camera 10 to generate a ground-based distance image.
[0020]
That is, since the ground can to some extent be regarded as a plane, the parallax detection range setting unit 31 assumes the ground to be a plane, estimates the position of the small region of the comparison image that matches a given small region of the reference image, and sets the range over which parallax is searched.
[0021]
As shown in FIG. 4, when the camera views a plane, let H be the height of the camera above the plane, (Xv, Yv) the vanishing point on the screen, (Xi, Yi) an arbitrary point on the screen, and f the focal length; the actual distance Z to the arbitrary point is then given by equation (2) below.
[0022]
Z = H / tan(tan⁻¹(Yv/f) − tan⁻¹(Yi/f)) … (2)
Further, letting r be the baseline length (the distance between the camera optical axes) of the two cameras 10a and 10b and x the parallax, the distance D from the lens to the target object is given by equation (3) below.
[0023]
D = rf/x … (3)
Therefore, by setting the right-hand side of equation (2) equal to the right-hand side of equation (3), an approximate parallax can be obtained. Using this approximate parallax under the ground-plane assumption, the position of the matching small region in the comparison image is estimated for a given small region of the reference image, for example an 8 × 2 pixel region, as shown in FIG. 5. Setting a predetermined range before and after this estimated point along the horizontal scanning direction as the parallax detection range in the comparison image then allows the ground to be detected more accurately.
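For illustration, the disparity prediction obtained by equating (2) and (3) can be sketched as follows. The numerical camera parameters (H, f, r, the vanishing-point row yv) and the convention of measuring rows upward from the image center are assumptions for the sketch, not values from the patent.

```python
import math

H = 1.2     # camera height above the plane [m] (assumed)
f = 600.0   # focal length [pixels] (assumed)
r = 0.35    # baseline between the two cameras [m] (assumed)
yv = 20.0   # vanishing-point row, measured upward from image center [px]

def expected_ground_disparity(yi):
    """Predicted parallax for a ground point at row yi (ground rows have yi < yv)."""
    angle = math.atan(yv / f) - math.atan(yi / f)
    if angle <= 0:
        raise ValueError("row is at or above the horizon")
    z = H / math.tan(angle)        # equation (2): distance to the ground point
    return r * f / z               # solve equation (3), D = r*f/x, for x

def ground_search_range(yi, margin=3):
    """Disparity window (lo, hi) to scan for the ground at row yi."""
    d = expected_ground_disparity(yi)
    return max(0.0, d - margin), d + margin
```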
[0024]
Next, the luminance-differential city block distance calculation unit 32 computes, for each small region within the parallax detection range, the luminance-differential city block distance C2 of equation (4) below. Since each part of the ground has a luminance very similar to its surroundings, simply taking the luminance difference between the two images, as in normal matching, produces frequent parallax errors; matching is therefore performed on points of luminance change rather than on luminance itself.
[0025]
C2 = Σ|(Ai − Ai−1) − (Bi − Bi−1)| … (4)
The parallax detection unit 33 then takes the small region of the comparison image that minimizes the luminance-differential city block distance C2 of equation (4) as the location corresponding to the small region of the reference image, determines their parallax, and generates three-dimensional image information (a distance image) by digitizing the ground distance information obtained from this parallax. FIG. 6 shows an example of a ground-based distance image generated from an original image using the luminance-differential city block distance.
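A minimal sketch of the luminance-differential matching of equation (4), under the same illustrative conventions as the sketches above (names and bounds are assumptions): the cost compares horizontal brightness changes rather than brightness itself, so low-contrast ground texture can still produce a distinct minimum.

```python
import numpy as np

def match_block_c2(ref, cmp_img, y, x, d_lo, d_hi, block_w=8, block_h=2):
    """Disparity in [d_lo, d_hi] minimizing C2 over one 8x2 block."""
    a = ref[y:y + block_h, x:x + block_w].astype(np.int32)
    da = np.diff(a, axis=1)                   # Ai - Ai-1 along the scan line
    best_d, best_c2 = d_lo, np.inf
    for d in range(d_lo, d_hi + 1):           # window from the plane estimate
        if x + d + block_w > cmp_img.shape[1]:
            break
        b = cmp_img[y:y + block_h, x + d:x + d + block_w].astype(np.int32)
        db = np.diff(b, axis=1)               # Bi - Bi-1
        c2 = np.abs(da - db).sum()            # equation (4)
        if c2 < best_c2:
            best_c2, best_d = c2, d
    return best_d
```

The window (d_lo, d_hi) here would come from the plane-based prediction above, rounded to integers.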
[0026]
The data selection unit 40 embeds ground data selected from the ground-based distance image generated by the ground stereo processing unit 30 into the object-based distance image generated by the object stereo processing unit 20, combining them into a single distance image as shown in FIG. 7.
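The combination step might look like the following sketch; the use of 0 as a "no data" value in the object-based image is an assumption for illustration, as the patent does not specify the encoding.

```python
import numpy as np

def merge_distance_images(object_dist, ground_dist):
    """Fill holes in the object-based distance image with ground data."""
    merged = object_dist.copy()
    holes = (merged == 0)            # assumed "no data" encoding
    merged[holes] = ground_dist[holes]
    return merged
```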
[0027]
The distance image output from the data selection unit 40 is stored in the distance image memory 50 and read into the recognition unit 60. The recognition unit 60 recognizes a three-dimensional object such as a tree, unevenness of the ground, or the like based on the three-dimensional distance information from the distance image memory 50, and outputs information for autonomous traveling to a traveling control device (not shown).
[0028]
Thus, for example, when the vehicle travels autonomously along a route learned in advance for pesticide spraying in an orchard, it can decelerate when a hill is recognized and creep when a step is recognized; in this way, the travelable area is judged according to the situation, and autonomous travel with vehicle body control matched to the terrain is assisted.
[0029]
[Effects of the invention]
As described above, according to the invention of claim 1, for a pair of images containing three-dimensional objects and the ground captured by a stereo camera consisting of two cameras, mutually corresponding regions are found from their luminance difference with the ground excluded, and an object-based distance image is generated by digitizing the distance information to the objects obtained from their parallax; mutually corresponding ground regions are likewise found from differences in luminance change, and a ground-based distance image is generated by digitizing the ground distance information obtained from their parallax. Because the two distance images are then combined into a single distance image that digitizes distance information over the entire screen, three-dimensional objects such as trees and the ground can be detected without error even in a high-contrast environment such as an orchard, accurate three-dimensional information can be obtained, and autonomous travel with vehicle body control matched to the terrain can be assisted through accurate judgment of the travelable area.
[0030]
In this case, as set out in claim 2, the parallax is estimated from the distance of an arbitrary point on the screen computed on the assumption that the ground is a plane, the position of the corresponding region is inferred, and a predetermined range around the inferred position is set as the parallax detection range for the ground, allowing the ground to be detected more accurately.
[Brief description of the drawings]
FIG. 1 is a configuration diagram of the surrounding environment recognition device.
FIG. 2 is a configuration diagram of the ground stereo processing unit.
FIG. 3 is an explanatory diagram showing an example of an object-based distance image generated from an original image.
FIG. 4 is an explanatory diagram showing the positional relationship between the camera and a plane.
FIG. 5 is an explanatory diagram of the parallax detection range.
FIG. 6 is an explanatory diagram showing an example of a ground-based distance image generated from an original image.
FIG. 7 is an explanatory diagram showing an output distance image.
[Explanation of symbols]
1 … Surrounding environment recognition device
10 … Stereo camera
20 … Three-dimensional object stereo processing unit
30 … Ground stereo processing unit
40 … Data selection unit
50 … Distance image memory
60 … Recognition unit

Claims (2)

1. A surrounding environment recognition device for an autonomous vehicle that carries a stereo camera consisting of a pair of cameras and recognizes the surrounding environment by stereo-processing a pair of images captured by the stereo camera, the device comprising:
means for generating an object-based distance image in which, for a pair of captured images containing three-dimensional objects and the ground, mutually corresponding regions are found from their luminance difference with the ground excluded, and distance information to the three-dimensional objects obtained from the parallax of the corresponding regions is digitized;
means for generating a ground-based distance image in which, for the pair of captured images, mutually corresponding ground regions are found from differences in luminance change, and distance information on the ground obtained from the parallax of the corresponding regions is digitized; and
means for combining the object-based distance image and the ground-based distance image to generate, for the pair of captured images, a distance image that digitizes distance information over the entire screen.

2. The surrounding environment recognition device for an autonomous vehicle according to claim 1, wherein the parallax is estimated from the distance of an arbitrary point on the screen computed on the assumption that the ground is a plane, the position of the corresponding region is inferred from the estimate, and a predetermined range around the inferred position is set as the parallax detection range for the ground.
JP31852397A 1997-11-19 1997-11-19 Ambient environment recognition device for autonomous vehicles Expired - Lifetime JP4005679B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP31852397A JP4005679B2 (en) 1997-11-19 1997-11-19 Ambient environment recognition device for autonomous vehicles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP31852397A JP4005679B2 (en) 1997-11-19 1997-11-19 Ambient environment recognition device for autonomous vehicles

Publications (2)

Publication Number Publication Date
JPH11149557A JPH11149557A (en) 1999-06-02
JP4005679B2 true JP4005679B2 (en) 2007-11-07

Family

ID=18100071

Family Applications (1)

Application Number Title Priority Date Filing Date
JP31852397A Expired - Lifetime JP4005679B2 (en) 1997-11-19 1997-11-19 Ambient environment recognition device for autonomous vehicles

Country Status (1)

Country Link
JP (1) JP4005679B2 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5148669B2 (en) * 2000-05-26 2013-02-20 本田技研工業株式会社 Position detection apparatus, position detection method, and position detection program
JP4672175B2 (en) * 2000-05-26 2011-04-20 本田技研工業株式会社 Position detection apparatus, position detection method, and position detection program
KR100416306B1 (en) * 2000-12-16 2004-01-31 주식회사 포디컬쳐 3d shape acquiring method and encarving system for 3d images with large brightness contrast and media for storing program source thereof
JP4956452B2 (en) * 2008-01-25 2012-06-20 富士重工業株式会社 Vehicle environment recognition device
US8340438B2 (en) * 2009-12-17 2012-12-25 Deere & Company Automated tagging for landmark identification
JP6548518B2 (en) 2015-08-26 2019-07-24 株式会社ソニー・インタラクティブエンタテインメント INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
US9915951B2 (en) 2015-12-27 2018-03-13 Toyota Motor Engineering & Manufacturing North America, Inc. Detection of overhanging objects
US9910442B2 (en) 2016-06-28 2018-03-06 Toyota Motor Engineering & Manufacturing North America, Inc. Occluded area detection with static obstacle maps
US10078335B2 (en) 2016-06-28 2018-09-18 Toyota Motor Engineering & Manufacturing North America, Inc. Ray tracing for hidden obstacle detection
US10137890B2 (en) 2016-06-28 2018-11-27 Toyota Motor Engineering & Manufacturing North America, Inc. Occluded obstacle classification for vehicles
CN110392216B (en) * 2018-08-08 2021-07-23 乐清市川嘉电气科技有限公司 Light variation real-time judging system
KR102077219B1 (en) * 2018-11-01 2020-02-13 재단법인대구경북과학기술원 Routing method and system for self-driving vehicle using tree trunk detection

Also Published As

Publication number Publication date
JPH11149557A (en) 1999-06-02

Similar Documents

Publication Publication Date Title
US10129521B2 (en) Depth sensing method and system for autonomous vehicles
JP5157067B2 (en) Automatic travel map creation device and automatic travel device.
US8712144B2 (en) System and method for detecting crop rows in an agricultural field
US8855405B2 (en) System and method for detecting and analyzing features in an agricultural field for vehicle guidance
US8737720B2 (en) System and method for detecting and analyzing features in an agricultural field
US7266454B2 (en) Obstacle detection apparatus and method for automotive vehicle
JP4005679B2 (en) Ambient environment recognition device for autonomous vehicles
Agrawal et al. Rough terrain visual odometry
JP5160370B2 (en) Autonomous mobile robot device, mobile body steering assist device, autonomous mobile robot device control method, and mobile body steering assist method
CN110717445B (en) Front vehicle distance tracking system and method for automatic driving
JP2011013803A (en) Peripheral shape detection device, autonomous mobile device, operation auxiliary device for mobile body, peripheral shape detection method, control method for the autonomous mobile device and operation auxiliary method for the mobile body
JP2019125116A (en) Information processing device, information processing method, and program
KR20210090384A (en) Method and Apparatus for Detecting 3D Object Using Camera and Lidar Sensor
WO2018202464A1 (en) Calibration of a vehicle camera system in vehicle longitudinal direction or vehicle trans-verse direction
Shacklock et al. Visual guidance for autonomous vehicles: capability and challenges
US11447063B2 (en) Steerable scanning and perception system with active illumination
CN212044739U (en) Positioning device and robot based on inertial data and visual characteristics
JP3279610B2 (en) Environment recognition device
Yu et al. Distance estimation method with snapshot landmark images in the robotic homing navigation
JP3977505B2 (en) Road shape recognition device
JPH09259282A (en) Device and method for detecting moving obstacle
WO2024095993A1 (en) Row detection system, agricultural machine provided with row detection system, and row detection method
Lins et al. A novel machine vision approach applied for autonomous robotics navigation
JP3245284B2 (en) Self-position detection method for autonomous mobile work vehicles
US20230421739A1 (en) Robust Stereo Camera Image Processing Method and System

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20040930

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20070808

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20070814

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20070824

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100831

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110831

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120831

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130831

Year of fee payment: 6

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

EXPY Cancellation because of completion of term