JPS6371604A - System for detecting road border and obstacle by using area-divided color image - Google Patents

System for detecting road border and obstacle by using area-divided color image

Info

Publication number
JPS6371604A
JPS6371604A (application JP61215529A)
Authority
JP
Japan
Prior art keywords
area
road
image
obstacle
differentiated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP61215529A
Other languages
Japanese (ja)
Inventor
Hideo Mori
英雄 森
Shinji Kotani
信司 小谷
Hideki Owa
大輪 秀樹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to JP61215529A priority Critical patent/JPS6371604A/en
Publication of JPS6371604A publication Critical patent/JPS6371604A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

PURPOSE: To detect road borders and obstacles for a blind-person guidance robot or an outdoor mobile robot by means of area-divided color image processing. CONSTITUTION: The three image planes R, G, and B of a color TV camera 3 are digitized, and differential operators are applied to each of the R, G, and B planes to form differentiated images. The R, G, and B differentiated images are added together to obtain a monochromatic differentiated image. The original image is then divided into regions wherever the differentiated image exceeds a threshold value. Next, the area, center coordinates, length, width, and mean RGB values of each region are calculated, and a region which has a gray RGB value, whose center coordinates lie in the lower half of the image plane, and which is large in area is regarded as a road candidate region. The contour of the road candidate region is approximated with directed segments to obtain a road border observation straight line 4. A region which straddles the road border observation straight line and differs in RGB value from the road region is then extracted as an obstacle partial region 5.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Field of Application of the Invention] The present invention is used for road and obstacle detection by blind-person walking guidance robots and outdoor mobile robots that use a TV camera.

[Object of the Invention] It is known that a robot moving on an ordinary roadway or sidewalk can be controlled so that it travels parallel to the road boundary while keeping a fixed distance from it. However, the boundaries of ordinary roads take many forms, such as shoulders bordered by grass, white lines, fences, and steps, and there has been no good method for detecting them. The object of the present invention is to provide such a detection method, together with a method for detecting obstacles on the road.

[Summary of the Invention]

1) Inverse transformation of the road boundary image: As shown in FIG. 1, a mobile robot is placed near a road boundary. A color TV camera with a lens of focal length f is mounted on the head of the robot at a height of h meters, facing along the robot's front principal axis, and the optical axis of the camera is tilted downward by φ degrees from the horizontal plane. The image of the road boundary on the TV screen is then a straight line crossing the upper left corner of the screen, as shown in FIG. 2.

Consider a coordinate system x0y0z0 whose origin is the point on the bottom of the vehicle body directly below the TV camera, where y0 is the axis taken along the robot's front principal axis, x0 is the horizontal axis orthogonal to y0, and z0 is the vertical axis orthogonal to y0. The x0y0z0 frame is translated by h in the z0 direction and then rotated downward by φ about the x0 axis to obtain a new coordinate system x1y1z1. The origin of this coordinate system coincides with the center of the TV camera lens, and the y1 axis coincides with the optical axis. The coordinate transformation from x0y0z0 to x1y1z1 is as follows:

x1 = x0
y1 = y0 cosφ − (z0 − h) sinφ
z1 = y0 sinφ + (z0 − h) cosφ            (1)

Consider a two-dimensional coordinate system uv whose origin is at the center of the image plane, where the u axis is taken in the direction of the −x1 axis and the v axis in the direction of the −z1 axis. Letting f be the focal length of the TV camera lens, the lens formula gives the image (u, v) of a point (x1, y1, z1) as follows.

v/f = z1/y1            (2)
u/f = x1/y1            (3)

From equations (1), (2), and (3), (u, v) is obtained immediately once (x0, y0, z0) is given.
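As an illustration only, a minimal sketch of equations (1)-(3) in Python/NumPy is given below; the function name, the use of degrees for φ, and the normalization f = 1 are assumptions, not part of the patent.

```python
import numpy as np

def project_point(x0, y0, z0, h, phi_deg, f=1.0):
    """Apply equations (1)-(3): robot frame (x0, y0, z0) -> image (u, v).

    h is the camera height and phi_deg the downward tilt of the optical axis;
    with f = 1 the result is (u/f, v/f).
    """
    phi = np.deg2rad(phi_deg)
    # Equation (1): translate by h along z0, then rotate downward by phi about x0.
    x1 = x0
    y1 = y0 * np.cos(phi) - (z0 - h) * np.sin(phi)
    z1 = y0 * np.sin(phi) + (z0 - h) * np.cos(phi)
    # Equations (2) and (3): perspective projection through the lens.
    u = f * x1 / y1
    v = f * z1 / y1
    return u, v
```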

Let a be the distance from the robot to the road boundary and θ the angle between the front principal axis direction and the road boundary; the road boundary is then expressed as follows.

x0 cosθ + y0 sinθ = a            (4)
z0 = 0            (5)

The image of this road boundary is obtained from equations (1), (2), and (3) as follows:

h cosθ·u/f + (h sinφ sinθ + a cosφ)·v/f = a sinφ − h cosφ sinθ            (6)

Conversely, once the image of the road boundary is found on the screen, the distance a from the road boundary and the angle θ with the front principal axis direction in real space can be obtained.
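For illustration, the road boundary image of equation (6) can be handled as a line in (u/f, v/f); the small helper below, with an assumed function name and coefficient ordering, simply returns the three coefficients of (6).

```python
import numpy as np

def boundary_image_line(a, theta_deg, h, phi_deg):
    """Coefficients (p, q, r) of equation (6):  p*(u/f) + q*(v/f) = r."""
    th, phi = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
    p = h * np.cos(th)
    q = h * np.sin(phi) * np.sin(th) + a * np.cos(phi)
    r = a * np.sin(phi) - h * np.cos(phi) * np.sin(th)
    return p, q, r
```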

2) Extraction of the road candidate region: The three R, G, B image planes of the color TV camera are digitized. A differential operator is applied to each of the R, G, and B planes to produce differentiated images, and the R, G, and B differentiated images are added to obtain a monochromatic differentiated image. Next, the original image is divided into regions wherever the value of the differentiated image exceeds a threshold. The area, center coordinates, length, width, and mean RGB values of each region are then computed. A region that has a gray RGB value, whose center coordinates lie in the lower half of the screen, and whose area is large is taken as the road candidate region.
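A minimal sketch of this region-division step in Python with NumPy and SciPy follows, given only as an illustration: the gradient operator, the connected-component labeling via scipy.ndimage.label, the "gray" test (R, G, B means within a fixed spread of one another), and all numeric parameter values are assumptions; the patent specifies only the overall procedure and, in the embodiment, a threshold of 23 out of 64 gray levels.

```python
import numpy as np
from scipy import ndimage

def road_candidate_region(rgb, diff_threshold=23, gray_spread=20, min_area=2000):
    """Divide the image into regions and pick a road candidate.

    rgb: (H, W, 3) array of R, G, B planes scaled to 0..63 gray levels.
    All parameter values are illustrative assumptions, not taken from the patent.
    """
    # Differentiate each of the R, G, B planes and add the results
    # to obtain a monochromatic differentiated image.
    diff = np.zeros(rgb.shape[:2], dtype=float)
    for c in range(3):
        gy, gx = np.gradient(rgb[:, :, c].astype(float))
        diff += np.abs(gx) + np.abs(gy)

    # Region-divide the original image: pixels where the differentiated image
    # exceeds the threshold act as region borders, so label the remaining pixels.
    labels, n = ndimage.label(diff < diff_threshold)

    h, w = diff.shape
    best = None
    for lab in range(1, n + 1):
        mask = labels == lab
        area = int(mask.sum())
        if area < min_area:
            continue
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()
        means = [rgb[:, :, c][mask].mean() for c in range(3)]
        is_gray = max(means) - min(means) < gray_spread
        # Road candidate: gray RGB value, center in the lower half, large area.
        if is_gray and cy > h / 2 and (best is None or area > best[0]):
            best = (area, lab, (cy, cx), means)
    return best, labels
```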

3) Calculation of the road boundary prediction region: Suppose the robot runs parallel to the road boundary, that is, with θ = 0, while maintaining the distance a. A straight line representing the road boundary image is then obtained from equation (6).

h·u/f + a cosφ·v/f = a sinφ            (7)

This equation is rewritten as follows:

u/f·cosω + v/f·sinω = b            (8)

where tanω = a cosφ/h and b = a sinφ·cosω/h. The region enclosed by the two straight lines obtained by translating line (8) upward and downward by Δb is taken as the road boundary prediction region.
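As a sketch, ω, b, and the prediction band of equation (8) can be computed as below; the function name, the band half-width delta_b, and the membership test are illustrative assumptions.

```python
import numpy as np

def boundary_prediction_band(h, a, phi_deg, delta_b):
    """Return (omega, b) of line (8) and a test for membership in the band
    obtained by shifting the line up and down by delta_b (assumed half-width)."""
    phi = np.deg2rad(phi_deg)
    omega = np.arctan(a * np.cos(phi) / h)      # tan(omega) = a cos(phi) / h
    b = a * np.sin(phi) * np.cos(omega) / h     # b = a sin(phi) cos(omega) / h

    def in_band(u_over_f, v_over_f):
        d = u_over_f * np.cos(omega) + v_over_f * np.sin(omega)
        return abs(d - b) <= delta_b

    return omega, b, in_band
```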

4) Road boundary observation line: The road candidate region can become merged with a region outside the road because of breaks in the road boundary, such as gaps in a white line, or because of obstacles. Therefore, the contour of the road candidate region is approximated by directed line segments, and the straight line

u/f·cosΩ + v/f·sinΩ = B            (9)

formed by connecting the directed line segments that lie within the road boundary prediction region is taken as the road boundary observation line. See FIG. 2.
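One possible way to obtain (Ω, B) is sketched below; the patent connects directed segments lying inside the prediction region, whereas this sketch simply fits a line in the normal form of equation (9) to the contour points that fall inside the band, using a total-least-squares (PCA) fit. The function name and the fitting method are assumptions.

```python
import numpy as np

def fit_observation_line(points_uv_over_f, in_band):
    """Fit u/f*cos(Omega) + v/f*sin(Omega) = B to contour points inside the band.

    points_uv_over_f: (N, 2) array of (u/f, v/f) contour points.
    in_band: membership test such as the one returned by the sketch above.
    """
    pts = np.array([p for p in points_uv_over_f if in_band(p[0], p[1])])
    if len(pts) < 2:
        return None
    centroid = pts.mean(axis=0)
    # The line direction is the principal axis; its normal gives (cos, sin) of Omega.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                      # unit normal to the best-fit line
    omega_big = np.arctan2(normal[1], normal[0])
    b_big = float(normal @ centroid)
    if b_big < 0:                        # keep B non-negative for a unique form
        omega_big += np.pi
        b_big = -b_big
    return omega_big, b_big              # (Omega, B) of equation (9)
```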

5) Detection of obstacles: As shown in FIG. 3, a region that straddles the road boundary observation line and whose RGB value differs from that of the road region is extracted as an obstacle partial region. Since obstacles such as cars and people are not uniform in color, they are split into several regions in the image.

Therefore, when there is a region adjacent to an obstacle partial region whose u coordinates at both ends coincide with those of the partial region, the obstacle partial region and the adjacent region are merged. This process is repeated until no adjacent region satisfying the condition remains, and the merged region thus obtained is taken as the obstacle region. Given the coordinates (u, v) of the lower right corner of the obstacle region on the screen, equations (1), (2), and (3) are inverted to obtain the coordinates (x0, y0, z0) in three-dimensional space.
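For the inverse transformation of an image point back to the ground plane, a sketch under the assumption z0 = 0 (the point lies on the road surface) is given below; the function name is illustrative, and the formula follows from substituting equation (1) with z0 = 0 into equations (2) and (3).

```python
import numpy as np

def image_to_ground(u_over_f, v_over_f, h, phi_deg):
    """Invert equations (1)-(3) for a point assumed to lie on the road
    surface (z0 = 0), returning (x0, y0, 0) in the robot frame."""
    phi = np.deg2rad(phi_deg)
    s, c = np.sin(phi), np.cos(phi)
    # From v/f = z1/y1 with y1 = y0*c + h*s and z1 = y0*s - h*c:
    y0 = h * (c + v_over_f * s) / (s - v_over_f * c)
    y1 = y0 * c + h * s
    x0 = u_over_f * y1                   # from u/f = x1/y1 with x1 = x0
    return x0, y0, 0.0
```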

[Embodiments] 1) When the horizontal angle of view of the lens is 40°, the vertical angle of view is 29°, and the image memory has 320 pixels horizontally and 240 pixels vertically, one pixel corresponds to about 0.002 in both u/f and v/f.
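The per-pixel value of u/f and v/f can be checked as below; treating the image plane as spanning 2·tan(FOV/2) in units of f is an assumption about how the figure 0.002 was obtained.

```python
import numpy as np

# Per-pixel increments of u/f and v/f for a 40 deg x 29 deg field of view
# on a 320 x 240 image memory (assuming a pinhole model).
du = 2 * np.tan(np.deg2rad(40 / 2)) / 320   # about 0.0023 per pixel
dv = 2 * np.tan(np.deg2rad(29 / 2)) / 240   # about 0.0022 per pixel
print(round(du, 4), round(dv, 4))           # both roughly 0.002, as in the text
```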

When h = 1250 mm, a = 1000 mm, and φ = 8°, the parameters ω and b of equation (8) representing the road boundary become ω = 38.4° and b = 0.0873.
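These values can be reproduced directly from the definitions under equation (8); the short check below is illustrative.

```python
import numpy as np

h, a, phi = 1250.0, 1000.0, np.deg2rad(8.0)
omega = np.arctan(a * np.cos(phi) / h)
b = a * np.sin(phi) * np.cos(omega) / h
print(np.rad2deg(omega), b)   # approximately 38.4 and 0.0873
```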

2) Given the road boundary observation line of equation (9), equation (6) yields

tanΩ = (h sinφ sinθ + a cosφ)/(h cosθ)
B = (a sinφ − h cosφ sinθ)·cosΩ/(h cosθ)

Further, when Ω = 45.6°, B = 0.225, h = 1250 mm, and φ = 8°, the distance from the robot to the road boundary is a = 1300 mm and the angle between the robot and the road boundary is θ = −10°.
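Solving these two relations for a and θ gives a closed form: eliminating a yields tanθ = cosφ·(tanΩ·tanφ − B/cosΩ). The sketch below, with an illustrative function name, reproduces the numbers of this embodiment.

```python
import numpy as np

def distance_and_angle(Omega_deg, B, h, phi_deg):
    """Recover the distance a and angle theta from the observation line (9)."""
    Om, phi = np.deg2rad(Omega_deg), np.deg2rad(phi_deg)
    # Eliminate a between the two relations derived from equation (6).
    theta = np.arctan(np.cos(phi) * (np.tan(Om) * np.tan(phi) - B / np.cos(Om)))
    a = h * (np.tan(Om) * np.cos(theta) - np.sin(phi) * np.sin(theta)) / np.cos(phi)
    return a, np.rad2deg(theta)

print(distance_and_angle(45.6, 0.225, 1250.0, 8.0))   # about (1300 mm, -10 deg)
```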

3) As the differential operator, the four operators shown in FIG. 4 are applied, their absolute values are taken, and the maximum among them is used as the value of the differential operator. This differential operator is applied to each of the R, G, and B planes, and the sum is taken to form the differentiated image. With pixel gray levels quantized to 64 levels, good results were obtained when the regions were divided using a threshold of 23 levels.
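FIG. 4 itself is not reproduced in the text, so the four masks below are an assumption: a common choice is four 3x3 directional difference operators (here Prewitt-type masks at 0°, 45°, 90°, and 135°). Taking the maximum of their absolute responses per pixel, summing over R, G, and B, and thresholding at 23 of 64 levels follows the procedure described above.

```python
import numpy as np
from scipy import ndimage

# Assumed stand-ins for the four operators of FIG. 4 (Prewitt-type masks
# in four directions); the actual masks appear only in the drawing.
MASKS = [
    np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),    # 0 deg
    np.array([[ 0, 1, 1], [-1, 0, 1], [-1, -1, 0]]),   # 45 deg
    np.array([[ 1, 1, 1], [ 0, 0, 0], [-1, -1, -1]]),  # 90 deg
    np.array([[ 1, 1, 0], [ 1, 0, -1], [ 0, -1, -1]]), # 135 deg
]

def differentiated_image(rgb, threshold=23):
    """Max of the absolute responses of the four masks, summed over R, G, B.

    rgb: (H, W, 3) image with 64 gray levels per plane.
    Returns the differentiated image and the boundary mask at the threshold.
    """
    total = np.zeros(rgb.shape[:2], dtype=float)
    for c in range(3):
        plane = rgb[:, :, c].astype(float)
        responses = [np.abs(ndimage.convolve(plane, m, mode="nearest")) for m in MASKS]
        total += np.max(np.stack(responses), axis=0)
    return total, total > threshold
```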

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram showing the relationship between the robot, the x0y0z0 coordinate system, the x1y1z1 coordinate system, and the uv coordinate system; FIG. 2 is a diagram showing the TV screen and the uv coordinate system; FIG. 3 is a diagram showing the relationship between the road boundary image and an obstacle; and FIG. 4 is a diagram showing the differential operators.

1: robot, 2: road boundary, 3: TV camera, 4: road boundary image, 5: obstacle

Claims (1)

[Claims] A system for detecting road borders and obstacles by using area-divided color image processing.
JP61215529A 1986-09-12 1986-09-12 System for detecting road border and obstacle by using area-divided color image Pending JPS6371604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP61215529A JPS6371604A (en) 1986-09-12 1986-09-12 System for detecting road border and obstacle by using area-divided color image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP61215529A JPS6371604A (en) 1986-09-12 1986-09-12 System for detecting road border and obstacle by using area-divided color image

Publications (1)

Publication Number Publication Date
JPS6371604A true JPS6371604A (en) 1988-04-01

Family

ID=16673935

Family Applications (1)

Application Number Title Priority Date Filing Date
JP61215529A Pending JPS6371604A (en) 1986-09-12 1986-09-12 System for detecting road border and obstacle by using area-divided color image

Country Status (1)

Country Link
JP (1) JPS6371604A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004280451A (en) * 2003-03-14 2004-10-07 Matsushita Electric Works Ltd Autonomous moving device
JP2009068852A (en) * 2007-09-10 2009-04-02 Tokyo Metropolitan Univ Environment recognition system, autonomous type mobile and environment recognition program
CN107454969A (en) * 2016-12-19 2017-12-08 深圳前海达闼云端智能科技有限公司 Obstacle detection method and device
WO2018112707A1 (en) * 2016-12-19 2018-06-28 深圳前海达闼云端智能科技有限公司 Method and device for detecting obstacles
US10997438B2 (en) 2016-12-19 2021-05-04 Cloudminds (Shanghai) Robotics Co., Ltd. Obstacle detection method and apparatus
CN107636680A (en) * 2016-12-30 2018-01-26 深圳前海达闼云端智能科技有限公司 A kind of obstacle detection method and device
WO2018120027A1 (en) * 2016-12-30 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and apparatus for detecting obstacles
CN107636680B (en) * 2016-12-30 2021-07-27 达闼机器人有限公司 Obstacle detection method and device

Similar Documents

Publication Publication Date Title
US5706355A (en) Method of analyzing sequences of road images, device for implementing it and its application to detecting obstacles
CA2950791C (en) Binocular visual navigation system and method based on power robot
JP4409035B2 (en) Image processing apparatus, singular part detection method, and recording medium recording singular part detection program
Mori et al. On-line vehicle and pedestrian detections based on sign pattern
JP2989744B2 (en) Measurement surface extraction device and method
WO2020244414A1 (en) Obstacle detection method, device, storage medium, and mobile robot
CN110956069B (en) Method and device for detecting 3D position of pedestrian, and vehicle-mounted terminal
CN113848892B (en) Robot cleaning area dividing method, path planning method and device
JPH05298591A (en) Object recognition device
JPS6371604A (en) System for detecting road boarder and obstacle by using area-divided color image
CN109895697B (en) Driving auxiliary prompting system and method
CN109982047A (en) A method of flight monitoring panorama fusion display
Baker et al. Automated street crossing for assistive robots
Lee et al. A study on recognition of road lane and movement of vehicles using vision system
JPS6270916A (en) Boundary detecting method for self-traveling truck
Ozaki et al. An image processing system for autonomous vehicle
JPH0396451A (en) Obstruction detection device for vehicle
Adorni et al. A non-traditional omnidirectional vision system with stereo capabilities for autonomous robots
Baker et al. A vision-based tracking system for a street-crossing robot
JPH01265399A (en) Road recognizing device for vehicle
JPH07220194A (en) Road environment recognizing device
Agunbiade et al. Road Detection Technique Using Filters with Application to Autonomous Driving System
JPS6224310A (en) Boundary detector for automatic traveling truck
Chen et al. Image-based obstacle avoidance and path-planning system
Ge A real time vehicle tracking system for an outdoor mobile robot