WO1998020302A1 - Device for detecting obstacle on surface of traveling road of traveling object - Google Patents

Device for detecting obstacle on surface of traveling road of traveling object

Info

Publication number
WO1998020302A1
Authority
WO
WIPO (PCT)
Prior art keywords
road surface
dimensional
obstacle
image
moving object
Prior art date
Application number
PCT/JP1997/004042
Other languages
French (fr)
Japanese (ja)
Inventor
Seiichi Mizui
Hiroyoshi Yamaguchi
Tetsuya Shinbo
Osamu Yoshimi
Original Assignee
Komatsu Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Komatsu Ltd. filed Critical Komatsu Ltd.
Priority to AU48845/97A priority Critical patent/AU4884597A/en
Publication of WO1998020302A1 publication Critical patent/WO1998020302A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Definitions

  • the present invention relates to a device for detecting an obstacle present on a traveling road surface on which a moving object such as an unmanned dump truck travels.
  • a device for detecting an obstacle on a traveling path of a mobile body such as an unmanned dump truck
  • a device that detects an obstacle by capturing an image of the front of the mobile body and processing the captured image.
  • Obstacle detection devices based on this type of image processing obtain a far greater amount of information than devices that detect obstacles ahead of the moving object using ultrasonic sensors, laser radar, or millimeter-wave sensors, and have the advantages of a wide viewing angle and a wide obstacle-detection range.
  • the present inventors have proposed an invention in which the distance image in the traveling direction of the moving object is converted into a three-dimensional distribution of pixels, a pixel group corresponding to the traveling path is identified from this three-dimensional distribution, the traveling path is regarded as a plane, and an object higher than a predetermined height relative to that plane is detected as an obstacle; attempts have been made to implement this invention.
  • a moving object obstacle detection device using image processing requires a great deal of time for arithmetic processing in exchange for the large amount of information that can be obtained.
  • the method of detecting the white line painted along the road to specify the image-processing range may well apply to ordinary roads, but it cannot be applied to the rough terrain on which unmanned dump trucks travel.
  • an object that does not exist on the original planned traveling path may be erroneously detected as an obstacle.
  • with the obstacle detection method previously proposed by the present inventors, an obstacle of a predetermined height or more relative to the traveling road surface can indeed be detected; however, when the traveling road surface is a slope or the like, the gradient and the road surface height are not uniform. It is therefore unreasonable to treat the parts of a traveling road surface that inherently differ in gradient and height as a single plane and to detect obstacles against the height of that plane, and erroneous detection may result.
  • the present invention has been made in view of this situation. Its first object is to make it possible to identify the traveling road surface from an image on any traveling road, including rough terrain, to perform obstacle detection in real time, and to reliably detect, without erroneous detection, only the obstacles on the planned traveling path even when the traveling path curves or branches.
  • a second object of the present invention is to make it possible to reliably detect, without erroneous detection, obstacles of a predetermined height or more relative to the traveling road surface even when the traveling road surface is a slope or the like and the gradient and height of each part of the road surface differ.
  • the obstacle is detected based on the traveling road surface of the moving body and the three-dimensional image of the obstacle on the traveling road surface
  • position detecting means for detecting the current position of the moving body as a three-dimensional coordinate position, and three-dimensional image generating means for generating a current three-dimensional image of the traveling road surface and the obstacle in a three-dimensional coordinate system viewed from the moving body;
  • image processing means that computes the three-dimensional coordinate position data of the traveling path in the moving-body coordinate system, based on three-dimensional coordinate position data indicating the three-dimensional coordinate position of each point along the traveling path and on the current three-dimensional coordinate position of the moving body detected by the position detecting means, matches this data against the three-dimensional image in the moving-body coordinate system currently generated by the three-dimensional image generating means, and cuts out the portion corresponding to the traveling road surface from the current three-dimensional image; and
  • detecting means for detecting the presence of the obstacle in the portion of the three-dimensional image cut out as corresponding to the traveling road surface.
  • the three-dimensional coordinate position data S0, S1, S2, … of the traveling path 31 in the moving-body coordinate system X1—Y1—Z1 are calculated.
  • the three-dimensional coordinate position data S0, S1, S2, … of the traveling path in the moving-body coordinate system X1—Y1—Z1 are matched against the currently generated three-dimensional image 40 in the moving-body coordinate system X1—Y1—Z1,
  • and a portion K corresponding to the traveling road surface 31 is cut out from the current three-dimensional image 40 (see the image 40′ in FIG. 9). It is then detected whether the obstacle 33 exists in the cut-out portion K corresponding to the traveling road surface 31.
  • the obstacle is detected based on a traveling road surface of a moving body and a three-dimensional image of the obstacle on the traveling road surface.
  • position detecting means for detecting a current position of the moving object as a three-dimensional coordinate position
  • Three-dimensional image generating means for generating a current three-dimensional image of the traveling road surface and the obstacle in a three-dimensional coordinate system viewed from the moving body;
  • image processing means that computes the three-dimensional coordinate position data of the traveling path in the moving-body coordinate system, based on three-dimensional coordinate position data indicating the three-dimensional coordinate position of each point along the traveling path and on the current three-dimensional coordinate position of the moving body detected by the position detecting means, and matches this data against the three-dimensional image in the moving-body coordinate system currently generated by the three-dimensional image generating means;
  • Image processing means for cutting out a specific portion of the traveling road surface in the current three-dimensional image
  • Detecting means for obtaining the cut-out specific portion of the traveling road surface as a plane, and detecting that the obstacle is present on that plane.
  • in the configuration of the second invention, as shown in FIG. 8, based on the three-dimensional coordinate position data of the traveling path and the current three-dimensional coordinate position of the moving body detected by the position detecting means,
  • the three-dimensional coordinate position data S0, S1, S2, … of the traveling path 31 in the moving-body coordinate system X1—Y1—Z1 are calculated.
  • the three-dimensional coordinate position data S0, S1, S2, … of the traveling path in the moving-body coordinate system X1—Y1—Z1 are matched against the currently generated three-dimensional image 40 (FIG. 8) in the moving-body coordinate system X1—Y1—Z1,
  • and a specific portion L3 of the traveling road surface 31 is cut out from the current three-dimensional image 40 (see the image 40′ in FIG. 9).
  • the cut-out specific portion L3 of the traveling road surface is obtained as a plane, and it is detected whether the obstacle 33 exists on this plane.
  • FIG. 1 is a block diagram showing a configuration example of an embodiment of a device for detecting an obstacle on a traveling road surface of a moving object according to the present invention.
  • FIG. 2 is a flowchart showing a procedure of processing executed by the detection range identification unit and the obstacle detection unit shown in FIG. 1.
  • FIG. 3 is a diagram showing the relationship between the overall coordinate system and the vehicle body coordinate system.
  • FIG. 4 is a diagram schematically showing a traveling path on which the moving body of the embodiment travels.
  • FIG. 5 is a diagram showing a gradient of a traveling road and a road surface height according to the embodiment.
  • FIG. 6 is a diagram showing the road width of the traveling road according to the embodiment.
  • FIG. 7 is a diagram showing a distance image generated by the three-dimensional distance image generation unit shown in FIG. 1.
  • FIG. 8 is a diagram showing a three-dimensional distribution image of pixels obtained by converting the distance image shown in FIG. 7.
  • FIG. 9 is a diagram showing, as an image, only the traveling road surface or only a portion obtained by dividing the traveling road surface into sections.
  • X0—Y0—Z0 denotes the global coordinate system,
  • X1—Y1—Z1 denotes the vehicle body coordinate system that moves together with the moving body 1.
  • X1 is the coordinate axis corresponding to the vehicle-width direction of the moving body 1,
  • Z1 is a coordinate axis corresponding to the traveling direction of the moving body 1 (scheduled traveling path 31)
  • Y1 is a vertical coordinate axis.
  • FIG. 1 shows a configuration of an obstacle detection device according to an embodiment of the present invention.
  • the obstacle detection device includes a position measurement sensor 7 that detects the current position of the moving object 1 as a three-dimensional coordinate position in the global coordinate system X0—Y0—Z0, and a planned-travel-path data storage unit 11 in which three-dimensional coordinate position data indicating the three-dimensional coordinate position, in the global coordinate system X0—Y0—Z0, of each point S0, S1, S2, … along the planned traveling path 31 are obtained in advance and stored,
  • a three-dimensional distance image generation unit 3 that measures the distance from a reference position (reference plane) on the moving body 1 to the traveling road surface 31 and to obstacles on it, generates a three-dimensional distance image of the traveling road surface 31 and the obstacles, and converts the distance image into a three-dimensional image 40 in the vehicle body coordinate system X1—Y1—Z1,
  • a detection range identification unit 4 that, using the planned-travel-path data, the current three-dimensional coordinate position of the moving body 1 detected by the position measurement sensor 7, and the vehicle body rotation angle detected by the rotation angle detection sensor 8, computes the three-dimensional coordinate position data of the traveling path 31 in the vehicle body coordinate system X1—Y1—Z1, matches it against the three-dimensional image 40 in the body coordinate system X1—Y1—Z1, and cuts out from the current three-dimensional image 40 the range in which obstacles are to be detected, and
  • an obstacle detection unit 5 that detects whether an obstacle exists in the cut-out detection range of the three-dimensional image 40.
  • the vehicle body automatic control unit 10 of the drive control unit 9 drives and controls the moving body 1, using the outputs of the position measurement sensor 7 and the rotation angle detection sensor 8 as feedback signals, so that it follows the target points S0, S1, S2, … of the planned-travel-path data stored in the planned-travel-path data storage unit 11.
  • the position information of the obstacle detected by the obstacle detection unit 5 is transmitted, for example by radio, to a monitoring station that monitors the unmanned dump truck serving as the mobile unit 1, where it is displayed on a CRT display and used for managing the unmanned dump trucks.
  • as the planned-travel-path data stored in the planned-travel-path data storage unit 11, data S0 (SX0, SY0, SZ0), S1 (SX1, SY1, SZ1), S2 (SX2, SY2, SZ2), … indicating the three-dimensional positions of the points S0, S1, S2, … are acquired in advance.
  • the position measurement sensor 7 is, for example, a GPS (Global Positioning System) sensor, and
  • the position measurement signal is input wirelessly via the antenna 6 in FIG. 1.
  • the moving body 1 is first caused to travel along the planned traveling path 31 (teaching travel), and its position is measured at regular time intervals (for example, every second) by the position measurement sensor 7, a GPS sensor. From the measured position data, the planned-travel-route data S0, S1, S2, … can easily be obtained. That is, the data output by the position measurement sensor 7 indicate the mounting position of the sensor 7; since the mounting position of the sensor 7 relative to the vehicle center is known, the coordinate position of the vehicle center can be obtained from the sensor output. Further, on the assumption that the center of the vehicle body moves along the center of the planned traveling path 31, the planned-travel-route data (traveling-path center positions) S0, S1, S2, … can be acquired.
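The teaching-travel step above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the function name, the handling of the sensor mounting offset, and the use of per-sample rotation matrices are assumptions.

```python
import numpy as np

def vehicle_center_positions(sensor_positions, mount_offset, rotations):
    """Convert GPS sensor readings into vehicle-center route points S0, S1, S2, ...

    sensor_positions : (N, 3) sensor positions in the global frame X0-Y0-Z0
    mount_offset     : (3,) sensor position relative to the vehicle center,
                       expressed in the body frame (known from installation)
    rotations        : list of (3, 3) body-to-global rotation matrices, one
                       per sample time
    """
    centers = []
    for p, r in zip(sensor_positions, rotations):
        # Sensor = center + R @ offset, hence center = sensor - R @ offset.
        centers.append(np.asarray(p) - np.asarray(r) @ np.asarray(mount_offset))
    return np.array(centers)
```

Sampling this at a fixed interval during the teaching run yields the stored route data S0, S1, S2, ….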
  • instead of being obtained by position measurement during the teaching travel described above, the planned-travel-route data may be measured separately, by surveying or the like, input via predetermined input means, and stored in the storage unit 11.
  • the rotation angle detection sensor 8 includes, for example, a gyroscope that detects the yaw angle of the vehicle body of the moving body 1 and two inclinometers that detect the pitching angle and the rolling angle of the vehicle body. Based on their outputs, it outputs the vehicle body rotation angles (RX0, RY0, RZ0), which represent, as shown in Fig. 3, the rotation of the vehicle body coordinate system X1—Y1—Z1 with respect to the global coordinate system X0—Y0—Z0.
  • the origin position (HX0, HY0, HZ0) of the vehicle body coordinate system X1—Y1—Z1 as viewed in the global coordinate system X0—Y0—Z0 is acquired as the output of the position measurement sensor 7.
  • the three-dimensional distance image generation unit 3 generates a distance image 30 that includes, as shown in FIG. 7, for example, the planned traveling road surface 31, a branch road 32 diverging from the planned traveling path 31, an obstacle 33 existing on the planned traveling road surface 31, and obstacles 34, 35, 36, 37, etc. existing off the planned traveling road surface 31.
  • each pixel 50 of the distance image 30 is associated with three-dimensional data (i, j, d) indicating its two-dimensional coordinate position (i, j) in the i–j coordinate system and the distance d from the reference position (reference plane) of the mobile unit 1; the pixel at each position (i, j) of the distance image 30 has a brightness corresponding to the distance d.
  • the distance image 30 output from the three-dimensional distance image generation unit 3 is subjected to a coordinate transformation, generating a three-dimensional image 40 in the body coordinate system X1—Y1—Z1 as shown in FIG. 8.
  • since each pixel 50 of the distance image 30 is associated with the three-dimensional information (i, j, d) as described above, each pixel 50 represented by the distance image data (i, j, d)
  • can be converted into a pixel 60 associated with three-dimensional coordinate position data (x, y, z) in the body coordinate system X1—Y1—Z1, which moves together with the moving body 1 and has its origin at a predetermined position on the moving body 1, as shown in FIG. 8.
  • the three-dimensional image 40 is thus obtained as a distribution diagram of the three-dimensional coordinate positions of the pixels 60 (step 102).
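The conversion of each distance-image pixel (i, j, d) into body-frame coordinates (x, y, z) might look as below. The patent does not specify the camera model, so an idealized pinhole camera aligned with the body axes, with assumed focal lengths fx, fy and principal point (cx, cy), is used here purely for illustration.

```python
import numpy as np

def range_image_to_points(d, fx, fy, cx, cy):
    """Convert a range image d[i, j] into 3-D points (x, y, z) in the body
    frame X1-Y1-Z1 under an assumed pinhole model."""
    h, w = d.shape
    j, i = np.meshgrid(np.arange(w), np.arange(h))
    z = d                      # depth along the travel direction Z1
    x = (j - cx) * z / fx      # lateral (vehicle-width) direction X1
    y = -(i - cy) * z / fy     # vertical direction Y1 (image rows grow downward)
    return np.stack([x, y, z], axis=-1)
```

The result is exactly the pixel-60 distribution of FIG. 8: an (h, w, 3) array of body-frame positions.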
  • the planned-travel-path data S0 (SX0, SY0, SZ0), S1 (SX1, SY1, SZ1), S2 (SX2, SY2, SZ2), … stored in the storage unit 11 are converted into the planned-route data S0 (CX0, CY0, CZ0), S1 (CX1, CY1, CZ1), S2 (CX2, CY2, CZ2), … in the body coordinate system X1—Y1—Z1.
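Converting the stored global-frame route points S_k into body-frame points C_k is the inverse of the body-to-global transform of equation (1). A sketch, with the function name assumed:

```python
import numpy as np

def global_route_to_body(route_global, mr0, origin):
    """Convert planned-route points S_k from the global frame X0-Y0-Z0 into
    the body frame X1-Y1-Z1: C = MR0^T @ (S - H), i.e. the inverse of
    P0 = MR0 @ P1 + H."""
    r = np.asarray(route_global, dtype=float)
    return (np.asarray(mr0).T @ (r - np.asarray(origin)).T).T
```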
  • for this conversion, the vehicle body rotation angles (RX0, RY0, RZ0) and the origin position (HX0, HY0, HZ0) of the vehicle body coordinate system X1—Y1—Z1 viewed from the global coordinate system are used.
  • the coordinate position (XP1, YP1, ZP1) of a point P in the body coordinate system can be converted into its coordinate position (XP0, YP0, ZP0) in the global coordinate system according to the following equation (1): (XP0, YP0, ZP0)ᵀ = MR0 · (XP1, YP1, ZP1)ᵀ + (HX0, HY0, HZ0)ᵀ.
  • MR0 is the rotation matrix of the vehicle body coordinate system, expressed by the following equation (2) using the vehicle body rotation angles (RX0, RY0, RZ0).
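Equations (1) and (2) amount to a standard rigid-body transform. A sketch follows; note that the patent does not state the rotation order used to build MR0, so the common Rz·Ry·Rx convention is assumed here.

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Body-to-global rotation MR0 from the angles (RX0, RY0, RZ0).
    Rotation order Rz @ Ry @ Rx is an assumption."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def body_to_global(p1, mr0, origin):
    # Equation (1): P0 = MR0 @ P1 + (HX0, HY0, HZ0)
    return mr0 @ np.asarray(p1) + np.asarray(origin)
```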
  • FIG. 5 is a plot of the points S0, S1, S2, … on the traveling path 31 in the Z1 (moving-body traveling direction)—Y1 (vertical direction) coordinate system.
  • the gradient θ and the road surface height CY are not uniform over the parts of the path in the traveling direction.
  • for example, the gradient θ3 and the road surface heights CY3 and CY4 of the section S3–S4 differ from those of the other sections. It is therefore unreasonable to treat the parts of the traveling road surface that differ in gradient and height as having a single gradient and height, regard the entire traveling road surface 31 (the whole of the traveling road present in the image 40) as one plane, and detect obstacles against the height of that plane; erroneous detection may occur.
  • instead, as shown in Fig. 5, the traveling road surface 31 ahead of the moving object 1 is divided in the traveling direction (Z1-axis direction) at the points S0, S1, S2, …, an image J0, J1, J2, … is cut out for each section, and a plane L0, L1, L2, … is obtained for each of these section images J0, J1, J2, ….
  • FIG. 9 shows the images J1 and J3, cut out from the entire image 40 of FIG. 8, in the three-dimensional coordinate system X1—Y1—Z1.
  • the planes obtained as corresponding to the traveling path 31 are shown as planes L1 and L3.
  • the three-dimensional distribution image 40 of the pixels 60 shown in FIG. 8 is divided in the Z1-axis direction into the sections S0–S1, S1–S2, …, S3–S4, … (see FIG. 9), and a plane L0, L1, L2, … is detected for each of the section images J0, J1, J2, ….
  • for the section S3–S4, for example, the pixel group located lowest in the vertical direction (Y1-axis direction) is selected from all the pixels 60 of the image J3 existing in this section, and by approximating these pixels with a plane, the plane L3 can be detected (step 104).
  • for each of the planes L0, L1, L2, …, it is detected whether an object having a height equal to or greater than a predetermined threshold relative to that plane exists. For example, in the case of the plane L3, an object 33 of the predetermined threshold height or more exists relative to the plane L3, so it is detected as the obstacle 33 (step 105).
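Steps 104 and 105 — fitting a plane to the lowest pixel group of a section image J_k and flagging points above a height threshold — can be sketched as below. The least-squares plane model y = a·x + b·z + c, the parameter names, and the fixed count of lowest points are assumptions; the patent only states that the lowest pixel group is approximated by a plane.

```python
import numpy as np

def fit_road_plane(points, n_lowest=50):
    """Fit y = a*x + b*z + c by least squares to the n_lowest points with
    the smallest Y1 (vertical) coordinate of one section image J_k."""
    pts = points[np.argsort(points[:, 1])[:n_lowest]]
    A = np.column_stack([pts[:, 0], pts[:, 2], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)
    return coeffs  # (a, b, c)

def detect_obstacles(points, plane, height_threshold):
    """Return the points lying more than height_threshold above the plane."""
    a, b, c = plane
    road_y = a * points[:, 0] + b * points[:, 2] + c
    return points[points[:, 1] - road_y > height_threshold]
```

Running this per section keeps each fitted plane local, so a sloping road with varying gradient θ and height CY is handled section by section.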
  • note that the planned traveling path 31 and the branch road 32 are detected as the same plane (unless there is a step between them), so an object 35 on the fork 32, which is not on the planned traveling path 31, may be erroneously detected as an obstacle (see Fig. 9).
  • Figure 6 plots, in the X1 (vehicle-width direction)—Z1 (traveling direction) coordinate system, the right boundary points S0(+), S1(+), S2(+), … and the left boundary points S0(−), S1(−), S2(−), … of the traveling direction.
  • the right boundary points S0(+), S1(+), S2(+), … in the traveling direction are positions offset from the center positions S0 (CX0, CY0, CZ0), S1 (CX1, CY1, CZ1), S2 (CX2, CY2, CZ2), … of the planned traveling path 31 in the plus direction (right side) of the X1 axis by half the vehicle width of the mobile unit 1, or by half the road width, +Wc,
  • and the left boundary points S0(−), S1(−), S2(−), … in the traveling direction are positions offset from the center positions S0 (CX0, CY0, CZ0), S1 (CX1, CY1, CZ1), … of the planned traveling path 31 in the minus direction (left side) of the X1 axis by the corresponding amount −Wc.
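The boundary points S_k(+) and S_k(−) can be computed by offsetting each path center point by ±Wc. The patent offsets along the X1 axis; the sketch below offsets perpendicular to the local travel direction in the X1–Z1 plane, which is an assumed generalization for curved paths (for a path straight along Z1 the two coincide).

```python
import numpy as np

def road_boundaries(centers, wc):
    """Offset path center points S0, S1, ... by +/-wc perpendicular to the
    local travel direction in the X1-Z1 plane."""
    c = np.asarray(centers, dtype=float)   # (N, 2), columns: (x1, z1)
    right, left = [], []
    for k in range(len(c)):
        # forward direction from this point to the next (repeat last segment)
        a, b = (k, k + 1) if k + 1 < len(c) else (k - 1, k)
        d = c[b] - c[a]
        d = d / np.linalg.norm(d)
        n = np.array([d[1], -d[0]])        # right-hand normal in X1-Z1
        right.append(c[k] + wc * n)
        left.append(c[k] - wc * n)
    return np.array(right), np.array(left)
```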
  • the process of step 106 can be performed in combination with the process (step 104) of cutting out the images J0, J1, J2, …, or each can be performed alone.
  • when the processing of step 106 is performed alone, the result is as follows.
  • the portion K corresponding to the traveling road surface 31 can be obtained by offsetting the traveling-road center positions S0, S1, S2, … by half the vehicle width or half the road width, ±Wc.
  • the present invention is not limited to this.
  • there is no problem while the mobile unit 1 travels along the center of the travel path; however, the mobile unit 1 may run off the center, and in such cases the vehicle may leave the area K obtained by the ±Wc offset. Therefore, the deviation between the target points S0, S1, S2, … and the current traveling position of the moving body 1 is detected, and the offset amount in the road-width direction is calculated according to this deviation.
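One simple way to apply that deviation-dependent correction is to shift the ±Wc window by the lateral error, so the cut-out corridor stays centered on the target path rather than on the vehicle. This is an illustrative sketch: the patent only states that the offset is computed "according to the deviation", so the exact rule and names here are assumptions.

```python
def corridor_offsets(target_x, vehicle_x, wc):
    """Lateral cut-out window (min_x, max_x) in the body frame X1 when the
    vehicle is off the path center: shift the +/-wc corridor by the
    deviation between the target point and the current position."""
    deviation = target_x - vehicle_x   # where the path center lies in X1
    return deviation - wc, deviation + wc
```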
  • the area can be set arbitrarily.
  • the traveling road surface 31 is identified from the image, and image processing for obstacle detection is performed only on the portion K corresponding to the traveling road surface; the image-processing time is therefore short, and the obstacle 33 is detected in real time.
  • even if the traveling path 31 curves or has a branch road 32, an object 35 existing off the planned traveling path 31 is not erroneously detected as an obstacle, and only the obstacle 33 on the planned traveling path 31 is reliably detected.
  • when the processing of step 104 is performed alone (for example, when there is a step in the width direction of the traveling road surface 31), the processing is as follows.
  • the three-dimensional coordinate position data S0, S1, S2, S3, … of the traveling path 31 in the body coordinate system X1—Y1—Z1 are matched against the three-dimensional image 40 (FIG. 8) in the body coordinate system X1—Y1—Z1, and the images J0, J1, J2, J3, … of the sections of the traveling road surface 31 are cut out of the three-dimensional image 40 (see the image 40′ in FIG. 9). Planes L0, L1, L2, L3, … are then obtained for the extracted section images J0, J1, J2, J3, …, and the presence of an obstacle is detected on each of the planes L0, L1, L2, L3, ….
  • in this way, a specific part J3 (L3) of the traveling road surface 31 is cut out from the image and treated as the plane L3 to detect the obstacle 33 on it. Even if the traveling road surface 31 is a sloping road or the like and the gradient θ and road surface height CY of each part differ, obstacles 33 of a predetermined height or more relative to the planes L0, L1, L2, L3, … can be reliably detected without erroneous detection.
  • in the embodiment described above, the three-dimensional image 40 is obtained from the distance image 30,
  • and the portion K of the traveling road surface 31, or the planes L0, L1, L2, L3, … of its parts, are cut out from that image.
  • however, the portion K of the road surface 31 (indicated by diagonal lines) and the planes L0, L1, L2, L3, … may instead be cut out directly from the three-dimensional distance image 30 shown in FIG. 7.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Measurement Of Optical Distance (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A device which can surely detect only obstacles on the surface of the scheduled traveling road of a traveling object. The device computes the three-dimensional coordinate position data of the traveling road on the coordinate system of the traveling object, based on the three-dimensional coordinate position data of the traveling road and the present three-dimensional coordinate position data of the traveling object detected by means of a position detecting means, collates the three-dimensional coordinate position data on the coordinate system of the traveling object with the currently generated three-dimensional picture on the coordinate system of the traveling object, and segments the part corresponding to the surface of the traveling road from the currently generated three-dimensional picture. Then, the device detects the presence/absence of an obstacle on the segmented corresponding part of the surface of the traveling road.

Description

明細書 移動体の走行路面上の障害物検出装置 技術分野  Description Obstacle detection device on the traveling road surface of a moving object
本発明は、 無人ダンプトラック等の移動体が走行する走行路面上に存在する障 害物を検出する装置に関する。 背景技術  The present invention relates to a device for detecting an obstacle present on a traveling road surface on which a moving object such as an unmanned dump truck travels. Background art
無人ダンプトラック等の移動体において、 その走行路上の障害物を検出する装 置として、 移動体前方を撮像し、 その撮像画像を処理することにより障害物を検 出する装置がある。 この種の画像処理を用いた障害物検出装置は、 超音波センサ、 レーザレーザ、 ミリ波センサを用いて移動体進行方向前方の障害物を検出する装 置と比較して、 得られる情報量が多く、 視野角が広く広範囲で障害物を検出でき るという利点がある。  As a device for detecting an obstacle on a traveling path of a mobile body such as an unmanned dump truck, there is a device that detects an obstacle by capturing an image of the front of the mobile body and processing the captured image. Obstacle detection devices that use this type of image processing can obtain a greater amount of information than devices that use ultrasonic sensors, laser lasers, or millimeter-wave sensors to detect obstacles in the forward direction of the moving object. It has many advantages, such as a wide viewing angle and a wide range of obstacle detection.
従来の画像処理を用いた移動体の障害物検出装置では、 画像の全画面を処理す るようにしている。 また、 特開平 3— 2 6 0 8 1 4号公報にみられるように、 一 般道路を走行する場合には道路に沿ってペイントされた白線を検出し、 画像処理 範囲の特定を行うようにしている。  In a conventional mobile object obstacle detection device using image processing, the entire screen of an image is processed. Also, as seen in Japanese Patent Application Laid-Open No. 3-260814, when driving on a general road, a white line painted along the road is detected, and an image processing range is specified. ing.
また、 本発明者らは、 移動体進行方向の距離画像を、 各画素の 3次元分布に変 換し、 この画素の 3次元分布状態から、 走行路に対応する画素群を特定し、 この 走行路を平面とみて、 この平面の高さを基準としてこれより所定の高さ以上にあ る物体を障害物として検出するという発明を提案しており、 この発明を実施する 試みがなされている。  In addition, the present inventors convert the distance image in the traveling direction of the moving object into a three-dimensional distribution of each pixel, specify a pixel group corresponding to a traveling path from the three-dimensional distribution state of the pixels, and The present invention proposes an invention in which a road is regarded as a plane, and an object having a height higher than a predetermined height is detected as an obstacle based on the height of the plane, and attempts have been made to implement the present invention.
し力、し、 画像処理を用いた移動体の障害物検出装置は、 得られる情報量が多い こととひきかえに、 演算処理に多大な時間を要する。  A moving object obstacle detection device using image processing requires a great deal of time for arithmetic processing in exchange for the large amount of information that can be obtained.
よって、 上記従来技術で述べたように、 取得された撮像画像の全画面について 画像処理をしていたのでは、 演算処理時間は膨大なものとなる。 このため、 移動 W 体と障害物との相对位置が逐次変化する状況において障害物検出をリアルタイム に行いたいとの要請に応えられない虞もある。 Therefore, as described in the related art, if image processing is performed on the entire screen of the acquired captured image, the calculation processing time becomes enormous. Because of this, move In a situation where the relative positions of the W body and the obstacle are sequentially changing, there is a possibility that the request to perform the obstacle detection in real time may not be able to be met.
また、 上記道路に沿ってペイントされた白線を検出して画像処理の範囲を特定 する方法は、 確かに、 一般道路に適用することができるかもしれないが、 無人ダ ンプトラックが走行する不整地等においては、 適用することはできない。  In addition, the method of detecting the white line painted along the road and specifying the range of image processing may certainly be applied to ordinary roads, but it is possible to apply it to rough terrain where unmanned dump trucks run. In such cases, it cannot be applied.
また、 走行路がカーブしていたり、 分岐していたりしていると、 本来の予定走 行路上に存在しない物体を、 障害物であると誤検出してしまう虞もある。  In addition, if the traveling path is curved or branched, an object that does not exist on the original planned traveling path may be erroneously detected as an obstacle.
Furthermore, although the obstacle detection method of the invention proposed by the present inventors can indeed detect obstacles above a predetermined height with reference to the traveling road surface, when the road is a slope or the like, the gradient and the road-surface height are not uniform along the road. It is therefore unreasonable to treat the various parts of such a road, whose gradients and surface heights differ, as a single plane and to detect obstacles with reference to the height of that plane; erroneous detection may result.

Disclosure of the invention
The present invention has been made in view of these circumstances. Its first object is to make it possible to identify the traveling road surface from an image of any traveling road, including rough terrain, so that obstacle detection can be performed in real time, and to reliably detect, without erroneous detection, only obstacles on the planned traveling path even when the traveling path curves or branches.
A second object of the present invention is to ensure that, even when the traveling road is a slope or the like and the gradient and surface height differ from one part of the road to another, obstacles above a predetermined height relative to the road surface can be reliably detected without erroneous detection. To achieve the first object, the main aspect of the first invention provides an apparatus for detecting obstacles on the traveling road surface of a moving object, which detects an obstacle on the basis of a three-dimensional image of the traveling road surface and of obstacles on it, the apparatus comprising: position detecting means for detecting the current position of the moving object as a three-dimensional coordinate position; three-dimensional image generating means for generating a current three-dimensional image of the traveling road surface and the obstacles in a three-dimensional coordinate system viewed from the moving object;
image processing means for computing three-dimensional coordinate position data of the traveling path in the moving-object coordinate system, on the basis of three-dimensional coordinate position data indicating the three-dimensional position of each point along the traveling path and the current three-dimensional coordinate position of the moving object detected by the position detecting means, and for cutting out the portion corresponding to the traveling road surface from the current three-dimensional image by matching the traveling path's three-dimensional coordinate position data in the moving-object coordinate system against the three-dimensional image currently generated in that coordinate system by the three-dimensional image generating means; and detecting means for detecting the presence of an obstacle in the portion of the three-dimensional image corresponding to the traveling road surface cut out by the image processing means.
すなわち、 この第 1発明の構成によれば、 走行路の 3次元座標位置データと、 位置検出手段で検出された移動体の現在の 3次元座標位置とに基づき、 図 9に示 すように、 移動体座標系 X I— Y 1— Z 1における走行路 3 1の 3次元座標位置デー タ S 0、 S l、 S 2…が演算される。 そして、 移動体座標系 X I— Y l— Z 1における 走行路の 3次元座標位置データ S 0、 S l、 S 2…が、 現在生成されている移動体座 標系 X I— Y l— Z 1における 3次元画像 4 0 (図 8 ) に突き合わされ、 現在の 3次 元画像 4 0の中から、 走行路面 3 1に対応する部分 Kが切り出される (図 9の画 像 4 0 ' 参照) 。 そして、 この切り出された走行路面 3 1に対応する部分 Kにつ いて、 障害物 3 3が存在していることが検出される。  That is, according to the configuration of the first invention, based on the three-dimensional coordinate position data of the traveling road and the current three-dimensional coordinate position of the moving object detected by the position detecting means, as shown in FIG. The three-dimensional coordinate position data S 0, S 1, S 2,... Of the traveling path 31 in the moving body coordinate system XI—Y 1—Z 1 are calculated. Then, the three-dimensional coordinate position data S 0, S l, S 2... Of the traveling path in the moving body coordinate system XI—Y l—Z 1 is generated by the currently generated moving body coordinate system XI—Y l—Z 1 Then, a portion K corresponding to the traveling road surface 31 is cut out from the current three-dimensional image 40 (see the image 40 ′ in FIG. 9). Then, it is detected that the obstacle 33 exists for the portion K corresponding to the cut road surface 31 that has been cut out.
このように、 不整地等のいかなる走行路であろうとも、 画像からその走行路面 が特定され、 その走行路面に対応する部分のみについて障害物を検出するための 画像処理がなされるので、 画像処理時間が短時間で済み障害物の検出がリアルタ ィムになされるとともに、 走行路がカーブしていたり、 分岐していたりしても予 定走行路上の障害物のみを確実に誤検出なく検出できるようになる。  In this way, no matter what road the road is on, such as on uneven terrain, the road surface is identified from the image, and image processing is performed to detect obstacles only at the portion corresponding to the road surface. Obstacles are detected in real time in a short time, and even if the road is curved or branched, only obstacles on the planned road can be reliably detected without erroneous detection. Become like
また、 上記第 2の目的を達成するために本発明の第 2発明の主たる発明では、 移動体の走行路面および当該走行路面上の障害物の 3次元画像に基づいて、 当該 障害物を検出するようにした移動体の走行路面上の障害物検出装置において、 前記移動体の現在位置を 3次元座標位置として検出する位置検出手段と、 前記移動体からみた 3次元座標系における前記走行路面および前記障害物の現 在の 3次元画像を生成する 3次元画像生成手段と、 Further, in order to achieve the second object, in the main invention of the second invention of the present invention, the obstacle is detected based on a traveling road surface of a moving body and a three-dimensional image of the obstacle on the traveling road surface. In the obstacle detecting device on the traveling road surface of the moving object as described above, position detecting means for detecting a current position of the moving object as a three-dimensional coordinate position, Three-dimensional image generating means for generating a current three-dimensional image of the traveling road surface and the obstacle in a three-dimensional coordinate system viewed from the moving body;
image processing means for computing three-dimensional coordinate position data of the traveling path in the moving-object coordinate system, on the basis of three-dimensional coordinate position data indicating the three-dimensional position of each point along the traveling path and the current three-dimensional coordinate position of the moving object detected by the position detecting means, and for cutting out a specific portion of the traveling road surface in the current three-dimensional image by matching the traveling path's three-dimensional coordinate position data in the moving-object coordinate system against the three-dimensional image currently generated in that coordinate system by the three-dimensional image generating means; and
detecting means for obtaining the cut-out specific portion of the traveling road surface as a plane and detecting the presence of an obstacle on that plane.
すなわち、 この第 2発明の構成によれば、 走行路の 3次元座標位置データと、 位置検出手段で検出された移動体の現在の 3次元座標位置とに基づき、 図 9に示 すように、 移動体座標系 X I— Y 1—Z 1における走行路 3 1の 3次元座標位置デー タ S 0、 S l、 S 2'"が演算される。 そして、 移動体座標系 X I— Y l— Z 1における 走行路の 3次元座標位置データ S 0、 S l、 S 2…が、 現在生成されている移動体座 標系 X l— Y l— Z 1における 3次元画像 4 0 (図 8 ) に突き合わされ、 現在の 3次 元画像 4 0の中から、 走行路面 3 1の特定部分 L 3が切り出される (図 9の画像 4 0 ' 参照) 。  That is, according to the configuration of the second invention, based on the three-dimensional coordinate position data of the traveling road and the current three-dimensional coordinate position of the moving object detected by the position detecting means, as shown in FIG. The three-dimensional coordinate position data S 0, S l, and S 2 ′ ”of the traveling path 31 in the moving body coordinate system XI—Y 1—Z 1 are calculated. Then, the moving body coordinate system XI—Y l—Z The three-dimensional coordinate position data S 0, S l, S 2… of the travel path in 1 is converted to the currently generated three-dimensional image 40 (Fig. 8) in the mobile coordinate system X l— Y l— Z 1. Then, a specific portion L3 of the road surface 31 is cut out from the current three-dimensional image 40 (see the image 40 'in FIG. 9).
そして、 この切り出された走行路面の特定部分 L 3が平面として求められ、 この 平面の上に障害物 3 3が存在していることが検出される。  Then, the specific portion L3 of the cut traveling road surface is obtained as a plane, and it is detected that the obstacle 33 exists on this plane.
In this way, a specific part of the traveling road surface is cut out from the image, that part is obtained as a plane, and obstacles on it are detected. Therefore, even when the traveling road is a slope or the like and the gradient and surface height differ from part to part, obstacles above a predetermined height relative to each part of the road surface can be reliably detected without erroneous detection.

Brief description of the drawings

FIG. 1 is a block diagram showing a configuration example of an embodiment of an apparatus according to the present invention for detecting obstacles on the traveling road surface of a moving object.
図 2は図 1に示す検出範囲特定部および障害物検出部で実行される処理の手順 を示すフローチヤ一トである。  FIG. 2 is a flowchart showing a procedure of processing executed by the detection range specifying unit and the obstacle detection unit shown in FIG.
図 3は全体座標系と車体座標系との関係を示す図である。  FIG. 3 is a diagram showing the relationship between the overall coordinate system and the vehicle body coordinate system.
図 4は実施の形態の移動体が走行する走行路を概略的に示す図である。  FIG. 4 is a diagram schematically showing a traveling path on which the moving body of the embodiment travels.
図 5は実施の形態の走行路の勾配および路面高さを示す図である。  FIG. 5 is a diagram showing a gradient of a traveling road and a road surface height according to the embodiment.
図 6は実施の形態の走行路の路幅を示す図である。  FIG. 6 is a diagram showing the road width of the traveling road according to the embodiment.
図 7は図 1に示す 3次元距離画像生成部で生成される距離画像を示す図である。 図 8は図 7に示す距離画像を変換して得られる画素の 3次元分布画像を示す図 である。  FIG. 7 is a diagram showing a distance image generated by the three-dimensional distance image generation unit shown in FIG. FIG. 8 is a diagram showing a three-dimensional distribution image of pixels obtained by converting the distance image shown in FIG.
図 9は走行路面のみ、 あるいは走行路面を各区間に区切った部分のみを画像と して示す図である。  FIG. 9 is a diagram showing, as an image, only the traveling road surface or only a portion obtained by dividing the traveling road surface into sections.
発明を実施するための最良の形態 BEST MODE FOR CARRYING OUT THE INVENTION
以下、 図面を参照して本発明の実施の形態について説明する。  Hereinafter, embodiments of the present invention will be described with reference to the drawings.
This embodiment assumes that, as shown in FIG. 4, a moving object 1 such as an unmanned dump truck travels along a planned traveling path 31 and that obstacles such as rocks on the path are detected by an obstacle detection apparatus mounted on the moving object.
図 4において、 X0—Y0— Z 0は全体座標系を示しており、 X I— Y l— Z 1 は、 移動体 1とともに移動する車体座標系を示している。 X Iは移動体 1の車幅方 向に対応する座標軸であり、 Z 1は移動体 1の進行方向 (予定走行路 3 1 ) に対応 する座標軸であり、 Y 1は鉛直方向の座標軸である。  In FIG. 4, X0—Y0—Z0 indicates the entire coordinate system, and XI—Y1—Z1 indicates the vehicle body coordinate system that moves together with the moving body 1. XI is a coordinate axis corresponding to the vehicle width direction of the moving body 1, Z1 is a coordinate axis corresponding to the traveling direction of the moving body 1 (scheduled traveling path 31), and Y1 is a vertical coordinate axis.
図 1は、 本発明の実施形態である障害物検出装置の構成を示している。  FIG. 1 shows a configuration of an obstacle detection device according to an embodiment of the present invention.
As shown in FIG. 1, the obstacle detection apparatus comprises: a position measurement sensor 7 that detects the current position of the moving object 1 as a three-dimensional coordinate position in the global coordinate system X0—Y0—Z0; a planned-path data storage unit 11 in which three-dimensional coordinate position data indicating the positions, in the global coordinate system X0—Y0—Z0, of the points S0, S1, S2, … along the planned traveling path 31 are acquired in advance and stored; a three-dimensional distance image generation unit 3 that, on the basis of images captured by a camera 2 mounted on the moving object 1, measures the distance from a reference position (reference plane) on the moving object 1 to obstacles on its traveling road surface 31 and generates a three-dimensional distance image of the road surface 31 and the obstacles; a detection range specifying unit 4 that converts the distance image into a three-dimensional image 40 in the vehicle body coordinate system X1—Y1—Z1, computes three-dimensional coordinate position data of the traveling path 31 in the vehicle body coordinate system X1—Y1—Z1 from the planned-path data, the current three-dimensional coordinate position of the moving object 1 detected by the position measurement sensor 7, and the vehicle body rotation angles currently detected by a rotation angle detection sensor 8, and cuts out the obstacle detection range from the current three-dimensional image 40 by matching the traveling path's position data in the vehicle body coordinate system X1—Y1—Z1 against the converted three-dimensional image 40; and an obstacle detection unit 5 that detects the presence of obstacles within the cut-out detection range of the three-dimensional image 40.
駆動制御部 9の車体自動制御部 10では、 上記位置計測センサ 7の出力と、 回 転角度検出センサ 8の出力をフィードバック信号として、 予定走行路データ記憶 部 1 1に記憶された予定走行路 31上の各目標点 S0、 Sl、 S2…に沿って追従す るように、 移動体 1が駆動制御される。  The vehicle body automatic control unit 10 of the drive control unit 9 uses the output of the position measurement sensor 7 and the output of the rotation angle detection sensor 8 as feedback signals as the planned road data stored in the planned road data storage unit 11. The moving body 1 is driven and controlled so as to follow the above target points S0, Sl, S2,.
The position information of obstacles detected by the obstacle detection unit 5 is, for example, transmitted by radio to a monitoring station that supervises the unmanned dump truck serving as the moving object 1, where it is shown on a CRT display or the like and used for managing the unmanned dump truck.
予定走行路データ記憶部 1 1に記憶される予定走行路データは、 各点 S0、 Sl、 S2…の 3次元位置を示すデータ SO (SX0、 SY0、 S ZO) 、 S1(SX1、 S Y 1、 S Z1) 、 S2(SX2、 SY2、 S Z2) …として予め取得しておかれる。  The planned road data stored in the planned road data storage unit 11 includes data SO (SX0, SY0, SZO) and S1 (SX1, SY1, S0) indicating the three-dimensional positions of the points S0, Sl, S2,. S Z1), S2 (SX2, SY2, S Z2)... Are acquired in advance.
ここで、 位置計測センサ 7としては GP S (グローバル ·ポジショニング 'セ ンサ) を用いることができる。 この場合図 1のアンテナ 6を介して位置計測信号 が無線にて入力されることになる。  Here, GPS (Global Positioning 'Sensor) can be used as the position measurement sensor 7. In this case, the position measurement signal is wirelessly input via the antenna 6 in FIG.
よって、 ティ一チング時に、 移動体 1を予定走行路 31に沿って走行させ、 G PSたる位置計測センサ 7で、 一定時間間隔毎に (たとえば 1秒毎に) 位置計測 を行い、 位置データを計測すれば、 予定走行路データ S 0、 S l、 S 2…を容易に取 得することができる。 つまり、 位置計測センサ 7から出力されるデータは、 当該 センサ 7の取付け位置を表している。 センサ 7の車体中心に対する相対的な取付 け位置は既知であるので、 位置計測センサ 7の出力データから車体中心座標位置 を求めることができる。 さらに、 車体中心が予定走行路 3 1の中心に沿って移動 しているとの仮定のもとに、 予定走行路デ一タ (走行路中心位置) S 0、 S l、 S 2…を求めることができる。 Therefore, at the time of teaching, the moving body 1 is caused to travel along the planned traveling path 31, and the position is measured by the position measuring sensor 7, which is a GPS, at regular time intervals (for example, every 1 second). Then, if the position data is measured, the planned traveling route data S 0, S 1, S 2,... Can be easily obtained. That is, the data output from the position measurement sensor 7 indicates the mounting position of the sensor 7. Since the mounting position of the sensor 7 relative to the vehicle center is known, the vehicle center coordinate position can be obtained from the output data of the position measurement sensor 7. Further, based on the assumption that the center of the vehicle body is moving along the center of the planned traveling path 31, the planned traveling path data (the traveling path center position) S0, S1, S2,. be able to.
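The correction from the sensor's mounting position to the vehicle-body center can be sketched in two dimensions as follows; the antenna offset and the heading convention below are illustrative assumptions, since the text only states that the relative mounting position is known.

```python
import math

def body_center_from_gps(gps_xy, heading, antenna_offset=(0.5, 1.2)):
    """Recover the vehicle-center position from a GPS fix, given the
    antenna's mounting offset (right, forward) in the body frame and the
    vehicle heading in radians (0 = facing +Z0). Offset values are
    illustrative placeholders, not values from the patent."""
    gx, gy = gps_xy
    ox, oz = antenna_offset
    # Rotate the body-frame offset into the world frame and subtract it.
    wx = ox * math.cos(heading) + oz * math.sin(heading)
    wy = -ox * math.sin(heading) + oz * math.cos(heading)
    return (gx - wx, gy - wy)

# With heading 0 the offset is subtracted directly.
print(body_center_from_gps((100.5, 201.2), heading=0.0))  # ≈ (100.0, 200.0)
```

Sampling such corrected centers at fixed time intervals during a teaching run yields the waypoint list S0, S1, S2, … directly.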
なお、 予定走行路データとしては、 上述したティ一チング走行による位置計測 によって求めるのではなくて、 別途、 測量等によって計測した結果を予定走行路 データとして、 所定の入力手段を介して入力し、 記憶部 1 1に記憶させるように しておいてもよレ、。  In addition, as the planned travel route data, instead of being obtained by the position measurement by the teaching travel described above, a result measured by surveying or the like is separately input as the planned travel route data via predetermined input means, It may be stored in the storage unit 11.
回転角度検出センサ 8は、 たとえば、 移動体 1の車体のョー方向の角度を検出 するョーレイ トジャイロと、 車体のピッチング角とローリング角を検出する 2つ の傾斜計とから構成されており、 これら検出結果に基づき、 図 3に示すように、 全体座標系 X0— YO— Z 0の座標軸に対する車体座標系 X I— Y 1— Z 1の座標軸の 回転角を表す車体の回転角 (R X0、 R Y0、 R Z 0) が出力される。 全体座標系 X 0— Y0— Z 0でみた車体座標系 X I— Y l— Z 1の原点位置 (H X0、 HY0、 H Z O) は、 位置計測センサ 7の出力として取得される。  The rotation angle detection sensor 8 includes, for example, a gyroscope that detects the angle of the moving body 1 in the yaw direction of the vehicle body and two inclinometers that detect the pitching angle and the rolling angle of the vehicle body. Based on the results, as shown in Fig. 3, the vehicle body rotation angles (R X0, R Y0) representing the rotation angles of the vehicle body coordinate system XI—Y 1—Z 1 with respect to the global coordinate system X0—YO—Z0 , RZ 0) are output. The origin positions (HX0, HY0, HZO) of the vehicle body coordinate system XI—Yl—Z1 viewed in the global coordinate system X0—Y0—Z0 are acquired as outputs of the position measurement sensor 7.
3次元距離画像生成部 3では、 例えば図 7に示すような予定走行路面 3 1と、 予定走行路 3 1からの分岐路 3 2と、 予定走行路面 3 1上に存在する障害物 3 3 と、 予定走行路面 3 1以外に存在する障害物 3 4、 3 5、 3 6、 3 7等からなる 距離画像 3 0が生成される。 距離画像 3 0の各画素 5 0には、 i一 j 2次元座標 系における 2次元座標位置 ( i、 j ) 、 移動体 1の基準位置 (基準面) からの距 離 dを示す 3次元のデータ ( i、 j 、 d ) が対応づけられており、 距離画像 3 0 の各位置 i、 jの画素は、 距離 dに応じた明度を有している。 こうした 3次元の 距離画像を生成するための距離計測の方法としては、 例えば特願平 7— 2 0 0 9 9 9号に示される多眼レンズ (多眼力メラ) を使用した方法を用いることができ る。 ' 以下、 検出範囲特定部 4、 障害物検出部 5で行われる処理について、 図 2のフ 口一チャートを参照して説明する。 The three-dimensional distance image generation unit 3 includes, for example, a planned road surface 31 as shown in FIG. 7, a branch road 32 from the planned road 31, and an obstacle 33 existing on the planned road surface 31. Then, a distance image 30 including obstacles 34, 35, 36, 37, etc. existing other than the planned traveling road surface 31 is generated. Each pixel 50 of the distance image 30 has a three-dimensional coordinate indicating the two-dimensional coordinate position (i, j) in the i-j two-dimensional coordinate system and the distance d from the reference position (reference plane) of the mobile unit 1. The data (i, j, d) are associated with each other, and the pixel at each position i, j of the distance image 30 has a brightness corresponding to the distance d. As a method of distance measurement for generating such a three-dimensional distance image, for example, a method using a multi-lens (multi-eye lens) disclosed in Japanese Patent Application No. 7-209999 is used. it can. ' Hereinafter, the processing performed by the detection range identification unit 4 and the obstacle detection unit 5 will be described with reference to the flowchart in FIG.
As shown in FIG. 2, when the distance image 30 is output from the three-dimensional distance image generation unit 3, the distance image 30 is coordinate-transformed to generate a three-dimensional image 40 in the vehicle body coordinate system X1—Y1—Z1, as shown in FIG. 8.
That is, since each pixel 50 of the distance image 30 is associated with the three-dimensional information (i, j, d) as described above, each pixel 50 represented by this distance image data (i, j, d) can be converted, as shown in FIG. 8, into a pixel 60 associated with three-dimensional coordinate position data (x, y, z) in the vehicle body coordinate system X1—Y1—Z1, which moves together with the moving object 1 and has its origin at a predetermined position on the moving object 1. By performing this conversion, the three-dimensional image 40 can be obtained as a distribution of the three-dimensional coordinate positions of the pixels 60 (step 102).
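As an illustrative sketch of this conversion (step 102), the mapping from a distance-image pixel (i, j, d) to a body-frame point (x, y, z) can be modeled with a pinhole camera. The patent does not give camera parameters; the focal lengths, image center, and mounting height below are assumptions introduced only for the example.

```python
import numpy as np

def pixels_to_body_frame(pixels, fx=500.0, fy=500.0, cx=64.0, cy=64.0,
                         cam_height=2.0):
    """Convert distance-image pixels (i, j, d) into 3D points (x, y, z) in
    the vehicle body frame X1 (width), Y1 (vertical), Z1 (forward).

    Assumes a pinhole camera looking along +Z1, mounted cam_height above
    the body-frame origin; all parameters are illustrative placeholders.
    """
    pts = []
    for i, j, d in pixels:
        z = d                               # depth along the optical axis
        x = (i - cx) * z / fx               # lateral offset from image column
        y = cam_height - (j - cy) * z / fy  # image rows grow downward
        pts.append((x, y, z))
    return np.array(pts)

# A pixel at the image center, 10 m ahead, lands on the Z1 axis at camera height.
print(pixels_to_body_frame([(64, 64, 10.0)]))  # → [[ 0.  2. 10.]]
```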
The planned traveling path data S0(SX0, SY0, SZ0), S1(SX1, SY1, SZ1), S2(SX2, SY2, SZ2), … stored in the storage unit 11 are converted into planned traveling path data S0(CX0, CY0, CZ0), S1(CX1, CY1, CZ1), S2(CX2, CY2, CZ2), … in the vehicle body coordinate system X1—Y1—Z1, as follows.
That is, using the vehicle body rotation angles (RX0, RY0, RZ0) described above and the origin position (HX0, HY0, HZ0) of the vehicle body coordinate system X1—Y1—Z1 as seen from the global coordinate system, the coordinate position (XP1, YP1, ZP1) of a point P in the vehicle body coordinate system can be converted, as shown in FIG. 3, into the coordinate position (XP0, YP0, ZP0) in the global coordinate system by the following equation (1):

    | XP0 |         | XP1 |   | HX0 |
    | YP0 | = MR0 . | YP1 | + | HY0 |        (1)
    | ZP0 |         | ZP1 |   | HZ0 |

Here, MR0 in equation (1) is the rotation matrix of the vehicle body coordinate system, expressed with the vehicle body rotation angles (RX0, RY0, RZ0) by the following equation (2):

    MR0 = |  COS(RX0)  SIN(RX0)  0 |   | COS(RY0)  0  -SIN(RY0) |   | 1       0          0      |
          | -SIN(RX0)  COS(RX0)  0 | . |    0      1      0     | . | 0   COS(RZ0)  SIN(RZ0)   |
          |     0         0      1 |   | SIN(RY0)  0   COS(RY0) |   | 0  -SIN(RZ0)  COS(RZ0)   |        (2)

Therefore, from equation (1), the coordinate position (SX0, SY0, SZ0) of the point S0 on the planned traveling path 31 in the global coordinate system can be converted into the coordinate position (CX0, CY0, CZ0) in the vehicle body coordinate system as follows:

    | CX0 |              | SX0 - HX0 |
    | CY0 | = MR0^-1 .   | SY0 - HY0 |        (3)
    | CZ0 |              | SZ0 - HZ0 |

In the same manner as equation (3), the other planned traveling path data S1(SX1, SY1, SZ1), S2(SX2, SY2, SZ2), … can be converted into the planned traveling path data S1(CX1, CY1, CZ1), S2(CX2, CY2, CZ2), … in the vehicle body coordinate system X1—Y1—Z1 (step 103; see FIG. 9).
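The transformations of equations (1) to (3) can be sketched as follows; the factor matrices reproduce the structure of equation (2), while the waypoint and origin values are purely illustrative.

```python
import math
import numpy as np

def rotation_matrix(rx0, ry0, rz0):
    """MR0 of equation (2): product of the three factor matrices, in the
    same order and with the same signs as in the patent text."""
    m1 = np.array([[ math.cos(rx0), math.sin(rx0), 0.0],
                   [-math.sin(rx0), math.cos(rx0), 0.0],
                   [ 0.0,           0.0,           1.0]])
    m2 = np.array([[ math.cos(ry0), 0.0, -math.sin(ry0)],
                   [ 0.0,           1.0,  0.0],
                   [ math.sin(ry0), 0.0,  math.cos(ry0)]])
    m3 = np.array([[1.0,  0.0,            0.0],
                   [0.0,  math.cos(rz0),  math.sin(rz0)],
                   [0.0, -math.sin(rz0),  math.cos(rz0)]])
    return m1 @ m2 @ m3

def body_to_world(p_body, origin, mr0):
    """Equation (1): P0 = MR0 * P1 + H0."""
    return mr0 @ np.asarray(p_body) + np.asarray(origin)

def world_to_body(p_world, origin, mr0):
    """Equation (3): C = MR0^-1 * (S - H0)."""
    return np.linalg.inv(mr0) @ (np.asarray(p_world) - np.asarray(origin))

# Round trip: a path point mapped into the body frame and back is unchanged.
mr0 = rotation_matrix(0.3, -0.1, 0.05)
s0 = np.array([120.0, 4.5, 260.0])   # illustrative waypoint (SX0, SY0, SZ0)
h0 = np.array([100.0, 5.0, 250.0])   # illustrative origin (HX0, HY0, HZ0)
c0 = world_to_body(s0, h0, mr0)
assert np.allclose(body_to_world(c0, h0, mr0), s0)
```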
図 5は、 Z1 (移動体進行方向) — Y1 (鉛直方向) 座標系において、 上記走行 路 3 1上の各点 S0、 Sl、 S2…をプロットしたものである。  FIG. 5 is a plot of points S0, Sl, S2,... On the traveling path 31 in the Z1 (moving body traveling direction) —Y1 (vertical direction) coordinate system.
このように、 予定走行路面 3 1が坂道である場合は、 勾配 θ、 路面高さ CYは、 進行方向各部分において一義的ではない。 たとえば、 区間 S3〜S4の勾配 03、 路 面高さ CY3、 CY4は、 他の区間の勾配、 路面高さとは異なっている。 したがつ て、 こうした勾配、 路面高さが異なる走行路面各部を一義的な勾配、 路面高さと みて、 走行路面 3 1全体 (画像 40内に存在する走行路全体) を一平面とみなし、 この平面の高さを基準として障害物を検出する方法には無理があり、 誤検出を生 じる虞がある。  As described above, when the planned traveling road surface 31 is a sloping road, the gradient θ and the road surface height CY are not unique in each part in the traveling direction. For example, the slope 03 and the road heights CY3 and CY4 of the sections S3 to S4 are different from the slopes and road heights of other sections. Therefore, each part of the traveling road surface having a different slope and road surface height is regarded as a unique gradient and road surface height, and the entire traveling road surface 31 (the entire traveling road existing in the image 40) is regarded as one plane. There is no reasonable way to detect an obstacle based on the height of the plane, and erroneous detection may occur.
そこで、 つぎの処理では、 移動体 1前方の走行路面 3 1を一平面とみるのでは なくて、 図 5に示すように、 進行方向 (Z1軸方向) を各点 S0、 Sl、 S2…ごと に区切り、 各区間ごとの画像 J0、 Jl、 J2'-'を切り出し、 この各区間ごとの画像 J0、 Jl、 J2…毎に、 走行路面 31に相当する平面 L0、 L l、 L2…を求めるよ うにしている。 Therefore, in the following processing, the traveling road surface 31 in front of the moving object 1 Instead, as shown in Fig. 5, the traveling direction (Z1 axis direction) is divided for each point S0, Sl, S2 ..., and images J0, Jl, J2'- 'for each section are cut out, and for each of these sections For each of the images J0, Jl, J2,..., Planes L0, L1, L2,.
図 9には、 図 8の全体画像 40から切り出された画像 J 1、 J 3が、 3次元座標 系 XI— Y1— Z1で示されている。 この画像 Jl、 J3について、 走行路 31に相当 する平面を求めた結果が、 平面 Ll、 L3として表されている。  FIG. 9 shows images J1 and J3 cut out from the entire image 40 in FIG. 8 in a three-dimensional coordinate system XI—Y1—Z1. With respect to the images Jl and J3, the result of obtaining the plane corresponding to the traveling path 31 is represented as planes Ll and L3.
That is, the three-dimensional distribution image 40 of the pixels 60 shown in FIG. 8 is divided in the Z1-axis direction into the sections S0–S1, S1–S2, S3–S4, … (see FIG. 9), and a plane L0, L1, L2, … is detected for each of the section images J0, J1, J2, …. For example, for the section S3–S4, the pixel group at the lowest points in the vertical direction (Y1-axis direction) is selected from among all the pixels 60 of the image J3 within that section, and the plane L3 can be detected by fitting a plane to those points (step 104).
つぎに、 各平面 L0、 Ll、 L2'"毎に、 その平面を基準とする所定のしきい値以 上の高さの物体があるか否かが検出される。 たとえば、 平面 L3の場合、 その平面 L3を基準として所定のしきい値以上の物体 33があるので、 これが障害物 33で あると検出される (ステップ 105) 。  Next, for each plane L0, Ll, L2 '", it is detected whether or not there is an object having a height equal to or higher than a predetermined threshold based on the plane. For example, in the case of plane L3, Since there is an object 33 having a predetermined threshold value or more based on the plane L3, this is detected as an obstacle 33 (step 105).
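Steps 104 and 105 can be sketched as follows: within one section, the lowest pixels are taken as road-surface candidates, a plane is fitted to them, and points more than a threshold above that plane are flagged as obstacle points. The patent specifies neither the fitting method nor the numerical values; the least-squares fit, the 30% "lowest points" fraction, and the 0.5 m threshold are assumptions for the example.

```python
import numpy as np

def fit_road_plane(points, ground_fraction=0.3):
    """Fit a plane y = a*x + b*z + c to the lowest points of one section
    (candidate road surface), as in step 104. The kept fraction and the
    least-squares fit are illustrative choices."""
    pts = np.asarray(points, dtype=float)
    n_keep = max(3, int(len(pts) * ground_fraction))
    lowest = pts[np.argsort(pts[:, 1])[:n_keep]]   # smallest Y1 = lowest
    a_mat = np.c_[lowest[:, 0], lowest[:, 2], np.ones(n_keep)]
    coef, *_ = np.linalg.lstsq(a_mat, lowest[:, 1], rcond=None)
    return coef  # (a, b, c)

def obstacles_above(points, coef, threshold=0.5):
    """Step 105: return points more than `threshold` above the fitted plane."""
    pts = np.asarray(points, dtype=float)
    plane_y = coef[0] * pts[:, 0] + coef[1] * pts[:, 2] + coef[2]
    return pts[pts[:, 1] - plane_y > threshold]

# Flat section at y = 0 with one 1 m-tall object: only the object is flagged.
ground = [(x, 0.0, z) for x in range(-3, 4) for z in range(10, 16)]
rock = [(0.0, 1.0, 12.0)]
coef = fit_road_plane(ground + rock)
print(obstacles_above(ground + rock, coef))  # one row: the rock at (0, 1, 12)
```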
つぎに、 図 9に示すように予定走行路面 31の内側を示す画像 Kを、 図 8の全 体画像 40の中から切り出す処理が実行される。  Next, as shown in FIG. 9, a process of cutting out the image K showing the inside of the planned traveling road surface 31 from the whole image 40 in FIG. 8 is executed.
たとえば、 区間 S1〜S2の画像 J 1について、 予定走行路 31とその分岐路 32 とを同じ平面として検出してしまい (予定走行路 31と分岐路 32の間に段差が なければ) 、 予定走行路 31上にはない分岐路 32上の物体 35を 「障害物」 と して誤検出してしまうことがある (図 9参照) 。  For example, regarding the image J1 of the sections S1 to S2, the planned traveling path 31 and the branch road 32 are detected as the same plane (unless there is a step between the planned traveling path 31 and the branch road 32), and the planned traveling is performed. An object 35 on a fork 32 that is not on the road 31 may be erroneously detected as an obstacle (see Fig. 9).
そこで、 このような事態を避けるために、 予定走行路面 31の内側を示す画像 K内に存在する物体のみを障害物と判定するものである。  Therefore, in order to avoid such a situation, only an object existing in the image K showing the inside of the planned traveling road surface 31 is determined as an obstacle.
図 6は、 XI (移動体車幅方向) 一 Z1 (移動体進行方向) 座標系において、 進 行方向右側境界点 SO ( + ) 、 SI (+ ) 、 S2 ( + ) ···、 進行方向左側境界点 SO Figure 6 shows the XI (moving vehicle width direction)-Z1 (moving vehicle traveling direction) coordinate system, right boundary point in the traveling direction SO (+), SI (+), S2 (+), traveling direction Left boundary point SO
(一) 、 S1 (-) 、 S2 (-) …を'プロットしたものである。 進行方向右側境界点 SO ( + ) 、 SI (+ ) 、 S2 ( + ) …は、 予定走行路 31の 中心位置 SO (CX0、 CY0、 CZO) 、 S1(CX1、 CY1、 C Zl) 、 S2(CX2、 CY2、 CZ2) …に対して XI軸プラス方向 (右側) に、 移動体 1の車幅の半分な いしは走行路幅の半分 +Wcだけオフセットしたもの、 (1), S1 (-), S2 (-) ... are plotted. The right boundary points SO (+), SI (+), S2 (+) ... in the traveling direction are the center positions SO (CX0, CY0, CZO), S1 (CX1, CY1, CZl), S2 ( CX2, CY2, CZ2)… in the plus direction (right side) of the XI axis with respect to the vehicle width of the mobile unit 1 or offset by half the road width + Wc,
SO ( + ) (CX0+Wc、 CY0、 CZO) 、 SI (+ ) (CXl+Wc, CY1、 C Zl) 、 S2 ( + ) (CX2 + Wc、 CY2、 CZ2) …として得られる。  SO (+) (CX0 + Wc, CY0, CZO), SI (+) (CXl + Wc, CY1, CZl), S2 (+) (CX2 + Wc, CY2, CZ2).
Similarly, the left-side boundary points S0(-), S1(-), S2(-), ... are obtained by offsetting the center positions S0(CX0, CY0, CZ0), S1(CX1, CY1, CZ1), S2(CX2, CY2, CZ2), ... of the planned travel path 31 in the negative X1 direction (to the left) by -Wc, half the vehicle width of the moving object 1 or half the road width, giving S0(-)(CX0-Wc, CY0, CZ0), S1(-)(CX1-Wc, CY1, CZ1), S2(-)(CX2-Wc, CY2, CZ2), ....
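The offset construction described above can be sketched as follows. This is a minimal illustration only; the function and variable names are assumptions, since the specification defines no implementation:

```python
# Sketch of the boundary-point construction: each travel-path center point
# S_i = (CX_i, CY_i, CZ_i) is shifted by +Wc and -Wc along the X1 axis
# (vehicle width direction) to obtain the right and left corridor boundaries.
# Names and data layout are illustrative assumptions, not from the patent.

def corridor_boundaries(centers, half_width_wc):
    """centers: list of (CX, CY, CZ) tuples along the planned travel path.
    Returns (right_points, left_points) offset by +Wc / -Wc on the X1 axis."""
    right = [(cx + half_width_wc, cy, cz) for (cx, cy, cz) in centers]
    left = [(cx - half_width_wc, cy, cz) for (cx, cy, cz) in centers]
    return right, left

centers = [(0.0, 0.0, 0.0), (0.2, 0.0, 5.0), (0.5, 0.1, 10.0)]
right, left = corridor_boundaries(centers, half_width_wc=1.5)
print(right[0])  # (1.5, 0.0, 0.0)
print(left[0])   # (-1.5, 0.0, 0.0)
```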
Since the image K showing the inside of the planned travel road surface 31 is cut out of the whole image 40 in this way, even if an object 35 is present in an image J1 that includes the outside of the planned travel path 31, that object 35 does not appear in the image K. It can therefore be judged not to be an "obstacle", and only objects present on the planned travel path 31 are reliably detected as obstacles (step 106).  In the embodiment described above, the process (step 104) of cutting out the images J0, J1, J2, ... (planes L0, L1, L2, ...) for each section from the three-dimensional image 40 and the process (step 106) of cutting out the image K showing the inside of the travel road surface 31 from the three-dimensional image 40 are both performed. In the present invention, however, only one of these processes may be performed.
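The "inside the image K" judgment above amounts to a lateral-distance test against the travel-path centerline. A possible sketch, under the assumption that the test is a perpendicular-distance check in the X1-Z1 plane (the patent does not prescribe the geometry):

```python
# An object point is treated as a candidate obstacle only if it lies within
# +/-Wc of the planned travel path centerline, i.e. inside corridor K.
# The lateral distance is measured against the nearest polyline segment.

def lateral_distance(point, seg_a, seg_b):
    """Distance in the X1-Z1 plane from point to the segment seg_a-seg_b."""
    px, pz = point
    ax, az = seg_a
    bx, bz = seg_b
    dx, dz = bx - ax, bz - az
    seg_len2 = dx * dx + dz * dz
    if seg_len2 == 0.0:
        return ((px - ax) ** 2 + (pz - az) ** 2) ** 0.5
    # Project onto the segment and clamp to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (pz - az) * dz) / seg_len2))
    cx, cz = ax + t * dx, az + t * dz
    return ((px - cx) ** 2 + (pz - cz) ** 2) ** 0.5

def inside_corridor(point, centerline, wc):
    return any(
        lateral_distance(point, centerline[i], centerline[i + 1]) <= wc
        for i in range(len(centerline) - 1)
    )

path = [(0.0, 0.0), (0.0, 10.0), (0.0, 20.0)]
print(inside_corridor((0.5, 5.0), path, wc=1.5))  # True: on the travel path
print(inside_corridor((4.0, 5.0), path, wc=1.5))  # False: e.g. a branch road
```

An object such as the object 35 on the branch road 32 would fail this test and thus not be flagged, matching the behavior the passage describes.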
When the process of step 106 is performed alone, the operation is as follows.
That is, as shown in Fig. 9, the three-dimensional coordinate position data S0, S1, S2, ... of the travel path 31 in the vehicle body coordinate system X1-Y1-Z1 are matched against the three-dimensional image 40 (Fig. 8) in the same coordinate system, whereby the portion K corresponding to the travel road surface 31 is cut out of the three-dimensional image 40 (see the image 40' in Fig. 9). The presence of an obstacle 33 can then be detected in the portion K corresponding to the cut-out travel road surface 31'.
Here, the portion K corresponding to the travel road surface 31 can be obtained by offsetting the travel path center positions S0, S1, S2, ... by ±Wc, corresponding to the vehicle width or the road width, although the invention is not necessarily limited to this.
There is no problem when the moving object 1 travels along the center of the travel path, but the moving object 1 may also travel off the center of the path, in which case it may stray outside the region K offset by ±Wc for the vehicle width or road width. Accordingly, the deviation between the target points S0, S1, S2, ... and the current travel position of the moving object 1 is detected, an offset amount in the road width direction is determined according to this deviation, and a region from which the moving object 1 does not stray can be set arbitrarily.
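One way to realize the deviation-dependent offset just described is to widen the corridor on the side toward which the vehicle has drifted. The asymmetric-widening scheme below is one possible reading of the passage, an assumption rather than the patent's definitive method:

```python
# Deviation-compensated corridor: when the vehicle runs off the path center,
# the corridor half-width is enlarged on the drift side by the detected
# lateral deviation, so the vehicle itself never leaves region K.
# This particular compensation rule is an illustrative assumption.

def corridor_offsets(base_wc, lateral_deviation):
    """lateral_deviation > 0 means the vehicle is right of the centerline.
    Returns (right_offset, left_offset) along the X1 axis."""
    right = base_wc + max(0.0, lateral_deviation)
    left = base_wc + max(0.0, -lateral_deviation)
    return right, left

print(corridor_offsets(1.5, 0.0))  # (1.5, 1.5): on-center, symmetric corridor
print(corridor_offsets(1.5, 0.4))  # widened on the right side only
```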
In this way, whatever the travel path 31 may be, even on uneven terrain, the travel road surface 31 is identified from the image, and image processing for obstacle detection is performed only on the portion K corresponding to that road surface. The image processing time is therefore short, and the obstacle 33 is detected in real time. Moreover, even if the travel path 31 is curved or has a branch road 32, only the obstacle 33 on the planned travel path 31 is reliably detected, without an object 35 outside the planned travel path 31 being erroneously detected as an "obstacle".
When the process of step 104 is performed alone (when there is a step in the width direction of the travel road surface 31), the operation is as follows.
That is, as shown in Fig. 9, the three-dimensional coordinate position data S0, S1, S2, S3, ... of the travel path 31 in the vehicle body coordinate system X1-Y1-Z1 are matched against the three-dimensional image 40 (Fig. 8) in the same coordinate system, whereby the images J0, J1, J2, J3, ... of the respective sections of the travel road surface 31 are cut out of the three-dimensional image 40 (see the image 40' in Fig. 9). Planes L0, L1, L2, L3, ... are then determined for the cut-out section images J0, J1, J2, J3, ..., and for each of these planes the presence of an obstacle on it is detected.
In this way, a specific part J3 (L3) of the travel road surface 31 is cut out of the image, and that specific part is taken as the plane L3 on which the obstacle 33 is detected. Even if the travel road surface 31 is a slope or the like, so that the gradient θ and the road surface height CY differ from part to part, obstacles 33 higher than a predetermined height with respect to the planes L0, L1, L2, L3, ... of the respective parts of the road surface can be reliably detected without false detection.
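The per-section plane test described above can be sketched as a plane fit followed by a height threshold. The least-squares fit and the fixed threshold below are assumptions for illustration; the patent does not specify how the plane or the "predetermined height" is computed:

```python
# A plane L_i is fitted to the road-surface points of section J_i by least
# squares, and any point rising more than a threshold above that plane is
# flagged as an obstacle. Works per section, so slopes (differing gradient
# theta and road height CY between sections) do not cause false detections.
import numpy as np

def fit_plane(points):
    """Fit y = a*x + b*z + c to Nx3 points (x, y, z); returns (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 2], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)
    return coeffs

def obstacle_points(points, plane, height_threshold=0.3):
    a, b, c = plane
    pts = np.asarray(points, dtype=float)
    expected_y = a * pts[:, 0] + b * pts[:, 2] + c
    return pts[pts[:, 1] - expected_y > height_threshold]

# A sloped section (gradient along Z1) plus one raised point on it.
section = [(x, 0.1 * z, z) for x in (-1.0, 0.0, 1.0) for z in (0.0, 1.0, 2.0)]
plane = fit_plane(section)
hits = obstacle_points(section + [(0.0, 1.0, 1.0)], plane)
print(len(hits))  # 1: only the raised point exceeds the plane by > 0.3
```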
In the present embodiment, the three-dimensional image 40 is obtained from the distance image 30. However, any image from which the three-dimensional position of an object in front of the moving object can be determined may be used to cut out the portion K of the travel road surface 31 or the planes L0, L1, L2, L3, ... of the respective parts of the road surface.
For example, the portion K of the travel road surface 31 (shown hatched) or the planes L0, L1, L2, L3, ... of the respective parts of the road surface may be cut out directly from the three-dimensional distance image 30 shown in Fig. 7.
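Deriving the three-dimensional image from the distance image, as mentioned above, amounts to back-projecting each pixel. A minimal sketch under a pinhole-camera assumption (the focal length and principal point are illustrative, and the measured distance is treated as depth along the optical axis, a simplification the patent does not mandate):

```python
# Back-project a distance-image pixel (u, v) with depth d into the
# moving-object coordinate system X1-Y1-Z1 using a pinhole model.
# fx, fy, cu, cv are hypothetical camera intrinsics.

def pixel_to_3d(u, v, depth, fx=500.0, fy=500.0, cu=320.0, cv=240.0):
    """Returns (X1, Y1, Z1) in the camera/moving-object frame."""
    x = (u - cu) * depth / fx  # X1: vehicle width direction
    y = (v - cv) * depth / fy  # Y1: height direction
    return (x, y, depth)       # Z1: travel direction

print(pixel_to_3d(320.0, 240.0, 10.0))  # (0.0, 0.0, 10.0): principal point
print(pixel_to_3d(420.0, 240.0, 10.0))  # (2.0, 0.0, 10.0)
```

Applying this to every pixel of the distance image 30 yields the three-dimensional image 40 against which the travel-path coordinate data are matched.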

Claims

1. An obstacle detection device on a travel road surface of a moving object, which detects an obstacle based on a three-dimensional image of the travel road surface of the moving object and of the obstacle on the travel road surface, the device comprising:
position detection means for detecting the current position of the moving object as a three-dimensional coordinate position; three-dimensional image generation means for generating a current three-dimensional image of the travel road surface and the obstacle in a three-dimensional coordinate system viewed from the moving object;
image processing means for computing three-dimensional coordinate position data of the travel path in the moving-object coordinate system, based on three-dimensional coordinate position data indicating the three-dimensional coordinate position of each point along the travel path and on the current three-dimensional coordinate position of the moving object detected by the position detection means, and for cutting out, from the current three-dimensional image, a portion corresponding to the travel road surface by matching the three-dimensional coordinate position data of the travel path in the moving-object coordinate system against the three-dimensional image in the moving-object coordinate system currently generated by the three-dimensional image generation means; and detection means for detecting that the obstacle is present in the portion of the three-dimensional image cut out by the image processing means as corresponding to the travel road surface.
2. An obstacle detection device on a travel road surface of a moving object, which detects an obstacle based on a three-dimensional image of the travel road surface of the moving object and of the obstacle on the travel road surface, the device comprising:
position detection means for detecting the current position of the moving object as a three-dimensional coordinate position; three-dimensional image generation means for generating a current three-dimensional image of the travel road surface and the obstacle in a three-dimensional coordinate system viewed from the moving object;
image processing means for computing three-dimensional coordinate position data of the travel path in the moving-object coordinate system, based on three-dimensional coordinate position data indicating the three-dimensional coordinate position of each point along the travel path and on the current three-dimensional coordinate position of the moving object detected by the position detection means, and for cutting out a specific portion of the travel road surface in the current three-dimensional image by matching the three-dimensional coordinate position data of the travel path in the moving-object coordinate system against the three-dimensional image in the moving-object coordinate system currently generated by the three-dimensional image generation means; and detection means for obtaining the cut-out specific portion of the travel road surface as a plane and detecting that the obstacle is present on this plane.
3. An obstacle detection device on a travel road surface of a moving object, comprising distance image generation means for measuring the distance from a reference position of the moving object to an obstacle on the travel road surface of the moving object and generating a distance image of the travel road surface and the obstacle, and detection means for detecting an obstacle on the travel road surface of the moving object using the distance image generated by the distance image generation means, the device further comprising:
position detection means for detecting the current position of the moving object as a three-dimensional coordinate position; three-dimensional image generation means for computing, for each pixel of the distance image, three-dimensional coordinate position data in a three-dimensional coordinate system viewed from the moving object, based on two-dimensional coordinate position data of each pixel of the distance image and on distance data of each pixel from the reference position, and for generating a current three-dimensional image of the travel road surface and the obstacle in the moving-object coordinate system;
image processing means for computing three-dimensional coordinate position data of the travel path in the moving-object coordinate system, based on three-dimensional coordinate position data indicating the three-dimensional coordinate position of each point along the travel path and on the current three-dimensional coordinate position of the moving object detected by the position detection means, and for cutting out, from the current three-dimensional image, a portion corresponding to the travel road surface by matching the three-dimensional coordinate position data of the travel path in the moving-object coordinate system against the three-dimensional image in the moving-object coordinate system currently generated by the three-dimensional image generation means; and detection means for detecting that the obstacle is present in the portion of the three-dimensional image cut out by the image processing means as corresponding to the travel road surface.
4. An obstacle detection device on a travel road surface of a moving object, comprising distance image generation means for measuring the distance from a reference position of the moving object to an obstacle on the travel road surface of the moving object and generating a distance image of the travel road surface and the obstacle, and detection means for detecting an obstacle on the travel road surface of the moving object using the distance image generated by the distance image generation means, the device further comprising: position detection means for detecting the current position of the moving object as a three-dimensional coordinate position; three-dimensional image generation means for computing, for each pixel of the distance image, three-dimensional coordinate position data in a three-dimensional coordinate system viewed from the moving object, based on two-dimensional coordinate position data of each pixel of the distance image and on distance data of each pixel from the reference position, and for generating a current three-dimensional image of the travel road surface and the obstacle in the moving-object coordinate system;
image processing means for computing three-dimensional coordinate position data of the travel path in the moving-object coordinate system, based on three-dimensional coordinate position data indicating the three-dimensional coordinate position of each point along the travel path and on the current three-dimensional coordinate position of the moving object detected by the position detection means, and for cutting out a specific portion of the travel road surface in the current three-dimensional image by matching the three-dimensional coordinate position data of the travel path in the moving-object coordinate system against the three-dimensional image in the moving-object coordinate system currently generated by the three-dimensional image generation means; and
detection means for obtaining the cut-out specific portion of the travel road surface as a plane and detecting that the obstacle is present on this plane.
PCT/JP1997/004042 1996-11-06 1997-11-06 Device for detecting obstacle on surface of traveling road of traveling object WO1998020302A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU48845/97A AU4884597A (en) 1996-11-06 1997-11-06 Device for detecting obstacle on surface of traveling road of traveling object

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP8293994A JPH10141954A (en) 1996-11-06 1996-11-06 Device for detecting obstruction on track for moving body
JP8/293994 1996-11-06

Publications (1)

Publication Number Publication Date
WO1998020302A1 true WO1998020302A1 (en) 1998-05-14

Family

ID=17801874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1997/004042 WO1998020302A1 (en) 1996-11-06 1997-11-06 Device for detecting obstacle on surface of traveling road of traveling object

Country Status (3)

Country Link
JP (1) JPH10141954A (en)
AU (1) AU4884597A (en)
WO (1) WO1998020302A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112857254A (en) * 2021-02-02 2021-05-28 北京大成国测科技有限公司 Parameter measurement method and device based on unmanned aerial vehicle data and electronic equipment

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002170102A (en) * 2000-12-04 2002-06-14 Nippon Telegr & Teleph Corp <Ntt> Method and apparatus for automatically obtaining and restoring subject of photography
JP5895682B2 (en) * 2012-04-19 2016-03-30 株式会社豊田中央研究所 Obstacle detection device and moving body equipped with the same
JP5947938B1 (en) 2015-03-06 2016-07-06 ヤマハ発動機株式会社 Obstacle detection device and moving body equipped with the same
BR112018074698A2 (en) 2016-05-30 2019-03-19 Nissan Motor Co., Ltd. object detection method and object detection apparatus
CN112859109B (en) * 2021-02-02 2022-05-24 北京大成国测科技有限公司 Unmanned aerial vehicle panoramic image processing method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0589266A (en) * 1991-09-27 1993-04-09 Olympus Optical Co Ltd Neuron element and neural network circuit
JPH07146145A (en) * 1993-11-24 1995-06-06 Fujitsu Ltd Road detection apparatus
JPH07264577A (en) * 1994-03-23 1995-10-13 Yazaki Corp Vehicle periphery monitoring device
JPH08101035A (en) * 1994-10-03 1996-04-16 Kajima Corp Remote measuring method
JPH09142236A (en) * 1995-11-17 1997-06-03 Mitsubishi Electric Corp Periphery monitoring method and device for vehicle, and trouble deciding method and device for periphery monitoring device
JPH09178855A (en) * 1995-12-25 1997-07-11 Hitachi Ltd Method of detecting obstruction

Also Published As

Publication number Publication date
JPH10141954A (en) 1998-05-29
AU4884597A (en) 1998-05-29

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU US

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 09297644

Country of ref document: US