WO2006123438A1 - Method for detecting a planar road area and obstacles using a stereoscopic image - Google Patents


Info

Publication number
WO2006123438A1
WO2006123438A1 (application PCT/JP2005/009701)
Authority
WO
WIPO (PCT)
Prior art keywords
image
plane
area
road
road surface
Prior art date
Application number
PCT/JP2005/009701
Other languages
English (en)
Japanese (ja)
Inventor
Masatoshi Okutomi
Akihito Seki
Original Assignee
The Circle For The Promotion Of Science And Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Circle For The Promotion Of Science And Engineering filed Critical The Circle For The Promotion Of Science And Engineering
Priority to PCT/JP2005/009701 priority Critical patent/WO2006123438A1/fr
Publication of WO2006123438A1 publication Critical patent/WO2006123438A1/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/593 - Depth or shape recovery from multiple images from stereo images
    • G06T7/20 - Analysis of motion
    • G06T7/285 - Analysis of motion using a sequence of stereo image pairs
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/10021 - Stereoscopic video; Stereoscopic image sequence
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 - Obstacle

Definitions

  • The present invention relates to a road plane area and obstacle detection method using stereo images, intended to support driver assistance for the safe driving of automobiles and the automatic driving of autonomous mobile vehicles by detecting, with on-vehicle stereo cameras, the road plane area and all obstacles present on the road (leading vehicles, oncoming vehicles, parked vehicles, pedestrians, etc.). Background art
  • Laser radar, ultrasonic sensors, and millimeter-wave radar have been used for visually recognizing the driving environment in the guidance of autonomous mobile vehicles, in particular for detecting drivable areas and for detecting obstacles in the driving environment. These approaches can be roughly divided into methods that use such ranging sensors and methods that use images.
  • In detection methods using laser radar or millimeter-wave radar, the equipment is generally expensive and sufficient spatial resolution cannot be obtained.
  • Detection methods using ultrasonic sensors have the problems that distant measurement is difficult and spatial resolution is low.
  • Methods using images can be divided into monocular and binocular (stereo) methods.
  • Most image-based methods are monocular, that is, they use images obtained from a single viewpoint. They are mainly used in well-maintained environments such as expressways, and detect the drivable area by detecting white lines on the road surface (for example, separation lines and center lines).
  • With monocular imaging, only the intensity pattern projected onto a single image is available, so it is difficult to stably distinguish the drivable area from obstacles.
  • Non-Patent Document 1 Non-Patent Document 2
  • Non-Patent Document 3 Non-Patent Document 4
  • In these conventional methods, a fixed two-dimensional projective transformation is used to detect the planar area of the road and obstacles.
  • However, the running surface (road surface) may be inclined,
  • and the vehicle body may tilt as it moves.
  • The assumption that the positional relationship between the in-vehicle camera and the road plane is always constant therefore often does not hold in practice. With conventional detection methods that use the same fixed two-dimensional projective transformation, it is extremely difficult to cope with the inclination of the running surface (road surface) or the tilting of the vehicle body as the vehicle moves.
  • As described above, road plane area and obstacle detection methods can be broadly classified into those that use laser radar, ultrasound, or millimeter-wave radar and those that use images.
  • Those using laser radar, ultrasound, or millimeter-wave radar have the problems that the apparatus is expensive, the measurement accuracy is low, and the spatial resolution is low.
  • Image-based detection methods have been limited to well-maintained environments such as expressways and cannot cope with vibrations during driving or slopes of the road; outside such environments, the measurement accuracy deteriorates markedly.
  • The present invention has been made in view of the above circumstances. Its object is to use only the image information obtained from an in-vehicle stereo camera to dynamically determine the area in which the vehicle can travel in real space, even under road-surface inclination and camera vibration caused by the vehicle's motion, and to calculate and present the distance and relative speed to obstacles in each direction as seen from the vehicle. It is thus intended to provide a road plane area and obstacle detection method using stereo images. Disclosure of the invention
  • The present invention relates to a road plane area and obstacle detection method using stereo images.
  • The object of the present invention is achieved by a method that takes as input a stereo moving image composed of a base image and a reference image captured by imaging means mounted on an automobile,
  • and detects the travelable road plane area and all obstacles existing on the road surface using only the images.
  • The method comprises a third step of calculating the slope of the road surface by decomposing the obtained projective transformation matrix, and a step of generating, based on the slope calculated in the third step, an image of the road surface viewed from above.
  • The present invention further provides a sixth step of calculating the direction-specific distance from the vehicle to obstacles from the travelable plane area on the virtual projection plane image,
  • and an eighth step of superimposing the travelable plane area on the virtual projection plane image and displaying the direction-specific distance and direction-specific relative velocity information. In the first step, LOG filter processing and histogram flattening are applied to the base image and the reference image, after which the projective transformation matrix corresponding to the road surface between the stereo images is dynamically estimated by a region-based method.
  • The reference image is projectively transformed by this matrix, a difference image between the base image and the transformed reference image is obtained, and a smoothing filter is applied to the difference image.
  • A binarized image is then obtained using a threshold value, and the travelable plane area is extracted by using the plane-area estimate from the previous time and incorporating textureless regions into the binary image.
  • The road plane attitude parameters indicating the inclination of the road surface are given by the distance from the base camera optical center to the road plane and the normal vector of the road plane.
  • The base image is projectively transformed to generate a virtual projection plane image parallel to the road plane. The object of the invention is thereby effectively achieved.
  • FIG. 1 is a flowchart showing the overall flow of the present invention.
  • FIG. 2 is a schematic diagram for explaining plane projection and two-dimensional projective transformation.
  • FIG. 3 is a diagram showing an example of a stereo original image.
  • FIG. 4 is a diagram showing a result image obtained by subjecting the stereo original image shown in FIG. 3 to LOG filter processing and histogram flattening processing.
  • FIG. 5 is a schematic diagram for explaining the procedure for estimating the projective transformation matrix and the plane area using time-series information.
  • FIG. 6 is a diagram showing a planar area extraction result.
  • FIG. 7 is a schematic diagram for explaining a region expected from the previous time.
  • FIG. 8 is a diagram for explaining planar region extraction of a stereo image in which a large textureless region exists.
  • FIG. 9 is a schematic diagram for explaining textureless area processing.
  • FIG. 10 is a diagram showing a planar area extraction result obtained by performing textureless area processing on a stereo image having a large textureless area.
  • FIG. 11 is a diagram showing a planar area extraction result obtained by performing textureless area processing on a stereo image in which a textureless area is also present in the planar area.
  • FIG. 12 is an overall flowchart of plane extraction in the present invention.
  • Fig. 13 is a schematic diagram for explaining the method of considering the textureless area.
  • FIG. 14 is a schematic diagram for explaining the virtual projection plane image (VPP image) of the present invention.
  • FIG. 15 is a diagram showing an example of a base image.
  • FIG. 16 is a diagram showing an example of a reference image.
  • FIG. 17 is a diagram showing the result of planar area extraction.
  • FIG. 18 is a diagram showing the result of overlaying the planar area of FIG. 17 on the original image of FIG.
  • FIG. 19 is a view showing a VPP image of the reference image of FIG.
  • FIG. 20 is a diagram showing a VPP image of the planar region of FIG.
  • FIG. 21 is a diagram for explaining the characteristics of the VPP image.
  • FIG. 22 is a schematic diagram for explaining the calculation, on the VPP image, of the plane area and the direction-specific distances from the vehicle to obstacles.
  • FIG. 23 is a diagram for explaining the calculation of the direction-specific relative speed of the plane area.
  • FIG. 24 is an image showing a road plane area and an obstacle detection result using the present invention.
  • FIG. 25 is a diagram showing an example of a road plane area and an obstacle detection result using the present invention.
  • FIG. 26 is a diagram showing another example of the road plane area and the obstacle detection result using the present invention.
  • Two left and right stereo cameras are mounted on a car, and all obstacles present on the road surface, such as pedestrians, parked vehicles, oncoming vehicles, and preceding vehicles, are detected from the car carrying the stereo cameras (hereinafter referred to as the vehicle),
  • under conditions where there are vibrations and changes in the slope of the road surface caused by the vehicle's movement.
  • In this embodiment, the right stereo camera is used as the base camera
  • and the left stereo camera is used as the reference camera.
  • The stereo image taken with the base camera is taken as the base image,
  • and the stereo image taken with the reference camera is taken as the reference image.
  • Only the stereo images (that is, the base image and the reference image) captured by these cameras are used.
  • Alternatively, the left stereo camera may be used as the base camera
  • and the right stereo camera as the reference camera.
  • In that case, the base camera is mounted on the left side of the vehicle and the reference camera on the right side.
  • In this embodiment, the base camera is mounted on the right side of the car and the reference camera on the left side.
  • In the detection method of the present invention, the travelable area in a coordinate system corresponding to real space is dynamically determined, and the distance to obstacles and the relative speed in each direction as seen from the vehicle are calculated and presented.
  • The detection method dynamically determines a flat area where the vehicle can travel (hereinafter referred to as the travelable area)
  • using image information obtained from the two on-vehicle stereo cameras.
  • It is desirable to use stereo cameras mounted on an automobile, but the present invention is not limited to this.
  • In this embodiment, CCD cameras are used as the stereo cameras.
  • FIG. 1 is a flowchart showing the overall flow of the present invention.
  • First, a projective transformation matrix for the road surface is dynamically estimated (step S100).
  • Next, the road plane area (hereinafter also referred to as the travelable plane area) is extracted (step S110).
  • The slope of the road surface is calculated by decomposing the projective transformation matrix obtained in step S100 (step S120).
  • Based on the slope of the road surface calculated in step S120, an image of the road surface viewed from above (in the present invention, this image is referred to as a virtual projection plane (VPP) image) is generated (step S130).
  • On the VPP image, the travelable plane area and the position and direction of the vehicle are presented (step S140).
  • The distance in each direction from the vehicle to the farthest part of the road plane area on the virtual projection plane image, that is, to the obstacle, is calculated (step S150).
  • The relative speed in each direction from the vehicle to the farthest part of the road plane area on the virtual projection plane image, that is, to the obstacle, is also calculated (step S160).
  • Finally, the travelable plane area is displayed superimposed on the virtual projection plane image (VPP image), and the direction-specific distance and direction-specific relative velocity information calculated in steps S150 and S160 are displayed (step S170).
  • H is a 3×3 projective transformation matrix, where equality is defined up to a constant scale factor.
  • O is the optical center of the base camera,
  • O' is the optical center of the reference camera,
  • R is the rotation matrix from the base camera coordinate system to the reference camera coordinate system,
  • t is the translation vector between the two cameras,
  • d is the distance between the base camera and the plane,
  • and n is the normal vector of the plane.
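The quantities above combine into the standard plane-induced homography, H ≃ A2(R + t nᵀ/d)A1⁻¹, which is the form given as Equation 4 later in the text. The following numpy sketch (all rig values are invented for illustration, not taken from the patent) checks that such an H transfers the pixel of a plane point in the base image to its pixel in the reference image:

```python
import numpy as np

def plane_homography(A1, A2, R, t, n, d):
    """H mapping base-image pixels of plane points into the reference image.
    The plane is n . X = d in the base camera frame (n: unit normal)."""
    return A2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(A1)

def project(P, X):
    """Apply a 3x3 matrix to a 3-vector and dehomogenize."""
    x = P @ X
    return x[:2] / x[2]

# hypothetical rig: identical intrinsics, parallel cameras, 0.5 m baseline
A = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.eye(3)
t = np.array([-0.5, 0., 0.])
n = np.array([0., -1., 0.])   # road normal as seen by a forward-looking camera
d = 1.2                       # camera height above the road plane (m)

H = plane_homography(A, A, R, t, n, d)

X = np.array([0.3, -1.2, 8.0])          # a road point: n . X = 1.2 = d
x_base = project(A, X)                  # its pixel in the base image
x_ref = project(A, R @ X + t)           # its pixel in the reference image
x_mapped = project(H, np.array([*x_base, 1.0]))
print(np.allclose(x_mapped, x_ref))     # True: H transfers plane pixels
```

Points off the plane (n·X ≠ d) are not transferred correctly by H, which is exactly the property the method exploits to separate the road plane from obstacles.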
  • When there is an obstacle in an area, the area is not recognized as part of the plane, so it can be detected as not being a travelable area.
  • Obstacles are contained in the regions outside the plane area; as a result, obstacles can be detected by measuring the extent of the plane.
  • FIG. 3 shows an example of the input stereo original images:
  • Fig. 3(A) shows the base image
  • and Fig. 3(B) shows the reference image.
  • Fig. 4 shows the result images obtained by applying LOG filter processing and histogram flattening to the stereo original images of Fig. 3:
  • Fig. 4(A) shows the image after processing of the base image in Fig. 3(A),
  • and Fig. 4(B) shows the image after processing of the reference image in Fig. 3(B).
  • A projective transformation matrix corresponding to the plane is obtained by the region-based method disclosed in Non-Patent Document 5.
  • The matrix that minimizes the following evaluation function is obtained by iterative optimization.
  • I_b(m) and I_r(m) represent the intensity values of the base image and the reference image at image position m, respectively.
  • First, a projective transformation matrix between consecutive base images I_b(t-1) and I_b(t) is obtained.
  • As the initial value of this projective transformation matrix, the matrix estimated at the previous time is used, and as the calculation region, the planar area R(t-1) obtained at the previous time for I_b(t-1)
  • can be used.
  • Next, the projective transformation matrix between the stereo images I_b(t) and I_r(t) is estimated, using the matrix estimated at the previous time (t-1) as the initial value and the region predicted in (2) as the calculation region.
  • By this procedure, a projective transformation matrix and a calculation region sufficiently close to the true values are available as initial values in the continuous estimation using time-series images, and the projective transformation matrix is obtained afresh from the stereo images at every time step. It is therefore not necessary to know the cameras' internal parameters, the arrangement of the two cameras, or the positional relationship between the cameras and the road surface, and the estimation remains stable even when the car body tilts or the cameras vibrate due to unevenness of the road surface.
  • <Step S110> Using the obtained projective transformation matrix, extract the plane area of the road surface (the travelable plane area).
  • FIG. 6 shows the results of this series of processing using the stereo original images of Fig. 3.
  • Fig. 6(A) is the base image (same as Fig. 4(A)) after LOG filtering and histogram flattening.
  • Fig. 6(B) shows the result of projectively transforming the reference image of Fig. 4(B) using the projective transformation matrix estimated by the method described in step S100.
  • Fig. 6(C) shows the difference (absolute value) between the images of Fig. 6(A) and Fig. 6(B).
  • In Fig. 6(C), the images of Fig. 6(A) and Fig. 6(B) coincide on the plane (road surface), so that region appears black, while the other parts are shifted and appear whitish.
  • In Fig. 6(D), the difference image of Fig. 6(C) is binarized after applying an averaging filter.
  • This is equivalent to performing SAD (Sum of Absolute Differences) block matching between the images of Fig. 6(A) and Fig. 6(B).
  • Furthermore, the planar area extraction result of the previous time is used as follows.
  • The plane area expected at the current time is obtained from the plane area R(t-1) at the previous time and the projective transformation matrix between the time-series base images.
  • The boundary between the planar area and the non-planar area at the current time can be assumed to fall within a certain width of the boundary expected from the previous time. The other parts can therefore be regarded as planar or non-planar based on the previous result, and this is reflected in the matching result of Fig. 6(D).
  • Fig. 6(F) shows the original image with reduced brightness outside the extracted plane area, so that the correspondence between the extraction result and the original image can be seen. From Fig. 6(F), it can be seen that the extracted area agrees well with the actual road surface.
  • The method assumes that, between the base image and the projectively transformed reference image, positions off the plane are shifted on the image, resulting in a difference in image intensity.
  • Fig. 8(A) and Fig. 8(B) show examples of such images.
  • The matching result in this case (corresponding to Fig. 6(D)) is shown in Fig. 8(D).
  • In Fig. 8(D), the matching result clearly includes the wall of the building in front.
  • An area where the output of the LOG filter is close to 0 is treated as a textureless area.
  • If an entire textureless area is included in the matching result, the textureless area is regarded as a planar area (that is, area B). Note that the boundary between an actual planar area and a non-planar area always produces some texture on the image, such as edges or shading differences; in the present invention, such boundary portions are not textureless and belong to the matching region. Therefore, even if a textureless area lies at the edge of the planar area, it appears inside the matching area, bounded by that boundary, in the processing result. If the entire textureless area is not included in the matching result, as in Fig. 9(C), the entire textureless area is regarded as a non-planar area (that is, area A).
  • The textureless region processing described above can be interpreted as follows. In a textureless region, local matching cannot determine whether there is a displacement. Therefore, textureless pixels are grouped into regions, and each region is judged as a whole. A textureless region contained in the planar area has no displacement, so the whole region matches; a textureless region in a non-planar area is displaced, so some part of it always overlaps a surrounding textured area and is judged non-planar by matching. In that case, the entire textureless region is determined to be non-planar.
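The rule above can be sketched as: label the textureless pixels into connected components, then accept a component as planar only if it lies entirely inside the matched (planar) result. A toy numpy/BFS version, with invented 8×8 masks, not the patent's implementation:

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a boolean mask via BFS."""
    lab = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if lab[sy, sx]:
            continue
        count += 1
        lab[sy, sx] = count
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not lab[ny, nx]):
                    lab[ny, nx] = count
                    q.append((ny, nx))
    return lab, count

def classify_textureless(textureless, matched):
    """A textureless component is planar only if wholly inside the matched area."""
    plane = matched.copy()
    lab, n = label_regions(textureless)
    for k in range(1, n + 1):
        comp = lab == k
        if matched[comp].all():
            plane |= comp          # fully inside -> whole component is plane
        else:
            plane &= ~comp         # sticks out -> whole component is non-planar
    return plane

matched = np.zeros((8, 8), bool); matched[:, :5] = True   # matched (planar) area
tl = np.zeros((8, 8), bool)
tl[2:4, 1:3] = True        # textureless blob fully inside the matched area
tl[5:7, 4:7] = True        # textureless blob crossing the boundary
out = classify_textureless(tl, matched)
print(bool(out[2, 1]), bool(out[5, 4]))  # True False
```

The second blob overlaps the non-matched region, so the whole component is rejected, mirroring the "whole region judged as a lump" behaviour described above.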
  • FIG. 10 shows the results obtained by performing the above textureless region processing.
  • FIG. 10(B) shows the result of extracting the textureless region from the base image of FIG. 10(A).
  • Figure 10 (C) shows the result of extracting the planar area by applying the textureless area processing described above.
  • FIG. 10 (D) is a superimposed display with the original image.
  • Figure 11 shows another result obtained by performing textureless region processing.
  • A disadvantage of the region-based method used in the present invention (see Non-Patent Document 5) is that processing fails if the brightness of the two images differs. Therefore, to remove the brightness difference between the images, a LOG (Laplacian of Gaussian) filter is applied to the input stereo original images.
  • Since the resulting images have low contrast, the contrast is then increased by histogram flattening.
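These two preprocessing steps can be sketched in numpy as follows (a slow, illustrative implementation; the kernel size and sigma are arbitrary choices, not values from the patent):

```python
import numpy as np

def log_filter(img, sigma=1.0, ksize=7):
    """Laplacian-of-Gaussian filter: cancels brightness offsets between views."""
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2.0 * sigma**2) / sigma**4 * np.exp(-r2 / (2.0 * sigma**2))
    k -= k.mean()                          # zero-sum kernel: flat patches -> 0
    h, w = img.shape
    pad = np.pad(img.astype(float), ksize // 2, mode='edge')
    out = np.zeros((h, w))
    for i in range(ksize):                 # naive 2-D convolution
        for j in range(ksize):
            out += k[i, j] * pad[i:i + h, j:j + w]
    return out

def equalize_hist(img, bins=256):
    """Histogram flattening: map intensities through the empirical CDF."""
    flat = img.ravel()
    hist, edges = np.histogram(flat, bins=bins)
    cdf = hist.cumsum() / flat.size
    idx = np.clip(np.digitize(flat, edges[:-1]) - 1, 0, bins - 1)
    return cdf[idx].reshape(img.shape)

ramp = np.linspace(0.0, 255.0, 64)[None, :] * np.ones((64, 1))  # synthetic ramp
filtered = log_filter(ramp)
flattened = equalize_hist(filtered)
print(abs(filtered[32, 32]) < 1e-6)   # True: LOG of a linear ramp is ~0 inside
```

A smooth intensity gradient (such as a global brightness difference between the two cameras) produces near-zero LOG response, which is why the subsequent differencing compares image structure rather than absolute brightness.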
  • <Step S100> The projective transformation matrix of the plane between the stereo images is dynamically estimated.
  • The reference image is projectively transformed using the projective transformation matrix representing the plane obtained above.
  • Under this transformation, the plane represented by the matrix is aligned as if it were seen from the base camera, while anything not on that plane is distorted when projected. The present invention uses this property to obtain the planar area.
  • For points on the plane, the difference in luminance values is small; conversely, the difference in luminance values for points not on the plane is large.
  • An area with a small difference is taken as the plane area by thresholding. Since per-pixel differences are strongly affected by noise, a smoothing filter is first applied to the difference image, which is then binarized using the threshold.
  • The boundary between the planar and non-planar regions extracted at the current time falls within a certain width of the boundary expected from the previous time. The other parts can therefore be regarded as planar or non-planar based on the previous result, and this is reflected in the binary image (matching result).
  • The road area we want to find is the area where the car can actually travel, so areas where the car cannot travel are removed from the extraction result. First, small regions where the car cannot travel are removed: the area of each region in the extraction result is computed and regions below a threshold are discarded. The whole road area is then eroded and dilated (contraction and expansion).
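The warp / difference / smoothing / threshold pipeline can be sketched as follows (numpy only, nearest-neighbour warping, arbitrary window and threshold; the small-region removal and erosion/dilation post-processing described above are omitted):

```python
import numpy as np

def warp(img, H):
    """Inverse-map warp of img by homography H (nearest neighbour, toy version)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros((h, w))
    out.ravel()[ok] = img[sy[ok], sx[ok]]
    return out

def plane_mask(base, ref, H, win=5, thresh=10.0):
    """Warp the reference image by H, smooth |difference|, threshold -> mask."""
    diff = np.abs(base.astype(float) - warp(ref, H))
    pad = np.pad(diff, win // 2, mode='edge')       # box (averaging) filter
    sm = sum(pad[i:i + diff.shape[0], j:j + diff.shape[1]]
             for i in range(win) for j in range(win)) / win**2
    return sm < thresh

# sanity check: identical images under the identity homography -> all planar
img = np.random.default_rng(0).integers(0, 255, (40, 40)).astype(float)
mask = plane_mask(img, img, np.eye(3))
print(mask.all())  # True
```

With a real H and a real stereo pair, off-plane structure survives the differencing and falls below the threshold only inside the road region, as in Fig. 6(D).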
  • The projective transformation matrix H of the road plane is expressed by the following Equation 4 for a camera arrangement as shown in Fig. 2.
  • R is the rotation matrix from the base camera coordinate system to the reference camera coordinate system,
  • t is the translation vector from the base camera to the reference camera in the reference camera coordinate system,
  • d is the distance between the base camera and the road plane,
  • n is the normal vector of the road plane,
  • and A1 and A2 are the internal parameter matrices of the base camera and the reference camera, respectively.
  • From Equation 10 above, R, t/d, and n can be obtained.
  • The distance d is then determined by giving the baseline length (the magnitude of t).
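Equation 4 can be inverted to recover the plane parameters. The patent's decomposition (Equation 10) handles the general case; the sketch below assumes the simpler situation of a fixed, calibrated rig (R and t known), where M = A2⁻¹ H A1 gives M − R = t nᵀ/d, a rank-one matrix from which n/d follows, and |n| = 1 then yields d. All numeric values are invented for illustration:

```python
import numpy as np

A = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.eye(3)
t = np.array([-0.5, 0.0, 0.0])
n = np.array([0.0, -1.0, -0.1])
n /= np.linalg.norm(n)        # unit road normal, tilted a few degrees
d = 1.3                       # camera height above the road (m)

H = A @ (R + np.outer(t, n) / d) @ np.linalg.inv(A)   # forward model (Eq. 4)

# With the rig fixed and calibrated, M - R = t n^T / d is rank one, so
# (M - R)^T t = n |t|^2 / d gives n/d, and |n| = 1 then yields d.
M = np.linalg.inv(A) @ H @ A
n_over_d = (M - R).T @ t / (t @ t)
d_hat = 1.0 / np.linalg.norm(n_over_d)
n_hat = n_over_d * d_hat
print(np.allclose(n_hat, n), abs(d_hat - d) < 1e-9)   # True True
```

In practice H is estimated only up to scale, so the projective scale must first be fixed (commonly via the middle singular value of A2⁻¹ H A1) before this rank-one extraction applies; the recovered n and d are exactly the road-plane attitude parameters used to build the VPP image.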
  • <Step S130> Generate an image of the road plane viewed from above (the virtual projection plane image).
  • VPP stands for Virtual Projection Plane.
  • n_p is the normal vector of the road plane in the base camera coordinate system,
  • f is the focal length of the base camera,
  • d is the distance between the base camera optical center and the road plane,
  • and e_z is the unit vector along the optical axis of the base camera.
  • Let n be the normal vector obtained by decomposing the projective transformation matrix in step S120.
  • Let the projective transformation that converts the base image into a VPP image rotated by this R be A_v.
  • The unit vector e_z = (0, 0, 1)^T in the optical-axis direction of the base camera is regarded as a vector u viewed from the VPP camera coordinate system, and is orthogonally projected into the VPP image coordinate system.
  • A point m in three-dimensional space projects onto the base image at coordinates (u_0, v_0).
  • The luminance of a point m on the VPP image is obtained by looking up, via the inverse matrix H^-1 of H, the luminance value of the corresponding point m_0 in the original image (Equation 28).
  • T_ij denotes the element in row i and column j of the matrix T^-1.
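The rotational part of the VPP mapping can be sketched as follows: build a rotation whose new optical axis is the (negated) road normal, so that the virtual camera looks straight down at the road. The sign conventions and the choice of in-plane axes below are assumptions for illustration, not the patent's exact formulation:

```python
import numpy as np

def vpp_rotation(n):
    """Rotation from the base camera frame to a virtual camera whose optical
    axis points into the road, i.e. a camera looking straight down."""
    ez = -n / np.linalg.norm(n)              # new optical axis
    ex = np.cross(np.array([0., 1., 0.]), ez)
    ex /= np.linalg.norm(ex)                 # new x axis, along the road
    ey = np.cross(ez, ex)                    # completes a right-handed frame
    return np.stack([ex, ey, ez])            # rows = new basis vectors

def vpp_homography(A, n):
    """Pure-rotation homography warping the base image to the VPP view."""
    return A @ vpp_rotation(n) @ np.linalg.inv(A)

A = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
n = np.array([0., -1., -0.1])
n /= np.linalg.norm(n)                       # normal from the H decomposition
Rv = vpp_rotation(n)
print(np.allclose(Rv @ Rv.T, np.eye(3)))     # True: a proper rotation
print(np.allclose(Rv @ (-n), [0., 0., 1.]))  # True: normal -> new optical axis
```

Applying the resulting homography to the base image (plus a scale/offset chosen so that meters map to pixels) gives the bird's-eye VPP image in which road-plane coordinates correspond to real-space coordinates.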
  • FIG. 15 shows the base image.
  • FIG. 18 shows the result of superimposing the planar area of FIG. 17 on the original image.
  • <Step S140> Calculate the travelable plane area on the VPP image and the position and direction of the vehicle.
  • Fig. 20 shows an image obtained by converting only the planar region (the white region of Fig. 17) into a VPP image by the method described in step S130
  • (in this image, the white area is the plane area).
  • Points on the estimated plane are converted to the correct coordinates in the VPP image by the transformation of Equation 27 above, but points off the plane (for example, cars and walls) are not converted to the correct positions.
  • As the VPP image, an image of the estimated road plane region viewed from directly above was generated, and the coordinates obtained from this image form a coordinate system corresponding to real space.
  • O is the point where the optical center of the base camera is projected onto the road plane;
  • this point is taken as the base camera position origin. The position of the vehicle is then obtained from the positional relationship between the camera and the vehicle.
  • The vertical axis of the VPP image coincides with the optical axis; in other words, if the mounting angle between the optical axis and the vehicle is known, the direction of the vehicle can be calculated in the VPP image.
  • <Step S150> Calculate the distance in each direction from the vehicle to the farthest part of the road plane area on the VPP image, that is, to the obstacle.
  • In this embodiment, the upper limit of the area expansion was set at 32 m from the base camera position origin in the optical-axis direction.
  • The measurement range was ±25 degrees from the optical axis, in increments of 0.5 degrees.
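This per-direction scan (32 m limit, ±25° in 0.5° steps) can be sketched on a binary VPP plane mask by marching along each ray from the camera origin until it leaves the planar area. The resolution and the toy mask below are invented:

```python
import numpy as np

def distance_by_direction(plane_mask, origin, m_per_px, max_m=32.0,
                          angles_deg=np.arange(-25.0, 25.5, 0.5)):
    """Farthest contiguous planar point along each ray from the camera origin
    in the VPP image; angles are measured from the optical-axis (+Y) direction."""
    ox, oy = origin
    dists = []
    for a in np.deg2rad(angles_deg):
        step, r = 0.5 * m_per_px, 0.0          # march in half-pixel steps
        while r + step <= max_m:
            x = int(round(float(ox + (r + step) * np.sin(a) / m_per_px)))
            y = int(round(float(oy - (r + step) * np.cos(a) / m_per_px)))
            if not (0 <= y < plane_mask.shape[0] and 0 <= x < plane_mask.shape[1]
                    and plane_mask[y, x]):
                break                          # left the plane: obstacle / edge
            r += step
        dists.append(r)
    return np.array(dists)

# toy VPP mask at 0.1 m/pixel: road planar for the first 20 m ahead
mask = np.zeros((400, 200), bool)
mask[200:400, :] = True                        # rows 200..399 = 0..20 m ahead
d = distance_by_direction(mask, origin=(100, 399), m_per_px=0.1,
                          angles_deg=np.array([0.0]))
print(19.0 < d[0] <= 20.0)                     # True: ~20 m of free plane ahead
```

The first break along each ray marks the end point of the planar area in that direction, which is the quantity plotted as the direction-specific distance in Figs. 22 and 24.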
  • <Step S160> Calculate the relative speed in each direction from the vehicle to the farthest part of the road plane area on the VPP image, that is, to the obstacle.
  • The direction-specific relative speed from the vehicle to the farthest part of the road plane area on the VPP image, that is, to the obstacle, is calculated as follows.
  • For each direction tilted by θ from the optical axis (Y axis), the distances over the past 5 frames are used. Since the images were taken at 30 fps, one frame is 1/30 s, so the gradient is obtained by the least squares method from the time series of direction-specific distances at 1/30 s intervals; this gradient gives the relative speed in that direction (see Fig. 23). Performing the same process for each direction yields the relative speed over almost the entire planar area.
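The least-squares slope over the last five frames at 30 fps can be sketched directly (the closing speed in the example is invented):

```python
import numpy as np

def relative_speed(dists_m, fps=30.0):
    """Least-squares slope of the per-direction distance over recent frames.
    A negative slope means the plane end point is approaching the camera."""
    t = np.arange(len(dists_m)) / fps      # frame timestamps in seconds
    slope = np.polyfit(t, dists_m, 1)[0]   # m/s
    return slope * 3.6                     # km/h

# an obstacle closing at 10 m/s: distance shrinks 1/3 m per frame, 5 frames
dist_series = 20.0 - 10.0 * np.arange(5) / 30.0
print(round(relative_speed(dist_series), 1))  # -36.0 (km/h, approaching)
```

Fitting a line rather than differencing consecutive frames smooths the per-frame noise in the distance estimates, which is presumably why the least squares method over several frames is used.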
  • <Step S170> Superimpose the travelable plane area on the VPP image, and display the calculated direction-specific distance and direction-specific relative speed information.
  • By steps S150 and S160, the distance and relative speed of the road plane area in each direction can be calculated.
  • The image on the right side of Fig. 24 is obtained by superimposing the base image converted into a VPP image and the planar region converted into a VPP image.
  • The light blue area is the superimposed plane area,
  • and each point at the edge of the plane area is an end point of the area (each point corresponds to a direction-specific distance of the plane; see the points indicated by arrow A in the figure).
  • The following rules are applied to the coloring of the end points:
  • if the expansion speed of the plane in that direction is negative (the plane contracts, that is, the end point moves toward the camera), there is danger, so the point is displayed in a warm color;
  • if the expansion speed is positive (the plane expands, that is, the end point moves away from the camera), the risk is low, so the point is displayed in a cold color; end points with no expansion are displayed as green dots.
  • The color gradient at the lower left of Figure 24 shows how the point color varies with the expansion speed of the plane; each number corresponds to a speed in km/h.
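The warm/cold/green rule above can be summarized as a small mapping. The sketch below is hypothetical; in particular the tolerance band `tol` is an illustrative choice, not a value taken from the patent:

```python
def endpoint_color(expansion_speed_kmh, tol=0.5):
    """Map a plane-boundary expansion speed (km/h) to a display color class.

    Negative speed: plane contracting toward the camera (danger) -> warm.
    Positive speed: plane expanding away from the camera (low risk) -> cold.
    Otherwise: no significant expansion -> green.
    The tolerance band `tol` is an assumed parameter.
    """
    if expansion_speed_kmh < -tol:
        return "warm"
    if expansion_speed_kmh > tol:
        return "cold"
    return "green"
```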
  • The image on the left side of Fig. 24 is the reference image composited with the plane area, and the plane area is shown in light blue.
  • For each end point of the planar area, the corresponding coordinates in the reference image are calculated and displayed (indicated by arrow D in the figure).
  • The colors are determined by the same rule as for the end points in the right-hand image of Fig. 24.
  • The distance from the origin of the reference camera position is displayed in metres at points every fixed angle from the optical axis (every 5 degrees in this example; indicated by arrow E in the figure).
  • FIGS. 25 and 26 show examples of road plane area and obstacle detection results obtained with the present invention, using the above-described display of distance by direction and relative-speed information by direction.
  • Figures 25 and 26 show the result of applying the present invention to a stereo moving image captured by a camera mounted on a car.
  • The results show the road plane area and obstacle detection in an urban area; frames are extracted and displayed every 3 frames (1/10 second) from the processing results of the stereo moving image.
  • Two stereo cameras mounted on the automobile are used.
  • Using the time-series information obtained from the captured stereo moving image, the projective transformation matrix and the road plane area can be estimated stably and continuously.
  • Even when the road surface itself is inclined, the car body tilts on curves or uneven road surfaces, or the camera mounted on the car vibrates, so that the projective transformation matrix is not constant, these problems are solved by the present invention.
  • Furthermore, a virtual projection plane image (VPP image) is generated based on the inclination of the road surface calculated by decomposing the projective transformation matrix, and the distance from the vehicle to the obstacle and the relative speed are calculated for each direction from the plane area on the VPP image. Finally, the travelable plane area is superimposed on the VPP image, and the calculated distance by direction and relative speed by direction are displayed.
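The VPP generation step can be illustrated as a rotation homography: once the road-surface normal is recovered by decomposing the projective transformation matrix, the virtual camera is rotated so that its optical axis aligns with that normal, and the reference image is warped by H = K R K⁻¹. The sketch below is a minimal illustration, assuming a pinhole intrinsic matrix `K` and a unit plane normal `n` in camera coordinates as hypothetical inputs; the patent's actual decomposition is not reproduced here:

```python
import numpy as np

def vpp_homography(K, n):
    """Homography K @ R @ inv(K) warping the reference image into a virtual
    view whose optical axis is aligned with the road-plane normal n.

    Assumptions (illustrative, not from the patent): K is the 3x3 pinhole
    intrinsic matrix; n is a unit normal expressed in camera coordinates.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])        # virtual camera's viewing axis
    v = np.cross(n, z)                   # rotation axis (unnormalized)
    c = float(n @ z)                     # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-12:        # already aligned with the axis
        R = np.eye(3)
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        # Rodrigues-style formula rotating n onto z
        R = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return K @ R @ np.linalg.inv(K)
```

The resulting 3x3 matrix can be applied as a perspective warp to produce the top-down VPP view of the plane region.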
  • Displaying the travelable area and the obstacles to the vehicle visually makes this a highly practical method. A CCD camera is preferably used as the stereo camera; however, the present invention is not limited to this, and any other photographing means capable of capturing a stereo moving image may be used. The present invention can also be applied to traveling objects other than automobiles.

Industrial Applicability

  • The apparatus is inexpensive and versatile, and has the excellent effect of achieving cost reduction.
  • According to the road plane area and obstacle detection method using stereo images of the present invention, the travelable area and the distance and relative speed from the vehicle to the obstacle in each direction can be obtained dynamically and stably. Based on the obtained travelable area and the direction-wise distances and relative speeds, driver assistance such as collision-risk warning, collision avoidance, and automatic driving of the vehicle can be provided, which is an excellent effect.
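As one hedged illustration of the driver-assistance use mentioned above (the patent does not specify a warning rule), a per-direction time-to-collision check could look like the following; the function name and the 2 s threshold are assumptions for illustration:

```python
def collision_warning(distance_m, relative_speed_ms, ttc_threshold_s=2.0):
    """Warn when the obstacle in one direction would be reached sooner than
    the time-to-collision threshold.

    Negative relative speed = obstacle approaching; the 2 s threshold is an
    assumed parameter, not taken from the patent.
    """
    if relative_speed_ms >= 0.0:          # receding or stationary: no warning
        return False
    return distance_m / -relative_speed_ms < ttc_threshold_s
```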

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a planar road area and obstacle detection method that detects a travelable plane region and obstacles using a stereoscopic image from a camera mounted on a vehicle. The method comprises: a step of dynamically estimating a projective transformation matrix with respect to a road surface; a step of extracting a travelable plane region using the projective transformation matrix; a step of calculating the inclination of the road surface by decomposing the projective transformation matrix; a step of generating a VPP image based on the inclination of the road surface; a step of presenting the travelable plane surface on the VPP image, together with the position and direction of the user's vehicle; a step of calculating a distance in each direction based on the travelable plane region on the VPP image, the distance being the distance from the user's vehicle to the obstacle; a step of calculating a relative speed in each direction based on the travelable plane region on the VPP image, the relative speed in each direction being the relative speed of the user's vehicle with respect to the obstacle; and a superimposing step of displaying the VPP image with the travelable plane region superimposed on it and of displaying the distance in each direction and the relative speed in each direction.
PCT/JP2005/009701 2005-05-20 2005-05-20 Procédé de détection de zone de chaussée plane et d’obstruction à l’aide d’une image stéréoscopique WO2006123438A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2005/009701 WO2006123438A1 (fr) 2005-05-20 2005-05-20 Procédé de détection de zone de chaussée plane et d’obstruction à l’aide d’une image stéréoscopique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2005/009701 WO2006123438A1 (fr) 2005-05-20 2005-05-20 Procédé de détection de zone de chaussée plane et d’obstruction à l’aide d’une image stéréoscopique

Publications (1)

Publication Number Publication Date
WO2006123438A1 true WO2006123438A1 (fr) 2006-11-23

Family

ID=37431022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/009701 WO2006123438A1 (fr) 2005-05-20 2005-05-20 Procédé de détection de zone de chaussée plane et d’obstruction à l’aide d’une image stéréoscopique

Country Status (1)

Country Link
WO (1) WO2006123438A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2471276A (en) * 2009-06-22 2010-12-29 Bae Systems Plc Terrain sensing apparatus for an autonomous vehicle
CN108027423A (zh) * 2016-03-14 2018-05-11 日立建机株式会社 矿山用作业机械

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SEKI A. AND OKUTOMI M.: "Stereo Dogazo o Riyo shita Doromen Pattern Chushutsu ni yoru Jisharyo no Undo Suitei", INFORMATION PROCESSING SOCIETY OF JAPAN KENKYU HOKOKU 2004-CVIM-146, vol. 2004, no. 113, 12 November 2004 (2004-11-12), XP003006111 *
SEKI A. AND OKUTOMI M.: "Stereo Dogazo o Riyo shita Heimen Ryoiki Chushutsu ni yoru Shogaibutsu Kenshutsu", INFORMATION PROCESSING SOCIETY OF JAPAN KENKYU HOKOKU 2004-CVIM-143, vol. 2004, no. 26, 5 March 2004 (2004-03-05), XP003006112 *
SEKI A. AND OKUTOMI M.: "Stereo Dogazo o Riyo shita Heimen Ryoiki Chushutsu ni yoru Shogaibutsu Kenshutsu", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN KONPYUTA BIJON TO IMEJI MEDIA, 15 December 2004 (2004-12-15), XP003006110 *


Similar Documents

Publication Publication Date Title
US10354151B2 (en) Method of detecting obstacle around vehicle
US9846812B2 (en) Image recognition system for a vehicle and corresponding method
Goldbeck et al. Lane detection and tracking by video sensors
JP3937414B2 (ja) 平面検出装置及び検出方法
US8872925B2 (en) Method and device for camera calibration
US9862318B2 (en) Method to determine distance of an object from an automated vehicle with a monocular device
US20100246901A1 (en) Operation Support System, Vehicle, And Method For Estimating Three-Dimensional Object Area
US20110169957A1 (en) Vehicle Image Processing Method
Nedevschi et al. A sensor for urban driving assistance systems based on dense stereovision
KR20160123668A (ko) 무인자동주차 기능 지원을 위한 장애물 및 주차구획 인식 장치 및 그 방법
JP4344860B2 (ja) ステレオ画像を用いた道路平面領域並びに障害物検出方法
JP2006053756A (ja) 物体検出装置
Liu et al. Development of a vision-based driver assistance system with lane departure warning and forward collision warning functions
JP6552448B2 (ja) 車両位置検出装置、車両位置検出方法及び車両位置検出用コンピュータプログラム
CN105512641B (zh) 一种标定雨雪状态下视频中的动态行人及车辆的方法
WO2018149539A1 (fr) Procédé et appareil d'estimation d'une plage d'un objet mobile
JP5501084B2 (ja) 平面領域検出装置及びステレオカメラシステム
Dhiman et al. A multi-frame stereo vision-based road profiling technique for distress analysis
US20200193184A1 (en) Image processing device and image processing method
Yang Estimation of vehicle's lateral position via the Lucas-Kanade optical flow method
Hwang et al. Vision-based vehicle detection and tracking algorithm design
JP2006053754A (ja) 平面検出装置及び検出方法
Na et al. Drivable space expansion from the ground base for complex structured roads
WO2006123438A1 (fr) Procédé de détection de zone de chaussée plane et d’obstruction à l’aide d’une image stéréoscopique
JP4106163B2 (ja) 障害物検出装置及びその方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

NENP Non-entry into the national phase

Ref country code: RU

WWW Wipo information: withdrawn in national office

Country of ref document: RU

122 Ep: pct application non-entry in european phase

Ref document number: 05743283

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP