WO2014054124A1 - Road surface marking detection device and road surface marking detection method - Google Patents

Road surface marking detection device and road surface marking detection method

Info

Publication number
WO2014054124A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional object
image
search range
road marking
unit
Prior art date
Application number
PCT/JP2012/075565
Other languages
English (en)
Japanese (ja)
Inventor
嘉修 竹前
Original Assignee
トヨタ自動車株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by トヨタ自動車株式会社
Priority to PCT/JP2012/075565
Publication of WO2014054124A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • The present invention relates to a road marking detection device and a road marking detection method for detecting markings drawn on a road surface appearing in an image.
  • There is known a lane recognition device in which the image portion of an image other than the portion representing a three-dimensional object is used as a search range, and lane candidate points are searched for within that search range (see, for example, Patent Document 1).
  • The lane recognition device of Patent Document 1 first extracts the image portion representing a three-dimensional object in the image to be processed, then sets the image portion of that same image other than the portion representing the three-dimensional object as the search range, and searches for lane candidate points within the search range. That is, the three-dimensional object extraction process is executed on one image, and the lane candidate point search process is executed on the same image using the extraction result. It therefore takes time to search for lane candidate points in one image, and depending on the processing capability of the lane recognition device, the lane may not be recognized at an appropriate timing.
  • An object of the present invention is to provide a road marking detection device and a road marking detection method that can reduce the time required to detect road markings from an image.
  • A road marking detection apparatus includes an image acquisition unit that acquires an image of the surroundings of a vehicle, a three-dimensional object detection unit that detects a three-dimensional object in the image, a search range setting unit that sets a road marking search range in the image, and a road marking detection unit that detects road markings within the search range. The search range setting unit sets the search range based on the image portion of the three-dimensional object detected in another image acquired before the image for which the search range is set.
  • A road marking detection method includes an image acquisition step of acquiring an image of the surroundings of a vehicle, a three-dimensional object detection step of detecting a three-dimensional object in the image, a search range setting step of setting a road marking search range in the image, and a road marking detection step of detecting road markings within the search range. In the search range setting step, the search range is set based on the image portion of the three-dimensional object detected in another image acquired before the image for which the search range is set.
  • The present invention can provide a road marking detection device and a road marking detection method that can reduce the time required to detect road markings from an image.
  • FIG. 4 is a diagram illustrating a state in which edge detection processing related to luminance values has been performed on the standard image and the reference image of FIG. 3, and FIG. 5 is a diagram illustrating a state in which the SAD algorithm is applied to the standard image and the reference image of FIG. 4.
  • FIG. 1 is a functional block diagram showing a configuration example of a road marking detection apparatus 100 according to a first embodiment of the present invention.
  • the road marking detection device 100 is an in-vehicle device that detects a road marking from an image obtained by imaging the periphery of the vehicle.
  • "Road markings" are markings drawn on the road surface, road markings that are lines, symbols, or characters drawn on the road surface by road fences, paint, stones, etc., and the road center line, lane boundary line Including lane markings such as roadway outer lines.
  • the road marking detection device 100 is an in-vehicle device that detects a white line that is an example of a lane marking from an image obtained by imaging the front of the vehicle, and mainly includes the control device 1 and the imaging device 2.
  • the control device 1 is a device that controls the road marking detection device 100.
  • the control device 1 is a computer including a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
  • The control device 1 reads a program corresponding to each functional element, namely the image acquisition unit 10, the image processing unit 11, the parallax calculation unit 12, the three-dimensional object detection unit 13, the road marking detection unit 14, and the search range setting unit 15 described later, from the ROM, loads it into the RAM, and causes the CPU to execute the processing corresponding to each functional element.
  • the program corresponding to each functional element may be downloaded through a communication network or provided in a state recorded in a recording medium.
  • the imaging device 2 is an in-vehicle device that images the periphery of the vehicle, and outputs the captured image to the control device 1.
  • the imaging device 2 is a stereo camera including two cameras that simultaneously image the front of the vehicle. Each of the two cameras is arranged so that most of the imaging ranges overlap.
  • The imaging device 2 includes, for example, a right camera whose optical axis coincides with the vehicle central axis in a top view, and a left camera arranged at a horizontal distance from the right camera in the vehicle width direction.
  • Each of the two cameras may be arranged so as to be lined up and down in the vertical direction.
  • the imaging device 2 may be a camera system including three or more cameras that simultaneously image the front of the vehicle.
  • Next, various functional elements of the control device 1 will be described.
  • the image acquisition unit 10 is a functional element that acquires an image output by the imaging device 2.
  • the image acquisition unit 10 acquires an image output from the right camera as a standard image, and acquires an image output from the left camera as a reference image.
  • the image processing unit 11 is a functional element that performs image processing such as distortion correction processing, parallelization processing, and edge detection processing.
  • the “distortion correction process” is a process for correcting distortion such as internal distortion of the image due to the characteristics of the camera lens, external distortion of the image due to the posture of the camera, and the like. Specifically, the image processing unit 11 corrects the distortion by, for example, a correction conversion table based on the design value of the lens or by parameter estimation based on a distortion model in the radial direction.
  • The "parallelization process" is a process for generating a parallelized image such as would be obtained if the optical axes of the cameras were parallel to each other. Specifically, for example, the image processing unit 11 generates the parallelized image by correcting the image using the relative positional relationship between the cameras, which is calculated from the grid point positions in the images obtained by capturing a grid pattern placed in the common field of view of the cameras. Alternatively, the image processing unit 11 may generate the parallelized image by correcting the image using a pitch angle of each camera determined in advance.
  • “Edge detection process” is a process for detecting a boundary line of an object in an image. Specifically, the image processing unit 11 generates an edge image using, for example, a Sobel filter.
  • the image processing unit 11 performs distortion correction processing on the standard image and the reference image based on the internal parameters and external parameters of the left and right cameras acquired in advance. Thereafter, the image processing unit 11 rotates the reference image so that the horizontal line of the reference image matches the horizontal line of the standard image. Thereafter, the image processing unit 11 performs edge detection processing so as to obtain an edge image suitable for subsequent processing by the parallax calculation unit 12 and the road marking detection unit 14.
  • Note that the distortion correction process, the parallelization process, and the edge detection process may be executed in any order; for example, the distortion correction process and the parallelization process may be executed after the edge detection process, or the processes may be executed simultaneously.
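  • As an illustration only (not code from the patent), the preprocessing chain described above might look like the following OpenCV sketch; the calibration inputs K, dist, R and P are assumed to come from a prior stereo calibration of the left and right cameras.

```python
import cv2

def preprocess(raw, K, dist, R, P):
    """Distortion correction, parallelization (rectification) and Sobel edge
    detection for one grayscale camera image, as described above."""
    # Distortion correction + parallelization folded into a single remap
    map1, map2 = cv2.initUndistortRectifyMap(K, dist, R, P,
                                             (raw.shape[1], raw.shape[0]),
                                             cv2.CV_32FC1)
    rectified = cv2.remap(raw, map1, map2, cv2.INTER_LINEAR)
    # Edge detection: horizontal Sobel gradient magnitude as the edge image
    edges = cv2.convertScaleAbs(cv2.Sobel(rectified, cv2.CV_16S, 1, 0, ksize=3))
    return rectified, edges
```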
  • The parallax calculation unit 12 is a functional element that calculates the parallax of an object shown in an image. In this embodiment, the parallax calculation unit 12 calculates the parallax based on the degree of similarity between the edge pattern of a small region cut out from the standard image and the edge pattern of a small region cut out from the reference image.
  • FIG. 2 is a diagram for explaining a coordinate system related to the imaging device 2, which is a stereo camera having a right camera 2R and a left camera 2L.
  • FIG. 3 shows an example of the standard image CRR output by the right camera 2R and the reference image CRL output by the left camera 2L.
  • The standard image CRR and the reference image CRL in FIG. 3 are shown in a state where the distortion correction process and the parallelization process have already been performed.
  • FIG. 4 shows a state in which edge detection processing related to luminance values has been performed on the standard image CRR and the reference image CRL of FIG. 3. Further, FIG. 5 illustrates application of the SAD (Sum of Absolute Differences) algorithm to the standard image CRR and the reference image CRL of FIG. 4.
  • FIG. 6 is a graph showing an example of the relationship between the parallax and the SAD value.
  • As shown in FIG. 2, the left camera 2L and the right camera 2R are positioned a baseline length B apart in the vehicle width direction.
  • The point OR corresponds to the point on the standard image CRR through which the optical axis of the right camera 2R passes, and the point OL corresponds to the point on the reference image CRL through which the optical axis of the left camera 2L passes.
  • The position of an object P present in three-dimensional space is represented by coordinates P(X, Y, Z) in a three-dimensional orthogonal coordinate system whose origin is the optical center of the left camera 2L.
  • The position of the pixel corresponding to the object P in the standard image CRR is represented by coordinates PR(x, y) in a two-dimensional orthogonal coordinate system whose origin is the point OR, and the position of the pixel corresponding to the object P in the reference image CRL is represented by coordinates PL(u, v) in a two-dimensional orthogonal coordinate system whose origin is the point OL.
  • Because the images are parallelized, the value of the y coordinate of PR(x, y) is equal to the value of the v coordinate of PL(u, v).
  • The parallax Δd of the object P is expressed by the difference between the value of the x coordinate of PR(x, y) and the value of the u coordinate of PL(u, v).
  • In the example of FIG. 3, the coordinates PR(x, y) lie in the second quadrant of the two-dimensional orthogonal coordinate system whose origin is the point OR, and the coordinates PL(u, v) lie in the first quadrant of the two-dimensional orthogonal coordinate system whose origin is the point OL. Therefore, the parallax Δd is expressed by the sum of the absolute value of the x coordinate of PR(x, y) and the absolute value of the u coordinate of PL(u, v).
  • The image processing unit 11 performs the distortion correction process on the standard image CRR, and performs the distortion correction process and the parallelization process on the reference image CRL. The image processing unit 11 then performs the edge detection process on the standard image CRR to which the distortion correction process has been applied (the right diagram in FIG. 3) and on the reference image CRL to which the distortion correction process and the parallelization process have been applied (the left diagram in FIG. 3). As a result, the image processing unit 11 generates an edge image corresponding to the standard image CRR (the right diagram in FIG. 4) and an edge image corresponding to the reference image CRL (the left diagram in FIG. 4).
  • The parallax calculation unit 12 calculates the degree of similarity between the edge pattern of a small region cut out from the standard image CRR subjected to the edge detection process (the right diagram in FIG. 4) and the edge pattern of a small region cut out from the reference image CRL subjected to the edge detection process (the left diagram in FIG. 4). For example, the SAD algorithm is applied to this similarity calculation.
  • The SAD algorithm cuts out a SAD window WR, an image portion of a predetermined size centered on a pixel of interest APR of the standard image CRR. Further, the SAD algorithm cuts out a SAD window WL of the same size as the SAD window WR, centered on a pixel of interest APL of the reference image CRL. The SAD algorithm then derives, for every pair of corresponding pixels in the SAD window WR and the SAD window WL, the absolute value of the difference between their edge intensity values, and calculates the sum of these absolute values as the SAD value. The SAD value calculated with the SAD window WR and the SAD window WL at these initial positions corresponds to a parallax of zero.
  • As indicated by the arrow AR1, the SAD algorithm then moves the SAD window WL of the reference image CRL one pixel in the +u direction and calculates, by the same process as described above, the SAD value corresponding to a parallax of one pixel.
  • The SAD algorithm continues calculating SAD values until the amount of movement of the pixel of interest APL of the SAD window WL in the u-axis direction reaches the predetermined number of pixels set as the parallax search range DR.
  • FIG. 6 is a graph showing an example of the relationship between the SAD value calculated in this way and the parallax.
  • The parallax calculation unit 12 takes the parallax at which the SAD value is smallest, that is, the parallax at which the edge pattern in the SAD window WR and the edge pattern in the SAD window WL are most similar, as the parallax Δd for the pixel of interest APR of the standard image CRR.
  • After moving the position of the pixel of interest APR of the SAD window WR within the standard image CRR, the parallax calculation unit 12 calculates the parallax Δd for the moved pixel of interest APR by the same processing as described above.
  • The parallax calculation unit 12 continues this process until the parallax Δd has been calculated for all pixels in a predetermined region of the standard image CRR.
  • The predetermined region of the standard image CRR may be the entire standard image CRR.
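  • The window-based SAD search described above can be sketched as follows; this is a minimal NumPy illustration written for this description (window size and search range are arbitrary example values), not an implementation from the patent.

```python
import numpy as np

def sad_disparity(std_edge, ref_edge, window=5, search_range=64):
    """Per-pixel disparity by SAD block matching on edge-intensity images.

    std_edge: edge image of the standard (right-camera) image
    ref_edge: edge image of the rectified reference (left-camera) image
    Returns an integer disparity map (0 where nothing could be evaluated).
    """
    half = window // 2
    h, w = std_edge.shape
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            win_r = std_edge[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_sad = 0, np.inf
            # Move the window in the reference image one pixel at a time (+u direction)
            for d in range(search_range):
                u = x + d
                if u + half >= w:
                    break
                win_l = ref_edge[y - half:y + half + 1, u - half:u + half + 1]
                sad = np.abs(win_r.astype(np.int32) - win_l.astype(np.int32)).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disparity[y, x] = best_d  # parallax with the smallest SAD value
    return disparity
```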
  • Note that an SSD (Sum of Squared Differences) algorithm or an SGM (Semi-Global Matching) algorithm may be used for the similarity calculation instead of the SAD algorithm.
  • the edge intensity value is used as the feature quantity for calculating the similarity, but the luminance value or a combination of the luminance value and the edge intensity value may be used as the feature quantity.
  • In this embodiment, the parallax Δd is calculated in units of pixels, but the parallax Δd may be calculated in units of sub-pixels. Further, the parallax calculation unit 12 may calculate the parallax by a method other than the method described above.
  • The three-dimensional object detection unit 13 is a functional element that detects an image representing a three-dimensional object in the standard image (hereinafter referred to as a "three-dimensional object image").
  • the “three-dimensional object” is an object having a predetermined height with respect to the road surface, and includes, for example, a preceding vehicle, an oncoming vehicle, a pedestrian, a building wall, a utility pole, and the like.
  • In this embodiment, the three-dimensional object detection unit 13 detects a three-dimensional object based on the distribution of the parallax Δd calculated by the parallax calculation unit 12 for each pixel of the standard image CRR.
  • FIG. 7 shows an example of vertical slice regions in the standard image CRR.
  • FIG. 8 shows an example of detection of three-dimensional object images on the standard image CRR.
  • Specifically, the three-dimensional object detection unit 13 sets vertical slice regions SL1, SL2, SL3, ..., each having a width of a predetermined number of pixels, on the standard image CRR. Then, the three-dimensional object detection unit 13 counts, for each parallax value, the frequency (number of votes) of the parallax of each pixel included in the vertical slice region SL1. The three-dimensional object detection unit 13 similarly counts the number of votes for each parallax value for the other vertical slice regions SL2, SL3, and so on.
  • The three-dimensional object detection unit 13 extracts pixels whose parallax has a vote count equal to or greater than a predetermined threshold as pixels representing a three-dimensional object (hereinafter referred to as "three-dimensional object pixels"). This is based on the tendency of the vote count for a specific parallax to increase when a three-dimensional object exists. More specifically, the distances between the right camera 2R and the points on the surface of the three-dimensional object facing the right camera 2R form a cluster of roughly equal values, and therefore the parallaxes of the pixels constituting the three-dimensional object image, which occupies an image portion of a certain size within the vertical slice region, also form a cluster of roughly equal values.
  • After that, the three-dimensional object detection unit 13 generates a three-dimensional object cell by grouping the three-dimensional object pixels extracted within one vertical slice region that have the same or close parallax values (hereinafter referred to as "corresponding three-dimensional object pixels"). In this case, when two corresponding three-dimensional object pixels are separated by no more than a predetermined number of pixels, the three-dimensional object detection unit 13 also includes the non-corresponding pixels between them in the three-dimensional object cell, treating them as corresponding three-dimensional object pixels. Furthermore, the three-dimensional object detection unit 13 generates a combined three-dimensional object cell by combining a plurality of three-dimensional object cells containing three-dimensional object pixels with the same or close parallax values. In this case, when two three-dimensional object cells are separated by no more than a predetermined number of pixels, the three-dimensional object detection unit 13 joins them, treating the pixels between them that are not corresponding three-dimensional object pixels as corresponding three-dimensional object pixels.
  • The three-dimensional object detection unit 13 stores the positions of the generated three-dimensional object cells and combined three-dimensional object cells. Specifically, the three-dimensional object detection unit 13 stores the three-dimensional object cells and the combined three-dimensional object cells in the RAM so that they can be referred to by their coordinate values and the like, and allows the stored cells to be designated as image portions of a three-dimensional object.
  • FIG. 8 shows an example of three-dimensional object cells in the standard image.
  • The upper part of FIG. 8 shows the standard image CRR that is the target of the parallax calculation process and the three-dimensional object detection process.
  • The middle part of FIG. 8 shows frames FR1 to FR6, each representing a three-dimensional object cell, superimposed on the standard image CRR.
  • The lower part of FIG. 8 shows a combined frame CFR, representing the combined three-dimensional object cell generated by combining the three-dimensional object cells represented by the frames FR1 to FR6, superimposed on the standard image CRR.
  • the three-dimensional object detection unit 13 may distinguish a three-dimensional object image from an image other than a three-dimensional object image such as a road surface image by a method other than the method described above.
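  • A minimal sketch of the slice-wise parallax voting used to extract three-dimensional object pixels is shown below (illustrative only; slice width, vote threshold and maximum disparity are assumed example values, not values from the patent).

```python
import numpy as np

def extract_object_pixels(disparity, slice_width=16, vote_threshold=50, max_disp=64):
    """Mark pixels whose disparity gathers many votes within a vertical slice.

    disparity: integer disparity map of the standard image (0 = unknown)
    Returns a boolean mask of candidate three-dimensional-object pixels.
    """
    h, w = disparity.shape
    mask = np.zeros((h, w), dtype=bool)
    for x0 in range(0, w, slice_width):
        sl = disparity[:, x0:x0 + slice_width]
        votes = np.bincount(sl[sl > 0], minlength=max_disp + 1)
        # Disparities with enough votes indicate a surface at a roughly
        # constant distance, i.e. a three-dimensional object.
        popular = np.flatnonzero(votes >= vote_threshold)
        mask[:, x0:x0 + slice_width] = np.isin(sl, popular)
    return mask
```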
  • The road marking detection unit 14 is a functional element that detects road markings from the standard image. In this embodiment, it detects a white line within the search range, which is a part of the standard image.
  • Specifically, the road marking detection unit 14 detects two edge lines lying along straight lines from the standard image subjected to the edge detection process. For example, a Hough transform algorithm is applied to the edge line extraction.
  • the road marking detection unit 14 may detect a white line using a luminance value instead of the edge intensity value. Further, the road marking detection unit 14 may detect a white line by a method other than the method described above.
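  • As an illustrative sketch (not the patent's implementation), straight edge lines can be extracted with OpenCV's probabilistic Hough transform, restricted to a given search-range mask; the threshold and length parameters below are arbitrary example values.

```python
import cv2
import numpy as np

def detect_white_line_candidates(edge_image, search_mask):
    """Detect straight edge lines (white-line candidates) within a search range.

    edge_image:  8-bit edge image of the standard image (e.g. Sobel magnitude)
    search_mask: 8-bit mask, 255 where searching is allowed, 0 where excluded
    Returns line segments as (x1, y1, x2, y2) tuples.
    """
    masked = cv2.bitwise_and(edge_image, edge_image, mask=search_mask)
    binary = cv2.threshold(masked, 80, 255, cv2.THRESH_BINARY)[1]  # 80: example
    # Probabilistic Hough transform; parameters are illustrative only.
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```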
  • the search range setting unit 15 is a functional element that sets a search range when the road marking detection unit 14 detects a road marking.
  • the search range setting unit 15 sets the search range based on the image portion of the three-dimensional object determined in the past.
  • Specifically, the search range setting unit 15 excludes from the search range the image portion of the standard image to be processed by the road marking detection unit 14 (hereinafter referred to as the "current standard image") that corresponds to the three-dimensional object cells and combined three-dimensional object cells generated and stored by the three-dimensional object detection unit 13 for another standard image acquired by the image acquisition unit 10 before the current standard image (hereinafter referred to as the "preceding standard image").
  • The preceding standard image is desirably the standard image acquired by the image acquisition unit 10 in the immediately preceding cycle.
  • The search range setting unit 15 may exclude from the search range not only the image portions corresponding to the three-dimensional object cells and the combined three-dimensional object cells, but also image portions other than those representing the road surface, such as an image portion representing the sky.
  • The road marking detection unit 14 may detect a white line from the image portion remaining within the search range after part of the standard image has been excluded from the search range, or it may detect white lines from the entire standard image and then reject those lying outside the search range. Alternatively, the road marking detection unit 14 may extract edge lines from the entire standard image, reject the edge lines outside the search range, and then detect a white line based on the remaining edge lines.
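  • A sketch of how this exclusion could be realised is shown below (illustrative; it assumes the stored cells are available as (x, y, width, height) rectangles on the current standard image). The resulting mask can be passed to the Hough-based sketch shown earlier.

```python
import numpy as np

def build_search_mask(image_shape, object_cells):
    """Build a white-line search mask by excluding stored three-dimensional
    object cells, given as (x, y, width, height) rectangles on the current
    standard image."""
    mask = np.full(image_shape[:2], 255, dtype=np.uint8)
    for (x, y, w, h) in object_cells:
        mask[y:y + h, x:x + w] = 0   # exclude the object's image portion
    return mask
```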
  • FIG. 9 is a flowchart showing the flow of the first three-dimensional object / white line detection process, and the road marking detection apparatus 100 repeatedly executes the first three-dimensional object / white line detection process at a predetermined cycle.
  • First, the image acquisition unit 10 in the control device 1 of the road marking detection apparatus 100 acquires a standard image and a reference image (step S1). Specifically, the image acquisition unit 10 acquires the image output by the right camera 2R of the imaging device 2 as the standard image, and acquires the image output by the left camera 2L of the imaging device 2 as the reference image.
  • the image processing unit 11 in the control device 1 of the road marking detection device 100 executes distortion correction processing, parallelization processing, and edge detection processing (step S2). Specifically, the image processing unit 11 performs distortion correction processing and edge detection processing on the standard image, and executes distortion correction processing, parallelization processing, and edge detection processing on the reference image. Note that the parallelization process may be executed on both the standard image and the reference image.
  • Thereafter, the control device 1 of the road marking detection apparatus 100 executes, in separate threads and in parallel, a first processing group including the parallax calculation process and the three-dimensional object detection process, and a second processing group including the search range setting process and the white line detection process. That is, detection of the three-dimensional object by the three-dimensional object detection unit 13 and detection of the road marking by the road marking detection unit 14 are performed so as to overlap at least partially in time.
  • The parallax calculation unit 12 in the control device 1 of the road marking detection apparatus 100 calculates the parallax of the objects shown in the standard image (step S3). Specifically, the parallax calculation unit 12 calculates the degree of similarity between the edge pattern of a small region centered on a pixel of interest cut out from the standard image subjected to the edge detection process and the edge pattern of a small region centered on a corresponding pixel of interest cut out from the reference image subjected to the edge detection process.
  • The parallax calculation unit 12 calculates the similarity while moving the position of the small region within the reference image, and calculates the parallax for the pixel of interest of the standard image based on the coordinates of the reference-image pixel of interest at which the similarity is highest. In this way, the parallax calculation unit 12 calculates the parallax for all pixels in the standard image.
  • The three-dimensional object detection unit 13 in the control device 1 of the road marking detection apparatus 100 then detects three-dimensional object images in the standard image (step S4). Specifically, the three-dimensional object detection unit 13 generates three-dimensional object cells and combined three-dimensional object cells based on the parallax distribution over all pixels of the standard image, and stores the positions of the generated three-dimensional object cells and combined three-dimensional object cells so that they can be referred to.
  • Meanwhile, the search range setting unit 15 in the control device 1 of the road marking detection apparatus 100 sets the search range used when the road marking detection unit 14 detects road markings (step S5). Specifically, the search range setting unit 15 excludes from the search range the image portion of the standard image acquired by the image acquisition unit 10 in step S1 of the currently executed first three-dimensional object/white line detection process (the current standard image) that corresponds to the positions of the three-dimensional object cells and combined three-dimensional object cells generated and stored by the three-dimensional object detection unit 13 based on the standard image acquired by the image acquisition unit 10 in step S1 of the previously executed first three-dimensional object/white line detection process (the preceding standard image).
  • Thereafter, the road marking detection unit 14 in the control device 1 of the road marking detection apparatus 100 detects a white line from the current standard image (step S6). Specifically, the road marking detection unit 14 detects two edge lines lying along straight lines as a white line by applying a Hough transform algorithm to the current standard image subjected to the edge detection process.
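  • Combining the sketches above, one cycle of the first three-dimensional object/white line detection process could be organised as follows. This is only an illustration of the control flow; detect_objects and sobel_edges are assumed helper functions standing in for steps S2 to S4, and the object cells of the previous cycle are passed in as previous_object_cells.

```python
from concurrent.futures import ThreadPoolExecutor

def detection_cycle(executor: ThreadPoolExecutor, std_img, ref_img,
                    previous_object_cells):
    """One cycle: the two processing groups run in parallel, and the white-line
    search range is built from the object cells of the previous cycle."""
    # First processing group: parallax calculation + 3D object detection
    objects_future = executor.submit(detect_objects, std_img, ref_img)
    # Second processing group: search range setting + white line detection,
    # using the previous cycle's cells instead of waiting for objects_future
    mask = build_search_mask(std_img.shape, previous_object_cells)
    lines = detect_white_line_candidates(sobel_edges(std_img), mask)
    return objects_future.result(), lines  # new cells feed the next cycle
```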
  • In this way, because the road marking detection apparatus 100 excludes the image portions of three-dimensional objects from the search range when detecting a white line, a three-dimensional object such as a preceding vehicle, a wall, or a utility pole is not erroneously detected as a white line, and a white line on the road surface can be reliably detected.
  • the road marking detection apparatus 100 can prevent malfunctions and unnecessary operations in various systems that use the detection result of the white line.
  • Because the road marking detection apparatus 100 processes the first processing group and the second processing group of the first three-dimensional object/white line detection process in parallel, the overall processing time of the first three-dimensional object/white line detection process can be shortened compared to processing the two groups sequentially.
  • Further, the road marking detection apparatus 100 uses the processing result of the first processing group of an already executed first three-dimensional object/white line detection process in the processing of the second processing group of the currently executed first three-dimensional object/white line detection process. That is, the road marking detection apparatus 100 executes the second processing group of the currently executed first three-dimensional object/white line detection process without waiting for the processing result of the first processing group of that same process. Therefore, the road marking detection apparatus 100 can use the detection result of the three-dimensional object to increase the accuracy of the white line detection while reducing the overall processing time of the first three-dimensional object/white line detection process.
  • In addition, because the road marking detection apparatus 100 processes the first processing group and the second processing group of the first three-dimensional object/white line detection process in parallel, the limit on the processing time of each processing group can be relaxed compared to processing the two groups sequentially. Therefore, the road marking detection apparatus 100 can increase the detection accuracy of both the three-dimensional object and the white line without extending the overall processing time; in other words, the road marking detection apparatus 100 can reduce false detections of both the three-dimensional object and the white line.
  • FIG. 10 is a functional block diagram illustrating a configuration example of the road marking detection apparatus 100A.
  • FIG. 11 is a flowchart showing the flow of a process in which the road marking detection apparatus 100A executes detection of a three-dimensional object and detection of a white line in parallel (hereinafter referred to as the "second three-dimensional object/white line detection process").
  • the road marking detection apparatus 100A repeatedly executes the second three-dimensional object / white line detection process at a predetermined cycle.
  • The road marking detection apparatus 100A predicts the position of the image portion of a three-dimensional object in a standard image to be acquired later, and excludes the image portion corresponding to the predicted position from the search range when detecting a white line. In this respect, the road marking detection apparatus 100A differs from the road marking detection apparatus 100 in the method of determining the image portion to be excluded from the search range: the road marking detection apparatus 100 excludes from the search range the image portion of the currently acquired standard image that corresponds to the image portion of the three-dimensional object in the previously acquired standard image.
  • The road marking detection apparatus 100A differs from the road marking detection apparatus 100 of FIG. 1 in that it has a three-dimensional object position prediction unit 16, but is otherwise common to the road marking detection apparatus 100. Therefore, description of the common points is omitted, and the differences are described in detail.
  • The three-dimensional object position prediction unit 16 is a functional element that predicts the position of a three-dimensional object in a standard image to be acquired later, based on the movement of the three-dimensional object appearing in a previously acquired standard image and in the most recently acquired standard image.
  • Specifically, the three-dimensional object position prediction unit 16 derives the three-dimensional position and moving speed of the three-dimensional object shown in the standard image based on a previously acquired standard image and the most recently acquired standard image. Desirably, the three-dimensional object position prediction unit 16 derives the three-dimensional position and moving speed of the three-dimensional object at the time the current standard image was acquired, based on the standard image currently acquired by the image acquisition unit 10 (the current standard image) and the previously acquired standard image (the preceding standard image).
  • The three-dimensional object position prediction unit 16 then predicts the position of the image portion of the three-dimensional object in the next standard image to be acquired (the subsequent standard image), based on the three-dimensional position and moving speed of the three-dimensional object at the time the current standard image was acquired and the time from the acquisition of the current standard image until the acquisition of the subsequent standard image.
  • The three-dimensional object position prediction unit 16 takes the transition of the host vehicle position into account when deriving the three-dimensional position and moving speed of the three-dimensional object at the time the current standard image was acquired.
  • The three-dimensional object position prediction unit 16 also takes the transition of the host vehicle position into account when predicting the position of the image portion of the three-dimensional object in the subsequent standard image.
  • The three-dimensional object position prediction unit 16 may acquire the host vehicle position based on, for example, the output of an in-vehicle positioning device (not shown) such as a GPS receiver, or may acquire the host vehicle position based on the outputs of various in-vehicle sensors such as a vehicle speed sensor and a steering angle sensor.
  • For example, the three-dimensional object position prediction unit 16 derives the three-dimensional coordinates (X, Y, Z) of the three-dimensional object represented by a pixel of interest from the parallax Δd of that pixel of interest included in a combined three-dimensional object cell of the current standard image.
  • Using the focal length f, the baseline length B, and the two-dimensional coordinates PR(x, y) of the pixel of interest in the two-dimensional orthogonal coordinate system whose origin is the point OR, the distance Z is expressed by Z = f × B ÷ Δd, the height Y is expressed by Y = y × Z ÷ f = y × B ÷ Δd, and the lateral position X is expressed by X = x × Z ÷ f = x × B ÷ Δd.
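  • In code form, the triangulation above amounts to the following (illustrative sketch; f is the focal length in pixels and B the baseline length of the stereo camera):

```python
def pixel_to_xyz(x, y, disparity, f, B):
    """Triangulate a standard-image pixel (x, y) with parallax `disparity`
    into camera coordinates (X, Y, Z)."""
    Z = f * B / disparity          # distance
    X = x * B / disparity          # lateral position (= x * Z / f)
    Y = y * B / disparity          # height (= y * Z / f)
    return X, Y, Z
```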
  • The three-dimensional object position prediction unit 16 derives the three-dimensional coordinates of the three-dimensional object represented by each pixel from the parallaxes of all pixels included in the combined three-dimensional object cell. Note that, in order to shorten the calculation time, the three-dimensional object position prediction unit 16 may derive the three-dimensional coordinates of the three-dimensional object from the parallaxes of only some of the pixels included in the combined three-dimensional object cell, rather than all of them.
  • For example, the three-dimensional object position prediction unit 16 may derive only the three-dimensional coordinates of the three-dimensional object represented by the pixel having the largest parallax and by the pixel having the smallest parallax among the pixels included in the combined three-dimensional object cell.
  • The three-dimensional object position prediction unit 16 then derives the size in the XY plane of the three-dimensional object related to the combined three-dimensional object cell based on the maximum and minimum values of the lateral position X and the maximum and minimum values of the height Y. Further, the three-dimensional object position prediction unit 16 derives the center of gravity of the three-dimensional object related to the combined three-dimensional object cell taking the distance Z into account, and thereby derives the three-dimensional position occupied by the three-dimensional object related to the combined three-dimensional object cell.
  • The three-dimensional object position prediction unit 16 performs the same processing on the combined three-dimensional object cell in the preceding standard image, and derives the three-dimensional position occupied by the three-dimensional object related to the combined three-dimensional object cell in the preceding standard image.
  • The three-dimensional position occupied by the three-dimensional object related to the combined three-dimensional object cell in the preceding standard image is preferably the three-dimensional position derived in the previously executed second three-dimensional object/white line detection process, because reusing that derivation result shortens the calculation time.
  • The three-dimensional object position prediction unit 16 associates the three-dimensional object related to the combined three-dimensional object cell in the current standard image with the three-dimensional object related to the combined three-dimensional object cell in the preceding standard image, based on the similarity of the derived three-dimensional positions (for example, whether they lie within a predetermined distance of each other).
  • The three-dimensional object position prediction unit 16 then derives the velocity vector of the three-dimensional object based on its three-dimensional position at the time the current standard image was acquired and the three-dimensional position of the associated three-dimensional object at the time the preceding standard image was acquired.
  • Using the derived velocity vector and the three-dimensional position of the three-dimensional object at the time the current standard image was acquired, the three-dimensional object position prediction unit 16 predicts the three-dimensional position of the corresponding three-dimensional object at the time the subsequent standard image will be acquired.
  • The three-dimensional object position prediction unit 16 then determines the position of the image portion corresponding to the three-dimensional object in the subsequent standard image based on the predicted three-dimensional position of the three-dimensional object, the size of the three-dimensional object, the internal and external parameters of the left and right cameras, and the like. The three-dimensional object position prediction unit 16 stores this image portion in the RAM so that it can be referred to by its coordinate values and the like, and allows the stored image portion to be designated as the image portion of a three-dimensional object.
  • the three-dimensional object position prediction unit 16 may consider other data such as the vehicle speed and the yaw rate in order to improve the prediction accuracy of the three-dimensional position of the three-dimensional object. Further, the three-dimensional object position prediction unit 16 may predict the three-dimensional position of the three-dimensional object by a method other than the above-described method.
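  • A constant-velocity version of this prediction could be sketched as follows (illustrative only; host-vehicle motion compensation is omitted, and the simple pinhole projection assumes the image origin at the optical-axis point OR):

```python
import numpy as np

def predict_object_region(pos_prev, pos_curr, dt_prev, dt_next, size_xy, f):
    """Constant-velocity prediction of a tracked object's image portion.

    pos_prev, pos_curr: np.array([X, Y, Z]) at the preceding / current image
    dt_prev: time between the preceding and current image acquisitions
    dt_next: time until the subsequent image acquisition
    size_xy: (width, height) of the object in the X-Y plane
    f: focal length in pixels
    Returns (u, v, w, h): predicted centre and pixel size of the image portion.
    """
    velocity = (pos_curr - pos_prev) / dt_prev        # 3D velocity vector
    X, Y, Z = pos_curr + velocity * dt_next           # predicted 3D position
    u, v = f * X / Z, f * Y / Z                       # projected image centre
    w, h = f * size_xy[0] / Z, f * size_xy[1] / Z     # projected pixel size
    return u, v, w, h
```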
  • The second three-dimensional object/white line detection process differs from the first three-dimensional object/white line detection process of FIG. 9 in that step S7 is added to the first processing group and step S5A is included in the second processing group instead of step S5, but is common in other respects. Therefore, description of the common points is omitted, and the differences are described in detail.
  • In step S7, the three-dimensional object position prediction unit 16 predicts the three-dimensional position of the corresponding three-dimensional object at the time the subsequent standard image will be acquired, based on the detection result of the three-dimensional object in the current standard image and the detection result of the three-dimensional object in the preceding standard image.
  • The three-dimensional object position prediction unit 16 then determines, from the predicted three-dimensional position, the image portion corresponding to the three-dimensional object and stores it in the RAM so that it can be referred to.
  • In step S5A, the search range setting unit 15 excludes from the search range the image portion of the standard image acquired by the image acquisition unit 10 in step S1 of the currently executed second three-dimensional object/white line detection process (which corresponds to the subsequent standard image of the previously executed process) that corresponds to the image portion stored by the three-dimensional object position prediction unit 16 in step S7 of the previously executed second three-dimensional object/white line detection process.
  • In this way, because the road marking detection apparatus 100A excludes the image portions of three-dimensional objects from the search range when detecting a white line, a three-dimensional object such as a preceding vehicle, a wall, or a utility pole is not erroneously detected as a white line, and a white line on the road surface can be reliably detected.
  • the road marking detection device 100A can prevent malfunctions and unnecessary operations in various systems that use the detection result of the white line.
  • Because the road marking detection apparatus 100A processes the first processing group and the second processing group of the second three-dimensional object/white line detection process in parallel, the overall processing time of the second three-dimensional object/white line detection process can be shortened compared to processing the two groups sequentially.
  • Further, the road marking detection apparatus 100A uses the processing result of the first processing group of an already executed second three-dimensional object/white line detection process in the processing of the second processing group of the currently executed second three-dimensional object/white line detection process. That is, the road marking detection apparatus 100A executes the second processing group of the currently executed second three-dimensional object/white line detection process without waiting for the processing result of the first processing group of that same process. Therefore, the road marking detection apparatus 100A can use the detection result of the three-dimensional object to increase the accuracy of the white line detection while reducing the overall processing time of the second three-dimensional object/white line detection process.
  • In addition, because the road marking detection apparatus 100A processes the first processing group and the second processing group of the second three-dimensional object/white line detection process in parallel, the limit on the processing time of each processing group can be relaxed compared to processing the two groups sequentially. Therefore, the road marking detection apparatus 100A can increase the detection accuracy of both the three-dimensional object and the white line without extending the overall processing time; in other words, the road marking detection apparatus 100A can reduce false detections of both the three-dimensional object and the white line.
  • Further, the road marking detection apparatus 100A predicts the current position of a three-dimensional object based on its movement in the standard images acquired so far, and determines the image portion of the three-dimensional object in the standard image accordingly. The road marking detection apparatus 100A then excludes that image portion of the three-dimensional object from the search range when detecting a white line. For this reason, even when the three-dimensional object appearing in the standard image is a moving object such as a preceding vehicle or an oncoming vehicle, the road marking detection apparatus 100A can more reliably exclude the image portion of the three-dimensional object from the search range when detecting a white line. This effect becomes greater as the interval between standard image acquisitions becomes longer.
  • FIG. 12 is a functional block diagram illustrating a configuration example of the road marking detection apparatus 100B.
  • FIG. 13 is a flowchart showing the flow of a process in which the road marking detection apparatus 100B executes detection of a three-dimensional object and detection of a white line in parallel (hereinafter referred to as the "third three-dimensional object/white line detection process").
  • the road marking detection device 100B repeatedly executes the third three-dimensional object / white line detection process at a predetermined cycle.
  • The road marking detection apparatus 100B reduces the size of the image portion of a detected three-dimensional object as the distance between the imaging device 2 and the three-dimensional object increases, because the error in the parallax grows with that distance, in order to prevent the road surface image from being included in the image portion of the detected three-dimensional object.
  • In this way, the road marking detection apparatus 100B differs from the road marking detection apparatuses 100 and 100A in that the size of the image portion excluded from the search range is adjusted according to the distance between the imaging device 2 and the three-dimensional object.
  • The road marking detection apparatus 100B differs from the road marking detection apparatus 100A of FIG. 10 in that it has a three-dimensional object image portion size adjustment unit 17, but is otherwise common to the road marking detection apparatus 100A.
  • Note that the road marking detection apparatus 100B may omit the three-dimensional object position prediction unit 16.
  • In that case, the road marking detection apparatus 100B differs from the road marking detection apparatus 100 of FIG. 1 in that it has the three-dimensional object image portion size adjustment unit 17, but is otherwise common to the road marking detection apparatus 100. Therefore, description of the common points is omitted, and the differences are described in detail.
  • The three-dimensional object image portion size adjustment unit 17 is a functional element that adjusts the size of the image portion of the three-dimensional object that the search range setting unit 15 excludes from the search range.
  • Specifically, the three-dimensional object image portion size adjustment unit 17 adjusts the size of the image portion corresponding to the three-dimensional object cells and combined three-dimensional object cells generated and stored by the three-dimensional object detection unit 13, or the size of the image portion of the three-dimensional object predicted and stored by the three-dimensional object position prediction unit 16, according to the distance between the imaging device 2 and the three-dimensional object.
  • For example, the three-dimensional object image portion size adjustment unit 17 derives a coefficient that decreases as the distance Z (= focal length f × baseline length B ÷ parallax Δd) increases, and multiplies the size in the XY plane of the three-dimensional object related to the combined three-dimensional object cell by that coefficient.
  • Alternatively, when the distance Z is equal to or greater than a predetermined distance, the three-dimensional object image portion size adjustment unit 17 may multiply the size in the XY plane of the three-dimensional object related to the combined three-dimensional object cell, derived by the three-dimensional object position prediction unit 16, by a coefficient having a constant value.
  • In this way, the three-dimensional object image portion size adjustment unit 17 reduces the size of the image portion of the three-dimensional object predicted and stored by the three-dimensional object position prediction unit 16 as the distance Z increases.
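  • An illustrative sketch of such a distance-dependent shrink is shown below (z_ref and k_min are arbitrary example values, not values from the patent):

```python
def adjusted_region(u, v, w, h, disparity, f, B, z_ref=20.0, k_min=0.5):
    """Shrink a three-dimensional object's image portion with distance.

    (u, v, w, h): centre and size of the image portion in pixels
    z_ref, k_min: example tuning values giving a coefficient of 1.0 at or
    below z_ref metres, falling toward k_min beyond it.
    """
    Z = f * B / disparity                       # distance to the object
    k = max(k_min, min(1.0, z_ref / Z))         # decreases as Z increases
    return u, v, w * k, h * k
```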
  • The third three-dimensional object/white line detection process differs from the first three-dimensional object/white line detection process of FIG. 9 and the second three-dimensional object/white line detection process of FIG. 11 in that step S5B is included in the second processing group instead of steps S5 and S5A, and step S8 is added to the first processing group, but is common in other respects. Therefore, description of the common points is omitted, and the differences are described in detail.
  • In step S8, the three-dimensional object image portion size adjustment unit 17 adjusts, as described above, the size of the image portion of the three-dimensional object predicted and stored by the three-dimensional object position prediction unit 16 according to the distance between the imaging device 2 and the three-dimensional object.
  • In step S5B, the search range setting unit 15 excludes from the search range the image portion of the standard image acquired by the image acquisition unit 10 in step S1 of the currently executed third three-dimensional object/white line detection process that corresponds to the image portion whose size was adjusted by the three-dimensional object image portion size adjustment unit 17 in step S8 of the previously executed third three-dimensional object/white line detection process.
  • In addition to the effects of the road marking detection apparatuses 100 and 100A, the road marking detection apparatus 100B adjusts the size of the image portion of the three-dimensional object according to the distance between the imaging device 2 and the three-dimensional object, so the search range can be prevented from being excessively limited. As a result, the road marking detection apparatus 100B can suppress the occurrence of white line detection omissions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A road surface marking detection device (100) according to one embodiment of the present invention comprises: an image acquisition unit (10) that acquires an image of the surroundings of a vehicle; a three-dimensional object detection unit (13) that detects a three-dimensional object in the image; a search range setting unit (15) that sets a road surface marking search range in the image; and a road surface marking detection unit (14) that detects road surface markings within the search range. The search range setting unit (15) sets the search range on the basis of the image portion of the three-dimensional object detected using a previously acquired image different from the image for which the search range is set.
PCT/JP2012/075565 2012-10-02 2012-10-02 Road surface marking detection device and road surface marking detection method WO2014054124A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/075565 WO2014054124A1 (fr) 2012-10-02 2012-10-02 Road surface marking detection device and road surface marking detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/075565 WO2014054124A1 (fr) 2012-10-02 2012-10-02 Road surface marking detection device and road surface marking detection method

Publications (1)

Publication Number Publication Date
WO2014054124A1 (fr) 2014-04-10

Family

ID=50434485

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/075565 WO2014054124A1 (fr) Road surface marking detection device and road surface marking detection method

Country Status (1)

Country Link
WO (1) WO2014054124A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0896299A (ja) * 1994-09-20 1996-04-12 Nissan Motor Co Ltd Travel path recognition device for vehicles and warning/travel control device using the same
JPH11175736A (ja) * 1997-12-15 1999-07-02 Toshiba Corp Object region tracking device and object region tracking method
JP2002074339A (ja) * 2000-08-31 2002-03-15 Hitachi Ltd On-vehicle imaging device
JP2008065634A (ja) * 2006-09-07 2008-03-21 Fuji Heavy Ind Ltd Object detection device and object detection method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015203703A (ja) * 2014-04-16 2015-11-16 TATA Consultancy Services Limited System and method for stereo object detection and distance calculation
US10604120B2 (en) 2016-07-07 2020-03-31 Nio Usa, Inc. Sensor cleaning devices and systems
US11034335B2 (en) 2016-07-07 2021-06-15 Nio Usa, Inc. Low-profile imaging system with enhanced viewing angles
WO2018094373A1 (fr) * 2016-11-21 2018-05-24 Nio Usa, Inc. Sensor surface object detection methods and systems
US10430833B2 (en) 2016-11-21 2019-10-01 Nio Usa, Inc. Sensor surface object detection methods and systems


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12886017

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12886017

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP