WO2013129357A1 - Three-dimensional object detection device - Google Patents

Three-dimensional object detection device

Info

Publication number
WO2013129357A1
WO2013129357A1 (PCT/JP2013/054859)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional object
vehicle
detection
detection area
road
Prior art date
Application number
PCT/JP2013/054859
Other languages
English (en)
Japanese (ja)
Inventor
修 深田
早川 泰久
Original Assignee
日産自動車株式会社
Priority date
Filing date
Publication date
Application filed by 日産自動車株式会社 (Nissan Motor Co., Ltd.)
Priority to JP2014502226A (JP5915728B2)
Publication of WO2013129357A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/16 - Anti-collision systems
    • G08G 1/165 - Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/421 - Global feature extraction by analysis of the whole pattern by analysing segments intersecting the pattern
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/16 - Anti-collision systems
    • G08G 1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/16 - Anti-collision systems
    • G08G 1/167 - Driving aids for lane monitoring, lane changing, e.g. blind spot detection

Definitions

  • the present invention relates to a three-dimensional object detection device.
  • This application claims priority based on Japanese Patent Application No. 2012-046616 filed on Mar. 2, 2012.
  • the contents described in the application are incorporated into the present application by reference and made a part of the description of the present application.
  • In the prior art, a technique is known in which two captured images captured at different times are converted into bird's-eye view images, and an obstacle is detected based on the difference between the two converted bird's-eye view images (Patent Literature 1).
  • When this technique is used to detect another vehicle traveling on the rear side of the host vehicle, a predetermined detection area is set on the rear side of the host vehicle, and an obstacle (another vehicle) existing in that detection area is detected.
  • However, at a junction, the distance in the vehicle width direction between the host vehicle and another vehicle that is traveling behind the host vehicle and is about to merge onto the road on which the host vehicle is traveling differs from the usual case, so that other vehicle may fall outside the detection area and may not be detected.
  • the problem to be solved by the present invention is to provide a three-dimensional object detection device that can appropriately detect an adjacent vehicle.
  • The present invention determines whether or not the number of lanes on the road on which the host vehicle is traveling has increased and, when it is determined that the number of lanes has increased, solves the above problem by expanding the detection area in the vehicle width direction.
  • According to the present invention, when it is determined that the number of lanes on the road on which the host vehicle is traveling has increased, it is determined that the host vehicle is traveling at a junction and the detection area is expanded in the vehicle width direction. As a result, even when the distance between the host vehicle and the other vehicle is large, the other vehicle traveling behind the host vehicle can be detected appropriately.
  • FIG. 1 is a schematic configuration diagram of a vehicle equipped with a three-dimensional object detection device.
  • FIG. 2 is a plan view showing a traveling state of the vehicle of FIG.
  • FIG. 3 is a block diagram showing details of the computer.
  • FIGS. 4A and 4B are diagrams for explaining the outline of the processing of the alignment unit, where FIG. 4A is a plan view showing the moving state of the vehicle and FIG. 4B is an image showing the outline of the alignment.
  • FIG. 5 is a schematic diagram illustrating how a differential waveform is generated by the three-dimensional object detection unit.
  • FIG. 6 is a diagram illustrating a small area divided by the three-dimensional object detection unit.
  • FIG. 7 is a diagram illustrating an example of a histogram obtained by the three-dimensional object detection unit.
  • FIG. 8 is a diagram illustrating weighting by the three-dimensional object detection unit.
  • FIG. 9 is a diagram illustrating another example of a histogram obtained by the three-dimensional object detection unit.
  • FIG. 10 is a diagram for explaining a method of calculating movement amount candidates by the out-of-road determination unit.
  • FIG. 11 is a diagram illustrating an example of a histogram generated by counting the amount of movement candidates.
  • FIG. 12 is a diagram for explaining a method of determining a non-detection target by the out-of-road determination unit.
  • FIG. 13 is a diagram for explaining the increase / decrease of the count value.
  • FIG. 14 is a diagram for explaining a detection region setting method by the detection region setting unit.
  • FIG. 15 is a diagram illustrating an example of a relationship between the speed variation degree and the size of the detection region in the vehicle width direction.
  • FIG. 16 is a flowchart showing an adjacent vehicle detection process according to the first embodiment.
  • FIG. 17 is a flowchart showing detection area setting processing according to the present embodiment.
  • FIG. 18 is a flowchart showing the periodic object determination process shown in step S201.
  • FIG. 19 is a flowchart showing the non-detection target determination process shown in step S203.
  • FIG. 20 is a block diagram illustrating details of the computer according to the second embodiment.
  • FIGS. 21A and 21B are diagrams illustrating the traveling state of the vehicle, in which FIG. 21A is a plan view illustrating the positional relationship of the detection regions and the like, and FIG. 21B is a perspective view illustrating the positional relationship of the detection regions and the like in real space.
  • FIG. 22 is a diagram for explaining the operation of the luminance difference calculation unit according to the second embodiment.
  • FIG. 22A is a diagram showing the positional relationship among the attention line, reference line, attention point, and reference point in the bird's-eye view image, and FIG. 22B is a diagram showing the positional relationship among the attention line, reference line, attention point, and reference point in real space.
  • FIGS. 23A and 23B are diagrams for explaining the detailed operation of the luminance difference calculation unit according to the second embodiment, in which FIG. 23A shows the detection area in the bird's-eye view image and FIG. 23B shows the attention line, reference line, attention points, and reference points in the bird's-eye view image.
  • FIG. 24 is a diagram illustrating an example of an image for explaining the edge detection operation.
  • FIGS. 25A and 25B are diagrams showing an edge line and the luminance distribution on the edge line, in which FIG. 25A shows the luminance distribution when a three-dimensional object (adjacent vehicle) is present in the detection area and FIG. 25B shows the luminance distribution when no three-dimensional object is present in the detection area.
  • FIG. 26 is a flowchart illustrating an adjacent vehicle detection method according to the second embodiment.
  • FIG. 1 is a schematic configuration diagram of a vehicle equipped with a three-dimensional object detection device 1 according to the present embodiment.
  • The three-dimensional object detection device 1 according to the present embodiment is intended to detect another vehicle (hereinafter also referred to as an adjacent vehicle V2) existing in an adjacent lane with which the host vehicle V1 may come into contact when changing lanes.
  • the three-dimensional object detection device 1 according to the present embodiment includes a camera 10, a vehicle speed sensor 20, and a calculator 30.
  • The camera 10 is attached to the host vehicle V1 at a height h at the rear of the host vehicle V1 such that its optical axis points downward from the horizontal at an angle θ.
  • the camera 10 captures an image of a predetermined area in the surrounding environment of the host vehicle V1 from this position.
  • the vehicle speed sensor 20 detects the traveling speed of the host vehicle V1, and calculates the vehicle speed from the wheel speed detected by, for example, a wheel speed sensor that detects the rotational speed of the wheel.
  • the computer 30 detects an adjacent vehicle existing in an adjacent lane behind the host vehicle.
  • FIG. 2 is a plan view showing a traveling state of the host vehicle V1 of FIG.
  • the camera 10 images the vehicle rear side at a predetermined angle of view a.
  • the angle of view a of the camera 10 is set to an angle of view at which the left and right lanes (adjacent lanes) can be imaged in addition to the lane in which the host vehicle V1 travels.
  • FIG. 3 is a block diagram showing details of the computer 30 of FIG. 1. In FIG. 3, the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationship.
  • The computer 30 includes a viewpoint conversion unit 31, an alignment unit 32, a three-dimensional object detection unit 33, an out-of-road determination unit 34, and a detection area setting unit 35. Each of these components is described below.
  • the viewpoint conversion unit 31 inputs captured image data of a predetermined area obtained by imaging with the camera 10, and converts the viewpoint of the input captured image data into bird's-eye image data in a bird's-eye view state.
  • the state viewed from a bird's-eye view is a state viewed from the viewpoint of a virtual camera looking down from above, for example, vertically downward.
  • This viewpoint conversion can be executed as described in, for example, Japanese Patent Application Laid-Open No. 2008-219063.
  • The viewpoint conversion of captured image data to bird's-eye view image data is used because it is based on the principle that vertical edges peculiar to a three-dimensional object are converted into a group of straight lines passing through a specific fixed point by the viewpoint conversion; by using this principle, a planar object and a three-dimensional object can be distinguished.
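The publication refers to JP 2008-219063 A for the actual conversion; purely as an illustration of the idea, the following is a minimal sketch of projecting a rear-camera frame onto a bird's-eye (top-down) view with a planar homography using OpenCV. The calibration points, file name, and output size are hypothetical values, not taken from the publication.

```python
import cv2
import numpy as np

def to_birds_eye(frame, src_pts, dst_pts, out_size):
    """Warp a rear-camera frame into a bird's-eye (top-down) view.

    src_pts: four pixel coordinates lying on the road plane in the camera image.
    dst_pts: where those points should land in the bird's-eye image.
    Both are hypothetical calibration values for this sketch.
    """
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(frame, H, out_size)

# Hypothetical calibration: trapezoid on the road -> rectangle in the top view.
src = [(200, 480), (440, 480), (600, 680), (40, 680)]
dst = [(100, 0), (300, 0), (300, 400), (100, 400)]
frame = cv2.imread("rear_camera.png")          # a captured image such as the one from camera 10
if frame is not None:
    birds_eye = to_birds_eye(frame, src, dst, (400, 400))
```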
  • the alignment unit 32 sequentially inputs the bird's-eye view image data obtained by the viewpoint conversion of the viewpoint conversion unit 31 and aligns the positions of the inputted bird's-eye view image data at different times.
  • FIGS. 4A and 4B are diagrams for explaining the outline of the processing of the alignment unit 32, where FIG. 4A is a plan view showing the moving state of the host vehicle V1 and FIG. 4B is an image showing the outline of the alignment.
  • As shown in FIG. 4A, it is assumed that the host vehicle V1 at the current time is located at P1 and that the host vehicle V1 one time earlier was located at P1'. It is also assumed that an adjacent vehicle V2 is running parallel to the host vehicle V1, positioned rearward and to the side of it, that the adjacent vehicle V2 at the current time is located at P2, and that the adjacent vehicle V2 one time earlier was located at P2'. Furthermore, it is assumed that the host vehicle V1 has moved a distance d during that one time. Note that "one time earlier" may be a time a predetermined interval (for example, one control cycle) before the current time, or a time an arbitrary interval earlier.
  • In this state, the bird's-eye view image PB_t at the current time is as shown in FIG. 4B.
  • In the bird's-eye view image PB_t, the white lines drawn on the road surface appear rectangular and are rendered relatively accurately in plan view, whereas the adjacent vehicle V2 (position P2) appears collapsed. The same applies to the bird's-eye view image PB_t-1 one time earlier: the white lines appear rectangular and are rendered relatively accurately in plan view, whereas the adjacent vehicle V2 (position P2') appears collapsed.
  • As described above, this is because vertical edges of a three-dimensional object appear as straight lines along the collapsing direction as a result of the viewpoint conversion to bird's-eye view image data, whereas a planar image on the road surface contains no vertical edges, so no such collapse occurs even when the viewpoint is converted.
  • the alignment unit 32 performs alignment of the bird's-eye view images PB t and PB t ⁇ 1 as described above on the data. At this time, the alignment unit 32 offsets the bird's-eye view image PB t-1 at the previous time and matches the position with the bird's-eye view image PB t at the current time.
  • The left image and the center image in FIG. 4B show the state offset by the movement distance d'. This offset amount d' is the amount of movement on the bird's-eye view image data corresponding to the actual movement distance d of the host vehicle V1 shown in FIG. 4A, and is determined based on the signal from the vehicle speed sensor 20 and the elapsed time from one time earlier to the current time.
  • the alignment unit 32 takes the difference between the bird's-eye view images PB t and PB t ⁇ 1 and generates data of the difference image PD t .
  • Here, in order to cope with changes in the illumination environment, the alignment unit 32 takes the absolute value of the pixel-value difference between the bird's-eye view images PB_t and PB_t-1, sets the pixel value of the difference image PD_t to "1" where the absolute value is equal to or greater than a predetermined threshold, and sets it to "0" otherwise, thereby generating the data of the difference image PD_t.
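A minimal sketch, under assumed conventions, of producing the difference image PD_t as just described: the previous bird's-eye image PB_t-1 is shifted by the ego-motion offset d' expressed in pixels along the image's travel axis, and the absolute difference is binarized. The axis convention, the offset handling, and the threshold value are assumptions of this sketch.

```python
import numpy as np

def difference_image(pb_t, pb_t1, offset_px, threshold=25):
    """Align PB_{t-1} (pb_t1) to PB_t (pb_t) by the ego-motion offset d'
    (offset_px, in pixels along the travel direction) and binarize the
    absolute difference. offset_px and threshold are assumed values."""
    shifted = np.roll(pb_t1, shift=offset_px, axis=0)   # offset the previous bird's-eye image
    shifted[:offset_px, :] = 0                          # rows wrapped in by the roll carry no data
    diff = np.abs(pb_t.astype(np.int16) - shifted.astype(np.int16))
    return (diff >= threshold).astype(np.uint8)         # difference image PD_t: pixel values 0 or 1
```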
  • The three-dimensional object detection unit 33 detects a three-dimensional object based on the data of the difference image PD_t shown in FIG. 4B. At this time, the three-dimensional object detection unit 33 also calculates the movement distance of the three-dimensional object in real space. In detecting the three-dimensional object and calculating the movement distance, the three-dimensional object detection unit 33 first generates a differential waveform.
  • the three-dimensional object detection unit 33 generates a differential waveform in the detection region set by the detection region setting unit 35 described later.
  • the three-dimensional object detection device 1 of the present example is intended to calculate the movement distance for an adjacent vehicle that may be contacted when the host vehicle V1 changes lanes. For this reason, in this example, as shown in FIG. 2, rectangular detection areas A1 and A2 are set on the rear side of the host vehicle V1. Such detection areas A1, A2 may be set from a relative position with respect to the host vehicle V1, or may be set based on the position of the white line.
  • When the detection areas are set based on the position of the white lines, the three-dimensional object detection device 1 may use, for example, an existing white line recognition technique. A detection area setting method by the detection area setting unit 35 will be described later.
  • the three-dimensional object detection unit 33 recognizes the sides (sides along the traveling direction) of the set detection areas A1 and A2 on the own vehicle V1 side as the ground lines L1 and L2.
  • the ground line means a line in which the three-dimensional object contacts the ground.
  • In the present embodiment, the ground lines are set as described above rather than as lines where a three-dimensional object actually contacts the ground. Even so, experience shows that the difference between a ground line according to the present embodiment and the ground line obtained from the actual position of the adjacent vehicle V2 is not too large, and there is no problem in practice.
  • FIG. 5 is a schematic diagram illustrating how the three-dimensional object detection unit 33 generates a differential waveform.
  • As shown in FIG. 5, the three-dimensional object detection unit 33 generates a differential waveform DW_t from the portion of the difference image PD_t (the right image in FIG. 4B) calculated by the alignment unit 32 that corresponds to the detection areas A1 and A2.
  • the three-dimensional object detection unit 33 generates a differential waveform DW t along the direction in which the three-dimensional object falls by viewpoint conversion.
  • In the following, only the detection area A1 is described for convenience, but a differential waveform DW_t is generated for the detection area A2 by the same procedure.
  • Specifically, the three-dimensional object detection unit 33 first defines a line La along the direction in which the three-dimensional object collapses on the data of the difference image PD_t. Then the three-dimensional object detection unit 33 counts, on the line La, the number of difference pixels DP indicating a predetermined difference.
  • Here, because the pixel values of the difference image PD_t are expressed as "0" or "1", a pixel showing "1" is counted as a difference pixel DP indicating the predetermined difference.
  • After counting the number of difference pixels DP, the three-dimensional object detection unit 33 obtains the intersection point CP between the line La and the ground line L1. The three-dimensional object detection unit 33 then associates the intersection CP with the count number, determines the horizontal-axis position, i.e. the position on the up-down axis in the right diagram of FIG. 5, based on the position of the intersection CP, determines the vertical-axis position, i.e. the position on the left-right axis in the right diagram of FIG. 5, from the count number, and plots the count number at the intersection CP.
  • Similarly, the three-dimensional object detection unit 33 defines lines Lb, Lc, ... along the direction in which the three-dimensional object collapses, counts the number of difference pixels DP on each line, determines the horizontal-axis position based on the position of each intersection CP, determines the vertical-axis position from the count number (the number of difference pixels DP), and plots it.
  • By repeating the above, the three-dimensional object detection unit 33 generates the differential waveform DW_t as shown in the right diagram of FIG. 5.
  • As described above, a difference pixel DP on the data of the difference image PD_t is a pixel that has changed between the images captured at different times, in other words, a location where a three-dimensional object exists.
  • Therefore, at locations where a three-dimensional object exists, the differential waveform DW_t is generated by counting the number of pixels along the direction in which the three-dimensional object collapses and forming a frequency distribution.
  • the differential waveform DW t is generated from the information in the height direction for the three-dimensional object.
  • The differential waveform DW_t is one form of pixel distribution information indicating a predetermined luminance difference. The "pixel distribution information" in the present embodiment can be regarded as information indicating the distribution of "pixels having a luminance difference equal to or greater than a predetermined threshold" detected along the direction in which the three-dimensional object collapses when the captured image is converted into a bird's-eye view image. That is, in the bird's-eye view image obtained by the viewpoint conversion unit 31, the three-dimensional object detection unit 33 detects the three-dimensional object based on the distribution information of pixels whose luminance difference is equal to or greater than a predetermined threshold along the direction in which the three-dimensional object collapses when the image is converted into the bird's-eye view image.
  • Here, the lines La and Lb along the direction in which the three-dimensional object collapses overlap the detection area A1 over different distances. For this reason, if the detection area A1 were filled with difference pixels DP, the number of difference pixels DP on the line La would be larger than on the line Lb. Therefore, when determining the vertical-axis position from the count of difference pixels DP, the three-dimensional object detection unit 33 normalizes the count based on the distance over which the lines La, Lb along the collapsing direction overlap the detection area A1.
  • As a specific example, in the left diagram of FIG. 5 the lines La and Lb overlap the detection area A1 over different distances, and the three-dimensional object detection unit 33 normalizes the count number by dividing it by the corresponding overlap distance.
  • In this way, the values of the differential waveform DW_t corresponding to the lines La and Lb along the collapsing direction become substantially the same.
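A minimal sketch of generating a normalized differential waveform DW_t from the binary difference image, under the simplifying assumption that the lines La, Lb, ... along the collapsing direction coincide with the image columns inside the detection area; in the publication these lines follow the direction in which a three-dimensional object collapses, so this only illustrates the counting and the normalization by overlap length.

```python
import numpy as np

def differential_waveform(pd_t, area_mask):
    """Count difference pixels along each collapse-direction line and normalize
    by the length over which that line overlaps the detection area.

    pd_t      : binary difference image PD_t (0/1).
    area_mask : binary mask of detection area A1 (same shape).
    Each image column stands in for one line La, Lb, ... -- a simplification
    of the lines used in the publication."""
    counts = (pd_t * area_mask).sum(axis=0).astype(float)   # difference pixels per line
    overlap = area_mask.sum(axis=0).astype(float)            # overlap length per line
    dw = np.zeros_like(counts)
    valid = overlap > 0
    dw[valid] = counts[valid] / overlap[valid]                # normalized waveform DW_t
    return dw
```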
  • After generating the differential waveform DW_t, the three-dimensional object detection unit 33 calculates the movement distance by comparing the differential waveform DW_t at the current time with the differential waveform DW_t-1 one time earlier. That is, the three-dimensional object detection unit 33 calculates the movement distance from the time change of the differential waveforms DW_t and DW_t-1.
  • the three-dimensional object detection unit 33 divides the differential waveform DW t into a plurality of small areas DW t1 to DW tn (n is an arbitrary integer equal to or greater than 2).
  • FIG. 6 is a diagram illustrating the small areas DW t1 to DW tn divided by the three-dimensional object detection unit 33.
  • the small areas DW t1 to DW tn are divided so as to overlap each other, for example, as shown in FIG. For example, the small area DW t1 and the small area DW t2 overlap, and the small area DW t2 and the small area DW t3 overlap.
  • the three-dimensional object detection unit 33 obtains an offset amount (amount of movement of the differential waveform in the horizontal axis direction (vertical direction in FIG. 6)) for each of the small areas DW t1 to DW tn .
  • Here, the offset amount corresponds to the difference (the distance in the horizontal-axis direction) between the differential waveform DW_t-1 one time earlier and the differential waveform DW_t at the current time.
  • Specifically, for each of the small areas DW_t1 to DW_tn, the three-dimensional object detection unit 33 moves the differential waveform DW_t-1 one time earlier along the horizontal-axis direction, determines the position (the position in the horizontal-axis direction) at which the error with respect to the differential waveform DW_t at the current time is minimized, and obtains, as the offset amount, the amount of movement in the horizontal-axis direction between the original position of the differential waveform DW_t-1 and the position where the error is minimized. Then the three-dimensional object detection unit 33 counts the offset amounts obtained for the respective small areas DW_t1 to DW_tn to form a histogram.
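A sketch, under assumed window, step, and search-range parameters, of obtaining an offset amount for each small area by shifting the previous waveform and minimizing the error against the current one, then voting the offsets into a histogram; the function and parameter names and all numeric values are illustrative, not from the publication.

```python
import numpy as np

def offset_histogram(dw_t, dw_t1, win=20, step=10, max_shift=15):
    """For each (overlapping) small area of DW_t, find the horizontal shift of
    DW_{t-1} (dw_t1) that minimizes the error, then histogram the shifts.
    win, step and max_shift are assumed values for this sketch."""
    votes = np.zeros(2 * max_shift + 1)
    for start in range(0, len(dw_t) - win + 1, step):        # overlapping small areas
        seg_t = dw_t[start:start + win]
        errors = []
        for s in range(-max_shift, max_shift + 1):
            lo, hi = start + s, start + s + win
            if lo < 0 or hi > len(dw_t1):
                errors.append(np.inf)                        # shift would leave the waveform
                continue
            errors.append(np.abs(seg_t - dw_t1[lo:hi]).sum())
        best = int(np.argmin(errors))                        # index of the error-minimizing shift
        votes[best] += 1                                      # unit vote per small area
    best_shift = int(np.argmax(votes)) - max_shift            # offset at the histogram peak
    # Multiplying best_shift by the metric size of one pixel would give the
    # relative movement distance (tau*) of the three-dimensional object.
    return best_shift, votes
```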
  • FIG. 7 is a diagram illustrating an example of a histogram obtained by the three-dimensional object detection unit 33.
  • the offset amount which is the amount of movement that minimizes the error between each of the small areas DW t1 to DW tn and the differential waveform DW t ⁇ 1 one time before, has some variation.
  • the three-dimensional object detection unit 33 forms a histogram of offset amounts including variations, and calculates a movement distance from the histogram.
  • At this time, the three-dimensional object detection unit 33 calculates the movement distance of the three-dimensional object from the maximum value of the histogram. That is, in the example illustrated in FIG. 7, the three-dimensional object detection unit 33 calculates the offset amount giving the maximum value of the histogram as the movement distance τ*.
  • Note that this movement distance τ* is the relative movement distance of the three-dimensional object with respect to the host vehicle. Therefore, when calculating the absolute movement distance, the three-dimensional object detection unit 33 calculates the absolute movement distance based on the obtained movement distance τ* and the signal from the vehicle speed sensor 20.
  • In this way, the movement distance of the three-dimensional object is calculated from the offset amount of the differential waveforms DW_t generated at different times when the error between them is minimized.
  • Because the movement distance is thus calculated from the offset amount of one-dimensional waveform information, the computational cost of calculating the movement distance can be kept low.
  • Further, by dividing the differential waveforms DW_t generated at different times into the plurality of small areas DW_t1 to DW_tn, a plurality of waveforms representing respective portions of the three-dimensional object can be obtained.
  • An offset amount can therefore be obtained for each portion of the three-dimensional object, the movement distance is obtained from the plurality of offset amounts, and the calculation accuracy of the movement distance can be improved. In addition, by calculating the movement distance of the three-dimensional object from the time change of the differential waveform DW_t, which includes information in the height direction, the detection locations before and after the time change are specified including the height information; compared with the case where attention is paid to the movement of only a single point, they are therefore more likely to correspond to the same location on the three-dimensional object, the movement distance is calculated from the time change of that same location, and the calculation accuracy of the movement distance can be improved.
  • Note that the three-dimensional object detection unit 33 may weight each of the plurality of small areas DW_t1 to DW_tn and form the histogram by counting the offset amount obtained for each of the small areas DW_t1 to DW_tn according to its weight.
  • FIG. 8 is a diagram illustrating weighting by the three-dimensional object detection unit 33.
  • As shown in FIG. 8, the small area DW_m (m is an integer from 1 to n-1) is flat; that is, in the small area DW_m the difference between the maximum value and the minimum value of the count of pixels indicating a predetermined difference is small. The three-dimensional object detection unit 33 reduces the weight for such a small area DW_m, because a flat small area DW_m has no distinctive feature and the error in calculating its offset amount is likely to be large.
  • In contrast, the small area DW_m+k (k is an integer not larger than n-m) is rich in undulations; that is, in the small area DW_m+k the difference between the maximum value and the minimum value of the count of pixels indicating a predetermined difference is large. The three-dimensional object detection unit 33 increases the weight for such a small area DW_m+k, because a small area DW_m+k rich in undulations is distinctive and its offset amount is highly likely to be calculated accurately. Weighting in this way improves the calculation accuracy of the movement distance.
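As a hedged illustration of this weighting, the unit vote in the earlier offset-histogram sketch could be replaced by a vote proportional to how much the small area undulates. The linear form below is an assumption; the publication only states that flat areas are down-weighted and undulating ones up-weighted.

```python
import numpy as np

def small_area_weight(segment):
    """Weight for one small area when counting its offset into the histogram:
    a flat segment (small difference between the maximum and minimum pixel
    counts) gets a small weight, an undulating one a large weight. The linear
    form used here is an assumption of this sketch."""
    segment = np.asarray(segment, dtype=float)
    return float(segment.max() - segment.min())
```

In the earlier sketch, usage would amount to replacing `votes[best] += 1` with `votes[best] += small_area_weight(seg_t)`.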
  • In the present embodiment, the differential waveform DW_t is divided into the plurality of small areas DW_t1 to DW_tn in order to improve the calculation accuracy of the movement distance; however, when high calculation accuracy is not required, the division into the small areas DW_t1 to DW_tn is not necessary.
  • In that case, the three-dimensional object detection unit 33 calculates the movement distance from the offset amount of the differential waveform DW_t at which the error between the differential waveform DW_t and the differential waveform DW_t-1 is minimized. That is, the method for obtaining the offset amount between the differential waveform DW_t-1 one time earlier and the differential waveform DW_t at the current time is not limited to the one described above.
  • the three-dimensional object detection unit 33 obtains the moving speed of the host vehicle V1 (camera 10), and obtains the offset amount for the stationary object from the obtained moving speed. After obtaining the offset amount of the stationary object, the three-dimensional object detection unit 33 calculates the moving distance of the three-dimensional object after ignoring the offset amount corresponding to the stationary object among the maximum values of the histogram.
  • FIG. 9 is a diagram showing another example of a histogram obtained by the three-dimensional object detection unit 33.
  • When a stationary object is present within the angle of view of the camera 10 in addition to the three-dimensional object, two maximum values τ1 and τ2 appear in the obtained histogram.
  • In this case, one of the two maximum values τ1 and τ2 is the offset amount of the stationary object.
  • Therefore, the three-dimensional object detection unit 33 calculates the offset amount expected for a stationary object from the moving speed, ignores the maximum value corresponding to that offset amount, and calculates the movement distance of the three-dimensional object using the remaining maximum value. This prevents a situation in which the calculation accuracy of the movement distance of the three-dimensional object is degraded by a stationary object.
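A sketch of suppressing the histogram maximum attributable to stationary objects: the offset a stationary object would show is predicted from the host-vehicle speed and the imaging interval, the votes near it are ignored, and the remaining maximum is used. The frame interval, pixel scale, and tolerance are assumed values, and whether the stationary offset sits near zero or near the ego displacement depends on how PB_t and PB_t-1 were aligned.

```python
import numpy as np

def moving_distance_ignoring_static(votes, max_shift, ego_speed_mps,
                                    frame_dt=0.033, m_per_px=0.05, tol_px=2):
    """votes[i] is the histogram count for shift (i - max_shift) pixels.
    The shift a stationary object would show equals the host vehicle's own
    movement between frames; maxima within tol_px of it are ignored.
    frame_dt, m_per_px and tol_px are assumed values for this sketch."""
    static_shift = ego_speed_mps * frame_dt / m_per_px        # expected offset of a stationary object
    masked = votes.copy()
    lo = int(round(static_shift - tol_px)) + max_shift
    hi = int(round(static_shift + tol_px)) + max_shift
    masked[max(lo, 0):min(hi + 1, len(masked))] = 0            # ignore the stationary-object maximum
    if masked.max() == 0:
        return None                                             # only the static peak remained
    best = int(np.argmax(masked)) - max_shift
    return best * m_per_px                                      # relative movement distance in metres
```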
  • When a plurality of maximum values still remain even after the offset amount corresponding to the stationary object is ignored, the three-dimensional object detection unit 33 stops calculating the movement distance. Thereby, in the present embodiment, it is possible to prevent a situation in which an erroneous movement distance is calculated from a histogram having a plurality of maximum values.
  • the three-dimensional object detection unit 33 calculates the relative movement speed of the three-dimensional object by differentiating the relative movement distance of the three-dimensional object with respect to time.
  • the three-dimensional object detection unit 33 also calculates the absolute movement speed of the three-dimensional object based on the absolute movement distance of the three-dimensional object.
  • the three-dimensional object detection unit 33 repeatedly calculates the relative movement speed of the three-dimensional object at predetermined intervals, and calculates a time change amount ⁇ V of the calculated relative movement speed of the three-dimensional object.
  • the calculated time change amount ⁇ V of the relative movement speed is transmitted to the out-of-road determination unit 34 described later.
  • The out-of-road determination unit 34 determines whether the three-dimensional object detected in the detection areas A1 and A2 is a periodic object having periodicity, such as a guardrail, and whether it is a non-detection target with a large speed variation, such as planted grass, and thereby determines whether or not the detection areas A1 and A2 are set outside the road.
  • Specifically, when the three-dimensional object detected in the detection areas A1 and A2 is a periodic object having periodicity such as a guardrail, the out-of-road determination unit 34 determines that the detection areas A1 and A2 are set outside the road.
  • The method for determining a periodic object is not particularly limited; in the present embodiment, the out-of-road determination unit 34 can determine a periodic object by the following method.
  • FIG. 10 is a diagram for explaining the method of calculating movement amount candidates: FIG. 10(a) shows the difference image PD_t at time t, and FIG. 10(b) shows the difference image PD_t-1 at time t-1.
  • As shown in FIG. 10, the out-of-road determination unit 34 first detects the ground contact points of the three-dimensional objects from the data of the difference image PD_t-1 at time t-1.
  • Here, a ground contact point is a contact point between a three-dimensional object and the road surface.
  • Among the positions of each detected three-dimensional object, the out-of-road determination unit 34 detects the position closest to the camera 10 of the host vehicle V1 as its ground contact point.
  • That is, the out-of-road determination unit 34 detects the ground contact point P1 for the three-dimensional object O1, the ground contact point P2 for the three-dimensional object O2, and the ground contact point P3 for the three-dimensional object O3.
  • Next, as shown in FIG. 10, the out-of-road determination unit 34 sets regions T having a width W in the difference image PD_t at time t.
  • At this time, the out-of-road determination unit 34 sets the regions T at locations corresponding to the ground contact points P1 to P3 in the data of the difference image PD_t-1 at time t-1.
  • Next, the out-of-road determination unit 34 detects the ground contact points of the three-dimensional objects from the data of the difference image PD_t at time t. In this case as well, the out-of-road determination unit 34 detects, for each detected three-dimensional object, the position closest to the camera 10 of the host vehicle V1 as its ground contact point. That is, the out-of-road determination unit 34 detects the ground contact point P4 for the three-dimensional object O4, the ground contact point P5 for the three-dimensional object O5, and the ground contact point P6 for the three-dimensional object O6.
  • Then the out-of-road determination unit 34 associates the ground contact points with one another. That is, the out-of-road determination unit 34 associates the ground contact point P4 with the ground contact point P1, associates the ground contact point P5 with the ground contact point P1, and associates the ground contact point P6 with the ground contact point P1. Similarly, the out-of-road determination unit 34 associates the ground contact points P4 to P6 with the ground contact points P2 and P3.
  • the out-of-road determination unit 34 calculates the distance between the associated contact points P 1 to P 6 (that is, the movement amount candidate).
  • the out-of-road determination unit 34 sets the calculated distance as a movement amount candidate.
  • the out-of-road determination unit 34 calculates a plurality of movement amount candidates for each three-dimensional object.
  • This is done to avoid uniquely determining the movement amount of each three-dimensional object and thereby to suppress the calculation of an erroneous movement amount for a periodic stationary structure (periodic object) in which similar image features appear periodically.
  • The reason for providing the regions T is to keep the association of the ground contact points P1 to P6 stable even if an error occurs in the alignment of the bird's-eye view images PB_t and PB_t-1 due to pitching or yawing of the host vehicle V1. The association of the ground contact points P1 to P6 is determined by matching the luminance distributions around the ground contact points of the bird's-eye view images PB_t and PB_t-1.
  • the out-of-road determination unit 34 counts the calculated movement amount candidates and creates a histogram. For example, the out-of-road determination unit 34 determines the distance between the ground point P 1 and the ground point P 4 , the distance between the ground point P 2 and the ground point P 5, and the distance between the ground point P 3 and the ground point P 6. If they are the same, the count value is set to “3”. As described above, the road determination unit 34 counts the movement amount candidates and creates a histogram.
  • FIG. 11 is a diagram illustrating a histogram generated by counting the movement amount candidates. In the example shown in FIG. 11, since a plurality of movement amounts m1, m2, m3, and m4 are detected, the count value is high in these movement amounts.
  • Further, the out-of-road determination unit 34 calculates the range over which a periodic stationary structure (periodic object) moves in the bird's-eye view, based on the imaging interval of the camera 10 and the moving speed of the host vehicle V1 detected by the vehicle speed sensor 20. More specifically, the out-of-road determination unit 34 calculates a moving range having a predetermined margin with respect to the speed of the host vehicle V1.
  • The margin is, for example, 10 km/h.
  • For example, when the imaging interval of the camera 10 is 33 ms and the actual distance in the vehicle traveling direction covered by one pixel is 5 cm, a three-dimensional object that moves by one pixel per control cycle has a speed of about 5.5 km/h.
  • A margin of 10 km/h is used in order to allow for this approximately 5.5 km/h.
  • The out-of-road determination unit 34 then determines whether or not the plurality of three-dimensional objects detected by the three-dimensional object detection unit 33 are periodic stationary structures (periodic objects) based on the histogram created by counting the movement amount candidates, on the moving range of a periodic stationary structure (periodic object) in the bird's-eye view, and on the periodicity of the three-dimensional objects described later.
  • When the out-of-road determination unit 34 determines that a periodic object is detected in the detection areas A1 and A2 in this way, it determines that the detection areas A1 and A2 are set outside the road.
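A one-dimensional sketch of the movement-amount-candidate counting described above: ground-contact positions (here simply coordinates along the travel direction) found in PD_t-1 are paired with those found in PD_t inside a window standing in for the region T, every pairwise distance is counted as a candidate, and a periodic structure such as a guardrail shows up as several well-populated bins. The window width and the example coordinates are illustrative assumptions.

```python
from collections import Counter

def movement_candidate_histogram(points_prev, points_now, region_w=30):
    """Pair ground-contact positions detected in PD_{t-1} (points_prev) with
    those detected in PD_t (points_now) and count every pairwise distance that
    falls inside a window of width region_w as a movement-amount candidate.
    region_w is an assumed value for this sketch."""
    hist = Counter()
    for p_prev in points_prev:
        for p_now in points_now:
            d = p_now - p_prev
            if 0 <= d <= region_w:          # only pairings inside the region T stand-in
                hist[d] += 1
    return hist

# Three repeating features that all moved by 8 pixels: the bin for 8 collects
# a count of 3, the kind of concentration a periodic structure produces.
candidates = movement_candidate_histogram([10, 40, 70], [18, 48, 78])
```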
  • Next, the method for determining non-detection targets such as planted grass will be described.
  • The determination method is not particularly limited; in the present embodiment, the out-of-road determination unit 34 can determine a non-detection target such as planted grass by the following method.
  • the non-detection target includes snow and the like in addition to the planted grass.
  • That is, the out-of-road determination unit 34 detects the degree of variation in the image information and, based on the detected degree of variation in the image information, determines whether or not the three-dimensional object detected by the three-dimensional object detection unit 33 is a non-detection target such as planted grass.
  • Specifically, the out-of-road determination unit 34 uses, as the degree of variation in the image information, the absolute value |ΔV| of the time change amount ΔV of the relative movement speed of the three-dimensional object calculated by the three-dimensional object detection unit 33 based on the captured images.
  • The out-of-road determination unit 34 determines that the degree of variation in the image information is higher as the absolute value |ΔV| of the time change of the relative movement speed is larger.
  • The out-of-road determination unit 34 then increases or decreases a count value (the vertical axis shown in FIG. 12) according to the absolute value |ΔV| of the time change of the relative movement speed.
  • FIG. 12 is a diagram for explaining the method of detecting non-detection targets such as planted grass.
  • FIG. 13 is a table showing an example of the increment / decrement amount of the count value.
  • Specifically, as illustrated in FIG. 13, the out-of-road determination unit 34 increases or decreases the count value according to the magnitude of the absolute value |ΔV| of the time change of the relative movement speed and the brightness of the detection areas A1 and A2 detected from the image.
  • When the brightness of the detection areas A1 and A2 is equal to or greater than a predetermined value (when it can be determined to be daytime) and the absolute value |ΔV| of the time change of the relative movement speed of the detected three-dimensional object is 30 km/h or more, the degree of variation in the image information is judged to be large and the count value is increased by X1.
  • When the brightness is less than the predetermined value (when it can be determined to be nighttime) and the absolute value |ΔV| is 30 km/h or more, the count value is increased by X2, where X2 is smaller than X1 (X1 > X2). This is because at nighttime the contrast of the captured image is low and the certainty with which the three-dimensional object can be judged to be a non-detection target is small.
  • Conversely, when the brightness is equal to or greater than the predetermined value (when it can be determined to be daytime) and the absolute value |ΔV| is less than 30 km/h, the count value is decreased, by Y1 or by Z1, where Z1 is larger than Y1 (Z1 > Y1).
  • Likewise, when the brightness is less than the predetermined value (when it can be determined to be nighttime) and the absolute value |ΔV| is less than 30 km/h, the count value is decreased, by Y2 or by Z2, where Z2 is larger than Y2 (Z2 > Y2) and Z2 is smaller than Z1 (Z1 > Z2).
  • In this way, the out-of-road determination unit 34 increases or decreases the count value in accordance with the variation in the absolute value |ΔV| of the time change of the relative movement speed, and when the count value becomes equal to or greater than the first threshold value s1 shown in FIG. 12, it determines that the detected three-dimensional object is a non-detection target such as planted grass.
  • After the count value has become equal to or greater than the first threshold value s1, the out-of-road determination unit 34 cancels the determination that the detected three-dimensional object is a non-detection target such as planted grass once the count value falls below the second threshold value s2.
  • For example, in the example shown in FIG. 12, while the count value is equal to or greater than the first threshold value s1 the detected three-dimensional object is determined to be a non-detection target, and when the count value falls below the second threshold value s2 at time t2, it is determined at time t2 that the detected three-dimensional object is not a non-detection target; if the count value subsequently becomes equal to or greater than the first threshold value s1 again, the detected three-dimensional object is again determined to be a non-detection target.
  • In the example shown in FIG. 12, the first threshold value s1 is also used as the upper limit of the count value so that the count value does not exceed the first threshold value s1.
  • Although the first threshold value s1 is set as the upper limit of the count value in this example, the present invention is not limited to this: a value larger than the first threshold value s1, or a value smaller than the first threshold value s1, may be used as the upper limit of the count value.
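A minimal sketch of the count-up/count-down logic with hysteresis described above: the count increases on a large |ΔV|, with larger increments in daytime than at night, decreases on a small |ΔV|, is latched as "non-detection target" at the first threshold s1, is released only below the second threshold s2, and is capped at s1. All numeric values are assumptions; the publication leaves X1, X2, Y1, Y2, Z1, Z2, s1 and s2 unspecified.

```python
def update_grass_count(count, dv_abs_kmh, daytime,
                       s1=100, inc_day=20, inc_night=10,
                       dec_day=15, dec_night=5):
    """Update the variation counter for one detected three-dimensional object.
    Only the structure follows the description above; every number is an
    assumed value for this sketch."""
    if dv_abs_kmh >= 30.0:                       # large speed variation: grass-like behaviour
        count += inc_day if daytime else inc_night
    else:                                        # small variation: vehicle-like behaviour
        count -= dec_day if daytime else dec_night
    return max(0, min(count, s1))                # s1 doubles as the upper limit of the count

def is_non_detection_target(count, currently_flagged, s1=100, s2=60):
    """Hysteresis: the judgement turns on at the first threshold s1 and is only
    released once the count falls below the second threshold s2 (s2 < s1)."""
    if count >= s1:
        return True
    if currently_flagged and count >= s2:
        return True
    return False
```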
  • When the out-of-road determination unit 34 determines in this way that a non-detection target such as planted grass is detected in the detection areas A1 and A2, it determines that the detection areas A1 and A2 are set outside the road.
  • On the other hand, when neither a periodic object such as a guardrail nor a non-detection target such as planted grass is detected in the detection areas A1 and A2, the out-of-road determination unit 34 determines that the detection areas A1 and A2 are set within the road of the adjacent lane.
  • the detection area setting unit 35 shown in FIG. 3 sets the detection areas A1 and A2 for detecting a three-dimensional object.
  • Specifically, the detection area setting unit 35 determines, based on the determination result of the out-of-road determination unit 34, whether or not the point where the host vehicle is traveling is a merging point, and when it determines that it is a merging point, it expands the detection areas A1 and A2 outward in the vehicle width direction (toward the road shoulder).
  • FIG. 14 is a diagram for explaining a method of setting the detection areas A1 and A2 by the detection area setting unit 35.
  • FIG. 14 shows an example scene in which the host vehicle V1 is traveling near a merging point: the host vehicle V1 is located at position PV_t at time t, at position PV_t+1 at time t+1 after time t, and at position PV_t+2 at time t+2 after time t+1.
  • FIG. 14 also shows the detection areas A1 and A2 at time t (positions PA1_t and PA2_t), at time t+1 (positions PA1_t+1 and PA2_t+1), and at time t+2 (positions PA1_t+2 and PA2_t+2), as well as an adjacent vehicle V2 located at position PV'_t+1 at time t+1 and at position PV'_t+2 at time t+2.
  • FIG. 15 is a diagram showing an example of the relationship between the speed variation degree and the width of the detection area in the vehicle width direction: FIG. 15(a) shows an example of the speed variation degree of the three-dimensional object detected in the detection area A2 in the scene shown in FIG. 14, and FIG. 15(b) shows an example of the width in the vehicle width direction of the detection area A2 set in the scene shown in FIG. 14.
  • As the speed variation degree, for example, the count value counted according to the amount of change in the relative movement speed of the detected three-dimensional object, as shown in FIG. 12, is used.
  • For example, at time t in the scene shown in FIG. 14, the detection area A2 is set outside the road, so the three-dimensional object detection unit 33 detects a non-detection target such as planted grass as a three-dimensional object in the detection area A2.
  • As shown in FIG. 15(a), the state in which the speed variation degree is equal to or greater than a predetermined value c1 therefore continues, and the detection area setting unit 35 determines that a non-detection target is detected in the detection area A2 and that the detection area A2 is set outside the road.
  • On the other hand, when the state in which the speed variation degree is less than a predetermined value c2 continues for a predetermined time, the detection area setting unit 35 determines that no non-detection target is detected in the detection area A2, and when no periodic object is detected either, it determines that the detection area A2 is set within the road.
  • When the determination result of the out-of-road determination unit 34 changes from a state in which a detection area is set outside the road to a state in which that detection area is set within the road of the adjacent lane, and the host vehicle has not changed lanes, it can be determined that the number of lanes on the road on which the host vehicle is traveling has increased. That is, in the example shown in FIG. 14, at time t only the detection area A1 is set on the road surface and the number of lanes of the road on which the host vehicle travels is "2", whereas at time t+1 the detection area A2 changes from being set outside the road to being set within the road of the adjacent lane, so that the detection area A2, in addition to the detection area A1, is on the road surface.
  • In other words, the number of lanes of the road on which the host vehicle is traveling has increased to "3". In this way, an increase in the number of lanes of the road on which the host vehicle travels can be detected at a merging point without the host vehicle changing lanes. Therefore, when the number of lanes of the road on which the host vehicle travels increases without the host vehicle changing lanes, the detection area setting unit 35 determines that the point where the host vehicle is traveling is a merging point where the number of lanes increases and, in order to detect the other vehicle V2 that is separated from the host vehicle V1 in the vehicle width direction, sets the detection areas A1 and A2 so as to extend outward in the vehicle width direction (toward the road shoulder).
  • For example, at time t the detection area A2 has not yet changed from being set outside the road to being set within the road of the adjacent lane, so it is determined that the point where the host vehicle is traveling is not a merging point and, as shown in FIGS. 14 and 15(b), the width of the detection area A2 in the vehicle width direction is set to its initial range w1 (the outlined portion). Note that the width of the initial range of the detection areas A1 and A2 in the vehicle width direction is not particularly limited, but may be, for example, 3.5 m.
  • On the other hand, at time t+1, when the detection area setting unit 35 determines, based on the detection result in the detection area A2, that the number of lanes of the road on which the host vehicle is traveling has increased from two to three and that the host vehicle has not changed lanes, it widens the detection area A2 in the vehicle width direction to w2, which is larger than w1, as shown in FIG. 15(b). Note that when the width w1 of the initial range of the detection areas A1 and A2 in the vehicle width direction is 3.5 m, w2 can be, for example, 5 m.
  • Further, as shown in FIG. 15(b), after widening the detection area A2 at time t+1, the detection area setting unit 35 gradually narrows the detection area A2 once a predetermined time n has elapsed, and finally returns the width of the detection area A2 in the vehicle width direction to the width of its initial range.
  • In the example shown in FIG. 15(b), at time t+2 the width of the detection area A2 in the vehicle width direction is set to w3, which is smaller than the width w2 at time t+1.
  • In this way, the detection area setting unit 35 determines that the detection areas A1 and A2 are set outside the road when, for example, a non-detection target is detected as shown in FIG. 15(a) or a periodic object is detected, and when the state changes from one in which the detection areas A1 and A2 were set outside the road to one in which they are set within the road, it determines that the number of lanes of the road on which the host vehicle travels has increased. When the detection area setting unit 35 determines that the number of lanes of the road on which the host vehicle is traveling has increased, it determines that the point where the host vehicle is traveling is a junction where the number of lanes increases, and sets the detection areas A1 and A2 widened in the vehicle width direction.
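A sketch of the width adjustment of detection area A2 described above. The 3.5 m / 5 m values follow the example in the text; the hold time, the shrink rate, and the frame-based bookkeeping are assumptions of this sketch.

```python
def detection_area_width(prev_width, lanes_increased, frames_since_widen,
                         w1=3.5, w2=5.0, hold_frames=30, shrink_per_frame=0.1):
    """Width of detection area A2 in the vehicle width direction [m].

    lanes_increased: True when the out-of-road judgement for A2 changed from
    'outside the road' to 'inside the adjacent lane' without the host vehicle
    changing lanes, i.e. the vehicle is taken to be at a merging point.
    w1/w2 follow the 3.5 m / 5 m example; hold_frames and shrink_per_frame are
    assumed values for this sketch."""
    if lanes_increased:
        return w2                                            # widen outward (road-shoulder side)
    if prev_width > w1 and frames_since_widen > hold_frames:
        return max(w1, prev_width - shrink_per_frame)        # gradually return to the initial range
    return prev_width
```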
  • FIG. 16 is a flowchart illustrating the adjacent vehicle detection process of the first embodiment.
  • First, the detection area setting unit 35 sets the detection areas A1 and A2 for detecting adjacent vehicles (step S101). The detection areas A1 and A2 set by the detection area setting process described later are applied here.
  • Next, the captured image data is acquired from the camera 10 (step S102), and the viewpoint conversion unit 31 generates the data of the bird's-eye view image PB_t based on the acquired captured image data (step S103).
  • Then the alignment unit 32 aligns the data of the bird's-eye view image PB_t with the data of the bird's-eye view image PB_t-1 one time earlier, and generates the data of the difference image PD_t (step S104).
  • Thereafter, the three-dimensional object detection unit 33 counts the number of difference pixels DP having the pixel value "1" in the data of the difference image PD_t and generates the differential waveform DW_t (step S105).
  • Then the three-dimensional object detection unit 33 determines whether or not the peak of the differential waveform DW_t is equal to or greater than a predetermined threshold value α (step S106).
  • When it is determined that the peak of the differential waveform DW_t is not equal to or greater than the predetermined threshold value α (step S106 = No), the three-dimensional object detection unit 33 determines that no three-dimensional object exists and that no other vehicle exists (step S115), returns to step S101, and repeats the processing shown in FIG. 16.
  • On the other hand, when it is determined that the peak of the differential waveform DW_t is equal to or greater than the predetermined threshold value α (step S106 = Yes), the three-dimensional object detection unit 33 determines that a three-dimensional object exists in the adjacent lane and proceeds to step S107.
  • In step S107, the three-dimensional object detection unit 33 divides the differential waveform DW_t into a plurality of small areas DW_t1 to DW_tn.
  • Next, the three-dimensional object detection unit 33 performs weighting for each of the small areas DW_t1 to DW_tn (step S108), calculates an offset amount for each of the small areas DW_t1 to DW_tn (step S109), and generates a histogram taking the weights into account (step S110).
  • the three-dimensional object detection unit 33 calculates a relative movement distance that is a movement distance of the three-dimensional object with respect to the host vehicle V1 based on the histogram (step S111).
  • the three-dimensional object detection unit 33 calculates the absolute movement speed of the three-dimensional object from the relative movement distance (step S112).
  • the three-dimensional object detection unit 33 calculates the relative movement speed by differentiating the relative movement distance with respect to time, and calculates the absolute movement speed by adding the own vehicle speed detected by the vehicle speed sensor 20.
  • In the present embodiment, the areas on the rear sides of the host vehicle are set as the detection areas A1 and A2, and emphasis is placed on whether or not there is a possibility of contact when the host vehicle changes lanes. For this reason, the processing of step S113 is performed. That is, assuming that the system according to the present embodiment is operated on an expressway, if the speed of an adjacent vehicle is less than 10 km/h, even if the adjacent vehicle exists, it will be located far behind the host vehicle when the host vehicle changes lanes, so it rarely poses a problem.
  • Determining in step S113 whether the absolute movement speed of the adjacent vehicle is 10 km/h or more and whether the relative movement speed of the adjacent vehicle with respect to the host vehicle is +60 km/h or less has the following effects.
  • the absolute moving speed of the stationary object may be detected to be several km / h. Therefore, by determining whether the speed is 10 km / h or more, it is possible to reduce the possibility that the stationary object is determined to be an adjacent vehicle.
  • the relative speed of the adjacent vehicle to the host vehicle may be detected as a speed exceeding +60 km / h. Therefore, the possibility of erroneous detection due to noise can be reduced by determining whether the relative speed is +60 km / h or less.
  • In step S113, it may alternatively be determined that the absolute movement speed of the adjacent vehicle is not negative, or not 0 km/h. Further, since the present embodiment places emphasis on whether or not there is a possibility of contact when the host vehicle changes lanes, when an adjacent vehicle is detected in step S114, a warning sound may be issued to the driver of the host vehicle, or a display corresponding to a warning may be presented by a predetermined display device.
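As a compact restatement of the decision in steps S113 to S115, the speed check can be sketched as below; the `warn` callback is a hypothetical stand-in for the warning sound or display mentioned above.

```python
def check_adjacent_vehicle(abs_speed_kmh, rel_speed_kmh, warn):
    """Treat the detected three-dimensional object as an adjacent vehicle only
    if its absolute speed is 10 km/h or more and its relative speed to the
    host vehicle is +60 km/h or less (step S113)."""
    if abs_speed_kmh >= 10.0 and rel_speed_kmh <= 60.0:
        warn()          # adjacent vehicle present (step S114)
        return True
    return False        # no adjacent vehicle (step S115)
```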
  • FIG. 17 is a flowchart showing detection area setting processing according to the first embodiment.
  • The detection area setting process described below is performed in parallel with the adjacent vehicle detection process shown in FIG. 16, and the detection areas A1 and A2 set by this detection area setting process are applied as the detection areas A1 and A2 in the adjacent vehicle detection process shown in FIG. 16.
  • First, in step S201, the out-of-road determination unit 34 performs a periodic object determination process.
  • FIG. 18 is a flowchart showing the periodic object determination process shown in step S201.
  • The periodic object determination process in step S201 will be described with reference to FIG. 18.
  • As shown in FIG. 10, the out-of-road determination unit 34 first detects the ground contact points of the three-dimensional objects in the difference image PD_t-1 at time t-1 and in the difference image PD_t at time t (step S301). Then, the out-of-road determination unit 34 associates the detected ground contact points with one another and calculates the distance between the associated ground contact points as a movement amount candidate (step S302).
  • the out-of-road determination unit 34 counts the calculated movement amount candidates and creates a histogram as shown in FIG. 11 (step S303).
  • the out-of-road determination unit 34 detects the maximum value M from the generated histogram (step S304).
  • the out-of-road determination unit 34 sets a predetermined threshold based on the maximum value M detected in step S304 (step S305).
  • The predetermined threshold value can be set, for example, to 70% of the maximum value M. For example, when the count value of the maximum value M is "7", the predetermined threshold value is "4.9". Here, the predetermined threshold is set to 70% of the maximum value M, but it is not limited to this value.
  • the out-of-road determination unit 34 detects local maximum values M1 to M3 that are equal to or greater than a predetermined threshold (step S307).
  • When the maximum value M is "7", for example, the out-of-road determination unit 34 detects local maximum values M1 to M3 having a count value of "5" or more.
  • Further, the out-of-road determination unit 34 detects the intervals between the maximum values M and M1 to M3 (including the maximum value M) and votes on the detected intervals. That is, in the illustrated example, the interval between the maximum value M and the maximum value M1 and the interval between the maximum value M1 and the maximum value M2 are both the interval I1, so the number of votes for the interval I1 is "2", while only the interval between the maximum value M2 and the maximum value M3 is the interval I2, so the number of votes for the interval I2 is "1".
  • the out-of-road determination unit 34 determines periodicity (step S308). At this time, the out-of-road determination unit 34 determines whether or not the number of votes in step S307 is equal to or greater than a predetermined number, and determines that there is periodicity when the number of votes is equal to or greater than the predetermined number.
  • the periodicity determination result is used in step S312.
  • The predetermined number is half the number of three-dimensional objects detected from the bird's-eye view image PBt. Therefore, when the number of three-dimensional objects detected from the bird's-eye view image PBt is "4", the predetermined number is "2".
  • In this way, a maximum value having a relatively small count value (for example, the maximum value denoted by the symbol M4 in FIG. 8) can be ignored, so the determination is less susceptible to noise and the periodicity can be determined with higher accuracy.
  • The predetermined number is not limited to this, and may be a fixed value, for example.
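  • The histogram-and-interval voting of steps S303 to S308 can be sketched as follows; this is a minimal illustration that assumes integer movement amount candidates, and the helper name, sample values, and the 70% ratio default are placeholders rather than part of the embodiment.

    from collections import Counter

    def judge_periodicity(movement_candidates, num_detected_objects, threshold_ratio=0.7):
        """Minimal sketch of the periodicity determination (steps S303 to S308)."""
        # Step S303: count the movement amount candidates into a histogram.
        histogram = Counter(movement_candidates)
        # Step S304: detect the maximum count value M.
        max_count = max(histogram.values())
        # Step S305: set a threshold based on M (for example 70 % of M).
        threshold = threshold_ratio * max_count
        # Step S307: keep only the movement amounts whose count reaches the threshold.
        maxima = sorted(amount for amount, count in histogram.items() if count >= threshold)
        # Vote on the intervals between neighbouring maxima.
        interval_votes = Counter(b - a for a, b in zip(maxima, maxima[1:]))
        # Step S308: periodicity exists when some interval collects at least the
        # predetermined number of votes, here half the detected object count.
        required_votes = num_detected_objects / 2
        return any(votes >= required_votes for votes in interval_votes.values())

    # Example: candidates clustered every 5 pixels suggest a periodic structure.
    candidates = [5] * 7 + [10] * 6 + [15] * 5 + [20] * 5 + [3] * 2
    print(judge_periodicity(candidates, num_detected_objects=4))  # True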
  • When it is determined in step S308 that there is periodicity, the threshold value set in step S305 may be lowered. For example, even if the predetermined threshold value is set to 70% of the maximum value M in step S305, it may be reset to 60% of the maximum value M once periodicity has been found.
  • The out-of-road determination unit 34 may also reset the predetermined threshold each time it is determined that there is periodicity, by repeatedly determining whether or not there is periodicity. As described above, the periodicity is determined from the positions at which the maximum values M and M1 to M3 of the count value occur, that is, from the intervals between the maximum values M and M1 to M3, and when it is determined that there is periodicity, the predetermined threshold value is lowered; the periodic stationary structure (periodic object) can therefore be determined more easily. On the other hand, the predetermined threshold value is not lowered until periodicity has once been determined, so erroneous detection of a three-dimensional object due to an alignment error or the like can be suppressed. Further, the lowered threshold value may be initialized, for example when the host vehicle changes lanes; in that case, a periodic stationary structure (periodic object) can be appropriately detected according to the change in the environment after the lane change.
  • Next, the out-of-road determination unit 34 calculates a stationary-equivalent movement range (step S309). That is, the out-of-road determination unit 34 calculates the range over which a periodic stationary structure (periodic object) would move in the bird's-eye view, based on the imaging interval of the camera 10 and the moving speed of the host vehicle V1 detected by the speed sensor 20. At this time, the out-of-road determination unit 34 calculates a moving range having a predetermined margin with respect to the speed of the host vehicle V1.
  • the out-of-road determination unit 34 determines whether or not the maximum values M and M1 to M3 exist within the range of the movement amount detected in step S309 (step S310).
  • If Yes in step S310, the out-of-road determination unit 34 determines that a periodic stationary structure (periodic object) exists, determines that the detection area A1 is set outside the road, and sets the out-of-road determination value to "ON" (step S311). This is because periodic stationary structures (periodic objects) are often arranged at equal intervals, so a specific count value tends to become large.
  • In addition, since a periodic stationary structure (periodic object) is stationary, the count values of its movement amount candidates fall within the movement range calculated in consideration of the speed of the host vehicle and the like. Therefore, if it is determined in step S310 that any one of the maximum values M and M1 to M3 exists within that range, it can be said that the plurality of three-dimensional objects are periodic stationary structures (periodic objects).
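  • The movement-range check of steps S309 and S310 can be pictured with the small sketch below; the pixel scale, margin, and function name are assumptions used only to illustrate the idea that a stationary object appears to move by the host vehicle's own displacement between frames.

    def maxima_within_stationary_range(maxima_amounts, host_speed_kmh, frame_interval_s,
                                       pixels_per_meter, margin_ratio=0.2):
        """Sketch of steps S309-S310: do any histogram maxima fall inside the
        movement range expected for a stationary object?"""
        # A stationary object appears to move by the host vehicle's displacement.
        displacement_m = host_speed_kmh / 3.6 * frame_interval_s
        expected_px = displacement_m * pixels_per_meter
        lower = expected_px * (1.0 - margin_ratio)   # predetermined margin
        upper = expected_px * (1.0 + margin_ratio)
        return any(lower <= amount <= upper for amount in maxima_amounts)

    # Host vehicle at 90 km/h, 0.1 s between frames, 4 px per metre: about 10 px expected.
    print(maxima_within_stationary_range([5, 10, 15], 90.0, 0.1, 4.0))  # True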
  • On the other hand, if No in step S310, the process proceeds to step S312. If it is determined that there is periodicity (Yes in step S312), the out-of-road determination unit 34 detects an aperiodic maximum value from among the maximum values that are equal to or greater than the predetermined threshold set in step S305 (step S313).
  • The aperiodic maximum value corresponds to, for example, the maximum value M3 shown in the figure. The interval associated with the maximum value M3 differs from that of the other maximum values M, M1, and M2. For this reason, the maximum value M3 is determined to have no periodicity and to be an aperiodic maximum value.
  • When the threshold has been reset, the out-of-road determination unit 34 detects the aperiodic maximum value using the reset threshold.
  • If No in step S313, the out-of-road determination unit 34 determines whether or not the periodic maximum values M, M1, and M2 are lower than their previous values (step S314). In this process, the out-of-road determination unit 34 calculates the average value of the periodic maximum values M, M1, and M2 in the current process and the average value of the periodic maximum values in the previous process, and determines whether the average value of the current process is lower than the average value of the previous process by a predetermined value or more.
  • If Yes in step S314, the out-of-road determination unit 34 assumes that another vehicle or the like has entered between the host vehicle V1 and the periodic stationary structure (periodic object), determines that a moving body has been detected, determines that the detection area A1 is not set outside the road, and sets the out-of-road determination value to OFF (step S315).
  • On the other hand, if No in step S314, the out-of-road determination unit 34 assumes that another vehicle has entered on the far side of the periodic stationary structure (periodic object), determines that a periodic stationary structure (periodic object) has been detected, determines that the detection area A1 is set outside the road, and sets the out-of-road determination value to ON (step S311). The periodic object determination process shown in FIG. 18 is thereby completed, and the process then proceeds to step S202 in FIG. 17.
  • In step S202, the detection area setting unit 35 determines whether or not the detection areas A1 and A2 are set outside the road based on the determination result of the periodic object determination process in step S201. For example, if a periodic object has been detected in the detection areas A1 and A2 and the out-of-road determination value has been set to ON in the periodic object determination process of step S201, the detection area setting unit 35 determines that the detection areas A1 and A2 are set outside the road. If it is determined that the detection areas A1 and A2 are set outside the road, the process shown in FIG. 17 is terminated without changing the detection areas A1 and A2. On the other hand, if the out-of-road determination value is OFF and it is determined that the detection areas A1 and A2 are not set outside the road, the process proceeds to step S203.
  • In step S203, the out-of-road determination unit 34 performs a non-detection object determination process for detecting a non-detection object such as planted grass.
  • FIG. 19 is a flowchart showing the non-detection object determination process shown in step S203.
  • The non-detection object determination process in step S203 will be described below with reference to FIG. 19.
  • In the non-detection object determination process, the out-of-road determination unit 34 first calculates the amount of time change in the relative movement speed of the three-dimensional object in step S401. For example, the out-of-road determination unit 34 acquires the relative movement speeds calculated at different times in the adjacent vehicle detection process shown in FIG. 16, and can thereby calculate the amount of time change in the relative movement speed of the three-dimensional object.
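  • As a trivial sketch of step S401, the amount of time change can be taken as the difference between relative movement speeds obtained at two different times; the function name and the values are illustrative only.

    def relative_speed_change(prev_relative_speed_kmh, curr_relative_speed_kmh):
        """Sketch of step S401: amount of change of the relative movement speed."""
        return abs(curr_relative_speed_kmh - prev_relative_speed_kmh)

    print(relative_speed_change(-5.0, 28.0))  # 33.0, i.e. 30 km/h or more (see step S402)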
  • In step S402, the out-of-road determination unit 34 determines whether or not the absolute value of the amount of change in the relative movement speed of the three-dimensional object is 30 km/h or more. When this absolute value is 30 km/h or more (Yes in step S402) and the condition of step S403 is not satisfied (No in step S403), the count value is increased by X2 in step S405, as shown in FIG. 13. The out-of-road determination unit 34 likewise compares the absolute value of the amount of change in the relative movement speed against the respective thresholds in steps S403 and S406, and the count value is adjusted according to these determinations.
  • In step S413, the out-of-road determination unit 34 determines whether or not the count value is equal to or greater than the first threshold value s1 shown in FIG. 12. If the count value is equal to or greater than the first threshold value s1, the process proceeds to step S415, where the out-of-road determination unit 34 determines that the detected three-dimensional object is a non-detection object, determines that the detection area A1 is set outside the road, and sets the out-of-road determination value to ON. On the other hand, when the count value is less than the first threshold value s1, the process proceeds to step S414.
  • In step S414, the out-of-road determination unit 34 determines whether or not the count value has fallen below the second threshold value s2 shown in FIG. 12 after having fallen below the first threshold value s1. If the count value is still equal to or greater than the second threshold value s2 after falling below the first threshold value s1, the process proceeds to step S415, where the out-of-road determination unit 34 determines that the detected three-dimensional object is a non-detection object and sets the out-of-road determination value to ON (step S415).
  • On the other hand, when the count value has fallen below the second threshold value s2, the process proceeds to step S416, where the out-of-road determination unit 34 determines that the detected three-dimensional object is not a non-detection object and sets the out-of-road determination value to OFF. The non-detection object determination process shown in FIG. 19 thereby ends.
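  • The count-value hysteresis of steps S413 to S416 can be illustrated with the following sketch; the class name and the concrete threshold values s1 and s2 are assumptions chosen only for the example.

    class NonDetectionObjectJudge:
        """Sketch of the hysteresis judgement with thresholds s1 > s2."""

        def __init__(self, s1=10, s2=5):
            self.s1 = s1                      # first threshold: turns the judgement ON
            self.s2 = s2                      # second threshold: turns it OFF again
            self.is_non_detection = False     # corresponds to the out-of-road value

        def update(self, count_value):
            if count_value >= self.s1:
                # Steps S413/S415: count high enough -> non-detection object, ON.
                self.is_non_detection = True
            elif count_value < self.s2:
                # Steps S414/S416: count fell below s2 -> not a non-detection object, OFF.
                self.is_non_detection = False
            # Between s2 and s1 the previous judgement is kept (hysteresis).
            return self.is_non_detection

    judge = NonDetectionObjectJudge()
    for c in [3, 8, 12, 9, 6, 4]:
        print(c, judge.update(c))   # turns ON at 12 and OFF again at 4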
  • In step S204, the detection area setting unit 35 determines whether or not the detection areas A1 and A2 are set outside the road based on the determination result of the non-detection object determination process in step S203.
  • For example, if a non-detection object such as planted grass has been detected in the detection areas A1 and A2 and the out-of-road determination value has been set to ON in the non-detection object determination process of step S203, the detection area setting unit 35 determines that the detection areas A1 and A2 are set outside the road. If it is determined that the detection areas A1 and A2 are set outside the road, the process shown in FIG. 17 is terminated without changing the detection areas A1 and A2. On the other hand, if it is determined that the detection areas A1 and A2 are not set outside the road, the process proceeds to step S205.
  • In step S205, the detection area setting unit 35 determines whether or not the host vehicle is located at a junction. For example, when, based on the determination result of the out-of-road determination unit 34, the detection areas change from a state in which they are set outside the road to a state in which they are set within the road of the adjacent lane, and that state continues for a predetermined time, the detection area setting unit 35 determines, as shown in FIG. 14, that the number of lanes on the road on which the host vehicle is traveling has increased and that the host vehicle is traveling at a junction. If it is determined that the host vehicle is traveling at a junction, the process proceeds to step S206. On the other hand, if it is determined that the host vehicle is not traveling at a junction, the detection areas are not changed, and the process shown in FIG. 17 ends.
  • In step S206, the detection area setting unit 35 determines whether or not a lane change operation is being performed. For example, the detection area setting unit 35 determines whether or not a lane change operation is being performed based on the steering angle and the operation information of the direction indicator.
  • When the host vehicle changes lanes, the detection area A1 set at the left rear of the host vehicle, for example, may likewise change from a state in which it is set outside the road to a state in which it is set within the adjacent lane. For this reason, if a lane change has been performed, the detection area setting unit 35 terminates the process shown in FIG. 17 without changing the detection areas. On the other hand, when a lane change has not been performed, it is determined that the increase in the number of lanes on the road on which the host vehicle is traveling is due to the host vehicle traveling at a merge point, and the process proceeds to step S207 in order to detect other vehicles existing in the merging lane.
  • In step S207, since it has been determined that the host vehicle is traveling at a junction, the detection area setting unit 35 expands the detection areas A1 and A2 outward in the vehicle width direction (toward the road shoulder).
  • In step S208, the detection area setting unit 35 determines whether or not a predetermined time has elapsed since the detection areas A1 and A2 were expanded outward in the vehicle width direction (toward the road shoulder) in step S207, and waits until that predetermined time has elapsed. When it is determined that the predetermined time has elapsed, the detection area setting unit 35 gradually narrows the expanded detection areas A1 and A2 and finally returns them to the size they had before being expanded outward in the vehicle width direction (toward the road shoulder). The detection area setting process then ends.
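  • One way to picture the handling of steps S207 and S208 over time is the sketch below, in which the detection area geometry is reduced to a single outer-edge offset; the widths, times, and function name are placeholder assumptions.

    def update_detection_width(base_width, extra_width, t_since_expand, hold_time, shrink_time):
        """Outer-edge offset of a detection area after an expansion at a junction.

        base_width:     offset before the expansion [m]
        extra_width:    additional offset toward the road shoulder [m]
        t_since_expand: time elapsed since the expansion [s]
        hold_time:      time the expanded area is kept as-is (step S208) [s]
        shrink_time:    time over which the area is gradually narrowed back [s]
        """
        if t_since_expand <= hold_time:
            return base_width + extra_width      # keep the expanded area
        if t_since_expand >= hold_time + shrink_time:
            return base_width                    # fully returned to the original size
        # Gradually narrow the area between hold_time and hold_time + shrink_time.
        remaining = 1.0 - (t_since_expand - hold_time) / shrink_time
        return base_width + extra_width * remaining

    for t in [0.0, 2.0, 4.0, 5.0, 6.0]:
        print(t, round(update_detection_width(3.0, 1.5, t, hold_time=3.0, shrink_time=2.0), 2))
    # 4.5, 4.5, 3.75, 3.0, 3.0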
  • When the detection areas A1 and A2 are set in step S101 of the adjacent vehicle detection process shown in FIG. 16, the detection areas A1 and A2 set by the detection area setting process shown in FIG. 17 are applied.
  • As described above, in the first embodiment, a difference image PDt is generated based on the difference between two bird's-eye view images, and three-dimensional objects are detected in the predetermined detection areas A1 and A2. The detection of periodic objects and non-detection objects is repeatedly performed in the detection areas A1 and A2, and based on the detection results it is determined whether the detection areas A1 and A2 are set outside the road. When it is determined that the host vehicle is traveling at a junction and is not changing lanes, the detection areas A1 and A2 are expanded in the vehicle width direction. Thereby, in the first embodiment, other vehicles existing in the merging lane can be appropriately detected even when the host vehicle is traveling at a junction. Further, by detecting periodic objects such as guardrails and non-detection objects such as planted grass, the change from the state in which the detection areas A1 and A2 are set outside the road to the state in which they are set in the adjacent lane (inside the road) can be detected; thus, an increase in the number of road lanes can be detected, and it can be detected that the host vehicle is traveling at a junction.
  • Next, the second embodiment will be described. The three-dimensional object detection device 1a according to the second embodiment includes a computer 30a instead of the computer 30 of the first embodiment and operates as described below; except for this, it is the same as the first embodiment.
  • FIG. 20 is a block diagram illustrating details of the computer 30a according to the second embodiment.
  • the three-dimensional object detection device 1a includes a camera 10 and a computer 30a.
  • The computer 30a includes a viewpoint conversion unit 31, a luminance difference calculation unit 36, an edge line detection unit 37, a three-dimensional object detection unit 33a, an out-of-road determination unit 34, and a detection area setting unit 35.
  • FIGS. 21A and 21B are diagrams illustrating the imaging range and the like of the camera 10 in FIG. 20; FIG. 21A is a plan view, and FIG. 21B is a perspective view of the real space rearward of the host vehicle V1.
  • the camera 10 has a predetermined angle of view a, and images the rear side from the host vehicle V1 included in the predetermined angle of view a.
  • the angle of view a of the camera 10 is set so that the imaging range of the camera 10 includes the adjacent lane in addition to the lane in which the host vehicle V1 travels.
  • The detection areas A1 and A2 in this example are trapezoidal in plan view (when viewed from a bird's-eye perspective), and the positions, sizes, and shapes of the detection areas A1 and A2 are determined based on the distances d1 to d4.
  • The detection areas A1 and A2 are not limited to a trapezoidal shape, and may have other shapes, such as a rectangle, when viewed from a bird's-eye perspective.
  • the distance d1 is a distance from the host vehicle V1 to the ground lines L1 and L2.
  • the ground lines L1 and L2 mean lines on which a three-dimensional object existing in the lane adjacent to the lane in which the host vehicle V1 travels contacts the ground.
  • the object is to detect adjacent vehicles V2 and the like (including two-wheeled vehicles) traveling in the left and right lanes adjacent to the lane of the host vehicle V1 on the rear side of the host vehicle V1.
  • For this reason, the distance d1, which defines the positions of the ground lines L1 and L2 of the adjacent vehicle V2, can be determined substantially fixedly from the distance d11 from the host vehicle V1 to the white line W and the distance d12 from the white line W to the position where the adjacent vehicle V2 is predicted to travel.
  • the distance d1 is not limited to being fixedly determined, and may be variable.
  • In that case, the computer 30a recognizes the position of the white line W with respect to the host vehicle V1 by a technique such as white line recognition and determines the distance d11 based on the recognized position of the white line W, and the distance d1 is variably set using the determined distance d11. In the present embodiment, however, the distance d1 is fixedly determined.
  • the distance d2 is a distance extending in the vehicle traveling direction from the rear end portion of the host vehicle V1.
  • the distance d2 is determined so that the detection areas A1 and A2 are at least within the angle of view a of the camera 10.
  • In the present example, the distance d2 is set so that the detection areas are in contact with the range delimited by the angle of view a.
  • the distance d3 is a distance indicating the length of the detection areas A1, A2 in the vehicle traveling direction. This distance d3 is determined based on the size of the three-dimensional object to be detected. In the present embodiment, since the detection target is the adjacent vehicle V2 or the like, the distance d3 is set to a length including the adjacent vehicle V2.
  • the distance d4 is a distance indicating a height set so as to include a tire such as the adjacent vehicle V2 in the real space.
  • In the bird's-eye view image, the distance d4 can be the length shown in FIG. 21A.
  • However, the distance d4 is preferably a length that does not include, in the bird's-eye view image, the lanes that are further adjacent to the left and right adjacent lanes (that is, the lanes two lanes away from the own lane). This is because, if the lanes two lanes away from the lane of the host vehicle V1 are included, it becomes impossible to distinguish whether an adjacent vehicle V2 exists in the adjacent lane to the left or right of the own lane in which the host vehicle V1 is traveling, or whether an adjacent vehicle exists in the lane two lanes away.
  • the distances d1 to d4 are determined, and thereby the positions, sizes, and shapes of the detection areas A1 and A2 are determined. More specifically, the position of the upper side b1 of the detection areas A1 and A2 forming a trapezoid is determined by the distance d1. The starting point position C1 of the upper side b1 is determined by the distance d2. The end point position C2 of the upper side b1 is determined by the distance d3. The side b2 of the detection areas A1 and A2 having a trapezoidal shape is determined by a straight line L3 extending from the camera 10 toward the starting point position C1.
  • a side b3 of trapezoidal detection areas A1 and A2 is determined by a straight line L4 extending from the camera 10 toward the end position C2.
  • the position of the lower side b4 of the detection areas A1 and A2 having a trapezoidal shape is determined by the distance d4.
  • the areas surrounded by the sides b1 to b4 are set as the detection areas A1 and A2.
  • The detection areas A1 and A2 are rectangular in the real space on the rear side of the host vehicle V1.
  • the viewpoint conversion unit 31 inputs captured image data of a predetermined area obtained by imaging with the camera 10.
  • The viewpoint conversion unit 31 performs viewpoint conversion processing on the input captured image data, converting it into bird's-eye view image data representing a bird's-eye view state.
  • the bird's-eye view is a state seen from the viewpoint of a virtual camera looking down from above, for example, vertically downward (or slightly obliquely downward).
  • This viewpoint conversion process can be realized by a technique described in, for example, Japanese Patent Application Laid-Open No. 2008-219063.
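  • The embodiment does not tie the viewpoint conversion to a specific implementation (it refers to Japanese Patent Application Laid-Open No. 2008-219063 for details). As one commonly used way to obtain such a bird's-eye view, a planar homography computed from four road-plane correspondences can be applied with OpenCV, as in the hedged sketch below; the pixel coordinates and output size are placeholder values that would normally come from the camera calibration, not values taken from the embodiment.

    import cv2
    import numpy as np

    def to_birds_eye(frame):
        """Sketch of a viewpoint conversion to a bird's-eye view via a planar homography."""
        # Four points on the road plane in the camera image (placeholder values).
        src = np.float32([[300, 400], [500, 400], [780, 600], [20, 600]])
        # The same points as they should appear in the top-down view (placeholder values).
        dst = np.float32([[200, 0], [600, 0], [600, 600], [200, 600]])
        h_matrix = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(frame, h_matrix, (800, 600))

    if __name__ == "__main__":
        image = np.zeros((600, 800, 3), dtype=np.uint8)   # stand-in for a camera frame
        print(to_birds_eye(image).shape)                  # (600, 800, 3)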
  • the luminance difference calculation unit 36 calculates a luminance difference with respect to the bird's-eye view image data subjected to viewpoint conversion by the viewpoint conversion unit 31 in order to detect the edge of the three-dimensional object included in the bird's-eye view image. For each of a plurality of positions along a vertical imaginary line extending in the vertical direction in the real space, the brightness difference calculation unit 36 calculates a brightness difference between two pixels in the vicinity of each position.
  • the luminance difference calculation unit 36 can calculate the luminance difference by either a method of setting only one vertical virtual line extending in the vertical direction in the real space or a method of setting two vertical virtual lines.
  • Here, a method of setting two vertical imaginary lines will be described. For the bird's-eye view image that has undergone viewpoint conversion, the luminance difference calculation unit 36 sets a first vertical imaginary line corresponding to a line segment extending in the vertical direction in the real space, and a second vertical imaginary line, different from the first vertical imaginary line, corresponding to a line segment extending in the vertical direction in the real space.
  • the luminance difference calculation unit 36 continuously obtains the luminance difference between the point on the first vertical imaginary line and the point on the second vertical imaginary line along the first vertical imaginary line and the second vertical imaginary line.
  • the operation of the luminance difference calculation unit 36 will be described in detail.
  • The luminance difference calculation unit 36 sets a first vertical imaginary line La (hereinafter referred to as an attention line La) that corresponds to a line segment extending in the vertical direction in the real space and passes through the detection area A1. The luminance difference calculation unit 36 also sets a second vertical imaginary line Lr (hereinafter referred to as a reference line Lr) that corresponds to a line segment extending in the vertical direction in the real space and passes through the detection area A1.
  • the reference line Lr is set at a position separated from the attention line La by a predetermined distance in the real space.
  • the line corresponding to the line segment extending in the vertical direction in the real space is a line that spreads radially from the position Ps of the camera 10 in the bird's-eye view image.
  • This radially extending line is a line along the direction in which the three-dimensional object falls when converted to bird's-eye view.
  • the luminance difference calculation unit 36 sets a point of interest Pa (a point on the first vertical imaginary line) on the line of interest La.
  • The luminance difference calculation unit 36 also sets a reference point Pr (a point on the second vertical imaginary line) on the reference line Lr.
  • the attention line La, the attention point Pa, the reference line Lr, and the reference point Pr have the relationship shown in FIG. 22B in the real space.
  • That is, the attention line La and the reference line Lr are lines extending in the vertical direction in the real space, and the attention point Pa and the reference point Pr are points set at substantially the same height in the real space.
  • the attention point Pa and the reference point Pr do not necessarily have the same height, and an error that allows the attention point Pa and the reference point Pr to be regarded as the same height is allowed.
  • the luminance difference calculation unit 36 calculates a luminance difference between the attention point Pa and the reference point Pr. If the luminance difference between the attention point Pa and the reference point Pr is large, it is considered that an edge exists between the attention point Pa and the reference point Pr.
  • Since a vertical imaginary line is set as a line segment extending in the vertical direction in the real space with respect to the bird's-eye view image, when the luminance difference between the attention line La and the reference line Lr is large, there is a high possibility that an edge of a three-dimensional object exists at the position where the attention line La is set. For this reason, the edge line detection unit 37 shown in FIG. 20 detects an edge line based on the luminance difference between the attention point Pa and the reference point Pr.
  • FIG. 23 is a diagram showing the detailed operation of the luminance difference calculation unit 36; FIG. 23(a) shows a bird's-eye view image in a bird's-eye view state, and FIG. 23(b) is an enlarged view of a part B1 of the bird's-eye view image shown in FIG. 23(a).
  • Although only the detection area A1 is illustrated in FIG. 23, the luminance difference is calculated by the same procedure for the detection area A2 as well.
  • When the adjacent vehicle V2 appears in the captured image captured by the camera 10, the adjacent vehicle V2 appears in the detection area A1 in the bird's-eye view image, as shown in FIG. 23(a). As shown in FIG. 23(b), which is an enlarged view of the region B1 in FIG. 23(a), it is assumed that the attention line La is set on the rubber portion of the tire of the adjacent vehicle V2 in the bird's-eye view image.
  • the luminance difference calculation unit 36 first sets a reference line Lr.
  • the reference line Lr is set along the vertical direction at a position away from the attention line La by a predetermined distance in the real space.
  • the reference line Lr is set at a position separated from the attention line La by 10 cm in the real space.
  • the reference line Lr is set on the wheel of the tire of the adjacent vehicle V2, which is separated from the rubber of the tire of the adjacent vehicle V2, for example, by 10 cm, on the bird's eye view image.
  • the luminance difference calculation unit 36 sets a plurality of attention points Pa1 to PaN on the attention line La.
  • In the following description, an arbitrary one of these attention points is referred to as an attention point Pai. The number of attention points Pa set on the attention line La may be arbitrary; in the following description, N attention points Pa are set on the attention line La.
  • the luminance difference calculation unit 36 sets the reference points Pr1 to PrN so as to be the same height as the attention points Pa1 to PaN in the real space. Then, the luminance difference calculation unit 36 calculates the luminance difference between the attention point Pa and the reference point Pr having the same height. Accordingly, the luminance difference calculation unit 36 calculates the luminance difference between the two pixels for each of a plurality of positions (1 to N) along the vertical imaginary line extending in the vertical direction in the real space. The luminance difference calculation unit 36 calculates, for example, a luminance difference between the first attention point Pa1 and the first reference point Pr1, and a luminance difference between the second attention point Pa2 and the second reference point Pr2. Will be calculated.
  • the luminance difference calculation unit 36 continuously obtains the luminance difference along the attention line La and the reference line Lr. That is, the luminance difference calculation unit 36 sequentially obtains the luminance difference between the third to Nth attention points Pa3 to PaN and the third to Nth reference points Pr3 to PrN.
  • the luminance difference calculation unit 36 repeatedly executes the processing such as setting the reference line Lr, setting the attention point Pa and the reference point Pr, and calculating the luminance difference while shifting the attention line La in the detection area A1. That is, the luminance difference calculation unit 36 repeatedly executes the above processing while changing the positions of the attention line La and the reference line Lr by the same distance in the extending direction of the ground line L1 in the real space. For example, the luminance difference calculation unit 36 sets a line that has been the reference line Lr in the previous process as the attention line La, sets the reference line Lr for the attention line La, and sequentially obtains the luminance difference. It will be.
  • In this way, by calculating the luminance difference between the attention point Pa on the attention line La and the reference point Pr on the reference line Lr, which are at substantially the same height in the real space, a luminance difference can be clearly detected when an edge extending in the vertical direction exists. In addition, because the luminance of vertical imaginary lines extending in the vertical direction in the real space is compared, the detection process is not affected even if the three-dimensional object is stretched according to its height from the road surface by the conversion to the bird's-eye view image, and the detection accuracy of the three-dimensional object can be improved.
  • The edge line detection unit 37 detects an edge line from the continuous luminance differences calculated by the luminance difference calculation unit 36. For example, in the case illustrated in FIG. 23(b), the first attention point Pa1 and the first reference point Pr1 are located in the same tire portion, so the luminance difference between them is small. On the other hand, the second to sixth attention points Pa2 to Pa6 are located in the rubber portion of the tire, and the second to sixth reference points Pr2 to Pr6 are located in the wheel portion of the tire, so the luminance differences between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6 are large. Therefore, the edge line detection unit 37 can detect that an edge line exists between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6, where the luminance difference is large.
  • Specifically, when detecting an edge line, the edge line detection unit 37 first assigns an attribute to the i-th attention point Pai from the luminance difference between the i-th attention point Pai (coordinates (xi, yi)) and the i-th reference point Pri (coordinates (xi', yi')), in accordance with the following Equation 1.
  [Equation 1] s(xi, yi) = 1 when I(xi, yi) > I(xi', yi') + t; s(xi, yi) = -1 when I(xi, yi) < I(xi', yi') - t; s(xi, yi) = 0 otherwise
  In Equation 1, t represents a predetermined threshold, I(xi, yi) represents the luminance value of the i-th attention point Pai, and I(xi', yi') represents the luminance value of the i-th reference point Pri. According to Equation 1, when the luminance value of the attention point Pai is higher than the luminance value of the reference point Pri plus the threshold t, the attribute s(xi, yi) of the attention point Pai is "1". On the other hand, when the luminance value of the attention point Pai is lower than the luminance value of the reference point Pri minus the threshold t, the attribute s(xi, yi) of the attention point Pai is "-1". When the luminance value of the attention point Pai and the luminance value of the reference point Pri are in any other relationship, the attribute s(xi, yi) of the attention point Pai is "0".
  • Next, the edge line detection unit 37 determines whether or not the attention line La is an edge line from the continuity c(xi, yi) of the attribute s along the attention line La, based on the following Equation 2.
  [Equation 2] c(xi, yi) = 1 when s(xi, yi) = s(xi+1, yi+1); c(xi, yi) = 0 otherwise
  When the attribute s(xi, yi) of the attention point Pai and the attribute s(xi+1, yi+1) of the adjacent attention point Pai+1 are the same, the continuity c(xi, yi) is "1". When the attribute s(xi, yi) of the attention point Pai is not the same as the attribute s(xi+1, yi+1) of the adjacent attention point Pai+1, the continuity c(xi, yi) is "0".
  • the edge line detection unit 37 obtains the sum for the continuity c of all the attention points Pa on the attention line La.
  • the edge line detection unit 37 normalizes the continuity c by dividing the obtained sum of continuity c by the number N of points of interest Pa. Then, the edge line detection unit 37 determines that the attention line La is an edge line when the normalized value exceeds the threshold ⁇ .
  • the threshold value ⁇ is a value set in advance through experiments or the like.
  • That is, the edge line detection unit 37 determines whether or not the attention line La is an edge line based on the following Equation 3. Then, the edge line detection unit 37 determines whether or not all the attention lines La drawn on the detection area A1 are edge lines.
  [Equation 3] Σc(xi, yi) / N > θ
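  • A compact sketch of the attribute and continuity test of Equations 1 to 3 is shown below; it operates on one attention line given the luminance values of its attention points and reference points, and the function name, threshold values, and sample luminances are assumptions.

    def is_edge_line(attention_lums, reference_lums, t=10, theta=0.7):
        """Sketch of Equations 1-3: attribute s, continuity c, and the normalised sum."""
        n = len(attention_lums)

        # Equation 1: attribute s of each attention point.
        s = []
        for a, r in zip(attention_lums, reference_lums):
            if a > r + t:
                s.append(1)
            elif a < r - t:
                s.append(-1)
            else:
                s.append(0)

        # Equation 2: continuity c between neighbouring attributes.
        c = [1 if s[i] == s[i + 1] else 0 for i in range(n - 1)]

        # Equation 3: the attention line is an edge line if the normalised sum exceeds theta.
        return sum(c) / n > theta

    # Dark tire rubber compared against a bright wheel: attributes stay at -1 and are continuous.
    attention = [40, 42, 41, 43, 40, 42, 41, 44]
    reference = [120, 118, 121, 119, 122, 120, 118, 121]
    print(is_edge_line(attention, reference))  # True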
  • In this way, an attribute is assigned to the attention point Pa based on the luminance difference between the attention point Pa on the attention line La and the reference point Pr on the reference line Lr, and whether the attention line La is an edge line is determined based on the continuity c of the attributes along the attention line La. Therefore, the boundary between a high-luminance region and a low-luminance region is detected as an edge line, and edge detection in line with natural human senses can be performed. This effect will be described in detail.
  • FIG. 24 is a diagram illustrating an example of an image for explaining the processing of the edge line detection unit 37.
  • In this image example, a first striped pattern 101 and a second striped pattern 102 are adjacent to each other: a region of the first striped pattern 101 with high luminance is adjacent to a region of the second striped pattern 102 with low luminance, and a region of the first striped pattern 101 with low luminance is adjacent to a region of the second striped pattern 102 with high luminance.
  • the portion 103 located at the boundary between the first striped pattern 101 and the second striped pattern 102 tends not to be perceived as an edge depending on human senses.
  • In contrast, the edge line detection unit 37 determines the part 103 to be an edge line only when, in addition to a luminance difference at the part 103, the attributes of that luminance difference have continuity. Therefore, the edge line detection unit 37 can suppress the erroneous determination of recognizing the part 103, which would not be recognized as an edge line by human senses, as an edge line, and edge detection in line with human senses can be performed.
  • the three-dimensional object detection unit 33a detects a three-dimensional object based on the amount of edge lines detected by the edge line detection unit 37.
  • the three-dimensional object detection device 1a detects an edge line extending in the vertical direction in real space. The fact that many edge lines extending in the vertical direction are detected means that there is a high possibility that a three-dimensional object exists in the detection areas A1 and A2. For this reason, the three-dimensional object detection unit 33a detects a three-dimensional object based on the amount of edge lines detected by the edge line detection unit 37.
  • Specifically, the three-dimensional object detection unit 33a determines whether or not the amount of edge lines detected by the edge line detection unit 37 is equal to or greater than a predetermined threshold value β, and when the amount of edge lines is equal to or greater than the predetermined threshold value β, determines that the edge lines detected by the edge line detection unit 37 are edge lines of a three-dimensional object.
  • Thus, the edge line is one aspect of pixel distribution information indicating a predetermined luminance difference, and the "pixel distribution information" in the present embodiment can be regarded as information indicating the distribution state of "pixels having a luminance difference equal to or greater than a predetermined threshold" detected along the direction in which a three-dimensional object falls when the captured image is converted into a bird's-eye view image. In other words, the three-dimensional object detection unit 33a detects, on the bird's-eye view image obtained by the viewpoint conversion unit 31, the distribution of pixels whose luminance difference is equal to or greater than the predetermined threshold t along the direction in which the three-dimensional object falls when the viewpoint is converted into the bird's-eye view image, and detects a three-dimensional object based on this pixel distribution information.
  • the three-dimensional object detection unit 33a determines whether or not the edge line detected by the edge line detection unit 37 is correct.
  • the three-dimensional object detection unit 33a determines whether or not the luminance change along the edge line of the bird's-eye view image on the edge line is equal to or greater than a predetermined threshold value tb.
  • When the luminance change of the bird's-eye view image on the edge line is equal to or greater than the predetermined threshold value tb, it is determined that the edge line has been detected by erroneous determination. On the other hand, when the luminance change of the bird's-eye view image on the edge line is less than the threshold value tb, it is determined that the edge line is a correct edge line.
  • the threshold value tb is a value set in advance by experiments or the like.
  • FIG. 25 is a diagram showing the luminance distribution of an edge line; FIG. 25(a) shows the edge line and the luminance distribution when the adjacent vehicle V2 exists as a three-dimensional object in the detection area A1, and FIG. 25(b) shows the edge line and the luminance distribution when no three-dimensional object exists in the detection area A1.
  • the attention line La set in the tire rubber part of the adjacent vehicle V2 is determined to be an edge line in the bird's-eye view image.
  • the luminance change of the bird's-eye view image on the attention line La is gentle. This is because the tire of the adjacent vehicle is extended in the bird's-eye view image by converting the image captured by the camera 10 into the bird's-eye view image.
  • the attention line La set in the white character portion “50” drawn on the road surface in the bird's-eye view image is erroneously determined as an edge line.
  • the brightness change of the bird's-eye view image on the attention line La has a large undulation. This is because a portion with high brightness in white characters and a portion with low brightness such as a road surface are mixed on the edge line.
  • the three-dimensional object detection unit 33a determines whether or not the edge line is detected by erroneous determination. For example, when a captured image acquired by the camera 10 is converted into a bird's-eye view image, the three-dimensional object included in the captured image tends to appear in the bird's-eye view image in a stretched state. As described above, when a tire of an adjacent vehicle is stretched, one portion of the tire is stretched, so that a change in luminance of the bird's eye view image in the stretched direction tends to be small.
  • In contrast, when an edge line is erroneously determined on a white character portion drawn on the road surface, the bird's-eye view image on that line includes a high-luminance region such as the character portion and a low-luminance region such as the road surface portion mixed together. In this case, the luminance change in the stretched direction tends to be large in the bird's-eye view image. Therefore, when the luminance change along the edge line is equal to or greater than the predetermined threshold value tb, the three-dimensional object detection unit 33a determines that the edge line has been detected by erroneous determination and that the edge line is not caused by a three-dimensional object.
  • On the other hand, when the luminance change along the edge line is less than the predetermined threshold value tb, the three-dimensional object detection unit 33a determines that the edge line is an edge line of a three-dimensional object and that a three-dimensional object exists.
  • Specifically, the three-dimensional object detection unit 33a calculates the luminance change of the edge line according to either of the following Equations 4 and 5. The luminance change of the edge line corresponds to an evaluation value in the vertical direction in the real space.
  [Equation 4] Evaluation value in the vertical-equivalent direction = Σ{(I(xi, yi) - I(xi+1, yi+1))²}
  [Equation 5] Evaluation value in the vertical-equivalent direction = Σ|I(xi, yi) - I(xi+1, yi+1)|
  Equation 4 evaluates the luminance distribution by the sum of the squares of the differences between the i-th luminance value I(xi, yi) on the attention line La and the adjacent (i+1)-th luminance value I(xi+1, yi+1). Equation 5 evaluates the luminance distribution by the sum of the absolute values of the differences between the i-th luminance value I(xi, yi) on the attention line La and the adjacent (i+1)-th luminance value I(xi+1, yi+1). The evaluation is not limited to Equations 4 and 5; as in the following Equation 6, the difference between adjacent luminance values may be binarized using a threshold t2: when the absolute value of the difference exceeds the threshold t2, the attribute b(xi, yi) of the attention point Pa(xi, yi) is "1", and otherwise the attribute b(xi, yi) of the attention point Pai is "0".
  [Equation 6] b(xi, yi) = 1 when |I(xi, yi) - I(xi+1, yi+1)| > t2; b(xi, yi) = 0 otherwise; Evaluation value in the vertical-equivalent direction = Σb(xi, yi)
  This threshold value t2 is set in advance by experiments or the like in order to determine that the attention line La is not on the same three-dimensional object. Then, the three-dimensional object detection unit 33a sums the attributes b for all the attention points Pa on the attention line La to obtain the evaluation value in the vertical-equivalent direction, and thereby determines whether or not the edge line is caused by a three-dimensional object and whether or not a three-dimensional object exists.
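  • The evaluation values of Equations 4 to 6 can be sketched as follows; the threshold t2 and the sample luminance sequences are placeholders used only to contrast a stretched tire with a road marking.

    def luminance_change_squared(lums):
        """Equation 4: sum of squared differences between neighbouring luminance values."""
        return sum((a - b) ** 2 for a, b in zip(lums, lums[1:]))

    def luminance_change_abs(lums):
        """Equation 5: sum of absolute differences between neighbouring luminance values."""
        return sum(abs(a - b) for a, b in zip(lums, lums[1:]))

    def luminance_change_binarised(lums, t2=20):
        """Equation 6: count neighbouring differences whose magnitude exceeds t2."""
        return sum(1 for a, b in zip(lums, lums[1:]) if abs(a - b) > t2)

    # A stretched tire produces a gentle luminance change along the edge line ...
    tire = [42, 44, 43, 45, 44, 46]
    # ... whereas a white "50" painted on the road mixes bright and dark pixels.
    road_marking = [200, 40, 210, 35, 205, 45]

    for lums in (tire, road_marking):
        print(luminance_change_abs(lums), luminance_change_binarised(lums))
    # An edge line with a large evaluation value (for example, tb or more) would be
    # rejected as an erroneous detection rather than attributed to a three-dimensional object.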
  • the out-of-road determination unit 34 illustrated in FIG. 20 detects periodic objects such as guardrails and non-detection objects such as planted grass, and based on the detection results, detection areas A1, It is determined whether A2 is set outside the road. Further, as in the first embodiment, the detection area setting unit 35 shown in FIG. 20 determines whether or not the own vehicle is traveling at the junction point based on the determination result of the out-of-road determination unit 34. When it is determined that the vehicle is traveling at the junction, the detection areas A1 and A2 are expanded in the vehicle width direction.
  • FIG. 26 is a flowchart showing details of the adjacent vehicle detection method according to this embodiment.
  • For convenience, the processing for the detection area A1 will be described, but the same processing is also executed for the detection area A2.
  • step S501 the detection area setting unit 35 sets detection areas A1 and A2 for detecting adjacent vehicles.
  • the detection areas A1 and A2 set in the detection area setting process described later are applied as in the first embodiment.
  • step S502 the camera 10 captures an image of a predetermined area specified by the angle of view a and the attachment position, and the computer 30a acquires image data of the captured image P captured by the camera 10.
  • step S503 the viewpoint conversion unit 31 performs viewpoint conversion on the acquired image data to generate bird's-eye view image data.
  • step S504 the luminance difference calculation unit 36 sets the attention line La on the detection area A1. At this time, the luminance difference calculation unit 36 sets a line corresponding to a line extending in the vertical direction in the real space as the attention line La. Next, the luminance difference calculation unit 36 sets a reference line Lr on the detection area A1 in step S505. At this time, the luminance difference calculation unit 36 sets a reference line Lr that corresponds to a line segment extending in the vertical direction in the real space and is separated from the attention line La by a predetermined distance in the real space.
  • step S506 the luminance difference calculation unit 36 sets a plurality of attention points Pa on the attention line La. At this time, the luminance difference calculation unit 36 sets a number of attention points Pa that do not cause a problem when the edge is detected by the edge line detection unit 37.
  • step S507 the luminance difference calculation unit 36 sets the reference point Pr so that the attention point Pa and the reference point Pr are substantially the same height in the real space. Thereby, the attention point Pa and the reference point Pr are arranged in a substantially horizontal direction, and it becomes easy to detect an edge line extending in the vertical direction in the real space.
  • the luminance difference calculation unit 36 calculates the luminance difference between the attention point Pa and the reference point Pr that have the same height in the real space.
  • At this time, the luminance difference calculation unit 36 calculates the luminance difference within the detection areas A1 and A2 set in step S501. That is, as shown in FIG. 14, when the host vehicle is traveling at a junction, the luminance difference is detected within the detection areas A1 and A2 that have been widened in the vehicle width direction. Thereby, also in the second embodiment, other vehicles existing in the merging lane can be appropriately detected when the host vehicle is traveling at a junction.
  • the edge line detection unit 37 calculates the attribute s of each attention point Pa according to Equation 1 above.
  • Then, the edge line detection unit 37 calculates the continuity c of the attribute s of each attention point Pa according to Equation 2 above.
  • step S512 the computer 30a determines whether or not the processes in steps S504 to S511 have been executed for all the attention lines La that can be set on the detection area A1.
  • If there is an attention line La that has not yet been processed (No in step S512), the processing returns to step S504, a new attention line La is set, and the processing up to step S512 is repeated. On the other hand, if the processing has been executed for all the attention lines La (Yes in step S512), the process proceeds to step S513.
  • step S513 the three-dimensional object detection unit 33a calculates a luminance change along the edge line for each edge line detected in step S511.
  • The three-dimensional object detection unit 33a calculates the luminance change of the edge line according to any one of Equations 4, 5, and 6 above.
  • step S514 the three-dimensional object detection unit 33a excludes edge lines whose luminance change is equal to or greater than a predetermined threshold value tb from among the edge lines. That is, it is determined that an edge line having a large luminance change is not a correct edge line, and the edge line is not used for detecting a three-dimensional object. As described above, this is to prevent characters on the road surface, roadside weeds, and the like included in the detection area A1 from being detected as edge lines.
  • the predetermined threshold value tb is a value set based on a luminance change generated by characters on the road surface, weeds on the road shoulder, or the like, which is obtained in advance through experiments or the like.
  • The three-dimensional object detection unit 33a determines that an edge line whose luminance change is less than the predetermined threshold value tb is an edge line of a three-dimensional object, and thereby detects the three-dimensional object existing in the adjacent lane.
  • Next, in step S515, the three-dimensional object detection unit 33a determines whether or not the amount of edge lines is equal to or greater than the predetermined threshold value β. If the amount of edge lines is less than the threshold value β (No in step S515), the three-dimensional object detection unit 33a determines that no three-dimensional object exists in the detection area A1, the process proceeds to step S517, and it is determined that no adjacent vehicle exists in the detection area A1. On the other hand, if the amount of edge lines is equal to or greater than the threshold value β, it is determined that a three-dimensional object, that is, an adjacent vehicle, exists in the detection area A1.
  • Also in the second embodiment, the detection area setting process shown in FIG. 17 is performed as in the first embodiment, and the detection areas set by this detection area setting process are applied to the adjacent vehicle detection process shown in FIG. 26.
  • As described above, in the second embodiment, the captured image is converted into a bird's-eye view image, edge information of a three-dimensional object is detected from the converted bird's-eye view image, and the adjacent vehicle existing in the adjacent lane is thereby detected. In addition, it is determined whether or not the host vehicle is traveling at a junction, and if the host vehicle is traveling at a junction and has not changed lanes, the detection area is expanded in the vehicle width direction.
  • For example, in the embodiments described above, the distance in the vehicle width direction from the road shoulder to the host vehicle may be calculated, and the amount by which the detection areas A1 and A2 are expanded may be changed according to the calculated distance.
  • For example, the larger the distance in the vehicle width direction from the road shoulder to the host vehicle, the larger the amount by which the detection areas A1 and A2 can be expanded.
  • The method for calculating the distance in the vehicle width direction from the road shoulder to the host vehicle is not particularly limited. For example, the distance in the vehicle width direction from the road shoulder to the host vehicle may be calculated by recognizing the white lines of the road on which the host vehicle travels and thereby detecting the position of the host vehicle on that road, or it may be calculated by detecting an obstacle on the road shoulder.
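  • As a simple illustration of scaling the expansion amount with the distance to the road shoulder, a proportional rule clamped to a maximum could look like the sketch below; the gain and the maximum value are placeholders and are not taken from the embodiment.

    def expansion_amount(shoulder_distance_m, gain=0.5, max_expand_m=2.5):
        """Sketch: widen the detection area more when the road shoulder is farther away."""
        return min(max(shoulder_distance_m * gain, 0.0), max_expand_m)

    for d in [0.0, 2.0, 4.0, 8.0]:
        print(d, expansion_amount(d))   # 0.0, 1.0, 2.0, 2.5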
  • Further, in the above-described embodiments, a configuration has been illustrated in which the detection areas A1 and A2 are gradually narrowed when a predetermined time has elapsed after they are expanded in the vehicle width direction. However, the configuration is not limited to this; the detection areas A1 and A2 may instead be gradually narrowed when the host vehicle has traveled a predetermined distance after they are expanded in the vehicle width direction. In this case as well, it is possible to effectively prevent a detected other vehicle from being lost immediately after the host vehicle has passed the junction.
  • Furthermore, in the above-described embodiments, a configuration in which the detection areas A1 and A2 are widened in the vehicle width direction when the host vehicle is traveling at a junction has been exemplified. However, vehicle speed information may be acquired, and when the vehicle speed of the host vehicle is below a predetermined value, it may be determined that the host vehicle is not traveling on an expressway; in this case, the detection areas A1 and A2 need not be widened in the vehicle width direction even if the host vehicle is traveling at a junction. Thereby, since the detection area setting process is performed only on expressways, the processing load can be reduced.
  • In the above-described embodiments, the vehicle speed of the host vehicle V1 is determined based on a signal from the speed sensor 20, but the present invention is not limited thereto, and the speed may be estimated from a plurality of images captured at different times. In this case, the vehicle speed sensor 20 is not necessary, and the configuration can be simplified.
  • The camera 10 of the above-described embodiments corresponds to the imaging unit of the present invention, the viewpoint conversion unit 31 corresponds to the image conversion unit of the present invention, the alignment unit 32, the three-dimensional object detection unit 33, the luminance difference calculation unit 36, and the edge line detection unit 37 correspond to the three-dimensional object detection unit of the present invention, the out-of-road determination unit 34 corresponds to the variation degree detection unit of the present invention, and the detection area setting unit 35 corresponds to the lane number increase determination means, lane change determination means, detection area setting means, distance detection means, and vehicle speed information acquisition means of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a three-dimensional object detection device comprising: an imaging means (10) that captures an image of the area behind a vehicle; a detection region setting means (35) that sets a predetermined detection region behind the vehicle; an image conversion means (31) that converts the viewpoint of a captured image into that of a bird's-eye view image; three-dimensional object detection means (32, 33) that generate differential waveform information from a difference image in which the positions of bird's-eye view images taken at different times are aligned, and that detect the presence of three-dimensional objects within the detection region on the basis of the differential waveform information; a lane-number-increase determination means (35) that determines, on the basis of changes in the differential waveform information, whether the number of lanes of the road on which the vehicle is traveling has increased; and a lane change determination means (35) that determines whether the vehicle is changing lanes. The three-dimensional object detection device is characterized in that the detection region setting means enlarges the detection region in the vehicle width direction when it is determined that the vehicle is not changing lanes and that the number of lanes of the road on which the vehicle is traveling has increased.
PCT/JP2013/054859 2012-03-02 2013-02-26 Three-dimensional object detection device WO2013129357A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014502226A JP5915728B2 (ja) 2012-03-02 2013-02-26 Three-dimensional object detection device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-046616 2012-03-02
JP2012046616 2012-03-02

Publications (1)

Publication Number Publication Date
WO2013129357A1 true WO2013129357A1 (fr) 2013-09-06

Family

ID=49082554

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/054859 WO2013129357A1 (fr) 2012-03-02 2013-02-26 Three-dimensional object detection device

Country Status (2)

Country Link
JP (1) JP5915728B2 (fr)
WO (1) WO2013129357A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108141569A (zh) * 2015-10-08 2018-06-08 日产自动车株式会社 Display assistance device and display assistance method
CN111551196A (zh) * 2020-04-23 2020-08-18 上海悠络客电子科技股份有限公司 Method for detecting whether a vehicle is present at a maintenance station


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005189009A (ja) * 2003-12-24 2005-07-14 Aisin Aw Co Ltd Navigation device and navigation system
JP2009143309A (ja) * 2007-12-12 2009-07-02 Toyota Motor Corp Lane keeping assistance device
JP2011070411A (ja) * 2009-09-25 2011-04-07 Clarion Co Ltd Sensor controller, navigation device, and sensor control method
JP2011090582A (ja) * 2009-10-23 2011-05-06 Fuji Heavy Ind Ltd Driving assistance device for right turns
JP2012003662A (ja) * 2010-06-21 2012-01-05 Nissan Motor Co Ltd Movement distance detection device and movement distance detection method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108141569A (zh) * 2015-10-08 2018-06-08 日产自动车株式会社 Display assistance device and display assistance method
CN108141569B (zh) * 2015-10-08 2020-04-28 日产自动车株式会社 Display assistance device and display assistance method
CN111551196A (zh) * 2020-04-23 2020-08-18 上海悠络客电子科技股份有限公司 Method for detecting whether a vehicle is present at a maintenance station

Also Published As

Publication number Publication date
JPWO2013129357A1 (ja) 2015-07-30
JP5915728B2 (ja) 2016-05-11

Similar Documents

Publication Publication Date Title
JP5924399B2 (ja) Three-dimensional object detection device
JP5804180B2 (ja) Three-dimensional object detection device
JP5776795B2 (ja) Three-dimensional object detection device
JP5733467B2 (ja) Three-dimensional object detection device
JP6020567B2 (ja) Three-dimensional object detection device and three-dimensional object detection method
JP5682735B2 (ja) Three-dimensional object detection device
JP5943077B2 (ja) Three-dimensional object detection device and three-dimensional object detection method
JP5682734B2 (ja) Three-dimensional object detection device
JP5743020B2 (ja) Three-dimensional object detection device
JPWO2014017521A1 (ja) Three-dimensional object detection device
JP5915728B2 (ja) Three-dimensional object detection device
JP5794379B2 (ja) Three-dimensional object detection device and three-dimensional object detection method
JP5790867B2 (ja) Three-dimensional object detection device
JP5668891B2 (ja) Three-dimensional object detection device
JP5768927B2 (ja) Three-dimensional object detection device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13754876

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014502226

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13754876

Country of ref document: EP

Kind code of ref document: A1