WO2013129358A1 - Three-Dimensional Object Detection Device
- Publication number: WO2013129358A1 (PCT application PCT/JP2013/054860)
- Authority: WIPO (PCT)
- Prior art keywords: three-dimensional object, detection, bird's-eye view, image
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/262—Analysis of motion using transform domain methods, e.g. Fourier domain methods
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q9/00—Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
- B60Q9/002—Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for parking purposes, e.g. for warning the driver that his vehicle has contacted or is about to contact an obstacle
- B60Q9/004—Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for parking purposes, e.g. for warning the driver that his vehicle has contacted or is about to contact an obstacle using wave sensors
- B60Q9/005—Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for parking purposes, e.g. for warning the driver that his vehicle has contacted or is about to contact an obstacle using wave sensors using a video camera
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/60—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
- B60R2300/607—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective from a bird's eye viewpoint
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/802—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20064—Wavelet transform [DWT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
Definitions
- the present invention relates to a three-dimensional object detection device.
- This application claims priority based on Japanese Patent Application No. 2012-046629 filed on Mar. 2, 2012.
- the contents described in the application are incorporated into the present application by reference and made a part of the description of the present application.
- conventionally, a technique is known that detects plantings on a road shoulder by performing pattern-matching image processing on a captured image captured by an imaging device (see Patent Document 1).
- however, because the conventional technique detects planted grass by image processing based on pattern matching, high detection accuracy cannot be obtained when detecting the planted grass, and the planted grass may be erroneously detected as another vehicle traveling on the road.
- the problem to be solved by the present invention is to provide a three-dimensional object detection device that can appropriately detect an adjacent vehicle.
- the present invention solves the above problem by detecting a three-dimensional object based on a captured image, calculating the degree of variation in the movement speed of the detected three-dimensional object based on the amount of change in that movement speed, and determining, based on the calculated degree of variation, whether or not the detected three-dimensional object is a non-detection target object.
- when the movement speed of a non-detection target object such as planted grass is calculated based on image information, the amount of temporal change in that movement speed tends to vary widely. According to the present invention, by determining whether or not the detected three-dimensional object is a non-detection target object such as planted grass based on the degree of variation in its movement speed, an adjacent vehicle can be detected appropriately.
- FIG. 1 is a schematic configuration diagram of a vehicle equipped with a three-dimensional object detection device.
- FIG. 2 is a plan view showing a traveling state of the vehicle of FIG.
- FIG. 3 is a block diagram showing details of the computer.
- 4A and 4B are diagrams for explaining the outline of the processing of the alignment unit, where FIG. 4A is a plan view showing the moving state of the vehicle, and FIG. 4B is an image showing the outline of the alignment.
- FIG. 5 is a schematic diagram illustrating how a differential waveform is generated by the three-dimensional object detection unit.
- FIG. 6 is a diagram illustrating a small area divided by the three-dimensional object detection unit.
- FIG. 7 is a diagram illustrating an example of a histogram obtained by the three-dimensional object detection unit.
- FIG. 8 is a diagram illustrating weighting by the three-dimensional object detection unit.
- FIG. 9 is a diagram illustrating another example of a histogram obtained by the three-dimensional object detection unit.
- FIG. 10 is a diagram for explaining a method for determining a non-detection target.
- FIG. 11 is a diagram for explaining increase or decrease of the count value.
- FIG. 12 is a flowchart showing the adjacent vehicle detection method (part 1).
- FIG. 13 is a flowchart illustrating the adjacent vehicle detection method (part 2).
- FIG. 14 is a block diagram illustrating details of the computer according to the second embodiment.
- FIGS. 15A and 15B are diagrams showing the running state of the vehicle, in which FIG. 15A is a plan view showing the positional relationship of the detection areas and the like, and FIG. 15B is a perspective view showing the positional relationship of the detection areas and the like in real space.
- FIG. 16 is a diagram for explaining the operation of the luminance difference calculation unit according to the second embodiment, in which FIG. 16A shows the positional relationship among the attention line, reference line, attention point, and reference point in the bird's-eye view image, and FIG. 16B shows their positional relationship in real space.
- FIGS. 17A and 17B are diagrams for explaining the detailed operation of the luminance difference calculation unit according to the second embodiment, in which FIG. 17A shows the detection area in the bird's-eye view image and FIG. 17B shows the positional relationship among the attention line, reference line, attention points, and reference points in the bird's-eye view image.
- FIG. 18 is a diagram illustrating an image example for explaining the edge detection operation.
- 19A and 19B are diagrams illustrating edge lines and luminance distribution on the edge lines.
- FIG. 19A is a diagram illustrating the luminance distribution when a three-dimensional object (adjacent vehicle) exists in the detection area, and FIG. 19B is a diagram illustrating the luminance distribution when no three-dimensional object exists in the detection area.
- FIG. 20 is a flowchart (part 1) illustrating the adjacent vehicle detection method according to the second embodiment.
- FIG. 21 is a flowchart (part 2) illustrating the adjacent vehicle detection method according to the second embodiment.
- FIG. 1 is a schematic configuration diagram of a vehicle equipped with a three-dimensional object detection device 1 according to the present embodiment.
- the three-dimensional object detection device 1 according to the present embodiment is intended to detect another vehicle (hereinafter also referred to as an adjacent vehicle V2) existing in an adjacent lane that the host vehicle V1 may contact when changing lanes.
- the three-dimensional object detection device 1 according to the present embodiment includes a camera 10, a vehicle speed sensor 20, and a calculator 30.
- the camera 10 is attached to the host vehicle V1 at a height h at the rear of the vehicle so that its optical axis points downward at an angle θ from the horizontal.
- the camera 10 captures an image of a predetermined area in the surrounding environment of the host vehicle V1 from this position.
- the vehicle speed sensor 20 detects the traveling speed of the host vehicle V1, and calculates the vehicle speed from the wheel speed detected by, for example, a wheel speed sensor that detects the rotational speed of the wheel.
- the computer 30 detects an adjacent vehicle existing in an adjacent lane behind the host vehicle.
- FIG. 2 is a plan view showing a traveling state of the host vehicle V1 of FIG.
- the camera 10 images the vehicle rear side at a predetermined angle of view a.
- the angle of view a of the camera 10 is set to an angle of view at which the left and right lanes (adjacent lanes) can be imaged in addition to the lane in which the host vehicle V1 travels.
- FIG. 3 is a block diagram showing details of the computer 30 of FIG. 1. In FIG. 3, the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationships.
- the computer 30 includes a viewpoint conversion unit 31, an alignment unit 32, a three-dimensional object detection unit 33, and a three-dimensional object determination unit 34. Each component is described below.
- the viewpoint conversion unit 31 inputs captured image data of a predetermined area obtained by imaging with the camera 10, and converts the viewpoint of the input captured image data into bird's-eye image data in a bird's-eye view state.
- the state viewed from a bird's-eye view is a state viewed from the viewpoint of a virtual camera looking down from above, for example, vertically downward.
- This viewpoint conversion can be executed as described in, for example, Japanese Patent Application Laid-Open No. 2008-219063.
- the reason the captured image data is converted by viewpoint into bird's-eye view image data is that, through this viewpoint conversion, the vertical edges peculiar to a three-dimensional object are converted into a group of straight lines passing through a specific fixed point; using this principle, planar objects and three-dimensional objects can be distinguished.
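- the conversion itself is deferred to the cited publication; purely as an illustration of this kind of viewpoint conversion, the following sketch (a minimal inverse-perspective warp, assuming a known 3x3 homography H that maps bird's-eye pixel coordinates to camera pixel coordinates; all names are hypothetical, not from the patent) resamples a grayscale camera image onto a bird's-eye grid:

```python
import numpy as np

def warp_to_birds_eye(image, H, out_shape):
    """Resample a grayscale camera image into a bird's-eye view grid.
    H is a 3x3 homography mapping bird's-eye pixel coords (x, y, 1)
    to camera pixel coords; nearest-neighbour sampling, zeros outside."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h_out * w_out)])  # 3 x N
    src = H @ pts
    u = np.round(src[0] / src[2]).astype(int)   # camera column
    v = np.round(src[1] / src[2]).astype(int)   # camera row
    ok = (0 <= u) & (u < image.shape[1]) & (0 <= v) & (v < image.shape[0])
    out = np.zeros(h_out * w_out, dtype=image.dtype)
    out[ok] = image[v[ok], u[ok]]
    return out.reshape(h_out, w_out)
```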
- the alignment unit 32 sequentially inputs the bird's-eye view image data obtained by the viewpoint conversion of the viewpoint conversion unit 31 and aligns the positions of the inputted bird's-eye view image data at different times.
- 4A and 4B are diagrams for explaining the outline of the processing of the alignment unit 32, where FIG. 4A is a plan view showing the moving state of the host vehicle V1, and FIG. 4B is an image showing the outline of the alignment.
- as shown in FIG. 4A, assume that the host vehicle V1 is located at P1 at the current time and was located at P1' one moment earlier. Assume also that an adjacent vehicle V2 is running parallel to the host vehicle V1 diagonally behind it, that the adjacent vehicle V2 is located at P2 at the current time and was located at P2' one moment earlier, and that the host vehicle V1 has moved a distance d in that interval. Note that "one moment earlier" may be a time a predetermined interval (for example, one control cycle) before the current time, or a time an arbitrary interval earlier.
- the bird's-eye view image PBt at the current time is as shown in FIG. 4B. In this bird's-eye view image PBt, the white lines drawn on the road surface are rectangular and are viewed relatively accurately in plan, but the adjacent vehicle V2 (position P2) appears to fall over. The same applies to the bird's-eye view image PBt-1 one moment earlier: the white lines are viewed relatively accurately in plan, while the adjacent vehicle V2 (position P2') appears to fall over.
- as described above, this is because the vertical edges of a three-dimensional object appear as a group of straight lines along the falling direction through the viewpoint conversion to bird's-eye view image data, whereas a planar image on the road surface contains no vertical edges and therefore does not fall over even when the viewpoint is converted.
- the alignment unit 32 aligns the bird's-eye view images PBt and PBt-1 described above on the data. In doing so, the alignment unit 32 offsets the bird's-eye view image PBt-1 from the previous moment so that its position matches the bird's-eye view image PBt at the current time. The left image and the center image in FIG. 4B show the state offset by the movement distance d'.
- this offset amount d' is the amount of movement on the bird's-eye view image data corresponding to the actual movement distance d of the host vehicle V1 shown in FIG. 4A, and is determined based on a signal from the vehicle speed sensor 20 and the time elapsed from the previous moment to the current time.
- after the alignment, the alignment unit 32 takes the difference between the bird's-eye view images PBt and PBt-1 and generates the data of the difference image PDt. Here, the alignment unit 32 takes the absolute value of the pixel value difference between the bird's-eye view images PBt and PBt-1 in order to cope with changes in the illumination environment, and sets the pixel value of the difference image PDt to "1" when the absolute value is equal to or greater than a predetermined threshold value, and to "0" otherwise.
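- as a rough sketch of the alignment and differencing just described (assuming grayscale NumPy arrays and an ego-motion offset d' that reduces to an integer pixel shift along the image's vertical axis; the shift direction is an assumption, not the patent's exact implementation):

```python
import numpy as np

def difference_image(pb_t, pb_t_prev, offset_px, threshold):
    """Offset the previous bird's-eye image PB_{t-1} by the ego-motion d'
    (assumed here to be an integer pixel shift along the row axis) and
    binarise the absolute grey-level difference against PB_t:
    pixel value 1 where |PB_t - PB_{t-1}| >= threshold, else 0."""
    shifted = np.zeros_like(pb_t_prev)
    if offset_px > 0:
        shifted[offset_px:, :] = pb_t_prev[:-offset_px, :]
    else:
        shifted = pb_t_prev.copy()
    diff = np.abs(pb_t.astype(np.int32) - shifted.astype(np.int32))
    return (diff >= threshold).astype(np.uint8)  # difference image PD_t
```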
- the three-dimensional object detection unit 33 detects a three-dimensional object based on the data of the difference image PDt shown in FIG. 4B. At this time, the three-dimensional object detection unit 33 also calculates the movement distance of the three-dimensional object in real space. In detecting the three-dimensional object and calculating its movement distance, the three-dimensional object detection unit 33 first generates a differential waveform.
- the three-dimensional object detection unit 33 sets a detection region in the difference image PD t .
- the three-dimensional object detection device 1 of the present example is intended to calculate a movement distance for an adjacent vehicle that may be contacted when the host vehicle V1 changes lanes. For this reason, in this example, as shown in FIG. 2, rectangular detection areas A1, A2 are set on the rear side of the host vehicle V1. Such detection areas A1, A2 may be set from a relative position with respect to the host vehicle V1, or may be set based on the position of the white line. When setting the position of the white line as a reference, the three-dimensional object detection device 1 may use, for example, an existing white line recognition technique.
- the three-dimensional object detection unit 33 recognizes the sides (sides along the traveling direction) of the set detection areas A1 and A2 on the own vehicle V1 side as the ground lines L1 and L2.
- the ground line means a line in which the three-dimensional object contacts the ground.
- in the present embodiment, however, the ground lines are not lines of contact with the ground but are set as described above. Even so, experience shows that the difference between a ground line according to the present embodiment and the ground line obtained from the position of the actual adjacent vehicle V2 is not too large, and there is no problem in practice.
- FIG. 5 is a schematic diagram illustrating how the three-dimensional object detection unit 33 generates a differential waveform.
- the three-dimensional object detection unit 33 generates a differential waveform DWt from the portions of the difference image PDt (right diagram of FIG. 4B) calculated by the alignment unit 32 that correspond to the detection areas A1 and A2.
- the three-dimensional object detection unit 33 generates a differential waveform DW t along the direction in which the three-dimensional object falls by viewpoint conversion.
- in the following description, only the detection area A1 is referred to for convenience, but a differential waveform DWt is generated for the detection area A2 by the same procedure.
- specifically, the three-dimensional object detection unit 33 first defines a line La along the direction in which the three-dimensional object falls on the data of the difference image PDt, and counts the number of difference pixels DP indicating a predetermined difference on the line La. Here, since the pixel values of the difference image PDt are expressed as "0" or "1", the pixels indicating "1" are counted as difference pixels DP.
- after counting the number of difference pixels DP, the three-dimensional object detection unit 33 obtains the intersection point CP of the line La and the ground line L1. The three-dimensional object detection unit 33 then associates the intersection CP with the count, determines the horizontal-axis position (the position on the up-down axis in the right diagram of FIG. 5) based on the position of the intersection CP, determines the vertical-axis position (the position on the left-right axis in the right diagram of FIG. 5) from the count, and plots the count at the intersection CP.
- the three-dimensional object detection unit 33 defines lines Lb, Lc... In the direction in which the three-dimensional object falls, counts the number of difference pixels DP, and determines the horizontal axis position based on the position of each intersection CP. Then, the vertical axis position is determined from the count number (number of difference pixels DP) and plotted.
- by repeating the above, the three-dimensional object detection unit 33 generates the differential waveform DWt as shown in the right diagram of FIG. 5.
- a difference pixel DP in the data of the difference image PDt is a pixel that has changed between the images captured at different times; in other words, it is a location where a three-dimensional object exists. For this reason, at locations where a three-dimensional object exists, the differential waveform DWt is generated by counting the number of pixels along the direction in which the three-dimensional object falls and forming a frequency distribution. In particular, because the pixels are counted along the direction in which the three-dimensional object falls, the differential waveform DWt is generated from information in the height direction of the three-dimensional object.
- here, the lines La and Lb along the direction in which the three-dimensional object falls overlap the detection area A1 by different distances. For this reason, if the detection area A1 were filled with difference pixels DP, the number of difference pixels DP on the line La would be larger than on the line Lb. Therefore, when determining the vertical-axis position from the count of difference pixels DP, the three-dimensional object detection unit 33 normalizes the count based on the distance by which the line La or Lb overlaps the detection area A1: as in the left diagram of FIG. 5, each count is divided by the corresponding overlap distance. In this way, the values of the differential waveform DWt corresponding to the lines La and Lb become substantially the same.
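- a minimal sketch of the waveform generation described above, assuming the falling-direction lines La, Lb, ... inside the detection area have already been rasterized into pixel-coordinate arrays (how those rays are constructed is omitted here; names are illustrative):

```python
import numpy as np

def differential_waveform(pd_t, falling_lines):
    """Build DW_t from a binary difference image PD_t. `falling_lines` is a
    list of (N_i, 2) integer arrays of (row, col) pixel coordinates, one per
    line La, Lb, ... along the falling direction inside the detection area.
    Each count of '1' pixels is normalised by the line's overlap length N_i."""
    values = []
    for pts in falling_lines:
        count = pd_t[pts[:, 0], pts[:, 1]].sum()  # difference pixels DP on the line
        values.append(count / len(pts))           # normalise by overlap distance
    return np.asarray(values)
```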
- after generating the differential waveform DWt, the three-dimensional object detection unit 33 calculates the movement distance by comparing the differential waveform DWt at the current time with the differential waveform DWt-1 one moment earlier. That is, the three-dimensional object detection unit 33 calculates the movement distance from the temporal change between the differential waveforms DWt and DWt-1.
- the three-dimensional object detection unit 33 divides the differential waveform DW t into a plurality of small areas DW t1 to DW tn (n is an arbitrary integer equal to or greater than 2).
- FIG. 6 is a diagram illustrating the small areas DW t1 to DW tn divided by the three-dimensional object detection unit 33.
- the small areas DWt1 to DWtn are divided so as to overlap one another, as shown in FIG. 6: for example, the small area DWt1 and the small area DWt2 overlap, and the small area DWt2 and the small area DWt3 overlap.
- next, the three-dimensional object detection unit 33 obtains an offset amount (the amount of movement of the differential waveform in the horizontal-axis direction, i.e. the up-down direction in FIG. 6) for each of the small areas DWt1 to DWtn. The offset amount is obtained as the difference (distance in the horizontal-axis direction) between the differential waveform DWt-1 one moment earlier and the differential waveform DWt at the current time.
- specifically, for each of the small areas DWt1 to DWtn, the three-dimensional object detection unit 33 moves the differential waveform DWt-1 one moment earlier along the horizontal axis, determines the position where the error with respect to the differential waveform DWt at the current time is minimized, and obtains, as the offset amount, the amount of horizontal movement between the original position of the differential waveform DWt-1 and the position of minimum error. The three-dimensional object detection unit 33 then counts the offset amounts obtained for the small areas DWt1 to DWtn and forms a histogram.
- FIG. 7 is a diagram illustrating an example of a histogram obtained by the three-dimensional object detection unit 33.
- the offset amounts, each being the amount of movement that minimizes the error between one of the small areas DWt1 to DWtn and the differential waveform DWt-1 one moment earlier, show some variation. For this reason, the three-dimensional object detection unit 33 forms a histogram of the offset amounts, including this variation, and calculates the movement distance from the histogram.
- the three-dimensional object detection unit 33 calculates the movement distance of the three-dimensional object from the maximum value of the histogram. That is, in the example shown in FIG. 7, the three-dimensional object detection unit 33 calculates the offset amount giving the maximum value of the histogram as the movement distance τ*.
- this movement distance τ* is the relative movement distance of the three-dimensional object with respect to the host vehicle. For this reason, when calculating the absolute movement distance, the three-dimensional object detection unit 33 calculates the absolute movement distance based on the obtained movement distance τ* and a signal from the vehicle speed sensor 20.
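- the small-area offset search and histogram just described could look roughly like the following (window length, overlap, and search range are illustrative choices, not values from the patent):

```python
import numpy as np

def movement_distance(dw_t, dw_prev, window, max_shift):
    """Estimate the relative movement distance tau* from two differential
    waveforms. Each overlapping small area of DW_{t-1} (length `window`,
    50% overlap) is slid over DW_t; the shift minimising the squared error
    is that area's offset, and the histogram mode over all areas is tau*."""
    offsets = []
    step = max(1, window // 2)                    # overlapping small areas
    for start in range(0, len(dw_prev) - window + 1, step):
        seg = dw_prev[start:start + window]
        errs = []
        for s in range(-max_shift, max_shift + 1):
            lo, hi = start + s, start + s + window
            if lo < 0 or hi > len(dw_t):
                errs.append(np.inf)               # shift leaves the waveform
            else:
                errs.append(np.sum((dw_t[lo:hi] - seg) ** 2))
        offsets.append(int(np.argmin(errs)) - max_shift)
    edges = np.arange(-max_shift, max_shift + 2) - 0.5
    hist, _ = np.histogram(offsets, bins=edges)
    return int(np.argmax(hist)) - max_shift       # mode of the offset histogram
```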
- a one-dimensional waveform is obtained by calculating the moving distance of the three-dimensional object from the offset amount of the differential waveform DW t when the error of the differential waveform DW t generated at different times is minimized.
- the movement distance is calculated from the offset amount of the information, and the calculation cost can be suppressed in calculating the movement distance.
- by dividing the differential waveform DW t generated at different times into a plurality of small areas DW t1 to DW tn it is possible to obtain a plurality of waveforms representing respective portions of the three-dimensional object.
- the calculation accuracy of the movement distance can be improved. Further, in the present embodiment, by calculating the moving distance of the three-dimensional object from the time change of the differential waveform DW t including the information in the height direction, compared with a case where attention is paid only to one point of movement, Since the detection location before the time change and the detection location after the time change are specified including information in the height direction, it is likely to be the same location in the three-dimensional object, and the movement distance is calculated from the time change of the same location, and the movement Distance calculation accuracy can be improved.
- note that the three-dimensional object detection unit 33 may weight each of the plurality of small areas DWt1 to DWtn and form the histogram by counting the offset amount obtained for each of the small areas DWt1 to DWtn in accordance with its weight.
- FIG. 8 is a diagram illustrating weighting by the three-dimensional object detection unit 33.
- as shown in FIG. 8, the small area DWm (m is an integer from 1 to n-1) is flat; that is, in the small area DWm, the difference between the maximum and minimum counts of pixels indicating the predetermined difference is small. The three-dimensional object detection unit 33 reduces the weight for such a small area DWm, because a flat small area DWm is featureless and the error in calculating its offset amount is likely to be large.
- on the other hand, the small area DWm+k (k is an integer equal to or less than n-m) is rich in undulation; that is, in the small area DWm+k, the difference between the maximum and minimum counts of pixels indicating the predetermined difference is large. The three-dimensional object detection unit 33 increases the weight for such a small area DWm+k, because an undulating small area DWm+k is distinctive and its offset amount is likely to be calculated accurately. Weighting in this way improves the calculation accuracy of the movement distance.
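- a sketch of this weighting, using the max-min swing of each small area as the weight (the patent does not specify the exact weight formula, so this is an assumption):

```python
import numpy as np

def area_weights(small_areas):
    """Weight each small area DW_t1..DW_tn by its max-min swing: flat areas
    (featureless, unreliable offsets) get low weight, undulating areas high."""
    return np.array([seg.max() - seg.min() for seg in small_areas])

# together with the `offsets` list and `edges` from the earlier
# movement_distance sketch, a weighted histogram replaces the plain count:
# hist, _ = np.histogram(offsets, bins=edges, weights=area_weights(small_areas))
```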
- note that the differential waveform DWt is divided into the plurality of small areas DWt1 to DWtn in the present embodiment in order to improve the calculation accuracy of the movement distance; when high calculation accuracy is not required, the division into the small areas DWt1 to DWtn is not necessary. In that case, the three-dimensional object detection unit 33 calculates the movement distance from the offset amount of the whole differential waveform DWt at which the error between the differential waveform DWt and the differential waveform DWt-1 is minimized. In other words, the method of obtaining the offset amount between the differential waveform DWt-1 one moment earlier and the differential waveform DWt at the current time is not limited to the procedure described above.
- note that the three-dimensional object detection unit 33 can also obtain the moving speed of the host vehicle V1 (camera 10) and, from the obtained moving speed, the offset amount expected for a stationary object. After obtaining the offset amount of the stationary object, the three-dimensional object detection unit 33 calculates the movement distance of the three-dimensional object while ignoring, among the maximum values of the histogram, the one corresponding to the stationary object.
- FIG. 9 is a diagram showing another example of a histogram obtained by the three-dimensional object detection unit 33.
- when a stationary object is present in addition to the three-dimensional object within the angle of view of the camera 10, two maximum values τ1 and τ2 appear in the obtained histogram, and one of the two maximum values τ1, τ2 is the offset amount of the stationary object.
- therefore, the three-dimensional object detection unit 33 calculates the offset amount for the stationary object from the moving speed, ignores the maximum value corresponding to that offset amount, and calculates the movement distance of the three-dimensional object using the remaining maximum value. This prevents a stationary object from degrading the calculation accuracy of the movement distance of the three-dimensional object.
- even when the offset amount corresponding to the stationary object is ignored, if a plurality of maximum values remain, the three-dimensional object detection unit 33 stops calculating the movement distance. This prevents a situation in which an erroneous movement distance is calculated from a histogram having a plurality of maximum values.
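- a sketch of this maximum-value selection (bin indexing and the tolerance around the stationary-object offset are assumptions for illustration):

```python
import numpy as np

def moving_object_offset(hist, ego_offset_bin, tol=1):
    """Pick the histogram maximum after zeroing the bins within `tol` of the
    offset expected for stationary objects (computed from the host vehicle's
    own speed). Returns None when several comparable maxima remain, in which
    case the distance calculation is stopped."""
    masked = hist.astype(float).copy()
    bins = np.arange(len(masked))
    masked[np.abs(bins - ego_offset_bin) <= tol] = 0.0  # drop stationary peak
    peaks = np.flatnonzero(masked == masked.max())
    return None if len(peaks) > 1 else int(peaks[0])
```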
- the three-dimensional object detection unit 33 calculates the relative movement speed of the three-dimensional object by differentiating the relative movement distance of the three-dimensional object with respect to time.
- the three-dimensional object detection unit 33 also calculates the absolute movement speed of the three-dimensional object based on the absolute movement distance of the three-dimensional object.
- further, the three-dimensional object detection unit 33 repeatedly calculates the relative movement speed of the three-dimensional object at predetermined intervals, and calculates the amount of temporal change ΔV in the calculated relative movement speed of the three-dimensional object. The calculated amount of temporal change ΔV in the relative movement speed is transmitted to the three-dimensional object determination unit 34 described later.
- the three-dimensional object determination unit 34 determines whether the three-dimensional object detected by the three-dimensional object detection unit 33 is another vehicle (an adjacent vehicle) traveling in the adjacent lane. In making this determination, the three-dimensional object determination unit 34 detects the degree of variation in the image information and, based on the detected degree of variation, determines whether or not the three-dimensional object is a non-detection target object.
- specifically, the three-dimensional object determination unit 34 uses, as the degree of variation in the image information, the absolute value |ΔV| of the amount of temporal change in the relative movement speed of the three-dimensional object calculated by the three-dimensional object detection unit 33 based on the captured images.
- here, a non-detection target object refers to an object such as planted grass, snow, or a guardrail that is not a detection target, in contrast to an adjacent vehicle, which is the detection target. By having the three-dimensional object determination unit 34 determine whether the detected three-dimensional object is such a non-detection target object, erroneous detection of a non-detection target object as an adjacent vehicle is effectively prevented.
- the three-dimensional object determination unit 34 judges that the degree of variation in the image information is higher as the absolute value |ΔV| of the amount of temporal change in the relative movement speed is larger, and increases or decreases a count value (the vertical axis shown in FIG. 10) in accordance with the absolute value |ΔV|, thereby determining whether the three-dimensional object is a non-detection target object.
- FIG. 10 is a diagram for explaining the method of determining a non-detection target object.
- FIG. 11 is a table showing an example of the increment / decrement amount of the count value.
- as shown in FIG. 11, the three-dimensional object determination unit 34 increases or decreases the count value based on the magnitude of the absolute value |ΔV| and on the brightness of the detection areas A1 and A2. That is, the three-dimensional object determination unit 34 detects the brightness of the detection areas A1 and A2 from the difference image, and when the brightness of the detection areas A1 and A2 is equal to or greater than a predetermined value (when it can be determined to be daytime) and the absolute value |ΔV| of the amount of change in the relative movement speed of the three-dimensional object is 30 km/h or more (|ΔV| ≥ 30 km/h), it determines that the three-dimensional object is highly likely to be a non-detection target object whose image information, such as edge components, varies greatly, and increases the count value by X1.
- on the other hand, when the brightness of the detection areas A1 and A2 is less than the predetermined value (when it can be determined to be nighttime) and the absolute value |ΔV| of the amount of temporal change in the relative movement speed of the three-dimensional object is 30 km/h or more, the three-dimensional object determination unit 34 increases the count value by X2. Here, X2 is smaller than X1 (X1 > X2), because at night the contrast of the captured image is low and the certainty with which the three-dimensional object can be judged to be a non-detection target object is smaller.
- further, when the brightness is equal to or greater than the predetermined value (when it can be determined to be daytime) and the absolute value |ΔV| of the amount of change in the relative movement speed of the three-dimensional object is less than 30 km/h, the three-dimensional object determination unit 34 decreases the count value by Y1. Likewise, when the brightness is less than the predetermined value (when it can be determined to be nighttime) and the absolute value |ΔV| is less than 30 km/h, the count value is decreased by Y2, where Y2 is smaller than Y1 (Y1 > Y2).
- in addition, larger decrements are also defined: Z1 for the case where the brightness is equal to or greater than the predetermined value (when it can be determined to be daytime), and Z2 for the case where it is less. Here, Z1 is larger than Y1 (Z1 > Y1), Z2 is larger than Y2 (Z2 > Y2), and Z2 is smaller than Z1 (Z1 > Z2).
- the three-dimensional object determination unit 34 thus increases or decreases the count value in accordance with the variation in the absolute value |ΔV|, and, as shown in FIG. 10, determines the three-dimensional object to be a non-detection target object when the count value becomes equal to or greater than a first threshold value s1. Further, after the count value has reached the first threshold value s1, the three-dimensional object determination unit 34 cancels the determination that the three-dimensional object is a non-detection target object only when the count value becomes smaller than a second threshold value s2.
- for example, in the example shown in FIG. 10, while the count value is equal to or greater than the second threshold value s2 after once reaching the first threshold value s1, the detected three-dimensional object continues to be determined to be a non-detection target object; when the count value becomes less than the second threshold value s2 at time t2, it is determined that the detected three-dimensional object is not a non-detection target object; and if the count value later becomes equal to or greater than the first threshold value s1 again, the detected three-dimensional object is again determined to be a non-detection target object.
- further, the first threshold value s1 is provided as the upper limit of the count value so that the count value does not exceed the first threshold value s1. Because the count value is capped at the first threshold value s1, it can be quickly reduced below the second threshold value s2 when the variation becomes small, and an adjacent vehicle can therefore be detected appropriately.
- note that although the first threshold value s1 is used as the upper limit of the count value in the example described above, the upper limit is not limited to this; a value greater than the first threshold value s1, or alternatively a value smaller than the first threshold value s1, may also be used as the upper limit of the count value.
- the three-dimensional object determination unit 34 determines whether or not the three-dimensional object detected by the three-dimensional object detection unit 33 is a non-detection target object by increasing or decreasing the count value in this way.
- when the three-dimensional object is determined to be a non-detection target object, detection of that three-dimensional object as an adjacent vehicle is suppressed. This effectively prevents a non-detection target object such as planted grass from being erroneously detected as an adjacent vehicle.
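- the whole count-value determination might be sketched as follows. All numeric values are placeholders, and the trigger for the larger Z1/Z2 decrements, which the text above leaves incomplete, is assumed here to be the period during which the object is already judged a non-detection target (consistent with the stated aim of quickly falling below s2):

```python
class NonDetectionTargetJudge:
    """Count-value scheme of FIGS. 10 and 11. |dV| >= 30 km/h raises the
    count, by X1 in bright scenes and X2 (< X1) in dark ones; otherwise the
    count falls, by Y1/Y2 normally and by the larger Z1/Z2 (assumed: while
    the object is already judged a non-detection target, so the judgement
    clears fast). The judgement turns on at s1, only turns off below s2,
    and the count is capped at s1. All numbers are illustrative."""

    def __init__(self, x1=3, x2=2, y1=2, y2=1, z1=4, z2=3, s1=10, s2=3):
        self.x1, self.x2, self.y1, self.y2 = x1, x2, y1, y2
        self.z1, self.z2 = z1, z2            # Z1 > Y1, Z2 > Y2, Z1 > Z2
        self.s1, self.s2 = s1, s2            # hysteresis thresholds
        self.count = 0
        self.is_non_target = False

    def update(self, dv_abs_kmh, is_bright):
        if dv_abs_kmh >= 30:                 # large speed variation
            self.count += self.x1 if is_bright else self.x2
        elif self.is_non_target:             # assumed Z-decrement condition
            self.count -= self.z1 if is_bright else self.z2
        else:
            self.count -= self.y1 if is_bright else self.y2
        self.count = max(0, min(self.count, self.s1))  # cap at s1
        if self.count >= self.s1:
            self.is_non_target = True
        elif self.count < self.s2:
            self.is_non_target = False
        return self.is_non_target
```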
- FIGS. 12 and 13 are flowcharts showing the adjacent vehicle detection process of the present embodiment.
- first, the computer 30 acquires the data of the captured image P from the camera 10 (step S101), and the viewpoint conversion unit 31 converts the viewpoint of the acquired captured image P to generate the data of the bird's-eye view image PBt (step S102).
- next, the alignment unit 32 aligns the data of the bird's-eye view image PBt with the data of the bird's-eye view image PBt-1 one moment earlier and generates the data of the difference image PDt (step S103). Then, the three-dimensional object detection unit 33 counts the number of difference pixels DP with a pixel value of "1" in the data of the difference image PDt and generates the differential waveform DWt (step S104).
- next, the three-dimensional object detection unit 33 determines whether or not the peak of the differential waveform DWt is equal to or greater than a predetermined threshold value α (step S105). When the peak of the differential waveform DWt is not equal to or greater than the threshold value α (step S105: No), the three-dimensional object detection unit 33 determines that no three-dimensional object exists and that no other vehicle exists (step S130 in FIG. 13). On the other hand, when the peak of the differential waveform DWt is equal to or greater than the threshold value α (step S105: Yes), the three-dimensional object detection unit 33 determines that a three-dimensional object exists in the adjacent lane and proceeds to step S106.
- in step S106, the three-dimensional object detection unit 33 divides the differential waveform DWt into the plurality of small areas DWt1 to DWtn.
- next, the three-dimensional object detection unit 33 assigns a weight to each of the small areas DWt1 to DWtn (step S107), calculates an offset amount for each of the small areas DWt1 to DWtn (step S108), and generates a weighted histogram (step S109).
- the three-dimensional object detection unit 33 then calculates, based on the histogram, the relative movement distance, i.e. the movement distance of the adjacent vehicle with respect to the host vehicle (step S110). Further, the three-dimensional object detection unit 33 calculates the relative movement speed by differentiating the relative movement distance with respect to time (step S111), and calculates the absolute movement speed of the adjacent vehicle by adding the host vehicle speed detected by the vehicle speed sensor 20 (step S112).
- next, the three-dimensional object determination unit 34 determines whether or not the absolute value |ΔV| of the amount of temporal change in the relative movement speed of the three-dimensional object is 30 km/h or more. When the absolute value |ΔV| is 30 km/h or more, the three-dimensional object determination unit 34 increases the count value in accordance with the brightness of the detection areas A1 and A2, as shown in FIG. 11; when the absolute value |ΔV| is less than 30 km/h, it decreases the count value in accordance with the brightness of the detection areas A1 and A2.
- in step S124, the three-dimensional object determination unit 34 determines whether or not the count value is equal to or greater than the first threshold value s1 shown in FIG. 10. When the count value is equal to or greater than the first threshold value s1, the process proceeds to step S129, where the three-dimensional object determination unit 34 determines that the detected three-dimensional object is a non-detection target object, and then to step S130, where it determines that no adjacent vehicle exists in the adjacent lane. On the other hand, when the count value is less than the first threshold value s1, the process proceeds to step S125.
- in step S125, the three-dimensional object determination unit 34 determines whether, after once reaching the first threshold value s1, the count value has fallen below the first threshold value s1 but is still equal to or greater than the second threshold value s2 shown in FIG. 10. That is, even after the count value has decreased below the first threshold value s1, as long as it remains equal to or greater than the second threshold value s2, the process proceeds to step S129, where the three-dimensional object determination unit 34 determines that the detected three-dimensional object is a non-detection target object, and then determines that no adjacent vehicle exists in the adjacent lane (step S130).
- otherwise, the process proceeds to step S126, where the three-dimensional object determination unit 34 determines that the detected three-dimensional object is not a non-detection target object, and the process continues to step S127. Note that when the count value has never reached the first threshold value s1, the process naturally proceeds to step S126 even if the count value is smaller than the first threshold value s1 and equal to or greater than the second threshold value s2.
- in the present embodiment, the detection areas A1 and A2 are set on the rear left and right sides of the host vehicle, and emphasis is placed on whether there is a possibility of contact when the host vehicle changes lanes; the process of step S127 is performed for this reason. That is, assuming that the system of the present embodiment is operated on a highway, an adjacent vehicle whose speed is less than 10 km/h would be located far behind the host vehicle by the time a lane change is required, so it rarely poses a problem even if it exists.
- similarly, when the relative movement speed of the adjacent vehicle with respect to the host vehicle exceeds +60 km/h (that is, when the adjacent vehicle is moving more than 60 km/h faster than the host vehicle), it will have moved ahead of the host vehicle by the time of a lane change and is thus less likely to be a problem. For this reason, step S127 can be said to determine which adjacent vehicles would matter at the time of a lane change.
- determining in step S127 whether the absolute movement speed of the adjacent vehicle is 10 km/h or more and the relative movement speed of the adjacent vehicle with respect to the host vehicle is +60 km/h or less also has the following effects.
- for example, due to detection error, the absolute movement speed of a stationary object may be detected as several km/h; determining whether the speed is 10 km/h or more therefore reduces the possibility that a stationary object is judged to be an adjacent vehicle.
- likewise, noise may cause the relative speed of the adjacent vehicle with respect to the host vehicle to be detected as exceeding +60 km/h; determining whether the relative speed is +60 km/h or less therefore reduces the possibility of erroneous detection due to noise.
- instead of the processing of step S127, it may be determined that the absolute movement speed of the adjacent vehicle is not negative or is not 0 km/h. Further, since the present embodiment emphasizes whether there is a possibility of contact when the host vehicle changes lanes, a warning sound may be issued to the driver of the host vehicle, or a corresponding warning may be shown on a predetermined display device, when an adjacent vehicle is detected in step S128.
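- the speed gate of step S127 reduces to a simple predicate; a minimal sketch:

```python
def passes_speed_gate(abs_speed_kmh: float, rel_speed_kmh: float) -> bool:
    """Step S127 gate: treat the detected object as an adjacent vehicle only
    if its absolute speed is at least 10 km/h and its speed relative to the
    host vehicle is at most +60 km/h."""
    return abs_speed_kmh >= 10.0 and rel_speed_kmh <= 60.0
```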
- as described above, in the present embodiment, two images obtained at different times are converted into bird's-eye view images, a difference image PDt is generated based on the difference between the two bird's-eye view images, and the differential waveform DWt is generated from the data of the difference image PDt by counting the number of pixels indicating a predetermined difference along the direction in which the three-dimensional object falls due to the viewpoint conversion and forming a frequency distribution.
- a three-dimensional object is then detected based on the generated differential waveform DWt, and whether or not the detected three-dimensional object is a non-detection target object is determined based on the absolute value |ΔV| of the amount of change in its relative movement speed.
- for example, when edge processing is performed on a captured image of a non-detection target object such as planted grass, snow, or a guardrail, many discontinuous edge components tend to be detected; that is, the image information of a non-detection target object tends to show a high degree of variation.
- in the present embodiment, the absolute value |ΔV| of the amount of change in the relative movement speed of the three-dimensional object is detected as the degree of variation in the image information; the larger the detected absolute value |ΔV|, the more likely the three-dimensional object is to be a non-detection target object, and the smaller the absolute value |ΔV|, the more likely the three-dimensional object is to be a vehicle.
- accordingly, the count value is increased when the absolute value |ΔV| is large, and by determining that the three-dimensional object is a non-detection target object when the accumulated count value becomes equal to or greater than the first threshold value s1, and that the three-dimensional object is not a non-detection target object when the count value subsequently becomes less than the second threshold value s2, the detection accuracy for non-detection target objects can be improved.
- the three-dimensional object detection device 1a according to the second embodiment is the same as that of the first embodiment, except that it includes a computer 30a instead of the computer 30 and operates as described below.
- FIG. 14 is a block diagram showing details of the computer 30a according to the second embodiment.
- the three-dimensional object detection device 1a includes a camera 10 and a computer 30a.
- the computer 30a includes a viewpoint conversion unit 31, a luminance difference calculation unit 34, an edge line detection unit 35, a three-dimensional object detection unit 33a, and a three-dimensional object determination unit 34a.
- FIGS. 15A and 15B are diagrams illustrating an imaging range and the like of the camera 10 in FIG. 14.
- FIG. 15A is a plan view
- FIG. 15B is a perspective view of the real space on the rear side of the host vehicle V1.
- the camera 10 has a predetermined angle of view a, and images the rear side from the host vehicle V1 included in the predetermined angle of view a.
- the angle of view a of the camera 10 is set so that the imaging range of the camera 10 includes the adjacent lane in addition to the lane in which the host vehicle V1 travels.
- the detection areas A1 and A2 in this example are trapezoidal in plan view (as seen from a bird's-eye view), and the positions, sizes, and shapes of the detection areas A1 and A2 are determined based on distances d1 to d4.
- of course, the detection areas are not limited to a trapezoidal shape and may have other shapes, such as the rectangles shown in FIG. 2, when viewed from a bird's-eye view.
- the distance d1 is a distance from the host vehicle V1 to the ground lines L1 and L2.
- the ground lines L1 and L2 mean lines on which a three-dimensional object existing in the lane adjacent to the lane in which the host vehicle V1 travels contacts the ground.
- the object is to detect adjacent vehicles V2 and the like (including two-wheeled vehicles) traveling in the left and right lanes adjacent to the lane of the host vehicle V1 on the rear side of the host vehicle V1.
- accordingly, the distance d1, which gives the positions of the ground lines L1 and L2 of the adjacent vehicle V2, can be determined in a substantially fixed manner from the distance d11 from the host vehicle V1 to the white line W and the distance d12 from the white line W to the position where the adjacent vehicle V2 is predicted to travel.
- the distance d1 is not limited to being fixedly determined, and may be variable.
- the computer 30a recognizes the position of the white line W with respect to the host vehicle V1 by a technique such as white line recognition, and determines the distance d11 based on the recognized position of the white line W.
- in that case, the distance d1 is variably set using the determined distance d11. In the present embodiment, however, the distance d1 is fixedly determined.
- the distance d2 is a distance extending in the vehicle traveling direction from the rear end portion of the host vehicle V1.
- the distance d2 is determined so that the detection areas A1 and A2 are at least within the angle of view a of the camera 10.
- in the present embodiment, the distance d2 is set so that the detection areas border the range delimited by the angle of view a.
- the distance d3 is a distance indicating the length of the detection areas A1, A2 in the vehicle traveling direction. This distance d3 is determined based on the size of the three-dimensional object to be detected. In the present embodiment, since the detection target is the adjacent vehicle V2 or the like, the distance d3 is set to a length including the adjacent vehicle V2.
- the distance d4 is a distance indicating a height set so as to include the tires of the adjacent vehicle V2 and the like in real space.
- the distance d4 is a length shown in FIG. 15A in the bird's-eye view image.
- the distance d4 may be set to a length that does not include, in the bird's-eye view image, the lanes further adjacent to the left and right adjacent lanes (that is, the lanes two lanes away). This is because, if the lanes two lanes away from the lane of the host vehicle V1 were included, it would become impossible to distinguish whether an adjacent vehicle V2 exists in an adjacent lane to the left or right of the host lane in which the host vehicle V1 travels, or in a lane two lanes away.
- the distances d1 to d4 are determined, and thereby the positions, sizes, and shapes of the detection areas A1 and A2 are determined. More specifically, the position of the upper side b1 of the detection areas A1 and A2 forming a trapezoid is determined by the distance d1. The starting point position C1 of the upper side b1 is determined by the distance d2. The end point position C2 of the upper side b1 is determined by the distance d3. The side b2 of the detection areas A1 and A2 having a trapezoidal shape is determined by a straight line L3 extending from the camera 10 toward the starting point position C1.
- a side b3 of trapezoidal detection areas A1 and A2 is determined by a straight line L4 extending from the camera 10 toward the end position C2.
- the position of the lower side b4 of the detection areas A1 and A2 having a trapezoidal shape is determined by the distance d4.
- the areas surrounded by the sides b1 to b4 are set as the detection areas A1 and A2.
- the detection areas A1 and A2 are squares (rectangles) in real space on the rear side from the host vehicle V1.
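- under one assumed coordinate convention (camera at the origin, x lateral, y rearward; this convention and all names are assumptions, not from the patent), the four trapezoid corners follow directly from d1 to d4; this sketch is illustrative only:

```python
def detection_area_corners(d1, d2, d3, d4):
    """Corners of one trapezoidal detection area in bird's-eye coordinates.
    The inner side b1 lies at lateral distance d1 between C1 = (d1, d2) and
    C2 = (d1, d2 + d3); sides b2 and b3 follow the rays from the camera
    through C1 and C2; the outer side b4 lies at lateral distance d1 + d4."""
    c1 = (d1, d2)                          # start point C1 of upper side b1
    c2 = (d1, d2 + d3)                     # end point C2 of upper side b1
    t = (d1 + d4) / d1                     # ray scale factor out to side b4
    b4_near = (d1 + d4, t * d2)            # ray through C1 meets b4
    b4_far = (d1 + d4, t * (d2 + d3))      # ray through C2 meets b4
    return [c1, c2, b4_far, b4_near]
```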
- the viewpoint conversion unit 31 inputs captured image data of a predetermined area obtained by imaging with the camera 10.
- the viewpoint conversion unit 31 converts the input captured image data by viewpoint conversion into bird's-eye view image data showing a bird's-eye view state.
- the bird's-eye view is a state seen from the viewpoint of a virtual camera looking down from above, for example, vertically downward (or slightly obliquely downward).
- This viewpoint conversion process can be realized by a technique described in, for example, Japanese Patent Application Laid-Open No. 2008-219063.
- the luminance difference calculation unit 34 calculates a luminance difference with respect to the bird's-eye view image data subjected to viewpoint conversion by the viewpoint conversion unit 31 in order to detect the edge of the three-dimensional object included in the bird's-eye view image. For each of a plurality of positions along a vertical imaginary line extending in the vertical direction in the real space, the brightness difference calculating unit 34 calculates a brightness difference between two pixels in the vicinity of each position.
- the luminance difference calculation unit 34 can calculate the luminance difference by either a method of setting only one vertical virtual line extending in the vertical direction in the real space or a method of setting two vertical virtual lines.
- For the viewpoint-converted bird's-eye view image, the luminance difference calculation unit 34 sets a first vertical imaginary line corresponding to a line segment extending in the vertical direction in real space, and a second vertical imaginary line, different from the first vertical imaginary line, that also corresponds to a line segment extending in the vertical direction in real space.
- the luminance difference calculation unit 34 continuously obtains a luminance difference between a point on the first vertical imaginary line and a point on the second vertical imaginary line along the first vertical imaginary line and the second vertical imaginary line.
- the operation of the luminance difference calculation unit 34 will be described in detail.
- The luminance difference calculation unit 34 sets a first vertical imaginary line (hereinafter referred to as the attention line La) that corresponds to a line segment extending in the vertical direction in real space and passes through the detection area A1. The luminance difference calculation unit 34 also sets a second vertical imaginary line (hereinafter referred to as the reference line Lr) that corresponds to a line segment extending in the vertical direction in real space and likewise passes through the detection area A1.
- the reference line Lr is set at a position separated from the attention line La by a predetermined distance in the real space.
- the line corresponding to the line segment extending in the vertical direction in the real space is a line that spreads radially from the position Ps of the camera 10 in the bird's-eye view image.
- This radially extending line is a line along the direction in which the three-dimensional object falls when converted to bird's-eye view.
- the luminance difference calculation unit 34 sets a point of interest Pa (a point on the first vertical imaginary line) on the line of interest La.
- The luminance difference calculation unit 34 sets a reference point Pr (a point on the second vertical imaginary line) on the reference line Lr.
- the attention line La, the attention point Pa, the reference line Lr, and the reference point Pr have the relationship shown in FIG. 16B in the real space.
- The attention line La and the reference line Lr are lines extending in the vertical direction in real space, and the attention point Pa and the reference point Pr are points set at substantially the same height in real space. The attention point Pa and the reference point Pr need not be at exactly the same height; a degree of error that still allows them to be regarded as being at the same height is permissible.
- the luminance difference calculation unit 34 obtains a luminance difference between the attention point Pa and the reference point Pr. If the luminance difference between the attention point Pa and the reference point Pr is large, it is considered that an edge exists between the attention point Pa and the reference point Pr.
- In this way, since a vertical imaginary line is set as a line segment extending in the vertical direction in real space with respect to the bird's-eye view image, when the luminance difference between the attention line La and the reference line Lr is high, there is a high possibility that an edge of a three-dimensional object exists at the position where the attention line La is set. For this reason, the edge line detection unit 35 shown in FIG. 14 detects an edge line based on the luminance difference between the attention point Pa and the reference point Pr.
- FIG. 17 is a diagram illustrating the detailed operation of the luminance difference calculation unit 34. FIG. 17A shows a bird's-eye view image in a bird's-eye view state, and FIG. 17B is an enlarged view of a portion B1 of the bird's-eye view image shown in FIG. 17A.
- the luminance difference is calculated in the same procedure for the detection area A2.
- When the adjacent vehicle V2 appears in the captured image captured by the camera 10, the adjacent vehicle V2 appears in the detection area A1 in the bird's-eye view image, as shown in FIG. 17A. As shown in FIG. 17B, an enlarged view of the area B1 in FIG. 17A, it is assumed that the attention line La is set on the rubber portion of the tire of the adjacent vehicle V2 in the bird's-eye view image.
- the luminance difference calculation unit 34 first sets a reference line Lr.
- the reference line Lr is set along the vertical direction at a position away from the attention line La by a predetermined distance in the real space.
- the reference line Lr is set at a position separated from the attention line La by 10 cm in the real space.
- the reference line Lr is set on the wheel of the tire of the adjacent vehicle V2, which is separated from the rubber of the tire of the adjacent vehicle V2, for example, by 10 cm, on the bird's eye view image.
- the luminance difference calculation unit 34 sets a plurality of attention points Pa1 to PaN on the attention line La.
- In FIG. 17B, six attention points Pa1 to Pa6 are set for convenience of explanation (hereinafter simply referred to as attention point Pai when an arbitrary point is indicated). The number of attention points Pa set on the attention line La may be arbitrary; in the following description, N attention points Pa are set on the attention line La.
- The luminance difference calculation unit 34 sets the reference points Pr1 to PrN so that they are at the same height as the attention points Pa1 to PaN in real space. The luminance difference calculation unit 34 then calculates the luminance difference between each attention point Pa and the reference point Pr at the same height. In this way, the luminance difference calculation unit 34 calculates the luminance difference between two pixels for each of the plurality of positions (1 to N) along the vertical imaginary line extending in the vertical direction in real space. For example, the luminance difference calculation unit 34 calculates the luminance difference between the first attention point Pa1 and the first reference point Pr1, and calculates the luminance difference between the second attention point Pa2 and the second reference point Pr2.
- the luminance difference calculation unit 34 continuously calculates the luminance difference along the attention line La and the reference line Lr. That is, the luminance difference calculation unit 34 sequentially obtains the luminance difference between the third to Nth attention points Pa3 to PaN and the third to Nth reference points Pr3 to PrN.
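- A minimal sketch of this sampling (a hypothetical helper; the patent does not prescribe an implementation) is shown below, assuming line_a and line_r already hold the pixel coordinates of Pa1 to PaN and Pr1 to PrN at matching real-space heights:

```python
import numpy as np

def luminance_differences(birds_eye, line_a, line_r):
    """Hypothetical helper: line_a and line_r hold the (x, y) coordinates of
    the N attention points Pa1..PaN on La and the N reference points
    Pr1..PrN on Lr, sampled at matching real-space heights."""
    pa = np.array([int(birds_eye[y, x]) for x, y in line_a])  # I(xi, yi)
    pr = np.array([int(birds_eye[y, x]) for x, y in line_r])  # I(xi', yi')
    return pa, pr, pa - pr  # per-pair luminance differences
```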
- The luminance difference calculation unit 34 repeatedly executes the above processing of setting the reference line Lr, setting the attention points Pa and the reference points Pr, and calculating the luminance differences while shifting the attention line La within the detection area A1. That is, the luminance difference calculation unit 34 repeatedly executes the above processing while changing the positions of the attention line La and the reference line Lr by the same distance in the direction in which the ground line L1 extends in real space. For example, the luminance difference calculation unit 34 sets the line that served as the reference line Lr in the previous processing as the new attention line La, sets a new reference line Lr for that attention line La, and obtains the luminance differences in sequence.
- By calculating the luminance difference between the attention point Pa on the attention line La and the reference point Pr on the reference line Lr, which are at substantially the same height in real space, a luminance difference can be clearly detected when an edge extending in the vertical direction exists. Also, because the luminances of vertical imaginary lines extending in the vertical direction in real space are compared, the detection process is not affected even if the three-dimensional object is stretched according to its height from the road surface by the conversion to the bird's-eye view image, and the detection accuracy of the three-dimensional object can be improved.
- the edge line detection unit 35 detects an edge line from the continuous luminance difference calculated by the luminance difference calculation unit 34.
- In the case shown in FIG. 17B, the first attention point Pa1 and the first reference point Pr1 are located in the same tire portion, and thus the luminance difference between them is small.
- On the other hand, the second to sixth attention points Pa2 to Pa6 are located in the rubber portion of the tire, and the second to sixth reference points Pr2 to Pr6 are located in the wheel portion of the tire. Therefore, the luminance differences between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6 become large, and the edge line detection unit 35 can detect that an edge line exists between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6, between which the luminance difference is large.
- When detecting an edge line, the edge line detection unit 35 first assigns an attribute to the i-th attention point Pai from the luminance difference between the i-th attention point Pai (coordinates (xi, yi)) and the i-th reference point Pri (coordinates (xi′, yi′)), in accordance with Equation 1 below.
- In Equation 1, t represents a predetermined threshold, I(xi, yi) represents the luminance value of the i-th attention point Pai, and I(xi′, yi′) represents the luminance value of the i-th reference point Pri.
- According to Equation 1, when the luminance value of the attention point Pai is higher than the luminance value of the reference point Pri plus the threshold t, the attribute s(xi, yi) of the attention point Pai is '1'. When the luminance value of the attention point Pai is lower than the luminance value of the reference point Pri minus the threshold t, the attribute s(xi, yi) of the attention point Pai is '−1'. In all other cases, the attribute s(xi, yi) of the attention point Pai is '0'.
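- Grounded in Equation 1 (reproduced later in this document), the attribute assignment can be sketched as follows, with pa and pr being the luminance arrays from the earlier sketch:

```python
import numpy as np

def attribute_s(pa, pr, t):
    """Attribute s(xi, yi) per Equation 1: 1 where the attention point is
    brighter than the reference point by more than t, -1 where darker by
    more than t, 0 otherwise."""
    s = np.zeros(len(pa), dtype=np.int8)
    s[pa > pr + t] = 1
    s[pa < pr - t] = -1
    return s
```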
- Next, based on Equation 2 below, the edge line detection unit 35 determines whether the attention line La is an edge line from the continuity c(xi, yi) of the attributes s along the attention line La.
- When the attribute s(xi, yi) of the attention point Pai and the attribute s(xi+1, yi+1) of the adjacent attention point Pai+1 are the same, the continuity c(xi, yi) is '1'. When the attribute s(xi, yi) of the attention point Pai is not the same as the attribute s(xi+1, yi+1) of the adjacent attention point Pai+1, the continuity c(xi, yi) is '0'.
- Next, the edge line detection unit 35 obtains the sum of the continuity c over all the attention points Pa on the attention line La, and normalizes the continuity c by dividing the obtained sum by the number N of attention points Pa. When the normalized value exceeds the threshold θ, the edge line detection unit 35 determines that the attention line La is an edge line.
- the threshold value ⁇ is a value set in advance through experiments or the like.
- That is, the edge line detection unit 35 determines whether the attention line La is an edge line based on Equation 3 below, and then determines whether all the attention lines La drawn on the detection area A1 are edge lines. [Equation 3] Σc(xi, yi) / N > θ
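- A compact sketch of Equations 2 and 3 follows; the exclusion of the 0 = 0 case mirrors the proviso in Equation 2:

```python
import numpy as np

def is_edge_line(s, theta):
    """Equations 2 and 3: continuity c(xi, yi) is 1 where adjacent attributes
    agree (excluding the 0 == 0 case), and La counts as an edge line when
    the continuity sum normalized by N exceeds theta."""
    c = ((s[:-1] == s[1:]) & (s[:-1] != 0)).astype(np.int32)
    return c.sum() / len(s) > theta  # sum(c) / N > theta
```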
- In this way, an attribute is assigned to each attention point Pa based on the luminance difference between the attention point Pa on the attention line La and the reference point Pr on the reference line Lr, and whether the attention line La is an edge line is determined based on the continuity c of the attributes along the attention line La. Therefore, a boundary between a high-luminance area and a low-luminance area is detected as an edge line, and edge detection that accords with natural human perception can be performed. This effect will be described in detail.
- FIG. 18 is a diagram illustrating an image example for explaining the processing of the edge line detection unit 35.
- FIG. 18 shows a first striped pattern 101, in which high-luminance regions and low-luminance regions are repeated, and an adjacent second striped pattern 102, in which low-luminance regions and high-luminance regions are repeated. In this image example, the high-luminance regions of the first striped pattern 101 are adjacent to the low-luminance regions of the second striped pattern 102, and the low-luminance regions of the first striped pattern 101 are adjacent to the high-luminance regions of the second striped pattern 102.
- The portion 103 located at the boundary between the first striped pattern 101 and the second striped pattern 102 tends not to be perceived as an edge by the human senses.
- In contrast, because the edge line detection unit 35 determines the portion 103 to be an edge line only when, in addition to a luminance difference existing at the portion 103, the attribute of that luminance difference has continuity, the edge line detection unit 35 can suppress the erroneous determination of recognizing as an edge line the portion 103, which would not be recognized as an edge line by human perception, and edge detection that accords with the human senses can be performed.
- The three-dimensional object detection unit 33a detects a three-dimensional object based on the amount of edge lines detected by the edge line detection unit 35. As described above, the three-dimensional object detection device 1a according to the present embodiment detects edge lines extending in the vertical direction in real space, and the detection of many such edge lines means that a three-dimensional object is highly likely to exist in the detection areas A1 and A2.
- Specifically, the three-dimensional object detection unit 33a determines whether the amount of edge lines detected by the edge line detection unit 35 is equal to or greater than a predetermined threshold β; when the amount of edge lines is equal to or greater than the predetermined threshold β, the edge lines detected by the edge line detection unit 35 are determined to be the edge lines of a three-dimensional object.
- the three-dimensional object detection unit 33a determines whether or not the edge line detected by the edge line detection unit 35 is correct.
- Specifically, the three-dimensional object detection unit 33a determines whether the luminance change along an edge line in the bird's-eye view image is equal to or greater than a predetermined threshold tb. When the luminance change of the bird's-eye view image on the edge line is equal to or greater than the threshold tb, it is determined that the edge line was detected by erroneous determination. When the luminance change of the bird's-eye view image on the edge line is less than the threshold tb, the edge line is determined to be correct.
- the threshold value tb is a value set in advance by experiments or the like.
- FIG. 19 is a diagram showing the luminance distribution of an edge line. FIG. 19A shows the edge line and the luminance distribution when the adjacent vehicle V2 exists as a three-dimensional object in the detection area A1, and FIG. 19B shows the edge line and the luminance distribution when no three-dimensional object exists in the detection area A1.
- As shown in FIG. 19A, it is assumed that the attention line La set on the tire rubber portion of the adjacent vehicle V2 has been determined to be an edge line in the bird's-eye view image. In this case, the luminance change of the bird's-eye view image on the attention line La is gentle. This is because the tire of the adjacent vehicle is stretched in the bird's-eye view image as a result of converting the image captured by the camera 10 into the bird's-eye view image.
- On the other hand, as shown in FIG. 19B, it is assumed that the attention line La set on the white character portion '50' drawn on the road surface has been erroneously determined to be an edge line in the bird's-eye view image. In this case, the luminance change of the bird's-eye view image on the attention line La undulates greatly. This is because portions of high luminance corresponding to the white characters and portions of low luminance such as the road surface are mixed on the edge line.
- Based on such differences in the luminance distribution on the attention line La, the three-dimensional object detection unit 33a determines whether an edge line was detected by erroneous determination. For example, when a captured image acquired by the camera 10 is converted into a bird's-eye view image, a three-dimensional object included in the captured image tends to appear stretched in the bird's-eye view image. When the tire of the adjacent vehicle is stretched as described above, the single region of the tire is stretched, so the luminance change of the bird's-eye view image in the stretched direction tends to be small.
- On the other hand, when a character drawn on the road surface or the like has been erroneously determined to be an edge line, the bird's-eye view image contains a mixture of high-luminance regions such as the character portion and low-luminance regions such as the road surface portion. In this case, the luminance change in the stretched direction tends to be large in the bird's-eye view image. Therefore, when the luminance change along an edge line is equal to or greater than the predetermined threshold tb, the three-dimensional object detection unit 33a determines that the edge line was detected by erroneous determination and that the edge line is not caused by a three-dimensional object.
- Conversely, when the luminance change along an edge line is less than the predetermined threshold tb, the three-dimensional object detection unit 33a determines that the edge line is an edge line of a three-dimensional object and that a three-dimensional object exists.
- Specifically, the three-dimensional object detection unit 33a calculates the luminance change of an edge line using either Equation 4 or Equation 5 below. The luminance change of the edge line corresponds to an evaluation value in the direction equivalent to the vertical direction in real space.
- Equation 4 evaluates the luminance distribution by the sum of the squares of the differences between the i-th luminance value I (xi, yi) on the attention line La and the adjacent i + 1-th luminance value I (xi + 1, yi + 1).
- Equation 5 evaluates the luminance distribution by the sum of the absolute values of the differences between the i-th luminance value I (xi, yi) on the attention line La and the adjacent i + 1-th luminance value I (xi + 1, yi + 1).
- Also, not limited to Equation 5, the luminance distribution may be evaluated using Equation 6 below. In Equation 6, when the absolute value of the difference between adjacent luminance values on the attention line La exceeds the threshold t2, the attribute b(xi, yi) of the attention point Pa(xi, yi) is '1'; otherwise, the attribute b(xi, yi) is '0'.
- This threshold t2 is set in advance through experiments or the like so that it can be determined that the attention line La is not on the same three-dimensional object. The three-dimensional object detection unit 33a then sums the attributes b for all the attention points Pa on the attention line La, obtains the evaluation value in the vertical equivalent direction, and thereby determines whether the edge line is caused by a three-dimensional object and whether a three-dimensional object exists.
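- The three evaluation formulas can be sketched together as follows; the default value of t2 is an illustrative assumption, not a calibrated parameter:

```python
import numpy as np

def vertical_evaluation(pa, mode="sq", t2=30):
    """Equations 4-6: luminance change of an edge line from the luminance
    values I(xi, yi) along the attention line La."""
    d = np.diff(pa.astype(np.int64))       # I(xi, yi) - I(xi+1, yi+1)
    if mode == "sq":
        return int(np.sum(d ** 2))         # Equation 4: sum of squared diffs
    if mode == "abs":
        return int(np.sum(np.abs(d)))      # Equation 5: sum of absolute diffs
    return int(np.sum(np.abs(d) > t2))     # Equation 6: sum of attributes b
```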
- Like the three-dimensional object determination unit 34 of the first embodiment, the three-dimensional object determination unit 34a illustrated in FIG. 14 determines, based on the degree of variation in image information, whether the three-dimensional object detected by the three-dimensional object detection unit 33a is another vehicle traveling in the adjacent lane (an adjacent vehicle). Specifically, in the second embodiment, the three-dimensional object determination unit 34a calculates the relative movement speed of the three-dimensional object with respect to the host vehicle based on edge information (information such as edge components) of the bird's-eye view image, determines the degree of variation in the image information based on the amount of temporal change in that relative movement speed, and determines whether the detected three-dimensional object is a non-detection target object or an adjacent vehicle. For example, the larger the absolute value of the amount of temporal change in the relative movement speed of the three-dimensional object is, the higher the three-dimensional object determination unit 34a judges the degree of variation in the image information to be, and the more likely it judges the detected three-dimensional object to be a non-detection target object. Details of the determination method used by the three-dimensional object determination unit 34a will be described later.
- FIG. 20 is a flowchart showing details of the adjacent vehicle detection method according to this embodiment.
- Note that, for convenience, the following describes the process for the detection area A1; the same process is executed for the detection area A2.
- In step S201, the camera 10 captures an image of the predetermined area specified by the angle of view a and the attachment position, and the computer 30a acquires the image data of the captured image P captured by the camera 10.
- In step S202, the viewpoint conversion unit 31 performs viewpoint conversion on the acquired image data and generates bird's-eye view image data.
- In step S203, the luminance difference calculation unit 34 sets the attention line La on the detection area A1. At this time, the luminance difference calculation unit 34 sets, as the attention line La, a line corresponding to a line extending in the vertical direction in real space.
- Next, in step S204, the luminance difference calculation unit 34 sets the reference line Lr on the detection area A1. At this time, the luminance difference calculation unit 34 sets, as the reference line Lr, a line that corresponds to a line segment extending in the vertical direction in real space and is separated from the attention line La by a predetermined distance in real space.
- In step S205, the luminance difference calculation unit 34 sets a plurality of attention points Pa on the attention line La. At this time, the luminance difference calculation unit 34 sets a number of attention points Pa that will not cause a problem at the time of edge detection by the edge line detection unit 35.
- In step S206, the luminance difference calculation unit 34 sets the reference points Pr so that each attention point Pa and the corresponding reference point Pr are at substantially the same height in real space. The attention points Pa and the reference points Pr are thereby arranged in a substantially horizontal direction, which makes it easier to detect edge lines extending in the vertical direction in real space.
- In step S207, the luminance difference calculation unit 34 calculates the luminance difference between each attention point Pa and the reference point Pr at the same height in real space. Next, the edge line detection unit 35 calculates the attribute s of each attention point Pa in accordance with Equation 1 above.
- In step S208, the edge line detection unit 35 calculates the continuity c of the attributes s of the attention points Pa in accordance with Equation 2 above.
- In step S209, the edge line detection unit 35 determines, in accordance with Equation 3 above, whether the value obtained by normalizing the sum of the continuity c is greater than the threshold θ. When the normalized value is greater than the threshold θ (step S209: Yes), the edge line detection unit 35 detects the attention line La as an edge line in step S210, and the process proceeds to step S211. When the normalized value is not greater than the threshold θ (step S209: No), the edge line detection unit 35 does not detect the attention line La as an edge line, and the process proceeds to step S211.
- In step S211, the computer 30a determines whether the processing of steps S203 to S210 has been executed for all the attention lines La that can be set on the detection area A1.
- In step S212, the three-dimensional object detection unit 33a calculates, for each edge line detected in step S210, the luminance change along that edge line. The three-dimensional object detection unit 33a calculates the luminance change of each edge line in accordance with one of Equations 4, 5, and 6 above.
- In step S213, the three-dimensional object detection unit 33a excludes, from among the edge lines, those whose luminance change is equal to or greater than the predetermined threshold tb. That is, an edge line with a large luminance change is determined not to be a correct edge line and is not used for detecting a three-dimensional object. As described above, this is to prevent characters on the road surface, roadside weeds, and the like included in the detection area A1 from being detected as edge lines.
- the predetermined threshold value tb is a value set based on a luminance change generated by characters on the road surface, weeds on the road shoulder, or the like, which is obtained in advance through experiments or the like.
- On the other hand, the three-dimensional object detection unit 33a determines the edge lines whose luminance change is less than the predetermined threshold tb to be the edge lines of a three-dimensional object, and thereby detects a three-dimensional object present in the adjacent lane.
- Next, in step S214, the three-dimensional object detection unit 33a determines whether the amount of edge lines is equal to or greater than the predetermined threshold β.
- the threshold value ⁇ is a value set in advance by experimentation or the like. For example, when a four-wheeled vehicle is set as a three-dimensional object to be detected, the threshold value ⁇ is determined in advance by an experiment or the like in the detection region A1. It is set on the basis of the number of edge lines of the four-wheeled vehicle that appears inside.
- In step S215, the three-dimensional object determination unit 34a calculates the relative movement speed of the three-dimensional object with respect to the host vehicle. For example, the three-dimensional object determination unit 34a generates a one-dimensional edge waveform by counting, along the direction in which the three-dimensional object collapses due to the viewpoint conversion, the number of pixels in which a predetermined edge component is detected in the bird's-eye view image PBt, and forming a frequency distribution of the counts; the relative movement speed of the three-dimensional object can then be calculated from the difference between the edge waveform at the previous time and the edge waveform at the current time. The process then proceeds to step S216.
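- A minimal sketch of this step is shown below; the mapping from pixels to collapse-direction lines depends on the camera geometry and is assumed to be precomputed, and the correlation search merely stands in for the patent's waveform-difference evaluation:

```python
import numpy as np

def edge_waveform(edge_mask, collapse_bins, n_bins):
    """One-dimensional edge waveform: for each radial line along the collapse
    direction, count the detection-area pixels where an edge component was
    found. collapse_bins labels every pixel with its line index."""
    return np.bincount(collapse_bins[edge_mask], minlength=n_bins)

def relative_speed(wave_now, wave_prev, metres_per_bin, dt):
    """Relative movement speed from the shift that best aligns the previous
    waveform with the current one."""
    n = len(wave_now)
    best = max(range(-n // 2, n // 2),
               key=lambda k: float(np.dot(wave_now, np.roll(wave_prev, k))))
    return best * metres_per_bin / dt  # metres per second, relative to host
```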
- In steps S216 to S232, the same processing as in steps S113 to S126 and S128 to S130 of the first embodiment is performed. That is, the three-dimensional object determination unit 34a detects the degree of variation in the relative movement speed of the three-dimensional object based on the relative movement speed calculated in step S215, and determines, in accordance with the degree of variation in the relative movement speed, whether the three-dimensional object is a non-detection target object.
- In this processing, the three-dimensional object determination unit 34a increases or decreases a count value in accordance with the absolute value |ΔV| of the amount of temporal change in the relative movement speed calculated in step S215: the larger the absolute value |ΔV| is, the higher the degree of variation is judged to be and the more the count value is increased, while the smaller the absolute value |ΔV| is, the lower the degree of variation is judged to be and the more the count value is decreased. For example, when the absolute value |ΔV| of the amount of change in the relative movement speed of the three-dimensional object is less than 30 km/h and 10 km/h or more (30 km/h > |ΔV| ≥ 10 km/h) (step S220: Yes) and the brightness of the detection area A1 is equal to or greater than a predetermined value (step S221: Yes), the count value is decreased by Y1 (step S222).
- Then, the three-dimensional object determination unit 34a determines whether the count value is equal to or greater than the first threshold s1 shown in FIG. 10 (step S227); when the count value is equal to or greater than the first threshold s1, it determines that the detected three-dimensional object is a non-detection target object (step S231) and determines that no adjacent vehicle exists in the adjacent lane (step S232).
- Further, even when the count value has fallen below the first threshold s1, the three-dimensional object determination unit 34a still determines that the three-dimensional object is a non-detection target object (step S231) and that no adjacent vehicle exists in the adjacent lane (step S232) until the count value falls below the second threshold s2, which is smaller than the first threshold s1.
- On the other hand, when the count value does not satisfy these conditions, the three-dimensional object determination unit 34a determines that the three-dimensional object is not a non-detection target object (step S229) and determines that an adjacent vehicle exists in the adjacent lane (step S230). The process then returns to step S201 of FIG. 20 and the processing described above is repeated.
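- The count-based determination of steps S216 to S232 can be sketched generically from claims 3 to 6; all numeric parameters below are illustrative assumptions, not the patent's calibrated values:

```python
def update_count(count, variation, bright,
                 d1=30.0, d2=10.0, x1=3.0, y1=1.0, s1=10.0):
    """Hysteresis counter in the spirit of claims 3-6. The count rises when
    the variation degree of the relative movement speed is at or above the
    first determination value d1, falls when it is at or below the second
    determination value d2, and the steps shrink when the detection area
    is dark (claim 6)."""
    scale = 1.0 if bright else 0.5          # smaller steps when dark
    if variation >= d1:
        count += x1 * scale                 # evidence of grass or the like
    elif variation <= d2:
        count -= y1 * scale                 # evidence of a vehicle
    return count, count >= s1               # non-detection target if >= s1
```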
- As described above, in the second embodiment, a captured image is converted into a bird's-eye view image, and edge information of the three-dimensional object is detected from the converted bird's-eye view image. The relative movement speed of the three-dimensional object is then calculated from the edge components detected in the bird's-eye view image, and whether the detected three-dimensional object is a non-detection target object is determined based on the absolute value |ΔV| of the amount of temporal change in that relative movement speed. As a result, in addition to the effects of the first embodiment, even when a three-dimensional object is detected based on edge information, it is possible to appropriately determine whether the detected three-dimensional object is a non-detection target object, and erroneous detection of a non-detection target object as an adjacent vehicle can be effectively prevented.
- In the embodiments described above, the configuration in which a non-detection target object is detected based on the variation in the relative movement speed of the three-dimensional object is exemplified, but the present invention is not limited to this configuration. For example, a non-detection target object may be detected by analyzing a captured image captured by the camera 10 by two-dimensional texture analysis and detecting variation in the captured image based on the analysis result.
- For example, when, as a result of two-dimensional texture analysis of the captured image, pixels having a predetermined density difference are detected at a predetermined ratio or more, it may be determined that the three-dimensional object captured in the captured image is a non-detection target object.
- Alternatively, the captured image may be analyzed by a fast Fourier transform method, and when, as a result of the analysis, high-frequency components equal to or greater than a predetermined value are detected at a predetermined ratio or more, the three-dimensional object captured in the captured image may be determined to be a non-detection target object.
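- A sketch of this fast-Fourier-transform variant is shown below; the cutoff frequency, magnitude threshold, and ratio are illustrative assumptions:

```python
import numpy as np

def is_non_detection_target_fft(patch, freq_cut=0.25, mag_thresh=50.0, ratio=0.1):
    """Judge a grayscale detection-area patch as a non-detection target
    (e.g. grass) when strong high-frequency components occupy more than a
    given share of the spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = patch.shape
    yy, xx = np.ogrid[-(h // 2):(h + 1) // 2, -(w // 2):(w + 1) // 2]
    # normalized radial frequency; > freq_cut marks the high-frequency band
    high = np.hypot(yy / (h / 2.0), xx / (w / 2.0)) > freq_cut
    strong = spectrum[high] >= mag_thresh
    return strong.mean() > ratio
```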
- Moreover, the present invention is not limited to the configurations described above. For example, when it is determined that the detected three-dimensional object is a non-detection target object, it may be configured not to determine whether the detected three-dimensional object is an adjacent vehicle.
- In the first embodiment described above, the configuration is exemplified in which the alignment unit 32 sets the pixel values of the difference image PDt to '0' or '1', the three-dimensional object detection unit 33 counts the pixels having the pixel value '1' in the difference image PDt as the difference pixels DP, and the three-dimensional object is detected based on the difference image PDt; however, the present invention is not limited to this configuration. For example, the pixel values of the difference image PDt may be obtained as the absolute values of the differences between the pixel values of the bird's-eye view images PBt and PBt−1, and the three-dimensional object detection unit 33 may count the pixels whose value exceeds a predetermined difference threshold as the difference pixels DP.
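- This variant can be sketched as follows; the difference threshold of 20 is an illustrative assumption:

```python
import numpy as np

def difference_pixels(pb_now, pb_prev, diff_threshold=20):
    """Variant difference image: pixel values of PDt are the absolute
    differences of the aligned bird's-eye view images PBt and PBt-1, and
    pixels above diff_threshold are counted as difference pixels DP."""
    pd = np.abs(pb_now.astype(np.int32) - pb_prev.astype(np.int32))  # PDt
    dp_mask = pd > diff_threshold      # pixels counted as difference pixels DP
    return pd, int(dp_mask.sum())
```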
- Further, in the embodiments described above, the captured image at the current time and the image at the immediately preceding time are converted into bird's-eye views, the converted bird's-eye views are aligned, the difference image PDt is then generated, and the generated difference image PDt is evaluated along the collapse direction (the direction in which the three-dimensional object collapses when the captured image is converted into a bird's-eye view) to generate the differential waveform DWt; however, the present invention is not limited to this. For example, only the image at the immediately preceding time may be converted into a bird's-eye view, aligned, and then converted back into an image equivalent to the originally captured image, a difference image may be generated between this image and the image at the current time, and the generated difference image may be evaluated along the direction corresponding to the collapse direction (that is, the direction obtained by converting the collapse direction into a direction in the captured image) to generate the differential waveform DWt.
- That is, as long as the image at the current time and the image at the immediately preceding time are aligned, the difference image PDt is generated from the difference between the two aligned images, and the difference image PDt can be evaluated along the direction in which the three-dimensional object collapses when the difference image is converted into a bird's-eye view, a bird's-eye view does not necessarily have to be generated explicitly.
- In the embodiments described above, the vehicle speed of the host vehicle V1 is determined based on the signal from the speed sensor 20, but the present invention is not limited to this; the speed may be estimated from a plurality of images captured at different times. In that case, the vehicle speed sensor 20 becomes unnecessary, and the configuration can be simplified.
- The camera 10 of the embodiments described above corresponds to the imaging unit of the present invention, the viewpoint conversion unit 31 corresponds to the image conversion unit of the present invention, the alignment unit 32 and the three-dimensional object detection unit 33 correspond to the three-dimensional object detection unit of the present invention, the three-dimensional object detection unit 33 also corresponds to the moving speed calculation unit of the present invention, and the three-dimensional object determination unit 34 corresponds to the three-dimensional object determination unit, the non-detection target object determination unit, and the control unit of the present invention.
Abstract
Description
This application claims priority based on Japanese Patent Application No. 2012-046629 filed on March 2, 2012. For the designated states that permit incorporation by reference of documents, the contents described in the above application are incorporated into the present application by reference and form part of the description of the present application.
FIG. 1 is a schematic configuration diagram of a vehicle equipped with a three-dimensional object detection device 1 according to the present embodiment. The purpose of the three-dimensional object detection device 1 according to the present embodiment is to detect another vehicle (hereinafter also referred to as adjacent vehicle V2) present in an adjacent lane where contact is possible should the host vehicle V1 change lanes. As shown in FIG. 1, the three-dimensional object detection device 1 according to the present embodiment includes a camera 10, a vehicle speed sensor 20, and a computer 30.
Next, a three-dimensional object detection device 1a according to the second embodiment will be described. As shown in FIG. 14, the three-dimensional object detection device 1a according to the second embodiment includes a computer 30a in place of the computer 30 of the first embodiment, and is the same as the first embodiment except that it operates as described below. Here, FIG. 14 is a block diagram showing the details of the computer 30a according to the second embodiment.
[Equation 1]
when I(xi, yi) > I(xi′, yi′) + t:
s(xi, yi) = 1
when I(xi, yi) < I(xi′, yi′) − t:
s(xi, yi) = −1
otherwise:
s(xi, yi) = 0
[Equation 2]
when s(xi, yi) = s(xi+1, yi+1) (excluding the case 0 = 0):
c(xi, yi) = 1
otherwise:
c(xi, yi) = 0
[Equation 3]
Σc(xi, yi) / N > θ
[Equation 4]
evaluation value in the vertical equivalent direction = Σ[{I(xi, yi) − I(xi+1, yi+1)}²]
[Equation 5]
evaluation value in the vertical equivalent direction = Σ|I(xi, yi) − I(xi+1, yi+1)|
[Equation 6]
evaluation value in the vertical equivalent direction = Σb(xi, yi)
where, when |I(xi, yi) − I(xi+1, yi+1)| > t2:
b(xi, yi) = 1
otherwise:
b(xi, yi) = 0
10…camera
20…vehicle speed sensor
30…computer
31…viewpoint conversion unit
32…alignment unit
33…three-dimensional object detection unit
34…three-dimensional object determination unit
35…luminance difference calculation unit
36…edge line detection unit
a…angle of view
A1, A2…detection areas
CP…intersection point
DP…difference pixels
DWt, DWt′…differential waveforms
DWt1 to DWm, DWm+k to DWtn…small areas
L1, L2…ground lines
La, Lb…lines along the direction in which the three-dimensional object collapses
P…captured image
PBt…bird's-eye view image
PDt…difference image
V1…host vehicle
V2…adjacent vehicle
V3…vehicle in the lane two lanes away (adjacent-adjacent vehicle)
Claims (9)
- A three-dimensional object detection device comprising: an imaging unit that captures an image of a predetermined area rearward of a host vehicle; an image conversion unit that converts the viewpoint of the image obtained by the imaging unit into a bird's-eye view image; a three-dimensional object detection unit that aligns, in bird's-eye view, the positions of bird's-eye view images obtained at different times by the image conversion unit, generates differential waveform information by counting the number of pixels indicating a predetermined difference on the difference image of the aligned bird's-eye view images and forming a frequency distribution of the counts, and detects a three-dimensional object based on the differential waveform information; a moving speed calculation unit that calculates the moving speed of the three-dimensional object based on the differential waveform information; a three-dimensional object determination unit that determines, based on the differential waveform information, whether the three-dimensional object detected by the three-dimensional object detection unit is another vehicle present in the predetermined area; a non-detection target object determination unit that detects the degree of variation in the moving speed of the three-dimensional object by repeatedly calculating the amount of temporal change in the moving speed of the three-dimensional object, and judges that the higher the degree of variation is, the more likely the three-dimensional object is to be a non-detection target object different from the other vehicle; and a control unit that, based on the determination result of the non-detection target object determination unit, suppresses the three-dimensional object determination unit from determining that the three-dimensional object is the other vehicle.
- A three-dimensional object detection device comprising: an imaging unit that captures an image of a predetermined area rearward of a host vehicle; an image conversion unit that converts the viewpoint of the image obtained by the imaging unit into a bird's-eye view image; a three-dimensional object detection unit that detects edge information from the bird's-eye view image obtained by the image conversion unit and detects a three-dimensional object based on the edge information; a moving speed calculation unit that calculates the moving speed of the three-dimensional object based on the edge information; a three-dimensional object determination unit that determines, based on the edge information, whether the three-dimensional object detected by the three-dimensional object detection unit is another vehicle present in the predetermined area; a non-detection target object determination unit that detects the degree of variation in the moving speed of the three-dimensional object by repeatedly calculating the amount of temporal change in the moving speed of the three-dimensional object, and judges that the higher the degree of variation is, the more likely the three-dimensional object is to be a non-detection target object different from the other vehicle; and a control unit that, based on the determination result of the non-detection target object determination unit, suppresses the three-dimensional object determination unit from determining that the three-dimensional object is the other vehicle.
- The three-dimensional object detection device according to claim 1 or 2, wherein the non-detection target object determination unit increases a predetermined count value when the degree of variation is equal to or greater than a predetermined first determination value, decreases the count value when the degree of variation is equal to or less than a second determination value smaller than the first determination value, thereby increasing or decreasing the count value based on the degree of variation, and determines whether the three-dimensional object is the non-detection target object based on the increased or decreased count value.
- The three-dimensional object detection device according to claim 3, wherein the non-detection target object determination unit determines that the three-dimensional object is the non-detection target object when, as a result of increasing or decreasing the count value based on the degree of variation, the count value becomes equal to or greater than a predetermined first threshold.
- The three-dimensional object detection device according to claim 4, wherein the non-detection target object determination unit determines that the three-dimensional object is not the non-detection target object when, after the count value has become equal to or greater than the first threshold, the count value becomes less than a predetermined second threshold smaller than the first threshold.
- The three-dimensional object detection device according to any one of claims 3 to 5, further comprising a brightness detection unit that detects the brightness of the predetermined area, wherein, when the brightness of the predetermined area is less than a predetermined value, the non-detection target object determination unit makes the amount by which the count value is increased or decreased based on the degree of variation smaller than when the brightness of the predetermined area is equal to or greater than the predetermined value.
- The three-dimensional object detection device according to claim 1 or any one of claims 3 to 6 when dependent on claim 1, wherein the three-dimensional object detection unit generates one-dimensional differential waveform information by counting the number of pixels indicating a predetermined difference on the difference image along the direction in which the three-dimensional object collapses when the viewpoint is converted into the bird's-eye view image, and forming a frequency distribution of the counts.
- The three-dimensional object detection device according to claim 2 or any one of claims 3 to 6 when dependent on claim 2, wherein the three-dimensional object detection unit detects the edge information along the direction in which the three-dimensional object collapses when the viewpoint is converted into the bird's-eye view image.
- A three-dimensional object detection device comprising: an imaging unit that captures an image of a predetermined area rearward of a host vehicle; an image conversion unit that converts the viewpoint of the image obtained by the imaging unit into a bird's-eye view image; a three-dimensional object detection unit that detects, on the bird's-eye view image obtained by the image conversion unit, distribution information of pixels whose luminance difference is equal to or greater than a predetermined threshold along the direction in which the three-dimensional object collapses when the viewpoint is converted into the bird's-eye view image, and detects the three-dimensional object based on the pixel distribution information; a moving speed calculation unit that calculates the moving speed of the three-dimensional object based on the temporal change of the pixel distribution information; a three-dimensional object determination unit that determines, based on the pixel distribution information, whether the three-dimensional object detected by the three-dimensional object detection unit is another vehicle present in the predetermined area; a non-detection target object determination unit that detects the degree of variation in the moving speed of the three-dimensional object by repeatedly calculating the amount of temporal change in the moving speed of the three-dimensional object, and judges that the three-dimensional object is highly likely to be the other vehicle when the degree of variation is less than a predetermined value; and a control unit that, when the non-detection target object determination unit judges that the three-dimensional object is highly likely to be the other vehicle, makes it easier for the three-dimensional object determination unit to determine that the three-dimensional object is the other vehicle.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MX2014010405A MX2014010405A (es) | 2012-03-02 | 2013-02-26 | Dispositivo de deteccion de objetos tridimensionales. |
JP2014502227A JP5733467B2 (ja) | 2012-03-02 | 2013-02-26 | 立体物検出装置 |
EP13755143.8A EP2821957B1 (en) | 2012-03-02 | 2013-02-26 | Three-dimensional object detection device |
BR112014020353-9A BR112014020353B1 (pt) | 2012-03-02 | 2013-02-26 | Dispositivo de detecção de objeto tridimensional |
US14/373,064 US9239960B2 (en) | 2012-03-02 | 2013-02-26 | Three-dimensional object detection device |
RU2014139838A RU2636121C2 (ru) | 2012-03-02 | 2013-02-26 | Устройство обнаружения трехмерных объектов |
CN201380008399.1A CN104094311B (zh) | 2012-03-02 | 2013-02-26 | 立体物检测装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012046629 | 2012-03-02 | ||
JP2012-046629 | 2012-03-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013129358A1 true WO2013129358A1 (ja) | 2013-09-06 |
Family
ID=49082555
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/054860 WO2013129358A1 (ja) | 2012-03-02 | 2013-02-26 | 立体物検出装置 |
Country Status (8)
Country | Link |
---|---|
US (1) | US9239960B2 (ja) |
EP (1) | EP2821957B1 (ja) |
JP (1) | JP5733467B2 (ja) |
CN (1) | CN104094311B (ja) |
BR (1) | BR112014020353B1 (ja) |
MX (1) | MX2014010405A (ja) |
RU (1) | RU2636121C2 (ja) |
WO (1) | WO2013129358A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05143737A (ja) * | 1991-11-22 | 1993-06-11 | Ohkura Electric Co Ltd | 動きベクトルによる識別方法及び装置 |
JP2006107313A (ja) * | 2004-10-08 | 2006-04-20 | Nissan Motor Co Ltd | 物体判定装置、および物体判定方法 |
JP2006315482A (ja) | 2005-05-11 | 2006-11-24 | Mazda Motor Corp | 車両用移動物体検出装置 |
JP2007316790A (ja) * | 2006-05-24 | 2007-12-06 | Nissan Motor Co Ltd | 歩行者検出装置および歩行者検出方法 |
JP2008219063A (ja) | 2007-02-28 | 2008-09-18 | Sanyo Electric Co Ltd | 車両周辺監視装置及び方法 |
JP2012003662A (ja) * | 2010-06-21 | 2012-01-05 | Nissan Motor Co Ltd | 移動距離検出装置及び移動距離検出方法 |
Also Published As
Publication number | Publication date |
---|---|
EP2821957B1 (en) | 2020-12-23 |
CN104094311B (zh) | 2017-05-31 |
RU2014139838A (ru) | 2016-04-20 |
CN104094311A (zh) | 2014-10-08 |
JP5733467B2 (ja) | 2015-06-10 |
US20150016681A1 (en) | 2015-01-15 |
BR112014020353B1 (pt) | 2021-08-31 |
US9239960B2 (en) | 2016-01-19 |
JPWO2013129358A1 (ja) | 2015-07-30 |
BR112014020353A2 (pt) | 2021-05-25 |
EP2821957A4 (en) | 2016-01-06 |
EP2821957A1 (en) | 2015-01-07 |
MX2014010405A (es) | 2014-09-22 |
RU2636121C2 (ru) | 2017-11-20 |