WO2014017302A1 - On-vehicle ambient environment recognition device
On-vehicle ambient environment recognition device
- Publication number
- WO2014017302A1 (application PCT/JP2013/068935)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- unit
- image
- reflection
- dimensional object
Classifications
- G08B29/185—Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
- G08G1/168—Driving aids for parking, e.g. acoustic or visual feedback on parking space
- G06T2207/30256—Lane; Road marking
- G06T2207/30261—Obstacle
Description
- The present invention relates to an on-vehicle ambient environment recognition device.
- The technology described in Patent Document 1 extracts high-luminance areas caused by road surface reflection of headlights in order to prevent false detection. It therefore cannot prevent an alarm from being output at an incorrect timing when the reflection of a background object on a road surface that is not highly luminous is erroneously detected as a vehicle.
- According to a first aspect, an in-vehicle ambient environment recognition apparatus comprises: an imaging unit that images the road surface around a vehicle and acquires a captured image; an application execution unit that, based on the captured image, recognizes another vehicle traveling around the vehicle and detects the relative speed of the other vehicle with respect to the vehicle; a reflection determination unit that determines, based on the captured image, the presence or absence of a background object reflected on the road surface; an alarm control unit that controls the output of an alarm signal based on the recognition result of the other vehicle by the application execution unit; and an alarm suppression adjustment unit that suppresses the output of the alarm signal based on the relative speed of the other vehicle when the reflection determination unit determines that a background object is reflected on the road surface.
- Preferably, the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing, according to the relative speed of the other vehicle, the condition under which the reflection determination unit determines the presence or absence of a background object reflected on the road surface.
- Preferably, the in-vehicle surrounding environment recognition apparatus further includes a region setting unit that sets a background region and a reflection region in the captured image, and the reflection determination unit determines the presence or absence of a background reflection on the road surface by comparing the image in the background region with the image in the reflection region and judging whether their correlation is at least a predetermined threshold. In that case, the alarm suppression adjustment unit preferably adjusts the degree of suppression of the alarm signal output by changing the threshold according to the relative speed of the other vehicle.
- Alternatively, the apparatus may include an area setting unit that sets a background area and a reflection area in the captured image, together with a feature amount calculation unit that calculates feature amounts of the images in those areas under a predetermined detection condition. The reflection determination unit may then determine the presence or absence of a background object reflected on the road surface by comparing the feature amount of the background area with the feature amount of the reflection area, and the alarm suppression adjustment unit preferably adjusts the degree of suppression of the alarm signal output by changing the detection condition according to the relative speed of the other vehicle.
- Preferably, the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing, according to the relative speed of the other vehicle, the condition under which the application execution unit recognizes another vehicle.
- Preferably, the application execution unit recognizes another vehicle by determining whether an image information value based on the image in a detection area set in the captured image is equal to or greater than a predetermined threshold, and the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing this threshold according to the relative speed of the other vehicle.
- Preferably, the application execution unit treats the image information value based on the image in the detection area set in the captured image as a detection target when it satisfies a predetermined detection condition.
- Preferably, when the reflection determination unit determines that the background is no longer reflected on the road surface after having determined that it was reflected, the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by extending the suppression of the alarm signal output according to the relative speed of the other vehicle.
- Preferably, the alarm suppression adjustment unit suppresses the output of the alarm signal when the relative speed of the other vehicle satisfies a predetermined speed condition. The speed condition preferably includes at least one of the relative speed of the other vehicle being within a predetermined range and the fluctuation of the relative speed of the other vehicle being within a predetermined range.
- According to another aspect, an in-vehicle surrounding environment recognition apparatus comprises an imaging unit that images the road surface around a vehicle and acquires a captured image, an application execution unit that recognizes another vehicle based on the captured image acquired by the imaging unit, and a reflection determination unit that determines the presence or absence of a background object reflected on the road surface; the recognition of the other vehicle by the application execution unit is suppressed when the reflection determination unit determines that a background object is reflected on the road surface.
- FIG. 1 is a block diagram showing the configuration of an on-vehicle ambient environment recognition apparatus according to an embodiment of the present invention. FIG. 2 shows the imaging area of the camera, and FIG. 3 shows an example of the camera mounting position. FIG. 4 is a control block diagram of the control unit, and FIG. 5 is a flowchart of the alarm suppression processing at the time of road surface reflection. FIG. 6 shows an example of the background areas and reflection areas set in a captured image.
- FIGS. 7 to 10 show examples of the functional blocks of the area setting unit, the feature amount calculation unit, the reflection determination unit, and the application execution unit, respectively. FIG. 11 explains the three-dimensional setting of the road surface area and the background area, and FIG. 12 explains the false alarm reduction effect obtained by the first embodiment.
- FIG. 13 is a schematic configuration diagram of a vehicle for explaining the other vehicle recognition processing, FIG. 14 is a top view showing the traveling state of the vehicle of FIG. 13 (three-dimensional object detection using differential waveform information), and FIG. 15 is a block diagram showing the details of the other vehicle recognition unit.
- FIG. 16(a) is a top view showing the movement state of the vehicle, and FIG. 16(b) is an image showing the outline of alignment. Further figures show the manner in which the differential waveform is generated and the three-dimensional object detection method using edge information performed by the viewpoint conversion unit and the luminance difference calculation unit.
- FIG. 25A is a plan view showing the positional relationship of the detection areas and the like, and FIG. 25B is a perspective view showing their positional relationship in real space.
- FIG. 26 explains the operation of the luminance difference calculation unit of FIG. 15: FIG. 26(a) shows the positional relationship of the attention line, the reference line, the attention point, and the reference point in a bird's-eye view image, and FIG. 26(b) shows their positional relationship in real space.
- FIG. 27A shows a detection area in a bird's-eye view image, and FIG. 27B shows the positional relationship of the attention line, the reference line, the attention point, and the reference point in the bird's-eye view image.
- FIG. 28 shows an edge line and the luminance distribution on the edge line: FIG. 28(a) shows the luminance distribution when a three-dimensional object (vehicle) is present in the detection area, and FIG. 28(b) shows the luminance distribution when no three-dimensional object is present.
- FIG. 34(A) shows an example of differential waveform information when another vehicle is present in the detection area, and FIG. 34(B) shows an example when no other vehicle is present and a water film is formed on the road surface.
- A first flowchart shows the control procedure of three-dimensional object determination taking the presence of a virtual image into account.
- FIG. 1 is a block diagram showing the configuration of a vehicle-mounted ambient environment recognition apparatus 100 according to an embodiment of the present invention.
- The on-vehicle ambient environment recognition apparatus 100 shown in FIG. 1 is mounted on and used in a vehicle, and includes a camera 1, a control unit 2, an alarm output unit 3, and an operation state notification unit 4.
- the camera 1 is installed toward the rear of the vehicle, and captures an image in a shooting area including the road surface behind the vehicle at predetermined time intervals.
- an imaging device such as a CCD or a CMOS is used.
- the photographed image acquired by the camera 1 is output from the camera 1 to the control unit 2.
- FIG. 2 is a view showing a shooting area of the camera 1 and shows the camera 1 viewed from the side.
- the camera 1 shoots an image including the road surface behind the vehicle in the shooting area.
- the imaging area (field angle) of the camera 1 is set relatively wide so that the road surface behind the vehicle can be photographed in a sufficiently wide range in the left-right direction.
- FIG. 3 is a view showing an example of the attachment position of the camera 1.
- a license plate 21 is installed on the vehicle body 20 at the rear portion of the vehicle.
- the camera 1 is attached obliquely downward at a position immediately above the license plate 21.
- The attachment position shown here is merely an example, and the camera 1 may be attached at another position. The present method may also be used with a side camera or a front camera.
- the control unit 2 performs predetermined image processing using a captured image from the camera 1 and performs various controls according to the processing result.
- Through the control performed by the control unit 2, the on-vehicle surrounding environment recognition device 100 realizes various functions such as lane recognition, other vehicle recognition, pedestrian detection, sign detection, entrainment prevention detection, parking frame recognition, and moving object detection.
- The alarm output unit 3 outputs an alarm to the driver of the vehicle by means of an alarm lamp, an alarm buzzer, or the like, and its operation is controlled by the control unit 2. For example, when the lane recognition described above determines that the vehicle is about to deviate from its traveling lane, or when other vehicle detection, pedestrian detection, entrainment prevention, moving object detection, or the like determines that a collision with the vehicle is possible, the alarm output unit 3 outputs an alarm according to the control of the control unit 2.
- the operation state notification unit 4 is a portion for notifying the driver of the vehicle of the operation state of the in-vehicle surrounding environment recognition device 100.
- For example, when the in-vehicle surrounding environment recognition device 100 is in a non-operating state, the control unit 2 turns on a lamp installed near the driver's seat as the operation state notification unit 4, thereby notifying the driver that the device is not operating.
- Next, the warning suppression at the time of road surface reflection performed in the in-vehicle surrounding environment recognition device 100 will be described.
- When the road surface is wet and its reflection coefficient is high, various background objects appearing in the background portion of the captured image may be reflected in the water film or the like formed on the road surface. In that case, a background object reflected on the road surface may be erroneously detected as a recognition target, and a warning may be output to the driver at an incorrect timing.
- Therefore, the on-vehicle ambient environment recognition apparatus 100 determines whether a background object is reflected on the road surface by a water film or the like, and suppresses the output of the alarm when it determines that such a reflection is present. This prevents an alarm from being output at an incorrect timing when the reflection of a background object on the road surface is erroneously detected as another vehicle.
- FIG. 4 is a control block diagram of the control unit 2 regarding the warning suppression at the time of road surface reflection.
- For the alarm suppression at the time of road surface reflection, the control unit 2 has the functional blocks of an area setting unit 201, a feature amount calculation unit 202, a reflection determination unit 203, an application execution unit 204, an alarm control unit 205, and an alarm suppression adjustment unit 206. Each control block of FIG. 4 is realized by a microcomputer executing a program corresponding to that block.
- The area setting unit 201 sets, on the left and right of the photographed image acquired by the camera 1, a plurality of background areas corresponding to the background portion of the image and a plurality of reflection areas on the road surface corresponding to those background areas.
- FIG. 7 is a diagram showing an example of functional blocks of the area setting unit 201.
- the area setting unit 201 includes, for example, a road surface area setting unit 201a, a background horizontal position setting unit 201b, a reflective background area setting unit 201c, and an image area conversion unit 201d.
- The road surface area setting unit 201a sets, in the captured image acquired by the camera 1, the road surface areas of the adjacent lanes to be used for other vehicle recognition.
- FIG. 11 is a diagram explaining the three-dimensional setting of the road surface area and the background area. As shown in FIG. 11, the road surface area setting unit 201a sets left and right processing areas 110 and 111 centered on the camera position of the host vehicle, and further divides each road surface area for vehicle detection into a plurality of local areas.
- Although the reflective background area setting unit 201c can calculate the direction of a vector specularly reflected on the road surface as seen from the host vehicle camera, it cannot determine at which lateral position the reflected background actually lies; that is, it is unknown whether the background reflected on the road surface is a nearby obstacle or a streetlight 20 m away. For this reason, the background horizontal position setting unit 201b sets the horizontal position to a predetermined value.
- The reflective background area setting unit 201c then assumes that a large wall 113 stands at the lateral position 112 determined as shown in FIG. 11, extends the reflection vector at each vertex of the previously obtained road surface local areas, and estimates the three-dimensional positions 114 at which those vectors intersect the wall 113. The reflective background area is estimated in this way.
- Finally, the image area conversion unit 201d converts the estimated three-dimensional areas into the corresponding positions in the image.
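- As a rough illustration of this reflection-vector geometry, the following sketch mirrors the camera ray about the road plane and intersects it with an assumed vertical wall; the coordinates, wall distance, and function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Hedged sketch: estimate where a reflected background point lies, assuming a
# planar road surface at z = 0 and a camera at position `cam` (all coordinates
# in meters, vehicle frame: x forward, y lateral, z up).

def reflected_background_point(cam, road_point, wall_y):
    """Extend the specular reflection vector from `road_point` (on the road,
    z = 0) and intersect it with a vertical wall at lateral position y = wall_y,
    analogous to the three-dimensional position 114 on the assumed wall 113."""
    cam = np.asarray(cam, dtype=float)
    p = np.asarray(road_point, dtype=float)
    incident = p - cam                     # ray from camera to the road point
    # Specular reflection about the road plane (normal = +z): flip the z component.
    reflected = incident * np.array([1.0, 1.0, -1.0])
    if reflected[1] == 0:
        return None                        # ray parallel to the wall
    t = (wall_y - p[1]) / reflected[1]     # parameter where the ray meets the wall
    if t <= 0:
        return None                        # wall lies behind the reflection direction
    return p + t * reflected

# Example: camera 1.0 m above the road, road point 5 m behind and 2 m to the
# side, assumed wall 8 m to the side of the camera.
print(reflected_background_point(cam=(0, 0, 1.0), road_point=(-5.0, 2.0, 0.0), wall_y=8.0))
```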
- the feature amount calculation unit 202 calculates, for each background region and each reflection region set by the region setting unit 201, a feature amount indicating the feature of the image in each of these regions.
- FIG. 8 is a diagram showing an example of functional blocks of the feature quantity calculation unit 202.
- The feature quantity calculation unit 202 includes, for example, a road surface edge angle histogram extraction unit 202a, a white line edge angle estimation unit 202b, a background edge angle histogram extraction unit 202c, a background road surface edge angle correlation estimation unit 202d, and a background road surface solid object edge estimation unit 202e.
- FIG. 9 is a diagram showing an example of the functional blocks of the reflection determination unit 203.
- The reflection determination unit 203 includes, for example, an edge strength analysis unit 203a, a white line edge suppression unit 203b, a solid object edge emphasis unit 203c, a local region specific correlation analysis unit 203d, and a left/right separate correlation analysis unit 203e.
- FIG. 10 is a diagram showing an example of a functional block of the application execution unit 204.
- The application execution unit 204 includes, for example, a lane recognition unit 204a, an other vehicle recognition unit 204b, a pedestrian detection unit 204c, a sign detection unit 204d, an entrainment prevention recognition unit 204e, a parking frame recognition unit 204f, and a moving body detection unit 204g.
- The lane recognition unit 204a recognizes the lanes to the left and right of the host vehicle based on the captured image acquired by the camera 1. White line feature quantities are extracted from the captured image, straight lines along which those feature quantities are aligned are extracted, and finally the relative position and attitude of the host vehicle with respect to the white lines in world coordinates are calculated from the lines on the image to determine whether the vehicle is likely to deviate from the lane. If so, the alarm control unit 205 is instructed to output an alarm.
- The other vehicle recognition unit 204b recognizes another vehicle present at the left rear or right rear of the host vehicle based on the photographed image acquired by the camera 1. As described in detail later, the application execution unit 204 recognizes the other vehicle based on an image information value derived from the image in a detection area set in the captured image, and detects the relative speed of the recognized other vehicle with respect to the host vehicle.
- The application execution unit 204 also determines whether there is another vehicle that may collide with the host vehicle. For example, when the host vehicle is about to change lanes and another vehicle in the lane change direction is approaching, it is determined that a collision is possible, and the alarm control unit 205 is instructed to output an alarm.
- The pedestrian detection unit 204c detects pedestrians from the captured image acquired by the camera 1; it detects pedestrians with whom a collision may occur in the traveling direction of the host vehicle and warns when there is a risk of collision.
- The sign detection unit 204d detects road signs in the captured image acquired by the camera 1 and conveys the type of sign to the user by voice or display.
- The entrainment prevention recognition unit 204e recognizes, based on the captured image acquired by the camera 1, whether there is a two-wheeled vehicle or the like that could be caught in when turning at an intersection, and warns when there is a risk of contact with the host vehicle.
- The parking frame recognition unit 204f recognizes parking frames for the purpose of automatic parking and parking assistance, and executes assistance or control for parking the vehicle based on the position and attitude of the parking frame.
- The moving body detection unit 204g recognizes moving bodies around the host vehicle at low vehicle speeds based on the captured image acquired by the camera 1. When a moving object is detected in the captured image and the possibility of contact is judged to be high from its moving direction and the behavior of the vehicle, the alarm control unit 205 is instructed to output an alarm.
- the alarm control unit 205 outputs an alarm output signal to the alarm output unit 3 in response to an instruction from the application execution unit 204. By the output of the alarm output signal, the alarm output unit 3 outputs an alarm to the driver.
- In this way, the in-vehicle surrounding environment recognition apparatus 100 realizes an alarm when there is a risk of collision with an obstacle or the like.
- the alarm control unit 205 stops the output of the alarm output signal to the alarm output unit 3 when receiving the notification of the presence of reflection from the reflection determination unit 203. At this time, even if an instruction for alarm output is issued from the application execution unit 204, the alarm control unit 205 does not output an alarm output signal to the alarm output unit 3. As a result, when there is reflection of a background on the road surface, the alarm output by the alarm output unit 3 is suppressed.
- the alarm suppression adjustment unit 206 adjusts the suppression degree of the alarm output performed by the alarm control unit 205 based on the relative speed of the other vehicle detected by the other vehicle recognition unit 204b in the application execution unit 204. That is, when the relative speed of the other vehicle is relatively low, it is considered that there is a high possibility that the background object reflected on the road surface is erroneously recognized as the other vehicle. Therefore, in such a case, the degree of suppression of the alarm output is increased by the alarm suppression adjustment unit 206, thereby making it more difficult to generate a false alarm due to the reflection of a background object.
- The specific method by which the alarm suppression adjustment unit 206 adjusts the degree of alarm output suppression is described later.
- FIG. 5 is a flowchart of the processing executed for the warning suppression at the time of road surface reflection described above. This processing is performed by the control unit 2 at predetermined processing cycles while the application is being executed.
- In step S110, the control unit 2 uses the camera 1 to image a predetermined imaging area including the road surface around the vehicle and acquires a photographed image.
- the photographed image is output from the camera 1 to the control unit 2 and used in the subsequent processing.
- In step S120, the control unit 2 sets the background areas and the reflection areas in the captured image acquired in step S110.
- the area setting unit 201 sets a plurality of background areas and a plurality of reflection areas in predetermined portions in the captured image.
- Depending on the application, use of a front camera may be premised; in general, any of a front camera, a side camera, and a rear camera may be used. Either a side camera or a rear camera may be used for entrainment prevention and parking frame recognition. Whichever camera is used, the basic method and idea can be applied as is.
- FIG. 6 is a view showing an example of a background area and a reflection area set in a photographed image.
- the photographed image 30 shown in FIG. 6 is divided into a road surface image area 32 where the road surface is photographed and a background image area 33.
- In the photographed image 30, background areas 34a to 34f and reflection areas 35a to 35f are set at positions corresponding to the right rear of the vehicle (for a rear camera) or the left front (for a front camera), and background areas 36a to 36f and reflection areas 37a to 37f are set at positions corresponding to the left rear (for a rear camera) or the right front (for a front camera).
- the road surface area setting unit 201a thus sets the reflection areas 35a to 35f of the right or left adjacent lanes and the reflection areas 37a to 37f of the left or right adjacent lanes. Further, the reflective background area setting unit 201c sets right or left background areas 34a to 34f and right or left background areas 36a to 36f.
- Background regions 34a to 34f and 36a to 36f are respectively set at symmetrical positions in the background image region 33 along the direction of the positional change of the background object in the photographed image 30 which occurs as the vehicle travels.
- Reflected areas 35a to 35f and 37a to 37f are set in the road surface image area 32 corresponding to the background areas 34a to 34f and 36a to 36f, respectively.
- background areas 34a and 36a set at the left and right ends in the photographed image 30, that is, positions closest to the vehicle in real space correspond to the reflection areas 35a and 37a, respectively.
- background areas 34f and 36f set closer to the center in the captured image 30, that is, at positions farthest from the vehicle in real space correspond to the reflection areas 35f and 37f, respectively.
- In other words, the reflection areas 35a to 35f and 37a to 37f are set in the road surface image area 32, and the background areas 34a to 34f and 36a to 36f are set at the positions in the background image area 33 whose background objects appear reflected in those reflection areas.
- The set positions of the reflection areas 35a to 35f and 37a to 37f correspond to the areas where application detection is performed, for example the detection areas used by the other vehicle recognition unit 204b to recognize other vehicles present at the left rear or right rear of the host vehicle.
- In step S130, the control unit 2 causes the other vehicle recognition unit 204b of the application execution unit 204 to perform the other vehicle recognition processing for recognizing another vehicle traveling around the host vehicle. In this processing, when another vehicle is present at the left rear or right rear of the host vehicle, it is recognized and its relative speed with respect to the host vehicle is detected. The specific content of the other vehicle recognition processing is described in detail later.
- In step S140, the control unit 2 determines whether another vehicle present at the left rear (or right front) or the right rear (or left front) of the host vehicle was recognized by the other vehicle recognition processing of step S130. If another vehicle was recognized, the process proceeds to step S150; if not, to step S170.
- In step S150, the control unit 2 determines whether the relative speed of the other vehicle detected in the other vehicle recognition processing of step S130 is within a predetermined range, for example 0 to 10 km/h. If the relative speed is within this range, the process proceeds to step S160; otherwise, to step S170.
- In step S160, the control unit 2 causes the alarm suppression adjustment unit 206 to adjust the degree of alarm suppression. Specifically, the conditions for determining the presence or absence of a reflected background object are relaxed so that alarm suppression occurs more readily than when the relative speed condition is not satisfied. The specific method of relaxing these conditions is described in detail later.
- In step S170, the control unit 2 causes the feature amount calculation unit 202 to calculate, for each of the background areas 34a to 34f and 36a to 36f and each of the reflection areas 35a to 35f and 37a to 37f set in step S120, a feature quantity representing the features of the image in that area. For example, an edge angle in the photographed image 30 is calculated for each pixel of the image in each background area and each reflection area, based on the luminance gradient at that pixel. By forming a histogram of the edge angles of the pixels in each area, a feature quantity reflecting the edge angles of the image in that area can be calculated.
- the method of calculating the feature amount is not limited to this as long as the feature of the image in each area can be appropriately represented.
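- The edge-angle-histogram feature described above might be computed as in the following sketch, assuming Sobel gradients; the bin count and the gradient-magnitude threshold are illustrative assumptions.

```python
import numpy as np
import cv2

# Hedged sketch of the edge-angle-histogram feature of step S170.

def edge_angle_histogram(gray_region, bins=36, mag_thresh=30.0):
    """Histogram of luminance-gradient directions over one background or
    reflection area (`gray_region`: 8-bit grayscale image patch)."""
    gx = cv2.Sobel(gray_region, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_region, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # edge angle in [0, 180)
    strong = mag >= mag_thresh                     # keep only clear edges
    hist, _ = np.histogram(ang[strong], bins=bins, range=(0.0, 180.0))
    return hist.astype(np.float32)
```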
- Here, the processing of the feature amount calculation unit 202 will be described in detail with reference to the functional block diagram of FIG. 8.
- an angle of a vector indicating a gradient direction of luminance is extracted as an edge angle.
- the edge angle is extracted for each pixel in each reflection area, and the distribution is analyzed for the edge angle in each reflection area by making a histogram for each reflection area.
- the edge angle histogram is relatively likely to include the white line edge angle component of the road surface.
- the white line edge angle estimation unit 202b estimates the edge angle of the white line in each reflection area. The estimation result is later used to suppress the edge angle component of the white line with respect to the reflection area when correlating the background area with the reflection area in the process of step S180.
- the background road surface edge angle correlation property estimation unit 202d estimates the correspondence between the background edge angle and the road surface edge angle for each of the background area and the reflection area corresponding to each other.
- That is, a correspondence table is generated indicating what edge angle on the image will appear in the corresponding reflection area if an edge of a given angle in the background is reflected on the road surface. Based on this correspondence table, the background road surface three-dimensional object edge estimation unit 202e estimates the edge angles attributable to three-dimensional objects and records the estimate.
- In step S180, the control unit 2 causes the reflection determination unit 203 to perform the reflection determination, which determines the presence or absence of a background object reflected on the road surface based on the feature amounts of the respective areas calculated in step S170.
- Specifically, the feature amounts calculated for the background areas 34a to 34f and 36a to 36f are compared with the feature amounts calculated for the corresponding reflection areas 35a to 35f and 37a to 37f. For example, the feature amount of the background area 34a is compared with that of the reflection area 35a, and the feature amount of the background area 36a is compared with that of the reflection area 37a.
- the feature quantity of the background area 34f is compared with the feature quantity of the reflection area 35f corresponding thereto, and the feature quantity of the background area 36f is compared with the feature quantity of the reflection area 37f corresponding thereto.
- feature amounts of the other background areas and reflection areas are compared with each other. In this way, by comparing the feature amounts of each background area and each reflected area, the image in each background area is compared with the image in each reflected area, and the correlation is analyzed for each combination.
- The reason it is desirable to determine reflections on the road surface is to suppress false detections caused by road surface reflections when a background object is reflected. That is, if the edge strength of the road surface is weak, false detections do not occur in lane recognition, other vehicle recognition, pedestrian detection, and the like. Likewise, when the edge strength of the background is low, it is likely that there is no object to be reflected in the first place, and therefore that nothing is reflected on the road surface. For this reason, before the reflection determination is performed, the edge strength analysis unit 203a verifies that an appropriate edge distribution exists on the road surface and in the background.
- The white line edge suppression unit 203b uses the white line edge angle estimated in step S170 to suppress the histogram bins near the white line edge angle in the edge angle histogram of each road surface reflection area. For example, as preprocessing before correlating with the background, the heights of the histogram bins within a predetermined angle range centered on the white line edge angle are multiplied by a factor of 0.3 to reduce the influence of the white line. As a result, the influence of the white line, which is a factor in erroneous reflection judgments, can be reduced.
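- The white line suppression preprocessing might look like the following sketch; the 0.3 factor follows the example in the text, while the angle window is an assumption.

```python
import numpy as np

# Hedged sketch: bins of the reflection-area edge-angle histogram near the
# estimated white line edge angle are scaled down by 0.3 before correlation.
# The +/- 10 degree window is an assumption.

def suppress_white_line(hist, white_line_angle_deg, bins=36,
                        window_deg=10.0, factor=0.3):
    hist = hist.astype(np.float32).copy()
    bin_width = 180.0 / bins
    centers = (np.arange(bins) + 0.5) * bin_width
    # Angular distance on the 180-degree-periodic edge-angle circle.
    diff = np.abs(centers - white_line_angle_deg) % 180.0
    diff = np.minimum(diff, 180.0 - diff)
    hist[diff <= window_deg] *= factor
    return hist
```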
- The edge angle estimated from the edge angle histogram by the background road surface three-dimensional object edge estimation unit 202e is treated as the three-dimensional object edge angle for the background area, and as the three-dimensional object reflected edge angle for the road surface reflection area.
- The local region specific correlation analysis unit 203d then analyzes the correlation between the edge angle histograms of each corresponding pair of road surface reflection area and background area. This comparison takes into account the change in edge angle caused by the reflection: the background road surface edge angle correlation estimation unit 202d calculates in advance which bin position in the edge angle histogram of each reflection area corresponds to which bin position in the edge angle histogram of each background area, and the two histograms are compared based on this result. In this way, the correlation between the edge angle histograms representing the feature amounts of each background area and each reflection area can be analyzed correctly, reflecting the state of the reflection.
- The correlation check over the local areas is performed by comparing corresponding processing areas from the left side of the screen: the reflection area 35a with the background area 34a, then 35b with 34b, 35c with 34c, 35d with 34d, 35e with 34e, and 35f with 34f, analyzing in each case whether the feature values of the road surface reflection area and the corresponding background area are correlated.
- Similarly, the correlation is analyzed for the reflection area 37a and the background area 36a, 37b and 36b, 37c and 36c, 37d and 36d, 37e and 36e, and 37f and 36f, respectively.
- In this way, the feature amounts of corresponding background and reflection areas are compared with each other: on the left side of the screen, the feature quantities calculated for the background areas 34a to 34f are compared with those of the road surface reflection areas 35a to 35f, and on the right side, those of the background areas 36a to 36f with those of the reflection areas 37a to 37f, pairing the areas with the same alphabetic subscripts.
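- A minimal sketch of this per-area correlation check is given below, assuming the background histogram is first remapped through the angle-correspondence table of the correlation estimation unit 202d and that normalized cross-correlation serves as the similarity measure; the patent itself only requires some correlation value compared against a threshold.

```python
import numpy as np

# Hedged sketch of the local-area correlation analysis of step S180.

def remap_histogram(background_hist, bin_map):
    """`bin_map[i]` = reflection-area bin that background bin i maps to."""
    remapped = np.zeros_like(background_hist, dtype=np.float32)
    for src, dst in enumerate(bin_map):
        remapped[dst] += background_hist[src]
    return remapped

def histogram_correlation(background_hist, reflection_hist, bin_map):
    expected = remap_histogram(np.asarray(background_hist, np.float32), bin_map)
    observed = np.asarray(reflection_hist, np.float32)
    e = expected - expected.mean()
    o = observed - observed.mean()
    denom = np.linalg.norm(e) * np.linalg.norm(o)
    return float(e @ o / denom) if denom > 0 else 0.0

def reflection_present(bg_hists, refl_hists, bin_maps, threshold=0.6):
    """Illustrative decision: reflection judged present when every
    corresponding pair (34a/35a, 34b/35b, ...) correlates above the
    threshold; the threshold value is an assumption."""
    return all(histogram_correlation(b, r, m) >= threshold
               for b, r, m in zip(bg_hists, refl_hists, bin_maps))
```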
- the left and right separate correlation analysis unit 203e analyzes the correlation by putting together the background area and the reflection area on the left and right of the screen.
- As the vehicle travels, feature amounts move rearward in both the background areas and the road surface reflection areas. For example, when a feature quantity indicating an edge angle of 45 degrees appears in the reflection area 35b on the left of the screen, it flows rearward in subsequent frames, so that a similar tendency appears in, for example, the reflection area 35e. Suppose further that there is a highly correlated background edge angle that likewise moves from the background area 34b to the background area 34e in time series. In such a case, it is determined that there is a high possibility that a reflection of the background on the road surface has occurred.
- Beyond the local correlation between each background area and road surface reflection area, it is also determined whether the presence or absence of feature amounts is consistent in the depth direction. For example, on the left side of the screen, if only the background area 34b and the reflection area 35b are correlated while the other areas contain feature amounts with low correlation, it is considered likely that the one matching local area exists merely by chance. On the other hand, if only the background area 34b and the reflection area 35b are correlated and the feature amounts of the other background and road surface reflection areas are small, it is considered likely that a reflection is actually occurring on the left side of the screen.
- In step S180, the feature quantities of the respective areas are compared as described above, and based on the comparison results it is determined whether a background reflection exists between the background areas 34a to 34f and the reflection areas 35a to 35f, and between the background areas 36a to 36f and the reflection areas 37a to 37f. For example, when the correlation between the background areas 34a to 34f and the reflection areas 35a to 35f is sufficiently high and the images in the background area group 34a to 34f and in the reflection area group 35a to 35f as a whole move in a direction away from the vehicle, it is determined that there is a reflection of a background object on the road surface at the right rear of the vehicle.
- Similarly, when the correlation between the background areas 36a to 36f and the reflection areas 37a to 37f is sufficiently high and the images in both groups move in the same manner, it is determined that there is a reflection of a background object on the road surface at the left rear of the vehicle.
- In step S160 described above, the degree of alarm suppression can be adjusted by changing the reference values used in step S180 when the presence or absence of a reflected background object is determined by comparing the feature amounts of the respective areas. That is, in step S180, a background reflection is judged present when the correlation of the feature amounts between the background areas 34a to 34f and the reflection areas 35a to 35f, or between the background areas 36a to 36f and the reflection areas 37a to 37f, is equal to or greater than a predetermined threshold, the correlation between the images in those areas then being regarded as high. In step S160, lowering this correlation threshold relaxes the condition for determining the presence of a background reflection in step S180, so that the alarm suppression of step S200, described later, occurs more readily.
- The degree of alarm suppression can also be adjusted by changing the conditions for calculating the feature quantities of the respective areas in step S170, more specifically by changing the edge detection condition. That is, in step S170, for each of the background areas 34a to 34f and 36a to 36f and the reflection areas 35a to 35f and 37a to 37f, portions where the luminance difference between adjacent pixels is equal to or greater than a predetermined value are detected as edges, and the feature amount of each area is calculated by forming a histogram of the edge angles. In step S160, reducing the luminance difference used as the edge detection condition, so that more edge components are detected, relaxes the condition for determining the presence of a background reflection in step S180 and thus makes the alarm suppression of step S200 easier to trigger.
- The change of the correlation threshold used in step S180 and the change of the edge detection condition used in step S170 may be performed independently, or both may be executed together.
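- A minimal sketch of the adjustment of step S160 follows, under the example relative speed window of 0 to 10 km/h from step S150; the concrete thresholds and scaling factors are illustrative assumptions.

```python
# Hedged sketch of step S160: when the relative speed of the recognized other
# vehicle falls within the suppression window, the correlation threshold and
# the edge-detection luminance threshold are both lowered so that the
# "reflection present" determination, and hence alarm suppression, occurs
# more readily. The concrete numbers are assumptions.

BASE_CORRELATION_THRESHOLD = 0.6
BASE_EDGE_LUMA_THRESHOLD = 30.0

def adjust_suppression(relative_speed_kmh, low=0.0, high=10.0):
    """Return (correlation_threshold, edge_luma_threshold) for this cycle."""
    if low <= relative_speed_kmh <= high:
        # Relax both conditions: reflections are more easily judged present.
        return BASE_CORRELATION_THRESHOLD * 0.8, BASE_EDGE_LUMA_THRESHOLD * 0.7
    return BASE_CORRELATION_THRESHOLD, BASE_EDGE_LUMA_THRESHOLD
```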
- In step S190, the control unit 2 determines the presence or absence of a background object reflected on the road surface from the result of the reflection determination in step S180. If it was determined in step S180 that a background reflection exists on the road surface at least at one of the left rear and right rear of the vehicle, the process proceeds from step S190 to step S200. Otherwise, the processing of the flowchart of FIG. 5 ends without executing step S200.
- In step S200, the control unit 2 stops the output of the alarm output signal to the alarm output unit 3. Specifically, the reflection determination unit 203 sends a predetermined notification to the alarm control unit 205, which stops the alarm output signal from being output to the alarm output unit 3, thereby suppressing the alarm output by the alarm output unit 3. This prevents an alarm from being output erroneously from the alarm output unit 3. In step S200, it is preferable to stop the alarm output only for the side of the vehicle, left rear or right rear, for which it was determined in step S180 that a background object is reflected on the road surface.
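- The overall processing cycle of FIG. 5 (steps S110 to S200) can be summarized as in the following sketch, with each stage injected as a callable; apart from the 0 to 10 km/h example window, all names and structure are assumptions for illustration.

```python
# Hedged sketch of the FIG. 5 processing cycle, run once per control period.

def process_cycle(capture, set_areas, recognize_other_vehicle,
                  adjust_suppression, compute_features,
                  reflection_present, suppress_alarm):
    image = capture()                                   # S110: acquire image
    areas = set_areas(image)                            # S120: set areas
    vehicle = recognize_other_vehicle(image)            # S130: recognition
    relax = (vehicle is not None                        # S140: recognized?
             and 0.0 <= vehicle.relative_speed_kmh <= 10.0)  # S150: speed window
    thresholds = adjust_suppression(relax)              # S160: adjust degree
    feats = compute_features(image, areas, thresholds)  # S170: feature amounts
    if reflection_present(feats, thresholds):           # S180/S190: reflection?
        suppress_alarm()                                # S200: suppress alarm
```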
- Similarly, in the pedestrian detection unit 204c, there is a possibility that the reflection is erroneously detected as a pedestrian or other moving object. Therefore, by suppressing the output of the alarm, false alarms are suppressed.
- In the sign detection unit 204d, there is a possibility that the reflection is erroneously detected as a sign; suppressing the output of the alarm therefore suppresses false alarms.
- In the entrainment prevention recognition unit 204e, there is a possibility that the reflection is erroneously detected as an obstacle; suppressing the output of the alarm again suppresses false alarms.
- In the parking frame recognition unit 204f, the position of the parking frame may become unstable or be erroneously recognized due to the reflection, so erroneous control and the like are suppressed by stopping the application that uses the parking frame.
- In the moving body detection unit 204g, the reflection may be erroneously detected as a moving body, so suppressing the output of the warning suppresses false alarms.
- FIG. 12 is a view for explaining the reduction effect of the false alarm obtained by the on-vehicle ambient environment recognition apparatus 100 of the present embodiment as described above.
- FIG. 12 illustrates how the output timing of the alarm from the alarm output unit 3 changes when the alarm suppression adjustment unit 206 adjusts the degree of alarm output suppression as described above.
- The relative speed of the other vehicle shown in FIG. 12(a) is within the predetermined range described in step S150 of FIG. 5 from time Tv1 to time Tv2, and outside it otherwise.
- In the period from time Tv1 to time Tv2, the alarm suppression adjustment unit 206 adjusts the degree of alarm output suppression as described above and relaxes the condition for determining the presence or absence of a background object reflected on the road surface. The reflection determination unit 203 then obtains the determination result that a reflection is present more readily. As a result, as shown in FIG. 12(b), the timing at which it is determined that there is no longer a reflection moves from time Tr4 to time Tr4a, and the period during which the determination result that a reflection is present is obtained is extended. The broken line in FIG. 12(b) shows the determination result obtained when the suppression degree is not adjusted.
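- The extension of the suppression period (the shift from Tr4 to Tr4a) might be realized with a simple hold counter, as in this sketch; the hold length is an illustrative assumption.

```python
# Hedged sketch: once a reflection has been judged present, the determination
# is held for extra processing cycles while the relative-speed condition is
# met, extending alarm suppression as in FIG. 12(b).

class ReflectionHold:
    def __init__(self, hold_cycles=10):
        self.hold_cycles = hold_cycles
        self.counter = 0

    def update(self, reflection_detected, speed_condition_met):
        if reflection_detected:
            self.counter = self.hold_cycles if speed_condition_met else 0
            return True
        if self.counter > 0 and speed_condition_met:
            self.counter -= 1
            return True          # keep suppressing the alarm a little longer
        self.counter = 0
        return False
```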
- the image information value indicated by reference numeral 50 in FIG. 12C is obtained in the other vehicle recognition process executed by the other vehicle recognition unit 204b of the application execution unit 204.
- When the image information value 50 exceeds the threshold Th0, another vehicle is recognized. In FIG. 12(c), since the image information value 50 exceeds the threshold Th0 in the period from time To1 to time To2, the other vehicle is recognized during this period.
- the alarm output unit 3 outputs an alarm at a timing as shown in FIG. 12 (d) according to the alarm output signal from the alarm control unit 205.
- The alarm is output during the period in which another vehicle is recognized in FIG. 12(c) and it is not determined in FIG. 12(b) that a reflection is present.
- The broken line in FIG. 12(d) shows the alarm output timing when the suppression degree is not adjusted: an alarm would be output in the period from time Tr4 to time To2.
- In the present embodiment, the degree of alarm output suppression is adjusted, and the period during which a reflection is determined to be present, as shown in FIG. 12(b), is extended accordingly. As a result, the alarm output in the period from time Tr4 to time To2 can be suppressed.
- FIG. 13 is a schematic configuration diagram of a vehicle for illustrating another vehicle recognition processing performed by the other vehicle recognition unit 204b in the on-vehicle surrounding environment recognition device 100 of the present invention.
- The on-vehicle ambient environment recognition device 100 detects, as an obstacle, another vehicle to which the driver of the host vehicle V should pay attention while driving, for example another vehicle with which contact is possible when the host vehicle V changes lanes.
- the on-vehicle surrounding environment recognition apparatus 100 of this example detects another vehicle traveling on an adjacent lane (hereinafter, also simply referred to as an adjacent lane) next to the lane on which the host vehicle travels.
- the in-vehicle surrounding environment recognition device 100 of this example can calculate the movement distance and movement speed of the detected other vehicle.
- In the following, an example is described in which the on-vehicle ambient environment recognition device 100, mounted on the host vehicle V, detects, among the three-dimensional objects detected around the host vehicle, another vehicle traveling in the adjacent lane next to the lane in which the host vehicle V travels.
- As shown in FIG. 13, the vehicle-mounted ambient environment recognition apparatus 100 of this example includes the camera 1, a vehicle speed sensor 5, and the other vehicle recognition unit 204b.
- The camera 1 is attached to the host vehicle V at a position of height h at the rear of the vehicle such that its optical axis is directed downward from the horizontal at an angle θ.
- the camera 1 images a predetermined area of the surrounding environment of the host vehicle V from this position.
- Although in the present embodiment a single camera 1 is provided to detect three-dimensional objects behind the host vehicle V, another camera for acquiring images around the vehicle may also be provided for other applications.
- the vehicle speed sensor 5 detects the traveling speed of the host vehicle V, and calculates, for example, the vehicle speed from the wheel speed detected by the wheel speed sensor that detects the number of revolutions of the wheel.
- the other vehicle recognition unit 204b detects a three-dimensional object behind the vehicle as another vehicle, and in the present example, calculates the movement distance and the movement speed of the three-dimensional object.
- FIG. 14 is a plan view showing a traveling state of the vehicle V of FIG.
- the camera 1 captures an image of the vehicle rear side at a predetermined angle of view a.
- the angle of view a of the camera 1 is set to an angle of view that enables imaging of the left and right lanes in addition to the lane in which the host vehicle V is traveling.
- the imageable area includes the detection target areas A1 and A2 on the rear of the host vehicle V and on the adjacent lanes to the left and right of the traveling lane of the host vehicle V.
- FIG. 15 is a block diagram showing details of the other vehicle recognition unit 204b of FIG. In FIG. 15, the camera 1 and the vehicle speed sensor 5 are also illustrated in order to clarify the connection relationship.
- As shown in FIG. 15, the other vehicle recognition unit 204b includes a viewpoint conversion unit 31, an alignment unit 32, a three-dimensional object detection unit 33, a three-dimensional object determination unit 34, a virtual image determination unit 38, a control unit 39, and a smear detection unit 40. The other vehicle recognition unit 204b of this embodiment has this configuration as a detection block for three-dimensional objects using differential waveform information.
- The other vehicle recognition unit 204b of this embodiment can also be configured as a detection block for three-dimensional objects using edge information. In that case, in the configuration shown in FIG. 15, the detection block configuration B, which includes the three-dimensional object detection unit 37, can be substituted for the corresponding detection block configuration A.
- Of course, both the detection block configuration A and the detection block configuration B may be provided, so that detection of three-dimensional objects using differential waveform information and detection using edge information can both be performed. In that case, the detection block configuration A and the detection block configuration B can be operated selectively according to environmental factors such as brightness.
- The on-vehicle ambient environment recognition device 100 of the present embodiment detects a three-dimensional object present in the right-side or left-side detection area behind the vehicle based on the image information obtained by the monocular camera 1 that images the rear of the vehicle.
- the viewpoint conversion unit 31 inputs captured image data of a predetermined area obtained by imaging with the camera 1 and converts the input captured image data into bird's eye image data in a state of being viewed from a bird's-eye view.
- the state of being viewed from a bird's eye is a state viewed from the viewpoint of a virtual camera looking down from above, for example, vertically downward.
- This viewpoint conversion can be performed, for example, as described in Japanese Patent Laid-Open No. 2008-219063.
- The captured image data is converted into bird's-eye view image data because, by this viewpoint conversion, vertical edges unique to three-dimensional objects are transformed into a group of straight lines passing through a specific fixed point; using this principle, planar objects and three-dimensional objects can be distinguished.
- the result of the image conversion process by the viewpoint conversion unit 31 is also used in detection of a three-dimensional object based on edge information described later.
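- The viewpoint conversion might be sketched as an inverse perspective mapping via a homography, as below; the four point correspondences are illustrative assumptions, whereas in practice they follow from the camera height h and tilt angle θ.

```python
import numpy as np
import cv2

# Hedged sketch of the conversion to a bird's-eye view image, assuming a
# calibrated homography between the image plane and the road plane.

# Pixel corners of a road-surface trapezoid in the captured image...
src = np.float32([[420, 400], [860, 400], [1180, 720], [100, 720]])
# ...and where those road points land in the top-down (bird's-eye) image.
dst = np.float32([[300, 0], [660, 0], [660, 720], [300, 720]])

H = cv2.getPerspectiveTransform(src, dst)

def to_birds_eye(frame):
    """Warp a captured frame into the bird's-eye view image PB_t."""
    return cv2.warpPerspective(frame, H, (960, 720))
```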
- the alignment unit 32 sequentially inputs the bird's-eye view image data obtained by the viewpoint conversion of the viewpoint conversion unit 31 and aligns the positions of the input bird's-eye view image data at different times.
- FIGS. 16(a) and 16(b) are diagrams for explaining the outline of the processing of the alignment unit 32.
- FIG. 16A is a plan view showing a movement state of the host vehicle V
- FIG. 16B is an image showing an outline of alignment.
- Assume that the host vehicle V at the current time is located at V1 and that the host vehicle V one time earlier was located at V2. Suppose also that the other vehicle VX is positioned to the rear of the host vehicle V and running parallel to it, that the other vehicle VX is located at V3 at the current time and was located at V4 one time earlier, and that the host vehicle V has moved a distance d in that interval.
- “one time before” may be a time in the past by a predetermined time (for example, one control cycle) from the current time, or may be a time in the past by any time.
- the bird's-eye image PB t at the current time is as shown in FIG. 16 (b).
- In the bird's-eye view image PB_t, the white line drawn on the road surface appears rectangular and relatively accurately in a plan-view state, but the other vehicle VX at position V3 falls over (tilts).
- Likewise, in the bird's-eye view image PB_t-1 one time before, the white line drawn on the road surface appears rectangular and relatively accurately in a plan-view state, but the other vehicle VX at position V4 falls over.
- As described above, this is because the vertical edges of a three-dimensional object appear as a group of straight lines along the falling direction through the viewpoint conversion processing to bird's-eye view image data, whereas a planar image on the road surface contains no vertical edges, so no such falling-over occurs even when the viewpoint is converted.
- the alignment unit 32 performs alignment of the bird's-eye view images PB t and PB t-1 as described above on the data. At this time, the alignment unit 32 offsets the bird's-eye view image PB t-1 one time before and makes the position coincide with the bird's-eye view image PB t at the current time.
- the image on the left and the image at the center in FIG. 16 (b) show the state of being offset by the moving distance d '.
- This offset amount d' is the amount of movement on the bird's-eye view image data corresponding to the actual movement distance d of the host vehicle V shown in FIG. 16(a), and is determined based on the signal from the vehicle speed sensor 5 and the time interval from one time before to the current time.
- the alignment unit 32 obtains the difference between the bird's-eye view images PB t and PB t-1 and generates data of the difference image PD t .
- The pixel value of the difference image PD_t may be the absolute value of the difference between the pixel values of the bird's-eye view images PB_t and PB_t-1, or, in order to cope with changes in the illumination environment, it may be set to "1" when that absolute value exceeds a predetermined threshold p and to "0" otherwise.
- the threshold value p may be set in advance, or may be changed in accordance with a control command according to the result of the virtual image determination of the control unit 39 described later.
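- A minimal sketch of the difference image generation described above, assuming 8-bit grayscale bird's-eye images that are already aligned; the default value of the threshold p is an arbitrary assumption.

```python
import numpy as np

def difference_image(pb_t, pb_t1, p=20):
    """Binarize the absolute difference of two aligned bird's-eye images PB_t, PB_t-1.

    p plays the role of the threshold p in the text (assumed value here); in the
    described device it may be changed by a control command of the control unit 39.
    """
    diff = np.abs(pb_t.astype(np.int16) - pb_t1.astype(np.int16))
    return (diff > p).astype(np.uint8)  # "1" where the difference exceeds p, else "0"
```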
- the three-dimensional object detection unit 33 detects a three-dimensional object on the basis of the data of the difference image PD t shown in FIG. 16 (b). At this time, the three-dimensional object detection unit 33 of this example also calculates the movement distance of the three-dimensional object in real space. In the detection of the three-dimensional object and the calculation of the movement distance, the three-dimensional object detection unit 33 first generates a differential waveform. In addition, the movement distance per time of a solid thing is used for calculation of the movement speed of a solid thing. The moving speed of the three-dimensional object can be used to determine whether the three-dimensional object is a vehicle.
- When generating the differential waveform, the three-dimensional object detection unit 33 of the present embodiment sets detection areas in the difference image PD_t.
- The on-vehicle ambient environment recognition device 100 of this example detects, as detection targets, other vehicles that the driver of the host vehicle V pays attention to, in particular other vehicles traveling in the lane adjacent to the lane of the host vehicle V that the host vehicle V may contact when changing lanes. For this reason, in this example, which detects three-dimensional objects based on image information, two detection areas are set on the right side and the left side of the host vehicle V within the image acquired by the camera 1.
- the other vehicle detected in the detection areas A1 and A2 is detected as an obstacle traveling on the adjacent lane next to the lane on which the host vehicle V travels.
- detection areas A1 and A2 may be set from the relative position with respect to the host vehicle V, or may be set based on the position of the white line.
- For setting the detection areas based on the position of the white line, an existing white line recognition technique or the like may be used, for example.
- the three-dimensional object detection unit 33 recognizes the sides (sides along the traveling direction) on the side of the vehicle V of the set detection areas A1 and A2 as ground lines L1 and L2 (FIG. 14).
- In general, a ground line means a line at which a three-dimensional object contacts the ground; in the present embodiment, however, it is set as described above rather than as the actual line of contact with the ground. Even so, experience shows that the difference between the ground line according to the present embodiment and the ground line obtained from the actual position of the other vehicle VX does not become too large, so there is no problem in practical use.
- FIG. 17 is a schematic diagram showing how the differential waveform is generated by the three-dimensional object detection unit 33 shown in FIG. 15.
- The three-dimensional object detection unit 33 generates a differential waveform DW_t from the portions corresponding to the detection areas A1 and A2 in the difference image PD_t (right view in FIG. 16B) calculated by the alignment unit 32.
- the three-dimensional object detection unit 33 generates a differential waveform DW t along the direction in which the three-dimensional object falls down due to viewpoint conversion.
- For convenience, FIG. 17 is described with reference only to the detection area A1, but a differential waveform DW_t is generated for the detection area A2 by the same procedure.
- Specifically, the three-dimensional object detection unit 33 defines a line La in the direction in which the three-dimensional object falls on the data of the difference image PD_t, and counts the number of difference pixels DP indicating a predetermined difference on the line La.
- Here, a difference pixel DP indicating a predetermined difference is a pixel whose value exceeds a predetermined threshold when the pixel values of the difference image PD_t are given as absolute differences, or a pixel representing "1" when the pixel values of the difference image PD_t are expressed by "0" and "1".
- After counting the number of difference pixels DP, the three-dimensional object detection unit 33 obtains the intersection CP of the line La and the ground line L1. Then, the three-dimensional object detection unit 33 associates the intersection CP with the count number, determines the horizontal-axis position, that is, the position on the up-down axis in the right view of FIG. 17, based on the position of the intersection CP, determines the vertical-axis position, that is, the position on the left-right axis in the right view of FIG. 17, from the count number, and plots it as the count number at the intersection CP.
- the three-dimensional object detection unit 33 defines lines Lb, Lc, ... in the direction in which the three-dimensional object falls down, counts the number of difference pixels DP, and determines the horizontal axis position based on the position of each intersection point CP. The vertical position is determined from the count number (the number of difference pixels DP) and plotted.
- In this way, the three-dimensional object detection unit 33 generates the differential waveform DW_t as shown in the right view of FIG. 17.
- Note that, as shown in the left view of FIG. 17, the lines La and Lb in the direction in which the three-dimensional object falls overlap the detection area A1 by different distances. Therefore, if the detection area A1 were filled with difference pixels DP, the number of difference pixels DP would be larger on the line La than on the line Lb. For this reason, when determining the vertical-axis position from the count number of the difference pixels DP, the three-dimensional object detection unit 33 normalizes the count based on the distance over which the lines La and Lb in the falling direction overlap the detection area A1. As a specific example, in the left view of FIG. 17, there are six difference pixels DP on the line La and five difference pixels DP on the line Lb.
- the three-dimensional object detection unit 33 normalizes the count number by dividing it by the overlap distance.
- As a result, as shown in the right view of FIG. 17, the values of the differential waveform DW_t corresponding to the lines La and Lb become substantially the same.
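- The waveform generation can be sketched as follows. The simplification that the falling-direction lines La, Lb, ... coincide with image columns is an assumption made for readability; in the described device each line radiates from the camera position, so the counting would follow those radial lines instead.

```python
import numpy as np

def differential_waveform(pd_t, area_mask):
    """Generate a differential waveform DW_t over one detection area.

    pd_t:      binary difference image PD_t ("1" = difference pixel DP)
    area_mask: binary mask of the detection area A1 (or A2)
    Each line's count of difference pixels is normalized by the length over
    which the line overlaps the detection area, as described in the text.
    """
    masked = (pd_t > 0) & (area_mask > 0)                # difference pixels inside the area
    counts = masked.sum(axis=0).astype(float)            # difference pixels per line
    overlap = (area_mask > 0).sum(axis=0).astype(float)  # overlap length per line
    waveform = np.zeros_like(counts)
    valid = overlap > 0
    waveform[valid] = counts[valid] / overlap[valid]     # normalized DW_t
    return waveform
```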
- the three-dimensional object detection unit 33 calculates the movement distance by comparison with the difference waveform DW t-1 one time before. That is, the three-dimensional object detection unit 33 calculates the movement distance from the time change of the differential waveforms DW t and DW t ⁇ 1 .
- the three-dimensional object detection unit 33 divides the differential waveform DW t into a plurality of small areas DW t1 to DW tn (n is an arbitrary integer of 2 or more).
- FIG. 18 is a diagram showing small areas DW t1 to DW tn divided by the three-dimensional object detection unit 33. As shown in FIG. The small areas DW t1 to DW tn are divided so as to overlap each other as shown in, for example, FIG. For example, the small area DW t1 and the small area DW t2 overlap, and the small area DW t2 and the small area DW t3 overlap.
- the three-dimensional object detection unit 33 obtains an offset amount (moving amount in the horizontal axis direction (vertical direction in FIG. 18) of the differential waveform) for each of the small areas DW t1 to DW tn .
- Here, the offset amount is determined as the amount of movement (the distance in the horizontal-axis direction) between the differential waveform DW_t-1 one unit time before and the differential waveform DW_t at the current time.
- Specifically, for each of the small areas DW_t1 to DW_tn, the three-dimensional object detection unit 33 moves the differential waveform DW_t-1 one unit time before in the horizontal-axis direction, determines the position (the position in the horizontal-axis direction) where the error with respect to the differential waveform DW_t at the current time is minimized, and obtains, as the offset amount, the amount of movement in the horizontal-axis direction between the original position of the differential waveform DW_t-1 and the position where the error is minimized. Then, the three-dimensional object detection unit 33 counts the offset amounts obtained for the respective small areas DW_t1 to DW_tn to form a histogram.
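- A sketch of the offset search and histogram formation, under stated assumptions: the small areas are given as index slices, the error measure is the sum of squared differences, and the search range `max_shift` is an arbitrary illustration.

```python
import numpy as np

def offset_histogram(dw_t, dw_t1, region_slices, max_shift=30, weights=None):
    """Per small area, find the shift of DW_t-1 minimizing the error against DW_t,
    then histogram the shifts; the histogram maximum is the relative movement τ*."""
    offsets = []
    for sl in region_slices:                       # possibly overlapping small areas
        errors = [((dw_t[sl] - np.roll(dw_t1, s)[sl]) ** 2).sum()
                  for s in range(-max_shift, max_shift + 1)]
        offsets.append(int(np.argmin(errors)) - max_shift)
    hist, edges = np.histogram(offsets, bins=2 * max_shift + 1,
                               range=(-max_shift - 0.5, max_shift + 0.5),
                               weights=weights)
    tau_star = int(edges[int(np.argmax(hist))] + 0.5)  # offset at the histogram maximum
    return tau_star, hist
```

- The optional `weights` argument anticipates the weighting of the small areas described below.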
- FIG. 19 is a diagram illustrating an example of a histogram obtained by the three-dimensional object detection unit 33.
- Since the offset amounts obtained for the small areas DW_t1 to DW_tn include some variation, the three-dimensional object detection unit 33 forms a histogram of the offset amounts including this variation and calculates the movement distance from the histogram.
- the three-dimensional object detection unit 33 calculates the movement distance of the three-dimensional object from the maximum value of the histogram. That is, in the example shown in FIG. 19, the three-dimensional object detection unit 33 calculates the offset amount indicating the maximum value of the histogram as the movement distance ⁇ * .
- the movement distance ⁇ * is the relative movement distance of the other vehicle VX with respect to the host vehicle V. Therefore, when calculating the absolute movement distance, the three-dimensional object detection unit 33 calculates the absolute movement distance based on the obtained movement distance ⁇ * and the signal from the vehicle speed sensor 5.
- FIG. 20 is a view showing the weighting performed by the three-dimensional object detection unit 33.
- As shown in FIG. 20, the small area DW_m (m is an integer of 1 or more and n−1 or less) is flat. That is, in the small area DW_m, the difference between the maximum value and the minimum value of the count of the number of pixels indicating a predetermined difference is small.
- the three-dimensional object detection unit 33 reduces the weight of such a small area DW m . This is because there is no feature in the flat small area DW m and there is a high possibility that the error will be large in calculating the offset amount.
- On the other hand, the small area DW_m+k (k is an integer of n−m or less) is rich in undulations. That is, in the small area DW_m+k, the difference between the maximum value and the minimum value of the count of the number of pixels indicating a predetermined difference is large.
- The three-dimensional object detection unit 33 increases the weight of such a small area DW_m+k. This is because the small area DW_m+k rich in undulations is distinctive, and there is a high possibility that the offset amount will be calculated accurately for it. Weighting in this manner improves the calculation accuracy of the movement distance.
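- One simple way to realize this weighting, assuming the max-min spread of the counts is used directly as the weight (the text only requires that flatter areas weigh less):

```python
import numpy as np

def region_weights(dw_t, region_slices):
    """Weight each small area DW_t1..DW_tn by the spread of its counts:
    flat areas (small max-min difference) get small weights, undulating
    areas (large max-min difference) get large ones."""
    spreads = np.array([float(dw_t[sl].max() - dw_t[sl].min()) for sl in region_slices])
    total = spreads.sum()
    if total == 0:
        return np.full(len(region_slices), 1.0 / len(region_slices))
    return spreads / total
```

- These values can be passed as the `weights` argument of the histogram sketch above so that offsets from distinctive areas dominate the maximum.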
- The other vehicle recognition unit 204b also includes a smear detection unit 40.
- the smear detection unit 40 detects a smear generation area from data of a captured image obtained by capturing with the camera 1.
- the smear is a whiteout phenomenon that occurs in a CCD image sensor or the like, so the smear detection unit 40 may be omitted when the camera 1 using a CMOS image sensor or the like in which such smear does not occur is adopted.
- FIG. 21 is an image diagram for explaining the processing by the smear detection unit 40 and the calculation processing of the differential waveform DW t by the processing.
- data of the captured image P in which the smear S exists is input to the smear detection unit 40.
- the smear detection unit 40 detects the smear S from the captured image P.
- Here, CCD stands for Charge-Coupled Device.
- A region that has a luminance value equal to or higher than a predetermined value and that is continuous in the vertical direction from the lower side of the image toward the upper side is searched for, and this is identified as the generation region of the smear S.
- The smear detection unit 40 generates data of a smear image SP in which the pixel value is set to "1" for the generation portion of the smear S and to "0" elsewhere. After generation, the smear detection unit 40 transmits the data of the smear image SP to the viewpoint conversion unit 31. The viewpoint conversion unit 31, to which the data of the smear image SP has been input, viewpoint-converts this data into a bird's-eye view state. Thereby, the viewpoint conversion unit 31 generates data of a smear bird's-eye view image SB_t. After generation, the viewpoint conversion unit 31 transmits the data of the smear bird's-eye view image SB_t to the alignment unit 32, together with the data of the smear bird's-eye view image SB_t-1 one time before.
- the alignment unit 32 performs alignment of the smear bird's-eye view images SB t and SB t-1 on the data.
- the specific alignment is the same as when the alignment of the bird's-eye view images PB t and PB t-1 is performed on data.
- the alignment unit 32 ORs the generation areas of the smears S of the smear bird's-eye view images SB t and SB t ⁇ 1 . Thereby, the alignment unit 32 generates data of the mask image MP. After generation, the alignment unit 32 transmits the data of the mask image MP to the three-dimensional object detection unit 33.
- The three-dimensional object detection unit 33 sets the count number of the frequency distribution to zero for the portion corresponding to the generation region of the smear S in the mask image MP. That is, when a differential waveform DW_t as shown in FIG. 21 has been generated, the three-dimensional object detection unit 33 sets the count number SC caused by the smear S to zero and generates a corrected differential waveform DW_t'.
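- The masking step can be sketched as below, reusing the column-wise simplification of the waveform sketch; `waveform` is DW_t and the smear bird's-eye images are assumed to be binary.

```python
import numpy as np

def mask_smear(waveform, sb_t, sb_t1):
    """Zero the waveform counts inside the smear generation region (DW_t -> DW_t')."""
    mp = (sb_t > 0) | (sb_t1 > 0)        # mask image MP: OR of SB_t and SB_t-1
    smeared_lines = mp.any(axis=0)       # lines intersecting the smear S
    corrected = waveform.copy()
    corrected[smeared_lines] = 0.0       # count number SC set to zero
    return corrected
```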
- the three-dimensional object detection unit 33 obtains the moving speed of the vehicle V (camera 1), and obtains the offset amount for the stationary object from the obtained moving speed. After obtaining the offset amount of the stationary object, the three-dimensional object detection unit 33 calculates the movement distance of the three-dimensional object after ignoring the offset amount corresponding to the stationary object among the maximum values of the histogram.
- FIG. 22 is a view showing another example of the histogram obtained by the three-dimensional object detection unit 33.
- That is, the three-dimensional object detection unit 33 obtains the offset amount for a stationary object from the moving speed, ignores the local maximum of the histogram corresponding to that offset amount, and calculates the movement distance of the three-dimensional object by adopting the remaining local maximum.
- When a plurality of local maxima remain even after the offset amount corresponding to the stationary object is ignored, the three-dimensional object detection unit 33 stops the calculation of the movement distance.
- FIGS. 23 and 24 are flowcharts showing the three-dimensional object detection procedure of this embodiment.
- the other-vehicle recognition unit 204b inputs data of an image P captured by the camera 1, and the smear detection unit 40 generates a smear image SP (S1).
- Next, the viewpoint conversion unit 31 generates data of the bird's-eye view image PB_t from the captured image data P from the camera 1, and generates data of the smear bird's-eye view image SB_t from the data of the smear image SP (S2).
- Then, the alignment unit 32 aligns the data of the bird's-eye view image PB_t with the data of the bird's-eye view image PB_t-1 one unit time before, and aligns the data of the smear bird's-eye view image SB_t with the data of the smear bird's-eye view image SB_t-1 one unit time before (S3).
- After this alignment, the alignment unit 32 generates the data of the difference image PD_t and the data of the mask image MP (S4).
- Thereafter, the three-dimensional object detection unit 33 generates a differential waveform DW_t from the data of the difference image PD_t and the data of the difference image PD_t-1 one unit time before (S5).
- After generating the differential waveform DW_t, the three-dimensional object detection unit 33 sets the count number corresponding to the generation region of the smear S in the differential waveform DW_t to zero, thereby suppressing the influence of the smear S (S6).
- the three-dimensional object detection unit 33 determines whether the peak of the difference waveform DW t is equal to or more than the first threshold value ⁇ (S7).
- the first threshold value ⁇ may be set in advance and may be changed according to a control command of the control unit 39 shown in FIG. 15, but the details will be described later.
- When the peak of the differential waveform DW_t is not equal to or greater than the first threshold value α (S7: NO), that is, when there is almost no difference, it is considered that no three-dimensional object exists in the captured image P. In this case, the three-dimensional object detection unit 33 determines that no three-dimensional object exists and that no other vehicle exists as an obstacle (FIG. 24: S16). Then, the processing shown in FIGS. 23 and 24 ends.
- On the other hand, when the peak of the differential waveform DW_t is equal to or greater than the first threshold value α (S7: YES), the three-dimensional object detection unit 33 determines that a three-dimensional object exists and divides the differential waveform DW_t into a plurality of small areas DW_t1 to DW_tn (S8). Next, the three-dimensional object detection unit 33 performs weighting for each of the small areas DW_t1 to DW_tn (S9). Thereafter, the three-dimensional object detection unit 33 calculates an offset amount for each of the small areas DW_t1 to DW_tn (S10) and generates a histogram taking the weights into account (S11).
- the three-dimensional object detection unit 33 calculates the relative movement distance, which is the movement distance of the three-dimensional object with respect to the host vehicle V, based on the histogram (S12). Next, the three-dimensional object detection unit 33 calculates the absolute movement speed of the three-dimensional object from the relative movement distance (S13). At this time, the three-dimensional object detection unit 33 differentiates the relative movement distance by time to calculate the relative movement speed, and adds the own vehicle speed detected by the vehicle speed sensor 5 to calculate the absolute movement speed.
- the three-dimensional object detection unit 33 determines whether the absolute movement speed of the three-dimensional object is 10 km / h or more and the relative movement speed of the three-dimensional object with respect to the host vehicle V is +60 km / h or less (S14). If the both are satisfied (S14: YES), the three-dimensional object detection unit 33 determines that the three-dimensional object is the other vehicle VX (S15). Then, the processing illustrated in FIGS. 23 and 24 is ended. On the other hand, when either one is not satisfied (S14: NO), the three-dimensional object detection unit 33 determines that there is no other vehicle (S16). Then, the processing illustrated in FIGS. 23 and 24 is ended.
- In the present embodiment, the rear sides of the host vehicle V are set as the detection areas A1 and A2, and emphasis is placed on detecting another vehicle VX to which attention should be paid while the host vehicle V is traveling, in particular on whether the host vehicle V may contact it when changing lanes. This is to determine, when the host vehicle V changes lanes, whether there is a possibility of contact with another vehicle VX traveling in the adjacent lane next to the traveling lane of the host vehicle. The process of step S14 is performed for this reason.
- the following effects can be obtained by determining whether the absolute moving speed of the three-dimensional object is 10 km / h or more and the relative moving speed of the three-dimensional object with respect to the host vehicle V is +60 km / h or less in step S14.
- Depending on the attachment error of the camera 1, the absolute moving speed of a stationary object may be detected as several km/h. Therefore, determining whether the speed is 10 km/h or more reduces the possibility that a stationary object is determined to be the other vehicle VX.
- Also, depending on noise, the relative speed of a three-dimensional object with respect to the host vehicle V may be detected as exceeding +60 km/h. Therefore, determining whether the relative speed is +60 km/h or less reduces the possibility of false detection due to noise.
- the threshold of the relative moving speed for determining the other vehicle VX in step S14 can be set arbitrarily. For example, -20 km / h or more and 100 km / h or less can be set as the threshold of the relative moving speed.
- Here, the negative lower limit is the lower limit of the moving speed for the case where the detected object moves toward the rear of the host vehicle V, that is, the case where the detected object flows backward.
- the threshold can be appropriately set in advance, but can be changed in accordance with a control command of the control unit 39 described later.
- Instead of the processing of step S14, it may be determined that the absolute moving speed is not negative, or that it is not 0 km/h. Further, since the present embodiment places emphasis on whether there is a possibility of contact when the host vehicle V changes lanes, a warning sound may be emitted to the driver of the host vehicle V, or a corresponding warning display may be shown by a predetermined display device, when the other vehicle VX is detected in step S15.
- As described above, according to the procedure for detecting a three-dimensional object using differential waveform information, the number of pixels indicating a predetermined difference is counted on the data of the difference image PD_t along the direction in which the three-dimensional object falls, and the differential waveform DW_t is generated by forming a frequency distribution.
- the pixel indicating a predetermined difference on the data of the difference image PD t is a pixel that has changed in the image at a different time, in other words, it can be said that it is a place where a three-dimensional object was present.
- the difference waveform DW t is generated by counting the number of pixels along the direction in which the three-dimensional object falls and performing frequency distribution at the location where the three-dimensional object exists.
- In other words, the differential waveform DW_t is generated from information in the height direction of the three-dimensional object, and the movement distance of the three-dimensional object is calculated from the temporal change of the differential waveform DW_t containing this height-direction information. For this reason, compared to a case where attention is paid only to the movement of a single point, the detection locations before and after the temporal change are specified with height-direction information included, and therefore tend to be the same location on the three-dimensional object; since the movement distance is calculated from the temporal change of the same location, its calculation accuracy can be improved.
- the count number of the frequency distribution is set to zero for the portion of the difference waveform DW t that corresponds to the generation region of the smear S.
- the movement distance of the three-dimensional object is calculated from the offset amount of the differential waveform DW t when the error of the differential waveform DW t generated at different times is minimized. Therefore, the movement distance is calculated from the offset amount of one-dimensional information called waveform, and the calculation cost can be suppressed in calculating the movement distance.
- Further, the differential waveforms DW_t generated at different times are divided into a plurality of small areas DW_t1 to DW_tn; thereby, a plurality of waveforms representing the respective portions of the three-dimensional object are obtained.
- weighting is performed for each of the plurality of small areas DW t1 to DW tn , and the offset amount obtained for each of the small areas DW t1 to DW tn is counted according to the weights to form a histogram. Therefore, the moving distance can be calculated more appropriately by increasing the weight for the characteristic area and reducing the weight for the non-characteristic area. Therefore, the calculation accuracy of the movement distance can be further improved.
- The weight is increased as the difference between the maximum value and the minimum value of the count of the number of pixels indicating a predetermined difference increases. For this reason, the weight becomes large for a distinctively undulating area where the difference between the maximum and minimum values is large, and small for a flat area with little undulation.
- Since the offset amount is easier to obtain accurately for an undulating, distinctive area than for a flat one, calculating the movement distance with larger weights on the areas where the difference between the maximum and minimum values is large further improves the calculation accuracy of the movement distance.
- the movement distance of the three-dimensional object is calculated from the maximum value of the histogram obtained by counting the offset amount obtained for each of the small regions DW t1 to DW tn . For this reason, even if there is a variation in the offset amount, it is possible to calculate a moving distance with higher accuracy from the maximum value.
- the offset amount for the stationary object is obtained and the offset amount is ignored, it is possible to prevent the situation in which the calculation accuracy of the moving distance of the three-dimensional object is reduced due to the stationary object.
- Further, when there are a plurality of maximum values in the histogram of the offset amounts, the calculation of the movement distance of the three-dimensional object is stopped. For this reason, it is possible to prevent a situation in which an erroneous movement distance is calculated from a histogram having a plurality of maximum values.
- the vehicle speed of the host vehicle V is determined based on the signal from the vehicle speed sensor 5 in the above embodiment, the present invention is not limited to this, and the speed may be estimated from a plurality of images at different times. In this case, the vehicle speed sensor becomes unnecessary, and the configuration can be simplified.
- In the above embodiment, the captured image at the current time and the image one time before are converted into bird's-eye views, the converted bird's-eye views are aligned, the difference image PD_t is generated, and the generated difference image PD_t is evaluated along the falling direction (the direction in which the three-dimensional object falls when the captured image is converted into a bird's-eye view) to generate the differential waveform DW_t; however, the invention is not limited to this.
- For example, the differential waveform DW_t may instead be generated by evaluating the image data along the direction corresponding to the falling direction (that is, the direction obtained by converting the falling direction into a direction on the captured image).
- In other words, as long as the positions of the image at the current time and the image one time before are aligned, the difference image PD_t is generated from the difference between the aligned images, and the difference image PD_t can be evaluated along the falling direction of the three-dimensional object when converted into a bird's-eye view, it is not always necessary to explicitly generate a bird's-eye view.
- FIG. 25 is a view showing the imaging range and the like of the camera 1 of FIG. 15; FIG. 25(a) is a plan view, and FIG. 25(b) is a perspective view in real space on the rear side of the host vehicle V.
- the camera 1 has a predetermined angle of view a, and images the rear side from the host vehicle V included in the predetermined angle of view a.
- The angle of view a of the camera 1 is set so that, in addition to the lane in which the host vehicle V travels, the adjacent lanes are also included in the imaging range of the camera 1.
- The detection areas A1 and A2 in this example are trapezoidal in plan view (in the bird's-eye view state), and the positions, sizes, and shapes of the detection areas A1 and A2 are determined based on the distances d1 to d4.
- The detection areas A1 and A2 are not limited to the trapezoidal shape of the illustrated example, and may have another shape, such as a rectangle, in the bird's-eye view state.
- the distance d1 is a distance from the host vehicle V to the ground lines L1 and L2.
- Grounding lines L1 and L2 mean lines on which a three-dimensional object existing in a lane adjacent to the lane in which the host vehicle V travels contacts the ground. In the present embodiment, it is an object to detect another vehicle VX or the like (including a two-wheeled vehicle etc.) traveling on the left and right lanes adjacent to the lane of the own vehicle V on the rear side of the own vehicle V.
- Since the position where the ground lines L1 and L2 of the other vehicle VX will lie can be largely predicted in advance, the distance d1 can be determined substantially fixedly.
- the distance d1 is not limited to being fixed and may be variable.
- the other vehicle recognition unit 204b recognizes the position of the white line W with respect to the host vehicle V by a technique such as white line recognition, and determines the distance d11 based on the recognized position of the white line W.
- the distance d1 is variably set using the determined distance d11.
- In the present embodiment described below, however, the distance d1 is assumed to be fixedly determined.
- the distance d2 is a distance extending from the rear end of the host vehicle V in the traveling direction of the vehicle.
- the distance d2 is determined so that the detection areas A1 and A2 fall within at least the angle of view a of the camera 1.
- In this example, the distance d2 is set so that the detection areas A1 and A2 are in contact with the range delimited by the angle of view a.
- the distance d3 is a distance indicating the length of the detection areas A1 and A2 in the vehicle traveling direction.
- the distance d3 is determined based on the size of the three-dimensional object to be detected. In the present embodiment, since the detection target is the other vehicle VX or the like, the distance d3 is set to a length including the other vehicle VX.
- the distance d4 is a distance indicating a height set so as to include a tire of another vehicle VX or the like in the real space, as shown in FIG. 25 (b).
- the distance d4 is a length shown in FIG. 25 (a) in the bird's-eye view image.
- The distance d4 may also be a length that does not include, in the bird's-eye view image, the lanes further adjacent to the left and right adjacent lanes (that is, lanes two lanes away). This is because, if the lanes two lanes away from the lane of the host vehicle V were included, it would no longer be possible to distinguish whether the other vehicle VX exists in the adjacent lane to the left or right of the lane in which the host vehicle V is traveling, or in a lane two lanes away.
- the distances d1 to d4 are determined, and thereby the positions, sizes, and shapes of the detection areas A1 and A2 are determined.
- the position of the upper side b1 of the trapezoidal detection areas A1 and A2 is determined by the distance d1.
- the start position C1 of the upper side b1 is determined by the distance d2.
- the end point position C2 of the upper side b1 is determined by the distance d3.
- The side b2 of the trapezoidal detection areas A1 and A2 is determined by the straight line L3 extending from the camera 1 toward the start position C1.
- the side b3 of the trapezoidal detection areas A1 and A2 is determined by the straight line L4 extending from the camera 1 toward the end position C2.
- the position of the lower side b4 of the trapezoidal detection areas A1 and A2 is determined by the distance d4.
- regions surrounded by the sides b1 to b4 are detection regions A1 and A2.
- the detection areas A1 and A2 are, as shown in FIG. 25 (b), square (rectangular) in real space on the rear side from the host vehicle V.
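- As a geometric sketch, the corner points of such a detection area can be constructed from the distances as follows; the coordinate frame, the camera position argument, and the use of a lateral `width` in place of d4 are assumptions made for illustration.

```python
import numpy as np

def detection_area_corners(d1, d2, d3, width, camera_pos=(0.0, 0.0)):
    """Corners of a trapezoidal detection area in ground-plane coordinates
    (x: lateral offset from the host vehicle, y: distance rearward).

    The upper side b1 lies on the ground line at lateral offset d1, starts at
    C1 (d2 rearward) and ends at C2 (d2 + d3 rearward); the sides b2 and b3
    follow the rays from the camera through C1 and C2 (lines L3 and L4),
    extended outward by `width` to reach the lower side b4.
    """
    cam = np.asarray(camera_pos, dtype=float)
    c1 = np.array([d1, d2])            # start position C1 of the upper side b1
    c2 = np.array([d1, d2 + d3])       # end position C2 of the upper side b1
    p1 = c1 + (c1 - cam) / np.linalg.norm(c1 - cam) * width  # along line L3
    p2 = c2 + (c2 - cam) / np.linalg.norm(c2 - cam) * width  # along line L4
    return np.array([c1, c2, p2, p1])  # b1 from C1 to C2, then the lower side b4
```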
- the viewpoint conversion unit 31 inputs captured image data of a predetermined area obtained by imaging by the camera 1.
- The viewpoint conversion unit 31 performs viewpoint conversion processing on the input captured image data to obtain bird's-eye view image data in a state of being viewed from a bird's eye.
- the state of being viewed as a bird's eye is a state viewed from the viewpoint of a virtual camera looking down from above, for example, vertically downward (or slightly obliquely downward).
- This viewpoint conversion process can be realized, for example, by the technology described in Japanese Patent Application Laid-Open No. 2008-219063.
- the luminance difference calculation unit 35 calculates the luminance difference with respect to the bird's-eye view image data whose viewpoint is converted by the viewpoint conversion unit 31 in order to detect an edge of a three-dimensional object included in the bird's-eye view image.
- the luminance difference calculation unit 35 calculates, for each of a plurality of positions along a vertical imaginary line extending in the vertical direction in real space, the luminance difference between two pixels in the vicinity of each position.
- the luminance difference calculation unit 35 can calculate the luminance difference by either a method of setting only one vertical imaginary line extending in the vertical direction in real space or a method of setting two vertical imaginary lines.
- Specifically, with respect to the viewpoint-converted bird's-eye view image, the luminance difference calculation unit 35 sets a first vertical imaginary line corresponding to a line segment extending in the vertical direction in real space, and a second vertical imaginary line, different from the first, corresponding to a line segment extending in the vertical direction in real space.
- the brightness difference calculation unit 35 continuously obtains the brightness difference between the point on the first vertical imaginary line and the point on the second vertical imaginary line along the first vertical imaginary line and the second vertical imaginary line.
- To describe the specific method, the luminance difference calculation unit 35 sets a first vertical imaginary line La (hereinafter referred to as the attention line La) that corresponds to a line segment extending in the vertical direction in real space and passes through the detection area A1, as shown in FIG. 26A. Further, unlike the attention line La, the luminance difference calculation unit 35 sets a second vertical imaginary line Lr (hereinafter referred to as the reference line Lr) that corresponds to a line segment extending in the vertical direction in real space and passes through the detection area A1.
- the reference line Lr is set at a position separated from the attention line La by a predetermined distance in real space.
- a line corresponding to a line segment extending in the vertical direction in real space is a line that radially spreads from the position Ps of the camera 1 in a bird's-eye view image.
- the radially extending line is a line along the direction in which the three-dimensional object falls when converted to bird's-eye view.
- the luminance difference calculation unit 35 sets an attention point Pa (a point on the first vertical imaginary line) on the attention line La. Further, the luminance difference calculation unit 35 sets a reference point Pr (a point on the second vertical imaginary line) on the reference line Lr.
- the attention line La, the attention point Pa, the reference line Lr, and the reference point Pr have the relationship shown in FIG. 26B in real space.
- That is, the attention line La and the reference line Lr are lines extending in the vertical direction in real space, and the attention point Pa and the reference point Pr are set at substantially the same height in real space. The attention point Pa and the reference point Pr do not necessarily have to be at exactly the same height; an error small enough that the attention point Pa and the reference point Pr can be regarded as being at the same height is allowed.
- the luminance difference calculation unit 35 obtains the luminance difference between the attention point Pa and the reference point Pr. If the luminance difference between the attention point Pa and the reference point Pr is large, it is considered that an edge exists between the attention point Pa and the reference point Pr. Therefore, the edge line detection unit 36 illustrated in FIG. 15 detects an edge line based on the luminance difference between the attention point Pa and the reference point Pr.
- FIG. 27 is a diagram showing the detailed operation of the luminance difference calculation unit 35; FIG. 27(a) shows a bird's-eye view image in the bird's-eye view state, and FIG. 27(b) is an enlarged view of a portion B1 of the bird's-eye view image shown in FIG. 27(a).
- Although only the detection area A1 is illustrated and described with reference to FIG. 27, the luminance difference is calculated for the detection area A2 by the same procedure.
- When the other vehicle VX appears in the image captured by the camera 1, the other vehicle VX appears in the detection area A1 in the bird's-eye view image, as shown in FIG. 27(a).
- an attention line La is set on a rubber portion of a tire of another vehicle VX on a bird's-eye view image.
- the luminance difference calculation unit 35 first sets the reference line Lr.
- the reference line Lr is set along the vertical direction at a position separated by a predetermined distance in real space from the attention line La.
- the reference line Lr is set at a position 10 cm away from the attention line La in real space.
- the reference line Lr is set, for example, on the wheel of the tire of the other vehicle VX which is separated by 10 cm from the rubber of the tire of the other vehicle VX on the bird's-eye view image.
- the luminance difference calculation unit 35 sets a plurality of attention points Pa1 to PaN on the attention line La.
- (Hereinafter, the notation "attention point Pai" is used when indicating an arbitrary one of the attention points.)
- the number of attention points Pa set on the attention line La may be arbitrary. In the following description, it is assumed that N attention points Pa are set on the attention line La.
- The luminance difference calculation unit 35 sets the reference points Pr1 to PrN so that each has the same height in real space as the corresponding attention point Pa1 to PaN. Then, the luminance difference calculation unit 35 calculates the luminance difference between each attention point Pa and the reference point Pr at the same height. Thereby, the luminance difference calculation unit 35 calculates the luminance difference of two pixels at each of a plurality of positions (1 to N) along the vertical imaginary line extending in the vertical direction in real space. For example, the luminance difference calculation unit 35 calculates the luminance difference between the first attention point Pa1 and the first reference point Pr1, and the luminance difference between the second attention point Pa2 and the second reference point Pr2.
- the luminance difference calculation unit 35 continuously obtains the luminance difference along the attention line La and the reference line Lr. That is, the luminance difference calculation unit 35 sequentially obtains the luminance differences between the third to Nth attention points Pa3 to PaN and the third to Nth reference points Pr3 to PrN.
- the luminance difference calculation unit 35 repeatedly executes processing such as setting of the reference line Lr, setting of the attention point Pa and the reference point Pr, and calculation of the luminance difference while shifting the attention line La in the detection area A1. That is, the luminance difference calculation unit 35 repeatedly executes the above process while changing the positions of the attention line La and the reference line Lr by the same distance in the extending direction of the ground line L1 in real space.
- That is, the luminance difference calculation unit 35, for example, sets the line that was the reference line Lr in the previous process as the new attention line La, sets a new reference line Lr with respect to this attention line La, and sequentially obtains the luminance differences.
- the edge line detection unit 36 detects an edge line from the continuous luminance difference calculated by the luminance difference calculation unit 35.
- the luminance difference is small because the first attention point Pa1 and the first reference point Pr1 are located in the same tire portion.
- the second to sixth attention points Pa2 to Pa6 are located in the rubber portion of the tire, and the second to sixth reference points Pr2 to Pr6 are located in the wheel portion of the tire. Therefore, the luminance difference between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6 becomes large.
- Accordingly, the edge line detection unit 36 can detect that an edge line exists between the second to sixth attention points Pa2 to Pa6, which have a large luminance difference, and the second to sixth reference points Pr2 to Pr6.
- Specifically, when detecting an edge line, the edge line detection unit 36 first assigns an attribute to the i-th attention point Pai from the luminance difference between the i-th attention point Pai (coordinates (xi, yi)) and the i-th reference point Pri (coordinates (xi', yi')), in accordance with Equation 1 below.
- (Equation 1) s(xi, yi) = 1 when I(xi, yi) > I(xi', yi') + t; s(xi, yi) = −1 when I(xi, yi) < I(xi', yi') − t; s(xi, yi) = 0 otherwise.
- In Equation 1, t indicates a threshold, I(xi, yi) indicates the luminance value of the i-th attention point Pai, and I(xi', yi') indicates the luminance value of the i-th reference point Pri.
- According to Equation 1, when the luminance value of the attention point Pai is higher than the luminance value of the reference point Pri plus the threshold t, the attribute s(xi, yi) of the attention point Pai is "1"; when the luminance value of the attention point Pai is lower than the luminance value of the reference point Pri minus the threshold t, the attribute s(xi, yi) of the attention point Pai is "−1"; otherwise, the attribute s(xi, yi) is "0".
- the threshold value t may be set in advance and may be changed in accordance with a control command issued by the control unit 39 shown in FIG. 15, but the details thereof will be described later.
- Next, the edge line detection unit 36 determines whether the attention line La is an edge line from the continuity c(xi, yi) of the attributes s along the attention line La, based on Equation 2 below.
- (Equation 2) c(xi, yi) = 1 when s(xi, yi) = s(xi+1, yi+1) (excluding the case where both are 0); c(xi, yi) = 0 otherwise.
- That is, the continuity c(xi, yi) is "1" when the attribute s(xi, yi) of the attention point Pai and the attribute s(xi+1, yi+1) of the adjacent attention point Pai+1 are the same, and "0" when they are not.
- the edge line detection unit 36 obtains the sum of the continuity c of all the attention points Pa on the attention line La.
- the edge line detection unit 36 normalizes the continuity c by dividing the sum of the obtained continuity c by the number N of the attention points Pa.
- When the normalized value exceeds the threshold θ, the edge line detection unit 36 determines that the attention line La is an edge line.
- the threshold value ⁇ is a value set in advance by experiments or the like.
- the threshold value ⁇ may be set in advance, or may be changed in accordance with a control command according to the determination result of the virtual image of the control unit 39 described later.
- In other words, the edge line detection unit 36 determines whether the attention line La is an edge line based on Equation 3 below, and makes this determination for all the attention lines La drawn on the detection area A1. (Equation 3) Σc(xi, yi) / N > θ
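- Equations 1 to 3 combine into a short test per attention line; `t` and `theta` are the thresholds of the same names with assumed default values, and the two luminance arrays are assumed to hold the N attention points and the N reference points at matching heights.

```python
import numpy as np

def is_edge_line(attention_lum, reference_lum, t=10.0, theta=0.7):
    """Attribute s (Equation 1), continuity c (Equation 2), and the
    normalized-sum test (Equation 3) for one attention line La."""
    a = attention_lum.astype(float)
    r = reference_lum.astype(float)
    s = np.zeros(len(a))
    s[a > r + t] = 1.0                 # Equation 1
    s[a < r - t] = -1.0
    # Equation 2: adjacent attributes equal and non-zero count as continuous.
    c = (s[:-1] == s[1:]) & (s[:-1] != 0)
    return c.sum() / len(a) > theta    # Equation 3: Σc / N > θ
```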
- the three-dimensional object detection unit 37 detects a three-dimensional object based on the amount of edge lines detected by the edge line detection unit 36.
- the on-vehicle ambient environment recognition apparatus 100 detects an edge line extending in the vertical direction in real space. The fact that many edge lines extending in the vertical direction are detected means that there is a high possibility that three-dimensional objects exist in the detection areas A1 and A2.
- the three-dimensional object detection unit 37 detects a three-dimensional object based on the amount of edge lines detected by the edge line detection unit 36. Furthermore, prior to detecting a three-dimensional object, the three-dimensional object detection unit 37 determines whether the edge line detected by the edge line detection unit 36 is correct.
- the three-dimensional object detection unit 37 determines whether or not the change in luminance along the edge line of the bird's-eye view image on the edge line is larger than a predetermined threshold. If the brightness change of the bird's-eye view image on the edge line is larger than the threshold value, it is determined that the edge line is detected due to an erroneous determination. On the other hand, when the luminance change of the bird's-eye view image on the edge line is not larger than the threshold, it is determined that the edge line is correct.
- the threshold is a value set in advance by experiment or the like.
- FIG. 28 shows the luminance distribution of the edge line
- FIG. 28(a) shows the edge line and the luminance distribution when another vehicle VX exists as a three-dimensional object in the detection area A1, and FIG. 28(b) shows the edge line and the luminance distribution when no three-dimensional object exists in the detection area A1.
- the attention line La set in the tire rubber portion of the other vehicle VX in the bird's-eye view image is determined to be an edge line.
- the luminance change of the bird's-eye view image on the attention line La is gentle. This is because the tire of the other vehicle VX is stretched in the bird's-eye view image by the viewpoint conversion of the image captured by the camera 1 into the bird's-eye view image.
- On the other hand, assume that, as in FIG. 28(b), the attention line La set in the white character portion "50" drawn on the road surface in the bird's-eye view image has been erroneously determined to be an edge line.
- the change in luminance of the bird's-eye view image on the attention line La has a large undulation. This is because on the edge line, a portion with high luminance in white characters and a portion with low luminance such as the road surface are mixed.
- Utilizing the difference in the luminance distribution on the attention line La described above, the three-dimensional object detection unit 37 determines whether an edge line has been detected due to an erroneous determination.
- The three-dimensional object detection unit 37 determines that an edge line has been detected by erroneous determination when the luminance change along the edge line is larger than a predetermined threshold, and does not use that edge line for the detection of a three-dimensional object.
- Thereby, white characters such as "50" on the road surface, weeds on the road shoulder, and the like are prevented from being determined as edge lines, and a decrease in the detection accuracy of the three-dimensional object is suppressed.
- the three-dimensional object detection unit 37 calculates the luminance change of the edge line according to any one of the following expressions 4 and 5.
- the change in luminance of the edge line corresponds to the evaluation value in the vertical direction in real space.
- (Equation 4) Luminance change evaluation value = Σ[{I(xi, yi) − I(xi+1, yi+1)}²] — Equation 4 evaluates the luminance distribution by the total sum of the squares of the differences between the i-th luminance value I(xi, yi) on the attention line La and the adjacent (i+1)-th luminance value I(xi+1, yi+1).
- (Equation 5) Luminance change evaluation value = Σ|I(xi, yi) − I(xi+1, yi+1)| — Equation 5 evaluates the luminance distribution by the total sum of the absolute values of the differences between the i-th luminance value I(xi, yi) on the attention line La and the adjacent (i+1)-th luminance value I(xi+1, yi+1).
- Further, instead of Equation 5, the luminance change may be evaluated by binarizing the differences between adjacent luminance values with a threshold t2 and summing the binarized attributes b, as in Equation 6 below.
- (Equation 6) Luminance change evaluation value = Σb(xi, yi), where b(xi, yi) = 1 when |I(xi, yi) − I(xi+1, yi+1)| > t2, and b(xi, yi) = 0 otherwise.
- That is, when the absolute value of the luminance difference between adjacent luminance values on the attention line La is larger than the threshold t2, the attribute b(xi, yi) of the attention point Pa(xi, yi) is "1"; otherwise, the attribute b(xi, yi) of the attention point Pai is "0".
- the threshold value t2 is preset by an experiment or the like to determine that the attention line La is not on the same three-dimensional object. Then, the three-dimensional object detection unit 37 adds up the attributes b for all the attention points Pa on the attention line La to obtain an evaluation value in the vertical equivalent direction, and determines whether the edge line is correct.
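- The plausibility check of Equations 4 to 6 can be sketched as follows; `limit` stands for the experimentally set threshold, and the choice between the squared form and the binarized form is controlled by whether `t2` is supplied.

```python
import numpy as np

def edge_line_is_plausible(edge_lum, limit, t2=None):
    """Reject edge lines whose luminance change along the line is too large
    (road paint, weeds); a small change suggests a genuine stretched object."""
    d = np.diff(edge_lum.astype(float))
    if t2 is None:
        change = (d ** 2).sum()                 # Equation 4 (squared differences)
    else:
        change = float((np.abs(d) > t2).sum())  # Equation 6 (binarized by t2)
    return change <= limit
```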
- FIGS. 29 and 30 are flowcharts showing the details of the three-dimensional object detection method according to this embodiment.
- the process for the detection area A1 will be described, but the same process is performed for the detection area A2.
- First, in step S21, the camera 1 captures an image of the predetermined area specified by the angle of view a and the mounting position.
- Next, in step S22, the viewpoint conversion unit 31 inputs the captured image data obtained by the camera 1 in step S21, performs viewpoint conversion, and generates bird's-eye view image data.
- Next, in step S23, the luminance difference calculation unit 35 sets the attention line La on the detection area A1. At this time, the luminance difference calculation unit 35 sets, as the attention line La, a line corresponding to a line extending in the vertical direction in real space.
- Next, in step S24, the luminance difference calculation unit 35 sets the reference line Lr on the detection area A1. At this time, the luminance difference calculation unit 35 sets, as the reference line Lr, a line that corresponds to a line segment extending in the vertical direction in real space and is separated from the attention line La by a predetermined distance in real space.
- Next, in step S25, the luminance difference calculation unit 35 sets a plurality of attention points Pa on the attention line La. At this time, the luminance difference calculation unit 35 sets a number of attention points Pa that will not cause a problem during edge detection by the edge line detection unit 36. Further, in step S26, the luminance difference calculation unit 35 sets the reference points Pr so that each attention point Pa and the corresponding reference point Pr are at substantially the same height in real space. The attention point Pa and the reference point Pr thereby line up in a substantially horizontal direction, which makes it easier to detect an edge line extending in the vertical direction in real space.
- Next, in step S27, the luminance difference calculation unit 35 calculates the luminance difference between the attention point Pa and the reference point Pr that are at the same height in real space.
- the edge line detection unit 36 calculates the attribute s of each attention point Pa according to the above-described Equation 1.
- Next, in step S28, the edge line detection unit 36 calculates the continuity c of the attributes s of the attention points Pa in accordance with Equation 2 above.
- Next, in step S29, the edge line detection unit 36 determines whether the value obtained by normalizing the sum of the continuity c is larger than the threshold θ, in accordance with Equation 3 above.
- When it is determined that the normalized value is larger than the threshold θ (S29: YES), the edge line detection unit 36 detects the attention line La as an edge line in step S30, and the process proceeds to step S31. When it is determined that the normalized value is not larger than the threshold θ (S29: NO), the edge line detection unit 36 does not detect the attention line La as an edge line, and the process proceeds to step S31.
- the threshold value ⁇ can be set in advance, but can be changed by the control unit 39 in accordance with a control command.
- In step S31, the luminance difference calculation unit 35 determines whether the processing of steps S23 to S30 has been executed for all attention lines La that can be set on the detection area A1. When it is determined that the processing has not been performed for all attention lines La (S31: NO), the process returns to step S23, a new attention line La is set, and the processing up to step S31 is repeated. On the other hand, when it is determined that the processing has been performed for all attention lines La (S31: YES), the process proceeds to step S32 in FIG. 30.
- In step S32 of FIG. 30, the three-dimensional object detection unit 37 calculates, for each edge line detected in step S30 of FIG. 29, the luminance change along that edge line.
- the three-dimensional object detection unit 37 calculates the luminance change of the edge line according to any one of the expressions 4, 5 and 6 described above.
- Next, in step S33, the three-dimensional object detection unit 37 excludes edge lines whose luminance change is larger than a predetermined threshold. That is, an edge line with a large luminance change is determined not to be a correct edge line, and is not used for the detection of a three-dimensional object.
- the predetermined threshold value is a value set based on a change in luminance generated by a character on the road surface, a weed on the road shoulder, and the like, which is obtained in advance by experiments and the like.
- Next, in step S34, the three-dimensional object detection unit 37 determines whether the amount of edge lines is equal to or greater than the second threshold β.
- the second threshold value ⁇ may be obtained in advance by experiment or the like and set, and may be changed in accordance with a control command issued by the control unit 39 shown in FIG. 15, but the details thereof will be described later. For example, when a four-wheeled vehicle is set as a three-dimensional object to be detected, the second threshold value ⁇ is set in advance based on the number of edge lines of the four-wheeled vehicle that has appeared in the detection area A1 by experiment or the like.
- When it is determined that the amount of edge lines is equal to or greater than the second threshold β (S34: YES), the three-dimensional object detection unit 37 detects in step S35 that a three-dimensional object exists in the detection area A1. On the other hand, when it is determined that the amount of edge lines is not equal to or greater than the second threshold β (S34: NO), the three-dimensional object detection unit 37 determines that no three-dimensional object exists in the detection area A1. Thereafter, the processing shown in FIGS. 29 and 30 ends.
- The detected three-dimensional object may be determined to be another vehicle VX traveling in the adjacent lane next to the lane in which the host vehicle V travels, or this determination may be made taking into account the relative speed of the detected three-dimensional object with respect to the host vehicle V.
- the second threshold value ⁇ can be set in advance, but can be changed to the control unit 39 according to a control command.
- As described in detail above, according to the method of detecting a three-dimensional object using edge information of this embodiment, a vertical imaginary line is set with respect to the bird's-eye view image as a line segment extending in the vertical direction in real space. Then, for each of a plurality of positions along the vertical imaginary line, the luminance difference between two pixels in the vicinity of each position is calculated, and the presence or absence of a three-dimensional object can be determined based on the continuity of the luminance differences.
- Specifically, the attention line La corresponding to a line segment extending in the vertical direction in real space and the reference line Lr different from the attention line La are set for the detection areas A1 and A2 in the bird's-eye view image. Then, the luminance difference between the attention point Pa on the attention line La and the reference point Pr on the reference line Lr is obtained continuously along the attention line La and the reference line Lr. In this way, by continuously obtaining the luminance difference between points, the luminance difference between the attention line La and the reference line Lr is obtained. When the luminance difference between the attention line La and the reference line Lr is high, there is a high possibility that an edge of a three-dimensional object exists at the set location of the attention line La.
- a three-dimensional object can be detected based on the continuous luminance difference.
- Since the vertical imaginary lines correspond to line segments extending in the vertical direction in real space, the detection process for the three-dimensional object is not affected even if the three-dimensional object is stretched according to its height from the road surface by the conversion to the bird's-eye view image. Therefore, according to the method of this embodiment, the detection accuracy of the three-dimensional object can be improved.
- Further, in this example, the luminance difference between two points at substantially the same height near the vertical imaginary line is obtained. Specifically, since the luminance difference is obtained from the attention point Pa on the attention line La and the reference point Pr on the reference line Lr, which are at substantially the same height in real space, a luminance difference in the case where an edge extending in the vertical direction exists can be detected clearly.
- FIG. 31 is a view showing an example of an image for explaining the processing of the edge line detection unit 36.
- This image example shows two adjacent patterns: a first stripe pattern 101 in which high-luminance areas and low-luminance areas are repeated, and a second stripe pattern 102 in which low-luminance areas and high-luminance areas are repeated. In this image example, the high-luminance areas of the first stripe pattern 101 are adjacent to the low-luminance areas of the second stripe pattern 102, and the low-luminance areas of the first stripe pattern 101 are adjacent to the high-luminance areas of the second stripe pattern 102. The portion 103 located at the boundary between the first stripe pattern 101 and the second stripe pattern 102 tends not to be perceived as an edge by human senses.
- In contrast, if edges were detected based only on the luminance difference, the portion 103 would be recognized as an edge.
- However, the edge line detection unit 36 determines that the portion 103 is an edge line only when, in addition to the luminance difference, there is continuity in the attribute of the luminance difference. It is therefore possible to suppress the erroneous determination in which a portion 103 that is not perceived as an edge line by human senses is recognized as an edge line, and edge detection in accordance with human perception can be performed.
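- Continuing the illustration above (a sketch assuming the +1/0/-1 attributes of the previous example and a hypothetical normalized threshold), the continuity check could look like this:

```python
def is_edge_line(attrs, theta=0.7):
    """Judge an attention line to be an edge line only when the attribute
    persists between consecutive attention points (continuity c = 1 where
    the non-zero attribute is unchanged, 0 where it changes) and the
    normalized sum of the continuity values exceeds the threshold theta."""
    if len(attrs) < 2:
        return False
    c = [1 if a == b and a != 0 else 0 for a, b in zip(attrs, attrs[1:])]
    return sum(c) / len(c) > theta
```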
- Furthermore, when the change in luminance along an edge line detected by the edge line detection unit 36 is larger than a predetermined threshold value, it is determined that the edge line has been detected due to an erroneous determination.
- When the captured image is converted into a bird's-eye view image, a three-dimensional object included in the captured image tends to appear in the bird's-eye view image in a stretched state. For example, when the tire of the other vehicle VX is stretched as described above, the luminance change of the bird's-eye view image in the stretched direction tends to be small because a single portion of the tire is stretched.
- In contrast, when a character drawn on the road surface or the like is erroneously determined to be an edge line, the bird's-eye view image contains a mixture of high-luminance regions such as the character portion and low-luminance regions such as the road surface portion. In this case, the luminance change in the stretched direction tends to be large. Therefore, by examining the luminance change of the bird's-eye view image along the edge line as in this example, an edge line detected by erroneous determination can be recognized, and the detection accuracy of the three-dimensional object can be enhanced.
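- The rejection of erroneously detected edge lines described above could be sketched as follows; the squared-difference measure and the threshold value are illustrative assumptions rather than a formula prescribed here.

```python
import numpy as np

def keep_edge_line(birdseye, line_pts, max_luminance_change=40.0):
    """Discard an edge line whose luminance varies strongly along its own
    direction: a stretched tire edge stays nearly uniform, whereas a
    character painted on the road mixes bright and dark pixels. Returns
    True when the line should be kept as a valid edge line."""
    lum = np.array([int(birdseye[y, x]) for y, x in line_pts], dtype=float)
    if lum.size < 2:
        return True
    change = np.sum(np.diff(lum) ** 2) / (lum.size - 1)
    return change <= max_luminance_change
```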
- The other vehicle recognition unit 204b in the on-vehicle ambient environment recognition device 100 of this example includes a three-dimensional object determination unit 34, a virtual image determination unit 38, and a control unit 39.
- The three-dimensional object determination unit 34 makes a final determination as to whether the detected three-dimensional object is another vehicle VX present in the detection areas A1 and A2.
- The three-dimensional object detection unit 33 (or the three-dimensional object detection unit 37) detects a three-dimensional object in a manner that reflects the determination result of the virtual image determination unit 38 described later. From the result of texture analysis of the image corresponding to the detected three-dimensional object, the virtual image determination unit 38 determines whether the detected three-dimensional object is a virtual image in which an image of a building or the like is reflected in a water film or the like formed on the road surface. When the virtual image determination unit 38 determines that the image corresponding to the detected three-dimensional object is a virtual image, the control unit 39 outputs a control command for controlling each unit constituting the other vehicle recognition unit 204b (including the control unit 39) so that the determination that the detected three-dimensional object is another vehicle VX present in the detection areas A1 and A2 is suppressed.
- The three-dimensional object determination unit 34 of the present embodiment finally determines whether the three-dimensional object detected by the three-dimensional object detection units 33 and 37 is another vehicle VX present in the detection areas A1 and A2.
- When the three-dimensional object determination unit 34 determines that the detected three-dimensional object is another vehicle VX present in the detection areas A1 and A2, processing such as notification to the occupant is executed.
- The three-dimensional object determination unit 34 can suppress the determination that the detected three-dimensional object is the other vehicle VX in accordance with a control command from the control unit 39.
- When suppressing the determination that the detected three-dimensional object is the other vehicle VX, the control unit 39 sends a control command to that effect to the three-dimensional object determination unit 34.
- In response to this control command, the three-dimensional object determination unit 34 stops the determination processing of the three-dimensional object, or determines that the detected three-dimensional object is not the other vehicle VX, that is, that no other vehicle VX exists in the detection areas A1 and A2.
- When no control command is obtained, it can be determined that the three-dimensional object detected by the three-dimensional object detection units 33 and 37 is the other vehicle VX present in the detection areas A1 and A2.
- First, the virtual image determination unit 38 of the present embodiment can determine, based on the differential waveform information generated by the three-dimensional object detection unit 33, whether the image of the detected three-dimensional object is a virtual image.
- When the luminance difference in the image area of the image information corresponding to the three-dimensional object, particularly the image information corresponding to the contour of the three-dimensional object along the vertical direction, is less than a predetermined value, the virtual image determination unit 38 determines that the three-dimensional object detected in the area including that image area is a virtual image.
- Specifically, among the determination lines (La to Lf in FIG. 17) along the direction in which the three-dimensional object falls when the bird's-eye view image is obtained by viewpoint conversion, the virtual image determination unit 38 identifies one or more reference determination lines (for example, La) on which the frequency counted in the differential waveform information is equal to or greater than a predetermined value. It then determines whether the luminance difference between the luminance of the image area on the reference determination line (La) and the luminance of the image areas on one or more comparison determination lines (Lb, Lc, Ld, Le) adjacent to the reference determination line is less than a predetermined value. If the luminance difference is less than the predetermined value, it is determined that the three-dimensional object detected in the area including these image areas is a virtual image.
- The comparison of the luminance difference can be performed by comparing the luminance of a certain pixel on the reference determination line (La), or of the image area including that pixel, with the luminance of a pixel on the comparison determination lines (Lb, Lc, Ld, Le), or of the image area including that pixel.
- Alternatively, the luminance difference can be determined based on the number of pixels indicating a predetermined difference in the differential waveform information shown in FIG. 17, or on the frequency-distributed values thereof.
- When there are a predetermined number or more of comparison determination lines (Lb, Lc, Ld, Le) that include an image area whose luminance difference from the luminance of the image area on the reference determination line (La) is less than a predetermined value, the virtual image determination unit 38 can determine that the three-dimensional object detected in the area including these image areas is a virtual image.
- In this way, whether or not the image is a virtual image can be accurately determined by verifying the presence or absence of contrast over a wide range.
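- A minimal sketch of this low-contrast test might look as follows; the function name and threshold values are hypothetical, and the mean luminance of each determination line is assumed to be precomputed.

```python
def looks_like_virtual_image(line_luminances, ref_idx, lum_thresh=15,
                             min_low_contrast_lines=2):
    """line_luminances: mean luminance of the image area on each
    determination line (La, Lb, ...); ref_idx: index of the reference
    determination line whose counted frequency reaches the predetermined
    value. If enough comparison lines differ from the reference line by
    less than lum_thresh, the region lacks contrast and the detected
    object is judged to be a virtual image."""
    ref = line_luminances[ref_idx]
    low_contrast = sum(
        1 for i, lum in enumerate(line_luminances)
        if i != ref_idx and abs(lum - ref) < lum_thresh
    )
    return low_contrast >= min_low_contrast_lines
```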
- FIG. 32 is a view showing a state where a water puddle (water film) is formed on the road surface in the detection area A2 and an image of a surrounding structure is reflected on the surface.
- FIGS. 33 and 34 show the differential waveform information DWt1 generated from the bird's-eye view image of the other vehicle VX existing in the detection area A1, and the differential waveform information DWt2 generated from the bird's-eye view image of the surrounding structure reflected in the water film formed in the detection area A2. As shown on the left side of FIGS. 33 and 34, the virtual image in which the surrounding structure is reflected in the water film on the road surface exhibits low contrast. By using this feature of low contrast, it can be determined whether the image corresponding to the detected three-dimensional object is a real image or a virtual image.
- Similarly, the virtual image determination unit 38 of the present embodiment can determine whether the image of the three-dimensional object to be detected is a virtual image based on the edge information generated by the three-dimensional object detection unit 37. Specifically, among the determination lines (La to Ld, Lr) along the direction in which the three-dimensional object falls when the bird's-eye view image is obtained by viewpoint conversion, the virtual image determination unit 38 identifies one or more reference determination lines (for example, Lr) on which the luminance difference between mutually adjacent image areas is equal to or greater than a predetermined value. It then determines whether the luminance difference between the luminance of the image area on the reference determination line (Lr) and the luminance of the image areas on one or more comparison determination lines (La to Ld) adjacent to the reference determination line (Lr) is less than a predetermined value. If the luminance difference is less than the predetermined value, the three-dimensional object detected in the area including these image areas is determined to be a virtual image.
- Further, when there are a predetermined number or more of comparison determination lines (Lb to Lc) that include an image area whose luminance difference from the luminance of the image area on the reference determination line (Lr) is less than a predetermined value, the virtual image determination unit 38 can determine that the three-dimensional object detected in the area including these image areas is a virtual image. As described above, whether or not the image is a virtual image can be accurately determined by verifying the presence or absence of contrast over a wide range.
- In other words, the virtual image determination unit 38 of this embodiment determines, based on the contrast of the image information of the detection areas A1 and A2, whether the image information corresponding to the detected three-dimensional object is a virtual image or a real image.
- The contrast of the image information is calculated based on the feature amounts of the texture of the image information of the detection areas A1 and A2.
- As the texture analysis method, methods known at the time of filing for extracting, evaluating, and quantifying the texture of image information can be applied as appropriate.
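- As a purely illustrative example of such quantification (no specific measure is prescribed here), the contrast of a detection-area patch could be expressed as a normalized standard deviation of luminance:

```python
import numpy as np

def texture_contrast(region):
    """One simple contrast measure for a detection-area image patch: the
    standard deviation of luminance normalized by its mean (coefficient
    of variation). Low values suggest the flat, washed-out texture that
    is typical of a reflection in a water film on the road surface."""
    region = np.asarray(region, dtype=np.float64)
    mean = region.mean()
    return float(region.std() / mean) if mean > 0 else 0.0
```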
- Next, the control unit 39 will be described. When the virtual image determination unit 38 determines in the previous process that the three-dimensional object detected by the three-dimensional object detection unit 33 is a virtual image, the control unit 39 of the present embodiment can generate a control command to be executed in the next process by any one or more of the three-dimensional object detection units 33 and 37, the three-dimensional object determination unit 34, the virtual image determination unit 38, and the control unit 39 itself.
- The control command of the present embodiment is a command for controlling the operation of each unit so that the determination that the detected three-dimensional object is the other vehicle VX is suppressed. This is to prevent a virtual image of a surrounding structure reflected in a water film on the road surface from being erroneously determined to be the other vehicle VX. Since the other vehicle recognition unit 204b of this embodiment is a computer, control commands for the three-dimensional object detection process, the three-dimensional object determination process, and the virtual image determination process may be incorporated in the program of each process in advance, or may be sent out at the time of execution.
- The control command of the present embodiment may be a command to stop the process of determining the detected three-dimensional object to be another vehicle, or a command to cause the detected three-dimensional object to be determined not to be another vehicle. It may also be a command to reduce the sensitivity when detecting a three-dimensional object based on differential waveform information, a command to adjust the sensitivity when detecting a three-dimensional object based on edge information, or a command to adjust the predetermined value of the luminance difference used when determining whether an image is a virtual image.
- Hereinafter, the control commands output by the control unit 39 will be described. First, the control commands in the case of detecting a three-dimensional object based on differential waveform information will be described.
- As described above, the three-dimensional object detection unit 33 detects a three-dimensional object based on the differential waveform information and the first threshold value α.
- When the virtual image determination unit 38 determines that the detected three-dimensional object is a virtual image, the control unit 39 of the present embodiment generates a control command to increase the first threshold value α, and outputs it to the three-dimensional object detection unit 33.
- The first threshold value α is the threshold value for determining the peak of the differential waveform DWt in step S7 of FIG. 23 (see FIG. 17).
- The control unit 39 can also output to the three-dimensional object detection unit 33 a control command to increase the threshold value p regarding the difference in pixel values in the differential waveform information.
- When the control unit 39 determines that the image information corresponding to the three-dimensional object was judged to be a virtual image in the previous processing, it can judge that a water film is likely formed in the detection areas A1 and A2 and that there is a high possibility that surrounding structures are reflected in the image information of the detection areas A1 and A2. If a three-dimensional object were detected by the same method as usual, the virtual image reflected in the water film might be erroneously detected as a real image of another vehicle VX even though no other vehicle VX exists in the detection areas A1 and A2.
- For this reason, the threshold value regarding the difference in pixel values used when generating the differential waveform information is raised so that a three-dimensional object is less easily detected in the next processing.
- By raising the determination threshold value in this way, the detection sensitivity is adjusted so that another vehicle VX traveling in the lane adjacent to the traveling lane of the host vehicle V is less easily detected, and it is therefore possible to prevent a virtual image of a surrounding structure reflected in the water film from being erroneously detected as another vehicle VX traveling in the adjacent lane.
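- A minimal sketch of this threshold-raising control command is shown below; the scale factors are illustrative assumptions.

```python
def adjust_waveform_thresholds(was_virtual_image, alpha, p,
                               alpha_scale=1.5, p_scale=1.5):
    """When the previous cycle judged the detected object to be a virtual
    image (a likely water-film reflection), raise the first threshold
    alpha for the differential-waveform peak and the pixel-difference
    threshold p so that a three-dimensional object is harder to detect
    in the next cycle; otherwise leave the thresholds unchanged."""
    if was_virtual_image:
        return alpha * alpha_scale, p * p_scale
    return alpha, p
```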
- Further, when the virtual image determination unit 38 determines that the image information corresponding to the three-dimensional object is a virtual image, the control unit 39 of the present embodiment can output to the three-dimensional object detection unit 33 a control command to count the number of pixels indicating a predetermined difference on the difference image of the bird's-eye view image and to output a lowered frequency-distributed value.
- The value obtained by frequency distribution by counting the number of pixels indicating a predetermined difference on the difference image of the bird's-eye view image is the value on the vertical axis of the differential waveform DWt generated in step S5 of FIG. 23.
- When the control unit 39 determines that the three-dimensional object was judged to be a virtual image in the previous processing, it can determine that there is a high possibility that a water film is formed in the detection areas A1 and A2.
- For this reason, the frequency-distributed values of the differential waveform DWt are lowered so that the other vehicle VX is not erroneously detected in the detection areas A1 and A2 in the next processing.
- By lowering the output value in this way, the detection sensitivity is adjusted so that another vehicle VX traveling in the lane adjacent to the traveling lane of the host vehicle V is less easily detected, and erroneous detection of the virtual image formed on the water film as another vehicle VX traveling in the adjacent lane can be prevented.
- Next, the control commands in the case of detecting a three-dimensional object based on edge information will be described.
- When the virtual image determination unit 38 determines that the image information corresponding to the three-dimensional object is a virtual image, the control unit 39 according to the present embodiment outputs to the three-dimensional object detection unit 37 a control command to increase the predetermined threshold value regarding luminance used when detecting edge information.
- The predetermined threshold value regarding luminance used when detecting edge information is the threshold value θ for judging the value obtained by normalizing the sum of the continuity c of the attributes of each attention point Pa in step S29 of FIG. 29, or the second threshold value β for evaluating the amount of edge lines in step S34.
- When the control unit 39 determines that the three-dimensional object was judged to be a virtual image in the previous processing, it can determine that there is a high possibility that a water film is formed in the detection areas A1 and A2 and that surrounding structures are reflected in the water film. For this reason, the threshold value θ used when detecting an edge line, or the second threshold value β for evaluating the amount of edge lines, is raised so that a three-dimensional object is less easily detected in the next processing. By raising the determination threshold value in this way, the detection sensitivity is adjusted so that another vehicle VX traveling in the lane adjacent to the traveling lane of the host vehicle V is less easily detected, and it is therefore possible to prevent a virtual image of a surrounding structure reflected in the water film from being erroneously detected as another vehicle VX traveling in the adjacent lane.
- Further, when the virtual image determination unit 38 determines that the image information corresponding to the three-dimensional object is a virtual image, the control unit 39 of the present embodiment outputs to the three-dimensional object detection unit 37 a control command to lower the output amount of detected edge information.
- The amount of detected edge information is the value obtained by normalizing the sum of the continuity c of the attributes of the attention points Pa in step S29 of FIG. 29, or the amount of edge lines in step S34. If the control unit 39 determines that the three-dimensional object was judged to be a virtual image in the previous processing, it can determine that there is a high possibility that a surrounding structure is reflected in a water film such as a puddle.
- For this reason, the value obtained by normalizing the sum of the continuity c of the attributes of each attention point Pa, or the amount of edge lines, is lowered so that a three-dimensional object is less easily detected in the next processing.
- By lowering the output value in this way, the detection sensitivity is adjusted so that another vehicle VX traveling in the lane adjacent to the traveling lane of the host vehicle V is less easily detected, and it is possible to prevent erroneous detection of the virtual image of the reflected surrounding structure as another vehicle VX traveling in the adjacent lane.
- Furthermore, the control unit 39 can generate a control command to further increase the first threshold value α, the threshold value p, the second threshold value β, or the threshold value θ when the luminance of the detection areas A1 and A2 is equal to or higher than a predetermined value, and output it to the three-dimensional object detection units 33 and 37.
- The luminance of the detection areas A1 and A2 can be acquired from the captured image of the camera 1.
- When the luminance of the detection areas A1 and A2 is equal to or higher than a predetermined value and thus bright, it can be determined that there is a high possibility that a light-reflecting water film is formed in the detection areas A1 and A2.
- In such a case, the threshold value is raised so that another vehicle VX traveling in the lane adjacent to the traveling lane of the host vehicle V is less easily detected, and erroneous detection of the virtual image can thereby be prevented.
- Further, the control unit 39 acquires the moving speed of the host vehicle V from the vehicle speed sensor 5, and when the moving speed detected by the vehicle speed sensor 5 is less than a predetermined value, it can generate a control command to further increase the first threshold value α, the threshold value p, the second threshold value β, or the threshold value θ, and output it to the three-dimensional object detection units 33 and 37.
- When the moving speed of the host vehicle V is low, the threshold value is raised so that another vehicle VX traveling in the lane adjacent to the traveling lane of the host vehicle V is less easily detected.
- In the process of determining whether the detected three-dimensional object is another vehicle VX or a similar detection target, the control unit 39 of the present embodiment can also adjust the detection sensitivity so that another vehicle VX traveling in the lane adjacent to the traveling lane of the host vehicle V is less easily detected.
- Specifically, when the virtual image determination unit 38 determines that the detected three-dimensional object is a virtual image, the control unit 39 generates a control command to narrow the predetermined range used by the three-dimensional object detection units 33 and 37 to evaluate the relative movement speed, and outputs it to the three-dimensional object detection units 33 and 37.
- If it is determined that the three-dimensional object detected in the previous process is a virtual image, the three-dimensional object is an image reflected in a water film formed on the road surface, and it can be presumed that the three-dimensional object is a stationary object.
- In such a case, the predetermined range of relative movement speed used to determine whether the detected object is the other vehicle VX can be narrowed so that such a stationary object is not erroneously detected as the other vehicle VX.
- Specifically, the control unit 39 can generate a control command to narrow the predetermined range for evaluating the relative movement speed by raising the lower limit value of the predetermined range, which is indicated by a negative value.
- For example, the control unit 39 can raise the negative lower limit value of a predetermined range defined as -20 km/h to 100 km/h, redefining the range as, for example, -5 km/h to 100 km/h.
- Here, a relative movement speed indicated by a negative value is the speed at which the detected object moves rearward with respect to the traveling direction of the host vehicle V. As noted above, if the three-dimensional object detected in the previous process is determined to be a virtual image, it is an image reflected in a water film formed on the road surface and can be presumed to be a stationary object.
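- The narrowing of the relative-movement-speed range could be sketched as follows; the -20 km/h to 100 km/h range and the -5 km/h lower limit follow the example above, while the further-narrowed bound used for bright detection areas or low host-vehicle speed is an illustrative assumption.

```python
def relative_speed_range(was_virtual_image, bright_area=False,
                         low_ego_speed=False):
    """Return the (lower, upper) bounds in km/h of the relative movement
    speed accepted when deciding that a detected object is another
    vehicle. A stationary reflection appears to drift rearward, so
    raising the negative lower limit excludes it."""
    lower, upper = -20.0, 100.0
    if was_virtual_image:
        lower = -5.0
        if bright_area or low_ego_speed:
            lower = -2.0  # assumed further narrowing for these cases
    return lower, upper
```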
- When adjusting the threshold related to speed, the control unit 39 can generate a control command to further narrow the predetermined range for evaluating the relative movement speed when the luminance of the detection areas A1 and A2 is equal to or higher than a predetermined value, and output it to the three-dimensional object detection units 33 and 37.
- The luminance of the detection areas A1 and A2 can be acquired from the image information of the camera 1 as described above. When the luminance of the detection areas A1 and A2 is equal to or higher than a predetermined value and thus bright, it can be determined that there is a high possibility that a light-reflecting water film is formed in the detection areas A1 and A2.
- In such a case, by adjusting the detection sensitivity so that the other vehicle VX is less easily detected, it is possible to prevent erroneous detection of the virtual image of a surrounding structure reflected in the water film as another vehicle VX traveling in the adjacent lane.
- Further, the control unit 39 acquires the moving speed of the host vehicle V from the vehicle speed sensor 5, and when the moving speed detected by the vehicle speed sensor 5 is less than a predetermined value, it can generate a control command to further narrow the predetermined range for evaluating the relative movement speed and output it to the three-dimensional object detection units 33 and 37.
- When the moving speed of the host vehicle V is low, the discriminability of differences in the differential waveform information and in the edge information tends to decrease.
- For this reason, the predetermined range for evaluating the relative movement speed is further narrowed so that another vehicle VX traveling in the lane adjacent to the traveling lane of the host vehicle V is not erroneously detected.
- Next, with reference to FIGS. 35 to 39, operations of the control unit 39, the three-dimensional object determination unit 34, and the three-dimensional object detection units 33 and 37 that have received the control command will be described.
- The processing shown in FIGS. 35 to 39 is the current three-dimensional object detection processing performed after the previous three-dimensional object detection processing, using the result of the previous processing.
- First, in step S41 shown in FIG. 35, the virtual image determination unit 38 determines whether the three-dimensional object detected by the three-dimensional object detection unit 33 is a virtual image. Whether the three-dimensional object is a virtual image can be determined based on the contrast of the image information of the detected three-dimensional object; this determination can be performed based on the differential waveform information generated by the three-dimensional object detection unit 33 described above, or based on the edge information generated by the three-dimensional object detection unit 37.
- In step S42, the control unit 39 determines whether the detected three-dimensional object was judged to be a virtual image in the virtual image determination of step S41.
- When the detected three-dimensional object is a virtual image, the control unit 39 outputs a control command to each unit so that the determination that the detected three-dimensional object is the other vehicle VX is suppressed. As one example, the process proceeds to step S46, and the control unit 39 outputs to the three-dimensional object determination unit 34 a control command to stop the three-dimensional object detection process. As another example, the process proceeds to step S47, and the control unit 39 can also determine that the detected three-dimensional object is not the other vehicle VX.
- When the detected three-dimensional object is not a virtual image, the process proceeds to step S43, where three-dimensional object detection processing is performed.
- This three-dimensional object detection processing is performed by the above-described three-dimensional object detection unit 33 according to the process using differential waveform information of FIG. 23 or 24, or by the three-dimensional object detection unit 37 according to the process using edge information of FIG. 29 or 30.
- When a three-dimensional object is detected in the detection areas A1 and A2 by the three-dimensional object detection units 33 and 37 in step S43, the process proceeds to step S45, and it is determined that the detected three-dimensional object is another vehicle VX.
- When no three-dimensional object is detected, the process proceeds to step S47, and it is determined that no other vehicle VX exists in the detection areas A1 and A2.
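- The overall flow of FIG. 35 could be sketched as below, with callables standing in for the detection and determination units (the names and return conventions are assumptions):

```python
def detection_cycle(judge_virtual, detect_3d_object):
    """One pass of the flow of FIG. 35. Returns a tuple
    (other_vehicle_present, was_virtual)."""
    was_virtual = judge_virtual()        # steps S41 and S42
    if was_virtual:
        # Steps S46/S47: stop the determination process or decide that
        # the object is not another vehicle, suppressing the virtual image.
        return False, True
    detected = detect_3d_object()        # step S43
    return detected, False               # step S45 (yes) or S47 (no)
```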
- FIG. 36 shows another processing example.
- When it is determined that the detected three-dimensional object is a virtual image, the control unit 39 proceeds to step S51 and sends to the three-dimensional object detection units 33 and 37 a control command to raise one or more of the following: the threshold value p regarding the difference in pixel values used when generating differential waveform information, the first threshold value α used when judging a three-dimensional object from the differential waveform information, the threshold value θ used when generating edge information, and the second threshold value β used when judging a three-dimensional object from edge information.
- As described above, the first threshold value α is for determining the peak of the differential waveform DWt in step S7 of FIG. 23.
- The threshold value θ is a threshold value for judging the value obtained by normalizing the sum of the continuity c of the attributes of the attention points Pa in step S29 of FIG. 29, and the second threshold value β is a threshold value for evaluating the amount of edge lines in step S34. Note that, instead of raising a threshold value, the control unit 39 may generate a control command to lower the output value evaluated against that threshold value and output it to the three-dimensional object detection units 33 and 37.
- The other processes are the same as those shown in FIG. 35.
- As shown in FIG. 37, when it is determined in step S42 that the detected three-dimensional object is a virtual image, the process proceeds to step S52, where it is determined whether the luminance of the detection areas A1 and A2 is equal to or higher than a predetermined value. If the luminance of the detection areas A1 and A2 is equal to or higher than the predetermined value, the process may proceed to step S53, in which a control command is generated to further raise the threshold values of step S51 of FIG. 36. Note that, instead of raising the threshold values, the control unit 39 may generate a control command to further lower the output values evaluated against the threshold values, and output it to the three-dimensional object detection units 33 and 37.
- The other processes are the same as those shown in FIG. 36.
- As shown in FIG. 38, when it is determined in step S42 that the detected three-dimensional object is a virtual image, the process proceeds to step S54, where it is determined whether the moving speed of the host vehicle is less than a predetermined value. If the moving speed of the host vehicle is less than the predetermined value, the process proceeds to step S55, and a control command may be generated to further raise the threshold values of step S51 of FIG. 36.
- Alternatively, the control unit 39 may generate a control command to further lower the output values evaluated against the threshold values instead of raising them. The other processes are the same as those shown in FIG. 36.
- When lowering the output value, the control unit 39 outputs to the three-dimensional object detection unit 33 a control command to count the number of pixels indicating a predetermined difference on the difference image of the bird's-eye view image and to output a lowered frequency-distributed value.
- The value obtained by frequency distribution by counting the number of pixels indicating a predetermined difference on the difference image of the bird's-eye view image is the value on the vertical axis of the differential waveform DWt generated in step S5 of FIG. 23.
- Similarly, the control unit 39 can output to the three-dimensional object detection unit 37 a control command to output a lowered amount of detected edge information.
- The amount of detected edge information is the value obtained by normalizing the sum of the continuity c of the attributes of the attention points Pa in step S29 of FIG. 29, or the amount of edge lines in step S34.
- When the control unit 39 determines that the three-dimensional object detected in the previous process is a virtual image, it can determine that a water film is formed in the detection areas A1 and A2. Therefore, it can output to the three-dimensional object detection unit 37 a control command to lower the value obtained by normalizing the sum of the continuity c of the attributes of each attention point Pa, or the amount of edge lines, so that a three-dimensional object is less easily detected in the next process.
- FIG. 39 shows still another processing example. If it is determined in step S42 that the detected three-dimensional object is a virtual image, the control unit 39 proceeds to step S61, generates a control command to narrow the predetermined range for evaluating the relative movement speed, and outputs it to the three-dimensional object detection units 33 and 37. Incidentally, when the relative movement speed of the detected three-dimensional object with respect to the host vehicle is within the predetermined range, the three-dimensional object detection units 33 and 37 detect the three-dimensional object as another vehicle or a similar detection target and send the result to the three-dimensional object determination unit 34.
- In step S130 of FIG. 5, the other vehicle recognition unit 204b of the application execution unit 204 can execute the other vehicle recognition processing as described above.
- As described above, the on-vehicle ambient environment recognition device 100 recognizes another vehicle traveling around the host vehicle by means of the application execution unit 204 based on the captured image acquired by the camera 1, and detects the relative speed of the other vehicle with respect to the host vehicle (step S130). Further, the reflection determination unit 203 determines the presence or absence of reflection of a background object on the road surface based on the captured image (step S180). When it is determined in step S180 that there is a reflection, the alarm control unit 205 stops the alarm output signal to the alarm output unit 3 (step S200), thereby suppressing the alarm output by the alarm output unit 3.
- At this time, the degree of suppression of the alarm signal output is adjusted by the alarm suppression adjustment unit 206 based on the relative speed of the other vehicle detected in step S130 (step S160), and the alarm signal output is suppressed according to the adjusted degree of suppression. In this way, it is possible to prevent an alarm from being output at an incorrect timing due to a reflection of a background object on the road surface being erroneously detected as a vehicle.
- In step S160, the alarm suppression adjustment unit 206 can adjust the degree of suppression of the alarm signal output by changing, according to the relative speed of the other vehicle, the conditions under which the reflection determination unit 203 determines the presence or absence of background objects reflected on the road surface. Specifically, the degree of alarm suppression is adjusted by changing the reference value used when determining the presence or absence of background objects reflected on the road surface by comparing the feature amounts of the respective regions in step S180. That is, in the on-vehicle ambient environment recognition device 100, the background areas 34a to 34f and 36a to 36f and the reflection areas 35a to 35f and 37a to 37f are set by the area setting unit 201 on the captured image 30 acquired by the camera 1 (step S20).
- In step S180, the reflection determination unit 203 compares the images in the background areas 34a to 34f and 36a to 36f of the captured image 30 with the images in the reflection areas 35a to 35f and 37a to 37f of the captured image 30, and determines the presence or absence of reflection of background objects on the road surface based on the correlation between them.
- In step S160, this threshold is changed according to the relative speed of the other vehicle; more specifically, when the relative speed of the other vehicle is within the predetermined range, the degree of alarm suppression is adjusted by lowering the threshold. In this way, the degree of alarm suppression can be adjusted easily and reliably.
- Furthermore, the condition for determining the presence or absence of background objects may be relaxed in step S180 to facilitate alarm suppression.
- The alarm suppression adjustment unit 206 can also adjust the degree of alarm suppression by changing the conditions for calculating the feature amounts of the respective regions in step S170. That is, in the on-vehicle ambient environment recognition device 100, the feature amount calculation unit 202 detects edges satisfying predetermined detection conditions in the images in the background areas 34a to 34f and 36a to 36f and in the images in the reflection areas 35a to 35f and 37a to 37f, and calculates feature amounts corresponding to the detected edges for each of the background areas 34a to 34f and 36a to 36f and the reflection areas 35a to 35f and 37a to 37f (step S170).
- In step S180, the reflection determination unit 203 compares the feature amounts of the background areas 34a to 34f and 36a to 36f with the feature amounts of the reflection areas 35a to 35f and 37a to 37f to determine the presence or absence of reflection of background objects on the road surface.
- In step S160, this detection condition is changed according to the relative speed of the other vehicle; more specifically, when the relative speed of the other vehicle is within the predetermined range, the degree of alarm suppression is adjusted by lowering the luminance difference used as the edge detection condition. In this case as well, the degree of alarm suppression can be adjusted easily and reliably. Furthermore, the condition for determining the presence or absence of background objects may be relaxed in step S180 to facilitate alarm suppression.
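- A minimal sketch of this relative-speed-dependent relaxation of the reflection determination conditions follows; all numeric values are placeholders rather than values given in the disclosure.

```python
def reflection_determination_conditions(rel_speed_in_range,
                                        corr_thresh=0.6, edge_lum_diff=30.0):
    """Step S160 sketch: when the other vehicle's relative speed is within
    the predetermined range, lower the correlation threshold used in the
    reflection determination of step S180 and the luminance difference
    used as the edge detection condition of step S170, so that the
    'reflection present' result is obtained more easily."""
    if rel_speed_in_range:
        return corr_thresh * 0.8, edge_lum_diff * 0.8
    return corr_thresh, edge_lum_diff
```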
- In the first embodiment, an example was described in which the alarm control unit 205 suppresses the alarm output by stopping the output of the alarm output signal to the alarm output unit 3 when a notification of reflection is received from the reflection determination unit 203.
- In the present embodiment, an example will be described in which, when a notification of reflection is received from the reflection determination unit 203, the alarm output is suppressed by making it less likely that another vehicle is recognized in the other vehicle recognition process executed by the other vehicle recognition unit 204b of the application execution unit 204.
- The configuration of the on-vehicle ambient environment recognition device 100 according to the present embodiment and the control block diagram of the control unit 2 for suppressing the alarm at the time of road surface reflection are the same as those shown in the figures described above; therefore, their descriptions are omitted below.
- FIG. 40 is a flowchart of the processing executed for alarm suppression at the time of road surface reflection in the present embodiment. As with the flowchart of FIG. 5 described in the first embodiment, the process shown in this flowchart is performed at predetermined processing cycles in the control unit 2 while an application is being executed.
- In step S161, the control unit 2 causes the alarm suppression adjustment unit 206 to adjust the degree of alarm suppression.
- Here, when the relative speed of the other vehicle is within the predetermined range, the degree of alarm suppression is adjusted by any of the following methods (A), (B), and (C) so that alarm suppression is performed more easily than otherwise.
- (A) When it is determined that there is a reflection of a background object on the road surface, the other vehicle recognition conditions are changed for the other vehicle recognition process of step S130 from the next time onward. That is, when differential waveform information is acquired as the image information value based on the image in the detection area set in the captured image, and the other vehicle is recognized by executing detection of a three-dimensional object by the differential waveform information as described above based on it, the threshold value for determining whether a three-dimensional object exists from the differential waveform DWt, specifically, the value of the first threshold value α used in the determination of step S7 of FIG. 23, is increased.
- Alternatively, when edge information is acquired as the image information value based on the image in the detection area set in the captured image, and the other vehicle is recognized by executing detection of a three-dimensional object by edge information as described above based on it, the threshold value for determining whether an attention line is an edge line, specifically, the value of the threshold value θ in Expression 3, is increased.
- Alternatively, the condition for acquiring the image information value may be changed. That is, when differential waveform information is acquired as the image information value based on the image in the detection area set in the captured image, and detection of a three-dimensional object by the differential waveform information as described above is performed based on it, the threshold value for obtaining the difference image PDt used to generate the differential waveform DWt, specifically, the value of the threshold value p described above, is increased.
- Similarly, when edge information is acquired as the image information value based on the image in the detection area set in the captured image, and the other vehicle is recognized by executing detection of a three-dimensional object by edge information as described above based on it, the threshold value for assigning attributes to the attention points, specifically, the value of the threshold value t in Expression 1, is increased.
- In this way, the other vehicle recognition conditions can also be changed by adjusting these threshold values.
- By tightening the conditions for recognizing other vehicles in step S130 with the methods described above, the degree of suppression of the alarm output can be increased so that other vehicles are less easily recognized. As a result, when the relative speed of the other vehicle is within the predetermined range, the degree of alarm suppression is adjusted so that alarm suppression is performed more easily than otherwise. Note that only one of the adjustment of the threshold value as the detection condition of the three-dimensional object and the adjustment of the threshold value as the acquisition condition of the image information value described above may be performed, or both may be performed simultaneously.
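- Method (A) could be sketched as follows; the dictionary of threshold values and the scale factors are illustrative assumptions.

```python
def tighten_recognition_thresholds(reflection_detected, rel_speed_in_range,
                                   thresholds):
    """Method (A) sketch: when a road-surface reflection is detected,
    tighten the other-vehicle recognition conditions by raising the
    detection threshold (first threshold alpha or theta) and/or the
    acquisition threshold of the image information value (p or t).
    `thresholds` is a dict such as {'alpha': 1.0, 'p': 8.0}."""
    if not reflection_detected:
        return thresholds
    factor = 1.6 if rel_speed_in_range else 1.3
    return {name: value * factor for name, value in thresholds.items()}
```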
- (B) The conditions for determining in step S180 whether background objects are reflected on the road surface are changed. That is, the threshold value for judging the correlation of the images between the background areas 34a to 34f and the reflection areas 35a to 35f, and between the background areas 36a to 36f and the reflection areas 37a to 37f, in FIG. 6 is lowered. Alternatively, the luminance difference used as the edge detection condition for each of the background areas 34a to 34f and 36a to 36f and the reflection areas 35a to 35f and 37a to 37f is lowered.
- By relaxing the conditions for determining the presence or absence of background objects reflected on the road surface in step S180 with the methods described above, the determination result that there is a reflection is obtained more easily, so the degree of suppression of the alarm output can be increased.
- As a result, when the relative speed of the other vehicle is within the predetermined range, the degree of alarm suppression is adjusted so that alarm suppression is performed more easily than otherwise. Note that only one of the adjustment of the correlation threshold and the adjustment of the edge detection condition described above may be performed, or both may be performed simultaneously.
- (C) The alarm suppression period is extended even after it is determined that there is no longer a reflection of background objects on the road surface. That is, when the reflection determination in step S180 changes from a determination that there is a reflection to a determination that there is no reflection, the alarm suppression is continued for an extended period after the determination result of no reflection is obtained. As a result, when the relative speed of the other vehicle is within the predetermined range, the degree of alarm suppression is adjusted so that alarm suppression is performed more easily than otherwise.
- The period for which the alarm suppression is extended may be the period during which the relative speed of the other vehicle is determined to be within the predetermined range after it is determined that no background object is reflected, or it may be a predetermined time or the period until the host vehicle travels a predetermined distance.
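- Method (C) could be sketched as a small state holder; extending the suppression for as long as the relative speed stays within the predetermined range is just one of the options listed above.

```python
class SuppressionExtender:
    """Method (C) sketch: keep suppressing the alarm for a while after the
    reflection determination turns to 'no reflection'. A fixed time or a
    travelled distance could be used as the extension period instead."""

    def __init__(self):
        self.extending = False

    def suppress(self, reflection_present, rel_speed_in_range):
        if reflection_present:
            self.extending = True   # arm the extension for later
            return True
        if self.extending and rel_speed_in_range:
            return True             # extended suppression period
        self.extending = False
        return False
```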
- In step S161, the degree of alarm suppression can be adjusted using at least one of the methods (A) to (C) described above; each method may be employed singly or in combination.
- In step S190, the control unit 2 determines the presence or absence of background objects reflected on the road surface based on the result of the reflection determination in step S180, as in the flowchart of FIG. 5 described in the first embodiment. If it is determined in step S180 that there is a reflection of a background object on the road surface at least in one of the left rear and right rear of the host vehicle, the process proceeds from step S190 to step S210. On the other hand, if it is determined in step S180 that there is no reflection of background objects on the road surface in either the left rear or the right rear of the host vehicle, the process proceeds from step S190 to step S220.
- In step S210, the control unit 2 adopts the threshold value Th1 for the other vehicle recognition process of step S130 from the next execution onward. In step S220, on the other hand, the control unit 2 adopts the threshold value Th0 for the other vehicle recognition process of step S130 from the next execution onward. Here, Th1 > Th0.
- The above-mentioned threshold values Th1 and Th0 correspond to the threshold value used as the detection condition of the three-dimensional object in method (A) of step S161 described above. That is, when differential waveform information is acquired as the image information value based on the image in the detection area set in the captured image, and the other vehicle is recognized by executing detection of a three-dimensional object by the differential waveform information as described above based on it, the value of either Th1 or Th0 is adopted as the first threshold value α used in the determination of step S7 of FIG. 23.
- Alternatively, when edge information is acquired as the image information value based on the image in the detection area set in the captured image, and the other vehicle is recognized by executing detection of a three-dimensional object by edge information as described above based on it, the value of either Th1 or Th0 is adopted as the threshold value θ of Expression 3.
- When it is determined in the reflection determination of step S180 that there is a reflection of a background object on the road surface, the threshold value Th1, which is higher than the threshold value Th0 used when there is no reflection, is adopted in step S210 as the threshold value for the other vehicle recognition process. By thus tightening the conditions for recognizing other vehicles in step S130, the alarm output can be suppressed so that other vehicles are less easily recognized.
- In addition to adopting the threshold value Th1 when it is determined that a background object is reflected on the road surface, the alarm output can be further suppressed by tightening the acquisition condition of the image information value when it is determined that a background object is reflected on the road surface.
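- The adoption of Th0 or Th1 in steps S190 to S220 and its use in the recognition of step S130 could be sketched as follows; the numeric values are placeholders.

```python
def recognition_threshold(reflection_present, th0=1.0, th1=1.8):
    """Steps S190/S210/S220 sketch: adopt the higher threshold Th1 while a
    road-surface reflection is determined to be present, and Th0
    otherwise (with Th1 > Th0)."""
    return th1 if reflection_present else th0

def recognize_other_vehicle(image_info_value, reflection_present):
    """Step S130 sketch: the other vehicle is recognized when the image
    information value (from differential waveform information or edge
    information) reaches the adopted threshold."""
    return image_info_value >= recognition_threshold(reflection_present)
```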
- FIGS. 41 to 45 are diagrams for explaining the false-alarm reduction effect obtained by the on-vehicle ambient environment recognition device 100 of the present embodiment as described above.
- In each of FIGS. 41(a) to 45(a), the relative speed of the other vehicle changes in the same manner as in FIG. 12 described in the first embodiment, and each figure illustrates how the output timing of the alarm from the alarm output unit 3 changes with the adjustment of the degree of suppression of the alarm output.
- FIGS. 41 and 42 each show an example in which the degree of alarm suppression is adjusted using method (A) among the aforementioned methods (A) to (C) in step S161 of FIG. 40.
- FIG. 41 shows an example of adjusting the threshold value used as the detection condition of the three-dimensional object, that is, the other vehicle recognition threshold value Th1 corresponding to the above-mentioned first threshold value α or threshold value θ, and FIG. 42 shows an example of adjusting the threshold value used as the acquisition condition of the image information value, that is, the above-mentioned threshold value p or threshold value t.
- As shown in FIG. 41(c), in the periods from time Tr1 to time Tr2 and from time Tr3 to time Tr4, during which it is determined that a background object is reflected on the road surface, the alarm suppression adjustment unit 206 suppresses the alarm output by changing the threshold value for recognizing the other vehicle from Th0 to Th1.
- Further, in the period from time Tr3 to time Tr4, since the relative speed of the other vehicle is within the predetermined range, the degree of suppression of the alarm output is adjusted by raising the threshold value Th1.
- The portion shown by the broken line in FIG. 41(c) indicates the threshold value Th1 in the case where the adjustment of the degree of suppression of the alarm output is not performed.
- By raising the threshold value Th1 in this way, the other vehicle recognition unit 204b of the application execution unit 204 becomes less likely to recognize the other vehicle.
- As a result, the timing at which the other vehicle starts to be recognized is changed from time To3, at which the image information value 50 exceeds the threshold value Th1 before adjustment, to time Tr4, at which it is determined that there is no reflection and the alarm suppression ends. The timing at which the image information value 50 falls below the threshold value Th0 and the recognition of the other vehicle ends remains unchanged at time To4.
- The alarm output period is thereby shortened to the period from time Tr4 to time To4.
- The portion shown by the broken line in FIG. 41(d) indicates the timing of the alarm output when the adjustment of the degree of suppression of the alarm output is not performed, showing that the alarm would also be output in the period from time To3 to time Tr4 in addition to the above period.
- In this way, the threshold value Th1 for recognizing the other vehicle is raised in the period from time Tr3 to time Tr4, during which the relative speed of the other vehicle is within the predetermined range and it is determined that there is a reflection, and the degree of suppression of the alarm output is thereby adjusted. As a result, the alarm output in the period from time To3 to time Tr4 can be suppressed.
- In FIG. 42, as in FIG. 41, the alarm suppression adjustment unit 206 suppresses the alarm output in the periods from time Tr1 to time Tr2 and from time Tr3 to time Tr4 by changing the threshold value for recognizing the other vehicle from Th0 to Th1. Furthermore, in the period from time Tr3 to time Tr4, since the relative speed of the other vehicle is within the predetermined range, the degree of suppression of the alarm output is adjusted by tightening the conditions for acquiring the image information value.
- As a result, the obtained image information value 50 decreases, as shown for example in FIG. 42(c), and the other vehicle becomes less likely to be recognized.
- The portion shown by the broken line among the image information values 50 in FIG. 42(c) indicates the value when the adjustment of the degree of suppression of the alarm output is not performed.
- As a result, the timing at which the other vehicle starts to be recognized is changed from time To3, at which the image information value 50 before adjustment exceeds the threshold value Th1, to time Tr4, at which it is determined that there is no reflection and the alarm suppression ends.
- The alarm output period is thereby shortened to the period from time Tr4 to time To4, as in the case of FIG. 41.
- The portion shown by the broken line in FIG. 42(d) indicates the timing of the alarm output when the adjustment of the degree of suppression of the alarm output is not performed, showing that the alarm would also be output in the period from time To3 to time Tr4 in addition to the above period.
- FIG. 43 shows an example in which the degree of alarm suppression is adjusted using method (B) among the aforementioned methods (A) to (C) in step S161 of FIG. 40.
- In this case, in the period from time Tv1 to time Tv2, during which the relative speed of the other vehicle shown in FIG. 43(a) is within the predetermined range, the alarm suppression adjustment unit 206 adjusts the degree of suppression of the alarm output by relaxing the condition for determining the presence or absence of background objects reflected on the road surface.
- As a result, the reflection determination unit 203 obtains the determination result that there is a reflection more easily.
- Consequently, the timing at which it is determined that there is no reflection is moved from time Tr4 to time Tr4a, and the period during which the determination result that there is a reflection is obtained is extended.
- Since the period during which the determination result that there is a reflection is obtained is extended, the period during which the alarm output is suppressed is also extended correspondingly, as shown in FIG. 43(c). That is, the timing at which the threshold value for recognizing the other vehicle is lowered from Th1 to Th0 is changed from time Tr4 to time Tr4a. As a result, the timing at which the recognition of the other vehicle ends is changed from time To4, at which the image information value 50 falls below the threshold value Th0 when the alarm output is not suppressed, to time To4a, at which the image information value 50 falls below the threshold value Th1 when the alarm output is suppressed. The timing at which the image information value 50 exceeds the threshold value Th1 and the other vehicle starts to be recognized remains unchanged at time To3. As a result, as shown in FIG. 43(d), the alarm output period is shortened to the period from time To3 to time To4a.
- The portion shown by the broken line in FIG. 43(d) indicates the timing of the alarm output when the adjustment of the degree of suppression of the alarm output is not performed, showing that the alarm would also be output in the period from time To4a to time To4 in addition to the above period.
- FIG. 44 shows an example in which the degree of alarm suppression is adjusted using method (C) among the aforementioned methods (A) to (C) in step S161 of FIG. 40.
- In this case, as in FIGS. 41 to 43, the alarm suppression adjustment unit 206 suppresses the alarm output in the periods from time Tr1 to time Tr2 and from time Tr3 to time Tr4, during which it is determined that a background object is reflected on the road surface, by changing the threshold value for recognizing the other vehicle from Th0 to Th1.
- Furthermore, the degree of suppression of the alarm output is adjusted by extending the period during which the threshold value is kept at Th1 until time Tv2, up to which the relative speed of the other vehicle is within the predetermined range.
- The portion shown by the broken line in FIG. 44(c) indicates the timing at which the threshold value is lowered from Th1 to Th0 when the adjustment of the degree of suppression of the alarm output is not performed.
- In other words, the timing at which the threshold value for recognizing the other vehicle is lowered from Th1 to Th0 is changed from time Tr4 to time Tv2.
- As a result, the timing at which the recognition of the other vehicle ends is changed from time To4, at which the image information value 50 falls below the threshold value Th0 when the alarm output is not suppressed, to time To4a, at which the image information value 50 falls below the threshold value Th1 when the alarm output is suppressed.
- The alarm output period is thereby shortened to the period from time To3 to time To4a.
- The portion shown by the broken line in FIG. 44(d) indicates the timing of the alarm output when the adjustment of the degree of suppression of the alarm output is not performed, showing that the alarm would also be output in the period from time To4a to time To4 in addition to the above period.
- In this way, the alarm suppression is extended in the period from time Tr4, at which the determination that a background object is reflected on the road surface ends, to time Tv2, up to which the relative speed of the other vehicle is within the predetermined range, and the degree of suppression of the alarm output is thereby adjusted. As a result, the alarm output in the period from time To4a to time To4 can be suppressed.
- FIG. 45 shows an example in which the degree of alarm suppression is adjusted in step S161 of FIG. 40 by using method (A) and method (C) among the aforementioned methods (A) to (C) in combination.
- Here, the threshold value used as the detection condition of the three-dimensional object, that is, the above-described first threshold value α or threshold value θ, is adjusted.
- As shown in FIG. 45(c), in the periods from time Tr1 to time Tr2 and from time Tr3 to time Tr4, during which it is determined that a background object is reflected on the road surface, the alarm suppression adjustment unit 206 suppresses the alarm output by changing the threshold value for recognizing the other vehicle from Th0 to Th1. Further, in the period from time Tr3 to time Tr4, since the relative speed of the other vehicle is within the predetermined range, the degree of suppression of the alarm output is adjusted by raising the threshold value Th1. Furthermore, after time Tr4 as well, the degree of suppression of the alarm output is adjusted by extending the period during which the threshold value is kept at Th1 until time Tv2, up to which the relative speed of the other vehicle is within the predetermined range.
- The portions shown by the broken lines in FIG. 45(c) indicate the threshold value Th1 in the case where the degree of suppression of the alarm output is not adjusted in the period from time Tr3 to time Tr4, and the timing at which the threshold value is lowered from Th1 to Th0 in the case where the adjustment of the degree of suppression of the alarm output is not performed after time Tr4.
- as a result, the other vehicle recognition unit 204b of the application execution unit 204 becomes less likely to recognize other vehicles. In the example of FIG. 45, the image information value 50 does not exceed the adjusted threshold Th1 at any point in the period from time Tr3 to time Tv2, so the other vehicle is not recognized. Consequently, the alarm is not output in any of these periods; that is, the alarm output can be suppressed throughout. The broken line in FIG. 45(d) shows the alarm output timing when the degree of alarm suppression is not adjusted; it indicates that, in that case, the alarm would be output in the period from time To3 to time To4.
- as described above, the in-vehicle surrounding environment recognition device 100 recognizes, by means of the application execution unit 204, another vehicle traveling around the host vehicle based on the captured image acquired by the camera 1, and detects the relative speed of the other vehicle with respect to the host vehicle (step S130). Further, the reflection determination unit 203 determines, based on the captured image, whether a background object is reflected on the road surface (step S180). When it is determined in step S180 that there is a reflection, the threshold Th1 is adopted (step S210), and the output of the alarm by the alarm output unit 3 is suppressed.
- at this time, the degree of suppression of the alarm signal output is adjusted by the alarm suppression adjustment unit 206 based on the relative speed of the other vehicle detected in step S130 (step S161), and the alarm signal output is suppressed according to the adjusted degree of suppression. Because this is done, as in the first embodiment, it is possible to prevent an alarm from being output at an incorrect timing due to the reflection of a background object on the road surface being erroneously detected as a vehicle.
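To make the processing flow above concrete, the following is a minimal sketch of one frame of the pipeline. The unit objects and their method names (`recognize`, `has_background_reflection`, `adjust`, `output_alarm`, `veto_alarm`) are illustrative assumptions; only the step numbers and reference numerals come from the text.

```python
def process_frame(image, device):
    """One frame of the alarm pipeline sketched above (illustrative)."""
    # Step S130: recognize another vehicle and its relative speed.
    other_vehicle, relative_speed = device.app_unit_204.recognize(image)

    # Step S180: determine whether a background object is reflected
    # on the road surface.
    reflected = device.reflection_unit_203.has_background_reflection(image)

    # Step S161: adjust the degree of alarm suppression based on the
    # relative speed detected in step S130.
    suppression = device.adjuster_206.adjust(reflected, relative_speed)

    # Alarm control: output an alarm only when another vehicle is
    # recognized and the (adjusted) suppression does not veto it.
    if other_vehicle is not None and not suppression.veto_alarm:
        device.alarm_unit_3.output_alarm()
```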
- in step S161, the alarm suppression adjustment unit 206 can adjust the degree of suppression of the alarm signal output using the methods (A) to (C) described above.
- (A) The condition used by the reflection determination unit 203 to determine whether a background object is reflected on the road surface is changed according to the relative speed of the other vehicle, whereby the degree of suppression of the alarm signal output can be adjusted. That is, the degree of alarm suppression is adjusted by changing the correlation threshold used in step S180 when the feature quantities of the respective areas are compared to determine whether a background object is reflected on the road surface, or by changing the edge detection condition used in step S170 when the feature quantities of the respective areas are calculated. Because this is done, the degree of alarm suppression can be adjusted easily and reliably. Specifically, the condition for determining the presence or absence of the reflection of a background object can be relaxed in step S180 so that alarm suppression occurs more readily.
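As a rough illustration of method (A), the sketch below relaxes the two determination conditions named above when the relative speed is within the predetermined range. The base values and the relaxation factors are invented placeholders, not values from the patent.

```python
def reflection_determination_conditions(speed_in_range: bool,
                                        base_correlation: float = 0.6,
                                        base_edge_diff: int = 30):
    """Method (A), sketched: lower the correlation threshold of step
    S180 and the edge-detection luminance difference of step S170 when
    the other vehicle's relative speed is within the predetermined
    range, so a road-surface reflection is found (and the alarm is
    suppressed) more readily. All numeric values are assumptions."""
    if speed_in_range:
        return base_correlation * 0.8, base_edge_diff - 10
    return base_correlation, base_edge_diff
```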
- (B) The alarm suppression adjustment unit 206 can also adjust the degree of suppression of the alarm signal output by changing, according to the relative speed of the other vehicle, the condition used by the application execution unit 204 to recognize the other vehicle. That is, the in-vehicle surrounding environment recognition device 100 recognizes, by means of the application execution unit 204, the other vehicle by determining whether the image information value 50 based on the image in the detection area set in the captured image is equal to or greater than the predetermined threshold Th0 or Th1. This threshold is changed according to the relative speed of the other vehicle; more specifically, when the relative speed of the other vehicle is within the predetermined range, the threshold Th1 used during other-vehicle recognition is raised further, thereby adjusting the degree of alarm suppression. Because this is done, the degree of alarm suppression can be adjusted easily and reliably. Furthermore, the condition for recognizing the other vehicle can be tightened in step S130 so that alarm suppression occurs more readily.
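A minimal sketch of this threshold selection, assuming illustrative numeric values for Th0, Th1, and the additional raise (the patent only states that Th1 is raised further when the relative speed is within the predetermined range):

```python
def other_vehicle_threshold(reflected: bool, speed_in_range: bool,
                            th0: float = 1.0, th1: float = 1.5,
                            boost: float = 1.2) -> float:
    """Method (B), threshold variant, sketched: use Th1 instead of Th0
    while a reflection is determined, and raise Th1 further when the
    relative speed is within the predetermined range. The numeric
    values and the boost factor are assumptions."""
    if not reflected:
        return th0
    return th1 * boost if speed_in_range else th1
```

The other vehicle is then recognized only when the image information value 50 meets or exceeds the returned threshold, so raising it makes recognition (and hence the alarm) less likely during a reflection.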
- the alarm suppression adjustment unit 206 can also adjust the degree of suppression of the alarm signal output by changing, according to the relative speed of the other vehicle, another condition used by the application execution unit 204 to recognize the other vehicle. That is, the in-vehicle surrounding environment recognition device 100 detects, by means of the application execution unit 204, an image information value based on the image in the detection area set in the captured image as a detection target when it satisfies a predetermined detection condition, and recognizes the other vehicle based on the detected image information value. This detection condition is changed according to the relative speed of the other vehicle; more specifically, when the relative speed of the other vehicle is within the predetermined range, the detection condition for the image information value is made stricter, thereby adjusting the degree of alarm suppression. Because this is done, the degree of alarm suppression can be adjusted easily and reliably, as above. Furthermore, the condition for recognizing the other vehicle can be tightened in step S130 so that alarm suppression occurs more readily.
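A sketch of the detection-condition variant. Here the acquisition condition is modeled as a minimum per-edge strength, which is an assumption made purely for illustration; the tightening factor is likewise invented.

```python
def collect_image_information(edge_strengths: list[float],
                              base_condition: float,
                              speed_in_range: bool) -> list[float]:
    """Method (B), detection-condition variant, sketched: tighten the
    acquisition condition of the image information value when the
    relative speed is within the predetermined range, so fewer values
    are detected and the other vehicle is recognized less readily."""
    condition = base_condition * (1.5 if speed_in_range else 1.0)
    return [s for s in edge_strengths if s >= condition]
```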
- (C) When the reflection determination unit 203 determines that a background object is reflected on the road surface and thereafter determines that no background object is reflected on the road surface, the alarm suppression adjustment unit 206 can adjust the degree of suppression of the alarm signal output by extending the suppression of the alarm signal output according to the relative speed of the other vehicle. More specifically, the degree of alarm suppression is adjusted by extending the suppression of the alarm signal output for a predetermined time or for the period during which the relative speed of the other vehicle continues to be within the predetermined range. Because this is done, the degree of alarm suppression can be adjusted easily and reliably, as above. Furthermore, the condition for recognizing the other vehicle can be tightened in step S130 so that alarm suppression occurs more readily.
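The following is a minimal sketch of method (C) as a small state machine. Both extension policies named above (a fixed hold time and a speed-condition period) are shown combined for brevity; the 2.0 s figure and all names are assumptions.

```python
import time

class AlarmSuppressionExtender:
    """Method (C), sketched: after the reflection determination ends,
    keep suppressing the alarm for a fixed hold time or while the
    relative speed stays within the predetermined range."""

    def __init__(self, hold_seconds: float = 2.0):
        self.hold_seconds = hold_seconds   # assumed value
        self.reflection_ended_at = None

    def suppress(self, reflected: bool, speed_in_range: bool) -> bool:
        now = time.monotonic()
        if reflected:
            self.reflection_ended_at = now      # (re)arm the extension
            return True
        if self.reflection_ended_at is None:
            return False                        # no recent reflection
        if now - self.reflection_ended_at < self.hold_seconds:
            return True                         # fixed-time policy
        if speed_in_range:
            return True                         # speed-condition policy
        self.reflection_ended_at = None         # extension over
        return False
```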
- in the above description, the degree of alarm suppression is adjusted on the condition that the relative speed of the other vehicle is within the predetermined range, but another condition on the relative speed of the other vehicle may be used. For example, the fluctuation (stability) of the relative speed of the other vehicle may be checked, and the degree of alarm suppression may be adjusted on the condition that the fluctuation is within a predetermined range. Alternatively, these conditions may be used in combination.
- in the embodiments described above, the camera 1 photographs the road surface behind the vehicle, but the road surface ahead of the vehicle may be photographed instead. As long as the road surface around the vehicle can be photographed, the imaging range of the camera 1 may be set in any way.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
According to the second aspect of the present invention, in the in-vehicle surrounding environment recognition device of the first aspect, it is preferable that the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing, according to the relative speed of the other vehicle, the condition used by the reflection determination unit to determine whether a background object is reflected on the road surface.
According to the third aspect of the present invention, the in-vehicle surrounding environment recognition device of the second aspect includes an area setting unit that sets a background area and a reflection area in the captured image. In this device, it is preferable that the reflection determination unit determines whether a background object is reflected on the road surface by comparing the image in the background area of the captured image with the image in the reflection area of the captured image and judging whether their correlation is equal to or greater than a predetermined threshold. It is also preferable that the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing this threshold according to the relative speed of the other vehicle.
According to the fourth aspect of the present invention, the in-vehicle surrounding environment recognition device of the second aspect includes an area setting unit that sets a background area and a reflection area in the captured image, and a feature amount calculation unit that detects, in the image in the background area of the captured image and in the image in the reflection area of the captured image, edges satisfying a predetermined detection condition, and calculates a feature amount corresponding to the detected edges for each of the background area and the reflection area. In this device, it is preferable that the reflection determination unit determines whether a background object is reflected on the road surface by comparing the feature amount of the background area with the feature amount of the reflection area. It is also preferable that the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing the detection condition according to the relative speed of the other vehicle.
According to the fifth aspect of the present invention, in the in-vehicle surrounding environment recognition device of the first aspect, it is preferable that the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing, according to the relative speed of the other vehicle, the condition used by the application execution unit to recognize the other vehicle.
According to the sixth aspect of the present invention, in the in-vehicle surrounding environment recognition device of the fifth aspect, it is preferable that the application execution unit recognizes the other vehicle by determining whether an image information value based on the image in a detection area set in the captured image is equal to or greater than a predetermined threshold. It is also preferable that the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing this threshold according to the relative speed of the other vehicle.
According to the seventh aspect of the present invention, in the in-vehicle surrounding environment recognition device of the fifth aspect, it is preferable that the application execution unit detects an image information value based on the image in a detection area set in the captured image as a detection target when the value satisfies a predetermined detection condition, and recognizes the other vehicle based on the detected image information value. It is also preferable that the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by changing the detection condition according to the relative speed of the other vehicle.
According to the eighth aspect of the present invention, in the in-vehicle surrounding environment recognition device of the first aspect, it is preferable that, when the reflection determination unit determines that a background object is reflected on the road surface and thereafter determines that no background object is reflected on the road surface, the alarm suppression adjustment unit adjusts the degree of suppression of the alarm signal output by extending the suppression of the alarm signal output according to the relative speed of the other vehicle.
According to the ninth aspect of the present invention, in the in-vehicle surrounding environment recognition device of any one of the first to eighth aspects, it is preferable that the alarm suppression adjustment unit changes the degree of suppression of the alarm signal output depending on whether the relative speed of the other vehicle satisfies a predetermined speed condition.
According to the tenth aspect of the present invention, in the in-vehicle surrounding environment recognition device of the ninth aspect, it is preferable that the speed condition includes at least one of the relative speed of the other vehicle being within a predetermined range and the fluctuation of the relative speed of the other vehicle being within a predetermined range.
An in-vehicle surrounding environment recognition device according to the eleventh aspect of the present invention includes: an imaging unit that photographs the road surface around a vehicle to acquire a captured image; an application execution unit that recognizes, based on the captured image acquired by the imaging unit, another vehicle traveling around the vehicle; and a reflection determination unit that distinguishes a background area and a reflection area of the captured image and determines, based on the correlation between the image features of those areas, whether a background object is reflected on the road surface. When the reflection determination unit determines that a background object is reflected on the road surface, recognition of the other vehicle by the application execution unit is suppressed.
FIG. 1 is a block diagram showing the configuration of an in-vehicle surrounding environment recognition device 100 according to an embodiment of the present invention. The in-vehicle surrounding environment recognition device 100 shown in FIG. 1 is mounted on a vehicle for use, and includes a camera 1, a control unit 2, an alarm output unit 3, and an operation state notification unit 4.
Next, the other-vehicle recognition processing executed by the other vehicle recognition unit 204b of the application execution unit 204 in step S130 of FIG. 5 will be described below.
The in-vehicle surrounding environment recognition device 100 of this embodiment detects a three-dimensional object present in the right-side detection area or the left-side detection area behind the vehicle based on image information obtained by the monocular camera 1 that images the area behind the vehicle.
Next, a description will be given of the three-dimensional object detection block B, which uses edge information, is composed of the luminance difference calculation unit 35, the edge line detection unit 36, and the three-dimensional object detection unit 37, and can be operated in place of the detection block A shown in FIG. 15. FIG. 25 shows the imaging range and the like of the camera 1 of FIG. 15; FIG. 25(a) is a plan view, and FIG. 25(b) is a perspective view in real space rearward and to the side of the host vehicle V. As shown in FIG. 25(a), the camera 1 has a predetermined view angle a and images the area behind and to the side of the host vehicle V included in this view angle. As in the case of FIG. 14, the view angle a of the camera 1 is set so that the imaging range of the camera 1 includes not only the lane in which the host vehicle V travels but also the adjacent lanes.
(Formula 1)
s(xi, yi) = 1 when I(xi, yi) > I(xi′, yi′) + t
s(xi, yi) = −1 when I(xi, yi) < I(xi′, yi′) − t
s(xi, yi) = 0 otherwise
(Formula 2)
c(xi, yi) = 1 when s(xi, yi) = s(xi+1, yi+1) (excluding the case where both are 0)
c(xi, yi) = 0 otherwise
(Formula 3)
Σc(xi, yi) / N > θ
(Formula 4)
Evaluation value in the vertical-equivalent direction = Σ[{I(xi, yi) − I(xi+1, yi+1)}²]
(Formula 5)
Evaluation value in the vertical-equivalent direction = Σ|I(xi, yi) − I(xi+1, yi+1)|
(Formula 6)
Evaluation value in the vertical-equivalent direction = Σb(xi, yi)
where b(xi, yi) = 1 when |I(xi, yi) − I(xi+1, yi+1)| > t2, and b(xi, yi) = 0 otherwise
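Formulas 1 to 3 and 6 translate directly into array operations. The following NumPy sketch assumes the attention-line and reference-line luminance values have already been sampled into equal-length 1-D arrays; the sampling itself, and the array names, are assumptions outside the scope of these formulas.

```python
import numpy as np

def is_edge_line(I_attention: np.ndarray, I_reference: np.ndarray,
                 t: float, theta: float) -> bool:
    """Formulas 1-3, sketched. I_attention[i] corresponds to I(xi, yi)
    on the attention line and I_reference[i] to I(xi', yi') on the
    reference line."""
    n = len(I_attention)
    # Formula 1: attribute s of each attention point.
    s = np.zeros(n, dtype=int)
    s[I_attention > I_reference + t] = 1
    s[I_attention < I_reference - t] = -1
    # Formula 2: continuity c, i.e. adjacent attributes equal and non-zero.
    c = (s[:-1] == s[1:]) & (s[:-1] != 0)
    # Formula 3: the attention line is judged an edge line when the
    # normalized sum of continuities exceeds the threshold theta.
    return bool(c.sum() / n > theta)

def vertical_direction_evaluation(I_line: np.ndarray, t2: float) -> int:
    """Formula 6, sketched: count adjacent luminance differences along
    the vertical-equivalent direction that exceed t2."""
    return int((np.abs(np.diff(I_line)) > t2).sum())
```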
Returning to FIG. 15, for the detection of a three-dimensional object by the two three-dimensional object detection units 33 (or the three-dimensional object detection unit 37) described above, the other vehicle recognition unit 204b of the in-vehicle surrounding environment recognition device 100 of this example includes a three-dimensional object judgment unit 34, a virtual image judgment unit 38, and a control unit 39. The three-dimensional object judgment unit 34 finally judges, based on the detection result of the three-dimensional object detection unit 33 (or 37), whether the detected three-dimensional object is another vehicle VX present in the detection areas A1, A2. The three-dimensional object detection unit 33 (or 37) performs three-dimensional object detection reflecting the judgment result of the virtual image judgment unit 38 described later. The virtual image judgment unit 38 judges, from the result of texture analysis of the image corresponding to the detected three-dimensional object, whether the detected three-dimensional object is a virtual image in which an image of a building or the like is reflected in a water film or the like formed on the road surface. When the virtual image judgment unit 38 judges that the image corresponding to the detected three-dimensional object is a virtual image, the control unit 39 outputs control commands that control the units constituting the other vehicle recognition unit 204b (including the control unit 39) so that the detected three-dimensional object is suppressed from being judged to be another vehicle V present in the detection areas A1, A2.
First, the control commands for the case of detecting a three-dimensional object based on difference waveform information will be described. As described above, the three-dimensional object detection unit 33 detects a three-dimensional object based on the difference waveform information and the first threshold α. When the virtual image judgment unit 38 judges that the image corresponding to the three-dimensional object is a virtual image, the control unit 39 of this embodiment generates a control command to raise the first threshold α and outputs it to the three-dimensional object detection unit 33. The first threshold α is the threshold used in step S7 of FIG. 23 to judge the peaks of the difference waveform DWt (see FIG. 17). The control unit 39 can also output to the three-dimensional object detection unit 33 a control command to raise the threshold p regarding the difference between pixel values in the difference waveform information.
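A minimal sketch of this control command, assuming a simple common scaling of the two thresholds (the factor is invented; the text only states that the thresholds are raised):

```python
def thresholds_after_virtual_image_judgment(alpha: float, p: float,
                                            is_virtual_image: bool,
                                            factor: float = 1.5):
    """When the virtual image judgment unit 38 decides the detected
    object is a virtual image (e.g. a building reflected in a water
    film), raise the first threshold alpha applied to the peaks of the
    difference waveform DWt, and optionally the pixel-difference
    threshold p, so the reflection is not judged to be another
    vehicle. The scaling factor is an assumption."""
    if is_virtual_image:
        return alpha * factor, p * factor
    return alpha, p
```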
Next, a second embodiment of the present invention will be described. In the first embodiment described above, an example was described in which, upon receiving a notification of reflection from the reflection determination unit 203, the alarm output is suppressed by stopping the output of the alarm output signal from the alarm control unit 205 to the alarm output unit 3. In contrast, in this embodiment, an example will be described in which, upon receiving a notification of reflection from the reflection determination unit 203, the alarm output is suppressed by making it harder for the other vehicle to be recognized in the other-vehicle recognition processing executed by the other vehicle recognition unit 204b of the application execution unit 204. The configuration of the in-vehicle surrounding environment recognition device 100 according to this embodiment and the control block diagram of the control unit 2 for alarm suppression at the time of road surface reflection are the same as those shown in FIGS. 1 and 4, respectively, and their description is therefore omitted below.
In this method, the other-vehicle recognition condition used when it is determined that a background object is reflected on the road surface is changed in the other-vehicle recognition processing of step S130 from the next time onward. That is, when the other vehicle is recognized by acquiring difference waveform information as the image information value based on the image in the detection area set in the captured image and performing the above-described three-dimensional object detection based on that difference waveform information, the threshold for judging from the difference waveform DWt whether a three-dimensional object is present, specifically the first threshold α used for the judgment in step S7 of FIG. 23, is increased. When the other vehicle is recognized by acquiring edge information as the image information value based on the image in the detection area set in the captured image and performing the above-described three-dimensional object detection based on that edge information, the threshold for judging whether an attention line is an edge line, specifically the threshold θ of Formula 3, is increased. The other-vehicle recognition condition can be changed by adjusting these thresholds.
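A sketch of this condition change, dispatching on which detection route is active; the scaling factor is an invented placeholder, and the route names are assumptions for illustration only.

```python
def adjusted_detection_threshold(route: str, reflection: bool,
                                 alpha: float, theta: float,
                                 k: float = 1.3) -> float:
    """Second embodiment, sketched: pick the threshold for the active
    detection route and raise it while a road-surface reflection is
    determined. 'difference_waveform' uses the first threshold alpha
    (step S7 of FIG. 23); the edge-information route uses the
    threshold theta of Formula 3. k is an assumed factor."""
    base = alpha if route == "difference_waveform" else theta
    return base * k if reflection else base
```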
In this method, the condition for determining in step S180 whether a background object is reflected on the road surface is changed using the same technique as described in the first embodiment. That is, the threshold for judging the correlation of the images between the background areas 34a to 34f and the reflection areas 35a to 35f, and between the background areas 36a to 36f and the reflection areas 37a to 37f, of FIG. 6 is lowered. Alternatively, the luminance difference in the edge detection condition for each of the background areas 34a to 34f and 36a to 36f and the reflection areas 35a to 35f and 37a to 37f is lowered.
In this method, the alarm suppression period is extended after the reflection of a background object on the road surface ends. That is, when it is determined in the reflection determination of step S180 that a background object is reflected on the road surface and it is thereafter determined that there is no reflection, the alarm suppression is continued for an extended period even after the determination result of no reflection is obtained. Thereby the degree of alarm suppression is adjusted so that, when the relative speed of the other vehicle is within the predetermined range, alarm suppression occurs more readily than otherwise. The period for which the alarm suppression is extended may be the period during which the relative speed of the other vehicle continues to be within the predetermined range after it is determined that there is no reflection of a background object, or it may be a predetermined time or the period until the host vehicle travels a predetermined distance.
Japanese Patent Application No. 2012-167603 (filed July 27, 2012)
2 Control unit
3 Alarm output unit
4 Operation state notification unit
100 In-vehicle surrounding environment recognition device
201 Area setting unit
202 Feature amount calculation unit
203 Reflection determination unit
204 Application execution unit
205 Alarm control unit
206 Alarm suppression adjustment unit
Claims (11)
- An in-vehicle surrounding environment recognition device comprising:
an imaging unit that photographs the road surface around a vehicle to acquire a captured image;
an application execution unit that, based on the captured image acquired by the imaging unit, recognizes another vehicle traveling around the vehicle and detects a relative speed of the other vehicle with respect to the vehicle;
a reflection determination unit that determines, based on the captured image, whether a background object is reflected on the road surface;
an alarm control unit that controls output of an alarm signal based on a result of recognition of the other vehicle by the application execution unit; and
an alarm suppression adjustment unit that suppresses the output of the alarm signal based on the relative speed of the other vehicle when the reflection determination unit determines that a background object is reflected on the road surface.
- The in-vehicle surrounding environment recognition device according to claim 1, wherein the alarm suppression adjustment unit adjusts a degree of suppression of the output of the alarm signal by changing, according to the relative speed of the other vehicle, a condition used by the reflection determination unit to determine whether a background object is reflected on the road surface.
- The in-vehicle surrounding environment recognition device according to claim 2, further comprising an area setting unit that sets a background area and a reflection area in the captured image, wherein
the reflection determination unit determines whether a background object is reflected on the road surface by comparing an image in the background area of the captured image with an image in the reflection area of the captured image and determining whether their correlation is equal to or greater than a predetermined threshold, and
the alarm suppression adjustment unit adjusts the degree of suppression of the output of the alarm signal by changing the threshold according to the relative speed of the other vehicle.
- The in-vehicle surrounding environment recognition device according to claim 2, further comprising:
an area setting unit that sets a background area and a reflection area in the captured image; and
a feature amount calculation unit that detects, in an image in the background area of the captured image and in an image in the reflection area of the captured image, edges satisfying a predetermined detection condition, and calculates a feature amount corresponding to the detected edges for each of the background area and the reflection area, wherein
the reflection determination unit determines whether a background object is reflected on the road surface by comparing the feature amount of the background area with the feature amount of the reflection area, and
the alarm suppression adjustment unit adjusts the degree of suppression of the output of the alarm signal by changing the detection condition according to the relative speed of the other vehicle.
- The in-vehicle surrounding environment recognition device according to claim 1, wherein the alarm suppression adjustment unit adjusts the degree of suppression of the output of the alarm signal by changing, according to the relative speed of the other vehicle, a condition used by the application execution unit to recognize the other vehicle.
- The in-vehicle surrounding environment recognition device according to claim 5, wherein
the application execution unit recognizes the other vehicle by determining whether an image information value based on an image in a detection area set in the captured image is equal to or greater than a predetermined threshold, and
the alarm suppression adjustment unit adjusts the degree of suppression of the output of the alarm signal by changing the threshold according to the relative speed of the other vehicle.
- The in-vehicle surrounding environment recognition device according to claim 5, wherein
the application execution unit detects an image information value based on an image in a detection area set in the captured image as a detection target when the image information value satisfies a predetermined detection condition, and recognizes the other vehicle based on the detected image information value, and
the alarm suppression adjustment unit adjusts the degree of suppression of the output of the alarm signal by changing the detection condition according to the relative speed of the other vehicle.
- The in-vehicle surrounding environment recognition device according to claim 1, wherein, when the reflection determination unit determines that a background object is reflected on the road surface and thereafter determines that no background object is reflected on the road surface, the alarm suppression adjustment unit adjusts the degree of suppression of the output of the alarm signal by extending the suppression of the output of the alarm signal according to the relative speed of the other vehicle.
- The in-vehicle surrounding environment recognition device according to any one of claims 1 to 8, wherein the alarm suppression adjustment unit changes the degree of suppression of the output of the alarm signal depending on whether the relative speed of the other vehicle satisfies a predetermined speed condition.
- The in-vehicle surrounding environment recognition device according to claim 9, wherein the speed condition includes at least one of the relative speed of the other vehicle being within a predetermined range and a fluctuation of the relative speed of the other vehicle being within a predetermined range.
- An in-vehicle surrounding environment recognition device comprising:
an imaging unit that photographs the road surface around a vehicle to acquire a captured image;
an application execution unit that recognizes, based on the captured image acquired by the imaging unit, another vehicle traveling around the vehicle; and
a reflection determination unit that distinguishes a background area and a reflection area of the captured image and determines, based on a correlation between image features of those areas, whether a background object is reflected on the road surface, wherein,
when the reflection determination unit determines that a background object is reflected on the road surface, recognition of the other vehicle by the application execution unit is suppressed.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13822459.7A EP2879109B1 (en) | 2012-07-27 | 2013-07-11 | Vehicle-mounted surrounding environment recognition device |
JP2014526848A JP6254083B2 (ja) | 2012-07-27 | 2013-07-11 | 車載用周囲環境認識装置 |
CN201380039231.7A CN104508722B (zh) | 2012-07-27 | 2013-07-11 | 车载用周围环境识别装置 |
US14/417,677 US9721460B2 (en) | 2012-07-27 | 2013-07-11 | In-vehicle surrounding environment recognition device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012167603 | 2012-07-27 | ||
JP2012-167603 | 2012-07-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014017302A1 true WO2014017302A1 (ja) | 2014-01-30 |
Family
ID=49997118
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/068935 WO2014017302A1 (ja) | 2012-07-27 | 2013-07-11 | 車載用周囲環境認識装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9721460B2 (ja) |
EP (1) | EP2879109B1 (ja) |
JP (1) | JP6254083B2 (ja) |
CN (1) | CN104508722B (ja) |
WO (1) | WO2014017302A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015216462A (ja) * | 2014-05-08 | 2015-12-03 | 日産自動車株式会社 | 立体物検出装置 |
CN111371902A (zh) * | 2020-03-13 | 2020-07-03 | 腾讯科技(深圳)有限公司 | 一种车辆分配方法及相关设备 |
CN113496201A (zh) * | 2020-04-06 | 2021-10-12 | 丰田自动车株式会社 | 物体状态识别装置、方法、物体状态识别用计算机程序及控制装置 |
Families Citing this family (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012037528A2 (en) * | 2010-09-16 | 2012-03-22 | California Institute Of Technology | Systems and methods for automated water detection using visible sensors |
JP5760884B2 (ja) * | 2011-09-09 | 2015-08-12 | 株式会社デンソー | 車両の旋回予測装置 |
CN105453157A (zh) * | 2013-08-01 | 2016-03-30 | 本田技研工业株式会社 | 车辆周围监测装置 |
CN106031147B (zh) * | 2014-02-13 | 2019-04-09 | 索尼公司 | 用于使用角膜反射调整相机设置的方法和系统 |
US9428194B2 (en) * | 2014-12-11 | 2016-08-30 | Toyota Motor Engineering & Manufacturing North America, Inc. | Splash condition detection for vehicles |
JP6299720B2 (ja) * | 2015-10-02 | 2018-03-28 | トヨタ自動車株式会社 | 物体認識装置及び煙判定方法 |
KR102374921B1 (ko) * | 2015-10-30 | 2022-03-16 | 주식회사 만도모빌리티솔루션즈 | 차량 제어 시스템 및 방법 |
ITUB20155886A1 (it) * | 2015-11-25 | 2017-05-25 | A M General Contractor S P A | Rilevatore d?incendio a radiazione infrarossa con funzione composta per ambiente confinato. |
JP6580982B2 (ja) * | 2015-12-25 | 2019-09-25 | 日立建機株式会社 | オフロードダンプトラック及び障害物判別装置 |
US9940530B2 (en) * | 2015-12-29 | 2018-04-10 | Thunder Power New Energy Vehicle Development Company Limited | Platform for acquiring driver behavior data |
JP6752024B2 (ja) * | 2016-02-12 | 2020-09-09 | 日立オートモティブシステムズ株式会社 | 画像処理装置 |
CN108473140A (zh) * | 2016-02-18 | 2018-08-31 | 本田技研工业株式会社 | 车辆控制装置、车辆控制方法及车辆控制程序 |
JP6969072B2 (ja) * | 2016-03-14 | 2021-11-24 | ソニーグループ株式会社 | 情報処理装置、情報処理方法、プログラム、およびビークル |
CN108027423B (zh) * | 2016-03-14 | 2022-01-04 | 日立建机株式会社 | 矿山用作业机械 |
JP6512164B2 (ja) * | 2016-04-22 | 2019-05-15 | 株式会社デンソー | 物体検出装置、物体検出方法 |
CN107886729B (zh) * | 2016-09-30 | 2021-02-23 | 比亚迪股份有限公司 | 车辆识别方法、装置及车辆 |
JP2018092596A (ja) * | 2016-11-30 | 2018-06-14 | 株式会社リコー | 情報処理装置、撮像装置、機器制御システム、移動体、情報処理方法、およびプログラム |
CN106553655B (zh) * | 2016-12-02 | 2019-11-15 | 深圳地平线机器人科技有限公司 | 危险车辆检测方法和系统以及包括该系统的车辆 |
US10788840B2 (en) * | 2016-12-27 | 2020-09-29 | Panasonic Intellectual Property Corporation Of America | Information processing apparatus, information processing method, and recording medium |
US10053088B1 (en) * | 2017-02-21 | 2018-08-21 | Zoox, Inc. | Occupant aware braking system |
JP6922297B2 (ja) * | 2017-03-21 | 2021-08-18 | 三菱自動車工業株式会社 | 運転支援システム |
JP6729463B2 (ja) * | 2017-03-23 | 2020-07-22 | いすゞ自動車株式会社 | 車線逸脱警報装置の制御装置、車両および車線逸脱警報制御方法 |
US11270589B2 (en) * | 2017-08-25 | 2022-03-08 | Nissan Motor Co., Ltd. | Surrounding vehicle display method and surrounding vehicle display device |
KR102285959B1 (ko) * | 2017-11-07 | 2021-08-05 | 현대모비스 주식회사 | 차량용 물체 식별 장치 및 방법 |
JP6993205B2 (ja) * | 2017-12-18 | 2022-01-13 | 株式会社Soken | 区画線特定装置 |
US10902625B1 (en) * | 2018-01-23 | 2021-01-26 | Apple Inc. | Planar surface detection |
CN109159667A (zh) * | 2018-07-28 | 2019-01-08 | 上海商汤智能科技有限公司 | 智能驾驶控制方法和装置、车辆、电子设备、介质、产品 |
JP7073972B2 (ja) * | 2018-08-03 | 2022-05-24 | トヨタ自動車株式会社 | 情報処理システム、プログラム、及び制御方法 |
DE102018214959A1 (de) | 2018-09-04 | 2020-03-05 | Robert Bosch Gmbh | Verfahren zur Auswertung von Sensordaten mit einer erweiterten Objekterkennung |
JP2020095623A (ja) | 2018-12-14 | 2020-06-18 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP7236857B2 (ja) | 2018-12-14 | 2023-03-10 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP7226986B2 (ja) | 2018-12-14 | 2023-02-21 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP7252750B2 (ja) | 2018-12-14 | 2023-04-05 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP7195131B2 (ja) | 2018-12-14 | 2022-12-23 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP2020095624A (ja) | 2018-12-14 | 2020-06-18 | 株式会社デンソーテン | 画像処理装置、および画像処理方法 |
JP7141940B2 (ja) | 2018-12-14 | 2022-09-26 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP7203586B2 (ja) | 2018-12-14 | 2023-01-13 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP7359541B2 (ja) | 2018-12-14 | 2023-10-11 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP2020095620A (ja) * | 2018-12-14 | 2020-06-18 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
JP7203587B2 (ja) | 2018-12-14 | 2023-01-13 | 株式会社デンソーテン | 画像処理装置および画像処理方法 |
WO2020170835A1 (ja) * | 2019-02-18 | 2020-08-27 | ソニー株式会社 | 情報処理装置、情報処理方法及び情報処理プログラム |
JP7157686B2 (ja) * | 2019-03-15 | 2022-10-20 | 本田技研工業株式会社 | 車両制御装置、車両制御方法、及びプログラム |
FI128495B (en) * | 2019-05-21 | 2020-06-15 | Vaisala Oyj | Method for calibrating optical surface monitoring system, arrangement, device and computer readable memory |
CA3142038A1 (en) * | 2019-05-28 | 2020-12-03 | Abhijith PUNNAPPURATH | System and method for reflection removal using dual-pixel sensor |
KR20190084916A (ko) * | 2019-06-28 | 2019-07-17 | 엘지전자 주식회사 | 주차 위치 알림 장치 및 방법 |
JP7125239B2 (ja) * | 2019-07-31 | 2022-08-24 | トヨタ自動車株式会社 | 車両の注意喚起装置 |
JP7397609B2 (ja) * | 2019-09-24 | 2023-12-13 | 株式会社Subaru | 走行環境認識装置 |
EP3809313A1 (en) * | 2019-10-16 | 2021-04-21 | Ningbo Geely Automobile Research & Development Co. Ltd. | A vehicle parking finder support system, method and computer program product for determining if a vehicle is at a reference parking location |
JP7173062B2 (ja) * | 2020-01-23 | 2022-11-16 | トヨタ自動車株式会社 | 変化点検出装置及び地図情報配信システム |
JP7358263B2 (ja) * | 2020-02-12 | 2023-10-10 | 本田技研工業株式会社 | 車両制御装置、車両制御方法、及び車両制御用プログラム |
JP7138133B2 (ja) * | 2020-03-16 | 2022-09-15 | 本田技研工業株式会社 | 車両制御装置、車両、車両制御装置の動作方法およびプログラム |
US12004118B2 (en) | 2021-03-01 | 2024-06-04 | Toyota Motor North America, Inc. | Detection of aberration on transport |
US12002270B2 (en) | 2021-03-04 | 2024-06-04 | Nec Corporation Of America | Enhanced detection using special road coloring |
US12037757B2 (en) | 2021-03-04 | 2024-07-16 | Nec Corporation Of America | Infrared retroreflective spheres for enhanced road marks |
US12104911B2 (en) | 2021-03-04 | 2024-10-01 | Nec Corporation Of America | Imperceptible road markings to support automated vehicular systems |
US11881033B2 (en) * | 2021-03-04 | 2024-01-23 | Nec Corporation Of America | Reliable visual markers based on multispectral characteristics |
US11900695B2 (en) | 2021-03-04 | 2024-02-13 | Nec Corporation Of America | Marking and detecting road marks |
CN115382797B (zh) * | 2022-07-28 | 2024-05-03 | 中国电子科技集团公司第二十九研究所 | 一种利用光学原理的沉头铆钉筛选工具及使用方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007265016A (ja) | 2006-03-28 | 2007-10-11 | Matsushita Electric Ind Co Ltd | 車両検出装置及び車両検出方法 |
JP2008094377A (ja) * | 2006-09-14 | 2008-04-24 | Toyota Motor Corp | 車両用表示装置 |
JP2008219063A (ja) | 2007-02-28 | 2008-09-18 | Sanyo Electric Co Ltd | 車両周辺監視装置及び方法 |
JP2009031053A (ja) * | 2007-07-25 | 2009-02-12 | Fujitsu Ten Ltd | 前方障害物検出装置 |
JP2010036757A (ja) * | 2008-08-06 | 2010-02-18 | Fuji Heavy Ind Ltd | 車線逸脱防止制御装置 |
JP2012118874A (ja) * | 2010-12-02 | 2012-06-21 | Honda Motor Co Ltd | 車両の制御装置 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU4886897A (en) * | 1996-11-13 | 1998-06-03 | Komatsu Limited | Vehicle on which millimeter wave radar is mounted |
JP3521691B2 (ja) * | 1997-07-07 | 2004-04-19 | 日産自動車株式会社 | 車両走行制御装置 |
JP3945919B2 (ja) | 1998-09-22 | 2007-07-18 | 株式会社デンソー | 走行路検出装置、車両走行制御装置および記録媒体 |
JP2002240659A (ja) * | 2001-02-14 | 2002-08-28 | Nissan Motor Co Ltd | 車両周囲の状況を判断する装置 |
JP2003187228A (ja) * | 2001-12-18 | 2003-07-04 | Daihatsu Motor Co Ltd | 車両認識装置及び認識方法 |
EP1504276B1 (en) * | 2002-05-03 | 2012-08-08 | Donnelly Corporation | Object detection system for vehicle |
US6969183B2 (en) * | 2002-12-27 | 2005-11-29 | Ichikoh Industries, Ltd. | Digital lighting apparatus for vehicle, controller for digital lighting apparatus, and control program for digital lighting apparatus |
US7720580B2 (en) * | 2004-12-23 | 2010-05-18 | Donnelly Corporation | Object detection system for vehicle |
DE102005011241A1 (de) * | 2005-03-11 | 2006-09-14 | Robert Bosch Gmbh | Verfahren und Vorrichtung zur Kollisionswarnung |
JP4674179B2 (ja) | 2006-03-30 | 2011-04-20 | 株式会社デンソーアイティーラボラトリ | 影認識方法及び影境界抽出方法 |
JP4793094B2 (ja) * | 2006-05-17 | 2011-10-12 | 株式会社デンソー | 走行環境認識装置 |
JP4912744B2 (ja) | 2006-05-19 | 2012-04-11 | 富士通テン株式会社 | 路面状態判定装置及び路面状態判定方法 |
JP4909217B2 (ja) * | 2007-04-16 | 2012-04-04 | 本田技研工業株式会社 | 障害物認識装置 |
DE102009003697A1 (de) * | 2009-03-30 | 2010-10-07 | Conti Temic Microelectronic Gmbh | Verfahren und Vorrichtung zur Fahrspurerkennung |
WO2011101949A1 (ja) * | 2010-02-16 | 2011-08-25 | トヨタ自動車株式会社 | 車両制御装置 |
JP5387911B2 (ja) * | 2010-03-12 | 2014-01-15 | スズキ株式会社 | 自動変速機の変速制御装置 |
JP2011232293A (ja) * | 2010-04-30 | 2011-11-17 | Toyota Motor Corp | 車外音検出装置 |
WO2012037528A2 (en) * | 2010-09-16 | 2012-03-22 | California Institute Of Technology | Systems and methods for automated water detection using visible sensors |
JP2012101755A (ja) * | 2010-11-12 | 2012-05-31 | Denso Corp | 車速制御システム |
KR20120072139A (ko) * | 2010-12-23 | 2012-07-03 | 한국전자통신연구원 | 차량 검지 장치 및 방법 |
JP2012141219A (ja) * | 2010-12-28 | 2012-07-26 | Pioneer Electronic Corp | 傾斜角検出装置、方法、プログラムおよび記録媒体 |
-
2013
- 2013-07-11 EP EP13822459.7A patent/EP2879109B1/en active Active
- 2013-07-11 JP JP2014526848A patent/JP6254083B2/ja active Active
- 2013-07-11 CN CN201380039231.7A patent/CN104508722B/zh active Active
- 2013-07-11 US US14/417,677 patent/US9721460B2/en active Active
- 2013-07-11 WO PCT/JP2013/068935 patent/WO2014017302A1/ja active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007265016A (ja) | 2006-03-28 | 2007-10-11 | Matsushita Electric Ind Co Ltd | 車両検出装置及び車両検出方法 |
JP2008094377A (ja) * | 2006-09-14 | 2008-04-24 | Toyota Motor Corp | 車両用表示装置 |
JP2008219063A (ja) | 2007-02-28 | 2008-09-18 | Sanyo Electric Co Ltd | 車両周辺監視装置及び方法 |
JP2009031053A (ja) * | 2007-07-25 | 2009-02-12 | Fujitsu Ten Ltd | 前方障害物検出装置 |
JP2010036757A (ja) * | 2008-08-06 | 2010-02-18 | Fuji Heavy Ind Ltd | 車線逸脱防止制御装置 |
JP2012118874A (ja) * | 2010-12-02 | 2012-06-21 | Honda Motor Co Ltd | 車両の制御装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2879109A4 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015216462A (ja) * | 2014-05-08 | 2015-12-03 | 日産自動車株式会社 | 立体物検出装置 |
CN111371902A (zh) * | 2020-03-13 | 2020-07-03 | 腾讯科技(深圳)有限公司 | 一种车辆分配方法及相关设备 |
CN113496201A (zh) * | 2020-04-06 | 2021-10-12 | 丰田自动车株式会社 | 物体状态识别装置、方法、物体状态识别用计算机程序及控制装置 |
CN113496201B (zh) * | 2020-04-06 | 2024-02-09 | 丰田自动车株式会社 | 物体状态识别装置、方法、计算机可读取的记录介质及控制装置 |
Also Published As
Publication number | Publication date |
---|---|
JP6254083B2 (ja) | 2017-12-27 |
US9721460B2 (en) | 2017-08-01 |
US20150161881A1 (en) | 2015-06-11 |
CN104508722A (zh) | 2015-04-08 |
CN104508722B (zh) | 2016-08-24 |
EP2879109A4 (en) | 2016-10-05 |
EP2879109A1 (en) | 2015-06-03 |
EP2879109B1 (en) | 2022-02-09 |
JPWO2014017302A1 (ja) | 2016-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014017302A1 (ja) | 車載用周囲環境認識装置 | |
JP5997276B2 (ja) | 立体物検出装置及び異物検出装置 | |
US9349057B2 (en) | Three-dimensional object detection device | |
TWI417207B (zh) | Image - based obstacle detection reversing warning system and method | |
US9558546B2 (en) | Three-dimensional object detection device, and three-dimensional object detection method | |
JP5776795B2 (ja) | 立体物検出装置 | |
US20150195496A1 (en) | Three-dimensional object detection device, and three-dimensional object detection method | |
JP5874831B2 (ja) | 立体物検出装置 | |
US10096124B2 (en) | Water droplet detection device, and three-dimensional object detection device using water droplet detection device | |
JP5794378B2 (ja) | 立体物検出装置及び立体物検出方法 | |
JP5871069B2 (ja) | 立体物検出装置及び立体物検出方法 | |
JP5835459B2 (ja) | 立体物検出装置 | |
JP5783319B2 (ja) | 立体物検出装置及び立体物検出方法 | |
JP5817913B2 (ja) | 立体物検出装置及び立体物検出方法 | |
JP5768927B2 (ja) | 立体物検出装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13822459 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014526848 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14417677 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013822459 Country of ref document: EP |