WO2015008566A1 - In-vehicle device - Google Patents
In-vehicle device
- Publication number
- WO2015008566A1 (PCT application PCT/JP2014/065770)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
Definitions
- the present invention relates to an in-vehicle device.
- a vehicle-mounted device is known that captures the area in front of a vehicle with an on-board camera, detects horizontal edges in the captured image, and thereby detects lane marks (division lines) provided on the road surface (Patent Document 1).
- in this device, a pixel whose absolute luminance difference from its horizontally adjacent pixel exceeds a threshold is extracted as an edge point, and a lane mark is recognized based on the measured number of such edge points.
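The edge-point test above can be sketched as follows; the function name and threshold value are illustrative, not taken from Patent Document 1:

```python
def extract_edge_points(row, threshold):
    """Return the x-positions in one image row whose absolute luminance
    difference to the horizontally adjacent pixel exceeds the threshold."""
    return [x for x in range(len(row) - 1)
            if abs(row[x + 1] - row[x]) > threshold]
```

Counting the edge points found in each row then yields the measurement count on which the lane mark recognition is based.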
- Patent Document 2 discloses a technique for detecting a foreign object attached to the lens of an in-vehicle camera by finding a region of the captured image that does not move even though the vehicle is moving.
- Patent Document 3 discloses a technique for determining that a water droplet has adhered by using the overlapping imaging regions of a plurality of cameras.
- the in-vehicle device includes an image acquisition unit that acquires, through the camera lens, a captured image from a camera photographing the surrounding environment of the vehicle; an adhering matter detection unit that detects, based on the captured image, plural types of matter adhering to the camera lens; an image recognition unit that recognizes a predetermined object image present in the surrounding environment from the captured image; a detection result integration unit that calculates an integrated detection result by integrating the detection results for the plural types of adhering matter; and an operation control unit that controls the operation of the image recognition unit based on the integrated detection result.
- preferably, the detection result integration unit projects each detection result of the adhering matter detection unit onto the same coordinate system, sets composite coordinates on that coordinate system from the plurality of projected coordinates, and the operation control unit controls the operation of the image recognition unit based on the composite coordinates.
- the coordinate system includes a first coordinate axis related to the lens transmittance by the attached matter and a second coordinate axis related to the area of the attached matter attached to the camera lens.
- preferably, the detection results obtained by the adhering matter detection unit are the lens transmittance and the area of the adhering matter, and the coordinate region determined by these two detection results is set separately for each type of adhering matter detected.
- preferably, the operation control unit determines how to control the image recognition unit based on whether the composite coordinates are closer to the first coordinate axis or to the second coordinate axis.
- preferably, when the composite coordinates fall within an operation determination region predetermined within the ranges of the first and second coordinate axes, the operation control unit stops the recognition of the predetermined object image by the image recognition unit.
- preferably, when the composite coordinates fall within such a predetermined operation determination region, the operation control unit operates a removal device that removes adhering matter from the camera lens.
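A minimal sketch of the two decisions above, assuming the composite coordinate is a point (p, q) on the control map and the operation determination region is a rectangle; the function names and the mapping of axis proximity to countermeasure type are illustrative assumptions:

```python
def axis_preference(p, q):
    """If the composite coordinate (p, q) lies closer to the first
    (transmittance) axis, its distance q is smaller; if closer to the
    second (area) axis, p is smaller. Which countermeasure family each
    case favours is an illustrative assumption, not stated verbatim."""
    return "nondetection" if q < p else "false_detection"

def in_operation_region(p, q, region):
    """region = (p_min, p_max, q_min, q_max): a hypothetical rectangular
    operation determination region in which recognition is stopped or
    the removal device is run."""
    p_min, p_max, q_min, q_max = region
    return p_min <= p <= p_max and q_min <= q <= q_max
```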
- preferably, the adhering matter detection unit further calculates, for each of the plural types of adhering matter, a reliability that increases with the time over which that adhering matter is continuously detected, and the detection result integration unit calculates the integrated detection result using each reliability so calculated.
- preferably, the device further includes an environment detection unit that detects an environment lowering reliability in at least one of the light source environment around the vehicle, the travel path on which the vehicle travels, and the weather, and a reliability correction unit that corrects each reliability calculated by the adhering matter detection unit based on the environment so detected.
- preferably, the device further includes an information detection unit that acquires information on the traveling state, including at least the vehicle speed or the yaw rate, and a detection stop unit that, based on that information, stops detection of some of the plural types of adhering matter; the detection result integration unit then calculates the integrated detection result based on the detection results for the remaining types of adhering matter.
- preferably, the device further includes a stop determination unit that determines, based on the vehicle speed information acquired by the information detection unit, that the vehicle is stopped, and an initialization unit that initializes the detection results of the adhering matter detection unit when the stop determination unit determines that the vehicle has been stopped for longer than a predetermined time.
- according to the present invention, even complex contamination can be correctly detected as dirt, and the operation of the image recognition unit can be controlled appropriately.
- FIG. 1 is a block diagram of an in-vehicle device 1 according to an embodiment of the present invention.
- the in-vehicle device 1 shown in FIG. 1 is an ECU (Electronic Control Unit) mounted on a vehicle, and includes a CPU and a memory 10.
- the memory 10 includes a ROM, a working memory, and a RAM used as a buffer memory.
- the ROM stores a program executed by the CPU of the in-vehicle device 1 and information related to a control map, which will be described in detail later.
- the in-vehicle device 1 is connected to cameras 2a and 2b, an alarm output unit 3, and a removal control unit 4.
- the in-vehicle device 1 is connected to a CAN (Controller Area Network) and can acquire information from a car navigation device or from an ECU at a higher level than the in-vehicle device 1.
- the in-vehicle device 1 can acquire information related to the vehicle such as the traveling speed of the vehicle, the yaw rate, and the operation state of the wiper via the CAN.
- the cameras 2a and 2b are installed in each part of the vehicle such as a body and a bumper, for example.
- the camera 2a has a camera lens facing the front of the vehicle, and has an angle of view that allows a road surface in front of the vehicle and a road sign installed in front of the vehicle to be photographed simultaneously.
- the camera 2b has a camera lens facing the rear of the vehicle, and has an angle of view that allows the road surface behind the vehicle and the scenery behind the vehicle to be photographed simultaneously.
- the in-vehicle device 1 performs image recognition processing on the captured images of the cameras 2a and 2b, and recognizes images such as lane marks, other vehicles, pedestrians, road signs, and parking frames included in the captured images.
- a lane mark here means a lane boundary line, roadway center line, roadway outer line, or the like, formed by paint, road studs (Botts' dots), or the like.
- the in-vehicle device 1 detects, for example, that the vehicle is about to depart from the traveling lane or that the vehicle is about to collide with another vehicle based on the result of the image recognition process.
- Water droplets, mud, snow melting agent, etc. may adhere to the camera lenses of the cameras 2a and 2b.
- water, mud, snow-melting agent, and the like on the road are splashed up onto the vehicle and easily adhere to the camera lens.
- Water droplets adhering to the camera lens are raindrops or the like and often contain a large amount of impurities.
- when such droplets dry, water droplet marks may remain on the camera lens.
- water droplet traces and muddy water may also adhere to the camera lens and cause it to become clouded.
- in the following, matter adhering to the camera lens, such as water droplets, water droplet marks, and mud, together with contamination of the camera lens such as white turbidity, are collectively referred to as adhering matter.
- the alarm output unit 3 outputs an alarm by an alarm lamp, an alarm buzzer, an alarm display screen, etc. to the vehicle driver.
- the removal control unit 4 controls a removal device for removing deposits from the camera lenses of the cameras 2a and 2b, such as an air pump, a washer pump, and a wiper drive unit (not shown).
- the removal control unit 4 controls these removal devices based on commands from the in-vehicle device 1, performing compressed-air or washer-fluid injection and wiping with a lens wiper to remove adhering matter from the camera lenses of the cameras 2a and 2b.
- FIG. 2 is a functional block diagram of the in-vehicle device 1.
- by executing a program stored in the memory 10, the in-vehicle device 1 functions as a captured image acquisition unit 11, an information detection unit 12, an adhering matter detection unit 13, a detection result integration unit 14, an operation control unit 15, an image recognition unit 16, and an alarm control unit 17.
- the captured image acquisition unit 11 acquires captured images from the cameras 2a and 2b at a predetermined frame rate.
- the captured image acquisition unit 11 outputs each captured image acquired for each frame to the attached matter detection unit 13, the detection result integration unit 14, and the image recognition unit 16.
- the information detection unit 12 acquires information such as the speed of the host vehicle, the yaw rate, the steering angle, the operating state of the wiper, and the outside air temperature from a higher-level ECU via the CAN.
- the information detection unit 12 outputs the acquired information to the detection result integration unit 14.
- the adhering matter detection unit 13 includes a water droplet detection unit 131, a white turbidity detection unit 132, a water droplet trace detection unit 133, and a mud detection unit 134; from the captured images output by the captured image acquisition unit 11, it detects the various kinds of matter, such as water droplets, white turbidity, water droplet traces, and mud, adhering to the camera lenses of the cameras 2a and 2b.
- the water droplet detection unit 131, the white turbidity detection unit 132, the water droplet trace detection unit 133, and the mud detection unit 134 each output their detection results to the detection result integration unit 14; the details of each unit are described later.
- the detection result integration unit 14 includes a projection conversion unit 141, a composite coordinate setting unit 142, and an environment detection control unit 143.
- the detection result integration unit 14 integrates the detection results of the adhering matter detection unit 13, determines a single piece of information on the contamination state of each camera lens of the cameras 2a and 2b, and outputs it to the operation control unit 15. Details of the projection conversion unit 141, the composite coordinate setting unit 142, and the environment detection control unit 143 are described later.
- the operation control unit 15 controls the operation of the image recognition unit 16 based on the information regarding the contamination state of each camera lens of the cameras 2a and 2b output from the detection result integration unit 14.
- the image recognition unit 16 includes a lane recognition unit 161, a vehicle recognition unit 162, a pedestrian recognition unit 163, a sign recognition unit 164, and a parking frame recognition unit 165, and recognizes object images such as lane marks, other vehicles, pedestrians, road signs, and parking frames from the captured images output by the captured image acquisition unit 11.
- the operation control unit 15 determines which to apply to the image recognition unit 16: non-detection countermeasures, which make each recognition target easier to recognize, or false-detection countermeasures, which make each recognition target harder to recognize.
- the non-detection countermeasures and the false-detection countermeasures are each divided into three stages; hereinafter, each stage is referred to as a suppression mode.
- in the first suppression mode, control parameters related to the recognition sensitivity of the image recognition unit 16 are adjusted; the parameters to be adjusted are determined for each part of the image recognition unit 16 and are described later.
- in the second suppression mode, the control parameters related to the recognition sensitivity of the image recognition unit 16 are adjusted to a greater degree than in the first suppression mode, or the region of the captured image in which adhering matter is detected is excluded from the processing area of the image recognition unit 16.
- in the third suppression mode, removal of adhering matter by the removal control unit 4 and image recognition by the image recognition unit 16 are abandoned.
- in this way, the operation control unit 15 can control the recognition sensitivity of the image recognition unit 16 without contradiction.
- the alarm control unit 17 issues an alarm via the alarm output unit 3 based on the recognition result of the image recognition unit 16; for example, an alarm is output when it is determined that the vehicle is deviating from its driving lane, when it is determined that the vehicle may collide with another vehicle, or when image recognition by the image recognition unit 16 has been abandoned.
- (Water droplet detection unit 131) The operation of the water droplet detection unit 131 will be described with reference to FIGS. 3A and 3B. As shown in FIG. 3A, the water droplet detection unit 131 divides the image area of the captured image 30 into a plurality of blocks B(x, y); each block B(x, y) includes a plurality of pixels of the captured image.
- the water droplet detection unit 131 calculates, for each block B(x, y), a score S1(x, y) representing the water droplet adhesion time.
- the score S1(x, y) has an initial value of zero and is increased by a predetermined value every time the determination described below finds that a water droplet is attached within the block B(x, y).
- FIG. 3B shows an arbitrary pixel 31 as a point of interest.
- the water droplet detection unit 131 sets, as internal reference points 32, pixels at a predetermined distance (for example, 3 pix) from the point of interest 31 in the upward, upper-right, lower-right, upper-left, and lower-left directions, and sets, as external reference points 33, pixels separated by a further predetermined distance (for example, 3 pix) in the same five directions.
- the water droplet detection unit 131 calculates the luminance for each of the internal reference points 32 and each of the external reference points 33.
- the water droplet detection unit 131 determines, for each of the five directions, whether the luminance of the internal reference point 32 is higher than that of the external reference point 33; in other words, it determines whether the point of interest 31 is the center of a water droplet.
- when the internal reference point 32 is brighter in every direction, the water droplet detection unit 131 increases the score S1(x, y) of the block B(x, y) to which the point of interest 31 belongs by a predetermined value, for example 1.
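The five-direction comparison can be sketched as below, assuming a grayscale image indexed as img[y][x]; the distances of 3 and 6 pixels follow the example values above, while the function name is illustrative:

```python
# Five directions: up, upper-right, lower-right, upper-left, lower-left.
# (dx, dy) with y growing downward in image coordinates.
DIRS = [(0, -1), (1, -1), (1, 1), (-1, -1), (-1, 1)]

def is_droplet_center(img, x, y, inner=3, outer=6):
    """True when, in all five directions, the internal reference point is
    brighter than the external one -- the highlight pattern a water
    droplet on the lens produces around its center."""
    for dx, dy in DIRS:
        xi, yi = x + dx * inner, y + dy * inner
        xo, yo = x + dx * outer, y + dy * outer
        if img[yi][xi] <= img[yo][xo]:
            return False
    return True
```

When this test succeeds, the score S1(x, y) of the block containing (x, y) is incremented.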
- after making the above determination for all pixels in the captured image, the water droplet detection unit 131 acquires from the environment detection control unit 143 (described later) the elapsed time t1 since the scores were initialized; it then divides the score S1(x, y) of each block B(x, y) by the elapsed time t1 to calculate the time average S1(x, y)/t1.
- the water droplet detection unit 131 calculates the sum of the time averages S1(x, y)/t1 over all blocks B(x, y) and divides it by the total number of blocks in the captured image 30 to obtain the score average A_S1.
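A sketch of the averaging just described, with the per-block scores held in a hypothetical dictionary keyed by block coordinates:

```python
def score_average(scores, t1):
    """scores: block (x, y) -> accumulated score S1(x, y); t1: elapsed
    time since the scores were initialized. Each score is divided by t1,
    and the per-block time averages are averaged over all blocks to
    give the score average A_S1."""
    return sum(s / t1 for s in scores.values()) / len(scores)
```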
- the score average A_S1 grows frame by frame while water droplets remain; a large score average A_S1 therefore indicates a high probability that water droplets have been attached to the camera lens for a long time.
- the water droplet detection unit 131 generates the reliability R1 of its detection result using the score average A_S1.
- even in driving conditions where rain readily adheres, the score S1(x, y) fluctuates up and down as water flows down the lens; the score average A_S1 is therefore used to represent the amount of water currently on the lens.
- because the lens state inevitably changes, the score S1(x, y) can drop temporarily even while it is raining; for this reason, a count C_S1 of the time during which the score average A_S1 exceeds a predetermined value TA_S1 is also used.
- the water droplet detection unit 131 holds the time count C_S1 even when the score average A_S1 falls below the value TA_S1, and subtracts from the score S1(x, y) only when it stays below for a predetermined period.
- the water droplet detection unit 131 sets the reliability R1 to 1 when the time count C_S1 exceeds AS1THR; the closer the reliability R1 is to 1, the more certain the adhesion of the detected water droplets.
- the water droplet detection unit 131 detects, among all blocks B(x, y), those whose score S1(x, y) is equal to or greater than a predetermined value, and counts their number N1.
- for the pixels included in the N1 detected blocks, the water droplet detection unit 131 calculates the luminance difference between the internal reference point 32 and the external reference point 33, and computes the average A_D1 of these luminance differences.
- the water droplet detection unit 131 outputs the reliability R1, the number N1 of blocks whose score S1(x, y) is equal to or greater than the predetermined value, and the luminance difference average A_D1 to the detection result integration unit 14 as its detection results.
- (White turbidity detection unit 132) the white turbidity detection unit 132 sets an upper left detection region 41, an upper detection region 42, and an upper right detection region 43 at the positions where the horizon should appear in the captured image.
- the upper detection area 42 is set at a position that includes the vanishing points of two lane marks provided in parallel with each other on the road surface.
- vanishing points 47 of the two lane marks 46 are included inside the upper detection area 42.
- the upper left detection area 41 is set on the left side of the upper detection area 42
- the upper right detection area 43 is set on the right side of the upper detection area 42.
- the white turbidity detection unit 132 sets the lower left detection region 44 and the lower right detection region 45 at positions where lane marks are to be captured in the captured image.
- the white turbidity detection unit 132 performs horizontal edge detection processing on the pixels in each of the upper left detection region 41, the upper detection region 42, the upper right detection region 43, the lower left detection region 44, and the lower right detection region 45.
- in the edge detection for the upper left detection region 41, the upper detection region 42, and the upper right detection region 43, edges such as the horizon are detected.
- in the edge detection for the lower left detection region 44 and the lower right detection region 45, edges such as the lane marks 46 are detected.
- the white turbidity detection unit 132 calculates the edge strength for each pixel included in each of the detection regions 41 to 45.
- the white turbidity detection unit 132 calculates the average edge intensity A_E2 for each of the detection regions 41 to 45 and determines whether each A_E2 is less than a predetermined threshold ε.
- the white turbidity detection unit 132 judges a detection region whose average edge intensity A_E2 is less than the threshold ε to be cloudy.
- the white turbidity detection unit 132 counts the number N2 of detection regions whose average edge intensity A_E2 is less than the threshold ε, that is, the number of regions detected as cloudy. It then calculates, for each of the detection regions 41 to 45, the time t2 during which that region has continuously been judged cloudy, and computes the average duration t3 by dividing the sum of the times t2 over the detection regions 41 to 45 by the number of detection regions, that is, five. Finally, it converts the average duration t3 into the reliability R2 of its detection result.
- the longer the average white turbidity duration t3, the higher the reliability R2 that the lens is cloudy; since t3 is averaged over all five detection regions 41 to 45, the reliability of white turbidity increases as the duration grows in all five regions.
- the reliability R2 is expressed as a value between 0 and 1 and is set to 1 when TTHR3 ≤ t3, indicating a highly reliable cloudy state.
- the white turbidity detection unit 132 calculates the average edge strength A_A2 by dividing the sum of the average edge intensities A_E2 of the regions detected as cloudy by the number of detection regions, that is, five.
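The counting and averaging steps above can be sketched as follows; the function name is illustrative, and the input is the list of the five per-region averages A_E2:

```python
def detect_cloudiness(region_averages, eps):
    """region_averages: per-region mean edge intensities A_E2 for the
    five detection regions; eps: threshold below which a region is
    judged cloudy. Returns (N2, A_A2): the number of cloudy regions and
    the sum of their A_E2 values divided by the total region count."""
    cloudy = [a for a in region_averages if a < eps]
    return len(cloudy), sum(cloudy) / len(region_averages)
```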
- the white turbidity detection unit 132 outputs the reliability R2, the number N2 of detection regions detected as cloudy, and the average edge strength A_A2 to the detection result integration unit 14 as its detection results.
- (Water droplet trace detection unit 133) The operation of the water droplet trace detection unit 133 will be described. Like the water droplet detection unit 131, the water droplet trace detection unit 133 divides the image area of the captured image 30 into a plurality of blocks B(x, y), as shown in FIG. 3A.
- the water droplet trace detection unit 133 performs horizontal edge detection processing on the entire captured image 30 to generate an edge intensity for each pixel, and accumulates the edge intensities within each block B(x, y) into a score S3(x, y).
- the water droplet trace detection unit 133 acquires, from the environment detection control unit 143, the elapsed time T8(x, y) since the score S3(x, y) of each block B(x, y) exceeded its threshold, extracts this elapsed time for each block, calculates its average TA8, and uses it for the reliability R3.
- the water droplet trace detection unit 133 converts the on-screen average A_S3 of the scores S3(x, y) into an opacity; if water droplet traces remain continuously attached to the camera lens, the score average A_S3 increases, and the longer the duration TA8, the higher the probability that water droplet traces have been attached to the camera lens for a long time.
- a large score average A_S3 indicates a large luminance difference between the background and the water droplet traces, that is, a background that is hard to see through; as this value increases, the score is converted so that the opacity becomes higher on the map of the water droplet trace range 53 in the projection conversion unit 141 described later.
- the water droplet trace detection unit 133 detects, among all blocks B(x, y), those whose score S3(x, y) is equal to or greater than a predetermined value, and counts their number N3.
- the block count N3 is converted into the adhesion area of the staining.
- the water droplet trace detection unit 133 performs this conversion by dividing N3 by the on-screen area, that is, N3 / (screen area [pix]).
- the water droplet trace detection unit 133 outputs the reliability R3, the score average A_S3, and the number N3 of blocks whose score S3(x, y) is equal to or greater than the predetermined value to the detection result integration unit 14 as its detection results.
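The counting and area conversion above can be sketched as below; the function and parameter names are illustrative:

```python
def trace_adhesion_area(block_scores, score_threshold, screen_area_pix):
    """block_scores: block (x, y) -> S3 score. Counts the blocks whose
    score reaches the threshold (N3) and converts the count into an
    adhesion-area fraction by dividing by the on-screen area in pixels,
    as described above."""
    n3 = sum(1 for s in block_scores.values() if s >= score_threshold)
    return n3, n3 / screen_area_pix
```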
- (Mud detection unit 134) The operation of the mud detection unit 134 will be described. Like the water droplet detection unit 131, the mud detection unit 134 divides the image area of the captured image 30 into a plurality of blocks B(x, y), as shown in FIG. 3A.
- the mud detection unit 134 detects the luminance of each pixel of the captured image 30, calculates for each block B(x, y) the total luminance I_t(x, y) of the pixels it contains, and calculates, for each block, the difference ΔI(x, y) between the I_t(x, y) calculated for the captured image of the current frame and the I_{t-1}(x, y) calculated in the same way for the previous frame.
- the mud detection unit 134 detects blocks B(x, y) whose ΔI(x, y) is small and whose I_t(x, y) is smaller than those of the surrounding blocks, and increases the score S4(x, y) of each such block by a predetermined value, for example 1.
- after making the above determination for all pixels in the captured image, the mud detection unit 134 acquires from the environment detection control unit 143 the score accumulated over a predetermined period t5, and divides the score S4(x, y) of each block B(x, y) by t5 to calculate the time average S4(x, y)/t5.
- the mud detection unit 134 calculates the sum of the time averages S4(x, y)/t5 over all blocks B(x, y) and divides it by the total number of blocks in the captured image 30 to obtain the score average A_S4.
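The per-block scoring can be sketched as below; the "smaller than the surrounding blocks" test is simplified here to a comparison against the mean over all blocks, which is an illustrative assumption, as are the function and parameter names:

```python
def update_mud_scores(curr, prev, scores, di_thresh):
    """curr/prev: dict block -> total luminance I_t(x, y) for the
    current and previous frames; scores: dict block -> S4 score,
    updated in place. A block gains one point when its frame-to-frame
    luminance change is small and it is darker than the mean of all
    blocks (a stand-in for 'darker than the surrounding blocks')."""
    mean_i = sum(curr.values()) / len(curr)
    for b, i_t in curr.items():
        if abs(i_t - prev[b]) < di_thresh and i_t < mean_i:
            scores[b] = scores.get(b, 0) + 1
    return scores
```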
- the score average A_S4 roughly corresponds to the amount of mud adhering to the lens during the predetermined period.
- the score average A_S4 increases with each successively captured frame; a large value therefore indicates a high probability that much mud is attached to the camera lens. The score average A_S4 is also used as an index of the transmittance, and the time during which A_S4 exceeds a threshold TA_S4 is used as the reliability.
- the mud detection unit 134 detects, among all blocks B(x, y), those whose score S4(x, y) is equal to or greater than a predetermined value, and counts their number N4.
- the mud detection unit 134 outputs the reliability R4, the score average A_S4, and the number N4 of blocks whose score S4(x, y) is equal to or greater than the predetermined value to the detection result integration unit 14 as its detection results.
- as described above, the state of lens contamination is detected separately by the water droplet detection unit 131, the white turbidity detection unit 132, the water droplet trace detection unit 133, and the mud detection unit 134. If each detection result were used individually to correct control operations such as white line detection from the captured image, contradictory corrections could be instructed, making accurate control difficult. In this embodiment, therefore, the projection conversion unit 141 described below combines the plural detection results into a single control command, and the control operation is corrected by that command.
- each detection result is normalized, and the lens transmittance and the contaminant adhesion area on the lens surface are calculated as physical quantities common to each detection result.
- the control operation is corrected by obtaining a control command corresponding to these two physical quantities.
- the projection conversion unit 141 projects the detection result of each part of the attached matter detection unit 13 onto the coordinate space shown in FIG.
- the coordinate space shown in FIG. 5 is stored as a control map 50 in the memory 10.
- the control map 50 has a coordinate axis related to the transmittance of the lens dirt and a coordinate axis related to the adhesion area where the deposit is attached to the camera lens.
- the value of the coordinate axis regarding the transmittance of the lens dirt is referred to as the p coordinate
- the value of the coordinate axis regarding the adhesion area is referred to as the q coordinate.
- on the coordinate axis related to the transmittance, the lens becomes more opaque as the distance from the origin increases. Likewise, the adhesion area increases as the distance from the origin increases.
- the control map 50 is preset with a rectangular water drop range 51 and a mud range 54 indicated by broken lines, and a rectangular cloudiness range 52 and a water drop mark range 53 indicated by solid lines.
- each range expresses, in terms of the lens transmittance and the deposit area, the detection results of the plural types of deposits obtained by the respective units of the deposit detection unit 13; a coordinate range determined from these two detection results is set for each type of deposit detected by each unit of the deposit detection unit 13.
- the projection conversion unit 141 has projection functions f 1 and g 1 . These projection functions f 1 and g 1 project and convert the detection result of the water droplet detection unit 131 into coordinates on each coordinate axis.
- the projection function f 1 converts the brightness difference average A D1 into the p-coordinate p 1 within the water droplet range 51.
- the value of p1 increases as AD1 increases.
- alternatively, the projection conversion unit 141 may directly convert the detection result of the water droplet detection unit 131 into coordinates within a range such as the water droplet range 51. That is, the lens transmittance and the adhesion area may be obtained based on the detection result obtained by the water droplet detection unit 131, and these values may be directly converted into predetermined coordinate points of the coordinate system that defines the control map.
- the projection conversion unit 141 has projection functions f 2 and g 2 .
- the projection functions f 2 and g 2 project and convert the detection result of the cloudiness detection unit 132 into coordinates on each coordinate axis.
- the projection function f2 converts the value AA2 into the p-coordinate p2 within the cloudiness range 52.
- the value of p2 increases as AA2 increases.
- the projection function g2 converts the number N2 into the q-coordinate q2 within the cloudiness range 52.
- the value of q2 increases as N2 increases.
- alternatively, the projection conversion unit 141 may directly convert the detection result of the cloudiness detection unit 132 into coordinates within the cloudiness range 52. That is, the lens transmittance and the adhesion area may be obtained based on the detection result obtained by the white turbidity detection unit 132, and these values may be directly converted into predetermined coordinate points of the coordinate system that defines the control map.
- the projection conversion unit 141 includes projection functions f 3 and g 3 for projecting and converting the detection result of the water drop mark detection unit 133. These projection functions f 3 and g 3 also project and convert the detection result into coordinates on each coordinate axis, as in the above-described projection function.
- the projection function f3 converts the score average AS3 into the p-coordinate p3 within the water drop mark range 53.
- the value of p3 increases as AS3 increases.
- the projection function g3 converts the number N3 into the q-coordinate q3 within the water drop mark range 53.
- the value of q3 increases as N3 increases.
- alternatively, the projection conversion unit 141 may directly convert the detection result of the water drop mark detection unit 133 into coordinates within the water drop mark range 53. That is, the lens transmittance and the adhesion area may be obtained based on the detection result obtained by the water drop mark detection unit 133, and these values may be directly converted into predetermined coordinate points of the coordinate system that defines the control map.
- the projection conversion unit 141 has projection functions f 4 and g 4 that project and convert the detection result of the mud detection unit 134. These projection functions f 4 and g 4 also project and convert the detection result into coordinates on each coordinate axis, as in the above-described projection function.
- the projection function f4 converts the score average AS4 into the p-coordinate p4 within the mud range 54.
- the value of p4 increases as AS4 increases.
- alternatively, the projection conversion unit 141 may directly convert the detection result of the mud detection unit 134 into coordinates within the mud range 54. That is, the lens transmittance and the adhesion area may be obtained based on the detection result obtained by the mud detection unit 134, and these values may be directly converted into predetermined coordinate points of the coordinate system that defines the control map.
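The four projection function pairs f1/g1 through f4/g4 share one pattern: a detection value is mapped into the coordinate interval preset for that deposit type, with larger detection values giving larger coordinates. A minimal sketch of such a mapping is shown below; the linear scaling, the factory-function interface, and the numeric range endpoints are assumptions for illustration, since the patent does not specify the functional form.

```python
# Hypothetical sketch of a projection function: linearly map a detection value
# into the coordinate interval preset for one deposit type on the control map 50.
def make_projection(value_min, value_max, coord_min, coord_max):
    def project(value):
        # Clamp to the expected value range, then scale so that a larger
        # detection value yields a larger coordinate (monotonic mapping).
        value = min(max(value, value_min), value_max)
        ratio = (value - value_min) / (value_max - value_min)
        return coord_min + ratio * (coord_max - coord_min)
    return project

# e.g. a stand-in for f1: brightness difference average AD1 -> p-coordinate p1
# inside the water droplet range 51 (all endpoints here are illustrative).
f1 = make_projection(0.0, 100.0, 0.2, 0.5)
```

The same factory would cover every fi and gi by choosing the value range and the coordinate interval of the corresponding range 51 to 54.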
- the composite coordinate setting unit 142 integrates the coordinates (p1, q1), (p2, q2), (p3, q3), and (p4, q4), obtained by projecting and converting the detection results of the respective units of the attached matter detection unit 13 with the projection conversion unit 141, based on the reliabilities R1, R2, R3, and R4 output by the respective units of the deposit detection unit 13, and calculates one composite coordinate (P, Q), for example as the reliability-weighted average P = (R1p1 + R2p2 + R3p3 + R4p4) / (R1 + R2 + R3 + R4) and Q = (R1q1 + R2q2 + R3q3 + R4q4) / (R1 + R2 + R3 + R4).
- the composite coordinates (P, Q) are output to the operation control unit 15 as information regarding the contamination state of the camera lens.
- FIG. 6 is a diagram illustrating an operation example of the composite coordinate setting unit 142.
- coordinates 61, 62, 63, and 64 obtained by projecting and converting the detection results of the water droplet detection unit 131, the white turbidity detection unit 132, the water droplet trace detection unit 133, and the mud detection unit 134, respectively, are shown.
- the composite coordinate (P, Q) is set to the center position 65 of the coordinates 61, 62, 63, and 64.
- the center position 65 is the center of gravity of the coordinates 61, 62, 63, and 64 weighted by the reliabilities.
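The reliability-weighted center of gravity can be sketched as follows; the function name and list-based interface are assumptions for illustration, but the arithmetic is the ordinary weighted centroid that "center of gravity based on the reliability" describes.

```python
# Sketch (assumed naming) of the composite coordinate computation: each projected
# coordinate (pi, qi) is weighted by its reliability Ri, and the weighted mean
# over all deposit types gives the single composite coordinate (P, Q).
def composite_coordinate(coords, reliabilities):
    """coords: list of (p, q) tuples; reliabilities: list of R values, same order."""
    total_r = sum(reliabilities)
    p = sum(r * c[0] for r, c in zip(reliabilities, coords)) / total_r
    q = sum(r * c[1] for r, c in zip(reliabilities, coords)) / total_r
    return p, q

# A high-reliability result pulls the composite coordinate toward itself.
P, Q = composite_coordinate([(1.0, 0.0), (0.0, 1.0)], [3.0, 1.0])
```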
- based on the composite coordinate (P, Q), the operation control unit 15 determines which of the false detection countermeasure and the non-detection countermeasure is to be applied to the image recognition unit 16, and in which stage of the suppression mode it is to be executed.
- FIG. 7 shows the coordinate 77 farthest from the origin among the coordinates projected and converted by the projection conversion unit 141, together with three quarter circles 78, 79, and 80 centered on the coordinate 77.
- the coordinate 77 is referred to as the worst contamination point 77.
- the radii of the quarter circles 78, 79, and 80 decrease in that order.
- FIG. 7 also shows a line segment 81 that bisects each of the quarter circles 78, 79, and 80, a line segment 82 parallel to the coordinate axis related to the transmittance, and a line segment 83 parallel to the coordinate axis related to the adhesion area, each extending from the worst contamination point 77.
- the operation determination region 71 lies on the side closer to the coordinate axis related to the transmittance than to the coordinate axis related to the adhesion area, and is the region enclosed by the circumference of the quarter circle 78, the circumference of the quarter circle 79, the line segment 81, and the line segment 83.
- when the composite coordinate (P, Q) is within the operation determination region 71, the operation control unit 15 outputs, to the image recognition unit 16, a command to execute processing in the first suppression mode as a false detection countermeasure.
- the operation determination region 73 lies on the side closer to the coordinate axis related to the transmittance than to the coordinate axis related to the adhesion area, and is the region enclosed by the circumference of the quarter circle 79, the circumference of the quarter circle 80, the line segment 81, and the line segment 83.
- when the composite coordinate (P, Q) is within the operation determination region 73, the operation control unit 15 outputs, to the image recognition unit 16, a command to execute processing in the second suppression mode as a false detection countermeasure.
- the operation determination region 75 lies on the side closer to the coordinate axis related to the transmittance than to the coordinate axis related to the adhesion area, and is the region enclosed by the circumference of the quarter circle 80, the line segment 81, and the line segment 83.
- when the composite coordinate (P, Q) is within the operation determination region 75, the operation control unit 15 outputs, to the image recognition unit 16, a command to execute processing in the third suppression mode as a false detection countermeasure.
- the operation determination region 72 lies on the side closer to the coordinate axis related to the adhesion area than to the coordinate axis related to the transmittance, and is the region enclosed by the circumference of the quarter circle 78, the circumference of the quarter circle 79, the line segment 81, and the line segment 82.
- when the composite coordinate (P, Q) is within the operation determination region 72, the operation control unit 15 outputs, to the image recognition unit 16, a command to execute processing in the first suppression mode as a non-detection countermeasure.
- the operation determination region 74 lies on the side closer to the coordinate axis related to the adhesion area than to the coordinate axis related to the transmittance, and is the region enclosed by the circumference of the quarter circle 79, the circumference of the quarter circle 80, the line segment 81, and the line segment 82.
- when the composite coordinate (P, Q) is within the operation determination region 74, the operation control unit 15 outputs, to the image recognition unit 16, a command to execute processing in the second suppression mode as a non-detection countermeasure.
- the operation determination region 76 lies on the side closer to the coordinate axis related to the adhesion area than to the coordinate axis related to the transmittance, and is the region enclosed by the circumference of the quarter circle 80, the line segment 81, and the line segment 82.
- when the composite coordinate (P, Q) is within the operation determination region 76, the operation control unit 15 outputs, to the image recognition unit 16, a command to execute processing in the third suppression mode as a non-detection countermeasure.
- the operation control unit 15 executes either the false detection countermeasure or the non-detection countermeasure based on whether the composite coordinate (P, Q) is closer to the coordinate axis related to the transmittance or the coordinate axis related to the adhesion area.
- the operation control unit 15 determines in which suppression mode the countermeasures are executed based on the distance from the worst contamination point 77 to the composite coordinates (P, Q).
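The two-part decision above can be sketched as follows. This is a simplified illustration under stated assumptions: the side-of-bisector test is reduced to comparing the offsets from the worst contamination point along the two axes, the quarter circles are represented only by their radii, and all names are hypothetical.

```python
import math

# Sketch of the operation control decision: the countermeasure type follows which
# coordinate axis the composite coordinate (P, Q) lies closer to (i.e. which side
# of the 45-degree bisector, line segment 81, through the worst contamination
# point 77), and the suppression stage follows the distance from that point.
def decide_mode(p, q, worst, radii):
    """worst: worst contamination point coordinates; radii: (r78, r79, r80), decreasing."""
    # Side of the bisector through the worst point: closer to the transmittance
    # axis -> false detection countermeasure; closer to the adhesion-area axis
    # -> non-detection countermeasure.
    countermeasure = "false_detection" if (q - worst[1]) <= (p - worst[0]) else "non_detection"
    d = math.hypot(p - worst[0], q - worst[1])
    r78, r79, r80 = radii
    if d <= r80:
        stage = 3   # inside the smallest quarter circle 80: strongest suppression
    elif d <= r79:
        stage = 2   # between quarter circles 80 and 79
    elif d <= r78:
        stage = 1   # between quarter circles 79 and 78
    else:
        stage = 0   # outside all quarter circles: no suppression
    return countermeasure, stage
```

A coordinate far from the worst contamination point thus gets a milder stage, matching the description that the stage is chosen by the distance from point 77.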
- FIG. 8 is a control block diagram related to the environment detection control unit 143.
- the environment detection control unit 143 includes a light source environment detection unit 144, a traveling road environment detection unit 145, a weather detection unit 146, a time control unit 147, and a detection result control unit 148.
- the light source environment detection unit 144 detects a light source environment in which the reliabilities R1, R2, R3, and R4 of the detection results of the attached matter detection unit 13 are likely to decrease. For example, the light source environment detection unit 144 detects a state in which the camera 2a or 2b is backlit by the sun, road surface reflections, the headlights of a following vehicle, or the like.
- FIG. 9 is a flowchart regarding the processing of the light source environment detection unit 144.
- the light source environment detection unit 144 acquires captured images from the cameras 2a and 2b.
- the light source environment detection unit 144 sets a processing region used for detection of the light source environment on the captured image.
- the light source environment detection unit 144 calculates the luminance for each pixel included in the processing region set in step S410, and generates a histogram of the luminance.
- in step S430, the light source environment detection unit 144 determines whether or not, in the histogram generated in step S420, the sum of the frequencies at or above a predetermined luminance value exceeds a predetermined threshold.
- when the determination in step S430 is affirmative, that is, when the sum of the frequencies at or above the predetermined luminance value exceeds the predetermined threshold, the light source environment detection unit 144 proceeds to step S440. When the determination in step S430 is negative, the light source environment detection unit 144 returns to step S400 and acquires the captured image of the next frame.
- in step S440, the light source environment detection unit 144 determines whether the affirmative determination in step S430 has continued for a predetermined time or more.
- the light source environment detection unit 144 makes a negative determination in step S440 until the affirmative determination in step S430 has continued for the predetermined time, returning to step S400 to acquire the captured image of the next frame. When the affirmative determination in step S430 has continued for the predetermined time or more, the light source environment detection unit 144 proceeds to step S450.
- in step S450, the light source environment detection unit 144 detects a high luminance region from the entire captured image and adjusts the reliabilities R1, R2, R3, and R4 of the detection results of the attached matter detection unit 13. The light source environment detection unit 144 then returns to step S400 and acquires the captured image of the next frame.
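Steps S420 to S440 can be sketched as follows; the per-frame pixel lists and all parameter names are assumptions for illustration, and the "predetermined time" is modeled as a count of consecutive frames.

```python
# Hypothetical sketch of the backlight check: count pixels of the processing
# region at or above a luminance threshold (the tail of the luminance histogram,
# step S420/S430) and report a backlight state once that count has exceeded the
# limit for enough consecutive frames (the persistence check of step S440).
def detect_backlight(frames, luminance_thresh, count_thresh, required_frames):
    """frames: iterable of lists of pixel luminances for the processing region."""
    consecutive = 0
    for pixels in frames:
        # Sum of histogram frequencies at or above the luminance threshold.
        bright = sum(1 for v in pixels if v >= luminance_thresh)
        consecutive = consecutive + 1 if bright > count_thresh else 0
        if consecutive >= required_frames:
            return True   # the affirmative determination has persisted
    return False
```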
- FIG. 10 is a diagram illustrating an example of the processing area set in step S410.
- the processing area set by the light source environment detection unit 144 changes depending on whether the light source environment is detected in the daytime or at night.
- FIG. 10 illustrates a processing area 501 set in the daytime for the captured image 500.
- the processing area 501 is set so that the sky is reflected.
- at night, the light source environment detection unit 144 limits the processing area to a range in which the headlights of a following vehicle appear.
- the travel path environment detection unit 145 detects an environment such as a travel path and a background in which the reliability R 1 , R 2 , R 3 , and R 4 of the detection result of the attached matter detection unit 13 are likely to decrease.
- the traveling road environment detection unit 145 detects that the host vehicle is traveling in a place where deposits are likely to adhere to the camera lens, such as on a wet road or off-road.
- FIG. 11 is an example of a flowchart related to the processing of the traveling road environment detection unit 145.
- the traveling road environment detection unit 145 acquires captured images from the cameras 2a and 2b.
- the traveling road environment detection unit 145 acquires information regarding the position and shape of the lane mark from the lane recognition unit 161.
- the traveling road environment detection unit 145 acquires information related to the high luminance area detected by the light source environment detection unit 144.
- the traveling road environment detection unit 145 acquires information such as the outside air temperature, the speed of the host vehicle, the yaw rate, and the steering angle from the information detection unit 12.
- the traveling road environment detection unit 145 detects that the host vehicle is traveling on a wet road surface based on the information on the position and shape of the lane mark acquired in step S510, the information on the high luminance area acquired in step S520, and the outside air temperature, host vehicle speed, yaw rate, and steering angle acquired in step S530. For example, the traveling road environment detection unit 145 determines the image area of the road surface based on the information on the position and shape of the lane mark, and detects a wet road surface by estimating a high luminance area detected within the road surface image area to be a puddle or a frozen road surface.
- the traveling road environment detection unit 145 detects that the host vehicle is traveling based on information related to the speed of the host vehicle. Then, the traveling road environment detection unit 145 may determine that the wet road surface is a frozen road surface when the temperature is below freezing based on the information related to the outside air temperature.
- in step S550, the traveling road environment detection unit 145 detects off-road traveling based on the facts that information on the position and shape of the lane mark could not be acquired in step S510 while the host vehicle is traveling and that few structures such as buildings appear in the captured image acquired in step S500. The traveling road environment detection unit 145 then returns to step S500 and acquires the captured image of the next frame.
- the weather detection unit 146 acquires information related to the weather around the host vehicle.
- FIG. 12 is a flowchart regarding the processing of the weather detection unit 146.
- the weather detection unit 146 acquires information such as the wiper operating state and the outside air temperature from the information detection unit 12.
- in step S610, the weather detection unit 146 acquires information related to the high luminance area detected by the light source environment detection unit 144.
- in step S620, the weather detection unit 146 detects the weather around the host vehicle based on the various information acquired in steps S600 and S610, and outputs information about the weather as a detection result. For example, when the wiper on the front window of the host vehicle has operated a predetermined number of times within a predetermined time, the weather is determined to be rainy. Further, for example, when the light source environment detection unit 144 detects a high luminance area on the road surface a predetermined number of times within a predetermined time, the sky is determined to be clear. After determining the weather, the weather detection unit 146 returns to the process of step S600.
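The rule-based determination of step S620 can be sketched as follows; the counts-within-a-window interface, the label strings, and the thresholds are assumptions for illustration.

```python
# Hypothetical sketch of step S620: rainy when the wiper has operated at least a
# given number of times within the observation window, clear when high luminance
# areas were detected on the road surface often enough within that window.
def detect_weather(wiper_count, road_glare_count, wiper_thresh, glare_thresh):
    if wiper_count >= wiper_thresh:
        return "rainy"
    if road_glare_count >= glare_thresh:
        return "clear"
    return "unknown"   # neither rule fires; no weather determination is output
```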
- the time control unit 147 measures the following various times.
- (a) Operating time of each unit of the image recognition unit 16
  (b) Elapsed time since the image recognition unit 16 started the non-detection countermeasure or the false detection countermeasure in each stage of the suppression mode
  (c) Elapsed time since the removal control unit 4 started removing the attached matter
  (d) Elapsed time since each unit of the image recognition unit 16 gave up image recognition
  (e) Times t1, t2, t4, and t5 since the detection results of the attached matter detection unit 13 were initialized (reset)
  (f) Elapsed time since detection by the attached matter detection unit 13 was stopped
- since each unit of the attached matter detection unit 13 presupposes that the host vehicle is traveling, the detection process is stopped when the speed of the host vehicle is equal to or lower than a predetermined speed.
- when a predetermined time or more elapses after the detection process of the attached matter detection unit 13 is stopped, the reliability of the detection results of each unit of the attached matter detection unit 13 decreases. For example, since water droplets may dry and disappear within several hours, the detection result of the water droplet detection unit 131 is no longer reliable once several hours have passed since the water droplet detection unit 131 was stopped.
- the in-vehicle device 1 initializes the detection result of the water droplet detection unit 131 when the elapsed time t6 timed by the time control unit 147 reaches a predetermined time.
- the detection result control unit 148 performs control such as correction and initialization on the detection result of each part of the attached matter detection unit 13.
- FIG. 13 is a flowchart relating to the control of the detection results of the attached matter detection unit 13. In step S700, the detection result control unit 148 acquires each detection result from each unit of the attached matter detection unit 13.
- the detection result control unit 148 acquires the processing results from the light source environment detection unit 144, the traveling road environment detection unit 145, and the weather detection unit 146: for example, the information on the high luminance region extracted by the light source environment detection unit 144 in step S450 (FIG. 9), the detection results of the traveling road environment detection unit 145 in steps S540 and S550 (FIG. 11), and the information on the weather determined by the weather detection unit 146 in step S620 (FIG. 12).
- in step S720, the detection result control unit 148 determines whether or not the light source environment detection unit 144 has detected a high luminance area. When the determination in step S720 is affirmative, that is, when the light source environment detection unit 144 has detected a high luminance region, the detection result control unit 148 advances the process to step S730 and corrects the detection results of each unit of the attached matter detection unit 13.
- the light source environment detection unit 144 extracts a bright light source behind the vehicle, such as the setting or rising sun in the daytime, or a light source that also causes road surface reflection, such as the headlights of a following vehicle at night. Based on these light sources, the detection result control unit 148 adjusts the reliabilities according to the nature of each detection logic.
- the reliability R1 of water droplet detection is lowered to, for example, about 2/3 of its normal value, because the road surface reflection region may be mistakenly detected as water droplets.
- the score and the reliability are adjusted so as not to use the areas directly around high luminance sources such as the sun, which backlights the camera in the daytime, and the headlights of following vehicles.
- the reliability R2 is reduced by, for example, about 20%.
- the detection result control unit 148 then proceeds to the process of step S740.
- in step S740, the detection result control unit 148 determines whether the traveling road environment detection unit 145 has detected a wet road surface or off-road traveling. When the determination in step S740 is affirmative, that is, when a wet road surface or off-road traveling has been detected, the detection result control unit 148 proceeds to step S750 and corrects the detection results of each unit of the attached matter detection unit 13. For example, when a wet road surface has been detected, the reliability R1 of the detection result of the water droplet detection unit 131 is increased by, for example, 50%; when off-road traveling has been detected, the reliability R4 of the detection result of the mud detection unit 134 is increased by, for example, 50%. After the correction, the detection result control unit 148 proceeds to the process of step S760. When a negative determination is made in step S740, the detection result control unit 148 also proceeds to the process of step S760.
- in step S760, the detection result control unit 148 corrects the detection results of each unit of the attached matter detection unit 13 based on the detection result of the weather detection unit 146.
- when the weather around the host vehicle is rainy, the reliability R1 of the detection result of the water droplet detection unit 131 is increased.
- when the weather around the host vehicle changes from rainy to sunny, the reliability R3 of the detection result of the water droplet trace detection unit 133 is increased.
- for example, the reliability R1 of water droplet detection is increased by 50% when rainy weather is determined, and is reduced by 20% when fine weather is determined.
- when the outside air temperature is 0 °C or lower, the wet road surface may be determined to be frozen.
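The example corrections of steps S730 to S760 can be combined into one sketch; the percentages are the examples given in the text, while the function name, the dict-based interface, and the flag inputs are assumptions for illustration (the transition-based R3 adjustment is omitted for brevity).

```python
# Hypothetical sketch of the reliability corrections: scale the reliabilities
# R1-R4 according to the detected light source environment, road environment,
# and weather, using the example percentages from the description.
def correct_reliabilities(r, backlight, wet_road, off_road, weather):
    """r: dict with keys 'R1', 'R2', 'R3', 'R4'; returns a corrected copy."""
    r = dict(r)
    if backlight:
        r["R1"] *= 2.0 / 3.0   # road reflections can be mistaken for water droplets
        r["R2"] *= 0.8         # cloudiness reliability reduced by about 20%
    if wet_road:
        r["R1"] *= 1.5         # wet road: water droplet result more credible
    if off_road:
        r["R4"] *= 1.5         # off-road: mud result more credible
    if weather == "rainy":
        r["R1"] *= 1.5         # rain: water droplet result more credible
    elif weather == "clear":
        r["R1"] *= 0.8         # fine weather: water droplet result less credible
    return r
```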
- in step S770, the detection result control unit 148 determines whether or not to initialize the detection results of each unit of the attached matter detection unit 13.
- an affirmative determination is made in step S770, for example, when a predetermined time or more has elapsed since the light source environment detection unit 144 detected the high luminance region, or when the host vehicle departs after having been determined to be stopped for a predetermined time or more; specifically, for example, when the traveling speed of the host vehicle exceeds a predetermined speed such as 10 km/h and that state continues for a predetermined time.
- when an affirmative determination is made in step S770, the detection result control unit 148 proceeds to step S780, initializes each detection result of the attached matter detection unit 13, and advances the process to step S790. When the determination in step S770 is negative, the detection result control unit 148 proceeds directly to the process of step S790.
- in step S790, the detection result control unit 148 determines whether the vehicle speed is equal to or lower than a predetermined speed, for example, 10 km/h. When an affirmative determination is made in step S790, that is, when the vehicle speed is equal to or lower than the predetermined speed, the detection result control unit 148 proceeds to the process of step S810. When a negative determination is made in step S790, the detection result control unit 148 proceeds to the process of step S800.
- in step S800, the detection result control unit 148 determines whether the yaw rate of the vehicle is equal to or greater than a predetermined value. When an affirmative determination is made in step S800, that is, when the yaw rate is equal to or greater than the predetermined value, the detection result control unit 148 proceeds to the process of step S810. When a negative determination is made in step S800, the detection result control unit 148 proceeds to the process of step S820.
- in step S810, the detection result control unit 148 stops the detection of the attached matter by the water droplet detection unit 131 and the mud detection unit 134.
- in step S820, the detection result control unit 148 outputs each detection result of the attached matter detection unit 13, after the corrections and the like described above, to the projection conversion unit 141.
- when detection by the water droplet detection unit 131 and the mud detection unit 134 has been stopped in step S810, their detection results are not output. In this case, the projection conversion unit 141 projects only the output detection results onto the control map 50, and the composite coordinate setting unit 142 sets the composite coordinate (P, Q) using only the components of those projection results.
- FIG. 14 is a flowchart regarding the processing of the lane recognition unit 161.
- the lane recognition unit 161 acquires captured images from the cameras 2a and 2b.
- the lane recognition unit 161 extracts feature points corresponding to the lane marks drawn on the left and right of the traveling lane of the host vehicle from the captured image acquired in step S10. For example, the lane recognizing unit 161 sets an extraction region in a predetermined portion in the captured image, and extracts an edge point whose luminance change is equal to or greater than a predetermined threshold in the extraction region as a feature point.
- FIG. 15 shows an example of feature points extracted from the photographed image.
- a captured image 90 shown in FIG. 15 is divided into a road surface image area 92 where the road surface is captured and a background image area 93 where the background is captured.
- as shown in FIG. 15, the lane recognition unit 161 sets, in the captured image 90, an extraction region 94 corresponding to the inside of the lane marking on the right side of the host vehicle and an extraction region 95 corresponding to the inside of the lane marking on the left side of the host vehicle.
- edge points are detected by comparing the brightness of adjacent pixels in these extraction regions 94 and 95 and extracted as feature points 96.
- a plurality of feature points 96 are extracted along the inner contour lines of the left and right lane marks.
- in step S30 of FIG. 14, the lane recognition unit 161 recognizes, as a lane mark, a feature point group in which a predetermined number Mth or more of the feature points extracted in step S20 are arranged on the same straight line.
- eleven feature points 96 are arranged in the same straight line in each of the extraction regions 94 and 95.
- when the predetermined number Mth is, for example, 5, the feature point groups in the extraction regions 94 and 95 illustrated in FIG. 15 are recognized as lane marks.
- the lane recognition unit 161 outputs information related to the lane mark recognized in step S30 to the alarm control unit 17 and the like.
- in step S40, the lane recognition unit 161 calculates the time average of the number of feature points included in the feature point groups recognized as lane marks in step S30.
- the lane recognition unit 161 stores, in the memory 10, a history of the number of feature points included in each feature point group recognized as a lane mark in previous frames, and calculates the time average of the number of feature points using this history.
- in step S50, the lane recognition unit 161 calculates an index v representing the visibility of the lane mark based on the time average of the number of feature points calculated in step S40.
- in step S60, the lane recognition unit 161 determines whether or not the index v indicating the visibility of the lane mark is less than a predetermined threshold value Vth. When the determination in step S60 is affirmative, that is, when the index v is less than the predetermined threshold value Vth, the lane recognition unit 161 proceeds to the process of step S70.
- when the determination in step S60 is negative, that is, when the index v is equal to or greater than the predetermined threshold value Vth, the lane recognition unit 161 proceeds to the process of step S10 and acquires the captured image of the next frame from the cameras 2a and 2b.
- in step S70, the lane recognition unit 161 outputs a command to the alarm control unit 17 so as not to output an alarm from the alarm output unit 3, and then proceeds to the process of step S10.
- in step S10, the lane recognition unit 161 acquires the captured image of the next frame from the cameras 2a and 2b.
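Steps S30 through S60 can be sketched as follows. This is a simplified illustration: the collinearity grouping of step S30 is abstracted into per-line point counts supplied by the caller, the index v is taken directly as the time average of the matched point counts, and all names are assumptions.

```python
from collections import deque

# Hypothetical sketch of the lane-mark recognition and visibility check: a
# candidate line counts as a lane mark when it holds at least Mth feature points,
# and the visibility index v is the time average of the matched point counts
# over the stored frame history (the role of the history in the memory 10).
class LaneVisibility:
    def __init__(self, m_th, v_th, history_len):
        self.m_th = m_th                 # minimum collinear points for a lane mark
        self.v_th = v_th                 # threshold below which alarms are suppressed
        self.history = deque(maxlen=history_len)

    def update(self, point_counts):
        """point_counts: feature points found on each candidate line this frame."""
        recognized = [n for n in point_counts if n >= self.m_th]
        self.history.append(sum(recognized) / max(len(recognized), 1))
        v = sum(self.history) / len(self.history)   # time average over frames
        return v >= self.v_th            # False -> command the alarm to be withheld

lv = LaneVisibility(m_th=5, v_th=8.0, history_len=10)
```

Raising m_th or v_th makes recognition and alarming harder (the false detection countermeasure); lowering them makes both easier (the non-detection countermeasure).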
- in the false detection countermeasure in the first suppression mode, the operation control unit 15 increases the set value of the predetermined number Mth. As a result, in step S30, it becomes more difficult for the lane recognition unit 161 to recognize a feature point group arranged on the same straight line as a lane mark. Further, the set value of the threshold value Vth is increased to make it more difficult for a false alarm to be output.
- The set value of the predetermined number Mth and the set value of the threshold Vth may be changed based on the position of the composite coordinate (P, Q).
- The operation control unit 15 decreases the set value of the predetermined number Mth.
- As a result, the lane recognition unit 161 can more easily recognize a feature point group arranged on the same straight line as a lane mark.
- The set value of the threshold Vth is also reduced so that the alarm control unit 17 can more easily output an alarm using the alarm output unit 3.
- The set value of the predetermined number Mth and the set value of the threshold Vth may be changed based on the position of the composite coordinate (P, Q).
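One way such a composite-coordinate-dependent threshold adjustment could look is sketched below. All base values and scaling factors are assumptions for illustration; the patent only states that Mth and Vth may change with the position of (P, Q).

```python
M_TH_BASE = 10   # base number of collinear feature points required (assumed)
V_TH_BASE = 0.5  # base visibility threshold (assumed)

def adjusted_thresholds(p, q, false_detection_mode=True):
    """Scale Mth and Vth from the composite coordinate (P, Q).

    p relates to lost lens transmittance, q to adhesion area;
    both are assumed normalized to [0, 1].
    """
    if false_detection_mode:
        # False-detection countermeasure: raise both thresholds as
        # transmittance-type contamination grows.
        m_th = M_TH_BASE * (1.0 + p)
        v_th = V_TH_BASE * (1.0 + p)
    else:
        # Non-detection countermeasure: lower the thresholds as the
        # adhesion area grows, so lane marks are still found.
        m_th = M_TH_BASE * (1.0 - 0.5 * q)
        v_th = V_TH_BASE * (1.0 - 0.5 * q)
    return m_th, v_th
```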
- In the false-detection countermeasure of the second suppression mode, the operation control unit 15 increases the set value of the predetermined number Mth beyond that of the first suppression mode. As a result, it becomes even harder for the lane recognition unit 161 to recognize a feature point group arranged on the same straight line as a lane mark. In addition, the in-vehicle device 1 either excludes the region where attached matter was detected from the image region subjected to the feature point extraction process in step S20, or removes the feature points extracted from that region from the feature points extracted in step S20.
- In the non-detection countermeasure of the second suppression mode, the operation control unit 15 reduces the set value of the predetermined number Mth below that of the first suppression mode. This makes it easier for the lane recognition unit 161 to recognize a feature point group arranged on the same straight line as a lane mark.
- In the false-detection countermeasure and the non-detection countermeasure of the third suppression mode, the operation control unit 15 outputs a command to the removal control unit 4 to remove the attached matter from the camera lens. If the contamination state of the camera lens does not improve even after the removal control unit 4 performs the removal operation, the operation control unit 15 gives up lane recognition by the lane recognition unit 161.
- The process by which the parking frame recognition unit 165 recognizes a parking frame is the same as the process by which the lane recognition unit 161 recognizes a lane mark, described with reference to FIG. 14.
- The countermeasures that the parking frame recognition unit 165 implements in the false-detection and non-detection countermeasures of each suppression mode may be the same as those implemented by the lane recognition unit 161 in the false-detection and non-detection countermeasures of each suppression mode.
- FIG. 16 is a flowchart regarding the processing of the vehicle recognition unit 162.
- In step S110, the vehicle recognition unit 162 acquires captured images from the cameras 2a and 2b.
- In step S120, the vehicle recognition unit 162 performs edge detection processing on the captured image acquired in step S110 and generates an edge image by binarization using a predetermined luminance difference threshold Ith.
- In step S130, the vehicle recognition unit 162 performs pattern recognition processing on the edge image generated in step S120 and detects images presumed to be other vehicles.
- In step S140, the vehicle recognition unit 162 determines whether or not an image presumed to be another vehicle has been detected continuously in step S130 for a predetermined number of frames Fth or more.
- If a negative determination is made in step S140, that is, if an image presumed to be another vehicle has not been detected continuously for the predetermined number of frames Fth or more, the vehicle recognition unit 162 proceeds to step S110 and acquires a captured image of the next frame.
- If an affirmative determination is made in step S140, that is, if an image presumed to be another vehicle has been detected continuously for the predetermined number of frames Fth or more, the vehicle recognition unit 162 proceeds to step S150.
- In step S150, the vehicle recognition unit 162 recognizes as another vehicle an image that has been presumed to be another vehicle continuously for the predetermined number of frames Fth or more. Thereafter, the vehicle recognition unit 162 proceeds to step S110 and acquires a captured image of the next frame.
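The persistence test of steps S130-S150 can be sketched as a small state holder: a candidate must survive Fth consecutive frames before being recognized as another vehicle. The value of Fth and the class shape are illustrative assumptions.

```python
F_TH = 5  # required consecutive frames Fth (assumed value)

class VehicleRecognizer:
    """Tracks how many consecutive frames a vehicle candidate has been seen."""

    def __init__(self, f_th=F_TH):
        self.f_th = f_th
        self.consecutive = 0

    def update(self, candidate_detected: bool) -> bool:
        """Return True once a candidate persists for f_th consecutive frames."""
        self.consecutive = self.consecutive + 1 if candidate_detected else 0
        return self.consecutive >= self.f_th
```

Requiring persistence over several frames is what lets the first suppression mode reduce false detections simply by raising Fth.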
- The operation control unit 15 increases the set value of the predetermined number of frames Fth.
- As a result, the vehicle recognition unit 162 is less likely to recognize an image presumed to be another vehicle in step S130 as another vehicle, so false detections of other vehicles are reduced.
- The set value of the predetermined number of frames Fth may be changed based on the position of the composite coordinate (P, Q).
- In the non-detection countermeasure of the first suppression mode, the operation control unit 15 decreases the set value of the predetermined luminance difference threshold Ith.
- The set value of the predetermined luminance difference threshold Ith may be changed based on the position of the composite coordinate (P, Q).
- The operation control unit 15 excludes the region where attached matter was detected from the image regions processed in steps S120 and S130.
- The operation control unit 15 restricts the other vehicles to be detected by the vehicle recognition unit 162 to vehicles that pose a high danger to the host vehicle, and excludes the remaining vehicles from the detection targets. For example, from the captured image of the camera 2a, only preceding vehicles traveling in the host vehicle's lane are set as detection targets. Also, for example, from the captured image of the camera 2b, only other vehicles traveling in the host vehicle's lane and approaching the host vehicle are set as detection targets.
- The operation control unit 15 reduces the set value of the predetermined luminance difference threshold Ith further than in the first suppression mode.
- The vehicle recognition unit 162 uses another vehicle recognition method that can detect the features of other vehicles even when the contrast of the captured image is low.
- In the false-detection countermeasure and the non-detection countermeasure of the third suppression mode, the operation control unit 15 outputs a command to the removal control unit 4 to remove the attached matter from the camera lens. If the contamination state of the camera lens does not improve even after the removal control unit 4 performs the removal operation, the operation control unit 15 gives up recognition of other vehicles by the vehicle recognition unit 162.
- The process by which the pedestrian recognition unit 163 and the sign recognition unit 164 recognize pedestrians and road signs is the same as the process by which the vehicle recognition unit 162 recognizes other vehicles, described with reference to FIG. 16.
- The countermeasures that the pedestrian recognition unit 163 and the sign recognition unit 164 implement in the false-detection and non-detection countermeasures of each suppression mode may be the same as those implemented by the vehicle recognition unit 162 in the false-detection and non-detection countermeasures of each suppression mode.
- FIG. 17 is a flowchart showing the processing contents of the in-vehicle device 1.
- In step S900, the in-vehicle device 1 acquires captured images from the cameras 2a and 2b using the captured image acquisition unit 11.
- In step S910, the in-vehicle device 1 detects the various kinds of matter attached to the camera lenses of the cameras 2a and 2b using the attached matter detection unit 13, and outputs the detection results.
- In step S920, the in-vehicle device 1 applies control such as correction and initialization to the detection results output in step S910, using the detection result control unit 148.
- In step S930, the in-vehicle device 1 uses the projection conversion unit 141 to project each detection result of the attached matter detection unit 13, after the control in step S920, onto coordinates on the control map 50.
- In step S940, the in-vehicle device 1 calculates the composite coordinate (P, Q) by integrating the coordinates obtained by the projection conversion in step S930, using the composite coordinate setting unit 142.
- In step S950, the in-vehicle device 1 uses the operation control unit 15 to determine, based on the composite coordinate (P, Q) and the suppression mode, which of the false-detection countermeasure and the non-detection countermeasure is to be performed.
- In step S960, the in-vehicle device 1 executes the image recognition processing using each part of the image recognition unit 16 after taking the countermeasure determined in step S950.
- In step S970, the in-vehicle device 1 controls the alarm control unit 17 based on the recognition result of the image recognition processing executed in step S960. Thereafter, the in-vehicle device 1 returns to step S900.
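The per-frame loop of steps S900-S970 can be sketched end to end. Every function body below is a toy stand-in for the named unit, not the patented algorithm; the severity values and the (p, q) projection are made up purely to make the sketch runnable.

```python
def acquire_image(frame):                       # S900: captured image acquisition unit 11
    return frame

def detect_deposits(image):                     # S910: attached matter detection unit 13
    return {"water_drop": 0.2, "mud": 0.1}      # per-deposit severity (made-up values)

def control_results(raw):                       # S920: detection result control unit 148
    return raw                                  # correction/initialization would go here

def project(results):                           # S930: projection conversion unit 141
    return [(v, v) for v in results.values()]   # one (p, q) per deposit type

def integrate(coords):                          # S940: composite coordinate setting unit 142
    n = len(coords)
    return (sum(p for p, _ in coords) / n, sum(q for _, q in coords) / n)

def decide_countermeasure(p, q):                # S950: operation control unit 15
    return "false_detection" if p >= q else "non_detection"

def recognize(image, measure):                  # S960: image recognition unit 16
    return {"objects": [], "measure": measure}

def control_alarm(result):                      # S970: alarm control unit 17
    return result

def run_frame(frame):
    """One pass of the S900-S970 loop for a single captured frame."""
    image = acquire_image(frame)
    coords = project(control_results(detect_deposits(image)))
    p, q = integrate(coords)
    return control_alarm(recognize(image, decide_countermeasure(p, q)))
```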
- The in-vehicle device 1 includes: an attached matter detection unit 13 that, based on a captured image acquired from a camera that photographs the surrounding environment of the vehicle through a camera lens, detects a plurality of types of attached matter corresponding to the contamination state of the lens; an image recognition unit 16 that recognizes a predetermined object image present in the surrounding environment of the vehicle from the captured image; a detection result integration unit 14 that calculates an integrated detection result by integrating the detection results of the plurality of types of attached matter from the attached matter detection unit 13; and an operation control unit 15 that controls the operation of the image recognition unit 16 based on the integrated detection result. This prevents the image recognition unit 16 from performing contradictory operations due to the plurality of detection results.
- The detection result integration unit 14 projects each detection result of the attached matter detection unit 13 onto the same coordinate system and sets a composite coordinate on that coordinate system by combining the plurality of coordinates corresponding to the detection results.
- The operation control unit 15 controls the operation of the image recognition unit 16 based on the set composite coordinate. That is, the detection result integration unit 14 sets a single composite coordinate combining the plurality of detection results, and the operation of the image recognition unit 16 is controlled based on that composite coordinate. This prevents inconsistent operation control due to the plurality of detection results.
- The coordinate system has a first coordinate axis related to the lens transmittance affected by the attached matter and a second coordinate axis related to the area of the attached matter adhering to the camera lens.
- Each detection result obtained by the attached matter detection unit 13 is the lens transmittance and the area of the attached matter, and the coordinate region obtained from these two detection results is set for each type of attached matter:
- a water drop range 51, a white turbidity range 52, a water drop mark range 53, and a mud range 54. The image recognition operation of the image recognition unit 16 can therefore be accurately controlled according to the lens contamination situation.
- The transmittance of the lens contamination is an item strongly related to false detection by each part of the image recognition unit 16, and the adhesion area is an item strongly related to non-detection by each part of the image recognition unit 16.
- The image recognition unit 16 can thus be controlled accurately using the composite coordinate (P, Q).
- The operation control unit 15 determines the control to be performed on the image recognition unit 16 based on whether the composite coordinate is closer to the first coordinate axis or to the second coordinate axis. For example, depending on whether the composite coordinate (P, Q) lies in the operation decision region closer to the transmittance axis or in the one closer to the adhesion-area axis, the operation control unit 15 controls the image recognition unit 16 with either a false-detection countermeasure or a non-detection countermeasure. By separating the control in this way, the inconsistency of performing both the false-detection countermeasure and the non-detection countermeasure at the same time is avoided.
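The axis-proximity rule can be sketched directly: a point's distance to the transmittance axis is its adhesion-area coordinate Q, and its distance to the adhesion-area axis is P, so comparing the two coordinates decides the countermeasure. The function name and the tie-breaking choice are illustrative assumptions.

```python
def choose_countermeasure(p, q):
    """Decide which countermeasure the operation control unit 15 would select.

    p: coordinate along the transmittance axis (larger = more opaque)
    q: coordinate along the adhesion-area axis (larger = bigger area)
    """
    if q < p:
        # Closer to the transmittance axis; transmittance is strongly
        # related to false detection.
        return "false-detection"
    # Closer to the adhesion-area axis; adhesion area is strongly
    # related to non-detection.
    return "non-detection"
```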
- The operation control unit 15 stops recognition of the predetermined object image by the image recognition unit 16 when the composite coordinate is in a coordinate region indicating that the contamination of the camera lens is equal to or greater than a predetermined level.
- The operation control unit 15 operates a removal device that removes the plurality of types of attached matter from the camera lens when the composite coordinate is in a coordinate region indicating that the contamination of the camera lens is equal to or greater than a predetermined level.
- When the composite coordinate (P, Q) is in the operation decision region 75 or 76, the operation control unit 15 sets the suppression mode to the third suppression mode and either stops (gives up) lane mark recognition by the image recognition unit 16 or has the removal control unit 4 remove the attached matter from the camera lens. This reduces false detection and non-detection by the image recognition unit 16.
- The attached matter detection unit 13 further calculates a reliability corresponding to the time over which each of the plurality of types of attached matter has been continuously detected, and the detection result integration unit 14 calculates the integrated detection result also using the reliabilities calculated by the attached matter detection unit 13. For example, each part of the attached matter detection unit 13 calculates a reliability according to the time over which its corresponding attached matter has been continuously detected.
- The water drop mark detection unit 133 calculates the reliability R 3 based on the score average A S3 , which increases with each frame while water drop marks continuously adhere to the camera lens.
- The mud detection unit 134 calculates the reliability R 4 based on the score average A S4 , which increases with each frame while mud continuously adheres to the camera lens.
- The composite coordinate setting unit 142 sets the composite coordinate (P, Q) based on the reliabilities R 1 , R 2 , R 3 , and R 4 calculated by the respective parts of the attached matter detection unit 13. Setting the composite coordinate (P, Q) using the reliabilities R 1 , R 2 , R 3 , and R 4 in this way allows the contamination state of the camera lens to be expressed accurately by the composite coordinate (P, Q).
- The in-vehicle device 1 further includes an environment detection unit that detects an environment in which the reliability falls for at least one of the light source environment around the vehicle, the road on which the vehicle travels, and the weather, and a reliability correction unit that corrects each reliability calculated by the attached matter detection unit 13 based on the environment detected by the environment detection unit.
- Specifically, the in-vehicle device 1 has an environment detection control unit 143 that includes the light source environment detection unit 144, the traveling road environment detection unit 145, and the weather detection unit 146 as environment detection units, and the detection result control unit 148 as a reliability correction unit.
- Based on the detected environment, the environment detection control unit 143 has the detection result control unit 148 correct each reliability calculated by the respective parts of the attached matter detection unit 13 (steps S730 and S750 in FIG. 13).
- The in-vehicle device 1 further includes the information detection unit 12, which acquires information about the traveling state including at least the speed or yaw rate of the vehicle, and the environment detection control unit 143, which functions as a detection stop unit that stops detection of some of the attached matter by the attached matter detection unit based on the information about the traveling state acquired by the information detection unit 12.
- The detection result integration unit 14 calculates the integrated detection result based on the detection results of the attached matter whose detection has not been stopped. For example, the environment detection control unit 143 stops detection of the respective kinds of attached matter by the water drop detection unit 131 and the mud detection unit 134 based on the information about the traveling state acquired by the information detection unit 12 (step S810 in FIG. 13). The composite coordinate setting unit 142 then sets the composite coordinate (P, Q) based on the detection results of the white turbidity detection unit 132 and the water drop mark detection unit 133, whose detection has not been stopped. In this way, the composite coordinate setting unit 142 can suppress a decline in the accuracy of the composite coordinate (P, Q) caused by the traveling state of the vehicle.
- The in-vehicle device 1 further includes a stop determination unit that determines that the vehicle is stopped based on the information about the vehicle speed acquired by the information detection unit 12, and the environment detection control unit 143, which functions as an initialization unit that initializes the detection results of the attached matter detection unit 13 when the vehicle departs after the stop determination unit has determined that the vehicle has been stopped for a predetermined time or longer.
- For example, the environment detection control unit 143 uses the detection result control unit 148 to determine that the vehicle is stopped, based on the information about the vehicle speed acquired from the information detection unit 12 (step S770 in FIG. 13), and initializes each detection result of the attached matter detection unit 13 when the vehicle departs (step S780). This ensures the accuracy of the composite coordinate (P, Q).
- In the embodiment described above, the cameras 2a and 2b photograph the road surface in front of or behind the vehicle, but they may also photograph the road surface to the left or right of the vehicle. As long as the road surface around the vehicle can be photographed, the camera installation positions and shooting ranges may be set in any manner.
- In the embodiment described above, each part of the attached matter detection unit 13 detects attached matter and outputs a detection result every frame.
- However, some parts of the attached matter detection unit 13 may instead be executed in the time remaining after each part of the image recognition unit 16 has been executed in each frame.
- In the embodiment described above, the environment detection control unit 143 includes the light source environment detection unit 144, the traveling road environment detection unit 145, the weather detection unit 146, the time control unit 147, and the detection result control unit 148.
- However, it suffices for the environment detection control unit 143 to include at least one of the light source environment detection unit 144, the traveling road environment detection unit 145, and the weather detection unit 146.
- In the embodiment described above, the control map 50 has a coordinate axis related to the transmittance and a coordinate axis related to the adhesion area, but the coordinate axes of the control map 50 are not limited to this combination. Any combination may be used that pairs a coordinate axis for a first item strongly related to false detection by each part of the image recognition unit 16 with a coordinate axis for a second item strongly related to non-detection by each part of the image recognition unit 16.
- The control map 50 need not have two coordinate axes; a different number of axes may be used.
- The method of calculating the reliability of the detection results of the water drop detection unit 131, the white turbidity detection unit 132, the water drop mark detection unit 133, and the mud detection unit 134 is not limited to the method described above.
Abstract
Description
A technique is also known that detects foreign matter adhering to the lens by detecting, based on the state of the in-vehicle camera lens, stationary regions contained in the captured image even while the vehicle is moving (Patent Document 2). A technique is also known that uses the overlapping imaging regions of a plurality of cameras to determine that a water drop has adhered (Patent Document 3).
According to a second aspect of the present invention, in the in-vehicle device of the first aspect, it is preferable that the detection result integration unit projects each detection result of the attached matter detection unit onto the same coordinate system and sets a composite coordinate on the coordinate system from the plurality of projected coordinates, and that the operation control unit controls the operation of the image recognition unit based on the composite coordinate.
According to a third aspect of the present invention, in the in-vehicle device of the second aspect, it is preferable that the coordinate system has a first coordinate axis related to the lens transmittance affected by the attached matter and a second coordinate axis related to the area of the attached matter adhering to the camera lens, that each detection result obtained by the attached matter detection unit is the lens transmittance and the area of the attached matter, and that the coordinate region obtained from these two detection results is set for each type of attached matter detection unit.
According to a fourth aspect of the present invention, in the in-vehicle device of the third aspect, it is preferable that the operation control unit determines the control to be performed on the image recognition unit based on which of the first coordinate axis and the second coordinate axis the composite coordinate is closer to.
According to a fifth aspect of the present invention, in the in-vehicle device of the third aspect, it is preferable that, when the composite coordinate is within an operation decision region predetermined within the ranges of the first coordinate axis and the second coordinate axis, the operation control unit stops the recognition of the predetermined object image by the image recognition unit.
According to a sixth aspect of the present invention, in the in-vehicle device of the third aspect, it is preferable that, when the composite coordinate is within an operation decision region predetermined within the ranges of the first coordinate axis and the second coordinate axis, the operation control unit operates a removal device that removes the attached matter from the camera lens.
According to a seventh aspect of the present invention, in the in-vehicle device of any one of the first to sixth aspects, it is preferable that the attached matter detection unit further calculates a reliability corresponding to the time over which each of the plurality of types of attached matter has been continuously detected, and that the detection result integration unit calculates the integrated detection result also using the reliabilities calculated by the attached matter detection unit.
According to an eighth aspect of the present invention, it is preferable that the in-vehicle device of the seventh aspect further includes an environment detection unit that detects an environment in which the reliability falls for at least one of the light source environment around the vehicle, the road on which the vehicle travels, and the weather, and a reliability correction unit that corrects each reliability calculated by the attached matter detection unit based on the environment detected by the environment detection unit.
According to a ninth aspect of the present invention, it is preferable that the in-vehicle device of any one of the first to sixth aspects further includes an information detection unit that acquires information about a traveling state including at least the speed or yaw rate of the vehicle, and a detection stop unit that causes the attached matter detection unit to stop detecting some of the plurality of types of attached matter based on the information about the traveling state acquired by the information detection unit, and that the detection result integration unit calculates the integrated detection result based on the detection results of the attached matter whose detection has not been stopped by the detection stop unit.
According to a tenth aspect of the present invention, it is preferable that the in-vehicle device of the ninth aspect further includes a stop determination unit that determines that the vehicle is stopped based on the information about the vehicle speed acquired by the information detection unit, and an initialization unit that initializes the detection result of the attached matter detection unit when the vehicle departs after the stop determination unit has determined that the vehicle has been stopped continuously for a predetermined time or longer.
The operation of the water drop detection unit 131 will be described with reference to FIGS. 3(a) and 3(b). As shown in FIG. 3(a), the water drop detection unit 131 divides the image area of the captured image 30 into a plurality of blocks B(x, y). Each block B(x, y) contains a plurality of pixels of the captured image.
The operation of the white turbidity detection unit 132 will be described with reference to FIG. 4. As shown in FIG. 4, the white turbidity detection unit 132 sets an upper-left detection region 41, an upper detection region 42, and an upper-right detection region 43 at positions where the horizon is expected to appear in the captured image. The upper detection region 42 is set at a position that contains the vanishing point of two lane marks provided parallel to each other on the road surface. In FIG. 4, the vanishing point 47 of the two lane marks 46 is contained inside the upper detection region 42. The upper-left detection region 41 is set to the left of the upper detection region 42, and the upper-right detection region 43 is set to the right of it. The white turbidity detection unit 132 also sets a lower-left detection region 44 and a lower-right detection region 45 at positions where lane marks are expected to appear in the captured image.
The operation of the water drop mark detection unit 133 will be described. Like the water drop detection unit 131, the water drop mark detection unit 133 divides the image area of the captured image 30 into a plurality of blocks B(x, y) as shown in FIG. 3(a).
The operation of the mud detection unit 134 will be described. Like the water drop detection unit 131, the mud detection unit 134 divides the image area of the captured image 30 into a plurality of blocks B(x, y) as shown in FIG. 3(a).
Thus, in this embodiment, the projection conversion unit 141 described below generates a single control command by combining the plurality of detection results, and the control operation is corrected by this control command. To generate one control command from the plurality of detection results, this embodiment normalizes each detection result and calculates, as physical quantities common to all detection results, the lens transmittance and the area of contamination adhering to the lens surface; a control command corresponding to these two physical quantities is then obtained to correct the control operation.
The projection conversion unit 141 projects the detection result of each part of the attached matter detection unit 13 onto the coordinate space shown in FIG. 5. The coordinate space shown in FIG. 5 is stored in the memory 10 as the control map 50. The control map 50 has a coordinate axis related to the transmittance of the lens contamination and a coordinate axis related to the area over which attached matter adheres to the camera lens. Hereinafter, the value on the transmittance axis is called the p coordinate and the value on the adhesion-area axis is called the q coordinate. On the control map 50, the lens contamination becomes more opaque the farther from the origin, and the adhesion area becomes larger the farther from the origin.
The composite coordinate setting unit 142 integrates the coordinates (p1, q1), (p2, q2), (p3, q3), and (p4, q4), obtained by projecting the detection results of the respective parts of the attached matter detection unit 13 with the projection conversion unit 141, into a single composite coordinate (P, Q) based on the reliabilities R1, R2, R3, and R4 output by those parts. The formulas for calculating the composite coordinate (P, Q) are shown below.
P=(p1×R1+p2×R2+p3×R3+p4×R4)/(R1+R2+R3+R4)
Q=(q1×R1+q2×R2+q3×R3+q4×R4)/(R1+R2+R3+R4)
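The two formulas above can be transcribed directly (illustrative Python; the coordinate list and reliabilities correspond to the four detection units 131-134, and the zero-reliability fallback is an added assumption):

```python
def composite_coordinates(coords, reliabilities):
    """Reliability-weighted average of the projected detection coordinates.

    coords: list of (p_i, q_i) pairs from the projection conversion unit 141
    reliabilities: list of R_i values from the attached matter detection units
    """
    total = sum(reliabilities)
    if total == 0:
        return (0.0, 0.0)  # assumption: no reliable detection maps to the origin
    P = sum(p * r for (p, _), r in zip(coords, reliabilities)) / total
    Q = sum(q * r for (_, q), r in zip(coords, reliabilities)) / total
    return (P, Q)
```

With equal reliabilities this reduces to a plain average; a unit with a higher R_i pulls the composite coordinate (P, Q) toward its own projected point.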
Based on the composite coordinate (P, Q), the operation control unit 15 decides whether to execute the false-detection countermeasure or the non-detection countermeasure on the image recognition unit 16, and in which stage of suppression mode to execute it.
The operation decision region 71 lies closer to the transmittance axis than to the adhesion-area axis, and is the region enclosed by the circumference of the quarter circle 78, the circumference of the quarter circle 79, the line segment 81, and the line segment 83. When the composite coordinate (P, Q) is within the operation decision region 71, the operation control unit 15 outputs to the image recognition unit 16 a command to execute processing in the first suppression mode for the false-detection countermeasure.
The operation decision region 72 lies closer to the adhesion-area axis than to the transmittance axis, and is the region enclosed by the circumference of the quarter circle 78, the circumference of the quarter circle 79, the line segment 81, and the line segment 82. When the composite coordinate (P, Q) is within the operation decision region 72, the operation control unit 15 outputs to the image recognition unit 16 a command to execute processing in the first suppression mode for the non-detection countermeasure.
FIG. 8 is a control block diagram of the environment detection control unit 143. As illustrated in FIG. 8, the environment detection control unit 143 includes a light source environment detection unit 144, a traveling road environment detection unit 145, a weather detection unit 146, a time control unit 147, and a detection result control unit 148.
The light source environment detection unit 144 detects light source environments in which the reliabilities R1, R2, R3, and R4 of the detection results of the attached matter detection unit 13 are likely to fall. For example, the light source environment detection unit 144 detects states in which the setting sun, light reflected from the road surface, or the headlights of a following vehicle backlight the camera 2a or 2b.
The traveling road environment detection unit 145 detects traveling road and background environments in which the reliabilities R1, R2, R3, and R4 of the detection results of the attached matter detection unit 13 are likely to fall. For example, the traveling road environment detection unit 145 detects that the vehicle is traveling in a place where matter easily adheres to the camera lens, such as on a wet road or off-road.
The weather detection unit 146 acquires information about the weather around the host vehicle. FIG. 12 is a flowchart of the processing of the weather detection unit 146. In step S600, the weather detection unit 146 acquires information such as the wiper operating state and the outside air temperature from the information detection unit 12. In step S610, the weather detection unit 146 acquires information about the high-luminance regions detected by the light source environment detection unit 144.
The time control unit 147 measures the following times:
(a) the operating time of each part of the image recognition unit 16
(b) the elapsed time since the image recognition unit 16 started a non-detection or false-detection countermeasure in each stage of suppression mode
(c) the elapsed time since the removal control unit 4 started removing attached matter
(d) the elapsed time since each part of the image recognition unit 16 gave up image recognition
(e) the elapsed times t1, t2, t4, and t5 since the detection results of the attached matter detection unit 13 were initialized (reset)
(f) the elapsed time since detection by the attached matter detection unit 13 was stopped
The detection result control unit 148 performs control such as correction and initialization on the detection results of each part of the attached matter detection unit 13. FIG. 13 is a flowchart of the processing of the attached matter detection unit 13. In step S700, the detection result control unit 148 acquires the detection results from each part of the attached matter detection unit 13.
The operation of the lane recognition unit 161 when neither the false-detection countermeasure nor the non-detection countermeasure is being executed will be described with reference to FIGS. 14 and 15.
The operation of the vehicle recognition unit 162 when neither the false-detection countermeasure nor the non-detection countermeasure is being executed will be described with reference to FIG. 16. FIG. 16 is a flowchart of the processing of the vehicle recognition unit 162.
FIG. 17 is a flowchart showing the processing of the in-vehicle device 1.
In step S900, the in-vehicle device 1 acquires captured images from the cameras 2a and 2b using the captured image acquisition unit 11. In step S910, the in-vehicle device 1 uses the attached matter detection unit 13 to detect the various kinds of matter attached to the camera lenses of the cameras 2a and 2b and output the detection results.
(1) The in-vehicle device 1 includes: an attached matter detection unit 13 that, based on a captured image acquired from a camera that photographs the surrounding environment of the vehicle through a camera lens, detects a plurality of types of attached matter corresponding to the contamination state of the lens, for example with the water drop detection unit 131, the white turbidity detection unit 132, the water drop mark detection unit 133, and the mud detection unit 134; an image recognition unit 16 that recognizes a predetermined object image present in the surrounding environment of the vehicle from the captured image; a detection result integration unit 14 that calculates an integrated detection result by integrating the detection results of the plurality of types of attached matter from the attached matter detection unit 13; and an operation control unit 15 that controls the operation of the image recognition unit 16 based on the integrated detection result. This prevents the image recognition unit 16 from performing contradictory operations due to the plurality of detection results.
The transmittance of the lens contamination is an item strongly related to false detection by each part of the image recognition unit 16, and the adhesion area is an item strongly related to non-detection by each part of the image recognition unit 16. By choosing items related respectively to false detection and to non-detection as the coordinate axes, the image recognition unit 16 can be controlled accurately using the composite coordinate (P, Q).
For example, when the composite coordinate (P, Q) is in the operation decision region 75 or 76, the operation control unit 15 sets the suppression mode to the third suppression mode and either stops (gives up) lane mark recognition by the image recognition unit 16 or has the removal control unit 4 remove the attached matter from the camera lens. This reduces false detection and non-detection by the image recognition unit 16.
For example, each part of the attached matter detection unit 13 calculates a reliability according to the time over which the corresponding attached matter has been continuously detected. When water drops continuously adhere to the camera lens, the water drop detection unit 131 calculates the reliability R1 based on the score average AS1, which increases with each frame. The white turbidity detection unit 132 calculates the reliability R2 based on the average duration t3. When water drop marks continuously adhere to the camera lens, the water drop mark detection unit 133 calculates the reliability R3 based on the score average AS3, which increases with each frame. When mud continuously adheres to the camera lens, the mud detection unit 134 calculates the reliability R4 based on the score average AS4, which increases with each frame. The composite coordinate setting unit 142 then sets the composite coordinate (P, Q) based on the reliabilities R1, R2, R3, and R4 calculated by the respective parts of the attached matter detection unit 13. Setting the composite coordinate (P, Q) using the reliabilities R1, R2, R3, and R4 in this way allows the contamination state of the camera lens to be expressed accurately by the composite coordinate (P, Q).
For example, the in-vehicle device 1 has an environment detection control unit 143 that includes the light source environment detection unit 144, the traveling road environment detection unit 145, and the weather detection unit 146 as environment detection units, and the detection result control unit 148 as a reliability correction unit. Based on the detection results of the light source environment detection unit 144, the traveling road environment detection unit 145, and the weather detection unit 146, the environment detection control unit 143 has the detection result control unit 148 correct each reliability calculated by the respective parts of the attached matter detection unit 13 (steps S730 and S750 in FIG. 13). Thus, even when the vehicle or camera is in an environment where the reliabilities R1, R2, R3, and R4 of the detection results of the attached matter detection unit 13 are likely to fall, the contamination state of the camera lens can be expressed accurately using the composite coordinate (P, Q).
For example, in the in-vehicle device 1, the environment detection control unit 143 stops detection of the respective kinds of attached matter by the water drop detection unit 131 and the mud detection unit 134 based on the information about the traveling state acquired by the information detection unit 12 (step S810 in FIG. 13). The composite coordinate setting unit 142 then sets the composite coordinate (P, Q) based on the detection results of the white turbidity detection unit 132 and the water drop mark detection unit 133, whose detection has not been stopped. In this way, the composite coordinate setting unit 142 can suppress a decline in the accuracy of the composite coordinate (P, Q) caused by the traveling state of the vehicle.
For example, the environment detection control unit 143 uses the detection result control unit 148 to determine, based on the information about the vehicle speed acquired from the information detection unit 12, that the vehicle is stopped (step S770 in FIG. 13), and when the vehicle departs (above 10 km/h) after having been stopped (10 km/h or below) continuously for a predetermined time or longer, it initializes each detection result of the attached matter detection unit 13 (step S780). This ensures the accuracy of the composite coordinate (P, Q).
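The stop-and-departure reset of steps S770-S780 can be sketched as a small monitor. The 10 km/h boundary comes from the description above; expressing the "predetermined time" as a frame count, and its value, are assumptions for illustration.

```python
STOP_SPEED_KMH = 10.0   # stopped at or below, departing above this speed
MIN_STOP_FRAMES = 100   # "predetermined time" expressed in frames (assumed)

class StopResetMonitor:
    """Signals when the attached matter detection results should be reset."""

    def __init__(self):
        self.stopped_frames = 0

    def update(self, speed_kmh: float) -> bool:
        """Return True on departure after a sufficiently long stop (S780)."""
        if speed_kmh <= STOP_SPEED_KMH:
            self.stopped_frames += 1   # S770: vehicle judged to be stopped
            return False
        reset = self.stopped_frames >= MIN_STOP_FRAMES
        self.stopped_frames = 0        # moving again; restart the stop count
        return reset
```

A long stop is when the lens state is most likely to have changed (for example, matter being wiped off or drying), which is why the reset fires only on departure after such a stop.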
Japanese Patent Application No. 2013-149747 (filed July 18, 2013)
2a, 2b camera
10 memory
11 captured image acquisition unit
12 information detection unit
13 attached matter detection unit
14 detection result integration unit
15 operation control unit
16 image recognition unit
17 alarm control unit
50 control map
51 water drop range
52 white turbidity range
53 water drop mark range
54 mud range
71, 72, 73, 74, 75, 76 operation decision region
77 worst contamination point
131 water drop detection unit
132 white turbidity detection unit
133 water drop mark detection unit
134 mud detection unit
141 projection conversion unit
142 composite coordinate setting unit
143 environment detection control unit
144 light source environment detection unit
145 traveling road environment detection unit
146 weather detection unit
147 time control unit
148 detection result control unit
161 lane recognition unit
162 vehicle recognition unit
163 pedestrian recognition unit
164 sign recognition unit
165 parking frame recognition unit
Claims (10)
- An in-vehicle device comprising: an image acquisition unit that acquires a captured image from a camera that photographs the surrounding environment of a vehicle through a camera lens; an attached matter detection unit that detects, based on the captured image, a plurality of types of attached matter adhering to the camera lens; an image recognition unit that recognizes a predetermined object image present in the surrounding environment from the captured image; a detection result integration unit that calculates an integrated detection result by integrating the detection results of the plurality of types of attached matter obtained by the attached matter detection unit; and an operation control unit that controls the operation of the image recognition unit based on the integrated detection result.
- The in-vehicle device according to claim 1, wherein the detection result integration unit projects each detection result of the attached matter detection unit onto the same coordinate system and sets a composite coordinate on the coordinate system from the plurality of projected coordinates, and the operation control unit controls the operation of the image recognition unit based on the composite coordinate.
- The in-vehicle device according to claim 2, wherein the coordinate system has a first coordinate axis related to the lens transmittance affected by the attached matter and a second coordinate axis related to the area of the attached matter adhering to the camera lens, each detection result obtained by the attached matter detection unit is the lens transmittance and the area of the attached matter, and the coordinate region obtained from these two detection results is set for each type of attached matter.
- The in-vehicle device according to claim 3, wherein the operation control unit determines the control to be performed on the image recognition unit based on which of the first coordinate axis and the second coordinate axis the composite coordinate is closer to.
- The in-vehicle device according to claim 3, wherein, when the composite coordinate is within an operation decision region predetermined within the ranges of the first coordinate axis and the second coordinate axis, the operation control unit stops the recognition of the predetermined object image by the image recognition unit.
- The in-vehicle device according to claim 3, wherein, when the composite coordinate is within an operation decision region predetermined within the ranges of the first coordinate axis and the second coordinate axis, the operation control unit operates a removal device that removes the attached matter from the camera lens.
- The in-vehicle device according to any one of claims 1 to 6, wherein the attached matter detection unit further calculates a reliability corresponding to the time over which each of the plurality of types of attached matter has been continuously detected, and the detection result integration unit calculates the integrated detection result also using the reliabilities calculated by the attached matter detection unit.
- The in-vehicle device according to claim 7, further comprising: an environment detection unit that detects an environment in which the reliability falls for at least one of the light source environment around the vehicle, the road on which the vehicle travels, and the weather; and a reliability correction unit that corrects each reliability calculated by the attached matter detection unit based on the environment detected by the environment detection unit.
- The in-vehicle device according to any one of claims 1 to 6, further comprising: an information detection unit that acquires information about a traveling state including at least the speed or yaw rate of the vehicle; and a detection stop unit that causes the attached matter detection unit to stop detecting some of the plurality of types of attached matter based on the information about the traveling state acquired by the information detection unit, wherein the detection result integration unit calculates the integrated detection result based on the detection results of the attached matter whose detection has not been stopped by the detection stop unit.
- The in-vehicle device according to claim 9, further comprising: a stop determination unit that determines that the vehicle is stopped based on the information about the vehicle speed acquired by the information detection unit; and an initialization unit that initializes the detection result of the attached matter detection unit when the vehicle departs after the stop determination unit has determined that the vehicle has been stopped continuously for a predetermined time or longer.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015527223A JP6163207B2 (ja) | 2013-07-18 | 2014-06-13 | 車載装置 |
US14/904,997 US10095934B2 (en) | 2013-07-18 | 2014-06-13 | In-vehicle device |
CN201480038677.2A CN105393293B (zh) | 2013-07-18 | 2014-06-13 | 车载装置 |
EP14826384.1A EP3023962B1 (en) | 2013-07-18 | 2014-06-13 | Vehicle-mounted device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013149747 | 2013-07-18 | ||
JP2013-149747 | 2013-07-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015008566A1 true WO2015008566A1 (ja) | 2015-01-22 |
Family
ID=52346039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/065770 WO2015008566A1 (ja) | 2013-07-18 | 2014-06-13 | 車載装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10095934B2 (ja) |
EP (1) | EP3023962B1 (ja) |
JP (1) | JP6163207B2 (ja) |
CN (1) | CN105393293B (ja) |
WO (1) | WO2015008566A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016129403A1 (ja) * | 2015-02-12 | 2016-08-18 | 日立オートモティブシステムズ株式会社 | 物体検知装置 |
JP2018142756A (ja) * | 2017-02-24 | 2018-09-13 | 京セラ株式会社 | カメラ装置、検出装置、検出システムおよび移動体 |
JP2019016005A (ja) * | 2017-07-03 | 2019-01-31 | アルパイン株式会社 | 車線認識装置 |
JP2019128797A (ja) * | 2018-01-24 | 2019-08-01 | 株式会社デンソーテン | 付着物検出装置および付着物検出方法 |
JP2019133333A (ja) * | 2018-01-30 | 2019-08-08 | 株式会社デンソーテン | 付着物検出装置および付着物検出方法 |
US10552706B2 (en) | 2016-10-24 | 2020-02-04 | Fujitsu Ten Limited | Attachable matter detection apparatus and attachable matter detection method |
JP7319597B2 (ja) | 2020-09-23 | 2023-08-02 | トヨタ自動車株式会社 | 車両運転支援装置 |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3065390B1 (en) * | 2013-10-29 | 2019-10-02 | Kyocera Corporation | Image correction parameter output device, camera system, and correction parameter output method |
EP3113477B1 (en) * | 2015-06-30 | 2017-08-02 | Axis AB | Monitoring camera |
JP6690955B2 (ja) * | 2016-02-02 | 2020-04-28 | 株式会社デンソーテン | 画像処理装置及び水滴除去システム |
US10670418B2 (en) * | 2016-05-04 | 2020-06-02 | International Business Machines Corporation | Video based route recognition |
US10562565B2 (en) * | 2016-07-05 | 2020-02-18 | Uisee Technologies (Beijing) Ltd | Steering control method and system of self-driving vehicle |
WO2018109918A1 (ja) * | 2016-12-16 | 2018-06-21 | 本田技研工業株式会社 | 車両制御装置及び方法 |
DE102017207792A1 (de) * | 2017-05-09 | 2018-11-15 | Continental Automotive Gmbh | Vorrichtung und Verfahren zum Prüfen einer Wiedergabe einer Videosequenz einer Spiegelersatzkamera |
US11479213B1 (en) * | 2017-12-11 | 2022-10-25 | Zoox, Inc. | Sensor obstruction detection and mitigation |
US10549723B2 (en) * | 2018-05-04 | 2020-02-04 | Ford Global Technologies, Llc | Vehicle object-detection sensor assembly |
JP2020013332A (ja) * | 2018-07-18 | 2020-01-23 | トヨタ自動車株式会社 | 画像認識装置 |
US10780861B2 (en) * | 2019-01-08 | 2020-09-22 | Ford Global Technologies, Llc | Liquid droplet path prediction |
CN110532876A (zh) * | 2019-07-26 | 2019-12-03 | 纵目科技(上海)股份有限公司 | 夜晚模式镜头付着物的检测方法、系统、终端和存储介质 |
US11673532B2 (en) * | 2019-12-23 | 2023-06-13 | Continental Automotive Systems, Inc. | Automatic camera washer deactivation |
JP7424582B2 (ja) * | 2020-06-03 | 2024-01-30 | 株式会社ニフコ | 車載機器用ブラケット |
JP2022133156A (ja) * | 2021-03-01 | 2022-09-13 | トヨタ自動車株式会社 | 車両用周辺監視装置及び車両用周辺監視システム |
US20230068848A1 (en) * | 2021-08-25 | 2023-03-02 | Argo AI, LLC | Systems and methods for vehicle camera obstruction detection |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005022901A1 (ja) * | 2003-08-29 | 2005-03-10 | Nikon Corporation | Imaging system diagnostic device, imaging system diagnostic program, imaging system diagnostic program product, and imaging device |
JP2008064630A (ja) * | 2006-09-07 | 2008-03-21 | Hitachi Ltd | In-vehicle imaging device with attached-matter detection function |
JP2010244382A (ja) | 2009-04-08 | 2010-10-28 | Honda Motor Co Ltd | Vehicle driving support device |
JP2011223075A (ja) | 2010-04-02 | 2011-11-04 | Alpine Electronics Inc | Exterior display device using multiple camera images |
JP2012038048A (ja) | 2010-08-06 | 2012-02-23 | Alpine Electronics Inc | Vehicle obstacle detection device |
WO2013018673A1 (ja) * | 2011-08-02 | 2013-02-07 | Nissan Motor Co., Ltd. | Three-dimensional object detection device and three-dimensional object detection method |
JP2013100077A (ja) * | 2011-10-14 | 2013-05-23 | Denso Corp | Camera cleaning device |
WO2014007153A1 (ja) * | 2012-07-03 | 2014-01-09 | Clarion Co., Ltd. | Vehicle surroundings monitoring device |
WO2014007286A1 (ja) * | 2012-07-03 | 2014-01-09 | Clarion Co., Ltd. | State recognition system and state recognition method |
WO2014017403A1 (ja) * | 2012-07-27 | 2014-01-30 | Clarion Co., Ltd. | In-vehicle image recognition device |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2552728B2 (ja) * | 1989-05-31 | 1996-11-13 | Fujitsu Limited | Infrared monitoring system |
US6681163B2 (en) * | 2001-10-04 | 2004-01-20 | Gentex Corporation | Moisture sensor and windshield fog detector |
JP3651387B2 (ja) * | 2000-11-22 | 2005-05-25 | Nissan Motor Co., Ltd. | White line detection device |
DE10132681C1 (de) * | 2001-07-05 | 2002-08-22 | Bosch Gmbh Robert | Method for classifying an obstacle using pre-crash sensor signals |
JP3987048B2 (ja) * | 2003-03-20 | 2007-10-03 | Honda Motor Co., Ltd. | Vehicle periphery monitoring device |
US8553088B2 (en) * | 2005-11-23 | 2013-10-08 | Mobileye Technologies Limited | Systems and methods for detecting obstructions in a camera field of view |
JP4956009B2 (ja) * | 2006-02-02 | 2012-06-20 | Canon Inc. | Imaging device and control method therefor |
US7671725B2 (en) * | 2006-03-24 | 2010-03-02 | Honda Motor Co., Ltd. | Vehicle surroundings monitoring apparatus, vehicle surroundings monitoring method, and vehicle surroundings monitoring program |
JP4784659B2 (ja) * | 2009-02-16 | 2011-10-05 | Toyota Motor Corporation | Vehicle periphery monitoring device |
JP5269755B2 (ja) * | 2009-12-10 | 2013-08-21 | Hitachi, Ltd. | Pedestrian crossing support vehicle system and pedestrian crossing support method |
DE102010002310A1 (de) | 2010-02-24 | 2011-08-25 | Audi Ag, 85057 | Method and device for checking the clear view of a camera for an automotive environment |
2014
- 2014-06-13 WO PCT/JP2014/065770 patent/WO2015008566A1/ja active Application Filing
- 2014-06-13 CN CN201480038677.2A patent/CN105393293B/zh active Active
- 2014-06-13 US US14/904,997 patent/US10095934B2/en active Active
- 2014-06-13 EP EP14826384.1A patent/EP3023962B1/en active Active
- 2014-06-13 JP JP2015527223A patent/JP6163207B2/ja active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP3023962A4 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016129403A1 (ja) * | 2015-02-12 | 2016-08-18 | Hitachi Automotive Systems, Ltd. | Object detection device |
JP2016148962A (ja) * | 2015-02-12 | 2016-08-18 | Hitachi Automotive Systems, Ltd. | Object detection device |
US10627228B2 (en) | 2015-02-12 | 2020-04-21 | Hitachi Automotive Systems, Ltd. | Object detection device |
US10552706B2 (en) | 2016-10-24 | 2020-02-04 | Fujitsu Ten Limited | Attachable matter detection apparatus and attachable matter detection method |
JP2018142756A (ja) * | 2017-02-24 | 2018-09-13 | Kyocera Corporation | Camera device, detection device, detection system, and mobile body |
JP2019016005A (ja) * | 2017-07-03 | 2019-01-31 | Alpine Electronics, Inc. | Lane recognition device |
JP2019128797A (ja) * | 2018-01-24 | 2019-08-01 | Denso Ten Limited | Attached-matter detection device and attached-matter detection method |
JP2019133333A (ja) * | 2018-01-30 | 2019-08-08 | Denso Ten Limited | Attached-matter detection device and attached-matter detection method |
JP7210882B2 (ja) | 2018-01-30 | 2023-01-24 | Denso Ten Limited | Attached-matter detection device and attached-matter detection method |
JP7319597B2 (ja) | 2020-09-23 | 2023-08-02 | Toyota Motor Corporation | Vehicle driving support device |
Also Published As
Publication number | Publication date |
---|---|
EP3023962B1 (en) | 2020-08-19 |
US10095934B2 (en) | 2018-10-09 |
CN105393293B (zh) | 2017-05-03 |
EP3023962A4 (en) | 2017-03-22 |
US20160162740A1 (en) | 2016-06-09 |
JPWO2015008566A1 (ja) | 2017-03-02 |
CN105393293A (zh) | 2016-03-09 |
EP3023962A1 (en) | 2016-05-25 |
JP6163207B2 (ja) | 2017-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6163207B2 (ja) | In-vehicle device | |
JP6174975B2 (ja) | Ambient environment recognition device | |
JP6117634B2 (ja) | Lens attached-matter detection device, lens attached-matter detection method, and vehicle system | |
US20200406897A1 (en) | Method and Device for Recognizing and Evaluating Roadway Conditions and Weather-Related Environmental Influences | |
JP5022609B2 (ja) | Imaging environment recognition device | |
JP6364797B2 (ja) | Image analysis device and image analysis method | |
US20140009615A1 (en) | In-Vehicle Apparatus | |
US9185363B2 (en) | Vehicle imaging system and method for categorizing objects using relative motion analysis | |
US9965690B2 (en) | On-vehicle control device | |
US20130039544A1 (en) | Method of fog and raindrop detection on a windscreen and driving assistance device | |
US9508015B2 (en) | Method for evaluating image data of a vehicle camera taking into account information about rain | |
US10933798B2 (en) | Vehicle lighting control system with fog detection | |
JP2008250904A (ja) | Lane marking information detection device, traveling lane keeping device, and lane marking recognition method | |
US20140347487A1 (en) | Method and camera assembly for detecting raindrops on a windscreen of a vehicle | |
JP2008060874A (ja) | In-vehicle camera and attached-matter detection device for in-vehicle camera | |
US20150085118A1 (en) | Method and camera assembly for detecting raindrops on a windscreen of a vehicle | |
CN111414857B (zh) | Vision-based multi-feature fusion forward vehicle detection method | |
US9230189B2 (en) | Method of raindrop detection on a vehicle windscreen and driving assistance device | |
JP6750449B2 (ja) | Imaging device | |
WO2011107115A1 (en) | Method and device of raindrop detection on a windscreen | |
JP2023067558A (ja) | Vehicle and vehicle control method | |
EP2945116A1 (en) | Method and apparatus for providing an augmented image of a vehicle's surrounding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201480038677.2 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14826384 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015527223 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014826384 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14904997 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |