WO2018150661A1 - Onboard image-capture device - Google Patents

Onboard image-capture device

Info

Publication number
WO2018150661A1
Authority
WO
WIPO (PCT)
Prior art keywords
dirt
area
wiping
vehicle
image
Prior art date
Application number
PCT/JP2017/040721
Other languages
French (fr)
Japanese (ja)
Inventor
浩平 萬代
健人 緒方
修造 金子
福田 大輔
Original Assignee
Clarion Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Clarion Co., Ltd.
Publication of WO2018150661A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to an in-vehicle imaging apparatus that is mounted on the body of an automobile and can image the outside of the vehicle.
  • In-vehicle imaging devices, in which an in-vehicle camera mounted on the body of an automobile images at least one of the front, rear, and side directions of the host vehicle, or the entire surroundings, and the captured image is displayed on a monitor screen in the vehicle, are becoming widespread.
  • However, the in-vehicle camera is attached to the outside of the vehicle body, so dirt such as raindrops and mud tends to adhere to the surface of its lens (including any lens protection glass). If dirt adheres to the lens surface, the dirt appears in the captured image of the in-vehicle camera and blocks the scenery.
  • To address this, lens dirt removal devices are known that blow water or air onto the lens surface of an in-vehicle camera to remove the dirt adhering to it.
  • Japanese Patent Laid-Open No. 2014-11785 discloses a lens dirt removal diagnostic technique for diagnosing whether such a lens dirt removal device has functioned normally. In this technique, captured images of the in-vehicle camera taken before and after the operation of the lens dirt removal device are compared, and removal of the dirt is diagnosed from changes in contrast and edge strength.
  • Japanese Patent Laid-Open No. 2016-15583 discloses a lens dirt removal sequential diagnosis technique as a method capable of diagnosing dirt removal even when the timing at which the dirt removal work is executed is unknown.
  • In this lens dirt removal sequential diagnosis technique, a captured image captured by the in-vehicle camera immediately after the host vehicle stops is used as a reference image, and difference images between the reference image and the captured images subsequently captured by the in-vehicle camera are sequentially generated and compared with a previously detected dirt adhesion region. If a difference is detected in the majority of the dirt adhesion region, it is diagnosed that the dirt has been wiped away.
  • The object of the present invention is to provide an in-vehicle imaging apparatus having a lens dirt wiping diagnostic device that reduces the possibility of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the operation that removes the dirt attached to the lens surface of the in-vehicle camera is unknown.
  • To achieve this object, the present invention comprises an in-vehicle camera that captures images through a lens, a dirt detection unit that detects the presence or absence of dirt on the lens or a dirt adhesion region based on the captured images sequentially captured by the in-vehicle camera, a dirt wiping diagnosis unit that diagnoses wiping of lens dirt, and a dirt state management unit that manages the dirt on the lens.
  • The dirt wiping diagnosis unit includes a stable state determination unit that determines, based on the temporal change of the captured images sequentially captured by the in-vehicle camera, whether the captured image is in a stable state, and a wiping area extraction unit that extracts a dirt wiping area based on the temporal change of those captured images for which the determination result of the stable state determination unit is the stable state.
  • The dirt state management unit stores the presence or absence of dirt or the dirt adhesion region detected by the dirt detection unit, reflects the dirt wiping area extracted by the wiping area extraction unit in the stored presence or absence of dirt or dirt adhesion region, and thereby manages the dirt on the lens.
  • According to the present invention, it is possible to provide an in-vehicle imaging apparatus having a lens dirt wiping diagnostic device that can reduce the possibility of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the operation that removes the dirt attached to the lens surface of the in-vehicle camera is unknown.
  • FIG. 1 is a block diagram showing a schematic configuration of an in-vehicle imaging device according to Embodiment 1 of the present invention.
  • FIG. 6 is a diagram showing an example of a flowchart of the processing of the stable state determination unit, and FIG. 7 is a diagram showing an example of a flowchart of the processing of the wiping area extraction unit.
  • FIG. 8 is a block diagram illustrating a schematic configuration of an in-vehicle imaging device according to Embodiment 2 of the present invention.
  • FIG. 10 is a block diagram illustrating a schematic configuration of an in-vehicle imaging device according to Embodiment 3 of the present invention.
  • FIG. 6 is a block diagram illustrating a schematic configuration of an in-vehicle imaging device according to Embodiment 4 of the present invention.
  • FIG. 1 is a block diagram showing a schematic configuration of an in-vehicle imaging device according to Embodiment 1 of the present invention.
  • the in-vehicle imaging device 100 of the present embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4.
  • the dirt wiping diagnosis unit 2 includes a stable state determination unit 21 and a wiping region extraction unit 22.
  • the dirt wiping diagnosis unit 2, the dirt detection unit 3, and the dirt state management unit 4 are also referred to as a lens dirt wiping diagnosis device (the same applies to the other embodiments below).
  • the in-vehicle camera 1 is installed in a vehicle (not shown), and takes an image in front of the in-vehicle camera 1 via a lens (including a lens protection glass).
  • the in-vehicle camera 1 sequentially outputs an image (captured image) obtained by imaging the front of the in-vehicle camera 1. For example, when the frame rate is 20 fps, the in-vehicle camera 1 captures an image and outputs the captured image every 50 milliseconds.
  • The stable state determination unit 21 sequentially acquires the captured images output from the in-vehicle camera 1 and, each time, extracts the change from the captured image acquired immediately before (the temporal change of the captured image). If the magnitude of the change is less than a predetermined value, it determines that the captured image is temporally stable (stable state); conversely, if the magnitude exceeds the predetermined value, it determines that the captured image is not temporally stable (unstable state).
  • Here, the "temporal change of the captured image" refers to the difference obtained by comparing the image data of a captured image at a certain time (image data of any type) with the image data of a captured image after a certain period of time has elapsed, for example, the change over time of each pixel of the captured image. The "magnitude of the temporal change of the captured image" is, for example, the size of the area of the changed region, or the amount of change of each pixel of the image data. As an example of the "temporal change of the captured image", the "temporal change of a feature amount of the captured image" may also be used.
  • "Extracting the temporal change of the captured image" means extracting, from all regions of the captured image (the entire image), the regions in which a difference is generated as a result of the comparison (that is, the regions that have changed over time); in some cases it means extracting, from the many captured images that are sequentially captured, the captured images in which a difference is generated as a result of the comparison.
  • Hereinafter, the image data of a captured image may be referred to simply as a captured image, and the image data of a difference image may be referred to simply as a difference image.
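  • For illustration only, the comparison described above could be sketched in Python with NumPy roughly as follows; the function name, the assumption of grayscale frames of equal size, and the pixel threshold are illustrative choices rather than anything specified in this description:

        import numpy as np

        def sequential_difference(prev_frame, frame, pixel_threshold=20):
            # Per-pixel temporal change between two captured images, and the
            # mask of pixels whose change exceeds the (illustrative) threshold.
            diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
            changed = diff > pixel_threshold
            return diff, changed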
  • The stable state determination unit 21 finally outputs the determination result as a stable state signal. The stable state is obtained when no moving body (for example, a person who cleans the lens surface of the in-vehicle camera 1, or another vehicle) appears in the captured image, or when the amount of movement of a moving body in the captured image is small.
  • The wiping area extraction unit 22 refers to the stable state signal output from the stable state determination unit 21, sequentially acquires the captured images output from the in-vehicle camera 1 only in the stable state, and, each time, extracts the change from the captured image acquired in the previous stable state. As a result, the change in the captured image (temporal change of the captured image) is extracted in a state where no moving body appears in the captured image or the movement of the moving body in the captured image is small (stable state). A region with a large change in the captured image is then output to the subsequent dirt state management unit 4 as a region from which dirt attached to the lens surface of the in-vehicle camera 1 has been wiped off (wiping area).
  • The dirt detection unit 3 detects, from the captured images sequentially output from the in-vehicle camera 1, the adhesion region of dirt attached to the lens surface of the in-vehicle camera 1 (dirt detection region) and outputs it.
  • For the dirt detection unit 3, a known method such as that described in JP 2012-38048 A can be used. For example, there is a method of detecting, from the captured images captured by the in-vehicle camera 1 while the host vehicle is traveling, a region with little luminance change over a long period and setting this region as the dirt detection region. Some methods detect only the presence or absence of dirt rather than the dirt adhesion region; in this case, the dirt detection unit 3 outputs the strength of the detection reaction (dirt detection intensity), which corresponds to the area and density of the adhering dirt, instead of the dirt detection region.
  • While the dirt detection unit 3 is functioning, the dirt state management unit 4 refers to the dirt detection region or the dirt detection intensity output from the dirt detection unit 3, manages the region of dirt adhering to the lens surface of the in-vehicle camera 1 as a dirt storage region, and manages the presence or absence of dirt as a dirt state (the dirt storage region and the dirt state are stored in a nonvolatile storage unit (not shown)).
  • The dirt state management unit 4 also refers to the wiping area output from the wiping area extraction unit 22 and reflects it in the managed dirt state and dirt storage region. For example, when the wiping area from the wiping area extraction unit 22 exceeds a predetermined area, the dirt state management unit 4 changes the stored dirt state from "dirty" to "no dirt". As another example, the dirt state management unit 4 erases the wiping area received from the wiping area extraction unit 22 from the stored dirt storage region.
  • The dirt state management unit 4 may store the presence or absence of dirt, or the size of the area of the dirt storage region, as the dirt state.
  • The dirt state management unit 4 also functions to notify the user of the managed dirt state and dirt storage region (the lens dirt management status) with a warning light or a speaker (not shown), and to notify a vehicle control system (not shown) such as an automatic driving system that autonomously controls the vehicle.
  • FIG. 2A is an example of a captured image captured by the in-vehicle camera 1 when water droplets adhere to the lens surface of the in-vehicle camera 1. Water drops appear in regions 31 and 32 of the captured image, and the scenery beyond them is blocked. FIG. 2B shows a captured image in the case where the water drop appearing in region 31 has been removed.
  • FIG. 3 is a difference image obtained by taking the difference between the two captured images of FIGS. 2A and 2B. In the difference image of FIG. 3, only the region 31 from which water droplets have been removed is extracted.
  • FIG. 4 is an example of a captured image when a moving pedestrian appears in front of the in-vehicle camera 1.
  • a moving pedestrian is shown in the region 33 of the captured image.
  • FIG. 5 shows a difference image between the captured image of FIG. 2A and the captured image of FIG. 4.
  • In the difference image of FIG. 5, not only the pedestrian in region 33 but also the water drop in region 31 is extracted. This is because the pedestrian in region 33 overlaps the water drop in region 31, and the luminance of the water drop in region 31 is changed by the reflected light from the pedestrian.
  • For this reason, the stable state determination unit 21 detects a state in which no moving body appears in the image captured by the in-vehicle camera 1.
  • FIG. 6 shows an example of a flowchart of the processing of the stable state determination unit 21.
  • First, the captured images sequentially output from the in-vehicle camera 1 are acquired (step 2101), and a difference image (sequential difference image) between the acquired captured image and the captured image acquired immediately before it is generated (step 2102). That is, the image data of the acquired captured image is compared with the image data of the captured image acquired immediately before, and image data in which only the differing portions of the comparison result remain is generated as the image data of the difference image.
  • Next, a region whose difference in the sequential difference image is equal to or greater than a predetermined value is extracted as a large-difference region (sequential difference region) (step 2103), and the area of the sequential difference region (sequential difference area) is derived (step 2104).
  • Here, "large" for the large-difference region extracted in step 2103 means large enough to exclude differences other than those caused by the movement of a moving body, such as error differences due to noise in the image data of the captured image, so it is desirable to adjust the threshold according to the actual machine environment. Further, "extracting a region having a large difference" means extracting, from the entire region of the captured image, candidate regions whose difference is caused by the movement of a moving body.
  • In step 2105, it is determined whether the sequential difference area is equal to or smaller than a predetermined value. If the sequential difference area is equal to or smaller than the predetermined value, it is determined that no moving body appears or that the movement of the moving body is small (stable state) (step 2106); if the sequential difference area exceeds the predetermined value, it is determined that a moving body appears (unstable state) (step 2107). "Small" movement of the moving body here means movement so small that the state can be regarded as stable even though a moving body appears, that is, small enough not to be mistaken for wiping of dirt, and it is likewise desirable to adjust this threshold according to the actual machine environment. Finally, the determination result is output as the stable state signal (steps 2106 and 2107). This series of processing is repeated each time a captured image from the in-vehicle camera 1 is input.
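  • As a rough sketch of steps 2102 to 2107, assuming grayscale frames and illustrative pixel and area thresholds (both of which, as noted above, would be tuned to the actual machine environment), the stable state determination could look like this:

        import numpy as np

        def stable_state_signal(prev_frame, frame, pixel_threshold=20, area_threshold=500):
            # step 2102: sequential difference image
            diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
            # step 2103: large-difference (sequential difference) region
            sequential_region = diff >= pixel_threshold
            # step 2104: its area, in pixels
            sequential_area = int(sequential_region.sum())
            # steps 2105-2107: stable state if the area is small enough
            return sequential_area <= area_threshold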
  • As a modification of step 2104 of the flowchart of FIG. 6, there is a method of accumulating the sequential difference region over a certain period and deriving the area of the accumulated region as the sequential difference area.
  • As a modification of step 2105 of the flowchart of FIG. 6, the average value or maximum value of the sequential difference areas may be calculated over a certain period; if the calculated value is smaller than a predetermined value, the stable state signal is set to the stable state, and if the calculated value exceeds the predetermined value, the stable state signal is set to the unstable state.
  • As a modification of step 2101 of the flowchart of FIG. 6, there is a method of thinning out the captured images sequentially output from the in-vehicle camera 1 before acquiring them. By thinning out the captured images, the movement of a moving body becomes easier to capture, and the stable state signal can be stabilized.
  • Further, the stable state determination unit 21 may extract a feature amount such as an edge image or a HOG (Histogram of Oriented Gradients) feature amount from the captured image, instead of using the captured image of the in-vehicle camera 1 as it is (here, the extraction result is referred to as a feature amount image), and this feature amount image may be used in place of the captured image.
  • In this way, the detection performance can be improved and the amount of data to be processed can be reduced.
  • In particular, with the HOG feature amount, it becomes difficult to extract unnecessary image changes caused by slight fluctuations of the moving body or of the ambient light.
  • If an average value such as the luminance or the edge strength is extracted from the captured image and used, the amount of data to be processed can be greatly reduced.
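  • Purely as an illustration of such feature amount images, an edge-strength image and a frame-average luminance could be computed as follows (a HOG descriptor would typically come from an image-processing library instead; these simple stand-ins are assumptions for the sketch):

        import numpy as np

        def edge_feature_image(frame):
            # Gradient-magnitude image used as a simple edge-strength feature amount image.
            gy, gx = np.gradient(frame.astype(np.float32))
            return np.hypot(gx, gy)

        def mean_luminance(frame):
            # Even coarser feature: the average luminance of the whole frame.
            return float(frame.mean())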
  • The wiping area extraction unit 22 extracts the temporal change of the captured image only from captured images in the stable state, in which no moving body appears in the image of the in-vehicle camera 1 or the movement of any moving body that does appear is small, and thereby detects the region from which dirt has been wiped.
  • FIG. 7 shows an example of a flowchart of processing of the wiping area extraction unit 22.
  • First, the stable state signal output from the stable state determination unit 21 is referenced to determine whether it indicates the stable state (step 2201). If the stable state signal indicates the stable state, a captured image from the in-vehicle camera 1 is acquired (step 2202). Then, a difference image between the acquired captured image in the stable state and the captured image in the stable state acquired in the previous iteration is generated (step 2203). A region with a large difference (difference region) is extracted from the difference image (step 2204), and the difference region is output as the dirt wiping area (step 2205).
  • The large-difference region extracted in step 2204 is a region whose difference is equal to or greater than a predetermined value. This predetermined value may be the same as the predetermined value used in step 2103 or a different value, but it is desirable to adjust it according to the actual machine environment. This series of processing is repeatedly executed every time a captured image is output from the in-vehicle camera 1.
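  • A minimal sketch of steps 2203 to 2205, under the same illustrative assumptions as above and with the caller responsible for steps 2201 and 2202 (that is, for passing in only frames captured in the stable state):

        import numpy as np

        def extract_wiping_area(prev_stable_frame, stable_frame, pixel_threshold=20):
            # step 2203: difference between two captured images taken in the stable state
            diff = np.abs(stable_frame.astype(np.int16) - prev_stable_frame.astype(np.int16))
            # steps 2204-2205: the large-difference region is the candidate dirt wiping area
            return diff >= pixel_threshold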
  • The wiping area extraction unit 22 may add, between step 2203 and step 2204, a process of adding up and accumulating the difference images generated in step 2203 over a certain period. If the difference images are added over a period long enough to include both the start and the end of headlight irradiation, the change in the brightness of the dirt caused by the temporary headlight irradiation cancels out and can thus be removed from the accumulated difference image.
  • The period over which the difference images are accumulated can be set, for example, by fixing a certain period after a difference is first extracted, by requiring the illuminance of the in-vehicle camera 1 to be equal to or greater than a predetermined value, or by using a headlight detection process (not shown). As an example of the headlight detection process, simple headlight detection can be realized by detecting a high-luminance circular region in the captured image.
  • Alternatively, instead of accumulating the difference images between step 2203 and step 2204, the difference regions extracted in step 2204 may be accumulated for a certain period and used as the wiping area. This makes it possible to avoid missing part of the dirt wiping area.
  • The certain period over which the difference image or the difference region is accumulated may be a predetermined time, but it is preferable to continue the accumulation until the dirt state managed by the dirt state management unit 4 becomes "no dirt" or until the dirt detection unit 3 starts functioning. This makes it possible to avoid missing part of the dirt wiping area. In addition, it is preferable to output the wiping area sequentially so that the dirt state management unit 4 can manage the dirt wiping state even during the period in which the difference image or the difference region is being accumulated.
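  • As a sketch of the accumulation idea, summing the signed frame-to-frame differences over the period makes a temporary brightness change (headlight on, then off again) largely cancel, while a real wiping event leaves a persistent difference; the threshold is again an illustrative assumption:

        import numpy as np

        def accumulated_wiping_area(stable_frames, pixel_threshold=20):
            acc = None
            prev = None
            for frame in stable_frames:
                f = frame.astype(np.int16)
                if prev is not None:
                    step = f - prev
                    acc = step if acc is None else acc + step
                prev = f
            if acc is None:
                return None
            # only persistent changes survive the signed accumulation
            return np.abs(acc) >= pixel_threshold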
  • The wiping area extraction unit 22 may also extract a feature amount such as an edge image or a HOG feature amount from the captured image instead of using the captured image of the in-vehicle camera 1 as it is (here, the extraction result is referred to as a feature amount image), and this feature amount image may be used in place of the captured image.
  • In this way, the detection performance can be improved and the amount of data to be processed can be reduced.
  • In particular, with the HOG feature amount, it becomes difficult to extract unnecessary image changes caused by slight fluctuations of the moving body or of the ambient light.
  • The dirt state management unit 4 manages the presence or absence of dirt attached to the lens surface of the in-vehicle camera 1 as the dirt state, and manages the attached region as the dirt storage region.
  • the dirt state management unit 4 refers to the dirt detection area or the dirt detection intensity output from the dirt detection unit 3 and stores the dirt state and the dirt storage area.
  • However, the dirt detection unit 3 may not function in some situations, for example, while the host vehicle is stopped. Even if the dirt on the lens surface of the in-vehicle camera 1 is wiped off in such a state, the wiping cannot be reflected in the dirt state or dirt storage region stored in the dirt state management unit 4 by the dirt detection unit 3 alone.
  • Therefore, the dirt state management unit 4 refers to the wiping area output from the wiping area extraction unit 22 and erases the wiping area from the dirt storage region it stores. Further, when the wiping area exceeds a predetermined area, the stored dirt state is changed from "dirty" to "no dirt".
  • The dirt state management unit 4 may be configured to manage only the dirt state, only the dirt storage region, or both the dirt state and the dirt storage region. When the dirt state management unit 4 manages both, the stored dirt state may be changed from "dirty" to "no dirt" when the dirt storage region falls below a predetermined area.
  • Note that the wiping area output from the wiping area extraction unit 22 is obtained by extracting regions of the captured image of the in-vehicle camera 1 that have changed over time, and may therefore include regions where dirt has newly appeared. In that case, the wiping area may be newly added to the dirt storage region as a dirt adhesion region and stored. Further, when the dirt state stored in the dirt state management unit 4 is "no dirt" and the wiping area exceeds a predetermined area, the dirt state may be changed from "no dirt" to "dirty".
  • Dirt adhering to the lens surface of the in-vehicle camera 1 may also be removed gradually, a little at a time. This occurs, for example, when the lens is cleaned manually, part of the dirt remains, and the cleaning is repeated many times, or when part of a water drop runs off under its own weight.
  • In such cases, each wiping area output from the wiping area extraction unit 22 covers only the part of the dirt that was wiped at that time.
  • Therefore, it is preferable that the dirt state management unit 4 accumulate the wiping areas output by the wiping area extraction unit 22 and, when the accumulated wiping area exceeds a predetermined area, change the stored dirt state from "dirty" to "no dirt" or from "no dirt" to "dirty".
  • the wiping area output from the wiping area extraction unit 22 may be sequentially reflected in the dirt storage area.
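  • A minimal sketch of this management logic, with an assumed class name, boolean masks for the regions, and an illustrative area threshold:

        import numpy as np

        class DirtStateManager:
            def __init__(self, shape, clear_area_threshold=1000):
                self.dirt_region = np.zeros(shape, dtype=bool)        # dirt storage region
                self.accumulated_wiped = np.zeros(shape, dtype=bool)  # accumulated wiping areas
                self.dirty = False                                    # dirt state
                self.clear_area_threshold = clear_area_threshold

            def update_from_detection(self, detected_region):
                # reflect the dirt detection unit's output while it is functioning
                self.dirt_region |= detected_region
                self.dirty = bool(self.dirt_region.any())

            def update_from_wiping(self, wiping_area):
                # erase the wiped part from the stored dirt region and accumulate it
                self.accumulated_wiped |= wiping_area
                self.dirt_region &= ~wiping_area
                if int(self.accumulated_wiped.sum()) >= self.clear_area_threshold:
                    self.dirty = False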
  • As described above, the in-vehicle imaging device of this embodiment can provide a lens dirt wiping diagnostic device (the dirt wiping diagnosis unit 2, the dirt detection unit 3, and the dirt state management unit 4) that extracts the temporal change of the captured image and detects wiping of lens dirt after excluding scenes in which a person cleaning the lens surface of the in-vehicle camera 1, or another moving body, appears in the captured image of the in-vehicle camera 1.
  • While a moving body appears in the captured image, the stable state signal from the stable state determination unit 21 indicates the unstable state, so such images are excluded from the processing of the wiping area extraction unit 22; only the captured images determined to be in the stable state are processed by the wiping area extraction unit 22.
  • Moreover, the captured image at the time of stopping does not have to be used as a reference image, and the diagnosis can be performed at any time. This avoids a large separation between the capture times of the two images whose difference is taken, so the possibility of erroneous determination can be reduced.
  • Further, even while the dirt detection unit 3 that detects dirt attached to the lens surface of the in-vehicle camera 1 is not functioning (for example, while the host vehicle is stopped), wiping of the dirt attached to the lens of the in-vehicle camera 1 is detected, so an autonomous control system of the vehicle (for example, an automatic driving system) that had been suspended because of the dirt on the lens of the in-vehicle camera 1 can be resumed.
  • FIG. 8 is a block diagram showing a schematic configuration of the in-vehicle imaging device according to the second embodiment of the present invention.
  • In FIG. 8, components having the same functions as those of the in-vehicle imaging device 100 of the first embodiment shown in FIG. 1 are given the same reference numerals.
  • the in-vehicle imaging device 200 of the present embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4.
  • the dirt wiping diagnosis unit 2 includes a stable state determination unit 21 and a wiping region extraction unit 22.
  • the dirt wiping diagnostic unit 2, the dirt detection unit 3, and the dirt state management unit 4 are also referred to as a lens dirt wiping diagnostic device.
  • In this embodiment, the wiping area extraction unit 22 refers not only to the captured images output from the in-vehicle camera 1 and the stable state signal output from the stable state determination unit 21, but also to the dirt adhesion region (dirt storage region) of the lens surface of the in-vehicle camera 1 stored in the dirt state management unit 4.
  • FIG. 9 is an example of a flowchart of processing of the wiping area extraction unit 22 in the second embodiment.
  • First, the dirt storage region stored in the dirt state management unit 4 is acquired (step 2210). Thereafter, if the stable state signal output from the stable state determination unit 21 indicates the stable state, a captured image from the in-vehicle camera 1 is acquired, and a difference image between this captured image and the captured image of the immediately preceding stable state acquired in the previous iteration is generated (steps 2201 to 2203). The difference images are then accumulated (step 2211), and a region with a large difference (difference region) is extracted from the accumulated difference image (step 2212). Next, the dirt storage region acquired in step 2210 is compared with the accumulated difference region (step 2213), and if most of the dirt storage region is covered by the accumulated difference region, the accumulated difference region is output as the region from which dirt has been wiped off (wiping area) (step 2205).
  • If the stable state signal indicates the unstable state in step 2201, or if most of the dirt storage region is not covered by the accumulated difference region in step 2213, the processing is suspended for a certain time in order to acquire a new captured image in the stable state (step 2214), and then restarted from step 2201.
  • In this way, the difference images can continue to be accumulated in step 2211 until most of the dirt has been wiped off.
  • The wiping area extraction unit 22 of this embodiment is not limited to the flowchart shown in FIG. 9. For example, the same function can be obtained by omitting step 2211, extracting the region with a large difference from each difference image in step 2212, accumulating the extracted regions, and using the accumulated region as the difference region.
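  • A sketch of the comparison in steps 2212 to 2213, with the coverage ratio that stands in for "most of the dirt storage region" being an illustrative assumption:

        import numpy as np

        def wiping_from_stored_dirt(accumulated_diff_region, dirt_storage_region, coverage_ratio=0.8):
            dirt_pixels = int(dirt_storage_region.sum())
            if dirt_pixels == 0:
                return None
            covered = int((accumulated_diff_region & dirt_storage_region).sum())
            if covered / dirt_pixels >= coverage_ratio:
                return accumulated_diff_region   # step 2205: output as the wiping area
            return None                          # otherwise keep accumulating (step 2214 path)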
  • FIG. 10 is a block diagram illustrating a schematic configuration of the in-vehicle imaging device according to the third embodiment of the present invention.
  • In FIG. 10, components having the same functions as those of the in-vehicle imaging device 100 of the first embodiment shown in FIG. 1 are given the same reference numerals.
  • the in-vehicle imaging device 300 of the present embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4. Further, the dirt wiping diagnosis unit 2 includes a stable state determination unit 21, a wiping region extraction unit 22, and a cleaning state determination unit 23.
  • the dirt wiping diagnostic unit 2, the dirt detection unit 3, and the dirt state management unit 4 are also referred to as a lens dirt wiping diagnostic device.
  • FIG. 11 is an example of a captured image captured by the in-vehicle camera 1 when a cloth is pressed against the lens of the in-vehicle camera 1.
  • In this image, region 34, which is the region against which the cloth is pressed, appears dark and black.
  • The cleaning state determination unit 23 refers to the illuminance of the in-vehicle camera 1, determines that the lens of the in-vehicle camera 1 is being cleaned when the illuminance has decreased by a predetermined value or more, and outputs the determination result as a cleaning signal.
  • Alternatively, the cleaning state determination unit 23 may refer to the focal length of the in-vehicle camera 1 (not shown), determine that the lens of the in-vehicle camera 1 is being cleaned when the amount of change in the focal length is equal to or greater than a predetermined value, and output the determination result as a cleaning signal.
  • Alternatively, the cleaning state determination unit 23 may calculate the average luminance from the captured image of the in-vehicle camera 1, determine that the lens of the in-vehicle camera 1 is being cleaned when the average luminance has decreased by a predetermined value or more, and output the determination result as a cleaning signal.
  • Alternatively, the cleaning state determination unit 23 may extract a region whose luminance is equal to or lower than a predetermined value from the captured image of the in-vehicle camera 1 as a dark region, determine that the lens of the in-vehicle camera 1 is being cleaned when the area of the extracted dark region is equal to or greater than a predetermined value, and output the determination result as a cleaning signal.
  • When the extracted dark region does not cover the entire captured image, the cleaning state determination unit 23 further determines whether there is a large luminance change at the edge of the dark region. If there is a luminance change equal to or greater than a predetermined value at the edge of the dark region, it determines that the dark region is caused by cleaning of the lens; if there is no luminance change equal to or greater than the predetermined value at the edge of the dark region, it determines that the dark region is not caused by cleaning. This suppresses erroneous detection in the cleaning determination.
  • Alternatively, the cleaning state determination unit 23 may extract a region whose edge strength is equal to or lower than a predetermined value (non-edge region) from the captured image of the in-vehicle camera 1, determine that the lens of the in-vehicle camera 1 is being cleaned when the area of the non-edge region is equal to or greater than a predetermined value, and output the determination result as a cleaning signal.
  • The cleaning state determination unit 23 may also determine that the lens of the in-vehicle camera 1 is being cleaned only when the decrease in illuminance, the change in focal length, the decrease in the average luminance of the captured image, or the like continues for at least a certain period.
  • Further, the dark region or the non-edge region may be detected from the captured image of the in-vehicle camera 1 as a region where the dirt has been cleaned; the dark regions or non-edge regions may then be accumulated sequentially, and when the accumulated area of the dark region or non-edge region becomes equal to or greater than a predetermined value, it is determined that the lens of the in-vehicle camera 1 has been cleaned, with the determination result preferably being output as a cleaning signal.
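  • One possible shape of the dark-region variant of the cleaning determination, with all threshold values being illustrative assumptions:

        import numpy as np

        def lens_being_cleaned(frame, dark_threshold=30, area_threshold=5000, edge_threshold=40):
            # Extract the low-luminance (dark) region of the captured image.
            dark = frame < dark_threshold
            if int(dark.sum()) < area_threshold:
                return False
            if dark.all():
                return True   # the cloth covers the whole image
            # Otherwise require a sharp luminance step at the edge of the dark region.
            gy, gx = np.gradient(frame.astype(np.float32))
            edge_strength = np.hypot(gx, gy)
            border = (dark ^ np.roll(dark, 1, axis=0)) | (dark ^ np.roll(dark, 1, axis=1))
            return bool((edge_strength[border] >= edge_threshold).any())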
  • FIG. 12 is an example of a flowchart of processing of the wiping area extraction unit 22 in the present embodiment.
  • First, it is determined whether the stable state signal output from the stable state determination unit 21 indicates the stable state (step 2201) and whether the cleaning signal output from the cleaning state determination unit 23 indicates that cleaning is in progress (step 2221). When the state is stable and cleaning is not in progress, a captured image of the in-vehicle camera 1 is acquired (step 2202). It is then determined whether the cleaning signal indicated that cleaning was in progress in the immediately preceding iteration (step 2222). If it did, the captured image acquired in step 2202 is a captured image in the stable state immediately after cleaning, so a difference image between the captured image acquired in step 2202 and the non-cleaning captured image acquired in the previous iteration is generated (step 2223).
  • That is, this difference image is the difference between the captured images immediately before and immediately after the cleaning period. A region with a large difference (difference region) is then extracted from the difference image (step 2204), and the difference region is output as the wiped region (wiping area) (step 2205). If the stable state signal indicates the unstable state in step 2201, if the cleaning signal indicates that cleaning is in progress in step 2221, or if the cleaning signal did not indicate cleaning in the immediately preceding iteration in step 2222, the processing is suspended for a certain time (step 2224) and then restarted from step 2201.
  • Note that the same function can be obtained by reversing the order of steps 2221 and 2202 so that all captured images in the stable state are acquired, and executing steps 2223, 2204, and 2205 only when the cleaning signal switches from cleaning to non-cleaning.
  • Also in this embodiment, the wiping area extraction unit 22 may accumulate the difference images for a certain period and extract the large-difference region (difference region) from the accumulated difference image, or may accumulate the difference regions and output the accumulated difference region as the wiping area. The period over which the difference image or difference region is accumulated may be a predetermined period, but it is preferable to continue until the dirt state managed by the dirt state management unit 4 becomes "no dirt" or until the dirt detection unit 3 starts functioning. This makes it possible to avoid missing part of the dirt wiping area. In addition, it is preferable to output the wiping area sequentially so that the dirt state management unit 4 can manage the dirt wiping state even during the accumulation period.
  • The wiping area extraction unit 22 may also extract a feature amount such as an edge image or a HOG feature amount from the captured image instead of using the captured image of the in-vehicle camera 1 as it is (here, the extraction result is referred to as a feature amount image), and this feature amount image may be used in place of the captured image. In this way, the detection performance can be improved and the amount of data to be processed can be reduced. In particular, with the HOG feature amount, it becomes difficult to extract unnecessary image changes caused by slight fluctuations of the moving body or of the ambient light.
  • In some cases the wiping area for the dirt cannot be detected; in such cases, wiping certainty information is output instead of the wiping area.
  • In this embodiment, the period during which the dirt wiping diagnosis unit 2 performs dirt wiping detection is limited.
  • For this purpose, the vehicle speed of the host vehicle and information indicative of occupants getting into or out of the host vehicle are acquired and used.
  • Information indicative of occupants getting into or out of the host vehicle is, for example, information on the opening and closing of a door and/or window of the host vehicle, or a change in the vehicle weight.
  • A sensor for detecting and acquiring such information is provided. Since the vehicle height also changes with the vehicle weight, a change in the vehicle height may be detected instead of a change in the vehicle weight.
  • In the lens dirt wiping diagnostic device of this embodiment, when a door is opened and closed while the vehicle speed is sufficiently low, or when the vehicle weight decreases, it is determined that an occupant has got out of the host vehicle in order to clean the lens of the in-vehicle camera 1, and this is used as a trigger for starting the processing of the dirt wiping diagnosis unit 2.
  • While this trigger condition holds, the dirt wiping diagnosis unit 2 may perform its processing on the captured images of the in-vehicle camera 1; otherwise, the processing of the dirt wiping diagnosis unit 2 may be stopped. Likewise, when the in-vehicle camera 1 is a side camera, the processing of the dirt wiping diagnosis unit 2 for that in-vehicle camera 1 may be stopped unless the corresponding trigger applies.
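  • A sketch of such a trigger condition; the parameter names and the speed limit are illustrative assumptions:

        def wiping_diagnosis_trigger(vehicle_speed_kmh, door_opened, vehicle_weight_dropped,
                                     speed_limit_kmh=5.0):
            # Start the dirt wiping diagnosis only when the vehicle is (nearly) stopped
            # and there is a sign that an occupant got out to clean the lens.
            nearly_stopped = vehicle_speed_kmh <= speed_limit_kmh
            occupant_got_out = door_opened or vehicle_weight_dropped
            return nearly_stopped and occupant_got_out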
  • In another embodiment, the information indicative of occupants getting into or out of the host vehicle described in the fourth embodiment may itself be used to determine that the lens of the in-vehicle camera 1 has been wiped. In that case, the dirt state management unit 4 receives this determination as an input and initializes the stored dirt state and/or dirt storage region.
  • While the engine is stopped, the lens dirt wiping diagnostic device could be operated from a vehicle-mounted battery (not shown), but because of the concern of draining the battery, the lens dirt wiping diagnostic device is normally stopped immediately as well. In this case, the lens dirt wiping diagnostic devices of the first to fifth embodiments cannot detect wiping of dirt attached to the lens of the in-vehicle camera 1 during the stop.
  • Therefore, in this embodiment, the wiping area extraction unit 22 of the lens dirt wiping diagnostic device stores a captured image in the stable state in a nonvolatile storage unit (for example, a nonvolatile memory, not shown) before the engine is stopped, or immediately after the engine is stopped but before the lens dirt wiping diagnostic device itself is stopped.
  • After the engine is restarted, the wiping area extraction unit 22 generates a difference image between the captured image stored in the storage unit and the latest captured image in the stable state acquired after the restart, and detects the dirt wiping area from it. In this case, the cleaning state determination unit 23 is not necessary.
  • In addition, the time at which the engine was stopped is stored in a nonvolatile storage unit (not shown). When the engine is restarted, the duration of the engine stop is calculated as the difference between the stored engine stop time and the current time. If the stop duration is equal to or greater than a predetermined value, the captured image of the in-vehicle camera 1 is displayed on a monitor in the vehicle or the like, the user is asked to confirm the presence or absence of dirt, and the confirmation result is input through the monitor's input means (for example, a touch panel). The dirt state management unit 4 receives this input and initializes the stored dirt state and/or dirt storage region.
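  • A sketch of this store-and-compare behaviour; the file paths stand in for the nonvolatile storage unit, and the time limit is an illustrative assumption:

        import json, time
        import numpy as np

        def on_engine_stop(stable_frame, image_path="last_stable.npy", meta_path="stop_meta.json"):
            # Persist the last stable captured image and the engine stop time
            # before the diagnostic device powers down.
            np.save(image_path, stable_frame)
            with open(meta_path, "w") as f:
                json.dump({"stop_time": time.time()}, f)

        def on_engine_restart(new_stable_frame, image_path="last_stable.npy",
                              meta_path="stop_meta.json", max_stop_seconds=3600,
                              pixel_threshold=20):
            # If the engine was stopped too long, fall back to asking the user on the
            # monitor; otherwise diff the pre-stop image against the first stable image.
            with open(meta_path) as f:
                stop_seconds = time.time() - json.load(f)["stop_time"]
            if stop_seconds >= max_stop_seconds:
                return None
            old = np.load(image_path)
            diff = np.abs(new_stable_frame.astype(np.int16) - old.astype(np.int16))
            return diff >= pixel_threshold   # candidate dirt wiping area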
  • According to this embodiment, it is possible to handle the case where the engine stop time becomes long, the capture times of the two images whose difference is taken are far apart, and the surrounding brightness or the arrangement of objects has changed, so the possibility of erroneous determination can be reduced.
  • There are also cases in which the lens of the in-vehicle camera 1 is cleaned manually while the vehicle is traveling, for example when the passenger in the passenger seat opens the door window and wipes the lens of the side camera on the passenger side.
  • In such cases, the stable state determination unit 21 described in the third embodiment continues to output a stable state signal indicating the unstable state, so the dirt wiping diagnosis unit 2 does not function and the wiping of the dirt cannot be detected.
  • FIG. 13 is a block diagram showing a schematic configuration of an in-vehicle imaging device according to Embodiment 7 of the present invention.
  • In FIG. 13, components having the same functions as those of the in-vehicle imaging device 300 of the third embodiment shown in FIG. 10 are given the same reference numerals.
  • The in-vehicle imaging device 400 of this embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4, and further includes a vehicle speed sensor 5.
  • the vehicle speed sensor 5 detects and outputs the vehicle speed of the host vehicle.
  • the dirt wiping diagnosis unit 2 includes a cleaning state determination unit 23 in addition to the stable state determination unit 21 and the wiping region extraction unit 22 as in the third embodiment.
  • In this embodiment, the stable state determination unit 21 and the wiping area extraction unit 22 can be omitted.
  • the dirt wiping diagnostic unit 2, the dirt detection unit 3, and the dirt state management unit 4 may be a lens dirt wiping diagnostic device, or may include a vehicle speed sensor 5 as a lens dirt wiping diagnostic device.
  • The dirt wiping diagnosis unit 2 acquires the vehicle speed of the host vehicle output from the vehicle speed sensor 5, and when the vehicle speed exceeds a predetermined value, it stops the processing of the stable state determination unit 21 and the wiping area extraction unit 22 and outputs the cleaning signal from the cleaning state determination unit 23 to the dirt state management unit 4.
  • However, the cleaning signal output from the cleaning state determination unit 23 merely indicates the presence or absence of cleaning, and it cannot be determined from the cleaning signal alone whether dirt has actually been wiped off. It is therefore preferable not to apply this embodiment more than necessary.
  • For this reason, window opening/closing information is acquired from the vehicle, and only when a window is sufficiently open is the wiping of dirt from the in-vehicle camera 1 mounted in the vicinity of that window determined from the cleaning signal output by the cleaning state determination unit 23. This prevents this embodiment from being applied unnecessarily.
  • When a surrounding object appears in the image of the in-vehicle camera 1, it can be determined that the region in which the object appears is not dirty. Furthermore, in the captured image of the in-vehicle camera 1, the outline of dirt often appears blurred, so it can be determined that there is no dirt in regions where a clear outline appears.
  • Therefore, an object recognition unit may be provided, such as pedestrian detection for detecting a pedestrian appearing in the captured image of the in-vehicle camera 1 or vehicle detection for detecting a vehicle appearing in the captured image; it may be determined that there is no dirt in a region of the captured image of the in-vehicle camera 1 where an object or a clear outline is detected, and the dirt state management unit 4 may reflect this determination in the dirt state and dirt storage region it stores.
  • In this way, the dirt wiping area can be detected even if the past dirt adhesion region and dirt state are unknown.
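  • A sketch of how such object-recognition results might be reflected; the function name and the assumption that the recognition results arrive as boolean masks are illustrative:

        import numpy as np

        def clear_dirt_where_objects_seen(dirt_region, detected_object_masks):
            # Regions where a pedestrian, vehicle, or other clearly outlined object
            # was recognised are treated as free of dirt and erased from the stored
            # dirt storage region.
            cleared = dirt_region.copy()
            for mask in detected_object_masks:
                cleared &= ~mask
            return cleared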
  • As described above, the present invention is configured as follows. (1) An in-vehicle imaging device comprising: an in-vehicle camera that captures images through a lens; a dirt detection unit that detects the presence or absence of dirt on the lens or a dirt adhesion region based on the temporal change of the captured images sequentially captured by the in-vehicle camera; a dirt wiping diagnosis unit that diagnoses wiping of lens dirt; and a dirt state management unit that manages the dirt on the lens, wherein the dirt wiping diagnosis unit includes a stable state determination unit that determines, based on the captured images sequentially captured by the in-vehicle camera, whether the captured image is in a stable state, and a wiping area extraction unit that extracts a dirt wiping area based on the temporal change of the captured images determined to be in the stable state, and the dirt state management unit stores the presence or absence of dirt or the dirt adhesion region detected by the dirt detection unit and reflects the dirt wiping area extracted by the wiping area extraction unit in the stored presence or absence of dirt or dirt adhesion region, thereby managing the dirt on the lens.
  • Further, the present invention is configured as follows. (2) In the above in-vehicle imaging device, the stable state determination unit sequentially acquires the captured images output from the in-vehicle camera, generates, each time, a difference image between the acquired captured image and the captured image acquired immediately before it, extracts a large-difference region based on the generated difference image, determines that the temporal change of the acquired captured image is small and the state is stable if the area of the large-difference region is equal to or smaller than a predetermined value, and determines that the temporal change of the acquired captured image is large and the state is not stable if the area of the large-difference region exceeds the predetermined value. By determining the stable state from whether the area of the large-difference region is equal to or smaller than the predetermined value, the stable state can be determined appropriately.
  • Further, the present invention is configured as follows. (3) In the above in-vehicle imaging device, the stable state determination unit sequentially acquires the captured images output from the in-vehicle camera, sequentially generates feature amount images by extracting feature amounts from the sequentially acquired captured images, generates, each time, a difference image between the generated feature amount image and the feature amount image generated immediately before it, extracts a large-difference region based on the generated difference image, determines that the temporal change of the acquired captured image is small and the state is stable if the area of the extracted large-difference region is equal to or smaller than a predetermined value, and determines that the temporal change of the acquired captured image is large and the state is not stable if the area exceeds the predetermined value. By using the feature amount image, the detection performance can be improved and the amount of data to be processed can be reduced.
  • Further, the present invention is configured as follows. (4) In the above in-vehicle imaging device, the wiping area extraction unit sequentially acquires the captured images for which the determination result of the stable state determination unit is the stable state, generates a difference image between the acquired captured image in the stable state and the previously acquired captured image in the stable state, extracts a large-difference region from the generated difference image, and sets the extracted region as the dirt wiping area. By using captured images in the stable state, the possibility of erroneous determination can be reduced.
  • Further, the present invention is configured as follows. (5) In the above in-vehicle imaging device, the wiping area extraction unit sequentially acquires the captured images for which the determination result of the stable state determination unit is the stable state, generates a difference image between the acquired captured image in the stable state and the previously acquired captured image in the stable state, accumulates the generated difference images for a certain period, extracts a large-difference region from the accumulated difference image, and sets the extracted region as the dirt wiping area. By accumulating the difference images, it is possible to cope with cases where the brightness of the dirt changes because of temporary irradiation by the headlights of another vehicle.
  • Further, the present invention is configured as follows. (6) In the above in-vehicle imaging device, the wiping area extraction unit sequentially acquires the captured images for which the determination result of the stable state determination unit is the stable state, generates a difference image between the acquired captured image in the stable state and the previously acquired captured image in the stable state, acquires the presence or absence of dirt or the dirt adhesion region stored in the dirt state management unit, accumulates the generated difference images until the acquired dirt disappears or the area of the acquired dirt adhesion region becomes equal to or smaller than a predetermined value, extracts a large-difference region from the accumulated difference image, and sets the extracted region as the dirt wiping area. By accumulating the difference images in this way, it is possible to avoid missing part of the dirt wiping area.
  • Further, the present invention is configured as follows. (7) In the above in-vehicle imaging device, the wiping area extraction unit sequentially acquires the captured images for which the determination result of the stable state determination unit is the stable state, sequentially generates feature amount images by extracting feature amounts from the sequentially acquired captured images in the stable state, generates a difference image between the generated feature amount image and the previously acquired feature amount image, extracts a large-difference region based on the generated difference image, and sets the extracted region as the dirt wiping area. By using the feature amount image, the detection performance can be improved and the amount of data to be processed can be reduced.
  • Further, the present invention is configured as follows. (8) In the above in-vehicle imaging device, the dirt state management unit acquires the presence or absence of dirt or the dirt adhesion region detected by the dirt detection unit, stores the acquired dirt adhesion region as a dirt storage region, stores the acquired presence or absence of dirt or the size of the area of the stored dirt storage region as the dirt state, acquires the dirt wiping area detected by the wiping area extraction unit, erases the acquired wiping area from the stored dirt adhesion region, and updates the dirt state based on the updated dirt adhesion region or the area of the erased wiping area. In this way, the dirt on the lens can be managed appropriately.
  • Further, the present invention is configured as follows. (9) In the above in-vehicle imaging device, the dirt state management unit acquires the presence or absence of dirt or the dirt adhesion region detected by the dirt detection unit, stores the acquired dirt adhesion region as a dirt storage region, stores the acquired presence or absence of dirt or the size of the area of the stored dirt storage region as the dirt state, acquires the dirt wiping areas detected by the wiping area extraction unit and accumulates them for a certain period, and, if the area of the accumulated wiping area is equal to or greater than a predetermined value, sets the stored dirt state to "no dirt". This makes it possible to cope with cases in which the dirt is removed step by step.
  • Further, the present invention is configured as follows. (10) In the above in-vehicle imaging device, the dirt state management unit notifies the user of the dirt management status of the lens based on the stored presence or absence of dirt or the stored dirt adhesion region. The user can thus know the management status of the lens dirt and take appropriate measures.
  • Further, the present invention is configured as follows. (11) In the above in-vehicle imaging device, the dirt state management unit notifies a system that autonomously controls the vehicle of the dirt management status of the lens based on the stored presence or absence of dirt or the stored dirt adhesion region. A system that performs autonomous control, such as automatic driving, can thereby know the management status of the lens dirt and perform appropriate control.
  • Further, the present invention is configured as follows. (12) An in-vehicle imaging device comprising: an in-vehicle camera that captures images through a lens; a dirt detection unit that detects the presence or absence of dirt on the lens or a dirt adhesion region based on the captured images sequentially captured by the in-vehicle camera; a dirt wiping diagnosis unit that diagnoses wiping of dirt on the lens; and a dirt state management unit that manages the dirt on the lens, wherein the dirt wiping diagnosis unit includes a stable state determination unit that determines, based on the captured images sequentially captured by the in-vehicle camera, whether the captured image is in a stable state, a cleaning state determination unit that determines that the lens is being cleaned, and a wiping area extraction unit that, based on the determinations of the stable state determination unit and the cleaning state determination unit, extracts captured images with a small temporal change before and after the period during which the lens is cleaned and extracts the dirt wiping area from the difference between these two captured images, and the dirt state management unit stores the presence or absence of dirt or the dirt adhesion region detected by the dirt detection unit, reflects the dirt wiping area extracted by the wiping area extraction unit in the stored presence or absence of dirt or dirt adhesion region, and thereby manages the dirt on the lens. By reflecting the cleaning state of the lens in this way, it is possible to provide an in-vehicle imaging device having a lens dirt wiping diagnostic device that can reduce the possibility of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the operation that removes the dirt attached to the lens surface of the in-vehicle camera is unknown.
  • The cleaning state determination unit extracts, from the captured image of the in-vehicle camera, a region whose luminance is at or below a predetermined value as a dark region, and determines that the lens is being cleaned when the area of the extracted dark region is at or above a predetermined value. The cleaning state can thus be determined by extracting a low-luminance dark region.
  • When the extracted dark region does not cover the entire captured image, the cleaning state determination unit further determines whether there is a large luminance change at the edge of the extracted dark region, keeps only dark regions with a large luminance change at the edge, and determines that the lens is being cleaned when the area of those regions is at or above a predetermined value. Judging the luminance change at the edge of the dark region allows the cleaning state to be determined more accurately.
  • The cleaning state determination unit extracts, from the captured image of the in-vehicle camera, a region whose luminance is at or below a predetermined value as a dark region, accumulates the extracted dark regions, and determines that the lens is being cleaned when the area of the accumulated dark region is at or above a predetermined value. Even when the lens is cleaned a little at a time, it can thus be detected that the lens has been cleaned.
  • The dirt wiping diagnosis unit refers to information such as the speed of the host vehicle and/or boarding and alighting information of the host vehicle, and operates only when an occupant of the host vehicle could be cleaning the lens. Stopping the operation when the lens cannot be cleaned reduces the processing load.
  • The present invention is configured as follows. (17) The dirt detection unit, the dirt wiping diagnosis unit, and the dirt state management unit constitute a lens dirt wiping diagnostic device. When the lens dirt wiping diagnostic device is stopped by stopping the engine, the captured image of the in-vehicle camera at the time of the stop is stored immediately before the device stops, and when the engine is restarted, the captured image stored at the stop is used as the past captured image from before the lens was cleaned. The dirt state of the lens can therefore be managed appropriately even if the lens dirt wiping diagnostic device is stopped while the engine is off.
  • The in-vehicle imaging device includes an object recognition unit that detects objects from the captured image of the in-vehicle camera, and the dirt state management unit treats a region in which the object recognition unit has detected an object as a dirt wiping area and deletes that area from the stored dirt adhesion area. Since a region in which an object has been detected can be regarded as free of dirt, reflecting the detection results of the object recognition unit allows the dirt state of the lens to be managed more appropriately.
  • The present invention is configured as follows. (19) A vehicle control system comprising the above in-vehicle imaging device and an autonomous control system of the vehicle, wherein autonomous control of the vehicle is permitted when the area of the dirt adhesion region stored in the dirt state management unit has decreased to a predetermined value or less. Autonomous control of the vehicle can therefore be performed without being affected by dirt on the lens.
  • Reference signs: 1 ... in-vehicle camera; 2 ... dirt wiping diagnosis unit; 3 ... dirt detection unit; 4 ... dirt state management unit; 5 ... vehicle speed sensor; 21 ... stable state determination unit; 22 ... wiping area extraction unit; 23 ... cleaning state determination unit; 31, 32 ... regions (water-drop display regions); 33 ... region (pedestrian display region); 34 ... region (area where a cloth is pressed against the lens).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

Provided is an onboard image-capture device including a lens dirt elimination diagnosis device with which the probability of an erroneous determination can be reduced. The onboard image-capture device is provided with: an onboard camera for capturing an image via a lens; a dirt sensing unit which, on the basis of captured images sequentially captured by means of the onboard camera, detects the presence or absence of dirt or a dirt attached area on the lens; a stable state determination unit which, on the basis of a temporal change in the captured images sequentially captured by the onboard camera, determines the presence or absence of a stable state; an eliminated area extraction unit which extracts a dirt eliminated area on the basis of the temporal change in captured images, among the captured images sequentially captured by the onboard camera, when the result of determination by the stable state determination unit indicates a stable state; and a dirt state management unit in which the presence or absence of dirt or the dirt attached area detected by the dirt sensing unit is stored, and which manages dirt on the lens by reflecting the dirt eliminated area extracted by the eliminated area extraction unit on the presence or absence of dirt or the dirt attached area that is stored.

Description

In-vehicle imaging device
 The present invention relates to an in-vehicle imaging device that is mounted on the body of an automobile and can capture images of the surroundings outside the automobile.
 In recent years, in-vehicle imaging devices have become widespread that use an in-vehicle camera mounted on the body of an automobile to image at least one of the forward, rearward, and side directions of the host vehicle, or its entire surroundings, and display the captured image on a monitor screen inside the vehicle. There are also warning devices that detect other vehicles or pedestrians approaching the host vehicle from the captured image of the in-vehicle camera and notify the driver.
 With some exceptions, in-vehicle cameras are mounted on the outside of the vehicle body. Dirt such as raindrops and mud therefore tends to adhere to the surface of the lens of the in-vehicle camera (including any lens protection glass). If dirt adheres to the lens surface, it appears in the captured image of the in-vehicle camera and blocks the view.
 To notify the user that the lens of the in-vehicle camera has become dirty, a technique has been disclosed that detects dirt adhering to the lens surface of the in-vehicle camera from the images it sequentially captures (see, for example, JP 2012-38048 A).
 Lens dirt removal devices (washers and blowers) that blow water or air onto the lens surface of an in-vehicle camera to remove adhering dirt are also known, and JP 2014-11785 A discloses a lens dirt removal diagnostic technique for diagnosing whether such a device has functioned normally. In this technique, the images captured by the in-vehicle camera before and after operation of the lens dirt removal device are compared, and removal of the dirt is diagnosed from changes in contrast and edge strength.
 However, many in-vehicle cameras have no lens dirt removal device. In that case the user manually wipes the lens surface with a cloth or a finger to remove the dirt, but because the in-vehicle imaging device does not know when this dirt removal work is performed, it cannot diagnose dirt removal by comparing images captured before and after the work, as in JP 2014-11785 A.
 To address this, JP 2016-15583 A discloses a sequential lens dirt removal diagnosis technique that can diagnose dirt removal even when the timing of the dirt removal work is unknown. In this technique, the image captured by the in-vehicle camera immediately after the host vehicle stops is used as a reference image, difference images between this reference image and the images captured thereafter are generated one after another, each difference image is compared with the previously detected dirt adhesion area, and it is diagnosed that the dirt has been wiped off when a difference is detected over most of the dirt adhesion area.
JP 2012-38048 A; JP 2014-11785 A; JP 2016-15583 A
 However, when the lens surface of the in-vehicle camera is cleaned by hand, the person doing the cleaning approaches the camera and appears in its view, causing a large change over the entire image. With a method that detects dirt wiping from temporal change in the captured image, as in JP 2016-15583 A, there is therefore a concern that a change caused by the cleaner appearing in the image will be misjudged as removal of the dirt.
 Furthermore, when an image captured while the vehicle is stopped is used as the reference image, as in JP 2016-15583 A, a long time may pass between the stop and the cleaning of the lens surface, so the two images being differenced may be captured far apart in time. In that case changes in ambient light and in surrounding moving bodies (pedestrians, vehicles, and so on) become large, which also contributes to erroneous determination.
 An object of the present invention is to provide an in-vehicle imaging device having a lens dirt wiping diagnostic device that can reduce the possibility of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the work of removing dirt adhering to the lens surface of the in-vehicle camera is unknown.
 To achieve the above object, the present invention comprises: an in-vehicle camera that captures images through a lens; a dirt detection unit that detects the presence or absence of dirt on the lens, or a dirt adhesion area, based on the images sequentially captured by the in-vehicle camera; a dirt wiping diagnosis unit that diagnoses wiping of dirt from the lens; and a dirt state management unit that manages dirt on the lens. The dirt wiping diagnosis unit has a stable state determination unit that determines whether a stable state exists based on temporal change in the sequentially captured images, and a wiping area extraction unit that extracts a dirt wiping area based on the temporal change of the captured images for which the stable state determination unit has determined a stable state. The dirt state management unit stores the presence or absence of dirt or the dirt adhesion area detected by the dirt detection unit, reflects the wiping area extracted by the wiping area extraction unit in the stored presence or absence of dirt or dirt adhesion area, and thereby manages the dirt on the lens.
 According to the present invention, it is possible to provide an in-vehicle imaging device having a lens dirt wiping diagnostic device that can reduce the possibility of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the work of removing dirt adhering to the lens surface of the in-vehicle camera is unknown.
 Problems, configurations, and effects other than those described above will be made clear by the following description of the embodiments.
Brief description of the drawings:
FIG. 1 is a block diagram showing the schematic configuration of an in-vehicle imaging device according to Embodiment 1 of the present invention.
FIGS. 2A and 2B show examples of images captured by the in-vehicle camera.
FIG. 3 shows an example of a difference image of images captured by the in-vehicle camera.
FIG. 4 shows an example of an image captured by the in-vehicle camera.
FIG. 5 shows an example of a difference image of images captured by the in-vehicle camera.
FIG. 6 shows an example of a flowchart of the processing of the stable state determination unit.
FIG. 7 shows an example of a flowchart of the processing of the wiping area extraction unit.
FIG. 8 is a block diagram showing the schematic configuration of an in-vehicle imaging device according to Embodiment 2 of the present invention.
FIG. 9 shows an example of a flowchart of the processing of the wiping area extraction unit.
FIG. 10 is a block diagram showing the schematic configuration of an in-vehicle imaging device according to Embodiment 3 of the present invention.
FIG. 11 shows an example of an image captured by the in-vehicle camera.
FIG. 12 shows an example of a flowchart of the processing of the cleaning state determination unit.
FIG. 13 is a block diagram showing the schematic configuration of an in-vehicle imaging device according to Embodiment 4 of the present invention.
 An in-vehicle imaging device according to the present invention will now be described with reference to the drawings.
 FIG. 1 is a block diagram showing the schematic configuration of an in-vehicle imaging device according to Embodiment 1 of the present invention.
 (Configuration of in-vehicle imaging device 100)
 As shown in FIG. 1, the in-vehicle imaging device 100 of this embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4. The dirt wiping diagnosis unit 2 includes a stable state determination unit 21 and a wiping area extraction unit 22. The dirt wiping diagnosis unit 2, the dirt detection unit 3, and the dirt state management unit 4 are together also referred to as a lens dirt wiping diagnostic device (the same applies to the other embodiments below).
 The in-vehicle camera 1 is installed in a vehicle (not shown) and captures images of the area in front of it through a lens (not shown; including any lens protection glass). The in-vehicle camera 1 sequentially outputs the images it captures. For example, at a frame rate of 20 fps, the in-vehicle camera 1 captures and outputs an image every 50 milliseconds.
 The stable state determination unit 21 sequentially acquires the captured images output from the in-vehicle camera 1 and, each time, extracts the change from the immediately preceding captured image (the temporal change of the captured image). If the magnitude of that change is at or below a predetermined value, it determines that the captured image is temporally stable (a stable state); if it exceeds the predetermined value, it determines that the captured image is not temporally stable (an unstable state).
 Here, the "temporal change of the captured image" refers to the difference obtained by comparing the image data of a captured image at a certain time (the image data may be in any format) with the image data of a captured image after a certain time has elapsed, for example the change of each pixel of the captured image over time. The "magnitude of the temporal change of the captured image" may be, for example, the size of the area of the changed region, or the amount of change at each pixel of the image data. As an example of the "temporal change of the captured image", the temporal change of a feature quantity of the captured image may also be used.
 "Extracting the temporal change of the captured image" may mean extracting, from the entire captured image, the regions where the comparison produces a difference (that is, the regions that have changed over time), or it may mean extracting, from the many sequentially captured images, those images for which the comparison produces a difference.
 In the following, the image data of a captured image may be referred to simply as the captured image, and the image data of a difference image simply as the difference image.
 Finally, the stable state determination unit 21 outputs the determination result as a stable state signal. A stable state arises when no moving body (for example, a person cleaning the lens surface of the in-vehicle camera 1, another vehicle, or the like) appears in the captured image, or when the movement of any moving body in the captured image is small.
 The wiping area extraction unit 22 refers to the stable state signal output from the stable state determination unit 21, acquires the captured images output from the in-vehicle camera 1 only during a stable state, and, each time, extracts the change from the captured image acquired in the immediately preceding stable state. In this way, the temporal change of the captured image is extracted only for states in which no moving body appears in the image or in which the movement of any moving body is small (stable states). Regions of the captured image with a large change are then output to the downstream dirt state management unit 4 as regions from which dirt adhering to the lens surface of the in-vehicle camera 1 has been wiped off (wiping areas).
 The dirt detection unit 3 detects, from the captured images sequentially output from the in-vehicle camera 1, the area of the lens surface to which dirt has adhered (the dirt detection area), and outputs it. A known method such as the one described in JP 2012-38048 A can be used for this; for example, a region whose luminance changes little over a long period while the host vehicle is traveling can be detected from the captured images and treated as the dirt detection area. Some methods detect only the presence or absence of dirt rather than the dirt adhesion area. In that case the dirt detection unit 3 outputs, instead of a dirt detection area, the strength of the detection response (dirt detection intensity) corresponding to the area and density of the adhering dirt.
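 As an illustration only, the following Python sketch shows one way such a low-variance dirt detector could be built; the class name DirtDetector, the thresholds, and the NumPy running statistics are assumptions for this sketch and are not taken from the cited literature.

```python
import numpy as np

class DirtDetector:
    """Flags lens areas whose brightness barely changes while the vehicle is moving."""

    def __init__(self, shape, var_thresh=15.0, min_frames=200):
        self.mean = np.zeros(shape, dtype=np.float32)   # running mean of pixel luminance
        self.m2 = np.zeros(shape, dtype=np.float32)     # running sum of squared deviations (Welford)
        self.count = 0
        self.var_thresh = var_thresh                    # variance below this -> "barely changing"
        self.min_frames = min_frames                    # require a long observation period

    def update(self, gray_frame):
        """Feed one grayscale frame captured while driving."""
        self.count += 1
        frame = gray_frame.astype(np.float32)
        delta = frame - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (frame - self.mean)

    def dirt_mask(self):
        """Boolean mask of dirt candidates (low temporal variance over many frames)."""
        if self.count < self.min_frames:
            return np.zeros(self.mean.shape, dtype=bool)
        return (self.m2 / self.count) < self.var_thresh
```

 In this sketch, the returned mask would play the role of the dirt detection area passed on to the dirt state management unit 4.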
 While the dirt detection unit 3 is functioning, the dirt state management unit 4 refers to the dirt detection area or dirt detection intensity it outputs and manages the area of the lens surface to which dirt has adhered as a dirt storage area and the presence or absence of dirt as a dirt state (the dirt storage area and the dirt state are stored in a non-volatile storage unit, not shown). The dirt state management unit 4 also refers to the wiping area output from the wiping area extraction unit 22 and reflects it in the managed dirt state and dirt storage area. For example, when the wiping area from the wiping area extraction unit 22 exceeds a predetermined area, the dirt state management unit 4 changes the stored dirt state from "dirty" to "no dirt". As another example, it erases the wiping area reported by the wiping area extraction unit 22 from the stored dirt storage area.
 The dirt state management unit 4 may also store, as the dirt state, the presence or absence of dirt or the size of the area of the dirt storage area.
 In addition, the dirt state management unit 4 notifies the user of the dirt state and dirt storage area it manages (the lens dirt management status) via a warning lamp, a speaker, or the like (not shown), and also notifies a vehicle control system (not shown) such as an automated driving system that autonomously controls the vehicle.
 (Processing of the stable state determination unit 21)
 FIG. 2A is an example of an image captured by the in-vehicle camera 1 when water droplets have adhered to its lens surface. Water droplets appear in regions 31 and 32 of the captured image and block the scenery behind them. FIG. 2B shows the captured image after the water droplet in region 31 has been removed.
 FIG. 3 is the difference image obtained by taking the difference between the two captured images of FIGS. 2A and 2B. In the difference image of FIG. 3, only region 31, from which the water droplet has been removed, is extracted.
 However, when a moving body such as a pedestrian or a vehicle appears in the captured image of the in-vehicle camera 1, the region containing that moving body is also extracted in the difference image. FIG. 4 is an example of a captured image when a moving pedestrian appears in front of the in-vehicle camera 1; the pedestrian appears in region 33 of the captured image. FIG. 5 shows the difference image between the captured image of FIG. 2A and the captured image of FIG. 4. In the difference image of FIG. 5, not only the pedestrian in region 33 but also the water droplet in region 31 is extracted. This is because the pedestrian in region 33 overlaps the water droplet in region 31, and light reflected from the pedestrian changes the brightness of the water droplet.
 The stable state determination unit 21 therefore detects states in which no moving body appears in the view of the in-vehicle camera 1. FIG. 6 shows an example of a flowchart of the processing of the stable state determination unit 21.
 First, the captured images sequentially output from the in-vehicle camera 1 are acquired (step 2101), and each time a difference image (sequential difference image) is generated between the acquired captured image and the captured image acquired immediately before it (step 2102). That is, the image data of the acquired captured image is compared with the image data of the immediately preceding captured image, and image data retaining only the portions where the two differ is generated as the difference image. Next, regions of the sequential difference image in which the difference is at or above a predetermined value are extracted as large-difference regions (sequential difference regions) (step 2103), and the area of the sequential difference regions (the sequential difference area) is derived (step 2104). "Large" here means large enough to exclude differences not caused by the movement of a moving body, such as differences due to noise in the image data, and should be tuned to the actual operating environment. "Extracting large-difference regions" means extracting, from the entire captured image, candidate regions containing differences caused by the movement of a moving body.
 It is then determined whether the sequential difference area is at or below a predetermined value (step 2105). If it is, the state is judged to be one in which no moving body appears or the movement of any moving body is small (a stable state) (step 2106); if it exceeds the predetermined value, the state is judged to be one in which a moving body appears (an unstable state) (step 2107). "Small" movement of a moving body means movement small enough that, even if a moving body is in view, the state can still be regarded as stable, that is, movement that will not be mistaken for wiping of dirt; this too should be tuned to the actual operating environment. Finally, the determination result is output as the stable state signal (steps 2106 and 2107). This series of processing is repeated every time a captured image from the in-vehicle camera 1 is input.
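 As a minimal sketch of steps 2101 to 2107, assuming OpenCV grayscale frames, the following function derives the sequential difference area and compares it with a predetermined value; diff_thresh and area_thresh are illustrative stand-ins for the thresholds discussed above.

```python
import cv2

def judge_stable(prev_gray, curr_gray, diff_thresh=20, area_thresh=0.02):
    """Return True (stable state) when the area of large frame-to-frame change is small."""
    diff = cv2.absdiff(curr_gray, prev_gray)                  # step 2102: sequential difference image
    _, changed = cv2.threshold(diff, diff_thresh, 255,
                               cv2.THRESH_BINARY)             # step 2103: large-difference regions
    changed_ratio = cv2.countNonZero(changed) / changed.size  # step 2104: sequential difference area (as a ratio)
    return changed_ratio <= area_thresh                       # steps 2105-2107: stable vs. unstable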
 (Extended example 1 of the processing of the stable state determination unit 21)
 If a stable state signal is generated for every captured image sequentially output from the in-vehicle camera 1, the image may be judged stable merely because the movement of a moving body in the image has temporarily slowed, and the stable state signal then fluctuates.
 It is therefore preferable to stabilize the stable state signal. There are several ways to do this, and any of them may be used. For example, in step 2104 of the flowchart of FIG. 6, the sequential difference regions can be accumulated over a certain period and the area of the accumulated regions used as the sequential difference area.
 Alternatively, in step 2105 of the flowchart of FIG. 6, the average or maximum of the sequential difference area over a certain period can be computed; if the computed value is at or below a predetermined value the stable state signal is set to the stable state, and if it exceeds the predetermined value the signal is set to the unstable state.
 Another option is to thin out the captured images acquired from the in-vehicle camera 1 in step 2101 of the flowchart of FIG. 6. Thinning out the captured images makes the movement of a moving body easier to capture and stabilizes the stable state signal.
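 A small sketch of the smoothing idea follows, assuming the per-frame difference-area ratio from the previous sketch is available; using the maximum over a sliding window corresponds to the averaging/maximum option above, and window_len is an assumed tuning parameter.

```python
from collections import deque

class StableSignal:
    """Smooths the per-frame stability decision over a sliding window."""

    def __init__(self, window_len=20, area_thresh=0.02):
        self.window = deque(maxlen=window_len)
        self.area_thresh = area_thresh

    def update(self, changed_ratio):
        """Feed one per-frame difference-area ratio; return the smoothed stable flag."""
        self.window.append(changed_ratio)
        # Using the maximum over the window means one momentarily slow-moving
        # body cannot flip the signal back to "stable".
        return max(self.window) <= self.area_thresh
```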
 (Extended example 2 of the processing of the stable state determination unit 21)
 Instead of using the captured image of the in-vehicle camera 1 as it is, the stable state determination unit 21 may extract a feature quantity from the captured image, such as an edge image or HOG (Histogram of Oriented Gradients) features (the extraction result is called a feature image here), and use this feature image in place of the captured image. Using an appropriate feature quantity can improve detection performance and reduce the amount of data to be processed. For example, using HOG features makes it harder for unwanted image changes caused by slight fluctuations of moving bodies or ambient light to be extracted. If an average value such as mean luminance or mean edge strength is extracted from the captured image and used, the amount of data to be processed can be reduced substantially.
 (Processing of the wiping area extraction unit 22)
 As described above, wiping of dirt adhering to the lens surface of the in-vehicle camera 1 can be detected from the temporal change of its captured images, but a temporal change also occurs in the captured images when a moving body appears in the camera's view.
 The wiping area extraction unit 22 therefore extracts the temporal change of the captured images only from images captured in a stable state, in which no moving body appears in the view of the in-vehicle camera 1 or the movement of any moving body is small, and detects the regions from which dirt has been wiped.
 FIG. 7 shows an example of a flowchart of the processing of the wiping area extraction unit 22.
 First, the stable state signal output by the stable state determination unit 21 is referenced to determine whether it indicates a stable state (step 2201). If it does, a captured image from the in-vehicle camera 1 is acquired (step 2202). A difference image is then generated between this captured image and the captured image acquired in the immediately preceding stable state (step 2203). Regions with a large difference (difference regions) are extracted from the difference image (step 2204) and output as dirt wiping areas (step 2205). A large-difference region extracted in step 2204 is a region in which the difference is at or above a predetermined value; this predetermined value may be the same as or different from the one used in step 2103, and should be tuned to the actual operating environment. This series of processing is repeated every time a captured image is output from the in-vehicle camera 1.
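 A minimal sketch of steps 2201 to 2205 follows, assuming the stable state flag from the stable state determination unit is supplied per frame; the class and threshold names are illustrative.

```python
import cv2

class WipeAreaExtractor:
    """Differences consecutive stable-state frames and reports large changes as wiping areas."""

    def __init__(self, diff_thresh=30):
        self.prev_stable = None
        self.diff_thresh = diff_thresh

    def process(self, gray, is_stable):
        """Return a binary wiping-area mask, or None when nothing can be concluded."""
        if not is_stable:                                  # step 2201: skip unstable frames
            return None
        if self.prev_stable is None:                       # first stable frame becomes the reference
            self.prev_stable = gray
            return None
        diff = cv2.absdiff(gray, self.prev_stable)         # step 2203: difference of two stable frames
        _, wiped = cv2.threshold(diff, self.diff_thresh, 255,
                                 cv2.THRESH_BINARY)        # step 2204: large-difference regions
        self.prev_stable = gray
        return wiped                                       # step 2205: output as the wiping area
```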
 (Extended example 1 of the processing of the wiping area extraction unit 22)
 When the in-vehicle camera 1 is illuminated by, for example, the headlights of a nearby vehicle, the brightness of dirt (such as water droplets) adhering to the lens surface changes, and the difference image generated in step 2203 then contains the dirt regions whose brightness changed. When the headlight illumination stops and the brightness of the dirt returns to normal, the dirt regions are extracted again in the difference image generated in step 2203.
 The sign of the difference generated in step 2203 is opposite at the start and at the end of the headlight illumination. The wiping area extraction unit 22 may therefore add, between step 2203 and step 2204, processing that adds up and accumulates the difference images generated in step 2203 over a certain period. If the difference images are accumulated over a period long enough to include both the start and the end of the headlight illumination, the brightness change of the dirt caused by the temporary illumination cancels out and is removed from the accumulated difference image. The accumulation period may be, for example, a fixed period starting when a difference is first extracted, a period during which the illuminance of the in-vehicle camera 1 is at or above a predetermined value, or a period during which a headlight detection process (not shown) detects headlights in the captured image of the in-vehicle camera 1. As an example of such a process, simple headlight detection can be realized by detecting high-luminance circular regions in the captured image.
 Because dirt such as water droplets becomes extremely bright when illuminated by headlights, regions that have changed to become extremely bright can be treated as regions whose brightness changed due to headlight illumination and excluded from the dirt wiping areas.
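 The signed accumulation can be sketched as follows: summing signed frame differences over a window lets the brightening and subsequent darkening caused by a passing headlight cancel, while a genuine wipe leaves a persistent residual. The class name and threshold are assumptions.

```python
import numpy as np

class SignedDiffAccumulator:
    """Accumulates signed differences so that transient headlight changes cancel out."""

    def __init__(self, shape):
        self.acc = np.zeros(shape, dtype=np.float32)

    def add(self, curr_gray, prev_gray):
        """Add one signed frame difference (positive where the pixel got brighter)."""
        self.acc += curr_gray.astype(np.float32) - prev_gray.astype(np.float32)

    def persistent_change(self, mag_thresh=30.0):
        """Regions whose accumulated change remains large after cancellation."""
        return np.abs(self.acc) >= mag_thresh
```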
 (Extended example 2 of the processing of the wiping area extraction unit 22)
 Dirt adhering to the lens surface of the in-vehicle camera 1 is sometimes removed gradually, a little at a time, for example when manual cleaning is insufficient and some dirt remains so that cleaning is repeated, or when some water droplets run off under their own weight. In these cases, only the portion of the dirt wiped off each time appears in the difference image generated in step 2203.
 Processing that accumulates the difference images over a certain period may therefore be added between step 2203 and step 2204. Alternatively, the difference regions may be accumulated over a certain period in step 2205 and used as the wiping area. This prevents parts of the dirt wiping area from being missed.
 The period over which the difference images or difference regions are accumulated may be a predetermined length of time, but it is preferably continued until the dirt state managed by the dirt state management unit 4 becomes "no dirt", or until the dirt detection unit 3 starts functioning. This avoids missing part of the dirt wiping area. Even while the difference images or difference regions are being accumulated, it is preferable to output the wiping area continually so that the dirt state management unit 4 can keep track of the wiping of the dirt.
 (Extended example 3 of the processing of the wiping area extraction unit 22)
 The wiping area extraction unit 22, too, may extract a feature quantity such as an edge image or HOG features from the captured image (the extraction result is called a feature image here) and use this feature image in place of the captured image, rather than using the captured image of the in-vehicle camera 1 as it is. Using an appropriate feature quantity can improve detection performance and reduce the amount of data to be processed; for example, HOG features make unwanted image changes caused by slight fluctuations of moving bodies or ambient light harder to extract. An average value such as mean luminance or mean edge strength can also be extracted from the captured image to reduce the amount of data substantially, but in that case the dirt wiping area itself cannot be detected, so information on the likelihood of wiping is output instead of a wiping area.
 (Processing of the dirt state management unit 4)
 The dirt state management unit 4 manages the presence or absence of dirt adhering to the lens surface of the in-vehicle camera 1 as the dirt state, and the adhesion area as the dirt storage area.
 To do so, the dirt state management unit 4 refers to the dirt detection area or dirt detection intensity output from the dirt detection unit 3 and stores the dirt state and the dirt storage area.
 However, the dirt detection unit 3 may not function in some situations, for example while the host vehicle is stopped. If dirt on the lens surface of the in-vehicle camera 1 is wiped off in such a state, the dirt detection unit 3 alone cannot reflect the wiping in the dirt state or the dirt storage area stored by the dirt state management unit 4.
 The dirt state management unit 4 therefore refers to the wiping area output from the wiping area extraction unit 22 and erases that wiping area from the dirt storage area it has stored. When the wiping area exceeds a predetermined area, it also changes the stored dirt state from "dirty" to "no dirt".
 The dirt state management unit 4 may be configured to manage only the dirt state, only the dirt storage area, or both. When it manages both, it may change the stored dirt state from "dirty" to "no dirt" when the dirt storage area falls below a predetermined area.
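 A minimal sketch of this bookkeeping, assuming boolean masks for the dirt storage area and the wiping area, is given below; the mask representation and clean_area_thresh are illustrative.

```python
import numpy as np

class DirtStateManager:
    """Keeps the dirt storage area (a boolean mask) and the dirty/clean dirt state."""

    def __init__(self, shape, clean_area_thresh=50):
        self.dirt_mask = np.zeros(shape, dtype=bool)   # dirt storage area
        self.dirty = False                             # dirt state
        self.clean_area_thresh = clean_area_thresh     # pixels of remaining dirt tolerated

    def on_dirt_detected(self, detected_mask):
        """Merge a dirt detection result (boolean mask) into the stored area."""
        self.dirt_mask |= detected_mask
        self.dirty = True

    def on_wipe_area(self, wiped_mask):
        """Erase a reported wiping area and flip the state once little dirt remains."""
        self.dirt_mask &= ~wiped_mask
        if np.count_nonzero(self.dirt_mask) < self.clean_area_thresh:
            self.dirty = False                         # "dirty" -> "no dirt"
```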
 (Extended example 1 of the processing of the dirt state management unit 4)
 The wiping area output from the wiping area extraction unit 22 is obtained by extracting regions of the captured image of the in-vehicle camera 1 that changed over time, so it may include regions where dirt has newly appeared.
 Therefore, when the wiping area extraction unit 22 newly outputs, as a wiping area, a region that is not in the dirt storage area stored by the dirt state management unit 4, that region may be newly stored in the dirt storage area as a dirt adhesion area. Similarly, when the stored dirt state is "no dirt" and the wiping area exceeds a predetermined area, the dirt state may be changed from "no dirt" to "dirty".
 (Extended example 2 of the processing of the dirt state management unit 4)
 Dirt adhering to the lens surface of the in-vehicle camera 1 is sometimes removed gradually, a little at a time, for example when manual cleaning is insufficient and some dirt remains so that cleaning is repeated, or when some water droplets run off under their own weight. In these cases, the wiping area output from the wiping area extraction unit 22 contains only the portion of the dirt wiped off each time.
 The dirt state management unit 4 therefore preferably accumulates the wiping areas output by the wiping area extraction unit 22 and, when the accumulated wiping area exceeds a predetermined area, changes the stored dirt state from "dirty" to "no dirt" (or from "no dirt" to "dirty").
 When the dirt state management unit 4 manages the dirt storage area, on the other hand, the wiping areas output from the wiping area extraction unit 22 can simply be reflected in the dirt storage area as they arrive.
 By using the present embodiment described above, it is possible to provide, in an in-vehicle imaging device, a lens dirt wiping diagnostic device (the dirt wiping diagnosis unit 2, dirt detection unit 3, and dirt state management unit 4) that detects the wiping of lens dirt by extracting the temporal change of the captured image after excluding scenes in which a person cleaning the lens surface of the in-vehicle camera 1, or another moving body, appears in the captured image.
 That is, when a cleaner approaches the in-vehicle camera 1 and appears in its view, the stable state signal from the stable state determination unit 21 becomes "unstable", so such frames are excluded from the processing of the wiping area extraction unit 22. For the wiping of dirt itself, there is a stable state while the lens is dirty and another stable state after the dirt has been wiped off, and it is these stable states that the wiping area extraction unit 22 compares, which makes it possible to detect that the dirt has been wiped away.
 In addition, the diagnosis is not restricted to using an image captured while the vehicle is stopped as the reference image and can be made at any time, which avoids the two differenced images being captured far apart in time and reduces the possibility of erroneous determination.
 Furthermore, even in situations where the dirt detection unit 3 that detects dirt on the lens surface does not function (for example while the host vehicle is stopped), the wiping of dirt from the lens of the in-vehicle camera 1 can be detected, and a vehicle autonomous control system (for example an automated driving system) that had been halted because the lens was dirty can be restarted.
 FIG. 8 is a block diagram showing the schematic configuration of an in-vehicle imaging device according to Embodiment 2 of the present invention. In the in-vehicle imaging device 200 shown in FIG. 8, components having the same functions as in the in-vehicle imaging device 100 of Embodiment 1 shown in FIG. 1 are given the same reference numerals, and duplicate descriptions are omitted.
 (Configuration of in-vehicle imaging device 200)
 As shown in FIG. 8, the in-vehicle imaging device 200 of this embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4. The dirt wiping diagnosis unit 2 includes a stable state determination unit 21 and a wiping area extraction unit 22. The dirt wiping diagnosis unit 2, the dirt detection unit 3, and the dirt state management unit 4 are together also referred to as a lens dirt wiping diagnostic device.
 In this embodiment, the wiping area extraction unit 22 refers not only to the captured images output from the in-vehicle camera 1 and the stable state signal output from the stable state determination unit 21 but also to the dirt adhesion area of the lens surface of the in-vehicle camera 1 (the dirt storage area) stored by the dirt state management unit 4.
 (Processing of the wiping area extraction unit 22)
 FIG. 9 is an example of a flowchart of the processing of the wiping area extraction unit 22 in Embodiment 2.
 First, the dirt storage area stored by the dirt state management unit 4 is acquired (step 2210). Then, if the stable state signal output from the stable state determination unit 21 indicates a stable state, a captured image from the in-vehicle camera 1 is acquired and a difference image is generated between it and the captured image acquired in the immediately preceding stable state (steps 2201 to 2203). The difference images are accumulated (step 2211), and regions with a large difference (difference regions) are extracted from the accumulated difference image (step 2212). Next, the dirt storage area acquired in step 2210 is compared with the accumulated difference regions (step 2213), and if most of the dirt storage area has become part of the accumulated difference regions, the accumulated difference regions are output as the area from which dirt has been wiped (the wiping area) (step 2205). If, on the other hand, the stable state signal indicates an unstable state in step 2201, or most of the dirt storage area is not covered by the accumulated difference regions in step 2213, the processing pauses for a certain time (step 2214) and then restarts from step 2201 so as to acquire a new captured image in a stable state.
 In this way, even when the dirt is wiped off in stages over a long period, the difference images continue to be accumulated in step 2211 until most of the dirt has been wiped off.
 It is preferable to store the direction (sign) of the difference along with the accumulated difference regions. As in Extended example 1 of the processing of the wiping area extraction unit 22 above, this makes it possible to exclude from the difference image or the wiping area image changes that are not due to the wiping of dirt, such as temporary illumination by the headlights of nearby vehicles.
 The processing of the wiping area extraction unit 22 in this embodiment is not limited to the flowchart shown in FIG. 9. For example, step 2211 can be omitted and, instead, large-difference regions can be extracted from each difference image in step 2212, the extracted regions accumulated, and the accumulated regions used as the difference regions; this gives the same function.
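 As an illustration of the comparison in step 2213, the following sketch reports a wiping area only when the accumulated difference regions cover most of the stored dirt storage area; both inputs are assumed to be boolean masks, and the 0.8 coverage ratio stands in for "most of".

```python
import numpy as np

def check_wiped(accumulated_diff_mask, dirt_storage_mask, coverage=0.8):
    """Return the wiping area when the accumulated differences cover most of the stored dirt."""
    dirt_pixels = np.count_nonzero(dirt_storage_mask)
    if dirt_pixels == 0:
        return None                                    # nothing stored, nothing to diagnose
    covered = np.count_nonzero(accumulated_diff_mask & dirt_storage_mask)
    if covered / dirt_pixels >= coverage:              # "most of" the stored dirt area has changed
        return accumulated_diff_mask                   # output as the wiping area (step 2205)
    return None                                        # keep accumulating (step 2214, then step 2201)
```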
 FIG. 10 is a block diagram showing the schematic configuration of an in-vehicle imaging device according to Embodiment 3 of the present invention. In the in-vehicle imaging device 300 shown in FIG. 10, components having the same functions as in the in-vehicle imaging device 100 of Embodiment 1 shown in FIG. 1 are given the same reference numerals, and duplicate descriptions are omitted.
 (Configuration of in-vehicle imaging device 300)
 As shown in FIG. 10, the in-vehicle imaging device 300 of this embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4. The dirt wiping diagnosis unit 2 includes a stable state determination unit 21, a wiping area extraction unit 22, and a cleaning state determination unit 23. The dirt wiping diagnosis unit 2, the dirt detection unit 3, and the dirt state management unit 4 are together also referred to as a lens dirt wiping diagnostic device.
 (Processing of the cleaning state determination unit 23)
 When the lens of the in-vehicle camera 1 is cleaned by hand, a cloth or a finger is pressed against the lens. Because light from the outside is blocked in the area where the cloth or finger is pressed, the illuminance of the in-vehicle camera 1 drops and the focus shifts. In the captured image of the in-vehicle camera 1, the area against which the cloth or finger is pressed also becomes dark.
 FIG. 11 is an example of an image captured by the in-vehicle camera 1 when a cloth is pressed against its lens. In FIG. 11, the region 34, which is the area where the cloth is pressed, appears dark and black.
 The cleaning state determination unit 23 therefore refers to the illuminance of the in-vehicle camera 1, determines that the lens of the in-vehicle camera 1 is being cleaned when the illuminance drops by a predetermined value or more, and outputs the result of this determination as a cleaning signal.
 (Alternative example 1 of the processing of the cleaning state determination unit 23)
 When the in-vehicle camera 1 has an autofocus function, the cleaning state determination unit 23 may refer to the focal length of the in-vehicle camera 1 (not shown), determine that the lens of the in-vehicle camera 1 is being cleaned when the amount of change in the focal length is equal to or greater than a predetermined value, and output the result of this determination as a cleaning signal.
 (Alternative example 2 of the processing of the cleaning state determination unit 23)
 Alternatively, the cleaning state determination unit 23 may calculate the average luminance of the captured image of the in-vehicle camera 1, determine that the lens of the in-vehicle camera 1 is being cleaned when the average luminance drops by a predetermined value or more, and output the result of this determination as a cleaning signal.
 (Alternative example 3 of the processing of the cleaning state determination unit 23)
 The cleaning state determination unit 23 may also extract a region whose luminance is at or below a predetermined value (dark region) from the captured image of the in-vehicle camera 1, determine that the lens of the in-vehicle camera 1 is being cleaned when the area of the dark region becomes equal to or greater than a predetermined value, and output the result of this determination as a cleaning signal.
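 A minimal sketch of this dark-area test is shown below, assuming a grayscale frame as a NumPy array; the luminance and area thresholds are illustrative assumptions only.

```python
import numpy as np

DARK_LEVEL = 40          # assumed luminance threshold for a "dark" pixel
MIN_DARK_AREA = 0.5      # assumed fraction of the image that must be dark

def is_being_cleaned(frame_gray: np.ndarray) -> bool:
    """The lens is judged to be under cleaning when the region darker than
    DARK_LEVEL covers a sufficiently large part of the image."""
    dark = frame_gray < DARK_LEVEL
    return np.count_nonzero(dark) >= MIN_DARK_AREA * dark.size
```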
 (Extended example of alternative example 3 of the processing of the cleaning state determination unit 23)
 As shown in the captured image of FIG. 11, when the region 34 against which the cloth is pressed does not cover the entire screen, a large luminance change occurs at the edge of the region 34.
 Therefore, when a dark region that does not cover the entire screen (the entire captured image) is detected, the cleaning state determination unit 23 determines whether there is a large luminance change at the edge of the dark region. If the luminance change at the edge of the dark region is equal to or greater than a predetermined value, the dark region is determined to have been produced by cleaning the lens; if not, the dark region is determined not to have been produced by cleaning the lens. This suppresses false detections in the cleaning determination.
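 One possible reading of this boundary check is sketched below: the luminance change at the border of the dark region is approximated by the image gradient magnitude at the border pixels. The thresholds and the use of the gradient are assumptions for illustration.

```python
import numpy as np

DARK_LEVEL = 40          # assumed threshold for dark pixels
EDGE_STEP = 60.0         # assumed minimum luminance jump at the border

def dark_region_caused_by_cleaning(frame_gray: np.ndarray) -> bool:
    """Accept a partial dark region only if its border shows a large
    luminance change, as a pressed cloth or finger would produce."""
    dark = frame_gray < DARK_LEVEL
    if dark.all() or not dark.any():
        return bool(dark.all())           # full-screen dark region: base test applies
    # border pixels = dark pixels with at least one non-dark 4-neighbour
    padded = np.pad(dark, 1, constant_values=False)
    border = dark & (~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
                     ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
    if not border.any():
        return False
    gy, gx = np.gradient(frame_gray.astype(np.float32))
    grad = np.hypot(gx, gy)
    return float(grad[border].mean()) >= EDGE_STEP
```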
 (Alternative example 4 of the processing of the cleaning state determination unit 23)
 The dark region is characterized not only by low luminance but also by the absence of edges. The cleaning state determination unit 23 may therefore extract a region whose edge strength is at or below a predetermined value (non-edge region) from the captured image of the in-vehicle camera 1, determine that the lens of the in-vehicle camera 1 is being cleaned when the area of the non-edge region becomes equal to or greater than a predetermined value, and output the result of this determination as a cleaning signal.
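 A minimal sketch of the non-edge test follows, using the gradient magnitude as a simple edge-strength measure; both thresholds are assumptions.

```python
import numpy as np

EDGE_LEVEL = 10.0        # assumed gradient-magnitude threshold for "no edge"
MIN_FLAT_AREA = 0.5      # assumed fraction of the image that must be edge-free

def cleaning_detected_by_flatness(frame_gray: np.ndarray) -> bool:
    """A pressed cloth or finger produces a large region with almost no
    edges, regardless of its absolute brightness."""
    gy, gx = np.gradient(frame_gray.astype(np.float32))
    non_edge = np.hypot(gx, gy) < EDGE_LEVEL
    return np.count_nonzero(non_edge) >= MIN_FLAT_AREA * non_edge.size
```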
 (Extended example of the processing of the cleaning state determination unit 23)
 When the lens of the in-vehicle camera 1 is cleaned in several passes, a single pass may not produce a sufficiently large drop in illuminance, change in focal length, drop in the average luminance of the captured image, dark-region area, or non-edge-region area, and there is a concern that it will not be determined that the lens of the in-vehicle camera 1 is being cleaned.
 The cleaning state determination unit 23 may therefore determine that the lens of the in-vehicle camera 1 has been cleaned when the drop in illuminance, the change in focal length, the drop in the average luminance of the captured image, or the like continues for longer than a predetermined period.
 When the cleaning state is determined from the area of the dark region or the non-edge region, it is preferable first to detect the dark region or non-edge region in the captured image of the in-vehicle camera 1 as an area where dirt has been cleaned, then to accumulate these regions successively, determine that the lens of the in-vehicle camera 1 has been cleaned when the accumulated area of the dark region or non-edge region becomes equal to or greater than a predetermined value, and output the result of this determination as a cleaning signal.
 In this way, even when the lens of the in-vehicle camera 1 is cleaned in several passes, the fact that the lens has been cleaned can be detected.
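 The accumulation of cleaned regions can be sketched as follows; the class name and area ratio are assumptions, and the input masks would come from the dark-region or non-edge tests above.

```python
import numpy as np

class CleanedAreaAccumulator:
    """Dark / non-edge regions found in successive frames are OR-accumulated
    so that several partial cleanings add up to one cleaning decision."""

    def __init__(self, shape, min_area_ratio=0.5):   # ratio is an assumption
        self.accumulated = np.zeros(shape, dtype=bool)
        self.min_area = min_area_ratio * shape[0] * shape[1]

    def update(self, cleaned_mask: np.ndarray) -> bool:
        self.accumulated |= cleaned_mask
        # True acts as the cleaning signal once enough area has accumulated
        return np.count_nonzero(self.accumulated) >= self.min_area
```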
 (Processing of the wiping area extraction unit 22)
 While the lens of the in-vehicle camera 1 is being cleaned (cleaning period), the cloth or finger pressed against the lens occupies a large part of the captured image, so the wiping area extraction unit 22 cannot observe the dirt attached to the lens surface of the in-vehicle camera 1 in the captured image. However, by comparing the captured images immediately before and immediately after the cleaning period, the area of dirt wiped off by cleaning the lens of the in-vehicle camera 1 can be detected.
 FIG. 12 is an example of a flowchart of the processing of the wiping area extraction unit 22 in this embodiment.
 First, it is determined whether the stable state signal output from the stable state determination unit 21 indicates a stable state (step 2201) and whether the cleaning signal output from the cleaning state determination unit 23 indicates that cleaning is in progress (step 2221). When the state is stable and cleaning is not in progress, a captured image of the in-vehicle camera 1 is acquired (step 2202). It is then determined whether the cleaning signal indicated cleaning in the immediately preceding processing (step 2222). If it did, the captured image acquired in step 2202 is identified as the captured image in the stable state immediately after cleaning, so a difference image is generated between the captured image acquired in step 2202 and a non-cleaning captured image acquired in earlier processing (step 2223). This difference image is the difference between the captured images immediately before and immediately after the cleaning period. A region with a large difference (difference region) is then extracted from the difference image (step 2204), and the difference region is output as the area from which the dirt has been wiped off (wiping area) (step 2205). When the stable state signal indicates an unstable state in step 2201, when the cleaning signal indicates that cleaning is in progress in step 2221, or when the cleaning signal did not indicate cleaning in the immediately preceding processing in step 2222, the processing is suspended for a fixed period (step 2224) and then restarted from step 2201.
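 The before/after comparison around a cleaning period can be sketched as a small state machine; the class and threshold below are assumptions, and is_stable / is_cleaning stand in for the stable state signal and the cleaning signal.

```python
import numpy as np

DIFF_THRESHOLD = 30.0    # assumed per-pixel difference threshold

class CleaningDiffExtractor:
    """Sketch of the FIG. 12 flow: keep the last stable, non-cleaning frame and,
    when a cleaning period ends, diff it against the first stable frame after."""

    def __init__(self):
        self.before_cleaning = None
        self.was_cleaning = False

    def update(self, frame_gray, is_stable: bool, is_cleaning: bool):
        """Return the wiped-area mask right after a cleaning period, else None."""
        if not is_stable or is_cleaning:                  # steps 2201 / 2221 / 2224
            self.was_cleaning = self.was_cleaning or is_cleaning
            return None
        frame = frame_gray.astype(np.float32)             # step 2202
        if self.was_cleaning and self.before_cleaning is not None:  # step 2222
            wiped = np.abs(frame - self.before_cleaning) > DIFF_THRESHOLD  # 2223/2204
            self.before_cleaning = frame
            self.was_cleaning = False
            return wiped                                  # step 2205
        self.before_cleaning = frame                      # remember pre-cleaning frame
        self.was_cleaning = False
        return None
```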
 The flowchart of the processing of the wiping area extraction unit 22 may take other forms. For example, the same function can be obtained by reversing the order of steps 2221 and 2202 so that all captured images in the stable state are acquired, and executing steps 2223, 2204, and 2205 only when the cleaning signal switches from cleaning to non-cleaning.
 (Extended example 1 of the processing of the wiping area extraction unit 22)
 When the lens of the in-vehicle camera 1 is cleaned in several passes, it is expected that the cleaning period will recur intermittently several times.
 The wiping area extraction unit 22 may therefore accumulate the difference images for a fixed period in step 2204 and extract a region with a large difference (difference region) from the accumulated difference image. Alternatively, in step 2205, the difference regions may be accumulated for a fixed period and the accumulated difference region may then be output as the wiping area.
 The period over which the difference images or difference regions are accumulated may be a predetermined fixed period, but accumulation preferably continues until the dirt state managed by the dirt state management unit 4 becomes "no dirt", or until the dirt detection unit 3 becomes functional. This avoids missing part of the wiped dirt area. To allow the wiping state of the dirt to be managed even during the accumulation period of the difference images or difference regions, the wiping area extraction unit 22 preferably outputs the wiping area successively.
 (Extended example 2 of the processing of the wiping area extraction unit 22)
 Furthermore, instead of using the captured image of the in-vehicle camera 1 as it is, the wiping area extraction unit 22 may extract features such as an edge image or HOG features from the captured image (the extraction result is called a feature image here) and use this feature image in place of the captured image. Using an appropriate feature improves detection performance and reduces the amount of data to be processed. For example, using HOG features makes it harder to pick up unwanted image changes caused by moving objects or slight fluctuations of the ambient light. It is also possible to greatly reduce the amount of data to be processed by extracting and using average values such as luminance or edge strength from the captured image; in that case, however, the wiped dirt area itself cannot be detected, so wiping confidence information is output instead of the wiping area.
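 As one example of such a feature image, a gradient-magnitude (edge) image can be computed as below; this is only an illustrative choice, and a HOG image from an external library could be substituted.

```python
import numpy as np

def edge_feature_image(frame_gray: np.ndarray) -> np.ndarray:
    """Gradient magnitude as a simple feature image, less sensitive to slow
    illumination changes than raw luminance."""
    gy, gx = np.gradient(frame_gray.astype(np.float32))
    return np.hypot(gx, gy)
```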
 In Embodiments 1 to 3 of the present invention, if the vehicle speed of the host vehicle and information related to occupants getting in and out are detected, the period during which the dirt wiping diagnosis unit 2 performs dirt wiping detection can be limited. In this embodiment, the vehicle speed of the host vehicle and information related to occupants getting in and out of the host vehicle are acquired and used.
 The information related to occupants getting in and out of the host vehicle is information on the opening and closing of the doors and/or windows of the host vehicle, or on changes in the vehicle weight. In this embodiment, sensors for detecting and acquiring such information are provided. Since the vehicle height also changes with the vehicle weight, a change in the vehicle height may be detected instead of a change in the vehicle weight.
 When a door is opened and closed while the vehicle speed has dropped sufficiently, or when the vehicle weight decreases, the lens dirt wiping diagnostic device of this embodiment determines that an occupant has gotten out of the host vehicle to clean the lens of the in-vehicle camera 1, and uses this as a trigger to start the processing of the dirt wiping diagnosis unit 2.
 When the in-vehicle camera 1 is a side camera installed on the side of the host vehicle, an occupant can clean the lens of the in-vehicle camera 1 from inside the vehicle while a window near the camera is open, so the processing of the dirt wiping diagnosis unit 2 may be executed on the captured image of the in-vehicle camera 1 regardless of the vehicle speed.
 When the door is opened and closed again after a predetermined time has elapsed since it was determined that an occupant got out of the host vehicle, or when the vehicle weight increases, it is determined that the occupant has gotten back into the host vehicle, and the processing of the dirt wiping diagnosis unit 2 may be stopped. When the in-vehicle camera 1 is a side camera, the processing of the dirt wiping diagnosis unit 2 for that camera may be stopped when the window near the camera is closed.
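 A minimal sketch of this trigger decision is given below; the function name, thresholds, and input flags are assumptions used only to illustrate the combination of speed, door, weight, and window information.

```python
STOPPED_SPEED_KMH = 3.0      # assumed "sufficiently low" vehicle speed
WEIGHT_DROP_KG = 30.0        # assumed weight drop indicating an occupant got out

def should_run_wipe_diagnosis(speed_kmh: float, door_opened: bool,
                              weight_change_kg: float,
                              is_side_camera: bool, window_open: bool) -> bool:
    """Run the wipe diagnosis only when an occupant could plausibly be
    cleaning the lens."""
    if is_side_camera and window_open:
        return True                              # lens reachable from inside the cabin
    stopped = speed_kmh <= STOPPED_SPEED_KMH
    occupant_left = door_opened or (weight_change_kg <= -WEIGHT_DROP_KG)
    return stopped and occupant_left
```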
 In Embodiments 1 to 3 of the present invention, when the surroundings of the host vehicle are dark and dirt cannot be seen in the captured image of the in-vehicle camera 1, wiping of the lens of the in-vehicle camera 1 may be determined from the information related to occupants getting in and out of the host vehicle described in Embodiment 4.
 However, this alone cannot determine whether the dirt has actually been wiped off. When the surroundings are dark and dirt cannot be seen in the captured image of the in-vehicle camera 1, it is therefore preferable to provide a mechanism that lets the user judge whether there is dirt on the lens surface of the in-vehicle camera 1. For example, when the illuminance of the in-vehicle camera 1 is extremely low, the captured image of the in-vehicle camera 1 is displayed on a monitor inside the vehicle, together with a prompt asking the user to check for dirt and to enter the result through the monitor's input means (for example, a touch panel). When the user confirms that there is no dirt and enters this, the dirt state management unit 4 receives the input and initializes the stored dirt state and/or dirt storage area.
 When the host vehicle is stopped to clean the lens of the in-vehicle camera 1, the user may switch off the engine of the stopped vehicle. In that case, the lens dirt wiping diagnostic device could be operated from an in-vehicle battery (not shown), but because of the concern of draining the battery, the lens dirt wiping diagnostic device is also stopped immediately. In this case, the lens dirt wiping diagnostic devices of Embodiments 1 to 5 cannot detect the wiping of dirt attached to the lens of the in-vehicle camera 1.
 (Processing of the wiping area extraction unit 22)
 In this embodiment, before the engine is stopped, or between immediately after the engine is stopped and the shutdown of the lens dirt wiping diagnostic device, the wiping area extraction unit 22 of the lens dirt wiping diagnostic device stores a captured image in the stable state in a non-volatile storage unit (not shown). This storage unit (for example, a non-volatile memory) retains its contents even when the power supply is interrupted. When the engine is restarted, the wiping area extraction unit 22 extracts a difference image between the captured image stored in the storage unit and the latest captured image in the stable state acquired after the engine restart, and detects the wiped dirt area. In this case, the cleaning state determination unit 23 is not required.
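 The persist-and-compare step can be sketched as follows; the file name, threshold, and NumPy-archive format are assumptions standing in for whatever non-volatile storage the device actually uses.

```python
import numpy as np

STATE_FILE = "wipe_reference.npz"    # hypothetical non-volatile store

def save_reference(stable_frame: np.ndarray) -> None:
    """Persist the last stable frame (or a feature image) before the
    diagnostic device is powered down with the engine."""
    np.savez_compressed(STATE_FILE, frame=stable_frame)

def wiped_area_after_restart(new_stable_frame: np.ndarray,
                             diff_threshold: float = 30.0) -> np.ndarray:
    """On restart, diff the stored frame against the first new stable frame
    to recover the area wiped while the device was off."""
    stored = np.load(STATE_FILE)["frame"].astype(np.float32)
    diff = np.abs(new_stable_frame.astype(np.float32) - stored)
    return diff > diff_threshold
```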
 (Extended example 1 of the processing of the wiping area extraction unit 22)
 However, when the engine is stopped, there is a concern that a long time will pass before it is restarted. It is therefore preferable not to use the captured image of the in-vehicle camera 1 as it is, but to extract an edge image or HOG features, which are robust against changes over time (the extraction result is called a feature image here), and to use this feature image.
 (Extended example 2 of the processing of the wiping area extraction unit 22)
 When the engine stop time becomes extremely long, there is a concern not only that the surrounding brightness will change, but also that the arrangement of movable surrounding objects (vehicles and the like) will change.
 Therefore, when the engine stop time is extremely long, it is preferable to provide a mechanism that lets the user judge whether there is dirt on the lens surface of the in-vehicle camera 1. For example, the engine stop time is stored in a non-volatile storage unit (not shown); when the engine is restarted, the engine stop duration is calculated as the difference between the stored engine stop time and the current time. If this duration is equal to or greater than a predetermined value, the captured image of the in-vehicle camera 1 is displayed on a monitor inside the vehicle, together with a prompt asking the user to check for dirt and to enter the result through the monitor's input means (for example, a touch panel). When the user confirms that there is no dirt and enters this, the dirt state management unit 4 receives the input and initializes the stored dirt state and/or dirt storage area.
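 A minimal sketch of the stop-time check follows; the file name, time limit, and JSON format are assumptions used only to illustrate the comparison between the stored stop time and the current time.

```python
import json
import time

TIME_FILE = "engine_stop_time.json"      # hypothetical non-volatile store
MAX_STOP_SECONDS = 6 * 3600              # assumed limit before asking the user

def record_engine_stop() -> None:
    """Store the engine stop time before the diagnostic device shuts down."""
    with open(TIME_FILE, "w") as f:
        json.dump({"stop_time": time.time()}, f)

def needs_user_confirmation() -> bool:
    """If the engine was off longer than MAX_STOP_SECONDS, the stored
    reference image is no longer trusted and the user is asked instead."""
    with open(TIME_FILE) as f:
        stop_time = json.load(f)["stop_time"]
    return time.time() - stop_time >= MAX_STOP_SECONDS
```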
 According to this embodiment, even when the vehicle is stopped for a long time, the capture times of the two images used for the difference are far apart, and the surrounding brightness or the arrangement of objects changes, the situation can still be handled and the risk of erroneous determination can be reduced.
 As described in Embodiment 4, the lens of the in-vehicle camera 1 may also be cleaned by hand while the host vehicle is running, for example when the front-seat passenger opens the door window and wipes the lens of the side camera on the passenger side. However, unless the vehicle is traveling on a monotonous road, the stable state determination unit 21 keeps outputting a stable state signal indicating an unstable state, so the dirt wiping diagnosis unit 2 stops functioning and the wiping of the dirt can no longer be detected.
 Therefore, when the vehicle speed of the host vehicle is equal to or greater than a predetermined value, it is determined solely by the cleaning state determination unit 23 that the lens of the in-vehicle camera 1 has been cleaned.
 FIG. 13 is a block diagram showing a schematic configuration of an in-vehicle imaging device according to Embodiment 7 of the present invention. In the in-vehicle imaging device 400 shown in FIG. 13, components having the same functions as those of the in-vehicle imaging device 300 of Embodiment 3 shown in FIG. 10 are given the same reference numerals, and duplicate descriptions are omitted.
 (Configuration of the in-vehicle imaging device 400)
 As shown in FIG. 13, the in-vehicle imaging device 400 of this embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4, and further includes a vehicle speed sensor 5. The vehicle speed sensor 5 detects and outputs the vehicle speed of the host vehicle. As in Embodiment 3, the dirt wiping diagnosis unit 2 includes a cleaning state determination unit 23 in addition to the stable state determination unit 21 and the wiping area extraction unit 22; in this embodiment, however, the stable state determination unit 21 and the wiping area extraction unit 22 may be omitted. The dirt wiping diagnosis unit 2, the dirt detection unit 3, and the dirt state management unit 4 may form the lens dirt wiping diagnostic device, or the vehicle speed sensor 5 may also be included in it.
 (Processing of the dirt wiping diagnosis unit 2)
 The dirt wiping diagnosis unit 2 acquires the vehicle speed of the host vehicle output from the vehicle speed sensor 5; when the vehicle speed becomes equal to or greater than a predetermined value, it stops the processing of the stable state determination unit 21 and the wiping area extraction unit 22 and outputs the cleaning signal from the cleaning state determination unit 23 to the dirt state management unit 4.
 (Processing of the dirt state management unit 4)
 When the cleaning signal output from the cleaning state determination unit 23 changes from "not cleaning" to "cleaning" and then from "cleaning" to "not cleaning", the dirt state management unit 4 regards the lens of the in-vehicle camera 1 as having been cleaned, changes the stored dirt state from "dirt present" to "no dirt", and deletes (initializes) the dirt storage area.
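 This transition handling can be sketched as a small state machine; the class, field names, and initial state are assumptions, and the dirt storage area is shown only as a placeholder container.

```python
class DirtStateManager:
    """Sketch of the Embodiment-7 handling: a full not-cleaning -> cleaning ->
    not-cleaning transition of the cleaning signal resets the stored state."""

    def __init__(self):
        self.dirty = True                 # assumed initial state: dirt present
        self.dirt_area = set()            # stored dirt storage area (placeholder)
        self._prev_cleaning = False
        self._saw_cleaning = False

    def on_cleaning_signal(self, cleaning: bool) -> None:
        if cleaning and not self._prev_cleaning:
            self._saw_cleaning = True                     # entered "cleaning"
        if not cleaning and self._prev_cleaning and self._saw_cleaning:
            self.dirty = False                            # "dirt present" -> "no dirt"
            self.dirt_area.clear()                        # initialize the dirt storage area
            self._saw_cleaning = False
        self._prev_cleaning = cleaning
```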
 (Extended example of the processing of the dirt state management unit 4)
 However, the cleaning signal output from the cleaning state determination unit 23 only indicates whether cleaning took place; whether the dirt was actually wiped off cannot be determined from the cleaning signal. It is therefore preferable not to apply this embodiment unnecessarily.
 For example, window open/close information may be acquired from the vehicle, and the wiping of dirt may be determined from the cleaning signal output by the cleaning state determination unit 23 only for an in-vehicle camera 1 mounted near a window that is sufficiently open. This prevents this embodiment from being applied unnecessarily.
 Alternatively, it is preferable to restrict the conditions under which this embodiment is applied, for example by providing load sensors in the seats of the host vehicle to detect the presence of occupants and applying this embodiment only to an in-vehicle camera 1 with an occupant in a nearby seat.
 When a surrounding object appears in the image of the in-vehicle camera 1, the region where the object is displayed can be judged to be free of dirt. Also, dirt usually appears with a blurred outline in the captured image of the in-vehicle camera 1, so a region where a sharp outline appears can be judged to be free of dirt.
 Therefore, a region where an object can be detected by an object recognition unit (not shown), such as pedestrian detection that detects pedestrians in the captured image of the in-vehicle camera 1 or vehicle detection that detects vehicles in the captured image, or a region where a sharp outline is detected in the captured image of the in-vehicle camera 1, may be judged to be free of dirt, and the dirt state management unit 4 may reflect this judgment in the stored dirt state and dirt storage area.
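 A minimal sketch of clearing object-detection results from the stored dirt area is shown below; the box format and function name are assumptions, and the detections would come from whatever pedestrian or vehicle detector is available.

```python
import numpy as np

def apply_object_detections(dirt_mask, detections):
    """Regions where a pedestrian or vehicle is clearly visible cannot be
    covered by dirt, so they are cleared from the stored dirt area.
    detections: iterable of (x, y, width, height) boxes (assumed format)."""
    cleared = dirt_mask.copy()
    for x, y, w, h in detections:
        cleared[y:y + h, x:x + w] = False
    return cleared
```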
 According to this embodiment, the wiped dirt area can be detected even when the past dirt adhesion area and dirt state are unknown.
 <Appendix 1>
 The present invention described above is configured as follows.
 (1) An in-vehicle imaging device comprising: an in-vehicle camera that captures images through a lens; a dirt detection unit that detects the presence or absence of dirt on the lens or a dirt adhesion area based on temporal changes of captured images sequentially taken by the in-vehicle camera; a dirt wiping diagnosis unit that diagnoses the wiping of dirt from the lens; and a dirt state management unit that manages the dirt on the lens, wherein the dirt wiping diagnosis unit includes a stable state determination unit that determines whether a stable state exists based on the captured images sequentially taken by the in-vehicle camera, and a wiping area extraction unit that extracts a wiped dirt area based on temporal changes of the captured images taken while the determination result of the stable state determination unit indicates a stable state, and wherein the dirt state management unit stores the presence or absence of dirt or the dirt adhesion area detected by the dirt detection unit, reflects the wiped dirt area extracted by the wiping area extraction unit in the already stored presence or absence of dirt or dirt adhesion area, and thereby manages the dirt on the lens. With this configuration, it is possible to provide an in-vehicle imaging device having a lens dirt wiping diagnostic device that can reduce the risk of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the work of removing dirt attached to the lens surface of the in-vehicle camera is unknown.
 The present invention is also configured as follows.
 (2) In the above in-vehicle imaging device, the stable state determination unit sequentially acquires captured images output from the in-vehicle camera and, each time, generates a difference image between the acquired captured image and the captured image acquired immediately before it, extracts a region with a large difference based on the generated difference image, determines that the temporal change of the acquired captured image is small and the state is stable if the area of the region with a large difference is equal to or smaller than a predetermined value, and determines that the temporal change of the acquired captured image is large and the state is not stable if that area exceeds the predetermined value. By judging the state to be stable when the area of the region with a large difference is equal to or smaller than the predetermined value, the stable state can be determined appropriately.
 The present invention is also configured as follows.
 (3) In the above in-vehicle imaging device, the stable state determination unit sequentially acquires captured images output from the in-vehicle camera, sequentially generates feature images by extracting features from the acquired captured images and, each time, generates a difference image between the generated feature image and the feature image generated immediately before it, extracts a region with a large difference based on the generated difference image, determines that the temporal change of the acquired captured image is small and the state is stable if the area of the extracted region with a large difference is equal to or smaller than a predetermined value, and determines that the temporal change of the acquired captured image is large and the state is not stable if that area exceeds the predetermined value. Using feature images improves detection performance and reduces the amount of data to be processed.
 The present invention is also configured as follows.
 (4) In the above in-vehicle imaging device, the wiping area extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit is a stable state, generates a difference image between the acquired captured image in the stable state and a previously acquired captured image in the stable state, extracts a region with a large difference from the generated difference image, and uses the extracted region as the wiped dirt area. Using captured images in the stable state reduces the risk of erroneous determination.
 The present invention is also configured as follows.
 (5) In the above in-vehicle imaging device, the wiping area extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit is a stable state, generates a difference image between the acquired captured image in the stable state and a previously acquired captured image in the stable state, accumulates the generated difference images for a fixed period, extracts a region with a large difference from the accumulated difference image, and uses the extracted region as the wiped dirt area. By accumulating the difference images, it is possible to cope with cases where the brightness of the dirt changes, for example due to temporary illumination by the headlights of other vehicles.
 The present invention is also configured as follows.
 (6) In the above in-vehicle imaging device, the wiping area extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit is a stable state, generates a difference image between the acquired captured image in the stable state and a previously acquired captured image in the stable state, acquires the presence or absence of dirt or the dirt adhesion area stored in the dirt state management unit, accumulates the generated difference images until the acquired dirt disappears or the area of the acquired dirt adhesion area becomes equal to or smaller than a predetermined value, extracts a region with a large difference from the accumulated difference image, and uses the extracted region as the wiped dirt area. By accumulating the difference images, missing part of the wiped dirt area can be suppressed.
 The present invention is also configured as follows.
 (7) In the above in-vehicle imaging device, the wiping area extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit is a stable state, sequentially generates feature images by extracting features from the acquired captured images in the stable state, generates a difference image between the generated feature image and a previously acquired and generated feature image, extracts a region with a large difference based on the generated difference image, and uses the extracted region as the wiped dirt area. Using feature images improves detection performance and reduces the amount of data to be processed.
 The present invention is also configured as follows.
 (8) In the above in-vehicle imaging device, the dirt state management unit acquires the presence or absence of dirt or the dirt adhesion area detected by the dirt detection unit, stores the acquired dirt adhesion area as a dirt storage area, stores the acquired presence or absence of dirt or the size of the area of the stored dirt storage area as a dirt state, acquires the wiped dirt area detected by the wiping area extraction unit, deletes the acquired wiped area from the stored dirt adhesion area, and updates the dirt state based on the updated dirt adhesion area or the area of the deleted wiped area. This allows the dirt state of the lens to be managed appropriately.
 The present invention is also configured as follows.
 (9) In the above in-vehicle imaging device, the dirt state management unit acquires the presence or absence of dirt or the dirt adhesion area detected by the dirt detection unit, stores the acquired dirt adhesion area as a dirt storage area, stores the acquired presence or absence of dirt or the size of the area of the stored dirt storage area as a dirt state, acquires the wiped dirt areas detected by the wiping area extraction unit and accumulates them for a fixed period, and sets the stored dirt state to "no dirt" when the area of the accumulated wiped area is equal to or greater than a predetermined value. By accumulating the wiped dirt areas, cases such as dirt attached to the lens surface being removed little by little in stages can also be handled.
 The present invention is also configured as follows.
 (10) In the above in-vehicle imaging device, the dirt state management unit notifies the user of the dirt management status of the lens based on the stored presence or absence of dirt or dirt adhesion area. This allows the user to know the dirt management status of the lens and take appropriate action.
 The present invention is also configured as follows.
 (11) In the above in-vehicle imaging device, the dirt state management unit notifies a system that autonomously controls the vehicle of the dirt management status of the lens based on the stored presence or absence of dirt or dirt adhesion area. This allows a system performing autonomous control, such as automated driving, to know the dirt management status of the lens and to perform appropriate control.
 The present invention is also configured as follows.
 (12) An in-vehicle imaging device comprising: an in-vehicle camera that captures images through a lens; a dirt detection unit that detects the presence or absence of dirt on the lens or a dirt adhesion area based on captured images sequentially taken by the in-vehicle camera; a dirt wiping diagnosis unit that diagnoses the wiping of dirt from the lens; and a dirt state management unit that manages the dirt on the lens, wherein the dirt wiping diagnosis unit includes a stable state determination unit that determines whether a stable state exists based on the captured images sequentially taken by the in-vehicle camera, a cleaning state determination unit that determines that the lens has been cleaned, and a wiping area extraction unit that, based on the determinations of the stable state determination unit and the cleaning state determination unit, extracts captured images with little temporal change before and after the period during which the lens is being cleaned and extracts the wiped dirt area from the difference between the two captured images before and after that cleaning period, and wherein the dirt state management unit stores the presence or absence of dirt or the dirt adhesion area detected by the dirt detection unit, reflects the wiped dirt area extracted by the wiping area extraction unit in the already stored presence or absence of dirt or dirt adhesion area, and thereby manages the dirt on the lens. With this configuration, the cleaning state of the lens is reflected, and it is possible to provide an in-vehicle imaging device having a lens dirt wiping diagnostic device that can reduce the risk of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the work of removing dirt attached to the lens surface of the in-vehicle camera is unknown.
 The present invention is also configured as follows.
 (13) In the above in-vehicle imaging device, the cleaning state determination unit extracts a region whose luminance is at or below a predetermined value from the captured image of the in-vehicle camera as a dark region, and determines that the lens is being cleaned when the area of the extracted dark region becomes equal to or greater than a predetermined value. The cleaning state can thus be determined by extracting a low-luminance dark region.
 The present invention is also configured as follows.
 (14) In the above in-vehicle imaging device, when the extracted dark region does not cover the entire captured image, the cleaning state determination unit determines whether there is a large luminance change at the edge of the extracted dark region, further extracts only dark regions with a large luminance change at their edges, and determines that the lens is being cleaned when the area of those dark regions becomes equal to or greater than a predetermined value. By judging the luminance change at the edge of the dark region, the cleaning state can be determined more accurately.
 The present invention is also configured as follows.
 (15) In the above in-vehicle imaging device, the cleaning state determination unit extracts a region whose luminance is at or below a predetermined value from the captured image of the in-vehicle camera as a dark region, accumulates the extracted dark regions, and determines that the lens is being cleaned when the area of the accumulated dark region becomes equal to or greater than a predetermined value. By accumulating the dark regions, the cleaning of the lens can be detected even when it is cleaned in several passes.
 The present invention is also configured as follows.
 (16) In the above in-vehicle imaging device, the dirt wiping diagnosis unit refers to the vehicle speed of the host vehicle and/or information related to occupants getting in and out of the host vehicle, and operates only when an occupant of the host vehicle can clean the lens. By stopping its operation when the lens cannot be cleaned, the processing load can be reduced.
 The present invention is also configured as follows.
 (17) In the above in-vehicle imaging device, the dirt detection unit, the dirt wiping diagnosis unit, and the dirt state management unit constitute a lens dirt wiping diagnostic device; the lens dirt wiping diagnostic device stores a captured image of the in-vehicle camera at shutdown, that is, between immediately after the engine is stopped and the shutdown of the lens dirt wiping diagnostic device, and, when the engine is restarted, uses the captured image stored at shutdown as the past captured image taken before the lens was cleaned. The dirt state of the lens can thus be managed appropriately even in a configuration in which the lens dirt wiping diagnostic device is stopped when the engine is stopped.
 The present invention is also configured as follows.
 (18) The above in-vehicle imaging device includes an object recognition unit that detects objects in the captured image of the in-vehicle camera, and the dirt state management unit extracts the region in which the object recognition unit has detected an object as a wiped dirt area and deletes the extracted wiped area from the stored dirt adhesion area. Since a region in which an object has been detected is considered to be free of lens dirt, reflecting the detection result of the object recognition unit allows the dirt state of the lens to be managed more appropriately.
 The present invention is also configured as follows.
 (19) A vehicle control system including the above in-vehicle imaging device, wherein autonomous control of the vehicle is permitted to the autonomous control system of the vehicle when the area of the dirt adhesion area stored in the dirt state management unit decreases to a predetermined value or less. This allows autonomous control of the vehicle to be performed without being affected by dirt on the lens.
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the above embodiments have been described in detail to explain the present invention clearly, and the invention is not necessarily limited to configurations having all the described elements. Any combination of the elements of the individual embodiments described above is also included in the present invention.
 1…in-vehicle camera, 2…dirt wiping diagnosis unit, 3…dirt detection unit, 4…dirt state management unit, 5…vehicle speed sensor, 21…stable state determination unit, 22…wiping area extraction unit, 23…cleaning state determination unit, 31, 32…regions (water drop display regions), 33…region (pedestrian display region), 34…region (cloth display region).

Claims (15)

  1.  An in-vehicle imaging device comprising:
     an in-vehicle camera that captures images through a lens;
     a dirt detection unit that detects the presence or absence of dirt on the lens or a dirt adhesion area based on captured images sequentially taken by the in-vehicle camera;
     a dirt wiping diagnosis unit that diagnoses the wiping of dirt from the lens; and
     a dirt state management unit that manages the dirt on the lens,
     wherein the dirt wiping diagnosis unit includes:
     a stable state determination unit that determines whether a stable state exists based on temporal changes of the captured images sequentially taken by the in-vehicle camera; and
     a wiping area extraction unit that extracts a wiped dirt area based on temporal changes of the captured images taken while the determination result of the stable state determination unit indicates a stable state, and
     wherein the dirt state management unit stores the presence or absence of dirt or the dirt adhesion area detected by the dirt detection unit, reflects the wiped dirt area extracted by the wiping area extraction unit in the already stored presence or absence of dirt or dirt adhesion area, and thereby manages the dirt on the lens.
  2.  The in-vehicle imaging device according to claim 1, wherein the stable state determination unit sequentially acquires captured images output from the in-vehicle camera and, each time, generates a difference image between the acquired captured image and the captured image acquired immediately before it, extracts a region with a large difference based on the generated difference image, determines that the temporal change of the acquired captured image is small and the state is stable if the area of the region with a large difference is equal to or smaller than a predetermined value, and determines that the temporal change of the acquired captured image is large and the state is not stable if that area exceeds the predetermined value.
  3.  The in-vehicle imaging device according to claim 1, wherein
     the stable state determination unit sequentially acquires captured images output from the in-vehicle camera, sequentially generates feature amount images by extracting feature amounts from the sequentially acquired captured images, generates, each time, a difference image between the generated feature amount image and the feature amount image generated immediately before it, extracts a region of large difference based on the generated difference image, determines that the temporal change of the acquired captured image is small and that the image is in a stable state when the area of the extracted region of large difference is equal to or smaller than a predetermined value, and determines that the temporal change of the acquired captured image is large and that the image is not in a stable state when that area exceeds the predetermined value.
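Claim 3 replaces the raw frames of claim 2 with feature amount images. The sketch below assumes a simple gradient-magnitude (edge) image as the feature; the feature choice and thresholds are illustrative assumptions only.

import numpy as np

def edge_feature(frame: np.ndarray) -> np.ndarray:
    """Assumed feature amount image: horizontal plus vertical gradient magnitude."""
    f = frame.astype(np.float32)
    gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))
    gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))
    return gx + gy

def is_stable_by_features(prev_frame: np.ndarray, cur_frame: np.ndarray,
                          diff_threshold: float = 20.0,
                          max_changed_ratio: float = 0.02) -> bool:
    """Stable-state check on consecutive feature images (thresholds are assumptions)."""
    diff = np.abs(edge_feature(cur_frame) - edge_feature(prev_frame))
    return (diff > diff_threshold).mean() <= max_changed_ratio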
  4.  The in-vehicle imaging device according to claim 1, wherein
     the wiping region extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit indicates a stable state, generates a difference image between the acquired stable-state captured image and a previously acquired stable-state captured image, extracts a region of large difference from the generated difference image, and takes the extracted region as the dirt wiping region.
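A sketch of the wiping-region extraction of claim 4: compare the current stable frame with an earlier stable frame and treat the strongly changed pixels as the wiped region. The threshold is an illustrative assumption.

import numpy as np

def extract_wiping_region(prev_stable: np.ndarray, cur_stable: np.ndarray,
                          diff_threshold: int = 30) -> np.ndarray:
    """Return a boolean mask of pixels that changed strongly between two stable frames.

    Dirt that has been wiped away changes the image locally even though the scene
    itself is stable, so the region of large difference approximates the wiped area.
    """
    diff = np.abs(cur_stable.astype(np.int16) - prev_stable.astype(np.int16))
    return diff > diff_threshold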
  5.  The in-vehicle imaging device according to claim 1, wherein
     the wiping region extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit indicates a stable state, generates a difference image between the acquired stable-state captured image and a previously acquired stable-state captured image, accumulates the generated difference images over a certain period, extracts a region of large difference from the accumulated difference image, and takes the extracted region as the dirt wiping region.
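Claim 5 accumulates the stable-frame differences over a fixed period before thresholding. A short sketch follows; the accumulation length and threshold are assumed values.

import numpy as np
from collections import deque

class AccumulatedWipingExtractor:
    """Accumulate stable-frame differences over a fixed number of frames (assumed length)."""
    def __init__(self, period: int = 30, diff_threshold: int = 30):
        self.period = period
        self.diff_threshold = diff_threshold
        self.diffs = deque(maxlen=period)

    def update(self, prev_stable: np.ndarray, cur_stable: np.ndarray):
        diff = np.abs(cur_stable.astype(np.int16) - prev_stable.astype(np.int16))
        self.diffs.append(diff)
        if len(self.diffs) < self.period:
            return None  # still accumulating
        accumulated = np.stack(tuple(self.diffs)).sum(axis=0)
        # Pixels whose accumulated change is large are taken as the wiped region.
        return accumulated > self.diff_threshold * self.period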
  6.  The in-vehicle imaging device according to claim 1, wherein
     the wiping region extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit indicates a stable state, generates a difference image between the acquired stable-state captured image and a previously acquired stable-state captured image, acquires the presence or absence of dirt or the dirt adhesion region stored by the dirt state management unit, accumulates the generated difference images until the acquired dirt disappears or the area of the acquired dirt adhesion region becomes equal to or smaller than a predetermined value, extracts a region of large difference from the accumulated difference image, and takes the extracted region as the dirt wiping region.
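Claim 6 keeps accumulating until the stored dirt disappears or its area falls below a threshold. The sketch below assumes the managed dirt state is supplied as a boolean mask; the thresholds are assumptions.

import numpy as np

class WipingExtractorUntilClean:
    """Accumulate stable-frame differences until the managed dirt area is small enough."""
    def __init__(self, diff_threshold: int = 30, min_dirt_area: int = 50):
        self.diff_threshold = diff_threshold
        self.min_dirt_area = min_dirt_area   # assumed "dirt is gone" area threshold
        self.accumulated = None

    def update(self, prev_stable: np.ndarray, cur_stable: np.ndarray,
               dirt_mask: np.ndarray):
        diff = np.abs(cur_stable.astype(np.int32) - prev_stable.astype(np.int32))
        self.accumulated = diff if self.accumulated is None else self.accumulated + diff
        if dirt_mask.sum() > self.min_dirt_area:
            return None  # dirt still present: keep accumulating
        wiped = self.accumulated > self.diff_threshold
        self.accumulated = None  # reset once the dirt is judged gone
        return wiped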
  7.  The in-vehicle imaging device according to claim 1, wherein
     the wiping region extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit indicates a stable state, sequentially generates feature amount images by extracting feature amounts from the sequentially acquired stable-state captured images, generates a difference image between the generated feature amount image and a previously generated feature amount image, extracts a region of large difference based on the generated difference image, and takes the extracted region as the dirt wiping region.
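Claim 7 performs the same extraction on feature amount images instead of raw frames. A brief sketch, reusing the same assumed gradient-magnitude feature as in the claim 3 sketch; the threshold is an assumption.

import numpy as np

def _edge_feature(frame: np.ndarray) -> np.ndarray:
    # Same assumed gradient-magnitude feature as in the claim 3 sketch.
    f = frame.astype(np.float32)
    return (np.abs(np.diff(f, axis=1, prepend=f[:, :1]))
            + np.abs(np.diff(f, axis=0, prepend=f[:1, :])))

def extract_wiping_region_by_features(prev_stable: np.ndarray, cur_stable: np.ndarray,
                                      diff_threshold: float = 25.0) -> np.ndarray:
    """Wiped-region mask from the difference of two feature images (threshold assumed)."""
    diff = np.abs(_edge_feature(cur_stable) - _edge_feature(prev_stable))
    return diff > diff_threshold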
  8.  The in-vehicle imaging device according to claim 1, wherein
     the dirt state management unit acquires the presence or absence of dirt or the dirt adhesion region detected by the dirt detection unit, stores the acquired dirt adhesion region as a dirt storage region, stores, as a dirt state, the acquired presence or absence of dirt or the size of the area of the stored dirt storage region, acquires the dirt wiping region detected by the wiping region extraction unit, deletes the acquired wiping region from the stored dirt adhesion region, and updates the dirt state based on the updated dirt adhesion region or on the area of the deleted wiping region.
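A sketch of the bookkeeping in claim 8: the dirt adhesion region is held as a boolean mask, the wiped region is subtracted from it, and a coarse dirt state is derived from the remaining area. The state labels and the area threshold are assumptions.

import numpy as np

class DirtStateManager:
    """Keeps a dirt mask and a coarse dirt state; thresholds and labels are assumptions."""
    def __init__(self, shape, dirty_area_threshold: int = 200):
        self.dirt_mask = np.zeros(shape, dtype=bool)   # stored dirt adhesion region
        self.dirty_area_threshold = dirty_area_threshold
        self.state = "clean"

    def store_dirt(self, detected_mask: np.ndarray) -> None:
        self.dirt_mask |= detected_mask                # remember newly detected dirt
        self._update_state()

    def apply_wiping(self, wiped_mask: np.ndarray) -> None:
        self.dirt_mask &= ~wiped_mask                  # delete the wiped region
        self._update_state()

    def _update_state(self) -> None:
        area = int(self.dirt_mask.sum())
        self.state = "dirty" if area >= self.dirty_area_threshold else "clean"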
  9.  The in-vehicle imaging device according to claim 1, wherein
     the dirt state management unit acquires the presence or absence of dirt or the dirt adhesion region detected by the dirt detection unit, stores the acquired dirt adhesion region as a dirt storage region, stores, as a dirt state, the acquired presence or absence of dirt or the size of the area of the stored dirt storage region, acquires the dirt wiping region detected by the wiping region extraction unit and accumulates it over a certain period, and, when the area of the accumulated wiping region is equal to or larger than a predetermined value, sets the stored dirt state to no dirt.
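Claim 9 instead accumulates the wiped area over a period and, once it is large enough, resets the stored state to no dirt. A brief sketch with assumed window length and area threshold:

import numpy as np

class WipingAccumulator:
    """Accumulate wiped pixels over a fixed window; declare the lens clean past a threshold."""
    def __init__(self, shape, window: int = 60, clean_area_threshold: int = 500):
        self.window = window                     # assumed accumulation period (frames)
        self.clean_area_threshold = clean_area_threshold
        self.accumulated = np.zeros(shape, dtype=bool)
        self.count = 0

    def update(self, wiped_mask: np.ndarray) -> bool:
        self.accumulated |= wiped_mask
        self.count += 1
        if self.count < self.window:
            return False
        is_clean = int(self.accumulated.sum()) >= self.clean_area_threshold
        self.accumulated[:] = False              # restart the accumulation window
        self.count = 0
        return is_clean                          # True -> set the stored dirt state to "no dirt"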
  10.  The in-vehicle imaging device according to claim 1, wherein
     the dirt state management unit notifies a user of the management status of dirt on the lens, based on the stored presence or absence of dirt or the stored dirt adhesion region.
  11.  The in-vehicle imaging device according to claim 1, wherein
     the dirt state management unit notifies a system that autonomously controls the vehicle of the management status of dirt on the lens, based on the stored presence or absence of dirt or the stored dirt adhesion region.
  12.  An in-vehicle camera that captures an image through a lens;
     A dirt detection unit that detects, based on captured images sequentially captured by the in-vehicle camera, the presence or absence of dirt on the lens or a dirt adhesion region;
     A dirt wiping diagnosis unit that diagnoses wiping of dirt from the lens; and
     A dirt state management unit that manages dirt on the lens,
     wherein the dirt wiping diagnosis unit includes:
     A stable state determination unit that determines, based on the captured images sequentially captured by the in-vehicle camera, whether or not the images are in a stable state;
     A cleaning state determination unit that determines that the lens has been cleaned; and
     A wiping region extraction unit that, based on the determinations of the stable state determination unit and the cleaning state determination unit, extracts a captured image with small temporal change from each of the periods before and after the period in which the lens is cleaned, and extracts a dirt wiping region from the difference between the two captured images from before and after the cleaning period,
     and wherein the dirt state management unit stores the presence or absence of dirt or the dirt adhesion region detected by the dirt detection unit, reflects the dirt wiping region extracted by the wiping region extraction unit in the stored presence or absence of dirt or dirt adhesion region, and thereby manages the dirt on the lens.
     An in-vehicle imaging device characterized by the above.
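A sketch of the claim 12 flow: keep the last stable frame seen before cleaning starts, wait for a stable frame after cleaning ends, and take the strongly changed pixels between the two as the wiped region. Class and method names and the threshold are assumptions.

import numpy as np

class BeforeAfterWipingExtractor:
    """Compare a stable frame from before cleaning with one from after (threshold assumed)."""
    def __init__(self, diff_threshold: int = 30):
        self.diff_threshold = diff_threshold
        self.before_cleaning = None   # last stable frame seen before cleaning
        self.cleaning_seen = False

    def update(self, frame: np.ndarray, stable: bool, being_cleaned: bool):
        if being_cleaned:
            self.cleaning_seen = True
            return None
        if not stable:
            return None
        if not self.cleaning_seen:
            self.before_cleaning = frame.copy()   # candidate "before" frame
            return None
        if self.before_cleaning is None:
            return None
        # First stable frame after the cleaning period: compare it with the "before" frame.
        diff = np.abs(frame.astype(np.int16) - self.before_cleaning.astype(np.int16))
        wiped = diff > self.diff_threshold        # changed strongly across the cleaning period
        self.before_cleaning = frame.copy()
        self.cleaning_seen = False
        return wiped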
  13.  The in-vehicle imaging device according to claim 12, wherein
     the cleaning state determination unit extracts, from a captured image of the in-vehicle camera, a region whose luminance is equal to or lower than a predetermined value as a dark region, and determines that the lens is being cleaned when the area of the extracted dark region becomes equal to or larger than a predetermined value.
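A sketch of claim 13: a hand or cloth covering the lens shows up as a large low-luminance region, which is taken as evidence that the lens is being cleaned. Both thresholds are illustrative assumptions.

import numpy as np

def is_being_cleaned(frame: np.ndarray,
                     dark_luminance: int = 40,
                     min_dark_ratio: float = 0.5) -> bool:
    """True when a sufficiently large low-luminance (dark) region covers the image."""
    dark = frame <= dark_luminance          # dark region: luminance below the assumed limit
    return dark.mean() >= min_dark_ratio    # large enough area -> lens is being cleaned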
  14.  The in-vehicle imaging device according to claim 13, wherein
     the cleaning state determination unit, when the extracted dark region does not cover the entire captured image, determines whether there is a large luminance change at the edge of the extracted dark region, further extracts only those dark regions that have a large luminance change at their edges, and determines that the lens is being cleaned when the area of the extracted dark regions having a large luminance change at their edges becomes equal to or larger than a predetermined value.
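Claim 14 additionally checks, when the dark region does not cover the whole frame, that the dark region has a sharp luminance step at its boundary (as a covering hand does, unlike a uniformly dark night scene). The boundary test below is a simple assumed heuristic that treats the dark pixels as one region rather than extracting connected components; all thresholds are assumptions.

import numpy as np

def is_being_cleaned_with_edge_check(frame: np.ndarray,
                                     dark_luminance: int = 40,
                                     min_dark_ratio: float = 0.5,
                                     edge_step: int = 60) -> bool:
    """Accept the dark region only if its boundary shows a large luminance change."""
    dark = frame <= dark_luminance
    if dark.mean() >= 0.99:                      # whole frame dark: fall back to the claim 13 rule
        return True
    # Boundary pixels: dark pixels that have at least one non-dark 4-neighbour.
    boundary = dark & ~(np.roll(dark, 1, 0) & np.roll(dark, -1, 0)
                        & np.roll(dark, 1, 1) & np.roll(dark, -1, 1))
    if not boundary.any():
        return False
    # Luminance jump near the boundary, approximated by local gradients.
    f = frame.astype(np.int16)
    grad = np.maximum(np.abs(np.diff(f, axis=0, prepend=f[:1, :])),
                      np.abs(np.diff(f, axis=1, prepend=f[:, :1])))
    sharp_boundary = grad[boundary] >= edge_step
    # Keep the dark region only if most of its boundary is a sharp luminance step.
    if sharp_boundary.mean() < 0.5:
        return False
    return dark.mean() >= min_dark_ratio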
  15.  The in-vehicle imaging device according to claim 12, wherein
     the cleaning state determination unit extracts, from captured images of the in-vehicle camera, regions whose luminance is equal to or lower than a predetermined value as dark regions, accumulates the extracted dark regions, and determines that the lens is being cleaned when the area of the accumulated dark regions becomes equal to or larger than a predetermined value.
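Claim 15 accumulates the dark regions over time (for example while a cloth moves across the lens) before applying the area test. A brief sketch with assumed thresholds:

import numpy as np

class AccumulatedDarkRegionDetector:
    """Accumulate per-frame dark regions; report cleaning once their union is large enough."""
    def __init__(self, shape, dark_luminance: int = 40, min_dark_ratio: float = 0.5):
        self.dark_luminance = dark_luminance      # assumed luminance limit for "dark"
        self.min_dark_ratio = min_dark_ratio      # assumed area ratio for "being cleaned"
        self.accumulated = np.zeros(shape, dtype=bool)

    def update(self, frame: np.ndarray) -> bool:
        self.accumulated |= (frame <= self.dark_luminance)
        return self.accumulated.mean() >= self.min_dark_ratio

    def reset(self) -> None:
        self.accumulated[:] = False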
PCT/JP2017/040721 2017-02-14 2017-11-13 Onboard image-capture device WO2018150661A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017024810A JP6757271B2 (en) 2017-02-14 2017-02-14 In-vehicle imaging device
JP2017-024810 2017-02-14

Publications (1)

Publication Number Publication Date
WO2018150661A1 true WO2018150661A1 (en) 2018-08-23

Family

ID=63169194

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/040721 WO2018150661A1 (en) 2017-02-14 2017-11-13 Onboard image-capture device

Country Status (2)

Country Link
JP (1) JP6757271B2 (en)
WO (1) WO2018150661A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532876A (en) * 2019-07-26 2019-12-03 纵目科技(上海)股份有限公司 Night mode camera lens pays detection method, system, terminal and the storage medium of object

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7394602B2 (en) 2019-05-17 2023-12-08 株式会社Lixil Judgment device
JP7251425B2 (en) * 2019-09-20 2023-04-04 株式会社デンソーテン Attached matter detection device and attached matter detection method
JP7151675B2 (en) * 2019-09-20 2022-10-12 株式会社デンソーテン Attached matter detection device and attached matter detection method
JP7442384B2 (en) 2019-09-24 2024-03-04 株式会社Lixil toilet seat device
US20220395149A1 (en) * 2019-09-24 2022-12-15 Lixil Corporation Toilet seat device
WO2021060212A1 (en) * 2019-09-24 2021-04-01 株式会社Lixil Toilet seat device
CN112348784A (en) * 2020-10-28 2021-02-09 北京市商汤科技开发有限公司 Method, device and equipment for detecting state of camera lens and storage medium
JP7398644B2 (en) * 2022-03-29 2023-12-15 パナソニックIpマネジメント株式会社 image monitoring equipment
JP7398643B2 (en) * 2022-03-29 2023-12-15 パナソニックIpマネジメント株式会社 image monitoring equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014007294A1 (en) * 2012-07-03 2014-01-09 クラリオン株式会社 On-board device
JP2016015583A (en) * 2014-07-01 2016-01-28 クラリオン株式会社 On-vehicle imaging device

Also Published As

Publication number Publication date
JP6757271B2 (en) 2020-09-16
JP2018132861A (en) 2018-08-23

Similar Documents

Publication Publication Date Title
WO2018150661A1 (en) Onboard image-capture device
US9956941B2 (en) On-board device controlling accumulation removing units
JP5925314B2 (en) Vehicle perimeter monitoring device
JP4668838B2 (en) Raindrop detection device and wiper control device
KR100659227B1 (en) Wiper controller for controlling windshield wiper
US11237388B2 (en) Processing apparatus, image capturing apparatus, and image processing method
US11565659B2 (en) Raindrop recognition device, vehicular control apparatus, method of training model, and trained model
US20140232869A1 (en) Vehicle vision system with dirt detection
JP6755161B2 (en) Adhesion detection device and deposit detection method
WO2016002308A1 (en) Vehicle-mounted imaging device
JP6081034B2 (en) In-vehicle camera control device
US20140241589A1 (en) Method and apparatus for the detection of visibility impairment of a pane
US9001204B2 (en) Vehicle peripheral monitoring device
JP2014011785A (en) Diagnostic device and diagnostic method for on-vehicle camera contamination removal apparatus, and vehicle system
KR20170055907A (en) Image capture system
KR102420289B1 (en) Method, control device and vehicle for detecting at least one object present on a vehicle
CN116569016A (en) Optical device verification
GB2570156A (en) A Controller For Controlling Cleaning of a Vehicle Camera
KR20220152823A (en) Apparatus for recording drive video of vehicle and method thereof
JP2006265865A (en) Car window control device
JP3636955B2 (en) Raindrop detection device for vehicles
JP6841725B2 (en) Other vehicle monitoring system
JPH0872641A (en) Image recognizing device for vehicle
US20200207313A1 (en) Method for controlling at least one washing device of at least one sensor situated on an outer contour of a vehicle
JPH11185023A (en) Window glass contamination detector

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17896428

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17896428

Country of ref document: EP

Kind code of ref document: A1