CN111583169A - Pollution treatment method and system for vehicle-mounted camera lens


Info

Publication number
CN111583169A
Authority
CN
China
Prior art keywords
pollution
vehicle
camera lens
mounted camera
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910090634.3A
Other languages
Chinese (zh)
Inventor
相徐斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910090634.3A
Publication of CN111583169A
Legal status: Pending

Classifications

    • G06T 7/0002 (image analysis): inspection of images, e.g. flaw detection
    • G01P 3/36 (measuring linear or angular speed): devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light
    • G06T 7/11 (segmentation; edge detection): region-based segmentation
    • G06T 2207/10004 (image acquisition modality): still image; photographic image
    • G06T 2207/20021 (special algorithmic details): dividing image into blocks, subimages or windows
    • G06T 2207/20081 (special algorithmic details): training; learning
    • G06T 2207/20216 (image combination): image averaging
    • G06T 2207/30168 (subject or context of image processing): image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Electromagnetism (AREA)
  • Power Engineering (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a pollution treatment method and system for a vehicle-mounted camera lens. The method comprises: when the speed of a target vehicle exceeds a preset pollution detection starting threshold, performing pollution detection on image frames acquired by the vehicle-mounted camera lens using a trained deep learning model, so as to determine the pollution type of the lens when it is polluted; and generating a lens pollution signal according to the pollution type and sending it to an application device associated with the lens, so that the application device carries out pollution treatment according to the signal. The embodiments of the application can detect lens pollution in time, improve the accuracy of pollution detection, and enrich the functions of the vehicle-mounted camera lens.

Description

Pollution treatment method and system for vehicle-mounted camera lens
Technical Field
The application relates to the technical field of image processing, and in particular to a pollution treatment method and system for a vehicle-mounted camera lens.
Background
With the development of computer vision technology, more and more applications capture images of a scene with a camera and obtain information about the scene by analyzing those images.
Vehicle-mounted camera lenses are a case in point: they are used ever more widely, for example in surround-view systems and driving recorders. The vehicle-mounted camera lens also plays an extremely important role in ADAS (Advanced Driver Assistance Systems), where the operation of many intelligent functions depends on the image data the lens captures. When the lens is directly exposed to the outside, it is easily affected by contaminants such as dust, silt, sewage, and leaves; when the lens is mounted inside the vehicle's windshield, it is easily affected by the cleanliness of the windshield. Either situation can cause image-based intelligent assistance functions to err or fail.
Disclosure of Invention
In view of the above, the present application provides a pollution treatment method and system for a vehicle-mounted camera lens.
Specifically, the method is realized through the following technical scheme:
in a first aspect, the present application provides a pollution treatment method for a vehicle-mounted camera lens, the method including:
when the speed of a target vehicle exceeds a preset pollution detection starting threshold, performing pollution detection on image frames acquired by the vehicle-mounted camera lens using a trained deep learning model, so as to determine the pollution type of the vehicle-mounted camera lens when it is polluted;
and generating a lens pollution signal according to the pollution type of the vehicle-mounted camera lens, and sending the lens pollution signal to an application device associated with the vehicle-mounted camera lens so as to carry out pollution treatment by the application device according to the lens pollution signal.
Preferably, the pollution treatment comprises at least one of the following, or a combination thereof:
sending a pollution prompt;
cleaning the vehicle-mounted camera lens;
suspending a driving assistance function based on the onboard camera lens.
Preferably, the performing pollution detection on the image frames acquired by the vehicle-mounted camera lens using the trained deep learning model to determine the pollution type of the vehicle-mounted camera lens when it is polluted includes:
inputting image frames collected by the vehicle-mounted camera lens into a trained deep learning model, so that the deep learning model divides each image frame into a plurality of image sub-regions and calculates the pollution probability of each image sub-region;
and determining, according to the pollution probability of each image sub-region, the pollution type of the vehicle-mounted camera lens when it is polluted.
Preferably, the determining, according to the pollution probability of each image sub-region, the pollution type of the vehicle-mounted camera lens when it is polluted includes:
when the number of image frames input into the deep learning model reaches a preset number, averaging the pollution probabilities of the image sub-regions at the same region position across the preset number of image frames to obtain the region pollution probability of each region position;
and if there is a region position whose region pollution probability is greater than or equal to a preset pollution threshold, determining that the vehicle-mounted camera lens is of the pollution type.
Preferably, an inter-frame interval of the image frames input to the deep learning model is determined according to a vehicle speed of the target vehicle.
In a second aspect, the present application provides a pollution treatment system for a vehicle-mounted camera lens, the system comprising:
the vehicle speed acquisition module is used for acquiring the vehicle speed of a target vehicle detected by the vehicle speed sensor and sending the vehicle speed to the lens pollution detection module;
the vehicle-mounted camera lens is used for acquiring image frames;
the lens pollution detection module is used for acquiring the image frames collected by the vehicle-mounted camera lens when it judges that the vehicle speed of the target vehicle exceeds a preset pollution detection starting threshold, and performing pollution detection on the acquired image frames using a trained deep learning model, so as to determine the pollution type of the vehicle-mounted camera lens when it is polluted;
and the signal generation module is used for generating a lens pollution signal according to the pollution type of the vehicle-mounted camera lens and sending the lens pollution signal to an application device associated with the vehicle-mounted camera lens so as to carry out pollution treatment by the application device according to the lens pollution signal.
Preferably, the application device comprises at least one or a combination of the following:
the prompting device is used for sending a pollution prompt;
the cleaning device is used for cleaning the vehicle-mounted camera lens;
an advanced driver assistance system (ADAS) device for suspending a driving assistance function based on the vehicle-mounted camera lens.
Preferably, the lens contamination detection module includes:
the pollution probability determination submodule is used for inputting the image frames acquired by the vehicle-mounted camera lens into a trained deep learning model, so that the deep learning model divides each image frame into a plurality of image sub-regions and calculates the pollution probability of each image sub-region;
and the pollution type determination submodule is used for determining, according to the pollution probability of each image sub-region, the pollution type of the vehicle-mounted camera lens when it is polluted.
Preferably, the contamination type determination submodule is specifically configured to:
when the number of image frames input into the deep learning model reaches a preset number, averaging the pollution probabilities of the image sub-regions at the same region position across the preset number of image frames to obtain the region pollution probability of each region position;
and if there is a region position whose region pollution probability is greater than or equal to a preset pollution threshold, determining that the vehicle-mounted camera lens is of the pollution type.
Preferably, an inter-frame interval of the image frames input to the deep learning model is determined according to a vehicle speed of the target vehicle.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
In the embodiments of the application, the vehicle-mounted camera lens can obtain the speed of a target vehicle. If that speed is detected to exceed a preset pollution detection starting threshold, a trained deep learning model is used to perform pollution detection on the image frames collected by the lens, so as to determine the pollution type when the lens is polluted. Performing pollution detection through a deep learning model makes it possible to detect lens pollution in time, improves the accuracy of pollution detection, and enriches the functions of the vehicle-mounted camera lens.
When the lens is identified as polluted, a lens pollution signal is generated according to the pollution type and sent to an application device associated with the lens, so that the application device automatically carries out pollution treatment according to the signal; this improves the efficiency of pollution treatment and reduces the impact of lens pollution.
Drawings
FIG. 1 is a flowchart illustrating steps of an embodiment of a method for pollution treatment of a vehicle-mounted camera lens according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a lens contamination detection step according to an exemplary embodiment of the present application;
FIG. 3 is a diagram illustrating an image frame divided into a plurality of image sub-regions according to an exemplary embodiment of the present application;
FIG. 4 is a diagram illustrating a fusion of the contamination probabilities of image subregions of t image frames according to an exemplary embodiment of the present application;
FIG. 5 is a flowchart illustrating deep learning model training steps according to an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a pollution-free image sample captured by a vehicle-mounted all-round lens according to an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a contaminated image sample captured by a vehicle-mounted all-round lens according to an exemplary embodiment of the present application;
FIG. 8 is a truth map corresponding to a contaminated image sample according to an exemplary embodiment of the present application;
FIG. 9 is a hardware block diagram of the device in which the apparatus of the present application is located;
FIG. 10 is a block diagram illustrating an exemplary embodiment of a contamination processing apparatus for a vehicle-mounted camera lens according to the present disclosure;
fig. 11 is a block diagram illustrating an exemplary embodiment of a contamination processing system for a vehicle-mounted camera lens according to the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination", depending on the context.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a pollution treatment method for a vehicle-mounted camera lens according to an exemplary embodiment of the present application is shown, and specifically may include the following steps:
Step 101, when the speed of a target vehicle exceeds a set pollution detection starting threshold, performing pollution detection on image frames acquired by the vehicle-mounted camera lens using a trained deep learning model, so as to determine the pollution type of the vehicle-mounted camera lens when it is polluted;
In this embodiment, the vehicle-mounted camera lens can acquire the speed of the target vehicle in real time. In implementation, the real-time speed of the target vehicle can be acquired through a vehicle speed sensor and transmitted to the vehicle-mounted camera lens.
After the vehicle-mounted camera lens obtains the speed of the target vehicle, it can further judge whether the real-time speed exceeds the set pollution detection starting threshold. If it does, the pollution detection function of the vehicle-mounted camera lens is started; if not, the process ends.
After the pollution detection function is started, the vehicle-mounted camera lens can use the trained deep learning model to perform pollution detection on the image frames it acquires, so as to determine the pollution type when the lens is polluted.
In practice, in addition to the pollution type described above, the deep learning model may output a non-pollution type when the vehicle-mounted camera lens is not polluted. In implementation, the non-pollution type may be represented by a first value, such as 0, and the pollution type by a second value, such as 1.
As one example, the vehicle-mounted camera lens may include, but is not limited to, a surround-view camera, a driving recorder, and the like.
In a preferred embodiment of the present application, as shown in the flowchart of the lens contamination detection step of fig. 2, step 101 may include the following sub-steps:
Substep 101-1, inputting image frames collected by the vehicle-mounted camera lens into the trained deep learning model, so that the deep learning model divides each image frame into a plurality of image sub-regions and calculates the pollution probability of each image sub-region;
when the vehicle speed of the target vehicle is detected to exceed the pollution detection starting threshold value, the image frames collected by the vehicle-mounted camera lens can be input into the trained deep learning model. In the deep learning model, the received image frame may be divided into a plurality of image sub-regions by using a preset image division rule, and a pollution probability of each image sub-region may be calculated.
In one embodiment, the preset image dividing rule may include the number of image sub-regions to be divided, and the size of each image sub-region may be determined according to the size of the input image frame and the number of image sub-regions to be divided. For example, assuming that the number of image sub-regions to be divided is n × n, the width of the input image frame is w, and the height is h, it may be determined that the width of each image sub-region is (w/n), the step size in the width direction is (w/n), the height is (h/n), and the step size in the height direction is (h/n), and finally the image frame may be divided into n × n image sub-regions. As shown in fig. 3, P0 to Pn are image sub-regions divided for one image frame.
Of course, the embodiment of the present application is not limited to the above-mentioned image division rule, and different division rules may be set according to actual needs.
After the image frame is divided into a plurality of image sub-regions, the deep learning model can further perform pollution detection on each divided image sub-region, and calculate the pollution probability of each image sub-region.
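As an illustration only, the following is a minimal Python sketch of the division rule described above (not taken from the patent; NumPy, the function name, and the assumption that w and h are divisible by n are all illustrative):

```python
import numpy as np

def divide_into_subregions(frame: np.ndarray, n: int) -> list:
    """Split an (h, w, ...) image frame into n x n equal sub-regions.

    Mirrors the rule above: each sub-region is (h/n) x (w/n), with the
    same step sizes, yielding n*n sub-regions P0 .. P(n*n - 1).
    """
    h, w = frame.shape[:2]
    sub_h, sub_w = h // n, w // n  # assumes h and w divisible by n
    regions = []
    for row in range(n):
        for col in range(n):
            regions.append(frame[row * sub_h:(row + 1) * sub_h,
                                 col * sub_w:(col + 1) * sub_w])
    return regions
```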
Substep 101-2, determining, according to the pollution probability of each image sub-region, the pollution type of the vehicle-mounted camera lens when it is polluted.
In this embodiment, the pollution type of the vehicle-mounted camera lens is determined from the pollution probabilities of the image sub-regions into which each acquired image frame is divided. Performing lens pollution detection in this image-processing-based manner can improve the accuracy of detection.
In a preferred embodiment of the present application, the sub-step 101-2 may further comprise the sub-steps of:
when the number of image frames input into the deep learning model reaches a preset number, averaging the pollution probabilities of the image sub-regions at the same region position across the preset number of image frames to obtain the region pollution probability of each region position; and if there is a region position whose region pollution probability is greater than or equal to a preset pollution threshold, determining that the vehicle-mounted camera lens is of the pollution type.
Specifically, when the number of image frames input to the deep learning model reaches a preset number, say t frames, the pollution type of the vehicle-mounted camera lens may be detected by fusing the prediction results that the deep learning model outputs for the t image frames. Fusing the predictions of t frames reduces the effect of an inaccurate prediction on any single frame and thus improves the accuracy of lens pollution detection.
The fusion consists of averaging the pollution probabilities of the image sub-regions at the same region position across the t image frames to obtain the region pollution probability of each region position. In practice, each image sub-region has a corresponding region position in its image frame, and image frames divided under the same image division rule have the same distribution of region positions. For example, as shown in fig. 4, across the t image frames, the image sub-region at position PX of each frame occupies the same region position, where X = 0, …, n. In fig. 4, the pollution probabilities of the sub-regions at position P0 in each frame are averaged to obtain the region pollution probability at P0; the probabilities at position P1 are averaged to obtain the region pollution probability at P1; and so on, up to the region pollution probability at Pn.
In a specific implementation, the pollution threshold prob_th may be preset; prob_th may be set according to an empirical value, which the embodiment of the present application does not limit. When the region pollution probability of some region position is greater than or equal to prob_th, the vehicle-mounted camera lens can be determined to be of the pollution type. When the region pollution probabilities of all region positions are less than prob_th, the vehicle-mounted camera lens can be determined to be of the non-pollution type.
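For illustration, here is a minimal sketch of this fusion step (an assumption-laden example, not the patent's code: it presumes the model's outputs are collected into a t × n_regions probability array and uses an arbitrary prob_th):

```python
import numpy as np

def detect_lens_pollution(per_frame_probs: np.ndarray,
                          prob_th: float = 0.5):
    """per_frame_probs: shape (t, n_regions); row i holds frame i's
    pollution probabilities for region positions P0..Pn.

    Returns (is_polluted, region_probs), where region_probs[j] is the
    mean pollution probability at region position Pj over the t frames.
    """
    region_probs = per_frame_probs.mean(axis=0)  # fuse the t frames
    is_polluted = bool((region_probs >= prob_th).any())
    return is_polluted, region_probs
```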
In one embodiment, the inter-frame interval f of the image frames input to the deep learning model may be determined according to the vehicle speed V of the target vehicle. One way of determining the inter-frame interval f is:
[The formula for f is given in the original publication only as an embedded image (BDA0001963161740000081) and is not recoverable from this text.]
v1 is a pollution detection starting threshold, and V2 is a corresponding preset vehicle speed threshold when f is minimum; fmin is a preset minimum inter-frame space.
In a preferred embodiment of the present application, referring to the flowchart of deep learning model training steps shown in fig. 5, the deep learning model training process may include the following steps:
step 201, acquiring an image sample set, wherein the image sample set comprises a pollution-free image sample and a polluted image sample of a marked pollution area;
in implementation, one image sample in the image sample set is an image captured by a vehicle-mounted camera lens, where the vehicle-mounted camera lens may include the vehicle-mounted camera lens subjected to contamination detection in this embodiment, or may be another vehicle-mounted camera lens, which is not limited in this embodiment of the application.
The image sample set of the present embodiment includes a large number of non-contaminated image samples and contaminated image samples. For example, fig. 6 is a schematic diagram illustrating a sample of a pollution-free image obtained by an on-board all-round lens, in fig. 6, an area a is a non-imaging area, and an area B is a vehicle body area. Fig. 7 is a schematic diagram showing a contaminated image sample acquired by a vehicle-mounted all-round lens, and in fig. 7, a C area is a contaminated area.
In practice, the image samples in the image sample set can cover various scenes such as day, night, rainy days, tunnels, cities, expressways, and mountain roads. The contaminants in the contaminated image samples may cover many common lens contaminants such as dust, silt, sewage, and leaves.
It should be noted that the image sample set acquired in step 201 is a training sample set; besides the training sample set, the sample set in this embodiment may also include a test sample set. For example, assuming the sample set contains 100000 samples, half contaminated and half non-contaminated, and the training-to-test split ratio is set to 8:2, then 80000 samples can be randomly selected from the sample set to form the training sample set, and the remaining 20000 samples form the test sample set.
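A small illustrative sketch of such a random 8:2 split (the helper and the fixed seed are assumptions, included only to make the example reproducible):

```python
import random

def split_samples(samples: list, train_ratio: float = 0.8,
                  seed: int = 0) -> tuple:
    """Randomly split a sample set into training and test sets."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]  # e.g. 80000 / 20000 samples
```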
After the image sample set is obtained, it may be input to a convolutional neural network (CNN) for training to generate the deep learning model.
Step 202, dividing each image sample in the image sample set into a plurality of first image sub-regions according to a preset size and a preset step length through a data layer in the convolutional neural network, identifying each first image sub-region in the image sample as a pollution type or a pollution-free type aiming at each image sample, and outputting the pollution type or the pollution-free type of each first image sub-region of the image sample to a convolutional layer;
in a specific implementation, the convolutional neural network may receive the image sample set through a Data layer, and after the Data layer receives the image sample set, each image sample in the image sample set may be first divided into a plurality of first image sub-regions.
In the actual dividing, the image samples may be divided according to a preset image division rule, where the image division rule may include the number of first image sub-regions that each image sample needs to be divided, and the image sample may be divided into regions according to the size of each image sample. For example, assuming that the width of an image sample is w, the height of the image sample is h, and the number of first image sub-regions to be divided is n × n, it may be determined that the width of each first image sub-region in each image sample is (w/n), the step size in the width direction is (w/n), the height is (h/n), and the step size in the height direction is (h/n), and finally each image sample is divided into n × n first image sub-regions.
After the data layer divides each image sample into n × n first image sub-regions, each first image sub-region can be identified as a pollution type or a pollution-free type.
In a preferred embodiment of the present application, the step of identifying, for each image sample, each first image sub-region in the image sample as a contamination type or a non-contamination type further includes the following sub-steps:
a substep 202-1, generating a true value map of each contaminated image sample according to a labeled contaminated area in the contaminated image sample, wherein the true value map is consistent with the size of the corresponding contaminated image sample, the pixel value of a pixel point included in the labeled contaminated area in the true value map is a first preset pixel value, and the pixel value of a pixel point outside the contaminated area is a second preset pixel value;
in a specific implementation, in the sample marking, for each contaminated sample, a contaminated area of the contaminated sample may be marked, for example, the marked contaminated area may be as shown in fig. 7. From the marked contamination areas, the type of contamination of the first image sub-areas in the contaminated image samples can be determined.
In this embodiment, a true value map of each contaminated image sample may be generated based on the marked contaminated regions in each contaminated image sample. The truth map of each contaminated image sample is consistent with the size of the contaminated image sample corresponding to the truth map.
In the truth map, the pixel values of the pixels included in the marked polluted region may be set as a first preset pixel value, and the pixel values of the pixels outside the polluted region may be set as a second preset pixel value. For example, according to the contaminated image sample shown in fig. 7, if the first preset pixel value of the pixel point in the contaminated area is set to 255, and the second preset pixel value of the pixel point outside the contaminated area is set to 0, a black-and-white image as shown in fig. 8 can be obtained, where white represents the contaminated area.
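As an illustration, a minimal sketch of building such a truth map, assuming the marked polluted region is available as a boolean pixel mask (the mask representation and function name are assumptions):

```python
import numpy as np

def make_truth_map(polluted_mask: np.ndarray,
                   fg_value: int = 255, bg_value: int = 0) -> np.ndarray:
    """Build a truth map the same size as the contaminated image sample.

    polluted_mask: boolean array marking the labeled polluted pixels.
    Pixels inside the marked region get fg_value (255 in the example),
    all others get bg_value (0), as in the map of fig. 8.
    """
    truth = np.full(polluted_mask.shape, bg_value, dtype=np.uint8)
    truth[polluted_mask] = fg_value
    return truth
```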
Substep 202-2, dividing the truth map into a plurality of second image sub-regions according to a sub-region division mode consistent with the corresponding contaminated image samples, wherein each second image sub-region in the truth map has a mapped first image sub-region in the corresponding contaminated image sample;
After the data layer of the convolutional neural network obtains the truth map of each contaminated image sample, it may divide the truth map into a plurality of second image sub-regions using the same sub-region division as the corresponding contaminated image sample. Since the truth map has the same size as its contaminated image sample, the sub-regions obtained under the same division correspond one to one; that is, each second image sub-region in the truth map has a mapped first image sub-region in the corresponding contaminated image sample.
Substep 202-3, calculating the pixel point proportion of which the pixel value is the first preset pixel value in each second image subregion, and determining the first image subregion mapped by the second image subregion of which the proportion is greater than or equal to a preset proportion threshold value as a pollution type; and determining the first image subarea mapped by the second image subarea with the occupation ratio smaller than a preset occupation ratio threshold value as a pollution-free type.
Specifically, after each truth map is divided into a plurality of second image sub-regions, the proportion of pixels whose value equals the first preset pixel value can be calculated for each second image sub-region: ratio = (number of pixels in the sub-region whose value equals the first preset pixel value) / (total number of pixels in the sub-region). When the ratio is greater than or equal to the preset proportion threshold, the second image sub-region is determined to be a polluted sub-region, and the first image sub-region it maps to is determined to be of the pollution type. Correspondingly, when the ratio is smaller than the threshold, the second image sub-region is a non-polluted sub-region, and the mapped first image sub-region is of the non-pollution type.
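A sketch of this labeling step under the same n × n division (illustrative only; the text does not fix the proportion threshold, and divisibility of the map size by n is assumed):

```python
import numpy as np

def label_subregions(truth_map: np.ndarray, n: int,
                     fg_value: int = 255,
                     ratio_th: float = 0.5) -> np.ndarray:
    """Label the n x n second image sub-regions of one truth map.

    Returns an (n, n) boolean array: True means the mapped first image
    sub-region is of the pollution type.
    """
    h, w = truth_map.shape[:2]
    sub_h, sub_w = h // n, w // n
    labels = np.zeros((n, n), dtype=bool)
    for row in range(n):
        for col in range(n):
            block = truth_map[row * sub_h:(row + 1) * sub_h,
                              col * sub_w:(col + 1) * sub_w]
            ratio = float((block == fg_value).mean())  # polluted fraction
            labels[row, col] = ratio >= ratio_th
    return labels
```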
In another preferred embodiment of the present application, the step of identifying, for each image sample, each first image sub-region in the image sample as a contamination type or a non-contamination type further includes the following sub-steps:
for each non-contaminated image sample, determining each first image sub-region of the non-contaminated image sample as a non-contaminated type.
Specifically, for each non-pollution image sample, each first image sub-region of the non-pollution image sample can be directly determined as a non-pollution type.
After the data layer identifies that each first image sub-region in each image sample is of a contamination type or a non-contamination type, the data layer may output the contamination type or the non-contamination type of each first image sub-region of the image sample to the convolution layer.
Step 203, the convolutional layer learns the characteristics of the first image sub-region of the pollution type based on the pollution type or the pollution-free type of each first image sub-region of the received image sample, and generates a deep learning model after performing multiple iterative training on the learned characteristics in combination with the pooling layer.
After the convolutional layer receives, from the data layer, the pollution-type or non-pollution-type label of each first image sub-region of each image sample, it can extract the features of the pollution-type first image sub-regions, learn those features automatically, and generate the deep learning model after multiple rounds of iterative training in combination with the pooling layer.
The generated deep learning model can be tested by adopting a test sample set.
In the embodiments of the application, the image sample set used for training comprises a large number of pollution-free image samples and contaminated image samples, so the resulting deep learning model supports lens pollution detection in a variety of scenes and has strong universality.
During model training, each image sample in the image sample set is divided through the convolutional neural network to obtain a plurality of first image sub-regions, and each first image sub-region is identified to be a pollution type or a pollution-free type, so that a deep learning model with higher accuracy can be obtained.
Step 102, generating a lens pollution signal according to the pollution type of the vehicle-mounted camera lens, and sending the lens pollution signal to an application device associated with the vehicle-mounted camera lens, so that the application device carries out pollution treatment according to the lens pollution signal.
In a specific implementation, after pollution detection determines that the vehicle-mounted camera lens is of the pollution type, a lens pollution signal can be generated according to the pollution type and input to an application device associated with the lens, and the application device performs the pollution treatment.
In one embodiment, the application device may comprise a cleaning device, and the contamination treatment may be cleaning of the onboard camera lens.
Specifically, after the cleaning device receives the lens pollution signal, the lens pollution signal is analyzed, and the vehicle-mounted camera lens is cleaned.
If the signal received by the cleaning device is a non-pollution signal generated by judging that the lens is not polluted, the cleaning device does not perform cleaning treatment.
It should be noted that, in addition to the cleaning device for cleaning the lens of the vehicle-mounted camera, if there is a glass in front of the lens, for example, a windshield of a vehicle, the glass in front of the lens can be cleaned at the same time.
In practice, this embodiment may further establish a mapping relationship between each image sub-region divided from the image frame and a corresponding lens sub-region, locate the polluted lens sub-regions according to the pollution-type image sub-regions, and control the cleaning device to clean those lens sub-regions specifically, improving the efficiency and effect of pollution treatment.
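A minimal sketch of such targeted cleaning, assuming a pre-built lookup from image sub-region index to lens sub-region id (the lookup and the downstream cleaning call are assumptions; the patent only states that a mapping relationship is established):

```python
def polluted_lens_subregions(region_probs, region_to_lens: dict,
                             prob_th: float) -> set:
    """Return the lens sub-region ids to clean, given per-region
    pollution probabilities and the image-to-lens mapping."""
    return {region_to_lens[i]
            for i, p in enumerate(region_probs) if p >= prob_th}
```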
In another embodiment, the application device may comprise a prompting device, and the pollution treatment may be sending a pollution prompt.
Specifically, after the prompting device receives the lens pollution signal, the lens pollution signal is analyzed, and a pollution prompt is sent in a voice or visual mode to prompt a user of the pollution condition of the lens.
If the signal received by the prompting device is a pollution-free signal generated by judging that the lens is pollution-free, the prompting device does not prompt.
In another embodiment, the application device may comprise an advanced driver assistance system (ADAS) device, and the pollution treatment may be suspending a driving assistance function based on the vehicle-mounted camera lens.
Specifically, after receiving the lens pollution signal, the ADAS device parses it and suspends the assistance functions that rely on the lens, such as camera-based vehicle detection and tracking, so as to prevent those functions from producing errors or failing because of lens pollution.
If the signal received by the ADAS device is a pollution-free signal generated by judging that the lens is pollution-free, the ADAS device continues to operate the auxiliary function related to the lens.
It should be noted that the embodiments of the present application are not limited to the above application apparatus, and are not limited to the above contamination treatment method, and those skilled in the art may perform the contamination treatment by other methods.
In practice, after the application device completes the relevant pollution treatment, it can return a response message indicating completion, and step 101 is repeated until the vehicle-mounted camera lens is determined to be of the non-pollution type.
In the embodiments of the application, the vehicle-mounted camera lens can obtain the speed of a target vehicle. If that speed is detected to exceed a preset pollution detection starting threshold, a trained deep learning model is used to perform pollution detection on the image frames collected by the lens, so as to determine the pollution type when the lens is polluted. Performing pollution detection through a deep learning model makes it possible to detect lens pollution in time, improves the accuracy of pollution detection, and enriches the functions of the vehicle-mounted camera lens.
When the lens is identified as polluted, a lens pollution signal is generated according to the pollution type and sent to an application device associated with the lens, so that the application device automatically carries out pollution treatment according to the signal; this improves the efficiency of pollution treatment and reduces the impact of lens pollution.
Corresponding to the embodiment of the method, the application also provides an embodiment of the pollution treatment device for the vehicle-mounted camera lens.
The embodiment of the pollution treatment device for a vehicle-mounted camera lens can be applied to vehicle-mounted equipment. The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the device is formed, as a logical apparatus, by the processor of the equipment where it is located reading the corresponding computer program instructions from non-volatile memory into memory and running them. In terms of hardware, fig. 9 shows a hardware structure diagram of the equipment where the apparatus of the present application is located; besides the processor, memory, network interface, and non-volatile memory shown in fig. 9, the equipment may include other hardware according to its actual functions, which is not described again here.
Referring to fig. 10, a block diagram of a structure of an embodiment of a pollution treatment device for a vehicle-mounted camera lens according to an exemplary embodiment of the present application is shown, and the structure may specifically include the following modules:
the pollution detection module 1001 is used for performing pollution detection on image frames acquired by the vehicle-mounted camera lens using a trained deep learning model when the speed of a target vehicle exceeds a preset pollution detection starting threshold, so as to determine the pollution type of the vehicle-mounted camera lens when it is polluted;
a pollution signal generating module 1002, configured to generate a lens pollution signal according to a pollution type of the vehicle-mounted camera lens;
a pollution signal sending module 1003, configured to send the lens pollution signal to an application device associated with the vehicle-mounted camera lens, so that the application device performs pollution treatment according to the lens pollution signal.
In a preferred embodiment of the present application, the pollution treatment includes at least one of the following, or a combination thereof:
sending a pollution prompt;
cleaning the vehicle-mounted camera lens;
suspending a driving assistance function based on the onboard camera lens.
In a preferred embodiment of the present application, the contamination detection module 1001 further includes the following sub-modules:
the image sub-region pollution detection submodule is used for inputting the image frames acquired by the vehicle-mounted camera lens into a trained deep learning model, so that the deep learning model divides each image frame into a plurality of image sub-regions and calculates the pollution probability of each image sub-region;
and the pollution type determination submodule is used for determining, according to the pollution probability of each image sub-region, the pollution type of the vehicle-mounted camera lens when it is polluted.
In a preferred embodiment of the present application, the contamination type determination submodule is specifically configured to:
when the number of image frames input into the deep learning model reaches a preset number, averaging the pollution probabilities of the image sub-regions at the same region position across the preset number of image frames to obtain the region pollution probability of each region position;
and if there is a region position whose region pollution probability is greater than or equal to a preset pollution threshold, determining that the vehicle-mounted camera lens is of the pollution type.
In a preferred embodiment of the present application, an inter-frame interval of the image frames input to the deep learning model is determined according to a vehicle speed of the target vehicle.
Referring to fig. 11, a block diagram of a structure of an embodiment of a pollution treatment system for a vehicle-mounted camera lens according to an exemplary embodiment of the present disclosure is shown, and the structure may specifically include the following modules:
the vehicle speed acquisition module 1101 is used for acquiring the vehicle speed of a target vehicle detected by a vehicle speed sensor and sending the vehicle speed to the lens pollution detection module;
a vehicle-mounted camera lens 1102 for acquiring image frames;
a lens pollution detection module 1103, configured to, when it determines that the vehicle speed of the target vehicle exceeds a preset pollution detection starting threshold, obtain the image frames acquired by the vehicle-mounted camera lens and perform pollution detection on them using a trained deep learning model, so as to determine the pollution type of the vehicle-mounted camera lens when it is polluted;
and the signal generation module 1104 is configured to generate a lens pollution signal according to the pollution type of the vehicle-mounted camera lens, and send the lens pollution signal to an application device associated with the vehicle-mounted camera lens, so that the application device performs pollution treatment according to the lens pollution signal.
In a preferred embodiment of the present application, the application device 1105 includes at least one or a combination of the following:
the prompting device is used for sending a pollution prompt;
the cleaning device is used for cleaning the vehicle-mounted camera lens;
an advanced driver assistance system (ADAS) device for suspending a driving assistance function based on the vehicle-mounted camera lens.
In a preferred embodiment of the present application, the lens contamination detecting module 1103 includes:
the pollution probability determination submodule is used for inputting the image frames acquired by the vehicle-mounted camera lens into a trained deep learning model, so that the deep learning model divides each image frame into a plurality of image sub-regions and calculates the pollution probability of each image sub-region;
and the pollution type determination submodule is used for determining, according to the pollution probability of each image sub-region, the pollution type of the vehicle-mounted camera lens when it is polluted.
In a preferred embodiment of the present application, the contamination type determination submodule is specifically configured to:
when the number of image frames input into the deep learning model reaches a preset number, averaging the pollution probabilities of the image sub-regions at the same region position across the preset number of image frames to obtain the region pollution probability of each region position;
and if there is a region position whose region pollution probability is greater than or equal to a preset pollution threshold, determining that the vehicle-mounted camera lens is of the pollution type.
In a preferred embodiment of the present application, an inter-frame interval of the image frames input to the deep learning model is determined according to a vehicle speed of the target vehicle.
For the device and system embodiments, which correspond substantially to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts.
The above-described system embodiments are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the above-described method embodiments.
The embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method embodiments when executing the program.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Further, the computer may be embedded in another device, e.g., a vehicle-mounted terminal, a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A pollution treatment method for a vehicle-mounted camera lens is characterized by comprising the following steps:
when the speed of a target vehicle exceeds a preset pollution detection starting threshold, performing pollution detection on image frames acquired by the vehicle-mounted camera lens using a trained deep learning model, so as to determine the pollution type of the vehicle-mounted camera lens when it is polluted;
and generating a lens pollution signal according to the pollution type of the vehicle-mounted camera lens, and sending the lens pollution signal to an application device associated with the vehicle-mounted camera lens so as to carry out pollution treatment by the application device according to the lens pollution signal.
2. The method of claim 1, wherein the pollution treatment comprises at least one of the following, or a combination thereof:
sending a pollution prompt;
cleaning the vehicle-mounted camera lens;
suspending a driving assistance function based on the vehicle-mounted camera lens.
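
Again for illustration only (the device interfaces below are assumptions): the three treatments of claim 2 map naturally onto per-device calls, applied singly or in combination.

    def carry_out_pollution_treatment(signal, prompt_device, cleaning_device, adas_device):
        # Any one of these actions, or a combination of them, may be taken.
        prompt_device.show_warning(signal["type"])  # send a pollution prompt
        cleaning_device.start_cleaning()            # clean the vehicle-mounted camera lens
        adas_device.suspend_assistance()            # suspend lens-based driving assistance
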
3. The method according to claim 1 or 2, wherein the performing pollution detection on the image frames acquired by the vehicle-mounted camera lens by using the trained deep learning model, so as to determine the pollution type when the vehicle-mounted camera lens is polluted, comprises:
inputting the image frames acquired by the vehicle-mounted camera lens into the trained deep learning model, dividing the image frames into a plurality of image subregions by the deep learning model, and calculating the pollution probability of each image subregion;
and determining the pollution type when the vehicle-mounted camera lens is polluted, according to the pollution probability of each image subregion.
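
A minimal sketch of the subregion scoring of claim 3, assuming a fixed grid split and a model that returns one pollution probability per cell; the 4x4 grid and the model interface are assumptions, not taken from the patent.

    import numpy as np

    def subregion_pollution_probs(frame, model, grid=(4, 4)):
        # Divide the frame into grid cells and score each cell separately.
        h, w = frame.shape[:2]
        rows, cols = grid
        probs = np.zeros(grid, dtype=np.float32)
        for i in range(rows):
            for j in range(cols):
                cell = frame[i * h // rows:(i + 1) * h // rows,
                             j * w // cols:(j + 1) * w // cols]
                probs[i, j] = model.pollution_probability(cell)  # hypothetical call
        return probs  # pollution probability of each image subregion
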
4. The method according to claim 3, wherein the determining the pollution type when the vehicle-mounted camera lens is polluted, according to the pollution probability of each image subregion, comprises:
when the number of image frames input into the deep learning model reaches a preset number, averaging the pollution probabilities of the image subregions at the same region position across the preset number of image frames, to obtain a region pollution probability for each region position;
and if there is a region position whose region pollution probability is greater than or equal to a preset pollution threshold, determining the pollution type of the vehicle-mounted camera lens.
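
Continuing the same hypothetical sketch, the frame averaging and thresholding of claim 4 reduce to a per-position mean over the collected probability grids followed by a single comparison; the threshold value of 0.6 is an assumption.

    import numpy as np

    def lens_is_polluted(prob_maps, pollution_threshold=0.6):
        # prob_maps holds one subregion-probability grid per frame, all of the
        # same shape, collected once the preset number of frames is reached.
        region_probs = np.mean(np.stack(prob_maps), axis=0)  # mean per region position
        # Polluted if any region position meets or exceeds the preset threshold.
        return bool(np.any(region_probs >= pollution_threshold))
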
5. The method of claim 4, wherein the inter-frame spacing of the image frames input to the deep learning model is determined according to the vehicle speed of the target vehicle.
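
Claim 5 does not specify how vehicle speed maps to inter-frame spacing. One plausible reading, given here purely as an assumption, is to keep the distance travelled between sampled frames roughly constant, so a stain fixed on the lens persists across samples while the background scene changes.

    def frame_interval(speed_kmh, fps=30.0, target_gap_m=2.0):
        # Hypothetical rule: sample roughly one frame per target_gap_m of travel.
        speed_mps = max(speed_kmh / 3.6, 0.1)  # floor avoids division by zero
        return max(1, round(fps * target_gap_m / speed_mps))

Under these assumed defaults, 60 km/h yields a spacing of about 4 frames and 120 km/h about 2 frames, so sampling stays roughly even in distance travelled.
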
6. A pollution treatment system for a vehicle-mounted camera lens, characterized in that the system comprises:
a vehicle speed acquisition module, used for acquiring the vehicle speed of a target vehicle detected by a vehicle speed sensor and sending the vehicle speed to a lens pollution detection module;
the vehicle-mounted camera lens, used for acquiring image frames;
the lens pollution detection module, used for acquiring the image frames acquired by the vehicle-mounted camera lens when it determines that the vehicle speed of the target vehicle exceeds a preset pollution detection starting threshold, and for performing pollution detection on the acquired image frames by using a trained deep learning model, so as to determine the pollution type when the vehicle-mounted camera lens is polluted;
and a signal generation module, used for generating a lens pollution signal according to the pollution type of the vehicle-mounted camera lens and sending the lens pollution signal to an application device associated with the vehicle-mounted camera lens, so that the application device carries out pollution treatment according to the lens pollution signal.
7. The system according to claim 6, wherein the application device comprises at least one of, or a combination of, the following:
a prompting device, used for sending a pollution prompt;
a cleaning device, used for cleaning the vehicle-mounted camera lens;
an advanced driver assistance system (ADAS) device, used for suspending a driving assistance function based on the vehicle-mounted camera lens.
8. The system of claim 6 or 7, wherein the lens pollution detection module comprises:
a pollution probability determination submodule, used for inputting the image frames acquired by the vehicle-mounted camera lens into the trained deep learning model, so that the deep learning model divides the image frames into a plurality of image subregions and calculates the pollution probability of each image subregion;
and a pollution type determination submodule, used for determining the pollution type when the vehicle-mounted camera lens is polluted, according to the pollution probability of each image subregion.
9. The system of claim 8, wherein the pollution type determination submodule is specifically configured to:
when the number of image frames input into the deep learning model reaches a preset number, average the pollution probabilities of the image subregions at the same region position across the preset number of image frames, to obtain a region pollution probability for each region position;
and, if there is a region position whose region pollution probability is greater than or equal to a preset pollution threshold, determine the pollution type of the vehicle-mounted camera lens.
10. The system of claim 9, wherein the inter-frame spacing of the image frames input to the deep learning model is determined according to the vehicle speed of the target vehicle.
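
The system of claims 6 to 10 mirrors the method claims as cooperating modules. Purely to illustrate that module split (all class and method names below are hypothetical), the components might be wired together as follows.

    class PollutionTreatmentSystem:
        def __init__(self, speed_sensor, camera, detector, devices,
                     start_threshold_kmh=30.0):
            self.speed_sensor = speed_sensor  # source for the vehicle speed acquisition module
            self.camera = camera              # vehicle-mounted camera lens
            self.detector = detector          # lens pollution detection module
            self.devices = devices            # application devices of claim 7
            self.start_threshold_kmh = start_threshold_kmh

        def tick(self):
            # Vehicle speed acquisition: read the sensor and gate detection on it.
            if self.speed_sensor.read_kmh() <= self.start_threshold_kmh:
                return
            # Lens pollution detection: score the newest frame with the trained model.
            pollution_type = self.detector.detect(self.camera.grab_frame())
            if pollution_type is not None:
                # Signal generation: build the lens pollution signal and distribute it.
                signal = {"event": "lens_pollution", "type": pollution_type}
                for device in self.devices:
                    device.handle(signal)
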
CN201910090634.3A 2019-01-30 2019-01-30 Pollution treatment method and system for vehicle-mounted camera lens Pending CN111583169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910090634.3A CN111583169A (en) 2019-01-30 2019-01-30 Pollution treatment method and system for vehicle-mounted camera lens

Publications (1)

Publication Number Publication Date
CN111583169A (en) 2020-08-25

Family

ID=72116695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910090634.3A Pending CN111583169A (en) 2019-01-30 2019-01-30 Pollution treatment method and system for vehicle-mounted camera lens

Country Status (1)

Country Link
CN (1) CN111583169A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529639A (en) * 2012-07-03 2014-01-22 歌乐牌株式会社 Lens-attached matter detector, lens-attached matter detection method, and vehicle system
CN104412573A (en) * 2012-07-03 2015-03-11 歌乐株式会社 On-board device
CN104509090A (en) * 2012-07-27 2015-04-08 歌乐牌株式会社 Vehicle-mounted image recognition device
CN106415598A (en) * 2014-05-27 2017-02-15 罗伯特·博世有限公司 Detection, identification, and mitigation of lens contamination for vehicle mounted camera systems
CN104539939A (en) * 2014-12-17 2015-04-22 惠州Tcl移动通信有限公司 Lens cleanliness detection method and system based on mobile terminal
US20180096474A1 (en) * 2015-07-02 2018-04-05 Continental Automotive Gmbh Detection of lens contamination using expected edge trajectories
CN107194409A (en) * 2016-03-15 2017-09-22 罗伯特·博世有限公司 Detect method, equipment and detection system, the grader machine learning method of pollution
CN106657716A (en) * 2016-12-29 2017-05-10 惠州华阳通用电子有限公司 Field of view clearing method and device for electronic in-car rearview mirror

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HOU, Chunyu: "Research on a Stain Detection Algorithm Based on the Edge Defocus Model" (基于边缘散焦模型的污点检测算法的研究) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261403A (en) * 2020-09-22 2021-01-22 深圳市豪恩汽车电子装备股份有限公司 Device and method for detecting dirt of vehicle-mounted camera
CN112261403B (en) * 2020-09-22 2022-06-28 深圳市豪恩汽车电子装备股份有限公司 Device and method for detecting dirt of vehicle-mounted camera
CN113099128A (en) * 2021-04-08 2021-07-09 杭州竖品文化创意有限公司 Video processing method and video processing system
CN113705790A (en) * 2021-08-31 2021-11-26 湖北航天技术研究院总体设计所 Window mirror cleaning method and device
CN116156307A (en) * 2021-11-16 2023-05-23 中移(上海)信息通信科技有限公司 Camera cleaning method, device and equipment

Similar Documents

Publication Publication Date Title
CN111583169A (en) Pollution treatment method and system for vehicle-mounted camera lens
CN112417953B (en) Road condition detection and map data updating method, device, system and equipment
WO2021063228A1 (en) Dashed lane line detection method and device, and electronic apparatus
CN108877269B (en) Intersection vehicle state detection and V2X broadcasting method
WO2019136491A3 (en) Map and environment based activation of neural networks for highly automated driving
CN110796007B (en) Scene recognition method and computing device
CN110858405A (en) Attitude estimation method, device and system of vehicle-mounted camera and electronic equipment
CN109977776A (en) Lane line detection method, device, and vehicle-mounted device
CN113112524B (en) Track prediction method and device for moving object in automatic driving and computing equipment
JP7052660B2 (en) Learning image sorting device
CN112339770B (en) Vehicle-mounted device and method for providing traffic signal lamp information
US11270136B2 (en) Driving support device, vehicle, information providing device, driving support system, and driving support method
CN113743179A (en) Road obstacle detection device, method and recording medium
CN115761668A (en) Camera stain recognition method and device, vehicle and storage medium
CN113516099A (en) Traffic behavior recognition method and device, electronic equipment and storage medium
CN111695627A (en) Road condition detection method and device, electronic equipment and readable storage medium
CN114264310B (en) Positioning and navigation method, device, electronic equipment and computer storage medium
CN112528944B (en) Image recognition method and device, electronic equipment and storage medium
EP3349201B1 (en) Parking assist method and vehicle parking assist system
JP7224431B2 (en) Method and apparatus for determining vehicle position
CN115762153A (en) Method and device for detecting backing up
US20210231459A1 (en) Apparatus and method for collecting data for map generation
CN112598314B (en) Method, device, equipment and medium for determining perception confidence of intelligent driving automobile
CN110741379A (en) Method for determining the type of road on which a vehicle is travelling
CN114373081A (en) Image processing method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination