CN110544232A - Detection system, terminal and storage medium for lens attachments - Google Patents

Detection system, terminal and storage medium for lens attachments

Info

Publication number
CN110544232A
CN110544232A (application CN201910679999.XA)
Authority
CN
China
Prior art keywords
image
area
module
evaluation
fuzzy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910679999.XA
Other languages
Chinese (zh)
Inventor
罗亮
唐锐
张笑东
于璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zongmu Technology Shanghai Co Ltd
Original Assignee
Zongmu Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zongmu Technology Shanghai Co Ltd filed Critical Zongmu Technology Shanghai Co Ltd
Priority to CN201910679999.XA priority Critical patent/CN110544232A/en
Publication of CN110544232A publication Critical patent/CN110544232A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/20: Image enhancement or restoration using local operators
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/73: Deblurring; Sharpening
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20024: Filtering details
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a detection system, terminal and storage medium for lens attachments, comprising the following steps. S01: image segmentation: segment suspected regions in the input image. S02: feature extraction: extract blurred contour regions caused by attachments, statistically compute multiple sharpness evaluation indexes within each contour, and judge from the combined index results whether the region is an attachment. S03: early-warning judgment: accumulate marks for the judged regions and trigger an early warning for results whose accumulated count exceeds the alarm threshold. The invention aims to keep the driver-assistance system operating normally and the user's vehicle safe by detecting, and alarming in time on, lenses occluded by water drops of various forms or contaminated by dark stains, light spots, refracted light and the like.

Description

Detection system, terminal and storage medium for lens attachments
Technical Field
The invention relates to the field of automotive electronics, and in particular to a detection system, terminal and storage medium for lens attachments.
Background
In the prior art, Automated Valet Parking (AVP) has become one of the most popular technologies in the autonomous-driving field and an important milestone on the road to mass-produced autonomous driving. As a complete unmanned-vehicle system, an AVP system drives or parks the vehicle at low speed within a confined area such as a parking lot or the surrounding roadway. As an extension of parking assistance, it is expected to be one of the first fully automatic driving functions to be commercialized.
While a vehicle is running, road conditions, weather and other incidental factors frequently leave stains and rainwater on the lens, and such conditions can be destructive to the normal operation of the AVP system. The camera image therefore needs to be checked during driving to judge whether the imaging result is trustworthy.
Disclosure of Invention
To solve the above and other potential technical problems, the invention provides a detection system, terminal and storage medium for lens attachments, which detect, and raise a timely alarm on, lenses occluded by water drops of different forms, such as condensed droplets and blurry water stains, or contaminated by dark stains, light spots, refracted light and the like, so as to keep the assistance system operating normally and the user's vehicle safe.
A method for detecting lens attachments includes the following steps:
S01: image segmentation: segment suspected regions in the input image;
S02: feature extraction: extract blurred contour regions caused by attachments, statistically compute one or more evaluation indexes within each contour, and judge from the combined index results whether the region is an attachment;
S03: early-warning judgment: accumulate marks for the judged regions and trigger an early warning for results whose accumulated count exceeds the alarm threshold.
Further, in step S01, image segmentation may be implemented either with a suspected-region segmentation algorithm or with a deep-learning method.
Further, in step S01, image segmentation with the suspected-region segmentation algorithm includes the following steps:
S011: down-sample the image;
S012: extract the blur difference map;
S013: superimpose multiple maps;
S014: segment the suspected-region image using one or more of binarization, filtering, morphology and thresholding.
Further, in step S011, the down-sampling operation is as follows: assume the originally captured image has N x M pixels and the down-sampling coefficient is k; the down-sampled image is formed by taking every k-th pixel of every k-th row and column of the original image. Down-sampling reduces the amount of image-processing computation and keeps the processing real-time.
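As a non-limiting illustration, this stride-k decimation can be written as a one-line array slice. The sketch below assumes a NumPy image array; the function name and signature are illustrative, not part of the original disclosure.

```python
import numpy as np

def downsample(img: np.ndarray, k: int) -> np.ndarray:
    """Keep every k-th pixel of every k-th row and column.

    For an N x M input this yields roughly an (N//k) x (M//k) image,
    trading resolution for processing speed.
    """
    return img[::k, ::k]
```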
Further, in step S012, the blur difference map is extracted as follows: blur the captured image with a filter operator, subtract the blurred image from the original, and take the absolute value to obtain the blur difference map.
Let x_src be the original image and x_blur the blur-smoothed image; the blur difference map of the current frame is defined as deltax = |x_src - x_blur|. The rationale is that an image region with rainwater attached is already blurred and therefore insensitive to a filtering-and-smoothing algorithm; it changes less than a rain-free region, so rain-attached image regions can be distinguished.
Here deltax is the blur difference map of the current frame, x_src is the original current frame, and x_blur is the current frame after filtering and smoothing.
The blurring can use one or more of Gaussian filtering, median filtering and mean filtering for smoothing.
The comparison of median, mean and Gaussian filter kernels across kernel sizes is shown in Table 1 below:
TABLE 1
As Table 1 shows, for the current image size the mean filter with kernel size 5 gives the best result.
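The blur-difference computation deltax = |x_src - x_blur| can be sketched as follows, assuming OpenCV and an 8-bit grayscale frame; the mean (box) filter with kernel size 5 follows the Table 1 observation above, while the function name is illustrative.

```python
import cv2
import numpy as np

def blur_difference(src: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Compute deltax = |src - blurred| for one frame.

    Mean filtering is used because the text reports it performs best at
    kernel size 5; cv2.medianBlur or cv2.GaussianBlur are drop-in
    alternatives.
    """
    blurred = cv2.blur(src, (ksize, ksize))   # mean (box) filtering
    return cv2.absdiff(src, blurred)          # element-wise |src - blurred|
```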
Further, the multi-map superposition in step S013 is as follows: accumulate n frames of the blur difference maps obtained in S012, x_accum = deltax_k + deltax_(k+1) + ... + deltax_(k+n),
where x_accum is the fusion feature map accumulated from frame k to frame k+n, deltax_k is the blur difference map at frame k, and deltax_(k+n) is the blur difference map at frame k+n.
The accumulated fusion feature map exploits the fact that the form and position of rainwater change little over a short time; accumulating consecutive blur difference maps enhances the contrast between rain-blurred regions and the background and highlights the rain-attached regions of the image.
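A minimal sketch of this n-frame accumulation, assuming a list of equally sized blur difference maps; rescaling back to 8 bits for the later thresholding step is an assumption of this sketch, not a requirement stated in the text.

```python
import cv2
import numpy as np

def accumulate(diff_maps):
    """Sum consecutive blur difference maps into one fusion feature map:
    x_accum = deltax_k + deltax_(k+1) + ... + deltax_(k+n).
    Accumulating in float avoids overflow of 8-bit frames."""
    acc = np.zeros_like(diff_maps[0], dtype=np.float32)
    for d in diff_maps:
        acc += d.astype(np.float32)
    # rescale to 0..255 so the result can feed an 8-bit thresholding step
    return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```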
Further, the binarization, neighborhood filtering and morphological processing of step S014 are as follows:
Binarization of the multi-frame fusion feature map converts the grayscale map to a binary map using an automatic thresholding algorithm, dividing the image into regions of interest suspected of carrying attachments and attachment-free regions;
neighborhood filtering counts the pixel distribution in each neighborhood of the binary map and removes isolated noise points, reducing their influence on the attachment regions of interest;
morphological filtering erodes the binary map to remove small noise regions, then dilates it to fill the holes inside suspected regions and repair their area.
Further, in step S01, image segmentation with the deep-learning method includes the following steps:
Preprocessing: down-sample the image to M x N and convert the stored image data to the three-channel BGR format.
Image segmentation: feed the input image into a semantic-segmentation convolutional neural network, which outputs a class for each pixel through forward propagation, yielding the pixel set of the suspected attachment region.
Further, when segmentation is done by deep learning, the network is a semantic-segmentation convolutional neural network; the feature-extraction backbone can be resnet18, squeezenet1.1, MobileNet or a similar network. The semantic-segmentation deconvolution part follows the PSPnet framework, fuses the last four backbone feature maps at different scales, and outputs a segmentation map of the same size as the original image.
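A sketch of the inference path, assuming PyTorch; `model` is a placeholder for the PSPnet-style network with a resnet18/squeezenet1.1/MobileNet backbone and is not defined here, and the M x N size is illustrative.

```python
import cv2
import numpy as np
import torch

def segment_suspected_regions(frame_bgr: np.ndarray,
                              model: torch.nn.Module,
                              size=(480, 640)) -> np.ndarray:
    """Down-sample to M x N, keep the three-channel BGR layout, run the
    semantic-segmentation CNN, and return the per-pixel class map."""
    img = cv2.resize(frame_bgr, (size[1], size[0]))
    x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = model(x)                              # (1, classes, M, N)
    return logits.argmax(dim=1).squeeze(0).numpy()     # suspected-region mask
```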
Further, in the feature extraction of step S02, contour extraction works as follows:
contours are extracted from the processed fusion feature map, yielding pixel sets for the different contours; features are then computed per contour region to evaluate each set's sharpness and the confidence with which the region is judged rain-attached.
Further, in the feature extraction of step S02, the statistical computation of multiple sharpness evaluation indexes within a contour is as follows:
one or more of image statistical features, shape/texture features and sharpness evaluation features are computed over the segmented contour, yielding evaluation index values of different types.
Image statistical features: grayscale (Gray), gradient (Grads), second-order gradient (Laplacian), and mathematical statistics such as mean, variance, maximum and minimum (mean/variance/max/min).
Shape/texture features: roundness and area (Round/Area), wavelet transform operator (Wavelet_f).
Sharpness evaluation features: Variance, EVA, Hist, second-order gradient (Laplacian).
Value = F(area, vector).
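As an illustration of building the per-region feature vector F(area, vector), the sketch below computes a handful of the listed index families (mean/variance/max/min, gradient, Laplacian, roundness) for one contour; the exact feature set and names are assumptions of this sketch.

```python
import cv2
import numpy as np

def contour_features(gray: np.ndarray, contour: np.ndarray) -> np.ndarray:
    """One feature vector per contour region: intensity statistics,
    gradient energy, second-order gradient and a roundness measure."""
    mask = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=-1)  # filled region
    pixels = gray[mask == 255].astype(np.float32)
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, closed=True)
    roundness = 4 * np.pi * area / (perimeter ** 2 + 1e-6)    # 1.0 = circle
    grads = cv2.Sobel(gray, cv2.CV_32F, 1, 0)[mask == 255]
    laplac = cv2.Laplacian(gray, cv2.CV_32F)[mask == 255]
    return np.array([pixels.mean(), pixels.var(), pixels.min(), pixels.max(),
                     np.abs(grads).mean(), np.abs(laplac).mean(), roundness])
```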
Further, when statistics are computed on the segmented contours in step S02 to obtain the different evaluation index values, two evaluation modes are available: confidence-value accumulation, and contour-region classification by a classifier.
Further, when the classifier is used to classify contour regions, N evaluation index values are computed for a contour region and combined into a feature vector for that region; feature vectors gathered from rain regions are fed to a classifier as training samples. The classifier can be a decision tree, SVM, BP network or similar, and classifies each segmented contour region as a rain region or not.
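A minimal training sketch for the classifier route, assuming scikit-learn and dummy feature vectors; an SVM is shown, but the text equally allows a decision tree or a BP (MLP) network.

```python
import numpy as np
from sklearn.svm import SVC

def train_attachment_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    """Fit a classifier on per-contour feature vectors
    (label 1 = rain/attachment region, label 0 = clean region)."""
    clf = SVC(kernel="rbf")
    clf.fit(features, labels)
    return clf

# usage sketch with dummy data: 100 regions, 7 features each
rng = np.random.default_rng(0)
clf = train_attachment_classifier(rng.normal(size=(100, 7)),
                                  rng.integers(0, 2, size=100))
```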
Further, when the confidence-value accumulation mode is used for judgment, a contour region has N evaluation indexes, each with its own selection threshold expressing whether that index value alone would mark the region as a rain region;
each of the N evaluation indexes is computed for the contour region and compared against its own selection threshold: if an index exceeds its threshold, the region's confidence gains one point; if it does not, the region is either eliminated or its confidence is left unchanged;
finally, the contour regions whose evaluation indexes exceed the selection thresholds are counted, and the position and area of those regions are marked.
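The confidence-value accumulation reduces to threshold voting, as in this sketch (plain Python, illustrative names):

```python
def region_confidence(index_values, thresholds):
    """Score one contour region: each of the N evaluation indexes that
    exceeds its own selection threshold adds one point of confidence;
    regions scoring zero can be eliminated."""
    return sum(1 for v, t in zip(index_values, thresholds) if v > t)

# usage sketch: three indexes, two of which pass their thresholds
score = region_confidence([0.8, 0.2, 5.1], [0.5, 0.4, 3.0])  # -> 2
```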
Further, the region accumulation marking of step S03 works as follows: the detection image is gridded into M x N cells; the multi-frame accumulated output is mapped onto the corresponding grid cells, the number of attachments per cell is counted, and a quantitative occlusion report is produced.
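A sketch of the gridded occlusion count, assuming a binary attachment mask from the earlier steps; the grid shape and pixel-count criterion are illustrative.

```python
import numpy as np

def grid_occlusion_counts(mask: np.ndarray, m: int, n: int) -> np.ndarray:
    """Split the detection image into an m x n grid and count attachment
    pixels per cell; cells whose running count exceeds the alarm
    threshold over successive frames trigger the early warning."""
    h, w = mask.shape
    counts = np.zeros((m, n), dtype=np.int64)
    for i in range(m):
        for j in range(n):
            cell = mask[i * h // m:(i + 1) * h // m,
                        j * w // n:(j + 1) * w // n]
            counts[i, j] = int((cell > 0).sum())
    return counts
```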
A lens-attachment detection system comprises the following modules:
an image segmentation module, which down-samples, blurs and differences the captured image so as to separate contour regions suspected of attachment-induced blur from contour regions without such blur;
a feature extraction module, which extracts the contour regions suspected of attachment-induced blur, statistically computes multiple sharpness evaluation indexes within each contour, and judges from the combined index results whether the region is an attachment;
an early-warning judgment module, which counts the regions judged by the feature extraction module, marks the accumulated values, and triggers an alarm for regions whose accumulated count exceeds the alarm threshold.
Furthermore, the image segmentation module comprises a down-sampling module, a blur processing module, a superposition module and a post-processing module;
the down-sampling module reduces image pixels to keep image processing real-time;
the blur processing module blurs the captured image with a filter operator, subtracts the blurred image from the original and takes the absolute value to obtain the blur difference map, which separates contour regions suspected of attachment-induced blur from contour regions without such blur;
the superposition module fuses consecutive multi-frame blur difference maps into a fusion feature map, exploiting the fact that attachment form and position change little over a short time, so that the accumulated maps enhance the contrast between blurred attachment regions and the background and highlight attachment contours on the image;
the post-processing module comprises a binarization module, a neighborhood filtering module and a morphology module, and removes isolated noise points, eliminates small noise regions, fills holes via dilation and repairs the region area.
A mobile terminal, which may be a vehicle-mounted terminal or a mobile-phone terminal:
the vehicle-mounted terminal can execute the above lens-attachment detection method or carry the lens-attachment detection system;
the mobile-phone terminal can execute the above lens-attachment detection method or carry the lens-attachment detection system.
A computer storage medium storing a computer program that implements the above lens-attachment detection method.
As described above, the invention has the following beneficial effects:
When driving in rain, rainwater adheres to the camera surface in various forms, and stains or defects in the lens may also occlude it. Such conditions degrade the camera image and reduce the accuracy and effectiveness of downstream algorithms. To keep the AVP system operating normally and the user's vehicle safe, such lens occlusion and contamination must be detected and alarmed in time.
Drawings
To illustrate the technical solutions in the embodiments of the invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows images of several of the problem situations described in the background.
FIG. 2 is a schematic view of the image segmentation process according to the present invention.
FIG. 3 is a schematic diagram of the early warning step of the present invention.
FIG. 4 is a diagram illustrating neighborhood filtering according to the present invention.
Fig. 5 shows an enlarged view of a partial region outline in Fig. 2 and a comparison after hole filling.
Fig. 6 is a schematic diagram of Fig. 2 after feature extraction.
Fig. 7 is a schematic diagram of the lens-attachment detection result in an out-of-focus situation using the method of the present invention.
Fig. 8 is a schematic diagram of the lens-attachment detection result in the presence of water mist using the method of the present invention.
Fig. 9 is a schematic diagram of the lens-attachment detection result in a dirty condition using the method of the present invention.
Fig. 10 is a schematic diagram of the lens-attachment detection result in an in-focus situation using the method of the present invention.
Fig. 11 is a schematic diagram of the lens-attachment detection result in a large-area scene.
Fig. 12 shows a comparison of the original image, the feature-fusion map and the feature-extraction map of the same frame when deep learning is used as the image segmentation method in the flow of the present invention.
Fig. 13 shows the same comparison for another embodiment of the present invention.
Detailed Description
The embodiments of the invention are described below with reference to specific examples; those skilled in the art will readily understand other advantages and effects of the invention from this disclosure. The invention can also be implemented or applied through other, different embodiments, and the details herein can be modified in various ways without departing from its spirit and scope. Features of the following embodiments and examples may be combined with one another when not in conflict.
It should be understood that the structures, ratios and sizes shown in the drawings accompany the disclosure only for the understanding of those skilled in the art and do not limit the conditions under which the invention can be practiced; structural modifications, changes of proportion or adjustments of size that do not affect the efficacy or attainable purpose of the invention remain within its scope. Terms such as "upper", "lower", "left", "right", "middle" and "a" are used for clarity of description only and do not limit the implementable scope of the invention; changes or adjustments of their relative relationships without substantial technical change are likewise regarded as within that scope.
With reference to Figs. 1 to 13:
A method for detecting lens attachments includes the following steps:
S01: image segmentation: segment suspected regions in the input image;
S02: feature extraction: extract blurred contour regions caused by attachments, statistically compute multiple sharpness evaluation indexes within each contour, and judge from the combined index results whether the region is an attachment;
S03: early-warning judgment: accumulate marks for the judged regions and trigger an early warning for results whose accumulated count exceeds the alarm threshold.
As a preferred embodiment, in step S01, image segmentation may be implemented either with a suspected-region segmentation algorithm or with a deep-learning method.
As a preferred embodiment, in step S01, image segmentation with the suspected-region segmentation algorithm includes the following steps:
S011: down-sample the image;
S012: extract the blur difference map;
S013: superimpose multiple maps;
S014: segment the suspected-region image using one or more of filtering, morphology and thresholding.
In step S011, the down-sampling operation is as follows: assume the originally captured image has N x M pixels and the down-sampling coefficient is k; the down-sampled image is formed by taking every k-th pixel of every k-th row and column of the original image. Down-sampling reduces the amount of image-processing computation and keeps the processing real-time.
As a preferred embodiment, in step S012, the blur difference map is extracted as follows: blur the captured image with a filter operator, subtract the blurred image from the original, and take the absolute value to obtain the blur difference map.
Let x_src be the original image and x_blur the blur-smoothed image; the blur difference map of the current frame is defined as deltax = |x_src - x_blur|. The rationale is that an image region with rainwater attached is already blurred and therefore insensitive to a filtering-and-smoothing algorithm; it changes less than a rain-free region, so rain-attached image regions can be distinguished.
Here deltax is the blur difference map of the current frame, x_src is the original current frame, and x_blur is the current frame after filtering and smoothing.
The blurring can use one or more of Gaussian filtering, median filtering and mean filtering for smoothing.
The comparison of median, mean and Gaussian filter kernels across kernel sizes is shown in Table 1 below:
TABLE 1
As Table 1 shows, for the current image size the mean filter with kernel size 5 gives the best result.
As a preferred embodiment, the multi-map superposition in step S013 is as follows: accumulate n frames of the blur difference maps obtained in S012, x_accum = deltax_k + deltax_(k+1) + ... + deltax_(k+n),
where x_accum is the fusion feature map accumulated from frame k to frame k+n, deltax_k is the blur difference map at frame k, and deltax_(k+n) is the blur difference map at frame k+n.
The accumulated fusion feature map exploits the fact that the form and position of rainwater change little over a short time; accumulating consecutive blur difference maps enhances the contrast between rain-blurred regions and the background and highlights the rain-attached regions of the image.
As a preferred embodiment, the binarization, neighborhood filtering and morphological processing of step S014 are as follows:
binarization of the multi-frame fusion feature map converts the grayscale map to a binary map using an automatic thresholding algorithm, dividing the image into regions of interest with attached rainwater and rain-free regions; neighborhood filtering counts the pixel distribution in each neighborhood of the binary map and removes isolated noise points; morphological filtering erodes the binary map to remove small noise regions, then dilates it to fill holes and repair the region area.
In step S01, image segmentation with the deep-learning method includes the following steps:
Preprocessing: down-sample the image to M x N and convert the stored image data to the three-channel BGR format.
Image segmentation: feed the input image into a semantic-segmentation convolutional neural network, which outputs a class for each pixel through forward propagation, yielding the pixel set of the suspected rain region.
As a preferred embodiment, when segmentation is done by deep learning, the network is a semantic-segmentation convolutional neural network; the feature-extraction backbone can be resnet18, squeezenet1.1, MobileNet or a similar network. The semantic-segmentation deconvolution part follows the PSPnet framework, fuses the last four backbone feature maps at different scales, and outputs a segmentation map of the same size as the original image.
As a preferred embodiment, in step S02, contour extraction works as follows:
contours are extracted from the processed fusion feature map, yielding pixel sets for the different contours; features are then computed per contour region to evaluate each set's sharpness and the confidence with which the region is judged rain-attached.
As a preferred embodiment, in the feature extraction of step S02, the statistical computation of multiple sharpness evaluation indexes within a contour is as follows:
one or more of image statistical features, shape/texture features and sharpness evaluation features are computed over the segmented contour, yielding evaluation index values of different types.
Image statistical features: grayscale (Gray), gradient (Grads), second-order gradient (Laplacian), and mathematical statistics such as mean, variance, maximum and minimum (mean/variance/max/min).
Shape/texture features: roundness and area (Round/Area), wavelet transform operator (Wavelet_f).
Sharpness evaluation features: Variance, EVA, Hist, second-order gradient (Laplacian).
Value = F(area, vector).
As a preferred embodiment, when statistics are computed on the segmented contours in step S02 to obtain the different evaluation index values, two evaluation modes are available: confidence-value accumulation, and contour-region classification by a classifier.
As a preferred embodiment, when a classifier is used to classify contour regions, N evaluation index values are computed for a contour region and combined into a feature vector for that region; feature vectors gathered from rain regions are fed to a classifier as training samples. The classifier can be a decision tree, SVM, BP network or similar, and classifies each segmented contour region as a rain region or not.
As a preferred embodiment, when the confidence-value accumulation mode is used, a contour region has N evaluation indexes, each with its own selection threshold expressing whether that index value alone would mark the region as a rain region;
each of the N evaluation indexes is computed for the contour region and compared against its own selection threshold: if an index exceeds its threshold, the region's confidence gains one point; if it does not, the region is either eliminated or its confidence is left unchanged;
finally, the contour regions whose evaluation indexes exceed the selection thresholds are counted, and the position and area of those regions are marked.
As a preferred embodiment, the region accumulation marking of step S03 works as follows: the detection image is gridded into M x N cells; the multi-frame accumulated output is mapped onto the corresponding grid cells, the number of attachments per cell is counted, and a quantitative occlusion report is produced.
A lens-attachment detection system comprises the following modules:
an image segmentation module, which down-samples, blurs and differences the captured image so as to separate contour regions suspected of attachment-induced blur from contour regions without such blur;
a feature extraction module, which extracts the contour regions suspected of attachment-induced blur, statistically computes multiple sharpness evaluation indexes within each contour, and judges from the combined index results whether the region is an attachment;
an early-warning judgment module, which counts the regions judged by the feature extraction module, marks the accumulated values, and triggers an alarm for regions whose accumulated count exceeds the alarm threshold.
As a preferred embodiment, the image segmentation module further comprises a down-sampling module, a blur processing module, a superposition module and a post-processing module;
the down-sampling module reduces image pixels to keep image processing real-time;
the blur processing module blurs the captured image with a filter operator, subtracts the blurred image from the original and takes the absolute value to obtain the blur difference map, which separates contour regions suspected of attachment-induced blur from contour regions without such blur;
the superposition module fuses consecutive multi-frame blur difference maps into a fusion feature map, exploiting the fact that attachment form and position change little over a short time, so that the accumulated maps enhance the contrast between blurred attachment regions and the background and highlight attachment contours on the image;
the post-processing module comprises a binarization module, a neighborhood filtering module and a morphology module, and removes isolated noise points, eliminates small noise regions, fills holes via dilation and repairs the region area.
As a preferred embodiment, the technical parameters of the lens-attachment detection system are shown in Table 2:
TABLE 2
As a preferred embodiment, deployment of the lens-attachment detection system requires the following:
The system can run independently in the background or be triggered alongside other algorithms, using a multi-frame interval detection method. Its detection inputs include the original images of the four-way cameras and vehicle-body CAN signals: speed and ambient brightness. When the vehicle body moves, the system is triggered to inspect the four camera images.
As a preferred embodiment, the detection and alarm performance requirements of the lens-attachment detection system are:
(1) Application range: raindrops and stains of different forms can be detected indoors and outdoors and under different road conditions, and severe water mist and lens defects can also be detected;
(2) Stability: the system is robust to weather and environmental changes and has good reliability;
(3) Algorithm running time and resource usage meet the requirements.
As a preferred embodiment, the specific application scenarios and special scenarios of the lens-attachment detection system are described in Table 3 below:
TABLE 3
A mobile terminal, which may be a vehicle-mounted terminal or a mobile-phone terminal:
the vehicle-mounted terminal can execute the above lens-attachment detection method or carry the lens-attachment detection system;
the mobile-phone terminal can execute the above lens-attachment detection method or carry the lens-attachment detection system.
A computer storage medium storing a computer program that implements the above lens-attachment detection method.
As a preferred embodiment, this embodiment further provides a terminal device capable of executing a program, such as a smart phone, tablet computer, notebook computer, desktop computer, rack server, blade server, tower server or rack-mount server (a standalone server, or a cluster of multiple servers). The terminal device of this embodiment at least includes, but is not limited to, a memory and a processor communicatively coupled through a system bus. Note that not every illustrated component must be implemented; alternative implementations of the lens-attachment detection method may use more or fewer components.
As a preferred embodiment, the memory (i.e., a readable storage medium) includes flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments, the memory may be an internal storage unit of the computer device, such as its hard disk or internal memory. In other embodiments, the memory may be an external storage device of the computer device, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or Flash Card provided on the device. Of course, the memory may also include both internal and external storage. In this embodiment, the memory is generally used to store the operating system and the various application software installed on the computer device, for example the program code of the lens-attachment detection method of this embodiment, and may also temporarily store data that has been output or is to be output.
This embodiment also provides a computer-readable storage medium, such as flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, a server, an app store, and the like, on which a computer program is stored that implements the corresponding functions when executed by a processor. The computer-readable storage medium of this embodiment stores the lens-attachment detection program, which, when executed by a processor, implements the lens-attachment detection method of the embodiments.
The foregoing embodiments merely illustrate the principles and utilities of the invention and do not limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the invention.

Claims (14)

1. A detection system for lens attachments, characterized by comprising the following modules:
an image segmentation module, which down-samples, blurs and differences the captured image so as to separate contour regions suspected of attachment-induced blur from contour regions without such blur;
a feature extraction module, which extracts the contour regions suspected of attachment-induced blur, statistically computes multiple evaluation indexes within each contour, and judges from the combined index results whether the region is an attachment;
an early-warning judgment module, which counts the regions judged by the feature extraction module, marks the accumulated values, and triggers an alarm for regions whose accumulated count exceeds the alarm threshold.
2. The detection system for lens attachments according to claim 1, wherein the image segmentation module further comprises a down-sampling module, a blur processing module, a superposition module and a post-processing module.
3. The detection system for lens attachments according to claim 2, wherein
the down-sampling module reduces image pixels to keep image processing real-time;
the blur processing module blurs the captured image with a filter operator, subtracts the blurred image from the original and takes the absolute value to obtain the blur difference map, which separates contour regions suspected of attachment-induced blur from contour regions without such blur;
the superposition module fuses consecutive multi-frame blur difference maps into a fusion feature map, exploiting the fact that attachment form and position change little over a short time, so that the accumulated maps enhance the contrast between blurred attachment regions and the background and highlight attachment contours on the image.
4. The detection system for lens attachments according to claim 3, wherein the blur processing module:
blurs the captured image with a filter operator, subtracts the blurred image from the original, and takes the absolute value to obtain the blur difference map;
defines the blur difference map of the current frame as deltax = |x_src - x_blur|;
where deltax is the blur difference map of the current frame, x_src is the original current frame, and x_blur is the current frame after filtering and smoothing.
5. The detection system for lens attachments according to claim 4, wherein the superposition module accumulates n frames of the blur difference map, x_accum = deltax_k + deltax_(k+1) + ... + deltax_(k+n), to obtain the fusion feature map;
where x_accum is the fusion feature map accumulated from frame k to frame k+n, deltax_k is the blur difference map at frame k, and deltax_(k+n) is the blur difference map at frame k+n.
6. The detection system for lens attachments according to claim 5, wherein the post-processing module comprises a binarization module, a neighborhood filtering module and a morphology module, and removes isolated noise points, eliminates small noise regions, fills holes via dilation and repairs the region area;
binarization of the multi-frame fusion feature map converts the grayscale map to a binary map using an automatic thresholding algorithm, dividing the image into regions of interest suspected of carrying attachments and attachment-free regions;
neighborhood filtering counts the pixel distribution in each neighborhood of the binary map and removes isolated noise points, reducing their influence on the attachment regions of interest;
morphological filtering erodes the binary map to remove small noise regions, then dilates it to fill the holes inside suspected regions and repair their area.
7. The detection system for lens attachments according to claim 6, wherein the image segmentation module further comprises a deep-learning network model for segmenting the input image, comprising:
a preprocessing module, which down-samples the image to M x N and converts the stored image data to the three-channel BGR format;
an image segmentation module, which feeds the input image into a semantic-segmentation convolutional neural network that outputs a class for each pixel through forward propagation, yielding the pixel set of the suspected attachment region.
8. The detection system for lens attachments according to claim 7, wherein the feature-extraction backbone of the deep-learning network model can be resnet18, squeezenet1.1, MobileNet or a similar network; the semantic-segmentation deconvolution part follows the PSPnet framework, fuses the last four backbone feature maps at different scales, and outputs a segmentation map of the same size as the original image.
9. The detection system for lens attachments according to claim 8, wherein the feature extraction module further comprises an index evaluation module for statistically computing multiple indexes within a contour, the index evaluation module:
computing one or more of image statistical features, shape/texture features and sharpness evaluation features over the segmented contour to obtain evaluation index values of different types;
wherein the image statistical features include, but are not limited to: grayscale (Gray), gradient (Grads), second-order gradient (Laplacian), and mathematical statistics such as mean, variance, maximum and minimum (mean/variance/max/min);
wherein the shape/texture features include, but are not limited to: roundness and area (Round/Area), wavelet transform operator (Wavelet_f);
wherein the sharpness evaluation features include, but are not limited to: Variance, EVA, Hist, second-order gradient (Laplacian);
Value = F(area, vector).
10. The detection system for lens attachments according to claim 9, wherein the index evaluation module supports two evaluation modes: confidence-value accumulation, and contour-region classification by a classifier.
11. The detection system for lens attachments according to claim 10, wherein, when the index evaluation module judges by confidence-value accumulation, a contour region has N evaluation indexes, each with its own selection threshold expressing whether that index value alone would mark the region as a rain region;
each of the N evaluation indexes is computed for the contour region and compared against its own selection threshold: if an index exceeds its threshold, the region's confidence gains one point; if it does not, the region is either eliminated or its confidence is left unchanged;
finally, the contour regions whose evaluation indexes exceed the selection thresholds are counted, and the position and area of those regions are marked.
12. The detection system for lens attachments according to claim 10, wherein, when the index evaluation module judges by classifier, N evaluation index values are computed for a contour region and combined into a feature vector of that region; feature vectors gathered from rain regions are fed to a classifier as training samples; the classifier can be a decision tree, SVM, BP network or similar, and classifies each segmented contour region as a rain region or not.
13. A mobile terminal, characterized in that it is a vehicle-mounted terminal or a mobile-phone terminal running the detection system for lens attachments according to any one of claims 1 to 12.
14. A computer storage medium storing a computer program implementing the detection system for lens attachments according to any one of claims 1 to 12.
CN201910679999.XA 2019-07-26 2019-07-26 detection system, terminal and storage medium for lens attached object Pending CN110544232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910679999.XA CN110544232A (en) 2019-07-26 2019-07-26 detection system, terminal and storage medium for lens attached object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910679999.XA CN110544232A (en) 2019-07-26 2019-07-26 detection system, terminal and storage medium for lens attached object

Publications (1)

Publication Number Publication Date
CN110544232A true CN110544232A (en) 2019-12-06

Family

ID=68710329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910679999.XA Pending CN110544232A (en) 2019-07-26 2019-07-26 detection system, terminal and storage medium for lens attached object

Country Status (1)

Country Link
CN (1) CN110544232A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532876A (en) * 2019-07-26 2019-12-03 纵目科技(上海)股份有限公司 Night mode camera lens pays detection method, system, terminal and the storage medium of object
CN112298106A (en) * 2020-10-29 2021-02-02 恒大恒驰新能源汽车研究院(上海)有限公司 Windscreen wiper control method, device and system

Similar Documents

Publication Publication Date Title
CN110544211B (en) Method, system, terminal and storage medium for detecting lens attached object
CN110532875B (en) Night mode lens attachment detection system, terminal and storage medium
CN111582083B (en) Lane line detection method based on vanishing point estimation and semantic segmentation
Wu et al. Lane-mark extraction for automobiles under complex conditions
Negru et al. Image based fog detection and visibility estimation for driving assistance systems
US8797417B2 (en) Image restoration method in computer vision system, including method and apparatus for identifying raindrops on a windshield
US7231288B2 (en) System to determine distance to a lead vehicle
CN110532876A (en) Night mode camera lens pays detection method, system, terminal and the storage medium of object
CN105260701A (en) Front vehicle detection method applied to complex scene
CN112666553B (en) Road ponding identification method and equipment based on millimeter wave radar
CN112927283A (en) Distance measuring method and device, storage medium and electronic equipment
CN110544232A (en) detection system, terminal and storage medium for lens attached object
CN115457358A (en) Image and point cloud fusion processing method and device and unmanned vehicle
Chen et al. A novel lane departure warning system for improving road safety
Chiu et al. Real-time traffic light detection on resource-limited mobile platform
FAN et al. Robust lane detection and tracking based on machine vision
EP3392800A1 (en) Device for determining a weather state
Hsieh et al. A real-time mobile vehicle license plate detection and recognition for vehicle monitoring and management
JP2020098389A (en) Road sign recognition device and program thereof
CN112734745B (en) Unmanned aerial vehicle thermal infrared image heating pipeline leakage detection method fusing GIS data
CN113139488B (en) Method and device for training segmented neural network
KR100976142B1 (en) detection method of road vehicles
CN114821529A (en) Visual detection system, method and device for intelligent automobile
Banu et al. Video based vehicle detection using morphological operation and hog feature extraction
Setiyono et al. The rain noise reduction using guided filter to improve performance of vehicle counting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination