WO2022088620A1 - Method, apparatus, device and storage medium for detecting the state of a camera lens - Google Patents

Method, apparatus, device and storage medium for detecting the state of a camera lens

Info

Publication number
WO2022088620A1
WO2022088620A1 PCT/CN2021/088211 CN2021088211W WO2022088620A1 WO 2022088620 A1 WO2022088620 A1 WO 2022088620A1 CN 2021088211 W CN2021088211 W CN 2021088211W WO 2022088620 A1 WO2022088620 A1 WO 2022088620A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
abnormal
image
preset
target image
Prior art date
Application number
PCT/CN2021/088211
Other languages
English (en)
French (fr)
Chinese (zh)
Inventor
姚兴华
曾星宇
Original Assignee
北京市商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Priority to KR1020217037804A (KR20220058843A)
Priority to JP2021565780A (JP2023503749A)
Publication of WO2022088620A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002 - Diagnosis, testing or measuring for television systems or their details, for television cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01M - TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 11/00 - Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M 11/02 - Testing optical properties
    • G01M 11/0242 - Testing optical properties by measuring geometrical properties or aberrations
    • G - PHYSICS
    • G03 - PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 43/00 - Testing correct operation of photographic apparatus or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 - Constructional details
    • H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20036 - Morphological image processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20048 - Transform domain processing

Definitions

  • the present disclosure relates to the field of detection technology, and in particular, to a state detection method, apparatus, device, and storage medium for a camera lens.
  • the camera has an image acquisition function, and the captured image data can be widely used in various computer vision applications such as smart cities, smart transportation, smart factories, smart campuses, smart communities, autonomous driving, and robot perception.
  • a camera collects the external environment through its lens to obtain image data. Therefore, the lens of the camera serves as the source of data in the entire application and plays an extremely important role.
  • in real applications, the lens of the camera is exposed to the outdoor environment and usually suffers from wind and rain, dust pollution, abnormal occlusion, and other abnormal conditions that affect its normal operation. Therefore, it is very important to monitor the status of the camera lens and find out abnormalities of the lens in time.
  • Embodiments of the present disclosure provide at least a method, apparatus, device, and storage medium for detecting a state of a camera lens.
  • a first aspect of the embodiments of the present disclosure provides a method for detecting the state of a camera lens.
  • the method includes: performing abnormality detection on a target image captured by a camera to obtain a final abnormal area in the target image; and analyzing the final abnormal area to determine whether the lens of the camera is in an abnormal state.
  • the abnormal state detection of the camera lens is directly realized by using the abnormal area of the image, without relying on other sensors, thus reducing the difficulty of detecting the camera lens and reducing the detection cost.
  • the above-mentioned abnormality detection of the target image captured by the camera to obtain the final abnormal area in the target image includes: performing blur detection on the target image captured by the camera to obtain several candidate abnormal areas in the target image; and selecting at least one of the candidate abnormal areas as the final abnormal area.
  • in this way, the abnormal area existing in the target image can be determined according to the blur condition of the target image, realizing abnormal area identification of the target image, so that anomalies of the camera lens can be judged according to the final abnormal area obtained subsequently.
  • various types of abnormal conditions of the lens, such as water mist, smudges, and occlusion, often cause blurring of some areas in the target image. Therefore, blur detection of the image can be used to detect all of the above abnormal conditions, improving detection breadth.
  • performing blur detection on the target image captured by the camera to obtain several candidate abnormal regions in the target image includes: performing preset transformation on the target image captured by the camera to obtain a transformed image, wherein the pixel values of the pixels in the transformed image can reflect the blur information of the transformed image; and determining the candidate abnormal areas based on the pixel values of the transformed image.
  • performing preset transformation on the target image captured by the camera to obtain the transformed image includes: preprocessing the target image to obtain the preprocessed image; and performing Laplacian transformation on the preprocessed image to obtain the transformed image.
  • the above-mentioned determination of the candidate abnormal region based on the pixel value of the transformed image includes: performing binarization processing based on the transformed image to obtain a binarized image; finding out the pixel points whose pixel value satisfies a preset pixel condition from the binarized image, to form candidate abnormal regions.
  • the value obtained after the Laplacian transformation can reflect the blur information of the transformed image, making the edge of the blurred area more obvious.
  • the blurred area in the transformed image can be preliminarily determined from the magnitudes of the obtained pixel values.
  • through binarization, the amount of data can be greatly reduced, which improves the operation speed and speeds up the detection of candidate abnormal areas, and the candidate abnormal regions are highlighted.
  • the above-mentioned preprocessing of the target image to obtain the preprocessed image includes: performing grayscale processing on the target image to obtain the preprocessed image.
  • the above-mentioned binarization processing based on the transformed image to obtain a binarized image includes: filtering the transformed image to obtain a filtered image, the filtering processing including a morphological closing operation; and binarizing the filtered image to obtain the binarized image.
  • the above-mentioned finding out the pixel points whose pixel values meet the preset pixel condition from the binarized image to form the candidate abnormal area includes: inverting the pixel values of the binarized image to obtain an inverse binarized image; and finding, in the inverse binarized image, the pixels whose pixel values satisfy the preset pixel condition to form the candidate abnormal regions.
  • the amount of data that needs to be calculated can be reduced, and the calculation speed can be improved.
  • through filtering processing such as the morphological closing operation, the interference information inside the unblurred area can be eliminated, the accuracy of subsequent detection of the blurred area can be improved, and the accuracy of judging whether the camera lens is in an abnormal state can be improved.
  • the subsequent processing of the blurred area can be facilitated, the operation speed is improved, and the detection difficulty is reduced.
  • selecting at least one candidate abnormal area from the above several candidate abnormal areas as the final abnormal area includes: obtaining a first area of each candidate abnormal area; selecting, from the several candidate abnormal areas, a candidate abnormal area whose first area satisfies a preset area condition as a pending abnormal area; and determining at least one pending abnormal area as the final abnormal area.
  • the fuzzy area can be more accurately identified.
  • the above-mentioned preset area condition is that the first area is greater than the first preset area threshold.
  • the above-mentioned determination of at least one pending abnormal area as the final abnormal area includes: determining at least one statistical value of the pending abnormal area in the target image, wherein the at least one statistical value of the pending abnormal area includes at least one first statistical value obtained by performing statistics on the saturation of the pending abnormal area, and/or at least one second statistical value obtained by performing statistics on the pixel values of the pending abnormal area; and if it is determined that the at least one statistical value of the pending abnormal area satisfies the preset statistical conditions, determining the pending abnormal area as the final abnormal area.
  • the accuracy of the blur detection can be improved, thereby improving the accuracy of the state detection of the camera lens.
  • the first statistical value and the second statistical value respectively include at least one of a mean value and a variance
  • the above-mentioned preset statistical conditions include: each statistical value of the undetermined abnormal area is greater than a preset threshold value corresponding to the statistical value.
  • the above-mentioned determining at least one statistical value of the pending abnormal area in the target image includes: generating at least one mask corresponding to the pending abnormal area based on the pixel position information of the pending abnormal area, wherein the at least one mask includes a region mask and/or a boundary mask; in the case where the at least one mask includes a region mask, obtaining a saturation image corresponding to the target image, using the region mask corresponding to the pending abnormal area to obtain, in the saturation image, a first area to be counted corresponding to the pending abnormal area, and performing statistics on the saturation of the first area to be counted to obtain at least one first statistical value of the pending abnormal area; and/or, in the case where the at least one mask includes a boundary mask, using the boundary mask corresponding to the pending abnormal area to obtain, in the target image, a second area to be counted corresponding to the pending abnormal area, and performing statistics on the pixel values of the second area to be counted to obtain at least one second statistical value of the pending abnormal area.
  • the mask is used to extract the area related to the abnormal area to be determined in the target image, so as to realize the accurate statistics of the statistical value of the abnormal area to be determined, and improve the accuracy of subsequent blur detection.
  • the above analysis of the final abnormal area to determine whether the lens of the camera is in an abnormal state includes: obtaining the second area of the final abnormal area; judging whether the second area of the final abnormal area satisfies the preset area condition; if so, determining that the target image is in an abnormal state; and, when it is detected that at least a second preset number of frames are in an abnormal state among the target images of a continuous first preset number of frames, determining that the camera lens is in an abnormal state; wherein the first preset number and the second preset number are positive integers.
  • the abnormal state of the target image is determined by using the area of the final abnormal area in the target image, and whether the camera lens is in an abnormal state is determined based on the states of a preset number of frames of target images, thereby realizing the abnormality detection of the camera lens; moreover, when the preset number is greater than 1, the states of the target images of consecutive multiple frames are used to comprehensively determine whether the camera lens is in an abnormal state, so the detection accuracy of the state of the camera lens can be improved.
  • judging whether the second area of the final abnormal area satisfies the preset area condition includes: judging whether the second area of the final abnormal area satisfies the first preset area condition or the second preset area condition. Determining that the target image is in an abnormal state if the condition is satisfied includes: if the first preset area condition is satisfied, determining that the target image is in the first abnormal state; and if the second preset area condition is satisfied, determining that the target image is in the second abnormal state.
  • the above-mentioned determining that the camera lens is in an abnormal state when it is detected that at least a second preset number of frames are in an abnormal state among the target images of a continuous first preset number of frames includes: in the case where it is detected that the target images of the continuous first preset number of frames are all in the first abnormal state, determining that the camera lens is in the first abnormal state; and in the case where it is detected that the target images of the continuous first preset number of frames are not all in the first abnormal state, but at least a second preset number of frames are in the first abnormal state or the second abnormal state, determining that the camera lens is in the second abnormal state.
  • a second aspect of the embodiments of the present disclosure provides a device for detecting a state of a camera lens, the device including: a region detecting part and a state analyzing part.
  • the area detection part is configured to perform abnormality detection on the target image captured by the camera to obtain the final abnormal area in the target image.
  • the state analysis part is configured to analyze the final abnormal area to determine whether the lens of the camera is in an abnormal state.
  • a third aspect of the embodiments of the present disclosure provides an electronic device, including a mutually coupled memory and a processor, where the processor is configured to execute a computer program stored in the memory to implement the method for detecting the state of a camera lens in the first aspect.
  • a fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, implements the method for detecting the state of a camera lens in the first aspect.
  • a fifth aspect of the embodiments of the present disclosure provides a computer program, including computer-readable codes.
  • when the computer-readable codes run in an electronic device, a processor in the electronic device executes the method for detecting the state of a camera lens in the first aspect.
  • in the above solution, by performing abnormality detection on the target image captured by the camera, the abnormal area of the image is directly used to detect the abnormal state of the camera lens without relying on other sensors, thus reducing the difficulty of detecting the camera lens and reducing the detection cost.
  • FIG. 1A is a first schematic diagram of an application scenario of an electronic device provided by an embodiment of the present disclosure
  • FIG. 1B is a second schematic diagram of an application scenario of an electronic device provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for detecting a state of a camera lens provided by an embodiment of the present disclosure
  • FIG. 3 is a first schematic flowchart of a second embodiment of a method for detecting a state of a camera lens provided by an embodiment of the present disclosure
  • FIG. 4 is a second schematic flowchart of a second embodiment of a method for detecting a state of a camera lens provided by an embodiment of the present disclosure
  • FIG. 5 is a third schematic flowchart of the second embodiment of the state detection method for a camera lens provided by an embodiment of the present disclosure
  • FIG. 6 is a fourth schematic flowchart of the second embodiment of the state detection method of the camera lens provided by the embodiment of the present disclosure.
  • FIG. 7 is a fifth schematic flowchart of the second embodiment of the camera lens state detection method provided by the embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of an application scenario of a method for detecting a state of a camera lens provided by an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram of a processing flow of an abnormal area detection module provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a grayscale image in each scene provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a transformed image in each scene provided by an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of filtered images in various scenarios provided by an embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram of an inverse binarized image in each scene provided by an embodiment of the present disclosure.
  • FIG. 14 is a schematic diagram of candidate abnormal regions in various scenarios provided by an embodiment of the present disclosure.
  • FIG. 15 is a schematic frame diagram of an embodiment of a state detection device for a camera lens provided by an embodiment of the present disclosure
  • FIG. 16 is a schematic diagram of a framework of an embodiment of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 17 is a schematic framework diagram of an embodiment of a computer-readable storage medium provided by an embodiment of the present disclosure.
  • the camera described in the embodiments of the present disclosure can be any device that can realize image acquisition, such as a video camera, a camera, a mobile phone and other terminal devices, and the lens of the camera is the optical component of the device for forming an image. It can be understood that as long as it is a device capable of capturing images or videos, the method for detecting the state of a camera lens described in the embodiments of the present disclosure can be applied. In some possible implementations, the method for detecting the state of a camera lens described in the embodiments of the present disclosure may be implemented by a processor calling a computer program stored in a memory.
  • the execution subject of the method for detecting the state of the camera lens provided by the embodiment of the present disclosure may be an electronic device.
  • the electronic device may include a processor 11 and a camera 12 .
  • the electronic device can collect the target image through the camera 12, and the processor 11 can detect and analyze the target image to determine whether the lens of the camera 12 is in an abnormal state.
  • the electronic device may be implemented as a cell phone.
  • the electronic device 10 can receive the real-time captured target image transmitted by other devices 13 through the network 14 .
  • the electronic device 10 can analyze and process the received target image, so as to determine whether the camera lens of the other device 13 is in an abnormal state.
  • the electronic device can be implemented as a computer, and the computer can receive the target image collected by the camera through the network.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for detecting a state of a camera lens according to an embodiment of the present disclosure. Specifically, the following steps can be included:
  • Step S11 Perform abnormality detection on the target image captured by the camera to obtain the final abnormal area in the target image.
  • an image captured by a camera is defined as a target image.
  • the target image may be a certain frame image in the video captured by the camera lens, or may be a single image captured directly.
  • the target image may be a color image or a grayscale image.
  • abnormality detection can be performed directly on the target image to obtain the final abnormal area in the target image.
  • Anomaly detection is to detect the abnormal area in the target image.
  • the abnormal area of the target image can be understood as an area without normal imaging, such as a blurred area in the target image, or the entire target image being black, etc.
  • the situations that cause the target image to have an abnormal area are, for example, that the lens is contaminated with water mist, stains, the lens is blocked, the lens is out of focus, and so on.
  • the obtained area with abnormality in the target image is defined as the final abnormal area.
  • the final abnormal area may be a part of the target image, and the number may be one or more, and the final abnormal area may also be that the entire target image is an abnormal area.
  • Step S12 Analyze the final abnormal area to determine whether the lens of the camera is in an abnormal state.
  • the obtained final abnormal area can be analyzed to more accurately determine whether the lens is in an abnormal state.
  • the position, size, shape, pixel information, etc. of the final abnormal area can be analyzed. It can be understood that all the information about the final abnormal area can be used as basic data for analysis.
  • a plurality of basic data for analysis can be combined, and whether the lens of the camera is in an abnormal state can be more accurately determined through the analysis of the plurality of data.
  • the analysis can be performed in a corresponding manner according to the type of the basic data of the final abnormal area to be analyzed.
  • for example, a saturation analysis method can be used to calculate the saturation of the final abnormal area, and the saturation of the final abnormal area can then be used to determine whether the camera lens is in an abnormal state; or a pixel-point statistics method can be used to count the pixels included in the final abnormal area to determine the size of the final abnormal area.
  • the abnormal state detection of the camera lens is directly realized by using the abnormal area of the image, without relying on other sensors, thus reducing the difficulty of detecting the camera lens and reducing the detection cost.
  • FIG. 3 is a first schematic flowchart of a second embodiment of a state detection method for a camera lens according to an embodiment of the present disclosure. This embodiment is further described on the basis of the above-mentioned first embodiment. Specifically, the following steps may be included:
  • Step S21 Perform blur detection on the target image captured by the camera to obtain several candidate abnormal regions in the target image.
  • blur detection may be performed on the target image first to preliminarily determine the abnormality of the target image.
  • fuzzy areas in the target image can be obtained, which are defined as candidate abnormal areas.
  • the candidate abnormal region is a part of the target image, the number of candidate abnormal regions may be one or more, and the specific number is determined according to the blur detection result.
  • FIG. 4 is a schematic flowchart of a second embodiment of a method for detecting a state of a camera lens according to an embodiment of the present disclosure.
  • blur detection is performed on a target image captured by a camera to obtain several candidate abnormal regions in the target image, which can be specifically implemented by the following steps:
  • Step S211 Perform preset transformation on the target image captured by the camera to obtain a transformed image, wherein the pixel value of each pixel in the transformed image can reflect the blur information of the transformed image.
  • a preset transformation is performed on the target image, and the transformed image is defined as a transformed image.
  • the pixel value of each pixel in the obtained transformed image can reflect the blur information of the transformed image. Since the preset transformation is performed on each pixel point, the pixel value corresponding to each pixel point can be obtained accordingly.
  • the pixel values of the pixels can reflect the blur information of the transformed image, which can be reflected by the change trend of the pixel values. For example, the boundary between the blurred area and the non-blurred area can be made more obvious by the change trend of the pixel values, that is, making the candidate abnormal regions more obvious.
  • the size of the pixel value of the pixel point can also reflect whether the area where the pixel point is located is blurred. That is, the fuzzy area and the non-blurred area can be preliminarily determined by judging the size of the pixel value of the pixel point.
  • the preset transformation may be to process the information contained in each pixel in the target image, and the value obtained after processing is the pixel value.
  • if the target image is a color image, it can be processed according to the values of the red (R), green (G), and blue (B) color channels of each pixel of the target image, and the final value obtained is the pixel value.
  • the preset transformation is performed by using, for example, a Brenner gradient function, a Tenengrad gradient function, a Laplacian (Laplace) gradient function, or an SMD (grayscale variance) function. It can be understood that any method that can be used to detect the blurring of an image can be applied to the embodiments of the present disclosure.
  • a preset transformation is performed on a target image captured by a camera to obtain a transformed image, which specifically includes the following steps:
  • Step S2111 Preprocess the target image to obtain a preprocessed image.
  • when performing the preset transformation on the target image, the target image may be preprocessed first, and the obtained image is the preprocessed image.
  • the preprocessing may be to process the information contained in each pixel in the image, for example, to process the RGB three-channel information of the target image.
  • the preprocessing may be to perform grayscale processing on the target image, and the obtained grayscale image is the preprocessed image.
  • the target image can be converted to grayscale according to formula (1), in which R, G, and B respectively represent the values of the red, green, and blue color channels of a pixel, and the obtained value Gray is the grayscale value of the pixel. It can be understood that the specific calculation formula of the grayscale processing can be adjusted according to the specific situation.
  • in this way, the information contained in each pixel can be converted from three-channel RGB information to single-channel grayscale information, which simplifies the amount of data in subsequent processing of the information contained in the pixels and improves the operation speed.
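  • As an illustration of this preprocessing step, the following is a minimal Python/OpenCV sketch; the helper name preprocess is hypothetical, and it assumes the standard BT.601 weighting applied by cv2.cvtColor rather than reproducing the exact formula (1) of the disclosure.

```python
import cv2
import numpy as np

def preprocess(target_image_bgr: np.ndarray) -> np.ndarray:
    """Convert the BGR target image to a single-channel grayscale image.

    cv2.cvtColor applies the standard BT.601 weighting
    Gray = 0.299*R + 0.587*G + 0.114*B, collapsing the three color
    channels into one and reducing the data handled downstream.
    """
    return cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
```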
  • Step S2112 Laplace transform is performed on the preprocessed image to obtain a transformed image.
  • the preprocessed image obtained after preprocessing can be processed by using the Laplacian gradient function transformation, and the obtained image is the transformed image.
  • by using the Laplacian transform, the real blurred area in the preprocessed image can be detected, so as to obtain the blurred area in the preprocessed image.
  • the specific process of Laplacian is to first use the Sobel operator to calculate the second-order x and y differences, and then sum them up.
  • the formula is as follows: dst(x, y) = ∂²src/∂x² + ∂²src/∂y² (2), where x and y are respectively the coordinates of each pixel in the pixel coordinate system, src represents the input image, and dst represents the output image.
  • the obtained value is the pixel value of each pixel point, and the pixel value can reflect the blur information of the transformed image, for example, by making the edge of the blurred area more obvious; at the same time, by judging the magnitude of the obtained pixel values, the blurred area in the transformed image can be preliminarily obtained.
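  • A minimal sketch of this transformation step, assuming OpenCV's cv2.Laplacian as the Laplacian operator and a hypothetical helper name laplacian_transform; taking the absolute value is an illustrative choice so that the magnitude of the response can be compared against a threshold later.

```python
import cv2
import numpy as np

def laplacian_transform(gray: np.ndarray) -> np.ndarray:
    """Apply the Laplacian operator (sum of the second-order x and y
    derivatives) to the preprocessed grayscale image.

    Sharp regions produce strong responses and blurred regions weak ones,
    so the magnitude of the result reflects the blur information.
    """
    lap = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)  # signed second derivatives
    return cv2.convertScaleAbs(lap)                 # back to 8-bit magnitudes
```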
  • Step S212 Determine candidate abnormal regions based on pixel values of the transformed image.
  • the candidate abnormal region can be determined based on the pixel value of the transformed image. Based on the pixel value of the transformed image, it means that the candidate abnormal area can be determined according to the change trend of the pixel value of the transformed image, the overall distribution of the pixel value, or the size of the pixel value and other information related to the pixel value. That is, the candidate abnormal region may be determined according to the pixel values of a part of the transformed image, or may be determined according to the pixel points of the entire transformed image. For example, a region enclosed by a range in which the change trend of the pixel value of the pixel point exceeds a certain level can be determined as a candidate abnormal region.
  • the candidate abnormal region can be determined from the transformed image by binarizing the transformed image. Specifically, it can be illustrated with reference to FIG. 5 , which is a third schematic flowchart of the second embodiment of the state detection method of the camera lens according to the embodiment of the present disclosure.
  • “determining candidate abnormal regions based on pixel values of the transformed image” can be specifically implemented through the following steps:
  • Step S2121 Perform binarization processing based on the transformed image to obtain a binarized image.
  • the transformed image can be binarized.
  • the transformed image can be thresholded first. Threshold segmentation is to classify the pixel values of the pixels: for example, a feature threshold is set, the pixels whose pixel value is less than or equal to the feature threshold are classified into one category, and the pixels whose pixel value is greater than the feature threshold are classified into another category.
  • the threshold is, for example, any number from 50 to 80, such as 50.
  • the pixel value of each pixel in the transformed image is divided into two categories.
  • a binarization operation can be performed to make the difference between the pixel values of the two types of pixel points more obvious.
  • the pixel value of a pixel whose pixel value is greater than the feature threshold may be set as the first preset pixel value
  • the pixel value of a pixel whose pixel value is less than or equal to the feature threshold may be set as the second preset pixel value
  • for example, the first preset pixel value may be 0 and the second preset pixel value may be 255; or the first preset pixel value may be 255 and the second preset pixel value may be 0.
  • the above-mentioned first preset pixel value and second preset pixel value may also be set to other pixel values, which are not limited herein.
  • the amount of data contained in the image can be greatly reduced, thereby improving the operation speed and speeding up the detection of candidate abnormal regions.
  • the boundary of the blurred area in the binarized image can be made more obvious, and the candidate abnormal area can be highlighted.
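  • The binarization described above could look like the following sketch, assuming a feature threshold of 50 and the 0/255 preset pixel values mentioned above; the helper name binarize is hypothetical.

```python
import cv2
import numpy as np

def binarize(transformed: np.ndarray, feature_thresh: int = 50) -> np.ndarray:
    """Split the transformed image into two pixel classes.

    Pixels above the feature threshold (sharp content) become 255 and
    pixels at or below it (candidate blurred content) become 0, which
    sharpens the boundary of the blurred areas and greatly reduces the
    amount of data carried into the following steps.
    """
    _, binary = cv2.threshold(transformed, feature_thresh, 255, cv2.THRESH_BINARY)
    return binary
```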
  • performing threshold segmentation and image binarization based on the transformed image to obtain a binarized image may specifically include the following steps 1 and 2:
  • Step 1 Filter the transformed image to obtain a filtered image.
  • the filtering process may include, but is not limited to, a morphological closing operation. Morphological closing operation is to perform dilation operation first, and then perform erosion operation.
  • the dilation operation can be used to fill in the holes of the unblurred areas and to remove the small grain noise (interference information in the unblurred areas) contained in the unblurred areas.
  • the formula is as follows: A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ } (3)
  • A can be the unblurred area
  • B can be a structuring element.
  • the unblurred area A is dilated (expanded) by the structuring element B.
  • the erosion operation can cause the boundary of the unblurred area to shrink, which can be used to eliminate some small holes or small cracks.
  • the formula is as follows: A ⊖ B = { z | (B)_z ⊆ A } (4)
  • the obtained image is defined as a filtered image.
  • the interference information in the unblurred area has been eliminated by the morphological closing operation in the filtered image, which can improve the accuracy of subsequent detection of the blurred area and the accuracy of judging whether the camera lens is in an abnormal state.
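  • A sketch of this filtering step, assuming OpenCV's morphological closing with an illustrative 5x5 rectangular structuring element (the kernel size is an assumption, not specified by the disclosure); the helper name close_filter is hypothetical.

```python
import cv2
import numpy as np

def close_filter(transformed: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """Morphological closing: dilation followed by erosion.

    Dilation with the structuring element fills small holes inside the
    unblurred regions, and the subsequent erosion shrinks their boundary
    back, so small-grain noise (interference information inside the
    unblurred areas) is removed before binarization.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    return cv2.morphologyEx(transformed, cv2.MORPH_CLOSE, kernel)
```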
  • Step 2 Binarize the filtered image to obtain a binarized image.
  • the above binarization process can be used, and the image obtained after processing is the binarized image.
  • Step S2122 Find out the pixel points whose pixel value meets the preset pixel condition from the binarized image to form a candidate abnormal area.
  • the pixel values of the pixels in the binarized image at this time are only two.
  • the pixel points whose pixel values satisfy the preset pixel condition can be found, and the area composed of these pixel points is the candidate abnormal area.
  • if the pixel value of the pixels in the unblurred area is set to the smaller value, the points with the larger pixel value in the binarized image can be selected to form the blurred area.
  • for example, the pixel value of the pixels in the blurred area can be set to 255, and the pixels with a pixel value of 255 can be found by using the cv::findContours function in the OpenCV open source computer vision library.
  • in other cases, when performing the image binarization operation on the transformed image, the pixel value of the blurred area may be set to 0. Since the pixel value of the blurred area is 0, it is inconvenient for subsequent processing of the blurred area, which reduces the operation speed and increases the difficulty of calculation.
  • the following steps can be performed to realize "find out the pixel points whose pixel value meets the preset pixel condition from the binarized image to form a candidate abnormal area", which specifically includes the following steps 1 and 2:
  • Step 1 Invert the pixel values of the binarized image to obtain an inverse binarized image.
  • in this case, an inversion operation can be performed, that is, the pixel values of the unblurred area and the blurred area are exchanged.
  • for example, if the pixel value of the blurred area is 0 and the pixel value of the unblurred area is 255, then after the inversion the pixel value of the blurred area is 255 and the pixel value of the unblurred area is 0.
  • the pixel value of the blurred area can be made not to be 0, which can facilitate subsequent processing of the blurred area, improve the operation speed, and reduce the difficulty of detection.
  • the binarization with inversion can be expressed as formula (5): dst(x, y) = 0 if src(x, y) > thresh, and dst(x, y) = maxVal otherwise, where thresh is the preset pixel threshold and maxVal is the maximum pixel value.
  • the obtained image is defined as an inverse binarized image.
  • Step 2 Find out the pixel points whose pixel value satisfies the preset pixel condition from the inverse binarization image to form a candidate abnormal area.
  • in the inverse binarized image, it is also possible to find out the pixel points whose pixel values satisfy the preset pixel condition to form the candidate abnormal area. Since the pixel value of the blurred area is greater than 0, the preset pixel condition may be that the pixel value of the pixel point is greater than 0.
  • the way to find pixels can be to use the cv::findContours function in the OpenCV open source computer vision library to find them.
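  • Combining the inverse binarization and the contour search, a sketch under the assumption of OpenCV 4.x (where cv2.findContours returns a (contours, hierarchy) pair) might look as follows; the helper name find_candidate_regions and the default threshold are illustrative.

```python
import cv2
import numpy as np

def find_candidate_regions(filtered: np.ndarray, thresh: int = 50):
    """Inverse-binarize the filtered image and extract candidate regions.

    THRESH_BINARY_INV maps pixels above `thresh` (sharp content) to 0 and
    the remaining pixels (candidate blurred content) to 255, so the blurred
    areas are the bright connected components returned by cv2.findContours.
    """
    _, inv = cv2.threshold(filtered, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours, inv
```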
  • Step S22 Select at least one candidate abnormal area from several candidate abnormal areas as the final abnormal area.
  • At least one of the candidate abnormal regions can be selected as the final abnormal region. For example, after performing certain processing on the candidate abnormal regions, at least one of them can be selected as the final abnormal region.
  • FIG. 6 is a fourth schematic flowchart of the second embodiment of the state detection method of the camera lens according to the embodiment of the present disclosure.
  • the candidate abnormal area may be further analyzed to determine the final abnormal area. For example, "select at least one candidate abnormal area from several candidate abnormal areas as the final abnormal area" may specifically include the following steps:
  • Step S221 Obtain the first area of each candidate abnormal region.
  • the area of the candidate abnormal region is defined as the first area.
  • the area of the candidate abnormal area can be determined by counting the number of pixels contained in the candidate abnormal area. For example, if the number of pixels contained in a candidate abnormal area is 100, the first area size of the candidate abnormal area is 100 pixel size.
  • Step S222 From several candidate abnormal areas, select a candidate abnormal area whose first area satisfies a preset area condition as a pending abnormal area.
  • the candidate abnormal area can be screened by judging whether the area of the candidate abnormal area satisfies the preset area condition, and the candidate abnormal area that satisfies the preset area condition is defined as the pending abnormal area.
  • the preset area condition may be that the first area is greater than the first preset area threshold.
  • the first preset area threshold is, for example, any value of 2500-3000 pixels in size, such as 2500 pixels in size. Since the area of the fuzzy area is generally large, by setting the first area threshold, the candidate abnormal area with a small area can be excluded, and the accuracy of identifying the fuzzy area is improved.
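  • A minimal sketch of this screening step; cv2.contourArea is used here as a stand-in for counting the pixels contained in each candidate region, and the helper name filter_by_area and the default threshold of 2500 follow the example above.

```python
import cv2

def filter_by_area(contours, min_area: float = 2500.0):
    """Keep only candidate abnormal regions whose first area exceeds the
    first preset area threshold, discarding small regions that are
    unlikely to be genuine blurred areas."""
    return [c for c in contours if cv2.contourArea(c) > min_area]
```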
  • Step S223 Determine at least one pending abnormal area as the final abnormal area.
  • at least one pending abnormal area can be selected as the final abnormal area; for example, after further processing the pending abnormal areas, at least one of them can be selected as the final abnormal area.
  • FIG. 7 is a fifth schematic flowchart of the second embodiment of the state detection method of the camera lens according to the embodiment of the present disclosure.
  • "determining at least one pending abnormal area as the final abnormal area” may specifically include the following steps:
  • Step S2231 Determine at least one statistical value of the abnormal region to be determined in the target image.
  • processing may be performed on the pending abnormal area to further determine whether the pending abnormal area is the final abnormal area.
  • statistics may be performed on the pending abnormal area to obtain statistical values related to the pending abnormal area.
  • further analysis can be carried out by using the saturation information of the target image to determine the abnormal area.
  • at least one first statistical value may be obtained by performing statistics on the saturation of the abnormal region to be determined.
  • the pixel values of the pending abnormal area can also be counted, so as to obtain at least one second statistical value. It can be understood that, when implementing the state detection method for a camera lens described in this disclosure, both the first statistical value and the second statistical value can be obtained as the statistical values of the pending abnormal area, or one of them can be selected as the statistical value of the pending abnormal area.
  • the saturation can be calculated as S = (max(R, G, B) - min(R, G, B)) / max(R, G, B), where max(R, G, B) is the maximum value and min(R, G, B) is the minimum value obtained after processing the RGB values of each pixel in the target image, and S represents the saturation of the target image.
  • the method for processing the RGB values of the pixels in the target image is, for example, the above-mentioned formula (1).
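  • A sketch of the saturation computation under the standard HSV definition of saturation, which matches the max/min description above; the helper name saturation is hypothetical and the input is assumed to be an RGB array.

```python
import numpy as np

def saturation(target_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel saturation S = (max(R,G,B) - min(R,G,B)) / max(R,G,B),
    with S set to 0 where max(R,G,B) is 0 (pure black pixels)."""
    rgb = target_rgb.astype(np.float32)
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    s = np.zeros_like(mx)
    np.divide(mx - mn, mx, out=s, where=mx > 0)  # avoid division by zero
    return s
```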
  • the above-mentioned first statistical value and second statistical value respectively include at least one of a mean value and a variance. That is, the at least one first statistical value includes the mean value and/or the variance of the saturation of the abnormal area to be determined, and the at least one second statistical value includes the mean value and/or the variance of the pixel values of the abnormal area to be determined.
  • step S2231 may specifically include the following steps 1 to 3.
  • Step 1 Based on the pixel position information of the abnormal region to be determined, generate at least one mask corresponding to the abnormal region to be determined, wherein the at least one mask includes an area mask and/or a boundary mask.
  • the position of the pixel points can be determined by analyzing the position information, such as coordinates, of the pixel points in the undetermined abnormal area.
  • the mask corresponding to the undetermined abnormal area is consistent with the shape and size of the entire undetermined abnormal area, or only the shape and size of a part of the undetermined abnormal area.
  • Masks can be used to block out pending anomaly regions.
  • the mask may be an image, a figure, etc., used to block the abnormal area to be determined.
  • a mask generated to correspond to the complete pending abnormal region is a region mask; a mask generated to correspond to the boundary of the pending abnormal region is a boundary mask.
  • the number and shape of the generated masks are not limited. For example, an area mask and a boundary mask corresponding to each pending anomaly area may be generated for all pending anomaly areas.
  • Step 2 In the case where the at least one mask includes a region mask, obtain a saturation image corresponding to the target image, and use the region mask corresponding to the pending abnormal area to obtain, in the saturation image, the first area to be counted corresponding to the pending abnormal area; the saturation of the first area to be counted is then counted to obtain at least one first statistical value of the pending abnormal area.
  • after step 1, if the generated mask includes a region mask, a saturation image corresponding to the target image can be obtained at this time. Specifically, saturation calculation can be performed on each pixel of the target image, so as to obtain the saturation image corresponding to the target image.
  • the saturation calculation is, for example, converting an RGB image to a hue-saturation-lightness (HSV) image and extracting the value of the saturation S channel.
  • the corresponding position of the undetermined abnormal area on the saturation image that is, the first area to be counted, can be obtained through the position information, such as coordinates, of the pixels of the undetermined abnormal area in the target image.
  • for example, if the coordinates of a certain pixel in the pending abnormal area on the target image are (1,1), the coordinates of the corresponding pixel on the saturation image are also (1,1). Therefore, the first area to be counted corresponding to the pending abnormal area can be obtained in the saturation image.
  • the saturation of the first region to be counted may also be counted, and a value obtained from the count is defined as the first statistic value.
  • the statistics on the saturation of the first to-be-statistical area may be performed on the entire to-be-determined abnormal area, or may be performed on a certain part of the to-be-determined abnormal area.
  • the statistical method is not limited, for example, the average saturation, variance, standard deviation, etc., of the undetermined abnormal area can be counted.
  • in the case where the generated mask includes a boundary mask, step 3 can be performed at this time.
  • Step 3 Using the boundary mask corresponding to the pending abnormal area, obtain a second area to be counted corresponding to the pending abnormal area in the target image; count the pixel values of the second area to be counted to obtain at least one second statistical value of the pending abnormal area.
  • when the boundary mask corresponding to the pending abnormal area is generated, the area covered by the boundary mask in the target image is the second area to be counted corresponding to the pending abnormal area.
  • the pixel values of the second area to be counted may be counted, for example, the pixel values are grayscale values, and the statistical value obtained by the statistics is the second statistical value.
  • both steps 2 and 3 may be performed, or only one of the steps may be performed.
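  • Steps 1 to 3 could be sketched as follows, assuming contours from the hypothetical find_candidate_regions helper above, a filled region mask, a boundary mask drawn with an illustrative thickness of 2, and grayscale values for the second statistic; the helper name region_statistics is hypothetical.

```python
import cv2
import numpy as np

def region_statistics(target_bgr: np.ndarray, contour) -> dict:
    """Compute the statistical values described above for one pending abnormal area.

    A filled region mask selects the area in the saturation (S) channel of
    the HSV image (the first area to be counted), and a thin boundary mask
    selects the area's edge in the grayscale image (the second area to be
    counted); mean and variance are taken over each selection.
    """
    h, w = target_bgr.shape[:2]
    region_mask = np.zeros((h, w), np.uint8)
    boundary_mask = np.zeros((h, w), np.uint8)
    cv2.drawContours(region_mask, [contour], -1, 255, thickness=cv2.FILLED)
    cv2.drawContours(boundary_mask, [contour], -1, 255, thickness=2)

    sat = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2HSV)[:, :, 1]
    gray = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY)

    sat_vals = sat[region_mask == 255]       # first area to be counted
    edge_vals = gray[boundary_mask == 255]   # second area to be counted
    return {
        "sat_mean": float(sat_vals.mean()), "sat_var": float(sat_vals.var()),
        "edge_mean": float(edge_vals.mean()), "edge_var": float(edge_vals.var()),
    }
```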
  • Step S2232 If it is determined that at least one statistical value of the pending abnormal area satisfies the preset statistical condition, the pending abnormal area is determined as the final abnormal area.
  • the statistical value can be analyzed to further determine whether the pending abnormal area is a fuzzy area, and obtain the final abnormal area. Specifically, if it is determined that at least one statistical value of the pending abnormal area satisfies the preset statistical condition, the pending abnormal area is determined as the final abnormal area.
  • the preset statistical condition may be that the size of the statistical value is within a certain range. For example, when the statistical value is an arithmetic mean value, the preset condition may be that the arithmetic mean value is between 40-216.
  • the pending abnormal area may be determined as the final abnormal area when one statistical value satisfies the preset statistical condition, when several statistical values satisfy the preset statistical conditions, or only when all statistical values satisfy the preset statistical conditions.
  • the preset statistical condition may be that each statistical value of the abnormal region to be determined is greater than a preset threshold corresponding to the statistical value.
  • if the statistical values include the arithmetic mean and variance of both the whole and the boundary of the pending abnormal area, there will be four statistical values for a pending abnormal area: the arithmetic mean and variance of the entire pending abnormal area, and the arithmetic mean and variance of the boundary of the pending abnormal area. Only when the four statistical values are all greater than the preset thresholds corresponding to them is the pending abnormal area determined as the final abnormal area.
  • the preset threshold value of the arithmetic mean may be 40, and the preset threshold value of the variance may be 100.
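  • Using the example thresholds above (40 for the means and 100 for the variances) and the field names of the hypothetical region_statistics helper, the preset statistical condition could be checked as follows.

```python
def is_final_abnormal(stats: dict,
                      mean_thresh: float = 40.0,
                      var_thresh: float = 100.0) -> bool:
    """A pending abnormal area is kept as a final abnormal area only when
    every statistical value exceeds its corresponding preset threshold."""
    return (stats["sat_mean"] > mean_thresh and stats["sat_var"] > var_thresh and
            stats["edge_mean"] > mean_thresh and stats["edge_var"] > var_thresh)
```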
  • the blur situation of the abnormal area to be determined can be further judged, thereby improving the accuracy of blur detection and further improving the accuracy of the state detection of the camera lens.
  • the detection steps can be simplified and the detection speed can be accelerated.
  • through the above steps, the final blurred area of the image is obtained. At this time, it can be determined whether the camera lens is in an abnormal state by analyzing the final abnormal area.
  • Step S23 Obtain the second area of the final abnormal area.
  • the fuzzy area can be analyzed by obtaining the area of the final abnormal area.
  • the area of the final abnormal region is defined as the second area.
  • the area of the final abnormal area can also be represented by the number of pixels it contains. For example, if a final abnormal area contains 3000 pixels, the area of the final abnormal area is 3000 pixels in size.
  • Step S24 Determine whether the second area of the final abnormal area satisfies the preset area condition. If satisfied, go to step S25, if not, go to step S27.
  • the final abnormal area can be classified by judging whether the second area satisfies the preset area condition.
  • the preset area condition is, for example, the size of the area, or the position of the final abnormal area in the target image, and so on.
  • the preset area condition may include a first preset area condition or a second preset area condition.
  • "judging whether the second area of the final abnormal area satisfies the preset area condition" specifically includes: judging whether the second area of the final abnormal area satisfies the first preset area condition or the second preset area condition.
  • for example, the first preset area condition is that the second area is greater than the second preset area threshold; the second preset area condition is that the second area is greater than the third preset area threshold and smaller than the second preset area threshold, wherein the second preset area threshold is greater than the third preset area threshold.
  • the second preset area threshold may be a size of 5000 pixels, and the third preset area threshold may be a size of 3000 pixels.
  • Step S27 If not satisfied, determine that the target image is in a normal state.
  • the target image is in a normal state.
  • Step S25 If satisfied, determine that the target image is in an abnormal state.
  • the target image can be considered to be in an abnormal state. It can be understood that when the second area of the final abnormal area does not meet the preset area condition, it can be considered that the target image of the frame is in a normal state.
  • the final abnormal area can be further classified according to these two conditions. Specifically, when the second area of the final abnormal area satisfies the first preset area condition, it can be determined that the target image is in the first abnormal state; when the second area of the final abnormal area satisfies the second preset area condition, the target image is determined in the second abnormal state.
  • when the first preset area condition is satisfied, the abnormal state of the target image is more serious.
  • the first abnormal state is, for example, an error state
  • the second abnormal state is, for example, a warning state.
  • the abnormal state of the target image can be further divided, so that a more accurate judgment of the abnormal state of the target image can be realized.
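  • Reading the two area conditions with the example thresholds of 5000 and 3000 pixels, a frame could be classified as in the following sketch; the helper name classify_image_state and the string labels are illustrative, and the interpretation that the second condition covers areas between the third and the second thresholds is an assumption based on the example values.

```python
def classify_image_state(final_area: int,
                         second_thresh: int = 5000,
                         third_thresh: int = 3000) -> str:
    """Classify one frame from the area of its final abnormal region.

    A larger area indicates a more serious problem: above the second preset
    area threshold the frame is in the first (error) abnormal state, between
    the third and the second thresholds it is in the second (warning)
    abnormal state, and otherwise it is normal.
    """
    if final_area > second_thresh:
        return "error"    # first abnormal state
    if final_area > third_thresh:
        return "warning"  # second abnormal state
    return "normal"
```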
  • the abnormal state of the frame of target image can be determined. It can be understood that when the lens of the camera is in an abnormal state, generally speaking, several consecutive frames of target images captured by the camera are in an abnormal state. Therefore, the state of the camera lens can be further judged by analyzing the number of target images in an abnormal state. Based on this, after step S25, the following steps can be continued:
  • Step S26 In the case of detecting that at least a second preset number of frames are in an abnormal state among the target images of a continuous first preset number of frames, determine that the camera lens is in an abnormal state; wherein the first preset number and the second preset number are positive integers.
  • the first preset number and the second preset number are positive integers.
  • the first preset number is, for example, 30, and the second preset number is, for example, 15.
  • part of the target images may be periodically extracted from the target images shot by the camera lens within a period of time, to determine whether the extracted target images are in an abnormal state. In this way, it can be determined whether the camera lens is in an abnormal state within a certain period of time, thereby improving the detection accuracy of the camera lens.
  • when the target image may be in the first abnormal state or the second abnormal state, the following situations may occur:
  • The first situation: if it is detected that the target images of the continuous first preset number of frames are all in the first abnormal state, it can be determined that the camera lens is in the first abnormal state. At this time, it can be considered that the abnormal state of the camera lens is relatively serious; the first abnormal state is, for example, an error state.
  • The second situation: if it is detected that the target images of the continuous first preset number of frames are all in a normal state, the camera lens can be considered to be in a normal state.
  • The third situation: if it is detected that the target images of the continuous first preset number of frames are not all in the first abnormal state, but at least a second preset number of frames are in the first abnormal state or the second abnormal state, it is determined that the camera lens is in the second abnormal state. In this case, it is possible that the target images of the continuous first preset number of frames are all in the second abnormal state, or that at least a second preset number of frames (where the second preset number is not equal to the first preset number) among the continuous first preset number of frames are in the first abnormal state or the second abnormal state.
  • in either case, the camera lens may be in an abnormal state, so it is determined that the camera lens is in the second abnormal state.
  • the second abnormal state is, for example, a warning state.
  • the second preset number can be specifically set according to the actual situation, for example, it can be any positive integer such as 1, 5, and 15.
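  • The three situations above could be combined into a single check over the most recent frames, as in the following sketch; the helper name classify_lens_state and the defaults of 30 and 15 frames follow the examples given earlier.

```python
def classify_lens_state(frame_states, first_n: int = 30, second_n: int = 15) -> str:
    """Decide the lens state from the states of the last `first_n` frames.

    If all frames are in the error state, the lens is in the first abnormal
    state; otherwise, if at least `second_n` frames are abnormal (error or
    warning), the lens is in the second abnormal state; otherwise it is
    considered normal.
    """
    recent = list(frame_states)[-first_n:]
    if len(recent) < first_n:
        return "normal"  # not enough history yet
    if all(s == "error" for s in recent):
        return "error"    # first abnormal state
    if sum(s in ("error", "warning") for s in recent) >= second_n:
        return "warning"  # second abnormal state
    return "normal"
```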
  • the abnormal state of the target image is determined by using the area of the final abnormal area in the target image, and whether the camera lens is in an abnormal state is determined based on the state of the target image in a preset number of frames, thereby realizing the abnormality detection of the camera lens.
  • when the preset number is set to be greater than 1, that is, when the states of the target images of multiple consecutive frames are used to comprehensively determine whether the camera lens is in an abnormal state, the detection accuracy of the state of the camera lens can be improved.
  • blur detection of the image can be used to detect all of the above abnormal conditions, improving detection breadth.
  • the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined according to its function and possible internal logic.
  • the state detection method of the camera lens can be completed by the abnormal area detection module 81 , the abnormal area filtering module 82 , the state analysis processing module 83 and the control processing module 84 .
  • the abnormal area detection module 81 can detect the abnormal area of the target image.
  • the types of abnormal areas include: the lens is contaminated with water mist, the lens is stained, and the lens is blocked.
  • the abnormal area filtering module 82 can filter the results of the abnormal area detection module. Filtering is achieved through a series of human-designed rules and characteristics.
  • the filtering rules here include statistics such as region size, region color, region boundary features, etc.
  • the state analysis and processing module 83 can detect whether the state of the camera lens is an abnormal state through the threshold value set manually.
• The processing results of the state analysis and processing module can be divided into three states: normal, abnormal, and error. In the normal state, the camera lens has no abnormality; in the abnormal state, the camera lens is slightly abnormal, but not seriously; in the error state, the abnormality of the camera lens is serious and it needs to be shut down immediately.
  • the output result of this module can be handed over to the control module 84 for processing, so that the control module can make different responses to different states.
  • the abnormal area detection module 81 may perform the following steps:
  • Step 811 Acquire a target image.
  • the target image is an image captured by a camera.
  • Step 812 Perform grayscale processing on the target image to obtain a grayscale image.
  • the target image is a color RGB image.
  • the target image can be subjected to grayscale processing according to the above formula (1) to obtain the grayscale value of each pixel, thereby obtaining a grayscale image.
• Figure 10 shows grayscale images in different scenarios: 1001 in Figure 10 is the grayscale image under normal conditions, 1002 is the grayscale image when the camera lens is contaminated with water mist, 1003 is the grayscale image when the camera lens is stained, and 1004 is the grayscale image when the camera lens is occluded.
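• A minimal sketch of Step 812 using OpenCV in C++ is given below; the exact weighting of formula (1) is not reproduced here, and cv::cvtColor's standard Rec.601 weighting (Gray = 0.299R + 0.587G + 0.114B) is assumed as a stand-in.

    #include <opencv2/imgproc.hpp>

    // Step 812 (sketch): convert the color target image to a grayscale image.
    cv::Mat toGray(const cv::Mat& targetBgr) {
        cv::Mat gray;
        cv::cvtColor(targetBgr, gray, cv::COLOR_BGR2GRAY);  // assumed stand-in for formula (1)
        return gray;
    }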
  • Step 813 Perform Laplacian transformation on the grayscale image to obtain a transformed image.
• The Laplacian transformation, for example, first uses the Sobel operator to calculate the second-order differences in the x and y directions, and then sums them.
• For the formula, please refer to the above formula (2).
  • the transformed image shown in FIG. 11 can be obtained.
• 1101 in Fig. 11 is the transformed image collected under normal conditions, 1102 is the transformed image when the camera lens is contaminated with water mist, 1103 is the transformed image when the camera lens is stained, and 1104 is the transformed image when the camera lens is occluded.
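• A sketch of Step 813 is shown below: the second-order differences in x and y are computed with the Sobel operator and summed, as described above; cv::Laplacian would be an equivalent shortcut. Formula (2) itself is not reproduced, and the conversion back to 8-bit is an assumption made for later thresholding.

    #include <opencv2/imgproc.hpp>

    // Step 813 (sketch): Laplacian transformation of the grayscale image.
    cv::Mat laplacianTransform(const cv::Mat& gray) {
        cv::Mat dxx, dyy, lap, lap8u;
        cv::Sobel(gray, dxx, CV_64F, 2, 0);  // second-order difference in x
        cv::Sobel(gray, dyy, CV_64F, 0, 2);  // second-order difference in y
        cv::add(dxx, dyy, lap);              // sum of the two second-order differences
        cv::convertScaleAbs(lap, lap8u);     // 8-bit magnitude (assumption for display/thresholding)
        return lap8u;
    }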
  • Step 814 Perform filtering processing on the transformed image to obtain a filtered image.
• The filtering process may include, but is not limited to, a morphological closing operation. A morphological closing operation performs a dilation operation first and then an erosion operation. The specific processes of the dilation operation and the erosion operation are the same as those in the above-mentioned embodiment and are not repeated here.
  • the filtered image shown in FIG. 12 can be obtained.
• 1201 in Fig. 12 is the filtered image collected under normal conditions, 1202 is the filtered image when the camera lens is contaminated with water mist, 1203 is the filtered image when the camera lens is stained, and 1204 is the filtered image when the camera lens is occluded.
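• A sketch of Step 814 follows: a morphological closing, i.e. dilation followed by erosion, applied to the transformed image; the 5x5 rectangular kernel is an assumption, as the disclosure does not fix the structuring element.

    #include <opencv2/imgproc.hpp>

    // Step 814 (sketch): morphological closing of the transformed image.
    cv::Mat closeFilter(const cv::Mat& transformed) {
        const cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
        cv::Mat filtered;
        cv::morphologyEx(transformed, filtered, cv::MORPH_CLOSE, kernel);  // dilate, then erode
        return filtered;
    }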
  • Step 815 Perform binarization processing on the filtered image to obtain a binarized image.
  • Step 816 Invert the pixel values of the binarized image to obtain an inverse binarized image.
  • the formula for performing the binarization processing and the inversion operation on the filtered image may refer to the formula (5) in the above embodiment, which will not be repeated here.
  • the inverse binarized image shown in FIG. 13 can be obtained.
• 1301 in Figure 13 is the inverse binarized image under normal conditions, 1302 is the inverse binarized image when the camera lens is contaminated with water mist, 1303 is the inverse binarized image when the camera lens is stained, and 1304 is the inverse binarized image when the camera lens is occluded.
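• A sketch of Steps 815-816 is given below: the filtered image is thresholded and inverted in one call, so that low-response (blurred) pixels become foreground; the threshold value 10 is an assumption standing in for the parameter of formula (5).

    #include <opencv2/imgproc.hpp>

    // Steps 815-816 (sketch): binarization followed by inversion of the filtered image.
    cv::Mat inverseBinarize(const cv::Mat& filtered) {
        cv::Mat invBin;
        cv::threshold(filtered, invBin, 10, 255, cv::THRESH_BINARY_INV);  // threshold value assumed
        return invBin;
    }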
  • Step 817 Find out the pixel points whose pixel value meets the preset pixel condition from the inverse binarized image, so as to form several candidate abnormal regions.
  • contour region search is performed on the inverse binary image to obtain independent individual abnormal regions.
  • the specific implementation can use the cv::findContours function in the OpenCV open source computer vision library.
• The areas framed by solid line boxes are the candidate abnormal areas in different scenarios.
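• Step 817 can be sketched with the cv::findContours call mentioned above; the retrieval and approximation modes chosen here are assumptions.

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Step 817 (sketch): each external contour of the inverse binarized image
    // is treated as one candidate abnormal region.
    std::vector<std::vector<cv::Point>> findCandidateRegions(const cv::Mat& invBin) {
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(invBin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return contours;
    }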
  • the abnormal area filtering module 82 may perform the following steps:
  • Step 821 Obtain the first area of each candidate abnormal region.
• Step 822 From the several candidate abnormal areas, select at least one candidate abnormal area whose first area satisfies the preset area condition as a pending abnormal area.
• The preset area condition is that the first area is greater than the first preset area threshold. That is, the candidate abnormal contours can be filtered by area, and regions whose area is too small can be discarded, to obtain the undetermined abnormal areas.
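• A sketch of Steps 821-822 follows, using cv::contourArea for the first area; the concrete threshold value is an assumption and stands in for the first preset area threshold.

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Steps 821-822 (sketch): keep candidate regions whose first area exceeds the
    // first preset area threshold; they become the pending (undetermined) abnormal areas.
    std::vector<std::vector<cv::Point>> filterByArea(
            const std::vector<std::vector<cv::Point>>& candidates,
            double firstPresetAreaThreshold) {   // e.g. 400.0, an assumed value
        std::vector<std::vector<cv::Point>> pending;
        for (const auto& contour : candidates) {
            if (cv::contourArea(contour) > firstPresetAreaThreshold)
                pending.push_back(contour);
        }
        return pending;
    }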
• Step 823 Based on the pixel position information of the undetermined abnormal area, generate at least one mask corresponding to the undetermined abnormal area, wherein the at least one mask includes an area mask and/or a boundary mask.
• Step 824 In the case where the at least one mask includes an area mask, obtain a saturation image corresponding to the target image, and use the area mask corresponding to the undetermined abnormal area to obtain, in the saturation image, the first to-be-counted area corresponding to the undetermined abnormal area; perform statistics on the saturation of the first to-be-counted area to obtain at least one first statistical value of the undetermined abnormal area.
  • the saturation information of the target image can be calculated according to formula (6) to obtain a saturation image.
  • the first statistical value may be one of the mean value and the variance of the saturation of the undetermined abnormal area.
• Step 825 Obtain, by using the boundary mask corresponding to the undetermined abnormal area, the second to-be-counted area corresponding to the undetermined abnormal area in the target image; perform statistics on the pixel values of the second to-be-counted area to obtain at least one second statistical value of the undetermined abnormal area.
  • the second statistical value includes at least one of a mean and a variance.
  • Step 826 If it is determined that at least one statistical value of the pending abnormal area satisfies the preset statistical condition, determine the pending abnormal area as the final abnormal area.
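• A combined sketch of Steps 823-826 for the area-mask branch is given below; the use of the HSV S channel as the saturation image of formula (6), the choice of mean and standard deviation as the statistics, and the threshold parameters are all assumptions.

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Steps 823-826 (sketch, area-mask branch): build a region mask from the
    // undetermined abnormal area, gather saturation statistics inside it, and
    // decide whether it becomes a final abnormal area.
    bool isFinalAbnormalRegion(const cv::Mat& targetBgr,
                               const std::vector<cv::Point>& pendingRegion,
                               double meanThreshold, double stddevThreshold) {
        // Region mask generated from the pixel positions of the undetermined abnormal area.
        cv::Mat mask = cv::Mat::zeros(targetBgr.size(), CV_8UC1);
        std::vector<std::vector<cv::Point>> wrapped{pendingRegion};
        cv::drawContours(mask, wrapped, 0, cv::Scalar(255), cv::FILLED);

        // Saturation image corresponding to the target image (HSV S channel assumed).
        cv::Mat hsv, channels[3];
        cv::cvtColor(targetBgr, hsv, cv::COLOR_BGR2HSV);
        cv::split(hsv, channels);

        // First statistical values: mean and standard deviation of the saturation
        // inside the first to-be-counted area.
        cv::Scalar mean, stddev;
        cv::meanStdDev(channels[1], mean, stddev, mask);

        // Preset statistical condition (assumed form): each statistic exceeds its threshold.
        return mean[0] > meanThreshold && stddev[0] > stddevThreshold;
    }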
  • the state analysis processing module 83 may perform the following steps:
  • Step 831 Obtain the second area of the final abnormal area.
• The cv::contourArea function in the OpenCV open source computer vision library can be used to calculate the second area of the final abnormal area.
  • Step 832 Determine whether the second area of the final abnormal area satisfies the first preset area condition or the second preset area condition.
  • the first preset area condition is that the second area is greater than the second preset area threshold.
• The second preset area condition is that the second area is greater than the third preset area threshold and smaller than the second preset area threshold, wherein the second preset area threshold is greater than the third preset area threshold.
  • Step 833 If the first preset area condition is satisfied, it is determined that the target image is in the first abnormal state; if the second preset area condition is satisfied, it is determined that the target image is in the second abnormal state.
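• The per-image decision of Steps 831-833 can be sketched as follows; the function and enum names are illustrative, and the thresholds follow the relation stated above (the second preset area threshold is greater than the third).

    // Steps 831-833 (sketch): classify one target image from the second area of
    // its final abnormal area. Names and concrete values are illustrative.
    enum class ImageState { Normal, FirstAbnormal, SecondAbnormal };

    ImageState classifyImage(double secondArea,
                             double secondPresetAreaThreshold,   // larger threshold
                             double thirdPresetAreaThreshold) {  // smaller threshold
        if (secondArea > secondPresetAreaThreshold)
            return ImageState::FirstAbnormal;    // first preset area condition
        if (secondArea > thirdPresetAreaThreshold && secondArea < secondPresetAreaThreshold)
            return ImageState::SecondAbnormal;   // second preset area condition
        return ImageState::Normal;
    }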
• Step 834 When it is detected that the target images of the first preset number of consecutive frames are all in the first abnormal state, determine that the camera lens is in the first abnormal state.
• In the above manner, the abnormal areas existing in the target image can be determined according to the blur condition of the target image, realizing abnormal area identification of the target image, so that subsequent analysis can be performed based on the obtained final abnormal area.
• Various abnormal conditions of the lens, such as water mist, smudges, and occlusion, often cause blurring of some areas in the target image. Therefore, blur detection of the image can be used to detect all of the above abnormal conditions, thereby improving detection breadth.
  • FIG. 15 is a schematic frame diagram of an embodiment of a state detection apparatus for a camera lens according to an embodiment of the present disclosure.
• the detection device 70 includes an area detection part 71 and a state analysis part 72.
  • the area detection section 71 is configured to perform abnormality detection on the target image captured by the camera to obtain the final abnormal area in the target image.
  • the state analysis section 72 is configured to analyze the final abnormal area to determine whether the lens of the camera is in an abnormal state.
• the area detection part 71 is also configured to perform blur detection on the target image captured by the camera to obtain several candidate abnormal areas in the target image, and to select at least one candidate abnormal area from the several candidate abnormal areas as the final abnormal area.
• the region detection part 71 is also configured to perform a preset transformation on the target image captured by the camera to obtain a transformed image, wherein the pixel value of each pixel in the transformed image can reflect the blur information of the transformed image; and to identify the candidate abnormal regions based on the pixel values of the transformed image.
  • the region detection part 71 is further configured to preprocess the target image to obtain a preprocessed image, and to perform Laplace transform on the preprocessed image to obtain a transformed image.
  • the region detection part 71 is further configured to perform binarization processing based on the transformed image to obtain a binarized image; to find out the pixel points whose pixel value meets the preset pixel condition from the binarized image to form a candidate abnormal region.
  • the region detection part 71 is further configured to perform grayscale processing on the target image to obtain a preprocessed image.
  • the region detection part 71 is further configured to perform filtering processing on the transformed image to obtain a filtered image.
  • the filtering process may be, but not limited to, a morphological closing operation.
• the region detection section 71 is further configured to invert the pixel values of the binarized image to obtain an inverse binarized image, and to find out, from the inverse binarized image, the pixel points whose pixel values satisfy the preset pixel condition, so as to form the candidate abnormal areas.
• the area detection part 71 is further configured to obtain the first area of each candidate abnormal area; to select, from the several candidate abnormal areas, candidate abnormal areas whose first area satisfies the preset area condition as pending abnormal areas; and to determine the final abnormal area based on the at least one pending abnormal area.
  • the above-mentioned preset area condition is that the first area is greater than the first preset area threshold.
• the region detection part 71 is further configured to determine at least one statistical value of the undetermined abnormal area in the target image, wherein the at least one statistical value of the undetermined abnormal area includes at least one first statistical value obtained by counting the saturation of the undetermined abnormal area, and/or at least one second statistical value obtained by counting the pixel values of the undetermined abnormal area; and, if it is determined that the at least one statistical value of the undetermined abnormal area satisfies the preset statistical conditions, to determine the undetermined abnormal area as the final abnormal area.
  • the above-mentioned at least one statistical value includes at least one of mean and variance.
  • the above-mentioned preset statistical conditions include: each statistical value of the undetermined abnormal area is greater than a preset threshold corresponding to the statistical value.
  • the area detection part 71 is further configured to generate at least one mask corresponding to the undetermined abnormal area based on the pixel position information of the undetermined abnormal area, wherein the at least one mask includes an area mask and/or a boundary mask;
• in the case where the at least one mask includes an area mask, obtain a saturation image corresponding to the target image, use the area mask corresponding to the undetermined abnormal area to obtain, in the saturation image, the first to-be-counted area corresponding to the undetermined abnormal area, and perform statistics on the saturation of the first to-be-counted area to obtain at least one first statistical value of the undetermined abnormal area; and/or, in the case where the at least one mask includes a boundary mask, use the boundary mask corresponding to the undetermined abnormal area to obtain, in the target image, the second to-be-counted area corresponding to the undetermined abnormal area, and perform statistics on the pixel values of the second to-be-counted area to obtain at least one second statistical value of the undetermined abnormal area.
• the state analysis part 72 is further configured to obtain the second area of the final abnormal area; to determine whether the second area of the final abnormal area satisfies the preset area condition, and if so, to determine that the target image is in an abnormal state; and, when it is detected that at least a second preset number of frames among the target images of the consecutive first preset number of frames are in an abnormal state, to determine that the camera lens is in an abnormal state; wherein the first preset number and the second preset number are positive integers.
• the state analysis part 72 is further configured to judge whether the second area of the final abnormal area satisfies the first preset area condition or the second preset area condition; to determine that the target image is in the first abnormal state if the first preset area condition is met; and to determine that the target image is in the second abnormal state if the second preset area condition is met.
• the state analysis part 72 is further configured to determine that the camera lens is in the first abnormal state when it is detected that the target images of the consecutive first preset number of frames are all in the first abnormal state; and to determine that the camera lens is in the second abnormal state when the target images of the consecutive first preset number of frames are not all in the first abnormal state and at least a second preset number of frames are in the first abnormal state or the second abnormal state.
  • FIG. 16 is a schematic frame diagram of an embodiment of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 80 includes a memory 81 and a processor 82 coupled to each other, and the processor 82 is configured to execute the computer program stored in the memory 81 to implement the steps of any of the above embodiments of the camera lens state detection method.
  • the electronic device 80 may include, but is not limited to, a microcomputer and a server.
  • the electronic device 80 may also include mobile devices such as a notebook computer and a tablet computer, which are not limited herein.
  • the processor 82 is configured to control itself and the memory 81 to implement the steps of any of the above embodiments of the camera lens state detection method.
• the processor 82 may also be referred to as a CPU (Central Processing Unit).
  • the processor 82 may be an integrated circuit chip with signal processing capability.
• the processor 82 can also be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
• the processor 82 may also be jointly implemented by a plurality of integrated circuit chips.
  • FIG. 17 is a schematic diagram of a framework of an embodiment of a computer-readable storage medium according to an embodiment of the present disclosure.
  • the computer-readable storage medium 90 stores a computer program 901 that can be executed by the processor, and the computer program 901 is used to implement the steps of any of the foregoing embodiments of the camera lens state detection method.
• An embodiment of the present disclosure further provides a computer program including computer-readable code; when the computer-readable code is executed in an electronic device, a processor in the electronic device executes the steps of any of the above embodiments of the camera lens state detection method.
  • the functions or included parts of the apparatus, device, or medium provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments, and the specific implementation may refer to the descriptions in the above method embodiments. For the sake of brevity, details are not repeated here.
  • the disclosed method and apparatus may be implemented in other manners.
  • the device implementations described above are only illustrative, for example, the division of parts or units is only a logical function division, and other divisions may be used in actual implementation, for example, units or components may be combined or integrated to another system, or some features can be ignored, or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, which may be in electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, and can also be implemented in the form of software functional units.
  • the integrated unit if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
• a computer-readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods in the various implementation manners of the embodiments of the present disclosure.
• the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
PCT/CN2021/088211 2020-10-28 2021-04-19 相机镜头的状态检测方法、装置、设备及存储介质 WO2022088620A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020217037804A KR20220058843A (ko) 2020-10-28 2021-04-19 카메라 렌즈의 상태 검출 방법, 장치, 기기 및 저장 매체
JP2021565780A JP2023503749A (ja) 2020-10-28 2021-04-19 カメラレンズの状態検出方法、装置、機器及び記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011172425.2A CN112348784A (zh) 2020-10-28 2020-10-28 相机镜头的状态检测方法、装置、设备及存储介质
CN202011172425.2 2020-10-28

Publications (1)

Publication Number Publication Date
WO2022088620A1 true WO2022088620A1 (zh) 2022-05-05

Family

ID=74358940

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/088211 WO2022088620A1 (zh) 2020-10-28 2021-04-19 相机镜头的状态检测方法、装置、设备及存储介质

Country Status (4)

Country Link
JP (1) JP2023503749A (ko)
KR (1) KR20220058843A (ko)
CN (1) CN112348784A (ko)
WO (1) WO2022088620A1 (ko)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348784A (zh) * 2020-10-28 2021-02-09 北京市商汤科技开发有限公司 相机镜头的状态检测方法、装置、设备及存储介质
CN113596420B (zh) * 2021-07-28 2022-12-09 歌尔科技有限公司 投影仪镜片的检测方法、装置、投影仪及可读存储介质
CN113658169A (zh) * 2021-08-26 2021-11-16 歌尔科技有限公司 图像斑点检测方法、设备、介质及计算机程序产品
CN113727097B (zh) * 2021-08-31 2023-07-07 重庆紫光华山智安科技有限公司 拍摄设备状态确认方法、系统、设备及介质
CN113884123A (zh) * 2021-09-23 2022-01-04 广州小鹏汽车科技有限公司 一种传感器校验方法及装置、车辆、存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107404647A (zh) * 2016-05-20 2017-11-28 中兴通讯股份有限公司 镜头状态检测方法及装置
JP2018132861A (ja) * 2017-02-14 2018-08-23 クラリオン株式会社 車載用撮像装置
CN108986097A (zh) * 2018-08-23 2018-12-11 上海小萌科技有限公司 一种镜头起雾状态检测方法、计算机装置及可读存储介质
CN110632077A (zh) * 2018-06-01 2019-12-31 发那科株式会社 视觉传感器的镜头或镜头盖的异常检测系统
CN110992327A (zh) * 2019-11-27 2020-04-10 北京达佳互联信息技术有限公司 镜头脏污状态的检测方法、装置、终端及存储介质
CN112348784A (zh) * 2020-10-28 2021-02-09 北京市商汤科技开发有限公司 相机镜头的状态检测方法、装置、设备及存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110544211B (zh) * 2019-07-26 2024-02-09 纵目科技(上海)股份有限公司 一种镜头付着物的检测方法、系统、终端和存储介质


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998317A (zh) * 2022-07-18 2022-09-02 福思(杭州)智能科技有限公司 镜头遮挡检测方法、装置、摄像装置和存储介质
CN114998317B (zh) * 2022-07-18 2022-11-04 福思(杭州)智能科技有限公司 镜头遮挡检测方法、装置、摄像装置和存储介质
CN114943938A (zh) * 2022-07-26 2022-08-26 珠海视熙科技有限公司 客流统计方法、装置、系统及介质
CN115081957A (zh) * 2022-08-18 2022-09-20 山东超华环保智能装备有限公司 一种危废暂存及监测的危废管理平台
CN115081957B (zh) * 2022-08-18 2022-11-15 山东超华环保智能装备有限公司 一种危废暂存及监测的危废管理平台
CN115379208A (zh) * 2022-10-19 2022-11-22 荣耀终端有限公司 一种摄像头的测评方法及设备
CN116883446A (zh) * 2023-09-08 2023-10-13 鲁冉光电(微山)有限公司 一种车载摄像头镜片碾磨程度实时监测系统
CN116883446B (zh) * 2023-09-08 2023-11-21 鲁冉光电(微山)有限公司 一种车载摄像头镜片碾磨程度实时监测系统

Also Published As

Publication number Publication date
CN112348784A (zh) 2021-02-09
JP2023503749A (ja) 2023-02-01
KR20220058843A (ko) 2022-05-10


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021565780

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884363

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884363

Country of ref document: EP

Kind code of ref document: A1