CN116977895A - Stain detection method and device for universal camera lens and computer equipment - Google Patents

Stain detection method and device for universal camera lens and computer equipment

Info

Publication number
CN116977895A
Authority
CN
China
Prior art keywords
image
image block
camera lens
index
stain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310787895.7A
Other languages
Chinese (zh)
Inventor
周奇明
姚卫忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Huanuokang Technology Co ltd
Original Assignee
Zhejiang Huanuokang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Huanuokang Technology Co ltd filed Critical Zhejiang Huanuokang Technology Co ltd
Priority to CN202310787895.7A priority Critical patent/CN116977895A/en
Publication of CN116977895A publication Critical patent/CN116977895A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a stain detection method and device for a universal camera lens and computer equipment. The method comprises the following steps: obtaining an image to be detected and segmenting it into a plurality of image blocks of a preset size; calculating various indexes of the statistical value of each image block; constructing, based on the indexes, a four-dimensional vector describing the change trend of the image blocks within a time threshold; and inputting the four-dimensional vector into a preset time sequence waveform depth convolution neural network to obtain an index score for each image block, and obtaining a stain result for the camera lens based on the index scores. The method covers the full detection area and analyzes the change trend of the image blocks over the time sequence, so stains can be judged accurately and the stained regions output; this addresses the poor detection performance obtained when stains that are difficult to identify are present on the camera lens, and improves the accuracy of stain detection.

Description

Stain detection method and device for universal camera lens and computer equipment
Technical Field
The present application relates to the field of video image processing technologies, and in particular, to a stain detection method and apparatus for a universal camera lens, and a computer device.
Background
During use, a camera lens can pick up different kinds of stains due to external factors: a daily monitoring camera can be stained with water marks, a medical endoscope lens can be stained with human tissue, and a vehicle-mounted lens can be stained with various kinds of sludge. Stains on the lens affect both the imaging quality and the imaging content of the camera.
In order to automatically detect stains on a camera lens, existing technologies either use an image segmentation technique to obtain a region mask for the image to be detected obtained from the camera and calculate the invariant region of the mask to obtain the lens stain condition, or perform edge extraction on the image captured by the camera to identify the stain region.
However, these methods have drawbacks: stains vary widely in type and form, some stains do not change the region mask, and the edges of some stains are not obvious, so not all stained regions can be detected and the reliability of stain detection is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a stain detection method, apparatus, and computer device for a general-purpose camera lens that can improve the stain detection performance of the camera lens.
In a first aspect, the present application provides a method for stain detection for a universal camera lens, the method comprising:
obtaining an image to be detected, and cutting the image to be detected into a plurality of image blocks with preset sizes;
calculating various indexes of the statistical value of the image block; based on the indexes, constructing a four-dimensional vector describing the change trend of the image block in a time threshold;
and inputting the four-dimensional vector into a preset time sequence waveform depth convolution neural network to obtain an index score of the image block, and obtaining a stain result of the camera lens based on the index score.
In one embodiment, the acquiring the image to be measured includes:
acquiring a video frame sequence shot by a camera, and preprocessing the frame images in the video frame sequence to obtain the image to be detected.
In one embodiment, the segmenting the image to be measured into a plurality of image blocks with preset sizes includes:
creating an image pyramid for the image to be detected, selecting images with corresponding level sizes from the image pyramid based on the hardware platform resource level, and performing segmentation processing to obtain a plurality of image blocks with preset sizes.
In one embodiment, the calculating the various indexes of the statistical value of the image block, based on the various indexes, constructs a four-dimensional vector describing the variation trend of the image block within a time threshold, including:
calculating various indexes of the statistical value of the image block to be measured, and constructing a two-dimensional vector describing the variation trend of the image block within a time threshold based on the calculation result of each index;
stacking the two-dimensional vectors according to the index number to obtain a three-dimensional vector;
stacking the three-dimensional vectors according to the number of the image blocks to obtain four-dimensional vectors; the number of the image blocks is the total number of the image blocks cut out from one image to be detected.
In one embodiment, the calculating each index of the statistic value of the image block to be measured, based on the calculation result of each index, constructs a two-dimensional vector describing the variation trend of the image block within a time threshold, including:
calculating various indexes of the statistical value of the image block to be detected;
constructing a corresponding index waveform diagram according to the dimension of time and value of each index based on the calculated value of the index;
and converting each index waveform diagram into a two-dimensional vector of the corresponding image block within a time threshold.
In one embodiment, the statistical value is a pixel gray value, and the index includes at least one index of a mean, a variance, a maximum value, a minimum value, a skewness, and a kurtosis.
In one embodiment, the inputting the four-dimensional vector into the preset time sequence waveform depth convolution neural network includes:
and carrying out feature normalization processing on the four-dimensional vector, and inputting the four-dimensional vector subjected to the normalization processing into a preset time sequence waveform depth convolution neural network.
In one embodiment, the obtaining the stain result of the camera lens based on the index score includes:
when the index score of the image block is higher than a preset threshold value, judging that stains exist in the area corresponding to the image block;
and obtaining a stain result of the camera lens based on the judging results of all the areas corresponding to the images.
In a second aspect, the present application also provides a stain detection device for a universal camera lens, the device comprising:
the image block segmentation module is used for acquiring an image to be detected and segmenting the image to be detected into a plurality of image blocks with preset sizes;
the index calculation module is used for calculating various indexes of the statistical value of the image block; based on the indexes, constructing a four-dimensional vector describing the change trend of the image block in a time threshold;
and the stain judging module is used for inputting the four-dimensional vector into a preset time sequence waveform depth convolution neural network to obtain an index score of the image block, and obtaining a stain result of the camera lens based on the index score.
In a third aspect, the present application also provides a computer device comprising a memory storing a computer program and a processor implementing the method for stain detection for a universal camera lens of any of the first aspects above when the computer program is executed by the processor.
According to the stain detection method, device and computer equipment for the universal camera lens, the image to be detected is segmented, the indexes of the statistical value of the segmented image blocks are calculated, and a four-dimensional vector describing the change trend of the image blocks within the time threshold is constructed based on the indexes. The four-dimensional vector is used as the input data of the time sequence waveform depth convolution neural network to detect the image blocks, and the stain result of the camera lens is obtained based on the detection result of each image block. The detection therefore covers the full image area and analyzes the change trend of the image blocks over the time sequence, so that stains are judged more accurately and the stained regions are output; this solves the problem of poor detection performance when stains that are difficult to identify are present on the camera lens, and improves the accuracy of stain detection.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a block diagram of the hardware architecture of a method for stain detection for a universal camera lens in one embodiment;
FIG. 2 is a flow chart of a method for detecting stains in a universal camera lens according to an embodiment;
FIG. 3 is a flow chart of a method for stain detection for a universal camera lens in a preferred embodiment;
fig. 4 is a block diagram of a stain detection device for a universal camera lens in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by a person of ordinary skill in the art based on the embodiments provided by the present application without making any inventive effort, are intended to fall within the scope of the present application.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or similar computing device. For example, running on a terminal, fig. 1 is a block diagram of the hardware structure of the stain detection method for a general-purpose camera lens of the present embodiment. As shown in fig. 1, the terminal may include one or more (only one is shown in fig. 1) processors 102 and a memory 104 for storing data, wherein the processors 102 may include, but are not limited to, a microprocessor MCU, a programmable logic device FPGA, or the like. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and is not intended to limit the structure of the terminal. For example, the terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the stain detection method for a general-purpose camera lens in the present embodiment, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-described method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Card, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for detecting stains in a universal camera lens is provided, and fig. 2 is a schematic flow chart of the method for detecting stains in a universal camera lens according to this embodiment, as shown in fig. 2, the flow includes the following steps:
step S210, obtaining an image to be detected, and cutting the image to be detected into a plurality of image blocks with preset sizes.
Specifically, a video frame image shot by the camera is read and preprocessing operations such as graying and Gaussian filtering are performed on it. An image pyramid is then created from the preprocessed video frame image, an image of an appropriate level size is selected according to the hardware platform resource level, and that image is divided into N image blocks of a preset pixel size, where N is a positive integer greater than 0. The video frame image shot by the camera may be an image shot by the camera lens of a medical endoscope, by the camera lens of a vehicle-mounted camera, or by a camera lens in another scene.
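As a minimal illustration of the block-splitting part of step S210, the Python sketch below cuts a preprocessed grayscale frame into non-overlapping tiles of a preset size; the block size of 64 and the handling of border remainders are assumptions made for this example, not values prescribed by the application.

```python
import numpy as np

def split_into_blocks(image, block_size=64):
    """Split a preprocessed frame into non-overlapping block_size x block_size tiles.

    Border remainders that do not fill a whole tile are simply dropped here;
    the application only requires blocks of a preset pixel size.
    """
    h, w = image.shape[:2]
    blocks = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            blocks.append(image[y:y + block_size, x:x + block_size])
    return blocks  # N image blocks, N > 0
```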
Step S220, calculating various indexes of the statistical value of the image block; based on various indexes, a four-dimensional vector describing the change trend of the image block in the time threshold is constructed.
The statistical value may be replaced freely, and may be a gray value, a gradient value, or the like. The index includes mean, variance, maximum, minimum, skewness, kurtosis, etc.
Illustratively, the mean, variance, maximum, minimum, skewness and kurtosis of the pixel gray values of each image block are counted. Step S210 is executed repeatedly within a certain time threshold range to obtain image blocks, and the various indexes of the gray values of each image block are calculated; after the preset time threshold is reached, a four-dimensional vector describing the change trend of the image blocks within the time threshold is constructed based on the various indexes of the gray values.
Specifically, based on various indexes of the gray value, the following method can be adopted to construct a four-dimensional vector describing the change trend of the image block in the time threshold: acquiring various indexes of the image block in a time threshold; constructing a two-dimensional vector of each index according to the dimension of time and value; stacking the two-dimensional vectors of each index to obtain a three-dimensional vector of each image block; and stacking the three-dimensional vectors of each image block to obtain a four-dimensional vector of the current image to be detected.
Step S230, inputting the four-dimensional vector into a preset time sequence waveform depth convolution neural network to obtain index scores of the image blocks, and obtaining a stain result of the camera lens based on the index scores.
Specifically, the four-dimensional vector is subjected to feature normalization and input into the time sequence waveform depth convolution neural network, which outputs N scores, where N is the total number of image blocks and each score is the probability that the current image block contains a stain. When the score of an image block is higher than the threshold value, the corresponding image block area is considered to contain a stain; if several such blocks exist, stains exist in several areas.
The time sequence waveform depth convolution neural network is a special network for processing time sequence waveforms and can be obtained through training. The training data are index data of dimension N×C×H×W, where N is the number of blocks in one image, C is the number of indexes, and H×W = T_end − T_start, i.e. the time interval threshold. In the training stage, image blocks without stains are labeled 0 and image blocks with stains are labeled 1. The loss function adopted is the sigmoid (logistic) loss:
J(θ) = −(1/m) · Σ_{i=1..m} [ y(i)·log(h_θ(x(i))) + (1 − y(i))·log(1 − h_θ(x(i))) ]
where m represents the number of training samples, y(i) represents the true label of the i-th sample, x(i) represents the feature vector of the i-th sample, h_θ(x(i)) represents the predicted probability that the i-th sample is a positive example, and θ represents the model parameters.
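To make the training objective concrete, here is a minimal NumPy sketch of the sigmoid loss written out above; the clipping constant eps is an implementation detail assumed only for numerical stability and is not part of the application.

```python
import numpy as np

def sigmoid_loss(y_true, y_prob, eps=1e-7):
    """Binary cross-entropy over m samples.

    y_true: array of 0/1 labels (0 = no stain, 1 = stain)
    y_prob: array of predicted probabilities h_theta(x) in (0, 1)
    """
    y_prob = np.clip(y_prob, eps, 1.0 - eps)  # avoid log(0)
    m = y_true.shape[0]
    return -np.sum(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob)) / m
```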
For most image blocks, the index changes of a block without stains differ greatly from those of a block with stains, and this difference is amplified by the deep neural network, making the algorithm more robust and more accurate.
In this embodiment, the image to be detected is segmented, various indexes of the statistical values of the segmented image blocks are calculated, a four-dimensional vector describing the change trend of the image blocks within a time threshold is constructed based on those indexes, and the four-dimensional vector is used as the input of the time sequence waveform depth convolution neural network to detect the image blocks; the stain result of the camera lens is obtained from the detection result of each image block, which improves the accuracy and reliability of stain identification. Compared with the prior art, there is no need to extract image edges or to compute mask-invariant regions; each index is calculated block by block, which gives strong robustness. In addition, for scenes in which the photographed object moves noticeably relative to the stains, the method does not need to pay attention to the photographed object: whether the lens has stains can be judged solely from the index changes of all image blocks during shooting, so the algorithm logic is simpler and the detection accuracy higher.
In one embodiment, based on the step S210, the obtaining the image to be measured may specifically include the following steps:
step S211, a video frame sequence shot by a camera is obtained, and frame images in the video frame sequence are preprocessed to obtain images to be detected.
Specifically, a video frame sequence within a time threshold range is acquired, and each frame image in the video frame sequence is preprocessed, wherein the preprocessing comprises graying and Gaussian filtering.
In this embodiment, a sequence of video frames is acquired for subsequent temporal analysis of the image and the image is preprocessed to enhance the detectability of the relevant information and simplify the data.
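For reference, a minimal sketch of this preprocessing with OpenCV, converting one frame to grayscale and applying Gaussian filtering; the kernel size and sigma are assumed values rather than parameters prescribed by the application.

```python
import cv2

def preprocess_frame(frame_bgr, ksize=(5, 5), sigma=1.0):
    """Graying + Gaussian filtering of one video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, ksize, sigma)
```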
In one embodiment, based on the step S210, the method for segmenting the image to be measured into a plurality of image blocks with a preset size may specifically include the following steps:
step S212, an image pyramid is created for the image to be detected, images with the corresponding level size are selected from the image pyramid based on the hardware platform resource level, and segmentation processing is carried out, so that a plurality of image blocks with preset sizes are obtained.
An image pyramid is a set of sub-images of one image at different resolutions. It comprises several levels: the bottom level is the original, largest image, each higher level is smaller, and the stacked levels take the form of a pyramid.
An image of the level corresponding to the resource level of the hardware platform executing the stain detection method for the universal camera lens is selected for subsequent processing: when the resource level of the hardware platform is higher, a higher-resolution level image is selected for the subsequent segmentation processing, and when the resource level is lower, a lower-resolution level image is selected. In this way the hardware platform resources are used to the greatest extent and the efficiency of image processing is improved; the processing can also be accelerated in parallel on a graphics processor (Graphics Processing Unit, GPU).
When selecting the corresponding level images for processing, a single layer may be selected for segmentation and the subsequent processing operations, or at least two layers may be selected. A single layer involves less data and is processed faster, while at least two layers involve more data and can yield a more accurate stain identification result.
In this embodiment, constructing the image pyramid provides the hardware platform with a level image matched to its resource level, so that the hardware platform resources are utilized to the maximum extent and the efficiency and quality of image processing are improved.
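As an illustration of this embodiment, the sketch below builds an image pyramid with OpenCV's pyrDown and maps a hardware resource level to a pyramid level; the number of levels and the mapping rule are assumptions made only for this example.

```python
import cv2

def build_pyramid(image, num_levels=4):
    """Level 0 is the original image; each higher level halves the resolution."""
    pyramid = [image]
    for _ in range(num_levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def select_level(pyramid, resource_level):
    """Higher resource level -> higher-resolution (lower) pyramid level.

    resource_level: 0 (weakest hardware) .. len(pyramid) - 1 (strongest); illustrative only.
    """
    idx = len(pyramid) - 1 - min(resource_level, len(pyramid) - 1)
    return pyramid[idx]
```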
In one embodiment, based on the step S220, each index of the statistical value of the image block is calculated, and based on each index, a four-dimensional vector describing the variation trend of the image block within the time threshold is constructed, which may specifically include the following steps:
step S221, each index of the statistic value of the image block to be measured is calculated, and based on the calculation result of each index, a two-dimensional vector describing the variation trend of the image block within the time threshold is constructed.
The statistical value is the pixel gray value, and the indexes comprise at least one of the mean, the variance var, the maximum max, the skewness and the kurtosis, calculated by the following formulas:
mean = (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} I_{i,j}
var = (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} (I_{i,j} − mean)²
max = max_{i,j} I_{i,j}
skewness = (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} ((I_{i,j} − mean)/σ)³, with σ = sqrt(var)
kurtosis = (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} ((I_{i,j} − mean)/σ)⁴
where mean represents the mean value of the image block, I_{i,j} represents the pixel value in the i-th row and j-th column of the image block, and M and N represent the numbers of rows and columns of the image block, respectively.
Specifically, between the time thresholds T_start and T_end, a waveform diagram is constructed from each index calculation result along the dimensions of time and value: the abscissa of the waveform diagram is time and the ordinate is the index value at that moment. A mean waveform diagram, a variance waveform diagram, a maximum waveform diagram, a skewness waveform diagram and a kurtosis waveform diagram over the time range are thus obtained. Each waveform diagram is converted into a two-dimensional vector of size H×W = T_end − T_start; the value at each position in the H×W two-dimensional vector represents the magnitude of the index value at the corresponding moment. In this example C = 5.
In step S222, the two-dimensional vectors are stacked according to the number of indexes to obtain a three-dimensional vector of size C×H×W, where C is the number of indexes; C = 5 in this example. The C×H×W three-dimensional vector represents the combined changes of the mean, variance var, maximum max, skewness and kurtosis index values of one image block over a period of time.
In step S223, the three-dimensional vectors are stacked according to the number N of image blocks to obtain a four-dimensional vector of size N×C×H×W; the number N of image blocks is the total number of image blocks cut out of one image to be detected. The N×C×H×W four-dimensional vector represents the combined changes of each index of all image blocks over a period of time.
In this embodiment, a two-dimensional vector is constructed from time and index values for each index, and the two-dimensional vectors are stacked according to the number of indexes and the number of image blocks to obtain the four-dimensional vector describing the change trend of the image blocks within the time threshold, which can then be input into the neural network for stain prediction.
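Tying steps S221–S223 together, the sketch below computes the five indexes per block per frame and stacks them into the N×C×H×W tensor; the function names and the reshape of the T-sample waveform into an H×W grid are assumptions, since the application only fixes H×W = T_end − T_start.

```python
import numpy as np

def block_indexes(block):
    """Mean, variance, max, skewness, kurtosis of one grayscale block (C = 5)."""
    x = block.astype(np.float64).ravel()
    mean = x.mean()
    var = x.var()
    std = np.sqrt(var) + 1e-12
    skew = np.mean(((x - mean) / std) ** 3)
    kurt = np.mean(((x - mean) / std) ** 4)
    return np.array([mean, var, x.max(), skew, kurt])

def build_4d_vector(blocks_over_time, h, w):
    """blocks_over_time: list of length T; each entry is the list of N blocks of one frame.

    Returns an array of shape (N, C, H, W) with T = H * W samples per index waveform.
    """
    T = len(blocks_over_time)
    N = len(blocks_over_time[0])
    C = 5
    assert T == h * w, "time threshold must satisfy H x W = T_end - T_start"
    waveforms = np.zeros((N, C, T))
    for t, blocks in enumerate(blocks_over_time):
        for n, block in enumerate(blocks):
            waveforms[n, :, t] = block_indexes(block)
    return waveforms.reshape(N, C, h, w)
```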
In one embodiment, based on the step S230, the four-dimensional vector is input into a preset time sequence waveform depth convolution neural network to obtain an index score of the image block, and a stain result of the camera lens is obtained based on the index score, which specifically may include the following steps:
and step S231, performing feature normalization processing on the four-dimensional vector, and inputting the normalized four-dimensional vector into a preset time sequence waveform depth convolution neural network.
The time sequence waveform depth convolution neural network is a special network for processing time sequence waveforms and can be obtained through training. The training data are index data of dimension N×C×H×W, where N is the number of blocks in one image, C is the number of indexes, and H×W = T_end − T_start, i.e. the time interval threshold. In the training stage, image blocks without stains are labeled 0 and image blocks with stains are labeled 1.
Step S232, obtaining an index score of each image block based on the output result of the network.
Based on the N×C×H×W input data, the network outputs N scores, where N is the total number of image blocks and each score is the probability that the current image block contains a stain.
In step S233, when the index score of the image block is higher than the preset threshold, it is determined that the region corresponding to the image block has stains, and based on the determination results of the regions corresponding to all the images, a stain result of the camera lens is obtained.
In this embodiment, the time sequence waveform depth convolution neural network can learn and amplify the difference, over the time-sequence range, between the mixed index waveform diagrams of stained and unstained image blocks, so that image blocks with stains can be judged more accurately.
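For completeness, a hedged sketch of the inference path in steps S231–S233: per-index feature normalization, a forward pass through an already-trained network (represented here by a placeholder callable `network`), and thresholding of the N scores. The normalization scheme, the default threshold of 0.5 and the function names are assumptions, since the application does not fix them.

```python
import numpy as np

def normalize_features(x):
    """Per-index (channel) zero-mean / unit-variance normalization of an N x C x H x W tensor."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    std = x.std(axis=(0, 2, 3), keepdims=True) + 1e-12
    return (x - mean) / std

def detect_stains(x, network, threshold=0.5):
    """network: a trained time-series waveform CNN returning one probability per block."""
    scores = np.asarray(network(normalize_features(x)))  # shape (N,)
    stained_blocks = np.flatnonzero(scores > threshold)   # indices of blocks judged stained
    return scores, stained_blocks
```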
The present embodiment is described and illustrated below by way of preferred embodiments.
Fig. 3 is a flowchart of the stain detection method for the universal camera lens of the present preferred embodiment.
Step S301, reading a video frame image shot by a camera, and carrying out graying and Gaussian filtering treatment on the video frame image;
step S302, an image pyramid is created for a video frame image, an image with a proper level size is selected according to the resource level of a hardware platform, and the image is divided into image blocks with preset pixel sizes;
step S303, counting various index values of pixel gray of each image block, wherein the various index values comprise: mean, variance, maximum, minimum, skewness, kurtosis;
step S304, after the time threshold is reached, constructing two-dimensional vectors of each index according to the time dimension and the value dimension of each index value of each image block;
step S305, overlapping the two-dimensional vectors of all indexes into three-dimensional vectors, and constructing the index three-dimensional vectors of the image block;
step S306, stacking index three-dimensional vectors of all image blocks to construct index four-dimensional vectors of all image blocks of the current image;
step S307, inputting all four-dimensional vectors into the time sequence waveform depth convolution neural network, outputting index scores of each image block, and judging that the image blocks higher than a threshold value are areas with stains;
step S308, based on the image blocks with index scores higher than the threshold value, outputting image block areas with stains, and obtaining all stain results of the camera lens.
In this preferred embodiment, the change trend of the pixels of each segmented image block of the camera lens is obtained by counting the mixed index waveform diagrams of the statistical values of the image blocks over the time-sequence range, and the four-dimensional vector reflecting that change trend is input into the time sequence waveform depth convolution neural network to detect whether each image block contains stains. The network can learn and amplify the difference between the mixed index waveform diagrams of the gray values of stained and unstained image blocks over the time-sequence range and distinguish stained image blocks more accurately, so the stained area of the camera lens is obtained and missed detections are avoided.
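As a way to see how steps S301–S308 fit together, the driver below chains the helper functions sketched in the earlier embodiments (preprocess_frame, build_pyramid, select_level, split_into_blocks, build_4d_vector, detect_stains); the frame source and the trained network are placeholders, so this is an illustrative pipeline under those assumptions rather than the application's reference implementation.

```python
def run_stain_detection(frames_bgr, network, resource_level=2,
                        block_size=64, h=8, w=8, threshold=0.5):
    """frames_bgr: iterable of T = h * w consecutive BGR frames; network: trained CNN."""
    blocks_over_time = []
    for frame in frames_bgr:                                 # steps S301-S303
        gray = preprocess_frame(frame)
        level = select_level(build_pyramid(gray), resource_level)
        blocks_over_time.append(split_into_blocks(level, block_size))
    x = build_4d_vector(blocks_over_time, h, w)              # steps S304-S306
    scores, stained = detect_stains(x, network, threshold)   # steps S307-S308
    return scores, stained
```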
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited and the steps may be executed in other orders. Moreover, at least some of the steps in those flowcharts may include several sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps or sub-steps.
Based on the same inventive concept, this embodiment further provides a stain detection device for a universal camera lens. The device is used to implement the foregoing embodiments and preferred embodiments, and what has already been described is not repeated. The terms "module", "unit", "sub-unit" and the like used below may refer to a combination of software and/or hardware that performs a predetermined function. Although the device described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
In one embodiment, as shown in fig. 4, there is provided a stain detection device for a universal camera lens, comprising: an image block segmentation module 41, an index calculation module 42, and a stain determination module 43, wherein:
the image block segmentation module 41 is configured to obtain an image to be detected, and segment the image to be detected into a plurality of image blocks with a preset size.
An index calculation module 42, configured to calculate each index of the statistics of the image block; based on various indexes, a four-dimensional vector describing the change trend of the image block in the time threshold is constructed.
The stain judging module 43 is configured to input a four-dimensional vector into a preset time-sequence waveform deep convolution neural network to obtain an index score of the image block, and obtain a stain result of the camera lens based on the index score.
In one embodiment, the image block segmentation module 41 is further configured to acquire a video frame sequence shot by a camera and preprocess the frame images in the video frame sequence to obtain the image to be detected.
In one embodiment, the image block segmentation module 41 is further configured to create an image pyramid for the image to be detected, select an image with a corresponding level size from the image pyramid based on the hardware platform resource level, and perform segmentation processing to obtain a plurality of image blocks with preset sizes.
In one embodiment, the index calculation module 42 is further configured to calculate each index of the statistic value of the image block to be measured, and construct a two-dimensional vector describing the variation trend of the image block within the time threshold based on the calculation result of each index; stacking the two-dimensional vectors according to the index number to obtain a three-dimensional vector; stacking the three-dimensional vectors according to the number of the image blocks to obtain four-dimensional vectors; the number of the image blocks is the total number of the image blocks cut out from one image to be detected.
In one embodiment, the index calculation module 42 is further configured to calculate each index of the statistic value of the image block to be measured; constructing a corresponding index waveform diagram according to the dimension of each index according to time and value based on the calculated value of the index; each index oscillogram is converted into a two-dimensional vector of the corresponding image block within a time threshold.
In one embodiment, the statistics in the index calculation module 42 are pixel gray values, and the index includes at least one of mean, variance, maximum, minimum, skewness, kurtosis.
In one embodiment, the stain determination module 43 is further configured to perform feature normalization processing on the four-dimensional vector, and input the normalized four-dimensional vector into a preset time-series waveform deep convolutional neural network.
In one embodiment, the stain determining module 43 is further configured to determine that a stain exists in an area corresponding to the image block when the index score of the image block is higher than a preset threshold; and obtaining a stain result of the camera lens based on the judging results of the areas corresponding to all the images.
The various modules in the stain detection device for a universal camera lens described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program, the processor implementing the stain detection method for a universal camera lens of any of the embodiments described above when executing the computer program.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor implements the stain detection method for a universal camera lens of any of the above embodiments.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the stain detection method for a universal camera lens of any of the above embodiments.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.

Claims (10)

1. A method for stain detection for a universal camera lens, the method comprising:
obtaining an image to be detected, and cutting the image to be detected into a plurality of image blocks with preset sizes;
calculating various indexes of the statistical value of the image block; based on the indexes, constructing a four-dimensional vector describing the change trend of the image block in a time threshold;
and inputting the four-dimensional vector into a preset time sequence waveform depth convolution neural network to obtain an index score of the image block, and obtaining a stain result of the camera lens based on the index score.
2. The method for stain detection of a universal camera lens of claim 1, wherein the acquiring the image to be detected comprises:
acquiring a video frame sequence shot by a camera, and preprocessing a frame image in the video frame sequence to obtain an image to be detected.
3. The method for detecting stains in a universal camera lens according to claim 1, wherein the slicing the image to be detected into a plurality of image blocks of a preset size comprises:
creating an image pyramid for the image to be detected, selecting images with corresponding level sizes from the image pyramid based on the hardware platform resource level, and performing segmentation processing to obtain a plurality of image blocks with preset sizes.
4. The method according to claim 1, wherein the calculating the indices of the statistics of the image block, based on the indices, constructs a four-dimensional vector describing a trend of the image block over a time threshold, comprises:
calculating various indexes of the statistical value of the image block to be measured, and constructing a two-dimensional vector describing the variation trend of the image block within a time threshold based on the calculation result of each index;
stacking the two-dimensional vectors according to the index number to obtain a three-dimensional vector;
stacking the three-dimensional vectors according to the number of the image blocks to obtain four-dimensional vectors; the number of the image blocks is the total number of the image blocks cut out from one image to be detected.
5. The method for detecting stains in a universal camera lens according to claim 4, wherein the calculating the indexes of the statistics of the image block to be detected, based on the calculation result of each of the indexes, constructs a two-dimensional vector describing the trend of the image block within a time threshold, comprises:
calculating various indexes of the statistical value of the image block to be detected;
constructing a corresponding index waveform diagram according to the dimension of time and value of each index based on the calculated value of the index;
and converting each index waveform diagram into a two-dimensional vector of the corresponding image block within a time threshold.
6. The method for detecting stains in a universal camera lens according to claim 1, wherein the statistical value is a pixel gray value, and the index comprises at least one index of mean, variance, maximum, minimum, skewness, kurtosis.
7. The stain detection method for a universal camera lens of claim 1, wherein the inputting the four-dimensional vector into a pre-set time-series waveform depth convolution neural network comprises:
and carrying out feature normalization processing on the four-dimensional vector, and inputting the four-dimensional vector subjected to the normalization processing into a preset time sequence waveform depth convolution neural network.
8. The method for stain detection of a universal camera lens of claim 1, wherein the deriving a stain result for the camera lens based on the index score comprises:
when the index score of the image block is higher than a preset threshold value, judging that stains exist in the area corresponding to the image block;
and obtaining a stain result of the camera lens based on the judging results of all the areas corresponding to the images.
9. A stain detection device for a universal camera lens, the device comprising:
the image block segmentation module is used for acquiring an image to be detected and segmenting the image to be detected into a plurality of image blocks with preset sizes;
the index calculation module is used for calculating various indexes of the statistical value of the image block; based on the indexes, constructing a four-dimensional vector describing the change trend of the image block in a time threshold;
and the stain judging module is used for inputting the four-dimensional vector into a preset time sequence waveform depth convolution neural network to obtain an index score of the image block, and obtaining a stain result of the camera lens based on the index score.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the stain detection method for a universal camera lens of any of claims 1 to 8 when the computer program is executed.
CN202310787895.7A 2023-06-29 2023-06-29 Stain detection method and device for universal camera lens and computer equipment Pending CN116977895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310787895.7A CN116977895A (en) 2023-06-29 2023-06-29 Stain detection method and device for universal camera lens and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310787895.7A CN116977895A (en) 2023-06-29 2023-06-29 Stain detection method and device for universal camera lens and computer equipment

Publications (1)

Publication Number Publication Date
CN116977895A true CN116977895A (en) 2023-10-31

Family

ID=88480633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310787895.7A Pending CN116977895A (en) 2023-06-29 2023-06-29 Stain detection method and device for universal camera lens and computer equipment

Country Status (1)

Country Link
CN (1) CN116977895A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117893611A (en) * 2024-03-14 2024-04-16 浙江华诺康科技有限公司 Image sensor dirt detection method and device and computer equipment
CN117893611B (en) * 2024-03-14 2024-06-11 浙江华诺康科技有限公司 Image sensor dirt detection method and device and computer equipment

Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
EP3333768A1 (en) Method and apparatus for detecting target
CN109815770B (en) Two-dimensional code detection method, device and system
CN108510504B (en) Image segmentation method and device
CN112329702B (en) Method and device for rapid face density prediction and face detection, electronic equipment and storage medium
CN105447532A (en) Identity authentication method and device
CN110942456B (en) Tamper image detection method, device, equipment and storage medium
CN114549445A (en) Image detection and related model training method, related device, equipment and medium
CN112580458A (en) Facial expression recognition method, device, equipment and storage medium
CN111753775B (en) Fish growth assessment method, device, equipment and storage medium
CN114581709A (en) Model training, method, apparatus, and medium for recognizing target in medical image
CN116524312A (en) Infrared small target detection method based on attention fusion characteristic pyramid network
CN116977895A (en) Stain detection method and device for universal camera lens and computer equipment
CN116452966A (en) Target detection method, device and equipment for underwater image and storage medium
CN111144425B (en) Method and device for detecting shot screen picture, electronic equipment and storage medium
CN111368698A (en) Subject recognition method, subject recognition device, electronic device, and medium
CN112699842A (en) Pet identification method, device, equipment and computer readable storage medium
CN117133041A (en) Three-dimensional reconstruction network face recognition method, system, equipment and medium based on deep learning
CN116740528A (en) Shadow feature-based side-scan sonar image target detection method and system
Chen et al. Color image splicing localization algorithm by quaternion fully convolutional networks and superpixel-enhanced pairwise conditional random field
Yancey Deep Learning for Localization of Mixed Image Tampering Techniques
CN112785550B (en) Image quality value determining method and device, storage medium and electronic device
CN114445788A (en) Vehicle parking detection method and device, terminal equipment and readable storage medium
CN114445916A (en) Living body detection method, terminal device and storage medium
CN112949634A (en) Bird nest detection method for railway contact network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination