CN116152175A - Virtual focus detection method for video monitoring equipment - Google Patents
- Publication number
- CN116152175A (application CN202211676112.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- frequency information
- low
- virtual focus
- moment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/168—Segmentation; Edge detection involving transform domain methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
A virtual focus detection method for video monitoring equipment relates to the field of image processing and comprises the following steps: acquiring a first-moment image and applying mean normalization to obtain a first-moment mean-normalized image; applying a Fourier transform followed by high-pass and low-pass filtering to the first-moment mean-normalized image, then applying an inverse Fourier transform to obtain a first low-frequency information image and a first high-frequency information image; differencing the mean-normalized image against the first low-frequency information image to obtain a first difference map; repeating steps S1-S3 to obtain a second-moment image, a second-moment mean-normalized image, a second low-frequency information image, a second high-frequency information image, and a second difference map; and analyzing the two difference maps together with the high-frequency information images to detect whether a virtual focus (defocus) phenomenon exists. The invention requires little historical data and offers low computational cost, strong noise immunity, wide scene adaptability, and high detection accuracy.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a virtual focus detection method for video monitoring equipment.
Background
Video monitoring equipment is widely deployed in public places such as shopping malls, hospitals, and stations, and its popularization helps better protect people's lives and public property. With continuous technological innovation and development, higher requirements are placed on the monitoring image quality of video monitoring equipment. In actual use, however, loose lenses and ambient temperature changes often blur the monitoring picture and reduce visibility, degrading the monitoring effect to some extent. Meanwhile, video monitoring equipment generally invokes autofocus at regular intervals, controlling the lens according to a focus-search strategy; this mode can disturb the user, carries a probability of focusing failure, and cannot focus at all when the back focus is abnormal. Timely and accurate detection of the virtual focus phenomenon of video monitoring equipment therefore has important practical significance for the security industry. A virtual focus detection technique not only avoids the delays of manual inspection, but also reduces the lifetime impact caused by frequent autofocus and can additionally detect unfocused states caused by back-focus problems.
With the development of image processing technology, image-based virtual focus detection methods have matured; they offer high accuracy, low cost, and easy maintenance, and have therefore become the preferred approach for virtual focus detection of video monitoring equipment. For example, Chinese patent publication No. CN113301324A discloses a virtual focus detection method, device, equipment, and medium based on an image capturing apparatus. The proposed method comprises: acquiring a sharpness value of the current monitoring video of the camera; judging whether this sharpness value meets a preset sharpness threshold; and, if so, judging that the camera exhibits virtual focus. The sharpness value is obtained directly from gradient information of the monitored scene. However, because it computes gradient information directly on the image, the method is easily disturbed by scene and lighting differences, lacks universality, and is generally suitable only for calibrated scenes. For uncalibrated scenes, whose gradient information varies widely, the sharpness threshold is usually an empirical value that must be set in advance; when the scene changes, the threshold is no longer optimal and cannot adapt.
As another example, Chinese patent publication No. CN111275657A discloses a virtual focus detection method, apparatus, and computer-readable medium based on boundary points. The method comprises: determining a set of strong boundary points from the image to be detected; calculating the boundary width of each strong boundary point with a height-retraction algorithm; classifying the points into narrow and wide boundary points according to boundary width; and determining whether the image to be detected is a focused image or a virtual focus image according to the proportions of narrow and wide boundary points in the strong-boundary-point set. The wide and narrow boundaries extracted by this method are unstable in complex field environments or low-light scenes, so its robustness is poor. Further, Chinese patent publication No. CN107240092A discloses an image blur degree detection method and apparatus in which the image is divided into n blocks and, for each block, sharpness evaluation function values and brightness weights over several frequency bands are computed to obtain a blur estimate for the whole image. This improves blur-judgment accuracy in low-illumination scenes, but high-frequency information is usually concentrated in a small region of the image, and extracting band information for many image blocks increases the computational load.
Disclosure of Invention
In order to solve the problems of conventional image-based virtual focus detection methods, the invention provides a virtual focus detection method for video monitoring equipment.
The technical scheme adopted by the invention to solve the above technical problems is as follows:
The invention discloses a virtual focus detection method for video monitoring equipment, comprising the following steps:
Step S1: acquire a first-moment image and apply mean normalization to it to obtain a first-moment mean-normalized image;
Step S2: extract edge information from the first-moment mean-normalized image and apply a Fourier transform, then apply low-pass filtering and high-pass filtering separately to the transformed image;
Step S3: apply the inverse transform after filtering to obtain a first low-frequency information image and a first high-frequency information image, and difference the first-moment mean-normalized image against the first low-frequency information image to obtain a first difference map;
Step S4: repeat steps S1-S3 to obtain a second-moment image, a second-moment mean-normalized image, a second low-frequency information image, a second high-frequency information image, and a second difference map;
Step S5: analyze the image feature differences and detect whether a virtual focus phenomenon exists.
Further, the specific operation of step S1 is as follows:
S1.1: acquire the first-moment image $I_{clr}$;
S1.2: convert the first-moment image $I_{clr}$ to gray scale to obtain a gray image $I_{gray}$;
S1.3: calculate the gray-image mean $\mu = \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} I_{gray}(i,j)$, where $M$ and $N$ are the numbers of rows and columns of the gray image and $I_{gray}(i,j)$ is the gray value at row $i$, column $j$ of the gray image; then obtain the first-moment mean-normalized image $\bar{I}$, i.e. $\bar{I}(i,j) = I_{gray}(i,j)/\mu$, where $\bar{I}(i,j)$ is the pixel value at row $i$, column $j$ of the first-moment mean-normalized image.
Further, the specific operation of step S2 is as follows:
S2.1: apply a discrete Fourier transform to the first-moment mean-normalized image $\bar{I}$ to obtain the transformed image $F$, where the discrete Fourier transform is
$$F(u,v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \bar{I}(x,y)\, e^{-a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $M$ and $N$ are respectively the numbers of rows and columns of the gray image; $\bar{I}(x,y)$ is the pixel value at row $x$, column $y$ of the first-moment mean-normalized image; $(u,v)$ is a point in the frequency domain, $u \in \{0,1,\dots,M-1\}$, $v \in \{0,1,\dots,N-1\}$; $x \in \{0,1,\dots,M-1\}$, $y \in \{0,1,\dots,N-1\}$; and $a$ denotes the imaginary unit;
S2.2: low-pass filter the image $F$ to obtain the low-pass-filtered image $G_{low}$, i.e. $G_{low}(u,v) = H_{low}(u,v)F(u,v)$, where $H_{low}(u,v) = e^{-D^2(u,v)/(2\sigma^2)}$ with $\sigma = 50$, and $D(u,v)$ denotes the distance between the point $(u,v)$ and the center of the frequency-domain rectangle;
S2.3: high-pass filter the image $F$ to obtain the high-pass-filtered image $G_{high}$, i.e. $G_{high}(u,v) = H_{high}(u,v)F(u,v)$, where $H_{high}(u,v) = 1 - e^{-D^2(u,v)/(2\sigma^2)}$ with $\sigma = 100$, and $D(u,v)$ denotes the distance between the point $(u,v)$ and the center of the frequency-domain rectangle.
Further, the specific operation of step S3 is as follows:
S3.1: apply an inverse Fourier transform to the low-pass-filtered image $G_{low}$ to obtain the first low-frequency information image $\bar{I}_{low}$, which represents the original image (the first-moment mean-normalized image $\bar{I}$) with its high-frequency information removed, i.e.
$$\bar{I}_{low}(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} G_{low}(u,v)\, e^{a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $\bar{I}_{low}(x,y)$ is the pixel value at row $x$, column $y$ of the first low-frequency information image; $(u,v)$ is a point in the frequency domain, $u \in \{0,1,\dots,M-1\}$, $v \in \{0,1,\dots,N-1\}$; $x \in \{0,1,\dots,M-1\}$, $y \in \{0,1,\dots,N-1\}$; and $a$ denotes the imaginary unit;
S3.2: apply an inverse Fourier transform to the high-pass-filtered image $G_{high}$ to obtain the first high-frequency information image $\bar{I}_{high}$, which represents the original image (the first-moment mean-normalized image $\bar{I}$) with its low-frequency information removed, i.e.
$$\bar{I}_{high}(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} G_{high}(u,v)\, e^{a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $\bar{I}_{high}(x,y)$ is the pixel value at row $x$, column $y$ of the first high-frequency information image, and the remaining symbols are as in S3.1;
S3.3: difference the first-moment mean-normalized image against the first low-frequency information image to obtain the first difference map, i.e. the difference information between the first-moment mean-normalized image $\bar{I}$ and the first low-frequency information image $\bar{I}_{low}$: $I_{diff1}(x,y) = \left|\bar{I}(x,y) - \bar{I}_{low}(x,y)\right|$.
Further, the specific operation of step S4 is as follows:
Repeat steps S1-S3 on an image captured at a second moment to obtain the second-moment image $I'_{clr}$, the second-moment mean-normalized image $\bar{I}'$, the second low-frequency information image $\bar{I}'_{low}$, the second high-frequency information image $\bar{I}'_{high}$, and the second difference map, i.e. the difference information between the second-moment mean-normalized image $\bar{I}'$ and the first low-frequency information image $\bar{I}_{low}$: $I_{diff2}(x,y) = \left|\bar{I}'(x,y) - \bar{I}_{low}(x,y)\right|$.
Further, the specific operation of step S5 is as follows:
S5.1: binarize the first difference map and the second difference map with the Otsu algorithm to obtain binary maps $B_{diff1}$ and $B_{diff2}$, and count their non-zero pixels, denoted $num_{diff1}$ and $num_{diff2}$ respectively;
S5.2: binarize the first high-frequency information image $\bar{I}_{high}$ and the second high-frequency information image $\bar{I}'_{high}$ with the Otsu algorithm to obtain binary maps $B_{high1}$ and $B_{high2}$, calculate their union $B_{\cup} = B_{high1} \cup B_{high2}$, and count the non-zero pixels of $B_{high2}$ and $B_{\cup}$, denoted $num_{high2}$ and $num_{\cup}$ respectively;
S5.3: judge from the binary-map information whether the virtual focus phenomenon exists: when $num_{diff2}$ falls below a preset proportion of $num_{diff1}$ and $num_{high2}$ falls below a preset proportion of $num_{\cup}$, the virtual focus phenomenon is judged to exist.
The beneficial effects of the invention are as follows:
The virtual focus detection method for video monitoring equipment of the invention realizes virtual focus detection by measuring image sharpness. Two images are collected at different moments and each is mean-normalized, removing the influence of scene-light changes on image sharpness. The mean-normalized images are Fourier-transformed, the difference between each mean-normalized image and the low-frequency image obtained by removing high-frequency information through the Fourier transform is analyzed, and the two difference maps are analyzed jointly with the change in high-frequency information to determine whether the virtual focus phenomenon exists.
The invention requires little historical data: only one piece of historical data is needed as a reference, and the filtered data corresponding to the reference can be extracted in advance, reducing the computational load. Noise immunity is strong: mean normalization in the data preprocessing removes interference caused by scene-light changes. Scene adaptability is wide: image data acquired at different moments adds prior knowledge of the scene, so the method suits any environment, indoor or outdoor. Detection accuracy is high: the detection integrates the blur characteristics of the current scene with the blur change relative to the historical data, improving accuracy.
Drawings
Fig. 1 is a flowchart of the virtual focus detection method for video monitoring equipment according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The virtual focus detection method for video monitoring equipment of the invention determines whether the virtual focus phenomenon exists from the change in high-frequency information between two images; its main process is as follows:
Step S1: acquire a first-moment image and apply mean normalization to it to obtain a first-moment mean-normalized image;
Step S2: extract edge information from the first-moment mean-normalized image and apply a Fourier transform, then apply low-pass filtering and high-pass filtering separately to the transformed image;
Step S3: apply the inverse transform after filtering to obtain a first low-frequency information image and a first high-frequency information image (high-frequency information refers to regions where image intensity (brightness/gray level) changes sharply, usually corresponding to image edges/contours), and difference the first-moment mean-normalized image against the first low-frequency information image to obtain a first difference map;
Step S4: repeat steps S1-S3 to obtain a second-moment image, a second-moment mean-normalized image, a second low-frequency information image, a second high-frequency information image, and a second difference map;
Step S5: analyze the image feature differences to detect whether a virtual focus phenomenon exists (virtual focus appears as image blur, with reduced high-frequency information in the image).
As shown in fig. 1, the virtual focus detection method for video monitoring equipment of the invention specifically comprises the following steps:
Step S1: acquire a first-moment image and apply mean normalization to it; the specific operation is as follows:
S1.1: acquire the first-moment image $I_{clr}$; this image is generally confirmed manually to be free of the virtual focus phenomenon;
S1.2: convert the first-moment image $I_{clr}$ to gray scale to obtain a gray image $I_{gray}$;
S1.3: apply mean normalization to the gray image $I_{gray}$; mean normalization removes the influence of scene-light changes on the detection result. The specific operation is as follows:
S1.3.1: calculate the gray-image mean $\mu = \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} I_{gray}(i,j)$, where $M$ and $N$ are the numbers of rows and columns of the gray image and $I_{gray}(i,j)$ is the gray value at row $i$, column $j$ of the gray image;
S1.3.2: obtain the first-moment mean-normalized image $\bar{I}$ through mean normalization, i.e. $\bar{I}(i,j) = I_{gray}(i,j)/\mu$, where $\bar{I}(i,j)$ is the pixel value at row $i$, column $j$ of the first-moment mean-normalized image.
Step S2: extract edge information from the obtained first-moment mean-normalized image $\bar{I}$ and apply a Fourier transform, then apply low-pass filtering and high-pass filtering separately to the transformed image; the specific operation is as follows:
S2.1: apply a discrete Fourier transform to the first-moment mean-normalized image $\bar{I}$ to obtain the transformed image $F$, where the discrete Fourier transform is
$$F(u,v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \bar{I}(x,y)\, e^{-a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $M$ and $N$ are respectively the numbers of rows and columns of the gray image; $\bar{I}(x,y)$ is the pixel value at row $x$, column $y$ of the first-moment mean-normalized image; $(u,v)$ is a point in the frequency domain, $u \in \{0,1,\dots,M-1\}$, $v \in \{0,1,\dots,N-1\}$; $x \in \{0,1,\dots,M-1\}$, $y \in \{0,1,\dots,N-1\}$; and $a$ denotes the imaginary unit;
S2.2: low-pass filter the image $F$ to obtain the low-pass-filtered image $G_{low}$, i.e. $G_{low}(u,v) = H_{low}(u,v)F(u,v)$, where $H_{low}(u,v) = e^{-D^2(u,v)/(2\sigma^2)}$ with $\sigma = 50$, and $D(u,v)$ denotes the distance between the point $(u,v)$ and the center of the frequency-domain rectangle;
S2.3: high-pass filter the image $F$ to obtain the high-pass-filtered image $G_{high}$, i.e. $G_{high}(u,v) = H_{high}(u,v)F(u,v)$, where $H_{high}(u,v) = 1 - e^{-D^2(u,v)/(2\sigma^2)}$ with $\sigma = 100$, and $D(u,v)$ denotes the distance between the point $(u,v)$ and the center of the frequency-domain rectangle.
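Steps S2.1-S2.3 can be sketched as follows; centring the spectrum with `fftshift` before applying the centred Gaussian transfer functions is an implementation assumption:

```python
import numpy as np

def gaussian_transfer(shape, sigma):
    """Gaussian low-pass transfer function H(u,v) = exp(-D^2 / (2*sigma^2)),
    with D(u,v) the distance to the centre of the frequency rectangle."""
    M, N = shape
    u = np.arange(M)[:, None] - M / 2.0
    v = np.arange(N)[None, :] - N / 2.0
    d2 = u ** 2 + v ** 2                 # squared distance to the centre
    return np.exp(-d2 / (2.0 * sigma ** 2))

img = np.random.rand(64, 64)             # stands in for the normalized image
F = np.fft.fftshift(np.fft.fft2(img))    # DFT with the DC term centred
H_low = gaussian_transfer(img.shape, sigma=50.0)           # step S2.2
H_high = 1.0 - gaussian_transfer(img.shape, sigma=100.0)   # step S2.3
G_low, G_high = H_low * F, H_high * F    # element-wise filtering
```

At the rectangle centre ($D = 0$) the low-pass gain is 1 and the high-pass gain is 0, so low frequencies pass the first filter and are blocked by the second.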
Step S3: apply the inverse transform after filtering to obtain a first low-frequency information image and a first high-frequency information image, and difference the first-moment mean-normalized image against the first low-frequency information image to obtain a first difference map; the specific operation is as follows:
S3.1: apply an inverse Fourier transform to the low-pass-filtered image $G_{low}$ to obtain the first low-frequency information image $\bar{I}_{low}$, which represents the original image (the first-moment mean-normalized image $\bar{I}$) with its high-frequency information removed, i.e.
$$\bar{I}_{low}(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} G_{low}(u,v)\, e^{a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $\bar{I}_{low}(x,y)$ is the pixel value at row $x$, column $y$ of the first low-frequency information image; $(u,v)$ is a point in the frequency domain, $u \in \{0,1,\dots,M-1\}$, $v \in \{0,1,\dots,N-1\}$; $x \in \{0,1,\dots,M-1\}$, $y \in \{0,1,\dots,N-1\}$; and $a$ denotes the imaginary unit;
S3.2: apply an inverse Fourier transform to the high-pass-filtered image $G_{high}$ to obtain the first high-frequency information image $\bar{I}_{high}$, which represents the original image (the first-moment mean-normalized image $\bar{I}$) with its low-frequency information removed, i.e.
$$\bar{I}_{high}(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} G_{high}(u,v)\, e^{a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $\bar{I}_{high}(x,y)$ is the pixel value at row $x$, column $y$ of the first high-frequency information image, and the remaining symbols are as in S3.1;
S3.3: difference the first-moment mean-normalized image against the first low-frequency information image to obtain the first difference map, i.e. the difference information between the first-moment mean-normalized image $\bar{I}$ and the first low-frequency information image $\bar{I}_{low}$: $I_{diff1}(x,y) = \left|\bar{I}(x,y) - \bar{I}_{low}(x,y)\right|$. The difference information is smaller when the virtual focus phenomenon exists.
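Step S3 can be sketched as below, continuing the centred-spectrum convention; taking the absolute difference as the form of the difference map is an assumption:

```python
import numpy as np

def split_and_diff(img, sigma_low=50.0, sigma_high=100.0):
    """Inverse-transform the filtered spectra into low- and high-frequency
    images and form the difference map |img - img_low| (sketch)."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    u = np.arange(M)[:, None] - M / 2.0
    v = np.arange(N)[None, :] - N / 2.0
    d2 = u ** 2 + v ** 2
    H_low = np.exp(-d2 / (2.0 * sigma_low ** 2))
    H_high = 1.0 - np.exp(-d2 / (2.0 * sigma_high ** 2))
    # inverse Fourier transform back to the spatial domain
    img_low = np.real(np.fft.ifft2(np.fft.ifftshift(H_low * F)))
    img_high = np.real(np.fft.ifft2(np.fft.ifftshift(H_high * F)))
    diff = np.abs(img - img_low)   # residual detail removed by the LPF
    return img_low, img_high, diff

# A constant (featureless) image has no high-frequency content, so its
# high-frequency image and difference map are numerically zero.
flat = np.full((32, 32), 3.0)
low, high, diff = split_and_diff(flat)
```

A blurred frame behaves like the constant image in the limit: the sharper the scene, the more energy survives in `img_high` and `diff`, which is exactly what step S5 counts.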
Step S4: repeat steps S1-S3 on an image captured at a second moment to obtain the second-moment image $I'_{clr}$, the second-moment mean-normalized image $\bar{I}'$, the second low-frequency information image $\bar{I}'_{low}$, the second high-frequency information image $\bar{I}'_{high}$, and the second difference map, i.e. the difference information between the second-moment mean-normalized image $\bar{I}'$ and the first low-frequency information image $\bar{I}_{low}$: $I_{diff2}(x,y) = \left|\bar{I}'(x,y) - \bar{I}_{low}(x,y)\right|$. The difference information is smaller when the virtual focus phenomenon exists.
Step S5: analyze the image feature differences and detect whether the virtual focus phenomenon exists; the specific operation is as follows:
S5.1: binarize the first difference map obtained in step S3 with the Otsu algorithm to obtain a binary map $B_{diff1}$; likewise binarize the second difference map obtained in step S4 with the Otsu algorithm to obtain a binary map $B_{diff2}$; count the non-zero pixels of $B_{diff1}$ and $B_{diff2}$, denoted $num_{diff1}$ and $num_{diff2}$ respectively;
S5.2: measure the change of high-frequency information between the two moments: binarize the first high-frequency information image $\bar{I}_{high}$ corresponding to the first moment with the Otsu algorithm to obtain a binary map $B_{high1}$; likewise binarize the second high-frequency information image $\bar{I}'_{high}$ corresponding to the second moment to obtain a binary map $B_{high2}$; calculate the union $B_{\cup} = B_{high1} \cup B_{high2}$, and count the non-zero pixels of $B_{high2}$ and $B_{\cup}$, denoted $num_{high2}$ and $num_{\cup}$ respectively;
S5.3: judge from the binary-map information of steps S5.1 and S5.2 whether the virtual focus phenomenon exists, specifically: when $num_{diff2}$ falls below a preset proportion of $num_{diff1}$ and $num_{high2}$ falls below a preset proportion of $num_{\cup}$, the virtual focus phenomenon is judged to exist; otherwise, no virtual focus phenomenon exists and steps S4-S5 are repeated.
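Step S5 can be sketched as below. The Otsu threshold is implemented in plain NumPy for self-containment (OpenCV and scikit-image provide equivalents), and since the exact decision inequalities are not recoverable from the translated text, the `ratio` parameter is an assumed empirical threshold:

```python
import numpy as np

def otsu_binarize(img):
    """Binarize with Otsu's method on a 256-bin histogram (sketch)."""
    span = img.max() - img.min()
    x = ((img - img.min()) / (span + 1e-12) * 255).astype(np.uint8)
    p = np.bincount(x.ravel(), minlength=256).astype(float)
    p /= p.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    # between-class variance for every candidate threshold
    sigma_b2 = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
    return x > np.argmax(sigma_b2)

def has_virtual_focus(diff1, diff2, high1, high2, ratio=0.5):
    """Decision sketch: defocus at the second moment shows up as far fewer
    non-zero pixels than the first-moment reference (`ratio` is assumed)."""
    n_diff1 = np.count_nonzero(otsu_binarize(diff1))
    n_diff2 = np.count_nonzero(otsu_binarize(diff2))
    b1, b2 = otsu_binarize(high1), otsu_binarize(high2)
    n_union = np.count_nonzero(b1 | b2)
    n_high2 = np.count_nonzero(b2)
    return n_diff2 < ratio * n_diff1 and n_high2 < ratio * n_union

# Synthetic maps: the reference has detail in 30% of its rows, the blurred
# second moment in only 5%, so the detector fires.
d1 = np.zeros((100, 100)); d1[:30] = 1.0
d2 = np.zeros((100, 100)); d2[:5] = 1.0
```

With identical inputs at both moments the function returns `False`, so a static, well-focused scene does not trigger an alarm.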
Step S6: when step S5 judges that the virtual focus phenomenon exists, a virtual focus alarm can be issued.
The foregoing is merely a preferred embodiment of the present invention; it should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the invention, and such modifications and adaptations are also intended to fall within the scope of the invention.
Claims (6)
1. A virtual focus detection method for video monitoring equipment, characterized by comprising the following steps:
Step S1: acquiring a first-moment image and applying mean normalization to it to obtain a first-moment mean-normalized image;
Step S2: extracting edge information from the first-moment mean-normalized image and applying a Fourier transform, then applying low-pass filtering and high-pass filtering separately to the transformed image;
Step S3: applying the inverse transform after filtering to obtain a first low-frequency information image and a first high-frequency information image, and differencing the first-moment mean-normalized image against the first low-frequency information image to obtain a first difference map;
Step S4: repeating steps S1-S3 to obtain a second-moment image, a second-moment mean-normalized image, a second low-frequency information image, a second high-frequency information image, and a second difference map;
Step S5: analyzing the image feature differences and detecting whether a virtual focus phenomenon exists.
2. The virtual focus detection method for video monitoring equipment according to claim 1, characterized in that the specific operation of step S1 is as follows:
S1.1: acquiring the first-moment image $I_{clr}$;
S1.2: converting the first-moment image $I_{clr}$ to gray scale to obtain a gray image $I_{gray}$;
S1.3: calculating the gray-image mean $\mu = \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} I_{gray}(i,j)$, where $M$ and $N$ are the numbers of rows and columns of the gray image and $I_{gray}(i,j)$ is the gray value at row $i$, column $j$ of the gray image; then obtaining the first-moment mean-normalized image $\bar{I}$, i.e. $\bar{I}(i,j) = I_{gray}(i,j)/\mu$, where $\bar{I}(i,j)$ is the pixel value at row $i$, column $j$ of the first-moment mean-normalized image.
3. The virtual focus detection method for video monitoring equipment according to claim 2, characterized in that the specific operation of step S2 is as follows:
S2.1: applying a discrete Fourier transform to the first-moment mean-normalized image $\bar{I}$ to obtain the transformed image $F$, where the discrete Fourier transform is
$$F(u,v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \bar{I}(x,y)\, e^{-a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $M$ and $N$ are respectively the numbers of rows and columns of the gray image; $\bar{I}(x,y)$ is the pixel value at row $x$, column $y$ of the first-moment mean-normalized image; $(u,v)$ is a point in the frequency domain, $u \in \{0,1,\dots,M-1\}$, $v \in \{0,1,\dots,N-1\}$; $x \in \{0,1,\dots,M-1\}$, $y \in \{0,1,\dots,N-1\}$; and $a$ denotes the imaginary unit;
S2.2: low-pass filtering the image $F$ to obtain the low-pass-filtered image $G_{low}$, i.e. $G_{low}(u,v) = H_{low}(u,v)F(u,v)$, where $H_{low}(u,v) = e^{-D^2(u,v)/(2\sigma^2)}$ with $\sigma = 50$, and $D(u,v)$ denotes the distance between the point $(u,v)$ and the center of the frequency-domain rectangle;
S2.3: high-pass filtering the image $F$ to obtain the high-pass-filtered image $G_{high}$, i.e. $G_{high}(u,v) = H_{high}(u,v)F(u,v)$, where $H_{high}(u,v) = 1 - e^{-D^2(u,v)/(2\sigma^2)}$ with $\sigma = 100$, and $D(u,v)$ denotes the distance between the point $(u,v)$ and the center of the frequency-domain rectangle.
4. The virtual focus detection method for video monitoring equipment according to claim 3, characterized in that the specific operation of step S3 is as follows:
S3.1: applying an inverse Fourier transform to the low-pass-filtered image $G_{low}$ to obtain the first low-frequency information image $\bar{I}_{low}$, which represents the original image (the first-moment mean-normalized image $\bar{I}$) with its high-frequency information removed, i.e.
$$\bar{I}_{low}(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} G_{low}(u,v)\, e^{a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $\bar{I}_{low}(x,y)$ is the pixel value at row $x$, column $y$ of the first low-frequency information image; $(u,v)$ is a point in the frequency domain, $u \in \{0,1,\dots,M-1\}$, $v \in \{0,1,\dots,N-1\}$; $x \in \{0,1,\dots,M-1\}$, $y \in \{0,1,\dots,N-1\}$; and $a$ denotes the imaginary unit;
S3.2: applying an inverse Fourier transform to the high-pass-filtered image $G_{high}$ to obtain the first high-frequency information image $\bar{I}_{high}$, which represents the original image (the first-moment mean-normalized image $\bar{I}$) with its low-frequency information removed, i.e.
$$\bar{I}_{high}(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} G_{high}(u,v)\, e^{a 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
where $\bar{I}_{high}(x,y)$ is the pixel value at row $x$, column $y$ of the first high-frequency information image, and the remaining symbols are as in S3.1;
S3.3: differencing the first-moment mean-normalized image $\bar{I}$ against the first low-frequency information image $\bar{I}_{low}$ to obtain the first difference map: $I_{diff1}(x,y) = \left|\bar{I}(x,y) - \bar{I}_{low}(x,y)\right|$.
5. The virtual focus detection method for video monitoring equipment according to claim 4, characterized in that the specific operation of step S4 is as follows:
repeating steps S1-S3 to obtain the second-moment image $I'_{clr}$, the second-moment mean-normalized image $\bar{I}'$, the second low-frequency information image $\bar{I}'_{low}$, the second high-frequency information image $\bar{I}'_{high}$, and the second difference map, the second difference map being the difference information between the second-moment mean-normalized image $\bar{I}'$ and the first low-frequency information image $\bar{I}_{low}$: $I_{diff2}(x,y) = \left|\bar{I}'(x,y) - \bar{I}_{low}(x,y)\right|$.
6. The virtual focus detection method for video monitoring equipment according to claim 5, characterized in that the specific operation of step S5 is as follows:
S5.1: binarizing the first difference map and the second difference map with the Otsu algorithm to obtain binary maps $B_{diff1}$ and $B_{diff2}$, and counting their non-zero pixels $num_{diff1}$ and $num_{diff2}$ respectively;
S5.2: binarizing the first high-frequency information image $\bar{I}_{high}$ and the second high-frequency information image $\bar{I}'_{high}$ with the Otsu algorithm to obtain binary maps $B_{high1}$ and $B_{high2}$, calculating their union $B_{\cup} = B_{high1} \cup B_{high2}$, and counting the non-zero pixels of $B_{high2}$ and $B_{\cup}$, denoted $num_{high2}$ and $num_{\cup}$ respectively;
S5.3: judging from the binary-map information whether the virtual focus phenomenon exists: when $num_{diff2}$ falls below a preset proportion of $num_{diff1}$ and $num_{high2}$ falls below a preset proportion of $num_{\cup}$, the virtual focus phenomenon is judged to exist.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211676112.XA CN116152175A (en) | 2022-12-26 | 2022-12-26 | Virtual focus detection method for video monitoring equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116152175A true CN116152175A (en) | 2023-05-23 |
Family
ID=86355437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202211676112.XA | Virtual focus detection method for video monitoring equipment | 2022-12-26 | 2022-12-26
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116152175A (en) |
- 2022-12-26: CN application CN202211676112.XA filed, published as CN116152175A, status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |