CN116152175A - Virtual focus detection method for video monitoring equipment - Google Patents


Info

Publication number
CN116152175A
Authority
CN
China
Prior art keywords: image, frequency information, low, virtual focus, moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211676112.XA
Other languages
Chinese (zh)
Inventor
曲达明
孙博
孟繁斌
孟贺
王生杰
黄艳金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Forestry Star Beijing Technology Information Co ltd
Original Assignee
China Forestry Star Beijing Technology Information Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Forestry Star Beijing Technology Information Co ltd filed Critical China Forestry Star Beijing Technology Information Co ltd
Priority to CN202211676112.XA
Publication of CN116152175A
Pending legal-status Critical Current

Classifications

    • G06T 7/0002 — Inspection of images, e.g. flaw detection (G Physics; G06 Computing; G06T Image data processing or generation; G06T 7/00 Image analysis)
    • G06T 7/13 — Edge detection (G06T 7/10 Segmentation; Edge detection)
    • G06T 7/168 — Segmentation; Edge detection involving transform domain methods
    • H04N 17/002 — Diagnosis, testing or measuring for television cameras (H Electricity; H04 Electric communication technique; H04N Pictorial communication, e.g. television; H04N 17/00 Diagnosis, testing or measuring for television systems)
    • G06T 2207/10016 — Video; Image sequence (G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/10 Image acquisition modality)
    • G06T 2207/30168 — Image quality inspection (G06T 2207/30 Subject of image; Context of image processing)
    • G06T 2207/30232 — Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

A virtual focus detection method for video monitoring equipment relates to the field of image processing and comprises the following steps: acquiring a first-moment image and performing mean normalization on it to obtain a first-moment mean-normalized image; applying a Fourier transform and high-pass and low-pass filtering to the first-moment mean-normalized image, then applying the inverse Fourier transform to obtain a first low-frequency information image and a first high-frequency information image; differencing the mean-normalized image and the first low-frequency information image to obtain a first difference image; repeating these steps to obtain a second-moment image, a second-moment mean-normalized image, a second low-frequency information image, a second high-frequency information image, and a second difference image; and analyzing the two difference images together with the high-frequency information images to detect whether a virtual focus (defocus) phenomenon exists. The invention requires little historical data and computation, resists noise well, adapts to a wide range of scenes, and achieves high detection accuracy.

Description

Virtual focus detection method for video monitoring equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a virtual focus detection method of video monitoring equipment.
Background
Video monitoring devices are widely deployed in public places such as shopping malls, hospitals, and stations, and their popularization helps protect people's lives and public property. With continuous technological innovation and development, ever higher requirements are placed on the image quality of video monitoring equipment. In actual use, however, loose lenses and ambient-temperature changes often blur the monitoring picture and degrade visibility, impairing the monitoring effect. Meanwhile, video monitoring equipment generally invokes autofocus at fixed intervals, controlling the lens according to a focus-search strategy; this autofocus mode can disturb the user, carries a probability of focusing failure, and cannot focus at all when the back focus is abnormal. Timely and accurate detection of the virtual focus phenomenon of video monitoring equipment therefore has important practical significance for the security industry. A virtual focus detection technique avoids the untimeliness of manual inspection, reduces the lifespan impact of frequent autofocusing on the equipment, and can also detect the out-of-focus condition caused by back-focus faults.
With the development of image processing technology, image-based virtual focus detection methods have matured; their high accuracy, low cost, and easy maintenance make them the preferred approach for virtual focus detection in video monitoring equipment. For example, Chinese patent publication No. CN113301324A discloses a virtual focus detection method, device, equipment, and medium based on an image capturing apparatus. The proposed method comprises: acquiring a sharpness value of the camera's current monitoring video; judging whether that sharpness value meets a preset sharpness threshold; and, if it does, judging that the camera exhibits virtual focus. The sharpness value is obtained directly from the gradient information of the monitored scene. However, because the method takes gradient information directly from the image, it is easily disturbed by scene and lighting differences, lacks universality, and is generally suitable only for calibrated scenes. For uncalibrated scenes, whose gradient information varies widely, the sharpness threshold is usually an empirical value that must be set in advance; when the scene changes, the threshold is no longer optimal and cannot adapt.
As another example, Chinese patent publication No. CN111275657A discloses a virtual focus detection method, apparatus, and computer-readable medium based on boundary points. The method comprises: determining a set of strong boundary points from the image to be detected; calculating the boundary width of each strong boundary point with a height-retraction algorithm; classifying the strong boundary points into narrow and wide boundary points according to their boundary widths; and declaring the image to be a focused image or a virtual-focus image according to the proportions of narrow and wide boundary points in the set. The wide and narrow boundaries extracted by this method are unstable in complex field environments or low-light scenes, so its robustness is poor. Further, Chinese patent publication No. CN107240092A discloses an image blur detection method and apparatus, in which an image is divided into n blocks and a sharpness evaluation value and a brightness weight are computed for each block over several frequency bands, yielding a blur estimate for the whole image. This improves blur-judgment accuracy in low-illumination scenes, but the high-frequency information of a typical image is concentrated in a small region, and extracting band information for many image blocks increases the computational load.
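The gradient-based sharpness measure criticized above can be sketched as follows. This is an illustrative reconstruction, not code from any cited patent: the simple central-difference operator and the name `gradient_sharpness` are assumptions standing in for whatever gradient filter such a method would actually use.

```python
import numpy as np

def gradient_sharpness(gray: np.ndarray) -> float:
    """Tenengrad-style score: mean squared gradient magnitude (higher = sharper)."""
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0  # central difference, x direction
    gy[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0  # central difference, y direction
    return float(np.mean(gx ** 2 + gy ** 2))

# A crisp step edge versus the same edge smeared into a ramp (a blurred view):
sharp = np.zeros((64, 64))
sharp[:, 32:] = 255.0
ramp = np.clip((np.arange(64) - 28) / 8.0, 0.0, 1.0) * 255.0
blurred = np.tile(ramp, (64, 1))
```

Because the score depends only on raw gradients, the same scene under different lighting or in a different environment yields different scores, which is exactly the threshold-portability problem the background describes.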
Disclosure of Invention
To solve the problems of existing image-based virtual focus detection methods, the invention provides a virtual focus detection method for video monitoring equipment.
The technical solution adopted by the invention to solve the technical problem is as follows:
The virtual focus detection method for video monitoring equipment of the invention comprises the following steps:
step S1: acquire a first-moment image and perform mean normalization on it to obtain a first-moment mean-normalized image;
step S2: extract edge information from the first-moment mean-normalized image, apply a Fourier transform, and perform low-pass and high-pass filtering on the transformed image;
step S3: apply the inverse transform after filtering to obtain a first low-frequency information image and a first high-frequency information image, and difference the first-moment mean-normalized image with the first low-frequency information image to obtain a first difference image;
step S4: repeat steps S1 to S3 to obtain a second-moment image, a second-moment mean-normalized image, a second low-frequency information image, a second high-frequency information image, and a second difference image;
step S5: analyze the image feature differences to detect whether the virtual focus phenomenon exists.
Further, the specific operations of step S1 are as follows:
S1.1: acquire the first-moment image I_clr;
S1.2: convert the first-moment image I_clr to grayscale, obtaining the gray image I_gray;
S1.3: compute the gray-image mean

μ = (1/(M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} I_gray(i, j),

where M and N are the numbers of rows and columns of the gray image and I_gray(i, j) is the gray value at row i, column j of the gray image; then obtain the first-moment mean-normalized image Î1, i.e.

Î1(i, j) = I_gray(i, j) / μ,

where Î1(i, j) is the pixel value at row i, column j of the first-moment mean-normalized image Î1.
Further, the specific operations of step S2 are as follows:
S2.1: apply the discrete Fourier transform to the first-moment mean-normalized image Î1 to obtain the transformed image F; the discrete Fourier transform is

F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} Î1(x, y) · e^{−j2π(ux/M + vy/N)},

where M and N are the numbers of rows and columns of the gray image; Î1(x, y) is the pixel value at row x, column y of the first-moment mean-normalized image Î1; (u, v) is a point in the frequency domain, u ∈ {0, 1, …, M−1}, v ∈ {0, 1, …, N−1}; x ∈ {0, 1, …, M−1}; y ∈ {0, 1, …, N−1}; and j denotes the imaginary unit;
S2.2: low-pass filter the image F to obtain the low-pass-filtered image G_low, i.e. G_low(u, v) = H_low(u, v) · F(u, v), where

H_low(u, v) = e^{−D²(u, v)/(2σ²)},

D(u, v) denotes the distance between the point (u, v) and the center of the frequency-domain rectangle, and σ = 50;
S2.3: high-pass filter the image F to obtain the high-pass-filtered image G_high, i.e. G_high(u, v) = H_high(u, v) · F(u, v), where

H_high(u, v) = 1 − e^{−D²(u, v)/(2σ²)},

D(u, v) denotes the distance between the point (u, v) and the center of the frequency-domain rectangle, and σ = 100.
Further, the specific operations of step S3 are as follows:
S3.1: apply the inverse Fourier transform to the low-pass-filtered image G_low to obtain the first low-frequency information image Î1_low, which represents the original image (the first-moment mean-normalized image Î1) with its high-frequency information removed:

Î1_low(x, y) = (1/(M·N)) · Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} G_low(u, v) · e^{j2π(ux/M + vy/N)},

where Î1_low(x, y) is the pixel value at row x, column y of the first low-frequency information image Î1_low; (u, v) is a point in the frequency domain, u ∈ {0, 1, …, M−1}, v ∈ {0, 1, …, N−1}; x ∈ {0, 1, …, M−1}; y ∈ {0, 1, …, N−1}; and j denotes the imaginary unit;
S3.2: apply the inverse Fourier transform to the high-pass-filtered image G_high to obtain the first high-frequency information image Î1_high, which represents the original image (the first-moment mean-normalized image Î1) with its low-frequency information removed:

Î1_high(x, y) = (1/(M·N)) · Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} G_high(u, v) · e^{j2π(ux/M + vy/N)},

where Î1_high(x, y) is the pixel value at row x, column y of the first high-frequency information image Î1_high, with (u, v), x, y, and j defined as above;
S3.3: difference the first-moment mean-normalized image and the first low-frequency information image to obtain the first difference image, i.e. the difference information between the first-moment mean-normalized image Î1 and the first low-frequency information image Î1_low:

Δ1(x, y) = |Î1(x, y) − Î1_low(x, y)|.
Further, the specific operations of step S4 are as follows:
repeat steps S1 to S3 to obtain the second-moment image I2_clr, the second-moment mean-normalized image Î2, the second low-frequency information image Î2_low, the second high-frequency information image Î2_high, and the second difference image, i.e. the difference information between the second-moment mean-normalized image Î2 and the first low-frequency information image Î1_low:

Δ2(x, y) = |Î2(x, y) − Î1_low(x, y)|.
Further, the specific operations of step S5 are as follows:
S5.1: binarize the first and second difference images Δ1 and Δ2 with Otsu's algorithm to obtain the binary maps B1 and B2, and count their non-zero pixels, denoted N1 and N2 respectively;
S5.2: binarize the first high-frequency information image Î1_high and the second high-frequency information image Î2_high with Otsu's algorithm to obtain the binary maps E1 and E2, compute their union E_U = E1 ∪ E2, and count the non-zero pixels of E2 and of E_U, denoted N_E2 and N_EU respectively;
S5.3: judge from the binary-map information whether the virtual focus phenomenon exists: when N2 < T1·N1 and N_E2 < T2·N_EU, where T1 and T2 are preset ratio thresholds, the virtual focus phenomenon is judged to exist.
The beneficial effects of the invention are as follows:
The virtual focus detection method for video monitoring equipment realizes virtual focus detection by measuring image sharpness. Two images captured at different moments are each mean-normalized to remove the influence of scene lighting changes on image sharpness; each mean-normalized image is Fourier-transformed, and the difference between the mean-normalized image and the same image with its high-frequency information removed is analyzed; the two difference images and the change in high-frequency information are then analyzed together to determine whether the virtual focus phenomenon exists.
The invention requires little historical data: only one piece of historical data serves as the reference, and the filtered data corresponding to the reference can be extracted in advance, reducing the computational load. The invention resists noise well: mean normalization in the data preprocessing removes interference from scene lighting changes. The invention adapts to a wide range of scenes: image data acquired at different moments add prior knowledge of the scene, so the method suits indoor, outdoor, and other environments. The invention detects with high accuracy: the detection integrates the blur characteristics of the current scene with the blur change relative to the historical data, improving accuracy.
Drawings
Fig. 1 is a flowchart of a virtual focus detection method of a video monitoring device according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention relates to a virtual focus detection method for video monitoring equipment, which determines whether the virtual focus phenomenon exists from the change in high-frequency information between two images; the main process is as follows:
step S1: acquire a first-moment image and perform mean normalization on it to obtain a first-moment mean-normalized image;
step S2: extract edge information from the first-moment mean-normalized image, apply a Fourier transform, and perform low-pass and high-pass filtering on the transformed image;
step S3: apply the inverse transform after filtering to obtain a first low-frequency information image and a first high-frequency information image (high-frequency information refers to regions where the image intensity, i.e. brightness or gray level, changes sharply; these usually correspond to image edges and contours), and difference the first-moment mean-normalized image with the first low-frequency information image to obtain a first difference image;
step S4: repeat steps S1 to S3 to obtain a second-moment image, a second-moment mean-normalized image, a second low-frequency information image, a second high-frequency information image, and a second difference image;
step S5: analyze the image feature differences to detect whether the virtual focus phenomenon exists (virtual focus appears as image blur, and the high-frequency information in the image decreases).
As shown in fig. 1, the method for detecting virtual focus of video monitoring equipment of the present invention specifically includes the following steps:
Step S1: acquire the first-moment image and perform mean normalization on it; the specific procedure is as follows:
S1.1: acquire the first-moment image I_clr; this image is generally confirmed manually to be free of the virtual focus phenomenon;
S1.2: convert the first-moment image I_clr to grayscale, obtaining the gray image I_gray;
S1.3: perform mean normalization on the gray image I_gray, which removes the influence of scene lighting changes on the detection result; the specific procedure is as follows:
S1.3.1: compute the gray-image mean

μ = (1/(M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} I_gray(i, j),

where M and N are the numbers of rows and columns of the gray image and I_gray(i, j) is the gray value at row i, column j of the gray image;
S1.3.2: obtain the first-moment mean-normalized image Î1 by mean normalization, i.e.

Î1(i, j) = I_gray(i, j) / μ,

where Î1(i, j) is the pixel value at row i, column j of the first-moment mean-normalized image Î1.
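Step S1 can be sketched as below. The exact normalization formula appears only as an image in the source, so dividing the gray image by its mean is an assumption (the name `mean_normalize` is likewise illustrative); what the sketch demonstrates is the property the description claims, namely insensitivity to a global lighting change.

```python
import numpy as np

def mean_normalize(gray: np.ndarray) -> np.ndarray:
    """Mean normalization (assumed form): divide the gray image by its mean.

    A global illumination gain g maps I to g*I, and (g*I)/mean(g*I) equals
    I/mean(I), so the normalized image ignores such lighting changes.
    """
    g = gray.astype(np.float64)
    return g / g.mean()

rng = np.random.default_rng(0)
frame = rng.uniform(10.0, 200.0, size=(32, 32))   # stand-in gray image I_gray
dimmed = 0.5 * frame                              # same scene under half the light
```

Under this reading, `mean_normalize(frame)` and `mean_normalize(dimmed)` coincide, which is why the later difference analysis is not disturbed by scene light changes.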
Step S2: extract edge information from the obtained first-moment mean-normalized image Î1 and apply a Fourier transform, then perform low-pass and high-pass filtering on the transformed image; the specific procedure is as follows:
S2.1: apply the discrete Fourier transform to the first-moment mean-normalized image Î1 to obtain the transformed image F; the discrete Fourier transform is

F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} Î1(x, y) · e^{−j2π(ux/M + vy/N)},

where M and N are the numbers of rows and columns of the gray image; Î1(x, y) is the pixel value at row x, column y of the first-moment mean-normalized image Î1; (u, v) is a point in the frequency domain, u ∈ {0, 1, …, M−1}, v ∈ {0, 1, …, N−1}; x ∈ {0, 1, …, M−1}; y ∈ {0, 1, …, N−1}; and j denotes the imaginary unit;
S2.2: low-pass filter the image F to obtain the low-pass-filtered image G_low, i.e. G_low(u, v) = H_low(u, v) · F(u, v), where

H_low(u, v) = e^{−D²(u, v)/(2σ²)},

D(u, v) denotes the distance between the point (u, v) and the center of the frequency-domain rectangle, and σ = 50;
S2.3: high-pass filter the image F to obtain the high-pass-filtered image G_high, i.e. G_high(u, v) = H_high(u, v) · F(u, v), where

H_high(u, v) = 1 − e^{−D²(u, v)/(2σ²)},

D(u, v) denotes the distance between the point (u, v) and the center of the frequency-domain rectangle, and σ = 100.
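The frequency-domain filtering of steps S2.2/S2.3 can be sketched as follows, assuming the standard Gaussian low-pass and high-pass transfer functions (the source formulas are images); `gaussian_filters` and `filter_image` are illustrative names, and `fftshift` is used so that the grid is centered on the frequency-domain rectangle as the text requires.

```python
import numpy as np

def gaussian_filters(shape, sigma_low=50.0, sigma_high=100.0):
    """H_low = exp(-D^2/(2*s^2)) with s=50 and H_high = 1 - exp(-D^2/(2*s^2))
    with s=100, built on a grid centered at the middle of the frequency rectangle."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2   # squared distance D^2(u, v) to the center
    H_low = np.exp(-D2 / (2.0 * sigma_low ** 2))
    H_high = 1.0 - np.exp(-D2 / (2.0 * sigma_high ** 2))
    return H_low, H_high

def filter_image(img, H):
    """Centered DFT, multiply by the transfer function H, inverse DFT (real part)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```

At the rectangle center the low-pass response is 1 and the high-pass response is 0, so `filter_image(img, H_low)` keeps the slowly varying content while `filter_image(img, H_high)` keeps the edge detail.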
Step S3: apply the inverse transform after filtering to obtain the first low-frequency information image and the first high-frequency information image, and difference the first-moment mean-normalized image with the first low-frequency information image to obtain the first difference image; the specific procedure is as follows:
S3.1: apply the inverse Fourier transform to the low-pass-filtered image G_low to obtain the first low-frequency information image Î1_low, which represents the original image (the first-moment mean-normalized image Î1) with its high-frequency information removed:

Î1_low(x, y) = (1/(M·N)) · Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} G_low(u, v) · e^{j2π(ux/M + vy/N)},

where Î1_low(x, y) is the pixel value at row x, column y of the first low-frequency information image Î1_low; (u, v) is a point in the frequency domain, u ∈ {0, 1, …, M−1}, v ∈ {0, 1, …, N−1}; x ∈ {0, 1, …, M−1}; y ∈ {0, 1, …, N−1}; and j denotes the imaginary unit;
S3.2: apply the inverse Fourier transform to the high-pass-filtered image G_high to obtain the first high-frequency information image Î1_high, which represents the original image (the first-moment mean-normalized image Î1) with its low-frequency information removed:

Î1_high(x, y) = (1/(M·N)) · Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} G_high(u, v) · e^{j2π(ux/M + vy/N)},

where Î1_high(x, y) is the pixel value at row x, column y of the first high-frequency information image Î1_high, with (u, v), x, y, and j defined as above;
S3.3: difference the first-moment mean-normalized image and the first low-frequency information image to obtain the first difference image, i.e. the difference information between the first-moment mean-normalized image Î1 and the first low-frequency information image Î1_low:

Δ1(x, y) = |Î1(x, y) − Î1_low(x, y)|.

The image difference information is smaller when the virtual focus phenomenon exists.
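The low-frequency reconstruction and difference image of step S3 can be sketched as below. The `low_pass` helper, its default σ, and the use of strong smoothing as a stand-in for a defocused frame are all assumptions for illustration; the sketch shows the remark above numerically, in that a defocused frame yields a weaker difference map.

```python
import numpy as np

def low_pass(img: np.ndarray, sigma: float = 50.0) -> np.ndarray:
    """Gaussian low-pass in the frequency domain, read back with the inverse FFT."""
    M, N = img.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = np.exp(-D2 / (2.0 * sigma ** 2))
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

def difference_map(img: np.ndarray) -> np.ndarray:
    """|image - low-frequency image|: what remains is the high-frequency detail."""
    return np.abs(img - low_pass(img))

rng = np.random.default_rng(1)
crisp = rng.uniform(0.0, 1.0, size=(64, 64))  # detail-rich stand-in frame
defocused = low_pass(crisp, sigma=8.0)        # crude stand-in for a virtual-focus frame
```

Every frequency component of the defocused frame is attenuated relative to the crisp one, so the energy of its difference map is strictly smaller, which is what step S5 later measures via binarized pixel counts.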
Step S4: repeat steps S1 to S3 to obtain the second-moment image I2_clr, the second-moment mean-normalized image Î2, the second low-frequency information image Î2_low, the second high-frequency information image Î2_high, and the second difference image, i.e. the difference information between the second-moment mean-normalized image Î2 and the first low-frequency information image Î1_low:

Δ2(x, y) = |Î2(x, y) − Î1_low(x, y)|.

The image difference information is smaller when the virtual focus phenomenon exists.
Step S5: analyze the image feature differences to detect whether the virtual focus phenomenon exists; the specific procedure is as follows:
S5.1: binarize the first difference image Δ1 obtained in step S3 with Otsu's algorithm to obtain the binary map B1; likewise binarize the second difference image Δ2 obtained in step S4 with Otsu's algorithm to obtain the binary map B2; count the non-zero pixels of B1 and B2, denoted N1 and N2 respectively;
S5.2: compute the change in high-frequency information between the two moments: binarize the first high-frequency information image Î1_high, corresponding to the first moment, with Otsu's algorithm to obtain the binary map E1, and likewise binarize the second high-frequency information image Î2_high, corresponding to the second moment, to obtain the binary map E2; compute the union E_U = E1 ∪ E2 of the two binary maps, and count the non-zero pixels of E2 and of E_U, denoted N_E2 and N_EU respectively;
S5.3: judge from the binary-map information of steps S5.1 and S5.2 whether the virtual focus phenomenon exists, specifically: when N2 < T1·N1 and N_E2 < T2·N_EU, where T1 and T2 are preset ratio thresholds, judge that the virtual focus phenomenon exists; otherwise, if the virtual focus phenomenon does not exist, repeat steps S4 and S5.
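Step S5 can be sketched as below. Otsu's method itself is standard; the decision helper `is_virtual_focus`, its ratio thresholds `t1`/`t2`, and the exact form of the inequalities are assumptions, since the source gives the decision conditions only as unreadable formula images. Only the intent, that the non-zero counts shrink when the view defocuses, comes from the description.

```python
import numpy as np

def otsu_binarize(img: np.ndarray) -> np.ndarray:
    """Binarize with Otsu's method: pick the threshold maximizing the
    between-class variance on a 256-bin histogram of the rescaled image."""
    x = img.astype(np.float64)
    lo, hi = x.min(), x.max()
    x = (x - lo) / max(hi - lo, 1e-12) * 255.0
    hist, edges = np.histogram(x, bins=256, range=(0.0, 255.0))
    p = hist / hist.sum()
    w = np.cumsum(p)                        # class-0 probability up to each bin
    centers = (edges[:-1] + edges[1:]) / 2.0
    m = np.cumsum(p * centers)              # class-0 cumulative mean mass
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (m[-1] * w - m) ** 2 / (w * (1.0 - w))
    between[~np.isfinite(between)] = 0.0
    t = centers[int(np.argmax(between))]
    return (x > t).astype(np.uint8)

def is_virtual_focus(n1: int, n2: int, n_e2: int, n_eu: int,
                     t1: float = 0.5, t2: float = 0.5) -> bool:
    """Hypothetical step-S5.3 rule: both the difference-map count and the
    high-frequency count must fall well below their references."""
    return n2 < t1 * n1 and n_e2 < t2 * n_eu
```

On a bimodal image, `otsu_binarize` separates the two populations without any manually tuned threshold, which is what lets the method adapt across scenes.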
Step S6: when step S5 judges that the virtual focus phenomenon exists, a virtual focus alarm can be issued.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the invention, and such modifications and adaptations are also to be regarded as falling within the scope of the present invention.

Claims (6)

1. A virtual focus detection method for video monitoring equipment, characterized by comprising the following steps:
step S1: acquiring a first-moment image and performing mean normalization on it to obtain a first-moment mean-normalized image;
step S2: extracting edge information from the first-moment mean-normalized image, applying a Fourier transform, and performing low-pass and high-pass filtering on the transformed image;
step S3: applying the inverse transform after filtering to obtain a first low-frequency information image and a first high-frequency information image, and differencing the first-moment mean-normalized image with the first low-frequency information image to obtain a first difference image;
step S4: repeating steps S1 to S3 to obtain a second-moment image, a second-moment mean-normalized image, a second low-frequency information image, a second high-frequency information image, and a second difference image;
step S5: analyzing the image feature differences to detect whether the virtual focus phenomenon exists.
2. The virtual focus detection method for video monitoring equipment according to claim 1, wherein step S1 comprises the following specific operations:
S1.1, acquiring the first-moment image I_clr;
S1.2, performing gray-level conversion on the first-moment image I_clr to obtain the gray image I_gray;
S1.3, calculating the gray-image mean

μ = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} I_gray(i, j),

where M and N are the numbers of rows and columns of the gray image, and I_gray(i, j) is the gray value at row i, column j of the gray image; and acquiring the first-moment mean-normalized image Ī as

Ī(i, j) = I_gray(i, j) / μ,

where Ī(i, j) is the pixel value at row i, column j of the first-moment mean-normalized image Ī.
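The mean-normalization step of claim 2 can be sketched in plain NumPy; the RGB-to-gray weights below are an assumption, since the claim does not fix a conversion formula:

```python
import numpy as np

def mean_normalize(img_clr):
    """Gray-level conversion (S1.2) followed by division by the mean (S1.3).

    A minimal sketch of claim 2; the 0.299/0.587/0.114 luminance weights
    are assumed, not taken from the patent.
    """
    img = np.asarray(img_clr, dtype=np.float64)
    if img.ndim == 3:                                   # colour frame -> gray
        gray = img @ np.array([0.299, 0.587, 0.114])
    else:
        gray = img
    mu = gray.mean()                                    # mean over all M*N pixels
    return gray / mu                                    # mean-normalized image

frame = np.arange(12, dtype=np.float64).reshape(3, 4)
norm = mean_normalize(frame)
print(round(norm.mean(), 6))   # prints 1.0 -- the normalized mean is 1 by construction
```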
3. The virtual focus detection method for video monitoring equipment according to claim 2, wherein step S2 comprises the following specific operations:
S2.1, performing a discrete Fourier transform on the first-moment mean-normalized image Ī to obtain the transformed image F, the discrete Fourier transform being

F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} Ī(x, y) · e^{−a·2π(ux/M + vy/N)},

where M and N are the numbers of rows and columns of the gray image, respectively; Ī(x, y) is the pixel value at row x, column y of the first-moment mean-normalized image Ī; (u, v) is a point in the frequency domain, u ∈ {0, 1, ..., M−1}, v ∈ {0, 1, ..., N−1}; x ∈ {0, 1, ..., M−1}, y ∈ {0, 1, ..., N−1}; and a denotes the imaginary unit;
S2.2, low-pass filtering the image F to obtain the low-pass filtered image G_low, i.e. G_low(u, v) = H_low(u, v) · F(u, v), where

H_low(u, v) = e^{−D²(u, v)/(2σ²)},

D(u, v) denotes the distance between the point (u, v) in the frequency domain and the center of the frequency-domain rectangle, and σ = 50;
S2.3, high-pass filtering the image F to obtain the high-pass filtered image G_high, i.e. G_high(u, v) = H_high(u, v) · F(u, v), where

H_high(u, v) = 1 − e^{−D²(u, v)/(2σ²)},

D(u, v) denotes the distance between the point (u, v) in the frequency domain and the center of the frequency-domain rectangle, and σ = 100.
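A compact NumPy sketch of S2.1 to S2.3, assuming the usual centred-spectrum convention (`fftshift`) so that D(u, v) is measured from the middle of the frequency rectangle:

```python
import numpy as np

def gaussian_filters(M, N, sigma_low=50.0, sigma_high=100.0):
    """Frequency-domain Gaussian low-pass and high-pass masks (S2.2/S2.3)."""
    # D(u, v)^2: squared distance from the centre of the frequency rectangle
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H_low = np.exp(-D2 / (2.0 * sigma_low ** 2))            # S2.2, sigma = 50
    H_high = 1.0 - np.exp(-D2 / (2.0 * sigma_high ** 2))    # S2.3, sigma = 100
    return H_low, H_high

img = np.random.default_rng(0).random((64, 64))
F = np.fft.fftshift(np.fft.fft2(img))        # S2.1: DFT, spectrum centred
H_low, H_high = gaussian_filters(*img.shape)
G_low = H_low * F                            # low-pass filtered spectrum
G_high = H_high * F                          # high-pass filtered spectrum
```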
4. The virtual focus detection method for video monitoring equipment according to claim 3, wherein step S3 comprises the following specific operations:
S3.1, performing an inverse Fourier transform on the low-pass filtered image G_low to obtain the first low-frequency information image I_low, which represents the original image (the first-moment mean-normalized image Ī) with its high-frequency information removed:

I_low(x, y) = (1/(M·N)) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} G_low(u, v) · e^{a·2π(ux/M + vy/N)},

where I_low(x, y) is the pixel value at row x, column y of the first low-frequency information image I_low; (u, v) is a point in the frequency domain, u ∈ {0, 1, ..., M−1}, v ∈ {0, 1, ..., N−1}; x ∈ {0, 1, ..., M−1}, y ∈ {0, 1, ..., N−1}; and a denotes the imaginary unit;
S3.2, performing an inverse Fourier transform on the high-pass filtered image G_high to obtain the first high-frequency information image I_high, which represents the original image (the first-moment mean-normalized image Ī) with its low-frequency information removed:

I_high(x, y) = (1/(M·N)) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} G_high(u, v) · e^{a·2π(ux/M + vy/N)},

where I_high(x, y) is the pixel value at row x, column y of the first high-frequency information image I_high; (u, v), x, y, and a are as defined above;
S3.3, performing differential processing on the first-moment mean-normalized image and the first low-frequency information image to obtain the first difference map, i.e. the difference information between the first-moment mean-normalized image Ī and the first low-frequency information image I_low:

I_diff(x, y) = Ī(x, y) − I_low(x, y).
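Under the same assumptions as above (Gaussian masks, centred spectrum), claim 4's inverse transforms and the S3.3 difference map can be sketched as follows; taking the absolute value of the difference is this sketch's choice, not something the claim states:

```python
import numpy as np

def split_bands(norm_img, sigma_low=50.0, sigma_high=100.0):
    """S3.1-S3.3: invert the filtered spectra, then form the difference map."""
    M, N = norm_img.shape
    F = np.fft.fftshift(np.fft.fft2(norm_img))
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    G_low = np.exp(-D2 / (2.0 * sigma_low ** 2)) * F
    G_high = (1.0 - np.exp(-D2 / (2.0 * sigma_high ** 2))) * F
    I_low = np.fft.ifft2(np.fft.ifftshift(G_low)).real    # S3.1: low-frequency image
    I_high = np.fft.ifft2(np.fft.ifftshift(G_high)).real  # S3.2: high-frequency image
    I_diff = np.abs(norm_img - I_low)                     # S3.3: first difference map
    return I_low, I_high, I_diff
```

Because the low-pass mask leaves the DC term untouched, the low-frequency image keeps the mean of the input, so only the detail above the Gaussian cutoff survives in the difference map.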
5. The virtual focus detection method for video monitoring equipment according to claim 4, wherein step S4 comprises the following specific operations: repeating steps S1 to S3 to obtain the second-moment image, the second-moment mean-normalized image Ī^(2), the second low-frequency information image I_low^(2), the second high-frequency information image I_high^(2), and the second difference map, the second difference map being the difference information between the second-moment mean-normalized image Ī^(2) and the second low-frequency information image I_low^(2):

I_diff^(2)(x, y) = Ī^(2)(x, y) − I_low^(2)(x, y).
6. The virtual focus detection method for video monitoring equipment according to claim 5, wherein step S5 comprises the following specific operations:
S5.1, binarizing the first difference map and the second difference map with the Otsu algorithm to obtain the binary maps B_diff^(1) and B_diff^(2), and counting the numbers of non-zero pixels n_diff^(1) and n_diff^(2) of B_diff^(1) and B_diff^(2), respectively;
S5.2, binarizing the first high-frequency information image I_high^(1) and the second high-frequency information image I_high^(2) with the Otsu algorithm to obtain the binary maps B_high^(1) and B_high^(2); computing the union B_∪ of B_high^(1) and B_high^(2); and counting the numbers of non-zero pixels of the binary map and of the union B_∪, respectively;
S5.3, judging whether the virtual focus phenomenon exists according to the binary-map information: when the non-zero pixel counts obtained in steps S5.1 and S5.2 satisfy the predetermined decision thresholds, the virtual focus phenomenon is judged to exist.
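The Otsu binarization and pixel counting of S5.1/S5.2 can be sketched in plain NumPy. The exact S5.3 inequalities appear only as formula images in the source, so no decision rule is reproduced here; the union step follows the claim's B_high^(1) ∪ B_high^(2) construction:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method on a float image: maximize between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # probability of class 0 up to bin k
    mu = np.cumsum(p * centers)       # cumulative mean up to bin k
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[int(np.nanargmax(between[:-1]))]

def binarize_and_count(img):
    """S5.1/S5.2: Otsu binarization followed by a non-zero pixel count."""
    b = (img > otsu_threshold(img)).astype(np.uint8)
    return b, int(b.sum())

def union_count(b1, b2):
    """S5.2: union of two binary maps and its non-zero pixel count."""
    u = np.maximum(b1, b2)
    return u, int(u.sum())
```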
CN202211676112.XA 2022-12-26 2022-12-26 Virtual focus detection method for video monitoring equipment Pending CN116152175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211676112.XA CN116152175A (en) 2022-12-26 2022-12-26 Virtual focus detection method for video monitoring equipment


Publications (1)

Publication Number Publication Date
CN116152175A true CN116152175A (en) 2023-05-23

Family

ID=86355437




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination