CN115514955A - Camera picture quality abnormality detection and identification method - Google Patents

Camera picture quality abnormality detection and identification method

Info

Publication number
CN115514955A
Authority
CN
China
Prior art keywords
image
img
value
camera
jitter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211349015.XA
Other languages
Chinese (zh)
Other versions
CN115514955B (en)
Inventor
张何伟
琚午阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weihai Ruixin Intelligent Technology Co ltd
Original Assignee
Weihai Ruixin Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weihai Ruixin Intelligent Technology Co., Ltd.
Priority to CN202211349015.XA
Publication of CN115514955A
Application granted
Publication of CN115514955B
Current legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Abstract

The invention discloses a camera picture quality abnormality detection and identification method comprising the following steps. S1: reading video frames from a video stream and setting a shift reference frame and a jitter reference frame; S2: performing signal detection on the camera input; S3: performing partial-picture-loss detection on the camera input; S4: performing picture-freeze detection on the camera input according to the current shift reference frame and the current jitter reference frame; S5: performing occlusion detection on the camera input; S6: performing brightness detection on the camera input according to the converted Lab color space image img_Lab; S7: performing color cast detection on the camera input according to the Lab color space image img_Lab; S8: performing sharpness detection on the camera input according to the Gaussian-blurred image img_gaussian; S9: performing noise detection on the camera input; S10: performing banding-stripe detection on the camera input; S11: performing jitter detection on the camera input according to the current jitter reference frame; S12: performing shift detection on the camera input according to the current shift reference frame.

Description

Camera picture quality abnormality detection and identification method
Technical Field
The invention relates to the field of camera abnormality detection, and in particular to a camera picture quality abnormality detection and identification method.
Background
Faced with a huge and growing number of video monitoring devices, higher requirements are being placed on the operation and maintenance of monitoring equipment and video images. As video monitoring systems gradually expand in scale, more and more cameras are deployed, multiplying the workload of traditional managers. In addition, because of complex external environments, equipment quality problems, or video quality degradation introduced during transmission, camera abnormalities cannot always be identified by the naked eye, and 24/7 online real-time monitoring is difficult to achieve manually. Enabling the monitoring system to detect camera abnormalities automatically is therefore of great significance: a video abnormality detection system aims to improve the working efficiency of the monitoring system and reduce the workload of monitoring staff. Common video quality problems include camera signal loss, partial picture loss, picture freezing, camera occlusion, abnormal picture brightness, picture color cast, picture blurring, picture noise or banding stripes, camera jitter, camera shift, and the like.
The application with publication number CN 112804520 A discloses a high-speed monitoring video quality detection method (hereinafter referred to as prior art 1). As shown in fig. 1, the detection flow of prior art 1 includes black-screen detection, occlusion detection, blur detection, luminance anomaly detection and chrominance anomaly detection, wherein: black-screen detection grays the image and computes each pixel's gray value by the formula Gray = R × 0.299 + G × 0.587 + B × 0.114; pixels whose gray value is smaller than T1 are called dark pixels, and the proportion of dark pixels to total pixels is counted as rate = blackNum / totalNum, where blackNum is the total number of dark pixels and totalNum is the total number of pixels in the grayscale image. A contrast threshold T is set and the proportion of dark pixels is compared with T; if the proportion is greater than T, the current image is judged to be a black screen, otherwise it is not. Occlusion detection grays the image, extracts edge feature information with the Laplacian operator, computes the contours of darker regions in the grayscale image (a darker region being the set of pixels whose gray value is smaller than T1), extracts the contours whose area is larger than a threshold, computes the mean and variance of the Laplacian edges of the extracted contours, and judges whether occlusion exists by comparing the results against set thresholds. Blur detection first applies a three-level Haar wavelet decomposition to the grayscale image and performs edge detection on the wavelet-transformed frequency-domain images: the pixel matrix of each wavelet level is squared and the pixels are accumulated into an average matrix, which is assigned to a new matrix called the distance matrix; the distance matrix of each wavelet level is traversed with an m × n window, where m is the number of rows and n the number of columns of the distance matrix, and the maximum value within the window is taken to obtain a distance-maximum matrix for each level; the mean and variance of each level's distance-maximum matrix are then computed, as well as the mean of the distance-maximum matrices over the three decomposition levels; threshold conditions are set per level for the three-level wavelet-domain distance-maximum matrices, which are traversed to count the number of pixels satisfying the thresholds in all three matrices (recorded as nEdge) and the number of pixels satisfying matrix1 > matrix2 > matrix3 (recorded as nDa); the blur coefficient is Per = float(nDa) / nEdge, and the image is judged blurred or not against a set threshold. Luminance anomaly detection computes the mean and variance of the grayscale image; if the mean deviates from the mean point and the variance is smaller than the set normal variance, the image brightness is judged abnormal. Chrominance anomaly detection first converts the image from RGB space to Lab space, computes the mean and variance of the a and b components, judges whether the image has a color cast by thresholding, and judges toward which color the image is cast from the signs of the a- and b-component offsets.
However, in the above solution, a camera that has lost its signal does not always produce a completely black picture; a no-signal picture usually carries a caption, so the applicable range of the black-screen detection is narrow. The occlusion detection extracts edge feature information with the Laplacian operator, but the Laplacian is sensitive to noise. In addition, the above scheme considers neither noise or abnormal stripes in the picture, nor jitter or shift of the monitoring camera's picture.
The application with publication number CN 112291551 A discloses a video quality detection method based on image processing, a storage device, and a mobile terminal (hereinafter referred to as prior art 2). As shown in fig. 2, the detection flow of prior art 2 first selects a standard image and preprocesses it to obtain the values of a standard frame; it then continuously reads frames from the real-time video, performs local image processing on the real-time pictures, compares the results with the standard frame, finally runs various detections on the processed local images, and synthesizes all detection results into a video quality result. However, a single standard image is selected and the reference image is not updated over time; as time and lighting change, for example as ambient light becomes darker or brighter, the detection indices computed on the current image and on the standard image diverge greatly, and false detections may occur.
Therefore, there is a need in the art for a more efficient and more comprehensive method for detecting and identifying abnormal image quality of a camera.
Disclosure of Invention
In order to solve the above problems, the present invention provides a camera picture quality abnormality detection and identification method, which detects abnormal conditions of a camera picture by processing the camera's real-time video stream and identifies 11 abnormal conditions: camera signal loss, partial picture loss, picture freezing, camera occlusion, abnormal picture brightness, picture color cast, picture blurring, picture noise, banding stripes, camera jitter, and camera shift, with higher detection and identification efficiency.
In order to achieve the above object, the present invention provides a method for detecting and identifying abnormal picture quality of a camera, which comprises:
step S1: reading video frames from the video stream: the video stream transmitted by the camera in real time is sequentially decoded into RGB three-channel image frames img, and each image frame img is proportionally scaled into an image img_resize of width w and height h; the first frame's img_resize is copied as the initial shift reference frame img_move and jitter reference frame img_jitter; the references are then updated every n seconds, taking the img_resize at the corresponding moment as the updated shift reference frame img_move and the previous frame's img_resize as the updated jitter reference frame img_jitter;
step S2: performing signal detection on the camera input, specifically:
step S201: applying Gaussian blur to any image img_resize to obtain a Gaussian-blurred image img_gaussian;
step S202: performing edge extraction on the Gaussian-blurred image img_gaussian with the Canny operator to obtain an image img_canny;
step S203: counting the number of connected regions connectedArea_num in the image img_canny and calculating the value of 1 − connectedArea_num/(w × h); if the calculated value is greater than a preset threshold signal_thres, judging that there is no signal and ending detection directly; otherwise, judging that there is a signal and continuing the subsequent detection;
step S3: performing partial-picture-loss detection on the camera input;
step S4: performing picture-freeze detection on the camera input according to the current shift reference frame img_move and the current jitter reference frame img_jitter;
step S5: performing occlusion detection on the camera input, specifically: converting any image img_resize into a Lab color space image img_Lab, calculating the area block_s1 of the region whose gray value is smaller than a preset threshold block_thres1, finding the corresponding edges in the image img_canny of step S202 and counting the area block_s2 of the corresponding edge region, finally calculating the value of block_s2/block_s1, and if the calculated value is greater than a preset threshold block_thres2, judging that an object occludes the camera;
step S6: performing brightness detection on the camera input according to the Lab color space image img_Lab;
step S7: performing color cast detection on the camera input according to the Lab color space image img_Lab;
step S8: performing sharpness detection on the camera input according to the Gaussian-blurred image img_gaussian;
step S9: performing noise detection on the camera input;
step S10: performing stripe detection on the camera input;
step S11: performing jitter detection on the camera input according to the current jitter reference frame img_jitter;
step S12: performing shift detection on the camera input according to the current shift reference frame img_move.
In an embodiment of the present invention, step S3 specifically includes:
step S301: calculating the area partLost_s1 of the connected region in which the difference in pixel value between adjacent pixels in any image img_resize does not exceed a preset threshold partLose_thres1;
step S302: calculating the value of partLost_s1/(w × h), and if the result is greater than a preset threshold partLose_thres2, judging that part of the picture is lost.
In an embodiment of the present invention, step S4 specifically includes:
step S401: differencing the current frame to be detected against both the current shift reference frame img_move and the current jitter reference frame img_jitter to obtain two difference images, diff_img_move and diff_img_jitter;
step S402: counting the total number freeze_sum of zero-valued pixels in the difference images, calculating the value of freeze_sum/(w × h), and if the result is greater than a preset threshold freeze_thres, judging that the picture is frozen.
In an embodiment of the present invention, step S6 specifically includes:
step S601: counting the area bright_s1 whose L-channel value in any image img_Lab is smaller than a preset threshold bright_thres1;
step S602: counting the area bright_s2 whose L-channel value in any image img_Lab is greater than a preset threshold bright_thres2;
step S603: calculating the values of bright_s1/(w × h) and bright_s2/(w × h) respectively:
if bright_s1/(w × h) is greater than a preset threshold bright_thres3, judging that the image is too dark;
if bright_s2/(w × h) is greater than a preset threshold bright_thres4, judging that the image is too bright.
In an embodiment of the present invention, step S7 specifically includes:
step S701: calculating the statistical means d_a and d_b of the a-channel and b-channel pixels in any image img_Lab by formula (1):

$$d_a = \frac{1}{w h}\sum_{i=1}^{h}\sum_{j=1}^{w} a(i,j), \qquad d_b = \frac{1}{w h}\sum_{i=1}^{h}\sum_{j=1}^{w} b(i,j) \tag{1}$$

where a(i,j) and b(i,j) are the pixel values at row i, column j of the a channel and b channel of img_Lab;
step S702: obtaining the value of the color cast factor K by the following calculation:

$$D = \sqrt{(d_a - 128)^2 + (d_b - 128)^2}$$

$$M_a = \frac{1}{w h}\sum_{i=1}^{h}\sum_{j=1}^{w} \left| a(i,j) - d_a \right|, \qquad M_b = \frac{1}{w h}\sum_{i=1}^{h}\sum_{j=1}^{w} \left| b(i,j) - d_b \right|, \qquad M = \sqrt{M_a^2 + M_b^2}$$

$$K = D / M$$

where D and M are intermediate values;
step S703: if the value of the color cast factor K is greater than the color cast threshold color_thres, judging that the image has a color cast;
step S704: comparing the magnitudes of |d_a| and |d_b|:
if |d_a| > |d_b|, examining the value of d_a: if d_a is greater than 128, the image is judged reddish, otherwise greenish;
if |d_a| < |d_b|, examining the value of d_b: if d_b is greater than 128, the image is judged yellowish, otherwise bluish.
In an embodiment of the present invention, step S8 specifically includes:
step S801: obtaining the absolute difference between each pair of spatially adjacent pixels in any image img_resize after graying, with the calculation formula:

diff_src(i, j) = abs(I(i−1, j) − I(i, j)) + abs(I(i, j−1) − I(i, j))   (2)

where abs denotes the absolute value, i and j denote the i-th row and j-th column respectively, and I denotes the corresponding pixel value;
step S802: summing all values to obtain the sum diff_src_sum:

$$\mathrm{diff\_src\_sum} = \sum_{i=1}^{h}\sum_{j=1}^{w} \mathrm{diff\_src}(i, j)$$

step S803: obtaining the absolute differences of adjacent pixels at the corresponding spatial positions of the grayed Gaussian-blurred image img_gaussian of step S201, also using formula (2);
step S804: for each position, taking the difference between the absolute value obtained in step S801 and that obtained in step S803, and summing as follows to obtain diff_diff_sum:

$$\mathrm{diff\_diff\_sum} = \sum_{i=1}^{h}\sum_{j=1}^{w} \left( \mathrm{diff\_src}_{average}(i, j) - \mathrm{diff\_src}_{gaussian}(i, j) \right)$$

where diff_src_average(i, j) denotes the absolute value at row i, column j obtained in step S801 and diff_src_gaussian(i, j) denotes the corresponding absolute value obtained in step S803;
step S805: calculating the value of diff_diff_sum/diff_src_sum; if the value is greater than a preset threshold sharpness_thres, judging that the image is blurred.
In an embodiment of the present invention, step S9 specifically includes:
step S901: after graying any image img_resize, obtaining a denoised image img_average by mean filtering;
step S902: calculating the signal-to-noise ratio snr of the image:

$$snr = 10 \log_{10} \left( \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} I_{resize}(i, j)^2}{\sum_{i=1}^{h}\sum_{j=1}^{w} \left( I_{resize}(i, j) - I_{average}(i, j) \right)^2} \right)$$

where I_resize(i, j) is the pixel value of image img_resize at row i, column j, and I_average(i, j) is the pixel value of image img_average at row i, column j;
step S903: when the signal-to-noise ratio snr is smaller than a preset threshold noise_thres, judging that the input image contains obvious noise.
In an embodiment of the present invention, step S10 specifically includes:
step S1001: performing a discrete Fourier transform on any image img_resize;
step S1002: in the resulting spectrogram, counting the total number of abnormal bright spots whose pixel values are greater than a preset threshold strip_thres1 within the bands of preset width strip_width along the horizontal and vertical center lines;
step S1003: when the total number of abnormal bright spots is greater than a preset threshold stripe_thres2, judging that the image exhibits a banding-stripe abnormality.
In an embodiment of the present invention, step S11 specifically includes:
step S1101: extracting uniform SURF features from the current jitter reference frame img_jitter; uniform SURF features add spatial uniformization to conventional SURF feature extraction, that is, each layer of the pyramid constructed by the SURF algorithm is divided into a number of grid cells greater than the number of required feature points, and the keypoint with the highest response value is selected from each cell; jitter_n feature points jitter_points1 are extracted by the SURF algorithm;
step S1102: extracting uniform SURF features from the current image img_resize, obtaining jitter_n feature points jitter_points2;
step S1103: calculating the two-dimensional vectors of the spatial coordinate differences between matched points of jitter_points1 and jitter_points2 using Hamming-distance brute-force matching, and judging that jitter exists if the length of the two-dimensional vector is greater than a preset threshold jitter_thres;
step S1104: pushing the corresponding two-dimensional vectors into a queue of corresponding frames and fitting trigonometric functions to the horizontal and vertical coordinates of the queue to obtain the corresponding jitter amplitude, frequency and direction.
In an embodiment of the present invention, step S12 specifically includes:
step S1201: extracting jitter_n feature points move_points from the current shift reference frame img_move in the same uniform-SURF manner as step S1101;
step S1202: matching the feature points move_points against the feature points jitter_points2 of step S1102, and counting the number move_match_n of nearest-neighbour matches whose feature matching distance is smaller than a preset value move_thres1;
step S1203: if move_match_n is smaller than a preset threshold move_thres2, judging that severe shift has occurred; otherwise, calculating the image space position of each matched point;
step S1204: calculating the average distance between the image space positions of the matched points; if the average distance is greater than a preset threshold move_thres3, judging that shift has occurred, and calculating the shift distance and direction from the spatial coordinate vectors of the image space positions.
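Steps S1202–S1204 operate on already matched features; a minimal sketch over precomputed match distances and matched point coordinates (all names and thresholds are illustrative assumptions):

```python
import numpy as np

def shift_state(match_dists, pts_move, pts_cur,
                move_thres1=40.0, move_thres2=8, move_thres3=10.0):
    """Hypothetical sketch of steps S1202-S1204 (shift detection)."""
    good = [i for i, d in enumerate(match_dists) if d < move_thres1]
    move_match_n = len(good)                   # S1202
    if move_match_n < move_thres2:             # S1203
        return ("severe shift",)
    # S1204: spatial displacement vector of each surviving match
    vecs = np.array([np.subtract(pts_cur[i], pts_move[i]) for i in good])
    mean_dist = float(np.linalg.norm(vecs, axis=1).mean())
    if mean_dist > move_thres3:
        direction = vecs.mean(axis=0)          # shift direction vector
        return ("shift", mean_dist, direction)
    return ("no shift",)
```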
Compared with the prior art, the camera picture quality abnormality detection and identification method provided by the invention can detect and analyze a variety of abnormal conditions of monitoring cameras. When the number of monitoring cameras is large, it greatly saves the human resources needed to inspect camera pictures manually; at the same time it offers more temporal and spatial freedom than manual inspection, realizes seamless detection and alarming around the clock, and can adapt to the monitoring requirements of various scenes.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of prior art 1 detection;
FIG. 2 is a flow chart of prior art 2 detection;
FIG. 3 is a flowchart illustrating video anomaly detection according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without inventive step, are within the scope of the present invention.
Fig. 3 is a flow chart of video anomaly detection according to an embodiment of the present invention. As shown in fig. 3, this embodiment provides a camera picture quality abnormality detection and identification method, which includes:
step S1: reading video frames from the video stream: the video stream transmitted by the camera in real time is sequentially decoded into RGB three-channel image frames img, and each image frame img is proportionally scaled into an image img_resize; the size of img_resize can be set according to actual processing performance (the higher the required processing frame rate, the smaller the scale), and this embodiment does not specifically limit the image frame size; the width of any image img_resize is w and its height is h; the first frame's img_resize is copied as the initial shift reference frame img_move and jitter reference frame img_jitter; the references are then updated every n seconds, taking the img_resize at the corresponding moment as the updated shift reference frame img_move and the previous frame's img_resize as the updated jitter reference frame img_jitter;
step S2: performing signal detection on the camera input, specifically:
step S201: applying Gaussian blur to any image img_resize to obtain a Gaussian-blurred image img_gaussian;
step S202: performing edge extraction on the Gaussian-blurred image img_gaussian with the Canny operator to obtain an image img_canny;
step S203: counting the number of connected regions connectedArea_num in the image img_canny and calculating the value of 1 − connectedArea_num/(w × h); if the calculated value is greater than a preset threshold signal_thres, judging that there is no signal and ending detection directly; otherwise, judging that there is a signal and continuing the subsequent detection;
step S3: performing partial-picture-loss detection on the camera input;
in this embodiment, step S3 specifically includes:
step S301: calculating the area partLost_s1 of the connected region in which the difference in pixel value between adjacent pixels in any image img_resize does not exceed a preset threshold partLose_thres1;
step S302: calculating the value of partLost_s1/(w × h), and if the result is greater than a preset threshold partLose_thres2, judging that part of the picture is lost.
step S4: performing picture-freeze detection on the camera input according to the current shift reference frame img_move and the current jitter reference frame img_jitter;
in this embodiment, step S4 specifically includes:
step S401: differencing the current frame to be detected against both the current shift reference frame img_move and the current jitter reference frame img_jitter to obtain two difference images, diff_img_move and diff_img_jitter;
step S402: counting the total number freeze_sum of zero-valued pixels in the difference images, calculating the value of freeze_sum/(w × h), and if the result is greater than a preset threshold freeze_thres, judging that the picture is frozen.
step S5: performing occlusion detection on the camera input, specifically: converting any image img_resize into a Lab color space image img_Lab (any existing conversion method may be used, and the invention is not limited in this respect), calculating the area block_s1 of the region whose gray value is smaller than a preset threshold block_thres1, finding the corresponding edges in the image img_canny of step S202 and counting the area block_s2 of the corresponding edge region, finally calculating the value of block_s2/block_s1, and if the calculated value is greater than a preset threshold block_thres2, judging that an object occludes the camera;
step S6: perform brightness detection on the camera input according to the Lab color-space image img_Lab;
in this embodiment, step S6 specifically comprises:
step S601: count the area bright_s1 of pixels in any image img_Lab whose L-channel value is smaller than a preset threshold bright_thres1;
step S602: count the area bright_s2 of pixels in any image img_Lab whose L-channel value is greater than a preset threshold bright_thres2;
step S603: calculate the values of bright_s1/(w×h) and bright_s2/(w×h), respectively:
if bright_s1/(w×h) is greater than a preset threshold bright_thres3, judge that the image is too dark;
if bright_s2/(w×h) is greater than a preset threshold bright_thres4, judge that the image is too bright.
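Steps S601 through S603 amount to two area-fraction thresholds on the L channel; a sketch follows, with the L channel stored 0-255 (as OpenCV does) and all four thresholds as illustrative placeholders:

```python
import numpy as np

def brightness_flags(L, w, h, bright_thres1=40, bright_thres2=220,
                     bright_thres3=0.5, bright_thres4=0.5):
    """Steps S601-S603: return (too_dark, too_bright) for an L channel."""
    bright_s1 = int((L < bright_thres1).sum())    # too-dark area
    bright_s2 = int((L > bright_thres2).sum())    # too-bright area
    too_dark = bright_s1 / (w * h) > bright_thres3
    too_bright = bright_s2 / (w * h) > bright_thres4
    return bool(too_dark), bool(too_bright)

dark_flags = brightness_flags(np.full((40, 40), 5, np.uint8), 40, 40)
bright_flags = brightness_flags(np.full((40, 40), 250, np.uint8), 40, 40)
```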
Step S7: perform color-cast detection on the camera input according to the Lab color-space image img_Lab;
in this embodiment, step S7 specifically comprises:
step S701: calculate the statistical means d_a and d_b of the a-channel and b-channel pixels of any image img_Lab by formula (1):

$$d_a=\frac{1}{w\,h}\sum_{i=1}^{h}\sum_{j=1}^{w}a(i,j),\qquad d_b=\frac{1}{w\,h}\sum_{i=1}^{h}\sum_{j=1}^{w}b(i,j)\tag{1}$$

where a(i,j) and b(i,j) are the pixel values in the i-th row and j-th column of the a channel and the b channel of img_Lab;
step S702: obtain the value of the color cast factor K by the following calculation:

$$D=\sqrt{(d_a-128)^2+(d_b-128)^2}$$
$$M=\sqrt{M_a^2+M_b^2},\qquad M_a=\frac{1}{w\,h}\sum_{i=1}^{h}\sum_{j=1}^{w}\lvert a(i,j)-d_a\rvert,\qquad M_b=\frac{1}{w\,h}\sum_{i=1}^{h}\sum_{j=1}^{w}\lvert b(i,j)-d_b\rvert$$
$$K=D/M$$

where D and M are calculated intermediate values that may be regarded approximately as a statistical mean and a variance;
step S703: if the value of the color cast factor K is greater than the color-cast threshold color_thres, judge that the image has a color cast;
step S704: compare |d_a| with |d_b|:
if |d_a| > |d_b|, examine d_a: if d_a > 128, judge the image reddish; otherwise, judge it greenish;
if |d_a| < |d_b|, examine d_b: if d_b > 128, judge the image yellowish; otherwise, judge it bluish.
Step S8: perform sharpness detection on the camera input according to the Gaussian-blurred image img_gaussian;
in this embodiment, step S8 specifically comprises:
step S801: after graying any image img_resize, obtain the absolute differences between each pixel and its spatially adjacent pixels, with the calculation formula:

diff_src(i,j) = abs(I(i-1,j) - I(i,j)) + abs(I(i,j-1) - I(i,j))   (2)

where abs denotes the absolute value, i and j denote the i-th row and j-th column, respectively, and I denotes the corresponding pixel value; for example, I(i,j) is the pixel value of the image at row i, column j;
step S802: sum all the values to obtain the total diff_src_sum:

$$\text{diff\_src\_sum}=\sum_{i=2}^{h}\sum_{j=2}^{w}\text{diff\_src}(i,j)$$

step S803: obtain the absolute differences of adjacent pixels at the corresponding spatial positions of the grayed Gaussian-blurred image img_gaussian of step S201, also using formula (2);
step S804: for each position, subtract the absolute value obtained in step S803 from the absolute value obtained in step S801, and sum the differences as follows to obtain diff_diff_sum:

$$\text{diff\_diff\_sum}=\sum_{i=2}^{h}\sum_{j=2}^{w}\bigl(\text{diff\_src}_{resize}(i,j)-\text{diff\_src}_{gaussian}(i,j)\bigr)$$

where diff_src_resize(i,j) denotes the absolute value at row i, column j obtained in step S801, and diff_src_gaussian(i,j) denotes the absolute value at row i, column j obtained in step S803;
step S805: calculate the value of diff_diff_sum/diff_src_sum; if the value is greater than a preset threshold sharpness_thres, judge that the image is blurred.
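Formula (2) and the step S802-S805 ratio vectorize directly; in the sketch below a crude 4-neighbour average stands in for the Gaussian blur of step S201 (`cv2.GaussianBlur` would be the usual choice), and the function names are illustrative:

```python
import numpy as np

def neighbour_diff(gray):
    """Formula (2): |I(i-1,j)-I(i,j)| + |I(i,j-1)-I(i,j)| on the interior."""
    g = gray.astype(np.float64)
    return (np.abs(g[:-1, 1:] - g[1:, 1:]) +
            np.abs(g[1:, :-1] - g[1:, 1:]))

def blur_ratio(gray, blurred):
    """Steps S802-S805: diff_diff_sum / diff_src_sum, i.e. the share of
    neighbour-difference energy removed by the blur."""
    d_src = neighbour_diff(gray)
    d_blur = neighbour_diff(blurred)
    return (d_src - d_blur).sum() / d_src.sum()

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (32, 32)).astype(np.float64)
# crude 4-neighbour average standing in for the Gaussian blur of step S201
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 0)) / 4
ratio = blur_ratio(sharp, blurred)
```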
Step S9: perform noise detection on the camera input;
in this embodiment, step S9 specifically comprises:
step S901: after graying any image img_resize, obtain an image img_average through mean-filter denoising; any existing mean-filter method may be used, and this embodiment is not limited in this respect;
step S902: calculate the signal-to-noise ratio snr of the image:

$$snr=10\log_{10}\!\left(\frac{\sum_{i=1}^{h}\sum_{j=1}^{w}I_{resize}(i,j)^2}{\sum_{i=1}^{h}\sum_{j=1}^{w}\bigl(I_{resize}(i,j)-I_{average}(i,j)\bigr)^2}\right)$$

where I_resize(i,j) is the pixel value of the image img_resize at row i, column j, and I_average(i,j) is the pixel value of the image img_average at row i, column j;
step S903: when the signal-to-noise ratio snr is smaller than a preset threshold noise_thres, judge that the input image has obvious noise.
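Steps S901 and S902 can be sketched as follows; the original equation image is not reproduced in this publication, so the 10·log10 ratio-of-energies form of the SNR is an assumption, and the 3×3 mean filter is a minimal stand-in for step S901:

```python
import numpy as np

def mean_filter3(img):
    """Minimal 3x3 mean filter (borders handled by edge padding); any
    existing mean-filter denoiser could be used instead."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def snr_db(img, denoised):
    """Step S902: signal energy over residual-noise energy, in decibels."""
    img = img.astype(np.float64)
    noise = img - denoised
    return 10.0 * np.log10((img ** 2).sum() / (noise ** 2).sum())

clean = np.full((32, 32), 120.0)
noisy = clean + np.random.default_rng(1).normal(0.0, 5.0, clean.shape)
snr = snr_db(noisy, mean_filter3(noisy))
```

Comparing `snr` against noise_thres then yields the step-S903 verdict.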
Step S10: perform stripe detection on the camera input;
in this embodiment, step S10 specifically comprises:
step S1001: perform a discrete Fourier transform (DFT) on any image img_resize;
step S1002: in the spectrogram obtained after the transform, count the total number of abnormal bright spots whose pixel values are greater than a preset threshold stripe_thres1 within the bands of preset width stripe_width centred on the horizontal and vertical centre lines;
step S1003: when the total number of abnormal bright spots is greater than a preset threshold stripe_thres2, judge that the image exhibits a banded-stripe abnormality.
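Steps S1001 and S1002 can be sketched with NumPy's FFT; counting is done here on the log-magnitude spectrum with the DC term excluded, and both parameters are illustrative assumptions:

```python
import numpy as np

def stripe_bright_spots(gray, stripe_width=3, stripe_thres1=10.0):
    """Steps S1001-S1002: count spectrum peaks inside bands of half-width
    stripe_width around the centred spectrum's centre lines."""
    spec = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    mag = np.log1p(np.abs(spec))
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    band = np.zeros_like(mag, dtype=bool)
    band[cy - stripe_width:cy + stripe_width + 1, :] = True   # horizontal band
    band[:, cx - stripe_width:cx + stripe_width + 1] = True   # vertical band
    band[cy, cx] = False                                      # drop the DC term
    return int((mag[band] > stripe_thres1).sum())

x = np.arange(64)
striped = (127.0 + 100.0 * np.sin(2 * np.pi * 8 * x / 64))[None, :] * np.ones((64, 1))
n_spots = stripe_bright_spots(striped)   # two symmetric peaks at +-8 cycles
```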
Step S11: perform jitter detection on the camera input according to the current jitter reference frame img_jitter;
in this embodiment, step S11 specifically comprises:
step S1101: extract spatially homogenized surf features from the current jitter reference frame img_jitter. The homogenized surf features apply spatial homogenization on top of the existing surf (Speeded-Up Robust Features) extraction: each layer of the pyramid constructed by the surf algorithm is divided into a number of grid cells whose total exceeds the number of required feature points, the key point with the highest response value is then selected from each cell, and jitter_n feature points jitter_points1 are extracted by the surf algorithm;
this can be realized by extracting key points cell by cell, lowering the response threshold when a cell yields no key point, specifically:
uniformly select jitter_n key points based on a pyramid quadtree;
if the number of key points in any cell is greater than 1, split the cell into 4 cells; if the number of key points in any cell is 0, delete the cell;
if the number of key points in a newly split cell is greater than 1, continue splitting it into 4 cells; delete it if the number is 0;
repeat this process until the total number of cells exceeds the number of required feature points or no further splitting is possible.
Step S1102: extract homogenized surf features from the current image img_resize, obtaining jitter_n feature points jitter_points2;
step S1103: match the feature points jitter_points1 and jitter_points2 by Hamming-distance brute-force matching (Brute-Force matcher) and calculate the two-dimensional vector of the spatial coordinate difference of each matched pair; if the length of the vector is greater than a preset threshold jitter_thres, judge that jitter exists;
step S1104: push the corresponding two-dimensional vector into the queue of the corresponding frame (for example, each frame has a queue of length jitter_len), and perform trigonometric-function fitting on the horizontal and vertical coordinates of the queue to obtain the corresponding jitter amplitude, frequency and direction.
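The grid homogenization of step S1101 can be sketched on a synthetic response map as follows; selecting the strongest response per cell and trimming to the wanted count is the core idea, while a real pipeline would run this per SURF pyramid level (SURF lives in opencv-contrib; ORB is a patent-free alternative whose binary descriptors also fit the Hamming matching of step S1103). All names here are illustrative:

```python
import numpy as np

def homogenized_keypoints(response, n_points):
    """Step S1101 sketch: build more grid cells than wanted points, keep the
    strongest response per cell, drop empty cells, trim to n_points."""
    h, w = response.shape
    g = int(np.ceil(np.sqrt(n_points))) + 1              # grid with > n_points cells
    ys = np.linspace(0, h, g + 1, dtype=int)
    xs = np.linspace(0, w, g + 1, dtype=int)
    pts = []
    for i in range(g):
        for j in range(g):
            cell = response[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            if cell.size == 0 or cell.max() <= 0:        # empty cell: delete it
                continue
            k = np.unravel_index(int(cell.argmax()), cell.shape)
            pts.append((int(ys[i] + k[0]), int(xs[j] + k[1]), float(cell.max())))
    pts.sort(key=lambda p: -p[2])                        # strongest responses first
    return [(y, x) for y, x, _ in pts[:n_points]]

resp = np.random.default_rng(2).random((60, 80))         # synthetic response map
pts = homogenized_keypoints(resp, 16)
```

Because each cell contributes at most one point, the selected key points are spread across the image rather than clustered on a few strong textures.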
Step S12: perform shift detection on the camera input according to the current shift reference frame img_move.
In this embodiment, step S12 specifically comprises:
step S1201: extract jitter_n feature points move_points of the current shift reference frame img_move in the same way as the homogenized surf feature extraction of step S1101;
step S1202: match the feature points move_points with the feature points jitter_points2 of step S1102, and count the number move_match_n of nearest matches whose feature-matching distance is smaller than a preset value move_thres1;
step S1203: if the count move_match_n is smaller than a preset threshold move_thres2, judge that severe displacement has occurred; otherwise, calculate the image-space position of each matched point;
step S1204: calculate the average distance between the image-space positions of the matched points; if the average distance is greater than a preset threshold move_thres3, judge that a shift has occurred, and calculate the shift distance and direction from the spatial coordinate vectors of the image-space positions.
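Given already-matched point pairs, the decisions of steps S1203 and S1204 are a couple of lines; in this sketch the thresholds are illustrative and pts_ref[i] is assumed to match pts_cur[i]:

```python
import numpy as np

def shift_estimate(pts_ref, pts_cur, move_thres2=8, move_thres3=2.0):
    """Steps S1203-S1204: verdict plus the mean displacement vector
    (shift distance and direction) when a shift is detected."""
    if len(pts_ref) < move_thres2:                 # too few matches survived
        return "severe shift", None
    vecs = np.asarray(pts_cur, float) - np.asarray(pts_ref, float)
    dists = np.hypot(vecs[:, 0], vecs[:, 1])
    if dists.mean() > move_thres3:
        return "shift", vecs.mean(axis=0)          # shift distance and direction
    return "stable", None

ref = np.random.default_rng(3).random((20, 2)) * 100.0
status, vec = shift_estimate(ref, ref + np.array([5.0, 0.0]))
```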
Compared with prior art 1, the detection and identification provided by this embodiment includes a no-signal detection algorithm that covers the picture-loss states of different camera signals, giving wider applicability, and the whole process covers the detection and identification of many kinds of abnormal conditions, giving a wider detection range. In addition, this embodiment extracts edges with the Canny operator, which is more accurate than the Laplacian operator for noisy edges in dark areas and reduces missed occlusion reports caused by noise.
Compared with prior art 2, the detection and identification provided by this embodiment updates the shift reference frame and the jitter reference frame in real time, and uses the spatially homogenized surf feature-point algorithm to detect camera-image jitter and camera shift; compared with the sift algorithm of prior art 2, it is faster without losing precision, alleviates the problem of feature-point concentration, and adapts better to the detection of image jitter and camera shift.
The camera picture-quality abnormality detection and identification method provided by the invention detects and analyzes many kinds of abnormal conditions of surveillance cameras. When the number of surveillance cameras is huge, it greatly saves the human resources needed to inspect camera pictures manually; at the same time, it offers more temporal and spatial freedom than manual inspection, realizes seamless full-period detection and alarm, and adapts to the monitoring requirements of many scenes.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A camera picture quality abnormality detection and identification method, characterized by comprising the following steps:
step S1: read video frames from a video stream: sequentially decode the video stream transmitted by a camera in real time into RGB three-channel image frames img and scale each image frame img proportionally into an image img_resize of width w and height h; copy the first-frame img_resize as the initial shift reference frame img_move and jitter reference frame img_jitter; update every n seconds, taking the img_resize at the corresponding moment as the updated shift reference frame img_move and the img_resize of the previous frame as the updated jitter reference frame img_jitter;
step S2: perform signal detection on the camera input, specifically:
step S201: apply Gaussian blur to any image img_resize to obtain a Gaussian-blurred image img_gaussian;
step S202: perform edge extraction on the Gaussian-blurred image img_gaussian with a Canny operator to obtain an image img_canny;
step S203: find the number of connected regions contournum in the image img_canny and calculate the value of 1 - contournum/(w×h); if the calculated value is greater than a preset threshold signal_thres, judge that there is no signal and end detection directly; otherwise, judge that there is a signal and continue the subsequent detection;
step S3: perform partial picture-loss detection on the camera input;
step S4: perform picture-freeze detection on the camera input according to the current shift reference frame img_move and the current jitter reference frame img_jitter;
step S5: perform occlusion detection on the camera input, specifically: convert any image img_resize into a Lab color-space image img_Lab, calculate the area block_s1 of the region whose gray value is smaller than a preset threshold block_thres1, find the corresponding edges in the image img_canny of step S202, count the area block_s2 of the corresponding edge region, and finally calculate the value of block_s2/block_s1; if the calculated value is greater than a preset threshold block_thres2, judge that an object occludes the camera;
step S6: perform brightness detection on the camera input according to the Lab color-space image img_Lab;
step S7: perform color-cast detection on the camera input according to the Lab color-space image img_Lab;
step S8: perform sharpness detection on the camera input according to the Gaussian-blurred image img_gaussian;
step S9: perform noise detection on the camera input;
step S10: perform stripe detection on the camera input;
step S11: perform jitter detection on the camera input according to the current jitter reference frame img_jitter;
step S12: perform shift detection on the camera input according to the current shift reference frame img_move.
2. The camera picture quality abnormality detection and identification method according to claim 1, wherein step S3 specifically comprises:
step S301: calculate the area partLost_s1 of the connected region in which the pixel-value difference between adjacent pixels in any image img_resize does not exceed a preset threshold partLose_thres1;
step S302: calculate the value of partLost_s1/(w×h); if the result is greater than a preset threshold partLose_thres2, judge that part of the picture is lost.
3. The camera picture quality abnormality detection and identification method according to claim 1, wherein step S4 specifically comprises:
step S401: subtract the current frame to be detected from the current shift reference frame img_move and from the current jitter reference frame img_jitter, respectively, to obtain two difference images diff_img_move and diff_img_jitter;
step S402: count the total number of zero-valued pixels in the two difference images as freeze_sum, calculate the value of freeze_sum/(w×h), and if the result is greater than a preset threshold freeze_thres, judge that the picture is frozen.
4. The camera picture quality abnormality detection and identification method according to claim 1, wherein step S6 specifically comprises:
step S601: count the area bright_s1 of pixels in any image img_Lab whose L-channel value is smaller than a preset threshold bright_thres1;
step S602: count the area bright_s2 of pixels in any image img_Lab whose L-channel value is greater than a preset threshold bright_thres2;
step S603: calculate the values of bright_s1/(w×h) and bright_s2/(w×h), respectively:
if bright_s1/(w×h) is greater than a preset threshold bright_thres3, judge that the image is too dark;
if bright_s2/(w×h) is greater than a preset threshold bright_thres4, judge that the image is too bright.
5. The camera picture quality abnormality detection and identification method according to claim 1, wherein step S7 specifically comprises:
step S701: calculate the statistical means d_a and d_b of the a-channel and b-channel pixels of any image img_Lab by formula (1):

$$d_a=\frac{1}{w\,h}\sum_{i=1}^{h}\sum_{j=1}^{w}a(i,j),\qquad d_b=\frac{1}{w\,h}\sum_{i=1}^{h}\sum_{j=1}^{w}b(i,j)\tag{1}$$

where a(i,j) and b(i,j) are the pixel values in the i-th row and j-th column of the a channel and the b channel of img_Lab;
step S702: obtain the value of the color cast factor K by the following calculation:

$$D=\sqrt{(d_a-128)^2+(d_b-128)^2}$$
$$M=\sqrt{M_a^2+M_b^2},\qquad M_a=\frac{1}{w\,h}\sum_{i=1}^{h}\sum_{j=1}^{w}\lvert a(i,j)-d_a\rvert,\qquad M_b=\frac{1}{w\,h}\sum_{i=1}^{h}\sum_{j=1}^{w}\lvert b(i,j)-d_b\rvert$$
$$K=D/M$$

where D and M are calculated intermediate values;
step S703: if the value of the color cast factor K is greater than the color-cast threshold color_thres, judge that the image has a color cast;
step S704: compare |d_a| with |d_b|:
if |d_a| > |d_b|, examine d_a: if d_a > 128, judge the image reddish; otherwise, judge it greenish;
if |d_a| < |d_b|, examine d_b: if d_b > 128, judge the image yellowish; otherwise, judge it bluish.
6. The camera picture quality abnormality detection and identification method according to claim 1, wherein step S8 specifically comprises:
step S801: after graying any image img_resize, obtain the absolute differences between each pixel and its spatially adjacent pixels, with the calculation formula:

diff_src(i,j) = abs(I(i-1,j) - I(i,j)) + abs(I(i,j-1) - I(i,j))   (2)

where abs denotes the absolute value, i and j denote the i-th row and j-th column, respectively, and I denotes the corresponding pixel value;
step S802: sum all the values to obtain the total diff_src_sum:

$$\text{diff\_src\_sum}=\sum_{i=2}^{h}\sum_{j=2}^{w}\text{diff\_src}(i,j)$$

step S803: obtain the absolute differences of adjacent pixels at the corresponding spatial positions of the grayed Gaussian-blurred image img_gaussian of step S201, also using formula (2);
step S804: for each position, subtract the absolute value obtained in step S803 from the absolute value obtained in step S801, and sum the differences as follows to obtain diff_diff_sum:

$$\text{diff\_diff\_sum}=\sum_{i=2}^{h}\sum_{j=2}^{w}\bigl(\text{diff\_src}_{resize}(i,j)-\text{diff\_src}_{gaussian}(i,j)\bigr)$$

where diff_src_resize(i,j) denotes the absolute value at row i, column j obtained in step S801, and diff_src_gaussian(i,j) denotes the absolute value at row i, column j obtained in step S803;
step S805: calculate the value of diff_diff_sum/diff_src_sum; if the value is greater than a preset threshold sharpness_thres, judge that the image is blurred.
7. The camera picture quality abnormality detection and identification method according to claim 1, wherein step S9 specifically comprises:
step S901: after graying any image img_resize, obtain an image img_average through mean-filter denoising;
step S902: calculate the signal-to-noise ratio snr of the image:

$$snr=10\log_{10}\!\left(\frac{\sum_{i=1}^{h}\sum_{j=1}^{w}I_{resize}(i,j)^2}{\sum_{i=1}^{h}\sum_{j=1}^{w}\bigl(I_{resize}(i,j)-I_{average}(i,j)\bigr)^2}\right)$$

where I_resize(i,j) is the pixel value of the image img_resize at row i, column j, and I_average(i,j) is the pixel value of the image img_average at row i, column j;
step S903: when the signal-to-noise ratio snr is smaller than a preset threshold noise_thres, judge that the input image has obvious noise.
8. The camera picture quality abnormality detection and identification method according to claim 1, wherein step S10 specifically comprises:
step S1001: perform a discrete Fourier transform on any image img_resize;
step S1002: in the spectrogram obtained after the transform, count the total number of abnormal bright spots whose pixel values are greater than a preset threshold stripe_thres1 within the bands of preset width stripe_width centred on the horizontal and vertical centre lines;
step S1003: when the total number of abnormal bright spots is greater than a preset threshold stripe_thres2, judge that the image exhibits a banded-stripe abnormality.
9. The camera picture quality abnormality detection and identification method according to claim 1, wherein step S11 specifically comprises:
step S1101: extract spatially homogenized surf features from the current jitter reference frame img_jitter, the homogenized surf features applying spatial homogenization on top of the existing surf feature extraction: each layer of the pyramid constructed by the surf algorithm is divided into a number of grid cells whose total exceeds the number of required feature points, the key point with the highest response value is selected from each cell, and jitter_n feature points jitter_points1 are extracted by the surf algorithm;
step S1102: extract homogenized surf features from the current image img_resize, obtaining jitter_n feature points jitter_points2;
step S1103: match the feature points jitter_points1 and jitter_points2 by Hamming-distance brute-force matching and calculate the two-dimensional vector of the spatial coordinate difference of each matched pair; if the length of the vector is greater than a preset threshold jitter_thres, judge that jitter exists;
step S1104: push the corresponding two-dimensional vector into the queue of the corresponding frame, and perform trigonometric-function fitting on the horizontal and vertical coordinates of the queue to obtain the corresponding jitter amplitude, frequency and direction.
10. The camera picture quality abnormality detection and identification method according to claim 9, wherein step S12 specifically comprises:
step S1201: extract jitter_n feature points move_points of the current shift reference frame img_move in the same way as the homogenized surf feature extraction of step S1101;
step S1202: match the feature points move_points with the feature points jitter_points2 of step S1102, and count the number move_match_n of nearest matches whose feature-matching distance is smaller than a preset value move_thres1;
step S1203: if the count move_match_n is smaller than a preset threshold move_thres2, judge that severe displacement has occurred; otherwise, calculate the image-space position of each matched point;
step S1204: calculate the average distance between the image-space positions of the matched points; if the average distance is greater than a preset threshold move_thres3, judge that a shift has occurred, and calculate the shift distance and direction from the spatial coordinate vectors of the image-space positions.
CN202211349015.XA 2022-10-31 2022-10-31 Camera picture quality abnormality detection and identification method Active CN115514955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211349015.XA CN115514955B (en) 2022-10-31 2022-10-31 Camera picture quality abnormality detection and identification method


Publications (2)

Publication Number Publication Date
CN115514955A true CN115514955A (en) 2022-12-23
CN115514955B CN115514955B (en) 2023-11-14

Family

ID=84513109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211349015.XA Active CN115514955B (en) 2022-10-31 2022-10-31 Camera picture quality abnormality detection and identification method

Country Status (1)

Country Link
CN (1) CN115514955B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060239583A1 (en) * 2005-04-25 2006-10-26 Samsung Electronics Co., Ltd. Method and apparatus for adjusting brightness of image
EP2921989A1 (en) * 2014-03-17 2015-09-23 Université de Genève Method for object recognition and/or verification on portable devices
CN109102013A (en) * 2018-08-01 2018-12-28 重庆大学 A kind of improvement FREAK Feature Points Matching digital image stabilization method suitable for tunnel environment characteristic
CN115118934A (en) * 2022-06-28 2022-09-27 广州阿凡提电子科技有限公司 Live broadcast effect monitoring processing method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Zhiqiang: "Research on Key Technologies of a Video Quality Diagnosis System Based on Intelligent Image Analysis", Information & Communications, no. 12, pp. 140-142 *

Also Published As

Publication number Publication date
CN115514955B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
US8149336B2 (en) Method for digital noise reduction in low light video
US8279345B2 (en) System and method for random noise estimation in a sequence of images
KR100721543B1 (en) A method for removing noise in image using statistical information and a system thereof
KR100835380B1 (en) Method for detecting edge of an image and apparatus thereof and computer readable medium processing the method
WO2006022493A1 (en) Method for removing noise in image and system thereof
CN111612725B (en) Image fusion method based on contrast enhancement of visible light image
JP2004522372A (en) Spatio-temporal adaptive noise removal / high-quality image restoration method and high-quality image input device using the same
US6611295B1 (en) MPEG block detector
CN111489346B (en) Full-reference image quality evaluation method and system
Liu et al. A perceptually relevant approach to ringing region detection
CN112291551A (en) Video quality detection method based on image processing, storage device and mobile terminal
CN111741290B (en) Image stroboscopic detection method and device, storage medium and terminal
US20080239155A1 (en) Low Complexity Color De-noising Filter
CN111510709B (en) Image stroboscopic detection method and device, storage medium and terminal
CN110659627A (en) Intelligent video monitoring method based on video segmentation
CN115514955B (en) Camera picture quality abnormality detection and identification method
US6384872B1 (en) Method and apparatus for interlaced image enhancement
KR100869134B1 (en) Image processing apparatus and method
CN112804520B (en) High-speed monitoring video quality detection method
KR20220151130A (en) Image processing method and device, electronic equipment and medium
Wang et al. Improving visibility of a fast dehazing method
CN113542864B (en) Video splash screen area detection method, device and equipment and readable storage medium
CN117011288B (en) Video quality diagnosis method and system
CN113271457B (en) Video data abnormality determination method and apparatus, storage medium, and control apparatus
KR100485593B1 (en) A method for processing consecutive image input and a system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant