CN115514955B - Camera picture quality abnormality detection and identification method - Google Patents


Info

Publication number
CN115514955B
CN115514955B (application number CN202211349015.XA)
Authority
CN
China
Prior art keywords
image
img
value
jitter
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211349015.XA
Other languages
Chinese (zh)
Other versions
CN115514955A (en)
Inventor
张何伟
琚午阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weihai Ruixin Intelligent Technology Co ltd
Original Assignee
Weihai Ruixin Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weihai Ruixin Intelligent Technology Co ltd filed Critical Weihai Ruixin Intelligent Technology Co ltd
Priority to CN202211349015.XA priority Critical patent/CN115514955B/en
Publication of CN115514955A publication Critical patent/CN115514955A/en
Application granted granted Critical
Publication of CN115514955B publication Critical patent/CN115514955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting and identifying abnormal picture quality of a camera, which comprises the following steps: S1: reading video frames from a video stream and setting a shift reference frame and a jitter reference frame; S2: detecting the signal input by the camera; S3: detecting partial loss of the picture in the camera input; S4: performing picture-freezing detection on the camera input according to the current shift reference frame and the current jitter reference frame; S5: performing occlusion detection on the camera input; S6: detecting the brightness of the camera input according to the converted Lab color space image img_Lab; S7: performing color cast detection on the camera input according to the Lab color space image img_Lab; S8: performing definition detection on the camera input according to the Gaussian blur image img_gaussian; S9: performing noise detection on the camera input; S10: performing banded-stripe detection on the camera input; S11: performing jitter detection on the camera input according to the current jitter reference frame; S12: performing shift detection on the camera input according to the current shift reference frame.

Description

Camera picture quality abnormality detection and identification method
Technical Field
The invention relates to the field of camera anomaly detection, in particular to a camera picture quality anomaly detection and identification method.
Background
Nowadays, with a huge and ever-growing number of video monitoring devices, higher requirements are placed on the operation and maintenance of the monitoring devices and their video images. As the scale of video monitoring systems gradually expands, more cameras are used in each system, multiplying the workload of traditional management staff. In addition, because of complex external environments, installation quality problems or video quality degradation during transmission, camera abnormalities often cannot be identified by the naked eye, and 7×24-hour online real-time monitoring is difficult to carry out manually. Therefore, having the monitoring system automatically detect abnormal camera conditions is of great significance: a video anomaly detection system improves the working efficiency of the monitoring system and reduces the workload of monitoring staff. Common video quality problems include camera signal loss, local picture loss, picture freezing, camera occlusion, abnormal picture brightness, picture color cast, picture blurring, picture noise or banding, camera shake, camera shift, etc.
The application with publication number CN 112804520A discloses a high-speed monitoring video quality detection method (hereinafter referred to as prior art 1). As shown in fig. 1, which is the detection flow chart of prior art 1, it comprises black screen detection, occlusion detection, blur detection, brightness anomaly detection and chromaticity anomaly detection, wherein: the black screen detection grays the image, computes each pixel's gray value through the formula Gray = R×0.299 + G×0.587 + B×0.114, and calls a pixel whose gray value is smaller than T1 a dark pixel; it then counts the proportion of dark pixels among all pixels, rate = blackNum/totalNum, where blackNum is the total number of dark pixels, totalNum is the total number of pixels, totalNum = gray.rows × gray.cols, gray.rows is the image height, gray.cols is the image width, and T1 is an empirical value for judging dark pixels; a contrast threshold T is set and the ratio of dark pixels to total pixels is compared with T: if the ratio is greater than T, the current image is a black screen, otherwise it is not. The occlusion detection grays the image, extracts edge feature information through the Laplace operator, computes the contours of the darker regions in the gray-level image (pixels with gray values smaller than T1, called dark pixels, form the darker regions), extracts the contours whose area is larger than a threshold, computes the mean and variance of the Laplace edges of the extracted contours, and sets thresholds on the results to judge whether occlusion exists. The blur detection performs a three-level Haar wavelet decomposition on the gray-level image, performs edge detection on the frequency-domain images after the wavelet transform, squares the pixel matrix of each wavelet-transform level, accumulates and averages the matrix pixels and assigns them to a new matrix, called a distance matrix; it then traverses the distance matrix of each wavelet level with an m×n window, where m is the number of rows and n the number of columns of the distance matrix, obtains the distance maximum matrix of each level, computes the mean and variance of each level's distance maximum matrix, obtains the mean of the distance maximum matrices over the three wavelet levels, sets per-level threshold conditions according to the three wavelet-domain distance maximum matrices, traverses the distance maximum matrices, counts the number of pixels that are simultaneously below the threshold in all three distance maximum matrices, denoted nEdge, counts the number of pixels satisfying matrix1 > matrix2 > matrix3, denoted nDa, and decides whether the image is blurred according to a threshold set by the blur system. The brightness anomaly detection computes the mean and variance of the gray-level image; if the mean deviates from the mean point and the variance is smaller than the set normal variance, the image brightness is judged abnormal. The chromaticity anomaly detection converts the RGB space of the image into the Lab space, computes the mean and variance of the image on the a and b components, judges whether the image has a color cast by way of set thresholds, and judges which color the cast is according to the sign of the offsets of the a and b components.
However, in the above scheme, a lost signal does not necessarily produce a completely black screen: the picture after signal loss usually shows an on-screen caption instead, so the applicability of black-screen detection is rather narrow. The Laplace operator used to extract edge feature information during occlusion detection is relatively sensitive to noise. Moreover, the scheme considers neither noise or stripe abnormalities in the picture nor jitter or shift of the monitoring camera's picture.
The application with publication number CN 112291551A discloses a video quality detection method, storage device and mobile terminal based on image processing (hereinafter referred to as prior art 2). As shown in fig. 2, which is the detection flow chart of prior art 2, a standard image is first selected and preprocessed to obtain the values of a standard frame; frames are then continuously read from the real-time video, each frame undergoes local image processing, the result is compared with the standard frame, various detections are finally performed on the processed local images, and all detection results are combined into a video quality result. However, once the standard image is selected, if the reference image is not updated in time as the ambient light becomes noticeably darker or brighter, the detection indexes of the current image will differ greatly from those computed on the standard image, and false detections can occur.
Therefore, a more efficient and more comprehensive method for detecting and identifying camera picture quality abnormalities is needed.
Disclosure of Invention
In order to solve the above problems, the invention provides a method for detecting and identifying camera picture quality abnormalities, which processes the real-time video stream of a camera and detects 11 abnormal conditions: camera signal loss, local picture loss, picture freezing, camera occlusion, picture brightness abnormality, picture color cast, picture blurring, picture noise, banded stripes, camera shake and camera shift, with high detection and identification efficiency.
In order to achieve the above object, the present invention provides a method for detecting and identifying an abnormal image quality of a camera, comprising:
step S1: reading video frames from the video stream, sequentially decoding the video stream transmitted by the camera in real time into RGB three-channel image frames img, and scaling each image frame img proportionally into an image img_resize, where the width of any image img_resize is w and the height is h; copying the first-frame image img_resize as the initial shift reference frame img_move and jitter reference frame img_jitter; then updating once every n seconds, taking the image img_resize at the corresponding moment as the updated shift reference frame img_move and the image img_resize of the previous frame as the updated jitter reference frame img_jitter;
step S2: the signal detection is carried out on the camera input, and specifically comprises the following steps:
step S201: carrying out Gaussian blur on any image img_resize to obtain a Gaussian blurred image img_gaussian;
step S202: edge extraction is carried out on the Gaussian blur image img_gaussian through a Canny operator to obtain an image img_canny;
step S203: counting the number connected_num of connected regions in the image img_canny and calculating the value of 1 − connected_num/(w×h); if the calculated value is greater than a preset threshold signal_thres, judging that there is no signal and ending the detection directly; otherwise, judging that there is a signal and continuing the detection;
step S3: detecting the loss of a picture part of the camera input;
step S4: performing picture freezing detection on camera input according to the current shift reference frame img_move and the current jitter reference frame img_jitter;
step S5: performing occlusion detection on the camera input, namely converting any image img_resize into a Lab color space image img_Lab, calculating the area block_s1 of the region whose gray value is smaller than a preset threshold block_thres1, finding the corresponding edges in the image img_canny of step S202, counting the area block_s2 of the corresponding edge region, and finally calculating the value of block_s2/block_s1; if the calculated value is greater than a preset threshold block_thres2, judging that an object is occluding the camera;
step S6: detecting brightness of camera input according to the Lab color space image img_Lab;
step S7: performing color cast detection on camera input according to the Lab color space image img_Lab;
step S8: performing definition detection on camera input according to Gaussian blur image img_gaussian;
step S9: noise detection is carried out on the camera input;
step S10: performing banded-stripe detection on the camera input;
step S11: performing jitter detection on camera input according to a current jitter reference frame img_jitter;
step S12: and performing shift detection on camera input according to the current shift reference frame img_move.
In an embodiment of the present invention, step S3 specifically includes:
step S301: counting the area partLost_s1 of the connected region in any image img_resize in which the pixel value difference between adjacent pixels does not exceed a preset threshold partLost_thres1;
step S302: calculating the value of partLost_s1/(w×h); if the result is greater than a preset threshold partLost_thres2, judging that partial picture loss exists.
In an embodiment of the present invention, step S4 specifically includes:
step S401: respectively differencing a current frame to be detected with a current shift reference frame img_move and a jitter reference frame img_jitter to obtain two differential images, namely diff_img_move and diff_img_jitter;
step S402: counting the number freeze_sum of pixels whose value is 0 in both differential images, calculating the value of freeze_sum/(w×h), and judging that the picture is frozen if the result is greater than a preset threshold freeze_thres.
In an embodiment of the present invention, step S6 specifically includes:
step S601: counting the area bright_s1 of which the L channel value in any image img_Lab is smaller than a preset threshold bright_thres1;
step S602: counting the area bright_s2 of which the L channel value in any image img_Lab is larger than a preset threshold bright_thres2;
step S603: the values of bright_s1/(w×h) and bright_s2/(w×h) are calculated, respectively:
if bright_s1/(w×h) is greater than a preset threshold bright_thres3, determining that the image is dark;
if bright_s2/(w×h) is greater than the preset threshold bright_thres4, it is determined that the image is bright.
In an embodiment of the present invention, step S7 specifically includes:
step S701: calculating the pixel statistical mean values d_a and d_b of the a channel and the b channel of any image img_Lab by formula (1):
d_a = (1/(w×h))·Σ_i Σ_j a(i,j),  d_b = (1/(w×h))·Σ_i Σ_j b(i,j)  (1)
where a(i,j) and b(i,j) are the pixel values in row i, column j of the a channel and b channel of img_Lab respectively;
step S702: obtaining the value of the color cast factor K by the following calculation:
K = D/M
where D and M are calculated intermediate values;
step S703: if the value of the color cast factor K is greater than the color cast threshold color_thres, judging that the image has a color cast;
step S704: comparing the magnitudes of |d_a| and |d_b|:
if |d_a| > |d_b|, examine d_a: if d_a > 128, the image is judged reddish, otherwise greenish;
if |d_a| < |d_b|, examine d_b: if d_b > 128, the image is judged yellowish, otherwise bluish.
In an embodiment of the present invention, step S8 specifically includes:
step S801: obtaining the absolute value of the difference between every two adjacent pixels of any image img_resize after graying, with the calculation formula:
diff_src = abs(I(i−1,j) − I(i,j)) + abs(I(i,j−1) − I(i,j))  (2)
where abs denotes the absolute value, i and j denote the i-th row and j-th column respectively, and I denotes the corresponding pixel value;
step S802: summing all the values to obtain the total diff_src_sum;
step S803: obtaining the absolute values of the adjacent-pixel differences at the corresponding positions of the grayed Gaussian blur image img_gaussian of step S201, with the same calculation formula as formula (2);
step S804: for each position, calculating the difference between the absolute value obtained in step S801 and the absolute value obtained in step S803, and summing as follows:
diff_diff_sum = Σ_i Σ_j (diff_src_resize(i,j) − diff_src_gaussian(i,j))
where diff_src_resize(i,j) denotes the absolute value for row i, column j obtained in step S801, and diff_src_gaussian(i,j) denotes the absolute value for row i, column j obtained in step S803;
step S805: calculating the value of diff_diff_sum/diff_src_sum; if the value is greater than a preset threshold sharpness_thres, the image is judged blurred.
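By way of non-limiting illustration, steps S801 to S805 can be sketched in numpy. A separable 3×3 Gaussian kernel stands in for the img_gaussian of step S201, and the function returns the ratio of step S805 rather than a final decision, since sharpness_thres is left as a preset value in the text:

```python
import numpy as np

def blur3(gray):
    """3x3 Gaussian blur (separable kernel [1 2 1]/4) standing in for the
    img_gaussian of step S201; edge padding replicates border pixels."""
    p = np.pad(gray.astype(float), 1, mode="edge")
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    h = k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]
    return k[0] * h[:-2, :] + k[1] * h[1:-1, :] + k[2] * h[2:, :]

def neighbour_diff_sum(gray):
    """Formula (2) summed over the frame:
    sum of |I(i-1,j)-I(i,j)| + |I(i,j-1)-I(i,j)|."""
    g = gray.astype(float)
    return np.abs(np.diff(g, axis=0)).sum() + np.abs(np.diff(g, axis=1)).sum()

def sharpness_ratio(gray):
    """Steps S801-S805: diff_diff_sum / diff_src_sum, i.e. how much of the
    neighbour-difference energy is destroyed by blurring the frame."""
    diff_src_sum = neighbour_diff_sum(gray)
    diff_diff_sum = diff_src_sum - neighbour_diff_sum(blur3(gray))
    return diff_diff_sum / (diff_src_sum + 1e-9)
```

A frame full of fine detail loses much of its neighbour-difference energy when re-blurred, while an already smooth frame barely changes, so the ratio separates the two.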
In an embodiment of the present invention, step S9 specifically includes:
step S901: after graying any image img_resize, denoising it through mean filtering to obtain the image img_average;
step S902: calculating the signal-to-noise ratio snr of the image:
snr = 10·log10( Σ_i Σ_j I_resize(i,j)² / Σ_i Σ_j (I_resize(i,j) − I_average(i,j))² )
where I_resize(i,j) is the pixel value of image img_resize in row i, column j, and I_average(i,j) is the pixel value of image img_average in row i, column j;
step S903: when the signal-to-noise ratio snr is smaller than a preset threshold noise_thres, judging that significant noise exists in the input image.
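Steps S901 to S903 can be sketched with a 3×3 mean filter; the logarithmic SNR definition and the noise_thres default below are assumptions consistent with, but not fixed by, the text:

```python
import numpy as np

def mean_filter3(gray):
    """3x3 mean filter standing in for the denoising of step S901."""
    p = np.pad(gray.astype(float), 1, mode="edge")
    out = np.zeros(gray.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / 9.0

def snr_db(gray):
    """Step S902 with a conventional SNR definition (an assumption): signal
    power over the power of the residual removed by the mean filter."""
    g = gray.astype(float)
    noise = g - mean_filter3(g)
    return 10.0 * np.log10(np.square(g).sum() / (np.square(noise).sum() + 1e-9))

def noisy(gray, noise_thres=20.0):
    """Step S903: significant noise when snr falls below noise_thres
    (the default is a placeholder for the preset value)."""
    return snr_db(gray) < noise_thres
```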
In an embodiment of the present invention, step S10 specifically includes:
step S1001: performing discrete Fourier transform on any image img_resize;
step S1002: counting the total number of abnormal bright spots whose pixel value is greater than a preset threshold strip_thres1 within bands of preset width strip_width centred on the horizontal and vertical centre lines of the spectrogram obtained after the transform;
step S1003: when the total number of abnormal bright spots is larger than a preset threshold value strip_thres2, judging that the image has the abnormal condition of the banded stripes.
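Steps S1001 to S1003 can be sketched with numpy's FFT; normalising the log-magnitude spectrum to [0, 1], excluding the always-bright DC block, and the threshold defaults are all assumptions, since the patent only names them as preset values:

```python
import numpy as np

def spectrum_hotspots(gray, strip_width=2, strip_thres1=0.25):
    """Steps S1001-S1002: count abnormal bright spots in bands of width
    strip_width around the central horizontal and vertical lines of the
    shifted spectrum of the frame."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    mag = np.log1p(np.abs(f))
    mag = mag / (mag.max() + 1e-9)          # normalise to [0, 1]
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    cross = np.zeros((h, w), dtype=bool)
    cross[cy - strip_width:cy + strip_width + 1, :] = True
    cross[:, cx - strip_width:cx + strip_width + 1] = True
    # exclude the DC block, which is bright in every image
    cross[cy - strip_width:cy + strip_width + 1,
          cx - strip_width:cx + strip_width + 1] = False
    return int((mag[cross] > strip_thres1).sum())

def banding_detected(gray, strip_thres2=10):
    """Step S1003: banded-stripe abnormality when the hotspot count
    exceeds strip_thres2."""
    return spectrum_hotspots(gray) > strip_thres2
```

Periodic horizontal or vertical stripes concentrate their energy on the central cross of the spectrum, which is what the hotspot count picks up.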
In an embodiment of the present invention, step S11 specifically includes:
step S1101: extracting homogenized surf features from the current jitter reference frame img_jitter; the homogenized surf features add a spatial homogenization step to the existing surf feature extraction, namely each level of the pyramid constructed by the surf algorithm is divided into a number of grids, the total number of grids being larger than the number of required feature points, and the key point with the highest response value is selected from each grid, so that the surf algorithm extracts jitter_n feature points jitter_points1;
step S1102: extracting homogenized surf features from the current image img_resize, obtaining jitter_n feature points jitter_points2;
step S1103: calculating the two-dimensional vectors of the spatial coordinate differences of the matched points between feature points jitter_points1 and jitter_points2 using brute-force Hamming-distance matching, and judging that jitter exists if the length of the two-dimensional vector is greater than a preset threshold jitter_thres;
step S1104: and sending the corresponding two-dimensional vector into a queue of the corresponding frame, and performing trigonometric function fitting on the abscissa of the queue to obtain the corresponding jitter amplitude, frequency and direction.
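Downstream of the surf extraction and Hamming matching of steps S1101 to S1103 (assumed already done), the jitter decision and the analysis of step S1104 reduce to statistics over matched coordinates; estimating the dominant shake frequency with an FFT instead of an explicit trigonometric fit is a simplification of ours, not the patent's wording:

```python
import numpy as np

def jitter_vector(points_ref, points_cur, jitter_thres=3.0):
    """Step S1103, given matched feature coordinates from the jitter
    reference frame (jitter_points1) and the current frame (jitter_points2):
    average the per-match displacements into one 2-D jitter vector and flag
    jitter when its length exceeds jitter_thres (default is illustrative)."""
    d = np.asarray(points_cur, float) - np.asarray(points_ref, float)
    vec = d.mean(axis=0)                     # average displacement (dx, dy)
    return vec, bool(np.hypot(vec[0], vec[1]) > jitter_thres)

def jitter_frequency(history, fps):
    """Step S1104 (sketch): dominant shake frequency from a queue of
    per-frame displacements along one axis, via the FFT."""
    x = np.asarray(history, float) - np.mean(history)
    spec = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spec[1:])) + 1         # skip the DC bin
    return k * fps / len(x)
```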
In an embodiment of the present invention, step S12 specifically includes:
step S1201: extracting jitter_n feature points move_points of the current shift reference frame img_move in the same way as the homogenized surf feature is extracted in the step S1101;
step S1202: matching the feature points move_points with the feature points jitter_points2 of step S1102, and computing the nearest-neighbour matches whose feature matching distance is smaller than a preset value move_thres1, their number denoted move_match_n;
step S1203: if the number of points move_match_n is smaller than a preset threshold move_thres2, judging that a serious shift has occurred; otherwise, calculating the image space position of each matching point;
step S1204: calculating the average distance between the image space positions of the matching points; if the average distance is greater than a preset threshold move_thres3, judging that a shift has occurred, and calculating the shift distance and direction from the spatial coordinate vectors of the image space positions.
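Steps S1202 to S1204 after descriptor matching can be sketched as follows; the inputs are assumed to be the surviving matches (already filtered by move_thres1), and the threshold defaults are illustrative:

```python
import numpy as np

def shift_status(match_dists, move_pts, cur_pts,
                 move_thres2=8, move_thres3=10.0):
    """Steps S1202-S1204: match_dists are feature-space distances of the
    surviving nearest-neighbour matches, move_pts/cur_pts their image
    coordinates in the shift reference frame and the current frame. Too few
    matches means the view changed entirely ('serious shift'); otherwise the
    mean spatial displacement decides, and its mean vector gives the shift
    distance and direction."""
    move_match_n = len(match_dists)
    if move_match_n < move_thres2:
        return "serious shift", None
    d = np.asarray(cur_pts, float) - np.asarray(move_pts, float)
    mean_dist = float(np.linalg.norm(d, axis=1).mean())
    if mean_dist > move_thres3:
        return "shift", d.mean(axis=0)       # shift direction vector
    return "ok", None
```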
Compared with the prior art, the camera picture quality abnormality detection and identification method of the invention can detect and analyze various abnormal conditions of monitoring cameras. When the number of monitoring cameras is huge, it can greatly save the human resources spent on manually inspecting camera pictures, offers higher temporal and spatial freedom than manual inspection, realizes full-period seamless detection and alarming, and can meet the monitoring requirements of various scenes.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the detection of prior art 1;
FIG. 2 is a flow chart of the detection of prior art 2;
FIG. 3 is a flowchart of video anomaly detection according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
Fig. 3 is a flowchart of video anomaly detection according to an embodiment of the present invention, as shown in fig. 3, the present embodiment provides a method for detecting and identifying a camera image quality anomaly, which includes:
step S1: the video frames are read from the video stream, the video stream transmitted by the camera in real time is sequentially decoded into RGB three-channel image frames img, and each image frame img is scaled proportionally into an image img_resize; the size of the image img_resize can be set according to the actual processing performance (the higher the required processing frame rate, the smaller the scaled size) and is not particularly limited here, where the width of any image img_resize is w and the height is h; at this point the first-frame image img_resize is copied as the initial shift reference frame img_move and jitter reference frame img_jitter, and thereafter, once every n seconds, the image img_resize at the corresponding moment is taken as the updated shift reference frame img_move and the image img_resize of the previous frame as the updated jitter reference frame img_jitter;
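By way of non-limiting illustration, the reference-frame bookkeeping of step S1 can be sketched as follows; the class name, the n_seconds default and the use of numpy arrays in place of decoded video frames are illustrative assumptions, not part of the application:

```python
import numpy as np

class ReferenceFrames:
    """Maintains the shift reference frame (img_move) and the jitter
    reference frame (img_jitter) of step S1: img_move is refreshed every
    n seconds with the frame at that moment, while img_jitter always holds
    the immediately preceding frame."""

    def __init__(self, first_frame, n_seconds=5.0):
        self.img_move = first_frame.copy()    # shift reference frame
        self.img_jitter = first_frame.copy()  # jitter reference frame
        self.n_seconds = n_seconds
        self._last_move_update = 0.0
        self._prev_frame = first_frame.copy()

    def update(self, frame, timestamp):
        # The previous frame becomes the new jitter reference frame.
        self.img_jitter = self._prev_frame
        # Every n seconds the current frame becomes the shift reference frame.
        if timestamp - self._last_move_update >= self.n_seconds:
            self.img_move = frame.copy()
            self._last_move_update = timestamp
        self._prev_frame = frame.copy()
```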
step S2: the signal detection is carried out on the camera input, and specifically comprises the following steps:
step S201: carrying out Gaussian blur on any image img_resize to obtain a Gaussian blurred image img_gaussian;
step S202: edge extraction is carried out on the Gaussian blur image img_gaussian through a Canny operator to obtain an image img_canny;
step S203: counting the number connected_num of connected regions in the image img_canny and calculating the value of 1 − connected_num/(w×h); if the calculated value is greater than a preset threshold signal_thres, judging that there is no signal and ending the detection directly; otherwise, judging that there is a signal and continuing the detection;
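The decision of step S203 can be illustrated with a plain-Python sketch. The Canny edge extraction of steps S201 and S202 is assumed to have already produced a binary edge map, and 8-connectivity plus the signal_thres default are assumptions the application does not fix:

```python
import numpy as np
from collections import deque

def count_edge_components(edges):
    """Count 8-connected components of non-zero pixels in a binary edge map
    (a stand-in for the Canny output img_canny of step S202)."""
    h, w = edges.shape
    seen = np.zeros((h, w), dtype=bool)
    count = 0
    for y in range(h):
        for x in range(w):
            if edges[y, x] and not seen[y, x]:
                count += 1
                q = deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w \
                               and edges[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                q.append((ny, nx))
    return count

def no_signal(edges, signal_thres=0.999):
    """Step S203: a frame with almost no edge structure yields a value of
    1 - connected_num/(w*h) close to 1, which is read as 'no signal'."""
    h, w = edges.shape
    return 1.0 - count_edge_components(edges) / (w * h) > signal_thres
```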
step S3: detecting the loss of a picture part of the camera input;
in this embodiment, step S3 specifically includes:
step S301: counting the area partLost_s1 of the connected region in any image img_resize in which the pixel value difference between adjacent pixels does not exceed a preset threshold partLost_thres1;
step S302: calculating the value of partLost_s1/(w×h); if the result is greater than a preset threshold partLost_thres2, judging that partial picture loss exists.
Step S4: performing picture freezing detection on camera input according to the current shift reference frame img_move and the current jitter reference frame img_jitter;
in this embodiment, step S4 specifically includes:
step S401: respectively differencing a current frame to be detected with a current shift reference frame img_move and a jitter reference frame img_jitter to obtain two differential images, namely diff_img_move and diff_img_jitter;
step S402: counting the number freeze_sum of pixels whose value is 0 in both differential images, calculating the value of freeze_sum/(w×h), and judging that the picture is frozen if the result is greater than a preset threshold freeze_thres.
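Steps S401 and S402 can be sketched in numpy; reading freeze_sum as the count of pixels that are zero in both difference images is one plausible interpretation of the text, and the freeze_thres default is illustrative:

```python
import numpy as np

def frame_frozen(img, img_move, img_jitter, freeze_thres=0.95):
    """Steps S401-S402: difference the current frame against both the shift
    reference frame and the jitter reference frame, then count pixels that
    are unchanged with respect to both."""
    h, w = img.shape[:2]
    diff_img_move = np.abs(img.astype(int) - img_move.astype(int))
    diff_img_jitter = np.abs(img.astype(int) - img_jitter.astype(int))
    freeze_sum = int(np.sum((diff_img_move == 0) & (diff_img_jitter == 0)))
    return freeze_sum / (w * h) > freeze_thres
```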
Step S5: perform occlusion detection on the camera input. Specifically, convert any image img_resize into a Lab (a color model) color space image img_Lab (any existing conversion method can be used; the method is not limited here), calculate the area block_s1 of the region whose gray value is smaller than a preset threshold block_thres1, find the corresponding edges in the image img_canny of step S202, count the area block_s2 of the corresponding edge region, and finally calculate the value of block_s2/block_s1; if the calculated value is greater than the preset threshold block_thres2, judge that an object is occluding the camera;
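Taken literally, step S5 can be sketched as follows; using the L channel as the gray values and both threshold defaults are illustrative assumptions:

```python
import numpy as np

def occlusion_detected(L_channel, img_canny, block_thres1=40, block_thres2=0.05):
    """Step S5 as written: block_s1 is the dark area (values below
    block_thres1), block_s2 the edge pixels falling inside that area;
    occlusion is flagged when block_s2/block_s1 exceeds block_thres2."""
    dark = L_channel < block_thres1
    block_s1 = int(dark.sum())
    if block_s1 == 0:
        return False            # no dark region, nothing can be occluding
    block_s2 = int((img_canny.astype(bool) & dark).sum())
    return block_s2 / block_s1 > block_thres2
```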
step S6: detecting brightness of camera input according to the Lab color space image img_Lab;
in this embodiment, step S6 specifically includes:
step S601: counting the area bright_s1 of which the L channel value in any image img_Lab is smaller than a preset threshold bright_thres1;
step S602: counting the area bright_s2 of which the L channel value in any image img_Lab is larger than a preset threshold bright_thres2;
step S603: the values of bright_s1/(w×h) and bright_s2/(w×h) are calculated, respectively:
if bright_s1/(w×h) is greater than a preset threshold bright_thres3, determining that the image is dark;
if bright_s2/(w×h) is greater than the preset threshold bright_thres4, it is determined that the image is bright.
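Steps S601 to S603 reduce to two threshold counts over the L channel; the numeric defaults below are placeholders for the patent's preset thresholds:

```python
import numpy as np

def brightness_check(L_channel, bright_thres1=30, bright_thres2=220,
                     bright_thres3=0.6, bright_thres4=0.6):
    """Steps S601-S603 on the L channel of img_Lab: returns a
    (too_dark, too_bright) pair of flags."""
    h, w = L_channel.shape
    bright_s1 = int((L_channel < bright_thres1).sum())   # too-dark pixels
    bright_s2 = int((L_channel > bright_thres2).sum())   # too-bright pixels
    too_dark = bright_s1 / (w * h) > bright_thres3
    too_bright = bright_s2 / (w * h) > bright_thres4
    return too_dark, too_bright
```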
Step S7: performing color cast detection on camera input according to the Lab color space image img_Lab;
in this embodiment, step S7 specifically includes:
step S701: calculate the pixel statistical means d_a and d_b of the a channel and the b channel in any image img_Lab by formula (1):
d_a = (1/(w×h))·Σ_{i=1}^{h} Σ_{j=1}^{w} a(i,j), d_b = (1/(w×h))·Σ_{i=1}^{h} Σ_{j=1}^{w} b(i,j) (1)
wherein a(i,j) and b(i,j) are the pixel values in the i-th row and j-th column of the a channel and the b channel of img_Lab, respectively;
step S702: the value of the color cast factor K is obtained by the following calculation:
K=D/M
wherein D and M are calculated intermediate values, which can be regarded approximately as a statistical mean and variance, respectively;
step S703: if the value of the color cast factor K is larger than the color cast threshold value color_thres, judging that the image has color cast;
step S704: compare |d_a| and |d_b|:
if |d_a| > |d_b|, check the value of d_a: if |d_a| > 128, determine that the image is reddish; otherwise, determine that the image is greenish;
if |d_a| < |d_b|, check the value of d_b: if |d_b| > 128, determine that the image is yellowish; otherwise, determine that the image is bluish.
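The text leaves D and M unstated. A common reconstruction (as in equivalent-circle color-cast detection) takes D as the distance of the mean chroma from the neutral value 128 and M as the average spread of the chroma around its own mean, so K = D/M grows when the whole image drifts off neutral. The definitions of D and M below are therefore assumptions, not the method's exact formulas:

```python
import numpy as np

def color_cast_factor(a, b):
    """Assumed reconstruction of steps S701-S704 on Lab a/b channels
    (8-bit scale, neutral at 128). Returns (K, cast_label)."""
    d_a = a.mean()                                  # formula (1): channel means
    d_b = b.mean()
    D = np.hypot(d_a - 128.0, d_b - 128.0)          # drift from neutral (assumed)
    M = np.hypot(np.abs(a - d_a).mean(),
                 np.abs(b - d_b).mean()) + 1e-9     # chroma spread (assumed)
    K = D / M
    # step S704: the dominant channel decides the cast direction
    if abs(d_a - 128.0) >= abs(d_b - 128.0):
        cast = "red" if d_a > 128 else "green"
    else:
        cast = "yellow" if d_b > 128 else "blue"
    return K, cast

def has_color_cast(a, b, color_thres=1.5):
    """Step S703: report the cast direction only when K exceeds the threshold."""
    K, cast = color_cast_factor(a, b)
    return cast if K > color_thres else None
```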
Step S8: performing definition detection on camera input according to Gaussian blur image img_gaussian;
in this embodiment, step S8 specifically includes:
step S801: acquire the absolute value of the difference between every two spatially adjacent pixels after any image img_resize is grayed, with the calculation formula:
diff_src=abs(I(i-1,j)-I(i,j))+abs(I(i,j-1)-I(i,j)) (2)
wherein abs is the absolute value operator, i and j respectively represent the i-th row and j-th column, and I represents the corresponding pixel value; for example, I(i,j) is the pixel value of the image in row i, column j;
step S802: sum all the values to obtain the total diff_src_sum,
step S803: acquiring an absolute value of a neighboring pixel difference of a space corresponding position after the Gaussian blur image img_gaussian of the corresponding image in the step S201 is grayed, wherein the calculation formula is the same as the formula (2);
step S804: for each position, calculate the difference between the absolute value obtained in step S801 and the absolute value obtained in step S803, and obtain diff_diff_sum by summing over all positions:
diff_diff_sum = Σ_{i,j} ( diff_src_resize(i,j) − diff_src_gaussian(i,j) )
wherein diff_src_resize(i,j) denotes the absolute value for the i-th row and j-th column obtained in step S801, and diff_src_gaussian(i,j) denotes the absolute value for the i-th row and j-th column obtained in step S803;
step S805: the value of diff_diff_sum/diff_src_sum is calculated, and if the value is greater than a preset threshold sharpness_thres, the image blurring is determined.
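Steps S801-S805 can be sketched as below. A 3×3 mean blur stands in for the Gaussian blur of step S201, and the function returns the ratio so the threshold comparison of S805 stays explicit; note that under this ratio a sharp input loses a larger fraction of its neighbour-difference energy when blurred than an already smooth one:

```python
import numpy as np

def adjacency_diff_sum(gray):
    """Sum of formula (2), |I(i-1,j)-I(i,j)| + |I(i,j-1)-I(i,j)|,
    over all valid positions."""
    g = gray.astype(np.float64)
    return (np.abs(g[:-1, :] - g[1:, :]).sum()
            + np.abs(g[:, :-1] - g[:, 1:]).sum())

def mean_blur3(gray):
    """3x3 mean filter with edge padding (Gaussian-blur stand-in)."""
    g = np.pad(gray.astype(np.float64), 1, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            out += g[di:di + h, dj:dj + w]
    return out / 9.0

def sharpness_ratio(gray):
    """Steps S802-S805: relative drop of neighbour differences under blur;
    compare the result against sharpness_thres as in step S805."""
    diff_src_sum = adjacency_diff_sum(gray)
    diff_diff_sum = diff_src_sum - adjacency_diff_sum(mean_blur3(gray))
    return diff_diff_sum / diff_src_sum
```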
Step S9: noise detection is carried out on the camera input;
in this embodiment, step S9 specifically includes:
step S901: after any image img_resize is grayed, denoising is performed by mean filtering to obtain an image img_average; any existing mean filtering method may be adopted, and this embodiment does not limit it;
step S902: calculate the signal-to-noise ratio snr of the image, wherein I_resize(i,j) is the pixel value of the image img_resize in the i-th row and j-th column, and I_average(i,j) is the pixel value of the image img_average in the i-th row and j-th column;
step S903: when the signal-to-noise ratio snr is smaller than a preset threshold noise_thres, it is determined that significant noise exists in the input image.
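The snr formula itself is not reproduced in the text; the sketch below assumes the standard power-ratio form 10·log10(Σ I_resize² / Σ (I_resize − I_average)²) and uses a 3×3 mean filter for the denoising of step S901:

```python
import numpy as np

def mean_filter3(gray):
    """3x3 mean filter with edge padding (step S901 denoising)."""
    g = np.pad(gray.astype(np.float64), 1, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            out += g[di:di + h, dj:dj + w]
    return out / 9.0

def snr_db(img_resize):
    """Assumed step-S902 SNR: signal power over residual (noise) power, in dB."""
    g = img_resize.astype(np.float64)
    img_average = mean_filter3(g)
    noise_power = ((g - img_average) ** 2).sum() + 1e-12  # avoid log of zero
    return 10.0 * np.log10((g ** 2).sum() / noise_power)
```

A clean gradient barely changes under the mean filter (high snr), while added noise is largely removed by it and shows up as residual power (low snr), matching the step-S903 comparison.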
Step S10: carrying out banded stripe detection on camera input;
in this embodiment, step S10 specifically includes:
step S1001: performing a Discrete Fourier Transform (DFT) on any image img_resize;
step S1002: counting the total number of abnormal bright spots whose pixel value is greater than a preset threshold strip_thres1 in the regions of width strip_width centered on the horizontal and vertical center lines of the spectrogram obtained after the transform;
step S1003: when the total number of abnormal bright spots is larger than a preset threshold value strip_thres2, judging that the image has the abnormal condition of the banded stripes.
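Steps S1001-S1003 can be sketched with NumPy's FFT. Two details are assumptions here: the threshold strip_thres1 is applied to the log-magnitude of the centred spectrum rather than to raw spectrum values, and the DC block at the centre is excluded so it never counts as an anomaly:

```python
import numpy as np

def stripe_anomaly(gray, strip_width=3, strip_thres1=4.0, strip_thres2=1):
    """Steps S1001-S1003: count bright spots in bands of half-width
    strip_width around the centre lines of the shifted DFT spectrum;
    thresholds are illustrative."""
    h, w = gray.shape
    spec = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    mag = np.log1p(np.abs(spec))                            # log-magnitude
    cy, cx = h // 2, w // 2
    band = np.zeros((h, w), dtype=bool)
    band[cy - strip_width:cy + strip_width + 1, :] = True   # horizontal centre band
    band[:, cx - strip_width:cx + strip_width + 1] = True   # vertical centre band
    band[cy - strip_width:cy + strip_width + 1,
         cx - strip_width:cx + strip_width + 1] = False     # drop the DC block
    bright = int(np.count_nonzero(mag[band] > strip_thres1))
    return bright > strip_thres2
```

Horizontal banding concentrates energy on the vertical centre line of the spectrum, which is exactly what the band mask captures.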
Step S11: performing jitter detection on camera input according to a current jitter reference frame img_jitter;
in this embodiment, step S11 specifically includes:
step S1101: extract homogenized surf features from the current jitter reference frame img_jitter. Homogenized surf features apply spatial homogenization to standard surf feature extraction (Speeded-Up Robust Features, which can be regarded as an accelerated variant with robustness characteristics): each layer of the pyramid constructed by the surf algorithm is divided into a number of grids, the total number of grids being greater than the required number of feature points; the key point with the highest response value is selected from each grid, and jitter_n feature points jitter_points1 are extracted by the surf algorithm;
This can be realized by extracting key points for each grid separately and lowering the response threshold when no key point is extracted, specifically:
uniformly selecting jitter_n key points based on a quadtree over the pyramid;
if the number of key points in any grid is greater than 1, splitting the grid into 4 grids; if the number of key points in any grid is 0, deleting that grid;
if the number of key points in a newly split grid is greater than 1, splitting it into 4 grids again; deleting it when the number is 0;
repeating the above process until the total number of grids is greater than the required number of feature points or no further splitting is possible.
Step S1102: extracting homogenized surf features from the current image img_resize to obtain jitter_n feature points jitter_points2;
step S1103: calculating the two-dimensional vector of the spatial coordinate difference between matched points of the feature points jitter_points1 and jitter_points2 by brute-force matching (Brute-Force Match) with Hamming distance, and judging that jitter exists if the length of the two-dimensional vector is greater than a preset threshold jitter_thres;
step S1104: sending the corresponding two-dimensional vectors into the queue of the corresponding frame (for example, each frame has a queue of length jitter_len), and performing trigonometric function fitting on the horizontal and vertical coordinates in the queue to obtain the corresponding jitter amplitude, frequency and direction.
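The brute-force Hamming matching of step S1103 and the displacement-vector test can be sketched with NumPy alone (in practice cv2.BFMatcher with NORM_HAMMING would do the matching); the surf keypoint extraction of steps S1101-S1102 is assumed already done, and the binary descriptors here are illustrative:

```python
import numpy as np

def hamming_bf_match(desc1, desc2):
    """Brute-force Hamming matching (step S1103): for each uint8 descriptor
    row in desc1, return the index of the nearest row in desc2."""
    xor = desc1[:, None, :] ^ desc2[None, :, :]       # pairwise XOR
    dist = np.unpackbits(xor, axis=2).sum(axis=2)     # popcount = Hamming distance
    return dist.argmin(axis=1)

def jitter_vectors(pts1, pts2, matches, jitter_thres=2.0):
    """Two-dimensional displacement of each matched point pair; jitter is
    flagged when the mean vector length exceeds jitter_thres pixels."""
    vec = pts2[matches] - pts1                        # per-match 2-D displacement
    lengths = np.linalg.norm(vec, axis=1)
    return vec, bool(lengths.mean() > jitter_thres)
```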
Step S12: and performing shift detection on camera input according to the current shift reference frame img_move.
In this embodiment, step S12 specifically includes:
step S1201: extracting jitter_n feature points move_points of the current shift reference frame img_move in the same way as the homogenized surf feature is extracted in the step S1101;
step S1202: performing feature point matching between the feature points move_points and the feature points jitter_points2 of step S1102, and counting the nearest matches move_match_n whose feature matching distance is smaller than a preset value move_thres1;
step S1203: if the number of matches move_match_n is smaller than a preset threshold move_thres2, judging that a severe shift has occurred; otherwise, calculating the image space position of each matching point;
step S1204: and calculating the average distance of the image space positions of the matching points, judging that the shift occurs if the average distance is larger than a preset threshold value move_thres3, and calculating the shift distance and direction according to the space coordinate vector of the image space positions.
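Steps S1203-S1204 reduce to simple vector statistics once the matching of steps S1201-S1202 has produced paired point sets; the sketch below assumes that pairing is done and uses illustrative thresholds:

```python
import numpy as np

def shift_detect(pts_ref, pts_cur, move_thres2=4, move_thres3=1.5):
    """Steps S1203-S1204 on matched keypoint pairs (pts_ref from img_move,
    pts_cur from the current frame). Returns (status, distance, direction_deg)."""
    move_match_n = len(pts_ref)
    if move_match_n < move_thres2:
        return "severe shift", None, None          # step S1203: too few matches
    vec = pts_cur - pts_ref                        # per-match displacement
    mean_dist = float(np.linalg.norm(vec, axis=1).mean())
    if mean_dist > move_thres3:                    # step S1204: average distance
        mean_vec = vec.mean(axis=0)
        direction = float(np.degrees(np.arctan2(mean_vec[1], mean_vec[0])))
        return "shift", mean_dist, direction
    return "ok", mean_dist, None
```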
Compared with prior art 1, the detection and identification method provided by this embodiment includes a no-signal detection algorithm covering the picture states of different kinds of camera signal loss, so its applicability is wider, and the whole process covers the detection and identification of various abnormal conditions, giving a wider detection range. In addition, the Canny operator is adopted for edge extraction; compared with the Laplace operator, it extracts edges more accurately in noisy dark areas, reducing missed occlusion detections caused by noise.
Compared with prior art 2, the detection and identification method provided by this embodiment can update the shift reference frame and the jitter reference frame in real time, and adopts a uniformly distributed surf feature point algorithm for detecting camera picture jitter and picture shift. Compared with the sift algorithm of prior art 2, it is faster without loss of precision and alleviates the problem of feature point concentration, so it has better adaptability in detecting picture jitter and picture shift.
The camera picture quality abnormality detection and identification method of the present invention can detect and analyze various abnormal conditions of surveillance cameras. When the number of surveillance cameras is huge, it can greatly save the human resources otherwise needed for manual inspection of camera pictures, offers more freedom in time and space than manual inspection, realizes full-period seamless detection and alarming, and can meet the monitoring requirements of various scenes.
Those of ordinary skill in the art will appreciate that: the drawing is a schematic diagram of one embodiment and the modules or flows in the drawing are not necessarily required to practice the invention.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. The method for detecting and identifying the abnormal picture quality of the camera is characterized by comprising the following steps:
step S1: reading video frames from the video stream, sequentially decoding the video stream transmitted by a camera in real time into RGB three-channel image frames img, scaling each image frame img proportionally into an image img_resize, wherein the width of any image img_resize is w and the height is h, copying the image img_resize of the first frame as the initial images of a shift reference frame img_move and a jitter reference frame img_jitter, updating once every n seconds by taking the image img_resize at the corresponding moment as the updated shift reference frame img_move and taking the image img_resize of the previous frame as the updated jitter reference frame img_jitter;
step S2: the signal detection is carried out on the camera input, and specifically comprises the following steps:
step S201: carrying out Gaussian blur on any image img_resize to obtain a Gaussian blurred image img_gaussian;
step S202: carrying out edge extraction on the Gaussian blur image img_gaussian by a Canny operator to obtain an image img_canny;
step S203: searching the number connected_num of connected areas in the image img_canny, calculating the value of 1-connected_num/(w×h), judging that no signal exists if the calculated value is greater than a preset threshold signal_thres, and directly ending detection; otherwise, judging that a signal exists and continuing detection;
step S3: the method for detecting the loss of the picture part of the camera input specifically comprises the following steps:
step S301: counting the area partlost_s1 of a communication area of which the pixel value difference between adjacent pixels in any image img_resize does not exceed a preset threshold value partlose_thres1;
step S302: calculating the value of partlost_s1/(w×h), and if the calculation result is greater than a preset threshold partlose_thres2, judging that partial loss of the picture exists;
step S4: performing picture freezing detection on camera input according to the current shift reference frame img_move and the current jitter reference frame img_jitter;
step S5: performing occlusion detection on camera input, namely converting any image img_resize into a Lab color space image img_Lab, calculating the area block_s1 of the region whose gray value is smaller than a preset threshold block_thres1, finding the corresponding edges in the image img_canny of step S202, counting the area block_s2 of the corresponding edge region, and finally calculating the value of block_s2/block_s1; if the calculated value is greater than the preset threshold block_thres2, determining that the camera is occluded by an object;
step S6: the brightness detection is carried out on the camera input according to the Lab color space image img_Lab, specifically:
step S601: counting the area bright_s1 of which the L channel value in any image img_Lab is smaller than a preset threshold bright_thres1;
step S602: counting the area bright_s2 of which the L channel value in any image img_Lab is larger than a preset threshold bright_thres2;
step S603: the values of bright_s1/(w×h) and bright_s2/(w×h) are calculated, respectively:
if bright_s1/(w×h) is greater than a preset threshold bright_thres3, determining that the image is dark;
if bright_s2/(w×h) is greater than a preset threshold bright_thres4, determining that the image is bright;
step S7: performing color cast detection on camera input according to the Lab color space image img_Lab;
step S8: performing definition detection on camera input according to Gaussian blur image img_gaussian;
step S9: noise detection is carried out on the camera input;
step S10: carrying out banded stripe detection on camera input;
step S11: the camera input is subjected to jitter detection according to the current jitter reference frame img_jitter, specifically:
step S1101: extracting homogenized surf features from the current jitter reference frame img_jitter, wherein the homogenized surf features apply spatial homogenization to standard surf feature extraction, namely dividing each layer of the pyramid constructed by the surf algorithm into a number of grids, the total number of grids being greater than the required number of feature points, selecting the key point with the highest response value from each grid, and extracting jitter_n feature points jitter_points1 by the surf algorithm;
step S1102: extracting homogenized surf features from the current image img_resize to obtain jitter_n feature points jitter_points2;
step S1103: calculating the two-dimensional vector of the spatial coordinate difference between matched points of the feature points jitter_points1 and jitter_points2 by brute-force matching with Hamming distance, and judging that jitter exists if the length of the two-dimensional vector is greater than a preset threshold jitter_thres;
step S1104: sending the corresponding two-dimensional vector into a queue of the corresponding frame, and performing trigonometric function fitting on the horizontal and vertical coordinates of the queue to obtain the corresponding jitter amplitude, frequency and direction;
step S12: and performing shift detection on camera input according to the current shift reference frame img_move.
2. The method for detecting and identifying abnormal picture quality of a camera according to claim 1, wherein the step S4 is specifically:
step S401: respectively differencing a current frame to be detected with a current shift reference frame img_move and a jitter reference frame img_jitter to obtain two differential images, namely diff_img_move and diff_img_jitter;
step S402: and counting the sum of the numbers of 0 pixels in the differential image, calculating the value of the freeze_sum/(w×h), and judging that the picture is frozen if the calculation result is larger than a preset threshold value of the freeze_thres.
3. The method for detecting and identifying abnormal picture quality of a camera according to claim 1, wherein the step S7 is specifically:
step S701: calculating the pixel statistical means d_a and d_b of the a channel and the b channel in any image img_Lab by formula (1):
d_a = (1/(w×h))·Σ_{i=1}^{h} Σ_{j=1}^{w} a(i,j), d_b = (1/(w×h))·Σ_{i=1}^{h} Σ_{j=1}^{w} b(i,j) (1)
wherein a(i,j) and b(i,j) are the pixel values in the i-th row and j-th column of the a channel and the b channel of img_Lab, respectively;
step S702: the value of the color cast factor K was found by the following calculation:
K=D/M
wherein D and M are calculated intermediate values;
step S703: if the value of the color cast factor K is larger than the color cast threshold value color_thres, judging that the image has color cast;
step S704: comparing |d_a| and |d_b|:
if |d_a| > |d_b|, checking the value of d_a: if |d_a| > 128, judging that the image is reddish; otherwise, judging that the image is greenish;
if |d_a| < |d_b|, checking the value of d_b: if |d_b| > 128, judging that the image is yellowish; otherwise, judging that the image is bluish.
4. The method for detecting and identifying abnormal picture quality of a camera according to claim 1, wherein the step S8 is specifically:
step S801: acquiring the absolute value of the difference between every two spatially adjacent pixels after any image img_resize is grayed, with the calculation formula:
diff_src=abs(I(i-1,j)-I(i,j))+abs(I(i,j-1)-I(i,j)) (2)
wherein abs is the absolute value operator, i and j respectively represent the i-th row and j-th column, and I represents the corresponding pixel value;
step S802: summing all values to obtain the total diff_src_sum,
step S803: acquiring an absolute value of a neighboring pixel difference of a space corresponding position after the Gaussian blur image img_gaussian of the corresponding image in the step S201 is grayed, wherein the calculation formula is the same as the formula (2);
step S804: for each position, calculating the difference between the absolute value obtained in step S801 and the absolute value obtained in step S803, and obtaining diff_diff_sum by summing over all positions:
diff_diff_sum = Σ_{i,j} ( diff_src_resize(i,j) − diff_src_gaussian(i,j) )
wherein diff_src_resize(i,j) denotes the absolute value for the i-th row and j-th column obtained in step S801, and diff_src_gaussian(i,j) denotes the absolute value for the i-th row and j-th column obtained in step S803;
step S805: the value of diff_diff_sum/diff_src_sum is calculated, and if the value is greater than a preset threshold sharpness_thres, the image blurring is determined.
5. The method for detecting and identifying abnormal picture quality of a camera according to claim 1, wherein step S9 specifically comprises:
step S901: after any image img_resize is grayed, denoising is carried out through mean value filtering to obtain an image img_average;
step S902: calculating the signal-to-noise ratio snr of the image, wherein I_resize(i,j) is the pixel value of the image img_resize in the i-th row and j-th column, and I_average(i,j) is the pixel value of the image img_average in the i-th row and j-th column;
step S903: when the signal-to-noise ratio snr is smaller than a preset threshold noise_thres, it is determined that significant noise exists in the input image.
6. The method for detecting and identifying abnormal picture quality of a camera according to claim 1, wherein the step S10 is specifically:
step S1001: performing discrete Fourier transform on any image img_resize;
step S1002: counting the total number of abnormal bright spots whose pixel value is greater than a preset threshold strip_thres1 in the regions of width strip_width centered on the horizontal and vertical center lines of the spectrogram obtained after the transform;
step S1003: when the total number of abnormal bright spots is larger than a preset threshold value strip_thres2, judging that the image has the abnormal condition of the banded stripes.
7. The method for detecting and identifying abnormal picture quality of a camera according to claim 1, wherein the step S12 is specifically:
step S1201: extracting jitter_n feature points move_points of the current shift reference frame img_move in the same way as the homogenized surf feature is extracted in the step S1101;
step S1202: performing feature point matching between the feature points move_points and the feature points jitter_points2 of step S1102, and counting the nearest matches move_match_n whose feature matching distance is smaller than a preset value move_thres1;
step S1203: if the number of matches move_match_n is smaller than a preset threshold move_thres2, judging that a severe shift has occurred; otherwise, calculating the image space position of each matching point;
step S1204: and calculating the average distance of the image space positions of the matching points, judging that the shift occurs if the average distance is larger than a preset threshold value move_thres3, and calculating the shift distance and direction according to the space coordinate vector of the image space positions.
CN202211349015.XA 2022-10-31 2022-10-31 Camera picture quality abnormality detection and identification method Active CN115514955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211349015.XA CN115514955B (en) 2022-10-31 2022-10-31 Camera picture quality abnormality detection and identification method

Publications (2)

Publication Number Publication Date
CN115514955A CN115514955A (en) 2022-12-23
CN115514955B (en) 2023-11-14

Family

ID=84513109


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2921989A1 (en) * 2014-03-17 2015-09-23 Université de Genève Method for object recognition and/or verification on portable devices
CN109102013A (en) * 2018-08-01 2018-12-28 重庆大学 A kind of improvement FREAK Feature Points Matching digital image stabilization method suitable for tunnel environment characteristic
CN115118934A (en) * 2022-06-28 2022-09-27 广州阿凡提电子科技有限公司 Live broadcast effect monitoring processing method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7702177B2 (en) * 2005-04-25 2010-04-20 Samsung Electronics Co., Ltd. Method and apparatus for adjusting brightness of image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of a Video Quality Diagnosis System Based on Intelligent Image Analysis; Liu Zhiqiang; Information & Communications (Issue 12); pp. 140-142 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant