CN111967345A - Method for judging shielding state of camera in real time - Google Patents

Method for judging the occlusion state of a camera in real time

Info

Publication number
CN111967345A
Authority
CN
China
Prior art keywords
image
camera
points
gray level
areas
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010736809.6A
Other languages
Chinese (zh)
Other versions
CN111967345B (en)
Inventor
申富饶
李金桥
姜少魁
陆志浩
金祎
Current Assignee (The listed assignees may be inaccurate.)
Nanjing University
State Grid Shanghai Electric Power Co Ltd
Original Assignee
Nanjing University
State Grid Shanghai Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion.)
Filing date
Publication date
Application filed by Nanjing University and State Grid Shanghai Electric Power Co Ltd
Priority to CN202010736809.6A
Publication of CN111967345A
Application granted
Publication of CN111967345B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06T5/80
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 - Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 - Diagnosis, testing or measuring for television systems or their details for television cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/10024 - Color image
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for judging the occlusion state of a camera in real time, comprising the following steps: (1) read an RGB image captured by the camera in real time; (2) scale the RGB image to a target size, then correct and de-distort it; (3) perform chromaticity-space conversion on the corrected, undistorted RGB image from step (2) to obtain a grayscale image, and extract feature points to obtain the feature-point set of the grayscale image; (4) divide the grayscale image into 4 numbered regions and count the feature points in each region; (5) if the number of feature points in any of the 4 regions is smaller than a preset count threshold, output the judgment that the camera is occluded; if the number of feature points in all 4 regions is greater than the threshold, output the judgment that the camera is not occluded. The method is simple, efficient, and fast, can accurately judge the occlusion state of the camera from a single frame, and is suitable for scenarios with high real-time requirements and for embedded devices.

Description

Method for judging the occlusion state of a camera in real time
Technical Field
The invention relates to the field of computer vision, and in particular to a method for judging the occlusion state of a camera in real time.
Background
In recent years, with the rapid development of vision theory and computer science, more and more researchers have turned to the field of computer vision. Computer vision shows good prospects in autonomous and assisted driving and has attracted wide attention. Current autonomous- and assisted-driving systems mostly acquire the image of the road ahead through a camera, but the camera may be partially occluded by objects such as mud, interfering with the normal operation of the driving system, and may also be occluded accidentally for other reasons. If the driving system does not detect and handle such situations in time, very serious safety problems can result.
The occlusion detection method of Chinese patent CN103139547B, "Method for determining the occlusion state of an imaging lens based on a video image signal", is as follows: extract the image background with a frame-difference method, obtain the foreground by background subtraction, binarize the foreground, and divide it into several foreground detection units; discard units whose pixel area is below a threshold, screening out candidate occlusion regions; track the pixels of the candidate occlusion regions in subsequent frames, and if the changes in their grayscale and texture information are below a threshold, mark them as suspected occlusion regions; track and count the suspected occlusion regions over subsequent frames, and if they persist in the video longer than a preset time threshold, judge the camera to be occluded. Although this method can identify occluded regions of the camera over the long term, it must process a sequence of consecutive frames, is slow, and is unsuitable for applications with high real-time requirements. It also cannot recognize moving occluders, such as deliberate manual blocking.
Chinese patent CN200710145468.X, "Method for detecting video occlusion in network-based video surveillance", discloses an occlusion detection method that first determines a reference frame and then detects occlusion from motion regions. The method works to some extent, but its limitation lies in the selection of the reference frame: occlusion detection is possible only when a qualifying reference frame appears, and the method likewise has to process consecutive frames over a period of time, so it is slow and of limited practicality.
In summary, the open problem is to provide a method for judging the occlusion state of a camera that is accurate, fast, and usable in application scenarios with high real-time requirements.
Disclosure of Invention
The invention provides a method for judging the occlusion state of a camera in real time, aiming to solve the low system safety caused by the low accuracy and low speed of existing camera-occlusion judgment methods.
A method for judging the occlusion state of a camera in real time comprises the following steps:
Step 1: read a frame of the RGB image captured by the camera in real time.
Step 2: scale the RGB image to a target size, then correct and de-distort it.
Step 3: perform chromaticity-space conversion on the corrected, undistorted RGB image from step 2 to obtain a grayscale image, and extract feature points to obtain the feature-point set of the grayscale image.
Step 4: divide the grayscale image into 4 numbered regions and count the feature points in each region.
Step 5: if the number of feature points in any of the 4 regions is smaller than a preset count threshold t, output the judgment that the camera is occluded; if the number of feature points in all 4 regions is greater than t, output the judgment that the camera is not occluded.
Further, in one implementation, step 1 includes: before the camera is used, calibrate it to obtain its intrinsic matrix and distortion coefficients.
Step 2 includes: correct and de-distort the RGB image, after scaling it to the target size, with the OpenCV undistort algorithm using the camera's intrinsic matrix and distortion coefficients.
Further, in one implementation, step 3 extracts image feature points with a corner-detection method: if the absolute difference between the gray value of a pixel of the grayscale image and the gray values of a certain number of pixels in its surrounding neighborhood is greater than or equal to a preset difference threshold t_p, the pixel is judged to be a corner, i.e. an image feature point.
Further, in one implementation, extracting image feature points with the FAST corner-detection method includes:
Step 3-1: select a pixel P from the grayscale image; its gray value is I_P.
Step 3-2: construct a discretized Bresenham circle of radius 3 pixels centered at P; the circle contains 16 pixels.
Step 3-3: if there exist n consecutive pixels on the discretized Bresenham circle whose gray values all differ in absolute value from the gray value of the center by more than a preset difference threshold t_p, i.e.

|I_i - I_P| > t_p, i = 1, 2, ..., n

where I_i is the gray value of the i-th of the n consecutive pixels, i = 1, 2, ..., n is the pixel index, I_P is the gray value of the center, and t_p is the preset difference threshold, then the center of the discretized Bresenham circle is extracted as an image feature point by the FAST corner-detection method.
Further, in one implementation, step 4 includes: divide the grayscale image equally into 4 regions located at the upper left, upper right, lower left, and lower right of the grayscale image, the position of each region given by the coordinates of its upper-left corner together with its width and height, namely:

Region 1: (0, 0, w/2, h/2)
Region 2: (w/2, 0, w/2, h/2)
Region 3: (0, h/2, w/2, h/2)
Region 4: (w/2, h/2, w/2, h/2)

where w is the width of the grayscale image and h is the height of the grayscale image.
Further, in one implementation, step 4 further includes:
Step 4-1: after obtaining the grayscale-image feature-point set in step 3, extract the feature-point information, namely the coordinates of each feature point on the grayscale image.
Step 4-2: from the coordinates of each feature point, determine the number r of the region of the grayscale image containing it, and count the feature points in each of the 4 regions:

if x_j < w/2 and y_j < h/2, then r = 1: feature point P_j lies in region 1, and the feature-point count of region 1 is incremented by 1;
if x_j < w/2 and y_j ≥ h/2, then r = 3: feature point P_j lies in region 3, and the feature-point count of region 3 is incremented by 1;
if x_j ≥ w/2 and y_j < h/2, then r = 2: feature point P_j lies in region 2, and the feature-point count of region 2 is incremented by 1;
if x_j ≥ w/2 and y_j ≥ h/2, then r = 4: feature point P_j lies in region 4, and the feature-point count of region 4 is incremented by 1;

where (x_j, y_j) are the coordinates of feature point P_j on the grayscale image.
Further, in one implementation, the preset count threshold t is determined before step 5 as follows:
Step 5-1: capture a video with the camera unoccluded.
Step 5-2: from the video captured with the camera unoccluded, determine the set of sampled frame numbers to be used with a random-sampling method, namely:

S_k = S_{k-1} ∪ {y_k}
y_k = ⌈θ_k × F⌉
k = 1, ..., γ

where S_k is the set of sampled frame numbers after the k-th draw, S_0 = ∅, F is the number of frames of the video captured with the camera unoccluded, θ is a random variable uniformly distributed on the interval [0, 1), i.e. θ_k is a real number on [0, 1) generated at the k-th draw, y_k is the frame number obtained by the k-th draw, γ is the number of draws, and S_γ is the final sampling result, i.e. the set of sampled frame numbers to be used.
Step 5-3: read each frame of the sampled set in sampling order and apply steps 1 to 4 to it, obtaining the feature-point counts of the 4 regions of the corresponding grayscale image.
Step 5-4: sort the feature-point counts of all regions of all grayscale images in ascending order into the sequence a = (a_1, a_2, ..., a_{4×γ}), where a_l denotes the l-th smallest count.
Step 5-5: compute the preset count threshold t by box-plot analysis:

t = Q_1 - 1.5 × IQR
IQR = |Q_1 - Q_2|

where t is the preset count threshold, i.e. the lower bound of normal values in box-plot analysis, Q_1 is the lower quartile, Q_2 is the upper quartile, and IQR is the interquartile range, the absolute value of the difference between the upper quartile Q_2 and the lower quartile Q_1.
The lower quartile Q_1 and the upper quartile Q_2 are computed as:

Q_o = a_{c_o} + d_o × (a_{c_o + 1} - a_{c_o})
c_o = ⌊μ_o⌋
d_o = μ_o - c_o
μ_o = II(o = 1) × 0.25 × (4γ + 1) + II(o = 2) × 0.75 × (4γ + 1)
o = 1, 2

where μ_o is the position in the sorted sequence a = (a_1, a_2, ..., a_{4×γ}) of step 5-4 of the lower quartile Q_1 (for o = 1) or the upper quartile Q_2 (for o = 2), II is an indicator function distinguishing whether the upper quartile Q_2 or the lower quartile Q_1 is currently being computed, c_o is the integer part of μ_o, and d_o is the fractional part of μ_o.
As can be seen from the above technical solution, an embodiment of the invention provides a method for judging the occlusion state of a camera in real time, comprising: step 1, read a frame of the RGB image captured by the camera in real time; step 2, scale the RGB image to a target size, then correct and de-distort it; step 3, perform chromaticity-space conversion on the corrected, undistorted RGB image from step 2 to obtain a grayscale image, and extract feature points to obtain the feature-point set of the grayscale image; step 4, divide the grayscale image into 4 numbered regions and count the feature points in each region; step 5, if the number of feature points in any of the 4 regions is smaller than a preset count threshold t, output the judgment that the camera is occluded; if the number of feature points in all 4 regions is greater than t, output the judgment that the camera is not occluded.
Existing camera-occlusion judgment methods are inaccurate and slow. The present method can be applied to any device with a camera, and occlusion detection requires only a single camera; it extracts feature points by corner detection, so feature extraction is fast; it can perform detection from a single frame, depending neither on consecutive video frames nor on pre-stored information; and by dividing the image into regions it handles dynamic scenes, full occlusion, and partial occlusion well. In summary, the method is suited to single-camera occlusion detection in application scenarios with high real-time requirements, and it is fast, accurate, and independent of consecutive frames.
Drawings
In order to illustrate the technical solution of the invention more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic workflow diagram of a method for determining a shielding state of a camera in real time according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a method for determining a shielding state of a camera in real time according to an embodiment of the present invention;
fig. 3 is a schematic view of image partition in a method for determining a shielding state of a camera in real time according to an embodiment of the present invention;
fig. 4a is a schematic diagram illustrating a first effect of feature extraction in a method for determining a shielding state of a camera in real time according to an embodiment of the present invention;
fig. 4b is a schematic diagram of a second effect of feature extraction in the method for determining the shielding state of the camera in real time according to the embodiment of the present invention;
fig. 4c is a schematic diagram illustrating a third effect of feature extraction in the method for determining a shielding state of a camera in real time according to the embodiment of the present invention;
fig. 5a is a schematic diagram illustrating a first effect of occlusion detection in a method for determining an occlusion state of a camera in real time according to an embodiment of the present invention;
fig. 5b is a schematic diagram illustrating a second effect of occlusion detection in a method for determining an occlusion state of a camera in real time according to an embodiment of the present invention;
fig. 5c is a schematic diagram illustrating a third effect of occlusion detection in the method for determining an occlusion state of a camera in real time according to the embodiment of the present invention;
fig. 5d is a schematic diagram illustrating a fourth effect of occlusion detection in the method for determining an occlusion state of a camera in real time according to the embodiment of the present invention;
fig. 5e is a schematic diagram illustrating a fifth effect of occlusion detection in the method for determining an occlusion state of a camera in real time according to the embodiment of the present invention;
fig. 5f is a schematic diagram of a sixth effect of occlusion detection in the method for determining an occlusion state of a camera in real time according to the embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The embodiment of the invention discloses a method for judging the occlusion state of a camera in real time, applied in scenarios with high real-time requirements; occlusion detection requires only a single camera. Because feature points are extracted with the FAST corner-detection method, feature extraction is fast; in particular, detection works on a single frame, depending neither on consecutive video frames nor on pre-stored information; and by dividing the image into regions, the method handles dynamic scenes, full occlusion, and partial occlusion well. It is therefore suited to single-camera occlusion detection in application scenarios with high real-time requirements, and it is fast, accurate, and independent of consecutive frames.
Fig. 1 shows the occlusion-detection workflow: a method for judging the camera state in real time from image feature points, comprising 5 steps:
Step 1: read a frame of the RGB image captured by the camera in real time.
Step 2: scale the RGB image to a target size, then correct and de-distort it.
Step 3: perform chromaticity-space conversion on the corrected, undistorted RGB image from step 2 to obtain a grayscale image, and extract feature points to obtain the feature-point set of the grayscale image.
Step 4: divide the grayscale image into 4 numbered regions and count the feature points in each region.
In this step, the image is divided into regions to detect local occlusion better. Too many regions, however, make each region too small; if some small region happens to contain too few feature points, it is falsely recognized as occluded. Dividing into 4 regions therefore preserves local occlusion detection while keeping false occlusion alarms as low as possible.
Step 5: if the number of feature points in any of the 4 regions is smaller than a preset count threshold t, output the judgment that the camera is occluded; if the number of feature points in all 4 regions is greater than t, output the judgment that the camera is not occluded.
In the method for judging the occlusion state of a camera in real time of this embodiment, step 1 includes: before the camera is used, calibrate it to obtain its intrinsic matrix and distortion coefficients.
Step 2 includes: correct and de-distort the RGB image, after scaling it to the target size, with the OpenCV undistort algorithm using the camera's intrinsic matrix and distortion coefficients.
As shown in Fig. 2, in the method for judging the occlusion state of a camera in real time of this embodiment, step 3 extracts image feature points with the FAST corner-detection method: if the absolute difference between the gray value of a pixel of the grayscale image and the gray values of a certain number of pixels in its surrounding neighborhood is greater than or equal to a preset difference threshold t_p, the pixel is judged to be a corner, i.e. an image feature point.
In this embodiment, the image feature points are detected as corners. A corner is a pixel carrying key information; the feature point extends the corner concept, and the detected corners serve as the feature points of the image.
The basic idea of corner-based feature detection is: if a pixel and a certain number of pixels in its surrounding area lie in different image regions, the pixel may be a corner. In particular, for a grayscale image, if the gray value of a point is greater or smaller than the gray values of a certain number of pixels in its surrounding area, the point may be a corner.
In the method for judging the occlusion state of a camera in real time of this embodiment, extracting image feature points with the corner-detection method includes:
Step 3-1: select a pixel P from the grayscale image; its gray value is I_P.
Step 3-2: construct a discretized Bresenham circle of radius 3 pixels centered at P; the circle contains 16 pixels.
Step 3-3: if there exist n consecutive pixels on the discretized Bresenham circle whose gray values all differ in absolute value from the gray value of the center by more than a preset difference threshold t_p, i.e.

|I_i - I_P| > t_p, i = 1, 2, ..., n

where I_i is the gray value of the i-th of the n consecutive pixels, i = 1, 2, ..., n is the pixel index, I_P is the gray value of the center, and t_p is the preset difference threshold, then the center of the discretized Bresenham circle is extracted as an image feature point by the corner-detection method. In general, if the Bresenham circle contains N pixels, n must satisfy

N/2 < n ≤ N.

Here the Bresenham circle contains 16 pixels; specifically, in this embodiment n may be set to 12 or 9, and n = 9 is preferred. In most cases, to avoid detecting false feature points, the preset difference threshold t_p should be set to a fairly large value; t_p can generally be set to 50.
As shown in Fig. 3, in the method for judging the occlusion state of a camera in real time of this embodiment, step 4 includes: divide the grayscale image equally into 4 regions located at the upper left, upper right, lower left, and lower right of the grayscale image, the position of each region given by the coordinates of its upper-left corner together with its width and height, namely:

Region 1: (0, 0, w/2, h/2)
Region 2: (w/2, 0, w/2, h/2)
Region 3: (0, h/2, w/2, h/2)
Region 4: (w/2, h/2, w/2, h/2)

where w is the width of the grayscale image and h is the height of the grayscale image.
In the method for judging the occlusion state of a camera in real time of this embodiment, step 4 further includes:
Step 4-1: after obtaining the grayscale-image feature-point set in step 3, extract the feature-point information, namely the coordinates of each feature point on the grayscale image.
Step 4-2: from the coordinates of each feature point, determine the number r of the region of the grayscale image containing it, and count the feature points in each of the 4 regions:

if x_j < w/2 and y_j < h/2, then r = 1: feature point P_j lies in region 1, and the feature-point count of region 1 is incremented by 1;
if x_j < w/2 and y_j ≥ h/2, then r = 3: feature point P_j lies in region 3, and the feature-point count of region 3 is incremented by 1;
if x_j ≥ w/2 and y_j < h/2, then r = 2: feature point P_j lies in region 2, and the feature-point count of region 2 is incremented by 1;
if x_j ≥ w/2 and y_j ≥ h/2, then r = 4: feature point P_j lies in region 4, and the feature-point count of region 4 is incremented by 1;

where (x_j, y_j) are the coordinates of feature point P_j on the grayscale image.
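Steps 4-1 and 4-2 amount to a quadrant test on each feature-point coordinate; a minimal sketch follows. The treatment of points lying exactly on the half-width or half-height boundary is our assumption, since the original inequalities are garbled.

```python
def count_points_per_region(points, w, h):
    """Count feature points in the four equal regions of step 4.

    Region numbering follows the text: 1 upper-left, 2 upper-right,
    3 lower-left, 4 lower-right.
    """
    counts = {1: 0, 2: 0, 3: 0, 4: 0}
    for x, y in points:
        if x < w / 2:
            r = 1 if y < h / 2 else 3
        else:
            r = 2 if y < h / 2 else 4
        counts[r] += 1
    return counts
```

The resulting per-region counts are exactly what step 5 compares against the preset threshold t.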
In the method for judging the occlusion state of a camera in real time of this embodiment, the count threshold is the key quantity in judging occlusion, and the number of feature points in an image depends to some extent on the environment; to guarantee strong occlusion-detection capability, the threshold must be preset for the actual environment. Before step 5, the preset count threshold t is therefore determined as follows:
Step 5-1: capture a video with the camera unoccluded.
Step 5-2: from the video captured with the camera unoccluded (to reduce chance effects the video should be as long as possible; in this embodiment a video of more than 10 minutes is used), determine the set of sampled frame numbers to be used with a random-sampling method, namely:

S_k = S_{k-1} ∪ {y_k}
y_k = ⌈θ_k × F⌉
k = 1, ..., γ

where S_k is the set of sampled frame numbers after the k-th draw, S_0 = ∅, F is the number of frames of the video captured with the camera unoccluded, θ is a random variable uniformly distributed on the interval [0, 1), i.e. θ_k is a real number on [0, 1) generated at the k-th draw, y_k is the frame number obtained by the k-th draw, γ is the number of draws, and S_γ is the final sampling result, i.e. the set of sampled frame numbers to be used.
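Step 5-2 can be sketched as follows. Mapping θ_k to a frame number in [1, F] is our reading of the garbled formula, and duplicates collapse because S_k is built as a set union.

```python
import random

def sample_frame_numbers(total_frames, gamma, seed=None):
    """Step 5-2: draw gamma frame numbers y_k from theta_k ~ U[0, 1)."""
    rng = random.Random(seed)
    s = set()                              # S_0 is the empty set
    for _ in range(gamma):
        theta = rng.random()               # theta_k in [0, 1)
        s.add(int(theta * total_frames) + 1)
    return s                               # S_gamma
```

Each sampled frame is then pushed through steps 1 to 4 to collect per-region feature counts.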
step 5-3, reading each frame of the sampling image set according to a sampling sequence, and sequentially executing the steps 1 to 4 on each frame to obtain the number of feature points of 4 areas in the gray level image corresponding to each frame;
step 5-4, all the areas in all the gray level imagesThe number of the characteristic points is sorted from small to large to obtain a sequence a of the number of the characteristic points, wherein a is (a)1,a2,…,a4×γ) Wherein a islThe number of characteristic points representing the ith smallest;
step 5-5, calculating the preset number threshold t based on the box plot analysis method:

t = Q_1 − 1.5 × IQR

IQR = |Q_1 − Q_2|

wherein t represents the preset number threshold, i.e., the lower bound of normal values in the box plot analysis method, Q_1 denotes the lower quartile, Q_2 denotes the upper quartile, and IQR is the interquartile range, i.e., the absolute value of the difference between the upper quartile Q_2 and the lower quartile Q_1;
the lower quartile Q_1 and the upper quartile Q_2 are calculated as follows:

Q_o = a_{c_o} + d_o × (a_{c_o+1} − a_{c_o})

d_o = μ_o − c_o

c_o = ⌊μ_o⌋

μ_o = II(o=1) × 0.25 × (4×γ+1) + II(o=2) × 0.75 × (4×γ+1)

o = 1, 2

wherein μ_o represents the position of the lower quartile Q_1 (when o = 1) or the upper quartile Q_2 (when o = 2) in the feature point number sequence a = (a_1, a_2, …, a_{4×γ}) obtained in step 5-4, II(·) is an indicator function used to distinguish whether the upper quartile Q_2 or the lower quartile Q_1 is currently being calculated, c_o is the integer part of μ_o, and d_o is the fractional part of μ_o.
Box plot analysis is commonly used to detect outliers: an observation may be regarded as an outlier when it is greater than the upper bound or less than the lower bound of normal values. Here, the lower bound of normal values is taken as the number threshold derived from the box plot analysis; when the number of feature points in a region falls below this threshold, an abnormality is deemed to have occurred, i.e., the camera is judged to be shielded.
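Steps 5-2 through 5-5 can be sketched in Python as follows. This is an illustrative sketch, not the patented implementation: the frame-sampling formula (y_k = ⌈θ_k × F⌉) and the interpolated quartile formula are reconstructions of formula images that are not legible in this text, and the per-region feature counts are assumed to have already been computed by steps 1 to 4.

```python
import math
import random

def sample_frame_numbers(total_frames, gamma, rand=random.random):
    """Steps 5-1/5-2: draw gamma frame numbers at random.
    Assumes y_k = ceil(theta_k * F) with theta_k uniform on [0, 1);
    the set union S_k = S_{k-1} | {y_k} collapses duplicates."""
    return {max(1, math.ceil(rand() * total_frames)) for _ in range(gamma)}

def quartile(sorted_counts, fraction):
    """Linear-interpolation quartile at position mu_o = fraction * (N + 1),
    with c_o the integer part and d_o the fractional part of mu_o."""
    n = len(sorted_counts)
    mu = fraction * (n + 1)        # position mu_o in the sorted sequence
    c = int(mu)                    # integer part c_o
    d = mu - c                     # fractional part d_o
    c = min(max(c, 1), n)          # clamp to valid 1-based positions
    lower = sorted_counts[c - 1]   # a_{c_o}
    upper = sorted_counts[c] if c < n else sorted_counts[-1]  # a_{c_o+1}
    return lower + d * (upper - lower)

def number_threshold(region_counts):
    """Step 5-5: t = Q1 - 1.5 * IQR over the 4*gamma per-region counts."""
    a = sorted(region_counts)
    q1 = quartile(a, 0.25)   # lower quartile Q1
    q2 = quartile(a, 0.75)   # upper quartile Q2
    return q1 - 1.5 * abs(q1 - q2)
```

In practice the resulting t can be negative for sparse scenes, in which case no unshielded region would ever fall below it; the patent's environment-specific calibration video is what keeps the threshold meaningful.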
To verify the validity of the method, it was tested on actually captured videos. Videos with full shielding, partial shielding, and no shielding were used; shielding detection was performed on each image frame, and the shielding state of the camera was judged in real time.
Taking the actually acquired video as an example, for each frame of image in the video, judging the shielding state of the camera according to the following steps:
step 1, reading a frame of RGB image shot by a camera in real time;
step 2, zooming the RGB image to a target size, and then correcting and removing distortion;
step 3, carrying out chromaticity space conversion on the corrected and undistorted RGB image obtained in the step 2 to obtain a gray level image, and extracting feature points in the gray level image to obtain a feature point set of the gray level image;
step 4, dividing the gray level image into 4 areas, numbering the areas, and respectively calculating the number of characteristic points of each area;
step 5, if the number of the characteristic points in any one of the 4 areas is smaller than a preset number threshold t, outputting a judgment result of the shielding of the camera; and if the number of the feature points of all the 4 regions is greater than a preset number threshold t, outputting a judgment result that the camera is not shielded.
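Steps 4 and 5 above can be sketched as follows. This is an illustrative sketch rather than the authors' code: feature extraction (steps 1 to 3) is assumed to have already produced a list of (x, y) feature point coordinates, for example from OpenCV's FAST corner detector, and the region numbering follows the division reconstructed from claim 5 (1: upper left, 2: upper right, 3: lower left, 4: lower right).

```python
def count_points_per_region(points, w, h):
    """Step 4: divide the w-by-h grayscale image into 4 equal regions
    and count the feature points falling inside each one."""
    counts = {1: 0, 2: 0, 3: 0, 4: 0}
    for x, y in points:
        if x < w / 2:
            r = 1 if y < h / 2 else 3   # left half: upper-left or lower-left
        else:
            r = 2 if y < h / 2 else 4   # right half: upper-right or lower-right
        counts[r] += 1
    return counts

def camera_is_shielded(points, w, h, t):
    """Step 5: the camera is judged shielded when any region holds
    fewer feature points than the preset number threshold t."""
    return any(c < t for c in count_points_per_region(points, w, h).values())
```

In a real pipeline the points would come from something like `cv2.FastFeatureDetector_create().detect(gray)` after `cv2.undistort`, with each keypoint's `pt` attribute supplying the (x, y) pair.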
Fig. 4a to 4c show the effect of image feature point extraction, where fig. 4a is a schematic diagram of the effect of full occlusion, fig. 4b is a schematic diagram of the effect of partial occlusion, and fig. 4c is a schematic diagram of the effect of non-occlusion.
Fig. 5a to 5f show the shielding detection effect of the present invention on the camera. For ease of illustration, when the camera is judged to be shielded, the frame number of the image and a shielding message are drawn onto the output. Figs. 5a and 5b show the effect under full shielding and Figs. 5c and 5d under partial shielding; all four figures display the text "Camera is occluded!". Figs. 5e and 5f show the effect without shielding, and the shielding text does not appear in either figure. Verification on this data shows that the proposed method for judging the shielding state of the camera in real time achieves satisfactory accuracy and speed.
As can be seen from the foregoing technical solutions, an embodiment of the present invention provides a method for determining a shielding state of a camera in real time, where the method includes: step 1, reading a frame of RGB image shot by a camera in real time; step 2, zooming the RGB image to a target size, and then correcting and removing distortion; step 3, carrying out chromaticity space conversion on the corrected and undistorted RGB image obtained in the step 2 to obtain a gray level image, and extracting feature points in the gray level image to obtain a feature point set of the gray level image; step 4, dividing the gray level image into 4 areas, numbering the areas, and respectively calculating the number of characteristic points of each area; step 5, if the number of the characteristic points in any one of the 4 areas is smaller than a preset number threshold t, outputting a judgment result of the shielding of the camera; and if the number of the feature points of all the 4 regions is greater than the preset number threshold t, outputting a judgment result that the camera is not shielded.
Camera shielding judgment methods in the prior art suffer from low precision and low speed. The present method can be applied to any device with a camera, and shielding detection requires only a single camera. Feature points are extracted by corner detection, so extraction is fast; detection works on a single image frame and does not depend on consecutive video frames or pre-stored information; and by dividing the image into regions, the method adapts well to dynamic scenes, full shielding, and partial shielding. In conclusion, the method is suitable for single-camera shielding detection in application scenarios with high real-time requirements, offering high speed, high accuracy, and no dependence on consecutive frames.
In a specific implementation, the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program may include some or all of the steps in each embodiment of the method for determining the shielding state of the camera in real time. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts in the various embodiments in this specification may be referred to each other. The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (7)

1. A method for judging the shielding state of a camera in real time is characterized by comprising the following steps:
step 1, reading a frame of RGB image shot by a camera in real time;
step 2, zooming the RGB image to a target size, and then correcting and removing distortion;
step 3, carrying out chromaticity space conversion on the corrected and undistorted RGB image obtained in the step 2 to obtain a gray level image, and extracting feature points in the gray level image to obtain a feature point set of the gray level image;
step 4, dividing the gray level image into 4 areas, numbering the areas, and respectively calculating the number of characteristic points of each area;
step 5, if the number of the characteristic points in any one of the 4 areas is smaller than a preset number threshold t, outputting a judgment result of the shielding of the camera; and if the number of the feature points of all the 4 regions is greater than a preset number threshold t, outputting a judgment result that the camera is not shielded.
2. The method for judging the shielding state of the camera in real time according to claim 1, wherein the step 1 comprises: before the camera is used, calibrating the camera to obtain an internal reference matrix and a distortion coefficient of the camera;
the step 2 comprises: according to the internal reference matrix and the distortion coefficient of the camera, correcting and de-distorting the RGB image scaled to the target size using the OpenCV undistort algorithm.
3. The method according to claim 1, wherein the step 3 comprises extracting image feature points based on a FAST corner detection method, the FAST corner detection method comprising:

if the difference between the gray value of a pixel point of the grayscale image and the gray values of a sufficient number of pixel points in its surrounding neighborhood is greater than or equal to a preset difference threshold t_p, determining the pixel point to be a corner point, i.e., an image feature point.
4. The method according to claim 3, wherein the extracting of image feature points based on the FAST corner detection method comprises:

step 3-1, selecting a pixel point P from the grayscale image, the gray value of which is I_P;

step 3-2, setting a discretized Bresenham circle with the pixel point P as the center and 3 pixels as the radius, the discretized Bresenham circle having 16 pixel points on it;

step 3-3, if there exist n contiguous pixel points on the discretized Bresenham circle such that the absolute values of the differences between their gray values and the gray value of the center are all greater than the preset difference threshold t_p, namely:

|I_i − I_P| > t_p, i = 1, 2, …, n

wherein I_i is the gray value of the i-th of the n contiguous pixel points, i = 1, 2, …, n indexes the pixel points, I_P is the gray value of the center, and t_p is the preset difference threshold,

then extracting the center of the discretized Bresenham circle as an image feature point.
5. The method for judging the shielding state of the camera in real time according to claim 1, wherein the step 4 comprises: equally dividing the grayscale image into 4 areas, denoted area No. 1, area No. 2, area No. 3 and area No. 4, located respectively at the upper-left, upper-right, lower-left and lower-right of the grayscale image, the specific position of each area being represented by the coordinates of its upper-left corner together with its width and height, namely:

area No. 1: (0, 0, w/2, h/2)

area No. 2: (w/2, 0, w/2, h/2)

area No. 3: (0, h/2, w/2, h/2)

area No. 4: (w/2, h/2, w/2, h/2)

where w is the width of the grayscale image and h is the height of the grayscale image.
6. The method for judging the shielding state of the camera in real time according to claim 5, wherein the step 4 further comprises:
step 4-1, after the gray image feature point set is obtained in the step 3, extracting feature point information of the gray image, wherein the feature point information comprises: coordinates of the feature points on the gray level image;
step 4-2, determining, from the coordinates of each feature point on the grayscale image, the number r of the area in which it lies, and counting the numbers of feature points in the 4 areas respectively:

if x_j < w/2 and y_j < h/2, then r = 1, the feature point P_j lies in area No. 1, and the feature point count of area No. 1 is incremented by 1;

if x_j < w/2 and y_j ≥ h/2, then r = 3, the feature point P_j lies in area No. 3, and the feature point count of area No. 3 is incremented by 1;

if x_j ≥ w/2 and y_j < h/2, then r = 2, the feature point P_j lies in area No. 2, and the feature point count of area No. 2 is incremented by 1;

if x_j ≥ w/2 and y_j ≥ h/2, then r = 4, the feature point P_j lies in area No. 4, and the feature point count of area No. 4 is incremented by 1;

wherein (x_j, y_j) are the coordinates of the feature point P_j on the grayscale image.
7. The method for judging the shielding state of the camera in real time according to claim 1, wherein before the step 5, a preset number threshold t needs to be determined, and the method comprises the following steps:
step 5-1, shooting a video under the condition that the camera is not shielded;
step 5-2, according to the video under the condition that the camera is not shielded, determining a sampling image frame number set which needs to be used by using a random sampling method, namely:
S_k = S_{k-1} ∪ {y_k}

y_k = ⌈θ_k × F⌉

k = 1, …, γ

wherein S_k is the set of sampled image frame numbers after the k-th sampling, S_0 = ∅, F is the total number of frames of the video shot with the camera unshielded, θ is a random variable uniformly distributed on the interval [0, 1), i.e., θ_k is a real number on [0, 1) randomly generated at the k-th sampling, y_k is the image frame number obtained by the k-th sampling, γ is the number of samplings, and S_γ is the final sampling result, i.e., the set of sampled image frame numbers to be used;
step 5-3, reading each frame of the sampling image set according to a sampling sequence, and sequentially executing the steps 1 to 4 on each frame to obtain the number of feature points of 4 areas in the gray level image corresponding to each frame;
step 5-4, sorting the numbers of feature points of all areas in all grayscale images from small to large to obtain the feature point number sequence a = (a_1, a_2, …, a_{4×γ}), wherein a_l represents the l-th smallest number of feature points;
step 5-5, calculating the preset number threshold t based on the box plot analysis method:

t = Q_1 − 1.5 × IQR

IQR = |Q_1 − Q_2|

wherein t represents the preset number threshold, i.e., the lower bound of normal values in the box plot analysis method, Q_1 denotes the lower quartile, Q_2 denotes the upper quartile, and IQR is the interquartile range, i.e., the absolute value of the difference between the upper quartile Q_2 and the lower quartile Q_1;
the lower quartile Q_1 and the upper quartile Q_2 are calculated as follows:

Q_o = a_{c_o} + d_o × (a_{c_o+1} − a_{c_o})

d_o = μ_o − c_o

c_o = ⌊μ_o⌋

μ_o = II(o=1) × 0.25 × (4×γ+1) + II(o=2) × 0.75 × (4×γ+1)

o = 1, 2

wherein μ_o represents the position of the lower quartile Q_1 (when o = 1) or the upper quartile Q_2 (when o = 2) in the feature point number sequence a = (a_1, a_2, …, a_{4×γ}) obtained in step 5-4, II(·) is an indicator function used to distinguish whether the upper quartile Q_2 or the lower quartile Q_1 is currently being calculated, c_o is the integer part of μ_o, and d_o is the fractional part of μ_o.
CN202010736809.6A 2020-07-28 2020-07-28 Method for judging shielding state of camera in real time Active CN111967345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010736809.6A CN111967345B (en) 2020-07-28 2020-07-28 Method for judging shielding state of camera in real time

Publications (2)

Publication Number Publication Date
CN111967345A true CN111967345A (en) 2020-11-20
CN111967345B CN111967345B (en) 2023-10-31

Family

ID=73362971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010736809.6A Active CN111967345B (en) 2020-07-28 2020-07-28 Method for judging shielding state of camera in real time

Country Status (1)

Country Link
CN (1) CN111967345B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070118490A1 (en) * 2003-06-30 2007-05-24 Gyros Patent Ab Confidence determination
US20110164832A1 (en) * 2010-01-04 2011-07-07 Samsung Electronics Co., Ltd. Image-based localization feature point registration apparatus, method and computer-readable medium
CN105139016A (en) * 2015-08-11 2015-12-09 豪威科技(上海)有限公司 Interference detection system for surveillance camera and application method of interference detection system
CN105427276A (en) * 2015-10-29 2016-03-23 重庆电信系统集成有限公司 Camera detection method based on image local edge characteristics
CN105744268A (en) * 2016-05-04 2016-07-06 深圳众思科技有限公司 Camera shielding detection method and device
JP2016134804A (en) * 2015-01-20 2016-07-25 富士通株式会社 Imaging range abnormality discrimination device, imaging range abnormality discrimination method and computer program for imaging range abnormality discrimination
JP2016148956A (en) * 2015-02-10 2016-08-18 株式会社デンソーアイティーラボラトリ Positioning device, positioning method and positioning computer program
CN107710279A (en) * 2015-07-02 2018-02-16 大陆汽车有限责任公司 Static dirty detection and correction
US20180224380A1 (en) * 2017-02-09 2018-08-09 Glasstech, Inc. System and associated method for online measurement of the optical characteristics of a glass sheet
CN108763346A (en) * 2018-05-15 2018-11-06 中南大学 A kind of abnormal point processing method of sliding window box figure medium filtering
CN110008964A (en) * 2019-03-28 2019-07-12 上海交通大学 The corner feature of heterologous image extracts and description method
CN110414385A (en) * 2019-07-12 2019-11-05 淮阴工学院 A kind of method for detecting lane lines and system based on homography conversion and characteristic window
CN110751371A (en) * 2019-09-20 2020-02-04 苏宁云计算有限公司 Commodity inventory risk early warning method and system based on statistical four-bit distance and computer readable storage medium
CN110913212A (en) * 2019-12-27 2020-03-24 上海智驾汽车科技有限公司 Intelligent vehicle-mounted camera shielding monitoring method and device based on optical flow and auxiliary driving system
CN111275658A (en) * 2018-12-03 2020-06-12 北京嘀嘀无限科技发展有限公司 Camera shielding detection method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Hongyan; WANG Xiaofan; GAO Liang; LI Qiangzi; ZHAO Longcai; DU Xin; ZHANG Yuan: "Research on remote sensing extraction of abandoned farmland based on seasonal change characteristics", Remote Sensing Technology and Application, no. 03 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927262A (en) * 2021-03-22 2021-06-08 瓴盛科技有限公司 Camera lens shielding detection method and system based on video
CN112927262B (en) * 2021-03-22 2023-06-20 瓴盛科技有限公司 Camera lens shielding detection method and system based on video
CN113282208A (en) * 2021-05-25 2021-08-20 歌尔科技有限公司 Terminal device control method, terminal device and computer readable storage medium
CN116522417A (en) * 2023-07-04 2023-08-01 广州思涵信息科技有限公司 Security detection method, device, equipment and storage medium for display equipment
CN116522417B (en) * 2023-07-04 2023-09-19 广州思涵信息科技有限公司 Security detection method, device, equipment and storage medium for display equipment

Also Published As

Publication number Publication date
CN111967345B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN111967345A (en) Method for judging shielding state of camera in real time
WO2008009656A1 (en) Image processing for change detection
CN110197185B (en) Method and system for monitoring space under bridge based on scale invariant feature transform algorithm
CN111898610B (en) Card unfilled corner detection method, device, computer equipment and storage medium
CN111047624A (en) Image dim target detection method, device, equipment and storage medium
CN111784624A (en) Target detection method, device, equipment and computer readable storage medium
CN114155285B (en) Image registration method based on gray histogram
CN111814776A (en) Image processing method, device, server and storage medium
CN109660814B (en) Method for detecting deletion tampering of video foreground
CN106778822B (en) Image straight line detection method based on funnel transformation
JP4192719B2 (en) Image processing apparatus and method, and program
JPH07249128A (en) Picture processor for vehicle
Fang et al. 1-D barcode localization in complex background
JPH06308256A (en) Cloudy fog detecting method
CN117218633A (en) Article detection method, device, equipment and storage medium
CN115249024A (en) Bar code identification method and device, storage medium and computer equipment
CN112085683B (en) Depth map credibility detection method in saliency detection
CN114359183A (en) Image quality evaluation method and device, and lens occlusion determination method
WO2021189460A1 (en) Image processing method and apparatus, and movable platform
CN111027560B (en) Text detection method and related device
JP3127598B2 (en) Method for extracting density-varying constituent pixels in image and method for determining density-fluctuation block
CN111985423A (en) Living body detection method, living body detection device, living body detection equipment and readable storage medium
JP3490196B2 (en) Image processing apparatus and method
Yang et al. Object extraction combining image partition with motion detection
Kumari et al. Real-Time Assessment of Edge Detection Techniques in Image Processing: A Performance Comparison

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant