CN115984973A - Human body abnormal behavior monitoring method for peeping-proof screen - Google Patents


Info

Publication number: CN115984973A (granted as CN115984973B)
Application number: CN202310272365.9A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 杨俊辉, 吴旭镇, 张立雄
Applicant and current assignee: Shenzhen Jiarun Original Xinxian Technology Co., Ltd.
Legal status: Granted; active

Classification: Image Analysis

Abstract

The invention relates to the technical field of image data processing, in particular to a human body abnormal behavior monitoring method for an anti-peeping screen. Because the existing three-step search method requires a large amount of computation and a long time when analyzing the movement behavior of suspicious personnel, while the monitoring result must be fed back in time, the method improves the selection of search pixel points: feature pixel points are screened out by calculating the gray difference, the distance feature and the corner-point count of the edge line pixel points, so that the computation amount and computation time are reduced by restricting the motion analysis to corner points and feature pixel points. A warning coefficient is then determined from the overall motion amount obtained from the corner points and feature pixel points, and whether peeping occurs is judged by monitoring this coefficient, so that the anti-peeping monitoring result is obtained more accurately and in a more timely manner.

Description

Human body abnormal behavior monitoring method for peeping-proof screen
Technical Field
The invention relates to the technical field of image data processing, in particular to a human body abnormal behavior monitoring method for an anti-peeping screen.
Background
With the rapid development of the internet, information spreads and diffuses ever faster; at the same time, people's awareness of self-protection and of their rights has grown, and personal privacy protection receives more and more attention. Mobile terminals such as smart phones and smart televisions are used in public places more and more often, so the chances of privacy disclosure also increase: for example, when an office uses a smart television for a meeting, important information may be disclosed when passers-by peep at the screen. In current anti-peeping methods on mobile terminals, a camera detects the number of human faces or human eyes to judge whether a person without viewing permission is present; if such a person is found, a prompt box is popped up or the screen is locked directly to prevent peeping.
In practice, the inventors found that the above prior art has the following disadvantage: the existing anti-peeping method uses a computer vision algorithm to identify unauthorized suspicious persons in captured images and then directly turns off the screen. In an actual scene, however, not every suspicious person actually peeps; the prior art does not analyze the subsequent behavior of the suspicious person, and directly turning off the screen wastes hardware control resources and seriously degrades the user experience.
Disclosure of Invention
In order to solve the technical problem that the prior art identifies unauthorized suspicious persons in captured images with a computer vision algorithm and then directly turns off the screen without analyzing their subsequent behavior, which wastes hardware control resources and seriously degrades the user experience, the invention provides a human body abnormal behavior monitoring method for an anti-peeping screen. The adopted technical scheme is as follows:
acquiring continuous multi-frame real-time images of a peep-proof camera, identifying personnel in the real-time images, and judging whether the personnel are suspicious personnel;
taking continuous multi-frame real-time images with suspicious people as images to be analyzed, and performing down-sampling on the images to be analyzed to obtain down-sampled images;
obtaining edge lines and corner points in the down-sampled image; calculating the gray difference between each edge line pixel point on the same edge line in the down-sampled image and the other edge line pixel points within a first preset neighborhood; calculating the distance feature between each edge line pixel point on the same edge line and the corner points within a second preset neighborhood; counting the number of corner points on the edge line containing each edge line pixel point; and screening the edge line pixel points through the gray difference, the distance feature and the corner-point count to obtain feature pixel points;
acquiring the overall motion amount of the suspicious person from the corner points and feature pixel points in two adjacent down-sampled frames, and calculating the variation of the overall motion amount; obtaining a warning coefficient from the accumulated value and the variation of the overall motion amount over the consecutive down-sampled frames, and monitoring whether the suspicious person peeps through the value of the warning coefficient.
Further, the step of obtaining the gray difference of an edge line pixel point includes:
calculating the mean absolute difference between the gray value of each edge line pixel point on the same edge line in the down-sampled image and the gray values of the other edge line pixel points within the first preset neighborhood, to obtain the gray difference of the edge line pixel point.
Further, the step of obtaining the distance feature of an edge line pixel point includes:
calculating the mean Euclidean distance between each edge line pixel point on the same edge line in the down-sampled image and the corner points within the second preset neighborhood, to obtain the distance feature of the edge line pixel point.
Further, the step of obtaining the feature pixel points includes:
calculating the product of the gray difference, the distance feature and the corner-point count of an edge line pixel point to obtain the possibility that the edge line pixel point is a feature pixel point; presetting a possibility threshold, and screening out the edge line pixel points whose possibility exceeds the threshold as feature pixel points.
Further, the acquiring of the overall motion amount includes:
obtaining the motion vectors of the corner points and feature pixel points computed in the down-sampled images by the three-step search method, and calculating the mean magnitude of all nonzero motion vectors between two adjacent down-sampled frames to obtain the overall motion amount.
Further, the acquiring of the variation of the overall motion amount includes:
calculating the difference between the overall motion amounts obtained for the second frame and the last frame of the consecutive down-sampled images, to obtain the variation of the overall motion amount.
Further, the step of obtaining the warning coefficient includes:
calculating the ratio of the accumulated value of the overall motion amount to the variation of the overall motion amount, and subtracting this ratio to obtain the warning coefficient.
Further, the step of monitoring for peeping through the value of the warning coefficient includes:
presetting a warning coefficient threshold; when the obtained warning coefficient exceeds the threshold, the suspicious person is judged to be peeping intentionally, at which point a warning is issued and the screen is turned off.
Further, the method of identifying a person in the real-time image and judging whether the person is suspicious includes:
identifying persons appearing in the real-time image with a convolutional neural network, and recognizing their faces with a face recognition algorithm to judge whether they are suspicious.
The invention has the following beneficial effects. First, consecutive multi-frame real-time images in which a suspicious person appears are taken as the images to be analyzed, so that whether peeping is intentional is judged from the subsequent movement of the suspicious person instead of closing the screen directly on the basis of the face recognition result. Because the application scene places high demands on the timeliness of the monitoring result, and more image pixel points mean more computation and longer computation time, the images to be analyzed are down-sampled to reduce the computation required by the subsequent algorithm. Furthermore, the gray difference of an edge line pixel point in the down-sampled image is calculated to reflect how prominent that pixel point is within its edge line, and its distance feature is calculated to reflect how far it lies from the corner points; the gray difference, the distance feature and the corner-point count are combined to judge whether the edge line pixel point can serve as a feature pixel point, so that feature pixel points are extracted effectively and the reference value of the subsequent motion information is increased.
Corner points and feature pixel points are used as the basis for acquiring motion information, which reduces the computation amount and time needed to obtain the overall motion amount of the suspicious person and improves the timeliness of the monitoring result. The subsequent movement of the suspicious person can be analyzed clearly through the variation and the accumulated value of the overall motion amount, and whether the suspicious person is peeping intentionally can be judged directly through the warning coefficient obtained from them. The invention therefore not only judges from face recognition whether peeping may occur, but also judges from the subsequent movement of the suspicious person whether peeping is intentional, reducing the computation amount and time of that analysis and improving the timeliness of the anti-peeping monitoring result.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for monitoring abnormal human behavior for a peep-proof screen according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means adopted by the present invention to achieve its purpose and their effects, the following detailed description, with reference to the accompanying drawings and preferred embodiments, describes specific embodiments, structures, features and effects of the human body abnormal behavior monitoring method for an anti-peeping screen. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the human body abnormal behavior monitoring method for the anti-peeping screen in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for monitoring abnormal human behavior for peeping-proof screen according to an embodiment of the present invention is shown, and the method includes the following steps.
S1, acquiring continuous multi-frame real-time images of the peep-proof camera, identifying personnel in the real-time images, and judging whether the personnel are suspicious personnel.
In the embodiment of the invention, the object to be protected is a large-screen smart television used in work meetings; the camera is placed above it and can capture the entire angular range from which the screen is visible. It should be noted that in practice the protected object may be any mobile terminal with a display, and the installation position of the camera may be chosen according to the scene, as long as every angle from which the screen is visible can be captured. A person with viewing permission is a person allowed to use and view the screen; an unauthorized suspicious person is a person not allowed to use and view the screen.
In the embodiment of the invention, the camera captures one frame of real-time image per second; the capture frequency can be chosen by the implementer in practice. To obtain a clearer image and reduce the computation needed for subsequent analysis, the captured images are grayed and denoised. The weighted-average graying method and the Gaussian filtering used in this preprocessing are well known to those skilled in the art, and the detailed steps are not repeated here.
The specific steps of identifying the personnel in the real-time image and judging whether the personnel are suspicious comprise:
and identifying personnel in the preprocessed real-time image through a convolutional neural network, and when the personnel in the real-time image is identified, performing face identification algorithm processing on the real-time image to judge whether the real-time image is suspicious. In the embodiment of the invention, the face recognition is carried out by using a face recognition LBPH algorithm, and the specific training process of the face recognition is as follows: the face information of the person with the authority is used as a training object, face recognition is carried out by using an LBPH algorithm, the face data of the person with the authority is used as a template image, the obtained real-time image of the person with the authority is compared with the template image by using the LBPH algorithm, and whether suspicious persons exist in the obtained real-time image is judged according to the similarity of the face data. If the identified person is a person with authority, subsequent analysis is not performed on the person, the camera normally continues to monitor and shoot pictures, and if the identified person is a suspicious person without authority, subsequent action conditions of the person need to be tracked and analyzed to judge whether the screen information is intentionally peeped. It should be noted that specific algorithm structures and training methods of the convolutional neural network and the face recognition algorithm are technical means well known to those skilled in the art, and detailed steps are not described again.
S2, taking consecutive multi-frame real-time images in which a suspicious person appears as the images to be analyzed, and down-sampling them to obtain down-sampled images.
When a suspicious person passes the screen, the person's glance may sweep over it inadvertently; the camera then captures the face image and the person is judged to be an unauthorized suspicious person. To judge whether peeping is intentional, the person's subsequent movement must be analyzed, so the real-time images in which the suspicious person appears are taken as the images to be analyzed. Since the real-time images are consecutive multi-frame images, the images to be analyzed screened out by identifying the suspicious person are also consecutive multi-frame images.
Because the anti-peeping scene to which the invention applies places a high demand on the timeliness of the monitoring result, the time spent computing on the images to be analyzed should be kept as short as possible. Before analysis, therefore, the images to be analyzed are down-sampled to reduce the computation in image data processing and improve the timeliness of the monitoring result. The specific steps for obtaining the down-sampled image are:
In the embodiment of the invention, a superpixel segmentation algorithm performs a blocking operation on the image to be analyzed with a pre-block size of 10 × 10, yielding the down-sampled image. The implementer may choose the pre-block size in practice; superpixel segmentation is well known to those skilled in the art, and the detailed steps are not repeated here.
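The blocking operation above can be sketched as follows; plain block averaging is used here as a stand-in for the superpixel-based blocking, which the embodiment leaves to a standard algorithm:

```python
import numpy as np

def downsample(gray: np.ndarray, block: int = 10) -> np.ndarray:
    """Down-sample a grayscale image by averaging non-overlapping
    block x block tiles -- a simple stand-in for the superpixel-based
    blocking described in the embodiment."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of block
    tiles = gray[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

# A 20 x 30 image shrinks to 2 x 3 with 10 x 10 blocks.
img = np.arange(600, dtype=float).reshape(20, 30)
small = downsample(img)
```

Each output pixel summarizes one 10 × 10 block, so the subsequent edge, corner and motion analysis operates on roughly 1/100 of the original pixel count.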
S3, obtaining edge lines and corner points in the down-sampled image; calculating the gray difference between each edge line pixel point on the same edge line in the down-sampled image and the other edge line pixel points within the first preset neighborhood; calculating the distance feature between each edge line pixel point on the same edge line and the corner points within the second preset neighborhood; counting the number of corner points on the edge line containing each edge line pixel point; and screening the edge line pixel points through the gray difference, the distance feature and the corner-point count to obtain the feature pixel points.
The purpose of obtaining the edge lines and corner points in the down-sampled image is as follows. Analyzing the movement of the suspicious person would normally require computing the motion vector of every pixel point in the down-sampled image; computing motion vectors for that many pixel points takes a long time and imposes hardware requirements, hurting the timeliness of the monitoring result, and letting meaningless or weakly characteristic pixel points participate in the motion analysis also degrades the accuracy of the motion information. Therefore, to keep the monitoring result accurate while improving timeliness, only feature pixel points and corner points are screened out for motion-vector computation, which reduces the number of pixel points involved. Most of the feature pixel points capable of expressing the movement of the suspicious person lie on the edge lines of the down-sampled image, and corner points are characteristic pixel points of the image that can also express that movement. The specific steps for obtaining the edge lines and corner points in the down-sampled image are:
the edge detection algorithm is used for obtaining the edge information in the down-sampling image, in the embodiment of the invention, canny operators are used for carrying out edge detection on the down-sampling image, and edge lines in the down-sampling image are obtained. And acquiring the corners in the downsampled image by using corner detection, and acquiring the corners in the downsampled image by SUSAN corner detection. It should be noted that the Canny operator edge detection and the SUSAN corner detection are all technical means well known to those skilled in the art, and detailed steps are not described again.
After the edge lines and corner points are obtained, the corner points alone are not numerous enough to compute the movement of the suspicious person accurately from their motion vectors. It is therefore necessary to obtain as many pixel points on the edge lines as possible as feature pixel points, screening the edge line pixel points by their gray difference, distance feature and corner-point count. The specific steps for obtaining the feature pixel points are:
(1) The gray difference between each edge line pixel point on the same edge line in the down-sampled image and the other edge line pixel points within the first preset neighborhood is obtained as:

$$Q_i = \frac{1}{n}\sum_{j=1}^{n}\left|g_i - g_j\right|$$

where $Q_i$ denotes the gray difference of the $i$-th edge line pixel point in the down-sampled image, $g_i$ is the gray value of the $i$-th edge line pixel point, $g_j$ is the gray value of the $j$-th pixel point within the first preset neighborhood, and $n$ is the number of remaining pixel points within the first preset neighborhood of the edge line pixel point being evaluated. The formula gives the mean absolute difference between the gray value of the edge line pixel point being evaluated and the gray values of the other pixel points in its first preset neighborhood. Note that edge line pixel points are the pixel points lying on the edge lines of the down-sampled image.
In the embodiment of the invention, the first preset neighborhood consists of an even number $n$ of pixel points adjacent to the pixel point being evaluated on the same edge line; here $n = 8$, i.e. the 4 pixel points on each side of it. The implementer may choose another even number in practice. If the edge line pixel point being evaluated lies near the end of its edge line and the first preset neighborhood contains too few pixel points, that pixel point is discarded and its gray difference is not computed.
The larger the gray difference between an edge line pixel point and the pixel points in its first preset neighborhood, the more prominent that pixel point is within the edge line, and the more likely it is to serve as a feature pixel point.
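The gray difference of one edge line pixel point can be sketched as follows; the gray values are assumed to be given in the order in which the pixel points appear along their edge line:

```python
import numpy as np

def gray_difference(line_vals: np.ndarray, i: int, half: int = 4):
    """Mean absolute gray-value difference between edge-line pixel i and
    its `half` neighbours on each side along the same edge line (the
    first preset neighborhood; n = 2 * half = 8 in the embodiment).
    Returns None for pixels too close to the line ends, which the
    method discards."""
    if i < half or i + half >= len(line_vals):
        return None                      # incomplete neighborhood: discard
    neigh = np.r_[line_vals[i - half:i], line_vals[i + 1:i + half + 1]]
    return float(np.mean(np.abs(neigh - line_vals[i])))

# Pixel 4 differs from each of its 8 neighbours by 40 gray levels.
vals = np.array([10., 10., 10., 10., 50., 10., 10., 10., 10.])
g = gray_difference(vals, 4)   # -> 40.0
```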
(2) The distance feature between each edge line pixel point on the same edge line in the down-sampled image and the corner points within the second preset neighborhood is obtained as:

$$D_i = \frac{1}{m}\sum_{k=1}^{m} d_{i,k}$$

where $D_i$ is the distance feature of the $i$-th edge line pixel point in the down-sampled image, $d_{i,k}$ is the Euclidean distance between the $i$-th edge line pixel point and the $k$-th corner point within the second preset neighborhood, and $m$ is the number of corner points within the second preset neighborhood. The formula gives the mean Euclidean distance between the edge line pixel point being evaluated and the corner points within its second preset neighborhood on the same edge line.
In the embodiment of the invention, the second preset neighborhood consists of the $m$ corner points on the same edge line that are nearest to the pixel point being evaluated; here $m = 10$, i.e. the 10 corner points nearest to the pixel point are found on its edge line. The implementer may set the second preset neighborhood in practice; if the edge line contains fewer than 10 corner points, the actual number of corner points is used.
The purpose of calculating the distance features of the edge line pixel points in the down-sampled image is as follows. When determining the movement of the suspicious person, as many feature pixel points as possible are needed so that their motion vectors can be computed. Corner points are already confirmed feature points of the down-sampled image, being defined as extreme points of the image, i.e. points prominent in some attribute. Edge line pixel points close to a corner point on the same edge line therefore need not be taken as feature pixel points, which would only increase the cost of the subsequent motion-vector computation, while edge line pixel points far (in Euclidean distance) from the corner points on the same edge line are more likely to be worthwhile feature pixel points. Hence the distance feature is calculated: the larger it is, the farther the pixel point lies from the corner points and the more likely it is to become a feature pixel point; the smaller it is, the closer the pixel point lies to the corner points and the less likely it is to become a feature pixel point.
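A sketch of the distance feature under these definitions (the coordinates and corner list below are illustrative):

```python
import numpy as np

def distance_feature(pixel, corners, m: int = 10) -> float:
    """Mean Euclidean distance from an edge-line pixel to its m nearest
    corner points on the same edge line (the second preset neighborhood);
    if the edge line holds fewer than m corners, all of them are used."""
    if len(corners) == 0:
        return 0.0
    d = np.linalg.norm(np.asarray(corners, float) - np.asarray(pixel, float), axis=1)
    d.sort()
    return float(d[:m].mean())

# Corners at distances 3, 4 and 5 from the pixel -> mean distance 4.0.
f = distance_feature((0, 0), [(3, 0), (0, 4), (3, 4)])
```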
(3) The number of corner points on the edge line containing each edge line pixel point is counted. In the down-sampled image, the edge information obtained by edge detection contains not only the contour of the human body but also the person's clothing. In the embodiment of the invention, to simplify the subsequent computation, the counted number of corner points on the edge line containing each edge line pixel point is normalized:

$$C = 1 - e^{-N}$$

where $C$ is the normalized number of corner points on the edge line containing the edge line pixel point being evaluated, $e$ is the natural constant, and $N$ is the number of corner points on that edge line. The more corner points the edge line has, the closer $C$ is to 1; the fewer, the closer $C$ is to 0, so the corner-point count is normalized into $[0,1]$.
When the edge line containing an edge line pixel point has more corner points, the pixel points on that edge line are more likely to become feature pixel points; when it has no corner points, the edge line is ignored and its pixel points are not analyzed.
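The normalization described here requires a mapping that sends 0 corner points to 0 and approaches 1 as the count grows; $1 - e^{-N}$ is one such mapping consistent with the description (the exact expression in the original formula image is an assumption):

```python
import math

def normalized_corner_count(n_corners: int) -> float:
    """Map a raw corner-point count on an edge line into [0, 1):
    0 corners -> 0, and the value approaches 1 as the count grows."""
    return 1.0 - math.exp(-n_corners)
```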
(4) The edge line pixel points are screened through the gray difference, the distance feature and the corner-point count to obtain the feature pixel points. As established above: the larger the gray difference of an edge line pixel point, the more prominent it is within its edge line and the more likely it is to be a feature pixel point; the larger its distance feature, the farther it lies from the corner points and the more likely it is to be a feature pixel point; and the more corner points on its edge line, the more likely its pixel points are to be feature pixel points.
Whether an edge line pixel point becomes a feature pixel point can therefore be decided from the gray difference, the distance feature and the corner-point count; the gray difference and the distance feature are first standardized into $[0,1]$. The possibility that an edge line pixel point is a feature pixel point is computed by multiplying its gray difference, distance feature and corner-point count, all with value ranges in $[0,1]$: the closer the product is to 1, the higher the possibility; the closer to 0, the lower. A possibility threshold is preset, and an edge line pixel point whose possibility exceeds the threshold is screened out as a feature pixel point. In the embodiment of the invention the preset possibility threshold is 0.8; in other application scenarios it may be set according to the specific implementation.
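The screening step reduces to a product and a threshold; a minimal sketch (the indicator values below are illustrative):

```python
def is_feature_pixel(gray_diff: float, dist_feat: float, corner_term: float,
                     threshold: float = 0.8):
    """Multiply the three normalized indicators (each in [0, 1]) and keep
    the edge-line pixel as a feature pixel when the product exceeds the
    preset possibility threshold (0.8 in the embodiment)."""
    likelihood = gray_diff * dist_feat * corner_term
    return likelihood > threshold, likelihood

keep, p = is_feature_pixel(0.95, 0.9, 0.99)   # product ~ 0.846 -> kept
```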
S4, obtaining the overall motion amount of the suspicious person according to the corner points and feature pixel points in two adjacent frames of down-sampled images, and calculating the change of the overall motion amount; obtaining a warning coefficient from the accumulated value and the change of the overall motion amount over consecutive multi-frame down-sampled images, and monitoring whether the suspicious person is peeping through the value of the warning coefficient.
After the corner points and feature pixel points of the consecutive down-sampled frames are screened out, the motion vectors of these points are obtained by the three-step search method. Compared with searching and matching every pixel point, searching and matching only the corner points and feature pixel points greatly reduces the amount of calculation and the calculation time. A motion vector describes the change in position of the same corner point or feature pixel point between two adjacent frames. It should be noted that the three-step search method is a technical means well known to those skilled in the art, and its detailed steps are not repeated here.
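For reference, a minimal version of the classic three-step search over sum-of-absolute-differences (SAD) block matching might look as follows; the block size, initial step, and SAD criterion are conventional choices of this sketch, not details taken from the patent:

```python
import numpy as np

def three_step_search(prev, curr, pt, block=8, step=4):
    """Classic three-step block-matching search: score the 9 offsets around
    the current best offset at the current step size with SAD, keep the best,
    halve the step, and repeat until the step reaches 1. Returns the motion
    vector (dy, dx) of the block centred at `pt` from `prev` to `curr`."""
    h, w = prev.shape
    y, x = pt
    half = block // 2
    ref = prev[y - half:y + half, x - half:x + half].astype(int)
    best = (0, 0)
    while step >= 1:
        scored = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                oy, ox = best[0] + dy * step, best[1] + dx * step
                yy, xx = y + oy, x + ox
                if half <= yy <= h - half and half <= xx <= w - half:
                    cand = curr[yy - half:yy + half, xx - half:xx + half]
                    scored.append((int(np.abs(ref - cand).sum()), (oy, ox)))
        best = min(scored)[1]
        step //= 2
    return best
```

Running this only at the screened corner and feature pixel points, rather than at every pixel, is what yields the claimed reduction in calculation.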
The overall motion amount of two adjacent down-sampled frames is obtained by calculating the average of the moduli of the non-zero motion vectors of all corner points and feature pixel points in the two frames.
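A direct sketch of this averaging step, assuming the motion vectors are given as (dy, dx) pairs:

```python
import numpy as np

def overall_motion(vectors):
    """Overall motion amount between two adjacent down-sampled frames: the
    mean modulus of the non-zero motion vectors of all corner and feature
    pixel points (zero vectors, i.e. stationary points, are excluded)."""
    mags = [np.hypot(dy, dx) for (dy, dx) in vectors]
    nonzero = [m for m in mags if m > 0]
    return float(sum(nonzero) / len(nonzero)) if nonzero else 0.0
```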
The specific steps for judging whether the suspicious person is peeping include:
the change of the overall motion amount is obtained by the following formula:

$$\Delta V = V_{1} - V_{n-1}$$

in the formula, $\Delta V$ represents the change of the overall motion amount, $V_{1}$ represents the overall motion amount of the head of the suspicious person obtained from the first and second adjacent down-sampled frames after the suspicious person appears, and $V_{n-1}$ represents the overall motion amount obtained from the last and second-to-last of the $n$ consecutively acquired down-sampled frames. In the embodiment of the present invention 10 frames are acquired, that is, $V_{n-1} = V_{9}$, the overall motion amount obtained from the ninth and tenth frames. It should be noted that the implementer may determine the number of consecutive frames in the implementation process.

The result of $\Delta V$ may or may not be greater than zero. When $\Delta V$ is close to zero or less than zero, the final overall motion amount has not decreased, or is even larger than the initial overall motion amount, indicating that the suspicious person in the down-sampled images is moving at a constant or increasing speed; in this case the person is considered more likely to be passing by and less likely to be peeping. When $\Delta V$ is far greater than zero, the suspicious person is decelerating, and the possibility of peeping is considered high.
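Given the sequence of pairwise overall motion amounts over the monitoring window, the change reduces to a single subtraction; this sketch assumes the window's motion amounts are collected in a list, first pair to last pair:

```python
def motion_change(motions):
    """Change of the overall motion amount over the monitoring window: the
    first pairwise motion amount minus the last one. A result far above
    zero means the person is decelerating in front of the screen."""
    return motions[0] - motions[-1]
```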
The accumulated value of the overall motion amount is obtained by the following formula:

$$V_{\text{sum}} = \sum_{i=2}^{n} V_{i-1,i}$$

in the formula, $V_{\text{sum}}$ represents the accumulated value of the overall motion amount over the consecutive multi-frame images after the suspicious person appears, $n$ represents the set number of consecutive down-sampled frames ($n = 10$ in the embodiment of the present invention), $i$ denotes any down-sampled frame except the first, and $V_{i-1,i}$ represents the overall motion amount calculated from the $(i-1)$-th and $i$-th frame images. In the embodiment of the present invention, $V_{\text{sum}}$ means the sum of the overall motion amounts calculated over the 9 frames following the discovery of a suspicious person: because each motion vector is obtained from two adjacent down-sampled frames, 10 frames yield 9 overall motion amounts, and the sum of these 9 values is the accumulated value.

When $V_{\text{sum}}$ is larger, the suspicious person in front of the screen is moving faster overall and covering more distance, is more likely to be merely passing by, and is less likely to be peeping; when $V_{\text{sum}}$ is smaller, the person is moving more slowly, has lingered in front of the screen longer, and the possibility of peeping is higher.
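The accumulated value is simply the sum of the pairwise motion amounts; a sketch under the same list representation:

```python
def motion_accumulation(motions):
    """Accumulated value of the overall motion amount: the sum of the n-1
    pairwise motion amounts obtained from n consecutive down-sampled
    frames (9 values for the 10-frame window of the embodiment)."""
    return sum(motions)
```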
The warning coefficient is obtained by the following formula:

$$W = 1 - \frac{V_{\text{sum}}^{*}}{\Delta V^{*}}$$

in the formula, $W$ is the warning coefficient for monitoring whether the suspicious person is peeping, $V_{\text{sum}}^{*}$ is the normalized accumulated value of the overall motion amount, and $\Delta V^{*}$ is the normalized change of the overall motion amount. When the suspicious person in front of the screen moves more and more slowly, the accumulated motion $V_{\text{sum}}^{*}$ is small while the change $\Delta V^{*}$ is large, so the ratio $V_{\text{sum}}^{*}/\Delta V^{*}$ is small and the required warning coefficient $W$ is large: the smaller the ratio, the slower the movement, the smaller the total motion amount, and the higher the possibility that the suspicious person in front of the screen is peeping.
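The decision rule can be sketched as follows; the epsilon guard against a zero denominator is an added assumption, and the normalization of the two inputs is assumed to happen upstream:

```python
def warning_coefficient(acc_norm, change_norm, eps=1e-6):
    """Warning coefficient per claim 7: one minus the ratio of the
    normalized accumulated motion amount to the normalized change of the
    motion amount. The eps guard against division by zero is an assumption
    of this sketch, not part of the patent."""
    return 1.0 - acc_norm / (change_norm + eps)

def is_peeping(acc_norm, change_norm, threshold=0.7):
    """Little accumulated motion plus strong deceleration pushes the
    coefficient toward 1; above the preset threshold (0.7 in the
    embodiment) a peeping warning is raised."""
    return warning_coefficient(acc_norm, change_norm) > threshold
```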
Whether the suspicious person exhibits peeping behavior is monitored through the change of the warning coefficient. A warning coefficient threshold is preset; when the warning coefficient exceeds the threshold, it is judged that the suspicious person is peeping, a peeping warning is issued, and the screen is turned off. In the embodiment of the present invention the preset warning coefficient threshold is 0.7; in other application scenarios it may be set according to the specific implementation.
In summary, in monitoring abnormal behavior for screen peeping prevention, the embodiment of the invention not only judges whether a person is suspicious through face recognition, but also analyzes the subsequent movement behavior of the suspicious person to judge whether the person is intentionally peeping. Because the existing three-step search method requires a large amount of calculation and a long calculation time when analyzing the movement behavior of a suspicious person, while the monitoring result needs to be fed back in time, the selection of search pixel points is improved: feature pixel points are screened out by calculating the gray difference, the distance feature and the corner count of the edge line pixel points, and restricting the search to corner points and feature pixel points reduces the amount of calculation and the calculation time. A warning coefficient is then determined from the overall motion amount obtained from the corner points and feature pixel points, and peeping behavior is monitored through the warning coefficient, so that the peeping-prevention monitoring result is judged more accurately and in a more timely manner.
It should be noted that: the sequence of the above embodiments of the present invention is only for description, and does not represent the advantages or disadvantages of the embodiments. The processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments.

Claims (9)

1. A human body abnormal behavior monitoring method for peeping prevention of a screen is characterized by comprising the following steps:
acquiring continuous multi-frame real-time images of a peep-proof camera, identifying personnel in the real-time images, and judging whether the personnel are suspicious personnel;
taking continuous multi-frame real-time images with suspicious people as images to be analyzed, and performing down-sampling on the images to be analyzed to obtain down-sampled images;
obtaining edge lines and corner points in the down-sampled image; calculating the gray difference between each edge line pixel point on the same edge line in the down-sampled image and the other edge line pixel points in a first preset neighborhood range; calculating the distance feature between each edge line pixel point on the same edge line and the corner points in a second preset neighborhood range; calculating the number of corner points on the edge line where each edge line pixel point is located; and screening the edge line pixel points through the gray difference, the distance feature and the corner count to obtain feature pixel points;
acquiring the overall motion amount of the suspicious person according to the corner points and feature pixel points in two adjacent frames of down-sampled images, and calculating the change of the overall motion amount; obtaining a warning coefficient through the accumulated value and the change of the overall motion amount of consecutive multi-frame down-sampled images, and monitoring whether the suspicious person peeps through the value of the warning coefficient.
2. The method for monitoring the abnormal human behavior for the peep-proof screen according to claim 1, wherein the step of obtaining the gray level difference of the edge line pixel points comprises the following steps:
and calculating the average value of the difference absolute values of the gray values of each edge line pixel point on the same edge line in the down-sampling image and other edge line pixel points in the first preset neighborhood range to obtain the gray difference of the edge line pixel points.
3. The method for monitoring the abnormal human behavior for the anti-peeping screen according to claim 1, wherein the step of obtaining the distance characteristics of the edge line pixel points comprises:
and calculating the average value of the Euclidean distance between each edge line pixel point on the same edge line in the down-sampling image and the corner point in the second preset neighborhood range to obtain the distance characteristic of the edge line pixel point.
4. The method for monitoring the abnormal human behavior for the anti-peeping screen according to claim 1, wherein the step of obtaining the characteristic pixel points comprises:
calculating the product of the gray difference, the distance feature and the corner count of each edge line pixel point to obtain the possibility that the edge line pixel point is a feature pixel point; presetting a possibility threshold, and screening the edge line pixel points exceeding the possibility threshold as feature pixel points.
5. The method for monitoring the abnormal behavior of the human body for the peep-proof screen according to claim 1, wherein the acquiring of the overall quantity of motion comprises:
according to the corner points and feature pixel points calculated in the down-sampled images, obtaining the motion vectors of the corner points and feature pixel points by a three-step search method, and calculating the average of the moduli of all non-zero motion vectors in two adjacent down-sampled frames to obtain the overall motion amount.
6. The method for monitoring abnormal human behavior for the anti-peeping screen according to claim 1, wherein the step of acquiring the variation of the overall quantity of motion comprises:
calculating the difference between the overall motion amount obtained from the first two frames and that obtained from the last two frames of the consecutive down-sampled frames to obtain the change of the overall motion amount.
7. The method for monitoring abnormal human behavior for the anti-peeping screen according to claim 1, wherein the step of acquiring the warning coefficient comprises:
and calculating the ratio of the accumulated value of the overall motion quantity to the variation of the overall motion quantity, and subtracting the ratio of the accumulated value to the variation to obtain an alarm coefficient.
8. The method for monitoring abnormal human behavior for peep-proof screen according to claim 7, wherein the step of monitoring peep-proof by the value of the warning coefficient comprises:
and presetting a warning coefficient threshold, and when the obtained warning coefficient exceeds the warning coefficient threshold, determining that the suspicious person is an intentional peep-proof object, and sending a warning and closing a screen at the moment.
9. The method for monitoring the abnormal human behavior for the anti-peeping screen according to claim 1, wherein the method for identifying the person in the real-time image and judging whether the person is a suspicious person comprises the following steps:
and identifying the person appearing in the real-time image according to the convolutional neural network, identifying the face of the person appearing through a face identification algorithm, and judging whether the person appears as a suspicious person.
CN202310272365.9A 2023-03-21 2023-03-21 Human body abnormal behavior monitoring method for peeping-preventing screen Active CN115984973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310272365.9A CN115984973B (en) 2023-03-21 2023-03-21 Human body abnormal behavior monitoring method for peeping-preventing screen


Publications (2)

Publication Number Publication Date
CN115984973A true CN115984973A (en) 2023-04-18
CN115984973B CN115984973B (en) 2023-06-27

Family

ID=85965235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310272365.9A Active CN115984973B (en) 2023-03-21 2023-03-21 Human body abnormal behavior monitoring method for peeping-preventing screen

Country Status (1)

Country Link
CN (1) CN115984973B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311383A (en) * 2023-05-16 2023-06-23 成都航空职业技术学院 Intelligent building power consumption management system based on image processing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110090340A1 (en) * 2009-10-19 2011-04-21 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN104902265A (en) * 2015-05-22 2015-09-09 深圳市赛为智能股份有限公司 Background edge model-based video camera anomaly detection method and system
CN106020456A (en) * 2016-05-11 2016-10-12 北京暴风魔镜科技有限公司 Method, device and system for acquiring head posture of user
CN106657628A (en) * 2016-12-07 2017-05-10 努比亚技术有限公司 Anti-peeping method, device and terminal of mobile terminal
CN110298237A (en) * 2019-05-20 2019-10-01 平安科技(深圳)有限公司 Head pose recognition methods, device, computer equipment and storage medium
CN110706259A (en) * 2019-10-12 2020-01-17 四川航天神坤科技有限公司 Space constraint-based cross-shot tracking method and device for suspicious people
CN112507772A (en) * 2020-09-03 2021-03-16 广州市标准化研究院 Face recognition security system and suspicious person detection and early warning method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAI Zhizhao; LYU Jianxun: "Design and Implementation of a Motion Detection Platform Based on Kinect", Applied Science and Technology, no. 03, pages 68 - 73 *


Also Published As

Publication number Publication date
CN115984973B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US9104914B1 (en) Object detection with false positive filtering
CN105893920B (en) Face living body detection method and device
CN106056079B (en) A kind of occlusion detection method of image capture device and human face five-sense-organ
KR101215948B1 (en) Image information masking method of monitoring system based on face recognition and body information
CN109670430A (en) A kind of face vivo identification method of the multiple Classifiers Combination based on deep learning
CN103942539B (en) A kind of oval accurate high efficiency extraction of head part and masking method for detecting human face
CN111144366A (en) Strange face clustering method based on joint face quality assessment
WO2019114145A1 (en) Head count detection method and device in surveillance video
CN112396011B (en) Face recognition system based on video image heart rate detection and living body detection
CN107483894A (en) Judge to realize the high ferro station video monitoring system of passenger transportation management based on scene
CN115984973B (en) Human body abnormal behavior monitoring method for peeping-preventing screen
JP2012212969A (en) Image monitoring apparatus
CN114885119A (en) Intelligent monitoring alarm system and method based on computer vision
KR20080049206A (en) Face recognition method
CN108460319B (en) Abnormal face detection method and device
CN116962598B (en) Monitoring video information fusion method and system
CN116385316B (en) Multi-target image dynamic capturing method and related device
KR20200060868A (en) multi-view monitoring system using object-oriented auto-tracking function
Hadis et al. The impact of preprocessing on face recognition using pseudorandom pixel placement
KR100800322B1 (en) Digital video recoding system for privacy protection and method thereof
CN112907206B (en) Business auditing method, device and equipment based on video object identification
CN113014914B (en) Neural network-based single face-changing short video identification method and system
CN115909400A (en) Identification method for using mobile phone behaviors in low-resolution monitoring scene
Chondro et al. Detecting abnormal massive crowd flows: Characterizing fleeing en masse by analyzing the acceleration of object vectors
CN111985331B (en) Detection method and device for preventing trade secret from being stolen

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant