CN111242075A - Method for recognizing sleeping behavior for police supervision


Info

Publication number
CN111242075A
Authority
CN
China
Prior art keywords
head
coordinates
arm
sleeping
desktop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010064790.5A
Other languages
Chinese (zh)
Inventor
刘军
冯江远
吴介桅
曹治江
朱军伟
蒲俗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Guoyi Electronic Technology Co., Ltd.
Original Assignee
Chengdu Guoyi Electronic Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Guoyi Electronic Technology Co., Ltd.
Priority to CN202010064790.5A
Publication of CN111242075A
Legal status: Withdrawn


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107: Static hand or arm

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a method for identifying sleeping behavior for police supervision. Video is decoded into a sequence of images, the images are analyzed by a sleeping-behavior detector, the detection results are evaluated, an alarm alerts an auditor to possible sleeping behavior, and the suspected prone-sleeping behavior is then confirmed by manual review. The invention accurately localizes the prone-sleeping action with a mature object detection algorithm from deep learning, subdivides the scene step by step from the whole to its parts for fine-grained processing, and, combined with engineering practice, is well suited to a public security supervision system and reduces false alarms.

Description

Method for recognizing sleeping behavior for police supervision
Technical Field
The invention relates to the field of computer vision and image processing, and in particular to a sleeping-behavior identification method for police supervision.
Background
At present, in order to improve the law enforcement capability of police officers and standardize civilized law enforcement, the authorities are strengthening supervision of the law enforcement process in interrogation rooms. Following the principle of strengthening the police through science and technology, modern technology is used to supervise interrogation-room law enforcement and to hand a large amount of tedious video review work over to machines, which effectively improves supervision efficiency and safeguards civilized law enforcement. During supervision it has been found that officers sleeping prone on the desk during interrogation occurs from time to time; it damages the image of civilized law enforcement and also reduces case-handling efficiency, so the harm of prone sleeping is considerable. A method that monitors in real time whether interrogators are sleeping prone can therefore greatly reduce such violations.
Against this situation, the prior art has the following problems:
(1) there is no method that accurately detects the prone-sleeping action and applies it in a public security supervision system;
(2) the false alarm rate is high, and there is no reasonable and effective way to verify the results fed back by the system.
Disclosure of Invention
In view of these problems, the invention aims to provide a sleeping-behavior recognition method for police supervision.
This aim is achieved through the following technical scheme: a method for recognizing sleeping behavior for police supervision, characterized by comprising the following steps:
S1: video data collection: collect video data from the interrogation room in real time;
S2: video decoding: decode the video data into a sequence of images for sleeping-behavior detection;
S3: sleeping-behavior detection: detect the respective regions of the arms, the desktop, and the head;
S4: sleeping-behavior judgment: decide whether prone-sleeping behavior exists by comparing the detected-region results against specified thresholds;
S5: image alarm reminding: raise an alarm when prone-sleeping behavior exists;
S6: manual review: confirm the prone-sleeping behavior through manual review. The sketch below shows how these steps fit together.
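The following is a minimal sketch of how steps S1 to S6 might be wired together; the detector callables, the `alarm` hook, and the helpers `center_distance` and `head_to_arm_line` (sketched under steps S401 and S402 below) are hypothetical placeholders, not part of the patent itself.

```python
import cv2  # OpenCV, assumed available for video capture and decoding

RHO_MAX = 50  # desktop-to-head center distance threshold (step S401)
D_MAX = 20    # head-to-arm-line distance threshold (step S402)

def monitor(stream_url, desk_det, head_det, arm_det, alarm):
    """Hypothetical S1-S6 loop: decode frames, detect regions, judge, alarm.

    desk_det and head_det are assumed to return one (x, y, w, h) box or
    None; arm_det is assumed to return a (left, right) pair of boxes or None.
    """
    cap = cv2.VideoCapture(stream_url)              # S1: live video stream
    while True:
        ok, frame = cap.read()                      # S2: decode into images
        if not ok:
            break
        desk, head, arms = desk_det(frame), head_det(frame), arm_det(frame)  # S3
        if desk is None or head is None or arms is None:
            continue                                # some region not visible
        rho = center_distance(desk, head)           # S401: head vs. desktop
        d = head_to_arm_line(head, *arms)           # S402: head vs. arm line
        if rho <= RHO_MAX and d <= D_MAX:           # S4: prone sleeping?
            alarm(frame)                            # S5 alarm; S6 manual review
    cap.release()
```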
Specifically, the step S3 further includes the following sub-steps:
S301: detecting the rectangular region of the desktop;
S302: detecting the rectangular region of the head;
S303: detecting the rectangular regions of the arms.
Specifically, the step S4 further includes the following sub-steps:
S401: judging whether the distance between the regions obtained in steps S301 and S302 is smaller than a first threshold;
S402: judging whether the distance between the regions obtained in steps S302 and S303 is smaller than a second threshold.
Specifically, the step S4 further includes: when the distance results from steps S401 and S402 are both smaller than their corresponding thresholds, determining that the person is in the prone-sleeping state, and executing steps S5 and S6.
Specifically, the step S301 further includes the following sub-steps:
S3011: acquiring, in advance, image data from different interrogation rooms, at different angles, and over different time periods;
S3012: selecting all image data containing the desktop;
S3013: annotating the data to produce label data;
S3014: training on the data with the CenterNet object detection algorithm to obtain a desktop detection model;
S3015: predicting the rectangular region BBboxA(x, y, w, h) of the desktop in the image with the trained CenterNet model, where x and y are the coordinates of the rectangle's center point and w and h are the width and height of the rectangular region, and overlaying the text "desktop" above the rectangular frame. A hedged code sketch follows.
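As a concrete illustration of S3014 and S3015, the sketch below follows the public CenterNet reference code (github.com/xingyizhou/CenterNet); the checkout path, the weights file `desktop_model.pth`, the class index, and the score threshold are all assumptions, and option names may differ between versions of that repository.

```python
import sys
sys.path.append('CenterNet/src/lib')   # assumed path to the CenterNet checkout
from opts import opts
from detectors.detector_factory import detector_factory

# 'ctdet' is CenterNet's standard 2-D detection task; the model file is
# hypothetical and would come from the training of step S3014.
opt = opts().init(['ctdet', '--load_model', 'desktop_model.pth'])
detector = detector_factory[opt.task](opt)

def detect_desktop(image, score_thr=0.3):
    """Return the highest-scoring desktop box as BBboxA = (x, y, w, h),
    with (x, y) the center point, or None if no confident detection."""
    dets = detector.run(image)['results'][1]   # class 1 = desktop (assumed)
    if len(dets) == 0:
        return None
    x1, y1, x2, y2, score = max(dets, key=lambda r: r[4])
    if score < score_thr:
        return None
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)
```

The head and arm detectors of steps S302 and S303 would be built the same way, each trained on its own labels.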
Specifically, the step S302 further includes the following sub-steps:
S3021: acquiring, in advance, image data from different interrogation rooms, at different angles, and over different time periods;
S3022: selecting all image data containing heads;
S3023: annotating the data to produce label data;
S3024: training on the data with the CenterNet object detection algorithm to obtain a head detection model;
S3025: predicting the rectangular region BBboxB(x, y, w, h) of the head in the image with the trained CenterNet model, where x and y are the coordinates of the rectangle's center point and w and h are the width and height of the rectangular region, and overlaying the text "head" above the rectangular frame.
The step S303 further includes the following sub-steps:
S3031: acquiring, in advance, image data from different interrogation rooms, at different angles, and over different time periods;
S3032: selecting all image data containing arms;
S3033: annotating the data to produce label data;
S3034: training on the data with the CenterNet object detection algorithm to obtain an arm detection model;
S3035: predicting, with the trained CenterNet model, the rectangular region BBboxCL(x, y, w, h) of the left arm and the rectangular region BBboxCR(x, y, w, h) of the right arm in the image, where x and y are the coordinates of the rectangle's center point and w and h are the width and height of the rectangular region, and overlaying the texts "left arm" and "right arm" above the respective rectangular frames.
Specifically, in step S3035, the left and right arms are distinguished by designating the detected arm nearer the left side of the image as the left arm and the detected arm nearer the right side as the right arm, as in the sketch below.
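A minimal sketch of this left/right assignment, assuming each detected arm box is given as (x, y, w, h) with (x, y) the center point:

```python
def assign_left_right(arm_boxes):
    """Order two detected arm boxes per step S3035: the box whose center
    lies nearer the image's left edge becomes the left arm."""
    if len(arm_boxes) < 2:
        return None                 # need both arms for the later line fit
    left, right = sorted(arm_boxes[:2], key=lambda box: box[0])
    return left, right
```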
Specifically, in step S401, it is calculated whether the Euclidean distance between the center point of the desktop and the center point of the head is smaller than a given value:
first, let the center point of the desktop BBboxA(x, y, w, h) be denoted A(x2, y2), and the center point of the head BBboxB(x, y, w, h) be abbreviated B(x1, y1).
The distance is calculated as:
$\rho = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$
When ρ ≤ 50, the head is considered very close to the desktop, and a prone-sleeping state may exist. A sketch of this check follows.
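A minimal sketch of the ρ check in step S401, assuming boxes in the (x, y, w, h) center form used above:

```python
import math

def center_distance(box_a, box_b):
    """Euclidean distance rho between the center of the desktop box BBboxA
    and the center of the head box BBboxB (step S401)."""
    (x2, y2), (x1, y1) = box_a[:2], box_b[:2]
    return math.hypot(x2 - x1, y2 - y1)

# Example: desktop center (320, 240) and head center (335, 210) give
# rho = sqrt(15**2 + 30**2) ~= 33.5 <= 50, so the head counts as close.
```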
Specifically, in step S402, whether the distance from the head to the straight line through the arms is smaller than a given value is calculated as follows:
let the equation of the straight line L be Ax + By + C = 0;
let the center points of the left arm BBboxCL(x, y, w, h) and the right arm BBboxCR(x, y, w, h) be abbreviated L(x, y) and R(x, y), and the center point of the head BBboxB(x, y, w, h) be abbreviated B(x0, y0);
substituting the two arm center points into the equation of line L readily yields the values of A, B, and C;
the distance d is then obtained from the point-to-line distance formula:
$d = \dfrac{|Ax_0 + By_0 + C|}{\sqrt{A^2 + B^2}}$
When d ≤ 20, the head and the arms are considered close to each other, and a prone-sleeping state may exist. A sketch of this check follows.
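A minimal sketch of the d check in step S402; the line through L(xl, yl) and R(xr, yr) has coefficients A = yr - yl, B = xl - xr, C = xr*yl - xl*yr, which is the substitution described above:

```python
import math

def head_to_arm_line(head_box, left_box, right_box):
    """Distance d from the head center B(x0, y0) to the line L through the
    left- and right-arm centers (step S402)."""
    x0, y0 = head_box[:2]
    xl, yl = left_box[:2]
    xr, yr = right_box[:2]
    a, b = yr - yl, xl - xr          # line Ax + By + C = 0 through L and R
    c = xr * yl - xl * yr
    denom = math.hypot(a, b)
    if denom == 0:                   # arm centers coincide; line undefined
        return math.hypot(x0 - xl, y0 - yl)
    return abs(a * x0 + b * y0 + c) / denom
```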
The beneficial effects of the invention are:
(1) the prone-sleeping recognition method effectively standardizes the behavior of case-handling personnel;
(2) prone-sleeping recognition is greatly optimized and improved: a mature object detection algorithm is used, and the detected distances are computed and judged against set threshold ranges, which improves the accuracy of prone-sleeping recognition;
(3) manual review effectively reduces the probability of false alarms.
Drawings
FIG. 1 is a block diagram of the architecture flow of the present invention;
FIG. 2 is a block diagram of the steps S3 and S4 according to the present invention.
Detailed Description
In order to more clearly understand the technical features, objects and effects of the present invention, embodiments of the present invention will be described with reference to the accompanying drawings.
In this embodiment, as shown in FIG. 1, a method for recognizing sleeping behavior for police supervision includes the following steps:
step one, collecting video data of an interrogation room in real time.
Step two: decode the video into a sequence of images, as sketched below.
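A minimal sketch of step two using OpenCV; the one-frame-per-second sampling rate is an assumption, since the patent does not specify one:

```python
import cv2

def decode_frames(source, sample_every_s=1.0):
    """Yield frames from a video file or stream URL at a fixed sampling rate."""
    cap = cv2.VideoCapture(source)           # file path or RTSP URL
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unreported
    step = max(int(fps * sample_every_s), 1)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield frame                      # one BGR image per sample
        idx += 1
    cap.release()
```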
Step three: detect the head in the sequence images with a head detection module to obtain the rectangular region of the head, denoted BBboxA(x, y, w, h), where x and y are the coordinates of the rectangle's center point and w and h are the width and height of the rectangular region, and overlay the text "head" above the rectangular frame. The main technical points of head detection are: (1) acquiring in advance about 30,000 images from different interrogation rooms, at different angles, and over different time periods; (2) selecting all image data containing heads; (3) annotating the data to produce label data; (4) training the head data with the CenterNet object detection algorithm (a recent detection framework based on object center point estimation; paper: "Objects as Points") to obtain a head detection model; (5) predicting the rectangular region BBboxA(x, y, w, h) of the head in the image with the trained CenterNet model.
Step four: detect the desktop in the sequence images with a desktop detection module to obtain the rectangular frame BBboxB(x, y, w, h) of the desktop, and overlay the text "desktop" above the rectangular frame. Here x, y, w, and h are defined as in step three, and the main technical points of detecting the desktop region BBboxB(x, y, w, h) are the same as the method of step three.
Step five: detect the two arms in the sequence images with an arm detection module to obtain the rectangular frames of the left arm BBboxCL(x, y, w, h) and the right arm BBboxCR(x, y, w, h), and overlay the texts "left arm" and "right arm" above the respective rectangular frames. Here x, y, w, and h are defined as in step three, and the main technical points of detecting the left arm BBboxCL(x, y, w, h) and the right arm BBboxCR(x, y, w, h) are the same as the method of step three; the left and right arms are distinguished by taking the arm detected nearer the left side of the image as the left arm and the arm detected nearer the right side as the right arm.
Step six: calculate whether the distance from the head to the straight line through the arms is smaller than a given value:
firstly, let the equation of the straight line L be Ax + By + C = 0;
secondly, let the center points of the left arm BBboxCL(x, y, w, h) and the right arm BBboxCR(x, y, w, h) be abbreviated L(x, y) and R(x, y), and the center point of the head BBboxA(x, y, w, h) be abbreviated A(x0, y0);
then substitute the two arm center points into the equation of line L to readily obtain the values of A, B, and C;
finally, obtain the distance d from the point-to-line distance formula:
$d = \dfrac{|Ax_0 + By_0 + C|}{\sqrt{A^2 + B^2}}$
When d ≤ 20, the head is close to the line of the arms, and it is then judged whether the desktop and the head are within a certain Euclidean distance.
Step seven: calculate whether the Euclidean distance between the center point of the desktop and the center point of the head is smaller than a given value:
first, let the center point of the desktop BBboxB(x, y, w, h) be denoted B(x2, y2), and the center point of the head BBboxA(x, y, w, h) be abbreviated A(x1, y1).
The distance is calculated as:
$\rho = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$
When ρ ≤ 50, the head is considered very close to the desktop, and the person may be sleeping prone.
Step eight: draw the conclusion by combining the distances d and ρ:
only when d ≤ 20 and ρ ≤ 50 is the person considered to be in the prone-sleeping state, in which case alarm information is generated; this combined check is sketched below.
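Step eight as code, a trivial but explicit sketch; the thresholds 20 and 50 are the values given in the text, though in practice they would depend on image resolution and camera placement:

```python
def is_prone_sleeping(d, rho, d_max=20, rho_max=50):
    """True only when the head is near the arm line AND near the desktop."""
    return d <= d_max and rho <= rho_max
```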
Step nine: finally, based on the alarm information from step eight, staff perform a manual recheck and confirmation.
The foregoing shows and describes the general principles and principal features of the present invention and its advantages. Those skilled in the art will understand that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from its spirit and scope, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A method for recognizing sleeping behavior for police supervision, characterized by comprising the following steps:
S1: video data collection: collecting video data from an interrogation room in real time;
S2: video decoding: decoding the video data into a sequence of images for sleeping-behavior detection;
S3: sleeping-behavior detection: detecting the respective regions of the arms, the desktop, and the head;
S4: sleeping-behavior judgment: judging whether prone-sleeping behavior exists by comparing the detected-region results against specified thresholds;
S5: image alarm reminding: raising an alarm when prone-sleeping behavior exists;
S6: manual review: confirming the prone-sleeping behavior through manual review;
the step S3 further includes the following sub-steps:
S301: detecting the rectangular region of the desktop;
S302: detecting the rectangular region of the head;
S303: detecting the rectangular regions of the arms;
the step S4 further includes the following sub-steps:
S401: judging whether the distance between the regions obtained in steps S301 and S302 is smaller than a first threshold;
S402: judging whether the distance between the regions obtained in steps S302 and S303 is smaller than a second threshold.
2. The method as claimed in claim 1, wherein the step S4 further includes: when the distance results obtained in steps S401 and S402 are both smaller than their corresponding thresholds, determining that the person is in the prone-sleeping state, and executing steps S5 and S6.
3. The method as claimed in claim 1, wherein the step S301 further comprises the following sub-steps:
S3011: acquiring, in advance, image data from different interrogation rooms, at different angles, and over different time periods;
S3012: selecting all image data containing the desktop;
S3013: annotating the data to produce label data;
S3014: training on the data with the CenterNet object detection algorithm to obtain a desktop detection model;
S3015: predicting the rectangular region BBboxA(x, y, w, h) of the desktop in the image with the trained CenterNet model, where x and y are the coordinates of the rectangle's center point and w and h are the width and height of the rectangular region, and overlaying the text "desktop" above the rectangular frame.
4. The method as claimed in claim 1, wherein the step S302 further comprises the following sub-steps:
S3021: acquiring, in advance, image data from different interrogation rooms, at different angles, and over different time periods;
S3022: selecting all image data containing heads;
S3023: annotating the data to produce label data;
S3024: training on the data with the CenterNet object detection algorithm to obtain a head detection model;
S3025: predicting the rectangular region BBboxB(x, y, w, h) of the head in the image with the trained CenterNet model, where x and y are the coordinates of the rectangle's center point and w and h are the width and height of the rectangular region, and overlaying the text "head" above the rectangular frame.
5. The method as claimed in claim 1, wherein the step S303 further comprises the following sub-steps:
S3031: acquiring, in advance, image data from different interrogation rooms, at different angles, and over different time periods;
S3032: selecting all image data containing arms;
S3033: annotating the data to produce label data;
S3034: training on the data with the CenterNet object detection algorithm to obtain an arm detection model;
S3035: predicting, with the trained CenterNet model, the rectangular region BBboxCL(x, y, w, h) of the left arm and the rectangular region BBboxCR(x, y, w, h) of the right arm in the image, where x and y are the coordinates of the rectangle's center point and w and h are the width and height of the rectangular region, and overlaying the texts "left arm" and "right arm" above the respective rectangular frames.
6. The method of claim 5, wherein in step S3035, the detected arm near the left side of the image is designated as a left arm, and the detected arm near the right side of the image is designated as a right arm, so as to distinguish the left arm from the right arm.
7. The method of claim 1, wherein in step S401 it is calculated whether the Euclidean distance between the center point of the desktop and the center point of the head is smaller than a given value:
first, let the center point of the desktop BBboxA(x, y, w, h) be denoted A(x2, y2), and the center point of the head BBboxB(x, y, w, h) be abbreviated B(x1, y1);
the distance is calculated as:
$\rho = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$
When ρ ≤ 50, the head is considered very close to the desktop, and a prone-sleeping state may exist.
8. The method of claim 1, wherein in step S402 whether the distance from the head to the straight line through the arms is smaller than a given value is calculated as follows:
let the equation of the straight line L be Ax + By + C = 0;
let the center points of the left arm BBboxCL(x, y, w, h) and the right arm BBboxCR(x, y, w, h) be abbreviated L(x, y) and R(x, y), and the center point of the head BBboxB(x, y, w, h) be abbreviated B(x0, y0);
substituting the two arm center points into the equation of line L readily yields the values of A, B, and C;
the distance d is obtained from the point-to-line distance formula:
$d = \dfrac{|Ax_0 + By_0 + C|}{\sqrt{A^2 + B^2}}$
When d ≤ 20, the head and the arms are considered close to each other, and a prone-sleeping state may exist.
Application CN202010064790.5A, filed 2020-01-20 (priority date 2020-01-20): Method for recognizing sleeping behavior for police supervision. Published as CN111242075A; withdrawn.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010064790.5A | 2020-01-20 | 2020-01-20 | Method for recognizing sleeping behavior for police supervision

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010064790.5A | 2020-01-20 | 2020-01-20 | Method for recognizing sleeping behavior for police supervision

Publications (1)

Publication Number | Publication Date
CN111242075A | 2020-06-05

Family

ID=70874761

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010064790.5A | Method for recognizing sleeping behavior for police supervision (CN111242075A, withdrawn) | 2020-01-20 | 2020-01-20

Country Status (1)

Country | Link
CN | CN111242075A

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112597961A * | 2020-12-30 | 2021-04-02 | Shanghai University | Interest target extraction method and system based on big data
CN113269142A * | 2021-06-18 | 2021-08-17 | CETC Big Data Research Institute Co., Ltd. | Method for identifying sleeping behaviors of person on duty in field of inspection
CN114234393A * | 2021-11-19 | 2022-03-25 | Qingdao Haier Air Conditioner General Corp., Ltd. | Air conditioner sitting posture auxiliary control method and device and air conditioner
CN114234393B * | 2021-11-19 | 2024-05-24 | Qingdao Haier Air Conditioner General Corp., Ltd. | Sitting posture auxiliary control method and device for air conditioner and air conditioner



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication
Application publication date: 2020-06-05