CN112380951A - Method and device for identifying abnormal behavior, computer equipment and storage medium - Google Patents


Info

Publication number
CN112380951A
Authority
CN
China
Prior art keywords
area
distance
forearm
region
identified
Prior art date
Legal status
Granted
Application number
CN202011245772.3A
Other languages
Chinese (zh)
Other versions
CN112380951B (en)
Inventor
刘少林
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202011245772.3A priority Critical patent/CN112380951B/en
Publication of CN112380951A publication Critical patent/CN112380951A/en
Application granted granted Critical
Publication of CN112380951B publication Critical patent/CN112380951B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a method, an apparatus, a computer device, and a storage medium for identifying abnormal behavior, which are used to improve the efficiency of identifying abnormal behavior in videos. The method includes: sequentially acquiring each video frame to be identified in a video to be identified, and determining at least one region to be identified in each video frame to be identified, where each region to be identified includes a head-and-shoulder region and a forearm region of a user; determining a first distance between the head-and-shoulder region and the forearm region in each region to be identified of each video frame to be identified; if there is at least one target region whose first distance is within a first distance range, determining a second distance between the forearm region of each target region and the forearm region of the region to be identified adjacent to that target region, and performing target detection on the area between two forearm regions whose second distance is within a second distance range; and outputting an abnormal behavior identification result according to the target detection result.

Description

Method and device for identifying abnormal behavior, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for identifying abnormal behavior, a computer device, and a storage medium.
Background
During an examination, cheating by examinees can seriously undermine the fairness of the examination. At present, examinations in various fields rely on invigilation to prevent cheating during the examination process.
Traditional invigilation takes two forms. In the first, an invigilating teacher proctors on site, or a camera remotely monitors the examination room, to determine whether abnormal behavior exists in the examination room. In the second, after the examination ends, features of the recorded video are identified to determine whether abnormal behavior exists in the video. However, the manual invigilation approach requires substantial labor cost, the invigilation time equals the examination time, and invigilation efficiency is low. As for the automatic invigilation approach, because the number of examinees is large, the volume of recorded video is large and the amount of data to be processed is correspondingly large, so the efficiency of identifying abnormal behavior in the video is low and real-time invigilation cannot be achieved.
Disclosure of Invention
The embodiments of the present application provide a method and an apparatus for identifying abnormal behavior, a computer device, and a storage medium, which are used to improve the efficiency of identifying abnormal behavior in videos.
In a first aspect, a method for identifying abnormal behavior is provided, including:
sequentially acquiring each video frame to be identified in a video to be identified, and determining at least one region to be identified in each video frame to be identified, where each region to be identified includes a head-and-shoulder region and a forearm region of a user;
determining a first distance between the head-and-shoulder region and the forearm region in each region to be identified of each video frame to be identified; if there is at least one target region whose first distance is within a first distance range, determining a second distance between the forearm region of each target region and the forearm region of the region to be identified adjacent to that target region, and performing target detection on the area between two forearm regions whose second distance is within a second distance range;
and outputting an abnormal behavior identification result according to the target detection result, where the target detection result indicates whether a target object exists in the area between the two forearm regions, and the abnormal behavior identification result indicates whether abnormal behavior exists.
Optionally, determining a second distance between the forearm region of each target region and the forearm region of the region to be identified adjacent to the target region specifically includes:
mapping the forearm region of each target region, and the forearm region of the region to be identified adjacent to the target region, into a coordinate system, performing keypoint detection on the forearm regions, and determining a first key coordinate point in each forearm region, where the first key coordinate point indicates the position of the user's forearm;
and determining the coordinate distance between the first key coordinate point of each target region and the first key coordinate point of the region to be identified adjacent to the target region as the second distance.
Optionally, the first key coordinate point is the coordinate point of the wrist joint of the forearm region in the coordinate system.
Optionally, after determining the first distance between the head-and-shoulder region and the forearm region in each region to be identified of each video frame to be identified, the method further includes:
for a region to be identified whose first distance is not within the first distance range, determining a target detection area according to the forearm region in that region to be identified, and performing target detection on the target detection area;
and outputting an abnormal behavior identification result according to the target detection result.
Optionally, determining a target detection area according to the forearm region in the region to be identified specifically includes:
mapping the forearm region into a coordinate system, performing keypoint detection on the forearm region, and determining four second key coordinate points in the forearm region, where the second key coordinate points indicate the vertices of a quadrilateral area enclosed by the user's two forearms;
and determining the quadrilateral area whose vertices are the second key coordinate points as the target detection area.
Optionally, the second key coordinate points include the coordinate points of the wrist joints and elbow joints of the forearm region in the coordinate system.
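As a non-limiting sketch (not part of the disclosure) of how such a quadrilateral target detection area might be derived from the four second key coordinate points, the following fragment takes the wrist and elbow points of both forearms as vertices and returns an axis-aligned box bounding that quadrilateral, which could then be cropped and passed to a target detector. The function name, the `(x, y)` point format, and the bounding-box approximation of the quadrilateral are all assumptions.

```python
def quad_region(left_wrist, left_elbow, right_wrist, right_elbow):
    """Bound the quadrilateral whose vertices are the wrist and elbow
    coordinate points of the two forearms with an axis-aligned box
    (x_min, y_min, x_max, y_max) suitable for cropping.

    All points are (x, y) tuples in the shared coordinate system.
    This is an illustrative approximation, not the claimed method.
    """
    pts = [left_wrist, left_elbow, right_wrist, right_elbow]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```

In practice the exact quadrilateral could be used as a polygonal mask instead; the axis-aligned box is simply the cheapest crop for a downstream detector.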
Optionally, after determining the first distance between the head-and-shoulder region and the forearm region in each region to be identified of each video frame to be identified, the method further includes:
for a region to be identified whose first distance is not within the first distance range, mapping the head-and-shoulder region into a coordinate system, identifying the face orientation in the head-and-shoulder region, and obtaining the face orientation angle between the face orientation in each head-and-shoulder region and a specified coordinate axis;
and if the face orientation angles corresponding to two adjacent regions to be identified satisfy a preset angle relationship, outputting an abnormal behavior identification result indicating that abnormal behavior exists, where the preset angle relationship represents the behavior of two adjacent users facing each other.
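As a hedged illustration of one possible "preset angle relationship" (the disclosure does not specify it), the sketch below treats two neighbours as facing each other when their face orientation angles, measured in degrees against the same specified coordinate axis, differ by roughly 180 degrees. The function name and the tolerance value are assumptions.

```python
def facing_each_other(angle_a, angle_b, tolerance=20.0):
    """Return True if two face orientation angles (degrees, measured
    against the same coordinate axis) are roughly opposed, i.e. the two
    users plausibly face each other.

    One possible reading of the 'preset angle relationship'; the 180-degree
    criterion and the tolerance are illustrative assumptions.
    """
    diff = abs(angle_a - angle_b) % 360.0
    diff = min(diff, 360.0 - diff)          # shortest angular difference
    return abs(diff - 180.0) <= tolerance
```

A real deployment would also account for the seating geometry (two examinees side by side "facing each other" may turn toward each other rather than fully opposing), so the relationship would likely be tuned per camera layout.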
In a second aspect, an apparatus for identifying abnormal behavior is provided, including:
an acquisition module, configured to sequentially acquire each video frame to be identified in a video to be identified, and determine at least one region to be identified in each video frame to be identified, where each region to be identified includes a head-and-shoulder region and a forearm region of a user;
a processing module, configured to determine a first distance between the head-and-shoulder region and the forearm region in each region to be identified of each video frame to be identified; if there is at least one target region whose first distance is within a first distance range, determine a second distance between the forearm region of each target region and the forearm region of the region to be identified adjacent to that target region, and perform target detection on the area between two forearm regions whose second distance is within a second distance range; and output an abnormal behavior identification result according to the target detection result, where the target detection result indicates whether a target object exists in the area between the two forearm regions, and the abnormal behavior identification result indicates whether abnormal behavior exists.
Optionally, the processing module is specifically configured to:
map the forearm region of each target region, and the forearm region of the region to be identified adjacent to the target region, into a coordinate system, perform keypoint detection on the forearm regions, and determine a first key coordinate point in each forearm region, where the first key coordinate point indicates the position of the user's forearm;
and determine the coordinate distance between the first key coordinate point of each target region and the first key coordinate point of the region to be identified adjacent to the target region as the second distance.
Optionally, the first key coordinate point is the coordinate point of the wrist joint of the forearm region in the coordinate system.
Optionally, the processing module is further configured to:
after the first distance between the head-and-shoulder region and the forearm region in each region to be identified of each video frame to be identified is determined, for a region to be identified whose first distance is not within the first distance range, determine a target detection area according to the forearm region in that region to be identified, and perform target detection on the target detection area;
and output an abnormal behavior identification result according to the target detection result.
Optionally, the processing module is specifically configured to:
map the forearm region into a coordinate system, perform keypoint detection on the forearm region, and determine four second key coordinate points in the forearm region, where the second key coordinate points indicate the vertices of a quadrilateral area enclosed by the user's two forearms;
and determine the quadrilateral area whose vertices are the second key coordinate points as the target detection area.
Optionally, the second key coordinate points include the coordinate points of the wrist joints and elbow joints of the forearm region in the coordinate system.
Optionally, the processing module is further configured to:
after determining the first distance between the head-and-shoulder region and the forearm region in each region to be identified of each video frame to be identified, for a region to be identified whose first distance is not within the first distance range, map the head-and-shoulder region into a coordinate system, identify the face orientation in the head-and-shoulder region, and obtain the face orientation angle between the face orientation in each head-and-shoulder region and a specified coordinate axis;
and if the face orientation angles corresponding to two adjacent regions to be identified satisfy a preset angle relationship, output an abnormal behavior identification result indicating that abnormal behavior exists, where the preset angle relationship represents the behavior of two adjacent users facing each other.
In a third aspect, a computer device is provided, including:
a memory, configured to store program instructions;
and a processor, configured to call the program instructions stored in the memory and execute the method according to the first aspect in accordance with the obtained program instructions.
In a fourth aspect, a storage medium is provided, storing computer-executable instructions for causing a computer to perform the method according to the first aspect.
In the embodiments of the present application, each video frame to be identified in the video to be identified is identified in sequence, which reduces missed abnormal behavior and improves the accuracy of identifying abnormal behavior. In addition, before target detection is performed, a pre-screening strategy filters out the regions to be identified in which the behavior of passing an article cannot occur, that is, the regions in which the distance between the user's head-and-shoulder region and forearm region is small, thereby reducing the amount of computation in the target detection process and improving the efficiency of identifying abnormal behavior.
Drawings
Fig. 1 is a schematic view of an application scenario of a method for identifying abnormal behavior according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for identifying abnormal behavior according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a first principle of identifying abnormal behavior according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a principle of identifying abnormal behavior according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a third principle of recognizing abnormal behavior according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a principle of recognizing an abnormal behavior according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a principle of recognizing abnormal behavior according to an embodiment of the present application;
fig. 8 is a schematic diagram six illustrating a principle of recognizing abnormal behavior according to an embodiment of the present application;
fig. 9 is a schematic diagram seven illustrating a principle of recognizing abnormal behavior according to an embodiment of the present application;
fig. 10 is a schematic diagram eight illustrating a principle of identifying abnormal behavior according to an embodiment of the present application;
fig. 11 is a schematic diagram nine illustrating a principle of identifying abnormal behavior according to an embodiment of the present application;
fig. 12 is a schematic diagram ten illustrating a principle of identifying abnormal behavior according to an embodiment of the present application;
fig. 13 is a first schematic structural diagram of an apparatus for detecting a target area according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of a device for detecting a target area according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In addition, in the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of the associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one of the following" or similar expressions refer to any combination of the listed items, including any combination of singular or plural items.
During an examination, cheating by examinees can seriously undermine the fairness of the examination, so examinations in various fields rely on invigilation to prevent cheating. To save labor cost, more and more examination rooms use cameras for invigilation; for example, an invigilating teacher remotely monitors several examination rooms through multiple monitoring views, or, after the examination ends, feature recognition is performed on the video captured by the cameras to determine abnormal behavior in the video. However, in the remote invigilation mode, the teacher must continuously watch every monitoring view, the possibility of omission is high, and the efficiency of determining examinees' abnormal behavior is low. The mode of performing feature recognition on the captured video cannot determine abnormal behavior in real time and is also inefficient, so measures cannot be taken in time against an examinee's abnormal behavior, which in turn affects the fairness and atmosphere of the whole examination room.
In view of this, the present application provides a method for identifying abnormal behavior, which may be applied to a terminal device or a network device. The terminal device may be a mobile phone, a tablet computer, a personal computer, or the like; the network device may be a local server, a third-party server, a cloud server, or the like.
In the embodiments of the present application, each video frame to be identified in the video to be identified is identified in sequence, which reduces missed abnormal behavior and improves the accuracy of identifying abnormal behavior. Before target detection is performed, the distance between the user's head-and-shoulder region and forearm region is determined, and target detection is performed to identify whether the behavior of passing an article exists only when this distance is large. Through this pre-screening strategy, the regions to be identified in which the behavior of passing an article cannot occur, that is, the regions in which the distance between the user's head-and-shoulder region and forearm region is small, are filtered out. This reduces the amount of computation in the target detection process and improves both the efficiency and the real-time performance of identifying abnormal behavior.
Please refer to fig. 1, which is a schematic diagram of an application scenario of the method for identifying abnormal behavior according to an embodiment of the present application. The application scenario includes a photographing device 101, a recognition device 102, and a processing device 103, any two of which can communicate with each other. The communication may be wired, for example over a network cable or a serial line, or wireless, for example via Bluetooth or wireless fidelity (Wi-Fi); the specific manner is not limited.
As an embodiment, the photographing device 101 and the recognition device 102 may be the same device; alternatively, the recognition device 102 and the processing device 103 may be the same device; alternatively, the photographing device 101 and the processing device 103 may be the same device; alternatively, all three may be the same device. In the embodiments of the present application, the photographing device 101, the recognition device 102, and the processing device 103 are described as three different devices.
The following is a brief description of the interaction process between the devices based on the application scenario of fig. 1.
The photographing device 101 captures the examination room to form a video to be identified and sends it to the recognition device 102, which receives it. The recognition device 102 sequentially acquires each video frame to be identified in the video to be identified and determines at least one region to be identified in each video frame to be identified; each region to be identified includes a head-and-shoulder region and a forearm region of a user.
After obtaining the regions to be identified, the recognition device 102 determines a first distance between the head-and-shoulder region and the forearm region in each region to be identified of each video frame to be identified. If the recognition device 102 determines that there is at least one target region whose first distance is within a first distance range, it further determines, for each such target region, a second distance between the forearm region of the target region and the forearm region of the region to be identified adjacent to it. For two forearm regions whose second distance is within a second distance range, the recognition device 102 performs target detection on the area between them.
The recognition device 102 outputs an abnormal behavior identification result according to the target detection result. The target detection result indicates whether a target object exists in the area between the two forearm regions; the abnormal behavior identification result indicates whether abnormal behavior exists.
Please refer to fig. 2, which is a flowchart illustrating a method for identifying abnormal behavior according to an embodiment of the present application. The following describes a method for identifying abnormal behavior.
S201, sequentially obtaining each video frame to be identified in the video to be identified.
While photographing the examination room, the photographing device 101 may periodically send the captured video, that is, the video to be identified, to the recognition device 102; or, to reduce data loss, it may send the video to be identified when the occupancy of transmission resources is low; or, to use the computing resources of the recognition device 102 more reasonably, it may send the video to be identified when the computing-resource occupancy of the recognition device 102 is low. After the photographing device 101 sends the video to be identified, the recognition device 102 receives it.
The video to be identified includes a plurality of video frames to be identified arranged in time order, and may be transmitted as a video stream; the specific transmission manner is not limited.
After receiving the video to be identified sent by the photographing device 101, the recognition device 102 may sequentially acquire each video frame of the video in time order; or it may, according to frame type, sequentially acquire each key frame (I-frame) of the video and use these as the video frames to be identified.
S202, determining at least one to-be-identified area in each to-be-identified video frame.
As the recognition device 102 sequentially acquires each video frame to be identified, it performs region identification on each frame. There are various methods for identifying regions in a video frame, for example using a trained region identification model, or using a conventional image-processing method such as an edge detection algorithm. The following describes the process of determining at least one region to be identified in a video frame, taking the trained region identification model as an example.
The recognition device 102 inputs the video frame to be identified into the trained region identification model. The model extracts a plurality of candidate regions from the video frame and performs feature extraction on each candidate region to obtain a corresponding feature vector. According to each feature vector, the model identifies the category of each candidate region. From the candidate regions classified as the user's head-and-shoulder region and those classified as the user's forearm region, the recognition device 102 obtains the regions to be identified in the video frame; that is, each region to be identified includes a head-and-shoulder region and a forearm region of a user. From all the determined head-and-shoulder regions and forearm regions, the recognition device 102 obtains at least one region to be identified in each video frame to be identified.
As an example, the region to be identified may also include other body parts of the user, such as the hands.
As an embodiment, since the photographing device 101 captures the same examination room, the examinees in each frame of the same video to be identified should be the same. After obtaining at least one region to be identified in each video frame, the recognition device 102 may associate each region to be identified with a user identifier. The association may be made through a trained target-tracking model, or through the position of each region to be identified in each video frame; the specific manner is not limited. In this way, across different video frames, the regions to be identified of the same user carry the same user identifier, and within one video frame, the regions of different users carry different user identifiers. The user identifier may be the user's name, student number, or user ID, and is not limited here.
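A position-based association between frames could, for example, be sketched as a greedy intersection-over-union match: each known user's region from the previous frame claims the best-overlapping box in the current frame, and unmatched boxes receive fresh identifiers. This is only one plausible reading of "associating through the position of each region"; the function names, the integer user IDs, and the threshold value are assumptions, not part of the disclosure.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def associate(prev_regions, curr_boxes, threshold=0.3):
    """Carry each user ID forward to the current-frame box with the
    highest IoU above `threshold`; unmatched boxes get fresh IDs.

    prev_regions: {user_id (int): box} from the previous frame.
    curr_boxes:   list of boxes detected in the current frame.
    """
    assigned = {}
    next_id = max(prev_regions.keys(), default=-1) + 1
    used = set()
    for uid, pbox in prev_regions.items():
        best, best_iou = None, threshold
        for i, cbox in enumerate(curr_boxes):
            if i in used:
                continue
            v = iou(pbox, cbox)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            assigned[uid] = curr_boxes[best]
            used.add(best)
    for i, cbox in enumerate(curr_boxes):
        if i not in used:
            assigned[next_id] = cbox
            next_id += 1
    return assigned
```

Since examinees are seated and barely move between frames, even this simple overlap match tends to be stable; a trained target-tracking model would be the more robust alternative the embodiment mentions.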
S203, determining a first distance between the head and shoulder part area and the forearm part area in each to-be-identified area of each to-be-identified video frame.
Common abnormal behaviors in an examination room include passing electronic devices, textbooks, or test papers between examinees, and detecting whether such article-passing behavior exists for every examinee would require analyzing every examinee's behavior, which is computationally expensive. Therefore, in the embodiments of the present application, the regions to be identified are pre-screened: regions in which article-passing behavior cannot occur are filtered out by a simple computation, and only the behavior of examinees in the remaining suspicious regions is analyzed. This greatly reduces the amount of data to be computed when analyzing examinees' abnormal behavior and improves the efficiency of identifying abnormal behavior.
After obtaining at least one region to be identified in each video frame to be identified, the identification device 102 may determine a first distance between the head and shoulder region and the forearm region in each region to be identified of each video frame. The identification device 102 may map the video frame to be identified into a coordinate system, so that each region to be identified is mapped into the coordinate system; alternatively, it may map each region to be identified into the coordinate system when calculating the first distance. Please refer to fig. 3, which is a schematic diagram of the principle of determining the first distance. The identification device 102 takes the coordinate distance between the center point coordinates of the head and shoulder region and the center point coordinates of the forearm region as the first distance L1. The coordinate distance may be, for example, the length of the line connecting the two center points in the coordinate system.
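The center-point distance described above can be sketched in Python as follows. This is an illustrative sketch only, not part of the patent: the box format `(x_min, y_min, x_max, y_max)` and the function names are assumptions.

```python
import math

def center(box):
    """Center point (x, y) of a region given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def first_distance(head_shoulder_box, forearm_box):
    """First distance L1: coordinate distance between the two region centers."""
    (x1, y1), (x2, y2) = center(head_shoulder_box), center(forearm_box)
    return math.hypot(x2 - x1, y2 - y1)

# Hypothetical pixel coordinates for one region to be identified:
L1 = first_distance((100, 50, 220, 170), (60, 160, 140, 220))
```

Here the distance is the straight-line length between the two center points, matching the "length of the line connecting the two center point coordinates" described above.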
As an embodiment, a user typically has two forearms; therefore, the forearm region may refer to a single area containing both forearms, or may comprise two forearm sub-regions, one per forearm. If the forearm region is a single area containing both forearms, the two forearms within it may be marked; for example, the forearm on the left side of the head and shoulder region is marked as the left forearm, and the forearm on the right side is marked as the right forearm. The first distance between the head and shoulder region and the forearm region may then be the first distance to the part marked as the left forearm, or the first distance to the part marked as the right forearm. In the present embodiment, the forearm region includes two forearm sub-regions, and unless otherwise stated, "forearm region" below may refer to one of the forearm sub-regions.
If the forearm region includes two forearm sub-regions, the two sub-regions may be marked separately; for example, the sub-region on the left side of the head and shoulder region is marked as the left forearm, and the sub-region on the right side is marked as the right forearm. The first distance between the head and shoulder region and the forearm region may be the first distance between the head and shoulder region and the sub-region marked as the left forearm, or the first distance between the head and shoulder region and the sub-region marked as the right forearm.
As an example, after determining the first distance between the head and shoulder region and the forearm region of the region to be identified, the recognition device 102 may further determine whether the first distance is within a first distance range. The first distance range may be a preset range, or may be calculated based on a historical first distance between the head and shoulder region and the forearm region in the history data, or may be calculated based on personal information such as height or arm length submitted by the examinee taking the test, or the like.
The distances covered by the first distance range indicate that the distance between the head and shoulder region and the forearm region is large, i.e., the user may be transferring an item. Therefore, if the recognition device 102 determines that the first distance is within the first distance range, the distance between the head and shoulder region and the forearm region is large, the user may be reaching out an arm, and item-transfer behavior may exist; the recognition device 102 then performs S204. If the recognition device 102 determines that the first distance is not within the first distance range, the distance between the head and shoulder region and the forearm region is small, the user is in a regular test-taking posture, and no analysis of item-transfer behavior is required for that user.
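The pre-screening gate described above — filtering regions whose first distance falls outside the first distance range — can be sketched as follows. The dictionary layout, user identifiers, and numeric thresholds are hypothetical, chosen only to illustrate the filtering step.

```python
def pre_check(regions, first_distance_range):
    """Keep only regions whose head-shoulder/forearm first distance is within
    the range, i.e. the user may be reaching out an arm to transfer an item."""
    lo, hi = first_distance_range
    return [r for r in regions if lo <= r["first_distance"] <= hi]

regions = [
    {"user_id": "A01", "first_distance": 35.0},   # regular test-taking posture
    {"user_id": "A02", "first_distance": 92.5},   # arm possibly reaching out
]
# Hypothetical first distance range in pixels:
targets = pre_check(regions, (80.0, 200.0))       # only the A02 region remains
```

Only the surviving target areas proceed to the more expensive second-distance and target-detection steps, which is the source of the computation savings claimed above.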
As an embodiment, for a user who does not exhibit item-transfer behavior, other abnormal behaviors may still be detected. There are multiple types of other abnormal behaviors; two of them are taken as examples below to describe the detection process in detail.
Abnormal behavior one:

viewing illegal items during the examination.
If no item-transfer behavior exists between the user and other users, target detection may be performed on the region to be identified whose first distance is not within the first distance range, to determine whether an illegal item such as an electronic device or a textbook is present in the area enclosed by the user's two forearms.
For the forearm site area of the area to be identified, keypoint detection may be performed. After the recognition device 102 maps the video frame to be recognized into the coordinate system, the forearm part area of the area to be recognized can be mapped into the coordinate system; alternatively, the recognition device 102 may map the forearm part area of the area to be recognized into the coordinate system or the like when detecting abnormal behavior one.
The recognition device 102 inputs the forearm part area of the area to be recognized into the key point recognition model, the key point recognition model determines each key point in the forearm part area of the area to be recognized, and outputs four second key coordinate points. The recognition device 102 determines a quadrilateral area surrounded by four second key coordinate points as vertices as a target detection area.
As an embodiment, please refer to fig. 4, which is a schematic diagram illustrating a principle of determining a quadrilateral area. The key points may be a wrist joint and an elbow joint of the user, and the second key coordinate point includes coordinate points of the wrist joint and the elbow joint in a coordinate system in the forearm site area. Two forearm site sub-regions of the forearm site region each include a wrist joint and an elbow joint, such that four second key coordinate points may be determined in the forearm site region.
As an embodiment, please refer to fig. 5, which is a schematic diagram illustrating a principle of determining a quadrilateral area. The key points may be the wrist joint of the user and two points corresponding to the table edge and the wrist joint. The second key coordinate point includes a wrist joint in the forearm part area and a coordinate point of two points of the table edge corresponding to the wrist joint in the coordinate system. Thus, four second key coordinate points may be determined in the forearm site area.
The recognition device 102 inputs the target detection area into the target detection model, and the target detection model performs feature extraction on the target detection area to obtain a feature vector corresponding to the target detection area. And the target detection model classifies the characteristic vectors corresponding to the target detection area to obtain the category of the target detection area. The categories of target detection areas may include electronic devices, textbooks, or others.
If the recognition device 102 determines that the category of the target detection area is an electronic device or a textbook, the recognition device 102 may output an abnormal behavior recognition result indicating that there is a behavior of viewing the illegal item. If the recognition device 102 determines that the category of the target detection area is other, or does not recognize the category of the target detection area, the recognition device 102 may output an abnormal behavior recognition result indicating that there is no behavior of viewing the illegal item.
Abnormal behavior two:

ear-to-ear (whispering) behavior during the examination.
If no item-transfer behavior exists between the user and other users, face orientation recognition may be performed on the region to be identified whose first distance is not within the first distance range, to determine whether ear-to-ear behavior exists between two users.
After the identification device 102 maps the video frame to be identified into the coordinate system, the head and shoulder region of the region to be identified can be mapped into the coordinate system; alternatively, the recognition device 102 may map the head and shoulder region of the region to be recognized into the coordinate system when detecting the abnormal behavior two, and each head and shoulder region may be mapped into one coordinate system, respectively.
The recognition device 102 inputs the head and shoulder region of the region to be recognized into the trained face orientation recognition model, and the face orientation recognition model performs feature extraction on the head and shoulder region to obtain a feature vector of the head and shoulder region of the region to be recognized. The face orientation recognition model classifies the feature vectors of the head and shoulder region and outputs the face orientation category of the head and shoulder region. The face orientation category includes a plurality of face orientation angles, which may be an angle between the face orientation and the positive direction of the ordinate axis, or may be an angle between the face orientation and the positive direction of the abscissa axis, and the like, and is not particularly limited, for example, the angle between the face orientation and the positive direction of the ordinate axis includes 0 °, 30 °, 60 °, 90 °, 120 °, 180 °, -30 °, -60 °, -90 °, and-120 °, and the like. The recognition device 102 obtains the face orientation angle of the head-shoulder region from the face orientation category of the head-shoulder region output by the face orientation recognition model.
As an embodiment, the face orientation category output by the face orientation recognition model for the head and shoulder region is a face orientation angle, which may describe the angle in various forms; for example, the face orientation angle may be the angle with the positive direction of the ordinate axis, with the negative direction of the ordinate axis, with the positive direction of the abscissa axis, or with the negative direction of the abscissa axis, etc. Thus, the recognition device 102 can determine ear-to-ear behavior using any of these descriptive forms of the face orientation angle.
After obtaining the face orientation angles corresponding to two adjacent regions to be identified, the identification device 102 determines whether the two face orientation angles satisfy a preset angle relationship. The preset angle relationship represents that two adjacent users face each other. If the two face orientation angles satisfy the preset angle relationship, the recognition device 102 determines that ear-to-ear behavior exists between the two users; otherwise, it determines that no ear-to-ear behavior exists. The recognition device 102 outputs an abnormal behavior recognition result indicating whether ear-to-ear behavior exists.
There are various possible preset angle relationships; two are described below as examples. The face orientation angles corresponding to the two adjacent regions to be identified are denoted as the first face orientation angle and the second face orientation angle, respectively, and the users indicated by the user identifiers of the two adjacent regions to be identified are referred to as the first user and the second user, respectively.
Preset angle relationship one:
when the first face orientation angle is an included angle between the face orientation of the first user and the positive direction of the abscissa axis, and the second face orientation angle is an included angle between the face orientation of the second user and the negative direction of the abscissa axis, the sum of the first face orientation angle and the second face orientation angle is smaller than a first preset angle value.
The first preset angle value may be a value set in advance, or may be calculated from the sum of the two face orientation angles in historical data, etc. Please refer to fig. 6, which is a schematic diagram of preset angle relationship one. If the sum of the first face orientation angle and the second face orientation angle is less than the first preset angle value, indicating that the first user and the second user are facing each other, the identification device 102 determines that ear-to-ear behavior exists between the first user and the second user, and outputs an abnormal behavior recognition result indicating that abnormal behavior exists. If the sum of the first face orientation angle and the second face orientation angle is greater than the first preset angle value, indicating that the first user and the second user are not facing each other, the identification device 102 determines that no ear-to-ear behavior exists between the first user and the second user, and outputs an abnormal behavior recognition result indicating that no abnormal behavior exists.
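Preset angle relationship one reduces to a single comparison. A minimal sketch, with the angle convention from the text (first angle measured from the positive x-axis, second from the negative x-axis) and a hypothetical threshold:

```python
def facing_each_other_rel1(angle1, angle2, first_preset_angle):
    """Relation one: a small sum of the two face orientation angles means the
    two adjacent users' faces point toward each other."""
    return (angle1 + angle2) < first_preset_angle

# Hypothetical first preset angle value of 40 degrees:
facing_each_other_rel1(15.0, 20.0, 40.0)   # ear-to-ear behavior suspected
facing_each_other_rel1(60.0, 50.0, 40.0)   # no ear-to-ear behavior
```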
Preset angle relationship two:
when the first face orientation angle is an included angle between the face orientation of the first user and the positive direction of the ordinate axis, the first face orientation angle is larger than a second preset angle value.
The second preset angle value may be a value set in advance, or may be calculated from face orientation angles in historical data, etc. Please refer to fig. 7, which is a schematic diagram of preset angle relationship two. If the first face orientation angle is greater than the second preset angle value, indicating that the first user and the second user behind the first user may be facing each other, the identification device 102 determines that ear-to-ear behavior exists between the first user and the second user, and outputs an abnormal behavior recognition result indicating that ear-to-ear behavior exists. If the first face orientation angle is smaller than the second preset angle value, indicating that the first user and the second user behind the first user cannot be facing each other, the identification device 102 determines that no ear-to-ear behavior exists between the first user and the second user, and outputs an abnormal behavior recognition result indicating that no ear-to-ear behavior exists.
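Preset angle relationship two is likewise a single comparison on one user's angle. A minimal sketch, assuming the angle is measured against the positive y-axis as in the text, with a hypothetical threshold:

```python
def may_face_user_behind(angle1, second_preset_angle):
    """Relation two: a first face orientation angle larger than the threshold
    means the head is turned far enough to face the user seated behind."""
    return angle1 > second_preset_angle

# Hypothetical second preset angle value of 90 degrees:
may_face_user_behind(120.0, 90.0)  # possible ear-to-ear behavior
may_face_user_behind(30.0, 90.0)   # no ear-to-ear behavior
```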
And S204, if at least one target area with the first distance within the first distance range exists, determining a second distance between the forearm part area of each target area and the forearm part area of the area to be identified adjacent to the target area.
If the recognition device 102 determines that the first distance is within the first distance range, the region to be identified may be determined as a target area, so that the recognition device 102 may obtain at least one target area. A target area indicates that the user identified by its user identifier may be transferring an item. The identification device 102 need only detect whether item-transfer behavior exists for the target areas, rather than for all regions to be identified, which greatly reduces the amount of computation of the identification device 102 and improves the efficiency of identifying abnormal behavior.
The recognition device 102 may determine a second distance between the forearm site area of each target area and the forearm site area of the area to be recognized adjacent to the target area. The area to be recognized adjacent to the target area may be an area to be recognized corresponding to users adjacent to each other on the left and right sides of the user indicated by the user identifier of the target area. The region to be identified adjacent to the target region may be the target region, may not be the target region, and is not limited specifically.
There are various ways in which the recognition device 102 determines the second distance between the forearm site area of the target area and the forearm site area of the area to be recognized adjacent to the target area, two of which are described below as examples.
The method comprises the following steps:
and carrying out key point detection on the forearm part area, and determining a first key coordinate point in the forearm part area. And determining the coordinate distance between the first key coordinate point of the target area and the first key coordinate point of the area to be identified adjacent to the target area as a second distance.
After the recognition device 102 maps the video frame to be recognized into the coordinate system, the forearm part area of the target area, and the forearm part area of the area to be recognized adjacent to the target area may be mapped into the coordinate system; alternatively, the recognition device 102 may map the forearm part area of the target area and the forearm part area of the area to be recognized adjacent to the target area into the coordinate system or the like when calculating the second distance.
The recognition device 102 may perform keypoint detection for the target region, and the region to be recognized adjacent to the target region. The recognition device 102 inputs the target area and the area to be recognized adjacent to the target area into the trained key point recognition model, the key point recognition model determines the target area and each key point in the area to be recognized adjacent to the target area, and outputs a first key coordinate point of the target area and a first key coordinate point of the area to be recognized adjacent to the target area. The recognition device 102 determines a coordinate distance between the first key coordinate point of the target area and the first key coordinate point of the area to be recognized adjacent to the target area as a second distance.
As an example, the key point may be a wrist joint of the user, and the first key coordinate point includes a coordinate point of the wrist joint in the coordinate system in the forearm site area. The forearm site area may refer to a forearm site sub-area having a first distance from the head and shoulder site area within a first range of distances. The left-right relationship between the target region and the region to be recognized adjacent to the target region may correspond to the left-right relationship between the head-shoulder region in the target region and the forearm region having the first distance within the first distance range.
For example, please refer to fig. 8, which is a schematic diagram of the principle of determining the second distance. In the region to be identified, the first distance between the forearm sub-region located on the left side of the head and shoulder region and the head and shoulder region is within the first distance range, so the recognition device 102 determines this region to be identified as the target area. The recognition device 102 may then determine a second distance L2 between the forearm sub-region of the target area located on the left of its head and shoulder region and the forearm sub-region, located on the right of its head and shoulder region, of the region to be identified on the left of the target area.
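Method one's second distance — the coordinate distance between the two wrist keypoints — can be sketched as follows. The wrist coordinates are hypothetical; in the described system they would come from the trained keypoint recognition model.

```python
import math

def second_distance(wrist_target, wrist_neighbor):
    """Second distance L2: coordinate distance between the first key coordinate
    point (wrist joint) of the target area's forearm sub-region and that of
    the adjacent region to be identified."""
    (x1, y1), (x2, y2) = wrist_target, wrist_neighbor
    return math.hypot(x2 - x1, y2 - y1)

# Hypothetical wrists: left forearm of the target user, right forearm of the
# user seated to the left:
L2 = second_distance((140, 310), (110, 314))
```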
The second method comprises the following steps:
and determining a coordinate distance between the center point of the forearm part area of the target area and the center point of the forearm part area of the area to be identified adjacent to the target area, and determining the coordinate distance as a second distance.
The recognition device 102 may map the video frame to be identified into a coordinate system, so that the forearm region of the target area and the forearm region of the region to be identified adjacent to the target area are mapped into the coordinate system; alternatively, the recognition device 102 may map these forearm regions into the coordinate system when calculating the second distance. Please refer to fig. 9, which is a schematic diagram of the principle of determining the second distance. The recognition device 102 takes the coordinate distance between the center point coordinates of the forearm region of the target area and the center point coordinates of the forearm region of the adjacent region to be identified as the second distance L2.
As an embodiment, after determining the second distance between the forearm region of each target area and the forearm region of the region to be identified adjacent to the target area, the recognition device 102 may further determine whether the second distance is within a second distance range. If the second distance is within the second distance range, the forearm of the user indicated by the target area's user identifier is close to the forearm of the user indicated by the adjacent region's user identifier, indicating that the two users may be transferring an item, and S205 is executed. If the second distance is not within the second distance range, the forearms of the two users are far apart, indicating that no item-transfer behavior exists between them; other abnormal behaviors, such as those introduced above, may then be identified for the two users, which is not repeated here.
S205, target detection is performed for a region between two forearm site regions having a second distance within a second distance range.
If the recognition device 102 determines that the second distance between the two forearm site areas is within a second distance range, the recognition device 102 may perform target detection on an area between the two forearm site areas having the second distance within the second distance range. The second distance range may be a preset range, or may be calculated based on a historical second distance between two forearm part areas in the historical data, or may be calculated based on examination room information, such as a distance between examination room tables, or the like.
The area between the two forearm regions may be an area whose boundary is fixed by the wrist joints of the two forearm regions; for example, please refer to fig. 10, which is a schematic diagram of a principle of determining the area between the two forearm regions. The area between the two forearm regions is preset as a rectangular area: the wrist joints of the two forearm regions determine the positions of two opposite sides of the rectangle, and the positions of the other two sides are determined from the preset side length, thereby obtaining the area between the two forearm regions.
Alternatively, the area between the two forearm regions may be an area with the wrist joints of the two forearm regions as vertices; for example, please refer to fig. 11, which is a schematic diagram of a principle of determining the area between the two forearm regions. The area between the two forearm regions is preset as a rectangular area: the wrist joints of the two forearm regions determine two vertices of the rectangle, which may be adjacent or non-adjacent, and the area between the two forearm regions is then determined from the preset side length.
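The two rectangle constructions of fig. 10 and fig. 11 can be sketched as follows. This is an illustrative interpretation under stated assumptions: in the first helper the wrists are treated as mid-points of two opposite sides, and in the second the wrists are treated as opposite corners of an axis-aligned box; function names and coordinates are hypothetical.

```python
def region_between_wrists_sides(wrist1, wrist2, preset_side):
    """Fig. 10 variant: the two wrists fix two opposite sides of the rectangle
    (taken here as their mid-points); the other side length is preset."""
    (x1, y1), (x2, y2) = wrist1, wrist2
    half = preset_side / 2.0
    y_mid = (y1 + y2) / 2.0
    return (min(x1, x2), y_mid - half, max(x1, x2), y_mid + half)

def region_between_wrists_vertices(wrist1, wrist2):
    """Fig. 11 variant: the two wrists are vertices of the rectangle (taken
    here as non-adjacent, i.e. opposite corners of an axis-aligned box)."""
    (x1, y1), (x2, y2) = wrist1, wrist2
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
```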
After obtaining the area between the two forearm site areas, the recognition device 102 may perform target detection on the area through a trained target detection model. The recognition device 102 inputs the region into a target detection model, and the target detection model performs feature extraction on the region to obtain a feature vector of the region. The target detection model classifies the feature vectors of the region, and the identification device 102 obtains the classification of the region. The category of the area may be electronic devices, textbooks, test papers, and others.
And S206, outputting an abnormal behavior recognition result according to the target detection result.
The recognition device 102 performs target detection on the area between the two forearm regions and obtains the target detection result, i.e., the category of the area. If the category of the area is one of electronic device, textbook, or test paper, it is determined that a target object exists in the area, and based on this target detection result the recognition device 102 outputs an abnormal behavior recognition result indicating that abnormal behavior exists. If the category of the area is other, or no category is recognized, it is determined that no target object exists in the area, and the recognition device 102 outputs an abnormal behavior recognition result indicating that no abnormal behavior exists.
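The mapping from the detected category to the abnormal behavior recognition result in S206 can be sketched as follows; the class labels and result strings are hypothetical stand-ins for the model's actual output vocabulary.

```python
TARGET_CLASSES = {"electronic_device", "textbook", "test_paper"}

def abnormal_result(detected_class):
    """S206: a prohibited-item category means a target object exists in the
    area between the two forearm regions, so abnormal behavior is reported."""
    if detected_class in TARGET_CLASSES:
        return "abnormal behavior exists"
    return "no abnormal behavior"

abnormal_result("textbook")  # abnormal behavior exists
abnormal_result("other")     # no abnormal behavior
```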
As an embodiment, the recognition device 102 may send the abnormal behavior recognition result to the processing device 103 after obtaining the abnormal behavior recognition result. After receiving the abnormal behavior recognition result sent by the recognition device 102, the processing device 103 may process the abnormal behavior recognition result according to a preset processing policy. For example, when the abnormal behavior recognition result indicates that there is an abnormal behavior, the processing device 103 may issue a prompt message so that a teacher may timely know and confirm the abnormal behavior; the processing device 103 may also issue warning information to the test room where there is abnormal behavior to warn the test taker to comply with the test room discipline; the processing device 103 may also record user information indicated by the user identifier corresponding to the to-be-identified area where abnormal behavior exists, for further verification after the examination is ended, or for penalizing, etc.
Please refer to fig. 12, which is a schematic diagram illustrating a principle of recognizing abnormal behavior. The following illustrates an example process for identifying abnormal behavior.
The recognition device 102 acquires a video frame to be recognized, and determines a head-shoulder part area and a forearm part area of a region to be recognized in the video frame to be recognized. The recognition device 102 determines whether a first distance between the head shoulder region and the forearm region is within a first distance range. If the recognition device 102 determines that the first distance between the head-shoulder part area and the forearm part area is within the first distance range, the recognition device 102 determines the area to be recognized as the target area. The recognition device 102 performs keypoint detection on the target region and the region to be recognized adjacent to the target region, and determines a second distance between the target region and the region to be recognized adjacent to the target region. If the recognition device 102 determines that the second distance between the target area and the area to be recognized adjacent to the target area is within the second distance range, the recognition device 102 performs target detection on the area between the target area and the area to be recognized adjacent to the target area. If the recognition device 102 determines that an illegal item exists in an area between the target area and the area to be recognized adjacent to the target area, the recognition device 102 outputs an abnormal behavior recognition result indicating that an abnormal behavior exists. If the recognition device 102 determines that the target area and an area between the areas to be recognized adjacent to the target area do not have the illegal item, the recognition device 102 outputs an abnormal behavior recognition result indicating that there is no abnormal behavior.
If the recognition device 102 determines that a first distance between the head-shoulder part region and the forearm part region is not within a first distance range, or that a second distance between the target region and a region to be recognized adjacent to the target region is not within a second distance range, the recognition device 102 performs target detection on the forearm part region of the region to be recognized. If the recognition device 102 determines that an offending item is present in the forearm site area, the recognition device 102 outputs an abnormal behavior recognition result indicating that an abnormal behavior is present. If the recognition device 102 determines that the forearm part area does not have an offending item, the recognition device 102 outputs an abnormal behavior recognition result indicating that no abnormal behavior exists.
Alternatively, if the recognition device 102 determines that the first distance between the head and shoulder region and the forearm region is not within the first distance range, or that the second distance between the target area and the region to be identified adjacent to the target area is not within the second distance range, the recognition device 102 performs face orientation recognition on the head and shoulder regions of the region to be identified and its adjacent regions. If the recognition device 102 determines that the face orientation angle of the head and shoulder region of the region to be identified satisfies the preset angle relationship, the recognition device 102 determines that the user associated with the region to be identified has turned the head left or right or turned around, and outputs an abnormal behavior recognition result indicating that abnormal behavior exists. If the recognition device 102 determines that the face orientation angle of the head and shoulder region of the region to be identified does not satisfy the preset angle relationship, the recognition device 102 outputs an abnormal behavior recognition result indicating that no abnormal behavior exists.
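The overall decision flow of fig. 12 can be sketched end-to-end as follows. This is an illustrative sketch only: the boolean inputs stand in for the outputs of the target detection model and the preset angle-relationship check, and the configuration values are hypothetical.

```python
def identify_abnormal_behavior(first_distance, second_distance, cfg,
                               item_between, item_in_forearm, angles_match):
    """Fig. 12 flow: gate on the two distance ranges, detect between the
    forearm regions if both pass, otherwise fall back to the other checks."""
    lo1, hi1 = cfg["first_distance_range"]
    lo2, hi2 = cfg["second_distance_range"]
    if lo1 <= first_distance <= hi1 and lo2 <= second_distance <= hi2:
        # Target detection on the area between the two forearm regions
        return "abnormal behavior exists" if item_between else "no abnormal behavior"
    # Other abnormal behaviors: illegal item in the forearm area, or a face
    # orientation angle satisfying a preset angle relationship
    if item_in_forearm or angles_match:
        return "abnormal behavior exists"
    return "no abnormal behavior"

cfg = {"first_distance_range": (80, 200), "second_distance_range": (0, 50)}
identify_abnormal_behavior(100, 30, cfg, True, False, False)
```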
Based on the same inventive concept, the embodiment of the present application provides an apparatus for identifying abnormal behavior, which is equivalent to the identification device 102 discussed above and can implement the corresponding functions of the foregoing method for identifying abnormal behavior. Referring to fig. 13, the apparatus includes an obtaining module 1301 and a processing module 1302, wherein:
the obtaining module 1301 is configured to sequentially obtain each video frame to be identified in a video to be identified, and determine at least one region to be identified in each video frame to be identified, where the region to be identified includes a head and shoulder region and a forearm region of a user;
the processing module 1302 is configured to determine a first distance between the head and shoulder part area and the forearm part area in each area to be identified of each video frame to be identified; if there is at least one target area whose first distance is within a first distance range, determine a second distance between the forearm part area of each target area and the forearm part area of the area to be identified adjacent to the target area, and perform target detection on the area between the two forearm part areas whose second distance is within a second distance range; and output an abnormal behavior recognition result according to the target detection result; the target detection result indicates whether a target object exists in the area between the two forearm part areas, and the abnormal behavior recognition result indicates whether abnormal behavior exists.
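The first-stage gate performed by the processing module can be sketched as follows: an area to be identified becomes a target area when the distance between its head and shoulder part area and its forearm part area falls inside the first distance range. The box representation, the center-distance metric, and the numeric range are all illustrative assumptions; the application does not fix a particular distance measure.

```python
# Sketch of the first-stage gate: a target area is an area to be
# identified whose head-shoulder/forearm distance falls inside the first
# distance range. Boxes are (x1, y1, x2, y2); the metric and the numeric
# range are illustrative assumptions.

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def center_distance(box_a, box_b):
    (ax, ay) = box_center(box_a)
    (bx, by) = box_center(box_b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def is_target_area(head_shoulder_box, forearm_box,
                   first_range=(15.0, 80.0)):
    """First distance within the first range: the forearm has strayed far
    enough from its usual position to warrant the second-stage check."""
    d1 = center_distance(head_shoulder_box, forearm_box)
    return first_range[0] <= d1 <= first_range[1]
```

Only target areas passing this gate proceed to the second-distance check against the forearm part area of the adjacent area to be identified.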
In a possible embodiment, the processing module 1302 is specifically configured to:
mapping the forearm part area of each target area and the forearm part area of the area to be identified adjacent to the target area into a coordinate system, detecting key points of the forearm part areas, and determining a first key coordinate point in each forearm part area; the first key coordinate point is used for indicating the position of the forearm of the user;
and determining the coordinate distance between the first key coordinate point of each target area and the first key coordinate point of the area to be identified adjacent to the target area as a second distance.
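A minimal sketch of this second-distance computation, assuming a pose estimator has already produced the wrist keypoints (the keypoint detection step itself is outside the sketch); the numeric second distance range is a placeholder, not a value from this application.

```python
import math

# Sketch of the second-distance computation between first key coordinate
# points (wrist joints). Keypoint detection is assumed to have run
# already; the second distance range is an illustrative placeholder.

def second_distance(wrist_a, wrist_b):
    """Coordinate distance between the wrist point of a target area and
    the wrist point of the adjacent area to be identified."""
    return math.dist(wrist_a, wrist_b)    # Euclidean distance, Python 3.8+

def forearms_close(wrist_a, wrist_b, second_range=(0.0, 60.0)):
    d2 = second_distance(wrist_a, wrist_b)
    return second_range[0] <= d2 <= second_range[1]

print(second_distance((0, 0), (3, 4)))   # 5.0
print(forearms_close((0, 0), (3, 4)))    # True: run target detection between the forearms
```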
In one possible embodiment, the first key coordinate point is a coordinate point of a wrist joint in a coordinate system in the forearm site area.
In a possible embodiment, the processing module 1302 is further configured to:
after the first distance between the head and shoulder part area and the forearm part area in each area to be identified of each video frame to be identified is determined, for an area to be identified whose first distance is not within the first distance range, determine a target detection area according to the forearm part area in that area to be identified and perform target detection on the target detection area;
and outputting an abnormal behavior recognition result according to the target detection result.
In a possible embodiment, the processing module 1302 is specifically configured to:
mapping the forearm part area to a coordinate system, detecting key points of the forearm part area, and determining four second key coordinate points in the forearm part area; the second key coordinate point is used for indicating the vertex of a quadrilateral area surrounded by two forearms of the user;
and determining the quadrilateral area with the second key coordinate point as a vertex as a target detection area.
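Deriving the target detection area from the four second key coordinate points (wrist and elbow joints of both forearms) can be sketched as below. A real system would crop this region from the frame and run an object detector on it; here only the axis-aligned bounding box of the quadrilateral is built, and the keypoint values are invented for illustration.

```python
# Sketch of deriving the target detection area from the four second key
# coordinate points (wrist and elbow joints of both forearms). The
# keypoint values below are hypothetical.

def quadrilateral_bbox(points):
    """(x_min, y_min, x_max, y_max) of the quadrilateral spanned by the
    four wrist/elbow keypoints, suitable for cropping a detector input."""
    assert len(points) == 4, "expects two wrist and two elbow points"
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

# hypothetical keypoints: left elbow, left wrist, right wrist, right elbow
corners = [(10, 40), (20, 10), (60, 12), (70, 42)]
print(quadrilateral_bbox(corners))  # (10, 10, 70, 42)
```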
In one possible embodiment, the second key coordinate points comprise coordinate points in a coordinate system of a wrist joint and an elbow joint in the forearm site area.
In a possible embodiment, the processing module 1302 is further configured to:
after determining the first distance between the head and shoulder part area and the forearm part area in each area to be identified of each video frame to be identified, for an area to be identified whose first distance is not within the first distance range, map the head and shoulder part area into a coordinate system, identify the face orientation in the head and shoulder part area, and obtain the face orientation angle between the face orientation in each head and shoulder part area and a specified coordinate axis;
if the face orientation angles corresponding to two adjacent to-be-recognized areas meet the preset angle relationship, outputting an abnormal behavior recognition result for indicating that abnormal behaviors exist; the preset angle relationship is used for representing the behavior of two adjacent users facing each other.
Based on the same inventive concept, an embodiment of the present application provides a computer device that can implement the foregoing method for identifying abnormal behavior; the computer device may be equivalent to the recognition device 102 discussed above. Referring to fig. 14, the computer device includes:
at least one processor 1401, and a memory 1402 connected to the at least one processor 1401. The specific connection medium between the processor 1401 and the memory 1402 is not limited in this embodiment; fig. 14 illustrates an example in which the processor 1401 and the memory 1402 are connected through a bus 1400, drawn as a thick line. The connections between other components are merely illustrative and not limiting. The bus 1400 may be divided into an address bus, a data bus, a control bus, and so on; it is drawn with only a single thick line in fig. 14 for ease of illustration, but this does not imply that there is only one bus or one type of bus. Alternatively, the processor 1401 may also be referred to as a controller 1401; the name is not limited.
In the embodiment of the present application, the memory 1402 stores instructions executable by the at least one processor 1401, and the at least one processor 1401 can execute the method for identifying abnormal behavior discussed above by executing the instructions stored in the memory 1402. The processor 1401 can realize the functions of the respective modules in the control apparatus shown in fig. 13.
The processor 1401 is the control center of the computer device. It may connect the various parts of the device through various interfaces and lines, and performs the device's functions and processes data by running or executing the instructions stored in the memory 1402 and calling the data stored in the memory 1402, thereby monitoring the device as a whole.
In one possible embodiment, processor 1401 may include one or more processing units and processor 1401 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1401. In some embodiments, processor 1401 and memory 1402 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 1401 may be a general-purpose processor such as a Central Processing Unit (CPU), a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method for identifying abnormal behavior disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
The memory 1402, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 1402 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, or an optical disc. The memory 1402 may also be, without limitation, any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1402 in the embodiments of the present application may further be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 1401, the code corresponding to the method for identifying abnormal behavior described in the foregoing embodiments may be solidified into a chip, so that the chip can execute the steps of the method for identifying abnormal behavior of the embodiment shown in fig. 2 when running. How processor 1401 is programmed is well known to those skilled in the art and will not be described in detail herein.
Based on the same inventive concept, the present application also provides a storage medium storing computer instructions, which when executed on a computer, cause the computer to perform the method for identifying abnormal behavior discussed above.
In some possible embodiments, the various aspects of the method for identifying abnormal behavior provided by the present application may also be implemented in the form of a program product including program code for causing a control apparatus to perform the steps of the method for identifying abnormal behavior according to various exemplary embodiments of the present application described above in this specification when the program product is run on a device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of identifying abnormal behavior, comprising:
sequentially acquiring each video frame to be identified in a video to be identified, and determining at least one area to be identified in each video frame to be identified; wherein the region to be identified comprises a head and shoulder part region and a forearm part region of the user;
determining a first distance between a head and shoulder part area and a forearm part area in each to-be-recognized area of each to-be-recognized video frame, if at least one target area with the first distance within a first distance range exists, determining a second distance between the forearm part area of each target area and a forearm part area of the to-be-recognized area adjacent to the target area, and performing target detection on an area between two forearm part areas with the second distance within a second distance range;
outputting an abnormal behavior recognition result according to the target detection result; wherein the target detection result is used for indicating whether a target object exists in the area between the two forearm part areas, and the abnormal behavior identification result is used for indicating whether abnormal behavior exists.
2. The method of claim 1, wherein determining a second distance between the forearm site area of each target area and a forearm site area of an area to be identified adjacent to the target area comprises:
mapping the forearm part area of each target area and the forearm part area of the area to be identified adjacent to the target area into a coordinate system, detecting key points of the forearm part areas, and determining a first key coordinate point in each forearm part area; wherein the first key coordinate point is used for indicating the position of the forearm of the user;
and determining the coordinate distance between the first key coordinate point of each target area and the first key coordinate point of the area to be identified adjacent to the target area as a second distance.
3. The method of claim 2, wherein the first key coordinate point is a coordinate point of a wrist joint in the coordinate system in a forearm site area.
4. The method of claim 1, further comprising, after determining a first distance between a head and shoulder region and a forearm region in each to-be-identified region of each to-be-identified video frame:
for the area to be recognized, of which the first distance is not within the first distance range, determining a target detection area according to a forearm part area in the area to be recognized, and performing target detection on the target detection area;
and outputting an abnormal behavior recognition result according to the target detection result.
5. The method according to claim 4, wherein determining the target detection area according to the forearm part area in the area to be identified specifically comprises:
mapping the forearm part area to a coordinate system, detecting key points of the forearm part area, and determining four second key coordinate points in the forearm part area; the second key coordinate point is used for indicating the vertex of a quadrilateral area surrounded by two forearms of a user;
and determining the quadrilateral area with the second key coordinate point as a vertex as a target detection area.
6. The method of claim 5, wherein the second key coordinate points comprise coordinate points in the coordinate system of a wrist joint and an elbow joint in a forearm site area.
7. The method of claim 1, further comprising, after determining a first distance between a head and shoulder region and a forearm region in each to-be-identified region of each to-be-identified video frame:
for the area to be identified, the first distance of which is not within the first distance range, mapping the head and shoulder region into a coordinate system, identifying the face orientation in the head and shoulder region, and obtaining the face orientation angle between the face orientation in each head and shoulder region and a specified coordinate axis;
if the face orientation angles corresponding to two adjacent to-be-recognized areas meet the preset angle relationship, outputting an abnormal behavior recognition result for indicating that abnormal behaviors exist; the preset angle relationship is used for representing the behavior of two adjacent users facing each other.
8. An apparatus for identifying abnormal behavior, comprising:
an acquisition module, configured to sequentially acquire each video frame to be identified in a video to be identified and determine at least one area to be identified in each video frame to be identified; wherein the area to be identified comprises a head and shoulder part area and a forearm part area of a user;
a processing module, configured to determine a first distance between the head and shoulder part area and the forearm part area in each area to be identified of each video frame to be identified; if there is at least one target area whose first distance is within a first distance range, determine a second distance between the forearm part area of each target area and the forearm part area of the area to be identified adjacent to the target area, and perform target detection on the area between the two forearm part areas whose second distance is within a second distance range; and output an abnormal behavior recognition result according to the target detection result; wherein the target detection result is used for indicating whether a target object exists in the area between the two forearm part areas, and the abnormal behavior recognition result is used for indicating whether abnormal behavior exists.
9. A computer device, comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing the method according to any one of claims 1 to 7 according to the obtained program instructions.
10. A storage medium having stored thereon computer-executable instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202011245772.3A 2020-11-10 2020-11-10 Method and device for identifying abnormal behavior, computer equipment and storage medium Active CN112380951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011245772.3A CN112380951B (en) 2020-11-10 2020-11-10 Method and device for identifying abnormal behavior, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112380951A true CN112380951A (en) 2021-02-19
CN112380951B CN112380951B (en) 2023-04-07

Family

ID=74579693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011245772.3A Active CN112380951B (en) 2020-11-10 2020-11-10 Method and device for identifying abnormal behavior, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112380951B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111747A (en) * 2021-03-31 2021-07-13 新疆爱华盈通信息技术有限公司 Abnormal limb behavior detection method, device, terminal and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140272882A1 (en) * 2013-03-13 2014-09-18 Kryterion, Inc. Detecting aberrant behavior in an exam-taking environment
JP2016095817A (en) * 2014-11-13 2016-05-26 株式会社空間概念研究所 Automatic examination system and examination fraudulent deed detection equipment employed in it
CN109829392A (en) * 2019-01-11 2019-05-31 平安科技(深圳)有限公司 Examination hall cheating recognition methods, system, computer equipment and storage medium
CN110032992A (en) * 2019-04-25 2019-07-19 沈阳航空航天大学 A kind of detection method that cheats at one's exam based on posture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAI JINBO; LONG MANLI; ZHAO HONGWEI; CHEN FENJUN: "Detection Algorithm for Abnormal Behavior in Examination Rooms" *
LI LING: "Research on Recognition Technology for Abnormal Examinee Behavior" *

Also Published As

Publication number Publication date
CN112380951B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
CN108985199A (en) Detection method, device and the storage medium of commodity loading or unloading operation
WO2022170844A1 (en) Video annotation method, apparatus and device, and computer readable storage medium
CN110659397B (en) Behavior detection method and device, electronic equipment and storage medium
CN107679504A (en) Face identification method, device, equipment and storage medium based on camera scene
CN108875542B (en) Face recognition method, device and system and computer storage medium
CN111680551A (en) Method and device for monitoring livestock quantity, computer equipment and storage medium
CN109948397A (en) A kind of face image correcting method, system and terminal device
CN111191582B (en) Three-dimensional target detection method, detection device, terminal device and computer readable storage medium
CN113111844B (en) Operation posture evaluation method and device, local terminal and readable storage medium
CN108875667B (en) Target identification method and device, terminal equipment and storage medium
CN111582240B (en) Method, device, equipment and medium for identifying number of objects
CN112215037B (en) Object tracking method and device, electronic equipment and computer readable storage medium
US20220207266A1 (en) Methods, devices, electronic apparatuses and storage media of image processing
CN113240031B (en) Panoramic image feature point matching model training method and device and server
CN111008561A (en) Livestock quantity determination method, terminal and computer storage medium
JP2016157165A (en) Person identification system
CN112380951B (en) Method and device for identifying abnormal behavior, computer equipment and storage medium
CN109190674A (en) The generation method and device of training data
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
CN108234932B (en) Method and device for extracting personnel form in video monitoring image
CN111046831B (en) Poultry identification method, device and server
CN113569594A (en) Method and device for labeling key points of human face
CN114913470B (en) Event detection method and device
CN110458857A Centrally symmetric primitive detection method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant