CN115424341A - Fighting behavior identification method and device and electronic equipment

Info

Publication number
CN115424341A
Authority
CN
China
Prior art keywords
target pedestrian
target
key point
pedestrian
preset
Prior art date
Legal status
Pending
Application number
CN202211043986.1A
Other languages
Chinese (zh)
Inventor
刘柯
闾凡兵
吴婷
Current Assignee
Changsha Hisense Intelligent System Research Institute Co ltd
Original Assignee
Changsha Hisense Intelligent System Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Changsha Hisense Intelligent System Research Institute Co ltd
Priority to CN202211043986.1A
Publication of CN115424341A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/62 - Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 - Recognition of crowd images, e.g. recognition of crowd congestion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fighting behavior identification method and device and electronic equipment. The method comprises the following steps: extracting a contour map of each target pedestrian; recognizing the key point coordinates and posture category of each target pedestrian; judging whether a target pedestrian performs a fighting action according to the change of the target pedestrian's key point coordinates during tracking; and calculating the distance between target pedestrians. When the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of the target pedestrian's fighting actions during tracking is greater than a preset action threshold, and the switching frequency of the target pedestrian's posture category during tracking is greater than a preset switching threshold, a fighting warning is sent. By accurately identifying the key point coordinates and posture category of each target pedestrian, calculating the key point displacement speed, and combining key point coordinates, posture category and key point displacement speed to recognize fighting actions, the method can effectively improve recognition accuracy.

Description

Fighting behavior identification method and device and electronic equipment
Technical Field
The invention relates to the technical field of image recognition, and in particular to a fighting behavior recognition method and device and electronic equipment.
Background
With the development and progress of science and technology, China has made qualitative breakthroughs in key monitoring technologies such as artificial intelligence perception, machine learning, 5G communication, the Internet of Things, cloud and edge computing, and video photoelectric technology. The security industry has become the most important application field for the deep integration of artificial intelligence with the real economy; leading enterprises are racing to deploy high-end artificial intelligence algorithms, and AI cameras with built-in algorithm analysis have gradually taken over the market.
In existing safety monitoring systems, abnormal situations are usually reported through the immediate feedback of monitoring personnel, but such manual monitoring cannot guarantee timeliness and consumes a large amount of human resources. In densely populated public scenes such as rail transit, pedestrian mobility is high; once a fight breaks out, failure to raise an alarm in time can block the flow of people and even cause stampedes.
Although the prior art includes some methods that detect fighting behavior using deep learning and image processing technologies, they are all traditional schemes based on pedestrian detection. Pedestrians are non-rigid objects; their postures during a fight vary widely, the posture has many degrees of freedom, and occlusion may occur, so such detection schemes are prone to missed detections.
Disclosure of Invention
The invention aims to provide a fighting behavior recognition method which is suitable for densely populated public scenes, can effectively improve recognition accuracy, and reduces false alarms and missed alarms.
In order to achieve the above object, the present invention first provides a fighting behavior recognition method, which includes the following steps:
s1, acquiring an image to be detected, inputting the image to be detected into a preset target extraction network, and extracting a contour map of each target pedestrian;
s2, inputting the contour map of the target pedestrian into a preset posture recognition network to obtain the key point coordinates and posture categories of each target pedestrian;
s3, tracking the target pedestrian through a preset pedestrian tracking model, and calculating the displacement speed and the displacement direction of the key point of the target pedestrian according to the change of the key point coordinate of the target pedestrian in the tracking process;
s4, determining whether the target pedestrian has an action of fighting a shelf according to a comparison result between the key point displacement speed of the target pedestrian and a preset speed threshold, the pointing relation between the key point displacement direction of the target pedestrian and other target pedestrians and the area coincidence relation between the key point of the target pedestrian and other target pedestrians;
s5, recording the occurrence frequency of the framing action of the target pedestrians in the tracking process and the switching frequency of the posture categories of the target pedestrians in the tracking process, and calculating the distance between the target pedestrians;
and S6, when the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of fighting actions of the target pedestrian in the tracking process is greater than a preset action threshold, and the switching frequency of the posture category of the target pedestrian in the tracking process is greater than a preset switching threshold, sending out a fighting warning.
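By way of illustration, the following minimal Python sketch shows how the S5/S6 alarm decision could be expressed over a set of tracked pedestrians; the PersonState structure, the function names and the returned list of alarmed pairs are assumptions made for this example, not part of the disclosed implementation.

```python
# Illustrative sketch of the S6 alarm rule (assumed names and structure).
from dataclasses import dataclass
from itertools import combinations
from typing import Dict, List, Tuple

@dataclass
class PersonState:
    hip: Tuple[float, float]   # hip joint midpoint (x, y) of the pedestrian
    action_count: int = 0      # S5: fighting actions observed during tracking
    switch_count: int = 0      # S5: posture-category switches during tracking

def hip_distance(a: PersonState, b: PersonState) -> float:
    # Distance between two pedestrians, measured between hip joint midpoints.
    return ((a.hip[0] - b.hip[0]) ** 2 + (a.hip[1] - b.hip[1]) ** 2) ** 0.5

def fighting_alarms(tracks: Dict[int, PersonState], dist_thresh: float,
                    action_thresh: int, switch_thresh: int) -> List[Tuple[int, int]]:
    """S6: a warning is issued for a pair when the two pedestrians are close
    enough AND the pedestrian's fighting-action count and posture-switch
    count both exceed their preset thresholds."""
    alarms = []
    for i, j in combinations(tracks, 2):
        a, b = tracks[i], tracks[j]
        if (hip_distance(a, b) < dist_thresh
                and a.action_count > action_thresh
                and a.switch_count > switch_thresh):
            alarms.append((i, j))
    return alarms
```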
Optionally, the step S3 of calculating the key point displacement speed and displacement direction of the target pedestrian according to the change of the key point coordinates of the target pedestrian in the tracking process specifically includes: calculating the displacement speed and displacement direction of the key points of the target pedestrian according to the change of the key point coordinates between the current frame and the image n frames before the current frame, wherein n is a positive integer;
the step S4 specifically includes:
judging whether the displacement speed of the key point of the target pedestrian exceeds a preset speed threshold value, whether the displacement direction points to another target pedestrian, and whether the key point of the target pedestrian is overlapped with the area of the other target pedestrian;
and when the displacement speed of the key point of the target pedestrian exceeds a preset speed threshold value, the displacement direction points to another target pedestrian, and the key point of the target pedestrian is overlapped with the region of the other target pedestrian, the target pedestrian is determined to have one fighting action.
Optionally, the calculating, according to the change of the coordinates of the key points of the target pedestrian in the current frame and n frames of images before the current frame, the key point displacement speed of the target pedestrian and the determining whether the key point displacement speed of the target pedestrian exceeds a preset speed threshold specifically include:
calculating the displacement speeds of a plurality of key points of the target pedestrian according to a preset first speed calculation formula;
calculating and determining the maximum displacement speed and the average displacement speed in the displacement speeds of the plurality of key points;
and when the maximum displacement speed in the displacement speeds of the plurality of key points exceeds a preset maximum speed threshold and the average displacement speed exceeds a preset average speed threshold, judging that the displacement speed of the key points of the target pedestrian exceeds a preset speed threshold.
Optionally, the first velocity calculation formula is:

$$V_i = \frac{\sqrt{\left(x_i - x_i'\right)^2 + \left(y_i - y_i'\right)^2}}{t}$$

wherein $V_i$ denotes the displacement speed of the i-th key point, $x_i$ and $y_i$ are the abscissa and ordinate of the i-th key point in the current frame, $x_i'$ and $y_i'$ are the abscissa and ordinate of the i-th key point in the image n frames before the current frame, t is the interval duration between the current frame and the image n frames before the current frame, and i is a positive integer.
Optionally, the plurality of key points of the target pedestrian comprises: a right hand wrist key point, a left hand wrist key point, a right leg ankle key point, and a left leg ankle key point.
Optionally, the formula for calculating the distance between the target pedestrians in step S5 is:

$$D_{jk} = \sqrt{\left(x_j - x_k\right)^2 + \left(y_j - y_k\right)^2}$$

wherein $x_j$ and $y_j$ are the abscissa and ordinate of the hip joint midpoint of the j-th target pedestrian, $x_k$ and $y_k$ are the abscissa and ordinate of the hip joint midpoint of the k-th target pedestrian, $D_{jk}$ is the distance between the j-th target pedestrian and the k-th target pedestrian, and j and k are positive integers.
Optionally, the step S1 specifically includes:
identifying and obtaining a face frame of each target pedestrian and an alignment parameter of each target pedestrian through the target extraction network;
establishing a contour map of each target pedestrian according to the face frame of the target pedestrian and the alignment parameters of the target pedestrian; the alignment parameters of each target pedestrian specifically include: the parameters of the middle point of the face, the parameters of the middle points of the shoulders, the parameters of the middle points of the hip joints and the parameters of the circumcircle of the human body;
the step S2 specifically includes: identifying key point coordinates of each target pedestrian according to the contour map of the target pedestrian;
classifying the postures of the target pedestrians according to the key point coordinates of the target pedestrians and the contour map of the target pedestrians to obtain the posture categories of the target pedestrians; the posture categories include a fighting posture and a normal posture.
Optionally, in step S5, each time the target pedestrian switches from the normal posture to the fighting posture or from the fighting posture to the normal posture in the tracking process, it is determined that the posture category of the target pedestrian has completed one switching in the tracking process.
The invention also provides a fighting behavior recognition device, which comprises:
an extraction unit, used for acquiring an image to be detected, inputting the image to be detected into a preset target extraction network, and extracting a contour map of each target pedestrian;
a recognition unit, used for inputting the contour map of the target pedestrian into a preset posture recognition network to obtain the key point coordinates and posture category of each target pedestrian;
a first calculation unit, used for tracking the target pedestrian through a preset pedestrian tracking model and calculating the key point displacement speed and displacement direction of the target pedestrian according to the change of the key point coordinates of the target pedestrian in the tracking process;
a behavior judging unit, used for determining whether the target pedestrian performs a fighting action according to a comparison result between the key point displacement speed of the target pedestrian and a preset speed threshold, the pointing relation between the key point displacement direction of the target pedestrian and other target pedestrians, and the area coincidence relation between the key points of the target pedestrian and other target pedestrians;
a behavior recording unit, used for recording the occurrence frequency of fighting actions of the target pedestrian in the tracking process and the switching frequency of the posture category of the target pedestrian in the tracking process, and calculating the distance between the target pedestrians;
and an alarm unit, used for sending out a fighting warning when the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of fighting actions of the target pedestrian in the tracking process is greater than a preset action threshold, and the switching frequency of the posture category of the target pedestrian in the tracking process is greater than a preset switching threshold.
In addition, the present invention also provides an electronic device including: a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above method.
The invention has the beneficial effects that: the invention provides a fighting behavior identification method and device and electronic equipment, wherein the method comprises the following steps: acquiring an image to be detected, inputting the image to be detected into a preset target extraction network, and extracting a contour map of each target pedestrian; inputting the contour map of the target pedestrian into a preset posture recognition network to obtain the key point coordinates and posture category of each target pedestrian; tracking the target pedestrian through a preset pedestrian tracking model, and calculating the key point displacement speed and displacement direction of the target pedestrian according to the change of the key point coordinates in the tracking process; determining whether the target pedestrian performs a fighting action according to a comparison result between the key point displacement speed and a preset speed threshold, the pointing relation between the key point displacement direction and other target pedestrians, and the area coincidence relation between the key points and other target pedestrians; recording the occurrence frequency of fighting actions and the switching frequency of the posture category in the tracking process, and calculating the distance between the target pedestrians; and sending out a fighting warning when the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of fighting actions in the tracking process is greater than a preset action threshold, and the switching frequency of the posture category in the tracking process is greater than a preset switching threshold. By accurately identifying the key point coordinates and posture category of the target pedestrian, calculating the key point displacement speed, and combining the key point coordinates, posture category and key point displacement speed to recognize fighting actions, the identification accuracy can be effectively improved.
Drawings
For a better understanding of the nature and technical aspects of the present invention, reference should be made to the following detailed description of the invention, taken in conjunction with the accompanying drawings, which are provided for purposes of illustration and description and are not intended to limit the invention.
In the drawings:
FIG. 1 is a flow chart of a fighting behavior recognition method of the present invention;
fig. 2 and fig. 3 are flowcharts illustrating the determination of the fighting action in the fighting behavior recognition method according to the present invention;
FIG. 4 is a schematic diagram of key points in the fighting behavior recognition method according to the present invention;
fig. 5 to 6 are schematic diagrams illustrating an embodiment of a fighting behavior recognition method according to the present invention;
fig. 7 to 8 are schematic views illustrating another embodiment of the fighting behavior recognition method according to the present invention;
FIG. 9 is a schematic view of a fighting behavior recognition device according to the present invention;
fig. 10 is a schematic diagram of an electronic device of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention, the following detailed description is given with reference to the preferred embodiments of the present invention and the accompanying drawings.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience and simplicity of description; they do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be considered as limiting the present application. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In this application, the word "exemplary" is used to mean "serving as an example, instance, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes are not set forth in detail in order to avoid obscuring the description of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Referring to fig. 1, the present invention provides a fighting behavior recognition method, which includes the following steps:
s1, acquiring an image to be detected, inputting the image to be detected into a preset target extraction network, and extracting a contour map of each target pedestrian.
Specifically, the step S1 includes: identifying and obtaining a face frame of each target pedestrian and an alignment parameter of each target pedestrian through the target extraction network;
establishing a contour map of each target pedestrian according to the face frame of the target pedestrian and the alignment parameters of the target pedestrian;
the alignment parameters of each target pedestrian specifically include: the parameters of the middle point of the face, the parameters of the middle points of the shoulders, the parameters of the middle point of the hip joint and the parameters of the circumcircle of the human body.
In detail, in some embodiments of the present invention, step S1 specifically includes: reading a video stream to obtain an image frame, which is the image to be detected; inputting the image to be detected into a preset target extraction network, preferably a BlazeFace deep neural network; identifying, through the target extraction network, the face frame of each target pedestrian in the image to be detected and the alignment parameters of the corresponding pedestrian (in a typical embodiment of the present invention, the alignment parameters include the face midpoint parameter, the shoulder midpoint parameters, the hip joint midpoint parameter and the human body circumcircle parameter); and finally constructing a contour map of the target pedestrian from the face frame detected by the target extraction network and the alignment parameters of the corresponding pedestrian.
It should be noted that, since the subsequently used key point detection network (i.e., the posture recognition network) is designed as a real-time single-person key point detection technique, while the intended scenes of the present invention are all multi-person scenes, the present invention constructs a contour map of each target pedestrian from the detected face frame and the alignment parameters of the corresponding pedestrian, crops out each human body accordingly, and inputs the crops to the key point detection network, thereby achieving real-time multi-person key point detection.
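As a non-authoritative sketch of this detect-then-crop scheme, the code below shows how per-pedestrian crops could be prepared for a single-person key point network; the detector stub stands in for the BlazeFace-based extraction network and, like all names here, is an assumption rather than the patented model.

```python
from typing import List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # pedestrian body box: x0, y0, x1, y1 (pixels)

def detect_person_boxes(frame: np.ndarray) -> List[Box]:
    """Stub for the target extraction network: in the described scheme, a
    BlazeFace-style detector yields a face frame plus alignment parameters,
    from which a whole-body contour/box per pedestrian is constructed."""
    raise NotImplementedError("plug the detector in here")

def crop_for_pose(frame: np.ndarray, boxes: List[Box]) -> List[np.ndarray]:
    """Crop one sub-image per pedestrian so that a real-time single-person
    key point network can be run once per crop, yielding multi-person
    key point detection overall."""
    h, w = frame.shape[:2]
    crops = []
    for x0, y0, x1, y1 in boxes:
        x0, y0 = max(0, x0), max(0, y0)   # clip the box to the image bounds
        x1, y1 = min(w, x1), min(h, y1)
        crops.append(frame[y0:y1, x0:x1].copy())
    return crops
```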
S2, inputting the contour map of the target pedestrian into a preset posture recognition network to obtain the key point coordinates and the posture category of each target pedestrian;
specifically, the step S2 specifically includes:
identifying key point coordinates of each target pedestrian according to the contour map of the target pedestrian;
classifying the postures of the target pedestrians according to the key point coordinates of the target pedestrians and the contour map of the target pedestrians to obtain the posture categories of the target pedestrians; the posture categories include a fighting posture and a normal posture.
It should be noted that, as shown in fig. 4, in some embodiments of the present invention, the posture recognition network outputs 14 key points. When a pedestrian is subsequently tracked, a human body frame is constructed from the 14 key points of the previous frame and passed to the next image frame to help obtain the key points of that frame, thereby achieving effective human body tracking; when a target pedestrian is lost, the method returns to step S1 to restart a cycle period.
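A minimal sketch of that tracking aid follows: building a body box for the next frame from the previous frame's 14 key points. The 25% padding factor is an assumed value, not one given in the text.

```python
import numpy as np

def body_box_from_keypoints(kpts: np.ndarray, pad: float = 0.25):
    """kpts: array of shape (14, 2) holding the previous frame's key point
    pixel coordinates. Returns a padded (x0, y0, x1, y1) body box that can
    seed key point detection in the next frame; pad=0.25 is an assumption."""
    x0, y0 = kpts.min(axis=0)                 # tightest box around key points
    x1, y1 = kpts.max(axis=0)
    px, py = (x1 - x0) * pad, (y1 - y0) * pad  # enlarge to cover the body
    return (float(x0 - px), float(y0 - py), float(x1 + px), float(y1 + py))
```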
Furthermore, the posture recognition network is an improved BlazePose network. The traditional BlazePose network outputs only a key point branch in the inference stage; a classification branch is added to the BlazePose network structure to obtain the posture recognition network, and this branch distinguishes whether the human body is in a fighting posture. The classification branch also uses the key point information to assist classification, which improves classification accuracy without excessively increasing the inference time of the model.
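The exact topology of the improved BlazePose network is not disclosed; the following PyTorch sketch only illustrates the stated idea of adding a classification branch that is assisted by the key point output. The backbone and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class PoseWithPostureHead(nn.Module):
    """Illustrative only: a key point branch plus an added classification
    branch (fighting vs normal posture) that also consumes the predicted
    key points to assist classification. Not the actual BlazePose topology."""

    def __init__(self, feat_dim: int = 256, num_kpts: int = 14):
        super().__init__()
        self.num_kpts = num_kpts
        self.backbone = nn.Sequential(       # stand-in for BlazePose features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.kpt_head = nn.Linear(feat_dim, num_kpts * 2)  # key point branch
        self.cls_head = nn.Sequential(                     # added branch
            nn.Linear(feat_dim + num_kpts * 2, 64), nn.ReLU(),
            nn.Linear(64, 2),               # logits: fighting / normal posture
        )

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        kpts = self.kpt_head(feats)
        # the key point prediction assists the posture classification
        logits = self.cls_head(torch.cat([feats, kpts], dim=1))
        return kpts.view(-1, self.num_kpts, 2), logits
```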
And S3, referring to figs. 5 to 8, tracking the target pedestrian through a preset pedestrian tracking model, and calculating the displacement speed and displacement direction of the key points of the target pedestrian according to the change of the key point coordinates of the target pedestrian in the tracking process.
Specifically, as shown in fig. 2, in some embodiments of the present invention, the step S3 of calculating the key point displacement speed and displacement direction of the target pedestrian according to the change of the key point coordinates of the target pedestrian in the tracking process specifically includes:
and calculating the displacement speed and the displacement direction of the key point of the target pedestrian according to the coordinate change of the key point of the target pedestrian in the current frame and n frames of images before the current frame, wherein n is a positive integer.
Further, as shown in fig. 4, the step S3 of calculating, according to the change of the coordinates of the key points of the target pedestrian in the images of the current frame and n frames before the current frame, the key point displacement speed of the target pedestrian and determining whether the key point displacement speed of the target pedestrian exceeds a preset speed threshold specifically includes:
calculating the displacement speeds of a plurality of key points of the target pedestrian according to a preset first speed calculation formula;
calculating and determining the maximum displacement speed and the average displacement speed in the displacement speeds of the plurality of key points;
and when the maximum displacement speed in the displacement speeds of the plurality of key points exceeds a preset maximum speed threshold and the average displacement speed exceeds a preset average speed threshold, it is judged that the displacement speed of the key points of the target pedestrian exceeds the preset speed threshold.
It should be noted that a typical fighting behavior involves only punches and kicks, whose motion is reflected in the hand and leg key points. The position information of the hand and leg key points in the current frame is therefore recorded, and, using the tracking information, the displacement between the key points recorded in the current frame and the corresponding key points of the same target pedestrian in the detection area of the image n frames before the current frame is calculated to obtain the key point displacement speed. The specific calculation is the following first velocity calculation formula:

$$V_i = \frac{\sqrt{\left(x_i - x_i'\right)^2 + \left(y_i - y_i'\right)^2}}{t}$$

wherein $V_i$ denotes the displacement speed of the i-th key point, $x_i$ and $y_i$ are the abscissa and ordinate of the i-th key point in the current frame, $x_i'$ and $y_i'$ are the abscissa and ordinate of the i-th key point in the image n frames before the current frame, t is the interval duration between the current frame and the image n frames before the current frame, and i is a positive integer.
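Assuming pixel coordinates and a frame-interval duration t in seconds, the first velocity calculation formula reduces to the short function below.

```python
import math

def keypoint_speed(curr_xy, prev_xy, t):
    """First velocity calculation formula: Euclidean displacement of the
    i-th key point between the current frame and the image n frames earlier,
    divided by the interval duration t."""
    return math.hypot(curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]) / t

# e.g. a wrist that moved from (100, 200) to (160, 280) in 0.2 s:
# keypoint_speed((160, 280), (100, 200), 0.2) -> 500.0 pixels per second
```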
Since only the hand and foot key points need to be considered for fighting, the invention selects, from the 14 key points, the right-hand wrist key point 5, the left-hand wrist key point 8, the right-leg ankle key point 11 and the left-leg ankle key point 14 to calculate key point displacement speeds, and then calculates the maximum displacement speed $V_{\max}$ and the average displacement speed $V_{avg}$ as follows:

$$V_{\max} = \max\left(V_5, V_8, V_{11}, V_{14}\right)$$

$$V_{avg} = \frac{V_5 + V_8 + V_{11} + V_{14}}{4}$$

wherein $V_5$, $V_8$, $V_{11}$ and $V_{14}$ respectively denote the displacement speeds of the right-hand wrist key point 5, the left-hand wrist key point 8, the right-leg ankle key point 11 and the left-leg ankle key point 14.

When the maximum displacement speed exceeds the preset maximum speed threshold and the average displacement speed exceeds the preset average speed threshold, it is judged that the key point displacement speed of the target pedestrian exceeds the preset speed threshold.
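A sketch of the combined speed criterion, assuming the four selected key point speeds have already been computed with the formula above:

```python
def speed_exceeds_threshold(v5, v8, v11, v14, v_max_thresh, v_avg_thresh):
    """The speed criterion holds only when BOTH the maximum and the average
    of the wrist (5, 8) and ankle (11, 14) key point speeds exceed their
    respective preset thresholds."""
    speeds = (v5, v8, v11, v14)
    v_max = max(speeds)
    v_avg = sum(speeds) / 4.0
    return v_max > v_max_thresh and v_avg > v_avg_thresh
```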
And if a key point of the target pedestrian moves toward another target pedestrian between the image n frames before the current frame and the current frame, the key point displacement direction of the target pedestrian is considered to point to that other target pedestrian.
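One simple way to test that pointing relation, taken here as an assumption (the text does not fix the exact geometric test), is to check whether the key point moved closer to the other pedestrian's hip midpoint over the n-frame window:

```python
import math

def points_toward(kpt_prev, kpt_curr, other_hip):
    """Assumed test: the key point displacement direction points to another
    pedestrian if the key point moved closer to that pedestrian's hip
    midpoint between the frame n frames earlier and the current frame."""
    return math.dist(kpt_curr, other_hip) < math.dist(kpt_prev, other_hip)
```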
And S4, determining whether the target pedestrian performs a fighting action according to a comparison result between the key point displacement speed of the target pedestrian and a preset speed threshold, the pointing relation between the key point displacement direction of the target pedestrian and other target pedestrians, and the area coincidence relation between the key points of the target pedestrian and other target pedestrians.
Specifically, in step S4, first, it is determined whether the displacement speed of the key point of the target pedestrian exceeds a preset speed threshold, whether the displacement direction points to another target pedestrian, and whether the key point of the target pedestrian coincides with an area where another target pedestrian appears;
and when the displacement speed of the key point of the target pedestrian exceeds the preset speed threshold, the displacement direction points to another target pedestrian, and the key point of the target pedestrian coincides with the region of the other target pedestrian, it is determined that the target pedestrian has performed one fighting action.
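Putting the three conditions together, a per-key-point decision could be sketched as follows; the box-containment test for "coincides with the region of the other pedestrian" is an assumed concretization.

```python
def is_fighting_action(speed_ok: bool, direction_ok: bool,
                       kpt_xy, other_box) -> bool:
    """One fighting action is counted only when the speed threshold is
    exceeded, the displacement direction points at the other pedestrian,
    and the key point lies inside the region (here: the body box) where
    the other pedestrian appears."""
    x0, y0, x1, y1 = other_box
    inside = x0 <= kpt_xy[0] <= x1 and y0 <= kpt_xy[1] <= y1
    return speed_ok and direction_ok and inside
```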
S5, recording the occurrence frequency of fighting actions of the target pedestrian in the tracking process and the switching frequency of the posture category of the target pedestrian in the tracking process, and calculating the distance between the target pedestrians;
specifically, the formula for calculating the distance between the target pedestrians in step S5 is:
Figure 400911DEST_PATH_IMAGE008
wherein,
Figure 134512DEST_PATH_IMAGE009
the abscissa and ordinate of the hip joint midpoint of the jth target pedestrian,
Figure 950021DEST_PATH_IMAGE010
the abscissa and ordinate of the hip joint midpoint of the kth target pedestrian,
Figure 34652DEST_PATH_IMAGE011
the distance between the jth target pedestrian and the kth target pedestrian is j and k are positive integers.
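The pairwise distance is thus a plain Euclidean distance between hip joint midpoints:

```python
import math

def pedestrian_distance(hip_j, hip_k):
    """Distance D_jk between the j-th and k-th target pedestrians, measured
    between their hip joint midpoints (x, y) as in step S5."""
    return math.dist(hip_j, hip_k)
```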
Further, the posture categories comprise a normal posture and a fighting posture, and each time the target pedestrian switches from the normal posture to the fighting posture or from the fighting posture to the normal posture in the tracking process, the posture category of the target pedestrian is considered to complete one switching in the tracking process.
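Counting switches then amounts to counting adjacent changes in the tracked posture sequence, for example:

```python
def count_posture_switches(posture_seq):
    """Each transition normal -> fighting or fighting -> normal in the
    tracked per-frame posture sequence counts as one switch."""
    return sum(prev != curr for prev, curr in zip(posture_seq, posture_seq[1:]))

# e.g. count_posture_switches(["normal", "fighting", "fighting", "normal"]) -> 2
```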
And S6, when the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of fighting actions of the target pedestrian in the tracking process is greater than a preset action threshold, and the switching frequency of the posture category of the target pedestrian in the tracking process is greater than a preset switching threshold, a fighting warning is sent out.
It should be noted that, because fighting is a continuous action, counting a single effective fighting action or a single posture-category switch is very easily confused with everyday actions. The invention therefore sets an action threshold for the occurrence of fighting actions and a switching threshold for posture-category switching: an alarm is given only when the occurrence frequency of fighting actions of the target pedestrian in the tracking process is greater than the preset action threshold and the switching frequency of the posture category of the target pedestrian in the tracking process is greater than the preset switching threshold, so that false alarms are effectively avoided.
It is worth mentioning that the fighting behavior identification method is suitable both for fights between two persons and for multi-person group fights.
Finally, pedestrian postures in the fighting process are varied, the posture has many degrees of freedom, and occlusion may occur, so a traditional key point detection scheme based on pedestrian detection is prone to missed detections. The present invention achieves multi-person key point detection by combining BlazeFace and BlazePose, which solves the ID tracking problem and improves both the missed-detection rate and the model inference speed; it further creatively uses the displacement speed of key points across consecutive frames to discriminate fighting behavior, and combines posture-category switching with the key point displacement speed for fighting behavior detection, which can effectively improve the recognition accuracy of the algorithm.
Referring to fig. 9, the invention further provides an apparatus for recognizing fighting behavior, which includes:
the extraction unit 10 is configured to acquire an image to be detected, input the image to be detected into a preset target extraction network, and extract a contour map of each target pedestrian;
the recognition unit 20 is configured to input the contour map of the target pedestrian into a preset posture recognition network, so as to obtain the key point coordinates and posture category of each target pedestrian;
the first calculating unit 30 is configured to track the target pedestrian through a preset pedestrian tracking model, and calculate a key point displacement speed and a displacement direction of the target pedestrian according to a change of a key point coordinate of the target pedestrian in a tracking process;
the behavior judging unit 40 is configured to determine whether a fighting action occurs to the target pedestrian according to a comparison result between the key point displacement speed of the target pedestrian and a preset speed threshold, a pointing relationship between the key point displacement direction of the target pedestrian and other target pedestrians, and a region coincidence relationship between the key point of the target pedestrian and other target pedestrians;
a behavior recording unit 50, configured to record the occurrence frequency of a fighting action of a target pedestrian in the tracking process and the switching frequency of posture categories of the target pedestrian in the tracking process, and calculate a distance between each target pedestrian;
and the alarm unit 60 is used for sending an alert of fighting when the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of the fighting action of the target pedestrian in the tracking process is larger than a preset action threshold, and the switching frequency of the posture categories of the target pedestrian in the tracking process is larger than a preset switching threshold.
Referring to fig. 10, the present invention provides an electronic device, including: a memory 200 and a processor 100, the memory 200 storing a computer program which, when executed by the processor 100, causes the processor 100 to perform the steps of the method as described above.
In summary, the present invention provides a method, an apparatus and an electronic device for identifying fighting behavior, wherein the method includes the following steps: acquiring an image to be detected, inputting the image to be detected into a preset target extraction network, and extracting a contour map of each target pedestrian; inputting the contour map of the target pedestrian into a preset posture recognition network to obtain the key point coordinates and posture category of each target pedestrian; tracking the target pedestrian through a preset pedestrian tracking model, and calculating the key point displacement speed and displacement direction of the target pedestrian according to the change of the key point coordinates in the tracking process; determining whether the target pedestrian performs a fighting action according to a comparison result between the key point displacement speed and a preset speed threshold, the pointing relation between the key point displacement direction and other target pedestrians, and the area coincidence relation between the key points and other target pedestrians; recording the occurrence frequency of fighting actions and the switching frequency of the posture category in the tracking process, and calculating the distance between the target pedestrians; and, when the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of fighting actions in the tracking process is greater than a preset action threshold, and the switching frequency of the posture category in the tracking process is greater than a preset switching threshold, sending a fighting warning. By accurately identifying the key point coordinates and posture category of the target pedestrian, calculating the key point displacement speed, and combining the key point coordinates, posture category and key point displacement speed to recognize fighting actions, the identification accuracy can be effectively improved.
As described above, it will be apparent to those skilled in the art that other various changes and modifications may be made based on the technical solution and concept of the present invention, and all such changes and modifications are intended to fall within the scope of the appended claims.

Claims (10)

1. A fighting behavior identification method is characterized by comprising the following steps:
s1, acquiring an image to be detected, inputting the image to be detected into a preset target extraction network, and extracting a contour map of each target pedestrian;
s2, inputting the contour map of the target pedestrian into a preset posture recognition network to obtain the key point coordinates and posture categories of each target pedestrian;
s3, tracking the target pedestrian through a preset pedestrian tracking model, and calculating the displacement speed and the displacement direction of the key point of the target pedestrian according to the change of the key point coordinate of the target pedestrian in the tracking process;
s4, determining whether the target pedestrian has an action of fighting a shelf according to a comparison result between the key point displacement speed of the target pedestrian and a preset speed threshold, the pointing relation between the key point displacement direction of the target pedestrian and other target pedestrians and the area coincidence relation between the key point of the target pedestrian and other target pedestrians;
s5, recording the occurrence frequency of the framing action of the target pedestrians in the tracking process and the switching frequency of the posture categories of the target pedestrians in the tracking process, and calculating the distance between the target pedestrians;
and S6, when the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of fighting actions of the target pedestrian in the tracking process is greater than a preset action threshold, and the switching frequency of the posture category of the target pedestrian in the tracking process is greater than a preset switching threshold, giving out a fighting warning.
2. The fighting behavior recognition method according to claim 1, wherein the step S3 of calculating the displacement speed and the displacement direction of the key point of the target pedestrian according to the change of the key point coordinate of the target pedestrian in the tracking process specifically includes: calculating the displacement speed and the displacement direction of the key point of the target pedestrian according to the coordinate change of the key point of the target pedestrian in the current frame and n frames of images before the current frame, wherein n is a positive integer;
the step S4 specifically includes:
judging whether the displacement speed of the key point of the target pedestrian exceeds a preset speed threshold value, whether the displacement direction points to another target pedestrian, and whether the key point of the target pedestrian is overlapped with the area of the other target pedestrian;
and when the displacement speed of the key point of the target pedestrian exceeds a preset speed threshold, the displacement direction points to another target pedestrian, and the key point of the target pedestrian coincides with the appearance area of the other target pedestrian, determining that the target pedestrian has performed one fighting action.
3. The method for identifying fighting behaviors of claim 2, wherein the calculating of the displacement speed of the key point of the target pedestrian according to the change of the coordinates of the key point of the target pedestrian in the images of the current frame and n frames before the current frame and the judging of whether the displacement speed of the key point of the target pedestrian exceeds a preset speed threshold specifically comprises:
calculating displacement speeds of a plurality of key points of the target pedestrian according to a preset first speed calculation formula;
calculating and determining the maximum displacement speed and the average displacement speed in the displacement speeds of the plurality of key points;
and when the maximum displacement speed in the displacement speeds of the plurality of key points exceeds a preset maximum speed threshold and the average displacement speed exceeds a preset average speed threshold, judging that the displacement speed of the key points of the target pedestrian exceeds a preset speed threshold.
4. A fighting behavior recognition method according to claim 3, characterized in that the first velocity calculation formula is:
$$V_i = \frac{\sqrt{\left(x_i - x_i'\right)^2 + \left(y_i - y_i'\right)^2}}{t}$$

wherein $V_i$ denotes the displacement speed of the i-th key point, $x_i$ and $y_i$ are the abscissa and ordinate of the i-th key point in the current frame, $x_i'$ and $y_i'$ are the abscissa and ordinate of the i-th key point in the image n frames before the current frame, t is the interval duration between the current frame and the image n frames before the current frame, and i is a positive integer.
5. A fighting behavior recognition method according to claim 3, characterized in that the plurality of key points of the target pedestrian comprises: a right hand wrist key point, a left hand wrist key point, a right leg ankle key point, and a left leg ankle key point.
6. The fighting behavior recognition method according to claim 1, wherein the formula for calculating the distance between the target pedestrians in step S5 is:
$$D_{jk} = \sqrt{\left(x_j - x_k\right)^2 + \left(y_j - y_k\right)^2}$$

wherein $x_j$ and $y_j$ are the abscissa and ordinate of the hip joint midpoint of the j-th target pedestrian, $x_k$ and $y_k$ are the abscissa and ordinate of the hip joint midpoint of the k-th target pedestrian, $D_{jk}$ is the distance between the j-th target pedestrian and the k-th target pedestrian, and j and k are positive integers.
7. The fighting behavior recognition method according to claim 1, wherein the step S1 specifically includes:
identifying and obtaining a face frame of each target pedestrian and an alignment parameter of each target pedestrian through the target extraction network;
establishing a contour map of each target pedestrian according to the face frame of the target pedestrian and the alignment parameters of the target pedestrian; the alignment parameters of each target pedestrian specifically include: the parameters of the middle point of the face, the parameters of the middle points of the shoulders, the parameters of the middle points of the hip joints and the parameters of the circumcircle of the human body;
the step S2 specifically includes: identifying key point coordinates of each target pedestrian according to the contour map of the target pedestrian;
classifying the postures of the target pedestrians according to the key point coordinates of the target pedestrians and the contour map of the target pedestrians to obtain the posture categories of the target pedestrians; the posture categories include a fighting posture and a normal posture.
8. The fighting behavior recognition method according to claim 7, wherein in step S5, each time the target pedestrian switches from the normal posture to the fighting posture or from the fighting posture to the normal posture in the tracking process, it is determined that the posture category of the target pedestrian has completed one switching in the tracking process.
9. A fighting behavior recognition device, characterized by comprising:
an extraction unit, used for acquiring an image to be detected, inputting the image to be detected into a preset target extraction network, and extracting a contour map of each target pedestrian;
a recognition unit, used for inputting the contour map of the target pedestrian into a preset posture recognition network to obtain the key point coordinates and posture category of each target pedestrian;
a first calculation unit, used for tracking the target pedestrian through a preset pedestrian tracking model and calculating the key point displacement speed and displacement direction of the target pedestrian according to the change of the key point coordinates of the target pedestrian in the tracking process;
a behavior judging unit, used for determining whether the target pedestrian performs a fighting action according to a comparison result between the key point displacement speed of the target pedestrian and a preset speed threshold, the pointing relation between the key point displacement direction of the target pedestrian and other target pedestrians, and the area coincidence relation between the key points of the target pedestrian and other target pedestrians;
a behavior recording unit, used for recording the occurrence frequency of fighting actions of the target pedestrian in the tracking process and the switching frequency of the posture category of the target pedestrian in the tracking process, and calculating the distance between the target pedestrians;
and an alarm unit, used for sending out a fighting warning when the distance between one target pedestrian and another target pedestrian is smaller than a preset distance threshold, the occurrence frequency of fighting actions of the target pedestrian in the tracking process is greater than a preset action threshold, and the switching frequency of the posture category of the target pedestrian in the tracking process is greater than a preset switching threshold.
10. An electronic device, comprising: a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1-8.
CN202211043986.1A 2022-08-30 2022-08-30 Fighting behavior identification method and device and electronic equipment Pending CN115424341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211043986.1A CN115424341A (en) 2022-08-30 2022-08-30 Fighting behavior identification method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211043986.1A CN115424341A (en) 2022-08-30 2022-08-30 Fighting behavior identification method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115424341A true CN115424341A (en) 2022-12-02

Family

ID=84200125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211043986.1A Pending CN115424341A (en) 2022-08-30 2022-08-30 Fighting behavior identification method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115424341A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524414A (en) * 2023-06-26 2023-08-01 广州英码信息科技有限公司 Method, system and computer readable storage medium for identifying racking behavior

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102457705A (en) * 2010-10-19 2012-05-16 由田新技股份有限公司 Method and system for detecting and monitoring fight behavior
KR20200052418A (en) * 2018-10-25 2020-05-15 주식회사 유캔스타 Automated Violence Detecting System based on Deep Learning
CN111324772A (en) * 2019-07-24 2020-06-23 杭州海康威视系统技术有限公司 Personnel relationship determination method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102457705A (en) * 2010-10-19 2012-05-16 由田新技股份有限公司 Method and system for detecting and monitoring fight behavior
KR20200052418A (en) * 2018-10-25 2020-05-15 주식회사 유캔스타 Automated Violence Detecting System based on Deep Learning
CN111324772A (en) * 2019-07-24 2020-06-23 杭州海康威视系统技术有限公司 Personnel relationship determination method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李保田 (Li Baotian): "基于视频的打架斗殴检测系统" [Video-based fight detection system], China Master's Theses Full-text Database (Engineering Science and Technology I), 15 March 2017 (2017-03-15), pages 1-65 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524414A (en) * 2023-06-26 2023-08-01 广州英码信息科技有限公司 Method, system and computer readable storage medium for identifying racking behavior
CN116524414B (en) * 2023-06-26 2023-10-17 广州英码信息科技有限公司 Method, system and computer readable storage medium for identifying racking behavior

Similar Documents

Publication Publication Date Title
WO2020042419A1 (en) Gait-based identity recognition method and apparatus, and electronic device
CN114067358B (en) Human body posture recognition method and system based on key point detection technology
CN111753747B (en) Violent motion detection method based on monocular camera and three-dimensional attitude estimation
JP4198951B2 (en) Group attribute estimation method and group attribute estimation apparatus
CN114220176A (en) Human behavior recognition method based on deep learning
CN107766819B (en) Video monitoring system and real-time gait recognition method thereof
Chen et al. Fall detection system based on real-time pose estimation and SVM
Ghazal et al. Human posture classification using skeleton information
CN111582158A (en) Tumbling detection method based on human body posture estimation
CN113378649A (en) Identity, position and action recognition method, system, electronic equipment and storage medium
CN114469076B (en) Identity-feature-fused fall identification method and system for solitary old people
JP2001184488A (en) Device and method for tracking figure and recording medium with recorded program therefor
US20220036056A1 (en) Image processing apparatus and method for recognizing state of subject
CN114120188B (en) Multi-row person tracking method based on joint global and local features
CN111783702A (en) Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning
CN115424341A (en) Fighting behavior identification method and device and electronic equipment
CN117593792A (en) Abnormal gesture detection method and device based on video frame
CN116778666A (en) Child motion safety monitoring system and method
Liu et al. Adaptive recognition method for VR image of Wushu decomposition based on feature extraction
Bansal et al. Elderly people fall detection system using skeleton tracking and recognition
CN112036324A (en) Human body posture judgment method and system for complex multi-person scene
Cai et al. A novel method based on optical flow combining with wide residual network for fall detection
CN115953838A (en) Gait image tracking and identifying system based on MLP-Yolov5 network
Yangjiaozi et al. Research on fall detection based on improved human posture estimation algorithm
CN112800816A (en) Video motion recognition detection method based on multiple models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination