CN113269076B - Violent behavior detection system and detection method based on distributed monitoring - Google Patents

Violent behavior detection system and detection method based on distributed monitoring

Info

Publication number
CN113269076B
CN113269076B (application CN202110545892.3A)
Authority
CN
China
Prior art keywords
monitoring
violent
violent behavior
behavior detection
distributed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110545892.3A
Other languages
Chinese (zh)
Other versions
CN113269076A (en
Inventor
叶亮
闫素素
李月
韩帅
石硕
甄佳玲
杨宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202110545892.3A priority Critical patent/CN113269076B/en
Publication of CN113269076A publication Critical patent/CN113269076A/en
Application granted granted Critical
Publication of CN113269076B publication Critical patent/CN113269076B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Alarm Systems (AREA)

Abstract

A violent behavior detection system and a detection method based on distributed monitoring relate to pattern recognition technology. The invention provides a violent behavior detection system based on distributed monitoring, aiming at solving the problems that existing violent behavior detection methods have high algorithm complexity and poor real-time detection performance and are not suitable for large-scale violent behavior detection. The invention also provides an improved OpenPose model, which greatly reduces the algorithm complexity of the model and improves the real-time performance of the violent motion detection method.

Description

Violent behavior detection system and detection method based on distributed monitoring
Technical Field
The present invention relates to a pattern recognition technology.
Background
With the development of the new media era, great convenience has been brought, but violent and vulgar content has followed, and violent behavior now appears in every corner of life. Violent behavior refers to negative behavior that harms people physically or psychologically by violent means. Several image-based violent action detection methods already exist, which judge from acquired images whether a person is behaving violently; however, the algorithm complexity of conventional violent action detection methods is high, so their real-time performance is poor and they are not suitable for large-scale violent behavior detection.
Disclosure of Invention
The invention provides a violent behavior detection system and a detection method based on distributed monitoring, aiming at solving the problems that existing violent behavior detection methods have high algorithm complexity and poor real-time detection performance and are not suitable for large-scale violent behavior detection.
Violent behavior detecting system based on distributed monitoring, characterized by: it comprises a monitoring center, M monitoring cameras and M embedded devices, wherein M is a positive integer,
the M monitoring cameras are distributed in the area to be monitored,
each monitoring camera is used for acquiring image information of an area to be monitored;
the M embedded devices are respectively embedded into the M monitoring cameras and used for processing image data acquired by the monitoring cameras and sending the image data acquired by the monitoring cameras after being processed by the embedded devices to a monitoring center through a wired/wireless network;
the monitoring center is used for analyzing and processing the received processed image data collected by the monitoring camera and detecting possible violent actions.
The violent behavior detection method based on the distributed-monitoring violent behavior detection system comprises, within one time period, the following steps:
the method comprises the following steps that firstly, M monitoring cameras marked with different numbers respectively collect image information of an area to be monitored; acquiring M monitoring camera acquisition images;
step two, the M monitoring cameras respectively process the M camera-acquired images obtained in step one through the improved COCO model and the improved OpenPose model embedded in the M embedded devices, obtaining the processed camera-acquired image data;
step three, the M monitoring cameras send the processed image data acquired by the monitoring cameras obtained in the step two to a monitoring center through a wired/wireless network;
step four, the monitoring center receives the processed image data collected by the monitoring camera and sent in the step three, detects whether violent behaviors exist in the image data collected by the camera, and sends out alarm information to the personal terminal through a wired/wireless network when violent behaviors are detected to exist;
and step five, the personal terminal receives and outputs the alarm information in the step four, and the distributed monitoring violent behavior detection of the violent behavior detection system based on the distributed monitoring is completed once.
In the second step, the M monitoring cameras respectively process the M monitoring camera collected images obtained in the first step through the improved COCO models embedded in the M embedded devices, and the specific method for obtaining the processed monitoring camera collected image data is as follows:
extracting human-skeleton key points from the M camera-acquired images obtained in step one by using the M embedded devices, the improved OpenPose model extracting the skeleton points to form a human skeleton model, so that a video stream containing only the skeleton model is formed; this skeleton-only video stream is taken as the processed camera-acquired image data.
In the fourth step, the specific method for the monitoring center to receive the processed image data collected by the monitoring camera and sent in the third step and detect whether violent behavior exists in the image data collected by the camera is as follows:
and step four, after receiving the video stream at the monitoring center, firstly carrying out size normalization on the skeleton model, and filling the missing skeleton points according to the relative positions among the skeleton points in the previous frame. A sliding window is adopted to segment the data stream, K frames) are taken as a basic processing unit, K is a positive integer, morphological characteristics and dynamic characteristics are extracted from each basic processing unit, then the extracted characteristics are screened, and finally, a classifier is utilized to complete recognition and classification of violent actions;
and step two, when detecting that the violent behavior exists, the monitoring center sends the alarm information containing the camera number and the camera position to the personal terminal, sends an instruction to the camera (the carried embedded equipment) and transmits the original video corresponding to the violent behavior back to the monitoring center.
The invention provides a violent behavior detection system based on distributed monitoring. The invention also provides an improved OpenPose model, which greatly reduces the algorithm complexity of the model and improves the real-time performance of the violent motion detection method.
Drawings
FIG. 1 is a network architecture diagram of an improved OpenPose model according to the present invention;
FIG. 2 is a schematic diagram of human skeleton key point extraction according to the present invention.
Detailed Description
Detailed embodiment one: the violent behavior detection system based on distributed monitoring is characterized in that it comprises a monitoring center, M monitoring cameras and M embedded devices, wherein M is a positive integer,
the M monitoring cameras are distributed in the area to be monitored,
each monitoring camera is used for acquiring image information of an area to be monitored;
the M embedded devices are respectively embedded into the M monitoring cameras and used for processing image data acquired by the monitoring cameras and sending the image data acquired by the monitoring cameras after being processed by the embedded devices to a monitoring center through a wired/wireless network;
the monitoring center is used for analyzing and processing the received processed image data collected by the monitoring camera and detecting possible violent actions.
In this embodiment, the monitoring center is implemented by a PC or a server.
In this embodiment, each monitoring camera is a fixed monitoring device or a mobile monitoring device, for example: a mobile monitoring device which is temporarily erected or carried on a vehicle.
In this specific embodiment, the system further includes N personal terminals, where N is a positive integer, and each personal terminal is configured to receive and output an alarm message sent by the monitoring center.
In a specific embodiment, the violent behavior detection method based on the distributed-monitoring violent behavior detection system of embodiment one comprises, within one time period, the following steps:
the method comprises the following steps that firstly, M monitoring cameras marked with different numbers respectively collect image information of an area to be monitored; acquiring M monitoring camera acquisition images;
step two, the M monitoring cameras respectively process the M camera-acquired images obtained in step one through the improved COCO model and the improved OpenPose model embedded in the M embedded devices, obtaining the processed camera-acquired image data;
step three, M monitoring cameras send the processed image data acquired by the monitoring cameras obtained in the step two to a monitoring center through a wired/wireless network;
step four, the monitoring center receives the processed image data collected by the monitoring camera and sent in the step three, detects whether violent behaviors exist in the image data collected by the camera, and sends alarm information to the personal terminal through a wired/wireless network when violent behaviors are detected to exist;
and step five, the personal terminal receives and outputs the alarm information in the step four, and the distributed monitoring violent behavior detection of the violent behavior detection system based on the distributed monitoring is completed once.
In the second step, the M monitoring cameras respectively process the M monitoring camera collected images obtained in the first step through the improved COCO models embedded in the M embedded devices, and the specific method for obtaining the processed monitoring camera collected image data is as follows:
extracting human-skeleton key points from the M camera-acquired images obtained in step one by using the M embedded devices, the improved OpenPose model extracting the skeleton points to form a human skeleton model, so that a video stream containing only the skeleton model is formed; this skeleton-only video stream is taken as the processed camera-acquired image data.
In the fourth step, the specific method for the monitoring center to receive the processed image data collected by the monitoring camera and sent in the third step and detect whether violent behavior exists in the image data collected by the camera is as follows:
and step four, after receiving the video stream at the monitoring center, firstly carrying out size normalization on the skeleton model, and filling the missing skeleton points according to the relative positions among the skeleton points in the previous frame. A sliding window is adopted to segment the data stream, K frames) are taken as a basic processing unit, K is a positive integer, morphological characteristics and dynamic characteristics are extracted from each basic processing unit, then the extracted characteristics are screened, and finally, a classifier is utilized to complete recognition and classification of violent actions;
and step two, when detecting that the violent behavior exists, the monitoring center sends the alarm information containing the camera number and the camera position to the personal terminal, sends an instruction to the camera (the carried embedded equipment) and transmits the original video corresponding to the violent behavior back to the monitoring center.
The working mode of the violent behavior detection system based on distributed monitoring comprises the following steps:
step one, bone point detection based on distributed monitoring:
the method comprises the following steps of firstly, improving a COCO skeleton model, removing three skeleton points with small violence detection effect from the head, simplifying the structure of the head, and forming a skeleton model containing 14 skeleton points, wherein the corresponding skeleton points and the serial numbers are shown in a table 1;
TABLE 1 human skeletal point models and corresponding labels
[Table 1 (human skeleton point model and corresponding labels) appears as an image in the original.]
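Because Table 1 is only an image in the original, the exact point labels are not reproduced here. The mapping below is an assumption consistent with a 14-point model derived from the OpenPose COCO skeleton by keeping the nose and neck and dropping the remaining head points:

```python
# Assumed 14-point skeleton model; labels and indices are illustrative,
# not copied from the patent's Table 1 (which is an image).
SKELETON_POINTS = {
    0: "nose", 1: "neck",
    2: "right_shoulder", 3: "right_elbow", 4: "right_wrist",
    5: "left_shoulder", 6: "left_elbow", 7: "left_wrist",
    8: "right_hip", 9: "right_knee", 10: "right_ankle",
    11: "left_hip", 12: "left_knee", 13: "left_ankle",
}
assert len(SKELETON_POINTS) == 14  # matches the 14-point count in the text
```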
And secondly, the target skeleton model is extracted using the improved OpenPose model, whose network structure is shown in FIG. 1.
The image feature extraction part of the original OpenPose model consists of the first 10 layers of the VGG-19 network and two additional convolution kernels; in the improved model, this bottom layer is replaced by the first 22 layers of a ResNet-51 structure to extract features, weight connections in the network are pruned, and the number of model parameters is reduced. The improved OpenPose model is used to extract the target skeleton model from the video stream; the frame image extraction result is shown in FIG. 2;
comparing the performance of the original OpenPose model with that of the improved OpenPose model, comparing the iteration loss before and after improving the model with respect to the model training time, wherein the improved OpenPose model is very close to the iteration effect of the original model at 200000 in 100000 iterations, as shown in table 2, and the evaluation performance indexes before and after improvement are shown in table 3, so that the training period of the model is reduced and the real-time property of the bone point extraction is improved under the condition of not influencing the model precision.
TABLE 2 comparison of iterative loss values for different models
[Table 2 (comparison of iterative loss values of different models) appears as an image in the original.]
TABLE 3 comparison of evaluation indexes of different models
[Table 3 (comparison of evaluation indexes of different models) appears as an image in the original.]
Step two, data transmission based on distributed monitoring:
transmitting a video stream from a monitoring camera (a carried embedded device) to a monitoring center, wherein the video stream is divided into two situations, namely wired direct connection and wireless direct connection; the second is to adopt a wireless ad hoc network to perform connection, and two cases are respectively explained as follows:
the first is that a monitoring camera (a carried embedded device) is directly connected with a monitoring center in a wired or wireless mode, and video stream data can be directly transmitted through communication;
the second is that the monitoring camera (the embedded device carried) and the monitoring center are connected in a wireless ad hoc network mode, namely when not all the monitoring cameras (the embedded device carried) can directly communicate with the monitoring center, a wireless ad hoc network is adopted to establish a route from each monitoring camera (the embedded device carried) to the monitoring center by a protocol, and a skeleton model is transmitted to the monitoring center by the monitoring camera (the embedded device carried) in a multi-hop mode, and the corresponding steps are as follows:
Firstly, the embedded device at each monitoring camera carries a wireless network card for data transmission;
Secondly, the wireless network card is configured to work in AD-HOC mode;
And thirdly, a wireless ad hoc routing protocol establishes a route from each monitoring camera (and its embedded device) to the monitoring center. The Optimized Link State Routing protocol (OLSR) is adopted for the ad hoc network, and the OLSR protocol is configured on the embedded device at each monitoring camera and at the monitoring center. Each node runs the OLSR protocol, realizing multi-hop transmission of the video stream from the monitoring cameras (and their embedded devices) to the monitoring center over the wireless ad hoc network.
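The multi-hop relaying that OLSR maintains can be illustrated, under simplified assumptions (static links, hop-count metric, hypothetical node names), by a plain breadth-first route search; real OLSR instead exchanges HELLO/TC messages and recomputes routes continuously:

```python
from collections import deque

def shortest_route(links, src, dst):
    """Breadth-first search for a minimum-hop route, standing in for the
    route an OLSR node would hold toward the monitoring center."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:       # walk predecessors back to src
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in links.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None  # no route to the center

# cam-03 cannot reach the center directly; it relays via cam-02 and cam-01.
links = {
    "cam-03": ["cam-02"], "cam-02": ["cam-01", "cam-03"],
    "cam-01": ["center", "cam-02"], "center": ["cam-01"],
}
print(shortest_route(links, "cam-03", "center"))
# → ['cam-03', 'cam-02', 'cam-01', 'center']
```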
Step three, violent movement detection based on the skeleton model:
Firstly, the skeleton model is preprocessed: images with different aspect ratios are size-normalized, and missing key points are filled according to the relative positions of the skeleton points in the previous frame, as in formulas (1) and (2); if the neck key point and the 2 waist key points in a frame are missing, the frame is discarded. A sliding window is adopted to segment the data stream, with every 0.2 second (5 frames) as one basic processing unit. Here x represents the abscissa, y the ordinate, and B the torso length; the subscript cur denotes the current frame, pre the previous frame, and i the i-th joint point;
[Formulas (1) and (2) appear as images in the original.]
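Since formulas (1) and (2) appear only as images in the original, the sketch below encodes one plausible reading of the preprocessing step: a missing point inherits the offset from the neck it had in the previous frame, frames missing the neck and both waist points are discarded, and the stream is cut into 5-frame basic units (0.2 s at an assumed 25 fps). Indices follow the assumed 14-point layout and are hypothetical:

```python
import numpy as np

NECK, R_HIP, L_HIP = 1, 8, 11  # assumed indices in the 14-point model

def fill_missing(cur, pre):
    """Fill each missing (NaN) point of the current frame so that it keeps
    the offset from the neck it had in the previous frame -- one plausible
    reading of formulas (1)-(2), which are images in the original."""
    cur = cur.copy()
    for i in range(len(cur)):
        if np.isnan(cur[i]).any():
            cur[i] = cur[NECK] + (pre[i] - pre[NECK])
    return cur

def frame_usable(frame):
    """Discard a frame whose neck and both waist (hip) points are missing."""
    return not (np.isnan(frame[NECK]).any()
                and np.isnan(frame[R_HIP]).any()
                and np.isnan(frame[L_HIP]).any())

def sliding_windows(frames, k=5):
    """Segment the stream into basic processing units of k frames."""
    return [frames[i:i + k] for i in range(0, len(frames) - k + 1, k)]
```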
Secondly, features are extracted from each basic processing unit, from both the human-body shape (morphological) aspect and the dynamic aspect. From the shape aspect, distance features, angle features and human-body aspect-ratio features are extracted. The distance feature adopts a distance-ratio method; the calculation of the distance feature D(i, j, n) is shown in formula (3), where i and j denote the i-th and j-th joint points, n denotes the n-th frame image, d(i, j, n) denotes the Euclidean distance between the two skeleton points, and B denotes the length of the human torso. 14 kinds of distance features are extracted; with 5 frames per basic unit, this gives 70 groups of features, as shown in Table 4 (taking D(2, 3, n) as an example, RSHOULDER_RELBOW_D denotes the distance feature between the right shoulder and the right elbow). The angle relationship between two skeleton points is calculated by formula (4). The angle feature takes the angles between skeleton points when the human body is standing as base angles and uses the change relative to the base angle as the feature, controlled within the range (-π, π]; the calculation of the angle feature θ(i, j, n)' is shown in formula (5). 12 kinds and 60 groups of angle features are extracted, as shown in Table 5.
[Formulas (3), (4) and (5) appear as images in the original.]
TABLE 4 distance characterization parameters
[Table 4 (distance characterization parameters) appears as an image in the original.]
TABLE 5 Angle characterization parameters
[Table 5 (angle characterization parameters) appears as an image in the original.]
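Formulas (3)-(5) are likewise images in the original; the following is an assumed form of the two shape features, consistent with the surrounding text (Euclidean distance divided by torso length B for the distance feature; angle change relative to a standing-pose base angle, wrapped into (-π, π], for the angle feature):

```python
import numpy as np

def distance_feature(p_i, p_j, torso_len):
    """D(i, j, n): Euclidean distance between two skeleton points divided by
    the torso length B -- assumed form of formula (3)."""
    return np.linalg.norm(np.asarray(p_i, float) - np.asarray(p_j, float)) / torso_len

def angle_feature(p_i, p_j, base_angle):
    """theta(i, j, n)': orientation of the segment from point i to point j,
    minus the standing-pose base angle, wrapped into (-pi, pi] --
    assumed form of formulas (4)-(5)."""
    dx, dy = np.asarray(p_j, float) - np.asarray(p_i, float)
    theta = np.arctan2(dy, dx) - base_angle
    return -((np.pi - theta) % (2 * np.pi) - np.pi)  # wrap into (-pi, pi]
```

For example, a 3-4-5 triangle with torso length 5 gives a distance ratio of exactly 1, and a segment pointing straight up measured against a base angle of π/2 gives an angle feature of 0.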
For the human-body aspect-ratio feature, in each frame the difference between the maximum and minimum x-coordinates of the skeleton points is defined as the width, the difference between the maximum and minimum y-coordinates as the height, and the height-to-width ratio is the aspect-ratio feature; 5 groups of such features are extracted. The dynamic features describe the position change of the same skeleton point between two adjacent frames (the displacement vector of the skeleton point in the later frame relative to the earlier frame); the skeleton model has 14 skeleton points, one dynamic feature (displacement vector) is extracted per skeleton point between each pair of adjacent frames, giving 14 kinds of features and 4 groups per basic unit, i.e., 56 groups of dynamic features. For each basic processing unit, a total of 191 groups of features are extracted;
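The stated feature counts can be cross-checked against the assumed 5-frame basic unit:

```python
# Cross-check of the feature budget stated in the text.
frames_per_unit = 5
distance_groups = 14 * frames_per_unit        # 14 kinds x 5 frames = 70
angle_groups    = 12 * frames_per_unit        # 12 kinds x 5 frames = 60
aspect_groups   = 1 * frames_per_unit         # 1 aspect ratio per frame = 5
dynamic_groups  = 14 * (frames_per_unit - 1)  # 14 points x 4 frame pairs = 56
total = distance_groups + angle_groups + aspect_groups + dynamic_groups
print(total)  # → 191, matching the count in the text
```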
Thirdly, the features are screened with the Relief-F algorithm to reduce their dimensionality: useless features are removed according to their contribution to classification, then the correlation matrix of the features selected by Relief-F is calculated and strongly correlated redundant features are removed, and finally 50-dimensional features are selected to participate in violent action classification;
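A minimal two-class Relief-F-style weighting is sketched below, for illustration only; the patent's exact Relief-F variant and the subsequent correlation-matrix pruning are not reproduced:

```python
import numpy as np

def relief_f_weights(X, y, n_iter=100, seed=0):
    """Minimal two-class Relief-F-style weight estimate: features that differ
    more toward the nearest miss (other class) than toward the nearest hit
    (same class) receive higher weights. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(axis=0) - X.min(axis=0)  # per-feature range for scaling
    span[span == 0] = 1.0
    for _ in range(n_iter):
        i = rng.integers(n)               # sample a random instance
        same = (y == y[i]); same[i] = False
        diff = (y != y[i])
        dists = np.abs(X - X[i]).sum(axis=1)
        hit = X[same][np.argmin(dists[same])]    # nearest same-class neighbour
        miss = X[diff][np.argmin(dists[diff])]   # nearest other-class neighbour
        w += (np.abs(X[i] - miss) - np.abs(X[i] - hit)) / span
    return w / n_iter
```

Features with high weights are kept; a correlation step would then drop one of any pair of surviving features whose correlation exceeds a threshold.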
Dividing the obtained 50-dimensional feature vector set by action type gives 2 major classes, violent and non-violent actions, further divided into 10 action types: standing, walking, running, jumping, sitting, kicking, beating, shoulder charging, pouring and falling. The feature vector set of each action type is divided evenly into 2 groups, 1 group as a training set and 1 group as a test set. The SVM classifier is trained with the training set and then tested with the test set; the classification results are shown in Table 6. Among the 10 action types, kicking, beating, shoulder charging and pouring are the 4 violent actions; standing, walking, running, jumping, sitting and falling are non-violent actions. The confusion matrix for violence recognition is calculated as shown in Table 7.
TABLE 6 Recognition rates of the 10 actions (unit: %)
[Table 6 (recognition rates of the 10 actions) appears as an image in the original.]
TABLE 7 violence identification confusion matrix (unit:%)
[Table 7 (violence recognition confusion matrix) appears as an image in the original.]
The accuracy is 94.2%, the precision is 97.1%, the recall is 90.5%, and the F1 score is 93.7%; the classification results show that the method provided by the invention is effective.
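The reported F1 score follows directly from the reported precision and recall. The snippet below checks that arithmetic and sketches an SVM train/test split on synthetic stand-in features; scikit-learn's `SVC` is assumed, and the data, kernel and parameters are illustrative, not the patent's:

```python
import numpy as np
from sklearn.svm import SVC

# Sanity check: F1 = 2pr/(p+r) from the reported precision and recall.
p, r = 0.971, 0.905
f1 = 2 * p * r / (p + r)
print(round(f1, 3))  # → 0.937, matching the reported 93.7%

# Tiny two-class SVM sketch on synthetic "feature vectors" standing in for
# the 50-dimensional vectors (the patent's data and kernel are not given).
rng = np.random.default_rng(0)
X_violent = rng.normal(1.0, 0.2, size=(50, 50))
X_normal = rng.normal(-1.0, 0.2, size=(50, 50))
X = np.vstack([X_violent, X_normal])
y = np.array([1] * 50 + [0] * 50)
clf = SVC(kernel="rbf").fit(X[::2], y[::2])  # even rows as the training set
print(clf.score(X[1::2], y[1::2]))           # odd rows as the test set; near 1.0 here
```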
Step four, the personal terminal displays the alarm information of the monitoring center:
Firstly, after detecting a violent action, the monitoring center immediately sends alarm information containing the number (ID) and position of the monitoring camera that detected the violent action to the personal terminal, so that the violent action can be stopped in time and the impact of the violent event reduced;
And secondly, the monitoring center sends an instruction signal to the camera in a wired/wireless manner; after receiving the instruction signal, the monitoring camera transmits the original images corresponding to the violent behavior back to the monitoring center for preserving evidence of the violent behavior.

Claims (7)

1. The violent behavior detection method based on distributed monitoring of a violent behavior detection system of distributed monitoring comprises a monitoring center, M monitoring cameras and M embedded devices, wherein M is a positive integer,
the M monitoring cameras are distributed in the area to be monitored,
each monitoring camera is used for acquiring image information of an area to be monitored;
the M embedded devices are respectively embedded into the M monitoring cameras and used for processing image data acquired by the monitoring cameras and sending the image data acquired by the monitoring cameras after being processed by the embedded devices to a monitoring center through a wired/wireless network;
the monitoring center is used for analyzing and processing the received processed image data collected by the monitoring camera and detecting possible violent actions;
the method is characterized in that:
an improved COCO model and an improved OpenPose model are embedded in each embedded device;
the improved COCO model removes, from the head of the conventional COCO skeleton model, three skeleton points that contribute little to violent motion detection, simplifying the head structure and forming a skeleton model containing 14 skeleton points;
for the original OpenPose model, the bottom layer is replaced with the first 22 layers of a ResNet-51 structure to extract features, weight connections in the network are pruned, and the number of model parameters is reduced;
the violent behavior detection method of distributed monitoring comprises the following steps in a time period:
the method comprises the following steps that firstly, M monitoring cameras marked with different numbers respectively collect image information of an area to be monitored; acquiring M monitoring camera acquisition images;
step two, the M monitoring cameras respectively process the M camera-acquired images obtained in step one through the improved COCO model and the improved OpenPose model embedded in the M embedded devices, obtaining the processed camera-acquired image data;
step three, the M monitoring cameras send the processed image data acquired by the monitoring cameras obtained in the step two to a monitoring center through a wired/wireless network;
step four, the monitoring center receives the processed image data acquired by the monitoring camera and detects whether violent behaviors exist in the image data acquired by the camera, and when violent behaviors are detected, alarm information is sent to the personal terminal through a wired/wireless network;
step five, the personal terminal receives and outputs the alarm information in the step four, and the distributed monitoring violent behavior detection of the violent behavior detection system based on the distributed monitoring is completed once;
in the fourth step, the specific method for the monitoring center to receive the processed image data collected by the monitoring camera and sent in the third step and detect whether violent behavior exists in the image data collected by the camera is as follows:
after receiving the video stream, the monitoring center first performs size normalization on the skeleton model and fills missing skeleton points according to the relative positions of the skeleton points in the previous frame; a sliding window is adopted to segment the data stream, with K frames as one basic processing unit, K being a positive integer; morphological features and dynamic features are extracted from each basic processing unit, the extracted features are screened, and finally a classifier completes the recognition and classification of violent actions;
and when violent behavior is detected, the monitoring center sends alarm information containing the camera number and position to the personal terminal, sends an instruction to the camera, and has the original video corresponding to the violent behavior transmitted back to the monitoring center.
2. The distributed monitoring violent behavior detection method based on the distributed monitoring violent behavior detection system of claim 1, wherein in the distributed monitoring violent behavior detection system, the monitoring center is further used for sending an alarm message to a wired/wireless network after violent action is detected.
3. The distributed monitoring violent behavior detection method based on the distributed monitoring violent behavior detection system of claim 2, which is characterized in that the monitoring center is implemented by a PC or a server in the distributed monitoring violent behavior detection system.
4. The distributed monitoring-based violent behavior detecting method based on the distributed monitoring violent behavior detecting system of claim 3, wherein each monitoring camera in the distributed monitoring violent behavior detecting system is a fixed monitoring device or a mobile monitoring device.
5. The distributed monitoring violent behavior detection method based on the distributed monitoring violent behavior detection system of claim 4, wherein in the distributed monitoring violent behavior detection system, each monitoring camera is a temporarily erected or vehicle-mounted mobile monitoring device.
6. The distributed monitoring violent behavior detection method based on the distributed monitoring violent behavior detection system of claim 2, which is characterized in that the distributed monitoring violent behavior detection system further comprises N personal terminals, wherein N is a positive integer, and each personal terminal is used for receiving and outputting an alarm message sent by the monitoring center.
7. The distributed monitoring violent behavior detection method based on the distributed monitoring violent behavior detection system of claim 1, characterized in that in step two, the M monitoring cameras respectively process the acquired images of the M monitoring cameras obtained in step one through improved COCO models embedded in the M embedded devices, and the specific method for obtaining the processed monitoring camera image data is as follows:
extracting human skeleton key points from the acquired images of the M monitoring cameras obtained in step one by using the M embedded devices, and then forming a human skeleton model from the extracted skeleton points by using an improved OpenPose model, so as to obtain a video stream containing only the skeleton model; the video stream containing only the skeleton model is taken as the processed monitoring camera image data.
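As an illustration of the skeleton-only representation described in claim 7, the sketch below renders a single frame containing nothing but a stick-figure skeleton drawn from 2-D keypoints. This is a minimal sketch, not the patented method: the `SKELETON_EDGES` pairs, the `draw_line` and `skeleton_frame` helpers, and the frame size are all hypothetical, and the keypoints are assumed to have already been produced upstream by a pose estimator such as OpenPose.

```python
import numpy as np

# Hypothetical edge list pairing keypoint indices into limbs,
# loosely following a COCO-style 14/18-keypoint layout.
SKELETON_EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
                  (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13)]

def draw_line(frame, p0, p1):
    """Rasterize a line segment between two (x, y) points onto a 2-D frame."""
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    xs = np.linspace(p0[0], p1[0], n).round().astype(int)
    ys = np.linspace(p0[1], p1[1], n).round().astype(int)
    h, w = frame.shape
    ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)  # clip to frame bounds
    frame[ys[ok], xs[ok]] = 255

def skeleton_frame(keypoints, height=120, width=160):
    """Render one skeleton-only frame from an (N, 2) array of (x, y) keypoints.

    Undetected keypoints are marked with negative coordinates and skipped,
    so the output frame contains only the recovered skeleton model.
    """
    frame = np.zeros((height, width), dtype=np.uint8)
    for i, j in SKELETON_EDGES:
        if i < len(keypoints) and j < len(keypoints):
            p0, p1 = keypoints[i], keypoints[j]
            if min(p0[0], p0[1], p1[0], p1[1]) >= 0:
                draw_line(frame, p0, p1)
    return frame
```

Repeating this per frame yields the skeleton-only video stream that the claim passes on as the processed monitoring camera image data; discarding the original pixels also reduces the bandwidth each embedded device must send to the monitoring center.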
CN202110545892.3A 2021-05-19 2021-05-19 Violent behavior detection system and detection method based on distributed monitoring Active CN113269076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110545892.3A CN113269076B (en) 2021-05-19 2021-05-19 Violent behavior detection system and detection method based on distributed monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110545892.3A CN113269076B (en) 2021-05-19 2021-05-19 Violent behavior detection system and detection method based on distributed monitoring

Publications (2)

Publication Number Publication Date
CN113269076A CN113269076A (en) 2021-08-17
CN113269076B true CN113269076B (en) 2022-06-07

Family

ID=77232038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110545892.3A Active CN113269076B (en) 2021-05-19 2021-05-19 Violent behavior detection system and detection method based on distributed monitoring

Country Status (1)

Country Link
CN (1) CN113269076B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037954A (en) * 2021-11-09 2022-02-11 Jiangxi Normal University Human behavior analysis system based on dense classroom crowds
CN114581953B (en) * 2022-03-14 2022-09-30 University of Science and Technology Beijing Human pose estimation method based on joint-point hard example mining
CN115034280B (en) * 2022-03-16 2023-07-25 Ningxia Guangtianxia Technology Co., Ltd. System for detecting unsafe behaviors of underground personnel

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109614882A (en) * 2018-11-19 2019-04-12 Zhejiang University A violent behavior detection system and method based on human pose estimation
CN110363131A (en) * 2019-07-08 2019-10-22 Shanghai Jiao Tong University Anomaly detection method, system and medium based on human skeleton
CN111091060A (en) * 2019-11-20 2020-05-01 Jilin University Deep learning-based fall and violence detection method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2019226051A1 (en) * 2018-05-25 2019-11-28 Kepler Vision Technologies B.V. Monitoring and analyzing body language with machine learning, using artificial intelligence systems for improving interaction between humans, and humans and robots

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN109614882A (en) * 2018-11-19 2019-04-12 Zhejiang University A violent behavior detection system and method based on human pose estimation
CN110363131A (en) * 2019-07-08 2019-10-22 Shanghai Jiao Tong University Anomaly detection method, system and medium based on human skeleton
CN111091060A (en) * 2019-11-20 2020-05-01 Jilin University Deep learning-based fall and violence detection method

Non-Patent Citations (2)

Title
Campus Bullying Detecting Algorithm Based on Surveillance Video; Liang Ye et al.; AICON 2020: Artificial Intelligence for Communications and Networks; 2021-02-19; full text *
Research on Campus Bullying Behavior Detection Based on Deep Learning; Fu Shuibo et al.; Journal of Ningbo University (Natural Science & Engineering Edition); 2020-05-10 (Issue 03); full text *

Also Published As

Publication number Publication date
CN113269076A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN113269076B (en) Violent behavior detection system and detection method based on distributed monitoring
CN106095099B (en) A user behavior motion detection and recognition method
Gu et al. Paws: Passive human activity recognition based on wifi ambient signals
CN105719188B (en) Insurance claim settlement anti-fraud method and server based on multi-picture consistency
CN109684920A (en) Object key point localization method, image processing method, device and storage medium
CN110738154A (en) Pedestrian fall detection method based on human pose estimation
CN110084165A (en) Edge-computing-based intelligent recognition and early warning method for abnormal events in open power-sector scenes
CN107742097B (en) Human behavior recognition method based on depth camera
CN106934773B (en) Video moving target and MAC address matching method
CN111901749A (en) High-precision three-dimensional indoor positioning method based on multi-source fusion
US20220345919A1 (en) Communication terminal and communication quality prediction method
CN116092199B (en) Employee working state identification method and identification system
CN112488019A (en) Fall detection method and device based on posture recognition, electronic equipment and storage medium
CN112052736A (en) Cloud computing platform-based field tea tender shoot detection method
Rokhana et al. Multi-class image classification based on mobilenetv2 for detecting the proper use of face mask
CN113627326B (en) Behavior recognition method based on wearable equipment and human skeleton
CN112818942B (en) Pedestrian action recognition method and system in vehicle driving process
CN114743273A (en) Human skeleton behavior identification method and system based on multi-scale residual graph convolutional network
CN107945166A (en) Binocular-vision-based measurement method for the three-dimensional vibration trajectory of an object under test
CN114201985A (en) Method and device for detecting key points of human body
CN115439934A (en) Self-adaptive step frequency detection method based on CNN-LSTM motion mode identification
CN115601834A (en) Fall detection method based on WiFi channel state information
CN115393956A (en) CNN-BiLSTM fall detection method with improved attention mechanism
CN113949826A (en) Unmanned aerial vehicle cluster cooperative reconnaissance method and system under limited communication bandwidth condition
CN117423138B (en) Human body falling detection method, device and system based on multi-branch structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant