CN113468975A - Fighting behavior detection method and device - Google Patents


Info

Publication number
CN113468975A
Authority
CN
China
Prior art keywords
neural network
fighting
initial neural
detection method
alarm area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110642630.9A
Other languages
Chinese (zh)
Inventor
潘国雄
赵雷
潘华东
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110642630.9A
Publication of CN113468975A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods

Abstract

The application discloses a fighting behavior detection method and device. The fighting behavior detection method comprises the following steps: acquiring a sequence image to be detected, and determining a characteristic thermodynamic diagram (i.e., a feature heat map) corresponding to the sequence image; determining an alarm area based on the characteristic thermodynamic diagram; and detecting the alarm area to judge whether a fighting behavior exists in the alarm area. The method and the device can accurately and efficiently determine whether fighting behaviors exist in the sequence image to be detected.

Description

Fighting behavior detection method and device
Technical Field
The application relates to the technical field of computers, in particular to a fighting behavior detection method and device.
Background
Behavior analysis technology is widely applied in fields such as public safety and smart homes, in occasions including households, financial scenes, supermarket scenes, public areas, and the like. For example, falls of elderly people living alone can be detected at home, and fighting, abnormal crowd gathering, disturbances, and the like can be detected in crowded regions.
Most existing safety monitoring systems rely on monitoring personnel watching monitoring pictures for long periods so that abnormal conditions can be found and reacted to immediately. However, such manual monitoring often fails to detect abnormal behaviors, such as people fighting each other, in a timely manner. In addition, manual monitoring is time-consuming and labor-intensive: a large-scale monitoring system must hire enough personnel to watch many monitoring pictures simultaneously, yet employing too many staff wastes resources, and fatigued personnel may still miss important pictures.
Disclosure of Invention
The application provides a method and a device for detecting a fighting behavior, which are used for accurately and efficiently determining whether the fighting behavior exists in a sequence image.
In order to achieve the above object, the present application provides a fighting behavior detection method, including:
acquiring a sequence image to be detected, and determining a characteristic thermodynamic diagram corresponding to the sequence image;
determining an alarm area based on the characteristic thermodynamic diagram;
and detecting the alarm area to judge whether the alarm area has a fighting behavior.
Before the step of determining the characteristic thermodynamic diagram corresponding to the sequence image, the method comprises: training an initial neural network by using a fighting training sample set to obtain a first neural network;
the step of determining the characteristic thermodynamic diagram corresponding to the sequence image comprises the following steps:
and inputting the sequence images into a first neural network to obtain a characteristic thermodynamic diagram of the sequence images.
Wherein the step of training the initial neural network by using the fighting training sample set comprises the following step:
pruning the initial neural network.
Wherein at least some convolutional layers of the initial neural network are each connected with a batch normalization layer, and the step of pruning the initial neural network comprises the following step:
determining, based on the weight value of each batch normalization layer, whether to prune the corresponding batch normalization layer and convolutional layer, so as to obtain the pruned initial neural network.
Wherein training the initial neural network by using the training sample set to obtain the first neural network comprises the following steps:
training the initial neural network by using the fighting training sample set to obtain a trained initial neural network; and
setting all parameters of the convolutional layers that meet a preset condition in the trained initial neural network to 0;
wherein a convolutional layer meets the preset condition when the proportion of parameters in its convolution kernel that are smaller than or equal to a first threshold is larger than a second threshold.
Wherein the initial neural network includes a fully-connected layer, and the step of training the initial neural network by using the fighting training sample set to obtain the first neural network comprises the following steps:
training the initial neural network by using the fighting training sample set to obtain a trained initial neural network; and
replacing the fully-connected layer with a first convolutional layer to obtain the first neural network.
Wherein the parameter weights of the first convolutional layer are the parameter weights of the fully-connected layer.
The step of determining whether the fighting behavior exists in the alarm area comprises the following steps:
inputting the alarm area into a second neural network to determine whether the alarm area has fighting behaviors;
wherein the initial neural network and the second neural network are both 3D convolutional neural networks.
Wherein the step of determining an alarm area based on the characteristic thermodynamic diagram comprises:
and taking the area which is larger than the confidence coefficient threshold value in the characteristic thermodynamic diagram as an alarm area.
Wherein the step of determining an alarm area based on the characteristic thermodynamic diagram comprises:
when the number of the regions larger than the confidence threshold in the characteristic thermodynamic diagram is plural, the region with the highest confidence in the characteristic thermodynamic diagram is set as the alarm region.
The step of acquiring the sequence image to be detected comprises the following step:
and performing frame skipping sampling on the pictures in the video to be detected to obtain a sequence image.
Wherein, the step of determining whether the alarm area has a fighting behavior comprises the following steps:
determining whether a human body exists in the alarm area;
and if so, executing the step of determining whether the fighting behavior exists in the alarm area.
To achieve the above object, the present application provides an electronic device, which includes a processor; the processor is used for executing instructions to realize the method.
To achieve the above object, the present application provides a computer-readable storage medium for storing instructions/program data that can be executed to implement the above-described method.
According to the method and the device, the characteristic thermodynamic diagrams corresponding to the sequence images are firstly determined so as to preliminarily determine the alarm area (namely the fighting area) based on the characteristic thermodynamic diagrams, and then the alarm area can be accurately identified so as to accurately judge whether the fighting action exists in the alarm area, so that the fighting detection is accurately and quickly carried out on the sequence images, and the manpower resources are saved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flow chart of an embodiment of a method for detecting a fighting behavior according to the present application;
FIG. 2 is a schematic flowchart of an embodiment of a training method for an initial neural network and a second neural network in the fighting behavior detection method according to the present application;
FIG. 3 is a schematic diagram of at least a partial structure of an embodiment of an initial neural network in the fighting behavior detection method according to the present application;
FIG. 4 is a schematic diagram of the network structure of FIG. 3 after being pruned;
FIG. 5 shows two schematic diagrams of convolution kernels of a convolutional layer in the initial neural network in the fighting behavior detection method of the present application;
FIG. 6 is a diagram showing the result of processing the convolution kernel shown in FIG. 5(b);
FIG. 7 is a schematic structural diagram of an initial neural network in the fighting behavior detection method according to the present application;
FIG. 8 is a schematic structural diagram of a first neural network in the fighting behavior detection method according to the present application;
FIG. 9 is a schematic flow chart diagram illustrating another embodiment of the fighting behavior detection method according to the present application;
FIG. 10 is a schematic diagram of an embodiment of an electronic device;
FIG. 11 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The description and drawings illustrate the principles of the application. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the application and are included within its scope. Moreover, all examples described herein are principally intended expressly for pedagogical purposes, to aid the reader in understanding the principles of the application and the concepts contributed by the inventors to furthering the art, and are not to be construed as limiting the application to such specifically recited examples and conditions. Additionally, the term "or" as used herein refers to a non-exclusive "or" (i.e., "and/or") unless otherwise indicated (e.g., "or otherwise" or in the alternative). Moreover, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments may be combined with one or more other embodiments to form new embodiments.
In recent years, with the continuous development of deep learning technology, more and more deep learning algorithms have been applied to the field of behavior analysis, such as the two-stream method, the 3D convolution method, the CNN-LSTM method, the human joint posture recognition method, and the like. Fighting behavior in crowded regions, however, can vary greatly: different participants change the severity and diversity of the actions considerably, and even the same participants can behave very differently in different time periods. Existing abnormal behavior detection methods cannot achieve both real-time performance and accuracy in practical applications: to improve accuracy, many networks use deep layer counts and large numbers of convolution kernels, which greatly prolongs detection time and requires large-scale servers to complete accurate detection, leaving a gap between these methods and practical use.
Based on the method, the alarm area (namely the fighting area) is preliminarily determined through the characteristic thermodynamic diagram, and then whether the fighting action really exists in the alarm area is accurately determined, so that the fighting action can be rapidly and accurately identified, and the efficiency and the accuracy of fighting detection are improved.
The above-mentioned fighting behavior detection method will be described in detail below, wherein a flow diagram of an embodiment of the fighting behavior detection method is specifically shown in fig. 1, and the fighting behavior detection method of the embodiment includes the following steps. It should be noted that the following numbers are only used for simplifying the description, and are not intended to limit the execution order of the steps, and the execution order of the steps in the present embodiment may be arbitrarily changed without departing from the technical idea of the present application.
S101: and determining a characteristic thermodynamic diagram corresponding to the sequence of images.
The characteristic thermodynamic diagrams corresponding to the sequence images can be determined firstly, so that the alarm area (namely the fighting area) can be determined preliminarily based on the characteristic thermodynamic diagrams, and then the alarm area can be identified accurately, so that whether the fighting behavior exists in the alarm area or not can be judged accurately, and the fighting detection can be performed on the sequence images accurately and quickly.
Alternatively, the sequence of images may be input into a first neural network to obtain a characteristic thermodynamic diagram of the sequence of images.
The first neural network is not limited in kind, and may be, for example, a 3D convolutional neural network, a CNN neural network, or the like.
Alternatively, the first neural network may be obtained by pruning the initial neural network, among other operations, so that the first neural network model is relatively small and the alarm region can be determined quickly.
Optionally, before step S101, the initial neural network may be trained to obtain a first neural network, so that the characteristic thermodynamic diagram corresponding to the sequence of images is determined by using the first neural network in step S101.
Specifically, the initial neural network may be trained based on a training sample set and the training process shown in fig. 2, in a manner of learning whether there is a fighting behavior in the sequence images. In addition, fighting video sequences from different time periods can be selected, converted into pictures, stored, and preprocessed, and these pictures can be combined with image sequences without fighting behaviors to form the training sample set.
Optionally, a pruning operation may be performed on the initial neural network during its training.
The initial neural network may include Batch Normalization (BN) layers, so that the initial neural network can be pruned during the training process based on the weight value of each batch normalization layer. Specifically, at least some convolutional layers of the initial neural network may each be connected with a batch normalization layer, so that during training it is determined, based on the weight value of each batch normalization layer, whether to prune the corresponding batch normalization layer and convolutional layer, thereby obtaining the pruned initial neural network. For example, if the weight values of BN layer 1 and BN layer 4 in fig. 3 are less than the threshold after the initial neural network is trained, BN layer 1, convolutional layer 1, BN layer 4, and convolutional layer 4 are pruned; the pruned initial neural network is shown in fig. 4.
Optionally, the initial neural network may be trained first, and pruning-then-training may then be repeated until the initial neural network converges and the weight values of all batch normalization layers meet a preset condition, e.g., the weight value of each remaining batch normalization layer is greater than the threshold. Of course, in other embodiments the condition may instead be that the weight value is less than the threshold. The threshold may be set according to actual conditions and is not limited herein; it may be, for example, 0.4 or 0.6.
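The BN-weight pruning rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the patent does not specify how a BN layer's "weight value" is aggregated, so taking the mean absolute scale per layer is an assumption, as are the function names and the 0.4 threshold from the example above.

```python
import numpy as np

def layers_to_prune(bn_weights, threshold=0.4):
    """Return indices of conv/BN pairs whose mean absolute BN scale
    falls below the threshold (assumed aggregation of the 'weight value')."""
    pruned = []
    for idx, gamma in enumerate(bn_weights):
        # A small batch-norm scale suggests the channel contributes little.
        if float(np.mean(np.abs(gamma))) < threshold:
            pruned.append(idx)
    return pruned

# Toy BN scales: layers 1 and 4 (indices 0 and 3) have small weights,
# mirroring the fig. 3 example where BN layer 1 and BN layer 4 are pruned.
bn = [np.array([0.05, 0.10]), np.array([0.80, 0.90]),
      np.array([0.70, 0.60]), np.array([0.02, 0.03])]
print(layers_to_prune(bn))  # indices of conv/BN pairs to cut
```

In an actual training loop this decision would be re-evaluated after each prune-then-retrain round until the network converges and no remaining BN layer falls below the threshold.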
In addition, after the training of the initial neural network is completed, the trained initial neural network may be simplified and/or replaced, etc. to obtain the first neural network.
After the initial neural network training is completed, all parameters of any convolutional layer determined to meet the preset condition are set to 0, where a convolutional layer meets the preset condition if the proportion of parameters in its convolution kernel that are smaller than or equal to the first threshold is larger than the second threshold. For example, assuming the first threshold is 1 and the second threshold is 70%, the convolution kernel shown in fig. 5(a) does not meet the preset condition, while the convolution kernel shown in fig. 5(b) does; all parameters of the kernel in fig. 5(b) may then be set to 0, yielding the convolution kernel shown in fig. 6.
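The zeroing rule can be sketched as below. Whether the comparison uses parameter magnitudes or raw values is not stated in the text; taking absolute values is an assumption, and the kernels here are illustrative stand-ins for fig. 5, not the patent's actual figures.

```python
import numpy as np

def zero_if_sparse(kernel, first_threshold=1.0, second_threshold=0.7):
    """Zero a convolution kernel when the fraction of its parameters with
    magnitude <= first_threshold exceeds second_threshold (the example's
    thresholds of 1 and 70%)."""
    ratio = float(np.mean(np.abs(kernel) <= first_threshold))
    if ratio > second_threshold:
        return np.zeros_like(kernel)
    return kernel

# Stand-in for fig. 5(a): mostly large parameters, kept unchanged.
k_dense = np.array([[3.0, 2.0, 5.0],
                    [4.0, 3.0, 2.0],
                    [5.0, 4.0, 3.0]])
# Stand-in for fig. 5(b): 8 of 9 parameters <= 1, so the kernel is zeroed.
k_sparse = np.array([[0.1, 0.2, 0.3],
                     [0.4, 2.0, 0.1],
                     [0.2, 0.3, 0.1]])
print(zero_if_sparse(k_dense).any(), zero_if_sparse(k_sparse).any())
```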
If the output of the initial neural network is the confidence that an image belongs to the fighting category, some layers of the initial neural network can be replaced after its training is completed, so that the replaced network outputs a characteristic thermodynamic diagram. This makes it convenient to train the network on the task of judging whether an image belongs to the fighting category, and then convert it into a network that outputs characteristic thermodynamic diagrams through a simple replacement operation. Specifically, if the trained initial neural network includes a fully-connected layer, that fully-connected layer may be replaced by a first convolutional layer, so that the characteristic thermodynamic diagram of the sequence image is obtained through the replaced network.
For example, if the initial neural network is the network shown in fig. 7, that is, the initial neural network includes the second convolutional layer, the global average pooling layer connected after the second convolutional layer, and the fully-connected layer connected after the global average pooling layer, after training of the initial neural network is completed, the global average pooling layer may be deleted, and the fully-connected layer may be replaced by the first convolutional layer to obtain the network shown in fig. 8, so that the feature thermodynamic diagrams of different categories of the sequence images may be obtained through the network shown in fig. 8. Preferably, the parameter weight of the first convolutional layer is the parameter weight of the fully-connected layer in the trained initial neural network, so that the network after replacement is not required to be trained, the training time is saved, and a more accurate characteristic thermodynamic diagram can be obtained by the method.
Wherein the second convolutional layer may be a 3D convolutional layer. The first convolutional layer may be a 1D convolutional layer.
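The fully-connected-to-convolution replacement can be sketched as a class-activation-map-style computation: the fully-connected weights are reused as convolution weights over the final feature maps, yielding one heatmap per class. The 2D shapes and NumPy here are illustrative simplifications; the patent's networks are 3D CNNs, and the function name is hypothetical.

```python
import numpy as np

def class_heatmap(features, fc_weights):
    """Apply fully-connected weights as a pointwise convolution over the
    final feature maps, producing one heatmap per class (CAM-style sketch)."""
    # features: (C, H, W) final conv features; fc_weights: (num_classes, C)
    C, H, W = features.shape
    return (fc_weights @ features.reshape(C, H * W)).reshape(-1, H, W)

# Toy example: 2 feature channels on a 2x2 map, 2 classes.
feat = np.arange(8.0).reshape(2, 2, 2)
fc = np.array([[0.5, 0.5],    # class 0 mixes both channels equally
               [1.0, -1.0]])  # class 1 contrasts the two channels
heatmaps = class_heatmap(feat, fc)
print(heatmaps.shape)  # one (H, W) heatmap per class
```

Because the convolution reuses the trained fully-connected weights unchanged, no retraining is needed after the replacement, matching the "preferably" note above.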
S102: an alert area is determined based on the characteristic thermodynamic diagram.
After determining the characteristic thermodynamic diagrams corresponding to the sequence of images based on step S101, an alarm region may be determined based on the characteristic thermodynamic diagrams.
In particular, the alert area may be determined based on a characteristic thermodynamic diagram of the sequence images corresponding to the fighting category.
In one implementation, the pixel points with confidence levels greater than the third threshold in the characteristic thermodynamic diagram may be merged to obtain at least one unconnected alarm region.
In another implementation, pixel points with confidence greater than a third threshold in the characteristic thermodynamic diagram may be merged to obtain at least one mutually unconnected region whose confidence is greater than the third threshold; the region with the highest confidence among these regions is then taken as the alarm area.
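Both implementations of step S102 can be sketched as thresholding plus grouping of connected pixels. The patent does not specify the connectivity or merging procedure, so a 4-connected flood fill is an assumption, as are the function names and the toy heatmap.

```python
import numpy as np
from collections import deque

def alarm_regions(heatmap, conf_threshold):
    """Group above-threshold pixels into 4-connected regions; return each
    region's pixel list together with its peak confidence."""
    mask = heatmap > conf_threshold
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    regions = []
    for y in range(H):
        for x in range(W):
            if mask[y, x] and not seen[y, x]:
                pixels, queue = [], deque([(y, x)])
                seen[y, x] = True
                while queue:  # breadth-first flood fill
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                peak = float(heatmap[tuple(zip(*pixels))].max())
                regions.append((pixels, peak))
    return regions

def best_alarm_region(heatmap, conf_threshold):
    """Second implementation: when several regions exceed the threshold,
    keep only the most confident one."""
    regions = alarm_regions(heatmap, conf_threshold)
    return max(regions, key=lambda r: r[1])[0] if regions else []

# Toy heatmap with two disconnected above-threshold regions.
hm = np.array([[0.9, 0.8, 0.1],
               [0.1, 0.1, 0.1],
               [0.1, 0.6, 0.7]])
print(len(alarm_regions(hm, 0.5)), best_alarm_region(hm, 0.5))
```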
S103: and determining whether the alarm area has fighting behaviors.
After determining the alarm region based on step S102, it may be determined whether there is actually a fighting behavior in the alarm region.
Optionally, it may be determined by the second neural network whether there is fighting activity in the alert area.
The second neural network is not limited in kind, and may be, for example, a 3D convolutional neural network, a CNN neural network, or the like.
Preferably, the second neural network and the initial neural network can be 3D convolutional neural networks, so that the method for detecting the fighting behaviors achieves balance of performance and accuracy rate in a mode of connecting the 3D neural networks in series.
In addition, before step S103, a second neural network may be trained, so that whether there is fighting behavior in the alarm region is determined in step S103 by using the trained second neural network.
In particular, the second neural network may be trained based on a set of training samples and the training process shown in FIG. 2. In addition, the training sample sets of the second neural network and the first neural network may be the same, i.e., may both contain a sequence of framed images and a sequence of images in which there is no framed behavior.
In the embodiment, the characteristic thermodynamic diagrams corresponding to the sequence images are determined firstly, so that the alarm area (namely the fighting area) is determined preliminarily based on the characteristic thermodynamic diagrams, and then the alarm area can be identified accurately, so that whether the fighting behavior exists in the alarm area is judged accurately, so that the fighting detection is performed on the sequence images accurately and quickly, and the manpower resource is saved.
The above-mentioned fighting behavior detection method will be described in detail below, wherein a flow diagram of another embodiment of the fighting behavior detection method is specifically shown in fig. 9, and the fighting behavior detection method of the present embodiment includes the following steps. It should be noted that the following numbers are only used for simplifying the description, and are not intended to limit the execution order of the steps, and the execution order of the steps in the present embodiment may be arbitrarily changed without departing from the technical idea of the present application.
S201: a sequence of images is acquired.
The sequence of images may be acquired by a monitoring device.
Real-time video data can be acquired through the monitoring equipment, and the pictures in the real-time video can be sampled with frame skipping to obtain the sequence images, which reduces the large amount of redundant information between adjacent frames. Specifically, during acquisition of the sequence images, each acquired frame may be stored in a temporary space holding a sequence F of N frames in total, i.e., F = {f(i+1), f(i+2), …, f(i+N)}. When a new picture f(i+N+1) is acquired, the oldest picture f(i+1) in the temporary space is deleted and f(i+N+1) is saved, so that F = {f(i+2), f(i+3), …, f(i+N+1)}.
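The frame-skipping sampler and the N-frame temporary space above can be sketched with a fixed-size deque. The skip interval is not specified in the text, so the `skip` parameter and function name are illustrative assumptions.

```python
from collections import deque

def frame_buffer(stream, n_frames, skip):
    """Keep every `skip`-th frame in a fixed-size window F of n_frames,
    discarding the oldest frame as each new sampled frame arrives."""
    window = deque(maxlen=n_frames)  # the N-frame temporary space
    for idx, frame in enumerate(stream):
        if idx % skip == 0:  # frame-skipping sampling
            window.append(frame)
            if len(window) == n_frames:
                yield list(window)  # one sequence image for detection

# Toy "video" of frame ids 0..9, sampling every 2nd frame into windows of 3.
samples = list(frame_buffer(range(10), n_frames=3, skip=2))
print(samples[0], samples[-1])
```

Each yielded window plays the role of F in the description: appending f(i+N+1) automatically evicts f(i+1).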
S202: and determining a characteristic thermodynamic diagram corresponding to the sequence of images.
S203: an alert area is determined based on the characteristic thermodynamic diagram.
S204: it is determined whether a human body is present in the alarm area.
After the alarm area is determined, whether a human body exists in the alarm area can be determined. If so, there is a possibility of fighting in the alarm area, and step S205 is executed; if not, there is no possibility of fighting in the alarm area, and the process returns to step S201 to acquire the next sequence image.
In one implementation, after the alarm area is determined, the alarm area may be input to a human target detection algorithm to determine whether there is a human in the alarm area.
In another implementation, human detection may be performed on the sequence of images, and whether a human is in the alarm region may be determined based on the human detection result.
S205: and determining whether the alarm area has fighting behaviors.
Under the condition that the alarm area has a human body, the alarm area in the sequence image can be subjected to matting processing to obtain an alarm sequence image, and then the alarm sequence image is input into a second neural network to determine whether the alarm area has a fighting behavior.
Specifically, after the confidence that the alarm sequence image belongs to each category is obtained from the second neural network, the category with the maximum confidence can be taken as the category to which the sequence image belongs, thereby determining whether the alarm area has a fighting behavior. For example, if the second neural network gives the alarm sequence image confidences of 79% for the fighting category and 21% for the non-fighting category, the alarm image is determined to contain fighting behavior.
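The softmax-and-argmax decision can be sketched as below. The logit values and label names are illustrative; only the rule (softmax over the second network's outputs, highest-confidence class wins) comes from the text.

```python
import numpy as np

def classify(logits, labels=("fighting", "non-fighting")):
    """Softmax over the second network's class logits; the class with
    the maximum confidence decides whether the alarm area is fighting."""
    exp = np.exp(logits - np.max(logits))  # shift for numerical stability
    probs = exp / exp.sum()
    top = int(np.argmax(probs))
    return labels[top], float(probs[top])

# Toy logits where the fighting class dominates.
label, conf = classify(np.array([1.2, -0.1]))
print(label, round(conf, 2))
```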
Alternatively, the second neural network may perform classification using softmax.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of an electronic device 20 according to the present application. The electronic device 20 of the present application includes a processor 22, and the processor 22 is configured to execute instructions to implement the method provided by any one of the above embodiments of the fighting behavior detection method of the present application, and any non-conflicting combinations thereof.
The electronic device 20 may be a terminal such as a mobile phone or a notebook computer, or may also be a server.
The processor 22 may also be referred to as a CPU (Central Processing Unit). The processor 22 may be an integrated circuit chip having signal processing capabilities. The processor 22 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 22 may be any conventional processor or the like.
The electronic device 20 may further include a memory 21 for storing instructions and data required for operation of the processor 22.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure. The computer readable storage medium 30 of the embodiments of the present application stores instructions/program data 31 that when executed enable the methods provided by any of the above embodiments of the methods of the present application, as well as any non-conflicting combinations. The instructions/program data 31 may form a program file stored in the storage medium 30 in the form of a software product, so as to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute all or part of the steps of the methods according to the embodiments of the present application. And the aforementioned computer-readable storage medium 30 includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or various media capable of storing program codes, or a computer, a server, a mobile phone, a tablet, or other devices.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above embodiments are merely examples and do not limit the scope of the present disclosure; all equivalent structural or flow modifications made using the contents of the specification and drawings of the present disclosure, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of the present disclosure.

Claims (16)

1. A fighting behavior detection method, characterized by comprising:
acquiring an image sequence to be detected, and determining a feature heat map corresponding to the image sequence;
determining an alarm area based on the feature heat map; and
detecting the alarm area to determine whether fighting behavior exists in the alarm area.
2. The fighting behavior detection method according to claim 1, wherein
the step of determining the feature heat map corresponding to the image sequence comprises:
inputting the image sequence into a first neural network to obtain the feature heat map of the image sequence.
3. The fighting behavior detection method according to claim 2, wherein
before the step of determining the feature heat map corresponding to the image sequence, the method further comprises: training an initial neural network with a fighting training sample set, in a manner of learning whether fighting behavior exists in an image sequence, to obtain the first neural network.
4. The fighting behavior detection method according to claim 3, wherein the initial neural network comprises a fully connected layer, and the step of training the initial neural network to obtain the first neural network comprises:
training the initial neural network with the fighting training sample set to obtain a trained initial neural network; and
replacing the fully connected layer in the trained initial neural network with a first convolutional layer to obtain the first neural network.
5. The fighting behavior detection method according to claim 4, wherein the parameter weights of the first convolutional layer are the parameter weights of the fully connected layer.
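Claims 4-5 describe converting the trained classifier into a fully convolutional network by replacing the fully connected layer with a convolutional layer that reuses the same weights. A minimal plain-Python sketch of why this works (all function and variable names here are illustrative assumptions, not taken from the patent):

```python
# Sketch of claims 4-5: replace a fully connected layer with a 1x1
# convolution that reuses the same parameter weights. Names are
# illustrative assumptions, not from the patent text.

def fc_forward(weights, features):
    """Fully connected layer: one score per output class."""
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

def conv1x1_forward(weights, fmap):
    """1x1 convolution over a C x H x W feature map using the FC weights.

    Each FC row becomes one 1x1 convolution kernel, so the layer now
    produces a score at every spatial position (a heat map) instead of
    a single vector per image.
    """
    channels, height, width = len(fmap), len(fmap[0]), len(fmap[0][0])
    return [[[sum(row[c] * fmap[c][i][j] for c in range(channels))
              for j in range(width)]
             for i in range(height)]
            for row in weights]

# On a 1x1 feature map the two layers are numerically identical,
# which is why the FC parameter weights can be copied over unchanged.
W = [[0.5, -1.0], [2.0, 0.25]]      # 2 output classes, 2 input channels
x = [3.0, 4.0]
fmap_1x1 = [[[3.0]], [[4.0]]]       # the same features at 1x1 spatial size
assert fc_forward(W, x) == [score[0][0] for score in conv1x1_forward(W, fmap_1x1)]
```

On larger feature maps the converted layer emits a dense score map, which is one plausible way the first neural network of claim 2 could produce a feature heat map rather than a single classification score.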
6. The fighting behavior detection method according to claim 4, wherein the step of training the initial neural network with the fighting training sample set comprises:
pruning the initial neural network.
7. The fighting behavior detection method according to claim 6, wherein a batch normalization layer is connected to at least part of the convolutional layers of the initial neural network, and the step of pruning the initial neural network comprises:
determining, based on the weight value of each batch normalization layer, whether to prune the corresponding batch normalization layer and convolutional layer, to obtain the pruned initial neural network.
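The pruning of claims 6-7 resembles channel pruning guided by batch-normalization scale weights: a channel whose BN scale is near zero contributes little and can be cut along with its convolution filter. A plain-Python sketch under that reading (names and the exact threshold rule are assumptions for illustration):

```python
# Sketch of claim 7: decide which channels to prune from the absolute
# value of each batch normalization layer's scale weight (gamma).
# Names and the threshold rule are illustrative assumptions.

def channels_to_keep(bn_gammas, threshold):
    """Keep a BN channel (and its convolution filter) only when the
    absolute scale weight reaches `threshold`."""
    return [i for i, g in enumerate(bn_gammas) if abs(g) >= threshold]

def prune_layer(conv_filters, bn_gammas, threshold):
    """Return the pruned convolution filters and BN gammas together,
    so the two layers stay channel-aligned after pruning."""
    keep = channels_to_keep(bn_gammas, threshold)
    return [conv_filters[i] for i in keep], [bn_gammas[i] for i in keep]

filters = ["f0", "f1", "f2", "f3"]   # stand-ins for filter tensors
gammas = [0.9, 0.01, -0.5, 0.002]
pruned_filters, pruned_gammas = prune_layer(filters, gammas, threshold=0.05)
assert pruned_filters == ["f0", "f2"]
assert pruned_gammas == [0.9, -0.5]
```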
8. The fighting behavior detection method according to claim 4 or 6, wherein the step of training the initial neural network with the fighting training sample set to obtain the first neural network comprises:
training the initial neural network with the fighting training sample set to obtain a trained initial neural network; and
setting all parameters of each convolutional layer meeting a preset condition in the trained initial neural network to 0;
wherein a convolutional layer meets the preset condition when the proportion of parameters of its convolution kernels that are less than or equal to a first threshold is greater than a second threshold.
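Claim 8's preset condition can be sketched directly: a layer is zeroed out when most of its kernel parameters are already near zero. A plain-Python illustration (function names and the sample thresholds are assumptions, not from the patent):

```python
# Sketch of claim 8: zero every parameter of a convolutional layer when
# the proportion of its kernel parameters with magnitude at or below a
# first threshold exceeds a second threshold. Names are illustrative.

def meets_preset_condition(kernel_params, first_threshold, second_threshold):
    """True when the share of near-zero parameters exceeds the second threshold."""
    small = sum(1 for p in kernel_params if abs(p) <= first_threshold)
    return small / len(kernel_params) > second_threshold

def zero_qualifying_layers(layers, first_threshold, second_threshold):
    """Set every parameter of a qualifying layer to 0; the network
    structure and the remaining layers are left untouched."""
    return [[0.0] * len(p)
            if meets_preset_condition(p, first_threshold, second_threshold)
            else list(p)
            for p in layers]

layers = [
    [0.001, -0.002, 0.0005, 0.8],   # 75% of parameters are tiny -> zeroed
    [0.5, -0.7, 0.9, 0.003],        # only 25% tiny -> kept as-is
]
result = zero_qualifying_layers(layers, first_threshold=0.01, second_threshold=0.5)
assert result[0] == [0.0, 0.0, 0.0, 0.0]
assert result[1] == [0.5, -0.7, 0.9, 0.003]
```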
9. The fighting behavior detection method according to any one of claims 3-7, wherein the step of determining whether fighting behavior exists in the alarm area comprises:
inputting the alarm area into a second neural network to determine whether fighting behavior exists in the alarm area.
10. The fighting behavior detection method according to claim 9, wherein the initial neural network and the second neural network are both 3D convolutional neural networks.
11. The fighting behavior detection method according to claim 1, wherein the step of determining the alarm area based on the feature heat map comprises:
taking an area in the feature heat map that is greater than a confidence threshold as the alarm area.
12. The fighting behavior detection method according to claim 11, wherein the step of determining the alarm area based on the feature heat map comprises:
when there are multiple areas in the feature heat map greater than the confidence threshold, taking the area with the highest confidence in the feature heat map as the alarm area.
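One plausible realization of claims 11-12 is to threshold the heat map, group the above-threshold cells into connected regions, and keep the region containing the highest confidence. A plain-Python sketch (all names are illustrative assumptions, not from the patent):

```python
# Sketch of claims 11-12: pick the above-threshold connected region of
# the feature heat map with the highest peak confidence as the alarm
# area. Names and the bounding-box output are illustrative assumptions.

def alarm_area(heat_map, confidence_threshold):
    """Return the bounding box (top, left, bottom, right) of the
    above-threshold region with the highest peak confidence, or None."""
    rows, cols = len(heat_map), len(heat_map[0])
    seen = [[False] * cols for _ in range(rows)]
    best_box, best_peak = None, float("-inf")
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or heat_map[r][c] <= confidence_threshold:
                continue
            # Flood-fill one connected above-threshold region.
            stack, cells = [(r, c)], []
            seen[r][c] = True
            while stack:
                i, j = stack.pop()
                cells.append((i, j))
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if (0 <= ni < rows and 0 <= nj < cols
                            and not seen[ni][nj]
                            and heat_map[ni][nj] > confidence_threshold):
                        seen[ni][nj] = True
                        stack.append((ni, nj))
            peak = max(heat_map[i][j] for i, j in cells)
            if peak > best_peak:   # claim 12: keep the highest-confidence region
                best_peak = peak
                best_box = (min(i for i, _ in cells), min(j for _, j in cells),
                            max(i for i, _ in cells), max(j for _, j in cells))
    return best_box

heat = [
    [0.1, 0.9, 0.8, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.95],
]
# Two regions exceed the 0.6 threshold; the cell at (2, 3) holds the
# highest confidence, so its region becomes the alarm area.
assert alarm_area(heat, 0.6) == (2, 3, 2, 3)
```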
13. The fighting behavior detection method according to claim 1, wherein before the step of determining the feature heat map corresponding to the image sequence, the method comprises:
performing frame-skipping sampling on frames of a video to be detected to obtain the image sequence.
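The frame-skipping sampling of claim 13 keeps one frame out of every fixed interval, so the image sequence covers a longer time span at the same length. A minimal sketch (the interval and cap values are illustrative assumptions):

```python
# Sketch of claim 13: frame-skipping sampling of a decoded video.
# The interval and the optional cap are illustrative assumptions.

def frame_skip_sample(frames, interval, max_frames=None):
    """Take one frame every `interval` frames, optionally capped at
    `max_frames` frames for a fixed-length network input."""
    sampled = frames[::interval]
    return sampled if max_frames is None else sampled[:max_frames]

video = list(range(100))   # stand-in for decoded video frames
sequence = frame_skip_sample(video, interval=5, max_frames=8)
assert sequence == [0, 5, 10, 15, 20, 25, 30, 35]
```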
14. The fighting behavior detection method according to claim 1, wherein before the step of determining whether fighting behavior exists in the alarm area, the method comprises:
determining whether a human body is present in the alarm area; and
if so, performing the step of determining whether fighting behavior exists in the alarm area.
15. An electronic device, characterized in that the electronic device comprises a processor, wherein the processor is configured to execute instructions to implement the method according to any one of claims 1-14.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a program file capable of implementing the method according to any one of claims 1-14.
CN202110642630.9A 2021-06-09 2021-06-09 Fighting behavior detection method and device Pending CN113468975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110642630.9A CN113468975A (en) 2021-06-09 2021-06-09 Fighting behavior detection method and device


Publications (1)

Publication Number Publication Date
CN113468975A true CN113468975A (en) 2021-10-01

Family

ID=77869461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110642630.9A Pending CN113468975A (en) 2021-06-09 2021-06-09 Fighting behavior detection method and device

Country Status (1)

Country Link
CN (1) CN113468975A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701508A (en) * 2016-01-12 2016-06-22 Xi'an Jiaotong University Global-local optimization model based on a multistage convolutional neural network, and saliency detection algorithm
CN106650662A (en) * 2016-12-21 2017-05-10 北京旷视科技有限公司 Target object occlusion detection method and target object occlusion detection device
CN109614882A (en) * 2018-11-19 2019-04-12 Zhejiang University Violent behavior detection system and method based on human pose estimation
CN112926648A (en) * 2021-02-24 2021-06-08 北京优创新港科技股份有限公司 Method and device for detecting abnormality of tobacco leaf tip in tobacco leaf baking process


Similar Documents

Publication Publication Date Title
CN110472090B (en) Image retrieval method based on semantic tags, related device and storage medium
CN108280670B (en) Seed crowd diffusion method and device and information delivery system
CN111930526B (en) Load prediction method, load prediction device, computer equipment and storage medium
CN111899470B (en) Human body falling detection method, device, equipment and storage medium
CN111754241A (en) User behavior perception method, device, equipment and medium
CN112257643A (en) Smoking behavior and calling behavior identification method based on video streaming
CN115185760A (en) Abnormality detection method and apparatus
CN112580668A (en) Background fraud detection method and device and electronic equipment
CN114241012B (en) High-altitude parabolic determination method and device
CN114338351B (en) Network anomaly root cause determination method and device, computer equipment and storage medium
CN115861915A (en) Fire fighting access monitoring method, fire fighting access monitoring device and storage medium
CN116016869A (en) Campus safety monitoring system based on artificial intelligence and Internet of things
CN111340213A (en) Neural network training method, electronic device, and storage medium
CN117112336B (en) Intelligent communication equipment abnormality detection method, equipment, storage medium and device
CN110084810B (en) Pulmonary nodule image detection method, model training method, device and storage medium
CN113468975A (en) Fighting behavior detection method and device
CN110852384A (en) Medical image quality detection method, device and storage medium
CN110969209B (en) Stranger identification method and device, electronic equipment and storage medium
US11875898B2 (en) Automatic condition diagnosis using an attention-guided framework
CN115686906A (en) RPA exception handling method, device, server and readable storage medium
CN113673318B (en) Motion detection method, motion detection device, computer equipment and storage medium
CN113627542A (en) Event information processing method, server and storage medium
CN114582012A (en) Skeleton human behavior recognition method, device and equipment
CN109741833B (en) Data processing method and device
CN110399399B (en) User analysis method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination