CN113705274B - Climbing behavior detection method and device, electronic equipment and storage medium


Info

Publication number
CN113705274B
CN113705274B (application CN202010430411.XA)
Authority
CN
China
Prior art keywords
climbing
human body
video frame
suspected
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010430411.XA
Other languages
Chinese (zh)
Other versions
CN113705274A (en)
Inventor
宋旭鸣
任亦立
许朝斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010430411.XA
Publication of CN113705274A
Application granted
Publication of CN113705274B


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

An embodiment of the present application provides a climbing behavior detection method and device, an electronic device and a storage medium. The method includes: acquiring video data to be detected; performing foreground detection on the video data to be detected, and determining, according to a foreground detection result and its positional relationship with a preset climbing rule line, a video frame in which a suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame; and performing climbing behavior detection according to the video frame in which the suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame, to obtain a climbing behavior detection result. The method and device thus automatically detect whether a climbing behavior exists in a monitoring area: a suspected climbing behavior is first obtained, and climbing behavior detection is then performed on the suspected climbing behavior.

Description

Climbing behavior detection method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of intelligent monitoring, in particular to a climbing behavior detection method, a climbing behavior detection device, electronic equipment and a storage medium.
Background
With the development of computer vision technology, and especially the emergence of deep learning algorithms, automatic detection technology based on image data has been applied in many areas of production and daily life.
In some monitoring scenarios, sites such as prisons, hospitals and schools are covered by monitoring equipment for real-time surveillance. In the prior art, staff must manually check such scenes for climbing behaviors. To reduce this manual workload and assist management, automatic climbing behavior detection for the monitoring area needs to be realized.
Disclosure of Invention
The embodiment of the application aims to provide a climbing behavior detection method, a climbing behavior detection device, electronic equipment and a storage medium, so as to automatically detect whether climbing behaviors exist in a monitoring area. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a climbing behavior detection method, where the method includes:
acquiring video data to be detected;
performing foreground detection on the video data to be detected, and determining, according to a foreground detection result and its positional relationship with a preset climbing rule line, a video frame in which a suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame, wherein the suspected climbing behavior indicates that the height of a foreground region in the foreground detection result exceeds the preset climbing rule line;
and performing climbing behavior detection according to the video frame in which the suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame, to obtain a climbing behavior detection result.
In a possible implementation manner, the position information of the suspected climbing behavior in the corresponding video frame is specifically the position information of the human body area with the suspected climbing behavior in the corresponding video frame;
the step of performing foreground detection on the video data to be detected, and determining a video frame with suspected climbing behavior and position information of the suspected climbing behavior in the corresponding video frame according to a foreground detection result and a position relation of a preset climbing rule line, wherein the step of determining comprises the following steps:
performing foreground detection on the video frames of the video data to be detected to obtain foreground track information of the video data to be detected;
performing head-shoulder detection on the video frames of the video data to be detected to obtain head-shoulder track information of the video data to be detected;
determining a human body area in a video frame according to the foreground track information and the head-shoulder track information;
determining a video frame with suspected climbing behaviors and position information of the human body region with the suspected climbing behaviors in the corresponding video frame according to the position relation between the human body regions and a preset climbing rule line, wherein the suspected climbing behaviors represent that the height of the human body region exceeds the preset climbing rule line.
In one possible implementation manner, the determining the human body area in the video frame according to the foreground track information and the head-shoulder track information includes:
for any video frame corresponding to both the foreground track information and the head-shoulder track information, merging the portion of the video frame where the foreground region of the foreground track information overlaps the head-shoulder region of the head-shoulder track information, to obtain a human body region of the video frame;
the determining, according to the position relationship between each human body region and the preset climbing rule line, the video frame with suspected climbing behavior and the position information of the human body region with suspected climbing behavior in the corresponding video frame includes:
comparing, for each video frame, the positional relationship between the human body region in the frame and the preset climbing rule line; taking a video frame in which the human body region exceeds the preset climbing rule line as a video frame in which a suspected climbing behavior exists, and taking the position information of the human body region exceeding the preset climbing rule line as the position information of the human body region with the suspected climbing behavior in the corresponding video frame.
In one possible implementation manner, the detecting the climbing behavior according to the video frame with the suspected climbing behavior and the position information of the suspected climbing behavior in the corresponding video frame to obtain a climbing behavior detection result includes:
performing human body joint point detection on the video frames in which a suspected climbing behavior exists, to obtain the human body joint point information of each video frame in which a suspected climbing behavior exists;
determining the human body area with the suspected climbing behavior based on the position information of the human body area with the suspected climbing behavior in the corresponding video frame, and fusing the human body area with corresponding human body joint point information to obtain a plurality of fused human body posture information;
and analyzing the fused human body posture information according to the time sequence to obtain a climbing behavior detection result.
In a possible implementation manner, the determining the human body region with the suspected climbing behavior based on the position information of the human body region with the suspected climbing behavior in the corresponding video frame and fusing it with the corresponding human body joint point information to obtain a plurality of pieces of fused human body posture information includes:
for any video frame that has a suspected climbing behavior and human body joint point information, obtaining the human body region with the suspected climbing behavior in the video frame according to the position information of the human body region with the suspected climbing behavior in the video frame, and obtaining the position of each human body joint point in the video frame according to the human body joint point information of the video frame;
combining the positions of the human body joint points in the video frame with the human body region with the suspected climbing behavior, to obtain the human body region with the suspected climbing behavior containing the human body joint points as the fused human body posture information of the video frame.
In one possible implementation manner, the analyzing the fused human body posture information according to the time sequence to obtain the climbing behavior detection result includes:
based on the foreground track information or the head-shoulder track information, correlating the fused human body posture information belonging to the same foreground track information or the same head-shoulder track information to obtain a fused human body posture information track;
and carrying out human body climbing gesture analysis on the fused human body gesture information track according to a time sequence to obtain a climbing behavior detection result of the fused human body gesture information track.
In one possible embodiment, the video data to be detected is acquired by a camera.
In a second aspect, an embodiment of the present application provides a climbing behavior detection apparatus, the apparatus including:
the video data acquisition module is used for acquiring video data to be detected;
the primary climbing detection module is used for performing foreground detection on the video data to be detected, and determining, according to a foreground detection result and its positional relationship with a preset climbing rule line, a video frame in which a suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame, wherein the suspected climbing behavior indicates that the height of a foreground region in the foreground detection result exceeds the preset climbing rule line;
The climbing secondary detection module is used for detecting the climbing behavior according to the video frame with the suspected climbing behavior and the position information of the suspected climbing behavior in the corresponding video frame to obtain a climbing behavior detection result.
In a possible implementation manner, the position information of the suspected climbing behavior in the corresponding video frame is specifically the position information of the human body area with the suspected climbing behavior in the corresponding video frame; the climbing primary detection module comprises:
the foreground track acquisition sub-module is used for performing foreground detection on the video frames of the video data to be detected, to obtain foreground track information of the video data to be detected;
the head-shoulder track acquisition sub-module is used for carrying out head-shoulder detection on the video frames of the video data to be detected to obtain head-shoulder track information of the video data to be detected;
the human body region determining submodule is used for determining a human body region in a video frame according to the foreground track information and the head-shoulder track information;
the climbing behavior detection sub-module is used for determining a video frame with suspected climbing behaviors and position information of the human body region with suspected climbing behaviors in the corresponding video frame according to the position relation between the human body regions and the preset climbing rule line, wherein the suspected climbing behaviors represent that the height of the human body region exceeds the preset climbing rule line.
In a possible embodiment, the human body region determination sub-module is specifically configured to: for any video frame corresponding to both the foreground track information and the head-shoulder track information, merge the portion of the video frame where the foreground region of the foreground track information overlaps the head-shoulder region of the head-shoulder track information, to obtain a human body region of the video frame;
the climbing behavior detection sub-module is specifically configured to: compare, for each video frame, the positional relationship between the human body region in the frame and the preset climbing rule line; take a video frame in which the human body region exceeds the preset climbing rule line as a video frame in which a suspected climbing behavior exists, and take the position information of the human body region exceeding the preset climbing rule line as the position information of the human body region with the suspected climbing behavior in the corresponding video frame.
In one possible embodiment, the climbing secondary detection module includes:
the joint point acquisition sub-module is used for performing human body joint point detection on the video frames in which a suspected climbing behavior exists, to obtain the human body joint point information of each video frame in which a suspected climbing behavior exists;
the human body gesture acquisition sub-module is used for determining the human body region with suspected climbing behaviors based on the position information of the human body region with suspected climbing behaviors in the corresponding video frame, and fusing the human body region with corresponding human body joint point information to obtain a plurality of fused human body gesture information;
And the detection result acquisition sub-module is used for analyzing the fused human body posture information according to the time sequence to obtain the climbing behavior detection result.
In a possible implementation manner, the human body posture acquisition sub-module is specifically configured to: for any video frame that has a suspected climbing behavior and human body joint point information, obtain the human body region with the suspected climbing behavior in the video frame according to the position information of the human body region with the suspected climbing behavior in the video frame; obtain the position of each human body joint point in the video frame according to the human body joint point information of the video frame; and combine the positions of the human body joint points in the video frame with the human body region with the suspected climbing behavior, to obtain the human body region with the suspected climbing behavior containing the human body joint points as the fused human body posture information of the video frame.
In a possible implementation manner, the detection result obtaining sub-module is specifically configured to: based on the foreground track information or the head-shoulder track information, correlating the fused human body posture information belonging to the same foreground track information or the same head-shoulder track information to obtain a fused human body posture information track; and carrying out human body climbing gesture analysis on the fused human body gesture information track according to a time sequence to obtain a climbing behavior detection result of the fused human body gesture information track.
In one possible embodiment, the video data to be detected is acquired by a camera.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement any one of the climbing behavior detection methods described above when executing the program stored in the memory.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored therein, which when executed by a processor implements any of the climbing behavior detection methods described above.
According to the climbing behavior detection method and device, the electronic device and the storage medium provided by the embodiments of the application, video data to be detected is acquired; foreground detection is performed on the video data to be detected, and a video frame in which a suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame are determined according to a foreground detection result and its positional relationship with a preset climbing rule line, wherein the suspected climbing behavior indicates that the height of a foreground region in the foreground detection result exceeds the preset climbing rule line; and climbing behavior detection is performed according to the video frame in which the suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame, to obtain a climbing behavior detection result. In this way, whether a climbing behavior exists in the monitoring area is detected automatically: a suspected climbing behavior is first obtained, and climbing behavior detection is then performed on the suspected climbing behavior. Of course, it is not necessary for any one product or method practicing the application to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first schematic diagram of a climbing behavior detection method according to an embodiment of the present application;
FIG. 2 is a second schematic diagram of a climbing behavior detection method according to an embodiment of the present application;
FIG. 3 is a third schematic diagram of a climbing behavior detection method according to an embodiment of the present application;
FIG. 4 is a first schematic diagram of a climbing behavior detection apparatus according to an embodiment of the present application;
FIG. 5 is a second schematic view of a climbing behavior detection apparatus according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a video capture module mounting location according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a training method of a head-shoulder detection model according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a head-shoulder detecting method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a training method of a human body node detection model according to an embodiment of the present application;
FIG. 10 is a schematic view of a human body joint according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a human body node detection method according to an embodiment of the present application;
fig. 12 is a schematic diagram of an electronic device according to an embodiment of the application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
First, terms of art in the embodiments of the present application will be explained:
Climbing behavior: a person performs an upward climbing action and keeps a limb above a prescribed height for a certain period of time.
Human body joint point detection: the positions of joints such as the top of the head, the neck, the elbow, the wrist and the like in a frame of image are detected, and the joints are connected to form a skeleton of a human body.
Head and shoulder detection: detecting the head and shoulder position of a human body in one frame of image.
Background modeling: detecting the regions belonging to the foreground in one frame of image.
In order to automatically detect whether a climbing behavior exists in a monitoring area, an embodiment of the present application provides a climbing behavior detection method, referring to fig. 1, including:
s101, obtaining video data to be detected.
The climbing behavior detection method of the embodiment of the application can be realized through electronic equipment, and in particular, the electronic equipment can be an intelligent video camera, a hard disk video recorder or a server and the like.
The video data to be detected is video data of the monitoring area in which climbing behavior is to be detected; it can be obtained from the video data of the monitoring area collected by the monitoring equipment, and it comprises multiple video frames.
S102, performing foreground detection on the video data to be detected, and determining, according to a foreground detection result and its positional relationship with a preset climbing rule line, a video frame in which a suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame, where the suspected climbing behavior indicates that the height of a foreground region in the foreground detection result exceeds the preset climbing rule line.
Foreground detection is performed on each video frame of the video data to be detected by means of background modeling, so as to obtain a foreground detection result for each video frame. According to the foreground detection result of each video frame and its positional relationship with the preset climbing rule line, the video frames in which a suspected climbing behavior exists are determined, together with the position information of the suspected climbing behavior in the corresponding video frame. The preset climbing rule line is a boundary line, preset in the monitoring area, that a human body must not exceed; it can be set according to the requirements of the actual scene. The suspected climbing behavior indicates that the height of a foreground region in the foreground detection result exceeds the preset climbing rule line. Optionally, when the height of a foreground region in the foreground detection result of a video frame exceeds the preset climbing rule line, it is determined that a suspected climbing behavior exists in that video frame, and the position information of the foreground region whose height exceeds the preset climbing rule line is taken as the position information of the suspected climbing behavior in the video frame.
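As a minimal illustration of this first-stage check (not part of the original patent text), the following sketch assumes foreground regions are given as axis-aligned boxes in image coordinates with the y-axis pointing downward, and models the preset climbing rule line as a horizontal line at a fixed image row; the box layout and helper name are illustrative only.

```python
from typing import List, Tuple

# A foreground region as an axis-aligned box (x, y_top, width, height);
# image y grows downward, so "higher in the scene" means a smaller y_top.
Box = Tuple[int, int, int, int]

def suspected_climbing(regions: List[Box], rule_line_y: int) -> List[Box]:
    """Return the foreground regions whose top edge rises above the
    preset climbing rule line (i.e. y_top < rule_line_y)."""
    return [box for box in regions if box[1] < rule_line_y]

# Example: two foreground regions, rule line at image row 200.
frame_regions = [(50, 250, 80, 160), (300, 120, 70, 180)]
hits = suspected_climbing(frame_regions, rule_line_y=200)
if hits:
    print("suspected climbing behavior in this frame at:", hits)
```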
S103, detecting the climbing behavior according to the video frame with the suspected climbing behavior and the position information of the suspected climbing behavior in the corresponding video frame, and obtaining a climbing behavior detection result.
Climbing behavior detection is performed, in the video frames in which a suspected climbing behavior exists, on the region indicated by the position information of the suspected climbing behavior, so as to obtain a climbing behavior detection result indicating whether a climbing behavior exists. When a climbing behavior exists, the climbing behavior detection result may further include the position information of the region in which the climbing behavior exists. In this embodiment of the application, climbing behavior detection can be performed by a pre-trained deep learning network, or by means of human body joint point detection and the like.
In this embodiment of the application, automatic detection of whether a climbing behavior exists in the monitoring area is realized: a suspected climbing behavior is first obtained through foreground detection, and climbing behavior detection is then performed on the suspected climbing behavior.
In a possible implementation manner, the position information of the suspected climbing behavior in the corresponding video frame is specifically the position information of the human body region in which the suspected climbing behavior exists in the corresponding video frame. Referring to fig. 2, the performing foreground detection on the video data to be detected, and determining, according to a foreground detection result and its positional relationship with a preset climbing rule line, a video frame in which a suspected climbing behavior exists and the position information of the suspected climbing behavior in the corresponding video frame, includes:
s1021, performing foreground detection on the video frames of the video data to be detected to obtain foreground track information of the video data to be detected.
Background modeling is performed on each video frame of the video data to be detected; foreground detection is performed on each video frame based on the background modeling result, and the foreground information of each video frame is determined. The foreground information of the video frames is then associated according to the time order of the video frames and the position of each piece of foreground information, so as to obtain the foreground track information.
Specifically, the foreground information may be a foreground region. Background modeling is first performed on each video frame of the video data to be detected, and foreground detection is performed on each video frame based on the background modeling result, so as to obtain the foreground region of each video frame. Association and tracking are then performed across the foreground regions of consecutive frames, yielding the track information of the foreground regions.
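One possible realization of the background modeling and foreground region extraction described above is sketched below. It uses OpenCV's MOG2 background subtractor purely as an example; the patent does not prescribe a particular background modeling algorithm, and the video file name, area threshold and post-processing steps are assumptions.

```python
import cv2

# One possible background-modeling choice (OpenCV 4.x); the patent does not
# mandate a specific algorithm.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def foreground_regions(frame, min_area=500):
    """Return bounding boxes (x, y, w, h) of the foreground blobs in one frame."""
    mask = subtractor.apply(frame)                               # foreground mask
    mask = cv2.medianBlur(mask, 5)                               # suppress noise
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

cap = cv2.VideoCapture("monitoring.mp4")    # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes = foreground_regions(frame)
    # boxes from consecutive frames would then be associated into foreground tracks
cap.release()
```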
S1022, performing head-shoulder detection on the video frames of the video data to be detected to obtain head-shoulder track information of the video data to be detected.
Deep-learning-based human head-shoulder frame analysis is performed on each video frame of the video data to be detected; a convolutional neural network for single-class object detection can be used to detect the human head-shoulder frames. The convolutional neural network is obtained by pre-training: a detector for head-shoulder frames is first trained, the detector adopts a convolutional neural network, and the network is trained with sample images in which human head-shoulder frames have been annotated. After the head-shoulder frames output by the head-shoulder detection model are obtained, association and tracking are performed according to the head-shoulder frame information of consecutive frames, yielding the track information of the head-shoulder frames.
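The head-shoulder detector itself is a pre-trained single-class CNN and is not reproduced here. Under that assumption, the sketch below only illustrates how detected head-shoulder boxes from consecutive frames might be associated into tracks by greedy overlap matching; the IoU threshold and the track record layout are illustrative choices.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, frame_idx, iou_thresh=0.3):
    """Greedily attach this frame's head-shoulder boxes to the existing track
    whose last box overlaps them most; unmatched boxes start new tracks."""
    for det in detections:
        best, best_iou = None, iou_thresh
        for track in tracks:
            score = iou(track["boxes"][-1][1], det)
            if score > best_iou:
                best, best_iou = track, score
        if best is not None:
            best["boxes"].append((frame_idx, det))
        else:
            tracks.append({"boxes": [(frame_idx, det)]})
    return tracks
```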
S1023, determining the human body area in the video frame according to the foreground track information and the head-shoulder track information.
The foreground track information and the head-shoulder track information in the same video frame are fused, so as to obtain a fused human body region that is sensitive to climbing behaviors. For example, the head-shoulder track and the foreground track are combined to build a first-level climbing probability model, and the resulting foreground region containing a human head-shoulder frame is used as the human body region.
S1024, determining, according to the positional relationship between each human body region and the preset climbing rule line, a video frame in which a suspected climbing behavior exists and the position information of the human body region with the suspected climbing behavior in the corresponding video frame, where the suspected climbing behavior indicates that the height of the human body region exceeds the preset climbing rule line.
Climbing behavior detection is performed on the fused human body regions that are sensitive to climbing behaviors, the video frames in which a suspected climbing behavior exists (hereinafter referred to as target video frames) are determined, and the position information of the human body region with the suspected climbing behavior in each target video frame is obtained. The suspected climbing behavior indicates that the height of the human body region exceeds the preset climbing rule line. Optionally, when the height of a human body region in a video frame exceeds the preset climbing rule line, it is determined that a suspected climbing behavior exists in that video frame, and the position information of the human body region whose height exceeds the preset climbing rule line is taken as the position information of the human body region with the suspected climbing behavior in the video frame.
In one possible implementation manner, the determining the human body area in the video frame according to the foreground track information and the head-shoulder track information includes:
For any video frame corresponding to both the foreground track information and the head-shoulder track information, the portion of the video frame where the foreground region of the foreground track information overlaps the head-shoulder region of the head-shoulder track information is merged, to obtain a human body region of the video frame.
The determining, according to the positional relationship between each of the body regions and the preset climbing rule line, a video frame having a suspected climbing behavior and positional information of the body region having the suspected climbing behavior in the corresponding video frame includes:
comparing, for each video frame, the positional relationship between the human body region in the frame and the preset climbing rule line; taking a video frame in which the human body region exceeds the preset climbing rule line as a video frame in which a suspected climbing behavior exists, and taking the position information of the human body region exceeding the preset climbing rule line as the position information of the human body region with the suspected climbing behavior in the corresponding video frame.
Specifically, the head-shoulder track information and the foreground track information in the same video frame can be combined into a human body region. For any video frame, the foreground track information and the head-shoulder track information whose regions overlap in that frame are merged and used as the human body region of the video frame. The overlap may mean that the human head-shoulder frame of the head-shoulder track information is entirely contained in the foreground region of the foreground track information, or that the ratio of the overlapping area to the area of the human head-shoulder frame of the head-shoulder track information is larger than a preset area threshold, where the overlapping area is the area of overlap between the human head-shoulder frame of the head-shoulder track information and the foreground region of the foreground track information.
Whether a climbing behavior exists is determined according to the positional relationship between the human body region and the preset climbing rule line, where the preset climbing rule line is the lowest boundary of the area, preset in the monitoring area, that a human body must not exceed. If the human body region exceeds the preset climbing rule line, it is determined that a suspected climbing behavior exists, and the position information of that human body region in the corresponding video frame is extracted. A human body region exceeding the preset climbing rule line may mean that the center point of the human body region exceeds the preset climbing rule line, or that any point in the human body region exceeds the preset climbing rule line.
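One possible reading of the fusion and overlap criteria described above is sketched below: a head-shoulder frame is attached to a foreground region when it is fully contained in it, or when the ratio of the overlap area to the head-shoulder frame area exceeds a threshold, and the merged box is taken as the human body region. The threshold value and the union-box merge (which reduces to the foreground region itself in the containment case) are illustrative choices, not values given in the patent.

```python
def overlap_area(fg, hs):
    """Overlap between a foreground region and a head-shoulder frame,
    both given as (x, y, w, h)."""
    iw = max(0, min(fg[0] + fg[2], hs[0] + hs[2]) - max(fg[0], hs[0]))
    ih = max(0, min(fg[1] + fg[3], hs[1] + hs[3]) - max(fg[1], hs[1]))
    return iw * ih

def belongs_together(fg, hs, area_ratio_thresh=0.8):
    """True if the head-shoulder frame is (mostly) inside the foreground
    region: overlap / head-shoulder area >= threshold (1.0 = full containment)."""
    hs_area = hs[2] * hs[3]
    return hs_area > 0 and overlap_area(fg, hs) / hs_area >= area_ratio_thresh

def merge(fg, hs):
    """Merge the two boxes into one human body region (their union box)."""
    x1, y1 = min(fg[0], hs[0]), min(fg[1], hs[1])
    x2 = max(fg[0] + fg[2], hs[0] + hs[2])
    y2 = max(fg[1] + fg[3], hs[1] + hs[3])
    return (x1, y1, x2 - x1, y2 - y1)
```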
In this embodiment of the application, suspected climbing behaviors obtained based on the foreground track information and the head-shoulder track information can improve the accuracy of climbing behavior detection; moreover, the foreground detection and head-shoulder detection algorithms are relatively simple and consume few computing resources, so the efficiency of climbing behavior detection can also be improved.
In one possible implementation manner, referring to fig. 3, according to the video frame with suspected climbing behavior and the position information of the suspected climbing behavior in the corresponding video frame, the step of detecting the climbing behavior to obtain a climbing behavior detection result includes:
S1031, performing human body joint point detection on the video frames in which a suspected climbing behavior exists, to obtain the human body joint point information of each video frame in which a suspected climbing behavior exists.
Human body joint point detection can be performed on the target video frame through a pre-trained neural network model. Specifically, an identifier (frame number or timestamp, etc.) of a video frame with suspected climbing behavior is obtained, and a corresponding video frame is extracted from video data to be detected according to the identifier to serve as a target video frame. And detecting the human body joint points of each target video frame, thereby obtaining the human body joint point information of each target video frame. Alternatively, in order to reduce the consumption of computing resources as much as possible, human joint point detection may be performed only on a human body region where a suspected climbing behavior exists in a video frame where a suspected climbing behavior exists.
The deep-learning-based human body joint point analysis of each target video frame can adopt a bottom-up convolutional neural network joint point detection framework to detect the human body joint points. The process of pre-training the neural network model may include: acquiring sample images annotated with human body joint points, and inputting the sample images into a human body joint point detector (the detector adopts a convolutional neural network structure) for training, finally obtaining the pre-trained neural network model.
S1032, determining the human body area with the suspected climbing behavior based on the position information of the human body area with the suspected climbing behavior in the corresponding video frame, and fusing the human body area with the corresponding human body joint point information to obtain a plurality of fused human body posture information.
The human body region in each target video frame is determined based on the position information of the human body region with the suspected climbing behavior in the corresponding video frame. The human body region in each target video frame is then fused with the human body joint point information of the same frame, so as to obtain a human body region containing the human body joint point information, which is used as the fused human body posture information of that target video frame.
In one possible implementation manner, the determining the human body area with the suspected climbing behavior based on the position information of the human body area with the suspected climbing behavior in the corresponding video frame and fusing the human body area with corresponding human body node information to obtain a plurality of fused human body posture information includes:
Step one: for any video frame that has a suspected climbing behavior and human body joint point information, the human body region with the suspected climbing behavior in the video frame is obtained according to the position information of the human body region with the suspected climbing behavior in the video frame, and the position of each human body joint point in the video frame is obtained according to the human body joint point information of the video frame.
Step two: the positions of the human body joint points in the video frame are combined with the human body region with the suspected climbing behavior, and the human body region with the suspected climbing behavior containing the human body joint points is obtained as the fused human body posture information of the video frame.
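A minimal sketch of this fusion step is given below, assuming the joint points are delivered as named image coordinates; keeping only the joints that fall inside the suspected body region is one plausible way to combine the two pieces of information, and the joint names and record layout are assumptions.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]

def fuse_pose(frame_idx: int,
              body_box: Tuple[int, int, int, int],
              joints: Dict[str, Point]) -> dict:
    """Fuse a suspected human body region with the joint points detected in
    the same frame: keep only the joints inside the region and return one
    record of fused human body posture information."""
    x, y, w, h = body_box
    inside = {name: (px, py) for name, (px, py) in joints.items()
              if x <= px <= x + w and y <= py <= y + h}
    return {"frame_idx": frame_idx, "body_box": body_box, "joints": inside}

# Example: a suspected body region plus two detected joints; only "head_top"
# is kept because "left_wrist" lies outside the region.
pose = fuse_pose(12, (100, 80, 60, 150),
                 {"head_top": (130.0, 90.0), "left_wrist": (400.0, 300.0)})
```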
S1033, analyzing the fused human body posture information according to the time sequence to obtain a climbing behavior detection result.
And according to the time sequence of each target video frame, carrying out climbing behavior analysis on the fused human body posture information of each target video frame to obtain a climbing behavior detection result. The climbing behavior detection result comprises whether climbing behavior exists or not, and can also comprise position information of a human body area where the climbing behavior exists, so that follow-up evidence collection is facilitated.
In one possible implementation manner, the analyzing the fused human body posture information according to the time sequence to obtain the climbing behavior detection result includes:
step one, based on the foreground track information or the head-shoulder track information, correlating the fused human body posture information belonging to the same foreground track information or the same head-shoulder track information to obtain a fused human body posture information track.
And step two, carrying out human body climbing gesture analysis on the fused human body gesture information track according to the time sequence to obtain a climbing behavior detection result of the fused human body gesture information track.
The position of the fused human body posture information corresponds to the position area of the foreground track information/the head-shoulder track information, so that the fused human body posture information belonging to the same foreground track information or the head-shoulder track information can be associated based on the foreground track information or the head-shoulder track information to obtain a fused human body posture information track.
For example, suppose the video data to be detected includes video frames 1 to 10. Foreground track information 1 consists of region A in video frame 3, region B in video frame 4, region C in video frame 5 and region D in video frame 6; foreground track information 2 consists of region E in video frame 4, region F in video frame 5 and region G in video frame 6. Video frame 3 contains fused human body posture information a corresponding to region A; video frame 4 contains fused human body posture information b corresponding to region B and fused human body posture information e corresponding to region E; video frame 5 contains fused human body posture information c corresponding to region C and fused human body posture information f corresponding to region F; and video frame 6 contains fused human body posture information d corresponding to region D and fused human body posture information g corresponding to region G. The fused human body posture information a to d corresponding to regions A to D of foreground track information 1 is associated to obtain one fused human body posture information track, and the fused human body posture information e to g corresponding to regions E to G of foreground track information 2 is associated to obtain another fused human body posture information track.
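The association in this example could look roughly like the sketch below, which groups fused posture records (using the dictionary layout of the earlier sketch) by the foreground track whose region they were derived from; matching by identical box coordinates is a simplification for illustration.

```python
def pose_tracks(foreground_tracks: dict, fused_poses: list) -> dict:
    """Build one fused-human-body-posture-information track per foreground track.

    foreground_tracks: {track_id: {frame_idx: region_box}}
    fused_poses:       [{"frame_idx": ..., "body_box": ..., "joints": ...}, ...]
    """
    tracks = {tid: [] for tid in foreground_tracks}
    for pose in fused_poses:
        for tid, regions in foreground_tracks.items():
            # A pose belongs to the track whose region in that frame it was
            # built from (matched here simply by identical box coordinates).
            if regions.get(pose["frame_idx"]) == pose["body_box"]:
                tracks[tid].append(pose)
                break
    return {tid: sorted(ps, key=lambda p: p["frame_idx"])
            for tid, ps in tracks.items() if ps}
```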
Human body climbing posture analysis is performed on each fused human body posture information track in time order, to obtain the climbing behavior detection result of each fused human body posture information track. Specifically, the change of the human body joint points in the fused human body posture information track can be compared with the change of the human body joint points during real human climbing, and a climbing behavior is determined to exist when the similarity is larger than a preset similarity threshold. For example, according to the change of the human body joint points in the posture information track, the ratio of motion amplitude between designated limbs is calculated and compared with the corresponding ratio during real human climbing; when the similarity of the ratios for multiple groups of limbs is larger than the preset similarity threshold, it is determined that a climbing behavior exists.
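The limb-motion-amplitude comparison described above might be sketched as follows; the vertical-displacement amplitude measure, the limb pairs, the reference ratios and the similarity threshold are all illustrative assumptions rather than values taken from the patent.

```python
def motion_amplitude(track: list, joint: str) -> float:
    """Vertical displacement range of one joint over a fused posture track
    (a list of {"frame_idx", "body_box", "joints"} records)."""
    ys = [p["joints"][joint][1] for p in track if joint in p["joints"]]
    return max(ys) - min(ys) if len(ys) >= 2 else 0.0

def climbing_similarity(track, limb_pairs, reference_ratios):
    """Compare the ratio of motion amplitude between designated limb pairs
    with reference ratios measured on real climbing sequences; returns a
    similarity in [0, 1], where 1 means the ratios match exactly."""
    sims = []
    for (a, b), ref in zip(limb_pairs, reference_ratios):
        amp_a, amp_b = motion_amplitude(track, a), motion_amplitude(track, b)
        if amp_b <= 0 or ref <= 0:
            continue
        ratio = amp_a / amp_b
        sims.append(min(ratio, ref) / max(ratio, ref))
    return sum(sims) / len(sims) if sims else 0.0

# Illustrative decision with made-up reference ratios and a 0.8 threshold:
# is_climbing = climbing_similarity(track, [("wrist", "elbow"), ("knee", "hip")],
#                                   [1.6, 1.3]) > 0.8
```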
The human body climbing posture analysis can also be realized through a deep learning network: the fused human body posture information track is input into a pre-trained deep learning network to obtain the climbing behavior detection result. Human body posture information tracks whose joint points have been annotated with climbing behaviors can be used in advance as positive samples to train the deep learning network, yielding the pre-trained deep learning network.
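If the analysis is instead performed by a learned model, one plausible (purely illustrative) realization is a small recurrent classifier over per-frame posture feature vectors, for example in PyTorch; the architecture, feature layout and training details below are assumptions and are not specified by the patent.

```python
import torch
import torch.nn as nn

class ClimbClassifier(nn.Module):
    """Binary classifier over a sequence of per-frame posture vectors,
    e.g. flattened (x, y) coordinates of 17 joints -> 34 features."""
    def __init__(self, feat_dim=34, hidden=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, frames, feat_dim)
        _, h_n = self.gru(x)
        return torch.sigmoid(self.head(h_n[-1]))   # climbing probability

# Training would use posture tracks labelled as climbing (positive samples),
# e.g. with nn.BCELoss(); here we only run a forward pass on dummy data.
model = ClimbClassifier()
dummy_tracks = torch.randn(2, 30, 34)              # two 30-frame posture tracks
probabilities = model(dummy_tracks)
```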
In the embodiment of the application, the climbing behavior is detected by combining the primary detection result of the climbing behavior and the human body joint point detection result, so that the accuracy of the climbing behavior detection result can be improved, and the climbing behavior detection performance can be improved.
In one possible embodiment, the video data to be detected is collected by a camera.
In this embodiment of the application, climbing behavior detection can be performed using the video data collected by a single camera. Compared with a multi-camera joint detection scheme, this reduces deployment cost, improves deployment convenience, improves the running efficiency of the climbing behavior detection system, and reduces resource consumption.
In one possible embodiment, the method further comprises: and triggering an alarm when the climbing behavior detection result indicates that climbing behavior exists.
When the climbing behavior detection result indicates that a climbing behavior exists, an alarm is triggered. The alarm mode can be customized; for example, an alarm interval and an alarm mode can be set, and an alarm picture containing the target frame of the climbing human body can be output and uploaded accordingly.
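A simple sketch of alarm output with a configurable alarm interval is shown below; the detection record layout and the print-based upload stub are placeholders for whatever alarm channel is actually used.

```python
import time

class AlarmOutput:
    """Raise an alarm for a positive detection, but at most once per
    configured interval; the alarm picture upload is left as a stub."""
    def __init__(self, min_interval_s=30.0):
        self.min_interval_s = min_interval_s
        self._last_alarm = 0.0

    def report(self, detection: dict):
        if not detection.get("climbing"):
            return
        now = time.time()
        if now - self._last_alarm >= self.min_interval_s:
            self._last_alarm = now
            # Here an alarm picture with the climbing human body target frame
            # would be rendered and uploaded.
            print("ALARM: climbing behavior at", detection.get("box"))

alarm = AlarmOutput(min_interval_s=30.0)
alarm.report({"climbing": True, "box": (300, 90, 70, 180)})
```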
In this embodiment of the application, an alarm is raised for the climbing behavior, which facilitates assisted management.
The embodiment of the application also provides a climbing behavior detection device, referring to fig. 4, the device comprises:
a video data acquisition module 401, configured to acquire video data to be detected;
a primary climbing detection module 402, configured to perform foreground detection on the video data to be detected, and determine, according to a foreground detection result and a position relationship of a preset climbing rule line, a video frame in which a suspected climbing behavior exists and position information of the suspected climbing behavior in a corresponding video frame, where the suspected climbing behavior indicates that a height of a foreground region in the foreground detection result exceeds the preset climbing rule line;
the climbing secondary detection module 403 is configured to detect a climbing behavior according to a video frame with a suspected climbing behavior and position information of the suspected climbing behavior in a corresponding video frame, and obtain a climbing behavior detection result.
In one possible implementation manner, the location information of the suspected climbing behavior in the corresponding video frame is specifically the location information of the human body area with the suspected climbing behavior in the corresponding video frame; the climbing primary detection module 402 includes:
the foreground track acquisition sub-module is used for performing foreground detection on the video frames of the video data to be detected, to obtain foreground track information of the video data to be detected;
The head-shoulder track acquisition sub-module is used for carrying out head-shoulder detection on the video frames of the video data to be detected to obtain head-shoulder track information of the video data to be detected;
the human body region determining submodule is used for determining a human body region in a video frame according to the foreground track information and the head-shoulder track information;
the climbing behavior detection sub-module is used for determining a video frame with suspected climbing behaviors and position information of the human body region with suspected climbing behaviors in the corresponding video frame according to the position relation between the human body regions and the preset climbing rule line, wherein the suspected climbing behaviors represent that the height of the human body region exceeds the preset climbing rule line.
In one possible embodiment, the human body region determining sub-module is specifically configured to: for any video frame corresponding to both the foreground track information and the head-shoulder track information, merge the portion of the video frame where the foreground region of the foreground track information overlaps the head-shoulder region of the head-shoulder track information, to obtain a human body region of the video frame;
the climbing behavior detection sub-module is specifically configured to: compare, for each video frame, the positional relationship between the human body region in the frame and the preset climbing rule line; take a video frame in which the human body region exceeds the preset climbing rule line as a video frame in which a suspected climbing behavior exists, and take the position information of the human body region exceeding the preset climbing rule line as the position information of the human body region with the suspected climbing behavior in the corresponding video frame.
In one possible implementation, the climbing secondary detection module 403 includes:
the joint point acquisition sub-module is used for performing human body joint point detection on the video frames in which a suspected climbing behavior exists, to obtain the human body joint point information of each video frame in which a suspected climbing behavior exists;
the human body gesture acquisition sub-module is used for determining the human body region with suspected climbing behaviors based on the position information of the human body region with suspected climbing behaviors in the corresponding video frame, and fusing the human body region with corresponding human body joint point information to obtain a plurality of fused human body gesture information;
and the detection result acquisition sub-module is used for analyzing the fused human body posture information according to the time sequence to obtain the climbing behavior detection result.
In one possible implementation manner, the human body posture acquisition sub-module is specifically configured to: for any video frame that has a suspected climbing behavior and human body joint point information, obtain the human body region with the suspected climbing behavior in the video frame according to the position information of the human body region with the suspected climbing behavior in the video frame; obtain the position of each human body joint point in the video frame according to the human body joint point information of the video frame; and combine the positions of the human body joint points in the video frame with the human body region with the suspected climbing behavior, to obtain the human body region with the suspected climbing behavior containing the human body joint points as the fused human body posture information of the video frame.
In one possible implementation manner, the detection result obtaining sub-module is specifically configured to: based on the foreground track information or the head-shoulder track information, correlating the fused human body posture information belonging to the same foreground track information or the same head-shoulder track information to obtain a fused human body posture information track; and carrying out human body climbing gesture analysis on the fused human body gesture information track according to the time sequence to obtain a climbing behavior detection result of the fused human body gesture information track.
In one possible embodiment, the video data to be detected is collected by a camera.
The embodiment of the application also provides a climbing behavior detection device, referring to fig. 5, the device comprises:
the system comprises a video acquisition module 51, a primary climbing detection module 52, a secondary climbing detection module 53 and an information output module 54.
The video acquisition module 51 is used for acquiring video images of the monitored area. For example, as shown in fig. 6, the video acquisition module 51 is installed at an elevated position so as to monitor the area from an overhead view. The video acquisition module 51 sends the video data to be detected to the primary climbing detection module 52.
The input of the primary climbing detection module 52 is the video data to be detected, and the output is the detection information after the primary climbing detection (including the identifiers of the video frames in which a climbing behavior exists and the position information of the climbing behavior in the corresponding video frames). The primary climbing detection module 52 includes a background modeling and target tracking sub-module 521, a head-shoulder detection and target tracking sub-module 522, a primary climbing sensitive characteristic information fusion sub-module 523 and a primary climbing detection sub-module 524.
The background modeling and target tracking sub-module 521 first performs conventional background modeling on the video frames of the input video data to be detected and obtains groups of foreground regions. Association and tracking are then performed according to the foreground region group information of consecutive frames, so as to obtain the track information of the foreground regions.
The head-shoulder detection and target tracking sub-module 522 performs deep-learning-based human head-shoulder frame analysis on the video frames of the input video data to be detected, using a convolutional neural network for single-class object detection to detect the human head-shoulder frames. A detector for head-shoulder frames is first trained; the detector adopts a convolutional neural network, which is trained with collected images annotated with human head-shoulder frames, for example as shown in fig. 7. After the head-shoulder detection model is obtained, human head-shoulder detection analysis can be performed on the video frames, for example as shown in fig. 8. After the head-shoulder frames output by the head-shoulder detection model are obtained, association and tracking are performed according to the frame group information of consecutive frames, so as to obtain the track information of the head-shoulder frames.
The background modeling and head-shoulder detection information of the video frames is gathered in the primary climbing sensitive characteristic information fusion sub-module 523 for fusion of the climbing-sensitive information. The tracked foreground regions and the tracked head-shoulder frame information are associated and combined, and the resulting feature information sensitive to climbing behaviors is input to the primary climbing detection sub-module 524 for the first climbing detection.
The primary climbing detection sub-module 524 uses the fused climbing-sensitive feature information to perform the first climbing behavior detection: the climbing-sensitive characteristic information is used to judge whether a climbing behavior exists, and the position information of the corresponding climbing human body foreground region is given, so that it can be related to the corresponding key frames. This information is input to the secondary climbing detection module 53 for a second climbing detection, so as to improve the performance of climbing behavior detection.
The input of the secondary climbing detection module 53 is the video data to be detected and the information output by the primary climbing detection module 52 (including the identifiers of the video frames in which a climbing behavior exists and the position information of the climbing behavior in the corresponding video frames), and its output is the detection information after the secondary climbing detection. The secondary climbing detection module 53 includes a key frame extraction sub-module 531, a human body joint point detection sub-module 532, a secondary climbing sensitive characteristic information fusion sub-module 533 and a secondary climbing detection sub-module 534; this hierarchical climbing detection design is key to reducing system resource consumption.
The key frame extraction sub-module 531 extracts frames from the video data to be detected according to the identifiers of the video frames with climbing behavior output by the primary climbing detection module 52, and obtains the video frames in which a climbing behavior exists. These video frames (hereinafter referred to as target video frames) are input to the human body joint point detection sub-module 532.
The human body joint point detection sub-module 532 performs deep-learning-based human body joint point analysis on the input target video frames, using a bottom-up convolutional neural network joint point detection framework. A human body joint point detector is first trained; the detector adopts a convolutional neural network structure, trained with collected images annotated with human body joint points; the structure is shown in fig. 9 and the joint points are illustrated in fig. 10. After the joint point detection model is obtained, human body joint point detection analysis can be performed on the target video frames; a flowchart of this analysis is shown in fig. 11. The human body joint point information output by the detection model is input to the secondary climbing sensitive characteristic information fusion sub-module 533.
The information output by the primary climbing detection module 52 and the human body joint point information are gathered in the secondary climbing sensitive characteristic information fusion sub-module 533 for secondary fusion of the climbing-sensitive information: the detected human body joint points are associated and combined with the information output by the primary climbing detection module 52, yielding fused human body posture information sensitive to climbing behaviors, which is input to the secondary climbing detection sub-module 534 for the secondary climbing detection.
The secondary climbing detection sub-module 534 performs the second climbing detection using the fused human body posture information and obtains the climbing behavior detection result. The climbing behavior detection result indicates whether climbing behavior exists and may further include the climbing target position information corresponding to the target video frames. The climbing behavior detection result is input to the information output module 54 for alarm output.
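The time-sequential analysis of fused posture information could, for example, check whether the highest detected joint stays above the rule line and keeps rising across consecutive key frames; this rising-trend criterion is an illustrative assumption, not the claimed decision rule:

```python
def secondary_climb_detection(fused_pose_track, rule_line_y, min_frames=3):
    """Analyze a time-ordered track of fused posture records and decide whether it
    shows climbing behavior (hypothetical criterion: the highest joint is above the
    rule line and keeps rising for at least `min_frames` frames)."""
    tops = [min(y for _, y in record["joints"])
            for record in fused_pose_track if record["joints"]]
    above_line = [y for y in tops if y < rule_line_y]   # smaller y = higher in the image
    is_rising = all(later <= earlier for earlier, later in zip(above_line, above_line[1:]))
    has_climbing = len(above_line) >= min_frames and is_rising
    return {"has_climbing": has_climbing, "highest_points": tops}
```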
The information output module 54 receives the climbing behavior detection result from the secondary climbing detection module 53 and, in combination with the alarm interval setting and the alarm mode setting, outputs and uploads an alarm picture containing the climbing human body target frame.
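A small sketch of how the alarm interval setting could gate the alarm output is given below; the interval value and the drawing and upload hooks are hypothetical integration points, not part of the disclosed module:

```python
import time

def output_alarm(detection_result, last_alarm_time, alarm_interval_s=30):
    """Rate-limit climbing alarms according to a configured alarm interval and
    return the timestamp of the last emitted alarm."""
    now = time.time()
    if not detection_result["has_climbing"]:
        return last_alarm_time
    if now - last_alarm_time < alarm_interval_s:
        return last_alarm_time  # still within the alarm interval, suppress the alarm
    # draw_target_frame(...) and upload_picture(...) would be hypothetical hooks that
    # draw the climbing human body target frame on the key frame and upload the picture
    print("climbing alarm raised")
    return now
```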
In the embodiment of the application, the climbing behavior detection method based on climbing sensitive feature fusion information from background modeling, head-shoulder detection and human body joint point detection is tailored to the specific task of climbing behavior detection and can greatly improve climbing behavior detection performance. This video-based climbing detection solution greatly reduces the manpower and material cost of monitoring-room supervision. In addition, the system generates a large number of key frame images during operation, and these images can be used to further improve the performance of the human body joint point detection model.
The embodiment of the application also provides electronic equipment, which comprises: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, and implement the following steps:
acquiring video data to be detected;
performing foreground detection on the video data to be detected, and determining video frames with suspected climbing behavior and the position information of the suspected climbing behavior in the corresponding video frames according to the foreground detection result and the position relation with a preset climbing rule line, wherein a suspected climbing behavior indicates that the height of a foreground region in the foreground detection result exceeds the preset climbing rule line;
and detecting the climbing behavior according to the video frame with the suspected climbing behavior and the position information of the suspected climbing behavior in the corresponding video frame to obtain a climbing behavior detection result.
Optionally, referring to fig. 12, the electronic device provided by the embodiment of the present application further includes a communication interface 902 and a communication bus 904, where the processor 901, the communication interface 902 and the memory 903 communicate with each other through the communication bus 904.
Optionally, the processor may be configured to implement any of the climbing behavior detection methods when executing the computer program stored in the memory.
The communication bus of the above electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus and so on. For ease of illustration, the bus is represented by a single bold line in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include RAM (Random Access Memory) or NVM (Non-Volatile Memory), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor such as a CPU (Central Processing Unit) or an NP (Network Processor); it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the application also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, any climbing behavior detection method is realized.
It should be noted that, in this document, the technical features of the various alternatives may be combined to form further solutions as long as they are not contradictory, and all such solutions fall within the scope of the disclosure of the present application. Relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises that element.
In this specification, the embodiments are described in a corresponding manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the descriptions of the apparatus, electronic device and storage medium embodiments are relatively brief because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding descriptions of the method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (8)

1. A climb behavior detection method, the method comprising:
acquiring video data to be detected;
performing foreground detection on the video frames of the video data to be detected to obtain foreground track information of the video data to be detected;
performing head-shoulder detection on the video frames of the video data to be detected to obtain head-shoulder track information of the video data to be detected;
determining a human body area in a video frame according to the foreground track information and the head-shoulder track information;
determining a video frame with suspected climbing behaviors and position information of the human body region with suspected climbing behaviors in the corresponding video frame according to the position relation between each human body region and a preset climbing rule line, wherein the suspected climbing behaviors represent that the height of the human body region exceeds the preset climbing rule line;
and detecting the climbing behavior according to the video frame with the suspected climbing behavior and the position information of the human body area with the suspected climbing behavior in the corresponding video frame to obtain a climbing behavior detection result.
2. The method of claim 1, wherein determining the human body region in the video frame based on the foreground track information and the head-shoulder track information comprises:
for any video frame corresponding to the foreground track information and the head-shoulder track information, combining the foreground region of the foreground track information and the head-shoulder region of the head-shoulder track information that overlap in the video frame, to obtain a human body region of the video frame;
the determining, according to the position relationship between each human body region and the preset climbing rule line, the video frame with suspected climbing behavior and the position information of the human body region with suspected climbing behavior in the corresponding video frame includes:
comparing the position relation between the human body region in each video frame and the preset climbing rule line respectively, taking a video frame in which a human body region exceeds the preset climbing rule line as a video frame with suspected climbing behaviors, and taking the position information of the human body region exceeding the preset climbing rule line as the position information of the human body region with suspected climbing behaviors in the corresponding video frame.
3. The method according to claim 1, wherein the step of detecting the climbing behavior according to the video frame with the suspected climbing behavior and the position information of the human body area with the suspected climbing behavior in the corresponding video frame to obtain a climbing behavior detection result includes:
detecting human body joint points of the video frames with suspected climbing behaviors to obtain human body joint point information of each video frame with climbing behaviors;
determining the human body area with the suspected climbing behavior based on the position information of the human body area with the suspected climbing behavior in the corresponding video frame, and fusing the human body area with corresponding human body joint point information to obtain a plurality of fused human body posture information;
and analyzing the fused human body posture information according to the time sequence to obtain a climbing behavior detection result.
4. The method of claim 3, wherein determining the human body region with the suspected climbing behavior based on the position information of the human body region with the suspected climbing behavior in the corresponding video frame and fusing the human body region with the corresponding human body joint point information to obtain a plurality of pieces of fused human body posture information comprises:
for any video frame with suspected climbing behaviors and human body joint point information, obtaining the human body region with suspected climbing behaviors in the video frame according to the position information of the human body region with suspected climbing behaviors in the video frame, and obtaining the position of each human body joint point in the video frame according to the human body joint point information of the video frame;
combining the positions of the human body joint points in the video frame with the human body region with suspected climbing behaviors, to obtain the human body region with suspected climbing behaviors containing the human body joint points as the fused human body posture information of the video frame.
5. The method of claim 3, wherein analyzing each piece of fused human body posture information according to time sequence to obtain a climbing behavior detection result comprises:
based on the foreground track information or the head-shoulder track information, correlating the fused human body posture information belonging to the same foreground track information or the same head-shoulder track information to obtain a fused human body posture information track;
and performing human body climbing posture analysis on the fused human body posture information track according to the time sequence, to obtain a climbing behavior detection result of the fused human body posture information track.
6. A climbing behavior detection apparatus, the apparatus comprising:
the video data acquisition module is used for acquiring video data to be detected;
the primary climbing detection module is used for carrying out foreground detection on the video data to be detected, and determining video frames with suspected climbing behaviors and position information of the suspected climbing behaviors in the corresponding video frames according to the foreground detection result and the position relation with a preset climbing rule line, wherein the suspected climbing behaviors represent that the height of a foreground area in the foreground detection result exceeds the preset climbing rule line;
the secondary climbing detection module is used for detecting climbing behaviors according to the video frames with suspected climbing behaviors and the position information of the suspected climbing behaviors in the corresponding video frames, to obtain a climbing behavior detection result;
the position information of the suspected climbing behaviors in the corresponding video frame is specifically the position information of the human body area with the suspected climbing behaviors in the corresponding video frame; the primary climbing detection module comprises:
the foreground track acquisition sub-module is used for carrying out foreground detection on the video frames of the video data to be detected to obtain foreground track information of the video data to be detected;
the head-shoulder track acquisition sub-module is used for carrying out head-shoulder detection on the video frames of the video data to be detected to obtain head-shoulder track information of the video data to be detected;
the human body region determining submodule is used for determining a human body region in a video frame according to the foreground track information and the head-shoulder track information;
the climbing behavior detection sub-module is used for determining a video frame with suspected climbing behaviors and position information of the human body region with suspected climbing behaviors in the corresponding video frame according to the position relation between the human body regions and the preset climbing rule line, wherein the suspected climbing behaviors represent that the height of the human body region exceeds the preset climbing rule line.
7. An electronic device, comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the climbing behavior detection method according to any one of claims 1 to 5 when executing the program stored in the memory.
8. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the climbing behavior detection method according to any one of claims 1-5.
CN202010430411.XA 2020-05-20 2020-05-20 Climbing behavior detection method and device, electronic equipment and storage medium Active CN113705274B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010430411.XA CN113705274B (en) 2020-05-20 2020-05-20 Climbing behavior detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010430411.XA CN113705274B (en) 2020-05-20 2020-05-20 Climbing behavior detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113705274A CN113705274A (en) 2021-11-26
CN113705274B true CN113705274B (en) 2023-09-05

Family

ID=78645610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010430411.XA Active CN113705274B (en) 2020-05-20 2020-05-20 Climbing behavior detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113705274B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8167430B2 (en) * 2009-08-31 2012-05-01 Behavioral Recognition Systems, Inc. Unsupervised learning of temporal anomalies for a video surveillance system
CN102368297A (en) * 2011-09-14 2012-03-07 北京英福生科技有限公司 Equipment, system and method for recognizing actions of detected object

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916380A (en) * 2010-09-08 2010-12-15 大连古野软件有限公司 Video-based device and method for detecting smog
CN104866827A (en) * 2015-05-19 2015-08-26 天津大学 Method for detecting people crossing behavior based on video monitoring platform
CN106845325A (en) * 2015-12-04 2017-06-13 杭州海康威视数字技术股份有限公司 A kind of information detecting method and device
CN105718857A (en) * 2016-01-13 2016-06-29 兴唐通信科技有限公司 Human body abnormal behavior detection method and system
CN108108688A (en) * 2017-12-18 2018-06-01 青岛联合创智科技有限公司 A kind of limbs conflict behavior detection method based on the extraction of low-dimensional space-time characteristic with theme modeling
CN108830204A (en) * 2018-06-01 2018-11-16 中国科学技术大学 The method for detecting abnormality in the monitor video of target
CN109460702A (en) * 2018-09-14 2019-03-12 华南理工大学 Passenger's abnormal behaviour recognition methods based on human skeleton sequence
CN109919132A (en) * 2019-03-22 2019-06-21 广东省智能制造研究所 A kind of pedestrian's tumble recognition methods based on skeleton detection
CN110647819A (en) * 2019-08-28 2020-01-03 中国矿业大学 Method and device for detecting abnormal behavior of underground personnel crossing belt
CN110533013A (en) * 2019-10-30 2019-12-03 图谱未来(南京)人工智能研究院有限公司 A kind of track-detecting method and device
CN110889339A (en) * 2019-11-12 2020-03-17 南京甄视智能科技有限公司 Head and shoulder detection-based dangerous area grading early warning method and system
CN110956769A (en) * 2019-12-13 2020-04-03 珠海大横琴科技发展有限公司 Monitoring method of perimeter anti-intrusion system based on target position

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
智能视频监控中异常行为识别研究;陈颖鸣 等;《微电子学与计算机》;第27卷(第11期);102-105 *

Also Published As

Publication number Publication date
CN113705274A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN108256404B (en) Pedestrian detection method and device
CN111210399B (en) Imaging quality evaluation method, device and equipment
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
KR102478335B1 (en) Image Analysis Method and Server Apparatus for Per-channel Optimization of Object Detection
CN111860318A (en) Construction site pedestrian loitering detection method, device, equipment and storage medium
CN112163469B (en) Smoking behavior recognition method, system, equipment and readable storage medium
CN110706247B (en) Target tracking method, device and system
CN111126153B (en) Safety monitoring method, system, server and storage medium based on deep learning
US20210124914A1 (en) Training method of network, monitoring method, system, storage medium and computer device
CN107786848A (en) The method, apparatus of moving object detection and action recognition, terminal and storage medium
CN112183304A (en) Off-position detection method, system and computer storage medium
CN110866428B (en) Target tracking method, device, electronic equipment and storage medium
CN111814510A (en) Detection method and device for remnant body
KR102511287B1 (en) Image-based pose estimation and action detection method and appratus
CN114870384A (en) Taijiquan training method and system based on dynamic recognition
CN113869137A (en) Event detection method and device, terminal equipment and storage medium
KR20160057503A (en) Violence Detection System And Method Based On Multiple Time Differences Behavior Recognition
CN111753587A (en) Method and device for detecting falling to ground
CN113705274B (en) Climbing behavior detection method and device, electronic equipment and storage medium
CN116403162B (en) Airport scene target behavior recognition method and system and electronic equipment
CN112541403A (en) Indoor personnel falling detection method utilizing infrared camera
CN116977900A (en) Intelligent laboratory monitoring alarm system and method thereof
CN110855932B (en) Alarm method and device based on video data, electronic equipment and storage medium
CN113837138B (en) Dressing monitoring method, dressing monitoring system, dressing monitoring medium and electronic terminal
Supangkat et al. Moving Image Interpretation Models to Support City Analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant