CN112329671B - Pedestrian running behavior detection method based on deep learning and related components
- Publication number
- CN112329671B (application CN202011256786.5A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- head
- shoulder
- motion
- distance
- Prior art date
- Legal status: Active
Classifications
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes of sport video content
- G06T2207/10016—Video; Image sequence
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/30196—Human being; Person
Abstract
The invention discloses a pedestrian running behavior detection method based on deep learning, and related components. The method comprises the following steps: constructing a head and shoulder detection model; detecting surveillance video frames with the head and shoulder detection model to obtain the pedestrian head and shoulder frames in each video frame; tracking the pedestrian head and shoulder frames across video frames to obtain the motion trajectory of each pedestrian; establishing a mapping relation between the size of each pixel point in the surveillance video frame and the corresponding size in the actual scene; calculating the movement distance of a pedestrian within a specified time period; and calculating the pedestrian's average speed over the specified time period from that movement distance and comparing it with a preset speed threshold, judging the pedestrian to be in a running state if the average speed is greater than the threshold and in a non-running state if it is less than the threshold. The invention offers high accuracy, high speed, and strong real-time performance.
Description
Technical Field
The invention relates to the field of motion detection, and in particular to a pedestrian running behavior detection method based on deep learning and related components.
Background
With rapid economic and social development and accelerating urbanization, modern urban populations keep growing and safety accidents in public places occur frequently. To prevent such accidents, authorities around the world install large numbers of surveillance cameras in public places to monitor and pre-empt emergencies, safeguarding public spaces and maintaining long-term social security. Monitoring abnormal pedestrian behavior in public places has therefore drawn the attention of the relevant management departments: detecting abnormal behavior promptly and arranging corresponding protective measures is essential for preventing and reducing emergencies. However, as the volume of surveillance video grows, traditional human visual inspection can no longer analyze the large amount of abnormal pedestrian behavior in surveillance footage at scale or with adequate efficiency.
Monitoring surveillance video with a computer system allows better on-site management and control according to real-time conditions. Current techniques for detecting running pedestrians mainly include the following:
1. based on Harris (Harris) angular points on the detected image, tracking the angular points by adopting an optical flow method, extracting the angular points generating movement, further obtaining the movement vectors of the movement angular points between two continuous frames in a video sequence, and calculating the movement speed of the whole crowd so as to detect the running behavior of the crowd.
2. Use a foreground extraction method based on a Gaussian mixture background model: update the weight of each Gaussian distribution according to its frequency of occurrence within a time window and extract a complete motion foreground. Label the extracted foreground mask, compute the optical flow of the moving objects with the Lucas-Kanade (LK) optical flow method, describe the intensity of human motion with a magnitude-weighted direction histogram, and then compute the entropy within the motion region to judge whether the behavior is abnormal.
3. Detect moving targets in the video with a traditional background modeling method, judge whether each moving target is a human body, extract motion vector features of the human targets, and classify the features with a support vector machine (SVM) to judge whether the person is running.
In surveillance video, pedestrians range from sparse to dense. Whether a traditional method or a deep learning method is used, dense crowds make moving targets hard to detect and track because of occlusion, and the motion state of a target pedestrian then cannot be detected effectively. In addition, background modeling requires storing a large amount of target data and is easily disturbed by background updates, illumination changes, shadows, and other interference, which causes background to be falsely detected as foreground and moving targets to be extracted incompletely. In regions with many pedestrians, occlusion is common, so moving targets extracted by background modeling are inaccurate: the detection rate is low, targets differ greatly between consecutive frames, and the degree of motion of a moving target cannot be judged correctly.
Disclosure of Invention
The invention aims to provide a pedestrian running behavior detection method based on deep learning, and related components, to address the low accuracy, susceptibility to interference, and poor real-time performance of existing pedestrian running behavior detection techniques.
In a first aspect, an embodiment of the present invention provides a pedestrian running behavior detection method based on deep learning, where the method includes:
establishing a head and shoulder data set, and performing model training by adopting a deep learning target detection algorithm to obtain a head and shoulder detection model;
detecting a monitoring video picture by using the head and shoulder detection model to obtain a pedestrian head and shoulder frame in the monitoring video picture;
tracking the head and shoulder frames of the pedestrians in the monitoring video picture based on a target tracking algorithm to obtain the motion trail of the pedestrians;
establishing a mapping relation between the size of each pixel point in a monitoring video picture and the size of an actual scene;
calculating the movement distance of the pedestrian in a specified time period based on the movement track of the pedestrian and the mapping relation;
calculating the average speed of the pedestrian in the specified time period according to the movement distance of the pedestrian in the specified time period, comparing the average speed with a preset speed threshold, judging that the pedestrian is in a running state if the average speed is greater than the speed threshold, and judging that the pedestrian is in a non-running state if the average speed is less than the speed threshold.
In a second aspect, an embodiment of the present invention provides a pedestrian running behavior detection apparatus based on deep learning, including:
the model construction unit is used for establishing a head and shoulder data set and performing model training by adopting a deep learning target detection algorithm to obtain a head and shoulder detection model;
the head and shoulder detection unit is used for detecting the monitoring video picture by using the head and shoulder detection model to obtain a pedestrian head and shoulder frame in the monitoring video picture;
the pedestrian tracking unit is used for tracking the head and shoulder frames of the pedestrians in the monitoring video picture based on a target tracking algorithm to obtain the motion trail of the pedestrians;
the mapping establishing unit is used for establishing the mapping relation between the size of each pixel point in the monitoring video picture and the actual scene size;
the distance calculation unit is used for calculating the movement distance of the pedestrian in a specified time period based on the movement track of the pedestrian and the mapping relation;
the judging unit is used for calculating the average speed of the pedestrian in the specified time period according to the moving distance of the pedestrian in the specified time period, comparing the average speed with a preset speed threshold, judging that the pedestrian is in a running state if the average speed is greater than the speed threshold, and judging that the pedestrian is in a non-running state if the average speed is less than the speed threshold.
In a third aspect, an embodiment of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the pedestrian running behavior detection method based on deep learning according to the first aspect when executing the computer program.
In a fourth aspect, the embodiment of the present invention provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the pedestrian running behavior detection method based on deep learning according to the first aspect.
The embodiment of the invention provides a pedestrian running behavior detection method based on deep learning, and related components, wherein the method comprises: establishing a head and shoulder data set and training a model with a deep learning target detection algorithm to obtain a head and shoulder detection model; detecting surveillance video frames with the head and shoulder detection model to obtain the pedestrian head and shoulder frames in the frames; tracking the pedestrian head and shoulder frames based on a target tracking algorithm to obtain the motion trajectory of each pedestrian; establishing a mapping relation between the size of each pixel point in the surveillance video frame and the corresponding actual scene size; calculating the movement distance of the pedestrian in a specified time period based on the motion trajectory and the mapping relation; and calculating the pedestrian's average speed in the specified time period from that movement distance, comparing it with a preset speed threshold, and judging the pedestrian to be in a running state if the average speed exceeds the threshold and in a non-running state if it is below the threshold. The embodiment of the invention has the advantages of high accuracy, high speed, and strong real-time performance; it avoids the interference from background updates, illumination changes, shadows, and the like that affects traditional detection methods; it effectively improves the detection and tracking of moving targets under occlusion and supports efficient, real-time multi-target tracking and motion-state analysis; and it computes the motion trajectory and speed of pedestrian targets efficiently and accurately.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a pedestrian running behavior detection method based on deep learning according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of step S101 in a method for detecting running behavior of a pedestrian based on deep learning according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating step S103 in a deep learning-based pedestrian running behavior detection method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart illustrating the step S302 in the method for detecting running behavior of a pedestrian based on deep learning according to the embodiment of the present invention;
fig. 5 is a flowchart illustrating the step S104 in the method for detecting a running behavior of a pedestrian based on deep learning according to the embodiment of the present invention;
fig. 6 is a flowchart illustrating step S105 of the pedestrian running behavior detection method based on deep learning according to the embodiment of the present invention;
fig. 7 is a flowchart illustrating step S603 in a pedestrian running behavior detection method based on deep learning according to an embodiment of the present invention;
fig. 8 is a schematic block diagram of a pedestrian running behavior detection apparatus based on deep learning according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for detecting a running behavior of a pedestrian based on deep learning according to an embodiment of the present invention, where the method includes steps S101 to S106:
S101, establishing a head and shoulder data set, and performing model training by adopting a deep learning target detection algorithm to obtain a head and shoulder detection model;
s102, detecting a monitoring video picture by using the head and shoulder detection model to obtain a pedestrian head and shoulder frame in the monitoring video picture;
S103, tracking the head and shoulder frames of the pedestrians in the monitoring video picture based on a target tracking algorithm to obtain the motion trail of the pedestrians;
s104, establishing a mapping relation between the size of each pixel point in a monitoring video picture and the size of an actual scene;
s105, calculating the movement distance of the pedestrian in a specified time period based on the movement track of the pedestrian and the mapping relation;
s106, calculating the average speed of the pedestrian in the specified time period according to the moving distance of the pedestrian in the specified time period, comparing the average speed with a preset speed threshold, if the average speed is greater than the speed threshold, determining that the pedestrian is in a running state, and if the average speed is less than the speed threshold, determining that the pedestrian is in a non-running state.
In the embodiment of the invention, a head and shoulder data set is established and a deep learning target detection algorithm is used for model training; the trained head and shoulder detection model locates pedestrians, a target tracking algorithm tracks them to obtain their motion trajectories, a mapping relation between the size of each pixel point and the actual size is established, the actual movement distance of each pedestrian is calculated, the average speed is derived, and whether the pedestrian is in a running state is judged. Built on a deep learning detection method and combined with tracking, trajectory analysis, and related techniques, the method of this embodiment detects abnormal pedestrian running behavior in surveillance video frames quickly and efficiently.
In one embodiment, as shown in fig. 2, the step S101 includes steps S201 to S202:
S201, obtaining a picture sample by intercepting a monitoring video picture and/or crawling a pedestrian picture through a crawler technology, labeling and cleaning the head and shoulders of a pedestrian in the picture sample, and establishing a head and shoulder data set;
In this step, picture samples can be obtained in several ways, such as capturing frames from historical surveillance video, crawling pedestrian pictures from the Internet, or combining both. The pedestrians in the picture samples are then labeled, and invalid data are cleaned out, so that a head and shoulder data set is constructed.
S202, according to the established head and shoulder data set, performing model training with the Yolo-V4 target detection algorithm, and carrying out network structure adjustment, parameter optimization, and iterative updating to obtain an optimal head and shoulder detection model.
In this step, the Yolo-V4 target detection algorithm is used for model training. On top of the original YOLO (You Only Look Once, a deep-neural-network-based object recognition and localization algorithm) detection architecture, Yolo-V4 adopts the best optimization strategies from the CNN (convolutional neural network) field, with optimizations of varying degrees in data processing, the backbone network, network training, activation functions, loss functions, and other aspects, balancing speed and accuracy in image detection.
Through model training, the network structure and parameters can be adjusted and optimized in combination with the loss function, iterating continuously until the loss value falls below a preset value or the number of iterations reaches a preset count, at which point iteration ends and the optimal head and shoulder detection model is obtained.
In one embodiment, as shown in fig. 3, the step S103 includes steps S301 to S302:
S301, extracting the position and size information of a pedestrian head and shoulder frame in two continuous monitoring video pictures;
It should be noted that this embodiment takes two consecutive surveillance video frames as the detection example; in practice, detection runs continuously, for example first over the first and second frames, then over the second and third frames, and so on.
The pedestrian head and shoulder frames in each surveillance video frame can be detected through step S102, which determines the position and size information of the head and shoulder frames in the corresponding frame. Likewise, the number of pedestrian head and shoulder frames in each frame can be obtained.
S302, calculating the overlapping degree IOU of the pedestrian head and shoulder frames according to the position and size information of the pedestrian head and shoulder frames in the two continuous frames of monitoring video images, judging whether the corresponding pedestrian head and shoulder frames belong to the same pedestrian according to the overlapping degree IOU, tracking the pedestrian, and obtaining the motion trail of the pedestrian.
This step calculates the overlapping degree IOU of pedestrian head and shoulder frames from their position and size information in two consecutive surveillance video frames, so as to judge whether head and shoulder frames in the two frames belong to the same pedestrian. Since each frame may contain multiple head and shoulder frames, the IOU is computed pairwise to determine which head and shoulder frames correspond to the same pedestrian. Pedestrians can be tracked in this way to obtain their motion trajectories.
In one embodiment, as shown in fig. 4, the step S302 includes steps S401 to S404:
S401, giving ID numbers to pedestrian head and shoulder frames in a previous monitoring video picture, and calculating the overlapping degree IOU of the pedestrian head and shoulder frames in a current monitoring video picture and the pedestrian head and shoulder frames in the previous monitoring video picture;
In this step, ID numbers are assigned to the pedestrian head and shoulder frames in the previous surveillance video frame, and then the overlapping degree IOU between head and shoulder frames of the two frames is calculated.
The overlapping degree IOU is calculated as:

IOU = Area(A ∩ B) / Area(A ∪ B)

where A represents the coverage area of a pedestrian head and shoulder frame in the previous surveillance video frame, and B represents the coverage area of a pedestrian head and shoulder frame in the current surveillance video frame.
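For reference, a minimal sketch of this IOU computation in Python; it assumes axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates, a common convention that the patent itself does not fix:

```python
def iou(box_a, box_b):
    """Overlapping degree (IOU) of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)      # Area(A ∩ B)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                        # Area(A ∪ B)
    return inter / union if union > 0 else 0.0
```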
S402, if the overlapping degree IOU is larger than a set IOU threshold value, confirming that two pedestrian head-shoulder frames belong to the same pedestrian, and endowing the ID number of the pedestrian head-shoulder frame of the previous frame to the pedestrian head-shoulder frame corresponding to the current frame of the monitoring video picture;
In this step, if two pedestrian head and shoulder frames belong to the same pedestrian, the ID number of the previous frame's head and shoulder frame is assigned to the corresponding head and shoulder frame in the current surveillance video frame. If multiple pedestrians need to be tracked and the current frame is confirmed to contain head and shoulder frames belonging to different pedestrians, the ID number of each previous-frame head and shoulder frame is assigned to its corresponding head and shoulder frame in the current frame, so that multiple pedestrians are tracked simultaneously.
S403, if the overlapping degree IOU is smaller than or equal to the IOU threshold value, confirming that the two pedestrian head-shoulder frames do not belong to the same pedestrian, and endowing a new ID number to the pedestrian head-shoulder frame corresponding to the current monitoring video picture;
In this step, if the overlapping degree IOU is less than or equal to the IOU threshold, the two pedestrian head and shoulder frames are determined not to belong to the same pedestrian, so a new ID number is assigned to the corresponding head and shoulder frame in the current surveillance video frame. Concretely, this means the IOU between a given head and shoulder frame of the current frame and every head and shoulder frame of the previous frame is at most the threshold; in other words, that head and shoulder frame is new relative to the previous frame, so it receives a new ID number.
In addition, a detected target may disappear: a pedestrian head and shoulder frame present in the previous surveillance video frame does not appear in the current frame, i.e., its overlap with every head and shoulder frame of the current frame is at most the IOU threshold, indicating that the target has left the monitored area. In that case the ID number of that head and shoulder frame can be erased and assigned to a newly appearing target in a later round of detection.
S404, recording the central points of the pedestrian head and shoulder frames with the same ID number in the multi-frame monitoring video picture, and connecting the central points of the pedestrian head and shoulder frames with the same ID number to obtain the motion trail of the corresponding pedestrian.
Through the above process, targets can be tracked: the center point of each tracked pedestrian head and shoulder frame (sharing the same ID number) is recorded, and connecting the center points of head and shoulder frames with the same ID number draws the motion trajectory of the corresponding pedestrian.
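The ID-assignment logic of steps S401 to S404 can be sketched as a greedy per-frame matcher. This is one illustrative reading of the steps, reusing the `iou` helper above; practical trackers usually add tie-breaking and occlusion handling beyond what the patent describes:

```python
import itertools

_new_ids = itertools.count(1)   # source of fresh ID numbers

def update_tracks(prev_tracks, detections, iou_threshold=0.5):
    """Match current-frame head-shoulder boxes to previous-frame IDs by IOU.

    prev_tracks: dict {id: box} from the previous frame; detections: list of
    boxes in the current frame. Returns a dict {id: box} for the current frame.
    """
    tracks, unmatched = {}, dict(prev_tracks)
    for box in detections:
        best_id, best_iou = None, iou_threshold
        for tid, prev_box in unmatched.items():
            score = iou(box, prev_box)
            if score > best_iou:                 # same pedestrian (S402)
                best_id, best_iou = tid, score
        if best_id is not None:
            tracks[best_id] = box
            del unmatched[best_id]               # each ID matched at most once
        else:
            tracks[next(_new_ids)] = box         # new pedestrian (S403)
    # IDs left in `unmatched` have left the monitored area and are dropped
    return tracks
```

Recording the center point of each box per ID across frames and connecting the points with the same ID then yields the trajectory of step S404.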
In one embodiment, as shown in fig. 5, the step S104 includes steps S501 to S502:
S501, dividing a monitoring video into n areas, and respectively measuring the actual sizes of the n areas in the horizontal direction and the vertical direction in an actual scene;
In the embodiment of the invention, according to the imaging perspective principle, the shooting distance and angle of the surveillance camera affect the accuracy of the pedestrian movement speed calculation. According to the placement distance and angle of each camera, the surveillance video frame (with resolution A1 × A2) is divided into n regions in the horizontal and vertical directions using a mapping relation, and the actual horizontal and vertical sizes corresponding to the n regions in the real scene are measured and recorded as W1, W2, …, Wn and H1, H2, …, Hn. The n regions may be divided equally or according to a preset rule, depending on actual conditions.
S502, calculating the actual size represented by each pixel point in the horizontal direction and the vertical direction according to the actual sizes of the n regions in the actual scene in the horizontal direction and the vertical direction and the pixel resolution of the monitoring video picture.
The actual size represented by each pixel point in the horizontal and vertical directions can be calculated from the actual scene sizes and the pixel resolution of the surveillance video frame. In the embodiment of the invention, the actual size represented by the pixel points can be computed per region; that is, because of the camera angle, distance, and so on, pixel points at different positions of one surveillance frame may represent different actual sizes. This improves the accuracy of the subsequent distance and speed calculations.
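A minimal sketch of this per-region mapping, with assumed calibration values: the patent leaves the region boundaries and the measured sizes Wi and Hi to per-camera measurement, so the numbers below are purely illustrative. Following the formulas used later in the text, a pixel in region i is taken to represent Wi/A1 metres horizontally and Hi/A2 metres vertically:

```python
A1, A2 = 1920, 1080              # pixel resolution of the surveillance frame

# Regions as horizontal bands of the frame with measured actual sizes
# (top_row, bottom_row, W_i in metres, H_i in metres) - illustrative values.
REGIONS = [
    (0,    360, 30.0, 20.0),     # far from the camera: pixels cover more ground
    (360,  720, 15.0, 10.0),
    (720, 1080,  8.0,  5.0),     # near the camera
]

def pixel_scale(y):
    """Actual size (metres) one pixel represents, (horizontal, vertical),
    for a point at image row y, per step S502: (W_i / A1, H_i / A2)."""
    for top, bottom, w_i, h_i in REGIONS:
        if top <= y < bottom:
            return w_i / A1, h_i / A2
    raise ValueError("row outside the frame")
```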
In the step S105, a movement distance of the pedestrian in a specified time period is calculated based on the movement locus of the pedestrian and the mapping relationship.
The specified time period is the time difference between the time of the starting point and the time of the ending point in a complete motion track.
Since a complete motion trajectory may span several of the aforementioned regions, the portion of the trajectory in each region can be extracted and its movement distance computed per region. The following describes how to calculate the pedestrian's movement distance within one region from the trajectory portion in that region and the mapping relation (in this case the specified time period is the time difference between the start point and end point within the region); the other regions are handled in the same way.
In one embodiment, as shown in fig. 6, the step S105 includes steps S601 to S603:
S601, connecting the starting point and the end point of the motion track into a straight line, calculating the distance between all the points on the motion track and the straight line, and acquiring the maximum distance;
Because the motion trajectory may be curved or irregular, this embodiment uses a new method to calculate its length. First the starting point A and end point B of the trajectory are connected into a straight line AB, and the distances from all points on the trajectory to line AB are computed; the point C on the trajectory farthest from line AB gives the maximum distance, denoted Dmax.
S602, if the maximum distance is smaller than a preset distance threshold, approximating the length of the straight line to the length of a motion track, and calculating the motion distance of the pedestrian in a specified time period according to the length of the motion track and the mapping relation;
If the maximum distance is smaller than the preset distance threshold, the motion trajectory is close to the straight line, so the length of line AB can approximate the trajectory length, and the actual movement distance can be calculated according to the mapping relation.
Assuming the coordinates of the starting point A of the motion trajectory are (X0, Y0) and the coordinates of the end point B are (X1, Y1), the actual horizontal distance Sx and actual vertical distance Sy between the two points can be calculated as:

Sx = |X1 - X0| × W/A1,  Sy = |Y1 - Y0| × H/A2

where W/A1 is the actual size represented by a pixel point in the horizontal direction and H/A2 is the actual size represented by a pixel point in the vertical direction.

From Sx and Sy, the actual distance Sxy of each straight-line segment can be calculated as:

Sxy = √(Sx² + Sy²)
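As a worked example of these formulas, the sketch below computes Sx, Sy, and Sxy for one straight-line segment, with an assumed region size and resolution (values illustrative, matching the calibration sketch above):

```python
import math

W, H = 15.0, 10.0        # assumed actual size of the pedestrian's region (m)
A1, A2 = 1920, 1080      # frame resolution

def segment_distance(p_a, p_b):
    """Actual length Sxy of the segment between pixel points A and B."""
    (x0, y0), (x1, y1) = p_a, p_b
    s_x = abs(x1 - x0) * W / A1          # Sx = |X1 - X0| * W/A1
    s_y = abs(y1 - y0) * H / A2          # Sy = |Y1 - Y0| * H/A2
    return math.hypot(s_x, s_y)          # Sxy = sqrt(Sx^2 + Sy^2)

# 384 px horizontally -> 384 * 15/1920 = 3.0 m; no vertical movement
print(segment_distance((100, 540), (484, 540)))   # 3.0
```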
S603, if the maximum distance is larger than or equal to the distance threshold, segmenting the motion track, then calculating the length of the motion track according to the sub-motion track obtained after segmentation, and calculating the motion distance of the pedestrian in a specified time period according to the length of the motion track and the mapping relation.
If the maximum distance is greater than or equal to the distance threshold, the motion trajectory deviates far from the straight line. In this case the trajectory is segmented, splitting at the maximum-distance point C so that the trajectory is divided into two sub-trajectories, and the above method is then applied to each sub-trajectory in turn.
In an embodiment, as shown in fig. 7, the step S603 includes steps S701 to S703:
S701, if the maximum distance is larger than or equal to the distance threshold, segmenting the motion track to obtain a plurality of sub-motion tracks;
In this step, the motion trajectory AB is divided into a sub-motion trajectory AC and a sub-motion trajectory CB.
S702, connecting the starting point and the end point of the sub-motion track into a new straight line, calculating the distance between all points on the sub-motion track and the new straight line, and acquiring a new maximum distance; if the new maximum distance is smaller than the distance threshold, approximating the length of the new straight line to the length of the sub-motion track;
This step processes the sub-motion trajectories in the same manner as steps S601 and S602 above, thereby obtaining the lengths of the sub-motion trajectories.
S703, if the new maximum distance is greater than or equal to the distance threshold, continuing to segment the sub-motion trajectories until the lengths of all sub-motion trajectories are calculated, and adding all the calculated sub-motion trajectory lengths to obtain the length of the motion trajectory.
If the new maximum distance of a sub-motion trajectory is still greater than or equal to the distance threshold, that sub-trajectory must be segmented again into new sub-trajectories, the split point being the new maximum-distance point, and so on, until the lengths of all sub-trajectories have been calculated; their sum is the length of the motion trajectory.
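The recursion of steps S601 to S703 can be sketched as follows, assuming the trajectory points have already been converted to actual-scene coordinates through the mapping and with an illustrative distance threshold. The scheme resembles Ramer-Douglas-Peucker curve simplification, except that chord lengths are accumulated instead of points being discarded:

```python
import math

def _point_to_line(p, a, b):
    """Perpendicular distance from point p to the straight line through a, b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    chord = math.hypot(dx, dy)
    if chord == 0.0:                                  # degenerate: A == B
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / chord

def trajectory_length(points, dist_threshold=0.5):
    """Length of a trajectory given as a list of (x, y) actual coordinates."""
    a, b = points[0], points[-1]
    if len(points) <= 2:
        return math.hypot(b[0] - a[0], b[1] - a[1])
    dists = [_point_to_line(p, a, b) for p in points[1:-1]]
    d_max = max(dists)
    if d_max < dist_threshold:                        # nearly straight (S602)
        return math.hypot(b[0] - a[0], b[1] - a[1])
    c = 1 + dists.index(d_max)                        # maximum-distance point C
    return (trajectory_length(points[:c + 1], dist_threshold)   # sub-track AC
            + trajectory_length(points[c:], dist_threshold))    # sub-track CB
```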
According to the above method, the movement distance S of the pedestrian in the specified time period T1~T2 of the surveillance video can be obtained as S = S1 + S2 + … + Sn, where Si is the actual trajectory length within the i-th of the n regions (i.e., the movement distance of a complete motion trajectory is the sum over the regions).
In step S106, when a pedestrian is judged to be in a running state, information such as the ID number and position of the running pedestrian target can be returned, and alarm processing can be performed.
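The final judgment of step S106 then reduces to a threshold comparison. A small sketch with an assumed threshold (typical human running speed exceeds roughly 3 m/s):

```python
SPEED_THRESHOLD = 3.0    # m/s, assumed preset value

def check_running(distance_m, t_start_s, t_end_s):
    """Average speed over the specified time period vs. the preset threshold."""
    avg_speed = distance_m / (t_end_s - t_start_s)
    return avg_speed > SPEED_THRESHOLD

if check_running(14.0, 0.0, 4.0):        # 3.5 m/s -> running state
    print("ALERT: running pedestrian detected")   # return ID, position, warn
```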
The embodiment of the invention adopts the deep learning Yolo-V4 algorithm trained on the head and shoulder data set, and offers high accuracy, high speed, and strong real-time performance. Detecting heads and shoulders with deep learning avoids the interference from background updates, illumination changes, shadows, and the like that affects traditional detection methods. Whereas traditional methods struggle to recover the trajectories of occluded targets in dense crowds, the combination of head and shoulder detection and IOU tracking effectively improves the detection and tracking of moving targets under occlusion, and supports efficient, real-time multi-target tracking and motion-state analysis. The invention computes the motion trajectory and speed of pedestrian targets efficiently and accurately.
The method can be applied to security in smart cities, including places with potential safety hazards such as bus stations, railway stations, subway stations, shopping malls, and construction sites.
Referring to fig. 8, fig. 8 is a schematic block diagram of a deep learning-based pedestrian running behavior detection apparatus according to an embodiment of the present invention, where the deep learning-based pedestrian running behavior detection apparatus 800 includes:
the model construction unit 801 is used for establishing a head and shoulder data set and performing model training by adopting a deep learning target detection algorithm to obtain a head and shoulder detection model;
a head and shoulder detecting unit 802, configured to detect a surveillance video frame by using the head and shoulder detecting model, so as to obtain a pedestrian head and shoulder frame in the surveillance video frame;
a pedestrian tracking unit 803, configured to track a head and shoulder frame of a pedestrian in the monitored video picture based on a target tracking algorithm, so as to obtain a motion trajectory of the pedestrian;
the mapping establishing unit 804 is used for establishing a mapping relation between the size of each pixel point in the monitoring video picture and the actual scene size;
a distance calculation unit 805 configured to calculate a movement distance of the pedestrian within a specified time period based on the movement trajectory of the pedestrian and the mapping relationship;
the determining unit 806 is configured to calculate an average speed of the pedestrian in a specified time period according to a moving distance of the pedestrian in the specified time period, compare the average speed with a preset speed threshold, determine that the pedestrian is in a running state if the average speed is greater than the speed threshold, and determine that the pedestrian is in a non-running state if the average speed is less than the speed threshold.
In one embodiment, the model building unit 801 includes:
the system comprises a preprocessing unit, a monitoring unit and a monitoring unit, wherein the preprocessing unit is used for obtaining a picture sample by intercepting a monitoring video picture and/or crawling a pedestrian picture through a crawler technology, marking and cleaning the head and shoulders of a pedestrian in the picture sample, and establishing a head and shoulder data set;
and the model training unit is used for performing model training by adopting a Yolo-V4 target detection algorithm according to the established head and shoulder data set, and performing network structure adjustment, parameter optimization and iterative update to obtain an optimal head and shoulder detection model.
In one embodiment, the pedestrian tracking unit 803 includes:
the extraction unit is used for extracting the position and size information of a pedestrian head and shoulder frame in two continuous frames of monitoring video pictures;
and the track tracking unit is used for calculating the overlapping degree IOU of the pedestrian head and shoulder frames according to the position and size information of the pedestrian head and shoulder frames in the two continuous frames of monitoring video pictures, judging whether the corresponding pedestrian head and shoulder frames belong to the same pedestrian according to the overlapping degree IOU, tracking the pedestrian and obtaining the motion track of the pedestrian.
In an embodiment, the trajectory tracking unit comprises:
the overlapping degree calculation unit is used for giving an ID number to the pedestrian head and shoulder frame in the previous frame of monitoring video picture and calculating the overlapping degree IOU of the pedestrian head and shoulder frame in the current frame of monitoring video picture and the pedestrian head and shoulder frame in the previous frame of monitoring video picture;
the first judgment unit is used for confirming that two pedestrian head shoulder frames belong to the same pedestrian if the overlapping degree IOU is larger than a set IOU threshold value, and endowing the ID number of the pedestrian head shoulder frame of the previous frame to the pedestrian head shoulder frame corresponding to the current frame monitoring video picture;
a second judging unit, configured to determine that the two pedestrian head-shoulder frames do not belong to the same pedestrian and assign a new ID number to the pedestrian head-shoulder frame corresponding to the current frame of the surveillance video picture if the overlapping degree IOU is less than or equal to the IOU threshold;
and the connecting unit is used for recording the central points of the pedestrian head and shoulder frames with the same ID number in the multi-frame monitoring video picture and connecting the central points of the pedestrian head and shoulder frames with the same ID number to obtain the motion trail of the corresponding pedestrian.
In an embodiment, the mapping establishing unit 804 includes:
the device comprises a dividing unit, a calculating unit and a processing unit, wherein the dividing unit is used for dividing a monitoring video into n areas and respectively measuring the actual sizes of the n areas in the horizontal direction and the vertical direction in an actual scene;
and the pixel point calculating unit is used for calculating the actual size represented by each pixel point in the horizontal direction and the vertical direction according to the actual sizes in the horizontal direction and the vertical direction corresponding to the n regions in the actual scene and the pixel resolution of the monitoring video picture.
In one embodiment, the distance calculation unit 805 includes:
the maximum distance calculation unit is used for connecting the starting point and the end point of the motion track into a straight line, calculating the distance between all the points on the motion track and the straight line and acquiring the maximum distance;
the approximation unit is used for approximating the length of the straight line to the length of a motion track if the maximum distance is smaller than a preset distance threshold, and calculating the motion distance of the pedestrian in a specified time period according to the length of the motion track and the mapping relation;
and the segmentation unit is used for segmenting the motion trail if the maximum distance is greater than or equal to the distance threshold, then calculating the length of the motion trail according to the sub-motion trail obtained after segmentation, and calculating the motion distance of the pedestrian in a specified time period according to the length of the motion trail and the mapping relation.
In one embodiment, the slicing unit includes:
the sub-segmentation unit is used for segmenting the motion trail to obtain a plurality of sub-motion trails if the maximum distance is greater than or equal to the distance threshold;
the sub-approximation unit is used for connecting the starting point and the end point of the sub-motion track into a new straight line, calculating the distance between all the points on the sub-motion track and the new straight line, and acquiring a new maximum distance; if the new maximum distance is smaller than the distance threshold, approximating the length of the new straight line to the length of the sub-motion track;
and the accumulation unit is used for, if the new maximum distance is greater than or equal to the distance threshold, continuing to segment the sub-motion tracks until the lengths of all the sub-motion tracks are calculated, and adding all the calculated sub-motion track lengths to obtain the length of the motion track.
The specific contents of the above device embodiment correspond one-to-one with those of the above method embodiment; for the specific technical details of the device embodiment, reference may be made to the description of the method embodiment, which is not repeated here.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for detecting running behavior of pedestrians based on deep learning as described above when executing the computer program.
Embodiments of the present invention further provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the pedestrian running behavior detection method based on deep learning as described above.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, without departing from the principle of the present invention, it is possible to make various improvements and modifications to the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Claims (7)
1. A pedestrian running behavior detection method based on deep learning is characterized by comprising the following steps:
establishing a head and shoulder data set, and performing model training by adopting a deep learning target detection algorithm to obtain a head and shoulder detection model;
detecting a monitoring video picture by using the head and shoulder detection model to obtain a pedestrian head and shoulder frame in the monitoring video picture;
tracking the head and shoulder frames of the pedestrians in the monitoring video picture based on a target tracking algorithm to obtain the motion trail of the pedestrians;
establishing a mapping relation between the size of each pixel point in a monitoring video picture and the size of an actual scene;
calculating the movement distance of the pedestrian in a specified time period based on the movement track of the pedestrian and the mapping relation;
calculating the average speed of the pedestrian in a specified time period according to the moving distance of the pedestrian in the specified time period, comparing the average speed with a preset speed threshold, if the average speed is greater than the speed threshold, determining that the pedestrian is in a running state, and if the average speed is less than the speed threshold, determining that the pedestrian is in a non-running state;
the establishing of the mapping relation between the size of each pixel point in the monitoring video picture and the actual scene size comprises the following steps:
dividing a monitoring video into n regions, with adjacent positions forming one region, and respectively measuring the actual sizes in the horizontal direction and the vertical direction corresponding to the n regions in an actual scene, wherein for any one region the actual sizes represented by the pixel points within it are the same;
calculating the actual size represented by each pixel point in the horizontal direction and the vertical direction according to the actual sizes in the horizontal direction and the vertical direction corresponding to the n regions in the actual scene and the pixel resolution of a monitoring video picture;
the calculating the moving distance of the pedestrian in a specified time period based on the moving track of the pedestrian and the mapping relation comprises:
connecting the starting point and the end point of the motion track into a straight line, calculating the distance between all points on the motion track and the straight line, and acquiring the maximum distance;
if the maximum distance is smaller than a preset distance threshold value, the length of the straight line is approximate to the length of a motion track, and the motion distance of the pedestrian in a specified time period is calculated according to the length of the motion track and the mapping relation;
the coordinates of the starting point A of the motion trajectory are (X0, Y0) and the coordinates of the end point B are (X1, Y1); the actual horizontal distance Sx and the actual vertical distance Sy between the two points are calculated as:

Sx = |X1 - X0| × W/A1,  Sy = |Y1 - Y0| × H/A2

wherein W/A1 is the actual size represented by a pixel point in the horizontal direction and H/A2 is the actual size represented by a pixel point in the vertical direction;

according to Sx and Sy, the actual distance Sxy of each straight-line segment is calculated as:

Sxy = √(Sx² + Sy²);
if the maximum distance is larger than or equal to the distance threshold, segmenting the motion trail, then calculating the length of the motion trail according to the sub-motion trail obtained after segmentation, and calculating the motion distance of the pedestrian in a specified time period according to the length of the motion trail and the mapping relation;
if the maximum distance is greater than or equal to the distance threshold, segmenting the motion trajectory, then calculating the length of the motion trajectory according to the sub-motion trajectories obtained after segmentation, and calculating the motion distance of the pedestrian in a specified time period according to the length of the motion trajectory and the mapping relationship, including:
if the maximum distance is larger than or equal to the distance threshold, segmenting the motion track to obtain a plurality of sub-motion tracks;
connecting the starting point and the end point of the sub-motion track into a new straight line, and solving the distances between all points on the sub-motion track and the new straight line to obtain a new maximum distance; if the new maximum distance is smaller than the distance threshold, approximating the length of the new straight line to the length of the sub-motion track;
and if the new maximum distance is larger than or equal to the distance threshold, continuing to segment the sub-motion tracks until the lengths of all the sub-motion tracks are calculated, and adding all the calculated sub-motion track lengths to obtain the length of the motion track.
2. The pedestrian running behavior detection method based on deep learning of claim 1, wherein the establishing of the head and shoulder data set and the model training using the deep learning objective detection algorithm to obtain the head and shoulder detection model comprises:
capturing a monitoring video picture and/or crawling a pedestrian picture through a crawler technology to obtain a picture sample, labeling and cleaning the head and shoulders of the pedestrian in the picture sample, and establishing a head and shoulder data set;
and according to the established head and shoulder data set, model training is carried out by adopting a Yolo-V4 target detection algorithm, and network structure adjustment, parameter optimization and iterative updating are carried out to obtain an optimal head and shoulder detection model.
3. The pedestrian running behavior detection method based on deep learning according to claim 1, wherein the tracking of the pedestrian head and shoulder frames in the monitoring video pictures based on the target tracking algorithm to obtain the motion track of the pedestrian comprises:
extracting the position and size information of the pedestrian head and shoulder frames in two consecutive frames of monitoring video pictures;
and calculating the overlap degree IOU of the pedestrian head and shoulder frames according to the position and size information of the pedestrian head and shoulder frames in the two consecutive frames of monitoring video pictures, judging according to the overlap degree IOU whether the corresponding pedestrian head and shoulder frames belong to the same pedestrian, and tracking the pedestrian to obtain the motion track of the pedestrian.
4. The method according to claim 3, wherein the calculating of the overlap degree IOU of the pedestrian head and shoulder frames according to the position and size information of the pedestrian head and shoulder frames in the two consecutive frames of monitoring video pictures, the judging according to the overlap degree IOU whether the corresponding pedestrian head and shoulder frames belong to the same pedestrian, and the tracking of the pedestrian to obtain the motion track of the pedestrian comprises:
assigning ID numbers to the pedestrian head and shoulder frames in the previous frame of the monitoring video picture, and calculating the overlap degree IOU between the pedestrian head and shoulder frames in the current frame and the pedestrian head and shoulder frames in the previous frame;
if the overlap degree IOU is greater than a set IOU threshold, confirming that the two pedestrian head and shoulder frames belong to the same pedestrian, and assigning the ID number of the head and shoulder frame in the previous frame to the corresponding head and shoulder frame in the current frame;
if the overlap degree IOU is smaller than or equal to the IOU threshold, confirming that the two pedestrian head and shoulder frames do not belong to the same pedestrian, and assigning a new ID number to the corresponding head and shoulder frame in the current frame;
and recording the center points of the pedestrian head and shoulder frames having the same ID number across the frames of the monitoring video, and connecting these center points to obtain the motion track of the corresponding pedestrian.
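A minimal sketch of this IOU-based ID assignment follows, assuming boxes are (x, y, w, h) tuples; the 0.3 IOU threshold and the greedy one-to-one matching are illustrative choices, not values fixed by the claims.

```python
# IOU tracker sketch for claims 3-4; thresholds and data layout illustrative.
import itertools

def iou(box_a, box_b):
    """Overlap degree IOU of two (x, y, w, h) head and shoulder boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

_next_id = itertools.count()

def assign_ids(prev_tracks, curr_boxes, iou_threshold=0.3):
    """prev_tracks maps ID -> box from the previous frame; returns the same
    mapping for the current frame, reusing an ID when IOU exceeds the
    threshold and minting a new ID otherwise."""
    curr_tracks, used = {}, set()
    for box in curr_boxes:
        best_id, best_iou = None, iou_threshold
        for tid, prev_box in prev_tracks.items():
            if tid in used:
                continue
            overlap = iou(box, prev_box)
            if overlap > best_iou:
                best_id, best_iou = tid, overlap
        if best_id is None:            # no sufficient overlap: new pedestrian
            best_id = next(_next_id)
        else:
            used.add(best_id)
        curr_tracks[best_id] = box
    return curr_tracks

def center(box):
    """Center point of a box; connecting the centers of same-ID boxes
    across frames yields the pedestrian's motion track."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)
```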
5. A pedestrian running behavior detection device based on deep learning, characterized by comprising:
a model construction unit, configured to establish a head and shoulder data set and perform model training with a deep learning target detection algorithm to obtain a head and shoulder detection model;
a head and shoulder detection unit, configured to detect the monitoring video pictures with the head and shoulder detection model to obtain the pedestrian head and shoulder frames in the monitoring video pictures;
a pedestrian tracking unit, configured to track the pedestrian head and shoulder frames in the monitoring video pictures based on a target tracking algorithm to obtain the motion track of a pedestrian;
a mapping establishing unit, configured to establish the mapping relation between the size of each pixel point in the monitoring video picture and the actual scene size;
a distance calculation unit, configured to calculate the movement distance of the pedestrian in a specified time period based on the motion track of the pedestrian and the mapping relation;
a judging unit, configured to calculate the average speed of the pedestrian in the specified time period from the movement distance of the pedestrian in the specified time period, compare the average speed with a preset speed threshold, judge that the pedestrian is in a running state if the average speed is greater than the speed threshold, and judge that the pedestrian is in a non-running state if the average speed is less than the speed threshold;
wherein the mapping establishing unit comprises:
a dividing unit, configured to divide the monitoring video picture into n regions such that adjacent positions form one region, and to measure the actual horizontal and vertical sizes corresponding to each of the n regions in the actual scene, wherein within one region the actual size represented by every pixel point is taken to be the same;
a pixel point calculation unit, configured to calculate the actual size represented by each pixel point in the horizontal and vertical directions from the actual horizontal and vertical sizes corresponding to the n regions in the actual scene and the pixel resolution of the monitoring video picture;
the distance calculation unit comprises:
a maximum distance calculation unit, configured to connect the starting point and the end point of the motion track into a straight line, calculate the distances from all points on the motion track to the straight line, and obtain the maximum distance;
an approximation unit, configured to approximate the length of the motion track by the length of the straight line if the maximum distance is smaller than a preset distance threshold, and calculate the movement distance of the pedestrian in the specified time period according to the length of the motion track and the mapping relation;
wherein the coordinates of the starting point A of the motion track are (X_0, Y_0) and the coordinates of the end point B are (X_1, Y_1), and the actual horizontal distance S_x and the actual vertical distance S_y between the two points are calculated as:

S_x = |X_1 - X_0| · (W/A_1),  S_y = |Y_1 - Y_0| · (H/A_2)

wherein W/A_1 is the actual size represented by one pixel point in the horizontal direction and H/A_2 is the actual size represented by one pixel point in the vertical direction; the actual length S_xy of each broken-line segment is then calculated from S_x and S_y as:

S_xy = √(S_x² + S_y²);
a segmentation unit, configured to segment the motion track if the maximum distance is greater than or equal to the distance threshold, calculate the length of the motion track from the sub-motion tracks obtained after segmentation, and calculate the movement distance of the pedestrian in the specified time period according to the length of the motion track and the mapping relation;
wherein the segmentation unit comprises:
a sub-segmentation unit, configured to segment the motion track into a plurality of sub-motion tracks if the maximum distance is greater than or equal to the distance threshold;
a sub-approximation unit, configured to connect the starting point and the end point of each sub-motion track into a new straight line, calculate the distances from all points on the sub-motion track to the new straight line, and obtain a new maximum distance, and if the new maximum distance is smaller than the distance threshold, approximate the length of the sub-motion track by the length of the new straight line;
and an accumulation unit, configured to continue segmenting the sub-motion track if the new maximum distance is greater than or equal to the distance threshold, until the lengths of all sub-motion tracks have been calculated, and to add the lengths of the calculated sub-motion tracks to obtain the length of the motion track.
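A sketch of the region-wise pixel-to-scene mapping maintained by the mapping establishing unit is given below, under the assumption that the n regions are horizontal bands stacked by scene depth; the Region fields and the band layout are illustrative, since the claim only requires that every pixel point within a region represent the same actual size.

```python
# Region-wise mapping sketch; field names and band layout are assumptions.
from dataclasses import dataclass

@dataclass
class Region:
    y_top: int      # top row of the band in the frame
    y_bottom: int   # bottom row of the band
    real_w: float   # measured horizontal extent in the scene, metres (W)
    real_h: float   # measured vertical extent in the scene, metres (H)
    pix_w: int      # horizontal pixel extent of the band (A1)
    pix_h: int      # vertical pixel extent of the band (A2)

    @property
    def m_per_px_x(self) -> float:   # W / A1
        return self.real_w / self.pix_w

    @property
    def m_per_px_y(self) -> float:   # H / A2
        return self.real_h / self.pix_h

def pixel_size_at(regions, y):
    """Look up the per-pixel real size for a point at image row y."""
    for r in regions:
        if r.y_top <= y < r.y_bottom:
            return r.m_per_px_x, r.m_per_px_y
    raise ValueError("row outside all calibrated regions")
```

The distance calculation unit can then call pixel_size_at with the row of a track point before converting pixel displacements into metres.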
6. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the deep learning based pedestrian running behavior detection method according to any one of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to execute the deep learning-based pedestrian running behavior detection method according to any one of claims 1 to 4.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011256786.5A | 2020-11-11 | 2020-11-11 | Pedestrian running behavior detection method based on deep learning and related components |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112329671A | 2021-02-05 |
| CN112329671B | 2022-06-17 |
Family ID: 74318943
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date | Status |
|---|---|---|---|---|
| CN202011256786.5A | Pedestrian running behavior detection method based on deep learning and related components | 2020-11-11 | 2020-11-11 | Active |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN112329671B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113284106B | 2021-05-25 | 2023-06-06 | 浙江商汤科技开发有限公司 | Distance detection method and device |
| CN113435367A | 2021-06-30 | 2021-09-24 | 北大方正集团有限公司 | Social distance evaluation method and device and storage medium |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109145805A | 2018-08-15 | 2019-01-04 | 深圳市豪恩汽车电子装备股份有限公司 | Moving target detection method and system under vehicle-mounted environment |
| CN110555397A | 2019-08-21 | 2019-12-10 | 武汉大千信息技术有限公司 | Crowd situation analysis method |
| US10503966B1 | 2018-10-11 | 2019-12-10 | Tindei Network Technology (Shanghai) Co., Ltd. | Binocular pedestrian detection system having dual-stream deep learning neural network and the methods of using the same |
| CN111723664A | 2020-05-19 | 2020-09-29 | 烟台市广智微芯智能科技有限责任公司 | Pedestrian counting method and system for open type area |
Family Cites Families (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111160203B | 2019-12-23 | 2023-05-16 | 中电科新型智慧城市研究院有限公司 | Loitering and stay behavior analysis method based on head-shoulder model and IOU tracking |
| CN111291735B | 2020-04-30 | 2020-08-18 | 华夏天信(北京)智能低碳技术研究院有限公司 | Underground personnel running abnormal behavior detection method based on trajectory analysis |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| CP01 | Change in the name or title of a patent holder |

CP01 details:
Address after: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000
Patentee after: China Resources Digital Technology Co.,Ltd.
Address before: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000
Patentee before: Runlian software system (Shenzhen) Co.,Ltd.