CN111027370A - Multi-target tracking and behavior analysis detection method - Google Patents


Info

Publication number
CN111027370A
Authority
CN
China
Prior art keywords
target
tracking
image
head
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910981794.7A
Other languages
Chinese (zh)
Inventor
张中
赵冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Zhanda Intelligent Technology Co ltd
Original Assignee
Hefei Zhanda Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Zhanda Intelligent Technology Co ltd filed Critical Hefei Zhanda Intelligent Technology Co ltd
Priority to CN201910981794.7A priority Critical patent/CN111027370A/en
Publication of CN111027370A publication Critical patent/CN111027370A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-target tracking and behavior analysis detection method in the technical field of automatic target tracking, comprising the following steps: decomposing the image of the person to be tracked to obtain a plurality of target person head portraits; inputting the target person head portraits and the image to be analyzed into a network tracking model to obtain confidence values and coordinate frames, wherein the network tracking model is built on a CNN and an RPN; selecting a first target number of coordinate frames as first coordinate frames according to the confidence values and the coordinate frames; acquiring a tracking result from a historical tracking value and the first coordinate frames; and outputting the detected candidate human head target information according to the tracking result, the information comprising the scale of each target and the total number of candidate head targets. Applying this embodiment of the invention resolves the false alarms, missed detections, and tracking difficulties that prior-art person detection and tracking methods based on face detection, head-and-shoulder detection, or whole-body detection are prone to, owing to frequent occlusion and the difficulty of classifying targets.

Description

Multi-target tracking and behavior analysis detection method
Technical Field
The invention relates to the technical field of target tracking, in particular to a multi-target tracking and behavior analysis detection method.
Background
With the rapid development of the economy, video monitoring systems have matured across industries, and the number of cameras built and put into use is growing quickly. Because monitoring systems collect enormous volumes of video, real-time monitoring has gradually become a major problem: an operator cannot stay alert even while continuously watching a screen, and once footage is lost it can no longer be analyzed afterwards. It is therefore necessary to employ an intelligent analysis system that analyzes the video as it is captured.
In the prior art, intelligent analysis relies on person detection and tracking based on face detection, head-and-shoulder detection, or whole-body detection. Its accuracy is heavily affected by the environment: occlusion of a moving target causes loss of target information, and the variety of camera mounting positions and angles makes targets hard to classify. These factors readily lead to false alarms, missed detections, and tracking failures.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a multi-target tracking and behavior analysis detection method that solves the false alarms, missed detections, and tracking difficulties which prior-art person detection and tracking methods based on face detection, head-and-shoulder detection, or whole-body detection suffer owing to frequent occlusion and the difficulty of classifying targets.
The invention is realized by the following steps:
the invention provides a multi-target tracking and behavior analysis detection method, which comprises the following steps:
decomposing the image of the person to be tracked to obtain a plurality of target person head portraits;
inputting the target person head portraits and the image to be analyzed into a network tracking model to obtain confidence values and coordinate frames, wherein the network tracking model is a tracking model built on a CNN and an RPN, and the image to be analyzed is any corresponding video frame image in the video to be processed;
selecting a first target number of coordinate frames as first coordinate frames according to the confidence values and the coordinate frames;
acquiring a tracking result according to a historical tracking value and the first coordinate frames;
and outputting the detected candidate human head target information according to the tracking result, wherein the head target information comprises the scale, position, confidence, and total number of the candidate head targets.
In one implementation, the method further comprises:
acquiring the number of video frames between the person image to be tracked and the image to be analyzed;
and when the interval number is larger than a preset value, cropping the image to be analyzed.
In one implementation, the step of selecting a first target number of coordinate frames as the first coordinate frame according to the confidence value and the coordinate frames includes:
sorting the confidence values in descending order;
selecting the first target number of highest confidence values in turn, and taking the coordinate frames corresponding to them as tracking target candidate frames.
In one implementation, the step of outputting the detected candidate head target information according to the tracking result includes:
merging the candidate human head targets;
obtaining candidate head targets whose positions overlap and combining them;
obtaining candidate head targets whose confidence values are below a preset value and deleting them;
and, according to typical human body proportions, obtaining and deleting head targets of inconsistent size.
In one implementation, the step of inputting the avatar of the target person and the image to be analyzed into a network tracking model to obtain the confidence value and the coordinate frame includes:
the tracking model is divided into: a front head tracker, a back head tracker, a left head tracker, and a right head tracker;
for each tracker in the tracking model, the following steps are performed:
extracting the integral channel features of each layer of the input image, wherein the channel features comprise color features, gradient magnitude, angle features, and gradient histogram features;
respectively inputting the integral channel features of a given layer, together with the width, height, and dimension of the current layer's channel features, into the four trained classifiers, while also inputting the detection window scale, window sliding step, and detection threshold, to obtain a confidence value and a coordinate frame.
By applying this multi-target tracking and behavior analysis detection method, a plurality of target person head portraits are obtained by decomposing the person image to be tracked; the head portraits and the image to be analyzed are input into the network tracking model to obtain confidence values and coordinate frames; a first target number of coordinate frames are then selected as first coordinate frames according to the confidence values and the coordinate frames; a tracking result is acquired from the historical tracking value and the first coordinate frames; and the detected candidate head target information is output. The embodiment of the invention thereby resolves the false alarms, missed detections, and tracking difficulties that prior-art person detection and tracking methods based on face detection, head-and-shoulder detection, or whole-body detection are prone to, owing to frequent occlusion and the difficulty of classifying targets.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic flow chart of a multi-target tracking and behavior analysis detection method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a multi-target tracking and behavior analysis detection method, including the following steps:
s101, decomposing the image of the person to be tracked to obtain a plurality of target person head portraits.
In video target tracking, the video to be tracked may be in AVI, MP4, MKV, or another format. The person image to be tracked is the video frame that needs to be analyzed and may be any frame of the video to be processed. After the previous video frame has been processed, a tracking result is available; for a subsequent frame, a region around that result can be expanded in the current frame image and the target person head portraits cropped out. The specific expansion of the current frame to find the target head portraits is an existing procedure and is not described again here.
It should be noted that, since target motion in a video is temporally continuous, the target position does not shift much between consecutive frames, so the target's position in the current frame can be estimated from the tracking result of the previous frame, although this estimate is not the exact position. Based on the estimated position, a region enlarged by a certain range is cropped from the current frame to obtain the target person head portrait, and the target template is then matched against it through the network to obtain the target's exact position in the current frame.
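This crop step can be pictured in a few lines. The following is a minimal illustration only, not the patent's implementation: the (x, y, w, h) box format, the expansion factor of 2, and the name crop_search_region are all assumptions.

    import numpy as np

    def crop_search_region(frame, prev_box, expand_factor=2.0):
        """Crop a search region around the previous frame's tracking box.

        frame:         H x W x 3 image as a NumPy array.
        prev_box:      (x, y, w, h) of the target in the previous frame.
        expand_factor: how much to enlarge the box (assumed value; the
                       patent only says "a certain range" is expanded).
        """
        img_h, img_w = frame.shape[:2]
        x, y, w, h = prev_box
        cx, cy = x + w / 2.0, y + h / 2.0               # box center
        sw, sh = w * expand_factor, h * expand_factor   # enlarged size
        # Clamp the enlarged window to the image boundaries.
        x0 = int(max(0, cx - sw / 2.0))
        y0 = int(max(0, cy - sh / 2.0))
        x1 = int(min(img_w, cx + sw / 2.0))
        y1 = int(min(img_h, cy + sh / 2.0))
        return frame[y0:y1, x0:x1], (x0, y0)            # crop and its offset

The returned offset lets coordinate frames predicted inside the crop be mapped back into full-frame coordinates.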
S102, inputting the target person head portraits and the image to be analyzed into a network tracking model to obtain confidence values and coordinate frames, wherein the network tracking model is a tracking model built on a CNN and an RPN, and the image to be analyzed is any corresponding video frame image in the video to be processed.
In practice, for a target specified in advance, a template image is cropped from the first frame of the video after expanding a range around the target, giving the initial image to be analyzed. Taking the second video frame as the person image to be tracked as an example: after the target person head portrait is obtained from the second frame, the image to be analyzed and the head portrait are each fed into the CNN, the CNN outputs are fed into the RPN, and the RPN outputs the confidence values and coordinate frames.
In one implementation, the current frame image is cropped as follows: the number of video frames between the person image to be tracked and the image to be analyzed is obtained, and when this interval exceeds a preset value, the image to be analyzed is cropped. The cropped image is then input into the tracking model.
After the image is input into the network, each convolutional layer computes a multi-channel feature map, which is fed into the next convolutional layer to compute a new feature map. Many classical CNN architectures exist, such as AlexNet, VGG, Inception, and ResNet. Taking AlexNet as an example, there are five convolutional layers interspersed with activation functions (ReLU), pooling layers (max pooling), fully connected layers, and so on. With the fully connected layers omitted, the network output is a 256-channel feature map of size 6×6.
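The patent names a CNN and an RPN but does not give their exact structure. The sketch below assumes a SiamRPN-style design in PyTorch: the backbone features of the target head portrait are cross-correlated with those of the image to be analyzed, and two 1×1 convolution heads emit per-anchor confidence values and coordinate offsets. Every layer size, anchor count, and name here is illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseTrackerSketch(nn.Module):
        """Toy CNN+RPN tracker: shared backbone, correlation, two heads."""

        def __init__(self, feat_ch=256, num_anchors=5):
            super().__init__()
            # Shared backbone (stand-in for the AlexNet-like conv stack).
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 96, kernel_size=11, stride=2), nn.ReLU(),
                nn.MaxPool2d(3, stride=2),
                nn.Conv2d(96, feat_ch, kernel_size=5), nn.ReLU(),
            )
            # RPN heads: classification (confidence) and box regression.
            self.cls_head = nn.Conv2d(feat_ch, 2 * num_anchors, 1)
            self.reg_head = nn.Conv2d(feat_ch, 4 * num_anchors, 1)

        def forward(self, template, search):
            zf = self.backbone(template)   # features of the target head portrait
            xf = self.backbone(search)     # features of the image to be analyzed
            c = zf.shape[1]
            # Depthwise cross-correlation: each template channel slides over
            # the matching search channel (batch size 1 for simplicity).
            resp = F.conv2d(xf, zf.view(c, 1, *zf.shape[2:]), groups=c)
            scores = self.cls_head(resp)   # per-anchor confidence values
            boxes = self.reg_head(resp)    # per-anchor coordinate offsets
            return scores, boxes

With toy inputs, net = SiameseTrackerSketch(); scores, boxes = net(torch.randn(1, 3, 127, 127), torch.randn(1, 3, 255, 255)) yields a 10-channel score map and a 20-channel regression map over a 33×33 grid.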
Specifically, in one implementation of the present invention, the tracking model is divided into a front head tracker, a back head tracker, a left head tracker, and a right head tracker. For each tracker in the tracking model, the following steps are performed: extracting the integral channel features of each layer of the input image, where the channel features comprise color features, gradient magnitude, angle features, and gradient histogram features; and respectively inputting the integral channel features of a given layer, together with the width, height, and dimension of the current layer's channel features, into the four trained classifiers, while also inputting the detection window scale, window sliding step, and detection threshold, to obtain a confidence value and a coordinate frame.
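The channel computation itself can be sketched roughly as follows. This NumPy sketch covers the listed channels (color, gradient magnitude, gradient angle, and orientation-binned gradient histograms); the six-bin quantization and the simple finite differences are assumptions, and the integral-image step over these channels is omitted.

    import numpy as np

    def channel_features(img, num_bins=6):
        """Per-pixel channels: color, gradient magnitude/angle, and
        orientation-binned gradient channels (HOG-like)."""
        gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
        gy, gx = np.gradient(gray)                 # finite-difference gradients
        mag = np.hypot(gx, gy)                     # gradient magnitude channel
        ang = np.mod(np.arctan2(gy, gx), np.pi)    # angle channel in [0, pi)
        # One channel per orientation bin, weighted by gradient magnitude.
        hist = np.zeros((num_bins,) + gray.shape)
        bins = np.minimum((ang / np.pi * num_bins).astype(int), num_bins - 1)
        for b in range(num_bins):
            hist[b] = mag * (bins == b)
        color = img.transpose(2, 0, 1) if img.ndim == 3 else gray[None]
        return np.concatenate([color, mag[None], ang[None], hist], axis=0)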
S103, selecting a first target number of coordinate frames as first coordinate frames according to the confidence value and the coordinate frames.
It should be noted that there are multiple coordinate frames: the deep neural network (CNN) outputs a fixed number of bounding boxes, i.e., coordinate frames.
In one implementation of the present invention, the coordinate frames are selected as follows: the confidence values are sorted in descending order, and the coordinate frames corresponding to the first target number of confidence values are selected in turn as tracking target candidate frames.
After the confidence values are sorted, the first target number of them is selected from largest to smallest; because confidence values and coordinate frames correspond one to one, the coordinate frames corresponding to those confidence values are thereby obtained.
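Because of this one-to-one pairing, the selection reduces to a top-k over the confidence array. A minimal sketch assuming NumPy arrays; select_top_k is an illustrative name, not from the patent.

    import numpy as np

    def select_top_k(confidences, boxes, k):
        """Keep the k coordinate frames with the highest confidence.

        confidences: (N,) array; boxes: (N, 4) array of (x, y, w, h).
        Sorting the confidences in descending order orders the boxes
        too, since the two correspond one to one.
        """
        order = np.argsort(confidences)[::-1][:k]  # indices of the top-k scores
        return confidences[order], boxes[order]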
And S104, acquiring a tracking result according to the historical tracking value and the first coordinate frame.
It should be noted that the distance between coordinate frames in this embodiment is measured by the overlap IoU, i.e., Intersection over Union, a simple and standard metric for how accurately objects are detected in a dataset. The IoU score is a standard performance measure for the object-class segmentation problem: given a set of images, it measures the similarity between the predicted region and the ground-truth region of an object, here the similarity between the first and second coordinate frames.
The IoU values are then sorted so as to retain the first second-number of candidate frames as new tracking target candidate frames. The newly generated candidate frames are then further filtered using the historical tracking coordinate frame information: from the historical coordinate frames and the temporal characteristics of the video, the target's motion trajectory can be predicted, and after this further filtering the coordinate frame with the highest confidence is selected as the best tracking result for the current frame.
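A sketch of the IoU measure and this history-based filtering, assuming (x, y, w, h) boxes. The keep count and the "highest-confidence survivor" rule follow the passage above, but the function names and signatures are illustrative.

    import numpy as np

    def iou(box_a, box_b):
        """Intersection over Union of two (x, y, w, h) boxes."""
        ax, ay, aw, ah = box_a
        bx, by, bw, bh = box_b
        ix0, iy0 = max(ax, bx), max(ay, by)
        ix1, iy1 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
        inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    def filter_by_history(candidates, confidences, history_box, keep=3):
        """Rank candidate frames by IoU with the historical tracking box,
        keep the top few, and return the highest-confidence survivor."""
        ious = np.array([iou(c, history_box) for c in candidates])
        kept = np.argsort(ious)[::-1][:keep]          # top-IoU candidates
        best = kept[np.argmax(confidences[kept])]     # best tracking result
        return candidates[best]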
And S105, outputting the detected candidate human head target information according to the tracking result, wherein the human head target information comprises the scale, the position, the confidence coefficient and the total number of the candidate human head targets.
Detection parameters such as the detection window scale, window sliding step, and detection threshold are input at the same time. The front, back, left, and right head classifiers are trained respectively on front-head, back-head, left-head, and right-head features and can recognize and classify the head portraits in an image. Finally, the candidate head target information detected at the current layer is output, comprising the scale, position, confidence, and total number of the candidate head targets. Candidate head targets are detected in each layer in turn according to that layer's integral channel features.
Specifically, the step of outputting the detected candidate head target information according to the tracking result includes: merging the candidate human head targets; obtaining candidate head targets whose positions overlap and combining them; obtaining candidate head targets whose confidence values are below a preset value and deleting them; and, according to typical human body proportions, obtaining and deleting head targets of inconsistent size.
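A sketch of this post-processing under assumed thresholds (the confidence, overlap, and size-tolerance values are illustrative), reusing the iou helper from the sketch in step S104. The greedy merge is one plausible reading of "combining candidates with overlapping positions", not an algorithm the patent specifies.

    import numpy as np

    def postprocess_heads(boxes, scores, conf_thresh=0.5,
                          overlap_thresh=0.5, size_tol=0.5):
        """Merge overlapping head candidates, drop low-confidence ones,
        and drop heads whose area is implausible versus the median."""
        keep = scores >= conf_thresh                   # confidence filter
        boxes, scores = boxes[keep], scores[keep]
        used = np.zeros(len(boxes), dtype=bool)
        order = np.argsort(scores)[::-1]               # high confidence first
        merged = []
        for i in order:                                # greedy overlap merge
            if used[i]:
                continue
            group = [j for j in order
                     if not used[j] and iou(boxes[i], boxes[j]) >= overlap_thresh]
            used[group] = True
            merged.append(boxes[group].mean(axis=0))   # fuse overlapping boxes
        merged = np.asarray(merged)
        if merged.size == 0:
            return merged
        areas = merged[:, 2] * merged[:, 3]
        med = np.median(areas)                         # median head area
        return merged[np.abs(areas - med) <= size_tol * med]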
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (5)

1. A multi-target tracking and behavior analysis detection method is characterized by comprising the following steps:
decomposing the image of the person to be tracked to obtain a plurality of target person head portraits;
inputting the target person head portraits and the image to be analyzed into a network tracking model to obtain confidence values and coordinate frames, wherein the network tracking model is a tracking model built on a CNN and an RPN, and the image to be analyzed is any corresponding video frame image in the video to be processed;
selecting a first target number of coordinate frames as first coordinate frames according to the confidence value and the coordinate frames;
acquiring a tracking result according to a historical tracking value and the first coordinate frame;
and outputting the detected candidate human head target information according to the tracking result, wherein the head target information comprises the scale, position, confidence, and total number of the candidate head targets.
2. The multi-target tracking and behavioral analysis detection method according to claim 1, characterized in that the method further comprises:
acquiring the number of video frames between the person image to be tracked and the image to be analyzed;
when the interval number is larger than a preset value, cropping the image to be analyzed;
the step of inputting the target character avatar and the image to be analyzed into a network tracking model includes:
and inputting the cut image to be analyzed and the target character avatar into a network tracking model.
3. The multi-target tracking and behavior analysis detection method according to claim 2, wherein the step of selecting a first target number of coordinate frames as first coordinate frames according to the confidence values and the coordinate frames comprises:
sorting the confidence values in descending order;
selecting the first target number of highest confidence values in turn, and taking the coordinate frames corresponding to them as tracking target candidate frames.
4. The multi-target tracking and behavior analysis detection method according to claim 1, wherein the step of outputting the detected candidate human head target information according to the tracking result comprises:
merging the candidate human head targets;
obtaining candidate head targets whose positions overlap and combining them;
obtaining candidate head targets whose confidence values are below a preset value and deleting them;
and, according to typical human body proportions, obtaining and deleting head targets of inconsistent size.
5. The multi-target tracking and behavior analysis detection method according to claim 1, wherein the step of inputting the target human avatar and the image to be analyzed to a network tracking model to obtain a confidence value and a coordinate frame comprises:
the tracking model is divided into: a front head tracker, a back head tracker, a left head tracker, and a right head tracker;
For each tracker in the tracking model, performing the steps of:
extracting the integral channel features of each layer of the input image, wherein the channel features comprise color features, gradient magnitude, angle features, and gradient histogram features;
respectively inputting the integral channel features of a given layer, together with the width, height, and dimension of the current layer's channel features, into the four trained classifiers, while also inputting the detection window scale, window sliding step, and detection threshold, to obtain a confidence value and a coordinate frame.
CN201910981794.7A 2019-10-16 2019-10-16 Multi-target tracking and behavior analysis detection method Pending CN111027370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910981794.7A CN111027370A (en) 2019-10-16 2019-10-16 Multi-target tracking and behavior analysis detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910981794.7A CN111027370A (en) 2019-10-16 2019-10-16 Multi-target tracking and behavior analysis detection method

Publications (1)

Publication Number Publication Date
CN111027370A 2020-04-17

Family

ID=70205077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910981794.7A Pending CN111027370A (en) 2019-10-16 2019-10-16 Multi-target tracking and behavior analysis detection method

Country Status (1)

Country Link
CN (1) CN111027370A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090041297A1 (en) * 2005-05-31 2009-02-12 Objectvideo, Inc. Human detection and tracking for security applications
CN105184258A (en) * 2015-09-09 2015-12-23 苏州科达科技股份有限公司 Target tracking method and system and staff behavior analyzing method and system
WO2018121286A1 (en) * 2016-12-30 2018-07-05 纳恩博(北京)科技有限公司 Target tracking method and device
CN109360226A (en) * 2018-10-17 2019-02-19 武汉大学 A kind of multi-object tracking method based on time series multiple features fusion
CN109934844A (en) * 2019-01-28 2019-06-25 中国人民解放军战略支援部队信息工程大学 A kind of multi-object tracking method and system merging geospatial information
CN110084829A (en) * 2019-03-12 2019-08-02 上海阅面网络科技有限公司 Method for tracking target, device, electronic equipment and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Kai; SONG Xiao; LIU Jing: "Pedestrian tracking framework based on deep convolutional networks and scale-invariant feature transform" (in Chinese) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232153A (en) * 2020-09-30 2021-01-15 广东职业技术学院 Method and system for acquiring track of target person
CN113191318A (en) * 2021-05-21 2021-07-30 上海商汤智能科技有限公司 Target detection method and device, electronic equipment and storage medium
CN113807403A (en) * 2021-08-23 2021-12-17 网易(杭州)网络有限公司 Model training method and device, computer equipment and storage medium
CN113807403B (en) * 2021-08-23 2023-06-16 网易(杭州)网络有限公司 Model training method, device, computer equipment and storage medium
WO2023065938A1 (en) * 2021-10-22 2023-04-27 广州视源电子科技股份有限公司 Target tracking method and apparatus, target selection method, and medium and electronic device
CN114219832A (en) * 2021-11-29 2022-03-22 浙江大华技术股份有限公司 Face tracking method and device and computer readable storage medium
CN117545145A (en) * 2023-11-24 2024-02-09 海南博思高科软件开发有限公司 Space-time illumination control method and system based on video image data processing
CN117934555A (en) * 2024-03-21 2024-04-26 西南交通大学 Vehicle speed identification method, device, equipment and medium based on deep learning

Similar Documents

Publication Publication Date Title
CN111027370A (en) Multi-target tracking and behavior analysis detection method
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN107527009B (en) Remnant detection method based on YOLO target detection
KR102129893B1 (en) Ship tracking method and system based on deep learning network and average movement
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN108446630B (en) Intelligent monitoring method for airport runway, application server and computer storage medium
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
CN104933710B (en) Based on the shop stream of people track intelligent analysis method under monitor video
CN110738127A (en) Helmet identification method based on unsupervised deep learning neural network algorithm
Nandhini et al. CNN Based Moving Object Detection from Surveillance Video in Comparison with GMM
CN103824070A (en) Rapid pedestrian detection method based on computer vision
CN114241548A (en) Small target detection algorithm based on improved YOLOv5
CN109472226B (en) Sleeping behavior detection method based on deep learning
CN111738218B (en) Human body abnormal behavior recognition system and method
CN110874592A (en) Forest fire smoke image detection method based on total bounded variation
KR101472674B1 (en) Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images
CN112270381B (en) People flow detection method based on deep learning
Murugesan et al. Bayesian Feed Forward Neural Network-Based Efficient Anomaly Detection from Surveillance Videos.
CN108710879B (en) Pedestrian candidate region generation method based on grid clustering algorithm
CN109063630B (en) Rapid vehicle detection method based on separable convolution technology and frame difference compensation strategy
Kongurgsa et al. Real-time intrusion—detecting and alert system by image processing techniques
CN111476160A (en) Loss function optimization method, model training method, target detection method, and medium
CN113706481A (en) Sperm quality detection method, sperm quality detection device, computer equipment and storage medium
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
Zhou et al. A study on attention-based LSTM for abnormal behavior recognition with variable pooling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination