CN110781806A - Pedestrian detection tracking method based on YOLO - Google Patents
- Publication number
- CN110781806A (application CN201911012572.0A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- yolo
- training
- model
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a YOLO-based pedestrian detection and tracking method, belonging to the technical field of information engineering. The method comprises the following steps: network model training, video acquisition, video input, target pedestrian prediction, pedestrian trajectory tracking, and alarming or prompting. The invention adopts a YOLO network discriminative model and a classification method to realize pedestrian tracking: the extracted background and object features are used to distinguish the object to be tracked from the background and thereby obtain its position in the image frame, with higher precision than traditional generative models.
Description
Technical Field
The invention relates to the technical field of information engineering, in particular to a pedestrian detection tracking method based on YOLO.
Background
Nowadays, video surveillance systems are widely installed in public places including, but not limited to, squares, outdoor forest-park roads, and residential building entrances, and detecting pedestrians in images from such varied scenes has been a focus of recent research.
Existing pedestrian detection generally comprises three steps: region proposal, feature extraction, and pattern classification. Pedestrian detection is difficult owing to external factors such as a large range of scale variation, complex appearance and posture, and lighting and occlusion. Many practical application scenarios place high demands on the speed, accuracy, and model size of pedestrian detection. Prior studies of these three aspects fall into two categories: methods based on background modeling and methods based on statistical learning. Both have achieved certain results, but background-modeling methods lack robustness and resistance to interference, while statistical-learning methods are strongly affected by the training samples and cope poorly with real scenes.
Disclosure of Invention
In order to solve the above problems, the present invention provides a YOLO-based pedestrian detection and tracking method which uses the extracted features of the background and the object to distinguish the object to be tracked from the background and thereby obtain its position in the image frame; by explicitly separating background from foreground, it achieves higher pedestrian tracking accuracy.
To achieve this purpose, the invention is realized by the following scheme. A pedestrian detection and tracking method based on YOLO comprises the following steps:
step 1, training a network model: train a YOLO network model; the trained model contains each recognition target and its name; set a confidence threshold, and label targets whose confidence is greater than or equal to the threshold as 'person';
step 2, video acquisition: collect the video captured by a camera and input it to a server or a 'Super Brain' intelligent network video recorder;
step 3, video input: save each frame of the video input in step 2 using OpenCV's VideoWriter, and scale the saved pixel values to the range 0-1 to obtain a scaled frame image;
step 4, target pedestrian prediction: input the frame image scaled in step 3 into the YOLO network model of step 1 and judge whether it contains a pedestrian; if so, mark the pedestrian with a bounding box, otherwise perform no processing;
step 5, pedestrian trajectory tracking: iterate with a for loop over each frame in which step 4 detected a pedestrian, and use OpenCV's drawing functions to connect the midpoints of the pedestrian bounding boxes across frames to form the target pedestrian's tracking trajectory;
step 6, alarming or prompting: when a pedestrian is detected in the acquisition area by the method of steps 2-5, an audible and visual alarm warns the pedestrian against entering the area and simultaneously notifies the staff that someone has entered the acquisition area.
Further, step 1 comprises the following sub-steps:
1.1 pre-training: the YOLO network structure has 26 layers in total, comprising 24 convolutional layers and 2 fully connected layers. The first 20 convolutional layers of the YOLO network, followed by 1 average-pooling layer and 1 fully connected layer, are trained on the ImageNet 1000-class data set; before training, the training images in the ImageNet 1000-class data set are resized to 224 × 224 with OpenCV's resize, yielding weight files for the 20 convolutional layers.
1.2 training: initialize the first 20 convolutional layers of the YOLO model with the weight files obtained in step 1.1, randomly initialize the remaining 4 convolutional layers and 2 fully connected layers, and train the YOLO model on the PASCAL VOC 20-class labeled data set; before training, the training images in the PASCAL VOC 20-class data set are resized to 448 × 448 with OpenCV's resize.
1.3 model parameter confirmation: collect pedestrian images and annotate them with the LabelImg tool to produce a labeled data set; continue training the YOLO model on this data set, and in the configuration file set the recognition classes to Classes = ['person'], so that detections at or above the confidence threshold are reported as pedestrians.
Further, the confidence threshold is 0.7.
Further, the bounding box marking of step 4 adopts the following method. The bounding box is given by the quintuple

(x, y, w, h, Score_confidence) (2)

In the prediction of the target pedestrian, the value of Score_confidence is determined by equation (1):

Score_confidence = Pr(Object) × IOU(truth, pred) (1)

where Pr(Object) indicates whether a pedestrian actually appears in the grid cell (1 if present, 0 otherwise), and IOU(truth, pred) is the overlap ratio between the areas of the predicted box (pred) and the actual box (truth), the latter being the labeled ground truth in the data set.

When the YOLO model actually detects a pedestrian, the five values of formula (2) are output directly: the coordinates (x, y) give the offset of the predicted box centre relative to the grid cell boundary, and (w, h) give the ratios of the predicted box's width and height to those of the whole image; all values lie between 0 and 1.
Compared with the prior art, the invention has the following beneficial effects. The invention uses the YOLO algorithm for pedestrian detection and exploits OpenCV's powerful functions and rich interfaces to process the collected video. The extracted background and object features distinguish the object to be tracked from the background, pedestrian tracking is realized with a classification method, and the tracked trajectory is displayed on a workstation to prompt its staff. Audible and visual alarms are installed in certain forbidden zones, and a corresponding warning is issued when a trespassing pedestrian is detected. To train the best YOLO pedestrian detection model, a large number of pedestrian photographs were collected from residential-community, subway-station, and road surveillance to build a data set. Detection precision and speed are thereby greatly improved, generalization is better, and the range of application scenes is wider.
Drawings
FIG. 1 is a flow chart of a pedestrian detection tracking method of the present invention;
FIG. 2 is a diagram of a YOLO network architecture;
FIG. 3 is a YOLO model workflow diagram.
Detailed Description
The technical solution of the present invention is further explained with reference to the accompanying drawings.
The YOLO pedestrian detection and tracking system adopted by the invention comprises: a network camera for acquiring video of the target area and converting it into a digital image signal; a server or 'Super Brain' intelligent network video recorder for training and running the YOLO network model, reading and storing the network camera streams, and performing digital image analysis; a workstation for setting the forbidden-zone size and alarm rules, viewing and replaying video together with its pedestrian marks and motion trajectories, raising alarms, and viewing and printing alarm information, comprising a computer host, display, keyboard and mouse, audible-and-visual alarm, printer, and the like; and an optical-fibre switch for converting electrical signals into optical signals for optical transmission networking.
Fig. 1 is a flowchart of a pedestrian detection and tracking method according to the present invention, which specifically includes the following steps:
Step 1, training a network model: train a YOLO network model; the trained model contains all recognition targets and their names; set a confidence threshold, and label targets whose confidence is greater than or equal to the threshold as 'person'.
Step 1 is divided into the following substeps:
1.1 pre-training: the YOLO network structure is shown in fig. 2; it has 26 layers in total, comprising 24 convolutional layers and 2 fully connected layers. The first 20 convolutional layers of the YOLO network, followed by 1 average-pooling layer and 1 fully connected layer, are trained on the ImageNet 1000-class data set; before training, the training images in the ImageNet 1000-class data set are resized to 224 × 224 with OpenCV's resize to increase training speed, yielding weight files for the 20 convolutional layers.
1.2 training: initialize the first 20 convolutional layers of the YOLO model with the weight files obtained in step 1.1, randomly initialize the remaining 4 convolutional layers and 2 fully connected layers, and train the YOLO model on the PASCAL VOC 20-class labeled data set; before training, the training images in the PASCAL VOC 20-class data set are resized to 448 × 448 with OpenCV's resize to improve training precision. The YOLO model workflow is shown in fig. 3: a 448 × 448 image is input and a tensor of size S × S × (5B + C) is output. S is the grid size into which the YOLO model divides the input image; the invention adopts the officially recommended 7 × 7. B is the number of boxes predicted per grid cell; considering model computation speed, B is 2. C is the number of object classes the model can recognize; since the PASCAL VOC 20-class data set contains 20 object classes, C is 20. The whole YOLO model therefore outputs a 7 × 7 × 30 tensor.
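The output-tensor arithmetic above can be checked with a short sketch (an illustration of the stated dimensions, not code from the patent): for S = 7, B = 2, C = 20, each grid cell carries 5B + C = 30 values.

```python
import numpy as np

# Illustrative sketch of the YOLO v1 output-tensor size for the
# parameters given above (not the patented implementation).
S, B, C = 7, 2, 20            # grid size, boxes per cell, classes

# Each grid cell predicts B boxes of 5 values (x, y, w, h, confidence)
# plus C class probabilities, so the output depth is 5*B + C.
depth = 5 * B + C             # 30
output = np.zeros((S, S, depth), dtype=np.float32)
print(output.shape)           # (7, 7, 30)
```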
1.3 model parameter confirmation: collect pedestrian images and annotate them with the LabelImg tool to produce a labeled data set; continue training the YOLO model on this data set, and in the configuration file set the recognition classes to Classes = ['person'], so that the model is adjusted to recognize people only and detections at or above the confidence threshold are reported as pedestrians.
Through repeated experiments and tests, the confidence threshold was set to 0.7.
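The thresholding rule can be sketched as follows (a hypothetical helper; the detection-tuple format and function name are illustrative assumptions, not the patent's code): detections below the 0.7 threshold are discarded, and the rest are labeled 'person'.

```python
# Hypothetical sketch of the 0.7 confidence-threshold rule; the
# (x, y, w, h, score) tuple format is an assumed stand-in for the
# model's per-detection output.
CONF_THRESHOLD = 0.7

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep detections with Score_confidence >= threshold, labeled 'person'."""
    return [(x, y, w, h, "person")
            for (x, y, w, h, score) in detections
            if score >= threshold]

boxes = filter_detections([(0.5, 0.5, 0.2, 0.4, 0.91),   # kept
                           (0.1, 0.2, 0.1, 0.3, 0.42)])  # discarded
print(boxes)
```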
Step 2, video acquisition: collect the video captured by a camera and input it to a server or a 'Super Brain' intelligent network video recorder;
Step 3, video input: save each frame of the video input in step 2 using OpenCV's VideoWriter, and scale the saved pixel values to the range 0-1 to obtain a scaled frame image;
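The pixel scaling of step 3 can be sketched as below (a minimal illustration: in practice the frames would come from the camera stream via OpenCV, whereas here a synthetic 8-bit frame stands in so the scaling step can be shown on its own).

```python
import numpy as np

def scale_frame(frame):
    """Scale 8-bit pixel values into the 0-1 range expected by the model."""
    return frame.astype(np.float32) / 255.0

# Synthetic stand-in for one saved video frame (448 x 448, 3 channels).
frame = np.full((448, 448, 3), 255, dtype=np.uint8)
scaled = scale_frame(frame)
print(scaled.min(), scaled.max())   # all values now lie in [0, 1]
```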
Step 4, target pedestrian prediction: input the frame image scaled in step 3 into the YOLO network model of step 1 and judge whether it contains a pedestrian; if so, mark the pedestrian with a bounding box, otherwise perform no processing;
The bounding box marking of step 4 adopts the following method. The bounding box is given by the quintuple

(x, y, w, h, Score_confidence) (2)

In the prediction of the target pedestrian, the value of Score_confidence is determined by equation (1):

Score_confidence = Pr(Object) × IOU(truth, pred) (1)

where Pr(Object) indicates whether a pedestrian actually appears in the grid cell (1 if present, 0 otherwise), and IOU(truth, pred) is the overlap ratio between the areas of the predicted box (pred) and the actual box (truth), the latter being the labeled ground truth in the data set; the larger the IOU, the higher the pedestrian detection accuracy.

When the YOLO model actually detects a pedestrian, the five values of formula (2) are output directly: the coordinates (x, y) give the offset of the predicted box centre relative to the grid cell boundary, and (w, h) give the ratios of the predicted box's width and height to those of the whole image; all values lie between 0 and 1.
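The Score_confidence of equation (1), i.e. Pr(Object) multiplied by the IOU between the predicted and ground-truth boxes, can be illustrated with a minimal sketch (an assumption for illustration, not the patent's code), with boxes given as (x1, y1, x2, y2) corners:

```python
def iou(pred, truth):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(pred[0], truth[0]), max(pred[1], truth[1])
    ix2, iy2 = min(pred[2], truth[2]), min(pred[3], truth[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(pred) + area(truth) - inter
    return inter / union if union > 0 else 0.0

def score_confidence(pr_object, pred, truth):
    """Equation (1): Pr(Object) * IOU(truth, pred); Pr(Object) is 1 or 0."""
    return pr_object * iou(pred, truth)

# Two offset 2x2 boxes overlap in a 1x1 square: IOU = 1 / (4 + 4 - 1) = 1/7.
print(score_confidence(1, (0, 0, 2, 2), (1, 1, 3, 3)))
```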
Step 5, pedestrian trajectory tracking: iterate with a for loop over each frame in which step 4 detected a pedestrian, and use OpenCV's drawing functions to connect the midpoints of the pedestrian bounding boxes across frames to form the target pedestrian's tracking trajectory.
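The midpoint-joining of step 5 can be sketched as below (a hypothetical helper, assuming one (x, y, w, h) pedestrian box per frame; in the patent the resulting segments are drawn onto the frames with OpenCV):

```python
def track_midpoints(boxes_per_frame):
    """Join bounding-box centres of consecutive frames into trajectory segments."""
    centres = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in boxes_per_frame]
    # Each consecutive pair of centres is one segment of the trajectory
    # (cv2.line would draw each pair onto the frame image).
    return list(zip(centres, centres[1:]))

segments = track_midpoints([(10, 10, 4, 8), (14, 12, 4, 8), (20, 15, 4, 8)])
print(segments)   # [((12.0, 14.0), (16.0, 16.0)), ((16.0, 16.0), (22.0, 19.0))]
```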
Step 6, alarming or prompting: when a pedestrian is detected in the acquisition area by the method of steps 2-5, an audible and visual alarm warns the pedestrian against entering the area and simultaneously notifies the staff that someone has entered the acquisition area.
The invention collects a large number of pedestrian photographs in different directions, angles, scenes, and lighting conditions to build a data set. Using the YOLO algorithm's speed, precision, and strong generalization together with this self-made data set, a better YOLO model for detecting pedestrians is trained. Compared with the traditional HOG + SVM method, the model is markedly better in recognition accuracy and speed and has a wider range of application.
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope; all equivalent structural changes made using the contents of this specification, and any direct or indirect applications in other related technical fields, are included within the protection scope of the present invention.
Claims (4)
1. A pedestrian detection and tracking method based on YOLO is characterized by comprising the following steps:
step 1, training a network model: train a YOLO network model; the trained model contains each recognition target and its name; set a confidence threshold, and label targets whose confidence is greater than or equal to the threshold as 'person';
step 2, video acquisition: collect the video captured by a camera and input it to a server or a 'Super Brain' intelligent network video recorder;
step 3, video input: save each frame of the video input in step 2 using OpenCV's VideoWriter, and scale the saved pixel values to the range 0-1 to obtain a scaled frame image;
step 4, target pedestrian prediction: input the frame image scaled in step 3 into the YOLO network model of step 1 and judge whether it contains a pedestrian; if so, mark the pedestrian with a bounding box, otherwise perform no processing;
step 5, pedestrian trajectory tracking: iterate with a for loop over each frame in which step 4 detected a pedestrian, and use OpenCV's drawing functions to connect the midpoints of the pedestrian bounding boxes across frames to form the target pedestrian's tracking trajectory;
step 6, alarming or prompting: when a pedestrian is detected in the acquisition area by the method of steps 2-5, an audible and visual alarm warns the pedestrian against entering the area and simultaneously notifies the staff that someone has entered the acquisition area.
2. The method according to claim 1, characterized in that step 1 comprises the following sub-steps:
1.1 pre-training: the YOLO network structure has 26 layers in total, comprising 24 convolutional layers and 2 fully connected layers. The first 20 convolutional layers of the YOLO network, followed by 1 average-pooling layer and 1 fully connected layer, are trained on the ImageNet 1000-class data set; before training, the training images in the ImageNet 1000-class data set are resized to 224 × 224 with OpenCV's resize, yielding weight files for the 20 convolutional layers.
1.2 training: initialize the first 20 convolutional layers of the YOLO model with the weight files obtained in step 1.1, randomly initialize the remaining 4 convolutional layers and 2 fully connected layers, and train the YOLO model on the PASCAL VOC 20-class labeled data set; before training, the training images in the PASCAL VOC 20-class data set are resized to 448 × 448 with OpenCV's resize.
1.3 model parameter confirmation: collect pedestrian images and annotate them with the LabelImg tool to produce a labeled data set; continue training the YOLO model on this data set, and in the configuration file set the recognition classes to Classes = ['person'], so that detections at or above the confidence threshold are reported as pedestrians.
3. The method according to claim 1, wherein the confidence threshold is 0.7.
4. The method according to claim 1, wherein the bounding box marking of step 4 adopts the following method: the bounding box is given by the quintuple

(x, y, w, h, Score_confidence) (2)

In the prediction of the target pedestrian, the value of Score_confidence is determined by equation (1):

Score_confidence = Pr(Object) × IOU(truth, pred) (1)

where Pr(Object) indicates whether a pedestrian actually appears in the grid cell (1 if present, 0 otherwise), and IOU(truth, pred) is the overlap ratio between the areas of the predicted box (pred) and the actual box (truth), the latter being the labeled ground truth in the data set.

When the YOLO model actually detects a pedestrian, the five values of formula (2) are output directly: the coordinates (x, y) give the offset of the predicted box centre relative to the grid cell boundary, and (w, h) give the ratios of the predicted box's width and height to those of the whole image; all values lie between 0 and 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911012572.0A CN110781806A (en) | 2019-10-23 | 2019-10-23 | Pedestrian detection tracking method based on YOLO |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911012572.0A CN110781806A (en) | 2019-10-23 | 2019-10-23 | Pedestrian detection tracking method based on YOLO |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110781806A true CN110781806A (en) | 2020-02-11 |
Family
ID=69386682
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911012572.0A Pending CN110781806A (en) | 2019-10-23 | 2019-10-23 | Pedestrian detection tracking method based on YOLO |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110781806A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112487920A (en) * | 2020-11-25 | 2021-03-12 | 电子科技大学 | Convolution neural network-based crossing behavior identification method |
CN113115229A (en) * | 2021-02-24 | 2021-07-13 | 福建德正智能有限公司 | Personnel trajectory tracking method and system based on Beidou grid code |
CN113158897A (en) * | 2021-04-21 | 2021-07-23 | 新疆大学 | Pedestrian detection system based on embedded YOLOv3 algorithm |
CN113470073A (en) * | 2021-07-06 | 2021-10-01 | 浙江大学 | Animal center tracking method based on deep learning |
CN113516685A (en) * | 2021-07-09 | 2021-10-19 | 东软睿驰汽车技术(沈阳)有限公司 | Target tracking method, device, equipment and storage medium |
CN113568407A (en) * | 2021-07-27 | 2021-10-29 | 山东中科先进技术研究院有限公司 | Man-machine cooperation safety early warning method and system based on deep vision |
CN114241763A (en) * | 2021-12-14 | 2022-03-25 | 中国电信股份有限公司 | Traffic behavior warning method and device, electronic equipment and computer readable medium |
CN114511899A (en) * | 2021-12-30 | 2022-05-17 | 武汉光庭信息技术股份有限公司 | Street view video fuzzy processing method and system, electronic equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107610151A (en) * | 2017-08-24 | 2018-01-19 | 上海汇纳信息科技股份有限公司 | Pedestrian track line handling method/system, computer-readable recording medium and equipment |
CN108021848A (en) * | 2016-11-03 | 2018-05-11 | 浙江宇视科技有限公司 | Passenger flow volume statistical method and device |
CN108229390A (en) * | 2018-01-02 | 2018-06-29 | 济南中维世纪科技有限公司 | Rapid pedestrian detection method based on deep learning |
CN108509859A (en) * | 2018-03-09 | 2018-09-07 | 南京邮电大学 | A kind of non-overlapping region pedestrian tracting method based on deep neural network |
CN108985186A (en) * | 2018-06-27 | 2018-12-11 | 武汉理工大学 | A kind of unmanned middle pedestrian detection method based on improvement YOLOv2 |
CN109241814A (en) * | 2018-06-26 | 2019-01-18 | 武汉科技大学 | Pedestrian detection method based on YOLO neural network |
CN109978918A (en) * | 2019-03-21 | 2019-07-05 | 腾讯科技(深圳)有限公司 | A kind of trajectory track method, apparatus and storage medium |
CN110135314A (en) * | 2019-05-07 | 2019-08-16 | 电子科技大学 | A kind of multi-object tracking method based on depth Trajectory prediction |
- 2019-10-23: application CN201911012572.0A filed (CN), published as CN110781806A, status active, Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108021848A (en) * | 2016-11-03 | 2018-05-11 | 浙江宇视科技有限公司 | Passenger flow volume statistical method and device |
CN107610151A (en) * | 2017-08-24 | 2018-01-19 | 上海汇纳信息科技股份有限公司 | Pedestrian track line handling method/system, computer-readable recording medium and equipment |
CN108229390A (en) * | 2018-01-02 | 2018-06-29 | 济南中维世纪科技有限公司 | Rapid pedestrian detection method based on deep learning |
CN108509859A (en) * | 2018-03-09 | 2018-09-07 | 南京邮电大学 | A kind of non-overlapping region pedestrian tracting method based on deep neural network |
CN109241814A (en) * | 2018-06-26 | 2019-01-18 | 武汉科技大学 | Pedestrian detection method based on YOLO neural network |
CN108985186A (en) * | 2018-06-27 | 2018-12-11 | 武汉理工大学 | A kind of unmanned middle pedestrian detection method based on improvement YOLOv2 |
CN109978918A (en) * | 2019-03-21 | 2019-07-05 | 腾讯科技(深圳)有限公司 | A kind of trajectory track method, apparatus and storage medium |
CN110135314A (en) * | 2019-05-07 | 2019-08-16 | 电子科技大学 | A kind of multi-object tracking method based on depth Trajectory prediction |
Non-Patent Citations (1)
Title |
---|
Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi: "You Only Look Once: Unified, Real-Time Object Detection", 2016 IEEE Conference on Computer Vision and Pattern Recognition |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112487920A (en) * | 2020-11-25 | 2021-03-12 | 电子科技大学 | Convolution neural network-based crossing behavior identification method |
CN112487920B (en) * | 2020-11-25 | 2022-03-15 | 电子科技大学 | Convolution neural network-based crossing behavior identification method |
CN113115229A (en) * | 2021-02-24 | 2021-07-13 | 福建德正智能有限公司 | Personnel trajectory tracking method and system based on Beidou grid code |
CN113158897A (en) * | 2021-04-21 | 2021-07-23 | 新疆大学 | Pedestrian detection system based on embedded YOLOv3 algorithm |
CN113470073A (en) * | 2021-07-06 | 2021-10-01 | 浙江大学 | Animal center tracking method based on deep learning |
CN113516685A (en) * | 2021-07-09 | 2021-10-19 | 东软睿驰汽车技术(沈阳)有限公司 | Target tracking method, device, equipment and storage medium |
CN113568407A (en) * | 2021-07-27 | 2021-10-29 | 山东中科先进技术研究院有限公司 | Man-machine cooperation safety early warning method and system based on deep vision |
CN113568407B (en) * | 2021-07-27 | 2024-09-20 | 山东中科先进技术有限公司 | Man-machine cooperation safety early warning method and system based on depth vision |
CN114241763A (en) * | 2021-12-14 | 2022-03-25 | 中国电信股份有限公司 | Traffic behavior warning method and device, electronic equipment and computer readable medium |
CN114511899A (en) * | 2021-12-30 | 2022-05-17 | 武汉光庭信息技术股份有限公司 | Street view video fuzzy processing method and system, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110781806A (en) | Pedestrian detection tracking method based on YOLO | |
KR102129893B1 (en) | Ship tracking method and system based on deep learning network and average movement | |
CN108062349B (en) | Video monitoring method and system based on video structured data and deep learning | |
CN106897670B (en) | Express violence sorting identification method based on computer vision | |
CN109977782B (en) | Cross-store operation behavior detection method based on target position information reasoning | |
CN104378582B (en) | A kind of intelligent video analysis system and method cruised based on Pan/Tilt/Zoom camera | |
CN104978567B (en) | Vehicle checking method based on scene classification | |
CN110852179B (en) | Suspicious personnel invasion detection method based on video monitoring platform | |
CN103577875A (en) | CAD (computer-aided design) people counting method based on FAST (features from accelerated segment test) | |
CN113592905B (en) | Vehicle driving track prediction method based on monocular camera | |
CN112613668A (en) | Scenic spot dangerous area management and control method based on artificial intelligence | |
CN106228570A (en) | A kind of Truth data determines method and apparatus | |
CN112836657A (en) | Pedestrian detection method and system based on lightweight YOLOv3 | |
CN112541403B (en) | Indoor personnel falling detection method by utilizing infrared camera | |
CN113298018A (en) | False face video detection method and device based on optical flow field and facial muscle movement | |
Waqar et al. | Meter digit recognition via Faster R-CNN | |
Azimjonov et al. | Vision-based vehicle tracking on highway traffic using bounding-box features to extract statistical information | |
Cheng et al. | Semantic segmentation for pedestrian detection from motion in temporal domain | |
Wu et al. | Vehicle Classification and Counting System Using YOLO Object Detection Technology. | |
Balali et al. | Video-based highway asset recognition and 3D localization | |
Li et al. | Intelligent transportation video tracking technology based on computer and image processing technology | |
CN117475353A (en) | Video-based abnormal smoke identification method and system | |
CN117294818A (en) | Building site panoramic monitoring method for airport construction | |
CN117351409A (en) | Intelligent concrete dam face operation risk identification method | |
Bharathi et al. | A Conceptual Real-Time Deep Learning Approach for Object Detection, Tracking and Monitoring Social Distance using Yolov5 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200211 |