CN112347938B - People stream detection method based on improved YOLOv3 - Google Patents


Info

Publication number
CN112347938B
CN112347938B (application CN202011236196.6A)
Authority
CN
China
Prior art keywords
area
people
monitoring
flow
vehicles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011236196.6A
Other languages
Chinese (zh)
Other versions
CN112347938A (en
Inventor
王议
袁佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Mechatronic Technology
Original Assignee
Nanjing Institute of Mechatronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Mechatronic Technology filed Critical Nanjing Institute of Mechatronic Technology
Priority to CN202011236196.6A priority Critical patent/CN112347938B/en
Publication of CN112347938A publication Critical patent/CN112347938A/en
Application granted granted Critical
Publication of CN112347938B publication Critical patent/CN112347938B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A people stream detection method based on improved YOLOv3. The method comprises the following steps: step 1, acquiring experimental data for people flow monitoring; step 2, modeling the people stream monitoring problem; step 3, distinguishing the types of traffic video monitoring images using a kNN algorithm; step 4, improving the anchor frame and receptive field mechanisms of the YOLOv3 model to monitor targets; and step 5, solving the people stream monitoring model. According to the invention, people flow and vehicle flow are monitored on the basis of traffic monitoring video images; the anchor frames of the YOLOv3 model are improved by combining the regression difficulty of the model's detection frames with the K-means algorithm, and a receptive field mechanism is added to the backbone module to extract deeper image features, improving the feature extraction capability of the model.

Description

People stream detection method based on improved YOLOv3
Technical Field
The invention relates to the field of people flow monitoring, in particular to a people stream detection method based on improved YOLOv3.
Background
With the rapid development of cities, people stream management and mobility-pattern mining in cities are becoming increasingly important. Meanwhile, with the development of various perception technologies represented by crowd sensing, the concept of the smart city has been proposed, and the large amount of perception data in smart cities makes people stream analysis possible. Regional crowd flow prediction has important strategic significance for traffic management and public safety: if the flow of crowds can be predicted in advance, relevant departments can issue early warnings, start emergency mechanisms, and avoid accidents. The smart city field encompasses many leading-edge research directions, such as speed detection and flow estimation at traffic intersections. Researchers have also predicted the future trajectories of individuals from their historical trajectories. However, such individual-level studies cannot meet existing needs when a prediction of the flow across a whole area is required. Urban road networks are complex and moving individuals are too numerous to be fully monitored, yet relevant departments such as the government are concerned with the personnel flow of the whole area. Therefore, detecting changes in regional flow in advance can provide guidance for relevant departments, allowing intervention measures to be taken earlier and dangers to be avoided.
Disclosure of Invention
In order to solve the above problems, the invention provides a people stream detection method based on improved YOLOv3, built on the YOLOv3 algorithm. To reduce the regression difficulty of the detection frame, prior knowledge of the detection targets is incorporated by adjusting the network's anchor frames. In addition, to strengthen the extracted features, a multi-receptive-field mechanism is added to the backbone module to extract advanced features of the monitored images. To this end, the invention provides a people stream detection method based on improved YOLOv3, comprising the following specific steps:
step 1, obtaining experimental data: dividing different areas and capturing photos of pedestrians and vehicles in the different areas from traffic monitoring video;
step 2, modeling the people stream monitoring problem: the flow in a certain area is obtained by subtracting the flow leaving the area from the flow entering the area, and is divided into two types: people flow and vehicle flow;
step 3, distinguishing the types of traffic video monitoring images: dividing the traffic video monitoring images into three categories: vehicle photos, pedestrian photos, and photos containing both pedestrians and vehicles; if a photo contains both vehicles and pedestrians, it is categorized as both a vehicle photo and a pedestrian photo;
step 4, improving the YOLOv3 network monitoring target: the anchor frame and receptive field mechanism of the YOLOv3 network are improved to detect pedestrians and vehicles respectively;
step 5, solving a people stream monitoring model: and (3) carrying the result of the YOLOv3 detection back to the people stream monitoring problem, and solving the people stream monitoring result.
Further, the division of different regions in step 1 can be expressed as:
different areas of a city are divided and expressed as R = {r_1, r_2, r_3, ..., r_n}; images are collected for each area r_i to establish a training sample set D_Ri and a test sample set T_Ri.
Further, the modeling of the people stream monitoring problem in step 2 can be expressed as:
dividing the people stream monitoring problem into people flow monitoring and vehicle flow monitoring. For a certain area r_i, the people flow can be expressed as P(r_i) = P_in(r_i) - P_out(r_i): when a pedestrian enters the area, the inflow P_in(r_i) superimposed for that time period is incremented; when a pedestrian leaves the area, the outflow P_out(r_i) superimposed for that time period is incremented. For a certain area r_i, the vehicle flow can be expressed as V(r_i) = V_in(r_i) - V_out(r_i): when a vehicle drives into the area, the inflow V_in(r_i) for that time period is incremented; when a vehicle drives out of the area, the outflow V_out(r_i) is incremented.
Further, in the step 3, the distinguishing traffic video monitoring image types is specifically described as follows:
the test sample set T_Ri is classified by the kNN algorithm into three categories: pedestrians, vehicles, and both pedestrians and vehicles. For each test sample in T_Ri, the k nearest samples in the training sample set D_Ri are found; if the majority of these k training samples belong to a certain class, the test sample is assigned to that class. Distance is measured using the Euclidean distance:
L(x_i, x_j) = sqrt( Σ_l (x_i^(l) - x_j^(l))² ) (1)
where L(x_i, x_j) is the Euclidean distance between image samples x_i and x_j, x_i^(l) is the feature value of the l-th dimension of x_i, and x_j^(l) is the feature value of the l-th dimension of x_j.
Further, the improved YOLOv3 network monitoring target in step 4 is specifically described as:
two YOLOv3 models are built to detect pedestrians and vehicles respectively, each model comprising three parts: a backbone module, a feature fusion module and a prediction module. The backbone module extracts rich information from the input image, the feature fusion module uses shortcut links to concatenate feature maps of different abstraction scales, and finally the prediction module predicts the detection results from local and global information.
In order to reduce the regression difficulty of the detection frame, cluster centers on the training set are obtained through K-means and the anchor frames of the original YOLOv3 network are reset, using the distance measure of the following formula:
d=1-IOU(b,a) (2)
b and a represent the label box and the cluster-center box respectively, and d measures the degree of non-overlap between them: the smaller d is, the higher the overlap between the label box and the cluster-center box. The clustering is run for 80 iterations on the training data to obtain 5 cluster centers, which are used to set all the YOLO anchor frames: the 2 smaller anchor frames are used for the YOLO layer corresponding to the large-resolution feature map, and the 3 larger anchor frames for the YOLO layer corresponding to the small-resolution feature map.
The backbone network extracts effective image features for prediction, and the quality of these features affects the final output. To learn richer features and improve the learning capacity of the backbone module, a multi-receptive-field mechanism is added to it: to obtain features with different receptive fields, CBLP modules with convolution kernel sizes of 1×1 and 5×5 are connected in parallel with the original CBLP module. The mapping relations of the CBLP module and the multi-receptive-field module are expressed as:
x_CBLP = H(x_i) (3)
x_multi = H_1(x_i) + H_3(x_i) + H_5(x_i) (4)
where x_CBLP and x_multi represent the outputs of the CBLP module and the multi-receptive-field module respectively; H_1(·), H_3(·) and H_5(·) are the mappings with convolution kernel sizes 1×1, 3×3 and 5×5 respectively; and x_i represents the input feature map. The YOLOv3 networks are trained with the training sample sets, yielding two improved YOLOv3 networks capable of detecting pedestrians and vehicles.
Further, the solving of the people stream monitoring result in the step 5 is specifically described as follows:
the test sample images classified in step 3 are detected by the improved YOLOv3 networks trained in step 4 for detecting pedestrians and vehicles respectively. When a pedestrian enters area r_i, the entering count P_in(r_i) is incremented; when a pedestrian leaves area r_i, the leaving count P_out(r_i) is incremented; when a vehicle drives into area r_i, the entering count V_in(r_i) is incremented; when a vehicle drives out of area r_i, the leaving count V_out(r_i) is incremented. Finally the people flow P(r_i) and the vehicle flow V(r_i) of area r_i are obtained.
The people stream detection method based on improved YOLOv3 has the following beneficial technical effects:
1. the kNN algorithm is used to pre-classify the test set, reducing the detection time of the detection model and improving its accuracy;
2. to reduce the regression difficulty of the detection frame, the anchor frames of the YOLOv3 model are adjusted using the cluster centers obtained by K-means;
3. a multi-receptive-field mechanism is added to the backbone module of the YOLOv3 model to learn rich features, improving the learning capacity of the backbone module;
4. the invention can accurately detect people flow and vehicle flow information, providing an important technical means for people flow detection.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention is described in further detail below with reference to the attached drawings and detailed description:
the invention provides a people flow detection method based on improved YOLOv3, which aims to improve the accuracy of people flow monitoring, reduce the regression difficulty of a YOLOv3 model detection frame and improve the learning capacity of a backbone module. FIG. 1 is a flow chart of the present invention. The steps of the present invention will be described in detail with reference to the flow charts.
Step 1, obtaining experimental data: dividing different areas and capturing photos of pedestrians and vehicles in the different areas from traffic monitoring video;
the division of the different regions in step 1 can be expressed as:
different areas of a city are divided and expressed as R = {r_1, r_2, r_3, ..., r_n}; images are collected for each area r_i to establish a training sample set D_Ri and a test sample set T_Ri.
Step 2, modeling the people stream monitoring problem: the flow in a certain area is obtained by subtracting the flow leaving the area from the flow entering the area, and is divided into two types: people flow and vehicle flow;
the modeling of the people stream monitoring problem in step 2 can be expressed as:
dividing the people stream monitoring problem into people flow monitoring and vehicle flow monitoring. For a certain area r_i, the people flow can be expressed as P(r_i) = P_in(r_i) - P_out(r_i): when a pedestrian enters the area, the inflow P_in(r_i) superimposed for that time period is incremented; when a pedestrian leaves the area, the outflow P_out(r_i) superimposed for that time period is incremented. For a certain area r_i, the vehicle flow can be expressed as V(r_i) = V_in(r_i) - V_out(r_i): when a vehicle drives into the area, the inflow V_in(r_i) for that time period is incremented; when a vehicle drives out of the area, the outflow V_out(r_i) is incremented.
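As a concrete illustration, the per-region in-minus-out bookkeeping described above can be sketched as a small counter structure; the class and field names below are illustrative stand-ins, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class RegionFlow:
    """Per-region counters for step 2: net flow = count entering - count leaving.
    Field names are illustrative stand-ins for the patent's notation."""
    people_in: int = 0
    people_out: int = 0
    vehicles_in: int = 0
    vehicles_out: int = 0

    def people_flow(self) -> int:
        # people flow of the region = pedestrian inflow - pedestrian outflow
        return self.people_in - self.people_out

    def vehicle_flow(self) -> int:
        # vehicle flow of the region = vehicle inflow - vehicle outflow
        return self.vehicles_in - self.vehicles_out

r1 = RegionFlow()
r1.people_in += 7     # 7 pedestrians observed entering the area
r1.people_out += 3    # 3 pedestrians observed leaving
r1.vehicles_in += 4   # 4 vehicles driving in
r1.vehicles_out += 4  # 4 vehicles driving out
print(r1.people_flow(), r1.vehicle_flow())  # → 4 0
```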
Step 3, distinguishing the types of traffic video monitoring images: the traffic video monitoring images are divided into three categories: vehicle pictures, pedestrian pictures and pictures containing both pedestrians and vehicles; if a photo contains both vehicles and pedestrians, it is categorized as both a vehicle picture and a pedestrian picture;
and 3, distinguishing the types of traffic video monitoring images is specifically described as follows:
the test sample set T_Ri is classified by the kNN algorithm into three categories: pedestrians, vehicles, and both pedestrians and vehicles. For each test sample in T_Ri, the k nearest samples in the training sample set D_Ri are found; if the majority of these k training samples belong to a certain class, the test sample is assigned to that class. Distance is measured using the Euclidean distance:
L(x_i, x_j) = sqrt( Σ_l (x_i^(l) - x_j^(l))² ) (1)
where L(x_i, x_j) is the Euclidean distance between image samples x_i and x_j, x_i^(l) is the feature value of the l-th dimension of x_i, and x_j^(l) is the feature value of the l-th dimension of x_j.
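The kNN classification of step 3 can be sketched as follows, using the Euclidean distance defined above; the toy feature vectors and the value of k are made-up assumptions for illustration:

```python
import numpy as np

def knn_classify(test_x, train_X, train_y, k=5):
    """Classify one feature vector by majority vote of its k nearest
    training samples under the Euclidean distance."""
    # L(x_i, x_j) = sqrt(sum_l (x_i^(l) - x_j^(l))^2)
    dists = np.sqrt(((train_X - test_x) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_y)[nearest], return_counts=True)
    return labels[np.argmax(counts)]  # majority class among the k neighbours

# Toy 2-D features: 0 = pedestrian, 1 = vehicle, 2 = both
train_X = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0], [0.5, 0.5]])
train_y = [0, 0, 1, 1, 2]
print(knn_classify(np.array([0.05, 0.05]), train_X, train_y, k=3))  # → 0
```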
Step 4, improving the YOLOv3 network monitoring target: the anchor frame and receptive field mechanism of the YOLOv3 network are improved to detect pedestrians and vehicles respectively;
the improved YOLOv3 network monitoring target in step 4 is specifically described as follows:
two YOLOv3 models are built to detect pedestrians and vehicles respectively, each model comprising three parts: a backbone module, a feature fusion module and a prediction module. The backbone module extracts rich information from the input image, the feature fusion module uses shortcut links to concatenate feature maps of different abstraction scales, and finally the prediction module predicts the detection results from local and global information.
In order to reduce the regression difficulty of the detection frame, cluster centers on the training set are obtained through K-means and the anchor frames of the original YOLOv3 network are reset, using the distance measure of the following formula:
d=1-IOU(b,a) (2)
b and a represent the label box and the cluster-center box respectively, and d measures the degree of non-overlap between them: the smaller d is, the higher the overlap between the label box and the cluster-center box. The clustering is run for 80 iterations on the training data to obtain 5 cluster centers, which are used to set all the YOLO anchor frames: the 2 smaller anchor frames are used for the YOLO layer corresponding to the large-resolution feature map, and the 3 larger anchor frames for the YOLO layer corresponding to the small-resolution feature map.
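The anchor-frame resetting can be sketched as K-means over label-box (width, height) pairs with the distance d = 1 - IOU(b, a) of formula (2); the box data below is made up purely for illustration:

```python
import random

def iou_wh(box, center):
    """IoU of two (w, h) boxes aligned at a common origin, as used in
    YOLO-style anchor clustering."""
    w1, h1 = box
    w2, h2 = center
    inter = min(w1, w2) * min(h1, h2)
    return inter / (w1 * h1 + w2 * h2 - inter)

def kmeans_anchors(boxes, k=5, iters=80, seed=0):
    """K-means over label-box (w, h) pairs with distance d = 1 - IoU(b, a)."""
    random.seed(seed)
    centers = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # assign each label box to the cluster center with smallest d
            j = min(range(k), key=lambda j: 1 - iou_wh(b, centers[j]))
            clusters[j].append(b)
        for j, cl in enumerate(clusters):
            if cl:  # move each center to the mean (w, h) of its cluster
                centers[j] = (sum(w for w, _ in cl) / len(cl),
                              sum(h for _, h in cl) / len(cl))
    return sorted(centers, key=lambda c: c[0] * c[1])

# Made-up label boxes; real ones would come from the training annotations.
boxes = [(10, 14), (12, 16), (33, 40), (30, 45), (62, 80), (60, 75),
         (116, 150), (120, 140), (200, 260), (210, 250)]
anchors = kmeans_anchors(boxes, k=5)
print(anchors)  # 5 (w, h) anchors sorted by area: the 2 smaller for the
                # large feature map, the 3 larger for the small feature map
```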
The backbone network extracts effective image features for prediction, and the quality of these features affects the final output. To learn richer features and improve the learning capacity of the backbone module, a multi-receptive-field mechanism is added to it: to obtain features with different receptive fields, CBLP modules with convolution kernel sizes of 1×1 and 5×5 are connected in parallel with the original CBLP module. The mapping relations of the CBLP module and the multi-receptive-field module are expressed as:
x_CBLP = H(x_i) (3)
x_multi = H_1(x_i) + H_3(x_i) + H_5(x_i) (4)
where x_CBLP and x_multi represent the outputs of the CBLP module and the multi-receptive-field module respectively; H_1(·), H_3(·) and H_5(·) are the mappings with convolution kernel sizes 1×1, 3×3 and 5×5 respectively; and x_i represents the input feature map. The YOLOv3 networks are trained with the training sample sets, yielding two improved YOLOv3 networks capable of detecting pedestrians and vehicles.
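A minimal numeric sketch of the multi-receptive-field sum x_multi = H_1(x) + H_3(x) + H_5(x) of formula (4): here the three "mappings" are plain single-channel convolutions with placeholder kernels rather than trained CBLP modules, so only the shapes and the parallel-branch structure carry over:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 2-D convolution with zero padding so the output keeps x's shape."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * kernel).sum()
    return out

def multi_receptive_field(x, k1, k3, k5):
    """x_multi = H1(x) + H3(x) + H5(x): parallel 1x1, 3x3 and 5x5 branches
    summed element-wise. Kernel weights here are placeholders."""
    return conv2d_same(x, k1) + conv2d_same(x, k3) + conv2d_same(x, k5)

x = np.arange(36, dtype=float).reshape(6, 6)  # toy single-channel feature map
k1 = np.ones((1, 1))
k3 = np.ones((3, 3)) / 9    # 3x3 averaging kernel
k5 = np.ones((5, 5)) / 25   # 5x5 averaging kernel
y = multi_receptive_field(x, k1, k3, k5)
print(y.shape)  # (6, 6) — same spatial size, mixed receptive fields
```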
Step 5, solving a people stream monitoring model: carrying back the result of the YOLOv3 detection to the people stream monitoring problem, and solving the people stream monitoring result;
the solving of the people stream monitoring result in the step 5 is specifically described as follows:
the test sample images classified in step 3 are detected by the improved YOLOv3 networks trained in step 4 for detecting pedestrians and vehicles respectively. When a pedestrian enters area r_i, the entering count P_in(r_i) is incremented; when a pedestrian leaves area r_i, the leaving count P_out(r_i) is incremented; when a vehicle drives into area r_i, the entering count V_in(r_i) is incremented; when a vehicle drives out of area r_i, the leaving count V_out(r_i) is incremented. Finally the people flow P(r_i) and the vehicle flow V(r_i) of area r_i are obtained.
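Putting steps 3 to 5 together, the counting loop can be sketched as below; classify_image, detect_people and detect_vehicles are hypothetical stand-ins for the kNN classifier and the two improved YOLOv3 detectors, and the 'in'/'out' labels stand in for crossing direction relative to the region boundary:

```python
def solve_region_flows(images, classify_image, detect_people, detect_vehicles):
    """Route each image to the right detector(s) by its kNN category,
    accumulate enter/leave counts, and return net (people, vehicle) flow."""
    people_in = people_out = vehicles_in = vehicles_out = 0
    for img in images:
        category = classify_image(img)          # step 3: kNN pre-classification
        if category in ('pedestrian', 'both'):  # step 4: pedestrian detector
            for direction in detect_people(img):
                if direction == 'in':
                    people_in += 1
                else:
                    people_out += 1
        if category in ('vehicle', 'both'):     # step 4: vehicle detector
            for direction in detect_vehicles(img):
                if direction == 'in':
                    vehicles_in += 1
                else:
                    vehicles_out += 1
    # step 5: net flow = entering minus leaving
    return people_in - people_out, vehicles_in - vehicles_out

# Toy run with dummy classifier/detectors standing in for trained models
imgs = ['a', 'b', 'c']
cls = lambda im: {'a': 'pedestrian', 'b': 'vehicle', 'c': 'both'}[im]
ped = lambda im: ['in', 'in'] if im != 'b' else []
veh = lambda im: ['in', 'out'] if im != 'a' else []
print(solve_region_flows(imgs, cls, ped, veh))  # → (4, 0)
```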
The above description is only of the preferred embodiment of the present invention, and is not intended to limit the present invention in any other way, but is intended to cover any modifications or equivalent variations according to the technical spirit of the present invention, which fall within the scope of the present invention as defined by the appended claims.

Claims (4)

1. A people stream detection method based on improved YOLOv3, comprising the following specific steps:
step 1, obtaining experimental data: dividing different areas and capturing photos of pedestrians and vehicles in the different areas from traffic monitoring video;
step 2, modeling the people stream monitoring problem: the flow in a certain area is obtained by subtracting the flow leaving the area from the flow entering the area, and is divided into two types: people flow and vehicle flow;
step 3, distinguishing the types of traffic video monitoring images: dividing the traffic video monitoring images into three categories: vehicle photos, pedestrian photos, and photos containing both pedestrians and vehicles; if a photo contains both vehicles and pedestrians, it is categorized as both a vehicle photo and a pedestrian photo;
and 3, distinguishing the types of traffic video monitoring images is specifically described as follows:
the test sample set T_Ri is classified by the kNN algorithm into: pedestrians, vehicles, and both pedestrians and vehicles; for each test sample in T_Ri, the k nearest samples in the training sample set D_Ri are found; if the majority of these k training samples belong to a certain class, the test sample also belongs to that class, with distance measured using the Euclidean distance:
L(x_i, x_j) = sqrt( Σ_l (x_i^(l) - x_j^(l))² ) (1)
where L(x_i, x_j) is the Euclidean distance between image samples x_i and x_j, x_i^(l) is the feature value of the l-th dimension of x_i, and x_j^(l) is the feature value of the l-th dimension of x_j;
step 4, improving the YOLOv3 network monitoring target: the anchor frame and receptive field mechanism of the YOLOv3 network are improved to detect pedestrians and vehicles respectively;
the improved YOLOv3 network monitoring target in step 4 is specifically described as follows:
two YOLOv3 models are built to detect pedestrians and vehicles respectively, each model comprising three parts: a backbone module, a feature fusion module and a prediction module; in order to reduce the regression difficulty of the detection frame, cluster centers on the training set are obtained through K-means and the anchor frames of the original YOLOv3 network are reset, using the distance measure of the following formula:
d=1-IOU(b,a) (2)
b and a represent the label box and the cluster-center box respectively, and d measures the degree of non-overlap between them: the smaller d is, the higher the overlap between the label box and the cluster-center box; all YOLO anchor frames are set using the cluster centers, with the 2 smaller anchor frames used for the YOLO layer corresponding to the large-resolution feature map and the 3 larger anchor frames used for the YOLO layer corresponding to the small-resolution feature map;
a multi-receptive-field mechanism is added to the backbone module to learn rich features and improve its learning capacity; to obtain features with different receptive fields, CBLP modules with convolution kernel sizes of 1×1 and 5×5 are connected in parallel with the original CBLP module, the mapping relations of the CBLP module and the multi-receptive-field module being expressed as:
x_CBLP = H(x_i) (3)
x_multi = H_1(x_i) + H_3(x_i) + H_5(x_i) (4)
where x_CBLP and x_multi represent the outputs of the CBLP module and the multi-receptive-field module respectively; H_1(·), H_3(·) and H_5(·) are the mappings with convolution kernel sizes 1×1, 3×3 and 5×5 respectively; x_i represents the input feature map; the YOLOv3 networks are trained with the training sample sets to obtain two improved YOLOv3 networks for detecting pedestrians and vehicles;
step 5, solving a people stream monitoring model: and (3) carrying the result of the YOLOv3 detection back to the people stream monitoring problem, and solving the people stream monitoring result.
2. The improved YOLOv 3-based people stream detection method of claim 1, wherein: the division of different areas in step 1 is expressed as:
different areas of a city are divided and expressed as R = {r_1, r_2, r_3, ..., r_n}; images are collected for each area r_i to establish a training sample set D_Ri and a test sample set T_Ri.
3. The improved YOLOv 3-based people stream detection method of claim 1, wherein: in the step 2, the modeling of the people flow monitoring problem is expressed as follows:
dividing the people stream monitoring problem into people flow monitoring and vehicle flow monitoring; for a certain area r_i, the people flow can be expressed as P(r_i) = P_in(r_i) - P_out(r_i): when a pedestrian enters the area, the inflow P_in(r_i) superimposed at time t is incremented; when a pedestrian leaves the area, the outflow P_out(r_i) superimposed for that time period is incremented; for a certain area r_i, the vehicle flow can be expressed as V(r_i) = V_in(r_i) - V_out(r_i): when a vehicle drives into the area, the inflow V_in(r_i) for that time period is incremented; when a vehicle drives out of the area, the outflow V_out(r_i) is incremented.
4. The improved YOLOv 3-based people stream detection method of claim 1, wherein: the solving of the people stream monitoring result in the step 5 is specifically described as follows:
the test sample images classified in step 3 are detected by the improved YOLOv3 networks trained in step 4 for detecting pedestrians and vehicles respectively; when a pedestrian enters area r_i, the entering count P_in(r_i) is incremented; when a pedestrian leaves area r_i, the leaving count P_out(r_i) is incremented; when a vehicle drives into area r_i, the entering count V_in(r_i) is incremented; when a vehicle drives out of area r_i, the leaving count V_out(r_i) is incremented; finally the people flow P(r_i) and the vehicle flow V(r_i) of area r_i are obtained.
CN202011236196.6A 2020-11-09 2020-11-09 People stream detection method based on improved YOLOv3 Active CN112347938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011236196.6A CN112347938B (en) 2020-11-09 2020-11-09 People stream detection method based on improved YOLOv3

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011236196.6A CN112347938B (en) 2020-11-09 2020-11-09 People stream detection method based on improved YOLOv3

Publications (2)

Publication Number Publication Date
CN112347938A CN112347938A (en) 2021-02-09
CN112347938B true CN112347938B (en) 2023-09-26

Family

ID=74429158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011236196.6A Active CN112347938B (en) 2020-11-09 2020-11-09 People stream detection method based on improved YOLOv3

Country Status (1)

Country Link
CN (1) CN112347938B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887634B (en) * 2021-10-08 2024-05-28 齐丰科技股份有限公司 Electric safety belt detection and early warning method based on improved two-step detection
CN114530039A (en) * 2022-01-27 2022-05-24 浙江梧斯源通信科技股份有限公司 Real-time detection device and method for pedestrian flow and vehicle flow at intersection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796168A (en) * 2019-09-26 2020-02-14 江苏大学 Improved YOLOv 3-based vehicle detection method
CN111046787A (en) * 2019-12-10 2020-04-21 华侨大学 Pedestrian detection method based on improved YOLO v3 model
CN111626128A (en) * 2020-04-27 2020-09-04 江苏大学 Improved YOLOv 3-based pedestrian detection method in orchard environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796168A (en) * 2019-09-26 2020-02-14 江苏大学 Improved YOLOv 3-based vehicle detection method
CN111046787A (en) * 2019-12-10 2020-04-21 华侨大学 Pedestrian detection method based on improved YOLO v3 model
CN111626128A (en) * 2020-04-27 2020-09-04 江苏大学 Improved YOLOv 3-based pedestrian detection method in orchard environment

Also Published As

Publication number Publication date
CN112347938A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
Rizwan et al. Real-time smart traffic management system for smart cities by using Internet of Things and big data
Morris et al. Real-time video-based traffic measurement and visualization system for energy/emissions
CN109858389B (en) Vertical ladder people counting method and system based on deep learning
CN103871077B (en) A kind of extraction method of key frame in road vehicles monitoring video
CN112347938B (en) People stream detection method based on improved YOLOv3
WO2021082464A1 (en) Method and device for predicting destination of vehicle
CN109409242A (en) A kind of black smoke vehicle detection method based on cyclic convolution neural network
CN116824859B (en) Intelligent traffic big data analysis system based on Internet of things
CN104239905A (en) Moving target recognition method and intelligent elevator billing system having moving target recognition function
CN114170580A (en) Highway-oriented abnormal event detection method
Khan et al. Deep-learning based vehicle count and free parking slot detection system
CN110674887A (en) End-to-end road congestion detection algorithm based on video classification
CN116434159A (en) Traffic flow statistics method based on improved YOLO V7 and Deep-Sort
CN106529405A (en) Local anomaly behavior detection method based on video image block model
CN117456482B (en) Abnormal event identification method and system for traffic monitoring scene
CN110610118A (en) Traffic parameter acquisition method and device
CN110765900A (en) DSSD-based automatic illegal building detection method and system
Xu et al. Deep learning based vehicle violation detection system
Chaturvedi et al. Detection of traffic rule violation in University campus using deep learning model
CN111627224A (en) Vehicle speed abnormality detection method, device, equipment and storage medium
Deliali et al. A framework for mode classification in multimodal environments using radar-based sensors
Pan et al. Identifying Vehicles Dynamically on Freeway CCTV Images through the YOLO Deep Learning Model.
CN112802333A (en) AI video analysis-based highway network safety situation analysis system and method
Zhuo et al. Traffic congestion detection based on the image classification with cnn
Huu et al. Proposing a route recommendation algorithm for vehicles based on receiving video

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant