CN113327248B - Tunnel traffic flow statistical method based on video - Google Patents


Info

Publication number
CN113327248B
CN113327248B (application CN202110885426.XA)
Authority
CN
China
Prior art keywords
vehicle
detection
frame
traffic flow
vehicles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110885426.XA
Other languages
Chinese (zh)
Other versions
CN113327248A (en)
Inventor
张蓉
申莲莲
邓承刚
叶琳
龚绍杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jiutong Zhilu Technology Co ltd
Original Assignee
Sichuan Jiutong Zhilu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Jiutong Zhilu Technology Co ltd filed Critical Sichuan Jiutong Zhilu Technology Co ltd
Priority to CN202110885426.XA
Publication of CN113327248A
Application granted
Publication of CN113327248B
Legal status: Active

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/23213: Non-hierarchical clustering with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification based on parametric or probabilistic models, e.g. likelihood ratio
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/08: Neural network learning methods
    • G06V 10/44: Local feature extraction (edges, contours, corners); connectivity analysis
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/20076: Probabilistic image processing
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30232: Surveillance
    • G06T 2207/30236: Traffic on road, railway or crossing
    • G06T 2207/30242: Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of vehicle detection and discloses a video-based tunnel traffic flow statistical method comprising the following steps: A. making a data set; B. dividing the data set; C. constructing a vehicle detection model based on an improved yolov3; D. training the vehicle recognition model; D1. vehicle tracking; D2. traffic flow statistics. The improved yolov3 network detects vehicles in the current frame, which avoids the noise introduced when inter-frame information is used for vehicle identification. Through feature extraction, vehicles can be accurately predicted from the trained weights and offsets; the method is hardly affected by vehicle speed, is highly robust, and achieves high detection and recognition accuracy.

Description

Tunnel traffic flow statistical method based on video
Technical Field
The application relates to the technical field of vehicle detection, in particular to a tunnel traffic flow statistical method based on videos.
Background
Current traffic flow statistics are based mainly on ultrasonic detection, induction coil detection, microwave detection and video detection. Ultrasonic equipment is small and easy to install, but its performance degrades with ambient temperature and airflow. Induction coil detection offers standardized equipment and high detection precision, but installation requires digging up the road surface for embedding, which blocks traffic and shortens the pavement's service life. Microwave detection is simple to install, does not damage the road surface, works in all weather and has strong anti-interference capability, but it places high demands on terrain: it must be installed on flat ground with no hills or other obstacles at the roadside. Video detection combines computer technology, video image processing and artificial intelligence to analyze video, detect and track specific targets in a scene, and trigger emergency plans in time when an abnormal event is detected. Compared with the other technologies, video detection is therefore easy to install and maintain, does not damage the road surface, provides real-time images of the traffic scene so that staff can verify abnormal events, and can be extended with different built-in algorithms.
Tunnel traffic flow statistics count the vehicles passing a point on the roadway within a certain time, using a camera installed in the tunnel. Video-based traffic flow statistics are currently realized mainly by combining vehicle detection with target tracking. Common vehicle detection methods include the optical flow method, the frame difference method and the background difference method. The core of the optical flow method is computing the optical flow, i.e. the velocity, of a moving object; detection based on optical flow exploits the time-varying optical flow characteristics of the moving target to extract and track it effectively. The frame difference method analyzes the differences between adjacent frames of an image sequence and can quickly separate the target from the background. The background difference method compares the current frame against a vehicle-free background frame to detect the complete vehicle information, but the background model must be rebuilt when lighting changes or camera shake alters the background.
Common target tracking methods include the mean shift algorithm, Kalman filter based tracking and particle filter based tracking. Mean shift iteratively computes a shift mean, moves the point to a new position along the mean shift vector, and continues from the new position until a convergence condition is met; because the search window does not adapt to changes in target size during tracking, the vehicle is easily lost. Kalman filter based tracking assumes the object's motion model is Gaussian, predicts the target's motion state, and then updates it from the error against the observation model; its accuracy is not very high. The particle filter is a sequential Monte Carlo method whose core is to approximate the posterior probability distribution of the state by a set of randomly drawn samples (particles); particle degeneracy is an unavoidable phenomenon in particle filter algorithms.
Disclosure of Invention
To address these problems and shortcomings of the prior art, the application provides a video-based tunnel traffic flow statistical method: the traditional basic yolov3 model is improved and combined with a virtual detection line and a lane-dividing line, realizing traffic flow statistics in tunnel scenes with high detection efficiency and good real-time performance.
In order to achieve the above object, the technical solution of the present application is as follows:
a tunnel traffic flow statistical method based on videos specifically comprises the following steps:
step A, data set making: acquire a number of video images containing vehicles in the tunnel, convert the collected videos into pictures using Python, and label each picture with the labelImg tool to obtain original pictures and label data; during labeling, vehicles are divided into three categories: cars, trucks and buses;
step B, data set division: divide the pictures labeled in step A into a training set and a test set at a ratio of 6:4;
step C, constructing an improved yolov3-based vehicle detection model: the traditional basic yolov3 model is improved by changing the K-means clustering of the prior anchor boxes to a two-step clustering method, first using the BIRCH clustering algorithm to generate 9 cluster centers as initial points for the anchor boxes, then using the K-means clustering algorithm to obtain the final anchor box sizes, yielding the improved yolov3-based vehicle detection model; wherein anchor boxes denote the prior anchor frames;
step D, vehicle recognition model training: train the improved yolov3-based vehicle detection model constructed in step C on the training set and test it on the test set to obtain the final improved yolov3-based vehicle detection model, which is then used for vehicle tracking detection and traffic flow statistics;
d1, tracking the vehicle, extracting the features of the input image by the feature extraction network, outputting the center coordinates, width, height, confidence and classification results of the vehicle detection frame, comparing the position information of the vehicle detected by the current frame with the position information of all the vehicles detected by the previous frame, and judging whether the vehicles are the same vehicle;
and D2, counting the traffic flow, setting a virtual detection line and a distinguishing lane line, and obtaining the traffic flow corresponding to the lane by judging the relationship between the vehicle detection frame and the virtual detection line and the distinguishing lane line.
Further, in step C, the BIRCH clustering algorithm scans the whole data set to build a clustering feature (CF) tree, then clusters the leaf nodes of the CF tree to obtain 9 cluster centers serving as the initial center points of the K-means algorithm, which finally yields the anchor box sizes.
Further, in the improved yolov3-based vehicle detection model, the input picture size is 416 × 416 pixels, and the feature extraction network extracts feature maps at three scales: 13 × 13, 26 × 26 and 52 × 52 pixels.
Further, the step D1 is specifically as follows:
d1.1, initially, giving unique ID numbers to all detected vehicles in the frame;
D1.2, comparing the position information of vehicles detected in the current frame one by one with that of all vehicles in the previous frame: if the center coordinates of a vehicle's detection frame in the current frame fall inside the detection frame of a vehicle in the previous frame, the two are judged to be the same vehicle, and the center coordinates, width, height and classification result of that ID's detection frame are updated; if the center coordinates fall inside no previous-frame detection frame, the vehicle is marked as new and given a new ID number.
Further, the step D2 is specifically as follows:
d2.1, initially, setting a virtual detection line and distinguishing lane lines on a lane;
and D2.2, when a vehicle passes the virtual detection line: if the center coordinate of its detection frame is on the left of the lane line, the down-direction traffic count is incremented by one; otherwise the up-direction traffic count is incremented by one; finally the center coordinate of the vehicle's detection frame is updated.
Further, in the step a, in the process of labeling the pictures, the pictures not including the vehicle are deleted.
Beneficial effects of this application:
(1) The improved yolov3 network model detects vehicles in the current frame, which avoids the noise introduced when inter-frame information is used for vehicle identification. Through feature extraction, vehicles are accurately predicted from the trained weights and offsets; the method is hardly affected by vehicle speed, is highly robust, and achieves high overall detection and recognition accuracy.
(2) The method improves the anchor box clustering of the traditional yolov3 network model with a two-step scheme: the BIRCH clustering algorithm first generates 9 cluster centers as the initial anchor box points, which avoids K-means' sensitivity to randomly chosen initial values and to noise points, greatly improving the model's detection precision.
(3) Direction-resolved traffic flow statistics are realized via the lane-dividing line: by additionally setting a lane-dividing line that separates the up and down directions of travel, the method suits not only one-way tunnels but also two-way tunnels and expressways.
(4) Because vehicles are classified during the data labeling stage, the final statistics yield not only the total traffic flow of each lane but also the flow of each vehicle type, making the traffic flow statistics more comprehensive.
Drawings
The foregoing and following detailed description of the present application will become more apparent when read in conjunction with the following drawings, wherein:
FIG. 1 is a flow chart of the method of the present application;
FIG. 2 is a diagram illustrating the virtual detection lines and lane line differentiation.
Detailed Description
The technical solutions for achieving the objects of the present invention are further described below by specific examples, and it should be noted that the technical solutions claimed in the present application include, but are not limited to, the following examples.
Example 1
This embodiment discloses a video-based tunnel traffic flow statistical method, described with reference to Figures 1 and 2. Picture data are collected by a camera installed in the tunnel, and the stake (chainage) number at which the camera is installed identifies its position. The method comprises the following steps:
A. data set production
Acquire a number of video images containing vehicles in the tunnel, convert the collected videos into pictures using Python, and label each picture with the labelImg tool to obtain original pictures and label data. During labeling, vehicles are divided into three categories: cars, trucks and buses. The original pictures and the label data in xml format are finally placed under the JPEGImages and Annotations folders respectively; pictures containing no vehicle are deleted during labeling;
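As an illustrative sketch of the video-to-pictures step, the helper below saves every stride-th frame so consecutive pictures are not near-duplicates. The function names and the stride value are assumptions, not from the patent; OpenCV (cv2) is assumed installed for decoding and is imported lazily so the sampling logic stays testable without it.

```python
import os

def should_sample(frame_idx: int, stride: int) -> bool:
    """Keep every stride-th frame of the video."""
    return frame_idx % stride == 0

def extract_frames(video_path: str, out_dir: str, stride: int = 25) -> int:
    """Decode a tunnel video and write sampled frames as jpg pictures.

    Returns the number of pictures written. cv2 is an assumed dependency.
    """
    import cv2  # assumed available; imported lazily on purpose
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if should_sample(idx, stride):
            cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

The sampled pictures would then be labeled in labelImg as described above.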
B. data set partitioning
Divide the pictures labeled in step A into a training set and a test set at a ratio of 6:4, and put the resulting split files (e.g. Traffic_train.txt and Traffic_val.txt) into the Traffic_data\ImageSets\Main folder. The training set is used for model training and the test set for verifying model quality: if the model still performs well during verification, it is applied to actual traffic flow statistics; if the verification results are poor, the model is optimized and deployed only after meeting the usage standard;
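A minimal sketch of the 6:4 split, assuming the labeled pictures are identified by image ids; writing the ids into the split files (e.g. Traffic_train.txt) is left out, and all names here are illustrative.

```python
import random

def split_dataset(image_ids, train_ratio=0.6, seed=0):
    """Shuffle the labeled image ids and split them into train/test lists.

    A fixed seed keeps the split reproducible across runs.
    """
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]
```

Each returned list would then be written, one id per line, to the corresponding split file under ImageSets\Main.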
C. the basic yolov3 model is improved, and the improved yolov 3-based vehicle identification detection model is constructed
In the traditional yolov3 model, the anchor boxes are obtained by K-means clustering of the bounding boxes. Because the K-means result depends on the choice of initial values and is sensitive to noise points, detection precision and accuracy suffer when the model is later used for detection. The traditional yolov3 model is therefore improved: the K-means clustering of the prior anchor boxes of the basic yolov3 model is replaced by a two-step clustering method, first using the BIRCH clustering algorithm to generate 9 cluster centers as initial anchor box points, and finally using the K-means clustering algorithm to obtain the final anchor box sizes; wherein anchor boxes denote the prior anchor frames;
D. vehicle recognition model training
Train the improved yolov3-based vehicle detection model constructed in step C on the training set, select the best-performing model on the test set, and use that model for vehicle tracking detection and traffic flow statistics. Because the original yolov3 model detects 80 categories while only 3 vehicle categories need to be detected in this embodiment, the network configuration parameters are changed during training and the model is trained by transfer learning;
d1, vehicle tracking
First, the feature extraction network extracts features from the input image and outputs, for each vehicle detection frame, its center coordinates (x, y), width w, height h, confidence and classification result. The center coordinates give the position of the detection frame; the width and height give its size; the confidence indicates whether an object is present in the frame (1 for present, 0 for absent); the classification result gives the probability that the detected vehicle belongs to each class, and the class with the highest probability is taken as the final result. The center coordinates (x, y) of each vehicle detected in the current frame are then compared with the positions of all vehicles detected in the previous frame to judge whether they are the same vehicle. The anchor boxes are obtained by clustering the training samples; the vehicle detection frame output when the model finally detects a target is derived from one of these anchor boxes.
d2, traffic flow statistics
And setting a virtual detection line and distinguishing lane lines, and obtaining the traffic flow corresponding to the lane by judging the relationship between the vehicle detection frame and the virtual detection line and the distinguishing lane lines.
Example 2
This embodiment discloses a video-based tunnel traffic flow statistical method. The yolov3 model mainly comprises two parts, the Darknet-53 feature extraction network and the prediction network, with Darknet-53 producing the feature maps. On the basis of Embodiment 1, the feature extraction network extracts three feature maps of different scales from the input picture for prediction: 13 × 13, 26 × 26 and 52 × 52 (13 × 13 meaning the map is 13 pixels wide and 13 pixels high, and so on). In the vehicle detection model of this application, the picture input to the network is 416 × 416 × 3 (width 416 pixels, height 416 pixels, 3 color channels). Each scale predicts 3 target frames, for a total of 9 anchor boxes. A picture is first divided into K × K grid cells; if the center of an object falls in a cell, that cell is responsible for detecting the object. The tensor output for each feature map has size K × K × (3 × (4 + 1 + C)), comprising the 4 coordinates needed to determine a target frame, the confidence score, and the C object-category probabilities. Frames whose confidence score is below a threshold are then set to 0, and finally a non-maximum suppression (NMS) algorithm removes repeated bounding boxes, keeping the highest-scoring bounding box as the final prediction.
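The threshold-then-NMS post-processing described above can be sketched as follows; the box format (corner coordinates), threshold values and function names are illustrative assumptions rather than the patent's exact implementation.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, score_thresh=0.5, iou_thresh=0.45):
    """Drop low-confidence boxes, then greedily keep the highest-scoring box
    and suppress any remaining box that overlaps it too much."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Greedy suppression like this keeps exactly one box per physical vehicle when detections of the same vehicle overlap heavily.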
Further, because the anchor box sizes of the traditional yolov3 model are obtained by K-means clustering of the bounding boxes, and the K-means result depends on the choice of initial values and is sensitive to noise points, this application improves the clustering algorithm of the traditional yolov3 model with a two-step BIRCH plus K-means method: the BIRCH algorithm first scans the whole data set and builds a clustering feature tree, then clusters its leaf nodes to obtain 9 cluster centers as the initial center points of the K-means algorithm, and K-means finally yields the anchor box sizes. The specific process is as follows:
1) BIRCH takes the N frames of all labeled vehicles in the training data to obtain the widths and heights of all boxes; it first scans the whole data set and builds a clustering feature (CF) tree, then clusters using the clustering features in place of the original data, inserting leaf nodes as objects are added to form the CF tree;
a. starting from the root node and moving downward, find the leaf node closest to the new sample and, within it, the closest CF node;
b. if the radius of the hypersphere corresponding to the CF node is still smaller than the threshold T after the new sample is added, updating all CF triples on the path, ending the insertion, otherwise, switching to c;
c. if the number of the CF nodes of the current leaf node is less than the threshold value L, a new CF node is created, a new sample is put in, the new CF node is put in the leaf node, all CF triples on the path are updated, the insertion is finished, otherwise, the process goes to d;
d. split the current leaf node into two new leaf nodes: among all CF tuples in the old leaf node, select the two whose hyperspheres are farthest apart as the first CF nodes of the two new leaves, distribute the remaining tuples and the new sample tuple to the nearer leaf, then check upward whether the parent node must also be split, handling it the same way as the leaf node;
2) with K = 9, merge CF tuples by distance to finally obtain 9 cluster centers, which serve as the initial center points of K-means, i.e. the initial values of the 9 anchor boxes;
3) compute the IoU of each bounding box with each anchor box, and use d(n, k) = 1 - IoU(n, k) as the error between the n-th bounding box and the k-th anchor box;
4) for each bounding box n, compare its errors against all anchor boxes, {d(n, 1), d(n, 2), ..., d(n, 9)}, select the anchor box with the smallest error, and assign the bounding box to it; every bounding box is processed this way;
5) step 4) determines which bounding boxes each anchor box has; take the median width and median height of those bounding boxes as the anchor box's new size;
6) repeat steps 3)-5) until every bounding box is assigned to the same anchor box as in the previous iteration, giving the final anchor box sizes; wherein anchor box denotes the anchor frame.
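Steps 3)-6) above can be sketched as follows. The BIRCH stage of steps 1)-2), which supplies the 9 initial centers, is assumed to come from a library (e.g. scikit-learn's Birch run over the box widths and heights) and is not shown; only the IoU-distance K-means refinement with median updates appears, and the function names are illustrative.

```python
def box_iou_wh(wh1, wh2):
    """IoU of two boxes placed with a shared corner: for anchor clustering
    only widths and heights matter, not positions."""
    inter = min(wh1[0], wh2[0]) * min(wh1[1], wh2[1])
    union = wh1[0] * wh1[1] + wh2[0] * wh2[1] - inter
    return inter / union

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    return xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2

def kmeans_anchors(boxes, init_anchors, max_iter=100):
    """Refine anchor (w, h) sizes under distance d = 1 - IoU, updating each
    anchor to the median width/height of its assigned boxes, and stopping
    once assignments no longer change (step 6)."""
    anchors = [tuple(a) for a in init_anchors]
    assign_prev = None
    for _ in range(max_iter):
        # step 3)-4): assign each box to the anchor with smallest 1 - IoU
        assign = [max(range(len(anchors)),
                      key=lambda k: box_iou_wh(b, anchors[k])) for b in boxes]
        if assign == assign_prev:
            break
        assign_prev = assign
        # step 5): median update per anchor
        for k in range(len(anchors)):
            members = [b for b, a in zip(boxes, assign) if a == k]
            if members:
                anchors[k] = (median([w for w, _ in members]),
                              median([h for _, h in members]))
    return anchors
```

Seeding K-means with BIRCH centers instead of random picks is what removes the initial-value sensitivity the patent criticizes.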
Further, when traffic flow statistics are performed on a real scene, the trained model is first used to track and detect vehicles. Initially, each vehicle is assigned an ID number when it is detected, and the detected vehicle ID numbers are stored in a set S. Then, the center coordinates (x, y) of the detection frame of each vehicle newly detected in the current frame are compared one by one with the position information of all vehicles in the set S. If the center coordinates (x, y) of the detection frame of a certain vehicle in the current frame lie inside the detection frame of a certain vehicle in the previous frame, the two are judged to be the same vehicle, and the center coordinates, width, height and classification result of the detection frame of the vehicle with that ID number are updated; if the center coordinates (x, y) of the detection frame of a certain vehicle in the current frame do not lie inside the detection frame of any vehicle in the previous frame, the vehicle is marked as a new vehicle and is assigned a new ID number.
When the length of the set S exceeds a length threshold L, the first-in-first-out principle applies and the earliest-stored ID, denoted ID_first, is removed, i.e. S = S \ {ID_first}.
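The matching rule above — a track is continued when a new detection's center falls inside a box from the previous frame, otherwise a new ID is issued, with the ID set trimmed first-in-first-out at a length limit — can be sketched as follows. The names `tracks`, `MAX_IDS` and the tuple layout are illustrative, not from the patent text.

```python
from collections import OrderedDict

MAX_IDS = 50            # length threshold L of the ID set S (illustrative value)
tracks = OrderedDict()  # ID -> (cx, cy, w, h, cls): the set S, kept in insertion (FIFO) order
next_id = 0

def center_in_box(cx, cy, box):
    """True if point (cx, cy) lies inside a box given as (center_x, center_y, w, h, ...)."""
    bx, by, bw, bh = box[:4]
    return abs(cx - bx) <= bw / 2 and abs(cy - by) <= bh / 2

def update_tracks(detections):
    """detections: list of (cx, cy, w, h, cls) for the current frame."""
    global next_id
    for det in detections:
        cx, cy = det[0], det[1]
        for vid, box in tracks.items():
            if center_in_box(cx, cy, box):   # same vehicle: update its stored state
                tracks[vid] = det
                break
        else:                                # no match: new vehicle, assign a new ID
            tracks[next_id] = det
            next_id += 1
            if len(tracks) > MAX_IDS:        # FIFO: remove the earliest-stored ID
                tracks.popitem(last=False)
```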
Further, when counting the traffic flow, each time a vehicle passes through the virtual detection line it is first judged whether that vehicle has already been counted, and it is counted only if it has not; the center coordinates (x, y) of the vehicle detection frame are used to judge on which side of the distinguishing lane line the vehicle lies, the vehicles on the left and right sides of the lane line are counted separately, and finally the center coordinates of the vehicle detection frame are updated. The placement of the virtual detection line affects the accuracy of the vehicle statistics; therefore, in this embodiment, a virtual detection line L1 is set at a position where vehicles can be clearly seen, and at the same time a lane line L2 for distinguishing lanes is set on the road surface. When a vehicle passes through the virtual detection line and the center coordinates of its detection frame are on the left side of the lane line, the downstream traffic flow count is incremented by 1; otherwise, the upstream traffic flow count is incremented by 1.
The criterion for judging that a vehicle passes through the virtual detection line is:
y1 ≤ yL1 ≤ y2
wherein (x1, y1) are the coordinates of the upper-left corner of the vehicle detection frame, (x2, y2) are the coordinates of the lower-right corner of the vehicle detection frame, and yL1 is the vertical position of the virtual detection line L1.
The criterion for judging the vehicle's direction (upstream or downstream) is:
x < xL2: left side of the lane line (downstream); x ≥ xL2: right side of the lane line (upstream)
wherein (x, y) are the center coordinates of the vehicle detection frame and xL2 is the horizontal position of the lane line L2.
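The two criteria above can be combined into a per-vehicle counting routine as sketched below. The line positions `Y_L1` and `X_L2`, the `counted_ids` set, and the function name are illustrative assumptions; the patent specifies only the criteria themselves.

```python
Y_L1 = 400   # vertical position of the virtual detection line L1 (illustrative)
X_L2 = 320   # horizontal position of the distinguishing lane line L2 (illustrative)

counts = {"downstream": 0, "upstream": 0}
counted_ids = set()   # vehicles already counted, so each vehicle is counted once

def count_vehicle(vid, x1, y1, x2, y2):
    """(x1, y1) / (x2, y2): upper-left / lower-right corners of the detection frame."""
    cx = (x1 + x2) / 2
    crossing = y1 <= Y_L1 <= y2            # the frame straddles the detection line
    if crossing and vid not in counted_ids:
        counted_ids.add(vid)
        if cx < X_L2:                      # center left of the lane line: downstream
            counts["downstream"] += 1
        else:                              # center right of the lane line: upstream
            counts["upstream"] += 1
```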
The foregoing describes embodiments of the present invention, to which the invention is not limited; any simple modifications and equivalent substitutions made in accordance with the technical spirit of the present invention fall within the scope of the present invention.

Claims (5)

1. A video-based tunnel traffic flow statistical method, characterized by comprising the following steps:
step A, data set production: acquiring a plurality of video images containing vehicles in a tunnel, converting the collected video into pictures using Python, and labeling each picture with the labelImg tool to obtain original pictures and label data; during labeling, the vehicles are divided into three categories: cars, trucks and buses;
step B, data set division: dividing the pictures labeled in step A into a training set and a test set in a ratio of 6:4;
step C, constructing an improved yolov3-based vehicle identification detection model: the traditional basic yolov3 model is improved; in the improved model, the K-means clustering of the prior anchor boxes is improved by adopting a two-step clustering method, in which the BIRCH clustering algorithm is first used to generate 9 cluster centers as the initial points of the anchor boxes, and the K-means clustering algorithm is then used to obtain the final anchor box sizes, finally obtaining the improved yolov3-based vehicle identification detection model; wherein "anchor box" denotes the anchor frame;
step D, training the vehicle identification model: performing model training on the improved yolov3-based vehicle identification detection model constructed in step C using the training set, and testing on the test set to obtain the final improved yolov3-based vehicle identification detection model, which is used to realize vehicle tracking detection and traffic flow statistics;
step D1, vehicle tracking: the feature extraction network extracts features from the input image and outputs the center coordinates, width, height, confidence and classification result of each vehicle detection frame; the position information of the vehicles detected in the current frame is compared with the position information of all vehicles detected in the previous frame to judge whether they are the same vehicle;
step D2, traffic flow statistics: a virtual detection line and a distinguishing lane line are set, and the traffic flow of the corresponding lane is obtained by judging the relation between the vehicle detection frame and the virtual detection line and the distinguishing lane line;
step D2.1, initially setting a virtual detection line L1 and a distinguishing lane line L2 on the lane;
step D2.2, when a vehicle passes through the virtual detection line and the center coordinates of its detection frame are on the left side of the lane line, incrementing the downstream traffic flow count by one, otherwise incrementing the upstream traffic flow count by one, and finally updating the center coordinates of the vehicle detection frame;
the criterion for judging that a vehicle passes through the virtual detection line is:
y1 ≤ yL1 ≤ y2
wherein (x1, y1) are the coordinates of the upper-left corner of the vehicle detection frame, (x2, y2) are the coordinates of the lower-right corner of the vehicle detection frame, and yL1 is the vertical position of the virtual detection line L1;
the criterion for judging the vehicle's direction (upstream or downstream) is:
x < xL2: left side of the lane line (downstream); x ≥ xL2: right side of the lane line (upstream)
wherein (x, y) are the center coordinates of the vehicle detection frame and xL2 is the horizontal position of the lane line L2.
2. The video-based tunnel traffic flow statistical method according to claim 1, characterized in that: in step C, the BIRCH clustering algorithm first scans the whole data set to build a clustering feature tree, then clusters the leaf nodes of the clustering feature tree to obtain 9 cluster centers serving as the initial center points of the K-means clustering algorithm; the K-means clustering algorithm is then used to obtain the final anchor box sizes.
3. The video-based tunnel traffic flow statistical method according to claim 1, characterized in that: in the improved yolov3-based vehicle identification detection model, the input picture size is 416 x 416 pixels, and the feature extraction network extracts feature maps at three scales, namely 13 x 13, 26 x 26 and 52 x 52.
4. The video-based tunnel traffic flow statistical method according to claim 1, wherein step D1 is as follows:
step D1.1, initially, assigning a unique ID number to every vehicle detected in the frame;
step D1.2, comparing the position information of the vehicles detected in the current frame one by one with the position information of all vehicles in the previous frame; if the center coordinates of the detection frame of a certain vehicle in the current frame lie inside the detection frame of a certain vehicle in the previous frame, judging that the two are the same vehicle and updating the center coordinates, width, height and classification result of the detection frame of the vehicle with that ID number; if the center coordinates of the detection frame of a certain vehicle in the current frame do not lie inside the detection frame of any vehicle in the previous frame, marking it as a new vehicle and assigning it an ID number.
5. The video-based tunnel traffic flow statistical method according to claim 1, characterized in that: in the step A, in the process of marking the pictures, the pictures which do not contain the vehicles are deleted.
CN202110885426.XA 2021-08-03 2021-08-03 Tunnel traffic flow statistical method based on video Active CN113327248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110885426.XA CN113327248B (en) 2021-08-03 2021-08-03 Tunnel traffic flow statistical method based on video

Publications (2)

Publication Number Publication Date
CN113327248A CN113327248A (en) 2021-08-31
CN113327248B true CN113327248B (en) 2021-11-26

Family

ID=77426886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110885426.XA Active CN113327248B (en) 2021-08-03 2021-08-03 Tunnel traffic flow statistical method based on video

Country Status (1)

Country Link
CN (1) CN113327248B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113823094B (en) * 2021-11-17 2022-02-18 四川九通智路科技有限公司 Tunnel real-time monitoring management system and method based on traffic flow big data
CN114495509B (en) * 2022-04-08 2022-07-12 四川九通智路科技有限公司 Method for monitoring tunnel running state based on deep neural network
CN114973694B (en) * 2022-05-19 2024-05-24 杭州中威电子股份有限公司 Tunnel traffic flow monitoring system and method based on inspection robot

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366571A (en) * 2013-07-03 2013-10-23 河南中原高速公路股份有限公司 Intelligent method for detecting traffic accident at night
CN103456172A (en) * 2013-09-11 2013-12-18 无锡加视诚智能科技有限公司 Traffic parameter measuring method based on videos
CN103473926A (en) * 2013-09-11 2013-12-25 无锡加视诚智能科技有限公司 Gun-ball linkage road traffic parameter collection and rule breaking snapshooting system
CN103578281A (en) * 2012-08-02 2014-02-12 中兴通讯股份有限公司 Optimal control method and device for traffic artery signal lamps
CN104882005A (en) * 2015-05-15 2015-09-02 青岛海信网络科技股份有限公司 Method and device for detecting lane traffic flow
CN105243854A (en) * 2015-09-24 2016-01-13 侯文宇 Method and apparatus for detecting traffic flow on road
CN105321358A (en) * 2014-07-31 2016-02-10 段绍节 Urban road intersection and road traffic intelligent network real-time command system
CN107316472A (en) * 2017-07-28 2017-11-03 广州市交通规划研究院 A kind of dynamic coordinate control method towards the two-way different demands in arterial highway
CN107895492A (en) * 2017-10-24 2018-04-10 河海大学 A kind of express highway intelligent analysis method based on conventional video
CN110472496A (en) * 2019-07-08 2019-11-19 长安大学 A kind of traffic video intelligent analysis method based on object detecting and tracking
CN110570649A (en) * 2018-06-05 2019-12-13 高德软件有限公司 Method for detecting flow of motor vehicle, method for detecting working state of equipment and corresponding devices
US10628671B2 (en) * 2017-11-01 2020-04-21 Here Global B.V. Road modeling from overhead imagery
US10685252B2 (en) * 2018-10-30 2020-06-16 Here Global B.V. Method and apparatus for predicting feature space decay using variational auto-encoder networks
CN111554105A (en) * 2020-05-29 2020-08-18 浙江科技学院 Intelligent traffic identification and statistics method for complex traffic intersection

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087517B (en) * 2018-09-19 2021-02-26 山东大学 Intelligent signal lamp control method and system based on big data
CN109584558A (en) * 2018-12-17 2019-04-05 长安大学 A kind of traffic flow statistics method towards Optimization Control for Urban Traffic Signals
CN109859468A (en) * 2019-01-30 2019-06-07 淮阴工学院 Multilane traffic volume based on YOLOv3 counts and wireless vehicle tracking
CN109919072B (en) * 2019-02-28 2021-03-19 桂林电子科技大学 Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN109935080B (en) * 2019-04-10 2021-07-16 武汉大学 Monitoring system and method for real-time calculation of traffic flow on traffic line
AU2019101142A4 (en) * 2019-09-30 2019-10-31 Dong, Qirui MR A pedestrian detection method with lightweight backbone based on yolov3 network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Real-time Detection of Vehicle and Traffic Light for Intelligent and Connected Vehicles Based on YOLOv3 Network; Du, L. et al.; 2019 5th International Conference on Transportation Information and Safety (ICTIS); 2019-10-28; pp. 388-392 *
Research on Video Detection and Data Processing Technology of Urban Road Traffic Flow; Zhang Nan; China Master's Theses Full-text Database, Information Science and Technology; 2013-12-15 (No. S2); pp. I138-1000 *


Similar Documents

Publication Publication Date Title
CN113327248B (en) Tunnel traffic flow statistical method based on video
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN106935035B (en) Parking offense vehicle real-time detection method based on SSD neural network
WO2017156772A1 (en) Method of computing passenger crowdedness and system applying same
CN109816024A (en) A kind of real-time automobile logo detection method based on multi-scale feature fusion and DCNN
CN103871077B (en) A kind of extraction method of key frame in road vehicles monitoring video
CN112016605B (en) Target detection method based on corner alignment and boundary matching of bounding box
CN104463196A (en) Video-based weather phenomenon recognition method
CN109784392A (en) A kind of high spectrum image semisupervised classification method based on comprehensive confidence
CN102087790B (en) Method and system for low-altitude ground vehicle detection and motion analysis
CN104978567A (en) Vehicle detection method based on scenario classification
CN107590486B (en) Moving object identification method and system, and bicycle flow statistical method and equipment
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
CN109934170B (en) Mine resource statistical method based on computer vision
CN103136534A (en) Method and device of self-adapting regional pedestrian counting
CN114170580A (en) Highway-oriented abnormal event detection method
CN103679214A (en) Vehicle detection method based on online area estimation and multi-feature decision fusion
CN112084890A (en) Multi-scale traffic signal sign identification method based on GMM and CQFL
CN110633678A (en) Rapid and efficient traffic flow calculation method based on video images
CN110674887A (en) End-to-end road congestion detection algorithm based on video classification
CN112100435A (en) Automatic labeling method based on edge end traffic audio and video synchronization sample
CN114049572A (en) Detection method for identifying small target
CN111127520B (en) Vehicle tracking method and system based on video analysis
CN112200248B (en) Point cloud semantic segmentation method, system and storage medium based on DBSCAN clustering under urban road environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant