CN115171377B - Traffic flow parameter detection and analysis method and device based on deep learning - Google Patents

Traffic flow parameter detection and analysis method and device based on deep learning

Info

Publication number
CN115171377B
CN115171377B (application CN202210759052.1A)
Authority
CN
China
Prior art keywords
vehicle
matching
association
traffic
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210759052.1A
Other languages
Chinese (zh)
Other versions
CN115171377A (en)
Inventor
王富
宋金苑
郭瑞利
李元元
顾邓钧
阳丹
朱鸿斌
王静
谭畅
叶蔓青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Institute of Technology
Original Assignee
Wuhan Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Institute of Technology filed Critical Wuhan Institute of Technology
Priority to CN202210759052.1A priority Critical patent/CN115171377B/en
Publication of CN115171377A publication Critical patent/CN115171377A/en
Application granted granted Critical
Publication of CN115171377B publication Critical patent/CN115171377B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G08G 1/0125: Traffic data processing (measuring and analyzing of parameters relative to traffic conditions; traffic control systems for road vehicles)
    • G08G 1/0137: Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G 1/052: Detecting movement of traffic with provision for determining speed or overspeed
    • G08G 1/065: Counting the vehicles in a section of the road or in a parking area
    • G06N 3/02, G06N 3/08: Neural networks; learning methods
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V 10/82: Image or video recognition using neural networks
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06V 2201/07: Target detection
    • G06V 2201/08: Detecting or categorising vehicles
    • Y02T 10/40: Engine management systems (climate change mitigation technologies related to transportation)

Abstract

The application discloses a traffic flow parameter detection and analysis method and device based on deep learning. The method comprises the following steps: acquiring a fully trained vehicle tracking model, wherein the vehicle tracking model comprises a target detection module and an association matching module; acquiring traffic video data; extracting features from the traffic video data using the target detection module to obtain vehicle target detection maps for continuous frames; performing association matching on the continuous-frame target detection maps using the association matching module to obtain a vehicle target tracking result; and determining a traffic flow parameter analysis result according to the vehicle target tracking result. The method improves the accuracy with which vehicle targets are identified in traffic video data, and extracts vehicle information from the trajectory data to realize automatic detection of traffic volume and vehicle speed. It adapts well to severe environments and occlusion, ensures detection accuracy, and provides data support for urban traffic management.

Description

Traffic flow parameter detection and analysis method and device based on deep learning
Technical Field
The invention relates to the technical field of intelligent traffic, and in particular to a traffic flow parameter detection and analysis method and device based on deep learning, and a computer-readable storage medium.
Background
Intelligent traffic systems are the development direction of future traffic systems, and traffic flow parameter detection is one of their key technologies. The detection and analysis of traffic flow parameters is significant for urban traffic management, urban traffic planning and the travel of urban residents, and is an indispensable part of urban traffic planning. Analysis of the detection results and their changes can reflect the traffic condition of a road and provide a basis for road control, signal-light timing, tidal-lane setting and the like.
With the development of sensing and video technology, various traffic detection technologies have gradually appeared, such as induction-coil detection, ultrasonic detection, infrared sensing, radar detection, laser detection and video detection. Compared with early manual methods, these technologies save time and labour and improve precision. However, despite the rapid development of traffic detection technology in recent years, in practical applications inaccurate target identification and improper selection of analysis parameters still lead to missed detections and false detections, and hence to inaccurate analysis of traffic conditions.
Therefore, it is necessary to provide a traffic flow parameter detection method that can accurately detect and track vehicles and analyze traffic data with properly selected parameters, so as to ensure the stability, real-time performance and accuracy of traffic state detection.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a traffic flow parameter detection and analysis method and device based on deep learning, and a computer-readable storage medium, to solve the prior-art problem that inaccurate target identification causes missed detections and false detections and thus inaccurate traffic condition analysis results.
To solve the above problems, the present invention provides a traffic flow parameter detection and analysis method based on deep learning, comprising:
acquiring a fully trained vehicle tracking model, wherein the vehicle tracking model comprises a target detection module and an association matching module;
acquiring traffic video data;
extracting features from the traffic video data using the target detection module to obtain vehicle target detection maps for continuous frames;
performing association matching on the continuous-frame target detection maps using the association matching module to obtain a vehicle target tracking result;
and determining a traffic flow parameter analysis result according to the vehicle target tracking result.
Further, extracting features from the traffic video data using the target detection module to obtain continuous-frame vehicle target feature maps comprises:
obtaining continuous-frame traffic state images from the traffic video data according to a preset extraction method;
and extracting preset feature information of objects in each frame's traffic state image using the target detection module, and generating, in each frame's traffic state image, a plurality of target detection boxes containing vehicle target information to obtain a vehicle target feature map.
Further, performing association matching on the continuous-frame target feature maps using the association matching module to obtain a vehicle target tracking result comprises:
obtaining a target prediction box of the current frame containing target prediction information from the continuous frames preceding the current frame, using the association matching module;
and performing association matching between the vehicle target information in the current frame's traffic state image and the target prediction information to obtain a vehicle target tracking result.
Further, performing association matching between the vehicle target information in the current frame's traffic state image and the target prediction information to obtain a vehicle target tracking result comprises:
performing association matching calculation between the vehicle target information in the current frame's traffic state image and the target prediction information through a first preset matching algorithm to obtain a global association result; and performing progressive matching association when the global association result does not meet a preset first association standard.
Further, the progressive matching association comprises:
performing association matching, using a second preset matching algorithm, between the target detection boxes and the prediction boxes other than those of newly appearing targets, to obtain a second matching result.
Further, determining a traffic flow parameter analysis result according to the vehicle target tracking result comprises:
determining traffic volume and vehicle speed data within a preset time according to the vehicle target tracking result;
determining the time distribution characteristic of the traffic volume according to the traffic volume data;
and determining a speed-density correspondence diagram according to the vehicle speed data.
Further, the backbone network of the target detection module is a residual network structure, and the backbone network of the association matching module is a deep layer aggregation network.
Further, acquiring the fully trained vehicle tracking model comprises:
creating an initial vehicle tracking model;
acquiring vehicle tracking sample videos, and performing image extraction and labeling on them to obtain a training data set and a test data set;
iteratively training the vehicle tracking model according to the training data set and a preset loss function to obtain a trained vehicle tracking model;
and verifying the trained vehicle tracking model with the test data set to obtain the fully trained vehicle tracking model.
The invention also provides a traffic flow parameter detection and analysis device based on deep learning, comprising:
a vehicle tracking model acquisition module for acquiring a fully trained vehicle tracking model, the vehicle tracking model comprising a target detection module and an association matching module;
a video data acquisition module for acquiring traffic video data;
a feature extraction module for extracting features from the traffic video data using the target detection module to obtain continuous-frame vehicle target detection maps;
an association matching module for performing association matching of vehicle targets on the continuous-frame target detection maps to obtain a vehicle target tracking result;
and a traffic flow parameter analysis module for determining a traffic flow parameter analysis result according to the vehicle target tracking result.
The invention also provides a computer-readable storage medium storing computer program instructions which, when executed by a computer, cause the computer to perform the deep learning-based traffic flow parameter detection and analysis method according to any one of the above technical schemes.
Compared with the prior art, the invention has the following beneficial effects. First, a fully trained vehicle tracking model is constructed; second, feature extraction and association matching are performed on the traffic video data to obtain a vehicle target tracking result; finally, a traffic flow parameter analysis result is determined according to the vehicle target tracking result. The target detection module extracts features from the continuous-frame target feature maps in the video data, and the association matching module tracks multiple targets, yielding vehicle trajectory data; vehicle information is extracted from the trajectory data to realize automatic detection of traffic volume. The tracking operation of the association module adapts well to severe environments and occlusion, which ensures detection accuracy. Appropriate traffic flow parameters are then selected according to the target tracking result to analyze traffic conditions, providing data support for urban traffic management, urban traffic planning, the travel of urban residents and other aspects.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a method for detecting and analyzing traffic flow parameters based on deep learning;
FIG. 2 is a schematic diagram of an embodiment of a vehicle tracking model according to the present invention;
FIG. 3 is a flow chart of a method according to an embodiment of the progressive matching association provided by the present invention;
FIG. 4 is a schematic diagram of an embodiment of a traffic flow versus time correspondence provided by the present invention;
FIG. 5 is a schematic diagram of an embodiment of a frequency distribution histogram for vehicle speed analysis according to the present invention;
fig. 6 is a schematic structural diagram of an embodiment of a traffic flow parameter detection and analysis device based on deep learning according to the present invention.
Detailed Description
Preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings, which form a part hereof, and together with the description serve to explain the principles of the invention, and are not intended to limit the scope of the invention.
Before describing the embodiments, relevant terms are defined:
Traffic flow parameters: basic traffic parameters are the most fundamental and important physical quantities describing traffic flow characteristics, including flow, speed and density. Their detection and analysis is significant for urban traffic management, urban traffic planning and the travel of urban residents, and is an indispensable part of urban traffic planning.
In existing traffic flow parameter detection methods, inaccurate detection and tracking of targets in videos or images make the traffic flow parameter analysis results inconsistent with real traffic conditions, so that they cannot guide actual traffic planning. Especially under severe weather and occlusion, the clarity of the original video or image data is lower, which further increases detection and tracking errors. The invention aims to optimize the vehicle target detection and tracking method, improve the accuracy of vehicle detection and tracking, and obtain more accurate traffic condition analysis results from the detection and tracking results.
The invention provides a traffic flow parameter detection and analysis method based on deep learning, which comprises the following steps:
step S101: acquiring a complete vehicle tracking model, wherein the vehicle tracking model comprises a target detection module and an association matching module;
step S102: acquiring traffic video data;
step S103: extracting features of the traffic video data by utilizing the target detection module to obtain a continuous frame vehicle target detection diagram;
step S104: performing association matching on the continuous frame target detection graph by using the association matching module to obtain a vehicle target tracking result;
step S105: and determining a traffic flow parameter analysis result according to the vehicle target tracking result.
According to the traffic flow parameter detection and analysis method based on deep learning, first, a fully trained vehicle tracking model is constructed; second, feature extraction and association matching are performed on the traffic video data to obtain a vehicle target tracking result; finally, a traffic flow parameter analysis result is determined according to the vehicle target tracking result. The target detection module extracts features from the continuous-frame target feature maps in the video data, and the association matching module tracks multiple targets, yielding vehicle trajectory data; vehicle information is extracted from the trajectory data to realize automatic detection of traffic volume. The tracking operation of the association module adapts well to severe environments and occlusion, which ensures detection accuracy. Appropriate traffic flow parameters are then selected according to the target tracking result to analyze traffic conditions, providing data support for urban traffic management, urban traffic planning, the travel of urban residents and other aspects.
As a preferred embodiment, the backbone network of the target detection module is a residual network structure, and the backbone network of the association matching module is a deep layer aggregation network.
As a preferred embodiment, acquiring the fully trained vehicle tracking model comprises:
creating an initial vehicle tracking model;
acquiring vehicle tracking sample videos, and performing image extraction and labeling on them to obtain a training data set and a test data set;
iteratively training the vehicle tracking model according to the training data set and a preset loss function to obtain a trained vehicle tracking model;
and verifying the trained vehicle tracking model with the test data set to obtain the fully trained vehicle tracking model.
As a specific example, as shown in fig. 2, fig. 2 is a schematic diagram of the structure of the vehicle tracking model of this embodiment. The vehicle tracking model comprises a target detection module and an association matching module. The target detection module adopts the ResNet-34 structure of the ResNet family and uses ResNet-34 as its backbone network for feature extraction, obtaining a high-resolution feature map and improving both precision and detection speed. The association matching module uses DLA-34, an improved deep layer aggregation (DLA) network, as its backbone; it realizes multi-layer feature fusion by adding skip connections between low-level and high-level features, improving the algorithm's robustness to changes in object scale. In addition, to avoid the overfitting caused by high-dimensional features, 128-dimensional low-dimensional features are used to capture target appearance information.
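As an illustration of this design, the dense per-frame outputs of such a network can be sketched by their shapes alone. This is a hypothetical sketch: the input resolution and the stride-4 backbone output are common defaults assumed here, not values taken from the patent; only the 128-dimensional embedding head comes from the description above.

```python
# Hypothetical shape sketch of the model's dense output heads on a stride-4
# backbone (input resolution is an assumed default, not from the patent).
H, W = 608, 1088
h, w = H // 4, W // 4            # feature map at stride 4

heads = {
    "heatmap":   (1, h, w),      # per-class center heatmap (single class: vehicle)
    "box_size":  (2, h, w),      # width/height regression
    "offset":    (2, h, w),      # sub-pixel center offset
    "embedding": (128, h, w),    # 128-d Re-ID appearance features, as stated above
}
for name, shape in heads.items():
    print(name, shape)
```

Every spatial location thus carries both a detection hypothesis and an appearance embedding, which is what lets the association matching module reuse the same forward pass for tracking.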
As a specific embodiment, the method for training the initial vehicle tracking model comprises:
first, establishing a vehicle tracking data set: extracting a picture sequence from video clips, arranging it in time order, and labeling the data set by a combination of automatic detection and manual annotation;
second, dividing the complete data set into a training set and a test set, where the training set is used to train the target detection network and the test set is used to evaluate the performance of the association matching module;
third, summing the loss functions of the target detection network and the association matching module to train the vehicle tracking model. Specifically, for an image with N objects and their corresponding ID values, a ground-truth heat map, a box offset map and a size map are generated, together with a one-hot class representation of each object. These ground-truth labels are compared with the predictions of the vehicle tracking model to calculate the loss values, enabling training of the entire network.
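A minimal sketch of the heatmap term of this summed loss, assuming the usual CenterNet-style focal loss (the alpha/beta weights are conventional values, not taken from the patent, and the heatmaps are flattened toy lists):

```python
from math import log

# Hedged sketch of the heatmap term of the training loss: a focal loss
# comparing the predicted center heatmap with the ground-truth heatmap.
def heatmap_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    pos_loss = neg_loss = 0.0
    n_pos = 0
    for p, g in zip(pred, gt):
        p = min(max(p, eps), 1 - eps)      # numerical safety for log()
        if g == 1.0:                       # object center location
            n_pos += 1
            pos_loss += ((1 - p) ** alpha) * log(p)
        else:                              # background (down-weighted by gt value)
            neg_loss += ((1 - g) ** beta) * (p ** alpha) * log(1 - p)
    return -(pos_loss + neg_loss) / max(n_pos, 1)

gt   = [0.0, 0.0, 1.0, 0.0]               # one object center at index 2
good = [0.1, 0.1, 0.9, 0.1]
bad  = [0.1, 0.1, 0.2, 0.1]
print(heatmap_focal_loss(good, gt) < heatmap_focal_loss(bad, gt))   # -> True
```

The offset and size maps would contribute additional L1 terms, and the Re-ID branch a classification term over the object IDs, all summed as the paragraph above describes.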
As a specific embodiment, besides the standard training strategy, a self-supervised learning method is also provided, in which FairMOT is trained on image-level target detection data sets such as COCO. Each target instance in the data set is treated as a separate class, and different transformations of the same target are treated as instances of that class. The transformations employed include HSV augmentation, rotation, scaling, translation and cropping. The model is pre-trained on the CrowdHuman data set and then fine-tuned on the MOT data set. This self-supervised learning method further improves the overall performance of the model.
As a preferred embodiment, extracting features from the traffic video data using the target detection module to obtain continuous-frame vehicle target feature maps comprises:
obtaining continuous-frame traffic state images from the traffic video data according to a preset extraction method;
and extracting preset feature information of objects in each frame's traffic state image using the target detection module, and generating, in each frame's traffic state image, a plurality of target detection boxes containing vehicle target information to obtain a vehicle target feature map.
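One minimal reading of the "preset extraction method" is fixed-interval frame sampling. The helper below (a hypothetical name, not from the patent) just computes which frame indices to keep; in practice the frames themselves would be read with something like cv2.VideoCapture.

```python
# Hypothetical sketch: keep every `step`-th frame index of a video.
def sampled_frame_indices(total_frames: int, step: int) -> list[int]:
    return list(range(0, total_frames, step))

# e.g. a clip of 20 frames sampled at one frame in five:
print(sampled_frame_indices(20, 5))   # -> [0, 5, 10, 15]
```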
As a preferred embodiment, performing association matching of vehicle targets on the continuous-frame target feature maps using the association matching module to obtain a vehicle target tracking result comprises:
obtaining a target prediction box of the current frame containing target prediction information from the continuous frames preceding the current frame, using the association matching module;
and performing association matching between the vehicle target information in the current frame's traffic state image and the target prediction information to obtain a vehicle target tracking result.
As a preferred embodiment, performing association matching between the vehicle target information in the current frame's traffic state image and the target prediction information to obtain a vehicle target tracking result comprises:
performing association matching calculation between the vehicle target information in the current frame's traffic state image and the target prediction information through a first preset matching algorithm to obtain a global association result; and performing progressive matching association when the global association result does not meet a preset first association standard.
As a preferred embodiment, the progressive matching association comprises:
performing association matching, using a second preset matching algorithm, between the target detection boxes and the prediction boxes other than those of newly appearing targets, to obtain a second matching result.
As a specific embodiment, the first preset matching algorithm is the Hungarian algorithm, and the second preset matching algorithm takes the cosine distance computed from Re-ID features as the cost matrix and performs matching association through the Hungarian algorithm.
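The second algorithm can be sketched as follows: the cosine distance between L2-normalised Re-ID embeddings forms the cost matrix, which is then solved as an assignment problem. For clarity this toy sketch brute-forces the permutations; at scale scipy.optimize.linear_sum_assignment is the usual Hungarian solver, and the embeddings below are illustrative values only.

```python
from itertools import permutations
from math import sqrt

# Cosine distance between two embeddings: 0 = identical appearance.
def cosine_dist(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

# Toy global association: minimise total cosine cost over all pairings.
def assign(dets, trks):
    n = len(dets)
    cost = [[cosine_dist(d, t) for t in trks] for d in dets]
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(enumerate(best))

print(assign([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.1]]))
# -> [(0, 1), (1, 0)]: each detection pairs with the track it most resembles
```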
The association matching module analyzes as follows:
First, a globally optimal association result is determined. According to the similarity of motion features and deep appearance features between the vehicle targets in the detection boxes and those in the prediction boxes, the globally optimal association between detection results and prediction results is realized through the Hungarian algorithm. If the association succeeds, target tracking succeeds; if it fails, the progressive matching association step is entered.
Referring to fig. 3, the specific method of progressive matching association comprises:
(1) For all detection boxes and all tracking tracks except those that newly appeared in the previous frame, computing the cosine distance based on Re-ID features as the cost matrix for matching; if matching succeeds, target tracking succeeds; if matching fails, step (2) is entered;
(2) For the unmatched detection boxes and the unmatched vehicle targets in the tracking state from step (1), computing a cost matrix based on the IoU metric and performing matching association through the Hungarian algorithm; if matching succeeds, target tracking succeeds; if matching fails, step (3) is entered;
(3) For the detection boxes left unmatched by the previous two steps and the tracking boxes newly initialized in the previous frame, computing a cost matrix based on the IoU metric and performing matching association through the Hungarian algorithm. Note that for any track newly appearing after the first frame, if no detection box matches it in its second frame, it is directly removed and its track information is not retained.
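The IoU metric used in steps (2) and (3) can be sketched as below, with boxes as (x1, y1, x2, y2); the matching stage would then use 1 - IoU as the cost so that overlapping detection and tracking boxes are cheap to pair.

```python
# Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))   # -> 0.1429
```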
The real-time speed of a target is calculated by combining the vehicle displacement between adjacent frames with the frame rate. While acquiring the traffic volume and speed parameters, the method adapts well to severe environments and occlusion, which ensures detection accuracy.
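A sketch of that speed estimate, assuming a simple pixels-to-metres calibration factor for the camera; the frame rate and calibration value below are illustrative, not from the patent.

```python
# Hedged sketch: speed from the center displacement of a track between two
# adjacent frames, scaled by frame rate and an assumed metres-per-pixel factor.
def speed_kmh(c_prev, c_curr, fps, metres_per_pixel):
    dx, dy = c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]
    pixels = (dx * dx + dy * dy) ** 0.5       # displacement in one frame step
    return pixels * metres_per_pixel * fps * 3.6   # m/s converted to km/h

print(speed_kmh((100, 200), (110, 200), fps=25, metres_per_pixel=0.05))  # ≈ 45.0
```

Averaging this over several frame pairs would smooth out detection jitter, which matters when boxes flicker under occlusion.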
As a specific embodiment, since the cost matrix does not consider spatial information, sensitivity handling for over-distance matching is added: when the distance is too large, the corresponding value in the cost matrix is set to infinity, so as to suppress unreasonable matches with large spatial spans.
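That gating step can be sketched as a simple masking of the cost matrix; the threshold value in the demo is illustrative.

```python
# Entries whose spatial distance exceeds the threshold become infinite,
# so the assignment solver can never choose them.
INF = float("inf")

def gate_cost(cost, dist, max_dist):
    return [[INF if dist[i][j] > max_dist else cost[i][j]
             for j in range(len(row))] for i, row in enumerate(cost)]

print(gate_cost([[0.1, 0.2]], [[5.0, 500.0]], max_dist=100.0))   # -> [[0.1, inf]]
```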
As a specific example, during tracking, trackers are managed according to the state of the tracked objects. The management of a tracker includes when it is created, when it is deleted, how the tracking target's information is updated, and so on.
As shown in table 1, MOTP denotes multi-object tracking precision, which evaluates positional accuracy as the accumulated position error between the prediction boxes and the ground-truth boxes of all tracked targets, and MOTA denotes multi-object tracking accuracy. FM denotes the total number of times the true tracks of all targets are interrupted, MT denotes the proportion of targets whose motion tracks are tracked for more than 80% of their length, and ML denotes the proportion of targets whose motion tracks are tracked for less than 20% of their length. The detection results are shown in table 1; the detection accuracy is at a high level among multi-object tracking algorithms, which is one of the advantages of the method.
TABLE 1 detection results
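For reference, the standard definition of MOTA combines misses, false positives and identity switches over the number of ground-truth objects; the counts in the demo are illustrative only, not the patent's results.

```python
# MOTA = 1 - (FN + FP + ID switches) / ground-truth object count.
def mota(false_negatives, false_positives, id_switches, num_gt):
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt

print(mota(false_negatives=10, false_positives=5, id_switches=1, num_gt=200))
```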
As a preferred embodiment, determining a traffic flow parameter analysis result according to the vehicle target tracking result comprises:
determining traffic volume and vehicle speed data within a preset time according to the vehicle target tracking result;
determining the time distribution characteristic of the traffic volume according to the traffic volume data;
and determining a speed-density correspondence diagram according to the vehicle speed data.
As a specific example, a database is used to analyze traffic flow parameters.
First, a blank database is created with Access; traffic volume data and vehicle speed data are extracted from the vehicle target tracking result and written to it, producing a traffic volume database and a vehicle speed database.
Second, traffic volume data and vehicle speed data within a set time are acquired by querying the traffic volume database and the vehicle speed database through an Excel table.
Then, the traffic density under different traffic states is calculated from the traffic volume data and the vehicle speed data.
Finally, the time distribution characteristic of the traffic volume is determined from the traffic volume data, the relationship between traffic volume and time and the spatial distribution characteristic of the traffic volume are obtained, and a traffic-volume-versus-time diagram is output from this correspondence.
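The database workflow above can be sketched with Python's built-in sqlite3 standing in for Access (the table and column names are illustrative assumptions, not part of the patent disclosure):

```python
import sqlite3

# In-memory stand-in for the blank database described above
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE traffic_volume (interval_start TEXT, vehicle_count INTEGER)")
conn.execute("CREATE TABLE vehicle_speed (timestamp TEXT, speed_kmh REAL)")

# Rows extracted from the vehicle target tracking result (illustrative values)
conn.executemany("INSERT INTO traffic_volume VALUES (?, ?)",
                 [("08:00", 420), ("09:00", 510)])
conn.executemany("INSERT INTO vehicle_speed VALUES (?, ?)",
                 [("08:00:01", 42.5), ("08:00:03", 38.0)])

# Query the traffic volume within a set time window, as done via the
# Excel table in the embodiment
row = conn.execute("SELECT SUM(vehicle_count) FROM traffic_volume "
                   "WHERE interval_start BETWEEN '08:00' AND '09:00'").fetchone()
total_volume = row[0]
```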
As a specific example, the determined time distribution characteristics of the traffic volume include the weekly, monthly, and hourly variations of the traffic volume.
Determining the speed-density diagram includes: obtaining the correspondence between speed and density and the characteristic values of the vehicle speed from the speed and density data, where the characteristic values include the average speed, the median speed, and the modal speed, and outputting the speed-density diagram from that correspondence.
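The three speed characteristic values named above map directly onto Python's statistics module; the spot speeds below are illustrative:

```python
from statistics import mean, median, multimode

speeds = [38, 42, 42, 45, 47, 42, 50, 39]  # illustrative spot speeds in km/h

avg_speed = mean(speeds)             # average vehicle speed
median_speed = median(speeds)        # median vehicle speed (50th percentile)
modal_speed = multimode(speeds)[0]   # modal speed: the most frequently observed value
```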
In a traffic survey, parameters such as traffic volume alone cannot fully describe the actual state of the traffic flow. For example, a traffic volume near zero may mean a free-flowing road with very few vehicles, or it may mean congestion with traffic at a standstill. Density, by contrast, directly indicates the degree of crowding and thus supports decisions on traffic management and control measures. In practice, a familiar phenomenon can be observed: as vehicles on the road increase and traffic density rises, drivers are forced to reduce speed; as density falls again, speed recovers. This indicates that a definite relationship exists between speed and density.
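The inverse speed-density relationship described above is often illustrated with the classical Greenshields linear model; the patent does not commit to a specific model, so the sketch below is purely illustrative:

```python
def greenshields_speed(density, free_flow_speed, jam_density):
    """Greenshields linear model: speed falls linearly from the free-flow
    speed at zero density to zero at the jam density."""
    return free_flow_speed * (1.0 - density / jam_density)

# Illustrative parameters: 80 km/h free-flow speed, jam density 120 veh/km
v_empty = greenshields_speed(0, 80, 120)     # empty road: free-flow speed
v_jam = greenshields_speed(120, 80, 120)     # jammed: traffic at a standstill
v_half = greenshields_speed(60, 80, 120)     # half of jam density
```

Under this model, flow q = k·v is a quadratic function of density, which matches the quadratic flow-density relationship discussed later in the embodiment.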
As a specific embodiment, the time distribution characteristic and the spatial distribution characteristic of the traffic volume are determined from the traffic volume data, the relationship between traffic volume and time is obtained, and a traffic-volume-versus-time diagram is output from this correspondence; as shown in fig. 4, the diagram clearly shows how the traffic volume changes over time.
As a specific example, the time distribution characteristic of the traffic volume describes how the traffic volume at a given place (road section or intersection) changes at different times, and includes the monthly, weekly, and hourly variations of the traffic volume. The variation of the traffic volume across the months of a year is called the monthly variation, and the ratio of the annual average daily traffic to the monthly average daily traffic is called the monthly variation coefficient. The weekly variation of the traffic volume refers to its variation across the days of the week, also known as the daily variation; the ratio of the annual average daily traffic to the average daily traffic on a given day of the week is called the daily variation coefficient. In the absence of year-round traffic observations, a single week of observations can be used to estimate the daily variation coefficient. The hourly variation refers to the change in traffic volume over the 24 hours of a day and is characterized by the peak hour flow ratio and the peak hour coefficient.
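The variation coefficients above are simple ratios; a minimal sketch with illustrative counts (the peak-hour-factor definition follows the common 15-minute convention, an assumption not stated in the patent):

```python
# Illustrative monthly average daily traffic (veh/day) for one year
monthly_adt = [400, 420, 500, 550, 600, 650, 700, 680, 620, 560, 480, 440]

aadt = sum(monthly_adt) / 12  # annual average daily traffic

# Monthly variation coefficient: AADT / monthly average daily traffic
month_coefficients = [aadt / m for m in monthly_adt]

def peak_hour_factor(hour_volume, peak_15min_volume):
    """Peak hour coefficient under the common 15-minute convention:
    hourly volume divided by four times the peak 15-minute flow."""
    return hour_volume / (4 * peak_15min_volume)

phf = peak_hour_factor(1000, 300)
```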
As a specific embodiment, the spatial distribution characteristic of the traffic volume is determined, that is, the directional distribution coefficient corresponding to the traffic volume is determined from the traffic volume. The traffic volume varies not only over time but also over space, and this variation with spatial position is called the spatial distribution characteristic. The traffic volumes in the two directions of a road are usually different. This difference is expressed by the directional distribution coefficient K_d, defined as the ratio of the traffic volume in the major direction to the total traffic volume of both directions. The directional distribution may also differ between the peak periods within the 24 hours of a day; to capture this distinction, directional distribution coefficients can be calculated separately for the morning peak and the evening peak.
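The coefficient K_d can be sketched directly from its definition (the directional counts are illustrative):

```python
def directional_distribution_coefficient(major_dir_volume, minor_dir_volume):
    """K_d: share of the two-way traffic volume travelling in the major direction."""
    return major_dir_volume / (major_dir_volume + minor_dir_volume)

# Illustrative peak-period counts; the dominant direction often reverses
# between the morning and evening peaks
kd_morning = directional_distribution_coefficient(720, 480)
kd_evening = directional_distribution_coefficient(460, 700)
```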
As a specific embodiment, the correspondence between speed and density is obtained from the speed and density data, and the speed-density diagram is output from this correspondence. The traffic density under different traffic states is calculated from the traffic volume and the corresponding vehicle speed data, and the speed-density diagram is drawn with density on the abscissa and vehicle speed on the ordinate, marking the density and vehicle speed at maximum flow; the diagram is then displayed. A flow-density diagram further displays, more intuitively, the quadratic relationship between flow and traffic density under different traffic states. Fig. 5 shows a frequency distribution histogram of the vehicle speeds analyzed for a certain day in the present embodiment.
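The density calculation and the Fig. 5 style histogram can be sketched from the fundamental relation q = k·v (all sample values and bin edges are illustrative assumptions):

```python
import numpy as np

# Illustrative (volume veh/h, speed km/h) pairs under different traffic states
volumes = np.array([300.0, 900.0, 1500.0, 1200.0])
speeds = np.array([75.0, 60.0, 30.0, 15.0])

# Fundamental relation q = k * v  ->  density k = q / v (veh/km)
densities = volumes / speeds

# Frequency distribution of spot speeds, as in the daily histogram of Fig. 5
counts, bin_edges = np.histogram(speeds, bins=[0, 20, 40, 60, 80])
```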
The invention also provides a traffic flow parameter detection and analysis device based on deep learning, the structural block diagram of which is shown in fig. 6; the deep-learning-based traffic flow parameter detection and analysis device 600 comprises:
the vehicle tracking model acquisition module 601 is configured to acquire a fully trained vehicle tracking model, where the vehicle tracking model includes a target detection module and an association matching module;
the video data acquisition module 602 is configured to acquire traffic video data;
the feature extraction module 603 is configured to perform feature extraction on the traffic video data by using the target detection module, so as to obtain a continuous frame vehicle target detection map;
the association matching module 604 is configured to perform association matching on the continuous-frame vehicle target detection maps to obtain a vehicle target tracking result;
the traffic flow parameter analysis module 605 is configured to determine a traffic flow parameter analysis result according to the vehicle target tracking result.
The present embodiment also provides a computer-readable storage medium storing computer program instructions that, when executed by a computer, cause the computer to perform the deep learning-based traffic flow parameter detection and analysis method according to any one of the above technical solutions.
For the computer-readable storage medium and computing device provided in the above embodiments of the present invention, reference may be made to the specific description of the deep-learning-based traffic flow parameter detection and analysis method of the present invention; they have similar beneficial effects and are not described again here.
The invention discloses a traffic flow parameter detection and analysis method and device based on deep learning, an electronic device, and a computer-readable storage medium. First, a fully trained vehicle tracking model is constructed; then, feature extraction and association matching are performed on the traffic video data to obtain a vehicle target tracking result; and finally, a traffic flow parameter analysis result is determined from the vehicle target tracking result.
According to the method, features are extracted from the continuous-frame target feature maps in the video data by the target detection module, multiple targets are tracked by the association matching module, and trajectory tracking data of the vehicles are obtained; vehicle information is extracted from the trajectory data to realize automatic detection of the traffic flow. The tracking operation of the association module adapts well to harsh environments and the influence of occlusion, so the detection accuracy is well guaranteed. Appropriate traffic flow parameters are then selected from the target tracking result to analyze traffic conditions, providing data support for urban traffic management, urban traffic planning, urban residents' travel, and other aspects.
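Outside the patent text, the cascaded Re-ID/IoU association described in the claims can be sketched using scipy's `linear_sum_assignment` (a Hungarian-style solver); the feature vectors, boxes, and thresholds below are illustrative assumptions, not the patented implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_cost(track_feats, det_feats):
    """Cost matrix of cosine distances between Re-ID embeddings."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - t @ d.T

def iou_cost(track_boxes, det_boxes):
    """Cost matrix of 1 - IoU between [x1, y1, x2, y2] boxes."""
    cost = np.ones((len(track_boxes), len(det_boxes)))
    for i, tb in enumerate(track_boxes):
        for j, db in enumerate(det_boxes):
            x1, y1 = max(tb[0], db[0]), max(tb[1], db[1])
            x2, y2 = min(tb[2], db[2]), min(tb[3], db[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            union = ((tb[2] - tb[0]) * (tb[3] - tb[1])
                     + (db[2] - db[0]) * (db[3] - db[1]) - inter)
            cost[i, j] = 1.0 - inter / union
    return cost

def match(cost, threshold):
    """Hungarian assignment, keeping only pairs whose cost is below the threshold;
    unmatched rows/columns fall through to the next stage of the cascade."""
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < threshold]

# Stage 1 of the cascade: Re-ID cosine distance (illustrative 2-D embeddings)
tracks = np.array([[1.0, 0.0], [0.0, 1.0]])
dets = np.array([[0.9, 0.1], [0.1, 0.9]])
pairs = match(cosine_cost(tracks, dets), threshold=0.4)
```

Leftover detections and tracks would then be re-matched with `match(iou_cost(...), threshold)`, mirroring stages (2) and (3) of the progressive matching in the claims.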
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention.

Claims (5)

1. The traffic flow parameter detection and analysis method based on deep learning is characterized by comprising the following steps of:
acquiring a fully trained vehicle tracking model, wherein the vehicle tracking model comprises a target detection module and an association matching module;
acquiring traffic video data;
extracting features of the traffic video data by using the target detection module to obtain a continuous frame vehicle target detection diagram, wherein the target detection module adopts a ResNet-34 network structure as a backbone network;
performing association matching on the continuous frame vehicle target detection graph by using the association matching module to obtain a vehicle target tracking result, wherein the association matching module applies an improved DLA-34 network to the backbone network;
determining a traffic flow parameter analysis result according to the vehicle target tracking result;
the method for extracting the characteristics of the traffic video data by utilizing the target detection module to obtain a continuous frame vehicle target detection diagram comprises the following steps:
obtaining continuous frame traffic state images from the traffic video data according to a preset extraction method;
extracting preset characteristic information of objects in the traffic state image of each frame by using the target detection module, and generating a plurality of target detection frames containing vehicle target information in the traffic state image of each frame to obtain a vehicle target detection diagram;
and carrying out association matching on the continuous frame target detection graph by using the association matching module to obtain a vehicle target tracking result, wherein the association matching comprises the following steps:
obtaining a target prediction frame of the current frame containing target prediction information according to the continuous frame images before the current frame by utilizing the association matching module;
performing association matching calculation on the vehicle target information and the target prediction information in the current frame traffic state image through a first preset matching algorithm to obtain a global association result;
and when the global association result does not meet a preset first association standard, performing progressive matching association, including: performing association matching between the target detection frames and the target prediction frames, excluding newly appearing targets, by using a second preset matching algorithm to obtain a second matching result;
wherein the first preset matching algorithm is the Hungarian algorithm, and the second preset matching algorithm is: taking the cosine distance calculated based on Re-ID features as the cost matrix and performing matching association by the Hungarian algorithm;
the progressive matching association specifically comprises:
(1) For all detection frames and all tracking trajectories, except the trajectory newly appearing in the previous frame, the cosine distance calculated based on Re-ID features is used as the cost matrix for matching; if the matching succeeds, the target is tracked successfully; if the matching fails, step (2) is entered;
(2) For the detection frames left unmatched in step (1) and the unmatched vehicle targets in the tracking state, a cost matrix based on the IoU metric is calculated and matching association is performed by the Hungarian algorithm; if the matching succeeds, the target is tracked successfully; if the matching fails, step (3) is entered;
(3) For the detection frames left unmatched by the previous two steps and the tracking frames newly initialized in the previous frame, a cost matrix based on the IoU metric is calculated and matching association is performed by the Hungarian algorithm; for any new trajectory appearing in a frame other than the first, if no detection frame in the second frame matches it, it is removed directly and no trajectory information is retained.
2. The deep learning-based traffic flow parameter detection and analysis method according to claim 1, wherein determining a traffic flow parameter analysis result from the vehicle target tracking result comprises:
determining traffic volume and vehicle speed data in preset time according to the vehicle target tracking result;
determining the time distribution characteristic of the traffic volume according to the traffic volume data;
and determining a corresponding relation diagram of the speed and the density according to the vehicle speed data.
3. The method of claim 1, wherein the obtaining a trained vehicle tracking model comprises:
creating an initial vehicle tracking model;
acquiring a vehicle tracking sample tracking video, and performing image extraction and labeling on the vehicle tracking sample tracking video to obtain a training data set and a test data set;
performing iterative training on the vehicle tracking model according to the training data set and a preset loss function to obtain a trained vehicle tracking model;
and verifying the trained vehicle tracking model by using the test data set to obtain the vehicle tracking model with complete training.
4. A traffic flow parameter detection and analysis device based on deep learning, comprising:
the vehicle tracking model acquisition module is used for acquiring a vehicle tracking model with complete training, and the vehicle tracking model comprises a target detection module and an association matching module;
the video data acquisition module is used for acquiring traffic video data;
the feature extraction module is used for extracting features of the traffic video data by utilizing the target detection module to obtain a continuous frame vehicle target detection diagram, wherein the target detection module adopts a ResNet-34 network structure as a backbone network;
the association matching module is used for performing association matching on the vehicle targets of the continuous frame vehicle target detection graphs to obtain a vehicle target tracking result, wherein the association matching module applies an improved DLA-34 network to the backbone network;
the traffic flow parameter analysis module is used for determining a traffic flow parameter analysis result according to the vehicle target tracking result;
the method for extracting the characteristics of the traffic video data by utilizing the target detection module to obtain a continuous frame vehicle target detection diagram comprises the following steps:
obtaining continuous frame traffic state images from the traffic video data according to a preset extraction method;
extracting preset characteristic information of objects in the traffic state image of each frame by using the target detection module, and generating a plurality of target detection frames containing vehicle target information in the traffic state image of each frame to obtain a vehicle target detection diagram;
and carrying out association matching on the continuous frame target detection graph by using the association matching module to obtain a vehicle target tracking result, wherein the association matching comprises the following steps:
obtaining a target prediction frame of the current frame containing target prediction information according to the continuous frame images before the current frame by utilizing the association matching module;
performing association matching calculation on the vehicle target information and the target prediction information in the current frame traffic state image through a first preset matching algorithm to obtain a global association result;
and when the global association result does not meet a preset first association standard, performing progressive matching association, including: performing association matching between the target detection frames and the target prediction frames, excluding newly appearing targets, by using a second preset matching algorithm to obtain a second matching result;
wherein the first preset matching algorithm is the Hungarian algorithm, and the second preset matching algorithm is: taking the cosine distance calculated based on Re-ID features as the cost matrix and performing matching association by the Hungarian algorithm;
the progressive matching association specifically comprises:
(1) For all detection frames and all tracking trajectories, except the trajectory newly appearing in the previous frame, the cosine distance calculated based on Re-ID features is used as the cost matrix for matching; if the matching succeeds, the target is tracked successfully; if the matching fails, step (2) is entered;
(2) For the detection frames left unmatched in step (1) and the unmatched vehicle targets in the tracking state, a cost matrix based on the IoU metric is calculated and matching association is performed by the Hungarian algorithm; if the matching succeeds, the target is tracked successfully; if the matching fails, step (3) is entered;
(3) For the detection frames left unmatched by the previous two steps and the tracking frames newly initialized in the previous frame, a cost matrix based on the IoU metric is calculated and matching association is performed by the Hungarian algorithm; for any new trajectory appearing in a frame other than the first, if no detection frame in the second frame matches it, it is removed directly and no trajectory information is retained.
5. A computer readable storage medium, characterized in that the storage medium stores computer program instructions which, when executed by a computer, cause the computer to perform the deep learning based traffic flow parameter detection and analysis method according to any one of claims 1-3.
CN202210759052.1A 2022-06-30 2022-06-30 Traffic flow parameter detection and analysis method and device based on deep learning Active CN115171377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210759052.1A CN115171377B (en) 2022-06-30 2022-06-30 Traffic flow parameter detection and analysis method and device based on deep learning

Publications (2)

Publication Number Publication Date
CN115171377A CN115171377A (en) 2022-10-11
CN115171377B true CN115171377B (en) 2024-01-09

Family

ID=83489827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210759052.1A Active CN115171377B (en) 2022-06-30 2022-06-30 Traffic flow parameter detection and analysis method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN115171377B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437792B (en) * 2023-12-20 2024-04-09 中交第一公路勘察设计研究院有限公司 Real-time road traffic state monitoring method, device and system based on edge calculation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034355A (en) * 2010-12-28 2011-04-27 丁天 Feature point matching-based vehicle detecting and tracking method
CN102682453A (en) * 2012-04-24 2012-09-19 河海大学 Moving vehicle tracking method based on multi-feature fusion
CN106611169A (en) * 2016-12-31 2017-05-03 中国科学技术大学 Dangerous driving behavior real-time detection method based on deep learning
CN109376615A (en) * 2018-09-29 2019-02-22 苏州科达科技股份有限公司 For promoting the method, apparatus and storage medium of deep learning neural network forecast performance
CN110472496A (en) * 2019-07-08 2019-11-19 长安大学 A kind of traffic video intelligent analysis method based on object detecting and tracking
CN114360239A (en) * 2021-12-03 2022-04-15 武汉工程大学 Traffic prediction method and system for multilayer space-time traffic knowledge map reconstruction

Also Published As

Publication number Publication date
CN115171377A (en) 2022-10-11

Similar Documents

Publication Publication Date Title
CN111652097B (en) Image millimeter wave radar fusion target detection method
CN110472496B (en) Traffic video intelligent analysis method based on target detection and tracking
CN110619279B (en) Road traffic sign instance segmentation method based on tracking
Zhang et al. Semi-automatic road tracking by template matching and distance transformation in urban areas
Zhang et al. CDNet: A real-time and robust crosswalk detection network on Jetson nano based on YOLOv5
Rodríguez et al. An adaptive, real-time, traffic monitoring system
CN105608417A (en) Traffic signal lamp detection method and device
CN115171377B (en) Traffic flow parameter detection and analysis method and device based on deep learning
Zhang et al. Vehicle re-identification for lane-level travel time estimations on congested urban road networks using video images
CN110659601A (en) Depth full convolution network remote sensing image dense vehicle detection method based on central point
CN112634368A (en) Method and device for generating space and OR graph model of scene target and electronic equipment
Bu et al. A UAV photography–based detection method for defective road marking
CN112017213B (en) Target object position updating method and system
CN113505638A (en) Traffic flow monitoring method, traffic flow monitoring device and computer-readable storage medium
CN116109986A (en) Vehicle track extraction method based on laser radar and video technology complementation
CN111325811A (en) Processing method and processing device for lane line data
Sharma et al. Deep Learning-Based Object Detection and Classification for Autonomous Vehicles in Different Weather Scenarios of Quebec, Canada
CN114895274A (en) Guardrail identification method
Hadzic et al. Rasternet: Modeling free-flow speed using lidar and overhead imagery
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
Shan et al. Bidirectional feedback of optimized gaussian mixture model and kernel correlation filter for enhancing simple detection of small pixel vehicles
Lin et al. Semi-automatic road tracking by template matching and distance transform
CN116977969B (en) Driver two-point pre-aiming identification method based on convolutional neural network
CN113452952B (en) Road condition monitoring method, device and system
Lv et al. Vehicle detection method for satellite videos based on enhanced vehicle features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant