CN114973169A - Vehicle classification counting method and system based on multi-target detection and tracking - Google Patents


Info

Publication number
CN114973169A
CN114973169A (Application No. CN202210918749.9A)
Authority
CN
China
Prior art keywords
vehicle
target
counting
tracking
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210918749.9A
Other languages
Chinese (zh)
Inventor
毕研超
孙自若
卢宝辉
刘新锋
聂秀山
陈梦雅
李成龙
孙倩
杜俊彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jianzhu University
Original Assignee
Shandong Jianzhu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jianzhu University filed Critical Shandong Jianzhu University
Priority to CN202210918749.9A priority Critical patent/CN114973169A/en
Publication of CN114973169A publication Critical patent/CN114973169A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle classification counting method and system based on multi-target detection and tracking, relates to the technical field of machine vision and image processing, and addresses the miscounting of vehicles caused by occlusion between vehicles in existing vehicle classification counting methods. The method comprises the following steps: acquiring a target video to be detected and at least two non-overlapping counting areas; detecting a target vehicle in each frame of the target video, tracking the target vehicle to obtain its running track information, determining the type of the target vehicle when it is judged to have entered a counting area, and counting that type; and determining the traffic flow information of each type of vehicle according to the running track information and the counting results of the counting areas. By this method, miscounting caused by occlusion or identity exchange between vehicles can be avoided, and accurate counting of the various types of vehicles on a traffic trunk line is achieved.

Description

Vehicle classification counting method and system based on multi-target detection and tracking
Technical Field
The invention belongs to the technical field of machine vision and image processing, and particularly relates to a vehicle classification counting method and system based on multi-target detection and tracking.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art that has become known to those skilled in the art.
With the rapid development of the economy, the traffic flow on traffic trunk lines keeps increasing. To keep trunk lines clear and driving safe, traffic statistics on different types of vehicles, such as passenger cars, trucks and container trucks, can help traffic police and the relevant departments make reasonable decisions about vehicle passage.
Unlike the scenes handled by existing intelligent transportation monitoring systems, traffic trunk lines carry heavy traffic with numerous and diverse vehicle types, and occlusion can occur between vehicles of different types, followed by identity exchange. For example, a large truck has a long body and moves slowly, so it can occlude small vehicles for a period of time; after occlusion, the identities of the vehicles are easily swapped, which leads to missed detections and false detections during counting and therefore to miscounts.
Deep-learning-based vehicle detection and tracking has the advantages of high speed and high accuracy, and shows good detection performance both against complex backgrounds and on edge devices with limited computing power. An existing deep-learning-based vehicle detection and tracking method is the lane-by-lane automatic vehicle counting method based on YOLOv4 and DeepSORT (application number: 202010924261.8). It uses a YOLOv4 + DeepSORT detection and tracking model to detect and track vehicles in real time, represents each trajectory by its end position, clusters the trajectory end positions with DBSCAN to cluster the trajectory data, matches trajectories according to the cluster positions and the trajectory start positions, and finally completes lane-by-lane automatic vehicle counting from the matched trajectories and trajectory analysis.
Due to the above reasons, the conventional method cannot be directly used for the traffic flow statistics of the traffic trunk, and therefore, how to accurately count various types of vehicles on the traffic trunk becomes a problem to be solved urgently.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a vehicle classification counting method and system based on multi-target detection and tracking, so as to realize accurate counting of various types of vehicles on a traffic trunk line.
In order to achieve the above object, the present invention mainly includes the following aspects:
in a first aspect, an embodiment of the present invention provides a vehicle classification counting method based on multi-target detection and tracking, including:
acquiring a target video to be detected and at least two non-overlapping counting areas;
detecting a target vehicle in each frame of image of the target video, tracking the target vehicle to obtain the running track information of the target vehicle, determining the type of the target vehicle when the target vehicle is judged to enter a counting area, and counting the type of the target vehicle;
and determining the traffic flow information of each type of vehicle according to the running track information and the counting result of the counting area.
In one possible implementation manner, a multi-target detection model is adopted to detect a target vehicle in each frame of image of the target video, a detection frame of the target vehicle is obtained, the detection frame is input into a multi-target tracking model to obtain motion track information of the target vehicle, and when the target vehicle is judged to enter a counting area, a vehicle classification model is adopted to determine the type of the target vehicle.
In one possible implementation mode, vehicle video samples with a plurality of vehicle shooting visual angles in different scenes are obtained, and the vehicle video samples are processed to obtain a vehicle image training set; and respectively training the multi-target detection model, the multi-target tracking model and the vehicle classification model through the vehicle image training set.
In one possible implementation mode, vehicle images in a vehicle image training set are obtained, and a target vehicle and a non-target vehicle in the vehicle images are respectively labeled; constructing a multi-target detection model based on a YOLOv5 network, wherein the multi-target detection model comprises a convolutional layer, a C3 module and an SSPF module; and training parameters of the multi-target detection model by using the marked vehicle images.
In a possible implementation manner, when the multi-target detection model detects a target vehicle in any frame of a target video, the multi-target tracking model tracks the position of the target vehicle in a frame image from any frame to obtain the motion trail information of the target vehicle.
In a possible implementation manner, in the process of tracking the target vehicle, the multi-target tracking model extracts appearance features of the target vehicle in the detection frames of the previous and subsequent frames, compares the appearance features with appearance features of the target vehicle in the detection frame of the current frame, identifies the same vehicle in different frames, and generates a uniform number for the same vehicle in different frames.
In one possible embodiment, a preset coordinate point of a detection frame of the target vehicle is determined as a count coordinate point, and when the count coordinate point falls within a count area, it is determined that the target vehicle enters the count area.
In one possible embodiment, after determining the traffic flow information of each type of vehicle according to the running track information and the counting result of the counting area, the method further includes: displaying the target video on a display interface, and displaying the traffic flow information of each type of vehicle at a specific position of the display interface.
In a second aspect, an embodiment of the present invention further provides a vehicle classification and counting system based on multi-target detection and tracking, including:
the video acquisition module is used for acquiring a target video to be detected and at least two non-overlapping counting areas;
the classification counting module is used for detecting a target vehicle in each frame of image of the target video, tracking the target vehicle to obtain the running track information of the target vehicle, determining the type of the target vehicle when the target vehicle is judged to enter a counting area, and counting the type of the target vehicle;
and the determining module is used for determining the traffic flow information of each type of vehicle according to the running track information and the counting result of the counting area.
In one possible embodiment, the method further comprises:
and the display module is used for displaying the target video on a display interface and displaying the traffic flow information of each type of vehicle at a specific position of the display interface.
The above one or more technical solutions have the following beneficial effects:
(1) When a target vehicle is detected, the invention tracks it to obtain its running track information and sets at least two non-overlapping counting areas. When the target vehicle is judged to have entered a counting area, its type is recognised and counted, and the traffic flow information of each type of vehicle is then determined from the running track information and the counting results of the counting areas. In this way, miscounting caused by occlusion or identity exchange between vehicles is avoided during classification counting; at the same time, because target vehicles are tracked once detected and their types are identified when they enter a counting area, the accuracy of classification counting is improved and accurate counting of the various types of vehicles on a traffic trunk line is achieved.
(2) The multi-target detection model is trained on a vehicle image training set in which target vehicles and non-target vehicles are labelled separately, which reduces false detection of non-target vehicles such as bicycles and electric vehicles as target vehicles and ensures the accuracy of target vehicle detection.
(3) In the process of tracking the target vehicle, the appearance features of the target vehicle in the preceding and following detection frames are extracted and compared with the appearance features of the target vehicle in the current-frame detection frame to identify the same vehicle across frames. This improves vehicle tracking precision and solves the miscounting caused by tracking failure and identity switches.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain the invention without limiting it.
FIG. 1 is a schematic flow chart of a vehicle classification counting method based on multi-target detection and tracking according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating an effect of the vehicle classification and counting interface provided by the embodiment of the invention;
FIG. 3 is a network architecture diagram of a multi-target detection model provided by an embodiment of the invention;
FIG. 4 is a tracking flow diagram of a multi-target tracking model provided by an embodiment of the invention;
FIG. 5 is a network architecture diagram of a vehicle classification model provided by an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a vehicle classification and counting system based on multi-target detection and tracking according to an embodiment of the invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The deep-learning-based vehicle detection and tracking method has the advantages of high speed and high accuracy, and shows good detection performance both against complex backgrounds and on edge devices with limited computing power. For monitoring the traffic flow of a traffic trunk line, existing counting methods such as manual inspection and licence-plate detection are limited by the installation position of the monitoring device (such as a camera), which makes it difficult to identify vehicles accurately; moreover, the traffic flow on a trunk line is heavy and occlusion can occur between vehicles of different types, which leads to miscounting.
On this basis, the present embodiment provides a vehicle classification counting method and system based on multi-target detection and tracking. When a target vehicle is detected, it is tracked to obtain its running track information, and at least two non-overlapping counting areas are set. When the target vehicle is judged to have entered a counting area, its type is identified and counted; the traffic flow information of each type of vehicle is then determined from the running track information and the counting results of the counting areas. This avoids miscounting caused by occlusion or identity exchange between vehicles and achieves accurate counting of each type of vehicle on a traffic trunk line.
Referring to fig. 1, fig. 1 is a schematic flowchart of a vehicle classification and counting method based on multi-target detection and tracking according to an embodiment of the present invention, as shown in fig. 1, the vehicle classification and counting method based on multi-target detection and tracking according to the embodiment specifically includes the following steps:
s101: acquiring a target video to be detected and at least two non-overlapping counting areas.
In a specific implementation, the target video to be detected is first placed in the video folder, the video name to be detected is selected from a drop-down box, and the video is recognised to obtain the target video to be detected. At least two non-overlapping counting regions are then set; as shown in fig. 2, this embodiment sets a left counting region and a right counting region and counts through these two regions.
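For illustration only, the two non-overlapping counting regions could be represented as axis-aligned rectangles in image-pixel coordinates, as in the following sketch; the region names and coordinate values are hypothetical and are not taken from this embodiment.

```python
# Hypothetical sketch: two non-overlapping counting regions as axis-aligned
# rectangles (x1, y1, x2, y2) in image-pixel coordinates. The coordinate
# values below are illustrative only.
LEFT_REGION = (100, 400, 400, 500)    # left counting region
RIGHT_REGION = (880, 400, 1180, 500)  # right counting region

def regions_overlap(a, b):
    """Return True if two (x1, y1, x2, y2) rectangles overlap."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

assert not regions_overlap(LEFT_REGION, RIGHT_REGION), "counting regions must not overlap"
```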
S102: detecting a target vehicle in each frame of image of the target video, tracking the target vehicle to obtain the running track information of the target vehicle, determining the type of the target vehicle when the target vehicle is judged to enter a counting area, and counting the type of the target vehicle.
In specific implementation, the obtained target video is subjected to frame conversion processing, a target vehicle in each frame of image is detected, and the target vehicle is tracked to obtain the running track information of the target vehicle. When it is determined that the target vehicle enters the count area, the type of the target vehicle is determined, and the type of the vehicle is counted.
As an optional embodiment, a multi-target detection model is used to detect the target vehicle in each frame of the target video and obtain its detection frame; the detection frame is input into a multi-target tracking model to obtain the motion trajectory information of the target vehicle; and when the target vehicle is judged to have entered a counting area, a vehicle classification model is used to determine its type. The models are therefore connected in series, and with a suitable optimizer, learning rate and over-fitting prevention measures, high-precision real-time detection, tracking and classification counting of multiple targets can be achieved.
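A minimal sketch of this serial connection is shown below. The YOLOv5 model loaded through torch.hub stands in for the trained multi-target detection model, and the tracker, classifier and counter objects are placeholders for the multi-target tracking model, vehicle classification model and counting logic described in this embodiment; only the per-frame data flow is intended to match the description.

```python
# Minimal sketch of the serial detection -> tracking -> classification pipeline.
# The tracker, classifier and counter below are placeholders; only the overall
# per-frame data flow follows the embodiment described above.
import cv2
import torch

detector = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # stand-in detector

def process_video(path, tracker, classifier, counter):
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # 1) detect vehicles in the current frame
        detections = detector(frame).xyxy[0].cpu().numpy()  # [x1, y1, x2, y2, conf, cls]
        # 2) update tracks to obtain per-vehicle numbers and trajectories
        tracks = tracker.update(detections, frame)
        # 3) classify and count a vehicle only when it enters a counting region
        for track in tracks:
            if counter.entered_region(track):
                vehicle_type = classifier(frame, track.box)
                counter.count(track.id, vehicle_type)
    cap.release()
```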
Optionally, vehicle video samples covering several shooting angles in different scenes are obtained and processed to build a vehicle image training set, and the multi-target detection model, the multi-target tracking model and the vehicle classification model are trained separately on this training set. The video files come from professional institutions and from manually shot multi-view vehicle videos in different scenes; the videos are processed to obtain the data sets of the three models. Because the data sets cover a wide range of scenes and viewing angles, the models adapt well and achieve good detection performance in different scenes and from various viewpoints.
As an optional embodiment, vehicle images in a vehicle image training set are obtained, and a target vehicle and a non-target vehicle in the vehicle images are respectively labeled; constructing a multi-target detection model based on a YOLOv5 network, wherein the multi-target detection model comprises a convolutional layer, a C3 module and an SSPF module; and training parameters of the multi-target detection model by using the marked vehicle images.
In a specific implementation, the training set of the multi-target detection model is constructed as follows: the vehicle video samples are converted into frames and screened to obtain 1500 vehicle images, and the vehicles in these images are labelled. To resolve the misjudgement of electric vehicles being identified as vehicles, the labels are divided into two classes, vehicles (labelled car) and electric vehicles (labelled xdc). The multi-target detection model is used to detect the target vehicle in each frame of the target video; in this embodiment it is constructed based on the YOLOv5 network, its network structure is shown in fig. 3, and it includes convolutional layers, a C3 module and an SSPF module. The convolutional layer performs a series of operations such as convolution, regularisation and activation on the input image. The C3 module is composed of the classical residual structure Bottleneck: the features extracted by the convolutional layer are added to the original features, so the residual features are passed on while the original output depth is maintained. The SSPF module performs max-pooling on the feature map, and the resulting feature maps are concatenated with Concat before the image is output. The labelled vehicle images are input into the constructed multi-target detection model, which outputs vehicle images with detection frames. The convolution kernel size in the convolutional layer is increased to 5x5 with a stride of 2, enlarging the receptive field of feature extraction; the C3 module increases the number of convolution kernels in the bottleneck and convolutional layers without changing the input dimension, increasing the network depth and the feature-extraction receptive field and thereby improving the detection accuracy of the target vehicle. The training parameters of the multi-target detection model are set as follows: initial learning rate 0.01, SGD optimizer, batch_size 24, picture input size 640x640, and 100 training epochs.
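For reference, the stated training settings can be gathered into a configuration such as the one below; the command shown in the comment is only an indicative YOLOv5-style invocation, since exact flag names depend on the YOLOv5 repository version being used.

```python
# Detection-model training settings as stated in this embodiment.
# The commented command is an indicative YOLOv5-style invocation; exact flag
# names depend on the YOLOv5 repository version.
DETECTOR_TRAIN_CFG = {
    "initial_lr": 0.01,
    "optimizer": "SGD",
    "batch_size": 24,
    "img_size": 640,   # 640x640 input
    "epochs": 100,
}
# Indicative only:
#   python train.py --imgsz 640 --batch 24 --epochs 100 --optimizer SGD \
#                   --data vehicles.yaml --weights yolov5s.pt
```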
As an optional embodiment, when the multi-target detection model detects a target vehicle in any frame of the target video, the multi-target tracking model tracks the position of the target vehicle in the frame image from any frame to obtain the motion trail information of the target vehicle.
In a specific implementation, to handle occlusion while vehicles are driving, the multi-target detection model detects the target vehicle in every frame of the target video, and once a target vehicle is detected in any frame, the multi-target tracking model tracks its position in the subsequent frames to determine its motion trajectory information. The tracking flow of the multi-target tracking model is shown in fig. 4: cascade matching is performed on the target vehicle in the detection frame to obtain the feature association degree between the image of the target vehicle and the image of the track target; the coordinate position of the current-frame track prediction box is obtained from the previous-frame track box and a Kalman filter; cascade matching of the detected targets and the confirmed track targets yields a cascade matching set, unmatched track targets and unmatched detected targets; IOU matching is then performed on the unmatched track targets and unmatched detected targets to obtain an IOU matching set, and multi-target tracking is carried out according to the cascade matching set and the IOU matching set.
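The IOU-matching step applied to the remaining unmatched track targets and detected targets could be sketched as follows, using Hungarian assignment over an IoU cost matrix; this is a simplified illustration of the standard DeepSORT-style association, with the appearance-based cascade matching and Kalman prediction omitted.

```python
# Simplified IOU-matching step between unmatched track prediction boxes and
# unmatched detection boxes, as in the association flow described above.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def iou_match(track_boxes, det_boxes, min_iou=0.3):
    """Return (matched pairs, unmatched track indices, unmatched detection indices)."""
    if len(track_boxes) == 0 or len(det_boxes) == 0:
        return [], list(range(len(track_boxes))), list(range(len(det_boxes)))
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)          # Hungarian assignment
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_t = [i for i in range(len(track_boxes)) if i not in matched_t]
    unmatched_d = [j for j in range(len(det_boxes)) if j not in matched_d]
    return matches, unmatched_t, unmatched_d
```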
As an optional embodiment, in the process of tracking the target vehicle, the appearance features of the target vehicle in the previous and next frame detection frames are extracted, and are respectively compared with the appearance features of the target vehicle in the current frame detection frame, so as to identify the same vehicle in different frames, and generate a uniform number for the same vehicle in different frames.
In a specific implementation, a vehicle re-identification model is used to match the extracted vehicle appearance features with the vehicles in the detection frames. Specifically, the vehicle re-identification model is constructed based on a residual neural network. Its training set is constructed as follows: after frame conversion, cropping and screening of the vehicle video samples, a data set is built in which the images of each vehicle from various viewing angles are stored in the same folder, and a label file records the vehicle category corresponding to each folder. This embodiment uses 854 folders in total, divided into six vehicle types: passenger cars, buses, small trucks, large trucks, container trucks and medium trucks. The training parameters of the vehicle re-identification model are set as follows: initial learning rate 0.01, weight decay 0.1, SGD optimizer, batch_size 24, picture input size 224x224, and 80 training epochs.
If re-identification is performed with the feature extractor built into the multi-target tracking model, the re-identification effect is poor, the precision is low and a large number of identity switches occur. Therefore, the higher-precision residual neural network is used to extract the vehicle appearance features, which are matched against the vehicle features in the detection frames; this achieves high-precision vehicle re-identification and minimises the probability of inaccurate counting caused by identity switches.
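A sketch of such appearance-feature matching is given below. A torchvision ResNet-50 is used here only as a stand-in for the residual neural network described above, and the similarity threshold is an assumed value; the trained re-identification weights of this embodiment are not reproduced.

```python
# Sketch of appearance-feature matching for vehicle re-identification.
# The ResNet-50 backbone and the 0.7 threshold are stand-in assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

backbone = models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()   # keep the pooled 2048-d embedding
backbone.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),  # 224x224 input, as in the training settings
    transforms.ToTensor(),
])

@torch.no_grad()
def appearance_embedding(crop_bgr):
    """Embed a cropped vehicle image (H x W x 3 BGR uint8 array) into a feature vector."""
    x = preprocess(crop_bgr[:, :, ::-1].copy()).unsqueeze(0)  # BGR -> RGB
    return F.normalize(backbone(x), dim=1).squeeze(0)

def same_vehicle(feat_a, feat_b, threshold=0.7):
    """Cosine-similarity test used to keep the same number for the same vehicle."""
    return float(torch.dot(feat_a, feat_b)) >= threshold
```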
In addition, as shown in fig. 5, the vehicle classification model includes convolutional layers, batch normalisation layers, MBConv modules, a global average pooling layer and a fully connected layer. The input image first passes through a convolutional layer and a batch normalisation layer and is processed by the Swish activation function; it then passes through several MBConv modules to obtain a feature map of a specific size. The feature map next passes through a convolutional layer and a batch normalisation layer in turn, is processed by the Swish activation function, is fed into the global average pooling layer, undergoes random inactivation, and the classification result is obtained with a Softmax function.
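The structure described above (convolution and batch normalisation with Swish activation, MBConv blocks, global average pooling, random inactivation and a fully connected layer with Softmax) corresponds to an EfficientNet-style network. The sketch below assumes torchvision's EfficientNet-B0 with the classifier head replaced for the six vehicle categories; it is one possible realisation, not necessarily the exact model used in this embodiment.

```python
# Assumed EfficientNet-style realisation of the vehicle classification model
# (Conv + BN + Swish, MBConv blocks, global average pooling, dropout, FC + Softmax).
import torch.nn as nn
from torchvision import models

NUM_VEHICLE_CLASSES = 6  # passenger car, bus, small truck, large truck, container truck, medium truck

def build_vehicle_classifier():
    model = models.efficientnet_b0(weights=None)
    in_features = model.classifier[1].in_features   # 1280-d pooled feature
    model.classifier = nn.Sequential(
        nn.Dropout(p=0.2),                          # random inactivation
        nn.Linear(in_features, NUM_VEHICLE_CLASSES),
    )
    return model  # apply softmax over the logits at inference time
```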
The training set of the vehicle classification model is constructed as follows: four frames per second are first extracted from the vehicle video samples and given a preliminary manual screening; the multi-target detection model is used to crop out the vehicles in each picture, and usable pictures are then screened manually. The vehicle data are stored by classification into six types: passenger cars, buses, small trucks, large trucks, container trucks and medium trucks. Data enhancement is applied to the vehicle types with fewer samples, giving 32034 pictures in total; the enhancement mainly adjusts image brightness, contrast and saturation and adds Gaussian noise. The test set consists of videos of vehicles from different viewing angles in different scenes obtained after cutting and splicing. The training parameters of the vehicle classification model are set as follows: initial learning rate 0.01, weight decay 0.8, Adam optimizer, batch_size 40, picture input size 224x224, and 100 training epochs.
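The data-enhancement pipeline described above could be sketched as follows; the jitter ranges and noise level are illustrative assumptions, as the embodiment does not specify them.

```python
# Sketch of the data-enhancement pipeline described above (brightness, contrast,
# saturation adjustment and Gaussian noise). The jitter ranges and noise level
# are illustrative values only.
import torch
from torchvision import transforms

class AddGaussianNoise:
    def __init__(self, std=0.02):
        self.std = std
    def __call__(self, tensor):
        # add zero-mean Gaussian noise and keep pixel values in [0, 1]
        return torch.clamp(tensor + torch.randn_like(tensor) * self.std, 0.0, 1.0)

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.ToTensor(),
    AddGaussianNoise(std=0.02),
])
```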
As an alternative embodiment, a preset coordinate point of the detection frame of the target vehicle is determined as a count coordinate point, and when the count coordinate point falls within a count area, it is determined that the target vehicle enters the count area.
In a specific implementation, counting is triggered when the counting coordinate point of the vehicle detection frame hits a counting area, i.e. the same coordinate point of the same vehicle is counted through at least two counting areas. The counting coordinate point can be any fixed point on the detection frame, for example the coordinate point at the upper-right corner of the vehicle detection frame.
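The hit test for the counting coordinate point reduces to a point-in-rectangle check; the sketch below follows the example of using the upper-right corner of the detection frame.

```python
# Point-in-region test for the counting coordinate point. Following the example
# above, the upper-right corner of the detection frame is used as that point.
def count_point(box):
    """Return the counting coordinate point of an (x1, y1, x2, y2) detection box."""
    x1, y1, x2, y2 = box
    return (x2, y1)  # upper-right corner

def point_in_region(point, region):
    """True if the point falls inside an (x1, y1, x2, y2) counting region."""
    px, py = point
    rx1, ry1, rx2, ry2 = region
    return rx1 <= px <= rx2 and ry1 <= py <= ry2
```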
S103: and determining the traffic flow information of each type of vehicle according to the running track information and the counting result of the counting area.
In a specific implementation, this embodiment sets up two counting regions, a left counting region and a right counting region. When a vehicle travels from left to right it passes first through the left counting region and then through the right counting region; when it travels from right to left it passes first through the right counting region and then through the left counting region.
Taking a vehicle travelling from left to right as an example, a counting array is initialised before counting begins; this array stores the numbers of the vehicles that have already been counted.
When the coordinate point at the upper-right corner of a vehicle detection frame appears in the left counting region, it is first checked whether the number of that vehicle already exists in the counting array. If not, the vehicle number is added to the counting array and the count of the corresponding vehicle class is increased by one; if it already exists, the vehicle has been counted and is not counted again.
To avoid missed counts and miscounts in one counting region caused by occluding vehicles, the right counting region is used to match and verify the count, which improves counting accuracy. When the coordinate point at the upper-right corner of a vehicle detection frame appears in the right counting region, it is again checked whether the vehicle number already exists in the counting array. If not, the vehicle number is added to the counting array and the count of the corresponding vehicle class is increased by one; if it exists, the vehicle has already been counted in the left or right counting region and is not counted again. A simplified sketch of this logic is given below.
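The sketch illustrates the two-region counting in simplified form; the track numbers are assumed to come from the multi-target tracking model, and the counting array is represented as a set of already-counted vehicle numbers.

```python
# Simplified sketch of the two-region counting logic described above: a vehicle
# is counted the first time its numbered track hits either counting region and
# is never counted again afterwards.
class DirectionalCounter:
    def __init__(self, left_region, right_region):
        self.regions = (left_region, right_region)  # (x1, y1, x2, y2) rectangles
        self.counted_ids = set()                    # "counting array" of vehicle numbers
        self.counts_by_type = {}                    # vehicle type -> number counted

    @staticmethod
    def _hits(point, region):
        px, py = point
        x1, y1, x2, y2 = region
        return x1 <= px <= x2 and y1 <= py <= y2

    def update(self, track_id, box, vehicle_type):
        x1, y1, x2, y2 = box
        point = (x2, y1)                            # upper-right corner of the detection frame
        if any(self._hits(point, r) for r in self.regions):
            if track_id not in self.counted_ids:    # not yet counted in either region
                self.counted_ids.add(track_id)
                self.counts_by_type[vehicle_type] = self.counts_by_type.get(vehicle_type, 0) + 1
```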
As an optional embodiment, after the traffic flow information of each type of vehicle is determined from the running track information and the counting results of the counting areas, the method further includes: displaying the target video on a display interface and displaying the traffic flow information of each type of vehicle at a specific position of the display interface. Here the traffic flow information includes the total number of vehicles and the number of vehicles of each type.
In a specific implementation, a GUI display interface is provided; after the 'recognise video' control is clicked, the video content is displayed on the GUI, and the total number of vehicles and the number of each type of vehicle are shown in real time above the video content. The GUI displays the counting results in real time, which greatly facilitates use and simplifies operation.
When a target vehicle is detected, the invention tracks it to obtain its running track information and sets at least two non-overlapping counting areas. When the target vehicle is judged to have entered a counting area, its type is recognised and counted, and the traffic flow information of each type of vehicle is then determined from the running track information and the counting results of the counting areas. In this way, miscounting caused by occlusion or identity exchange between vehicles is avoided during classification counting; at the same time, because target vehicles are tracked once detected and their types are identified when they enter a counting area, the accuracy of classification counting is improved and accurate counting of the various types of vehicles on a traffic trunk line is achieved.
Used for trunk-road traffic detection, the method can greatly reduce labour costs and improve the working efficiency of traffic police. A fixed reference value can also be compared with the value measured by a traffic detection device to judge how the device is performing, so that equipment that does not meet the detection standard can be maintained in time.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a vehicle classification and counting system based on multi-target detection and tracking according to an embodiment of the present invention. As shown in fig. 6, an embodiment of the present invention further provides a vehicle classification and counting system based on multi-target detection and tracking, where the vehicle classification and counting system 600 includes:
the video acquisition module 610 is configured to acquire a target video to be detected and at least two non-overlapping counting regions;
the classification counting module 620 is configured to detect a target vehicle in each frame of image of the target video, perform tracking processing on the target vehicle to obtain running track information of the target vehicle, determine the type of the target vehicle when it is determined that the target vehicle enters a counting area, and count the type of the target vehicle;
and the determining module 630 is configured to determine traffic flow information of each type of vehicle according to the running track information and the counting result of the counting area.
As an optional embodiment, the vehicle classification and counting system 600 further includes:
and the display module is used for displaying the target video on a display interface and displaying the traffic flow information of each type of vehicle at a specific position of the display interface.
The vehicle classification and counting system based on multi-target detection and tracking provided by this embodiment is used for implementing the vehicle classification and counting method based on multi-target detection and tracking, and therefore, a specific implementation manner of the vehicle classification and counting system based on multi-target detection and tracking can be found in the foregoing embodiment section of the vehicle classification and counting method based on multi-target detection and tracking, and is not described herein again.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A vehicle classification counting method based on multi-target detection and tracking is characterized by comprising the following steps:
acquiring a target video to be detected and at least two non-overlapping counting areas;
detecting a target vehicle in each frame of image of the target video, tracking the target vehicle to obtain the running track information of the target vehicle, determining the type of the target vehicle when the target vehicle is judged to enter a counting area, and counting the type of the target vehicle;
and determining the traffic flow information of each type of vehicle according to the running track information and the counting result of the counting area.
2. The multi-target detection and tracking based vehicle classification counting method according to claim 1, characterized in that a multi-target detection model is used to detect a target vehicle in each frame of image of the target video, a detection frame of the target vehicle is obtained, the detection frame is input into a multi-target tracking model to obtain motion trail information of the target vehicle, and when the target vehicle is determined to enter a counting area, the type of the target vehicle is determined by using a vehicle classification model.
3. The multi-target detection and tracking-based vehicle classification counting method according to claim 2, characterized in that vehicle video samples with a plurality of vehicle shooting visual angles in different scenes are obtained and processed to obtain a vehicle image training set; and respectively training the multi-target detection model, the multi-target tracking model and the vehicle classification model through the vehicle image training set.
4. The multi-target detection and tracking based vehicle classification counting method according to claim 3, characterized in that vehicle images in a vehicle image training set are obtained, and target vehicles and non-target vehicles in the vehicle images are respectively labeled; constructing a multi-target detection model based on a YOLOv5 network, wherein the multi-target detection model comprises a convolutional layer, a C3 module and an SSPF module; and training parameters of the multi-target detection model by using the marked vehicle images.
5. The multi-target detection and tracking-based vehicle classification and counting method as claimed in claim 2, wherein when the multi-target detection model detects a target vehicle in any frame of a target video, the multi-target tracking model tracks the position of the target vehicle in a frame image from any frame to obtain the motion track information of the target vehicle.
6. The multi-target detection and tracking-based vehicle classification and counting method according to claim 5, wherein in the process of tracking the target vehicle, the multi-target tracking model extracts appearance features of the target vehicle in the detection frames of the previous and subsequent frames, compares the appearance features with appearance features of the target vehicle in the detection frame of the current frame, identifies the same vehicle in different frames, and generates a uniform number for the same vehicle in different frames.
7. The multi-target detection and tracking based vehicle classification counting method according to claim 2, wherein a preset coordinate point of a detection frame of the target vehicle is determined as a count coordinate point, and when the count coordinate point falls into a count area, it is determined that the target vehicle enters the count area.
8. The multi-target detection and tracking based vehicle classification counting method according to claim 1, further comprising, after determining the traffic flow information of each type of vehicle according to the travel track information and the counting result of the counting area: the target video is displayed on a display interface, and the traffic information of each type of vehicle is displayed at a specific position of the display interface.
9. A vehicle classification and counting system based on multi-target detection and tracking, comprising:
the video acquisition module is used for acquiring a target video to be detected and at least two non-overlapping counting areas;
the classification counting module is used for detecting a target vehicle in each frame of image of the target video, tracking the target vehicle to obtain the running track information of the target vehicle, determining the type of the target vehicle when the target vehicle is judged to enter a counting area, and counting the type of the target vehicle;
and the determining module is used for determining the traffic flow information of each type of vehicle according to the running track information and the counting result of the counting area.
10. The multi-target detection and tracking based vehicle classification counting system of claim 9, further comprising:
and the display module is used for displaying the target video on a display interface and displaying the traffic flow information of each type of vehicle at a specific position of the display interface.
CN202210918749.9A 2022-08-02 2022-08-02 Vehicle classification counting method and system based on multi-target detection and tracking Pending CN114973169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210918749.9A CN114973169A (en) 2022-08-02 2022-08-02 Vehicle classification counting method and system based on multi-target detection and tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210918749.9A CN114973169A (en) 2022-08-02 2022-08-02 Vehicle classification counting method and system based on multi-target detection and tracking

Publications (1)

Publication Number Publication Date
CN114973169A (en) 2022-08-30

Family

ID=82970121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210918749.9A Pending CN114973169A (en) 2022-08-02 2022-08-02 Vehicle classification counting method and system based on multi-target detection and tracking

Country Status (1)

Country Link
CN (1) CN114973169A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524474A (en) * 2023-07-04 2023-08-01 武汉大学 Vehicle target detection method and system based on artificial intelligence

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013026205A1 (en) * 2011-08-25 2013-02-28 Harman International (Shanghai) Management Co., Ltd. System and method for detecting and recognizing rectangular traffic signs
CN109919072A (en) * 2019-02-28 2019-06-21 桂林电子科技大学 Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN111368938A (en) * 2020-03-19 2020-07-03 南京因果人工智能研究院有限公司 Multi-target vehicle tracking method based on MDP
CN112560932A (en) * 2020-12-10 2021-03-26 山东建筑大学 Vehicle weight identification method based on dual-branch network feature fusion
CN112750150A (en) * 2021-01-18 2021-05-04 西安电子科技大学 Vehicle flow statistical method based on vehicle detection and multi-target tracking
WO2021238062A1 (en) * 2020-05-29 2021-12-02 北京百度网讯科技有限公司 Vehicle tracking method and apparatus, and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013026205A1 (en) * 2011-08-25 2013-02-28 Harman International (Shanghai) Management Co., Ltd. System and method for detecting and recognizing rectangular traffic signs
CN109919072A (en) * 2019-02-28 2019-06-21 桂林电子科技大学 Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN111368938A (en) * 2020-03-19 2020-07-03 南京因果人工智能研究院有限公司 Multi-target vehicle tracking method based on MDP
WO2021238062A1 (en) * 2020-05-29 2021-12-02 北京百度网讯科技有限公司 Vehicle tracking method and apparatus, and electronic device
CN112560932A (en) * 2020-12-10 2021-03-26 山东建筑大学 Vehicle weight identification method based on dual-branch network feature fusion
CN112750150A (en) * 2021-01-18 2021-05-04 西安电子科技大学 Vehicle flow statistical method based on vehicle detection and multi-target tracking

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XIUSHAN NIE et al.: "Re-ranking vehicle re-identification with orientation-guide query expansion", International Journal of Distributed Sensor Networks *
Yao Juan: "Research on multi-target tracking algorithms and their application to vehicle counting", China Master's Theses Full-text Database, Engineering Science and Technology II *
Wang Hui et al.: "Multi-lane traffic flow statistics and vehicle tracking method based on YOLOv3", Foreign Electronic Measurement Technology *
Sha Jieyun: "Research on vehicle detection technology for intelligent inspection of 'three-span' transmission lines", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524474A (en) * 2023-07-04 2023-08-01 武汉大学 Vehicle target detection method and system based on artificial intelligence
CN116524474B (en) * 2023-07-04 2023-09-15 武汉大学 Vehicle target detection method and system based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN109657552B (en) Vehicle type recognition device and method for realizing cross-scene cold start based on transfer learning
CN110119726B (en) Vehicle brand multi-angle identification method based on YOLOv3 model
CN109190444B (en) Method for realizing video-based toll lane vehicle feature recognition system
Bach et al. Deep convolutional traffic light recognition for automated driving
CN109993138A (en) A kind of car plate detection and recognition methods and device
CN108830254B (en) Fine-grained vehicle type detection and identification method based on data balance strategy and intensive attention network
CN111325146A (en) Truck type and axle type identification method and system
CN114170580A (en) Highway-oriented abnormal event detection method
CN114627447A (en) Road vehicle tracking method and system based on attention mechanism and multi-target tracking
CN112990065A (en) Optimized YOLOv5 model-based vehicle classification detection method
CN109993032B (en) Shared bicycle target identification method and device and camera
CN113450573A (en) Traffic monitoring method and traffic monitoring system based on unmanned aerial vehicle image recognition
CN114973169A (en) Vehicle classification counting method and system based on multi-target detection and tracking
Cruz et al. Classified counting and tracking of local vehicles in manila using computer vision
CN117037085A (en) Vehicle identification and quantity statistics monitoring method based on improved YOLOv5
CN111832463A (en) Deep learning-based traffic sign detection method
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
Bi et al. Real-time traffic flow statistics based on dual-granularity classification
CN114898309A (en) City intelligent inspection vehicle system and inspection method based on visual AI technology
CN113850112A (en) Road condition identification method and system based on twin neural network
CN111161542B (en) Vehicle identification method and device
CN113283303A (en) License plate recognition method and device
Prawinsankar et al. Traffic Congession Detection through Modified Resnet50 and Prediction of Traffic using Clustering
CN113378787B (en) Intelligent traffic electronic prompting device detection method and system based on multi-feature vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination