Disclosure of Invention
The invention aims to overcome the above deficiencies of the prior art and provides a deep-learning-based method and system for detecting traffic flow from tunnel side-mounted video.
The technical scheme for solving the technical problems is as follows:
a tunnel side-mounted video traffic flow detection method based on deep learning comprises the following steps:
acquiring tunnel side-mounted video data;
detecting the tunnel traffic parameters of the video data in real time through a trained deep learning vehicle detection model;
and obtaining the vehicle information passing through the camera and the tunnel traffic parameters.
The invention has the beneficial effects that: the deep learning vehicle detection model of this scheme detects the traffic flow parameters of the tunnel side-mounted video in real time, and can extract tunnel traffic parameters including flow, vehicle speed and inter-vehicle distance in real time, which is of practical value for tunnel traffic operation monitoring, prediction, early warning, management and control. Meanwhile, the vehicle type can be identified and a traffic monitoring image can be provided; the composite performance of the method exceeds that of traditional detection methods such as loop coils and ultrasonic detectors. By combining the high-definition and standard-definition cameras already installed in the tunnel with a deep learning vehicle detection model, various traffic flow parameters are detected at low cost, with wide coverage and high safety.
Further, the method further comprises:
constructing a deep learning vehicle detection model based on YOLOv4;
setting preset model parameters of the deep learning vehicle detection model, training the model with the image data in the training library as input, testing the trained model on the image data in the test library, and, if the test result does not meet a preset threshold condition, adjusting the preset model parameters and continuing to train the model;
and if the test result meets the preset threshold condition, finishing training and obtaining the trained deep learning vehicle detection model.
The beneficial effect of adopting this further scheme is that: the deep learning vehicle detection model obtained by this scheme remains accurate while being lightweight, reducing the computational bottleneck and the memory cost; the feature extraction capability is improved, as are the regression speed and regression precision of the prediction box.
Further, the method further comprises:
constructing a training library of the vehicle according to the image data of the vehicle running in the tunnel;
and selecting partial image data from the training library according to a preset proportion to serve as a test library.
The beneficial effect of adopting this further scheme is that: it provides a data basis for training and testing the deep learning vehicle detection model.
Further, the method further comprises: when the number of images in the training library is insufficient, expanding the training library with a preset data sample expansion method.
The beneficial effect of adopting this further scheme is that: it expands the data volume when the number of images in the training library is insufficient.
Further, the preset data sample expansion method includes: color dithering or rotation transformation.
Another technical solution of the present invention for solving the above technical problems is as follows:
a tunnel side-mounted video traffic flow detection system based on deep learning, comprising: a data acquisition module, a real-time detection module and a traffic information acquisition module;
the data acquisition module is used for acquiring tunnel side-mounted video data;
the real-time detection module is used for detecting the tunnel traffic parameters of the video data in real time through a trained deep learning vehicle detection model;
the traffic information acquisition module is used for acquiring vehicle information passing through the camera and tunnel traffic parameters.
The invention has the beneficial effects that: the deep learning vehicle detection model of this scheme detects the traffic flow parameters of the tunnel side-mounted video in real time, and can extract tunnel traffic parameters including flow, vehicle speed and inter-vehicle distance in real time, which is of practical value for tunnel traffic operation monitoring, prediction, early warning, management and control. Meanwhile, the vehicle type can be identified and a traffic monitoring image can be provided; the composite performance of the method exceeds that of traditional detection methods such as loop coils and ultrasonic detectors. By combining the high-definition and standard-definition cameras already installed in the tunnel with a deep learning vehicle detection model, various traffic flow parameters are detected at low cost, with wide coverage and high safety.
Further, the system further comprises a vehicle detection model acquisition module, configured to construct a deep learning vehicle detection model based on YOLOv4;
to set preset model parameters of the deep learning vehicle detection model, train the model with the image data in the training library as input, test the trained model on the image data in the test library, and, if the test result does not meet a preset threshold condition, adjust the preset model parameters and continue to train the model;
and, if the test result meets the preset threshold condition, to finish training and obtain the trained deep learning vehicle detection model.
The beneficial effect of adopting this further scheme is that: the deep learning vehicle detection model obtained by this scheme remains accurate while being lightweight, reducing the computational bottleneck and the memory cost; the feature extraction capability is improved, as are the regression speed and regression precision of the prediction box.
Further, the system further comprises a training test library establishing module, configured to construct a training library of vehicles from the image data of vehicles running in the tunnel, and to select partial image data from the training library in a preset proportion to serve as a test library.
The beneficial effect of adopting this further scheme is that: it provides a data basis for training and testing the deep learning vehicle detection model.
Further, the training test library establishing module is specifically configured to, when the number of images in the training library is insufficient, perform data sample expansion on the training library by using a preset data sample expansion method.
The beneficial effect of adopting this further scheme is that: it expands the data volume when the number of images in the training library is insufficient.
Further, the preset data sample expansion method includes: color dithering or rotation transformation.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth to illustrate, but are not to be construed to limit the scope of the invention.
As shown in fig. 1, a method for detecting a tunnel side-mounted video traffic flow based on deep learning according to an embodiment of the present invention includes:
acquiring tunnel side-mounted video data;
detecting the tunnel traffic parameters of the video data in real time through a trained deep learning vehicle detection model; the deep learning vehicle detection model can be constructed based on YOLOv4, and the YOLOv4 model is combined with tunnel side-mounted video data to be used for real-time detection of traffic parameters.
In one embodiment, constructing the data set may specifically include: the image reference library is divided into a "car" training library and a "bus" training library according to the type of vehicle traveling in the tunnel; sample images are shown in fig. 4 ("car") and fig. 5 ("bus"). If the number of images in a training library is insufficient, each training library can be expanded with a self-developed program, typically using color dithering, rotation transformation and similar augmentations, until the number of images required for model training is reached; if the number already meets the training requirement, the expansion step can be omitted. Partial images, for example 100%, 50% or 30% of the images in each training library, are then selected to form two test libraries, a "car" test library and a "bus" test library; the proportion can be determined according to the mix of vehicle types in the traffic flow. The sample image data of each type are shown in table 1:
TABLE 1
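As an illustration of the expansion step, the sketch below applies the two augmentation modes named above, color dithering and rotation, to grow a training library to a required size. It is a minimal pure-Python sketch with illustrative function names; the patent's self-developed expansion program is not disclosed, and a real pipeline would also need to transform the bounding-box annotations alongside the images.

```python
import random

def color_dither(img, max_shift=30):
    """Randomly shift the colour channels to simulate lighting variation.
    img is a list of rows, each row a list of (r, g, b) tuples."""
    shift = random.randint(-max_shift, max_shift)
    clamp = lambda v: max(0, min(255, v + shift))
    return [[tuple(clamp(c) for c in px) for px in row] for row in img]

def rotate90(img):
    """Rotate the image 90 degrees clockwise (a simple instance of
    rotation transformation)."""
    return [list(col) for col in zip(*img[::-1])]

def expand_library(images, target_count):
    """Grow a training library to target_count images by randomly
    augmenting existing samples."""
    out = list(images)
    while len(out) < target_count:
        src = random.choice(images)
        out.append(color_dither(src) if random.random() < 0.5 else rotate90(src))
    return out
```

In practice the augmentation would run once, offline, before training, so the expanded library can be inspected and labelled consistently.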
In one embodiment, deep neural network model training and testing may include: the input picture size may be set to 640 × 480, the size of a standard VGA image; CIoU is used as the loss function; Batch may be set to 16; the maximum number of iterations may be set to 40000; and the learning rate may be set to 0.001 and reduced to one tenth of its current value at iterations 32000 and 36000, thereby completing the network training of YOLOv4. The YOLOv4 model training and test results are shown in fig. 6.
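The learning-rate schedule described above (base rate 0.001, dropped to one tenth at iterations 32000 and 36000) matches the piecewise "steps" policy commonly used when training YOLO models in Darknet, and can be sketched as:

```python
def learning_rate(iteration, base_lr=0.001, steps=(32000, 36000), scale=0.1):
    """Piecewise-constant schedule: the rate drops to one tenth of its
    current value at each listed iteration."""
    lr = base_lr
    for step in steps:
        if iteration >= step:
            lr *= scale
    return lr
```

For example, the rate stays at 0.001 for the first 32000 iterations, then runs at 0.0001 until iteration 36000, and finishes at 0.00001.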
in one embodiment, vehicle detection may include: in a deep learning vehicle detection model, a YOLO algorithm adopts a single CNN model to realize end-to-end target detection, firstly, images resize are input to 448x 448 and then sent into a CNN network, finally, a network prediction result is processed to obtain a detected target, and then the detected target is sent out after being selected through a boundary frame, so that a running vehicle is detected, wherein the deep neural network model and the CNN model are general convolutional neural network models and are both components used by the YOLOv4 model. In which the vehicle is detected as shown in figure 7.
And obtaining the vehicle information passing through the camera and the tunnel traffic parameters.
It should be noted that, in one embodiment, the traffic parameter detection may include: setting double virtual lines on the lane and extracting the required information whenever a vehicle is detected passing through the virtual lines. The double virtual line distribution is shown in fig. 8. Wherein:
the vehicle flow detection may be performed by detecting that the double virtual lines count when the vehicle center point passes through the main virtual line and the sub virtual line, and performing mutual correction using the double coils. The formula is as follows:
wherein: q is the detected flow, q1 is the flow detected at the main virtual line, and q2 is the flow detected at the sub virtual line.
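A sketch of the double-virtual-line counting logic is given below. Since the correction formula itself is not reproduced in the text, `fused_flow` uses a simple average of the two counts as one plausible mutual-correction rule, and the class and function names are illustrative.

```python
class VirtualLineCounter:
    """Count vehicles whose centre point crosses a horizontal virtual line
    (image y coordinate grows downwards)."""
    def __init__(self, line_y):
        self.line_y = line_y
        self.count = 0
        self._last_y = {}              # track id -> previous centre y

    def update(self, track_id, center_y):
        prev = self._last_y.get(track_id)
        if prev is not None and prev < self.line_y <= center_y:
            self.count += 1            # the centre crossed the line this frame
        self._last_y[track_id] = center_y

def fused_flow(q1, q2):
    """One possible mutual-correction rule: average the main-line and
    sub-line counts (an assumption; the patent's formula is not shown)."""
    return (q1 + q2) / 2
```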
The vehicle speed measuring method may be as follows: when the vehicle enters the main virtual line, i.e., at the rising edge of the vehicle detection signal, the main virtual line sends a notification signal to the corresponding sub virtual line.
The sub virtual line has two working states: an idle state and a timing state. When the sub virtual line receives the notification sent by the main virtual line, it enters the timing state regardless of its current state, and the timer start point t is set to the current time. When the sub virtual line detects a vehicle entering, if it is currently in the idle state, no operation is performed; if it is in the timing state, it returns to the idle state, and the speed of the vehicle at this position is estimated as follows:
wherein: t1 is the current time; t2 is the time at which the vehicle passed through the main virtual line.
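The main/sub line state machine described above can be sketched as follows. The spacing between the two virtual lines, needed to turn the transit time t1 − t2 into a speed, is an assumed installation parameter, since the text does not reproduce the speed formula itself.

```python
class SpeedEstimator:
    """Main/sub virtual line pair a known distance apart; the speed is the
    line spacing divided by the transit time. line_spacing (metres) is an
    assumed installation parameter."""
    IDLE, TIMING = "idle", "timing"

    def __init__(self, line_spacing=10.0):
        self.line_spacing = line_spacing
        self.state = self.IDLE
        self.t_main = None             # t2: time of entry at the main line

    def main_line_entered(self, t):
        # rising edge of the detection signal: notify the sub line
        self.state = self.TIMING
        self.t_main = t

    def sub_line_entered(self, t):
        if self.state == self.IDLE:
            return None                # no pending vehicle: do nothing
        self.state = self.IDLE
        return self.line_spacing / (t - self.t_main)   # spacing / (t1 - t2)
```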
The inter-vehicle distance is measured by calculating, at the sub virtual line, the time difference between two adjacent vehicles, with the formula:
h = (tB - tA) * vA,
wherein: h is the distance between vehicle fronts (the headway distance); tA is the passing time of the front vehicle; tB is the passing time of the rear vehicle; vA is the speed of the front vehicle.
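The headway formula translates directly into code:

```python
def headway_distance(t_a, t_b, v_a):
    """h = (tB - tA) * vA: distance between the fronts of two successive
    vehicles, from their sub-virtual-line passing times and the front
    vehicle's speed."""
    return (t_b - t_a) * v_a
```

For instance, a front vehicle travelling at 15 m/s followed 2 s later by the next vehicle gives a headway of 30 m.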
It should be noted that, in one embodiment, as shown in fig. 3, the tunnel side-mounted video traffic flow detection may include: constructing a data set and a deep learning vehicle detection model, setting the parameters of the deep network model, and training and testing the deep network model with the training library and test library constructed from the data set. The method may also include detecting the traffic parameters on the basis of vehicle detection. The tunnel traffic parameters may include flow, vehicle speed, and inter-vehicle distance. The vehicle detection information, i.e., the vehicle information, may include: passing time, speed, and inter-vehicle distance.
The deep learning vehicle detection model of this scheme detects the traffic flow parameters of the tunnel side-mounted video in real time, and can extract tunnel traffic parameters including flow, vehicle speed and inter-vehicle distance in real time, which is of practical value for tunnel traffic operation monitoring, prediction, early warning, management and control. Meanwhile, the vehicle type can be identified and a traffic monitoring image can be provided; the composite performance of the method exceeds that of traditional detection methods such as loop coils and ultrasonic detectors. By combining the high-definition and standard-definition cameras already installed in the tunnel with a deep learning vehicle detection model, various traffic flow parameters are detected at low cost, with wide coverage and high safety.
Preferably, in any of the above embodiments, further comprising:
constructing a deep learning vehicle detection model based on YOLOv4;
setting preset model parameters of the deep learning vehicle detection model, training the model with the image data in the training library as input, testing the trained model on the image data in the test library, and, if the test result does not meet a preset threshold condition, adjusting the preset model parameters and continuing to train the model. The preset model parameters may include: the number of images, image width, image height, weight attenuation (regularization) coefficient, maximum number of iterations, learning rate, learning rate variation steps, learning rate variation factor, and the like. If the test result meets the preset threshold condition, training is finished and the trained deep learning vehicle detection model is obtained. The preset threshold condition may be a set accuracy for the traffic flow parameter detection; training ends once the set accuracy standard is met.
The deep learning vehicle detection model obtained by this scheme remains accurate while being lightweight, reducing the computational bottleneck and the memory cost; the feature extraction capability is improved, as are the regression speed and regression precision of the prediction box.
Preferably, in any of the above embodiments, further comprising:
constructing a training library of the vehicle according to the image data of the vehicle running in the tunnel;
and selecting partial image data from the training library in a preset proportion to serve as a test library. The preset proportion may be, for example, 100%, 50% or 30%, chosen according to the application requirements.
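Selecting the test library by a preset proportion might look like the minimal sketch below; the function name is illustrative, and a fixed seed is used only to make the selection reproducible.

```python
import random

def select_test_library(training_library, proportion=0.3, seed=0):
    """Select a preset proportion (e.g. 1.0, 0.5 or 0.3) of the training
    library, at random, to serve as the test library."""
    rng = random.Random(seed)
    n = round(len(training_library) * proportion)
    return rng.sample(training_library, n)
```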
The scheme provides a data basis for training and testing the deep learning vehicle detection model.
Preferably, in any of the above embodiments, further comprising: when the number of images in the training library is insufficient, expanding the training library with a preset data sample expansion method.
This scheme expands the data volume when the number of images in the training library is insufficient.
Preferably, in any of the above embodiments, the preset data expansion method includes: color dithering or rotation transformation.
In one embodiment, as shown in fig. 2, a deep learning-based tunnel side-mounted video traffic flow detection system includes: the system comprises a data acquisition module 11, a real-time detection module 12 and a traffic information acquisition module 13;
the data acquisition module 11 is used for acquiring tunnel side-mounted video data;
the real-time detection module 12 is used for detecting the tunnel traffic parameters of the video data in real time through a trained deep learning vehicle detection model;
the traffic information obtaining module 13 is configured to obtain vehicle information passing through the camera and tunnel traffic parameters.
The deep learning vehicle detection model of this scheme detects the traffic flow parameters of the tunnel side-mounted video in real time, and can extract tunnel traffic parameters including flow, vehicle speed and inter-vehicle distance in real time, which is of practical value for tunnel traffic operation monitoring, prediction, early warning, management and control. Meanwhile, the vehicle type can be identified and a traffic monitoring image can be provided; the composite performance of the method exceeds that of traditional detection methods such as loop coils and ultrasonic detectors. By combining the high-definition and standard-definition cameras already installed in the tunnel with a deep learning vehicle detection model, various traffic flow parameters are detected at low cost, with wide coverage and high safety.
Preferably, in any of the above embodiments, further comprising: a vehicle detection model acquisition module, configured to construct a deep learning vehicle detection model based on YOLOv4;
to set preset model parameters of the deep learning vehicle detection model, train the model with the image data in the training library as input, test the trained model on the image data in the test library, and, if the test result does not meet a preset threshold condition, adjust the preset model parameters and continue to train the model;
and, if the test result meets the preset threshold condition, to finish training and obtain the trained deep learning vehicle detection model.
The deep learning vehicle detection model obtained by this scheme remains accurate while being lightweight, reducing the computational bottleneck and the memory cost; the feature extraction capability is improved, as are the regression speed and regression precision of the prediction box.
Preferably, in any of the above embodiments, further comprising: a training test library establishing module, configured to construct a training library of vehicles from the image data of vehicles running in the tunnel, and to select partial image data from the training library in a preset proportion to serve as a test library.
The scheme provides a data basis for training and testing the deep learning vehicle detection model.
Preferably, in any of the above embodiments, the training test library establishing module is specifically configured to, when the number of images in the training library is insufficient, perform data sample expansion on the training library by using a preset data sample expansion method.
This scheme expands the data volume when the number of images in the training library is insufficient.
Preferably, in any of the above embodiments, the preset data expansion method includes: color dithering or rotation transformation.
It is understood that some or all of the alternative embodiments described above may be included in some embodiments.
It should be noted that the above embodiments are product embodiments corresponding to the previous method embodiments, and for the description of each optional implementation in the product embodiments, reference may be made to corresponding descriptions in the above method embodiments, and details are not described here again.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.