CN114898326A - Method, system and equipment for detecting reverse running of one-way vehicle based on deep learning - Google Patents


Info

Publication number
CN114898326A
CN114898326A
Authority
CN
China
Prior art keywords
vehicle
target
frame
detection
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210235720.0A
Other languages
Chinese (zh)
Other versions
CN114898326B (en)
Inventor
郑庆祥
金积德
黄荣鹏
田亮
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology (WUT)
Priority to CN202210235720.0A
Publication of CN114898326A
Application granted
Publication of CN114898326B
Legal status: Active
Anticipated expiration


Classifications

    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06N3/045 Combinations of networks
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/292 Multi-camera tracking
    • G06T7/60 Analysis of geometric attributes
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/40 Extraction of image or video features
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/764 Recognition or understanding using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Recognition or understanding using neural networks
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30241 Trajectory
    • G06V2201/07 Target detection
    • G06V2201/08 Detecting or categorising vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method, a system and a device for detecting wrong-way driving of vehicles on one-way roads based on deep learning. A wrong-way judgment area is set in the video pixel coordinate system; road monitoring video data are acquired in real time and decoded into video frame images, which are input into a vehicle detection network model that detects the vehicles driving on the road and outputs their vehicle information in video pixel coordinates, comprising position and size. The detected vehicles are tracked with the multi-target tracking algorithm DeepSORT to obtain their driving trajectories. Finally, whether a tracked target vehicle drives the wrong way is judged from the order in which it passes the different position areas of the judgment area. The detection results of the invention are stable, fast and accurate; wrong-way driving can be judged in real time, repeated judgment of the same vehicle is effectively prevented, and the wrong-way misjudgment rate is reduced.

Description

Method, system and equipment for detecting reverse running of one-way vehicle based on deep learning
Technical Field
The invention belongs to the technical fields of intelligent traffic monitoring and computer vision, and relates to a method, a system and a device for detecting wrong-way driving of vehicles on one-way roads, in particular to a deep-learning-based method, system and electronic device for detecting wrong-way vehicle driving on one-way roads.
Background
Although road safety receives continuous attention and large numbers of cameras have been installed on roads for monitoring, most monitoring still stops at the video-recording stage, with wrong-way driving observed manually. As the volume of monitoring data grows and traffic becomes dense, manual review consumes large amounts of manpower and material resources, lacks real-time capability, and is inefficient.
Existing vehicle wrong-way detection methods include the following. (1) Wrong-way detection based on optical-flow dynamic target detection can detect object motion, but it is sensitive to noise, has poor real-time performance, and is prone to missed and false alarms. (2) Wrong-way detection based on a deep-learning neural network trains a detection model on a data set of vehicle fronts and rears and then judges the driving direction of vehicles on the road; although real-time performance is good, the detection capability is limited, and fronts or rears that were not learned, or that are highly similar, can be misclassified, leading to wrong judgments. (3) Wrong-way judgment based on a target tracking algorithm: illumination changes or occlusion cause the tracker to switch the target vehicle's ID, and a switched ID is, for the algorithm, a new vehicle in the monitoring range, so the information before and after the switch is inconsistent or lost and judgments are missed. Moreover, because prior-art judgment is performed continuously along the whole road, a tracked vehicle that has already been judged can be judged a second time, or even many times, after an ID switch, so practicality is low.
Therefore, for detecting wrong-way vehicles on one-way roads, a detection method with stable results, high speed, high precision, low misjudgment and missed-judgment rates, strong practicality and real-time capability is urgently needed to assist traffic police in regulating one-way road traffic.
Disclosure of Invention
To overcome the defects of existing detection methods, the invention provides a deep-learning-based method, system and device for detecting wrong-way driving on one-way roads that offer high detection speed and high precision, and that intelligently judge, through real-time multi-vehicle target tracking in the monitoring video, whether a vehicle driving on the one-way road violates the driving direction.
The method adopts the technical scheme that: a method for detecting the reverse running of a one-way vehicle based on deep learning comprises the following steps:
step 1: setting a wrong-way judgment area in the video pixel coordinate system; the judgment area consists of two adjacent, disjoint position areas and spans the road in the video; each position area is a set of pixel points, the two sets being adjacent and disjoint, and both areas must span the road so that every vehicle driving on the road passes through them in succession;
step 2: acquiring road monitoring video data in real time to obtain video frame image data; inputting the video frame image data into the vehicle detection network model to detect the vehicles driving on the road, obtaining each vehicle's information in video pixel coordinates, comprising its position and size;
step 3: tracking each vehicle detected in step 2 throughout the video monitoring range with the multi-target tracking algorithm DeepSORT, obtaining its full driving trajectory and its trajectory inside the wrong-way judgment area;
step 4: judging whether a vehicle drives the wrong way; when a tracked target-ID vehicle passes through the two adjacent, disjoint position areas of the judgment area, its trajectory inside the area in video pixel coordinates is obtained from the tracking of step 3; the driving direction of the tracked vehicle is determined from the order in which it passes the two adjacent, disjoint pixel-point sets, the vehicle is judged wrong-way or not, and each vehicle is judged exactly once per judgment area.
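The order-of-passage rule of step 4 can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the function names, the "A"/"B" area labels and the encoding of the legal direction are the editor's assumptions.

```python
# Illustrative sketch of the step-4 rule: a vehicle that passes sub-area A
# before sub-area B drives with the legal direction; B before A is wrong-way.
# Names and labels are the editor's own, not from the patent text.

def make_judge(legal_order=("A", "B")):
    seen = {}       # track_id -> list of sub-areas passed, in passage order
    judged = set()  # IDs already judged once (prevents repeat judgments)

    def update(track_id, area):
        """Call when track `track_id` is observed inside sub-area `area`.
        Returns True (wrong-way) or False (legal) exactly once per ID,
        and None on every other call."""
        if track_id in judged:
            return None
        order = seen.setdefault(track_id, [])
        if area not in order:
            order.append(area)
        if len(order) == 2:
            judged.add(track_id)
            return tuple(order) != tuple(legal_order)  # True => wrong-way
        return None

    return update
```

Because every ID is judged exactly once, an ID switch at worst produces one extra judgment for the new ID rather than repeated judgments of the same vehicle.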
The technical scheme adopted by the system of the invention is as follows: a one-way vehicle reverse detection system based on deep learning comprises the following modules:
module 1, used to set a wrong-way judgment area in the video pixel coordinate system; the judgment area consists of two adjacent, disjoint position areas and spans the road in the video; each position area is a set of pixel points, the two sets being adjacent and disjoint, and both areas must span the road so that every vehicle driving on the road passes through them in succession;
module 2, used to acquire road monitoring video data in real time to obtain video frame image data, and to input the video frame image data into the vehicle detection network model to detect the vehicles driving on the road, obtaining each vehicle's information in video pixel coordinates, comprising its position and size;
module 3, used to track each vehicle detected by module 2 throughout the video monitoring range with the multi-target tracking algorithm DeepSORT, obtaining its full driving trajectory and its trajectory inside the wrong-way judgment area;
module 4, used to judge whether a vehicle drives the wrong way; when a tracked target-ID vehicle passes through the two adjacent, disjoint position areas of the judgment area, its trajectory inside the area in video pixel coordinates is obtained from the tracking of module 3; the driving direction of the tracked vehicle is determined from the order in which it passes the two adjacent, disjoint pixel-point sets, the vehicle is judged wrong-way or not, and each vehicle is judged exactly once per judgment area.
The technical scheme adopted by the equipment of the invention is as follows: a one-way vehicle reverse detection apparatus based on deep learning, comprising:
one or more processors;
a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the deep-learning-based one-way vehicle wrong-way detection method described above.
The invention provides a method, a system and a device for detecting wrong-way driving on one-way roads based on deep learning. Road traffic monitoring video data are collected in real time by a monitoring camera, transmitted to the terminal and decoded into video frame images; a trained YOLOv5 deep-learning vehicle detection model processes the frames and outputs vehicle detection frames. Vehicles are then tracked with the multi-target tracking algorithm DeepSORT, which assigns IDs to the driving vehicles within the monitoring range and tracks them in real time to obtain trajectory information. The trajectory of each ID vehicle inside the wrong-way judgment area in video pixel coordinates is obtained through tracking, and whether the vehicle drives the wrong way is judged from the order in which it passes the two adjacent, disjoint position areas. The method uses the fast and accurate YOLOv5 deep-learning target detection algorithm for vehicle detection, combines it with DeepSORT for efficient real-time multi-vehicle tracking, and finally judges wrong-way driving of each ID vehicle from pixel position information. Compared with the prior art, the detection results are stable, fast and accurate, practicality and real-time performance are strong, wrong-way driving is judged in real time, repeated judgment of the same vehicle is effectively prevented, and the misjudgment and missed-judgment rates are reduced.
Drawings
FIG. 1 is a schematic diagram of a detection method according to an embodiment of the present invention;
FIG. 2 is a diagram of an example of a detection method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a detection method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a real-time acquisition of road traffic video according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a vehicle reverse travel determination method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a detection system according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a detection apparatus according to an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Referring to fig. 1, 2 and 3, the method for detecting the reverse running of a one-way vehicle based on deep learning provided by the invention comprises the following steps:
s101, a retrograde motion judgment area (which is composed of two non-intersecting position areas and crosses a road in a video) is arranged in a video pixel coordinate system.
In existing wrong-way detection techniques based on target tracking, longer trajectory information or vehicle orientation information of the same target vehicle is accumulated over the whole road for a long time and compared to decide whether it drives the wrong way. These methods have a large limitation: the information is only valid as long as it corresponds to one target vehicle, yet illumination changes or occlusion cause the tracking algorithm to switch the vehicle ID, and a switched ID is, for the algorithm, a new vehicle in the monitoring range. The information before and after the switch is then inconsistent or lost, so judgments are delayed, fail, or become missed or wrong judgments. Moreover, because prior-art judgment is performed continuously along the whole road, a target vehicle that has already been judged can be judged a second time, or even many times, after an ID switch, so practicality is low.
The reverse judging method of the embodiment is as follows:
the method comprises the following steps that a video pixel coordinate system X-Y is adopted, the height and width of a road traffic monitoring video pixel size are H and W, a retrograde determination region is arranged in the video pixel size, the retrograde determination region consists of two connected and non-intersected position regions and traverses a road in a video, the position regions consist of pixel point sets which are two connected and non-intersected pixel point sets respectively, and meanwhile, the two connected and non-intersected position regions must traverse the road in the video so as to ensure that vehicles driving on the road can pass through the two position regions successively; track information of the ID vehicle on video pixel coordinates is obtained through a multi-target tracking algorithm, and two connected and disjoint different position areas are set as A [ [ x ] 1 ,y 2 ],[x 3 , y 4 ],[x 5 ,y 6 ]……[x 11 ,y 12 ]],B=[[x 13 ,y 14 ],[x 15 ,y 16 ],[x 17 ,y 18 ]……[x 23 ,y 24 ]]And the surrounding pixel points form two connected and disjoint pixel point sets, wherein (x, y) represents the position information of the pixel, and each tracked vehicle running on the road passes through the two connected and disjoint position areas in sequence.
S102, the monitoring camera collects video data in real time and transmits it to the terminal, where it is decoded into video frame images that are input into the vehicle detection network model.
In this embodiment, a road monitoring camera is deployed at the entrance end of the one-way lane to collect road traffic monitoring video data in real time; the data are transmitted to the terminal and decoded into video frame image data. The video frame image data is a sequence of frames showing the road conditions and the vehicles driving within the monitoring range; the subsequent steps detect and track these vehicles.
Fig. 4 is a schematic diagram of the road traffic video collected in real time by the road monitoring camera deployed on the one-way lane in this embodiment.
In this embodiment, the road traffic video is actively acquired in real time and output to the terminal, where the original video frames are extracted by video decoding for real-time vehicle detection and tracking. This makes the wrong-way judgment result more objective and fair, and prevents vehicle owners from altering or destroying it. Moreover, by reasonably laying out the real-time road-image acquisition system, full perception and comprehensive detection of the road can be achieved.
In this embodiment, the vehicle detection network model is an existing model: a vehicle detection model obtained by training the deep learning target detection algorithm YOLOv5 to detect driving vehicles.
The video frame images obtained in step S102 are input into the vehicle detector obtained by training the deep learning target detection algorithm YOLOv5; the vehicles driving on the road surface are detected to obtain vehicle information, comprising the position of each vehicle and the size of its detection frame.
In this embodiment, the deep learning target detection algorithm YOLOv5 is trained to obtain the vehicle detection model; the steps correspond to the model-training part of fig. 2, and the training process is as follows:
the vehicle detection model needs to use a UA-DETRAC vehicle data set to perform model training in advance by using a deep learning target detection algorithm YOLOv5, and the data set is distributed into a training data set and a test data set according to a preset proportion; training a deep learning target detection algorithm YOLOv5 by using a training data set, wherein in the training process, the target detection algorithm YOLOv5 generates and stores a model weight file every time of one round of iteration, and the training is stopped until the target detection algorithm YOLOv5 reaches a convergence state; the target detection algorithm YOLOv5 uses the model weight file as the vehicle detection model weight to perform detection test on the test set in the UA-DETRAC vehicle detection data set, compares the detection result with the artificial labeling real result, calculates to obtain the model detection precision, selects the model weight with the highest test precision as the vehicle detection model weight of the embodiment of the invention, and obtains the well-trained deep learning vehicle detection model; carrying out target detection on the video frame image data obtained in the step one through a vehicle detection model to obtain vehicle information of a running vehicle on video pixel coordinates, wherein the vehicle information comprises position information of the vehicle and size information of a vehicle detection frame;
in the embodiment, the video frame image is input into a YOLOv5 vehicle detection model for vehicle detection, and vehicle information is obtained, where the vehicle information is [ x, y, w, h ], the position information of the vehicle is [ x, y ] representing the center coordinates of the vehicle detection frame, and the size information of the vehicle detection frame is [ w, h ] representing the width and height of the vehicle detection frame.
In this embodiment, the UA-DETRAC vehicle data set serves as training and test set. The data come from monitoring screenshots of 24 road areas in Beijing and Tianjin, taken from the downward viewing angle typical of urban road monitoring, which makes it a well-suited data set for the deep-learning target detection of this embodiment.
The YOLOv5 of this embodiment is an improved version based on YOLOv4 with a very lightweight model; at the extreme its model size is approximately 90% smaller than that of YOLOv4. It is a single-stage (one-stage) detection network with currently excellent accuracy and detection speed: having inherited the advantages of previous versions and of other networks, YOLOv5 improves both detection accuracy and real-time performance, meets the real-time detection requirement for video images, and its small model volume suits deployment at the monitoring end.
In this embodiment, the YOLOv5 target detection model is used to detect vehicles; since an excellent target detection model greatly improves the target tracking effect, the excellent single-stage YOLOv5 network is used as the detection model.
S103, the vehicles are tracked in real time with the multi-target tracking algorithm DeepSORT to obtain their driving trajectories.
Among multi-target tracking algorithms, the mainstream multi-thread single-object-tracker approach, represented by KCF, achieves high detection precision, but launching many threads consumes CPU resources heavily and real-time performance is low, so it is unsuitable for real-time detection at the monitoring end. Back-end optimization algorithms using Kalman filtering and the Hungarian algorithm, represented by SORT and DeepSORT, place higher demands on the deep-learning detection model, but they offer high detection precision, good tracking effect and strong real-time performance, and are suitable for deployment at the monitoring end for real-time detection.
This embodiment uses the multi-target tracking algorithm DeepSORT, a discriminative tracking-by-detection (TBD) algorithm: vehicles are first detected and then tracked. The trained deep-learning detection model detects the vehicles driving on the road in each video frame; during real-time tracking, Kalman filtering performs prediction and update, and the Hungarian algorithm performs frame-by-frame data association. Considering both target distance and appearance similarity, and extracting appearance features of the targets, improves matching precision and makes tracking under occlusion more robust, so the tracked vehicles achieve good tracking results even under complex conditions such as illumination changes, fast motion and occlusion.
In this embodiment, the DEEP SORT algorithm uses an 8-dimensional state vector X = [u, v, r, h, u*, v*, r*, h*]^T to estimate the motion state of the vehicle in the next frame, where [u, v] is the center coordinate of the vehicle detection frame, r is its aspect ratio, h is its height, and the four starred variables are the corresponding velocities of these parameters in the image coordinate system.
In this embodiment, Kalman filtering predicts the position and state of the vehicle in the next frame to obtain a prediction frame, and the Hungarian algorithm associates the prediction frame with the vehicle detection frame of the next frame. Cascade matching considers both target distance and appearance similarity, improving the matching precision; the vehicle is tracked in real time, and the track information of the ID vehicle in video pixel coordinates is obtained.
In this embodiment, Kalman filtering is used for optimal state prediction and estimation: the trajectory of a running vehicle is predicted and updated by a Kalman filter built on a uniform-velocity motion model and a linear observation model, which predicts and estimates the vector [u, v, r, h]. It plays an irreplaceable role in a running-vehicle motion system that requires prediction: on one hand, Kalman filtering has a certain fault tolerance toward motion parameters containing noise and inaccurate observations; on the other hand, its prediction step achieves, as far as possible, an optimal estimate of the state of the dynamic system.
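The uniform-velocity prediction step can be sketched as follows. This is an illustrative reconstruction in NumPy, not the patent's exact implementation; the state layout [u, v, r, h, u*, v*, r*, h*] follows the 8-dimensional vector described above, and the process-noise value is a placeholder.

```python
import numpy as np

def predict(x, P, dt=1.0, q=1e-2):
    """One Kalman prediction step for the state [u, v, r, h, u*, v*, r*, h*].

    Constant-velocity model: each of [u, v, r, h] advances by its velocity.
    x: (8,) state mean, P: (8, 8) state covariance.
    """
    F = np.eye(8)
    for i in range(4):
        F[i, i + 4] = dt        # couple each position component to its velocity
    Q = q * np.eye(8)           # process noise (placeholder value)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

# A detection frame centered at (100, 50), aspect ratio 0.5, height 80,
# moving 5 pixels per frame along u:
x0 = np.array([100.0, 50.0, 0.5, 80.0, 5.0, 0.0, 0.0, 0.0])
x1, P1 = predict(x0, np.eye(8))
print(x1[:4])  # center advances to (105, 50); r and h unchanged
```

The update step (not shown) would then correct this prediction against the matched detection frame.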
In this embodiment, the Hungarian algorithm finds an optimal allocation for data association: the target distance and the appearance similarity are both considered and combined by weighted summation into a cost matrix, which is then fed into the Hungarian algorithm for data association, solving the assignment problem between the detection results of the vehicle detection model and the tracking results predicted by Kalman filtering.
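The assignment objective the Hungarian algorithm optimizes can be made concrete with a brute-force sketch (not part of the patent — the real algorithm solves the same problem in polynomial time rather than by enumerating permutations):

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustively find the assignment of rows (Kalman-predicted tracks)
    to columns (current detections) minimizing total cost — the same
    problem the Hungarian algorithm solves without enumeration."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best_perm, best

# Hypothetical cost matrix: low values on the diagonal mean each track
# most resembles the detection with the same index.
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.9, 0.3]]
perm, total = min_cost_assignment(cost)
print(perm)  # track i is assigned detection perm[i]
```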
When a vehicle runs within the monitoring camera range, the vehicle detection model detects the vehicles running on the road in the video frame image to obtain vehicle detection frames; each detected vehicle is matched with a corresponding label ID, deep appearance features are extracted for the vehicle in its detection frame, and the prediction frame for the next frame of the running vehicle is predicted by Kalman filtering. In the next frame, the vehicle detection model again detects the vehicles in the video frame image to obtain detection frames, and their deep appearance features are extracted. The Mahalanobis distance measures the target distance between the Kalman prediction frame of the previous frame and the detection frame of the current frame, while the cosine distance between the appearance features of the previous-frame and current-frame detection frames yields the minimum cosine distance. The Mahalanobis result and the minimum cosine distance are weighted and summed into a cost matrix, which is fed into the Hungarian algorithm for data association to obtain a linear matching result; the ID of the corresponding target vehicle in the previous frame is assigned to the best-matching target in the current frame, and the two frames are considered successfully tracked. Prediction and update by Kalman filtering and frame-by-frame data association by the Hungarian algorithm then continue until the ID vehicle disappears or its ID switches.
In this embodiment, each track of an ID vehicle is managed through target creation and removal. Every track keeps a counter A recording the time elapsed since the track last matched a target successfully; once a detection result is correctly associated with the tracking result, this counter is reset to 0. A maximum threshold A_max is set (default A_max = 70); when the counter A exceeds the preset maximum A_max, tracking of that ID vehicle ends. During matching, any detection frame that fails to match may start a new track. Because such detections may be false alarms, the newly created track is first labeled with a tentative initial state; it is then observed over the next 3 consecutive frames. If it is matched successfully in all 3 frames, it is treated as a genuine new track and its state changes from tentative to confirmed; otherwise it is considered to have left the scene, its state is labeled deleted, and the track can be removed.
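The track lifecycle described above can be sketched as a small state machine. This is a minimal illustration (class and method names are my own, not the patent's), using the thresholds stated in the text: 3 consecutive matches to confirm, and A_max = 70 frames without a match to delete.

```python
class Track:
    """Minimal sketch of DEEP SORT track management: a new track starts
    'tentative', becomes 'confirmed' after 3 consecutive matches, and is
    'deleted' once unmatched too long (or while still tentative)."""
    A_MAX = 70      # max frames since last successful match
    N_INIT = 3      # consecutive hits needed to confirm

    def __init__(self):
        self.state = "tentative"
        self.hits = 1               # matched once at creation
        self.time_since_update = 0  # the counter A in the text

    def mark_matched(self):
        self.time_since_update = 0  # detection associated: reset A to 0
        self.hits += 1
        if self.state == "tentative" and self.hits >= self.N_INIT:
            self.state = "confirmed"

    def mark_missed(self):
        self.time_since_update += 1
        if self.state == "tentative" or self.time_since_update > self.A_MAX:
            self.state = "deleted"
```

A tentative track that misses a frame is dropped immediately, whereas a confirmed track survives up to A_max unmatched frames before being removed.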
In this embodiment, the Mahalanobis distance measures the association between the Kalman prediction frame of the previous frame and the detection frame produced by the vehicle detection model in the current frame; it mainly measures the similarity of the first 4 components [u, v, r, h] of the 8-dimensional state vector. In principle, an object cannot move very far between adjacent frames, so the closer the coordinates, the more likely the two frames contain the same object. The function used is (1):

d^(1)(i, j) = (d_j - y_i)^T S_i^(-1) (d_j - y_i)    (1)

where d_j denotes the position of the jth detection frame, y_i denotes the position of the target predicted by the ith Kalman filter track, and S_i^(-1) is the inverse of the covariance matrix between the detected position and the mean tracking position.
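Formula (1) can be evaluated directly with NumPy; the sketch below is illustrative (the numbers are invented), and with an identity covariance it reduces to the squared Euclidean distance:

```python
import numpy as np

def mahalanobis_sq(d_j, y_i, S_i):
    """Squared Mahalanobis distance between detection position d_j and
    Kalman-predicted position y_i under covariance S_i, as in formula (1)."""
    diff = d_j - y_i
    return float(diff.T @ np.linalg.inv(S_i) @ diff)

d = np.array([102.0, 51.0, 0.5, 80.0])  # detected [u, v, r, h]
y = np.array([100.0, 50.0, 0.5, 80.0])  # predicted [u, v, r, h]
S = np.eye(4)  # identity covariance -> squared Euclidean distance
print(mahalanobis_sq(d, y, S))  # 5.0
```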
In this embodiment, because the Mahalanobis distance only measures spatial distance, it can cause serious identity-switch problems, so appearance similarity is introduced through the cosine distance between appearance features. The appearance features inside each frame are extracted by a wide residual network from the re-identification field; this feature-extraction network must be trained offline in advance for extracting vehicle appearance features. The minimum cosine distance between all tracked feature vectors of the ith object and the detection result of the jth target is computed. The function used is (2):

d^(2)(i, j) = min{ 1 - r_j^T r_k^(i) | r_k^(i) ∈ R_i }    (2)

where r_j is the feature vector of the jth detection, r_k^(i) belongs to the set R_i of feature vectors from the k successful associations of track i, r_j^T r_k^(i) is the cosine similarity, and 1 - r_j^T r_k^(i) is the cosine distance between the two vectors.
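Formula (2) amounts to scanning a track's stored feature gallery for the feature closest to the new detection. A minimal sketch (feature vectors here are tiny invented examples; real appearance features would come from the re-identification network):

```python
import numpy as np

def min_cosine_distance(gallery, r_j):
    """Smallest cosine distance between detection feature r_j and the
    stored feature vectors of one track, as in formula (2).
    Vectors are normalized so the dot product is the cosine similarity."""
    r_j = r_j / np.linalg.norm(r_j)
    dists = [1.0 - float((g / np.linalg.norm(g)) @ r_j) for g in gallery]
    return min(dists)

track_feats = [np.array([1.0, 0.0]), np.array([0.6, 0.8])]
det_feat = np.array([0.0, 1.0])
print(round(min_cosine_distance(track_feats, det_feat), 3))  # 0.2
```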
In this embodiment, the two metrics are combined: the target distance and the appearance features are weighted and summed, so that both the distance between target frames and the content inside them are considered, and the weighted sum of the two metrics serves as the final metric. The function used is (3):

c_(i,j) = λ d^(1)(i, j) + (1 - λ) d^(2)(i, j)    (3)

where λ is the weight of the Mahalanobis distance versus the cosine distance in the cost-function calculation. The Mahalanobis distance serves to eliminate wrong matches: when two bounding boxes are far apart yet would still match, the wrong result is screened out by the threshold of the motion measurement model, so the Mahalanobis distance need not be emphasized in the comprehensive metric.
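Formula (3) together with the motion gate can be sketched as below. The gate value 9.4877 (the chi-square 95% quantile for 4 degrees of freedom, commonly used with this Mahalanobis test) and the default lam=0 are assumptions illustrating the remark that, after gating, appearance can carry the metric:

```python
def combined_cost(d1, d2, lam=0.0, gate=9.4877):
    """Weighted sum of motion distance d1 and appearance distance d2 as in
    formula (3). Pairs whose Mahalanobis distance exceeds the motion gate
    are screened out entirely, as the text describes."""
    if d1 > gate:
        return float("inf")   # rejected by the motion-measurement threshold
    return lam * d1 + (1.0 - lam) * d2

assert combined_cost(2.0, 0.3) == 0.3              # plausible motion: appearance decides
assert combined_cost(50.0, 0.1) == float("inf")    # too far apart: match rejected
```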
The essence of multi-target tracking in this embodiment is predicting and estimating the behavior of a moving target with Kalman filtering, and the Kalman filter must adjust its prediction according to observations. When an object is occluded for a long time, the motion and appearance information available for reference stays blank for a long time; the uncertainty of the Kalman prediction grows sharply and the observability in state space drops greatly. The DEEP SORT algorithm therefore uses cascade matching, which gives priority to targets that appear more frequently: tracks are matched in order of disappearance time, from shortest to longest, ensuring that the most recently seen targets receive the highest priority and thereby alleviating the problem.
In this method, the video frame images captured by the road traffic surveillance camera are fed to the trained YOLOv5 target detection model to obtain vehicle detection frames, and target tracking is realized jointly by the detection frames generated by YOLOv5 and the multi-target tracking algorithm DEEP SORT. Each vehicle detected within the surveillance video range is matched with a corresponding ID, its appearance features are extracted by a trained deep convolutional neural network model, Kalman filtering predicts the prediction frames, and the Hungarian algorithm performs frame-by-frame data association with cascade matching, considering both target distance and appearance similarity to improve the matching precision.
To better understand the target tracking, refer to fig. 2. In this embodiment, when a running vehicle on the road is detected at time T1, it is matched with a corresponding ID-1, giving the detection frame and position of vehicle ID-1; appearance features are extracted for vehicle ID-1, and the prediction frame of its detection frame is predicted by Kalman filtering. At time T2, the detection frame and vehicle position of the current frame are obtained and appearance features are extracted; the Mahalanobis distance between the ID-1 prediction frame (from the previous frame's Kalman filtering) and the current detection frame measures the target distance, the minimum cosine distance between them measures the appearance similarity, and the two results are weighted and summed into a cost matrix. The cost matrix is fed into the Hungarian algorithm to obtain a linear matching result; when the detection frame at time T2 matches the Kalman prediction frame successfully, tracking from T1 to T2 has succeeded and the detection frame at T2 is assigned to vehicle ID-1. Vehicle ID-1 is then tracked continuously within the video range; when it leaves the video range at time Tn+1, its tracking ends and the track information of vehicle ID-1 in video pixel coordinates is obtained.
The multi-target tracking algorithm DEEP SORT used in this embodiment adds cascade matching on top of the SORT algorithm, considering both target distance and appearance similarity. It improves the matching precision, can re-track a target that disappears briefly and then reappears, achieves a better tracking effect, enhances robustness, and reduces ID switches.
Referring to fig. 4 and fig. 5, a schematic diagram of a real-time collected road traffic video according to an embodiment of the present invention and a schematic diagram of an exemplary vehicle reverse driving determination method according to an embodiment of the present invention are respectively shown, where the real-time collected road traffic video is transmitted to a terminal for processing, and the following vehicle reverse driving determination method is further described.
S104, judging whether the target vehicle drives in the wrong direction according to the order in which the tracked target vehicle passes through the different position areas.
The reverse-driving judgment of this embodiment is based on tracking by the multi-target tracking algorithm DEEP SORT. Although DEEP SORT is already powerful and delivers a good target tracking effect, target tracking still faces many challenges owing to noise polluting the images, the complex diversity of scenes and the complex changes of targets. Illumination changes, moving scenes, camera shake, long occlusion and the like may still cause the ID of the same target to switch within the visual range of the surveillance video.
To reduce the influence of target-vehicle ID switches on reverse-driving detection, better address the above problems, and prevent the same vehicle from being judged multiple times, this embodiment provides an efficient reverse-driving judgment method. A reverse-driving judgment area is set in the video pixel coordinate system; it consists of two connected, disjoint position areas and crosses the road in the video. Each position area is a set of pixels, the two sets being connected and disjoint, and the two areas must cross the road in the video so that every vehicle driving on the road passes through both areas in sequence. When an ID vehicle passes through the two areas, its track information in the video pixel coordinates of the judgment area is obtained by the tracking in step three, and a single fast, efficient reverse-driving judgment is made: the driving direction of the tracked target vehicle is determined from the order in which it passes through the two pixel sets, deciding whether the vehicle is driving in reverse. To prevent ID switches inside the judgment area, the area is reasonably restricted to a small range; since vehicles travel relatively fast, the target vehicle crosses the judgment area in a very short time, allowing a quick judgment.
For a better understanding of the reverse-driving judgment, refer to fig. 5. A fixed reverse-driving judgment region is set, composed of an area A and an area B, and the direction is judged from the order in which the tracked ID vehicle passes through these two disjoint position areas: when the tracked vehicle passes through area A first and then area B, it is driving forward; when it passes through area B first and then area A, it is driving in reverse.
Referring to fig. 3, the present embodiment provides a flowchart of an exemplary vehicle reverse detection scheme. The method comprises the following steps:
S301, collecting a video and inputting the video into a terminal;
S302, detecting the vehicle with the trained YOLOv5 target detection model;
S303, tracking the vehicle in real time with the multi-target tracking algorithm DEEP SORT;
S304, checking whether the tracked vehicle passes through area B first and then area A;
when the target vehicle passes through area B first and then area A, the target vehicle is driving in reverse.
Referring to fig. 5, a schematic diagram of an exemplary vehicle reverse-driving determination method provided in this embodiment, a video pixel coordinate system X-Y is established, with the road traffic monitoring video pixel size H × W; the dotted-line area in fig. 4 is the road traffic monitoring video range, shown as the video pixel coordinate system in fig. 5.
Two connected, disjoint position areas are set within the video pixel area, crossing the road in the video, and the track information of each ID vehicle in pixel coordinates is obtained by multi-target tracking. The two areas are set as A = [[x_1, y_2], [x_3, y_4], [x_5, y_6], ..., [x_11, y_12]] and B = [[x_13, y_14], [x_15, y_16], [x_17, y_18], ..., [x_23, y_24]], whose surrounding pixels form two connected, disjoint pixel sets, where (x, y) denotes the position of a pixel; whether a vehicle drives in reverse is judged from the order in which the tracked ID vehicle passes through the two areas. As shown in fig. 5, the ID coordinate position is labeled, and the pixels the ID passes through form the driving track. When tracked vehicle ID-2 passes through area A first and then area B, the driving process is A→B and the vehicle is judged to be driving forward; tracked vehicle ID-3 passes through area B first and then area A, so its driving process is B→A and it is judged to be driving in reverse, an illegal maneuver. Within the monitoring video range, each target vehicle passes through the reverse-driving judgment area only once and is judged only once, which effectively prevents the same vehicle from being judged multiple times.
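The order-of-passage test can be sketched as a small function over a tracked trajectory. This is an illustrative reconstruction (the regions and track below are tiny invented examples, not the patent's pixel sets):

```python
def driving_direction(trajectory, region_a, region_b):
    """Classify a tracked vehicle from the order in which its trajectory
    (a list of (x, y) pixel points) first enters the two disjoint pixel
    sets. Returns 'forward' for A-then-B, 'reverse' for B-then-A, and
    None if the track has not yet crossed both regions."""
    order = []
    for pt in trajectory:
        if pt in region_a and 'A' not in order:
            order.append('A')
        elif pt in region_b and 'B' not in order:
            order.append('B')
    if order == ['A', 'B']:
        return 'forward'
    if order == ['B', 'A']:
        return 'reverse'
    return None

# Hypothetical one-pixel-tall regions spanning a 4-pixel-wide road:
A = {(x, 10) for x in range(4)}
B = {(x, 20) for x in range(4)}
track = [(1, 25), (1, 20), (2, 15), (2, 10)]  # moves from B toward A
print(driving_direction(track, A, B))  # reverse
```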
In this reverse-driving judgment method, once the tracked driving direction is judged to be reverse, the monitoring system captures images of the violation scene in real time, obtains the violation video, stores the violation video and images at the terminal, and locks the violating vehicle to complete the detection. Because only the fixed reverse-driving judgment area is used for judgment, each target vehicle passes through the area only once and is judged only once; this reduces the influence of target-vehicle ID switches on the judgment, improves the reverse-driving detection rate, effectively prevents the same vehicle from being judged multiple times, and lowers the misjudgment rate.
Compared with the prior art, this deep-learning-based one-way reverse-driving detection method uses the fast, accurate and very lightweight YOLOv5 as the vehicle detection model and combines it with the multi-target tracking algorithm DEEP SORT, achieving real-time multi-target tracking of vehicles running on the road. Meanwhile, an efficient real-time reverse-driving judgment is applied to the target vehicle based on its track information, and the violation information is stored at the terminal. The method yields stable detection results with high speed, high accuracy and strong real-time performance; it can judge reverse driving in real time, effectively reduces both the misjudgment and missed-judgment rates, and is easy to deploy.
The foregoing embodiment provides a deep-learning-based method for detecting one-way vehicles driving in reverse; correspondingly, this embodiment provides a deep-learning-based system for the same purpose. Referring to fig. 6, a schematic structural diagram of the system according to the embodiment of the present invention, the vehicle reverse-driving detection system includes a reverse-driving area module 601, a video capture module 602, a target detection module 603, a multi-target tracking module 604, a reverse-driving determination module 605, and a locking target module 606.
The retrograde region module 601 sets a reverse-driving judgment region in the video pixel coordinate system. The region consists of two connected, disjoint position areas and crosses the road in the video; each position area is a pixel set, the two sets being connected and disjoint, and the two areas must cross the road in the video so that vehicles driving on the road pass through both areas in sequence.
The video acquisition module 602 arranges a road monitoring camera on the one-way lane, deployed at the entrance end, to collect road traffic monitoring video data in real time; the data are transmitted to the terminal and decoded into video frame image data. The video frame image data form a sequence of video frame images containing the road conditions and running vehicles within the monitoring video range; the subsequent steps detect and track the vehicles running on the road within this range.
The target detection module 603 constructs a vehicle detection model based on a deep learning target detection algorithm. The model must be trained in advance on an open-source vehicle detection data set, which is split into a training set and a test set at a preset ratio. The deep learning target detection network is trained on the training set; during training, a model weight file is generated and saved after each iteration round, and training stops once the algorithm converges. Each model weight file is then used as the vehicle detection weight to run detection on the test set of the open-source data set; the results are compared with the manually labeled ground truth to compute the model detection precision, and the weight with the highest test precision is selected as the vehicle detection model weight of this embodiment, yielding the trained deep learning target detection model. The video frame image data from the video acquisition module are passed through the vehicle detection model for target detection to obtain the vehicle information of running vehicles in video pixel coordinates; the vehicle information is [x, y, w, h], where [x, y] is the center coordinate of the vehicle detection frame and [w, h] is the width and height of the vehicle detection frame.
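Detection networks often emit boxes in corner format; converting them to the [x, y, w, h] center format described above is a one-liner. A minimal sketch (the helper name is my own):

```python
def to_xywh(x1, y1, x2, y2):
    """Convert a corner-format detection box (x1, y1, x2, y2) to the
    [x, y, w, h] vehicle information used here: center coordinates plus
    width and height of the detection frame."""
    w, h = x2 - x1, y2 - y1
    return [x1 + w / 2.0, y1 + h / 2.0, w, h]

print(to_xywh(80, 40, 120, 100))  # [100.0, 70.0, 40, 60]
```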
The multi-target tracking module 604 tracks the vehicles detected by the vehicle detection model on the video frame images using the multi-target tracking algorithm DEEP SORT; tracking is limited to vehicles within the monitoring video range. When a vehicle runs within the monitoring camera range, the vehicle detection model of step two detects the vehicles running on the road in the video frame image to obtain vehicle detection frames; each detected vehicle is matched with a corresponding label ID, deep appearance features are extracted for the vehicle in its detection frame, and the prediction frame of the next frame is predicted by Kalman filtering. In the next frame, the vehicle detection model again produces detection frames and their deep appearance features are extracted; the Mahalanobis distance measures the target distance between the previous frame's Kalman prediction frame and the current detection frame, the cosine distance of the appearance features yields the minimum cosine distance, and the weighted sum of the two gives a cost matrix. The cost matrix is fed into the Hungarian algorithm for data association to obtain a linear matching result; the ID of the corresponding target vehicle in the previous frame is assigned to the best-matching target in the current frame, and the two frames are successfully tracked. Kalman filtering then continues prediction and update and the Hungarian algorithm continues frame-by-frame data association until the ID vehicle disappears or its ID switches; cascade matching, considering both target distance and appearance similarity, improves the matching precision and yields the track information of each ID vehicle in video pixel coordinates.
The reverse-driving judgment module 605: when an ID vehicle passes through the two connected, disjoint position areas, its track information in the video pixel coordinates of the judgment area is obtained by the tracking in step three; the driving direction of the tracked target vehicle is determined from the order in which it passes through the two connected, disjoint pixel sets, deciding whether it drives in reverse, with a single fast, efficient judgment made per pass through the judgment area.
The target locking module 606 obtains, through the above modules, the driving track of the vehicle in the reverse-driving judgment area and judges reverse driving from the order in which the ID vehicle passes through the two connected, disjoint position areas. Calling the two areas A and B, an ID vehicle passing through area A first and then area B is driving forward; the opposite order means reverse driving. When a vehicle is judged to be driving in reverse, the monitoring system captures images of the violation scene in real time, stores the violation video and images at the terminal, and locks the target vehicle.
This embodiment provides a deep-learning-based one-way vehicle reverse-driving detection system with stable detection results, high detection speed, high accuracy and strong real-time performance; it can judge reverse-driving vehicles in real time while applying an efficient position-based reverse-driving judgment to the target vehicle.
According to an embodiment of the present invention, the present embodiment further provides a detection apparatus.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a deep-learning-based one-way vehicle reverse-driving detection device according to the embodiment of the present invention. The terminal device is intended to represent various forms of digital computers, such as notebook computers, desktop computers, mainframe computers, workstations, in-vehicle terminals, and wireless terminals in unmanned driving, as well as various forms of mobile devices such as personal digital assistants, smart phones, and wearable devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the invention described or claimed herein.
As shown in fig. 7, the terminal includes a processor 701, a memory 702, an input device 703, an output device 704, and a bus; the processor 701, the memory 702, the input device 703 and the output device 704 in the terminal device are connected to each other through the bus, as illustrated in fig. 7. The memory 702 stores a computer program comprising program instructions, and the processor 701 executes these instructions to carry out the processing in the terminal device.
The memory 702, as a computer-readable storage medium, can store computer-executable programs, instructions and modules, such as the program instructions or modules corresponding to the vehicle reverse-driving detection method in the embodiment of the present invention (for example, the video acquisition module 602, the target detection module 603, the multi-target tracking module 604, the reverse-driving judgment module 605, and the target locking module 606 in the structural diagram of the vehicle reverse-driving detection system shown in fig. 6). The processor 701 executes the various functional applications and data processing of the terminal device by running the computer-executable programs, instructions and modules stored in the memory 702, thereby realizing the deep-learning-based one-way vehicle reverse-driving judgment method.
The memory 702 includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium, while storing the data created by the vehicle reverse-driving detection method (e.g., the vehicle data set, model weights, pixel coordinate position information, violation information, etc.). Further, the memory 702 may include high-speed random access memory and may also include nonvolatile memory, such as at least one magnetic disk storage device, flash memory device, or other nonvolatile solid-state storage device.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal, e.g., a touch screen, a touch pad, a keyboard, a mouse, a microphone, or similar input means. The output device 704 includes a display, a speaker, and the like.
This embodiment also provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the aforementioned deep-learning-based one-way vehicle reverse-driving detection method.
It is clear to those skilled in the art from the above description of the embodiments that the present invention can be implemented by a computer program instructing the relevant hardware, and the computer program can be stored in a non-volatile computer-readable storage medium. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, etc. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The method has the advantages of a stable detection result, high detection speed and high accuracy; it can judge in real time whether a vehicle is travelling in the wrong direction, effectively prevents the same vehicle from being judged multiple times, and reduces the misjudgment rate of reverse-driving judgment.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present invention can be achieved.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for detecting reverse running of one-way road vehicles based on deep learning, characterized by comprising the following steps:
step 1: setting a reverse-driving judgment area in the video pixel coordinate system, wherein the reverse-driving judgment area consists of two adjacent, non-intersecting position areas and crosses the road in the video; each position area consists of one of two adjacent, non-intersecting pixel point sets, and the two position areas must cross the road in the video, so as to ensure that a vehicle travelling on the road passes through the two position areas in sequence;
step 2: acquiring road monitoring video data in real time to obtain video frame image data; inputting the obtained video frame image data into a vehicle detection network model, and carrying out target detection on vehicles running on a road to obtain vehicle information of the running vehicles on video pixel coordinates, wherein the vehicle information comprises position and size information;
step 3: tracking the travelling vehicle detected in step 2 throughout the video monitoring range by the multi-target tracking algorithm DEEP SORT, to obtain the whole travelling track of the vehicle and the travelling track within the reverse-driving judgment area;
step 4: judging whether the vehicle travels in the wrong direction; when a target ID vehicle passes through the two adjacent, non-intersecting position areas of the reverse-driving judgment area, the track information of the target ID vehicle on the video pixel coordinates within the reverse-driving judgment area is obtained by the tracking of step 3, the travelling direction of the tracked target ID vehicle is determined according to the order in which it passes through the two adjacent, non-intersecting pixel point sets, whether the tracked vehicle is travelling in the wrong direction is judged, and only one reverse-driving judgment is made for each target vehicle passing through a reverse-driving judgment area.
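The two-zone direction judgment of step 4 can be sketched as follows. This is an illustrative sketch only, not part of the claims; it assumes the two position areas are axis-aligned pixel rectangles (x1, y1, x2, y2) and that the permitted one-way direction is from zone A to zone B:

```python
# Illustrative sketch of the two-zone reverse-driving judgment.
# Assumed representation: a zone is an axis-aligned pixel rectangle
# (x1, y1, x2, y2); a track is the sequence of detection-frame centers.

def in_zone(point, zone):
    """Return True if the pixel point (x, y) lies inside the zone."""
    x, y = point
    x1, y1, x2, y2 = zone
    return x1 <= x <= x2 and y1 <= y <= y2

def passage_order(track, zone_a, zone_b):
    """Return the order ('A'/'B') in which the track first enters each zone."""
    order = []
    for p in track:
        if in_zone(p, zone_a) and 'A' not in order:
            order.append('A')
        elif in_zone(p, zone_b) and 'B' not in order:
            order.append('B')
    return order

def is_wrong_way(track, zone_a, zone_b):
    """With the permitted direction A -> B, a vehicle that enters B first
    and A second is travelling in the wrong direction."""
    return passage_order(track, zone_a, zone_b) == ['B', 'A']
```

Judging only the completed order ['B', 'A'] naturally yields a single verdict per pass, matching the "one judgment per judgment area" condition of step 4.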
2. The deep learning-based one-way vehicle reverse running detection method according to claim 1, characterized in that: in step 2, the vehicle detection network model is a pre-trained deep learning vehicle detection network model; a deep learning target detection algorithm performs model learning and training on vehicle profiles through a vehicle data set, so that the vehicle detection network model can perform target recognition on the vehicles on the road within the monitoring video range.
3. The deep learning-based one-way vehicle reverse running detection method according to claim 1, characterized in that: in step 2, the vehicle information is [x, y, w, h], wherein the position information [x, y] represents the center coordinates of the vehicle detection frame, and the size information [w, h] represents the width and height of the vehicle detection frame.
4. The deep learning-based one-way vehicle reverse running detection method according to claim 1, characterized in that: in step 3, the detected vehicles are tracked by the multi-target tracking algorithm DEEP SORT; when a vehicle travels within the range of the monitoring camera, the vehicle detection network model detects the vehicles travelling on the road in the video frame image to obtain vehicle detection frames, each detected vehicle is matched with a corresponding label ID, the deep appearance features of the vehicle in each vehicle detection frame are extracted, and a prediction frame of the travelling vehicle in the next frame is predicted by Kalman filtering; in the next frame, the vehicle detection network model again detects the vehicles travelling on the road in the video frame image to obtain vehicle detection frames, and the deep appearance features of the vehicles in the vehicle detection frames are extracted; the Mahalanobis distance performs association measurement of the target distance between the Kalman filter prediction frame of the previous frame and the detection frame of the current frame, and the appearance-feature cosine distance measures the appearance features of the previous-frame vehicle detection frame and the current-frame vehicle detection frame to obtain the minimum cosine distance; the Mahalanobis distance result and the minimum cosine distance are weighted and summed to obtain a cost matrix, the cost matrix is input into the Hungarian algorithm for data association to obtain a linear matching result, the ID of the corresponding target vehicle in the previous frame is assigned to the target with the highest matching degree in the current frame, the tracking between the previous frame and the next frame succeeds, and prediction updating is performed by Kalman filtering; the Hungarian algorithm performs data association frame by frame, and tracking continues until the ID vehicle disappears or the ID is switched, so as to obtain the track information of the ID vehicle on the video pixel coordinates.
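An illustrative sketch (not part of the claims) of the ID bookkeeping in the association loop above: matched detections keep the previous frame's track ID, while unmatched detections open new tracks with fresh IDs. The `Track` and `TrackManager` names are hypothetical helpers, not part of any DEEP SORT library:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    box: tuple                                    # last matched (x, y, w, h) box
    features: list = field(default_factory=list)  # recent appearance features

class TrackManager:
    """Hypothetical minimal ID bookkeeping: matched detections keep the
    previous frame's ID; unmatched detections are assigned fresh IDs."""

    def __init__(self):
        self.next_id = 1
        self.tracks = {}

    def update(self, matches, unmatched_boxes):
        """matches: (track_id, box) pairs from the assignment step;
        unmatched_boxes: detections no existing track claimed.
        Returns the list of newly created track IDs."""
        for track_id, box in matches:
            self.tracks[track_id].box = box
        new_ids = []
        for box in unmatched_boxes:
            track = Track(self.next_id, box)
            self.tracks[track.track_id] = track
            new_ids.append(track.track_id)
            self.next_id += 1
        return new_ids
```

The full algorithm also ages out tracks that stay unmatched for several frames, which is what makes a track "disappear" in the sense of this claim.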
5. The deep learning-based one-way vehicle reverse running detection method according to claim 4, characterized in that: using an 8-dimensional state vector X ═ u, v, r, h, u * ,v * ,r * ,h * ] T To estimate the motion state of the vehicle at the next frame, where u, v]The center coordinates of the vehicle detection frame, the length-width ratio of r, the height of h and the rest four variables represent the derived speed information of each parameter in the image coordinate system;
the Mahalanobis distance is used for measuring the target distance between the Kalman filtering prediction frame of the previous frame and the detection frame of the current frame in a correlation mode, and the Mahalanobis distance is used for measuring the first 4 vectors [ u, v, r, h ] of the 8-dimensional state vector]Degree of similarity d (1) (i, j), the mahalanobis distance metric calculation formula is as follows:
Figure FDA0003542067000000021
wherein d is j Indicates the position of the jth detection frame, y i Representing the predicted position of the ith Kalman filter on the target, S i -1 Representing a covariance matrix between the detected position and the tracked position.
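The Mahalanobis metric above can be sketched in a few lines of NumPy; `d_j`, `y_i` and `S_i` are assumed to be the [u, v, r, h] detection vector, the Kalman-predicted mean, and its covariance matrix, as defined in the claim:

```python
import numpy as np

def mahalanobis_sq(d_j, y_i, S_i):
    """Squared Mahalanobis distance d1(i, j) = (d_j - y_i)^T S_i^-1 (d_j - y_i)
    between the j-th detection box d_j and the i-th Kalman prediction y_i,
    both as [u, v, r, h] vectors; S_i is the predicted covariance matrix."""
    diff = np.asarray(d_j, dtype=float) - np.asarray(y_i, dtype=float)
    # Solve S_i x = diff rather than forming the explicit inverse of S_i.
    return float(diff @ np.linalg.solve(S_i, diff))
```

With S_i the identity matrix, this reduces to the squared Euclidean distance between the two boxes; a non-trivial covariance down-weights directions in which the Kalman filter is uncertain.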
6. The deep learning-based one-way vehicle reverse running detection method according to claim 4, characterized in that: the appearance-feature cosine distance measure obtains the minimum cosine distance between the appearance features of the previous-frame vehicle detection frame and the current-frame vehicle detection frame; the minimum cosine distance d^(2)(i, j) between all feature vectors of the tracking of the i-th object and the target detection result of the j-th object is calculated as follows:

d^(2)(i, j) = min{ 1 - r_j^T r_k^(i) | r_k^(i) ∈ R_i }

wherein r_j is the feature vector corresponding to the j-th detection, R_i = {r_k^(i)} is the set of feature vectors of the last k successful tracking steps, r_j^T r_k^(i) is the cosine similarity, and 1 - r_j^T r_k^(i) is the cosine distance of the two vectors.
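The minimum cosine distance above can be sketched as follows, assuming (as in the standard DEEP SORT formulation) that appearance feature vectors are L2-normalized so that a dot product equals the cosine similarity:

```python
import numpy as np

def min_cosine_distance(track_features, r_j):
    """d2(i, j) = min over the stored features r_k of track i of
    (1 - r_k . r_j); vectors are L2-normalized first so the dot
    product is the cosine similarity."""
    r_j = np.asarray(r_j, dtype=float)
    r_j = r_j / np.linalg.norm(r_j)
    distances = []
    for r_k in track_features:
        r_k = np.asarray(r_k, dtype=float)
        r_k = r_k / np.linalg.norm(r_k)
        distances.append(1.0 - float(r_k @ r_j))
    return min(distances)
```

Taking the minimum over the track's recent feature gallery makes the metric robust to a single bad crop: one good past view of the vehicle is enough for a low appearance cost.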
7. The deep learning-based one-way vehicle reverse running detection method according to claim 4, characterized in that: the Mahalanobis distance result and the minimum cosine distance are weighted and summed to obtain the cost matrix c_(i,j):

c_(i,j) = λ d^(1)(i, j) + (1 - λ) d^(2)(i, j)

wherein λ represents the weight of the Mahalanobis distance and the cosine distance in the cost function calculation.
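The cost fusion of this claim, followed by the assignment step of claim 4, can be sketched as below. For brevity the optimal assignment is computed by brute force over permutations, which matches the Hungarian algorithm's result on the small matrices of a sketch; a real implementation would use a proper Hungarian solver (e.g. `scipy.optimize.linear_sum_assignment`) plus the gating that DEEP SORT applies to infeasible pairs:

```python
from itertools import permutations

import numpy as np

def fuse_costs(d1, d2, lam=0.5):
    """c[i, j] = lam * d1[i, j] + (1 - lam) * d2[i, j]: the weighted sum of
    the Mahalanobis and minimum-cosine cost matrices of claims 5 and 6."""
    return lam * np.asarray(d1, dtype=float) + (1.0 - lam) * np.asarray(d2, dtype=float)

def best_assignment(cost):
    """Optimal track-to-detection assignment by brute force; on small
    matrices this equals the Hungarian algorithm's result. Assumes at
    least as many detections (columns) as tracks (rows)."""
    cost = np.asarray(cost, dtype=float)
    n_tracks, n_dets = cost.shape
    best_total, best_perm = float('inf'), None
    for perm in permutations(range(n_dets), n_tracks):
        total = sum(cost[i, j] for i, j in enumerate(perm))
        if total < best_total:
            best_total, best_perm = total, perm
    return [(i, j) for i, j in enumerate(best_perm)]
```

Each returned (i, j) pair carries the previous-frame track ID i over to the current-frame detection j, which is the ID assignment described in claim 4.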
8. The deep learning-based one-way vehicle reverse detection method according to any one of claims 1 to 7, characterized in that: in step 4, if the travelling direction of the tracked vehicle is judged to be reverse driving, the monitoring system captures images of the violation scene in real time, obtains a violation video, and locks the reverse-driving violation vehicle, completing the detection.
9. A system for detecting reverse running of one-way road vehicles based on deep learning, characterized by comprising the following modules:
the module 1 is used for setting a reverse-driving judgment area in the video pixel coordinate system, wherein the reverse-driving judgment area consists of two adjacent, non-intersecting position areas and crosses the road in the video; each position area consists of one of two adjacent, non-intersecting pixel point sets, and the two position areas must cross the road in the video, so as to ensure that a vehicle travelling on the road passes through the two position areas in sequence;
the module 2 is used for acquiring road monitoring video data in real time and acquiring video frame image data; inputting the obtained video frame image data into a vehicle detection network model, and carrying out target detection on vehicles running on a road to obtain vehicle information of the running vehicles on video pixel coordinates, wherein the vehicle information comprises position and size information;
the module 3 is used for tracking the travelling vehicle detected by the module 2 throughout the video monitoring range by the multi-target tracking algorithm DEEP SORT, to obtain the whole travelling track of the vehicle and the travelling track within the reverse-driving judgment area;
the module 4 is used for judging whether the vehicle travels in the wrong direction; when a target ID vehicle passes through the two adjacent, non-intersecting position areas of the reverse-driving judgment area, the track information of the target ID vehicle on the video pixel coordinates within the reverse-driving judgment area is obtained by the tracking of the module 3, the travelling direction of the tracked target ID vehicle is determined according to the order in which it passes through the two adjacent, non-intersecting pixel point sets, whether the tracked vehicle is travelling in the wrong direction is judged, and only one reverse-driving judgment is made for each target vehicle passing through a reverse-driving judgment area.
10. A device for detecting reverse running of one-way road vehicles based on deep learning, comprising:
one or more processors;
a storage device to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the deep learning-based one-way vehicle reverse travel detection method of any one of claims 1-8.
CN202210235720.0A 2022-03-11 2022-03-11 Method, system and equipment for detecting reverse running of one-way road vehicle based on deep learning Active CN114898326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210235720.0A CN114898326B (en) 2022-03-11 2022-03-11 Method, system and equipment for detecting reverse running of one-way road vehicle based on deep learning


Publications (2)

Publication Number Publication Date
CN114898326A true CN114898326A (en) 2022-08-12
CN114898326B CN114898326B (en) 2024-10-29

Family

ID=82715656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210235720.0A Active CN114898326B (en) 2022-03-11 2022-03-11 Method, system and equipment for detecting reverse running of one-way road vehicle based on deep learning

Country Status (1)

Country Link
CN (1) CN114898326B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071688A (en) * 2023-03-06 2023-05-05 台州天视智能科技有限公司 Behavior analysis method and device for vehicle, electronic equipment and storage medium
CN117372924A (en) * 2023-10-18 2024-01-09 中国铁塔股份有限公司 Video detection method and device
CN117437792A (en) * 2023-12-20 2024-01-23 中交第一公路勘察设计研究院有限公司 Real-time road traffic state monitoring method, device and system based on edge calculation
CN117636270A (en) * 2024-01-23 2024-03-01 南京理工大学 Vehicle robbery event identification method and device based on monocular camera
CN118247495A (en) * 2024-05-29 2024-06-25 湖北楚天高速数字科技有限公司 Target identification method and device for high-resolution video spliced by multiple cameras

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040024527A1 (en) * 2002-07-30 2004-02-05 Patera Russell Paul Vehicular trajectory collision conflict prediction method
CN106022243A (en) * 2016-05-13 2016-10-12 浙江大学 Method for recognizing converse vehicle driving in vehicle lanes on the basis of image processing
CN111259868A (en) * 2020-03-10 2020-06-09 北京以萨技术股份有限公司 Convolutional neural network-based method, system and medium for detecting vehicles in reverse driving
CN111695545A (en) * 2020-06-24 2020-09-22 浪潮卓数大数据产业发展有限公司 Single-lane reverse driving detection method based on multi-target tracking



Also Published As

Publication number Publication date
CN114898326B (en) 2024-10-29

Similar Documents

Publication Publication Date Title
CN114898326B (en) Method, system and equipment for detecting reverse running of one-way road vehicle based on deep learning
CN109948582B (en) Intelligent vehicle reverse running detection method based on tracking trajectory analysis
Bai et al. Traffic anomaly detection via perspective map based on spatial-temporal information matrix.
US9569531B2 (en) System and method for multi-agent event detection and recognition
Xu et al. Dual-mode vehicle motion pattern learning for high performance road traffic anomaly detection
CN113326719A (en) Method, equipment and system for target tracking
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
Bloisi et al. Argos—A video surveillance system for boat traffic monitoring in Venice
Li et al. Bi-directional dense traffic counting based on spatio-temporal counting feature and counting-LSTM network
CN111582253B (en) Event trigger-based license plate tracking and identifying method
CN113887304A (en) Road occupation operation monitoring method based on target detection and pedestrian tracking
Li et al. Time-spatial multiscale net for vehicle counting and traffic volume estimation
CN118033622A (en) Target tracking method, device, equipment and computer readable storage medium
CN115565157A (en) Multi-camera multi-target vehicle tracking method and system
Mao et al. Aic2018 report: Traffic surveillance research
CN117037085A (en) Vehicle identification and quantity statistics monitoring method based on improved YOLOv5
CN117078718A (en) Multi-target vehicle tracking method in expressway scene based on deep SORT
CN111784750A (en) Method, device and equipment for tracking moving object in video image and storage medium
Patel et al. Vehicle tracking and monitoring in surveillance video
Zhong et al. Research on Road Object Detection Algorithm Based on YOLOv5+ Deepsort
Bai et al. Pedestrian Tracking and Trajectory Analysis for Security Monitoring
Gu et al. Real-Time Vehicle Passenger Detection Through Deep Learning
Tian et al. Pedestrian multi-target tracking based on YOLOv3
Wang A Novel Vehicle Tracking Algorithm Using Video Image Processing
CN116886877B (en) Park safety monitoring method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant