CN111145545A - Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning - Google Patents


Info

Publication number
CN111145545A
CN111145545A (application CN201911360542.9A)
Authority
CN
China
Prior art keywords
vehicle
information
module
sequence
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911360542.9A
Other languages
Chinese (zh)
Other versions
CN111145545B (en)
Inventor
龚怡宏
张玥
余旭峰
洪晓鹏
马健行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201911360542.9A
Publication of CN111145545A
Application granted
Publication of CN111145545B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning. The system comprises an acquisition module, a single-camera processing module, a cross-camera matching module and a traffic parameter extraction module; the single-camera processing module creates a single-camera processing submodule for each path of video output by the acquisition module to perform data processing, and each submodule comprises an image preprocessing module, a vehicle detection module and a vehicle tracking module. The method uses unmanned aerial vehicles to film road traffic conditions and analyzes the traffic state of vehicles in the monitored area; it covers calibration of the monitoring picture, video-based vehicle detection and tracking, cross-camera multi-target trajectory matching based on geographic position, an algorithm for analyzing the traffic state from vehicle motion trajectories, and the like.

Description

Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
Technical Field
The invention belongs to the field of intelligent transportation, and particularly relates to a road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning.
Background
An Intelligent Transportation System (ITS) is the development direction of future transportation systems, aiming to improve the safety and efficiency of transportation. Such a system applies a variety of advanced technologies and integrates the people, vehicles, roads and environment involved in the traffic system so that they play an intelligent role together, making the traffic system safe, smooth, low-pollution and low in energy consumption. Real-time detection and tracking of moving vehicles is one of the core parts of an intelligent transportation system. In recent years, with the rapid development of computer hardware and digital image processing technology, detection and tracking technology for moving vehicles has improved continuously, yet solving traffic problems directly with ITS technology still faces great challenges. Detection and tracking of moving vehicles means using computer vision to obtain image sequences of vehicles and then extracting their motion information, such as vehicle type, speed, traffic flow and lane occupancy. With these data, traffic can be regulated macroscopically, relieving congestion and thereby improving the traffic environment, raising road utilization and reducing the traffic accident rate. In addition, accurately recording driving behavior on the road in real time, such as illegal overtaking and vehicle collisions, allows traffic accidents to be handled quickly, greatly improving the efficiency of investigation, restoring smooth traffic quickly, and providing strong, reliable evidence for handling traffic incidents.
At present, the detection and tracking of vehicles in an intelligent traffic system still faces several unsolved problems:
first, for detection of a moving vehicle in a dynamic scene, the two mutually independent motions of the vehicle and the background make extraction of the vehicle more difficult, and the precision of current mainstream target detection technology on this task needs to be improved;
second, weaving and occlusion of vehicles in a real traffic environment are very frequent, and vehicle deformation under different viewing angles, changes of camera viewpoint and the like pose a great challenge to multi-camera vehicle retrieval and tracking;
third, for the roadside image collection in common use today, the effective distance filmed by a single camera is limited and cannot cover a complete road section, information collected by multiple cameras is difficult to associate, and obtaining long-distance, accurate vehicle trajectory information is very difficult.
Disclosure of Invention
The invention aims to provide a road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning, in which unmanned aerial vehicles film road traffic conditions and the traffic state of vehicles in the monitored area is analyzed; the method covers calibration of the monitoring picture, video-based vehicle detection and tracking, cross-camera multi-target trajectory matching based on geographic position, an algorithm for analyzing the traffic state from vehicle motion trajectories, and the like.
The invention is realized by the following technical scheme:
the road traffic behavior unmanned aerial vehicle monitoring system based on deep learning comprises an acquisition module, a single-camera processing module, a cross-camera matching module and a traffic parameter extraction module; wherein,
the acquisition module is used for acquiring aerial vehicle video data, and inputting the acquired multi-channel video data into the single-camera processing module to perform single-camera multi-target tracking processing on each channel of video;
the single-camera processing module is used for carrying out multi-target tracking processing on the multi-path video data output by the acquisition module, acquiring a vehicle track set of each path of video and inputting the vehicle track set to the cross-camera matching module;
the cross-camera matching module is used for performing information matching on the vehicle track set of each path of video input by the single-camera processing module to obtain complete tracks of all vehicles in the global video and inputting the complete tracks into the traffic parameter extraction module;
and the traffic parameter extraction module is used for analyzing the vehicle track information input by the cross-camera matching module and extracting the traffic parameters of the vehicle track information so as to obtain the traffic state of the target vehicle.
The invention has the further improvement that the single-camera processing module establishes a single-camera processing submodule for each path of video output by the acquisition module to perform data processing;
the single-camera processing submodule comprises an image preprocessing module, a vehicle detection module and a vehicle tracking module; the image preprocessing module is used for manually preprocessing the output video of the acquisition module distributed by the single-camera processing submodule to obtain an image sequence, lane information, a rotation adjustment matrix sequence and a homography matrix, and inputting the lane information, the image sequence and the rotation adjustment matrix sequence into the vehicle detection module; inputting the homography matrix into a vehicle tracking module; the vehicle detection module is used for carrying out target detection processing on the image sequence output by the image preprocessing module, obtaining information such as position coordinates, pixel sizes and vehicle types of all vehicles in the image, adjusting and supplementing detection results according to the rotation adjustment matrix sequence and the lane information output by the image preprocessing module, obtaining a target detection frame set sequence and inputting the target detection frame set sequence into the vehicle tracking module; the vehicle tracking module is used for carrying out target tracking processing on a target detection frame set sequence output by the vehicle detection module to obtain complete track information of all targets in the video, and adjusting a tracking result according to a homography matrix output by the image preprocessing module to obtain a target track information set; and integrating the target track information sets obtained by the single-camera processing sub-modules, and inputting the target track information sets into a cross-camera matching module.
The road traffic behavior unmanned aerial vehicle monitoring method based on deep learning is characterized in that the method is based on the road traffic behavior unmanned aerial vehicle monitoring system based on deep learning, and comprises the following steps:
1) a plurality of unmanned aerial vehicles form an unmanned aerial vehicle cluster to acquire aerial video data of vehicles; all paths of acquired data are synchronously stored and input into the single-camera processing module;
2) the single-camera processing module creates a sub-module for each path of video to perform single-camera multi-target tracking processing according to the output video of the acquisition module to obtain a vehicle track set of each path of video, and then integrates and inputs the vehicle track sets of all the videos to the cross-camera matching module;
3) the cross-camera matching module performs pairwise matching and information fusion on the vehicle track data of each video according to the integrated data output by the single-camera processing module to obtain complete track information of each vehicle in the global video, and then integrates the information of all vehicles into a global track set to be input to the traffic parameter extraction module;
4) the traffic parameter extraction module calculates the track information of each vehicle in the set by using a traffic parameter calculation formula according to the global track set input by the cross-camera matching module to obtain a single traffic parameter of the target; calculating the track information of every two vehicles in the set to obtain the interactive traffic parameters of the two targets; and evaluating the traffic state of each vehicle according to the calculated traffic parameters.
The further improvement of the invention is that the specific implementation method of the step 1) is as follows:
401) numbering all unmanned aerial vehicles, arranging them from small to large, and having them ascend in numbered order at the leftmost end of the monitoring area;
402) the No. 1 unmanned aerial vehicle ascends vertically, hovers at an altitude of about 250 meters above the ground, and then adjusts its position so that the left side of its picture covers the leftmost end of the monitoring area;
403) the No. 2 unmanned aerial vehicle, after hovering at an altitude of about 250 meters above the ground, adjusts its position so that the left area of its picture overlaps the right area of the No. 1 unmanned aerial vehicle's picture by about 30 meters of real-world length;
404) proceeding by analogy until the pictures of the unmanned aerial vehicle cluster cover the whole monitoring area; the video recording function of all unmanned aerial vehicles is started simultaneously, video data are synchronously acquired and stored, and after acquisition is finished the data are transferred from the unmanned aerial vehicles to a database.
The further improvement of the invention is that the specific implementation method of the step 2) is as follows:
501) reading the video collected by unmanned aerial vehicle No. 1 from the database, processing it into an image sequence in units of frames, and selecting one frame as a calibration image H to represent the scene of the video; selecting more than 20 marker points in the calibration image H, the marker points being required to lie at the same horizontal height and to be distributed as uniformly as possible over the whole monitoring picture, then recording the image coordinates of all marker points; acquiring the longitude and latitude of the marker points using Google Maps or on-site GPS, and converting them into east-north-up (ENU) coordinate information; arranging the image coordinates and ENU coordinates in corresponding order, inputting them into python's cv2.findHomography function for fitting, and calculating the homography matrix K mapping the image coordinate system of the calibration frame to the world coordinate system; on the calibration image H, for each lane area, marking a number of points along the edge of the lane area and saving their image coordinates; integrating the generated point set of each lane area into an npy file to represent the scene's lane information; extracting the position information L and feature information F of a number of feature points P of the calibration image H using the ORB algorithm;
502) using the ORB algorithm, extracting the position information L_j and feature information F_j of a number of feature points P_j of the j-th frame image H_j of the video; based on the feature information F of 501) and F_j, calculating the correspondence between the feature points P and P_j; based on this correspondence and the position information L of 501) and L_j, fitting with python's cv2.findHomography function to calculate the rotation adjustment matrix M_j mapping the image coordinate system of the currently processed image to the image coordinate system of the calibration frame; traversing all frames of the video by frame number in this way to obtain the rotation adjustment matrix sequence M;
503) performing frame-by-frame target detection on the image sequence of 501) using a target detector based on deep learning, obtaining the detection frames of all vehicles in each frame image, i.e. image position information and corresponding vehicle type information; analyzing the lane area in which each detection frame lies according to the lane information of 501); synthesizing the above information to obtain the detection frame set sequence D, whose unit d_j represents the detection frame set of the j-th frame picture, including the image coordinates, vehicle type and lane of each detection frame;
504) for the detection frame coordinates of each d_j in the detection frame set sequence D obtained in 503), performing a homography transformation using the rotation adjustment matrix M_j corresponding to the j-th frame in the sequence M of 502); obtaining the transformed coordinates and updating them into d_j to obtain d_j'; after traversing and updating all units, the detection frame set sequence is updated to D';
505) from the detection frame set sequence D' obtained in 504), calculating a vehicle trajectory set T using the SORT multi-target tracking algorithm based on Kalman filtering and Hungarian matching; its unit t_i represents the trajectory information of the i-th vehicle, including the target's image coordinate sequence (x_ij, y_ij), vehicle type, and lane sequence r_ij, where j is the frame number corresponding to the parameter;
506) from the vehicle trajectory set T obtained in 505), for the coordinate sequence (x_ij, y_ij) of each t_i, performing a homography transformation using the homography matrix K of 501) to obtain the transformed coordinates (x'_ij, y'_ij) and updating them into t_i to obtain w_i; after traversing and updating all units, the vehicle trajectory set is updated to W_1;
507) reading the video collected by the unmanned aerial vehicle numbered 2 from the database, repeating steps 501) to 506) and updating the vehicle trajectory set to W_2; proceeding in the same way, traversing all videos to obtain the vehicle trajectory set sequence W.
The further improvement of the invention is that the specific implementation method of the step 3) is as follows:
601) obtaining the vehicle trajectory set sequence W from the single-camera processing module, where W_n is the vehicle trajectory information set of the n-th video and its unit w_ni is the trajectory information of the i-th target of the n-th video; establishing a global vehicle trajectory information set W_total; setting n = 1;
602) taking W_n as the target, matching it against the data of W_{n+1}; establishing a matching set m_ni for each w_ni in W_n; setting i = 1;
603) traversing W_{n+1} and placing w_{(n+1)i} into the set m_ni according to the following rules: same vehicle type; overlapping effective time (the effective time being the period from the target's appearance to its disappearance); same lane during the overlapping effective time; after the traversal, calculating the Euclidean distance between the trajectory of each element of m_ni and that of w_ni over the overlapping effective time, keeping the element with the minimum distance below a set threshold and deleting the rest; if m_ni is empty, adding w_ni to the global vehicle trajectory information set W_total, otherwise merging w_ni into the unit of W_{n+1} corresponding to the element remaining in m_ni;
604) letting i = i + 1, then continuing the loop from 603) until all elements in W_n are traversed;
605) letting n = n + 1; if n equals the total number of videos in the video group, adding all elements of W_n to W_total, otherwise continuing the loop from 602);
606) sorting all elements of the set W_total and assigning numbers in order of appearance time; the set W_total contains the trajectory information of all targets within the global monitoring range of the video group, with the information of the same vehicle in different videos summarized into the same unit.
The further improvement of the invention is that the specific implementation method of the step 4) is as follows:
701) obtaining the global information set W_total from the cross-camera matching module; selecting or designating one unit from the set, the unit containing the position information sequence l and lane information sequence r corresponding to the vehicle's monitored time within the monitoring range; filtering the sequence information using a low-pass filtering method to obtain the position information sequence l'; setting a distance parameter x and, starting from the x-th node of the sequence l', calculating the Euclidean distance between each node and the node x−1 positions before it, then dividing by the time difference between the two nodes to obtain the speed of the target at that node; continuing to traverse the sequence l' until all remaining nodes are calculated, obtaining a velocity information sequence v; calculating the change of speed over time from the velocity information sequence v, and judging by a set threshold whether the target shows overspeed behavior while driving; counting, from the lane information sequence r, the number of lane changes and the lane change times of the target per unit time, and judging by a set threshold whether bad lane-changing behavior occurs while the target is driving;
702) selecting two targets in a front-rear relation, extracting their two units from the global information set W_total and applying the data processing of 701) to obtain their position information sequences l_1, l_2 and velocity information sequences v_1, v_2; calculating the Euclidean distance between l_1 and l_2 at corresponding times to obtain the front-rear spacing of the two, i.e. the car-following distance sequence D(t), and judging by a set threshold whether the rear vehicle follows too closely while driving; calculating the rear-vehicle time headway as TH = D(t) / v_2 and the time to collision of the two vehicles as TTC = D(t) / (v_2 − v_1) (defined when the rear vehicle is closing, v_2 > v_1), so as to judge whether the two vehicles are in danger of collision while driving;
703) selecting two targets in a parallel relation, extracting their two units from the global information set W_total and applying the data processing of 701) to obtain their position information sequences l_1, l_2; calculating the Euclidean distance between l_1 and l_2 at corresponding times to obtain the side-by-side spacing sequence D(h) of the two, and judging by a set threshold whether bad side-by-side driving behavior occurs while driving.
The invention has at least the following beneficial technical effects:
the method takes the traffic section video of the unmanned aerial vehicle aerial photography as input, takes a cross-camera multi-target tracking technology based on deep learning as a core technology, and completes the real-time detection and tracking of the moving vehicles in the monitoring range around the task requirement of an intelligent traffic system, and analyzes the traffic state of the vehicles in real time through the obtained motion parameters. According to the invention, by constructing the mapping relation between the world coordinates and the image coordinates, the target detection tracking technology based on the image is applied to the task of extracting the actual traffic parameters, and a novel and effective scheme is provided for acquiring the target traffic parameters of the intelligent traffic system. In summary, the present invention has the following advantages:
first: through parameter adjustment and data enhancement, a current mainstream deep-learning target detection model is improved, greatly raising the model's ability to detect vehicles in the test environment;
second: the unmanned aerial vehicles hover over the filmed area and collect data at an angle perpendicular to the ground, eliminating occlusion of targets by obstacles on the road and by dense traffic;
third: using cross-camera multi-target trajectory matching based on geographic position, the area filmed by each camera can be mapped into the same world coordinates, cross-camera data association can be carried out simply and accurately, and the acquisition range is enlarged.
Furthermore, by the cross-camera multi-target tracking technology, long-distance running track information of the target vehicle can be acquired, and accurate and rich data are provided for vehicle traffic parameter calculation.
Furthermore, according to the vehicle traffic parameters obtained through analysis, the running state of the vehicle can be monitored in real time, bad behaviors are warned in time, and the traffic condition of the environment is fed back in time.
Drawings
FIG. 1 is a diagram of the key technology relationship of the present invention.
FIG. 2 is a schematic diagram of data acquisition under a real environment of the present invention.
Fig. 3 is a traffic parameter extraction result demonstration diagram according to the present invention.
Detailed Description
The invention is further described below with reference to the following figures and examples.
As shown in figure 1, the road traffic behavior unmanned aerial vehicle monitoring system disclosed by the invention comprises four modules: an acquisition module, a single-camera processing module, a cross-camera matching module and a traffic parameter extraction module. The single-camera processing module consists of several single-camera processing sub-modules, each comprising an image preprocessing module, a vehicle detection module and a vehicle tracking module. The modules are specified below.
1. Acquisition module.
The acquisition module collects aerial video data of vehicles and inputs the collected multi-channel video data into the single-camera processing module for single-camera multi-target tracking of each channel of video. The main body of the acquisition module is an unmanned aerial vehicle cluster consisting of several unmanned aerial vehicles, the number depending on the extent of the monitoring area. First, all unmanned aerial vehicles are assigned numbers and arranged from small to large, and they ascend in numbered order at the leftmost end of the monitoring area, as shown in figure 2. During acquisition, the cluster hovers at a height of about 250 meters above the ground to collect data; the drones are arranged side by side along the long edge of the monitoring area in numbered order, the acquisition range of each drone is about 270 meters, and there is at least a 30-meter overlapping monitoring interval between adjacent drones. The cluster synchronously collects and stores data; after acquisition, the cluster is recovered and the data are stored in the database according to drone number.
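With the figures quoted above (about 270 meters of coverage per drone, at least 30 meters of overlap between neighbours), the size of the cluster for a given road section can be estimated as in this back-of-the-envelope sketch; the function name and defaults are illustrative, not from the patent:

    import math

    def drones_needed(monitored_length_m, coverage_m=270.0, overlap_m=30.0):
        # Each drone after the first adds (coverage - overlap) metres of new ground.
        if monitored_length_m <= coverage_m:
            return 1
        extra = monitored_length_m - coverage_m
        return 1 + math.ceil(extra / (coverage_m - overlap_m))

    print(drones_needed(1000.0))  # a 1 km section: 1 + ceil(730 / 240) = 5 drones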
By way of illustration, as shown in figure 2, target vehicles A and B appear in the acquisition area of drone No. 1, then leave it within the overlap area of drones No. 1 and No. 2 and enter the acquisition area of drone No. 2; by analogy, the two targets appear in the acquisition area of each drone in turn and finally leave the monitoring area, each drone capturing the motion information of the two targets during a different time period, to be processed by the subsequent modules.
2. Single camera processing module.
The single-camera processing module performs multi-target tracking on the multi-channel video information output by the acquisition module, obtains the vehicle trajectory set of each channel of video and inputs it to the cross-camera matching module. The single-camera processing module is composed of several single-camera processing sub-modules, each corresponding to one video collected by a drone. Each single-camera processing sub-module consists of an image preprocessing module, a vehicle detection module and a vehicle tracking module. The collected videos are processed in parallel by the sub-modules; each sub-module outputs the vehicle trajectory set of its video, and the outputs of all sub-modules are integrated as the output of the single-camera processing module and input to the cross-camera matching module. The functional modules of the sub-modules are further described below.
1) Image preprocessing module.
The image preprocessing module is used for manually preprocessing the output video of the acquisition module distributed by the sub-module and mainly comprises four preprocessing technologies of data reading, camera calibration, lane extraction and image rotation adjustment. The data reading function is to process the input single-camera collected video into an image sequence with a frame as a unit, and select one frame as a calibration frame to represent the scene of the video. The camera calibration function is to obtain the mapping relation between the image coordinates and the world coordinates, and the homography matrix of the image coordinates of the calibration frame and the real world coordinate system is obtained by using a homography matrix-based image coordinate and world coordinate conversion method so as to represent the mapping relation. The lane extraction function is to extract lane information in a monitoring area, artificially plan and calibrate a lane area in a frame image by using an image-based lane area characterization method, and provide support for subsequent extraction of the lane information of a vehicle and lane changing. The image rotation adjustment function is to calculate the mapping relation between the current test image and the reference image, and correct the tiny rotation and offset of the image in the video at different moments by a picture rotation distortion compensation method based on feature point matching, so as to overcome the traffic parameter measurement deviation caused by the displacement of a camera due to environmental factors (such as breeze, drift and the like). The specific steps are further described below.
Step 1: after receiving the video information, the sub-module processes the video into an image sequence in units of frames and selects one frame as the calibration image H to represent the scene of the video. More than 20 marker points are selected in the calibration image H; the marker points are required to lie at the same horizontal height and to be distributed as uniformly as possible over the whole monitoring picture, and the image coordinates of all marker points are then recorded.
Step 2: the longitude and latitude of the marker points are acquired using Google Maps or on-site GPS and converted into east-north-up (ENU) coordinate information. The image coordinates and ENU coordinates are arranged in corresponding order and input into python's cv2.findHomography function for fitting, calculating the homography matrix K mapping the image coordinate system of the calibration frame to the world coordinate system.
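A minimal sketch of this calibration step, assuming OpenCV and NumPy; the latitude/longitude-to-ENU conversion below uses a simple local tangent-plane approximation, which is one possible choice rather than anything the patent prescribes:

    import math
    import cv2
    import numpy as np

    EARTH_R = 6378137.0  # WGS-84 equatorial radius, metres

    def latlon_to_enu(lat, lon, lat0, lon0):
        # local tangent-plane approximation around the reference point (lat0, lon0)
        east = math.radians(lon - lon0) * EARTH_R * math.cos(math.radians(lat0))
        north = math.radians(lat - lat0) * EARTH_R
        return east, north

    def fit_calibration_homography(img_pts, geo_pts):
        # img_pts: N x 2 pixel coordinates of the marker points in calibration image H
        # geo_pts: N (lat, lon) pairs of the same markers, N >= 20 as recommended above
        lat0, lon0 = geo_pts[0]
        enu = np.float32([latlon_to_enu(la, lo, lat0, lon0) for la, lo in geo_pts])
        K, _ = cv2.findHomography(np.float32(img_pts), enu, cv2.RANSAC)
        return K  # maps calibration-frame image coordinates to world (ENU) metres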
Step 3: on the calibration image H, for each lane area, several points are marked along the edge of the lane area and their image coordinates are saved. The generated point set of each lane area is integrated into an npy file, representing the scene's lane information.
Step 4: using the ORB algorithm, the position information L and feature information F of a number (about 1000) of feature points P of the calibration image H are extracted. Using the ORB algorithm, the position information L_j and feature information F_j of a number (about 1000) of feature points P_j of the j-th frame image H_j of the video are extracted. Based on the feature information F and F_j, the correspondence between the feature points P and P_j is calculated; based on this correspondence and the position information L and L_j, fitting with python's cv2.findHomography function calculates the rotation adjustment matrix M_j mapping the image coordinate system of the currently processed image to the image coordinate system of the calibration frame. Traversing all frames of the video by frame number in this way yields the rotation adjustment matrix sequence M.
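A sketch of this per-frame adjustment, assuming OpenCV's ORB implementation and a brute-force Hamming matcher (the matcher and the RANSAC threshold are assumptions; the text only names ORB and cv2.findHomography):

    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=1000)  # ~1000 feature points, as in the text
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def rotation_adjustment_matrix(calib_gray, frame_gray):
        # P, L, F of the calibration image and P_j, L_j, F_j of frame j
        kp0, des0 = orb.detectAndCompute(calib_gray, None)
        kpj, desj = orb.detectAndCompute(frame_gray, None)
        matches = bf.match(desj, des0)  # correspondence between P_j and P
        src = np.float32([kpj[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp0[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # M_j maps frame j's image coordinates onto the calibration frame's
        M_j, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return M_j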
Step 5: the image sequence obtained in step 1, the lane information obtained in step 3 and the rotation adjustment matrix sequence M obtained in step 4 are input into the vehicle detection module; the homography matrix K obtained in step 2 is input into the vehicle tracking module.
2) Vehicle detection module.
The vehicle detection module performs target detection on the image sequence output by the image preprocessing module to obtain information such as position coordinates, pixel sizes and vehicle types of all vehicles in the image, then adjusts and supplements the detection results according to the rotation adjustment matrix sequence and the lane information output by the image preprocessing module, obtaining a target detection frame set sequence that is input into the vehicle tracking module. The yolov3 target detection model is used as the baseline model and is optimized with respect to the data set and training parameters. Besides public vehicle data sets, the training data set adds collected and manually annotated pictures, expanding the amount of image data for large vehicles, vehicles with shadows and the like, enriching the data types and strengthening the recognition ability of the model. In addition, the invention uses data enhancement to transform the data in angle, saturation, exposure and hue, expanding the data set through rotation and adjustment of saturation, exposure and hue and thereby improving the generalization ability of the vehicle detection model. Before training, a k-means clustering algorithm roughly clusters the detection frames of the training set into several representative types, used as a reference for adjusting the anchor parameters of the detection model, optimizing the regression loss and improving detection precision. During training, the learning rate is initially set to 0.01 so as to converge faster at the early stage. When the iterations reach 40000, the learning rate is reduced to one tenth of its value; when the iterations reach 45000, it is reduced to one tenth again; training finally runs to 85000 iterations to obtain the final model. The specific steps are further described below.
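Before stepping through them, here is a minimal sketch of the anchor clustering just mentioned: k-means over the training-set box sizes, using the common 1 − IOU distance (that distance and k = 9 are conventional choices for yolov3 and are assumptions here, not taken from the text):

    import numpy as np

    def iou_wh(wh, centroids):
        # IOU of boxes compared by width/height only, as if sharing a corner
        inter = np.minimum(wh[0], centroids[:, 0]) * np.minimum(wh[1], centroids[:, 1])
        union = wh[0] * wh[1] + centroids[:, 0] * centroids[:, 1] - inter
        return inter / union

    def kmeans_anchors(box_whs, k=9, iters=100):
        # box_whs: N x 2 (width, height) of the training-set detection frames
        box_whs = np.asarray(box_whs, dtype=np.float64)
        centroids = box_whs[np.random.choice(len(box_whs), k, replace=False)]
        for _ in range(iters):
            # assign each box to the centroid with the largest IOU (smallest 1 - IOU)
            assign = np.array([np.argmax(iou_wh(wh, centroids)) for wh in box_whs])
            for c in range(k):
                if np.any(assign == c):
                    centroids[c] = box_whs[assign == c].mean(axis=0)
        return centroids  # reference values for the detector's anchor parameters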
Step 1: from the image sequence output by the image preprocessing module, frame-by-frame target detection is performed using a deep-learning target detector, obtaining the detection frames of all vehicles in each frame image, i.e. the image position information and the corresponding vehicle type information. Meanwhile, the lane area in which each detection frame lies is analyzed according to the lane information output by the image preprocessing module. Synthesizing the above information yields the detection frame set sequence D, whose unit d_j represents the detection frame set of the j-th frame picture, including the image coordinates, vehicle type and lane of each detection frame.
Step 2: based on the detection frame set sequence D obtained in step 1, for the detection frame coordinates of each d_j, a homography transformation is performed using the rotation adjustment matrix M_j corresponding to the j-th frame in the sequence M output by the image preprocessing module; the transformed coordinates are obtained and updated into d_j to give d_j'. After all units are traversed and updated, the detection frame set sequence is updated to D'. The purpose of this step is to rotation-correct all detection coordinates of the corresponding frame according to the rotation adjustment matrix between each frame and the calibration frame calculated by the image preprocessing module, so that the detection coordinates are mapped from the image coordinates of the corresponding frame to the image coordinates of the calibration frame. With this correction, all calculated coordinates are based on the same image coordinate system, eliminating errors caused by coordinate deviation due to picture shaking.
Step 3: the detection frame set sequence D' obtained in step 2 is input into the vehicle tracking module.
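The rotation correction of step 2 reduces to one OpenCV call per frame; a sketch, assuming each detection is represented by point coordinates in frame j's image:

    import cv2
    import numpy as np

    def correct_coordinates(points_xy, M_j):
        # points_xy: N x 2 detection-frame coordinates in frame j's image;
        # M_j: rotation adjustment matrix mapping frame j to the calibration frame
        pts = np.float32(points_xy).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(pts, M_j).reshape(-1, 2)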
3) Vehicle tracking module
The vehicle tracking module performs target tracking on the target detection frame set sequence output by the vehicle detection module: it takes the position, size and lane information of all vehicles in each frame of the video, associates the information belonging to the same target between consecutive frames, and iterates to obtain the complete trajectory information of all targets in the video; the tracking result is then mapped according to the homography matrix output by the image preprocessing module, converting trajectories from image coordinates to world coordinates to obtain the target trajectory information set. The invention realizes this module with the SORT multi-target tracking algorithm. The SORT algorithm estimates the position and size of each target in the next frame from its position and size in the current frame using Kalman filtering. After the Kalman filter predicts the motion state of the target, the Hungarian algorithm judges whether a detection frame is successfully associated with the target according to the IOU between the prediction frame propagated from the previous frame and the detection frames of the current frame. If the association succeeds, the state of the target is updated with the detection frame; if it fails, the target is predicted with a linear model. When the IOU between a detection frame appearing in some frame and the prediction frames generated from the previous frame is below a threshold, a new target is considered to have appeared; a new label is assigned to it and its state in the next frame is predicted from the detection frame. When the prediction frame of a target has had no matching detection frame for several frames, the target is considered to have disappeared and its prediction is stopped. The specific steps are further described below.
Step 1: from the detection frame set sequence D' output by the vehicle detection module, a vehicle trajectory set T is calculated using the SORT multi-target tracking algorithm based on Kalman filtering and Hungarian matching; its unit t_i represents the trajectory information of the i-th vehicle, including the target's image coordinate sequence (x_ij, y_ij), vehicle type, and lane sequence r_ij, where j is the frame number corresponding to the parameter. This is equivalent to regrouping the target detection frame set sequence by the IDs of different vehicles to obtain the target trajectory information set in the image coordinate system.
Step 2: based on the trajectory information set T output in step 1, for the coordinate sequence (x_ij, y_ij) of each t_i, a homography transformation is performed using the homography matrix K output by the image preprocessing module, obtaining the transformed coordinates (x'_ij, y'_ij), which are updated into t_i to give w_i. After all units are traversed and updated, the vehicle trajectory set is updated to W. This step maps the trajectory coordinates of all vehicles from the image coordinate system of the calibration frame to the real-world coordinate system, providing data support for subsequent cross-camera trajectory matching, traffic parameter calculation and so on.
Step 3: the vehicle trajectory set W obtained in step 2 is the output of the vehicle tracking module and the final output of the corresponding single-camera processing sub-module; it is then integrated uniformly by the single-camera processing module.
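A sketch of the association step at the heart of the SORT procedure described above — IOU between the Kalman-predicted boxes and the current detections, solved as an assignment problem (scipy's linear_sum_assignment plays the role of the Hungarian algorithm; the Kalman prediction itself is omitted, and the 0.3 threshold is an assumed value):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        # a, b: boxes as (x1, y1, x2, y2)
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area - inter + 1e-9)

    def associate(predicted, detections, iou_thresh=0.3):
        # predicted: Kalman-predicted boxes of existing targets; detections: current frame
        if not predicted or not detections:
            return [], list(range(len(predicted))), list(range(len(detections)))
        cost = np.array([[1.0 - iou(p, d) for d in detections] for p in predicted])
        rows, cols = linear_sum_assignment(cost)
        matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]
        lost = [i for i in range(len(predicted)) if i not in {r for r, _ in matches}]
        new = [j for j in range(len(detections)) if j not in {c for _, c in matches}]
        return matches, lost, new  # unmatched detections spawn new target labels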
After all sub-modules output their vehicle trajectory sets, the single-camera processing module integrates the set output by each sub-module into a sequence in sub-module order and inputs it to the cross-camera matching module. Returning to the example: during data acquisition, each drone films targets A and B driving through its acquisition area during different time periods; the single-camera processing module processes the videos collected by all drones separately and extracts an independent vehicle trajectory set for each video, each set containing the trajectories of the two targets A and B, with trajectory time ranges consistent with the set order. The trajectories of the same target in different sets need to be integrated into a complete trajectory over the global monitoring range, which requires further processing by the cross-camera matching technique.
4. Cross-camera matching module
The cross-camera matching module performs information matching on the trajectory sets of the videos input by the single-camera processing module to obtain the complete trajectories of all vehicles in the global video, which are input into the traffic parameter extraction module. When several cameras collect data, the data of each camera is processed by the image preprocessing, vehicle detection and vehicle tracking modules; after processing, all video data are integrated into a sequence and passed to this module for cross-camera data matching to obtain global data. The invention uses a cross-camera multi-target trajectory matching method based on geographic position: by prior arrangement, adjacent cameras share a certain spatial overlap area; the world coordinates of all vehicles filmed by two adjacent cameras in the overlap area at the same time are compared, and if the distance is very small the two can be considered the same target. Two targets from different sets are thus matched, i.e. the information of the same target in several target trajectory sets is integrated into one unit, giving the complete information of the target over the global monitoring range. The specific steps are further described below.
Step 1: obtain the vehicle trajectory set sequence W from the single-camera processing module, where W_n is the vehicle trajectory information set of the n-th video and its unit w_ni is the trajectory information of the i-th target of the n-th video. Establish a global vehicle trajectory information set W_total = { }. Let n = 1.
Step 2: taking W_n as the target, match it against the data of W_{n+1}. Establish a matching set m_ni for each w_ni in W_n. Let i = 1.
Step 3: traverse W_{n+1} and place w_{(n+1)i} into the set m_ni according to the following rules: same vehicle type; overlapping effective time (the effective time being the period from the target's appearance to its disappearance); same lane during the overlapping effective time. After the traversal, calculate the Euclidean distance between the trajectory of each element of m_ni and that of w_ni over the overlapping effective time, keep the element with the minimum distance below a set threshold and delete the rest. If m_ni is empty, add w_ni to the global vehicle trajectory information set W_total; otherwise merge w_ni into the unit of W_{n+1} corresponding to the element remaining in m_ni.
Step 4: let i = i + 1, then continue the loop from step 3 until W_n is fully traversed.
Step 5: let n = n + 1; if n equals the total number of videos in the video group, add all elements of W_n to W_total; otherwise continue the loop from step 2.
Step 6: sort all elements of the set W_total and assign numbers in order of appearance time. The set W_total contains the trajectory information of all targets within the global monitoring range of the video group, with the information of the same vehicle in different videos summarized into the same unit. Finally, the global information set W_total is input into the traffic parameter extraction module.
The working process of the cross-camera matching module is as follows. First, several target trajectory information sets, obtained by analyzing the data collected by different cameras, are obtained from the vehicle tracking module and sorted according to the spatial order of the cameras. Following the geographic-position-based cross-camera multi-target trajectory matching method, a global information set is first established as an empty set, and the data of the first and second sets are matched according to the positions of trajectories in the overlap area. After matching, the information of successfully matched units is merged from the first set into the matched units of the second set, while units that failed to match are moved into the global information set, indicating that the corresponding target appears only in the first camera; then the second and third sets are matched, and so on until the last set, whose units are all moved into the global information set. Global IDs are assigned to all units of the global information set, completing the cross-camera matching. The information of each unit of the global information set is the complete information of the corresponding target over the global monitoring range, including vehicle type, position at each moment and lane information.
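A sketch of the matching rule between two adjacent trajectory sets, under the assumption that each unit carries a vehicle type plus per-time world positions and lanes; the field names, the 5-metre threshold and the merge helper are all illustrative, not from the patent:

    import numpy as np

    DIST_THRESH = 5.0  # metres; the patent leaves the actual threshold open

    def merge_unit(w, v):
        # splice w's earlier track into the matched unit v of the next set
        v['pos'].update(w['pos'])
        v['lane'].update(w['lane'])

    def match_sets(W_n, W_n1, W_total):
        # W_n, W_n1: units of adjacent videos, each a dict with 'type',
        # 'pos' {t: (x, y) in world metres} and 'lane' {t: lane id}
        for w in W_n:
            candidates = []
            for v in W_n1:
                times = sorted(set(w['pos']) & set(v['pos']))  # overlapping effective time
                if not times or v['type'] != w['type']:
                    continue
                if any(w['lane'][t] != v['lane'][t] for t in times):
                    continue
                d = np.mean([np.hypot(w['pos'][t][0] - v['pos'][t][0],
                                      w['pos'][t][1] - v['pos'][t][1]) for t in times])
                if d < DIST_THRESH:
                    candidates.append((d, v))
            if candidates:
                merge_unit(w, min(candidates, key=lambda c: c[0])[1])
            else:
                W_total.append(w)  # target never reaches the next camera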
Returning to the example: after processing by the cross-camera matching module, the trajectory information of targets A and B, originally dispersed over the trajectory set of each video, is spliced piece by piece in time order into complete trajectory information units containing the information of each vehicle from entering the acquisition area of the first drone to leaving that of the last. The benefit of establishing the relation between image and world coordinate systems is that several originally distinct image coordinate systems can be mapped, after matrix conversion, into the same world coordinate system, so information from different image coordinate systems can be fused. In the subsequent traffic parameter calculation, all data are based on the world coordinate system, improving the authenticity and reliability of the parameters.
5. Traffic parameter extraction module
Traffic parameter extraction is the back-end module of the vehicle behavior analysis system. From the global trajectory set output by the cross-camera matching module, it calculates the trajectory information of each vehicle in the set with traffic parameter formulas to obtain the individual traffic parameters of the target, and calculates the trajectory information of each pair of vehicles in the set to obtain the interactive traffic parameters of the two targets. The traffic state of each vehicle is evaluated from the calculated traffic parameters so as to judge whether the vehicle shows bad behavior. Currently, the traffic parameters the invention can analyze include: vehicle speed, overspeed parameter and lane change parameter for a single target; side-by-side driving parameters, front-rear following parameters, rear-vehicle time headway, time to collision and the like for several targets. Users can also define their own traffic parameter formulas over the extracted data, enriching the output of the module. The calculation of the different parameters is further explained below.
1) Speed, overspeed parameter and lane change parameter of a single target.
From the global information set W_total, take or designate one unit; the unit contains the position information sequence l and lane information sequence r corresponding to the vehicle's monitored time within the monitoring range. Filter the sequence information using a low-pass filtering method to obtain the position information sequence l'. Set a distance parameter x; starting from the x-th node of the sequence l', calculate the Euclidean distance between each node and the node x−1 positions before it, then divide by the time difference between the two nodes to obtain the speed of the target at that node. Continue traversing the sequence l' until all remaining nodes are calculated, obtaining the velocity information sequence v. In testing, x is the frame rate of the video, meaning that the speed of each point is calculated from the Euclidean distance between that position and the target's position one second earlier. From the velocity information sequence v, the change of the target's speed can be followed, and whether the target shows overspeed behavior while driving can be judged by setting a threshold; from the lane information sequence r, the number of lane changes and the lane change times of the target per unit time are counted, and whether the target shows bad lane-changing behavior while driving is judged by setting a threshold.
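A sketch of this speed calculation, with a short moving average standing in for the unspecified low-pass filter (the window of 5 is an assumption):

    import numpy as np

    def speed_sequence(positions, times, x):
        # positions: N x 2 world coordinates; times: N timestamps in seconds;
        # x: the distance parameter, e.g. the frame rate for a one-second baseline
        pos = np.asarray(positions, dtype=np.float64)
        kernel = np.ones(5) / 5.0  # low-pass filter stand-in: moving average
        pos = np.column_stack([np.convolve(pos[:, k], kernel, mode='same')
                               for k in (0, 1)])
        v = []
        for j in range(x - 1, len(pos)):
            d = np.hypot(*(pos[j] - pos[j - x + 1]))     # Euclidean distance
            v.append(d / (times[j] - times[j - x + 1]))  # divided by the time difference
        return np.array(v)  # overspeed check: e.g. (v > speed_limit).any()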
2) Following parameters, rear-vehicle time headway and time to collision for two targets in a front-rear relation.
From the global information set W_total, select two targets in a front-rear relation, extract their two units and apply the data processing of 1) to each, obtaining their position information sequences l_1, l_2 and velocity information sequences v_1, v_2. Calculate the Euclidean distance between l_1 and l_2 at corresponding times to obtain the front-rear spacing of the two, i.e. the car-following distance sequence D(t); by setting a threshold it can be judged whether the rear vehicle follows too closely while driving. Calculate the rear-vehicle time headway as TH = D(t) / v_2, and the time to collision of the two vehicles as TTC = D(t) / (v_2 − v_1) (defined when the rear vehicle is closing, i.e. v_2 > v_1), so as to judge whether the two vehicles are in danger of collision while driving.
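A sketch of these pairwise quantities under the reconstruction above, assuming the two trajectories are already aligned in time; when the rear vehicle is not closing the gap, TTC is reported as infinity:

    import numpy as np

    def following_parameters(l1, l2, v1, v2):
        # l1, l2: time-aligned world positions (N x 2) of the front and rear vehicle;
        # v1, v2: their speed sequences from the single-target processing above
        D = np.hypot(*(np.asarray(l1, float) - np.asarray(l2, float)).T)  # D(t)
        TH = D / np.maximum(np.asarray(v2, float), 1e-9)  # rear-vehicle time headway
        dv = np.asarray(v2, float) - np.asarray(v1, float)  # closing speed
        TTC = np.where(dv > 0, D / np.where(dv > 0, dv, 1.0), np.inf)
        return D, TH, TTC  # each compared against a threshold to flag risk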
3) Side-by-side driving parameters for two targets in a parallel relation.
From the global information set W_total, select two targets in a parallel relation, extract their two units and apply the data processing of 1) to each, obtaining their position information sequences l_1, l_2. Calculate the Euclidean distance between l_1 and l_2 at corresponding times to obtain the side-by-side spacing sequence D(h) of the two, and judge by setting a threshold whether bad side-by-side driving behavior occurs while driving.
By way of example, as shown in figure 3, the cross-camera matching module assigns ID 11 to target vehicle A and ID 13 to target vehicle B. Figure 3 records the speed of the two vehicles at that moment, the current lane, the number of lane changes per unit time, the following distance D of the two vehicles, the rear-vehicle time headway TH and the time to collision TTC. (Since the two vehicles are in a front-rear relation, side-by-side driving parameters are not calculated.)

Claims (7)

1. The road traffic behavior unmanned aerial vehicle monitoring system based on deep learning is characterized by comprising an acquisition module, a single-camera processing module, a cross-camera matching module and a traffic parameter extraction module; wherein,
the acquisition module is used for acquiring aerial vehicle video data, and inputting the acquired multi-channel video data into the single-camera processing module to perform single-camera multi-target tracking processing on each channel of video;
the single-camera processing module is used for carrying out multi-target tracking processing on the multi-path video data output by the acquisition module, acquiring a vehicle track set of each path of video and inputting the vehicle track set to the cross-camera matching module;
the cross-camera matching module is used for performing information matching on the vehicle track set of each path of video input by the single-camera processing module to obtain complete tracks of all vehicles in the global video and inputting the complete tracks into the traffic parameter extraction module;
and the traffic parameter extraction module is used for analyzing the vehicle track information input by the cross-camera matching module and extracting the traffic parameters of the vehicle track information so as to obtain the traffic state of the target vehicle.
2. The deep learning-based unmanned aerial vehicle monitoring system for road traffic behaviors as claimed in claim 1, wherein the single-camera processing module creates a single-camera processing submodule for data processing for each path of video output by the acquisition module;
the single-camera processing submodule comprises an image preprocessing module, a vehicle detection module and a vehicle tracking module; the image preprocessing module is used for manually preprocessing the output video of the acquisition module distributed by the single-camera processing submodule to obtain an image sequence, lane information, a rotation adjustment matrix sequence and a homography matrix, and inputting the lane information, the image sequence and the rotation adjustment matrix sequence into the vehicle detection module; inputting the homography matrix into a vehicle tracking module; the vehicle detection module is used for carrying out target detection processing on the image sequence output by the image preprocessing module, obtaining information such as position coordinates, pixel sizes and vehicle types of all vehicles in the image, adjusting and supplementing detection results according to the rotation adjustment matrix sequence and the lane information output by the image preprocessing module, obtaining a target detection frame set sequence and inputting the target detection frame set sequence into the vehicle tracking module; the vehicle tracking module is used for carrying out target tracking processing on a target detection frame set sequence output by the vehicle detection module to obtain complete track information of all targets in the video, and adjusting a tracking result according to a homography matrix output by the image preprocessing module to obtain a target track information set; and integrating the target track information sets obtained by the single-camera processing sub-modules, and inputting the target track information sets into a cross-camera matching module.
3. A road traffic behavior unmanned aerial vehicle monitoring method based on deep learning, characterized in that the method is based on the road traffic behavior unmanned aerial vehicle monitoring system based on deep learning of claim 1 or 2 and comprises the following steps:
1) a plurality of unmanned aerial vehicles form a cluster to acquire aerial video data of vehicles; all acquired video channels are stored synchronously and input into the single-camera processing module;
2) the single-camera processing module creates one submodule per video output by the acquisition module to perform single-camera multi-target tracking and obtain a vehicle track set for each video, then integrates the per-video track sets and inputs them into the cross-camera matching module;
3) the cross-camera matching module performs pairwise matching and information fusion on the per-video vehicle track data output by the single-camera processing module to obtain the complete track information of each vehicle in the global video, then integrates the information of all vehicles into a global track set and inputs it into the traffic parameter extraction module;
4) the traffic parameter extraction module applies the traffic parameter formulas to the track information of each vehicle in the global track set to obtain the individual traffic parameters of each target, applies them to the track information of each pair of vehicles to obtain the interactive traffic parameters of the two targets, and evaluates the traffic state of each vehicle from the calculated parameters.
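Claims 1–3 describe a strictly linear pipeline: acquisition → per-video single-camera processing → cross-camera matching → traffic parameter extraction. The skeleton below is one hypothetical way to wire those four stages in Python; every class, method and field name is an assumption of this sketch, and each stage body is stubbed.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """Trajectory of one vehicle: per-frame world coordinates plus metadata."""
    vehicle_id: int
    vehicle_type: str
    coords: list = field(default_factory=list)  # [(frame, x, y), ...]
    lanes: list = field(default_factory=list)   # lane id per frame

class SingleCameraProcessor:
    """Per-video submodule: preprocessing, detection, tracking (stubbed)."""
    def process(self, video_path: str) -> list:
        # calibration, detection and Sort tracking as detailed in claim 5
        return []

class CrossCameraMatcher:
    """Fuses the per-video track sets into one global set (claim 6)."""
    def match(self, track_sets: list) -> list:
        # pairwise matching on vehicle type / valid time / lane / distance
        return [t for ts in track_sets for t in ts]

class TrafficParameterExtractor:
    """Derives speed, lane changes, TH and TTC from tracks (claim 7)."""
    def extract(self, tracks: list) -> dict:
        return {t.vehicle_id: {} for t in tracks}

def run_pipeline(video_paths: list) -> dict:
    per_video = [SingleCameraProcessor().process(p) for p in video_paths]
    global_tracks = CrossCameraMatcher().match(per_video)
    return TrafficParameterExtractor().extract(global_tracks)
```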
4. The road traffic behavior unmanned aerial vehicle monitoring method based on deep learning of claim 3, wherein step 1) is implemented as follows:
401) numbering all unmanned aerial vehicles and having them ascend in turn, in ascending numbering order, at the leftmost end of the monitoring area;
402) unmanned aerial vehicle No. 1 ascends vertically, hovers at about 250 meters above the ground, and then adjusts its position so that the left edge of its picture lies at the leftmost end of the monitoring area;
403) unmanned aerial vehicle No. 2, after hovering at about 250 meters above the ground, adjusts its position so that the left area of its picture overlaps the right area of the picture of unmanned aerial vehicle No. 1 by about 30 meters of real-world length;
404) and so on, until the pictures of the unmanned aerial vehicle cluster cover the whole monitoring area; the video recording function of all unmanned aerial vehicles is then started simultaneously and video data are acquired synchronously and stored; after acquisition finishes, the data are transferred from the unmanned aerial vehicles to a database.
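The coverage geometry of step 1) can be sanity-checked with simple arithmetic: n drones whose pictures each span w metres of road, with o metres of overlap between neighbours, cover n·w − (n−1)·o metres, so n ≥ (L − o)/(w − o) drones suffice for a segment of length L. A small helper follows; the per-drone ground footprint is an assumption of this sketch, since it depends on the camera's field of view at the roughly 250 m hover altitude.

```python
import math

def drones_needed(monitor_length_m: float, footprint_m: float,
                  overlap_m: float = 30.0) -> int:
    """Minimum n so that n*footprint - (n-1)*overlap >= monitor_length."""
    if footprint_m <= overlap_m:
        raise ValueError("footprint must exceed the required overlap")
    # n >= (L - o) / (w - o)
    return max(1, math.ceil((monitor_length_m - overlap_m) /
                            (footprint_m - overlap_m)))

# e.g. a 1 km monitored segment, assuming each picture spans ~300 m of road:
print(drones_needed(1000.0, 300.0))  # -> 4
```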
5. The road traffic behavior unmanned aerial vehicle monitoring method based on deep learning of claim 4, wherein step 2) is implemented as follows:
501) read the video collected by unmanned aerial vehicle No. 1 from the database and process it into a frame-by-frame image sequence; select one frame as the calibration image H representing the scene of the video; select more than 20 marking points in H, requiring that the marking points lie at a consistent height and are distributed as uniformly as possible over the whole monitoring picture, and record the image coordinates of all marking points; obtain the longitude and latitude of the marking points with Google Maps or on-site GPS and convert them into local east-north coordinates; arrange the image coordinates and east-north coordinates in corresponding order and fit them with python's cv2.findHomography function to compute the homography matrix K mapping the image coordinate system of the calibration frame to the world coordinate system; on the calibration image H, mark several points along the edge of each lane area and save their image coordinates; integrate the point sets of all lane areas into an npy file representing the lane information of the scene; extract the position information L and feature information F of a number of feature points P of the calibration image H with the ORB algorithm;
502) extract the position information Lj and feature information Fj of feature points Pj of the j-th frame image Hj with the ORB algorithm; compute the correspondence between the feature points P and Pj from the feature information F of 501) and Fj; from this correspondence, the position information L of 501) and Lj, fit with python's cv2.findHomography function to compute the rotation adjustment matrix Mj mapping the image coordinate system of the currently processed frame to the image coordinate system of the calibration frame; traverse all frames of the video by frame number in this way to obtain the rotation adjustment matrix sequence M;
503) perform frame-by-frame target detection on the image sequence of 501) with a deep-learning-based target detector to obtain the detection frames of all vehicles in each frame, namely their image positions and corresponding vehicle types; determine the lane area of each detection frame from the lane information of 501); combine the above information into the detection frame set sequence D, whose unit dj, the detection frame set of the j-th frame, contains the image coordinates, vehicle type and lane of each detection frame;
504) for each dj in the sequence D of 503), perform a homography transformation with the rotation adjustment matrix Mj, the j-th element of the sequence M of 502), and update the transformed coordinates into dj to obtain d'j; after all units are traversed and updated, the detection frame set sequence becomes D';
505) from the detection frame set sequence D' of 504), compute the vehicle track set T with the Sort multi-target tracking algorithm based on Kalman filtering and Hungarian matching; its unit ti, the track information of the i-th vehicle, contains the image coordinate sequence (xij, yij) of the target, the vehicle type, and the lane sequence rij, where j is the frame number of each entry;
506) for the coordinate sequence (xij, yij) of each ti in the vehicle track set T of 505), perform a homography transformation with the homography matrix K of 501) to obtain the transformed coordinates (x'ij, y'ij) and update them into ti to obtain wi; after all units are traversed and updated, the vehicle track set becomes W1;
507) read the video collected by unmanned aerial vehicle No. 2 from the database and repeat 501) to 506) to obtain the vehicle track set W2; proceed likewise through all videos to obtain the vehicle track set sequence W.
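Steps 501) and 502) name their two OpenCV primitives explicitly: cv2.findHomography fits the image-to-world mapping K from the marked correspondences, and ORB feature matching yields the per-frame rotation adjustment matrix Mj back to the calibration frame. The sketch below exercises both calls; the RANSAC thresholds, feature count and matcher settings are assumptions not fixed by the claim.

```python
import cv2
import numpy as np

def fit_calibration_homography(img_pts, enu_pts):
    """Fit K: calibration-image coordinates -> east-north (world) coordinates.
    img_pts / enu_pts are (N, 2) arrays of corresponding points (N >= 4;
    the claim recommends 20+ points at a consistent height)."""
    K, _ = cv2.findHomography(np.float32(img_pts), np.float32(enu_pts),
                              cv2.RANSAC, 5.0)
    return K

def rotation_adjustment(calib_gray, frame_gray):
    """Estimate Mj: current-frame image coordinates -> calibration-frame
    image coordinates, from matched ORB keypoints (step 502)."""
    orb = cv2.ORB_create(1000)
    kp_c, des_c = orb.detectAndCompute(calib_gray, None)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_c), key=lambda m: m.distance)[:200]
    src = np.float32([kp_f[m.queryIdx].pt for m in matches])  # current frame
    dst = np.float32([kp_c[m.trainIdx].pt for m in matches])  # calibration frame
    Mj, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return Mj

def apply_homography(H, x, y):
    """Map a single (x, y) point through homography H."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```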
6. The road traffic behavior unmanned aerial vehicle monitoring method based on deep learning of claim 5, wherein step 3) is implemented as follows:
601) obtain the vehicle track set sequence W from the single-camera processing module, where Wn is the vehicle track information set of the n-th video and its unit wni is the track information of the i-th target of the n-th video; create a global vehicle track information set Wtotal and set n = 1;
602) taking Wn as the object and matching it against the data of Wn+1, establish a matching set mni for each wni in Wn, and set i = 1;
603) traverse Wn+1 and place wn+1,i into the set mni according to the following rules: it has the same vehicle type; it has an overlapping valid time, the valid time being the period from the target's appearance to its disappearance; and the lanes are identical during the overlapping valid time; after the traversal, compute the Euclidean distance between the track of each element of mni and the track of wni over the overlapping valid time, keep only the element with the minimum distance below a set threshold, and delete the rest; if mni is empty, add wni to the global vehicle track information set Wtotal; otherwise fuse wni with the unit of Wn+1 corresponding to the remaining element of mni;
604) set i = i + 1 and continue the loop from 603) until all elements of Wn have been traversed;
605) set n = n + 1; if n equals the total number of videos in the video group, add all elements of Wn to Wtotal; otherwise continue the loop from 602);
606) sort all elements of the set Wtotal by appearance time and assign numbers in that order; the set Wtotal now contains the track information of all targets within the global monitoring range of the video group, with the information of the same vehicle in different videos gathered into the same unit.
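Step 603)'s candidate filtering reduces to a few predicates: same vehicle type, overlapping valid time, identical lane throughout the overlap, and minimal mean Euclidean distance below a threshold. A minimal sketch, assuming a hypothetical track interface (.start/.end frames, pos(frame), lane(frame)) and an assumed threshold value:

```python
import numpy as np

def overlap_interval(t1, t2):
    """Overlapping valid (appearance-to-disappearance) frame span, or None."""
    lo, hi = max(t1.start, t2.start), min(t1.end, t2.end)
    return (lo, hi) if lo <= hi else None

def match_score(t1, t2, dist_thresh=5.0):
    """Mean trajectory distance over the overlap, or None when the claim-6
    rules (same type, overlapping valid time, same lane, distance below a
    threshold) reject the pair."""
    if t1.vehicle_type != t2.vehicle_type:
        return None
    span = overlap_interval(t1, t2)
    if span is None:
        return None
    frames = range(span[0], span[1] + 1)
    if any(t1.lane(f) != t2.lane(f) for f in frames):
        return None
    d = float(np.mean([np.linalg.norm(np.subtract(t1.pos(f), t2.pos(f)))
                       for f in frames]))
    return d if d < dist_thresh else None

def best_match(track, candidates):
    """The candidate with the minimal admissible distance, else None."""
    scored = [(s, c) for c in candidates
              if (s := match_score(track, c)) is not None]
    return min(scored, key=lambda sc: sc[0])[1] if scored else None
```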
7. The road traffic behavior unmanned aerial vehicle monitoring method based on deep learning of claim 6, wherein step 4) is implemented as follows:
701) obtain the global information set Wtotal from the cross-camera matching module; select or designate one unit, which contains the position information sequence l and lane information sequence r of the vehicle over its monitored time in the monitoring range; filter the position sequence with a low-pass filter to obtain the position information sequence l'; set a distance parameter x and, from the x-th node onward, compute the speed at each node as the Euclidean distance between that node and the node x−1 positions earlier divided by the time difference between the two nodes; continue traversing the sequence l' until all remaining nodes are processed, obtaining the speed information sequence v; from the speed information sequence v, analyze how the speed changes and judge, by setting a threshold, whether the target shows overspeed behavior while driving; from the lane information sequence r, count the target's number of lane changes per unit time and the lane-change durations, and judge, by setting a threshold, whether a bad lane-change behavior occurs while driving;
702) select two targets in a front-rear relation, extract both units from the global information set Wtotal, and process their data as in 701) to obtain the position information sequences l1, l2 and speed information sequences v1, v2 of the two vehicles; from l1 and l2 compute the Euclidean distance at each time to obtain the car-following distance sequence D(t) of the front and rear vehicles, and judge, by setting a threshold, whether the rear vehicle exhibits close-following behavior while driving; according to the formula
TH = D(t) / v_rear(t)
compute the headway time interval TH of the rear vehicle, and according to the formula
TTC = D(t) / (v_rear(t) − v_front(t))
compute the time to collision TTC of the two vehicles, where v_front and v_rear are the speeds of the front and rear vehicles taken from v1 and v2, so as to judge whether the two vehicles are in danger of collision while driving;
703) select two targets in a parallel relation, extract both units from the global information set Wtotal, and process their data as in 701) to obtain the position information sequences l1, l2; from l1, l2 and the time data compute the Euclidean distance to obtain the parallel spacing sequence D(h), and judge, by setting a threshold, whether the two vehicles exhibit bad parallel-driving behavior while driving.
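Step 701) reduces to a few array operations: low-pass filter the positions, take a distance parameter x and divide each displacement over x−1 steps by the elapsed time to obtain the speed sequence, then count transitions in the lane-id sequence. The sketch below follows that recipe; an edge-padded moving average stands in for the unspecified low-pass filter, and all thresholds are assumptions.

```python
import numpy as np

def smooth_positions(pos, win=5):
    """Edge-padded moving average as a stand-in low-pass filter; pos is (N, 2)."""
    pos = np.asarray(pos, float)
    pad = win // 2
    padded = np.pad(pos, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(win) / win
    return np.column_stack([np.convolve(padded[:, k], kernel, mode="valid")
                            for k in range(pos.shape[1])])

def speed_sequence(pos, times, x=5):
    """Speed at node j = distance to node j-(x-1) over the elapsed time (step 701)."""
    pos = np.asarray(pos, float)
    v = []
    for j in range(x - 1, len(pos)):
        dist = np.linalg.norm(pos[j] - pos[j - x + 1])
        v.append(dist / (times[j] - times[j - x + 1]))
    return np.array(v)

def lane_change_count(lanes):
    """Number of lane-id transitions in the lane sequence."""
    lanes = np.asarray(lanes)
    return int(np.sum(lanes[1:] != lanes[:-1]))

# Illustrative check against an assumed 120 km/h (~33.3 m/s) threshold:
pos = np.cumsum(np.full((50, 2), [1.0, 0.0]), axis=0)  # 1 m east per frame
t = np.arange(50) / 25.0                               # 25 fps timestamps
v = speed_sequence(smooth_positions(pos), t)
print("overspeed" if v.max() > 33.3 else "ok", round(v.max(), 1))
print(lane_change_count([1, 1, 2, 2, 1]))              # -> 2 lane changes
```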
CN201911360542.9A 2019-12-25 2019-12-25 Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning Active CN111145545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360542.9A CN111145545B (en) 2019-12-25 2019-12-25 Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning

Publications (2)

Publication Number Publication Date
CN111145545A true CN111145545A (en) 2020-05-12
CN111145545B (en) 2021-05-28

Family

ID=70520178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360542.9A Active CN111145545B (en) 2019-12-25 2019-12-25 Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning

Country Status (1)

Country Link
CN (1) CN111145545B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075376A (en) * 2006-05-19 2007-11-21 北京微视新纪元科技有限公司 Intelligent video traffic monitoring system based on multi-viewpoints and its method
CN102510482A (en) * 2011-11-29 2012-06-20 蔡棽 Image splicing reconstruction and overall monitoring method for improving visibility and visual distance
CN102915638A (en) * 2012-10-07 2013-02-06 复旦大学 Surveillance video-based intelligent parking lot management system
CN103281519A (en) * 2013-05-30 2013-09-04 水木路拓科技(北京)有限公司 Novel road traffic surveillance camera system
US20180233041A1 (en) * 2014-01-21 2018-08-16 Speedgauge, Inc. Identification of driver abnormalities in a traffic flow
CN107111903A (en) * 2014-12-19 2017-08-29 丰田自动车株式会社 Remote vehicle data gathering system
CN105407278A (en) * 2015-11-10 2016-03-16 北京天睿空间科技股份有限公司 Panoramic video traffic situation monitoring system and method
US10241651B2 (en) * 2016-12-22 2019-03-26 Sap Se Grid-based rendering of nodes and relationships between nodes
CN107067706A (en) * 2017-03-05 2017-08-18 赵莉莉 Command the traffic supervision system for vehicles of managing and control system in real time comprehensively based on intelligent transportation
CN108734655A (en) * 2017-04-14 2018-11-02 中国科学院苏州纳米技术与纳米仿生研究所 The method and system that aerial multinode is investigated in real time
KR101845943B1 (en) * 2017-07-26 2018-04-05 주식회사 한일에스티엠 A system and method for recognizing number plates on multi-lane using one camera
CN107886761A (en) * 2017-11-14 2018-04-06 金陵科技学院 A kind of parking lot monitoring method based on unmanned plane
CN107967817A (en) * 2017-11-17 2018-04-27 张慧 Intelligent managing system for parking lot and method based on multi-path camera deep learning
CN109034446A (en) * 2018-06-13 2018-12-18 南京理工大学 The smart city traffic incident emergency response system collected evidence online based on unmanned plane
CN110472496A (en) * 2019-07-08 2019-11-19 长安大学 A kind of traffic video intelligent analysis method based on object detecting and tracking

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833598A (en) * 2020-05-14 2020-10-27 山东科技大学 Automatic traffic incident monitoring method and system for unmanned aerial vehicle on highway
CN111627220A (en) * 2020-05-22 2020-09-04 中国科学院空天信息创新研究院 Unmanned aerial vehicle and ground cooperative processing system for vehicle detection
CN111554105A (en) * 2020-05-29 2020-08-18 浙江科技学院 Intelligent traffic identification and statistics method for complex traffic intersection
CN113766179A (en) * 2020-06-05 2021-12-07 上海竺程信息科技有限公司 Intelligent road side system based on camera
CN111696138A (en) * 2020-06-17 2020-09-22 北京大学深圳研究生院 System for automatically collecting, tracking and analyzing biological behaviors
CN111696138B (en) * 2020-06-17 2023-06-30 北京大学深圳研究生院 System for automatically collecting, tracking and analyzing biological behaviors
CN111898436A (en) * 2020-06-29 2020-11-06 北京大学 Multi-target tracking processing optimization method based on visual signals
CN111724603A (en) * 2020-07-01 2020-09-29 中南大学 CAV state determination method, device, equipment and medium based on traffic track data
CN111968367A (en) * 2020-08-12 2020-11-20 上海宝通汎球电子有限公司 Internet of things communication management system and management method
CN111968367B (en) * 2020-08-12 2021-12-14 上海宝通汎球电子有限公司 Internet of things communication management system and management method
CN111784747A (en) * 2020-08-13 2020-10-16 上海高重信息科技有限公司 Vehicle multi-target tracking system and method based on key point detection and correction
CN111784747B (en) * 2020-08-13 2024-02-27 青岛高重信息科技有限公司 Multi-target vehicle tracking system and method based on key point detection and correction
TWI777223B (en) * 2020-08-21 2022-09-11 交通部運輸研究所 Unmanned aerial vehicle traffic survey system and method thereof
CN112102372A (en) * 2020-09-16 2020-12-18 上海麦图信息科技有限公司 Cross-camera track tracking system for airport ground object
CN112183528A (en) * 2020-09-23 2021-01-05 桂林电子科技大学 Method for tracking target vehicle, device, system and computer storage medium thereof
CN112184814B (en) * 2020-09-24 2022-09-02 天津锋物科技有限公司 Positioning method and positioning system
CN112184814A (en) * 2020-09-24 2021-01-05 天津锋物科技有限公司 Positioning method and positioning system
CN112149595A (en) * 2020-09-29 2020-12-29 爱动超越人工智能科技(北京)有限责任公司 Method for detecting lane line and vehicle violation by using unmanned aerial vehicle
CN112381982A (en) * 2020-10-19 2021-02-19 北京科技大学 Unmanned supermarket system constructed based on deep learning
CN112381982B (en) * 2020-10-19 2022-02-22 北京科技大学 Unmanned supermarket system constructed based on deep learning
CN112381022A (en) * 2020-11-20 2021-02-19 深圳市汇芯视讯电子有限公司 Intelligent driving monitoring method, system, equipment and storable medium
CN112212881B (en) * 2020-12-14 2021-03-12 成都飞航智云科技有限公司 Flight navigator based on big dipper is used
CN112212881A (en) * 2020-12-14 2021-01-12 成都飞航智云科技有限公司 Flight navigator based on big dipper is used
CN112735164A (en) * 2020-12-25 2021-04-30 北京智能车联产业创新中心有限公司 Test data construction method and test method
CN112836683B (en) * 2021-03-04 2024-07-09 广东建邦计算机软件股份有限公司 License plate recognition method, device, equipment and medium for portable camera equipment
CN112836683A (en) * 2021-03-04 2021-05-25 广东建邦计算机软件股份有限公司 License plate recognition method, device, equipment and medium for portable camera equipment
CN112699854A (en) * 2021-03-22 2021-04-23 亮风台(上海)信息科技有限公司 Method and device for identifying stopped vehicle
CN113421289A (en) * 2021-05-17 2021-09-21 同济大学 High-precision vehicle track data extraction method for overcoming unmanned aerial vehicle shooting disturbance
CN113269424B (en) * 2021-05-17 2023-06-09 西安交通大学 Robot cluster task allocation method, system, equipment and storage medium
CN113269424A (en) * 2021-05-17 2021-08-17 西安交通大学 Robot cluster task allocation method, system, equipment and storage medium
CN113421289B (en) * 2021-05-17 2022-09-20 同济大学 High-precision vehicle track data extraction method for overcoming unmanned aerial vehicle shooting disturbance
CN113569647A (en) * 2021-06-29 2021-10-29 广州市赋安电子科技有限公司 AIS-based ship high-precision coordinate mapping method
CN113569647B (en) * 2021-06-29 2024-02-20 广州赋安数字科技有限公司 AIS-based ship high-precision coordinate mapping method
CN113469069A (en) * 2021-07-06 2021-10-01 沈阳工业大学 Method for acquiring and evaluating parameters of vehicle running congestion state of highway section
CN113469117A (en) * 2021-07-20 2021-10-01 国网信息通信产业集团有限公司 Multi-channel video real-time detection method and system
CN113674329A (en) * 2021-08-13 2021-11-19 上海同温层智能科技有限公司 Vehicle driving behavior detection method and system
CN113763425A (en) * 2021-08-30 2021-12-07 青岛海信网络科技股份有限公司 Road area calibration method and electronic equipment
CN113792634A (en) * 2021-09-07 2021-12-14 北京易航远智科技有限公司 Target similarity score calculation method and system based on vehicle-mounted camera
US11765562B2 (en) 2021-10-11 2023-09-19 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for matching objects in collaborative perception messages
CN114038193B (en) * 2021-11-08 2023-07-18 华东师范大学 Intelligent traffic flow data statistics method and system based on unmanned aerial vehicle and multi-target tracking
CN114038193A (en) * 2021-11-08 2022-02-11 华东师范大学 Intelligent traffic flow data statistical method and system based on unmanned aerial vehicle and multi-target tracking
CN114155511A (en) * 2021-12-13 2022-03-08 吉林大学 Environmental information acquisition method for automatically driving automobile on public road
CN114549593A (en) * 2022-02-25 2022-05-27 北京拙河科技有限公司 Target tracking method and system for multiple targets and multiple cameras
CN114550539A (en) * 2022-03-08 2022-05-27 北京通汇定位科技有限公司 Method for recording training track animation trace of learner-driven vehicle in driving school
CN114783181A (en) * 2022-04-13 2022-07-22 江苏集萃清联智控科技有限公司 Traffic flow statistical method and device based on roadside perception
CN115457780A (en) * 2022-09-06 2022-12-09 北京航空航天大学 Vehicle flow and flow speed automatic measuring and calculating method and system based on priori knowledge set
CN116069976B (en) * 2023-03-06 2023-09-12 南京和电科技有限公司 Regional video analysis method and system
CN116069976A (en) * 2023-03-06 2023-05-05 南京和电科技有限公司 Regional video analysis method and system
CN117456723A (en) * 2023-09-26 2024-01-26 长春理工大学 Automatic driving vehicle motion trail analysis system of intelligent traffic system
CN117456723B (en) * 2023-09-26 2024-06-07 长春理工大学 Automatic driving vehicle motion trail analysis system of intelligent traffic system
CN117237418A (en) * 2023-11-15 2023-12-15 成都航空职业技术学院 Moving object detection method and system based on deep learning
CN117649432A (en) * 2023-12-04 2024-03-05 成都臻识科技发展有限公司 Cross-camera multi-target tracking method, device and readable storage medium
CN117593717A (en) * 2024-01-18 2024-02-23 武汉大学 Lane tracking method and system based on deep learning
CN117593717B (en) * 2024-01-18 2024-04-05 武汉大学 Lane tracking method and system based on deep learning
CN117636270B (en) * 2024-01-23 2024-04-09 南京理工大学 Vehicle robbery event identification method and device based on monocular camera
CN117636270A (en) * 2024-01-23 2024-03-01 南京理工大学 Vehicle robbery event identification method and device based on monocular camera
CN118015844A (en) * 2024-04-10 2024-05-10 成都航空职业技术学院 Traffic dynamic control method and system based on deep learning network
CN118015844B (en) * 2024-04-10 2024-06-11 成都航空职业技术学院 Traffic dynamic control method and system based on deep learning network

Also Published As

Publication number Publication date
CN111145545B (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN111145545B (en) Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
Zhao et al. Detection, tracking, and geolocation of moving vehicle from uav using monocular camera
KR102189262B1 (en) Apparatus and method for collecting traffic information using edge computing
CN111429484B (en) Multi-target vehicle track real-time construction method based on traffic monitoring video
CN101969548B (en) Active video acquiring method and device based on binocular camera shooting
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
CN103425967A (en) Pedestrian flow monitoring method based on pedestrian detection and tracking
CN112750150A (en) Vehicle flow statistical method based on vehicle detection and multi-target tracking
Feng et al. Mixed road user trajectory extraction from moving aerial videos based on convolution neural network detection
US20190311209A1 (en) Feature Recognition Assisted Super-resolution Method
CN114170580A (en) Highway-oriented abnormal event detection method
CN111127520B (en) Vehicle tracking method and system based on video analysis
CN114023062A (en) Traffic flow information monitoring method based on deep learning and edge calculation
CN113903008A (en) Ramp exit vehicle violation identification method based on deep learning and trajectory tracking
CN106092123A (en) A kind of video navigation method and device
Liang et al. LineNet: A zoomable CNN for crowdsourced high definition maps modeling in urban environments
Cheng et al. Structure-aware network for lane marker extraction with dynamic vision sensor
CN114663473A (en) Personnel target positioning and tracking method and system based on multi-view information fusion
CN109359545B (en) Cooperative monitoring method and device under complex low-altitude environment
Zhao et al. Real-world trajectory extraction from aerial videos-a comprehensive and effective solution
Notz et al. Extraction and assessment of naturalistic human driving trajectories from infrastructure camera and radar sensors
CN113537170A (en) Intelligent traffic road condition monitoring method and computer readable storage medium
CN111950524A (en) Orchard local sparse mapping method and system based on binocular vision and RTK
KR102682309B1 (en) System and Method for Estimating Microscopic Traffic Parameters from UAV Video using Multiple Object Tracking of Deep Learning-based
EP4087236A1 (en) Video surveillance system with vantage point transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant