CN114022791A - Vehicle track motion characteristic identification method based on high-altitude visual angle identification system - Google Patents
- Publication number: CN114022791A (application CN202111201539.XA)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06N3/045: Computing arrangements based on biological models; neural network architectures; combinations of networks
- G06N3/08: Neural network learning methods
- G06T7/277: Image analysis; analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06T2207/20081: Indexing scheme for image analysis; training, learning
- G06T2207/20084: Indexing scheme for image analysis; artificial neural networks [ANN]
Abstract
The invention provides a vehicle track motion characteristic identification method based on a high-altitude visual angle recognition system. The system comprises an aerial camera device, a calculation processing host, and a display device. High-altitude video data collected by the aerial camera device are used to produce a high-altitude image training dataset and a high-altitude image sequence dataset. The training dataset is used to train a YOLOv5 model, which then performs vehicle recognition on the image sequence dataset to obtain the high-altitude image sequence vehicle recognition frame set. Applying Kalman filtering together with the Hungarian matching algorithm generates the original vehicle track motion characteristic identification text dataset. Through four processing stages (data preprocessing, motion feature extraction, lane number detection, and coordinate conversion), the five-level vehicle track motion characteristic identification text dataset is finally formed. The method reduces missed associations of vehicle target data and provides a concrete implementation for extracting vehicle position, speed, acceleration, and lane number features.
Description
Technical Field
The invention belongs to the field of intelligent transportation, and particularly relates to a vehicle track motion characteristic identification method based on a high-altitude visual angle identification system.
Background
In recent years, the rapid development of artificial intelligence technology and the autonomous driving industry has, on the one hand, promoted the intelligentization of road traffic and, on the other hand, placed higher demands on the acquisition of road traffic information. Artificial intelligence plays a major role in feature extraction, data mining, and decision control, and countries worldwide are accelerating research on autonomous driving, which is generally divided into six levels: L0 manual driving, L1 assisted driving, L2 semi-automated driving, L3 highly automated driving, L4 ultra-highly automated driving, and L5 fully automated driving. Evaluating these levels requires real vehicle driving data from real scenes as support; vehicle track data collected on real roads can be used to verify vehicles operating in autonomous driving mode and thereby evaluate their level. At this stage, relevant research on collecting vehicle track data has been conducted. Chinese patent application CN110751099A proposes a method for extracting high-precision tracks from aerial video; it focuses on denoising, splicing, and smoothing of vehicle tracks, the extraction of vehicle motion parameters is not described in detail, and in the vehicle target association stage it does not consider the influence of the change in a vehicle's motion state in the previous frame on its state in the current frame.
Chinese patent application CN111611918A proposes a traffic flow data collection and construction method based on aerial video, but its extraction of traffic flow parameters is limited, and the target tracking it uses is a single-target tracking method that needs to be extended to multiple targets. Chinese patent application CN111145545A excels at cross-camera monitoring for road traffic detection but lacks a method for extracting the motion characteristics of traffic vehicles.
Disclosure of Invention
In order to solve the technical problem, the invention provides a vehicle track motion characteristic identification method based on a high-altitude visual angle identification system.
The high-altitude visual angle recognition system is characterized by comprising an aerial camera device, a calculation processing host, and a display projection device.
The aerial camera device is connected to the calculation processing host wirelessly; the calculation processing host is connected to the display projection device by wire.
The aerial camera device collects video image data of road vehicles from a high-altitude visual angle and sends it wirelessly to the calculation processing host. The calculation processing host processes the collected video image data, obtains vehicle image recognition results and track generation results through the high-altitude visual angle vehicle track motion characteristic identification method, and transmits them to the display projection device for display.
During remote shooting, the aerial camera device is positioned directly above the road surface; that is, the camera's line of sight forms a 90-degree angle with the road surface.
The vehicle track motion characteristic identification method is characterized by comprising the following steps:
Step 1: the calculation processing host uses the aerial camera device positioned directly above the road surface to capture video image data wirelessly, forming a high-altitude image training dataset. The dataset is manually annotated with the circumscribed rectangular frame and vehicle type of each vehicle target, forming the high-altitude image training vehicle marking frame set.
Step 2: the calculation processing host uses the aerial camera device positioned directly above the road surface to capture video image data wirelessly, forming a high-altitude image sequence dataset from which vehicle track data are subsequently extracted. The road in each image of the high-altitude image sequence dataset is located in the middle of the image.
Step 3: a YOLOv5 deep learning network model is introduced. Each frame image in the high-altitude image training dataset and the corresponding vehicle marking frames in the high-altitude image training vehicle marking frame set are input in sequence into the YOLOv5 model for training; the loss function is constructed with the GIOU method and the loss value is optimized with the Adam algorithm. The trained YOLOv5 model then identifies the vehicle targets in the high-altitude image sequence dataset, yielding the high-altitude image sequence vehicle recognition frame set.
Step 4: starting from the first frame of circumscribed rectangular frame data in the high-altitude image sequence vehicle target recognition frame set, the following process is carried out: Kalman filtering is applied to the vehicle target bounding boxes of the previous frame to obtain vehicle target estimation frame data for the current frame; the Hungarian association algorithm then matches the current frame's vehicle target recognition frames against the estimation frames, with IOU distance as the matching criterion, so each matched recognition frame receives the ID number of the associated vehicle target, and unmatched current-frame target frames are assigned new ID numbers. This repeats until the end of the aerial image sequence. The video frame numbers, vehicle ID numbers, and the associated high-altitude image sequence vehicle target frame set are combined to form the original vehicle track motion characteristic identification text dataset.
Step 5: the original vehicle track motion characteristic identification text dataset undergoes four processing stages in sequence (data preprocessing, motion feature extraction, lane number detection, and coordinate conversion), finally forming the five-level vehicle track motion characteristic identification text dataset.
Preferably, the high-altitude image training dataset in step 1 is:

{data_e(x, y), e ∈ [1, E], x ∈ [1, X], y ∈ [1, Y]}

where data_e(x, y) denotes the pixel information in row x, column y of the e-th frame image in the high-altitude image training dataset, E is the number of frames in the training dataset, X is the number of image rows, and Y is the number of image columns.
The high-altitude image training vehicle marking frame set in step 1 is:

{(x1_{e,n}, y1_{e,n}, x2_{e,n}, y2_{e,n}, type_{e,n}), e ∈ [1, E], n ∈ [1, N_e]}

where x1_{e,n} and y1_{e,n} denote the horizontal and vertical coordinates of the upper-left corner of the marking rectangle of the n-th vehicle target in the e-th frame image, x2_{e,n} and y2_{e,n} denote the horizontal and vertical coordinates of its lower-right corner, type_{e,n} denotes the marked category of the n-th vehicle target in the e-th frame image, and N_e is the number of vehicle targets marked in the e-th frame.
preferably, in step 2, the fixed shooting frame rate of the aerial camera device is FPS, the length of the shot road is L, and the number of the covered pixel units in the road length direction of the shot picture is G; the high-altitude image data shooting sizes are X and Y;
The high-altitude image sequence dataset in step 2 is:

{data_t(x, y), t ∈ [1, T], x ∈ [1, X], y ∈ [1, Y]}

where data_t(x, y) denotes the pixel information in row x, column y of the t-th frame image in the high-altitude image sequence dataset, T is the total number of frames, X is the number of image rows, and Y is the number of image columns.
preferably, the YOLOv5 network framework in step 3 is a yolo5x network structure;
The high-altitude image sequence vehicle recognition frame set in step 3 is:

{(x1_{t,n}, y1_{t,n}, x2_{t,n}, y2_{t,n}, type_{t,n}), t ∈ [1, T], n ∈ [1, N_t]}

where x1_{t,n} and y1_{t,n} denote the horizontal and vertical coordinates of the upper-left corner of the circumscribed rectangle of the n-th vehicle target in the t-th frame image, x2_{t,n} and y2_{t,n} denote the horizontal and vertical coordinates of its lower-right corner, type_{t,n} denotes the category of the n-th vehicle target in the t-th frame image, and N_t is the number of recognized vehicle targets in the t-th frame.
preferably, in the step 4, the recorded video frame sequence numbers in the current video frame sequence numbers are collected into a set
Framet,n{framet,n}
Wherein the frame ist,nAnd the video sequence number corresponding to the nth vehicle target in the tth frame is shown.
The Kalman filtering process in step 4 comprises the following steps in order: initializing the vehicle target state vector; initializing the state transition matrix, the covariance matrix, the observation matrix, and the system noise matrices; predicting the current-frame vehicle target state vector from the optimal estimate of the previous frame's state vector to obtain the current-frame state vector prediction; predicting the current-frame vehicle target system error covariance matrix from the previous frame's error covariance matrix; updating the Kalman coefficient using the predicted current-frame error covariance matrix; estimating the optimal current-frame state vector from the state vector prediction and the system observation; updating the current-frame error covariance matrix; and extracting the current-frame vehicle target estimation frame set from the optimal current-frame state vector estimate.
When Kalman filtering initializes the vehicle target state vector, the vehicle target bounding box is characterized by the abscissa of its center, the ordinate of its center, its area, and its aspect ratio, and the motion state of the bounding box is described by a linear constant-velocity model:

x = [u, v, s, r, du, dv, ds]^T

where x denotes the motion state information of the bounding box, u the abscissa of the bounding box center, v the ordinate of the center, s the bounding box area, and r the bounding box aspect ratio (treated as a constant); du denotes the rate of change of the center abscissa, dv the rate of change of the center ordinate, and ds the rate of change of the bounding box area. The motion state information of the m-th vehicle target bounding box in frame t-1 is described as:

x_{t-1,m} = [u_{t-1,m}, v_{t-1,m}, s_{t-1,m}, r_{t-1,m}, du_{t-1,m}, dv_{t-1,m}, ds_{t-1,m}]^T

where u_{t-1,m} and v_{t-1,m} denote the abscissa and ordinate of the center of the m-th vehicle target bounding box in frame t-1, s_{t-1,m} its area, r_{t-1,m} its aspect ratio, and du_{t-1,m}, dv_{t-1,m}, ds_{t-1,m} the rates of change of the center abscissa, center ordinate, and area, respectively.
The abscissa and ordinate of the center, the area, and the aspect ratio of the m-th vehicle target frame in frame t-1 are computed as:

u_{t-1,m} = (x1_{t-1,m} + x2_{t-1,m}) / 2
v_{t-1,m} = (y1_{t-1,m} + y2_{t-1,m}) / 2
s_{t-1,m} = (x2_{t-1,m} - x1_{t-1,m}) * (y2_{t-1,m} - y1_{t-1,m})
r_{t-1,m} = (x2_{t-1,m} - x1_{t-1,m}) / (y2_{t-1,m} - y1_{t-1,m})

where x1_{t-1,m} and y1_{t-1,m} denote the abscissa and ordinate of the upper-left corner of the m-th vehicle target frame in frame t-1, and x2_{t-1,m} and y2_{t-1,m} denote the abscissa and ordinate of its lower-right corner.
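The corner-to-state conversion above can be sketched in Python; the function name and the use of NumPy are choices of this sketch, not part of the patent:

```python
import numpy as np

def bbox_to_state(x1, y1, x2, y2):
    """Convert a corner-format vehicle target frame (x1, y1, x2, y2) into the
    (u, v, s, r) quantities used to initialize the Kalman state: center
    abscissa/ordinate, bounding box area, and aspect ratio."""
    u = (x1 + x2) / 2.0          # center abscissa
    v = (y1 + y2) / 2.0          # center ordinate
    s = (x2 - x1) * (y2 - y1)    # bounding box area
    r = (x2 - x1) / (y2 - y1)    # aspect ratio (treated as near-constant)
    return np.array([u, v, s, r], dtype=float)
```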
When initializing the state transition matrix in step 4, the state transition matrix F models the motion of the target state vector. For the constant-velocity motion model adopted here, F is initialized as the 7x7 matrix:

F = | 1 0 0 0 1 0 0 |
    | 0 1 0 0 0 1 0 |
    | 0 0 1 0 0 0 1 |
    | 0 0 0 1 0 0 0 |
    | 0 0 0 0 1 0 0 |
    | 0 0 0 0 0 1 0 |
    | 0 0 0 0 0 0 1 |
When initializing the covariance matrix, the covariance matrix P represents the uncertainty of the target position information and is an empirical parameter.
When initializing the system noise covariance matrix, since the process noise is not measurable, the system noise covariance matrix Q is generally assumed to follow a normal distribution.
When initializing the observation matrix, the observation matrix H relates the state vector to the observable variables; since only u, v, s, and r are observed, it is initialized as the 4x7 matrix:

H = | 1 0 0 0 0 0 0 |
    | 0 1 0 0 0 0 0 |
    | 0 0 1 0 0 0 0 |
    | 0 0 0 1 0 0 0 |

When initializing the observation noise covariance matrix, since the observation noise is not measurable, the observation noise covariance matrix R is generally assumed to follow a normal distribution.
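The matrix initializations above can be sketched with NumPy. The particular numeric values chosen for P, Q, and R below are illustrative assumptions in the style of SORT-like trackers, not values specified by the patent:

```python
import numpy as np

DIM_X, DIM_Z = 7, 4  # state [u, v, s, r, du, dv, ds]; observation [u, v, s, r]

# Constant-velocity state transition matrix F: identity plus the coupling of
# each position component (u, v, s) to its rate of change (du, dv, ds)
F = np.eye(DIM_X)
F[0, 4] = F[1, 5] = F[2, 6] = 1.0

# Observation matrix H: only u, v, s, r are directly observed
H = np.zeros((DIM_Z, DIM_X))
H[:, :DIM_Z] = np.eye(DIM_Z)

# Empirical uncertainty/noise parameters (illustrative assumptions)
P = np.diag([10.0, 10.0, 10.0, 10.0, 1e4, 1e4, 1e4])  # initial error covariance
Q = np.eye(DIM_X) * 1e-2                              # process noise covariance
R = np.eye(DIM_Z)                                     # observation noise covariance
```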
In step 4, the Kalman filter predicts the current-frame vehicle target state vector from the optimal estimate of the previous frame's state vector; the prediction of the m-th vehicle target state vector in frame t is computed as:

x̂⁻_{t,m} = F x̂_{t-1,m} + B u_{t-1,m}

where x̂_{t-1,m} denotes the optimal estimate of the m-th vehicle target state vector in frame t-1, x̂⁻_{t,m} denotes the prediction of the m-th vehicle target state vector in frame t, F is the state transition matrix, B is the control matrix, and u_{t-1,m} is the control input vector (zero under the constant-velocity model, so this term vanishes).
In step 4, the Kalman filter predicts the current-frame vehicle target system error covariance matrix from the previous frame's error covariance matrix; the prediction of the m-th vehicle target system error covariance matrix in frame t is:

P⁻_{t,m} = F P_{t-1,m} F^T + Q

where P_{t-1,m} denotes the error covariance matrix of the m-th vehicle target system in frame t-1, P⁻_{t,m} denotes its predicted value in frame t, and Q is the covariance matrix of the process noise.
In the Kalman filtering, the Kalman coefficient is updated using the predicted current-frame error covariance matrix; the Kalman coefficient of the m-th vehicle target in frame t is computed as:

K_{t,m} = P⁻_{t,m} H^T (H P⁻_{t,m} H^T + R)^(-1)

where H is the observation matrix, R is the covariance matrix of the observation noise, and K_{t,m} is the Kalman coefficient of the m-th vehicle target in frame t.
In the Kalman filtering of step 4, the optimal estimate of the current-frame vehicle target state vector is computed from the state vector prediction and the system observation; the optimal estimate for the m-th vehicle target in frame t is:

x̂_{t,m} = x̂⁻_{t,m} + K_{t,m} (z_t - H x̂⁻_{t,m})

where x̂_{t,m} is the optimal estimate of the m-th vehicle target state vector in frame t and z_t is the observation.
In the Kalman filtering update of the current-frame system error covariance matrix in step 4, the update formula for the m-th vehicle target in frame t is:

P_{t,m} = (I - K_{t,m} H) P⁻_{t,m}

where P_{t,m} is the system error covariance matrix of the m-th vehicle target in frame t and I is the identity matrix.
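The predict and update formulas above can be sketched as two small NumPy functions; the function names are choices of this sketch. The test dimensions are generic, since the equations hold for any compatible state/observation sizes:

```python
import numpy as np

def kf_predict(x_est, P_est, F, Q):
    """Prediction step: propagate the state vector and the error covariance
    one frame ahead. The control term B*u is zero under the constant-velocity
    model, so it is omitted."""
    x_pred = F @ x_est
    P_pred = F @ P_est @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update step: fuse the prediction with the observation z."""
    S = H @ P_pred @ H.T + R                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman coefficient
    x_est = x_pred + K @ (z - H @ x_pred)           # optimal state estimate
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred  # updated error covariance
    return x_est, P_est
```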
In step 4, the current-frame vehicle target estimation frame set is extracted from the optimal estimates of the current-frame state vectors; the optimal estimate of the m-th target state vector in frame t is described as:

x̂_{t,m} = [u_{t,m}, v_{t,m}, s_{t,m}, r_{t,m}, du_{t,m}, dv_{t,m}, ds_{t,m}]^T

where u_{t,m} and v_{t,m} denote the optimally estimated abscissa and ordinate of the center of the m-th vehicle target bounding box in frame t, s_{t,m} its optimally estimated area, r_{t,m} its optimally estimated aspect ratio, and du_{t,m}, dv_{t,m}, ds_{t,m} the optimally estimated rates of change of the center abscissa, center ordinate, and area, respectively.
The coordinates of the current-frame vehicle target estimation frame are recovered from the estimated center, area, and aspect ratio; since s = w * h and r = w / h, the width is w_{t,m} = sqrt(s_{t,m} * r_{t,m}) and the height is h_{t,m} = sqrt(s_{t,m} / r_{t,m}), giving:

x1_{t,m} = u_{t,m} - w_{t,m} / 2
y1_{t,m} = v_{t,m} - h_{t,m} / 2
x2_{t,m} = u_{t,m} + w_{t,m} / 2
y2_{t,m} = v_{t,m} + h_{t,m} / 2

where x1_{t,m} and y1_{t,m} denote the optimally estimated abscissa and ordinate of the upper-left corner of the m-th vehicle target frame in frame t, and x2_{t,m} and y2_{t,m} those of its lower-right corner. The current-frame vehicle target estimation frame set is therefore:

{(x1_{t,m}, y1_{t,m}, x2_{t,m}, y2_{t,m}), m ∈ [1, M_t]}

where M_t is the number of estimated vehicle targets in frame t.
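The inverse conversion from the estimated state back to corner coordinates can be sketched as follows (function name chosen for this sketch):

```python
import numpy as np

def state_to_bbox(u, v, s, r):
    """Recover the corner-format estimation frame from the estimated center
    (u, v), area s, and aspect ratio r. Since s = w*h and r = w/h, the width
    is sqrt(s*r) and the height is sqrt(s/r)."""
    w = np.sqrt(s * r)   # estimated frame width
    h = np.sqrt(s / r)   # estimated frame height
    return u - w / 2.0, v - h / 2.0, u + w / 2.0, v + h / 2.0
```

Round-tripping a box through (u, v, s, r) and back recovers the original corners, which is a quick sanity check on both conversions.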
preferentially, the Hungarian correlation algorithm in the step 4 carries out matching by calculating the intersection ratio of vehicle target frames IOU;
The Hungarian association algorithm in step 4 computes the IOU between vehicle target frames and matches as follows: for the m-th vehicle target estimation frame of frame t in the current-frame estimation frame set and the n-th vehicle target recognition frame of frame t in the current-frame recognition frame set, the intersection area is computed as:

S1 = max(0, min(x2_{t,m}, x2_{t,n}) - max(x1_{t,m}, x1_{t,n})) * max(0, min(y2_{t,m}, y2_{t,n}) - max(y1_{t,m}, y1_{t,n}))

where S1 denotes the intersection area of the m-th estimation frame and the n-th recognition frame of frame t. The union area is computed as:

S2 = (x2_{t,m} - x1_{t,m}) * (y2_{t,m} - y1_{t,m}) + (x2_{t,n} - x1_{t,n}) * (y2_{t,n} - y1_{t,n}) - S1

where S2 denotes the union area of the m-th estimation frame and the n-th recognition frame of frame t. The IOU is then:

IOU = S1 / S2
The IOU matching principle of the Hungarian association algorithm is: if the IOU between the m-th vehicle target estimation frame of frame t and the n-th vehicle target recognition frame of frame t is the maximum and the two belong to the same vehicle class, then the m-th vehicle target of frame t-1 and the n-th vehicle target of frame t are the same vehicle target, and the n-th vehicle target of frame t is marked with the same ID number as the m-th vehicle target of frame t-1. The associated vehicle ID number set is:

ID_{t,n} = {id_{t,n}}

where id_{t,n} denotes the vehicle ID number corresponding to the n-th vehicle target in the t-th frame.
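The IOU computation and the one-to-one association can be sketched as follows. For compactness this sketch finds the total-IOU-maximizing assignment by exhaustive search over permutations, which yields the same optimal assignment the Hungarian algorithm computes in polynomial time (so it is only practical for small frame counts); the function names and the 0.3 IOU floor are assumptions of this sketch:

```python
import itertools

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) corner format."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    s1 = ix * iy                                       # intersection area S1
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    s2 = area_a + area_b - s1                          # union area S2
    return s1 / s2 if s2 > 0 else 0.0

def associate(est_boxes, det_boxes, iou_min=0.3):
    """One-to-one assignment maximizing total IOU between estimation frames
    and recognition frames. Unmatched detections receive new ID numbers."""
    n = len(det_boxes)
    best_total, best_pairs = -1.0, []
    for perm in itertools.permutations(range(n), min(len(est_boxes), n)):
        pairs = [(i, j) for i, j in enumerate(perm)
                 if iou(est_boxes[i], det_boxes[j]) >= iou_min]
        total = sum(iou(est_boxes[i], det_boxes[j]) for i, j in pairs)
        if total > best_total:
            best_total, best_pairs = total, pairs
    matched = {j for _, j in best_pairs}
    new_ids = [j for j in range(n) if j not in matched]
    return best_pairs, new_ids
```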
In step 4, the video frame numbers, vehicle ID numbers, and the associated high-altitude image sequence vehicle target frame set are combined to form the original vehicle track motion characteristic identification text dataset:

{(frame_{t,n}, id_{t,n}, x1_{t,n}, y1_{t,n}, x2_{t,n}, y2_{t,n}, type_{t,n})}
preferably, the data preprocessing performed in step 5 is as follows:
firstly, the center point coordinate of a vehicle target frame needs to be calculated, and the calculation formula of the center point coordinate is as follows:
wherein the content of the first and second substances,represents the abscissa of the center point of the nth vehicle target identification frame of the t-th frame,representing the vertical coordinate of the center point of the nth vehicle target identification frame of the t frame;
Second, the width and height of each vehicle target frame are computed:

w_{t,n} = x2_{t,n} - x1_{t,n}
h_{t,n} = y2_{t,n} - y1_{t,n}

where w_{t,n} denotes the width and h_{t,n} the height of the n-th vehicle target frame in frame t.
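The two preprocessing computations above can be combined into a single small helper (a sketch; the function name is an assumption):

```python
def preprocess_box(x1, y1, x2, y2):
    """Step-5 preprocessing sketch: corner coordinates -> center point,
    width, and height of the vehicle target frame."""
    xc = (x1 + x2) / 2.0   # center abscissa
    yc = (y1 + y2) / 2.0   # center ordinate
    w = x2 - x1            # frame width
    h = y2 - y1            # frame height
    return xc, yc, w, h
```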
This forms the first-level vehicle track motion characteristic identification text dataset:

{(frame_{t,n}, id_{t,n}, xc_{t,n}, yc_{t,n}, w_{t,n}, h_{t,n}, type_{t,n})}
During data preprocessing of the first-level vehicle track motion characteristic identification text dataset, a threshold discrimination method first screens the track data, retaining only points whose center coordinates fall within the thresholds, forming the second-level vehicle track motion characteristic identification text dataset. The discrimination formula is:

X1 <= xc_{t,n} <= X2 and Y1 <= yc_{t,n} <= Y2

where xc_{t,n} and yc_{t,n} denote the retained abscissa and ordinate after threshold screening, X1 and X2 the lower and upper limits of the abscissa threshold, and Y1 and Y2 the lower and upper limits of the ordinate threshold.
Second, the tracks sharing each ID number are counted; if the number of video frames for an ID is less than a fixed value, the track is judged to be a fragmentary segment and removed. The judgment formula is:

count(frame_{t,n} : id_{t,n} = id) < threshold

where count(...) denotes the number of video frames in which the given vehicle ID appears and threshold denotes the fixed value.
the formed secondary vehicle track motion characteristic identification text data set comprises:
where the listed quantities are the video frame serial number, the vehicle id serial number, the width and the height of each vehicle target frame after data screening.
Step 5, the motion characteristic extraction process comprises the following steps:
first, the vehicle speed under each video frame serial number is calculated for each vehicle id serial number; specifically, the speed of the current frame is calculated from the position difference and time difference between the current frame and the previous frame, and comprises the lateral and longitudinal speed of the vehicle. The formed data set is the three-level vehicle track motion characteristic identification text data set, and the lateral and longitudinal vehicle speed calculation formula is as follows:
where the first two quantities are the center-point abscissas of the vehicle target frame in the t-th and (t-1)-th frames under the vehicle id serial number corresponding to the nth vehicle target of the t-th frame, the next two are the corresponding center-point ordinates, v_{t,n,x} and v_{t,n,y} represent the lateral and longitudinal speed of the center point of the nth vehicle target frame in the t-th frame, and the last quantity is the video frame serial number of the (t-1)-th frame after data screening;
since the speed calculation of each frame utilizes the position data of the previous frame, the vehicle speed of the first frame in each id vehicle frame sequence cannot be calculated, so that a cubic polynomial is adopted for fitting, and the calculation formula is as follows:
where f_3(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}) is a cubic function of v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5} and v_{x,1} is the first-frame x-direction speed; f_3(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5}) is a cubic function of v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5} and v_{y,1} is the first-frame y-direction speed; v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5} are the speeds of the 2nd, 3rd, 4th and 5th frames of each vehicle id;
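As a concrete illustration of the speed step, the sketch below computes per-frame lateral and longitudinal speeds from consecutive center points and fills the undefined first frame by extrapolating a cubic polynomial fitted to the speeds of frames 2 to 5. NumPy, the function name, and the example frame rate are illustrative assumptions, not part of the patent:

```python
import numpy as np

FPS = 30.0  # assumed camera frame rate; the patent leaves FPS symbolic


def speeds_from_positions(xs, ys, fps=FPS):
    """Per-frame lateral/longitudinal speed from consecutive center points.

    The current-frame speed is the position difference between the current
    and previous frame divided by the frame interval (1/fps). The first
    frame has no predecessor, so its speed is extrapolated from a cubic
    polynomial fitted through the speeds of frames 2..5.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    vx = np.diff(xs) * fps  # speeds for frames 2..T (pixels per second)
    vy = np.diff(ys) * fps

    def first_frame(v):
        t = np.arange(2, 6)                 # frame indices 2, 3, 4, 5
        coeffs = np.polyfit(t, v[:4], 3)    # exact cubic through 4 points
        return np.polyval(coeffs, 1)        # extrapolate to frame 1

    vx = np.concatenate([[first_frame(vx)], vx])
    vy = np.concatenate([[first_frame(vy)], vy])
    return vx, vy
```

For a track with at least five frames this reproduces the difference-quotient speeds exactly and only the first entry is fitted.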
secondly, the vehicle acceleration under each video frame serial number is calculated for each vehicle id serial number; specifically, the acceleration is calculated from the speed difference and time difference between the current frame and the previous frame, and comprises the lateral and longitudinal acceleration of the vehicle, completing the three-level vehicle track motion characteristic identification text data set. The lateral and longitudinal acceleration calculation formula is as follows:
where the first two quantities are the lateral speeds of the center point of the nth target frame of the t-th frame and of the mth target frame of the (t-1)-th frame under the vehicle id serial number corresponding to the nth vehicle target of the t-th frame, the next two are the corresponding longitudinal speeds, and a_{t,n,x} and a_{t,n,y} represent the lateral and longitudinal acceleration of the center point of the nth vehicle target frame of the t-th frame;
in the same way as the speed, the accelerations of the first frames of different vehicles are fitted by using a cubic polynomial, and the calculation formula is as follows:
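The acceleration step mirrors the speed step, so it can be sketched the same way: difference quotients of the speeds, with the first frame filled by a cubic fit over frames 2 to 5 (names and the default frame rate are illustrative):

```python
import numpy as np


def accelerations_from_speeds(vx, vy, fps=30.0):
    """Per-frame lateral/longitudinal acceleration from consecutive speeds.

    The current-frame acceleration is the speed difference between the
    current and previous frame divided by the frame interval; the
    undefined first frame is filled by extrapolating a cubic polynomial
    fitted to the accelerations of frames 2..5, as for the speeds.
    """
    def derive(v):
        a = np.diff(np.asarray(v, float)) * fps
        t = np.arange(2, 6)
        coeffs = np.polyfit(t, a[:4], 3)
        return np.concatenate([[np.polyval(coeffs, 1)], a])

    return derive(vx), derive(vy)
```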
the formed three-level vehicle track motion characteristic identification text data set comprises:
and 5, detecting the lane number as follows:
firstly, carrying out linear fitting on vehicle position coordinate data in a three-level vehicle track motion characteristic identification text data set to obtain a fitting straight line, wherein the fitting straight line expression is as follows:
where the expression gives the fitted ordinate as a function of the abscissa, A represents the slope of the line, and B represents the intercept of the line;
secondly, the distances from the vehicle position coordinate data in the three-level vehicle track motion characteristic identification text data set to the fitted line are respectively calculated as dist = |A·x − y + B| / √(A² + 1);
the lane number is judged by using a threshold judgment method to form a four-level vehicle track motion characteristic identification text data set, and the formula for judging the lane number is as follows:
{lane_{t,n} = k, if dist_{k,1} ≤ dist ≤ dist_{k,2}}
where lane_{t,n} indicates the lane number of the center point of the nth vehicle target frame in the t-th frame, k indicates the determined lane number, and dist_{k,1} and dist_{k,2} indicate the lower and upper bounds of the kth lane;
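The lane-number step can be sketched as follows; a least-squares fit stands in for the patent's linear fitting, the point-to-line distance is the standard formula for a line y = A·x + B, and the per-lane [lower, upper] bounds are assumed to be calibrated beforehand (function and parameter names are illustrative):

```python
import numpy as np


def lane_numbers(x, y, lane_bounds):
    """Assign a lane number to every trajectory point.

    A straight line y = A*x + B is least-squares fitted to all center
    points, the perpendicular distance of each point to that line is
    |A*x - y + B| / sqrt(A^2 + 1), and each distance is compared against
    per-lane [lower, upper] bounds. Points outside every band get -1.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    A, B = np.polyfit(x, y, 1)                   # slope and intercept
    dist = np.abs(A * x - y + B) / np.hypot(A, 1.0)
    lanes = np.full(len(x), -1, dtype=int)
    for k, (lo, hi) in enumerate(lane_bounds, start=1):
        lanes[(dist >= lo) & (dist <= hi)] = k
    return lanes, (A, B), dist
```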
the formed four-level vehicle track motion characteristic identification text data set comprises:
and 5, converting the coordinates into: converting the pixel unit number covered in the road length direction and the actual road length, and forming a five-level vehicle track motion characteristic identification text data set after conversion, wherein the conversion ratio is as follows:
where q is the conversion ratio, equal to the actual road length divided by the number of pixel units covered along the road length direction;
the four-level vehicle track motion characteristic identification text data set parameter conversion process comprises the following steps:
where the listed quantities are, for the nth vehicle target frame of the t-th frame after coordinate conversion: the center-point abscissa and ordinate, the frame width and height, the lateral speed v_{t,n,x,q}, the longitudinal speed v_{t,n,y,q}, the lateral acceleration a_{t,n,x,q} and the longitudinal acceleration a_{t,n,y,q};
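A minimal sketch of the coordinate conversion, using the capture parameters stated later in the description (152 meters of road covered by 3840 pixel units); the record field names are illustrative assumptions:

```python
ROAD_LENGTH_M = 152.0   # actual road length from the capture setup
ROAD_PIXELS = 3840.0    # pixel units covered along the road direction

Q = ROAD_LENGTH_M / ROAD_PIXELS  # conversion ratio, meters per pixel


def to_world_units(record, q=Q):
    """Scale one trajectory record from pixel units to meters.

    Every length-based quantity (center coordinates, frame width/height,
    speeds, accelerations) is multiplied by the same ratio q; frame and
    id serial numbers are left untouched. Field names are illustrative.
    """
    scaled = dict(record)
    for key in ("x", "y", "w", "h", "vx", "vy", "ax", "ay"):
        scaled[key] = record[key] * q
    return scaled
```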
the formed five-level vehicle track motion characteristic identification text data set comprises:
the invention has the advantages that: firstly, a new vehicle track characteristic identification method is provided, which is different from the existing patent, the method of the invention applies a YOLOv5 identification model, applies Kalman filtering and Hungarian algorithm based on a uniform motion model, and provides a method for extracting the characteristics of vehicle speed, acceleration and lane number; the method overcomes the defects of the prior patent in the vehicle motion characteristic method process, the applied Kalman filtering can slow down the omission of the relevant part of the vehicle target data, and the speed, acceleration and lane number extraction method can effectively extract the vehicle motion characteristics.
Drawings
FIG. 1: is a schematic view of the device of the invention;
FIG. 2: is a working scene diagram of the invention;
FIG. 3: is a flow chart of the method of the present invention;
FIG. 4: extracting a test chart for the vehicle track by the method;
FIG. 5: the invention is a lane number detection test chart.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described below clearly and completely, and it is obvious that the described embodiments are some, not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, which is a schematic view of the apparatus of the present invention, the technical solution of the apparatus of the present invention is a high altitude visual angle recognition system, which is characterized by comprising:
the device comprises an aerial photography shooting device, a calculation processing host and a display projection device;
the aerial photographing device is connected with the calculation processing host computer in a wireless mode; the computing processing host is connected with the display projection device in a wired mode;
the aerial photography camera device is used for collecting video image data of vehicles on the road under the high-altitude visual angle and sending the video image data to the calculation processing host computer in a wireless mode; the calculation processing host is used for processing the video image data of the road vehicles at the high altitude view angle acquired by the aerial photography camera device, further obtaining vehicle image recognition results and track generation results through a vehicle track motion characteristic recognition method at the high altitude view angle, and transmitting the vehicle image recognition results and the track generation results to the display projection device for display;
the model selection of the aerial photography camera device is: DJI Mavic Air 2;
the computing processing host is configured with: an i9-9900K CPU; an NVIDIA GeForce RTX 3080 GPU; an ASUS PRIME Z390-A motherboard; two 16 GB DDR4 3000 MHz memory modules; and a GW-EPS1250DA power supply;
the display screen is selected as follows: AOC22B2H model display screen;
as shown in fig. 2, the aerial photography device is located right above the road surface during remote photography, that is, the included angle between the photography sight line of the aerial photography device camera and the road surface is 90 degrees.
As shown in fig. 3, the method for identifying the vehicle track motion characteristics includes the following steps:
step 1: the calculation processing host wirelessly acquires video image data shot by the aerial photography camera device positioned directly above the road surface, forming a high-altitude image training data set; the set is manually annotated, marking the circumscribed rectangular frame and type of each vehicle target, to form the high-altitude image training vehicle marking frame set;
step 1, the high-altitude image training data set comprises:
{data_e(x, y), e ∈ [1, E], x ∈ [1, X], y ∈ [1, Y]}
where data_e(x, y) represents the pixel information in row x and column y of the e-th frame image in the high-altitude image training data set, E is the number of frames in the set, X the number of rows, and Y the number of columns of the images in the set;
step 1, the high-altitude image training vehicle marking frame set is:
where the four coordinates are the abscissa and ordinate of the upper-left corner and the abscissa and ordinate of the lower-right corner of the rectangular marking frame of the nth vehicle target in the e-th frame image of the high-altitude image training vehicle marking frame set, and type_{e,n} represents the marked category of the nth vehicle target in the e-th frame image;
step 2: the calculation processing host wirelessly acquires video image data shot by the aerial photography device positioned directly above the road surface, forming the high-altitude image sequence data set from which vehicle track data are subsequently extracted; the road in each image of the high-altitude image sequence data set lies in the middle of the picture.
Step 2, the fixed shooting frame rate of the aerial shooting device is FPS, the length of the shot road is 152 meters, and the number of pixel units covered along the road length direction of the image is G = 3840; the captured high-altitude images measure X = 3840 by Y = 2160;
step 2, the high-altitude image sequence dataset is as follows:
{data_t(x, y), t ∈ [1, T], x ∈ [1, X], y ∈ [1, Y]}
where data_t(x, y) represents the pixel information in row x and column y of the t-th frame image in the high-altitude image sequence data set; T, the number of frames in the set, is 19200; X is the number of rows and Y the number of columns of the images in the set;
and step 3: introducing a YOLOv5 deep learning network model, sequentially inputting each frame of image in the high-altitude image training data set and a vehicle marking frame corresponding to each frame of image in the high-altitude image training vehicle marking frame set to a YOLOv5 deep learning network model for training, constructing a loss function model by using a GIOU method, optimizing a loss function value by using an Adam optimization algorithm, and identifying vehicle targets in the high-altitude image sequence data set by using the trained YOLOv5 deep learning network model to obtain a high-altitude image sequence vehicle identification frame set.
Step 3, the YOLOv5 network framework is specifically the YOLOv5x network structure;
and 3, the high-altitude image sequence vehicle identification frame set comprises the following steps:
where the four coordinates are the abscissa and ordinate of the upper-left corner and the abscissa and ordinate of the lower-right corner of the circumscribed rectangular frame of the nth vehicle target in the t-th frame image of the high-altitude image sequence vehicle identification frame set, and type_{t,n} represents the category of the nth vehicle target in the t-th frame image;
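Step 3 constructs its loss with the GIOU method; the sketch below shows the textbook generalized IoU between two axis-aligned boxes in (x1, y1, x2, y2) corner format (the training loss is then commonly taken as 1 − GIoU). This is a generic illustration, not the patent's exact implementation:

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes (x1, y1, x2, y2).

    GIoU = IoU - (C - U) / C, where C is the area of the smallest box
    enclosing both inputs and U is their union area.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection (clamped to zero when the boxes do not overlap).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box of the pair.
    enclose = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (enclose - union) / enclose
```

Unlike plain IoU, GIoU stays informative (negative) for disjoint boxes, which is why it is preferred as a regression loss.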
and 4, step 4: starting from the first frame of vehicle target external rectangular frame data in the high-altitude image sequence vehicle target identification frame set, carrying out the following processing procedures: applying Kalman filtering to a previous frame of vehicle target boundary frame to obtain vehicle target estimation frame data of a current frame, performing association matching on the vehicle target identification frame data of the current frame and the vehicle target boundary frame in the vehicle target estimation frame data by using a Hungarian association algorithm, wherein the matching mechanism is IOU distance, obtaining the ID serial number of the vehicle target identification frame data of the current frame, namely the ID serial number of the vehicle target of the current frame, and marking a new ID serial number by the vehicle target frame data of the current frame which is not matched; until the end of the aerial image sequence. And combining the video Frame serial number, the vehicle ID serial number and the high-altitude image sequence vehicle target Frame set after the correlation matching process to form an original vehicle track motion characteristic identification text data set.
Step 4, the recorded video frame serial number set is
Frame = {frame_{t,n}}
where frame_{t,n} denotes the video frame serial number corresponding to the nth vehicle target in the t-th frame.
Step 4, the Kalman filtering process comprises, in order: initializing the vehicle target state vector; initializing the state transition matrix, the covariance matrix, the observation matrix and the system noise matrix; predicting the current-frame vehicle target state vector from the optimal estimate of the previous frame's state vector, yielding the current-frame state vector predicted value; predicting the current-frame vehicle target system error covariance matrix from the previous frame's, yielding its predicted value; updating the Kalman coefficient with the predicted current-frame system error covariance matrix; estimating the optimal current-frame state vector from the predicted state vector and the system observation; updating the current-frame system error covariance matrix; and extracting the current-frame vehicle target estimation frame set from the optimal current-frame state vector estimates;
in the Kalman filtering initialization of the vehicle target state vector, the vehicle target boundary frame is characterized by the abscissa of its center, the ordinate of its center, its area and its aspect ratio, and its motion state information is described by a linear uniform velocity model, namely:
where the state vector holds the motion state information of the bounding box: u represents the abscissa of the center of the bounding box, v the ordinate of the center, s the area of the bounding box, and r the bounding box aspect ratio, usually treated as a constant; the remaining three components are the rates of change of the center abscissa, the center ordinate and the area. The motion state information of the mth vehicle target bounding box of the (t-1)-th frame is described as follows:
where the vector holds the motion state information of the mth vehicle target bounding box of the (t-1)-th frame: u_{t-1,m} is the abscissa and v_{t-1,m} the ordinate of the box center, s_{t-1,m} the box area, r_{t-1,m} the box aspect ratio, and the remaining components are the rates of change of the center abscissa, the center ordinate and the area;
the abscissa, ordinate and bounding box area of the mth vehicle target frame center in the (t-1)-th frame are calculated as u_{t-1,m} = (x1 + x2)/2, v_{t-1,m} = (y1 + y2)/2, s_{t-1,m} = (x2 − x1)·(y2 − y1),
where (x1, y1) and (x2, y2) denote the upper-left and lower-right corner coordinates of the mth vehicle target frame of the (t-1)-th frame;
in the initialization state transition matrix in step 4, a state transition matrix F models the motion of the target state vector, and the state transition matrix F corresponding to the adopted uniform motion model is initialized as follows:
in the initialization of a covariance matrix, a covariance matrix P represents the uncertainty of target position information, and the covariance matrix is an empirical parameter;
in the initialization of the system noise covariance matrix, because the process noise is not measurable, the system noise covariance matrix Q is generally assumed to conform to normal distribution;
in initializing the observation matrix, the observation matrix H is related to an observable variable, and its values are initialized as:
in the initialization observation noise covariance matrix, because the observation noise is not measurable, the observation noise covariance matrix R is generally assumed to conform to normal distribution;
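The initialization steps above can be sketched in the SORT-style parameterization the text describes: a 7-dimensional state [u, v, s, r, du, dv, ds] under a constant-velocity model, with only [u, v, s, r] observed. The concrete numeric values for P, Q and R below are illustrative, since the patent only says these matrices are empirical or assumed to follow a normal distribution:

```python
import numpy as np


def box_to_state(x1, y1, x2, y2):
    """Observation [u, v, s, r] of a box: center coordinates, area,
    aspect ratio (the parameterization described in step 4)."""
    w, h = x2 - x1, y2 - y1
    return np.array([x1 + w / 2.0, y1 + h / 2.0, w * h, w / float(h)])


# State transition matrix F of the uniform (constant-velocity) model:
# u += du, v += dv, s += ds each frame; r is treated as constant.
F = np.eye(7)
F[0, 4] = F[1, 5] = F[2, 6] = 1.0

# Observation matrix H: only [u, v, s, r] are measured.
H = np.zeros((4, 7))
H[0, 0] = H[1, 1] = H[2, 2] = H[3, 3] = 1.0

P = np.eye(7) * 10.0   # initial covariance (empirical, illustrative)
Q = np.eye(7) * 0.01   # process-noise covariance (assumed)
R = np.eye(4) * 1.0    # observation-noise covariance (assumed)
```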
and 4, predicting the current frame vehicle target state vector by the Kalman filtering according to the optimal estimated value of the previous frame vehicle target state vector, wherein the obtained mth frame vehicle target state vector predicted value calculation formula is as follows:
wherein the content of the first and second substances,represents the optimal estimated value of the m-th vehicle target state vector in the t-1 th frame,representing the predicted value of the mth vehicle target state vector of the t frame, F is a state transition matrix, B is a control matrix, ut-1,mRepresenting a control gain matrix;
and 4, predicting the current frame vehicle target system error covariance matrix by the Kalman filtering according to the previous frame vehicle target system error covariance matrix, wherein the obtained predicted value of the mth vehicle target system error covariance matrix of the tth frame has the following formula:
wherein, Pt-1,mRepresenting the mth vehicle target system error covariance matrix at frame t-1,representing the predicted value of the error covariance matrix of the mth vehicle target system in the tth frame, wherein Q is the covariance matrix of process noise;
thirdly, in the Kalman filtering the Kalman coefficient is updated with the predicted current-frame system error covariance matrix; the mth vehicle target Kalman coefficient of the t-th frame is K_{t,m} = P_{t,m}^-·H^T·(H·P_{t,m}^-·H^T + R)^{-1},
where H is the observation matrix, R is the observation noise covariance matrix, and K_{t,m} is the mth vehicle target Kalman coefficient of the t-th frame;
step 4, in the Kalman filtering the optimal estimate of the current-frame vehicle target state vector is computed from the predicted state vector and the system observation; for the mth vehicle target of the t-th frame, x̂_{t,m} = x̂_{t,m}^- + K_{t,m}·(z_t − H·x̂_{t,m}^-),
where x̂_{t,m} is the optimal estimate of the mth vehicle target state vector of the t-th frame and z_t is the observation;
and 4, in the Kalman filtering updating of the current frame system error covariance matrix, the updating calculation formula of the mth vehicle target system error covariance matrix of the tth frame is as follows:
wherein, Pt,mThe mth vehicle target system covariance matrix is the mth frame;
step 4, extracting the current frame vehicle target estimation frame set from the current frame vehicle target state vector optimal estimation value, wherein the mth target state vector optimal estimation value of the tth frame is described as follows:
wherein u ist,mOptimal estimated value, v, of abscissa of center of mth vehicle target bounding box of tth frame representing optimal estimationt,mOptimal estimated value, s, of the ordinate of the center of the mth vehicle target bounding box of the tth frame representing the optimal estimationt,mOptimal estimate, r, of the area of the mth vehicle target bounding box in the tth frame representing the optimal estimatet,mAn optimal estimated value of the aspect ratio of the mth vehicle target bounding box of the tth frame representing the optimal estimation,an optimal estimated value of the change rate of the center abscissa of the mth vehicle target bounding box of the tth frame representing the optimal estimation,ordinate of the mth vehicle target bounding box center of the tth frame representing the optimal estimationThe rate of change of the rate of change,representing the area change rate of the mth vehicle target boundary box of the mth frame of the optimal estimation;
the current frame vehicle target estimation frame coordinate calculation formula is as follows:
where the four quantities are the optimally estimated abscissa and ordinate of the upper-left corner and the optimally estimated abscissa and ordinate of the lower-right corner of the mth vehicle target frame of the t-th frame;
therefore, the current frame vehicle target estimation frame set is:
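A compact sketch of the predict/update cycle described above, ending with the state-to-box recovery that yields the estimation frame. The control term B·u is omitted here (equivalently assumed zero, as is common for this tracker), and all shapes follow the 7-dimensional state of the uniform motion model:

```python
import numpy as np


def kf_predict(x, P, F, Q):
    """Predict the state vector and error covariance for the current frame."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred


def kf_update(x_pred, P_pred, z, H, R):
    """Fuse the prediction with the observation z = [u, v, s, r]."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman coefficient
    x_est = x_pred + K @ (z - H @ x_pred)      # optimal state estimate
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_est, P_est


def state_to_box(x):
    """Recover the (x1, y1, x2, y2) estimation frame from [u, v, s, r, ...]:
    width = sqrt(s * r), height = s / width."""
    u, v, s, r = x[:4]
    w = (s * r) ** 0.5
    h = s / w
    return u - w / 2, v - h / 2, u + w / 2, v + h / 2
```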
preferably, the Hungarian association algorithm in step 4 performs matching by calculating the IOU intersection ratio of vehicle target frames;
and 4, calculating the intersection ratio and matching of the vehicle target frames IOU by the Hungarian association algorithm as follows: calculating the IOU intersection ratio of the mth vehicle target estimation frame in the tth frame in the current frame vehicle target estimation frame set and the nth vehicle target identification frame in the current frame vehicle target identification frame set, wherein the intersection area calculation formula is as follows:
wherein S is1Representing the intersection area of the mth vehicle target estimation frame of the tth frame in the current frame vehicle target estimation frame set and the nth vehicle target identification frame of the tth frame in the current frame vehicle target identification frame set;
the union area calculation formula is:
where S_2 represents the union area of the mth vehicle target estimation frame of the t-th frame and the nth vehicle target identification frame of the t-th frame;
the IOU intersection ratio is then calculated as IOU = S_1 / S_2.
the vehicle frame IOU intersection-comparison matching principle of the Hungarian correlation algorithm is as follows: if the calculated IOU intersection ratio of the mth vehicle target estimation frame of the t frame and the nth vehicle target identification frame of the t frame is maximum and belongs to the same vehicle class, the mth vehicle target of the t-1 frame and the nth vehicle target of the t frame belong to the same vehicle target, and the ID serial number of the nth vehicle target of the t frame is marked as the same ID serial number as that of the mth vehicle target of the t-1 frame. The associated vehicle id serial number set is as follows:
ID = {id_{t,n}}
where id_{t,n} denotes the vehicle id serial number corresponding to the nth vehicle target in the t-th frame.
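The IOU-based Hungarian association can be sketched with SciPy's `linear_sum_assignment` (a library choice assumed here, the patent names no implementation); pairs whose IOU falls below a minimum threshold stay unmatched, and the leftover identification frames would receive new ID serial numbers:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes:
    IOU = S1 / S2, intersection area over union area."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    s1 = ix * iy
    s2 = ((a[2] - a[0]) * (a[3] - a[1])
          + (b[2] - b[0]) * (b[3] - b[1]) - s1)
    return s1 / s2 if s2 > 0 else 0.0


def associate(est_boxes, det_boxes, iou_min=0.3):
    """Match Kalman estimation frames to identification frames by
    maximizing total IOU (Hungarian algorithm on a negated-IOU cost);
    pairs below iou_min are discarded as unmatched."""
    cost = np.array([[-iou(e, d) for d in det_boxes] for e in est_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= iou_min]
```

The threshold value 0.3 is an illustrative assumption; the patent only specifies maximum IOU and same vehicle class as the matching principle.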
Step 4, the video Frame serial numbers, vehicle ID serial numbers and the high-altitude image sequence vehicle target frame set after the association process are combined, and the resulting original vehicle track motion characteristic identification text data set is:
as shown in fig. 4, a vehicle trajectory extraction test chart is identified for the original vehicle trajectory motion feature recognition text data set;
and 5: and sequentially carrying out four processing processes of data preprocessing, motion feature extraction, lane number detection and coordinate conversion on the original vehicle track motion feature identification text data set to finally form a five-level vehicle track motion feature identification text data set.
Step 5, the data preprocessing comprises the following steps:
firstly, the center point coordinate of a vehicle target frame needs to be calculated, and the calculation formula of the center point coordinate is as follows:
where x_{t,n} represents the abscissa and y_{t,n} the ordinate of the center point of the nth vehicle target identification frame in the t-th frame;
secondly, the width and the height of the vehicle target frame need to be calculated, and the calculation formula is as follows:
where w_{t,n} indicates the width and h_{t,n} the height of the nth vehicle target frame in the t-th frame.
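The two preprocessing formulas above (center point, then width and height) reduce to simple corner arithmetic on the identification frame; a minimal sketch (the function name is illustrative):

```python
def frame_geometry(x_tl, y_tl, x_br, y_br):
    """Center point, width and height of one vehicle target frame from
    its top-left (x_tl, y_tl) and bottom-right (x_br, y_br) corners."""
    xc = (x_tl + x_br) / 2.0   # center abscissa
    yc = (y_tl + y_br) / 2.0   # center ordinate
    w = x_br - x_tl            # frame width
    h = y_br - y_tl            # frame height
    return xc, yc, w, h
```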
Forming a first-level vehicle track motion characteristic identification text data set:
when data preprocessing is performed on the first-level vehicle track motion characteristic identification text data set, out-of-range points are first screened out by a threshold discrimination method, forming the second-level vehicle track motion characteristic identification text data set; the discrimination formula is as follows:
where the screened quantities are the abscissa and ordinate retained after threshold judgment, X_1 and X_2 represent the lower and upper limits of the abscissa judgment threshold, and Y_1 and Y_2 represent the lower and upper limits of the ordinate judgment threshold;
secondly, counting the vehicle tracks with the same ID serial number, if the number of the video frames is less than a fixed value, judging as a fragmentary track segment, and clearing, wherein the judgment formula is as follows:
wherein the content of the first and second substances,represents the number of video frames for which the vehicle ID is a value, and threshold represents a fixed value;
the formed secondary vehicle track motion characteristic identification text data set comprises:
wherein the content of the first and second substances,the video frame number corresponding to the vehicle target frame after the data screening is represented,the serial number of the vehicle id corresponding to the frame of the vehicle target after the data screening is represented,indicating the width of the vehicle target frame after data screening,and the height of the vehicle target frame after data screening is represented.
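The two screening rules (coordinate thresholds, then removal of fragmentary tracks) can be sketched as follows; the record layout and the function name are assumptions for illustration:

```python
def screen_tracks(rows, x_lim, y_lim, min_frames):
    """rows: list of dicts with keys 'frame', 'id', 'x', 'y'.
    Keep only points inside the coordinate thresholds, then drop every
    vehicle id whose surviving track is shorter than min_frames."""
    inside = [r for r in rows
              if x_lim[0] <= r["x"] <= x_lim[1]
              and y_lim[0] <= r["y"] <= y_lim[1]]
    counts = {}
    for r in inside:
        counts[r["id"]] = counts.get(r["id"], 0) + 1
    return [r for r in inside if counts[r["id"]] >= min_frames]
```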
Step 5, the motion characteristic extraction process is as follows:
firstly, the vehicle speed under each video frame serial number is calculated for each vehicle id serial number; specifically, the speed of the current frame is calculated from the position difference and the time difference between the current frame and the previous frame, and comprises the lateral and the longitudinal speed of the vehicle. The formed data set is the three-level vehicle track motion characteristic identification text data set, and the lateral and longitudinal speed formulas are:

v_{t,n,x} = (x_{t,n} - x_{t-1,n}) / Δt,  v_{t,n,y} = (y_{t,n} - y_{t-1,n}) / Δt,  Δt = (frame_{t,n} - frame_{t-1,n}) / FPS

where x_{t,n} represents the abscissa of the center point of the vehicle target frame of the t-th frame under the vehicle id serial number corresponding to the nth vehicle target of the t-th frame, x_{t-1,n} the abscissa of that vehicle's center point in the (t-1)-th frame, y_{t,n} and y_{t-1,n} the corresponding ordinates, v_{t,n,x} the lateral speed and v_{t,n,y} the longitudinal speed of the center point of the nth vehicle target frame of the t-th frame, and frame_{t-1,n} the video frame serial number of the (t-1)-th frame after data screening;
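A minimal sketch of the per-frame speed calculation, assuming the fixed frame rate FPS introduced in claim 3 (the numeric value and function name are illustrative):

```python
FPS = 30.0  # assumed shooting frame rate; the patent leaves FPS symbolic

def frame_velocity(x_t, y_t, x_prev, y_prev, frame_t, frame_prev, fps=FPS):
    """Lateral/longitudinal speed of the current frame from the position
    difference and time difference to the previous frame of the same id."""
    dt = (frame_t - frame_prev) / fps
    return (x_t - x_prev) / dt, (y_t - y_prev) / dt
```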
since the speed calculation of each frame uses the position data of the previous frame, the vehicle speed of the first frame of each id's frame sequence cannot be calculated directly, so it is fitted with a cubic polynomial:

v_{x,1} = f_3(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}),  v_{y,1} = f_3(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5})

where f_3(·) denotes the cubic function fitted through its four arguments, v_{x,1} is the first-frame x-direction speed, v_{y,1} the first-frame y-direction speed, and v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5} (respectively v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5}) are the speeds of the 2nd, 3rd, 4th and 5th frames of each vehicle id;
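Because frames 2 to 5 determine a unique cubic, fitting it and evaluating at frame 1 reduces to a closed-form Lagrange extrapolation; evaluating the Lagrange basis at t = 1 gives the weights 4, -6, 4, -1 (a sketch under that assumption):

```python
def first_frame_value_cubic(v2, v3, v4, v5):
    """Extrapolate the frame-1 value from the frame-2..5 samples using the
    unique cubic through the four points, evaluated at t = 1; the Lagrange
    form reduces to the closed expression 4*v2 - 6*v3 + 4*v4 - v5."""
    return 4.0 * v2 - 6.0 * v3 + 4.0 * v4 - v5
```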
secondly, the vehicle acceleration under each video frame serial number is calculated for each vehicle id serial number; specifically, the acceleration is calculated from the speed difference and the time difference between the current frame and the previous frame, and comprises the lateral and the longitudinal acceleration, completing the three-level vehicle track motion characteristic identification text data set:

a_{t,n,x} = (v_{t,n,x} - v_{t-1,n,x}) / Δt,  a_{t,n,y} = (v_{t,n,y} - v_{t-1,n,y}) / Δt

where v_{t,n,x} represents the lateral speed of the center point of the nth vehicle target frame of the t-th frame, v_{t-1,n,x} the lateral speed of that vehicle's center point in the (t-1)-th frame, v_{t,n,y} and v_{t-1,n,y} the corresponding longitudinal speeds, a_{t,n,x} the lateral acceleration and a_{t,n,y} the longitudinal acceleration of the center point of the nth vehicle target frame of the t-th frame;
in the same way as for the speed, the accelerations of the first frames of the different vehicles are fitted with a cubic polynomial:

a_{x,1} = f_3(a_{x,2}, a_{x,3}, a_{x,4}, a_{x,5}),  a_{y,1} = f_3(a_{y,2}, a_{y,3}, a_{y,4}, a_{y,5})
the formed three-level vehicle track motion characteristic identification text data set comprises:
Step 5, the lane number detection is as follows:
firstly, a linear fit is carried out on the vehicle position coordinate data in the three-level vehicle track motion characteristic identification text data set, giving the fitted straight line

y = A·x + B

where A represents the slope of the line and B its intercept; here A is calculated to be -0.008725 and B to be 1189;
secondly, the distance from each vehicle position coordinate in the three-level vehicle track motion characteristic identification text data set to the fitted straight line is calculated:

dist_{t,n} = |A·x_{t,n} - y_{t,n} + B| / sqrt(A^2 + 1);
the lane number is judged by using a threshold judgment method to form a four-level vehicle track motion characteristic identification text data set, and the formula for judging the lane number is as follows:
{lane_{t,n} = k, if dist_{k,1} ≤ dist_{t,n} ≤ dist_{k,2}}

where lane_{t,n} indicates the lane number of the center point of the nth vehicle target frame in the t-th frame, k indicates the determined lane number, dist_{k,1} the lower bound and dist_{k,2} the upper bound of the kth lane;
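The lane detection chain (perpendicular distance to the fitted line, then threshold bucketing) can be sketched as follows; the lane boundary intervals are illustrative placeholders for dist_{k,1} and dist_{k,2}:

```python
A, B = -0.008725, 1189.0  # fitted slope and intercept reported in the text

def point_line_distance(x, y, a=A, b=B):
    """Perpendicular distance from point (x, y) to the line y = a*x + b."""
    return abs(a * x - y + b) / (a * a + 1.0) ** 0.5

def lane_number(x, y, lane_bounds):
    """lane_bounds: list of (lower, upper) distance intervals per lane k,
    hypothetical values standing in for dist_{k,1}, dist_{k,2}."""
    d = point_line_distance(x, y)
    for k, (lo, hi) in enumerate(lane_bounds, start=1):
        if lo <= d <= hi:
            return k
    return None
```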
the formed four-level vehicle track motion characteristic identification text data set comprises:
as shown in fig. 5, a test diagram of lane number detection for the four-level vehicle track motion characteristic identification text data set, in which the continuous straight lines represent the lane spacing lines to be detected;
Step 5, the coordinate conversion is as follows: the number of pixel units covered in the road length direction is converted against the actual road length, and the five-level vehicle track motion characteristic identification text data set is formed after conversion; the conversion ratio is

q = L / G

where q is the conversion ratio, L the actual road length and G the number of pixel units covered in the road length direction of the image; here q is calculated to be 0.03958;
the four-level vehicle track motion characteristic identification text data set parameters are converted as

x_{t,n,q} = q·x_{t,n},  y_{t,n,q} = q·y_{t,n},  w_{t,n,q} = q·w_{t,n},  h_{t,n,q} = q·h_{t,n},
v_{t,n,x,q} = q·v_{t,n,x},  v_{t,n,y,q} = q·v_{t,n,y},  a_{t,n,x,q} = q·a_{t,n,x},  a_{t,n,y,q} = q·a_{t,n,y}

where x_{t,n,q} represents the abscissa and y_{t,n,q} the ordinate of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, w_{t,n,q} the width and h_{t,n,q} the height of that frame after coordinate conversion, v_{t,n,x,q} the lateral speed and v_{t,n,y,q} the longitudinal speed of its center point after coordinate conversion, and a_{t,n,x,q} the lateral acceleration and a_{t,n,y,q} the longitudinal acceleration of its center point after coordinate conversion;
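A sketch of the unit conversion using the ratio q reported above; the record field names are illustrative, not from the patent:

```python
Q = 0.03958  # meters per pixel, the conversion ratio q computed in the text

def to_world(record, q=Q):
    """Scale all pixel-based quantities of one track record into meters
    (and m/s, m/s^2); non-geometric fields such as frame and id pass through."""
    keys = ("x", "y", "w", "h", "vx", "vy", "ax", "ay")
    return {k: (v * q if k in keys else v) for k, v in record.items()}
```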
the formed five-level vehicle track motion characteristic identification text data set comprises:
finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (6)
1. A vehicle track motion characteristic identification method based on a high-altitude visual angle identification system is characterized by comprising the following steps:
the high altitude visual angle recognition system is characterized by comprising: the device comprises an aerial photography shooting device, a calculation processing host and a display projection device;
the aerial photographing device is connected with the calculation processing host computer in a wireless mode; the computing processing host is connected with the display projection device in a wired mode;
the aerial photography camera device is used for collecting video image data of vehicles on the road under the high-altitude visual angle and sending the video image data to the calculation processing host computer in a wireless mode; the calculation processing host is used for processing the video image data of the road vehicles at the high altitude view angle acquired by the aerial photography camera device, further obtaining vehicle image recognition results and track generation results through a vehicle track motion characteristic recognition method at the high altitude view angle, and transmitting the vehicle image recognition results and the track generation results to the display projection device for display;
the aerial photographing device is positioned right above a road surface during remote photographing, namely an included angle between a photographing sight line of a camera of the aerial photographing device and the road surface is 90 degrees;
the vehicle track motion characteristic identification method comprises the following steps:
step 1: the calculation processing host wirelessly acquires video image data captured by the aerial photographing device positioned directly above the road surface, which is used to form a high-altitude image training data set; the high-altitude image training data set is manually annotated, marking the circumscribed rectangular frame and the vehicle type of each vehicle target, forming a high-altitude image training vehicle marking frame set;
step 2: the calculation processing host wirelessly acquires video image data captured by the aerial photographing device positioned directly above the road surface, which is used to form a high-altitude image sequence data set for subsequent extraction of vehicle track data; the road in each image picture of the high-altitude image sequence data set lies in the middle of the image;
step 3: a YOLOv5 deep learning network model is introduced; each frame image in the high-altitude image training data set, together with the vehicle marking frames corresponding to that frame in the high-altitude image training vehicle marking frame set, is sequentially input into the YOLOv5 deep learning network model for training, the loss function model is constructed with the GIOU method and the loss value is optimized with the Adam optimization algorithm; the trained YOLOv5 deep learning network model then identifies the vehicle targets in the high-altitude image sequence data set to obtain a high-altitude image sequence vehicle identification frame set;
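As a sketch of the GIOU measure the loss is built on (corner box format assumed; the training loss would typically be 1 - GIOU):

```python
def giou(box_a, box_b):
    """Generalized IoU of two (x1, y1, x2, y2) boxes: IoU minus the share
    of the smallest enclosing box that is not covered by the union."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    cw = max(ax2, bx2) - min(ax1, bx1)   # enclosing box width
    ch = max(ay2, by2) - min(ay1, by1)   # enclosing box height
    c = cw * ch
    return inter / union - (c - union) / c
```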
step 4: starting from the first frame of vehicle target circumscribed rectangular frame data in the high-altitude image sequence vehicle target identification frame set, the following processing procedure is carried out: Kalman filtering is applied to the previous frame of vehicle target boundary frames to obtain the vehicle target estimation frame data of the current frame; the vehicle target identification frame data of the current frame and the vehicle target boundary frames in the vehicle target estimation frame data are associated and matched with the Hungarian association algorithm, the matching criterion being the IOU distance, obtaining the ID serial number of each vehicle target identification frame of the current frame, namely the ID serial number of the current-frame vehicle target; current-frame vehicle target frame data that are not matched are marked with a new ID serial number; this continues until the high-altitude image sequence ends; the video frame serial numbers, the vehicle ID serial numbers and the high-altitude image sequence vehicle target frame set after the association matching process are combined to form an original vehicle track motion characteristic identification text data set;
step 5: the four processing procedures of data preprocessing, motion feature extraction, lane number detection and coordinate conversion are carried out in sequence on the original vehicle track motion feature identification text data set, finally forming the five-level vehicle track motion feature identification text data set.
2. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: step 1, the high-altitude image training data set comprises:
{data_e(x, y), e ∈ [1, E], x ∈ [1, X], y ∈ [1, Y]}

where data_e(x, y) represents the pixel information of the x-th row and y-th column of the e-th frame image in the high-altitude image training data set, E is the number of frames of the high-altitude image training data set, X is the number of rows and Y the number of columns of the images in the high-altitude image training data set;
step 1, the high-altitude image training vehicle marking frame set is:

where x1_{e,n} represents the abscissa and y1_{e,n} the ordinate of the upper-left corner, and x2_{e,n} the abscissa and y2_{e,n} the ordinate of the lower-right corner, of the marking rectangular frame of the nth vehicle target in the e-th frame image of the high-altitude image training vehicle marking frame set; type_{e,n} represents the mark type of the nth vehicle target in the e-th frame image of the high-altitude image training vehicle marking frame set.
3. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: step 2, the fixed shooting frame rate of the aerial shooting device is FPS, the length of a shot road is L, and the unit number of covered pixels in the length direction of the shot image road is G; the high-altitude image data shooting sizes are X and Y;
step 2, the high-altitude image sequence dataset is as follows:
{data_t(x, y), t ∈ [1, T], x ∈ [1, X], y ∈ [1, Y]}

where data_t(x, y) represents the pixel information of the x-th row and y-th column of the t-th frame image in the high-altitude image sequence data set, T is the total number of frames of the high-altitude image sequence data, X is the number of rows and Y the number of columns of the images in the high-altitude image sequence data set.
4. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: step 3, the YOLOv5 network framework is specifically a yolo5x network structure;
step 3, the high-altitude image sequence vehicle identification frame set is:

where x1_{t,n} represents the abscissa and y1_{t,n} the ordinate of the upper-left corner, and x2_{t,n} the abscissa and y2_{t,n} the ordinate of the lower-right corner, of the circumscribed rectangular frame of the nth vehicle target in the t-th frame image of the high-altitude image sequence vehicle identification frame set; type_{t,n} represents the category of the nth vehicle target in the t-th frame image of the high-altitude image sequence vehicle identification frame set.
5. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: step 4, in recording the current video frame serial numbers, the recorded video frame serial number set is Frame_{t,n} = {frame_{t,n}}, where frame_{t,n} represents the video frame serial number corresponding to the nth vehicle target of the t-th frame;
and 4, the Kalman filtering processing process sequentially comprises the following steps: initializing a vehicle target state vector; initializing a state transition matrix, initializing a covariance matrix, initializing an observation matrix and initializing a system noise matrix; predicting the vehicle target state vector of the current frame according to the optimal estimated value of the vehicle target state vector of the previous frame to obtain a predicted value of the vehicle target state vector of the current frame; predicting a current frame vehicle target system error covariance matrix according to the previous frame vehicle target system error covariance matrix to obtain a current frame vehicle target system error covariance matrix predicted value; updating a Kalman coefficient by using the covariance matrix predicted value of the current frame vehicle target system; estimating according to the current frame vehicle target state vector predicted value and the system observation value to obtain the current frame vehicle target state vector optimal estimation value; updating a current frame vehicle target system error covariance matrix; extracting a current frame vehicle target estimation frame set from the current frame vehicle target state vector optimal estimation value;
in the process of initializing the vehicle target state vector for the Kalman filtering, the vehicle target boundary frame is described by the abscissa of its center, the ordinate of its center, its area and its aspect ratio, and the motion state information of the boundary frame is described with a linear constant-velocity model, namely:

x = [u, v, s, r, u', v', s']^T

where x represents the motion state information of the bounding box, u the abscissa of the center of the bounding box, v the ordinate of the center of the bounding box, s the area of the bounding box, r the aspect ratio of the bounding box (generally a constant), u' the rate of change of the abscissa of the center, v' the rate of change of the ordinate of the center, and s' the rate of change of the area of the bounding box; the motion state information of the mth vehicle target boundary box of the (t-1)-th frame is described as:

x_{t-1,m} = [u_{t-1,m}, v_{t-1,m}, s_{t-1,m}, r_{t-1,m}, u'_{t-1,m}, v'_{t-1,m}, s'_{t-1,m}]^T

where x_{t-1,m} represents the motion state information of the mth vehicle target bounding box of the (t-1)-th frame, u_{t-1,m} the abscissa of its center, v_{t-1,m} the ordinate of its center, s_{t-1,m} its area, r_{t-1,m} its aspect ratio, u'_{t-1,m} the rate of change of its center abscissa, v'_{t-1,m} the rate of change of its center ordinate, and s'_{t-1,m} the rate of change of its area;
the center abscissa, center ordinate and bounding-box area of the mth vehicle target frame of the (t-1)-th frame are calculated as:

u_{t-1,m} = (x1_{t-1,m} + x2_{t-1,m}) / 2,  v_{t-1,m} = (y1_{t-1,m} + y2_{t-1,m}) / 2,
s_{t-1,m} = (x2_{t-1,m} - x1_{t-1,m}) · (y2_{t-1,m} - y1_{t-1,m})

where x1_{t-1,m} represents the abscissa of the upper-left corner, x2_{t-1,m} the abscissa of the lower-right corner, y1_{t-1,m} the ordinate of the upper-left corner, and y2_{t-1,m} the ordinate of the lower-right corner of the mth vehicle target frame of the (t-1)-th frame;
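The mapping between corner coordinates and the observed state components [u, v, s, r] can be sketched as follows (helper names are illustrative; the inverse recovers width as sqrt(s·r) and height as sqrt(s/r)):

```python
def box_to_state(x1, y1, x2, y2):
    """Corner coordinates -> observed state components [u, v, s, r]:
    center abscissa/ordinate, area and aspect ratio of the bounding box."""
    w, h = x2 - x1, y2 - y1
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0, w * h, w / h)

def state_to_box(u, v, s, r):
    """Inverse mapping back to corner coordinates."""
    w = (s * r) ** 0.5
    h = (s / r) ** 0.5
    return (u - w / 2.0, v - h / 2.0, u + w / 2.0, v + h / 2.0)
```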
in the initialization of the state transition matrix in step 4, the state transition matrix F models the motion of the target state vector; for the adopted constant-velocity motion model (with a time step of one frame) it is initialized as:

F =
[1 0 0 0 1 0 0]
[0 1 0 0 0 1 0]
[0 0 1 0 0 0 1]
[0 0 0 1 0 0 0]
[0 0 0 0 1 0 0]
[0 0 0 0 0 1 0]
[0 0 0 0 0 0 1]
in the initialization of a covariance matrix, a covariance matrix P represents the uncertainty of target position information, and the covariance matrix is an empirical parameter;
in the initialization of the system noise covariance matrix, because the process noise is not measurable, the system noise covariance matrix Q is generally assumed to conform to normal distribution;
in initializing the observation matrix, the observation matrix H relates the state to the observable variables (u, v, s, r), and its values are initialized as:

H =
[1 0 0 0 0 0 0]
[0 1 0 0 0 0 0]
[0 0 1 0 0 0 0]
[0 0 0 1 0 0 0]
in the initialization observation noise covariance matrix, because the observation noise is not measurable, the observation noise covariance matrix R is generally assumed to conform to normal distribution;
step 4, the Kalman filtering predicts the current-frame vehicle target state vector from the optimal estimate of the previous-frame state vector; the predicted value of the mth vehicle target state vector of the t-th frame is:

x̂_{t|t-1,m} = F · x̂_{t-1,m} + B · u_{t-1,m}

where x̂_{t-1,m} represents the optimal estimate of the mth vehicle target state vector of the (t-1)-th frame, x̂_{t|t-1,m} the predicted value of the mth vehicle target state vector of the t-th frame, F the state transition matrix, B the control matrix and u_{t-1,m} the control gain matrix;
step 4, the Kalman filtering predicts the current-frame vehicle target system error covariance matrix from that of the previous frame; the predicted value of the mth vehicle target system error covariance matrix of the t-th frame is:

P_{t|t-1,m} = F · P_{t-1,m} · F^T + Q

where P_{t-1,m} represents the mth vehicle target system error covariance matrix of the (t-1)-th frame, P_{t|t-1,m} its predicted value for the t-th frame, and Q the covariance matrix of the process noise;
step 4, in the Kalman filtering, the Kalman coefficient is updated using the predicted value of the current-frame system error covariance matrix; the calculation formula of the mth vehicle target Kalman coefficient of the t-th frame is:

K_{t,m} = P_{t|t-1,m} · H^T · (H · P_{t|t-1,m} · H^T + R)^{-1}

where H is the observation matrix, R is the covariance matrix of the observation noise, and K_{t,m} is the Kalman coefficient of the mth vehicle target of the t-th frame;
step 4, in the Kalman filtering, the optimal estimate of the current-frame vehicle target state vector is calculated from the predicted state vector and the system observation; the calculation formula of the optimal estimate of the mth vehicle target state vector of the t-th frame is:

x̂_{t,m} = x̂_{t|t-1,m} + K_{t,m} · (z_t - H · x̂_{t|t-1,m})

where x̂_{t,m} is the optimal estimate of the mth vehicle target state vector of the t-th frame and z_t is the observation;
step 4, in the Kalman filtering update of the current-frame system error covariance matrix, the update calculation formula for the mth vehicle target of the t-th frame is:

P_{t,m} = (I - K_{t,m} · H) · P_{t|t-1,m}

where P_{t,m} is the mth vehicle target system error covariance matrix of the t-th frame and I is the identity matrix;
step 4, the current-frame vehicle target estimation frame set is extracted from the optimal estimates of the current-frame state vectors; the optimal estimate of the mth target state vector of the t-th frame is described as:

x̂_{t,m} = [u_{t,m}, v_{t,m}, s_{t,m}, r_{t,m}, u'_{t,m}, v'_{t,m}, s'_{t,m}]^T

where u_{t,m} is the optimally estimated abscissa of the center of the mth vehicle target bounding box of the t-th frame, v_{t,m} the optimally estimated ordinate of its center, s_{t,m} its optimally estimated area, r_{t,m} its optimally estimated aspect ratio, u'_{t,m} the optimally estimated rate of change of its center abscissa, v'_{t,m} the optimally estimated rate of change of its center ordinate, and s'_{t,m} the optimally estimated rate of change of its area;
the current-frame vehicle target estimation frame coordinates are then:

x1_{t,m} = u_{t,m} - sqrt(s_{t,m}·r_{t,m}) / 2,  x2_{t,m} = u_{t,m} + sqrt(s_{t,m}·r_{t,m}) / 2,
y1_{t,m} = v_{t,m} - sqrt(s_{t,m}/r_{t,m}) / 2,  y2_{t,m} = v_{t,m} + sqrt(s_{t,m}/r_{t,m}) / 2

where x1_{t,m} and y1_{t,m} are the optimally estimated abscissa and ordinate of the upper-left corner, and x2_{t,m} and y2_{t,m} the optimally estimated abscissa and ordinate of the lower-right corner, of the mth vehicle target frame of the t-th frame;
therefore, the current frame vehicle target estimation frame set is:
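The predict/update cycle of the steps above can be sketched with numpy; dt is taken as one frame, and the noise levels in Q and R are illustrative empirical values, not from the patent:

```python
import numpy as np

F = np.eye(7)
F[0, 4] = F[1, 5] = F[2, 6] = 1.0             # constant-velocity transition
H = np.hstack([np.eye(4), np.zeros((4, 3))])  # observe u, v, s, r only
Q = np.eye(7) * 1e-2                          # process noise (assumed)
R = np.eye(4) * 1e-1                          # observation noise (assumed)

def kf_step(x, P, z):
    """One predict + update cycle; x is the 7-d state [u, v, s, r, u', v', s'],
    z the observed [u, v, s, r] of the matched identification frame."""
    x_pred = F @ x                  # state prediction
    P_pred = F @ P @ F.T + Q        # covariance prediction
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)      # optimal state estimate
    P_new = (np.eye(7) - K @ H) @ P_pred       # covariance update
    return x_new, P_new
```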
step 4, the Hungarian association algorithm performs matching by calculating the intersection ratio (IOU) of the vehicle target frames;
step 4, the Hungarian association algorithm calculates the IOU intersection ratio between the mth vehicle target estimation frame of the t-th frame in the current-frame estimation frame set and the nth vehicle target identification frame in the current-frame identification frame set; the intersection area is calculated as:

S1 = max(0, min(x2_{t,m}, x2_{t,n}) - max(x1_{t,m}, x1_{t,n})) · max(0, min(y2_{t,m}, y2_{t,n}) - max(y1_{t,m}, y1_{t,n}))

where S1 represents the intersection area of the mth vehicle target estimation frame of the t-th frame and the nth vehicle target identification frame of the t-th frame;
the union area is calculated as:

S2 = (x2_{t,m} - x1_{t,m})·(y2_{t,m} - y1_{t,m}) + (x2_{t,n} - x1_{t,n})·(y2_{t,n} - y1_{t,n}) - S1

where S2 represents the union area of the mth vehicle target estimation frame of the t-th frame and the nth vehicle target identification frame of the t-th frame;
the IOU intersection ratio is then:

IOU = S1 / S2
the vehicle frame IOU intersection-ratio matching principle of the Hungarian association algorithm is: if the calculated IOU intersection ratio of the mth vehicle target estimation frame of the t-th frame and the nth vehicle target identification frame of the t-th frame is the largest and the two belong to the same vehicle class, then the mth vehicle target of the (t-1)-th frame and the nth vehicle target of the t-th frame are the same vehicle target, and the nth vehicle target of the t-th frame is marked with the same ID serial number as the mth vehicle target of the (t-1)-th frame; the associated vehicle id serial number set is:

ID_{t,n} = {id_{t,n}}

where id_{t,n} represents the vehicle id serial number corresponding to the nth vehicle target of the t-th frame;
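A pure-Python sketch of the IOU-based association; the patent uses the Hungarian algorithm for the optimal assignment (e.g. scipy.optimize.linear_sum_assignment), whereas the greedy loop here is a simplified stand-in illustrating the IOU matrix and the matching principle:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    s1 = iw * ih                                   # intersection area
    s2 = ((ax2 - ax1) * (ay2 - ay1)
          + (bx2 - bx1) * (by2 - by1) - s1)        # union area
    return s1 / s2 if s2 > 0 else 0.0

def match_greedy(est_boxes, det_boxes, min_iou=0.3):
    """Greedy stand-in for the Hungarian assignment: repeatedly take the
    highest remaining IOU pair above min_iou; returns (m, n) index pairs."""
    pairs = sorted(((iou(e, d), m, n)
                    for m, e in enumerate(est_boxes)
                    for n, d in enumerate(det_boxes)), reverse=True)
    used_m, used_n, matches = set(), set(), []
    for score, m, n in pairs:
        if score < min_iou:
            break
        if m not in used_m and n not in used_n:
            used_m.add(m)
            used_n.add(n)
            matches.append((m, n))
    return matches
```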
step 4, the video frame serial numbers, the vehicle ID serial numbers and the high-altitude image sequence vehicle target frame set after the association process are combined to form the original vehicle track motion characteristic identification text data set, which is as follows:
6. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: step 5, the data preprocessing is as follows:
firstly, the center point coordinates of the vehicle target frame need to be calculated:

x_{t,n} = (x1_{t,n} + x2_{t,n}) / 2,  y_{t,n} = (y1_{t,n} + y2_{t,n}) / 2

where x_{t,n} represents the abscissa and y_{t,n} the ordinate of the center point of the nth vehicle target identification frame of the t-th frame, and (x1_{t,n}, y1_{t,n}) and (x2_{t,n}, y2_{t,n}) are the upper-left and lower-right corner coordinates of that identification frame;
secondly, the width and the height of the vehicle target frame need to be calculated:

w_{t,n} = x2_{t,n} - x1_{t,n},  h_{t,n} = y2_{t,n} - y1_{t,n}

where w_{t,n} indicates the width and h_{t,n} indicates the height of the nth vehicle target frame of the t-th frame;
forming a first-level vehicle track motion characteristic identification text data set:
when the data preprocessing is carried out on the first-level vehicle track motion characteristic identification text data set, firstly, screening out the vehicle track motion characteristic identification text data by using a threshold discrimination method to form a second-level vehicle track motion characteristic identification text data set, wherein the discrimination formula is as follows:
wherein the content of the first and second substances,represents the abscissa after the threshold decision is screened out,denotes the ordinate, X, after the threshold decision was screened1Represents the lower limit of the abscissa judgment threshold, X2The expression represents the upper limit of the abscissa judgment threshold, Y1Denotes the lower limit of the ordinate decision threshold, Y2Represents the upper limit of the ordinate judgment threshold;
secondly, the vehicle tracks with the same ID serial number are counted; if the number of video frames is less than a fixed value, the track is judged to be a fragmentary track segment and cleared, and the judgment formula is as follows:
where N_{id} represents the number of video frames recorded for a given vehicle ID, and threshold represents the fixed value;
the formed secondary vehicle track motion characteristic identification text data set comprises:
where the fields represent, respectively, the video frame number corresponding to the vehicle target frame after data screening, the vehicle id serial number corresponding to the vehicle target frame after data screening, the width of the vehicle target frame after data screening, and the height of the vehicle target frame after data screening;
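The two screening steps can be sketched as follows (an illustrative sketch, not the claimed implementation; the record layout `(t, vid, cx, cy, w, h)` and the concrete threshold values are assumptions):

```python
# Two-stage screening from the claim: (1) keep only points whose center
# coordinates fall inside [X1, X2] x [Y1, Y2]; (2) drop any vehicle ID whose
# remaining track spans fewer frames than a fixed threshold. The record
# layout (frame, vehicle_id, cx, cy, w, h) is an assumption for illustration.
from collections import Counter

def screen_tracks(records, X1, X2, Y1, Y2, threshold):
    # stage 1: threshold discrimination on the center-point coordinates
    in_range = [r for r in records if X1 <= r[2] <= X2 and Y1 <= r[3] <= Y2]
    # stage 2: count frames per vehicle ID, clear fragmentary track segments
    frames_per_id = Counter(r[1] for r in in_range)
    return [r for r in in_range if frames_per_id[r[1]] >= threshold]
```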
In step 5, the motion characteristic extraction process comprises the following steps:
firstly, the vehicle speed of each vehicle id serial number under each video frame serial number is calculated; specifically, the speed of the current frame is calculated from the position difference and time difference between the current frame and the previous frame, and comprises the transverse speed and longitudinal speed of the vehicle. The data set thus formed is the three-level vehicle track motion characteristic identification text data set, and the calculation formulas for the transverse and longitudinal vehicle speeds are as follows:
where x_t represents the abscissa of the center point of the vehicle target frame of the t-th frame under the vehicle id serial number corresponding to the nth vehicle target frame of the t-th frame, x_{t-1} represents the abscissa of the center point of the vehicle target frame of the (t-1)-th frame under the same vehicle id serial number, y_t represents the ordinate of the center point of the vehicle target frame of the t-th frame under the same vehicle id serial number, y_{t-1} represents the ordinate of the center point of the vehicle target frame of the (t-1)-th frame under the same vehicle id serial number, v_{t,n,x} represents the transverse velocity of the center point of the nth vehicle target frame in the t-th frame, v_{t,n,y} represents the longitudinal velocity of the center point of the nth vehicle target frame in the t-th frame, and t-1 denotes the video frame serial number of the (t-1)-th frame after data screening;
since the speed calculation for each frame uses the position data of the previous frame, the vehicle speed of the first frame in each id's vehicle frame sequence cannot be calculated directly; it is therefore obtained by fitting a cubic polynomial, with the calculation formula as follows:
where f_3(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}) is a cubic function of v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5} and v_{x,1} is the x-direction velocity of the first frame; f_3(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5}) is a cubic function of v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5} and v_{y,1} is the y-direction velocity of the first frame; v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5} are the velocities of the 2nd, 3rd, 4th and 5th frames of each vehicle id, respectively;
secondly, the vehicle acceleration of each vehicle id serial number under each video frame serial number is calculated; specifically, the acceleration is calculated from the velocity difference and time difference between the current frame and the previous frame, and comprises the transverse acceleration and longitudinal acceleration of the vehicle, completing the three-level vehicle track motion characteristic identification text data set. The calculation formulas for the transverse and longitudinal vehicle accelerations are as follows:
where v_{t,n,x} represents the transverse velocity of the center point of the nth target frame of the t-th frame under the vehicle id serial number corresponding to the nth vehicle target frame of the t-th frame, v_{t-1,m,x} represents the transverse velocity of the center point of the mth target frame of the (t-1)-th frame under the same vehicle id serial number, v_{t,n,y} represents the longitudinal velocity of the center point of the nth target frame of the t-th frame under the same vehicle id serial number, v_{t-1,m,y} represents the longitudinal velocity of the center point of the mth target frame of the (t-1)-th frame under the same vehicle id serial number, a_{t,n,x} represents the transverse acceleration of the center point of the nth vehicle target frame in the t-th frame, and a_{t,n,y} represents the longitudinal acceleration of the center point of the nth vehicle target frame of the t-th frame;
as with the velocity, the accelerations of the first frames of different vehicles are fitted with a cubic polynomial, with the calculation formula as follows:
the formed three-level vehicle track motion characteristic identification text data set comprises:
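The velocity and acceleration extraction described above can be sketched as finite differences plus a cubic fit through frames 2–5 for the missing first-frame value (an illustrative sketch, not the claimed implementation; the constant frame interval `dt` and the use of `numpy.polyfit` are assumptions):

```python
# Per-track kinematics: velocity from the position difference between the
# current and previous frame, acceleration from the velocity difference, and
# a cubic polynomial through frames 2-5 to fill in the first-frame value,
# mirroring the claim's description. Requires at least 5 frames per track.
import numpy as np

def differentiate(series, dt):
    """series: values of one vehicle ID in frame order; returns derivatives."""
    d = np.empty_like(series, dtype=float)
    d[1:] = (series[1:] - series[:-1]) / dt    # current minus previous frame
    # first frame: evaluate a cubic fitted through the next four known values
    coeffs = np.polyfit([2, 3, 4, 5], d[1:5], 3)
    d[0] = np.polyval(coeffs, 1)
    return d

def track_kinematics(x, dt):
    vx = differentiate(np.asarray(x, dtype=float), dt)
    ax = differentiate(vx, dt)                 # acceleration from velocities
    return vx, ax
```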
In step 5, the lane number detection is as follows:
firstly, linear fitting is carried out on the vehicle position coordinate data in the three-level vehicle track motion characteristic identification text data set to obtain a fitted straight line, whose expression is as follows:
where the fitted line expresses the ordinate as a linear function of the abscissa, A represents the slope of the straight line, and B represents the intercept of the straight line;
secondly, respectively calculating the distance from the vehicle position coordinate data in the three-level vehicle track motion characteristic identification text data set to the fitting straight line, wherein the calculation formula is as follows:
the lane number is then judged using a threshold judgment method to form a four-level vehicle track motion characteristic identification text data set, and the lane-number judgment formula is as follows:
{ lane_{t,n} = k, if dist_{k,1} ≤ dist ≤ dist_{k,2} }
where lane_{t,n} indicates the lane number in which the center point of the nth vehicle target frame of the t-th frame is located, k indicates the determined lane number, dist_{k,1} indicates the lower bound of the kth lane boundary, and dist_{k,2} indicates the upper bound of the kth lane boundary;
the formed four-level vehicle track motion characteristic identification text data set comprises:
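The lane-number step above can be sketched as a least-squares line fit followed by point-to-line distances bucketed into lane bands (an illustrative sketch, not the claimed implementation; the `lane_bounds` mapping of band edges is an assumption):

```python
# Lane assignment per the claim: fit y = A*x + B to the track center points,
# compute each point's perpendicular distance to that line, then bucket the
# distance into lane bands [dist_{k,1}, dist_{k,2}]. The lane_bounds dict
# {k: (lower, upper)} is an illustrative assumption.
import math
import numpy as np

def assign_lanes(xs, ys, lane_bounds):
    A, B = np.polyfit(xs, ys, 1)               # least-squares straight line
    denom = math.hypot(A, 1.0)
    lanes = []
    for x, y in zip(xs, ys):
        dist = abs(A * x - y + B) / denom      # point-to-line distance
        lane = next((k for k, (lo, hi) in lane_bounds.items()
                     if lo <= dist <= hi), None)
        lanes.append(lane)
    return lanes
```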
In step 5, the coordinate conversion is as follows: the number of pixel units covered along the road length direction is converted against the actual road length, and after conversion the five-level vehicle track motion characteristic identification text data set is formed, with the conversion ratio as follows:
where q is the conversion ratio;
the four-level vehicle track motion characteristic identification text data set parameter conversion process comprises the following steps:
where x_{t,n,q} represents the abscissa of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, y_{t,n,q} represents the ordinate of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, w_{t,n,q} represents the width of the nth vehicle target frame of the t-th frame after coordinate conversion, h_{t,n,q} represents the height of the nth vehicle target frame of the t-th frame after coordinate conversion, v_{t,n,x,q} represents the transverse velocity of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, v_{t,n,y,q} represents the longitudinal velocity of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, a_{t,n,x,q} represents the transverse acceleration of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, and a_{t,n,y,q} represents the longitudinal acceleration of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion;
the formed five-level vehicle track motion characteristic identification text data set comprises:
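The conversion step can be sketched as a single scale factor applied to every positional and kinematic field (an illustrative sketch; the direction of the ratio, metres per pixel, and the field names are assumptions, since the claim only states that pixel count and actual road length are converted):

```python
# Pixel-to-metre conversion per the claim: q relates the actual road length
# to the number of pixel units that length covers in the image, and every
# positional/kinematic parameter is scaled by the same ratio. Assumes
# q = metres / pixels; field names are illustrative.

def conversion_ratio(road_length_m, road_length_px):
    return road_length_m / road_length_px      # metres per pixel

def convert_record(rec, q):
    """rec: dict of pixel-unit fields; returns a metre-unit copy."""
    keys = ("cx", "cy", "w", "h", "vx", "vy", "ax", "ay")
    out = dict(rec)
    for k in keys:
        out[k] = rec[k] * q                    # same ratio for all parameters
    return out
```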
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111201539.XA CN114022791B (en) | 2021-10-15 | 2021-10-15 | Vehicle track motion feature recognition method based on high-altitude visual angle recognition system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114022791A true CN114022791A (en) | 2022-02-08 |
CN114022791B CN114022791B (en) | 2024-05-28 |
Family
ID=80056377
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070105100A (en) * | 2006-04-25 | 2007-10-30 | 유일정보시스템(주) | System for tracking car objects using mosaic video image and a method thereof |
US20170301109A1 (en) * | 2016-04-15 | 2017-10-19 | Massachusetts Institute Of Technology | Systems and methods for dynamic planning and operation of autonomous systems using image observation and information theory |
US20180137376A1 (en) * | 2016-10-04 | 2018-05-17 | Denso Corporation | State estimating method and apparatus |
WO2020000251A1 (en) * | 2018-06-27 | 2020-01-02 | 潍坊学院 | Method for identifying video involving violation at intersection based on coordinated relay of video cameras |
CN112329569A (en) * | 2020-10-27 | 2021-02-05 | 武汉理工大学 | Freight vehicle state real-time identification method based on image deep learning system |
CN112884816A (en) * | 2021-03-23 | 2021-06-01 | 武汉理工大学 | Vehicle feature deep learning recognition track tracking method based on image system |
CN113269098A (en) * | 2021-05-27 | 2021-08-17 | 中国人民解放军军事科学院国防科技创新研究院 | Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |