CN112748732B - Real-time path planning method based on improved Kstar algorithm and deep learning - Google Patents
Real-time path planning method based on improved Kstar algorithm and deep learning
- Publication number: CN112748732B (application CN202011386288.2A)
- Authority: CN (China)
- Prior art keywords: vehicle; path; algorithm; road; tracking
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Abstract
The invention discloses a real-time path planning method based on an improved Kstar algorithm and deep learning. The k-shortest-paths problem is incorporated into real-time path planning: k strong candidate paths are found quickly and efficiently by the improved Kstar algorithm, and a more reasonable road evaluation formula is proposed that jointly considers the time spent traversing a road and the congestion index of the road, so that traffic pressure is relieved while the optimal path is found. The improved Kstar algorithm regionalizes the road map, which greatly accelerates the heuristic search; efficient data structures such as heaps and path structure graphs are constructed, avoiding unnecessary memory waste, so that multiple candidate paths are obtained quickly with a time complexity close to that of a conventional optimal-path algorithm, effectively reducing the number of unnecessary navigation re-planning runs.
Description
Technical Field
The invention relates to the technical field of computer science and intelligent traffic systems, in particular to a real-time path planning method based on an improved Kstar algorithm and deep learning.
Background
With urban development and the popularization of vehicles, road congestion has become a serious difficulty in traffic path planning. The time, space and economic costs caused by congestion keep increasing, and reasonable real-time optimal path planning has become an urgent need in the field of intelligent traffic.
Real-time optimal path planning is an important component of the intelligent transportation field and can effectively relieve traffic pressure and reduce the degree of congestion. However, existing simple dynamic real-time planning cannot cope with complex real-time traffic environments, while complex dynamic real-time planning algorithms have high computational complexity. Moreover, existing real-time path planning systems usually recompute the same path repeatedly and must re-plan once the path estimate exceeds a threshold, which costs additional time. Most existing road evaluation formulas simply measure the time spent traversing a road or the length of the road, so a user may still be guided onto a road section that is already congested, which increases traffic pressure.
Therefore, aiming at the above problems, the invention provides a new method for dynamic real-time path planning that overcomes the defects of the prior technical solutions.
Disclosure of Invention
In order to overcome the defects of existing real-time dynamic path planning, the invention discloses a real-time path planning method based on an improved Kstar algorithm and deep learning. The k-shortest-paths problem is incorporated into real-time path planning: k strong candidate paths are found quickly and efficiently by the improved Kstar algorithm, and a more reasonable road evaluation formula is proposed that jointly considers the time spent traversing a road and the congestion index of the road, relieving traffic pressure while finding the optimal path. In order to solve the problems of the prior art, the technical scheme of the invention comprises the following steps:
step (1): setting an updating period;
step (2): acquiring the current position and the target position of a user;
step (3): obtaining a map between the current position and the target position and related historical information of the roads it contains, dividing the map into regions, and obtaining k candidate paths by using the improved Kstar algorithm. The method specifically comprises the following steps:
step (3.1): determining and marking a starting point s and a target position t on a map:
step (3.2): using heuristic search, a shortest path from the initial position s to the target position t is retrieved, the heuristic function being:
f(n) = h(n) + λd(n)    formula (1)
Wherein n represents a point location in the searching process, f (n) represents the heuristic value of the point location n, d (n) represents the Manhattan distance from the initial position s to the point location n, h (n) represents the weight of the point location n, and λ represents the parameter for adjusting the weight;
step (3.3): the edges of the shortest path found in step (3.2) are classified as tree edges, the remaining searched edges are classified as sidetrack edges, and a heap H(v) is constructed for each vertex v on the shortest path according to the following rules:
For each vertex v on the shortest path, judge whether v is the starting point s. If so, H(v) is an empty heap; otherwise, let u be the parent node of v (u is the start point of the tree edge pointing to v) and let H(v) be a copy of H(u), so that all nodes of H(u) are carried over into H(v). Also judge whether v has an incoming edge that is a sidetrack edge: if not, no processing is performed; if so, the pair (x, v) formed by the start point x of that incoming edge and the vertex v is added to H(v) as a node. Each H(v) may contain several nodes but has only one root node; the root has only one child, all nodes are arranged in ascending order of weight, and the root node's weight is minimal.
The weight δ(x, v) of a node (x, v) is given by:
δ(x, v) = d(x) + w(x, v) − d(v)
wherein x, v represent the start and end points of the edge (x, v), d(x) represents the distance from s to x, d(v) represents the distance from s to v, and w(x, v) represents the weight of the edge (x, v).
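The heap construction above can be sketched in Python. The sidetrack cost follows the standard K* formulation δ(x, v) = d(x) + w(x, v) − d(v); all function names and the input layout are illustrative assumptions, not taken from the patent:

```python
import heapq

def sidetrack_cost(d, w, x, v):
    # delta(x, v) = d(x) + w(x, v) - d(v): the extra cost incurred by
    # leaving the shortest-path tree through sidetrack edge (x, v)
    return d[x] + w[(x, v)] - d[v]

def build_heaps(order, parent, in_sidetracks, d, w):
    """order: vertices of the shortest path from s to t, in path order.
    parent[v]: predecessor of v on that path (None for the start s).
    in_sidetracks[v]: list of sidetrack edges (x, v) entering v.
    Returns H[v], a heap of (cost, edge) pairs for each path vertex."""
    H = {}
    for v in order:
        u = parent[v]
        # the start point gets an empty heap; otherwise copy the parent's heap
        heap = [] if u is None else list(H[u])
        for (x, tail) in in_sidetracks.get(v, []):
            heapq.heappush(heap, (sidetrack_cost(d, w, x, tail), (x, tail)))
        H[v] = heap
    return H
```

Copying the parent's list preserves the heap invariant, so each push stays O(log n) per sidetrack edge.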
Step (3.5): constructing a path structure graph P(G), where δ(x, v) = d(x) + w(x, v) − d(v) is the node weight defined above:
each heap H(v_i) contains several nodes (x_i, v_i);
within the same heap, for any node (x_i, v_i) and the node (x_i′, v_i′) it points to, the edge weight between them is δ(x_i′, v_i′) − δ(x_i, v_i);
across different heaps, for any nodes (x_i, v_i) and (x_j, v_j) with v_j = x_i, the edge weight between them is δ(x_j, v_j).
Step (3.6): searching the path structure graph with the Dijkstra algorithm to obtain k candidate paths; k is a user-defined threshold, and if fewer than k candidate paths are found, the method returns to step (3.2) to continue the heuristic search;
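The best-first search over the path structure graph can be sketched as follows. This is a simplified stand-in that assumes P(G) is already materialized as a successor map rooted at a virtual node 'R' (in the full K* algorithm the graph is built lazily and interleaved with the heuristic search); each expansion corresponds to one candidate-path cost, equal to the shortest-path cost plus the accumulated sidetrack cost:

```python
import heapq

def k_best_costs(pg_succ, shortest_cost, k):
    """pg_succ maps a P(G) node to a list of (edge_weight, successor)
    pairs; 'R' is the virtual root with accumulated detour cost 0.
    Returns the costs of up to k candidate paths in ascending order."""
    costs = []
    count = 0                       # tie-breaker so nodes are never compared
    pq = [(0.0, count, 'R')]
    while pq and len(costs) < k:
        delta_sum, _, node = heapq.heappop(pq)
        # each popped P(G) node yields one candidate path
        costs.append(shortest_cost + delta_sum)
        for w, nxt in pg_succ.get(node, []):
            count += 1
            heapq.heappush(pq, (delta_sum + w, count, nxt))
    return costs
```
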
step (4): acquiring live images of the k candidate paths of step (3) through monitoring equipment and the like, and sending the live images to the cloud server side;
step (5): training a deep learning model, detecting and tracking vehicles, and calculating a road congestion index, wherein the method specifically comprises the following steps:
step (5.1): acquiring a live image from a cloud server, performing vehicle identification detection by using a YOLOv3 neural network to obtain the position of a vehicle, and marking the image;
the YOLOv3 neural network is trained and constructed through a UA-DETRAC data set.
Step (5.2): the vehicle tracking part uses a Deepsort algorithm and comprises the following steps:
step (5.2.1): vehicle tracking prediction using Kalman filtering on the live image of each frame;
step (5.2.2): calculating the vehicle motion similarity: the target distance between each vehicle tracking prediction produced by the Kalman-filter trackers on the previous frame and each vehicle detection produced by the detection boxes on the current frame is calculated to obtain the motion similarity; the pairs whose distance is smaller than a threshold are retained by screening, and the motion-state association is then considered successful. The formula is as follows:
d^(1)(i, j) = (d_j − y_i)^T S_i^(−1) (d_j − y_i)
wherein d_j represents the position of the jth vehicle detection box of the current frame, y_i represents the position of the vehicle target predicted by the ith tracker of the previous frame, and S_i represents the covariance matrix between the vehicle position obtained by vehicle detection and the mean of the vehicle positions obtained by vehicle tracking;
step (5.2.3): calculating the vehicle appearance similarity: the feature vectors of the vehicle obtained from the k tracking boxes most recently successfully associated with the ith tracker, and the feature vector of the jth vehicle detection result of the current frame, are computed through a convolutional neural network (CNN), and the vehicle appearance similarity between the vehicle image obtained by the detection box and the vehicle images successfully associated with the tracking boxes is obtained according to the following formula:
d^(2)(i, j) = min{1 − r_j^T r_k^(i) : r_k^(i) ∈ R_i}
wherein r_k^(i) represents the feature vector of the vehicle obtained from one of the k tracking boxes most recently successfully matched by the ith tracker, k being a freely adjustable threshold, r_j represents the feature vector of the vehicle obtained by the jth detection box, and R_i represents the set of feature vectors of the detection boxes most recently successfully matched by the ith tracker.
Step (5.2.4): calculating a fusion characteristic value according to the vehicle motion similarity measurement and the vehicle appearance similarity measurement:
wherein, w a ,w b And respectively representing the weights corresponding to the vehicle motion similarity measure and the vehicle appearance similarity measure.
Step (5.2.5): the fusion result is used as the input of the Hungarian algorithm to realize maximum matching, the vehicle detection result and the vehicle tracking result are correlated, and Kalman filtering needs to be updated according to the result when the model is trained;
step (5.3): predicting an average vehicle speed and a road congestion index using results of the vehicle detection and tracking, comprising the steps of:
step (5.3.1): the image coordinates obtained by vehicle tracking are converted into the coordinates of the actual road, the actual displacement is calculated, and dividing by the time difference between consecutive frames predicts the approximate speed v_j of each vehicle;
Step (5.3.2): the average vehicle speed is calculated as follows:
wherein v is j Representing the speed of the jth vehicle, n representing the number of vehicles,representing the estimated average vehicle speed. The congestion index is defined according to different vehicle speeds:
wherein z is i Representative of the congestion index, z i When 0 represents deadlock, z i When 1 represents clogging, z i When 2 represents congestion, z i When the value is 3, the expression indicates unobstructed. v. of 1 ,v 2 ,v 3 All are thresholds, set empirically.
And (6): calculating the weight of each candidate path according to the following formula, selecting the path with the minimum weight as the optimal path, wherein the path weight estimation formula is as follows:
wherein L is i Represents the length of the ith segment of road,represents the average speed of the ith road, beta represents a penalty coefficient, can be freely adjusted between 0 and 10, 10 represents the highest penalty of the congested road, and z represents i Representing the congestion index of the road.
And (7): and (4) whether the target location is reached, if so, finishing the path planning, otherwise, returning to the step 2.
Compared with the prior art, the method provided by the invention has the following advantages:
1. High efficiency: the improved Kstar algorithm regionalizes the road map, greatly accelerating the heuristic search; efficient data structures such as heaps and path structure graphs are constructed, avoiding unnecessary memory waste, so that multiple candidate paths are obtained quickly with a time complexity close to that of a conventional optimal-path algorithm, effectively reducing the number of unnecessary navigation re-planning runs.
2. Robustness: by adopting the efficient YOLOv3 network model and the Deepsort tracking algorithm, and combining multiple deep learning algorithms and models, the interference of noise in the data is effectively reduced, and the vehicle detection and tracking capability and the prediction capability of the congestion model are enhanced.
3. Rationality: traditional path planning simply considers the time spent on the road or the length of the road; the method jointly considers the traversal time and the congestion degree, avoiding making congested roads even more congested, planning paths more reasonably, relieving traffic pressure and realizing intelligent traffic.
Drawings
FIG. 1 is a flow chart of a method of real-time path planning;
FIG. 2 is a detailed flow chart of a real-time path planning method based on the improved Kstar algorithm and deep learning;
FIG. 3 is a road map of a local area of Shanghai City;
FIG. 4 is a schematic diagram of the modeling of a road map in this example;
FIG. 5 is a comparison graph of the optimal path obtained by the method with multiple experiments of the shortest length path and the shortest time path;
Detailed Description
The real-time path planning method based on the improved Kstar algorithm and the deep learning provided by the invention will be further described with reference to the accompanying drawings.
Referring to fig. 1-2, a flow chart of the method is shown, and the specific steps are as follows:
step (1): setting an update period, generally five minutes;
step (2): acquiring a target position according to a navigation request of a user, and acquiring the current position of the user according to a GPS satellite;
step (3): loading the map between the current position and the target position and the related historical information of the roads it includes, obtaining k candidate paths by using the improved Kstar algorithm, and layering the nodes on the map into a hierarchical network according to province, city and similar extents. Step (3) further comprises the following steps:
step (3.1): determining and marking a starting point s and a target position t on the map; if the starting point and the target position belong to different provinces, different cities or the like, the planning target is adjusted to the exit of the same-layer network, and the historical best path from that exit to the entrance of the network where the target location lies is then retrieved;
step (3.2): using a heuristic search, a shortest path is retrieved from the initial location s to the target location t. The specific heuristic searching step is as follows:
step (3.2.1): adding the initial position s into a priority queue open _ list;
step (3.2.2): take out the first node in the priority queue open_list and examine every node reachable from it: skip a reachable node if it has already been visited; if it has not been visited and is not yet in the priority queue, add it to open_list; if it is already in the priority queue, update its minimum heuristic value. Then store the taken-out node into close_list, the queue of visited nodes. The heuristic value is calculated as follows:
f(n) = h(n) + λd(n)    formula (1)
Wherein n represents a point location in the searching process, f (n) represents a heuristic value of the point location n, d (n) represents a Manhattan distance from an initial position s to the point location n, h (n) represents a weight of the point location n, and λ represents a parameter for adjusting the weight;
step (3.2.3): repeat step (3.2.2) until the target position t is found, or, when the priority queue open_list becomes empty, end the search and return a search failure.
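Steps (3.2.1) to (3.2.3) can be sketched as a priority-queue search. This is a minimal illustration of formula (1) under the assumption of a grid map with a user-supplied point-weight function; for brevity it omits the in-queue update of heuristic values, and all names are illustrative:

```python
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def heuristic_search(neighbors, weight, s, t, lam=1.0):
    """Best-first search with f(n) = h(n) + lam * d(n), where h(n) is
    the point weight and d(n) the Manhattan distance from the start s,
    as in formula (1). Returns a parent map for path recovery, or None
    on failure (the open_list ran empty)."""
    open_list = [(0.0, s)]
    parent = {s: None}          # discovered nodes and their predecessors
    closed = set()              # the close_list of visited nodes
    while open_list:
        _, n = heapq.heappop(open_list)
        if n == t:
            return parent
        if n in closed:
            continue
        closed.add(n)
        for m in neighbors(n):
            if m not in closed and m not in parent:
                parent[m] = n
                f = weight(m) + lam * manhattan(s, m)
                heapq.heappush(open_list, (f, m))
    return None
```
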
Step (3.3): when the target position t is found, suspending heuristic search, and saving a path from the starting point s to the target position t as a first shortest path;
step (3.4): the edges of the shortest path found in step (3.3) are classified as tree edges, the remaining searched edges are classified as sidetrack edges, and a heap H(v) is constructed for each vertex v on the shortest path according to the following rules:
For each vertex v on the shortest path, judge whether v is the starting point s. If so, H(v) is an empty heap; otherwise, let u be the parent node of v (u is the start point of the tree edge pointing to v) and let H(v) be a copy of H(u), so that all nodes of H(u) are carried over into H(v). Also judge whether v has an incoming edge that is a sidetrack edge: if not, no processing is performed; if so, the pair (x, v) formed by the start point x of that incoming edge and the vertex v is added to H(v) as a node. Each H(v) may contain several nodes but has only one root node; the root has only one child, all nodes are arranged in ascending order of weight, and the root node's weight is minimal.
The weight δ(x, v) of a node (x, v) is given by:
δ(x, v) = d(x) + w(x, v) − d(v)
wherein x, v represent the start and end points of the edge (x, v), d(x) represents the distance from s to x, d(v) represents the distance from s to v, and w(x, v) represents the weight of the edge (x, v).
Step (3.5): constructing a path structure graph P(G), where δ(x, v) = d(x) + w(x, v) − d(v) is the node weight defined above:
each heap H(v_i) contains several nodes (x_i, v_i);
within the same heap, for any node (x_i, v_i) and the node (x_i′, v_i′) it points to, the edge weight between them is δ(x_i′, v_i′) − δ(x_i, v_i);
across different heaps, for any nodes (x_i, v_i) and (x_j, v_j) with v_j = x_i, the edge weight between them is δ(x_j, v_j).
Step (3.6): applying the Dijkstra algorithm to the path structure graph to obtain k candidate paths; if fewer than k candidate paths are found (k is a user-defined threshold), return to step (3.2) to continue the heuristic search;
step (4): acquiring live images of the roads in each path through monitoring equipment and the like, and sending the images to the cloud server;
step (5): training the deep learning model, detecting and tracking vehicles, and calculating the degree of road congestion. Step (5) further comprises the following steps:
step (5.1): acquiring a live image from the cloud server, normalizing the image, carrying out vehicle identification detection by using a YOLOv3 neural network trained on the UA-DETRAC data set to obtain the position of each vehicle, and marking the positions on the image; the YOLOv3 neural network is an efficient target detection model that adopts the Darknet-53 network structure, composed of 53 convolutional layers with 3×3 and 1×1 kernels;
step (5.2): the vehicle tracking part uses a Deepsort algorithm and comprises the following steps:
step (5.2.1): vehicle tracking predictions are made for each frame using kalman filtering, setting the mean:
x=[cx cy r h vx vy vr vh]
wherein cx and cy represent the center position of the detected vehicle, r represents the aspect ratio, h represents the height, and vx, vy, vr and vh respectively represent the rates of change of cx, cy, r and h;
the Kalman filtering prediction formula is as follows:
x′ = τx    formula (3)
P′ = τPτ^T + Q    formula (4)
wherein x′ represents the Kalman-filter prediction of the detected vehicle position at time t, x represents the position of the target vehicle at time t−1, P represents the covariance matrix of the method at frame t−1, P′ represents the new covariance matrix at frame t calculated from the data of the previous frame, τ represents the state transition matrix, dt is the time difference between the current frame and the previous frame, and Q represents the noise matrix.
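A minimal NumPy sketch of the prediction step in formulas (3) and (4), assuming the eight-dimensional constant-velocity state [cx, cy, r, h, vx, vy, vr, vh] defined above; modelling Q as an isotropic noise matrix is an illustrative simplification:

```python
import numpy as np

def make_transition(dt):
    # Constant-velocity transition tau: each position component gains
    # dt times its velocity component.
    tau = np.eye(8)
    tau[:4, 4:] = dt * np.eye(4)
    return tau

def kalman_predict(x, P, dt, q=1e-2):
    """One Kalman prediction step: x' = tau @ x and
    P' = tau @ P @ tau.T + Q, matching formulas (3) and (4)."""
    tau = make_transition(dt)
    Q = q * np.eye(8)           # illustrative isotropic process noise
    return tau @ x, tau @ P @ tau.T + Q
```
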
Step (5.2.2): calculating the motion similarity of the vehicle, and calculating the target distance between every two vehicle tracking prediction results obtained by the last frame of Kalman filtering multiple trackers and the vehicle detection results of the current frame of multiple detection frames to obtain the motion similarity; and (3) screening to obtain a distance smaller than a threshold value, wherein the motion state association is successful at the moment, and the formula is as follows:
wherein d is j Indicates the position of the jth vehicle detection frame of the current frame i Representing the predicted position of the i-th tracker of the previous frame on the vehicle object, S i A covariance matrix representing a covariance matrix between the vehicle position obtained by the vehicle detection and a mean of the vehicle positions obtained by the vehicle tracking;
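The motion metric can be sketched as a squared Mahalanobis distance; the function below is a generic illustration, not the patent's exact implementation, and the names are assumptions:

```python
import numpy as np

def motion_distance(d_j, y_i, S_i):
    """Squared Mahalanobis distance d1(i, j) between a detection
    position d_j and a tracker's predicted position y_i, with
    innovation covariance S_i."""
    diff = d_j - y_i
    return float(diff @ np.linalg.inv(S_i) @ diff)

def motion_gate(d_j, y_i, S_i, threshold):
    # A pair is kept only when the distance falls below the threshold,
    # i.e. the motion-state association succeeds.
    return motion_distance(d_j, y_i, S_i) < threshold
```
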
step (5.2.3): calculating the appearance similarity of the vehicle, and calculating the feature vector of the vehicle obtained by k tracking frames which are recently and successfully associated with the ith tracker and the feature vector of the jth vehicle detection result of the current frame through a specific feature extraction convolutional neural network;
table 1: specific feature extraction convolutional neural network
The vehicle appearance similarity between the vehicle image obtained by the detection box and the vehicle images successfully associated with the tracking boxes is obtained according to the following formula:
d^(2)(i, j) = min{1 − r_j^T r_k^(i) : r_k^(i) ∈ R_i}
wherein r_k^(i) represents the feature vector of the vehicle obtained from one of the k tracking boxes most recently successfully matched by the ith tracker, k being a freely adjustable threshold, r_j^T represents the transpose of the feature vector of the vehicle obtained by the jth detection box, and R_i represents the set of feature vectors of the detection boxes most recently successfully matched by the ith tracker.
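A sketch of the appearance metric, assuming the tracker keeps a gallery of unit-normalised feature vectors as in the Deepsort literature; names are illustrative:

```python
import numpy as np

def appearance_distance(gallery, r_j):
    """Smallest cosine distance between the detection feature r_j and
    the last-k track features in `gallery` (all unit-normalised), i.e.
    d2(i, j) = min over the gallery of 1 - r_j^T r_k."""
    feats = np.asarray(gallery)
    return float(np.min(1.0 - feats @ r_j))
```
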
Step (5.2.4): and (3) fusing the vehicle motion similarity measurement and the vehicle appearance similarity measurement to calculate a characteristic value:
wherein, w a ,w b Representing the weight parameters corresponding to the two metrics, c i,j Representing the total eigenvalue.
Step (5.2.5): the fusion result is used as the input of the Hungarian algorithm to realize maximum matching, the detection result and the tracking result are correlated, and Kalman filtering needs to be updated according to the result when the model is trained;
step (5.3): predicting an average vehicle speed and a degree of road congestion using results of the vehicle detection and tracking, comprising the steps of:
step (5.3.1): the image coordinates obtained by vehicle tracking are converted into the coordinates of the actual road, the actual displacement is calculated, and dividing by the time difference between consecutive frames predicts the approximate speed v_j of each vehicle;
Step (5.3.2): an average vehicle speed is calculated, which is formulated as follows:
wherein v is j Representing the speed of the jth vehicle, n representing the number of vehicles,representing the estimated average vehicle speed. And (3) dividing congestion levels according to different vehicle speeds:
wherein z is i Representing the degree of congestion, z i When 0 represents deadlock, z i When 1 represents clogging, z i When 2 represents congestion, z i When it is 3, it stands for unobstructed, v 1 ,v 2 ,v 3 Are all adjustable thresholds.
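The averaging and thresholding of step (5.3.2) can be sketched directly. The threshold values below are illustrative placeholders, since the patent leaves v_1, v_2, v_3 to be set empirically:

```python
def average_speed(speeds):
    # v_bar = (1/n) * sum of v_j over the n tracked vehicles
    return sum(speeds) / len(speeds)

def congestion_index(v_bar, v1=5.0, v2=15.0, v3=30.0):
    """Map the average speed (km/h) to the congestion index z:
    0 = deadlock, 1 = blocked, 2 = congested, 3 = unobstructed.
    The thresholds v1 < v2 < v3 are illustrative assumptions."""
    if v_bar < v1:
        return 0
    if v_bar < v2:
        return 1
    if v_bar < v3:
        return 2
    return 3
```
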
And (6): calculating the weight of the path to be selected according to a path weight estimation formula, and selecting an optimal path, wherein the path weight estimation formula is as follows:
wherein L is i Represents the length of the ith segment of road,represents the average speed of the ith road, beta represents a penalty coefficient, can be freely adjusted between 0 and 10, 10 represents the highest penalty of the congested road, and z represents i Representing the degree of congestion of the road.
And (7): and (4) whether the target location is reached, if so, finishing the path planning, otherwise, returning to the step 2.
Referring to FIG. 3, a local area in the east of Shanghai is taken as an example.
Referring to FIG. 4, the road characteristics of the area are extracted, a two-dimensional model of the area is obtained through abstract modeling, a 5×5 grid is added, and the relevant historical information of each link (its length and historical average speed) is obtained, from which the initial weight of each link is calculated. Taking edge (S0, S1) as an example: the road section is 0.3089 km long, is a branch road, and has a historical average speed of 30 km/h, so its weight is 0.0103. Taking edge (S0, S4) as an example: the road section is 0.2800 km long, is a secondary trunk road, and has a historical average speed of 40 km/h, so its weight is 0.0070. In this example, k is set to 5, the starting point is S0, the target location is S18, and the improved Kstar algorithm is used to obtain the k shortest paths quickly.
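The initial weights quoted above are consistent with weight = length / historical average speed, i.e. the expected traversal time in hours; a one-line sketch reproduces both figures:

```python
def initial_edge_weight(length_km, hist_speed_kmh):
    # Initial link weight = expected traversal time in hours: L / v_hist
    return length_km / hist_speed_kmh
```
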
Table 2: alternative paths derived from the improved Kstar algorithm
Live images of all the roads on the five paths are acquired, and the vehicles are identified and tracked using the trained YOLOv3 network model and the Deepsort algorithm; practical verification shows that the algorithm achieves the best effect with w_a = 0 and w_b = 1. The average vehicle speed is then obtained, the congestion level is judged, and the road weight is calculated. Taking the path numbered 1 as an example, the congestion levels of its six roads must be obtained, so the monitoring videos of all candidate roads are acquired. Taking road section edge (S0, S1) as an example, images of the moving vehicles are obtained through the vehicle identification and tracking algorithms, and the displacement of each vehicle's center point between adjacent frames is calculated; generally every 25 frames constitute one vehicle-speed detection, and dividing by the frame-difference time predicts the speed of each vehicle. Applying the average vehicle speed formula, the real-time average vehicle speed of the road section is obtained, and from this speed the congestion level is determined to be smooth, namely z = 0; the average vehicle speed and congestion level of the other roads are calculated in the same way. Then, according to the path weight estimation formula with β set to 0.5, the weight of the path is obtained as W = 0.0588, and the final weights of the other four paths are obtained in the same manner.
Table 3: the weight of the candidate path obtained by the Kstar algorithm in this example
The selected optimal path is therefore the path numbered 2. Calculation and verification show that the weight of the shortest-length-first path is 0.0402 and the weight of the least-time-first path is 0.0710, indicating that the traditional shortest-length-first and least-time-first paths cannot bypass a completely congested road section. FIG. 5 compares multiple experimental results of the optimal path obtained by the method against the shortest-length path and the shortest-time path; it can be seen that the method is more reasonable in a crowded traffic environment, effectively avoids congested road sections, and relieves traffic pressure while realizing path planning.
The method is efficient and robust, overcoming the defects of traditional real-time path planning. Its efficiency lies in overcoming the time wasted by repeatedly searching for the optimal path in traditional path planning: by incorporating the k-shortest-paths problem, several strong candidate paths are generated quickly, providing sufficient alternatives for real-time optimal path planning. Its robustness lies in adopting multiple deep learning algorithms and models: the highly efficient YOLOv3 algorithm is selected for vehicle detection, obtaining detection boxes quickly and accurately and reducing noise interference, while Kalman filtering and measurement fusion are used in vehicle tracking, jointly considering the motion similarity and appearance similarity of each vehicle to realize more accurate tracking. Whereas traditional path planning considers only the time spent on the path, a more reasonable path weight estimation formula is proposed that combines the traversal time with the congestion degree, planning the optimal path while relieving traffic pressure and realizing intelligent traffic.
The above description of the embodiments is only intended to facilitate the understanding of the method of the invention and its core idea. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (2)
1. A real-time path planning method based on an improved Kstar algorithm and deep learning, characterized by comprising the following steps:
step (1): setting an updating period;
step (2): acquiring the current position and the target position of a user;
step (3): obtaining a map between the current position and the target position together with relevant historical road information, dividing the map into a plurality of areas, and obtaining k paths to be selected by using the improved Kstar algorithm;
step (4): acquiring live images of the k paths to be selected in step (3) and sending the live images to a cloud server;
step (5): training a deep learning model, detecting and tracking vehicles, and calculating a road congestion index; the method comprises the following steps:
step (5.1): acquiring a live image from a cloud server, and performing vehicle identification detection by using a YOLOv3 neural network to obtain the position of a vehicle;
step (5.2): the vehicle tracking part uses a Deepsort algorithm; the method comprises the following steps:
step (5.2.1): performing vehicle tracking prediction using Kalman filtering on each frame of the live images;
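The prediction step in step (5.2.1) can be sketched with a plain constant-velocity Kalman filter; the state layout [x, y, vx, vy], the process-noise scale q, and all numbers below are illustrative assumptions, not the patent's actual filter.

```python
# Minimal Kalman prediction sketch for one tracked vehicle,
# state = [x, y, vx, vy] under an assumed constant-velocity model.

def kalman_predict(x, P, dt=1.0, q=1e-2):
    """Predict the next state mean x and covariance P."""
    # State transition matrix F for [x, y, vx, vy]
    F = [[1, 0, dt, 0],
         [0, 1, 0, dt],
         [0, 0, 1,  0],
         [0, 0, 0,  1]]
    # x' = F x
    x_new = [sum(F[i][k] * x[k] for k in range(4)) for i in range(4)]
    # P' = F P F^T + Q, with Q = q * I as a simple process-noise model
    FP = [[sum(F[i][k] * P[k][j] for k in range(4)) for j in range(4)]
          for i in range(4)]
    P_new = [[sum(FP[i][k] * F[j][k] for k in range(4)) + (q if i == j else 0)
              for j in range(4)] for i in range(4)]
    return x_new, P_new

state = [100.0, 50.0, 4.0, -2.0]   # pixel position and per-frame velocity (toy values)
cov = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
pred, _ = kalman_predict(state, cov)
print(pred[:2])   # predicted position: [104.0, 48.0]
```

A production tracker would predict the full bounding-box state and update with each matched detection; this sketch only illustrates the predict step.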
step (5.2.2): calculating the vehicle motion similarity: computing the target distance between each vehicle tracking prediction obtained by the multiple Kalman-filter trackers in the previous frame and each vehicle detection result in the multiple detection boxes of the current frame; distances smaller than a threshold are retained by screening, in which case the motion-state association is successful; the formula is as follows:
$d^{(1)}(i,j) = (d_j - \varepsilon_i)^{T} S_i^{-1} (d_j - \varepsilon_i)$   Formula (5)
wherein $d_j$ denotes the position of the j-th vehicle detection box of the current frame, $\varepsilon_i$ denotes the position of the vehicle target predicted by the i-th tracker in the previous frame, and $S_i$ denotes the covariance matrix between the vehicle position obtained by detection and the mean of the vehicle positions obtained by tracking;
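Formula (5) is a squared Mahalanobis distance; a minimal 2-D sketch follows (a real tracker works on the full box state, and all positions and the covariance below are hypothetical numbers).

```python
# Illustrative squared Mahalanobis distance between a detection position d_j
# and a tracker's predicted position eps_i, with 2x2 innovation covariance S.

def mahalanobis_sq(d, eps, S):
    """(d - eps)^T S^{-1} (d - eps) for a 2x2 covariance S."""
    dx, dy = d[0] - eps[0], d[1] - eps[1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    # Closed-form inverse of a 2x2 matrix
    inv = [[ S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det,  S[0][0] / det]]
    return (dx * (inv[0][0] * dx + inv[0][1] * dy)
            + dy * (inv[1][0] * dx + inv[1][1] * dy))

d_j = [104.0, 49.0]              # detected box centre, current frame (toy)
eps_i = [102.0, 48.0]            # tracker i's Kalman prediction (toy)
S_i = [[4.0, 0.0], [0.0, 1.0]]   # covariance (toy)
print(mahalanobis_sq(d_j, eps_i, S_i))   # 2.0 = (2^2)/4 + (1^2)/1
```

Because the distance is normalised by the covariance, a fixed gating threshold can then decide whether the motion-state association succeeds.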
step (5.2.3): calculating the vehicle appearance similarity: the feature vectors of the vehicle in the last k tracking boxes successfully associated with the i-th tracker, and the feature vector of the j-th vehicle detection result of the current frame, are extracted by a convolutional neural network (CNN); the appearance similarity between the vehicle image in the detection box and the vehicle images of the successfully associated tracking boxes is obtained according to the following formula:

$d^{(2)}(i,j) = \min\{\, 1 - r_j^{T} r_k^{(i)} \mid r_k^{(i)} \in R_i \,\}$

wherein $r_k^{(i)}$ denotes the feature vector of the vehicle in the k tracking boxes most recently matched by the i-th tracker, k is a freely adjustable threshold, $r_j$ denotes the feature vector of the vehicle in the j-th detection box, and $R_i$ denotes the set of feature vectors from the i-th tracker's most recent successful matches;
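The appearance term can be sketched as the smallest cosine distance between the detection feature and the tracker's stored feature gallery; the toy feature vectors below stand in for real CNN embeddings.

```python
# Sketch of the appearance metric: minimum cosine distance between the
# detection feature r_j and the tracker's last k matched features.
import math

def cosine_distance(a, b):
    """1 - cos(a, b) for two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def appearance_distance(gallery, r_j):
    """d2(i, j) = min over the tracker's stored features of (1 - cos)."""
    return min(cosine_distance(r, r_j) for r in gallery)

gallery_i = [[1.0, 0.0], [0.8, 0.6]]   # last k matched features of tracker i (toy)
r_j = [0.6, 0.8]                       # detection feature (toy)
print(round(appearance_distance(gallery_i, r_j), 3))
```

Taking the minimum over the gallery makes the metric tolerant of pose changes: the detection need only resemble one of the recent appearances of the tracked vehicle.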
step (5.2.4): calculating a fused characteristic value from the vehicle motion similarity measurement and the vehicle appearance similarity measurement:

$c_{i,j} = w_a\, d^{(1)}(i,j) + w_b\, d^{(2)}(i,j)$

wherein $w_a$ and $w_b$ denote the weights corresponding to the vehicle motion similarity measure and the vehicle appearance similarity measure, respectively;
step (5.2.5): using the fusion result as the input of the Hungarian algorithm to achieve maximum matching, thereby associating the vehicle detection results with the vehicle tracking results; during model training, the Kalman filter is updated according to the matching result;
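The fuse-then-assign step can be sketched on a tiny cost matrix; a brute-force search over permutations stands in for the Hungarian algorithm here (equivalent on this 2x2 toy example), and all distances and weights are hypothetical.

```python
# Fuse motion and appearance distances with weights w_a, w_b, then find the
# minimum-cost tracker-to-detection assignment (brute force as a stand-in
# for the Hungarian algorithm on a small toy matrix).
from itertools import permutations

w_a, w_b = 0.5, 0.5
motion = [[0.2, 1.5], [1.4, 0.1]]        # d1(i, j) per tracker i, detection j (toy)
appearance = [[0.1, 0.9], [0.8, 0.05]]   # d2(i, j) (toy)
cost = [[w_a * motion[i][j] + w_b * appearance[i][j] for j in range(2)]
        for i in range(2)]

best = min(permutations(range(2)),
           key=lambda p: sum(cost[i][p[i]] for i in range(2)))
print(list(enumerate(best)))   # (tracker, detection) pairs
```

For realistic numbers of vehicles one would use a polynomial-time assignment solver (e.g. `scipy.optimize.linear_sum_assignment`) instead of enumeration.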
step (5.3): predicting an average vehicle speed and a road congestion index using results of the vehicle detection and tracking; the method comprises the following steps:
step (5.3.1): converting the image coordinates obtained by vehicle tracking into actual road coordinates, calculating the actual displacement, and dividing by the time difference between consecutive frames to estimate the approximate speed $v_j$ of each vehicle;
step (5.3.2): the average vehicle speed is calculated as follows:

$\bar{v} = \frac{1}{n} \sum_{j=1}^{n} v_j$

wherein $v_j$ denotes the speed of the j-th vehicle, n denotes the number of vehicles, and $\bar{v}$ denotes the estimated average vehicle speed; the congestion index is defined according to different vehicle speeds:

$z_i = \begin{cases} 0, & \bar{v} < v_1 \\ 1, & v_1 \le \bar{v} < v_2 \\ 2, & v_2 \le \bar{v} < v_3 \\ 3, & \bar{v} \ge v_3 \end{cases}$

wherein $z_i$ denotes the congestion index: $z_i = 0$ represents deadlock, $z_i = 1$ represents blockage, $z_i = 2$ represents congestion, and $z_i = 3$ represents clear traffic; $v_1, v_2, v_3$ are all thresholds;
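Step (5.3.2) reduces to a mean and a threshold lookup; the threshold values below are illustrative placeholders, since the patent leaves $v_1, v_2, v_3$ unspecified.

```python
# Average speed over tracked vehicles, then the congestion index from
# assumed thresholds v1 < v2 < v3 (placeholder values, not from the patent).

def congestion_index(speeds, v1=5.0, v2=15.0, v3=30.0):
    """Return (average speed, congestion index z)."""
    v_bar = sum(speeds) / len(speeds)
    if v_bar < v1:
        return v_bar, 0   # deadlock
    if v_bar < v2:
        return v_bar, 1   # blockage
    if v_bar < v3:
        return v_bar, 2   # congestion
    return v_bar, 3       # clear

v_bar, z = congestion_index([12.0, 8.0, 10.0])   # per-vehicle speeds (toy)
print(v_bar, z)   # 10.0 1
```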
step (6): calculating the weight of each candidate path according to the path-weight estimation formula and selecting the path with the minimum weight as the optimal path,
wherein $L_i$ denotes the length of the i-th road segment, $\bar{v}_i$ denotes the average speed on the i-th road, $\beta$ denotes the penalty factor, and $z_i$ denotes the congestion index of the road;
step (7): judging whether the target location has been reached; if so, the path planning is finished; otherwise, return to step (2).
2. The real-time path planning method based on the improved Kstar algorithm and deep learning of claim 1, wherein step (3) comprises the following steps:
step (3.1): determining and marking a starting point s and a target position t on the map;
step (3.2): retrieving a shortest path from the initial position s to the target position t using heuristic search, the heuristic function being:
$f(n) = h(n) + \lambda\, d(n)$   Formula (1)
wherein n denotes a point in the search process, f(n) denotes the heuristic value of point n, d(n) denotes the Manhattan distance from the initial position s to point n, h(n) denotes the weight of point n, and $\lambda$ is a parameter for adjusting the weight;
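Formula (1) can be evaluated directly on grid coordinates; the coordinates and per-node weights below are illustrative, and the node-weight table h stands in for whatever weighting the search assigns.

```python
# Sketch of Formula (1): f(n) = h(n) + lambda * d(n), with d(n) the
# Manhattan distance from the start s to point n (toy grid values).

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def heuristic_value(n, s, h, lam=0.5):
    """f(n) = h(n) + lambda * d(n)."""
    return h[n] + lam * manhattan(s, n)

s = (0, 0)
h = {(2, 3): 4.0, (1, 1): 6.0}          # per-node weights h(n), hypothetical
print(heuristic_value((2, 3), s, h))    # 4.0 + 0.5 * 5 = 6.5
```

The parameter $\lambda$ trades off the node weight against distance travelled from the start: larger $\lambda$ biases the search toward points near s.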
step (3.3): classifying the shortest path found in step (3.2) as tree edges and the remaining searched paths as side edges, and constructing a heap H(v) for each vertex v on the shortest path according to the following rules:
for each vertex v on the shortest path, judge whether v is the starting point s; if so, H(v) is an empty heap; otherwise, let u be the parent node of v, i.e. the starting point of the tree edge pointing to v, and let H(v) be a copy of H(u). Also judge whether vertex v has an incoming edge that is a side edge; if not, no processing is performed; if so, the pair (x, v) formed by the starting point x of that incoming edge and the vertex v is added to H(v) as a node, and all nodes of H(u) are also contained in H(v) because H(v) is a copy of H(u). Each H(v) has several nodes but only one root node; the root node has only one child; all nodes are arranged in ascending order of weight, the root node having the minimum weight;
the node (x, v) weight formula is as follows:
wherein x, v represent the start and end points of edge (x, v), d (x) represents the distance from s to x, d (v) represents the distance from s to v, and w (x, v) represents the weight of edge (x, v);
step (3.5): constructing a path structure diagram:
redefining the heaps: each heap $H(v_i)$ contains several nodes $(x_i, v_i)$;
for nodes in the same heap, the edge from any node $(x_i, v_i)$ to a node $(x_i', v_i')$ it points to has weight $\delta(x_i', v_i') - \delta(x_i, v_i)$;
for nodes in different heaps, for any two nodes $(x_i, v_i)$ and $(x_j, v_j)$ with $v_j = x_i$, the edge between them has weight $\delta(x_j, v_j)$;
step (3.6): searching the path structure graph using the Dijkstra algorithm to obtain k paths to be selected; k is a user-defined threshold, and if fewer than k paths to be selected are found, return to step (3.2) and continue the heuristic search.
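The full K* machinery above (heaps of sidetrack edges plus a Dijkstra search of the path graph) is involved; as a much-simplified stand-in, the sketch below generates k candidate paths on a toy road network by a best-first search that re-expands partial paths, which yields the same candidate set on small graphs.

```python
# Simplified k-candidate-path generation: best-first expansion of partial
# paths with a priority queue (toy road network; not the patent's K* search).
import heapq

def k_shortest_paths(graph, s, t, k):
    """Return up to k (cost, path) pairs from s to t, cheapest first."""
    heap = [(0.0, [s])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == t:
            found.append((cost, path))
            continue
        for nxt, w in graph.get(node, []):
            if nxt not in path:   # keep paths loopless in this sketch
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return found

roads = {'s': [('a', 1.0), ('b', 2.0)],   # hypothetical road segments
         'a': [('t', 2.0), ('b', 0.5)],
         'b': [('t', 1.0)]}
results = k_shortest_paths(roads, 's', 't', 3)
for cost, path in results:
    print(cost, path)
```

K*'s heap construction exists precisely to avoid this sketch's exponential blow-up on large graphs: it enumerates the k best paths lazily without materialising every partial path.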
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011386288.2A CN112748732B (en) | 2020-12-01 | 2020-12-01 | Real-time path planning method based on improved Kstar algorithm and deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112748732A CN112748732A (en) | 2021-05-04 |
CN112748732B true CN112748732B (en) | 2022-08-05 |
Family
ID=75648984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011386288.2A Active CN112748732B (en) | 2020-12-01 | 2020-12-01 | Real-time path planning method based on improved Kstar algorithm and deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112748732B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114237265B (en) * | 2022-02-25 | 2022-07-12 | 深圳市城市交通规划设计研究中心股份有限公司 | Optimal routine inspection route planning method, system, computer and storage medium |
CN114967711B (en) * | 2022-07-04 | 2023-05-12 | 江苏集萃清联智控科技有限公司 | Multi-AGV cooperative path planning method and system based on dynamic weighted map |
CN118052344B (en) * | 2024-02-27 | 2024-07-30 | 暨南大学 | Multi-path collaborative planning method based on MTWS congestion evaluation and storage medium |
CN117970892B (en) * | 2024-03-29 | 2024-07-09 | 合肥焕智科技有限公司 | Control method and device of conveying system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105335816A (en) * | 2015-10-13 | 2016-02-17 | 国网安徽省电力公司铜陵供电公司 | Electric power communication operation trend and business risk analyzing method based on deep learning |
CN108426576A (en) * | 2017-09-15 | 2018-08-21 | 辽宁科技大学 | Aircraft paths planning method and system based on identification point vision guided navigation and SINS |
CN110096981A (en) * | 2019-04-22 | 2019-08-06 | 长沙千视通智能科技有限公司 | A kind of video big data traffic scene analysis method based on deep learning |
CN110941277A (en) * | 2019-12-13 | 2020-03-31 | 华南智能机器人创新研究院 | Trolley route planning method and system |
CN110989597A (en) * | 2019-12-05 | 2020-04-10 | 南京理工大学 | Adaptive path tracking method of integrated fuzzy neural network |
CN111091712A (en) * | 2019-12-25 | 2020-05-01 | 浙江大学 | Traffic flow prediction method based on cyclic attention dual graph convolution network |
CN111474953A (en) * | 2020-03-30 | 2020-07-31 | 清华大学 | Multi-dynamic-view-angle-coordinated aerial target identification method and system |
CN111540198A (en) * | 2020-04-17 | 2020-08-14 | 浙江工业大学 | Urban traffic situation recognition method based on directed graph convolution neural network |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |