CN114964272A - Vehicle track map matching method fusing vehicle-mounted image semantics - Google Patents

Vehicle track map matching method fusing vehicle-mounted image semantics Download PDF

Info

Publication number
CN114964272A
CN114964272A CN202210491343.7A CN202210491343A CN114964272A CN 114964272 A CN114964272 A CN 114964272A CN 202210491343 A CN202210491343 A CN 202210491343A CN 114964272 A CN114964272 A CN 114964272A
Authority
CN
China
Prior art keywords
candidate
point
track
vehicle
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210491343.7A
Other languages
Chinese (zh)
Inventor
李伯钊
蔡忠亮
蒋子捷
王孟琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202210491343.7A priority Critical patent/CN114964272A/en
Publication of CN114964272A publication Critical patent/CN114964272A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a vehicle track map matching method fusing vehicle-mounted image semantics, which firstly provides a hidden Markov model-based multi-path output algorithm, and can extract a plurality of candidate tracks including a real vehicle running track according to the space-time characteristics of current track data; secondly, constructing a convolutional neural network model, extracting semantic information for identifying a vertical parallel path and a horizontal parallel path from the vehicle-mounted image, and designing a quantification method of vehicle-mounted image semantics; and finally, based on an entropy weight method, realizing comprehensive evaluation of the candidate track through the quantized vehicle-mounted image semantics, and extracting the maximum likelihood candidate track as a map matching result of the vehicle track. The method promotes the effective fusion of the vehicle-mounted image and the map matching algorithm, can effectively improve the utilization rate and the value of the high-precision map crowdsourcing data, and provides method support for the position correction of the high-precision map crowdsourcing vehicle track data and the high timeliness updating of the high-precision map.

Description

Vehicle track map matching method fusing vehicle-mounted image semantics
Technical Field
The invention relates to a vehicle track map matching method fusing vehicle-mounted image semantics, and belongs to the technical field of high-precision map crowdsourcing mode data updating.
Background
The high-precision map is one of infrastructure systems of future intelligent automobiles, and can provide functions of high-precision positioning information, auxiliary environment perception, path decision planning, cloud vehicle-road cooperative interaction and the like for an automatic driving automobile. And crowd-sourced mode data updating is a necessary way for realizing high-precision map real-time updating. The vehicle-mounted image and the vehicle trajectory data acquired by the crowd-sourced vehicle can be used for acquiring and updating static road data such as road elements and affiliated facilities. At present, the crowdsourcing mode of the high-precision map is low in updating automation degree due to the fact that positioning accuracy of crowdsourcing track data is low and data quality is poor, and the requirements of high-precision and high-timeliness data updating cannot be met. The map matching algorithm is an effective method for realizing the position correction of the track data, is commonly used for online vehicle navigation and offline data correction, and is also one of the core algorithms for the position correction of the high-precision map crowd-sourced track data. The map matching algorithm only fuses the space-time characteristics of the track data, and solves the problem on the premise of assuming the shortest vehicle running or the best path, so that the map matching algorithm is a solution under an ideal condition, and the requirements of high-precision map data acquisition and updating on the track position precision are difficult to meet. Considering that the vehicle-mounted image can reflect the driving environment information of the vehicle and can be used for deducing the complex driving behavior of the vehicle and the specific driving position of the vehicle in a complex road scene, how to effectively fuse the semantic information of the vehicle-mounted image in the map matching algorithm and break through the precision bottleneck of the map matching algorithm in the crowdsourcing track data correction is the main problem solved by the invention.
Disclosure of Invention
The invention aims to provide a vehicle track map matching method fusing vehicle-mounted image semantics, which realizes high-precision position correction of vehicle track data on the basis of fully utilizing track data space-time characteristics and vehicle-mounted image semantics characteristics acquired by crowdsourcing vehicles on a high-precision map.
The invention provides a vehicle track map matching method fusing vehicle-mounted image semantics, which comprises the following steps: firstly, a multi-path output algorithm based on a hidden Markov model is constructed, a plurality of candidate tracks which are possible to run in the current track data can be extracted, and the recall rate of the real running track of the vehicle is improved; secondly, semantic information for identifying the vertical parallel path and the horizontal parallel path is extracted from the vehicle-mounted image, and a quantification method of vehicle-mounted image semantics is designed; and finally, based on an entropy weight method, realizing comprehensive evaluation of the candidate track by fusing the quantized vehicle-mounted image semantic information, and extracting the candidate track with the maximum likelihood as a map matching result of the vehicle track. The method promotes the effective fusion of the vehicle-mounted image and the map matching algorithm, provides a new idea for breaking through the precision bottleneck of the map matching algorithm, can effectively improve the utilization rate and the value of the high-precision map crowdsourcing data, and provides method support for the position correction of the high-precision map crowdsourcing vehicle track data and the high timeliness updating of the high-precision map. The technical scheme of the invention is a vehicle track map matching method fusing vehicle-mounted image semantics. The method mainly comprises the following steps:
step 1, solving a plurality of candidate tracks which may be driven by the current track based on a multi-path output algorithm of a hidden markov model, which may be specifically subdivided into the following steps (as shown in fig. 1):
step 1.1, vehicle track data are preprocessed, vehicle tracks are firstly grouped according to vehicle id, then the vehicle track data are sequenced according to acquisition time, linear speed is calculated according to linear distance and time interval between adjacent original track records, abnormal points are screened by taking the maximum speed limit of a city of 120km/h as a threshold, and track data are interrupted at the positions where the abnormal points appear.
Step 1.2, preprocessing urban road data, firstly, ensuring that a condition that one node only relates to two roads does not exist in road network data, and combining the two roads into the same road if the node only relates to two roads; secondly, caching spatial information of road sections in the road data by adopting an R-tree spatial index, and caching attribute information of nodes, roads, road sections and the like in the road data by adopting a red-black tree attribute index so as to meet the requirement of performing spatial and attribute retrieval on the data by a convenient method; in addition, a weighted directed graph of the A-shortest path algorithm is constructed through the nodes and the road data.
Step 1.3, sequentially traversing each track point, and entering step 1.9 if the next track point does not exist;
step 1.4, acquiring a candidate road section of the current track point, taking the current track point as a center, taking a certain range as a radius, inquiring all road sections in the range as the candidate road section of the current track point through the spatial index constructed in the step 1.2, firstly judging an included angle between the driving direction of the track point and the driving direction of the candidate road section, and abandoning the current candidate road section if the angle exceeds 60 degrees; for each candidate road section meeting the conditions, calculating a point-line relation function between the current track point and the candidate road section according to a formula (1); calculating candidate points of the current track point on the candidate road section according to a formula (2);
Figure BDA0003631083420000021
x=x 1 +r*(x 2 -x 1 );y=y 1 +r*(y 2 -y 1 ) (2)
wherein A and B are the starting point and the ending point of the road section, (x) 1 ,y 1 ) And (x) 2 ,y 2 ) Respectively as the coordinates of a starting point and a stopping point, P is the current track point, and (x, y) is the coordinates of the current track point;
step 1.5, screening and grouping the candidate road sections, if the point-line relation function value r calculated in the step 1.4 belongs to [0,1], adding the candidate points on the current candidate road section to a candidate set, and recording the road where the current candidate points are located to a candidate road ID set; if r ∈ [ -epsilon, 0 ∈ U (1,1+ epsilon ] and the distance between the trajectory point and the candidate point is smaller than a certain threshold, adding the candidate point on the current candidate road section to the candidate set, wherein epsilon is an allowable range value of the point-line relation function; other candidate points will be discarded.
Step 1.6, selecting key candidate points in the candidate set to the candidate set, if all road IDs associated with a certain candidate point in the candidate set do not appear in the candidate road ID set in the step 1.5, adding the current candidate point to the candidate set, and adding all road IDs associated with the current candidate point to the candidate road ID set; if all the associated road IDs appear in the candidate road ID set, comparing the distances between the two candidate points and the track points, only keeping one candidate point with the shortest distance in the candidate set, and moving the other candidate point to the candidate set; the calculation method of the associated road is as follows: all candidate points in the candidate set are end points of the road sections, and the road IDs of all the road sections associated with the end points are the associated road IDs of the current candidate points;
step 1.7, calculating the observation probability and the transition probability of all current candidate points (Viterbi algorithm), which may be specifically subdivided into the following steps (as shown in fig. 2):
step 1.7.1, judging whether a candidate set and an alternative set recorded by the preamble track point exist, if the candidate set and the alternative set recorded by the preamble track point do not exist, skipping to step 1.7.2, otherwise skipping to step 1.7.3;
step 1.7.2, calculating the observation probability of all candidate points in the current track point candidate set and the candidate set, as detailed in formula (3), taking the observation probability as the overall probability of the candidate points, adding the candidate points with the overall probability scores into the candidate result set and the candidate result set, and then jumping to step 1.7.8; the observation probability calculation formula is as follows:
Figure BDA0003631083420000031
wherein the content of the first and second substances,
Figure BDA0003631083420000032
representing the ith original track point p i The (j) th candidate point of (a),
Figure BDA0003631083420000033
the probability of the spatial position of the vehicle track point is shown in a formula (4);
Figure BDA0003631083420000034
the probability of the driving direction of the vehicle track point is shown in formula (5);
Figure BDA0003631083420000035
as the trajectory of the vehicleThe probability of the relationship between the point and the candidate road section, as shown in formula (6);
the distance between the vehicle track point and the candidate point meets the normal distribution, and the spatial position probability function F of the vehicle track point d The definition is as follows:
Figure BDA0003631083420000036
wherein the content of the first and second substances,
Figure BDA0003631083420000037
the jth candidate point representing the ith original trace point,
Figure BDA0003631083420000038
represents the distance, μ, between the ith original trace point and the candidate point on its jth candidate path d And σ d Respectively representing the mean and standard deviation of the distances;
the included angle between the vehicle driving direction and the candidate road section direction also satisfies the normal distribution, and the probability function F of the vehicle driving direction θ The definition is as follows:
Figure BDA0003631083420000039
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA00036310834200000310
representing the ith original track point p i The (j) th candidate point of (a),
Figure BDA00036310834200000311
represents the angle between the driving direction of the ith original track point and the direction of the jth candidate road section, mu θ And σ θ Respectively representing the mean and standard deviation of the angle;
the probability of the relationship between the vehicle trajectory point and the candidate road segment is defined as follows:
Figure BDA0003631083420000041
wherein the content of the first and second substances,
Figure BDA0003631083420000042
the jth candidate point representing the ith original track point pi,
Figure BDA0003631083420000043
and representing a point-line relation function value between the ith original track point and the jth candidate road section.
Step 1.7.3, calculating the overall probability of all candidate point pairs between the current track point candidate set and the preorder track point candidate set, which is detailed in a formula (7); judging whether the ratio of the shortest path length between the candidate point pairs to the track point to the linear distance exceeds a certain threshold value, if so, adding the candidate points to a candidate result set, otherwise, adding the candidate points to a temporary result set; and taking the road ID chain of the candidate point path as a unique identifier, and when candidate points are added to the result set, if candidate points with the same identifier appear, only one item with the highest probability score is stored in the result set.
Figure BDA0003631083420000044
Wherein
Figure BDA0003631083420000045
And (3) representing the integral probability score of the tth candidate point of the ith track point, and according to the description of the step 1.7.2, knowing:
Figure BDA0003631083420000046
Figure BDA0003631083420000047
for the observation probability, the definition is detailed in step 1.7.2;
Figure BDA0003631083420000048
as candidate points
Figure BDA00036310834200000410
To the candidate point
Figure BDA00036310834200000411
The transition probability between them is defined in the formula (8).
Figure BDA00036310834200000412
Wherein the content of the first and second substances,
Figure BDA00036310834200000413
the probability of the shortest path between candidate points for the vehicle is shown in formula (9);
Figure BDA00036310834200000414
the probability of the traveling speed between candidate points of the vehicle is shown in formula (10);
the probability of the straight-line distance between the shortest path between the front candidate point pair and the rear candidate point pair of the vehicle and the track point is defined as follows:
Figure BDA00036310834200000415
wherein, dis (p) i-1 ,p i ) Representing original track points p i-1 And p i The straight-line distance between the two,
Figure BDA00036310834200000429
representing the ith original track point p i The t-th candidate point of (a),
Figure BDA00036310834200000416
representing candidate points
Figure BDA00036310834200000417
And
Figure BDA00036310834200000418
the shortest path distance between them,min and max are functions that find the minimum and maximum values, respectively.
The probability of the traveling speed between the pair of candidate points before and after the vehicle is defined as follows:
Figure BDA00036310834200000419
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA00036310834200000420
representing the ith original track point p i The t-th candidate point of (a),
Figure BDA00036310834200000421
representing the candidate points
Figure BDA00036310834200000422
To the candidate point
Figure BDA00036310834200000423
The probability of travel speed in between;
Figure BDA00036310834200000424
represents the slave candidate point
Figure BDA00036310834200000425
To the candidate point
Figure BDA00036310834200000426
V is calculated by dividing the shortest path value between two candidate points by the travel time between two original trajectory points u Represents the maximum speed limit of the current driving road u, and k represents a candidate point
Figure BDA00036310834200000427
And
Figure BDA00036310834200000428
the number of the sections of the shortest path between the two nodes.
Step 1.7.4, calculating the overall probability of all candidate point pairs between the current track point candidate set and the preorder track point candidate set, wherein the calculation formula is detailed in step 1.7.3; judging whether the ratio of the shortest path length between the candidate point pairs to the track point to the linear distance exceeds a certain threshold value, if so, adding the candidate points to the candidate result set, otherwise, adding the candidate points to the temporary result set; and taking the road ID chain of the candidate point path as a unique identifier, and when candidate points are added to the result set, if candidate points with the same identifier appear, only one item with the closest matching distance is stored in the result set.
Step 1.7.5, obtaining the associated roads of all candidate points in the current track point candidate set and the candidate result set, with the identifier of AR 1 ,AR 2 Judging whether all the associated roads of the candidate set appear in the candidate result set, i.e.
Figure BDA0003631083420000051
If yes, go directly to step 1.7.6; otherwise, counting the ID of the non-appeared associated road, and identifying by AR, wherein AR belongs to AR 2 -AR 1 Calculating the overall probability of all candidate point pairs between the current track point candidate set and the preorder track point candidate set, wherein the calculation formula is detailed in step 1.7.3, selecting candidate points with intersection between the associated roads and the non-appeared associated roads from the calculation result, and adding the candidate points into the candidate result set; if candidate points with the same identity appear, only the one with the highest probability score is saved in the candidate result set.
Step 1.7.6, obtaining again the associated roads of all candidate points in the current candidate result set, and marking as AR 2 Simultaneously acquiring the associated roads of all candidate points in the preorder track point candidate set, and marking as AR 3 Judging whether all the associated roads in the pre-sequence track point candidate set appear in the candidate result set, namely
Figure BDA0003631083420000052
If yes, go directly to step 1.7.7; otherwise, counting the ID of the non-appeared associated road, and identifying by AR, wherein AR belongs to AR 2 -AR 3 And selecting the associated road from the candidate result setMoving the candidate points to a candidate result set from the candidate result set by the candidate points with intersection with the non-appeared associated roads; if candidate points with the same identification appear, only one item with the closest map matching distance is saved in the candidate result set.
Step 1.7.7, if the candidate result set is not empty, directly jumping to step 1.7.8; otherwise, all the data in the temporary result set are added into the candidate result set;
step 1.7.8, returning the candidate result and the candidate result set of the current track point as results;
step 1.8, if the candidate result set and the alternative result set returned in the Viterbi algorithm are empty, then the step 1.9 is carried out, otherwise, the candidate result set and the alternative result set of the current track point are recorded and are taken as a preamble candidate set and a preamble alternative set of the next track point, and then the step 1.3 is carried out;
step 1.9, recursion solving and candidate point screening are carried out, for each candidate point reserved by the current track point, the preorder candidate points are recurred in sequence, and the obtained candidate point chain is turned over, so that the candidate track of the current candidate point path can be obtained; for each candidate track, filtering all candidate points by taking a path road ID chain between a second candidate point and a penultimate candidate point as a unique identifier, and if two candidate points have the same identifier, only keeping one candidate point with the highest probability score;
step 1.10, track splicing, wherein if a result set is empty, all candidate tracks solved currently are added to the result set; otherwise, splicing the candidate track currently solved and the track stored in the result set: taking the candidate track solved currently as a reference, searching a candidate track of which the last candidate point and the first candidate point of the current candidate track are on the same path in the result set, and splicing the candidate track with the current candidate track; if the situation does not exist, selecting a candidate track with the shortest distance between the last candidate point and the first candidate point of the current candidate track in the result set, and splicing the candidate track with the current candidate track; the result after the splicing is reserved is a multi-path output result of the current track point solution;
step 1.11, if trace points still exist and are not traversed, clearing the candidate result set and the alternative result set recorded currently, and turning to step 1.3; otherwise, the solving of the multi-path output algorithm is completed, and the step 2 is switched to.
Step 2, semantic extraction and quantization of the vehicle-mounted image, which can be specifically subdivided into the following steps (step 1-3 detailed flow chart, as shown in fig. 3):
step 2.1, extracting and quantifying road scene semantics, and carrying out scene mode classification on the vehicle-mounted image by adopting a convolutional neural network: 4 types of tunnels and viaducts are not shielded, viaducts are shielded and are not shielded; the method for quantizing the road scene semantics comprises the following steps:
Figure BDA0003631083420000061
wherein, alpha belongs to [0,1] is the confidence coefficient of vehicle-mounted image semantic recognition, and the condition that the road scene is consistent with the candidate road attribute comprises the following steps: both the image and the road scene are the case of a tunnel; the image is the situation that the viaduct is shielded and the road scene is non-viaduct and tunnel; the image is an viaduct non-occlusion or non-occlusion, if no viaduct exists in the road scenes of all candidate points of the current track point, the attributes of all non-tunnel roads are considered to be consistent with the road scene of the current image, if the viaduct non-occlusion scene exists, the attributes of the viaduct non-occlusion scene are consistent with the attributes of all non-tunnel roads, and the attributes of the non-occlusion scene are consistent with the attributes of the viaduct road. The score of each candidate track in the aspect of road scene semantics is equal to the sum of the scores of all candidate points on the candidate track in the road scene semantics;
and 2.2, semantically extracting and quantifying lane numbers, adopting a convolutional neural network to identify the lane numbers in the vehicle-mounted images (ignoring lanes on the left side of a yellow solid line and non-motor lanes on the right side), defaulting the road without lane line identification to be 1 lane, and artificially correcting the vehicle-mounted image identification results and the urban road attributes of more than 4 lanes to be 4 lanes. The method for quantizing the semantic meanings of the number of the lanes comprises the following steps:
Figure BDA0003631083420000062
wherein, alpha is ∈ [0,1]]Confidence for semantic recognition of vehicle images, y p ,y r Respectively the number of lanes extracted from the corrected vehicle-mounted image and the number of lanes recorded in the urban road attribute, wherein max and abs are respectively functions for calculating a maximum value and an absolute value; the score of each candidate track in the lane number semantic aspect is equal to the sum of the scores of all candidate points on the candidate track in the lane number semantic aspect;
step 2.3, extracting and quantifying the guide line semanteme, and identifying the guide line appearing in the vehicle-mounted image by adopting a convolutional neural network, wherein the identification content comprises the following steps: there are no guide lines, the guide lines appear on the left side of the vehicle, the guide lines appear on the right side of the vehicle, the guide lines appear on the two sides of the vehicle, and the like. The flow guide line semantic quantization method comprises the following steps:
Figure BDA0003631083420000071
wherein, alpha is ∈ [0,1]]If the current vehicle-mounted image has no guide line, the scores of all candidate points on the current track point are alpha; if the guide lines appear in the vehicle-mounted image, selecting 5 continuous candidate points on each candidate track by taking each candidate point of the current track point as a center to form a track chain, and grouping the track chains with overlapping by adopting a data structure of a parallel search set; after grouping, if only one track chain in the group exists, the score of the candidate track on the current track point is 0.8 alpha; for a scene with multiple trajectory chains in the group, sequencing the trajectory chains according to the positions of the guide lines, for example: if the current diversion line appears on the left side of the vehicle, the vehicle is proved to run near the right side, the track near the right in the same group is divided into alpha, and other tracks are sequentially decreased by 0.2 alpha from the right to the left; if the diversion lines appear on the two sides of the vehicle, one or two candidate tracks in the middle are divided into alpha, and the other tracks are sequentially decreased by 0.2 alpha from the center to the two sides. Formula (13)In c is j Represents the number of trace chains in the jth grouping and i represents the order of the trace chains in the grouping.
And 3, evaluating an entropy weight method and solving a map matching result, wherein the entropy weight method can be specifically subdivided into the following steps:
step 3.1, data normalization:
Figure BDA0003631083420000072
wherein x i,j Representing the jth vehicular image semantic score, min, of the ith candidate trajectory i=1 ...n x i,j And max i=1 ...n x i,j Respectively representing the maximum value and the minimum value of a certain type of vehicle-mounted image semantics in all candidate tracks.
Step 3.2, calculating the proportion of each type of vehicle-mounted image semantics in each candidate track in corresponding indexes of all candidate tracks:
Figure BDA0003631083420000073
step 3.3, entropy calculation, namely calculating the entropy of each type of vehicle-mounted image semantic score:
Figure BDA0003631083420000074
step 3.4, information entropy redundancy calculation, namely calculating the information entropy redundancy of each type of vehicle-mounted image semantics:
d j =1-e j (17)
step 3.5, weight calculation, namely calculating the weight of each vehicle-mounted image semantic:
Figure BDA0003631083420000081
step 3.6, calculating the score of each candidate track, selecting one candidate track with the highest overall score as a map matching result:
Figure BDA0003631083420000082
compared with the prior art, the invention has the following advantages and beneficial effects: the invention firstly provides a multi-path output algorithm based on a hidden Markov model, which can extract a plurality of candidate tracks including the real driving track of a vehicle according to the space-time characteristics of current track data; secondly, constructing a convolutional neural network model, extracting semantic information for identifying a vertical parallel path and a horizontal parallel path from the vehicle-mounted image, and designing a quantification method of vehicle-mounted image semantics; and finally, based on an entropy weight method, realizing comprehensive evaluation of the candidate track through the quantized vehicle-mounted image semantics, and extracting the maximum likelihood candidate track as a map matching result of the vehicle track. On the basis of fully utilizing track data space-time characteristics and vehicle-mounted image semantic characteristics acquired by a high-precision map crowdsourcing vehicle, the high-precision position correction of vehicle track data is realized, the effective fusion of a vehicle-mounted image and a map matching algorithm is promoted, a new idea is provided for breaking through the precision bottleneck of the map matching algorithm, the utilization rate and the value of the high-precision map crowdsourcing data can be effectively improved, and method support is provided for the position correction of the high-precision map crowdsourcing vehicle track data and the high timeliness updating of a high-precision map.
Drawings
FIG. 1: a multipath output algorithm logic flow diagram.
FIG. 2: a Viterbi algorithm logic flow diagram.
FIG. 3: and a detailed flow chart of the vehicle track map matching method fusing vehicle-mounted image semantics.
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
Taking the crowdsourcing data of the high-precision map in Wuhan City as an example, the specific implementation flow comprises the following steps:
step 1, vehicle track data preprocessing: the method comprises the steps of firstly grouping vehicle tracks according to vehicle id, then sequencing vehicle track data according to acquisition time, calculating linear speed according to linear distance and time interval between adjacent original track records, screening abnormal points by taking the maximum speed limit of 120km/h as a threshold value, and breaking track data at the positions where the abnormal points appear.
Step 2, navigation road network data preprocessing: the method comprises the steps of adopting a red-black tree to construct attribute indexes for information such as roads, road sections and nodes, adopting an R tree to construct a spatial index for the information of the road sections, and then adopting nodes and road data to construct a weighted directed graph for an A shortest path algorithm.
Step 3, track point traversal: if the next track point exists, executing the step 4; if no next track point exists, executing step 9;
step 4, acquiring candidate road sections: sequentially traversing each track point, taking the current track point as a center, taking a certain range as a radius, inquiring all road sections in the range as candidate road sections of the current track point, firstly judging an included angle between the driving direction of the track point and the driving direction of the candidate road sections, and abandoning the current candidate road sections if the angle exceeds 60 degrees; and for each candidate road section meeting the conditions, calculating a point-line relation function between the current track point and the candidate road section according to a formula (1), and further solving candidate points of the current track point on the candidate road section according to a formula (2).
Figure BDA0003631083420000091
x=x 1 +r*(x 2 -x 1 );y=y 1 +r*(y 2 -y 1 ) (2)
Wherein A and B are the starting point and the ending point of the road section, (x) 1 ,y 1 ) And (x) 2 ,y 2 ) Respectively as the coordinates of a starting point and a stopping point, P is the current track point, and (x, y) is the coordinates of the current track point;
and 5, screening and grouping the candidate road sections: if the calculated point-line relation function value r belongs to [0,1], adding a candidate point on the current candidate road section to a candidate set, and recording a road where the current candidate point is located to a candidate road ID set; if r ∈ [ -epsilon, 0 ∈ U (1,1+ epsilon ] and the distance between the trajectory point and the candidate point is smaller than a certain threshold, adding the candidate point on the current candidate road section to the candidate set; other candidate points will be discarded.
Step 6, candidate point screening of the candidate set: if all road IDs associated with a certain candidate point in the candidate set do not appear in the associated road ID sets of all candidate points in the candidate set, adding the current candidate point to the candidate set, and adding all road IDs associated with the current candidate point to the candidate road ID sets; if all the associated road IDs appear in the candidate road ID set, comparing the distances between the candidate points and the track points, only keeping one candidate point with the shortest distance in the candidate set, and moving the other candidate point to the candidate set;
step 7, Viterbi algorithm: the method can be particularly subdivided into the following steps:
step 7.1, judging whether a candidate set and an alternative set recorded by the preorder track point exist, wherein the step 7.2 is not executed, and the step 7.3 is executed;
step 7.2, calculating the observation probability of all candidate points in the current track point candidate set and the candidate set according to the formula (3), taking the observation probability as the overall probability of the candidate points, and adding the candidate points with the overall probability scores into the candidate result set and the candidate result set; then step 7.8 is performed;
Figure BDA0003631083420000092
wherein the content of the first and second substances,
Figure BDA0003631083420000093
representing the ith original track point p i The j-th candidate point of (a),
Figure BDA0003631083420000094
the probability of the spatial position of the vehicle track point is shown in formula (4);
Figure BDA0003631083420000095
the probability of the driving direction of the vehicle track point is shown in a formula (5);
Figure BDA0003631083420000096
the relation probability between the vehicle track point and the candidate road section is shown in formula (6);
the distance between the vehicle track point and the candidate point meets the normal distribution, and the spatial position probability function F of the vehicle track point d The definition is as follows:
Figure BDA0003631083420000101
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003631083420000102
the jth candidate point representing the ith original trace point,
Figure BDA0003631083420000103
represents the distance, μ, between the ith original trace point and the candidate point on its jth candidate path d And σ d Respectively representing the mean and standard deviation of the distances;
the included angle between the vehicle driving direction and the candidate road section direction also satisfies the normal distribution, and the probability function F of the vehicle driving direction θ The definition is as follows:
Figure BDA0003631083420000104
wherein the content of the first and second substances,
Figure BDA0003631083420000105
representing the ith original track point p i The (j) th candidate point of (a),
Figure BDA0003631083420000106
representing the driving direction of the ith original track point and the direction of the jth candidate road sectionAngle between directions, mu θ And σ θ Respectively representing the mean and standard deviation of the angle;
the probability of the relationship between the vehicle trajectory point and the candidate road segment is defined as follows:
Figure BDA0003631083420000107
wherein the content of the first and second substances,
Figure BDA0003631083420000108
representing the ith original track point p i The (j) th candidate point of (a),
Figure BDA0003631083420000109
and representing a point-line relation function value between the ith original track point and the jth candidate road section.
Step 7.3, calculating the overall probability of all candidate point pairs between the current track point candidate set and the preorder track point candidate set according to the formula (7), judging whether the ratio of the shortest path length between the candidate point pairs to the linear distance of the track points exceeds a certain threshold value, if so, adding the candidate points to the candidate result set, otherwise, adding the candidate points to the temporary result set; taking a road ID chain of a candidate point path as a unique identifier, and when candidate points are added to a result set, if the candidate points with the same identifier appear, only storing one item with the highest probability score in the result set;
Figure BDA00036310834200001010
wherein
Figure BDA00036310834200001011
And (3) representing the integral probability score of the tth candidate point of the ith track point, and according to the description in the step 7.2, knowing:
Figure BDA00036310834200001012
Figure BDA00036310834200001013
for the observation probability, the definition is detailed in step 7.2;
Figure BDA00036310834200001014
as candidate points
Figure BDA00036310834200001015
To the candidate point
Figure BDA00036310834200001016
The definition of the transition probability is shown in the formula (8).
Figure BDA00036310834200001017
Wherein the content of the first and second substances,
Figure BDA00036310834200001018
the probability of the shortest path between candidate points for the vehicle is shown in equation (9);
Figure BDA00036310834200001019
the probability of the traveling speed between candidate points of the vehicle is shown in formula (10);
the probability of the straight-line distance between the shortest path between the front candidate point pair and the rear candidate point pair of the vehicle and the track point is defined as follows:
Figure BDA0003631083420000111
wherein, dis (p) i-1 ,p i ) Representing original track points p i-1 And p i The straight-line distance between the two,
Figure BDA0003631083420000112
representing the ith original track point p i The t-th candidate point of (a),
Figure BDA0003631083420000113
representing waiting timePoint selection
Figure BDA0003631083420000114
And
Figure BDA0003631083420000115
the shortest path distance between min and max is a function for finding the minimum and maximum values, respectively.
The probability of the traveling speed between the pair of candidate points before and after the vehicle is defined as follows:
Figure BDA0003631083420000116
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003631083420000117
representing the ith original track point p i The t-th candidate point of (a),
Figure BDA0003631083420000118
representing the candidate points
Figure BDA0003631083420000119
To the candidate point
Figure BDA00036310834200001110
The probability of travel speed in between;
Figure BDA00036310834200001111
represents the slave candidate point
Figure BDA00036310834200001112
To the candidate point
Figure BDA00036310834200001113
V is calculated by dividing the shortest path value between two candidate points by the travel time between two original trajectory points u Represents the maximum speed limit of the current driving road u, and k represents a candidate point
Figure BDA00036310834200001114
And
Figure BDA00036310834200001115
the number of the road sections of the shortest path between the two road sections.
Step 7.4, calculating the overall probability of all candidate point pairs between the current track point candidate set and the preorder track point candidate set, judging whether the ratio of the shortest path length between the candidate point pairs to the linear distance of the track points exceeds a certain threshold value, if so, adding the candidate points to the candidate result set, otherwise, adding the candidate points to the temporary result set; and taking the road ID chain of the candidate point path as a unique identifier, and when candidate points are added to the result set, if candidate points with the same identifier appear, only one item with the closest matching distance is stored in the result set.
Step 7.5, obtaining the associated roads of all candidate points in the current track point candidate set and the candidate result set, wherein the identifier is AR 1 ,AR 2 Judging whether all the associated roads of the candidate set appear in the candidate result set, i.e.
Figure BDA00036310834200001116
If yes, directly executing step 7.6; otherwise, counting the ID of the non-appeared associated road, and identifying by AR, wherein AR belongs to AR 2 -AR 1 Calculating the overall probability of all candidate point pairs between the current track point candidate set and the preorder track point candidate set, selecting candidate points with intersection between the associated roads and the non-appeared associated roads from the calculation result, and adding the candidate points into the candidate result set; if candidate points with the same identity appear, only the one with the highest probability score is saved in the candidate result set.
Step 7.6, obtaining the associated roads of all candidate points in the current candidate result set again, and marking as AR 2 Simultaneously acquiring the associated roads of all candidate points in the preorder track point candidate set, and marking as AR 3 Judging whether all the associated roads in the pre-sequence track point candidate set appear in the candidate result set, namely
Figure BDA00036310834200001117
If yes, directly executing step 7.7; otherwise, counting the ID of the non-appeared associated road, and identifying by AR, wherein AR belongs to AR 2 -AR 3 Selecting candidate points with intersection between the associated roads and the non-appeared associated roads from the candidate result set, and moving the candidate points from the candidate result set to the candidate result set; if candidate points with the same identification appear, only one item with the closest map matching distance is saved in the candidate result set.
Step 7.7, if the candidate result set is not empty, directly executing step 7.8; otherwise, all the data in the temporary result set are added into the candidate result set;
7.8, returning the candidate result and the candidate result set of the current track point as results;
step 8, recording the Viterbi result: if the candidate result set and the alternative result set returned in the Viterbi algorithm are empty, executing step 9, otherwise, recording the candidate result set and the alternative result set of the current track point, taking the candidate result set and the alternative result set as a preamble candidate set and a preamble alternative set of the next track point, and executing step 3;
step 9, recursion solving and candidate point screening: for each candidate point reserved by the current track point, sequentially recursing the preorder candidate points of the candidate point, and turning over the obtained candidate point chain to obtain a candidate track of the current candidate point path; for each candidate track, taking a path road ID chain from the second candidate point to the penultimate candidate point as a unique identifier, filtering all candidate points, and if two candidate points have the same identifier, only keeping one candidate point with the highest probability score;
step 10, track splicing: if the result set is empty, adding all candidate tracks solved currently to the result set; otherwise, splicing the candidate track currently solved and the track stored in the result set: taking the candidate track solved currently as a reference, searching a candidate track of which the last candidate point and the first candidate point of the current candidate track are on the same path in the result set, and splicing the candidate track with the current candidate track; if the situation does not exist, selecting a candidate track with the shortest distance between the last candidate point and the first candidate point of the current candidate track in the result set, and splicing the candidate track with the current candidate track; the result after the splicing is reserved is a multi-path output result of the current track point solution;
step 11, judging a multipath output algorithm: if the track points still exist and are not traversed, clearing the candidate result set and the alternative result set of the current record, and turning to the step 3; otherwise, the multi-path output algorithm is solved and the step 12 is carried out.
Step 12, semantic extraction and quantification of road scenes: marking road scene semantic information on the acquired Baidu panorama data, and training a convolutional neural network based on the marked panorama data; extracting the road scene semantics of each vehicle-mounted image by adopting a trained convolutional neural network model, quantifying the road scene semantics, and taking the sum of the scores of all candidate points on the candidate tracks on the road scene semantics as the total score of the current candidate tracks on the road scene semantics; the vehicle-mounted image scene modes are divided into 4 types, namely, a tunnel, a viaduct without shielding, and the like; the method for quantizing the road scene semantics comprises the following steps:
Figure BDA0003631083420000121
wherein, alpha belongs to [0,1] is the confidence coefficient of vehicle-mounted image semantic recognition, and the condition that the road scene is consistent with the candidate road attribute comprises the following steps: both the image and the road scene are the case of a tunnel; the image is the situation that the viaduct is shielded and the road scene is non-viaduct and tunnel; the image is an viaduct non-occlusion or non-occlusion, if no viaduct exists in the road scenes of all candidate points of the current track point, the attributes of all non-tunnel roads are considered to be consistent with the road scene of the current image, if the viaduct non-occlusion scene exists, the attributes of the viaduct non-occlusion scene are consistent with the attributes of all non-tunnel roads, and the attributes of the non-occlusion scene are consistent with the attributes of the viaduct road. The score of each candidate track in the aspect of road scene semantics is equal to the sum of the scores of all candidate points on the candidate track in the road scene semantics;
step 13, semantic extraction and quantification of lane numbers: performing lane number semantic information labeling on the acquired Baidu panorama data, and training a convolutional neural network based on the labeled panorama data; then, extracting the number of lanes of each vehicle-mounted image by adopting a trained convolutional neural network model, finally quantifying lane number semantics, and taking the sum of scores of all candidate points on a candidate track in the lane number semantics as the total score of the current candidate track in the lane number semantics; during quantification, a road without lane line marks is defaulted to be 1 lane, and the vehicle-mounted image recognition result and the urban road attribute of more than 4 lanes are artificially corrected to be 4 lanes. The method for quantizing the semantic meanings of the number of the lanes comprises the following steps:
Figure BDA0003631083420000131
wherein, alpha is ∈ [0,1]]Confidence, y, for semantic recognition of vehicle images p ,y r The number of lanes extracted from the corrected vehicle-mounted image and the number of lanes recorded in the urban road attribute are respectively, and max and abs are respectively functions for calculating a maximum value and an absolute value; the score of each candidate track in the lane number semantic aspect is equal to the sum of the scores of all candidate points on the candidate track in the lane number semantic aspect;
step 14, flow guide line semantic extraction and quantification: carrying out flow guide line semantic information labeling on the acquired Baidu panorama data, and training a convolutional neural network (ignoring lanes on the left side of a yellow solid line and non-motor lanes on the right side) based on the labeled panorama data; then extracting the position of the guide line of each vehicle-mounted image by adopting a trained convolutional neural network model, finally quantizing the guide line semantics, and taking the sum of the scores of all candidate points on the candidate track on the guide line semantics as the total score of the current candidate track on the guide line semantics; the identification content of the diversion line comprises the following steps: there are no guide lines, the guide lines appear on the left side of the vehicle, the guide lines appear on the right side of the vehicle, the guide lines appear on the two sides of the vehicle, and the like. The flow guide line semantic quantization method comprises the following steps:
Figure BDA0003631083420000132
wherein, alpha is ∈ [0,1]]If the current vehicle-mounted image has no guide line, the scores of all candidate points on the current track point are alpha; if the guide line appears in the vehicle-mounted image, selecting 5 continuous candidate points on each candidate track to form a track chain by taking each candidate point of the current track point as a center, and grouping the track chains with overlapping by adopting a data structure of a parallel search set; after grouping, if only one track chain in the group exists, the score of the candidate track on the current track point is 0.8 alpha; for a scene with multiple trajectory chains in the group, the trajectory chains are sorted according to the positions of the guide lines, for example: if the current diversion line appears on the left side of the vehicle, the vehicle is proved to run near the right side, the track near the right in the same group is divided into alpha, and other tracks are sequentially decreased by 0.2 alpha from the right to the left; if the diversion lines appear on the two sides of the vehicle, one or two candidate tracks in the middle are divided into alpha, and the other tracks are sequentially decreased by 0.2 alpha from the center to the two sides. In the formula (13), is c j Represents the number of trace chains in the jth grouping and i represents the order of the trace chains in the grouping.
Step 15, evaluating an entropy weight method and solving a map matching result: normalizing the scores of various vehicle-mounted image semantics; calculating the weight of each type of vehicle-mounted image semantics by adopting an entropy weight method; and then calculating the weighted sum of all vehicle-mounted image semantic normalization scores as the final score of the candidate tracks, and obtaining a candidate track with the highest score as a map matching result.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (7)

1. A vehicle track map matching method fusing vehicle-mounted image semantics is characterized by comprising the following steps:
step 1, solving a plurality of candidate tracks which are possible to run on the current track based on a multi-path output algorithm of a hidden Markov model;
step 2, semantic extraction and quantification of the vehicle-mounted image, specifically comprising the steps of adopting a convolutional neural network to classify the scene mode of the vehicle-mounted image, identifying the number of lanes in the vehicle-mounted image, and identifying the flow guide lines appearing in the vehicle-mounted image;
and 3, based on an entropy weight method, realizing comprehensive evaluation of the candidate tracks through the quantized vehicle-mounted image semantics, and extracting the candidate tracks with the maximum likelihood as the map matching result of the vehicle tracks.
2. The vehicle trajectory map matching method fusing the semantics of the vehicle-mounted image according to claim 1, wherein: the specific implementation manner of the step 1 is as follows:
step 1.1, vehicle track data are preprocessed, vehicle tracks are grouped according to vehicle id, then the vehicle track data are sequenced according to acquisition time, linear speed is calculated according to linear distance and time interval between adjacent original track records, abnormal points are screened by taking the maximum speed limit N km/h of a city as a threshold, and track data are interrupted at the positions where the abnormal points appear;
step 1.2, preprocessing urban road data, firstly, ensuring that a condition that one node only relates to two roads does not exist in road network data, and combining the two roads into the same road if the node only relates to two roads; secondly, caching spatial information of road sections in the road data by adopting an R-tree spatial index, and caching attribute information of nodes, roads, road sections and the like in the road data by adopting a red-black tree attribute index so as to meet the requirement of performing spatial and attribute retrieval on the data by a convenient method; in addition, a weighted directed graph of the A-shortest path algorithm is constructed through the nodes and the road data;
step 1.3, sequentially traversing each track point, and entering step 1.9 if the next track point does not exist;
step 1.4, acquiring a candidate road section of the current track point, inquiring all road sections in the range as the candidate road section of the current track point by taking the current track point as a center and a certain range as a radius through the spatial index constructed in the step 1.2, firstly judging an included angle between the driving direction of the track point and the driving direction of the candidate road section, and discarding the current candidate road section if the angle exceeds M degrees; for each candidate road section meeting the conditions, calculating a point-line relation function between the current track point and the candidate road section according to a formula (1); calculating candidate points of the current track point on the candidate road section according to a formula (2);
Figure FDA0003631083410000011
x=x 1 +r*(x 2 -x 1 );y=y 1 +r*(y 2 -y 1 ) (2)
wherein A and B are the starting point and the ending point of the road section, (x) 1 ,y 1 ) And (x) 2 ,y 2 ) Respectively as the coordinates of a starting point and a stopping point, P is the current track point, and (x, y) is the coordinates of the current track point;
step 1.5, screening and grouping the candidate road sections, if the point-line relation function value r calculated in the step 1.4 belongs to [0,1], adding the candidate points on the current candidate road section to a candidate set, and recording the road where the current candidate points are located to a candidate road ID set; if r ∈ [ -epsilon, 0 ∈ U (1,1+ epsilon ] and the distance between the trajectory point and the candidate point is smaller than a certain threshold, adding the candidate point on the current candidate road section to the candidate set, wherein epsilon is an allowable range value of the point-line relation function; other candidate points will be discarded;
step 1.6, selecting key candidate points in the candidate set to the candidate set, if all road IDs associated with a certain candidate point in the candidate set do not appear in the candidate road ID set in the step 1.5, adding the current candidate point to the candidate set, and adding all road IDs associated with the current candidate point to the candidate road ID set; if all the associated road IDs appear in the candidate road ID set, comparing the distances between the two candidate points and the track points, only keeping one candidate point with the shortest distance in the candidate set, and moving the other candidate point to the candidate set; the calculation method of the associated road is as follows: all candidate points in the candidate set are end points of road sections, and the road IDs of all road sections associated with the end points are the associated road IDs of the current candidate points;
step 1.7, calculating the observation probability and the transition probability of all current candidate points by using a Viterbi algorithm;
step 1.8, if the candidate result set and the alternative result set returned in the Viterbi algorithm are empty, then the step 1.9 is carried out, otherwise, the candidate result set and the alternative result set of the current track point are recorded and are taken as a preamble candidate set and a preamble alternative set of the next track point, and then the step 1.3 is carried out;
step 1.9, recursion solving and candidate point screening, recursion of the preorder candidate points of each candidate point reserved for the current track point, and turning over the obtained candidate point chain to obtain the candidate track of the current candidate point path; for each candidate track, taking a path road ID chain from the second candidate point to the penultimate candidate point as a unique identifier, filtering all candidate points, and if two candidate points have the same identifier, only keeping one candidate point with the highest probability score;
step 1.10, track splicing, wherein if a result set is empty, all candidate tracks solved currently are added to the result set; otherwise, splicing the candidate track solved currently and the track stored in the result set: taking the candidate track solved currently as a reference, searching a candidate track of which the last candidate point and the first candidate point of the current candidate track are on the same path in the result set, and splicing the candidate track with the current candidate track; if the situation does not exist, selecting a candidate track with the shortest distance between the last candidate point and the first candidate point of the current candidate track in the result set, and splicing the candidate track with the current candidate track; the result after the splicing is reserved is a multi-path output result of the current track point solution;
step 1.11, if trace points still exist and are not traversed, clearing the candidate result set and the alternative result set recorded currently, and turning to step 1.3; otherwise, the solving of the multi-path output algorithm is completed, and the step 2 is switched to.
3. The vehicle trajectory map matching method fusing the vehicle-mounted image semantics as claimed in claim 1, characterized in that: the specific implementation of step 1.7 is as follows;
step 1.7.1, judging whether a candidate set and an alternative set recorded by the preamble track point exist, if the candidate set and the alternative set recorded by the preamble track point do not exist, skipping to step 1.7.2, otherwise skipping to step 1.7.3;
step 1.7.2, calculating the observation probability of all candidate points in the current track point candidate set and the candidate set, as detailed in formula (3), taking the observation probability as the overall probability of the candidate points, adding the candidate points with the overall probability scores into the candidate result set and the candidate result set, and then jumping to step 1.7.8; the observation probability calculation formula is as follows:
Figure FDA0003631083410000031
wherein the content of the first and second substances,
Figure FDA0003631083410000032
representing the ith original track point p i The (j) th candidate point of (a),
Figure FDA0003631083410000033
the probability of the spatial position of the vehicle track point is shown in formula (4);
Figure FDA0003631083410000034
the probability of the driving direction of the vehicle track point is shown in formula (5);
Figure FDA0003631083410000035
the relation probability between the vehicle track point and the candidate road section is shown in formula (6);
the distance between the vehicle track point and the candidate point meets the normal distribution, and the spatial position probability function F of the vehicle track point d The definition is as follows:
Figure FDA0003631083410000036
wherein the content of the first and second substances,
Figure FDA0003631083410000037
the jth candidate point representing the ith original trace point,
Figure FDA0003631083410000038
represents the distance, μ, between the ith original trace point and the candidate point on its jth candidate path d And σ d Respectively representing the mean and standard deviation of the distances;
the included angle between the vehicle driving direction and the candidate road section direction also satisfies the normal distribution, and the probability function F of the vehicle driving direction θ The definition is as follows:
Figure FDA0003631083410000039
wherein the content of the first and second substances,
Figure FDA00036310834100000310
representing the ith original track point p i The (j) th candidate point of (a),
Figure FDA00036310834100000311
represents the angle between the driving direction of the ith original track point and the direction of the jth candidate road section, mu θ And σ θ Respectively representing the mean and standard deviation of the angle;
the probability of the relationship between the vehicle trajectory point and the candidate road segment is defined as follows:
Figure FDA00036310834100000312
wherein the content of the first and second substances,
Figure FDA00036310834100000313
representing the ith original track point p i The j-th candidate point of (a),
Figure FDA00036310834100000314
representing a point-line relation function value between the ith original track point and the jth candidate road section;
step 1.7.3, calculating the overall probability of all candidate point pairs between the current track point candidate set and the preorder track point candidate set, which is detailed in a formula (7); judging whether the ratio of the shortest path length between the candidate point pairs to the track point to the linear distance exceeds a certain threshold value, if so, adding the candidate points to a candidate result set, otherwise, adding the candidate points to a temporary result set; taking a road ID chain of a candidate point path as a unique identifier, and when candidate points are added to a result set, if the candidate points with the same identifier appear, only storing one item with the highest probability score in the result set;
Figure FDA0003631083410000041
wherein
Figure FDA0003631083410000042
And (3) representing the integral probability score of the tth candidate point of the ith track point, and according to the description of the step 1.7.2, knowing:
Figure FDA0003631083410000043
for the observation probability, the definition is detailed in step 1.7.2;
Figure FDA0003631083410000044
as candidate points
Figure FDA0003631083410000045
To the candidate point
Figure FDA0003631083410000046
The transition probability is defined in formula (8);
Figure FDA0003631083410000047
wherein the content of the first and second substances,
Figure FDA0003631083410000048
the probability of the shortest path between candidate points for the vehicle is shown in equation (9);
Figure FDA0003631083410000049
the probability of the traveling speed between candidate points of the vehicle is shown in formula (10);
the probability of the straight-line distance between the shortest path between the front candidate point pair and the rear candidate point pair of the vehicle and the track point is defined as follows:
Figure FDA00036310834100000410
wherein, dis (p) i-1 ,p i ) Representing the original track point p i-1 And p i The straight-line distance between the two,
Figure FDA00036310834100000411
representing the ith original track point p i The t-th candidate point of (a),
Figure FDA00036310834100000412
representing candidate points
Figure FDA00036310834100000413
And
Figure FDA00036310834100000414
the shortest path distance between the two, min and max, are functions for solving the minimum and maximum values respectively;
the probability of the traveling speed between the pair of candidate points before and after the vehicle is defined as follows:
Figure FDA00036310834100000415
wherein the content of the first and second substances,
Figure FDA00036310834100000416
representing the ith original track point p i The t-th candidate point of (2),
Figure FDA00036310834100000417
represents the candidate points
Figure FDA00036310834100000418
To the candidate point
Figure FDA00036310834100000419
The probability of travel speed in between;
Figure FDA00036310834100000420
represents the slave candidate point
Figure FDA00036310834100000421
To the candidate point
Figure FDA00036310834100000422
V is calculated by dividing the shortest path value between two candidate points by the travel time between two original trajectory points u Represents the maximum speed limit of the current driving road u, and k represents a candidate point
Figure FDA00036310834100000423
And
Figure FDA00036310834100000424
the number of road sections of the shortest path between the road sections;
step 1.7.4, calculating the overall probability of all candidate point pairs between the current track point candidate set and the preorder track point candidate set, wherein the calculation formula is detailed in step 1.7.3; judging whether the ratio of the shortest path length between the candidate point pairs to the track point to the linear distance exceeds a certain threshold value, if so, adding the candidate points to the candidate result set, otherwise, adding the candidate points to the temporary result set; taking a road ID chain of a candidate point path as a unique identifier, and when candidate points are added to a result set, if candidate points with the same identifier appear, only storing one item with the closest matching distance in the result set;
step 1.7.5, acquiring the associated roads of all candidate points in the current track point candidate set and the candidate result set, judging whether the associated roads of the candidate set all appear in the candidate result set, if yes, directly jumping to step 1.7.6; otherwise, counting the ID of the non-appeared associated road, calculating the overall probability of all candidate point pairs between the current track point candidate set and the pre-sequence track point candidate set, wherein the calculation formula is detailed in step 1.7.3, selecting the candidate point of the intersection of the associated road and the non-appeared associated road from the calculation result, and adding the candidate point into the candidate result set; if candidate points with the same identification appear, only one item with the highest probability score is stored in the candidate result set;
step 1.7.6, obtaining again the associated roads of all candidate points in the current candidate result set, simultaneously obtaining the associated roads of all candidate points in the preamble track point candidate set, judging whether the associated roads of the preamble track point candidate set all appear in the candidate result set, if yes, directly jumping to step 1.7.7; otherwise, counting the ID of the non-appeared associated road, selecting a candidate point of the intersection of the associated road and the non-appeared associated road from the candidate result set, and moving the candidate point from the candidate result set to the candidate result set; if the candidate points with the same identification appear, only one item with the closest map matching distance is stored in the candidate result set;
step 1.7.7, if the candidate result set is not empty, directly jumping to step 1.7.8; otherwise, all the data in the temporary result set are added into the candidate result set;
and step 1.7.8, returning the candidate result and the candidate result set of the current track point as results.
4. The vehicle trajectory map matching method fusing the vehicle-mounted image semantics as claimed in claim 1, characterized in that: adopting a convolutional neural network in the step 2 to classify the scene modes of the vehicle-mounted images: tunnel, viaduct have no shelter, viaduct shelter and have no shelter 4 types; the method for quantizing the road scene semantics comprises the following steps:
Figure FDA0003631083410000051
wherein, alpha belongs to [0,1] is the confidence coefficient of vehicle-mounted image semantic recognition, and the condition that the road scene is consistent with the candidate road attribute comprises the following steps: both the image and the road scene are the case of a tunnel; the image is the situation that the viaduct is shielded and the road scene is non-viaduct and tunnel; the image is an viaduct non-occlusion or non-occlusion, if no viaduct exists in the road scenes of all candidate points of the current track point, the attributes of all non-tunnel roads are considered to be consistent with the road scene of the current image, if the viaduct exists, the attributes of the viaduct non-occlusion scene are consistent with the attributes of all non-tunnel roads, and the attributes of the non-occlusion scene are consistent with the attributes of the viaduct road; and the score of each candidate track in the road scene semantic aspect is equal to the sum of the scores of all candidate points on the candidate track in the road scene semantic aspect.
5. The vehicle trajectory map matching method fusing the vehicle-mounted image semantics as claimed in claim 1, characterized in that: in the steps, a convolutional neural network is adopted to recognize the number of lanes in the vehicle-mounted image, the lanes on the left side of a yellow solid line and the non-motor lanes on the right side are ignored, the road without lane line identification is defaulted to be 1 lane, the vehicle-mounted image recognition result and the urban road attribute of more than 4 lanes are artificially corrected to be 4 lanes, and the method for quantizing the lane number semantics comprises the following steps:
Figure FDA0003631083410000061
wherein, alpha is ∈ [0,1]]Confidence, y, for semantic recognition of vehicle images p ,y r Respectively the number of lanes extracted from the corrected vehicle-mounted image and the number of lanes recorded in the urban road attribute, wherein max and abs are respectively functions for calculating a maximum value and an absolute value; the score of each candidate track in the lane number semantic aspect is equal to the sum of the scores of all candidate points on the candidate track in the lane number semantic aspect.
6. The vehicle trajectory map matching method fusing the vehicle-mounted image semantics as claimed in claim 1, characterized in that: adopting a convolutional neural network to identify the guide lines appearing in the vehicle-mounted image, wherein the identification content comprises the following steps: the method for quantizing the semantics of the flow guide lines has the following steps that no flow guide line exists, the flow guide line appears on the left side of the vehicle, the flow guide line appears on the right side of the vehicle, and the flow guide line appears on the two sides of the vehicle in 4 aspects:
Figure FDA0003631083410000062
wherein, alpha is ∈ [0,1]]If the current vehicle-mounted image has no guide line, the scores of all candidate points on the current track point are alpha; if the guide line appears in the vehicle-mounted image, selecting 5 continuous candidate points on each candidate track to form a track chain by taking each candidate point of the current track point as a center, and grouping the track chains with overlapping by adopting a data structure of a parallel search set; after grouping, if only one track chain in the group exists, the score of the candidate track on the current track point is 0.8 alpha; for the scene with a plurality of track chains in the group, the track chains are sequenced according to the positions of the guide linesFor example: if the current diversion line appears on the left side of the vehicle, the vehicle is proved to run near the right side, the track near the right in the same group is divided into alpha, and other tracks are sequentially decreased by 0.2 alpha from the right to the left; if the diversion lines appear on the two sides of the vehicle, one or two candidate tracks in the middle are scored as alpha, and the other tracks are sequentially decreased by 0.2 alpha from the center to the two sides, wherein the value is c in the formula (13) j Represents the number of trace chains in the jth grouping, and i represents the order of the trace chains in the grouping.
7. The vehicle trajectory map matching method fusing the semantics of the vehicle-mounted image according to claim 1, wherein: the specific implementation of the step 3 comprises the following substeps;
step 3.1, data normalization:
Figure FDA0003631083410000063
wherein x i,j Representing the jth vehicular image semantic score, min, of the ith candidate trajectory i=1…n x i,j And max i=1…n x i,j Respectively representing the maximum value and the minimum value of a certain type of vehicle-mounted image semantics in all candidate tracks;
step 3.2, calculating the proportion of each type of vehicle-mounted image semantics in each candidate track in corresponding indexes of all candidate tracks:
Figure FDA0003631083410000071
step 3.3, entropy calculation, namely calculating the entropy of each type of vehicle-mounted image semantic score:
Figure FDA0003631083410000072
step 3.4, information entropy redundancy calculation, namely calculating the information entropy redundancy of each type of vehicle-mounted image semantics:
d j =1-e j (17)
step 3.5, weight calculation, namely calculating the weight of each vehicle-mounted image semantic:
Figure FDA0003631083410000073
step 3.6, calculating the score of each candidate track, selecting one candidate track with the highest overall score as a map matching result:
Figure FDA0003631083410000074
CN202210491343.7A 2022-05-07 2022-05-07 Vehicle track map matching method fusing vehicle-mounted image semantics Pending CN114964272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210491343.7A CN114964272A (en) 2022-05-07 2022-05-07 Vehicle track map matching method fusing vehicle-mounted image semantics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210491343.7A CN114964272A (en) 2022-05-07 2022-05-07 Vehicle track map matching method fusing vehicle-mounted image semantics

Publications (1)

Publication Number Publication Date
CN114964272A true CN114964272A (en) 2022-08-30

Family

ID=82970525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210491343.7A Pending CN114964272A (en) 2022-05-07 2022-05-07 Vehicle track map matching method fusing vehicle-mounted image semantics

Country Status (1)

Country Link
CN (1) CN114964272A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115586557A (en) * 2022-12-12 2023-01-10 国网浙江省电力有限公司信息通信分公司 Vehicle running track deviation rectifying method and device based on road network data
CN115905449A (en) * 2022-12-30 2023-04-04 北京易航远智科技有限公司 Semantic map construction method and automatic driving system with familiar road mode
CN116007638A (en) * 2023-03-24 2023-04-25 北京集度科技有限公司 Vehicle track map matching method and device, electronic equipment and vehicle
CN117889871A (en) * 2024-03-14 2024-04-16 德博睿宇航科技(北京)有限公司 Navigation road network matching accurate position echelon iterative search method and system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115586557A (en) * 2022-12-12 2023-01-10 国网浙江省电力有限公司信息通信分公司 Vehicle running track deviation rectifying method and device based on road network data
CN115586557B (en) * 2022-12-12 2023-05-12 国网浙江省电力有限公司信息通信分公司 Vehicle driving track deviation correcting method and device based on road network data
CN115905449A (en) * 2022-12-30 2023-04-04 北京易航远智科技有限公司 Semantic map construction method and automatic driving system with familiar road mode
CN116821266A (en) * 2022-12-30 2023-09-29 北京易航远智科技有限公司 Semantic map construction method and automatic driving system with acquaintance road mode
CN116821266B (en) * 2022-12-30 2024-03-29 北京易航远智科技有限公司 Semantic map construction method and automatic driving system with acquaintance road mode
CN116007638A (en) * 2023-03-24 2023-04-25 北京集度科技有限公司 Vehicle track map matching method and device, electronic equipment and vehicle
CN117889871A (en) * 2024-03-14 2024-04-16 德博睿宇航科技(北京)有限公司 Navigation road network matching accurate position echelon iterative search method and system
CN117889871B (en) * 2024-03-14 2024-05-10 德博睿宇航科技(北京)有限公司 Navigation road network matching accurate position echelon iterative search method and system

Similar Documents

Publication Publication Date Title
CN114964272A (en) Vehicle track map matching method fusing vehicle-mounted image semantics
CN111653097B (en) Urban trip mode comprehensive identification method based on mobile phone signaling data and containing personal attribute correction
CN112543427B (en) Method and system for analyzing and identifying urban traffic corridor based on signaling track and big data
CN101965601B (en) Driving support device and driving support method
CN108848460B (en) Man-vehicle association method based on RFID and GPS data
CN111653094B (en) Urban trip mode comprehensive identification method based on mobile phone signaling data and containing road network correction
CN110276973B (en) Automatic intersection traffic rule identification method
CN112365711A (en) Vehicle track reconstruction method based on license plate recognition data
CN113505187B (en) Vehicle classification track error correction method based on map matching
CN107328423B (en) Curve identification method and system based on map data
CN113932821B (en) Track map matching method based on continuous window average direction characteristics
CN112309118B (en) Vehicle trajectory calculation method based on space-time similarity
CN111653093A (en) Urban trip mode comprehensive identification method based on mobile phone signaling data
CN109489679B (en) Arrival time calculation method in navigation path
CN113989784A (en) Road scene type identification method and system based on vehicle-mounted laser point cloud
CN113903173B (en) Vehicle track feature extraction method based on directed graph structure and LSTM
CN114202120A (en) Urban traffic travel time prediction method aiming at multi-source heterogeneous data
CN117033562B (en) Dangerous prediction method and system based on scene element matching
CN117037471A (en) Truck working area and driving path identification method based on GPS track data
CN113945222B (en) Road information identification method and device, electronic equipment, vehicle and medium
CN116564551A (en) Data-knowledge driven urban rail transit risk identification method
CN114333323B (en) Expressway travel speed prediction method based on pressure characteristics
CN114462609A (en) Floating car data track reduction method based on hidden Markov model
WO2023050645A1 (en) Method and apparatus for training autonomous driving prediction model, terminal and medium
CN115206104A (en) Urban resident traffic trip mode identification method based on mobile phone signaling data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination