CN110503009B - Lane line tracking method and related product


Info

Publication number
CN110503009B
CN110503009B (application CN201910719667.XA)
Authority
CN
China
Prior art keywords
lane
models
model
sub
image
Prior art date
Legal status
Active
Application number
CN201910719667.XA
Other languages
Chinese (zh)
Other versions
CN110503009A (en)
Inventor
杨臻
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910719667.XA
Publication of CN110503009A
Application granted
Publication of CN110503009B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The embodiment of the application provides a lane line tracking method and related products, wherein the lane line tracking method comprises the following steps: at time t, predicting each of N first lane models at time t-1 according to the running parameters of the vehicle to obtain N second lane models; updating the N second lane models according to lane line characteristics in a target image to obtain N third lane models, wherein the target image is an image of a preset area in front of the vehicle at time t; calculating the adaptation probability of each third lane model in the N third lane models according to the probability parameters; and determining the third lane model with the largest adaptation probability among the N third lane models, wherein the third lane model with the largest adaptation probability is used for tracking a lane line of the vehicle driving road surface. The embodiment of the application is beneficial to improving the accuracy of lane line tracking.

Description

Lane line tracking method and related product
Technical Field
The application relates to the technical field of automatic driving, in particular to a lane line tracking method and related products.
Background
With the development of computer vision technology, feature extraction is performed on an input image through a neural network algorithm to obtain the lane line features of the lane lines in the input image; these features are then processed through a lane model, and the lane lines in front of the vehicle are output, so as to realize unmanned driving. At present, there are two common lane models. The first is a lane model based on the parallel assumption, i.e. all lane lines are assumed to be parallel to one another; the second is a lane model based on the non-parallel assumption, i.e. all lane lines are assumed to be non-parallel. A lane model based on either assumption is constrained by its own conditions: when a vehicle runs through a complex intersection, for example one where parallel lane lines and non-parallel lane lines exist at the same time, lane lines are missed or the output lane lines do not conform to the rules, so each lane line at the complex intersection cannot be accurately tracked with such a lane model.
Disclosure of Invention
The lane line tracking method and related products help a vehicle adapt to various complex driving scenes, thereby improving the accuracy of lane line tracking and, in turn, traffic safety.
In a first aspect, an embodiment of the present application provides a lane line tracking method, including:
at time t, predicting each of N first lane models at time t-1 according to running parameters of a vehicle to obtain N second lane models, wherein the first lane models are used for tracking multiple groups of lanes, lane A and lane B are not parallel, lane A and lane B being lanes in any two different groups of the multiple groups of lanes, lanes contained in each group of the multiple groups of lanes are parallel to one another, and N is an integer greater than or equal to 1;
updating the N second lane models according to lane line characteristics in a target image to obtain N third lane models, wherein the target image is an image of a preset area in front of the vehicle at the moment t;
calculating the adaptation probability of each third lane model in the N third lane models according to probability parameters, wherein the adaptation probability is used for representing the adaptation degree of the third lane model and the lane line of the vehicle driving road surface;
And determining a third lane model with the largest adaptation probability among the N third lane models, wherein the third lane model with the largest adaptation probability is used for tracking a lane line of the vehicle driving road surface.
The first lane model is used for tracking a plurality of groups of lanes, the lanes are not parallel to each other, and the lanes contained in each group of lanes are parallel to each other, so that the third lane model obtained at the time t can be used for tracking parallel lane lines and non-parallel lane lines, the accuracy of lane line tracking is improved, the vehicle is further enabled to adapt to various complex driving environments, and traffic safety is improved; and the third lane model with the largest adaptation probability is adopted to track the lane line, so that the accuracy of lane line tracking is further improved.
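To make the overall flow concrete, the following Python sketch illustrates one tracking cycle at time t under the four steps above. It is only an illustration: the helper names predict_model, update_with_image and adaptation_probability are hypothetical placeholders for the prediction, update and probability-calculation operations described in this application, not functions defined by it.

    # Illustrative sketch of one tracking cycle (time t); helper callables are hypothetical.
    def track_lane_lines(first_models, driving_params, target_image, prob_params,
                         predict_model, update_with_image, adaptation_probability):
        # Step 1: predict each first lane model at t-1 to obtain a second lane model.
        second_models = [predict_model(m, driving_params) for m in first_models]
        # Step 2: update the second lane models with the lane line features of the
        # target image (image of the preset area in front of the vehicle at time t).
        third_models = [update_with_image(m, target_image) for m in second_models]
        # Step 3: compute the adaptation probability of each third lane model.
        probs = [adaptation_probability(m, prob_params) for m in third_models]
        # Step 4: the third lane model with the largest adaptation probability is used
        # to track the lane lines of the road surface.
        best = max(range(len(third_models)), key=lambda i: probs[i])
        return third_models[best], probs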
In some possible embodiments, the predicting each of the N first lane models at time t-1 according to the running parameters of the vehicle to obtain N second lane models includes:
obtaining a prediction matrix according to the running parameters of the vehicle;
and predicting each of the N first lane models at time t-1 according to the prediction matrix to obtain the N second lane models.
It can be seen that, based on the lane model at the previous time and the vehicle running parameters at the current time, the lane model at the current time is predicted, so that the data at the two times are associated, the predicted lane model can contain the existing running information of the vehicle, and the predicted lane model is more in line with the current running scene.
In some possible embodiments, the updating the N second lane models according to the lane line features in the target image to obtain N third lane models includes:
dividing the target image into T sub-images, wherein the distance between the region corresponding to the ith sub-image and the vehicle is smaller than the distance between the region corresponding to the (i+1) th sub-image and the vehicle, i is an integer, i is more than or equal to 1 and less than or equal to T, and T is an integer which is more than or equal to 2;
acquiring lane line characteristics in the ith sub-image;
selecting U target lane models matched with the lane line characteristics of the ith sub-image from N first reference lane models, executing an ith updating operation on each of the U target lane models according to the lane line characteristics to obtain U first reference lane models, wherein an ith updating result comprises the U updated first reference lane models, when i=1, the N first reference lane models are the N second lane models, and when i is larger than 1, the N first reference lane models are the i-1 th updating results, and U is more than or equal to 0 and less than or equal to N;
And when i=t, the T-th updating result obtained after the T-th updating operation is executed is the N third lane models.
It can be seen that the predicted lane model is updated based on the target image of the preset area in front of the vehicle, and when the model is updated, a segment matching updating mode is adopted, so that the situation that the false detection lane line is updated by mistake is avoided, the updating error is eliminated, and the updated third lane model is more suitable for the current driving environment.
In some possible embodiments, the selecting, from the N first reference lane models, U target lane models that match with the lane line features of the ith sub-image includes:
under an image coordinate system, obtaining an observation vector of a lane line in the target image according to the lane line characteristics;
under a vehicle coordinate system, obtaining M predictive observation vectors corresponding to lane lines in the target image and a lane model A according to the lane line characteristics, wherein M is the number of the lane lines tracked by the lane model A, the lane model A is any one of the N first reference lane models, and M is an integer greater than or equal to 1;
determining M Mahalanobis distances corresponding to the observation vector and the M predictive observation vectors;
and determining the minimum Mahalanobis distance among the M Mahalanobis distances, and determining the lane model A as a target lane model matched with the lane line characteristics of the ith sub-image when the minimum Mahalanobis distance is smaller than a distance threshold.
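The matching test above can be summarized in a short sketch. It is a minimal illustration assuming the innovation covariance S is available from the extended Kalman filter quantities (H, P, R); the names are not taken from the patent.

    import numpy as np

    # Mahalanobis distance between an observation vector and a predicted observation vector.
    def mahalanobis(z, z_pred, S):
        y = z - z_pred                                   # residual
        return float(np.sqrt(y.T @ np.linalg.inv(S) @ y))

    # A lane model matches the lane line features of the i-th sub-image when the
    # minimum Mahalanobis distance over its predicted observation vectors is below
    # the distance threshold.
    def is_matching_model(z, predicted_observations, S, dist_threshold):
        distances = [mahalanobis(z, z_j, S) for z_j in predicted_observations]
        return min(distances) < dist_threshold, int(np.argmin(distances))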
In some possible embodiments, the method further comprises:
when i=t, if f minimum mahalanobis distances corresponding to a lane line C in the target image are all greater than or equal to the distance threshold, creating N fourth lane models according to the N third lane models, where the N fourth lane models are consistent with lane line parameters of the N third lane models, the lane line C is any one lane line in the target image, the f minimum mahalanobis distances are minimum mahalanobis distances corresponding to lane line features of the lane line C in f sub-images, the f sub-images are sub-images containing the lane line features of the lane line C in the T sub-images, and f is an integer greater than or equal to 1, and f is less than or equal to T;
processing each of the N fourth lane models to obtain N new lane models;
And taking the N fourth lane models and the N new lane models as the first lane model at the moment t.
It can be seen that when the fact that the untracked lane line exists is detected, a new lane model is created, so that the untracked lane line is tracked, the problem of missed detection of the lane line is avoided, and the accuracy of lane line tracking is improved.
In some possible embodiments, the processing each of the N fourth lane models to obtain N new lane models includes:
acquiring the relative distance between the lane line C and the vehicle;
carrying out initial assignment on lane line parameters corresponding to the lane lines C in each fourth lane model according to the relative distance to obtain N fifth lane models;
fitting lane line characteristics of each sub-image in the T sub-images to obtain at least one fitting equation;
if a target fitting equation is first obtained in a kth sub-image of the T sub-images, updating each of the N fourth lane models by adopting lane line characteristics of the lane line C in the kth sub-image to obtain N current fourth lane models, wherein the target fitting equation is the fitting equation, among the at least one fitting equation, whose intercept differs from the relative distance by less than a distance threshold;
And starting from the (k+1) th sub-image in the T sub-images, sequentially adopting the lane line characteristics in each sub-image to update the N fourth lane models to obtain the current latest N fourth lane models, and obtaining the N new lane models after updating the current latest N fourth lane models by adopting the lane line characteristics in the T sub-images.
It can be seen that the newly created lane model is updated by using the target image so that the newly created lane model is in a tracking state, and the lane line which is not tracked at the time t is tracked by using the lane model at the time t+1, so that the lane line is prevented from being missed.
In some possible embodiments, the probability parameters include a target matching probability, a priori probability, and an adaptation probability of the first lane model B at the time t-1; the adaptation probability of the third lane model B' is obtained by the target matching probability, the prior probability and the adaptation probability of the first lane model B at the t-1 moment; the first lane model B is any one of the N first lane models;
the target matching probability is used for representing the matching degree of all lane lines in the target image and the third lane model B';
The prior probability is used to characterize the origin of the third lane model B ', including that the third lane model B' was obtained by performing an update operation on the first lane model B.
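One plausible way to combine the three probability parameters named above (the target matching probability, the prior probability and the adaptation probability at time t-1) is a normalized product over the N third lane models. The exact formula of the embodiment is given at step 501; the sketch below is only an assumption illustrating the idea.

    # Hedged sketch: normalized product of the three probability parameters per model.
    def adaptation_probabilities(match_probs, prior_probs, prev_adapt_probs):
        raw = [m * p * a for m, p, a in zip(match_probs, prior_probs, prev_adapt_probs)]
        total = sum(raw)
        return [r / total for r in raw] if total > 0 else raw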
In some possible embodiments, the method further comprises:
obtaining a target third lane model according to the adaptation probability of each third lane model, wherein the target third lane model is a third lane model with the adaptation probability smaller than a probability threshold value in the N third lane models;
and deleting the target third lane model with the tracking time length greater than a time length threshold, wherein the tracking time length is the total time length of the target third lane model from the creation time to the t time.
It can be seen that in the embodiment, the lane model with the probability lower than the probability threshold is deleted in time, so that the calculation amount when the lane line is tracked is reduced, the calculation load of the vehicle-mounted equipment is lightened, the lane model adapted to the current lane line can be found more quickly, and the tracking efficiency is improved.
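The deletion rule can be sketched as follows; the dictionary-based model representation ('prob', 'created_at') is a hypothetical stand-in, not the data structure of the embodiment.

    # Delete third lane models whose adaptation probability is below the probability
    # threshold and whose tracking duration (creation time to time t) exceeds the
    # duration threshold; keep all others.
    def prune_models(models, t, prob_threshold, duration_threshold):
        kept = []
        for m in models:
            low_prob = m["prob"] < prob_threshold
            too_long = (t - m["created_at"]) > duration_threshold
            if not (low_prob and too_long):
                kept.append(m)
        return kept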
In a second aspect, an embodiment of the present application provides a lane tracking apparatus, including:
the prediction unit is used for predicting each of N first lane models at time t-1 according to the running parameters of the vehicle to obtain N second lane models, wherein the first lane models are used for tracking multiple groups of lanes, lane A and lane B are not parallel, lane A and lane B being lanes in any two different groups of the multiple groups of lanes, the lanes contained in each group of the multiple groups of lanes are parallel to one another, and N is an integer greater than or equal to 1;
The updating unit is used for updating the N second lane models according to lane line characteristics in a target image to obtain N third lane models, wherein the target image is an image of a preset area in front of the vehicle at the moment t;
the calculation unit is used for calculating the adaptation probability of each third lane model in the N third lane models according to the probability parameters, wherein the adaptation probability is used for representing the adaptation degree of the third lane model and the lane line of the vehicle driving road surface;
the tracking unit is used for determining a third lane model with the largest adaptation probability among the N third lane models, and the third lane model with the largest adaptation probability is used for tracking the lane line of the vehicle driving road surface.
In some possible embodiments, in predicting each of the N first lane models at time t-1 according to a driving parameter of the vehicle to obtain N second lane models, the prediction unit is specifically configured to: obtain a prediction matrix according to the running parameters of the vehicle; and predict each of the N first lane models at time t-1 according to the prediction matrix to obtain the N second lane models.
In some possible implementations, in updating the N second lane models according to the lane line features in the target image to obtain N third lane models, the updating unit is specifically configured to:
dividing the target image into T sub-images, wherein the distance between the region corresponding to the ith sub-image and the vehicle is smaller than the distance between the region corresponding to the (i+1) th sub-image and the vehicle, i is an integer, i is more than or equal to 1 and less than or equal to T, and T is an integer which is more than or equal to 2;
acquiring lane line characteristics in the ith sub-image;
selecting U target lane models matched with the lane line characteristics of the ith sub-image from N first reference lane models, executing an ith updating operation on each of the U target lane models according to the lane line characteristics to obtain U first reference lane models, wherein an ith updating result comprises the U updated first reference lane models, when i=1, the N first reference lane models are the N second lane models, and when i is larger than 1, the N first reference lane models are the i-1 th updating results, and U is more than or equal to 0 and less than or equal to N;
and when i=t, the T-th updating result obtained after the T-th updating operation is executed is the N third lane models.
In some possible embodiments, in selecting U target lane models matching with the lane line features of the ith sub-image from the N first reference lane models, the updating unit is specifically configured to:
under an image coordinate system, obtaining an observation vector of a lane line in the target image according to the lane line characteristics;
under a vehicle coordinate system, obtaining M predictive observation vectors corresponding to lane lines in the target image and a lane model A according to the lane line characteristics, wherein M is the number of the lane lines tracked by the lane model A, the lane model A is any one of the N first reference lane models, and M is an integer greater than or equal to 1;
determining M Mahalanobis distances corresponding to the observation vector and the M predictive observation vectors;
and determining the minimum Mahalanobis distance among the M Mahalanobis distances, and determining the lane model A as a target lane model matched with the lane line characteristics of the ith sub-image when the minimum Mahalanobis distance is smaller than a distance threshold.
In some possible embodiments, the apparatus further comprises a creation unit for:
When i=t, if f minimum mahalanobis distances corresponding to a lane line C in the target image are all greater than or equal to the distance threshold, creating N fourth lane models according to the N third lane models, where the N fourth lane models are consistent with lane line parameters of the N third lane models, the lane line C is any one lane line in the target image, the f minimum mahalanobis distances are minimum mahalanobis distances corresponding to lane line features of the lane line C in f sub-images, the f sub-images are sub-images containing the lane line features of the lane line C in the T sub-images, and f is an integer greater than or equal to 1, and f is less than or equal to T;
processing each of the N fourth lane models to obtain N new lane models;
and taking the N fourth lane models and the N new lane models as the first lane model at the moment t.
In some possible embodiments, in processing each of the N fourth lane models to obtain N new lane models, the creating unit is specifically configured to:
Acquiring the relative distance between the lane line C and the vehicle;
carrying out initial assignment on lane line parameters corresponding to the lane lines C in each fourth lane model according to the relative distance to obtain N fifth lane models;
fitting lane line characteristics of each sub-image in the T sub-images to obtain at least one fitting equation;
if a target fitting equation is first obtained in a kth sub-image of the T sub-images, updating each of the N fourth lane models by adopting lane line characteristics of the lane line C in the kth sub-image to obtain N current fourth lane models, wherein the target fitting equation is the fitting equation, among the at least one fitting equation, whose intercept differs from the relative distance by less than a distance threshold;
and starting from the (k+1) th sub-image in the T sub-images, sequentially adopting the lane line characteristics in each sub-image to update the N fourth lane models to obtain the current latest N fourth lane models, and obtaining the N new lane models after updating the current latest N fourth lane models by adopting the lane line characteristics in the T sub-images.
In some possible embodiments, the probability parameters include a target matching probability, a priori probability, and an adaptation probability of the first lane model B at the time t-1; the adaptation probability of the third lane model B' is obtained by the target matching probability, the prior probability and the adaptation probability of the first lane model B at the t-1 moment; the first lane model B is any one of the N first lane models;
the target matching probability is used for representing the matching degree of all lane lines in the target image and the third lane model B';
the prior probability is used to characterize the origin of the third lane model B ', including that the third lane model B' was obtained by performing an update operation on the first lane model B.
In some possible embodiments, the apparatus further comprises a deletion unit for:
obtaining a target third lane model according to the adaptation probability of each third lane model, wherein the target third lane model is a third lane model with the adaptation probability smaller than a probability threshold value in the N third lane models;
and deleting the target third lane model with the tracking time length greater than a time length threshold, wherein the tracking time length is the total time length of the target third lane model from the creation time to the t time.
In a third aspect, an embodiment of the present application provides another lane tracking apparatus, including:
the device comprises a processor, a communication interface and a memory, wherein the processor, the communication interface and the memory are electrically connected to one another;
the processor is used for predicting each of N first lane models at time t-1 according to the running parameters of the vehicle to obtain N second lane models, where the first lane models are used for tracking multiple groups of lanes, lane A and lane B are not parallel, lane A and lane B being lanes in any two different groups of the multiple groups of lanes, the lanes contained in each group of the multiple groups of lanes are parallel to one another, and N is an integer greater than or equal to 1;
the processor is further configured to update the N second lane models according to lane line features in a target image, so as to obtain N third lane models, where the target image is an image of a preset area in front of the vehicle at the time t;
the processor is further used for calculating the adaptation probability of each third lane model in the N third lane models according to probability parameters, wherein the adaptation probability is used for representing the adaptation degree of the third lane model and the lane line of the vehicle driving road surface;
The processor is further configured to determine a third lane model with a largest adaptation probability among the N third lane models, where the third lane model with the largest adaptation probability is used to track a lane line of the vehicle driving road surface.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing a computer program that is executed by hardware (e.g., a processor, etc.) to perform part or all of the steps of any one of the methods performed by the lane line tracking apparatus in the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions that, when run on a lane line tracking apparatus, cause the lane line tracking apparatus to perform some or all of the steps of the lane line tracking method of the above aspects.
Drawings
Some drawings that relate to embodiments of the present application will be described below.
Fig. 1A is a schematic diagram of a lane line in a vehicle coordinate system according to an embodiment of the present application;
fig. 1B is a schematic diagram of a driving scenario provided in an embodiment of the present application;
fig. 1C is a schematic structural diagram of a vehicle-mounted device according to an embodiment of the present application;
Fig. 1D is a schematic flow chart of a lane tracking method according to an embodiment of the present application;
FIG. 2A is a flowchart of a method for updating a second lane model according to an embodiment of the present disclosure;
fig. 2B is a schematic diagram of dividing a target image according to an embodiment of the present application;
FIG. 2C is a flowchart of another method for updating a lane model according to an embodiment of the present disclosure;
FIG. 3A is a flowchart of a method for initializing a lane model according to an embodiment of the present disclosure;
fig. 3B is a schematic view of an expressway ramp scenario provided in an embodiment of the present application;
FIG. 4 is a flowchart of a method for creating a new lane model according to an embodiment of the present disclosure;
fig. 5 is a flow chart of a method for managing a lane model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a deleted lane model according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a lane tracking apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram of another lane tracking apparatus according to an embodiment of the present application.
Detailed Description
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
For the understanding of the solution of the present application, a lane model will be first described.
A lane line is modeled in the vehicle coordinate system to obtain a lane line equation, and the equation parameters (lane line parameters) in the lane line equation are expressed as a vector to obtain a lane model. At present, a lane line is generally modeled with the cubic spiral shown in formula (1) to obtain the lane line equation f(l), and the lane model corresponding to this lane line equation is x = [C1 C0 b Y0]:

f(l) = (C1/6)·l^3 + (C0/2)·l^2 + b·l + Y0    (1)

As shown in fig. 1A, in the vehicle coordinate system, C1 is the curvature change rate of the lane line, C0 is the curvature of the lane line, b is the slope of the tangent between the origin of coordinates and the lane line, Y0 is the lateral offset of the lane line with respect to the origin of coordinates, l is the distance along the lane line from the intersection of the lane line with the y-axis, and f(l) is the lateral offset, with respect to the x-axis, at distance l from that intersection.

At present, two modes are generally adopted when modeling multiple lane lines. The first mode is based on the parallel assumption: all lane lines are parallel, so the multiple lane lines share the same C1, C0 and b. The second mode is based on the non-parallel assumption: all lane lines are non-parallel, so each lane line has its own independent C1, C0 and b. The lane line equations based on the parallel assumption and the non-parallel assumption, obtained with the modeling method of formula (1), are shown in formula (2) and formula (3), respectively:

f_i(l) = (C1/6)·l^3 + (C0/2)·l^2 + b·l + Y_i    (2)

f_i(l) = (C1_i/6)·l^3 + (C0_i/2)·l^2 + b_i·l + Y_i    (3)

Based on the parallel assumption, the multiple lane lines are tracked as a whole and one lane model is constructed to track all the parallel lane lines, the lane model being x = [C1 C0 b Y0 … Yn]. Based on the non-parallel assumption, a lane model needs to be constructed for each lane line, and the lane model corresponding to the i-th lane line is x_i = [C1_i C0_i b_i Y_i].
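A small helper shows how formula (1) is evaluated for a lane model x = [C1 C0 b Y0]; it assumes the cubic-spiral form reconstructed above, since the original formula image is not available.

    # Lateral offset of a lane line at distance l, per the reconstructed formula (1).
    def lane_line_offset(x, l):
        C1, C0, b, Y0 = x
        return (C1 / 6.0) * l ** 3 + (C0 / 2.0) * l ** 2 + b * l + Y0

    # Example: a straight lane line 1.75 m to the left of the vehicle.
    # lane_line_offset([0.0, 0.0, 0.0, 1.75], 10.0) -> 1.75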
based on the parallel assumption, when the lane line is tracked by adopting the lane model, the non-parallel lane line can be missed when the lane line is tracked due to the existence of the parallel constraint, the parallel lane line can be missed due to the existence of the non-parallel constraint, the up-down gradient of the lane surface is inconsistent due to jolt of a vehicle when the lane line is ascending or descending, and the parallel lane line can be projected as the non-parallel lane line when the lane line is projected under the image coordinate system, so that the lane line of an inner splay or an outer splay is output. In order to solve the defect of tracking lane lines at present, the technical scheme of the application is specially provided.
The lane model referred to in this application will be described again.
The lane model related in the embodiment of the present application is used for tracking N groups of lanes and M lane lines corresponding to the N groups of lanes, where any two groups of lanes in the N groups of lanes are not parallel, lanes included in each group of lanes are parallel to each other, and when tracking parallel lanes, a plurality of parallel lanes are attributed to one group of lanes, so each group of lanes includes 0 or more parallel lane lines. It will be appreciated that if only one lane is included in each set of lanes, there are no parallel lanes in the set of parallel lanes. N is an integer greater than or equal to 2, M is an integer greater than or equal to 2, and N and M can be set autonomously by a user according to different driving scenes or automatically by a system according to different driving scenes.
For example, when the vehicle is traveling in the scene shown in fig. 3B, the first lane and the second lane belong to lanes parallel to each other, so the first lane and the second lane are tracked as a set of lanes lane0, and the third lane is not parallel to the first lane and the second lane, so the third lane is tracked as a set of lanes lane1 alone.
Therefore, the lane models in the present application are used to track parallel lanes (parallel lane lines) and non-parallel lanes (non-parallel lane lines), so a state vector of each lane model is created based on the set of lanes tracked in each lane model.
For example, when the vehicle is traveling on the road shown in fig. 1B, a lane model is created to track the three groups of lanes lane0, lane1 and lane2. Since lane line y0 and lane line y1 are parallel to each other, they share one group of parameters [C1^0 C0^0 b^0]; likewise, lane lines y2 and y3 share the group [C1^1 C0^1 b^1], and lane lines y4 and y5 share the group [C1^2 C0^2 b^2]. The lane model created for the road shown in fig. 1B is therefore

x = [C1^0 C0^0 b^0 C1^1 C0^1 b^1 C1^2 C0^2 b^2 Y0 Y1 Y2 Y3 Y4 Y5]

Generalizing, when the lane model in this application is used to track N groups of lanes and M lane lines, the corresponding lane model is

x = [C1^0 C0^0 b^0 … C1^(N-1) C0^(N-1) b^(N-1) Y0 Y1 … Y_(M-1)]
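The state vector described above can be assembled as in the following sketch: each group of mutually parallel lanes contributes one shared parameter triple (C1, C0, b) and every lane line contributes its own lateral offset Y. The exact ordering follows the reconstruction above and is therefore an assumption.

    # Build the lane model state vector for N groups of lanes and M lane lines.
    def build_state_vector(group_params, lateral_offsets):
        # group_params: one (C1, C0, b) triple per group of parallel lanes (N groups)
        # lateral_offsets: one Y per lane line (M lane lines)
        state = []
        for C1, C0, b in group_params:
            state.extend([C1, C0, b])
        state.extend(lateral_offsets)
        return state

    # Example for the road of fig. 1B: three groups and six lane lines.
    # build_state_vector([(0, 0, 0)] * 3, [y0, y1, y2, y3, y4, y5])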
Referring to fig. 1C, fig. 1C is a schematic structural diagram of an in-vehicle apparatus 100 according to an embodiment of the present application, where the in-vehicle apparatus 100 includes: an image acquisition module 101, a lane line feature extraction module 102, a lane line tracking module 103, a lane model management module 104, a lane line output module 105 and a driving parameter input module 106, wherein:
The image acquisition module 101 is used for acquiring a target image of a preset area in front of the vehicle at the time t;
the lane line feature extraction module 102 is used for extracting features of the target image to obtain lane line features of each lane line;
the driving parameter input module 106 is used for inputting driving parameters of the vehicle to the lane line tracking module 103;
the lane line tracking module 103 is configured to predict each of N first lane models at time t-1 according to a vehicle driving parameter, obtain N second lane models, update the N second lane models according to lane line features in a target image, obtain N third lane models, and track multiple groups of lanes, where a lane a and a lane B are not parallel, where the lane a and the lane B are lanes in any two groups of the multiple groups of lanes, lanes included in each group of lanes are parallel, and N is an integer greater than or equal to 1;
the lane model management module 104 is configured to calculate, according to the probability parameters, an adaptation probability of each of the N third lane models, where the adaptation probability is used to characterize an adaptation degree of the third lane models and a lane line in the target image, and determine a third lane model with a maximum adaptation probability of the N third lane models, where the third lane model with the maximum adaptation probability is used to track a lane line of a vehicle driving road surface;
The lane line output module 105 is used for outputting the tracked lane line.
In the embodiment of the application, the lane model is used for tracking a plurality of groups of lanes, lanes in each group of lanes are parallel to each other, and any two groups of lanes are not parallel to each other, so that lane lines in each group of lanes are parallel to each other, lane lines in any two groups of lanes are not parallel, when the vehicle runs in parallel lanes, tracking is performed through one group of lanes, and when the vehicle runs in a scene containing the non-parallel lanes, tracking is performed through two or more groups of lanes, so that the vehicle can adapt to various complex driving scenes, and the output lane lines are more accurate; and moreover, the lane line is tracked by adopting the lane model with the maximum adaptation probability, so that the accuracy of lane line tracking is further improved.
Referring to fig. 1D, fig. 1D is a flow chart of a lane tracking method according to an embodiment of the present application, where the method includes, but is not limited to, the following steps:
101: at time t, the vehicle-mounted device predicts each of N first lane models at time t-1 according to the running parameters of the vehicle to obtain N second lane models, wherein the first lane models are used for tracking multiple groups of lanes, the lanes A and the lanes B are not parallel, the lanes A and the lanes B are lanes in any two groups of the multiple groups of lanes respectively, lanes contained in each group of lanes in the multiple groups of lanes are parallel, and N is an integer greater than or equal to 1.
Wherein, since the lane line is located on the lane, the tracked lane and the tracked lane line are identical.
The state description of the first lane model and the second lane model is consistent, and the lane line parameters corresponding to all the moments are inconsistent. For example, the first lane model is an n-dimensional vector, and the second lane model is also an n-dimensional vector.
The N first lane models comprise lane models updated at the time t-1 and lane models newly created at the time t-1.
Optionally, a prediction matrix is obtained based on the association of the lane models at the time t-1 and the time t and the running parameters of the vehicle, and each of the N first lane models at the time t-1 is predicted based on the prediction matrix to obtain N second lane models.
For example, assuming that the change in curvature of each lane line is continuous, the lane line curvature change rate C_(1,t) at time t is the same as the lane line curvature change rate C_(1,t-1) at time t-1, and the lane line curvature at time t is c_(0,t) = v·Δt·c_(1,t-1) + c_(0,t-1), where v is the vehicle speed and c_(0,t-1) is the lane line curvature at time t-1. Therefore, the running parameters of the vehicle can be formed into a prediction matrix, and the prediction matrix converts the lane line parameters in the lane model at time t-1 into the lane line parameters corresponding to time t, so as to obtain the N second lane models at time t.
102: and the vehicle-mounted equipment updates the N second lane models according to lane line characteristics in a target image to obtain N third lane models, wherein the target image is an image of a preset area in front of the vehicle at the time t.
The preset area in front of the vehicle is a front area shot by shooting equipment (such as a camera, a laser radar and the like) in the vehicle-mounted equipment.
103: and the vehicle-mounted equipment calculates the adaptation probability of each third lane model in the N third lane models according to the probability parameters, wherein the adaptation probability is used for representing the adaptation degree of the third lane model and the lane lines in the target image.
The adaptation degree is determined jointly by how well the lane lines tracked by the third lane model match the lane lines of the vehicle driving road surface at time t and by the lane attribution of the tracked lane lines within the third lane model.
For example, suppose the vehicle driving road surface at time t has four lane lines y0, y1, y2 and y3, where y0 and y1 belong to lane0 and y2 and y3 belong to lane1. If the number of lane lines tracked by a third lane model is four and, in that third lane model, the tracked first and second lane lines are attributed to one lane group while the tracked third and fourth lane lines are attributed to another lane group, then the adaptation degree between that third lane model and the lane lines of the vehicle driving road surface at time t is the greatest.
In addition, for the probability calculation process, refer to step 501; it is not described in detail here.
104: the vehicle-mounted device determines a third lane model with the largest adaptation probability among the N third lane models, and the third lane model with the largest adaptation probability is used for tracking lane lines of the vehicle driving road surface.
Optionally, tracking a lane line by adopting a third lane model with the largest adaptation probability, and outputting the tracked lane line to a visual interface, wherein the visual interface can be a visual interface of the vehicle-mounted equipment or equipment associated with the vehicle-mounted equipment.
It can be seen that, in the lane model in this embodiment, multiple groups of lanes are tracked, lanes included in each group of lanes are parallel to each other, and lanes included in different groups are not parallel, so when a vehicle is traveling in a scene including only parallel lanes, one group of lanes in the lane model is tracked, and when the vehicle is traveling in a scene including non-parallel lanes, two or more groups of non-parallel lanes in the lane model are tracked, so that lane lines in various traveling scenes can be tracked, and the accuracy of lane line tracking is improved; and moreover, the lane line is tracked by adopting the lane model with the maximum adaptation probability, so that the accuracy of lane line tracking is further improved.
In some possible embodiments, each of the N first lane models is predicted based on formula (4), resulting in the N second lane models:

x_(t|t-1)^j = F_(t-1)·x_(t-1)^j + g_(t-1)    (4)

where x_(t-1)^j is the j-th first lane model of the N first lane models, x_(t|t-1)^j is the j-th second lane model obtained after predicting x_(t-1)^j, F_(t-1) is the first prediction matrix and g_(t-1) is the second prediction matrix. F_(t-1) and g_(t-1) are constructed from the running parameters of the vehicle through the quantities l = v·Δt and r = [0 0 … w·Δt]; their detailed entries follow from the continuity assumption illustrated above (for example, c_(0,t) = v·Δt·c_(1,t-1) + c_(0,t-1)).

The running parameters of the vehicle comprise the vehicle yaw rate and the vehicle speed, where w is the vehicle yaw rate at time t-1, v is the vehicle speed, and Δt is the time interval between time t and time t-1.
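The prediction step of formula (4) then amounts to one affine transform per lane model, as in the sketch below. The concrete entries of F and g are not reproduced here (the original matrix images are unavailable); the sketch only assumes they are built from l = v·Δt and r = [0 0 … w·Δt] as stated above.

    import numpy as np

    # j-th second lane model from the j-th first lane model: x_pred = F @ x + g.
    def predict_lane_model(x, F, g):
        return F @ np.asarray(x, dtype=float) + g

    def predict_all(first_models, F, g):
        return [predict_lane_model(x, F, g) for x in first_models]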
It can be understood that, after the target image is obtained, lane line prediction is performed on the target image and the predicted lane lines are associated with lane line features. If it is predicted that Q lane lines exist in the target image, each lane line needs to be processed to update the lane models. In this application, the update operation is illustrated by taking lane line C, that is, the q-th lane line among the Q lane lines, as an example, where q is greater than or equal to 1 and less than or equal to Q.
Referring to fig. 2A, fig. 2A is a flowchart of a method for updating a second lane model according to an embodiment of the present application, including, but not limited to, the following steps:
201: the in-vehicle apparatus divides the target image into T sub-images.
As shown in fig. 2B, the target image is divided into T sub-images from the near to the far, where the distance between the region corresponding to the i-th sub-image and the vehicle is smaller than the distance between the region corresponding to the i+1th sub-image and the vehicle, the region corresponding to each sub-image is a real region in a preset region in front of the vehicle, and is not an image region, i is an integer, i is greater than or equal to 1 and less than or equal to T, and T is an integer greater than or equal to 2.
The area of the region corresponding to the i-1 th sub-image is smaller than that of the region corresponding to the i-th sub-image.
202: the vehicle-mounted device acquires lane line characteristics in the ith sub-image.
The lane line features are pixel point sets of the lane line in the ith sub-image.
203: the vehicle-mounted equipment selects U target lane models matched with the lane line characteristics of the ith sub-image from N first reference lane models, and executes the ith updating operation on each of the U target lane models according to the lane line characteristics to obtain U first reference lane models.
Wherein the ith update result comprises the U updated first reference lane models; when i=1, the N first reference lane models are the N second lane models; when i > 1, the N first reference lane models are the (i-1)th update result; and U is greater than or equal to 0 and less than or equal to N.
Optionally, if the ith sub-image includes multiple lane lines, the lane line characteristics of each lane line are obtained in turn. The lane line characteristics of one lane line are first matched against the lane models, and the matched lane models are updated with those characteristics; the update result of this operation is then used for matching and updating with the lane line characteristics of the next lane line in the sub-image. After the matching-updating operation has been performed for the lane line characteristics of all lane lines in the ith sub-image, the ith updating operation corresponding to the ith sub-image is completed, so the ith update result is the result obtained after the matching-updating operation has been performed on all lane lines in the ith sub-image.
When the lane line characteristics match all of the N first reference lane models, all N first reference models take part in the updating operation, the N updated first reference lane models are obtained, and these serve as the current update result. When the lane line characteristics match only part of the N first reference models, only the matched first reference models are updated to obtain U updated first reference lane models, and the updated U first reference lane models together with the (N-U) first reference lane models that were not updated serve as the current update result. When the lane line characteristics match none of the N first reference models, no updating operation is performed and the N first reference models serve as the current update result.
204: the in-vehicle apparatus determines whether Q is less than or equal to Q.
If so, let q=q+1 and execute step 202;
if not, go to step 204.
205: the vehicle-mounted equipment determines whether the i is smaller than or equal to the T;
if yes, let i=i+1, and execute step 202; if not, step 206 is performed, i.e. the updating is ended, and N third lane models are obtained.
206: and the vehicle-mounted equipment finishes updating the lane models to obtain N third lane models.
It can be seen that in this embodiment, the target image is divided, and the second lane model is sequentially updated by using the divided sub-images, so that the problem of false detection caused when the whole lane line is matched with the lane model is avoided, and the updated third lane model is more adapted to the current driving scene.
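The segmented matching-and-updating loop of fig. 2A can be sketched as follows; match_models and update_model are hypothetical helpers standing for the matching procedure of steps 2031 to 2034 and the extended Kalman update, and sub_image.lane_line_features is an assumed container of per-lane-line features.

    # Update the reference lane models sub-image by sub-image, from near to far.
    def segmented_update(second_models, sub_images, match_models, update_model,
                         dist_threshold):
        reference_models = list(second_models)      # i = 1: the N second lane models
        for sub_image in sub_images:                # i = 1 .. T
            for features in sub_image.lane_line_features:
                matched = match_models(reference_models, features, dist_threshold)
                for idx in matched:                 # only matched models are updated
                    reference_models[idx] = update_model(reference_models[idx], features)
        return reference_models                     # after i = T: the N third lane models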
Referring to fig. 2C, fig. 2C is a flowchart illustrating another method for updating a second lane model according to an embodiment of the present application, including, but not limited to, the following steps:
2031: and under an image coordinate system, the vehicle-mounted equipment acquires the observation vector of the lane line in the target image according to the lane line characteristics.
The lane line is fitted in the image coordinate system to obtain a straight-line equation, and the observation vector z_q of the lane line characteristics of the lane line is obtained from that straight-line equation: z_q is composed of the intersection point of the fitted straight line with the straight line x = l, the normal vector of the fitted straight line at that intersection point, and the slope of the fitted straight line.
Of course, the lane line characteristics can also be fitted to a curve, a cubic spiral or the like, and the observation vector of the lane line can be obtained based on the curve or the cubic spiral; in addition, the coordinates of two points of the fitting equation can be taken to obtain the observation vector. Since the observation vector is obtained mainly for use in the extended Kalman filtering, the specific manner of obtaining the observation vector is not uniquely limited in this application.
2032: and under a vehicle coordinate system, the vehicle-mounted equipment acquires M predictive observation vectors corresponding to the lane line and the lane model A in the target image according to the lane line characteristics.
M is the number of lane lines tracked by a lane model A, the lane model A is any one of N first reference lane models, M is an integer greater than or equal to 1, and the lane line features are pixel point sets of the lane lines in an image coordinate system.
The j-th predictive observation vector among the M predictive observation vectors is ẑ_j, obtained as follows. In the vehicle coordinate system, f_j(l) is the lane line equation of the j-th lane line tracked by lane model A, (l, f_j(l)) is the intersection of this lane line equation with the straight line x = l, and p(l, f_j(l)) is the projection point obtained by projecting the point (l, f_j(l)) onto the image coordinate system. The predictive observation vector ẑ_j is composed of the projection point p(l, f_j(l)), the normal vector at this projection point, and the partial derivative of the projection point in the image coordinate system, where j is greater than or equal to 1 and less than or equal to M.
The manner in which the predicted observation vector is obtained is also merely illustrative, and the present application does not uniquely limit the manner in which the predicted observation vector is obtained.
Optionally, the point (l, f_j(l)) is projected onto the image coordinate system to obtain the projection point p(l, f_j(l)). Specifically, the projection operation p(·) is obtained through projection transformation with the camera parameters, where the camera parameters include: the extrinsic matrix of the camera, i.e. the matrix formed by the mounting angle and mounting position of the camera on the vehicle; the intrinsic matrix of the camera, i.e. the matrix formed by the optical center and the focal length of the camera; and the radial and tangential distortion coefficients of the camera lens. This transformation is existing technology, and the transformation process is not repeated here.
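For completeness, a minimal pinhole-projection sketch of the operation p(·) is given below. It assumes a rotation R and translation tvec from the vehicle coordinate system to the camera coordinate system and an intrinsic matrix K, and it omits the radial and tangential distortion mentioned above.

    import numpy as np

    # Project a point on a tracked lane line (vehicle coordinates) into the image.
    def project_point(point_vehicle, R, tvec, K):
        p_cam = R @ np.asarray(point_vehicle, dtype=float) + tvec   # vehicle -> camera
        u, v, w = K @ p_cam                                         # camera -> image plane
        return np.array([u / w, v / w])                             # pixel coordinates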
2033: and the vehicle-mounted equipment determines M Markov distances corresponding to the observation vectors and the M predictive observation vectors.
And (3) calculating the mahalanobis distance between the observation vector and each predicted observation vector through a formula (5).
Figure BDA0002151709860000118
In the above formula
Figure BDA0002151709860000119
For the mahalanobis distance between the observation vector and the jth predicted observation vector, Y is z q And->
Figure BDA00021517098600001110
Residual error of->
Figure BDA00021517098600001111
P is an extended Kalman filter covariance matrix, H is an observation model expressed by a Jacobian matrix, and is determined by projection operation, and R is an observation noise matrix。
2034: and the vehicle-mounted equipment determines the minimum Mahalanobis distance in the M Mahalanobis distances, and determines the lane model A as a target lane model matched with the lane line characteristics of the ith sub-image when the minimum Mahalanobis distance is smaller than a distance threshold.
Further, when it is determined that lane model A is a matched lane model, lane model A is updated through formula (6) to obtain lane model A':

x_t = x̂_(t|t-1) + K_t·(z_t − ẑ),  K_t = P̂_(t|t-1)·H^T·(H·P̂_(t|t-1)·H^T + R)^(-1)    (6)

where x̂_(t|t-1) is lane model A, x_t is lane model A', P̂_(t|t-1) is the estimate of the extended Kalman covariance matrix from time t-1 to time t, z_t is the observation vector, ẑ is the predictive observation vector corresponding to the minimum Mahalanobis distance among the M predictive observation vectors, and H is the observation model expressed as a Jacobian matrix, determined by the projection operation.

While the second lane model is updated, P̂_(t|t-1) is updated synchronously to obtain P_(t|t), the extended Kalman covariance matrix at time t, so that at time t+1 the estimate P̂_(t+1|t) of the extended Kalman covariance matrix from time t to time t+1 can be obtained from P_(t|t) and used to update the second lane model at time t+1. The update process is shown in formula (7):

P_(t|t) = (I − K_t·H)·P̂_(t|t-1)    (7)

where P̂_(t|t-1) is the estimate of the extended Kalman covariance matrix from time t-1 to time t, and H is the observation model expressed as a Jacobian matrix, determined by the projection operation.
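The matched-model update of formulas (6) and (7), as reconstructed above, is the standard extended Kalman correction step; the sketch below assumes that reconstruction.

    import numpy as np

    # Correct lane model A with the residual between the observation vector z_t and
    # the predicted observation vector z_hat, and update the covariance matrix.
    def ekf_update(x_pred, P_pred, z_t, z_hat, H, R):
        S = H @ P_pred @ H.T + R                              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)                   # Kalman gain
        x_upd = x_pred + K @ (z_t - z_hat)                    # lane model A'  (formula (6))
        P_upd = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred    # P_t|t          (formula (7))
        return x_upd, P_upd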
Since the vehicle-mounted device is in an unoperated state before the vehicle is started, a lane model does not exist in the lane line management module of the vehicle-mounted device, and the lane model needs to be initialized after the vehicle is started.
The process of initializing the lane model is described in the following in a specific embodiment.
Referring to fig. 3A, fig. 3A is a flowchart of a method for initializing a lane model according to an embodiment of the present application, including, but not limited to, the following steps:
301: the in-vehicle apparatus determines whether a lane line feature exists in the target image.
Accordingly, after the vehicle is started, the vehicle-mounted device continuously detects whether a lane line exists in front of the vehicle. When a lane line exists, step 302 is executed; otherwise, no lane model is created and step 306 is performed.
302: the vehicle-mounted device determines whether the confidence of the lane line feature is greater than a confidence threshold.
The confidence is used to characterize the probability that a pixel point is a pixel point of a lane line. When the vehicle-mounted device detects lane line features in the target image, it determines, for each pixel point, the probability that the pixel point belongs to a lane line, and a pixel point is determined to belong to a lane line when its confidence is greater than the confidence threshold. When the confidence of the pixel point set corresponding to the lane line features is greater than the confidence threshold, step 303 is executed; if it is less than the confidence threshold, it is determined that the lane line feature is not actually a lane line, and step 306 is performed.
303: the vehicle-mounted device creates a lane model.
When the lane line is detected, a totally new lane model is created, and the lane line parameters in the created lane model are all in an inactive state and are all 0.
For example, when driving in the expressway scenario shown in fig. 3B, the lane model is used to track the left and right lane lines of the lane in which the vehicle is located, together with the next lane line to the left and the next lane line to the right of these, i.e. four lane lines in total. The state vector of the initial lane model is composed of the lane line parameters corresponding to the first group of lanes, the lane line parameters corresponding to the second group of lanes, and the lateral offsets of the four lane lines.
304: and the vehicle-mounted equipment carries out initial assignment on the lane line parameters in the lane model to obtain an intermediate lane model.
Wherein the initial assignment of lane line parameters is determined by the width of the lane itself.
For example, the standard width of a lane is known to be 3.5 m. When the lane model is initially assigned, it is assumed that all lane lines are straight lines and belong to the same group lane0, so the corresponding lane line parameters are assigned a value of 0. Assuming that the vehicle is driving in the center of the lane, the lateral offsets of the left and right lane lines on the two sides of the vehicle are 1.75 and -1.75, respectively, the next left and right lane lines are 5.25 and -5.25, respectively, and so on: the lateral offset Y_j of the j-th lane line on the left side of the vehicle is 1.75 + (j-1) × 3.5, and the lateral offset of the j-th lane line on the right side of the vehicle is -(1.75 + (j-1) × 3.5), where j is a positive integer. So when driving on the expressway ramp shown in fig. 3B, the lane model is initially assigned x_0 = [0 0 0 0 5.25 1.75 -1.75 -5.25].
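The initial assignment above can be illustrated with a short sketch. It assumes the standard lane width of 3.5 m and a vehicle centred in its lane, as in the example; the helper name and the way offsets are enumerated are hypothetical.

LANE_WIDTH = 3.5  # standard lane width in metres, taken from the example above

def initial_lateral_offsets(num_left, num_right):
    # Hypothetical initial lateral offsets, with the vehicle assumed centred in its lane:
    # the j-th line on the left is at 1.75 + (j-1)*3.5, the j-th line on the right at the negative.
    left = [LANE_WIDTH / 2 + (j - 1) * LANE_WIDTH for j in range(1, num_left + 1)]
    right = [-(LANE_WIDTH / 2 + (j - 1) * LANE_WIDTH) for j in range(1, num_right + 1)]
    return left, right

# Two lane lines on each side, as in the fig. 3B example:
print(initial_lateral_offsets(2, 2))   # ([1.75, 5.25], [-1.75, -5.25])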
305: and the vehicle-mounted equipment updates the intermediate lane model by adopting lane line characteristics in the target image to obtain an initial lane model.
Firstly, the target image is divided into T sub-images; this is consistent with step 201 and will not be described again.
Then, starting from the first sub-image, the lane line features of each lane line in each sub-image are acquired in turn, the lane line features are fitted to obtain fitting equations, and the intercept of each fitting equation with the y-axis is acquired. The target fitting equation corresponding to a lane line tracked by the intermediate lane model is obtained according to the intercept, where the intercept of the target fitting equation with the y-axis is closest to the distance between that lane line and the vehicle. If the target fitting equation corresponding to the lane line is first acquired in the kth sub-image, the intermediate lane model is updated using the lane line features corresponding to the target fitting equation; after the target fitting equation corresponding to each lane line tracked by the intermediate lane model has been acquired, the updating process shown in fig. 2A is performed with the remaining sub-images to obtain the initial lane model.
For example, the fitting equation whose intercept is closest to 1.75 may be used as the target fitting equation for the left lane line, the fitting equation whose intercept is closest to -1.75 may be used as the target fitting equation for the right lane line, and so on, to determine the target fitting equation for each lane line. Since each sub-image corresponds to a different distance from the vehicle and therefore a different field of view, the first sub-image may contain only the left and right lane lines of the lane in which the vehicle is located, while the outer lane lines may appear only in subsequent sub-images. Therefore, starting from the first sub-image, the target fitting equation corresponding to each lane line is acquired in each sub-image in turn and an updating operation is performed; after the target fitting equations corresponding to all lane lines have been acquired in a certain sub-image, the updating process shown in fig. 2A is performed with the remaining sub-images starting from that sub-image, so as to obtain the initial lane model.
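A minimal sketch of this intercept-based association, assuming each candidate fitting equation from a sub-image is summarised by its y-axis intercept; the data structures and values are illustrative only.

def pick_target_fit(fits, expected_offset):
    # Return the fit whose y-axis intercept is closest to the expected lateral offset.
    # fits: list of (intercept, lane_line_features) pairs from one sub-image (assumed layout)
    # expected_offset: lateral offset of the tracked lane line, e.g. 1.75 or -1.75
    if not fits:
        return None
    return min(fits, key=lambda fit: abs(fit[0] - expected_offset))

# e.g. choose the fit for the left lane line of the vehicle's own lane
fits = [(1.68, "left features"), (-1.80, "right features"), (5.30, "outer-left features")]
print(pick_target_fit(fits, 1.75))   # -> (1.68, "left features")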
306: the vehicle-mounted device ends the creation of the initial lane model.
It can be seen that in this embodiment, when the vehicle is started, an initial lane model is autonomously created to adapt to the current driving scenario, so as to improve traffic safety.
Based on the lane model updating method shown in fig. 2A, when no lane model matches the lane line features of a lane line C in any sub-image, it is determined that the lane line C is in an untracked state, which indicates that a non-parallel lane appears in the preset area in front of the vehicle, and a new lane model needs to be created to track the lane line C.
A method of creating a new lane model is provided below.
Referring to fig. 4, fig. 4 is a flowchart of a method for creating a new lane model according to an embodiment of the present application, including, but not limited to, the following steps:
401: the in-vehicle apparatus determines whether there is an untracked lane line in the target image. Based on the updating method shown in fig. 2A, when i=T, if the f minimum Mahalanobis distances corresponding to a lane line C are all greater than or equal to the distance threshold, N fourth lane models are created according to the N third lane models, where the N fourth lane models are consistent with the lane line parameters of the N third lane models; the f minimum Mahalanobis distances are the minimum Mahalanobis distances corresponding to the lane line features of the lane line C in f sub-images, the f sub-images are the sub-images among the T sub-images that contain the lane line features of the lane line C, f is an integer greater than or equal to 1 and less than or equal to T, and the lane line C is any one lane line in the target image. When it is determined that the lane line C is in an untracked state, step 402 is performed; otherwise, step 406 is performed and no lane model is created.
402: the vehicle-mounted device determines whether the confidence of the untracked lane line is greater than a confidence threshold.
When the confidence is greater than the confidence threshold, it is determined that the untracked lane line is indeed a lane line, and step 403 is executed; if it is less than the confidence threshold, it is determined that the untracked lane line is not actually a lane line, and step 406 is performed.
403: and the vehicle-mounted equipment creates N fourth lane models according to the N third lane models.
The N fourth lane models are copied from the N third lane models.
For example, when the vehicle is traveling in the expressway scene shown in fig. 3B, only one third lane model, denoted x_0, is in the tracking state before the vehicle reaches the on-ramp. Since the lane line y_3 has not been detected before, the lane line parameter corresponding to y_3 is in the inactive state, i.e. 0, so y_3 cannot be tracked. Therefore x_0 is duplicated to obtain a new lane model x_1, in which the group parameter indicates that the lane line y_3 belongs to lane1, and the number of first lane models corresponding to time t is extended to 2.
404: and the vehicle-mounted equipment processes each of the N fourth lane models to obtain N new lane models.
Optionally, processing each of the N fourth lane models to obtain N new lane models may include: acquiring the relative distance between the lane line C and the vehicle; carrying out initial assignment on lane line parameters corresponding to the lane lines C in each fourth lane model according to the relative distance to obtain N fifth lane models; fitting lane line characteristics in each sub-image in the T sub-images to obtain at least one fitting equation; if a target fitting equation is firstly obtained in a kth sub-image in the T sub-images, updating each of the N fourth lane models by adopting lane line characteristics of the lane line C in the kth sub-image to obtain N fourth current lane models, wherein the target fitting equation is a fitting equation with the intercept in the at least one fitting equation and the relative distance smaller than a distance threshold; and starting from the (k+1) th sub-image in the T sub-images, sequentially adopting the lane line characteristics in each sub-image to update the N fourth lane models to obtain the current latest N fourth lane models, and obtaining the N new lane models after updating the current latest N fourth lane models by adopting the lane line characteristics in the T sub-images.
Specifically, which lane line the untracked lane line is, is judged according to the relative distance between the lane line C and the vehicle, and then the lane line parameters corresponding to the untracked lane line are initially assigned; the assignment process can refer to the process shown in step 304. For example, as shown in fig. 3B, since the standard width of a lane is 3.5 m and the relative distance between the lane line C and the vehicle is 5.25 m, the lane line C is determined to be the second lane line on the right side of the vehicle, so the parameters corresponding to the lane line C in each fourth lane model are initially assigned [0 0 0 -5.25], and the other lane line parameters are consistent with the lane line parameters in the third lane model;
then, starting from the first sub-image, all lane line features in each sub-image are fitted to obtain at least one fitting equation. A sub-image is determined to contain the lane line features of the lane line C when the difference between the intercept of a fitting equation and the relative distance is smaller than the threshold; the kth sub-image, in which the target fitting equation is obtained first, is thereby determined, and an updating operation is performed using the lane line features of the lane line C in the kth sub-image to obtain the current latest fourth lane models. Finally, from the (k+1)th sub-image to the last sub-image, the N current latest fourth lane models are updated in sequence; the specific updating process is consistent with that shown in fig. 2A and is not repeated.
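A rough sketch of the duplication-and-assignment idea in steps 403 and 404, assuming a lane model is represented as a flat list of parameters and the index of the untracked lane line's lateral offset is known; the state layout and names are hypothetical.

import copy

def spawn_new_lane_model(third_model, offset_index, signed_distance):
    # Step 403: duplicate the existing third lane model.
    fourth_model = copy.deepcopy(third_model)
    # Step 404: initially assign the lateral offset of the untracked lane line C.
    fourth_model[offset_index] = signed_distance
    return fourth_model

# e.g. lane line C detected 5.25 m to the right of the vehicle (-5.25 in the assumed convention)
x0 = [0.0, 0.0, 0.0, 0.0, 5.25, 1.75, -1.75, 0.0]   # last entry inactive before the ramp
x1 = spawn_new_lane_model(x0, 7, -5.25)
print(x1)   # [..., -5.25]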
405: and the vehicle-mounted equipment takes the N fourth lane models and the N new lane models as the first lane model at the moment t.
406: the vehicle-mounted device ends the creation of the new lane model.
It can be seen that in this embodiment, at time t, when an untracked lane line appears in the preset area in front of the vehicle, it is determined that a non-parallel lane appears on the road surface on which the vehicle is driving, and none of the current lane models can track the lane lines in the non-parallel lane, so a new lane model needs to be created to track the untracked lane line, thereby avoiding missed detection of lane lines.
Based on the above method for creating lane models, more and more new lane models are created as the driving time of the vehicle increases, the number of lane models managed by the vehicle-mounted device tends to saturate, and this burdens the computation when tracking lane lines. Therefore, the lane models need to be managed, and some irrelevant lane models need to be deleted. A method for managing lane models is provided below.
Referring to fig. 5, fig. 5 provides a flowchart of a method for managing a lane model according to an embodiment of the present application, including, but not limited to, the following steps:
501: and the vehicle-mounted equipment calculates the adaptation probability of each third lane model in the N third lane models according to the probability parameters.
The probability parameters comprise the target matching probability, the prior probability and the adaptation probability of the first lane model B at time t-1; the adaptation probability of the third lane model B' is the product of the target matching probability, the prior probability and the adaptation probability of the first lane model B at time t-1.
The first lane model B is any one of the N first lane models, and the third lane model B' is a third lane model obtained by updating the lane model B.
The matching probability may be P(Z_t | θ_k, Θ_{t-1}, Z_{t-1}), the prior probability may be P(θ_k | Θ_{t-1}, Z_{t-1}), and the probability of the first lane model B at time t-1 may be P(Θ_{t-1} | Z_{t-1}); the probability of the third lane model B' may be P(Θ_t | Z_t).
The target matching probability is used to characterize the degree of matching between all lane lines at time t and the third lane model B'. Specifically, the target matching probability is the product of W first matching probabilities, where the W first matching probabilities are the first matching probabilities corresponding to W target lane line features, the W target lane line features are the lane line features matched with the current latest lane model, and the current latest lane model is the lane model obtained after each updating operation performed on the first lane model B. Each first matching probability is obtained from the observation vector corresponding to the target lane line feature, the minimum Mahalanobis distance corresponding to the target lane line feature, and the predicted observation vector corresponding to that minimum Mahalanobis distance.
Specifically, each time a lane model is updated, a target lane line feature is acquired, and the minimum Mahalanobis distance corresponding to that target lane line feature and the predicted observation vector corresponding to the minimum Mahalanobis distance are determined; the measurement residual covariance matrix S between the observation vector of the target lane line feature and the predicted observation vector is then determined; the covariance matrix S is taken as the variance of a Gaussian distribution and the Mahalanobis distance between the observation vector and the predicted observation vector is taken as the value (x-u) of the Gaussian distribution, and the probability under this Gaussian distribution is obtained and taken as the first matching probability corresponding to the target lane line feature.
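The first matching probability described above can be sketched as evaluating a Gaussian density built from the measurement residual covariance S; the function below is a simplified multivariate illustration under that assumption, not the exact formula of the embodiment, and the trailing comment on the product is likewise a paraphrase.

import numpy as np

def first_matching_probability(z, z_pred, S):
    # Hypothetical first matching probability of one target lane line feature:
    # a Gaussian density with covariance S evaluated at the observation residual.
    diff = z - z_pred
    m2 = float(diff.T @ np.linalg.inv(S) @ diff)                  # squared Mahalanobis distance
    norm = np.sqrt(((2 * np.pi) ** len(z)) * np.linalg.det(S))    # Gaussian normalisation
    return np.exp(-0.5 * m2) / norm

# The target matching probability is then the product over the W matched features, and the
# adaptation probability further multiplies in the prior probability and the t-1 probability.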
The prior probability is used to characterize the source of the third lane model B'; the source includes that the third lane model B' is obtained by performing an update operation on the first lane model B. In addition, when the adaptation probability of a newly created lane model is calculated, the corresponding prior probability is set to τ, and the probability corresponding to time t-1 is set to a preset value.
502: and the vehicle-mounted equipment obtains a target third lane model according to the adaptation probability of each third lane model.
The target third lane model is a third lane model whose adaptation probability is smaller than a probability threshold among the N third lane models.
Wherein the probability threshold may be 0.7, 0.8, or other value.
503: and deleting the target third lane model with the tracking time length larger than the time length threshold value by the vehicle-mounted equipment.
Wherein the duration threshold may be 1 minute, 5 minutes, or other value.
Alternatively, referring to fig. 6, the numbers in the boxes of fig. 6 indicate the lane group to which each lane line belongs: the number 0 indicates that the lane line belongs to lane0, and the number 1 indicates that the lane line belongs to lane1. At time t0, only one lane model is running, and the four lane lines are tracked with this lane model. At time t1, when it is detected that this lane model cannot track the fourth lane line, a new lane model is created. At time t2, when the two lane models existing at time t1 cannot track the third lane line, two new lane models need to be created again on the basis of t1, so four models are running at time t2; in this way, more and more lane models are generated. Therefore, the vehicle-mounted device calculates the adaptation probability of each lane model at time t and deletes the lane models whose adaptation probability is lower than the probability threshold, so as to improve the calculation speed of the vehicle-mounted device.
Optionally, since a newly created lane model has only just started running and is not necessarily adapted to the current driving scene, a certain protection period is set for the new lane model. That is, when the vehicle-mounted device deletes lane models, only lane models outside the protection period are deleted; for a new lane model within the protection period, only its adaptation probability is calculated, and the new lane model is not deleted even if the adaptation probability is smaller than the probability threshold.
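A minimal sketch of the pruning policy in steps 501 to 503 combined with the protection period, assuming each lane model record carries its adaptation probability, tracking duration and age since creation; the field names and threshold values are illustrative, not the patented parameters.

def prune_lane_models(models, prob_threshold=0.7, duration_threshold=60.0,
                      protection_period=5.0):
    # models: list of dicts with keys 'adapt_prob', 'tracked_s', 'age_s' (assumed layout)
    kept = []
    for m in models:
        in_protection = m['age_s'] < protection_period       # newly created models are protected
        poorly_adapted = m['adapt_prob'] < prob_threshold     # step 502
        long_tracked = m['tracked_s'] > duration_threshold    # step 503
        if poorly_adapted and long_tracked and not in_protection:
            continue                                          # delete this lane model
        kept.append(m)
    return kept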
Referring to fig. 7, fig. 7 is a lane tracking apparatus according to an embodiment of the present application, which may include:
the prediction unit 710 is configured to predict each of N first lane models at time t-1 according to a driving parameter of a vehicle at time t to obtain N second lane models, where the first lane models are used for tracking multiple groups of lanes, lanes a and lanes B are not parallel, the lanes a and the lanes B are lanes in any two groups of the multiple groups of lanes respectively, lanes included in each group of lanes in the multiple groups of lanes are parallel, and N is an integer greater than or equal to 1;
the updating unit 720 is configured to update the N second lane models according to lane line features in a target image, so as to obtain N third lane models, where the target image is an image of a preset area in front of the vehicle at the time t;
a calculating unit 730, configured to calculate, according to a probability parameter, an adaptation probability of each third lane model of the N third lane models, where the adaptation probability is used to characterize a degree of adaptation of the third lane model to a lane line of the vehicle driving road surface;
and the tracking unit 740 is configured to determine a third lane model with the largest adaptation probability among the N third lane models, where the third lane model with the largest adaptation probability is used to track a lane line of the vehicle driving road surface.
In some possible embodiments, in predicting each of the N first lane models at time t-1 according to the driving parameters of the vehicle to obtain N second lane models, the prediction unit 710 is specifically configured to: obtain a prediction matrix according to the driving parameters of the vehicle; and predict each of the N first lane models at time t-1 according to the prediction matrix to obtain the N second lane models.
In some possible embodiments, in updating the N second lane models according to the lane line characteristics in the target image to obtain N third lane models, the updating unit 720 is specifically configured to:
dividing the target image into T sub-images, wherein the distance between the region corresponding to the ith sub-image and the vehicle is smaller than the distance between the region corresponding to the (i+1) th sub-image and the vehicle, i is an integer, i is more than or equal to 1 and less than or equal to T, and T is an integer which is more than or equal to 2;
acquiring lane line characteristics in the ith sub-image;
selecting U target lane models matched with the lane line characteristics of the ith sub-image from N first reference lane models, executing an ith updating operation on each of the U target lane models according to the lane line characteristics to obtain U first reference lane models, wherein an ith updating result comprises the U updated first reference lane models, when i=1, the N first reference lane models are the N second lane models, and when i is larger than 1, the N first reference lane models are the i-1 th updating results, and U is more than or equal to 0 and less than or equal to N;
And when i=T, the T-th updating result obtained after the T-th updating operation is executed is the N third lane models.
In some possible embodiments, in selecting U target lane models matching the lane line features of the ith sub-image from the N first reference lane models, the updating unit 720 is specifically configured to:
under an image coordinate system, obtaining an observation vector of a lane line in the target image according to the lane line characteristics;
under a vehicle coordinate system, obtaining M predictive observation vectors corresponding to lane lines in the target image and a lane model A according to the lane line characteristics, wherein M is the number of the lane lines tracked by the lane model A, the lane model A is any one of the N first reference lane models, and M is an integer greater than or equal to 1;
determining M Mahalanobis distances corresponding to the observation vector and the M predictive observation vectors;
and determining the minimum Mahalanobis distance among the M Mahalanobis distances, and determining the lane model A as a target lane model matched with the lane line characteristics of the ith sub-image when the minimum Mahalanobis distance is smaller than a distance threshold.
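As an illustration of this matching step, the sketch below computes the Mahalanobis distance between one observation vector and each of the M predicted observation vectors and accepts the lane model only if the minimum distance is below the threshold; the residual covariances and all names are assumptions, not the patented implementation.

import numpy as np

def matches_lane_model(z, z_preds, S_list, dist_threshold):
    # z: observation vector derived from the lane line features of the i-th sub-image
    # z_preds: M predicted observation vectors of lane model A
    # S_list: M residual covariance matrices (assumed available from the filter)
    dists = []
    for z_pred, S in zip(z_preds, S_list):
        diff = z - z_pred
        dists.append(float(np.sqrt(diff.T @ np.linalg.inv(S) @ diff)))  # Mahalanobis distance
    d_min = min(dists)
    # Lane model A is a target lane model when the minimum distance is under the threshold.
    return d_min < dist_threshold, int(np.argmin(dists)), d_min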
In some possible embodiments, the apparatus further comprises a creation unit 750; the creation unit 750 is configured to:
when i=T, if f minimum Mahalanobis distances corresponding to a lane line C in the target image are all greater than or equal to the distance threshold, creating N fourth lane models according to the N third lane models, where the N fourth lane models are consistent with lane line parameters of the N third lane models, the lane line C is any one lane line in the target image, the f minimum Mahalanobis distances are minimum Mahalanobis distances corresponding to lane line features of the lane line C in f sub-images, the f sub-images are sub-images containing the lane line features of the lane line C in the T sub-images, and f is an integer greater than or equal to 1, and f is less than or equal to T;
processing each of the N fourth lane models to obtain N new lane models;
and taking the N fourth lane models and the N new lane models as the first lane model at the moment t.
In some possible embodiments, in processing each of the N fourth lane models to obtain N new lane models, the creating unit 750 is specifically configured to:
Acquiring the relative distance between the lane line C and the vehicle;
carrying out initial assignment on lane line parameters corresponding to the lane lines C in each fourth lane model according to the relative distance to obtain N fifth lane models;
fitting lane line characteristics of each sub-image in the T sub-images to obtain at least one fitting equation;
if a target fitting equation is firstly obtained in a kth sub-image in the T sub-images, updating each of the N fourth lane models by adopting lane line characteristics of the lane line C in the kth sub-image to obtain N fourth current lane models, wherein the target fitting equation is a fitting equation with the intercept in the at least one fitting equation and the relative distance smaller than a distance threshold;
and starting from the (k+1) th sub-image in the T sub-images, sequentially adopting the lane line characteristics in each sub-image to update the N fourth lane models to obtain the current latest N fourth lane models, and obtaining the N new lane models after updating the current latest N fourth lane models by adopting the lane line characteristics in the T sub-images.
In some possible embodiments, the probability parameters include a target matching probability, a priori probability, and an adaptation probability of the first lane model B at the time t-1; the adaptation probability of the third lane model B' is obtained by the target matching probability, the prior probability and the adaptation probability of the first lane model B at the t-1 moment; the first lane model B is any one of the N first lane models;
the target matching probability is used for representing the matching degree of all lane lines in the target image and the third lane model B';
the prior probability is used to characterize the origin of the third lane model B', including that the third lane model B' was obtained by performing an update operation on the first lane model B.
In some possible embodiments, the apparatus further comprises a deletion unit 760, the deletion unit 760 being configured to:
obtaining a target third lane model according to the adaptation probability of each third lane model, wherein the target third lane model is a third lane model with the adaptation probability smaller than a probability threshold value in the N third lane models;
and deleting the target third lane model with the tracking time length greater than a time length threshold, wherein the tracking time length is the total time length of the target third lane model from the creation time to the t time.
Referring to fig. 8, an embodiment of the present application provides a lane line tracking apparatus 800, including:
a processor 830, a communication interface 820, and a memory 810 that are coupled to each other; for example, the processor 830, the communication interface 820, and the memory 810 are coupled via a bus 840.
The memory 810 may include, but is not limited to, random access memory (Random Access Memory, RAM), erasable programmable read-only memory (Erasable Programmable ROM, EPROM), read-only memory (Read-Only Memory, ROM), or portable read-only memory (Compact Disc Read-Only Memory, CD-ROM), etc.; the memory 810 is used for storing associated instructions and data.
The processor 830 may be one or more central processing units (Central Processing Unit, CPU), and in the case where the processor 830 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 830 is configured to read the program code stored in the memory 810 and cooperate with the communication interface 820 to perform some or all of the steps of the method performed by the lane line tracking apparatus 800 in the above-described embodiments of the present application.
For example, the communication interface 820 is configured to receive a vehicle driving parameter at time t;
the processor 830 is configured to predict each of N first lane models at time t-1 according to a driving parameter of a vehicle at time t to obtain N second lane models, where the first lane models are used to track multiple groups of lanes, lanes a and lanes B are not parallel, the lanes a and the lanes B are lanes in any two groups of the multiple groups of lanes respectively, lanes included in each group of lanes are parallel to each other, and N is an integer greater than or equal to 1;
The processor 830 is further configured to update the N second lane models according to lane line characteristics in a target image, to obtain N third lane models, where the target image is an image of a preset area in front of the vehicle at the time t;
the processor 830 is further configured to calculate, according to a probability parameter, an adaptation probability of each of the N third lane models, where the adaptation probability is used to characterize a degree of adaptation of the third lane model to a lane line of the vehicle driving road surface;
the processor 830 is further configured to determine a third lane model with a largest adaptation probability among the N third lane models, where the third lane model with the largest adaptation probability is used to track a lane line of the vehicle driving road surface.
In some possible embodiments, in predicting each of the N first lane models at time t-1 according to the driving parameters of the vehicle to obtain N second lane models, the processor 830 is specifically configured to: obtain a prediction matrix according to the driving parameters of the vehicle; and predict each of the N first lane models at time t-1 according to the prediction matrix to obtain the N second lane models.
In some possible embodiments, the processor 830 is specifically configured to, when updating the N second lane models according to the lane line characteristics in the target image, obtain N third lane models:
dividing the target image into T sub-images, wherein the distance between the region corresponding to the ith sub-image and the vehicle is smaller than the distance between the region corresponding to the (i+1) th sub-image and the vehicle, i is an integer, i is more than or equal to 1 and less than or equal to T, and T is an integer which is more than or equal to 2;
acquiring lane line characteristics in the ith sub-image;
selecting U target lane models matched with the lane line characteristics of the ith sub-image from N first reference lane models, executing an ith updating operation on each of the U target lane models according to the lane line characteristics to obtain U first reference lane models, wherein an ith updating result comprises the U updated first reference lane models, when i=1, the N first reference lane models are the N second lane models, and when i is larger than 1, the N first reference lane models are the i-1 th updating results, and U is more than or equal to 0 and less than or equal to N;
and when i=T, the T-th updating result obtained after the T-th updating operation is executed is the N third lane models.
In some possible embodiments, the processor 830 is specifically configured to, in selecting, from the N first reference lane models, U target lane models that match the lane line features of the i-th sub-image:
under an image coordinate system, obtaining an observation vector of a lane line in the target image according to the lane line characteristics;
under a vehicle coordinate system, obtaining M predictive observation vectors corresponding to lane lines in the target image and a lane model A according to the lane line characteristics, wherein M is the number of the lane lines tracked by the lane model A, the lane model A is any one of the N first reference lane models, and M is an integer greater than or equal to 1;
determining M Mahalanobis distances corresponding to the observation vector and the M predictive observation vectors;
and determining the minimum Mahalanobis distance among the M Mahalanobis distances, and determining the lane model A as a target lane model matched with the lane line characteristics of the ith sub-image when the minimum Mahalanobis distance is smaller than a distance threshold.
In some possible implementations, the processor 830 is further configured to:
when i=T, if f minimum Mahalanobis distances corresponding to a lane line C in the target image are all greater than or equal to the distance threshold, creating N fourth lane models according to the N third lane models, where the N fourth lane models are consistent with lane line parameters of the N third lane models, the lane line C is any one lane line in the target image, the f minimum Mahalanobis distances are minimum Mahalanobis distances corresponding to lane line features of the lane line C in f sub-images, the f sub-images are sub-images containing the lane line features of the lane line C in the T sub-images, and f is an integer greater than or equal to 1, and f is less than or equal to T;
Processing each of the N fourth lane models to obtain N new lane models;
and taking the N fourth lane models and the N new lane models as the first lane model at the moment t.
In some possible embodiments, in processing each of the N fourth lane models to obtain N new lane models, the processor 830 is specifically configured to:
acquiring the relative distance between the lane line C and the vehicle;
carrying out initial assignment on lane line parameters corresponding to the lane lines C in each fourth lane model according to the relative distance to obtain N fifth lane models;
fitting lane line characteristics of each sub-image in the T sub-images to obtain at least one fitting equation;
if a target fitting equation is firstly obtained in a kth sub-image in the T sub-images, updating each of the N fourth lane models by adopting lane line characteristics of the lane line C in the kth sub-image to obtain N fourth current lane models, wherein the target fitting equation is a fitting equation with the intercept in the at least one fitting equation and the relative distance smaller than a distance threshold;
And starting from the (k+1) th sub-image in the T sub-images, sequentially adopting the lane line characteristics in each sub-image to update the N fourth lane models to obtain the current latest N fourth lane models, and obtaining the N new lane models after updating the current latest N fourth lane models by adopting the lane line characteristics in the T sub-images.
In some possible embodiments, the probability parameters include a target matching probability, a priori probability, and an adaptation probability of the first lane model B at the time t-1; the adaptation probability of the third lane model B' is obtained by the target matching probability, the prior probability and the adaptation probability of the first lane model B at the t-1 moment; the first lane model B is any one of the N first lane models;
the target matching probability is used for representing the matching degree of all lane lines in the target image and the third lane model B';
the prior probability is used for representing the source of the third lane model B ', wherein the source comprises the third lane model B' which is obtained by performing update operation on the first lane model B;
In some possible implementations, the processor 830 is further configured to:
obtaining a target third lane model according to the adaptation probability of each third lane model, wherein the target third lane model is a third lane model with the adaptation probability smaller than a probability threshold value in the N third lane models;
and deleting the target third lane model with the tracking time length greater than a time length threshold, wherein the tracking time length is the total time length of the target third lane model from the creation time to the t time.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., optical disk), or a semiconductor medium (e.g., solid state disk), etc. In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional divisions of actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the indirect coupling or direct coupling or communication connection between the illustrated or discussed devices and units may be through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units described above may be implemented either in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application. The aforementioned storage medium may include, for example: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.

Claims (18)

1. A lane line tracking method, comprising:
at time t, predicting each of N first lane models at time t-1 according to running parameters of a vehicle to obtain N second lane models, wherein the first lane models are used for tracking multiple groups of lanes, lanes A and lanes B are not parallel, the lanes A and the lanes B are lanes in any two groups of the multiple groups of lanes respectively, lanes contained in each group of lanes in the multiple groups of lanes are parallel, and N is an integer greater than or equal to 1;
updating the N second lane models according to lane line characteristics in a target image to obtain N third lane models, wherein the target image is an image of a preset area in front of the vehicle at the moment t;
calculating the adaptation probability of each third lane model in the N third lane models according to probability parameters, wherein the adaptation probability is used for representing the adaptation degree of the third lane model and the lane line of the vehicle driving road surface;
and determining a third lane model with the largest adaptation probability among the N third lane models, wherein the third lane model with the largest adaptation probability is used for tracking a lane line of the vehicle driving road surface.
2. The method according to claim 1, wherein predicting each of the N first lane models at time t-1 according to the driving parameters of the vehicle to obtain N second lane models includes:
obtaining a prediction matrix according to the running parameters of the vehicle;
and predicting each of the N first lane models at time t-1 according to the prediction matrix to obtain the N second lane models.
3. The method according to claim 1 or 2, wherein updating the N second lane models according to the lane line features in the target image to obtain N third lane models includes:
dividing the target image into T sub-images, wherein the distance between the region corresponding to the ith sub-image and the vehicle is smaller than the distance between the region corresponding to the (i+1) th sub-image and the vehicle, i is an integer, i is more than or equal to 1 and less than or equal to T, and T is an integer which is more than or equal to 2;
acquiring lane line characteristics in the ith sub-image;
selecting U target lane models matched with the lane line characteristics of the ith sub-image from N first reference lane models, executing an ith updating operation on each of the U target lane models according to the lane line characteristics to obtain U first reference lane models, wherein an ith updating result comprises the U updated first reference lane models, when i=1, the N first reference lane models are the N second lane models, and when i is larger than 1, the N first reference lane models are the i-1 th updating results, and U is more than or equal to 0 and less than or equal to N;
And when i=T, the T-th updating result obtained after the T-th updating operation is executed is the N third lane models.
4. The method of claim 3, wherein selecting U target lane models from the N first reference lane models that match lane line features of the i-th sub-image comprises:
under an image coordinate system, obtaining an observation vector of a lane line in the target image according to the lane line characteristics;
under a vehicle coordinate system, obtaining M predictive observation vectors corresponding to lane lines in the target image and a lane model A according to the lane line characteristics, wherein M is the number of the lane lines tracked by the lane model A, the lane model A is any one of the N first reference lane models, and M is an integer greater than or equal to 1;
determining M Mahalanobis distances corresponding to the observation vector and the M predictive observation vectors;
and determining the minimum Mahalanobis distance among the M Mahalanobis distances, and determining the lane model A as a target lane model matched with the lane line characteristics of the ith sub-image when the minimum Mahalanobis distance is smaller than a distance threshold.
5. The method according to claim 4, wherein the method further comprises:
when i=T, if f minimum Mahalanobis distances corresponding to a lane line C in the target image are all greater than or equal to the distance threshold, creating N fourth lane models according to the N third lane models, where the N fourth lane models are consistent with lane line parameters of the N third lane models, the lane line C is any one lane line in the target image, the f minimum Mahalanobis distances are minimum Mahalanobis distances corresponding to lane line features of the lane line C in f sub-images, the f sub-images are sub-images containing the lane line features of the lane line C in the T sub-images, and f is an integer greater than or equal to 1, and f is less than or equal to T;
processing each of the N fourth lane models to obtain N new lane models;
and taking the N fourth lane models and the N new lane models as the first lane model at the moment t.
6. The method of claim 5, wherein processing each of the N fourth lane models to obtain N new lane models comprises:
Acquiring the relative distance between the lane line C and the vehicle;
carrying out initial assignment on lane line parameters corresponding to the lane lines C in each fourth lane model according to the relative distance to obtain N fifth lane models;
fitting lane line characteristics in each sub-image in the T sub-images to obtain at least one fitting equation;
if a target fitting equation is firstly obtained in a kth sub-image in the T sub-images, updating each of the N fourth lane models by adopting lane line characteristics of the lane line C in the kth sub-image to obtain N fourth current lane models, wherein the target fitting equation is a fitting equation with the intercept in the at least one fitting equation and the relative distance smaller than a distance threshold;
and starting from the (k+1) th sub-image in the T sub-images, sequentially adopting the lane line characteristics in each sub-image to update the N fourth lane models to obtain the current latest N fourth lane models, and obtaining the N new lane models after updating the current latest N fourth lane models by adopting the lane line characteristics in the T sub-images.
7. The method according to any of claims 4-6, wherein the probability parameters include a target matching probability, a priori probability, and an adaptation probability of the first lane model B at the time instant t-1; the adaptation probability of the third lane model B' is obtained by the target matching probability, the prior probability and the adaptation probability of the first lane model B at the t-1 moment; the first lane model B is any one of the N first lane models;
the target matching probability is used for representing the matching degree of all lane lines in the target image and the third lane model B';
the prior probability is used to characterize the origin of the third lane model B ', including that the third lane model B' was obtained by performing an update operation on the first lane model B.
8. The method according to any one of claims 4-6, further comprising:
obtaining a target third lane model according to the adaptation probability of each third lane model, wherein the target third lane model is a third lane model with the adaptation probability smaller than a probability threshold value in the N third lane models;
and deleting the target third lane model with the tracking time length greater than a time length threshold, wherein the tracking time length is the total time length of the target third lane model from the creation time to the t time.
9. A lane line tracking apparatus, comprising:
the prediction unit is used for predicting each of N first lane models at the time t-1 according to the running parameters of the vehicle to obtain N second lane models, wherein the first lane models are used for tracking a plurality of groups of lanes, the lanes A and the lanes B are not parallel, the lanes A and the lanes B are lanes in any two groups of the plurality of groups of lanes respectively, the lanes contained in each group of lanes in the plurality of groups of lanes are parallel, and N is an integer greater than or equal to 1;
the updating unit is used for updating the N second lane models according to lane line characteristics in a target image to obtain N third lane models, wherein the target image is an image of a preset area in front of the vehicle at the moment t;
the calculation unit is used for calculating the adaptation probability of each third lane model in the N third lane models according to the probability parameters, wherein the adaptation probability is used for representing the adaptation degree of the third lane model and the lane line of the vehicle driving road surface;
the tracking unit is used for determining a third lane model with the largest adaptation probability among the N third lane models, and the third lane model with the largest adaptation probability is used for tracking the lane line of the vehicle driving road surface.
10. The apparatus of claim 9, wherein,
in the aspect of predicting each of the N first lane models at time t-1 according to the running parameters of the vehicle to obtain N second lane models, the prediction unit is specifically configured to: obtain a prediction matrix according to the running parameters of the vehicle; and predict each of the N first lane models at time t-1 according to the prediction matrix to obtain the N second lane models.
11. The device according to claim 9 or 10, wherein,
the updating unit is specifically configured to, in terms of updating the N second lane models according to the lane line characteristics in the target image to obtain N third lane models:
dividing the target image into T sub-images, wherein the distance between the region corresponding to the ith sub-image and the vehicle is smaller than the distance between the region corresponding to the (i+1) th sub-image and the vehicle, i is an integer, i is more than or equal to 1 and less than or equal to T, and T is an integer which is more than or equal to 2;
acquiring lane line characteristics in the ith sub-image;
selecting U target lane models matched with the lane line characteristics of the ith sub-image from N first reference lane models, executing an ith updating operation on each of the U target lane models according to the lane line characteristics to obtain U first reference lane models, wherein an ith updating result comprises the U updated first reference lane models, when i=1, the N first reference lane models are the N second lane models, and when i is larger than 1, the N first reference lane models are the i-1 th updating results, and U is more than or equal to 0 and less than or equal to N;
And when i=T, the T-th updating result obtained after the T-th updating operation is executed is the N third lane models.
12. The apparatus of claim 11, wherein,
in the aspect of selecting U target lane models matched with the lane line characteristics of the ith sub-image from N first reference lane models, the updating unit is specifically configured to:
under an image coordinate system, obtaining an observation vector of a lane line in the target image according to the lane line characteristics;
under a vehicle coordinate system, obtaining M predictive observation vectors corresponding to lane lines in the target image and a lane model A according to the lane line characteristics, wherein M is the number of the lane lines tracked by the lane model A, the lane model A is any one of the N first reference lane models, and M is an integer greater than or equal to 1;
determining M Mahalanobis distances corresponding to the observation vector and the M predictive observation vectors;
and determining the minimum Mahalanobis distance among the M Mahalanobis distances, and determining the lane model A as a target lane model matched with the lane line characteristics of the ith sub-image when the minimum Mahalanobis distance is smaller than a distance threshold.
13. The apparatus of claim 12, further comprising a creation unit;
the creating unit is configured to: when i=T, if f minimum Mahalanobis distances corresponding to a lane line C in the target image are all greater than or equal to the distance threshold, create N fourth lane models according to the N third lane models, where the N fourth lane models are consistent with lane line parameters of the N third lane models, the lane line C is any one lane line in the target image, the f minimum Mahalanobis distances are minimum Mahalanobis distances corresponding to lane line features of the lane line C in f sub-images, the f sub-images are sub-images including the lane line features of the lane line C in the T sub-images, and f is an integer greater than or equal to 1, and f is less than or equal to T;
processing each of the N fourth lane models to obtain N new lane models;
and taking the N fourth lane models and the N new lane models as the first lane model at the moment t.
14. The apparatus of claim 13, wherein,
in the aspect of processing each of the N fourth lane models to obtain N new lane models, the creating unit is specifically configured to:
Acquiring the relative distance between the lane line C and the vehicle;
carrying out initial assignment on lane line parameters corresponding to the lane lines C in each fourth lane model according to the relative distance to obtain N fifth lane models;
fitting lane line characteristics of each sub-image in the T sub-images to obtain at least one fitting equation;
if a target fitting equation is firstly obtained in a kth sub-image in the T sub-images, updating each of the N fourth lane models by adopting lane line characteristics of the lane line C in the kth sub-image to obtain N fourth current lane models, wherein the target fitting equation is a fitting equation with the intercept in the at least one fitting equation and the relative distance smaller than a distance threshold;
and starting from the (k+1) th sub-image in the T sub-images, sequentially adopting the lane line characteristics in each sub-image to update the N fourth lane models to obtain the current latest N fourth lane models, and obtaining the N new lane models after updating the current latest N fourth lane models by adopting the lane line characteristics in the T sub-images.
15. The apparatus according to any of claims 12-14, wherein the probability parameters include a target matching probability, a priori probability, and an adaptation probability of the first lane model B at the time instant t-1; the adaptation probability of the third lane model B' is obtained by the target matching probability, the prior probability and the adaptation probability of the first lane model B at the t-1 moment; the first lane model B is any one of the N first lane models;
the target matching probability is used for representing the matching degree of all lane lines in the target image and the third lane model B';
the prior probability is used to characterize the origin of the third lane model B ', including that the third lane model B' was obtained by performing an update operation on the first lane model B.
16. The apparatus according to any one of claims 12-14, further comprising a deletion unit;
the deleting unit is configured to obtain a target third lane model according to the adaptation probability of each third lane model, where the target third lane model is a third lane model with an adaptation probability smaller than a probability threshold value in the N third lane models;
And deleting the target third lane model with the tracking time length greater than a time length threshold, wherein the tracking time length is the total time length of the target third lane model from the creation time to the t time.
17. A lane line tracking apparatus, comprising:
the device comprises a processor, a communication interface and a memory, wherein the processor, the communication interface and the memory are connected through electric signals;
the processor is used for predicting each of N first lane models at the time t-1 according to the running parameters of the vehicle to obtain N second lane models, the first lane models are used for tracking multiple groups of lanes, the lanes A and the lanes B are not parallel, the lanes A and the lanes B are lanes in any two groups of the multiple groups of lanes respectively, lanes contained in each group of lanes in the multiple groups of lanes are parallel, and N is an integer greater than or equal to 1;
the processor is further configured to update the N second lane models according to lane line features in a target image, so as to obtain N third lane models, where the target image is an image of a preset area in front of the vehicle at the time t;
the processor is further used for calculating the adaptation probability of each third lane model in the N third lane models according to probability parameters, wherein the adaptation probability is used for representing the adaptation degree of the third lane model and the lane line of the vehicle driving road surface;
The processor is further configured to determine a third lane model with a largest adaptation probability among the N third lane models, where the third lane model with the largest adaptation probability is used to track a lane line of the vehicle driving road surface.
18. A computer readable storage medium, characterized in that a computer program is stored, which computer program is executed by hardware to implement the method of any one of claims 1 to 8.
CN201910719667.XA 2019-07-31 2019-07-31 Lane line tracking method and related product Active CN110503009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910719667.XA CN110503009B (en) 2019-07-31 2019-07-31 Lane line tracking method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910719667.XA CN110503009B (en) 2019-07-31 2019-07-31 Lane line tracking method and related product

Publications (2)

Publication Number Publication Date
CN110503009A CN110503009A (en) 2019-11-26
CN110503009B true CN110503009B (en) 2023-06-06

Family

ID=68587954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910719667.XA Active CN110503009B (en) 2019-07-31 2019-07-31 Lane line tracking method and related product

Country Status (1)

Country Link
CN (1) CN110503009B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929655B (en) * 2019-11-27 2023-04-14 厦门金龙联合汽车工业有限公司 Lane line identification method in driving process, terminal device and storage medium
CN113885045A (en) * 2020-07-03 2022-01-04 华为技术有限公司 Method and device for detecting lane line
CN111994067A (en) * 2020-09-03 2020-11-27 南京维思科汽车科技有限公司 Intelligent safety control system and method for dealing with vehicle tire burst
CN114264310A (en) * 2020-09-14 2022-04-01 阿里巴巴集团控股有限公司 Positioning and navigation method, device, electronic equipment and computer storage medium
CN112507857B (en) * 2020-12-03 2022-03-15 腾讯科技(深圳)有限公司 Lane line updating method, device, equipment and storage medium
CN112884801A (en) * 2021-02-02 2021-06-01 普联技术有限公司 High altitude parabolic detection method, device, equipment and storage medium
CN113959447B (en) * 2021-10-19 2023-06-27 北京京航计算通讯研究所 Relative navigation high noise measurement identification method, device, equipment and storage medium
CN114973180B (en) * 2022-07-18 2022-11-01 福思(杭州)智能科技有限公司 Lane line tracking method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8055445B2 (en) * 2008-09-24 2011-11-08 Delphi Technologies, Inc. Probabilistic lane assignment method
JP6134276B2 (en) * 2014-03-03 2017-05-24 株式会社Soken Traveling line recognition device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108216229A (en) * 2017-09-08 2018-06-29 北京市商汤科技开发有限公司 The vehicles, road detection and driving control method and device
CN109145860A (en) * 2018-09-04 2019-01-04 百度在线网络技术(北京)有限公司 Lane line tracking and device
CN109559334A (en) * 2018-11-23 2019-04-02 广州路派电子科技有限公司 Lane line method for tracing based on Kalman filter

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A novel illumination-invariant lane detection system"; Yassin Kortli et al.; 2017 2nd International Conference on Anti-Cyber Crimes; 2017-04-24; full text *
"Lane line detection and tracking method based on an imaging model" (基于成像模型的车道线检测与跟踪方法); Chen Long et al.; China Journal of Highway and Transport (中国公路学报); 2011-11-15 (No. 06); full text *

Also Published As

Publication number Publication date
CN110503009A (en) 2019-11-26

Similar Documents

Publication Publication Date Title
CN110503009B (en) Lane line tracking method and related product
US8891820B2 (en) Multi-modal sensor fusion
JP7179110B2 (en) Positioning method, device, computing device, computer-readable storage medium and computer program
CN110954113B (en) Vehicle pose correction method and device
JP7316310B2 (en) POSITIONING METHOD, APPARATUS, COMPUTING DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
EP3615955A1 (en) Calibration of laser and vision sensors
CN110717927A (en) Indoor robot motion estimation method based on deep learning and visual inertial fusion
WO2021056341A1 (en) Lane line fusion method, lane line fusion apparatus, vehicle, and storage medium
CN110853085B (en) Semantic SLAM-based mapping method and device and electronic equipment
CN110376605B (en) Map construction method, navigation method and device
KR102362470B1 Method and apparatus for processing foot information
CN112991389B (en) Target tracking method and device and mobile robot
CN111274847A (en) Positioning method
CN112949519B (en) Target detection method, device, equipment and storage medium
CN112329749B (en) Point cloud labeling method and labeling equipment
CN113189989B (en) Vehicle intention prediction method, device, equipment and storage medium
CN114924287A (en) Map construction method, apparatus and medium
CN114387576A (en) Lane line identification method, system, medium, device and information processing terminal
CN113223064A (en) Method and device for estimating scale of visual inertial odometer
CN110749325B (en) Flight path planning method and device
CN112487861A (en) Lane line recognition method and device, computing equipment and computer storage medium
García-García et al. 3D visual odometry for road vehicles
CN115991195A (en) Automatic detection and compensation method, device and system for wheel slip in automatic driving
CN114426030B (en) Pedestrian passing intention estimation method, device, equipment and automobile
CN115619954A (en) Sparse semantic map construction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant