CN115320641A - Lane line modeling method and device, electronic equipment and automatic driving vehicle - Google Patents

Lane line modeling method and device, electronic equipment and automatic driving vehicle

Info

Publication number
CN115320641A
CN115320641A (application CN202211080419.3A)
Authority
CN
China
Prior art keywords
lane line
target
observation
modeling
constraint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211080419.3A
Other languages
Chinese (zh)
Inventor
王丕阁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd, Apollo Zhixing Technology Guangzhou Co Ltd filed Critical Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202211080419.3A priority Critical patent/CN115320641A/en
Publication of CN115320641A publication Critical patent/CN115320641A/en
Withdrawn legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/14Adaptive cruise control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a lane line modeling method and device, electronic equipment and an automatic driving vehicle, relating to the field of computer technology, in particular to intelligent transportation and automatic driving. The specific implementation scheme is as follows: acquiring a target lane line in the current frame lane line observation; acquiring an observation result of the target lane line, and determining a target modeling mode of the target lane line based on the observation result, wherein the observation result is used for determining whether the target lane line has high-precision lane line observation and/or perception lane line observation; and modeling the target lane line based on the target modeling mode.

Description

Lane line modeling method and device, electronic equipment and automatic driving vehicle
Technical Field
The disclosure relates to the technical field of computers, particularly to the technical field of intelligent transportation and automatic driving, and particularly relates to a lane line modeling method and device, electronic equipment and an automatic driving vehicle.
Background
In the field of automatic driving, a vehicle needs to complete adaptive cruise according to surrounding lane line information. To ensure the stability of downstream control and planning, the vehicle needs to model the lane lines from lane line observations. Lane line observations generally come from two sources, perception lane lines and high-precision lane lines. Current lane line modeling methods generally model all observations of all lane lines together without distinction, yet in actual scenes either the perception observation or the high-precision observation may carry large noise.
Disclosure of Invention
The disclosure provides a lane line modeling method and device, electronic equipment and an automatic driving vehicle.
According to a first aspect of the present disclosure, there is provided a lane line modeling method, including:
acquiring a target lane line in current frame lane line observation;
acquiring an observation result of the target lane line, and determining a target modeling mode of the target lane line based on the observation result, wherein the observation result is used for determining whether high-precision lane line observation and/or perception lane line observation exist in the target lane line;
and modeling the target lane line based on the target modeling mode.
According to a second aspect of the present disclosure, there is provided a lane line modeling apparatus including:
the acquisition module is used for acquiring a target lane line in the current frame lane line observation;
the determining module is used for acquiring an observation result of the target lane line, and determining a target modeling mode of the target lane line based on the observation result, wherein the observation result is used for determining whether the target lane line has high-precision lane line observation and/or perception lane line observation;
and the modeling module is used for modeling the target lane line based on the target modeling mode.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
According to a sixth aspect of the present disclosure, there is provided an autonomous vehicle configured to perform the method of the first aspect.
In the embodiment of the disclosure, after a target lane line is obtained, the observation result of the target lane line is obtained, the observation result indicating whether high-precision lane line observation and/or perception lane line observation exists; a target modeling mode is determined based on the observation result, and the target lane line is then modeled based on the target modeling mode. In this way, the modeling manner is determined according to the observation result of each lane line, making lane line modeling more flexible.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is one of the flow diagrams of a lane line modeling method provided by the embodiment of the present disclosure;
fig. 2 is a second schematic flow chart of a lane line modeling method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a lane line modeling apparatus provided in an embodiment of the present disclosure;
fig. 4 is a block diagram of an electronic device for implementing a lane line modeling method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
For a better understanding, the following explains the related concepts that may be involved in embodiments of the present disclosure.
Perception lane line modeling: the ranging range of a perception lane line is short, generally within 100 m ahead of the vehicle; since the curvature of highway lane lines is generally small, a perception lane line observation can be fitted with a cubic curve.
High-precision lane line modeling: unlike the perception lane line, the high-precision lane line is not range-limited; the observation range is generally selected as required, for example 300 m ahead of the vehicle, and piecewise curve fitting is generally chosen during fitting to reduce fitting error.
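The piecewise fitting idea described above can be sketched as follows. This is a minimal illustration, assuming each segment is fitted independently with `numpy.polyfit`; the 50 m segment length and all function names are this example's choices, not the patent's.

```python
import numpy as np

def piecewise_cubic_fit(xs, ys, segment_len=50.0):
    """Fit a long high-precision lane line observation with one cubic per
    segment to reduce fitting error (segment_len is an illustrative choice).

    xs, ys: longitudinal / lateral coordinates of the observation points.
    Returns a list of (x_start, x_end, coeffs) usable with np.polyval.
    """
    segments = []
    x0, x_max = float(xs.min()), float(xs.max())
    while x0 < x_max:
        x1 = min(x0 + segment_len, x_max)
        mask = (xs >= x0) & (xs <= x1)
        if mask.sum() >= 4:  # a cubic needs at least 4 points
            segments.append((x0, x1, np.polyfit(xs[mask], ys[mask], deg=3)))
        x0 = x1
    return segments
```

A 300 m observation thus yields six independent cubics rather than one global curve, which keeps the per-segment fitting error small.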
The lane line modeling method provided by the embodiment of the present disclosure is described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic flow chart of a lane line modeling method according to an embodiment of the present disclosure, and as shown in fig. 1, the method includes the following steps:
and S101, acquiring a target lane line in the current frame lane line observation.
It should be noted that the lane line modeling method provided by the embodiment of the present disclosure may be applied to electronic devices such as a computer, a mobile phone, a tablet computer, or a vehicle-mounted terminal. The method can be applied in the field of automatic driving: for example, the execution subject of the method may be a vehicle-mounted terminal on an autonomous vehicle, which models the lane lines from lane line observations so as to better realize adaptive cruise. For ease of understanding, in the following embodiments the technical solution provided by the present disclosure is explained with an electronic device as the execution subject of the lane line modeling method.
In this step, the electronic device obtains the current frame lane line observation; for example, it may be obtained from a camera of the electronic device, or from a high-precision map in a map application on the electronic device. It can be understood that the current frame lane line observation may include at least one lane line, and the target lane line may be any one of them. Moreover, the modeling manner for the target lane line in the present disclosure is applicable to any one lane line or all lane lines in the current frame lane line observation; for example, if the current frame lane line observation includes two lane lines, one may be modeled based on the subsequent first modeling manner and the other based on the third modeling manner. That is to say, the modeling manner for the target lane line in the embodiment of the present disclosure is not limited to modeling one specific lane line.
Step S102, obtaining an observation result of the target lane line, and determining a target modeling mode of the target lane line based on the observation result, wherein the observation result is used for determining whether the target lane line has high-precision lane line observation and/or perception lane line observation.
It can be understood that after the electronic device acquires the target lane line, the electronic device acquires an observation result of the target lane line, that is, determines whether the target lane line has high-precision lane line observation and/or perception lane line observation. For example, determining the target lane line based on the observations may include only high-precision lane line observations or only perception lane line observations, or may also include both high-precision lane line observations and perception lane line observations.
Illustratively, the high-precision lane line observation may be obtained based on a preset high-precision map, and the perception lane line observation may be obtained by the electronic device based on the camera shooting the lane line. In the embodiment of the disclosure, after acquiring the target lane line, the electronic device may determine whether a preset high-precision map includes high-precision lane line observation of the target lane line and whether perception lane line observation obtained by shooting with a camera can be acquired, so as to determine an observation result of the target lane line.
Further, a target modeling manner of the target lane line is determined based on the observation result. For example, if the observation result determines that the target lane line only includes the perception lane line observation, determining that the target modeling mode is to perform lane line modeling based on the perception lane line observation; if the observation result is determined that the target lane line comprises high-precision lane line observation, performing lane line modeling based on the high-precision lane line observation; and if the observation result is determined that the target lane line comprises perception lane line observation and high-precision lane line observation, performing fusion modeling based on the perception lane line observation and the high-precision lane line observation.
And S103, modeling the target lane line based on the target modeling mode.
For example, if the observation result is that the target lane line only includes the perception lane line observation, the target lane line may be modeled based on the perception lane line observation. For example, a cubic lane line model $y = c_3x^3 + c_2x^2 + c_1x + c_0$ may be constructed based on a plurality of observation points in the perception lane line observation data, and the coefficient vector $c = [c_0, c_1, c_2, c_3]^T$ of the cubic curve optimized, for example by iterating with the Levenberg-Marquardt method until convergence; the optimized cubic curve obtained by solving is then used for lane line modeling. However, the modeling manner of the target lane line may also take other possible forms, which will be specifically described in the following embodiments and are not detailed here.
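The cubic-curve optimization just described can be sketched with an off-the-shelf Levenberg-Marquardt solver. A minimal sketch assuming `scipy.optimize.least_squares` with `method="lm"`; the function name, coefficient ordering, and zero initialization are this example's assumptions, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cubic_lm(xs, ys, init=None):
    """Optimize the coefficients c = [c3, c2, c1, c0] of the cubic lane line
    model y = c3*x^3 + c2*x^2 + c1*x + c0 by Levenberg-Marquardt iteration,
    as a minimal sketch of the optimisation described above."""
    def residuals(c):
        # residual = model prediction minus observed lateral offset
        return np.polyval(c, xs) - ys
    init = np.zeros(4) if init is None else init
    return least_squares(residuals, init, method="lm").x
```

Since the residual is linear in the coefficients, the solver converges from a zero initial guess in very few iterations.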
In the embodiment of the disclosure, after a target lane line is obtained, an observation result of the target lane line is obtained, the observation result being used to determine whether the target lane line has high-precision lane line observation and/or perception lane line observation; a target modeling mode is determined based on the observation result, and the target lane line is then modeled based on the target modeling mode. In this way, the modeling manner is determined according to the observation result of each lane line, making lane line modeling more flexible.
Optionally, the step S102 may include:
judging whether the target lane line has high-precision lane line observation or not;
under the condition that high-precision lane line observation does not exist in the target lane line, judging whether the target lane line is a lane line of a unilateral lane;
and under the condition that the target lane line is a lane line of a unilateral lane, determining that the target modeling mode of the target lane line is a first modeling mode, wherein the first modeling mode is a mode of modeling the target lane line based on sensing lane line constraint.
In the embodiment of the disclosure, after a target lane line is acquired, whether high-precision lane line observation exists in the target lane line is judged, and if not, whether the target lane line is a lane line of a single-side lane is further judged. It should be noted that, if both the left and right lane lines of a lane have observation (high-precision lane line observation and/or perception lane line observation), the lane is a two-sided lane, and if only one lane line of the left and right lane lines of the lane has observation, the lane is a one-sided lane, and then the lane line of the lane is also the lane line belonging to the one-sided lane. And if the target lane line is the lane line of the unilateral lane, determining that the target modeling mode corresponding to the target lane line is the first modeling mode.
The first modeling mode is a mode for modeling the target lane line based on perception lane line constraint. It can be understood that, based on the above determination, it can be determined that the target lane line only includes the perception lane line observation, and the lane line is a lane line of a single-side lane, and there is no need to consider the influence of the lane, in this case, the target lane line model can be optimally modeled only by using the perception lane line observation.
In the embodiment of the disclosure, under the condition that no high-precision lane line observation exists in the target lane line, whether the target lane line is a lane line of a unilateral lane is further judged, and if yes, the target modeling mode corresponding to the target lane line is determined to be a first modeling mode based on perception lane line constraint. Furthermore, the modeling mode of the target lane line is determined through multi-stage judgment, so that the determination of the modeling mode of the lane line is more flexible and diversified.
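The multi-stage judgment described in the disclosure (first check for high-precision observation, then lane topology or perception observation) can be summarized as a small decision function. A sketch only; the string labels are this example's shorthand for the four modeling modes described in the embodiments.

```python
def select_modeling_mode(has_hd_obs, has_perception_obs, is_single_side_lane):
    """Decision tree for the target modeling mode, following the multi-stage
    judgment of the disclosure (the string labels are this sketch's own)."""
    if not has_hd_obs:
        # no high-precision observation: decide by lane topology
        return "first" if is_single_side_lane else "second"
    # high-precision observation exists: decide by perception observation
    return "third" if has_perception_obs else "fourth"
```

The function makes explicit that the first judgment (high-precision observation present or not) dominates, and the second judgment differs per branch.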
Alternatively, in this case, the step S103 may include:
acquiring a first distance from an observation point of the target lane line in the observation of the perception lane line to the target lane line, and acquiring a first constraint based on the first distance;
acquiring direction consistency constraint and curvature consistency constraint of the target lane line based on perception lane line observation;
constructing a perceptual lane line constraint based on the first constraint, the direction consistency constraint and the curvature consistency constraint;
modeling the target lane line based on the perceived lane line constraint.
Specifically, after the target modeling mode of the target lane line is determined to be the first modeling mode, a first distance from an observation point of the target lane line to the target lane line in the observation of the perception lane line is obtained.
Specifically, perception lane line observation has the characteristic that accuracy degrades with distance from the vehicle (centimetre-level nearby, metre-level far away), and single-frame observations are unreliable owing to illumination changes, extrinsic calibration accuracy, road bumps and the like. In the embodiment of the disclosure, a sliding-window optimization strategy can be adopted to improve the accuracy of perception lane line modeling. According to this positional characteristic, for a plurality of observation points on the lane line behind the vehicle (i.e. the area the vehicle has driven through) and in the near area, first distances from the observation points to the target lane line are obtained, i.e. a plurality of first distances, and the first constraint is constructed based on them. For observation points on the lane line in the far area ahead of the vehicle, the direction consistency constraint and the curvature consistency constraint of the target lane line are constructed. The perception lane line constraint is then constructed based on the first constraint, the direction consistency constraint and the curvature consistency constraint, and the target lane line is modeled through the perception lane line constraint.
Optionally, the formula for perceiving lane line constraints is as follows:
$$\min_{c_k}\ \sum_{i=1}^{M}\Big(\Omega_1\big\|e_i^{\mathrm{dist}}\big\|^2+\Omega_2\big\|e_i^{\mathrm{dir}}\big\|^2+\Omega_3\big\|e_i^{\mathrm{curv}}\big\|^2\Big)$$
wherein $M$ is the number of observation points in the perception lane line observation, $c_k$ is the $k$-th lane line model (i.e. the lane line model corresponding to the target lane line), $e_i^{\mathrm{dist}}$ is the first constraint of the $i$-th observation point, $e_i^{\mathrm{dir}}$ is the direction consistency constraint of the $i$-th observation point, $e_i^{\mathrm{curv}}$ is the curvature consistency constraint of the $i$-th observation point, and $\Omega_1$, $\Omega_2$, $\Omega_3$ are weight coefficients.
Further, the above formula may be iteratively optimized, for example by iterating with the Levenberg-Marquardt method until convergence, so that the target lane line is modeled based on the optimized perception lane line constraint. It should be noted that the specific principle of modeling a lane line from a constraint equation may refer to the related art and is not explained in detail in the present disclosure.
In the embodiment of the disclosure, under the condition that the target modeling mode is determined to be the first modeling mode, the perception lane line constraint is constructed through the first constraint related to the distance, the direction consistency constraint and the curvature consistency constraint, so that the distance from the observation point to the lane line in the perception lane line observation, the direction of the target lane line and the curvature are comprehensively considered, the lane line model obtained based on the modeling mode of the perception lane line constraint is higher in precision, and the lane line model is more beneficial to providing more accurate lane line positioning in vehicle driving so as to assist the vehicle driving.
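The composition of the perception lane line constraint (a distance term for near points, direction and curvature consistency terms for far points) can be sketched as a residual function suitable for a least-squares solver. Simplifying assumptions of this sketch: the lateral offset y - f(x) stands in for the point-to-curve distance, f'(x) for the tangent direction and f''(x) for the curvature; all names are illustrative, not the patent's.

```python
import numpy as np

def perception_residuals(c, near_pts, far_pts, omegas=(1.0, 1.0, 1.0)):
    """Weighted residual vector combining the three constraint types for one
    lane line model c = [c3, c2, c1, c0].

    near_pts: (x, y) points near/behind the vehicle -> first (distance) term.
    far_pts:  (x, heading, curvature) for far points -> direction and
              curvature consistency terms.
    Simplifications: y - f(x) approximates the point-to-curve distance,
    f'(x) the tangent slope and f''(x) the curvature.
    """
    w1, w2, w3 = omegas
    dc = np.polyder(c)        # f'(x)
    ddc = np.polyder(c, 2)    # f''(x)
    res = [w1 * (y - np.polyval(c, x)) for x, y in near_pts]
    for x, heading, curv in far_pts:
        res.append(w2 * (np.tan(heading) - np.polyval(dc, x)))
        res.append(w3 * (curv - np.polyval(ddc, x)))
    return np.array(res)
```

A solver such as Levenberg-Marquardt can then minimize the squared norm of this vector over c.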
Optionally, after determining whether the target lane line is a lane line of a single lane, the method further includes:
and under the condition that the target lane line is not the lane line of a unilateral lane, determining that the target modeling mode of the target lane line is a second modeling mode, wherein the second modeling mode is a mode of modeling the target lane line based on perception lane level constraint.
In the embodiment of the present disclosure, when it is determined that there is no high-precision lane line observation in the target lane line, if the target lane line is not a lane line of a single lane, for example, both lane lines on the left and right sides of a lane to which the target lane line belongs are observed, the modeling manner of the target lane line is determined to be the second modeling manner that models based on the perceptual lane-level constraint. Therefore, different modeling modes can be adopted for modeling the lane line of the unilateral lane and the lane line of the non-unilateral lane respectively, and the modeling of the lane line is more flexible and diversified.
Optionally, in this case, the modeling the target lane line based on the target modeling manner includes:
acquiring lane width linear change constraint corresponding to a lane line of a non-unilateral lane;
constructing a perception lane level constraint based on the perception lane line constraint and the lane width linear change constraint;
modeling the target lane line based on the perceptual lane-level constraint.
Specifically, the lane line modeling needs to ensure the modeling accuracy of a single lane line, and also needs to consider whether the lane width formed by the lane line is consistent with the actual lane width, and based on the characteristic that the lane width generally meets the linear change, a lane width change model can be added to the state quantity of the lane line modeling for modeling the lane line of the non-unilateral lane, so as to further improve the modeling accuracy.
In the embodiment of the disclosure, for the lane lines of the non-unilateral lane, when the lane width formed by two adjacent lane lines has the characteristic of linear change, a lane width linear change constraint may be constructed, and since the adjacent lane lines have a common lane line, it is necessary to optimize the lanes of the common lane line together. Optionally, lane width linear change constraints corresponding to lane lines of the non-unilateral lane are obtained, perception lane level constraints are constructed based on the lane width linear change constraints and the perception lane line constraints, and then modeling is performed on the target lane lines through the perception lane level constraints.
Optionally, the formula corresponding to the perceptual lane-level constraint is as follows:
$$\min_{c_k,\,w_l}\ \sum_{k=1}^{N}\sum_{i=1}^{M}\Big(\Omega_1\big\|e_i^{\mathrm{dist}}\big\|^2+\Omega_2\big\|e_i^{\mathrm{dir}}\big\|^2+\Omega_3\big\|e_i^{\mathrm{curv}}\big\|^2\Big)+\sum_{l=1}^{Q}\sum_{i=1}^{P}\Omega_4\big\|e_i^{\mathrm{width}}\big\|^2$$
wherein $N$ is the number of lane lines, $Q$ is the number of lanes, $M$ is the number of observation points in the perception lane line observation, $P$ is the number of sampling points for lane width observation, $c_k$ is the $k$-th lane line model, $w_l$ is the $l$-th lane width variation model, $e_i^{\mathrm{width}}$ is the lane width linear change constraint of the $i$-th sampling point, $e_i^{\mathrm{dist}}$ is the first constraint, $e_i^{\mathrm{dir}}$ the direction consistency constraint and $e_i^{\mathrm{curv}}$ the curvature consistency constraint of the $i$-th observation point, and $\Omega_1$, $\Omega_2$, $\Omega_3$, $\Omega_4$ are weight coefficients.
Further, the above formula may be iteratively optimized, for example with the Levenberg-Marquardt method. If the optimization converges within the set number of iterations, it is then determined whether the lane width change model satisfies the linear assumption; if so, the modeling is successful (the optimization result is the modeling result). If the linear assumption is not satisfied, the target lane line is modeled based on the first modeling mode.
In the embodiment of the disclosure, for the target lane line of the non-unilateral lane, the modeling mode not only considers the lane line factor (namely, perception lane line constraint) but also considers the lane factor (namely, lane width linear change constraint), thereby effectively improving the modeling precision.
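The lane width linear change constraint can be sketched as an extra residual term that couples the two boundary cubics of a lane with a linear width model w(x) = w0 + w1*x. A sketch under the assumption that the difference of lateral offsets approximates the lane width; all names are illustrative.

```python
import numpy as np

def lane_width_residuals(c_left, c_right, w_model, sample_xs, omega4=1.0):
    """Lane width linear change residuals: at every sampling station x the
    gap between the two boundary cubics should match w(x) = w0 + w1*x.

    Sketch assumption: the difference of lateral offsets of the two
    boundary models approximates the lane width at x.
    """
    w0, w1 = w_model
    xs = np.asarray(sample_xs)
    widths = np.polyval(c_left, xs) - np.polyval(c_right, xs)
    return omega4 * (widths - (w0 + w1 * xs))
```

In a joint optimization these residuals are stacked with the per-line perception residuals, so lanes sharing a boundary line are optimized together, as the paragraph above requires.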
Optionally, after determining whether the target lane line has high-precision lane line observation, the method further includes:
under the condition that the target lane line has high-precision lane line observation, judging whether the target lane line has perception lane line observation or not;
and under the condition that the target lane line has perception lane line observation, determining that the target modeling mode of the target lane line is a third modeling mode, wherein the third modeling mode is a mode for modeling the target lane line based on high-precision lane-level constraint.
In the embodiment of the disclosure, if there is high-precision lane line observation in the target lane line and there is perception lane line observation, the target modeling manner of the target lane line is determined to be a third modeling manner based on high-precision lane-level constraint. Therefore, high-precision lane line observation and perception lane line observation can be fused simultaneously to model the lane lines, so that the modeling mode of the lane lines is more flexible.
Optionally, in this case, the modeling the target lane line based on the target modeling manner includes:
acquiring relative transformation between a high-precision observation coordinate system and a perception observation coordinate system based on the high-precision lane line observation and the perception lane line observation of the target lane line;
acquiring a second distance from an observation point to the target lane line in the high-precision lane line observation of the target lane line based on the relative transformation, and constructing a second constraint based on the second distance;
constructing a high-precision lane-level constraint based on the second constraint and the perception lane line constraint;
modeling the target lane line based on the high-precision lane-level constraint.
Compared with perception lane line observation, the geometric attributes of high-precision observation do not degrade with distance, so its constraint construction is not as complicated as that of perception lane line observation, and a single constraint type suffices for a high-precision lane line. Considering that the high-precision lane line observation is extracted through global positioning, when the global positioning has an error, the coordinate system of the extracted high-precision lane line observation does not coincide with the actual vehicle body coordinate system (origin at the midpoint of the line connecting the rear wheels), and the difference between the coordinate systems also varies over time. In the embodiment of the disclosure, the relative transformation between the high-precision observation coordinate system and the perception observation coordinate system is therefore added to the state quantity of lane line modeling to improve modeling precision.
Specifically, if the target lane line has high-precision lane line observation, obtaining relative transformation between a high-precision observation coordinate system and a perception observation coordinate system, wherein the perception observation coordinate system is superposed with a vehicle body coordinate system; and obtaining second distances from a plurality of observation points to the target lane line in high-precision lane line observation of the target lane line based on the relative transformation, namely obtaining the second distance corresponding to each observation point, constructing second constraints based on the second distances, constructing high-precision lane-level constraints based on the second constraints and the perception lane line constraints, and modeling the target lane line based on the high-precision lane-level constraints.
Optionally, the formula corresponding to the high-precision lane-level constraint is as follows:

$$\min_{\{c_k\},\,T_{bh}}\ \sum_{k=1}^{N}\left[\sum_{i=1}^{R}\Omega_1\,e^{hd}_{i}(c_k,T_{bh})+\sum_{i=1}^{M}\left(\Omega_2\,e^{p}_{i}(c_k)+\Omega_3\,e^{dir}_{i}(c_k)+\Omega_5\,e^{cur}_{i}(c_k)\right)\right]$$

wherein $R$ is the number of observation points in the high-precision lane line observation, $N$ is the number of lane lines, $M$ is the number of observation points in the perception lane line observation, $c_k$ is the $k$-th lane line model, $T_{bh}$ is the relative transformation between the high-precision observation coordinate system and the perception observation coordinate system, $e^{hd}_{i}$ is the second constraint for the $i$-th observation point, $e^{p}_{i}$ is the first constraint for the $i$-th observation point, $e^{dir}_{i}$ is the direction consistency constraint for the $i$-th observation point, $e^{cur}_{i}$ is the curvature consistency constraint for the $i$-th observation point, and $\Omega_1$, $\Omega_2$, $\Omega_3$, $\Omega_5$ are all weight coefficients.
Further, the above formula may be iteratively optimized, for example by the Levenberg-Marquardt method. If the optimization converges within a set number of iterations, it is determined whether the calculated relative transformation $T_{bh}$ lies within a preset trust interval; if so, the curvature matching degree between the optimized curve and the original high-precision observation curve is further checked, and if the matching degree meets a preset threshold, the modeling succeeds (the optimized curve is the modeling result). If the optimization fails, the target lane line is modeled based on the first modeling mode or the second modeling mode.
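The acceptance logic applied after the Levenberg-Marquardt iteration can be sketched as follows; the parameterization of T_bh and the forms of the trust interval and curvature-match score are assumptions for illustration, not the disclosed definitions:

```python
def validate_solution(converged, T_bh, trust_box, curv_match, curv_thresh):
    """Gate a Levenberg-Marquardt result (hypothetical acceptance logic).

    converged   : True if the optimization converged within the
                  set number of iterations.
    T_bh        : estimated (theta, tx, ty) relative transformation.
    trust_box   : per-parameter trust interval, ((lo, hi), ...).
    curv_match  : curvature matching degree between the optimized curve
                  and the original high-precision observation curve.
    curv_thresh : preset threshold on the matching degree.

    Returns True when the fused model may be kept; otherwise the caller
    falls back to the first or second (pure-perception) modeling mode.
    """
    if not converged:
        return False
    in_trust = all(lo <= v <= hi for v, (lo, hi) in zip(T_bh, trust_box))
    return in_trust and curv_match >= curv_thresh
```

A failed check at any stage triggers the fallback path described in the text.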
Optionally, after determining whether the target lane line has the perception lane line observation under the condition that the target lane line has the high-precision lane line observation, the method further includes:
under the condition that the target lane line is not observed by a perception lane line, determining that the target modeling mode of the target lane line is a fourth modeling mode, wherein the fourth modeling mode is a mode for modeling the target lane line based on the coordinates converted into the perception observation coordinate system;
the modeling the target lane line based on the target modeling mode includes:
acquiring relative transformation between a high-precision observation coordinate system and a perception observation coordinate system based on the third modeling mode;
converting the high-precision observation coordinates corresponding to the target lane line to coordinates in the perception observation coordinate system based on the relative transformation;
and modeling the target lane line based on the coordinates converted into the perception observation coordinate system.
In the embodiment of the present disclosure, if there is high-precision lane line observation in the target lane line but there is no perception lane line observation, the target modeling manner of the target lane line is determined to be a fourth modeling manner that performs modeling based on coordinates converted into a perception observation coordinate system.
In addition, since the lane lines stored in the high-precision map are static data, the geometric attributes of the high-precision lane lines extracted from the map are fixed regardless of when they are extracted, so the high-precision lane lines themselves need not be optimized during modeling. Their positions, however, are affected by high-precision positioning; therefore, the high-precision lane lines need to be converted according to the relative transformation between the high-precision observation coordinate system and the perception observation coordinate system to obtain their expression in the perception coordinate system, and lane line modeling is performed based on the converted coordinates in the perception observation coordinate system.
In the embodiment of the disclosure, the relative transformation between the high-precision observation coordinate system and the perception observation coordinate system is obtained, the high-precision observation coordinate corresponding to the target lane line can be converted to the perception observation coordinate under the perception observation coordinate system based on the relative transformation, and then the target lane line is modeled based on the perception observation coordinate converted under the perception observation coordinate system, so as to ensure the precision of the lane line model.
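A minimal sketch of this fourth modeling mode, assuming an SE(2) relative transformation and a cubic lane line model (both illustrative assumptions): the high-precision points are converted into the perception frame and a model is fitted to the converted coordinates directly.

```python
import numpy as np

def model_from_hd_only(points_h, T_bh):
    """Fourth modeling mode sketch: no perception observation exists,
    so the high-precision points are transformed into the perception
    frame by T_bh = (theta, tx, ty) and a cubic lane line model is
    fitted to the converted coordinates (names are hypothetical)."""
    theta, tx, ty = T_bh
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pts = points_h @ R.T + np.array([tx, ty])
    # Fit y = a0 + a1*x + a2*x^2 + a3*x^3 in the perception frame;
    # return coefficients in ascending order [a0, a1, a2, a3].
    return np.polyfit(pts[:, 0], pts[:, 1], 3)[::-1]
```

Because the high-precision geometry is static, only the frame conversion varies over time; the fit itself involves no iterative optimization.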
Referring to fig. 2, fig. 2 is a second schematic flow chart of a lane line modeling method according to an embodiment of the present disclosure, and as shown in fig. 2, the method includes the following steps:
step S201, observing a target lane line in a current frame lane line;
step S202, associating the target lane line with the historical lane line;
step S203, judging whether the target lane line has high-precision observation;
step S204, if not, judging whether the target lane line is a lane line of a unilateral lane;
step S211, if yes, perception lane line level optimization is carried out on the target lane line;
in this step, the perception lane line constraint described in the above embodiment is constructed.
Step S212, judging whether the optimization is successful;
in this step, it is judged whether the constructed perception lane line constraint equation converges within a preset number of iterations; if so, the optimization is considered successful.
Step S213, if the optimization is successful, modeling a target lane line based on the perception lane line constraint;
step S221, if the target lane line is not a lane line of a unilateral lane, forming a group from the lanes that have a shared lane line;
step S222, perception lane level grouping optimization;
if the target lane line is not a lane line of a unilateral lane, the lanes having a shared lane line form a group, and group optimization is performed on the lane lines of the lanes in the group. In this case, a lane width linear variation constraint needs to be considered, and the perception lane-level constraint is constructed based on the lane width linear variation constraint and the perception lane line constraint.
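The lane width linear variation constraint can be sketched as penalizing deviation of the sampled inter-line width from a linear profile along the road; the cubic lane line models and sampling scheme below are illustrative assumptions, not the disclosed formulation.

```python
import numpy as np

def width_linearity_residuals(poly_left, poly_right, xs):
    """Residuals of a lane-width linear-variation constraint (sketch).

    poly_left, poly_right : cubic coefficients [a0, a1, a2, a3] of two
                            lane line models sharing a lane.
    xs                    : longitudinal sample positions.

    The width between the two models is sampled at xs; the constraint
    assumes it varies linearly along the road, so the residual is the
    deviation from the best linear fit of the sampled widths.
    """
    w = np.polyval(poly_left[::-1], xs) - np.polyval(poly_right[::-1], xs)
    k, b = np.polyfit(xs, w, 1)       # best linear width profile
    return w - (k * xs + b)
```

Large residuals indicate the linear-width precondition is violated, in which case the lane-level solution is rejected and the flow falls back to lane line level optimization.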
Step S223, judging whether the optimization is successful;
in this step, it is judged whether the constructed perception lane-level constraint equation converges within a preset number of iterations; if so, the optimization is considered successful; if not, the flow proceeds to step S211, that is, to perception lane line level optimization.
Step S224, if the optimization is successful, modeling a target lane line based on the perception lane-level constraint;
Step S231, if the target lane line has high-precision observation, judging whether perception observation exists at the same time;
step S232, if perception observation exists at the same time, high-precision observation and perception observation are optimized simultaneously;
in this step, the high-precision lane-level constraint described in the above embodiment is constructed.
Step S233, judging whether the optimization is successful;
in this step, it is judged whether the constructed high-precision lane-level constraint equation converges within a preset number of iterations; if so, the optimization is considered successful; if not, the flow proceeds to step S204, that is, to the perception lane-level or perception lane line level modeling mode.
Step S234, if the optimization is successful, modeling the target lane line based on high-precision lane-level constraint;
step S241, if the target lane line has high-precision observation but no perception observation, acquiring the relative transformation between the high-precision observation coordinate system and the perception observation coordinate system;
step S242, converting to coordinates in the perception observation coordinate system;
in this step, the high-precision observation coordinates corresponding to the target lane line are converted into coordinates in the perception observation coordinate system.
Step S243, modeling the target lane line based on the coordinates converted into the perception observation coordinate system.
It should be noted that, relevant concepts and specific implementation flows related in the embodiments of the present disclosure may refer to the specific description in the embodiment described in fig. 1, and the embodiments of the present disclosure can also achieve the beneficial effects in the embodiments described above, and in order to avoid repetition, details are not described here again.
The lane line modeling method provided by the embodiment of the disclosure is a perception-based three-level optimization modeling scheme. The first level is the fusion optimization of the perception lane line and the high-precision lane line (steps S231 to S234); although high-precision observation can improve the range and precision of lane line modeling, it may not match reality, causing the fusion to fail, in which case the high-precision observation is removed and the flow enters the pure perception lane line modeling link (step S233 to step S204 and subsequent steps). The second level is perception lane-level optimization (steps S221 to S224); since this level requires the precondition that the lane width varies linearly, if the optimization result does not meet the precondition, the solution is considered to have failed (step S223 to step S211 and subsequent steps). The third level is perception lane line level optimization (steps S211 to S213), which serves as the last guarantee for lane line modeling because it compresses the optimization variables and observations to a minimum and therefore cannot fail. Although the modeling precision decreases level by level in this execution order, the success rate increases correspondingly, so once a lane line is successfully modeled at one level, it does not need to participate in the next level of optimization.
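The three-level fallback described above can be sketched as a simple cascade; the callables are hypothetical stand-ins for the three optimizers, each returning a (success, model) pair, and level 2 is skipped (passed as None) when the lane line belongs to a unilateral lane:

```python
def three_level_modeling(has_hd, has_perc, fuse, lane_group_opt, line_opt):
    """Cascade sketch of the three optimization levels (hypothetical API).

    fuse           : level 1, HD + perception fusion optimization.
    lane_group_opt : level 2, perception lane-level group optimization,
                     or None for a unilateral-lane line.
    line_opt       : level 3, perception lane line level optimization;
                     by construction it never fails.
    """
    if has_hd and has_perc:
        ok, model = fuse()                 # level 1: fusion optimization
        if ok:
            return model                   # modeled; skip lower levels
    if lane_group_opt is not None:
        ok, model = lane_group_opt()       # level 2: lane-level grouping
        if ok:
            return model
    ok, model = line_opt()                 # level 3: last guarantee
    return model
```

Each level returns early on success, matching the rule that a successfully modeled lane line does not participate in the next level.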
The lane line modeling is divided into the following four types:
(1) Perception lane line level modeling (i.e., the first modeling manner in the embodiment of fig. 1) optimizes a single lane line model only by using perception observation;
(2) Perceptual lane-level modeling (i.e., the second modeling method in the embodiment of fig. 1) that simultaneously optimizes multiple lane line models using only perceptual observation;
(3) High-precision lane line level modeling (i.e., the fourth modeling method in the embodiment of fig. 1 described above) for processing lane line modeling with only high-precision observation;
(4) The high-precision lane-level modeling (i.e., the third modeling manner in the embodiment of fig. 1) is used for modeling lane line fusion with both high-precision observation and perception observation.
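The four categories map onto a simple selection rule over the observation result; the mode names below are descriptive labels for illustration, not identifiers from the disclosure:

```python
def select_modeling_mode(has_hd, has_perc, is_unilateral):
    """Select among the four modeling categories listed above (sketch).

    has_hd        : the target lane line has high-precision observation.
    has_perc      : the target lane line has perception observation.
    is_unilateral : the target lane line belongs to a unilateral lane.
    """
    if has_hd and has_perc:
        return "high_precision_lane_level"       # (4) / third mode
    if has_hd:
        return "high_precision_lane_line_level"  # (3) / fourth mode
    if is_unilateral:
        return "perception_lane_line_level"      # (1) / first mode
    return "perception_lane_level"               # (2) / second mode
```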
It should be noted that the implementation flows of the above modeling manners may specifically refer to the specific descriptions in the embodiment described in fig. 1, and are not described herein again to avoid repetition.
The lane line modeling method provided by the embodiment of the disclosure applies differentiated processing to different types of lane line observation, fully utilizes the advantages of high-precision lane line observation, and greatly improves the length and precision of lane line modeling; it also provides an anomaly recognition strategy for high-precision observation, so that the influence of abnormal high-precision observation on fusion is eliminated in time. For pure perception lane line modeling, the prior characteristics of the road are fully utilized to improve modeling precision when only perception observation is available. In addition, the principle of taking perception observation as primary guarantees the robustness of lane line modeling, can provide important support for back-end optimization of lane line modeling, and ensures the stability of vehicle control and planning.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a lane line modeling apparatus according to an embodiment of the present disclosure, and as shown in fig. 3, a lane line modeling apparatus 300 includes:
an obtaining module 301, configured to obtain a target lane line in current frame lane line observation;
a determining module 302, configured to obtain an observation result of the target lane line, and determine a target modeling manner of the target lane line based on the observation result, where the observation result is used to determine whether the target lane line has high-precision lane line observation and/or perception lane line observation;
and the modeling module 303 is configured to model the target lane line based on the target modeling manner.
Optionally, the determining module 302 is further configured to:
judging whether the target lane line has high-precision lane line observation or not;
under the condition that high-precision lane line observation does not exist in the target lane line, judging whether the target lane line is a lane line of a unilateral lane;
and under the condition that the target lane line is a lane line of a unilateral lane, determining that the target modeling mode of the target lane line is a first modeling mode, wherein the first modeling mode is a mode for modeling the target lane line based on perception lane line constraint.
Optionally, the modeling module 303 is further configured to:
acquiring a first distance from an observation point of the target lane line in the observation of the perception lane line to the target lane line, and acquiring a first constraint based on the first distance;
acquiring direction consistency constraint and curvature consistency constraint of the target lane line based on perception lane line observation;
constructing a perceptual lane line constraint based on the first constraint, the direction consistency constraint and the curvature consistency constraint;
modeling the target lane line based on the perceived lane line constraint.
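A hypothetical sketch of these three constraint terms over a cubic lane line model — the distance, direction-consistency, and curvature-consistency residual forms below are illustrative, not the patented definitions:

```python
import numpy as np

def perception_line_residuals(poly_c, pts, w1=1.0, w3=1.0, w5=1.0):
    """Perception lane line constraint sketch: point-to-line distance
    ('first constraint') plus direction and curvature consistency
    terms, with assumed weights w1, w3, w5.

    poly_c : cubic coefficients [a0, a1, a2, a3].
    pts    : (N, 2) perception observation points, sorted by x.
    """
    a0, a1, a2, a3 = poly_c
    x, y = pts[:, 0], pts[:, 1]
    dist = y - (a0 + a1*x + a2*x**2 + a3*x**3)   # first constraint
    slope = a1 + 2*a2*x + 3*a3*x**2              # model heading
    obs_slope = np.gradient(y, x)                # observed heading
    direction = slope - obs_slope                # direction consistency
    curv = 2*a2 + 6*a3*x                         # model curvature proxy
    obs_curv = np.gradient(obs_slope, x)         # observed curvature proxy
    curvature = curv - obs_curv                  # curvature consistency
    return np.concatenate([w1*dist, w3*direction, w5*curvature])
```

Stacking the three weighted residual groups yields the perception lane line constraint that a single-line optimizer would minimize.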
Optionally, the determining module 302 is further configured to:
and under the condition that the target lane line is not a lane line of a unilateral lane, determining that the target modeling mode of the target lane line is a second modeling mode, wherein the second modeling mode is a mode for modeling the target lane line based on perception lane level constraint.
Optionally, the modeling module 303 is further configured to:
acquiring lane width linear change constraint corresponding to a lane line of a non-unilateral lane;
constructing a perception lane level constraint based on the perception lane line constraint and the lane width linear change constraint;
modeling the target lane line based on the perceptual lane-level constraint.
Optionally, the determining module 302 is further configured to:
under the condition that the target lane line has high-precision lane line observation, judging whether the target lane line has perception lane line observation or not;
and under the condition that the target lane line has perception lane line observation, determining that the target modeling mode of the target lane line is a third modeling mode, wherein the third modeling mode is a mode for modeling the target lane line based on high-precision lane-level constraint.
Optionally, the modeling module 303 is further configured to:
acquiring relative transformation between a high-precision observation coordinate system and a perception observation coordinate system based on the high-precision lane line observation and the perception lane line observation of the target lane line;
acquiring a second distance from an observation point to the target lane line in the high-precision lane line observation of the target lane line based on the relative transformation, and constructing a second constraint based on the second distance;
constructing a high-precision lane-level constraint based on the second constraint and the perception lane line constraint;
modeling the target lane line based on the high-precision lane-level constraint.
Optionally, the determining module 302 is further configured to:
under the condition that the target lane line is not observed by the perception lane line, determining that the target modeling mode of the target lane line is a fourth modeling mode, wherein the fourth modeling mode is a mode for modeling the target lane line based on the coordinates converted into the perception observation coordinate system;
the modeling module 303 is further configured to:
acquiring the relative transformation between a high-precision observation coordinate system and a perception observation coordinate system based on the third modeling mode;
converting the high-precision observation coordinates corresponding to the target lane line to coordinates in the perception observation coordinate system based on the relative transformation;
and modeling the target lane line based on the coordinates converted into the perception observation coordinate system.
In the embodiment of the disclosure, the device can determine the corresponding target modeling mode based on the observation result of the lane line, so that the mode of modeling the lane line is more flexible.
It should be noted that the apparatus provided in the embodiment of the present disclosure can implement all processes in the method embodiment described in fig. 1, and can achieve the same technical effect, and for avoiding repetition, details are not described here again.
The embodiment of the present disclosure further provides an automatic driving vehicle, which is configured to perform all processes in the embodiment of the method described in fig. 1, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the personal information of related users all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 4 shows a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the electronic device 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the electronic device 400 can also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
A number of components in the electronic device 400 are connected to the I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the electronic device 400 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 401 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 401 executes the respective methods and processes described above, such as the lane line modeling method. For example, in some embodiments, the lane line modeling method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of the lane line modeling method described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the lane line modeling method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (20)

1. A method of lane line modeling comprising:
acquiring a target lane line in current frame lane line observation;
acquiring an observation result of the target lane line, and determining a target modeling mode of the target lane line based on the observation result, wherein the observation result is used for determining whether high-precision lane line observation and/or perception lane line observation exist in the target lane line;
and modeling the target lane line based on the target modeling mode.
2. The method of claim 1, wherein obtaining observations of the target lane line and determining a target modeling mode for the target lane line based on the observations comprises:
judging whether the target lane line has high-precision lane line observation or not;
under the condition that high-precision lane line observation does not exist in the target lane line, judging whether the target lane line is a lane line of a unilateral lane;
and under the condition that the target lane line is a lane line of a unilateral lane, determining that the target modeling mode of the target lane line is a first modeling mode, wherein the first modeling mode is a mode for modeling the target lane line based on perception lane line constraint.
3. The method of claim 2, wherein said modeling the target lane line based on the target modeling approach comprises:
acquiring a first distance from an observation point of the target lane line in the observation of the perception lane line to the target lane line, and acquiring a first constraint based on the first distance;
acquiring direction consistency constraint and curvature consistency constraint of the target lane line based on perception lane line observation;
constructing a perceptual lane line constraint based on the first constraint, the direction consistency constraint and the curvature consistency constraint;
modeling the target lane line based on the perceived lane line constraint.
4. The method of claim 2, wherein after determining whether the target lane line is a lane line of a single lane, the method further comprises:
and under the condition that the target lane line is not a lane line of a unilateral lane, determining that the target modeling mode of the target lane line is a second modeling mode, wherein the second modeling mode is a mode for modeling the target lane line based on perception lane level constraint.
5. The method of claim 4, wherein said modeling the target lane line based on the target modeling approach comprises:
acquiring lane width linear change constraint corresponding to a lane line of a non-unilateral lane;
constructing a perception lane level constraint based on the perception lane line constraint and the lane width linear change constraint;
modeling the target lane line based on the perceptual lane-level constraint.
6. The method of claim 2, wherein after determining whether there is a high-precision lane line observation for the target lane line, the method further comprises:
under the condition that the target lane line has high-precision lane line observation, judging whether the target lane line has perception lane line observation or not;
and under the condition that the target lane line has perception lane line observation, determining that the target modeling mode of the target lane line is a third modeling mode, wherein the third modeling mode is a mode for modeling the target lane line based on high-precision lane-level constraint.
7. The method of claim 6, wherein said modeling the target lane line based on the target modeling approach comprises:
acquiring relative transformation between a high-precision observation coordinate system and a perception observation coordinate system based on the high-precision lane line observation and the perception lane line observation of the target lane line;
acquiring a second distance from the observation point to the target lane line in the high-precision lane line observation of the target lane line based on the relative transformation, and constructing a second constraint based on the second distance;
constructing a high-precision lane-level constraint based on the second constraint and the perception lane line constraint;
modeling the target lane line based on the high-precision lane-level constraints.
8. The method of claim 7, wherein after determining, in a case where a high-precision lane line observation exists for the target lane line, whether a perception lane line observation exists for the target lane line, the method further comprises:
when no perception lane line observation exists for the target lane line, determining that the target modeling mode of the target lane line is a fourth modeling mode, wherein the fourth modeling mode is a mode of modeling the target lane line based on coordinates converted into the perception observation coordinate system;
wherein modeling the target lane line based on the target modeling mode comprises:
acquiring the relative transformation between the high-precision observation coordinate system and the perception observation coordinate system based on the third modeling mode;
converting high-precision observation coordinates corresponding to the target lane line into coordinates in the perception observation coordinate system based on the relative transformation; and
modeling the target lane line based on the coordinates converted into the perception observation coordinate system.
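The fourth modeling mode of claim 8 reduces to: reuse a previously obtained relative transformation, convert the high-precision lane-line points into the perception frame, and fit a lane model to them. A minimal sketch, assuming a 2D rigid transform (R, t) and a cubic polynomial lane model (the function name and the choice of a cubic are illustrative assumptions, not stated in the patent):

```python
import numpy as np

def model_from_converted_coords(R, t, hp_lane_pts, degree=3):
    """Fourth-modeling-mode sketch: convert high-precision lane-line points
    into the perception observation coordinate system via the relative
    transform obtained in the third modeling mode, then fit a polynomial
    lane model y = f(x) to the converted points."""
    pts = hp_lane_pts @ R.T + t          # convert into the perception frame
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
    return np.poly1d(coeffs)
```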
9. A lane line modeling apparatus, comprising:
an acquisition module configured to acquire a target lane line in a current frame of lane line observations;
a determining module configured to acquire an observation result of the target lane line and determine a target modeling mode of the target lane line based on the observation result, wherein the observation result indicates whether a high-precision lane line observation and/or a perception lane line observation exists for the target lane line; and
a modeling module configured to model the target lane line based on the target modeling mode.
10. The apparatus of claim 9, wherein the determining module is further configured to:
determine whether a high-precision lane line observation exists for the target lane line;
when no high-precision lane line observation exists for the target lane line, determine whether the target lane line is a lane line of a unilateral lane; and
when the target lane line is a lane line of a unilateral lane, determine that the target modeling mode of the target lane line is a first modeling mode, wherein the first modeling mode is a mode of modeling the target lane line based on a perception lane line constraint.
11. The apparatus of claim 10, wherein the modeling module is further configured to:
acquire a first distance from an observation point of the target lane line in the perception lane line observation to the target lane line, and acquire a first constraint based on the first distance;
acquire a direction consistency constraint and a curvature consistency constraint of the target lane line based on the perception lane line observation;
construct the perception lane line constraint based on the first constraint, the direction consistency constraint, and the curvature consistency constraint; and
model the target lane line based on the perception lane line constraint.
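The three components of the perception lane line constraint in claim 11 can be sketched as stacked residuals against a cubic lane model y = c0 + c1·x + c2·x² + c3·x³. The observation inputs (per-point headings and curvatures) and the small-slope curvature approximation below are illustrative assumptions; the patent does not specify the lane-line parameterization.

```python
import numpy as np

def perception_lane_line_residuals(coeffs, obs_pts, obs_headings, obs_curvs):
    """Sketch of the perception lane line constraint.

    Stacks three residual groups:
      - first constraint: vertical distance from each observation point
        to the curve (a proxy for the 'first distance'),
      - direction consistency: model slope vs. slope of the observed heading,
      - curvature consistency: model second derivative vs. observed
        curvature (valid as a small-slope approximation of curvature)."""
    poly = np.poly1d(coeffs[::-1])       # coeffs given low-to-high order
    dpoly, ddpoly = poly.deriv(1), poly.deriv(2)
    x, y = obs_pts[:, 0], obs_pts[:, 1]
    r_dist = poly(x) - y
    r_dir = dpoly(x) - np.tan(obs_headings)
    r_curv = ddpoly(x) - obs_curvs
    return np.concatenate([r_dist, r_dir, r_curv])
```

A least-squares solver would drive this stacked residual vector toward zero to recover the lane-model coefficients.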
12. The apparatus of claim 10, wherein the determining module is further configured to:
when the target lane line is not a lane line of a unilateral lane, determine that the target modeling mode of the target lane line is a second modeling mode, wherein the second modeling mode is a mode of modeling the target lane line based on a perception lane-level constraint.
13. The apparatus of claim 12, wherein the modeling module is further configured to:
acquire a lane width linear variation constraint corresponding to a lane line of a non-unilateral lane;
construct the perception lane-level constraint based on the perception lane line constraint and the lane width linear variation constraint; and
model the target lane line based on the perception lane-level constraint.
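The lane width linear variation constraint of claim 13 can be expressed as: the width between the left and right lane-line models, sampled along the road, should change linearly. One assumed encoding (for uniformly spaced samples, with polynomial lane models; both choices are illustrative) is that the second finite difference of the sampled width vanishes:

```python
import numpy as np

def lane_width_linear_change_residuals(left_poly, right_poly, xs):
    """Sketch of the lane-width linear-variation constraint: sample the
    width between the left and right lane-line models at uniformly spaced
    x values; its second finite difference is zero iff the width varies
    linearly in x."""
    xs = np.asarray(xs)
    width = left_poly(xs) - right_poly(xs)
    return np.diff(width, n=2)
```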
14. The apparatus of claim 10, wherein the determining module is further configured to:
when a high-precision lane line observation exists for the target lane line, determine whether a perception lane line observation exists for the target lane line; and
when a perception lane line observation exists for the target lane line, determine that the target modeling mode of the target lane line is a third modeling mode, wherein the third modeling mode is a mode of modeling the target lane line based on a high-precision lane-level constraint.
15. The apparatus of claim 14, wherein the modeling module is further configured to:
acquire a relative transformation between a high-precision observation coordinate system and a perception observation coordinate system based on the high-precision lane line observation and the perception lane line observation of the target lane line;
acquire, based on the relative transformation, a second distance from an observation point in the high-precision lane line observation of the target lane line to the target lane line, and construct a second constraint based on the second distance;
construct the high-precision lane-level constraint based on the second constraint and the perception lane line constraint; and
model the target lane line based on the high-precision lane-level constraint.
16. The apparatus of claim 15, wherein the determining module is further configured to:
when no perception lane line observation exists for the target lane line, determine that the target modeling mode of the target lane line is a fourth modeling mode, wherein the fourth modeling mode is a mode of modeling the target lane line based on coordinates converted into the perception observation coordinate system;
and wherein the modeling module is further configured to:
acquire the relative transformation between the high-precision observation coordinate system and the perception observation coordinate system based on the third modeling mode;
convert high-precision observation coordinates corresponding to the target lane line into coordinates in the perception observation coordinate system based on the relative transformation; and
model the target lane line based on the coordinates converted into the perception observation coordinate system.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
20. An autonomous vehicle configured to perform the method of any of claims 1-8.
CN202211080419.3A 2022-09-05 2022-09-05 Lane line modeling method and device, electronic equipment and automatic driving vehicle Withdrawn CN115320641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211080419.3A CN115320641A (en) 2022-09-05 2022-09-05 Lane line modeling method and device, electronic equipment and automatic driving vehicle

Publications (1)

Publication Number Publication Date
CN115320641A true CN115320641A (en) 2022-11-11

Family

ID=83929562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211080419.3A Withdrawn CN115320641A (en) 2022-09-05 2022-09-05 Lane line modeling method and device, electronic equipment and automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN115320641A (en)

Similar Documents

Publication Publication Date Title
CN112763995B (en) Radar calibration method and device, electronic equipment and road side equipment
EP4116935A2 (en) High-definition map creation method and device, and electronic device
CN113759349B (en) Calibration method of laser radar and positioning equipment Equipment and autonomous driving vehicle
CN114626169B (en) Traffic network optimization method, device, equipment, readable storage medium and product
CN113920217A (en) Method, apparatus, device and product for generating high-precision map lane lines
CN113093128A (en) Method and device for calibrating millimeter wave radar, electronic equipment and road side equipment
CN117036422A (en) Method, device, equipment and storage medium for tracking lane lines
CN114036253A (en) High-precision map data processing method and device, electronic equipment and medium
CN112578357A (en) Radar calibration parameter correction method and device, electronic equipment and road side equipment
CN114506343A (en) Trajectory planning method, device, equipment, storage medium and automatic driving vehicle
CN114677653A (en) Model training method, vehicle key point detection method and corresponding devices
CN113762397B (en) Method, equipment, medium and product for training detection model and updating high-precision map
CN113688920A (en) Model training and target detection method and device, electronic equipment and road side equipment
CN113836661A (en) Time prediction method, model training method, related device and electronic equipment
CN115320642A (en) Lane line modeling method and device, electronic equipment and automatic driving vehicle
CN115320641A (en) Lane line modeling method and device, electronic equipment and automatic driving vehicle
CN115510923A (en) Method and device for automatically associating signal lamp with road, electronic equipment and medium
CN113959400B (en) Intersection vertex height value acquisition method and device, electronic equipment and storage medium
CN115127565A (en) High-precision map data generation method and device, electronic equipment and storage medium
CN114170300A (en) High-precision map point cloud pose optimization method, device, equipment and medium
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium
CN114581869A (en) Method and device for determining position of target object, electronic equipment and storage medium
CN114771518B (en) Lane center guide wire generation method and device, electronic equipment and medium
CN114049615B (en) Traffic object fusion association method and device in driving environment and edge computing equipment
CN116559927B (en) Course angle determining method, device, equipment and medium of laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221111
