CN115116019A - Lane line processing method, lane line processing device, lane line processing apparatus, and storage medium - Google Patents

Lane line processing method, lane line processing device, lane line processing apparatus, and storage medium

Info

Publication number
CN115116019A
CN115116019A (application CN202210828297.5A)
Authority
CN
China
Prior art keywords
observation point
frame
sliding window
constraint
lane line
Prior art date
Legal status
Granted
Application number
CN202210828297.5A
Other languages
Chinese (zh)
Other versions
CN115116019B (en)
Inventor
王丕阁
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd filed Critical Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202310403461.2A priority Critical patent/CN116486354B/en
Priority to CN202210828297.5A priority patent/CN115116019B/en
Publication of CN115116019A publication Critical patent/CN115116019A/en
Application granted granted Critical
Publication of CN115116019B publication Critical patent/CN115116019B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The disclosure provides a lane line processing method, a lane line processing device, lane line processing equipment, and a storage medium, and relates to the field of computer technology, in particular to automatic driving, computer vision, and lane line detection. The specific implementation scheme is as follows: according to the relationship between the vehicle position of each frame to be processed and the current vehicle position, determine the observation points to be processed of the lane lines in each frame to be processed and the constraint mode corresponding to those observation points; convert each observation point to be processed into a corresponding target observation point in the current vehicle coordinate system; establish the lane line constraint corresponding to each target observation point according to the constraint mode of the observation point it was converted from; and obtain a curve model of the lane line according to the lane line constraints corresponding to the target observation points. In the disclosed embodiments, using lane line constraints related to the vehicle position can improve the fitting accuracy of the curve model of the lane lines.

Description

Lane line processing method, lane line processing device, lane line processing apparatus, and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and more particularly to the fields of automatic driving, computer vision, and lane line detection.
Background
In the field of high-speed automatic driving, the vehicle must be controlled to perform adaptive cruise according to the surrounding lane line information. In the absence of a high-precision map, the vehicle acquires road images with a front-view camera, and a perception component extracts the lane lines in each image, for example by deep learning. The 2D (two-dimensional) lane lines on the image are then converted into 3D (three-dimensional) lane lines in the vehicle body coordinate system using Inverse Perspective Mapping (IPM). A single-frame 3D lane line has a short measurement range and is susceptible to noise, so it cannot be used directly. Therefore, the lane line is generally fitted to a cubic curve by lane line modeling, restoring lane line information around the vehicle that is close to the real state for use in obstacle lane assignment, vehicle control planning, and the like.
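For illustration only, the following is a minimal sketch of the IPM step described above, assuming a flat ground plane and a known 3x3 image-to-ground homography H; the function name and interface are hypothetical and not taken from this disclosure:

```python
import numpy as np

def ipm_to_vehicle(points_2d: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Project 2D image points of a lane line onto the ground plane of the
    vehicle body frame via a homography (flat-ground IPM assumption).

    points_2d: (N, 2) pixel coordinates of lane line points.
    H: 3x3 homography from the image plane to the ground plane, typically
       derived from the camera intrinsics and extrinsics.
    """
    ones = np.ones((points_2d.shape[0], 1))
    pts_h = np.hstack([points_2d, ones])          # homogeneous pixels
    ground = (H @ pts_h.T).T
    ground /= ground[:, 2:3]                      # normalize homogeneous scale
    zeros = np.zeros((points_2d.shape[0], 1))
    return np.hstack([ground[:, :2], zeros])      # 3D points with z = 0
```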
Disclosure of Invention
The disclosure provides a lane line processing method, a lane line processing device, lane line processing equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a lane line processing method including:
according to the relation between the vehicle position of each frame to be processed and the current vehicle position, determining observation points to be processed of the lane lines in each frame to be processed and a constraint mode corresponding to the observation points to be processed;
converting the observation point to be processed into a corresponding target observation point according to the current vehicle coordinate system;
according to the constraint mode corresponding to the observation point to be processed and the target observation point, establishing lane line constraint corresponding to the target observation point;
and obtaining a curve model of the lane line according to the lane line constraint corresponding to the target observation point.
According to another aspect of the present disclosure, there is provided a lane line processing apparatus including:
the constraint determining module is used for determining observation points to be processed of the lane lines in each frame to be processed and a constraint mode corresponding to the observation points to be processed according to the relationship between the vehicle position of each frame to be processed and the current vehicle position;
the conversion module is used for converting the observation point to be processed into a corresponding target observation point according to the current vehicle coordinate system;
the constraint establishing module is used for establishing lane line constraint corresponding to the target observation point according to the constraint mode corresponding to the observation point to be processed and the target observation point;
and the lane line generating module is used for obtaining a curve model of the lane line according to the lane line constraint corresponding to the target observation point.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
In the embodiment of the disclosure, the constraint mode of each observation point to be processed of the lane line is determined from the vehicle position, and the lane line constraint corresponding to the target observation point converted from each observation point to be processed can then be established, so that the fitting accuracy of the curve model of the lane line is improved by using lane line constraints related to the vehicle position.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow diagram of a lane line processing method according to one embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram of a lane line processing method according to another embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram of a lane line processing method according to another embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram of a lane line processing method according to another embodiment of the present disclosure;
FIG. 5 is a schematic flow chart diagram of a lane line processing method according to another embodiment of the present disclosure;
FIG. 6 is a schematic flow chart diagram of a lane line processing method according to another embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a lane line processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure;
fig. 9 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure;
fig. 10 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure;
fig. 11 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure;
FIG. 12 is a flow chart diagram of a sliding window queue maintenance method of the present disclosure;
fig. 13 is a block diagram of an electronic device for implementing the lane line processing method of the embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic flow chart of a lane line processing method according to an embodiment of the present disclosure. The method can comprise the following steps:
s101, according to the relation between the vehicle position of each frame to be processed and the current vehicle position, determining observation points to be processed of a lane line in each frame to be processed and a constraint mode corresponding to the observation points to be processed;
s102, converting the observation point to be processed into a corresponding target observation point according to a current vehicle coordinate system;
s103, establishing lane line constraints corresponding to the target observation point according to the constraint mode corresponding to the observation point to be processed and the target observation point;
and S104, obtaining a curve model of the lane line according to the lane line constraint corresponding to the target observation point.
In the disclosed embodiment, a road image can be acquired by a camera of the vehicle, for example, a front-view camera at the front of the vehicle body. The camera can be set to acquire one frame of road image every certain distance, or to acquire road images continuously. The road image collected by the camera can be identified by the vehicle controller or a server connected to the vehicle controller, and the observation points of the lane lines in the road image are extracted. Then, the observation points of the 2D lane lines on the road image can be converted into observation points of 3D lane lines in the vehicle coordinate system (also called the vehicle body coordinate system) by means such as IPM. In this example, a frame to be processed may include the 3D coordinates of the observation points of a lane line in one frame of image acquired by the camera on the road. The frame to be processed may be a key frame such as a history frame or the current frame. A history frame may include the 3D coordinates of the observation points of a lane line in one frame of image acquired by the camera at a historical position on the road. The current frame may include the 3D coordinates of the observation points of a lane line in one frame of image acquired by the camera at the current position on the road. The historical position may be a position through which the vehicle traveled to reach the current position.
In the disclosed embodiment, the frame to be processed may include coordinates of observation points of one or more lane lines. If one frame to be processed includes the coordinates of the observation points of the plurality of lane lines, the coordinates of the observation points of each lane line may be divided into a group. By using the coordinates of a set of observation points, a curve model of the set of corresponding lane lines can be fitted. In addition, part or all of the observation points in the frame to be processed can be used as the observation points to be processed required for fitting the curve model of the lane line.
In one example, the distance between the captured images may be set in advance, the distance traveled by the vehicle may be determined according to the vehicle travel speed, time, and the like, and one frame of road image may be captured each time the set distance is satisfied. According to the road images collected by the camera at a plurality of positions, a plurality of frames to be processed can be obtained. In another example, the camera may collect multiple frames of images continuously according to time, and then sample all the collected images to obtain road images at multiple positions, thereby obtaining multiple frames to be processed. The plurality of frames to be processed can be saved in a sliding window mode. For example, one or more key frames are saved in a sliding window. Each key frame includes coordinates of observation points of one or more lane lines extracted from one frame of the road image. And the key frame in the sliding window can be updated by using the current frame after the current frame is collected.
In one example, in generating the lane line model, the pending frames may include all keyframes in a sliding window. The vehicle position of the latest frame in the sliding window can be taken as the current vehicle position, and the vehicle pose of the latest frame can be taken as the current vehicle pose.
According to the conversion relation between the vehicle pose of the frame to be processed and the current vehicle pose, the 3D coordinates of the observation points to be processed of the lane line of the frame to be processed can be converted into the current vehicle coordinate system (also called the vehicle coordinate system of the current frame) to obtain the target observation points. For example, in the 3D coordinates of an observation point, the x-axis coordinate may represent the coordinate along the forward direction of the vehicle, the y-axis coordinate the coordinate in the left-right direction of the vehicle, and the z-axis coordinate the coordinate in the height direction of the vehicle. In addition, the z-axis coordinate of each converted target observation point may be set to zero, and a curve model may then be fitted with the x-axis and y-axis coordinates of the converted target observation points. This corresponds to fitting the curve model using the converted 2D coordinates of the target observation points.
In the disclosed embodiment, a conversion formula can be used to convert the observation points of a frame to be processed (key frame) into the current vehicle coordinate system. An example of the conversion formula is as follows:

$$P_i = T_{ch} P_i^h = T_{wc}^{-1} T_{wh} P_i^h$$

For example, the vehicle pose of the key frame in the sliding window is $T_{wh}$ and the current vehicle pose is $T_{wc}$, so the conversion relation between the current vehicle pose and the vehicle pose of the key frame is $T_{ch} = T_{wc}^{-1} T_{wh}$. The coordinates of the i-th observation point in the key frame are $P_i^h$. When cubic-curve modeling is performed on the lane line, the observation points of the key frames in the sliding window must be converted into the same coordinate system; that is, the converted coordinate point in the current vehicle coordinate system is $P_i$. The z-axis coordinate of $P_i$ can then be set to zero to obtain the two-dimensional coordinates of the i-th observation point, $p_i = [x_i, y_i]^T$. Subsequently, $p_i$ is used to construct the various constraints of the lane line, and the curve model of the lane line is obtained by fitting. The curve model of the lane line may take various forms, such as a quadratic curve model or a cubic curve model.
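A minimal sketch of this conversion, assuming 4x4 homogeneous pose matrices; all names are illustrative:

```python
import numpy as np

def to_current_frame(P_h: np.ndarray, T_wh: np.ndarray, T_wc: np.ndarray) -> np.ndarray:
    """Convert key-frame observation points into the current vehicle frame:
    P_i = T_ch * P_i^h with T_ch = T_wc^{-1} T_wh, then drop the z axis.

    P_h:  (N, 3) observation points in the key frame's vehicle frame.
    T_wh: 4x4 world pose of the key frame's vehicle.
    T_wc: 4x4 world pose of the current vehicle.
    """
    T_ch = np.linalg.inv(T_wc) @ T_wh
    ones = np.ones((P_h.shape[0], 1))
    P = np.hstack([P_h, ones])            # homogeneous coordinates
    P_c = (T_ch @ P.T).T[:, :3]
    return P_c[:, :2]                     # p_i = [x_i, y_i]^T (z set to zero)
```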
In the embodiment of the disclosure, the constraint mode of each observation point to be processed of the lane line is determined from the vehicle position, and the lane line constraint corresponding to the target observation point converted from each observation point to be processed can then be established, so that the fitting accuracy of the curve model of the lane line is improved by using lane line constraints related to the vehicle position.
Fig. 2 is a schematic flow chart of a lane line processing method according to another embodiment of the present disclosure. The method of this embodiment includes one or more features of the lane line processing method embodiments described above. In one possible implementation, S101 includes at least one of:
s201a, when the distance between the vehicle position of the frame to be processed and the current vehicle position is greater than the first threshold, extracting a first observation point from the first area of the frame to be processed. And the constraint mode corresponding to the first observation point comprises point-to-line distance constraint.
S201b, when the distance between the vehicle position of the frame to be processed and the current vehicle position is less than or equal to the first threshold, extracting a second observation point from the second area of the frame to be processed, and extracting a third observation point from the third area of the frame to be processed. The constraint mode corresponding to the second observation point comprises point-to-line distance constraint; and the constraint mode corresponding to the third observation point comprises corresponding direction consistent constraint and/or curvature consistent constraint.
In the disclosed embodiment, the frame to be processed may include one or more observation points of lane lines. One or more observation points belonging to the same lane line can be divided into a group. For example, the observation point to be processed in the frame to be processed may be a 3D coordinate. The observation points to be processed can be selected from the set area in the frame to be processed according to the position of the vehicle where the frame to be processed is located.
In one example, assume the current vehicle position is 0 m (meters). If the distance between the vehicle position of a key frame and the current vehicle position is less than some threshold, e.g., 10 m, the key frame is a new key frame, close to the current vehicle position. The current frame is also a new key frame. If the distance between the vehicle position of a key frame and the current vehicle position is greater than or equal to the threshold, the key frame is an old key frame, farther from the current vehicle position. For an old key frame, a first observation point within a portion of the first region, e.g., 0 to 15 meters, may be extracted, and the first observation point may use a point-to-line distance constraint (which may be called simply a point-to-line constraint). Observation points in other areas of old key frames, such as those beyond 15 meters, may be discarded. For a new key frame, different constraint modes can be adopted for observation points in different areas. For example, a second observation point in a second region of the new key frame, e.g., 0 to 30 meters, may use a point-to-line constraint, while a third observation point in a third region, e.g., beyond 30 meters, uses a direction-consistency constraint and a curvature-consistency constraint.
In the embodiment of the disclosure, the required observation points are extracted from the set area of the frame to be processed through the vehicle position, so that appropriate constraints can be constructed for the positions of the vehicle, such as the front, the back, the distance and the like, and the fitting accuracy of the lane line curve model is improved.
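For illustration, a sketch of this selection rule under the example figures above (a 10 m old/new split and 0 to 15 m and 0 to 30 m region bounds); the thresholds, region bounds, and names are illustrative values from the example, not fixed by the method:

```python
def select_observations(points, frame_dist, old_new_split=10.0,
                        old_region=15.0, near_region=30.0):
    """Assign a constraint mode to each observation point of one frame.

    points: iterable of (x, y) points, x along the driving direction.
    frame_dist: distance between the frame's vehicle position and the
        current vehicle position.
    Returns (point, constraint_mode) pairs; out-of-range points of old
    frames are discarded.
    """
    selected = []
    for x, y in points:
        if frame_dist > old_new_split:                 # old key frame
            if 0.0 <= x <= old_region:
                selected.append(((x, y), "point_to_line"))
        else:                                          # new key frame
            if 0.0 <= x <= near_region:
                selected.append(((x, y), "point_to_line"))
            elif x > near_region:
                selected.append(((x, y), "direction_and_curvature"))
    return selected
```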
After the constraint mode corresponding to the observation point to be processed in the frame to be processed is determined, the observation point to be processed in the frame to be processed can be converted according to the conversion relation between the vehicle poses to obtain a new 3D coordinate of the target observation point, and the z-axis coordinate in the coordinate of the target observation point can be set to be 0. And then, constructing corresponding lane line constraints by using the converted x-axis coordinates and y-axis coordinates of the target observation points. If the constraint mode corresponding to a certain observation point to be processed A1 is point-to-line constraint, the constraint mode of B1 obtained after the conversion of A1 is also point-to-line constraint. If the constraint mode corresponding to a certain observation point A2 to be processed is the direction consistency constraint and the curvature consistency constraint, the constraint mode of B2 obtained after the conversion of A2 is also the direction consistency constraint and the curvature consistency constraint. That is, the observation points before and after the coordinate conversion are constrained in the same manner.
In one possible embodiment, S103 includes at least one of:
s203a, establishing point-to-line distance constraint corresponding to the target observation point according to the converted target observation point of the first observation point or the second observation point;
s203b, establishing a direction consistency constraint and/or a curvature consistency constraint corresponding to the target observation point according to the target observation point converted by the third observation point.
In the embodiment of the disclosure, the frame to be processed where a first observation point is located, for example an old key frame, is farther from the current vehicle position, and this type of frame may select only part of its observation points to establish point-to-line distance constraints. The frame to be processed where the second and third observation points are located, such as a new key frame, is closer to the current vehicle position; this type of frame can select some observation points to establish point-to-line distance constraints and others to establish direction-consistency and/or curvature-consistency constraints, so that the fitted curve model has high accuracy not only in the far area in front of the vehicle but also in the areas behind and near the vehicle.
In one possible embodiment, the point-to-line distance constraint corresponding to the target observation point is determined based on a first vector corresponding to the x-axis coordinate of the target observation point, a y-axis coordinate, and a second vector corresponding to the cubic curve coefficient.
For example, the target observation point is the i-th converted observation point $p_i = [x_i, y_i]^T$ of lane line A in the frame to be processed. The point-to-line distance constraint constructed from this observation point may be

$$e_{\mathrm{dist}} = y_i - \mathbf{x}_i^T \mathbf{c}$$

where $y_i$ is the y-axis coordinate of the observation point $p_i$ and $\mathbf{x}_i$ is the first vector corresponding to the observation point $p_i$. The first vector contains the terms of the curve model without their coefficients; e.g., for the cubic curve model,

$$\mathbf{x}_i = [1, x_i, x_i^2, x_i^3]^T$$

and $\mathbf{c}$ is the second vector corresponding to the curve model coefficients. The second vector contains the coefficients of the curve model; e.g., for the cubic curve model, $\mathbf{c} = [c_0, c_1, c_2, c_3]^T$, where $T$ denotes transposition. The first and second observation points are generally located in the near areas behind and in front of the vehicle, and the point-to-line distance constraints corresponding to the target observation points converted from them can improve the fitting accuracy of the curve model of the lane line in the areas behind and near the vehicle.
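A minimal sketch of this residual, assuming the form $e = y_i - \mathbf{x}_i^T \mathbf{c}$ reconstructed above:

```python
import numpy as np

def cubic_basis(x: float) -> np.ndarray:
    """First vector: the cubic curve terms without coefficients."""
    return np.array([1.0, x, x**2, x**3])

def point_to_line_residual(p, c) -> float:
    """e = y_i - x_i^T c: offset of the observation from the curve along y."""
    x, y = p
    return y - cubic_basis(x) @ c
```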
For example, the observation point in the new keyframe may be in front of or at a close distance from the vehicle. Different constraints can be established for the target observation points in the new keyframe according to different regions, for example, a target observation point which is in front of the vehicle and is closer to the vehicle uses a point-to-line distance constraint, and an observation point which is in front of the vehicle but is farther from the vehicle uses a direction consistent constraint and a curvature consistent constraint.
For the far area in front of the vehicle, the detection accuracy of the perceived lane line position degrades with observation distance. However, the observation distance has little influence on the detection accuracy of the lane line's geometric shape, so direction-consistency and curvature-consistency constraints on the forward and/or far observation points can be used to preserve the geometric fitting accuracy of the curve model at a distance.
In one possible embodiment, the direction-consistent constraint corresponding to the target observation point is determined based on a first derivative of a y-axis coordinate of the target observation point, a first derivative of a first vector corresponding to an x-axis coordinate of the target observation point, and a second vector corresponding to the curve model coefficient.
Wherein the first derivative of the y-axis coordinate of the target observation point is determined according to the previous observation point of the target observation point and the target observation point.
For example, the target observation points are the (i-1)-th and i-th converted observation points of lane line A in the frame to be processed, $p_{i-1} = [x_{i-1}, y_{i-1}]^T$ and $p_i = [x_i, y_i]^T$. The direction-consistency constraint constructed from these two observation points may be

$$e_{\mathrm{dir}} = y_i' - \mathbf{x}_i'^T \mathbf{c}$$

where the first derivative of the y-axis coordinate is approximated from the two observation points as

$$y_i' = \frac{y_i - y_{i-1}}{x_i - x_{i-1}}$$

and $\mathbf{x}_i'$ is the first derivative of the first vector $\mathbf{x}_i$ corresponding to the x-axis coordinate of the observation point $p_i$. For the cubic curve model, $\mathbf{c} = [c_0, c_1, c_2, c_3]^T$ and

$$\mathbf{x}_i' = [0, 1, 2x_i, 3x_i^2]^T$$
In the embodiment of the disclosure, the direction-consistency constraint corresponding to the target observation point is established using the first derivative of the y-axis coordinate of the target observation point, the first derivative of the first vector corresponding to the x-axis coordinate of the target observation point, and the second vector corresponding to the curve model coefficients, so that the direction-related fitting accuracy of the curve model in the far area in front of the vehicle can be improved.
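A corresponding sketch of the direction residual, using the finite-difference slope between consecutive observations as reconstructed above; names are illustrative:

```python
import numpy as np

def direction_residual(p_prev, p, c) -> float:
    """e = y_i' - x_i'^T c: numerical slope between consecutive observations
    minus the analytic slope of the cubic at x_i."""
    slope_obs = (p[1] - p_prev[1]) / (p[0] - p_prev[0])   # y_i'
    x = p[0]
    dbasis = np.array([0.0, 1.0, 2.0 * x, 3.0 * x**2])    # x_i'
    return slope_obs - dbasis @ c
```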
In one possible embodiment, the curvature consistency constraint corresponding to the target observation point is established based on the curvature of the observation point and the curvature of the cubic curve.
Wherein the observation point curvature is determined based on the target observation point, the observation point before it, and the observation point after it.
The cubic curve curvature is determined based on the first and second derivatives of the first vector corresponding to the x-axis coordinate of the target observation point, and the second vector corresponding to the curve model coefficients.
For example, the target observation points are the (i-1)-th, i-th, and (i+1)-th converted observation points of lane line A in the frame to be processed: $p_{i-1} = [x_{i-1}, y_{i-1}]^T$, $p_i = [x_i, y_i]^T$, and $p_{i+1} = [x_{i+1}, y_{i+1}]^T$. The curvature-consistency constraint constructed from these three observation points may be:

$$e_{\mathrm{curv}} = \kappa_i^{\mathrm{obs}} - \kappa_i$$

where the observation point curvature may be the curvature of the circle through the three observation points,

$$\kappa_i^{\mathrm{obs}} = \frac{2\left|(p_i - p_{i-1}) \times (p_{i+1} - p_i)\right|}{|p_i - p_{i-1}| \, |p_{i+1} - p_i| \, |p_{i+1} - p_{i-1}|}$$

and the curvature of the cubic curve at $x_i$ is

$$\kappa_i = \frac{\left|\mathbf{x}_i''^T \mathbf{c}\right|}{\left(1 + (\mathbf{x}_i'^T \mathbf{c})^2\right)^{3/2}}$$

where $\mathbf{x}_i''$ is the second derivative of the first vector $\mathbf{x}_i$ corresponding to the x-axis coordinate of the observation point $p_i$, and $|\cdot|$ denotes the Euclidean distance between two observation points. For the cubic curve model, $\mathbf{c} = [c_0, c_1, c_2, c_3]^T$, $\mathbf{x}_i' = [0, 1, 2x_i, 3x_i^2]^T$, and

$$\mathbf{x}_i'' = [0, 0, 2, 6x_i]^T$$
In the embodiment of the disclosure, the curvature-consistency constraint corresponding to the target observation point is established using the observation point curvature at the target observation point and the cubic curve curvature, so that the curvature-related fitting accuracy of the curve model in the far area in front of the vehicle can be improved.
In the above example, the subscript i indicates only the order of the observation points, and does not indicate that the first observation point is the same as the second observation point. Generally speaking, in the same lane line in the same frame to be processed, the first observation point and the second observation point are different observation points.
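A sketch of the curvature residual. The observation point curvature here is the standard three-point (Menger) curvature, which matches the textual definition but is an assumed reconstruction of the patent's formula:

```python
import numpy as np

def menger_curvature(a, b, c) -> float:
    """Curvature of the circle through three points:
    4 * triangle area / product of the three pairwise distances."""
    a, b, c = map(np.asarray, (a, b, c))
    cross = abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))  # 2 * area
    d = np.linalg.norm(b - a) * np.linalg.norm(c - b) * np.linalg.norm(c - a)
    return 2.0 * cross / d

def curvature_residual(p_prev, p, p_next, coef) -> float:
    """e = kappa_obs - kappa_curve, with kappa_curve = |y''| / (1 + y'^2)^1.5
    evaluated analytically on the cubic at x_i."""
    x = p[0]
    dy  = np.array([0.0, 1.0, 2*x, 3*x**2]) @ coef   # x_i'^T c
    ddy = np.array([0.0, 0.0, 2.0, 6*x]) @ coef      # x_i''^T c
    kappa_curve = abs(ddy) / (1.0 + dy**2) ** 1.5
    return menger_curvature(p_prev, p, p_next) - kappa_curve
```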
In one possible implementation, taking the cubic curve model as an example, S104 includes:
and S204, constructing a nonlinear least square formula of the cubic curve model according to at least one of point-to-line distance constraint, direction consistency constraint and curvature consistency constraint corresponding to the target observation point.
And S205, iteratively solving the nonlinear least square formula to obtain the value of each coefficient in the cubic curve model of the lane line.
For example, with reference to the point-to-line distance constraint, direction-consistency constraint, and curvature-consistency constraint in the above examples, the nonlinear least-squares formulation of the cubic curve model may be constructed as follows:

$$\mathbf{c}_k^* = \arg\min_{\mathbf{c}_k} \left( \sum_{i=1}^{M} \left\| e_{\mathrm{dist},i} \right\|_{\Omega}^2 + \sum_{j=1}^{N} \left( \left\| e_{\mathrm{dir},j} \right\|_{\Omega}^2 + \left\| e_{\mathrm{curv},j} \right\|_{\Omega}^2 \right) \right)$$

where argmin denotes the value of the variable that minimizes the expression that follows, $\mathbf{c}_k$ is the coefficient vector of the cubic curve fitted for the k-th lane line, M is the number of lane line observations in the areas behind and near the vehicle (for example, the number of first and second observation points on the lane line), N is the number of lane line observations in the far area in front of the vehicle (for example, the number of third observation points on the lane line), and $\Omega$ is the weight of the error term.
For example, the nonlinear least-squares formulation can be iteratively solved using the LM (Levenberg-Marquardt) method. Each iteration yields an increment Δc that reduces the total error, until the total error no longer decreases significantly; the iteration is then considered converged and the lane line solution complete. The values of the coefficients in the cubic curve model at convergence are thereby obtained.
In the embodiment of the disclosure, a nonlinear least square formula of a cubic curve model is constructed by using point-to-line distance constraint, direction consistency constraint and curvature consistency constraint, and the cubic curve model obtained by fitting has high precision not only in the front and far regions of a vehicle, but also in the rear and near regions of the vehicle.
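A sketch of the overall solve, reusing the residual helpers from the earlier sketches. SciPy's built-in Levenberg-Marquardt solver stands in for a hand-written LM loop, and uniform weights stand in for Ω:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_lane_line(near_points, far_triples, c0=None) -> np.ndarray:
    """Solve for c = [c0, c1, c2, c3]^T by stacking all residual types.

    near_points: (M, 2) observations given point-to-line residuals.
    far_triples: list of (p_prev, p, p_next) tuples given direction and
                 curvature residuals in the far region.
    """
    if c0 is None:
        c0 = np.zeros(4)

    def residuals(c):
        r = [point_to_line_residual(p, c) for p in near_points]
        for a, b, d in far_triples:
            r.append(direction_residual(a, b, c))
            r.append(curvature_residual(a, b, d, c))
        return np.asarray(r)

    # method="lm" is SciPy's Levenberg-Marquardt; it needs at least as many
    # residuals as variables. Per-term weights (Omega) could be applied by
    # scaling individual residuals.
    return least_squares(residuals, c0, method="lm").x
```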
Fig. 3 is a schematic flow chart of a lane line processing method according to another embodiment of the present disclosure. The lane line processing method may include:
s301, according to the relation between the current frame and the key frame in the sliding window, the current frame is pressed into the sliding window. In the embodiment of the disclosure, the key frames in the sliding window are maintained according to the relationship between the current frame and the key frames in the sliding window, which is beneficial to keeping the proper key frames (for example, keeping the key frames with proper spacing) in the sliding window, and can reduce the data processing amount and improve the accuracy of the curve model obtained by fitting the key frames in the sliding window.
In one example, the area modeled by the lane lines may include a range of areas in front of and behind the vehicle. To ensure modeling precision and speed, the sliding window length and the sampling distance interval need to be set reasonably. The sliding window may be a bidirectional queue (double-ended queue), i.e., a queue that can be operated at both ends, allowing fast insertion and deletion at the head of the queue as well as at the tail.
The lane line processing method of the present embodiment may be combined with the lane line processing method of the above-described embodiments. In a possible implementation manner, the frame to be processed in the above embodiment is a key frame in a sliding window of a lane line. As shown in fig. 4, the lane line processing method may further include: s401, fitting according to the observation points of the key frames in the sliding window to obtain a curve model of the lane line.
In one possible implementation, S401 may include S101 to S104 in the embodiment shown in fig. 1. Reference is specifically made to the description related to fig. 1 to fig. 3, which is not repeated herein.
Fig. 5 is a schematic flow chart diagram of a lane line processing method according to another embodiment of the present disclosure. The method of this embodiment includes one or more features of the lane line processing method embodiments described above. In one possible implementation, S401 includes:
s501, pressing the current frame into the sliding window under the condition that the number of the key frames in the sliding window is smaller than the length N of the sliding window; wherein N is greater than or equal to 1.
In the embodiment of the present disclosure, the sliding window may be a queue, and the length of the sliding window, i.e., the length of the queue, is N. The sliding window length may represent the total number of frames in the queue that can be pushed. It may first be determined whether the sliding window is full. If the number of key frames in the queue is less than N, it indicates that the queue is not full, i.e. the sliding window is not full. When the sliding window is not full, the newly acquired current frame can be directly pushed into the sliding window, namely into the queue. For example, the sliding window may include an identifier of a key frame, an identifier of each lane line in the key frame, and information such as an identifier and coordinates of each observation point in each lane line. By comparing the number of the key frames in the sliding window with the length of the sliding window, the key frames with the set number can be stored in the sliding window, and the number of the key frames participating in the fitting of the subsequent lane line can be configured conveniently.
In one possible implementation, S401 includes:
s502, under the condition that the number of the key frames in the sliding window is equal to the length of the sliding window, deleting the first frame or the Nth frame from the sliding window according to the relative movement distance between the current frame and the Nth frame in the sliding window, and then pressing the current frame into the sliding window.
In the embodiment of the present disclosure, if the number of key frames in the queue is equal to N, the queue is full, i.e., the sliding window is full. When the sliding window is full, the newly acquired current frame cannot be pushed directly into the sliding window (the queue); a frame in the sliding window must be deleted before the current frame is pushed in. If the sliding window is a bidirectional queue, either the first frame or the Nth frame of the queue may be deleted. Which frame is deleted may be determined according to the relative motion distance between the current frame and a frame in the sliding window, the relative motion distance between adjacent frames in the sliding window, and the like. When the number of key frames in the sliding window equals the sliding window length, deleting a frame based on the relative motion distances before pushing in the current frame keeps the number of key frames within the window length, keeps the relative motion distances between frames in the window appropriate, and reduces repeated data.
In one possible implementation, as shown in fig. 6, S502 includes at least one of:
s601, under the condition that the relative movement distance between the current frame and the Nth frame in the sliding window is larger than a second threshold value, deleting the first frame from the sliding window, and then pressing the current frame into the sliding window.
S602, when the relative motion distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative motion distance between the Nth frame in the sliding window and the (N-1) th frame is larger than a third threshold value, deleting the first frame from the sliding window and then pressing the current frame into the sliding window;
s603, when the relative motion distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative motion distance between the Nth frame in the sliding window and the (N-1) th frame is smaller than or equal to a third threshold value, deleting the Nth frame from the sliding window, and then pressing the current frame into the sliding window.
For example, it is determined whether the relative movement distance between the current frame and the Nth frame in the sliding window is greater than a set second threshold. If so, the current frame is far enough from the Nth frame in the sliding window that the overlapping data is small, so old data can be deleted first, such as the first frame in the sliding window (the head of the queue). Otherwise, the current frame is not far enough from the Nth frame and the overlapping data is large, so whether to delete the first frame or the Nth frame is decided according to the relative motion distance between the last two frames in the sliding window.
For example, it is determined whether the relative movement distance between the Nth frame and the (N-1)th frame in the sliding window is greater than a set third threshold. If so, the Nth frame in the sliding window is far enough from the (N-1)th frame that the overlapping data is small, so old data can be deleted first, such as the first frame in the sliding window (the head of the queue). Otherwise, the Nth frame is not far enough from the (N-1)th frame and the overlapping data is large, so the Nth frame (the tail of the queue) may be deleted; after the current frame is pushed into the sliding window, it becomes the new Nth frame, and the distance between the new Nth frame and the (N-1)th frame is greater than that between the original Nth frame and the (N-1)th frame.
By comparing the relative motion distance between the current frame and the key frame in the sliding window and the relative motion distance between the adjacent key frames in the sliding window, the key frame with larger relative motion distance can be reserved in the sliding window, repeated data is reduced, and the processing efficiency of lane line fitting is improved.
Referring to the above example, after the current frame is acquired, the data in the sliding window may be maintained in the manner in the above example. The relative motion distance between two adjacent key frames in the sliding window can be more suitable through the second threshold and/or the third threshold.
Fig. 7 is a schematic structural diagram of a lane line processing apparatus according to an embodiment of the present disclosure, which may include:
a constraint determining module 701, configured to determine, according to a relationship between a vehicle position of each frame to be processed and a current vehicle position, an observation point to be processed of a lane line in each frame to be processed and a constraint manner corresponding to the observation point to be processed;
a conversion module 702, configured to convert the observation point to be processed into a corresponding target observation point according to the current vehicle coordinate system;
a constraint establishing module 703, configured to establish lane line constraints corresponding to the target observation point according to the constraint mode and the target observation point corresponding to the observation point to be processed;
and a lane line generating module 704, configured to obtain a curve model of the lane line according to the lane line constraint corresponding to the target observation point.
Fig. 8 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure. The apparatus of this embodiment includes one or more features of the lane line processing apparatus embodiments described above. In a possible implementation, the constraint determining module 701 is further configured to perform at least one of:
under the condition that the distance between the vehicle position of the frame to be processed and the current vehicle position is larger than a first threshold value, extracting a first observation point from a first area of the frame to be processed; the constraint mode corresponding to the first observation point comprises point-to-line distance constraint;
under the condition that the distance between the vehicle position of the frame to be processed and the current vehicle position is smaller than or equal to a first threshold value, extracting a second observation point from a second area of the frame to be processed, and extracting a third observation point from a third area of the frame to be processed; the constraint mode corresponding to the second observation point comprises point-to-line distance constraint; and the constraint mode corresponding to the third observation point comprises corresponding direction consistent constraint and/or curvature consistent constraint.
In a possible implementation, the constraint establishing module 703 is further configured to perform at least one of the following:
establishing point-to-line distance constraint corresponding to the target observation point according to the target observation point converted from the first observation point or the second observation point;
and establishing direction consistent constraint and/or curvature consistent constraint corresponding to the target observation point according to the target observation point converted by the third observation point.
In one possible embodiment, the point-to-line distance constraint corresponding to the target observation point is determined based on a first vector corresponding to the x-axis coordinate of the target observation point, a y-axis coordinate, and a second vector corresponding to the cubic curve coefficient.
In one possible implementation, the direction-consistent constraint corresponding to the target observation point is determined based on a first derivative of a y-axis coordinate of the target observation point, a first derivative of a first vector corresponding to an x-axis coordinate of the target observation point, and a second vector corresponding to a curve model coefficient;
wherein the first derivative of the y-axis coordinate of the target observation point is determined according to the previous observation point of the target observation point and the target observation point.
In one possible implementation, the curvature consistency constraint corresponding to the target observation point is established based on the curvature of the observation point and the curvature of the cubic curve;
wherein the curvature of the observation point is determined based on a previous observation point of the target observation point, and a subsequent observation point of the target observation point;
the cubic curve curvature is determined based on the first and second derivatives of the first vector corresponding to the x-axis coordinate of the second observation point and the second vector corresponding to the curve model coefficient.
In one possible implementation, the lane line generation module 704 includes:
a construction submodule 801 configured to construct a nonlinear least square formula of a cubic curve model according to at least one of a point-to-line distance constraint, a direction coincidence constraint and a curvature coincidence constraint corresponding to the target observation point;
and a solving submodule 802, configured to iteratively solve the nonlinear least square formula to obtain values of each coefficient in the cubic curve model of the lane line.
Fig. 9 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure. The device includes:
a sliding window maintenance module 901, configured to press the current frame into the sliding window according to the relationship between the current frame and the key frame in the sliding window.
In a possible implementation manner, the frame to be processed in the above-mentioned embodiment of the lane line processing apparatus in fig. 7 or fig. 8 is a key frame in a sliding window of the lane line.
In one possible implementation, as shown in fig. 10, the lane line processing apparatus may include a lane line fitting module 1001 configured to fit a curve model of a lane line according to the observation points of the keyframes in the sliding window.
In a possible implementation, the lane line fitting module 1001 may include the constraint determining module 701, the converting module 702, the constraint establishing module 703 and the lane line generating module 704 of the lane line processing apparatus of fig. 7 or fig. 8, and the related functions of the respective modules may be as described in the foregoing description of the embodiments.
In one possible embodiment, as shown in fig. 11, the sliding window maintenance module 901 includes:
a first push submodule 1101, configured to push the current frame into the sliding window if the number of key frames in the sliding window is smaller than the sliding window length N; wherein N is greater than or equal to 1.
In one possible implementation, the sliding window maintenance module 901 includes:
the second pushing sub-module 1102 is configured to, when the number of the key frames in the sliding window is equal to the length of the sliding window, push the current frame into the sliding window after deleting the first frame or the nth frame from the sliding window according to the relative movement distance between the current frame and the nth frame in the sliding window.
In one possible implementation, the second push submodule 1102 is configured to perform at least one of:
under the condition that the relative movement distance between the current frame and the Nth frame in the sliding window is larger than a second threshold value, after deleting the first frame from the sliding window, pressing the current frame into the sliding window;
when the relative motion distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative motion distance between the Nth frame and the (N-1) th frame in the sliding window is larger than a third threshold value, deleting the first frame from the sliding window and then pressing the current frame into the sliding window;
and under the condition that the relative motion distance between the current frame and the Nth frame in the sliding window is less than or equal to a second threshold value and the relative motion distance between the Nth frame and the (N-1) th frame in the sliding window is less than or equal to a third threshold value, pressing the current frame into the sliding window after deleting the Nth frame from the sliding window.
For example, the sliding window maintenance module 901 may further include a judgment sub-module. Firstly, the judging submodule judges whether the relative movement distance between the current frame and the Nth frame in the sliding window is larger than a second threshold value. If so, the second push submodule 1102 may push the current frame into the sliding window after deleting the first frame from the sliding window. Otherwise, the judgment sub-module judges whether the relative motion distance between the Nth frame and the (N-1) th frame in the sliding window is larger than a third threshold value. If so, the second push submodule 1102 may push the current frame into the sliding window after deleting the first frame from the sliding window; otherwise, the second push submodule 1102 may push the current frame into the sliding window after deleting the nth frame from the sliding window.
For a description of specific functions and examples of each module and sub-module of the apparatus in the embodiment of the present disclosure, reference may be made to the description of corresponding steps in the foregoing method embodiments, and details are not repeated here.
Since lane line modeling will directly affect the behavior of the entire vehicle, the stability and accuracy of its modeling is critical. According to different implementation frames, the current lane line modeling can be divided into a lane line modeling method based on filtering, a lane line modeling method based on single frame optimization, a lane line modeling method based on sliding window optimization, and the like.
The perceived 3D lane line technique suffers from lane line detection accuracy that degrades with distance from the host vehicle; for example, accuracy at near range is at the centimeter level while accuracy at far range is at the meter level. In addition, the 3D lane lines are very susceptible to illumination changes, extrinsic-calibration accuracy, road bumps, and the like. Filtering and single-frame optimization approaches struggle to handle these problems. In the lane line modeling method based on sliding window optimization, multi-frame 3D lane line observations (i.e., the observation points of a lane line) are collected at a set sampling interval to construct the optimization constraints of the curve model. Because the collected lane line observations jointly participate in constructing the optimization constraints, the susceptibility of single-frame lane line observations to noise is reduced. That is, sliding window optimization solves the lane line model using a fixed number of frame observations and has greater robustness and flexibility in dealing with noise. The lane lines behind the vehicle (the area the vehicle has already traveled) and in near areas are generally observed well, while the lane lines far in front of the vehicle carry less observed information and are observed with worse quality. To ensure overall modeling accuracy, embodiments of the present disclosure may perform the following when constructing constraints:
(1) the lane line observation of the adjacent frames in the sliding window usually has a certain overlapping area, so that when the lane line is modeled in the rear and near areas of a vehicle, the higher modeling precision can be realized only by using the observation of the key frames in the sliding window in a certain range near the vehicle.
(2) After the modeling precision of the lane line in the rear area and the near area of the vehicle is ensured, the prior information that the lane line is generally a smooth cubic curve is obtained according to the high-speed lane line. The constraint of far lane line observation on the curve position can be removed, and only the curve trend and the geometric shape constraint of the far lane line observation are reserved.
Length maintenance of sliding window
As shown in fig. 12, in the embodiment of the present disclosure, the sliding window maintenance may be performed by using a bidirectional queue, for example, the following steps may be included:
s1200, acquiring lane line observation of the current frame. One or more lane line observations may be included in the current frame. One or more observation points may be included in the lane line observation.
S1201: judge whether the number of frames in the current sliding window has reached the threshold T1 (i.e., whether the window is full); if so, execute S1202; if not, execute S1207. T1 may be equal to the sliding window length. The sliding window may be a bidirectional queue, and the sliding window length may be the queue length. T1 may represent the total number of key frames with lane line observations that can be stored in the sliding window.
S1202: judge whether the relative motion distance between the current frame and the newest frame in the sliding window is greater than the threshold T2; if so, execute S1203; if not, execute S1204.
S1203: remove the oldest key frame from the sliding window, then execute S1207.
S1204: judge whether the relative motion distance between the newest frame and the second-newest frame in the sliding window is greater than the threshold T3; if so, execute S1205; if not, execute S1206.
S1205: remove the oldest key frame from the sliding window, then execute S1207.
S1206: remove the newest key frame from the sliding window, then execute S1207.
S1207: push the current frame into the sliding window.
The curve model of the lane line can then be non-linearly optimized from the keyframes in the sliding window.
The thresholds T1, T2, and T3 in this example may be the same or different.
The lane line modeling area covers a certain range in front of and behind the vehicle, so to guarantee both modeling accuracy and speed, the sliding window length and the sampling distance interval need to be set reasonably. The sliding window is implemented as a bidirectional queue. When the sliding window is not full, e.g., the number of frames within the sliding window is less than the threshold T1, the new current frame is pushed directly into the queue. When the sliding window is full, it is judged whether the relative motion distance between the current frame and the newest frame in the queue is greater than the threshold T2; if so, the oldest frame in the queue is removed. Otherwise, it is judged whether the relative motion distance between the newest frame and the second-newest frame in the queue is greater than the threshold T3; if so, the oldest frame in the queue is removed, otherwise the newest frame in the queue is removed. Finally, the current frame is pushed into the queue.
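As an illustrative sketch only (not part of the original disclosure), the window maintenance above might be implemented in Python with a deque; the frame representation, the odometry-based distance function, and the threshold values here are all assumptions:

```python
from collections import deque

# Hypothetical thresholds; the disclosure leaves T1, T2, T3 unspecified.
T1 = 10      # sliding window length (max number of key frames), assumed >= 2
T2 = 2.0     # travel-distance threshold (m) vs. the newest frame
T3 = 2.0     # travel-distance threshold (m) between the two newest frames

def relative_motion_distance(frame_a, frame_b):
    """Placeholder: distance traveled between two frames' vehicle poses."""
    return abs(frame_a["odometry"] - frame_b["odometry"])

def maintain_sliding_window(window: deque, current_frame) -> None:
    """Push current_frame into the window following steps S1201-S1207."""
    if len(window) < T1:                                        # S1201: not full
        window.append(current_frame)                            # S1207
        return
    newest, second_newest = window[-1], window[-2]
    if relative_motion_distance(current_frame, newest) > T2:    # S1202
        window.popleft()                                        # S1203: drop oldest
    elif relative_motion_distance(newest, second_newest) > T3:  # S1204
        window.popleft()                                        # S1205: drop oldest
    else:
        window.pop()                                            # S1206: drop newest
    window.append(current_frame)                                # S1207
```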
Nonlinear optimization of lane line
Depending on where the lane line observation is located, three different constraints may be used in the optimization. For example, the lane lines in the areas behind and near the vehicle adopt the point-to-line distance constraint, while the lane lines in the far area in front of the vehicle adopt the direction consistency constraint and the curvature consistency constraint.
For example, the lane line model uses the cubic curve model $y = c_3 x^3 + c_2 x^2 + c_1 x + c_0$. The optimized variable is the vector of cubic curve coefficients $\mathbf{c} = [c_0, c_1, c_2, c_3]^T$, the two-dimensional observation point used for fitting the curve is $p_i = [x_i, y_i]^T$, and the vector constructed based on $x_i$ is

$$\mathbf{x}_i = \left[1,\; x_i,\; x_i^2,\; x_i^3\right]^T,$$

whose first derivative with respect to $x_i$ is

$$\mathbf{x}_i' = \left[0,\; 1,\; 2x_i,\; 3x_i^2\right]^T,$$

and whose second derivative is

$$\mathbf{x}_i'' = \left[0,\; 0,\; 2,\; 6x_i\right]^T.$$
Suppose the vehicle pose of a certain key frame contained in the sliding window is $T_{wh}$ and the current vehicle pose in the sliding window is $T_{wc}$; the transformation between the current vehicle pose and the key frame's vehicle pose may then be $T_{ch} = T_{wc}^{-1} T_{wh}$, and a sample point in the key frame is

$$P_i^h = \left[x_i^h,\; y_i^h,\; z_i^h\right]^T.$$

When performing cubic curve modeling on the lane line, the observation points of the lane line from the key frames in the sliding window need to be converted into the same coordinate system, namely the current vehicle coordinate system. Denoting the converted coordinate point by $P_i$, the coordinate transformation formula may be:

$$P_i = T_{ch}\, P_i^h,$$

where $P_i^h$ is taken in homogeneous coordinates.
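For illustration only, the pose transformation and reduction to 2D might be sketched with NumPy as follows; the 4×4 homogeneous pose representation and the function names are assumptions, not the original implementation:

```python
import numpy as np

def to_current_frame(T_wc: np.ndarray, T_wh: np.ndarray, pts_h: np.ndarray) -> np.ndarray:
    """Convert key-frame observation points into the current vehicle frame.

    T_wc, T_wh: 4x4 homogeneous poses of the current frame and the key frame.
    pts_h: (N, 3) observation points in the key frame's coordinate system.
    Returns (N, 2) points with the z-coordinate set to zero (dropped).
    """
    T_ch = np.linalg.inv(T_wc) @ T_wh                     # T_ch = T_wc^-1 * T_wh
    homog = np.hstack([pts_h, np.ones((pts_h.shape[0], 1))])
    P = (T_ch @ homog.T).T                                # P_i = T_ch * P_i^h
    return P[:, :2]                                       # keep the 2D coordinates
```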
The z-axis coordinate of $P_i$ can then be set to zero to obtain the two-dimensional coordinate of the i-th observation point, $p_i = [x_i, y_i]^T$. According to the characteristics of perceived lane line detection accuracy, as the vehicle moves from back to front, more key frames in the sliding window will have observed the lane lines in the areas behind and near the vehicle, and these observations have a large overlapping area. Therefore, only the segment of each key frame's observation with higher near-range detection accuracy is used to construct point-to-line constraints on the cubic curve, which ensures the fitting accuracy of the position and geometric shape of the cubic curve. For the far area in front of the vehicle, the detection accuracy of the perceived lane line is poor owing to the observation distance; the observation distance, however, has little influence on the detection accuracy of the lane line's geometric shape, so this part of the observation points can be used to guarantee the geometric shape fitting accuracy of the cubic curve at a distance (via the direction consistency constraint and the curvature consistency constraint). In this way, the overall fitting accuracy of the cubic curve in the areas behind, near, and far in front of the vehicle can be well guaranteed. The constraint mode corresponding to an observation point may be determined according to the relationship between the key frame and the current vehicle position, and the like; reference may be made to the relevant description of the above embodiments.
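As a minimal sketch of how the constraint mode could be assigned per observation point, the following function mirrors the region logic above; the threshold values and the near/far boundary are hypothetical placeholders (the disclosure only specifies a comparison against a first threshold):

```python
def assign_constraint_mode(keyframe_distance: float, x_coord: float,
                           first_threshold: float = 30.0,
                           near_limit: float = 20.0) -> str:
    """Choose the constraint type for one observation point.

    keyframe_distance: distance between the key frame's vehicle position
                       and the current vehicle position.
    x_coord: the point's x-coordinate in the current vehicle frame.
    """
    if keyframe_distance > first_threshold:
        # Older key frame: only its high-accuracy near segment is used,
        # constraining the curve position.
        return "point_to_line"
    # Recent key frame: split its observation into near and far regions.
    if x_coord <= near_limit:
        return "point_to_line"
    return "direction_and_curvature"
```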
(1) Point-to-line distance constraint:

$$r_{d,i} = y_i - \mathbf{x}_i^T \mathbf{c},$$

where $y_i$ is the y-axis coordinate of the observation point $p_i$ and $\mathbf{x}_i$ is the first vector corresponding to the observation point $p_i$.
(2) Direction consistency constraint:

$$r_{t,i} = y_i' - \mathbf{x}_i'^T \mathbf{c},$$

where the first derivative of the y-axis coordinate of $p_i$ is determined from the previous observation point and $p_i$ itself,

$$y_i' = \frac{y_i - y_{i-1}}{x_i - x_{i-1}},$$

and $\mathbf{x}_i'$ is the first derivative of the first vector $\mathbf{x}_i$ corresponding to the x-axis coordinate of the observation point $p_i$. For the cubic curve model with $\mathbf{c} = [c_0, c_1, c_2, c_3]^T$, the direction of the curve at $x_i$ is

$$\mathbf{x}_i'^T \mathbf{c} = c_1 + 2 c_2 x_i + 3 c_3 x_i^2.$$
(3) Curvature consistency constraint:

$$r_{k,i} = k_i - \kappa_i,$$

where the curvature of the observation point, $k_i$, is calculated based on the (i-1)-th, i-th, and (i+1)-th observation points, for example via the circle circumscribing the three points:

$$k_i = \frac{2\,\bigl|(p_i - p_{i-1}) \times (p_{i+1} - p_i)\bigr|}{\lVert p_i - p_{i-1}\rVert\, \lVert p_{i+1} - p_i\rVert\, \lVert p_{i+1} - p_{i-1}\rVert},$$

and the curvature of the cubic curve is

$$\kappa_i = \frac{\bigl|\mathbf{x}_i''^T \mathbf{c}\bigr|}{\bigl(1 + (\mathbf{x}_i'^T \mathbf{c})^2\bigr)^{3/2}},$$

where $\mathbf{x}_i''$ is the second derivative of the first vector $\mathbf{x}_i$ corresponding to the x-axis coordinate of the observation point $p_i$, and $\lVert\cdot\rVert$ represents the Euclidean distance between two observation points.
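To make the three residuals concrete, here is a minimal sketch evaluating them for a coefficient vector c; it follows the formulas reconstructed above, including the assumed finite-difference slope and circumscribed-circle curvature:

```python
import numpy as np

def basis(x):       # first vector x_i = [1, x, x^2, x^3]^T
    return np.array([1.0, x, x * x, x ** 3])

def basis_d1(x):    # first derivative of the first vector
    return np.array([0.0, 1.0, 2.0 * x, 3.0 * x * x])

def basis_d2(x):    # second derivative of the first vector
    return np.array([0.0, 0.0, 2.0, 6.0 * x])

def point_to_line_residual(c, p):
    x, y = p
    return y - basis(x) @ c

def direction_residual(c, p_prev, p):
    (x0, y0), (x1, y1) = p_prev, p
    y_d1 = (y1 - y0) / (x1 - x0)          # observed slope from two points
    return y_d1 - basis_d1(x1) @ c

def curvature_residual(c, p_prev, p, p_next):
    a = np.subtract(p, p_prev)
    b = np.subtract(p_next, p)
    ab = np.subtract(p_next, p_prev)
    cross = abs(a[0] * b[1] - a[1] * b[0])
    k_obs = 2.0 * cross / (np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(ab))
    x = p[0]
    k_curve = abs(basis_d2(x) @ c) / (1.0 + (basis_d1(x) @ c) ** 2) ** 1.5
    return k_obs - k_curve
```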
Using the above three constraints, the following nonlinear least squares problem for the cubic curve model (which may be referred to as a nonlinear least squares formula, nonlinear least squares function, objective function, etc.) may be constructed, for example:

$$\mathbf{c}_k^* = \arg\min_{\mathbf{c}_k} \left( \sum_{i=1}^{M} \omega \bigl\lVert y_i - \mathbf{x}_i^T \mathbf{c}_k \bigr\rVert^2 + \sum_{j=1}^{N} \omega \bigl\lVert y_j' - \mathbf{x}_j'^T \mathbf{c}_k \bigr\rVert^2 + \sum_{j=1}^{N} \omega \bigl\lVert k_j - \kappa_j \bigr\rVert^2 \right),$$

where argmin denotes the value of the variable that minimizes the expression that follows, $\mathbf{c}_k$ is the fitted cubic curve coefficient vector of the k-th lane line, M is the number of lane line observations (i.e., lane line observation points) in the areas behind and near the vehicle, N is the number of lane line observations in the far area in front of the vehicle, and $\omega$ is the weight of the error term.
In the solving process, the Levenberg-Marquardt method can be adopted to iteratively solve the least squares problem. Each iteration yields an increment Δc that reduces the total error; when the total error no longer decreases significantly, the iteration is considered to have converged and the lane line is solved.
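Continuing the sketch, the stacked residuals could be minimized with SciPy's least_squares, whose method="lm" option uses Levenberg-Marquardt; this reuses the residual helpers and basis functions from the sketch above, and the synthetic data and uniform weight are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def stacked_residuals(c, near_pts, far_pts, weight=1.0):
    """Stack weighted residuals for all three constraint types."""
    res = [weight * point_to_line_residual(c, p) for p in near_pts]
    for j in range(1, len(far_pts) - 1):
        res.append(weight * direction_residual(c, far_pts[j - 1], far_pts[j]))
        res.append(weight * curvature_residual(c, far_pts[j - 1], far_pts[j], far_pts[j + 1]))
    return np.array(res)

# Example usage with synthetic observation points on y = 0.5 + 0.1 x:
near = [(x, 0.5 + 0.1 * x) for x in np.linspace(-20, 20, 15)]
far = [(x, 0.5 + 0.1 * x) for x in np.linspace(20, 60, 10)]
c0 = np.zeros(4)                                  # initial coefficient guess
sol = least_squares(stacked_residuals, c0, args=(near, far), method="lm")
print(sol.x)                                      # fitted [c0, c1, c2, c3]
```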
Aimed at the problems of perceived 3D lane line observation, the lane line processing method proposed by the embodiments of the present disclosure is a lane line modeling method based on sliding window optimization, which can greatly improve the algorithm's ability to handle noise. In addition, different constraints are constructed for lane line observations in different areas: the point-to-line distance constraint is adopted for lane lines in the areas behind and near the vehicle, ensuring the fitting accuracy of the lane line position and geometric shape, while the direction consistency constraint and the curvature consistency constraint are adopted for the far lane line in front of the vehicle, ensuring the fitting accuracy of the lane line trend and geometric shape and avoiding the problems caused by the inaccurate position of far perceived 3D lane lines. The scheme of the embodiments of the present disclosure is used in autonomous driving scenarios, provides important support for the back-end optimization of lane line modeling in the ANP (Apollo Navigation Pilot) project, and guarantees the stability of vehicle control planning.
In the technical solution of the present disclosure, the acquisition, storage, application, and the like of the personal information of the users involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 13 illustrates a schematic block diagram of an example electronic device 1300 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 13, the device 1300 includes a computing unit 1301 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1302 or a computer program loaded from a storage unit 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, various programs and data necessary for the operation of the device 1300 can also be stored. The computing unit 1301, the ROM 1302, and the RAM 1303 are connected to each other via a bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.
A number of components in the device 1300 connect to the I/O interface 1305, including: an input unit 1306 such as a keyboard, a mouse, or the like; an output unit 1307 such as various types of displays, speakers, and the like; storage unit 1308, such as a magnetic disk, optical disk, or the like; and a communication unit 1309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 1309 allows the device 1300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1301 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 1301 executes the respective methods and processes described above, such as the lane line processing method. For example, in some embodiments, the lane line processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1308. In some embodiments, some or all of the computer program may be loaded onto and/or installed onto the device 1300 via the ROM 1302 and/or the communication unit 1309. When the computer program is loaded into the RAM 1303 and executed by the computing unit 1301, one or more steps of the lane line processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1301 may be configured to perform the lane line processing method in any other suitable manner (e.g., by way of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (25)

1. A lane line processing method, comprising:
determining observation points to be processed of the lane lines in each frame to be processed and a constraint mode corresponding to the observation points to be processed according to the relationship between the vehicle position of each frame to be processed and the current vehicle position;
converting the observation points to be processed into corresponding target observation points according to the current vehicle coordinate system;
according to the constraint mode corresponding to the observation point to be processed and the target observation point, establishing lane line constraint corresponding to the target observation point;
and obtaining a curve model of the lane line according to the lane line constraint corresponding to the target observation point.
2. The method according to claim 1, wherein determining the observation points to be processed of the lane lines in each frame to be processed and the constraint mode corresponding to the observation points to be processed according to the relationship between the vehicle position of each frame to be processed and the current vehicle position comprises at least one of:
extracting a first observation point from a first area of the frame to be processed when the distance between the vehicle position of the frame to be processed and the current vehicle position is greater than a first threshold value; the constraint mode corresponding to the first observation point comprises point-to-line distance constraint;
under the condition that the distance between the vehicle position of the frame to be processed and the current vehicle position is smaller than or equal to a first threshold value, extracting a second observation point from a second area of the frame to be processed, and extracting a third observation point from a third area of the frame to be processed; the constraint mode corresponding to the second observation point comprises point-to-line distance constraint; and the constraint mode corresponding to the third observation point comprises corresponding direction consistent constraint and/or curvature consistent constraint.
3. The method according to claim 2, wherein establishing lane line constraints corresponding to the target observation points according to the constraint modes corresponding to the observation points to be processed and the target observation points comprises at least one of:
according to the target observation point converted from the first observation point or the second observation point, establishing point-to-line distance constraint corresponding to the target observation point;
and establishing direction consistent constraint and/or curvature consistent constraint corresponding to the target observation point according to the target observation point converted by the third observation point.
4. The method of claim 3, wherein the point-to-line distance constraint corresponding to the target observation point is determined based on a first vector corresponding to an x-axis coordinate of the target observation point, a y-axis coordinate, and a second vector corresponding to a cubic curve coefficient.
5. The method of claim 3 or 4, wherein the direction consistent constraint corresponding to the target observation point is determined based on a first derivative of a y-axis coordinate of the target observation point, a first derivative of a first vector corresponding to an x-axis coordinate of the target observation point, and a second vector corresponding to a curve model coefficient;
wherein a first derivative of the y-axis coordinate of the target observation point is determined from an observation point prior to the target observation point and the target observation point.
6. The method of any one of claims 3 to 5, wherein the curvature conformance constraint corresponding to the target observation point is established based on an observation point curvature and a cubic curve curvature;
wherein the observation point curvature is determined based on an observation point prior to the target observation point, and an observation point subsequent to the target observation point;
the cubic curve curvature is determined based on first and second derivatives of a first vector corresponding to the x-axis coordinate of the second observation point and a second vector corresponding to a curve model coefficient.
7. The method according to any one of claims 1 to 6, wherein obtaining a curve model of the lane line according to the lane line constraint corresponding to the target observation point comprises:
constructing a nonlinear least square formula of a cubic curve model according to at least one of point-to-line distance constraint, direction consistent constraint and curvature consistent constraint corresponding to the target observation point;
and iteratively solving the nonlinear least square formula to obtain the value of each coefficient in the cubic curve model of the lane line.
8. The method of any of claims 1-7, the frame to be processed comprising a key frame in a sliding window of a lane line, the method further comprising:
and pressing the current frame into the sliding window according to the relationship between the current frame and the key frame in the sliding window.
9. The method of claim 8, wherein pushing the current frame into the sliding window according to the relationship between the current frame and the key frame in the sliding window comprises:
pressing the current frame into the sliding window under the condition that the number of the key frames in the sliding window is smaller than the length N of the sliding window; wherein N is greater than or equal to 1.
10. The method according to claim 8 or 9, wherein pushing the current frame into the sliding window according to the relationship between the current frame and the key frame in the sliding window comprises:
and under the condition that the number of the key frames in the sliding window is equal to the length of the sliding window, pressing the current frame into the sliding window after deleting the first frame or the Nth frame from the sliding window according to the relative movement distance between the current frame and the Nth frame in the sliding window.
11. The method of claim 10, wherein pushing the current frame into the sliding window after deleting a first frame or an nth frame from the sliding window according to a relative motion distance between the current frame and the nth frame in the sliding window comprises at least one of:
under the condition that the relative movement distance between the current frame and the Nth frame in the sliding window is larger than a second threshold value, after deleting the first frame from the sliding window, pressing the current frame into the sliding window;
when the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame and the (N-1) th frame in the sliding window is larger than a third threshold value, deleting the first frame from the sliding window and then pressing the current frame into the sliding window;
and under the condition that the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame in the sliding window and the (N-1) th frame is smaller than or equal to a third threshold value, after deleting the Nth frame from the sliding window, pressing the current frame into the sliding window.
12. A lane line processing apparatus comprising:
the constraint determining module is used for determining observation points to be processed of the lane lines in each frame to be processed and a constraint mode corresponding to the observation points to be processed according to the relationship between the vehicle position of each frame to be processed and the current vehicle position;
the conversion module is used for converting the observation points to be processed into corresponding target observation points according to the current vehicle coordinate system;
the constraint establishing module is used for establishing lane line constraints corresponding to the target observation points according to the constraint modes corresponding to the observation points to be processed and the target observation points;
and the lane line generation module is used for obtaining a curve model of the lane line according to the lane line constraint corresponding to the target observation point.
13. The apparatus of claim 12, wherein the constraint determination module is further configured to perform at least one of:
extracting a first observation point from a first area of the frame to be processed when the distance between the vehicle position of the frame to be processed and the current vehicle position is greater than a first threshold value; the constraint mode corresponding to the first observation point comprises point-to-line distance constraint;
under the condition that the distance between the vehicle position of the frame to be processed and the current vehicle position is smaller than or equal to a first threshold value, extracting a second observation point from a second area of the frame to be processed, and extracting a third observation point from a third area of the frame to be processed; the constraint mode corresponding to the second observation point comprises point-to-line distance constraint; and the constraint mode corresponding to the third observation point comprises corresponding direction consistent constraint and/or curvature consistent constraint.
14. The apparatus of claim 13, wherein the constraint establishing module is further configured to perform at least one of:
establishing point-to-line distance constraint corresponding to the target observation point according to the converted target observation point of the first observation point or the second observation point;
and establishing direction consistent constraint and/or curvature consistent constraint corresponding to the target observation point according to the target observation point converted by the third observation point.
15. The apparatus of claim 14, wherein the point-to-line distance constraint corresponding to the target observation point is determined based on a first vector corresponding to an x-axis coordinate of the target observation point, a y-axis coordinate, and a second vector corresponding to a cubic curve coefficient.
16. The apparatus of claim 13 or 14, wherein the direction consistent constraint corresponding to the target observation point is determined based on a first derivative of a y-axis coordinate of the target observation point, a first derivative of a first vector corresponding to an x-axis coordinate of the target observation point, and a second vector corresponding to a curve model coefficient;
wherein the first derivative of the y-axis coordinate of the target observation point is determined from the previous observation point of the target observation point and the target observation point.
17. The apparatus of any one of claims 14 to 16, wherein the curvature conformance constraint corresponding to the target observation point is established based on an observation point curvature and a cubic curve curvature;
wherein the observation point curvature is determined based on an observation point prior to the target observation point, and an observation point subsequent to the target observation point;
the cubic curve curvature is determined based on first and second derivatives of a first vector corresponding to the x-axis coordinate of the second observation point and a second vector corresponding to a curve model coefficient.
18. The apparatus of any of claims 12 to 17, the lane line generation module, comprising:
the construction submodule is used for constructing a nonlinear least square formula of a cubic curve model according to at least one of point-to-line distance constraint, direction consistency constraint and curvature consistency constraint corresponding to the target observation point;
and the solving submodule is used for iteratively solving the nonlinear least square method formula to obtain the value of each coefficient in the cubic curve model of the lane line.
19. The apparatus of any of claims 12 to 18, the frame to be processed comprising a key frame in a sliding window of a lane line, the apparatus further comprising:
and the sliding window maintenance module is used for pressing the current frame into the sliding window according to the relationship between the current frame and the key frame in the sliding window.
20. The apparatus of claim 19, wherein the sliding window maintenance module comprises:
the first push submodule is used for pushing the current frame into the sliding window under the condition that the number of the key frames in the sliding window is smaller than the length N of the sliding window; wherein N is greater than or equal to 1.
21. The apparatus of claim 19 or 20, wherein the sliding window maintenance module comprises:
and the second pushing submodule is used for pushing the current frame into the sliding window after deleting the first frame or the Nth frame from the sliding window according to the relative movement distance between the current frame and the Nth frame in the sliding window under the condition that the number of the key frames in the sliding window is equal to the length of the sliding window.
22. The apparatus of claim 21, wherein the second push submodule is to perform at least one of:
under the condition that the relative movement distance between the current frame and the Nth frame in the sliding window is larger than a second threshold value, after deleting the first frame from the sliding window, pressing the current frame into the sliding window;
when the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame and the (N-1) th frame in the sliding window is larger than a third threshold value, deleting the first frame from the sliding window and then pressing the current frame into the sliding window;
and under the condition that the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame in the sliding window and the (N-1) th frame is smaller than or equal to a third threshold value, after deleting the Nth frame from the sliding window, pressing the current frame into the sliding window.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.
CN202210828297.5A 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium Active CN115116019B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310403461.2A CN116486354B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium
CN202210828297.5A CN115116019B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210828297.5A CN115116019B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310403461.2A Division CN116486354B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115116019A true CN115116019A (en) 2022-09-27
CN115116019B CN115116019B (en) 2023-08-01

Family

ID=83332380

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310403461.2A Active CN116486354B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium
CN202210828297.5A Active CN115116019B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310403461.2A Active CN116486354B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (2) CN116486354B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012058983A (en) * 2010-09-08 2012-03-22 Fuji Heavy Ind Ltd Lane estimating device
CN103940434A (en) * 2014-04-01 2014-07-23 西安交通大学 Real-time lane line detecting system based on monocular vision and inertial navigation unit
CN107194342A (en) * 2017-05-16 2017-09-22 西北工业大学 Method for detecting lane lines based on inverse perspective mapping
US20180060677A1 (en) * 2016-08-29 2018-03-01 Neusoft Corporation Method, apparatus and device for detecting lane lines
CN108845343A (en) * 2018-07-03 2018-11-20 河北工业大学 The vehicle positioning method that a kind of view-based access control model, GPS are merged with high-precision map
CN112818778A (en) * 2021-01-21 2021-05-18 北京地平线机器人技术研发有限公司 Lane line fitting method, lane line fitting device, lane line fitting medium, and electronic apparatus
CN113435392A (en) * 2021-07-09 2021-09-24 阿波罗智能技术(北京)有限公司 Vehicle positioning method and device applied to automatic parking and vehicle
CN113551664A (en) * 2021-08-02 2021-10-26 湖北亿咖通科技有限公司 Map construction method and device, electronic equipment and storage medium
CN113932796A (en) * 2021-10-15 2022-01-14 北京百度网讯科技有限公司 High-precision map lane line generation method and device and electronic equipment
CN114140759A (en) * 2021-12-08 2022-03-04 阿波罗智能技术(北京)有限公司 High-precision map lane line position determining method and device and automatic driving vehicle

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156979B (en) * 2010-12-31 2012-07-04 上海电机学院 Method and system for rapid traffic lane detection based on GrowCut
CN111316284A (en) * 2019-02-13 2020-06-19 深圳市大疆创新科技有限公司 Lane line detection method, device and system, vehicle and storage medium
CN112084822A (en) * 2019-06-14 2020-12-15 富士通株式会社 Lane detection device and method and electronic equipment
CN111444778B (en) * 2020-03-04 2023-10-17 武汉理工大学 Lane line detection method
CN112560680A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Lane line processing method and device, electronic device and storage medium
CN114111769A (en) * 2021-11-15 2022-03-01 杭州海康威视数字技术股份有限公司 Visual inertial positioning method and device and automatic driving device
CN114018274B (en) * 2021-11-18 2024-03-26 阿波罗智能技术(北京)有限公司 Vehicle positioning method and device and electronic equipment
CN113807333B (en) * 2021-11-19 2022-03-18 智道网联科技(北京)有限公司 Data processing method and storage medium for detecting lane line

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHU Hongyu et al.: "Fast lane line detection algorithm based on cascaded Hough transform", vol. 31, no. 01, pages 88-93
LI Jing; SHI Xinxin; CHENG Zhipeng; WANG Junzheng: "Road detection and localization based on multi-channel fusion and epipolar constraint", vol. 40, no. 08, pages 867-872
SHI Linjun et al.: "Lane line detection method based on Hough transform under multiple constraint conditions", vol. 26, no. 09, pages 9-12

Also Published As

Publication number Publication date
CN116486354B (en) 2024-04-16
CN116486354A (en) 2023-07-25
CN115116019B (en) 2023-08-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant