CN115116019B - Lane line processing method, device, equipment and storage medium

Lane line processing method, device, equipment and storage medium

Info

Publication number
CN115116019B
CN115116019B (application number CN202210828297.5A)
Authority
CN
China
Prior art keywords
observation point
frame
sliding window
constraint
processed
Prior art date
Legal status
Active
Application number
CN202210828297.5A
Other languages
Chinese (zh)
Other versions
CN115116019A (en)
Inventor
王丕阁
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202310403461.2A (published as CN116486354B)
Priority to CN202210828297.5A (published as CN115116019B)
Publication of CN115116019A
Application granted
Publication of CN115116019B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Abstract

The disclosure provides a lane line processing method, apparatus, device, and storage medium, relating to the field of computer technology and in particular to autonomous driving, computer vision, lane line detection, and the like. A specific implementation scheme is as follows: determining, according to the relation between the vehicle position of each frame to be processed and the current vehicle position, the observation points to be processed of the lane lines in each frame to be processed and the constraint modes corresponding to those observation points; converting the observation points to be processed into corresponding target observation points in the current vehicle coordinate system; establishing the lane line constraints corresponding to the target observation points according to the constraint modes of the observation points to be processed and the target observation points; and obtaining a curve model of the lane line according to the lane line constraints corresponding to the target observation points. In the embodiments of the present disclosure, lane line constraints related to vehicle position can be used to improve the fitting accuracy of the curve model of a lane line.

Description

Lane line processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to the fields of autonomous driving, computer vision, lane line detection, and the like.
Background
In high-speed autonomous driving, a vehicle must be controlled to perform adaptive cruising according to the surrounding lane line information. When no high-precision map is available, the vehicle collects road images with a forward-looking camera, and a perception component extracts the lane lines in the images, for example by deep learning. The 2D (two-dimensional) lane lines in the image are then converted into 3D (three-dimensional) lane lines in the vehicle body coordinate system using inverse perspective mapping (IPM). A single-frame 3D lane line has a short measurement range and is susceptible to noise, so it cannot be used directly. Lane line modeling therefore typically fits the lane lines to a cubic curve, recovering lane line information around the vehicle that is close to the true state, for use in assigning obstacles to lanes, vehicle control planning, and so on.
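As an illustration of the IPM step above, the following is a minimal sketch assuming a pre-calibrated 3x3 image-to-ground homography; the function name and the placeholder matrix are illustrative, not part of the disclosure:

```python
import numpy as np

def ipm_project(points_uv: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map 2D pixel coordinates (u, v) of detected lane-line points onto
    the ground plane (z = 0 in the vehicle body frame) via homography H."""
    uv1 = np.hstack([points_uv, np.ones((len(points_uv), 1))])  # homogeneous
    xyw = uv1 @ H.T
    return xyw[:, :2] / xyw[:, 2:3]  # dehomogenize -> (x, y) on the road

# Toy usage; a real H comes from camera intrinsic/extrinsic calibration.
H = np.eye(3)
print(ipm_project(np.array([[320.0, 400.0]]), H))
```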
Disclosure of Invention
The disclosure provides a lane line processing method, a lane line processing device, lane line processing equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a lane line processing method including:
determining the to-be-processed observation points of the lane lines in each to-be-processed frame and the constraint modes corresponding to the to-be-processed observation points according to the relation between the vehicle position of each to-be-processed frame and the current vehicle position;
converting the observation point to be processed into a corresponding target observation point according to a current vehicle coordinate system;
establishing lane line constraints corresponding to the target observation points according to constraint modes corresponding to the observation points to be processed and the target observation points;
and obtaining a curve model of the lane line according to lane line constraint corresponding to the target observation point.
According to another aspect of the present disclosure, there is provided a lane line processing apparatus including:
the constraint determining module is used for determining the to-be-processed observation points of the lane lines in each to-be-processed frame and the constraint modes corresponding to the to-be-processed observation points according to the relation between the vehicle position of each to-be-processed frame and the current vehicle position;
the conversion module is used for converting the observation point to be processed into a corresponding target observation point according to the current vehicle coordinate system;
the constraint establishing module is used for establishing lane line constraints corresponding to the target observation points according to constraint modes corresponding to the observation points to be processed and the target observation points;
the lane line generation module is used for obtaining a curve model of the lane line according to lane line constraint corresponding to the target observation point.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
In the embodiment of the disclosure, lane line constraints corresponding to the target observation points converted from the observation points to be processed can be established according to the constraint modes determined from the vehicle positions, so that lane line constraints related to vehicle position improve the fitting accuracy of the curve model of the lane line.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a lane line processing method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a lane line processing method according to another embodiment of the present disclosure;
FIG. 3 is a flow chart of a lane line processing method according to another embodiment of the present disclosure;
FIG. 4 is a flow chart of a lane line processing method according to another embodiment of the present disclosure;
FIG. 5 is a flow chart of a lane line processing method according to another embodiment of the present disclosure;
FIG. 6 is a flow chart of a lane line processing method according to another embodiment of the present disclosure;
FIG. 7 is a schematic structural view of a lane line processing apparatus according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure;
FIG. 9 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure;
FIG. 10 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure;
FIG. 11 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure;
FIG. 12 is a flow chart of a sliding window queue maintenance method of the present disclosure;
FIG. 13 is a block diagram of an electronic device for implementing a lane line processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flow chart of a lane line processing method according to an embodiment of the present disclosure. The method may include:
s101, determining a to-be-processed observation point of a lane line in each to-be-processed frame and a constraint mode corresponding to the to-be-processed observation point according to the relation between the vehicle position of each to-be-processed frame and the current vehicle position;
s102, converting the observation point to be processed into a corresponding target observation point according to a current vehicle coordinate system;
s103, establishing lane line constraints corresponding to the target observation points according to constraint modes corresponding to the observation points to be processed and the target observation points;
S104, obtaining a curve model of the lane line according to lane line constraint corresponding to the target observation point.
In the embodiment of the disclosure, the road image may be acquired by a camera of the vehicle, such as a forward-looking camera at the front of the vehicle body. The camera may be configured to collect one frame of road image at every set distance interval, or to collect road images continuously. The road images collected by the camera can be recognized by the vehicle controller, or by a server connected to the vehicle controller, to extract the observation points of the lane lines in each road image. The observation points of the 2D lane lines on the road image can then be converted by IPM or similar methods into observation points of 3D lane lines in the vehicle coordinate system (also called the vehicle body coordinate system). In this example, a frame to be processed may include the 3D coordinates of the observation points of the lane lines in one frame of image acquired by the camera on the road. A frame to be processed can be a key frame such as a historical frame or the current frame. A historical frame includes the 3D coordinates of the observation points of the lane lines in a frame of image acquired at a historical position of the camera on the road, and the current frame includes those acquired at the current position, where a historical position is a position the vehicle passed through while traveling to the current position.
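For concreteness, the data carried by a frame to be processed might be organized as below; this is a sketch under assumed field names (pose, lanes) rather than the disclosure's actual data layout:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class KeyFrame:
    """One frame to be processed: the vehicle pose at acquisition time plus
    the 3D lane-line observation points extracted from that road image."""
    pose: np.ndarray                            # 4x4 pose T_wh (body -> world)
    lanes: dict = field(default_factory=dict)   # lane id -> (N, 3) 3D points

# A frame observing one lane line (id 0) as three 3D points in the body frame.
frame = KeyFrame(pose=np.eye(4),
                 lanes={0: np.array([[5.0, 1.8, 0.0],
                                     [10.0, 1.9, 0.0],
                                     [15.0, 2.1, 0.0]])})
```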
In an embodiment of the present disclosure, a frame to be processed may include the coordinates of the observation points of one or more lane lines. If the coordinates of the observation points of multiple lane lines are included in one frame to be processed, the observation point coordinates of each lane line may be divided into a group, and the curve model of the lane line corresponding to each group is fitted using that group's observation point coordinates. In addition, some or all of the observation points in the frame to be processed may be taken as the observation points to be processed that are required for fitting the curve model of the lane line.
In one example, the sampling interval between captured images may be preset; the distance traveled by the vehicle is determined from the vehicle speed, travel time, and so on, and one frame of road image is captured each time the set interval is reached. From the road images acquired by the camera at multiple positions, multiple frames to be processed can be obtained. In another example, the camera may first collect multiple frames of images continuously over time and then sample all the collected images to obtain road images at multiple positions, thereby obtaining multiple frames to be processed. The multiple frames to be processed may be saved in a sliding window manner; for example, one or more key frames are saved in a sliding window, where each key frame includes the coordinates of the observation points of one or more lane lines extracted from one frame of road image. The key frames in the sliding window can be updated with the current frame after the current frame is acquired.
In one example, in generating the lane line model, the frames to be processed may include all key frames in the sliding window. The vehicle position of the latest frame in the sliding window may be taken as the current vehicle position, and the vehicle pose of the latest frame may be taken as the current vehicle pose.
According to the conversion relation between the vehicle pose of the frame to be processed and the current vehicle pose, the 3D coordinates of the observation points to be processed of the lane line in the frame can be converted into the current vehicle coordinate system (also called the vehicle coordinate system of the current frame) to obtain the target observation points. For example, among the 3D coordinates of an observation point, the x-axis coordinate may represent the vehicle's forward direction, the y-axis coordinate the vehicle's left-right direction, and the z-axis coordinate the vehicle's height direction. In addition, the z-axis coordinate of a converted target observation point can be set to zero, and the curve model fitted using the x-axis and y-axis coordinates of the converted target observation points. This is equivalent to fitting the curve model with the 2D coordinates of the converted target observation points.
In the disclosed embodiments, a conversion formula may be employed to convert the frame to be processed (key frame) into the current vehicle coordinate system. An example of a conversion formula is as follows:
For example, the vehicle pose of a key frame in the sliding window is $T_{wh}$ and the current vehicle pose is $T_{wc}$, so the conversion relation between the current vehicle pose and the vehicle pose of the key frame is $T_{ch} = T_{wc}^{-1} T_{wh}$. Let the coordinates of the $i$-th observation point in the key frame be $P_i^h$. When performing cubic curve modeling on the lane line, the observation points of the lane line from the key frames in the sliding window must be converted into the same coordinate system, namely the current vehicle coordinate system; the converted coordinate point is $P_i = T_{ch} P_i^h$. Then the z-axis coordinate of $P_i$ can be set to zero to obtain the two-dimensional coordinate of the $i$-th observation point, $p_i = [x_i, y_i]^T$. Subsequently, $p_i$ is used to construct the various constraints of the lane line, and the curve model of the lane line is obtained by fitting. The curve model of the lane line may take various forms, such as a quadratic curve model or a cubic curve model.
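A minimal sketch of this conversion, assuming 4x4 homogeneous pose matrices (the function name is illustrative):

```python
import numpy as np

def to_current_frame(T_wh: np.ndarray, T_wc: np.ndarray,
                     pts_h: np.ndarray) -> np.ndarray:
    """Convert (N, 3) observation points from a key frame's vehicle
    coordinate system into the current vehicle coordinate system.

    Implements T_ch = T_wc^-1 @ T_wh and P_i = T_ch @ P_i^h, then drops z,
    returning the two-dimensional points p_i = [x_i, y_i] used for fitting."""
    T_ch = np.linalg.inv(T_wc) @ T_wh
    pts = np.hstack([pts_h, np.ones((len(pts_h), 1))])  # homogeneous coords
    P = (pts @ T_ch.T)[:, :3]
    return P[:, :2]  # the z-axis coordinate is zeroed / discarded
```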
In the embodiment of the disclosure, lane line constraints corresponding to the target observation points converted from the observation points to be processed can be established according to the constraint modes determined from the vehicle positions, so that lane line constraints related to vehicle position improve the fitting accuracy of the curve model of the lane line.
Fig. 2 is a flow chart of a lane line processing method according to another embodiment of the present disclosure. The method of this embodiment includes one or more features of the lane-line processing method embodiments described above. In one possible embodiment, S101 includes at least one of:
S201a, extracting a first observation point from a first area of the frame to be processed when the distance between the vehicle position of the frame to be processed and the current vehicle position is greater than a first threshold value. The constraint mode corresponding to the first observation point comprises point-to-line distance constraint.
S201b, extracting a second observation point from a second area of the frame to be processed and extracting a third observation point from a third area of the frame to be processed when the distance between the vehicle position of the frame to be processed and the current vehicle position is smaller than or equal to a first threshold value. The constraint mode corresponding to the second observation point comprises point-to-line distance constraint; the constraint mode corresponding to the third observation point comprises corresponding direction consistency constraint and/or curvature consistency constraint.
In an embodiment of the present disclosure, observation points of one or more lane lines may be included in a frame to be processed. One or more observation points belonging to the same lane line may be divided into a group. For example, the observation point to be processed in the frame to be processed may be 3D coordinates. The observation point to be processed can be selected from a set area in the frame to be processed according to the vehicle position where the frame to be processed is located.
In one example, assuming a current vehicle position of 0m (meters), if the distance between the vehicle position in the key frame and the current vehicle position is less than a certain threshold, e.g., 10m, the key frame belongs to a new key frame that is relatively close to the current vehicle position. The current frame also belongs to the new key frame. If the distance between the vehicle location in the key frame and the current vehicle location is greater than or equal to a certain threshold, the key frame belongs to an old key frame that is farther from the current vehicle location. For old key frames, a portion of the first observation points within the first region, e.g., 0 to 15 meters, may be extracted, and the first observation points may employ a point-to-line distance constraint (which may be abbreviated as a point-to-line constraint). In addition, other areas of the old keyframe, for example observation points beyond 15 meters, may be discarded. For the new key frame, different constraint modes can be adopted according to observation points of different areas. For example, a second observation point for a second region of a new keyframe, e.g., 0-30 meters, may employ a point-to-line constraint, and a third observation point for a third region, e.g., 30 meters later, may employ a direction-consistent constraint and a curvature-consistent constraint.
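For illustration only, this region logic could be sketched as follows; the function name and constraint labels are assumptions, and the 10 m / 15 m / 30 m thresholds mirror the examples above:

```python
def assign_constraints(x_coords, frame_distance,
                       dist_threshold=10.0, old_range=15.0, near_range=30.0):
    """Select observation points and constraint modes for one frame's lane line.

    x_coords: forward (x-axis) coordinates of the observation points.
    frame_distance: distance from this frame's vehicle position to the
                    current vehicle position."""
    picks = []
    for i, x in enumerate(x_coords):
        if frame_distance > dist_threshold:          # old key frame
            if 0.0 <= x <= old_range:                # first region: 0-15 m
                picks.append((i, "point_to_line"))   # farther points discarded
        else:                                        # new key frame
            if 0.0 <= x <= near_range:               # second region: 0-30 m
                picks.append((i, "point_to_line"))
            elif x > near_range:                     # third region: > 30 m
                picks.append((i, "direction+curvature"))
    return picks
```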
In the embodiment of the disclosure, the required observation points are extracted from the set region of the frame to be processed through the vehicle position, so that proper constraint can be constructed for the front, rear, far, near and the like of the vehicle, and the fitting precision of the lane line curve model is improved.
After determining the constraint mode corresponding to an observation point to be processed in the frame to be processed, the observation point can be converted according to the conversion relation between the vehicle poses to obtain the new 3D coordinates of the target observation point, and the z-axis coordinate of the target observation point can be set to 0. The corresponding lane line constraints are then constructed using the x-axis and y-axis coordinates of the converted target observation point. If the constraint mode corresponding to an observation point A1 to be processed is the point-to-line constraint, the constraint mode of B1 obtained by converting A1 is also the point-to-line constraint. If the constraint mode corresponding to an observation point A2 to be processed is the direction consistency constraint and the curvature consistency constraint, the constraint mode of B2 obtained by converting A2 is also the direction consistency constraint and the curvature consistency constraint. That is, the constraint mode of an observation point is the same before and after coordinate conversion.
In one possible embodiment, S103 includes at least one of:
S203a, establishing the point-to-line distance constraint corresponding to the target observation point according to the target observation point converted from the first observation point or the second observation point;
S203b, establishing the direction consistency constraint and/or the curvature consistency constraint corresponding to the target observation point according to the target observation point converted from the third observation point.
In the embodiment of the disclosure, the frame to be processed in which the first observation point is located, such as an old key frame, is far from the current vehicle position, and only some observation points need be selected from this type of frame to establish point-to-line distance constraints. The frame to be processed in which the second and third observation points are located, such as a new key frame, is closer to the current vehicle position; for this type of frame, some observation points can be selected to establish point-to-line distance constraints and others to establish direction consistency and/or curvature consistency constraints, so that the fitted curve model has high accuracy both in the far area in front of the vehicle and in the areas behind and near the vehicle.
In one possible implementation, the point-to-line distance constraint corresponding to the target observation point is determined based on the first vector corresponding to the x-axis coordinate of the target observation point, the y-axis coordinate of the target observation point, and the second vector corresponding to the cubic curve coefficients.
For example, let the target observation point be the $i$-th converted observation point $p_i = [x_i, y_i]^T$ of lane line A in the frame to be processed. The point-to-line distance constraint constructed from this observation point may be $e_{dist} = y_i - \mathbf{x}_i^T \mathbf{c}$, where $y_i$ is the y-axis coordinate of observation point $p_i$ and $\mathbf{x}_i$ is the first vector corresponding to the x-axis coordinate of $p_i$. The first vector contains the coefficient-free entries of the curve model; for the cubic curve model, $\mathbf{x}_i = [1, x_i, x_i^2, x_i^3]^T$. $\mathbf{c}$ is the second vector corresponding to the curve model coefficients; for the cubic curve model, $\mathbf{c} = [c_0, c_1, c_2, c_3]^T$. $T$ denotes the transpose. The first and second observation points are generally located behind the vehicle or in the near area in front of it, so the point-to-line distance constraints corresponding to their converted target observation points improve the fitting accuracy of the curve model of the lane line behind the vehicle and in the near area ahead.
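A sketch of this residual, using the cubic-model basis vector defined above (function names are illustrative):

```python
import numpy as np

def basis(x: float) -> np.ndarray:
    """First vector of the cubic model: the coefficient-free entries."""
    return np.array([1.0, x, x ** 2, x ** 3])

def point_to_line_residual(p: np.ndarray, c: np.ndarray) -> float:
    """Point-to-line distance residual y_i - x_i^T c for p = [x_i, y_i]."""
    return p[1] - basis(p[0]) @ c
```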
For example, the observation point in the new keyframe may be in front of the vehicle or closer to the vehicle. Different constraints can be established for the target observation points in the new key frame according to different areas, for example, point-to-line distance constraints are used for the target observation points which are in front of the vehicle and are closer to the vehicle, and direction-consistent constraints and curvature-consistent constraints are used for the observation points which are in front of the vehicle but are farther from the vehicle.
For a far area in front of the vehicle, the detection accuracy of the perceived lane line at a far distance is poor under the influence of the observation distance. The observation distance has little influence on the detection precision of the geometric shape of the lane line, so that the geometric shape fitting precision of the curve model at a far distance can be ensured by using the direction consistency constraint and the curvature consistency constraint of the observation points at the front and/or the far distance.
In one possible implementation, the direction-consistent constraint corresponding to the target observation point is determined based on a first derivative of a y-axis coordinate of the target observation point, a first derivative of a first vector corresponding to an x-axis coordinate of the target observation point, and a second vector corresponding to a curve model coefficient.
The first derivative of the y-axis coordinate of the target observation point is determined according to the previous observation point of the target observation point and the target observation point.
For example, let the target observation point be the $i$-th converted observation point $p_i = [x_i, y_i]^T$ of lane line A in the frame to be processed, with previous observation point $p_{i-1} = [x_{i-1}, y_{i-1}]^T$. The direction consistency constraint constructed from the $(i-1)$-th and $i$-th observation points may be $e_{dir} = y_i' - \dot{\mathbf{x}}_i^T \mathbf{c}$, where the first derivative of the y-axis coordinate is $y_i' = (y_i - y_{i-1}) / (x_i - x_{i-1})$, and $\dot{\mathbf{x}}_i$ is the first derivative of the first vector $\mathbf{x}_i$ corresponding to the x-axis coordinate of observation point $p_i$; for the cubic curve model, $\dot{\mathbf{x}}_i = [0, 1, 2x_i, 3x_i^2]^T$ and $\mathbf{c} = [c_0, c_1, c_2, c_3]^T$.
In the embodiment of the disclosure, the direction consistency constraint corresponding to the target observation point is established using the first derivative of the y-axis coordinate of the target observation point, the first derivative of the first vector corresponding to the x-axis coordinate of the target observation point, and the second vector corresponding to the curve model coefficients, which can improve the direction-related fitting accuracy of the curve model in the far area in front of the vehicle.
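A corresponding sketch of the direction residual, with the finite-difference derivative described above:

```python
import numpy as np

def dbasis(x: float) -> np.ndarray:
    """First derivative of the first vector: d/dx [1, x, x^2, x^3]."""
    return np.array([0.0, 1.0, 2.0 * x, 3.0 * x ** 2])

def direction_residual(p_prev: np.ndarray, p: np.ndarray,
                       c: np.ndarray) -> float:
    """Direction consistency residual y'_i - xdot_i^T c, where y'_i is the
    finite difference between the target point and its previous point."""
    dy = (p[1] - p_prev[1]) / (p[0] - p_prev[0])
    return dy - dbasis(p[0]) @ c
```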
In one possible implementation, the curvature consistency constraint corresponding to the target observation point is established based on the observation point curvature and the cubic curve curvature.
The curvature of the observation point is determined based on the previous observation point of the target observation point, the target observation point and the subsequent observation point of the target observation point.
The cubic curve curvature is determined based on the first and second derivatives of the first vector corresponding to the x-axis coordinate of the target observation point and the second vector corresponding to the curve model coefficients.
For example, the target observation point is the $i$-th converted observation point $p_i = [x_i, y_i]^T$ of lane line A in the frame to be processed, with previous observation point $p_{i-1} = [x_{i-1}, y_{i-1}]^T$ and subsequent observation point $p_{i+1} = [x_{i+1}, y_{i+1}]^T$. The curvature consistency constraint constructed from the $(i-1)$-th, $i$-th, and $(i+1)$-th observation points may be $e_{cur} = \kappa_i - \kappa(x_i)$, where the observation point curvature may be taken as the three-point curvature $\kappa_i = \frac{2\,\lvert (p_i - p_{i-1}) \times (p_{i+1} - p_i) \rvert}{\lVert p_i - p_{i-1} \rVert \, \lVert p_{i+1} - p_i \rVert \, \lVert p_{i+1} - p_{i-1} \rVert}$ and the cubic curve curvature is $\kappa(x_i) = \frac{\ddot{\mathbf{x}}_i^T \mathbf{c}}{\left(1 + (\dot{\mathbf{x}}_i^T \mathbf{c})^2\right)^{3/2}}$, in which $\ddot{\mathbf{x}}_i$ is the second derivative of the first vector $\mathbf{x}_i$ corresponding to the x-axis coordinate of observation point $p_i$ (for the cubic curve model, $\ddot{\mathbf{x}}_i = [0, 0, 2, 6x_i]^T$), $\lVert \cdot \rVert$ denotes the Euclidean distance between two observation points, and $\mathbf{c} = [c_0, c_1, c_2, c_3]^T$.
In the embodiment of the disclosure, the curvature consistency constraint corresponding to the target observation point is established using the observation point curvature and the cubic curve curvature, which can improve the curvature-related fitting accuracy of the curve model in the far area in front of the vehicle.
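A sketch of the curvature residual; taking the three-point observation curvature to be the Menger curvature is an assumption, though it is consistent with the Euclidean-distance terms above:

```python
import numpy as np

def menger_curvature(a: np.ndarray, b: np.ndarray, c_pt: np.ndarray) -> float:
    """Observation-point curvature from three consecutive 2D points."""
    cross = (b[0] - a[0]) * (c_pt[1] - a[1]) - (b[1] - a[1]) * (c_pt[0] - a[0])
    d = (np.linalg.norm(b - a) * np.linalg.norm(c_pt - b)
         * np.linalg.norm(c_pt - a))
    return 2.0 * abs(cross) / d

def curve_curvature(x: float, c: np.ndarray) -> float:
    """Cubic-curve curvature y'' / (1 + y'^2)^(3/2) evaluated at x."""
    dx = np.array([0.0, 1.0, 2.0 * x, 3.0 * x ** 2])   # first-derivative basis
    ddx = np.array([0.0, 0.0, 2.0, 6.0 * x])           # second-derivative basis
    return (ddx @ c) / (1.0 + (dx @ c) ** 2) ** 1.5

def curvature_residual(p_prev, p, p_next, c) -> float:
    return menger_curvature(p_prev, p, p_next) - curve_curvature(p[0], c)
```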
In the above example, the index i indicates only the order of the observation points, and does not indicate that the first observation point is identical to the second observation point. Generally, in the same lane line in the same frame to be processed, the first observation point and the second observation point are different observation points.
In one possible implementation, taking the cubic curve model as an example, S104 includes:
S204, constructing a nonlinear least squares formulation of the cubic curve model according to at least one of the point-to-line distance constraint, direction consistency constraint, and curvature consistency constraint corresponding to the target observation point;
S205, iteratively solving the nonlinear least squares formulation to obtain the values of the coefficients in the cubic curve model of the lane line.
For example, combining the point-to-line distance constraints, direction consistency constraints, and curvature consistency constraints in the above examples, one example of the nonlinear least squares formulation of the cubic curve model is:

$$\mathbf{c}_k^* = \arg\min_{\mathbf{c}_k} \left( \sum_{i=1}^{M} \Omega \left\| y_i - \mathbf{x}_i^T \mathbf{c}_k \right\|^2 + \sum_{j=1}^{N} \Omega \left\| y_j' - \dot{\mathbf{x}}_j^T \mathbf{c}_k \right\|^2 + \sum_{j=1}^{N} \Omega \left\| \kappa_j - \kappa(x_j) \right\|^2 \right)$$

where $\arg\min$ denotes the value of the variable that minimizes the expression that follows, $\mathbf{c}_k$ is the cubic curve coefficient vector fitted for the $k$-th lane line, $M$ is the number of lane line observations in the areas behind the vehicle and near in front of it (e.g., the number of first and second observation points of the lane line), $N$ is the number of lane line observations in the far area in front of the vehicle (e.g., the number of third observation points of the lane line), and $\Omega$ is the weight of each error term.
For example, the nonlinear least squares formulation may be solved iteratively using the Levenberg-Marquardt (LM) method. Each iteration computes an increment $\Delta \mathbf{c}$ that reduces the total error; when the total error no longer decreases significantly, the iteration is considered to have converged and the lane line solution is complete. The values of the coefficients in the cubic curve model at convergence are then obtained.
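Putting the pieces together, a hedged sketch of the solve using SciPy's Levenberg-Marquardt backend (equal weights stand in for the $\Omega$ terms; the residual helpers are the sketches above):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_lane(points_near: np.ndarray, points_far: np.ndarray,
             c0: np.ndarray = np.zeros(4)) -> np.ndarray:
    """Fit one lane line's cubic coefficients c = [c0, c1, c2, c3].

    points_near: (M, 2) points given point-to-line constraints.
    points_far:  (N, 2) ordered points given direction/curvature constraints."""
    def residuals(c):
        r = [point_to_line_residual(p, c) for p in points_near]
        for k in range(1, len(points_far)):
            r.append(direction_residual(points_far[k - 1], points_far[k], c))
        for k in range(1, len(points_far) - 1):
            r.append(curvature_residual(points_far[k - 1], points_far[k],
                                        points_far[k + 1], c))
        return np.asarray(r)

    return least_squares(residuals, c0, method="lm").x
```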
In the embodiment of the disclosure, a nonlinear least squares formulation of the cubic curve model is constructed using the point-to-line distance, direction consistency, and curvature consistency constraints, and the fitted cubic curve model has high accuracy in the far area in front of the vehicle as well as in the areas behind and near the vehicle.
Fig. 3 is a flow chart of a lane line processing method according to another embodiment of the present disclosure. The lane line processing method may include:
S301, pushing the current frame into the sliding window according to the relation between the current frame and the key frames in the sliding window. In the embodiment of the disclosure, maintaining the key frames in the sliding window according to this relation keeps suitable key frames (such as key frames with suitable spacing) in the sliding window, which reduces the amount of data processed and improves the accuracy of the curve model fitted from the key frames in the sliding window.
In one example, the lane line modeled region may include a range of areas in front of and behind the vehicle. To ensure modeling accuracy and speed, the sliding window length and the sampling distance interval need to be set reasonably. The sliding window may be a bidirectional (double-ended) queue, which allows fast insertion and deletion at both the head and the tail of the queue.
The lane line processing method of the present embodiment may be combined with the lane line processing method of the above-described embodiment. In one possible implementation manner, the frame to be processed in the above embodiment is a key frame in a sliding window of a lane line. As shown in fig. 4, the lane line processing method may further include: S401, obtaining a curve model of the lane line by fitting the observation points of the key frames in the sliding window.
In one possible implementation, S401 may include S101 to S104 in the example shown in fig. 1. See in particular the relevant description of fig. 1 to 3, which are not repeated here.
Fig. 5 is a flow chart of a lane line processing method according to another embodiment of the present disclosure. The method of this embodiment includes one or more features of the lane line processing method embodiments described above. In one possible implementation, S301 includes:
S501, pushing the current frame into the sliding window when the number of key frames in the sliding window is smaller than the sliding window length N, where N is greater than or equal to 1.
In the embodiment of the present disclosure, the sliding window may be a queue, and the sliding window length, i.e., the queue length is N. The sliding window length may represent the total number of frames in the queue that can be pushed in. It may be determined whether the sliding window is full. If the number of key frames in the queue is less than N, this indicates that the queue is not full, i.e., the sliding window is not full. When the sliding window is not full, the newly acquired current frame may be pushed directly into the sliding window, i.e. into the queue. For example, the sliding window may include an identifier of a key frame, an identifier of each lane line in the key frame, and information such as an identifier and coordinates of each observation point in each lane line. By comparing the number of key frames in the sliding window with the length of the sliding window, the set number of key frames can be stored in the sliding window, so that the number of key frames participating in subsequent lane line fitting can be conveniently configured.
In one possible implementation, S301 includes:
S502, when the number of key frames in the sliding window is equal to the sliding window length, deleting the first frame or the Nth frame from the sliding window according to the relative motion distance between the current frame and the Nth frame in the sliding window, and then pushing the current frame into the sliding window.
In the disclosed embodiment, if the number of key frames in the queue is equal to N, this indicates that the queue is full, i.e., the sliding window is full. When the sliding window is full, the newly acquired current frame may not be pushed directly into the sliding window, i.e. into the queue. The current frame needs to be pressed into the sliding window after deleting part of the frames in the sliding window. If the sliding window is a bi-directional queue, the first frame or the nth frame of the queue may be deleted. The specific frame in the queue to be deleted can be judged according to the relative motion distance between the current frame and the frame in the sliding window, the relative motion distance between the adjacent frames in the sliding window, and the like. Under the condition that the number of key frames in the sliding window is equal to the length of the sliding window, after partial frames are deleted from the sliding window according to the relative motion distance between frames, the current frames are pressed into the sliding window, so that the number of key frames in the sliding window is kept not to exceed the length of the sliding window, the relative motion distance between frames in the sliding window is more proper, and repeated data are reduced.
In one possible implementation, as shown in fig. 6, S502 includes at least one of:
S601, when the relative motion distance between the current frame and the Nth frame in the sliding window is greater than a second threshold, deleting the first frame from the sliding window and then pushing the current frame into the sliding window;
S602, when the relative motion distance between the current frame and the Nth frame in the sliding window is less than or equal to the second threshold and the relative motion distance between the Nth frame and the (N-1)th frame in the sliding window is greater than a third threshold, deleting the first frame from the sliding window and then pushing the current frame into the sliding window;
S603, when the relative motion distance between the current frame and the Nth frame in the sliding window is less than or equal to the second threshold and the relative motion distance between the Nth frame and the (N-1)th frame in the sliding window is less than or equal to the third threshold, deleting the Nth frame from the sliding window and then pushing the current frame into the sliding window.
For example, it is judged whether the relative motion distance between the current frame and the Nth frame in the sliding window is greater than the set second threshold. If so, the current frame is far enough from the Nth frame in the sliding window and there is little overlapping data, so old data such as the first frame in the sliding window (i.e., the head-of-queue data) can be deleted first. Otherwise, the current frame is not far enough from the Nth frame and there is more overlapping data, so whether to delete the first frame or the Nth frame is decided according to the relative motion distance between the last two frames in the sliding window.
For example, it is then judged whether the relative motion distance between the Nth frame and the (N-1)th frame in the sliding window is greater than the set third threshold. If so, the Nth frame is far enough from the (N-1)th frame and there is little overlapping data, so old data such as the first frame (i.e., the head-of-queue data) can be deleted first. Otherwise, the Nth frame and the (N-1)th frame are not far enough apart and there is more overlapping data; in this case the Nth frame (i.e., the tail-of-queue data) can be removed, and after the current frame is pushed into the sliding window it becomes the new Nth frame, which is farther from the (N-1)th frame than the original Nth frame was.
By comparing the relative motion distance between the current frame and the key frames in the sliding window, as well as the relative motion distance between adjacent key frames in the sliding window, key frames with larger relative motion distances can be retained in the sliding window, reducing duplicated data and improving the processing efficiency of lane line fitting. A sketch of this maintenance logic is given below.
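A deque-based sketch of this maintenance logic; for simplicity a 1D position stands in for the full vehicle pose, and the tuple layout is an assumption:

```python
from collections import deque

def push_key_frame(window: deque, frame, pos: float,
                   N: int, t2: float, t3: float) -> None:
    """Maintain the sliding window as a double-ended queue, oldest first.

    window holds (frame, position) tuples; pos is the incoming current
    frame's position, so a relative motion distance is a |pos difference|."""
    if len(window) == N:                               # window is full
        if abs(pos - window[-1][1]) > t2:              # current far from Nth
            window.popleft()                           # delete the first frame
        elif abs(window[-1][1] - window[-2][1]) > t3:  # Nth far from (N-1)th
            window.popleft()                           # delete the first frame
        else:
            window.pop()                               # delete the Nth frame
    window.append((frame, pos))

# Usage sketch:
# window = deque()
# push_key_frame(window, frame, pos=12.5, N=10, t2=2.0, t3=1.0)
```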
Referring to the above example, after each acquisition of the current frame, the data within the sliding window may be maintained in the manner described in the above example. The second threshold value and/or the third threshold value can enable the relative movement distance between two adjacent key frames in the sliding window to be more suitable.
Fig. 7 is a schematic structural view of a lane line processing apparatus according to an embodiment of the present disclosure, which may include:
the constraint determining module 701 is configured to determine, according to a relationship between a vehicle position and a current vehicle position of each frame to be processed, an observation point to be processed of a lane line in each frame to be processed and a constraint manner corresponding to the observation point to be processed;
the conversion module 702 is configured to convert the observation point to be processed into a corresponding target observation point according to a current vehicle coordinate system;
the constraint establishing module 703 is configured to establish lane line constraints corresponding to the target observation points according to constraint modes corresponding to the observation points to be processed and the target observation points;
and the lane line generating module 704 is configured to obtain a curve model of the lane line according to lane line constraints corresponding to the target observation point.
Fig. 8 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure. The apparatus of this embodiment includes one or more features of the lane line processing apparatus embodiments described above. In a possible implementation manner, the constraint determining module 701 is further configured to perform at least one of the following:
extracting a first observation point from a first region of the frame to be processed if a distance between a vehicle position of the frame to be processed and the current vehicle position is greater than a first threshold; the constraint mode corresponding to the first observation point comprises point-to-line distance constraint;
extracting a second observation point from a second area of the frame to be processed and extracting a third observation point from a third area of the frame to be processed under the condition that the distance between the vehicle position of the frame to be processed and the current vehicle position is smaller than or equal to a first threshold value; the constraint mode corresponding to the second observation point comprises point-to-line distance constraint; the constraint mode corresponding to the third observation point comprises corresponding direction consistency constraint and/or curvature consistency constraint.
In a possible implementation, the constraint establishment module 703 is further configured to perform at least one of:
establishing point-to-line distance constraint corresponding to the target observation point according to the target observation point converted by the first observation point or the second observation point;
and establishing the direction consistency constraint and/or the curvature consistency constraint corresponding to the target observation point according to the target observation point converted by the third observation point.
In one possible implementation, the point-to-line distance constraint corresponding to the target observation point is determined based on the first vector corresponding to the x-axis coordinate of the target observation point, the y-axis coordinate of the target observation point, and the second vector corresponding to the cubic curve coefficients.
In one possible implementation, the direction-consistent constraint corresponding to the target observation point is determined based on a first derivative of a y-axis coordinate of the target observation point, a first derivative of a first vector corresponding to an x-axis coordinate of the target observation point, and a second vector corresponding to a curve model coefficient;
The first derivative of the y-axis coordinate of the target observation point is determined according to the previous observation point of the target observation point and the target observation point.
In one possible implementation, the curvature-consistent constraint corresponding to the target observation point is established based on the observation point curvature and the cubic curve curvature;
the curvature of the observation point is determined based on the previous observation point of the target observation point, the target observation point and the subsequent observation point of the target observation point;
the cubic curve curvature is determined based on the first and second derivatives of the first vector corresponding to the x-axis coordinate of the target observation point and the second vector corresponding to the curve model coefficients.
In one possible implementation, the lane line generation module 704 includes:
a constructing submodule 801, configured to construct a nonlinear least square equation of a cubic curve model according to at least one of a point-to-line distance constraint, a direction consistency constraint, and a curvature consistency constraint corresponding to the target observation point;
and a solving sub-module 802, configured to iteratively solve the nonlinear least square equation to obtain values of coefficients in the cubic curve model of the lane line.
Fig. 9 is a schematic structural view of a lane line processing apparatus according to another embodiment of the present disclosure. The device comprises:
The sliding window maintenance module 901 is configured to press the current frame into the sliding window according to a relationship between the current frame and a key frame in the sliding window.
In one possible implementation manner, the frame to be processed in the lane line processing apparatus embodiment of fig. 7 or fig. 8 is a key frame in a sliding window of a lane line.
In one possible implementation, as shown in fig. 10, the lane line processing apparatus may further include a lane line fitting module 1001, configured to obtain a curve model of the lane line by fitting the observation points of the key frames in the sliding window.
In a possible implementation manner, the lane line fitting module 1001 may include the constraint determining module 701, the converting module 702, the constraint establishing module 703 and the lane line generating module 704 of the lane line processing apparatus of fig. 7 or fig. 8, and relevant functions of the respective modules may be described with reference to the above embodiments.
In one possible embodiment, as shown in fig. 11, the sliding window maintenance module 901 includes:
a first pushing submodule 1101, configured to push the current frame into the sliding window if the number of key frames in the sliding window is less than the sliding window length N; wherein N is greater than or equal to 1.
In one possible embodiment, the sliding window maintenance module 901 includes:
the second pushing sub-module 1102 is configured to push the current frame into the sliding window after deleting the first frame or the Nth frame from the sliding window, according to the relative motion distance between the current frame and the Nth frame in the sliding window, when the number of key frames in the sliding window is equal to the sliding window length.
In one possible implementation, the second push sub-module 1102 is configured to perform at least one of:
when the relative motion distance between the current frame and the Nth frame in the sliding window is greater than a second threshold value, the current frame is pressed into the sliding window after the first frame is deleted from the sliding window;
when the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame in the sliding window and the N-1 th frame is larger than a third threshold value, deleting the first frame from the sliding window, and then pressing the current frame into the sliding window;
and when the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame in the sliding window and the N-1 th frame is smaller than or equal to a third threshold value, pressing the current frame into the sliding window after deleting the Nth frame from the sliding window.
For example, the sliding window maintenance module 901 may further include a judging sub-module. First, the judging sub-module judges whether the relative motion distance between the current frame and the Nth frame in the sliding window is greater than the second threshold. If so, the second pushing sub-module 1102 may delete the first frame from the sliding window and then push the current frame into the sliding window. Otherwise, the judging sub-module judges whether the relative motion distance between the Nth frame and the (N-1)th frame in the sliding window is greater than the third threshold. If so, the second pushing sub-module 1102 may delete the first frame from the sliding window and then push the current frame into the sliding window; otherwise, the second pushing sub-module 1102 may delete the Nth frame from the sliding window and then push the current frame into the sliding window.
For descriptions of specific functions and examples of each module and sub-module of the apparatus in the embodiments of the present disclosure, reference may be made to the related descriptions of corresponding steps in the foregoing method embodiments, which are not repeated herein.
Since lane line modeling will directly affect the behavior of the entire vehicle, stability and accuracy of its modeling are critical. According to different implementation frameworks, the current lane modeling can be divided into a lane modeling method based on filtering, a lane modeling method based on single-frame optimization, a lane modeling method based on sliding window optimization and the like.
The perceived 3D lane line technique tends to be less accurate the farther the lane line is from the host vehicle; for example, accuracy in the near range is on the order of centimeters while accuracy in the far range is on the order of meters. In addition, 3D lane lines are highly susceptible to illumination changes, extrinsic calibration accuracy, road bumps, and the like. Filtering or single-frame optimization has difficulty coping with these problems. In the sliding-window-optimization-based lane line modeling method, multi-frame 3D lane line observations (i.e., the observation points included in the lane lines) are collected at set sampling intervals to construct the optimization constraints of the curve model. Because the accumulated lane line observations all participate in constructing the optimization constraints, the susceptibility of single-frame lane line observations to noise is reduced. That is, sliding window optimization solves the lane line model using a fixed number of frames of observations, giving greater robustness and flexibility in coping with noise. The lane lines behind the vehicle (the area the vehicle has already traveled) and in the near area generally have better observations, whereas the lane lines far in front of the vehicle carry less information and have poorer observation quality. To ensure overall modeling accuracy, embodiments of the present disclosure may perform the following processing when constructing constraints:
(1) The lane line observations of adjacent frames in the sliding window usually have a certain overlapping area, so when modeling the lane line in the areas behind and near the vehicle, relatively high modeling accuracy can be achieved using only the near-range portion of each key frame's observations in the sliding window.
(2) Once the modeling accuracy of the lane line in the areas behind and near the vehicle is ensured, the prior that a highway lane line is a smooth cubic curve can be used: the constraints of the far lane line observations on the curve position can be removed, keeping only their constraints on the curve trend and geometry.
1. Sliding window length maintenance
As shown in fig. 12, in an embodiment of the disclosure, a bidirectional queue may be used for sliding window maintenance, for example, the following steps may be included:
s1200, obtaining lane line observation of the current frame. One or more lane line observations may be included in the current frame. One or more observation points may be included in the lane line observation.
S1201, judging whether the number of frames in the current sliding window is larger than a threshold T1, if so, executing S1202; if not, S1207 is performed. T1 may be equal to the sliding window length. The sliding window may be a bi-directional queue and the sliding window length may be a queue length. T1 may represent the total number of key frames with lane line observations that can be saved in the sliding window.
S1202, judging whether the relative motion distance between the current frame and the latest frame in the sliding window is greater than a threshold T2, if so, executing S1203; if not, then S1204 is performed.
S1203, the oldest key frame in the sliding window is removed, and then S1207 is executed.
S1204, judging whether the relative motion distance between the latest frame and the next latest frame in the sliding window is greater than a threshold T3, if so, executing S1205; if not, execution proceeds to S1206.
S1205, the oldest key frame in the sliding window is removed, and then S1207 is executed.
S1206, the latest key frame in the sliding window is removed, and then S1207 is executed.
S1207, the current frame is pressed into the sliding window.
The curve model of the lane line may then be non-linearly optimized based on the key frames in the sliding window.
The thresholds T1, T2, and T3 in this example may be the same or different.
The area modeled by the lane lines covers a certain range in front of and behind the vehicle; to ensure both modeling accuracy and speed, the sliding window length and the sampling distance interval need to be set reasonably. The sliding window may be implemented using a bidirectional queue. When the sliding window is not full, i.e., the number of frames in the sliding window is less than the threshold T1, the new current frame is pressed directly into the queue. When the sliding window is full, it is judged whether the relative motion distance between the current frame and the newest frame in the queue is greater than the threshold T2; if so, the oldest frame in the queue is removed. Otherwise, it is judged whether the relative motion distance between the newest frame and the second-newest frame in the queue is greater than the threshold T3; if so, the oldest frame in the queue is removed, otherwise the newest frame in the queue is removed. Finally, the current frame is pressed into the queue. A deque-based sketch of this maintenance logic follows.
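The maintenance logic of S1200-S1207 maps naturally onto a double-ended queue. The following is a minimal Python sketch, assuming each frame carries odometry from which a relative motion distance can be computed; motion_distance() and the threshold values are illustrative placeholders, not values from the disclosure.

```python
from collections import deque

# Minimal sketch of the S1200-S1207 maintenance logic, assuming each frame
# stores enough pose/odometry to compute a relative motion distance.
def motion_distance(frame_a: dict, frame_b: dict) -> float:
    return abs(frame_a["odom"] - frame_b["odom"])

def maintain_window(window: deque, current: dict,
                    T1: int = 10, T2: float = 2.0, T3: float = 0.5) -> None:
    if len(window) >= T1:                                   # S1201: window full
        if motion_distance(current, window[-1]) > T2:       # S1202
            window.popleft()                                # S1203: drop oldest
        elif motion_distance(window[-1], window[-2]) > T3:  # S1204/S1205
            window.popleft()                                # drop oldest
        else:
            window.pop()                                    # S1206: drop newest
    window.append(current)                                  # S1207: push current
```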
2. Nonlinear optimization of lane lines
Depending on where the lane line is observed, three different constraints may be employed in the optimization, for example: the lane lines in the rear and near areas of the vehicle are constrained by point-to-line distances, while the lane lines in the far area in front of the vehicle are constrained by direction consistency and curvature consistency.
For example, the lane line model adopts a cubic curve model y = c₃x³ + c₂x² + c₁x + c₀, whose optimization variable is the vector of cubic curve coefficients c = [c₀, c₁, c₂, c₃]ᵀ. The two-dimensional observation point used for fitting the curve is pᵢ = [xᵢ, yᵢ]ᵀ, and the first vector constructed from xᵢ is xᵢ = [1, xᵢ, xᵢ², xᵢ³]ᵀ, whose first derivative is xᵢ′ = [0, 1, 2xᵢ, 3xᵢ²]ᵀ and whose second derivative is xᵢ″ = [0, 0, 2, 6xᵢ]ᵀ (so that the curve value at xᵢ is cᵀxᵢ).
Assume that the vehicle pose of a certain key frame contained in the sliding window is T_wh and that the current vehicle pose in the sliding window is T_wc; the transformation between the current vehicle pose and the key frame's vehicle pose is then T_ch = T_wc⁻¹ T_wh. Let a sampling point (observation point) in the key frame be Pᵢʰ. When cubic curve modeling of the lane line is performed, the observation points of the lane line from the key frames in the sliding window need to be converted into the same coordinate system, namely the current vehicle coordinate system. Denoting the converted coordinate point by Pᵢ, the coordinate transformation formula may be Pᵢ = T_ch Pᵢʰ.
The z-coordinate of Pᵢ can then be set to zero to obtain the two-dimensional coordinate of the i-th observation point, pᵢ = [xᵢ, yᵢ]ᵀ. According to the characteristics of perceived lane line detection accuracy, as the vehicle moves forward, more key frames in the sliding window will have observed the lane lines in the rear and near areas of the vehicle. Moreover, these observations have large overlapping areas, so only a segment of observations with higher near-range detection accuracy in each key frame is needed to construct point-to-line constraints on the cubic curve (ensuring the fitting accuracy of the cubic curve's position and geometry). For the far area in front of the vehicle, the detection accuracy of the perceived lane line is poor at long range due to the observation distance; however, the observation distance has little influence on the detection accuracy of the lane line's geometric shape, so this part of the observation points can be used to guarantee the geometric fitting accuracy of the cubic curve at long range (the direction consistency constraint and the curvature consistency constraint). In this way, the overall fitting accuracy of the cubic curve behind the vehicle and in the near and far areas in front of it can be well ensured. The constraint mode corresponding to an observation point can be determined according to the relation between the key frame and the current vehicle position, among other factors; see the above embodiments for details. A code sketch of the coordinate transformation is given below.
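The conversion into the current vehicle coordinate system can be sketched as follows, assuming each pose is a 4×4 homogeneous transform mapping vehicle coordinates to world coordinates; the function and variable names are illustrative.

```python
import numpy as np

# Sketch of P_i = T_ch * P_i^h with T_ch = T_wc^{-1} T_wh, assuming 4x4
# homogeneous vehicle-to-world transforms.
def to_current_frame(T_wc: np.ndarray, T_wh: np.ndarray,
                     p_h: np.ndarray) -> np.ndarray:
    """p_h: 3D observation point in the key frame's vehicle coordinates;
    returns the 2D point p_i = [x_i, y_i] in the current vehicle frame
    (the z-coordinate is set to zero, i.e. discarded)."""
    T_ch = np.linalg.inv(T_wc) @ T_wh
    P_i = T_ch @ np.append(p_h, 1.0)   # homogeneous multiply
    return P_i[:2]
```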
(1) Point-to-line distance constraint: rᵢ = yᵢ − cᵀxᵢ,
wherein yᵢ is the y-axis coordinate of the observation point pᵢ, xᵢ is the first vector corresponding to the x-axis coordinate of pᵢ, and c is the vector of cubic curve coefficients. (A combined code sketch of all three residuals is given after constraint (3).)
(2) Direction consistency constraint: rᵢ = yᵢ′ − cᵀxᵢ′,
wherein yᵢ′ is the first derivative of the y-axis coordinate of pᵢ, determined from the previous observation point pᵢ₋₁ and pᵢ; xᵢ′ is the first derivative of the first vector xᵢ corresponding to the x-axis coordinate of pᵢ; and c = [c₀, c₁, c₂, c₃]ᵀ is the coefficient vector of the cubic curve model.
(3) Curvature consistency constraint: rᵢ = kᵢ − κᵢ,
wherein kᵢ is the curvature of the observation point, calculated from the (i−1)-th, i-th and (i+1)-th observation points (for example, via the circumscribed-circle formula kᵢ = 2|(pᵢ−pᵢ₋₁)×(pᵢ₊₁−pᵢ)| / (‖pᵢ−pᵢ₋₁‖·‖pᵢ₊₁−pᵢ‖·‖pᵢ₊₁−pᵢ₋₁‖)), and κᵢ is the curvature of the cubic curve, κᵢ = cᵀxᵢ″ / (1 + (cᵀxᵢ′)²)^(3/2), wherein xᵢ″ is the second derivative of the first vector xᵢ corresponding to the x-axis coordinate of pᵢ, and ‖·‖ represents the Euclidean distance between two observation points.
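The three residuals can be written compactly in Python. The monomial basis vectors follow directly from the cubic model; the finite-difference slope and the circumscribed-circle (Menger) curvature used for the observed quantities, as well as all function names, are illustrative assumptions consistent with the description above rather than the disclosure's own code.

```python
import numpy as np

def basis(x):      # x_i = [1, x, x^2, x^3]^T
    return np.array([1.0, x, x**2, x**3])

def basis_d1(x):   # first derivative of x_i
    return np.array([0.0, 1.0, 2.0 * x, 3.0 * x**2])

def basis_d2(x):   # second derivative of x_i
    return np.array([0.0, 0.0, 2.0, 6.0 * x])

def point_to_line_residual(c, p):
    """(1) r = y_i - c^T x_i."""
    return p[1] - c @ basis(p[0])

def direction_residual(c, p_prev, p):
    """(2) r = y_i' - c^T x_i', with y_i' from a finite difference over
    the previous observation point (an assumed discretization)."""
    y_slope = (p[1] - p_prev[1]) / (p[0] - p_prev[0])
    return y_slope - c @ basis_d1(p[0])

def menger_curvature(p0, p1, p2):
    """Observed curvature k_i from three consecutive points
    (circumscribed-circle formula; assumed form)."""
    a, b, chord = p1 - p0, p2 - p1, p2 - p0
    area2 = a[0] * b[1] - a[1] * b[0]   # twice the signed triangle area
    return 2.0 * area2 / (np.linalg.norm(a) *
                          np.linalg.norm(b) * np.linalg.norm(chord))

def curvature_residual(c, p_prev, p, p_next):
    """(3) r = k_i - kappa_i, with kappa_i the cubic-curve curvature."""
    d1, d2 = c @ basis_d1(p[0]), c @ basis_d2(p[0])
    kappa = d2 / (1.0 + d1**2) ** 1.5
    return menger_curvature(p_prev, p, p_next) - kappa
```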
Using the above three constraints, a nonlinear least squares problem for the cubic curve model (the expression may be referred to as a nonlinear least squares formula, nonlinear least squares function, objective function, etc.) may be constructed, an example of which is as follows:

cₖ* = argmin_{cₖ} [ Σ_{i=1..M} Ω‖yᵢ − cₖᵀxᵢ‖² + Σ_{j=1..N} Ω(‖yⱼ′ − cₖᵀxⱼ′‖² + ‖kⱼ − κⱼ‖²) ],

wherein argmin denotes the variable value at which the expression attains its minimum, cₖ is the cubic curve coefficient vector fitted for the k-th lane line, M is the number of lane line observations (i.e., the number of lane line observation points) in the rear and near areas of the vehicle, N is the number of lane line observations in the far area in front of the vehicle, and Ω is the weight of the error terms. In the solving process, the least squares problem can be solved iteratively by the Levenberg-Marquardt method. Each iteration yields an increment Δc that reduces the total error; when the total error no longer decreases significantly, the iteration is considered converged and the lane line solution is complete.
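As an end-to-end illustration of this least squares solve, the sketch below stacks point-to-line residuals over the near observations and direction/curvature residuals over the far observations, then calls SciPy's Levenberg-Marquardt solver. The data layout (N×2 point arrays in the current vehicle frame with strictly increasing x), the scalar weights standing in for Ω, and all names are assumptions rather than the disclosure's code.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_lane(near_pts: np.ndarray, far_pts: np.ndarray,
             w_near: float = 1.0, w_far: float = 1.0) -> np.ndarray:
    """Fit cubic coefficients c = [c0, c1, c2, c3] by Levenberg-Marquardt.
    near_pts/far_pts: N x 2 arrays with strictly increasing x."""
    def residuals(c: np.ndarray) -> np.ndarray:
        res = []
        for x, y in near_pts:                        # point-to-line terms
            res.append(w_near * (y - c @ np.array([1.0, x, x**2, x**3])))
        for j in range(1, len(far_pts) - 1):         # direction + curvature
            p0, p1, p2 = far_pts[j - 1], far_pts[j], far_pts[j + 1]
            x = p1[0]
            xd1 = np.array([0.0, 1.0, 2 * x, 3 * x**2])
            xd2 = np.array([0.0, 0.0, 2.0, 6 * x])
            y_slope = (p1[1] - p0[1]) / (p1[0] - p0[0])
            res.append(w_far * (y_slope - c @ xd1))  # direction residual
            a, b = p1 - p0, p2 - p1
            k_obs = 2 * (a[0] * b[1] - a[1] * b[0]) / (
                np.linalg.norm(a) * np.linalg.norm(b)
                * np.linalg.norm(p2 - p0))
            kappa = (c @ xd2) / (1 + (c @ xd1) ** 2) ** 1.5
            res.append(w_far * (k_obs - kappa))      # curvature residual
        return np.array(res)

    # method="lm" iterates Levenberg-Marquardt steps until the total error
    # stops decreasing significantly, matching the convergence criterion above.
    return least_squares(residuals, np.zeros(4), method="lm").x
```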
Aiming at the problems of perceived 3D lane line observation, the embodiments of the present disclosure provide a lane line processing method, namely a lane line modeling method based on sliding window optimization. The method can greatly improve the algorithm's robustness to noise. In addition, different constraints are constructed for lane line observations in different areas: the lane lines in the rear and near areas of the vehicle are constrained by point-to-line distances, ensuring the fitting accuracy of the lane line position and geometric shape; the lane lines far in front of the vehicle are constrained by direction consistency and curvature consistency, ensuring the fitting accuracy of the lane line trend and geometric shape while avoiding the problems caused by the inaccurate position of far-range perceived 3D lane lines. The scheme of the embodiments of the present disclosure can be used in automatic driving scenarios, provides important support for the back-end optimization of lane line modeling in the ANP (Apollo Navigation Pilot, Baidu Apollo pilot-assisted driving system) project, and ensures the stability of vehicle control planning.
In the technical scheme of the present disclosure, the acquisition, storage, and application of any personal user information involved all conform to the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 13 illustrates a schematic block diagram of an example electronic device 1300 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 13, the device 1300 includes a computing unit 1301 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1302 or a computer program loaded from a storage unit 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, various programs and data required for the operation of the device 1300 can also be stored. The computing unit 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.
Various components in device 1300 are connected to I/O interface 1305, including: an input unit 1306 such as a keyboard, a mouse, or the like; an output unit 1307 such as various types of displays, speakers, and the like; storage unit 1308, such as a magnetic disk, optical disk, etc.; and a communication unit 1309 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1309 allows the device 1300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 1301 performs the respective methods and processes described above, for example, a lane line processing method. For example, in some embodiments, the lane line processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1300 via the ROM 1302 and/or the communication unit 1309. When the computer program is loaded into the RAM 1303 and executed by the computing unit 1301, one or more steps of the lane line processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1301 may be configured to perform lane line processing methods by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (16)

1. A lane line processing method, comprising:
determining a to-be-processed observation point of a lane line in each to-be-processed frame and a constraint mode corresponding to the to-be-processed observation point according to the relation between the vehicle position of each to-be-processed frame and the current vehicle position, wherein the constraint mode corresponding to the to-be-processed observation point comprises at least one of a point-to-line distance constraint, a direction consistency constraint and a curvature consistency constraint corresponding to the to-be-processed observation point;
Converting the observation point to be processed into a corresponding target observation point according to a current vehicle coordinate system;
establishing lane line constraints corresponding to the target observation points according to constraint modes corresponding to the observation points to be processed and the target observation points, wherein the lane line constraints corresponding to the target observation points comprise at least one of point-to-line distance constraints, direction consistency constraints and curvature consistency constraints corresponding to the target observation points;
obtaining a curve model of the lane line according to lane line constraint corresponding to the target observation point;
the method for obtaining the curve model of the lane line according to the lane line constraint corresponding to the target observation point comprises the following steps: constructing a nonlinear least square formula of a cubic curve model according to at least one of point-to-line distance constraint, direction consistency constraint and curvature consistency constraint corresponding to the target observation point; iteratively solving the nonlinear least square formula to obtain the value of each coefficient in the cubic curve model of the lane line; the cubic curve model is y=c 3 x 3 +c 2 x 2 +c 1 x+c 0 The vector corresponding to the cubic curve coefficient is c= [ c ] 0 ,c 1 ,c 2 ,c 3 ] T
The point-to-line distance constraint corresponding to the target observation point is determined based on a first vector corresponding to an x-axis coordinate of the target observation point, a y-axis coordinate and a second vector corresponding to a cubic curve coefficient;
The direction consistency constraint corresponding to the target observation point is determined based on a first derivative of a y-axis coordinate of the target observation point, a first derivative of a first vector corresponding to an x-axis coordinate of the target observation point and a second vector corresponding to a cubic curve coefficient, and the first derivative of the y-axis coordinate of the target observation point is determined according to a previous observation point of the target observation point and the target observation point;
the curvature coincidence constraint corresponding to the target observation point is determined based on the curvature of the observation point and the curvature of the cubic curve of the curve model of the lane line, the curvature of the observation point is determined based on a previous observation point of the target observation point, the target observation point and a subsequent observation point of the target observation point, and the curvature of the cubic curve is determined based on a first derivative and a second derivative of a first vector corresponding to an x-axis coordinate of the target observation point and a second vector corresponding to a cubic curve coefficient.
2. The method of claim 1, wherein determining the to-be-processed observation point of the lane line in each to-be-processed frame and the constraint mode corresponding to the to-be-processed observation point according to the relation between the vehicle position and the current vehicle position of each to-be-processed frame comprises at least one of the following:
Extracting a first observation point from a first region of the frame to be processed if a distance between a vehicle position of the frame to be processed and the current vehicle position is greater than a first threshold; the constraint mode corresponding to the first observation point comprises point-to-line distance constraint, and the first area is an area between the current vehicle position and a first value of the distance from the current vehicle position;
extracting a second observation point from a second area of the frame to be processed and extracting a third observation point from a third area of the frame to be processed under the condition that the distance between the vehicle position of the frame to be processed and the current vehicle position is smaller than or equal to a first threshold value; the constraint mode corresponding to the second observation point comprises point-to-line distance constraint; the constraint mode corresponding to the third observation point comprises corresponding direction consistency constraint and/or curvature consistency constraint, the second area is an area between the current vehicle position and a second value of the distance from the current vehicle position, the second value is larger than the first value, and the third area is an area of which the distance from the current vehicle position is larger than the second value.
3. The method of claim 2, wherein establishing lane line constraints corresponding to the target observation points according to constraint modes corresponding to the to-be-processed observation points and the target observation points comprises at least one of the following:
Establishing point-to-line distance constraint corresponding to the target observation point according to the target observation point converted by the first observation point or the second observation point;
and establishing a direction consistency constraint and/or a curvature consistency constraint corresponding to the target observation point according to the target observation point converted by the third observation point.
4. A method according to any one of claims 1 to 3, the frame to be processed comprising a key frame in a sliding window of a lane line, the method further comprising:
and pressing the current frame into the sliding window according to the relation between the current frame and the key frame in the sliding window.
5. The method of claim 4, wherein pushing the current frame into the sliding window according to a relationship of the current frame and key frames in the sliding window comprises:
pressing the current frame into the sliding window under the condition that the number of key frames in the sliding window is smaller than the sliding window length N; wherein N is greater than or equal to 1.
6. The method of claim 4, wherein pushing the current frame into the sliding window according to a relationship of the current frame and key frames in the sliding window comprises:
and under the condition that the number of key frames in the sliding window is equal to the length of the sliding window, deleting a first frame or an N frame from the sliding window according to the relative motion distance of the current frame and the N frame in the sliding window, and pressing the current frame into the sliding window.
7. The method of claim 6, wherein pressing the current frame into the sliding window after deleting a first frame or an Nth frame from the sliding window based on a relative motion distance of the current frame and the Nth frame in the sliding window comprises at least one of:
when the relative movement distance between the current frame and the Nth frame in the sliding window is greater than a second threshold value, deleting the first frame from the sliding window, and pressing the current frame into the sliding window;
when the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame in the sliding window and the N-1 th frame is larger than a third threshold value, deleting the first frame from the sliding window, and then pressing the current frame into the sliding window;
and when the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame in the sliding window and the N-1 th frame is smaller than or equal to a third threshold value, pressing the current frame into the sliding window after deleting the Nth frame from the sliding window.
8. A lane line processing apparatus comprising:
the constraint determining module is used for determining to-be-processed observation points of lane lines in each to-be-processed frame and constraint modes corresponding to the to-be-processed observation points according to the relation between the vehicle position of each to-be-processed frame and the current vehicle position, wherein the constraint modes corresponding to the to-be-processed observation points comprise at least one of a point-to-line distance constraint, a direction consistency constraint and a curvature consistency constraint corresponding to the to-be-processed observation points;
The conversion module is used for converting the observation point to be processed into a corresponding target observation point according to the current vehicle coordinate system;
the constraint establishing module is used for establishing lane line constraints corresponding to the target observation points according to constraint modes corresponding to the observation points to be processed and the target observation points, wherein the lane line constraints corresponding to the target observation points comprise at least one of point-to-line distance constraints, direction consistency constraints and curvature consistency constraints corresponding to the target observation points;
the lane line generation module is used for obtaining a curve model of the lane line according to lane line constraint corresponding to the target observation point;
the lane line generating module includes:
the construction submodule is used for constructing a nonlinear least square formula of a cubic curve model according to at least one of point-to-line distance constraint, direction consistency constraint and curvature consistency constraint corresponding to the target observation point;
the solving submodule is used for iteratively solving the nonlinear least square formula to obtain the value of each coefficient in the cubic curve model of the lane line; the cubic curve model is y = c₃x³ + c₂x² + c₁x + c₀, and the vector corresponding to the cubic curve coefficients is c = [c₀, c₁, c₂, c₃]ᵀ;
The point-to-line distance constraint corresponding to the target observation point is determined based on a first vector corresponding to an x-axis coordinate of the target observation point, a y-axis coordinate and a second vector corresponding to a cubic curve coefficient;
the direction consistency constraint corresponding to the target observation point is determined based on a first derivative of a y-axis coordinate of the target observation point, a first derivative of a first vector corresponding to an x-axis coordinate of the target observation point and a second vector corresponding to a cubic curve coefficient, and the first derivative of the y-axis coordinate of the target observation point is determined according to a previous observation point of the target observation point and the target observation point;
the curvature coincidence constraint corresponding to the target observation point is determined based on the curvature of the observation point and the curvature of the cubic curve of the curve model of the lane line, the curvature of the observation point is determined based on a previous observation point of the target observation point, the target observation point and a subsequent observation point of the target observation point, and the curvature of the cubic curve is determined based on a first derivative and a second derivative of a first vector corresponding to an x-axis coordinate of the target observation point and a second vector corresponding to a cubic curve coefficient.
9. The apparatus of claim 8, wherein the constraint determination module is further configured to perform at least one of:
Extracting a first observation point from a first region of the frame to be processed if a distance between a vehicle position of the frame to be processed and the current vehicle position is greater than a first threshold; the constraint mode corresponding to the first observation point comprises point-to-line distance constraint, and the first area is an area between the current vehicle position and a first value of the distance from the current vehicle position;
extracting a second observation point from a second area of the frame to be processed and extracting a third observation point from a third area of the frame to be processed under the condition that the distance between the vehicle position of the frame to be processed and the current vehicle position is smaller than or equal to a first threshold value; the constraint mode corresponding to the second observation point comprises point-to-line distance constraint; the constraint mode corresponding to the third observation point comprises corresponding direction consistency constraint and/or curvature consistency constraint, the second area is an area between the current vehicle position and a second value of the distance from the current vehicle position, the second value is larger than the first value, and the third area is an area of which the distance from the current vehicle position is larger than the second value.
10. The apparatus of claim 9, wherein the constraint establishment module is further configured to perform at least one of:
Establishing point-to-line distance constraint corresponding to the target observation point according to the target observation point converted by the first observation point or the second observation point;
and establishing a direction consistency constraint and/or a curvature consistency constraint corresponding to the target observation point according to the target observation point converted by the third observation point.
11. The apparatus of any of claims 8 to 10, the frame to be processed comprising a key frame in a sliding window of a lane line, the apparatus further comprising:
and the sliding window maintenance module is used for pressing the current frame into the sliding window according to the relation between the current frame and the key frame in the sliding window.
12. The apparatus of claim 11, wherein the sliding window maintenance module comprises:
a first pressing sub-module, configured to press the current frame into the sliding window when the number of key frames in the sliding window is less than a sliding window length N; wherein N is greater than or equal to 1.
13. The apparatus of claim 11, wherein the sliding window maintenance module comprises:
and the second pressing-in sub-module is used for pressing the current frame into the sliding window after deleting the first frame or the N frame from the sliding window according to the relative motion distance of the current frame and the N frame in the sliding window under the condition that the number of key frames in the sliding window is equal to the length of the sliding window.
14. The apparatus of claim 13, wherein the second push sub-module is to perform at least one of:
when the relative movement distance between the current frame and the Nth frame in the sliding window is greater than a second threshold value, deleting the first frame from the sliding window, and pressing the current frame into the sliding window;
when the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame in the sliding window and the N-1 th frame is larger than a third threshold value, deleting the first frame from the sliding window, and then pressing the current frame into the sliding window;
and when the relative movement distance between the current frame and the Nth frame in the sliding window is smaller than or equal to a second threshold value and the relative movement distance between the Nth frame in the sliding window and the N-1 th frame is smaller than or equal to a third threshold value, pressing the current frame into the sliding window after deleting the Nth frame from the sliding window.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202210828297.5A 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium Active CN115116019B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310403461.2A CN116486354B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium
CN202210828297.5A CN115116019B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310403461.2A Division CN116486354B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115116019A CN115116019A (en) 2022-09-27
CN115116019B true CN115116019B (en) 2023-08-01

Family

ID=83332380

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310403461.2A Active CN116486354B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium
CN202210828297.5A Active CN115116019B (en) 2022-07-13 2022-07-13 Lane line processing method, device, equipment and storage medium


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103940434A (en) * 2014-04-01 2014-07-23 西安交通大学 Real-time lane line detecting system based on monocular vision and inertial navigation unit
CN107194342A (en) * 2017-05-16 2017-09-22 西北工业大学 Method for detecting lane lines based on inverse perspective mapping
CN114140759A (en) * 2021-12-08 2022-03-04 阿波罗智能技术(北京)有限公司 High-precision map lane line position determining method and device and automatic driving vehicle

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5276637B2 (en) * 2010-09-08 2013-08-28 富士重工業株式会社 Lane estimation device
CN102156979B (en) * 2010-12-31 2012-07-04 上海电机学院 Method and system for rapid traffic lane detection based on GrowCut
CN106407893B (en) * 2016-08-29 2019-11-22 东软集团股份有限公司 A kind of method, apparatus and equipment detecting lane line
CN108845343B (en) * 2018-07-03 2020-04-28 河北工业大学 Vehicle positioning method based on fusion of vision, GPS and high-precision map
CN111316284A (en) * 2019-02-13 2020-06-19 深圳市大疆创新科技有限公司 Lane line detection method, device and system, vehicle and storage medium
CN112084822A (en) * 2019-06-14 2020-12-15 富士通株式会社 Lane detection device and method and electronic equipment
CN111444778B (en) * 2020-03-04 2023-10-17 武汉理工大学 Lane line detection method
CN112560680A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Lane line processing method and device, electronic device and storage medium
CN112818778B (en) * 2021-01-21 2023-10-03 北京地平线机器人技术研发有限公司 Lane line fitting method, lane line fitting device, lane line fitting medium and electronic equipment
CN113435392A (en) * 2021-07-09 2021-09-24 阿波罗智能技术(北京)有限公司 Vehicle positioning method and device applied to automatic parking and vehicle
CN113551664B (en) * 2021-08-02 2022-02-25 湖北亿咖通科技有限公司 Map construction method and device, electronic equipment and storage medium
CN113932796A (en) * 2021-10-15 2022-01-14 北京百度网讯科技有限公司 High-precision map lane line generation method and device and electronic equipment
CN114111769A (en) * 2021-11-15 2022-03-01 杭州海康威视数字技术股份有限公司 Visual inertial positioning method and device and automatic driving device
CN114018274B (en) * 2021-11-18 2024-03-26 阿波罗智能技术(北京)有限公司 Vehicle positioning method and device and electronic equipment
CN113807333B (en) * 2021-11-19 2022-03-18 智道网联科技(北京)有限公司 Data processing method and storage medium for detecting lane line

Also Published As

Publication number Publication date
CN115116019A (en) 2022-09-27
CN116486354B (en) 2024-04-16
CN116486354A (en) 2023-07-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant