CN116152767A - Method and device for post-processing lane line detection

Info

Publication number: CN116152767A
Authority: CN (China)
Prior art keywords: lane line, key point, position information
Legal status: Pending
Application number: CN202310183047.5A
Other languages: Chinese (zh)
Inventors: 王小刚, 吴超, 闫亚庆, 徐铎
Current assignee: Dazhuo Intelligent Technology Co., Ltd.; Dazhuo Quxing Intelligent Technology (Shanghai) Co., Ltd.
Original assignees: Chery Automobile Co., Ltd.; Lion Automotive Technology (Nanjing) Co., Ltd.; Wuhu Lion Automotive Technologies Co., Ltd.
Priority date and filing date: 2023-02-24
Publication date: 2023-05-23
Application filed by Chery Automobile Co., Ltd., Lion Automotive Technology (Nanjing) Co., Ltd., and Wuhu Lion Automotive Technologies Co., Ltd.

Classifications

    • G06V20/588 (G: Physics; G06: Computing, calculating or counting; G06V: Image or video recognition or understanding; G06V20/56: context of the image exterior to a vehicle using vehicle-mounted sensors): Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06V10/766 (G06V10/70: recognition using pattern recognition or machine learning): Recognition using regression, e.g. by projecting features on hyperplanes
    • G06V20/41 (G06V20/40: scene-specific elements in video content): Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46 (G06V20/40: scene-specific elements in video content): Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames


Abstract

The application relates to the technical field of intelligent driving, and in particular to a method and a device for lane line detection post-processing. The method comprises the following steps: acquiring single-frame image data in front of a vehicle; extracting key point position information with label attributes on the lane lines from the single-frame image data; performing inverse perspective transformation on the key point position information with an inverse perspective transformation matrix to project it to a bird's-eye view, and removing outlier points based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and fitting the lane line parameters by locally weighted linear regression, combining the actual distance and the key point confidence, to obtain the final lane line curve equation. This addresses the technical problems in the related art that the post-processing used to fit lane lines after detection output yields a poor fit, which reduces post-processing accuracy and lane line detection accuracy, keeps the automation level of the vehicle low, and fails to meet the driving requirements of users.

Description

Method and device for post-processing lane line detection
Technical Field
The application relates to the technical field of intelligent driving, and in particular to a method and a device for lane line detection post-processing.
Background
In the related art, lane line detection methods such as traditional image processing, deep learning semantic segmentation, or deep learning key point detection output intermediate information, after which corresponding post-processing fits the lane lines so that the lane line information can be described clearly.
However, the post-processing used in the related art to fit lane lines after the information is output produces a poor fit. This reduces the accuracy of the post-processing and of lane line detection, keeps the automation level of the vehicle low, and fails to meet the driving requirements of users, so the problem needs to be solved.
Disclosure of Invention
The application provides a method and a device for lane line detection post-processing, to solve the technical problems in the related art that the post-processing used to fit lane lines after information output yields a poor fit, which reduces the accuracy of post-processing and of lane line detection, keeps the automation level of the vehicle low, and fails to meet the driving requirements of users.
An embodiment of a first aspect of the present application provides a method for lane line detection post-processing, including the following steps: acquiring single-frame image data in front of a vehicle; extracting key point position information with label attributes on the lane lines from the single-frame image data; performing inverse perspective transformation on the key point position information with an inverse perspective transformation matrix to project it to a bird's-eye view, and removing outlier points based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and fitting the lane line parameters by locally weighted linear regression, combining the actual distance and the key point confidence, to obtain a final lane line curve equation.
Optionally, in an embodiment of the present application, the key point position information is described by the formula:

l_i = { p_j^i | j = 1, ..., K }, i = 1, ..., N, with p_j^i = (x_j^i, y_j^i, score_j^i)

where N is the total number of lane lines, K is the number of key points describing each lane line, l_i is the descriptor of the i-th lane line, and p_j^i is the j-th key point on the i-th lane line, given by its pixel coordinates (x_j^i, y_j^i) on the image together with score_j^i, the confidence score of the current key point.
Optionally, in an embodiment of the present application, removing the outlier points based on the coordinate relationship between adjacent lanes and adjacent rows includes: filtering abnormal points based on a consistency criterion over several neighboring key points of the current lane line and on the distance constraint relationship between key points of adjacent lanes.
Optionally, in an embodiment of the present application, fitting the lane line parameters by locally weighted linear regression combining the actual distance and the key point confidence to obtain a final lane line curve equation includes: calculating a first longitudinal distance from each pixel key point and a second longitudinal distance from the farthest point on the lane line; and obtaining the weight of the pixel key point from the first longitudinal distance and the second longitudinal distance.
Optionally, in an embodiment of the present application, the weight is calculated as:

w_j = θ1 · score_j + θ2 · (d_j / d_K)

where score_j is the confidence score of the key point, d_j is the longitudinal distance calculated from the pixel key point (x_j, y_j), and d_K is the longitudinal distance calculated from the farthest point (x_K, y_K) on the lane line.
An embodiment of a second aspect of the present application provides a device for lane line detection post-processing, including: an acquisition module for acquiring single-frame image data in front of the vehicle; an extraction module for extracting key point position information with label attributes on the lane lines from the single-frame image data; a projection module for performing inverse perspective transformation on the key point position information with an inverse perspective transformation matrix, projecting it to a bird's-eye view, and removing outlier points based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and a fitting module for fitting the lane line parameters by locally weighted linear regression combining the actual distance and the key point confidence to obtain a final lane line curve equation.
Optionally, in an embodiment of the present application, the key point position information is described by the formula:

l_i = { p_j^i | j = 1, ..., K }, i = 1, ..., N, with p_j^i = (x_j^i, y_j^i, score_j^i)

where N is the total number of lane lines, K is the number of key points describing each lane line, l_i is the descriptor of the i-th lane line, and p_j^i is the j-th key point on the i-th lane line, given by its pixel coordinates (x_j^i, y_j^i) on the image together with score_j^i, the confidence score of the current key point.
Optionally, in one embodiment of the present application, the projection module includes: a filtering unit for filtering abnormal points based on a consistency criterion over several neighboring key points of the current lane line and on the distance constraint relationship between key points of adjacent lanes.
Optionally, in one embodiment of the present application, the fitting module includes: a calculation unit for calculating a first longitudinal distance from each pixel key point and a second longitudinal distance from the farthest point on the lane line; and an acquisition unit for obtaining the weight of the pixel key point from the first longitudinal distance and the second longitudinal distance.
Optionally, in an embodiment of the present application, the weight is calculated as:

w_j = θ1 · score_j + θ2 · (d_j / d_K)

where score_j is the confidence score of the key point, d_j is the longitudinal distance calculated from the pixel key point (x_j, y_j), and d_K is the longitudinal distance calculated from the farthest point (x_K, y_K) on the lane line.
An embodiment of a third aspect of the present application provides a vehicle, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the method of lane line detection post-processing described in the above embodiments.
An embodiment of a fourth aspect of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of lane line detection post-processing described above.
According to the embodiments of the present application, key point position information with label attributes on the lane lines can be extracted from single-frame image data acquired in front of the vehicle; the key point position information is inverse-perspective-transformed with an inverse perspective transformation matrix and projected to a bird's-eye view; outlier points are removed based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and the lane line parameters are fitted by locally weighted linear regression combining the actual distance and the key point confidence to obtain the final lane line curve equation. This improves the accuracy of lane line detection, raises the intelligence level of the vehicle, and meets the driving requirements of users, thereby solving the technical problems in the related art that the post-processing used to fit lane lines after information output yields a poor fit, which reduces post-processing accuracy and lane line detection accuracy, keeps the automation level of the vehicle low, and fails to meet the driving requirements of users.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a method for lane line detection post-processing according to an embodiment of the present application;
FIG. 2 is a schematic diagram of ID-bearing keypoint data on each lane line output by a deep learning algorithm according to one embodiment of the present application;
FIG. 3 is a schematic diagram of an inverse perspective transformation according to an embodiment of the present application;
FIG. 4 is a schematic diagram of consistency of 3 keypoints adjacent to a current lane line according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a distance constraint relationship of keypoints between adjacent lanes according to one embodiment of the present application;
FIG. 6 is a flow chart of a method of lane line detection post-processing according to one embodiment of the present application;
fig. 7 is a schematic structural diagram of a device for post-processing lane line detection according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
The following describes the method and device for lane line detection post-processing according to embodiments of the present application with reference to the accompanying drawings. According to the method, key point position information with label attributes on the lane lines can be extracted from single-frame image data acquired in front of the vehicle; the key point position information is inverse-perspective-transformed with an inverse perspective transformation matrix and projected to a bird's-eye view; outlier points are removed based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and the lane line parameters are fitted by locally weighted linear regression combining the actual distance and the key point confidence to obtain the final lane line curve equation, which improves the accuracy of lane line detection, raises the intelligence level of the vehicle, and meets the driving requirements of users. This solves the technical problems in the related art that the post-processing used to fit lane lines after information output yields a poor fit, which reduces post-processing accuracy and lane line detection accuracy, keeps the automation level of the vehicle low, and fails to meet the driving requirements of users.
Specifically, fig. 1 is a flow chart of a method for post-processing lane line detection according to an embodiment of the present application.
As shown in fig. 1, the method for post-processing lane line detection includes the following steps:
in step S101, single frame image data in front of the vehicle is acquired.
It can be appreciated that the embodiment of the present application acquires single-frame image data in front of the vehicle, for example through a front-view camera mounted on the front windshield, which effectively improves the feasibility of lane line detection.
In step S102, key point position information with a tag attribute on a lane line is extracted from single frame image data.
It can be understood that, in the embodiment of the present application, the key point position information with label attributes on the lane lines may be extracted from the single-frame image data. For example, as shown in fig. 2, the acquired single-frame image data in front of the vehicle, i.e. the original image, may be input to the vehicle's lane line key point detection function, which outputs labeled key point data on each lane line through a deep convolutional network model: an original image with a resolution of 1280×720 may be input, and the trained lane line key point detection function outputs the key point position information with the corresponding label attributes described below, thereby effectively improving detection accuracy.
In one embodiment of the present application, the key point position information is described by the formula:

l_i = { p_j^i | j = 1, ..., K }, i = 1, ..., N, with p_j^i = (x_j^i, y_j^i, score_j^i)

where N is the total number of lane lines, K is the number of key points describing each lane line, l_i is the descriptor of the i-th lane line, and p_j^i is the j-th key point on the i-th lane line, given by its pixel coordinates (x_j^i, y_j^i) on the image together with score_j^i, the confidence score of the current key point, which takes values between 0 and 1.
In step S103, the key point position information is subjected to inverse perspective transformation using the inverse perspective transformation matrix and projected to the bird's-eye view, and outlier points are removed based on the coordinate relationship between adjacent lanes and adjacent rows to obtain a plurality of lane line key points.
It can be appreciated that, in the embodiment of the present application, the key point position information may be inverse-perspective-transformed with an inverse perspective transformation matrix. For example, as shown in fig. 3, the image captured by the front-view camera has a perspective effect, so the lane lines do not appear parallel; IPM (Inverse Perspective Mapping) eliminates the perspective effect and projects the points to a bird's-eye view, where the lane lines keep their original characteristics. The IPM transformation matrix is generated from the camera calibration parameters, i.e.:

[x_d, y_d, 1]^T = M · [x_s, y_s, 1]^T (up to the homogeneous scale factor)

where M is the corresponding 3×3 IPM inverse perspective transformation matrix, (x_s, y_s) are the original pixel coordinates, and (x_d, y_d) are the new pixel coordinates obtained by the IPM transformation.
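A minimal sketch of this projection step with OpenCV, assuming M has been derived from calibration; the correspondence points below are illustrative placeholders, not calibration values from the patent:

    import cv2
    import numpy as np

    # Illustrative road-plane correspondences between the source image and the
    # bird's-eye view; in practice these come from the camera calibration.
    src_pts = np.float32([[560, 460], [730, 460], [1100, 700], [180, 700]])
    dst_pts = np.float32([[320, 0], [960, 0], [960, 720], [320, 720]])
    M = cv2.getPerspectiveTransform(src_pts, dst_pts)  # 3x3 IPM matrix

    def ipm_project(points: np.ndarray, M: np.ndarray) -> np.ndarray:
        """Map (x_s, y_s) pixel coordinates to BEV coordinates (x_d, y_d)."""
        pts = points.reshape(-1, 1, 2).astype(np.float32)
        return cv2.perspectiveTransform(pts, M).reshape(-1, 2)

    bev = ipm_project(np.array([[640.0, 500.0]]), M)  # one projected key point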
The embodiment of the present application can then remove outlier points based on the coordinate relationships between adjacent lanes and adjacent rows, as described below, to obtain a plurality of lane line key points, which effectively improves the accuracy and reliability of lane line detection and raises the intelligence level of the vehicle.
In one embodiment of the present application, removing the outlier points based on the coordinate relationship between adjacent lanes and adjacent rows includes: filtering abnormal points based on a consistency criterion over several neighboring key points of the current lane line and on the distance constraint relationship between key points of adjacent lanes.
For example, as shown in fig. 4, the embodiment of the present application may use a consistency criterion over 3 neighboring key points of the current lane line: whether on a straight line or a curve, the variation trend between neighboring key points is consistent under the BEV (Bird's Eye View) perspective, and when this consistency is not satisfied there is a high probability that the key point deviates. The slopes between the current key point p1 (x1, y1) and its neighboring key points p0 (x0, y0) and p2 (x2, y2) may be calculated as the decision criterion:

k01 = (y1 - y0) / (x1 - x0), k12 = (y2 - y1) / (x2 - x1)

and p1 is judged against a preset consistency threshold τ:

Flag1 = 1 if |k01 - k12| > τ, else Flag1 = 0

where Flag1 = 1 indicates that the current key point has some probability of being an outlier; it is kept as a candidate outlier and further judged using the distance constraint relationship between key points of adjacent lanes, described below.
Next, as shown in fig. 5, after a key point has been marked as a candidate abnormal point by the 3-neighbor consistency criterion of the current lane line, the embodiment of the present application may verify and select it using the distance constraint criterion between key points of adjacent lanes.
Finally, to determine whether the point p1 on lane line L1 is abnormal (the 3-neighbor consistency criterion alone can mis-handle curve corners), the embodiment of the present application adds a distance constraint: for the key points of corresponding rows on two adjacent lane lines L1 and L2, the distance in the X direction is computed as

d_j = |x_j^1 - x_j^2|

together with its statistical mean and standard deviation over the rows, meanD12 and δD12. Then, with λ a preset multiplier:

Flag2 = 1 if |d_j - meanD12| > λ · δD12, else Flag2 = 0

where Flag2 = 1 indicates that the current j-th key point is an abnormal point and needs to be removed, such as the left and right boundary points inside the ellipse in fig. 5; a point whose deviation exceeds the bound is directly judged abnormal and filtered out.
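The two criteria can be sketched as follows, assuming each lane line is a row-aligned (K, 2) array of BEV coordinates; the thresholds tau and lam are illustrative, since the patent does not publish its values:

    import numpy as np

    def flag_inconsistent(lane: np.ndarray, tau: float = 0.3) -> np.ndarray:
        """Flag1: candidate outliers whose local slope change exceeds tau."""
        flags = np.zeros(len(lane), dtype=bool)
        for j in range(1, len(lane) - 1):
            (x0, y0), (x1, y1), (x2, y2) = lane[j - 1], lane[j], lane[j + 1]
            k01 = (y1 - y0) / (x1 - x0 + 1e-6)  # slope p0 -> p1
            k12 = (y2 - y1) / (x2 - x1 + 1e-6)  # slope p1 -> p2
            flags[j] = abs(k01 - k12) > tau
        return flags

    def flag_distance_outliers(l1: np.ndarray, l2: np.ndarray,
                               lam: float = 2.0) -> np.ndarray:
        """Flag2: rows whose X-direction lane-to-lane distance deviates from the mean."""
        d = np.abs(l1[:, 0] - l2[:, 0])              # per-row distance d_j
        return np.abs(d - d.mean()) > lam * d.std()  # mean / deviation test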
In summary, the abnormal point filtering function of the embodiment of the present application processes the key points of each lane line under the BEV perspective and filters out points with large deviations, which improves the accuracy of lane line detection post-processing and the safety of the vehicle.
In step S104, the lane line parameters are fitted by locally weighted linear regression combining the actual distance and the key point confidence, to obtain a final lane line curve equation.
It can be understood that the embodiment of the present application fits the lane line parameters by locally weighted linear regression combining the actual distance and the key point confidence, as described below, to obtain the final lane line curve equation. This effectively improves the accuracy of the fitted data, makes the lane line description more accurate and reliable, raises the intelligence level of the vehicle, and effectively meets the driving requirements of users.
For example, the lane line fitting function of the embodiment of the present application may receive the lane line key points filtered in the above steps and fit the parameters by locally weighted regression; the objective function to minimize is:

Loss = Σ_{j=1}^{K} w_j · (y_j - ŷ_j)², with ŷ_j = a·x_j² + b·x_j + c

where y_j is the y-direction value of the j-th key point on the lane line, ŷ_j is the predicted y-direction value of the j-th key point, and w_j is the weight of the corresponding key point. The curve fit is second order, and the lane line fitting function selects the optimal parameter combination (a, b, c) that minimizes Loss.
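A minimal sketch of this weighted second-order fit with NumPy; np.polyfit applies its weights to the unsquared residual, so passing sqrt(w_j) reproduces the Loss above:

    import numpy as np

    def fit_lane(x: np.ndarray, y: np.ndarray, w: np.ndarray) -> np.ndarray:
        """Minimize sum_j w_j * (y_j - (a*x_j**2 + b*x_j + c))**2."""
        # np.polyfit minimizes sum((w_i * (y_i - f(x_i)))**2), hence sqrt(w).
        return np.polyfit(x, y, deg=2, w=np.sqrt(w))  # -> coefficients [a, b, c]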
In some embodiments, the innovation of the lane line fitting function lies mainly in improving the key point weight w_j. The lane line key point detection function outputs each key point together with its score, i.e. the confidence of the current key point, with values between 0 and 1, where a higher score represents higher accuracy; this score is used as one component of the key point weight. In addition, key points at different distances differ in importance during lane line fitting: distant key points, which correspond to the higher-order parameters, matter more to the fit. The longitudinal distance between a key point and the vehicle is therefore used as the other important component, which effectively improves the accuracy of the fitted lane line parameters and the robustness of lane line detection post-processing.
Optionally, in an embodiment of the present application, fitting the lane line parameters by locally weighted linear regression combining the actual distance and the key point confidence to obtain a final lane line curve equation includes: calculating a first longitudinal distance from each pixel key point and a second longitudinal distance from the farthest point on the lane line; and obtaining the weight of the pixel key point from the first longitudinal distance and the second longitudinal distance.
For example, the embodiment of the present application may substitute each pixel key point (x_j, y_j), as homogeneous coordinates (u, v, 1), to calculate its longitudinal distance d_j, and likewise calculate the longitudinal distance d_K of the farthest point (x_K, y_K) on the lane line, to obtain the final fitting weight w_j of (x_j, y_j). This improves the accuracy of the pixel key point weights and the reliability of intelligent driving.
In one embodiment of the present application, the weight is calculated as:

w_j = θ1 · score_j + θ2 · (d_j / d_K)

where score_j is the confidence score of the key point, d_j is the longitudinal distance calculated from the pixel key point (x_j, y_j), and d_K is the longitudinal distance calculated from the farthest point (x_K, y_K) on the lane line. In addition, θ1 takes the value 0.6 and θ2 takes the value 0.4.
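A sketch of this weighting in Python; the linear combination form and the near-to-far ordering are interpretations, since the published formula appears only as an image:

    import numpy as np

    def keypoint_weights(scores: np.ndarray, d: np.ndarray,
                         theta1: float = 0.6, theta2: float = 0.4) -> np.ndarray:
        """w_j = theta1 * score_j + theta2 * (d_j / d_K).

        d holds each key point's longitudinal distance, ordered near to far,
        so d[-1] corresponds to the farthest point (x_K, y_K); distant key
        points, which drive the high-order parameters, get larger weights."""
        return theta1 * scores + theta2 * (d / d[-1])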
For example, as shown in fig. 6, the working principle of the embodiment of the present application will be described in detail below with a specific embodiment.
Step S601: front view image data.
That is, the embodiment of the present application acquires single-frame image data in front of the vehicle, which improves the feasibility of lane line detection.
Step S602: lane line key point detection function.
That is, the lane line key point detection function outputs a large amount of key point position information on the lane lines, improving detection accuracy.
Step S603: IPM inverse perspective transformation.
That is, the embodiment of the present application uses the inverse perspective transformation matrix to complete the transformation of the image data and the mapping of the key point data, raising the intelligence level of the vehicle.
Step S604: filtering abnormal points.
That is, filtering abnormal key points under the BEV perspective improves the accuracy of lane line detection post-processing and the safety of the vehicle.
Step S605: and (5) lane line fitting.
That is, the embodiment of the present application fits the lane lines with the improved locally weighted regression, raising the intelligence level of the vehicle and effectively meeting the driving requirements of users.
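Tying steps S601-S605 together, a hypothetical driver routine that composes the sketches above (detector is a placeholder returning, per lane, a (K, 3) array of x, y, score; only the single-lane consistency filter is applied here for brevity):

    import numpy as np

    def postprocess_frame(image, detector, M):
        """S601-S605: detect key points, project to BEV, filter, weight, and fit."""
        curves = []
        for lane in detector(image):           # S602: (K, 3) labeled key points
            pts = ipm_project(lane[:, :2], M)  # S603: IPM to bird's-eye view
            keep = ~flag_inconsistent(pts)     # S604: drop abnormal key points
            pts, scores = pts[keep], lane[keep, 2]
            d = np.abs(pts[:, 1] - pts[0, 1])        # longitudinal distances d_j
            w = keypoint_weights(scores, d)          # confidence + distance weights
            curves.append(fit_lane(pts[:, 0], pts[:, 1], w))  # S605: (a, b, c)
        return curves                          # one curve equation per lane line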
According to the method for lane line detection post-processing provided by the embodiment of the present application, key point position information with label attributes on the lane lines can be extracted from single-frame image data acquired in front of the vehicle; the key point position information is inverse-perspective-transformed with an inverse perspective transformation matrix and projected to a bird's-eye view; outlier points are removed based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and the lane line parameters are fitted by locally weighted linear regression combining the actual distance and the key point confidence to obtain the final lane line curve equation, which improves the accuracy of lane line detection, raises the intelligence level of the vehicle, and meets the driving requirements of users. This solves the technical problems in the related art that the fitting effect is poor, the accuracy of post-processing and of lane line detection is reduced, and the driving requirements of users cannot be met.
Next, a device for post-processing of lane line detection according to an embodiment of the present application will be described with reference to the accompanying drawings.
Fig. 7 is a block schematic diagram of an apparatus for lane line detection post-processing according to an embodiment of the present application.
As shown in fig. 7, the lane line detection post-processing apparatus 10 includes: the device comprises an acquisition module 100, an extraction module 200, a projection module 300 and a fitting module 400.
Specifically, the acquiring module 100 is configured to acquire single frame image data in front of the vehicle.
The extraction module 200 is configured to extract the key point position information with label attributes on the lane lines from the single-frame image data.
The projection module 300 is configured to perform inverse perspective transformation on the key point position information using an inverse perspective transformation matrix, project it to a bird's-eye view, and remove outlier points based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points.
The fitting module 400 is configured to fit the lane line parameters by locally weighted linear regression combining the actual distance and the key point confidence to obtain a final lane line curve equation.
Optionally, in an embodiment of the present application, the key point position information is described by the formula:

l_i = { p_j^i | j = 1, ..., K }, i = 1, ..., N, with p_j^i = (x_j^i, y_j^i, score_j^i)

where N is the total number of lane lines, K is the number of key points describing each lane line, l_i is the descriptor of the i-th lane line, and p_j^i is the j-th key point on the i-th lane line, given by its pixel coordinates (x_j^i, y_j^i) on the image together with score_j^i, the confidence score of the current key point.
Optionally, in one embodiment of the present application, the projection module includes a filtering unit.
The filtering unit is configured to filter abnormal points based on a consistency criterion over several neighboring key points of the current lane line and on the distance constraint relationship between key points of adjacent lanes.
Optionally, in one embodiment of the present application, the fitting module includes: a calculation unit and an acquisition unit.
The calculation unit is configured to calculate a first longitudinal distance from each pixel key point and a second longitudinal distance from the farthest point on the lane line.
The acquisition unit is configured to obtain the weight of the pixel key point from the first longitudinal distance and the second longitudinal distance.
Optionally, in an embodiment of the present application, the weight is calculated as:

w_j = θ1 · score_j + θ2 · (d_j / d_K)

where score_j is the confidence score of the key point, d_j is the longitudinal distance calculated from the pixel key point (x_j, y_j), and d_K is the longitudinal distance calculated from the farthest point (x_K, y_K) on the lane line.
It should be noted that the foregoing explanation of the embodiment of the method for lane line detection post-processing also applies to the device for lane line detection post-processing of this embodiment, and is not repeated here.
According to the device for lane line detection post-processing provided by the embodiment of the present application, key point position information with label attributes on the lane lines can be extracted from single-frame image data acquired in front of the vehicle; the key point position information is inverse-perspective-transformed with an inverse perspective transformation matrix and projected to a bird's-eye view; outlier points are removed based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and the lane line parameters are fitted by locally weighted linear regression combining the actual distance and the key point confidence to obtain the final lane line curve equation, which improves the accuracy of lane line detection, raises the intelligence level of the vehicle, and meets the driving requirements of users. This solves the technical problems in the related art that the fitting effect is poor, the accuracy of post-processing and of lane line detection is reduced, and the driving requirements of users cannot be met.
Fig. 8 is a schematic structural diagram of a vehicle according to an embodiment of the present application. The vehicle may include:
a memory 801, a processor 802, and a computer program stored on the memory 801 and executable on the processor 802.
The processor 802 implements the method of lane line detection post-processing provided in the above-described embodiment when executing a program.
Further, the vehicle further includes:
a communication interface 803 for communication between the memory 801 and the processor 802.
A memory 801 for storing a computer program executable on the processor 802.
The memory 801 may include high-speed RAM memory or may further include non-volatile memory (non-volatile memory), such as at least one magnetic disk memory.
If the memory 801, the processor 802, and the communication interface 803 are implemented independently, the communication interface 803, the memory 801, and the processor 802 may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 801, the processor 802, and the communication interface 803 are integrated on a chip, the memory 801, the processor 802, and the communication interface 803 may communicate with each other through internal interfaces.
The processor 802 may be a central processing unit (Central Processing Unit, abbreviated as CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method of lane line detection post-processing as above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "N" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or N wires, a portable computer cartridge (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variants may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (10)

1. A method of lane line detection post-processing, comprising the steps of:
acquiring single-frame image data in front of a vehicle;
extracting key point position information with label attributes on the lane lines from the single-frame image data;
performing inverse perspective transformation on the key point position information with an inverse perspective transformation matrix to project it to a bird's-eye view, and removing outlier points based on the coordinate relationship between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and
fitting the lane line parameters by locally weighted linear regression combining the actual distance and the key point confidence to obtain a final lane line curve equation.
2. The method of claim 1, wherein the key point position information is described by the formula:

l_i = { p_j^i | j = 1, ..., K }, i = 1, ..., N, with p_j^i = (x_j^i, y_j^i, score_j^i)

where N is the total number of lane lines, K is the number of key points describing each lane line, l_i is the descriptor of the i-th lane line, and p_j^i is the j-th key point on the i-th lane line, given by its pixel coordinates (x_j^i, y_j^i) on the image together with score_j^i, the confidence score of the current key point.
3. The method of claim 1, wherein the removing of outlier points based on the coordinate relationship between adjacent lanes and adjacent rows comprises:
filtering abnormal points based on a consistency criterion over several neighboring key points of the current lane line and on the distance constraint relationship between key points of adjacent lanes.
4. The method of claim 1, wherein fitting the lane line parameters by locally weighted linear regression combining the actual distance and the key point confidence to obtain a final lane line curve equation comprises:
calculating a first longitudinal distance from each pixel key point, and calculating a second longitudinal distance from the farthest point on the lane line;
and obtaining the weight of the pixel key point from the first longitudinal distance and the second longitudinal distance.
5. The method of claim 4, wherein the weight is calculated as:

w_j = θ1 · score_j + θ2 · (d_j / d_K)

where score_j is the confidence score of the key point, d_j is the longitudinal distance calculated from the pixel key point (x_j, y_j), and d_K is the longitudinal distance calculated from the farthest point (x_K, y_K) on the lane line.
6. A lane line detection post-processing apparatus, comprising:
an acquisition module for acquiring single-frame image data in front of the vehicle;
an extraction module for extracting key point position information with label attributes on the lane lines from the single-frame image data;
a projection module for performing inverse perspective transformation on the key point position information with an inverse perspective transformation matrix, projecting it to a bird's-eye view, and removing outlier points based on the coordinate relationships between adjacent lanes and adjacent rows to obtain a plurality of lane line key points; and
a fitting module for fitting the lane line parameters by locally weighted linear regression combining the actual distance and the key point confidence to obtain a final lane line curve equation.
7. The apparatus of claim 6, wherein the key point position information is described by the formula:

l_i = { p_j^i | j = 1, ..., K }, i = 1, ..., N, with p_j^i = (x_j^i, y_j^i, score_j^i)

where N is the total number of lane lines, K is the number of key points describing each lane line, l_i is the descriptor of the i-th lane line, and p_j^i is the j-th key point on the i-th lane line, given by its pixel coordinates (x_j^i, y_j^i) on the image together with score_j^i, the confidence score of the current key point.
8. The apparatus of claim 6, wherein the projection module comprises:
and the filtering unit is used for filtering abnormal points based on the consistency criterion of a plurality of adjacent key points of the current lane line and the distance constraint relation of the key points between the adjacent lanes.
9. A vehicle, characterized by comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the method of lane line detection post-processing of any one of claims 1-5.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of lane line detection post-processing according to any one of claims 1-5.
CN202310183047.5A (priority date 2023-02-24, filing date 2023-02-24): Method and device for post-processing lane line detection. Status: Pending. Publication: CN116152767A (en)

Priority Applications (1)

CN202310183047.5A (priority date 2023-02-24, filing date 2023-02-24): Method and device for post-processing lane line detection

Publications (1)

CN116152767A (published 2023-05-23)

Family

ID=86340605

Family Applications (1)

CN202310183047.5A (pending): Method and device for post-processing lane line detection

Country Status (1)

CN: CN116152767A (en)

Cited By (1)

* Cited by examiner, † Cited by third party

CN117576651A * (priority date 2024-01-18, published 2024-02-20), 合众新能源汽车股份有限公司: Lane line fitting method and system for driving assistance and vehicle

Similar Documents

Publication Title
CN109740469B (en) Lane line detection method, lane line detection device, computer device, and storage medium
JP6802331B2 (en) Lane processing method and equipment
CN109658454B (en) Pose information determination method, related device and storage medium
EP3617938A1 (en) Lane line processing method and device
CN116152767A (en) Method and device for post-processing lane line detection
CN109839937B (en) Method, device and computer equipment for determining automatic driving planning strategy of vehicle
CN111295666A (en) Lane line detection method, device, control equipment and storage medium
KR20210018493A (en) Lane property detection
CN112132131A (en) Measuring cylinder liquid level identification method and device
CN109325388A (en) Recognition methods, system and the automobile of lane line
CN112037180B (en) Chromosome segmentation method and device
CN117612128B (en) Lane line generation method, device, computer equipment and storage medium
CN114820679A (en) Image annotation method and device, electronic equipment and storage medium
CN111027474A (en) Face area acquisition method and device, terminal equipment and storage medium
CN113902740A (en) Construction method of image blurring degree evaluation model
CN114882470A (en) Vehicle-mounted anti-collision early warning method and device, computer equipment and storage medium
CN113643311A (en) Image segmentation method and device for boundary error robustness
CN116721396A (en) Lane line detection method, device and storage medium
CN116935134A (en) Point cloud data labeling method, point cloud data labeling system, terminal and storage medium
CN114926817B (en) Method and device for identifying parking space, electronic equipment and computer readable storage medium
CN116343148A (en) Lane line detection method, device, vehicle and storage medium
CN111126109B (en) Lane line identification method and device and electronic equipment
CN109117866B (en) Lane recognition algorithm evaluation method, computer device, and storage medium
CN116363628A (en) Mark detection method and device, nonvolatile storage medium and computer equipment
CN115909271A (en) Parking space identification method and device, vehicle and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
    Effective date of registration: 2024-04-16
    Address after: 10th Floor, Block B1, Wanjiang Wealth Plaza, Guandou Street, Jiujiang District, Wuhu City, Anhui Province, 241000, China
    Applicant after: Dazhuo Intelligent Technology Co., Ltd.; Dazhuo Quxing Intelligent Technology (Shanghai) Co., Ltd.
    Address before: Anshan South Road, Wuhu Economic and Technological Development Zone, Anhui, 241009, China
    Applicant before: Wuhu Sambalion Auto Technology Co., Ltd.; Lion Automotive Technology (Nanjing) Co., Ltd.; Chery Automobile Co., Ltd.