CN113313047B - Lane line detection method and system based on lane structure prior - Google Patents

Lane line detection method and system based on lane structure prior

Info

Publication number
CN113313047B
CN113313047B (application CN202110654503.0A)
Authority
CN
China
Prior art keywords
lane
lane line
image
weight
graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110654503.0A
Other languages
Chinese (zh)
Other versions
CN113313047A (en)
Inventor
凌强
刘彬辉
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110654503.0A priority Critical patent/CN113313047B/en
Publication of CN113313047A publication Critical patent/CN113313047A/en
Application granted granted Critical
Publication of CN113313047B publication Critical patent/CN113313047B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a lane line detection method and system based on lane structure prior, wherein the method comprises the following steps: S1: normalizing the forward-looking traffic image to obtain a lane line image; S2: passing the lane line image through an encoding-decoding network to obtain a lane line feature map; obtaining a forward lane weight map and the key point coordinates of each lane line from the lane line feature map; constructing a lane line key point constraint loss function based on the key point coordinates of the forward lane weight map; obtaining a top-view lane weight map through perspective transformation and constructing a parallel structure constraint loss function; S3: connecting the key point coordinates with the top-view lane weight map to obtain a ternary image; S4: performing least-squares fitting on the ternary image and obtaining the lane curve parameters according to the predefined number of lane lines. The method solves the difficulty of imposing structural constraints in segmentation-based methods, improves lane detection accuracy in occluded and strongly illuminated scenes, and improves lane line inference accuracy and the consistency of lane parameters.

Description

Lane line detection method and system based on lane structure prior
Technical Field
The invention relates to the field of automatic driving, in particular to a lane line detection method and system based on lane structure prior.
Background
Lane lines are among the most important road-surface indications: they guide vehicles to drive within a restricted road area and can be used for vehicle positioning, lane departure detection and trajectory planning, so accurate detection of road-surface lane lines is a foundation of automatic driving. Because high-precision maps are not yet widespread, most lane line detection algorithms are implemented from images.
The lane line detection task faces many difficulties, such as worn or occluded lane lines, severe illumination changes, an uncertain number of lane lines, diverse lane line types, the inherently thin and elongated shape of lanes, and real-time requirements on the algorithm. This makes lane line detection extremely challenging. Since lanes are an important cue restricting vehicles to the road, completing lane line detection accurately, stably and quickly is highly significant for the practical use of automatic driving vehicles.
Lane lines are usually continuous in the scene, and different lane lines are generally parallel to each other. During imaging by the vehicle-mounted forward-looking camera, parallel lane lines usually converge to the same vanishing point. However, conventional segmentation-based lane line detection methods produce a pixel-level segmentation result, which makes it difficult to impose structural constraints from the task description. Lacking such structure prior knowledge, existing methods struggle to meet the accuracy requirements of lane line detection.
Disclosure of Invention
In order to solve the technical problem, the invention provides a lane line detection method and system based on lane structure prior.
The technical solution of the invention is as follows: a lane line detection method based on lane structure prior comprises the following steps:
step S1: according to a forward-looking traffic image obtained by the vehicle-mounted forward-looking camera, carrying out data normalization processing to obtain a lane line image;
step S2: performing feature extraction on the lane line image through an encoding-decoding network to obtain a lane line feature map; up-sampling the lane line feature map to obtain a forward lane weight map of the same size as the lane line image, and computing the expected value of the weights to obtain the row-by-row key point coordinates of each lane line; according to the key point coordinates, obtaining a top-view lane weight map by using a lane line key point constraint loss function and a perspective transformation, and constructing a parallel structure constraint loss function so that the final lane lines satisfy the mutually parallel and continuous structure prior;
step S3: connecting the key point coordinates with the overlooking lane weight graph to obtain a ternary image; each pixel point of the ternary image comprises a position coordinate and a corresponding weight;
step S4: performing least-squares fitting on the ternary image, and obtaining the lane curve parameters according to the predefined number of lane lines.
Compared with the prior art, the invention has the following advantages:
1. The lane line detection method based on lane structure prior disclosed by the invention introduces a vector loss constraint based on lane key points, solves the difficulty of imposing structural constraints in segmentation-based methods, and improves lane detection accuracy in occluded and strongly illuminated scenes.
2. The disclosed method also introduces a loss constraint based on the parallel lane line structure, which improves lane line inference accuracy and the consistency of lane parameters.
Drawings
FIG. 1 is a flowchart of a lane line detection method based on lane structure prior in an embodiment of the present invention;
fig. 2 is a flowchart of step S2 of the lane line detection method based on lane structure prior in the embodiment of the present invention: performing feature extraction on the lane line image through an encoding-decoding network to obtain a lane line feature map; up-sampling the lane line feature map to obtain a forward lane weight map of the same size as the lane line image, and computing the expected value of the weights to obtain the row-by-row key point coordinates of each lane line; according to the key point coordinates, obtaining a top-view lane weight map by using a lane line key point constraint loss function and a perspective transformation, and constructing a parallel structure constraint loss function so that the final lane lines satisfy the mutually parallel and continuous structure prior;
FIG. 3A is a schematic diagram of a lane marking image according to an embodiment of the present disclosure;
FIG. 3B is a schematic diagram of a key point image of a lane line obtained after adding a key point constraint according to an embodiment of the present invention;
FIG. 4A is a schematic view of an original front view lane line image in accordance with an embodiment of the present invention;
FIG. 4B is a schematic diagram of a continuous lane line image obtained through key point constraint in the embodiment of the present invention;
FIG. 5A is a diagram illustrating an original forward lane line image in accordance with an embodiment of the present invention;
FIG. 5B is a schematic diagram illustrating lane data labeling according to an embodiment of the present invention;
FIG. 5C is a schematic diagram of a perspective transformed parallel lane image according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the lane-line parallel structure constraint in an embodiment of the present invention;
fig. 7 is a structural block diagram of a lane line detection system based on lane structure prior in an embodiment of the present invention.
Detailed Description
The invention provides a lane line detection method based on lane structure prior, which solves the difficulty of imposing structural constraints in segmentation-based methods, improves lane detection accuracy in occluded and strongly illuminated scenes, and improves lane line inference accuracy and the consistency of lane parameters.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings.
Example one
As shown in fig. 1, a lane line detection method based on lane structure prior provided by an embodiment of the present invention includes the following steps:
step S1: according to a forward-looking traffic image obtained by the vehicle-mounted forward-looking camera, carrying out data normalization processing to obtain a lane line image;
step S2: performing feature extraction on the lane line image through an encoding-decoding network to obtain a lane line feature map; up-sampling the lane line feature map to obtain a forward lane weight map of the same size as the lane line image, and computing the expected value of the weights to obtain the row-by-row key point coordinates of each lane line; according to the key point coordinates, obtaining a top-view lane weight map by using a lane line key point constraint loss function and a perspective transformation, and constructing a parallel structure constraint loss function so that the final lane lines satisfy the mutually parallel and continuous structure prior;
step S3: connecting the key point coordinates with the overlooking lane weight graph to obtain a ternary image; each pixel point of the ternary image comprises a position coordinate and a corresponding weight;
step S4: performing least-squares fitting on the ternary image, and obtaining the lane curve parameters according to the predefined number of lane lines.
In one embodiment, the step S1: according to the forward-looking traffic image obtained by the vehicle-mounted forward-looking camera, carrying out data normalization processing to obtain a lane line image, specifically comprises the following steps:
according to the forward-looking traffic image obtained by the vehicle-mounted forward-looking camera, the lane line image is obtained by normalizing the image with its mean and variance.
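The normalization of step S1 can be sketched as follows; the patent only states that the mean and variance are used, so the per-channel statistics and the `normalize_image` name are assumptions for illustration:

```python
import numpy as np

def normalize_image(img: np.ndarray) -> np.ndarray:
    """Normalize a forward-looking traffic image by its mean and variance.

    img: H x W x C image. Statistics are computed per channel (an
    assumption; the source only says the mean and variance are used).
    """
    img = img.astype(np.float64)
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / (std + 1e-8)  # epsilon guards flat channels
```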
In actual life, most of lane lines are parallel to each other. However, after imaging by the camera, the parallel relationship between the lane lines does not hold in the camera image plane, but converges to the same vanishing point. Therefore, it is necessary to obtain lane lines that are parallel to each other in the bird's-eye view angle by the following procedure.
As shown in fig. 2, in one embodiment, the step S2: performing feature extraction on the lane line image through an encoding-decoding network to obtain a lane line feature map; up-sampling the lane line feature map to obtain a forward lane weight map of the same size as the lane line image, and computing the expected value of the weights to obtain the row-by-row key point coordinates of each lane line; according to the key point coordinates, obtaining a top-view lane weight map by using a lane line key point constraint loss function and a perspective transformation, and constructing a parallel structure constraint loss function so that the final lane lines satisfy the mutually parallel and continuous structure prior, specifically comprises the following steps:
s21: performing feature extraction on the lane line image through a coding-decoding network to obtain a lane line feature map; up-sampling the lane line characteristic graph to obtain a forward lane weight value graph with the same size as the lane line image;
an encoder in the encoding-decoding network stacks multiple convolutional layers on the lane line image, continuously enlarging the receptive field to obtain a series of lane line feature maps of progressively smaller sizes and different scales; a decoder in the encoding-decoding network up-samples the lane line feature map step by step with transposed convolution layers until a forward lane weight map of the same size as the input lane line image is obtained, where the forward lane weight map describes how likely the pixel at each position belongs to a lane line. Meanwhile, the expected value of the weights is computed row by row to obtain the row-by-row key point coordinates of each lane line. The encoding-decoding network used in the embodiment of the invention can be a lightweight network chosen according to the available hardware, or the running speed can be improved by pruning the feature channels of a deeper network.
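The row-by-row expected value of the weights is a soft-argmax over each row of a lane's weight map. A minimal sketch, assuming one H×W weight channel per lane line (the function name and the zero-row guard are illustrative):

```python
import numpy as np

def rowwise_keypoints(weight_map: np.ndarray) -> np.ndarray:
    """Row-by-row key point x-coordinate of one lane line (step S2).

    weight_map: H x W non-negative weights for a single lane line.
    Each row's key point is the expected column index under the
    row-normalized weights (a soft-argmax over the row).
    """
    h, w = weight_map.shape
    cols = np.arange(w, dtype=np.float64)
    row_sum = weight_map.sum(axis=1, keepdims=True)
    probs = weight_map / np.maximum(row_sum, 1e-12)  # normalize each row
    return probs @ cols  # shape (H,): expected column per row
```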
S22: constructing a lane line key point constraint loss function according to the following formulas (1) to (2), calculating position expectation of a forward weight value graph in a preset row to obtain a key point set distributed on a specific row, and constructing a vector set V by taking any two points in the key point set as a starting point and an end point of a vector;
L_k = L_o + β·L_s (1)
L_s: structural constraint over vector pairs a, b ∈ V (2) [the expression is rendered only as an image in the source]
where a, b ∈ V, and V is the set of all vectors whose start and end points are key points; β is a coefficient; L_o is the original lane line loss function, and L_s is the newly introduced structural constraint;
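The construction of the vector set V described above (every ordered pair of key points forming one vector) can be sketched as follows; the function name is illustrative, and the exact structural loss L_s computed over V is not recoverable from the source:

```python
import itertools
import numpy as np

def build_vector_set(keypoints: np.ndarray) -> np.ndarray:
    """Vector set V of step S22: every ordered pair of distinct key
    points (start, end) yields the vector end - start.

    keypoints: N x 2 array of (x, y) key point coordinates.
    Returns an N*(N-1) x 2 array of vectors.
    """
    vectors = [end - start
               for start, end in itertools.permutations(keypoints, 2)]
    return np.stack(vectors)
```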
The mutual spatial relationship between lane lines makes such judgments natural for humans. In machine learning, however, extreme conditions in a sample such as occlusion and strong illumination easily lead a deep learning algorithm to false detections. The embodiment therefore samples the forward weight map uniformly by row and computes each row's expected position to obtain a group of key points for each lane line, reducing false detections.
Fig. 3A shows a lane line image, and fig. 3B shows the detection result after adding the key point vector constraint. In the initial stage of training the lane line key points deviate widely, and training adjusts the relative positions of the key points on different lanes toward a spatially reasonable distribution. Meanwhile, the lane line key point constraint proposed by the embodiment of the invention improves the continuity of a single lane line. When testing with the above constraints on the original forward-looking lane line image shown in fig. 4A, the shape of the continuous lane is effectively detected, as shown in fig. 4B.
S23: carrying out perspective transformation processing on the forward lane weight graph according to the following formulas (3) to (5) to obtain an overhead lane weight graph;
[x, y, w]ᵀ = A·[u, v, 1]ᵀ, A = (a_ij), a_33 = 1 (3)
x' = x/w = (a_11·u + a_12·v + a_13)/(a_31·u + a_32·v + 1) (4)
y' = y/w = (a_21·u + a_22·v + a_23)/(a_31·u + a_32·v + 1) (5)
where (u, v) are the pixel coordinates of the forward lane weight map, (x', y') are the pixel coordinates of the perspective-transformed top-view lane weight map, and a_ij are the matrix coefficients;
To solve for all eight normalized parameters of the transformation matrix, four point pairs are required, each pair providing two independent equations. The coefficients a_ij are determined from the camera's mounting position on the vehicle and the camera parameters, and a suitable region is selected for mapping to obtain the perspective-transformed top-view lane weight map.
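The eight-parameter solve from four point pairs can be sketched with the standard linear system, fixing a_33 = 1; the function names are illustrative:

```python
import numpy as np

def solve_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Solve the eight perspective parameters from four point pairs.

    src, dst: 4 x 2 arrays of (u, v) and (x', y') correspondences.
    Each pair contributes two linear equations; a_33 is fixed to 1.
    """
    A, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        A.append([u, v, 1.0, 0.0, 0.0, 0.0, -u * x, -v * x])
        b.append(x)
        A.append([0.0, 0.0, 0.0, u, v, 1.0, -u * y, -v * y])
        b.append(y)
    params = np.linalg.solve(np.array(A), np.array(b))
    return np.append(params, 1.0).reshape(3, 3)

def warp_point(H: np.ndarray, u: float, v: float):
    """Map a forward-view pixel (u, v) to top-view coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```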
Real-world lanes are parallel to each other, but the forward-looking lane line image in fig. 5A does not show parallel lane lines; fig. 5B shows the lane data annotation, and fig. 5C shows the parallel lanes from the bird's-eye view after the perspective transformation of this step.
S24: constructing a parallel structure constraint loss function based on the overlooking lane weight graph according to the following formula (6):
L_f = L_k + α·L_p (6)
where L_k is the loss function after introducing the key point constraint; L_p denotes the overlapping area of two lane lines after the different lane curves are translated to the same coordinate origin; α is a coefficient; L_f denotes the final loss function after introducing the parallel structure constraint.
The embodiment of the invention defines the lane lines by quadratic equations; since the curvatures of the lanes are close to each other, mutually parallel lane lines appear in the image as a series of curves sharing the same quadratic coefficient. The gray area in fig. 6 is L_p, the area where two lane lines overlap after the different lane curves have been translated to the same coordinate origin. By training to minimize this area, the shape differences between lane lines are progressively eliminated, so that the final lane lines satisfy the mutually parallel and continuous structure prior.
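A numeric sketch of L_p under the stated quadratic model x = a·y² + b·y + c, assuming that "translating to the same coordinate origin" means dropping the constant term c (an assumption; the source does not spell this out):

```python
import numpy as np

def parallel_area(coef1, coef2, ys):
    """Gap area L_p between two quadratic lane curves after translating
    both to the same coordinate origin.

    coef1, coef2: (a, b, c) coefficients of x = a*y**2 + b*y + c.
    ys: sampled row coordinates. Translation is taken to mean dropping
    the constant term c (an assumption made for this sketch).
    """
    a1, b1, _ = coef1
    a2, b2, _ = coef2
    ys = np.asarray(ys, dtype=np.float64)
    gaps = np.abs((a1 - a2) * ys ** 2 + (b1 - b2) * ys)  # per-row gap
    # Trapezoidal integration of the gap over the image rows.
    return float(np.sum((gaps[:-1] + gaps[1:]) * np.diff(ys)) / 2.0)
```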
In one embodiment, the step S3: connecting the key point coordinates with the overlooking lane weight graph to obtain a ternary image; each pixel point of the ternary image comprises a position coordinate and a corresponding weight.
Each key point coordinate is connected with the obtained top-view lane weight map to obtain a three-channel image of the same size as the original image, where each position (x, y, w) contains the position coordinates of the pixel and its corresponding weight: x and y are the pixel coordinates and w is the weight of the pixel.
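The ternary image of step S3 can be sketched as a stack of the x-coordinate, y-coordinate, and weight channels (the function name is illustrative):

```python
import numpy as np

def ternary_image(weight_map: np.ndarray) -> np.ndarray:
    """Build the three-channel (x, y, w) image of step S3.

    weight_map: H x W top-view lane weights. Returns H x W x 3, where
    the channels hold the x coordinate, the y coordinate, and the
    weight of each pixel.
    """
    h, w = weight_map.shape
    ys, xs = np.mgrid[0:h, 0:w]  # row and column index grids
    return np.stack([xs, ys, weight_map], axis=-1).astype(np.float64)
```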
In one embodiment, the step S4: performing least-squares fitting on the ternary image, and obtaining the lane curve parameters according to the predefined number of lane lines, where the least-squares fitting on the ternary image specifically comprises the following steps:
constructing a linear equation system for lane curve fitting, as shown in equation (7):
Xβ=Y (7)
where X ∈ R^(m×n), β ∈ R^(n×1), Y ∈ R^(m×1), equation (8) can be derived:
[1, x_1, …, x_1^(n−1); 1, x_2, …, x_2^(n−1); …; 1, x_m, …, x_m^(n−1)]·[β_0, β_1, …, β_(n−1)]ᵀ = [y_1, y_2, …, y_m]ᵀ (8)
where m > n, n−1 is the maximum degree of the curve to be fitted, and m is the number of point pairs (x_i, y_i) used for curve fitting, i.e., the number of equations; β contains the coefficient values of the parameter terms of each order to be solved;
an approximate solution β is determined according to equation (9):
β̂ = arg min_β ‖Xβ − Y‖₂² (9)
solving the pseudo-inverse of X according to equation (10):
β = (XᵀX)⁻¹XᵀY (10)
equation (10) can be viewed as a weighted least-squares fit whose weight matrix is the identity;
when a diagonal weighting matrix W is considered, multiplying both sides of equation (7) by W yields equation (11);
WXβ=WY (11)
where W = diag(w_1, w_2, …, w_m), and w_i is the weight of the i-th point;
letting X' = WX and Y' = WY simplifies equation (11) to equation (12):
X'β=Y' (12)
and finally outputting the lane curve parameters through the steps.
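Equations (7)–(12) amount to a weighted least-squares fit per lane line. A minimal NumPy sketch, assuming lane curves are modeled as x = f(y) with the weights taken from the ternary image (the function name is illustrative):

```python
import numpy as np

def fit_lane_curve(xs, ys, ws, degree=2):
    """Weighted least-squares fit of one lane curve (equations (7)-(12)).

    xs, ys: pixel coordinates of one lane's points; ws: their weights
    from the ternary image. The curve is modeled as x = f(y), so X is
    the Vandermonde matrix of ys. Returns the coefficient vector beta.
    """
    Y = np.asarray(xs, dtype=np.float64)
    X = np.vander(np.asarray(ys, dtype=np.float64), degree + 1,
                  increasing=True)
    W = np.diag(np.asarray(ws, dtype=np.float64))
    Xp, Yp = W @ X, W @ Y  # X' = WX, Y' = WY as in equation (12)
    beta, *_ = np.linalg.lstsq(Xp, Yp, rcond=None)
    return beta
```

With unit weights this reduces to an ordinary polynomial fit; nonuniform weights let high-confidence pixels dominate the curve parameters.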
The invention provides a lane line detection method based on lane structure prior, which introduces a vector loss constraint based on lane key points, solves the difficulty of imposing structural constraints in segmentation-based methods, and improves lane detection accuracy in occluded and strongly illuminated scenes. The method also introduces a loss constraint based on the parallel lane line structure, which improves lane line inference accuracy and the consistency of the lane parameters.
Example two
As shown in fig. 7, an embodiment of the present invention provides a lane line detection system based on lane structure prior, including the following modules:
the lane line image acquisition module is used for carrying out data normalization processing according to the forward-looking traffic image obtained by the vehicle-mounted forward-looking camera to obtain a lane line image;
the lane weight map acquisition module is used for performing feature extraction on the lane line image through an encoding-decoding network to obtain a lane line feature map; up-sampling the lane line feature map to obtain a forward lane weight map of the same size as the lane line image, and computing the expected value of the weights to obtain the row-by-row key point coordinates of each lane line; according to the key point coordinates, obtaining a top-view lane weight map by using a lane line key point constraint loss function and a perspective transformation, and constructing a parallel structure constraint loss function so that the final lane lines satisfy the mutually parallel and continuous structure prior;
the ternary image obtaining module is used for connecting the key point coordinates with the overlook lane weight graph to obtain a ternary image; each pixel point of the ternary image comprises a position coordinate and a corresponding weight;
and the lane curve parameter obtaining module is used for performing least-squares fitting on the ternary image and obtaining the lane curve parameters according to the predefined number of lane lines.
The above examples are provided only for the purpose of describing the present invention, and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalent substitutions and modifications can be made without departing from the spirit and principles of the invention, and are intended to be within the scope of the invention.

Claims (3)

1. A lane line detection method based on lane structure prior is characterized by comprising the following steps:
step S1: according to a forward-looking traffic image obtained by the vehicle-mounted forward-looking camera, carrying out data normalization processing to obtain a lane line image;
step S2: carrying out feature extraction on the lane line image through a coding-decoding network to obtain a lane line feature map; up-sampling the lane line characteristic graph to obtain a forward lane weight value graph with the same size as the lane line image, and calculating expected values of weights to obtain line-by-line key point coordinates of each lane line; respectively constructing a vector set of each lane line by taking the key points as starting and stopping points according to the coordinates of the key points, and constructing a lane line key point constraint loss function based on the structural constraint of the vector set; carrying out perspective transformation on the forward lane weight graph to obtain an overlooking lane weight graph; based on the overlooking lane weight graph, constructing a parallel structure constraint loss function according to the key point constraint loss function so as to enable the final lane line to meet the condition of parallel and continuous structure prior, and the method specifically comprises the following steps:
s21: carrying out feature extraction on the lane line image through a coding-decoding network to obtain a lane line feature map; up-sampling the lane line feature map to obtain a forward lane weight map with the same size as the lane line image;
s22: constructing a constraint loss function of the key points of the lane line according to the following formulas (1) to (2), calculating position expectation of a forward weight value graph in a preset row to obtain a key point set distributed on a specific row, and constructing a vector set V by taking any two points in the key point set as a starting point and an end point of a vector;
L_k = L_o + β·L_s (1)
L_s: structural constraint over vector pairs a, b ∈ V (2) [the expression is rendered only as an image in the source]
where a, b ∈ V, and V is the set of all vectors whose start and end points are the key points; β is a coefficient; L_o is the original lane line loss function, L_s is the newly introduced structural constraint, and L_k is the newly constructed loss function;
s23: performing perspective transformation processing on the forward lane weight graph according to the following formulas (3) to (5) to obtain an overhead lane weight graph;
[x, y, w]ᵀ = A·[u, v, 1]ᵀ, A = (a_ij), a_33 = 1 (3)
x' = x/w = (a_11·u + a_12·v + a_13)/(a_31·u + a_32·v + 1) (4)
y' = y/w = (a_21·u + a_22·v + a_23)/(a_31·u + a_32·v + 1) (5)
where (u, v) are the pixel coordinates of the forward lane weight map, (x', y') are the pixel coordinates of the perspective-transformed top-view lane weight map, and a_ij are the matrix coefficients;
s24: constructing a parallel structure constraint loss function based on the overlooking lane weight graph according to the following formula (6):
L_f = L_k + α·L_p (6)
where L_k is the loss function after introducing the key point constraint; L_p denotes the overlapping area of two lane lines after the different lane curves are translated to the same coordinate origin; α is a coefficient; L_f denotes the final loss function after introducing the parallel structure constraint;
step S3: connecting the key point coordinates with the overhead lane weight graph to obtain a ternary image; each pixel point of the ternary image comprises a position coordinate and a corresponding weight;
step S4: performing least-squares fitting on the ternary image, and obtaining the lane curve parameters according to the predefined number of lane lines.
2. The method for detecting a lane line based on lane structure prior according to claim 1, wherein the step S4 of performing least-squares fitting on the ternary image specifically includes:
constructing a linear equation system for lane curve fitting, as shown in equation (7):
Xβ=Y (7)
where X ∈ R^(m×n), β ∈ R^(n×1), Y ∈ R^(m×1), equation (8) can be derived:
[1, x_1, …, x_1^(n−1); 1, x_2, …, x_2^(n−1); …; 1, x_m, …, x_m^(n−1)]·[β_0, β_1, …, β_(n−1)]ᵀ = [y_1, y_2, …, y_m]ᵀ (8)
where m > n, n−1 is the maximum degree of the curve to be fitted, and m is the number of point pairs (x_i, y_i) used for curve fitting, i.e., the number of equations; β contains the coefficient values of the parameter terms of each order to be solved;
an approximate solution β is determined according to equation (9):
β̂ = arg min_β ‖Xβ − Y‖₂² (9)
solving for the pseudo-inverse of X according to equation (10):
β = (XᵀX)⁻¹XᵀY (10)
equation (10) can be viewed as a weighted least-squares fit whose weight matrix is the identity;
when a diagonal weighting matrix W is considered, multiplying both sides of equation (7) by W yields equation (11);
WXβ=WY (11)
where W = diag(w_1, w_2, …, w_m), and w_i is the weight of the i-th point;
letting X' = WX and Y' = WY simplifies equation (11) to equation (12):
X′β=Y′ (12)。
3. a lane line detection system based on lane structure prior is characterized by comprising the following modules:
the lane line image acquisition module is used for carrying out data normalization processing according to a forward-looking traffic image obtained by the vehicle-mounted forward-looking camera to obtain a lane line image;
the lane weight value graph obtaining module is used for extracting the features of the lane line image through a coding-decoding network to obtain a lane line feature graph; up-sampling the lane line characteristic graph to obtain a forward lane weight value graph with the same size as the lane line image, and calculating expected values of weights to obtain line-by-line key point coordinates of each lane line; respectively constructing a vector set of each lane line by taking the key points as starting and stopping points according to the coordinates of the key points, and constructing a lane line key point constraint loss function based on the structural constraint of the vector set; carrying out perspective transformation on the forward lane weight graph to obtain an overlooking lane weight graph; based on the overlooking lane weight graph, constructing a parallel structure constraint loss function according to the key point constraint loss function so as to enable the final lane line to meet the condition of parallel and continuous structure prior, and specifically comprises the following steps:
s21: extracting features from the lane line image through the encoder-decoder network to obtain a lane line feature map; up-sampling the lane line feature map to obtain a forward lane weight map of the same size as the lane line image;
s22: constructing a lane line key point constraint loss function according to the following equations (1) to (2): computing the positional expectation of the forward weight map over preset rows to obtain a set of key points distributed on those rows, and constructing a vector set V by taking any two points in the key point set as the start point and end point of a vector;
L k =L o +βL s (1)
[equation (2), defining the structural constraint L_s, is given as a formula image in the original claim]
wherein a, b ∈ V, V being the set of all vectors taking the key points as start and end points; β is a coefficient; L_o is the original lane line loss function; L_s is the newly introduced structural constraint; and L_k is the newly constructed loss function;
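The positional-expectation step of s22 and the construction of the vector set V can be sketched as follows; this is a minimal numpy illustration in which the function names, the single-lane weight map, and the use of unordered point pairs are assumptions (the exact form of equation (2) is not reproduced in the text):

```python
import numpy as np
from itertools import combinations

def row_keypoints(weight_map, rows):
    """Row-wise positional expectation of a (H, W) lane weight map.

    Each preset row's weights are normalized to a distribution and the
    expected column index is taken as that row's key point x-coordinate.
    """
    cols = np.arange(weight_map.shape[1])
    pts = []
    for r in rows:
        p = weight_map[r] / weight_map[r].sum()  # normalize to a distribution
        pts.append((p * cols).sum())             # expected column index
    return np.array(pts)

def vector_set(keypoints_xy):
    """Vectors between every pair of key points (the set V of equation (2))."""
    return [np.asarray(q) - np.asarray(p)
            for p, q in combinations(keypoints_xy, 2)]

# Example: one-hot rows peaking at columns 1, 2, 3
wm = np.zeros((3, 5))
wm[0, 1] = wm[1, 2] = wm[2, 3] = 1.0
pts = row_keypoints(wm, [0, 1, 2])
vecs = vector_set([(1.0, 0.0), (2.0, 1.0), (3.0, 2.0)])
```

A structural constraint L_s would then penalize disagreement among the vectors in V (e.g. deviation from collinearity), pushing the predicted key points toward a smooth, continuous lane.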
s23: applying a perspective transformation to the forward lane weight map according to the following equations (3) to (5) to obtain a top-down lane weight map;
[x, y, w]^T = A·[u, v, 1]^T, A = [[a_11, a_12, a_13], [a_21, a_22, a_23], [a_31, a_32, a_33]] (3)
x′ = x/w = (a_11·u + a_12·v + a_13)/(a_31·u + a_32·v + a_33) (4)
y′ = y/w = (a_21·u + a_22·v + a_23)/(a_31·u + a_32·v + a_33) (5)
wherein (u, v) are the pixel coordinates of the forward lane weight map, (x′, y′) are the pixel coordinates of the top-down lane weight map after the perspective transformation, and a_ij are the transformation matrix coefficients;
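The perspective transformation of s23 amounts to applying a 3×3 homography to each pixel. A minimal sketch, assuming the convention [x, y, w]^T = A·[u, v, 1]^T with x′ = x/w and y′ = y/w (the function name and the identity-matrix example are illustrative):

```python
import numpy as np

def perspective_transform(u, v, A):
    """Map a forward-view pixel (u, v) to top-down (x', y') per eqs. (3)-(5).

    A is the 3x3 matrix of coefficients a_ij; dividing by the homogeneous
    coordinate w performs the perspective division of equations (4)-(5).
    """
    x, y, w = A @ np.array([u, v, 1.0])
    return x / w, y / w

# Example: the identity homography leaves pixel coordinates unchanged
xp, yp = perspective_transform(10.0, 20.0, np.eye(3))
```

In practice the coefficients a_ij would be calibrated once from known ground-plane correspondences of the vehicle camera, then reused for every frame.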
s24: constructing a parallel structure constraint loss function based on the top-down lane weight map according to the following equation (6):
L f =L k +αL p (6)
wherein L_k is the loss function after the key point constraint is introduced; L_p represents the overlapping area of two lane lines after the different lane curves are translated to a common coordinate origin; α is a coefficient; and L_f is the final loss function after the parallel structure constraint is introduced;
the ternary image obtaining module is used for combining the key point coordinates with the top-down lane weight map to obtain a ternary image, each pixel of which comprises a position coordinate and a corresponding weight;
and the lane line curve parameter obtaining module is used for performing least-squares fitting on the ternary image and obtaining the curve parameters of each lane line according to the predefined number of lane lines.
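The final fitting step can be sketched with numpy's weighted polynomial fit; the quadratic lane model x = f(y), the function name, and the sample points are assumptions for illustration:

```python
import numpy as np

def fit_lane(ternary, degree=2):
    """Weighted least-squares curve fit of one lane's ternary-image points.

    ternary: iterable of (x, y, weight) triples. Fits x = f(y) with a
    polynomial of the given degree; np.polyfit's `w` argument applies the
    per-point weights carried by the ternary image.
    """
    xs, ys, ws = (np.array(c, dtype=float) for c in zip(*ternary))
    return np.polyfit(ys, xs, deg=degree, w=ws)

# Example: points on x = 0.5*y^2 + y + 3 with uniform weights
ys = [0.0, 1.0, 2.0, 3.0]
pts = [(0.5 * y**2 + y + 3, y, 1.0) for y in ys]
coeffs = fit_lane(pts)
```

Fitting x as a function of y (rather than y of x) keeps the model single-valued for near-vertical lanes in the top-down view; the fit is repeated once per lane up to the predefined number of lanes.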
CN202110654503.0A 2021-06-11 2021-06-11 Lane line detection method and system based on lane structure prior Active CN113313047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110654503.0A CN113313047B (en) 2021-06-11 2021-06-11 Lane line detection method and system based on lane structure prior


Publications (2)

Publication Number Publication Date
CN113313047A CN113313047A (en) 2021-08-27
CN113313047B 2022-09-06

Family

ID=77378531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110654503.0A Active CN113313047B (en) 2021-06-11 2021-06-11 Lane line detection method and system based on lane structure prior

Country Status (1)

Country Link
CN (1) CN113313047B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763483B (en) * 2021-09-10 2024-04-02 智道网联科技(北京)有限公司 Method and device for calibrating pitch angle of automobile data recorder
CN114677442B (en) * 2022-05-26 2022-10-28 之江实验室 Lane line detection system, device and method based on sequence prediction
CN115376082B (en) * 2022-08-02 2023-06-09 北京理工大学 Lane line detection method integrating traditional feature extraction and deep neural network
CN115376091A (en) * 2022-10-21 2022-11-22 松立控股集团股份有限公司 Lane line detection method assisted by image segmentation

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016463A (en) * 2020-08-28 2020-12-01 佛山市南海区广工大数控装备协同创新研究院 Deep learning-based lane line detection method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470159B (en) * 2018-03-09 2019-12-20 腾讯科技(深圳)有限公司 Lane line data processing method and device, computer device and storage medium
CN111566441B (en) * 2018-04-18 2022-08-09 移动眼视力科技有限公司 Vehicle environment modeling with camera

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016463A (en) * 2020-08-28 2020-12-01 佛山市南海区广工大数控装备协同创新研究院 Deep learning-based lane line detection method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A Novel Structure Prior-based Loss Function for Lane Detection; Binhui Liu et al.; 2020 39th Chinese Control Conference (CCC); 2020-09-09; pp. 5649-5653 *
End-to-end Lane Detection through Differentiable Least-Squares Fitting; Wouter Van Gansbeke et al.; arXiv [cs.CV]; 2019-09-05; pp. 1-9 *
Ultra Fast Structure-aware Deep Lane Detection; Zequn Qin et al.; arXiv [cs.CV]; 2020-08-05; pp. 1-16 *
Intelligent lane line detection under road structure features; Zhang Xiang et al.; Journal of Image and Graphics; 2021; vol. 26, no. 01; pp. 123-134 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant