CN115937825A - Robust lane line generation method and device under BEV (bird's-eye view) with on-line pitch angle estimation - Google Patents

Robust lane line generation method and device under BEV (bird's-eye view) with on-line pitch angle estimation

Info

Publication number
CN115937825A
CN115937825A
Authority
CN
China
Prior art keywords
lane line
bev
pitch angle
information
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310016576.6A
Other languages
Chinese (zh)
Other versions
CN115937825B (en)
Inventor
Gao Haiming
Hua Wei
Qiu Qibo
Zhang Qian
Zhang Shun
Jiang Feng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202310016576.6A priority Critical patent/CN115937825B/en
Publication of CN115937825A publication Critical patent/CN115937825A/en
Application granted granted Critical
Publication of CN115937825B publication Critical patent/CN115937825B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a robust lane line generation method and device under the bird's-eye view (BEV) with on-line pitch angle estimation. Pixel-level dense segmentation of a forward-looking monocular image is completed based on lane line information and other information, yielding the corresponding image mask information; a plurality of groups of lane lines satisfying the parallel relation are extracted on the two-dimensional image plane according to the image mask information; an extrinsic parameter matrix is constructed from the unknown pitch angle, the end points of the parallel lane lines on the image plane are back-projected to the bird's-eye view BEV, and a cost function of the unknown pitch angle is constructed according to the lane-line parallelism prior information; given resolution and size information, a grid region of interest is constructed under the bird's-eye view BEV, the solved pitch angle is substituted in, and the lane lines under the bird's-eye view BEV are generated by combining the image mask information, thereby detecting lane lines more effectively.

Description

Robust lane line generation method and device under BEV (bird's-eye view) with on-line pitch angle estimation
Technical Field
The invention relates to the field of environment perception for ground unmanned vehicles, and in particular to a robust lane line generation method and device under the BEV with on-line pitch angle estimation.
Background
Lane line detection is one of the most important perception tasks for unmanned driving and advanced driver-assistance systems, providing essential information for the real-time robust localization and motion planning of unmanned vehicles. Early traditional lane line detection methods were mostly designed around hand-crafted features. Because the task depends on both texture features and high-level semantic analysis, it benefits greatly from the strong representation capability of deep learning models, and with the rapid development of deep learning, lane line detection has entered a new era of higher robustness and stronger generalization. Among these approaches, semantic-segmentation-based methods, as one of the most typical in the lane detection field, have attracted wide attention from researchers aiming at an effective lane line detection task.
In practical applications, lane line information extracted in image space cannot be used directly for robust localization and motion planning. A bird's-eye-view (BEV) image is therefore usually generated with inverse perspective mapping (IPM) to eliminate the perspective effect and provide more useful perception information to downstream algorithms. Effective IPM, however, relies on precise camera intrinsic parameters, on the extrinsic parameters between the camera and the vehicle, and on the assumption of a rigid-body relationship between the camera and the ground. When the unmanned vehicle platform undergoes severe motion changes, the generated bird's-eye view BEV image information is consequently distorted.
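As a concrete illustration of the IPM step described above, the following is a minimal sketch, not the patent's implementation: for a pinhole camera with intrinsic matrix K and extrinsics R, t to the ground plane, the ground-to-image homography is H = K [r1 r2 t], and warping with its inverse yields a BEV-style view. All numeric values, the file name, and the frame convention are illustrative assumptions.

```python
# Minimal IPM sketch, assuming a pinhole camera; K, R, t and the file name
# are placeholder assumptions, not values from the patent.
import cv2
import numpy as np

def ipm_homography(K, R, t):
    # For a ground point (X, Y, 0): s*[u, v, 1]^T = K*[r1 r2 t]*[X, Y, 1]^T,
    # so the ground-to-image homography is H = K*[r1 r2 t].
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])                               # placeholder intrinsics
R, _ = cv2.Rodrigues(np.array([np.deg2rad(95.0), 0.0, 0.0]))  # camera tilted toward road
t = np.array([0.0, -1.5, 0.0])                                # placeholder camera offset (m)

H = ipm_homography(K, R, t)
img = cv2.imread("front_view.png")                            # hypothetical input frame
# Warping with the inverse homography removes the perspective effect; the
# scaling/offset needed to map metres to output pixels is omitted here.
bev = cv2.warpPerspective(img, np.linalg.inv(H), (500, 800))
```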
Disclosure of Invention
In order to overcome the defects of existing lane line detection and to robustly extract lane line information under the BEV, the invention adopts the following technical scheme:
a robust lane line generation method under BEV of on-line pitch angle estimation comprises the following steps:
step S1: based on the lane line information and other information, completing pixel-level dense segmentation on the forward-looking monocular image to obtain corresponding image mask information;
step S2: extracting a plurality of groups of lane lines meeting the parallel relation on a two-dimensional image plane according to the image mask information;
and step S3: constructing an extrinsic parameter matrix from the unknown pitch angle, back-projecting the end points of the parallel lane lines on the image plane to the bird's-eye view BEV, and constructing a cost function of the unknown pitch angle according to the lane-line parallelism prior information;
and step S4: given resolution and size information, constructing a grid region of interest under the bird's-eye view BEV, substituting the solved pitch angle, and generating the lane lines under the bird's-eye view BEV by combining the image mask information.
Further, the step S1 includes the steps of:
step S1.1: reasoning to obtain corresponding lane line class mask information according to the original image information to complete pixel-level segmentation;
step S1.2: and according to the distortion parameters obtained by calibrating the monocular camera, carrying out distortion removal on the image to obtain a new camera internal parameter matrix.
Further, in step S1.1, a deep learning framework based on a pure-vision Transformer is used, and on-line inference yields the segmentation result of the forward-looking monocular image; the classes of the dense pixel-level segmentation include: lane boundary lines, lane center lines, stop lines, and the like.
Further, in step S1.2, considering the influence of monocular image distortion on subsequent BEV lane line generation, the monocular camera is first calibrated with a checkerboard before this step to obtain the monocular camera distortion parameters (k1, k2, p1, p2), where k1 and k2 denote the radial distortion parameters and p1 and p2 denote the tangential distortion parameters; the obtained class mask information is undistorted by combining the camera distortion parameters, and a new camera intrinsic matrix is obtained.
Further, the step S2 includes the steps of:
step S2.1: converting the image mask information to obtain binary image information, and performing image thinning processing;
step S2.2: based on the image skeletonization result, completing lane line object extraction by region growing, counting the number of pixels of each object to remove lane line targets below a given count threshold, and removing image burrs; this discards redundant lane line noise to improve subsequent processing efficiency while overcoming the burrs introduced by the image thinning operation;
step S2.3: since the subsequent straight-line segment feature extraction requires the pixel-level path of each lane line, obtaining the pixel-level path information of the lane line by a shortest path method given the end points on its two sides;
step S2.4: partitioning the lane line with a straight-line feature extraction algorithm, representing it as a broken line segment, retaining the partial segments longer than a given length threshold, and returning a plurality of groups of straight line segments satisfying the parallel relation.
Further, in step S2.1, the lane line category mask information is traversed, and the center lane lines and boundary lane lines that carry the current road direction information are taken as the foreground of the binarized image; considering the noise and the large number of foreground pixels produced by dense segmentation, the image is thinned so that the lane line direction information is retained while the number of effective pixels is further reduced.
Further, in step S2.4, the pitch angle from the previous time step is used in judging the parallel relation of the current lane lines, finally obtaining a plurality of groups of straight line segments satisfying the parallel relation.
Further, the step S3 includes the steps of:
step S3.1: constructing an extrinsic parameter matrix relative to the ground from the unknown pitch angle, respectively projecting the two end points of the parallel lane lines to the bird's-eye view BEV through perspective projection transformation, and acquiring the corresponding point information in Euclidean space;
step S3.2: constructing, from the point information, the vector information of the corresponding lane lines in bird's-eye view BEV space; taking the cross product of the two vectors as reference information for quantitatively evaluating the parallel relation yields the lane-line parallelism constraint; combining the parallelism constraints, a cost function of the unknown pitch angle is constructed, and the unknown pitch angle is solved by minimizing the cost function.
Further, the step S4 includes the steps of:
step S4.1: given the offset distance, resolution and size information under camera coordinates, constructing a grid region of interest under the BEV for subsequent BEV lane line generation;
step S4.2: according to the solved pitch angle, projecting the center of each grid cell of the region of interest to the image plane, and assigning the cell's lane line semantic category by combining the image mask information, thereby generating the lane line information under the bird's-eye view BEV.
A robust lane line generation device under the BEV with on-line pitch angle estimation comprises a memory and one or more processors; executable code is stored in the memory, and when the one or more processors execute the executable code, they implement the above robust lane line generation method under the BEV with on-line pitch angle estimation.
The invention has the advantages and beneficial effects that:
the invention discloses a robust lane line generation method and device under BEV (beam-based attitude and heading) for on-line pitch angle estimation, and aims to more effectively acquire lane line information under BEV by on-line estimation of pitch angle information relative to the ground. And constructing a cost function by combining lane line parallel constraint to solve unknown pitch angle, and estimating the pitch angle on line to realize robust and reliable generation of the lane line under the BEV, so as to achieve the purpose of robustly detecting lane line information under the BEV.
Drawings
FIG. 1 is a flow chart of a method in an embodiment of the invention.
FIG. 2 is a diagram of the architecture of the method of an embodiment of the present invention.
Fig. 3 is a schematic diagram of a lane line broken line segment representation in the embodiment of the invention.
Fig. 4a is a diagram of the effect of robust lane line generation under BEV without pitch angle estimation in the embodiment of the present invention.
Fig. 4b is a diagram of the effect of robust lane line generation under BEV for pitch angle estimation in the embodiment of the present invention.
Fig. 5 is a diagram illustrating an effect of road information generated by multi-frame probability accumulation according to an embodiment of the present invention.
FIG. 6 is a schematic diagram of the structure of the device in the embodiment of the present invention.
Detailed Description
The following describes in detail embodiments of the present invention with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
The robust lane line generation method, oriented to unmanned driving and advanced driver-assistance systems, first performs pixel-level dense segmentation on the forward-looking monocular image and infers the corresponding lane line class mask information; it then represents the lane lines as broken line segments on the image plane to obtain a plurality of groups of straight line segments satisfying the parallel relation; finally, it constructs a cost function of the unknown pitch angle by combining the lane-line parallelism constraint and solves for the pitch angle. Addressing the fact that little existing work in the unmanned-vehicle field copes with pitch angle changes when generating robust lane lines under the BEV, the invention provides a solution that is reliable in performance and easy to develop.
As shown in fig. 1 and fig. 2, a robust lane line generation method under the BEV with on-line pitch angle estimation includes the following steps:
step S1: based on lane line information and other information, completing pixel-level dense segmentation on the forward-looking monocular image to obtain corresponding image mask information, and specifically comprising the following substeps:
step S1.1: reasoning to obtain corresponding lane line class mask information according to the original image information to complete pixel-level segmentation;
in the implementation process of the invention, a deep learning frame based on pure vision transducer is utilized to obtain the segmentation result of a forward-looking monocular image through online reasoning, and the classification of dense-level pixel segmentation comprises four types of information, namely lane boundary lines, lane central lines, stop lines and other types.
Step S1.2: according to distortion parameters obtained by calibrating the monocular camera, carrying out distortion removal on the image to obtain a new camera internal parameter matrix;
considering the influence of monocular image distortion on the generation of subsequent BEV lane lines, before the implementation process of the sub-step, firstly, calibrating a monocular camera by using a checkerboard to obtain a monocular camera distortion parameter:
Figure 515386DEST_PATH_IMAGE001
wherein
Figure 441754DEST_PATH_IMAGE002
Is a parameter of the radial distortion that is,
Figure 990547DEST_PATH_IMAGE003
is a tangential distortion parameter; and carrying out distortion removal on the obtained class mask information by combining with the camera distortion parameters, and obtaining a new camera internal parameter matrix.
Step S2: according to the image mask information, extracting a plurality of groups of lane lines meeting the parallel relation on a two-dimensional image plane, and specifically comprising the following substeps:
step S2.1: converting the image mask information to obtain binary image information, and performing image thinning processing;
in the implementation case of the invention, a binary image needs to be generated, specifically, a central lane line and a boundary lane line representing the current road direction information are used as the foreground part of the binary image through the lane line category mask information obtained by traversing the previous step; if the corresponding pixels are the boundary line and the center line which can express the lane direction, the value is assigned to be 1, otherwise, the value is assigned to be 0. Considering the noise existing in dense segmentation of the image and a large amount of effective pixel information, the effective pixel points are further reduced by retaining the lane line direction information through image thinning processing.
Step S2.2: based on the image skeletonization result, lane line object extraction is completed by region growing; the number of pixels of each object is counted to remove lane line targets below a given threshold, and image burrs are removed at the same time;
in the practical application process, the redundant lane noise information needs to be removed to further improve the subsequent processing efficiency, and meanwhile, image burrs caused by image thinning operation also need to be overcome.
Step S2.3: giving end points on two sides of the lane line, and obtaining pixel-level path information of the lane line by using a shortest path method;
in the subsequent straight-line segment feature extraction process, pixel-level path information corresponding to the lane line needs to be acquired, and in the implementation process of the invention, the end points on two sides of the lane line are given
Figure 344168DEST_PATH_IMAGE004
Obtaining route end point by dijkstra shortest path algorithm
Figure DEST_PATH_IMAGE005
To the endpoint
Figure 939097DEST_PATH_IMAGE006
Shortest pixel level path information.
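A minimal sketch of this path recovery, under the assumption of an 8-connected skeleton with unit edge weights (for which breadth-first search coincides with Dijkstra):

```python
# Hedged sketch: shortest pixel path between two endpoints on a binary
# skeleton; BFS equals Dijkstra here since all edges have unit weight.
from collections import deque

def pixel_path(skel, start, goal):
    """Return the 8-connected shortest path of skeleton pixels, start..goal."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            break
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nb = (r + dr, c + dc)
                if (nb not in prev and 0 <= nb[0] < skel.shape[0]
                        and 0 <= nb[1] < skel.shape[1] and skel[nb]):
                    prev[nb] = (r, c)
                    queue.append(nb)
    if goal not in prev:
        return []                              # endpoints not connected
    path, node = [], goal
    while node is not None:                    # walk predecessors back to start
        path.append(node)
        node = prev[node]
    return path[::-1]
```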
Step S2.4: the lane line obtained above is partitioned with a straight-line feature extraction algorithm and represented as a broken line segment; the partial segments longer than a given length threshold are retained, and a plurality of groups of straight line segments satisfying the parallel relation are returned;
through the straight line segment feature extraction in the present sub-step, the corresponding lane line information can be converted into a broken line segment, as shown in fig. 3. In particular, in connection with current lane line end points
Figure 669156DEST_PATH_IMAGE004
And finally representing the lane line by a plurality of iterations by utilizing a classical straight line segment feature extraction method, namely segmentation and merging (Split-and-Merge). In practical application, the pitch angle at the previous moment is applied to judgment of the parallel relation of the current lane lines, and finally a plurality of groups of straight line sections meeting the parallel relation are obtained.
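A minimal sketch of the split phase of Split-and-Merge on an ordered pixel path (the merge pass over adjacent near-collinear segments is omitted for brevity; the tolerance value is an assumption):

```python
# Hedged Split-and-Merge sketch (split phase only): recursively split the
# ordered pixel path at the point farthest from the chord until every
# segment deviates less than tol pixels.
import numpy as np

def split(points, tol=2.0):
    """points: (N, 2) ordered path; returns indices of polyline vertices."""
    p0, p1 = points[0].astype(float), points[-1].astype(float)
    chord = p1 - p0
    norm = np.linalg.norm(chord)
    if len(points) < 3 or norm < 1e-9:
        return [0, len(points) - 1]
    # Perpendicular distance of each point to the chord (2D cross product).
    d = np.abs(chord[0] * (points[:, 1] - p0[1])
               - chord[1] * (points[:, 0] - p0[0])) / norm
    i = int(np.argmax(d))
    if d[i] <= tol:
        return [0, len(points) - 1]            # segment is straight enough
    left = split(points[: i + 1], tol)
    right = split(points[i:], tol)
    return left + [i + j for j in right[1:]]   # join index lists at i
```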
And step S3: an extrinsic parameter matrix is constructed from the unknown pitch angle θ; the end points of the parallel lane lines on the image plane are back-projected to the bird's-eye view (BEV), and a cost function E(θ) of the unknown pitch angle θ is constructed according to the lane-line parallelism prior information. This step specifically comprises the following sub-steps:
step S3.1: according to unknown pitch angle
Figure 679203DEST_PATH_IMAGE007
Constructing an external parameter matrix relative to the ground, respectively projecting two end points of the parallel lane lines to a bird's-eye view image BEV through perspective projection transformation, and acquiring corresponding point information in European space;
according to unknown pitch angle
Figure 150635DEST_PATH_IMAGE007
An external reference matrix relative to the ground can be constructed
Figure DEST_PATH_IMAGE009
And respectively projecting two end points of the lane lines with a parallel relation to the BEV through perspective projection transformation to obtain corresponding point information in the Euclidean space.
Step S3.2: from the point information obtained in the previous sub-step, the vector information of each corresponding lane line in bird's-eye view BEV space is constructed, and the cross product of the two vectors is taken as reference information for quantitatively evaluating the parallel relation, yielding the lane-line parallelism constraint. Finally, a cost function E(θ) of the unknown pitch angle θ is constructed by combining the several groups of parallel lane lines, and the corresponding pitch angle information is obtained by minimizing the cost function.
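A minimal sketch of this on-line estimation, assuming a flat ground plane at camera height h, extrinsics that differ from nominal only by the pitch θ, and an OpenCV-style camera frame (x right, y down, z forward); K, h, the endpoint pixels, and the search bounds are all illustrative assumptions:

```python
# Hedged sketch of the pitch-angle cost: back-project endpoints to the
# ground for a candidate pitch, then penalize non-parallelism via the
# cross product of the paired lane-direction vectors.
import numpy as np
from scipy.optimize import minimize_scalar

K_inv = np.linalg.inv(np.array([[800.0, 0.0, 640.0],
                                [0.0, 800.0, 360.0],
                                [0.0, 0.0, 1.0]]))   # placeholder intrinsics
h = 1.5                                              # assumed camera height (m)

def ground_point(uv, theta):
    """Back-project pixel uv onto the ground plane for pitch theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, c, -s],
                  [0.0, s, c]])                      # pitch about the camera x-axis
    ray = R @ (K_inv @ np.array([uv[0], uv[1], 1.0]))
    return (h / ray[1]) * ray                        # scale the ray to hit y = h

def cost(theta, line_pairs):
    """E(theta): sum of squared cross products over paired parallel lines."""
    e = 0.0
    for (a0, a1), (b0, b1) in line_pairs:
        va = ground_point(a1, theta) - ground_point(a0, theta)
        vb = ground_point(b1, theta) - ground_point(b0, theta)
        cr = np.cross(va / np.linalg.norm(va), vb / np.linalg.norm(vb))
        e += float(cr @ cr)                          # zero iff the pair is parallel
    return e

pairs = [(((600, 400), (560, 700)), ((700, 400), (760, 700)))]  # placeholder pixels
res = minimize_scalar(cost, args=(pairs,), bounds=(-0.2, 0.2), method="bounded")
theta_hat = res.x                                    # solved pitch angle (rad)
```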
And step S4: given resolution and size information, a grid region of interest is constructed under the bird's-eye view BEV; the solved pitch angle is substituted in, and the lane lines under the bird's-eye view BEV are generated by combining the image mask information:
step S4.1: under camera coordinates, given offset distance, resolution and size information, constructing a grid interest region under the BEV;
given offset distance
Figure 911088DEST_PATH_IMAGE010
Resolution of the grid
Figure DEST_PATH_IMAGE011
And size information
Figure 574151DEST_PATH_IMAGE012
Constructing grid of image under current camera coordinate systemInteresting regions for subsequent BEV lane line generation.
Step S4.2: according to the solved pitch angle θ, the center of each grid cell of the region of interest is projected to the image plane, and the lane line semantic category of the cell is assigned by combining the image mask information, thereby generating the lane line information under the bird's-eye view BEV.
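A minimal sketch of this rasterization, continuing the assumptions of the previous sketch (flat ground at height h, OpenCV-style frame); the grid extents, resolution, and file name are illustrative:

```python
# Hedged sketch: build metric BEV cell centers, project them into the image
# with the solved pitch, and sample the class mask per cell.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])                      # placeholder intrinsics
h, theta = 1.5, 0.02                                 # camera height, solved pitch
r, x_half, z_near, z_far = 0.1, 10.0, 3.0, 50.0      # assumed resolution/extent (m)

xs = np.arange(-x_half, x_half, r) + r / 2
zs = np.arange(z_near, z_far, r) + r / 2
X, Z = np.meshgrid(xs, zs)
P = np.stack([X, np.full_like(X, h), Z], axis=-1)    # cell centers, y = h (down)

c, s = np.cos(theta), np.sin(theta)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, c, s],
              [0.0, -s, c]])                         # ground -> camera (pitch inverse)
uvw = P.reshape(-1, 3) @ R.T @ K.T                   # project all centers at once
uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)

mask = np.load("lane_mask.npy")                      # hypothetical undistorted mask
H_img, W_img = mask.shape
ok = ((uv[:, 0] >= 0) & (uv[:, 0] < W_img)
      & (uv[:, 1] >= 0) & (uv[:, 1] < H_img) & (uvw[:, 2] > 0))
bev = np.zeros(X.size, dtype=mask.dtype)
bev[ok] = mask[uv[ok, 1], uv[ok, 0]]                 # class per BEV grid cell
bev = bev.reshape(X.shape)
```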
according to the method, firstly, pixel-level dense segmentation is completed based on a pure vision transform deep learning frame, and mask information corresponding to a forward-looking monocular image is obtained; then extracting a plurality of groups of lane lines meeting the parallel relation on a two-dimensional image plane according to the image mask information; then, the unknown pitch angle is combined
Figure 568651DEST_PATH_IMAGE007
And constructing a cost function according to lane line parallel constraint to solve pitch angle information on line; and finally, constructing a grid interest region under the BEV, substituting the pitch angle information obtained by solving, and combining the mask information to generate a lane line under the BEV. Fig. 4a and 4b show comparison experimental graphs of robust lane line generation under BEV showing a typical case of the present invention, wherein the left graph does not perform pitch angle estimation, and the right graph performs pitch angle estimation, and it can be seen from the graphs that the implementation effect after pitch angle estimation is due to the experimental effect of not performing pitch angle estimation; fig. 5 shows an experimental graph of road information generated by multi-frame probability accumulation.
Corresponding to the foregoing embodiment of the robust lane line generation method under the BEV with on-line pitch angle estimation, the invention also provides an embodiment of a robust lane line generation device under the BEV with on-line pitch angle estimation.
Referring to fig. 6, the robust lane line generation device under the BEV with on-line pitch angle estimation according to an embodiment of the invention includes a memory and one or more processors; executable code is stored in the memory, and when the one or more processors execute the executable code, they implement the robust lane line generation method under the BEV with on-line pitch angle estimation of the above embodiment.
Embodiments of the robust lane line generation device under the BEV with on-line pitch angle estimation may be applied to any device with data processing capability, such as a computer or a similar apparatus. The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the device is formed, as a logical unit, by the processor of the host device reading the corresponding computer program instructions from non-volatile memory into memory and running them. In terms of hardware, fig. 6 shows a hardware structure diagram of a device with data processing capability hosting the robust lane line generation device under the BEV with on-line pitch angle estimation; besides the processor, memory, network interface, and non-volatile memory shown in fig. 6, the host device may generally include other hardware according to its actual function, which is not described again here.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
An embodiment of the present invention further provides a computer-readable storage medium, on which a program is stored, where the program, when executed by a processor, implements the robust lane line generation method under BEV for online pitch angle estimation in the above-described embodiments.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing capability device described in any of the foregoing embodiments. The computer readable storage medium may also be any external storage device of a device with data processing capabilities, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer readable storage medium may include both internal storage units and external storage devices of any data processing capable device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing-capable device, and may also be used for temporarily storing data that has been output or is to be output.
The above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A robust lane line generation method under BEV with on-line pitch angle estimation, characterized by comprising the following steps:
step S1: based on the lane line information and other information, completing pixel-level dense segmentation on the forward-looking monocular image to obtain corresponding image mask information;
step S2: extracting a plurality of groups of lane lines meeting the parallel relation on a two-dimensional image plane according to the image mask information;
and step S3: constructing an extrinsic parameter matrix from the unknown pitch angle, back-projecting the end points of the parallel lane lines on the image plane to the bird's-eye view BEV, and constructing a cost function of the unknown pitch angle according to the lane-line parallelism prior information;
and step S4: given resolution and size information, constructing a grid region of interest under the bird's-eye view BEV, substituting the solved pitch angle, and generating the lane lines under the bird's-eye view BEV by combining the image mask information.
2. The method of robust lane line generation under BEV for on-line pitch angle estimation of claim 1, wherein: the step S1 includes the steps of:
step S1.1: reasoning to obtain corresponding lane line class mask information according to the original image information to complete pixel-level segmentation;
step S1.2: and according to the distortion parameters obtained by calibrating the monocular camera, carrying out distortion removal on the image to obtain a new camera internal parameter matrix.
3. The method of robust lane line generation under BEV for on-line pitch angle estimation of claim 2, wherein: in the step S1.1, a deep learning framework based on a pure-vision Transformer is used, and on-line inference yields the segmentation result of the forward-looking monocular image; the dense pixel-level segmentation categories include: lane boundary lines, lane center lines, stop lines, and the like.
4. The method of robust lane line generation under BEV for on-line pitch angle estimation of claim 2, wherein: in step S1.2, the monocular camera is first calibrated with a checkerboard to obtain the monocular camera distortion parameters (k1, k2, p1, p2), where k1 and k2 denote the radial distortion parameters and p1 and p2 denote the tangential distortion parameters; the obtained class mask information is undistorted by combining the camera distortion parameters, and a new camera intrinsic matrix is obtained.
5. The method of robust lane line generation under BEV for on-line pitch angle estimation of claim 1, wherein: the step S2 includes the steps of:
step S2.1: converting the image mask information to obtain binary image information;
step S2.2: based on the image skeletonization result, completing lane line object extraction by region growing, and counting the number of pixels of each object to remove the lane line targets below a given count threshold;
step S2.3: giving end points on two sides of a lane line, and obtaining pixel-level path information of the lane line by using a shortest path method;
step S2.4: dividing the lane line with a straight-line feature extraction algorithm, representing the lane line as a broken line segment, retaining the partial line segments longer than a given length threshold, and returning a plurality of groups of straight line segments satisfying the parallel relation.
6. The method of robust lane line generation under BEV for on-line pitch angle estimation of claim 5, wherein: in the step S2.1, the lane line category mask information is traversed, the center lane lines and boundary lane lines representing the current road direction information are taken as the foreground portion of the binarized image, and the lane line direction information is retained by thinning the image, so that the effective pixel points are further reduced.
7. The method of robust lane line generation under BEV for on-line pitch angle estimation of claim 5, wherein: in the step S2.4, the pitch angle at the previous time is applied to the judgment of the parallel relation of the current lane lines, and finally a plurality of groups of straight line segments satisfying the parallel relation are obtained.
8. The method of robust lane line generation under BEV for on-line pitch angle estimation of claim 5, wherein: the step S3 includes the steps of:
step S3.1: constructing an extrinsic parameter matrix relative to the ground according to the unknown pitch angle, respectively projecting the two end points of the parallel lane lines to the bird's-eye view BEV through perspective projection transformation, and acquiring the corresponding point information in Euclidean space;
step S3.2: respectively constructing, from the point information, the vector information of the corresponding lane lines in bird's-eye view BEV space, obtaining the lane-line parallelism constraint by taking the cross product of the two vectors as reference information for quantitatively evaluating the parallel relation, constructing a cost function of the unknown pitch angle by combining the lane-line parallelism constraint, and solving for the unknown pitch angle by minimizing the cost function.
9. The method of robust lane line generation under BEV for on-line pitch angle estimation of claim 5, wherein: the step S4 includes the steps of:
step S4.1: under camera coordinates, given the offset distance, resolution and size information, constructing a grid region of interest under the BEV;
step S4.2: projecting the centers of the grid cells of the region of interest to the image plane according to the solved pitch angle, and assigning the corresponding lane line semantic categories to the cells by combining the image mask information, so as to generate the lane line information under the bird's-eye view BEV.
10. A robust lane line generation device under BEV with on-line pitch angle estimation, comprising a memory and one or more processors, wherein executable code is stored in the memory, and when the one or more processors execute the executable code, they implement the robust lane line generation method under BEV with on-line pitch angle estimation according to any one of claims 1 to 9.
CN202310016576.6A 2023-01-06 2023-01-06 Method and device for generating robust lane line under BEV of on-line pitch angle estimation Active CN115937825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310016576.6A CN115937825B (en) 2023-01-06 2023-01-06 Method and device for generating robust lane line under BEV of on-line pitch angle estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310016576.6A CN115937825B (en) 2023-01-06 2023-01-06 Method and device for generating robust lane line under BEV of on-line pitch angle estimation

Publications (2)

Publication Number Publication Date
CN115937825A true CN115937825A (en) 2023-04-07
CN115937825B CN115937825B (en) 2023-06-20

Family

ID=86552512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310016576.6A Active CN115937825B (en) 2023-01-06 2023-01-06 Method and device for generating robust lane line under BEV of on-line pitch angle estimation

Country Status (1)

Country Link
CN (1) CN115937825B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168173A (en) * 2023-04-24 2023-05-26 之江实验室 Lane line map generation method, device, electronic device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785291A (en) * 2018-12-20 2019-05-21 南京莱斯电子设备有限公司 A kind of lane line self-adapting detecting method
EP3624001A1 (en) * 2018-09-13 2020-03-18 Volvo Car Corporation Methods and systems for parking line marker detection and pairing and parking spot detection and classification
CN111401150A (en) * 2020-02-27 2020-07-10 江苏大学 Multi-lane line detection method based on example segmentation and adaptive transformation algorithm
CN111652952A (en) * 2020-06-05 2020-09-11 腾讯科技(深圳)有限公司 Lane line generation method, lane line generation device, computer device, and storage medium
US20210276574A1 (en) * 2020-03-03 2021-09-09 GM Global Technology Operations LLC Method and apparatus for lane detection on a vehicle travel surface
CN114037970A (en) * 2021-11-19 2022-02-11 中国重汽集团济南动力有限公司 Sliding window-based lane line detection method, system, terminal and readable storage medium
CN114399588A (en) * 2021-12-20 2022-04-26 禾多科技(北京)有限公司 Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN114445593A (en) * 2022-01-30 2022-05-06 重庆长安汽车股份有限公司 Aerial view semantic segmentation label generation method based on multi-frame semantic point cloud splicing
CN114445392A (en) * 2022-01-31 2022-05-06 重庆长安汽车股份有限公司 Lane line-based pitch angle calibration method and readable storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3624001A1 (en) * 2018-09-13 2020-03-18 Volvo Car Corporation Methods and systems for parking line marker detection and pairing and parking spot detection and classification
CN109785291A (en) * 2018-12-20 2019-05-21 南京莱斯电子设备有限公司 A kind of lane line self-adapting detecting method
CN111401150A (en) * 2020-02-27 2020-07-10 江苏大学 Multi-lane line detection method based on example segmentation and adaptive transformation algorithm
US20210276574A1 (en) * 2020-03-03 2021-09-09 GM Global Technology Operations LLC Method and apparatus for lane detection on a vehicle travel surface
CN111652952A (en) * 2020-06-05 2020-09-11 腾讯科技(深圳)有限公司 Lane line generation method, lane line generation device, computer device, and storage medium
CN114037970A (en) * 2021-11-19 2022-02-11 中国重汽集团济南动力有限公司 Sliding window-based lane line detection method, system, terminal and readable storage medium
CN114399588A (en) * 2021-12-20 2022-04-26 禾多科技(北京)有限公司 Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN114445593A (en) * 2022-01-30 2022-05-06 重庆长安汽车股份有限公司 Aerial view semantic segmentation label generation method based on multi-frame semantic point cloud splicing
CN114445392A (en) * 2022-01-31 2022-05-06 重庆长安汽车股份有限公司 Lane line-based pitch angle calibration method and readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Pingping Lu et al.: "Graph-Embedded Lane Detection", IEEE Transactions on Image Processing
Qibo Qiu, Haiming Gao, Wei Hua, Gang Huang, Xiaofei He: "PriorLane: A Prior Knowledge Enhanced Lane Detection Approach Based on Transformer", arXiv
Fu Liming: "Lane line detection based on multi-scale resampling", Application of Electronic Technique
Cai Yingfeng; Zhang Tiantian; Wang Hai: "Multi-lane line detection based on instance segmentation and adaptive perspective transformation", Journal of Southeast University (Natural Science Edition)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168173A (en) * 2023-04-24 2023-05-26 之江实验室 Lane line map generation method, device, electronic device and storage medium

Also Published As

Publication number Publication date
CN115937825B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
Uhrig et al. Sparsity invariant cnns
JP7033373B2 (en) Target detection method and device, smart operation method, device and storage medium
US10818014B2 (en) Image object segmentation based on temporal information
Yang et al. Semantic segmentation-assisted scene completion for lidar point clouds
CN116402976A (en) Training method and device for three-dimensional target detection model
CN115937825A (en) Robust lane line generation method and device under BEV (beam-based attitude vector) of on-line pitch angle estimation
Huang et al. ES-Net: An efficient stereo matching network
Bullinger et al. 3d vehicle trajectory reconstruction in monocular video data using environment structure constraints
CN114549542A (en) Visual semantic segmentation method, device and equipment
CN116563303B (en) Scene generalizable interactive radiation field segmentation method
Sun et al. Uni6Dv2: Noise elimination for 6D pose estimation
Tsuji et al. Non-guided depth completion with adversarial networks
CN116734834A (en) Positioning and mapping method and device applied to dynamic scene and intelligent equipment
CN114648639B (en) Target vehicle detection method, system and device
Lin et al. Dense 3D surface reconstruction of large-scale streetscape from vehicle-borne imagery and LiDAR
CN116563104A (en) Image registration method and image stitching method based on particle swarm optimization
CN116403062A (en) Point cloud target detection method, system, equipment and medium
Cheng et al. Two-branch convolutional sparse representation for stereo matching
CN115713633A (en) Visual SLAM method, system and storage medium based on deep learning in dynamic scene
Carvalho et al. Technical Report: Co-learning of geometry and semantics for online 3D mapping
CN115115698A (en) Pose estimation method of equipment and related equipment
Murayama et al. Depth Image Noise Reduction and Super-Resolution by Pixel-Wise Multi-Frame Fusion
CN114018215B (en) Monocular distance measuring method, device, equipment and storage medium based on semantic segmentation
CN115984583B (en) Data processing method, apparatus, computer device, storage medium, and program product
US20230237811A1 (en) Object detection and tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant