CN115546319B - Lane keeping method, lane keeping apparatus, computer device, and storage medium - Google Patents


Info

Publication number
CN115546319B
CN115546319B (application CN202211482598.3A)
Authority
CN
China
Prior art keywords
lane
lane line
potential field
result
fitting
Prior art date
Legal status
Active
Application number
CN202211482598.3A
Other languages
Chinese (zh)
Other versions
CN115546319A (en)
Inventor
席华炜
董洪泉
卢兵
王博
宋士佳
孙超
王文伟
Current Assignee
BYD Auto Co Ltd
Shenzhen Automotive Research Institute of Beijing University of Technology
Original Assignee
Shenzhen Automotive Research Institute of Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Automotive Research Institute of Beijing University of Technology
Priority to CN202211482598.3A priority Critical patent/CN115546319B/en
Publication of CN115546319A publication Critical patent/CN115546319A/en
Application granted granted Critical
Publication of CN115546319B publication Critical patent/CN115546319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the invention disclose a lane keeping method, a lane keeping apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring camera calibration data; extracting lane lines from the camera calibration data to obtain an extraction result; fitting the lane lines according to the extraction result to obtain a fitting result; constructing a lane line potential field based on the lateral position of the vehicle according to the fitting result; and fusing the lane line potential field with a lane keeping model to obtain a lane keeping function. The method can cope with low-probability events in real driving scenes, is suitable for detecting curves and dashed lane lines, and improves stability during curve-negotiation control.

Description

Lane keeping method, lane keeping apparatus, computer device, and storage medium
Technical Field
The present invention relates to driver assistance systems, and more particularly to a lane keeping method, apparatus, computer device, and storage medium.
Background
The lane keeping function is an important component of driver assistance systems: through active lateral control, the vehicle's deviation is automatically corrected so that it stays in its lane. A vehicle with a lane keeping function can therefore greatly improve driving safety and reduce driver fatigue. Lane line detection is a precondition for developing a lane keeping function, and extensive research on it has been carried out both in China and abroad. Lane line detection algorithms can be grouped into two categories: those based on traditional image feature extraction and those based on deep learning.
Deep-learning-based detection can produce high-accuracy lane line results, but model training places very strict demands on computing power and data. Because such models lack interpretability and no data set is complete, they are difficult to converge and cannot handle low-probability events in real driving scenes, which may induce traffic accidents. Deploying a deep learning model also poses a new challenge to the vehicle's hardware, since the large computational load seriously affects the real-time performance of the algorithm. Therefore, to guarantee real-time performance and stability, traditional lane line detection algorithms based on image preprocessing, feature extraction and lane line fitting remain the mainstream in practical applications. Most traditional algorithms rely on the Hough transform; they run in real time but are overly sensitive to illumination and are unsuitable for detecting curves and dashed lines. In addition, when the lane center line is used as the target trajectory for computing the lateral error, the vehicle is prone to instability when keeping the lane on a high-curvature curve.
Therefore, it is necessary to design a new method that is suitable for detecting curves and dashed lines, can cope with low-probability events in real driving scenes, and improves stability during curve-negotiation control.
Disclosure of Invention
The object of the invention is to overcome the defects of the prior art and to provide a lane keeping method, a lane keeping apparatus, a computer device, and a storage medium.
To achieve this object, the invention adopts the following technical scheme. A lane keeping method comprises:
acquiring camera calibration data;
extracting a lane line according to the camera calibration data to obtain an extraction result;
performing lane line fitting according to the extraction result to obtain a fitting result;
constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result;
and fusing the lane line potential field and a lane keeping model to obtain a lane keeping function.
The further technical scheme is as follows: the acquiring of camera calibration data includes:
calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result;
and acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
The further technical scheme is as follows: the extracting lane lines according to the camera calibration data to obtain an extraction result comprises:
performing image gray processing on the camera calibration data to obtain a processing result;
carrying out inverse perspective transformation on the processing result to obtain a transformation result;
and performing feature extraction on the transformation result to obtain an extraction result.
The further technical scheme is as follows: the extracting the feature of the transformation result to obtain an extraction result includes:
extracting clustering features from the transformation result for objects of the same density by means of a DBSCAN clustering algorithm to obtain feature points;
and screening the connection points of the feature points through parameter adjustment and identification to obtain an extraction result.
The further technical scheme is as follows: the performing lane line fitting according to the extraction result to obtain a fitting result includes:
and performing curve fitting on the extraction result based on a moving least square method to obtain a fitting result.
The further technical scheme is as follows: constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result, comprising:
determining a curve coordinate mapping relation of the lane line according to the fitting result;
and designing a potential field function of the lane line based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct the potential field of the lane line.
The further technical scheme is as follows: the fusing the lane line potential field with a lane keeping model to obtain a lane keeping function includes:
calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model;
designing an objective function by combining the transverse distance, the reference transverse target and the lane line potential field;
and combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to construct an optimization problem so as to obtain a lane keeping function.
The present invention also provides a lane keeping apparatus comprising:
the data acquisition unit is used for acquiring camera calibration data;
the lane line extraction unit is used for extracting a lane line according to the camera calibration data to obtain an extraction result;
the fitting unit is used for fitting the lane line according to the extraction result to obtain a fitting result;
the construction unit is used for constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result;
and the fusion unit is used for fusing the lane line potential field with a lane keeping model to obtain a lane keeping function.
The invention also provides computer equipment which comprises a memory and a processor, wherein the memory is stored with a computer program, and the processor realizes the method when executing the computer program.
The invention also provides a storage medium storing a computer program which, when executed by a processor, implements the method described above.
Compared with the prior art, the invention has the following beneficial effects. The method calibrates the internal and external parameters of the camera to obtain camera calibration data, extracts lane lines, performs lane line fitting, constructs a lane line potential field from the lateral position of the vehicle, and fuses the potential field with a lane keeping model to obtain a lane keeping function that can be used for lane keeping prediction. The method can cope with low-probability events in real driving scenes, is suitable for detecting curves and dashed lines, and improves stability during curve-negotiation control.
The invention is further described below with reference to the accompanying drawings and specific embodiments.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a lane keeping method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a lane keeping method according to an embodiment of the present invention;
FIG. 3 is a sub-flowchart of a lane keeping method according to an embodiment of the present invention;
FIG. 4 is a sub-flowchart of a lane keeping method according to an embodiment of the present invention;
FIG. 5 is a sub-flowchart of a lane keeping method according to an embodiment of the present invention;
FIG. 6 is a sub-flowchart of a lane keeping method according to an embodiment of the present invention;
FIG. 7 is a sub-flowchart of a lane keeping method according to an embodiment of the present invention;
FIG. 8 is a first graph of the fitting effect provided by the embodiment of the present invention;
FIG. 9 is a second graph of the fitting effect provided by the embodiment of the present invention;
FIG. 10 is a third graph of the fitting effect provided by the embodiment of the present invention;
FIG. 11 is a fourth graph of the fitting effect provided by the embodiment of the present invention;
fig. 12 is a schematic diagram of acquiring coordinates of a lane line curve according to an embodiment of the present invention;
FIG. 13 is a schematic view of a lateral distance of a vehicle provided by an embodiment of the present invention;
fig. 14 is a schematic diagram of a 2D lane line potential field provided by an embodiment of the present invention;
fig. 15 is a schematic block diagram of a lane keeping apparatus provided in an embodiment of the present invention;
fig. 16 is a schematic block diagram of a data acquisition unit of the lane keeping apparatus provided by the embodiment of the present invention;
fig. 17 is a schematic block diagram of a lane line extraction unit of the lane keeping apparatus provided by the embodiment of the present invention;
fig. 18 is a schematic block diagram of an extraction subunit of the lane keeping apparatus provided by the embodiment of the present invention;
fig. 19 is a schematic block diagram of a construction unit of the lane keeping device provided by the embodiment of the present invention;
fig. 20 is a schematic block diagram of a fusion unit of the lane keeping apparatus provided by the embodiment of the present invention;
FIG. 21 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view illustrating an application scenario of a lane keeping method according to an embodiment of the present invention, and fig. 2 is a schematic flow chart of the method. The lane keeping method is applied to a server. The server interacts with a camera to combine accurate lane line detection with vehicle stability during lane keeping. First, the internal and external parameters of the camera are calibrated from the camera parameters, and image grayscale processing and inverse perspective transformation are completed. Second, lane line features are extracted with DBSCAN (Density-Based Spatial Clustering of Applications with Noise), and lane line fitting is realized with the moving least squares method. Finally, the lane line potential field is fused with the lane keeping model for predictive control, and a longitudinal-lateral cooperative vehicle control module completes the lane keeping function. This solves the problem of algorithm compatibility in solid-line and dashed-line scenes and improves stability during curve-negotiation control, giving the vehicle better tracking performance and tracking potential.
Fig. 2 is a schematic flow chart of a lane keeping method according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S150.
And S110, acquiring camera calibration data.
In the present embodiment, the camera calibration data refers to 2D pixel coordinates formed by the camera shooting lane.
In an embodiment, referring to fig. 3, the step S110 may include steps S111 to S112.
And S111, calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result.
In this embodiment, the calibration result refers to the calibration content of the internal and external parameters of the camera.
And S112, acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
In this embodiment, camera parameter identification is the precondition for expressing the mapping between spatial coordinates and pixel coordinates; the external and internal parameters of the camera are identified by combining information such as the camera mounting position, the lens size and the output image size. Transforming any coordinate in 3D space into a 2D pixel coordinate is a perspective transformation, summarized as

Z_c · [u, v, 1]^T = K · [R | T] · [X_w, Y_w, Z_w, 1]^T

where K is the camera intrinsic matrix, R and T are the camera extrinsic parameters, (X_w, Y_w, Z_w) is the position of a point in the geodetic coordinate system, Z_c is the Z-axis coordinate of the point in the camera coordinate system, and (u, v) is the pixel coordinate of the point. This formula converts the 3D coordinate (X_w, Y_w, Z_w) into the 2D pixel coordinate (u, v). The output of this step, the 2D pixel coordinates (u, v), is the input for lane line extraction.
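The projection step above can be sketched in a few lines of NumPy. This is a minimal illustration of the standard pinhole model, not the patent's implementation; the intrinsic values and the identity pose are made up for the example.

```python
import numpy as np

def project_point(K, R, T, P_world):
    """Project a 3D point in the geodetic (world) frame to 2D pixel
    coordinates: Z_c * [u, v, 1]^T = K * (R * P_world + T)."""
    P_cam = R @ P_world + T          # world frame -> camera frame
    Zc = P_cam[2]                    # depth along the camera Z axis
    uv1 = (K @ P_cam) / Zc           # perspective division
    return uv1[0], uv1[1], Zc

# Toy check: identity pose, point 2 m straight ahead of the optical centre.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # illustrative intrinsics
R = np.eye(3)
T = np.zeros(3)
u, v, Zc = project_point(K, R, T, np.array([0.0, 0.0, 2.0]))
# A point on the optical axis lands on the principal point (320, 240).
```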
And S120, extracting the lane line according to the camera calibration data to obtain an extraction result.
In the present embodiment, the extracted result refers to the extracted lane line.
In an embodiment, referring to fig. 4, the step S120 may include steps S121 to S123.
And S121, performing image gray scale processing on the camera calibration data to obtain a processing result.
In this embodiment, the processing result refers to data formed by performing image gradation processing on the camera calibration data.
Image grayscale processing is prior art and is not described here again.
And S122, performing inverse perspective transformation on the processing result to obtain a transformation result.
In this embodiment, the transformation result is a result obtained by performing inverse perspective transformation on the processing result.
Specifically, the inverse perspective transformation converts the pixel coordinate system into the world coordinate system and outputs the input image as a bird's-eye view, in which real-world lane lines appear parallel and of equal width, thereby improving detection accuracy. The coordinate conversion is realized by

[X, Y, Z]^T = IPM · [u, v, 1]^T

where IPM is the inverse perspective matrix. The pixel coordinates (u, v) are input, inversely transformed by the inverse perspective matrix, and the 3D coordinates (X, Y, Z) are output.
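The inverse mapping can be illustrated with a 3x3 homography restricted to the ground plane. This is only a sketch of the idea under that assumption; the matrix values are invented for the example, and a real IPM would come from the calibration of step S110.

```python
import numpy as np

def inverse_perspective(IPM, u, v):
    """Map a pixel (u, v) back to ground-plane coordinates using an
    inverse perspective matrix (here a 3x3 homography inverse)."""
    w = IPM @ np.array([u, v, 1.0])
    return w[0] / w[2], w[1] / w[2]      # normalise homogeneous coordinates

# Illustrative forward homography H: ground (X, Y) -> pixels.
# Its inverse plays the role of the IPM in the text above.
H = np.array([[2.0, 0.0, 100.0],
              [0.0, 2.0,  50.0],
              [0.0, 0.0,   1.0]])
IPM = np.linalg.inv(H)
X, Y = inverse_perspective(IPM, 104.0, 56.0)   # pixel back to the ground plane
```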
And S123, performing feature extraction on the transformation result to obtain an extraction result.
In an embodiment, referring to fig. 4, the step S123 may include steps S1231 to S1232.
And S1231, extracting clustering features of the transformation result aiming at the objects with the same density by adopting a DBSCAN clustering algorithm to obtain feature points.
In this embodiment, the feature point refers to a result obtained by extracting clustering features of the objects with the same density by using a DBSCAN clustering algorithm.
And S1232, screening the connection points of the characteristic points through parameter adjustment and identification so as to obtain an extraction result.
Specifically, after a series of 3D coordinates is obtained, a DBSCAN clustering algorithm is adopted to extract clustering features for objects of the same density; connection points are identified through parameter adjustment and then screened. At this point, the lane lines have been extracted as a series of feature points.
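A minimal pure-Python sketch of the DBSCAN idea follows. It is not the patent's implementation, and the eps / min_pts values and sample points are illustrative; it only shows how points of the same local density group into clusters while isolated points become noise.

```python
from math import hypot

def dbscan(points, eps, min_pts):
    """Minimal O(n^2) DBSCAN: return a cluster id per point, -1 for noise."""
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        xi, yi = points[i]
        return [j for j in range(n)
                if hypot(points[j][0] - xi, points[j][1] - yi) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1               # provisional noise (may become border)
            continue
        labels[i] = cluster              # i is a core point: start a cluster
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster      # noise reached from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:     # j is also a core point: keep expanding
                seeds.extend(nb_j)
        cluster += 1
    return labels

# Two dense strips of lane-line feature points plus one isolated noise point.
left  = [(0.0, y * 0.5) for y in range(6)]
right = [(3.5, y * 0.5) for y in range(6)]
labels = dbscan(left + right + [(10.0, 10.0)], eps=0.8, min_pts=3)
# The two strips form two separate clusters; the far point is labelled noise.
```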
And S130, performing lane line fitting according to the extraction result to obtain a fitting result.
In this embodiment, the fitting result refers to a result formed by fitting a series of extracted feature points, and specifically, the extracted result is subjected to curve fitting based on a moving least square method to obtain a fitting result, so as to ensure the curvature continuity of the lane line, as shown in fig. 8 to 11.
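The moving least squares fitting can be sketched as a locally weighted polynomial fit: at every evaluation point, a low-order polynomial is fitted with Gaussian weights centred on that point, which keeps the fitted lane line smooth. This is a generic illustration under assumed parameters (bandwidth h, degree), not the patent's exact formulation.

```python
import numpy as np

def mls_fit(xs, ys, x_eval, h=2.0, degree=2):
    """Moving least squares: for each evaluation abscissa, fit a local
    polynomial whose samples are Gaussian-weighted by distance."""
    out = []
    for xe in x_eval:
        w = np.exp(-((xs - xe) / h) ** 2)            # Gaussian influence weights
        # np.polyfit weights multiply the residuals, so pass sqrt(w)
        # to minimise sum_i w_i * (y_i - p(x_i))^2.
        coeffs = np.polyfit(xs, ys, degree, w=np.sqrt(w))
        out.append(np.polyval(coeffs, xe))
    return np.array(out)

# Noiseless curved "lane line": a degree-2 MLS fit should reproduce it exactly.
xs = np.linspace(0.0, 10.0, 21)
ys = 0.05 * xs ** 2
fitted = mls_fit(xs, ys, xs)
```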
And S140, constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result.
In this embodiment, the lane line potential field refers to a lane line composite potential field.
In an embodiment, referring to fig. 6, the step S140 may include steps S141 to S142.
And S141, determining the curve coordinate mapping relation of the lane line according to the fitting result.
In this embodiment, after the fitting of the lane line is completed, the absolute coordinates of the lane line are obtained, and then the conversion between the absolute coordinates and the road coordinates is realized based on the coordinate conversion theory, so as to obtain the curve coordinate mapping relationship of the lane line, as shown in fig. 12.
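The absolute-to-road coordinate conversion above amounts to projecting the vehicle position onto the fitted lane curve to obtain an arc length s and a signed lateral offset d. The following sketch does this for a polyline approximation of the curve; it is an illustration of the coordinate-conversion idea, not the patent's formulas.

```python
import numpy as np

def curve_coords(path, p):
    """Map an absolute position p onto (s, d) road coordinates:
    s = arc length along the lane polyline, d = signed lateral offset
    (positive when p lies to the left of the travel direction)."""
    best = None
    s_acc = 0.0
    for a, b in zip(path[:-1], path[1:]):
        seg = b - a
        L = np.linalg.norm(seg)
        t = np.clip(np.dot(p - a, seg) / (L * L), 0.0, 1.0)
        q = a + t * seg                           # closest point on this segment
        dist = np.linalg.norm(p - q)
        # sign from the 2D cross product of segment direction and offset
        side = np.sign(seg[0] * (p - a)[1] - seg[1] * (p - a)[0])
        if best is None or dist < best[0]:
            best = (dist, s_acc + t * L, side)
        s_acc += L
    dist, s, side = best
    return s, side * dist

# Straight centre line along x; a point 1.5 m to its left at x = 5.
path = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
s, d = curve_coords(path, np.array([5.0, 1.5]))
```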
And S142, designing a lane line potential field function based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct a lane line potential field.
The main aim of the lane keeping function is to make the vehicle run stably within a certain range around the lane center line, rather than to sacrifice vehicle stability in order to follow the center line strictly. The lateral distance of the vehicle from the lane line in the curve coordinate system is denoted d, as shown in fig. 13.
Specifically, a lane line potential field function is designed based on the calculated lateral position of the vehicle, and a lane line composite potential field is constructed. Potential field functions are defined on the lane boundaries and on the lane center line of the structured road (the two potential field formulas appear only as images in the source). In these formulas, the two potential terms are respectively the repulsive field at the lane line boundary and the gravitational field at the lane center line in curve coordinates; b and a are the adjustment coefficients of the lane boundary potential field and the lane center potential field; d_LR and d_LL are the abscissas of the right and left lane lines; d is the abscissa of the target vehicle; and d_LC is the abscissa of the lane center line. The lane line potential field is constructed by combining the identified lane lines with the potential field function, as shown in fig. 14.
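Since the patent's own potential field formulas are not reproduced in the source, the following sketch uses assumed functional forms of the kind described: a quadratic gravitational field pulling toward the center line d_LC, plus exponential repulsive fields rising near the boundaries d_LL and d_LR. The coefficients a and b and all distances are illustrative.

```python
import math

def lane_potential(d, d_LL, d_LR, d_LC, a=1.0, b=0.5):
    """Illustrative lane line composite potential field (assumed forms):
    low near the centre line, rising toward either lane boundary."""
    u_att = a * (d - d_LC) ** 2                       # gravitational (centre) term
    u_rep = b * (math.exp(-abs(d - d_LL)) +           # repulsive boundary terms
                 math.exp(-abs(d - d_LR)))
    return u_att + u_rep

# A 3.5 m lane centred at d = 0: the potential near a boundary exceeds
# the potential at the centre, pushing the controller back toward d_LC.
centre = lane_potential(0.0, d_LL=-1.75, d_LR=1.75, d_LC=0.0)
edge = lane_potential(1.5, d_LL=-1.75, d_LR=1.75, d_LC=0.0)
```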
S150, fusing the lane line potential field with a lane keeping model to obtain a lane keeping function.
In the present embodiment, the lane keeping function refers to a function for predicting lane keeping.
In an embodiment, referring to fig. 7, the step S150 may include steps S151 to S153.
And S151, calculating the transverse position and the longitudinal position of the vehicle from the lane line based on the lane keeping model.
In this embodiment, an optimization problem is constructed based on model predictive control theory; the conversion between geodetic coordinates and road coordinates is realized based on coordinate conversion theory, and the lateral and longitudinal positions of the vehicle relative to the lane line are calculated. The predicted and reference values over the prediction horizon are then generated (the corresponding formula appears only as an image in the source).
and S152, designing an objective function by combining the transverse distance, the reference transverse target and the lane line potential field.
In this embodiment, the objective function takes a quadratic tracking form (the formula appears only as an image in the source), where V_pre denotes the N_P-step predicted state, V_ref denotes the N_P-step state reference, and the two weighting matrices are the state weight coefficient matrix and the control weight coefficient matrix, respectively.
And S153, constructing an optimization problem by combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to obtain a lane keeping function.
Combining step S142, step S151, step S152 and the three-degree-of-freedom vehicle dynamics model, a nonlinear, multi-objective, multi-constraint convex optimization problem is constructed (the formula appears only as an image in the source), where A_d and B_d denote the coefficient matrices of the state equation, C_d and D_d denote the coefficient matrices of the observation equation, and a relaxation factor converts the soft constraint into a semi-hard constraint. The semi-hard constraint allows the vehicle to deviate from the lane center line within a certain range, at the cost of a penalty.
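The shape of such an objective can be illustrated as follows. This is an assumed quadratic MPC cost of the kind the text describes (tracking error weighted by a state matrix, control effort weighted by a control matrix, and a heavy penalty rho on the slack that softens the lane constraint); the matrix names Q, R, the horizon, and all values are invented for the example.

```python
import numpy as np

def mpc_cost(V_pre, V_ref, U, Q, R, slack, rho=1000.0):
    """Illustrative MPC objective: state-tracking term + control-effort
    term + quadratic penalty on the constraint-relaxation slack."""
    e = V_pre - V_ref
    return float(e @ Q @ e + U @ R @ U + rho * slack ** 2)

Np = 3                                   # prediction horizon length (illustrative)
V_ref = np.zeros(Np)                     # e.g. zero lateral-error reference
Q = np.eye(Np)                           # state weight coefficient matrix
R = 0.1 * np.eye(Np)                     # control weight coefficient matrix

# Staying near the reference with no slack is far cheaper than deviating
# and paying the semi-hard constraint penalty.
tight = mpc_cost(np.array([0.1, 0.1, 0.1]), V_ref, np.zeros(Np), Q, R, slack=0.0)
loose = mpc_cost(np.array([1.0, 1.0, 1.0]), V_ref, np.zeros(Np), Q, R, slack=0.2)
```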
The method of this embodiment first completes the calibration of the camera's internal and external parameters based on the camera parameters, together with image grayscale processing and inverse perspective transformation; next, lane line features are extracted with the spatial density clustering algorithm, and lane line fitting is realized with the moving least squares method; finally, the lane line potential field is fused with the lane keeping model for predictive control, and a longitudinal-lateral cooperative vehicle control module completes the lane keeping function. The model-predicted lateral distance of the vehicle, the reference lateral target and the lane line potential field are combined, and an objective function targeting both traffic efficiency and tracking accuracy is designed; a nonlinear, multi-objective, multi-constraint convex optimization problem is then constructed from the corresponding formulas. The lane keeping function thus combines lane line detection accuracy with vehicle stability during lane keeping: it is compatible with both solid-line and dashed-line scenes and is more stable during curve-negotiation control. Compared with lateral-only control, the vehicle's tracking performance and tracking potential are improved.
According to the lane keeping method, camera calibration data are obtained by calibrating the internal and external parameters of the camera; lane lines are extracted and then fitted; a lane line potential field is constructed from the lateral position of the vehicle; and the potential field is fused with a lane keeping model to obtain a lane keeping function that can be used for lane keeping prediction. The method can cope with low-probability events in real driving scenes, is suitable for detecting curves and dashed lines, and improves stability during curve-negotiation control.
Fig. 15 is a schematic block diagram of a lane keeping apparatus 300 according to an embodiment of the present invention. As shown in fig. 15, the present invention also provides a lane keeping device 300 corresponding to the above lane keeping method. The lane keeping apparatus 300 includes a unit for performing the above-described lane keeping method, and the apparatus may be configured in a server. Specifically, referring to fig. 15, the lane keeping device 300 includes a data acquisition unit 301, a lane line extraction unit 302, a fitting unit 303, a construction unit 304, and a fusion unit 305.
A data acquisition unit 301 configured to acquire camera calibration data; a lane line extraction unit 302, configured to extract a lane line according to the camera calibration data to obtain an extraction result; a fitting unit 303, configured to perform lane line fitting according to the extraction result to obtain a fitting result; a construction unit 304, configured to construct a lane line potential field based on a vehicle lateral position according to the fitting result; and a fusion unit 305, configured to fuse the lane line potential field and a lane keeping model to obtain a lane keeping function.
In one embodiment, as shown in fig. 16, the data acquisition unit 301 includes a calibration subunit 3011 and a coordinate acquisition subunit 3012.
The calibration subunit 3011 is configured to calibrate the camera internal parameter and the camera external parameter based on the camera parameter to obtain a calibration result; and a coordinate obtaining subunit 3012, configured to obtain 2D pixel coordinates according to the calibration result, so as to obtain camera calibration data.
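By way of illustration (not part of the patent), obtaining 2D pixel coordinates from a calibrated camera can be sketched with a standard pinhole projection model. The intrinsic matrix, extrinsic pose, and point coordinates below are assumed example values, not the patent's calibration results:

```python
import numpy as np

# Assumed intrinsic matrix K (focal lengths fx, fy; principal point cx, cy)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Assumed extrinsics: camera aligned with the road, mounted 1.5 m up
R = np.eye(3)
t = np.array([0.0, 1.5, 0.0])

def project(pt_world):
    """Map a 3D world point to 2D pixel coordinates: p = K (R X + t)."""
    pc = R @ pt_world + t              # world frame -> camera frame
    uvw = K @ pc                       # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]            # perspective divide

# A road-surface point 10 m ahead of the camera
u, v = project(np.array([0.0, 0.0, 10.0]))
```

In practice the intrinsics and extrinsics would come from a calibration routine (e.g. checkerboard-based calibration) rather than being hand-written as here.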
In one embodiment, as shown in fig. 17, the lane line extraction unit 302 includes a processing subunit 3021, a transformation subunit 3022, and an extraction subunit 3023.
A processing subunit 3021, configured to perform image grayscale processing on the camera calibration data to obtain a processing result; a transformation subunit 3022, configured to perform inverse perspective transformation on the processing result to obtain a transformation result; an extracting subunit 3023, configured to perform feature extraction on the transform result to obtain an extraction result.
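The grayscale and inverse perspective transformation steps can be illustrated with a minimal sketch (assumptions: standard luminance weights for grayscale, and a hand-picked placeholder homography H; in the patent the mapping would follow from the camera calibration data):

```python
import numpy as np

def to_gray(img_rgb):
    """RGB -> grayscale using the standard luminance weights."""
    return img_rgb @ np.array([0.299, 0.587, 0.114])

# Placeholder homography for the inverse perspective mapping (assumed values)
H = np.array([[1.0, 0.2, -50.0],
              [0.0, 2.0, -100.0],
              [0.0, 0.005, 1.0]])

def ipm(points_uv):
    """Map image pixel coordinates to bird's-eye-view coordinates via H."""
    pts_h = np.hstack([points_uv, np.ones((len(points_uv), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]   # homogeneous normalization

gray = to_gray(np.ones((4, 4, 3)))          # 4x4 white image -> all ~1.0
bev = ipm(np.array([[320.0, 400.0]]))       # one pixel to bird's-eye view
```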
In an embodiment, as shown in fig. 18, the extracting subunit 3023 includes a feature point extracting module 30231 and a filtering module 30232.
A feature point extraction module 30231, configured to perform clustering feature extraction on same-density objects in the transformation result using a DBSCAN clustering algorithm to obtain feature points; a screening module 30232, configured to screen the connection points of the feature points through parameter adjustment and identification to obtain an extraction result.
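A minimal self-contained DBSCAN sketch shows how density clustering separates lane line pixel groups from noise (the eps/min_pts values and the point data are illustrative assumptions; a library implementation such as scikit-learn's DBSCAN would normally be used):

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: return a cluster label per point, -1 = noise."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.where(dist[i] <= eps)[0] for i in range(n)]
    visited = np.zeros(n, bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue                      # not an unvisited core point
        stack, visited[i], labels[i] = [i], True, cluster
        while stack:                      # grow the cluster from core point i
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster   # claim border/unlabeled neighbors
                if not visited[k]:
                    visited[k] = True
                    if len(neighbors[k]) >= min_pts:
                        stack.append(k)   # expand only through core points
        cluster += 1
    return labels

# Two dense strips of lane-line pixels plus one isolated noise point
left = np.column_stack([np.full(10, 0.0), np.arange(10.0)])
right = np.column_stack([np.full(10, 3.5), np.arange(10.0)])
pts = np.vstack([left, right, [[100.0, 100.0]]])
labels = dbscan(pts, eps=1.5, min_pts=3)
```

The two strips come out as separate clusters and the isolated point is labeled noise, mirroring how same-density lane line features are grouped and spurious points rejected.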
In an embodiment, the fitting unit 303 is configured to perform curve fitting on the extracted result based on a moving least square method to obtain a fitting result.
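A moving least squares fit solves, at each evaluation point, a locally weighted polynomial least-squares problem. The sketch below (Gaussian weights, quadratic basis, and sample data all assumed for illustration) shows the idea:

```python
import numpy as np

def mls_fit(xs, ys, x_eval, degree=2, h=2.0):
    """Moving least squares: at each evaluation point, solve a locally
    weighted polynomial least-squares problem with Gaussian weights."""
    out = []
    for xe in np.atleast_1d(x_eval):
        w = np.exp(-((xs - xe) / h) ** 2)    # locality weights around xe
        A = np.vander(xs, degree + 1)        # polynomial basis matrix
        W = np.diag(w)
        coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ ys)
        out.append(np.polyval(coef, xe))
    return np.array(out)

# Samples of a gentle curve, standing in for extracted lane line points
xs = np.linspace(0.0, 10.0, 30)
ys = 0.05 * xs ** 2 + 0.1 * xs
fitted = mls_fit(xs, ys, x_eval=[5.0])
```

Because the weights move with the evaluation point, the fit adapts locally, which suits lane lines whose curvature varies along the road.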
In one embodiment, as shown in fig. 19, the building unit 304 includes a relationship determining subunit 3041 and a function building subunit 3042.
A relationship determination subunit 3041, configured to determine a curve coordinate mapping relationship of the lane line according to the fitting result; the function constructing subunit 3042 is configured to design a lane line potential field function based on the vehicle transverse position according to the curve coordinate mapping relationship of the lane line, so as to construct the lane line potential field.
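The patent's exact potential field formulas are given as equation images, so the sketch below uses a common artificial potential field form with the same variable roles: d_LL/d_LR are the left/right lane line abscissas, d_LC the lane center line abscissa, and a/b the adjustment coefficients. The functional form and all numeric values are assumptions, not the patent's equations:

```python
import numpy as np

d_LL, d_LR, d_LC = -1.75, 1.75, 0.0   # assumed lane line / center abscissas (m)
a, b = 1.0, 0.1                        # assumed adjustment coefficients

def boundary_repulsion(d):
    """Repulsive field: grows as the vehicle approaches either lane line."""
    return b * (1.0 / (d - d_LL) ** 2 + 1.0 / (d_LR - d) ** 2)

def center_attraction(d):
    """Gravitational field: a quadratic well centered on the lane center line."""
    return a * (d - d_LC) ** 2

def potential(d):
    """Composite lane line potential at vehicle abscissa d (curve coordinates)."""
    return boundary_repulsion(d) + center_attraction(d)
```

Whatever the precise form, the qualitative shape is the same: the field is lowest at the lane center and rises steeply toward the lane boundaries, which is what steers the lane keeping controller away from the lines.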
In one embodiment, as shown in FIG. 20, the fusion unit 305 includes a position calculation subunit 3051, an objective function design subunit 3052, and a problem construction subunit 3053.
A position calculation subunit 3051 for calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model; an objective function design subunit 3052, configured to design an objective function by combining the lateral distance, the reference lateral target, and the lane line potential field; and the problem construction subunit 3053, configured to construct an optimization problem by combining the lane line potential field, the objective function, and the vehicle three-degree-of-freedom dynamic model, so as to obtain a lane keeping function.
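The fusion step can be sketched as a predictive control cost combining lateral tracking error, the lane line potential field, and control effort. This is a heavily simplified illustration: a crude kinematic rollout stands in for the patent's three-degree-of-freedom dynamic model, a grid search stands in for its multi-constraint convex optimization, and all weights are assumed:

```python
import numpy as np

def predict_lateral(d0, steer, horizon=5, dt=0.1, v=10.0):
    """Crude kinematic rollout of lateral position over the horizon
    (a stand-in for the patent's three-degree-of-freedom dynamic model)."""
    return d0 + v * dt * steer * np.arange(1, horizon + 1)

def potential(d):
    # composite lane line potential: boundary repulsion + center attraction
    return 0.1 * (1.0 / (d + 1.75) ** 2 + 1.0 / (1.75 - d) ** 2) + d ** 2

def cost(steer, d0=0.8, q=1.0, r=0.5, p=1.0):
    traj = predict_lateral(d0, steer)
    return (q * np.sum(traj ** 2)          # track the lane center (ref = 0)
            + p * np.sum(potential(traj))  # stay inside the potential well
            + r * steer ** 2)              # penalize control effort

# Solve the (here one-dimensional) optimization by dense grid search
candidates = np.linspace(-0.18, 0.18, 361)
best = candidates[int(np.argmin([cost(s) for s in candidates]))]
```

Starting right of center (d0 = 0.8 m), the minimizing steering command is negative, i.e. the optimization steers the vehicle back toward the lane center, which is the behavior the fused objective is meant to produce.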
It should be noted that, as will be clear to those skilled in the art, for the specific implementation process of the lane keeping device 300 and each of its units, reference may be made to the corresponding descriptions in the foregoing method embodiment; for convenience and brevity of description, details are not repeated here.
The lane keeping apparatus 300 described above may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 21.
Referring to fig. 21, fig. 21 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, wherein the server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 21, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 comprises program instructions that, when executed, may cause the processor 502 to perform a lane keeping method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be caused to execute a lane keeping method.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 21 is a block diagram of only a portion of the configuration relevant to the present application and does not limit the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than shown, may combine certain components, or may have a different arrangement of components.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to implement the following steps:
acquiring camera calibration data; extracting a lane line according to the camera calibration data to obtain an extraction result; performing lane line fitting according to the extraction result to obtain a fitting result; constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result; and fusing the lane line potential field and a lane keeping model to obtain a lane keeping function.
In an embodiment, when the processor 502 implements the step of acquiring the camera calibration data, the following steps are specifically implemented:
calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result; and acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
In an embodiment, when the processor 502 implements the step of extracting the lane line according to the camera calibration data to obtain the extraction result, the following steps are specifically implemented:
performing image gray processing on the camera calibration data to obtain a processing result; carrying out inverse perspective transformation on the processing result to obtain a transformation result; and performing feature extraction on the transformation result to obtain an extraction result.
In an embodiment, when implementing the step of extracting the feature of the transformation result to obtain the extraction result, the processor 502 specifically implements the following steps:
using a DBSCAN clustering algorithm to perform clustering feature extraction on same-density objects in the transformation result to obtain feature points; and screening the connection points of the feature points through parameter adjustment and identification to obtain an extraction result.
In an embodiment, when implementing the step of performing lane line fitting according to the extraction result to obtain a fitting result, the processor 502 specifically implements the following steps:
and performing curve fitting on the extraction result based on a moving least square method to obtain a fitting result.
In an embodiment, when implementing the step of constructing the lane line potential field based on the lateral position of the vehicle according to the fitting result, the processor 502 specifically implements the following steps:
determining a curve coordinate mapping relation of the lane line according to the fitting result; and designing a potential field function of the lane line based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct the potential field of the lane line.
In an embodiment, when the step of fusing the lane line potential field and the lane keeping model to obtain the lane keeping function is implemented by the processor 502, the following steps are specifically implemented:
calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model; designing a target function by combining the transverse distance, the reference transverse target and the lane line potential field; and constructing an optimization problem by combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to obtain a lane keeping function.
It should be understood that in the embodiments of the present application, the processor 502 may be a central processing unit (CPU); the processor 502 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program, when executed by a processor, causes the processor to perform the steps of:
acquiring camera calibration data; extracting a lane line according to the camera calibration data to obtain an extraction result; performing lane line fitting according to the extraction result to obtain a fitting result; constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result; and fusing the lane line potential field and a lane keeping model to obtain a lane keeping function.
In an embodiment, when the step of obtaining camera calibration data is implemented by executing the computer program, the processor specifically implements the following steps:
calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result; and acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
In an embodiment, when the processor executes the computer program to implement the step of extracting lane lines according to the camera calibration data to obtain an extraction result, the following steps are specifically implemented:
performing image gray processing on the camera calibration data to obtain a processing result; carrying out inverse perspective transformation on the processing result to obtain a transformation result; and performing feature extraction on the transformation result to obtain an extraction result.
In an embodiment, when the processor executes the computer program to implement the step of extracting the feature of the transformation result to obtain the extraction result, the processor specifically implements the following steps:
using a DBSCAN clustering algorithm to perform clustering feature extraction on same-density objects in the transformation result to obtain feature points; and screening the connection points of the feature points through parameter adjustment and identification to obtain an extraction result.
In an embodiment, when the processor executes the computer program to implement the step of performing lane line fitting according to the extraction result to obtain a fitting result, the following steps are specifically implemented:
and performing curve fitting on the extraction result based on a moving least squares method to obtain a fitting result.
In an embodiment, when the step of constructing the lane line potential field based on the lateral position of the vehicle according to the fitting result is implemented by the processor executing the computer program, the following steps are specifically implemented:
determining a curve coordinate mapping relation of the lane line according to the fitting result; and designing a potential field function of the lane line based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct the potential field of the lane line.
In an embodiment, when the processor executes the computer program to realize the step of fusing the lane line potential field and the lane keeping model to obtain the lane keeping function, the following steps are specifically realized:
calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model; designing a target function by combining the transverse distance, the reference transverse target and the lane line potential field; and constructing an optimization problem by combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to obtain a lane keeping function.
The storage medium may be a USB flash disk, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium that can store program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on this understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the methods of the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A lane keeping method, characterized by comprising:
acquiring camera calibration data;
extracting a lane line according to the camera calibration data to obtain an extraction result;
performing lane line fitting according to the extraction result to obtain a fitting result;
constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result;
fusing the lane line potential field with a lane keeping model to obtain a lane keeping function;
constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result, comprising:
determining a curve coordinate mapping relation of the lane line according to the fitting result;
designing a lane line potential field function based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line to construct a lane line potential field;
designing a lane line potential field function based on the calculated transverse position of the vehicle, and constructing a lane line composite potential field; defining potential field functions on the lane boundaries and the lane center line in the structured road respectively, the lane line potential field function being designed as:

[lane line potential field function; rendered as equation images in the original]

wherein the two field terms are, respectively, the repulsive field at the lane line boundaries and the gravitational field at the lane center line in curve coordinates; b and a are the adjustment coefficients of the lane boundary potential field and the lane center potential field respectively; d_LR and d_LL denote the abscissas of the right and left lane lines respectively; d denotes the abscissa of the target vehicle; and d_LC denotes the abscissa of the lane center line; and combining the identified lane lines with the potential field function to construct a lane line potential field.
2. The lane keeping method of claim 1, wherein said acquiring camera calibration data comprises:
calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result;
and acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
3. The lane keeping method of claim 1, wherein said extracting lane lines from said camera calibration data to obtain an extraction result comprises:
performing image gray processing on the camera calibration data to obtain a processing result;
carrying out inverse perspective transformation on the processing result to obtain a transformation result;
and performing feature extraction on the transformation result to obtain an extraction result.
4. The lane keeping method of claim 3, wherein the performing feature extraction on the transformation result to obtain an extraction result comprises:
using a DBSCAN clustering algorithm to perform clustering feature extraction on same-density objects in the transformation result to obtain feature points;
and screening the connection points of the feature points through parameter adjustment and identification to obtain an extraction result.
5. The lane keeping method according to claim 1, wherein the performing lane line fitting according to the extraction result to obtain a fitting result comprises:
and performing curve fitting on the extraction result based on a moving least square method to obtain a fitting result.
6. The lane keeping method of claim 1, wherein said fusing the lane line potential field with a lane keeping model to derive a lane keeping function comprises:
calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model;
designing a target function by combining the transverse distance, the reference transverse target and the lane line potential field;
and constructing an optimization problem by combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to obtain a lane keeping function.
7. Lane keeping device, characterized by comprising:
the data acquisition unit is used for acquiring camera calibration data;
the lane line extraction unit is used for extracting a lane line according to the camera calibration data to obtain an extraction result;
the fitting unit is used for fitting the lane line according to the extraction result to obtain a fitting result;
the construction unit is used for constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result;
the fusion unit is used for fusing the lane line potential field with a lane keeping model to obtain a lane keeping function;
the construction unit comprises a relation determination subunit and a function construction subunit;
the relation determining subunit is used for determining the curve coordinate mapping relation of the lane line according to the fitting result; the function construction subunit is used for designing a lane line potential field function based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct a lane line potential field;
designing a lane line potential field function based on the calculated transverse position of the vehicle, and constructing a lane line composite potential field; defining potential field functions on the lane boundaries and the lane center line in the structured road respectively, the lane line potential field function being designed as:

[lane line potential field function; rendered as equation images in the original]

wherein the two field terms are, respectively, the repulsive field at the lane line boundaries and the gravitational field at the lane center line in curve coordinates; b and a are the adjustment coefficients of the lane boundary potential field and the lane center potential field respectively; d_LR and d_LL denote the abscissas of the right and left lane lines respectively; d denotes the abscissa of the target vehicle; and d_LC denotes the abscissa of the lane center line; and combining the identified lane lines with the potential field function to construct a lane line potential field.
8. A computer device, characterized in that the computer device comprises a memory, on which a computer program is stored, and a processor, which when executing the computer program implements the method according to any of claims 1 to 6.
9. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN202211482598.3A 2022-11-24 2022-11-24 Lane keeping method, lane keeping apparatus, computer device, and storage medium Active CN115546319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211482598.3A CN115546319B (en) 2022-11-24 2022-11-24 Lane keeping method, lane keeping apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211482598.3A CN115546319B (en) 2022-11-24 2022-11-24 Lane keeping method, lane keeping apparatus, computer device, and storage medium

Publications (2)

Publication Number Publication Date
CN115546319A CN115546319A (en) 2022-12-30
CN115546319B true CN115546319B (en) 2023-03-28

Family

ID=84720587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211482598.3A Active CN115546319B (en) 2022-11-24 2022-11-24 Lane keeping method, lane keeping apparatus, computer device, and storage medium

Country Status (1)

Country Link
CN (1) CN115546319B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021253245A1 (en) * 2020-06-16 2021-12-23 华为技术有限公司 Method and device for identifying vehicle lane changing tendency

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113525375B (en) * 2020-04-21 2023-07-21 宇通客车股份有限公司 Vehicle lane changing method and device based on artificial potential field method
CN112319469A (en) * 2020-11-16 2021-02-05 深圳市康士柏实业有限公司 Lane keeping auxiliary system and method based on machine vision
US11753009B2 (en) * 2021-04-30 2023-09-12 Nissan North America, Inc. Intelligent pedal lane change assist

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021253245A1 (en) * 2020-06-16 2021-12-23 华为技术有限公司 Method and device for identifying vehicle lane changing tendency

Also Published As

Publication number Publication date
CN115546319A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
CN110232169B (en) Track denoising method based on bidirectional long-time and short-time memory model and Kalman filtering
WO2022007776A1 (en) Vehicle positioning method and apparatus for target scene region, device and storage medium
CN111311685A (en) Motion scene reconstruction unsupervised method based on IMU/monocular image
CN109887021B (en) Cross-scale-based random walk stereo matching method
CN110487286B (en) Robot pose judgment method based on point feature projection and laser point cloud fusion
CN109214422B (en) Parking data repairing method, device, equipment and storage medium based on DCGAN
CN114565616B (en) Unstructured road state parameter estimation method and system
Xie et al. A binocular vision application in IoT: Realtime trustworthy road condition detection system in passable area
CN110992424B (en) Positioning method and system based on binocular vision
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN116612468A (en) Three-dimensional target detection method based on multi-mode fusion and depth attention mechanism
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN113916565B (en) Steering wheel zero deflection angle estimation method and device, vehicle and storage medium
CN114578807A (en) Active target detection and obstacle avoidance method for unmanned target vehicle radar vision fusion
CN116702607A (en) BIM-FEM-based bridge structure digital twin body and method
WO2024041447A1 (en) Pose determination method and apparatus, electronic device and storage medium
CN113191427B (en) Multi-target vehicle tracking method and related device
CN111553954B (en) Online luminosity calibration method based on direct method monocular SLAM
CN115546319B (en) Lane keeping method, lane keeping apparatus, computer device, and storage medium
CN113420590A (en) Robot positioning method, device, equipment and medium in weak texture environment
CN113063412A (en) Multi-robot cooperative positioning and mapping method based on reliability analysis
CN112529011A (en) Target detection method and related device
CN114280583B (en) Laser radar positioning accuracy verification method and system without GPS signal
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 301, 302, Floor 3, Building A, Subzone 3, Leibai Zhongcheng Life Science Park, No. 22, Jinxiu East Road, Jinsha Community, Kengzi Street, Pingshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Automotive Research Institute Beijing University of Technology

Address before: Floor 19, block a, innovation Plaza, 2007 Pingshan street, Pingshan District, Shenzhen, Guangdong 518000

Applicant before: Shenzhen Automotive Research Institute Beijing University of Technology

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230515

Address after: No. 301, 302, Floor 3, Building A, Subzone 3, Leibai Zhongcheng Life Science Park, No. 22, Jinxiu East Road, Jinsha Community, Kengzi Street, Pingshan District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Automotive Research Institute Beijing University of Technology

Patentee after: BYD AUTO Co.,Ltd.

Address before: No. 301, 302, Floor 3, Building A, Subzone 3, Leibai Zhongcheng Life Science Park, No. 22, Jinxiu East Road, Jinsha Community, Kengzi Street, Pingshan District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen Automotive Research Institute Beijing University of Technology