Disclosure of Invention
The invention provides a lane line identification method and device and a vehicle.
Specifically, the invention is realized by the following technical scheme:
according to a first aspect of the present invention, there is provided a lane line identification method, the method comprising:
identifying all line segments in the road image;
for each line segment, determining an associated line segment of the line segment from the other line segments, and calculating the contribution degree of the associated line segment of the line segment to the line segment, wherein the contribution degree is used for representing the degree of influence of the associated line segment on the line segment being a lane line;
calculating the score of each line segment according to the length of the line segment and the contribution degree of the associated line segment of the line segment to the line segment, wherein the score is used for representing the possibility that the line segment is a lane line;
and determining the line segment with the highest score and the associated line segment of the line segment with the highest score as the lane line.
According to a second aspect of the present invention, there is provided a lane line identification device comprising:
storage means for storing program instructions;
a processor calling program instructions stored in the storage device, the program instructions when executed operable to:
identifying all line segments in the road image;
for each line segment, determining an associated line segment of the line segment from the other line segments, and calculating the contribution degree of the associated line segment of the line segment to the line segment, wherein the contribution degree is used for representing the degree of influence of the associated line segment on the line segment being a lane line;
calculating the score of each line segment according to the length of the line segment and the contribution degree of the associated line segment of the line segment to the line segment, wherein the score is used for representing the possibility that the line segment is a lane line;
and determining the line segment with the highest score and the associated line segment of the line segment with the highest score as the lane line.
According to a third aspect of the present invention, there is provided a vehicle comprising:
a vehicle body;
a camera fixed on the vehicle body; and
a processor electrically connected to the camera;
wherein the camera is used for capturing road images in front of the vehicle and sending the road images to the processor, and the processor is used for:
identifying all line segments in the road image;
for each line segment, determining an associated line segment of the line segment from the other line segments, and calculating the contribution degree of the associated line segment of the line segment to the line segment, wherein the contribution degree is used for representing the degree of influence of the associated line segment on the line segment being a lane line;
calculating the score of each line segment according to the length of the line segment and the contribution degree of the associated line segment of the line segment to the line segment, wherein the score is used for representing the possibility that the line segment is a lane line;
and determining the line segment with the highest score and the associated line segment of the line segment with the highest score as the lane line.
According to a fourth aspect of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
identifying all line segments in the road image;
for each line segment, determining an associated line segment of the line segment from the other line segments, and calculating the contribution degree of the associated line segment of the line segment to the line segment, wherein the contribution degree is used for representing the degree of influence of the associated line segment on the line segment being a lane line;
calculating the score of each line segment according to the length of the line segment and the contribution degree of the associated line segment of the line segment to the line segment, wherein the score is used for representing the possibility that the line segment is a lane line;
and determining the line segment with the highest score and the associated line segment of the line segment with the highest score as the lane line.
According to the technical scheme provided by the embodiments of the invention, the length of each line segment in the road image and the contribution degrees of its associated line segments are considered together when judging the possibility that the line segment is a lane line. Through combinatorial optimization, the detected lane line result is made to conform to the actual lane lines as closely as possible, which improves the robustness of lane line detection.
Example one
Fig. 1 is a flowchart of a method for identifying a lane line according to an embodiment of the present invention. Referring to fig. 1, the lane line identification method may include the steps of:
step S101: identifying all line segments in the road image;
In practice, a lane line may be straight or curved, so both straight line segments and curved line segments in the road image are considered as suspected lane lines. By executing step S101, this embodiment identifies all straight line segments and/or curved line segments in the road image; that is, the line segments of this embodiment may include straight line segments and/or curved line segments.
In this embodiment, all line segment regions in the road image are first segmented, and then all line segments are identified based on a line segment detection algorithm. The manner of segmenting the line segment regions in the road image may be selected as needed. In some examples, the line segment regions are segmented based on a CNN (Convolutional Neural Network), for example by CNN-based semantic segmentation. Optionally, a deep learning algorithm is used to train on a large number of road image samples to obtain a lane line model, and the current road image is input into the lane line model to obtain all line segment regions (including straight line segment regions and/or curved line segment regions) in the current road image.
In other examples, all the line segment regions in the road image are segmented based on an edge detection algorithm. Specifically, the edges of all line segments in the road image are detected based on an edge detection algorithm, so that all line segment areas in the road image are segmented.
The line segment detection algorithm of this embodiment may be the Hough transform, or another line segment detection algorithm; the specific type of line segment detection algorithm may be selected as needed. Based on the line segment detection algorithm, this embodiment can identify parameter information of all line segments in the road image, for example the lengths of the line segments and the positional relationships between them (such as the included angles between line segments and/or the spacings between line segments).
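The parameter information named above (segment lengths, included angles, spacings) can be computed directly from segment endpoints once a detector such as the Hough transform has returned them. A minimal sketch, assuming segments are represented as endpoint pairs and spacing is taken as the perpendicular distance from one segment's midpoint to the other segment's line (both representations are assumptions, not specified by the text):

```python
import math

def segment_length(seg):
    # seg is an endpoint pair ((x1, y1), (x2, y2)); length in pixels.
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def included_angle(seg_a, seg_b):
    # Smallest angle between the two segment directions, folded into [0, pi/2].
    def direction(s):
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1)
    d = abs(direction(seg_a) - direction(seg_b)) % math.pi
    return min(d, math.pi - d)

def spacing(seg_a, seg_b):
    # Perpendicular distance from seg_b's midpoint to the line through seg_a.
    (x1, y1), (x2, y2) = seg_a
    (x3, y3), (x4, y4) = seg_b
    mx, my = (x3 + x4) / 2.0, (y3 + y4) / 2.0
    cross = (x2 - x1) * (my - y1) - (y2 - y1) * (mx - x1)
    return abs(cross) / segment_length(seg_a)
```

For two vertical segments three pixels apart, for instance, the included angle is 0 and the spacing is 3.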
The lane line recognition method of this embodiment can be applied to vehicles, particularly unmanned vehicles. Road images in front of the vehicle can be captured by a camera on the vehicle; such road images are generally front views. In a front view, lane markers such as road surface arrows and lane lines may be distorted, and the distorted shape depends on the position of the vehicle; line segments farther from the vehicle are harder to recognize, and the same lane marker has poor consistency across the front view, making accurate recognition difficult. To improve the accuracy of lane line identification, this embodiment performs image rectification on the road image before identifying the line segments. The rectification method can be selected as required; in one embodiment, the road image is projected to a corresponding top view based on an inverse perspective transformation. Projecting the road image to a top view restores road surface markers such as lane lines and arrows to their real dimensions and proportions, so that the markers in the top view are easier to identify. In addition, the positions of road-surface pixel points in the top view correspond directly to real-world positions, so the positional relationship between a pixel point and the vehicle can be obtained directly from the top view, which meets the requirements of basic ADAS functions and automatic driving functions.
Specifically, based on the inverse perspective transformation, the projecting the road image to the corresponding top view may include the following steps:
(1) calibrating the intrinsic parameters of the camera and the extrinsic parameters of the camera relative to the ground;
The intrinsic parameters of the camera include f_x and f_y, which characterize the focal length of the camera, and c_x and c_y, which characterize the position at which the optical axis of the lens passes through the imaging sensor. The calibration of the camera's intrinsic parameters may use existing calibration algorithms, which will not be described in detail here.
The extrinsic parameters of the camera relative to the ground comprise a rotation matrix R and a translation vector T, which are respectively the rotation and translation of the camera relative to the object plane; in this embodiment, the object plane is the plane in which the lane lines lie. T can be obtained from the height of the camera above the ground. The calibration of R is realized indirectly by calibrating the pitch angle of the camera relative to the ground (the ground at the moment the camera captures the current road image), the roll angle of the camera relative to the ground, and the yaw angle of the camera relative to the direction straight ahead of the vehicle. Pitch, roll and yaw are respectively the rotation angles of the camera about its own coordinate axes x, y and z, denoted ψ, θ and φ. The rotation matrices corresponding to the three axes, R_x(ψ), R_y(θ) and R_z(φ), can be calculated from these three angles, and R is then calculated from the three axis rotation matrices.
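As a sketch of how R can be assembled from the three per-axis rotations: the composition order used below, R = R_z(φ)·R_y(θ)·R_x(ψ), is a common convention but an assumption here, since the text does not fix the order.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotation_from_angles(pitch, roll, yaw):
    # Compose R from the three axis rotations; the Z-Y-X order is an
    # assumed convention, not specified by the text.
    return rot_z(yaw) @ rot_y(roll) @ rot_x(pitch)
```

Any valid composition yields a proper rotation: R is orthonormal with determinant 1, which is a useful sanity check after calibration.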
(2) calculating a projection matrix H for mapping pixel points in the road image to the top view;
Mapping points of the object plane coordinate system onto the image coordinate system can be expressed as:
s·[u, v, 1]^T = M·[r1 r2 r3 t]·[X, Y, Z, 1]^T (1);
wherein u and v are pixel coordinates in the road image coordinate system;
s is a normalization coefficient;
M is the intrinsic parameter matrix of the camera;
[r1 r2 r3 t] is the extrinsic parameter of the camera relative to the object plane, i.e. the positional relationship;
r1, r2, r3 are 3-by-1 column vectors, and r1, r2, r3 form the rotation matrix R;
t is a 3-by-1 column vector representing the translation of the camera relative to the object plane;
X, Y, Z denote coordinates in the object plane coordinate system.
Assuming that the object lies in one plane so that Z is zero, equation (1) can be expressed as:
s·[u, v, 1]^T = M·[r1 r2 t]·[X, Y, 1]^T (2);
and the projection matrix is defined as
H = s·M·[r1 r2 t] (3);
(3) the road image is projected to the top view according to the projection matrix H.
Substituting equation (3) into equation (2) yields:
[u, v, 1]^T = H·[X, Y, 1]^T (4);
If the X, Y object plane coordinates are taken as ground coordinates, then multiplying both sides of equation (4) by H^(-1) gives the inverse of the ground-to-road-image mapping:
[X, Y, 1]^T = H^(-1)·[u, v, 1]^T (5);
Substituting the coordinates of each pixel point of the road image into equation (5) yields the coordinates of each pixel point in the top view.
When points of the object plane are projected to the top view using the inverse perspective transformation, only points actually on the object plane are handled accurately; the projection of points not on the object plane contains errors. For example, the bars of a guardrail are very close to a roadside lane line from a true top-down viewpoint, but projecting points on the guardrail to the top view with the inverse perspective transformation is inaccurate, which can cause the guardrail to be falsely detected as a lane line. In this embodiment, the road image is projected to the top view using the inverse perspective transformation and lane line detection is then performed on the top view, where the lane line prior conditions make it easier to distinguish real lane lines from false lane lines.
In this embodiment, all line segments in the top view are identified, specifically, all line segment areas in the top view are segmented first, and then all line segments are identified based on a line segment detection algorithm.
Step S102: for each line segment, determining the associated line segment of the line segment from other line segments, and calculating the contribution degree of the associated line segment of the line segment to the line segment, wherein the contribution degree is used for representing the influence degree of the associated line segment on the line segment as a lane line;
Specifically, referring to fig. 2, the associated line segments of a line segment are determined from the other line segments according to the positional relationship between the line segment and the other line segments and the preset lane line prior conditions. Optionally, the positional relationship includes the included angle between the line segment and the other line segments, and/or the spacing between the line segment and the other line segments; the positional relationship may also be defined according to other positional relationships between real lane lines. In one embodiment, the positional relationship includes the included angle between the line segment and the other line segments. In another embodiment, it includes the spacing between the line segment and the other line segments. In yet another embodiment, it includes both the included angle and the spacing between the line segment and the other line segments.
The following description of this embodiment takes the case in which the positional relationship includes both the included angle between the line segment and the other line segments and the spacing between them.
Lane lines on an actual road are as parallel as possible, and the spacing between adjacent lane lines is approximately in the range of 2.5 meters to 4.2 meters. Due to the shooting angle and other factors, a small included angle generally exists between lane lines in the road image; the smaller the included angle between two line segments in the road image, the more the two segments tend toward a parallel relationship on the actual road, and the higher the possibility that they are lane lines. The preset lane line prior condition of this embodiment therefore includes that the included angle between lane lines is within a preset included angle range, where the size of the range can be set according to the condition that lane lines are as parallel as possible.
Further, the preset lane line prior condition also includes that the spacing between lane lines is an integral multiple of a preset distance. The preset distance is usually a value, or range of values, obtained by proportionally scaling the real spacing between adjacent lane lines; after the preset distance is determined, the closer the ratio of the spacing between two line segments to the preset distance is to an integer, the more likely the two segments are lane lines. In this embodiment, the preset distance may be set according to the condition that the spacing between adjacent lane lines is approximately in the range of 2.5 meters to 4.2 meters.
The identified line segments are preliminarily screened using the preset lane line prior conditions, filtering out segments that are obviously not lane lines, such as segments belonging to arrows or vehicle edges, guardrail segments whose spacing to a normal lane line does not conform to the real spacing, and segments that are not parallel to normal lane lines.
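The two prior conditions can be sketched as a pairwise predicate; the angle threshold and spacing tolerance below are illustrative values, not taken from the text.

```python
def is_candidate_pair(angle_deg, spacing, preset_spacing,
                      max_angle_deg=5.0, tol=0.15):
    # Two segments may belong to the lane-line set only if they are
    # nearly parallel (prior condition 1) and their spacing is close to
    # an integral multiple of the preset distance (prior condition 2).
    # max_angle_deg and tol are illustrative thresholds.
    if angle_deg > max_angle_deg:
        return False
    ratio = spacing / preset_spacing
    return abs(ratio - round(ratio)) <= tol and round(ratio) >= 1
```

Segments from arrows or guardrails typically fail one of the two tests: they are either not parallel to the lane direction or sit at a spacing that is no multiple of the preset distance.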
In this step, if the current line segment is a straight line segment, the determined line segment associated with the current line segment is a straight line segment. If the current line segment is a curve segment, the determined associated line segment of the current line segment is the curve segment.
During processing, for a straight line segment, the associated straight line segments can be determined directly from the other straight line segments according to the positional relationships between them and the preset lane line prior conditions. For a curved segment, the curve is usually divided into a plurality of sections, each of which approximates a straight line segment, and the associated curve sections of each section are then determined; the determination process is similar to that for straight line segments and is not repeated here. When processing a straight line segment, the straight line segment may likewise be divided into a plurality of sections.
Referring to fig. 3, after determining the associated line segment of each line segment, for each line segment, the contribution degree of the associated line segment of the line segment to the line segment is calculated according to the position relationship between the line segment and the associated line segment of the line segment, the length of the associated line segment of the line segment, and the length of the line segment. The position relation between the line segment and the associated line segment of the line segment comprises an included angle between the line segment and the associated line segment of the line segment and a distance between the line segment and the associated line segment.
For each line segment, the specific steps of calculating the contribution degree of the associated line segment of the line segment to the line segment according to the position relationship between the line segment and the associated line segment of the line segment, the length of the associated line segment of the line segment, and the length of the line segment can be seen in fig. 4:
step S401: calculating the score of each associated line segment according to the length of the associated line segment and the contribution degree of other associated line segments of the line segment to the associated line segment;
step S402: and aiming at each line segment, calculating the contribution degree of each associated line segment to the line segment according to the score of each associated line segment, the included angle between each associated line segment and the line segment, the ratio of the distance between each associated line segment and the line segment to a preset distance and the length of the line segment.
In a possible implementation manner, when step S401 is executed, the left associated line segments located on the left side of the line segment and the right associated line segments located on the right side of the line segment are first determined according to the positional relationship between the line segment and its associated line segments. Then, for each left associated line segment of each line segment, the score of the left associated line segment is calculated according to its length and the contribution degrees of its own left associated line segments to it; likewise, for each right associated line segment of each line segment, the score of the right associated line segment is calculated according to its length and the contribution degrees of its own right associated line segments to it. The scores of the left associated line segments and of the right associated line segments can be calculated simultaneously or sequentially.
It should be noted that, in the embodiments of the present invention, the other left associated line segments of a left associated line segment are the associated line segments located on its left side, and the other right associated line segments of a right associated line segment are the associated line segments located on its right side. For example, assume the associated line segments identified in the road image are line segment 1, line segment 2, line segment 3, line segment 4 and line segment 5, arranged from left to right. Taking line segment 3 as an example, its left associated line segments comprise line segment 1 and line segment 2, and its right associated line segments comprise line segment 4 and line segment 5; line segment 1 has no left associated line segment; the left associated line segment of line segment 2 comprises line segment 1; the right associated line segment of line segment 4 comprises line segment 5; and line segment 5 has no right associated line segment.
When step S402 is executed, for each line segment, calculating a left contribution degree of each left associated line segment to the line segment according to the score of each left associated line segment, an included angle between each left associated line segment and the line segment, a ratio of a distance between each left associated line segment and the line segment to a preset distance, and a length of the line segment; and calculating the right contribution degree of each right associated line segment to each line segment according to the score of each right associated line segment, the included angle between each right associated line segment and the line segment, the ratio of the distance between each right associated line segment and the line segment to a preset distance and the length of the line segment.
Specifically, for the ith line segment, the left contribution degree CS^L_{i,j} of the jth left associated line segment to the ith line segment is calculated by equation (6), and the right contribution degree CS^R_{i,j} of the jth right associated line segment to the ith line segment is calculated by equation (7), wherein:
i and j are positive integers, i ∈ (1, n), j ∈ (1, n), and n is the number of line segments;
L_i is the length of the ith line segment;
k1 is a first preset coefficient, and k1 > 0; in this example, k1 can be set as desired, and the larger k1 is set, the more the contribution degree is influenced by α;
α is the included angle between the ith line segment and the jth left associated line segment or jth right associated line segment;
δ is the ratio of the spacing between the ith line segment and the jth left associated line segment or jth right associated line segment to the preset distance;
k2 is a second preset coefficient, and 0 < k2 < 1; in this example, k2 can be set as desired, and the smaller k2 is set, the more the contribution degree is influenced by δ;
S^L_j is the score of the jth left associated line segment, and S^R_j is the score of the jth right associated line segment.
It should be noted that when no associated line segment exists to the left of the jth left associated line segment, its score is determined by its length alone; likewise, when no associated line segment exists to the right of the jth right associated line segment, its score is determined by its length alone.
It will be appreciated that the calculation of CS^L_{i,j} is not limited to equation (6) listed in this embodiment, and the calculation of CS^R_{i,j} is not limited to equation (7) listed in this embodiment.
Step S103: calculating the score of each line segment according to the length of each line segment and the contribution degree of the associated line segment of the line segment to the line segment, wherein the score is used for representing the possibility that the line segment is a lane line;
In this embodiment, the score of the ith line segment is S_i, and S_i is calculated as follows:
S_i = L_i + CS_{i,N} (8);
In equation (8), L_i is the length of the ith line segment, and CS_{i,N} is the contribution degree of the Nth associated line segment of the ith line segment to the ith line segment.
It is understood that the calculation of the score S_i of the ith line segment is not limited to equation (8) above.
To simplify the calculation process, when step S103 is executed, for each line segment the score is calculated from the length of the line segment, the maximum value max_j CS^L_{i,j} of the left contribution degrees of the line segment's left associated line segments to the line segment, and the maximum value max_j CS^R_{i,j} of the right contribution degrees of the line segment's right associated line segments to the line segment. The two maxima can be weighted with different coefficients before the score is calculated, the specific weighting coefficients being determined by the application scenario; in this scheme both weighting coefficients are 1, i.e. the two contributions are treated as equivalent. Optionally, the score of each line segment is the sum of the length of the line segment, the maximum left contribution degree, and the maximum right contribution degree, so that equation (8) simplifies to:
S_i = L_i + max_j CS^L_{i,j} + max_j CS^R_{i,j} (9);
The calculation of the contribution degrees CS^L_{i,j} and CS^R_{i,j} is described in step S102 above and is not repeated here.
The score of the ith line segment can be calculated through equation (9).
For example, for line segment 3 identified in the road image, the associated line segments of line segment 3 are determined to be line segment 1, line segment 2, line segment 4 and line segment 5, where line segment 1 and line segment 2 are located on the left side of line segment 3, and line segment 4 and line segment 5 are located on the right side of line segment 3. When calculating the score of line segment 3, the contribution degrees of line segment 1 and line segment 2 to line segment 3 and the contribution degrees of line segment 4 and line segment 5 to line segment 3 need to be calculated.
When calculating the contribution degree of line segment 1 to line segment 3, the score of line segment 1 is first determined according to the length of line segment 1 alone, since no associated line segment exists on the left side of line segment 1; the contribution degree of line segment 1 to line segment 3 is then determined according to the score of line segment 1, the positional relationship between line segment 1 and line segment 3 (the included angle between them, and the ratio of their spacing to the preset distance), and the length of line segment 3.
When calculating the contribution degree of line segment 2 to line segment 3, the score of line segment 2 is first determined according to the length of line segment 2 and the contribution degree of line segment 1 to line segment 2; the contribution degree of line segment 2 to line segment 3 is then determined according to the score of line segment 2, the positional relationship between line segment 2 and line segment 3 (the included angle between them, and the ratio of their spacing to the preset distance), and the length of line segment 3.
When calculating the contribution degree of line segment 4 to line segment 3, the score of line segment 4 is first determined according to the length of line segment 4 and the contribution degree of line segment 5 to line segment 4; the contribution degree of line segment 4 to line segment 3 is then determined according to the score of line segment 4, the positional relationship between line segment 4 and line segment 3 (the included angle between them, and the ratio of their spacing to the preset distance), and the length of line segment 3.
When calculating the contribution degree of line segment 5 to line segment 3, the score of line segment 5 is determined according to the length of line segment 5 alone, since no associated line segment exists on the right side of line segment 5; the contribution degree of line segment 5 to line segment 3 is then determined according to the score of line segment 5, the positional relationship between line segment 5 and line segment 3 (the included angle between them, and the ratio of their spacing to the preset distance), and the length of line segment 3.
After calculating the contributions of line segments 1, 2, 4 and 5 to line segment 3, the score of line segment 3 is calculated from the length of line segment 3, the maximum of the contributions of line segments 1 and 2 to line segment 3, and the maximum of the contributions of line segments 4 and 5 to line segment 3.
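The worked example above (segments 1 to 5, ordered left to right) can be sketched as two directional sweeps followed by equation (9). Since the exact contribution formulas are equations (6) and (7), the contribution function is passed in by the caller here; `contrib(score_j, j, i)` returns the contribution of segment j to segment i given j's directional score.

```python
def lane_scores(lengths, contrib):
    """Score every segment per the worked example and equation (9).

    lengths: segment lengths, indexed left to right.
    contrib: callable (score_j, j, i) -> contribution of segment j to
             segment i; its exact form is the patent's eq. (6)/(7).
    """
    n = len(lengths)
    # Left sweep: a segment's left score is its length plus the best
    # contribution from its own left neighbours (zero when none exist,
    # as for segment 1 in the example).
    left_score = [0.0] * n
    for j in range(n):
        best = max((contrib(left_score[k], k, j) for k in range(j)), default=0.0)
        left_score[j] = lengths[j] + best
    # Right sweep: symmetric, from the rightmost segment inward.
    right_score = [0.0] * n
    for j in range(n - 1, -1, -1):
        best = max((contrib(right_score[k], k, j) for k in range(j + 1, n)), default=0.0)
        right_score[j] = lengths[j] + best
    # Equation (9): S_i = L_i + max left contribution + max right contribution.
    scores = []
    for i in range(n):
        left = max((contrib(left_score[k], k, i) for k in range(i)), default=0.0)
        right = max((contrib(right_score[k], k, i) for k in range(i + 1, n)), default=0.0)
        scores.append(lengths[i] + left + right)
    return scores
```

With a toy contribution that halves the neighbour's score, five segments of lengths [1, 2, 3, 2, 1] give the middle segment the highest score, matching the intuition that the segment best supported from both sides wins.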
Step S104: and determining the line segment with the highest score and the associated line segment of the line segment with the highest score as the lane line.
After step S103 is executed, the scores of all line segments in the road image are obtained, and the line segment with the highest score among all line segments is determined:
i* = argmax_i S_i (10);
The line segment with the highest score and the associated line segments of that line segment are determined as the lane lines through equation (10).
The lane line identification method of this embodiment considers both the length of each line segment in the road image and the contribution degrees of its associated line segments when judging the possibility that the segment is a lane line, and makes the detected lane line result conform to the actual lane lines as closely as possible through combinatorial optimization, thereby improving the robustness of lane line detection.
Based on the lengths of the line segments, the included angles between them, and the ratio of the spacing between them to the preset distance, line segments that would otherwise be falsely detected as lane lines (such as arrows, sidewalks, road surface text, guardrails and vehicle edges) can be filtered out, and an optimal combination is selected as the lane lines, reducing the false detection rate of lane line detection.