Disclosure of Invention
The invention aims to solve the problems that, while a vehicle is running, the gradient of the road ahead cannot be predicted with high precision and the cross slope cannot be measured at all, and provides a real-time gradient prediction method based on a binocular camera, together with a system for implementing the method.
In order to achieve this purpose, the invention provides the following technical solution:
a binocular camera-based real-time gradient prediction method, comprising the following specific steps:
detecting a road lane curve through a binocular camera;
making two auxiliary lines perpendicular to the lane curve, wherein the auxiliary lines and the lane line intersect at four reference points;
calculating the coordinates of the four reference points in a pixel coordinate system through a lane line fitting equation and an auxiliary line equation;
obtaining coordinates of a world coordinate system of four reference points through depth measurement;
and calculating and outputting the gradient of the road ahead by substituting the world coordinates of the four reference points into a prestored gradient formula.
Detecting the road lane curve with the binocular camera specifically comprises:
acquiring images of the lane lines ahead with the left and right cameras of the binocular camera, and obtaining the intrinsic and extrinsic parameters of the two cameras through camera calibration;
rectifying the left and right images so that the two images lie in the same plane and are parallel to each other;
acquiring the image coordinates of the lane line feature points by a lane line detection method, and obtaining the lane line fitting quadratic functions by the least squares method:
Y = a1Y·X² + b1Y·X + c1Y,  Y = a2Y·X² + b2Y·X + c2Y
Y = a1Z·X² + b1Z·X + c1Z,  Y = a2Z·X² + b2Z·X + c2Z
wherein a1Y, b1Y, c1Y and a2Y, b2Y, c2Y are the coefficients of the first and second lane line functions in the right-eye camera image; a1Z, b1Z, c1Z and a2Z, b2Z, c2Z are the coefficients of the first and second lane line functions in the left-eye camera image.
For a straight road, the auxiliary line equations are the initial auxiliary line equations, which are obtained as follows:
two datum lines perpendicular to the lane lines are marked on a road in the world coordinate system, with the optical axis of the binocular camera kept parallel to the lane lines; the lane line detection method then gives the auxiliary lines l1Z, l2Z and l1Y, l2Y corresponding to the two datum lines in the left and right camera images:
l1Z: Y = k1Z·X + b1Z,  l2Z: Y = k2Z·X + b2Z
l1Y: Y = k1Y·X + b1Y,  l2Y: Y = k2Y·X + b2Y
wherein k1Z, b1Z and k1Y, b1Y are the slope and intercept of the first auxiliary line in the left-eye and right-eye camera images respectively; k2Z, b2Z and k2Y, b2Y are the slope and intercept of the second auxiliary line in the left-eye and right-eye camera images respectively;
for a curved road, the steering angle prediction module first predicts the steering angle α of the road ahead;
the auxiliary line rotation angle β is then obtained from the predicted steering angle, with β = α, which gives the function expressions of the rotated auxiliary lines in the left and right camera images,
wherein k'1Z, k'2Z and k'1Y, k'2Y are the slopes of the function expressions of the rotated auxiliary lines in the left and right camera images respectively; for a line of slope k rotated through the angle β, the rotated slope is k' = (k + tan β)/(1 − k·tan β).
the coordinates of auxiliary points in the left image and the right image are obtained by a lane line fitting equation and an auxiliary line equation respectively, wherein the auxiliary points are respectively the left side of P11, the left side of P21, the left side of P12, the left side of P22, the right side of P11, the right side of P21, the right side of P12 and the right side of P22.
The world coordinates of the four auxiliary points obtained by binocular camera depth measurement are as follows:
P11(X11, Y11, Z11), P21(X21, Y21, Z21), P12(X12, Y12, Z12), P22(X22, Y22, Z22).
A binocular camera-based real-time gradient prediction system comprises:
A binocular camera for acquiring a front lane line image,
a processor comprising
The calibration module is used for obtaining the intrinsic and extrinsic parameters of the left and right cameras through camera calibration, and for rectifying the left and right images so that the two images lie in the same plane and are parallel to each other;
the lane line feature point image acquisition module is used for making two auxiliary lines perpendicular to the lane curve, the auxiliary lines intersecting the lane lines at four reference points, and for acquiring the image coordinates of the lane line feature points by the lane line detection method;
the lane line fitting module is used for fitting the lane line characteristic points into a quadratic function curve segment by a least square method; calculating the coordinates of the four reference points in a pixel coordinate system through a lane line fitting equation and an auxiliary line equation;
the depth detection module is used for obtaining coordinates of a world coordinate system of four reference points through depth measurement;
the gradient calculation module is used for calculating and outputting the gradient of the road ahead by substituting the world coordinates of the four reference points into the prestored gradient formula; and
the output module is used for outputting the relative cross slope and longitudinal slope values of the road ahead.
A computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform any one of the methods above.
Compared with the prior art, the invention has the beneficial effects that:
the invention can predict the slope value of the road ahead in real time through the binocular camera, and can also predict the cross slope value and the longitudinal slope value of the curve.
The slope value is calculated by projecting the auxiliary points on the lane line into the world coordinates, so that the precision is higher.
The results obtained by the lane line detection module and the steering angle prediction module in the invention can be used in other auxiliary driving systems or intelligent driving technologies, such as: lane departure warning, autonomous driving above L3, and the like.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The binocular camera-based gradient real-time prediction method comprises a lane line fitting step and a road gradient calculation step.
Embodiment 1, as shown in Fig. 1:
the gradient real-time prediction method based on the binocular camera comprises the following steps:
S10: acquiring images of the lane lines ahead with the left and right cameras of the binocular camera, and obtaining the intrinsic and extrinsic parameters of the two cameras through camera calibration.
S20: for the lane line images, acquiring the image coordinates of the lane line feature points by the lane line detection method, and obtaining the fitted quadratic function of each lane line by the lane line fitting method.
S30: combining the auxiliary line functions prestored in the system with the lane line functions calculated in S20 to obtain the image coordinates of the four auxiliary points in the left and right cameras respectively.
S40: and obtaining world coordinates corresponding to the four auxiliary points by using a depth detection module of the binocular camera.
S50: and calculating the front slope value by the slope calculating party.
In order to clearly illustrate the binocular camera-based gradient real-time prediction method, the following is a description of the steps in the embodiment of the method of the present invention with reference to fig. 2.
The binocular camera-based gradient real-time prediction method comprises the following steps of S10-S50, wherein the steps are as follows:
Step S10: acquiring images of the lane lines ahead with the left and right cameras of the binocular camera; obtaining the intrinsic and extrinsic parameters of the two cameras through camera calibration; and rectifying the left and right images.
In one example of the invention, the binocular camera is mounted on the top of the vehicle so that the road ahead can be clearly photographed.
In this example a binocular camera is used, but the same function can be realized with two monocular cameras; the calibration method adopted in this example is the Zhang Zhengyou calibration method, but other binocular camera calibration methods may also be adopted. The invention places no restriction on either choice.
The two images are rectified with the intrinsic and extrinsic parameters obtained by calibration, so that they lie in the same plane and are parallel to each other.
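By way of illustration only, the following is a minimal OpenCV-based sketch of this rectification step; it assumes calibration has already been performed offline, and the parameter file name and image names are assumptions, not part of the patent:

```python
# Hypothetical sketch of the rectification step using OpenCV.
import cv2
import numpy as np

# Intrinsic/extrinsic parameters previously obtained by (e.g.) Zhang's method.
calib = np.load("stereo_calib.npz")          # assumed file produced offline
K1, D1, K2, D2 = calib["K1"], calib["D1"], calib["K2"], calib["D2"]
R, T = calib["R"], calib["T"]                # right camera pose w.r.t. the left

left = cv2.imread("left.png")
right = cv2.imread("right.png")
h, w = left.shape[:2]

# Compute rectification transforms so both image planes become coplanar
# and row-aligned (epipolar lines horizontal).
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)

map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
# Q is the reprojection matrix reused later for depth measurement.
```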
Step S20: for the lane line images, acquiring the image coordinates of the lane line feature points by the lane line detection method, and obtaining the fitted quadratic function of each lane line through the lane line fitting module.
The lane line detection method comprises the following specific steps:
S21: image graying:
The following three methods are commonly used to convert a color image to grayscale: the average method, the maximum method, and the weighted average method. The average method averages the R, G and B components to obtain the gray value; the maximum method takes the largest of the three components as the gray value; the weighted average method computes the final gray value by assigning a weight to each of the three components. Since the objective here is to extract the lane lines, which are mainly yellow and white, the weighted average method is adopted (the weight distribution is determined by comparison experiments), with the formula:
Gray = ω1·R + ω2·G + ω3·B
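A minimal sketch of the weighted-average graying step follows; the weights shown are the common luminance weights and are an assumption, since the patent determines its weights by comparison experiments:

```python
# Illustrative weighted-average graying step with assumed weights.
import cv2
import numpy as np

bgr = cv2.imread("left_rect.png")            # OpenCV loads images as B, G, R
w_r, w_g, w_b = 0.299, 0.587, 0.114          # assumed weights
gray = (w_r * bgr[:, :, 2] + w_g * bgr[:, :, 1] + w_b * bgr[:, :, 0]).astype(np.uint8)
```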
S22: perspective transformation:
A perspective transformation uses the collinearity of the perspective center, image point and target point to rotate the bearing surface (perspective plane) around the trace line (perspective axis) by a certain angle according to the law of perspective rotation; the original projecting beam is changed while the projective geometry on the bearing surface remains invariant. The principle formula is:
x = (a·u + b·v + c)/(g·u + h·v + 1),  y = (d·u + e·v + f)/(g·u + h·v + 1)
wherein (x, y) are the coordinates of the projected image, (u, v) are the coordinates of the front view, and a, b, c, d, e, f, g, h are the distortion parameters. For the lane lines, an image of a straight road can be selected and a trapezoidal area chosen along the edges of the left and right lane lines; since the real shape of this area is rectangular, the four end points of the trapezoid can be selected as the perspective transformation points, finally yielding a bird's-eye-view projection.
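The following sketch illustrates the bird's-eye-view warp with OpenCV; the four trapezoid corner coordinates are assumptions chosen for a straight-road image, not values from the patent:

```python
# Hypothetical bird's-eye-view warp for a straight-road image.
import cv2
import numpy as np

gray = cv2.imread("gray.png", cv2.IMREAD_GRAYSCALE)
h, w = gray.shape

# Trapezoid along the left/right lane line edges (source) mapped to a
# rectangle (destination) to obtain the bird's-eye projection.
src = np.float32([[w * 0.45, h * 0.63], [w * 0.55, h * 0.63],
                  [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
dst = np.float32([[w * 0.20, 0], [w * 0.80, 0],
                  [w * 0.80, h], [w * 0.20, h]])

M = cv2.getPerspectiveTransform(src, dst)      # solves the 8 parameters a..h
birdseye = cv2.warpPerspective(gray, M, (w, h))
```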
S23: image noise reduction:
When the camera acquires a road image, noise appears on the image for various reasons and increases the difficulty of lane line detection, so filtering must be applied to remove it; common filtering algorithms include mean filtering, median filtering and bilateral filtering.
S24: image information separation (thresholding and edge detection):
Since the image contains much useless information besides the lane lines, the unnecessary information must be removed. Separating the image information is mainly based on gradient-based thresholding and edge detection.
Thresholding: an image to be thresholded contains a target object, background and noise. To extract the target object directly from the multi-valued digital image, a common method is to set a threshold T that divides the image data into two parts: the pixel group larger than T and the pixel group not larger than T. This is the most typical gray-level transformation, called binarization of the image. The principle formula is:
g(i, j) = 255 if f(i, j) > T;  g(i, j) = 0 if f(i, j) ≤ T
wherein g(i, j) is the gray value of a pixel after the threshold operation and f(i, j) is the gray value of the same pixel before the threshold operation: when the gray value of a pixel is larger than the threshold it is set to 255, otherwise it is set to 0.
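A minimal binarization sketch; the threshold T is an assumed value that would be tuned in practice (or chosen adaptively, e.g. with Otsu's method):

```python
# Minimal binarization sketch with an assumed threshold.
import cv2

denoised = cv2.imread("denoised.png", cv2.IMREAD_GRAYSCALE)
T = 180                                         # assumed threshold
_, binary = cv2.threshold(denoised, T, 255, cv2.THRESH_BINARY)
```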
S25, extracting the characteristic points of the lane line:
since sobel edge extraction cannot handle dark and shadowy roads well, it is considered to extract white lane lines and yellow lane lines by using a color space (convert an RGB channel map into an HLS channel map, then perform segmentation processing on an L channel to extract white lane lines in an image; convert an RGB channel map into a Lab channel map, then perform segmentation processing on a b channel to extract yellow lane lines in an image), and then combine the two images.
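An illustrative sketch of this color-space extraction; the channel thresholds are assumptions:

```python
# Illustrative extraction of white and yellow lane pixels in color space.
import cv2

bgr = cv2.imread("birdseye_color.png")

hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
white = cv2.inRange(hls[:, :, 1], 200, 255)     # L channel: white lane lines

lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
yellow = cv2.inRange(lab[:, :, 2], 155, 255)    # b channel: yellow lane lines

lane_mask = cv2.bitwise_or(white, yellow)       # combine the two results
```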
S26: and (3) obtaining a lane line fitting equation by lane line fitting:
The approximate positions of the lane lines are found with the image histogram. For a digital image with gray levels in the range [0, L−1], the histogram is the discrete function h(rk) = nk, where rk is the k-th gray level and nk is the number of pixels with that gray level.
The column corresponding to the maximum of the left half of the histogram is the approximate position of the left lane line; the column corresponding to the maximum of the right half is the approximate position of the right lane line.
A sliding window method is then used to search the left and right lane lines:
First, the approximate positions of the left and right lane lines are found by the histogram method above and used as starting points. A rectangular region called a window is defined, and each starting point is taken as the midpoint of the bottom edge of a window; the abscissas of all white points inside the window are stored. The stored abscissas are then averaged, and the column of this mean, at the height of the current window's top edge, is taken as the midpoint of the bottom edge of the next window, and the search continues. This is repeated until all rows have been searched. All white points falling in the windows are the candidate points of the left and right lane lines.
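A sketch of this sliding-window search, assuming a binary bird's-eye image as input; the window count, half-width and minimum pixel count are typical values, not values from the patent:

```python
# Sketch of the sliding-window lane search described above.
import numpy as np

def sliding_window_points(binary, n_windows=9, margin=80, minpix=50):
    """Collect candidate lane pixels from a binary bird's-eye image."""
    h, w = binary.shape
    hist = binary[h // 2:, :].sum(axis=0)          # column histogram, lower half
    ys, xs = binary.nonzero()                      # all white-point coordinates

    lanes = []
    win_h = h // n_windows
    for base in (int(np.argmax(hist[:w // 2])),            # left starting point
                 w // 2 + int(np.argmax(hist[w // 2:]))):  # right starting point
        x_cur, idx = base, []
        for win in range(n_windows):
            y_lo, y_hi = h - (win + 1) * win_h, h - win * win_h
            inside = ((ys >= y_lo) & (ys < y_hi) &
                      (xs >= x_cur - margin) & (xs < x_cur + margin)).nonzero()[0]
            idx.append(inside)
            if len(inside) > minpix:               # recenter on the mean abscissa
                x_cur = int(xs[inside].mean())
        idx = np.concatenate(idx)
        lanes.append((xs[idx], ys[idx]))           # candidate points of this lane
    return lanes
```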
Quadratic curve fitting is then performed on the searched points:
the coefficients a, b and c of the quadratic function Y = aX² + bX + c are determined by the least squares method, i.e. by minimizing Σ(yi − a·xi² − b·xi − c)²,
wherein a, b and c are respectively the quadratic, linear and constant coefficients of the lane line quadratic function, and xi and yi are the image horizontal and vertical coordinates of the feature points.
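A minimal least-squares fitting sketch; np.polyfit minimizes exactly the residual sum described above:

```python
# Least-squares quadratic fit of the searched lane points.
import numpy as np

def fit_lane(xs, ys):
    """Fit Y = a*X**2 + b*X + c to one lane line's candidate points."""
    a, b, c = np.polyfit(xs, ys, 2)
    return a, b, c

# e.g. fit both lanes from the sliding-window output:
# left_fit, right_fit = [fit_lane(xs, ys) for xs, ys in sliding_window_points(binary)]
```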
The lane line detection and fitting method can also adopt other detection algorithms which can be used for the curve lane, and the invention does not limit the method.
Step S30: the auxiliary line functions (the left- and right-image datum line equations prestored in the ECU) are combined with the lane line functions calculated in S20 to obtain the image coordinates of the four auxiliary points in the left and right cameras respectively.
For a straight road as shown in Fig. 2, the auxiliary line function in step S30 is the initial auxiliary line function, which is obtained as follows:
two datum lines perpendicular to the lane lines are marked on a road in the world coordinate system, with the optical axis of the camera kept parallel to the lane lines; the lane line detection principle is then used to obtain the function expressions of the auxiliary lines l1Z, l2Z and l1Y, l2Y corresponding to the two datum lines.
The function expressions of the auxiliary lines are:
l1Z: Y = k1Z·X + b1Z,  l2Z: Y = k2Z·X + b2Z
l1Y: Y = k1Y·X + b1Y,  l2Y: Y = k2Y·X + b2Y
wherein k1Z, b1Z and k1Y, b1Y are the slope and intercept of the first auxiliary line in the left-eye and right-eye camera images respectively; k2Z, b2Z and k2Y, b2Y are the slope and intercept of the second auxiliary line in the left-eye and right-eye camera images respectively.
For a curved road, as shown in Figs. 3 and 4, the steering angle prediction module first predicts the steering angle α of the road ahead, and the auxiliary line function of step S30 is rotated accordingly.
The auxiliary line rotation angle β is obtained from the steering angle of the road ahead, with β = α; the function expressions of the rotated auxiliary lines in the left and right camera images then follow,
wherein k'1Z, k'2Z and k'1Y, k'2Y are the slopes of the function expressions of the rotated auxiliary lines in the left and right camera images respectively.
Further, as noted above, a line of slope k rotated through the angle β has the slope k' = (k + tan β)/(1 − k·tan β).
In step S30, the coordinates of the auxiliary points in the left and right images are obtained by solving the lane line equations simultaneously with the auxiliary line functions; the auxiliary points are P11, P21, P12 and P22 in the left image and the corresponding P11, P21, P12 and P22 in the right image.
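A sketch of this simultaneous solution for one auxiliary point; the root-selection rule is an assumption, since which intersection lies inside the image depends on the scene:

```python
# Intersection of a fitted lane line (quadratic) with an auxiliary line (linear).
import numpy as np

def auxiliary_point(a, b, c, k, b_line):
    """Intersection of Y = a*X^2 + b*X + c with Y = k*X + b_line."""
    roots = np.roots([a, b - k, c - b_line])
    roots = roots[np.isreal(roots)].real           # keep real intersections only
    x = roots.max()                                # assumed: pick the in-image root
    return x, k * x + b_line
```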
Step S40: and obtaining world coordinates corresponding to the four points by using a depth detection module of the binocular camera.
The formula of the depth detection module is:
[X, Y, Z, W]ᵀ = Q·[u, v, d, 1]ᵀ
wherein d is the disparity; u and v are respectively the horizontal and vertical coordinates of the object point in the right-eye image; the world coordinates of the object point are (X/W, Y/W, Z/W).
Further, the reprojection matrix Q is:
        | 1  0    0   −u0 |
    Q = | 0  1    0   −v0 |
        | 0  0    0    fx |
        | 0  0  −1/b    0 |
wherein fx is the focal length (in pixels) obtained by camera calibration; b is the baseline length of the binocular camera; u0 and v0 are respectively the horizontal and vertical coordinates of the principal point of the left-eye image.
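An illustrative reprojection sketch; the calibration values fx, b, u0, v0 and the pixel/disparity values are assumptions:

```python
# Recovering world coordinates from pixel coordinates and disparity via Q.
import numpy as np

fx, b = 1000.0, 0.12          # focal length (px) and baseline (m), assumed
u0, v0 = 640.0, 360.0         # principal point, assumed

Q = np.array([[1, 0, 0, -u0],
              [0, 1, 0, -v0],
              [0, 0, 0,  fx],
              [0, 0, -1 / b, 0]])

def to_world(u, v, d):
    """Reproject one pixel (u, v) with disparity d to world coordinates."""
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return X / W, Y / W, Z / W

# e.g. world coordinates of auxiliary point P11 from its pixel position and
# measured disparity (values assumed):
P11 = to_world(700.0, 420.0, 35.0)
```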
Step S50: calculating the gradient of the road ahead through the gradient calculation module.
The formula of the gradient calculation module is as follows:
for a cross slope angle λ:
λ=arcsin[(P22B/P21P22+P12B/P11P12)/2]
for the longitudinal slope angle γ:
γ=arcsin[(P11A/P11P21+P12A/P12P22)/2]
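A hedged sketch of these formulas follows. It assumes the world frame has Y as the vertical axis and reads P22B, P12B, P11A and P12A as the height differences between the endpoints of the corresponding segments; the points A and B themselves are defined in the patent figures, which are not reproduced here:

```python
# Hedged sketch of the cross/longitudinal slope formulas; assumes Y is the
# vertical world axis and that the numerators are endpoint height differences.
import numpy as np

def seg_len(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def slopes(P11, P21, P12, P22):
    # Cross slope λ: height difference across each auxiliary line / its length.
    lam = np.arcsin(((P22[1] - P21[1]) / seg_len(P21, P22)
                     + (P12[1] - P11[1]) / seg_len(P11, P12)) / 2)
    # Longitudinal slope γ: height difference along each lane line / its length.
    gam = np.arcsin(((P21[1] - P11[1]) / seg_len(P11, P21)
                     + (P22[1] - P12[1]) / seg_len(P12, P22)) / 2)
    return np.degrees(lam), np.degrees(gam)       # slope angles in degrees

# lam_deg, gam_deg = slopes(P11, P21, P12, P22)
```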
The binocular camera-based real-time gradient prediction system comprises an image acquisition module, a calibration module, a lane line feature point image acquisition module, a lane line fitting module, a steering angle prediction module, a depth detection module, a gradient calculation module and an output module;
a binocular camera for acquiring a front lane line image,
a processor comprising
The calibration module is used for obtaining the intrinsic and extrinsic parameters of the left and right cameras through camera calibration, and for rectifying the left and right images so that the two images lie in the same plane and are parallel to each other;
the lane line feature point image acquisition module is used for making two auxiliary lines perpendicular to the lane curve, the auxiliary lines intersecting the lane lines at four reference points, and for acquiring the image coordinates of the lane line feature points by the lane line detection method;
the lane line fitting module is used for fitting the lane line characteristic points into a quadratic function curve segment by a least square method; calculating the coordinates of the four reference points in a pixel coordinate system through a lane line fitting equation and an auxiliary line equation;
the depth detection module is used for obtaining coordinates of a world coordinate system of four reference points through depth measurement;
the gradient calculation module is used for calculating and outputting the gradient of the road ahead by substituting the world coordinates of the four reference points into the prestored gradient formula; and
the output module is used for outputting the relative cross slope and longitudinal slope values of the road ahead.
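Purely as an architectural illustration, the modules above could be wired as follows; every class and method name here is hypothetical, not from the patent:

```python
# Hypothetical wiring of the described modules into one processing pipeline.
class GradientPredictionSystem:
    def __init__(self, calib, detector, fitter, depth, grade):
        self.calib, self.detector = calib, detector
        self.fitter, self.depth, self.grade = fitter, depth, grade

    def step(self, left_img, right_img):
        left_r, right_r = self.calib.rectify(left_img, right_img)
        pts_l, pts_r = self.detector.feature_points(left_r, right_r)
        ref_px = self.fitter.reference_points(pts_l, pts_r)   # 4 points, both images
        ref_world = self.depth.to_world(ref_px)               # world coordinates
        return self.grade.cross_and_longitudinal(ref_world)   # (λ, γ) for output
```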
The steering angle prediction module is configured to compute the steering angle from the lane line fits,
wherein α is the steering angle; fZ(x) and fY(x) are the lane line fitting equations in the left-eye and right-eye images respectively; x0 and x1 are respectively the abscissas of the intersection points of the image center line with the left and right lane lines.
The depth detection module is configured to calculate, as in step S40,
[X, Y, Z, W]ᵀ = Q·[u, v, d, 1]ᵀ
wherein d is the disparity; u and v are respectively the horizontal and vertical coordinates of the object point in the right-eye image; the world coordinates of the object point are (X/W, Y/W, Z/W); and Q is the reprojection matrix given above, with fx the calibrated focal length (in pixels), b the baseline length of the binocular camera, and u0, v0 the coordinates of the principal point of the left-eye image.
The gradient calculation module is configured to calculate:
for a cross slope angle λ:
λ=arcsin[(P22B/P21P22+P12B/P11P12)/2]
for the longitudinal slope angle γ:
γ=arcsin[(P11A/P11P21+P12A/P12P22)/2]
it is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The related modules involved in the system are all hardware system modules or functional modules combining computer software programs or protocols with hardware in the prior art, and the computer software programs or the protocols involved in the functional modules are all known in the technology of persons skilled in the art, and are not improvements of the system; the improvement of the system is the interaction relation or the connection relation among all the modules, namely the integral structure of the system is improved, so as to solve the corresponding technical problems to be solved by the system.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.