CN113345035A - Binocular camera-based gradient real-time prediction method and system and computer-readable storage medium - Google Patents
Binocular camera-based gradient real-time prediction method and system and computer-readable storage medium
- Publication number: CN113345035A
- Application number: CN202110805290.7A
- Authority: CN (China)
- Prior art keywords: auxiliary; lane; image; binocular camera; line
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/85 — Stereo camera calibration
- G06T5/70 — Denoising; smoothing
- G06T7/13 — Edge detection
- G06T7/181 — Segmentation; edge detection involving edge growing or edge linking
- G06T7/90 — Determination of colour characteristics
- G06T2207/10024 — Color image
- G06T2207/20028 — Bilateral filtering
- G06T2207/20032 — Median filtering
- G06T2207/30256 — Lane; road marking
Abstract
The invention discloses a binocular camera-based gradient real-time prediction method, a system, and a readable storage medium for executing the method. The method detects the road lane curve through a binocular camera; makes two auxiliary lines perpendicular to the lane curve, the auxiliary lines intersecting the lane lines at four reference points; calculates the coordinates of the four reference points in the pixel coordinate system through a lane line fitting equation and an auxiliary line equation; obtains the world-coordinate-system coordinates of the four reference points through depth measurement; and, taking the world coordinates of the four reference points as inputs, calculates and outputs the gradient value of the road ahead through a prestored gradient solving formula. The scheme solves the problems that, while an automobile is running, the prediction precision of the front gradient is not high and the transverse gradient cannot be measured; it realizes accurate prediction of the front gradient and simultaneous measurement of the transverse gradient, and is characterized by high precision.
Description
Technical Field
The invention relates to the technical field of automobile safety, in particular to a binocular camera-based real-time slope prediction method capable of accurately predicting a front slope when a vehicle runs and realizing measurement of a transverse slope and a system for realizing the method.
Background
With the continuous development of intelligent automobiles, acquiring the gradient of the road ahead in real time is particularly important for effectively and accurately controlling a running vehicle.
Existing road gradient measurement mostly combines sensors with automobile dynamics, but such methods can only measure the gradient at the vehicle's current position and cannot predict the gradient of the road ahead. There are also methods that obtain the front road gradient by installing a plurality of radars, but these are too costly to popularize.
Most of the conventional methods for predicting the road gradient based on a monocular camera can predict the gradient of a road ahead with low cost, but have low measurement accuracy, can only predict the gradient value of a straight road, and cannot predict a transverse gradient value.
Disclosure of Invention
The invention aims to solve the problems that the prediction precision of the front gradient is not high and the transverse gradient cannot be measured in the running process of an automobile, and provides a gradient real-time prediction method based on a binocular camera and a system for realizing the method.
In order to achieve the purpose, the invention provides the following technical scheme:
a binocular camera-based gradient real-time prediction method comprises the following specific steps:
detecting a road lane curve through a binocular camera;
making two auxiliary lines perpendicular to the lane curve, wherein the auxiliary lines and the lane line intersect at four reference points;
calculating the coordinates of the four reference points in a pixel coordinate system through a lane line fitting equation and an auxiliary line equation;
obtaining coordinates of a world coordinate system of four reference points through depth measurement;
and calculating and outputting the gradient value of the front road by using the coordinate values of the world coordinate systems of the four reference points as input quantities through a prestored gradient solving formula.
Detecting the road lane curve through the binocular camera specifically comprises:
acquiring a front lane line image through a left binocular camera and a right binocular camera, and calibrating and acquiring internal and external parameters of the left binocular camera and the right binocular camera through the cameras;
correcting the left image and the right image to enable the two images to be located on the same plane and to be parallel to each other;
acquiring image coordinates of characteristic points of the lane line by a lane line detection method; and obtaining a lane line fitting quadratic function through a least square method:
wherein a1Y, b1Y, c1Y and a2Y, b2Y, c2Y are the coefficients of the lane line functions at the first auxiliary line and the second auxiliary line in the right-eye camera, respectively; a1Z, b1Z, c1Z and a2Z, b2Z, c2Z are the coefficients of the lane line functions at the first auxiliary line and the second auxiliary line in the left-eye camera, respectively.
For the straight line, the auxiliary line equation is an initial auxiliary line equation, and the initial auxiliary line equation is obtained by the following way:
two datum lines perpendicular to the lane lines are made on a road in a world coordinate system, the optical axes of the binocular cameras are ensured to be parallel to the lane lines, and an auxiliary line 1 of the two datum lines in the left and right binocular cameras is obtained by using a lane line detection method1Z、12ZAnd 11Y、12YThe function expression of (A) is as follows
Wherein k is1Z、b1ZAnd k is1Y、b1YThe slope and intercept of the first auxiliary line in the left-eye camera and the right-eye camera respectively; k is a radical of2Z、b2ZAnd k is2Y、b2YThe slope and intercept of the second auxiliary line in the left-eye camera and the right-eye camera respectively;
for a curved road, the auxiliary line equation is obtained by predicting the steering angle α of the road ahead with a steering angle prediction module;
the auxiliary line rotation angle β is obtained from the steering angle of the road ahead, with α = β;
the function expressions of the rotated auxiliary lines in the left and right camera images are then obtained:
wherein k'1Z, k'2Z and k'1Y, k'2Y are the slopes of the function expressions of the rotated auxiliary lines in the left and right camera images, respectively, calculated as follows:
the coordinates of the auxiliary points in the left and right images are obtained from the lane line fitting equations and the auxiliary line equations, respectively; the auxiliary points are P11, P21, P12 and P22 in the left image and P11, P21, P12 and P22 in the right image.
The world coordinates of the four auxiliary points obtained by binocular camera depth measurement are as follows:
P11(X11,Y11,Z11)、P21(X21,Y21,Z21)、P12(X12,Y12,Z12)、P22(X22,Y22,Z22)。
a binocular camera based gradient real-time prediction system comprises
A binocular camera for acquiring a front lane line image,
a processor comprising
The calibration module is used for acquiring internal parameters and external parameters of the left binocular camera and the right binocular camera through camera calibration; correcting the left image and the right image to enable the two images to be located on the same plane and to be parallel to each other;
the lane line feature point image acquisition module is used for making two auxiliary lines perpendicular to the lane curve, the auxiliary lines intersecting the lane lines at four reference points, and for acquiring the image coordinates of the lane line feature points by the lane line detection method;
the lane line fitting module is used for fitting the lane line characteristic points into a quadratic function curve segment by a least square method; calculating the coordinates of the four reference points in a pixel coordinate system through a lane line fitting equation and an auxiliary line equation;
the depth detection module is used for obtaining coordinates of a world coordinate system of four reference points through depth measurement;
and the gradient calculation module is used for calculating and outputting the gradient value of the front road by using the coordinate values of the world coordinate systems of the four reference points as input quantities through a prestored gradient solving formula.
And the output module is used for outputting the slope values of the relative transverse slope and the longitudinal slope of the road in front.
A computer-readable storage medium, storing a computer program comprising program instructions, which when executed by a processor, cause the processor to perform the method of any one of the above.
Compared with the prior art, the invention has the beneficial effects that:
the invention can predict the slope value of the road ahead in real time through the binocular camera, and can also predict the cross slope value and the longitudinal slope value of the curve.
The slope value is calculated by projecting the auxiliary points on the lane line into the world coordinates, so that the precision is higher.
The results obtained by the lane line detection module and the steering angle prediction module in the invention can be used in other auxiliary driving systems or intelligent driving technologies, such as: lane departure warning, autonomous driving above L3, and the like.
Drawings
FIG. 1 is a schematic flow chart of a binocular camera-based real-time slope prediction method according to the present invention;
FIG. 2 is a schematic illustration of a cross slope grade calculation;
FIG. 3 is a schematic view illustrating a bird's eye view calculation of the slope of the longitudinal slope;
fig. 4 is a front view explanatory diagram of the calculation of the longitudinal gradient.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The binocular camera-based gradient real-time prediction method comprises a lane line fitting step and a road gradient calculation step.
Example 1, as shown in figure 1:
the gradient real-time prediction method based on the binocular camera comprises the following steps:
s10: acquiring a front lane line image through a left binocular camera and a right binocular camera; and calibrating and acquiring the internal and external parameters of the left camera and the right camera through the cameras.
S20: and for the lane line image, acquiring image coordinates of the lane line characteristic points by a lane line detection method, and acquiring a fitted quadratic function of the lane line by a lane line fitting method.
S30: the auxiliary line functions prestored in the system and the lane line function calculated in S20 are combined to obtain the image coordinates of the 4 auxiliary points in the left and right cameras, respectively.
S40: and obtaining world coordinates corresponding to the four auxiliary points by using a depth detection module of the binocular camera.
S50: and calculating the front slope value through the slope calculation module.
In order to clearly illustrate the binocular camera-based gradient real-time prediction method, the following is a description of the steps in the embodiment of the method of the present invention with reference to fig. 2.
The binocular camera-based gradient real-time prediction method comprises the following steps of S10-S50, wherein the steps are as follows:
step S10: acquiring a front lane line image through a left binocular camera and a right binocular camera; calibrating and acquiring internal and external parameters of a left camera and a right camera through the cameras; the left and right images are corrected.
In one example of the present invention, a binocular camera is mounted on the top of a vehicle and ensures that the road in front of the vehicle can be clearly photographed.
In this example a binocular camera is used, but the same function can be realized with two monocular cameras; the calibration method adopted here is Zhang Zhengyou's calibration method, but other binocular camera calibration methods may also be adopted. The present invention does not restrict either choice.
And correcting the two images through internal and external parameters obtained by calibration so that the two images are positioned on the same plane and are parallel to each other.
Step S20: and for the lane line image, acquiring the image coordinates of the lane line characteristic points by a lane line detection method. And obtaining a fitted quadratic function of the lane line through a lane line fitting module.
The lane line detection method comprises the following specific steps:
S21: graying of the image:
Three methods are generally used to obtain a gray-scale image from a color image: the averaging method, the maximum-value method and the weighted-average method. The averaging method averages the R, G, B components to obtain the gray value; the maximum-value method takes the largest of the three components as the gray value; the weighted-average method computes the final gray value by giving each component a certain weight. Since the objective here is to extract the lane lines, which are mainly yellow and white, the weighted-average method is adopted (the weight distribution is determined after comparison experiments), with the calculation formula:
Gray = ω1·R + ω2·G + ω3·B
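As an illustrative sketch of the weighted-average graying step (the patent fixes its weights by comparison experiments; the ITU-R BT.601 weights used below are a common stand-in, not the patent's values):

```python
import numpy as np

def weighted_gray(img, w=(0.299, 0.587, 0.114)):
    """Weighted-average graying: Gray = w1*R + w2*G + w3*B.
    `w` defaults to the ITU-R BT.601 weights; the patent determines
    its own weight distribution experimentally."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return w[0] * r + w[1] * g + w[2] * b

# A pure-white pixel keeps full intensity because the weights sum to 1.
white = np.full((1, 1, 3), 255.0)
print(weighted_gray(white)[0, 0])  # 255.0
```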
S22: perspective transformation:
The perspective transformation rotates the bearing surface (perspective surface) around the trace line (perspective axis) by a certain angle, using the collinearity of the perspective center, the image point and the target point, so that although the original projection beam is altered, the projected geometric figure on the bearing surface remains unchanged. The principle formula is:
x = (a·u + b·v + c) / (g·u + h·v + 1),  y = (d·u + e·v + f) / (g·u + h·v + 1)
where (x, y) are the coordinates of the projected image, (u, v) are the coordinates of the front view, and a, b, c, d, e, f, g, h are the distortion parameters. For the lane lines, an image of a straight road can be selected and a trapezoidal area chosen along the edges of the left and right lane lines; since the real shape of this area is rectangular, the four end points of the trapezoid are taken as the perspective transformation points, finally yielding a bird's-eye-view projection.
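The eight distortion parameters can be recovered from the four trapezoid/rectangle point pairs by solving a small linear system — a minimal sketch with illustrative coordinates (not values from the patent):

```python
import numpy as np

def solve_homography(src, dst):
    """Solve the 8 perspective-transform parameters a..h from four
    (u, v) -> (x, y) point correspondences, assuming
    x = (a*u + b*v + c)/(g*u + h*v + 1), y = (d*u + e*v + f)/(g*u + h*v + 1)."""
    A, rhs = [], []
    for (u, v), (x, y) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); rhs.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); rhs.append(y)
    p = np.linalg.solve(np.array(A, float), np.array(rhs, float))
    return np.append(p, 1.0).reshape(3, 3)  # 3x3 homography matrix

def warp_point(H, u, v):
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Trapezoid along the lane edges mapped to a bird's-eye rectangle.
src = [(200, 400), (440, 400), (600, 600), (40, 600)]
dst = [(100, 0), (540, 0), (540, 600), (100, 600)]
H = solve_homography(src, dst)
```

Each correspondence contributes two rows of the 8×8 system; the four corner pairs determine the transform exactly.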
S23: image noise reduction
When the camera acquires road images, noise points appear on the image for various reasons and increase the difficulty of lane line detection, so filtering is required to eliminate the noise; common filtering algorithms include mean filtering, median filtering and bilateral filtering.
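A minimal median-filter sketch (the patent leaves the choice of filter open; a real implementation would use an optimized library routine):

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter; effective against salt-and-pepper
    noise. Border pixels are left unfiltered for brevity."""
    out = img.astype(float).copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

# An isolated noise spike in a flat region is removed completely.
noisy = np.zeros((5, 5)); noisy[2, 2] = 255
print(median_filter(noisy)[2, 2])  # 0.0
```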
S24: image information separation (thresholding and edge detection)
Besides the lane lines, the image contains much other useless information, which must be removed. The separation of image information is mainly based on gradient-based threshold processing and edge detection;
threshold processing: an image contains the target object, the background and noise. To extract the target object directly from a multi-valued digital image, a common method is to set a threshold T and divide the image data into two parts: the pixel group larger than T and the pixel group smaller than T. This is the most direct gray-scale transformation method, called binarization of the image. The principle formula is:
g(i, j) = 255 if f(i, j) > T, otherwise g(i, j) = 0
where g(i, j) is the gray value of a pixel after the threshold operation and f(i, j) is its gray value before the operation: pixels whose gray value is larger than the threshold are set to 255, and pixels whose gray value is smaller are set to 0.
S25: extracting the lane line feature points:
Since Sobel edge extraction cannot handle dark and shadowed roads well, the white and yellow lane lines are extracted via color spaces: the RGB image is converted to an HLS image and the L channel is thresholded to extract the white lane lines, and the RGB image is converted to a Lab image and the b channel is thresholded to extract the yellow lane lines; the two results are then combined.
S26: obtaining the lane line fitting equation by lane line fitting:
The approximate locations of the lane lines are found using the image histogram. For a digital image with gray values in the range [0, L-1], the histogram is the discrete function h(rk) = nk, where nk is the number of pixels with gray value rk;
Finding out the column number corresponding to the maximum value of the left half side of the histogram, namely the approximate position of the left lane line; and finding the column number corresponding to the maximum value of the right half edge of the histogram, namely the approximate position of the right lane line.
The sliding-window method searches for the left and right lane lines:
first, the approximate positions of the left and right lane lines are found by the histogram method described above, and these two positions are used as starting points. A rectangular area called a window is defined, the two starting points serving as the midpoints of the windows' lower edges, and the horizontal coordinates of all white points inside each window are stored. The stored abscissas are then averaged; the column containing the average, at the height of the upper edge of the current window, is taken as the midpoint of the lower edge of the next window, and the search continues. This is repeated until all rows have been searched. All white points falling within the windows are the candidate points of the left and right lane lines.
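A compact sketch of the histogram-base plus sliding-window search on a toy binary image (window geometry simplified relative to the patent's description):

```python
import numpy as np

def lane_base_columns(binary):
    """Column-sum histogram of the lower half of the binary image;
    the peak in each half gives the approximate lane-line base."""
    hist = binary[binary.shape[0] // 2:].sum(axis=0)
    mid = hist.shape[0] // 2
    return int(np.argmax(hist[:mid])), int(mid + np.argmax(hist[mid:]))

def sliding_window(binary, base_col, n_windows=4, margin=2):
    """Collect white-pixel coordinates window by window, re-centring
    each window on the mean x of pixels found in the previous one."""
    h = binary.shape[0]
    win_h = h // n_windows
    xs, ys, centre = [], [], base_col
    for w in range(n_windows):
        lo, hi = h - (w + 1) * win_h, h - w * win_h
        c0 = max(0, centre - margin)
        yy, xx = np.nonzero(binary[lo:hi, c0:centre + margin + 1])
        xx = xx + c0
        xs.extend(xx); ys.extend(yy + lo)
        if len(xx):
            centre = int(round(np.mean(xx)))
    return np.array(xs), np.array(ys)

# Two vertical "lane lines" at columns 2 and 6 of an 8x8 image.
lane_img = np.zeros((8, 8)); lane_img[:, 2] = 1; lane_img[:, 6] = 1
left, right = lane_base_columns(lane_img)
print(left, right)  # 2 6
```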
Quadratic curve fitting is then performed on the searched points:
a, b and c of the quadratic function Y = aX² + bX + c are determined by the least squares method;
wherein a, b and c are the quadratic, linear and constant coefficients of the lane line quadratic function, and (xi, yi) are the image horizontal and vertical coordinates of the feature points.
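The least-squares fit itself is one library call in practice — a sketch on synthetic noise-free points:

```python
import numpy as np

# Fit Y = a*X^2 + b*X + c to candidate points by least squares.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 0.5 * xs**2 - 2.0 * xs + 3.0      # exact quadratic, no noise
a, b, c = np.polyfit(xs, ys, 2)
print(round(a, 6), round(b, 6), round(c, 6))  # 0.5 -2.0 3.0
```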
The lane line detection and fitting method can also adopt other detection algorithms which can be used for the curve lane, and the invention does not limit the method.
Step S30: the auxiliary line functions (left and right image reference line equations prestored in the ecu) prestored in the system and the lane line functions calculated in S20 are combined to respectively obtain the image coordinates of 4 auxiliary points in the left and right cameras.
For the straight road shown in fig. 2, the auxiliary line function in step S30 is the initial auxiliary line function, acquired as follows:
two datum lines perpendicular to the lane lines are made on the road in the world coordinate system, with the optical axis of the camera kept parallel to the lane lines, and the function expressions of the auxiliary lines l1Z, l2Z and l1Y, l2Y are obtained from the two datum lines by the lane line detection principle.
The auxiliary line functions are expressed as follows:
wherein k1Z, b1Z and k1Y, b1Y are the slope and intercept of the first auxiliary line in the left-eye and right-eye cameras, respectively; k2Z, b2Z and k2Y, b2Y are the slope and intercept of the second auxiliary line in the left-eye and right-eye cameras, respectively.
For the curved road shown in fig. 3 and fig. 4, the steering angle prediction module predicts the steering angle α of the road ahead, which is used with the auxiliary line function of step S30.
The auxiliary line rotation angle β is obtained from the steering angle of the road ahead, with α = β; the function expressions of the rotated auxiliary lines in the left and right camera images can then be obtained:
wherein k'1Z, k'2Z and k'1Y, k'2Y are the slopes of the function expressions of the rotated auxiliary lines in the left and right camera images, respectively.
The calculation formula is as follows:
Step S30 obtains the coordinates of the auxiliary points in the left and right images by combining the lane line and auxiliary line functions; the auxiliary points are P11, P21, P12 and P22 in the left image and P11, P21, P12 and P22 in the right image.
Step S40: and obtaining world coordinates corresponding to the four points by using a depth detection module of the binocular camera.
The formula of the depth detection module is:
[X, Y, Z, W]ᵀ = Q · [u, v, d, 1]ᵀ
wherein d is the disparity; u and v are the horizontal and vertical coordinates of the object point in the right-eye image; and the world coordinates of the object point are (X/W, Y/W, Z/W).
Further, the matrix Q is:
wherein fx is a parameter obtained by camera calibration; b is the baseline length (in pixels); u0 and v0 are the horizontal and vertical coordinates of the principal point in the left-eye image.
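A sketch of the reprojection with the standard rectified-stereo Q matrix (parameter values are illustrative; the exact Q entries come from calibration, and the layout below follows the usual rectified-stereo convention rather than values from the patent):

```python
import numpy as np

def reproject(u, v, d, fx, baseline, u0, v0):
    """Reproject pixel (u, v) with disparity d through the stereo
    Q matrix; the world point is (X/W, Y/W, Z/W), so the depth is
    Z = fx * baseline / d."""
    Q = np.array([[1.0, 0.0, 0.0, -u0],
                  [0.0, 1.0, 0.0, -v0],
                  [0.0, 0.0, 0.0,  fx],
                  [0.0, 0.0, 1.0 / baseline, 0.0]])
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return X / W, Y / W, Z / W

# A centred pixel at disparity 16 px: depth = 800 * 0.1 / 16 = 5 m.
X, Y, Z = reproject(u=320, v=240, d=16, fx=800, baseline=0.1, u0=320, v0=240)
print(X, Y, Z)  # 0.0 0.0 5.0
```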
Step S50: and calculating a front slope value through a slope calculation module.
The formula of the gradient calculation module is as follows:
for a cross slope angle λ:
λ=arcsin[(P22B/P21P22+P12B/P11P12)/2]
for the longitudinal slope angle γ:
γ=arcsin[(P11A/P11P21+P12A/P12P22)/2]
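The two arcsin formulas can be sketched as follows, reading the PxxA / PxxB terms as the vertical rise between the paired reference points (an interpretation of Figs. 2-4: the point pairing follows the formulas above, and the vertical axis is assumed to be the second coordinate — both are assumptions, not statements from the patent):

```python
import numpy as np

def slope_angles(P11, P21, P12, P22):
    """Cross slope λ and longitudinal slope γ (in degrees) from the
    four world-frame reference points, treating the P22B/P21P22-style
    terms as rise over point-to-point distance."""
    P11, P21, P12, P22 = map(np.asarray, (P11, P21, P12, P22))
    def ratio(a, b):  # vertical rise between a and b over their distance
        return abs(a[1] - b[1]) / np.linalg.norm(a - b)
    lam = np.arcsin((ratio(P22, P21) + ratio(P12, P11)) / 2)
    gam = np.arcsin((ratio(P11, P21) + ratio(P12, P22)) / 2)
    return np.degrees(lam), np.degrees(gam)

# On a perfectly flat road all four points share the same height,
# so both slope angles are zero.
lam, gam = slope_angles((0, 0, 10), (3, 0, 10), (0, 0, 20), (3, 0, 20))
print(lam, gam)  # 0.0 0.0
```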
the binocular camera-based gradient real-time prediction system comprises an image acquisition module, a calibration module, a lane line characteristic point image acquisition module, a lane line fitting module, a steering angle prediction module, a depth detection module, a gradient calculation module and an output module;
a binocular camera for acquiring a front lane line image,
a processor comprising
The calibration module is used for acquiring internal parameters and external parameters of the left binocular camera and the right binocular camera through camera calibration; correcting the left image and the right image to enable the two images to be located on the same plane and to be parallel to each other;
the lane line feature point image acquisition module is used for making two auxiliary lines perpendicular to the lane curve, the auxiliary lines intersecting the lane lines at four reference points, and for acquiring the image coordinates of the lane line feature points by the lane line detection method;
the lane line fitting module is used for fitting the lane line characteristic points into a quadratic function curve segment by a least square method; calculating the coordinates of the four reference points in a pixel coordinate system through a lane line fitting equation and an auxiliary line equation;
the depth detection module is used for obtaining coordinates of a world coordinate system of four reference points through depth measurement;
and the gradient calculation module is used for calculating and outputting the gradient value of the front road by using the coordinate values of the world coordinate systems of the four reference points as input quantities through a prestored gradient solving formula.
And the output module is used for outputting the slope values of the relative transverse slope and the longitudinal slope of the road in front.
The steering angle prediction module is configured with the following formula:
wherein α is the steering angle; fZ(x) and fY(x) are the fitting equations of the lane lines in the left and right images, respectively; x0 and x1 are the abscissas of the intersection points of the image center line with the left and right lane lines, respectively.
The depth detection module is configured to calculate the following formula:
wherein d is the disparity; u and v are the horizontal and vertical coordinates of the object point in the right-eye image; and the world coordinates of the object point are (X/W, Y/W, Z/W).
Further, the matrix Q is:
wherein fx is a parameter obtained by camera calibration; b is the baseline length (in pixels); u0 and v0 are the horizontal and vertical coordinates of the principal point in the left-eye image.
A gradient calculation module configured to calculate the following formula:
for a cross slope angle λ:
λ=arcsin[(P22B/P21P22+P12B/P11P12)/2]
for the longitudinal slope angle γ:
γ=arcsin[(P11A/P11P21+P12A/P12P22)/2]
it is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The modules involved in the system are all prior-art hardware modules, or functional modules combining prior-art computer software programs or protocols with hardware; the computer software programs or protocols involved in these functional modules are known to persons skilled in the art and are not the improvement of the system. The improvement of the system lies in the interaction and connection relations among the modules, i.e. in the overall structure of the system, so as to solve the corresponding technical problems to be solved by the system.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (6)
1. A binocular camera-based gradient real-time prediction method is characterized by comprising the following steps:
detecting a road lane curve through a binocular camera;
making two auxiliary lines perpendicular to the lane curve, wherein the auxiliary lines intersect the lane lines at four reference points;
calculating the coordinates of the four reference points in a pixel coordinate system through a lane line fitting equation and an auxiliary line equation;
obtaining coordinates of a world coordinate system of four reference points through depth measurement;
and calculating and outputting the gradient value of the front road by using the coordinate values of the world coordinate systems of the four reference points as input quantities through a prestored gradient solving formula.
2. The binocular camera-based gradient real-time prediction method according to claim 1, wherein detecting the road lane curve through the binocular camera specifically comprises:
acquiring a front lane line image through the left and right binocular cameras, and obtaining the internal and external parameters of the left and right cameras through camera calibration;
correcting the left image and the right image to enable the two images to be located on the same plane and to be parallel to each other;
acquiring image coordinates of characteristic points of the lane line by a lane line detection method; and obtaining a lane line fitting quadratic function through a least square method:
wherein a1Y, b1Y, c1Y and a2Y, b2Y, c2Y are respectively the coefficients of the lane line functions at the first and second auxiliary lines in the right-eye camera; a1Z, b1Z, c1Z and a2Z, b2Z, c2Z are respectively the coefficients of the lane line functions at the first and second auxiliary lines in the left-eye camera.
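The least-squares fit of claim 2 can be sketched as follows; the feature points and `numpy.polyfit` are illustrative stand-ins, since the patent does not name a particular solver:

```python
import numpy as np

# Hypothetical lane feature points in pixel coordinates: v = image row, u = column.
v = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
u = 0.001 * v**2 + 0.2 * v + 50.0      # synthetic points on a known quadratic

# Least-squares quadratic fit u = a*v^2 + b*v + c, as in the claim.
a, b, c = np.polyfit(v, u, 2)
```

One such coefficient triple (a, b, c) would be obtained per lane line and per camera, corresponding to the a1Y…c2Z coefficients named above.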
3. The binocular camera-based gradient real-time prediction method according to claim 1, wherein:
for a straight road, the auxiliary line equation is the initial auxiliary line equation, which is obtained as follows:
two datum lines perpendicular to the lane lines are drawn on the road in the world coordinate system, with the optical axes of the binocular cameras kept parallel to the lane lines; the function expressions of the auxiliary lines l1Z, l2Z and l1Y, l2Y of the two datum lines in the left and right cameras are obtained by the lane line detection method, as follows:
wherein k1Z, b1Z and k1Y, b1Y are respectively the slope and intercept of the first auxiliary line in the left-eye and right-eye cameras; k2Z, b2Z and k2Y, b2Y are respectively the slope and intercept of the second auxiliary line in the left-eye and right-eye cameras;
for a curved road, the auxiliary line equation is obtained by predicting the steering angle α of the road ahead with a steering angle prediction module;
obtaining the auxiliary line rotation angle β from the predicted steering angle of the road ahead, with β = α;
obtaining the function expressions of the rotated auxiliary lines in the left and right camera images:
wherein k′1Z, k′2Z and k′1Y, k′2Y are respectively the slopes of the function expressions of the rotated auxiliary lines in the left and right camera images, calculated as follows:
the coordinates of the auxiliary points in the left and right images are obtained from the lane line fitting equations and the auxiliary line equations respectively, the auxiliary points being P11-left, P21-left, P12-left, P22-left in the left image and P11-right, P21-right, P12-right, P22-right in the right image.
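Each auxiliary point's pixel coordinate comes from intersecting the fitted quadratic lane line with the straight auxiliary line. A sketch, assuming both are expressed as u(v) in the same image (all coefficient values illustrative):

```python
import math

def intersect(a, b, c, k, b0):
    """Intersect the lane fit u = a*v^2 + b*v + c with the auxiliary
    line u = k*v + b0; returns the real v roots (0, 1, or 2 of them)."""
    A, B, C = a, b - k, c - b0
    disc = B * B - 4 * A * C
    if disc < 0:
        return []                      # no real intersection
    r = math.sqrt(disc)
    return [(-B + r) / (2 * A), (-B - r) / (2 * A)]

roots = intersect(0.001, 0.2, 50.0, 1.0, 10.0)
```

An implementation would keep whichever root falls inside the image region ahead of the vehicle.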
4. The binocular camera-based gradient real-time prediction method according to claim 1, wherein the world coordinates of the four auxiliary points obtained by binocular camera depth measurement are:
P11(X11,Y11,Z11)、P21(X21,Y21,Z21)、P12(X12,Y12,Z12)、P22(X22,Y22,Z22)。
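For a rectified stereo pair, the depth measurement of claim 4 is conventionally disparity triangulation. A minimal sketch under that assumption (the patent does not spell out its formulas; fx, b, u0, v0 match the calibration quantities named earlier):

```python
def triangulate(uL, vL, uR, fx, baseline, u0, v0):
    # Rectified stereo: disparity d = uL - uR, depth Z = fx * baseline / d.
    d = uL - uR
    Z = fx * baseline / d
    X = (uL - u0) * Z / fx           # lateral offset from the optical axis
    Y = (vL - v0) * Z / fx           # vertical offset (image-down positive)
    return X, Y, Z

# Illustrative values: fx = 700 px, baseline = 0.12 m, principal point (320, 240)
X, Y, Z = triangulate(uL=400.0, vL=300.0, uR=380.0,
                      fx=700.0, baseline=0.12, u0=320.0, v0=240.0)
```

Applying this to the matched pixel coordinates of each auxiliary point yields the four world-coordinate triples P11…P22 above.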
5. A binocular camera-based gradient real-time prediction system, characterized in that it comprises:
A binocular camera for acquiring a front lane line image,
a processor comprising
The calibration module is used for acquiring internal parameters and external parameters of the left binocular camera and the right binocular camera through camera calibration; correcting the left image and the right image to enable the two images to be located on the same plane and to be parallel to each other;
the lane line characteristic point image acquisition module is used for making two auxiliary lines vertical to a lane curve, the auxiliary lines and the lane lines are intersected at four reference points, and image coordinates of lane line characteristic points are acquired by a lane line detection method;
the lane line fitting module is used for fitting the lane line characteristic points into a quadratic function curve segment by a least square method; calculating the coordinates of the four reference points in a pixel coordinate system through a lane line fitting equation and an auxiliary line equation;
the depth detection module is used for obtaining coordinates of a world coordinate system of four reference points through depth measurement;
and the gradient calculation module is used for calculating and outputting the gradient value of the front road by using the coordinate values of the world coordinate systems of the four reference points as input quantities through a prestored gradient solving formula.
And the output module is used for outputting the slope values of the relative transverse slope and the longitudinal slope of the road in front.
6. A computer-readable storage medium, storing a computer program comprising program instructions, which when executed by a processor, cause the processor to perform the method of any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110805290.7A CN113345035A (en) | 2021-07-16 | 2021-07-16 | Binocular camera-based gradient real-time prediction method and system and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113345035A true CN113345035A (en) | 2021-09-03 |
Family
ID=77479830
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110805290.7A Pending CN113345035A (en) | 2021-07-16 | 2021-07-16 | Binocular camera-based gradient real-time prediction method and system and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113345035A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114056345A (en) * | 2021-12-23 | 2022-02-18 | 西安易朴通讯技术有限公司 | Safe driving prompting method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110569704B (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
US10970566B2 (en) | Lane line detection method and apparatus | |
CN106096525B (en) | A kind of compound lane recognition system and method | |
CN107506711B (en) | Convolutional neural network-based binocular vision barrier detection system and method | |
US8391555B2 (en) | Lane recognition apparatus for vehicle, vehicle thereof, and lane recognition program for vehicle | |
EP3007099B1 (en) | Image recognition system for a vehicle and corresponding method | |
CN107305632B (en) | Monocular computer vision technology-based target object distance measuring method and system | |
CN110647850A (en) | Automatic lane deviation measuring method based on inverse perspective principle | |
RU2568777C2 (en) | Device to detect moving bodies and system to detect moving bodies | |
CN104899554A (en) | Vehicle ranging method based on monocular vision | |
EP2126843A2 (en) | Method and system for video-based road lane curvature measurement | |
JP2010224924A (en) | Image processing device | |
EP3046049A1 (en) | Complex marking determining device and complex marking determining method | |
Liu et al. | Development of a vision-based driver assistance system with lane departure warning and forward collision warning functions | |
CN111723778B (en) | Vehicle distance measuring system and method based on MobileNet-SSD | |
EP3667612A1 (en) | Roadside object detection device, roadside object detection method, and roadside object detection system | |
CN108108667A (en) | A kind of front vehicles fast ranging method based on narrow baseline binocular vision | |
WO2018153915A1 (en) | Determining an angular position of a trailer without target | |
CN112927283A (en) | Distance measuring method and device, storage medium and electronic equipment | |
Raguraman et al. | Intelligent drivable area detection system using camera and LiDAR sensor for autonomous vehicle | |
CN110398226A (en) | A kind of monocular vision distance measuring method for advanced DAS (Driver Assistant System) | |
CN113345035A (en) | Binocular camera-based gradient real-time prediction method and system and computer-readable storage medium | |
CN113221739B (en) | Monocular vision-based vehicle distance measuring method | |
JP5091897B2 (en) | Stop line detector | |
CN117746357A (en) | Lane line identification method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||