CN110264525B - Camera calibration method based on lane line and target vehicle - Google Patents


Info

Publication number: CN110264525B (granted publication of application CN110264525A)
Application number: CN201910510879.7A
Authority: CN (China)
Prior art keywords: lane line, roll, angle, target vehicle, point
Legal status: Active
Inventors: 李方, 刘杨, 卢金波, 胡坤福
Applicant and current assignee: Huizhou Desay SV Intelligent Transport Technology Research Institute Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/269: Analysis of motion using gradient-based methods
    • G06T 7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256: Lane; Road marking


Abstract

The invention relates to the field of camera calibration methods, and provides a camera calibration method based on lane lines and target vehicles.

Description

Camera calibration method based on lane line and target vehicle
Technical Field
The invention relates to the field of camera calibration methods, in particular to a camera calibration method based on a lane line and a target vehicle.
Background
In image measurement and machine vision applications, determining the correlation between the three-dimensional position of a point on an object's surface in space and its corresponding point in the image requires establishing a geometric model of camera imaging; the parameters of this geometric model are the camera parameters. In most cases these parameters must be obtained through experiment and calculation, and this process of solving for them is called camera calibration (or video camera calibration).
Camera calibration is essential for recovering the three-dimensional information of an object from a two-dimensional image: in the camera's imaging geometry there is a correspondence between a space point and its image point on the image plane, and this correspondence is determined by the camera parameters (both internal and external). Broadly, camera calibration falls into two categories, the traditional camera calibration method and the camera self-calibration method, as follows:
1. The traditional camera calibration method (static calibration) is relatively simple: the internal and external parameters of the camera are calculated from the imaging position of a calibration plate on the image plane. However, because the vehicle jolts while driving, the target distance obtained from a static calibration result carries a large error and can hardly meet the requirements.
2. The camera self-calibration method needs no calibration plate and mainly exploits constraints from camera motion. An existing dynamic calibration method calibrates using the distance between the vehicle and parallel lines together with the vanishing point, but it requires many conditions, is only suitable for specific roads, and has low universality.
At present, a domestic method has been proposed that calibrates using three straight lines on a flat road surface, but it requires three parallel lines on the ground plane with known spacing between them; these many restrictions make the calibration process cumbersome.
Disclosure of Invention
The invention provides a camera calibration method based on a lane line and a target vehicle, which solves the technical problems of the existing calibration technology: large calculation error, many limiting conditions, and poor algorithm universality.
In order to solve the technical problems, the invention provides a camera calibration method based on a lane line and a target vehicle, which comprises the following steps:
s1, obtaining a current road surface image, obtaining the height of a front-view camera from the ground through static calibration, and solving the theoretical distance between the front-view camera and a target vehicle according to internal reference and external reference angles of the front-view camera;
s2, extracting lane lines in the road surface image, and correcting the external reference angle parameter of the front-view camera by combining a preset correction formula;
s3, calculating the actual distance of the target vehicle according to the extracted optical flow information in the frame of the target vehicle;
and S4, dynamically adjusting the angle parameter of the forward-looking camera according to the actual distance of the target vehicle and the theoretical distance of the target vehicle.
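As an illustrative sketch (not part of the patent text), the theoretical distance of step S1 can be derived from the ground-plane geometry of a pinhole camera at a known height above a flat road; the function name and parameters below are assumptions for illustration only.

```python
import math

def ground_distance(v, cy, fy, h, pitch):
    """Theoretical distance to a target's ground contact point.

    Assumes a pinhole camera at height h (metres) above a flat road,
    tilted down by `pitch` (radians).  v is the pixel row of the
    target's ground contact point, cy the principal-point row, fy the
    focal length in pixels.  Returns the distance along the road.
    """
    # Ray angle below the horizon: camera pitch plus the angle
    # subtended by the pixel offset from the principal point.
    angle = pitch + math.atan2(v - cy, fy)
    if angle <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return h / math.tan(angle)
```

With this model a small pitch error changes the estimated distance noticeably at long range, which is why the patent corrects the external reference angles dynamically.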
As an embodiment of the present invention, when the road surface image includes at least two mutually parallel lane lines, the step S2 specifically includes the steps of:
s21, performing curve fitting according to world coordinates of pixel points of a left lane line and a right lane line of a lane where a vehicle is located, and intercepting a part of a fitted lane line curve to establish a tangent equation;
and S22, deriving constraint equations of the pitch angle, the yaw angle and the roll angle according to the tangent equation, and solving the pitch angle, the yaw angle and the roll angle through the constraint equations.
S23, inputting multiple frames of road surface images to calculate multiple groups of intermediate variables (q_0, q_1, q_2) and obtain the corresponding groups of tangent values tan(pitch_i) and tan(roll_i), and finally solving tan(pitch_i) and tan(roll_i) through the arctangent function to obtain the correction values of the pitch angle pitch_i and the roll angle roll_i.
In the step S21, the tangent equations of the left lane line and the right lane line are respectively as follows:
x = a_0 + a_1*y (1);
x = b_0 + b_1*y (2);
wherein x represents the abscissa and y the ordinate; a_0 is the constant term and a_1 the first-order coefficient of the left lane line tangent equation; b_0 is the constant term and b_1 the first-order coefficient of the right lane line tangent equation.
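The tangent coefficients of equations (1) and (2) can be obtained by a least-squares line fit; here is a minimal sketch (the helper name is illustrative), noting the patent's convention that the lateral coordinate x is expressed as a function of the longitudinal coordinate y.

```python
import numpy as np

def fit_lane_tangent(xs, ys):
    """Fit the lane-line tangent x = a_0 + a_1*y by least squares.

    xs, ys are world coordinates of lane-line pixel points; y is
    treated as the independent variable, matching eqs. (1)/(2).
    """
    # polyfit returns coefficients highest degree first: [a_1, a_0]
    a1, a0 = np.polyfit(ys, xs, 1)
    return a0, a1
```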
In the step S22, the constraint equations of the pitch angle, the yaw angle and the roll angle are as follows:
q_1*tan(pitch_i) + q_2*tan(roll_i) = q_0 (3);
yaw_i = tan⁻¹(a_1) (4);
wherein:
q_0 = (b_1 - a_1)*cos(roll_{i-1})*h (5);
q_1 = (b_0 - a_0) (6);
q_2 = (a_0*b_1 - a_1*b_0)*cos(roll_{i-1}) (7);
wherein pitch_i is the currently corrected pitch angle, yaw_i the currently corrected yaw angle, roll_i the currently corrected roll angle, and roll_{i-1} the initial value or the previously calculated value of the camera roll angle; h is the height of the front-view camera above the ground; q_0, q_1 and q_2 are intermediate variables.
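Since each frame contributes one linear equation (3) in the unknowns tan(pitch) and tan(roll), stacking several frames gives an over-determined system; a sketch of one plausible way to solve it (the least-squares formulation is an assumption, the patent only says the arctangent recovers the angles):

```python
import numpy as np

def solve_pitch_roll(q0s, q1s, q2s):
    """Solve pitch and roll from several frames' intermediate variables.

    Each frame i gives  q_1*tan(pitch) + q_2*tan(roll) = q_0 (eq. (3)),
    so two or more frames determine (tan(pitch), tan(roll)); the
    angles are then recovered with the arctangent function.
    """
    A = np.column_stack([q1s, q2s])          # one row per frame
    b = np.asarray(q0s, dtype=float)
    (tan_pitch, tan_roll), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.arctan(tan_pitch), np.arctan(tan_roll)
```

The yaw angle needs no such system: by eq. (4) it follows directly from the left lane line's slope, yaw_i = arctan(a_1).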
Preferably, when the lane lines include an adjacent lane line, the value of the roll angle roll_i may be further calculated by the following steps:
s221, calibrating a rolling angle correction mode of the front-view camera relative to the left lane line and the right lane line according to the left lane line and the right lane line of the lane where the vehicle is located and the left adjacent lane line or the right adjacent lane line.
Two points L1, L2 are arbitrarily selected on the left lane line, two points R1, R2 on the right lane line, and two points N1, N2 on the left adjacent lane line or the right adjacent lane line;
when the left adjacent lane line is selected, the correction formula of the roll angle roll_i is as follows:
roll_i = (N1.x + R1.x + N2.x + R2.x - 2*L1.x - 2*L2.x)*beta + roll_{i-1} (8);
when the right adjacent lane line is selected, the correction formula of the roll angle roll_i is as follows:
roll_i = (N1.x + L1.x + N2.x + L2.x - 2*R1.x - 2*R2.x)*beta + roll_{i-1} (9);
wherein, L1.x is a transverse coordinate value of the point L1, L1.y is a longitudinal coordinate value of the point L1, L2.x is a transverse coordinate value of the point L2, and L2.y is a longitudinal coordinate value of the point L2; r1.x is the transverse coordinate value of the point R1, R1.y is the longitudinal coordinate value of the point R1, R2.x is the transverse coordinate value of the point R2, and R2.y is the longitudinal coordinate value of the point R2; n1.x is a lateral coordinate value of point N1, N1.y is a longitudinal coordinate value of point N1, N2.x is a lateral coordinate value of point N2, N2.y is a longitudinal coordinate value of point N2, and beta is the set roll angle correction rate.
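Formulas (8) and (9) reduce to a weighted update on lateral coordinates only; a minimal sketch (function and argument names are illustrative):

```python
def correct_roll(left, right, adj, beta, roll_prev, adj_side="left"):
    """Roll-angle correction from lane-line sample points (eqs. (8)/(9)).

    left, right, adj each hold two (x, y) points sampled on the ego
    lane's left line, right line, and an adjacent lane line; beta is
    the set roll correction rate, roll_prev the previous roll value.
    Only the lateral (x) coordinates enter the formula.
    """
    (L1, L2), (R1, R2), (N1, N2) = left, right, adj
    if adj_side == "left":   # eq. (8): adjacent line lies to the left
        delta = N1[0] + R1[0] + N2[0] + R2[0] - 2 * L1[0] - 2 * L2[0]
    else:                    # eq. (9): adjacent line lies to the right
        delta = N1[0] + L1[0] + N2[0] + L2[0] - 2 * R1[0] - 2 * R2[0]
    return delta * beta + roll_prev
```

Note that for correctly projected, equally spaced lane lines the bracketed sum vanishes, so the roll estimate is left unchanged; unequal projected lane widths (a roll error) drive the correction.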
The step S3 specifically includes the steps of:
s31, marking a first tracking point of a frame of road surface image;
s32, marking a second tracking point of the next frame of road surface image matched with the first tracking point;
s33, calculating the pixel width ratio between the one-frame road surface image and the next-frame road surface image;
and S34, obtaining the actual distance of the target vehicle of the next frame of road surface according to the pixel width ratio and the actual distance of the target vehicle of the one frame of road surface image.
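Step S34 rests on the pinhole relation that a target's pixel width is inversely proportional to its distance; a one-line sketch of the propagation (names are illustrative):

```python
def update_distance(z0, w0, w1):
    """Propagate the target distance between consecutive frames (S3).

    Under a pinhole model the target's pixel width is inversely
    proportional to its distance, so with tracked pixel widths
    w0 (previous frame) and w1 (next frame):  Z1 = Z0 * w0 / w1.
    Equivalently, the width ratio S = w1 / w0 gives Z1 = Z0 / S.
    """
    return z0 * w0 / w1
```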
The step S4 specifically includes:
It is judged whether the distance difference between the actual distance of the target vehicle and the theoretical distance of the target vehicle is greater than a preset distance correction threshold; if so, the angle parameters are adjusted, and the adjusted angle parameters are output once the angle parameter difference obtained by two consecutive corrections is smaller than the set correction threshold.
That the angle parameter difference of two consecutive corrections is smaller than the set correction threshold specifically means:
|pitch_i - pitch_{i-1}| < Th, |yaw_i - yaw_{i-1}| < Th and |roll_i - roll_{i-1}| < Th, where Th is the set correction threshold.
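The stopping rule can be sketched as a single predicate (an illustrative helper, not from the patent):

```python
def converged(curr, prev, th):
    """Patent's stopping rule: every external reference angle changed
    by less than the correction threshold Th between two consecutive
    corrections.  curr and prev are (pitch, yaw, roll) tuples in
    radians; differences are compared in absolute value.
    """
    return all(abs(c - p) < th for c, p in zip(curr, prev))
```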
Preferably, a deep learning method is adopted to extract lane line and target vehicle position information in the road surface image;
preferably, an OpenCV cross-platform computer vision library is adopted to extract optical flow information of a target vehicle in a road surface image;
preferably, the filtering process is performed using a Kalman filter.
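The patent does not specify the Kalman filter's model; since the filtered quantity here is a single scalar (the pixel width ratio), a minimal 1-D constant-state Kalman filter suffices as a sketch (the process/measurement variances q and r are assumed values):

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter for smoothing the per-frame pixel
    width ratio.  A stand-in for the patent's unspecified filter;
    q and r are assumed process/measurement variances.
    """
    def __init__(self, x0=1.0, p0=1.0, q=1e-4, r=1e-2):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        self.p += self.q                # predict (constant-state model)
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct with measurement z
        self.p *= (1.0 - k)
        return self.x
```

Feeding it one median width ratio per frame pair yields the smoothed ratio S used in the second correction.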
In this method, a deep learning method extracts the lane line and target vehicle position information in the road surface image; a correction formula combining the lane line information with the static calibration result, together with a correction threshold, dynamically adjusts the camera's external reference angles; OpenCV extracts the optical flow information of the target vehicle inside its detection frame, and the change of this optical flow information between consecutive frames is used to dynamically adjust the external reference angles a second time. A more accurate camera external reference angle is thus solved, which improves the calculation precision and accuracy of the camera calibration, reduces the requirements on the calibration scene, and enhances the stability and universality of the algorithm.
Drawings
Fig. 1 is a system flowchart of a camera calibration method based on a lane line and a target vehicle according to an embodiment of the present invention;
FIG. 2 is a flowchart of the operation provided in embodiment 1 of the present invention;
FIG. 3 is a flow chart of an optical flow adjustment algorithm provided by an embodiment of the present invention;
FIG. 4 is a diagram of a process for implementing technical effects provided by an embodiment of the present invention;
fig. 5 is a diagram of technical effects provided by an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The drawings are given solely for the purpose of illustration and are not to be construed as limiting the invention, since many variations are possible without departing from its spirit and scope.
The camera calibration method based on lane lines and a target vehicle provided by the embodiment of the invention is suitable for lane lines with small road-surface curvature change, such as national-standard expressways and straight urban roads, and is not suitable for the lane lines of curves or connecting road sections such as ramps. As shown in fig. 1, in this embodiment the camera calibration method comprises the following steps:
s1, obtaining a current road surface image, obtaining the height of a front-view camera from the ground through static calibration, and solving the theoretical distance between the front-view camera and a target vehicle according to internal reference and external reference angles of the front-view camera;
s2, extracting lane lines in the road surface image, and correcting the external reference angle parameter of the front-view camera by combining a preset correction formula;
s3, calculating the actual distance of the target vehicle according to the extracted optical flow information in the frame of the target vehicle;
and S4, dynamically adjusting the angle parameter of the forward-looking camera according to the actual distance of the target vehicle and the theoretical distance of the target vehicle.
Referring to fig. 2, as an embodiment of the present invention, when the road surface image includes at least two mutually parallel lane lines, the step S2 specifically includes the steps of:
s21, performing curve fitting according to world coordinates of pixel points of a left lane line and a right lane line of a lane where a vehicle is located, and intercepting a part of a fitted lane line curve to establish a tangent equation;
and S22, deriving constraint equations of the pitch angle, the yaw angle and the roll angle according to the tangent equation, and solving the pitch angle, the yaw angle and the roll angle through the constraint equations.
S23, inputting multiple frames of road surface images to calculate multiple groups of intermediate variables (q_0, q_1, q_2) and obtain the corresponding groups of tangent values tan(pitch_i) and tan(roll_i), and finally solving tan(pitch_i) and tan(roll_i) through the arctangent function to obtain the correction values of the pitch angle pitch_i and the roll angle roll_i.
In the step S21, the tangent equations of the left lane line and the right lane line are respectively as follows:
x = a_0 + a_1*y (1);
x = b_0 + b_1*y (2);
wherein x represents the abscissa and y the ordinate; a_0 is the constant term and a_1 the first-order coefficient of the left lane line tangent equation; b_0 is the constant term and b_1 the first-order coefficient of the right lane line tangent equation.
In step S22, the constraint equations of the pitch angle, the yaw angle and the roll angle are as follows:
q_1*tan(pitch_i) + q_2*tan(roll_i) = q_0 (3);
yaw_i = tan⁻¹(a_1) (4);
wherein:
q_0 = (b_1 - a_1)*cos(roll_{i-1})*h (5);
q_1 = (b_0 - a_0) (6);
q_2 = (a_0*b_1 - a_1*b_0)*cos(roll_{i-1}) (7);
wherein pitch_i is the currently corrected pitch angle, yaw_i the currently corrected yaw angle, roll_i the currently corrected roll angle, and roll_{i-1} the initial value or the previously calculated value of the camera roll angle; h is the height of the front-view camera above the ground; q_0, q_1 and q_2 are intermediate variables.
Preferably, when the lane lines include an adjacent lane line, the value of the roll angle roll_i may be further calculated by the following steps:
s221, calibrating a rolling angle correction mode of the front-view camera relative to the left lane line and the right lane line according to the left lane line and the right lane line of the lane where the vehicle is located and the left adjacent lane line or the right adjacent lane line.
Two points L1, L2 are arbitrarily selected on the left lane line, two points R1, R2 on the right lane line, and two points N1, N2 on the left adjacent lane line or the right adjacent lane line;
when the left adjacent lane line is selected, the correction formula of the roll angle roll_i is as follows:
roll_i = (N1.x + R1.x + N2.x + R2.x - 2*L1.x - 2*L2.x)*beta + roll_{i-1} (8);
when the right adjacent lane line is selected, the correction formula of the roll angle roll_i is as follows:
roll_i = (N1.x + L1.x + N2.x + L2.x - 2*R1.x - 2*R2.x)*beta + roll_{i-1} (9);
wherein, L1.x is a transverse coordinate value of the point L1, L1.y is a longitudinal coordinate value of the point L1, L2.x is a transverse coordinate value of the point L2, and L2.y is a longitudinal coordinate value of the point L2; r1.x is the transverse coordinate value of the point R1, R1.y is the longitudinal coordinate value of the point R1, R2.x is the transverse coordinate value of the point R2, and R2.y is the longitudinal coordinate value of the point R2; n1.x is a lateral coordinate value of point N1, N1.y is a longitudinal coordinate value of point N1, N2.x is a lateral coordinate value of point N2, N2.y is a longitudinal coordinate value of point N2, and beta is the set roll angle correction rate.
Referring to fig. 3, the step S3 specifically includes the steps of:
s31, marking a first tracking point of a frame of road surface image;
s32, marking a second tracking point of the next frame of road surface image matched with the first tracking point;
s33, calculating the pixel width ratio between the one-frame road surface image and the next-frame road surface image;
and S34, obtaining the actual distance of the target vehicle of the next frame of road surface according to the pixel width ratio and the actual distance of the target vehicle of the one frame of road surface image.
The step S4 specifically includes:
It is judged whether the distance difference between the actual distance of the target vehicle and the theoretical distance of the target vehicle is greater than a preset distance correction threshold; if so, the angle parameters are adjusted, and the adjusted angle parameters are output once the angle parameter difference obtained by two consecutive corrections is smaller than the set correction threshold.
That the angle parameter difference of two consecutive corrections is smaller than the set correction threshold specifically means:
|pitch_i - pitch_{i-1}| < Th, |yaw_i - yaw_{i-1}| < Th and |roll_i - roll_{i-1}| < Th, where Th is the set correction threshold.
Preferably, a deep learning method is adopted to extract lane line and target vehicle position information in the road surface image;
preferably, an OpenCV cross-platform computer vision library is adopted to extract optical flow information of a target vehicle in a road surface image;
preferably, the filtering process is performed using a Kalman filter.
Referring to fig. 1 to 3, the specific work flow of the camera calibration is as follows:
Firstly, a road surface image is acquired by the front-view camera mounted on the vehicle body; the external reference angles of the front-view camera are given an initial value of 0, and the theoretical distance between the front-view camera and the target vehicle is calculated by combining the internal reference parameters obtained directly from the front-view camera's technical manual; the height of the front-view camera above the ground is obtained through static calibration.
Secondly, the first correction is performed: a deep learning method extracts the lane lines and the target vehicle position information in the road surface image, and a detection frame is drawn around the target vehicle. First it is judged whether adjacent lane lines exist; if so, two of them are selected at random and points are taken on the selected lane lines together with the left and right lane lines; if not, a point is taken on each of the left and right lane lines, and the world coordinates of the selected points are calculated from the external reference angles of the first step. Then multiple frames of road surface images are input, and the world coordinates of the selected points, together with the initial roll value 0 or the previously obtained roll angle roll_{i-1}, are substituted into formulas (1)-(7) to dynamically correct the external reference angles pitch_i, yaw_i and roll_i respectively; that is, from the calculated groups of intermediate variables (q_0, q_1, q_2) the arctangent function solves tan(pitch_i) and tan(roll_i) to obtain the correction values of the pitch angle pitch_i, yaw angle yaw_i and roll angle roll_i. The correction is finished once the angle parameter difference of two successive corrections is smaller than the correction threshold Th.
Thirdly, the second correction is performed: the OpenCV cross-platform computer vision library extracts the optical flow information of the target vehicle in the road surface image, and two adjacent frames of road surface images are input together with the distance Z0 from the front-view camera to the target vehicle in the previous frame;
N groups of corresponding tracking points P1 are selected in the previous frame and the matching N groups of tracking points P2 in the next frame. The pixel distances between each pair of tracking points give N pixel widths w_0 for the previous frame and, correspondingly, N pixel widths w_1 for the next frame. The ratio S_src = w_1/w_0 between the two frames is calculated for each group, the median S_mid of the group of S_src values is taken, and S_mid is passed through a Kalman filter; the filtered value S is used as the final pixel width ratio between the two frames. The specific calculation is as follows:
S_src = w_1 / w_0;  S_mid = median{S_src};  S = Kalman(S_mid)
The pixel width ratio between the two frames is the reciprocal of the distance ratio, so the actual distance Z1 of the target vehicle in the next frame is obtained from S and Z0 (Z1 = Z0/S) and compared with the target distance Z_cam obtained from the camera external reference. Finally the camera external reference is adjusted by a progressive correction method: it is judged whether the difference between the actual distance and the theoretical distance of the target vehicle is greater than the preset distance correction threshold; if so, the angle parameters are adjusted continuously, and the adjusted angle parameters are output once the angle parameter difference of two consecutive corrections is smaller than the set correction threshold.
So far, the adjustment is finished.
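The patent specifies only that the angles are adjusted progressively until the distances agree; one plausible sketch of a single such step (the step size, threshold and direction heuristic are assumptions, and only the pitch angle is shown since it dominates the ground-plane distance):

```python
def adjust_pitch(pitch, z_cam, z_actual, step=1e-3, th=0.5):
    """One progressive-correction step of the second adjustment.

    If the model-derived distance z_cam disagrees with the optical
    flow distance z_actual by more than th (metres), nudge the pitch
    angle (radians) in the direction that reduces the discrepancy.
    Since ground distance shrinks as pitch grows, an overestimated
    z_cam means the pitch is too small and is increased.
    """
    if abs(z_cam - z_actual) <= th:
        return pitch
    return pitch + step if z_cam > z_actual else pitch - step
```

Iterating this until consecutive pitch updates fall below the correction threshold mirrors the stopping rule stated above.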
Preferably, when an adjacent lane line exists during the first correction: if a left adjacent lane line exists, the parameter data are input into the correction formula (8) preset by the system and the roll angle roll_i is dynamically corrected until the angle parameter difference of two successive corrections is smaller than the correction threshold Th; if a right adjacent lane line exists, the parameter data are input into the preset correction formula (9) and the roll angle roll_i is corrected in the same way. The corrected roll angle roll_i can be used to check the roll angle roll_i obtained from the above arctangent function, or substituted directly into the constraint formula to obtain the pitch angle pitch_i.
Referring to fig. 4, according to the principle of camera external reference angle calculation, when the lane lines are converted from image coordinates to world coordinates: if the pitch angle is too large or too small, as shown in diagram A of fig. 4, the lane lines converge or diverge (an "inner eight" or "outer eight" shape) in world coordinates; when the yaw angle is incorrect, as shown in diagrams B and C of fig. 4, the center line of the lane inclines to one side; when the roll angle is incorrect, as in diagram D of fig. 4, the lane lines have unequal widths in world coordinates.
When the camera external reference angles are all correct, as shown in diagram E of fig. 4, the lane lines projected from image coordinates to world coordinates are parallel to and equidistant from the lane center line and perpendicular to the road cross section.
Referring to fig. 5, it can be seen that the edges of the front truck's carriage are parallel or perpendicular to the sides of the rectangular detection frame; the carriage shows no angular offset in the image, and its four sides remain parallel, perpendicular and equidistant, preserving the correct stereoscopic impression.
The embodiment of the invention extracts the lane line and target vehicle position information in the road surface image with a deep learning method; a correction formula combining the lane line information with the static calibration result, together with a correction threshold, dynamically adjusts the camera's external reference angles; OpenCV extracts the optical flow information of the target vehicle inside its detection frame, and the change of this optical flow information between consecutive frames is used to dynamically adjust the external reference angles a second time. A more accurate camera external reference angle is thus solved, which improves the calculation precision and accuracy of the camera calibration, reduces the requirements on the calibration scene, and enhances the stability and universality of the algorithm.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. A camera calibration method based on a lane line and a target vehicle is characterized by comprising the following steps:
s1, obtaining a current road surface image, obtaining the height of a front-view camera from the ground through static calibration, and solving the theoretical distance between the front-view camera and a target vehicle according to internal reference and external reference angles of the front-view camera;
s2, extracting lane lines in the road surface image, and correcting the external reference angle parameter of the front-view camera by combining a preset correction formula;
s3, calculating the actual distance of the target vehicle according to the extracted optical flow information in the frame of the target vehicle;
s4, dynamically adjusting the angle parameter of the forward-looking camera according to the actual distance of the target vehicle and the theoretical distance of the target vehicle;
when the road surface image includes at least two mutually parallel lane lines and further includes an adjacent lane line, the step S2 specifically includes:
calibrating a rolling angle correction mode of the front-looking camera relative to a left lane line and a right lane line according to the left lane line and the right lane line of a lane where a vehicle is located and a left adjacent lane line or a right adjacent lane line;
two points L1, L2 are arbitrarily selected on the left lane line, two points R1, R2 on the right lane line, and two points N1, N2 on the left adjacent lane line or the right adjacent lane line;
when the left adjacent lane line is selected, the correction formula of the roll angle roll_i is as follows:
roll_i = (N1.x + R1.x + N2.x + R2.x - 2*L1.x - 2*L2.x)*beta + roll_{i-1} (8);
when the right adjacent lane line is selected, the correction formula of the roll angle roll_i is as follows:
roll_i = (N1.x + L1.x + N2.x + L2.x - 2*R1.x - 2*R2.x)*beta + roll_{i-1} (9);
wherein, L1.x is a transverse coordinate value of the point L1, L1.y is a longitudinal coordinate value of the point L1, L2.x is a transverse coordinate value of the point L2, and L2.y is a longitudinal coordinate value of the point L2; r1.x is the transverse coordinate value of the point R1, R1.y is the longitudinal coordinate value of the point R1, R2.x is the transverse coordinate value of the point R2, and R2.y is the longitudinal coordinate value of the point R2; n1.x is a lateral coordinate value of point N1, N1.y is a longitudinal coordinate value of point N1, N2.x is a lateral coordinate value of point N2, N2.y is a longitudinal coordinate value of point N2, and beta is the set roll angle correction rate.
2. The method for calibrating a camera based on a lane line and a target vehicle according to claim 1, wherein when the road surface image includes two lane lines parallel to each other, the step S2 specifically includes the steps of:
S21, performing curve fitting on the world coordinates of the pixel points of the left lane line and the right lane line of the lane where the vehicle is located, and intercepting a part of each fitted lane line curve to establish a tangent equation;
S22, deriving constraint equations of the pitch angle, the yaw angle and the roll angle from the tangent equations, and solving the pitch angle, the yaw angle and the roll angle through the constraint equations.
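Assuming the fitted lane line curve of step S21 is a quadratic in the longitudinal coordinate (the claim does not fix the curve model), the tangent coefficients can be sketched as follows; the helper name `tangent_at` and the coefficient names are illustrative:

```python
def tangent_at(c0, c1, c2, y0):
    """Step S21 sketch: from a fitted quadratic lane-line curve
    x = c0 + c1*y + c2*y**2, build the tangent line x = t0 + t1*y
    at the intercepted point y = y0.
    """
    t1 = c1 + 2.0 * c2 * y0                        # slope dx/dy at y0
    t0 = (c0 + c1 * y0 + c2 * y0 ** 2) - t1 * y0   # intercept of the tangent
    return t0, t1
```

Applied to the left and right fitted curves, this yields the (a_0, a_1) and (b_0, b_1) pairs used by the constraint equations of step S22.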
3. The method for calibrating a camera based on a lane line and a target vehicle as claimed in claim 2, wherein in step S21,
the tangent equations of the left lane line and the right lane line are respectively:
x = a_0 + a_1*y (1);
x = b_0 + b_1*y (2);
wherein x represents the abscissa and y represents the ordinate; a_0 is the constant term and a_1 the first-order coefficient of the left lane line tangent equation; b_0 is the constant term and b_1 the first-order coefficient of the right lane line tangent equation.
4. The method for calibrating a camera based on a lane line and a target vehicle as claimed in claim 3, wherein in step S22,
the constraint equations of the pitch angle, the yaw angle and the roll angle are as follows:
q_1*tan(pitch_i) + q_2*tan(roll_i) = q_0 (3);
yaw_i = tan^(-1)(a_1) (4);
wherein:
q_0 = (b_1 - a_1)*cos(roll_{i-1})*h (5);
q_1 = (b_0 - a_0) (6);
q_2 = (a_0*b_1 - a_1*b_0)*cos(roll_{i-1}) (7);
wherein pitch_i is the currently corrected pitch angle, yaw_i is the currently corrected yaw angle, and roll_i is the currently corrected roll angle; roll_{i-1} is the initial value or the previously calculated value of the camera roll angle; h is the height of the forward-looking camera above the ground; q_0, q_1 and q_2 are all intermediate variables.
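Formulas (4)-(7) can be sketched as plain functions; the helper names `constraint_coeffs` and `yaw_from_left_tangent` are illustrative and do not appear in the patent:

```python
import math

def constraint_coeffs(a0, a1, b0, b1, roll_prev, h):
    """Intermediate variables q0, q1, q2 of formulas (5)-(7).

    a0, a1 / b0, b1: tangent coefficients of the left/right lane
    line (x = a0 + a1*y, x = b0 + b1*y); roll_prev is the initial
    or previously calculated roll angle; h is the camera height.
    """
    q0 = (b1 - a1) * math.cos(roll_prev) * h          # formula (5)
    q1 = b0 - a0                                      # formula (6)
    q2 = (a0 * b1 - a1 * b0) * math.cos(roll_prev)    # formula (7)
    return q0, q1, q2

def yaw_from_left_tangent(a1):
    """Formula (4): the yaw angle is the arctangent of the left
    lane line's first-order tangent coefficient."""
    return math.atan(a1)
```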
5. The method for calibrating a camera based on a lane line and a target vehicle as claimed in claim 4, wherein the step S2 further comprises the steps of:
S23, inputting multiple frames of road surface images to calculate multiple groups of the intermediate variables q_0, q_1, q_2, solving for the corresponding sets of tangent values tan(pitch_i) and tan(roll_i), and finally applying the arctangent function to tan(pitch_i) and tan(roll_i) to obtain the correction values of the pitch angle pitch_i and the roll angle roll_i.
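Since constraint (3) is a single equation in the two unknowns tan(pitch_i) and tan(roll_i), multiple frames yield an overdetermined system. The sketch below solves it by least squares via the normal equations; the solver choice and the function name are assumptions, as the patent does not specify either:

```python
import math

def solve_pitch_roll(q_frames):
    """Least-squares solution of q1*tan(pitch) + q2*tan(roll) = q0
    over several frames, each contributing one (q0, q1, q2) tuple.
    Returns (pitch, roll) via the arctangent, as in step S23.
    """
    s11 = s12 = s22 = r1 = r2 = 0.0
    for q0, q1, q2 in q_frames:
        s11 += q1 * q1
        s12 += q1 * q2
        s22 += q2 * q2
        r1 += q1 * q0
        r2 += q2 * q0
    det = s11 * s22 - s12 * s12          # normal-equation determinant
    tan_pitch = (s22 * r1 - s12 * r2) / det
    tan_roll = (s11 * r2 - s12 * r1) / det
    return math.atan(tan_pitch), math.atan(tan_roll)
```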
6. The method for calibrating a camera based on a lane line and a target vehicle according to claim 1, wherein the step S3 specifically comprises the steps of:
S31, marking a first tracking point in one frame of road surface image;
S32, marking, in the next frame of road surface image, a second tracking point matched with the first tracking point;
S33, calculating the pixel width ratio between the one frame of road surface image and the next frame of road surface image;
S34, obtaining the actual distance of the target vehicle in the next frame of road surface image according to the pixel width ratio and the actual distance of the target vehicle in the one frame of road surface image.
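Step S34 amounts to scaling the previous distance by the pixel-width ratio. The sketch below assumes a pinhole model, under which the apparent pixel width of the target is inversely proportional to its distance; the function name is hypothetical:

```python
def next_frame_distance(dist_prev, width_prev_px, width_next_px):
    """Step S34 sketch: propagate the target distance between two
    consecutive frames using the pixel widths of the matched
    tracking regions (pinhole assumption: width ~ 1/distance).
    """
    return dist_prev * width_prev_px / width_next_px
```

For example, a target that grows from 40 px to 50 px wide has moved closer by the same ratio.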
7. The method for calibrating a camera based on a lane line and a target vehicle according to claim 1, wherein the step S4 specifically comprises:
and judging whether the distance difference between the actual distance of the target vehicle and the theoretical distance of the target vehicle is greater than a preset distance correction threshold; if so, adjusting the angle parameter, and outputting the adjusted angle parameter once the angle parameter difference obtained by two consecutive corrections is smaller than the set correction threshold.
8. The camera calibration method based on the lane line and the target vehicle according to claim 7, wherein
making the angle parameter difference of two consecutive corrections smaller than the set correction threshold specifically comprises:
making pitch_i - pitch_{i-1} < Th, yaw_i - yaw_{i-1} < Th and roll_i - roll_{i-1} < Th, where Th is the set correction threshold.
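The convergence test of claim 8 can be sketched as follows. Taking absolute differences is an assumption made here (the claim writes the differences without absolute values), and the function name is illustrative:

```python
def converged(prev, curr, th):
    """Claim 8 sketch: the adjusted angles are output once every
    angle parameter changes by less than the set threshold Th
    between two consecutive corrections.

    prev / curr are (pitch, yaw, roll) tuples from correction
    i-1 and correction i.
    """
    return all(abs(c - p) < th for p, c in zip(prev, curr))
```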
CN201910510879.7A 2019-06-13 2019-06-13 Camera calibration method based on lane line and target vehicle Active CN110264525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910510879.7A CN110264525B (en) 2019-06-13 2019-06-13 Camera calibration method based on lane line and target vehicle

Publications (2)

Publication Number Publication Date
CN110264525A CN110264525A (en) 2019-09-20
CN110264525B true CN110264525B (en) 2021-08-06

Family

ID=67918040

Country Status (1)

Country Link
CN (1) CN110264525B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111060924B (en) * 2019-12-02 2021-10-15 北京交通大学 SLAM and target tracking method
CN113706624A (en) * 2020-05-20 2021-11-26 杭州海康威视数字技术股份有限公司 Camera external parameter correction method and device and vehicle-mounted all-round-looking system
CN112509054B (en) * 2020-07-20 2024-05-17 重庆兰德适普信息科技有限公司 Camera external parameter dynamic calibration method
CN111862235B (en) * 2020-07-22 2023-12-29 中国科学院上海微系统与信息技术研究所 Binocular camera self-calibration method and system
CN112017249A (en) * 2020-08-18 2020-12-01 东莞正扬电子机械有限公司 Vehicle-mounted camera roll angle obtaining and mounting angle correcting method and device
CN112348752B (en) * 2020-10-28 2022-08-16 武汉极目智能技术有限公司 Lane line vanishing point compensation method and device based on parallel constraint
CN112529966B (en) * 2020-12-17 2023-09-15 豪威科技(武汉)有限公司 On-line calibration method of vehicle-mounted looking-around system and vehicle-mounted looking-around system thereof
CN112712708A (en) * 2020-12-28 2021-04-27 上海眼控科技股份有限公司 Information detection method, device, equipment and storage medium
CN112800986B (en) * 2021-02-02 2021-12-07 深圳佑驾创新科技有限公司 Vehicle-mounted camera external parameter calibration method and device, vehicle-mounted terminal and storage medium
CN112862899B (en) 2021-02-07 2023-02-28 黑芝麻智能科技(重庆)有限公司 External parameter calibration method, device and system for image acquisition equipment
CN113256742B (en) * 2021-07-15 2021-10-15 禾多科技(北京)有限公司 Interface display method and device, electronic equipment and computer readable medium
CN116228834B (en) * 2022-12-20 2023-11-03 阿波罗智联(北京)科技有限公司 Image depth acquisition method and device, electronic equipment and storage medium
CN116630436B (en) * 2023-05-17 2024-01-12 禾多科技(北京)有限公司 Camera external parameter correction method, camera external parameter correction device, electronic equipment and computer readable medium
CN117173257B (en) * 2023-11-02 2024-05-24 安徽蔚来智驾科技有限公司 3D target detection and calibration parameter enhancement method, electronic equipment and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103745452A (en) * 2013-11-26 2014-04-23 理光软件研究所(北京)有限公司 Camera external parameter assessment method and device, and camera external parameter calibration method and device
CN109859278A (en) * 2019-01-24 2019-06-07 惠州市德赛西威汽车电子股份有限公司 The scaling method and calibration system joined outside in-vehicle camera system camera

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9563951B2 (en) * 2013-05-21 2017-02-07 Magna Electronics Inc. Vehicle vision system with targetless camera calibration


Similar Documents

Publication Publication Date Title
CN110264525B (en) Camera calibration method based on lane line and target vehicle
CN109859278B (en) Calibration method and calibration system for camera external parameters of vehicle-mounted camera system
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110285793B (en) Intelligent vehicle track measuring method based on binocular stereo vision system
US11113843B2 (en) Method for calibrating the orientation of a camera mounted to a vehicle
US8498479B2 (en) Image processing device for dividing an image into a plurality of regions
CN110031829B (en) Target accurate distance measurement method based on monocular vision
CN111862234B (en) Binocular camera self-calibration method and system
CN107578430B (en) Stereo matching method based on self-adaptive weight and local entropy
CN111862235B (en) Binocular camera self-calibration method and system
KR20070051275A (en) Method for the automatic calibration of a stereovision system
CN110660105B (en) Calibration parameter optimization method and device for panoramic looking-around system
WO2008089964A2 (en) Method and system for video-based road lane curvature measurement
CN101887589A (en) Stereoscopic vision-based real low-texture image reconstruction method
CN112184792B (en) Road gradient calculation method and device based on vision
CN105718865A (en) System and method for road safety detection based on binocular cameras for automatic driving
CN106340045B (en) Calibration optimization method in three-dimensional facial reconstruction based on binocular stereo vision
CN111126306A (en) Lane line detection method based on edge features and sliding window
CN111862236B (en) Self-calibration method and system for fixed-focus binocular camera
CN104167001B (en) Large-visual-field camera calibration method based on orthogonal compensation
CN101980292B (en) Regular octagonal template-based board camera intrinsic parameter calibration method
CN110889874B (en) Error evaluation method for binocular camera calibration result
CN111402593B (en) Video traffic parameter acquisition method based on polynomial fitting
JP4053314B2 (en) Stereo image misalignment adjusting device, misalignment adjusting method, and stereo monitoring device
CN113643427A (en) Binocular ranging and three-dimensional reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant