CN111815713A - Method and system for automatically calibrating external parameters of camera - Google Patents
- Publication number: CN111815713A (application CN202010621335.0A)
- Authority: CN (China)
- Legal status: Withdrawn (as listed by Google Patents; an assumption, not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
Abstract
The invention provides a method and a system for automatically calibrating the external parameters of a camera, comprising the following steps: step 1: taking the image lane-line information as the input and the lane-line information in the world coordinate system as the ground truth, and setting a loss function; step 2: back-propagating through the loss function, iteratively updating the camera external parameters, and solving for the optimal camera external parameters. The method is simple to operate, imposes few restrictions on the scene, and supports real-time on-site calibration; it reduces manual intervention and is convenient, fast, and accurate.
Description
Technical Field
The invention relates to the technical field of camera calibration, in particular to a method and a system for automatically calibrating camera external parameters.
Background
Camera extrinsic calibration describes the transformation between the camera coordinate system and other coordinate systems (such as the vehicle-body coordinate system). In computer vision applications, extrinsic calibration is a key step: its accuracy determines whether a vision system can reliably perform functions that bridge two and three dimensions, such as localization, ranging, and detection.
Taking the extrinsics between the camera coordinate system and the vehicle-body coordinate system as an example, the traditional calibration method establishes a world coordinate system by means of a calibration board: the transformation between the vehicle-body and world coordinate systems is fixed (for example, by parking the vehicle at a fixed spot), and the camera intrinsics and the world-to-camera extrinsics are determined from the board's image-plane projection and its known grid size. The camera-to-body extrinsics can then be computed through the world coordinate system as an intermediary, using the fixed body-to-world transformation. The drawback of this method is that it must be carried out in a dedicated scene: a site reserved for extrinsic calibration, a specific reference (such as a calibration board) prepared in that site, and a vehicle calibration position fixed in advance. It is cumbersome and unsuitable for on-the-spot, real-time operation.
To address the shortcomings of this method, the invention provides a method for automatically calibrating camera external parameters.
Patent document CN109191531A (application number 201810851930.6) discloses an automatic extrinsic calibration method for a rear vehicle-mounted camera based on lane-line detection, comprising the following steps: obtaining the steering-wheel angle of the vehicle, and performing the calibration operation when the absolute value of the steering-wheel angle φ is less than 2° and the inter-frame change of φ is less than 1°; taking at least two frames, denoted the T0 frame and the T1 frame, where T0 and T1 are the acquisition times of the two frames, the interval between them is (T1 - T0), and for convenience of presentation T0 = 0; calculating the intersection point P of L1F and L2F, computing Rx and Ry from the position of P in the F picture, setting the lane-line width W with W ∈ Sw, calculating tx from the current parameters, defining a matrix to obtain the vehicle-speed parameter V, defining a matrix to compute the SSD parameter, and taking the resulting Rx, Ry, Rz, tx and ty as the optimal parameters sought.
Disclosure of Invention
In view of the shortcomings in the prior art, it is an object of the present invention to provide a method and system for automatically calibrating external parameters of a camera.
According to the invention, the method for automatically calibrating the camera external parameters comprises the following steps:
step 1: taking the image lane-line information as the input and the lane-line information in the world coordinate system as the ground truth, and setting a loss function;
step 2: back-propagating through the loss function, iteratively updating the camera external parameters, and solving for the optimal camera external parameters.
Preferably, automatic calibration is performed on a road section covered by the world coordinate system, with an initial camera extrinsic estimate used as the starting value to be updated;
the camera external parameters comprise a rotation matrix and a translation matrix, and the initial extrinsics serve as the solving initial values, wherein the rotation matrix may be expressed as a 3 x 3 matrix, Euler angles, or a quaternion;
data are collected from the vehicle to be calibrated, and the camera extrinsics are solved automatically by back-propagating through the loss function; the solution updates only the rotation matrix in the initial extrinsics.
Preferably, the loss function measures the similarity between the image lane line, after transformation by the camera intrinsics and extrinsics and by the body-to-world extrinsics, and the corresponding lane line in the world coordinate system;
preferably, based on this similarity, the loss function may include the integrated distance error between the corresponding curves or straight lines and the integrated angular error of their tangent directions.
Preferably, the loss function simultaneously places an effective constraint on all parameters of the rotation matrix in the camera extrinsics;
these parameters are the three Euler angles: yaw, pitch, and roll.
Preferably, iteratively updating the camera extrinsic parameters is performed by gradient descent.
Preferably, constraints on the individual parameters are added to the loss function, subject to the distance error and slope error of the corresponding straight lines staying below preset thresholds; the constraint methods include:
- constraining the camera extrinsics through the parallelism and spacing of the left and right lane lines;
- or adding a stop line to constrain the pitch angle and a utility pole to constrain the roll angle;
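The left/right lane-line constraint above can be sketched as a residual that penalizes deviation from parallelism and from the expected lane spacing after projection to the world frame. This is an illustrative reading, not the patent's own formula; the line-fitting step, the `lane_width` parameter, and all names are assumptions:

```python
import numpy as np

def lane_pair_residual(left_pts, right_pts, lane_width):
    """Penalty on a projected lane-line pair: the two fitted lines should be
    parallel and lane_width apart. Points are Nx2 arrays in the world plane."""
    ml, bl = np.polyfit(left_pts[:, 0], left_pts[:, 1], 1)    # slope, intercept (left)
    mr, br = np.polyfit(right_pts[:, 0], right_pts[:, 1], 1)  # slope, intercept (right)
    parallel_err = abs(ml - mr)                               # slopes should match
    # distance between two (near-)parallel lines y = m*x + b
    spacing_err = abs(abs(bl - br) / np.sqrt(1.0 + ml * ml) - lane_width)
    return parallel_err + spacing_err
```

A correctly projected pair (e.g. lines at y = ±1.75 m for a 3.5 m lane) yields a residual near zero, while a pitch error that makes the lines converge inflates it.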
The input is a lane-line point (u, v) in the two-dimensional pixel coordinate system, and the corresponding world-coordinate point (X, Y, Z) is solved through formulas (1) and (2), where fx, fy, u0, v0 are the camera intrinsics; Rc, Tc are the camera extrinsics, i.e. the rotation and translation matrices between the camera and vehicle-body coordinate systems; Rb, Tb are the rotation and translation matrices between the vehicle-body and world coordinate systems; and Zc is the depth of the point (u, v) in the camera coordinate system. The input (u, v), the camera intrinsics fx, fy, u0, v0, the body pose Rb, Tb, and Zb, Tc are all known values. With the lane line as input and the origin of the body coordinate system at the projection of the rear-wheel center on the ground, the lane line lies on the road surface, so Zb is always 0; Tc is obtained from a measurement reference, and Rc is the parameter to be updated.
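Formulas (1) and (2) themselves are not reproduced in this text (they appear as images in the original patent), but the symbol definitions above suffice for a hedged sketch of the back-projection they describe. The composition conventions P_body = Rc·P_cam + Tc and P_world = Rb·P_body + Tb are assumptions, as is packing the intrinsics into a matrix K; the ground constraint Zb = 0 fixes the unknown depth Zc:

```python
import numpy as np

def pixel_to_world(u, v, K, R_c, T_c, R_b, T_b):
    """Back-project a lane-line pixel (u, v) to a world point (X, Y, Z).

    K holds fx, fy, u0, v0; R_c, T_c map camera to body and R_b, T_b map
    body to world (assumed conventions). The road-plane constraint Z_b = 0
    determines the depth Z_c along the viewing ray.
    """
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
    Z_c = -T_c[2] / (R_c @ d)[2]                  # pick the depth that puts Z_b at 0
    P_body = Z_c * (R_c @ d) + T_c                # lane-line point in the body frame
    return R_b @ P_body + T_b                     # body -> world
```

With a forward-looking camera mounted 1.5 m above the ground, a pixel below the principal point back-projects to a point on the road ahead of the vehicle, with its world Z component equal to zero as required.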
Preferably, after (X, Y, Z), i.e. the lane-line point in the world coordinate system, is obtained through formulas (1) and (2), the loss function of formula (3) is expressed as:
loss = loss1 + loss2 + loss3 + loss4…………(3)
loss1 is the distance between the left image lane line, after the intrinsic and extrinsic transformations, and the left lane line in the world coordinate system; loss2 is the corresponding distance for the right image lane line and the right lane line in the world coordinate system; loss3 is the slope error between the transformed left image lane line and the left lane line in the world coordinate system; loss4 is the corresponding slope error for the right side.
Preferably, formula (4) is expressed as:
loss1 and loss2 take the point-to-line distance as the offset error: the distance from each world-coordinate point obtained by conversion through formulas (1) and (2) to the corresponding ground-truth line, where A, B, C are the coefficients of the straight-line equation fitted to the ground truth, n is the number of input points, and (xi, yi) are the world-coordinate points obtained from the input through formulas (1) and (2);
equation (5) is expressed as:
loss3 and loss4 compute the direction error through the slope: the average slope of each of the two straight lines is computed by the cumulative-slope formula, giving the slope error between them, where (xi, yi) are the plane coordinates in the world coordinate system obtained from the input by the transformation of formulas (1) and (2), (xi', yi') are the plane coordinates of the ground truth, and n and n1 are the numbers of input and ground-truth points respectively;
Formulas (4) and (5) show that the joint constraints on offset and direction describe the similarity between two straight lines.
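Formulas (4) and (5) are likewise not reproduced in this text, so the sketch below is one plausible discretized reading of the two terms just described, with all names illustrative:

```python
import numpy as np

def offset_loss(pts, A, B, C):
    """Formula (4) sketch: mean point-to-line distance |A*x + B*y + C| / sqrt(A^2 + B^2)
    over the n converted input points; (A, B, C) fit the ground-truth line."""
    x, y = pts[:, 0], pts[:, 1]
    return np.mean(np.abs(A * x + B * y + C)) / np.hypot(A, B)

def slope_loss(pts, pts_true):
    """Formula (5) sketch: difference between the segment-averaged ("cumulative")
    slopes of the converted input points and the n1 ground-truth points."""
    def avg_slope(p):
        return np.mean(np.diff(p[:, 1]) / np.diff(p[:, 0]))
    return abs(avg_slope(pts) - avg_slope(pts_true))
```

Both terms vanish when the projected points fall exactly on the ground-truth line with matching direction, which is the coincidence condition the calibration drives toward.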
Preferably, the solution draws on deep learning: the loss function is driven by gradient descent to converge to a minimum within a certain range, so that the input, after a series of transformations, predicts the ground truth, which in the training stage is called the label;
in this automatic solving method, the input is the straight line fitted to the image lane line, i.e. two-dimensional points in the pixel coordinate system, and the labels are the corresponding lane lines in the world coordinate system, i.e. three-dimensional points. After the loss function is set, it is differentiated in reverse iteratively to obtain the gradient with respect to the camera extrinsics; the gradient may be computed by hand or with a deep-learning tool such as pytorch, and the rotation matrix in the initial extrinsics is updated at a preset learning rate. The learning rate may be a fixed value, an adaptive learning rate, or a scheduled (decaying) learning rate;
the expression is as follows:
yaw = yaw - lr * yaw_grad…………(6)
pitch = pitch - lr * pitch_grad…………(7)
roll = roll - lr * roll_grad…………(8)
With the Euler angles (yaw, pitch, roll) as the representation of the rotation matrix, the corresponding gradients yaw_grad, pitch_grad and roll_grad are solved at each step and the current values are updated with the learning rate lr, until the value of the loss function converges to a minimum within a certain range. At that point the similarity between the image lane line, transformed by the camera intrinsics, the updated camera extrinsics and the body-to-world extrinsics, and the corresponding lane line in the world coordinate system meets a preset threshold, and the solved camera extrinsics are the optimum.
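The update rule of formulas (6)-(8) can be sketched as plain gradient descent on the three Euler angles. The patent leaves the gradient computation open (by hand, or via a tool such as pytorch); the central-difference estimate below is one stand-in for it, and `loss_fn` is a placeholder for the loss of formula (3):

```python
import numpy as np

def update_angles(angles, loss_fn, lr=0.1, steps=200, eps=1e-6):
    """Iterate angle -= lr * angle_grad for (yaw, pitch, roll), per formulas (6)-(8).

    loss_fn maps a (yaw, pitch, roll) array to the scalar loss; gradients are
    estimated here by central differences as a placeholder for autograd.
    """
    a = np.asarray(angles, dtype=float)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):          # yaw_grad, pitch_grad, roll_grad in turn
            d = np.zeros(3)
            d[i] = eps
            grad[i] = (loss_fn(a + d) - loss_fn(a - d)) / (2 * eps)
        a = a - lr * grad           # the update of formulas (6)-(8)
    return a
```

On a well-behaved loss the iterates settle near the minimizing angles; in the calibration itself, iteration stops once the lane-line similarity threshold is met.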
The system for automatically calibrating the camera external parameters provided by the invention comprises:
module M1: taking the image lane-line information as the input and the lane-line information in the world coordinate system as the ground truth, and setting a loss function;
module M2: back-propagating through the loss function, iteratively updating the camera external parameters, and solving for the optimal camera external parameters;
the loss function measures the similarity between the image lane line, after transformation by the camera intrinsics and extrinsics and by the body-to-world extrinsics, and the corresponding lane line in the world coordinate system.
Compared with the prior art, the invention has the following beneficial effects:
1. the method is simple to operate, imposes few restrictions on the scene, and supports real-time on-site calibration;
2. the method reduces manual intervention and is convenient, fast, and accurate;
3. the loss function constrains all parameters of the rotation matrix and uses lane-line similarity as its measure, making the method more accurate and practical.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within its scope.
Example (b):
According to the method for automatically calibrating the external parameters provided by the invention, the image lane-line information and the lane-line information in a corresponding map (world coordinate system) are used as the input and the ground truth respectively, a loss function is set, and the camera extrinsics are iteratively updated by back-propagating through the loss function until the image lane lines, after the intrinsic and extrinsic transformations, coincide with the map lane lines; the camera extrinsics at that point are the optimal solution. The method requires an initial camera extrinsic estimate and only updates the rotation matrix of the camera extrinsics.
Further, the loss function measures the similarity between the image lane line, after transformation by the camera intrinsics and extrinsics and by the body-to-map (world coordinate system) extrinsics, and the corresponding lane line in the map.
Furthermore, the loss function must simultaneously and effectively constrain all parameters of the rotation matrix in the camera extrinsics (e.g. the three Euler angles: yaw, pitch and roll).
Further, iteratively updating the camera extrinsic parameters is achieved by gradient descent.
The method supports automatic calibration on any road section covered by a map (world coordinate system). An initial camera extrinsic estimate is provided as the value to be continuously updated: the translation matrix is measured manually, with an error within 10 cm in each direction, and the rotation matrix is given an initial value whose representation includes, but is not limited to, a 3 x 3 matrix, Euler angles or a quaternion. The vehicle to be calibrated is then driven steadily along the road section to collect data, and the camera extrinsics are solved automatically by back-propagating through the loss function. The solution updates only the rotation matrix in the initial extrinsics, because the translation is easier to measure within an acceptable tolerance, and a bounded translation error has little influence on the extrinsics as a whole.
Preferably, the purpose of the loss function is to measure the similarity between the image lane line, after transformation by the camera intrinsics and extrinsics and by the map (world coordinate system) extrinsics, and the corresponding lane line in the map.
Preferably, from the viewpoint of similarity, the loss function includes, but is not limited to, the integrated distance error between the corresponding curves or straight lines and the integrated angular error of their tangent directions; the integrals may be discretized in implementation.
Preferably, while measuring similarity, the loss function must effectively constrain all parameters of the rotation matrix in the camera extrinsics (e.g. the three Euler angles yaw, pitch and roll) at the same time. If the loss considered only a single lane line, the pitch angle would be under-constrained: pitch could, for example, be tilted downward arbitrarily, shrinking the lane line into a short segment that still satisfies the point-to-line distance with a small slope error, yet is obviously useless in practice. Constraints on each parameter are therefore added to the loss function. The constraint methods include, but are not limited to, considering the left and right lane lines simultaneously, e.g. constraining the camera extrinsics through their parallelism and spacing while keeping the distance and slope errors of the corresponding straight lines small, or considering references in multiple directions, e.g. adding a stop line to constrain the pitch angle or a utility pole to constrain the roll angle. For example, for the input point (u, v) of a lane line in the pixel coordinate system, the corresponding world-coordinate point (X, Y, Z) can be solved through formulas (1) and (2), where fx, fy, u0, v0 are the camera intrinsics; Rc, Tc are the camera extrinsics (the rotation and translation matrices between the camera and vehicle-body coordinate systems); Rb, Tb are the rotation and translation matrices between the vehicle-body and world coordinate systems; and Zc is the depth of the point (u, v) in the camera coordinate system.
The input (u, v), the camera intrinsics fx, fy, u0, v0, and the body pose Rb, Tb, together with Zb and Tc, are known values. Since the lane line is the input and the origin of the body coordinate system is the projection of the rear-wheel center on the ground, Zb is always 0 (the lane line lies on the road surface); Tc is obtained from the measurement reference above, and Rc is the parameter to be updated, though thanks to its initial value it can be treated as a temporarily known quantity when first solving (X, Y, Z).
After (X, Y, Z), the lane-line point in the world coordinate system, is obtained through formulas (1) and (2), the loss function may consist of four parts, as shown in formula (3): loss1 is the point-to-line distance between the transformed left image lane line and the corresponding left map lane line in the world coordinate system; loss2 is the same distance for the right side; loss3 is the slope error between the transformed left image lane line and the corresponding left map lane line; loss4 is the same slope error for the right side. As shown in formula (4), loss1 and loss2 measure the offset error by the point-to-line distance, i.e. the distance from the map (world-coordinate) points obtained through formulas (1) and (2) to the corresponding ground-truth lane line, where A, B, C are the coefficients of the straight-line equation fitted to the ground truth, n is the number of input points, and (xi, yi) are the map points obtained from the input through formulas (1) and (2), of which only the plane coordinates are used.
As shown in formula (5), loss3 and loss4 measure the direction error by slope: the average slope of each of the two lines is computed by the cumulative-slope formula to obtain their slope error, where (xi, yi) are the map plane coordinates obtained from the input by the conversion of formulas (1) and (2), (xi', yi') are the plane coordinates of the ground truth, and n and n1 are the numbers of input and ground-truth points respectively. Formulas (4) and (5) show that the joint constraints on offset and direction can describe the similarity between two straight lines.
loss=loss1+loss2+loss3+loss4…………(3)
Preferably, the method of the invention solves the problem by borrowing the approach of deep learning: the loss function is driven by gradient descent to converge to a minimum within a certain range, i.e. the input, after a series of transformations, predicts the ground truth, which in the training stage is called the label. In this automatic solving method, the input is the straight line fitted to the image lane line, i.e. two-dimensional points in the pixel coordinate system, and the labels are the corresponding lane lines in the map, i.e. three-dimensional points in the world coordinate system. After the loss function is set, it is differentiated in reverse iteratively; the gradient may be computed by hand or with a deep-learning tool such as pytorch, and the rotation matrix in the initial extrinsics is updated at a certain learning rate (a fixed value, an adaptive learning rate, or a scheduled learning rate, among others). As shown in formulas (6), (7) and (8), with the Euler angles yaw, pitch and roll representing the rotation matrix, the corresponding gradients yaw_grad, pitch_grad and roll_grad are solved at each step and the original values are updated with the learning rate lr, until the value of the loss function converges to a minimum within a certain range. At that point the similarity between the image lane line, transformed by the camera intrinsics, the updated camera extrinsics and the body-to-map (world coordinate system) extrinsics, and the corresponding lane line in the map meets a certain threshold, and the solved optimal camera extrinsics are obtained;
yaw = yaw - lr * yaw_grad…………(6)
pitch = pitch - lr * pitch_grad…………(7)
roll = roll - lr * roll_grad…………(8)
The system for automatically calibrating the camera external parameters provided by the invention comprises:
module M1: taking the image lane-line information as the input and the lane-line information in the world coordinate system as the ground truth, and setting a loss function;
module M2: back-propagating through the loss function, iteratively updating the camera external parameters, and solving for the optimal camera external parameters;
the loss function measures the similarity between the image lane line, after transformation by the camera intrinsics and extrinsics and by the body-to-world extrinsics, and the corresponding lane line in the world coordinate system.
Those skilled in the art will appreciate that, in addition to implementing the system, apparatus and modules provided by the invention purely as computer-readable program code, the same functions can be achieved entirely by logically programming the method steps, so that the system, apparatus and modules take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. The system, apparatus and modules provided by the invention may therefore be regarded as hardware components; the modules within them for implementing various programs may be regarded as structures within those hardware components, or as both software programs implementing the method and structures within the hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (11)
1. A method for automatically calibrating camera external parameters is characterized by comprising the following steps:
step 1: taking the image lane-line information as the input and the lane-line information in the world coordinate system as the ground truth, and setting a loss function;
step 2: back-propagating through the loss function, iteratively updating the camera external parameters, and solving for the optimal camera external parameters.
2. The method for automatically calibrating the camera external parameters according to claim 1, wherein automatic calibration is performed on a road section covered by the world coordinate system, and an initial camera extrinsic estimate is updated as the initial value;
the camera external parameters comprise a rotation matrix and a translation matrix, and the initial extrinsics serve as the solving initial values, wherein the rotation matrix may be expressed as a 3 x 3 matrix, Euler angles or a quaternion;
data are collected from the vehicle to be calibrated, and the camera extrinsics are solved automatically by back-propagating through the loss function, wherein the solution updates only the rotation matrix in the initial extrinsics.
3. The method for automatically calibrating the camera external parameters according to claim 1, wherein the loss function measures the similarity between the image lane line, after transformation by the camera intrinsics and extrinsics and by the body-to-world extrinsics, and the corresponding lane line in the world coordinate system.
4. The method for automatically calibrating the camera external parameters according to claim 3, wherein, based on the degree of similarity, the loss function includes the integrated distance error between the corresponding curves or straight lines and the integrated angular error of their tangent directions.
5. The method for automatically calibrating the camera external parameters according to claim 2, wherein the loss function simultaneously places an effective constraint on all parameters of the rotation matrix in the camera extrinsics;
these parameters are the three Euler angles: yaw, pitch and roll.
6. The method for automatically calibrating the camera external parameter according to claim 1, wherein the iterative updating of the camera external parameter is implemented by gradient descent.
7. The method for automatically calibrating the camera external parameters according to claim 1, wherein constraints on each parameter are added to the loss function, subject to the distance error and slope error of the corresponding straight lines staying below preset thresholds; the constraint method comprises:
- constraining the camera extrinsics through the parallelism and spacing of the left and right lane lines;
- or adding a stop line to constrain the pitch angle and a utility pole to constrain the roll angle;
the input is a lane-line point (u, v) in the two-dimensional pixel coordinate system, and the corresponding world-coordinate point (X, Y, Z) is solved through formulas (1) and (2), where fx, fy, u0, v0 are the camera intrinsics; Rc, Tc are the camera extrinsics, i.e. the rotation and translation matrices between the camera and vehicle-body coordinate systems; Rb, Tb are the rotation and translation matrices between the vehicle-body and world coordinate systems; and Zc is the depth of the point (u, v) in the camera coordinate system. The input (u, v), the camera intrinsics fx, fy, u0, v0, the body pose Rb, Tb, and Zb, Tc are all known values. With the lane line as input and the origin of the body coordinate system at the projection of the rear-wheel center on the ground, the lane line lies on the road surface, so Zb is always 0; Tc is obtained from a measurement reference, and Rc is the parameter to be updated.
8. The method for automatically calibrating the camera external parameters according to claim 7, wherein, after the lane line points (X, Y, Z) in the world coordinate system are obtained through formulas (1) and (2), the loss function loss is expressed as formula (3):
loss=loss1+loss2+loss3+loss4…………(3)
where loss1 is the distance between the left image lane line, after transformation by the internal and external parameters, and the left lane line in the world coordinate system; loss2 is the distance between the right image lane line, after transformation by the internal and external parameters, and the right lane line in the world coordinate system; loss3 is the slope error between the left image lane line, after transformation by the internal and external parameters, and the left lane line in the world coordinate system; and loss4 is the slope error between the right image lane line, after transformation by the internal and external parameters, and the right lane line in the world coordinate system.
9. The method for automatically calibrating the camera external parameters according to claim 8, wherein formula (4) is expressed as:
loss1 = (1/n) Σ_i |A·x_i + B·y_i + C| / √(A² + B²) …………(4)
loss1 and loss2 take the point-to-line offset error as the distance, namely the distance between the world coordinate points obtained through the conversion of formulas (1) and (2) and the corresponding true-value line, wherein A, B, C are the coefficients of the linear equation obtained by fitting a straight line to the true values, n is the number of input points, and (x_i, y_i) are the world coordinate points obtained from the input through formulas (1) and (2);
formula (5) computes the direction error through the slope: the average slope of each of the two straight lines is calculated through a cumulative slope formula, so as to obtain the slope error between the two straight lines, wherein (x_i, y_i) are the plane coordinates in the world coordinate system obtained from the input through the conversion of formulas (1) and (2), (x_i', y_i') are the plane coordinates of the true values, and n and n1 represent the numbers of input points and true-value points respectively;
formulas (4) and (5) show that the joint constraint of offset and direction describes the similarity between the two straight lines.
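The point-to-line distance of loss1/loss2 and the slope error of loss3/loss4 can be sketched as follows. This is a hedged reconstruction: formula (5) is not reproduced in the text, and the exact "cumulative slope formula" may differ; here the mean slope of consecutive point pairs is used as one plausible reading.

```python
import numpy as np

def point_to_line_loss(points, A, B, C):
    """Mean point-to-line distance (reading of formula (4)):
    (1/n) * sum(|A*x_i + B*y_i + C|) / sqrt(A^2 + B^2)."""
    x, y = points[:, 0], points[:, 1]
    return np.mean(np.abs(A * x + B * y + C)) / np.hypot(A, B)

def slope_loss(points, true_points):
    """Direction error between two point sets (one reading of formula (5)):
    average the slopes of consecutive point pairs on each line,
    then take the absolute difference."""
    def avg_slope(p):
        dx, dy = np.diff(p[:, 0]), np.diff(p[:, 1])
        return np.mean(dy / dx)
    return abs(avg_slope(points) - avg_slope(true_points))

# Points lying exactly on the line y = x, i.e. x - y = 0 (A=1, B=-1, C=0)
pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
truth = np.array([[0.0, 0.5], [1.0, 1.5], [2.0, 2.5]])  # parallel true-value line
d = point_to_line_loss(pts, 1.0, -1.0, 0.0)  # zero: points lie on the line
s = slope_loss(pts, truth)                   # zero: identical slopes
```

Together the two terms penalize both the offset and the direction mismatch between the projected lane line and its true-value counterpart, which is the joint constraint the claim describes.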
10. The method for automatically calibrating the camera external parameters according to claim 1, wherein the solution is performed in the manner of deep learning: the loss function is made to converge to a minimum value within a certain range through gradient descent, so that the input, after a series of transformations, predicts the true value, which is called the label in the training phase;
the input is the straight line obtained by fitting the image lane line, namely two-dimensional points in the pixel coordinate system; the labels are the corresponding lane lines in the world coordinate system, namely three-dimensional points in the world coordinate system; after the loss function is set, reverse derivation is performed on the loss function iteratively to obtain the gradients of the camera external parameters, wherein the reverse derivation comprises calculating the gradients manually or solving the gradients of the parameters with the deep learning tool PyTorch, and the rotation matrix in the initial external parameters is updated at a preset learning rate; the learning rate may be selected as a fixed value, an adaptive learning rate, or a gradient-dependent learning rate;
the solving expressions are:
yaw = yaw - lr * yaw_grad…………(6)
pitch = pitch - lr * pitch_grad…………(7)
roll = roll - lr * roll_grad…………(8)
the Euler angles, namely the yaw angle yaw, the pitch angle pitch, and the roll angle roll, are represented by the rotation matrix; at each iteration the corresponding gradients yaw_grad, pitch_grad, and roll_grad are solved, and the original initial values are updated through the learning rate lr, until the value of the loss function converges within a certain range and reaches a minimum; at that point the similarity index between the image lane line, transformed by the camera internal parameters, the updated camera external parameters, and the vehicle-body-to-world external parameters, and the corresponding lane line in the world coordinate system meets the preset threshold, and the updated external parameters are the solved optimal camera external parameters.
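The update loop of equations (6), (7), and (8) can be sketched as below. Finite-difference gradients stand in for the manual derivation or PyTorch autograd mentioned in the claim, and the toy quadratic loss with a known optimum is an illustrative assumption, not the patent's lane-line loss:

```python
import numpy as np

def numeric_grad(loss_fn, angles, eps=1e-6):
    """Central finite-difference gradient of the loss w.r.t. (yaw, pitch, roll)."""
    grads = np.zeros(3)
    for i in range(3):
        step = np.zeros(3)
        step[i] = eps
        grads[i] = (loss_fn(angles + step) - loss_fn(angles - step)) / (2 * eps)
    return grads

# Toy loss with a known optimum (assumption, stands in for the lane-line loss)
target = np.array([0.10, -0.05, 0.02])      # "true" yaw, pitch, roll in radians
loss_fn = lambda a: np.sum((a - target) ** 2)

angles = np.zeros(3)   # initial yaw, pitch, roll
lr = 0.1               # fixed learning rate (one of the options in claim 10)
for _ in range(200):   # iterate until the loss converges to its minimum
    angles -= lr * numeric_grad(loss_fn, angles)   # equations (6)-(8)
```

After the loop, `angles` has converged to the optimum of the toy loss; in the patent's setting, the converged Euler angles are converted back to the rotation matrix R_c, yielding the optimal camera external parameters.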
11. A system for automatically calibrating camera external parameters is characterized by comprising:
module M1: respectively taking the image lane line information and the lane line information in the world coordinate system as input values and true values, and setting a loss function;
module M2: carrying out reverse derivation on the loss function, iteratively updating external parameters of the camera, and solving to obtain the optimal external parameters of the camera;
the measure of the loss function is the degree of similarity between the image lane line, after conversion by the camera internal parameters, the camera external parameters, and the vehicle-body-to-world-coordinate-system external parameters, and the corresponding lane line in the world coordinate system.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2020104771051 | 2020-05-29 | ||
CN202010477105 | 2020-05-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111815713A true CN111815713A (en) | 2020-10-23 |
Family
ID=72855681
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010621335.0A Withdrawn CN111815713A (en) | 2020-05-29 | 2020-07-01 | Method and system for automatically calibrating external parameters of camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111815713A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112529966A (en) * | 2020-12-17 | 2021-03-19 | 豪威科技(武汉)有限公司 | On-line calibration method of vehicle-mounted looking-around system and vehicle-mounted looking-around system thereof |
CN112614192A (en) * | 2020-12-24 | 2021-04-06 | 湖北亿咖通科技有限公司 | On-line calibration method of vehicle-mounted camera and vehicle-mounted information entertainment system |
CN112785655A (en) * | 2021-01-28 | 2021-05-11 | 中汽创智科技有限公司 | Method, device and equipment for automatically calibrating external parameters of all-round camera based on lane line detection and computer storage medium |
CN113340334A (en) * | 2021-07-29 | 2021-09-03 | 新石器慧通(北京)科技有限公司 | Sensor calibration method and device for unmanned vehicle and electronic equipment |
CN113379852A (en) * | 2021-08-10 | 2021-09-10 | 禾多科技(北京)有限公司 | Method, device, electronic equipment and medium for verifying camera calibration result |
CN113658279A (en) * | 2021-08-25 | 2021-11-16 | 深研人工智能技术(深圳)有限公司 | Camera internal parameter and external parameter estimation method and device, computer equipment and storage medium |
CN113674357A (en) * | 2021-08-04 | 2021-11-19 | 禾多科技(北京)有限公司 | Camera external parameter calibration method and device, electronic equipment and computer readable medium |
CN113763483A (en) * | 2021-09-10 | 2021-12-07 | 智道网联科技(北京)有限公司 | Method and device for calibrating pitch angle of automobile data recorder |
CN114332241A (en) * | 2021-12-29 | 2022-04-12 | 元橡科技(苏州)有限公司 | External parameter calibration method, three-dimensional reconstruction method and storage medium of RGBD camera based on process calibration |
CN114708333A (en) * | 2022-03-08 | 2022-07-05 | 智道网联科技(北京)有限公司 | Method and device for generating external reference model of automatic calibration camera |
CN114708336A (en) * | 2022-03-21 | 2022-07-05 | 禾多科技(北京)有限公司 | Multi-camera online calibration method and device, electronic equipment and computer readable medium |
2020
- 2020-07-01: CN202010621335.0A filed in China; published as CN111815713A (en); status: not active, Withdrawn
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112529966B (en) * | 2020-12-17 | 2023-09-15 | 豪威科技(武汉)有限公司 | On-line calibration method of vehicle-mounted looking-around system and vehicle-mounted looking-around system thereof |
CN112529966A (en) * | 2020-12-17 | 2021-03-19 | 豪威科技(武汉)有限公司 | On-line calibration method of vehicle-mounted looking-around system and vehicle-mounted looking-around system thereof |
CN112614192A (en) * | 2020-12-24 | 2021-04-06 | 湖北亿咖通科技有限公司 | On-line calibration method of vehicle-mounted camera and vehicle-mounted information entertainment system |
CN112614192B (en) * | 2020-12-24 | 2022-05-17 | 亿咖通(湖北)技术有限公司 | On-line calibration method of vehicle-mounted camera and vehicle-mounted information entertainment system |
CN112785655A (en) * | 2021-01-28 | 2021-05-11 | 中汽创智科技有限公司 | Method, device and equipment for automatically calibrating external parameters of all-round camera based on lane line detection and computer storage medium |
CN113340334A (en) * | 2021-07-29 | 2021-09-03 | 新石器慧通(北京)科技有限公司 | Sensor calibration method and device for unmanned vehicle and electronic equipment |
CN113674357A (en) * | 2021-08-04 | 2021-11-19 | 禾多科技(北京)有限公司 | Camera external parameter calibration method and device, electronic equipment and computer readable medium |
CN113379852A (en) * | 2021-08-10 | 2021-09-10 | 禾多科技(北京)有限公司 | Method, device, electronic equipment and medium for verifying camera calibration result |
CN113379852B (en) * | 2021-08-10 | 2021-11-30 | 禾多科技(北京)有限公司 | Method, device, electronic equipment and medium for verifying camera calibration result |
CN113658279A (en) * | 2021-08-25 | 2021-11-16 | 深研人工智能技术(深圳)有限公司 | Camera internal parameter and external parameter estimation method and device, computer equipment and storage medium |
CN113658279B (en) * | 2021-08-25 | 2024-04-02 | 深研人工智能技术(深圳)有限公司 | Camera internal reference and external reference estimation method, device, computer equipment and storage medium |
CN113763483A (en) * | 2021-09-10 | 2021-12-07 | 智道网联科技(北京)有限公司 | Method and device for calibrating pitch angle of automobile data recorder |
CN113763483B (en) * | 2021-09-10 | 2024-04-02 | 智道网联科技(北京)有限公司 | Method and device for calibrating pitch angle of automobile data recorder |
CN114332241A (en) * | 2021-12-29 | 2022-04-12 | 元橡科技(苏州)有限公司 | External parameter calibration method, three-dimensional reconstruction method and storage medium of RGBD camera based on process calibration |
CN114708333A (en) * | 2022-03-08 | 2022-07-05 | 智道网联科技(北京)有限公司 | Method and device for generating external reference model of automatic calibration camera |
CN114708336A (en) * | 2022-03-21 | 2022-07-05 | 禾多科技(北京)有限公司 | Multi-camera online calibration method and device, electronic equipment and computer readable medium |
CN114708336B (en) * | 2022-03-21 | 2023-02-17 | 禾多科技(北京)有限公司 | Multi-camera online calibration method and device, electronic equipment and computer readable medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111815713A (en) | Method and system for automatically calibrating external parameters of camera | |
CN110160502B (en) | Map element extraction method, device and server | |
CN109544629B (en) | Camera position and posture determining method and device and electronic equipment | |
KR102249769B1 (en) | Estimation method of 3D coordinate value for each pixel of 2D image and autonomous driving information estimation method using the same | |
US20130147948A1 (en) | Image processing apparatus and imaging apparatus using the same | |
CN110378962B (en) | Calibration method and device for vehicle-mounted camera and computer readable storage medium | |
CN112183171A (en) | Method and device for establishing beacon map based on visual beacon | |
CN110243380A (en) | A kind of map-matching method based on multi-sensor data and angle character identification | |
CN107330927B (en) | Airborne visible light image positioning method | |
CN112017236B (en) | Method and device for calculating target object position based on monocular camera | |
CN112819903A (en) | Camera and laser radar combined calibration method based on L-shaped calibration plate | |
CN112967344B (en) | Method, device, storage medium and program product for calibrating camera external parameters | |
CN112146682B (en) | Sensor calibration method and device for intelligent automobile, electronic equipment and medium | |
CN110766761B (en) | Method, apparatus, device and storage medium for camera calibration | |
CN112348902A (en) | Method, device and system for calibrating installation deviation angle of road end camera | |
CN111553945B (en) | Vehicle positioning method | |
CN110579754A (en) | Method for determining external parameters of a lidar and other sensors of a vehicle | |
CN112347205A (en) | Method and device for updating error state of vehicle | |
CN114283201A (en) | Camera calibration method and device and road side equipment | |
CN115564865A (en) | Construction method and system of crowdsourcing high-precision map, electronic equipment and vehicle | |
CN112446915B (en) | Picture construction method and device based on image group | |
CN113706633B (en) | Three-dimensional information determination method and device for target object | |
CN115641385A (en) | Vehicle-mounted panoramic camera calibration method, device, equipment and medium | |
CN115471619A (en) | City three-dimensional model construction method based on stereo imaging high-resolution satellite image | |
CN101145240A (en) | Camera image road multiple-point high precision calibration method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2020-10-23 | PB01 | Publication | Application publication date: 20201023 |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | |