CN112102413B - Virtual lane line-based automatic calibration method for vehicle-mounted camera - Google Patents


Info

Publication number
CN112102413B
CN112102413B (granted publication of application CN202010713419.7A)
Authority
CN
China
Prior art keywords
camera
coordinate system
coordinates
vehicle
matrix
Prior art date
Legal status
Active
Application number
CN202010713419.7A
Other languages
Chinese (zh)
Other versions
CN112102413A (en)
Inventor
Chen Junlong (陈俊龙)
Wei Yuhao (魏宇豪)
Zeng Ke (曾科)
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202010713419.7A priority Critical patent/CN112102413B/en
Publication of CN112102413A publication Critical patent/CN112102413A/en
Application granted granted Critical
Publication of CN112102413B publication Critical patent/CN112102413B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses an automatic calibration method for a vehicle-mounted camera based on virtual lane lines, comprising the following steps. A world coordinate system is established at the point where a vertical line through the center of the vehicle's rear axle meets the ground, with the Z axis pointing straight ahead of the vehicle, the X axis to the right of the direction of travel, and the Y axis pointing vertically downward; a camera coordinate system is also established. A single picture is taken with the camera from the center of the lane ahead of the vehicle and the lane width is measured. In the top view of the world coordinate system, the rectangle formed by the virtual lane lines on both sides is selected as the calibration figure; the relation between the four feature points of the rectangle and the lane width is obtained from the properties of a rectangle, and a rotation-matrix equation expressed in camera coordinates is obtained from the orthogonality of the rotation matrix and the coordinate transformation between the camera and world coordinate systems. The camera coordinates are then converted into pixel coordinates using the camera intrinsic parameters, the pixel coordinates of the four feature points are read from the image, and the four extrinsic parameters ψ, θ, φ and h of the rotation matrix and translation matrix are solved.

Description

Virtual lane line-based automatic calibration method for vehicle-mounted camera
Technical Field
The invention belongs to the field of traffic, and particularly relates to a vehicle-mounted camera automatic calibration method based on virtual lane lines.
Background
To date, automatic calibration algorithms in the traffic field (covering vehicle-mounted cameras, traffic monitoring cameras, and the like) can be roughly divided, according to the marker used, into algorithms based on static targets such as lane lines and algorithms based on moving targets such as vehicles and pedestrians. Compared with calibration based on a static target, calibration based on a moving target is much more complex: it not only requires that targets such as vehicles or pedestrians appear in the picture, but also needs to analyze a video sequence to extract a motion trajectory from which a vanishing point is obtained, and some algorithms even place requirements on the direction and speed of the motion, so such methods are better suited to a stationary traffic monitoring camera. For a vehicle-mounted camera, many vehicles may appear in the scene, but the complex relative motion between vehicles makes it difficult to find a suitable target for trajectory analysis; the lane line, being a stationary object, is better suited as the marker for automatic calibration of a vehicle-mounted camera.
In the imaging process of a camera, a point in the three-dimensional world is mapped to a pixel in a two-dimensional image; this process can be described by a geometric model, and the camera parameters are the parameters of that model. The intrinsic parameters include the focal length, the optical center position, the distortion coefficients, and so on; the extrinsic parameters comprise a rotation matrix and a translation matrix. The purpose of camera calibration is to obtain these parameters, and the calibration accuracy directly affects the visual perception and localization of an autonomous vehicle. The traditional calibration method determines the camera parameters from specific points on a calibration board, so it is only suitable for static conditions and is generally used to calibrate the intrinsic parameters. While the vehicle is driving, the extrinsic parameters of the vehicle-mounted camera may change due to various factors such as road bumps and vehicle-body vibration (the intrinsic parameters do not change), and the extrinsic parameters then need to be recalibrated. Lane lines are generally present in a driving scene, and their properties, such as parallelism and known lane width, can be used to calibrate the extrinsic parameters automatically.
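The geometric model described here can be sketched numerically. The following minimal example projects a point given in camera coordinates to pixel coordinates; the focal lengths and principal point are illustrative assumptions, not parameters from the patent.

```python
# Illustrative intrinsic parameters (assumed values, not from the patent):
FX, FY = 800.0, 800.0   # normalized focal lengths f/dx, f/dy, in pixels
U0, V0 = 640.0, 360.0   # principal point, in pixels

def project(xc, yc, zc):
    """Pinhole projection: camera coordinates -> pixel coordinates.

    Implements u = fx*Xc/Zc + u0, v = fy*Yc/Zc + v0 (perspective division).
    """
    return FX * xc / zc + U0, FY * yc / zc + V0

u, v = project(0.5, -0.2, 10.0)   # a point 10 m in front of the camera
```

With these values the point lands at u = 800·0.5/10 + 640 = 680 and v = 800·(−0.2)/10 + 360 = 344.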
Disclosure of Invention
The invention aims to provide a virtual lane line-based automatic calibration method for a vehicle-mounted camera that addresses the above defects of the prior art.
The invention is realized by adopting the following technical scheme:
a vehicle-mounted camera automatic calibration method based on virtual lane lines comprises the following steps:
1) A world coordinate system is established at the point where a vertical line through the center of the vehicle's rear axle meets the ground, with the Z axis pointing straight ahead of the vehicle, the X axis to the right of the direction of travel, and the Y axis vertically downward; a camera coordinate system is established, and the coordinates of its origin in the world coordinate system are (d, h, l);
2) A single picture is taken with the camera from the center of the lane ahead of the vehicle and the lane width is measured. In the top view of the world coordinate system, the rectangle formed by the virtual lane lines on both sides is selected as the calibration figure; the relation between the four feature points of the rectangle and the lane width is obtained from the properties of a rectangle, and a rotation-matrix equation expressed in camera coordinates is obtained from the orthogonality of the rotation matrix and the coordinate transformation between the camera and world coordinate systems;
3) Since the camera intrinsic parameters do not change while the vehicle is driving, they are used to convert camera coordinates into pixel coordinates; the pixel coordinates of the four feature points are then read from the image, and the four extrinsic parameters ψ, θ, φ and h of the rotation matrix and translation matrix are solved.
In a further refinement of the invention, step 2) is implemented as follows:
101) Introduce the transformation model between the world coordinate system W and the camera coordinate system C:

    P_c = R · P_w + T
Wherein R represents a rotation matrix and T represents a translation matrix;
since the rotation matrix R is an orthogonal matrix, the formula is rewritten as follows according to the properties of the orthogonal matrix:
    P_w = R⁻¹ · P_c − R⁻¹ · T = Rᵀ · P_c − Rᵀ · T

where the practical meaning of −Rᵀ · T is the coordinates of the origin of the camera coordinate system in the world coordinate system;
for ease of understanding and calculation, r_mn is used to denote the elements of the rotation matrix R, and the formula is rewritten in matrix form as follows:

    [ X_w ]   [ r11 r21 r31 ] [ X_c ]   [ d ]
    [ Y_w ] = [ r12 r22 r32 ]·[ Y_c ] + [ h ]
    [ Z_w ]   [ r13 r23 r33 ] [ Z_c ]   [ l ]
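A quick numerical check of this inversion (a sketch with made-up angles and offsets, not the patent's calibration values): for an orthogonal R, P_w = Rᵀ·P_c − Rᵀ·T exactly undoes P_c = R·P_w + T, and −Rᵀ·T is the camera origin expressed in the world frame.

```python
import numpy as np
from math import cos, sin

# Any orthogonal R works for the check; use a simple rotation about the
# X axis by an arbitrary angle (test values only, not calibration results).
psi = 0.05
R = np.array([[1, 0, 0],
              [0, cos(psi), sin(psi)],
              [0, -sin(psi), cos(psi)]])
T = np.array([0.1, -1.5, 2.0])

P_w = np.array([1.0, 0.0, 8.0])
P_c = R @ P_w + T                   # world -> camera
P_w_back = R.T @ P_c - R.T @ T      # camera -> world, using R^-1 = R^T
cam_origin_w = -R.T @ T             # camera origin in world coordinates
```

`P_w_back` reproduces `P_w` to floating-point precision, confirming that the inverse transform only needs the transpose of R.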
102) Let the four corner points of the rectangle be A, B, C, D, where A and C lie on one virtual lane line and B and D on the other, the points being distributed along the Z axis (the direction of travel); the lane width is width. From the properties of the rectangle, the following relations hold:

    X_A = X_C,  X_B = X_D,  X_B − X_A = width
    Z_A = Z_B,  Z_C = Z_D
    Y_A = Y_B = Y_C = Y_D = 0   (all four points lie on the ground)
103) Because the world coordinates are unknown, the equation in (1) is substituted into (2), converting the world coordinates into camera coordinates; with (X_cA, Y_cA, Z_cA) denoting the camera coordinates of point A, and similarly for B, C and D:

    r11(X_cC − X_cA) + r21(Y_cC − Y_cA) + r31(Z_cC − Z_cA) = 0
    r11(X_cD − X_cB) + r21(Y_cD − Y_cB) + r31(Z_cD − Z_cB) = 0
    r11(X_cB − X_cA) + r21(Y_cB − Y_cA) + r31(Z_cB − Z_cA) = width
    r13(X_cA − X_cB) + r23(Y_cA − Y_cB) + r33(Z_cA − Z_cB) = 0
    r13(X_cC − X_cD) + r23(Y_cC − Y_cD) + r33(Z_cC − Z_cD) = 0
    r12·X_ci + r22·Y_ci + r32·Z_ci + h = 0,   i ∈ {A, B, C, D}

At this point the equations no longer contain the world coordinates of the points, only their camera coordinates. The camera intrinsic parameters do not change while the vehicle is driving and are known, so they can be used to convert the camera coordinates into pixel coordinates.
In a further refinement of the invention, step 3) is implemented as follows:
201) Introduce the classical transformation model between the pixel coordinate system and the world coordinate system:

        [ u ]   [ f_x  0   u_0  0 ]            [ X_w ]
    Z_c·[ v ] = [  0  f_y  v_0  0 ]·[ R  T ] · [ Y_w ]
        [ 1 ]   [  0   0    1   0 ] [ 0  1 ]   [ Z_w ]
                                               [  1  ]

where f_x = f/dx and f_y = f/dy are called the normalized focal lengths of the x axis and y axis, dx and dy denote the physical size of one pixel in the x and y directions respectively, f is the camera focal length, (u_0, v_0) are the coordinates of the image coordinate system origin in the pixel coordinate system, R denotes the camera rotation matrix, and T the camera translation matrix;
202) From the transformation model between the world and camera coordinate systems, combined with the model between the pixel and world coordinate systems, the transformation between the camera coordinate system and the pixel coordinate system is derived:

        [ u ]   [ f_x  0   u_0 ] [ X_c ]
    Z_c·[ v ] = [  0  f_y  v_0 ]·[ Y_c ]
        [ 1 ]   [  0   0    1  ] [ Z_c ]

Expansion gives:

    X_c = m·Z_c,    Y_c = n·Z_c

where

    m = (u − u_0)/f_x,    n = (v − v_0)/f_y

f_x, f_y, u_0 and v_0 are all known parameters;
substituting the above equations into the last equation in (1) yields:

    Z_ci = −h / (r12·m_i + r22·n_i + r32),   i ∈ {A, B, C, D}
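This depth relation (as reconstructed here from the ground-plane constraint Y_w = 0; the exact printed formula is an image in the original) can be verified numerically. All values below are invented for the check:

```python
import numpy as np
from math import cos, sin

# A camera pitched about the X axis by psi (invented test values):
psi = 0.03
R = np.array([[1, 0, 0],
              [0, cos(psi), sin(psi)],
              [0, -sin(psi), cos(psi)]])
d, h, l = 0.0, 1.5, 1.0                  # camera origin in world coordinates
T = -R @ np.array([d, h, l])             # so that P_c = R @ P_w + T

P_w = np.array([1.2, 0.0, 12.0])         # a point on the ground: Y_w = 0
P_c = R @ P_w + T
m, n = P_c[0] / P_c[2], P_c[1] / P_c[2]  # normalized image coordinates

# r12, r22, r32 are column 2 of R, i.e. R[0, 1], R[1, 1], R[2, 1]:
Zc = -h / (R[0, 1] * m + R[1, 1] * n + R[2, 1])
```

`Zc` recovers `P_c[2]`, the true depth of the ground point, from the camera height and the pixel alone.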
203) Solve the extrinsic parameter matrices R and T:
substituting formulas (2) and (3) into formula (1) eliminates the camera coordinates of each point, leaving only parameters related to the pixel coordinates, which can be read directly from the image; the system simplifies to:

    [equation image in original: the simplified system in the four unknowns ψ, θ, φ and h]

Since this system contains only the four unknowns ψ, θ, φ and h, solving it simultaneously gives:

    [equation image in original: closed-form expressions for ψ, θ, φ and h]

where
    F_AC = (m_C − m_A) + tanφ·m_A·m_C·(n_A − n_C);   G_AC = sinφ·(m_C − m_A) + cosφ·m_A·m_C·(n_A − n_C);
    F_BD = (m_D − m_B) + tanφ·m_B·m_D·(n_B − n_D);   G_BD = sinφ·(m_D − m_B) + cosφ·m_B·m_D·(n_B − n_D)
Thus the four parameters ψ, θ, φ and h of the extrinsic rotation matrix R and translation matrix T are solved.
The invention has at least the following beneficial technical effects:
the invention provides an automatic calibration method of a vehicle-mounted camera based on virtual lane lines, which is characterized in that a rectangle formed by the virtual lane lines on two sides in an image obtained by the vehicle-mounted camera in real time is used as a calibration object, and the calibration work of the camera can be completed at one time by automatically calibrating external parameters of the camera by utilizing the characteristics of parallelism of the lane lines, known lane width and the like in a common driving scene. The calibration method provided by the invention can realize real-time automatic calibration aiming at camera external parameter changes caused by road jolt, vehicle body vibration and the like in the vehicle driving process, and has the advantages of simple operation, convenient measurement, good real-time performance and the like.
Furthermore, the only unknown variable introduced is the lane width: the coordinates of the four corner points of the virtual-lane rectangle are converted into camera coordinates through the transformation between the camera and world coordinate systems, so few calibration parameters are needed and measurement is convenient. The camera intrinsic parameters then convert these camera coordinates into pixel coordinates, which can be read directly from the image; the equations for the parameters ψ, θ, φ and h of the extrinsic rotation matrix R and translation matrix T thereby become equations involving only the lane width, and solving them simultaneously yields the extrinsic parameters. The proposed virtual lane line-based automatic calibration method is therefore simple to operate, uses few calibration parameters, is convenient to measure, and has excellent generality and good real-time performance.
Drawings
FIG. 1 is a schematic diagram of a world coordinate system to a camera coordinate system.
FIG. 2 is a schematic diagram of a world coordinate system with a point rotated by angle ψ about the X-axis.
Fig. 3 is a schematic diagram of a camera coordinate system to an image coordinate system.
FIG. 4 is a diagram of an image coordinate system to a pixel coordinate system.
Fig. 5 is a schematic diagram of a positional relationship between the vehicle and the camera, in which fig. 5 (a) is a front view, fig. 5 (b) is a side view, and fig. 5 (c) is a top view.
FIG. 6 is a schematic diagram of a rectangle formed by two side dashed lane lines.
Fig. 7 is an image of an actual road scene calibrated with OpenCV.
FIG. 8 is an image calibrated by the method of the present invention.
Detailed Description
The invention is further described below with reference to the following figures and examples.
Basic theory of camera calibration
The process of capturing an image by the camera is an optical imaging process. The process involves the following four coordinate systems:
Pixel coordinate system: denoted (u, v), with the upper-left corner of the image as the origin, the u axis horizontal to the right and the v axis vertical downward, in units of pixels.
Image coordinate system: denoted (x, y), with the image center as the origin, the x axis horizontal to the right and the y axis vertical downward, in physical units.
Camera coordinate system: denoted (X_c, Y_c, Z_c), with the optical center of the lens as the origin; the X and Y axes are parallel to the two sides of the image plane, and the Z axis is the optical axis of the lens, perpendicular to the image plane, in physical units.
World coordinate system: denoted (X_w, Y_w, Z_w); its position is not fixed and is defined by the user, in physical units.
World to camera coordinate system
The transformation from the world coordinate system to the camera coordinate system is a rigid-body transformation: the object does not deform during the transformation, and only a rotation and a translation of the coordinate system are required. The relationship between the two coordinate systems is shown in Fig. 1, where R denotes the rotation matrix and T the translation matrix.
Assume a point P whose coordinates are P_w(X_w, Y_w, Z_w) in the world coordinate system and P_c(X_c, Y_c, Z_c) in the camera coordinate system. Then P_c and P_w satisfy:

    P_c = R · P_w + T        (5-1)
Since the camera coordinate system can be derived from the world coordinate system by a rotation and a translation, point P is first rotated about the X axis by an angle ψ, as shown in Fig. 2.
From the relationship between the two coordinate systems in Fig. 2, the matrix form of rotating the world coordinate system about the X axis by ψ is equation (5-2):

    [ X' ]   [ 1    0      0   ] [ X ]
    [ Y' ] = [ 0   cosψ   sinψ ] [ Y ]        (5-2)
    [ Z' ]   [ 0  −sinψ   cosψ ] [ Z ]
In the same way, the coordinate transformations after rotating about the Y axis by θ and about the Z axis by φ are given by equation (5-3):

    [ X' ]   [ cosθ  0  −sinθ ] [ X ]        [ X' ]   [  cosφ  sinφ  0 ] [ X ]
    [ Y' ] = [  0    1    0   ] [ Y ]   ,    [ Y' ] = [ −sinφ  cosφ  0 ] [ Y ]        (5-3)
    [ Z' ]   [ sinθ  0   cosθ ] [ Z ]        [ Z' ]   [   0     0    1 ] [ Z ]

The rotation matrix R is then the product of the three elementary rotations:

    R = R_x(ψ) · R_y(θ) · R_z(φ)        (5-4)
the relationship between the camera coordinate system and the world coordinate system can be obtained, and because the elements in the rotation matrix R are long, for the convenience of understanding and expression, R and the subscript are collectively expressed as shown in formula (5-5):
Figure GDA0003825731110000083
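As a sketch (the composition order and sign convention below are one common choice and an assumption on my part; the patent's expanded matrix is an image in the original), R can be built from the three elementary rotations and checked for orthogonality:

```python
import numpy as np
from math import cos, sin

def rotation_matrix(psi, theta, phi):
    """R = R_x(psi) @ R_y(theta) @ R_z(phi) -- assumed composition order."""
    Rx = np.array([[1, 0, 0],
                   [0, cos(psi), sin(psi)],
                   [0, -sin(psi), cos(psi)]])
    Ry = np.array([[cos(theta), 0, -sin(theta)],
                   [0, 1, 0],
                   [sin(theta), 0, cos(theta)]])
    Rz = np.array([[cos(phi), sin(phi), 0],
                   [-sin(phi), cos(phi), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz

R = rotation_matrix(0.1, 0.05, -0.02)   # arbitrary test angles
```

Any such product satisfies R·Rᵀ = I and det R = 1, which is exactly the orthogonality property used later in (5-12) to invert the transformation.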
camera coordinate system to image coordinate system
This is the conversion from a three-dimensional coordinate system to a two-dimensional planar coordinate system; the two coordinate systems are related by a perspective projection and obey the triangle-similarity theorem. The relationship between them is shown in Fig. 3, where f is the camera focal length.
As Fig. 3 shows, PO_c is the line connecting the point P_c(X_c, Y_c, Z_c) and the optical center O_c, and its intersection with the imaging plane is the projection p(x, y) of the spatial point P_c on that plane. Two pairs of similar triangles are thus obtained, ΔABO_c ∼ ΔoCO_c and ΔPBO_c ∼ ΔpCO_c, and their similarity relations give equation (5-6):

    x = f · X_c / Z_c,    y = f · Y_c / Z_c        (5-6)
the above formula is rewritten into a matrix form as follows:
        [ x ]   [ f 0 0 0 ] [ X_c ]
    Z_c·[ y ] = [ 0 f 0 0 ]·[ Y_c ]        (5-7)
        [ 1 ]   [ 0 0 1 0 ] [ Z_c ]
                            [  1  ]
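The similar-triangle relations can be sketched directly; the focal length below is an arbitrary illustrative value, not one from the patent:

```python
# x = f*Xc/Zc, y = f*Yc/Zc -- perspective projection onto the image plane.
F = 0.004  # focal length in metres (illustrative assumption)

def to_image_plane(xc, yc, zc):
    """Camera coordinates (physical units) -> image-plane coordinates."""
    return F * xc / zc, F * yc / zc

x, y = to_image_plane(0.5, -0.2, 10.0)
```

A point 10 m away maps to millimetre-scale image-plane coordinates, which the next conversion turns into pixels.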
image coordinate system to pixel coordinate system
No rotation is involved in this conversion, but the origins of the two coordinate systems do not coincide and their unit sizes differ, so the conversion is realized by a scaling and a translation. The relationship between the two coordinate systems is shown in Fig. 4, where (u_0, v_0) are the coordinates of the image coordinate system origin in the pixel coordinate system and p(x, y) is the projection of the spatial point P_c(X_c, Y_c, Z_c) on the imaging plane.
The relationship between the two coordinate systems can therefore be represented by:
    u = x/dx + u_0,    v = y/dy + v_0        (5-8)
in the formula, dx and dy respectively represent the physical sizes of one pixel point in the directions of the x axis and the y axis. The above formula is then expressed in terms of homogeneous coordinates and matrices as follows:
    [ u ]   [ 1/dx   0    u_0 ] [ x ]
    [ v ] = [  0    1/dy  v_0 ]·[ y ]        (5-9)
    [ 1 ]   [  0     0     1  ] [ 1 ]
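Equation (5-8) is just a scaling plus a shift of origin; a minimal sketch with an assumed pixel pitch and principal point (illustrative values, not from the patent):

```python
# u = x/dx + u0, v = y/dy + v0 -- image plane (metres) to pixels.
DX, DY = 5e-6, 5e-6     # physical size of one pixel (illustrative)
U0, V0 = 640.0, 360.0   # image-centre origin in pixel coordinates (illustrative)

def to_pixels(x, y):
    return x / DX + U0, y / DY + V0

u, v = to_pixels(0.0002, -8e-05)
```

Note how f/dx and f/dy combine into the normalized focal lengths f_x and f_y that appear in (5-10).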
Up to this point, the matrix relationships between the four coordinate systems have been obtained. Combining (5-5), (5-7) and (5-9) finally gives the coordinate transformation between the pixel coordinate system and the world coordinate system in matrix form, equation (5-10):

        [ u ]   [ f_x  0   u_0  0 ]            [ X_w ]
    Z_c·[ v ] = [  0  f_y  v_0  0 ]·[ R  T ] · [ Y_w ]        (5-10)
        [ 1 ]   [  0   0    1   0 ] [ 0  1 ]   [ Z_w ]
                                               [  1  ]

where f_x = f/dx and f_y = f/dy are called the normalized focal lengths of the x axis and the y axis, respectively.
In equation (5-10), the first matrix after the second equals sign is the camera intrinsic matrix and the second is the extrinsic matrix. The camera intrinsic parameters therefore mainly comprise the four parameters f_x, f_y, u_0 and v_0, plus the distortion coefficients, and reflect the relationship between the camera coordinate system and the pixel coordinate system; the extrinsic parameters are six, namely ψ, θ, φ and the three elements of the translation matrix T, and they reflect the relationship between the world coordinate system and the camera coordinate system.
Vehicle-mounted camera automatic calibration method based on virtual lane line
As shown in Fig. 5, a world coordinate system and a camera coordinate system are established. By default the camera coordinate system takes the optical axis as the Z axis, the rightward direction as the X axis, and the vertically downward direction as the Y axis. The origin of the world coordinate system is the point where a vertical line through the center of the vehicle's rear axle meets the ground, with the Z axis straight ahead of the vehicle, the X axis to the right of the direction of travel and the Y axis vertically downward; the coordinates of the camera coordinate system origin in the world coordinate system are (d, h, l).
The relative position of the camera and the vehicle changes with vehicle vibration while driving. In general, the three rotation angles and the camera height among the extrinsic parameters change significantly, while d and l remain essentially unchanged, so the proposed automatic calibration algorithm mainly computes four parameters: ψ, θ, φ and h. Assuming that the road surface is flat and the direction of travel is parallel to the lane lines, a rectangle formed by the two dashed lane lines is selected as the calibration figure, as shown in Fig. 6.
In the top view of the world coordinate system, the four points A, B, C, D are considered to form a rectangle, and equations (5-11) follow from the properties of the rectangle:

    X_A = X_C,  X_B = X_D,  X_B − X_A = width
    Z_A = Z_B,  Z_C = Z_D        (5-11)
    Y_A = Y_B = Y_C = Y_D = 0

where width denotes the lane width.
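The rectangle constraints can be sanity-checked on synthetic top-view coordinates. All values below are invented for illustration; 3.5 m is a typical lane width, not a value from the patent:

```python
WIDTH = 3.5  # assumed lane width in metres

# World coordinates (X right, Y down, Z forward); all four points on the ground.
A = (-WIDTH / 2, 0.0, 10.0)   # on the left lane line
C = (-WIDTH / 2, 0.0, 20.0)   # on the left lane line, further ahead
B = ( WIDTH / 2, 0.0, 10.0)   # on the right lane line
D = ( WIDTH / 2, 0.0, 20.0)   # on the right lane line, further ahead

def is_lane_rectangle(a, b, c, d, width, tol=1e-9):
    """Check the rectangle properties used by the calibration."""
    return (abs(a[0] - c[0]) < tol and abs(b[0] - d[0]) < tol      # sides parallel to Z
            and abs((b[0] - a[0]) - width) < tol                   # separation = lane width
            and abs(a[2] - b[2]) < tol and abs(c[2] - d[2]) < tol  # AB, CD parallel to X
            and all(abs(p[1]) < tol for p in (a, b, c, d)))        # all on the ground
```

A perturbed fourth corner fails the check, which is what makes the constraints informative for calibration.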
In equation (5-1), since the rotation matrix R is orthogonal, the formula can be rewritten, using the properties of orthogonal matrices, as:

    P_w = R⁻¹ · P_c − R⁻¹ · T = Rᵀ · P_c − Rᵀ · T        (5-12)

where the practical meaning of −Rᵀ · T is the coordinates of the origin of the camera coordinate system in the world coordinate system.
Equation (5-12) is rewritten in matrix form as equation (5-13), where r_mn denotes the elements of the rotation matrix R for ease of understanding and calculation:

    [ X_w ]   [ r11 r21 r31 ] [ X_c ]   [ d ]
    [ Y_w ] = [ r12 r22 r32 ]·[ Y_c ] + [ h ]        (5-13)
    [ Z_w ]   [ r13 r23 r33 ] [ Z_c ]   [ l ]
Since the world coordinates are unknown, equation (5-13) is substituted into equation (5-11), converting the world coordinates into camera coordinates, as shown in equation (5-14); here (X_cA, Y_cA, Z_cA) denote the camera coordinates of point A, and similarly for B, C and D:

    r11(X_cC − X_cA) + r21(Y_cC − Y_cA) + r31(Z_cC − Z_cA) = 0
    r11(X_cD − X_cB) + r21(Y_cD − Y_cB) + r31(Z_cD − Z_cB) = 0
    r11(X_cB − X_cA) + r21(Y_cB − Y_cA) + r31(Z_cB − Z_cA) = width        (5-14)
    r13(X_cA − X_cB) + r23(Y_cA − Y_cB) + r33(Z_cA − Z_cB) = 0
    r13(X_cC − X_cD) + r23(Y_cC − Y_cD) + r33(Z_cC − Z_cD) = 0
    r12·X_ci + r22·Y_ci + r32·Z_ci + h = 0,   i ∈ {A, B, C, D}
At this point the equations no longer contain the world coordinates of the points, only their camera coordinates. As mentioned above, the camera intrinsic parameters do not change while the vehicle is driving, so they are known here and can be used to convert the camera coordinates into pixel coordinates.
From the equations (5-10), the relationship between the camera coordinate system and the pixel coordinate system is shown as follows:
        [ u ]   [ f_x  0   u_0 ] [ X_c ]
    Z_c·[ v ] = [  0  f_y  v_0 ]·[ Y_c ]        (5-15)
        [ 1 ]   [  0   0    1  ] [ Z_c ]
Expanding equation (5-15) yields:

    X_c = m·Z_c,    Y_c = n·Z_c        (5-16)
where

    m = (u − u_0)/f_x,    n = (v − v_0)/f_y

f_x, f_y, u_0 and v_0 are all known parameters, so m and n can be calculated once the pixel coordinates of each point are read from the image.
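Computing m and n from pixel coordinates is a one-liner per point; the intrinsics below are illustrative assumptions, not values from the patent:

```python
FX, FY, U0, V0 = 800.0, 800.0, 640.0, 360.0  # assumed intrinsics (pixels)

def normalized_coords(u, v):
    """m = (u - u0)/fx, n = (v - v0)/fy for one pixel."""
    return (u - U0) / FX, (v - V0) / FY

m, n = normalized_coords(680.0, 344.0)
```

These (m, n) pairs for the four rectangle corners are the only image measurements the solver needs.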
Substituting equation (5-16) into the last equation of (5-14) yields:

    Z_ci = −h / (r12·m_i + r22·n_i + r32),   i ∈ {A, B, C, D}        (5-17)
substituting equations (5-16) and (5-17) into equation (5-14) may eliminate the camera coordinates of each point, leaving only parameters related to pixel coordinates, which may be directly obtained from the image. The equation is simplified to the following form:
Figure GDA0003825731110000121
equation (5-18) contains only four unknowns ψ, θ, φ and h, and hence can be solved simultaneously:
Figure GDA0003825731110000122
in the formula, F AC =(m C -m A )+tanφm A m C (n A -n C );G AC =sinφ(m C -m A )+cosφm A m C (n A -n C );F BD =(m D -m B )+tanφm B m D (n B -n D );G BD =sinφ(m D -m B )+cosφm B m D (n B -n D )。
Fig. 7 is the original image calibrated with OpenCV (an open-source computer vision and machine learning library released under the BSD license). After the vehicle was parked, the world coordinates of 8 points on the lane lines were measured, and the OpenCV built-in function solvePnP obtained the extrinsic parameters from these world coordinates and the corresponding image coordinates. Fig. 8 shows the calibration performed with the algorithm of the invention, in which the four vertices of a rectangle formed by the dashed lane lines are selected. Table 1 compares the calibration results and errors of the proposed virtual lane line-based method and the OpenCV method in an actual road scene.
Table 1:
    [Table 1: image in original — calibration results and errors of the two methods]

Claims (1)

1. A virtual lane line-based automatic calibration method for a vehicle-mounted camera, characterized by comprising the following steps:
1) A world coordinate system is established at the point where a vertical line through the center of the vehicle's rear axle meets the ground, with the Z axis pointing straight ahead of the vehicle, the X axis to the right of the direction of travel, and the Y axis vertically downward; a camera coordinate system is established, and the coordinates of its origin in the world coordinate system are (d, h, l);
2) A single picture is taken with the camera from the center of the lane ahead of the vehicle and the lane width is measured; in the top view of the world coordinate system, the rectangle formed by the virtual lane lines on both sides is selected as the calibration figure; the relation between the four feature points of the rectangle and the lane width is obtained from the properties of a rectangle, and a rotation-matrix equation expressed in camera coordinates is obtained from the orthogonality of the rotation matrix and the coordinate transformation between the camera and world coordinate systems; the specific implementation method comprises the following steps:
101) Introduce the transformation model between the world coordinate system W and the camera coordinate system C:

    P_c = R · P_w + T

where R denotes the rotation matrix and T the translation matrix;
since the rotation matrix R is orthogonal, the formula is rewritten, using the properties of orthogonal matrices, as:

    P_w = R⁻¹ · P_c − R⁻¹ · T = Rᵀ · P_c − Rᵀ · T

where the practical meaning of −Rᵀ · T is the coordinates of the origin of the camera coordinate system in the world coordinate system;
for ease of understanding and calculation, r_mn is used to denote the elements of the rotation matrix R, and the formula is rewritten in matrix form as follows:

    [ X_w ]   [ r11 r21 r31 ] [ X_c ]   [ d ]
    [ Y_w ] = [ r12 r22 r32 ]·[ Y_c ] + [ h ]
    [ Z_w ]   [ r13 r23 r33 ] [ Z_c ]   [ l ]
102) Let the four corner points of the rectangle be A, B, C, D, where A and C lie on one virtual lane line and B and D on the other, the points being distributed along the Z axis; the lane width is width, and the following relations follow from the properties of the rectangle:

    X_A = X_C,  X_B = X_D,  X_B − X_A = width
    Z_A = Z_B,  Z_C = Z_D
    Y_A = Y_B = Y_C = Y_D = 0

103) Because the world coordinates are unknown, the equation in (1) is substituted into (2), converting the world coordinates into camera coordinates; with (X_cA, Y_cA, Z_cA) denoting the camera coordinates of point A, and similarly for B, C and D:

    r11(X_cC − X_cA) + r21(Y_cC − Y_cA) + r31(Z_cC − Z_cA) = 0
    r11(X_cD − X_cB) + r21(Y_cD − Y_cB) + r31(Z_cD − Z_cB) = 0
    r11(X_cB − X_cA) + r21(Y_cB − Y_cA) + r31(Z_cB − Z_cA) = width
    r13(X_cA − X_cB) + r23(Y_cA − Y_cB) + r33(Z_cA − Z_cB) = 0
    r13(X_cC − X_cD) + r23(Y_cC − Y_cD) + r33(Z_cC − Z_cD) = 0
    r12·X_ci + r22·Y_ci + r32·Z_ci + h = 0,   i ∈ {A, B, C, D}
At this point the equations no longer contain the world coordinates of the points, only their camera coordinates; the camera intrinsic parameters are known and do not change while the vehicle is driving, so they are used to convert the camera coordinates into pixel coordinates;
3) Since the camera intrinsic parameters do not change while the vehicle is driving, they are used to convert camera coordinates into pixel coordinates; after the pixel coordinates of the four feature points are obtained from the image, the four parameters ψ, θ, φ and h of the extrinsic rotation matrix and translation matrix are solved; the specific implementation method comprises the following steps:
201) Introduce the classical transformation model between the pixel coordinate system and the world coordinate system:

        [ u ]   [ f_x  0   u_0  0 ]            [ X_w ]
    Z_c·[ v ] = [  0  f_y  v_0  0 ]·[ R  T ] · [ Y_w ]
        [ 1 ]   [  0   0    1   0 ] [ 0  1 ]   [ Z_w ]
                                               [  1  ]

where f_x = f/dx and f_y = f/dy are called the normalized focal lengths of the x and y axes respectively, dx and dy denote the physical size of one pixel in the x and y directions, f is the camera focal length, (u_0, v_0) are the coordinates of the image coordinate system origin in the pixel coordinate system, R denotes the camera rotation matrix, and T the camera translation matrix;
202 From the transformation model between the world coordinate system and the camera coordinate system in combination with the transformation model between the pixel coordinate system and the world coordinate system, the transformation model between the camera coordinate system and the pixel coordinate system is obtained as follows:
        [ u ]   [ f_x  0   u_0 ] [ X_c ]
    Z_c·[ v ] = [  0  f_y  v_0 ]·[ Y_c ]
        [ 1 ]   [  0   0    1  ] [ Z_c ]

Expansion gives:

    X_c = m·Z_c,    Y_c = n·Z_c

where

    m = (u − u_0)/f_x,    n = (v − v_0)/f_y

f_x, f_y, u_0 and v_0 are all known parameters;
substituting the above equations into the last equation in (1) yields:

    Z_ci = −h / (r12·m_i + r22·n_i + r32),   i ∈ {A, B, C, D}
203) Solve the extrinsic parameter matrices R and T:
substituting formulas (2) and (3) into formula (1) eliminates the camera coordinates of each point, leaving only parameters related to the pixel coordinates, which are read directly from the image; the system simplifies to:

    [equation image in original: the simplified system in the four unknowns ψ, θ, φ and h]

Since this system contains only the four unknowns ψ, θ, φ and h, solving it simultaneously gives:

    [equation image in original: closed-form expressions for ψ, θ, φ and h]

where
    F_AC = (m_C − m_A) + tanφ·m_A·m_C·(n_A − n_C);   G_AC = sinφ·(m_C − m_A) + cosφ·m_A·m_C·(n_A − n_C);
    F_BD = (m_D − m_B) + tanφ·m_B·m_D·(n_B − n_D);   G_BD = sinφ·(m_D − m_B) + cosφ·m_B·m_D·(n_B − n_D)
Thus the four parameters ψ, θ, φ and h of the extrinsic rotation matrix R and translation matrix T are solved.
CN202010713419.7A 2020-07-22 2020-07-22 Virtual lane line-based automatic calibration method for vehicle-mounted camera Active CN112102413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010713419.7A CN112102413B (en) 2020-07-22 2020-07-22 Virtual lane line-based automatic calibration method for vehicle-mounted camera

Publications (2)

Publication Number Publication Date
CN112102413A CN112102413A (en) 2020-12-18
CN112102413B true CN112102413B (en) 2022-12-09

Family

ID=73749988

Country Status (1)

Country Link
CN (1) CN112102413B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785653A (en) * 2020-12-30 2021-05-11 中山联合汽车技术有限公司 Vehicle-mounted camera attitude angle calibration method
CN112927303B (en) * 2021-02-22 2023-01-24 中国重汽集团济南动力有限公司 Lane line-based automatic driving vehicle-mounted camera pose estimation method and system
CN112927309B (en) * 2021-03-26 2024-04-09 苏州欧菲光科技有限公司 Vehicle-mounted camera calibration method and device, vehicle-mounted camera and storage medium
CN113223095B (en) * 2021-05-25 2022-06-17 中国人民解放军63660部队 Internal and external parameter calibration method based on known camera position
CN114463439B (en) * 2022-01-18 2023-04-11 襄阳达安汽车检测中心有限公司 Vehicle-mounted camera correction method and device based on image calibration technology
CN114359412B (en) * 2022-03-08 2022-05-27 盈嘉互联(北京)科技有限公司 Automatic calibration method and system for external parameters of camera facing to building digital twins
CN115024740B (en) * 2022-08-11 2022-10-25 晓智未来(成都)科技有限公司 Virtual radiation field display method for common X-ray photography

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108898638A (en) * 2018-06-27 2018-11-27 江苏大学 A kind of on-line automatic scaling method of vehicle-mounted camera
CN110008893A (en) * 2019-03-29 2019-07-12 武汉理工大学 A kind of automobile driving running deviation automatic testing method based on vehicle-mounted imaging sensor

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9981605B2 (en) * 2014-05-16 2018-05-29 GM Global Technology Operations LLC Surround-view camera system (VPM) and vehicle dynamic

Non-Patent Citations (3)

Title
"Automatic on-the-fly extrinsic camera calibration of onboard vehicular cameras"; M. B. de Paula et al.; Expert Systems with Applications, vol. 41, no. 4, March 2014, pp. 1997-2007 *
"Research on Lane-Marking Line Based Camera Calibration"; Kunfeng Wang et al.; 2007 IEEE International Conference on Vehicular Electronics and Safety, published Feb. 25, 2008 *
"Multi-interference lane line detection based on IPM and edge image filtering"; Wu Huayue et al.; China Journal of Highway and Transport, vol. 33, no. 5, May 2020, pp. 153-164 *

Similar Documents

Publication Publication Date Title
CN112102413B (en) Virtual lane line-based automatic calibration method for vehicle-mounted camera
CN110148169B (en) Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera
CN109741455B (en) Vehicle-mounted stereoscopic panoramic display method, computer readable storage medium and system
JP5739584B2 (en) 3D image synthesizing apparatus and method for visualizing vehicle periphery
JP4555876B2 (en) Car camera calibration method
US9858639B2 (en) Imaging surface modeling for camera modeling and virtual view synthesis
Scaramuzza et al. Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes
US7697126B2 (en) Three dimensional spatial imaging system and method
JP5455124B2 (en) Camera posture parameter estimation device
CN110842940A (en) Building surveying robot multi-sensor fusion three-dimensional modeling method and system
US8817079B2 (en) Image processing apparatus and computer-readable recording medium
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
WO2015127847A1 (en) Super resolution processing method for depth image
CN113362228A (en) Method and system for splicing panoramic images based on improved distortion correction and mark splicing
US20230351625A1 (en) A method for measuring the topography of an environment
CN113205603A (en) Three-dimensional point cloud splicing reconstruction method based on rotating platform
CN112254680B (en) Multi freedom's intelligent vision 3D information acquisition equipment
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN115239922A (en) AR-HUD three-dimensional coordinate reconstruction method based on binocular camera
CN112927133A (en) Image space projection splicing method based on integrated calibration parameters
WO2022078437A1 (en) Three-dimensional processing apparatus and method between moving objects
CN112257535B (en) Three-dimensional matching equipment and method for avoiding object
CN112304250B (en) Three-dimensional matching equipment and method between moving objects
CN112254678B (en) Indoor 3D information acquisition equipment and method
CN112254669B (en) Intelligent visual 3D information acquisition equipment of many bias angles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant