CN109685855B - Camera calibration optimization method under road cloud monitoring platform - Google Patents


Info

Publication number
CN109685855B
CN109685855B (application CN201811480427.0A)
Authority
CN
China
Prior art keywords: road, coordinate system, calibration, camera, image
Legal status: Active
Application number: CN201811480427.0A
Other languages: Chinese (zh)
Other versions: CN109685855A (en)
Inventor
王伟
唐心瑶
张朝阳
武非凡
李俊彦
梁浩翔
雷琪
杨露
云旭
侯景严
刘莅辰
贾金明
Current Assignee: Xulong Technology Puyang Co ltd
Original Assignee: Changan University
Application filed by Changan University
Priority to CN201811480427.0A
Publication of CN109685855A
Application granted
Publication of CN109685855B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a camera calibration optimization method under a road cloud monitoring platform. The method automatically identifies geometric identifications such as road edge lines and road dotted lines in a road traffic scene, solves camera parameters such as orientation and focal length from these data with a multi-identification-fusion, scene-adaptive calibration method, and completes calibration. On this basis, if redundant identifications exist, the calibration result can be further optimized. The invention adapts to different road traffic scenes and uses a pan-tilt camera to extract a large amount of geometric information from the scene to complete the camera calibration and optimization process. The method is simple to implement, has good universality, can be applied to camera calibration in various road scenes, and can optimize the calibration result.

Description

Camera calibration optimization method under road cloud monitoring platform
Technical Field
The invention relates to the technical field of intelligent transportation, in particular to a camera calibration optimization method under a road cloud monitoring platform.
Background
There are many technologies for collecting traffic information. In China, buried coil detectors are the most widely deployed; although technically mature, they have many shortcomings in coverage, detected parameters, maintainability, installation difficulty and the like, and alternatives such as radar, infrared and ultrasonic detectors have their own drawbacks. In recent years, methods for acquiring traffic parameters from video have gradually matured. The most common is the road cloud monitoring platform, which is more flexible than a traditional fixed camera: by rotating, the camera can acquire data from different road scenes, greatly increasing the data volume, and in most cases no human intervention is needed. By analyzing the captured video sequences, vehicle identification, tracking and behavior analysis can be achieved, and traffic parameters can be extracted from the video environment. Camera calibration is an essential step for realizing these functions: it determines the conversion relation between three-dimensional world coordinates and two-dimensional image coordinates, and is an important prerequisite for effectively acquiring traffic parameters tied to physical space.
In the field of computer vision, the common camera calibration approach for a specific scene is to obtain the intrinsic and extrinsic parameters of the camera model by establishing correspondences between points with known coordinates on a calibration object and their image points; a calibration object is required throughout the calibration process. Calibration objects are either three-dimensional or two-dimensional (planar). A three-dimensional calibration object allows calibration from a single image with higher precision, but is difficult to machine and maintain; a planar calibration object is simple to manufacture and its precision is easy to guarantee, but two or more images are needed during calibration. Although the traditional calibration methods achieve high precision, they always require a calibration object, which is impractical to place in an outdoor traffic scene; this limits their use.
Disclosure of Invention
The invention aims to provide a camera calibration optimization method under a road cloud monitoring platform, aiming at the problems of inaccurate camera calibration result and poor universality of the traditional method.
In order to realize the task, the invention adopts the following technical scheme:
a camera calibration optimization method under a road cloud monitoring platform comprises the following steps:
step 1, establishing a camera model and a coordinate system
Step 1.1, establishing a world coordinate system, a camera coordinate system, an image coordinate system and a camera model, wherein the camera model is simplified into a pinhole model;
step 1.2, converting the world coordinate of any point on the road surface in the image shot by the camera into an image coordinate system to obtain the projection relation between the point of the world coordinate system and the point of the image coordinate system;
step 2, selecting the minimum calibration condition and calculating the calibration result
Step 2.1, converting the intersection points of a plurality of parallel straight lines in the image coordinate system into the intersection points of the fold line groups in the diamond space;
Step 2.2, transforming the intersection point of the broken line group in the diamond space into an image coordinate system to obtain a vanishing point coordinate of the image coordinate system;
step 2.3, determining a minimum calibration condition according to the number of the vanishing point coordinates obtained in the step 2.2, and calculating a calibration result;
step 3, optimizing the calibration result
And on the basis of the minimum calibration condition and the solved calibration result, judging whether redundant information exists in the road scene or not, and optimizing the calibration result by using the redundant information.
Further, the converting the world coordinate of any point on the road surface in the image captured by the camera into the image coordinate system to obtain the projection relationship between the point of the world coordinate system and the point of the image coordinate system includes:
Coordinates in the world coordinate system: x = [x y z 1]^T; coordinates in the image coordinate system: p = [αu αv α]^T, α ≠ 0; where x, y and z are the components of the world coordinate along the x, y and z axes, u and v are the components of the corresponding image coordinate along the u and v axes, and α is the homogeneous scale factor of the image coordinate;
the projection equation from the world coordinate system to the image coordinate system is:
p=KRTx
wherein K, R and T respectively represent an internal reference matrix, a rotation matrix and a translation matrix; substituting K, R and T into a projection equation to obtain an expanded projection model; assuming that the world coordinate of any point on the road surface is (x, y, 0), the projection relationship between the point of the world coordinate system and the point of the image coordinate system is:
u = f·x / (y·cos φ + h·sin φ)   (1)
v = f·(h·cos φ - y·sin φ) / (y·cos φ + h·sin φ)   (2)
x = h·u / (v·cos φ + f·sin φ)
y = h·(f·cos φ - v·sin φ) / (v·cos φ + f·sin φ)
further, the converting the intersection of a plurality of parallel straight lines in the image coordinate system into the intersection of a set of folding lines in the diamond space includes:
Let the general equation of a straight line in the image coordinate system be ax + by + c = 0; the line is mapped into diamond space as a group of polylines, with the following mapping relationship:
[Equation image not recoverable: the piecewise mapping of the line (a, b, c) to its polyline in diamond space.]
wherein a, b and c are three parameters of a straight line general equation, the three parameters are constants, sgn is a sign function, and subscript o is represented as an image coordinate system.
Further, transforming the intersection point of the broken line group in the diamond space into an image coordinate system to obtain the vanishing point coordinate of the image coordinate system, includes:
The diamond space method transforms the infinite image domain into a finite diamond domain, establishing the mapping relation between image space and diamond space; d represents the half-axis length of the diamond space along y, and D the half-axis length along x. A point [x, y, w]_d in diamond space and a point [x, y, w]_o in image space are transformed into each other by the following transformation formulas, which complete the mapping:
[x,y,w] o →[-dDw,-dx,sgn(xy)x+y+sgn(y)dw] d
[x,y,w] d →[Dy,sgn(x)dx+sgn(y)Dy-dDw,x] o
wherein w is a component of a coordinate in an image coordinate system, subscript o is the image coordinate system, and subscript d is a diamond space; and transforming the intersection points of the broken line groups in the diamond space into the image space according to the transformation formula to obtain accurate vanishing point coordinates.
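The two point-transformation formulas above can be transcribed directly into code. A minimal sketch (the function names are ours, and treating sgn(0) as +1 is an assumed convention the source does not specify):

```python
import numpy as np

def sgn(t):
    # sign function of the mapping; sgn(0) = +1 is an assumed convention
    return 1.0 if t >= 0 else -1.0

def image_to_diamond(p, d, D):
    # [x, y, w]_o -> [-dDw, -dx, sgn(xy)x + y + sgn(y)dw]_d
    x, y, w = p
    return np.array([-d * D * w, -d * x, sgn(x * y) * x + y + sgn(y) * d * w])

def diamond_to_image(p, d, D):
    # [x, y, w]_d -> [Dy, sgn(x)dx + sgn(y)Dy - dDw, x]_o
    x, y, w = p
    return np.array([D * y, sgn(x) * d * x + sgn(y) * D * y - d * D * w, x])

def dehomogenize(p):
    # homogeneous [x, y, w] -> Euclidean (x/w, y/w)
    return p[:2] / p[2]
```

Because the coordinates are homogeneous, mapping a point into diamond space and back recovers the original point only up to scale; dehomogenizing makes the round trip exact.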
Further, the determining a minimum calibration condition according to the number of vanishing point coordinates obtained in step 2.2, and calculating a calibration result includes:
in a road traffic scene, the physical geometric information of a road comprises lane edge lines, road dotted lines and road width, the physical parameters of the physical geometric information all have national standards, and the free combination of the information and the identified vanishing points form a minimum calibration condition; the specific steps for selecting the minimum calibration condition are as follows:
according to the number of vanishing points, the minimum calibration condition can be divided into the following two categories:
(1) Case of two mutually perpendicular vanishing points
(1-1) Case where the height h of the camera is known
Introduce the vanishing point (u0, v0) in the road-surface extension direction and the vanishing point (u1, v1) formed in the perpendicular direction. The y axis forms an included angle θ with the road-surface extension direction, so the point at infinity of the road extension direction has world coordinates x0 = [-tan θ 1 0 0]^T and the point at infinity of the perpendicular direction has world coordinates x1 = [1 tan θ 0 0]^T. By the vanishing-point principle, (u0, v0) and (u1, v1) are the projections of x0 and x1 in image space. Substituting these coordinates into formulas (1) and (2) simplifies to:
u0 = -f·tan θ / cos φ
u1 = f·cot θ / cos φ
v0 = v1 = -f·tan φ
Letting v1 = v0 and transforming the formulas, the expressions for f, φ and θ are obtained:
f = √(-(u0·u1 + v0·v1))   (3)
φ = arctan(-v0 / f)   (4)
θ = arctan(-u0·cos φ / f)   (5)
combining the formulas (3), (4) and (5) to obtain f, phi and theta from the coordinates of the two mutually perpendicular vanishing points, and finishing the calibration process because h is known;
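As a sketch, the closed forms (3)-(5) can be checked numerically against vanishing points synthesized from known parameters. The function names are ours, and the formulas are our reading of the patent's garbled equation images:

```python
import math

def calibrate_from_two_vps(u0, v0, u1, v1):
    # eqs. (3)-(5): focal length from two mutually perpendicular
    # vanishing points, then pitch phi and deflection theta
    f = math.sqrt(-(u0 * u1 + v0 * v1))
    phi = math.atan(-v0 / f)
    theta = math.atan(-u0 * math.cos(phi) / f)
    return f, phi, theta

def synthesize_vps(f, phi, theta):
    # forward projections of the two points at infinity x0 and x1
    u0 = -f * math.tan(theta) / math.cos(phi)
    v0 = -f * math.tan(phi)
    u1 = f / (math.tan(theta) * math.cos(phi))
    return (u0, v0), (u1, v0)  # v1 = v0 (zero roll)
```

Feeding synthesized vanishing points back through `calibrate_from_two_vps` recovers f, φ and θ, which is a useful sanity check on the expressions.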
(1-2) case where the height h of the camera is unknown
Let the physical length of the road dotted-line mark be l, and let the physical ordinates and the pixel ordinates of the dotted line's endpoints be, respectively, y_b and y_f, and v_b and v_f; let the physical width of the road be w, and let δ be the pixel length of the corresponding road-width intercept along the abscissa of the image coordinate system;
The expression for the physical coordinate y can be inversely calculated from equation (2):
y = h·(f·cos φ - v·sin φ) / (v·cos φ + f·sin φ)
the physical coordinate y is independent of the corresponding pixel abscissa u, and thus for any position of the road and the roadThe dashed line l parallel to the road direction can establish the equation relationship: y is b =y f + lcos θ; in two ways f Expressed to carry out simultaneous reaction, solving h:
y_f = h·(f·cos φ - v_f·sin φ) / (v_f·cos φ + f·sin φ)
y_b = h·(f·cos φ - v_b·sin φ) / (v_b·cos φ + f·sin φ)
the condition in the formula (4)
tan φ = -v0 / f, i.e. sin φ = -v0 / √(f² + v0²), cos φ = f / √(f² + v0²)
Substituting, an expression for h can be derived:
h = f·τ·l·cos θ / (f² + v0²)   (6)
where the intermediate variable τ = (v_f - v0)(v_b - v0) / (v_f - v_b) is introduced for computational convenience;
The height h of the camera can be indirectly represented by the deduced dotted line mark l, f, phi and theta can be obtained by combining the formulas (3), (4) and (5) and the coordinates of two mutually perpendicular vanishing points, and all unknown parameters f, phi, theta and h are solved, so that the calibration process is completed;
(2) Case of a single vanishing point
(2-1) case where the height h of the camera is unknown
Let the vanishing point coordinate in the road-surface extension direction be (u0, v0), and introduce the road width w and the dotted line l; the width w can likewise establish an equality relation with the height h. Suppose the intercept across the road, along the x-axis direction of the world coordinate system, is Δx, and the corresponding intercept in the image is Δu; substituting Δx and Δu into formula (1) simplifies to:
Δu = f·Δx / (y·cos φ + h·sin φ)
solving h from the above formula inversely:
h = (f·Δx/Δu - y·cos φ) / sin φ
for ease of expression, consider the resulting intercept Δ u = δ in the v =0 special case;
y|_{v=0} = h·cot φ
Meanwhile, Δx and the road width w satisfy the equality relation Δx = w·sec θ. Substituting y|_{v=0} into the expression for h obtained by the inverse solution yields the equation relation between the road width w and the height h:
h = f·w·sin φ / (δ·cos θ)   (7)
combining the formulas (6) and (7):
f·τ·l·cos θ / (f² + v0²) = f·w·sin φ / (δ·cos θ)
the cos theta, sin phi can be solved by the formulas (4) and (5):
cos θ = √(f² + v0²) / √(f² + u0² + v0²)   (8)
sin φ = -v0 / √(f² + v0²)   (9)
since the known parameter is the vanishing point (u) 0 ,v 0 ) And l and w and their corresponding image projection values, and substituting equations (8) and (9) into the simultaneous equations (6) and (7) can obtain a quartic equation for the unknown parameter f:
(f² + u0² + v0²)² - k_V²·(f² + v0²) = 0
where the intermediate variable k_V = δ·τ·l / (w·v0) is introduced for computational convenience;
Because this is a quartic equation in f, if the constraint f > 0 cannot eliminate one of the equation's two positive-root solutions, a further constraint is needed to pick out the correct one. In practical application scenes, the typical camera height h is used as the criterion: after solving the two candidate positive roots f and the φ and θ corresponding to each, substitute them into formula (6) or (7) and use the fact that the actual h of the scene lies within a certain range to determine the unique root. Once f is uniquely determined, φ and θ can be solved from formulas (4) and (5) and h from formulas (6) and (7); all unknown parameters are solved, completing the calibration process;
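A sketch of this root-selection procedure, treating the quartic as a quadratic in s = f² (all names are ours, and the equation form is our reading of the garbled original):

```python
import math
import numpy as np

def candidate_focal_lengths(u0, v0, delta, tau, l, w):
    # (f^2 + u0^2 + v0^2)^2 - kV^2 (f^2 + v0^2) = 0, as a quadratic in s = f^2
    kV = delta * tau * l / (w * v0)
    A = u0 ** 2 + v0 ** 2
    roots = np.roots([1.0, 2.0 * A - kV ** 2, A ** 2 - kV ** 2 * v0 ** 2])
    return sorted(math.sqrt(s.real) for s in roots
                  if abs(s.imag) < 1e-9 and s.real > 0)

def height_for_f(f, u0, v0, delta, w):
    # eq. (7) with (8) and (9) substituted: h = f w sin(phi) / (delta cos(theta))
    sin_phi = -v0 / math.hypot(f, v0)
    cos_theta = math.hypot(f, v0) / math.sqrt(f ** 2 + u0 ** 2 + v0 ** 2)
    return f * w * sin_phi / (delta * cos_theta)
```

The ambiguity between positive roots is resolved as the text describes: compute h for each candidate f and keep the one whose h falls in the plausible range for a traffic camera.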
(2-2) case where the height h of the camera is known
The physical identification parameter in this case can be the length l of the road dotted line or the road width w; the remaining one can serve as a redundancy condition to optimize the calibration parameters;
If the physical identification parameter is the road width w, a quartic equation in the unknown parameter f can be obtained by squaring both sides of equation (7) and substituting the conditions of equations (8) and (9):
(f² + v0²)² - k_w²·f²·(f² + u0² + v0²) = 0
where the intermediate variable k_w = w·v0 / (h·δ) is introduced for computational convenience;
Since this is a quartic equation in f, if the constraint f > 0 cannot eliminate one of the two positive-root solutions, a further criterion is needed to pick out the correct one; the camera height is known here and cannot serve as that criterion, so l is chosen instead. After solving the two candidate positive roots f and the φ and θ corresponding to each, use the calibration information f, φ and h with formulas (11), (12), (13) and (14) and the two-point distance formula
l_cal = √((x1 - x0)² + (y1 - y0)²)
to calculate the spatial distance of l, and select the root f whose l has the smaller error relative to the true physical distance. Once f is uniquely determined, φ and θ can be solved from formulas (4) and (5); since h is known, the calibration process is complete.
If the physical identification parameter is the length l of the road dotted line, a quartic equation in the unknown parameter f can be obtained by squaring both sides of equation (6) and substituting the condition of equation (8):
(f² + v0²)·(f² + u0² + v0²) - k_L²·f² = 0
where the intermediate variable k_L = τ·l / h is introduced for computational convenience;
Since this is a quartic equation in f, if the constraint f > 0 cannot eliminate one of the two positive-root solutions, a further criterion is needed; the camera height is known and cannot serve as that criterion, so l is chosen. After solving the two candidate positive roots f and the φ and θ corresponding to each, use the calibration information f, φ and h with formulas (11), (12), (13) and (14) and the two-point distance formula
l_cal = √((x1 - x0)² + (y1 - y0)²)
to calculate the spatial distance of l, and select the root f whose l has the smaller error relative to the true physical distance. Once f is uniquely determined, φ and θ can be solved from formulas (4) and (5); since h is known, the calibration process is complete.
Further, the optimizing the calibration result by using the redundant information includes:
X_N = argmin_X [ Σ_{i=1..N1} e²(l_i) + Σ_{j=1..N2} e²(w_j) ]   (10)
The above formula is recorded as the cost function; the number of groups of redundant length information is N1 and the number of groups of redundant width information is N2. Each term e²(·) represents, for one group of redundant conditions, the normalized error between the corresponding geometric physical quantity expressed through the parameter X to be estimated and its actual value, and argmin_X denotes finding the value of the parameter X to be estimated that minimizes expression (10). The initial value X0 = (f0, φ0, h0) of the parameter X to be estimated is the initial calibration result obtained by calibration;
obtaining the image coordinates of two end points of the redundant length information as (u) 0 ,v 0 ),(u 1 ,v 1 ) Corresponding to world coordinates of (x) 0 ,y 0 ,0),(x 1 ,y 1 ,0);
The physical coordinates of the redundant length information, expressed through the parameters f, φ and h to be estimated, are as follows:
x0 = h·u0 / (v0·cos φ + f·sin φ)   (11)
y0 = h·(f·cos φ - v0·sin φ) / (v0·cos φ + f·sin φ)   (12)
x1 = h·u1 / (v1·cos φ + f·sin φ)   (13)
y1 = h·(f·cos φ - v1·sin φ) / (v1·cos φ + f·sin φ)   (14)
if the redundant length information is selected to obtain the normalized error, the spatial distance calculated according to the redundant length information can be known as
Figure BDA0001893236250000081
And the actual distance in space l truth If the actual spatial distance is known, the difference between the calculated spatial distance and the actual spatial distance is calculated to obtain a set of e 2 (l)=l truth -l cal
If the redundant width information is selected to obtain the normalized error, by formulas (11), (12), (13) and (14) and the two-point distance formula, the spatial distance calculated from the redundant width information is
w_cal = √((x1 - x0)² + (y1 - y0)²)
and since the actual spatial distance w_truth is known, taking the difference of the two gives one group of e²(w) = w_truth - w_cal;
Summing over all groups of redundant length and width information constructs the complete formula (10); solving for the parameter X_N = (f_N, φ_N, h_N) that minimizes formula (10) gives the optimized parameters.
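A minimal numeric sketch of the cost in formula (10), restricted to redundant length conditions (function and variable names are ours; a real implementation would hand this cost to a nonlinear least-squares solver started from X0):

```python
import math

def backproject(u, v, f, phi, h):
    # eqs. (11)-(14): road-plane coordinates of an image point
    denom = v * math.cos(phi) + f * math.sin(phi)
    return (h * u / denom,
            h * (f * math.cos(phi) - v * math.sin(phi)) / denom)

def project(x, y, f, phi, h):
    # eqs. (1)-(2), used here only to synthesize test data
    denom = y * math.cos(phi) + h * math.sin(phi)
    return (f * x / denom,
            f * (h * math.cos(phi) - y * math.sin(phi)) / denom)

def cost(X, segments):
    # formula (10) over redundant lengths: sum of squared errors;
    # each segment is ((u0, v0), (u1, v1), l_truth)
    f, phi, h = X
    total = 0.0
    for p0, p1, l_truth in segments:
        x0, y0 = backproject(p0[0], p0[1], f, phi, h)
        x1, y1 = backproject(p1[0], p1[1], f, phi, h)
        total += (l_truth - math.hypot(x1 - x0, y1 - y0)) ** 2
    return total
```

At the true parameters the cost is numerically zero and it grows as the parameters are perturbed, which is exactly the property a descent method exploits when started from the initial calibration result.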
The invention has the following technical characteristics:
the method is simple to implement, can be applied to camera calibration under various road scenes, ensures the universality under a road cloud monitoring environment, meets the requirement of accurately acquiring traffic parameters in an intelligent traffic monitoring system, and can optimize the calibration result.
Drawings
FIG. 1 is a flow chart of a camera calibration optimization method provided by the present invention;
FIG. 2 is a schematic diagram of a coordinate system of a camera model of the present invention; wherein, (a) is a side view of the world coordinate system of the camera model, and (b) is a top view of the camera coordinate system of the camera model;
FIG. 3 is a schematic representation of the minimum calibration condition in a coordinate system according to the present invention; wherein, (a) is a representation diagram of the minimum calibration condition in a world coordinate system, and (b) is a representation diagram of the minimum calibration condition in an image coordinate system;
FIG. 4 is a diagram showing the mapping relationship between the image space and the diamond space according to the present invention;
FIG. 5 is an original image of a traffic scene used in an embodiment of the present invention;
FIG. 6 is a minimum calibration condition image selected from the original images in an embodiment of the present invention;
Detailed Description
The invention utilizes the parallel-line features that are very easy to extract in traffic scenes to automatically identify the minimum calibration condition and thereby calibrate the camera. Calibration is based on vanishing points: the minimum calibration condition is determined by automatically identifying road edge lines, road width lines and the road-center dotted lines, and calibration proceeds according to the particular minimum calibration condition found. In a real road scene, the lane-line group, including the two road edges, is a good parallel-line group for obtaining vanishing points; the intersections of the horizontal line through the center of the road image with the two road edges are often taken as the segment for calculating road width; and the dotted lines with fixed spacing in the middle of the road are often taken as a good line group for calculating length along the road. These parameters are easy to obtain in a road environment, which largely guarantees the universality of the calibration method in this scenario. On the basis of the minimum calibration condition, if redundant information exists, the calibration parameters can be optimized and the calibration precision improved.
As shown in fig. 1 to 6, the invention discloses a camera calibration optimization method under a road cloud monitoring platform, which comprises the following detailed steps:
step 1, establishing a camera model and a coordinate system
Step 1.1, establishing a world coordinate system O-XYZ, a camera coordinate system O-X_cY_cZ_c, an image coordinate system O-UV, and a camera model;
the camera model is simplified into a pinhole model, a principal point coincides with the center of an image, an imaging plane is vertical to an optical axis, only the focal length of an internal parameter is undetermined, and an observation road surface is straight. Fig. 2 (a) and (b) are schematic diagrams of a camera space model in a road scene. In order to facilitate subsequent analysis, a focal length of a camera is set to be f, a height from an origin of the camera to the ground is set to be h, a pitch angle of the camera is set to be phi, and a deflection angle (an included angle between projection of an optical axis of the camera on a road plane and an extending direction of a road) of the camera is set to be theta.
The established coordinate systems are all right-handed. The world coordinate system comprises the x, y and z axes, with its origin at the projection point of the camera on the road surface and the z axis perpendicular to the ground, pointing upward; in the side view of Fig. 2 (a), the x axis points into the page. The y axis is perpendicular to the xoz plane; in the top view of Fig. 2 (b), the z axis points out of the page. The camera coordinate system comprises the x_c, y_c and z_c axes, with its origin at the camera position: the x_c axis is parallel to the x axis of the world coordinate system, the z_c axis points forward along the camera optical axis toward the ground, and the y_c axis, perpendicular to the x_c o z_c plane, points toward the ground. The principal point in the figure is the point r, the intersection of the extended z_c axis with the ground; from the angle relations in Fig. 2, the coordinates of r in the world coordinate system are (0, h·cot φ, 0). The image coordinate system takes r as its origin, with the u axis horizontal to the right and the v axis vertical downward; it is an image-plane coordinate system.
Step 1.2, converting the world coordinate of any point on the road surface in the image shot by the camera into an image coordinate system to obtain the projection relation between the point of the world coordinate system and the point of the image coordinate system;
Coordinates in the world coordinate system: x = [x y z 1]^T; coordinates in the image coordinate system: p = [αu αv α]^T, α ≠ 0. Here x, y and z are the components of the world coordinate along the x, y and z axes, u and v are the components of the corresponding image coordinate along the u and v axes, and α is the homogeneous scale factor of the image coordinate. The projection equation from the world coordinate system to the image coordinate system is:
p=KRTx
wherein K, R and T respectively represent an internal reference matrix, a rotation matrix and a translation matrix.
K = [ f 0 0 ; 0 f 0 ; 0 0 1 ]
R = [ 1 0 0 ; 0 -sin φ -cos φ ; 0 cos φ -sin φ ]
T = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 -h ]
And substituting K, R and T into the projection equation to obtain an expanded projection model.
αu = f·x
αv = f·(h - z)·cos φ - f·y·sin φ
α = y·cos φ - (z - h)·sin φ
Since the minimum calibration conditions below are all based on road-surface identification information, setting the world coordinate of any point on the road surface to (x, y, 0) simplifies the projection relationship between a space point and an image point (a point of the world coordinate system and a point of the image coordinate system) to:
u = f·x / (y·cos φ + h·sin φ)   (1)
v = f·(h·cos φ - y·sin φ) / (y·cos φ + h·sin φ)   (2)
x = h·u / (v·cos φ + f·sin φ)
y = h·(f·cos φ - v·sin φ) / (v·cos φ + f·sin φ)
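The expanded model and the simplified road-plane relations above can be cross-checked numerically; a sketch using the matrices as reconstructed here (the numeric values below are arbitrary test inputs, not from the patent):

```python
import numpy as np

def projection_matrix(f, phi, h):
    # p = K R T x with the intrinsic, rotation and translation matrices
    K = np.array([[f, 0.0, 0.0], [0.0, f, 0.0], [0.0, 0.0, 1.0]])
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, -np.sin(phi), -np.cos(phi)],
                  [0.0,  np.cos(phi), -np.sin(phi)]])
    T = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, -h]])
    return K @ R @ T

def project_point(P, f, phi, h):
    # apply the full projection, then dehomogenize to pixel coordinates
    p = projection_matrix(f, phi, h) @ np.append(P, 1.0)
    return p[:2] / p[2]
```

For a road-plane point (x, y, 0) the result agrees with formulas (1) and (2), and the principal point r = (0, h·cot φ, 0) projects to the image origin, as the coordinate-system construction requires.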
step 2, selecting the minimum calibration condition and calculating the calibration result
Step 2.1, converting the intersection points of a plurality of parallel straight lines in the image coordinate system into the intersection points of the fold line groups in the diamond space;
in this scheme, there are several intersections of the straight lines in the image coordinate system, and the straight lines are converted to the corresponding intersections in the diamond space (from the Real subject Plane Mapping for Detection of organic vanizing Points) with the same number of intersections.
Let the general equation of a straight line in the image coordinate system be ax + by + c = 0; the line is mapped into diamond space as a group of polylines, with the following mapping relationship:
[Equation image not recoverable: the piecewise mapping of the line (a, b, c) to its polyline in diamond space.]
wherein a, b and c are three parameters of a straight line general equation, the three parameters are constants, sgn is a sign function, and subscript o is represented as an image coordinate system.
The mapping of the straight lines present in the image space to the diamond space is to convert the infinite straight lines in the image coordinate system to finite broken lines in the diamond space, thus finding the vanishing points.
Step 2.2, transforming the intersection point of the broken line group in the diamond space into an image coordinate system to obtain a vanishing point coordinate of the image coordinate system;
in order to obtain accurate vanishing point coordinates, an infinite image domain is transformed into a limited diamond domain by adopting a diamond space method, and a mapping relation between the image space and the diamond space is established. As shown in fig. 4, D represents the length of the y half axis of the diamond space, D represents the length of the x half axis of the diamond space, infinite space in the original image domain is mapped into a finite diamond region, the dotted line in fig. 4 represents infinite points distributed in four quadrants of the image domain, and the mapping relationship of different coordinate axes in the diamond space corresponds. Then through the diamond spatial midpoint x, y, w] d And the point [ x, y, w ] in image space] o The mapping can be done by a transformational transformation formula:
[x,y,w] o →[-dDw,-dx,sgn(xy)x+y+sgn(y)dw] d
[x,y,w] d →[Dy,sgn(x)dx+sgn(y)Dy-dDw,x] o
where w is a component of the coordinates in the image coordinate system, subscript o is the image coordinate system, and subscript d is the diamond space.
Similar to the voting principle of the Hough transform, the intersection points of several parallel straight lines in image space become intersection points of the polyline groups in the diamond space; transforming those intersection points back into image space according to the transformation formulas yields accurate vanishing point coordinates.
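The two transformation formulas above can be exercised directly. The following Python sketch (all function and variable names are illustrative, not from the patent) implements the point mappings between image space and diamond space and checks that a point mapped into the diamond space and back returns to the same homogeneous point up to scale. Because the sgn terms are not invariant under negating a homogeneous triple, the sketch assumes the convention that the diamond-space representative with a nonnegative last component is used:

```python
def sgn(v):
    # sign function used by the diamond-space formulas
    return (v > 0) - (v < 0)

def image_to_diamond(p, d, D):
    # [x, y, w]_o -> [-dDw, -dx, sgn(xy)x + y + sgn(y)dw]_d
    x, y, w = p
    q = (-d * D * w, -d * x, sgn(x * y) * x + y + sgn(y) * d * w)
    # the sgn terms are not invariant under negating a homogeneous triple,
    # so normalize to a nonnegative last component (a convention assumed here)
    return q if q[2] >= 0 else (-q[0], -q[1], -q[2])

def diamond_to_image(p, d, D):
    # [x, y, w]_d -> [Dy, sgn(x)dx + sgn(y)Dy - dDw, x]_o
    x, y, w = p
    return (D * y, sgn(x) * d * x + sgn(y) * D * y - d * D * w, x)

def same_projective_point(p, q, tol=1e-9):
    # homogeneous points coincide iff their cross product vanishes
    c = (p[1] * q[2] - p[2] * q[1],
         p[2] * q[0] - p[0] * q[2],
         p[0] * q[1] - p[1] * q[0])
    return max(abs(v) for v in c) < tol

if __name__ == "__main__":
    d, D = 1.0, 1.0
    for p in [(2.0, 3.0, 1.0), (-2.0, 3.0, 1.0), (5.0, -4.0, 1.0)]:
        q = diamond_to_image(image_to_diamond(p, d, D), d, D)
        assert same_projective_point(p, q)
    print("round-trip ok")
```

In a full vanishing-point detector, each input line would be rasterized as its diamond-space polyline into an accumulator, and the peak of the accumulator mapped back with `diamond_to_image`.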
Step 2.3, determining a minimum calibration condition according to the number of vanishing point coordinates obtained in step 2.2, and calculating the calibration result.
Two mutually perpendicular vanishing points + 1 physical identification parameter in the scene:
case where camera height h is known: and (3) substituting the infinite point coordinate in the extending direction of the road surface and the infinite point coordinate in the vertical direction in the world coordinate system into the formula 1 and the formula 2 in the step 1 to obtain the expressions of f, phi and theta, and finishing the calibration.
Case where the camera height h is unknown: introduce the y-axis world coordinates y_b, y_f of the two end points of a road physical line segment of length l, and through the expressions for f, φ and θ derive an expression relating the camera height h and the length l, completing the calibration.
One vanishing point +2 physical identification parameters in the scene:
case where the camera height h is unknown: introducing the physical width w of the road, obtaining an expression of the height h and the width w of the camera through the formula 1 in the step 1, combining the expression of h and w and the expression of h and l to obtain a quartic equation about f, solving phi and theta through the formulas 4 and 5, and completing calibration.
Case where the camera height h is known: if the physical road width w is introduced, squaring both sides of the expression relating h and w and substituting the conditions of formulas 4 and 5 yields a quartic equation in f; f is solved, then φ and θ through formulas 4 and 5, completing the calibration. If the road physical line-segment length l is introduced, squaring both sides of the expression relating h and l and substituting the condition of formula 5 yields a quartic equation in f; f is solved, then φ and θ through formulas 4 and 5, completing the calibration.
In a road traffic scene, the physical geometric information of the road mainly comprises lane edge lines, road dashed lines, road width and the like; the physical parameters of this geometric information are all fixed by national standards, and free combinations of this information with the identified vanishing points form the minimum calibration conditions. The specific steps for selecting the minimum calibration condition are as follows:
according to the number of vanishing points, the minimum calibration condition can be divided into the following two categories:
the first type is: two mutually perpendicular vanishing points in a scene plus 1 physical identification parameter;
the second type: a vanishing point +2 physical identification parameters in the scene;
wherein, two mutually perpendicular vanishing points are: one in the direction of traffic flow, i.e. the direction of extension of the road surface, and the other in a direction perpendicular to the direction of extension of the road surface. The single vanishing point is along the direction of traffic flow, i.e. the direction of extension of the road surface.
(1) Case of two mutually perpendicular vanishing points
The 1 physical identification parameter of the first minimum calibration condition covers two cases: in the first, the camera height h is known and the calibration is completed by solving only f, φ and θ; in the second, h is unknown, f, φ, θ and h must all be computed, and h can be calculated indirectly through f, φ and θ.
(1-1) case where the height h of the camera is known
Introduce the vanishing point (u_0, v_0) along the road-surface extension direction (traffic flow direction) and the vanishing point (u_1, v_1) formed in the perpendicular direction, as shown in fig. 3 (a). The angle between the y-axis and the road-surface extension direction is θ. In the world coordinate system, the coordinate of the point at infinity along the road-surface extension direction is x_0 = [-tanθ 1 0 0]^T and that of the point at infinity in the perpendicular direction is x_1 = [1 tanθ 0 0]^T. By the vanishing-point principle, (u_0, v_0) and (u_1, v_1) are the projections of x_0 and x_1 in image space. Substituting these coordinates into formulas (1) and (2) simplifies to the following:
Figure BDA0001893236250000131
Figure BDA0001893236250000132
Figure BDA0001893236250000133
since it was assumed earlier that the camera has no roll angle, the vertical coordinates of the two vanishing points are equal, i.e. v_1 = v_0; transforming the above formulas yields the expressions for f, φ and θ:
Figure BDA0001893236250000134
Figure BDA0001893236250000135
Figure BDA0001893236250000136
Combining formulas (3), (4) and (5), f, φ and θ are obtained from the coordinates of the two mutually perpendicular vanishing points; since h is known, the calibration process is complete.
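Formulas (3), (4) and (5) appear only as images in this text. The sketch below therefore uses the standard closed form for two orthogonal vanishing points with no roll, f = sqrt(-(v_0² + u_0·u_1)), φ = arctan(-v_0/f), θ = arctan(-u_0·cosφ/f); the sign convention and the pinhole model used to generate the test vanishing points are assumptions of this sketch, not taken from the patent:

```python
import math

def vanishing_points(f, phi, theta):
    # Pinhole camera with no roll: axes Xc=(1,0,0), Yc=(0,-sin phi,-cos phi),
    # Zc=(0,cos phi,-sin phi); the road makes angle theta (theta != 0 here)
    # with the world y-axis. A direction d projects to the vanishing point
    # u = f*(Xc.d)/(Zc.d), v = f*(Yc.d)/(Zc.d).
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    Xc = (1.0, 0.0, 0.0)
    Yc = (0.0, -math.sin(phi), -math.cos(phi))
    Zc = (0.0, math.cos(phi), -math.sin(phi))
    d0 = (-math.sin(theta), math.cos(theta), 0.0)   # along the road
    d1 = (math.cos(theta), math.sin(theta), 0.0)    # perpendicular to it
    vp = lambda d: (f * dot(Xc, d) / dot(Zc, d), f * dot(Yc, d) / dot(Zc, d))
    return vp(d0), vp(d1)

def calibrate_from_two_vps(u0, v0, u1):
    # closed form for two orthogonal vanishing points with equal v (no roll)
    f = math.sqrt(-(v0 * v0 + u0 * u1))
    phi = math.atan(-v0 / f)
    theta = math.atan(-u0 * math.cos(phi) / f)
    return f, phi, theta

if __name__ == "__main__":
    (u0, v0), (u1, v1) = vanishing_points(1000.0, math.radians(15), math.radians(10))
    print(calibrate_from_two_vps(u0, v0, u1))  # ~ (1000, 0.2618, 0.1745)
```

Round-tripping a synthetic camera through its own vanishing points, as in the demo, is a quick self-consistency check before applying the closed form to vanishing points detected in real images.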
(1-2) case where the height h of the camera is unknown
When the camera height h is unknown, a common dashed-line marking in the road traffic scene can be selected to provide the physical identification parameter.
Let the physical length of the road dashed-line marking be l, and let the physical ordinates and pixel ordinates of the dashed line's two end points be y_b, y_f and v_b, v_f respectively. The physical width of the road is w, and the pixel length of the intercept of the road width on the abscissa axis of the image coordinate system is δ. The spatial geometry of the road marking information and its perspective projection relationship in the image are shown in fig. 3.
The representation of the physical coordinate y can be inversely calculated from equation (2).
Figure BDA0001893236250000141
The physical coordinate y is independent of the corresponding pixel abscissa u. Therefore, for a dashed line of length l at an arbitrary position of the road and parallel to the road direction, as shown in fig. 3 (a), the equality y_b = y_f + l·cosθ can be established. Expressing y_b and y_f in both ways and combining the resulting equations, h is solved:
Figure BDA0001893236250000142
Figure BDA0001893236250000143
the condition in the formula (4)
Figure BDA0001893236250000144
Substituting, an expression for h can be obtained:
Figure BDA0001893236250000145
in which, for computational convenience, the intermediate variable τ = (v_f − v_0)(v_b − v_0)/(v_f − v_b) is introduced.
The camera height h can thus be represented indirectly through the derived dashed-line marking length l. Combining formulas (3), (4) and (5) with the coordinates of the two mutually perpendicular vanishing points gives f, φ and θ, so all unknown parameters f, φ, θ and h are solved and the calibration process is complete.
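The expression for h through the dashed-line length l is shown only as an image above, but it can be validated numerically. The sketch below assumes the same pinhole convention as before (camera at height h, tilt φ, no roll), under which the relation y_b = y_f + l·cosθ leads to h = f·l·τ·cosθ/(f² + v_0²) with τ as defined in the text; the sign convention and all names are this sketch's assumptions:

```python
import math

def project_ground_v(y, f, phi, h):
    # image ordinate of a ground point at world depth y (camera at (0,0,h),
    # tilt phi, no roll); v does not depend on the lateral coordinate x
    return f * (h * math.cos(phi) - y * math.sin(phi)) / (y * math.cos(phi) + h * math.sin(phi))

def height_from_dash(l, theta, v0, vb, vf, f):
    # tau as defined in the text; h = f*l*tau*cos(theta)/(f^2 + v0^2)
    tau = (vf - v0) * (vb - v0) / (vf - vb)
    return f * l * tau * math.cos(theta) / (f * f + v0 * v0)

if __name__ == "__main__":
    f, phi, theta, h = 1000.0, math.radians(12), math.radians(8), 7.5
    v0 = -f * math.tan(phi)                  # vanishing point ordinate
    yf = 20.0
    yb = yf + 6.0 * math.cos(theta)          # dash of physical length 6 m
    vb = project_ground_v(yb, f, phi, h)
    vf_pix = project_ground_v(yf, f, phi, h)
    print(height_from_dash(6.0, theta, v0, vb, vf_pix, f))  # ~ 7.5
```

Generating v_b and v_f from a synthetic camera and recovering the known height, as in the demo, confirms the formula is consistent with the projection model it was derived from.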
(2) Single vanishing point case
In a road traffic cloud monitoring scene, the pan-tilt angle changes constantly, so two accurate vanishing points are difficult to obtain; generally only the vanishing point along the traffic flow direction can be acquired accurately. In an actual scene, therefore, the second minimum calibration condition is preferentially adopted for parameter calibration. The specific calculation process is as follows:
the 2 physical identification parameters in the second minimum calibration condition can be divided into two cases:
(2-1) case where the height h of the camera is unknown
In this case the physical identification parameters must be the length of the road dashed line and the width of the road. Let the vanishing point along the road-surface extension direction (traffic flow direction) be (u_0, v_0). Introducing the road width w and the dashed-line length l, the width w can likewise be placed in an equality relation with the height h. Let Δx be the intercept of the road between its boundaries along the x-axis direction of the world coordinate system, and Δu the corresponding intercept in the image, as shown in fig. 3. Substituting Δx and Δu into formula (1) simplifies to:
Figure BDA0001893236250000146
solving h from the above formula inversely:
Figure BDA0001893236250000151
for the sake of convenience of expression, as shown in fig. 3 (b), the intercept Δ u = δ obtained in the special case of v =0 is considered.
y|_{v=0} = h·cotφ
Meanwhile, as shown in fig. 3 (a), Δx and the road width w satisfy the equality Δx = w·secθ. Substituting y|_{v=0} = h·cotφ into the expression for h obtained above by inverse solution yields the equality relation between the road width w and the height h:
Figure BDA0001893236250000152
combining the formulas (6) and (7):
Figure BDA0001893236250000153
cosθ and sinφ can be solved from formulas (4) and (5):
Figure BDA0001893236250000154
Figure BDA0001893236250000155
since the known parameters are the vanishing point (u_0, v_0), l and w together with their corresponding image projection values, substituting formulas (8) and (9) into the simultaneous equations (6) and (7) yields a quartic equation in the unknown parameter f:
Figure BDA0001893236250000156
wherein, for computational convenience, the intermediate variable k_V = δτl/(w·v_0) is introduced.
Because this is a quartic equation in f, the constraint f > 0 may still leave two positive candidate roots that cannot be eliminated, and f must be further constrained to pick out the correct solution. In a practical application scene, the typical camera height h serves as the judgment basis: the two candidate positive roots f, together with the φ and θ corresponding to each, are substituted into formula (6) or (7), and the fact that h in a real scene lies within a certain range determines the unique root. Once f is uniquely determined, φ and θ can be solved from formulas (4) and (5) and h from formulas (6) and (7); all unknown parameters are solved, completing the calibration process.
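Instead of solving the quartic (whose closed form is shown only as an image), the underlying condition h(l) = h(w) can be solved numerically, which also avoids the spurious root introduced by squaring. The sketch below assumes the relations h = f·l·τ·cosθ/(f² + v_0²) and h = f·w·sinφ/(δ·cosθ) derived under the earlier pinhole convention, with sinφ and cosθ expressed through f, u_0 and v_0 in the spirit of formulas (8) and (9); the synthetic data generation and all names are this sketch's assumptions:

```python
import math

def trig_from_vp(f, u0, v0):
    # sin(phi) and cos(theta) expressed through f and the vanishing point
    sin_phi = -v0 / math.hypot(f, v0)
    cos_theta = math.sqrt((f * f + v0 * v0) / (f * f + v0 * v0 + u0 * u0))
    return sin_phi, cos_theta

def h_from_length(f, u0, v0, tau, l):
    _, cos_theta = trig_from_vp(f, u0, v0)
    return f * l * tau * cos_theta / (f * f + v0 * v0)

def h_from_width(f, u0, v0, delta, w):
    sin_phi, cos_theta = trig_from_vp(f, u0, v0)
    return f * w * sin_phi / (delta * cos_theta)

def solve_f(u0, v0, tau, delta, l, w, f_lo=100.0, f_hi=5000.0, h_range=(3.0, 30.0)):
    # scan for sign changes of g(f) = h(l) - h(w), then bisect each bracket
    g = lambda f: h_from_length(f, u0, v0, tau, l) - h_from_width(f, u0, v0, delta, w)
    roots, steps = [], 2000
    prev_f, prev_g = f_lo, g(f_lo)
    for i in range(1, steps + 1):
        fi = f_lo + (f_hi - f_lo) * i / steps
        gi = g(fi)
        if prev_g * gi < 0:
            a, b = prev_f, fi
            for _ in range(60):
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        prev_f, prev_g = fi, gi
    # disambiguate, as the text suggests, by a plausible camera height
    ok = [r for r in roots if h_range[0] <= h_from_length(r, u0, v0, tau, l) <= h_range[1]]
    return ok[0] if ok else None

if __name__ == "__main__":
    f0, phi, theta, h0 = 1000.0, math.radians(12), math.radians(8), 7.5
    v0 = -f0 * math.tan(phi)
    u0 = -f0 * math.tan(theta) / math.cos(phi)
    v = lambda y: f0 * (h0 * math.cos(phi) - y * math.sin(phi)) / (y * math.cos(phi) + h0 * math.sin(phi))
    vf, vb = v(20.0), v(20.0 + 6.0 * math.cos(theta))
    tau = (vf - v0) * (vb - v0) / (vf - vb)
    delta = f0 * 3.75 * math.sin(phi) / (h0 * math.cos(theta))  # synthesized width intercept
    print(solve_f(u0, v0, tau, delta, 6.0, 3.75))  # ~ 1000
```

Working on the unsquared equality means the second positive root of the squared quartic never appears as a sign change, so the plausible-height filter is rarely needed in practice.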
(2-2) case where the height h of the camera is known
The physical identification parameter at this time may be a road broken line length l or a road width w, and the rest may be used as a redundant condition to optimize the calibration parameter.
If the physical identification parameter is the road width w, squaring both sides of formula (7) and substituting the conditions of formulas (8) and (9) yields a quartic equation in the unknown parameter f:
Figure BDA0001893236250000161
wherein, for computational convenience, the intermediate variable k_w = w·v_0/(h·δ) is introduced.
Since this is a fourth-order equation in f, the constraint f > 0 may still leave two positive candidate roots, and f must be further constrained to pick out the correct solution; because the camera height is known it can no longer serve as the judgment basis, so l is selected instead. After solving the two candidate positive roots f and the φ and θ corresponding to each, the calibration information f, φ and h is used, according to formulas (11), (12), (13) and (14) and the distance formula between two points,
Figure BDA0001893236250000162
to calculate the spatial distance of l; the root f whose l has the smaller error relative to the true physical distance is selected. Once f is uniquely determined, φ and θ can be solved from formulas (4) and (5), and since h is known the calibration process is complete.
If the physical identification parameter is the road dashed-line length l, squaring both sides of formula (6) and substituting the condition of formula (8) yields a quartic equation in the unknown parameter f:
Figure BDA0001893236250000163
wherein, for computational convenience, the intermediate variable k_L = τl/h is introduced.
Since this is a fourth-order equation in f, the constraint f > 0 may still leave two positive candidate roots, and f must be further constrained to pick out the correct solution; because the camera height is known it can no longer serve as the judgment basis, so l is selected instead. After solving the two candidate positive roots f and the φ and θ corresponding to each, the calibration information f, φ and h is used, according to formulas (11), (12), (13) and (14) and the distance formula between two points,
Figure BDA0001893236250000164
to calculate the spatial distance of l; the root f whose l has the smaller error relative to the true physical distance is selected. Once f is uniquely determined, φ and θ can be solved from formulas (4) and (5), and since h is known the calibration process is complete.
Step 3, optimizing the calibration result
On the basis of the minimum calibration condition and the solution of the calibration result, if redundant information (redundant length and redundant width) exists in the road scene, the calibration result can be optimized.
Analysis shows that calibration in the road scene is equivalent to estimating the parameters X = (f, φ, h). Redundant information exists in most scenes, for example multiple road dashed lines, multiple dashed-line intervals, and road widths at different positions. Therefore an open optimization solving method is adopted: under the existing minimum calibration condition, the available redundant geometric information is used to further improve the calibration accuracy:
Figure BDA0001893236250000171
the above formula is recorded as a cost function; let the number of redundant length items be N_1 and the number of redundant width items be N_2.
Figure BDA0001893236250000172
represents, under each set of redundant conditions, the normalized error between the geometric-physical quantity expressed through the parameters X to be estimated and its actual value,
Figure BDA0001893236250000173
denotes finding the value of the parameters X to be estimated that minimizes expression (10). The geometric length information may be a road dashed line or a road dashed-line interval, and the geometric width information may be the road width. The initial value X_0 = (f_0, φ_0, h_0) is the initial calibration result obtained for the parameters X to be estimated.
From the figure, the image coordinates of the two end points of a piece of redundant length information are (u_0, v_0) and (u_1, v_1), corresponding to world coordinates (x_0, y_0, 0) and (x_1, y_1, 0).
The physical coordinates of the redundant length information represented by the parameter f, Φ, h to be estimated are as follows:
Figure BDA0001893236250000174
Figure BDA0001893236250000175
Figure BDA0001893236250000176
Figure BDA0001893236250000177
if redundant length information is selected to form the normalized error, the spatial distance calculated from it is
Figure BDA0001893236250000178
while the actual spatial distance is l_truth (the layout specification of road traffic markings is known: a road dashed line is taken as 6 meters and a road dashed-line interval as 9 meters). Differencing the actual and calculated spatial distances gives one term e²(l) = l_truth − l_cal.
If redundant width information is selected to form the normalized error, the spatial distance calculated from it is
Figure BDA0001893236250000179
while the actual spatial distance is w_truth (the layout specification of road traffic markings is known; redundant width information is taken as a single-lane width of 3.75 meters). Differencing the actual and calculated spatial distances gives one term e²(w) = w_truth − w_cal. Summing over all groups of redundant length and width information constructs the complete formula (10). The parameter value X_N = (f_N, φ_N, h_N) that minimizes formula (10) is the optimization result of the parameters.
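The refinement of X = (f, φ, h) against redundant dashes and widths can be sketched as a small derivative-free least-squares search. The sketch below keeps θ fixed at its initially calibrated value, synthesizes noise-free measurements from the pinhole convention used earlier, and minimizes the sum of squared normalized errors with a simple coordinate-descent loop; the forward models l_cal and w_cal, the optimizer and all names are this sketch's assumptions, not the patent's exact procedure:

```python
import math

def measure_dash(yf, l, f, phi, theta, h):
    # image ordinates of the two end points of a dash of length l at world y = yf
    def v_of(y):
        return f * (h * math.cos(phi) - y * math.sin(phi)) / (y * math.cos(phi) + h * math.sin(phi))
    return v_of(yf), v_of(yf + l * math.cos(theta))

def l_cal(X, theta, vb, vf):
    # dash length predicted by the candidate parameters X = (f, phi, h)
    f, phi, h = X
    v0 = -f * math.tan(phi)
    tau = (vf - v0) * (vb - v0) / (vf - vb)
    return h * (f * f + v0 * v0) / (f * tau * math.cos(theta))

def w_cal(X, theta, delta):
    # road width predicted from the v = 0 intercept delta
    f, phi, h = X
    return h * delta * math.cos(theta) / (f * math.sin(phi))

def cost(X, theta, dashes, l_truth, delta, w_truth):
    c = sum(((l_truth - l_cal(X, theta, vb, vf)) / l_truth) ** 2 for vf, vb in dashes)
    return c + ((w_truth - w_cal(X, theta, delta)) / w_truth) ** 2

def refine(X0, fun, steps=(50.0, 0.02, 0.5), iters=300):
    # coordinate descent with shrinking steps; accepts only improvements
    X, s = list(X0), list(steps)
    best = fun(X)
    for _ in range(iters):
        improved = False
        for i in range(3):
            for move in (s[i], -s[i]):
                Y = list(X)
                Y[i] += move
                c = fun(Y)
                if c < best:
                    X, best, improved = Y, c, True
        if not improved:
            s = [si * 0.5 for si in s]
    return X, best

if __name__ == "__main__":
    f_t, phi_t, theta, h_t = 1000.0, math.radians(12), math.radians(8), 7.5
    dashes = [measure_dash(y, 6.0, f_t, phi_t, theta, h_t) for y in (15.0, 30.0, 45.0)]
    delta = f_t * 3.75 * math.sin(phi_t) / (h_t * math.cos(theta))
    fun = lambda X: cost(X, theta, dashes, 6.0, delta, 3.75)
    X0 = (1100.0, math.radians(13), 8.2)
    XN, cN = refine(X0, fun)
    print(fun(X0), cN)
```

With noise-free synthetic data the cost is exactly zero at the true parameters, so the search direction is well defined; on real detections the residual floor reflects measurement noise instead.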
To verify the effectiveness of the proposed method, one embodiment of the present invention uses the actual road traffic scene image shown in fig. 5, and identifies the minimum calibration condition in the actual traffic scene to calibrate the camera. As shown in fig. 6, a set of parallel lines along the traffic flow direction is identified, and vanishing point coordinates are obtained through diamond space transformation; the dotted line with fixed intervals in the middle of the road is used as a condition for solving the length of the road; and calculating the length of the section of the intersection line of the central horizontal line of the road image and the edges of the two sides of the road as a condition for solving the width of the road. The physical length of the dashed line and the physical width of the road are both known by reference to the relevant data.
The experimental result shows that the minimum calibration condition identified by the method can complete parameter calibration, and it can be seen from fig. 6 that the road scene contains a large amount of redundant information, so that the calibration parameters can be further optimized. The optimization results are shown in table 1. The experimental result shows that the method can completely meet the precision requirement of calibration in a road traffic scene, and the effectiveness of the method provided by the invention is proved to a certain extent by the experiment.
TABLE 1 comparison of calibration results of single vanishing point and multiple identifiers fusion calibration method and traditional calibration method in road scene
Figure BDA0001893236250000181

Claims (5)

1. A camera calibration optimization method under a road cloud monitoring platform is characterized by comprising the following steps:
step 1, establishing a camera model and a coordinate system
Step 1.1, establishing a world coordinate system, a camera coordinate system, an image coordinate system and a camera model, wherein the camera model is simplified into a pinhole model;
step 1.2, converting the world coordinate of any point on the road surface in the image shot by the camera into an image coordinate system to obtain the projection relation between the point of the world coordinate system and the point of the image coordinate system;
step 2, selecting the minimum calibration condition and calculating the calibration result
Step 2.1, converting the intersection points of a plurality of parallel straight lines in the image coordinate system into the intersection points of the fold line groups in the diamond space;
2.2, transforming the intersection point of the broken line group in the diamond space into an image coordinate system to obtain a vanishing point coordinate of the image coordinate system;
step 2.3, determining a minimum calibration condition according to the number of the vanishing point coordinates obtained in the step 2.2, and calculating a calibration result;
step 3, optimizing the calibration result
On the basis of the minimum calibration condition and the solved calibration result, judging whether redundant information exists in a road scene or not, and optimizing the calibration result by using the redundant information;
determining a minimum calibration condition according to the number of the vanishing point coordinates obtained in the step 2.2, and calculating a calibration result, wherein the minimum calibration condition comprises the following steps:
in a road traffic scene, the physical geometric information of a road comprises lane edge lines, road dotted lines and road width, the physical parameters of the physical geometric information all have national standards, and the free combination of the information and the identified vanishing points form a minimum calibration condition; the specific steps for selecting the minimum calibration condition are as follows:
according to the number of vanishing points, the minimum calibration condition can be divided into the following two categories:
(1) Case of two mutually perpendicular vanishing points
Introducing the vanishing point (u_0, v_0) along the road-surface extension direction and the vanishing point (u_1, v_1) formed in the perpendicular direction, the angle between the y-axis and the road-surface extension direction being θ, the coordinate of the point at infinity along the road-surface extension direction in the world coordinate system is x_0 = [-tanθ 1 0 0]^T and that of the point at infinity in the perpendicular direction is x_1 = [1 tanθ 0 0]^T; by the vanishing-point principle, (u_0, v_0) and (u_1, v_1) are the projections of x_0 and x_1 in image space; substituting these coordinates into formulas (1) and (2) simplifies to the following:
Figure FDA0003822024800000021
Figure FDA0003822024800000022
Figure FDA0003822024800000023
let v_1 = v_0; transforming the above formulas yields the expressions for f, φ and θ:
Figure FDA0003822024800000024
Figure FDA0003822024800000025
Figure FDA0003822024800000026
combining formulas (3), (4) and (5), f, φ and θ are obtained from the coordinates of the two mutually perpendicular vanishing points; since h is known, the calibration process is complete;
(1-2) case where the height h of the camera is unknown
Setting the physical length of the road dashed-line marking as l, and the physical ordinates and pixel ordinates of the dashed line's two end points as y_b, y_f and v_b, v_f respectively; the physical width of the road is w, and the pixel length of the intercept of the road width on the abscissa axis of the image coordinate system is δ;
the representation of the physical coordinate y can be inversely calculated from equation (2):
Figure FDA0003822024800000027
the physical coordinate y is independent of the corresponding pixel abscissa u, so for a dashed line of length l at an arbitrary position of the road and parallel to the road direction, the equality y_b = y_f + l·cosθ can be established; expressing y_b and y_f in both ways and combining the resulting equations, h is solved:
Figure FDA0003822024800000028
Figure FDA0003822024800000029
the condition in the formula (4)
Figure FDA00038220248000000210
Substituting, an expression for h can be derived:
Figure FDA0003822024800000031
in which, for computational convenience, the intermediate variable τ = (v_f − v_0)(v_b − v_0)/(v_f − v_b) is introduced;
the camera height h can thus be represented indirectly through the derived dashed-line marking length l; combining formulas (3), (4) and (5) with the coordinates of the two mutually perpendicular vanishing points gives f, φ and θ, so all unknown parameters f, φ, θ and h are solved and the calibration process is complete;
(2) Case of a single vanishing point
(2-1) case where the height h of the camera is unknown
The coordinate of the vanishing point along the road-surface extension direction is (u_0, v_0); introducing the road width w and the dashed-line length l, the width w can likewise establish an equality relation with the height h; assuming the intercept of the road between its boundaries along the x-axis direction of the world coordinate system is Δx and the corresponding intercept in the image is Δu, substituting Δx and Δu into formula (1) simplifies to:
Figure FDA0003822024800000032
solving h from the above formula inversely:
Figure FDA0003822024800000033
for ease of expression, consider the resulting intercept Δ u = δ in the v =0 special case;
y|_{v=0} = h·cotφ
meanwhile, Δx and the road width w satisfy the equality Δx = w·secθ; substituting y|_{v=0} = h·cotφ into the expression for h obtained by the inverse solution yields the equality relation between the road width w and the height h:
Figure FDA0003822024800000034
combining the formulas (6) and (7):
Figure FDA0003822024800000035
the cos theta, sin phi can be solved by the formulas (4) and (5):
Figure FDA0003822024800000036
Figure FDA0003822024800000037
since the known parameters are the vanishing point (u_0, v_0), l and w together with their corresponding image projection values, substituting formulas (8) and (9) into the simultaneous equations (6) and (7) yields a quartic equation in the unknown parameter f:
Figure FDA0003822024800000041
wherein, for computational convenience, the intermediate variable k_V = δτl/(w·v_0) is introduced;
because this is a quartic equation in f, the constraint f > 0 may still leave two positive candidate roots that cannot be eliminated, and f must be further constrained to pick out the correct solution; in a practical application scene the typical camera height h serves as the judgment basis: the two candidate positive roots f, together with the φ and θ corresponding to each, are substituted into formula (6) or (7), and the fact that h in a real scene lies within a certain range determines the unique root; once f is uniquely determined, φ and θ can be solved from formulas (4) and (5) and h from formulas (6) and (7), so all unknown parameters are solved, completing the calibration process;
(2-2) case where the height h of the camera is known
The physical identification parameter at this time can be the length l of the road dotted line or the width w of the road, and the rest can be used as a redundancy condition to optimize the calibration parameter;
if the physical identification parameter is the road width w, squaring both sides of formula (7) and substituting the conditions of formulas (8) and (9) yields a quartic equation in the unknown parameter f:
Figure FDA0003822024800000042
wherein, for computational convenience, the intermediate variable k_W = w·v_0/(h·δ) is introduced;
because this is a quartic equation in f, the constraint f > 0 may still leave two positive candidate roots, and f must be further constrained to pick out the correct solution; since the camera height is known it can no longer serve as the judgment basis, l is selected instead; after solving the two candidate positive roots f and the φ and θ corresponding to each, the calibration information f, φ and h is used, according to formulas (11), (12), (13) and (14) and the distance formula between two points,
Figure FDA0003822024800000043
Calculating the space distance of l, and selecting a root f corresponding to l with a smaller error compared with the real physical distance; when f is uniquely determined, phi and theta can be solved according to the formulas (4) and (5), and the calibration process is finished because h is known;
if the physical identification parameter is the road dashed-line length l, squaring both sides of formula (6) and substituting the condition of formula (8) yields a quartic equation in the unknown parameter f:
Figure FDA0003822024800000044
wherein, for computational convenience, the intermediate variable k_L = τl/h is introduced;
because this is a quartic equation in f, the constraint f > 0 may still leave two positive candidate roots, and f must be further constrained to pick out the correct solution; since the camera height is known it can no longer serve as the judgment basis, l is selected instead; after solving the two candidate positive roots f and the φ and θ corresponding to each, the calibration information f, φ and h is used, according to formulas (11), (12), (13) and (14) and the distance formula between two points,
Figure FDA0003822024800000051
Calculating the spatial distance of l, and selecting a root f corresponding to l with smaller error compared with the real physical distance; when f is uniquely determined, phi and theta can be solved according to the expressions (4) and (5), and the calibration process is completed because h is known.
2. The method for calibrating and optimizing the camera under the road cloud monitoring platform according to claim 1, wherein the step of converting the world coordinates of any point on the road surface in the image shot by the camera into an image coordinate system to obtain the projection relationship between the points of the world coordinate system and the points of the image coordinate system comprises the following steps:
coordinates in the world coordinate system: x = [x y z 1]^T; coordinates in the image coordinate system: p = [αu αv α]^T, α ≠ 0; wherein x, y and z are the values of the world coordinate on the x, y and z axes, u and v are the values of the corresponding image coordinate on the u and v axes, and α is a scale component of the image coordinate;
the projection equation from the world coordinate system to the image coordinate system is:
p=KRTx
wherein K, R and T respectively represent an internal reference matrix, a rotation matrix and a translation matrix; substituting K, R and T into a projection equation to obtain an expanded projection model; assuming that the world coordinate of any point on the road surface is (x, y, 0), the projection relationship between the point of the world coordinate system and the point of the image coordinate system is:
Figure FDA0003822024800000052
Figure FDA0003822024800000053
Figure FDA0003822024800000054
Figure FDA0003822024800000055
3. the method for calibrating and optimizing the camera under the cloud monitoring platform according to claim 1, wherein the step of converting the intersection point of a plurality of parallel straight lines in the image coordinate system into the intersection point of a fold line group in the diamond space comprises the following steps:
let the equation of a straight line in the image coordinate system be: ax + by + c =0, which is mapped in diamond space as a set of polylines, the mapping relationship is as follows:
Figure FDA0003822024800000061
wherein a, b and c are the three constant parameters of the general line equation, sgn is the sign function, and subscript o denotes the image coordinate system.
4. The method for calibrating and optimizing the camera under the cloud monitoring platform for the road according to claim 1, wherein the step of transforming the intersection point of the polygonal line group in the diamond space into the image coordinate system to obtain the vanishing point coordinate of the image coordinate system comprises the following steps:
transforming the infinite image domain into a finite diamond domain by the diamond-space method to establish the mapping relation between image space and diamond space, wherein D denotes the length of the y half-axis of the diamond space and d the length of its x half-axis; a point [x, y, w]_d in the diamond space and a point [x, y, w]_o in image space are related by the transformation formulas:
[x, y, w]_o → [-dDw, -dx, sgn(xy)x + y + sgn(y)dw]_d
[x, y, w]_d → [Dy, sgn(x)dx + sgn(y)Dy - dDw, x]_o
wherein w is the third (homogeneous) component of a coordinate, the subscript o denotes the image coordinate system, and the subscript d denotes the diamond space; transforming the intersection points of the polyline groups in the diamond space into the image space according to the above formulas yields accurate vanishing point coordinates.
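The two point mappings above can be transcribed directly into code. The sketch below implements the formulas exactly as written; as a sanity check, for a point with all-positive components the round trip image → diamond → image returns the original point scaled by -dD, i.e. the same projective point:

```python
import numpy as np

def image_to_diamond(p, d, D):
    """Map a homogeneous image-space point [x, y, w]_o into diamond space,
    transcribing the claim's formula verbatim."""
    x, y, w = p
    return np.array([-d * D * w,
                     -d * x,
                     np.sign(x * y) * x + y + np.sign(y) * d * w])

def diamond_to_image(p, d, D):
    """Map a diamond-space point [x, y, w]_d back into image space."""
    x, y, w = p
    return np.array([D * y,
                     np.sign(x) * d * x + np.sign(y) * D * y - d * D * w,
                     x])
```

The mapping is piecewise by sign of the components (hence the sgn terms), which is what bends straight image-space lines into polylines in the bounded diamond domain.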
5. The method for optimizing calibration of a camera under a road cloud monitoring platform according to claim 1, wherein optimizing the calibration result by using the redundant information comprises:
[Cost function, expression (10): rendered only as an image in the original document.]
The above formula is taken as the cost function; let the number of groups of redundant length information be N_1 and the number of groups of redundant width information be N_2.
[Error term: rendered only as an image in the original document.]
This term denotes the normalized error, under each group of redundant conditions, between the geometric-physical information represented by the parameter X to be estimated and its actual value.
[Minimization operator: rendered only as an image in the original document.]
This operator denotes finding the value of the parameter X to be estimated that minimizes expression (10); the initial value X_0 = (f_0, φ_0, h_0) is the initial calibration result obtained by the preceding calibration with the parameter X to be estimated.
Let the image coordinates of the two end points of a piece of redundant length information be (u_0, v_0) and (u_1, v_1), corresponding to the world coordinates (x_0, y_0, 0) and (x_1, y_1, 0);
The physical coordinates of the redundant length information, expressed in terms of the parameters f, φ, and h to be estimated, are as follows:
[Four coordinate expressions in terms of f, φ, and h: rendered only as images in the original document.]
If redundant length information is selected to obtain the normalized error, the spatial distance l_cal calculated from the redundant length information is
[Expression for l_cal: rendered only as an image in the original document.]
Since the actual spatial distance l_truth is known, taking the difference between the calculated and actual distances yields a set of errors e_2(l) = l_truth - l_cal.
If redundant width information is selected to obtain the normalized error, the spatial distance w_cal calculated from the redundant width information is
[Expression for w_cal: rendered only as an image in the original document.]
Since the actual spatial distance w_truth is known, taking the difference between the calculated and actual distances yields a set of errors e_2(w) = w_truth - w_cal.
Summing over all groups of redundant length and width information constructs the complete expression (10); solving for the parameter X_N = (f_N, φ_N, h_N) that minimizes expression (10) gives the optimized parameter result.
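The optimization step can be sketched in numpy. Since the claim's equations survive only as images, this sketch assumes that expression (10) is the sum of squared normalized length and width errors, that l_cal and w_cal come from back-projecting image endpoints through the road-plane homography built from (f, φ, h), and that a simple coordinate-descent search stands in for whatever solver is actually used; all numeric parameters are illustrative:

```python
import numpy as np

def road_homography(f, phi, h, cx=960.0, cy=540.0):
    """Road-plane-to-image homography for a camera with focal length f (px),
    tilt phi (rad, about the x-axis), and height h (m). Illustrative model."""
    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    c, s = np.cos(phi), np.sin(phi)
    r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, c, s])
    T = np.array([0.0, 0.0, h])
    return K @ np.column_stack((r1, r2, T))

def back_project(H, uv):
    """Map an image point back to road-plane coordinates (x, y)."""
    p = np.linalg.solve(H, np.array([uv[0], uv[1], 1.0]))
    return p[:2] / p[2]

def cost(X, length_obs, width_obs):
    """Assumed form of expression (10): sum of squared normalized errors.
    Each observation is (image_pt0, image_pt1, true_distance)."""
    H = road_homography(*X)
    err = 0.0
    for p0, p1, truth in list(length_obs) + list(width_obs):
        cal = np.linalg.norm(back_project(H, p1) - back_project(H, p0))
        err += ((truth - cal) / truth) ** 2
    return err

def optimize(X0, length_obs, width_obs, iters=300):
    """Naive coordinate descent from the initial calibration X0 = (f0, phi0, h0)."""
    X = np.array(X0, dtype=float)
    steps = np.array([20.0, 0.01, 0.2])   # per-parameter step sizes
    best = cost(X, length_obs, width_obs)
    for _ in range(iters):
        improved = False
        for i in range(3):
            for delta in (steps[i], -steps[i]):
                Xt = X.copy()
                Xt[i] += delta
                c = cost(Xt, length_obs, width_obs)
                if c < best:
                    X, best, improved = Xt, c, True
        if not improved:
            steps *= 0.5                  # refine the search when stuck
    return X, best
```

The redundant length information (e.g. lane-marking segments of known length) constrains the scale along the road, while the width information (e.g. known lane width) constrains the transverse scale, which is why both groups enter the cost.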
CN201811480427.0A 2018-12-05 2018-12-05 Camera calibration optimization method under road cloud monitoring platform Active CN109685855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811480427.0A CN109685855B (en) 2018-12-05 2018-12-05 Camera calibration optimization method under road cloud monitoring platform


Publications (2)

Publication Number Publication Date
CN109685855A CN109685855A (en) 2019-04-26
CN109685855B true CN109685855B (en) 2022-10-14

Family

ID=66187129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811480427.0A Active CN109685855B (en) 2018-12-05 2018-12-05 Camera calibration optimization method under road cloud monitoring platform

Country Status (1)

Country Link
CN (1) CN109685855B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110146869B (en) * 2019-05-21 2021-08-10 北京百度网讯科技有限公司 Method and device for determining coordinate system conversion parameters, electronic equipment and storage medium
CN110412603B (en) * 2019-07-22 2023-07-04 昆山伟宇慧创智能科技有限公司 Calibration parameter self-adaptive updating method for lane departure calculation
CN110718068B (en) * 2019-09-27 2020-12-08 华中科技大学 Road monitoring camera installation angle estimation method
CN110930365B (en) * 2019-10-30 2023-11-03 长安大学 Orthogonal vanishing point detection method under traffic scene
CN111862231B (en) * 2020-06-15 2024-04-12 南方科技大学 Camera calibration method, lane departure early warning method and system
CN115086541B (en) * 2021-03-15 2023-12-22 北京字跳网络技术有限公司 Shooting position determining method, device, equipment and medium
CN112950725A (en) * 2021-03-22 2021-06-11 深圳市城市交通规划设计研究中心股份有限公司 Monitoring camera parameter calibration method and device
CN113160325B (en) * 2021-04-01 2022-10-11 长春博立电子科技有限公司 Multi-camera high-precision automatic calibration method based on evolutionary algorithm
CN115237164B (en) * 2022-08-12 2024-01-23 南京理工大学 Constraint following-based two-degree-of-freedom cradle head stability control method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1564581A (en) * 2004-04-15 2005-01-12 上海交通大学 Calibrating method of pick-up device under condition of traffic monitering
JP2010025569A (en) * 2008-07-15 2010-02-04 Toa Corp Camera parameter identification apparatus, method, and program
CN107492123A (en) * 2017-07-07 2017-12-19 长安大学 A kind of road monitoring camera self-calibrating method using information of road surface


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Camera calibration method based on two parallel lines and three points on the lines; He Kexue et al.; 《光学技术》 (Optical Technique); 2016-11-15 (No. 06); full text *


Similar Documents

Publication Publication Date Title
CN109685855B (en) Camera calibration optimization method under road cloud monitoring platform
CN110148169B (en) Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera
CN104200086B (en) Wide-baseline visible light camera pose estimation method
CN111311689B (en) Method and system for calibrating relative external parameters of laser radar and camera
CN111473739B (en) Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN112894832A (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN107167788A (en) Obtain laser radar calibration parameter, the method and system of laser radar calibration
CN105445721A (en) Combined calibrating method of laser radar and camera based on V-shaped calibrating object having characteristic protrusion
CN109829853A (en) A kind of unmanned plane image split-joint method
CN110084785B (en) Power transmission line vertical arc measuring method and system based on aerial images
CN112819903A (en) Camera and laser radar combined calibration method based on L-shaped calibration plate
CN109961485A (en) A method of target positioning is carried out based on monocular vision
Gerke Using horizontal and vertical building structure to constrain indirect sensor orientation
CN108362205B (en) Space distance measuring method based on fringe projection
CN111932565B (en) Multi-target recognition tracking calculation method
CN110930365B (en) Orthogonal vanishing point detection method under traffic scene
CN111241988A (en) Method for detecting and identifying moving target in large scene by combining positioning information
CN113050074B (en) Camera and laser radar calibration system and calibration method in unmanned environment perception
Su et al. A novel camera calibration method based on multilevel-edge-fitting ellipse-shaped analytical model
CN113902809A (en) Method for jointly calibrating infrared camera and laser radar
CN112017238A (en) Method and device for determining spatial position information of linear object
CN105809706A (en) Global calibration method of distributed multi-camera system
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN104318566B (en) Can return to the new multi-view images plumb line path matching method of multiple height values

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240329

Address after: 457000, B10-201, Big Data Smart Ecological Park, Intersection of Weidu Avenue, Jindi Road (No. 433), Puyang City, Henan Province

Patentee after: Xulong Technology (Puyang) Co.,Ltd.

Country or region after: China

Address before: 710064 No. 126 central section of South Ring Road, Yanta District, Xi'an, Shaanxi

Patentee before: CHANG'AN University

Country or region before: China
