CN112419416B - Method and system for estimating camera position based on small amount of control point information - Google Patents

Method and system for estimating camera position based on small amount of control point information

Info

Publication number
CN112419416B
CN112419416B (application CN202011454902.4A)
Authority
CN
China
Prior art keywords
camera
control point
coordinate system
control
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011454902.4A
Other languages
Chinese (zh)
Other versions
CN112419416A (en)
Inventor
张钧
周文杰
刘小茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202011454902.4A priority Critical patent/CN112419416B/en
Publication of CN112419416A publication Critical patent/CN112419416A/en
Application granted granted Critical
Publication of CN112419416B publication Critical patent/CN112419416B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Abstract

The invention discloses a method and a system for estimating a camera position based on a small amount of control point information, belonging to the technical field of machine vision. The classical PnP method requires information from at least three control points to solve for the camera position; the present invention can still solve for the camera coordinates when the number of control points is less than three, and can therefore be applied in situations where the camera position must be determined but fewer than three control points are available. Moreover, when the classical PnP method uses only three control points, multiple solutions generally exist, and a unique solution is obtained only when the triangle formed by the control points is isosceles and the camera lies in certain specific regions. The present invention has no such specific requirements relative to the PnP method and can find a unique solution with various amounts of control point information.

Description

Method and system for estimating camera position based on small amount of control point information
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a method and a system for estimating a camera position based on a small amount of control point information.
Background
In some scenarios, such as unmanned aerial vehicles, the vehicle usually determines its position by GPS positioning; however, when the GPS signal is weak or lost, GPS positioning is unavailable, and the position of the unmanned aerial vehicle can instead be determined by estimating the position of its camera.
When an image is captured with a camera, the position of the camera at the moment of capture can be estimated if the world coordinates of several three-dimensional control points and the pixel coordinates of the points onto which they project in the image are known. The classical PnP algorithm estimates the camera position from the three-dimensional spatial coordinates of n points in space and the pixel coordinates of their projections in the image. However, the PnP algorithm requires n ≥ 3; that is, the camera position can be estimated with the PnP algorithm only when the coordinates of at least 3 three-dimensional control points and of their projected pixel points in the image are known.
Because the classical PnP method requires at least three control points to solve for the position of the camera, implementing it usually requires that three or more control points be set manually or already exist in the scene captured by the camera. In practical applications this condition often cannot be met; for example, when estimating the position of a camera carried on an unmanned aerial vehicle, it often cannot be guaranteed that three or more control points appear in the scene captured by the camera during flight. The strict solving conditions of the PnP method therefore limit its application in situations where the camera position must be determined.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the present invention provides a method and a system for estimating the position of a camera based on a small amount of control point information, and aims to solve the technical problem that the classical PnP method requires information from at least three control points to solve for the camera position, that is, its solving conditions are demanding.
To achieve the above object, according to an aspect of the present invention, there is provided a method of estimating a camera position based on a small amount of control point information, including:
S1, shooting a scene containing control points with the camera to obtain the homogeneous pixel coordinate I_i of the pixel point onto which the i-th control point P_i projects in the image;
S2, in the world coordinate system, calculating the direction vector N_i from the i-th control point P_i to its projected pixel point in the image:
N_i = -R^T K^(-1) I_i
where R is the rotation matrix from the world coordinate system to the camera coordinate system, R^T is the transpose of R, K is the intrinsic parameter matrix of the camera, and K^(-1) is the inverse of K;
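As a minimal sketch (not taken from the patent), the direction vector of step S2 can be computed with NumPy; the intrinsic matrix K, the rotation R, and the pixel coordinate I_i below are hypothetical values chosen only for illustration:

```python
import numpy as np

def ray_direction(R, K, I_i):
    # N_i = -R^T K^{-1} I_i: direction, in the world frame, of the ray
    # through the control point and the camera centre (step S2).
    return -R.T @ np.linalg.inv(K) @ I_i

# Hypothetical intrinsics and pose, for illustration only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # identity rotation for the sketch
I_i = np.array([320.0, 240.0, 1.0])    # homogeneous pixel coordinate
N_i = ray_direction(R, K, I_i)
```

For the principal-point pixel chosen here, K^(-1) I_i is (0, 0, 1), so the resulting direction is (0, 0, -1).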
S3, when there is only one control point, calculating the three-dimensional coordinate C of the camera in the world coordinate system using the following formula:
C = P_i + z_ci N_i
where P_i is the three-dimensional coordinate of the control point in the world coordinate system and z_ci > 0 is the third-dimensional coordinate of the control point in the camera coordinate system;
when the number of control points is greater than 1, taking as the camera position coordinate the spatial point that minimizes the sum of squared distances to the n rays starting from the control points P_i with direction vectors N_i; i = 1, 2, ..., n.
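Method one of step S3 reduces to a single vector operation; the control point, ray direction, and depth below are hypothetical numbers used only for illustration:

```python
import numpy as np

def camera_position_one_point(P_i, N_i, z_ci):
    # C = P_i + z_ci * N_i (method one, single control point).
    assert z_ci > 0, "the control point must have positive depth"
    return P_i + z_ci * N_i

# Hypothetical data: control point on the ground, ray pointing straight up.
P_i = np.array([10.0, 5.0, 0.0])
N_i = np.array([0.0, 0.0, 1.0])
C = camera_position_one_point(P_i, N_i, z_ci=30.0)
```

With these numbers the camera is placed 30 units along the ray, at (10, 5, 30).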
Further, when the number of control points is greater than 1, solving the camera position coordinate specifically includes:
calculating the unit direction vector J_i from the i-th control point P_i to its projected pixel point in the image:
J_i = N_i / ||N_i||
and calculating the three-dimensional coordinate C of the camera in the world coordinate system using the following formula:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^(-1) Σ_{i=1}^{n} (E - J_i J_i^T) P_i
where n is the number of control points, E is the third-order identity matrix, P_i is the three-dimensional spatial coordinate of the i-th control point, and ||a|| denotes the 2-norm of a vector a.
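A sketch of this least-squares formula in NumPy (the function name and the test rays are ours, not the patent's):

```python
import numpy as np

def camera_position_least_squares(Ps, Ns):
    # C = [sum_i (E - J_i J_i^T)]^{-1} sum_i (E - J_i J_i^T) P_i:
    # the point minimizing the sum of squared distances to the n rays.
    E = np.eye(3)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for P_i, N_i in zip(Ps, Ns):
        J_i = N_i / np.linalg.norm(N_i)   # unit direction of ray i
        M = E - np.outer(J_i, J_i)        # projector orthogonal to ray i
        A += M
        b += M @ P_i
    return np.linalg.solve(A, b)          # solves [sum M_i] C = sum M_i P_i

# Three rays constructed to pass exactly through the point (1, 2, 3).
C_true = np.array([1.0, 2.0, 3.0])
Ns = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([1.0, 1.0, 1.0])]
Ps = [C_true - 5.0 * N / np.linalg.norm(N) for N in Ns]
C_est = camera_position_least_squares(Ps, Ns)
```

Because the three synthetic rays intersect at (1, 2, 3), the estimate recovers that point exactly.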
Further, when the number of control points is equal to 2, the midpoint of the common perpendicular segment of the two rays starting from the two control points is used as the camera position coordinate.
Further, the three-dimensional coordinate C of the camera in the world coordinate system is calculated using the following formula:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
where
t_i = (P_j - P_i)^T (J_i - β J_j) / (1 - β^2), β = J_1^T J_2, with (i, j) a permutation of (1, 2),
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
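As a hedged sketch under the notation above, the two-control-point formula can be written out as follows (the function name and test geometry are ours):

```python
import numpy as np

def camera_position_two_points(P1, P2, N1, N2):
    # Midpoint of the common perpendicular segment of the two rays:
    # C = (P1 + P2 + t1*J1 + t2*J2) / 2, with the closed-form t_i.
    J1 = N1 / np.linalg.norm(N1)
    J2 = N2 / np.linalg.norm(N2)
    beta = J1 @ J2                       # cosine of the angle between rays
    denom = 1.0 - beta ** 2              # nonzero when rays are not parallel
    t1 = (P2 - P1) @ (J1 - beta * J2) / denom
    t2 = (P1 - P2) @ (J2 - beta * J1) / denom
    return 0.5 * (P1 + P2 + t1 * J1 + t2 * J2)

# Two rays that intersect exactly at (0, 0, 5): the common perpendicular
# degenerates to a point, so the midpoint is the intersection itself.
C_true = np.array([0.0, 0.0, 5.0])
P1, P2 = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
C_est = camera_position_two_points(P1, P2, C_true - P1, C_true - P2)
```

Only inner products and scalar multiplications appear, which is the computational advantage claimed over the matrix inversion of the n-point formula.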
According to another aspect of the present invention, there is provided a system for estimating a camera position based on a small amount of control point information, including:
the camera is used for shooting the scene where the control points are located;
the control point extraction module is used for extracting the pixel point onto which each control point projects in the image, obtaining the homogeneous pixel coordinate I_i of the pixel point onto which the i-th control point P_i projects;
the direction vector calculation module is used for calculating, in the world coordinate system, the direction vector N_i = -R^T K^(-1) I_i from each control point to its projected pixel point in the image;
the position calculation module calculates the coordinates of the camera using two methods:
when there is only one control point, the three-dimensional coordinate C of the camera in the world coordinate system is calculated using the following formula:
C = P_i + z_ci N_i
where P_i is the three-dimensional coordinate of the control point in the world coordinate system and z_ci > 0 is the third-dimensional coordinate of the control point in the camera coordinate system;
when the number of control points is greater than 1, the spatial point minimizing the sum of squared distances to the n rays starting from the control points P_i with direction vectors N_i is used as the camera position coordinate; i = 1, 2, ..., n.
Further, when the number of control points is greater than 1, solving the camera position coordinate specifically includes:
calculating the unit direction vector J_i from the i-th control point P_i to its projected pixel point in the image:
J_i = N_i / ||N_i||
and calculating the three-dimensional coordinate C of the camera in the world coordinate system using the following formula:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^(-1) Σ_{i=1}^{n} (E - J_i J_i^T) P_i
where n is the number of control points, E is the third-order identity matrix, P_i is the three-dimensional spatial coordinate of the i-th control point, and ||a|| denotes the 2-norm of a vector a.
Further, when the number of control points is equal to 2, the midpoint of the common perpendicular segment of the two rays starting from the two control points is used as the camera position coordinate.
Further, the three-dimensional coordinate C of the camera in the world coordinate system is calculated using the following formula:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
where
t_i = (P_j - P_i)^T (J_i - β J_j) / (1 - β^2), β = J_1^T J_2, with (i, j) a permutation of (1, 2),
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
In general, compared with the prior art, the above technical solutions contemplated by the present invention can achieve the following beneficial effects.
(1) The classical PnP method requires information from at least three control points to solve for the camera position; the present invention can still solve for the camera coordinates when the number of control points is less than three, and can therefore be applied in situations where the camera position must be determined with fewer than three control points.
(2) When the classical PnP method uses information from only three control points to solve for the camera position, multiple solutions may exist (1, 2, 3, or 4 solutions), and a unique solution exists only when the triangle formed by the control points is isosceles and the camera lies in certain specific regions. The present invention has no such specific requirements relative to the PnP method and can find a unique solution with various amounts of control point information.
Drawings
FIG. 1 is a schematic diagram of the control points, the pixel points onto which they project in the image, and the position of the camera, where P_i is the i-th control point, I_i is the pixel point onto which the i-th control point projects in the image, and C is the camera; C is located on the ray starting from control point P_i with direction vector N_i.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, a method of estimating a camera position based on a small amount of control point information, includes:
(1) First, the camera is used to shoot a scene containing the control points, obtaining the homogeneous pixel coordinate I_i of the pixel point onto which control point P_i projects in the image.
(2) In the world coordinate system, the direction vector N_i from the i-th control point to its projected pixel point in the image is calculated:
N_i = -R^T K^(-1) I_i
where R is the rotation matrix from the world coordinate system to the camera coordinate system, R^T is the transpose of R, K is the intrinsic parameter matrix of the camera, and K^(-1) is the inverse of K.
(3) Solving the coordinate position of the camera:
Method one: when there is only one control point, the three-dimensional coordinate C of the camera in the world coordinate system can be calculated using the following formula:
C = P_i + z_ci N_i
where P_i is the three-dimensional coordinate of the control point in the world coordinate system, and z_ci > 0 is the third-dimensional coordinate of the control point in the camera coordinate system.
Method two: when the number of control points is greater than 1:
each control point satisfies the following equation:
C = P_i + z_ci N_i
where z_ci > 0, i = 1, 2, ..., n.
That is, the three-dimensional coordinate C of the camera in the world coordinate system lies on the ray starting from control point P_i with direction vector N_i, whose equation is given above.
When n ≥ 2, C is in the ideal case determined as the unique intersection of the n rays starting from the control points P_i with direction vectors N_i. In practice, because of error perturbations, the n rays may not have a unique intersection. An optimal estimate of C may then be defined as the point at which the sum of squared distances to the n rays is minimized. In the ideal case the n rays intersect at a single point, the sum of squared distances from this intersection to the n rays is minimal, and so this optimal estimate also works in the ideal case.
The unit direction vector of the i-th ray is calculated as:
J_i = N_i / ||N_i||
where ||a|| denotes the 2-norm of a vector a.
The square of the distance from any point P in three-dimensional space to the i-th ray satisfies:
d_i^2(P) = (P - P_i)^T (E - J_i J_i^T) (P - P_i)
where E is the third-order identity matrix.
Then the sum of the squared distances from any point P in space to the n rays satisfies:
S(P) = Σ_{i=1}^{n} (P - P_i)^T (E - J_i J_i^T) (P - P_i).
The point P minimizing S(P) satisfies the condition that the gradient of S vanishes:
Σ_{i=1}^{n} (E - J_i J_i^T) (P - P_i) = 0.
Solving this gives:
P = [Σ_{i=1}^{n} (E - J_i J_i^T)]^(-1) Σ_{i=1}^{n} (E - J_i J_i^T) P_i.
Therefore, the optimal estimate C of the three-dimensional coordinate of the camera in the world coordinate system satisfies:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^(-1) Σ_{i=1}^{n} (E - J_i J_i^T) P_i
where E is the third-order identity matrix and P_i is the three-dimensional spatial coordinate of the i-th control point.
Method three: when the number of control points is equal to 2:
When only the information of two control points is known, method two can be used, but it involves a matrix inversion when calculating the three-dimensional coordinate of the camera, which increases the computational complexity; the following method is therefore recommended for obtaining C.
It can be shown that the sum of the squares of the distances from the midpoint of the common perpendicular segment of the two rays in three-dimensional space to these two rays is minimal.
Given two control points, two rays are obtained as in method two. Let Q_i be the foot of the common perpendicular on the i-th ray (the endpoint of the common perpendicular segment lying on that ray), i = 1, 2.
Then,
(Q_2 - Q_1)^T J_i = 0
Q_i = P_i + t_i J_i, i = 1, 2
where J_i is the unit direction vector of the i-th ray and t_i > 0. Writing β = J_1^T J_2 and solving these two linear equations for t_1 and t_2 gives
t_i = (P_j - P_i)^T (J_i - β J_j) / (1 - β^2)
where (i, j) is an arrangement of (1, 2), that is, (i, j) is equal either to (1, 2) or to (2, 1).
When the unit vectors J_1 and J_2 are not parallel to each other, there is always β^2 < 1, so the denominators above are nonzero. It can be directly verified that the determinant of the third-order square matrix
2E - J_1 J_1^T - J_2 J_2^T
equals 2(1 - β^2); thus, when β^2 < 1, this matrix is invertible and the optimal estimate of method two is well defined for n = 2.
Since Q_i = P_i + t_i J_i and (E - J_i J_i^T) J_i = 0, each foot satisfies (E - J_i J_i^T) Q_i = (E - J_i J_i^T) P_i; moreover, Q_2 - Q_1 is orthogonal to both J_1 and J_2. It follows that the midpoint C = (Q_1 + Q_2)/2 satisfies
(2E - J_1 J_1^T - J_2 J_2^T) C = (E - J_1 J_1^T) P_1 + (E - J_2 J_2^T) P_2
and therefore coincides with the optimal estimate of method two. Substituting Q_1 and Q_2 obtains
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
where t_1 and t_2 are as above, and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
Method three involves only inner products and multiplications of vectors; compared with method two, the computational complexity is reduced.
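A quick numerical check (with random hypothetical data, not the patent's) that method three agrees with method two for n = 2:

```python
import numpy as np

# Random rays for the consistency check; seed fixed for reproducibility.
rng = np.random.default_rng(0)
P1, P2 = rng.normal(size=3), rng.normal(size=3)
J1 = rng.normal(size=3); J1 /= np.linalg.norm(J1)
J2 = rng.normal(size=3); J2 /= np.linalg.norm(J2)

# Method two: solve [sum_i (E - J_i J_i^T)] C = sum_i (E - J_i J_i^T) P_i.
E = np.eye(3)
M1, M2 = E - np.outer(J1, J1), E - np.outer(J2, J2)
C_ls = np.linalg.solve(M1 + M2, M1 @ P1 + M2 @ P2)

# Method three: midpoint of the common perpendicular segment.
beta = J1 @ J2
t1 = (P2 - P1) @ (J1 - beta * J2) / (1 - beta ** 2)
t2 = (P1 - P2) @ (J2 - beta * J1) / (1 - beta ** 2)
C_mid = 0.5 * (P1 + t1 * J1 + P2 + t2 * J2)

assert np.allclose(C_ls, C_mid)   # the two methods return the same point
```

Method three trades the 3x3 linear solve for a handful of dot products, which is the complexity reduction claimed above.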
Simulation experiment:
1 Simulation experiments were performed using Python. The rotation matrix was obtained from the world coordinate system by a 45° rotation around the z-axis, a 180° rotation around the x-axis, and a 30° rotation around the z-axis; the resulting rotation matrix R used in the simulation is as follows:
[matrix given as an image in the original]
The camera intrinsic parameter matrix K used for the simulation is as follows:
[matrix given as an image in the original]
The simulation experiment set 3 control points, with dimensions in meters, whose coordinates are:
[coordinates given as an image in the original]
The true values of the coordinates of the 5 cameras were set in sequence, with dimensions in meters, as:
[coordinates given as an image in the original]
The true value of the homogeneous pixel coordinate I_ij of the pixel point of the i-th control point projected on the imaging surface of the j-th camera can be expressed by z_cij I_ij = K R (P_i - C_j). First, K R (P_i - C_j) is computed from the known conditions set in the experiment; then the three components of the resulting vector are each divided by the third component, which gives the homogeneous pixel coordinate I_ij.
2 In the ideal case, the camera coordinates are calculated by the three methods respectively.
2.1 For one known control point, measurement uses method one. According to the 5 camera true values set for the experiment, 5 groups of simulation experiments were performed; each group used the 3 different control points P_1, P_2, P_3 separately for the calculation, obtaining 3 calculated camera coordinates, and the results of the 15 experiments are shown in Table 1. The calculation formula is:
C = P_i + z_ci N_i
where, in the simulation experiment, z_ci is the height of the camera above the ground, which can be measured by altimetry.
2.2 For two known control points, measurement uses method three. According to the 5 camera true values set for the experiment, 5 groups of simulation experiments were performed; each group used the 3 different control point combinations (P_1, P_2), (P_1, P_3), (P_2, P_3) separately for the calculation, obtaining 3 calculated camera coordinates, and the results of the 15 experiments are shown in Table 2. The calculation formula is:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
2.3 For three known control points, measurement uses method two. According to the 5 camera true values set for the experiment, 5 groups of simulation experiments were performed; each group used the 3 control points P_1, P_2, P_3 for the calculation, obtaining 1 calculated camera coordinate, and the results of the 5 experiments are shown in Table 3. The calculation formula is:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^(-1) Σ_{i=1}^{n} (E - J_i J_i^T) P_i
The error of each calculation result is computed separately (Tables 1, 2, and 3), with the error defined as the 2-norm of the difference between the calculated and true camera coordinates:
error = ||C - C_true||
It can be seen that in the ideal case the errors of the calculation results of all three methods are 0, which indicates that all three methods obtain the correct solution in the ideal case.
3 To simulate a practical application scene, random disturbances uniformly distributed in the range [-2, 2] pixels were added to the first and second coordinates of I_ij, and random disturbances uniformly distributed in the range [-0.1, 0.1] meters were added to z_ci; the camera coordinates were then calculated by the three methods respectively.
3.1 For one known control point, measurement uses method one. According to the 5 camera true values set for the experiment, 5 groups of simulation experiments were performed; each group used the 3 different control points P_1, P_2, P_3 separately for the calculation, obtaining 3 calculated camera coordinates, and the results of the 15 experiments are shown in Table 4. The calculation formula is:
C = P_i + z_ci N_i
where z_ci, the height of the camera above the ground, can be measured by altimetry.
3.2 For two known control points, measurement uses method three. According to the 5 camera true values set for the experiment, 5 groups of simulation experiments were performed; each group used the 3 different control point combinations (P_1, P_2), (P_1, P_3), (P_2, P_3) separately for the calculation, obtaining 3 calculated camera coordinates, and the results of the 15 experiments are shown in Table 5. The calculation formula is:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
3.3 For three known control points, measurement uses method two. According to the 5 camera true values set for the experiment, 5 groups of simulation experiments were performed; each group used the 3 control points P_1, P_2, P_3 for the calculation, obtaining 1 calculated camera coordinate, and the results of the 5 experiments are shown in Table 6. The calculation formula is:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^(-1) Σ_{i=1}^{n} (E - J_i J_i^T) P_i
The error of each calculation result is computed separately (Tables 4, 5, and 6), with the error again defined as the 2-norm of the difference between the calculated and true camera coordinates: error = ||C - C_true||.
In the presence of disturbances, the mean error of the calculation results with one known control point is 0.0596 m with a standard deviation of 0.0310 m; with two known control points the mean error is 0.0806 m with a standard deviation of 0.0798 m; and with three known control points the mean error is 0.0361 m with a standard deviation of 0.0325 m. It can be seen that all three methods have a certain robustness.
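The perturbation experiment can be sketched as below; all numbers (intrinsics, control points, camera position) are illustrative placeholders and are not the values behind the patent's tables:

```python
import numpy as np

# Add uniform noise in [-2, 2] pixels to the pixel coordinates, then
# re-estimate the camera position with method two and measure the error.
rng = np.random.default_rng(1)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
C_true = np.array([1.0, 2.0, -10.0])
Ps = [np.array([0.0, 0.0, 0.0]),
      np.array([10.0, 0.0, 0.0]),
      np.array([0.0, 10.0, 0.0])]

E3 = np.eye(3)
A = np.zeros((3, 3))
b = np.zeros(3)
for P_i in Ps:
    v = K @ R @ (P_i - C_true)                  # z_ci * I_i
    I_i = v / v[2]                              # true pixel coordinate
    I_i[:2] += rng.uniform(-2.0, 2.0, size=2)   # pixel-level disturbance
    N_i = -R.T @ np.linalg.inv(K) @ I_i         # ray direction, world frame
    J_i = N_i / np.linalg.norm(N_i)
    M = E3 - np.outer(J_i, J_i)
    A += M
    b += M @ P_i
C_est = np.linalg.solve(A, b)
err = np.linalg.norm(C_est - C_true)            # stays small under noise
```

With this geometry the recovered position stays within a fraction of a meter of the true one, mirroring the qualitative robustness reported above.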
TABLE 1 Position calculation results when one control point is known (ideal case)
[table data given as an image in the original]
TABLE 2 Position calculation results when two control points are known (ideal case)
[table data given as an image in the original]
TABLE 3 Position calculation results when three control points are known (ideal case)
[table data given as an image in the original]
TABLE 4 Position calculation results when one control point is known (with disturbance)
[table data given as an image in the original]
TABLE 5 Position calculation results when two control points are known (with disturbance)
[table data given as an image in the original]
TABLE 6 Position calculation results when three control points are known (with disturbance)
[table data given as an image in the original]
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.

Claims (8)

1. A method of estimating a camera position based on a small amount of control point information, comprising:
S1, shooting a scene containing control points with a camera to obtain the homogeneous pixel coordinate I_i of the pixel point onto which the i-th control point P_i projects in the image;
S2, in the world coordinate system, calculating the direction vector N_i from the i-th control point P_i to its projected pixel point in the image:
N_i = -R^T K^(-1) I_i
where R is the rotation matrix from the world coordinate system to the camera coordinate system, R^T is the transpose of R, K is the intrinsic parameter matrix of the camera, and K^(-1) is the inverse of K;
S3, when there is only one control point, calculating the three-dimensional coordinate C of the camera in the world coordinate system using the following formula:
C = P_i + z_ci N_i
where P_i is the three-dimensional coordinate of the control point in the world coordinate system and z_ci > 0 is the third-dimensional coordinate of the control point in the camera coordinate system;
when the number of control points is greater than 1, taking as the camera position coordinate the spatial point minimizing the sum of squared distances to the n rays starting from the control points P_i with direction vectors N_i; i = 1, 2, ..., n.
2. The method according to claim 1, wherein, when the number of control points is greater than 1, solving the camera position coordinate specifically includes:
calculating the unit direction vector J_i from the i-th control point P_i to its projected pixel point in the image:
J_i = N_i / ||N_i||
where ||N_i|| denotes the 2-norm of the vector N_i;
and calculating the three-dimensional coordinate C of the camera in the world coordinate system using the following formula:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^(-1) Σ_{i=1}^{n} (E - J_i J_i^T) P_i
where n is the number of control points, E is the third-order identity matrix, and P_i is the three-dimensional spatial coordinate of the i-th control point.
3. The method of claim 2, wherein, when the number of control points is equal to 2, the midpoint of the common perpendicular segment of the two rays starting from the two control points is used as the camera position coordinate.
4. The method of claim 3, wherein the three-dimensional coordinate C of the camera in the world coordinate system is calculated using the following formula:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
where
t_i = (P_j - P_i)^T (J_i - β J_j) / (1 - β^2), β = J_1^T J_2, with (i, j) a permutation of (1, 2),
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
5. A system for estimating camera position based on a small amount of control point information, comprising:
the camera is used for shooting the scene where the control points are located;
the control point extraction module is used for extracting the pixel point onto which each control point projects in the image, obtaining the homogeneous pixel coordinate I_i of the pixel point onto which the i-th control point P_i projects;
the direction vector calculation module is used for calculating, in the world coordinate system, the direction vector N_i = -R^T K^(-1) I_i from each control point to its projected pixel point in the image;
the position calculation module calculates the coordinates of the camera using two methods:
when there is only one control point, the three-dimensional coordinate C of the camera in the world coordinate system is calculated using the following formula:
C = P_i + z_ci N_i
where P_i is the three-dimensional coordinate of the control point in the world coordinate system and z_ci > 0 is the third-dimensional coordinate of the control point in the camera coordinate system;
when the number of control points is greater than 1, the spatial point minimizing the sum of squared distances to the n rays starting from the control points P_i with direction vectors N_i is used as the camera position coordinate; i = 1, 2, ..., n.
6. The system according to claim 5, wherein, when the number of control points is greater than 1, solving the camera position coordinate specifically includes:
calculating the unit direction vector J_i from the i-th control point P_i to its projected pixel point in the image:
J_i = N_i / ||N_i||
where ||N_i|| denotes the 2-norm of the vector N_i;
and calculating the three-dimensional coordinate C of the camera in the world coordinate system using the following formula:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^(-1) Σ_{i=1}^{n} (E - J_i J_i^T) P_i
where n is the number of control points, E is the third-order identity matrix, and P_i is the three-dimensional spatial coordinate of the i-th control point.
7. The system of claim 6, wherein, when the number of control points is equal to 2, the midpoint of the common perpendicular segment of the two rays starting from the two control points is used as the camera position coordinate.
8. The system of claim 7, wherein the three-dimensional coordinate C of the camera in the world coordinate system is calculated using the following formula:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
where
t_i = (P_j - P_i)^T (J_i - β J_j) / (1 - β^2), β = J_1^T J_2, with (i, j) a permutation of (1, 2),
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
CN202011454902.4A 2020-12-10 2020-12-10 Method and system for estimating camera position based on small amount of control point information Active CN112419416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011454902.4A CN112419416B (en) 2020-12-10 2020-12-10 Method and system for estimating camera position based on small amount of control point information


Publications (2)

Publication Number Publication Date
CN112419416A CN112419416A (en) 2021-02-26
CN112419416B true CN112419416B (en) 2022-10-14

Family

ID=74776710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011454902.4A Active CN112419416B (en) 2020-12-10 2020-12-10 Method and system for estimating camera position based on small amount of control point information

Country Status (1)

Country Link
CN (1) CN112419416B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113804165B (en) * 2021-09-30 2023-12-22 北京欧比邻科技有限公司 Unmanned aerial vehicle simulation GPS signal positioning method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013111229A1 (en) * 2012-01-23 2013-08-01 NEC Corporation Camera calibration device, camera calibration method, and camera calibration program
CN108648237A (en) * 2018-03-16 2018-10-12 Institute of Information Engineering, Chinese Academy of Sciences A vision-based spatial positioning method
CN108986172A (en) * 2018-07-25 2018-12-11 Northwestern Polytechnical University A single-view linear camera calibration method for small depth-of-field systems
CN111915685A (en) * 2020-08-17 2020-11-10 Shenyang Aircraft Industry (Group) Co., Ltd. Zoom camera calibration method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11024054B2 (en) * 2019-05-16 2021-06-01 Here Global B.V. Method, apparatus, and system for estimating the quality of camera pose data using ground control points of known quality


Also Published As

Publication number Publication date
CN112419416A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN110567469B (en) Visual positioning method and device, electronic equipment and system
Zhou A new minimal solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences
CN101763632B (en) Method for demarcating camera and device thereof
CN105118021A (en) Feature point-based image registering method and system
CN111080714B (en) Parallel binocular camera calibration method based on three-dimensional reconstruction
CN107067437B (en) Unmanned aerial vehicle positioning system and method based on multi-view geometry and bundle adjustment
CN112967344B (en) Method, device, storage medium and program product for calibrating camera external parameters
CN110310331B (en) Pose estimation method based on combination of linear features and point cloud features
CN108629810B (en) Calibration method and device of binocular camera and terminal
CN105551020A (en) Method and device for detecting dimensions of target object
CN111899282A (en) Pedestrian trajectory tracking method and device based on binocular camera calibration
CN112419416B (en) Method and system for estimating camera position based on small amount of control point information
Miksch et al. Automatic extrinsic camera self-calibration based on homography and epipolar geometry
CN113450334B (en) Overwater target detection method, electronic equipment and storage medium
Hamzah et al. Visualization of image distortion on camera calibration for stereo vision application
CN113240746B (en) Speckle structure light marking method and device based on ideal imaging plane
CN111223148B (en) Method for calibrating camera internal parameters based on same circle and orthogonal properties
Ragab et al. Multiple nonoverlapping camera pose estimation
Miksch et al. Homography-based extrinsic self-calibration for cameras in automotive applications
CN111145267A (en) IMU (inertial measurement unit) assistance-based 360-degree panoramic view multi-camera calibration method
CN110176033A (en) A kind of mixing probability based on probability graph is against depth estimation method
CN115578417A (en) Monocular vision inertial odometer method based on feature point depth
CN106875374B (en) Weak connection image splicing method based on line features
CN111383264A (en) Positioning method, positioning device, terminal and computer storage medium
CN115018922A (en) Distortion parameter calibration method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant