CN112419416A - Method and system for estimating camera position based on small amount of control point information - Google Patents


Info

Publication number
CN112419416A
CN112419416A (Application CN202011454902.4A)
Authority
CN
China
Prior art keywords
camera
control point
coordinate system
point
control points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011454902.4A
Other languages
Chinese (zh)
Other versions
CN112419416B (en)
Inventor
张钧
周文杰
刘小茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202011454902.4A priority Critical patent/CN112419416B/en
Publication of CN112419416A publication Critical patent/CN112419416A/en
Application granted granted Critical
Publication of CN112419416B publication Critical patent/CN112419416B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method and a system for estimating a camera position based on a small amount of control point information, belonging to the technical field of machine vision. The classical PnP method requires information from at least three control points to solve for the camera position; the present invention can still solve for the camera coordinates when the number of control points is less than three, and can therefore be applied in situations where the camera position must be determined but fewer than three control points are available. Moreover, when only three control points are used, the classical PnP method generally has multiple solutions, and has a unique solution only when the triangle formed by the control points is isosceles and the camera lies in certain specific regions. The present invention has no such specific requirements relative to the PnP method, and finds a unique solution for various amounts of control point information.

Description

Method and system for estimating camera position based on small amount of control point information
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a method and a system for estimating a camera position based on a small amount of control point information.
Background
In some scenarios, such as unmanned aerial vehicle (UAV) flight, the UAV usually determines its position by GPS; however, when the GPS signal is weak or lost, GPS positioning is impossible, and the UAV's position can instead be determined by estimating the position of its camera.
When an image is taken with a camera, the position of the camera at the moment of capture can be estimated if the world coordinates of several three-dimensional control points and the pixel coordinates of their projections in the image are known. The classical PnP algorithm estimates the camera position from the three-dimensional spatial coordinates of n points in space and the pixel coordinates of their projections in the image. However, the PnP algorithm requires n ≥ 3; that is, the camera position can be estimated by PnP only when the coordinates of at least 3 three-dimensional control points and of their projected pixel points in the image are known.
Because the classical PnP method requires at least three control points to solve for the camera position, its implementation usually demands that more than three control points be set manually or already exist in the scene captured by the camera. In practical application scenarios this condition often cannot be met: for example, when estimating the position of a camera carried on a UAV, it cannot be guaranteed that three or more control points appear in the scene captured during flight. The strict solving conditions therefore limit the application of the PnP method in situations where the camera position must be determined.
Disclosure of Invention
Aiming at the above defects or improvement requirements of the prior art, the present invention provides a method and a system for estimating the camera position based on a small amount of control point information, and aims to solve the technical problem that the PnP method requires information from at least three control points to solve for the camera position, i.e., that its solving conditions are harsh.
To achieve the above object, according to an aspect of the present invention, there is provided a method of estimating a camera position based on a small amount of control point information, including:
S1, a camera shoots a scene containing control points to obtain the homogeneous pixel coordinate I_i of the pixel point onto which the i-th control point P_i is projected in the image;
S2, in a world coordinate system, calculating the direction vector N_i from the i-th control point P_i to its projected pixel point in the image:
N_i = -R^T K^{-1} I_i
where R is a rotation matrix from the world coordinate system to the camera coordinate system, R^T represents the transpose of the matrix R, K is the intrinsic parameter matrix of the camera, and K^{-1} represents the inverse of the matrix K;
S3, when there is only one control point, calculating the three-dimensional coordinate C of the camera in the world coordinate system using the following formula:
C = P_i + z_ci N_i
wherein P_i is the three-dimensional coordinate of the control point in the world coordinate system, and z_ci is the third coordinate of the control point in the camera coordinate system, z_ci > 0;
when the number of control points is greater than 1, taking as the camera position coordinate the spatial point minimizing the sum of squared distances to the n rays starting from the control points P_i with direction vectors N_i; i = 1, 2, …, n.
Further, when the number of the control points is greater than 1, the solving process of the camera position coordinates specifically includes:
calculating the unit direction vector J_i from the i-th control point P_i to its projected pixel point in the image:
J_i = N_i / ||N_i||
The three-dimensional coordinates C of the camera in the world coordinate system are calculated using the following formula:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^{-1} [Σ_{i=1}^{n} (E - J_i J_i^T) P_i]
where n is the number of control points, E is the third-order identity matrix, P_i is the three-dimensional spatial coordinate of the i-th control point, and ||a|| denotes the 2-norm of a vector a.
Further, when the number of control points is equal to 2, the midpoint of the common perpendicular segment of the two rays starting from the two control points is taken as the camera position coordinate.
Further, the three-dimensional coordinate C of the camera in the world coordinate system is calculated using the following formula:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
wherein
t_i = [(P_j - P_i)^T (J_i - (J_1^T J_2) J_j)] / [1 - (J_1^T J_2)^2], with (i, j) an arrangement of (1, 2),
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
According to another aspect of the present invention, there is provided a system for estimating a camera position based on a small amount of control point information, comprising:
the camera is used for shooting a scene where the control point is located;
a control point extraction module, for extracting the pixel points onto which the control points are projected in the image to obtain the homogeneous pixel coordinate I_i of the pixel point of the i-th control point P_i;
a direction vector calculation module, for calculating, in the world coordinate system, the direction vector N_i = -R^T K^{-1} I_i from each control point to its projected pixel point in the image;
a position calculation module, which calculates the coordinates of the camera by one of two methods:
when there is only one control point, the three-dimensional coordinates C of the camera in the world coordinate system are calculated using the following formula:
C = P_i + z_ci N_i
wherein P_i is the three-dimensional coordinate of the control point in the world coordinate system, and z_ci is the third coordinate of the control point in the camera coordinate system, z_ci > 0;
when the number of control points is greater than 1, the spatial point minimizing the sum of squared distances to the n rays starting from the control points P_i with direction vectors N_i is taken as the camera position coordinate; i = 1, 2, …, n.
Further, when the number of the control points is greater than 1, the solving process of the camera position coordinates specifically includes:
calculating the unit direction vector J_i from the i-th control point P_i to its projected pixel point in the image:
J_i = N_i / ||N_i||
The three-dimensional coordinates C of the camera in the world coordinate system are calculated using the following formula:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^{-1} [Σ_{i=1}^{n} (E - J_i J_i^T) P_i]
where n is the number of control points, E is the third-order identity matrix, P_i is the three-dimensional spatial coordinate of the i-th control point, and ||a|| denotes the 2-norm of a vector a.
Further, when the number of control points is equal to 2, the midpoint of the common perpendicular segment of the two rays starting from the two control points is taken as the camera position coordinate.
Further, the three-dimensional coordinate C of the camera in the world coordinate system is calculated using the following formula:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
wherein
t_i = [(P_j - P_i)^T (J_i - (J_1^T J_2) J_j)] / [1 - (J_1^T J_2)^2], with (i, j) an arrangement of (1, 2),
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
In general, compared with the prior art, the above technical solutions contemplated by the present invention can achieve the following beneficial effects.
(1) While the classical PnP method requires information from at least three control points to solve for the camera position, the present invention can still solve for the camera coordinates when the number of control points is less than three, and can be applied in situations where the camera position must be determined but fewer than three control points are available.
(2) When only three control points are used, the classical PnP method may have multiple solutions (1, 2, 3, or 4 solutions), and has a unique solution only when the triangle formed by the control points is isosceles and the camera lies in certain specific regions. The present invention has no such specific requirements relative to the PnP method, and finds a unique solution for various amounts of control point information.
Drawings
FIG. 1 is a schematic diagram of the control points, the pixel points onto which they are projected in the image, and the position of the camera, where P_i is the i-th control point, I_i is the pixel point of the i-th control point projected in the image, and C is the camera; C lies on the ray starting from control point P_i in the direction of I_i.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, a method of estimating a camera position based on a small amount of control point information, includes:
(1) First, the camera is started to shoot a scene containing control points, obtaining the homogeneous pixel coordinate I_i of the pixel point onto which control point P_i is projected in the image.
(2) In the world coordinate system, the direction vector N_i from the i-th control point to its projected pixel point in the image is calculated:
N_i = -R^T K^{-1} I_i
where R is the rotation matrix from the world coordinate system to the camera coordinate system, R^T represents the transpose of the matrix R, K is the intrinsic parameter matrix of the camera, and K^{-1} represents the inverse of the matrix K.
(3) Solving the coordinate position of the camera:
Method One: when only the information of one control point is available, the three-dimensional coordinate C of the camera in the world coordinate system can be calculated using the following formula:
C = P_i + z_ci N_i
wherein P_i is the three-dimensional coordinate of the control point in the world coordinate system, and z_ci is the third coordinate of the control point in the camera coordinate system, z_ci > 0.
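The single-point formula above can be sketched in a few lines of Python with NumPy (a hypothetical illustration, not code from the patent; the function name and test values are assumed):

```python
import numpy as np

def camera_position_one_point(P_i, I_i, R, K, z_ci):
    """Method One: camera position from a single control point.

    P_i  : (3,) world coordinates of the control point
    I_i  : (3,) homogeneous pixel coordinate of its projection
    R    : (3,3) rotation from the world frame to the camera frame
    K    : (3,3) camera intrinsic parameter matrix
    z_ci : depth of the control point in the camera frame (> 0)
    """
    # Direction vector from the control point toward the camera, in the world frame.
    N_i = -R.T @ np.linalg.inv(K) @ I_i
    # The camera lies on the ray from P_i along N_i, at parameter z_ci.
    return P_i + z_ci * N_i
```

With exact inputs this recovers the camera position exactly, since z_ci N_i = C - P_i under the projection model z_ci I_i = K R (P_i - C).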
Method Two: the number of control points is greater than 1.
These control points all satisfy the following equation:
C = P_i + z_ci N_i
where z_ci > 0, i = 1, 2, …, n.
That is, the three-dimensional coordinate C of the camera in the world coordinate system lies on the ray starting from control point P_i with direction vector N_i, whose equation is given above.
When n ≥ 2, C is in the ideal case determined by the unique intersection point of the n rays starting from the control points P_i with direction vectors N_i. In practical applications, the n rays may fail to have a unique common intersection because of error perturbations. In this case, an optimal estimate of C can be defined as the point minimizing the sum of squared distances to the n rays. In the ideal case the n rays intersect at one point, and the sum of squared distances from that intersection to the n rays is minimal, so this optimal estimate also applies in the ideal case.
The unit direction vector of the i-th ray is calculated according to the following formula:
J_i = N_i / ||N_i||
where ||a|| represents the 2-norm of the vector a.
The square of the distance from any point P in three-dimensional space to the i-th ray satisfies the following equation:
d_i^2(P) = (P - P_i)^T (E - J_i J_i^T) (P - P_i)
wherein E is the third-order identity matrix.
Then the sum of the squares of the distances from any point P in space to the n rays satisfies the following equation:
S(P) = Σ_{i=1}^{n} (P - P_i)^T (E - J_i J_i^T) (P - P_i)
The point P minimizing S(P) in the above formula satisfies
∂S/∂P = 2 Σ_{i=1}^{n} (E - J_i J_i^T) (P - P_i) = 0
which is solved to obtain
P = [Σ_{i=1}^{n} (E - J_i J_i^T)]^{-1} [Σ_{i=1}^{n} (E - J_i J_i^T) P_i]
Therefore, the optimal estimate C of the three-dimensional coordinates of the camera in the world coordinate system satisfies
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^{-1} [Σ_{i=1}^{n} (E - J_i J_i^T) P_i]
wherein E is the third-order identity matrix and P_i is the three-dimensional spatial coordinate of the i-th control point.
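The least-squares solution above can be sketched as follows in Python with NumPy (an illustrative implementation; the function name is assumed, and directions need not be normalized beforehand):

```python
import numpy as np

def camera_position_least_squares(points, dirs):
    """Method Two: the point minimizing the sum of squared distances to n rays.

    points : sequence of (3,) ray origins P_i (control points, world frame)
    dirs   : sequence of (3,) ray directions N_i (any nonzero length)
    """
    E = np.eye(3)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for P_i, N_i in zip(points, dirs):
        J_i = N_i / np.linalg.norm(N_i)   # unit direction J_i
        M = E - np.outer(J_i, J_i)        # projector onto the plane orthogonal to J_i
        A += M                            # accumulates sum of (E - J_i J_i^T)
        b += M @ P_i                      # accumulates sum of (E - J_i J_i^T) P_i
    # C = [sum (E - J_i J_i^T)]^{-1} [sum (E - J_i J_i^T) P_i]
    return np.linalg.solve(A, b)
```

Solving the linear system instead of forming an explicit inverse is numerically preferable; the matrix A is singular only when all the rays are parallel.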
Method Three: the number of control points is equal to 2.
When only the information of two control points is known, Method Two can be used, but it involves a matrix inversion when computing the three-dimensional coordinates of the camera, which increases the computational complexity; the following method is therefore recommended for obtaining C.
It can be shown that the midpoint of the common perpendicular segment of two rays in three-dimensional space minimizes the sum of squared distances to the two rays.
With the two control points known, two rays are obtained as in Method Two. Let Q_i be the foot of the common perpendicular on the i-th ray (the endpoint of the common perpendicular segment on the i-th ray), i = 1, 2.
Then
(Q_2 - Q_1)^T J_i = 0
Q_i = P_i + t_i J_i, i = 1, 2
where J_i is the unit direction vector of the i-th ray and t_i > 0.
Let β = J_1^T J_2, and let (i, j) be an arrangement of (1, 2), that is, (i, j) equals either (1, 2) or (2, 1). Substituting Q_i = P_i + t_i J_i into the orthogonality conditions gives
(P_j - P_i)^T J_i + t_j β - t_i = 0, i = 1, 2.
Solving this pair of linear equations yields
t_i = [(P_j - P_i)^T (J_i - β J_j)] / (1 - β^2).
When the unit vectors J_i and J_j are not parallel, there is always β^2 < 1, so the denominator is nonzero and t_1, t_2 are uniquely determined.
Since the optimal estimate of C is the midpoint of the common perpendicular segment,
C = (Q_1 + Q_2) / 2.
Substituting Q_1 and Q_2, we obtain
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
wherein
t_i = [(P_j - P_i)^T (J_i - (J_1^T J_2) J_j)] / [1 - (J_1^T J_2)^2]
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
Method Three involves only vector inner products and scalar multiplications; compared with Method Two, the computational complexity is reduced.
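Method Three can be sketched as follows (an illustrative NumPy version using the reconstructed t_i formula above; the function name is assumed):

```python
import numpy as np

def camera_position_two_points(P1, P2, N1, N2):
    """Method Three: midpoint of the common perpendicular segment of two rays."""
    J1 = N1 / np.linalg.norm(N1)
    J2 = N2 / np.linalg.norm(N2)
    beta = J1 @ J2                 # beta = J_1^T J_2; beta^2 < 1 when not parallel
    denom = 1.0 - beta ** 2
    t1 = (P2 - P1) @ (J1 - beta * J2) / denom
    t2 = (P1 - P2) @ (J2 - beta * J1) / denom
    Q1 = P1 + t1 * J1              # feet of the common perpendicular
    Q2 = P2 + t2 * J2
    return 0.5 * (Q1 + Q2)         # C = (Q1 + Q2) / 2
```

Only inner products and scalar multiplications are involved, so no 3x3 matrix inversion is needed; when the two rays actually intersect, Q1 = Q2 and the intersection point itself is returned.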
Simulation experiment:
1. Simulation experiments were carried out using Python. The camera coordinate system is obtained from the world coordinate system by a 45° rotation about the z-axis, a 180° rotation about the x-axis, and a 30° rotation about the z-axis, which determines the rotation matrix R.
A fixed camera intrinsic parameter matrix K is used for the simulation.
The simulation experiment sets 3 control points P_1, P_2, P_3, with coordinates given in meters.
Five true camera coordinate values C_1, …, C_5 are set in sequence, with coordinates given in meters.
The true homogeneous pixel coordinate I_ij of the pixel point of the i-th control point projected on the imaging plane of the j-th camera can be calculated from z_cij I_ij = K R (P_i - C_j): first compute K R (P_i - C_j) from the known conditions set in the experiment, then divide the three components of the resulting vector by its third component to obtain the homogeneous pixel coordinate I_ij.
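The generation of ground-truth pixel coordinates described above can be sketched as (illustrative; the function name is assumed):

```python
import numpy as np

def project_control_point(P_i, C_j, R, K):
    """True homogeneous pixel coordinate I_ij from z_cij * I_ij = K R (P_i - C_j)."""
    v = K @ R @ (P_i - C_j)   # first compute K R (P_i - C_j)
    return v / v[2]           # divide by the third component -> (u, v, 1)
```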
2. In the ideal case, the camera coordinates are calculated by the three methods respectively.
2.1 With one known control point, Method One is used. Five sets of simulation experiments are performed according to the 5 true camera values set in the experiment; each set uses the 3 different control points P_1, P_2, P_3 separately, giving the calculated coordinates of 3 cameras per set, and the 15 experimental results are shown in Table 1. The calculation formula is:
C = P_i + z_ci N_i
where in the simulation experiment z_ci is the height of the camera above the ground, which can be obtained by the camera's height measurement.
2.2 With two known control points, Method Three is used. Five sets of simulation experiments are performed according to the 5 true camera values set in the experiment; each set uses the 3 different control point combinations (P_1, P_2), (P_1, P_3), (P_2, P_3), giving the calculated coordinates of 3 cameras per set, and the 15 experimental results are shown in Table 2. The calculation formula is:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
2.3 With three known control points, Method Two is used. Five sets of simulation experiments are performed according to the 5 true camera values set in the experiment; each set uses the 3 control points P_1, P_2, P_3 together, giving the calculated coordinates of 1 camera per set, and the 5 experimental results are shown in Table 3. The calculation formula is:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^{-1} [Σ_{i=1}^{n} (E - J_i J_i^T) P_i]
The error of each calculated result is determined separately (see Tables 1, 2 and 3), where the error is calculated as
error = ||C_calculated - C_true||
i.e., the 2-norm of the difference between the calculated and true camera coordinates.
It can be seen that in the ideal case the errors of the calculation results of all three methods are 0, indicating that all three methods obtain the correct solution in the ideal case.
3. To simulate a practical application scenario, uniformly distributed random perturbations in the range [-2, 2] pixels are added to the first and second coordinates of I_ij, and uniformly distributed random perturbations in the range [-0.1, 0.1] meters are added to z_ci; the camera coordinates are then calculated by the three methods respectively.
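The perturbation scheme can be sketched as (illustrative; the function name and the random-number generator are assumptions, not from the patent):

```python
import numpy as np

def perturb(I_ij, z_ci, rng):
    """Add uniform noise: +/-2 px to the first two pixel coordinates, +/-0.1 m to z_ci."""
    I_noisy = I_ij.copy()
    I_noisy[:2] += rng.uniform(-2.0, 2.0, size=2)   # pixel-coordinate perturbation
    z_noisy = z_ci + rng.uniform(-0.1, 0.1)         # height perturbation in meters
    return I_noisy, z_noisy
```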
3.1 With one known control point, Method One is used. Five sets of simulation experiments are performed according to the 5 true camera values set in the experiment; each set uses the 3 different control points P_1, P_2, P_3 separately, giving the calculated coordinates of 3 cameras per set, and the 15 experimental results are shown in Table 4. The calculation formula is:
C = P_i + z_ci N_i
where z_ci is the height of the camera above the ground, which can be obtained by height measurement.
3.2 With two known control points, Method Three is used. Five sets of simulation experiments are performed according to the 5 true camera values set in the experiment; each set uses the 3 different control point combinations (P_1, P_2), (P_1, P_3), (P_2, P_3), giving the calculated coordinates of 3 cameras per set, and the 15 experimental results are shown in Table 5. The calculation formula is:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
3.3 With three known control points, Method Two is used. Five sets of simulation experiments are performed according to the 5 true camera values set in the experiment; each set uses the 3 control points P_1, P_2, P_3 together, giving the calculated coordinates of 1 camera per set, and the 5 experimental results are shown in Table 6. The calculation formula is:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^{-1} [Σ_{i=1}^{n} (E - J_i J_i^T) P_i]
The error of each calculated result is determined separately (see Tables 4, 5 and 6), where the error is calculated as
error = ||C_calculated - C_true||
i.e., the 2-norm of the difference between the calculated and true camera coordinates.
Under perturbation, the mean error of the calculation results with one known control point is 0.0596 m, with standard deviation 0.0310 m; with two known control points the mean error is 0.0806 m, with standard deviation 0.0798 m; with three known control points the mean error is 0.0361 m, with standard deviation 0.0325 m. It can be seen that the three methods exhibit a certain robustness.
TABLE 1 position calculation results when ideally one control point is known
TABLE 2 position calculation results for two control points known in the ideal case
TABLE 3 position calculation results when ideally three control points are known
TABLE 4 position calculation results for a known control point in the presence of a disturbance
TABLE 5 position calculation results for known two control points with disturbance
TABLE 6 position calculation results for known three control points with disturbance
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A method for estimating a camera position based on a small amount of control point information, comprising:
s1, shooting a scene containing a control point by a camera to obtain an ith control point PiHomogeneous pixel coordinate I of pixel point projected in imagei
S2, calculating the ith control point P in a world coordinate systemiDirection vector N to its projected pixel points in the imagei
Ni=-RTK-1Ii
Where R is a rotation matrix from the world coordinate system to the camera coordinate system, RTRepresenting the transpose of the matrix R, K being the intrinsic parameter matrix of the camera, K-1Represents the inverse of the matrix K;
s3, when only one control point exists, calculating a three-dimensional coordinate C of the camera in a world coordinate system by using the following formula:
C=Pi+zciNi
wherein, PiIs the three-dimensional coordinate of the control point in the world coordinate system, zciIs the control point is in phaseThird dimension coordinate in machine coordinate system, zci>0;
When the number of the control points is more than 1, the distance is determined by the control point PiStarting with a direction vector of NiThe space point position with the minimum sum of squared distances of the n rays is used as a camera position coordinate; i is 1,2, …, n.
2. The method according to claim 1, wherein when the number of control points is greater than 1, the process of solving the coordinates of the camera position specifically includes:
calculating the unit direction vector J_i from the i-th control point P_i to its projected pixel point in the image:
J_i = N_i / ||N_i||
The three-dimensional coordinates C of the camera in the world coordinate system are calculated using the following formula:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^{-1} [Σ_{i=1}^{n} (E - J_i J_i^T) P_i]
where n is the number of control points, E is the third-order identity matrix, P_i is the three-dimensional spatial coordinate of the i-th control point, and ||a|| denotes the 2-norm of a vector a.
3. The method of claim 2, wherein when the number of control points is equal to 2, the midpoint of the common perpendicular segment of the two rays starting from the two control points is taken as the camera position coordinate.
4. The method of claim 3, wherein the three-dimensional coordinate C of the camera in the world coordinate system is calculated by the following formula:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
wherein
t_i = [(P_j - P_i)^T (J_i - (J_1^T J_2) J_j)] / [1 - (J_1^T J_2)^2], with (i, j) an arrangement of (1, 2),
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
5. A system for estimating camera position based on a small amount of control point information, comprising:
the camera is used for shooting a scene where the control point is located;
a control point extraction module, for extracting the pixel points onto which the control points are projected in the image to obtain the homogeneous pixel coordinate I_i of the pixel point of the i-th control point P_i;
a direction vector calculation module, for calculating, in the world coordinate system, the direction vector N_i = -R^T K^{-1} I_i from each control point to its projected pixel point in the image;
a position calculation module, which calculates the coordinates of the camera by one of two methods:
when there is only one control point, the three-dimensional coordinates C of the camera in the world coordinate system are calculated using the following formula:
C = P_i + z_ci N_i
wherein P_i is the three-dimensional coordinate of the control point in the world coordinate system, and z_ci is the third coordinate of the control point in the camera coordinate system, z_ci > 0;
when the number of control points is greater than 1, the spatial point minimizing the sum of squared distances to the n rays starting from the control points P_i with direction vectors N_i is taken as the camera position coordinate; i = 1, 2, …, n.
6. The system according to claim 5, wherein when the number of control points is greater than 1, the process of solving the coordinates of the camera position specifically includes:
calculating the unit direction vector J_i from the i-th control point P_i to its projected pixel point in the image:
J_i = N_i / ||N_i||
The three-dimensional coordinates C of the camera in the world coordinate system are calculated using the following formula:
C = [Σ_{i=1}^{n} (E - J_i J_i^T)]^{-1} [Σ_{i=1}^{n} (E - J_i J_i^T) P_i]
where n is the number of control points, E is the third-order identity matrix, P_i is the three-dimensional spatial coordinate of the i-th control point, and ||a|| denotes the 2-norm of a vector a.
7. The system of claim 6, wherein when the number of control points is equal to 2, the midpoint of the common perpendicular segment of the two rays starting from the two control points is taken as the camera position coordinate.
8. The system of claim 7, wherein the three-dimensional coordinates C of the camera in the world coordinate system are calculated by the following formula:
C = (P_1 + P_2 + t_1 J_1 + t_2 J_2) / 2
wherein
t_i = [(P_j - P_i)^T (J_i - (J_1^T J_2) J_j)] / [1 - (J_1^T J_2)^2], with (i, j) an arrangement of (1, 2),
and P_1, P_2 are the three-dimensional coordinates of the two control points in the world coordinate system.
CN202011454902.4A 2020-12-10 2020-12-10 Method and system for estimating camera position based on small amount of control point information Active CN112419416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011454902.4A CN112419416B (en) 2020-12-10 2020-12-10 Method and system for estimating camera position based on small amount of control point information

Publications (2)

Publication Number Publication Date
CN112419416A true CN112419416A (en) 2021-02-26
CN112419416B CN112419416B (en) 2022-10-14

Family

ID=74776710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011454902.4A Active CN112419416B (en) 2020-12-10 2020-12-10 Method and system for estimating camera position based on small amount of control point information

Country Status (1)

Country Link
CN (1) CN112419416B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013111229A1 (en) * 2012-01-23 2013-08-01 NEC Corporation Camera calibration device, camera calibration method, and camera calibration program
CN108648237A (en) * 2018-03-16 2018-10-12 Institute of Information Engineering, Chinese Academy of Sciences A vision-based spatial localization method
CN108986172A (en) * 2018-07-25 2018-12-11 Northwestern Polytechnical University A single-view linear camera calibration method for small depth-of-field systems
CN111915685A (en) * 2020-08-17 2020-11-10 Shenyang Aircraft Industry (Group) Co., Ltd. Zoom camera calibration method
US20200364898A1 (en) * 2019-05-16 2020-11-19 Here Global B.V. Method, apparatus, and system for estimating the quality of camera pose data using ground control points of known quality


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113804165A (en) * 2021-09-30 2021-12-17 北京欧比邻科技有限公司 Unmanned aerial vehicle simulated GPS signal positioning method and device
CN113804165B (en) * 2021-09-30 2023-12-22 北京欧比邻科技有限公司 Unmanned aerial vehicle simulation GPS signal positioning method and device


Similar Documents

Publication Publication Date Title
CN107862719B (en) Method and device for calibrating external parameters of camera, computer equipment and storage medium
Zhou A new minimal solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences
Jiao et al. A novel dual-lidar calibration algorithm using planar surfaces
CN105118021A (en) Feature point-based image registering method and system
CN110142805A (en) A kind of robot end&#39;s calibration method based on laser radar
CN104596502A (en) Object posture measuring method based on CAD model and monocular vision
CN105551020A (en) Method and device for detecting dimensions of target object
Zhou et al. A new algorithm for the extrinsic calibration of a 2D LIDAR and a camera
CN114387347B (en) Method, device, electronic equipment and medium for determining external parameter calibration
CN111179351B (en) Parameter calibration method and device and processing equipment thereof
Lee et al. Vision-based terrain referenced navigation for unmanned aerial vehicles using homography relationship
CN110163902A (en) A kind of inverse depth estimation method based on factor graph
CN112419416B (en) Method and system for estimating camera position based on small amount of control point information
CN115560760A (en) Unmanned aerial vehicle-oriented vision/laser ranging high-altitude navigation method
Miksch et al. Automatic extrinsic camera self-calibration based on homography and epipolar geometry
Takahashi et al. Mirror-based camera pose estimation using an orthogonality constraint
Hamzah et al. Visualization of image distortion on camera calibration for stereo vision application
CN111223148B (en) Method for calibrating camera internal parameters based on same circle and orthogonal properties
Miksch et al. Homography-based extrinsic self-calibration for cameras in automotive applications
Ragab et al. Multiple nonoverlapping camera pose estimation
CN110176033A (en) A kind of mixing probability based on probability graph is against depth estimation method
CN115578417A (en) Monocular vision inertial odometer method based on feature point depth
CN115830116A (en) Robust visual odometer method
Zhang et al. INS assisted monocular visual odometry for aerial vehicles
CN111210476B (en) Method and device for simultaneously positioning and mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant