CN117197231A - Feature point uncertainty-based monocular camera pose estimation method

Feature point uncertainty-based monocular camera pose estimation method

Info

Publication number
CN117197231A
CN117197231A (application CN202310666923.XA)
Authority
CN
China
Prior art keywords
matrix
uncertainty
feature points
feature point
equation model
Prior art date
Legal status
Pending
Application number
CN202310666923.XA
Other languages
Chinese (zh)
Inventor
Yu Qida
Xie Zhengfeng
Zhou Xiaoyan
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202310666923.XA
Publication of CN117197231A

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a monocular camera pose estimation method based on feature point uncertainty, comprising the following steps: constructing the inverse covariance matrix of the feature point measurement errors from the pixel gray-level distribution of the feature point data space; performing singular value decomposition on the inverse covariance matrix to obtain the affine transformation matrix of the feature points; weighting the feature points by uncertainty, constructing a weighted algebraic error function, obtaining a linear equation model from the weighted algebraic error function, and rewriting the linear equation model as a nonlinear equation model; obtaining a consistent noise variance, solving the nonlinear equation model based on the consistent noise variance to obtain a bias-consistent closed-form solution, obtaining the estimates of the rotation matrix and the translation vector from the closed-form solution, and outputting the camera pose estimation result. Because the uncertainty of the feature points is taken into account and the asymptotic bias is eliminated, the accuracy and efficiency of pose estimation are improved.

Description

Feature point uncertainty-based monocular camera pose estimation method
Technical Field
The application relates to a monocular camera pose estimation method based on feature point uncertainty, and belongs to the technical field of vision measurement.
Background
Camera pose estimation is widely applied in three-dimensional reconstruction, autonomous driving, camera calibration, augmented reality, and photogrammetry. The pose of a camera can be described by a rotation matrix and a translation vector, which are also referred to as the camera's extrinsic parameters. The extrinsic parameters are solved by establishing a 3D-2D matching relation between feature points in the world coordinate system and the pixel coordinate system; this process is known as the Perspective-n-Point (PnP) problem.
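By way of illustration of this 3D-2D formulation, the following is a minimal sketch of solving a PnP instance with OpenCV's generic solver; the point values and intrinsic matrix are hypothetical, and `cv2.solvePnP` stands in here for any conventional PnP solver rather than the method of this application.

```python
import numpy as np
import cv2

# Hypothetical 3D reference points (world frame) and their 2D pixel observations.
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                          [1, 1, 1], [0.5, 0.2, 0.8], [0.3, 0.9, 0.4]], dtype=np.float64)
image_points = np.array([[320, 240], [420, 238], [322, 140],
                         [430, 130], [360, 170], [350, 110]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

# solvePnP recovers the extrinsics: a rotation (as a Rodrigues vector) and a translation.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)  # convert to a 3x3 rotation matrix
```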
Most existing PnP algorithms obtain the optimal pose by establishing an algebraic error function and iterating. However, these algorithms ignore the observation errors of the feature points and do not take their uncertainty into account. In PnP algorithms that do consider feature point uncertainty, the observation errors are generally assumed to be isotropic and independent and identically distributed. In actual feature point extraction, however, the gray-level distribution patterns around different feature points differ greatly, so the errors are usually anisotropic and not identically distributed.
Disclosure of Invention
The purpose is as follows: in order to overcome the defects in the prior art, the application provides a monocular camera pose estimation method based on feature point uncertainty.
The technical scheme is as follows: in order to solve the technical problems, the application adopts the following technical scheme:
in a first aspect, the present application provides a method for estimating a pose of a camera based on feature point uncertainty, including:
extracting uncertainty information of the feature points from the pixel gray-level distribution of the feature point data space, and constructing the inverse covariance matrix $Q^{-1}$ of the feature point measurement errors;
performing singular value decomposition on the inverse covariance matrix $Q^{-1}$ of the feature points to obtain the affine transformation matrix $F$ of the feature points;
weighting the feature points by uncertainty based on the affine transformation matrix $F$ of the feature points, and constructing a weighted algebraic error function;
obtaining a linear equation model from the weighted algebraic error function;
rewriting the linear equation model as a nonlinear equation model;
obtaining a consistent noise variance based on the nonlinear equation model, and solving the nonlinear equation model based on the consistent noise variance to obtain a bias-consistent closed-form solution $\hat{\theta}$;
obtaining the estimate $\hat{R}$ of the rotation matrix and the estimate $\hat{t}$ of the translation vector from the closed-form solution $\hat{\theta}$, and outputting the camera pose estimation result.
In some embodiments, extracting uncertainty information of the feature points from the pixel gray-level distribution of the feature point data space and constructing the inverse covariance matrix of the feature point measurement errors includes:
describing the uncertainty of a feature point by the inverse covariance matrix $Q^{-1}$ of its measurement error, modeled as

$$Q^{-1} = \frac{1}{w(u,v)} \sum_{(u,v)\in W} \begin{bmatrix} \left(\frac{\partial I}{\partial u}\right)^{2} & \frac{\partial I}{\partial u}\frac{\partial I}{\partial v} \\ \frac{\partial I}{\partial u}\frac{\partial I}{\partial v} & \left(\frac{\partial I}{\partial v}\right)^{2} \end{bmatrix}$$

where $Q$ is the covariance matrix of the measurement error, $W$ is an elliptical region centered on the feature point, $w(u,v)$ is the sum of the pixel gray levels over the elliptical region, and $\partial I/\partial u$ and $\partial I/\partial v$ are the image gradients in the $u$ and $v$ directions.
In some embodiments, performing singular value decomposition on the inverse covariance matrix $Q^{-1}$ of the feature points to obtain the affine transformation matrix $F$ of the feature points includes:
performing singular value decomposition on $Q^{-1}$ to obtain $Q^{-1} = U\Sigma^{-1}U^{T}$, where $\Sigma = \mathrm{diag}(\sigma_{1}, \sigma_{2})$;
taking the affine transformation matrix $F = \Sigma^{-1/2}U^{T}$;
where $\sigma_{1}$ and $\sigma_{2}$ represent the uncertainty of the feature point, $\Sigma^{-1/2}$ transforms the uncertainty ellipse into a unit circle, and $U^{T}$ is a rotation matrix that rotates the tilted uncertainty ellipse into alignment with the $u$ and $v$ directions of the image plane;
the feature point data space is projected into the weighted covariance space through the affine transformation matrix $F$, converting the feature point measurement errors from an anisotropic distribution to an isotropic distribution.
In some embodiments, weighting the feature points by uncertainty based on the affine transformation matrix $F$ of the feature points and constructing a weighted algebraic error function includes:
the camera pinhole imaging model is

$$d_{i}\begin{bmatrix} q_{i} \\ 1 \end{bmatrix} = K\begin{bmatrix} R & t \end{bmatrix}\begin{bmatrix} p_{i}^{w} \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_{x} & 0 & u_{0} \\ 0 & f_{y} & v_{0} \\ 0 & 0 & 1 \end{bmatrix}$$

where $K$ is the camera intrinsic matrix in pixel units and is a known quantity; $q_{i} = [u_{i}\; v_{i}]^{T}$ is the $i$-th pixel plane feature point; $p_{i}^{w}$ is the corresponding reference point in the world coordinate system; $d_{i}$ is the depth; and $R$ and $t$ are the extrinsic parameters to be solved, namely the rotation matrix and the translation vector;
taking the measurement errors $\varepsilon_{i}$ of the feature points into account, the observed point $q_{i} + \varepsilon_{i}$ replaces $q_{i}$ in the camera pinhole imaging model; since the principal point $[u_{0}\; v_{0}]^{T}$ is known, the model is simplified, and the feature points are weighted by left-multiplying with $F_{i}$, giving the observation equation with weighted points $\bar{q}_{i} = F_{i} q_{i}$;
the weighted algebraic error function is the sum of the squared weighted residuals,

$$\min_{R,\,t} \sum_{i=1}^{n} \left\| F_{i}\,\varepsilon_{i} \right\|^{2}$$

where $f_{x}$ and $f_{y}$ are the pixel focal lengths in the $x$ and $y$ directions, $u_{0}$ and $v_{0}$ are the centers of the pixel plane in the $u$ and $v$ directions, $\varepsilon_{i}$ is the measurement error of the $i$-th pixel plane feature point, $\bar{q}_{i}$ is the $i$-th weighted pixel plane feature point, and $F_{i}$ is the affine transformation matrix of the $i$-th pixel plane feature point.
In some embodiments, obtaining the linear equation model from the weighted algebraic error function includes:
multiplying both sides of the weighted algebraic error function by $d_{i}$ and, according to the camera pinhole imaging model and the intermediate parameter $e = [0\;0\;0\;1]^{T}$, obtaining the measurement equation;
rewriting the measurement equation in matrix form to obtain the linear equation model, in which $M$ is the coefficient matrix, the unknown vector $x$ is composed of the extrinsic parameters $R$ and $t$, and the remaining term is the measurement error; $M$ is assembled using the Kronecker (tensor) product $\otimes$, where $I_{3}$ denotes the $3\times 3$ identity matrix.
In some embodiments, rewriting the linear equation model as a nonlinear equation model includes:
introducing constraints to eliminate the scale ambiguity; denoting the rotation matrix as $R = [r_{1}\; r_{2}\; r_{3}]^{T}$ and the translation vector as $t = [t_{1}\; t_{2}\; t_{3}]^{T}$, and substituting them into the linear equation model;
rewriting the result in matrix form to obtain the nonlinear equation model

$$A\theta + \eta = b$$

where $b$ is the vector formed by the weighted feature points, $\eta$ is the error term, $\theta$ is the vector of unknowns to be solved, and $A$ is the coefficient matrix;
where $\alpha$ is a scale factor, $n$ is the number of feature points, $\bar{p}^{w}$ denotes the centroid coordinates of the feature points in the world coordinate system, and $\bar{u}_{i}$ and $\bar{v}_{i}$ denote the coordinates of the $i$-th weighted pixel plane feature point in the $u$ and $v$ directions.
In some embodiments, obtaining a consistent noise variance based on the nonlinear equation model and solving the nonlinear equation model based on the consistent noise variance to obtain the bias-consistent closed-form solution $\hat{\theta}$ includes:
constructing the matrices $\Phi$ and $\Delta$ from the coefficient matrix $A$, the vector $b$ formed by the weighted feature points, and the centroid coordinates of the feature points in the world coordinate system;
constructing the function $H(\lambda) = \Phi - \lambda\Delta$, where $\lambda$ is a generalized eigenvalue, and solving the generalized eigenvalue problem to obtain the consistent noise variance $\hat{\sigma}^{2}$;
solving the nonlinear equation model based on the consistent noise variance to obtain the bias-consistent closed-form solution $\hat{\theta}$.
in some embodiments, according to the closed form solutionObtaining an estimate of the rotation matrix +.>And an estimate of the translation vector +.>Comprising the following steps:
further, the method further comprises the following steps: by singular value decompositionProjection into the space of the prune group, the formula is as follows:
wherein the method comprises the steps ofThe representation will->Projection formula projected into the lie space, diag diagonal matrix, det determinant, assuming ∈ ->The singular value decomposition result of +.>Wherein->And->Are parameters in the singular value decomposition result.
In a second aspect, the application provides a camera pose estimation device based on feature point uncertainty, which comprises a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the method according to the first aspect.
In a third aspect, the application provides an apparatus comprising,
a memory;
a processor;
and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first aspect described above.
In a fourth aspect, the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
The beneficial effects are that: the monocular camera pose estimation method based on feature point uncertainty provided by the application has the following advantages. The application accounts for the anisotropic noise of the feature points and uses an algebraic error function weighted by feature point uncertainty coefficients to establish a monocular camera pose solving model; a bias-consistent solution algorithm is derived; an uncertainty model is built from the image gray-level distribution around the feature points and integrated into the objective function for pose solving, and a bias-eliminating least squares solution is introduced. The asymptotic bias of the feature point errors is thereby eliminated, the accuracy of the algorithm is improved, and the camera pose is solved accurately and reliably.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the application.
FIG. 2 is a schematic diagram of feature point uncertainty in one embodiment of the present application.
Detailed Description
The application is further described below with reference to the drawings and examples. The following examples are only for more clearly illustrating the technical aspects of the present application, and are not intended to limit the scope of the present application.
In the description of the present application, "several" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it. Descriptions of "first" and "second" are only for distinguishing technical features and should not be construed as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features.
In the description of the present application, the descriptions of the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Example 1
In a first aspect, as shown in FIG. 1, the present embodiment provides a method for estimating a pose of a camera based on uncertainty of feature points, including:
S1, extracting uncertainty information of the feature points from the pixel gray-level distribution of the feature point data space, and constructing the inverse covariance matrix $Q^{-1}$ of the feature point measurement errors;
S2, performing singular value decomposition on the inverse covariance matrix $Q^{-1}$ of the feature points to obtain the affine transformation matrix $F$ of the feature points;
s3, carrying out uncertainty weighting on the feature points based on an affine transformation matrix F of the feature points, and constructing a weighted algebraic error function;
s4, obtaining a linear equation model according to the weighted algebraic error function;
s5, rewriting the linear equation model into a nonlinear equation model;
s6, obtaining consistent noise variance based on the nonlinear equation model, and solving the nonlinear equation model based on the consistent noise variance to obtain a closed solution with consistent deviation
S7 according to the closed solutionObtaining an estimate of the rotation matrix +.>And an estimate of the translation vector +.>And outputting a camera pose estimation result.
In some embodiments, step S1 extracts uncertainty information of the feature points from the pixel gray-level distribution of the feature point data space and constructs the inverse covariance matrix of the feature point measurement errors, including:
In actual visual measurement, different feature points have different gray-level pattern distributions on the imaging plane. The directionality of the gray pattern is introduced when the image point is extracted and is reflected in the $u$ and $v$ directions, so the uncertainty of the image point error can be used to characterize the anisotropy and non-identical distribution of the extraction errors. Different image point errors have different magnitudes, anisotropy distributions, and uncertainties. $Q^{-1}$ is described as an uncertainty ellipse centered on the feature point $q_{i} = [u_{i}\; v_{i}]^{T}$: the major and minor axes $a$ and $b$ of the ellipse represent the uncertainty magnitude of $q_{i}$, and the angles of $a$ and $b$ relative to the $u$ and $v$ directions represent the uncertainty direction. FIG. 2 shows three typical types of image point measurement error uncertainty. If the image point error is as in FIG. 2(II), the uncertainty of the feature point is isotropic: there is only scale uncertainty and no directional uncertainty. If the image point errors are as in FIG. 2(I) and FIG. 2(III), the uncertainty of the feature points is directional, and the direction of the image point error uncertainty in the actual situation must be taken into account.
$Q^{-1}$ is modeled as

$$Q^{-1} = \frac{1}{w(u,v)} \sum_{(u,v)\in W} \begin{bmatrix} \left(\frac{\partial I}{\partial u}\right)^{2} & \frac{\partial I}{\partial u}\frac{\partial I}{\partial v} \\ \frac{\partial I}{\partial u}\frac{\partial I}{\partial v} & \left(\frac{\partial I}{\partial v}\right)^{2} \end{bmatrix}$$

where $Q$ is the covariance matrix of the measurement error, $W$ is an elliptical region centered on the feature point, $w(u,v)$ is the sum of the pixel gray levels over the elliptical region, and $\partial I/\partial u$ and $\partial I/\partial v$ are the image gradients in the $u$ and $v$ directions.
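By way of illustration, a minimal numpy sketch of this construction is given below; it substitutes a square window for the elliptical region $W$ and uses finite-difference gradients, both simplifying assumptions rather than the exact construction of this application.

```python
import numpy as np

def inverse_covariance(image, u, v, half=5):
    # Square (2*half+1)^2 window as a simplifying stand-in for the ellipse W.
    patch = image[v - half:v + half + 1, u - half:u + half + 1].astype(np.float64)
    Iv, Iu = np.gradient(patch)   # gradients along the v (row) and u (column) axes
    w = patch.sum()               # sum of pixel gray levels over the window
    # Gradient structure tensor, normalized by the summed gray levels.
    Q_inv = np.array([[(Iu * Iu).sum(), (Iu * Iv).sum()],
                      [(Iu * Iv).sum(), (Iv * Iv).sum()]]) / w
    return Q_inv
```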
In some embodiments, step S2 performs singular value decomposition on the inverse covariance matrix $Q^{-1}$ of the feature points to obtain the affine transformation matrix $F$ of the feature points, including:
performing singular value decomposition on $Q^{-1}$ to obtain $Q^{-1} = U\Sigma^{-1}U^{T}$, where $\Sigma = \mathrm{diag}(\sigma_{1}, \sigma_{2})$, and defining the transformation matrix $F = \Sigma^{-1/2}U^{T}$;
where $\sigma_{1}$ and $\sigma_{2}$ represent the uncertainty of the feature point, $\Sigma^{-1/2}$ transforms the uncertainty ellipse into a unit circle, and $U^{T}$ is a rotation matrix that rotates the tilted uncertainty ellipse into alignment with the $u$ and $v$ directions of the image plane.
$F$ is a $2\times 2$ affine transformation matrix; its role is to project the feature point coordinates into the weighted covariance space through the transformation $F$, where the error can be regarded as isotropic.
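A short numpy sketch of this step, under the assumption that $Q^{-1}$ is symmetric positive definite so that its singular value decomposition takes the form $U\,\mathrm{diag}(s)\,U^{T}$:

```python
import numpy as np

def affine_whitening(Q_inv):
    # SVD of the symmetric 2x2 matrix: Q^{-1} = U diag(s) U^T = U Sigma^{-1} U^T,
    # so the singular values s are the reciprocals of sigma_1 and sigma_2.
    U, s, _ = np.linalg.svd(Q_inv)
    F = np.diag(np.sqrt(s)) @ U.T   # F = Sigma^{-1/2} U^T, so F^T F = Q^{-1}
    return F
```

Applying $F$ to an error with covariance $Q$ yields covariance $F Q F^{T} = I$, which is exactly the isotropy described above.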
In some embodiments, step S3 weights the feature points by uncertainty based on the affine transformation matrix $F$ of the feature points and constructs a weighted algebraic error function, including:
establishing an algebraic measurement error function from the camera pinhole imaging model; the camera pinhole imaging model is

$$d_{i}\begin{bmatrix} q_{i} \\ 1 \end{bmatrix} = K\begin{bmatrix} R & t \end{bmatrix}\begin{bmatrix} p_{i}^{w} \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_{x} & 0 & u_{0} \\ 0 & f_{y} & v_{0} \\ 0 & 0 & 1 \end{bmatrix}$$

where $K$ is the camera intrinsic matrix in pixel units and, in the present application, a known quantity; $q_{i} = [u_{i}\; v_{i}]^{T}$ is a pixel plane feature point; $p_{i}^{w}$ is the corresponding reference point in the world coordinate system; $d_{i}$ is the depth; and $R$ and $t$ are the extrinsic parameters to be solved, namely the rotation matrix and the translation vector;
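The model corresponds to the following projection sketch (the function and variable names are illustrative):

```python
import numpy as np

def project(p_w, R, t, K):
    # d_i * [q_i; 1] = K [R t] [p_i^w; 1]: project a world point to pixels.
    x_cam = R @ p_w + t           # camera-frame coordinates; depth d_i = x_cam[2]
    x_img = K @ x_cam             # homogeneous pixel coordinates, scaled by d_i
    return x_img[:2] / x_img[2]   # q_i = [u_i, v_i]
```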
taking the measurement errors $\varepsilon_{i}$ of the feature points into account, the observed point $q_{i} + \varepsilon_{i}$ replaces $q_{i}$ in the camera pinhole imaging model; since the principal point $[u_{0}\; v_{0}]^{T}$ is known, the model can be simplified, and the feature points are weighted by left-multiplying with $F_{i}$, giving the observation equation with weighted points $\bar{q}_{i} = F_{i} q_{i}$;
the weighted algebraic error function is the sum of the squared weighted residuals,

$$\min_{R,\,t} \sum_{i=1}^{n} \left\| F_{i}\,\varepsilon_{i} \right\|^{2}$$

where $f_{x}$ and $f_{y}$ are the pixel focal lengths in the $x$ and $y$ directions, $u_{0}$ and $v_{0}$ are the centers of the pixel plane in the $u$ and $v$ directions, $\varepsilon_{i}$ is the measurement error of the $i$-th pixel plane feature point, $\bar{q}_{i}$ is the $i$-th weighted pixel plane feature point, and $F_{i}$ is the affine transformation matrix of the $i$-th pixel plane feature point.
Step S4 obtains the linear equation model from the weighted algebraic error function, including:
multiplying both sides of the weighted algebraic error function by $d_{i}$ and, according to the camera pinhole imaging model and the intermediate parameter $e = [0\;0\;0\;1]^{T}$, obtaining the measurement equation;
rewriting the measurement equation in matrix form to obtain the linear equation model, in which $M$ is the coefficient matrix, the unknown vector $x$ is composed of the extrinsic parameters $R$ and $t$, and the remaining term is the measurement error; $M$ is assembled using the Kronecker (tensor) product $\otimes$, where $I_{3}$ denotes the $3\times 3$ identity matrix.
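The Kronecker-product assembly rests on the identity $\mathrm{vec}(AXB) = (B^{T}\otimes A)\,\mathrm{vec}(X)$, which pulls the unknown extrinsics out of the bilinear measurement equation; the snippet below verifies the identity numerically and is a generic illustration, not the exact coefficient matrix $M$ of this application.

```python
import numpy as np

rng = np.random.default_rng(0)
A_ = rng.normal(size=(3, 3))   # stand-in for the known left factor
X = rng.normal(size=(3, 4))    # unknown block, e.g. [R t]
B = rng.normal(size=(4, 1))    # homogeneous world point

# vec(A X B) = (B^T kron A) vec(X), with column-major vectorization.
lhs = (A_ @ X @ B).flatten(order="F")
rhs = np.kron(B.T, A_) @ X.flatten(order="F")
assert np.allclose(lhs, rhs)
```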
In some embodiments, step S5 rewrites the linear equation model as a nonlinear equation model, including:
introducing constraints to eliminate the scale ambiguity; denoting the rotation matrix as $R = [r_{1}\; r_{2}\; r_{3}]^{T}$ and the translation vector as $t = [t_{1}\; t_{2}\; t_{3}]^{T}$, and substituting them into the linear equation model;
rewriting the result in matrix form to obtain the nonlinear equation model

$$A\theta + \eta = b$$

where $b$ is the vector formed by the weighted feature points, $\eta$ is the error term, $\theta$ is the vector of unknowns to be solved, and $A$ is the coefficient matrix;
where $\alpha$ is a scale factor, $n$ is the number of feature points, $\bar{p}^{w}$ denotes the centroid coordinates of the feature points in the world coordinate system, and $\bar{u}_{i}$ and $\bar{v}_{i}$ denote the coordinates of the $i$-th weighted pixel plane feature point in the $u$ and $v$ directions.
In some embodiments, step S6 obtains a consistent noise variance based on the nonlinear equation model and solves the nonlinear equation model based on the consistent noise variance to obtain the bias-consistent closed-form solution $\hat{\theta}$, including:
constructing the matrices $\Phi$ and $\Delta$ from the coefficient matrix $A$, the vector $b$ formed by the weighted feature points, and the centroid coordinates of the feature points in the world coordinate system;
constructing the function $H(\lambda) = \Phi - \lambda\Delta$, where $\lambda$ is a generalized eigenvalue, and solving the generalized eigenvalue problem to obtain the consistent noise variance $\hat{\sigma}^{2}$.
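A sketch of the generalized-eigenvalue step, assuming $\Phi$ and $\Delta$ are symmetric with $\Delta$ positive definite and that the consistent noise variance is taken as the smallest generalized eigenvalue; the construction of $\Phi$ and $\Delta$ themselves follows the definitions above and is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def consistent_noise_variance(Phi, Delta):
    # Generalized eigenvalues lambda solving det(Phi - lambda * Delta) = 0,
    # i.e. the roots of H(lambda) = Phi - lambda * Delta.
    eigvals = eigh(Phi, Delta, eigvals_only=True)
    return eigvals.min()   # assumed: sigma^2 is the smallest eigenvalue
```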
The nonlinear equation model is then solved based on the consistent noise variance, yielding the bias-consistent closed-form solution $\hat{\theta}$.
in some embodiments, step S7 is based on the closed-form solutionObtaining an estimate of the rotation matrix +.>And an estimate of the translation vector +.>Comprising the following steps:
due to the rotation matrixDoes not necessarily satisfy->Is a condition of (2). Therefore, it is necessary to divide +.>Projecting into the space of the plum clusters, wherein the method comprises the following steps:
wherein the method comprises the steps ofThe representation will->Projection formula projected into the lie space, diag diagonal matrix, det determinant, assuming ∈ ->The singular value decomposition result of +.>Wherein->And->Are parameters in the singular value decomposition result.
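A compact numpy sketch of this projection, which is the standard SVD-based projection onto $SO(3)$:

```python
import numpy as np

def project_to_SO3(R_tilde):
    # SVD-based orthogonal projection onto SO(3), enforcing det = +1.
    Ur, _, Vrt = np.linalg.svd(R_tilde)
    D = np.diag([1.0, 1.0, np.linalg.det(Ur @ Vrt)])
    return Ur @ D @ Vrt
```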
Example 2
In a second aspect, based on embodiment 1, the present embodiment provides a camera pose estimation device based on feature point uncertainty, including a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the method according to embodiment 1.
Example 3
In a third aspect, based on embodiment 1, the present embodiment provides an apparatus, comprising,
a memory;
a processor;
and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of embodiment 1.
Example 4
In a fourth aspect, based on embodiment 1, the present embodiment provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the method described in embodiment 1.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is only a preferred embodiment of the application. It should be noted that those skilled in the art may make various modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations shall also be deemed to fall within the scope of protection of the application.

Claims (10)

1. A camera pose estimation method based on feature point uncertainty, the method comprising:
extracting uncertainty information of the feature points from the pixel gray-level distribution of the feature point data space, and constructing the inverse covariance matrix $Q^{-1}$ of the feature point measurement errors;
performing singular value decomposition on the inverse covariance matrix $Q^{-1}$ of the feature points to obtain the affine transformation matrix $F$ of the feature points;
weighting the feature points by uncertainty based on the affine transformation matrix $F$ of the feature points, and constructing a weighted algebraic error function;
obtaining a linear equation model from the weighted algebraic error function;
rewriting the linear equation model as a nonlinear equation model;
obtaining a consistent noise variance based on the nonlinear equation model, and solving the nonlinear equation model based on the consistent noise variance to obtain a bias-consistent closed-form solution $\hat{\theta}$;
obtaining the estimate $\hat{R}$ of the rotation matrix and the estimate $\hat{t}$ of the translation vector from the closed-form solution $\hat{\theta}$, and outputting the camera pose estimation result.
2. The camera pose estimation method based on feature point uncertainty according to claim 1, wherein extracting uncertainty information of the feature points from the pixel gray-level distribution of the feature point data space and constructing the inverse covariance matrix of the feature point measurement errors comprises:
describing the uncertainty of a feature point by the inverse covariance matrix $Q^{-1}$ of its measurement error, modeled as

$$Q^{-1} = \frac{1}{w(u,v)} \sum_{(u,v)\in W} \begin{bmatrix} \left(\frac{\partial I}{\partial u}\right)^{2} & \frac{\partial I}{\partial u}\frac{\partial I}{\partial v} \\ \frac{\partial I}{\partial u}\frac{\partial I}{\partial v} & \left(\frac{\partial I}{\partial v}\right)^{2} \end{bmatrix}$$

where $Q$ is the covariance matrix of the measurement error, $W$ is an elliptical region centered on the feature point, $w(u,v)$ is the sum of the pixel gray levels over the elliptical region, and $\partial I/\partial u$ and $\partial I/\partial v$ are the image gradients in the $u$ and $v$ directions.
3. The camera pose estimation method based on feature point uncertainty according to claim 1, wherein performing singular value decomposition on the inverse covariance matrix $Q^{-1}$ of the feature points to obtain the affine transformation matrix $F$ of the feature points comprises:
performing singular value decomposition on $Q^{-1}$ to obtain $Q^{-1} = U\Sigma^{-1}U^{T}$, where $\Sigma = \mathrm{diag}(\sigma_{1}, \sigma_{2})$;
taking the affine transformation matrix $F = \Sigma^{-1/2}U^{T}$;
where $\sigma_{1}$ and $\sigma_{2}$ represent the uncertainty of the feature point, $\Sigma^{-1/2}$ transforms the uncertainty ellipse into a unit circle, and $U^{T}$ is a rotation matrix that rotates the tilted uncertainty ellipse into alignment with the $u$ and $v$ directions of the image plane;
the feature point data space is projected into the weighted covariance space through the affine transformation matrix $F$, converting the feature point measurement errors from an anisotropic distribution to an isotropic distribution.
4. The camera pose estimation method based on feature point uncertainty according to claim 1, wherein weighting the feature points by uncertainty based on the affine transformation matrix $F$ of the feature points and constructing a weighted algebraic error function comprises:
the camera pinhole imaging model is

$$d_{i}\begin{bmatrix} q_{i} \\ 1 \end{bmatrix} = K\begin{bmatrix} R & t \end{bmatrix}\begin{bmatrix} p_{i}^{w} \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_{x} & 0 & u_{0} \\ 0 & f_{y} & v_{0} \\ 0 & 0 & 1 \end{bmatrix}$$

where $K$ is the camera intrinsic matrix in pixel units and is a known quantity; $q_{i} = [u_{i}\; v_{i}]^{T}$ is the $i$-th pixel plane feature point; $p_{i}^{w}$ is the corresponding reference point in the world coordinate system; $d_{i}$ is the depth; and $R$ and $t$ are the extrinsic parameters to be solved, namely the rotation matrix and the translation vector;
taking the measurement errors $\varepsilon_{i}$ of the feature points into account, the observed point $q_{i} + \varepsilon_{i}$ replaces $q_{i}$ in the camera pinhole imaging model; since the principal point $[u_{0}\; v_{0}]^{T}$ is known, the model is simplified, and the feature points are weighted by left-multiplying with $F_{i}$, giving the observation equation with weighted points $\bar{q}_{i} = F_{i} q_{i}$;
the weighted algebraic error function is the sum of the squared weighted residuals,

$$\min_{R,\,t} \sum_{i=1}^{n} \left\| F_{i}\,\varepsilon_{i} \right\|^{2}$$

where $f_{x}$ and $f_{y}$ are the pixel focal lengths in the $x$ and $y$ directions, $u_{0}$ and $v_{0}$ are the centers of the pixel plane in the $u$ and $v$ directions, $\varepsilon_{i}$ is the measurement error of the $i$-th pixel plane feature point, $\bar{q}_{i}$ is the $i$-th weighted pixel plane feature point, and $F_{i}$ is the affine transformation matrix of the $i$-th pixel plane feature point.
5. The camera pose estimation method based on feature point uncertainty according to claim 4, wherein obtaining the linear equation model from the weighted algebraic error function comprises:
multiplying both sides of the weighted algebraic error function by $d_{i}$ and, according to the camera pinhole imaging model and the intermediate parameter $e = [0\;0\;0\;1]^{T}$, obtaining the measurement equation;
rewriting the measurement equation in matrix form to obtain the linear equation model, in which $M$ is the coefficient matrix, the unknown vector $x$ is composed of the extrinsic parameters $R$ and $t$, and the remaining term is the measurement error; $M$ is assembled using the Kronecker (tensor) product $\otimes$, where $I_{3}$ denotes the $3\times 3$ identity matrix.
6. The camera pose estimation method based on feature point uncertainty according to claim 5, wherein rewriting the linear equation model into the nonlinear equation model comprises:
introducing constraints to eliminate the scale ambiguity; denoting the rotation matrix as $R = [r_{1}\; r_{2}\; r_{3}]^{T}$ and the translation vector as $t = [t_{1}\; t_{2}\; t_{3}]^{T}$, and substituting them into the linear equation model;
rewriting the result in matrix form to obtain the nonlinear equation model

$$A\theta + \eta = b$$

where $b$ is the vector formed by the weighted feature points, $\eta$ is the error term, $\theta$ is the vector of unknowns to be solved, and $A$ is the coefficient matrix;
where $\alpha$ is a scale factor, $n$ is the number of feature points, $\bar{p}^{w}$ denotes the centroid coordinates of the feature points in the world coordinate system, and $\bar{u}_{i}$ and $\bar{v}_{i}$ denote the coordinates of the $i$-th weighted pixel plane feature point in the $u$ and $v$ directions.
7. The camera pose estimation method based on feature point uncertainty according to claim 1, wherein obtaining the consistent noise variance based on the nonlinear equation model and solving the nonlinear equation model based on the consistent noise variance to obtain the bias-consistent closed-form solution $\hat{\theta}$ comprises:
constructing the matrices $\Phi$ and $\Delta$ from the coefficient matrix $A$, the vector $b$ formed by the weighted feature points, and the centroid coordinates of the feature points in the world coordinate system;
constructing the function $H(\lambda) = \Phi - \lambda\Delta$, where $\lambda$ is a generalized eigenvalue, and solving the generalized eigenvalue problem to obtain the consistent noise variance $\hat{\sigma}^{2}$;
solving the nonlinear equation model based on the consistent noise variance to obtain the bias-consistent closed-form solution $\hat{\theta}$.
8. The camera pose estimation method based on feature point uncertainty according to claim 1, wherein the estimate $\hat{R}$ of the rotation matrix and the estimate $\hat{t}$ of the translation vector are obtained from the closed-form solution $\hat{\theta}$.
9. The camera pose estimation method based on feature point uncertainty according to claim 8, further comprising: projecting the preliminary rotation estimate $\tilde{R}$ onto the Lie group $SO(3)$ by singular value decomposition, as follows:

$$\hat{R} = U_{r}\,\mathrm{diag}\!\left(1,\;1,\;\det\!\left(U_{r}V_{r}^{T}\right)\right)V_{r}^{T}$$

where the projection assumes the singular value decomposition $\tilde{R} = U_{r} S V_{r}^{T}$, $\mathrm{diag}$ denotes a diagonal matrix, $\det$ denotes the determinant, and $U_{r}$ and $V_{r}$ are the factors of the singular value decomposition.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of claims 1 to 9.
CN202310666923.XA 2023-06-07 2023-06-07 Feature point uncertainty-based monocular camera pose estimation method Pending CN117197231A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310666923.XA CN117197231A (en) 2023-06-07 2023-06-07 Feature point uncertainty-based monocular camera pose estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310666923.XA CN117197231A (en) 2023-06-07 2023-06-07 Feature point uncertainty-based monocular camera pose estimation method

Publications (1)

Publication Number Publication Date
CN117197231A (en) 2023-12-08

Family

ID=89000439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310666923.XA Pending CN117197231A (en) 2023-06-07 2023-06-07 Feature point uncertainty-based monocular camera pose estimation method

Country Status (1)

Country Link
CN (1) CN117197231A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination