WO2014034035A1 - Position/orientation estimation device, position/orientation estimation method, and position/orientation estimation program - Google Patents


Info

Publication number
WO2014034035A1
Authority
WO
WIPO (PCT)
Prior art keywords
posture
unknown
camera
candidates
candidate
Prior art date
Application number
PCT/JP2013/004849
Other languages
English (en)
Japanese (ja)
Inventor
中野 学 (Gaku Nakano)
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to JP2014532761A (granted as patent JP6260533B2)
Publication of WO2014034035A1

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Definitions

  • the present invention relates to a technique for estimating the relative position and orientation of a camera that has captured a plurality of images.
  • At least two images are required to estimate the camera position and orientation. When there are three or more images, pose estimation can be handled by processing the images in pairs.
  • a method for estimating the relative position and orientation between two images will be described below.
  • the reason for estimating the relative position and orientation is that, when the input consists only of images, the absolute amount of change in position and orientation between the two images cannot be estimated.
  • one of the two images is taken as the origin of the three-dimensional space. If the origin and a distance scale are known, an appropriate coordinate transformation may be applied to the estimated relative position and orientation.
  • the first image is taken as the coordinate origin
  • the orientation of the second image is a three-dimensional rotation matrix with three degrees of freedom
  • the position, which is indeterminate in scale, is a three-dimensional vector with two degrees of freedom.
  • the epipolar equation is an equation representing the geometric relationship between the image coordinates (hereinafter referred to as corresponding points) representing the same three-dimensional coordinates on the two images and the camera position and orientation.
  • Non-Patent Document 1 describes a method of estimating a camera position and orientation by solving an epipolar equation using eight or more pairs of corresponding points on two images.
  • the corresponding points are converted into a parameter called the E matrix (Essential Matrix), a 3×3 matrix determined by the camera position and orientation.
  • the constraint condition is that one of three singular values obtained by singular value decomposition of the E matrix is zero and two are equal.
  • the E matrix is corrected after the fact so as to satisfy the constraint condition, and the corrected E matrix is decomposed to estimate the position and orientation.
  • the correction is to perform singular value decomposition on the estimated E matrix and change the singular value as described above.
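The after-the-fact correction described above can be sketched in a few lines of numpy. This is a generic illustration of projecting an estimated E matrix onto the constraint (two equal singular values, one zero); the function name and the choice of averaging the two largest singular values are illustrative, not taken from Non-Patent Document 1.

```python
import numpy as np

def enforce_essential_constraint(E):
    """Replace the singular values of a 3x3 matrix by (s, s, 0) so that
    it satisfies the essential-matrix constraint."""
    U, sigma, Vt = np.linalg.svd(E)
    s = (sigma[0] + sigma[1]) / 2.0  # average the two largest singular values
    return U @ np.diag([s, s, 0.0]) @ Vt

# Example: a noisy estimate whose singular values violate the constraint
E_noisy = np.array([[0.9, 0.1, 0.0],
                    [0.2, 1.1, 0.3],
                    [0.0, 0.4, 0.5]])
E_fixed = enforce_essential_constraint(E_noisy)
sv = np.linalg.svd(E_fixed, compute_uv=False)
```

After this projection, the corrected E matrix can be decomposed into a rotation and a translation as in the cited method.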
  • Since the estimation accuracy of the camera position and orientation by the method described in Non-Patent Document 1 depends on the extraction accuracy of the corresponding point pairs, it is known that the accuracy degrades greatly when the differences between the image coordinates of corresponding points are small, such as when the camera motion is minute or when the direction of motion is close to the optical axis direction.
  • Non-Patent Document 2 describes an epipolar equation that is satisfied by three pairs of corresponding points given two of the three posture degrees of freedom obtained from an acceleration sensor or from one vanishing point (hereinafter referred to as the known posture), and a method for directly estimating the position and orientation by solving it. Since the parameters to be estimated are reduced to a total of three degrees of freedom, two representing the position and one representing the remaining posture (hereinafter referred to as the unknown posture), it is more accurate than the method of Non-Patent Document 1.
  • Non-Patent Document 3 describes a method for estimating the position and orientation by dividing into cases of 3, 4, or 5 or more corresponding points.
  • an E matrix is obtained from a linear epipolar equation, the E matrix is corrected so as to satisfy the constraint conditions, and the corrected E matrix is decomposed to estimate the position and orientation.
  • As in Non-Patent Document 2, if two of the posture parameters are known, the camera position and posture can be estimated from a minimum of three pairs of corresponding points. However, since the corresponding points are affected by observation errors caused by quantization and sensor noise inside the camera, the camera position and orientation cannot be estimated with high accuracy from only three pairs. In general, three pairs yield multiple camera position and orientation candidates, and it is mathematically impossible to decide which of them should be selected. To estimate a unique camera position and orientation with high accuracy, four or more pairs of corresponding points are necessary. However, the position and orientation estimation method described in Non-Patent Document 2 covers only the case of exactly three pairs of corresponding points and cannot be applied to four or more pairs.
  • The method of Non-Patent Document 3 estimates the position and orientation simultaneously and therefore requires a correction means, which demands a large amount of computation; specifically, matrix operations such as singular value decomposition must be performed for the correction.
  • An object of the present invention is to provide a position/orientation estimation apparatus, a position/orientation estimation method, and a position/orientation estimation program that can estimate the relative position and orientation of a camera that has captured two images with high accuracy and at high speed.
  • The position/orientation estimation apparatus according to the present invention receives three or more pairs of corresponding points included in two images, together with the two known posture parameters (hereinafter, the known postures) among the three posture parameters representing the relative orientation of the camera that captured the two images. It comprises: an unknown posture candidate calculation unit that outputs, as unknown posture candidates, all solutions of the unknown posture (the remaining, unknown posture parameter) that satisfy a predetermined function expressed using the corresponding points, with the unknown posture and the relative position of the camera as variables; a position candidate calculation unit that calculates a relative position candidate of the camera for each unknown posture candidate; and a minimum error candidate extraction unit that receives all the unknown posture candidates, their position candidates, and the corresponding points, and extracts the one or more unknown postures and relative positions of the camera that minimize a predetermined error function representing the geometric relationship among the corresponding points, the relative position of the camera, and the unknown posture.
  • Similarly, the position/orientation estimation method according to the present invention uses three or more pairs of corresponding points included in two images and the two known posture parameters (the known postures) among the three posture parameters representing the relative orientation of the camera that captured the two images. With the unknown posture (the remaining, unknown posture parameter) and the relative position of the camera as variables, it calculates as unknown posture candidates all solutions of the unknown posture satisfying a predetermined function expressed using the corresponding points; it calculates a relative position candidate of the camera for each unknown posture candidate; and, taking as input all the unknown posture candidates, their position candidates, and the corresponding points, it extracts from among them the one or more unknown postures and positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the relative position of the camera, and the unknown posture.
  • The position/orientation estimation program according to the present invention causes a computer to execute: an unknown posture candidate calculation process that, given three or more pairs of corresponding points included in two images and the two known posture parameters (the known postures) among the three posture parameters representing the relative orientation of the camera that captured the two images, outputs as unknown posture candidates all solutions of the unknown posture (the remaining, unknown posture parameter) satisfying a predetermined function expressed using the corresponding points, with the unknown posture and the relative position of the camera as variables; a position candidate calculation process that calculates a relative position candidate of the camera for each unknown posture candidate; and a minimum error candidate extraction process that receives all the unknown posture candidates, their position candidates, and the corresponding points, and extracts the one or more unknown postures and relative positions of the camera that minimize a predetermined error function representing the geometric relationship among the corresponding points, the relative position of the camera, and the unknown posture.
  • the relative position and orientation of a camera that has captured two images can be estimated with high accuracy and high speed.
  • FIG. 1 is a block diagram showing a configuration example of a first embodiment (Embodiment 1) of the position/orientation estimation apparatus according to the present invention.
  • the position and orientation estimation apparatus shown in FIG. 1 includes an unknown orientation candidate calculation unit 1, a position candidate calculation unit 2, and a minimum error candidate extraction unit 3.
  • the unknown posture candidate calculation unit 1 includes a coefficient calculation unit 11, a simultaneous polynomial solution unit 12, a real solution extraction unit 13, and an unknown posture candidate conversion unit 14.
  • The unknown posture candidate calculation unit 1 receives three or more pairs of corresponding points of two images and, as known postures, the two known parameters among the three posture parameters representing the relative posture of the camera that captured the two images (hereinafter sometimes simply referred to as the "posture").
  • With the remaining, unknown posture parameter (the unknown posture) and the relative position of the camera (hereinafter sometimes simply referred to as the "position") as variables, the unknown posture candidate calculation unit 1 calculates all unknown postures satisfying a predetermined function expressed using the corresponding points and the unknown posture.
  • a predetermined function represented by a corresponding point and an unknown posture is simply referred to as a simultaneous polynomial.
  • the coefficient calculation unit 11 inputs three or more sets of corresponding points and two known postures, calculates the coefficient of the simultaneous polynomial, and outputs it to the simultaneous polynomial solving unit 12.
  • the simultaneous polynomial solving unit 12 receives the coefficients of the simultaneous polynomial calculated by the coefficient calculating unit 11, solves the simultaneous polynomial, and outputs all the solutions.
  • the real number solution extraction unit 13 inputs all the solutions of the simultaneous polynomials calculated by the simultaneous polynomial solution unit 12, extracts all the real number solutions from the solutions, and outputs them. If no real number solution can be extracted, the real number solution extraction unit 13 stops the subsequent processing and outputs a no solution flag.
  • the unknown posture candidate conversion unit 14 receives all the real solutions extracted by the real number solution extraction unit 13 and calculates and outputs unknown posture candidates.
  • the unknown posture candidate conversion unit 14 converts each real solution into a value that is treated as one unknown posture candidate in subsequent calculations.
  • the position candidate calculation unit 2 is a processing unit that receives the unknown posture candidate calculated by the unknown posture candidate calculation unit 1, calculates a relative position candidate of the camera, and outputs the calculated candidate.
  • The minimum error candidate extraction unit 3 receives the unknown posture candidates, the position candidates, and the corresponding points, and calculates and outputs the one or more unknown postures and positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the unknown posture, and the position. The minimum error candidate extraction unit 3 may also receive the known postures when the corresponding points are replaced using the known postures, as described later.
  • The unknown posture candidate calculation unit 1 (more specifically, the coefficient calculation unit 11, the simultaneous polynomial solving unit 12, the real solution extraction unit 13, and the unknown posture candidate conversion unit 14), the position candidate calculation unit 2, and the minimum error candidate extraction unit 3 are realized by, for example, hardware designed to perform the specific arithmetic processing, or by an information processing device such as a CPU (Central Processing Unit) operating according to a program.
  • FIG. 2 is a flowchart showing an example of the operation of the first embodiment of the position / orientation estimation apparatus according to the present invention.
  • First, the coefficient calculation unit 11 calculates the coefficients of a predetermined function (hereinafter referred to as the simultaneous polynomial) that takes the unknown posture and the relative position of the camera as variables and is expressed using the corresponding points and the unknown posture, and outputs them to the simultaneous polynomial solving unit 12 (step S11).
  • the posture of the camera is represented by a three-degree-of-freedom rotation matrix or three variables necessary to express the rotation matrix.
  • The degree of freedom of the posture is an index of how many independent variables are needed to express the nine elements of the 3×3 matrix representing the posture of the camera.
  • the two known poses are two known variables out of the three variables or a rotation matrix expressed by two degrees of freedom.
  • the unknown pose is a rotation matrix expressed with one remaining variable or one degree of freedom.
  • The simultaneous polynomial whose coefficients the coefficient calculation unit 11 computes is uniquely determined by how the degrees of freedom of the camera posture are expressed. The coefficient calculation unit 11 therefore only needs to calculate the coefficients of the simultaneous polynomial from, for example, three or more pairs of corresponding points and the two known postures, in accordance with a predefined expression of the degrees of freedom of the camera posture, that is, a predefined type of simultaneous polynomial.
  • The coefficient calculation unit 11 may also predefine several types of simultaneous polynomials corresponding to different expressions of the degrees of freedom of the camera posture, and select the type to use according to the values of setting parameters read at startup or the like.
  • the simultaneous polynomial solving unit 12 inputs the coefficient of the simultaneous polynomial calculated by the coefficient calculating unit 11, and solves the simultaneous polynomial using the corresponding point and the known posture input in step S11 (step S12). Further, the simultaneous polynomial solving unit 12 outputs all the solutions satisfying the simultaneous polynomial to the real number solution extracting unit 13.
  • The solutions of the simultaneous polynomial at this stage may include values that are inappropriate as unknown postures; for example, some or all of them may be complex numbers.
  • When all the solutions of the simultaneous polynomial solved by the simultaneous polynomial solving unit 12 are input to the real solution extraction unit 13 and real solutions exist among them (Yes in step S13), the real solution extraction unit 13 extracts all the real solutions and outputs them to the unknown posture candidate conversion unit 14. When no real solution can be extracted, for example when all the solutions are complex numbers, the real solution extraction unit 13 outputs a "no solution" flag as the position and orientation estimation result and ends the operation (No in step S13; step S17).
  • the “no solution” flag may be a true / false value, for example, or may be a position / orientation value indicating no solution determined in advance.
  • the unknown posture candidate conversion unit 14 inputs all the real number solutions and converts them into unknown posture candidates (step S14).
  • the unknown posture candidates obtained as a result of the calculation are output to the position candidate calculation unit 2 and the minimum error candidate extraction unit 3.
  • That is, the unknown posture candidate conversion unit 14 performs the processing of converting each obtained real solution into a value serving as an unknown posture candidate (a single numerical value representing the unknown posture, or a 3×3 rotation matrix with one degree of freedom).
  • Next, the position candidate calculation unit 2 receives all the unknown posture candidates obtained in step S14, calculates the relative position of the camera corresponding to each unknown posture, and outputs them as position candidates to the minimum error candidate extraction unit 3 (step S15).
  • The minimum error candidate extraction unit 3 receives the unknown posture candidates, the position candidates, and the corresponding points, and calculates and outputs the one or more unknown postures and positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the unknown posture, and the position (step S16). The minimum error candidate extraction unit 3 does not need to receive the known postures when the corresponding points are replaced using the known postures as described later, and may receive the known postures when they are not replaced.
  • When the number of input corresponding points is three pairs, the minimum error candidate extraction unit 3 outputs all of the input unknown posture candidates and position candidates as the unknown postures and positions. When the number of input corresponding points is four or more, the minimum error candidate extraction unit 3 calculates and outputs, from among all the input unknown posture candidates and position candidates, the position and unknown posture that minimize the predetermined error function. The reason this calculation is not performed for three corresponding points is that the error is zero for every unknown posture candidate, so it is impossible to determine which candidate is closest to the correct answer. With four or more corresponding points, the error differs between candidates, so the single candidate with the smallest error can be selected.
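The branch on the number of corresponding points can be sketched as follows, taking the error of a candidate to be the minimum eigenvalue of BᵀB. The helper `build_B` and its row construction v′ × (R v) are illustrative stand-ins for Equation (7), not the patent's exact definition.

```python
import numpy as np

def build_B(corr, R):
    # Illustrative stand-in for Equation (7): one row per correspondence,
    # chosen so that B @ t = 0 holds for the true translation t.
    return np.array([np.cross(vp, R @ v) for v, vp in corr])

def select_min_error(candidates, corr):
    # With exactly 3 pairs every candidate has zero error, so all are kept;
    # with 4 or more pairs the candidate minimizing the smallest
    # eigenvalue of B^T B is selected.
    if len(corr) == 3:
        return candidates
    errors = []
    for R, t in candidates:
        B = build_B(corr, R)
        errors.append(np.linalg.eigvalsh(B.T @ B)[0])  # ascending order
    return [candidates[int(np.argmin(errors))]]

# Synthetic check: 5 points seen from two cameras related by (I, [1, 0, 0])
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (5, 3)) + np.array([0.0, 0.0, 4.0])
R_true, t_true = np.eye(3), np.array([1.0, 0.0, 0.0])
corr = [(x, R_true @ x + t_true) for x in X]
c, s = np.cos(0.3), np.sin(0.3)
R_wrong = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
best = select_min_error([(R_wrong, t_true), (R_true, t_true)], corr)
```

With five correspondences the wrong rotation leaves a strictly positive residual, so the true candidate is the one selected.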
  • Example 1 Hereinafter, the first embodiment will be described using a specific example. First, the epipolar equation with the unknown posture and the relative position of the camera as variables will be described.
  • In the following, I denotes the identity matrix, det the determinant, ‖·‖ the L2 norm of a vector, and [·]× the matrix representation of the cross product of three-dimensional vectors.
  • Let the number of input corresponding point pairs be n, the i-th corresponding point coordinate of the first image v_i, and the i-th corresponding point coordinate of the second image v′_i. It is assumed that the internal parameters of the camera, such as the focal length and the optical center, are already calibrated, and that v_i and v′_i are expressed in homogeneous coordinates; the numbers of v_i and v′_i are equal. If the two known postures are α and β and the unknown posture is θ, the relative posture of the camera is expressed as the rotation matrix of Equation (1).
  • Here the posture is expressed as in Equation (1), but it can be expressed in other ways depending on how the axis directions, the rotation directions, and the order of multiplication are defined.
  • the known posture is expressed by two parameters ⁇ and ⁇ .
  • The known posture may also be expressed using a unit quaternion, or, as described in Non-Patent Document 2, using a rotation axis and a rotation amount.
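As a concrete illustration of splitting the rotation into one unknown and two known degrees of freedom, the following sketch builds a rotation as a product of single-axis rotations. The axis assignment here (θ about z, α about x, β about y) is an arbitrary convention chosen for the example; Equation (1) may order and assign the axes differently.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

alpha, beta, theta = 0.1, -0.2, 0.7   # two known angles, one unknown
R = rot_z(theta) @ rot_x(alpha) @ rot_y(beta)  # analogue of Equation (1)
```

Whatever the convention, the product of the three one-parameter rotations is itself a valid rotation (orthogonal, determinant one).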
  • The degree of freedom of t is set to 2 because of the scale indeterminacy. Since α and β are already known, replacing v_i with R_α R_β v_i transforms the epipolar equation into the form of Equation (3).
  • Equation (3) can be solved if there are a minimum of three corresponding points, and a least squares solution can be calculated if there are four or more pairs.
  • Expression (3) is expressed as Expression (4).
  • A is a 3 ⁇ 3 matrix including ⁇ as a variable.
  • Equation (4) indicates that t is a zero vector or a null space of A. Since t is not a zero vector, t is represented as an eigenvector corresponding to the eigenvalue zero of A. That is, ⁇ is a solution of a function in which ⁇ is a variable and one eigenvalue of A is zero.
  • Equation (5) is not a simultaneous polynomial but a univariate polynomial.
  • the simultaneous polynomial solving unit 12 substitutes the obtained ⁇ for A, and calculates t as an eigenvector corresponding to the eigenvalue zero of A.
  • the norm of t is normalized to 1.
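Numerically, the eigenvector of A for eigenvalue zero is commonly computed via the singular value decomposition, one of the two options the text mentions later. A minimal sketch with a rank-2 toy matrix:

```python
import numpy as np

def null_vector(A):
    """Unit vector t with A @ t ~= 0, taken as the right-singular vector
    for the smallest singular value (here equivalent to the eigenvector
    of A for eigenvalue zero)."""
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]
    return t / np.linalg.norm(t)  # normalized to 1 as in the text

# Example: rank-2 matrix whose null direction is (0, 0, 1)
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 0.0],
              [5.0, 6.0, 0.0]])
t = null_vector(A)
```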
  • B is an n ⁇ 3 matrix that is expressed as in Expression (7) and includes ⁇ as a variable.
  • The error represented by Equation (6) is the minimum eigenvalue of BᵀB, and t is the eigenvector corresponding to that minimum eigenvalue of BᵀB (not the zero vector). That is, θ is a solution of the function, with θ as the variable, that minimizes one eigenvalue of BᵀB. For example, since BᵀB is a 3×3 matrix, its eigenvalues can be written down with θ as a variable and obtained by an iterative solution, as for Equation (4). In addition, since one eigenvalue is minimized, the simultaneous polynomial solving unit 12 may instead obtain a solution of Equation (8) below.
  • Alternatively, the simultaneous polynomial solving unit 12 may replace cos θ and sin θ in Expression (8) with two independent variables c and s, and solve Equation (9), obtained by adding c² + s² = 1 as a new conditional expression.
  • Equation (9) is not a simultaneous polynomial but a univariate polynomial.
  • the simultaneous polynomial solving unit 12 substitutes the obtained ⁇ for B T B, and calculates t as an eigenvector corresponding to the minimum eigenvalue of B T B.
  • the norm of t is normalized to 1.
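For four or more correspondences, t is taken as the eigenvector of BᵀB for its smallest eigenvalue. A sketch with a toy 4×3 matrix B whose rows are all orthogonal to (0, 1, 0), so the least-squares solution is known:

```python
import numpy as np

def least_squares_t(B):
    # Eigenvector of B^T B for its smallest eigenvalue; eigh returns
    # eigenvalues in ascending order, so column 0 is the minimizer.
    w, V = np.linalg.eigh(B.T @ B)
    t = V[:, 0]
    return t / np.linalg.norm(t)  # normalized to 1 as in the text

B = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 0.0, 2.0]])
t = least_squares_t(B)
```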
  • the two known postures can be acquired using, for example, an acceleration sensor or a vanishing point.
  • With an acceleration sensor, the direction of gravity is measured while the camera is stationary.
  • Although the direction of gravity is represented by a three-dimensional vector, the magnitude of its norm is ignored, so it has two degrees of freedom. Therefore, the difference in the direction of gravity between the two captured images can be expressed by the two parameters α and β.
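One way to see the two degrees of freedom: a unit gravity direction can be written with two spherical angles. This azimuth/polar parameterization is only an example and is not necessarily the (α, β) used in the patent.

```python
import numpy as np

def gravity_to_angles(g):
    # Normalize (the norm carries no information) and express the
    # direction by two spherical angles.
    g = np.asarray(g, dtype=float)
    g = g / np.linalg.norm(g)
    azimuth = np.arctan2(g[1], g[0])
    polar = np.arccos(np.clip(g[2], -1.0, 1.0))
    return azimuth, polar

def angles_to_gravity(azimuth, polar):
    return np.array([np.sin(polar) * np.cos(azimuth),
                     np.sin(polar) * np.sin(azimuth),
                     np.cos(polar)])

g = np.array([0.1, -0.3, -9.8])   # hypothetical raw accelerometer reading
az, po = gravity_to_angles(g)
g_rec = angles_to_gravity(az, po)
```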
  • a vanishing point is a point where two or more parallel lines in a three-dimensional space intersect on an image by projective transformation.
  • Non-patent document 2 describes a method for calculating a known posture from one vanishing point.
  • In step S12, the simultaneous polynomial solving unit 12 solves the simultaneous polynomial using the coefficients output from the coefficient calculation unit 11, and outputs the solutions of Equation (5) or Equation (9) to the real solution extraction unit 13.
  • Various solution methods can be used, such as reduction to a univariate polynomial using a Sylvester matrix, or simultaneous computation of the solutions in the two variables using a Gröbner basis.
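The Sylvester-matrix (resultant) idea can be illustrated on a low-degree stand-in. For a polynomial that is linear in c = cos θ and s = sin θ, say a·c + b·s + d, eliminating s against the constraint c² + s² − 1 gives the univariate resultant b²(c² − 1) + (a·c + d)², whose real roots yield candidate values of c. Equation (8) itself is of higher degree, so this is only a sketch of the technique with placeholder coefficients.

```python
import numpy as np

def eliminate_s(a, b, d):
    # Resultant of a*c + b*s + d and c^2 + s^2 - 1 with respect to s:
    # b^2*(c^2 - 1) + (a*c + d)^2, as coefficients of [c^2, c, 1].
    return np.array([a * a + b * b, 2.0 * a * d, d * d - b * b])

a, b, d = 2.0, 3.0, -1.0                 # placeholder coefficients
coeffs = eliminate_s(a, b, d)
roots = np.roots(coeffs)
real_c = roots[np.abs(roots.imag) < 1e-9].real
sols = [(cv, -(a * cv + d) / b) for cv in real_c]  # back-substitute for s
```

Each surviving (c, s) pair lies on the unit circle and satisfies the original linear relation, exactly the property the elimination is meant to preserve.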
  • In step S13, the real solution extraction unit 13 receives the solutions of the simultaneous polynomial expressed by Equation (5) or Equation (9), computes all the real solutions among them, and determines whether a real solution exists. If real solutions exist, the real solution extraction unit 13 outputs them to the unknown posture candidate conversion unit 14. If all the solutions are complex numbers, the real solution extraction unit 13 outputs a no solution flag and ends the operation (step S17).
  • the no solution flag may be, for example, a true / false value or a position / orientation value indicating no solution determined in advance.
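Steps S13 and S17 amount to filtering the roots of a univariate polynomial for (near-)real values. A minimal sketch, where the "no solution" flag is modeled as a boolean:

```python
import numpy as np

def real_roots(coeffs, tol=1e-8):
    # coeffs: polynomial coefficients, highest degree first.
    # Returns (real_roots, found); found == False models the
    # "no solution" flag of step S17.
    r = np.roots(coeffs)
    re = r[np.abs(r.imag) < tol].real
    return re, re.size > 0

roots1, found1 = real_roots([1.0, 0.0, -4.0])  # x^2 - 4: two real roots
roots2, found2 = real_roots([1.0, 0.0, 1.0])   # x^2 + 1: complex only
```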
  • In step S14, the unknown posture candidate conversion unit 14 receives all the real solutions, calculates the unknown posture candidates, and outputs them to the position candidate calculation unit 2 and the minimum error candidate extraction unit 3.
  • An unknown posture candidate is given, for example, by R_θ of Equation (1). If there are k real solutions, there are k values of θ; the unknown posture candidate conversion unit 14 substitutes the k values of θ into Equation (1) one by one to obtain k unknown posture candidates (3×3 matrices).
  • In step S15, the position candidate calculation unit 2 calculates position candidates for all the input unknown posture candidates and outputs them to the minimum error candidate extraction unit 3.
  • the position candidate is an eigenvector corresponding to the eigenvalue zero of A or an eigenvector corresponding to the smallest eigenvalue of B T B.
  • the eigenvector can be obtained by eigenvalue decomposition or singular value decomposition.
  • In step S16, the minimum error candidate extraction unit 3 first receives the corresponding points, the unknown posture candidates, and the position candidates.
  • the minimum error candidate extraction unit 3 performs different operations according to the input number of corresponding points.
  • The minimum error candidate extraction unit 3 extracts the unknown posture candidate and the position candidate that minimize the minimum eigenvalue of BᵀB, and outputs them as the unknown posture and position.
  • In the above, the candidates for the optimal solution are narrowed down by batch processing over all real solutions, but the solutions of the simultaneous polynomial can also be processed sequentially, one by one. For example, each time the simultaneous polynomial solving unit 12 obtains one solution, it outputs that solution to the real solution extraction unit 13. If the real solution extraction unit 13 determines that the solution is real, the conversion into a posture candidate by the unknown posture candidate conversion unit 14 and the minimum error candidate determination by the minimum error candidate extraction unit 3 are executed in a loop.
  • In that case, as the minimum error candidate determination process, the minimum error candidate extraction unit 3 stores the candidate that minimizes the error so far, and each time a real solution is extracted it may perform the error calculation, the comparison with the current minimum error, and the overwriting of the minimum candidate and minimum value.
  • the branching process based on the number of posture candidates and the number of input points may be collectively performed by the minimum error candidate extraction unit 3.
  • In this embodiment, the minimum error candidate extraction unit 3 uses the minimum eigenvalue of BᵀB as the error, but det(BᵀB) may be used instead. In that case, immediately after the unknown posture candidate conversion unit 14, the minimum error candidate extraction unit 3 calculates det(BᵀB), selects only the one unknown posture candidate with the smallest value, and outputs it to the position candidate calculation unit 2. As other errors, the Euclidean distance between an epipolar line and the corresponding point, or the Sampson error, which is an approximation of that Euclidean distance, may be used.
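As an illustration of the last alternative, the Sampson error for the epipolar constraint v′ᵀ E v = 0 has the standard closed form below; this is the textbook formula, not code from the patent.

```python
import numpy as np

def sampson_error(E, v, vp):
    # First-order approximation of the geometric (Euclidean) distance of a
    # correspondence (v, v') from satisfying v'^T E v = 0.
    # v and vp are homogeneous 3-vectors.
    Ev = E @ v
    Etvp = E.T @ vp
    num = float(vp @ Ev) ** 2
    den = Ev[0] ** 2 + Ev[1] ** 2 + Etvp[0] ** 2 + Etvp[1] ** 2
    return num / den

# Toy essential matrix E = [t]x R with R = I and t = (1, 0, 0)
E = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
v = np.array([0.0, 0.0, 1.0])          # point in the first image
vp = np.array([1.0, 0.0, 1.0])         # its exact match in the second
err_exact = sampson_error(E, v, vp)
err_noisy = sampson_error(E, v, np.array([1.0, 0.1, 1.0]))
```

An exact correspondence gives zero error, and any perturbation of the matched point gives a strictly positive value.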
  • As described above, the position/orientation estimation apparatus first calculates all the candidates based on the geometric relationship between the position and the unknown posture, and then extracts the candidate with the smallest error from among them. Therefore, the output unknown posture and position are guaranteed to be the globally minimal solution. In other words, unlike the method of Non-Patent Document 3, which ignores the constraint conditions and then corrects afterwards, the unknown posture and position candidates are calculated separately, so no increase in error due to correction occurs.
  • The position / orientation estimation apparatus does not require correction means and can reduce the amount of calculation. That is, matrix operations such as the singular value decomposition used for the correction in Non-Patent Document 3 are unnecessary.
  • the relative position / orientation of the camera can be estimated with high accuracy and stability at a high speed.
  • FIG. 3 is a block diagram showing a configuration example of the second embodiment of the position / orientation estimation apparatus according to the present invention.
  • The configuration of the position / orientation estimation apparatus of this embodiment differs from that of the first embodiment shown in FIG. 1 in that the unknown posture candidate calculation unit 1 further includes a second-order optimality verification unit 4. Since the configuration other than the second-order optimality verification unit 4 is the same as that of the first embodiment, its description is omitted.
  • The second-order optimality verification unit 4 receives the real number solutions output from the real number solution extraction unit 13, verifies the sign of the function obtained by twice differentiating, with respect to the unknown posture, the equation that has the unknown posture as a variable, and outputs each positive real number solution to the unknown posture candidate conversion unit 14 as a candidate satisfying the second-order optimality sufficient condition.
  • If no positive real number solution exists, the second-order optimality verification unit 4 stops the subsequent processing and outputs a no-solution flag.
  • The second-order optimality verification unit 4 is realized by, for example, hardware designed to perform specific arithmetic processing, or an information processing device such as a CPU that operates according to a program.
  • FIG. 4 is a flowchart showing an example of the operation of the position / orientation estimation apparatus of this embodiment. Since the operations other than step S21 are the same as those in the first embodiment, description thereof will be omitted.
  • The second-order optimality verification unit 4 verifies the sign of the function obtained by twice differentiating the equation having the unknown posture as a variable, for each real solution of the simultaneous polynomials extracted by the real number solution extraction unit 13 (step S21), and outputs each positive real number solution to the unknown posture candidate conversion unit 14 as a solution satisfying the second-order optimality sufficient condition. If no real solution satisfies the second-order optimality sufficient condition, the second-order optimality verification unit 4 outputs a no-solution flag and ends the operation (No in step S21, step S17).
  • The no-solution flag may be, for example, a true/false value, or a predetermined posture and position value that represents the absence of a solution.
  • Specifically, the second-order optimality verification unit 4 takes the real number solutions output from the real number solution extraction unit 13 and verifies the sign obtained by substituting each real solution into the function that results from twice differentiating, with respect to the unknown posture, the equation having the unknown posture as a variable.
  • The equation having the unknown posture as a variable may be, for example, an eigenvalue of B^T B written with the unknown posture as a variable, or det(B^T B).
  • If the function with the real solution substituted is positive, the second-order optimality verification unit 4 outputs that solution to the unknown posture candidate conversion unit 14 as a solution satisfying the second-order optimality sufficient condition (Yes in step S21). If none is positive, the second-order optimality verification unit 4 outputs a no-solution flag in the same manner as the real number solution extraction unit 13, and ends the operation (No in step S21, step S17).
  • The second-order optimality verification unit 4 eliminates inappropriate solutions, called saddle points, that satisfy the simultaneous polynomials but are not locally optimal solutions.
  • the real number solution obtained by the second-order optimality verification unit 4 is always a local optimum solution. Therefore, it is guaranteed that the solution having the smallest error among them is a global optimum solution.
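The saddle-point filtering above can be illustrated on a one-dimensional toy cost; the quartic below is a hypothetical stand-in for the actual eigenvalue or determinant function of the unknown posture, used only to show the second-derivative sign test.

```python
def second_order_filter(roots, d2f, tol=1e-9):
    """Keep only the stationary points at which the second derivative is
    positive, i.e. local minima; saddle points and maxima are discarded."""
    return [r for r in roots if d2f(r) > tol]

# Toy cost f(theta) = theta^4 - 2*theta^2: stationary points are -1, 0, +1,
# and f''(theta) = 12*theta^2 - 4 is positive only at the two minima.
minima = second_order_filter([-1.0, 0.0, 1.0], lambda t: 12 * t * t - 4)  # -> [-1.0, 1.0]
```

The stationary point at 0 is a local maximum of the toy cost and is correctly discarded; the surviving roots are all local minima, so the one with the smallest error is the global minimum among them.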
  • When no real solution satisfies the condition, the calculation cost can be reduced by terminating the subsequent processing.
  • the reliability of the position and orientation output as the estimation result can be further improved, and the relative position and orientation of the camera can be estimated at a higher speed.
  • FIG. 5 is a block diagram showing a configuration example of the third embodiment of the position / orientation estimation apparatus according to the present invention.
  • the position / orientation estimation apparatus of the present embodiment further includes a three-dimensional shape restoration unit 5 in addition to the configuration of the first embodiment shown in FIG. 1 or the second embodiment shown in FIG. Since the configuration other than the three-dimensional shape restoration unit 5 is the same as that of the first embodiment or the second embodiment, the description thereof is omitted.
  • The 3D shape restoration unit 5 receives the corresponding points, the known posture, and the position and unknown posture output by the minimum error candidate extraction unit 3, and restores and outputs the three-dimensional coordinates of the corresponding points.
  • the three-dimensional shape restoration unit 5 outputs the position and orientation together with the restored three-dimensional coordinates.
  • the posture may be, for example, three parameters or a rotation matrix.
  • the three-dimensional shape restoration unit 5 is realized by, for example, hardware designed to perform specific arithmetic processing or the like, or an information processing apparatus such as a CPU that operates according to a program.
  • FIG. 6 is a flowchart illustrating an example of the operation of the position / orientation estimation apparatus according to this embodiment.
  • Since the operations other than step S31 are the same as those in the second embodiment, description thereof will be omitted. Note that when the configuration of the position / orientation estimation apparatus of the present embodiment is obtained by adding the three-dimensional shape restoration unit 5 to the configuration of the first embodiment, step S21 in FIG. 6 is unnecessary.
  • The three-dimensional shape restoration unit 5 receives the corresponding points, the known posture, and the position and unknown posture output by the minimum error candidate extraction unit 3, restores the three-dimensional coordinates of the corresponding points, and outputs them together with the posture (step S31).
  • (Example 3) Next, the operation of each part in this embodiment will be described in detail.
  • the operations other than the three-dimensional shape restoration unit 5 are the same as those in the first embodiment or the second embodiment.
  • the 3D shape restoration unit 5 receives the corresponding points, the known postures, and the positions and unknown postures output by the minimum error candidate extraction unit 3, and restores and outputs the three-dimensional coordinates of the corresponding points.
  • the three-dimensional shape restoration unit 5 outputs the position and orientation together with the restored three-dimensional coordinates.
  • the posture may be, for example, three parameters or a rotation matrix.
  • the three-dimensional shape restoration unit 5 may output only the three-dimensional coordinates without outputting the position and orientation.
  • A method for restoring the three-dimensional coordinates of corresponding points based on the relative position and orientation of the camera is described in, for example, Non-Patent Document 4 and Non-Patent Document 5.
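Non-Patent Documents 4 and 5 are not reproduced here, but a standard way to restore the three-dimensional coordinates of one corresponding point from two camera projection matrices is linear (DLT) triangulation. The following is a minimal sketch under the usual pinhole-camera assumptions, not necessarily the specific method of those documents.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one corresponding point.
    P1, P2: 3x4 camera projection matrices built from the known posture and
    the estimated relative position / unknown posture; x1, x2: image
    coordinates (u, v) in each view. Returns the Euclidean 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize
```

For example, with P1 = [I | 0] and a second camera translated along the x-axis, the projections of a point at depth 5 triangulate back to that point.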
  • FIG. 7 is a block diagram when the position / orientation estimation apparatus according to the present invention is implemented in an information processing system.
  • the information processing system shown in FIG. 7 is a general information processing system including a processor 61, a program memory 62, and a storage medium 63.
  • the storage medium 63 may be a storage area composed of separate storage media, or may be a storage area composed of the same storage medium.
  • As the storage medium 63, for example, a RAM (Random Access Memory) or a magnetic storage medium such as a hard disk can be used.
  • The program memory 62 stores a program for causing the processor 61 to perform the processing of each of the above-described unknown posture candidate calculation unit 1 (more specifically, the coefficient calculation unit 11, the simultaneous polynomial solving unit 12, the real number solution extraction unit 13, and the unknown posture candidate conversion unit 14), the position candidate calculation unit 2, the minimum error candidate extraction unit 3, the second-order optimality verification unit 4, and the three-dimensional shape restoration unit 5; the processor 61 operates according to this program.
  • the processor 61 may be a processor that operates according to a program such as a CPU, for example.
  • As described above, the present invention can also be realized by a computer program. Note that it is not necessary for all the parts that can be operated by the program to be operated by the program; a part may be configured by hardware. Moreover, each part may be realized by a combination of hardware and software.
  • FIG. 8 is a block diagram showing a configuration of a main part of the position / orientation estimation apparatus according to the present invention.
  • The position / orientation estimation apparatus according to the present invention includes, as its main components: an unknown posture candidate calculation unit 1 that receives three or more sets of corresponding points included in two images and the known posture, which consists of the two known posture parameters among the three posture parameters representing the relative orientation of the camera that captured the two images, and that outputs, as unknown posture candidates, all solutions of the unknown posture satisfying a predetermined function expressed using the corresponding points with the remaining unknown posture parameter and the relative position of the camera as variables; a position candidate calculation unit 2 that calculates a candidate for the relative position of the camera for each of the unknown posture candidates; and a minimum error candidate extraction unit 3 that receives the unknown posture candidates, the corresponding position candidates, and the corresponding points, and extracts, from among the unknown posture candidates and position candidates, the one or more unknown postures and relative positions of the camera that minimize a predetermined error function representing the geometric relationship between the corresponding points, the relative position of the camera, and the unknown posture.
  • The unknown posture candidate calculation unit (for example, the unknown posture candidate calculation unit 1) may be configured to include: a coefficient calculation unit (for example, the coefficient calculation unit 11) that receives the corresponding points and the known posture and calculates the coefficients of the predetermined function expressed using the unknown posture and the corresponding points; a simultaneous polynomial solving unit (for example, the simultaneous polynomial solving unit 12) that receives all the coefficients and calculates all solutions satisfying the predetermined function; a real number solution extraction unit (for example, the real number solution extraction unit 13) that extracts all real solutions from the solutions obtained by the simultaneous polynomial solving unit and outputs either all the extracted real solutions or a flag indicating that there is no solution; and an unknown posture candidate conversion unit (for example, the unknown posture candidate conversion unit 14) that converts each extracted real solution into one unknown posture candidate.
  • The unknown posture candidate calculation unit may also include a second-order optimality verification unit (for example, the second-order optimality verification unit 4) that calculates and outputs, from among all the real number solutions satisfying the predetermined function, those real number solutions that satisfy the second-order optimality condition.
  • the reliability of the position / orientation output as the estimation result can be further improved, and the relative position / orientation of the camera can be estimated at a higher speed.
  • The position / orientation estimation apparatus may also include a three-dimensional shape restoration unit (for example, the three-dimensional shape restoration unit 5) that receives the corresponding points, the known posture, and the relative position of the camera and the unknown posture output by the minimum error candidate extraction unit, and restores and outputs the three-dimensional coordinates of the corresponding points.
  • The present invention can be applied to, for example, the reconstruction of the three-dimensional shape of a subject and the generation of a panoramic image of a background.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The position / orientation estimation apparatus according to the invention comprises: an unknown posture candidate calculation unit (1) that receives, as input, three or more sets of corresponding points included in two images, together with the known posture, which consists of the two known posture parameters among the three posture parameters representing the relative orientation of a camera that captured the two images, and that outputs, as unknown posture candidates, all solutions of the unknown posture satisfying a predetermined function; a position candidate calculation unit (2) that calculates a candidate for the relative position of the camera with respect to each of the unknown posture candidates; and a minimum error candidate extraction unit (3) that extracts, from among the unknown posture candidates and the position candidates, the one or more unknown postures and relative positions of the camera for which a predetermined error function, representing the geometric relationship between the corresponding points, the relative positions of the camera, and the unknown postures, reaches a minimum.
PCT/JP2013/004849 2012-08-31 2013-08-13 Dispositif d'estimation d'attitude de positionnement, procédé d'estimation d'attitude de positionnement, et programme d'estimation d'attitude de positionnement WO2014034035A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014532761A JP6260533B2 (ja) 2012-08-31 2013-08-13 位置姿勢推定装置、位置姿勢推定方法および位置姿勢推定プログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-191262 2012-08-31
JP2012191262 2012-08-31

Publications (1)

Publication Number Publication Date
WO2014034035A1 true WO2014034035A1 (fr) 2014-03-06

Family

ID=50182876

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/004849 WO2014034035A1 (fr) 2012-08-31 2013-08-13 Dispositif d'estimation d'attitude de positionnement, procédé d'estimation d'attitude de positionnement, et programme d'estimation d'attitude de positionnement

Country Status (2)

Country Link
JP (1) JP6260533B2 (fr)
WO (1) WO2014034035A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016114580A (ja) * 2014-12-18 2016-06-23 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation 処理装置、処理方法、およびプログラム
WO2016208404A1 (fr) * 2015-06-23 2016-12-29 ソニー株式会社 Dispositif et procédé de traitement d'informations, et programme

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000074641A (ja) * 1998-08-27 2000-03-14 Nec Corp 三次元形状計測方法
JP2006242943A (ja) * 2005-02-04 2006-09-14 Canon Inc 位置姿勢計測方法及び装置
JP3158823U (ja) * 2009-12-25 2010-04-22 財団法人日本交通管理技術協会 デジタルカメラを用いた三次元計測方法における基準軸鉛直設定装置
WO2012160787A1 (fr) * 2011-05-20 2012-11-29 日本電気株式会社 Dispositif d'estimation de position/posture, méthode d'estimation de position/posture et programme d'estimation de position/posture

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000074641A (ja) * 1998-08-27 2000-03-14 Nec Corp 三次元形状計測方法
JP2006242943A (ja) * 2005-02-04 2006-09-14 Canon Inc 位置姿勢計測方法及び装置
JP3158823U (ja) * 2009-12-25 2010-04-22 財団法人日本交通管理技術協会 デジタルカメラを用いた三次元計測方法における基準軸鉛直設定装置
WO2012160787A1 (fr) * 2011-05-20 2012-11-29 日本電気株式会社 Dispositif d'estimation de position/posture, méthode d'estimation de position/posture et programme d'estimation de position/posture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AKIYOSHI SHIOURA ET AL.: "Dai 11 Kai Chapter 3 Hisenkei Keikaku 3.2 Seiyaku Nashi Saitekika", 2009, pages 6 - 19, Retrieved from the Internet <URL:http://www.dais.is.tohoku.ac.jp/-shioura/teaching/mp08/mp08-11.pdf> [retrieved on 20131029] *
TAKUYA KANEKO ET AL.: "Hybrid GMRES-ho ni Tsuite", DAI 54 KAI (HEISEI 9 NEN ZENKI) ZENKOKU TAIKAI KOEN RONBUNSHU (1) ARCHITECTURE SOFTWARE KAGAKU KOGAKU, vol. 6F-5, no. 1-81, 12 March 1997 (1997-03-12) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016114580A (ja) * 2014-12-18 2016-06-23 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation 処理装置、処理方法、およびプログラム
WO2016208404A1 (fr) * 2015-06-23 2016-12-29 ソニー株式会社 Dispositif et procédé de traitement d'informations, et programme
JPWO2016208404A1 (ja) * 2015-06-23 2018-04-12 ソニー株式会社 情報処理装置および方法、並びにプログラム
US10600202B2 (en) 2015-06-23 2020-03-24 Sony Corporation Information processing device and method, and program

Also Published As

Publication number Publication date
JPWO2014034035A1 (ja) 2016-08-08
JP6260533B2 (ja) 2018-01-17

Similar Documents

Publication Publication Date Title
Gilitschenski et al. Deep orientation uncertainty learning based on a bingham loss
JP6261811B2 (ja) 第1の座標系と第2の座標系との間の運動を求める方法
US9959625B2 (en) Method for fast camera pose refinement for wide area motion imagery
WO2023165093A1 (fr) Procédé d&#39;entraînement pour modèle d&#39;odomètre inertiel visuel, procédé et appareils d&#39;estimation de posture, dispositif électronique, support de stockage lisible par ordinateur et produit de programme
JP7052788B2 (ja) カメラパラメータ推定装置、カメラパラメータ推定方法、及びプログラム
JP5833507B2 (ja) 画像処理装置
EP3633606B1 (fr) Dispositif de traitement d&#39;informations, procédé de traitement d&#39;informations et programme
EP3300025B1 (fr) Dispositif de traitement d&#39;images et procédé de traitement d&#39;images
CN113256718B (zh) 定位方法和装置、设备及存储介质
EP2960859B1 (fr) Construction d&#39;une structure 3D
JP6636894B2 (ja) カメラ情報修正装置、カメラ情報修正方法、及びカメラ情報修正プログラム
JP2017036970A (ja) 情報処理装置、情報処理方法、プログラム
JP6922348B2 (ja) 情報処理装置、方法、及びプログラム
US10252417B2 (en) Information processing apparatus, method of controlling information processing apparatus, and storage medium
JP6260533B2 (ja) 位置姿勢推定装置、位置姿勢推定方法および位置姿勢推定プログラム
Ricolfe-Viala et al. Optimal conditions for camera calibration using a planar template
JP2019192299A (ja) カメラ情報修正装置、カメラ情報修正方法、及びカメラ情報修正プログラム
JP5030183B2 (ja) 3次元物体位置姿勢計測方法
CN113048985B (zh) 已知相对旋转角度条件下的像机相对运动估计方法
JP6154759B2 (ja) カメラパラメータ推定装置、カメラパラメータ推定方法及びカメラパラメータ推定プログラム
Bartoli On the non-linear optimization of projective motion using minimal parameters
JP2017163386A (ja) カメラパラメータ推定装置、カメラパラメータ推定方法、及びプログラム
WO2019058487A1 (fr) Dispositif de traitement d&#39;images tridimensionnelles reconstituées, procédé de traitement d&#39;images tridimensionnelles reconstituées, et support de stockage lisible par ordinateur sur lequel est stocké un programme de traitement d&#39;images tridimensionnelles reconstituées
CN113763481A (zh) 一种移动场景中多相机视觉三维地图构建与自标定方法
JP2017162449A (ja) 情報処理装置、情報処理装置の制御方法およびプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13832512

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014532761

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13832512

Country of ref document: EP

Kind code of ref document: A1