WO2014034035A1 - Location attitude estimation device, location attitude estimation method, and location attitude estimation program - Google Patents

Location attitude estimation device, location attitude estimation method, and location attitude estimation program Download PDF

Info

Publication number
WO2014034035A1
Authority
WO
WIPO (PCT)
Prior art keywords
posture
unknown
camera
candidates
candidate
Prior art date
Application number
PCT/JP2013/004849
Other languages
French (fr)
Japanese (ja)
Inventor
Manabu Nakano (中野 学)
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Priority to JP2014532761A priority Critical patent/JP6260533B2/en
Publication of WO2014034035A1 publication Critical patent/WO2014034035A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Definitions

  • the present invention relates to a technique for estimating the relative position and orientation of a camera that has captured a plurality of images.
  • At least two images are required to estimate the camera position and orientation. When three or more images are available, pose estimation can be handled by processing the images two at a time.
  • A method for estimating the relative position and orientation between two images is described below.
  • The estimate is relative because, when the input consists of images alone, the absolute amount of change in position and orientation between the two images cannot be recovered.
  • One of the two images is taken as the origin of the three-dimensional space. If the origin and a distance scale are known, an appropriate coordinate transformation can be applied to the estimated relative position and orientation.
  • With the first image at the coordinate origin, the orientation of the second image is a three-dimensional rotation matrix with three degrees of freedom, and its scale-indeterminate position is a three-dimensional vector with two degrees of freedom.
  • The epipolar equation expresses the geometric relationship between image coordinates that represent the same three-dimensional point in the two images (hereinafter, corresponding points) and the camera position and orientation.
  • Non-Patent Document 1 describes a method of estimating a camera position and orientation by solving an epipolar equation using eight or more pairs of corresponding points on two images.
  • In that method, the corresponding points are converted into a parameter called the E matrix (Essential Matrix), a 3 × 3 matrix determined by the camera position and orientation.
  • The constraint condition is that, of the three singular values obtained by singular value decomposition of the E matrix, one is zero and the other two are equal.
  • The E matrix is corrected after the fact so as to satisfy this constraint, and the corrected E matrix is decomposed to estimate the position and orientation. The correction consists of applying singular value decomposition to the estimated E matrix and resetting its singular values as described above.
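  • The correction step described above, forcing the singular values of an estimated E matrix to the pattern (σ, σ, 0), can be sketched as follows (a minimal illustration of the standard projection, not the patent's implementation; the function name is ours):

```python
import numpy as np

def project_to_essential(E):
    """Project an estimated 3x3 matrix onto the set of valid essential
    matrices: two equal singular values and one zero singular value."""
    U, s, Vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0          # average the two largest singular values
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt

# Example: a slightly perturbed matrix, projected back onto the constraint
E_noisy = np.diag([1.0, 0.9, 0.1])
E_fixed = project_to_essential(E_noisy)
_, s, _ = np.linalg.svd(E_fixed)
print(np.round(s, 6))  # two equal singular values, one zero
```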
  • Since the accuracy of the camera position and orientation estimated by the method of Non-Patent Document 1 depends on the extraction accuracy of the corresponding-point set, it is known that accuracy degrades greatly when the differences between the image coordinates of corresponding points are small, for example when the camera motion is minute or when the camera moves nearly along the optical-axis direction.
  • Non-Patent Document 2 describes an epipolar equation that is satisfied by three pairs of corresponding points when two of the three rotational degrees of freedom are known from an acceleration sensor or from one vanishing point (hereinafter, the known posture), together with a method for estimating the position and orientation directly by solving it. Since the parameters to be estimated are reduced to three degrees of freedom in total, two representing the position and one representing the remaining orientation (hereinafter, the unknown posture), the method is more accurate than that of Non-Patent Document 1.
  • Non-Patent Document 3 describes a method for estimating the position and orientation by dividing into the cases of three, four, and five or more corresponding points.
  • In each case, an E matrix is obtained from a linear epipolar equation, corrected so as to satisfy the constraint conditions, and then decomposed to estimate the position and orientation.
  • According to Non-Patent Document 2, if two of the orientation parameters are known, the camera position and orientation can be estimated from a minimum of three pairs of corresponding points. However, since the corresponding points contain observation errors due to quantization and sensor noise inside the camera, the camera position and orientation cannot be estimated with high accuracy from only three pairs. In general, three pairs yield multiple camera position and orientation estimates, and it is mathematically impossible to decide which of them should be selected. To estimate a unique camera position and orientation with high accuracy, four or more pairs of corresponding points are required. However, the position and orientation estimation method of Non-Patent Document 2 covers only the case of exactly three pairs and cannot be applied to four or more.
  • The method of Non-Patent Document 3 calculates the position and orientation at the same time and therefore requires a correction means, which demands a large amount of computation; specifically, matrix operations such as singular value decomposition are needed for the correction.
  • An object of the present invention is to provide a position/orientation estimation apparatus, a position/orientation estimation method, and a position/orientation estimation program that can estimate the relative position and orientation of a camera that has captured two images with high accuracy and at high speed.
  • The position/orientation estimation apparatus according to the present invention receives three or more pairs of corresponding points contained in two images and, of the three orientation parameters representing the relative orientation of the camera that captured the two images, the two known parameters (the known postures). It comprises: an unknown-posture candidate calculation unit that, taking the remaining unknown orientation parameter (the unknown posture) and the relative position of the camera as variables, outputs as unknown-posture candidates all solutions of the unknown posture that satisfy a predetermined function expressed using the corresponding points and the unknown posture; a position candidate calculation unit that calculates a relative camera position candidate for each unknown-posture candidate; and a minimum-error candidate extraction unit that receives all unknown-posture candidates, the corresponding relative-position candidates, and the corresponding points, and extracts the one or more unknown postures and relative camera positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the camera's relative position, and the unknown posture.
  • The position/orientation estimation method according to the present invention uses three or more pairs of corresponding points contained in two images and the two known parameters (the known postures) among the three orientation parameters representing the relative orientation of the camera that captured the two images. Taking the remaining unknown orientation parameter (the unknown posture) and the relative camera position as variables, it computes as unknown-posture candidates all solutions of the unknown posture satisfying a predetermined function expressed using the corresponding points and the unknown posture, calculates a relative camera position candidate for each unknown-posture candidate, and then, from all the unknown-posture candidates, their position candidates, and the corresponding points, extracts the one or more unknown postures and camera positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the camera's relative position, and the unknown posture.
  • The position/orientation estimation program according to the present invention causes a computer to execute: an unknown-posture candidate calculation process that, given three or more pairs of corresponding points contained in two images and the two known parameters (the known postures) among the three orientation parameters representing the relative orientation of the camera that captured the two images, takes the remaining unknown orientation parameter (the unknown posture) and the relative camera position as variables and outputs as unknown-posture candidates all solutions of the unknown posture satisfying a predetermined function expressed using the corresponding points and the unknown posture; a position candidate calculation process that calculates a relative camera position candidate for each unknown-posture candidate; and a minimum-error candidate extraction process that receives all unknown-posture candidates, their relative-position candidates, and the corresponding points, and extracts the one or more unknown postures and relative camera positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the camera's relative position, and the unknown posture.
  • According to the present invention, the relative position and orientation of a camera that has captured two images can be estimated with high accuracy and at high speed.
  • FIG. 1 is a block diagram showing a configuration example of a first embodiment (Embodiment 1) of the position and orientation estimation apparatus according to the present invention.
  • the position and orientation estimation apparatus shown in FIG. 1 includes an unknown orientation candidate calculation unit 1, a position candidate calculation unit 2, and a minimum error candidate extraction unit 3.
  • the unknown posture candidate calculation unit 1 includes a coefficient calculation unit 11, a simultaneous polynomial solution unit 12, a real solution extraction unit 13, and an unknown posture candidate conversion unit 14.
  • The unknown posture candidate calculation unit 1 receives three or more pairs of corresponding points of two images and, of the three orientation parameters representing the relative orientation of the camera that captured the two images (hereinafter, sometimes simply the "posture"), the two known parameters as known postures.
  • Taking as variables the remaining unknown orientation parameter and the relative position of the camera (hereinafter, sometimes simply the "position"), the unknown posture candidate calculation unit 1 calculates all unknown postures satisfying a predetermined function expressed using the corresponding points and the unknown posture.
  • Hereinafter, the predetermined function expressed by the corresponding points and the unknown posture is simply referred to as the simultaneous polynomial.
  • The coefficient calculation unit 11 receives three or more pairs of corresponding points and the two known postures, calculates the coefficients of the simultaneous polynomial, and outputs them to the simultaneous polynomial solving unit 12.
  • the simultaneous polynomial solving unit 12 receives the coefficients of the simultaneous polynomial calculated by the coefficient calculating unit 11, solves the simultaneous polynomial, and outputs all the solutions.
  • the real number solution extraction unit 13 inputs all the solutions of the simultaneous polynomials calculated by the simultaneous polynomial solution unit 12, extracts all the real number solutions from the solutions, and outputs them. If no real number solution can be extracted, the real number solution extraction unit 13 stops the subsequent processing and outputs a no solution flag.
  • the unknown posture candidate conversion unit 14 receives all the real solutions extracted by the real number solution extraction unit 13 and calculates and outputs unknown posture candidates.
  • the unknown posture candidate conversion unit 14 converts each real solution into a value that is treated as one unknown posture candidate in subsequent calculations.
  • the position candidate calculation unit 2 is a processing unit that receives the unknown posture candidate calculated by the unknown posture candidate calculation unit 1, calculates a relative position candidate of the camera, and outputs the calculated candidate.
  • The minimum error candidate extraction unit 3 receives the unknown posture candidates, the position candidates, and the corresponding points, and calculates and outputs the one or more unknown postures and positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the unknown posture, and the position. The minimum error candidate extraction unit 3 may also receive the known postures when it replaces the corresponding points using the known postures, as described later.
  • The unknown posture candidate calculation unit 1 (more specifically, the coefficient calculation unit 11, the simultaneous polynomial solving unit 12, the real solution extraction unit 13, and the unknown posture candidate conversion unit 14), the position candidate calculation unit 2, and the minimum error candidate extraction unit 3 are realized by, for example, hardware designed to perform specific arithmetic processing, or by an information processing device such as a CPU (Central Processing Unit) that operates according to a program.
  • FIG. 2 is a flowchart showing an example of the operation of the first embodiment of the position / orientation estimation apparatus according to the present invention.
  • The coefficient calculation unit 11 takes the unknown posture and the relative position of the camera as variables, calculates the coefficients of the predetermined function expressed using the corresponding points and the unknown posture (hereinafter, the simultaneous polynomial), and outputs them to the simultaneous polynomial solving unit 12 (step S11).
  • the posture of the camera is represented by a three-degree-of-freedom rotation matrix or three variables necessary to express the rotation matrix.
  • The degree of freedom of the posture is an index indicating how many variables are needed to express the nine elements of the 3 × 3 matrix representing the camera posture.
  • the two known poses are two known variables out of the three variables or a rotation matrix expressed by two degrees of freedom.
  • the unknown pose is a rotation matrix expressed with one remaining variable or one degree of freedom.
  • The simultaneous polynomial whose coefficients the coefficient calculation unit 11 computes is uniquely determined by how the degrees of freedom of the camera posture are expressed. Therefore, the coefficient calculation unit 11 may simply calculate the coefficients of the simultaneous polynomial from, for example, three or more pairs of corresponding points and the two known postures, in accordance with a predefined expression of the degrees of freedom of the camera posture, that is, the type of the simultaneous polynomial.
  • The coefficient calculation unit 11 may also predefine several types of simultaneous polynomials corresponding to different expressions of the degrees of freedom of the camera posture, and select the type to use according to the value of a setting parameter read at start-up or the like.
  • the simultaneous polynomial solving unit 12 inputs the coefficient of the simultaneous polynomial calculated by the coefficient calculating unit 11, and solves the simultaneous polynomial using the corresponding point and the known posture input in step S11 (step S12). Further, the simultaneous polynomial solving unit 12 outputs all the solutions satisfying the simultaneous polynomial to the real number solution extracting unit 13.
  • The solutions of the simultaneous polynomial at this point may include values inappropriate as an unknown posture; for example, some or all of them may be complex numbers.
  • When all the solutions of the simultaneous polynomial solved by the simultaneous polynomial solving unit 12 are input to the real solution extraction unit 13 and real solutions exist among them (Yes in step S13), the real solution extraction unit 13 extracts all the real solutions and outputs them to the unknown posture candidate conversion unit 14. When no real solution can be extracted from the solutions, the real solution extraction unit 13 outputs a "no solution" flag as the position and orientation estimation result and ends the operation (No in step S13; step S17). For example, if all the solutions are complex numbers, the real solution extraction unit 13 outputs the "no solution" flag and ends the operation.
  • the “no solution” flag may be a true / false value, for example, or may be a position / orientation value indicating no solution determined in advance.
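  • As a sketch of this extraction step (ours, not the patent's code), filtering the real solutions out of the roots of a univariate polynomial and signaling "no solution" might look like:

```python
import numpy as np

def extract_real_solutions(coeffs, tol=1e-8):
    """Return the real roots of a polynomial given its coefficients
    (highest degree first), or None as a 'no solution' flag."""
    roots = np.roots(coeffs)                       # all roots, possibly complex
    real = [r.real for r in roots if abs(r.imag) < tol]
    return real if real else None                  # None plays the no-solution flag

print(extract_real_solutions([1, 0, -2]))   # x^2 - 2: two real roots
print(extract_real_solutions([1, 0, 1]))    # x^2 + 1: no real root -> None
```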
  • the unknown posture candidate conversion unit 14 inputs all the real number solutions and converts them into unknown posture candidates (step S14).
  • the unknown posture candidates obtained as a result of the calculation are output to the position candidate calculation unit 2 and the minimum error candidate extraction unit 3.
  • The unknown posture candidate conversion unit 14 converts each obtained real solution into a value usable as an unknown posture candidate (a single numerical value representing the unknown posture, or a 3 × 3 rotation matrix with one degree of freedom).
  • The position candidate calculation unit 2 receives all the unknown posture candidates obtained in step S14, calculates the relative camera position corresponding to each unknown posture, and outputs the results to the minimum error candidate extraction unit 3 as position candidates (step S15).
  • The minimum error candidate extraction unit 3 receives the unknown posture candidates, the position candidates, and the corresponding points, and calculates and outputs the one or more unknown postures and positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the unknown posture, and the position (step S16). The minimum error candidate extraction unit 3 does not need to receive the known postures when the corresponding points have been replaced using the known postures as described later, and may receive the known postures when they have not been replaced.
  • When the number of input corresponding points is three pairs, the minimum error candidate extraction unit 3 outputs all of the input unknown posture candidates and position candidates as unknown postures and positions. When the number is four or more, the minimum error candidate extraction unit 3 calculates and outputs, from among all the input unknown posture and position candidates, the position and unknown posture that minimize the predetermined error function. The reason this calculation is not performed for three corresponding points is that the error is zero for every unknown posture candidate, making it impossible to determine which candidate is closest to the correct answer. With four or more corresponding points, the error differs between candidates, so the single candidate with the smallest error can be selected.
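  • The branching just described (return every candidate for exactly three correspondences, otherwise keep only the minimum-error one) can be sketched generically; `error_fn` is a placeholder for the error function, not an API from the patent:

```python
def select_candidates(candidates, points, error_fn):
    """candidates: list of (theta, t) pairs; points: corresponding points.
    With exactly 3 correspondences every candidate has zero error, so all
    are returned; with 4 or more, only the minimum-error candidate is."""
    if len(points) == 3:
        return candidates
    errors = [error_fn(theta, t, points) for theta, t in candidates]
    best = min(range(len(candidates)), key=lambda i: errors[i])
    return [candidates[best]]

# Toy usage with a dummy error function
cands = [("theta1", "t1"), ("theta2", "t2")]
dummy_err = lambda th, t, pts: 0.5 if th == "theta1" else 0.2
print(select_candidates(cands, [0, 1, 2, 3], dummy_err))  # -> [('theta2', 't2')]
```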
  • (Example 1) Hereinafter, the first embodiment is described using a specific example. First, the epipolar equation with the unknown posture and the relative camera position as variables is described.
  • In the following, I is the identity matrix, det(·) is the determinant, ‖·‖ is the L2 norm of a vector, and [·]× denotes the 3 × 3 matrix representation of the cross product of three-dimensional vectors.
  • Let the number of input corresponding-point pairs be n, the i-th corresponding point of the first image be v_i, and the i-th corresponding point of the second image be v′_i. It is assumed that the camera's internal parameters, such as the focal length and optical center, are already calibrated, that v_i and v′_i are given in homogeneous coordinates, and that the numbers of v_i and v′_i are equal. If the two known postures are α and β and the unknown posture is θ, the relative orientation of the camera is expressed as the rotation matrix of Equation (1).
  • the posture is expressed as in Expression (1), but can be arbitrarily expressed depending on how to define the axial direction, the rotational direction, and the order of multiplication.
  • the known posture is expressed by two parameters ⁇ and ⁇ .
  • The known posture may also be expressed using a unit quaternion or, as described in Non-Patent Document 2, using a rotation axis and a rotation amount.
  • The degree of freedom of t is set to 2 because of the scale indefiniteness. Since α and β are already known, replacing v_i with R_α R_β v_i transforms the epipolar equation into the form shown in Equation (3).
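  • The equations referenced here appear only as images in the original publication. Based on the surrounding text, they plausibly take the following form (a reconstruction, not the published formulas; the multiplication order in the rotation and the exact numbering are one of several conventions the text says are possible):

```latex
% (1)  relative orientation: the unknown rotation composed with the two known ones
R \;=\; R_{\theta}\, R_{\alpha}\, R_{\beta}

% epipolar equation for each corresponding pair (v_i, v'_i)
{v'_i}^{\top}\, [\,t\,]_{\times}\, R\, v_i \;=\; 0

% (3)  after replacing v_i with R_{\alpha} R_{\beta} v_i, only R_{\theta} remains unknown
{v'_i}^{\top}\, [\,t\,]_{\times}\, R_{\theta}\, \tilde v_i \;=\; 0,
\qquad \tilde v_i = R_{\alpha} R_{\beta} v_i, \qquad \lVert t \rVert = 1
```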
  • Equation (3) can be solved if there are a minimum of three corresponding points, and a least squares solution can be calculated if there are four or more pairs.
  • Expression (3) is expressed as Expression (4).
  • A is a 3 ⁇ 3 matrix including ⁇ as a variable.
  • Equation (4) indicates that t is either the zero vector or lies in the null space of A. Since t is not the zero vector, t is the eigenvector of A corresponding to the eigenvalue zero. That is, θ is a solution of the function, with θ as the variable, that makes one eigenvalue of A zero.
  • Equation (5) is not a simultaneous polynomial but a univariate polynomial.
  • the simultaneous polynomial solving unit 12 substitutes the obtained ⁇ for A, and calculates t as an eigenvector corresponding to the eigenvalue zero of A.
  • the norm of t is normalized to 1.
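  • Extracting t as a unit vector spanning the (near-)null space of a 3 × 3 matrix A can be sketched as follows (our illustration with a toy A; in the method above, A would be built from the corresponding points and the solved θ, and the patent notes that either eigenvalue or singular value decomposition may be used):

```python
import numpy as np

def null_vector(A):
    """Return a unit vector t with A @ t ≈ 0, via SVD: the right singular
    vector for the smallest singular value. This stays well behaved even
    when noise makes A only approximately rank-deficient."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]              # rows of Vt are unit-norm singular vectors

# Toy A whose null space is spanned by (0, 0, 1)
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
t = null_vector(A)
print(np.allclose(A @ t, 0))   # True
```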
  • B is an n ⁇ 3 matrix that is expressed as in Expression (7) and includes ⁇ as a variable.
  • When there are four or more corresponding-point pairs, the error expressed by Equation (6) is the minimum eigenvalue of B^T B, and t is not the zero vector but the eigenvector corresponding to that minimum eigenvalue of B^T B. That is, θ is a solution of the function, with θ as the variable, that minimizes one eigenvalue of B^T B. For example, since B^T B is a 3 × 3 matrix, its eigenvalues can be written out with θ as a variable and obtained by an iterative solution, as with Equation (4). Because one eigenvalue is to be minimized, the simultaneous polynomial solving unit 12 may instead obtain a solution of Equation (8) below.
  • Alternatively, the simultaneous polynomial solving unit 12 may replace cos θ and sin θ in Equation (8) with two independent variables c and s, and solve Equation (9), which adds c² + s² = 1 as a new conditional expression.
  • Equation (9) is not a simultaneous polynomial but a univariate polynomial.
  • the simultaneous polynomial solving unit 12 substitutes the obtained ⁇ for B T B, and calculates t as an eigenvector corresponding to the minimum eigenvalue of B T B.
  • the norm of t is normalized to 1.
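  • For four or more pairs, t is the least-squares minimizer of ‖B t‖², i.e. the eigenvector of the symmetric matrix B^T B for its smallest eigenvalue, which is itself the residual error. A sketch with a generic n × 3 matrix B (toy data, not the patent's B):

```python
import numpy as np

def least_squares_t(B):
    """Return the unit vector t minimizing ||B t||^2 and the residual:
    the eigenpair of B^T B for the smallest eigenvalue."""
    w, V = np.linalg.eigh(B.T @ B)   # eigh: eigenvalues in ascending order
    return V[:, 0], w[0]             # unit eigenvector, minimum eigenvalue

# Toy B: 4 rows, nearly rank 2, with null direction close to (0, 0, 1)
B = np.array([[1.0, 0.0, 1e-3],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 0.0]])
t, err = least_squares_t(B)
print(abs(t[2]) > 0.99, err)
```

Using `eigh` rather than `eig` exploits the symmetry of B^T B and returns eigenvalues already sorted, so no extra argmin step is needed.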
  • the two known postures can be acquired using, for example, an acceleration sensor or a vanishing point.
  • With an acceleration sensor, the direction of gravity is measured while the camera is stationary.
  • Although the direction of gravity is represented by a three-dimensional vector, its norm is ignored, so it has two degrees of freedom. Therefore, the difference between the gravity directions at the times the two images were captured can be expressed by the two parameters α and β.
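  • As an illustration (our sketch; the patent does not spell this computation out), the two-degree-of-freedom rotation aligning the gravity direction measured at one view with that of the other can be built from the axis and angle between the two unit vectors via Rodrigues' formula:

```python
import numpy as np

def rotation_between(g1, g2):
    """Rotation matrix R with R @ g1 ≈ g2, for unit vectors g1, g2
    (Rodrigues' formula; assumes g1 and g2 are not anti-parallel)."""
    v = np.cross(g1, g2)          # rotation axis, scaled by sin(angle)
    c = float(np.dot(g1, g2))     # cosine of the rotation angle
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    # I + K + K^2 / (1 + c) equals I + sin·[k]x + (1 - cos)·[k]x^2
    return np.eye(3) + K + K @ K / (1.0 + c)

g1 = np.array([0.0, 0.0, 1.0])    # gravity in view 1
g2 = np.array([0.0, 1.0, 0.0])    # gravity in view 2
R = rotation_between(g1, g2)
print(np.allclose(R @ g1, g2))    # True
```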
  • a vanishing point is a point where two or more parallel lines in a three-dimensional space intersect on an image by projective transformation.
  • Non-patent document 2 describes a method for calculating a known posture from one vanishing point.
  • In step S12, the simultaneous polynomial solving unit 12 solves the simultaneous polynomial using the coefficients output from the coefficient calculation unit 11, and outputs the solutions of Equation (5) or Equation (9) to the real solution extraction unit 13.
  • Various methods can be used to solve the simultaneous polynomial, such as reducing it to a univariate polynomial using a Sylvester matrix, or computing the solutions of the two variables simultaneously using a Gröbner basis.
  • In step S13, the real solution extraction unit 13 receives the solutions of the polynomial expressed by Equation (5) or Equation (9), computes all the real solutions among them, and determines whether a real solution exists. If real solutions exist, the real solution extraction unit 13 outputs them to the unknown posture candidate conversion unit 14. If all the solutions are complex numbers, the real solution extraction unit 13 outputs the no-solution flag and ends the operation (step S17).
  • the no solution flag may be, for example, a true / false value or a position / orientation value indicating no solution determined in advance.
  • In step S14, the unknown posture candidate conversion unit 14 receives all the real solutions, calculates the unknown posture candidates, and outputs them to the position candidate calculation unit 2 and the minimum error candidate extraction unit 3.
  • An unknown posture candidate is given, for example, by R_θ of Equation (1). If there are k real solutions, there are k values of θ; the unknown posture candidate conversion unit 14 substitutes the k values of θ into Equation (1) one by one to obtain k unknown posture candidates (3 × 3 matrices).
  • In step S15, the position candidate calculation unit 2 calculates position candidates for all the input unknown posture candidates and outputs them to the minimum error candidate extraction unit 3.
  • A position candidate is the eigenvector corresponding to the eigenvalue zero of A, or the eigenvector corresponding to the minimum eigenvalue of B^T B.
  • the eigenvector can be obtained by eigenvalue decomposition or singular value decomposition.
  • In step S16, the minimum error candidate extraction unit 3 first receives the corresponding points, the unknown posture candidates, and the position candidates.
  • the minimum error candidate extraction unit 3 performs different operations according to the input number of corresponding points.
  • When there are four or more corresponding points, the minimum error candidate extraction unit 3 extracts the unknown posture candidate and position candidate that minimize the minimum eigenvalue of B^T B, and outputs them as the unknown posture and position.
  • In the above, the position candidate calculation unit 2 narrows down the candidates for the optimal solution by batch processing of all real solutions, but the solutions of the simultaneous polynomial can also be processed sequentially, one by one. For example, each time the simultaneous polynomial solving unit 12 obtains one solution, it outputs that solution to the real solution extraction unit 13. If the real solution extraction unit 13 determines that the solution is real, the conversion to a posture candidate by the unknown posture candidate conversion unit 14 and the minimum-error candidate determination by the minimum error candidate extraction unit 3 are executed in a loop.
  • In this case, as the minimum-error candidate determination process, the minimum error candidate extraction unit 3 stores the candidate that minimizes the error so far and, each time a real solution is extracted, performs the error calculation, the comparison with the current minimum, and the overwriting of the stored minimum candidate and minimum value.
  • the branching process based on the number of posture candidates and the number of input points may be collectively performed by the minimum error candidate extraction unit 3.
  • The minimum error candidate extraction unit 3 uses the minimum eigenvalue of B^T B as the error, but det(B^T B) may be used instead. In that case, immediately after the unknown posture candidate conversion unit 14, the minimum error candidate extraction unit 3 calculates det(B^T B), selects only the single candidate with the smallest value as the unknown posture, and outputs it to the position candidate calculation unit 2. As other errors, the Euclidean distance between the epipolar line and the corresponding point, or the Sampson error, which is an approximation of that Euclidean distance, may be used.
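  • The Sampson error mentioned here is the standard first-order approximation of the geometric distance to the epipolar constraint; a sketch for one corresponding pair, given an essential/fundamental matrix F (the textbook formula, not code taken from the patent):

```python
import numpy as np

def sampson_error(F, v, v_prime):
    """First-order approximation of the squared geometric distance of the
    homogeneous pair (v, v_prime) to the constraint v'^T F v = 0."""
    num = float(v_prime @ F @ v) ** 2
    Fv = F @ v                     # epipolar line in the second image
    Ftv = F.T @ v_prime            # epipolar line in the first image
    den = Fv[0]**2 + Fv[1]**2 + Ftv[0]**2 + Ftv[1]**2
    return num / den

# A pair lying exactly on the epipolar constraint gives error 0
F = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
v = np.array([1.0, 0.0, 1.0])
v2 = np.array([1.0, 0.0, 1.0])
print(sampson_error(F, v, v2))   # 0.0
```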
  • As described above, the position/orientation estimation apparatus of this embodiment first calculates all candidates based on the geometric relationship between the position and the unknown posture, and then extracts the candidate with the smallest error from among them. It is therefore guaranteed that the output unknown posture and position are the globally minimal solution. In other words, unlike the approach of Non-Patent Document 3, which ignores the constraint conditions and corrects afterwards, the unknown posture and position candidates are calculated separately, so no increase in error due to correction occurs.
  • the position / orientation estimation apparatus does not require correction means and can reduce the amount of calculation. That is, it is not necessary to perform matrix operations such as singular value decomposition for correction as in Non-Patent Document 3.
  • the relative position / orientation of the camera can be estimated with high accuracy and stability at a high speed.
  • FIG. 3 is a block diagram showing a configuration example of the second embodiment of the position / orientation estimation apparatus according to the present invention.
  • the configuration of the position / orientation estimation apparatus of the present embodiment is different from the configuration of the first embodiment shown in FIG. 1 in that the unknown orientation candidate calculation unit 1 further includes a secondary optimality verification unit 4. Since the configuration other than the secondary optimality verification unit 4 is the same as that of the first embodiment, the description thereof is omitted.
  • the second-order optimality verification unit 4 receives the real solutions output from the real solution extraction unit 13, checks the sign of the function obtained by differentiating the equation in the unknown posture twice with respect to the unknown posture, and outputs each real solution for which this function is positive to the unknown posture candidate conversion unit 14 as a candidate satisfying the second-order sufficient condition for optimality.
  • if no real solution satisfies the condition, the second-order optimality verification unit 4 stops the subsequent processing and outputs a no-solution flag.
  • the secondary optimality verification unit 4 is realized by, for example, hardware designed to perform specific arithmetic processing or the like, or an information processing device such as a CPU that operates according to a program.
  • FIG. 4 is a flowchart showing an example of the operation of the position / orientation estimation apparatus of this embodiment. Since the operations other than step S21 are the same as those in the first embodiment, description thereof will be omitted.
  • the second-order optimality verification unit 4 verifies the sign of the function obtained by differentiating the equation in the unknown posture twice, for each real solution of the simultaneous polynomials extracted by the real solution extraction unit 13 (step S21), and outputs the solutions for which the function is positive to the unknown posture candidate conversion unit 14 as solutions satisfying the second-order sufficient condition for optimality. If no real solution satisfies the second-order sufficient condition, the second-order optimality verification unit 4 outputs a no-solution flag and ends the operation (No in step S21, step S17).
  • the no solution flag may be a true / false value, for example, or may be a posture and position value representing no solution determined in advance.
  • the second-order optimality verification unit 4 receives the real solutions output from the real solution extraction unit 13 and checks the sign by substituting each real solution into the function obtained by differentiating the equation in the unknown posture twice with respect to the unknown posture.
  • the equation having the unknown posture as a variable may be, for example, an eigenvalue of B^T B expressed with the unknown posture as a variable, or det(B^T B).
  • if the function into which a real solution has been substituted is positive, the second-order optimality verification unit 4 outputs that solution to the unknown posture candidate conversion unit 14 as a solution satisfying the second-order sufficient condition for optimality (Yes in step S21). If none is positive, the second-order optimality verification unit 4 outputs a no-solution flag in the same manner as the real solution extraction unit 13, and ends the processing (No in step S21, step S17).
  • the second-order optimality verification unit 4 eliminates inappropriate solutions, so-called saddle points, which satisfy the simultaneous polynomials but are not locally optimal.
  • every real solution that passes the second-order optimality verification unit 4 is therefore a local optimum, so the solution with the smallest error among them is guaranteed to be the global optimum.
  • furthermore, when no real solution satisfies the condition, the calculation cost can be reduced by terminating the subsequent processing.
  • the reliability of the position and orientation output as the estimation result can be further improved, and the relative position and orientation of the camera can be estimated at a higher speed.
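The second-order optimality test above can be illustrated for the simplified case in which the equation in the unknown posture reduces to a single univariate polynomial f: stationary points are the real roots of f', and only those with f'' > 0 (local minima) are kept, discarding saddle and maximum points. This is a generic sketch, not the patent's exact formulation.

```python
import numpy as np

def local_minima_roots(coeffs, tol=1e-9):
    """Given coefficients of a polynomial f (highest degree first),
    find the real roots of f' and keep only those at which f'' > 0,
    i.e. the local minima; saddle points and maxima are dropped."""
    f = np.poly1d(coeffs)
    d1 = np.polyder(f)                       # f'
    d2 = np.polyder(d1)                      # f''
    roots = d1.roots
    real = roots[np.abs(roots.imag) < tol].real
    return [float(r) for r in real if d2(r) > 0]
```

For f(x) = x^3 - 3x, the stationary points are x = ±1, but only x = 1 has f'' > 0 and survives the filter, mirroring how the verification unit discards solutions of the simultaneous polynomials that are not locally optimal.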
  • FIG. 5 is a block diagram showing a configuration example of the third embodiment of the position / orientation estimation apparatus according to the present invention.
  • the position / orientation estimation apparatus of the present embodiment further includes a three-dimensional shape restoration unit 5 in addition to the configuration of the first embodiment shown in FIG. 1 or the second embodiment shown in FIG. Since the configuration other than the three-dimensional shape restoration unit 5 is the same as that of the first embodiment or the second embodiment, the description thereof is omitted.
  • the 3D shape restoration unit 5 receives the corresponding points, the known posture, and the position and unknown posture output by the minimum error candidate extraction unit 3, and restores and outputs the three-dimensional coordinates of the corresponding points.
  • the three-dimensional shape restoration unit 5 outputs the position and orientation together with the restored three-dimensional coordinates.
  • the posture may be, for example, three parameters or a rotation matrix.
  • the three-dimensional shape restoration unit 5 is realized by, for example, hardware designed to perform specific arithmetic processing or the like, or an information processing apparatus such as a CPU that operates according to a program.
  • FIG. 6 is a flowchart illustrating an example of the operation of the position / orientation estimation apparatus according to this embodiment.
  • since operations other than step S31 are the same as those in the second embodiment, description thereof will be omitted. Note that when the configuration of the position / orientation estimation apparatus of the present embodiment is obtained by adding the three-dimensional shape restoration unit 5 to the configuration of the first embodiment, step S21 in FIG. 6 is unnecessary.
  • the three-dimensional shape restoration unit 5 receives the corresponding points, the known posture, and the position and unknown posture output by the minimum error candidate extraction unit 3, restores the three-dimensional coordinates of the corresponding points, and outputs them (step S31).
  • Example 3: Next, the operation of each part in this embodiment will be specifically described.
  • the operations other than the three-dimensional shape restoration unit 5 are the same as those in the first embodiment or the second embodiment.
  • the 3D shape restoration unit 5 receives the corresponding points, the known postures, and the positions and unknown postures output by the minimum error candidate extraction unit 3, and restores and outputs the three-dimensional coordinates of the corresponding points.
  • the three-dimensional shape restoration unit 5 outputs the position and orientation together with the restored three-dimensional coordinates.
  • the posture may be, for example, three parameters or a rotation matrix.
  • the three-dimensional shape restoration unit 5 may output only the three-dimensional coordinates without outputting the position and orientation.
  • a method for restoring the three-dimensional coordinates of corresponding points based on the relative position and orientation of the camera is described in, for example, Non-Patent Document 4 and Non-Patent Document 5.
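The restoration methods of Non-Patent Documents 4 and 5 are not reproduced here, but a standard linear (DLT) triangulation, a common textbook way to recover the three-dimensional coordinates of a corresponding point from two projection matrices, can be sketched as follows. This is an illustration only, not the patent's own implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X such that
    x1 ~ P1 X and x2 ~ P2 X, where P1 and P2 are 3x4 projection
    matrices and x1, x2 are 2D image points. The stacked homogeneous
    system A X = 0 is solved via SVD (right singular vector of the
    smallest singular value)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Given the relative position and posture estimated by the apparatus, the projection matrices P1 and P2 would be assembled from the first camera (taken as the origin) and the estimated rotation and translation of the second camera.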
  • FIG. 7 is a block diagram when the position / orientation estimation apparatus according to the present invention is implemented in an information processing system.
  • the information processing system shown in FIG. 7 is a general information processing system including a processor 61, a program memory 62, and a storage medium 63.
  • the storage medium 63 may be a storage area composed of separate storage media, or may be a storage area composed of the same storage medium.
  • as the storage medium 63, for example, a RAM (Random Access Memory) or a magnetic storage medium such as a hard disk can be used.
  • the program memory 62 stores a program for causing the processor 61 to perform the processing of each of the above-described unknown posture candidate calculation unit 1 (more specifically, the coefficient calculation unit 11, the simultaneous polynomial solution unit 12, the real solution extraction unit 13, and the unknown posture candidate conversion unit 14), the position candidate calculation unit 2, the minimum error candidate extraction unit 3, the second-order optimality verification unit 4, and the three-dimensional shape restoration unit 5, and the processor 61 operates according to this program.
  • the processor 61 may be a processor that operates according to a program such as a CPU, for example.
  • the present invention can also be realized by a computer program. Note that it is not necessary for all parts that can be operated by the program to be operated by the program; some parts may be configured by hardware.
  • FIG. 8 is a block diagram showing a configuration of a main part of the position / orientation estimation apparatus according to the present invention.
  • as its main components, the position / orientation estimation apparatus according to the present invention includes: an unknown posture candidate calculation unit (for example, unknown posture candidate calculation unit 1) that receives three or more sets of corresponding points included in two images and a known posture, i.e., the two known posture parameters among the three posture parameters representing the relative posture of the camera that captured the two images, and outputs, as unknown posture candidates, all solutions of the unknown posture (the remaining, unknown posture parameter) that satisfy a predetermined function expressed using the unknown posture and the corresponding points, with the unknown posture and the relative position of the camera as variables; a position candidate calculation unit (for example, position candidate calculation unit 2) that calculates a candidate for the relative position of the camera for each of the unknown posture candidates; and a minimum error candidate extraction unit (for example, minimum error candidate extraction unit 3) that receives all the unknown posture candidates, the relative position candidates of the camera for each of them, and the corresponding points, and extracts from among them one or more unknown postures and relative positions of the camera that minimize a predetermined error function representing the geometric relationship between the corresponding points, the relative position of the camera, and the unknown posture.
  • the unknown posture candidate calculation unit (for example, unknown posture candidate calculation unit 1) may be configured to include: a coefficient calculation unit (for example, coefficient calculation unit 11) that receives the corresponding points and the known posture and calculates the coefficients of the predetermined function expressed using the unknown posture and the corresponding points; a simultaneous polynomial solution unit (for example, simultaneous polynomial solution unit 12) that receives the coefficients and calculates all solutions satisfying the predetermined function; a real solution extraction unit (for example, real solution extraction unit 13) that extracts all real solutions from the solutions obtained by the simultaneous polynomial solution unit and outputs all the extracted real solutions or a flag indicating no solution; and an unknown posture candidate conversion unit (for example, unknown posture candidate conversion unit 14) that converts each real solution extracted by the real solution extraction unit into one unknown posture candidate.
  • the unknown posture candidate calculation unit may include a second-order optimality verification unit (for example, second-order optimality verification unit 4) that extracts and outputs, from among all real solutions satisfying the predetermined function, those real solutions that satisfy the second-order optimality condition.
  • the reliability of the position / orientation output as the estimation result can be further improved, and the relative position / orientation of the camera can be estimated at a higher speed.
  • the position / orientation estimation apparatus may include a three-dimensional shape restoration unit (for example, three-dimensional shape restoration unit 5) that receives the corresponding points, the known posture, and the relative position of the camera and the unknown posture output by the minimum error candidate extraction unit, and restores and outputs the three-dimensional coordinates of the corresponding points.
  • the present invention can be applied to, for example, reconstruction of the three-dimensional shape of a subject and generation of a panoramic image of a background.

Abstract

A location attitude estimation device comprises: an unknown attitude candidate calculation unit (1) which receives input of three or more combinations of correspondence points which are included in two images, and known attitudes which are two known attitude parameters among three attitude parameters which represent a relative attitude of a camera which has photographed the two images, and outputs as unknown attitude candidates all unknown attitude solutions which satisfy a prescribed function; a location candidate calculation unit (2) which calculates a candidate of a relative location of the camera with respect to each of the unknown attitude candidates; and an error minimum candidate extraction unit (3) which, from among the unknown attitude candidates and the location candidates, extracts one or a plurality of unknown attitudes and relative locations of the camera whereat a prescribed error function which represents a geometrical relation between the correspondence points, the relative locations of the camera, and the unknown attitudes, reaches a minimum.

Description

Position/orientation estimation apparatus, position/orientation estimation method, and position/orientation estimation program
 The present invention relates to a technique for estimating the relative position and orientation of a camera that has captured a plurality of images.
 In order to restore the three-dimensional shape of a subject or generate a panoramic image of a background from a plurality of images taken with a camera, highly accurate estimation of the position and orientation of the camera that captured the images is necessary.
 At least two images are required to estimate the camera position and orientation. When there are three or more images, posture estimation can be handled by taking the images in pairs. A method for estimating the relative position and orientation between two images is described below. The reason the relative position and orientation is estimated is that, when only images are available as input, the absolute amount of change in position and orientation between the two images cannot be determined. To estimate the relative position and orientation, one of the two images is taken as the origin of three-dimensional space. If the origin and a distance scale are known, an appropriate coordinate transformation can be applied to the estimated relative position and orientation. In the following, the first image defines the coordinate origin, the posture of the second image is a three-dimensional rotation matrix with three degrees of freedom, and the scale-indeterminate position is a three-dimensional vector with two degrees of freedom.
 One method for estimating the camera position and orientation when internal parameters such as the focal length, optical center coordinates, and lens distortion are known is to use the epipolar equation. The epipolar equation expresses the geometric relationship between the camera position and orientation and the image coordinates that represent the same three-dimensional point in the two images (hereinafter referred to as corresponding points).
 For example, Non-Patent Document 1 describes a method of estimating the camera position and orientation by solving the epipolar equation using eight or more pairs of corresponding points on two images. In this method, the corresponding points are first related through a parameter called the essential matrix (E matrix), a three-dimensional matrix determined by the camera position and orientation. Next, by ignoring the constraint conditions on the E matrix and treating it as an independent variable with eight degrees of freedom, the linear epipolar equations satisfied by eight or more pairs of corresponding points are solved to compute the E matrix. The constraint condition is that, of the three singular values obtained by singular value decomposition of the E matrix, one is zero and the other two are equal. Finally, the E matrix is corrected a posteriori so as to satisfy the constraint condition, and the corrected E matrix is decomposed to estimate the position and the posture. The correction consists of applying singular value decomposition to the estimated E matrix and replacing the singular values as described above.
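As an illustration of the a-posteriori correction described above (a generic sketch, not code from Non-Patent Document 1 itself), the estimated E matrix can be projected onto the set of valid essential matrices by replacing its singular values with two equal values and a zero:

```python
import numpy as np

def correct_essential(E):
    """Force the singular values (s1, s2, s3) of an estimated 3x3
    matrix to ((s1 + s2) / 2, (s1 + s2) / 2, 0), so that two singular
    values are equal and one is zero, as the E-matrix constraint
    requires. Averaging the two largest singular values is the usual
    choice for this projection."""
    U, s, Vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt
```

The corrected matrix can then be decomposed into a rotation and a translation direction, which is the final step of the method described above.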
 Since the estimation accuracy of the camera position and orientation obtained by the method of Non-Patent Document 1 depends on the extraction accuracy of the corresponding point pairs, it is known that the accuracy drops significantly when the differences between the image coordinates of the corresponding point pairs are small, for example when the camera motion is slight or when the camera's direction of motion is close to its optical axis.
 As a method for estimating the camera position and orientation with high accuracy even in such cases, there is an approach that combines the corresponding point pairs with sensor information other than the camera.
 For example, Non-Patent Document 2 describes a method that, taking two of the three degrees of freedom representing the posture as known (hereinafter referred to as the known posture) from an acceleration sensor or a single vanishing point, directly estimates the position and posture by solving the epipolar equations satisfied by three pairs of corresponding points. Since the parameters to be estimated are reduced to a total of three degrees of freedom (two representing the position and one representing the remaining posture, hereinafter referred to as the unknown posture), this method is more accurate than that of Non-Patent Document 1.
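As a hedged illustration of this parameterization: if the known posture fixes two rotation angles (here assumed, for concreteness, to be roll and pitch in a Z-Y-X factorization; the actual axis assignment depends on the sensor), the full rotation matrix contains only one free angle, so only that angle and the two position degrees of freedom remain to be estimated.

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rotation_with_unknown_yaw(roll, pitch, yaw):
    """Rotation with known roll and pitch (the 'known posture') and a
    single remaining angle 'yaw' (the 'unknown posture'). The Z-Y-X
    order is an illustrative assumption, not the patent's definition."""
    return Rz(yaw) @ Ry(pitch) @ Rx(roll)
```

With roll and pitch supplied by the sensor, substituting this product into the epipolar equation leaves a system in the single unknown angle and the two-degree-of-freedom translation, which is exactly the reduced problem discussed above.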
 However, in the technique described in Non-Patent Document 2, the epipolar equation that takes the constraint conditions between parameters into account becomes a complex system of nonlinear simultaneous equations; the input is therefore limited to three pairs of corresponding points, and a least-squares solution over four or more pairs cannot be obtained.
 As a method for obtaining a least-squares solution, Non-Patent Document 3 describes a method of estimating the position and orientation by dividing into cases of three, four, and five or more corresponding point pairs. In every case, as in Non-Patent Document 1, the E matrix is obtained from the linear epipolar equations, corrected so as to satisfy the constraint conditions, and decomposed to estimate the position and posture.
 It is known that, by estimating the camera position and orientation using each of the above methods, the three-dimensional shape of the corresponding points can be restored and a panoramic image of the background can be generated. Methods for restoring the three-dimensional coordinates of corresponding points based on the relative position and orientation of the camera are described in, for example, Non-Patent Document 4 and Non-Patent Document 5.
 As described in Non-Patent Document 2, if two of the posture parameters are known, the camera position and orientation can be estimated from as few as three pairs of corresponding points. However, since the corresponding points suffer observation errors due to quantization and sensor noise inside the camera, the camera position and orientation cannot be estimated with high accuracy from only three pairs. Moreover, with three pairs a plurality of camera positions and orientations is generally estimated, and there is no mathematical way to decide which one should be selected. To estimate a unique camera position and orientation with high accuracy, four or more pairs of corresponding points must be used. However, the position and orientation estimation method described in Non-Patent Document 2 covers only the case of exactly three pairs of corresponding points and cannot be applied to four or more pairs.
 Since the position and orientation estimation method described in Non-Patent Document 3 calculates the position and the posture simultaneously, it requires a correction step and its computational cost is large. Specifically, matrix operations such as singular value decomposition must be performed for the correction.
 Accordingly, an object of the present invention is to provide a position and orientation estimation apparatus, a position and orientation estimation method, and a position and orientation estimation program capable of estimating the relative position and orientation of the camera that captured two images with high accuracy and at high speed.
 The position and orientation estimation apparatus according to the present invention includes: an unknown posture candidate calculation unit that receives three or more pairs of corresponding points included in two images and a known posture, i.e., the two known posture parameters among the three posture parameters representing the relative posture of the camera that captured the two images, and, using the input corresponding points and known posture, outputs as unknown posture candidates all solutions of the unknown posture (the remaining, unknown posture parameter) that satisfy a predetermined function expressed using the unknown posture and the corresponding points, with the unknown posture and the relative position of the camera as variables; a position candidate calculation unit that calculates a candidate for the relative position of the camera for each of the unknown posture candidates; and a minimum error candidate extraction unit that receives all the unknown posture candidates, the relative position candidates of the camera for each of them, and the corresponding points, and extracts, from among the unknown posture candidates and the position candidates, one or more unknown postures and relative positions of the camera that minimize a predetermined error function representing the geometric relationship between the corresponding points, the relative position of the camera, and the unknown posture.
 The position and orientation estimation method according to the present invention comprises: receiving three or more pairs of corresponding points included in two images and a known posture, i.e., the two known posture parameters among the three posture parameters representing the relative posture of the camera that captured the two images; using the input corresponding points and known posture, outputting, as unknown posture candidates, all solutions of the unknown posture (the remaining, unknown posture parameter) that satisfy a predetermined function expressed using the unknown posture and the corresponding points, with the unknown posture and the relative position of the camera as variables; calculating a candidate for the relative position of the camera for each of the unknown posture candidates; receiving all the unknown posture candidates, the relative position candidates of the camera for each of them, and the corresponding points; and extracting, from among the unknown posture candidates and the position candidates, one or more unknown postures and relative positions of the camera that minimize a predetermined error function representing the geometric relationship between the corresponding points, the relative position of the camera, and the unknown posture.
 The position and orientation estimation program according to the present invention causes a computer to execute: an unknown posture candidate calculation process of receiving three or more pairs of corresponding points included in two images and a known posture, i.e., the two known posture parameters among the three posture parameters representing the relative posture of the camera that captured the two images, and outputting, as unknown posture candidates, all solutions of the unknown posture (the remaining, unknown posture parameter) that satisfy a predetermined function expressed using the unknown posture and the corresponding points, with the unknown posture and the relative position of the camera as variables; a position candidate calculation process of calculating a candidate for the relative position of the camera for each of the unknown posture candidates; and a minimum error candidate extraction process of receiving all the unknown posture candidates, the relative position candidates of the camera for each of them, and the corresponding points, and extracting, from among the unknown posture candidates and the position candidates, one or more unknown postures and relative positions of the camera that minimize a predetermined error function representing the geometric relationship between the corresponding points, the relative position of the camera, and the unknown posture.
 According to the present invention, the relative position and orientation of the camera that captured two images can be estimated with high accuracy and at high speed.
FIG. 1 is a block diagram showing a configuration example of a first embodiment of the position and orientation estimation apparatus according to the present invention.
FIG. 2 is a flowchart showing an example of the operation of the first embodiment of the position and orientation estimation apparatus according to the present invention.
FIG. 3 is a block diagram showing a configuration example of a second embodiment of the position and orientation estimation apparatus according to the present invention.
FIG. 4 is a flowchart showing an example of the operation of the second embodiment of the position and orientation estimation apparatus according to the present invention.
FIG. 5 is a block diagram showing a configuration example of a third embodiment of the position and orientation estimation apparatus according to the present invention.
FIG. 6 is a flowchart showing an example of the operation of the third embodiment of the position and orientation estimation apparatus according to the present invention.
FIG. 7 is a block diagram of the position and orientation estimation apparatus according to the present invention implemented in an information processing system.
FIG. 8 is a block diagram showing the configuration of the main part of the position and orientation estimation apparatus according to the present invention.
 Next, embodiments of the present invention will be described in detail with reference to the drawings.
 Embodiment 1.
 FIG. 1 is a block diagram showing a configuration example of a first embodiment (Embodiment 1) of the position and orientation estimation apparatus according to the present invention. The position and orientation estimation apparatus shown in FIG. 1 includes an unknown posture candidate calculation unit 1, a position candidate calculation unit 2, and a minimum error candidate extraction unit 3. The unknown posture candidate calculation unit 1 includes a coefficient calculation unit 11, a simultaneous polynomial solution unit 12, a real solution extraction unit 13, and an unknown posture candidate conversion unit 14.
 The unknown posture candidate calculation unit 1 receives three or more pairs of corresponding points between two images, together with the two known parameters, input as the known postures, among the three posture parameters representing the relative posture of the camera that captured the two images (hereinafter sometimes simply the "posture"). The unknown posture candidate calculation unit 1 then treats the remaining, unknown posture parameter (the "unknown posture") and the relative position of the camera (hereinafter sometimes simply the "position") as variables, and computes every unknown posture that satisfies a predetermined function expressed in terms of the corresponding points and the unknown posture. In the following description, this predetermined function expressed by the corresponding points and the unknown posture is simply called the simultaneous polynomials.
 The coefficient calculation unit 11 receives the three or more pairs of corresponding points and the two known postures, calculates the coefficients of the simultaneous polynomials, and outputs them to the simultaneous polynomial solving unit 12.
 The simultaneous polynomial solving unit 12 receives the coefficients of the simultaneous polynomials calculated by the coefficient calculation unit 11, solves the simultaneous polynomials, and outputs all of their solutions.
 The real solution extraction unit 13 receives all the solutions of the simultaneous polynomials computed by the simultaneous polynomial solving unit 12, extracts every real solution among them, and outputs the real solutions. If not even one real solution can be extracted, the real solution extraction unit 13 aborts the subsequent processing and outputs a "no solution" flag.
 The unknown posture candidate conversion unit 14 receives all the real solutions extracted by the real solution extraction unit 13, and calculates and outputs unknown posture candidates. Specifically, it converts each real solution into a value that is treated in the subsequent computation as one unknown posture candidate.
 The position candidate calculation unit 2 is a processing unit that receives the unknown posture candidates calculated by the unknown posture candidate calculation unit 1, and calculates and outputs candidates for the relative position of the camera.
 The minimum error candidate extraction unit 3 receives the unknown posture candidates, the position candidates, and the corresponding points, computes the one or more unknown postures and positions that minimize a predetermined error function representing the geometric relation among the corresponding points, the unknown posture, and the position, and outputs them. When the corresponding points are replaced using the known postures as described later, the minimum error candidate extraction unit 3 may also receive the known postures.
 In the present embodiment, the unknown posture candidate calculation unit 1 (more specifically, the coefficient calculation unit 11, the simultaneous polynomial solving unit 12, the real solution extraction unit 13, and the unknown posture candidate conversion unit 14), the position candidate calculation unit 2, and the minimum error candidate extraction unit 3 are realized by, for example, hardware designed to perform the specific arithmetic processing, or by an information processing device such as a CPU (Central Processing Unit) that operates according to a program.
 Next, the operation of the position and orientation estimation apparatus of this embodiment will be described. FIG. 2 is a flowchart showing an example of the operation of the first embodiment of the position and orientation estimation apparatus according to the present invention.
 First, when the three or more pairs of corresponding points and the two known postures are input, the coefficient calculation unit 11 takes the unknown posture and the relative position of the camera as variables, calculates the coefficients of the predetermined function (hereinafter, the simultaneous polynomials) expressed in terms of the corresponding points and the unknown posture, and outputs the coefficients to the simultaneous polynomial solving unit 12 (step S11).
 Here, the posture of the camera is represented by a rotation matrix with three degrees of freedom, or by the three variables needed to express that rotation matrix. The degrees of freedom of the posture indicate how many variables are used to express the nine elements of the 3×3 matrix representing the camera posture. The two known postures are two known variables among the three variables, or equivalently a rotation matrix expressed with two degrees of freedom. The unknown posture is the remaining variable, or a rotation matrix expressed with one degree of freedom.
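As an illustration of this parameterization, the sketch below composes the full 3-degree-of-freedom posture from two known angles α, β and one unknown angle θ. The axis assignment (α about x, β about y, θ about z) is an assumption for the sketch, not a convention fixed by the text:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Two known posture parameters (e.g. from an accelerometer) and one unknown.
alpha, beta, theta = 0.1, -0.2, 0.7

# Full 3-DoF camera posture as a product of single-axis rotations.
R = rot_z(theta) @ rot_x(alpha) @ rot_y(beta)

# A valid posture is orthonormal with determinant +1.
is_rotation = np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```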
 In the present embodiment, the simultaneous polynomials whose coefficients the coefficient calculation unit 11 computes are uniquely determined by how the degrees of freedom of the camera posture are expressed. The coefficient calculation unit 11 may therefore compute the coefficients of the simultaneous polynomials from the three or more pairs of corresponding points and the two known postures in accordance with, for example, a predefined representation of the degrees of freedom of the camera posture or a predefined form of the simultaneous polynomials. Alternatively, the coefficient calculation unit 11 may hold several predefined forms of the simultaneous polynomials corresponding to different representations of the degrees of freedom of the camera posture, and select the form to use according to the value of a setting parameter read at start-up or the like.
 Next, the simultaneous polynomial solving unit 12 receives the coefficients of the simultaneous polynomials calculated by the coefficient calculation unit 11, and solves the simultaneous polynomials using the corresponding points and known postures input in step S11 (step S12). The simultaneous polynomial solving unit 12 outputs all solutions satisfying the simultaneous polynomials to the real solution extraction unit 13. The solutions at this stage may include values inappropriate as an unknown posture; for example, some or all of them may be complex numbers.
 When all the solutions of the simultaneous polynomials solved by the simultaneous polynomial solving unit 12 are input, the real solution extraction unit 13 extracts every real solution among them, if any exist (Yes in step S13), and outputs the real solutions to the unknown posture candidate conversion unit 14. If not even one real solution can be extracted from the solutions, the real solution extraction unit 13 outputs a "no solution" flag as the position and orientation estimation result and ends the operation (No in step S13; step S17). For example, if all solutions are complex numbers, the real solution extraction unit 13 outputs the "no solution" flag and terminates. The "no solution" flag may be, for example, a Boolean value, or a predetermined position and orientation value representing the absence of a solution.
 When the simultaneous polynomials have at least one real solution, the unknown posture candidate conversion unit 14 receives all the real solutions and converts them into unknown posture candidates (step S14). The unknown posture candidates obtained by this computation are output to the position candidate calculation unit 2 and the minimum error candidate extraction unit 3. The unknown posture candidate conversion unit 14 converts each obtained real solution into a value treated as an unknown posture candidate (a single numerical value representing the unknown posture, or a 3×3 rotation matrix with one degree of freedom).
 The position candidate calculation unit 2 receives all the unknown posture candidates obtained in step S14, calculates the relative camera position corresponding to each unknown posture, and outputs the results to the minimum error candidate extraction unit 3 as position candidates (step S15).
 The minimum error candidate extraction unit 3 receives the unknown posture candidates, the position candidates, and the corresponding points, computes the one or more unknown postures and positions that minimize the predetermined error function representing the geometric relation among the corresponding points, the unknown posture, and the position, and outputs them (step S16). When the corresponding points are replaced using the known postures as described later, the minimum error candidate extraction unit 3 does not need to receive the known postures; when they are not replaced, it may receive the known postures.
 When three pairs of corresponding points are input, the minimum error candidate extraction unit 3 outputs all of the input unknown posture candidates and position candidates as the unknown postures and positions. When four or more pairs are input, the minimum error candidate extraction unit 3 computes, from among all the input unknown posture candidates and position candidates, the one or more positions and unknown postures that minimize the predetermined error function, and outputs them. The reason this minimization is not performed with three pairs is that the error is zero for every unknown posture candidate, so it is impossible to determine which candidate is closest to the true solution. With four or more pairs, the error differs from candidate to candidate, so the single candidate with the minimum error can be selected.
 (Example 1)
 Hereinafter, the first embodiment is described using a concrete example. First, the epipolar equation with the unknown posture and the relative camera position as variables is explained. In the following, I denotes the identity matrix, det the determinant, ∥·∥ the L2 norm of a vector, and [ ]× the matrix representation of the cross product of a three-dimensional vector. Let the number of input corresponding-point pairs be n, the i-th corresponding point coordinate in the first image be v_i, and the i-th corresponding point coordinate in the second image be v'_i. The internal camera parameters such as the focal length and the optical center are assumed to be calibrated, and v_i and v'_i are assumed to be expressed in homogeneous coordinates. The numbers of the v_i and the v'_i are assumed to be equal. With the two known postures denoted α and β and the unknown posture denoted θ, the relative posture of the camera is expressed as the rotation matrix of Equation (1).
[Math. 1]
 In this example, the posture is expressed as in Equation (1), but it can be expressed arbitrarily depending on how the axis directions, rotation directions, and order of multiplication are defined. Also, in this embodiment the known posture is expressed by the two parameters α and β, but it may instead be expressed using a unit quaternion, or, as described in Non-Patent Document 2, using a rotation axis and a rotation amount.
 When the relative position of the camera is represented by a three-dimensional vector t with two degrees of freedom, the corresponding points v_i and v'_i, the position t, and the posture R satisfy Equation (2), called the epipolar equation.
[Math. 2]
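A minimal numeric check of the epipolar relation, written here in the standard form v'_iᵀ[t]×R v_i = 0 (the exact sign and symbol convention of Equation (2), which is rendered as an image, is an assumption): project synthetic 3D points into two views and confirm that the constraint vanishes for every correspondence.

```python
import numpy as np

def skew(t):
    # [t]x : matrix form of the cross product, so skew(t) @ w == np.cross(t, w).
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(5, 3))  # points in front of both cameras

# Camera 1 is the reference frame; camera 2 sees x2 = R @ x1 + t.
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, -0.1])

v1 = np.array([x / x[2] for x in X])                      # homogeneous image points, view 1
v2 = np.array([(R @ x + t) / (R @ x + t)[2] for x in X])  # homogeneous image points, view 2

E = skew(t) @ R                                           # essential matrix [t]x R
max_residual = max(abs(v2[i] @ E @ v1[i]) for i in range(len(X)))
```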
 The degree of freedom of t is set to two because of the scale ambiguity. Since α and β are known, replacing v_i with R_α R_β v_i transforms the epipolar equation into Equation (3).
[Math. 3]
 Since the unknowns of Equation (3) are three in total, namely the two representing t and θ, Equation (3) can be solved with a minimum of three pairs of corresponding points, and a least-squares solution can be computed with four or more pairs. First, the case of three pairs is described. When three pairs of corresponding points are input, Equation (3) can be written as Equation (4).
[Math. 4]
 Here, A is a 3×3 matrix containing θ as a variable. Equation (4) states that t is either the zero vector or lies in the null space of A. Since t is not the zero vector, t is the eigenvector corresponding to the zero eigenvalue of A. That is, θ is a solution of the equation, in the variable θ, obtained by setting one eigenvalue of A to zero.
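A sketch of this step under assumed conventions (θ about the z-axis; each row of A taken as the cross product (R_θ R_α R_β v_i) × v'_i, which is one consistent way to make A t = 0 express the three epipolar constraints of Equation (4)): det(A) vanishes at the true θ, and t is recovered from the null space.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Ground truth: two known angles, one unknown angle, unit-norm position.
alpha, beta, theta_true = 0.15, -0.25, 0.6
R_ab = rot_x(alpha) @ rot_y(beta)
t_true = np.array([0.6, -0.3, 0.74])
t_true /= np.linalg.norm(t_true)
R = rot_z(theta_true) @ R_ab

rng = np.random.default_rng(1)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(3, 3))  # exactly 3 correspondences
v1 = np.array([x / x[2] for x in X])
v2 = np.array([(R @ x + t_true) / (R @ x + t_true)[2] for x in X])

def build_A(theta):
    # Row i: (R_theta R_alpha R_beta v_i) x v2_i, so that A @ t == 0.
    w = (rot_z(theta) @ R_ab @ v1.T).T
    return np.cross(w, v2)

A = build_A(theta_true)
det_at_truth = np.linalg.det(A)

# The position candidate is the null vector of A (here via SVD).
t_hat = np.linalg.svd(A)[2][-1]
if t_hat @ t_true < 0:       # the sign of t is unobservable
    t_hat = -t_hat
```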
 For example, since the eigenvalues of A are the roots of a cubic equation, the simultaneous polynomial solving unit 12 may use an expression in which the eigenvalues are written in closed form with θ as the variable. In this case, the simultaneous polynomial solving unit 12 obtains θ by an iterative method such as Newton's method starting from a suitable initial value. Alternatively, since the eigenvalue is zero, the simultaneous polynomial solving unit 12 may solve det(A) = 0. To find θ satisfying det(A) = 0, the simultaneous polynomial solving unit 12 may, for example, replace cos θ and sin θ in det(A) with two independent variables c and s, and solve Equation (5), which adds the new constraint c² + s² = 1.
[Math. 5]
 Also, as described in Non-Patent Document 2, cos θ and sin θ can be expressed with a single variable using the half-angle formulas. In that case, Equation (5) becomes a univariate polynomial rather than simultaneous polynomials. The simultaneous polynomial solving unit 12 substitutes the obtained θ into A and computes t as the eigenvector corresponding to the zero eigenvalue of A. Since the absolute scale of t is unknown and t has two degrees of freedom, the norm of t is normalized to 1.
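A numeric sketch of the half-angle route (an assumed alternative to deriving the polynomial coefficients symbolically): with c = (1−u²)/(1+u²) and s = 2u/(1+u²), the quantity det(A)·(1+u²)³ is a polynomial of degree at most 6 in u = tan(θ/2), so its coefficients can be recovered from a few samples and all θ candidates obtained as its real roots.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

alpha, beta, theta_true = 0.15, -0.25, 0.6
R_ab = rot_x(alpha) @ rot_y(beta)
t_true = np.array([0.6, -0.3, 0.74])
t_true /= np.linalg.norm(t_true)
R = rot_z(theta_true) @ R_ab

rng = np.random.default_rng(1)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(3, 3))
v1 = np.array([x / x[2] for x in X])
v2 = np.array([(R @ x + t_true) / (R @ x + t_true)[2] for x in X])

def det_A(theta):
    w = (rot_z(theta) @ R_ab @ v1.T).T
    return np.linalg.det(np.cross(w, v2))

# det(A) has total degree <= 3 in (cos th, sin th); after the half-angle
# substitution, p(u) = det(A) * (1 + u^2)^3 has degree <= 6 in u.
u_samples = np.linspace(-3.0, 3.0, 9)
p_samples = [det_A(2.0 * np.arctan(u)) * (1.0 + u * u) ** 3 for u in u_samples]
coeffs = np.polyfit(u_samples, p_samples, 6)   # exact fit: 9 samples, degree 6

real_u = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-8]
theta_candidates = [2.0 * np.arctan(u) for u in real_u]
best_gap = min((abs(th - theta_true) for th in theta_candidates), default=np.inf)
```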
 Next, the case where four or more pairs of corresponding points are input is described. In that case, extending Equation (3) yields Equation (6).
[Math. 6]
 Here, B is an n×3 matrix, expressed as in Equation (7), that contains θ as a variable.
[Math. 7]
 As is known from Non-Patent Document 1 and Non-Patent Document 3, the error expressed by Equation (6) is the minimum eigenvalue of BᵀB, and t is not the zero vector but the eigenvector corresponding to the minimum eigenvalue of BᵀB. That is, θ is a solution of the problem, in the variable θ, of minimizing one eigenvalue of BᵀB. For example, since BᵀB is a 3×3 matrix, its eigenvalues can be written down with θ as the variable and obtained by an iterative method, as with Equation (4). Also, since one eigenvalue is the minimum, the simultaneous polynomial solving unit 12 may solve Equation (8) below.
[Math. 8]
 To find θ satisfying Equation (8), the simultaneous polynomial solving unit 12 may, for example, replace cos θ and sin θ in Equation (8) with the two independent variables c and s, and solve Equation (9) below, which adds the new constraint c² + s² = 1.
[Math. 9]
 As with Equation (5), and as described in Non-Patent Document 2, cos θ and sin θ can also be expressed with a single variable using the half-angle formulas; Equation (9) then becomes a univariate polynomial rather than simultaneous polynomials. The simultaneous polynomial solving unit 12 substitutes the obtained θ into BᵀB and computes t as the eigenvector corresponding to the minimum eigenvalue of BᵀB. As in the three-pair case, the norm of t is normalized to 1.
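A sketch of the n ≥ 4 case under the same assumed conventions as the earlier sketches, with the closed-form polynomial replaced by a simple grid scan for illustration: for noise-free data the smallest eigenvalue of BᵀB vanishes at the true θ, and the corresponding eigenvector recovers t.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

alpha, beta, theta_true = 0.15, -0.25, 0.6
R_ab = rot_x(alpha) @ rot_y(beta)
t_true = np.array([0.6, -0.3, 0.74])
t_true /= np.linalg.norm(t_true)
R = rot_z(theta_true) @ R_ab

rng = np.random.default_rng(2)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(6, 3))  # n = 6 correspondences
v1 = np.array([x / x[2] for x in X])
v2 = np.array([(R @ x + t_true) / (R @ x + t_true)[2] for x in X])

def B_matrix(theta):
    # n x 3 matrix; row i = (R_theta R_alpha R_beta v_i) x v2_i (assumed row convention).
    w = (rot_z(theta) @ R_ab @ v1.T).T
    return np.cross(w, v2)

def min_eig(theta):
    Bm = B_matrix(theta)
    return np.linalg.eigvalsh(Bm.T @ Bm)[0]

# The error of Equation (6) is the smallest eigenvalue of B^T B; scan theta.
grid = np.linspace(-np.pi, np.pi, 4096)
theta_hat = grid[int(np.argmin([min_eig(th) for th in grid]))]

Bm = B_matrix(theta_hat)
t_hat = np.linalg.eigh(Bm.T @ Bm)[1][:, 0]  # eigenvector of the smallest eigenvalue
t_hat /= np.linalg.norm(t_hat)              # normalize the norm of t to 1
if t_hat @ t_true < 0:
    t_hat = -t_hat
```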
 In this example, the cases of three pairs and of four or more pairs were described separately, but in the three-pair case a solution equivalent to that of Equation (5) can also be obtained by solving Equation (9). This is because the relation of Equation (10) below holds, so the solutions of Equation (5) are contained among the solutions of Equation (9).
[Math. 10]
 Next, the operation of each unit in this example is described concretely. In step S11, the coefficient calculation unit 11 receives the corresponding points v_i and v'_i (i = 1, …, n; n ≥ 3) and the two known postures (α, β), computes the coefficients of the simultaneous polynomials or univariate polynomial expressed by Equation (5) or Equation (9) when n = 3, or by Equation (9) when n ≥ 4, and outputs them to the simultaneous polynomial solving unit 12.
 The two known postures can be obtained using, for example, an acceleration sensor or a vanishing point. When an acceleration sensor is used, the camera is held stationary and the direction of gravity is measured. The gravity direction is represented by a three-dimensional vector, but since the magnitude of its norm is ignored, it has two degrees of freedom. Therefore, the difference in gravity direction between the times the two images were captured can be expressed by the two parameters α and β. A vanishing point is a point at which two or more parallel lines in three-dimensional space intersect on the image under projective transformation. A method of computing the known postures from one vanishing point is described in Non-Patent Document 2.
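A sketch of how a static accelerometer reading can supply two known angles (the roll/pitch convention below is an assumption; consistent with the text, only the direction of gravity is used, i.e. two degrees of freedom):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def tilt_from_gravity(g):
    """Two posture angles (roll about x, pitch about y) from a static
    accelerometer sample; the norm of g carries no information."""
    gx, gy, gz = g / np.linalg.norm(g)
    alpha = np.arctan2(gy, gz)
    beta = np.arctan2(-gx, np.hypot(gy, gz))
    return alpha, beta

g = np.array([0.2, -0.3, 9.6])      # raw accelerometer sample
alpha, beta = tilt_from_gravity(g)

# Applying the two recovered rotations aligns gravity with the z-axis,
# which fixes two of the camera's three orientation parameters.
aligned = rot_y(beta) @ rot_x(alpha) @ (g / np.linalg.norm(g))
```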
 In step S12, the simultaneous polynomial solving unit 12 solves the simultaneous polynomials using the coefficients output from the coefficient calculation unit 11, and outputs the solutions of Equation (5) or Equation (9) to the real solution extraction unit 13. To solve the simultaneous polynomials of Equation (5) or Equation (9), various methods can be used, for example, a method that reduces the system to a univariate polynomial using a Sylvester matrix, or a method that computes the solutions in the two variables simultaneously using a Gröbner basis.
 In step S13, the real solution extraction unit 13 receives the solutions of the simultaneous polynomials expressed by Equation (5) or Equation (9), computes all real solutions among them, and determines whether any real solution exists. If there is a real solution, the real solution extraction unit 13 outputs it to the unknown posture candidate conversion unit 14. If all solutions are complex numbers, the real solution extraction unit 13 outputs the "no solution" flag and ends the operation (step S17). The "no solution" flag may be, for example, a Boolean value, or a predetermined position and orientation value representing the absence of a solution.
 In step S14, the unknown posture candidate conversion unit 14 receives all the real solutions, computes the unknown posture candidates, and outputs them to the position candidate calculation unit 2 and the minimum error candidate extraction unit 3. An unknown posture candidate is given, for example, by R_θ in Equation (1). For example, if there are k real solutions, there are k values of θ; the unknown posture candidate conversion unit 14 substitutes these k values of θ into Equation (1) one by one to obtain k unknown posture candidates (3×3 matrices).
 In step S15, the position candidate calculation unit 2 computes a position candidate for each of the input unknown posture candidates and outputs the candidates to the minimum error candidate extraction unit 3. The position candidate is the eigenvector corresponding to the zero eigenvalue of A, or the eigenvector corresponding to the minimum eigenvalue of BᵀB. The eigenvector can be obtained by eigenvalue decomposition or singular value decomposition.
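Both routes give the same null vector; a small sketch (here M stands in for A, and MᵀM for BᵀB, built with a known null direction):

```python
import numpy as np

# Construct a 3x3 matrix with a known null vector t_true (a stand-in for A).
t_true = np.array([0.6, -0.3, 0.74])
t_true /= np.linalg.norm(t_true)
rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3)) @ (np.eye(3) - np.outer(t_true, t_true))

# Route 1: right singular vector of the smallest singular value (SVD).
t_svd = np.linalg.svd(M)[2][-1]

# Route 2: eigenvector of the smallest eigenvalue of M^T M (eigendecomposition).
t_eig = np.linalg.eigh(M.T @ M)[1][:, 0]

# Fix the unobservable sign before comparing.
t_svd = t_svd if t_svd @ t_true > 0 else -t_svd
t_eig = t_eig if t_eig @ t_true > 0 else -t_eig
```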
 In step S16, the minimum error candidate extraction unit 3 first receives the corresponding points, the unknown posture candidates, and the position candidates. The minimum error candidate extraction unit 3 behaves differently according to the number of input corresponding-point pairs. When n = 3, it outputs all the input unknown posture candidates and position candidates as the unknown postures and positions: since det(A) = det(BᵀB) = 0, it follows that ∥Bt∥² = 0, and it is impossible to determine which candidate is closest to the true solution. When n ≥ 4 and at least one unknown posture candidate and position candidate have been input, the minimum error candidate extraction unit 3 outputs, as the unknown posture and position, the pair of unknown posture candidate and position candidate for which the minimum eigenvalue of BᵀB is smallest.
 In this example, the position candidate calculation unit 2 narrowed down the candidates for the optimal solution by batch processing over all real solutions, but the solutions of the simultaneous polynomials can also be processed sequentially, one at a time. For example, each time the simultaneous polynomial solving unit 12 obtains one solution, it outputs the solution to the real solution extraction unit 13. If the real solution extraction unit 13 determines the solution to be real, the conversion to a posture candidate by the unknown posture candidate conversion unit 14 and the minimum-error-candidate determination by the minimum error candidate extraction unit 3 are executed as a loop. As the minimum-error-candidate determination, the minimum error candidate extraction unit 3 may store the candidate with the smallest error so far and, each time a real solution is extracted, perform the error computation, the comparison against the current minimum, and the overwriting of the minimum candidate and minimum value. Here, the branching according to the number of posture candidates and the number of input points may be performed collectively by the minimum error candidate extraction unit 3.
 Also, in this example the coefficient calculation unit 11 and the minimum error candidate extraction unit 3 each counted the number of corresponding-point pairs and switched their processing accordingly; alternatively, based on the determination of the number of corresponding points by the coefficient calculation unit 11, the subsequent processing may be divided into the two cases n = 3 and n ≥ 4. In that case, if n = 3, the unknown posture candidate calculation unit 1 and the position candidate calculation unit 2 compute Equation (5); if n ≥ 4, the unknown posture candidate calculation unit 1, the position candidate calculation unit 2, and the minimum error candidate extraction unit 3 compute Equation (9).
 Also, in this example the minimum error candidate extraction unit 3 used the minimum eigenvalue of BᵀB as the error, but det(BᵀB) may be used instead. In that case, the minimum error candidate extraction unit 3, placed after the unknown posture candidate conversion unit 14, computes det(BᵀB), selects only the one unknown posture candidate that minimizes it, and outputs that candidate to the position candidate calculation unit 2. As other errors, the Euclidean distance between the epipolar line and the corresponding point, or the Sampson error, which approximates that Euclidean distance, may be used.
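A sketch of the Sampson error mentioned as an alternative, using its standard first-order form (conventions as in the earlier sketches): it is near zero for a true correspondence and grows as the correspondence moves off the epipolar line.

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def sampson_error(E, p1, p2):
    """First-order approximation of the geometric distance of the
    correspondence (p1, p2) to the epipolar constraint p2^T E p1 = 0."""
    Ep1, Etp2 = E @ p1, E.T @ p2
    num = float(p2 @ E @ p1) ** 2
    den = Ep1[0] ** 2 + Ep1[1] ** 2 + Etp2[0] ** 2 + Etp2[1] ** 2
    return num / den

# Essential matrix from a posture/position pair: E = [t]x R.
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, -0.1])
E = skew(t) @ R

X = np.array([0.4, -0.2, 5.0])                 # a 3D point seen by both cameras
p1 = X / X[2]
p2 = (R @ X + t) / (R @ X + t)[2]

err_exact = sampson_error(E, p1, p2)           # ~0 for a true correspondence
err_noisy = sampson_error(E, p1, p2 + np.array([0.01, 0.0, 0.0]))
```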
 As described above, the position and orientation estimation apparatus of this embodiment first computes all candidates that minimize the error based on the geometric relation between the position and the unknown posture, and then extracts the candidate with the smallest error among them, so the output unknown posture and position are guaranteed to be the global minimum solution. That is, unlike the method of Non-Patent Document 3, which applies an after-the-fact correction that ignores the constraints, the unknown posture and position candidates are computed individually, so no error increase due to such correction occurs.
 The position/orientation estimation apparatus of this embodiment also needs no correction means, which reduces the amount of computation. That is, there is no need to perform matrix operations such as the singular value decomposition used for correction in Non-Patent Document 3.
 Therefore, according to the position/orientation estimation apparatus of this embodiment, the relative position and orientation of the camera can be estimated accurately, stably, and quickly.
Embodiment 2.
 A position/orientation estimation apparatus according to a second embodiment (Embodiment 2) will be described with reference to the drawings. FIG. 3 is a block diagram showing a configuration example of the second embodiment of the position/orientation estimation apparatus according to the present invention. The configuration of this embodiment differs from that of the first embodiment shown in FIG. 1 in that the unknown posture candidate calculation unit 1 further includes a second-order optimality verification unit 4. Since the components other than the second-order optimality verification unit 4 are the same as in the first embodiment, their description is omitted.
 The second-order optimality verification unit 4 receives the real solutions output by the real solution extraction unit 13, checks the sign of the function obtained by differentiating the equation in the unknown posture twice with respect to the unknown posture, and outputs the real solutions for which this function is positive to the unknown posture candidate conversion unit 14 as candidates satisfying the second-order sufficient optimality condition. If no real solution yields a positive value, the second-order optimality verification unit 4 aborts the subsequent processing and outputs a no-solution flag.
 In this embodiment, the second-order optimality verification unit 4 is realized by, for example, hardware designed to perform specific arithmetic processing, or by an information processing device such as a CPU operating according to a program.
 Next, the operation of the position/orientation estimation apparatus of this embodiment will be described. FIG. 4 is a flowchart showing an example of the operation of the position/orientation estimation apparatus of this embodiment. Since the operations other than step S21 are the same as in the first embodiment, their description is omitted.
 For each real solution of the simultaneous polynomials extracted by the real solution extraction unit 21, the second-order optimality verification unit 4 checks the sign of the function obtained by differentiating the equation in the unknown posture twice with respect to the unknown posture (step S21), and outputs the positive solutions to the unknown posture candidate conversion unit 14 as solutions satisfying the second-order sufficient optimality condition. If no real solution satisfies the second-order sufficient optimality condition, the second-order optimality verification unit 4 outputs a no-solution flag and ends the operation (No in step S21, step S17). The no-solution flag may be, for example, a Boolean value, or predetermined posture and position values representing "no solution".
(Example 2)
 Next, the operation of each unit in the second embodiment will be described concretely. The operations of the units other than the second-order optimality verification unit 4 are the same as in the first embodiment. The second-order optimality verification unit 4 receives the real solutions output by the real solution extraction unit 13, substitutes each real solution into the function obtained by differentiating the equation in the unknown posture twice with respect to the unknown posture, and checks its sign. Here, the equation in the unknown posture may be, for example, an eigenvalue of (BᵀB) written out with θ as the variable, or det(BᵀB).
 If the function evaluated at a real solution is positive, the second-order optimality verification unit 4 outputs that solution to the unknown posture candidate conversion unit 14 as a solution satisfying the second-order sufficient optimality condition (Yes in step S21). If none is positive, the second-order optimality verification unit 4, like the real solution extraction unit 13, outputs a no-solution flag and ends the operation (No in step S21, step S17).
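The sign test of step S21 can be sketched numerically. The one-variable quartic below is purely illustrative, not the patent's actual equation in θ: its stationary points are the real roots of the first derivative, and only those where the second derivative is positive survive, which discards the saddle-type point at θ = 0:

```python
import numpy as np

# Illustrative objective f(theta) = theta^4 - 2*theta^2 (NOT the patent's equation).
f = np.array([1.0, 0.0, -2.0, 0.0, 0.0])

# Stationary points: real roots of f'(theta) = 0, here {-1, 0, +1}.
stationary = [r.real for r in np.roots(np.polyder(f)) if abs(r.imag) < 1e-9]

# Step S21: keep only roots where f''(theta) > 0 (second-order sufficient
# condition); theta = 0 is not a local minimum here and is discarded.
f2 = np.polyder(f, 2)
minima = sorted(r for r in stationary if np.polyval(f2, r) > 0)
print(minima)  # the two local minima, near -1 and +1
```

The retained roots are all local minima, so comparing their error values suffices to find the global optimum, exactly as the text argues.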
 In the position/orientation estimation apparatus of this embodiment, the second-order optimality verification unit 4 eliminates inappropriate solutions, called saddle points, that satisfy the simultaneous polynomials but are not local optima. Once the saddle points are removed, every real solution retained by the second-order optimality verification unit 4 is necessarily a local optimum, so the solution with the smallest error among them is guaranteed to be the global optimum. Furthermore, when no real solution passes the verification, aborting the subsequent processing reduces the computational cost.
 Therefore, according to the position/orientation estimation apparatus of this embodiment, the reliability of the position and orientation output as the estimation result can be further improved, and the relative position and orientation of the camera can be estimated even faster.
Embodiment 3.
 A position/orientation estimation apparatus according to a third embodiment of the present invention will be described with reference to the drawings. FIG. 5 is a block diagram showing a configuration example of the third embodiment of the position/orientation estimation apparatus according to the present invention. The position/orientation estimation apparatus of this embodiment further includes a three-dimensional shape restoration unit 5 in addition to the configuration of the first embodiment shown in FIG. 1 or the second embodiment shown in FIG. 3. Since the components other than the three-dimensional shape restoration unit 5 are the same as in the first or second embodiment, their description is omitted.
 The three-dimensional shape restoration unit 5 receives the corresponding points and the known posture, together with the position and unknown posture output by the minimum error candidate extraction unit 3, restores the three-dimensional coordinates of the corresponding points, and outputs them. The three-dimensional shape restoration unit 5 also outputs the position and posture together with the restored three-dimensional coordinates. Here, the posture may be expressed, for example, as three parameters or as a rotation matrix.
 In this embodiment, the three-dimensional shape restoration unit 5 is realized by, for example, hardware designed to perform specific arithmetic processing, or by an information processing device such as a CPU operating according to a program.
 Next, the operation of the position/orientation estimation apparatus of this embodiment will be described. FIG. 6 is a flowchart showing an example of the operation of the position/orientation estimation apparatus of this embodiment. Since the operations other than step S31 are the same as in the second embodiment, their description is omitted. Note that when the configuration of this embodiment is the configuration of the first embodiment plus the three-dimensional shape restoration unit 5, step S21 in FIG. 6 is unnecessary.
 The three-dimensional shape restoration unit 5 receives the corresponding points and the known posture, together with the position and unknown posture output by the minimum error candidate extraction unit 3, restores the three-dimensional coordinates of the corresponding points, and outputs them together with the position and posture (step S31).
(Example 3)
 Next, the operation of each unit in this embodiment will be described concretely. The operations of the units other than the three-dimensional shape restoration unit 5 are the same as in the first or second embodiment.
 The three-dimensional shape restoration unit 5 receives the corresponding points and the known posture, together with the position and unknown posture output by the minimum error candidate extraction unit 3, restores the three-dimensional coordinates of the corresponding points, and outputs them. The three-dimensional shape restoration unit 5 also outputs the position and posture together with the restored three-dimensional coordinates. Here, the posture may be expressed, for example, as three parameters or as a rotation matrix. The three-dimensional shape restoration unit 5 may also output only the three-dimensional coordinates, without the position and posture.
 Methods for restoring the three-dimensional coordinates of corresponding points from the relative position and posture of the cameras are described in, for example, Non-Patent Document 4 and Non-Patent Document 5.
 The position/orientation estimation apparatus of each of the above embodiments can be realized by, for example, hardware corresponding to each unit, and can also be realized by an information processing system. FIG. 7 is a block diagram of the case where the position/orientation estimation apparatus according to the present invention is implemented on an information processing system. The information processing system shown in FIG. 7 is a general information processing system comprising a processor 61, a program memory 62, and a storage medium 63. The storage medium 63 may be a storage area consisting of separate storage media, or a storage area consisting of a single storage medium. As the storage medium, a RAM (Random Access Memory) or a magnetic storage medium such as a hard disk can be used.
 The program memory 62 stores a program for causing the processor 61 to perform the processing of the above-described unknown posture candidate calculation unit 1 (more specifically, the coefficient calculation unit 11, the simultaneous polynomial solving unit 12, the real solution extraction unit 13, and the unknown posture candidate conversion unit 14), the position candidate calculation unit 2, the minimum error candidate extraction unit 3, the second-order optimality verification unit 4, and the three-dimensional shape restoration unit 5, and the processor 61 operates according to this program. The processor 61 may be any processor that operates according to a program, such as a CPU. In this way, the present invention can also be realized by a computer program. It is not necessary for every unit that can be operated by a program to actually be operated by the program; some units may be configured in hardware. The units may also each be realized as separate units.
 Although the present invention has been described above with reference to the embodiments and examples, the present invention is not limited to the above embodiments and examples. Various changes that those skilled in the art can understand may be made to the configuration and details of the present invention within the scope of the present invention.
 FIG. 8 is a block diagram showing the configuration of the main part of the position/orientation estimation apparatus according to the present invention. As shown in FIG. 8, the position/orientation estimation apparatus according to the present invention comprises, as its main components: an unknown posture candidate calculation unit 1 which receives three or more pairs of corresponding points contained in two images and a known posture, namely the two known posture parameters among the three posture parameters representing the relative posture of the camera that captured the two images, and which, using the input corresponding points and known posture, outputs as unknown posture candidates all solutions of the unknown posture, the one remaining unknown posture parameter, that satisfy a predetermined function expressed using the unknown posture and the corresponding points, with the unknown posture and the relative position of the camera as variables; a position candidate calculation unit 2 which calculates a candidate for the relative position of the camera for each of the unknown posture candidates; and a minimum error candidate extraction unit 3 which receives all of the unknown posture candidates, the candidate relative camera position for each of them, and the corresponding points, and which extracts, from among the unknown posture candidates and the position candidates, the one or more unknown postures and relative camera positions that minimize a predetermined error function representing the geometric relationship among the corresponding points, the relative position of the camera, and the unknown posture.
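The three-stage flow of FIG. 8 can be sketched as plain control flow. The three callables below are placeholders standing in for units 1, 2, and 3; their names and signatures are not part of the patent:

```python
def estimate_pose(correspondences, known_angles,
                  solve_candidates, position_for, error_of):
    # 1) Unit 1: all unknown-angle candidates satisfying the polynomial system.
    thetas = solve_candidates(correspondences, known_angles)
    if not thetas:
        return None  # the "no solution" flag of the text
    # 2) Unit 2: a relative-position candidate for every angle candidate.
    candidates = [(th, position_for(th, correspondences)) for th in thetas]
    # 3) Unit 3: keep the pair minimizing the geometric error function.
    return min(candidates, key=lambda c: error_of(*c, correspondences))

# Toy run with stub callables: three angle candidates, error minimized at 1.2.
best = estimate_pose(
    correspondences=None, known_angles=(0.0, 0.0),
    solve_candidates=lambda pts, known: [0.5, 1.2, 2.0],
    position_for=lambda th, pts: (-th, 0.0, 0.0),
    error_of=lambda th, t, pts: (th - 1.2) ** 2,
)
```

Because every candidate is evaluated before one is chosen, the minimum found over this exhaustive set is global, which is the guarantee the surrounding text claims for the apparatus.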
 The above embodiments also disclose position/orientation estimation apparatuses as described in (1) to (3) below.
(1) The position/orientation estimation apparatus may be configured so that the unknown posture candidate calculation unit (for example, unknown posture candidate calculation unit 1) includes: a coefficient calculation unit (for example, coefficient calculation unit 11) which receives the corresponding points and the known posture and calculates the coefficients of the predetermined function expressed using the unknown posture and the corresponding points; a simultaneous polynomial solving unit (for example, simultaneous polynomial solving unit 12) which receives the coefficients and calculates all solutions satisfying the predetermined function; a real solution extraction unit (for example, real solution extraction unit 13) which extracts all real solutions from the solutions obtained by the simultaneous polynomial solving unit and outputs either all the extracted real solutions or a flag indicating that there is no solution; and an unknown posture candidate conversion unit (for example, unknown posture candidate conversion unit 14) which converts each of the extracted real solutions of the unknown posture into one unknown posture candidate.
(2) The position/orientation estimation apparatus may be configured so that the unknown posture candidate calculation unit includes a second-order optimality verification unit (for example, second-order optimality verification unit 4) which, among all the real solutions satisfying the predetermined function, computes and outputs those satisfying the second-order sufficient optimality condition. Such a position/orientation estimation apparatus further improves the reliability of the position and orientation output as the estimation result, and can estimate the relative position and orientation of the camera even faster.
(3) The position/orientation estimation apparatus may be configured to include a three-dimensional shape restoration unit (for example, three-dimensional shape restoration unit 5) which receives the corresponding points, the known posture, and the relative camera position and unknown posture output by the minimum error candidate extraction unit, and which restores and outputs the three-dimensional coordinates of the corresponding points.
 This application claims priority based on Japanese Patent Application No. 2012-191262, filed on August 31, 2012, the entire disclosure of which is incorporated herein.
 Although the present invention has been described above with reference to the embodiments and examples, the present invention is not limited to the above embodiments and examples. Various changes that those skilled in the art can understand may be made to the configuration and details of the present invention within the scope of the present invention.
Industrial Applicability
 The present invention can be applied to, for example, restoring the three-dimensional shape of a subject and generating panoramic images of a background.
Description of Symbols
1 unknown posture candidate calculation unit
2 position candidate calculation unit
3 minimum error candidate extraction unit
4 second-order optimality verification unit
5 three-dimensional shape restoration unit
11 coefficient calculation unit
12 simultaneous polynomial solving unit
13 real solution extraction unit
14 unknown posture candidate conversion unit
21 real solution extraction unit
61 processor
62 program memory
63 storage medium

Claims (6)

  1.  A position/orientation estimation apparatus comprising:
     an unknown posture candidate calculation unit which receives three or more pairs of corresponding points contained in two images and a known posture, the known posture being the two known posture parameters among three posture parameters representing a relative posture of a camera that captured the two images, and which, using the input corresponding points and known posture, outputs, as unknown posture candidates, all solutions of an unknown posture that satisfy a predetermined function expressed using the unknown posture and the corresponding points, the unknown posture being the one unknown posture parameter among the posture parameters, with the unknown posture and a relative position of the camera as variables;
     a position candidate calculation unit which calculates a candidate for the relative position of the camera for each of the unknown posture candidates; and
     a minimum error candidate extraction unit which receives all of the unknown posture candidates, the candidate for the relative position of the camera for each of the unknown posture candidates, and the corresponding points, and which extracts, from among the unknown posture candidates and the position candidates, one or more unknown postures, and relative positions of the camera, that minimize a predetermined error function representing a geometric relationship among the corresponding points, the relative position of the camera, and the unknown posture.
  2.  The position/orientation estimation apparatus according to claim 1, wherein the unknown posture candidate calculation unit includes:
     a coefficient calculation unit which receives the corresponding points and the known posture, and calculates coefficients of the predetermined function expressed using the unknown posture and the corresponding points;
     a simultaneous polynomial solving unit which receives the coefficients and calculates all solutions satisfying the predetermined function;
     a real solution extraction unit which extracts all real solutions from the solutions obtained by the simultaneous polynomial solving unit, and outputs either all of the extracted real solutions or a flag indicating that there is no solution; and
     an unknown posture candidate conversion unit which converts each of the real solutions of the unknown posture extracted by the real solution extraction unit into one unknown posture candidate.
  3.  The position/orientation estimation apparatus according to claim 1 or 2, wherein the unknown posture candidate calculation unit includes a second-order optimality verification unit which, among all the real solutions satisfying the predetermined function, computes and outputs the real solutions satisfying a second-order sufficient optimality condition.
  4.  The position/orientation estimation apparatus according to any one of claims 1 to 3, further comprising a three-dimensional shape restoration unit which receives the corresponding points, the known posture, and the relative camera position and unknown posture output by the minimum error candidate extraction unit, and which restores and outputs the three-dimensional coordinates of the corresponding points.
  5.  A position/orientation estimation method comprising:
     receiving three or more pairs of corresponding points contained in two images and a known posture, the known posture being the two known posture parameters among three posture parameters representing a relative posture of a camera that captured the two images, and, using the input corresponding points and known posture, outputting, as unknown posture candidates, all solutions of an unknown posture that satisfy a predetermined function expressed using the unknown posture and the corresponding points, the unknown posture being the one unknown posture parameter among the posture parameters, with the unknown posture and a relative position of the camera as variables;
     calculating a candidate for the relative position of the camera for each of the unknown posture candidates; and
     receiving all of the unknown posture candidates, the candidate for the relative position of the camera for each of the unknown posture candidates, and the corresponding points, and extracting, from among the unknown posture candidates and the position candidates, one or more unknown postures, and relative positions of the camera, that minimize a predetermined error function representing a geometric relationship among the corresponding points, the relative position of the camera, and the unknown posture.
  6.  A position/orientation estimation program for causing a computer to execute:
     unknown posture candidate calculation processing of receiving three or more pairs of corresponding points contained in two images and a known posture, the known posture being the two known posture parameters among three posture parameters representing a relative posture of a camera that captured the two images, and, using the input corresponding points and known posture, outputting, as unknown posture candidates, all solutions of an unknown posture that satisfy a predetermined function expressed using the unknown posture and the corresponding points, the unknown posture being the one unknown posture parameter among the posture parameters, with the unknown posture and a relative position of the camera as variables;
     position candidate calculation processing of calculating a candidate for the relative position of the camera for each of the unknown posture candidates; and
     minimum error candidate extraction processing of receiving all of the unknown posture candidates, the candidate for the relative position of the camera for each of the unknown posture candidates, and the corresponding points, and extracting, from among the unknown posture candidates and the position candidates, one or more unknown postures, and relative positions of the camera, that minimize a predetermined error function representing a geometric relationship among the corresponding points, the relative position of the camera, and the unknown posture.
PCT/JP2013/004849 2012-08-31 2013-08-13 Location attitude estimation device, location attitude estimation method, and location attitude estimation program WO2014034035A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014532761A JP6260533B2 (en) 2012-08-31 2013-08-13 Position / orientation estimation apparatus, position / orientation estimation method, and position / orientation estimation program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012191262 2012-08-31
JP2012-191262 2012-08-31

Publications (1)

Publication Number Publication Date
WO2014034035A1

Family

ID=50182876

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/004849 WO2014034035A1 (en) 2012-08-31 2013-08-13 Location attitude estimation device, location attitude estimation method, and location attitude estimation program

Country Status (2)

Country Link
JP (1) JP6260533B2 (en)
WO (1) WO2014034035A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000074641A (en) * 1998-08-27 2000-03-14 Nec Corp Method for measuring three-dimensional shape
JP2006242943A (en) * 2005-02-04 2006-09-14 Canon Inc Position attitude measuring method and device
JP3158823U (en) * 2009-12-25 2010-04-22 財団法人日本交通管理技術協会 Reference axis vertical setting device in 3D measurement method using digital camera
WO2012160787A1 (en) * 2011-05-20 2012-11-29 日本電気株式会社 Position/posture estimation device, position/posture estimation method and position/posture estimation program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AKIYOSHI SHIOURA ET AL.: "Dai 11 Kai Chapter 3 Hisenkei Keikaku 3.2 Seiyaku Nashi Saitekika" [Lecture 11, Chapter 3: Nonlinear Programming, 3.2 Unconstrained Optimization], 2009, pages 6 - 19, Retrieved from the Internet <URL:http://www.dais.is.tohoku.ac.jp/-shioura/teaching/mp08/mp08-11.pdf> [retrieved on 20131029] *
TAKUYA KANEKO ET AL.: "Hybrid GMRES-ho ni Tsuite" [On the Hybrid GMRES Method], DAI 54 KAI (HEISEI 9 NEN ZENKI) ZENKOKU TAIKAI KOEN RONBUNSHU (1) [Proceedings of the 54th National Convention (First Half of 1997) (1)], Architecture, Software Science and Engineering, vol. 6F-5, no. 1-81, 12 March 1997 (1997-03-12) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016114580A (en) * 2014-12-18 2016-06-23 International Business Machines Corporation Processing device, processing method and program
WO2016208404A1 (en) * 2015-06-23 2016-12-29 Sony Corporation Device and method for processing information, and program
JPWO2016208404A1 (en) * 2015-06-23 2018-04-12 Sony Corporation Information processing apparatus and method, and program
US10600202B2 (en) 2015-06-23 2020-03-24 Sony Corporation Information processing device and method, and program

Also Published As

Publication number Publication date
JPWO2014034035A1 (en) 2016-08-08
JP6260533B2 (en) 2018-01-17

Similar Documents

Publication Publication Date Title
Gilitschenski et al. Deep orientation uncertainty learning based on a Bingham loss
JP6261811B2 (en) Method for determining motion between a first coordinate system and a second coordinate system
US9959625B2 (en) Method for fast camera pose refinement for wide area motion imagery
JP5833507B2 (en) Image processing device
EP3633606B1 (en) Information processing device, information processing method, and program
JPWO2018168255A1 (en) Camera parameter estimation device, camera parameter estimation method, and program
EP2960859B1 (en) Constructing a 3d structure
JP6636894B2 (en) Camera information correction device, camera information correction method, and camera information correction program
EP3300025B1 (en) Image processing device and image processing method
CN113256718B (en) Positioning method and device, equipment and storage medium
JP2017036970A (en) Information processor, information processing method, and program
JP6922348B2 (en) Information processing equipment, methods, and programs
US10252417B2 (en) Information processing apparatus, method of controlling information processing apparatus, and storage medium
JP6260533B2 (en) Position / orientation estimation apparatus, position / orientation estimation method, and position / orientation estimation program
Ricolfe-Viala et al. Optimal conditions for camera calibration using a planar template
JP2019192299A (en) Camera information correction device, camera information correction method, and camera information correction program
EP4187483A1 (en) Apparatus and method with image processing
JP5030183B2 (en) 3D object position and orientation measurement method
CN113048985B (en) Camera relative motion estimation method under known relative rotation angle condition
JP6154759B2 (en) Camera parameter estimation apparatus, camera parameter estimation method, and camera parameter estimation program
Bartoli On the non-linear optimization of projective motion using minimal parameters
JP2017163386A (en) Camera parameter estimation apparatus, camera parameter estimation method, and program
WO2019058487A1 (en) Three-dimensional reconstructed image processing device, three-dimensional reconstructed image processing method, and computer-readable storage medium having three-dimensional reconstructed image processing program stored thereon
JP2017162449A (en) Information processing device, and method and program for controlling information processing device
JP5215615B2 (en) Three-dimensional position information restoration apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13832512

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014532761

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13832512

Country of ref document: EP

Kind code of ref document: A1