JP4227037B2 - Imaging system and calibration method - Google Patents
- Publication number: JP4227037B2
- Application number: JP2004049108A
- Authority: JP (Japan)
- Prior art keywords: moving, imaging means, transformation matrix, fixed, imaging
- Legal status: Expired - Fee Related
- Classification: Length Measuring Devices By Optical Means (AREA)
Description
The present invention relates to an imaging system and a calibration method for calibrating fixed imaging means and moving imaging means.
Conventional camera calibration methods fall into two classes: strong calibration and weak calibration. In strong calibration, a projective transformation matrix is estimated as the projective relationship between the world coordinate system and the image coordinate system from combinations of markers (landmark points), i.e., points in three-dimensional space whose coordinates (X, Y, Z) are known, and their observed coordinates (u, v) in the image; the camera's intrinsic and extrinsic parameters are then obtained from the estimated matrix. Because this estimates the projective relationship between the world coordinate system we inhabit and the image coordinate system, the estimation result is easy to understand and evaluate intuitively, and the method has the advantage of high affinity with CG (Computer Graphics) models, which are generally designed in the world coordinate system. For example, Non-Patent Document 1 discloses performing strong calibration in a large space on the scale of a soccer stadium.
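The strong-calibration step described above can be sketched as follows. This is a minimal NumPy implementation of the standard direct linear transform (DLT), not code from the patent; the function names are illustrative.

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Estimate a 3x4 projective transformation matrix P from at least six
    marker correspondences (X, Y, Z) <-> (u, v) by the direct linear
    transform: stack two homogeneous equations per marker and take the
    null vector of the stacked system via SVD."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)   # null vector = the 12 entries of P

def project(P, M):
    """Project a world point M = (X, Y, Z) to image coordinates (u, v)."""
    m = P @ np.append(np.asarray(M, dtype=float), 1.0)
    return m[:2] / m[2]
```

Decomposing the recovered P into the intrinsic and extrinsic parameters mentioned in the passage would be a further (RQ-decomposition) step, omitted here.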
Weak calibration, on the other hand, computes the epipolar equation (F matrix), i.e., the projective relationship between two images, from combinations of corresponding points in the two images, and estimates the cameras' intrinsic parameters and relative pose from it (see, for example, Non-Patent Document 2). Because the projective relationship is estimated solely from corresponding-point information between the two images, this approach imposes few constraints on the target scene and requires comparatively little calibration work.
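The weak-calibration step can be sketched with the normalized 8-point algorithm, a standard way to estimate the F matrix from corresponding points. Again this is an illustrative sketch, not the patent's own implementation:

```python
import numpy as np

def estimate_fundamental_matrix(pts1, pts2):
    """Estimate the F matrix (epipolar equation) with m2^T F m1 = 0 from
    eight or more point correspondences, using the normalized 8-point
    algorithm: normalize the points, solve the homogeneous system by SVD,
    enforce rank 2, then denormalize."""
    def normalize(pts):
        pts = np.asarray(pts, dtype=float)
        c = pts.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        return (T @ np.c_[pts, np.ones(len(pts))].T).T, T

    n1, T1 = normalize(pts1)
    n2, T2 = normalize(pts2)
    A = np.array([[x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1, 1.0]
                  for (x1, y1, _), (x2, y2, _) in zip(n1, n2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce rank 2: a valid F is singular
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1
```

Recovering intrinsic parameters and relative pose from F, as the passage describes, requires further decomposition steps not shown here.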
However, the former, strong calibration, requires a large number of accurate 3D-coordinate/image-coordinate correspondences for every camera in order to achieve accurate calibration. As the number of cameras and the size of the imaging area grow, accurate data collection becomes difficult and the amount of data processing becomes enormous. It is therefore difficult to apply strong calibration to mobile imaging, where position and orientation change from moment to moment.
The latter, weak calibration, requires little work and is thus well suited to moving imaging means, but because it solves the inverse problem of estimating 3D information from 2D information, errors easily arise in the estimation result. Moreover, the result of weak calibration alone does not yield the projective relationship with the world coordinate system.
An object of the present invention is to provide an imaging system and a calibration method that can associate the moving imaging means with the world coordinate system and that can calibrate the fixed imaging means and the moving imaging means with high accuracy through a simple calibration procedure.
An imaging system according to the present invention comprises: a plurality of fixed imaging means fixed at predetermined positions in a space; a moving imaging means mounted on a movable body that can move to an arbitrary position in the space, the moving imaging means moving within the space with the movable body; a fixed-camera projective transformation matrix calculation means that calculates, by strong calibration, the projective transformation matrices of the plurality of fixed imaging means from the known observed coordinates, in images previously captured by the plurality of fixed imaging means, of markers whose three-dimensional coordinates are known; an epipolar equation calculation means that calculates the epipolar equations between the plurality of fixed imaging means and the moving imaging means; a moving-camera projective transformation matrix calculation means that calculates the projective transformation matrix of the moving imaging means from the projective transformation matrices and the epipolar equations; and a calibration means that acquires the projective transformation matrix of the moving imaging means calculated by the moving-camera projective transformation matrix calculation means and calibrates the moving imaging means. The plurality of fixed imaging means and the moving imaging means photograph a plurality of markers placed at predetermined positions in the space; the epipolar equation calculation means detects corresponding points of the photographed markers in the images of the fixed and moving imaging means and calculates the epipolar equations from the observed image coordinates of the detected corresponding points. Letting P be the projective transformation matrix of a fixed imaging means, F the epipolar equation between that fixed imaging means and the moving imaging means, and M an arbitrary three-dimensional coordinate in the space, the moving-camera projective transformation matrix calculation means substitutes M and solves (m_t)^T F P M = 0 to obtain the observed coordinate m_t, in the image of the moving imaging means, corresponding to M, and then calculates the projective transformation matrix of the moving imaging means from the three-dimensional coordinates M and the observed coordinates m_t.
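The relation (m_t)^T F P M = 0 says that m_t lies on the epipolar line F (P M) induced in the moving image by a fixed camera. The following sketch, written under the assumption that two fixed cameras are available (as in the embodiment below), locates m_t as the intersection of the two epipolar lines; it is an illustration, not the patent's code:

```python
import numpy as np

def predict_moving_observation(M, fixed_Ps, Fs):
    """Predict the moving-camera image point m_t for a world point M.
    For each fixed camera i with projective transformation matrix P_i and
    epipolar equation F_i (convention m_t^T F_i m_i = 0), the constraint
    m_t^T F_i (P_i M) = 0 places m_t on the epipolar line l_i = F_i (P_i M);
    with two fixed cameras, m_t is the intersection of l_1 and l_2,
    computed as a cross product in homogeneous coordinates."""
    Mh = np.append(np.asarray(M, dtype=float), 1.0)
    l1 = Fs[0] @ (fixed_Ps[0] @ Mh)
    l2 = Fs[1] @ (fixed_Ps[1] @ Mh)
    mt = np.cross(l1, l2)       # homogeneous intersection of the two lines
    return mt[:2] / mt[2]
```

Feeding many such (M, m_t) pairs into a DLT then yields the moving camera's projective transformation matrix without solving any 2D-to-3D inverse problem, which is the point the passage makes.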
In the imaging system according to the present invention, the projective transformation matrices of the plurality of fixed imaging means are calculated, so the fixed imaging means can be calibrated with high accuracy. The moving imaging means can be associated with the world coordinate system, and since no inverse problem of estimating 3D information from 2D information needs to be solved, its projective transformation matrix can be obtained with high accuracy, unaffected by estimation error, and the moving imaging means can be calibrated with high accuracy. As a result, the moving imaging means can be associated with the world coordinate system, and both the fixed and moving imaging means can be calibrated with high accuracy through a simple calibration procedure.
In addition, corresponding points of the markers photographed by the plurality of fixed imaging means and the moving imaging means are detected, and the epipolar equations can be calculated with high accuracy from the detected corresponding points.
Furthermore, the observed coordinates, in the image captured by the moving imaging means, that correspond to arbitrary three-dimensional coordinates in the space are calculated from the projective transformation matrices and the epipolar equations, and the projective transformation matrix of the moving imaging means is calculated from those three-dimensional and observed coordinates. There is therefore no need to solve the inverse problem of estimating 3D information from 2D information; the projective transformation matrix of the moving imaging means can be obtained with high accuracy, unaffected by estimation error, and the position and orientation of the moving imaging means can be calibrated with high accuracy.
Preferably, the moving imaging means includes a movable imaging means whose imaging direction can be changed in the pan and tilt directions; the imaging system further comprises a rotation center calculation means that calculates the rotation center position of the moving imaging means; and the calibration means comprises a moved-state projective transformation matrix calculation means that obtains the rigid transformation matrix for a movement of the moving imaging means from the rotation center position calculated by the rotation center calculation means and the pan and tilt values of that movement relative to the rotation center position, multiplies the obtained rigid transformation matrix into the projective transformation matrix calculated by the moving-camera projective transformation matrix calculation means to calculate the projective transformation matrix after the movement, and thereby calibrates the pan and tilt directions of the moving imaging means.
In this case, the rotation center position of the moving imaging means is calculated from images captured by the plurality of fixed imaging means, and the projective transformation matrix after a movement is calculated from the pan and tilt values of that movement relative to the calculated rotation center position. No extrinsic-parameter estimation by decomposition of the projective transformation matrix is needed, so the moved-state projective transformation matrix can be obtained with high accuracy, unaffected by estimation error, and the pan and tilt directions of the moving imaging means can be calibrated with high accuracy.
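The rigid transformation for a pan/tilt movement about the rotation center can be sketched as below. The patent multiplies the obtained rigid transformation matrix into the projective transformation matrix; here that is written P_moved = P @ G. The axis conventions (pan about the y-axis, tilt about the x-axis) are assumptions of this sketch, not stated in the text:

```python
import numpy as np

def _rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def _rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def pan_tilt_rigid_transform(center, pan, tilt):
    """4x4 rigid transformation G for a pan/tilt rotation about the
    estimated rotation center: translate the center to the origin,
    rotate, translate back (G = T(c) R T(-c)).  The moved projection
    matrix is then P_moved = P @ G (order/convention assumed here)."""
    center = np.asarray(center, dtype=float)
    R = _rot_y(pan) @ _rot_x(tilt)
    G = np.eye(4)
    G[:3, :3] = R
    G[:3, 3] = center - R @ center   # T(c) R T(-c) collapsed into one matrix
    return G
```

A useful sanity check on this construction is that the rotation center itself must be a fixed point of G, so its image point is unchanged by any pan/tilt movement.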
Preferably, the rotation center calculation means calculates the three-dimensional positions of markers photographed by the plurality of fixed imaging means, acquires the pan and tilt values of the moving imaging means at the moments when each marker is centered in its image, and calculates the rotation center position of the moving imaging means from the three-dimensional positions and the pan and tilt values.
In this case, the three-dimensional positions of the markers photographed by the plurality of fixed imaging means are calculated, the pan and tilt values of the moving imaging means at the moments when each marker is centered in the image are acquired, and the rotation center position of the moving imaging means is calculated from the calculated three-dimensional positions and the corresponding pan and tilt values, so the rotation center position of the moving imaging means can be obtained with high accuracy.
Preferably, the moved-state projective transformation matrix calculation means calculates, by the least-squares method, the control matrix C of the movable imaging means that relates the imaging position of the movable imaging means to its pan and tilt values, satisfying λ[pan tilt 1]^T = C[X Y Z 1]^T, where (X, Y, Z) is the three-dimensional position of a marker, pan and tilt are the pan and tilt values when the marker is photographed at the center of the image, and λ is an arbitrary real number. The imaging system further comprises a pan/tilt value calculation means that accepts an input of an imaging target area for the movable imaging means and, using the position (X, Y, Z) to be attended to in the three-dimensional space given as the imaging target area and the calculated control matrix C, calculates from λ[pan tilt 1]^T = C[X Y Z 1]^T the pan and tilt values corresponding to the accepted imaging target area, and a control means that controls the movable imaging means according to the calculated pan and tilt values.
In this case, the movable imaging means can be automatically driven to the pan and tilt values appropriate for the desired imaging target area.
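Because λ is an unknown scale, the fit of C from the relation λ[pan tilt 1]^T = C[X Y Z 1]^T is a homogeneous least-squares problem, solvable with the same SVD null-space technique as the DLT. A sketch under that reading (illustrative names, not the patent's code):

```python
import numpy as np

def estimate_control_matrix(points_3d, pan_tilt):
    """Least-squares fit of the 3x4 control matrix C satisfying
    lambda * [pan, tilt, 1]^T = C @ [X, Y, Z, 1]^T for each sample,
    by eliminating lambda and taking the SVD null vector."""
    A = []
    for (X, Y, Z), (p, t) in zip(points_3d, pan_tilt):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -p * X, -p * Y, -p * Z, -p])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -t * X, -t * Y, -t * Z, -t])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def pan_tilt_for(C, M):
    """Pan/tilt command that centers the 3D point M, per the fitted C."""
    v = C @ np.append(np.asarray(M, dtype=float), 1.0)
    return v[:2] / v[2]
```

With C fitted from marker-centering samples, `pan_tilt_for` realizes the described automatic control: given a position in the imaging target area, it returns the pan and tilt values to point the camera at it.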
A calibration method according to the present invention is a method for calibrating a moving imaging means that is mounted on a movable body capable of moving to an arbitrary position in a space in which a plurality of fixed imaging means are fixed at predetermined positions, the moving imaging means moving within the space with the movable body. The method includes: a step of calculating, by strong calibration, the projective transformation matrices of the plurality of fixed imaging means from the known observed coordinates, in images previously captured by the plurality of fixed imaging means, of markers whose three-dimensional coordinates are known; a step of detecting corresponding points of markers photographed by the plurality of fixed imaging means and the moving imaging means in their respective images, and calculating the epipolar equations between the plurality of fixed imaging means and the moving imaging means from the observed image coordinates of the detected corresponding points; and a step of, letting P be the projective transformation matrix of a fixed imaging means, F the epipolar equation between that fixed imaging means and the moving imaging means, and M an arbitrary three-dimensional coordinate in the space, substituting M and solving (m_t)^T F P M = 0 to obtain the observed coordinate m_t in the image of the moving imaging means corresponding to M, calculating the projective transformation matrix of the moving imaging means from the three-dimensional coordinates M and the observed coordinates m_t, and thereby calibrating the moving imaging means.
According to the present invention, the moving imaging means can be associated with the world coordinate system, and the fixed imaging means and the moving imaging means can be calibrated with high accuracy through a simple calibration procedure.
Hereinafter, an imaging system according to an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram showing the configuration of an imaging system according to an embodiment of the present invention.
The imaging system shown in FIG. 1 comprises two or more fixed cameras 11 to 1m (m is an integer of 2 or more), two or more fixed-camera calibration processing units 21 to 2m, a network 31, one or more moving cameras 41 to 4n (n is an integer of 1 or more), and moving-camera calibration processing units 51 to 5n. Each of the fixed cameras 11 to 1m is connected to the corresponding fixed-camera calibration processing unit 21 to 2m, and each of the moving cameras 41 to 4n is connected to the corresponding moving-camera calibration processing unit 51 to 5n. The fixed-camera calibration processing units 21 to 2m and the moving-camera calibration processing units 51 to 5n are connected so as to communicate with one another via the network 31. A wired or wireless LAN (Local Area Network) or the like can be used as the network.
The fixed camera 11 is fixed at a predetermined position in the space, for example on the ceiling or a wall of a room or corridor, captures images in a predetermined direction, and outputs the image data to the fixed-camera calibration processing unit 21. The other fixed cameras 12 to 1m are configured and operate in the same way. As the fixed cameras 11 to 1m, for example, a Sony EVI-D100 high-speed pan/tilt/zoom integrated color camera can be used in fixed mode.
Before the overall process starts, the fixed-camera calibration processing unit 21 is assumed to have already calibrated the fixed camera 11 (calculated its projective transformation matrix) using combinations of the 2D coordinates of marker observation positions in images captured by the fixed camera 11 and the 3D coordinates at which the markers exist in space. In the step of obtaining the projective transformation matrix of a moving camera, the unit outputs the calibration result (projective transformation matrix) via the network 31 in response to requests from the moving-camera calibration processing units 51 to 5n. At the same time, it performs marker tracking on the images captured by the fixed camera 11 and also outputs the markers' 2D coordinates to the moving-camera calibration processing units 51 to 5n via the network 31. In the step of calibrating the pan/tilt (swing) movement of a moving camera, the 3D positions of markers in the space are estimated using images captured by the fixed cameras (images from at least two fixed cameras), and the markers' 3D position information is output via the network 31 in response to requests from the moving-camera calibration processing units 51 to 5n. The other fixed-camera calibration processing units 22 to 2m are configured and operate in the same way. The fixed-camera calibration processing units 21 to 2m can be implemented on an ordinary computer comprising a ROM (read-only memory), a CPU (central processing unit), a RAM (random access memory), an external storage device, an input device, a communication device, a display device, and so on, or on a dedicated hardware circuit. As the markers, fluorescent targets can be used, but the markers are not limited to this example; easily detected image features, such as object corners or boundaries, may be used instead.
The moving camera 41 is mounted on a movable body that can move to an arbitrary position in the space, for example an autonomous mobile robot or a person, and consists of a movable camera that moves to arbitrary positions in the space as its carrier moves and whose imaging direction can be changed in the pan and tilt directions; it captures images in the imaging direction and outputs the image data to the moving-camera calibration processing unit 51. The other moving cameras 42 to 4n are configured and operate in the same way. As the moving cameras 41 to 4n, for example, a Sony EVI-D100 high-speed pan/tilt/zoom integrated color camera can be used in movable mode.
The moving-camera calibration processing unit 51 acquires, via the network 31, the calibration results (projective transformation matrices) obtained by at least two of the fixed-camera calibration processing units 21 to 2m and the marker tracking results for the images captured by the fixed cameras 11 to 1m, and calibrates the moving camera 41 (calculates its projective transformation matrix) using this information together with the result of marker tracking on the images captured by the moving camera 41 (the markers' 2D coordinates). The other moving-camera calibration processing units 52 to 5n are configured and operate in the same way. The moving-camera calibration processing units 51 to 5n can be implemented on an ordinary computer comprising a ROM, a CPU, a RAM, an external storage device, an input device, a communication device, a display device, and so on, or on a dedicated hardware circuit.
FIG. 2 is a block diagram showing the detailed configuration of the imaging system of FIG. 1 when two fixed cameras and one moving camera are used. FIG. 2 shows an example of calibration using the fixed cameras 11 and 12 and the moving camera 41; for ease of illustration and explanation, the fixed cameras 13 to 1m, the fixed-camera calibration processing units 22 to 2m, the network 31, the moving cameras 42 to 4n, and the moving-camera calibration processing units 52 to 5n are omitted.
As shown in FIG. 2, the fixed-camera calibration processing unit 21 functions, by executing a predetermined calibration program, as a fixed-camera projective transformation matrix calculation unit 61 and a 3D position measurement unit 62. The moving-camera calibration processing unit 51 functions, by executing a predetermined calibration program, as a corresponding point detection unit 71, an epipolar equation calculation unit 72, an observed coordinate calculation unit 73, a moving-camera projective transformation matrix calculation unit 74, a moving camera control unit 75, a pan/tilt value acquisition unit 76, a rotation center calculation unit 77, and a swing-movement projective transformation matrix calculation unit 78.
The fixed-camera projective transformation matrix calculation unit 61 calculates the projective transformation matrices of the fixed cameras 11 and 12 by strong calibration (calibrating the fixed cameras 11 and 12) and outputs the calculated matrices to the observed coordinate calculation unit 73.
The fixed cameras 11 and 12 and the moving camera 41 photograph a plurality of markers placed at predetermined positions in the space, and the corresponding point detection unit 71 detects corresponding points of the photographed markers and outputs them to the epipolar equation calculation unit 72. The epipolar equation calculation unit 72 calculates the epipolar equations between the fixed cameras 11, 12 and the moving camera 41 from the detected corresponding points and outputs them to the observed coordinate calculation unit 73. The observed coordinate calculation unit 73 calculates, from the projective transformation matrices and the epipolar equations, the observed coordinates in the image captured by the moving camera 41 that correspond to arbitrary 3D coordinates in the space, and outputs the 3D/observed coordinate pairs to the moving-camera projective transformation matrix calculation unit 74. The moving-camera projective transformation matrix calculation unit 74 calculates the projective transformation matrix of the moving camera 41 from the 3D coordinates and the observed coordinates.
The 3D position measurement unit 62 measures the 3D positions of the markers photographed by the fixed cameras 11 and 12 by the stereo method and outputs them to the rotation center calculation unit 77. The moving camera control unit 75 controls the pan and tilt position of the moving camera 41, and during marker-tracing shooting controls it so that the marker stays at the center of the image. The pan/tilt value acquisition unit 76 acquires from the moving camera control unit 75 the pan and tilt values of the moving camera 41 at the moments when a marker photographed by the fixed cameras 11 and 12 is centered in the image, and outputs them to the rotation center calculation unit 77. The rotation center calculation unit 77 calculates the pan/tilt rotation center position of the moving camera 41 from the 3D positions and the corresponding pan and tilt values, and outputs it to the swing-movement projective transformation matrix calculation unit 78.
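The stereo measurement performed by the 3D position measurement unit 62 can be sketched as linear triangulation from the two calibrated fixed cameras. This is a minimal illustration under standard assumptions, not code from the patent:

```python
import numpy as np

def triangulate(P1, m1, P2, m2):
    """Linear triangulation: recover the 3D marker position from its
    observed image coordinates m1 = (u1, v1) and m2 = (u2, v2) in two
    calibrated cameras with projective transformation matrices P1, P2.
    Each observation contributes two homogeneous equations; the 3D point
    is the null vector of the stacked 4x4 system."""
    def rows(P, m):
        u, v = m
        return [u * P[2] - P[0], v * P[2] - P[1]]
    A = np.asarray(rows(P1, m1) + rows(P2, m2))
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize to (X, Y, Z)
```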
When the moving camera 41 has moved by a pan/tilt (swing) motion, the pan/tilt value acquisition unit 76 acquires the pan and tilt values of the movement from the moving camera control unit 75 and outputs them to the swing-movement projective transformation matrix calculation unit 78. The swing-movement projective transformation matrix calculation unit 78 acquires the projective transformation matrix estimated by the moving-camera projective transformation matrix calculation unit 74 in the position/orientation calibration described above, calculates the projective transformation matrix of the moving camera 41 after the swing movement from the acquired projective transformation matrix, the rotation center position, and the pan and tilt values of the swing movement relative to the rotation center, and thereby calibrates the pan and tilt directions of the moving camera 41 (calibration of the moving camera 41 after the swing motion).
In the present embodiment, the fixed cameras 11 to 1m correspond to an example of the fixed imaging means; the moving cameras 41 to 4n correspond to an example of the moving imaging means and the movable imaging means; the fixed projective transformation matrix calculation unit 61 corresponds to an example of the fixed projective transformation matrix calculating means; the corresponding point detection unit 71 and the epipolar equation calculation unit 72 correspond to an example of the epipolar equation calculating means; and the observation coordinate calculation unit 73 and the movement projective transformation matrix calculation unit 74 correspond to an example of the movement projective transformation matrix calculating means. Further, the three-dimensional position measurement unit 62, the rotation center calculation unit 77, and the pan/tilt value acquisition unit 76 correspond to an example of the rotation center calculating means, and the swing-movement projective transformation matrix calculation unit 78 corresponds to an example of the movement-time projective transformation matrix calculating means.
Next, the calibration processing performed by the imaging system configured as described above will be described. FIG. 3 is a flowchart for explaining the calibration processing by the imaging system shown in FIG. 2. The following description takes the calibration of the fixed cameras 11 and 12 and the moving camera 41 as an example; the calibration of the other fixed cameras 13 to 1m and moving cameras 42 to 4n is performed in the same manner.
First, in step S1, the fixed projective transformation matrix calculation unit 61 calculates the projective transformation matrices of the fixed cameras 11 and 12 by known strong calibration, as described below, thereby calibrating the fixed cameras 11 and 12. The fixed projective transformation matrix calculation unit 61 then outputs the calculated projective transformation matrices to the observation coordinate calculation unit 73.
FIG. 4 is a schematic diagram for explaining strong calibration. As shown in FIG. 4, fixed cameras C0 and C1 (for example, the fixed cameras 11 and 12), which provide the cue for obtaining the projective transformation between the image coordinate system and the world coordinate system, are set up, and the projective transformation matrices P0 and P1 of the fixed cameras C0 and C1 are calculated by strong calibration using the known three-dimensional coordinates M(X, Y, Z) and the known observation positions m0(u0, v0) and m1(u1, v1). In this calibration processing, the larger the number of fixed cameras, the higher the calibration accuracy of the moving camera, but the processing cost also increases; the number of fixed cameras used for calibration is therefore determined as appropriate in consideration of calibration accuracy and processing cost.
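The strong-calibration step above is, in effect, the standard direct linear transform (DLT): each marker whose world coordinates M(X, Y, Z) and observed position m(u, v) are known contributes two linear constraints on the twelve entries of the projection matrix. A minimal sketch in Python with NumPy, assuming noise-free correspondences (the function names are illustrative and not taken from the patent):

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P with the DLT method.

    world_pts: (N, 3) known marker coordinates M = (X, Y, Z), N >= 6.
    image_pts: (N, 2) observed coordinates m = (u, v) on the image.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # Each correspondence m ~ P M yields two linear equations in P.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null vector of A (smallest singular value) gives P up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, M):
    """Apply m ~ P M and dehomogenize to pixel coordinates (u, v)."""
    m = P @ np.append(M, 1.0)
    return m[:2] / m[2]
```

With six or more markers in general position the stacked system has a one-dimensional null space, and the singular vector for the smallest singular value recovers P up to scale, which is all a projective relation requires.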
Next, the moving camera 41 is moved to a new shooting location. At this time, the position and orientation of the moving camera 41 are set so that the moving camera 41 can photograph an area shared with the fixed cameras 11 and 12; as long as this condition is satisfied, the orientation of the moving camera 41 may be arbitrary. This position is taken as the basic posture for the pan/tilt calibration processing from step S6 onward. In step S2, the fixed cameras 11 and 12 and the moving camera 41 photograph a plurality of markers arranged at predetermined positions in the space of the shooting location, and the corresponding point detection unit 71 acquires the marker images captured by the fixed cameras 11 and 12 and the moving camera 41, detects corresponding points between the fixed cameras 11 and 12 and the moving camera 41, and outputs them to the epipolar equation calculation unit 72.
Next, in step S3, the epipolar equation calculation unit 72 calculates the epipolar equations between the fixed cameras 11 and 12 and the moving camera 41 using the detected corresponding points, and outputs them to the observation coordinate calculation unit 73.
Next, in step S4, the observation coordinate calculation unit 73 virtually acquires, from the projective transformation matrices calculated in step S1 and the epipolar equations calculated in step S3, the observation coordinates on the image of the moving camera 41 that correspond to arbitrary three-dimensional coordinates in the space, and outputs the combinations of three-dimensional coordinates and observation coordinates to the movement projective transformation matrix calculation unit 74.
Next, in step S5, the movement projective transformation matrix calculation unit 74 calculates the projective transformation matrix of the moving camera 41 from the acquired combinations of three-dimensional coordinates and observation coordinates by the same processing as strong calibration. Through the above processing, the position and orientation of the moving camera 41 are calibrated with high accuracy.
Here, the calibration processing of the moving camera 41 will be described in more detail. A point M in three-dimensional space is projected by the projective transformation matrix Pt onto a point mt on the image captured by the moving camera Ct (for example, the moving camera 41):

mt = Pt M … (1)
If at least six combinations of a point in three-dimensional space with known three-dimensional coordinates and its observation coordinates on the image are available, the projective transformation matrix Pt of the moving camera Ct can be estimated by strong calibration just as for the fixed cameras. In the present embodiment, however, Pt is obtained using the epipolar equations (hereinafter also referred to as F matrices) F0 and F1 between the fixed cameras C0, C1 and the moving camera Ct, together with the projective transformation matrices P0 and P1 of the fixed cameras C0 and C1.
FIG. 5 is a schematic diagram for explaining the processing of calculating the F matrices from corresponding points. As shown in FIG. 5, eight or more corresponding points observed between the fixed cameras C0, C1 and the moving camera Ct are acquired, and the F matrices F0 and F1 between the fixed cameras C0, C1 and the moving camera Ct are estimated using the information of these corresponding points. Since no three-dimensional information is needed to calculate an F matrix, the three-dimensional positions of the markers need not be measured here.
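Estimating an F matrix from eight or more corresponding points is conventionally done with the eight-point algorithm; the sketch below, assuming NumPy and clean correspondences, shows the linear core (practical implementations also normalize the image coordinates first):

```python
import numpy as np

def estimate_fundamental_matrix(pts_a, pts_b):
    """Estimate F such that m_b^T F m_a = 0 with the eight-point algorithm.

    pts_a, pts_b: (N, 2) corresponding points in two images, N >= 8.
    A minimal sketch: coordinate normalization is omitted for brevity.
    """
    A = []
    for (ua, va), (ub, vb) in zip(pts_a, pts_b):
        # Expanding m_b^T F m_a = 0 gives one linear equation per pair.
        A.append([ub * ua, ub * va, ub, vb * ua, vb * va, vb, ua, va, 1.0])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, the defining property of a fundamental matrix.
    U, S, Vt2 = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt2
```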
On the other hand, estimating an F matrix from corresponding-point information on the image of the moving camera Ct is strongly affected by image noise and is difficult to perform stably. However, as shown in FIG. 6, if there is a combination in which the two cameras are projected into each other's images, epipole information can be obtained directly from each image as the projected image of the other camera, and the F matrix can be estimated stably.
In the present embodiment, the orientation of the moving camera Ct can be controlled by the moving camera control unit 75, so a state in which the two cameras project into each other's images can easily be created; the fixed cameras C0, C1 and the moving camera Ct are set in the positional relationship shown in FIG. 6, and the F matrices F0 and F1 are estimated stably.
FIG. 7 is a schematic diagram for explaining the processing of acquiring combinations of three-dimensional coordinates M(X, Y, Z) and observation coordinates mt(u, v). As shown in FIG. 7, when m0 and m1 are the corresponding points on the images of the fixed cameras C0 and C1 for a point mt(u, v) on the image of the moving camera Ct, the corresponding points m0 and m1 satisfy the following epipolar equations F0 and F1:

(mt)^T F0 m0 = 0 … (2)
(mt)^T F1 m1 = 0 … (3)
At this time, the relationship between a point M in three-dimensional space and the observation points m0 and m1 on the images of the fixed cameras C0 and C1 is expressed by the projective transformation matrices P0 and P1 of the fixed cameras C0 and C1 as follows:

m0 = P0 M … (4)
m1 = P1 M … (5)
From equations (2) and (4), and equations (3) and (5), the two epipolar lines on the image of the moving camera Ct corresponding to the point M in three-dimensional space are obtained as follows:

(mt)^T F0 P0 M = 0 … (6)
(mt)^T F1 P1 M = 0 … (7)
By substituting arbitrary three-dimensional coordinates M(X, Y, Z) into equations (6) and (7) and solving them, a combination of the three-dimensional coordinate value M(X, Y, Z) and the observation coordinates mt(u, v) on the image is obtained. The projective transformation matrix Pt of the moving camera Ct is then estimated by the same processing as the strong calibration described above. In this way, the processing of steps S1 to S5 eliminates the need to solve the inverse problem of estimating three-dimensional information from two-dimensional information; the projective transformation matrix of the moving camera 41 can be obtained with high accuracy without being affected by estimation errors, and the position and orientation of the moving camera 41 can be calibrated with high accuracy.
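Equations (6) and (7) each define a homogeneous line in the moving camera's image, so the observation mt is simply the intersection of the two lines, which in homogeneous coordinates is the cross product of the two line vectors. A sketch under that reading, assuming NumPy (the function name is illustrative):

```python
import numpy as np

def virtual_observation(F0, P0, F1, P1, M):
    """Observation mt of world point M in the moving camera, eq. (6)-(7).

    Each equation (mt)^T F_i P_i M = 0 defines an epipolar line in the
    moving camera's image; mt is the intersection of the two lines,
    obtained as the cross product of the homogeneous line vectors.
    """
    Mh = np.append(np.asarray(M, dtype=float), 1.0)
    l0 = F0 @ (P0 @ Mh)    # epipolar line induced by fixed camera C0
    l1 = F1 @ (P1 @ Mh)    # epipolar line induced by fixed camera C1
    mt = np.cross(l0, l1)  # homogeneous intersection point
    return mt[:2] / mt[2]
```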
Next, in step S6, the fixed cameras 11 and 12 photograph a marker placed at a predetermined position in the space of the shooting location, and the three-dimensional position measurement unit 62 measures the three-dimensional position of the marker from its observed positions in the fixed cameras 11 and 12 by the stereo method, and outputs it to the rotation center calculation unit 77.
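The stereo method of step S6 can be sketched as standard linear triangulation from the two calibrated fixed cameras (a conventional construction, not code from the patent):

```python
import numpy as np

def triangulate(P0, P1, m0, m1):
    """Stereo method: recover the 3-D marker position from two calibrated views.

    P0, P1: 3x4 projection matrices of the two fixed cameras.
    m0, m1: observed (u, v) positions of the same marker in each image.
    """
    rows = []
    for P, (u, v) in ((P0, m0), (P1, m1)):
        # m ~ P M  =>  u * (row3 . M) - (row1 . M) = 0, likewise for v.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    Mh = Vt[-1]              # homogeneous 3-D point, up to scale
    return Mh[:3] / Mh[3]
```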
Next, in step S7, the moving camera control unit 75 controls the pan and tilt positions of the moving camera 41 so that the marker photographed by the fixed cameras 11 and 12 is positioned at the center of the image, and the pan/tilt value acquisition unit 76 acquires from the moving camera control unit 75 the pan and tilt values of the moving camera 41 when the marker is centered in the image, and outputs them to the rotation center calculation unit 77.
Next, in step S8, the rotation center calculation unit 77 determines whether a predetermined number (for example, two) of combinations of the marker's three-dimensional position and the pan and tilt values have been acquired. If not (NO in step S8), the processing returns to step S6, where marker-trace shooting is performed to acquire a new combination of three-dimensional position and pan and tilt values; if the predetermined number has been acquired (YES in step S8), the processing proceeds to step S9.
When the predetermined number of combinations of three-dimensional position and pan and tilt values has been acquired (YES in step S8), in step S9 the rotation center calculation unit 77 calculates the pan/tilt rotation center position of the moving camera 41 (the center position of the swing rotation) from the acquired combinations, and outputs it to the swing-movement projective transformation matrix calculation unit 78.
Next, when the moving camera 41 moves in the pan and tilt directions by its swing motion, in step S10 the pan/tilt value acquisition unit 76 acquires the pan and tilt values after the movement from the moving camera control unit 75 and outputs them to the swing-movement projective transformation matrix calculation unit 78. The swing-movement projective transformation matrix calculation unit 78 calculates the rigid transformation matrix of the moving camera 41 from the input pan and tilt values and the rotation center position, and multiplies the projective transformation matrix estimated in the position-and-orientation calibration processing described above by the calculated rigid transformation matrix to obtain the projective transformation matrix at the time of movement (after the swing).
Here, the calibration processing for the swing motion of the moving camera 41 will be described in more detail. FIG. 8 is a schematic diagram for explaining the processing of calculating the projective transformation matrix in a state moved from the basic posture by the swing motion. First, the projective transformation matrix P0 in the basic posture is obtained by the position-and-orientation calibration processing of the moving camera 41 described above. Next, marker-trace shooting is performed in which the pan and tilt values of the moving camera 41 are controlled so that a marker moving through the three-dimensional space is always observed at the center of the image, and the pan and tilt values Rn, i.e. the amount of movement at the time each marker is photographed, are obtained. At the same time, the marker is photographed by the calibrated fixed cameras 11 and 12, and the rotation center position T(tx, ty, tz) of the moving camera 41 is calculated by solving the simultaneous equations of the following equation (8) from the three-dimensional coordinates (Xn, Yn, Zn) of the marker and the pan and tilt values Rn:

[Xn - tx, Yn - ty, Zn - tz]^T = R [X0 - tx, Y0 - ty, Z0 - tz]^T … (8)
Here, (X0, Y0, Z0) are the three-dimensional coordinates of the marker in the basic posture, and R is the 3x3 rotation matrix given by the pan and tilt values Rn. Since there are three unknowns, the rotation center position T of the moving camera 41 can be obtained by photographing the marker in at least one posture other than the basic posture.
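Solving equation (8) for T is a small linear problem: rearranging Xn - T = R(X0 - T) gives (I - R) T = Xn - R X0 for each posture, and the postures can be stacked and solved by least squares. A sketch assuming NumPy; note that postures rotated about a single axis leave the component of T along that axis unconstrained, so the test data below varies both pan and tilt:

```python
import numpy as np

def rotation_center(X0, observations):
    """Solve eq. (8) for the pan/tilt rotation center T.

    X0: marker position (X0, Y0, Z0) seen in the basic posture.
    observations: list of (R_n, X_n) pairs, where R_n is the 3x3 rotation
    built from the pan/tilt values and X_n the marker position measured
    by the fixed cameras in that posture.
    Rearranging  X_n - T = R_n (X_0 - T)  gives  (I - R_n) T = X_n - R_n X_0,
    a linear system in the three unknowns of T.
    """
    A, b = [], []
    for R, Xn in observations:
        A.append(np.eye(3) - R)
        b.append(np.asarray(Xn, dtype=float) - R @ np.asarray(X0, dtype=float))
    T, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return T
```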
Alternatively, as shown in equation (9) below, reliability may be improved by taking a weighted average of the values T(txn, tyn, tzn) calculated from multiple shots. In this case, the larger the amount of movement from the basic posture, the higher the accuracy of the estimate, so it is preferable to use as the weight the norm of the vector whose elements are the pan and tilt values, normalized over all measurement markers.
Next, the rigid transformation matrix of the moving camera 41 at the time of movement is obtained from the rotation center position T obtained as described above and the pan and tilt values Rn, and the obtained rigid transformation matrix is multiplied with the projective transformation matrix P0 of the basic posture to obtain the projective transformation matrix Pn of the moving camera 41 at the time of movement (after the swing). In this case, no external-parameter estimation by decomposition of the projective transformation matrix is required, and the projective transformation matrix of the moving camera 41 at the time of movement can be obtained with high accuracy without being affected by estimation errors.
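Reading equation (8) in reverse gives the rigid transformation directly: a world point X observed after the swing projects exactly as the point R^T(X - T) + T did in the basic posture, so the swung matrix is Pn = P0 G for the corresponding 4x4 rigid matrix G. A sketch of this composition, assuming NumPy:

```python
import numpy as np

def swung_projection(P0, R, T):
    """Projection matrix after a pan/tilt swing, from the basic-posture P0.

    R: 3x3 rotation built from the current pan/tilt values.
    T: rotation-center position found in the previous step.
    A world point X seen after the swing projects like the point
    R^T (X - T) + T did in the basic posture, so P_n = P0 @ G, where G
    is the corresponding 4x4 rigid transformation.
    """
    G = np.eye(4)
    G[:3, :3] = R.T
    G[:3, 3] = T - R.T @ T
    return P0 @ G
```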
At the same time, from the combinations (n = 1, …, N) of the three-dimensional coordinates (Xn, Yn, Zn) of the markers acquired by marker-trace shooting and the pan and tilt values (pan_n, tilt_n) at which each marker was photographed at the center of the image, the control matrix C of the moving camera 41 that satisfies the following equation (10) is obtained by the least squares method, where λ is an arbitrary real number:

λ [pan tilt 1]^T = C [X Y Z 1]^T … (10)
When photographing a moving object, the control values (pan, tilt) of the moving camera 41 are computed by substituting the three-dimensional coordinates (X, Y, Z) of the region desired to be photographed into the right side of equation (10), and control and shooting are performed accordingly. Specifically, the swing-movement projective transformation matrix calculation unit 78 calculates the control matrix C of equation (10) as described above and outputs it to the moving camera pan/tilt value calculation unit 81. The moving camera pan/tilt value calculation unit 81 calculates the pan and tilt values of the moving camera from equation (10), using the position (X, Y, Z) of interest in the three-dimensional space given by the shooting command 85 and the control matrix C output from the swing-movement projective transformation matrix calculation unit 78, and passes a control command to the moving camera control unit 75. The moving camera control unit 75 controls the moving camera to the received pan and tilt values. In this case, the moving camera pan/tilt value calculation unit 81 corresponds to an example of the pan/tilt value calculating means, and the moving camera control unit 75 corresponds to an example of the control means; the moving camera can thus be controlled automatically to the pan and tilt values suited to the target region desired to be photographed.
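Fitting the control matrix C of equation (10) has the same linear structure as projection-matrix estimation, since λ is an arbitrary scale factor. A sketch by least squares with NumPy, assuming noise-free marker-trace samples (the function names are illustrative):

```python
import numpy as np

def estimate_control_matrix(points, pan_tilt):
    """Least-squares fit of the 3x4 control matrix C of eq. (10).

    points: (N, 3) marker positions (X_n, Y_n, Z_n) from marker tracing.
    pan_tilt: (N, 2) pan/tilt values that centered each marker.
    Eq. (10), lambda [pan tilt 1]^T = C [X Y Z 1]^T, is linear in C up
    to scale, so C is the null vector of the stacked constraints.
    """
    A = []
    for (X, Y, Z), (p, t) in zip(points, pan_tilt):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -p * X, -p * Y, -p * Z, -p])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -t * X, -t * Y, -t * Z, -t])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def pan_tilt_for(C, target):
    """Control values that point the moving camera at a 3-D target."""
    v = C @ np.append(np.asarray(target, dtype=float), 1.0)
    return v[:2] / v[2]   # divide out the arbitrary scale lambda
```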
The augmented reality video generation unit 82 superimposes computer graphics data 83 on the video captured by the moving camera 41, thereby augmenting the information carried by the real-world video. Using the projective transformation matrix calculated by the swing-movement projective transformation matrix calculation unit 78, the computer graphics data 83 is transformed into the appearance seen from the position and orientation of the moving camera 41 and then superimposed on the captured video; the changes in appearance of the live-action video and of the computer graphics are thus kept consistent, and the augmented reality video 84 is generated.
Through the above processing, the present embodiment calculates the projective transformation matrices of the plurality of fixed cameras 11 to 1m and thereby calibrates them with high accuracy, and also calculates the epipolar equations between the fixed cameras 11 to 1m and the moving cameras 41 to 4n and obtains the projective transformation matrices of the moving cameras 41 to 4n from the calculated epipolar equations and the projective transformation matrices of the fixed cameras 11 to 1m. The moving cameras 41 to 4n can therefore be associated with the world coordinate system, and the processing of solving the inverse problem of estimating three-dimensional information from two-dimensional information becomes unnecessary, so the projective transformation matrices of the moving cameras 41 to 4n can be obtained with high accuracy without being affected by estimation errors, and the moving cameras 41 to 4n can be calibrated with high accuracy. As a result, the moving cameras 41 to 4n can be associated with the world coordinate system, and the fixed cameras 11 to 1m and the moving cameras 41 to 4n can be calibrated with high accuracy by a simple calibration operation. In addition, by exploiting the mobility of the moving cameras 41 to 4n, calibrated video of areas that cannot be covered by the fixed cameras 11 to 1m alone can be acquired, realizing a system that combines the high accuracy of the fixed cameras 11 to 1m with the wide field of view of the moving cameras 41 to 4n.
Furthermore, since this imaging system uses no active sensors that emit electromagnetic waves, such as three-dimensional position sensors (for example, magnetic sensors or infrared sensors), it can be used in hospitals where the use of electromagnetic waves is restricted, and outdoors in areas too wide for electromagnetic waves to reach. Moreover, although it may be difficult to install conspicuous markers in hospitals or outdoors, the markers used here need only allow their three-dimensional positions to be measured, so there is no need to install conspicuous image markers colored with fluorescent paint or the like on walls, ceilings, or elsewhere in the space; in this respect, too, the system is well suited to hospitals, outdoor environments, and the like.
Finally, the effectiveness of the above calibration method will be described. FIG. 9 is a schematic diagram showing the camera arrangement in the CG space where the images were captured. As shown in FIG. 9, markers (landmark points) were arranged in a grid at 10 cm intervals in a 90 cm cubic space around the origin of the world coordinate system, images were captured by the two fixed cameras C0 and C1, and the projective transformation matrices P0 and P1 of the fixed cameras C0 and C1 were estimated from the three-dimensional coordinates of the markers and the information of their observed positions. Next, the moving camera Ct was placed so that the lens centers of the fixed cameras C0 and C1 appeared in its image, and shooting was performed; the epipolar equations F0 and F1 between the fixed cameras C0, C1 and the moving camera Ct were calculated from the observed positions of the markers in the image of the moving camera Ct and the coordinate values of the projected images (epipoles) of the fixed cameras C0 and C1.
FIG. 10 shows the result of superimposing the epipoles acquired on the image of the moving camera Ct with the epipolar lines of the markers calculated by equations (6) and (7), and FIG. 11 shows the result of estimating the projective transformation matrix Pt of the moving camera Ct using the result shown in FIG. 10 and back-projecting the markers onto the image of the moving camera Ct. In FIG. 11, the circles indicate the photographed markers and the crosses indicate the back-projected results; FIG. 11 confirms that accurate projective transformation was achieved while the camera was stationary.
Next, the marker position (X, Y, Z) was moved from (0, 45, 0) to (0, 45, -90) at 10 cm intervals, and shooting was performed while the orientation of the moving camera Ct was controlled so that the projected image of the marker was observed at the image center. At the same time, the three-dimensional position of the marker was estimated from its observation information in the fixed cameras C0 and C1. The rotation center position of the moving camera Ct was then calculated, and by applying the rigid transformation to the projective transformation matrix of the basic posture of the moving camera Ct, the projective transformation matrix for each posture was obtained from the pan and tilt values.
FIG. 12 shows the result of estimating the projective transformation matrix at the time of shooting, with the moving camera Ct controlled so that the marker at (X, Y, Z) = (0, 45, 0) moved to the image center, and back-projecting the markers onto the image of the moving camera Ct. In FIG. 12, the circles indicate the photographed markers and the crosses indicate the back-projected results; FIG. 12 confirms that accurate projective transformation was achieved even during movement.
In the above description, a fixed camera calibration processing unit and a moving camera calibration processing unit are provided for each fixed camera and each moving camera, respectively. However, the configuration of the calibration units is not limited to this example; various modifications are possible, such as executing all the functions of the fixed camera calibration processing units and the moving camera calibration processing units on a single computer, or distributing them across a plurality of computers.
11 to 1m Fixed cameras
21 to 2m Calibration processing units for fixed cameras
31 Network
41 to 4n Moving cameras
51 to 5n Calibration processing units for moving cameras
61 Fixed projective transformation matrix calculation unit
62 Three-dimensional position measurement unit
71 Corresponding point detection unit
72 Epipolar equation calculation unit
73 Observation coordinate calculation unit
74 Movement projective transformation matrix calculation unit
75 Moving camera control unit
76 Pan/tilt value acquisition unit
77 Rotation center calculation unit
78 Swing-movement projective transformation matrix calculation unit
Claims (5)
An imaging system comprising:
a plurality of fixed imaging means fixed at predetermined positions in a space;
a moving imaging means that is mounted on a movable body movable to an arbitrary position in the space and that moves to an arbitrary position in the space as the movable body moves;
fixed projective transformation matrix calculating means for calculating, by strong calibration, projective transformation matrices of the plurality of fixed imaging means from known observation coordinates, on images previously captured by the plurality of fixed imaging means, of markers whose three-dimensional coordinates are known;
epipolar equation calculating means for calculating epipolar equations between the plurality of fixed imaging means and the moving imaging means;
moving projective transformation matrix calculating means for calculating a projective transformation matrix of the moving imaging means from the projective transformation matrices and the epipolar equations; and
calibration means for obtaining the projective transformation matrix of the moving imaging means calculated by the moving projective transformation matrix calculating means and calibrating the moving imaging means,
wherein the plurality of fixed imaging means and the moving imaging means photograph a plurality of markers arranged at predetermined positions in the space;
the epipolar equation calculating means detects, on the respective images of the plurality of fixed imaging means and the moving imaging means, corresponding points of the markers photographed by the plurality of fixed imaging means and the moving imaging means, and calculates the epipolar equations using the observation coordinates of the detected corresponding points on those images; and
the moving projective transformation matrix calculating means, with P denoting the projective transformation matrix of a fixed imaging means, F the epipolar equation between that fixed imaging means and the moving imaging means, and M an arbitrary three-dimensional coordinate in the space, calculates the observation coordinate m_t on the image of the moving imaging means corresponding to the arbitrary three-dimensional coordinate M by substituting M and solving (m_t)^T FPM = 0, and calculates the projective transformation matrix of the moving imaging means from the three-dimensional coordinates M and the observation coordinates m_t.
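To make the computation in the last clause concrete: the constraint (m_t)^T F P M = 0 says that m_t lies on the epipolar line F(PM) in the moving camera's image, so with two fixed cameras m_t can be taken as the intersection of the two epipolar lines. A self-checking sketch in Python with NumPy; the intrinsics, poses, and the construction of F directly from projection matrices are synthetic assumptions for illustration, not values from the patent:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental_from_P(P_fix, P_mov):
    """F with m_t^T F m_fix = 0, via F = [e']_x P_mov P_fix^+ ,
    where e' is the image in the moving camera of the fixed camera's centre."""
    _, _, Vt = np.linalg.svd(P_fix)
    C = Vt[-1]                         # fixed camera centre: null vector of P_fix
    e = P_mov @ C                      # epipole in the moving image
    return skew(e) @ P_mov @ np.linalg.pinv(P_fix)

# Synthetic cameras (assumed intrinsics and poses, for illustration only).
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])       # fixed camera 1
P2 = K @ np.hstack([np.eye(3), [[-5.0], [0.0], [0.0]]])      # fixed camera 2
P_mov = K @ np.hstack([np.eye(3), [[0.0], [-2.0], [10.0]]])  # moving camera

M = np.array([1.0, 2.0, 20.0, 1.0])  # an arbitrary 3D point, homogeneous

# Epipolar line in the moving image from each fixed camera: l_i = F_i (P_i M).
l1 = fundamental_from_P(P1, P_mov) @ (P1 @ M)
l2 = fundamental_from_P(P2, P_mov) @ (P2 @ M)
m_t = np.cross(l1, l2)               # intersection of the two lines
m_t = m_t[:2] / m_t[2]

direct = P_mov @ M                   # ground truth: project M directly
assert np.allclose(m_t, direct[:2] / direct[2])
```

The final assertion checks that the intersection of the two epipolar lines reproduces the direct projection of M into the moving camera, which is exactly the observation coordinate m_t the claim describes.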
The imaging system according to claim 1, wherein the moving imaging means is a movable imaging means capable of changing its imaging direction in a pan direction and a tilt direction,
the imaging system further comprising rotation center calculating means for calculating a rotation center position of the moving imaging means,
wherein the calibration means comprises moving-time projective transformation matrix calculating means that obtains a rigid transformation matrix for a movement of the moving imaging means from the rotation center position calculated by the rotation center calculating means and from the pan and tilt values of the moving imaging means, at the time of the movement, relative to the rotation center position, calculates the projective transformation matrix of the moving imaging means at the time of the movement by multiplying the obtained rigid transformation matrix with the projective transformation matrix calculated by the moving projective transformation matrix calculating means, and calibrates the pan direction and tilt direction of the moving imaging means.
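To illustrate the construction in this claim, the pan/tilt movement about the estimated rotation centre can be written as a 4x4 rigid transformation matrix and multiplied into the 3x4 projective transformation matrix. A sketch under assumed axis conventions (the patent does not fix them here), with illustrative matrix values that are likewise assumptions:

```python
import numpy as np

def rot_pan_tilt(pan, tilt):
    """Rotation for a pan about the y axis followed by a tilt about the
    x axis (the axis assignment is an assumption of this sketch)."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, ct, -st], [0.0, st, ct]])
    return Ry @ Rx

def moving_time_P(P0, centre, pan, tilt):
    """Multiply the 4x4 rigid transform of a pan/tilt rotation about
    `centre` into the 3x4 projective transformation matrix P0, giving
    the matrix at the time of the movement."""
    T = np.eye(4)
    R = rot_pan_tilt(pan, tilt)
    T[:3, :3] = R
    T[:3, 3] = centre - R @ centre   # rotation about `centre`, not the origin
    return P0 @ np.linalg.inv(T)

# Illustrative numbers (assumed, not from the patent).
P0 = np.array([[1000.0, 0.0, 640.0, 0.0],
               [0.0, 1000.0, 360.0, 0.0],
               [0.0, 0.0, 1.0, 5.0]])
centre = np.array([1.0, 2.0, 3.0])
P_moved = moving_time_P(P0, centre, 0.3, -0.2)
```

A sanity check for this construction: the rotation centre itself is fixed by the rigid transform, so a 3D point located at the rotation centre must project to the same pixel before and after any pan/tilt.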
The imaging system according to claim 2, wherein the moving-time projective transformation matrix calculating means further calculates, by a least squares method, a control matrix C of the movable imaging means that defines the relationship between an imaging position of the movable imaging means and its pan and tilt values, the control matrix C satisfying λ[pan tilt 1]^T = C[X Y Z 1]^T, where (X, Y, Z) is the three-dimensional position of a marker, pan and tilt are the pan and tilt values when the marker is photographed at the center of the image, and λ is an arbitrary real number.
The imaging system according to claim 3, further comprising:
pan and tilt value calculating means for receiving an input of an imaging target area of the movable imaging means and calculating, from λ[pan tilt 1]^T = C[X Y Z 1]^T, pan and tilt values corresponding to the received imaging target area, using the position (X, Y, Z) of interest in the three-dimensional space given as the imaging target area and the control matrix C calculated by the moving-time projective transformation matrix calculating means; and
control means for controlling the movable imaging means in accordance with the pan and tilt values calculated by the pan and tilt value calculating means.
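The least-squares fit of the control matrix C can be sketched as follows. Simplifying assumption: the scale factor λ is fixed to 1, i.e. the last row of C is constrained to [0 0 0 1], which turns the homogeneous relation into two ordinary least-squares problems; the patent's own formulation may keep λ free:

```python
import numpy as np

def fit_control_matrix(points_3d, pan_tilt):
    """Least-squares fit of the 3x4 control matrix C in
    lambda [pan, tilt, 1]^T = C [X, Y, Z, 1]^T, with the simplifying
    assumption lambda = 1 (last row of C fixed to [0, 0, 0, 1])."""
    Xh = np.hstack([np.asarray(points_3d, float),
                    np.ones((len(points_3d), 1))])            # n x 4
    top, *_ = np.linalg.lstsq(Xh, np.asarray(pan_tilt, float), rcond=None)
    return np.vstack([top.T, [0.0, 0.0, 0.0, 1.0]])

def pan_tilt_for(C, point):
    """Pan and tilt that centre `point`, read off the homogeneous relation."""
    p = C @ np.append(point, 1.0)
    return p[:2] / p[2]

# Synthetic check against an assumed ground-truth control matrix.
C_true = np.array([[0.01, 0.0, 0.002, 0.5],
                   [0.0, 0.02, 0.001, -0.3],
                   [0.0, 0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
pts = rng.uniform(-10.0, 10.0, (8, 3))
obs = np.array([pan_tilt_for(C_true, p) for p in pts])
C_est = fit_control_matrix(pts, obs)
```

With exact observations, the fit recovers the assumed control matrix, and the estimated C then yields the pan and tilt needed to centre any given target position, as in claim 4.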
A calibration method for a moving imaging means that is mounted on a movable body movable to an arbitrary position in a space in which a plurality of fixed imaging means are fixed at predetermined positions, and that moves to an arbitrary position in the space as the movable body moves, the method comprising the steps of:
calculating, by strong calibration, projective transformation matrices of the plurality of fixed imaging means from known observation coordinates, on images previously captured by the plurality of fixed imaging means, of markers whose three-dimensional coordinates are known;
detecting, on the respective images of the plurality of fixed imaging means and the moving imaging means, corresponding points of markers photographed by the plurality of fixed imaging means and the moving imaging means, and calculating epipolar equations between the plurality of fixed imaging means and the moving imaging means using the observation coordinates of the detected corresponding points on those images; and
with P denoting the projective transformation matrix of a fixed imaging means, F the epipolar equation between the fixed imaging means and the moving imaging means, and M an arbitrary three-dimensional coordinate in the space, calculating the observation coordinate m_t on the image of the moving imaging means corresponding to the arbitrary three-dimensional coordinate M by substituting M and solving (m_t)^T FPM = 0, calculating the projective transformation matrix of the moving imaging means from the three-dimensional coordinates M and the observation coordinates m_t, and calibrating the moving imaging means.
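The final step of the method, estimating the moving imaging means' projective transformation matrix from the correspondences (M, m_t), is a 3D-to-2D resection. A direct linear transform (DLT) sketch; the synthetic camera used for the self-check is an assumption, not taken from the patent:

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """DLT estimate of a 3x4 projective transformation matrix from
    n >= 6 correspondences between 3D points M and image points m_t.
    Each correspondence contributes two rows linear in the 12 entries of P."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        x = np.array([X, Y, Z, 1.0])
        A.append(np.concatenate([x, np.zeros(4), -u * x]))
        A.append(np.concatenate([np.zeros(4), x, -v * x]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)      # defined up to scale

def project(P, M):
    m = P @ np.append(M, 1.0)
    return m[:2] / m[2]

# Synthetic self-check: recover an assumed camera from exact correspondences.
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), [[0.5], [-1.0], [12.0]]])
rng = np.random.default_rng(1)
pts = rng.uniform(-3.0, 3.0, (10, 3)) + np.array([0.0, 0.0, 20.0])
obs = np.array([project(P_true, p) for p in pts])
P_est = dlt_projection_matrix(pts, obs)
```

Since a projective transformation matrix is only defined up to scale, the recovered P_est is checked by comparing projected pixel coordinates rather than matrix entries.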
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004049108A JP4227037B2 (en) | 2004-02-25 | 2004-02-25 | Imaging system and calibration method |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2005241323A JP2005241323A (en) | 2005-09-08 |
JP4227037B2 true JP4227037B2 (en) | 2009-02-18 |
Family
ID=35023218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2004049108A Expired - Fee Related JP4227037B2 (en) | 2004-02-25 | 2004-02-25 | Imaging system and calibration method |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP4227037B2 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4604774B2 (en) * | 2005-03-14 | 2011-01-05 | オムロン株式会社 | Calibration method for 3D measurement |
JP4892982B2 (en) * | 2006-01-12 | 2012-03-07 | 株式会社島津製作所 | Magnetic mapping device |
JP4991395B2 (en) * | 2007-05-28 | 2012-08-01 | キヤノン株式会社 | Information processing method and information processing apparatus |
US8044991B2 (en) * | 2007-09-28 | 2011-10-25 | The Boeing Company | Local positioning system and method |
JP2011227073A (en) * | 2010-03-31 | 2011-11-10 | Saxa Inc | Three-dimensional position measuring device |
JP5715735B2 (en) | 2012-06-29 | 2015-05-13 | 富士フイルム株式会社 | Three-dimensional measurement method, apparatus and system, and image processing apparatus |
JP5745178B2 (en) | 2012-06-29 | 2015-07-08 | 富士フイルム株式会社 | Three-dimensional measurement method, apparatus and system, and image processing apparatus |
JP6180925B2 (en) * | 2013-12-26 | 2017-08-16 | 日本放送協会 | Robot camera control device, program thereof, and multi-viewpoint robot camera system |
JP2016125956A (en) * | 2015-01-07 | 2016-07-11 | ソニー株式会社 | Information processor, information processing method and information processing system |
JP2018053616A (en) * | 2016-09-30 | 2018-04-05 | 積水化学工業株式会社 | Communication system in sewer line, monitoring system in sewer line, and communication device in sewer line |
JP6985593B2 (en) * | 2017-10-18 | 2021-12-22 | 富士通株式会社 | Image processing program, image processing device and image processing method |
CN113330275B (en) * | 2019-01-23 | 2023-05-09 | 株式会社索思未来 | Camera information calculation device, camera information calculation system, camera information calculation method, and recording medium |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07111324B2 (en) * | 1991-10-04 | 1995-11-29 | 株式会社エイ・ティ・アール視聴覚機構研究所 | Calibration device using camera rotation mechanism |
JP3122629B2 (en) * | 1997-07-16 | 2001-01-09 | 株式会社エイ・ティ・アール知能映像通信研究所 | Arbitrary viewpoint image generation device |
JPH1153549A (en) * | 1997-08-01 | 1999-02-26 | Sony Corp | Device and method for processing image and transmission medium |
JP4235858B2 (en) * | 1999-05-07 | 2009-03-11 | ソニー株式会社 | Robot apparatus and obstacle map generation method for robot apparatus |
JP4453119B2 (en) * | 1999-06-08 | 2010-04-21 | ソニー株式会社 | Camera calibration apparatus and method, image processing apparatus and method, program providing medium, and camera |
JP2002042139A (en) * | 2000-07-25 | 2002-02-08 | Hitachi Ltd | Image recognizing method and image display device |
JP2002350131A (en) * | 2001-05-25 | 2002-12-04 | Minolta Co Ltd | Calibration method for and apparatus of multiocular camera and computer program |
JP2003279315A (en) * | 2002-01-16 | 2003-10-02 | Advanced Telecommunication Research Institute International | Automatic calibration method for camera |
JP3996805B2 (en) * | 2002-06-06 | 2007-10-24 | 株式会社日立製作所 | Surveillance camera device, surveillance camera system device, and imaging screen mask method |
JP4027294B2 (en) * | 2003-09-26 | 2007-12-26 | 株式会社国際電気通信基礎技術研究所 | Moving object detection apparatus, moving object detection method, and moving object detection program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108765498B (en) | Monocular vision tracking, device and storage medium | |
JP6734940B2 (en) | Three-dimensional measuring device | |
JP5746477B2 (en) | Model generation device, three-dimensional measurement device, control method thereof, and program | |
JP6635690B2 (en) | Information processing apparatus, information processing method and program | |
JP4437748B2 (en) | Position and orientation reading by projector | |
JP3859574B2 (en) | 3D visual sensor | |
US8625854B2 (en) | 3D scene scanner and a position and orientation system | |
CN111025283B (en) | Method and device for linking radar and dome camera | |
JP2008506953A5 (en) | ||
JP6658001B2 (en) | Position estimation device, program, position estimation method | |
JP5566281B2 (en) | Apparatus and method for specifying installation condition of swivel camera, and camera control system provided with the apparatus for specifying installation condition | |
Burschka et al. | V-GPS (SLAM): Vision-based inertial system for mobile robots | |
CA2573728A1 (en) | Method and apparatus for machine-vision | |
Castaneda et al. | SLAM combining ToF and high-resolution cameras | |
WO2011105522A1 (en) | Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium | |
JP4227037B2 (en) | Imaging system and calibration method | |
JP2007033257A (en) | Information processing method and device | |
JP2000097637A5 (en) | Posture position detection device and moving body posture detection device | |
Aliakbarpour et al. | An efficient algorithm for extrinsic calibration between a 3d laser range finder and a stereo camera for surveillance | |
JP4132068B2 (en) | Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus | |
Kyriakoulis et al. | Color-based monocular visuoinertial 3-D pose estimation of a volant robot | |
Núnez et al. | Data Fusion Calibration for a 3D Laser Range Finder and a Camera using Inertial Data. | |
WO2007041696A2 (en) | System and method for calibrating a set of imaging devices and calculating 3d coordinates of detected features in a laboratory coordinate system | |
JP2007025863A (en) | Photographing system, photographing method, and image processing program | |
WO2018134866A1 (en) | Camera calibration device |
Legal Events
Date | Code | Title | Description
---|---|---|---
20070226 | A977 | Report on retrieval | Free format text: JAPANESE INTERMEDIATE CODE: A971007
20070828 | A131 | Notification of reasons for refusal | Free format text: JAPANESE INTERMEDIATE CODE: A131
20071016 | A521 | Written amendment | Free format text: JAPANESE INTERMEDIATE CODE: A523
| TRDD | Decision of grant or rejection written |
20081125 | A01 | Written decision to grant a patent or to grant a registration (utility model) | Free format text: JAPANESE INTERMEDIATE CODE: A01
20081127 | A61 | First payment of annual fees (during grant procedure) | Free format text: JAPANESE INTERMEDIATE CODE: A61
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20111205; Year of fee payment: 3
| R150 | Certificate of patent or registration of utility model | Free format text: JAPANESE INTERMEDIATE CODE: R150
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20121205; Year of fee payment: 4
| LAPS | Cancellation because of no payment of annual fees |