JPH07159167A - Calculation of camera position - Google Patents

Calculation of camera position

Info

Publication number
JPH07159167A
JPH07159167A (application JP5308229A)
Authority
JP
Japan
Prior art keywords
camera
plane
points
point
error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP5308229A
Other languages
Japanese (ja)
Inventor
Hirokazu Murata
弘和 村田
Mieko Tsuburaya
実枝子 圓谷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nikon Corp
Original Assignee
Nikon Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corp filed Critical Nikon Corp
Priority to JP5308229A priority Critical patent/JPH07159167A/en
Publication of JPH07159167A publication Critical patent/JPH07159167A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

PURPOSE: To obtain the focal position easily, without using complicated equations, by assuming a plane perpendicular to the camera's optical axis in space, projecting a minimum of three reference points onto that plane, and detecting the position of minimum error.
CONSTITUTION: The camera focal position is denoted Fc, with three-dimensional coordinates Fx, Fy, and Fz, and a plane P perpendicular to the camera's optical axis is set. A plurality of feature points on an object of known size and shape are projected onto the plane with the camera focal position Fc as the center of projection. Each projected point Si' is expressed with the camera focal coordinates as parameters. Using a computer or the like, the focal position Fc is scanned (moved) through space, the position of minimum error is found, and that position is taken as the camera focal position. Because the plane P is set and the distances between two pairs of projected reference points can be compared with the corresponding distances on the camera image, only three reference points are needed. Furthermore, the error is computed as a simple numerical value from a comparison of distances, so it is easy to interpret and no complicated calculation is required.

Description

[Detailed Description of the Invention]

[0001]

[Industrial Field of Application] The present invention relates to a three-dimensional shape measurement method for measuring the three-dimensional shape of a subject from an image captured with a camera or the like.

[0002]

[Prior Art] To measure an object with a camera, or to obtain the positional relationship between the camera and the outside world, the position of the camera must be known accurately. The camera position is generally expressed as the coordinates of the camera's image plane, or of the focal position of the lens, in a coordinate system centered on the measured object. However, the positional relationship between the image and the measured object changes not only, as is obvious, when the camera itself is moved, but also when the lens is exchanged, so a calibration must be performed each time. The coordinate system used for the measurement target is called the "object coordinate system", and the coordinate system with its origin at the camera is called the "camera coordinate system". Camera calibration is the determination of the transformation parameters between these two coordinate systems. The conventional way of obtaining the transformation parameters by camera calibration is described below.

[0003] FIG. 3 shows the perspective transformation model based on a pinhole camera. In this idealized model the camera has a pinhole opened at the center of the lens surface, and a line of sight is defined as the single straight line passing through that point. An ordinary imaging system using glass lenses can also be represented by this simple model, provided its distortion is small enough to be neglected. In a real camera the object, the lens, and the image plane are arranged in that order, but the image is then inverted and hard to interpret, so the image plane is placed virtually in front of the lens, giving the order object, image plane, lens. In the figure, F is the center of the lens, f is the focal length of the lens, and I is the image plane. Since the image plane is taken as the reference of the coordinate system fixed to the camera, the center of I is made the origin of the coordinate axes.
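The pinhole model above reduces projection to a division by depth in the camera frame. The sketch below is a minimal illustration of that idealized model; `project_pinhole` is a hypothetical helper, and the sign convention assumes the virtual image plane placed in front of the lens, as in FIG. 3:

```python
def project_pinhole(point, f):
    """Project a 3-D point, given in the camera frame with Z measured
    along the optical axis, onto a virtual image plane at distance f
    in front of the pinhole (the idealized model of FIG. 3)."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (f * X / Z, f * Y / Z)

# Doubling the depth halves the image offset: the basic perspective effect.
near = project_pinhole((1.0, 0.0, 2.0), f=0.05)
far = project_pinhole((1.0, 0.0, 4.0), f=0.05)
```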

[0004] The point S(Xc, Yc, Zc) obtained by perspective transformation of a point V(X, Y, Z) in space onto the image plane I, that is, the intersection of the image plane with the line of sight toward the measured point V, is

[0005]

[Equation 1]

[0006] given by the equation above. Changing the form of the expression, it can also be written as

[0007]

[Equation 2]

[0008] Although the perspective transformation is thus a nonlinear transformation, it can be linearized by introducing one auxiliary variable mediating the three-dimensional coordinates, that is, by using a representation raised by one dimension. This is called the homogeneous coordinate system. The three-dimensional point (X, Y, Z) is written as

[0009]

[Equation 3]

[0010] the four-dimensional point (Xh, Yh, Zh, Wh), with Wh as the mediating variable, as shown above; this representation is the homogeneous coordinates. With this homogeneous coordinate system, the perspective transformation

[0011]

[Equation 4]

[0012] can be written as a 4 × 4 matrix operation, as above. The perspective transformation just described applies when points are expressed in the coordinate system fixed to the camera. When the measured point V is expressed in another, independent coordinate system, the two systems mentioned earlier are involved: the "object coordinate system" used for the measurement target and the "camera coordinate system" with its origin at the camera. The transformation T relating these two coordinate systems, including rotation and translation, is expressed in the homogeneous coordinate representation as

[0013]

[Equation 5]

[0014] The transformation from a point V in the object coordinate system to a point S in the camera coordinate system can then be written as

[0015]

[Equation 6]

[0016] Since the image plane I satisfies Zch = 0 in the camera coordinate system, the preceding equation can be simplified in terms of the two-dimensional coordinates (Xc, Yc) on the image plane I, that is, the pixel position in the input image:

[0017]

[Equation 7]

[0018] The 3 × 4 matrix C in this equation is called the "camera parameters", and the principal point position Fc (focal position) in FIG. 3 can be obtained from the C matrix. Calibration is performed in order to determine these camera parameters. If one pair is known, consisting of a reference point (X, Y, Z) in the object coordinate system and its corresponding position (Xc, Yc) on the camera image plane, then

[0019]

[Equation 8]

[0020] holds. Therefore, to determine the twelve unknowns C11 through C34 of the camera parameters, it suffices to use six reference points that do not lie in a single plane. To raise the accuracy of the calibration, it is usual to use a larger number of reference points, six or more, and to identify the parameters by the least-squares method.
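The counting argument above (two equations per correspondence, twelve unknowns) can be made concrete. The sketch below follows the common direct-linear-transformation reading of Equation 8; the exact normalization in the cited text may differ, and `dlt_rows` is an illustrative helper, not from the patent:

```python
def dlt_rows(obj_pt, img_pt):
    """Two linear equations in the 12 unknowns C11..C34 contributed by one
    correspondence between an object point (X, Y, Z) and an image point
    (Xc, Yc), after eliminating the projective scale factor."""
    X, Y, Z = obj_pt
    xc, yc = img_pt
    return [
        [X, Y, Z, 1, 0, 0, 0, 0, -xc * X, -xc * Y, -xc * Z, -xc],
        [0, 0, 0, 0, X, Y, Z, 1, -yc * X, -yc * Y, -yc * Z, -yc],
    ]

# Six non-coplanar reference points give 6 x 2 = 12 equations for the
# 12 camera parameters; extra points would be handled by least squares.
correspondences = [
    ((0, 0, 0), (10, 10)), ((1, 0, 0), (20, 12)), ((0, 1, 0), (11, 25)),
    ((0, 0, 1), (8, 9)), ((1, 1, 0), (22, 28)), ((1, 0, 1), (18, 10)),
]
A = [row for obj, img in correspondences for row in dlt_rows(obj, img)]
```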

[0021]

[Problems to Be Solved by the Invention] However, the camera-parameter calculation method described in the above-mentioned "Three-Dimensional Image Measurement" (Iguchi and Sato) requires at least six reference points and, in addition, the computation of an inverse matrix. The inverse-matrix computation does not always yield a solution, and if the number of reference points is increased to raise the accuracy, the matrix equation grows correspondingly, making the computation complicated and time-consuming. Moreover, since the coordinate values on the camera image are integers, they have no intermediate values and contain errors, which must be handled with the least-squares method.

[0022] The conventional method thus has the problems described above. Accordingly, the object of the present invention is to obtain the focal position easily, with a minimum of three reference points, without using complicated equations or inverse-matrix computations.

[0023]

[Means for Solving the Problems] The inventors found that, in an apparatus that calculates the three-dimensional positional relationship between an object of known size and shape and the camera that obtains an image of it, the problem can be solved by the following method: a plane perpendicular to the optical axis of the camera is assumed in space; a minimum of three reference points is projected onto that plane; and, from the positional relationship between the projected points and the corresponding points on the camera image, coordinate positions are calculated with a computer or the like while the camera focal position is scanned, and the position with the smallest error is detected, thereby obtaining the camera position coordinates in the coordinate system of the reference object. This finding led to the present invention.

[0024]

[Operation] A cube relatively far from the viewpoint appears to have edges of equal length, but viewed from close up, the nearer edges look long and the farther ones short. The perspective of an object changes as the viewing position changes, and this change can be exploited to determine the positional relationship between the camera and the object.
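Under the pinhole model, this perspective cue weakens with distance: an edge of true length L viewed at depth Z has apparent size roughly f·L/Z, so a given depth difference changes apparent size far more near the camera than far from it. A small sketch (the `apparent` helper is hypothetical, assuming the standard model):

```python
def apparent(L, Z, f=0.05):
    """Apparent (projected) size of an edge of true length L viewed
    face-on at depth Z under a pinhole model with focal length f."""
    return f * L / Z

# Close to the camera, a 1-unit depth difference changes apparent size
# by a factor of 2; ten times farther away, by only a factor of 1.1.
near_ratio = apparent(1, 1.0) / apparent(1, 2.0)
far_ratio = apparent(1, 10.0) / apparent(1, 11.0)
```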

[0025] FIG. 1 shows an outline of the apparatus. An object of known size and shape (the reference object) is imaged with the camera; an image-processing unit recognizes the object and extracts the coordinate values, on the camera image, of reference points on the object (points, such as vertices, whose positions on the object are known). From these coordinate values and the shape data of the reference object, a camera-position calculation unit computes the coordinate position of the camera in the object coordinate system. The principle is described with reference to FIG. 2. Let Fc denote the focal position of the camera, with three-dimensional coordinates Fx, Fy, Fz, and set a plane P perpendicular to the optical axis of the camera. A plurality of feature points on a measured object of known size and shape (such as a reference object for camera calibration), for example vertices Vi (hereafter i and j denote feature-point indices), are projected onto this plane with the camera focal position Fc as the center of projection. Each projected point Si' is expressed with the camera focal coordinates (Fx, Fy, Fz) as parameters. It then suffices to find the focal coordinates Fc for which the points Si' projected onto the plane P and the corresponding points Si on the camera image stand in the same positional relationship. Since the points Si are given in two-dimensional pixel units while the points Si' are expressed in three-dimensional coordinates, the positional relationship is determined here from the ratio of the distances between corresponding pairs of points. Letting Dij be the distance between the points Si and Sj, and Dij' the distance between the points Si' and Sj', Dij and Dij' are in proportion, so that

[0026]

[Equation 9]

[0027] holds. Here Dij', too, is expressed with the camera focal coordinates as parameters. In this method, however, the values on the camera image are integers and contain measurement error, so solving the equation does not necessarily yield the camera position. In the present invention, therefore, the camera focal position Fc is scanned (moved) through space with a computer or the like, and the position with the smallest error is found and taken as the camera focal position.
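A minimal sketch of this project-and-compare step follows. The helpers are hypothetical: the patent does not specify the error metric, so the spread of the pairwise distance ratios Dij/Dij' is used here as one simple choice, and the optical axis of a candidate Fc is assumed to point at the origin, as in the embodiment of FIG. 5:

```python
import itertools
import math

def project_to_plane(fc, v):
    """Project object point v onto the plane P through the origin and
    perpendicular to the optical axis, taking fc as the center of
    projection (the axis is assumed to run from fc toward the origin,
    so the plane normal is fc itself)."""
    n = fc
    d = [v[i] - fc[i] for i in range(3)]
    t = -sum(n[i] * fc[i] for i in range(3)) / sum(n[i] * d[i] for i in range(3))
    return [fc[i] + t * d[i] for i in range(3)]

def ratio_error(fc, obj_pts, img_pts):
    """Error of a candidate focal position: spread of the ratios
    Dij / Dij' over all point pairs (near zero only when the projected
    points and the image points are in the same mutual relationship)."""
    proj = [project_to_plane(fc, v) for v in obj_pts]
    pairs = itertools.combinations(range(len(obj_pts)), 2)
    ratios = [math.dist(img_pts[i], img_pts[j]) / math.dist(proj[i], proj[j])
              for i, j in pairs]
    mean = sum(ratios) / len(ratios)
    return sum((r - mean) ** 2 for r in ratios)

# Synthetic check: image distances generated from a known focal position
# give (near-)zero error there and a larger error elsewhere.
points = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
true_fc = [3.0, 2.0, 5.0]
image = [[2 * c for c in project_to_plane(true_fc, v)] for v in points]
```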

[0028] As described above, once the plane P is set, the camera position can be obtained by comparing the distances between two pairs of projected reference points with the distances between the corresponding two pairs of points on the camera image; a minimum of three reference points therefore suffices. Moreover, the error is computed as a simple numerical value from a comparison of distances, which makes it easy to interpret, and no complicated calculation is required. Note, however, that the reference points must not all lie in a plane perpendicular to the optical axis of the camera.

[0029] The present invention is described more concretely below by way of an embodiment, with reference to the drawings, but the invention is not limited to it.

[0030]

[Embodiment] FIG. 4 shows measurement by the stereo method with two cameras. This method normally requires that the positional relationship of the two cameras (the directions of their optical axes) be known, but the two optical-axis directions are difficult to determine. There is also the simplification of placing the two cameras in parallel, but this limits the measurement range. With the present invention, the cameras can be installed at arbitrary positions, and the positional relationship between them can be obtained easily. The same applies when more than two cameras are used.

[0031] As the simplest illustration of the principle, consider the case in which the optical axis of the camera passes through a vertex of a rectangular parallelepiped of known dimensions, that vertex is taken as the origin of the object coordinate system, and three edges of the solid are taken as the x-, y-, and z-axes. In FIG. 5, the optical axis LF of the camera passes through the point Vo on the object. Taking Vo as the origin of the object coordinate system and the axes through V1, V2, and V3 as the x-, y-, and z-axes, every point on the solid can be expressed by values in the object coordinate system. For the plane P perpendicular to the optical axis of the camera, assume here, to keep the equation of the plane simple, that it contains the origin Vo; with the camera focal coordinates as parameters it is then

[0032]

[Equation 10]

[0033] expressed as above. In FIG. 5, the point Si' obtained by projecting a point Vi on the solid (i being the vertex index, with coordinate values Vix, Viy, Viz) onto the plane P is the intersection of the straight line Lvi through the focal point of the camera and the point Vi,

[0034]

[Equation 11]

[0035] with the plane P, namely

[0036]

[Equation 12]

[0037] as given above. The coordinate values of Si', too, are expressed with the camera focal coordinates as parameters. Let Dij be the length between the points Si and Sj, and Dij' the length between the corresponding Si' and Sj'; then for any i, j there is a proportional relationship between Dij and Dij'. Since Dij and Dij' differ in units, this is written as

[0038]

[Equation 9]

[0039] Solving this equation would give the focal coordinate value Fc, the parameter; but because Dij is a value on the image (an integer), in practice the relation is not an equality but

[0040]

[Equation 13]

[0041] as above. The error K is therefore expressed as

[0042]

[Equation 14]

[0043] The focal-position coordinates, the parameters, are scanned through space, actually substituting the coordinate values, and the position with the smallest error is found; the position obtained is taken as the focal coordinates. The space to be scanned can be limited by the orientation in which the object coordinate system is set. The case in which the optical axis of the camera passes through the origin of the object coordinate system has been described above; but even when the optical axis does not pass through a known point, the focal position can be obtained in the same way provided the equation of the plane P described above can be found. In that case, the point the axis passes through is determined from its positional relationship to other known points (reference points), and the focal position is then obtained by the method described above. As shown in FIG. 6, by adding a device that can freely change the direction of the camera on the basis of camera-position information, the camera system itself can search out a reference object and, further, align a reference point with the optical axis of the camera, which speeds up the computation of the camera position. When there are several reference objects, their mutual positional relationships can also be obtained.
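The scan itself can be sketched as a brute-force grid search. The error function below is only a stand-in for the error K of Equation 14, zero at a hypothetical true focal position; the patent does not prescribe a particular scan strategy, and a real implementation would plug in the projection-based error and refine around the coarse minimum:

```python
import itertools

def grid_scan(err, lo, hi, steps):
    """One pass of a brute-force scan over candidate focal positions in a
    cubic region [lo, hi]^3, returning the candidate of smallest error."""
    axis = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
    return min(itertools.product(axis, repeat=3), key=err)

# Stand-in error: smallest at the assumed true focal position.
true_fc = (3.0, 2.0, 5.0)
err = lambda fc: sum((a - b) ** 2 for a, b in zip(fc, true_fc))

best = grid_scan(err, lo=0.0, hi=10.0, steps=11)  # 1.0-unit grid spacing
```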

[0044]

[Effects of the Invention] Since the positional relationship to an object can be recognized, the invention can be used to acquire position information for mobile equipment (cars, aircraft). Many of the things installed on the ground, and not only along roads, are manufactured to certain standards and dimensions and can therefore be used as reference objects, so dedicated markers for position information are not necessarily needed.

[Brief Description of the Drawings]

FIG. 1 is a schematic diagram of an embodiment of the present invention.

FIG. 2 is a diagram of the principle of this embodiment.

FIG. 3 is a diagram of the perspective transformation model based on a pinhole camera.

FIG. 4 is a diagram of measurement by the stereo method.

FIG. 5 is a diagram of the case, in this embodiment, in which the optical axis of the camera passes through a vertex of a known rectangular parallelepiped.

FIG. 6 shows an example of this embodiment.

Claims (1)

[Claims]
Claim 1: A camera position calculation method in an apparatus for calculating, from an image showing an object of known size and shape, the three-dimensional positional relationship between the object and the camera that obtains the image, the method being characterized in that: a plane perpendicular to the optical axis of the camera is assumed in space; a minimum of three reference points is projected onto the plane; coordinate positions are calculated with a computer or the like, based on the positional relationship between the projected points and the corresponding points on the camera image, while the camera focal position is scanned; and the position with the smallest error is detected, thereby obtaining the camera position coordinates in the coordinate system of the reference object.
JP5308229A 1993-12-08 1993-12-08 Calculation of camera position Pending JPH07159167A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP5308229A JPH07159167A (en) 1993-12-08 1993-12-08 Calculation of camera position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP5308229A JPH07159167A (en) 1993-12-08 1993-12-08 Calculation of camera position

Publications (1)

Publication Number Publication Date
JPH07159167A true JPH07159167A (en) 1995-06-23

Family

ID=17978488

Family Applications (1)

Application Number Title Priority Date Filing Date
JP5308229A Pending JPH07159167A (en) 1993-12-08 1993-12-08 Calculation of camera position

Country Status (1)

Country Link
JP (1) JPH07159167A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09187038A (en) * 1995-12-27 1997-07-15 Canon Inc Three-dimensional shape extract device
JP2009294109A (en) * 2008-06-05 2009-12-17 Fujitsu Ltd Calibration apparatus
JP2012103213A (en) * 2010-11-12 2012-05-31 Fujitsu Ltd Image processing program and image processing device
CN112816967A (en) * 2021-02-03 2021-05-18 成都康烨科技有限公司 Image distance measuring method, device, distance measuring equipment and readable storage medium

