JPH055609A - Cubic recognition method of image - Google Patents

Cubic recognition method of image

Info

Publication number
JPH055609A
Authority
JP
Japan
Prior art keywords
line
cameras
screen
screens
reference point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP3156347A
Other languages
Japanese (ja)
Other versions
JP2882910B2 (en)
Inventor
Satoshi Ishii
聡 石井
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to JP3156347A priority Critical patent/JP2882910B2/en
Publication of JPH055609A publication Critical patent/JPH055609A/en
Application granted granted Critical
Publication of JP2882910B2 publication Critical patent/JP2882910B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Numerical Control (AREA)
  • Image Processing (AREA)
  • Control Of Position Or Direction (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To enable accurate detection of the position of an object in an image stereoscopic recognition method in which the object is photographed with a plurality of cameras to capture its features.
CONSTITUTION: A plurality of cameras 21 and 22 are set up so that an object 20 appears on their screens. An operator is selected that extracts contour components of the object lying close to the perpendicular of a second straight line, the line obtained by projecting onto the screens a plane containing the straight line that connects the projection centers of two representative cameras 21 and 22. The contour of the object is extracted with this operator, stored, and displayed on the screens. A reference line parallel to the second straight line is drawn on each screen, and the intersection of the reference line with the contour is taken as a reference point. The position of the part of the object corresponding to the reference point is then determined from the positions of the reference points on the screens and the relative position between the cameras.

Description

[Detailed Description of the Invention]

[0001]

[Industrial Field of Application] The present invention relates to an image stereoscopic recognition method in which an object is photographed with a plurality of cameras and its features are captured, and more particularly to such a method in which the contour of the object is extracted and displayed on a screen so that its features can be captured.

[0002] Such an image stereoscopic recognition method is used for robots and the like operating in factories and other facilities. In a factory, for example, information such as the distance between a robot and a workpiece must be known in order to handle the workpiece or inspect a product. For that purpose, the distance between the camera and the workpiece must be detected accurately.

[0003]

[Prior Art] In a conventional image stereoscopic recognition method, as shown in FIG. 6, when an object 63 is photographed by two cameras 61 and 62, a straight line 71 connecting the projection centers 61a and 62a of the two cameras is drawn, and a plane 64 containing that line is formed. The plane 64 is what is generally called an epipolar plane. FIG. 7 shows the video screens 61b and 62b of the cameras 61 and 62, respectively. The lines 65 and 66 on the screens are reference lines for obtaining information about the object 63; they are the projections of the plane 64 of FIG. 6 onto the video screens 61b and 62b.

[0004] To detect the features of the object 63, the intersections of the reference lines 65 and 66 with the contour of the object 63 on the video screens 61b and 62b are found first, and such an intersection is used as a reference point. In the figure, points 67 and 68 both qualify, but normally only one of them is selected; here, point 67 is selected.

[0005] Once the reference point 67 is determined, its position can be computed from its locations on the video screens 61b and 62b and from the distance between the cameras 61 and 62. The distance between the robot and the workpiece can thereby be obtained by triangulation.

[0006]

[Problems to Be Solved by the Invention] Suppose now that, to obtain a more accurate position, reference lines 69 and 70 are drawn in addition to the reference lines 65 and 66 as shown in the figure (or that the first reference lines happen to fall on these positions). In such a case the reference lines 69 and 70 coincide with the contour of the object 63, so the reference points extracted on the screens 61b and 62b often end up at different positions. In particular, when the shading of the contour differs between the screens because of lighting conditions or the like, an accurate reference point cannot be extracted.

[0007] The present invention has been made in view of these circumstances, and its object is to provide an image stereoscopic recognition method that extracts an accurate reference point on any reference line and thus determines the position of an object accurately.

[0008]

[Means for Solving the Problems] FIG. 1 shows the principle of the image stereoscopic recognition method of the present invention, which achieves the above object. A plurality of cameras are installed so that the object appears on their screens (step S1). An operator is selected for extracting contour components of the object that lie close to the perpendicular of a second straight line, the line formed by projecting onto the screens a plane containing the straight line that connects the projection centers of two representative cameras (step S2). The contour of the object is extracted with the operator, stored, and displayed on the screens (step S3). A reference line parallel to the second straight line is drawn on each screen (step S4), and the reference point at the intersection of the reference line with the contour is found (step S5). Finally, the position of the part of the object corresponding to the reference point is determined from the positions of the reference points on the screens and the relative position between the cameras (step S6).
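Read as a procedure, steps S1 to S6 can be outlined in code. The sketch below is a non-authoritative Python skeleton, not part of the patent: the function and parameter names are assumptions, and the three callables stand in for the contour-extraction, reference-point and triangulation steps that are sketched in more detail with the embodiment below.

    def recognize_point_pipeline(images, extract_contour, pick_reference_point,
                                 triangulate):
        """Steps S1-S6 of FIG. 1 as an illustrative skeleton (names assumed).

        `images` is the pair of camera images obtained once the cameras are
        installed so that the object appears on both screens (step S1); the
        three callables stand in for steps S2/S3, S4/S5 and S6.
        """
        img_l, img_r = images
        contour_l = extract_contour(img_l)     # S2-S3: direction-selective
        contour_r = extract_contour(img_r)     #        contour extraction
        p_l = pick_reference_point(contour_l)  # S4-S5: reference line and
        p_r = pick_reference_point(contour_r)  #        reference point
        return triangulate(p_l, p_r)           # S6: position of the object
                                               #     point from both screens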

[0009]

[Operation] When the contour of the object is extracted, only the contour components nearly perpendicular to the reference line are extracted, so the reference point, the intersection of the reference line with the contour, can be found accurately.

[0010]

[Embodiment] An embodiment of the present invention will now be described with reference to the drawings. FIG. 2 shows the configuration of an image recognition apparatus that carries out the image stereoscopic recognition method of the present invention. In this embodiment, two cameras 21 and 22 are used to photograph an object 20. The cameras 21 and 22 are installed so that the straight line 26 connecting their projection centers 21a and 22a lies in an epipolar plane 23. The positions and shooting angles of the cameras 21 and 22 are controlled by actuators; alternatively, the cameras may be fixed in place. For simplicity of explanation, the optical axes 21c and 22c are assumed here to be parallel to each other, and the cameras 21 and 22 are assumed to be installed so that the horizontal direction of each screen is parallel to the epipolar plane.

[0011] The images captured by the cameras 21 and 22 are sent to a control device 24. The images are displayed on the screen of a display device 25 and stored in a memory inside the control device 24. The control device 24 processes the images to obtain information about the object 20 and controls a robot or the like.

[0012] FIG. 3 shows the display screens 21b and 22b of the display device 25. The screen 21b displays the contour of the image captured by the camera 21, and the screen 22b displays the contour of the image captured by the camera 22, each extracted by the image processing described below. As the figure shows, only the components of the contour of the object 20 that are perpendicular to the reference line of the screen are displayed.

[0013] FIG. 4 shows examples of the operators used to extract this contour. In this embodiment, image processing is performed with the operator of FIG. 4(a), which emphasizes and extracts the contour components perpendicular to the horizontal direction of the screen. The operator is not limited to this, however: when the reference line is inclined, that is, when the cameras are positioned obliquely relative to each other, an operator that emphasizes contour lines inclined, say, 30° from the vertical may be used, as in FIG. 4(b). FIG. 4 shows 3×3 operators, but an operator with a larger matrix, such as 5×5 or 7×7, can extract the contour even more accurately.
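To make the role of such an operator concrete, here is a minimal sketch in Python of how a direction-selective 3×3 operator might be applied. It is an illustration under stated assumptions: a Sobel-style vertical-edge kernel and a fixed threshold stand in for the actual coefficients of FIG. 4(a), which are not reproduced in this text.

    import numpy as np
    from scipy.ndimage import convolve

    def extract_vertical_contour(gray, threshold=50.0):
        """Emphasize contour components roughly perpendicular to a
        horizontal reference line and keep only strong responses.

        A Sobel-style kernel is assumed in place of the operator of
        FIG. 4(a); the patent's actual coefficients may differ.
        """
        kernel = np.array([[-1.0, 0.0, 1.0],
                           [-2.0, 0.0, 2.0],
                           [-1.0, 0.0, 1.0]])  # responds to vertical edges
        response = convolve(gray.astype(float), kernel, mode="nearest")
        return np.abs(response) > threshold    # binary contour image

    # For an inclined reference line (cf. FIG. 4(b)), the kernel would be
    # rotated so as to emphasize edges about 30 degrees from the vertical.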

[0014] Returning to FIG. 3, reference lines 31 and 32 are drawn on the screens 21b and 22b in order to find the reference points from which information such as the position of the object 20 is obtained. Each reference line may be the so-called epipolar line obtained by projecting the epipolar plane 23 onto the screen 21b or 22b, or a straight line parallel to that epipolar line. A point where the contour of the object 20 crosses the reference line on each screen is selected as the reference point; when there are several such points, this embodiment takes the leftmost one. In FIG. 3, therefore, the points 33 and 34 are chosen as the reference points. Both correspond to the point P in FIG. 2.
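Under the same assumptions as the sketch above (the contour stored as a boolean image, the reference line lying along a single pixel row), picking the reference point might look as follows; the leftmost-point rule is the one stated in this embodiment, and the names are illustrative.

    import numpy as np

    def pick_reference_point(contour, row):
        """Intersection of a horizontal reference line (pixel row `row`)
        with the extracted contour; when several contour pixels lie on
        the line, this embodiment takes the leftmost one.
        """
        cols = np.flatnonzero(contour[row])  # contour pixels on the line
        if cols.size == 0:
            return None                      # the line misses the contour
        return (int(cols[0]), row)           # leftmost intersection (x, y)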

[0015] Because only the vertical contour components are displayed on the screens 21b and 22b, the contour never coincides with the reference lines 31 and 32. An accurate reference point can therefore be obtained no matter where the reference line is drawn.

[0016] Once the reference points 33 and 34 have been found, the control device 24 performs a calculation that locates the point P and thereby determines the position of the object 20. FIG. 5 illustrates the method. Let line 41 connect the projection center 21a of the camera 21 with the reference point 33, and let line 42 connect the projection center 22a of the camera 22 with the reference point 34; the point P of the object 20 is taken to lie at the intersection of lines 41 and 42.

[0017] Let α be the angle between line 41 and the optical axis 21c, and β the angle between line 42 and the optical axis 22c. These angles are easily obtained from the distances between the screen centers 21d, 22d and the projection centers 21a, 22a, and from the positions of the reference points 33 and 34 on the screens. From these angles and the known distance L between the cameras, the lengths of lines 41 and 42 can then be calculated by the principle of triangulation, so the distance between the camera and the object is obtained accurately.
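For the simplified setup of this embodiment (pinhole cameras with parallel optical axes and screens parallel to the epipolar plane), the calculation reduces to the standard disparity relation: with f the distance from a projection center to its screen and x_L, x_R the horizontal offsets of the reference points 33 and 34 from the screen centers 21d and 22d, tan α = x_L / f, tan β = x_R / f, and the depth of P follows as Z = L / (tan α - tan β). The sketch below assumes exactly this geometry; the names are illustrative.

    def triangulate_depth(x_left, x_right, f, baseline):
        """Depth of point P for two parallel cameras (illustrative sketch).

        x_left, x_right: horizontal offsets of the reference points 33
            and 34 from the screen centers 21d and 22d (same units as f).
        f: distance from each projection center to its screen.
        baseline: distance L between the projection centers 21a and 22a.
        """
        tan_alpha = x_left / f            # line 41 vs. optical axis 21c
        tan_beta = x_right / f            # line 42 vs. optical axis 22c
        disparity = tan_alpha - tan_beta  # = (x_left - x_right) / f
        if disparity == 0:
            raise ValueError("rays are parallel; P would be at infinity")
        return baseline / disparity       # Z = L / (tan(alpha) - tan(beta))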

[0018] In the above embodiment, for simplicity of explanation, the optical axes 21c and 22c of the cameras were made parallel to each other and the cameras 21 and 22 were installed so that the horizontal direction of each screen is parallel to the epipolar plane 23. This arrangement is not mandatory: as long as the cameras 21 and 22 are installed so that the object 20 reliably appears on the screens 21b and 22b, the distance between the camera and the object can be obtained as described above. That is, even if the optical axes 21c and 22c are not parallel and the horizontal directions of the screens 21b and 22b are not parallel to the epipolar plane 23, the accurate reference points 33 and 34 can still be found by extracting the contour components perpendicular to the obliquely drawn reference lines 31 and 32. The distance between the camera and the object can therefore be obtained accurately, just as in the embodiment above.

[0019]

[Effects of the Invention] As described above, when the contour of an object is extracted in the present invention, only the contour components nearly perpendicular to the reference line are extracted. The reference point, the intersection of the reference line with the contour, can therefore be found accurately, and so can the position of the object.

[Brief Description of the Drawings]

FIG. 1 is a principle diagram of the image stereoscopic recognition method of the present invention.

FIG. 2 is a diagram showing the configuration of an image recognition apparatus.

FIG. 3 is a diagram showing the state of the display screens.

FIG. 4 is a diagram showing examples of operators.

FIG. 5 is a diagram showing a method of determining the position of an object.

FIG. 6 is a configuration diagram of a conventional image processing apparatus.

FIG. 7 is a diagram showing the screen states in a conventional image stereoscopic recognition method.

[Explanation of Symbols]

20: Object; 21, 22: Cameras; 24: Control device; 25: Display device

Claims (2)

[Claims]

1. An image stereoscopic recognition method in which an object is photographed using a plurality of cameras and its features are captured, the method comprising: installing the plurality of cameras so that the object appears on the screens of the cameras (step S1); selecting an operator for extracting contour components of the object that lie close to the perpendicular of a second straight line, the second straight line being formed by projecting onto the screens a plane containing a first straight line that connects the projection centers of two of the plurality of cameras (step S2); extracting the contour of the object with the operator, storing it, and displaying it on the screens (step S3); drawing on each screen a reference line parallel to the second straight line (step S4); finding a reference point at the intersection of the reference line with the contour (step S5); and determining the position of the part of the object corresponding to the reference point from the positions of the reference points on the screens and the relative position between the cameras (step S6).

2. The image stereoscopic recognition method according to claim 1, wherein the reference line is an epipolar line.
JP3156347A 1991-06-27 1991-06-27 3D image recognition method Expired - Fee Related JP2882910B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3156347A JP2882910B2 (en) 1991-06-27 1991-06-27 3D image recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP3156347A JP2882910B2 (en) 1991-06-27 1991-06-27 3D image recognition method

Publications (2)

Publication Number Publication Date
JPH055609A (en) 1993-01-14
JP2882910B2 JP2882910B2 (en) 1999-04-19

Family

ID=15625776

Family Applications (1)

Application Number Title Priority Date Filing Date
JP3156347A Expired - Fee Related JP2882910B2 (en) 1991-06-27 1991-06-27 3D image recognition method

Country Status (1)

Country Link
JP (1) JP2882910B2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120032139A (en) * 2010-09-28 2012-04-05 서울대학교산학협력단 Conductive bio-nano chains and method of manufacturing the same
US20120292931A1 (en) * 2011-05-20 2012-11-22 Suzuki Motor Corporation Vehicle Bumper
DE102012208348A1 (en) 2011-05-20 2012-11-22 Suzuki Motor Corp. Vehicle bumpers
US8550510B2 (en) 2011-05-20 2013-10-08 Suzuki Motor Corporation Vehicle bumper
JP2013024773A (en) * 2011-07-22 2013-02-04 Canon Inc Three-dimensional measuring method
US8941732B2 (en) 2011-07-22 2015-01-27 Canon Kabushiki Kaisha Three-dimensional measuring method
WO2013062087A1 (en) * 2011-10-28 2013-05-02 富士フイルム株式会社 Image-capturing device for three-dimensional measurement, three-dimensional measurement device, and measurement program
JP5600220B2 (en) * 2011-10-28 2014-10-01 富士フイルム株式会社 3D measuring device
WO2013094420A1 (en) 2011-12-22 2013-06-27 Canon Kabushiki Kaisha Three dimension measurement method, three dimension measurement program and robot device
US9292932B2 (en) 2011-12-22 2016-03-22 Canon Kabushiki Kaisha Three dimension measurement method, three dimension measurement program and robot device
CN111462244A (en) * 2019-01-22 2020-07-28 上海欧菲智能车联科技有限公司 On-line calibration method, system and device for vehicle-mounted all-round-looking system
CN111462244B (en) * 2019-01-22 2024-02-06 上海欧菲智能车联科技有限公司 On-line calibration method, system and device for vehicle-mounted looking-around system

Also Published As

Publication number Publication date
JP2882910B2 (en) 1999-04-19

Similar Documents

Publication Publication Date Title
JP4681856B2 (en) Camera calibration method and camera calibration apparatus
JP4889351B2 (en) Image processing apparatus and processing method thereof
JPH07294215A (en) Method and apparatus for processing image
JP2005347790A (en) Projector provided with trapezoidal distortion correction apparatus
JPH08331610A (en) Automatic image controller
KR20160047846A (en) Method of image registration
KR102414362B1 (en) Aligning digital images
JPH03200007A (en) Stereoscopic measuring instrument
JP3741136B2 (en) Obstacle adaptive projection display
JP2882910B2 (en) 3D image recognition method
JPH10122819A (en) Method and device for calibration
JP2559939B2 (en) Three-dimensional information input device
JP3103478B2 (en) Compound eye imaging device
JPH1144533A (en) Preceding vehicle detector
JP2007010419A (en) Three-dimensional shape of object verifying system
KR20190127543A (en) A method for detecting motion in a video sequence
JPH09329440A (en) Coordinating method for measuring points on plural images
JPH0875454A (en) Range finding device
JP3912638B2 (en) 3D image processing device
JPH0252204A (en) Measuring instrument for three-dimensional coordinate
JP2005309782A (en) Image processor
JP3340599B2 (en) Plane estimation method
JP2809348B2 (en) 3D position measuring device
JPH0953914A (en) Three-dimensional coordinate measuring instrument
JPH05118524A (en) Method of monitoring combustion in incinerator

Legal Events

Date Code Title Description
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 19990119

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080205

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090205

Year of fee payment: 10

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100205

Year of fee payment: 11

LAPS Cancellation because of no payment of annual fees