JPH0486957A - Method for inputting appearance data of stereoscopic object - Google Patents

Method for inputting appearance data of stereoscopic object

Info

Publication number
JPH0486957A
Authority
JP
Japan
Prior art keywords
dimensional
feature points
picture
feature point
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2201194A
Other languages
Japanese (ja)
Inventor
Kazuhiko Fukuda
和彦 福田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuji Electric Co Ltd
Fuji Facom Corp
Original Assignee
Fuji Electric Co Ltd
Fuji Facom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuji Electric Co Ltd, Fuji Facom Corp filed Critical Fuji Electric Co Ltd
Priority to JP2201194A priority Critical patent/JPH0486957A/en
Publication of JPH0486957A publication Critical patent/JPH0486957A/en
Pending legal-status Critical Current

Abstract

PURPOSE: To automate the three-dimensional coordinate measurement of a three-dimensional object and its input to CAD by extracting sequences of feature points from a group of images of the object captured from plural directions and obtaining the object's three-dimensional coordinates automatically with the aid of a standard three-dimensional model.

CONSTITUTION: The object is imaged from plural directions, and the image data are stored in the frame memory of an image processor 4. To associate the coordinate values of each image with coordinate values in a reference coordinate system, plural reference feature points are determined in advance and measured in each imaging direction, and during image analysis the positions of the feature points in the images from the plural directions are collated and corrected against them. After the series of imaging steps is completed, the image analysis stage begins. First, feature points are extracted from the first image: a group of feature point extraction windows indicating the ranges where feature points exist is set on the object image, and the points whose luminance exceeds a fixed level are extracted. The extracted feature point groups are collected into a feature point file. The feature points are then gathered from the file into a two-dimensional feature point coordinate system, and the feature point coordinates are converted to three dimensions.

Description

[Detailed Description of the Invention]

[Field of Industrial Application]

This invention relates to a method for inputting the external appearance shape data of a three-dimensional object, in which three-dimensional coordinate data of the object's appearance are extracted automatically for uses such as graphic display of the object and measurement of its three-dimensional dimensions.

[Conventional Technology]

Conventionally, when displaying a three-dimensional object or measuring its dimensions, CAD data can be used if the object has already been entered into a CAD (computer-aided design) system. When only the physical object exists and no CAD data are available, however, the only methods currently available are to image the object from several directions and enter the coordinates manually, or to apply a measuring tape directly to the object. Such technology is required, for example, in the following fields.

i) Apparel design
In clothing design, especially custom-made design, the dimensions of the person have conventionally been measured with a tape measure or the like. Ready-made clothes are usually manufactured without any individual measurements being taken.

As the demand for individualized designs grows, data measured with a tape measure will be difficult to convert into CAD form. A three-dimensional shape input device is therefore expected to become necessary as a means of entering human body dimensions and the appearance of clothing into databases, particularly as CAD systems for apparel come into widespread use.

ii) Custom-made products whose performance depends on special curved shapes
Golf club heads, shoe shapes, metalwork, and handicrafts are currently made by hand. As CAD-based design spreads in these fields, a three-dimensional shape input device will be needed as an input means for CAD.

iii) External shapes of three-dimensional objects that cannot be measured directly
Buildings and other huge structures are practically impossible to measure directly and have conventionally been surveyed by methods such as triangulation. Where highly precise data are not required, however, a device that allows the coordinates of an external shape to be entered more easily is desirable.

Known methods of meeting these various needs include manual coordinate entry, and methods that project laser slit light onto the object, measure the slit pattern with a television camera or the like, and derive the three-dimensional shape from it.

[Problem to Be Solved by the Invention]

Manual entry, however, is time-consuming. The laser slit light method has the merit of high-precision measurement, but it cannot be applied to objects whose surfaces reflect the laser light irregularly (such as glossy metallic surfaces); irradiating people with laser light is undesirable, making the method impractical in fields such as apparel design; and laser illumination is likewise impossible for outdoor measurement of objects such as buildings.

The object of this invention is therefore to make it possible to enter the external shape data of a three-dimensional object easily using a television camera.

[Means for Solving the Problem]

Markers are attached to the feature points (or feature lines) of the target three-dimensional object (or a model of it), and the object is imaged from plural directions with an imaging device. From the resulting images, the three-dimensional coordinates of the feature points (or lines) of the shape are extracted using a standard three-dimensional model. The data are displayed graphically on a display device, superimposed on the original image, corrected manually if necessary, and then input to CAD.

[Operation]

A sequence of feature points is extracted from a group of images of the object captured from plural directions, and the three-dimensional coordinates of the object are obtained automatically using a standard three-dimensional model so that they can be input to CAD.

Specifically, the method proceeds in the following steps.

i) Imaging process
Point-like markers whose luminance level differs from that of the surrounding background are attached to the feature points (or feature lines) of the three-dimensional object to be analyzed, and the object is imaged from plural directions with a camera or the like. Markers can be attached to any feature points whose extraction is desired.

ii) Feature point extraction and labeling process
A computer extracts the feature points from the plural captured images. The extraction exploits the difference in luminance level between the markers and the surrounding background, and is performed separately for each image captured from each direction. Each image contains plural feature points; to distinguish them from one another, the points are labeled as follows.

Since the positions of the attached markers fall within ranges that can be predicted in advance, a search region (feature point extraction window) is set for each feature point. By specifying with a window the range in which each corresponding feature point can exist in each image, corresponding feature points across the images can be labeled with the same number.

iii) Interpolation of feature points
When images are captured from plural directions, not all feature points appear in every image; depending on the direction, some are hidden from the camera's field of view. The feature points of the plural images captured from different directions are therefore used to interpolate one another.

iv) Linking of the extracted feature point coordinates and conversion to three-dimensional coordinates.

The feature point groups extracted and labeled in steps ii) and iii) are rearranged, feature point by feature point, along the axes of three-dimensional space, and the final feature point coordinates of the object are determined and input to CAD. To carry out steps iii) and iv), the coordinates of each grid point of a reference three-dimensional lattice model are measured beforehand; this facilitates the interpolation of hidden points and the conversion from two-dimensional to three-dimensional coordinates.

[Embodiment]

Fig. 1 shows an overview of the external shape input apparatus. In the figure, 1 is a golf club, 2 is a television camera, 3 is an illuminator, 4 is an image processing device, and 5 is an image display; the example targets a relatively small three-dimensional object. A single television camera 2 is assumed here, moved to capture images from plural directions, but plural cameras may be installed instead.

The camera used here is chosen in relation to the nature of the markers attached for feature point extraction; in this embodiment, color markers are used. Each color marker is given a specific color that can be clearly distinguished from the background, for example a color whose hue is opposite to that of the background, such as green against red or black against white.

Fig. 2 shows the relationship between hue frequency and intensity; by selecting the hue frequency appropriately, the feature points can be distinguished from the background. Fig. 3 shows an example of color markers for the case where the three-dimensional object is a golf club head. The size of each marker is determined appropriately from the required detection accuracy, the resolution of the imaging field of view, the range of motion of the feature points, and so on. The camera is a color camera equipped with a color filter that passes the color of the feature points and cuts the background color.
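The hue-based separation that Fig. 2 illustrates can also be sketched as a software step. Below is a minimal sketch, not from the patent itself: it assumes the image arrives as an RGB numpy array in [0, 1], and the function name and threshold values are illustrative only.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def marker_mask(rgb, target_hue, tol=0.08, min_sat=0.4):
    """Binary mask of pixels whose hue lies near the marker hue.

    rgb: (H, W, 3) image with values in [0, 1].
    target_hue: marker hue in [0, 1), e.g. ~0.33 for green markers
                chosen against a red background, as the patent suggests.
    """
    hsv = rgb_to_hsv(rgb)
    # Hue is cyclic, so take the wrapped distance to the target hue.
    dh = np.abs(hsv[..., 0] - target_hue)
    dh = np.minimum(dh, 1.0 - dh)
    # Require some saturation so gray background pixels are excluded.
    return (dh < tol) & (hsv[..., 1] > min_sat)
```

In the patent's hardware realization, the same selection is done optically by the color filter in front of the camera; the software mask is simply the equivalent operation on an unfiltered color image.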

Under these conditions, the three-dimensional object is imaged from plural directions with one or more cameras, and the image data are stored in the frame memory of the image processing device 4. To associate the coordinate values of the feature points in each image (hereinafter also called values in the camera coordinate system) with coordinate values in a reference coordinate system (hereinafter also called values in the reference coordinate system), plural reference feature points are determined in advance and measured in each imaging direction, and at image analysis time the feature point positions of the images from the plural directions are collated and corrected (calibrated) against them. Fig. 4 illustrates this: in Fig. 4(a), M denotes the reference lattice model (calibrator). Markers are attached to each grid point of the model M, which is imaged with, for example, three cameras 2A, 2B, and 2C; from the grid point coordinates thus read, a conversion table T between the camera coordinate systems and the reference coordinate system, as shown in Fig. 4(b), is created. To allow the feature point groups extracted from the images in each direction to be transformed and corrected, a reference camera or camera position (camera coordinates) is fixed in advance.

For example, the front camera, or the camera position facing the object from the front, is taken as the reference camera position, and its coordinate system as the reference camera coordinate system.

Fig. 5 shows an example in which the lattice model is likewise imaged, and each grid point measured, with a single camera. Except that one camera is moved from position to position to capture and measure, the procedure is the same as in Fig. 4, so the details are omitted. Fig. 5(a) shows the relationship between the lattice model and the camera, and Fig. 5(b) shows the conversion table T.
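The conversion table T records how known lattice coordinates in the reference frame map to image coordinates in each view. A standard computational stand-in for such a table is to fit a 3x4 projection matrix per camera position from the same grid-point correspondences by the direct linear transform (DLT). A minimal sketch under that assumption follows; the patent itself specifies only the lookup table, not this matrix formulation.

```python
import numpy as np

def calibrate_dlt(points_3d, points_2d):
    """Fit a 3x4 projection matrix from lattice-point correspondences.

    points_3d: (N, 3) reference-frame coordinates of the grid points.
    points_2d: (N, 2) image coordinates of the same points in one view.
    Needs N >= 6 points that are not all coplanar.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The matrix entries form the null vector of A: take the right
    # singular vector belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)
```

Running this once per camera of Fig. 4, or once per camera position of Fig. 5, yields one matrix per viewing direction, playing the role of table T.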

When the series of imaging steps is completed, the process moves to the image analysis stage. The object of the analysis is the group of images, over a certain range, captured from the plural directions as described above.

First, feature points are extracted from the first image (frame). Since the camera position and the object position are fixed, a group of feature point extraction windows indicating the ranges where feature points can exist is set on the object image, and within each window the points whose luminance exceeds a fixed level are extracted. Each extracted feature point is labeled with the same number as its corresponding extraction window. For the next image, feature points are likewise extracted with a window group prepared in advance for that image, and each point is again labeled with the number of its corresponding window; the feature point groups of the subsequent images are extracted and labeled in the same way. Processing ends when the last frame has been handled. The feature point groups extracted in this way are collected into a feature point file.
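A minimal sketch of this window-and-threshold extraction (assuming a grayscale image as a numpy array; the window layout, names, and the centroid reduction are illustrative assumptions, not details given by the patent):

```python
import numpy as np

def extract_feature_points(gray, windows, thresh):
    """Extract one labeled feature point per extraction window.

    gray: (H, W) luminance image.
    windows: dict label -> (row0, row1, col0, col1), the search region
             in which each marker is expected to appear.
    thresh: fixed luminance level separating marker from background.
    Returns dict label -> (row, col) marker centroid; windows whose
    marker is hidden in this view simply yield no entry.
    """
    points = {}
    for label, (r0, r1, c0, c1) in windows.items():
        patch = gray[r0:r1, c0:c1]
        rows, cols = np.nonzero(patch > thresh)
        if rows.size:  # marker visible inside this window
            points[label] = (r0 + rows.mean(), c0 + cols.mean())
    return points
```

Because each window carries a label, the same physical marker receives the same number in every view, which is what makes the cross-view collation below possible.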

The feature point coordinate values extracted from each camera are values in that camera's own coordinate system; moreover, in the images captured from the different directions some feature points are hidden from the imaging field of view, and errors arise from camera distortion and the like. Therefore, using the feature point file created in the extraction and labeling stage, the coordinates of identical feature points are collated against one another and corrected, and all feature point coordinates are assembled into a two-dimensional feature point coordinate system referred to the reference camera coordinate system. The feature point coordinates are then converted into three-dimensional coordinates using the conversion table between the camera coordinate system and the reference coordinate system obtained beforehand via the standard model. This, too, is done with a three-dimensional lattice model such as that of Fig. 4 or Fig. 5.
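The final 2D-to-3D step can be sketched as linear triangulation, reusing the per-view projection matrices fitted from the lattice model above; this is again an assumption standing in for the patent's table-based conversion.

```python
import numpy as np

def triangulate(proj_mats, uv_list):
    """Recover one reference-frame 3D point from its matched 2D
    observations in two or more calibrated views.

    proj_mats: list of 3x4 projection matrices, one per view
               (e.g. from calibrate_dlt above).
    uv_list: the same feature point's (u, v) coordinates per view,
             matched across views by their shared window label.
    """
    A = []
    for P, (u, v) in zip(proj_mats, uv_list):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

A point hidden in one view but visible in two others can still be recovered this way, which loosely corresponds to the interpolation of hidden feature points described in step iii).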

Fig. 6 is an explanatory diagram for explaining another embodiment of the invention.

Like Fig. 1, this example is applied to the head of a golf club 1, but it differs in that linear near-infrared markers are used instead of point-like color markers. By thinning the linear markers, or by treating each as a collection of points, this case can be handled in the same way as Fig. 1, so the details are omitted. Fig. 6(a) shows the near-infrared markers, and Fig. 6(b) shows the windows and the feature lines (binarized line segment patterns).
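A crude sketch of reducing a binarized line marker to a collection of point features, as this embodiment suggests (assuming the line runs roughly horizontally inside its window; the sampling step is an arbitrary choice, not specified by the patent):

```python
import numpy as np

def line_to_points(mask, step=5):
    """Sample a binarized line-segment mask into point features by
    taking the centroid row of every `step`-th column."""
    points = []
    for c in range(0, mask.shape[1], step):
        rows = np.nonzero(mask[:, c])[0]
        if rows.size:
            points.append((rows.mean(), float(c)))
    return points
```

Each sampled point can then pass through the same labeling, collation, and triangulation pipeline as the point-like color markers.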

The above concerns relatively small three-dimensional objects. For large objects such as structures, the markers may instead be, for example, colored lamps attached to the object at appropriate points, after which processing can proceed in the same way as above.

[Effect of the Invention]

According to this invention, the three-dimensional coordinate measurement of a three-dimensional object and its input to CAD, conventionally carried out by hand, can be processed automatically with a television camera and a computer, meeting the requirements of various industries.

[Brief Description of the Drawings]

Fig. 1 is a schematic diagram showing an embodiment of this invention; Fig. 2 is an explanatory diagram for explaining the feature point extraction method based on hue frequency; Fig. 3 is an explanatory diagram for explaining the point-like markers; Figs. 4 and 5 are explanatory diagrams for explaining the principle of interpolation and conversion of the images from each direction using the three-dimensional lattice model; and Fig. 6 is an explanatory diagram for explaining another embodiment of this invention.

1 ... golf club head; 2, 2A, 2B, 2C ... television cameras; 3 ... illuminator; 4 ... image processing device; 5 ... image display.

Claims (1)

[Claims]

1) A method for inputting the external appearance shape data of a three-dimensional object, characterized in that point-like or linear markers are attached to the feature points of the object; the object is imaged from plural directions with one or more imaging means; the coordinates of each marker feature point attached to the object are extracted from each image; these coordinates are collated and interpolated against the results of measuring, from each direction, the grid points of a reference three-dimensional lattice model, and the feature point coordinates are thereby expressed as a group of two-dimensional feature point coordinates referred to the image plane of a certain direction; and these coordinate groups are converted into three-dimensional coordinates on the basis of their correspondence with the three-dimensional coordinates obtained when the three-dimensional lattice model was measured, so that they can be input to a computer-aided design device.

2) The method for inputting the external appearance shape data of a three-dimensional object according to claim 1, characterized in that the markers are of a color whose hue is opposite to that of the background.

3) The method for inputting the external appearance shape data of a three-dimensional object according to claim 1, characterized in that the markers are near-infrared reflective markers, including near-infrared reflective tape.
JP2201194A 1990-07-31 1990-07-31 Method for inputting appearance data of stereoscopic object Pending JPH0486957A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2201194A JPH0486957A (en) 1990-07-31 1990-07-31 Method for inputting appearance data of stereoscopic object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2201194A JPH0486957A (en) 1990-07-31 1990-07-31 Method for inputting appearance data of stereoscopic object

Publications (1)

Publication Number Publication Date
JPH0486957A true JPH0486957A (en) 1992-03-19

Family

ID=16436907

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2201194A Pending JPH0486957A (en) 1990-07-31 1990-07-31 Method for inputting appearance data of stereoscopic object

Country Status (1)

Country Link
JP (1) JPH0486957A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5819016A (en) * 1993-10-05 1998-10-06 Kabushiki Kaisha Toshiba Apparatus for modeling three dimensional information
US5706419A (en) * 1995-02-24 1998-01-06 Canon Kabushiki Kaisha Image capturing and processing apparatus and image capturing and processing method
JP2002311523A (en) * 2001-04-19 2002-10-23 Shimadzu Corp Three-dimensional optical camera
WO2011105616A1 (en) * 2010-02-26 2011-09-01 Canon Kabushiki Kaisha Three-dimensional measurement apparatus, model generation apparatus, processing method thereof, and non-transitory computer-readable storage medium
JP2011179908A (en) * 2010-02-26 2011-09-15 Canon Inc Three-dimensional measurement apparatus, method for processing the same, and program
US9355453B2 (en) 2010-02-26 2016-05-31 Canon Kabushiki Kaisha Three-dimensional measurement apparatus, model generation apparatus, processing method thereof, and non-transitory computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN109215063B (en) Registration method of event trigger camera and three-dimensional laser radar
TWI419081B (en) Method and system for providing augmented reality based on marker tracing, and computer program product thereof
CN104766292B (en) Many stereo camera calibration method and systems
US6917702B2 (en) Calibration of multiple cameras for a turntable-based 3D scanner
US7456842B2 (en) Color edge based system and method for determination of 3D surface topology
US7298889B2 (en) Method and assembly for the photogrammetric detection of the 3-D shape of an object
CN106874884B (en) Human body recognition methods again based on position segmentation
CN108926355A (en) X-ray system and method for object of standing
GB2524983A (en) Method of estimating imaging device parameters
US20190073796A1 (en) Method and Image Processing System for Determining Parameters of a Camera
JP2009081853A (en) Imaging system and method
JP4761670B2 (en) Moving stereo model generation apparatus and method
JP4946878B2 (en) Image identification apparatus and program
CN112241700A (en) Multi-target forehead temperature measurement method for forehead accurate positioning
JPH04102178A (en) Object model input device
Gan et al. A photogrammetry-based image registration method for multi-camera systems–with applications in images of a tree crop
KR20050061115A (en) Apparatus and method for separating object motion from camera motion
JPH0486957A (en) Method for inputting appearance data of stereoscopic object
KR100269116B1 (en) Apparatus and method for tracking 3-dimensional position of moving abject
JP2003078811A (en) Method for associating marker coordinate, method and system for acquiring camera parameter and calibration pattern
JP3919722B2 (en) Skin shape measuring method and skin shape measuring apparatus
JPH0766436B2 (en) 3D model construction device using continuous silhouette images
Boerner et al. Brute force matching between camera shots and synthetic images from point clouds
Uma et al. Marker based augmented reality food menu
JP2981382B2 (en) Pattern matching method