JPH0814858A - Data acquisition device for three-dimensional object - Google Patents

Data acquisition device for three-dimensional object

Info

Publication number
JPH0814858A
Authority
JP
Japan
Prior art keywords
dimensional
subject
shape information
rotation angle
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP6143234A
Other languages
Japanese (ja)
Other versions
JP3352535B2 (en)
Inventor
Hideki Mitsumine
秀樹 三ッ峰
Seiki Inoue
誠喜 井上
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Japan Broadcasting Corp
Original Assignee
Nippon Hoso Kyokai NHK
Japan Broadcasting Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Hoso Kyokai NHK, Japan Broadcasting Corp filed Critical Nippon Hoso Kyokai NHK
Priority to JP14323494A (granted as JP3352535B2)
Publication of JPH0814858A
Application granted
Publication of JP3352535B2
Anticipated expiration
Legal status: Expired - Fee Related


Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To reduce the cost of a device for obtaining three-dimensional data from an object by simplifying its structure, and to prevent the adverse effects on the object that a special light source can cause, while simultaneously obtaining three-dimensional shape data and surface texture data from the object. CONSTITUTION: While a computer 6 controls the rotation angle of a stepping motor 4, a video camera 5 photographs a subject 2 at each unit rotation angle. The motion of feature points on the subject 2 is detected from these images (two-dimensional images) and from the rotation angle of the stepping motor 4, so as to find the rotation axis of the subject. Three-dimensional data of the subject 2 are then obtained on the basis of this rotation axis, and surface texture data are obtained from the three-dimensional data thus obtained and the two-dimensional image at each unit rotation angle.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

[Field of the Invention] The present invention relates to a three-dimensional object data acquisition device that obtains accurate three-dimensional shape information of a solid object, together with accurate surface texture (Surface Texture) information, from nothing more than two-dimensional images captured under ordinary shooting conditions and the angle information recorded at the time of shooting. Such information serves as source material for moving-image compression, object recognition, computer graphics creation, and the like.

[0002]

[Summary of the Invention] The present invention relates to an apparatus that obtains the three-dimensional shape information of a solid object, together with the corresponding surface texture information, from captured video (two-dimensional images) without using special illumination or special sensors as measurement aids. The target subject is placed on a turntable and photographed with a video camera or the like while being rotated, and feature points of the subject are extracted from the resulting two-dimensional images. Based on the motion information of each feature point, the relative tilt of the rotation axis with respect to the optical axis of the camera lens is computed iteratively, treating the correspondence between the three-dimensional space containing the subject and the two-dimensional camera imaging plane as a central projection. By using the highly accurate rotation-axis information obtained by this iterative computation together with the known rotation angle information, the apparatus obtains the three-dimensional shape information of a solid object and the corresponding surface texture information with a simpler configuration than conventional methods, which require light sources such as laser slit light or accurate camera position information.

[0003]

[Description of the Related Art] Methods for obtaining the three-dimensional shape information and surface texture information of solid objects, used as source material for moving-image compression, object recognition, computer graphics creation, and the like, have conventionally fallen into two classes: active measurement methods that use special light sources such as laser slit light and special sensors, and passive measurement methods that use images captured with a video camera or the like.

[0004] An example of the former is the slit light projection method, which obtains the three-dimensional shape information of a solid object as follows.

[0005] First, as shown in FIG. 7, a table 103 is placed at a predetermined distance from a table 102 on which a subject 101 rests, and a laser light source 104 is mounted on the table 103 together with a cylindrical lens 105 that converts the laser beam emitted from the light source 104 into slit light (laser slit light) S. A camera 106 is also placed at a predetermined distance from the table 102 on which the subject 101 rests.

[0006] With the subject 101 placed on the table 102, a laser beam is emitted from the laser light source 104 and converted by the cylindrical lens 105 into the slit light (laser slit light) S, which is directed onto the subject 101. At the same time, the camera 106 photographs the subject 101, capturing on its imaging surface 107 the slit image of the subject 101 produced by the laser slit light S.

[0007] As a result, among the three-dimensional space coordinates describing the surface shape of the subject 101, a point P lying on the plane formed by the laser slit light S is projected onto a single point P' on the imaging surface 107 along the straight line L that passes through the point P and the lens center O of the camera 106. The imaging surface 107 thus records the outline (slit image) of the cross section of the subject's surface shape cut by the plane of the laser slit light S.
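The central projection described above, in which a surface point P maps to an image point P' along the line through the lens center, can be sketched as follows (a minimal pinhole-camera model; the focal length and coordinate values are illustrative assumptions, not values from the patent):

```python
def project_point(p, focal_length=1.0):
    """Central (pinhole) projection of a 3-D point p = (x, y, z),
    with the lens center O at the origin, onto the image plane
    z = focal_length. Returns image-plane coordinates (u, v)."""
    x, y, z = p
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    return focal_length * x / z, focal_length * y / z

# A surface point P on the slit-light plane maps to a single image point P'.
u, v = project_point((0.2, 0.1, 2.0))
print(u, v)  # 0.1 0.05
```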

[0008] The above photographing is then repeated while the laser emission direction of the laser light source 104 is shifted little by little in the horizontal direction, so that each imaging surface 107 records the outline of the cross section of the surface shape of the subject 101 cut by the plane of the corresponding laser slit light S.

[0009] When the above photographing process has been completed for the entire subject 101, the outlines on the resulting slit images are supplied, as the three-dimensional information of the subject 101, to a processing device that performs moving-image compression, object recognition, computer graphics creation, or the like, which then processes those outlines.

[0010] An example of the latter is the three-dimensional boarding method, which obtains the three-dimensional shape information of a solid object as follows.

[0011] First, a subject is photographed by a camera from a plurality of viewpoints. As shown in FIG. 8, characteristic points of the subject (two-dimensional feature points) are extracted from each of the resulting images, and for each two-dimensional feature point the line of sight passing through that point and the lens center of the camera is computed.

[0012] Next, the space containing the subject is divided into three-dimensional picture elements (hereafter called voxels), and a predetermined constant value is added to every voxel that a line of sight passes through, giving an accumulated value for each voxel. Voxels where many lines of sight intersect are thereby extracted as the three-dimensional feature points of the subject, and these three-dimensional feature points are supplied, as the three-dimensional information of the subject, to a processing device that performs moving-image compression, object recognition, computer graphics creation, or the like, which then processes them.
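The voxel accumulation just described can be sketched as follows (a minimal illustration on a coarse grid; the grid size, ray-marching step, and vote increment are illustrative assumptions):

```python
import numpy as np

GRID = 16                                # voxels per axis (illustrative)
votes = np.zeros((GRID, GRID, GRID))

def cast_ray(origin, direction, n_steps=16, step=1.0):
    """March along one line of sight, adding a constant vote to
    every voxel the ray passes through."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    p = np.asarray(origin, dtype=float)
    for _ in range(n_steps):
        idx = tuple(p.astype(int))
        if all(0 <= i < GRID for i in idx):
            votes[idx] += 1.0
        p = p + step * d

# Two lines of sight that intersect at the voxel (8, 8, 8):
cast_ray((0, 8, 8), (1, 0, 0))
cast_ray((8, 0, 8), (0, 1, 0))

# Voxels whose accumulated value clears a threshold become the
# candidate three-dimensional feature points.
peaks = np.argwhere(votes >= 2.0)
print(peaks)  # [[8 8 8]]
```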

[0013]

[Problems to be Solved by the Invention] However, the slit light projection method and the three-dimensional boarding method described above suffer from the following problems.

[0014] First, in the slit light projection method shown in FIG. 7, the laser light source 104 is used and the width of the laser slit light S is made narrow in order to improve measurement accuracy. When the subject 101 is, for example, a work of art made of paper, the heat of the laser slit light S may therefore dry out or discolor its surface. Moreover, since only the three-dimensional shape information of the subject 101 can be obtained, the surface of the subject must be photographed separately to obtain the surface texture information of the subject 101.

[0015] In the three-dimensional boarding method shown in FIG. 8, when a plurality of three-dimensional feature points are extracted simultaneously, the processing for the individual feature points interferes, generating false three-dimensional feature points. Furthermore, because of image quantization errors and the like, feature points whose accumulated value falls below a certain threshold cannot be extracted as three-dimensional feature points, and the accuracy of the shape is fixed by the chosen degree of quantization of the voxel space.

[0016] Furthermore, the method requires, in principle, accurate camera position information; since it is very difficult to determine the center position of the camera lens accurately, it is difficult to extract accurate three-dimensional feature points, and at present the method is not yet technically established.

[0017] In view of the above circumstances, an object of the present invention is to provide a three-dimensional object data acquisition device that can acquire accurate three-dimensional shape information of a subject and its surface texture information simultaneously, while preventing the adverse effects on the subject caused by the use of a special light source, and whose greatly simplified configuration substantially reduces the manufacturing cost of the device.

[0018]

[Means for Solving the Problems] To achieve the above object, the three-dimensional object data acquisition device of the present invention comprises, according to claim 1: a rotary drive mechanism that rotates a subject in units of a predetermined angle; a photographing mechanism that photographs the subject at each rotation angle produced by the rotary drive mechanism and outputs a two-dimensional image; a three-dimensional shape information acquisition processing unit that, based on the two-dimensional image for each rotation angle obtained by the photographing mechanism and on the rotation angle of the rotary drive mechanism, detects the motion of feature points on the surface of the subject, searches for the rotation axis of the subject, and creates the three-dimensional shape information of the subject from this rotation axis and the feature points; and a surface texture information acquisition processing unit that creates surface texture information from the three-dimensional shape information of the subject obtained by the three-dimensional shape information acquisition processing unit and the two-dimensional images for each rotation angle obtained by the photographing mechanism.

[0019] According to claim 2, in the three-dimensional object data acquisition device of claim 1, the three-dimensional shape information acquisition processing unit extracts feature points from the two-dimensional image for each rotation angle, detects the motion of each feature point, and detects the rotation axis corresponding to each feature point; it then arranges these rotation axes on a straight line to determine the position of each feature point in three-dimensional space, creating the three-dimensional shape information of the subject. The surface texture information acquisition processing unit, based on the three-dimensional shape information of the subject, detects the rotation angle at which the surface region enclosed by a set of feature points directly faces the photographing mechanism, and creates the surface texture information from the pattern in the two-dimensional image at that rotation angle.

[0020]

[Operation] In the above configuration, the three-dimensional object data acquisition device of claim 1 photographs the subject with the photographing mechanism at each rotation angle, generating a two-dimensional image, while the rotary drive mechanism rotates the subject in units of a predetermined angle. The three-dimensional shape information acquisition processing unit detects the motion of feature points on the surface of the subject from the two-dimensional image at each rotation angle and the rotation angle of the rotary drive mechanism, searches for the rotation axis of the subject, and creates the three-dimensional shape information of the subject from this rotation axis and the feature points. The surface texture information acquisition processing unit then creates the surface texture information from the three-dimensional shape information obtained by the three-dimensional shape information acquisition processing unit and the two-dimensional images at each rotation angle obtained by the photographing mechanism. The device thus acquires accurate three-dimensional shape information and surface texture information of the subject simultaneously, while preventing the adverse effects on the subject caused by a special light source, and its greatly simplified configuration substantially reduces the manufacturing cost of the device.

[0021] According to claim 2, in the device of claim 1, the three-dimensional shape information acquisition processing unit extracts feature points from the two-dimensional image at each rotation angle, detects the motion of each feature point, and detects the rotation axis corresponding to each feature point; it then arranges these rotation axes on a straight line to determine the position of each feature point in three-dimensional space, creating the three-dimensional shape information of the subject. The surface texture information acquisition processing unit, based on the three-dimensional shape information of the subject, detects the rotation angle at which the surface region enclosed by a set of feature points directly faces the photographing mechanism, and creates the surface texture information from the pattern in the two-dimensional image at that rotation angle. As in claim 1, the device thereby acquires accurate three-dimensional shape information and surface texture information of the subject simultaneously, while preventing the adverse effects on the subject caused by a special light source, and its greatly simplified configuration substantially reduces the manufacturing cost of the device.

[0022]

[Embodiment] FIG. 1 is a block diagram showing one embodiment of the three-dimensional object data acquisition device according to the present invention.

[0023] The three-dimensional object data acquisition device 1 shown in this figure comprises: a turntable 3 on which a subject 2 is placed; a stepping motor 4 that rotates the turntable 3 by a constant angle at a time; a video camera 5 placed at a predetermined distance from the subject 2 to photograph it; and a computer 6 that, while controlling the operation of the stepping motor 4, captures the images obtained by the video camera 5 and determines the accurate three-dimensional shape information of the subject 2 and its surface texture information simultaneously.

[0024] While the computer 6 controls the rotation angle of the stepping motor 4, the video camera 5 photographs the subject 2, yielding an image (two-dimensional image) for each rotation angle. From these images and the rotation angle of the stepping motor 4, the computer detects the motion of feature points on the surface of the subject 2 and searches for the rotation axis of the subject 2. It creates the three-dimensional shape information of the subject 2 on the basis of this rotation axis information, and creates the surface texture information from the three-dimensional shape information and the two-dimensional images for each rotation angle. The three-dimensional shape information and the surface texture information are then supplied to a processing device that performs moving-image compression, object recognition, computer graphics creation, or the like.

[0025] Next, the processing performed by the computer 6 shown in FIG. 1 will be described in detail with reference to the block diagram shown in FIG. 2.

[0026] As shown in FIG. 2, the computer 6 is functionally divided into a photographing and angle control unit 10, a feature point detection unit 11, a motion detection unit 12, a rotation axis search unit 13, a reprojection unit 14, and a texture geometric transformation and averaging unit 15. The photographing and angle control unit 10 performs the image acquisition processing for each rotation angle; the feature point detection unit 11, the motion detection unit 12, the rotation axis search unit 13, and the reprojection unit 14 perform the three-dimensional shape information acquisition processing; and the texture geometric transformation and averaging unit 15 performs the surface texture information acquisition processing.

[0027] <<Image Acquisition Processing for Each Rotation Angle>>

<Photographing and angle control unit 10> The photographing and angle control unit 10 drives the stepping motor 4 in rotation angle units such as full steps or half steps, rotating the turntable 3 on which the subject 2 is placed. At each rotation angle it has the video camera 5 photograph the subject 2 and captures an image (two-dimensional image) of the subject 2, which, as shown in FIG. 3, is held as a two-dimensional array of pixels whose rows run along the vertical direction of the subject 2 and whose columns run along its horizontal direction. The brighter a pixel in this two-dimensional arrangement, the larger the numerical value of its data (pixel data).

[0028] The two-dimensional images for each rotation angle, obtained over one full rotation of the subject 2, are supplied to the feature point detection unit 11, the motion detection unit 12, and the texture geometric transformation and averaging unit 15.

[0029] <<Three-Dimensional Shape Information Acquisition Processing>>

<Feature point detection unit 11> The feature point detection unit 11 takes in the two-dimensional image for each rotation angle output by the photographing and angle control unit 10 and divides its pixels into a plurality of regions of 8 pixels each in the row and column directions. It then computes the edge strength of the image data, extracts as feature points of the two-dimensional image the center points of all regions whose sum of squared edge strengths is at least a preset threshold, and supplies these points, as original feature points, to the motion detection unit 12, the rotation axis search unit 13, and the reprojection unit 14.
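The region-based extraction above can be sketched as follows (a minimal version that uses a simple gradient magnitude as the "edge strength"; the gradient operator and the threshold value are illustrative assumptions, not specified in the patent):

```python
import numpy as np

def detect_feature_points(img, block=8, threshold=1000.0):
    """Divide the image into block x block regions, compute a squared
    edge strength per pixel from a simple gradient, and return the
    center point of every region whose sum of squared strengths is
    at least the threshold."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    strength2 = gx**2 + gy**2                 # squared edge strength
    points = []
    h, w = img.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if strength2[r:r+block, c:c+block].sum() >= threshold:
                points.append((r + block // 2, c + block // 2))
    return points

# Synthetic image with one bright square: only the regions crossed
# by the square's edges yield feature points.
img = np.zeros((32, 32))
img[8:24, 8:24] = 255.0
print(detect_feature_points(img))
```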

[0030] <Motion detection unit 12> The motion detection unit 12 takes in the original feature points for each rotation angle output by the feature point detection unit 11 and the two-dimensional images for each rotation angle output by the photographing and angle control unit 10. Starting from an original feature point in one two-dimensional image, it extracts the most similar portion of another two-dimensional image for which a correspondence is to be found, and takes that portion as the feature point coordinates in that image. Through this block matching, it determines, as coordinate values, how each feature point of the subject 2 moved on the imaging surface of the video camera 5 as the subject rotated.

[0031] In practice, based on an original feature point in one two-dimensional image, a region is selected in the other two-dimensional image in which the correspondence is to be sought. The difference in pixel luminance between pixels at the same coordinates is computed for all pixels in the region, and the position where the resulting sum of squared differences over the pixels (the evaluation value) is smallest is taken as the feature point corresponding to the original feature point.

[0032] Block matching is then performed between the coordinates of these corresponding original feature points and the two-dimensional images at further different angles. Corresponding original feature points across the two-dimensional images are grouped together, and the coordinates of each group of original feature points (their coordinates on each two-dimensional image) are supplied, as a feature point coordinate group, to the rotation axis search unit 13 and the texture geometric transformation and averaging unit 15.

[0033] However, depending on the two-dimensional image, a corresponding original feature point may not exist. When the evaluation value is at or below a certain threshold, the unit judges that no corresponding original feature point exists in the two-dimensional image photographed at that rotation angle; only when three or more original feature points correspond are those original feature points supplied, as a feature point coordinate group, to the rotation axis search unit 13 and the texture geometric transformation and averaging unit 15.

[0034] <Rotation axis search unit 13> The rotation axis search unit 13 takes in the feature point coordinate groups output by the motion detection unit 12 and the original feature points output by the feature point detection unit 11. It selects one feature point coordinate group and extracts three of the original feature points constituting that group; then, as shown in FIG. 4, it hypothesizes, for the three straight lines passing through these original feature points and the lens center of the video camera 5, a single plane (a plane having some one vector as its normal).

[0035] The projection points obtained by projecting these three original feature points onto this plane along their respective straight lines form, as shown in FIG. 5, two isosceles triangles that share one of their equal sides and are centered on the rotation axis of the subject 2. The base lengths L1 and L2 are therefore equal, and if θ is the rotation angle of the turntable 3 between the two-dimensional images, then, since the interior angles of each isosceles triangle sum to π, the following holds:

θ + φ/2 + φ/2 = π

where φ/2 is the base angle of the isosceles triangle. Rearranging gives the following equation.

[0036] π − θ = φ. Therefore, by selecting projection points that satisfy condition 1 below, the projection points of the moving original feature points produced by the rotation of the turntable 3 can be found.

【0037】Condition 1: π − θ = φ and L1 = L2, where φ is the angle formed by the two straight lines connecting the projection points of the original feature points, and L1 and L2 are the distances between the projection points of two original feature points. The rotation axis direction is then detected from the positions of these projection points and supplied to the reprojection unit 14 as an axis vector whose inclination is relative to the lens optical axis of the video camera 5.
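Condition 1 can be checked numerically. The sketch below assumes one plausible reading: p0, p1, p2 are the in-plane projections of the same original feature point at three successive turntable steps of θ, so the chord lengths L1 and L2 must match and the interior angle φ at the middle point must equal π − θ (the function name and tolerance are hypothetical):

```python
import math

def satisfies_condition1(p0, p1, p2, theta, tol=1e-6):
    """Condition 1 of paragraph [0037]: equal chord lengths (L1 == L2)
    and interior angle phi at the middle projection equal to pi - theta."""
    u = (p0[0] - p1[0], p0[1] - p1[1])   # from middle point toward previous
    w = (p2[0] - p1[0], p2[1] - p1[1])   # from middle point toward next
    L1, L2 = math.hypot(*u), math.hypot(*w)
    if abs(L1 - L2) > tol * max(L1, L2):
        return False                      # L1 != L2: not the same feature
    cos_phi = (u[0] * w[0] + u[1] * w[1]) / (L1 * L2)
    phi = math.acos(max(-1.0, min(1.0, cos_phi)))
    return abs((math.pi - theta) - phi) < tol

# Projections of one point on a circle of radius 2, stepped by 30 degrees,
# satisfy the condition; collinear points with unequal chords do not.
theta = math.radians(30)
pts = [(2 * math.cos(k * theta), 2 * math.sin(k * theta)) for k in range(3)]
ok = satisfies_condition1(*pts, theta)
```
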

【0038】In this embodiment, the rotation direction of the subject 2 is not input at the time of shooting; the motion of the projection points on the plane whose normal is the axis vector found by this sequence gives the rotation direction about that axis vector at shooting time.

【0039】The above processing is then performed on each feature point coordinate group to detect the axis vector corresponding to the original feature points constituting that group, and the result is supplied to the reprojection unit 14.

【0040】<Reprojection unit 14> The reprojection unit 14 takes in the axis vectors output from the rotation axis search unit 13 and the original feature points output from the feature point detection unit 11. Based on the information of each original feature point, for a feature point coordinate group in which three or more feature point coordinates were obtained, the reference point is the point equidistant from each projection point on the plane whose normal is the axis vector; for a feature point coordinate group with two feature point coordinates, the reference point is the point that is equidistant from the projection points on that plane and that, when taken as the center, satisfies the rotation direction condition with respect to the axis vector from the rotation axis search unit 13.
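For a group with three or more projection points, the point "equidistant from each projection point" is the circumcenter of those points. A sketch assuming three non-collinear 2D projections (the helper name is hypothetical):

```python
import numpy as np

def circumcenter(p0, p1, p2):
    """Point equidistant from three non-collinear 2D points: solve the two
    linear equations stating equal squared distance to each pair of points."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    # |c - p0|^2 = |c - p1|^2  rearranges to  2(p1 - p0).c = |p1|^2 - |p0|^2
    A = 2.0 * np.array([p1 - p0, p2 - p0])
    b = np.array([p1 @ p1 - p0 @ p0, p2 @ p2 - p0 @ p0])
    return np.linalg.solve(A, b)

# Three points on the unit circle about the origin: the center is (0, 0).
c = circumcenter((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))
```
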

【0041】Thereafter, as shown in FIG. 6, the projection planes are moved, with their normal directions preserved, so that the reference points obtained for the feature point coordinate groups line up along a straight line in the same direction as the axis vector in three-dimensional space. Shape data are obtained from the resulting projection points of the original feature points and supplied as the three-dimensional shape information of the subject 2 to the texture geometric transformation and averaging unit 15, as well as to processing devices that perform moving-image compression, object recognition, computer graphics creation, and the like.

【0042】In practice, the projection planes are moved so that the reference points of the other feature point coordinate groups come as close as possible to the straight line that passes through the reference point obtained from one feature point coordinate group and runs parallel to the axis vector; the calculation is then redone, and the resulting projection points of the original feature points are taken as the three-dimensional shape information of the subject 2.

【0043】《Acquisition of surface texture information》<Texture geometric transformation and averaging unit 15> The texture geometric transformation and averaging unit 15 takes in the three-dimensional shape information of the subject 2 output from the reprojection unit 14, the axis vectors output from the motion detection unit 12, and the two-dimensional images output from the imaging and angle control unit 10. Based on the three-dimensional shape information of the subject 2, it detects the rotation angle at which a surface region enclosed by three feature points directly faces the lens optical axis of the video camera 5 in three-dimensional space, geometrically transforms the pattern in the two-dimensional image at that rotation angle and in the other two-dimensional images containing that surface region to obtain the surface pattern, and then averages these to obtain the average pattern of the surface region.
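The averaging step can be sketched as a pixelwise mean over the geometrically transformed patches, counting each pixel only in the views that cover it. This is a minimal stand-in for the unit's behavior; the mask representation and names are assumptions:

```python
import numpy as np

def average_patches(patches, masks):
    """Average co-registered texture patches pixelwise, counting each pixel
    only in the views that actually cover it (mask == True)."""
    acc = np.zeros(patches[0].shape, dtype=float)
    cnt = np.zeros(patches[0].shape, dtype=float)
    for patch, mask in zip(patches, masks):
        acc += np.where(mask, patch, 0.0)   # accumulate covered pixels only
        cnt += mask.astype(float)           # per-pixel view count
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

a = np.full((2, 2), 10.0)
b = np.full((2, 2), 30.0)
m_all = np.ones((2, 2), dtype=bool)
m_half = np.array([[True, True], [False, False]])  # second view covers top row
avg = average_patches([a, b], [m_all, m_half])
```
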

【0044】When the positions of the enclosing feature points differ, the overlapping portions are averaged to obtain the average pattern.

【0045】The above processing is likewise performed on the other surface regions enclosed by three feature points on the surface of the subject 2 to obtain the average pattern of the entire surface of the subject 2, which is supplied as surface texture information to processing devices that perform moving-image compression, object recognition, computer graphics creation, and the like.

【0046】As described above, in this embodiment, while the computer 6 controls the rotation angle of the stepping motor 4, the movement of feature points on the surface of the subject 2 is detected from the images (two-dimensional images) obtained at each rotation angle by photographing the subject 2 with the video camera 5 and from the rotation angle of the stepping motor 4; the rotation axis of the subject 2 is searched for; the three-dimensional shape information of the subject 2 is created based on this rotation axis; and surface texture information is created based on this three-dimensional shape information and the two-dimensional images at each rotation angle. The following effects are thereby obtained.

【0047】First, unlike the conventional slit light projection method, three-dimensional shape information and surface texture information can be obtained simultaneously by rotating and photographing the subject 2, without using a special light source such as laser slit light.

【0048】Furthermore, accurate three-dimensional feature points alone can be extracted, and accurate surface texture information obtained, while avoiding the drawbacks of the conventional three-dimensional boarding method: failure to extract three-dimensional feature points whose accumulated values fall below the threshold, reduced accuracy caused by the difficulty of obtaining accurate camera position information, the accuracy limit imposed by using a quantized voxel space, and the false three-dimensional feature points that arise when all feature points are processed simultaneously.

【0049】In the embodiment described above, the feature point detection unit 11 partitions the two-dimensional image at each rotation angle output from the imaging and angle control unit 10 into regions of 8 pixels each; however, the image may be divided into regions in other ways, and the feature points may be extracted by a method other than the sum of squares.
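The 8-pixel partitioning can be sketched as follows, scoring each 8×8 block by a sum of squares; squared deviations from the block mean are used here as one plausible reading, since this passage does not pin down the exact quantity squared:

```python
import numpy as np

def block_scores(image, block=8):
    """Partition a grayscale image into block x block regions and score
    each by the sum of squared deviations from the block mean."""
    h, w = image.shape
    scores = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            region = image[y:y + block, x:x + block].astype(float)
            scores[(y, x)] = float(((region - region.mean()) ** 2).sum())
    return scores

img = np.zeros((16, 16))
img[4, 4] = 255.0            # a single bright pixel in the top-left block
s = block_scores(img)
best = max(s, key=s.get)     # the only textured block scores highest
```
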

【0050】

【発明の効果】[Effects of the Invention] As described above, according to the present invention, claims 1 and 2 make it possible to obtain accurate three-dimensional shape information of a subject and its surface texture information simultaneously while avoiding the adverse effects on the subject of using a special light source, and to greatly simplify the device configuration, substantially reducing its manufacturing cost.

[Brief Description of the Drawings]

FIG. 1 is a configuration diagram showing an embodiment of the three-dimensional object data acquisition device according to the present invention.

FIG. 2 is a block diagram showing an example functional configuration of the computer shown in FIG. 1.

FIG. 3 is a schematic diagram showing an example format of the two-dimensional image output from the imaging and angle control unit shown in FIG. 1.

FIG. 4 is a schematic diagram showing an example of the rotation axis search processing of the rotation axis search unit shown in FIG. 1.

FIG. 5 is a schematic diagram showing an example of the rotation axis search processing of the rotation axis search unit shown in FIG. 1.

FIG. 6 is a schematic diagram showing an example of the reprojection processing of the reprojection unit shown in FIG. 1.

FIG. 7 is a configuration diagram for explaining the slit light projection method, one of the conventionally known active measurement methods for acquiring three-dimensional shape information of a three-dimensional object.

FIG. 8 is a configuration diagram for explaining the three-dimensional boarding method, one of the conventionally known passive measurement methods for acquiring three-dimensional shape information of a three-dimensional object.

[Explanation of Reference Numerals]

1 three-dimensional object data acquisition device
2 subject
3 turntable (rotation drive mechanism)
4 stepping motor (rotation drive mechanism)
5 video camera (imaging mechanism)
6 computer
10 imaging and angle control unit (rotation drive mechanism, imaging mechanism)
11 feature point detection unit (three-dimensional shape information acquisition processing unit)
12 motion detection unit (three-dimensional shape information acquisition processing unit)
13 rotation axis search unit (three-dimensional shape information acquisition processing unit)
14 reprojection unit (three-dimensional shape information acquisition processing unit)
15 texture geometric transformation and averaging unit (surface texture information acquisition processing unit)

Claims (2)

[Claims]

[Claim 1] A three-dimensional object data acquisition device comprising: a rotation drive mechanism for rotating a subject in units of a predetermined angle; an imaging mechanism for photographing the subject and outputting a two-dimensional image at each rotation angle to which the rotation drive mechanism rotates the subject; a three-dimensional shape information acquisition processing unit which, based on the two-dimensional images at each rotation angle obtained by the imaging mechanism and on the rotation angle of the rotation drive mechanism, detects the movement of feature points on the surface of the subject, searches for the rotation axis of the subject, and creates three-dimensional shape information of the subject based on this rotation axis and the feature points; and a surface texture information acquisition processing unit which creates surface texture information based on the three-dimensional shape information of the subject obtained by the three-dimensional shape information acquisition processing unit and on the two-dimensional images at each rotation angle obtained by the imaging mechanism.
[Claim 2] The three-dimensional object data acquisition device according to claim 1, wherein the three-dimensional shape information acquisition processing unit extracts feature points from the two-dimensional image at each rotation angle, detects the movement of each feature point to detect the rotation axis corresponding to each feature point, and then arranges these rotation axes along a straight line to determine the position of each feature point in three-dimensional space and create the three-dimensional shape information of the subject; and the surface texture information acquisition processing unit detects, based on the three-dimensional shape information of the subject, the rotation angle at which a surface region enclosed by the feature points directly faces the imaging mechanism, and creates surface texture information based on the pattern of the two-dimensional image at that rotation angle.
JP14323494A 1994-06-24 1994-06-24 3D object data acquisition device Expired - Fee Related JP3352535B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP14323494A JP3352535B2 (en) 1994-06-24 1994-06-24 3D object data acquisition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP14323494A JP3352535B2 (en) 1994-06-24 1994-06-24 3D object data acquisition device

Publications (2)

Publication Number Publication Date
JPH0814858A true JPH0814858A (en) 1996-01-19
JP3352535B2 JP3352535B2 (en) 2002-12-03

Family

ID=15334025

Family Applications (1)

Application Number Title Priority Date Filing Date
JP14323494A Expired - Fee Related JP3352535B2 (en) 1994-06-24 1994-06-24 3D object data acquisition device

Country Status (1)

Country Link
JP (1) JP3352535B2 (en)


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6847371B2 (en) 1996-08-29 2005-01-25 Sanyo Electric Co., Ltd. Texture information assignment method, object extraction method, three-dimensional model generating method, and apparatus thereof
US6356272B1 (en) 1996-08-29 2002-03-12 Sanyo Electric Co., Ltd. Texture information giving method, object extracting method, three-dimensional model generating method and apparatus for the same
CN1131495C (en) * 1996-08-29 2003-12-17 三洋电机株式会社 Texture information giving method, object extracting method, three-D model generating method and apparatus for same
WO1998009253A1 (en) * 1996-08-29 1998-03-05 Sanyo Electric Co., Ltd. Texture information giving method, object extracting method, three-dimensional model generating method and apparatus for the same
US7106348B2 (en) 1996-08-29 2006-09-12 Sanyo Electric Co., Ltd. Texture information assignment method, object extraction method, three-dimensional model generating method, and apparatus thereof
WO2000004506A1 (en) * 1998-07-20 2000-01-27 Geometrix, Inc. Method and system for generating fully-textured 3-d models
US6999073B1 (en) 1998-07-20 2006-02-14 Geometrix, Inc. Method and system for generating fully-textured 3D
KR20010096556A (en) * 2000-03-01 2001-11-07 정재문 3D imaging equipment and method
JP2004318791A (en) * 2003-04-16 2004-11-11 Ryuichi Yokota Electronic commerce support system and method
EP2680594A1 (en) * 2011-02-24 2014-01-01 Kyocera Corporation Electronic apparatus, image display method and image display program
EP2680594A4 (en) * 2011-02-24 2014-07-09 Kyocera Corp Electronic apparatus, image display method and image display program
CN109831620A (en) * 2018-12-29 2019-05-31 上海与德通讯技术有限公司 A kind of image acquisition method, device, electronic equipment and storage medium
CN109737885A (en) * 2019-02-28 2019-05-10 沈阳航空航天大学 A kind of deformation quantity measuring method of composite material parts

Also Published As

Publication number Publication date
JP3352535B2 (en) 2002-12-03

Similar Documents

Publication Publication Date Title
US8121352B2 (en) Fast three dimensional recovery method and apparatus
JP5480914B2 (en) Point cloud data processing device, point cloud data processing method, and point cloud data processing program
US6917702B2 (en) Calibration of multiple cameras for a turntable-based 3D scanner
JP2919284B2 (en) Object recognition method
US9816809B2 (en) 3-D scanning and positioning system
US20160086343A1 (en) Contour line measurement apparatus and robot system
JP4419570B2 (en) 3D image photographing apparatus and method
JP7353757B2 (en) Methods for measuring artifacts
JPH11166818A (en) Calibrating method and device for three-dimensional shape measuring device
CN112254670A (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
JPH0814858A (en) Data acquisition device for three-dimensional object
JP2004280776A (en) Method for determining shape of object in image
Takatsuka et al. Low-cost interactive active monocular range finder
US7046839B1 (en) Techniques for photogrammetric systems
CN112435080A (en) Virtual garment manufacturing equipment based on human body three-dimensional information
KR20050061115A (en) Apparatus and method for separating object motion from camera motion
CN112254638B (en) Intelligent visual 3D information acquisition equipment that every single move was adjusted
CN112253913B (en) Intelligent visual 3D information acquisition equipment deviating from rotation center
JPH0766436B2 (en) 3D model construction device using continuous silhouette images
CN112254678B (en) Indoor 3D information acquisition equipment and method
JPH0875454A (en) Range finding device
JP2003067726A (en) Solid model generation system and method
JPH11194027A (en) Three-dimensional coordinate measuring instrument
Uyanik et al. A method for determining 3D surface points of objects by a single camera and rotary stage
CN113689397A (en) Workpiece circular hole feature detection method and workpiece circular hole feature detection device

Legal Events

Date Code Title Description
R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090920

Year of fee payment: 7

LAPS Cancellation because of no payment of annual fees