JPH05215531A - Three-dimensional-body identifying and processing method - Google Patents

Three-dimensional-body identifying and processing method

Info

Publication number
JPH05215531A
JPH05215531A (application JP4022523A)
Authority
JP
Japan
Prior art keywords
face
dimensional object
dimensional
feature
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP4022523A
Other languages
Japanese (ja)
Inventor
Nobuhiko Masui
信彦 増井
Shigeru Akamatsu
茂 赤松
Yasuhito Suenaga
康仁 末永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP4022523A priority Critical patent/JPH05215531A/en
Publication of JPH05215531A publication Critical patent/JPH05215531A/en
Pending legal-status Critical Current

Links

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To extract more stable features by identifying an object using three-dimensional information, such as the normals of the face surface, as features that are invariant to changes in viewpoint.

CONSTITUTION: An input means 1 applies a suitable conversion to three-dimensional shape data of a person's face obtained by three-dimensional measurement and sends the result to a reference-point extracting means 2. The means 2 examines the curvature of the face surface and extracts points of maximum curvature, such as the nose tip and the ear-hole points, as reference points. A reference-attitude computing means 3 computes the midpoint between the left and right ear-hole points as the reference position and the vector connecting the reference position and the nose tip as the reference direction. A reference-attitude converting means 4 transforms the data into the X-Y-Z coordinate system. The position of a matching region is then computed by a matching-region-position computing means 5, and the required region is cut out by a region cutting-out means 6. A feature extracting means 7 performs feature-extraction processing on the data and forms a matching pattern. A matching means 8 compares the pattern with a standard pattern in an identification dictionary file 9 and expresses the similarity measure between the two patterns as a numeric value. The value is judged by a judging means 10, and the result is output.

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION 1. Field of the Invention: The present invention relates to a three-dimensional object identification processing method that, for three-dimensional objects whose multiple target categories are defined by differences in shape, uses the surface-shape information of an object to identify the class to which it belongs.

[0002] In the following, a person's face is taken up as the main example of an object to be identified that has the above properties, and application to personal identification based on face shape is described. It goes without saying, however, that the present invention is broadly applicable to the identification of various three-dimensional objects.

[0003]

2. Description of the Related Art: Conventionally, the identification of a person's face has relied on features obtained from a two-dimensional grayscale image of the face. One approach uses the grayscale image itself as the feature; see, for example, Shin Kosugi, "Identification and Feature Extraction of Face Images Using Neural Networks," IPSJ SIG Technical Report, Vol. 91, No. 56, pp. 9-16, which identifies faces using a normalized two-dimensional grayscale image of the face region. Another approach uses two-dimensional positions and shapes, such as those of facial features, obtained from the grayscale image; see, for example, Eiichi Hagiwara and Isao Masuda, "Personal Identification by Face Images Based Mainly on Pattern Matching," IEICE Technical Report, Vol. 88, No. 112, pp. 53-60, which identifies faces using measurements such as the lengths of facial features obtained from the two-dimensional grayscale image.

[0004] However, these methods are easily affected by the orientation and tilt of the face, by illumination, and so on, so a consistent two-dimensional grayscale image cannot be obtained. Furthermore, information such as the unevenness and depth of the face cannot be used sufficiently, which is inadequate for expressing the subtle shape of the face. As a result, identification often fails.

[0005] In contrast, there is a method that uses, as the feature, a range image obtained from the three-dimensional shape information of the face; see, for example, Nobuhiko Masui, Shigeru Akamatsu, and Yasuhito Suenaga, "A Fundamental Study of Face Image Recognition by 3D Measurement," ITE Technical Report, Vol. 14, No. 36, pp. 7-12. This approach focuses more directly on the distribution of the three-dimensional shape over the face surface, identifying the face using the range image of the frontal view, a feature that represents the global three-dimensional shape of the face. This method is therefore unaffected by illumination and can express facial features such as unevenness and depth well.

[0006]

[Problems to be Solved by the Invention] However, even this method is affected by variations in the input conditions, namely the orientation and tilt of the face, that is, the posture of the person's head, so the stability of the features' discriminating power is still insufficient. As a result, identification often fails.

[0007] An object of the present invention is to solve, in a method that focuses on the range image of a face, a feature representing the global three-dimensional shape of the face, the problem of how to accurately determine the reference direction of the face used to create the range image, that is, how to accurately normalize the viewpoint.

[0008]

[Means for Solving the Problems] To solve the above problems, three-dimensional information of the face that is invariant to changes in input conditions is used as the feature. Specifically, identification is performed using the normals of the face surface, three-dimensional information that is unaffected by changes in viewpoint.

[0009]

[Operation] According to the above means, when personal identification is performed using a person's face shape, the feature used for identification is not the conventionally used two-dimensional grayscale image or range image but the normals obtained from the three-dimensional shape. This makes it possible to extract more stable features and to express the subtle three-dimensional shape of the face, so personal identification by face shape can be performed stably. Moreover, because the features are converted from three-dimensional shape data into normal data, the amount of information is compressed.

[0010]

[Embodiment] An embodiment of the present invention is specifically described below with reference to the drawings. In all drawings used to explain the embodiment, parts having the same function are given the same reference numeral, and repeated description is omitted.

[0011] FIG. 1 is a block diagram showing the schematic configuration of an embodiment of the three-dimensional object identification method of the present invention, and FIG. 2 is a block diagram showing the configuration of the functional system of the feature extraction means of FIG. 1.

[0012] In FIG. 1, 1 is an input means, 2 a reference-point extraction means, 3 a reference-posture calculation means, 4 a reference-posture conversion means, 5 a matching-region position calculation means, 6 a matching-region cutout means, 7 a feature extraction means, 8 a matching processing means, 9 an identification dictionary file, 10 a judgment processing means, and 11 a control means that manages the progress of the overall processing.

[0013] The reference-point extraction means 2, the reference-posture calculation means 3, ..., and the control means 11 are components that can also be built within a single computer. As shown in FIG. 2, the feature extraction means 7 consists of a feature conversion unit 71 and a normal-distribution extraction unit 72.

[0014] In the input means 1, three-dimensional shape data of a person's face are input using a three-dimensional measuring device, converted into a format suited to the subsequent processing, and sent to the reference-point extraction means 2 and the reference-posture conversion means 4.

[0015] To measure the surface shape of a three-dimensional object, the methods described in, for example, Seiji Iguchi, "The Current State of Three-dimensional Sensing Technology," Image Lab, Vol. 1, No. 4, pp. 44-47, can be used.

[0016] The reference-point extraction means 2 extracts, from the three-dimensional shape data, reference points for calculating the reference posture and reference points for calculating the matching region. The reference points of the face are extracted, for example, by examining changes in the curvature of the face surface. Specifically, the point near the center of the front of the face where the absolute value of the curvature is largest is extracted as the nose tip, and the point near the center of each side of the face where the absolute value of the curvature is largest is extracted as an ear-hole point.
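The curvature-extremum search described above can be sketched as follows. The patent does not specify a curvature estimator, so this sketch substitutes the discrete Laplacian of a depth map as a crude stand-in for surface curvature; the 5x5 depth grid, the function name, and the search window are all illustrative assumptions.

```python
def laplacian_peak(depth, region):
    """Return the grid point in `region` where the discrete Laplacian of the
    depth map has the largest absolute value -- a rough stand-in for the
    curvature-extremum search the patent describes (nose tip, ear holes)."""
    best, best_val = None, -1.0
    for (x, y) in region:
        lap = (depth[y][x - 1] + depth[y][x + 1] +
               depth[y - 1][x] + depth[y + 1][x] - 4.0 * depth[y][x])
        if abs(lap) > best_val:
            best, best_val = (x, y), abs(lap)
    return best

# Toy 5x5 depth map with a sharp bump at (2, 2) standing in for the nose tip.
depth = [[0.0] * 5 for _ in range(5)]
depth[2][2] = 1.0
interior = [(x, y) for x in range(1, 4) for y in range(1, 4)]
print(laplacian_peak(depth, interior))  # (2, 2)
```

On real range data the search window would be restricted to the central front region of the face (for the nose tip) or the central side regions (for the ear-hole points), as the paragraph above describes.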

[0017] The reference-posture calculation means 3 calculates the reference posture using the reference points obtained by the reference-point extraction means 2. The reference posture is calculated as a reference position and a reference direction based on the reference points (the nose tip and the left and right ear-hole points), for example as shown in FIG. 3. Specifically, the midpoint C of the left and right ear-hole points L and R is taken as the reference position, and the vector CN connecting this midpoint with the nose tip N is taken as the reference direction.
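This reference-posture computation is a direct construction from the three reference points; a minimal sketch, with the coordinate values of L, R, and N made up for illustration:

```python
def reference_pose(L, R, N):
    """Reference position = midpoint C of the ear-hole points L and R;
    reference direction = unit vector from C toward the nose tip N,
    as in paragraph [0017]."""
    C = tuple((l + r) / 2.0 for l, r in zip(L, R))
    CN = tuple(n - c for n, c in zip(N, C))
    norm = sum(v * v for v in CN) ** 0.5
    return C, tuple(v / norm for v in CN)

# Illustrative points: ear holes on the x axis, nose tip ahead of the face.
C, d = reference_pose(L=(-7.0, 0.0, 0.0), R=(7.0, 0.0, 0.0), N=(0.0, 1.0, 10.0))
print(C)  # (0.0, 0.0, 0.0)
```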

[0018] The reference-posture conversion means 4 creates reference-posture data from the data input by the input means 1, using the reference posture calculated by the reference-posture calculation means 3. The conversion to the reference posture is performed, for example, by translating the data so that the reference position C coincides with the origin of the XYZ coordinate system and rotating them so that the reference direction vector CN coincides with the positive Z-axis direction. The result is the reference-posture data.
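The translate-then-rotate normalization can be sketched with a Rodrigues rotation. The patent does not give the rotation construction, so the axis-angle choice here (rotating about CN x Z) is one standard way to realize it; the sketch assumes CN is not anti-parallel to +Z.

```python
import math

def pose_normalize(points, C, CN):
    """Translate so the reference position C maps to the origin, then rotate
    so the reference direction CN maps onto +Z (paragraph [0018]).
    Illustrative sketch, not the patent's own code."""
    n = math.sqrt(sum(v * v for v in CN))
    a = [v / n for v in CN]                       # unit reference direction
    c = a[2]                                      # cos(theta) = a . z
    ax = [a[1], -a[0], 0.0]                       # rotation axis a x z
    s = math.hypot(ax[0], ax[1])                  # sin(theta)
    if s < 1e-12:                                 # already aligned with +Z
        R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    else:
        k = [v / s for v in ax]                   # unit rotation axis
        K = [[0.0, -k[2], k[1]],
             [k[2], 0.0, -k[0]],
             [-k[1], k[0], 0.0]]
        K2 = [[sum(K[i][t] * K[t][j] for t in range(3)) for j in range(3)]
              for i in range(3)]
        # Rodrigues: R = I + sin(theta) K + (1 - cos(theta)) K^2
        R = [[(1.0 if i == j else 0.0) + s * K[i][j] + (1.0 - c) * K2[i][j]
              for j in range(3)] for i in range(3)]
    out = []
    for p in points:
        t = [p[i] - C[i] for i in range(3)]       # move C to the origin
        out.append(tuple(sum(R[i][j] * t[j] for j in range(3))
                         for i in range(3)))
    return out

# A point one unit ahead of C along CN should land on the positive Z axis.
print(pose_normalize([(2.0, 0.0, 0.0)], C=(1.0, 0.0, 0.0), CN=(1.0, 0.0, 0.0)))
```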

[0019] The matching-region position calculation means 5 calculates, for the reference-posture data created by the reference-posture conversion means 4, the position of the matching region required for matching, using the reference points obtained by the reference-point extraction means 2.

[0020] The position of the matching region is calculated based on the reference points (the nose tip and the left and right ear-hole points), for example as shown in FIG. 4. Specifically, the procedure is as follows.
(1) Obtain the average M of the distances between the nose tip and the left and right ear-hole points.
(2) Take the region whose distance from the nose tip is a constant multiple of M (αM) as the matching region.
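Steps (1) and (2) above translate directly into code; the value of the constant α is not given in the patent, so the 0.8 below is an arbitrary illustrative choice, and the point coordinates are made up:

```python
def matching_region(points, N, L, R, alpha=0.8):
    """M = mean nose-to-ear-hole distance; the matching region is every
    surface point within alpha*M of the nose tip N (paragraph [0020]).
    alpha is an illustrative constant, not a value from the patent."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    M = (dist(N, L) + dist(N, R)) / 2.0
    return [p for p in points if dist(p, N) <= alpha * M]

N, L, R = (0.0, 0.0, 10.0), (-7.0, 0.0, 0.0), (7.0, 0.0, 0.0)
pts = [(0.0, 0.0, 9.0), (0.0, 0.0, -5.0)]
print(matching_region(pts, N, L, R))  # [(0.0, 0.0, 9.0)]
```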

[0021] The matching-region cutout means 6 cuts out, from the reference-posture data created by the reference-posture conversion means 4, the region required for matching, according to the position information of the matching region calculated by the matching-region position calculation means 5.

[0022] The feature extraction means 7 extracts, from the three-dimensional shape data for matching cut out by the matching-region cutout means 6, the features to be compared at matching time, generating a matching pattern.

[0023] This processing is explained with reference to FIG. 2. For the three-dimensional shape data sent to the feature extraction means 7, the feature conversion unit 71 approximates each surface patch of arbitrary size (it may of course be a local curved surface) by a plane and obtains the unit normal vector of that plane and its area. To approximate a curved surface by a plane, at least three points on the surface are used to determine the plane passing through them, and the surface is approximated by that plane. The "unit normal vector of the plane" above is given by considering the normal of the approximating plane. From this result, the normal-distribution extraction unit 72 obtains the normal distribution, in which the magnitude of each unit normal vector is represented by the area of its plane, that is, a set of vectors whose magnitudes are proportional to the areas. The three-dimensional shape data are thus feature-converted into a normal distribution and sent to the matching processing means 8.
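Taking the smallest case of the plane-through-three-points approximation, where each local patch is a triangle of surface points, the unit normal and area follow from a cross product; a minimal sketch of what units 71 and 72 compute:

```python
def normal_distribution(triangles):
    """Paragraph [0023] as a sketch: each local patch (here, a triangle of
    three surface points) is approximated by its plane; the feature is the
    set of unit plane normals paired with patch areas."""
    feats = []
    for p0, p1, p2 in triangles:
        u = [p1[i] - p0[i] for i in range(3)]
        v = [p2[i] - p0[i] for i in range(3)]
        cx = (u[1] * v[2] - u[2] * v[1],          # u x v: plane normal
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0])
        mag = sum(c * c for c in cx) ** 0.5
        area = mag / 2.0                          # triangle area
        n = tuple(c / mag for c in cx)            # unit normal of the plane
        feats.append((n, area))
    return feats

# One flat patch in the XY plane: normal +Z, area 1/2.
tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(normal_distribution(tri))  # [((0.0, 0.0, 1.0), 0.5)]
```

Storing only (unit normal, area) pairs per patch instead of the full point cloud is what gives the information compression mentioned later in the text.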

[0024] Next, the matching processing means 8 matches the feature pattern consisting of the features extracted by the feature extraction means 7 against the standard patterns in the identification dictionary file 9, prepared in advance by applying the processing up to the feature extraction means 7 and registering the results, and the similarity measure between the two is expressed as a numeric value.
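The patent only states that matching yields a numeric similarity measure, without fixing the formula. As one plausible stand-in, the sketch below averages the area-weighted cosine between corresponding unit normals of the input pattern and a dictionary pattern; both the measure and the pairing of patches are assumptions for illustration.

```python
def similarity(feats_a, feats_b):
    """Hypothetical similarity between two normal distributions: the
    area-weighted mean cosine of corresponding unit normals.  1.0 means
    identical orientation everywhere; the patent does not specify this."""
    num = den = 0.0
    for (na, wa), (nb, wb) in zip(feats_a, feats_b):
        w = (wa + wb) / 2.0
        num += w * sum(x * y for x, y in zip(na, nb))
        den += w
    return num / den

a = [((0.0, 0.0, 1.0), 0.5)]
print(similarity(a, a))  # 1.0
```

The judgment means 10 would then threshold this value, as paragraph [0025] describes.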

[0025] The judgment processing means 10 makes a judgment on the set of similarity measures, computed by the matching processing means 8, between the input pattern and the standard pattern of each category, for example by threshold processing with a value best suited to the intended application, and the result is output.

[0026] As the above description shows, according to the present invention, personal identification using a person's face shape uses not a two-dimensional grayscale image but the three-dimensional shape information of the normals of the face surface. It is therefore unaffected by the orientation and tilt of the face, by illumination, and so on, and information such as the unevenness and depth of the face can be used fully, so the subtle shape of the face can be expressed. Stable features can thus be extracted, and personal identification by face shape can be performed stably. Furthermore, because the features are converted into normal data rather than using the three-dimensional shape data as they are, the amount of information is compressed.

[0027] The present invention has been specifically described above based on an embodiment of an identification system whose identification target is mainly a person's face, but the invention is not limited to this embodiment; needless to say, various modifications are possible without departing from its gist.

[0028]

[Effects of the Invention] As described above, according to the present invention, stable feature extraction can be performed because three-dimensional shape information is used as the feature of, for example, face data, and the three-dimensional shape of the face can be expressed, so personal identification by face shape can be performed stably. Furthermore, because the features are converted into normal data rather than using the three-dimensional shape data as they are, the amount of data is compressed.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing the schematic configuration of an embodiment of the three-dimensional object identification method of the present invention.

FIG. 2 is a block diagram showing the configuration of the functional system of the feature extraction means of FIG. 1.

FIG. 3 is a diagram showing an example of a method of obtaining the reference posture of a face.

FIG. 4 is a diagram showing an example of a method of calculating the matching region of a face.

[Explanation of Symbols]

1 input means
2 reference-point extraction means
3 reference-posture calculation means
4 reference-posture conversion means
5 matching-region position calculation means
6 matching-region cutout means
7 feature extraction means
8 matching processing means
9 identification dictionary file
10 judgment processing means
11 control means
71 feature conversion unit
72 normal-distribution extraction unit

Claims (2)

[Claims]

Claim 1. A three-dimensional object identification processing method characterized by identifying a three-dimensional object using: an input means for inputting the surface shape of a target three-dimensional object measured by a three-dimensional measuring device; a reference-point extraction means for extracting reference points used to obtain the reference posture and matching region of the input three-dimensional object; a reference-posture calculation means for obtaining the reference posture of the three-dimensional object based on the extracted reference points; a reference-posture conversion means for moving the input three-dimensional object into the reference posture; a matching-region position calculation means for calculating the position of the matching region based on the extracted reference points; a matching-region cutout means for cutting out, based on the calculation result, the region required for matching from the three-dimensional object in the reference posture; a feature extraction means for extracting from the matching region the feature pattern required for matching; a matching processing means for matching the extracted feature pattern against a standard pattern prepared in advance; a judgment processing means for judging whether the matching result is valid; and a control means for linking and controlling the processing units.
Claim 2. The three-dimensional object identification processing method according to claim 1, wherein the feature extraction means comprises a feature conversion unit that approximates a curved surface of arbitrary size by a plane and obtains the unit normal vector and area of that plane, and a normal-distribution extraction unit that obtains a normal distribution based on the result.
JP4022523A 1992-02-07 1992-02-07 Three-dimensional-body identifying and processing method Pending JPH05215531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP4022523A JPH05215531A (en) 1992-02-07 1992-02-07 Three-dimensional-body identifying and processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP4022523A JPH05215531A (en) 1992-02-07 1992-02-07 Three-dimensional-body identifying and processing method

Publications (1)

Publication Number Publication Date
JPH05215531A true JPH05215531A (en) 1993-08-24

Family

ID=12085147

Family Applications (1)

Application Number Title Priority Date Filing Date
JP4022523A Pending JPH05215531A (en) 1992-02-07 1992-02-07 Three-dimensional-body identifying and processing method

Country Status (1)

Country Link
JP (1) JPH05215531A (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07296299A (en) * 1994-04-20 1995-11-10 Nissan Motor Co Ltd Image processor and warning device against doze at the wheel using the same
JPH11219421A (en) * 1998-01-30 1999-08-10 Toshiba Corp Image recognizing device and method therefor
JP2003518294A (en) * 1999-12-22 2003-06-03 ナショナル・リサーチ・カウンシル・オブ・カナダ 3D image search method
KR100474837B1 (en) * 2000-09-25 2005-03-08 삼성전자주식회사 Apparatus and method for normalization and feature extraction of 3-Dimension facial data
CN1319013C (en) * 2005-03-16 2007-05-30 沈阳工业大学 Combined recognising method for man face and ear characteristics
CN1319014C (en) * 2005-03-16 2007-05-30 沈阳工业大学 Personal identity recognising method based on pinna geometric parameter
JP2007026073A (en) * 2005-07-15 2007-02-01 National Univ Corp Shizuoka Univ Face posture detection system
JP4501003B2 (en) * 2005-07-15 2010-07-14 国立大学法人静岡大学 Face posture detection system
JP2014075765A (en) * 2012-10-05 2014-04-24 Mitsubishi Heavy Ind Ltd Monitoring device and monitoring method
JP2017054304A (en) * 2015-09-09 2017-03-16 株式会社東芝 Identification apparatus and authentication system
US9858471B2 (en) 2015-09-09 2018-01-02 Kabushiki Kaisha Toshiba Identification apparatus and authentication system

Similar Documents

Publication Publication Date Title
US6819782B1 (en) Device and method for recognizing hand shape and position, and recording medium having program for carrying out the method recorded thereon
Tanaka et al. Curvature-based face surface recognition using spherical correlation. principal directions for curved object recognition
US9372546B2 (en) Hand pointing estimation for human computer interaction
KR101588254B1 (en) Improvements in or relating to three dimensional close interactions
US6430307B1 (en) Feature extraction system and face image recognition system
US20160253807A1 (en) Method and System for Determining 3D Object Poses and Landmark Points using Surface Patches
JP2017016192A (en) Three-dimensional object detection apparatus and three-dimensional object authentication apparatus
CN107463865B (en) Face detection model training method, face detection method and device
WO2007062478A1 (en) Visual tracking of eye glasses in visual head and eye tracking systems
WO2021084677A1 (en) Image processing device, image processing method, and non-transitory computer-readable medium having image processing program stored thereon
JP2004133889A (en) Method and system for recognizing image object
KR20070026080A (en) Image processing apparatus and method, and program
Arbeiter et al. Evaluation of 3D feature descriptors for classification of surface geometries in point clouds
CN110895683B (en) Kinect-based single-viewpoint gesture and posture recognition method
CN111553284A (en) Face image processing method and device, computer equipment and storage medium
CN110796101A (en) Face recognition method and system of embedded platform
JPH05215531A (en) Three-dimensional-body identifying and processing method
CN114894337B (en) Temperature measurement method and device for outdoor face recognition
CN110569775A (en) Method, system, storage medium and electronic device for recognizing human body posture
KR20020022295A (en) Device And Method For Face Recognition Using 3 Dimensional Shape Information
JP2008123216A (en) Authentication system and method
CN108090476A (en) It is a kind of to be directed to the external 3D face identification methods blocked
JPH05108804A (en) Identifying method and executing device for three-dimensional object
CN104331412A (en) Method for carrying out face retrieval in normalized three-dimension face database
CN116386118B (en) Drama matching cosmetic system and method based on human image recognition