JP2001307098A - Individual attribute estimation method and individual attribute estimation device - Google Patents

Individual attribute estimation method and individual attribute estimation device

Info

Publication number
JP2001307098A
JP2001307098A (application number JP2000117305A)
Authority
JP
Japan
Prior art keywords
person
attribute
feature vector
individual
moving image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2000117305A
Other languages
Japanese (ja)
Inventor
Atsushi Miyama
篤 深山
Minako Sawaki
美奈子 澤木
Hiroshi Murase
洋 村瀬
Norihiro Hagita
紀博 萩田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP2000117305A priority Critical patent/JP2001307098A/en
Publication of JP2001307098A publication Critical patent/JP2001307098A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: Estimating a person's attributes from a feature vector of a face image or the like imposes many restrictions on shooting conditions, and estimating them from the pressure distribution of the person's shoe soles, footsteps, and the like restricts where the person may walk. SOLUTION: In preprocessing 1, the background is removed from each input moving-image frame. In person-region extraction processing 2, a figure circumscribing the person region is obtained from each frame, and circumscribed-figure parameters defining the shape of the figure and its position within the frame are computed. In person-attribute feature-vector extraction processing 3, the walking-motion characteristics and walking time-series characteristics of the person in the input moving image are computed from the time series of the circumscribed-figure parameters of the frames, and a person-attribute feature vector representing the person's attributes is computed from those characteristics. In identification processing 4, the person's attributes are estimated by collating the person-attribute feature vector with reference vectors prepared from learning moving images.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

1. Field of the Invention. The present invention relates to a person-attribute estimation method and apparatus for estimating attributes of a person in walking motion, such as age and gender, from a moving image.

[0002]

2. Description of the Related Art. Apparatus of this type is used, for example, to monitor pedestrians in places where unspecified people pass, to survey their attributes, and to adapt machines operated by unspecified people to their users.

[0003] Many of the techniques for estimating a person's attributes, such as age and gender, from an image use a face image. In such prior art, the input face image is treated as a vector of per-pixel luminance values, a feature vector is extracted from the luminance-value vector by a method such as principal component analysis, and a similarity to a reference vector representing each attribute class is computed using the subspace method or the like (for example, Japanese Patent Application Laid-Open No. H11-175724).

[0004] Although it is not attribute estimation, in the field of person identification from face images a method has been proposed that extracts feature points such as the eyes and nose from the face images and computes a similarity from the distances between corresponding feature points of two face images (Japanese Patent Application Laid-Open No. H6-168317).

[0005] As a technique for estimating attributes such as age and gender without using a face image, a method has been proposed that captures the pressure distribution of the walking person's shoe soles with a pressure sensor, the footsteps with a microphone, and leg images with a camera, and integrates the features obtained from them to identify the attributes (Japanese Patent Application Laid-Open No. H7-160883).

[0006]

[Problems to Be Solved by the Invention] The face-image methods described above assume that a frontal image of the face can be captured. Although methods that correct for the tilt and rotation of the face have also been proposed, the face must in any case be photographed from a short distance, which is a problem.

[0007] Furthermore, because these methods use features that demand high precision in the input image, such as luminance-value vectors and feature-point vectors, they are vulnerable to variations in shooting conditions such as illumination.

[0008] In addition, because they use features that require complicated processing, the processing is time-consuming.

[0009] On the other hand, the method that extracts features from the walking motion for attribute estimation requires special equipment, and moreover the person whose attributes are to be estimated must be walking on the pressure sensor, so the range of places in which the person may walk is narrow.

[0010] An object of the present invention is to provide a person-attribute estimation method and apparatus that solve the above problems.

[0011]

[Means for Solving the Problems] To solve the above problems, the present invention uses a moving image of a person in walking motion, which permits shooting from a greater distance than face-image methods allow. By using a figure circumscribing the person region, it can handle coarse images and gains robustness against variations in shooting conditions. Extracting the circumscribed figure and then computing the person-attribute feature vector can be realized with simple processing, which shortens the processing time. In addition, no special sensor is required, and normalizing the person-attribute feature vector removes the effect of person-to-person variation in the distance to the camera, which widens the allowable range of places where the person may walk. The invention is characterized by the following method and apparatus.

[0012] (Person-attribute estimation method) A person-attribute estimation method that extracts a feature vector from an input moving image and collates it with a reference vector created in advance from learning images to estimate the attributes of a person contained in the input moving image, the method estimating the person's attributes through: a person-region extraction step of obtaining, in each frame of the input moving image, a figure circumscribing the person region and computing circumscribed-figure parameters that define the shape of the circumscribed figure and its position within the frame; a person-attribute feature-vector extraction step of computing, from the time series of the circumscribed-figure parameters obtained from the frames, the walking-motion characteristics and walking time-series characteristics of the person in the input moving image, and computing a person-attribute feature vector representing the person's attributes from the walking-motion characteristics alone or from both sets of characteristics; and an identification step of collating the person-attribute feature vector with a reference vector created from learning moving images.

[0013] The person-attribute feature-vector extraction step is further characterized in that the person-attribute feature vector is normalized with the circumscribed-figure parameters.

[0014] (Person-attribute estimation apparatus) A person-attribute estimation apparatus that extracts a feature vector from an input moving image and collates it with a reference vector created in advance from learning images to estimate the attributes of a person contained in the input moving image, the apparatus estimating the person's attributes with: person-region extraction means for obtaining, in each frame of the input moving image, a figure circumscribing the person region and computing circumscribed-figure parameters that define the shape of the circumscribed figure and its position within the frame; person-attribute feature-vector extraction means for computing, from the time series of the circumscribed-figure parameters obtained from the frames, the walking-motion characteristics and walking time-series characteristics of the person in the input moving image, and computing a person-attribute feature vector representing the person's attributes from the walking-motion characteristics alone or from both sets of characteristics; and identification means for collating the person-attribute feature vector with a reference vector created from learning moving images.

[0015] Further, the person-attribute feature-vector extraction means normalizes the person-attribute feature vector with the circumscribed-figure parameters.

[0016]

[Description of the Preferred Embodiments] In this embodiment, age group is used as the person attribute, and the system estimates whether a walking person in the input image is a child (6 to 12 years old), an adult (13 to 64 years old), or elderly (65 to 80 years old).

[0017] FIG. 1 is a processing-procedure diagram showing an embodiment of the present invention. In the figure, 1 denotes preprocessing, 2 person-region extraction processing, 3 person-attribute feature extraction processing, and 4 identification processing.

[0018] In preprocessing 1, the background is removed by applying threshold processing to each frame of the input moving image. In person-region extraction processing 2, a rectangle circumscribing the walking person region is extracted from each frame of the moving image. As shown in FIG. 2, this extraction yields the circumscribed-rectangle parameters w (width), h (height), and bx, by (the position on the frame's x and y coordinates).
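The patent gives only the outline of these two steps (threshold-based background removal, then one circumscribed rectangle per frame), so the following sketch is one plausible realization. The function name, the use of a stored background image, and the threshold value are assumptions, not part of the original.

```python
import numpy as np

def extract_bounding_box(frame, background, threshold=30):
    """Remove the background by thresholding the absolute difference
    against a reference background image, then return the parameters
    (w, h, bx, by) of the rectangle circumscribing the person region."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > threshold              # True where the person region is
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                      # no person region in this frame
    bx, by = xs.min(), ys.min()          # top-left corner in frame coordinates
    w = xs.max() - bx + 1                # width of the circumscribed rectangle
    h = ys.max() - by + 1                # height of the circumscribed rectangle
    return w, h, bx, by
```

Running this over every frame of the moving image yields the time series of circumscribed-rectangle parameters used in the next step.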

[0019] In person-attribute feature extraction processing 3, the walking-motion characteristics are obtained from the N sets of circumscribed-rectangle parameters w(t_i), h(t_i), bx(t_i), by(t_i) computed from the W frames in which a circumscribed rectangle was extracted, where t_i denotes the time at which frame i was captured and 1 ≤ i ≤ N. As an example, the stride L and the walking speed S are computed as follows.

[0020]

[Equation 1] (The expressions defining the stride L and the walking speed S appear as an image in the original publication and are not reproduced here.)

[0021] where m (1 ≤ m ≤ M) are the integers that maximize w(t_m), and x(t_i) = (bx(t_i), by(t_i)). L and S are then each normalized by H, the mean of h(t_i), and the vector whose elements are (L/H, S/H) standardized with the mean and variance obtained from the learning moving images is taken as the person-attribute feature vector of the photographed person.
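Because Equation 1 is not reproduced, the formulas in this sketch are assumptions: it treats frames where w(t) is locally maximal as step instants (the rectangle is widest at full leg extension), takes the stride L as the mean displacement of x(t_i) between successive step instants, takes the speed S as the overall displacement per unit time, and normalizes by the mean height H as the text describes. All function and variable names are hypothetical.

```python
import numpy as np

def gait_features(w, h, bx, by, t):
    """Compute the (L/H, S/H) feature vector from the circumscribed-rectangle
    time series w(t_i), h(t_i), bx(t_i), by(t_i)."""
    x = np.stack([bx, by], axis=1).astype(float)   # x(t_i) = (bx(t_i), by(t_i))
    # indices m where w(t_m) is a local maximum: assumed step instants
    m = [i for i in range(1, len(w) - 1) if w[i] >= w[i - 1] and w[i] >= w[i + 1]]
    if len(m) < 2:
        raise ValueError("need at least two step instants to estimate a stride")
    steps = [np.linalg.norm(x[m[j + 1]] - x[m[j]]) for j in range(len(m) - 1)]
    L = float(np.mean(steps))                            # stride (pixels)
    S = np.linalg.norm(x[-1] - x[0]) / (t[-1] - t[0])    # speed (pixels per unit time)
    H = float(np.mean(h))                                # mean rectangle height
    return np.array([L / H, S / H])                      # normalized feature vector

def standardize(features, mean, std):
    """Standardize with the mean and standard deviation estimated
    from the learning moving images."""
    return (features - mean) / std
```

Dividing by H removes the scale effect of the person's distance from the camera, which is what widens the allowable walking area.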

[0022] In identification processing 4, identification is performed by a known identification method. In this embodiment, the k-NN (k-Nearest Neighbor) method classifies the person-attribute feature vector of the input image into an age group. The k-NN method performs identification from a test data vector and learning data vectors; the attributes contained in the moving image of the person are therefore estimated by collating the person-attribute feature vector described above with reference vectors created from the learning images.

[0023] FIG. 4 plots, with L/H and S/H as coordinates, the person-attribute feature vectors obtained from walking-motion images of 15 men with the age distribution shown in FIG. 3. A leave-one-out estimation experiment on these data showed that, for the images used in this embodiment, accuracy was highest when the k-NN parameter k was 5, yielding a recognition rate of 76%.
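The leave-one-out experiment can be sketched as follows: each sample is classified by k-NN against all the remaining samples, and the recognition rate is the fraction classified correctly. This is a generic sketch of the protocol, not the authors' code.

```python
import numpy as np

def leave_one_out_accuracy(vectors, labels, k=5):
    """Leave-one-out estimate of the recognition rate: each sample is
    held out in turn and classified by k-NN against the others."""
    correct = 0
    n = len(vectors)
    for i in range(n):
        train = np.delete(vectors, i, axis=0)            # all samples except i
        train_labels = [labels[j] for j in range(n) if j != i]
        dists = np.linalg.norm(train - vectors[i], axis=1)
        nearest = np.argsort(dists)[:k]
        votes = [train_labels[j] for j in nearest]
        if max(set(votes), key=votes.count) == labels[i]:
            correct += 1
    return correct / n
```

Sweeping k over a small range and keeping the value with the highest leave-one-out accuracy reproduces the model selection that led to k = 5 in the embodiment.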

[0024] The embodiment estimates the person attributes from the stride L and the walking speed S, but it is clear that the estimation can also incorporate other walking-motion and time-series motion characteristics, such as cadence, movement acceleration, and the vertical displacement of the head.

[0025]

[Effects of the Invention] As described above, according to the present invention, a person's attributes can be estimated from the person's walking-motion characteristics with simple processing, and attribute estimation remains possible when the distance from the camera to the person is too great for the face to be discerned and when that distance is unknown.

[Brief Description of the Drawings]

FIG. 1 is a processing-procedure diagram of an embodiment of the present invention.

FIG. 2 shows an example of a person region extracted with a circumscribed rectangle and the circumscribed-rectangle parameters.

FIG. 3 shows an example age distribution of the people appearing in the learning and test input images of the embodiment.

FIG. 4 shows an example distribution of the person-attribute feature vectors in the embodiment.

[Explanation of Symbols]

1 … preprocessing
2 … person-region extraction processing
3 … person-attribute feature extraction processing
4 … identification processing

(Continuation of front page)
(72) Inventor: Hiroshi Murase, 2-3-1 Otemachi, Chiyoda-ku, Tokyo, within Nippon Telegraph and Telephone Corporation
(72) Inventor: Norihiro Hagita, 2-3-1 Otemachi, Chiyoda-ku, Tokyo, within Nippon Telegraph and Telephone Corporation
F-terms (reference): 5B057 AA19 DA01 DA08 DB02 DC09 DC36; 5L096 BA02 BA18 EA13 FA18 FA32 FA33 HA02 HA09 JA11

Claims (4)

[Claims]
Claim 1. A person-attribute estimation method that extracts a feature vector from an input moving image and collates it with a reference vector created in advance from learning images to estimate the attributes of a person contained in the input moving image, the method comprising: a person-region extraction step of obtaining, in each frame of the input moving image, a figure circumscribing the person region and computing circumscribed-figure parameters that define the shape of the circumscribed figure and its position within the frame; a person-attribute feature-vector extraction step of computing, from the time series of the circumscribed-figure parameters obtained from the frames, the walking-motion characteristics and walking time-series characteristics of the person in the input moving image, and computing a person-attribute feature vector representing the person's attributes from the walking-motion characteristics alone or from both sets of characteristics; and an identification step of estimating the person's attributes by collating the person-attribute feature vector with a reference vector created from learning moving images.
Claim 2. The person-attribute estimation method according to claim 1, wherein the person-attribute feature-vector extraction step normalizes the person-attribute feature vector with the circumscribed-figure parameters.
Claim 3. A person-attribute estimation apparatus that extracts a feature vector from an input moving image and collates it with a reference vector created in advance from learning images to estimate the attributes of a person contained in the input moving image, the apparatus comprising: person-region extraction means for obtaining, in each frame of the input moving image, a figure circumscribing the person region and computing circumscribed-figure parameters that define the shape of the circumscribed figure and its position within the frame; person-attribute feature-vector extraction means for computing, from the time series of the circumscribed-figure parameters obtained from the frames, the walking-motion characteristics and walking time-series characteristics of the person in the input moving image, and computing a person-attribute feature vector representing the person's attributes from the walking-motion characteristics alone or from both sets of characteristics; and identification means for estimating the person's attributes by collating the person-attribute feature vector with a reference vector created from learning moving images.
Claim 4. The person-attribute estimation apparatus according to claim 3, wherein the person-attribute feature-vector extraction means normalizes the person-attribute feature vector with the circumscribed-figure parameters.
JP2000117305A 2000-04-19 2000-04-19 Individual attribute estimation method and individual attribute estimation device Pending JP2001307098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2000117305A JP2001307098A (en) 2000-04-19 2000-04-19 Individual attribute estimation method and individual attribute estimation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2000117305A JP2001307098A (en) 2000-04-19 2000-04-19 Individual attribute estimation method and individual attribute estimation device

Publications (1)

Publication Number Publication Date
JP2001307098A true JP2001307098A (en) 2001-11-02

Family

ID=18628639

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2000117305A Pending JP2001307098A (en) 2000-04-19 2000-04-19 Individual attribute estimation method and individual attribute estimation device

Country Status (1)

Country Link
JP (1) JP2001307098A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018206228A (en) * 2017-06-08 2018-12-27 株式会社日立製作所 Computer system, interaction control method, and computer


Similar Documents

Publication Publication Date Title
US8515136B2 (en) Image processing device, image device, image processing method
JP7132387B2 (en) Image processing device, image processing method and program
Kawaguchi et al. Detection of eyes from human faces by Hough transform and separability filter
US8374422B2 (en) Face expressions identification
JP4241763B2 (en) Person recognition apparatus and method
Lee et al. Learning pedestrian models for silhouette refinement
JP5476955B2 (en) Image processing apparatus, image processing method, and program
JP5629803B2 (en) Image processing apparatus, imaging apparatus, and image processing method
JP5675229B2 (en) Image processing apparatus and image processing method
JP4743823B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN105447432B (en) A kind of face method for anti-counterfeit based on local motion mode
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
US20120288148A1 (en) Image recognition apparatus, method of controlling image recognition apparatus, and storage medium
KR20040059313A (en) Method of extracting teeth area from teeth image and personal identification method and apparatus using teeth image
CN107944395B (en) Method and system for verifying and authenticating integration based on neural network
TW201835805A (en) Method, system, and computer-readable recording medium for long-distance person identification
JP2015082245A (en) Image processing apparatus, image processing method, and program
JP3490910B2 (en) Face area detection device
Huong et al. Static hand gesture recognition for vietnamese sign language (VSL) using principle components analysis
Lee et al. An automated video-based system for iris recognition
US20090087100A1 (en) Top of head position calculating apparatus, image processing apparatus that employs the top of head position calculating apparatus, top of head position calculating method and recording medium having a top of head position calculating program recorded therein
CN114616591A (en) Object tracking device and object tracking method
Kim et al. Lip print recognition for security systems by multi-resolution architecture
WO2002007096A1 (en) Device for tracking feature point on face
Jindal et al. Sign Language Detection using Convolutional Neural Network (CNN)

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20040913

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20040928

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20050208