JP2859315B2 - Head rotation direction detection method - Google Patents

Head rotation direction detection method

Info

Publication number
JP2859315B2
JP2859315B2 JP1221744A JP22174489A
Authority
JP
Japan
Prior art keywords
head
image
processing unit
area
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP1221744A
Other languages
Japanese (ja)
Other versions
JPH0385685A (en)
Inventor
英明 境野
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社 filed Critical 日本電信電話株式会社
Priority to JP1221744A priority Critical patent/JP2859315B2/en
Publication of JPH0385685A publication Critical patent/JPH0385685A/en
Application granted granted Critical
Publication of JP2859315B2 publication Critical patent/JP2859315B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Description

[Detailed Description of the Invention]

(Technical Field of the Invention) The present invention relates to a head rotation direction detection method in which partial image information is extracted from image information of a person's head using a virtual grid and the amount of head rotation is calculated.

(Prior Art) In technical fields such as intelligent coding communication and those requiring a man-machine interface, the extraction of information such as the amount of movement of a person's head and the recognition of its motions is indispensable. Conventionally, processing methods relying on image processing have been the main approach to this.

FIG. 8 shows the operation flow of a conventional method in which an upper-shoulder image of a person is captured by a camera and the amount of head rotation is calculated by image processing.

In the upper-shoulder image input processing unit 1, the data obtained by capturing a person 2 with a camera 3 are input to the image processing unit 4, where a difference image 7 is generated from the upper-shoulder image 5 and the background image 6. The contour line 8 of the head and the boundary line 11 between the hair region 9 and the face region 10 are then detected by predetermined threshold processing, and the centroid 12 of the head and the centroid 13 of the face are calculated. Finally, the rotation amount calculation processing unit 14 detects the amount of head rotation from the information of these two centroids 12 and 13.
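For illustration only, a minimal sketch of how such a two-centroid rotation estimate could be computed is given below; the NumPy-based representation, the function names and the normalization of the centroid offset by the head half-width are assumptions, not details taken from the patent.

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid (x, y) of a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def conventional_head_yaw(head_mask: np.ndarray, face_mask: np.ndarray) -> float:
    """Estimate the horizontal head rotation (degrees) from the offset between
    the face centroid (13) and the head centroid (12), as in FIG. 8."""
    c_head = centroid(head_mask)                      # centroid 12
    c_face = centroid(face_mask)                      # centroid 13
    xs = np.nonzero(head_mask)[1]
    half_width = (xs.max() - xs.min()) / 2.0
    # When the head turns, the visible face shifts sideways relative to the
    # whole head; treat the normalized offset as the sine of the rotation angle.
    s = np.clip((c_face[0] - c_head[0]) / half_width, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Toy example: a 9-pixel-wide "head" whose visible "face" sits in its right half.
head = np.zeros((10, 9), dtype=bool); head[2:9, :] = True
face = np.zeros_like(head);           face[4:9, 5:9] = True
print(conventional_head_yaw(head, face))
```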

In this system the shape of the head is complicated and asymmetric, so cutting away redundant information is essential. In the conventional system, however, the rotation amount is calculated from all of the head image information, including the redundant part, so the detection efficiency is poor and it is impossible to carry out the processing using only partial head information.

(Object of the Invention) An object of the present invention is to overcome the drawbacks of the conventional method described above and to detect the amount of head rotation efficiently by a comparatively simple method.

(Constitution of the Invention) (Difference between the Features of the Invention and the Prior Art) To achieve the above object, the present invention detects, for a person image input from a camera, the inclination of the straight line connecting the top of the head and the neck, generates a plurality of grid lines parallel to this straight line, selects arbitrary ones of these grid lines, and detects the rotation direction of the head from the changes in the hair region and the face region contained in the partial region inside each selected grid cell. This is the principal feature of the invention.

The method differs from the prior art in that various grids are virtually attached to the head image information input from the camera, so that the processing can be carried out efficiently using only partial head information.

(Embodiment) FIG. 1 shows the basic configuration of an apparatus for carrying out the method of the present invention. The operations of the units 15 to 21 are repeated sequentially in the direction of the arrows in the figure. The apparatus consists of an upper-shoulder image input processing unit 15 that inputs the head in the space in which the head motion is performed, a head region extraction processing unit 16, a head inclination detection processing unit 17, a grid pattern attachment processing unit 18, a small region processing unit 19, a movement amount calculation processing unit 20, and an iteration processing unit 21.

The operation will now be described with reference to FIG. 2, which shows the image processing steps. First, the upper-shoulder image input processing unit 15 of FIG. 1 inputs the motion of the head of the person 2 in free space from the camera 3. In the head region extraction processing unit 16, the difference image 7 between the background image 6 and the upper-shoulder image 5 is generated, the chin contour is then detected from the upper-shoulder image by a boundary search method, and the contour line 8 of the head image 22 is extracted. The head inclination detection processing unit 17 detects the inclination of the tilted head image 22-1 as the angle θ between the vector 24, which connects the apex 22A of the head with the midpoint of the neck or with the tip 23 of the chin, and the gravity direction vector 24'. Using the head inclination angle θ detected by the preceding head inclination detection processing unit 17, the grid pattern attachment processing unit 18 attaches to the head image a grid pattern oriented according to that angle. The small region processing unit 19 cuts out the partial head images between the grid lines. The movement amount calculation processing unit 20 calculates the amount of head rotation from the partial head images cut out by the preceding small region processing unit 19, on the basis of information such as the areas of the hair region and the face region and the intersections of the grid lines with the head.
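A minimal sketch of the inclination computation performed by the head inclination detection processing unit 17, assuming image coordinates with x to the right and y downward; the function name and the sign convention are assumptions.

```python
import numpy as np

def head_inclination(apex_xy, chin_xy):
    """Angle theta (degrees) between the vector 24 joining the chin tip (23) to the
    head apex (22A) and the gravity direction vector 24'.
    The angle is measured from the upward vertical and is positive when the apex is
    displaced to the right, i.e. when the head is tilted to the right like 34B in FIG. 4."""
    ax, ay = apex_xy
    cx, cy = chin_xy
    vx, vy = ax - cx, ay - cy            # vector 24 (chin tip -> apex)
    return float(np.degrees(np.arctan2(vx, -vy)))

# Apex at (120, 40), chin tip at (100, 160): a rightward tilt of about 9.5 degrees.
print(head_inclination((120, 40), (100, 160)))
```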

The iteration processing unit 21 then returns the processing to the upper-shoulder image input processing unit 15, and the cycle is repeated as many times as necessary until the head rotation amount is obtained.

FIG. 3 is a diagram explaining the operation of the head region extraction processing unit 16 of FIG. 1. In the image 25 generated from the difference image 7 obtained from the upper-shoulder image 5 and the background image 6, the head region (outside the clothes) and the face region 10 overlapping the clothes are determined.

First, a horizontal raster scan 26 is performed from the top of the image 25, and the contour line 10A of the head against the background is extracted with a predetermined threshold. Next, the chin line 28 on the clothes 27 is searched for by the boundary search method. Around the search start point 29 and the search end point 30 of this boundary there are three characteristic regions: the background region 31 (S1), the clothing region 32 (S2), and the face region 10 (S3).

Here, obtaining the face region 10 on the clothes 27 means distinguishing the clothing region 32 (S2) from the face region 10 (S3). Starting from the intersection of the three regions 31, 32 and 10, that is, from the boundary search start point 29, the image is traversed in the scanning direction 33 toward the search end point 30 until a pixel belonging to the pixel values of the face region 10 (S3) is encountered. If the pixel currently being scanned belongs to the face region 10, the scan advances one step toward the left (arrow); if it does not, it advances one step toward the right (arrow). In this way the face region 10 is cut out from the upper-shoulder image 5.
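The left/right stepping rule can be read as a square-tracing ("turtle") walk along the border of the face region; the sketch below is one such interpretation, and the mask representation, function name and termination condition are assumptions rather than details from the patent.

```python
import numpy as np

def trace_face_boundary(face_mask, start, max_steps=10000):
    """Square-tracing walk along the border of the face region: standing on a face
    pixel turn left, otherwise turn right, then step forward. Returns the visited
    border pixels, sketching the left/right rule used to separate the face region
    10 (S3) from the clothing region 32 (S2)."""
    directions = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # up, right, down, left (dx, dy)
    h, w = face_mask.shape
    x, y = start
    d = 2                                             # initial heading: downward (direction 33)
    boundary = []
    for _ in range(max_steps):
        inside = 0 <= x < w and 0 <= y < h and face_mask[y, x]
        if inside:
            boundary.append((x, y))
            d = (d - 1) % 4                           # face pixel: turn one way ("left")
        else:
            d = (d + 1) % 4                           # background/clothing: turn the other way ("right")
        x, y = x + directions[d][0], y + directions[d][1]
        if (x, y) == tuple(start):
            break                                     # returned to the start point 29
    return boundary

# Toy example: a 6x6 mask containing a square "face" region.
mask = np.zeros((6, 6), dtype=bool)
mask[1:5, 1:5] = True
print(trace_face_boundary(mask, start=(1, 1)))
```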

FIG. 4 is a diagram showing the image processing steps of the head inclination detection processing unit 17 and the grid pattern attachment processing unit 18 of FIG. 1.

In the head image 34 of FIG. 4(1), from which the head region has been extracted, the apexes 34a, 34b and 34c and the chin tip 23 are detected for the front-facing head 34A, the head 34B tilted to the right by an angle θ1 from the vector 24, and the head 34C tilted further to the right by an angle θ2 (θ2 > θ1). A grid pattern is then attached to the head image according to the detected angle θ1 or θ2 (measured from the gravity direction).

FIGS. 4(2) and 4(4) show examples in which a coarse grid pattern 35-1 and a fine grid pattern 35-2, respectively, are attached to the right-tilted head 34B (or 34C) of FIG. 4(1); FIGS. 4(3) and 4(5) show examples in which a coarse grid pattern 36-1 and a fine grid pattern 36-2, respectively, are attached to a left-tilted head 34D, which is not shown in FIG. 4(1).
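As a rough sketch of the grid pattern attachment processing unit 18, the grid lines parallel to the tilted head axis can be encoded as a per-pixel cell index; the spacing parameter, the function name and the NumPy representation are assumptions, with the coarse and fine patterns 35-1 and 35-2 simply corresponding to different spacings.

```python
import numpy as np

def grid_cell_indices(shape, chin_xy, theta_deg, spacing=10.0):
    """Assign to every pixel the index of the grid cell it falls in. The grid lines
    run parallel to the head axis, i.e. the vertical rotated by the inclination
    angle theta (FIG. 4); 'spacing' selects a coarse or fine pattern."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.radians(theta_deg)
    # Signed distance of each pixel from the line through the chin tip along the
    # head axis; for a tilt theta the axis direction is (sin theta, -cos theta),
    # so its unit normal is (cos theta, sin theta).
    dist = (xs - chin_xy[0]) * np.cos(theta) + (ys - chin_xy[1]) * np.sin(theta)
    return np.floor(dist / spacing).astype(int)

# Example: cell index map for a 120x100 image with the head tilted 15 degrees to the right.
cells = grid_cell_indices((120, 100), chin_xy=(50, 100), theta_deg=15.0, spacing=12.0)
print(np.unique(cells))
```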

FIG. 5 is a diagram showing the image processing steps of the small region processing unit 19 and the movement amount calculation processing unit 20 of FIG. 1. In the grid-pattern-attached image 37 of FIG. 5(1), produced by the grid pattern attachment processing unit 18 (FIG. 4), the grid pattern 35 (or 36) is moved so as to follow the positions 34A-1 and 34B-1 of the head image 34. Then, among the partial images 38, 39 and 40 of the head image shown in FIG. 5(2), which are cut out between the grid lines, the partial image showing the most characteristic features, for example the partial image 39 of the front region, is extracted from the grid cells.

As shown in FIG. 5(3), the extracted features include the intersections 41 of the partial image 39 (hair region) with the lines of the grid pattern 35 (or 36), features based on the differences between the distances n1, m1, n2 and m2 between these intersections, and, from the histogram distribution H over the grid cross-section, information such as the area 42 (S4) of the hair region 9 and the area 43 (S5) of the face region 10. The amount of head rotation is detected by comparing these features across the cells of the grid pattern.
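A minimal sketch of the per-cell feature collection performed by the movement amount calculation processing unit 20, limited here to the areas S4 and S5 and their ratio; the dictionary layout and names are assumptions, and the intersection-distance features of FIG. 5(3) are omitted for brevity.

```python
import numpy as np

def cell_features(cell_index, hair_mask, face_mask):
    """For every grid cell, collect the hair area 42 (S4), the face area 43 (S5)
    and their ratio. cell_index is a per-pixel map of grid cell numbers, e.g. as
    produced by the grid sketch above."""
    feats = {}
    for c in np.unique(cell_index):
        in_cell = cell_index == c
        s4 = int(np.count_nonzero(hair_mask & in_cell))   # hair area S4
        s5 = int(np.count_nonzero(face_mask & in_cell))   # face area S5
        if s4 + s5 == 0:
            continue                                      # cell entirely outside the head
        feats[int(c)] = {"S4": s4, "S5": s5, "ratio": s4 / max(s5, 1)}
    return feats

# Toy example: three cells, with hair occupying the upper part of the head.
cell_index = np.repeat(np.array([[0, 1, 2]]), 4, axis=0)
hair_mask = np.zeros((4, 3), dtype=bool); hair_mask[:2, :] = True
face_mask = ~hair_mask
print(cell_features(cell_index, hair_mask, face_mask))
```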

FIG. 6 shows graphs of typical head rotation detection results. FIG. 6(1) shows the change in the ratio of the areas S4 and S5 of the hair region 9 and the face region 10 in each grid cell, with a detection example E for the head facing the front and a detection example F for the head turned to the right. The vertical axis of the graph is the ratio of the area 42 (S4) of the hair region 9 to the area 43 (S5) of the face region 10, and the horizontal axis is the cut-out partial region of the head, that is, the grid cell number. In the front-facing detection example E the area ratio (S4/S5) of the two regions 9 and 10 within the cells forms an almost symmetric distribution, whereas in the rightward-facing detection example F the distribution slopes down toward the right. FIG. 6(2) shows the change in the distance between the intersections 41 of the grid lines and the hair region 9, one of the features detected at the same time, with a detection example e for the head facing the front and a detection example f for the head turned to the right. The vertical axis of this graph is the difference P1 between the distances n1 and n1+1 of the grid side points shown in FIG. 6(3), and the horizontal axis is again the cut-out partial region of the head, that is, the grid cell number; the horizontal axes of FIGS. 6(1) and 6(2) correspond to each other. The front-facing detection example e shows an almost symmetric distribution, whereas the rightward-facing detection example f slopes up toward the right. G denotes the case in which the distances between the intersections on the two sides of a cell are equal.
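One simple way to turn the ratio profile of FIG. 6(1) into a direction decision is to look at its overall slope across the grid cells; the sketch below is an assumption about how this could be done (the patent shows only the frontal example E and the rightward example F, so treating the opposite slope as a leftward turn is an extrapolation, and slope_tol is a hypothetical tuning parameter).

```python
import numpy as np

def classify_from_ratio_profile(ratios, slope_tol=0.05):
    """Classify the head direction from the per-cell ratio S4/S5 plotted against the
    grid cell number (FIG. 6(1)): a roughly symmetric, flat-trend profile such as E
    means the head faces the front, a profile sloping down to the right such as F
    means the head is turned to the right, and the opposite slope is taken here as a
    leftward turn."""
    x = np.arange(len(ratios), dtype=float)
    slope = np.polyfit(x, np.asarray(ratios, dtype=float), 1)[0]
    if slope < -slope_tol:
        return "right"
    if slope > slope_tol:
        return "left"
    return "front"

print(classify_from_ratio_profile([0.4, 0.9, 1.6, 0.9, 0.4]))   # symmetric -> front
print(classify_from_ratio_profile([1.8, 1.4, 1.0, 0.6, 0.2]))   # falls to the right -> right
```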

FIG. 7 shows graphs of the actually measured result K and the detection result L obtained with the method of the present invention when the head is rotated upward (1) and to the right (2). In both cases the vertical axis is the amount (angle) of head rotation and the horizontal axis is the number of image frames extracted from a continuous motion of the head in one direction. As can be seen from the figure, the actually measured result K and the detection result L agree almost exactly.
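The patent reports the agreement between K and L only graphically; if one wanted to quantify it, a per-frame error such as the mean absolute difference below could be used. The function and the sample angle values are purely hypothetical.

```python
import numpy as np

def mean_absolute_error(measured, detected):
    """Mean absolute difference (degrees) between the measured rotation K and the
    detected rotation L over the image frames of one continuous head motion."""
    k = np.asarray(measured, dtype=float)
    l = np.asarray(detected, dtype=float)
    return float(np.mean(np.abs(k - l)))

# Hypothetical per-frame angles for a rightward turn.
print(mean_absolute_error([0, 5, 12, 20, 28], [0, 6, 11, 21, 27]))
```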

As described above, in the present invention the head image is passed to the processing units that perform the image processing, and a grid corresponding to the inclination of the head is attached to the head image; it was thus confirmed that the amount of head rotation can be detected from head information whose processing volume is greatly reduced.

(Effect of the Invention) As described above, by attaching a grid pattern, which is a comparatively simple means, to the head image, the present invention can detect the rotation amount accurately without using all of the information of the head image, and, compared with the conventional apparatus, the detection time can be improved in efficiency by up to 60% depending on the grid spacing.

[Brief Description of the Drawings]

FIG. 1 is a basic configuration diagram of an apparatus for carrying out the method of the present invention; FIG. 2 is a diagram showing the image processing steps of FIG. 1; FIG. 3 is a diagram explaining the operation of the head region extraction processing unit 16 of FIG. 1; FIG. 4 is a diagram showing the image processing steps of the head inclination detection processing unit 17 and the grid pattern attachment processing unit 18 of FIG. 1; FIG. 5 is a diagram showing the image processing steps of the small region processing unit 19 and the movement amount calculation processing unit 20 of FIG. 1; FIG. 6 is a graph showing typical head rotation amount detection results; FIG. 7 shows graphs of the actually measured results and the results detected with the method of the present invention when the head is rotated upward and to the right; and FIG. 8 is a diagram showing the operation flow of a conventional method in which an upper-shoulder image of a person is captured by a camera and the amount of head rotation is calculated by image processing.

1, 15: upper-shoulder image input processing unit; 2: person; 3: camera; 5: upper-shoulder image; 6: background image; 7: difference image; 8: head contour line; 9: hair region; 10: face region; 10A: head contour line; 11: boundary line between the hair region and the face region; 16: head region extraction processing unit; 17: head inclination detection processing unit; 18: grid pattern attachment processing unit; 19: small region processing unit; 20: movement amount calculation processing unit; 21: iteration processing unit; 22: head image; 22A, 34a to 34c: apexes; 23: chin tip; 24: vector; 26: horizontal raster scan; 27: clothes; 28: chin line; 29, 30: start and end points of the head boundary search; 31: background region; 32: clothing region; 33: scanning direction of the boundary search; 34: head image; 34A to 34C: heads; 35, 36, 35-1, 35-2, 36-1, 36-2: grid patterns; 37: grid-pattern-attached image; 38: left region partial image; 39: front region partial image; 40: right region partial image; 41: intersections; 42: area of the hair region (S4); 43: area of the face region (S5).

Continuation of the front page: (58) Fields surveyed (Int. Cl.6, DB name): G06T 7/20; JOIS

Claims (1)

(57) [Claims]

[Claim 1] A head rotation direction detection method characterized in that, for a person image input from a camera, the inclination of a straight line connecting the top of the head and the neck is detected, a plurality of grid lines parallel to this straight line are generated, arbitrary ones of these grid lines are selected, and the rotation direction of the head is detected from changes in the hair region and the face region contained in the partial region within each selected grid cell.
JP1221744A 1989-08-30 1989-08-30 Head rotation direction detection method Expired - Fee Related JP2859315B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1221744A JP2859315B2 (en) 1989-08-30 1989-08-30 Head rotation direction detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1221744A JP2859315B2 (en) 1989-08-30 1989-08-30 Head rotation direction detection method

Publications (2)

Publication Number Publication Date
JPH0385685A JPH0385685A (en) 1991-04-10
JP2859315B2 true JP2859315B2 (en) 1999-02-17

Family

ID=16771546

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1221744A Expired - Fee Related JP2859315B2 (en) 1989-08-30 1989-08-30 Head rotation direction detection method

Country Status (1)

Country Link
JP (1) JP2859315B2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4595104B2 (en) * 1999-08-30 2010-12-08 勝義 川崎 Objective severity assessment method for movement disorders
KR100464040B1 (en) 2002-12-16 2005-01-03 엘지전자 주식회사 Method for controlling of mobile communication device using face moving
JP2006338329A (en) * 2005-06-02 2006-12-14 Seiko Epson Corp Face orientation detection method, device and program and recording medium with the program recorded thereon
JP4997306B2 (en) * 2010-03-12 2012-08-08 オリンパス株式会社 How to determine the orientation of a person's face
JP5706452B2 (en) * 2011-02-15 2015-04-22 株式会社日立メディコ X-ray diagnostic imaging apparatus and image display method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Proceedings of the 40th National Convention of the Information Processing Society of Japan (情報処理学会第40回全国大会講演論文集), March 1990, p. 392

Also Published As

Publication number Publication date
JPH0385685A (en) 1991-04-10

Similar Documents

Publication Publication Date Title
EP1550082B1 (en) Three dimensional face recognition
CN103778635B (en) For the method and apparatus processing data
KR100896643B1 (en) Method and system for modeling face in three dimension by means of aam, and apparatus applied to the same
JPH0685183B2 (en) Identification method of 3D object by 2D image
WO2007102537A1 (en) Posture estimating device and method
KR101759188B1 (en) the automatic 3D modeliing method using 2D facial image
JP2000251078A (en) Method and device for estimating three-dimensional posture of person, and method and device for estimating position of elbow of person
Iwasawa et al. Real-time, 3D estimation of human body postures from trinocular images
JP2967086B1 (en) Estimation of 3D pose of a person by multi-view image processing
JP4729188B2 (en) Gaze detection device
Lee et al. Hand gesture recognition using orientation histogram
JP2859315B2 (en) Head rotation direction detection method
Dariush et al. Spatiotemporal analysis of face profiles: Detection, segmentation, and registration
JPH06138137A (en) Moving-object extraction apparatus
Takahashi et al. Real-time estimation of human body postures using Kalman filter
KR100951315B1 (en) Method and device detect face using AAMActive Appearance Model
JPH0863603A (en) Image analyzer
JP2892610B2 (en) Attitude detection device
JPH0273471A (en) Estimating method for three-dimensional form
JPH08153187A (en) Image recognizing method
JPH01129358A (en) Arithmetic unit for table numerical value
CN117218686B (en) Palm vein ROI extraction method and system under open scene
JP3315175B2 (en) 3D measuring device
CN113269207B (en) Image feature point extraction method for grid structure light vision measurement
Proesmans et al. Getting facial features and gestures in 3D

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees