JP2000172163A5 - Google Patents
Info
- Publication number
- JP2000172163A5 JP1999271122A JP27112299A
- Authority
- JP
- Japan
- Prior art keywords
- hand
- area
- region
- extracted
- control information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Description
Here, the method of extracting the hand regions and the torso will be described.
First, the body feature extraction unit 302 extracts the hand regions in the same manner as in step S403: skin-color regions are extracted from the input image, and the portions of those regions that do not overlap the head region are taken as the hand regions.
In the case of FIG. 7, the portion of the skin-color region that does not overlap the head region, i.e., the hand skin-color region 703, is extracted.
As for the torso, the person region detected in step S402 is used as the torso as it is.
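As a minimal sketch of this step, assuming the input frame has already been converted to HSV and the head region is available as a boolean mask; the function name and the skin-color thresholds below are illustrative assumptions, not values given in the description:

```python
import numpy as np

def extract_hand_mask(frame_hsv: np.ndarray, head_mask: np.ndarray) -> np.ndarray:
    """Hand pixels are the skin-colored pixels that fall outside the head region.

    frame_hsv: H x W x 3 image in HSV color space.
    head_mask: H x W boolean mask of the detected head region.
    Returns an H x W boolean mask of the hand region(s), i.e. region 703 in FIG. 7.
    """
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    # Assumed skin-color thresholds; the description does not specify any.
    skin_mask = (h < 25) & (s > 40) & (v > 60)
    # Remove the part of the skin-color region that overlaps the head.
    return skin_mask & ~head_mask
```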
When the number of extracted hands 1701 is 0, the centroid coordinates 1702 of the first hand and the centroid coordinates 1704 of the second hand are each set to (0, 0), and the area 1703 of the first hand and the area 1705 of the second hand are each set to 0.
When the number of extracted hands 1701 is 1, the centroid coordinates and area of the hand region are computed and stored in the centroid coordinates 1702 and area 1703 of the first hand; the centroid coordinates 1704 of the second hand are set to (0, 0), and its area 1705 is set to 0.
When the number of extracted hands 1701 is 2, the centroid coordinates and area of the left of the two hand regions are computed and stored in the centroid coordinates 1702 and area 1703 of the first hand, and the centroid coordinates and area of the right region are computed and stored in the centroid coordinates 1704 and area 1705 of the second hand.
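The three cases above amount to a small routine. A compact sketch follows, assuming the extracted hand regions arrive as boolean masks; the record type, helper function, and names are illustrative assumptions, with comments tying the fields back to 1701-1705:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class HandInfo:
    num_hands: int = 0             # number of extracted hands (field 1701)
    centroid1: tuple = (0.0, 0.0)  # centroid of the first hand (field 1702)
    area1: float = 0.0             # area of the first hand (field 1703)
    centroid2: tuple = (0.0, 0.0)  # centroid of the second hand (field 1704)
    area2: float = 0.0             # area of the second hand (field 1705)


def centroid_and_area(mask: np.ndarray):
    """Return the centroid (x, y) and pixel-count area of a boolean region mask."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return (0.0, 0.0), 0.0
    return (float(xs.mean()), float(ys.mean())), float(xs.size)


def fill_hand_info(hand_masks: list) -> HandInfo:
    # With zero hands, the defaults already encode (0, 0) centroids and 0 areas.
    info = HandInfo(num_hands=len(hand_masks))
    if info.num_hands == 1:
        info.centroid1, info.area1 = centroid_and_area(hand_masks[0])
    elif info.num_hands == 2:
        # Order the two regions left-to-right by centroid x-coordinate, so the
        # left region fills fields 1702/1703 and the right one 1704/1705.
        left, right = sorted(centroid_and_area(m) for m in hand_masks)
        info.centroid1, info.area1 = left
        info.centroid2, info.area2 = right
    return info
```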
The torso information body[i] can be realized with the structure shown in FIG. 8, in the same way as the face region information face[i].
After that, the sign language motion segmentation device proceeds to step S404.
The recognition result input unit 3001 sends the input recognition status information to the guidance control information generation unit 3003. The segment result input unit 3002 sends the input segment status information to the guidance control information generation unit 3003. Based on the recognition status information and the segment status information, the guidance control information generation unit 3003 generates guidance control information using the guidance rules stored in the guidance rule storage unit 3005, and sends it to the output unit 3004. The output unit 3004 outputs the guidance control information to a sign language animation device or the like (not shown) connected to it.
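A rough sketch of this data flow, with the rule storage reduced to a plain dictionary; the class, method names, and the example rule are all illustrative assumptions, since the description does not specify the rule format or the connected device:

```python
class GuidanceControlPipeline:
    """Sketch of the data flow among units 3001-3005 described above."""

    def __init__(self, guidance_rules: dict):
        # Stands in for the guidance rule storage unit 3005; a mapping from
        # (recognition status, segment status) pairs to control info is assumed.
        self.guidance_rules = guidance_rules

    def run(self, recognition_status: str, segment_status: str) -> None:
        # Units 3001 and 3002 forward their inputs to the generation unit 3003.
        control_info = self.generate(recognition_status, segment_status)
        self.output(control_info)

    def generate(self, recognition_status: str, segment_status: str):
        # Unit 3003: pick the guidance control information for this situation.
        return self.guidance_rules.get((recognition_status, segment_status))

    def output(self, control_info) -> None:
        # Unit 3004: hand the control info to the connected device, e.g. a
        # sign language animation device (not shown); printing stands in here.
        print(control_info)


# Illustrative usage with a single made-up rule.
pipeline = GuidanceControlPipeline({
    ("recognition_failed", "boundary_unclear"): "prompt_user_to_pause_between_signs",
})
pipeline.run("recognition_failed", "boundary_unclear")
```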
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP27112299A JP4565200B2 (en) | 1998-09-28 | 1999-09-24 | Manual motion segmentation method and apparatus |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP27396698 | 1998-09-28 | ||
JP10-273966 | 1998-09-28 | ||
JP27112299A JP4565200B2 (en) | 1998-09-28 | 1999-09-24 | Manual motion segmentation method and apparatus |
Publications (3)
Publication Number | Publication Date |
---|---|
JP2000172163A (en) | 2000-06-23 |
JP2000172163A5 (en) | 2006-10-12 |
JP4565200B2 (en) | 2010-10-20 |
Family
ID=26549546
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP27112299A Expired - Fee Related JP4565200B2 (en) | 1998-09-28 | 1999-09-24 | Manual motion segmentation method and apparatus |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP4565200B2 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005310068A (en) * | 2004-04-26 | 2005-11-04 | Noritsu Koki Co Ltd | Method for correcting white of eye, and device for executing the method |
JP4792824B2 (en) * | 2004-11-05 | 2011-10-12 | 富士ゼロックス株式会社 | Motion analysis device |
JP2006301906A (en) * | 2005-04-20 | 2006-11-02 | Nec Corp | Portable telephone terminal with camera, and operation method and operation program therefor |
KR100817298B1 (en) | 2005-12-08 | 2008-03-27 | 한국전자통신연구원 | Method for detecting and tracking both hands |
JP2007310914A (en) * | 2007-08-31 | 2007-11-29 | Nippon Telegr & Teleph Corp <Ntt> | Mouse alternating method, mouse alternating program and recording medium |
KR101652535B1 (en) * | 2008-06-18 | 2016-08-30 | 오블롱 인더스트리즈, 인크 | Gesture-based control system for vehicle interfaces |
JP5598751B2 (en) * | 2010-03-05 | 2014-10-01 | 日本電気株式会社 | Motion recognition device |
JP5915000B2 (en) * | 2011-06-13 | 2016-05-11 | ソニー株式会社 | Information processing apparatus and program |
US9996740B2 (en) | 2013-09-30 | 2018-06-12 | Sony Interactive Entertainment Inc. | Information processing device, information processing method, program, and information storage medium |
JP6177655B2 (en) * | 2013-10-11 | 2017-08-09 | 株式会社Nttドコモ | Image recognition apparatus and image recognition method |
JP6144192B2 (en) * | 2013-12-27 | 2017-06-07 | 株式会社Nttドコモ | Image recognition apparatus and image recognition method |
WO2016017101A1 (en) * | 2014-07-30 | 2016-02-04 | ソニー株式会社 | Information processing device, information processing method and program |
CN104616028B (en) * | 2014-10-14 | 2017-12-12 | 北京中科盘古科技发展有限公司 | Human body limb gesture actions recognition methods based on space segmentation study |
- 1999-09-24: JP JP27112299A, patent JP4565200B2 (en), not_active Expired - Fee Related
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2000172163A5 (en) | ||
Cedras et al. | Motion-based recognition: a survey | |
Starner et al. | A wearable computer based American sign language recognizer | |
Li et al. | A web-based sign language translator using 3D video processing | |
Corradini | Dynamic time warping for off-line recognition of a small gesture vocabulary | |
Balomenos et al. | Emotion analysis in man-machine interaction systems | |
Sun et al. | Latent support vector machine modeling for sign language recognition with Kinect | |
Geetha et al. | A vision based dynamic gesture recognition of Indian sign language on Kinect based depth images | |
CN111680550B (en) | Emotion information identification method and device, storage medium and computer equipment | |
CN107678550A | Sign language gesture recognition system based on a data glove | |
Ghotkar et al. | Dynamic hand gesture recognition using hidden Markov model by Microsoft Kinect sensor | |
CN106909909A | Face detection and alignment method based on shared convolutional features | |
CN109409274A | Facial image transformation method based on face three-dimensional reconstruction and face alignment | |
Krishnaraj et al. | A Glove based approach to recognize Indian Sign Languages | |
Vogler et al. | A framework for motion recognition with applications to American sign language and gait recognition | |
CN113807287B (en) | 3D structured light face recognition method | |
Su et al. | A fuzzy rule-based approach to recognizing 3-D arm movements | |
Sagawa et al. | Pattern recognition and synthesis for a sign language translation system | |
Thalmann et al. | Direct face-to-face communication between real and virtual humans | |
CN209514551U | Intelligent interactive robot and interaction system | |
Vidalón et al. | Continuous sign recognition of Brazilian sign language in a healthcare setting | |
Proença et al. | A gestural recognition interface for intelligent wheelchair users | |
Gao et al. | HandTalker II: a Chinese sign language recognition and synthesis system | |
CN115454256A (en) | Digital oath word tombstone device | |
Gao et al. | Building language communication between deaf people and hearing society through multimodal human-computer interface |