JPH1049707A - Human body reproducing method - Google Patents

Human body reproducing method

Info

Publication number
JPH1049707A
JPH1049707A JP8200239A JP20023996A
Authority
JP
Japan
Prior art keywords
expression
real time
reproduction
stored
reproduced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP8200239A
Other languages
Japanese (ja)
Inventor
Kazuyuki Ebihara
Jiyun Kurumisawa
Atsushi Otani
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ATR CHINOU EIZO TSUSHIN KENKYUSHO KK
Original Assignee
ATR CHINOU EIZO TSUSHIN KENKYUSHO KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ATR CHINOU EIZO TSUSHIN KENKYUSHO KK
Priority to JP8200239A priority Critical patent/JPH1049707A/en
Publication of JPH1049707A publication Critical patent/JPH1049707A/en
Pending legal-status Critical Current

Links

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

PROBLEM TO BE SOLVED: To achieve subjectively correct human body reproduction with high reproduction quality by detecting a person's expression in real time and storing the result, also storing data for reproducing the person in real time, learning to associate the two stored results, and reproducing the person's expressions and movements using the learned result. SOLUTION: A real-time expression detected by a real-time expression detection circuit 1 is stored in a real-time expression detection storage circuit 3. Meanwhile, real-time information reproduced by a real-time expression reproducing device 2 is stored in a real-time expression reproduction storage circuit 4. Based on the detected real-time information stored in the real-time expression detection storage circuit 3 and the reproduced real-time expression stored in the real-time expression reproduction storage circuit 4, a learning circuit 5 learns the relationship between the two. Here, for an expression reproduced by computer graphics, its relationship to the actual expression is learned. After learning is completed, reproduction is performed based on the learned data.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a person reproducing method, more particularly to a state reproduction method for reproducing the state of an object undergoing arbitrary motion, and in particular to a person reproducing method capable of naturally reproducing a person's facial expressions and movements using a three-dimensional model.

[0002]

2. Description of the Related Art

FIG. 3 is a block diagram showing a conventional facial expression reproducing method, and FIG. 4 shows an example of facial expression reproduction.

[0003] A non-contact input device such as a CCD camera captures the facial expression of a person serving as a three-dimensional model. The facial expression is detected by a real-time expression detection circuit 1, and based on its detection output, a real-time expression reproducing device 2 reproduces the person's facial expression, which is then shown on a display or reproduced by a robot.

[0004]

Problems to be Solved by the Invention

However, with the conventional expression reproducing method, even a numerically faithful three-dimensional representation does not guarantee that human observers will recognize the original and the reproduction as the same expression. In addition, the expressions people produce vary from person to person, so the expression a person intends is not always one that other people can understand.

[0005] For example, with the conventional expression reproducing method shown in FIG. 3, when the face of a person detected in real time as shown in FIG. 4(a) is reproduced in a numerically faithful (identically dimensioned) shape and displayed by computer graphics as shown in FIG. 4(b), the two may be perceived as different expressions even though the reproduced expression is geometrically identical to the actual one. Furthermore, even when the person serving as the model smiles, individual differences may prevent the reproduction from being perceived as a smile by other people.

[0006] Therefore, a main object of the present invention is to provide a person reproducing method capable of realizing subjectively correct person reproduction with higher reproduction quality.

[0007]

Means for Solving the Problems

The invention according to claim 1 is a person reproducing method that detects and reproduces a person's facial expressions and movements. The facial expressions and movements are detected in real time in a non-contact manner and the detection results are stored as a time series; data for reproducing the person in real time is likewise stored as a time series; the two stored time series are learned so as to associate them with each other; and the person's facial expressions and movements are then reproduced using the learned result.

[0008]

DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIGS. 1 and 2 are block diagrams showing an embodiment of the present invention: FIG. 1 shows the learning phase, and FIG. 2 shows the reproduction phase.

[0009] In FIG. 1, a real-time expression detected by the real-time expression detection circuit 1 is stored in a real-time expression detection storage circuit 3. Meanwhile, real-time information reproduced by the real-time expression reproducing device 2 is stored in a real-time expression reproduction storage circuit 4. A learning circuit 5 then learns the relationship between the two, based on the detected real-time information stored in the real-time expression detection storage circuit 3 and the reproduced real-time expression stored in the real-time expression reproduction storage circuit 4.
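The paired time-series storage performed by storage circuits 3 and 4 can be sketched as follows. This is a minimal illustration only, assuming each expression is reduced to a fixed-length feature vector; the class, variable names, and data are hypothetical and do not appear in the patent.

```python
import numpy as np

class TimeSeriesStore:
    """Time-series store for expression feature vectors (circuits 3 and 4)."""

    def __init__(self):
        self.frames = []

    def record(self, t, features):
        # keep (timestamp, feature vector) pairs in arrival order
        self.frames.append((t, np.asarray(features, dtype=float)))

    def as_matrix(self):
        # one row per time step, ready for the learning circuit
        return np.stack([f for _, f in self.frames])

# detected and reproduced expressions are stored side by side
detected_store = TimeSeriesStore()    # corresponds to storage circuit 3
reproduced_store = TimeSeriesStore()  # corresponds to storage circuit 4
for t in range(5):
    detected_store.record(t, [t * 0.1, 1.0 - t * 0.1])
    reproduced_store.record(t, [t * 0.2, 0.5])

X = detected_store.as_matrix()   # detected series, shape (5, 2)
Y = reproduced_store.as_matrix() # reproduced series, shape (5, 2)
```

Storing both sequences with aligned timestamps is what lets the learning circuit later treat each row of X and the corresponding row of Y as an input/output pair.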

[0010] Here, the expression to be reproduced by computer graphics is deformed (modified and exaggerated) in advance on the basis of artistic anatomy, and its relationship to the actual expression is learned. For this learning, methods such as a genetic algorithm (GA), a hidden Markov model (HMM), a neural network, or the least-squares method can be used.
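Of the learning methods listed, the least-squares method is the simplest to illustrate. The sketch below fits a linear map from detected expression parameters to reproduced (deformed) parameters using `numpy.linalg.lstsq`; the data, dimensions, and variable names are invented for illustration and are not specified by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# detected expression parameters, one row per time step (hypothetical data)
X = rng.normal(size=(100, 4))
# reproduced (deformed) parameters: here generated from a known linear map
W_true = rng.normal(size=(4, 3))
Y = X @ W_true

# learning circuit 5: least-squares fit of the detected-to-reproduced relation
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# the fitted map reproduces the relation on the training data
assert np.allclose(X @ W, Y, atol=1e-8)
```

A linear map is of course the weakest of the methods named in the text; GA, HMM, or neural-network learning would be chosen when the detected-to-reproduced relation is nonlinear or time-dependent.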

[0011] After learning is completed, as shown in FIG. 2, an association circuit 6 performs reproduction based on the learned data. In this way, when expression reproduction is performed using actual expression changes together with reproduction results that have been deformed in advance according to subjective criteria, subjectively correct expression reproduction is realized. For example, in Kabuki, where special expressions are required, it is extremely difficult for an ordinary person to actually produce the expressions of a Kabuki actor. Instead, the expressions produced by a Kabuki actor are created in advance by computer graphics, and the subject then attempts to produce those expressions. Even if the subject's expressions differ from the actor's, learning the relationship between the two makes it possible to reproduce the Kabuki actor's face. Similarly, the expression to be reproduced need not be that of a human such as a Kabuki actor; equivalent results can be obtained using the expressions of animals such as dogs and cats.
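The reproduction step performed by the association circuit 6 amounts to applying the learned relation to newly detected expressions. A minimal sketch, assuming the learned relation is a linear map W; all values and names here are hypothetical.

```python
import numpy as np

def associate(detected, W):
    """Association circuit 6: map a detected expression vector to the
    reproduced (e.g. deformed computer-graphics) expression parameters."""
    return np.asarray(detected, dtype=float) @ W

# a previously learned linear relation (hypothetical values)
W = np.array([[1.0, 0.0],
              [0.5, 2.0]])

new_frame = [0.2, 0.4]            # newly detected expression parameters
params = associate(new_frame, W)  # parameters driving the CG reproduction
print(params)                     # → [0.4 0.8]
```

Because the map was learned from the subject's own attempts at the target expressions, applying it frame by frame turns the subject's imperfect expressions into the target (e.g. Kabuki-actor) expressions in real time.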

[0012]

Effects of the Invention

As described above, according to the present invention, a person's facial expressions and movements are detected in real time in a non-contact manner and the results are stored as a time series; data for reproducing the person in real time is likewise stored as a time series; the two stored time series are learned so as to associate them with each other; and the person's facial expressions and movements are reproduced using the learned result. Person reproduction with higher reproduction quality can therefore be realized.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of the learning phase in an embodiment of the present invention.

FIG. 2 is a block diagram of the reproduction phase in an embodiment of the present invention.

FIG. 3 is a diagram showing a conventional facial expression reproducing method.

FIG. 4 is a diagram showing an example of conventional facial expression reproduction.

[Explanation of symbols]

1 real-time expression detection circuit
2 real-time expression reproducing device
3 real-time expression detection storage circuit
4 real-time expression reproduction storage circuit
5 learning circuit
6 association circuit

Continuation of front page:
(72) Inventor: Jiyun Kurumisawa, 5 Sanpeidani, Inuidani, Seika-cho, Soraku-gun, Kyoto, c/o ATR Intelligent Image Communications Research Laboratories
(72) Inventor: Atsushi Otani, 5 Sanpeidani, Inuidani, Seika-cho, Soraku-gun, Kyoto, c/o ATR Intelligent Image Communications Research Laboratories

Claims (1)

[Claims]

1. A person reproducing method for detecting and reproducing a person's facial expressions and movements, comprising: storing, as a time series, results of detecting the person's facial expressions and movements in real time in a non-contact manner; storing, as a time series, data for reproducing the person in real time; learning so as to associate the two stored time series with each other; and reproducing the person's facial expressions and movements using the learned result.
JP8200239A 1996-07-30 1996-07-30 Human body reproducing method Pending JPH1049707A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP8200239A JPH1049707A (en) 1996-07-30 1996-07-30 Human body reproducing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP8200239A JPH1049707A (en) 1996-07-30 1996-07-30 Human body reproducing method

Publications (1)

Publication Number Publication Date
JPH1049707A true JPH1049707A (en) 1998-02-20

Family

ID=16421125

Family Applications (1)

Application Number Title Priority Date Filing Date
JP8200239A Pending JPH1049707A (en) 1996-07-30 1996-07-30 Human body reproducing method

Country Status (1)

Country Link
JP (1) JPH1049707A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2130667A1 (en) 2008-06-05 2009-12-09 Yamamoto Kogaku Co., Ltd. Polarizing laminate and process for producing the same


Similar Documents

Publication Publication Date Title
US11858118B2 (en) Robot, server, and human-machine interaction method
KR102334942B1 (en) Data processing method and device for caring robot
CN1692341B (en) Information processing device and method
Sallai Defining infocommunications and related terms
KR20200074114A (en) Information processing apparatus, information processing method, and program
CN114242069A (en) Switching method, device and equipment of human-computer customer service and storage medium
Ye et al. A novel active object detection network based on historical scenes and movements
Goswami et al. Towards social & engaging peer learning: Predicting backchanneling and disengagement in children
Garello et al. Property-aware robot object manipulation: a generative approach
Higgins et al. Head pose for object deixis in vr-based human-robot interaction
JPH11265239A (en) Feeling generator and feeling generation method
JPH1049707A (en) Human body reproducing method
Ondras et al. Automatic replication of teleoperator head movements and facial expressions on a humanoid robot
Vignolo et al. The complexity of biological motion
US12008702B2 (en) Information processing device, information processing method, and program
Hosseini et al. Teaching persian sign language to a social robot via the learning from demonstrations approach
JPWO2019087490A1 (en) Information processing equipment, information processing methods, and programs
Dornaika et al. Inferring facial expressions from videos: Tool and application
Mittal et al. Cognitive Computing for Human-Robot Interaction: Principles and Practices
Peng et al. Image-based object state modeling of a transfer task in simulated surgical training
Matsufuji et al. A System of Associated Intelligent Integration for Human State Estimation
Pico et al. On robots imitating movements through motor noise prediction
Attamimi et al. The study of attention estimation for child-robot interaction scenarios
Sahindal Detecting Conversational Failures in Task-Oriented Human-Robot Interactions
Naeem et al. An AI based Voice Controlled Humanoid Robot

Legal Events

Date Code Title Description
A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20020212