JP2003141563A - Method of generating face 3D computer graphics, its program, and recording medium


Info

Publication number
JP2003141563A
Authority
JP
Japan
Prior art keywords
face
dimensional
subject
computer graphics
feature points
Prior art date
Legal status: Pending
Application number
JP2001334370A
Other languages
Japanese (ja)
Inventor
Daiki Mori
Current Assignee
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp
Priority to JP2001334370A
Publication of JP2003141563A


Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a method of generating face 3D CG that reflects a subject's appearance information and personality information obtained from the subject's behavior.

SOLUTION: Facial feature points required to identify the individual are extracted from image information obtained by photographing the subject's head from two directions, front and side. Based on these feature points, the 3D structures of the facial parts, including the head skeleton, nose, mouth, eyebrows, and eyes, are restored, and the parts are integrated to reconstruct a 3D face shape. The 3D face shape is compared with average face data to emphasize the characteristic parts of the subject's face. The displacements of the facial feature points are then calculated from video information obtained by continuously photographing the subject's head, the subject's personality is classified on the basis of these displacements, and the 3D shape of each facial part is varied according to the classification.

Description

Detailed Description of the Invention

[0001] [Technical Field of the Invention] The present invention relates to a face 3D computer graphics generation method that makes it possible to represent the face of a user's avatar (character) in a virtual space or the like in three-dimensional computer graphics based on the person's appearance information and personality classification, and to a program therefor and a recording medium on which the program is recorded.

[0002] [Description of the Related Art] In recent years, research and development of multi-user systems, in which the users of multiple computers connected to a network participate in a three-dimensional virtual space constructed on the network and communicate with one another, has been actively pursued; such systems are used for distance education, remote conferencing, 3D chat communities, and the like. In this virtual space, each user is represented by an avatar called a character, and moves about the virtual space, changing the avatar's position and orientation according to the user's viewpoint movement, while communicating with other users. In such virtual space systems, the representation of the character, the user's avatar, is important, and methods for representing the face, the part that most strongly reflects individual characteristics, have been attracting particular attention.

[0003] Conventionally, there are methods that generate a three-dimensional character with a shape similar to the actual head by using captured face images, such as "3DMeNow" (http://www.biovirtual.com/ 2001.08.01) and the curved-surface face model for animation production (Komatsu, "A Curved Surface Model of the Face for Animation Production", Transactions of the Information Processing Society of Japan, Vol. 30, No. 5, pp. 633-641, 1989) (Method 1). There is also a face 3D model representation method that synthesizes various facial expressions using a wireframe model, as in the face 3D model for facial expression description (Choi, Harashima, Takebe, "Description and Synthesis of Facial Expressions Based on a 3D Facial Model", Transactions of the IEICE, Vol. J73-A, No. 7, pp. 1270-1280, 1990) (Method 2). Furthermore, there is a lip-shape representation method that uses both images and speech, as in a multi-user communication system in cyberspace (Omuro, Ito, Shimada, Morishima, "Real-Time Synthesis of Lip Images Synchronized with Speech Using Lip Feature Point Extraction and Speech Analysis", Proceedings of the IEICE General Conference, A-15-11, pp. 310, 2001) (Method 3).

[0004] [Problems to Be Solved by the Invention] However, because Method 1 above attempts to reproduce fine detail, its three-dimensional shape model becomes complex and a large number of facial feature points must be extracted from the images. As a result, the computation required for automatic generation increases, as does the amount of manual operation required of the user.

[0005] Method 2 above pastes processed versions of the captured images onto the model, so it is well suited to faithful reconstruction of the actual head shape, but poorly suited to abstracted representation such as emphasizing characteristic facial parts.

[0006] Method 3 above centers on real-time motion tracking, so it goes no further than reproducing appearance information; personality information, an important element in expressing a person's individuality, is not reflected.

[0007] The present invention has been made in view of the problems of the conventional methods described above, and its object is to provide a face 3D computer graphics generation method that reflects the subject's appearance information and the personality obtained from the subject's mannerisms, a program therefor, and a recording medium on which the program is recorded.

[0008] [Means for Solving the Problems] The main feature of the present invention is that it is broadly divided into a process of reconstructing a 3D face shape based on the subject's appearance information, and a process of classifying the subject's personality and reflecting the classified personality in the 3D face shape reconstructed from that appearance information.

[0009] In one embodiment of the present invention, the process of reconstructing the 3D face shape based on the subject's appearance information proceeds as follows: from image information obtained by photographing the subject's head from two directions, front and side, the 2D arrangement information of the facial feature points needed to identify the individual is extracted; the 3D arrangement information of the facial feature points is computed from this 2D arrangement information; based on the 3D arrangement information, the 3D structure of each facial part (head skeleton, nose, mouth, eyebrows, eyes, and so on) is reconstructed as a 3D geometric figure expressed by curves, straight lines, and the surfaces those lines form; and the 3D face shape is reconstructed by integrating these facial parts. Furthermore, to emphasize the subject's characteristic facial parts, the position, scale, shape, and so on of each facial part are compared with average face data, and the 3D shape of each facial part is altered according to the difference.

[0010] In the process of reflecting the subject's personality in the 3D face shape, the displacements of the 2D arrangement information of the facial feature points are computed from video information obtained by continuously photographing the subject's head from the front, the subject's personality is classified from these displacements by consulting a personality classification table or the like, and the 3D shape of each facial part is altered according to the classification.

[0011] Thus the present invention differs most from conventional methods in that the face is not reconstructed from appearance information alone: personality information, the most important element in expressing a person's individuality, is incorporated and reflected in the 3D shape representation of the face.

[0012] [Embodiments of the Invention] Embodiments of the present invention will now be described in detail with reference to the drawings. FIG. 1 shows an example configuration of a face 3D computer graphics (CG) generation system according to one embodiment of the present invention. In this configuration example, a user terminal device 10 and one or more imaging devices 20 are connected directly or via a network 30.

[0013] As shown in FIG. 2, the user terminal device 10 is a personal computer comprising a CPU 11, a storage device 12 such as a hard disk, a keyboard 13, a mouse 14, an imaging device interface 15, a display 16, and so on, and is connected to the imaging device 20 and the network 30 through the imaging device interface 15 as needed. The CPU 11 controls each component and executes the face 3D CG generation processing of the present invention. The storage device 12 holds the program executed by the CPU 11, the average facial feature point arrangement information described later, the personality classification table, and so on; it is also used to store image information captured by the imaging device 20, the generated 3D face shape information, and the like. The user terminal device 10 may also be a camera-equipped multimedia personal computer with the imaging device 20 built in.

[0014] First, an overview of the face 3D CG generation processing according to the present invention will be given with reference to FIGS. 1 and 2. As shown in FIG. 3, the face 3D CG generation processing of the present invention is broadly divided into reconstruction of the 3D face shape from the subject's appearance information (process 310) and emphasizing deformation of the facial parts of the reconstructed 3D face shape according to the subject's personality (process 320).

[0015] <Process 310> When the user inputs an imaging request with the keyboard 13 or the mouse 14, the user terminal device 10 (CPU 11) transmits the request to the imaging device 20 through the imaging device interface 15. On receiving the imaging request from the user terminal device 10, the imaging device 20 photographs the subject's head from the front and from the side and transmits the captured image information to the user terminal device 10, which receives it through the imaging device interface 15 and stores it in the storage device 12.

[0016] Next, the user terminal device 10 (CPU 11) extracts the arrangement information of the facial feature points needed to identify the individual from the captured image information (the subject's appearance information) stored in the storage device 12, and first reconstructs the 3D face shape from that arrangement information. The user terminal device 10 then compares the extracted facial feature point arrangement information with the average facial feature point arrangement information held in advance in the storage device 12, and deforms the 3D face shape according to the difference so that the subject's characteristic facial parts are emphasized.

[0017] This completes the reconstruction of the 3D face shape from the subject's appearance information. The user terminal device 10 stores the reconstructed 3D face shape information in the storage device 12 and displays it on the display 16 as needed.

[0018] <Process 320> When the user wants the subject's personality reflected in the 3D face shape, the user inputs a continuous imaging request with the keyboard 13 or the mouse 14, and the user terminal device 10 transmits the request to the imaging device 20 through the imaging device interface 15. Alternatively, the user terminal device 10 may transmit the continuous imaging request to the imaging device 20 automatically when process 310 finishes.

[0019] On receiving the continuous imaging request from the user terminal device 10, the imaging device 20 repeatedly photographs the subject's head and transmits the captured image information to the user terminal device 10 over a fixed period of time. The user terminal device 10 receives the captured image information through the imaging device interface 15, stores it sequentially in the storage device 12, and likewise repeats the extraction of facial feature point arrangement information over the fixed period.

[0020] Next, the user terminal device 10 measures the displacements of the facial feature points from the facial feature point arrangement information extracted from the image information (video information) captured over the fixed period. The user terminal device 10 then matches these displacements against the personality classification table held in advance in the storage device 12 to classify the subject's personality (behavioral personality), and deforms the facial parts of the 3D face shape obtained in process 310 according to that personality.

[0021] The user terminal device 10 renders the 3D face shape reconstructed in this way as 3D CG and outputs it to the display 16. If the user issues an editing request when the 3D CG is rendered on the display 16, the user terminal device 10 redraws the 3D face shape as modified according to input from the keyboard 13 and the mouse 14.

[0022] FIG. 4 shows a detailed flowchart of one embodiment of the face 3D CG generation processing according to the present invention. In FIG. 4, steps 401 to 406 are the reconstruction of the 3D face shape from the subject's appearance information (process 310 in FIG. 3), and steps 407 to 412 are the deformation of the 3D face shape based on the subject's personality classification (behavioral personality classification) (process 320 in FIG. 3). One embodiment of the face 3D CG generation processing according to the present invention is described concretely below with reference to FIG. 4.

[0023] <Step 401> The user terminal device 10 transmits an imaging request to the imaging device 20, which receives the request and photographs the subject's head from two directions, front and side. FIG. 5 shows an example in which the imaging device 20 photographs the side face of a subject 40 from the side, and FIG. 6 shows an example in which the imaging device 20 photographs the subject's front face from the front. When there is one imaging device 20, the two images are captured separately while the subject or the device is moved; when there are two, they may be installed to the side of and in front of the subject 40, as shown in FIG. 7, and capture both images simultaneously.

[0024] <Step 402> The user terminal device 10 receives the captured image information of the side face and front face of the user 40 photographed by the imaging device 20, stores it temporarily in the storage device 12, and automatically extracts facial feature points from it. The facial feature points are extracted as the 2D arrangement information of points lying on the outer contours of the side face and front face and on the edges of each facial part region. For example, this is done as follows.

[0025] By region segmentation based on color information and edge extraction using a differential filter, the facial outlines of the side face and the front face and the edge information of the facial parts are automatically extracted. As shown in FIG. 8, a group of fixed points on the outer contour of the side face (eyebrows, eyes, top of the nose, base of the nose, top of the upper lip, center of the lips, top of the lower lip, center of the chin, tip of the chin, etc.), a corresponding group of fixed points on the outer contour of the front face, and groups of fixed points on the edge of each facial part region (the upper, lower, left, and right end points of the eyebrow and eye regions, the left and right end points of the nose and mouth regions, etc.) are extracted as the 2D arrangement information of the facial feature points. In FIG. 8, the facial feature points 801 extracted from the side face and the front face are indicated by square marks.
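As a rough illustration of the differential-filter edge extraction mentioned above, the following sketch (not from the patent; a practical system would use color-based region segmentation and Sobel-type filters, for example via OpenCV) computes a simple gradient magnitude over a grayscale image given as nested lists:

```python
def edge_strength(img):
    """Apply a crude differential filter to a grayscale image (list of
    rows of intensities): |dI/dx| + |dI/dy| at each interior pixel.
    Border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal difference
            gy = img[y + 1][x] - img[y - 1][x]  # vertical difference
            out[y][x] = abs(gx) + abs(gy)
    return out

# A tiny image with a vertical boundary between dark (0) and bright (9):
# the edge response peaks along that boundary.
edges = edge_strength([[0, 0, 9, 9]] * 4)
```

Feature points on a face contour would then be picked from the high-response pixels, combined with the color-segmented part regions.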

[0026] The 3D information of the facial feature points is then computed from the 2D arrangement information of the extracted facial feature points 801. That is, the extracted facial feature points 801 are converted into a 3D coordinate system by matching the front and side views (stereo matching), and 3D facial feature points such as the positions, heights, and widths of the eyes and the height of the nose are calculated.
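The front/side matching described above can be sketched as follows. The coordinate conventions and point names are illustrative assumptions, not taken from the patent: the front view supplies (x, y), the side view supplies depth z together with a second y reading, and the shared y coordinate is averaged:

```python
def to_3d(front_pts, side_pts):
    """Combine front-view (x, y) and side-view (z, y) feature points
    into (x, y, z) coordinates. Points are matched by name; the y
    coordinate, visible in both views, is averaged."""
    pts3d = {}
    for name, (x, y_front) in front_pts.items():
        z, y_side = side_pts[name]
        pts3d[name] = (x, (y_front + y_side) / 2.0, z)
    return pts3d

# Hypothetical measurements (arbitrary units):
front = {"nose_tip": (0.0, 1.2), "eye_outer_r": (3.1, 4.0)}
side = {"nose_tip": (2.5, 1.0), "eye_outer_r": (0.4, 4.2)}
points_3d = to_3d(front, side)
# nose_tip -> (0.0, 1.1, 2.5): the nose height is recovered as z = 2.5
```

Quantities such as eye width or nose height then fall out as differences between the resulting 3D coordinates.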

[0027] <Step 403> The 3D structures of the head skeleton, eyebrows, eyes, nose, mouth, and so on are reconstructed from the 3D arrangement information of the facial feature points. For example, this is done as follows.

[0028] The 3D structure of the head skeleton is defined as a collection of horizontal slices of the whole head. The slice positions are determined by the 3D arrangement information of the facial feature points, and the contour shape at each slice position is drawn with straight lines 901 and curves 902 passing through the facial feature points, as shown in FIG. 9. The contour shapes of all the slices are integrated, and the figure 903 that interpolates the surfaces formed by the straight lines and curves is taken as the 3D structure of the head skeleton.

[0029] The 3D structure of the eyebrows is defined, from the 3D arrangement information of the facial feature points that make up the eyebrow part, as a contour shape consisting of straight lines and curves, as shown in FIG. 10; the figure 1001 that interpolates the surface formed by those lines is taken as the 3D structure of the eyebrows.

[0030] The 3D structure of the eyes is defined, from the 3D arrangement information of the facial feature points that make up the eye part, as a contour shape consisting of straight lines and curves, as shown in FIG. 11; the figure 1101 that interpolates the surface formed by those lines is taken as the 3D structure of the eyes.

[0031] The 3D structure of the nose is defined, from the 3D arrangement information of the facial feature points that make up the nose part, as a contour shape consisting of straight lines and curves, as shown in FIG. 12; the figure 1201 that interpolates the surface formed by those lines is taken as the 3D structure of the nose.

[0032] Likewise, the 3D structure of the mouth is defined, from the 3D arrangement information of the facial feature points that make up the mouth part, as a contour shape consisting of straight lines and curves, as shown in FIG. 13; the figure 1301 that interpolates the surface formed by those lines is taken as the 3D structure of the mouth.

[0033] <Step 404> The 3D face shape is reconstructed by integrating the facial parts (head skeleton, eyebrows, eyes, nose, mouth, etc.). FIG. 14 shows an example of a reconstructed 3D face shape 1401.

[0034] <Steps 405, 406> The reconstructed 3D face shape is given an emphasized (exaggerated) representation to make the subject's individuality more pronounced. For example, this is done as follows.

[0035] Based on a comparison between average facial feature points 1501, prepared in advance in the storage device 12 by averaging the 2D arrangement information of the facial feature points of multiple subjects as shown in FIG. 15, and the subject's facial feature points 801 shown in FIG. 8, the 3D shape of each facial part is altered so that the subject's 3D facial features are expressed more prominently. Specifically, the average facial feature point arrangement information a is compared with the subject's feature point arrangement information b to obtain the difference (b - a); the product of this difference and an arbitrarily set exaggeration rate is used as the correction amount, and the facial part is enlarged by this correction amount (when the correction is positive) or reduced (when it is negative).

[0036] For example, the nose is emphasized by enlarging or reducing its scale. FIG. 16 shows emphasis examples for a nose 1601: an enlarged nose 1602 and a reduced nose 1603.

[0037] <Steps 407, 408, 409> The user terminal device 10 transmits a continuous imaging request to the imaging device 20, and the imaging device 20 repeatedly photographs the subject's head from the front and transmits the captured image information to the user terminal device 10 over a fixed period of time. The user terminal device 10 sequentially receives the captured image information from the imaging device 20 and likewise repeats the automatic extraction of facial feature points over the fixed period. The fixed period is set to a length over which the personality can be estimated from the subject's mannerisms, for example about one to several minutes.

[0038] <Step 410> The displacements of the facial feature points are extracted from the continuous imaging information (video information) obtained by photographing the subject's head over the fixed period, for example the displacements of the movements of the eyebrows, eyes, mouth, and so on.

[0039] <Steps 411, 412> The extracted displacements of the facial feature points, that is, the displacements of each movement of the eyebrows, eyes, mouth, and so on, are matched against a personality classification table 1700 prepared in advance in the storage device 12, as shown in FIG. 17, to classify the subject's personality (behavioral personality), and the 3D shape of each facial part is altered accordingly.

[0040] For example, if the distance between the upper and lower end points of an eye fluctuates greatly, the subject is judged to blink frequently and to have an impatient personality, and the vertical size of the eye is made smaller. Conversely, if that distance fluctuates little, the subject is judged to have an easygoing personality, and the vertical size of the eye is made larger. Further, if, of the left and right end points of the eye, the inner feature point often falls while the outer one rises, the subject is judged to have an irritable personality, and the eye shape is deformed so as to be lifted up at the outside. FIG. 18 shows, for an eye 1801 before emphasis by personality classification, examples of emphasis as an eye 1802 lifted up at the outside and an eye 1803 lifted up at the inside.
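The eye deformations of FIG. 18 can be sketched as a scaling of the eye's feature points about their vertical centroid plus a lift of the outer corner; the point naming and the y-up coordinate convention are assumptions made for illustration.

```python
def deform_eye(points, height_scale=1.0, outer_lift=0.0):
    """Illustrative eye deformation in the spirit of FIG. 18: scale the
    eye's vertical extent about its centroid and lift the outer corner.
    `points` maps names ("top", "bottom", "inner", "outer") to (x, y)
    coordinates, with y increasing upward."""
    cy = sum(y for _, y in points.values()) / len(points)
    deformed = {name: (x, cy + (y - cy) * height_scale)
                for name, (x, y) in points.items()}
    if "outer" in deformed:
        x, y = deformed["outer"]
        deformed["outer"] = (x, y + outer_lift)  # lift outer corner upward
    return deformed
```

With `height_scale < 1` the eye narrows (impatient emphasis), with `height_scale > 1` it widens (easygoing emphasis), and a positive `outer_lift` yields the outwardly lifted eye 1802.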

[0041] In this way, the classification of the personality and its degree are determined according to the degree of displacement of the facial feature points obtained from the continuously photographed image information of the subject, and the degree of shape deformation of the corresponding face parts is determined from them.

[0042] Although one embodiment of the present invention has been described above, it goes without saying that some or all of the processing functions of the CPU in the user terminal device shown in FIG. 2 can be implemented as a computer program and that program executed on a computer to realize the present invention, and that the processing procedures shown in FIG. 3 and FIG. 4 can likewise be implemented as a computer (CPU) program and executed by a computer. Furthermore, a program for realizing those processing functions on a computer, or for causing a computer to execute those processing procedures, can be recorded on a computer-readable recording medium, for example, an FD (floppy (registered trademark) disk), MO, ROM, memory card, CD, DVD, or removable disk, and thereby stored or provided, and the program can also be distributed through a network such as the Internet.

[0043]

[Effects of the Invention] According to the present invention, a face three-dimensional CG reflecting the subject's appearance information and the personality information obtained from the subject's motions can be generated easily and automatically. In addition, by emphasizing face parts on the basis of information obtained from the shape of the face and the subject's gestures, a face three-dimensional CG that brings out the subject's individuality more strongly can be created.

[Brief Description of the Drawings]

FIG. 1 is a configuration diagram of the entire system to which the present invention is applied.

FIG. 2 is a diagram showing a configuration example of the user terminal device.

FIG. 3 is an overall processing flow diagram of the present invention.

FIG. 4 is a detailed processing flow diagram of one embodiment of the present invention.

FIG. 5 is a diagram showing an example of photographing a side face.

FIG. 6 is a diagram showing an example of photographing a front face.

FIG. 7 is a diagram showing an example of simultaneous photographing of a front face and a side face.

FIG. 8 is a diagram showing an example of extraction of facial feature points.

FIG. 9 is a diagram showing an example of three-dimensional structure restoration of the head skeleton.

FIG. 10 is a diagram showing an example of three-dimensional structure restoration of an eyebrow.

FIG. 11 is a diagram showing an example of three-dimensional structure restoration of an eye.

FIG. 12 is a diagram showing an example of three-dimensional structure restoration of the nose.

FIG. 13 is a diagram showing an example of three-dimensional structure restoration of the mouth.

FIG. 14 is a diagram showing an example of restoration of a face three-dimensional shape.

FIG. 15 is a diagram showing an example of the arrangement of facial feature points of the average face.

FIG. 16 is a diagram showing an example of emphasized representation of the nose.

FIG. 17 is a diagram showing an example of the correlation among the displacement amounts of facial feature points, personality classification, and shape deformation in the personality classification table.

FIG. 18 is a diagram showing an example of eye emphasis based on personality classification.

[Explanation of Symbols]

10 user terminal device
20 photographing device
30 network
400 subject
801 facial feature point
903 head skeleton
1001 eyebrow
1101 eye
1201 nose
1301 mouth
1401 face three-dimensional shape
1501 average facial feature point
1601 nose model before emphasis
1602 enlarged nose model
1603 reduced nose model
1801 eye model before emphasis by personality classification
1802 eye model lifted up at the outside
1803 eye model lifted up at the inside

Continuation of front page — F-terms (reference): 5B050 AA10 BA07 BA08 BA09 BA12 CA07 DA01 EA06 EA19 EA24 EA28 FA02 FA08 FA17; 5B057 AA20 BA02 CA12 CA16 CB13 CB17 CE15 DA08 DA17 DB02 DC05 DC36; 5L096 CA02 FA09 GA08 HA04

Claims (7)

[Claims]

1. A face three-dimensional computer graphics generation method for expressing a subject's face by three-dimensional computer graphics, the method comprising: a process of restoring a face three-dimensional shape based on appearance information of the subject; and a process of classifying the personality of the subject and reflecting the classified personality in the face three-dimensional shape.
2. The face three-dimensional computer graphics generation method according to claim 1, wherein, in the process of restoring the face three-dimensional shape based on the appearance information of the subject, facial feature points are extracted from photographed image information obtained by photographing the subject's head from two directions, front and side; a three-dimensional structure of each face part, such as the head skeleton, eyebrows, eyes, nose, and mouth, is restored based on the facial feature points; and the three-dimensional structures of the face parts are integrated to restore the face three-dimensional shape.
3. The face three-dimensional computer graphics generation method according to claim 2, wherein, in the process of restoring the face three-dimensional shape based on the appearance information of the subject, the facial feature points of the subject are compared with the facial feature points of an average face, and the face part structure of the restored face three-dimensional shape is modified according to the difference.
4. The face three-dimensional computer graphics generation method according to any one of claims 1 to 3, wherein, in the process of reflecting the personality of the subject in the face three-dimensional shape, displacement amounts of the facial feature points are extracted from video information obtained by continuously photographing the subject's head; the personality of the subject is classified based on the displacement amounts of the facial feature points; and the face part structure of the face three-dimensional shape is changed according to the classified personality.
5. The face three-dimensional computer graphics generation method according to claim 4, wherein, in the process of reflecting the personality of the subject in the face three-dimensional shape, displacement amounts of the motions of face parts such as the eyebrows, eyes, and mouth are extracted from video information obtained by continuously photographing the subject's head; the behavioral personality of the subject is classified by pattern matching according to the displacement amounts of the motions of the face parts; and each face part structure of the face three-dimensional shape is modified based on the behavioral personality.
6. A program for causing a computer to execute the face three-dimensional computer graphics generation method according to any one of claims 1 to 5.
7. A recording medium recording a program for causing a computer to execute the face three-dimensional computer graphics generation method according to any one of claims 1 to 5.
JP2001334370A 2001-10-31 2001-10-31 Method of generating face 3d computer graphics, its program, and recording medium Pending JP2003141563A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2001334370A JP2003141563A (en) 2001-10-31 2001-10-31 Method of generating face 3d computer graphics, its program, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2001334370A JP2003141563A (en) 2001-10-31 2001-10-31 Method of generating face 3d computer graphics, its program, and recording medium

Publications (1)

Publication Number Publication Date
JP2003141563A true JP2003141563A (en) 2003-05-16

Family

ID=19149511

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2001334370A Pending JP2003141563A (en) 2001-10-31 2001-10-31 Method of generating face 3d computer graphics, its program, and recording medium

Country Status (1)

Country Link
JP (1) JP2003141563A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013152639A1 (en) * 2012-04-11 2013-10-17 腾讯科技(深圳)有限公司 Video chatting method and system
US9094571B2 (en) 2012-04-11 2015-07-28 Tencent Technology (Shenzhen) Company Limited Video chatting method and system
KR101420020B1 (en) 2012-12-12 2014-07-17 한국 한의학 연구원 Method and apparatus for detecting profile face
US10453248B2 (en) 2017-05-11 2019-10-22 Colopl, Inc. Method of providing virtual space and system for executing the same
JP7202045B1 (en) 2022-09-09 2023-01-11 株式会社PocketRD 3D avatar generation device, 3D avatar generation method and 3D avatar generation program
WO2024053235A1 (en) * 2022-09-09 2024-03-14 株式会社PocketRD Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program
JP2024039293A (en) * 2022-09-09 2024-03-22 株式会社PocketRD Three-dimensional avatar generation device, three-dimensional avatar generation method and three-dimensional avatar generation program

Similar Documents

Publication Publication Date Title
US11625878B2 (en) Method, apparatus, and system generating 3D avatar from 2D image
TWI708152B (en) Image processing method, device, and storage medium
KR101190686B1 (en) Image processing apparatus, image processing method, and computer readable recording medium
JP7504968B2 (en) Avatar display device, avatar generation device and program
US8437514B2 (en) Cartoon face generation
US8698796B2 (en) Image processing apparatus, image processing method, and program
CN101324961B (en) Human face portion three-dimensional picture pasting method in computer virtual world
Liao et al. Automatic caricature generation by analyzing facial features
CN109410298B (en) Virtual model manufacturing method and expression changing method
WO2020150687A1 (en) Systems and methods for photorealistic real-time portrait animation
WO2024169314A1 (en) Method and apparatus for constructing deformable neural radiance field network
CN113628327B (en) Head three-dimensional reconstruction method and device
US11282257B2 (en) Pose selection and animation of characters using video data and training techniques
JP3753625B2 (en) Expression animation generation apparatus and expression animation generation method
JP2003141563A (en) Method of generating face 3d computer graphics, its program, and recording medium
KR20010084996A (en) Method for generating 3 dimension avatar using one face image and vending machine with the same
Paier et al. Unsupervised learning of style-aware facial animation from real acting performances
JP2003248842A (en) Facial three-dimensional computer graphic generation method, and program and recording medium thereof
WO2021244040A1 (en) Facial expression editing method and electronic device
WO2002097732A1 (en) Method for producing avatar using image data and agent system with the avatar
CN107369209A (en) A kind of data processing method
JP2003030684A (en) Face three-dimensional computer graphic generation method and device, face three-dimensional computer graphic generation program and storage medium storing face three-dimensional computer graphic generation program
JP2001034785A (en) Virtual transformation device
CN117808943B (en) Three-dimensional cartoon face reconstruction method, device, equipment and storage medium
WO2015042867A1 (en) Method for editing facial expression based on single camera and motion capture data