TWM508085U - System for generating three-dimensional facial image and device thereof - Google Patents


Info

Publication number: TWM508085U
Application number: TW104202391U
Authority: TW (Taiwan)
Prior art keywords: facial, data, feature, avatar, communication device
Other languages: Chinese (zh)
Inventors: Shiann-Tsong Tsai, Li-Chuan Chiu, Wei-Meen Liao
Original assignee: Speed 3D Inc
Application filed by Speed 3D Inc
Priority to TW104202391U priority Critical patent/TWM508085U/en
Publication of TWM508085U publication Critical patent/TWM508085U/en

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A system for generating a three-dimensional facial image is disclosed. The system includes a server and at least one communication device. The communication device communicates with the server and stores, in advance, doll (base-body) data received from the server. The communication device then receives facial feature data and facial model data transmitted from the server, adjusts the base-body data according to both, and displays the three-dimensional facial image according to the facial model data and the adjusted base-body data. A device for generating a three-dimensional facial image is disclosed as well.

Description

Three-dimensional avatar generating system and device thereof

The present utility model relates to a three-dimensional avatar generating system and a device thereof.

In daily life today, as the barriers to deploying communication infrastructure keep falling and mobile communication devices see widespread use, the online world and virtual environments have become increasingly convenient and accessible, and they occupy an ever-growing share of people's time.

Having invested so much time and emotion, users increasingly value cultivating a "self" in the online or virtual world. Using mere text or numbers to represent identity has long been clearly insufficient. Although various communication media and social networking sites later let users identify themselves or highlight personal style with photos or profile pictures, these remain stuck at the two-dimensional stage and cannot give users the feeling of a virtual self leaping off the screen.

To solve this problem, virtual figure technology (commonly called an avatar) has been developed. It mainly generates, on an electronic device, a three-dimensional head that resembles the user, which can then be developed into a full-body figure serving as the user's representative in the online or virtual world. So far, however, this technology goes no further than building very large libraries of facial-feature, hairstyle, face-shape, and body-shape modules from which users pick and combine components matching their own appearance. Even though the assembled modules allow detailed manual fine-tuning, human appearance varies so widely that a limited set of modules can hardly produce a virtual figure that truly, closely resembles its user.

In view of this, the present inventors conceived a three-dimensional avatar generating system and communication device that combine a base body (素體, a blank figure model) with data derived from the user's face to generate a highly similar three-dimensional avatar. Because the base body is stored on the user's device in advance, and the face-related data is received rather than computed by the device itself, both the processing time and the hardware requirements of the device are reduced. Once a closely resembling three-dimensional avatar is available, the user can easily create a highly self-representative, recognizable virtual figure, making the online world and virtual environments more enjoyable to use.

Accordingly, an object of the present utility model is to provide a three-dimensional avatar generating system and a communication device thereof that combine a base body with data derived from the user's face to generate a highly similar three-dimensional avatar. Because the base body is stored on the user's device in advance, and the face-related data is received rather than computed by the device itself, both the processing time and the hardware requirements of the device are reduced. Once a closely resembling three-dimensional avatar is available, the user can easily create a highly self-representative, recognizable virtual figure, making the online world and virtual environments more enjoyable to use.

In the present utility model, the term "avatar" does not necessarily cover the entire human head in the biological or physiological sense; it need only include at least the face. In other words, the three-dimensional avatar generated here is primarily a three-dimensional face, and there is no requirement to include regions such as the hair or the back of the head, which do not differ significantly from person to person.

To achieve the above object, a three-dimensional avatar generating system according to the present utility model includes a server and at least one communication device. The communication device is communicatively connected to the server and stores, in advance, base-body data received from the server. The server transmits facial feature data and facial model data to the communication device. The communication device adjusts the base-body data according to the facial feature data and the facial model data, and then generates a three-dimensional avatar according to the facial model data and the adjusted base-body data.

To achieve the above object, the present utility model also provides a three-dimensional avatar generating device. The device includes a transmission unit, a storage unit, and a processing unit. The storage unit stores base-body data in advance. The processing unit is electrically connected to the transmission unit and to the storage unit. The transmission unit receives facial feature data and facial model data; the processing unit adjusts the base-body data according to these two kinds of data and generates a three-dimensional avatar according to the facial model data and the adjusted base-body data.

In one embodiment, the base-body data comes from a server.

In one embodiment, the facial feature data or the facial model data is derived by a server from a planar portrait, and the planar portrait corresponds to the three-dimensional avatar.

In one embodiment, the facial feature data includes a plurality of facial feature points, and the base-body data includes at least one feature region containing a plurality of feature-region feature points. The facial feature points correspond one-to-one to the feature-region feature points, and the processing unit adjusts the spatial coordinates of the feature-region feature points according to the facial feature points.

In one embodiment, the facial model data includes a plurality of facial alignment points, and the base-body data includes a plurality of base-body alignment points. The facial alignment points correspond one-to-one to the base-body alignment points, allowing the processing unit to combine the facial model data with the base-body data.

In one embodiment, the processing unit changes the spatial coordinates of a portion of the base-body data according to the facial model data.

1‧‧‧three-dimensional avatar generating system

2‧‧‧communication device

3‧‧‧server

21, 31‧‧‧transmission unit

22, 32‧‧‧storage unit

23, 33‧‧‧processing unit

24‧‧‧display unit

4‧‧‧feature region

41‧‧‧feature-region feature point

5‧‧‧planar portrait

51‧‧‧facial feature point

6‧‧‧facial model data

61‧‧‧facial alignment point

71‧‧‧base-body alignment point

FIG. 1 is a schematic diagram of the system architecture of an embodiment of the three-dimensional avatar generating system of the present utility model.

FIG. 2 is a schematic diagram of the communication device of the embodiment displaying the base-body data.

FIG. 3 is a partially enlarged view of the base body of FIG. 2 with feature points marked.

FIG. 4 is a schematic diagram of the facial feature points extracted from a planar portrait in the embodiment.

FIG. 5 is a schematic diagram of the facial model data in the embodiment.

FIG. 6 is a schematic diagram of adjusting the base-body data according to the embodiment.

FIG. 7 is a schematic diagram of aligning and combining the face model with the base body according to the embodiment.

Preferred embodiments of the three-dimensional avatar generating system and its communication device according to the present utility model are described below with reference to the related drawings.

FIG. 1 is a schematic diagram of the system architecture of an embodiment of the three-dimensional avatar generating system. As shown in FIG. 1, the three-dimensional avatar generating system 1 of this embodiment includes at least one communication device 2 and a server 3; preferably it includes multiple communication devices 2 so that multiple users can operate it simultaneously.

The communication device 2 may be a smartphone, a tablet computer, a mobile digital assistant, a network-capable camcorder, a wearable device, a desktop computer, a notebook computer, or any other device with network connectivity. In this embodiment, the communication device 2 is exemplified by a smartphone that connects to the server 3 wirelessly over the Internet; in other embodiments, it may instead be a notebook or desktop computer operated at a fixed location.

The server 3 includes a transmission unit 31, a storage unit 32, and one or more processing units 33; the storage unit 32 and the transmission unit 31 are each communicatively connected to the processing unit 33. In the embodiments below, the server 3 performs computation and processing through the processing unit 33, transmits data through the transmission unit 31, and stores the related data in the storage unit 32.

The communication device 2 includes a transmission unit 21, a storage unit 22, a processing unit 23, and a display unit 24. The transmission unit 21, the storage unit 22, and the display unit 24 are each electrically connected to the processing unit 23.

The user can first use the transmission unit 21 of the communication device 2 to receive an application (app) from the server 3 and store it in the storage unit 22. Because the application bundles base-body ("doll") data, the communication device 2 downloads the base-body data along with the application. In other words, before the application is even run, the base-body data is already stored in the storage unit 22 of the communication device 2. A base body is a model having part or all of a human frame or outline — for example a head model, or the full-body model of this embodiment. FIG. 2 is a schematic diagram of the communication device of the embodiment displaying the base-body data.

In this embodiment, once the base-body data is opened by the communication device 2, a three-dimensional humanoid image including at least a face can be shown on the display unit 24. As shown in FIG. 2, the base-body data comprises a complete head, torso, and limbs; the front of the head is the face, which carries eyebrows and features such as the eyes, ears, nose, and mouth. The base-body data can be built by having the server 3 download human-body data, including facial data, and apply three-dimensional modeling.

FIG. 3 is a partially enlarged view of the base body shown in FIG. 2. Referring to FIG. 3, when the base-body data of this embodiment is created, the eyebrows, the facial features, and the face shape are defined as feature regions 4, each containing a plurality of feature-region feature points 41. Taking an eye as an example, its feature-region feature points 41 are arranged along the contour around the eye; in other words, the feature points 41 of the eye feature region 4 trace out the eye's outline. The spatial coordinate of each feature-region feature point 41 is recorded in the base-body data; one way to generate these coordinates is to take the center point of the face as the reference and compute each feature point's coordinate relative to it. In addition, each feature-region feature point 41 has its own registration number. In this embodiment there are eighty-seven feature-region feature points 41, distributed over feature regions 4 such as, but not limited to, the eyebrows, eyes, nose, mouth, and ears, and numbered sequentially from 1 to 87 so that the points can be told apart. Note that, to keep the drawing legible, FIG. 3 does not mark all eighty-seven feature-region feature points 41.
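The bookkeeping just described — registration-numbered feature points stored as coordinates relative to the face's center point — can be sketched as follows. This is an illustrative reconstruction only; the point count, registration numbers, and coordinate values are made up, not taken from the patent.

```python
# Sketch: store feature-region feature points as registration-numbered
# coordinates relative to the face's center point (illustrative data only).

def to_relative(points, center):
    """Convert absolute 3D coordinates to coordinates relative to `center`."""
    cx, cy, cz = center
    return {reg_no: (x - cx, y - cy, z - cz)
            for reg_no, (x, y, z) in points.items()}

# Registration number -> absolute coordinate (hypothetical eye-region points).
eye_region = {
    1: (10.0, 5.0, 2.0),
    2: (12.0, 5.5, 2.1),
    3: (14.0, 5.0, 2.0),
}
face_center = (12.0, 0.0, 0.0)

relative = to_relative(eye_region, face_center)
# Point 2 now reads as an offset from the face center: (0.0, 5.5, 2.1).
```

The registration number travels with each point, which is what later lets a photo-derived facial feature point find its counterpart on the base body.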

As FIG. 2 and FIG. 3 of this embodiment show, displaying the base-body data on the communication device 2 is not a necessary step for generating the three-dimensional avatar; the base-body data need not be displayed after it is stored, and may simply remain in the storage unit 22 for later use.

When the user wishes to create a three-dimensional avatar, the user can launch the application on the communication device 2 and upload an image to the server 3, which analyzes the image upon receipt. In this embodiment, the user takes a planar portrait of himself or herself — that is, a photo containing a facial image — with the communication device 2 and uploads it to the server 3 for analysis. In other embodiments, a photo or image already stored on the communication device 2, or anywhere else, may of course be used instead.

After the planar portrait is transmitted to the server 3, the processing unit 33 of the server 3 can recognize the facial features in the portrait by means of an algorithm or software program to form facial feature data. Specifically, the processing unit 33 applies an image-recognition algorithm or program to distinguish, within the portrait, specific regions that characterize the face — such as, but not limited to, the eyebrows, the facial features, and the face shape — and then traces the contour of each region with a series of points. The server 3 captures these points as facial feature points; the facial feature points, alone or together with other content, constitute the facial feature data, thereby capturing the characteristics of the face. FIG. 4 is a schematic diagram of the facial feature points extracted from the planar portrait 5 in this embodiment. Referring to FIG. 4, this embodiment analyzes the planar portrait 5 with the Active Appearance Model (AAM) algorithm and obtains a total of eighty-seven facial feature points 51. These eighty-seven facial feature points 51 also carry registration numbers and correspond to the feature-region feature points 41 of the base-body data, for later use in adjusting the base body's facial features. Again, to keep the drawing legible, FIG. 4 does not mark all eighty-seven facial feature points 51.

Of course, to improve the performance of the Active Appearance Model, it may first be trained on one or more sets of reference images. Furthermore, while the facial feature points 51 are being extracted, the algorithm can additionally be combined with model prediction and with skin-color-range differentiation in the YCbCr color space.
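Skin-color differentiation in YCbCr can be sketched as below. The patent does not disclose its thresholds; the ranges here (Cb in 77–127, Cr in 133–173) are common literature values, and the RGB-to-YCbCr conversion is the standard BT.601 approximation — both are assumptions for illustration.

```python
def rgb_to_ycbcr(r, g, b):
    """Approximate BT.601 RGB -> YCbCr conversion (full-range, 8-bit)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin_ycbcr(y, cb, cr):
    """Classic YCbCr skin test; thresholds are common literature values,
    not taken from the patent."""
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin_ycbcr(*rgb_to_ycbcr(200, 150, 120)))  # a typical skin tone
```

Masking out non-skin pixels this way narrows the search area before landmark fitting, which is one plausible reading of the "skin-color-range differentiation" mentioned above.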

Meanwhile, the processing unit 33 of the server 3 also analyzes the planar portrait 5 shown in FIG. 4 to produce facial model data. The storage unit 32 of the server 3 may store a large number of face models, each differing slightly from the others. Taking the geometric center of the set of extracted facial feature points 51 as the reference, the processing unit 33 can place the set of facial feature points 51 into a coordinate system and compute similarity from the distance and angle between the center and each facial feature point 51, thereby finding the most similar model in the face-model database. FIG. 5 is a schematic diagram of the face model selected in this manner in this embodiment.
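The selection step can be sketched as follows: describe each landmark by its distance and angle from the set's geometric center, then score every stored face model by how closely its signature matches. This is a simplified 2D reconstruction under made-up data; the patent names distance-and-angle similarity but does not disclose the exact formula, so the sum-of-squared-differences score below is an assumption.

```python
import math

def signature(points):
    """(distance, angle) of each point relative to the set's geometric center."""
    cx = sum(x for x, y in points) / len(points)
    cy = sum(y for x, y in points) / len(points)
    return [(math.hypot(x - cx, y - cy), math.atan2(y - cy, x - cx))
            for x, y in points]

def dissimilarity(sig_a, sig_b):
    """Sum of squared distance and angle differences (lower = more similar)."""
    return sum((da - db) ** 2 + (ta - tb) ** 2
               for (da, ta), (db, tb) in zip(sig_a, sig_b))

def best_model(landmarks, models):
    """Pick the stored face model whose signature is closest to the landmarks'."""
    target = signature(landmarks)
    return min(models,
               key=lambda name: dissimilarity(target, signature(models[name])))

# Hypothetical landmark sets (4 points instead of the patent's 87).
user = [(0, 2), (2, 0), (0, -2), (-2, 0)]
models = {
    "narrow": [(0, 2.1), (1.9, 0), (0, -2.1), (-1.9, 0)],
    "wide":   [(0, 1.0), (3.0, 0), (0, -1.0), (-3.0, 0)],
}
print(best_model(user, models))  # → narrow
```

Because the signature is taken relative to the landmark set's own center, the comparison is insensitive to where the face sits in the photo.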

The facial model data 6 includes a plurality of facial alignment points 61. The facial alignment points 61 are preset in each piece of face-model data, and their arrangement essentially forms the contour of the face model, as shown in FIG. 5.

The server 3 transmits the facial feature data and the facial model data to the communication device 2 through the transmission unit 31. After the communication device 2 receives the two kinds of data via the transmission unit 21, its processing unit 23 proceeds as follows. FIG. 6 is a schematic diagram of adjusting the base-body data according to this embodiment. Referring to FIG. 6, the processing unit 23 first uses the registration-number correspondence between the facial feature points 51 in the facial feature data and the feature-region feature points 41 in the base-body data to modify the spatial coordinate of each feature-region feature point 41 according to the spatial coordinate of the corresponding facial feature point 51. This changes the arrangement of the feature-region feature points 41 — and hence the pixel positions at which the base-body data is displayed — so that the base body's face, including but not limited to the relative positions of its features and its expression, comes to resemble the face in the planar portrait 5. In one aspect of this embodiment, the processing unit 23 first computes the spatial-coordinate difference between each facial feature point 51 and the feature-region feature point 41 sharing its registration number, and then feeds the differences into a radial basis function (RBF) network — a neural-network-like software system — to deform the base-body data, so that the base body's face takes on an appearance very close to the planar portrait 5.
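An RBF-driven deformation of this kind can be sketched with plain NumPy: fit weights so that the base body's feature points land exactly on the photo-derived landmark positions, then apply the same smooth mapping to every other vertex. This is a generic Gaussian-RBF reconstruction, not the patent's actual network; the 2D shapes, kernel choice, and kernel width are assumptions.

```python
import numpy as np

def rbf_warp(src, dst, eps=1.0):
    """Fit a Gaussian-RBF deformation that maps src landmarks onto dst landmarks.

    Returns a function applying the learned deformation to arbitrary points.
    """
    diff = src[:, None, :] - src[None, :, :]         # pairwise offsets (n, n, d)
    phi = np.exp(-eps * np.sum(diff ** 2, axis=-1))  # Gaussian kernel matrix
    weights = np.linalg.solve(phi, dst - src)        # solve for displacements

    def warp(points):
        d = points[:, None, :] - src[None, :, :]
        k = np.exp(-eps * np.sum(d ** 2, axis=-1))
        return points + k @ weights

    return warp

# Hypothetical 2D example: pull one landmark outward, as a photo might dictate.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[0.0, 0.0], [1.3, 0.1], [0.0, 1.0]])
warp = rbf_warp(src, dst)
moved = warp(src)  # landmarks land exactly on their targets
```

The interpolation property — landmarks map exactly onto their targets while nearby mesh vertices move smoothly — is what makes RBFs a natural fit for the per-point coordinate corrections described above.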

FIG. 7 is a schematic diagram of aligning and combining the face model with the base body according to this embodiment. Referring to FIG. 7 together with FIG. 3: the facial alignment points 61 carry their own registration numbers, and correspondingly the base-body data stored in the communication device 2 carries base-body alignment points 71 that likewise carry registration numbers, so the processing unit 23 can combine the face model with the base body by matching registration numbers between the two sets of alignment points. This step is akin to "applying a face skin" to the base body: the face model selected for its resemblance to the planar portrait 5 is attached to the base body, giving the base body the portrait's face-shape characteristics — such as, but not limited to, face width and chin shape.

However, because the base body's face is a preset standard face shape, discrepancies are inevitable once the face model is combined with it. For example, when the planar portrait 5 has a narrow face with a pointed chin, so does the face model; attaching such a face model to the base body would yield a three-dimensional avatar with base body protruding at the cheeks and a gap between the face model and the base body at the chin. In that case, the processing unit 23 further adjusts the base-body alignment points 71 according to the facial alignment points 61 of the face model. In this embodiment, the adjustment changes the spatial coordinates of the base-body alignment points 71, which in turn changes the pixel positions at which the base-body data is displayed, so that when the face model and base body are shown together there is neither protruding base body nor a gap between them. Moving a base-body alignment point 71 toward or away from the geometric center produces an effect as if part of the base body had been removed or added.
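The adjustment just described — moving base-body alignment points toward or away from their geometric center so the face model neither clips through the body nor leaves a gap — can be sketched as a scaling about the centroid. A minimal illustration with made-up point data:

```python
def scale_about_center(points, factor):
    """Move points toward (factor < 1) or away from (factor > 1) their centroid."""
    n = len(points)
    cx = sum(x for x, y, z in points) / n
    cy = sum(y for x, y, z in points) / n
    cz = sum(z for x, y, z in points) / n
    return [(cx + factor * (x - cx),
             cy + factor * (y - cy),
             cz + factor * (z - cz)) for x, y, z in points]

# Hypothetical cheek alignment points that protrude past a narrow face model:
cheeks = [(4.0, 0.0, 0.0), (-4.0, 0.0, 0.0), (0.0, 6.0, 0.0), (0.0, -6.0, 0.0)]
# Pull them 10% closer to the center so the face model covers them.
tucked = scale_about_center(cheeks, 0.9)
```

A real mesh would adjust each alignment point individually rather than with one uniform factor, but the toward-or-away-from-center motion is the same idea.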

Next, the processing unit 23 displays the base-body data — now adjusted by both the facial feature data and the facial model data — together with the facial model data on the display unit, producing a three-dimensional avatar corresponding to the planar portrait. To elaborate: in the displayed avatar, the facial features and eyebrows come from the base-body data as adjusted by the facial feature data, while the skin covering the face comes from the facial model data. The processing unit 23 may merge the adjusted base-body data and the facial model data into a single data set and display that; alternatively, it may keep the two data sets separate and display them individually, maintaining their proper relative positions through the alignment points. The present utility model is not limited in this respect.

Of course, the two steps of adjusting the base-body data need not be executed in a fixed order; it is equally feasible to first adjust the base body's face shape with the facial model data and then adjust its facial features with the facial feature data.

In other embodiments of the present utility model, the base-body data may cover only the upper body, only the head, or even only the face, depending on how much of the figure beyond the face the user wishes the three-dimensional humanoid image to include.

In other embodiments, after the three-dimensional avatar is generated, the processing unit 23 of the communication device 2 may additionally apply attachments to it so that the avatar has hair, glasses, a beard, or even other clothing accessories. Such attachment can likewise be achieved with the help of alignment points. Specifically, the three-dimensional avatar may carry an alignment point for hair, and the hair module chosen by the user also carries an alignment point; bringing the two alignment points together in space — that is, giving them the same spatial coordinates — combines the hair module with the three-dimensional avatar. Other attachments, such as glasses or a beard, can of course be applied in the same way.
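Attaching a hair (or glasses, or beard) module by making its alignment point coincide with the avatar's can be sketched as a rigid translation. The vertex data below is made up for illustration:

```python
def attach(accessory_vertices, accessory_anchor, avatar_anchor):
    """Translate an accessory so its alignment point coincides with the avatar's."""
    dx = avatar_anchor[0] - accessory_anchor[0]
    dy = avatar_anchor[1] - accessory_anchor[1]
    dz = avatar_anchor[2] - accessory_anchor[2]
    return [(x + dx, y + dy, z + dz) for x, y, z in accessory_vertices]

# Hypothetical hair module modeled around its own origin:
hair = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (-1.0, 0.5, 0.0)]
hair_anchor = (0.0, 0.0, 0.0)   # alignment point on the hair module
head_anchor = (0.0, 9.0, 1.0)   # corresponding alignment point on the avatar

placed = attach(hair, hair_anchor, head_anchor)
# The hair's alignment point now sits exactly at the avatar's head anchor.
```

A production pipeline would also match orientation and scale, but coinciding anchor coordinates is exactly the condition the text describes.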

In other embodiments, the generated three-dimensional avatar may further be combined with a chosen background to simulate the user's figure in any place or environment; it may be sent as data for three-dimensional printing to produce a physical figurine; or it may be made into electronic cards or stickers. The present utility model is not limited in this respect.

In yet other embodiments, after the planar portrait is uploaded, the server may first apply noise reduction or skin-beautifier processing to it, making subsequent recognition more accurate or the resulting three-dimensional avatar brighter and better-looking.

The present utility model further discloses a three-dimensional avatar generating device comprising a transmission unit, a storage unit, and a processing unit. The storage unit stores base-body data in advance, and the processing unit is electrically connected to the transmission unit and to the storage unit. The processing unit adjusts the base-body data according to facial feature data and facial model data, and generates a three-dimensional avatar according to the facial model data and the adjusted base-body data. The technical content and implementation details of this device are substantially the same as those of the communication device of the three-dimensional avatar system described above, so they are not repeated here.

In summary, generating a three-dimensional avatar purely through remote or cloud processing inevitably runs into the difficulty of transmitting large volumes of data, making generation too slow. The three-dimensional avatar generating system, communication device, and method of the present utility model instead store the base-body data in the communication device in advance and then receive only the facial feature data and facial model data for adjustment, so the bulky three-dimensional base-body data never burdens the transmission, and the avatar is generated more efficiently. More broadly, this approach balances two problems — local hardware too limited to process three-dimensional data at high speed, and remote or cloud processing that transmits too much data — making virtual figures easier to deploy in many settings.

Compared with handing the generation of the three-dimensional avatar entirely to either the communication device or the server, the present invention makes more efficient use of hardware resources. In addition, although downloading the base model data takes time, users more readily accept a longer installation when first downloading the application (APP) than a wait imposed at the moment they actually want to use it, which matches consumer expectations.

The above description is intended to be illustrative only and not limiting. Any equivalent modification or alteration that does not depart from the spirit and scope of the present invention shall be included in the scope of the appended claims.

1‧‧‧three-dimensional avatar generation system

2‧‧‧communication device

21, 31‧‧‧transmission unit

22, 32‧‧‧storage unit

23, 33‧‧‧processing unit

3‧‧‧server

Claims (11)

1. A three-dimensional avatar generating device, comprising: a transmission unit; a storage unit, pre-storing base model data; and a processing unit, electrically connected to the transmission unit and the storage unit respectively, wherein the transmission unit receives facial feature data and facial model data, and the processing unit adjusts the base model data according to the facial feature data and the facial model data and generates a three-dimensional avatar according to the facial model data and the adjusted base model data.

2. The three-dimensional avatar generating device of claim 1, wherein the base model data comes from a server.

3. The three-dimensional avatar generating device of claim 1, wherein the facial feature data or the facial model data is obtained by a server from a two-dimensional head image, and the two-dimensional head image corresponds to the three-dimensional avatar.

4. The three-dimensional avatar generating device of claim 1, wherein the facial feature data includes a plurality of facial feature points, the base model data includes at least one feature region, the feature region includes a plurality of feature-region feature points, the facial feature points correspond individually to the feature-region feature points, and the processing unit adjusts the spatial coordinates of the feature-region feature points according to the facial feature points.

5. The three-dimensional avatar generating device of claim 1, wherein the facial model data includes a plurality of facial alignment points, the base model data includes a plurality of base-model alignment points, and the facial alignment points correspond individually to the base-model alignment points, for the processing unit to combine the facial model data with the base model data.

6. The three-dimensional avatar generating device of claim 1, wherein the processing unit changes the spatial coordinates of a portion of the base model data according to the facial model data.

7. A three-dimensional avatar generating system, comprising: a server; and at least one communication device, communicatively connected to the server and pre-storing base model data from the server, wherein the server transmits facial feature data and facial model data to the communication device, the communication device adjusts the base model data according to the facial feature data and the facial model data, and the communication device generates a three-dimensional avatar according to the facial model data and the adjusted base model data.

8. The three-dimensional avatar generating system of claim 7, wherein the facial feature data or the facial model data is obtained by the server from a two-dimensional head image, and the two-dimensional head image corresponds to the three-dimensional avatar.

9. The three-dimensional avatar generating system of claim 7, wherein the facial feature data includes a plurality of facial feature points, the base model data includes at least one feature region, the feature region includes a plurality of feature-region feature points, the facial feature points correspond individually to the feature-region feature points, and the communication device adjusts the spatial coordinates of the feature-region feature points according to the facial feature points.

10. The three-dimensional avatar generating system of claim 7, wherein the facial model data includes a plurality of facial alignment points, the base model data includes a plurality of base-model alignment points, and the facial alignment points correspond individually to the base-model alignment points, for the communication device to combine the facial model data with the base model data.

11. The three-dimensional avatar generating system of claim 7, wherein the communication device changes the spatial coordinates of a portion of the base model data according to the facial model data.
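As a purely hypothetical illustration of the coordinate adjustment recited in claims 4 and 9, the sketch below moves each mapped feature-region point of the base model onto the spatial coordinates of its individually corresponding facial feature point. All names and the direct-assignment scheme are assumptions for illustration only.

```python
# Hypothetical sketch of claims 4 and 9: facial feature points correspond
# one-to-one with feature-region feature points of the base model, whose
# spatial coordinates are adjusted accordingly.

def adjust_feature_region(feature_region, facial_feature_points, mapping):
    """Return a copy of the feature region with mapped points moved onto
    the coordinates of their corresponding facial feature points.

    feature_region: {point_id: (x, y, z)} on the base model
    facial_feature_points: {point_id: (x, y, z)} from face analysis
    mapping: {facial_point_id: feature_region_point_id}, one-to-one
    """
    adjusted = dict(feature_region)
    for facial_id, region_id in mapping.items():
        adjusted[region_id] = facial_feature_points[facial_id]
    return adjusted

# An "eye" feature region on the base model, and two detected landmarks.
eye_region = {"eye_l": (1.0, 2.0, 0.0), "eye_r": (3.0, 2.0, 0.0)}
detected = {"f_eye_l": (1.1, 2.2, 0.1), "f_eye_r": (2.9, 1.9, 0.0)}
mapping = {"f_eye_l": "eye_l", "f_eye_r": "eye_r"}

result = adjust_feature_region(eye_region, detected, mapping)
print(result)   # {'eye_l': (1.1, 2.2, 0.1), 'eye_r': (2.9, 1.9, 0.0)}
```

A real implementation would also deform the surrounding vertices smoothly rather than moving single points; the claims leave that interpolation scheme open.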
TW104202391U 2015-02-13 2015-02-13 System for generating three-dimensional facial image and device thereof TWM508085U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW104202391U TWM508085U (en) 2015-02-13 2015-02-13 System for generating three-dimensional facial image and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW104202391U TWM508085U (en) 2015-02-13 2015-02-13 System for generating three-dimensional facial image and device thereof

Publications (1)

Publication Number Publication Date
TWM508085U true TWM508085U (en) 2015-09-01

Family

ID=54606692

Family Applications (1)

Application Number Title Priority Date Filing Date
TW104202391U TWM508085U (en) 2015-02-13 2015-02-13 System for generating three-dimensional facial image and device thereof

Country Status (1)

Country Link
TW (1) TWM508085U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110753179A (en) * 2019-09-06 2020-02-04 启云科技股份有限公司 Augmented reality shooting and recording interactive system


Similar Documents

Publication Publication Date Title
US11798246B2 (en) Electronic device for generating image including 3D avatar reflecting face motion through 3D avatar corresponding to face and method of operating same
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
US11270489B2 (en) Expression animation generation method and apparatus, storage medium, and electronic apparatus
US11736756B2 (en) Producing realistic body movement using body images
US11055514B1 (en) Image face manipulation
US9959453B2 (en) Methods and systems for three-dimensional rendering of a virtual augmented replica of a product image merged with a model image of a human-body feature
EP3912085A1 (en) Systems and methods for face reenactment
AU2018214005A1 (en) Systems and methods for generating a 3-D model of a virtual try-on product
JP2019510297A (en) Virtual try-on to the user's true human body model
CN110688948B (en) Method and device for transforming gender of human face in video, electronic equipment and storage medium
CN108513089B (en) Method and device for group video session
CN112513875B (en) Eye texture repair
WO2021082787A1 (en) Virtual operation object generation method and device, storage medium and electronic apparatus
JP7278724B2 (en) Information processing device, information processing method, and information processing program
US20200065559A1 (en) Generating a video using a video and user image or video
WO2023039462A1 (en) Body fitted accessory with physics simulation
TW201629907A (en) System and method for generating three-dimensional facial image and device thereof
WO2017141223A1 (en) Generating a video using a video and user image or video
KR102498056B1 (en) Metahuman generation system and method in metaverse
TWM508085U (en) System for generating three-dimensional facial image and device thereof
CN112446821B (en) Image processing method and device and electronic equipment
WO2021155666A1 (en) Method and apparatus for generating image
CN104715505A (en) Three-dimensional head portrait generating system and generating device and generating method thereof
CN204791190U (en) Three -dimensional head portrait generation system and device thereof
US11908098B1 (en) Aligning user representations