WO2021208330A1 - Method and apparatus for generating expression for game character - Google Patents


Publication number
WO2021208330A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2020/112616
Other languages
French (fr)
Chinese (zh)
Inventor
张鹏
牛莉丽
智慧嘉
Original Assignee
完美世界(重庆)互动科技有限公司
Priority to CN202010305320.3A priority patent/CN111530086B/en
Priority to CN202010305331.1A priority patent/CN111530087B/en
Application filed by 完美世界(重庆)互动科技有限公司
Publication of WO2021208330A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85Providing additional services to players
    • A63F13/87Communicating with other players during game play, e.g. by e-mail or chat

Abstract

Disclosed are a method and apparatus for generating an expression for a game character. The method comprises: when a game character is created, acquiring facial model data of the game character; acquiring expression feature data; generating first character expression data of the game character according to the facial model data and the expression feature data, wherein the first character expression data is configured as part of data of the created game character; and performing rendering according to the first character expression data to generate an expression for the game character.

Description

Method and Apparatus for Generating Expressions of Game Characters
This application claims priority to two Chinese patent applications filed with the Chinese Patent Office on April 17, 2020: Application No. 202010305320.3, entitled "Method and apparatus for generating expressions of game characters", and Application No. 202010305331.1, entitled "Method and apparatus for generating real-time emoticon packages in a game", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to augmented reality technology in the field of electronic games, and more specifically, to a method and apparatus for generating expressions of game characters.
Background
Many electronic game systems allow users (i.e., game players) to use "expressions" to portray the facial features of a game character, which makes the character's virtual image more vivid. A game character with facial expressions lets users experience the character's emotions more directly, enhancing the fun and appeal of the electronic game. However, the inventors found that expressions are rich and subtle, and that small changes in facial features produce different expressions; as a result, creating expressions in electronic game systems has always been tedious and time-consuming work. At present, the field lacks methods for creating expressions simply and quickly, and for generating expressions in real time.
Summary of the Invention
This application provides a method for generating an expression of a game character, including: when the game character is created, acquiring face model data of the game character; acquiring expression feature data; generating first character expression data of the game character according to the face model data and the expression feature data, wherein the first character expression data is configured as part of the data of the created game character; and performing rendering according to the first character expression data to generate the expression of the game character.
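The four claimed steps form a simple data pipeline. The sketch below is a hypothetical illustration only: the application does not prescribe an implementation, every function name and data value here is an invented placeholder, and the expression feature data is reduced to per-vertex offsets purely for clarity.

```python
# Illustrative sketch of the claimed method; all names and data are
# placeholders, not an implementation from the application itself.

def generate_character_expression(face_model, expression_features):
    """Step 3 of the claim: combine the character's face model data with
    expression feature data to produce first character expression data.
    Here expression features are modeled as per-vertex (dx, dy, dz) offsets."""
    return [
        (vx + dx, vy + dy, vz + dz)
        for (vx, vy, vz), (dx, dy, dz) in zip(face_model, expression_features)
    ]

# Steps 1-2: acquire face model data and expression feature data (toy values).
face_model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # expressionless face mesh
expression = [(0.0, 0.1, 0.0), (0.0, -0.1, 0.0)]  # per-vertex expression offsets

# Step 3: generate the first character expression data.
expr_data = generate_character_expression(face_model, expression)

# Step 4 (rendering) would consume expr_data; here we simply inspect it.
print(expr_data)
```

In the actual scheme the first character expression data would then be stored as part of the created character's game data before rendering.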
This application also provides an apparatus for generating an expression of a game character, including: a module for acquiring face model data of the game character when the game character is created; a module for acquiring expression feature data; a module for generating first character expression data of the game character according to the face model data and the expression feature data, wherein the first character expression data is configured as part of the data of the created game character; and a module for performing rendering according to the first character expression data to generate the expression of the game character.
This application also provides a computer program comprising computer-readable code which, when run on a client device, causes the client device to perform the above method for generating an expression of a game character.
This application also provides a computer-readable medium storing the computer program of the above method for generating an expression of a game character.
This application also provides a method for generating a real-time emoticon package in a game, including: receiving an instruction to generate a real-time emoticon package of a game character; acquiring face model data of the game character; acquiring expression feature data of the current game user in real time; and generating a real-time emoticon package of the game character in real time according to the face model data and the expression feature data.
This application also provides an apparatus for generating a real-time emoticon package in a game, including: a module for receiving an instruction to generate a real-time emoticon package of a game character; a module for acquiring face model data of the game character; a module for acquiring expression feature data of the current game user in real time; and a module for generating a real-time emoticon package of the game character in real time according to the face model data and the expression feature data.
This application also provides another computer program comprising computer-readable code which, when run on a client device, causes the client device to perform the above method for generating a real-time emoticon package in a game.
This application also provides another computer-readable medium storing the computer program of the above method for generating a real-time emoticon package in a game.
The technical solution of this application realizes the creation and generation of game-character expressions through augmented reality (AR) technology. Compared with the prior art, the method and apparatus of this application are simple and fast to operate; the resulting expressions are realistic and detailed and conform to the movement of human facial muscles, and expressions can also be generated in real time in an electronic game. In addition, the game-character expressions generated according to the method and apparatus of this application can be kept as part of the game data for continued use in the electronic game.
Brief Description of the Drawings
Fig. 1 is a flowchart of a method for generating an expression of a game character according to an embodiment of this application.
Fig. 2A is a schematic diagram of the user interface of a software program for detecting and saving a model's expressionless model data and expression model data according to an embodiment of this application.
Fig. 2B shows exemplary expressionless model data and expression model data of a model according to embodiments of this application.
Fig. 3 shows exemplary expression feature coefficients according to an embodiment of this application.
Fig. 4 is a schematic diagram of generating character expression data of a game character according to an embodiment of this application.
Fig. 5 shows nodes in the R3DS Wrap software used according to an embodiment of this application.
Fig. 6A shows exemplary calibration feature points according to an embodiment of this application.
Fig. 6B shows the face model data and character expression model data of the game character corresponding to the calibration feature points in Fig. 6A.
Fig. 7 is a schematic diagram of expression model data of different game characters according to an embodiment of this application.
Fig. 8 shows an exemplary morph file and target file according to an embodiment of this application.
Fig. 9 shows the operation interface for adding a morph file according to an embodiment of this application.
Fig. 10 shows the operation interface for setting expression animation attributes according to an embodiment of this application.
Fig. 11 shows the nodes implementing the "partial replacement" method according to an embodiment of this application.
Fig. 12 is a schematic diagram of an exemplary operation of the method in Fig. 11.
Fig. 13 is a schematic diagram of an interface for creating a game character according to an embodiment of this application.
Figs. 14A and 14B are schematic diagrams of game-character expressions generated non-real-time by the method of embodiments of this application.
Figs. 15A and 15B are schematic diagrams of game-character expressions generated in real time by the method of embodiments of this application.
Fig. 16 is a schematic diagram of an exemplary electronic game system implementing an embodiment of this application.
Fig. 17 is a block diagram of a client device for performing the method for generating an expression of a game character according to this application, as provided by an embodiment of this application.
Fig. 18 shows a storage unit, provided by an embodiment of this application, for holding or carrying program code implementing the method for generating an expression of a game character according to this application.
Fig. 19 is a flowchart of a method for generating a real-time emoticon package in a game according to an embodiment of the invention.
Fig. 20 is a schematic diagram of an interface for creating a real-time emoticon package according to an embodiment of the invention.
Figs. 21A-21C are schematic diagrams of generating a real-time emoticon package according to embodiments of the invention.
Fig. 22 is a schematic diagram of previewing a generated real-time emoticon package according to an embodiment of the invention.
Fig. 23 is a schematic diagram of an interface for naming a real-time emoticon package according to an embodiment of the invention.
Fig. 24 is a schematic diagram of an interface for selecting and sending a real-time emoticon package according to an embodiment of the invention.
Figs. 25A and 25B are schematic diagrams of previewing and playing a real-time emoticon package before sending according to embodiments of the invention.
Figs. 26A and 26B are schematic diagrams of receiving and viewing a real-time emoticon package according to embodiments of the invention.
Figs. 27A and 27B are schematic diagrams of editing a real-time emoticon package and previewing the edited real-time emoticon package according to embodiments of the invention.
Fig. 28 is a block diagram of a client device for performing the method for generating a real-time emoticon package in a game according to this application, as provided by an embodiment of this application.
Fig. 29 shows a storage unit, provided by an embodiment of this application, for holding or carrying program code implementing the method for generating a real-time emoticon package in a game according to this application.
Detailed Description
The content of this application will now be described with reference to several exemplary embodiments. It should be understood that these embodiments are described only to enable those of ordinary skill in the art to better understand and thereby implement the content of this application, and do not imply any limitation on its scope. As used herein, the term "including" and its variants should be read as open-ended, meaning "including but not limited to". The term "based on" should be read as "based at least in part on". The terms "one embodiment" and "an embodiment" should be read as "at least one embodiment". The term "another embodiment" should be read as "at least one other embodiment".
In the embodiments of this application, the method for generating an expression of a game character can be applied at any stage and in any scenario of an electronic game, including but not limited to: when a game character is created; when users chat (for example, through a chat room or chat window); when taking photos, screenshots, or screen recordings; or when the user or the game plot meets specific conditions. In some embodiments, the timing for applying the method is preset by the electronic game system; in other embodiments, it is selected by the user as needed. In a preferred embodiment of this application, the method is applied when the game character is created.
Fig. 1 shows a flowchart of a method for generating an expression of a game character according to an embodiment of this application. Each step of the method is described in turn below.
Acquiring the Face Model Data of the Game Character
In the embodiments of this application, a "game character" refers to any virtual character in an electronic game, preferably the virtual character the user is controlling. The face model data of the game character can be acquired at any stage and in any scenario, preferably when the game character is created. The "face model data of the game character", i.e., the art production model of the game character, refers to data that, when rendered, presents the expressionless face of the game character. See Fig. 4, which shows the art production model of a game character. In the embodiments of this application, the art production model of the game character is preset by the game developers and saved in the game data. Here, "preset", as opposed to "non-preset" or "real-time", means prepared in advance rather than produced or generated in real time.
Acquiring the Expression Feature Data
In the embodiments of this application, "expression feature data" is a general term for data capable of representing an expression. The acquired expression feature data can be preset or real-time (i.e., non-preset); the technical solutions for these two cases are discussed separately below.
In the preset embodiments of this application, the expression feature data includes preset expressionless model data of a model and expression model data of the model. For the same model (person), there is only one set of expressionless model data, while there may be one or more sets of expression model data. Preset expression model data can be edited to generate a different expression model: the expression model data before editing is called the model's first expression model data, and the edited data is called the model's second expression model data. The second expression model data can be further edited to generate the model's third expression model data, and so on. Both the first and second expression model data may each comprise one or more sets. Each set of expression model data corresponds to one of a plurality of different types of expressions. The expression model data can be acquired through a camera module (for example, the camera of a mobile phone, tablet, or computer, or a video camera).
In the embodiments of this application, the "model" can be any person, including but not limited to game developers, testers, and users, and is preferably a game developer. The models may be the same person or different people. "Expressionless model data of a model" refers to mesh structure data describing the topology of an expressionless face. "Expression model data of a model" refers to mesh structure data describing the topology of a face with an expression, relative to the expressionless model data. In the embodiments of this application, "model data" and "model" are used interchangeably.
In the embodiments of this application, software (for example, the known ARKit software of Ark Platforms Inc.; details of its functions are available at http://www.arkit.io/) can be used together with a camera module to acquire the model's expressionless model data and expression model data. A camera module is used to photograph or detect the model's face; the model's face may also be a previously captured picture or photograph. The expressionless and expression model data can be represented and stored in different data formats as required. The applicant wrote a software program (for example, an app running on a mobile phone, tablet, or computer) that calls the ARKit software and controls the camera module to complete the above data acquisition. For example, in model data obtained through ARKit, in particular the ARFaceGeometry function of the ARFaceAnchor module, each set of model data contains 1220 vertices and 2304 triangles. The applicant's software program can save this model data in the obj format.
In the embodiments of this application, the calls to and control of the ARKit software and camera module can be implemented by a hardware module (usually customized), or by a module mixing software and hardware. Other methods or software may also be used to acquire the expressionless and expression model data, as long as they achieve the purpose of this application.
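As a rough illustration of the save step, a face mesh of the kind described (vertex positions plus triangular faces) can be serialized to Wavefront .obj format in a few lines. This sketches only the file format, not the applicant's actual capture program, and the toy mesh below stands in for the 1220-vertex ARKit geometry.

```python
# Minimal .obj serializer for a triangle mesh; data here is a toy
# stand-in, not real ARFaceGeometry output.

def write_obj(vertices, triangles):
    """Serialize vertices [(x, y, z)] and triangles [(i, j, k), 0-based
    indices] into an .obj string (faces are 1-based in .obj)."""
    lines = ["v {:.6f} {:.6f} {:.6f}".format(*v) for v in vertices]
    lines += ["f {} {} {}".format(i + 1, j + 1, k + 1) for i, j, k in triangles]
    return "\n".join(lines) + "\n"

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
print(write_obj(verts, tris))
```

A real capture app would emit 1220 `v` lines and 2304 `f` lines per model and write the string to a file in the target folder.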
Fig. 2A shows a schematic diagram of the user interface of a software program for detecting and saving a model's expressionless model data and expression model data according to an embodiment of this application. In the embodiment shown in Fig. 2A, the program runs on an Apple iPhone. The operator (for example, a game developer) can tap the "FaceCapture" button to save the expressionless model data (left) and expression model data (right) to the corresponding folders. As described previously, the program performs these operations by calling the ARKit software and controlling the camera module. In this embodiment, the operator is also the model. Fig. 2B shows exemplary expressionless and expression model data according to an embodiment of this application. After the expression model data serving as the model's first expression model data is acquired, any editing known in the art can be applied to it to generate the model's second expression model data.
In the non-preset embodiments of this application, the expression feature data includes expression feature coefficients acquired in real time; the expression feature data may further include a rotation matrix. The expression feature coefficients are acquired in real time from the current game user through a camera module. "Expression feature coefficients" refers to data capable of representing facial features: coefficients describing the offsets of facial features relative to an expressionless face, and thereby describing the facial features when an expression is present. The rotation matrix has the meaning commonly understood by those skilled in the art.
In the embodiments of this application, the expression feature coefficients can be provided, for example, by the ARKit software (in particular the ARBlendShapeLocation function of the ARFaceAnchor module) in the form of the blendShapes attribute or parameters; other software or other forms may also be used. As known in the art, in the ARKit software the expression feature coefficients form a model representing the offset coefficients of a series of facial features relative to the expressionless state. ARKit currently provides 52 expression feature coefficients, distributed over the left eye, right eye, mouth, jaw, eyebrows, cheeks, tongue, and other parts; each coefficient is a floating-point number between 0 and 1, where 0 means no expression and 1 means the maximal expression. Fig. 3 shows exemplary expression feature coefficients according to an embodiment of this application, with the left eye shown open and closed. The expressionless and expression model data contained in the expression feature data also have corresponding expression feature coefficients. For example, if the left and right sides of Fig. 3 are assumed to be the model's expressionless model data and expression model data respectively, then all coefficients of the expressionless model data are 0, while in the expression model data the coefficient corresponding to the left eye is 1 and the other coefficients are 0.
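The role of the coefficients can be illustrated with the standard delta-blendshape formula, in which each coefficient scales a per-vertex offset between the neutral face and a maximal-expression target. The code below is a generic sketch under that assumption; the blendshape name and vertex data are invented for illustration and are not ARKit's real geometry.

```python
# Delta-blendshape sketch: result = neutral + sum_i coeff_i * (target_i - neutral).
# Shape names and vertex values are illustrative placeholders.

def apply_coefficients(neutral, targets, coeffs):
    """Apply 0..1 expression coefficients to flattened vertex coordinates."""
    result = list(neutral)
    for name, coeff in coeffs.items():
        target = targets[name]
        result = [r + coeff * (t - n)
                  for r, t, n in zip(result, target, neutral)]
    return result

neutral = [0.0, 0.0, 0.0]                       # expressionless face (flattened)
targets = {"eyeBlinkLeft": [0.0, -1.0, 0.0]}    # maximal left-eye-closed target

print(apply_coefficients(neutral, targets, {"eyeBlinkLeft": 0.0}))  # no expression
print(apply_coefficients(neutral, targets, {"eyeBlinkLeft": 1.0}))  # maximal blink
```

A coefficient of 0 reproduces the neutral mesh and 1 reproduces the target, matching the text's description of the left-eye example.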
Generating the Character Expression Data of the Game Character from the Face Model Data and the Expression Feature Data
In the embodiments of this application, the generated character expression data causes the game character to display the expression represented by the expression feature data. First, the face model data is wrap-deformed against the model's expressionless model data and expression model data respectively; the wrap-deformed data is then blend-deformed (fused) to obtain character expression model data corresponding to the face model data. The character expression model data can serve as the character expression data of the game character. The expressionless and expression model data involved may be preset or non-preset, and each may comprise one or more sets. Character expression model data (character expression data) generated with the model's first expression model data may be called first character expression model data (first character expression data), that generated with the model's second expression model data may be called second character expression model data (second character expression data), and so on.
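The wrap-then-blend pipeline can be summarized arithmetically: once the model's neutral and expression meshes have been wrapped onto the character's topology, their difference is an expression delta that can be added on top of the character's own face model. The sketch below shows only this principle with made-up coordinates; the actual wrap and blendshape deformations performed by the tooling are far more involved.

```python
# Principle sketch of the fusion step:
# character_expression = character_neutral
#                        + (wrapped_model_expression - wrapped_model_neutral)
# All coordinates are toy values in flattened form.

def transfer_expression(char_neutral, wrapped_model_neutral, wrapped_model_expr):
    """Re-apply the model's expression delta on the character's face model."""
    return [cn + (we - wn)
            for cn, wn, we in zip(char_neutral,
                                  wrapped_model_neutral,
                                  wrapped_model_expr)]

char_neutral = [0.0, 2.0, 0.0]    # character's art production model
model_neutral = [0.0, 1.0, 0.0]   # model's neutral mesh, wrapped to character topology
model_smile = [0.5, 1.0, 0.0]     # model's expression mesh, wrapped likewise

print(transfer_expression(char_neutral, model_neutral, model_smile))
```

The output mesh keeps the character's own proportions while taking on the model's expression offset, which is the intended effect of the fusion deformation.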
In the embodiments of this application, software (for example, the known R3DS Wrap software of Russian3DScanner LLC; details of its functions are available at https://www.russian3dscanner.com/) can be used to generate the above character expression data, in particular to perform the above "wrap deformation" and "blend deformation" processing. Other methods or software may also be used, as long as they achieve the purpose of this application. As with the ARKit software, a software program, a hardware module, or a mixed software/hardware module can be set up to call the R3DS Wrap software. The module calling the ARKit software and the module calling the R3DS Wrap software may be the same module, different modules able to exchange data with each other, or different modules under the same system (able to exchange data with each other). Fig. 4 shows a schematic diagram of generating the character expression data of a game character according to an embodiment of this application. As known in the art, R3DS Wrap is node-based software whose functions are realized by selecting and connecting nodes; for example, embodiments of this application may use its Wrapping and Blendshapes nodes.
Fig. 5 shows the nodes used in the R3DS Wrap software according to an embodiment of the present application, in which the facial model data of the game character is subjected to wrap deformation processing with the model's expressionless model data and the model's expression model data respectively; the data obtained after the wrap deformation processing is then subjected to blend deformation processing on the basis of the facial model data of the game character, so as to obtain the character expression model data of the game character as the character expression data; finally, a SaveGeom node is used to save the generated data in the obj format. In the embodiments of the present application, the R3DS Wrap software and the ARKit software may use models or data with the same underlying data structure, so that the models or data used or generated by one can be used by the other.
In the embodiments of the present application, during the wrap deformation processing, calibration feature points in one-to-one correspondence may be selected on the facial model data and the model's expressionless model data. In the embodiments of the present application, the calibration feature points may be the same as or different from the locations indicated by the expression feature coefficients. In the embodiments of the present application, an appropriate number of calibration feature points may be selected as needed. In the embodiments of the present application, the selected calibration feature points may be adjusted as needed; for example, more points are usually distributed at the eyes, mouth, and other locations that strongly affect the expression. Such selection of the number and locations of the calibration feature points reduces the workload and thereby improves production efficiency. Fig. 6A shows exemplary calibration feature points according to an embodiment of the present application, and Fig. 6B shows the facial model data and character expression model data of the game character corresponding to the calibration feature points in Fig. 6A.
In the embodiments of the present application, when the game character needs to be replaced, the facial model data of the game character changes accordingly. In the embodiments of the present application, the existing model data of the human model may be reused to repeatedly perform wrap deformation processing for different game characters, which simplifies expression production and thereby improves production efficiency. In the embodiments of the present application, when the game character changes, the facial model data of the changed game character may be imported, and a plurality of identical calibration feature points may then be selected on it to obtain the character expression model data corresponding to the modified facial model data. Fig. 7 shows a schematic diagram of the expression model data of different game characters according to an embodiment of the present application, in which, with the model's expressionless model data and the model's expression model data unchanged while the game character changes, the character expression model data of a plurality of different game characters is produced according to the above simplified method.
In the embodiments of the present application, corresponding expression animation data may optionally be generated on the basis of the character expression data generated above, so that the expression switching of the game character is smoother and more natural. In the embodiments of the present application, "expression animation" refers to the process of blending from one expression to another. In the embodiments of the present application, the Morph editing function of the character editor of the known ERA engine may be used to produce the expression animation. In the embodiments of the present application, other methods and software may also be used, as long as the objectives of the present application can be achieved. In the embodiments of the present application, the facial model data and the character expression data (for example, the character expression model data therein) may be edited. In the embodiments of the present application, the character editor of the ERA engine is used to perform the above editing, and the facial model data and character expression data of the game character are assembled into integrated files (including a morph file and target files). As those skilled in the art know, in the character editor of the ERA engine, a target file is an expression animation file containing the change information of certain character expression model data relative to the facial model data, while the morph file is the collection of all expression data files of the game character, containing the facial model data of the game character and all target file names. Fig. 8 shows an exemplary morph file and target file according to an embodiment of the present application.
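The morph/target organization described above can be pictured with a minimal in-memory analogue: a target holds per-vertex change information of one expression relative to the facial model, and a morph bundles the facial model reference with all target names. The ERA engine's actual file formats are not public, so every field name below is an assumption made for illustration only.

```python
def make_target(name, neutral_vertices, expression_vertices):
    """A 'target' stores per-vertex deltas of one expression vs. the neutral face."""
    deltas = [tuple(e - n for n, e in zip(nv, ev))
              for nv, ev in zip(neutral_vertices, expression_vertices)]
    return {"name": name, "deltas": deltas}


def make_morph(face_model_name, targets):
    """A 'morph' bundles the face model reference with all target names."""
    return {"face_model": face_model_name,
            "targets": [t["name"] for t in targets]}


# Toy 2-vertex face: the 'smile' expression lifts both vertices by 0.5.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile   = [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]

smile_target = make_target("smile", neutral, smile)
morph = make_morph("hero_face", [smile_target])
```

Storing only deltas per target keeps each expression file small and lets many targets share one facial model, which matches the collection structure the patent attributes to the morph file.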
In the embodiments of the present application, after the morph file is assembled with the character editor of the ERA engine, the morph file may be added to the game character under the BodyPart function, and an expression animation may then be added to the game character in the skill panel in order to set the expression animation attributes. Fig. 9 shows an operation interface for adding a morph file according to an embodiment of the present application. Fig. 10 shows an operation interface for setting expression animation attributes according to an embodiment of the present application. In the embodiments of the present application, the process of blending from one expression to another may be computed by an interpolation algorithm. In the embodiments of the present application, "interpolation" has the meaning commonly understood by those skilled in the art. In the embodiments of the present application, the data on which the interpolation is performed is the edited facial model data and character expression data described above. In the embodiments of the present application, the meanings of the expression animation attributes in Fig. 10 are as shown in Table 1:
Table 1

[Table 1 is provided as image PCTCN2020112616-appb-000001 in the original publication; it lists the meanings of the expression animation attributes shown in Fig. 10.]
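The interpolation underlying the expression animation can be sketched as a plain linear blend of per-blendshape weights over normalized time. This is a sketch only: the actual ERA engine may use a different interpolation scheme, and all names below are illustrative.

```python
def interpolate_expression(src, dst, t):
    """Linearly blend the weights of two expressions at normalized time t in [0, 1].

    src/dst are lists of blendshape weights; t = 0 yields src, t = 1 yields dst.
    """
    t = max(0.0, min(1.0, t))  # clamp so playback never overshoots either pose
    return [s + (d - s) * t for s, d in zip(src, dst)]


neutral = [0.0, 0.0, 0.0]   # all-zero weights = no expression
smile   = [1.0, 0.5, 0.0]   # target expression weights

half_way = interpolate_expression(neutral, smile, 0.5)
```

Evaluating this per frame as t advances from 0 to 1 produces the smooth expression switching described above; an easing curve could be substituted for the linear ramp without changing the structure.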
The present application also enables the game character to follow the user's expression in real time and make a corresponding expression, which requires the expression feature coefficients described above. In the embodiments of the present application, the real-time expression of the game character may be generated by, for example, the ERA engine. In the embodiments of the present application, the acquired expression feature coefficients and the associated rotation matrix are transmitted to the ERA engine. In the embodiments of the present application, the facial model data may be interpolated with the expression feature coefficients and the rotation matrix to obtain real-time character expression data. In the embodiments of the present application, the already generated character expression data may also be changed in a "partial replacement" manner through blend deformation processing, so as to quickly obtain a large amount of new character expression data. Fig. 11 shows a schematic diagram of the nodes implementing this "partial replacement" method, and Fig. 12 shows an exemplary operation diagram of the method. Specifically, on the basis of the existing facial model data and character expression data of the game character (which data may be generated in real time or non-real time), the locations (nodes) in the facial model data that need to be partially replaced are selected, so that the "local expression" of the corresponding locations of the character expression data is applied to the facial model data to generate new character expression data. In the embodiments of the present application, there may be one or more replaced locations. In the embodiments of the present application, the character expression data, in particular the first character expression data, is configured as part of the data of the created game character. In the embodiments of the present application, the character expression data may be saved, shared, or downloaded.
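The "partial replacement" idea above can be sketched as copying only the vertices of selected facial regions from an existing expression model onto the facial model, leaving the rest of the face neutral. The region indices below are arbitrary illustrative values, not locations defined by the patent.

```python
def partial_replace(face_model, expression_model, region_indices):
    """Return a new vertex list: the face model with selected vertices replaced
    by the corresponding vertices of the expression model ('local expression')."""
    region = set(region_indices)
    return [expression_model[i] if i in region else v
            for i, v in enumerate(face_model)]


# Toy 3-vertex face and a full expression that lifts every vertex.
face       = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
expression = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]

# Replace only vertex 1 (say, a mouth-corner vertex) to form a new expression.
new_expression = partial_replace(face, expression, [1])
```

Combining different region selections with a handful of existing expressions multiplies the number of obtainable expressions, which is the efficiency gain the passage above describes.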
Rendering based on the character expression data to generate the expression of the game character
In the embodiments of the present application, the rendering operation may be performed by methods known in the art to generate the finally presented expression. In the embodiments of the present application, the expression of the game character may be displayed to the game user as a preview. In the embodiments of the present application, the expression animation of the game character may be played for the game user as a preview. In the embodiments of the present application, a variety of interfaces may be provided to implement the above preview and playback operations. In the embodiments of the present application, the expression or expression animation may be viewed through buttons such as preview and play. In the embodiments of the present application, the generated expression of the game character may also be used interactively in subsequent game plots or scenes (for example, for taking photos, screenshots, and screen recordings), and may further serve as a chat emoticon or an expression given according to the character's storyline. In the embodiments of the present application, the expression of the game character, in particular an expression rendered from the second character expression data, may be highlighted in one or more game plots, or loaded and displayed when the user operating the game character or the game plot satisfies specific conditions. An exemplary embodiment of the present application in the scenario of creating a character is specifically described below. After the user logs in to the electronic game system, the electronic game system presents to the user, according to the user's selection, character attributes available for customization, including but not limited to face shape, hair color, and the shapes of facial features. While the user selects character attributes, the electronic game system presents, through the screen, a character image having the above character attributes. Fig. 13 shows a schematic diagram of an interface for creating a game character according to an embodiment of the present application, in which, upon the user's selection of the third face shape, the electronic game system displays on the screen the image of the game character matching that face shape for the user to preview.
In addition to facial attributes such as face shape, the user may also select and preview the preset expressions of the game character. Figs. 14A and 14B are schematic diagrams of expressions of a game character generated non-real-time by the method according to the embodiments of the present application, in which the user may make a selection in an expression preview selection window provided by the electronic game system. A variety of expressions of the game character are pre-stored in the electronic game system; as the user selects different expressions, previews of the different expressions of the game character are displayed on the screen. When the user makes no selection, the game character presents no expression, that is, the expression of the game character at this time corresponds to the expressionless state of the facial model data of the game character. The expressions of the character available for preview may be prepared by the developers of the electronic game system through conventional methods, or prepared by the expression generation method of the present application, preferably generated from character expression data with the participation of a model. The expressions listed in the expression preview selection window may correspond to the first character expression data or the second character expression data.
The electronic game system of the present application also provides an interface for selecting "real-time expression"; the interface may be, for example, a "button" or a "check box". Figs. 15A and 15B are schematic diagrams of expressions of a game character generated in real time by the method according to the embodiments of the present application. After the user selects the real-time expression function, the electronic game system may capture the user's facial expression through the camera on the game terminal and, by the method of the present application, make the current game character present the same expression as the user's facial expression. Optionally, the generated real-time expression of the game character may be saved for the user to select and preview in the expression preview window, and may also be used in other scenes of the electronic game system. In the scenario where the expression of a character is highlighted in the game plot of an electronic game, an exemplary embodiment of the present application is as follows. During the progress of the electronic game, the face of the game character needs to be highlighted in certain special game plots, for example, when the camera zooms in on the face of the game character to show its expression. At this time, the electronic game system may choose to use, as the highlighted expression, an expression pre-stored by the electronic game system or made by the user when the game character was created. For example, when the game character needs to appear very happy, the highlighted face of the game character may use the expression previewed under the name "happy" in the electronic game system, or the expression named "joy" generated and saved by the user when the game character was created; it may also use the player's current expression. For example, when the current game plot sets a sad atmosphere and the user is accordingly affected and wears a solemn expression, the method of the present application for generating the expression of a game character in real time or non-real time may be used so that the character controlled by the user presents the user's facial expression at that moment. The above game functions may be preset in the electronic game system or selected by the user in the game settings. In the scenario where the user of the electronic game or the game plot of the electronic game satisfies specific conditions, an exemplary embodiment of the present application is as follows. During the progress of the electronic game, when the user or the game plot satisfies certain specific conditions, an expression prepared by the method of the present application may be used as the expression of the game character. The specific conditions include, but are not limited to, the user reaching a certain level, completing a certain task, achieving a certain achievement, the game plot progressing to a specific stage, triggering a specific hidden plot, and so on. At this time, the electronic game system may choose to use, as the expression of the game character, an expression pre-stored by the electronic game system or made by the user when the game character was created. Alternatively, at this time, the electronic game system begins to provide an option (for example, a button) for making a real-time expression for the user to operate.
Fig. 16 shows a schematic diagram of an exemplary electronic game system implementing an embodiment of the present application. Game users can use the embodiments of the present application to conveniently generate, share, and play character expressions and related animations. Game users can perform related operations offline on the game client, or perform related operations in real time while playing the game. As a data source, the user, using software, hardware, or a mixed software-and-hardware module implementing an embodiment of the present application, photographs his or her own face through a camera device (for example, a mobile phone sold by Apple Inc. or Huawei Technologies Co., Ltd.), and provides the captured photos or even videos of the face to the facial processing means and expression generation means used in the present application (for example, the ARKit software and the R3DS Wrap software). The game client shown in Fig. 16 is one of the embodiments of the present application, and can invoke the ARKit software and the R3DS Wrap software and control the camera device. The facial processing means and the expression generation means process the provided data, generate relevant models or expression feature information, and provide the relevant data to the ERA engine. Players perform operations related to characters and expressions through the client of the game system (these operations may invoke the aforementioned camera device, facial processing means, and expression generation means), including self-portraits, generation of character expressions, dissemination of character expressions, playback of character expression animations, saving of character expressions or animations, retrieval of character expressions (the retrieval operation may be performed through the ERA engine), and so on. The client software of the game system communicates with the game server through a computer network, so as to support real-time expression synchronization of multiple characters on the client side and the dissemination or sharing of character expressions among players.
The various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device for generating the expression of a game character according to the embodiments of the present application. The present application may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing the present application may be stored on a computer-readable medium, or may take the form of one or more signals. Such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
For example, Fig. 17 shows a client device, such as a smartphone or tablet computer, that can implement the generation of the expression of a game character according to the present application. The client device conventionally includes a processor 210 and a computer program product or computer-readable medium in the form of a memory 220. The memory 220 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. The memory 220 has a storage space 230 for program code 231 for executing any of the steps of the above methods. For example, the storage space 230 for program code may include individual program codes 231 respectively used to implement the various steps of the above methods. These program codes may be read from, or written into, one or more computer program products. These computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. Such a computer program product is usually a portable or fixed storage unit as described with reference to Fig. 18. The storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 220 in the client device of Fig. 17. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes computer-readable code 231', that is, code readable by a processor such as, for example, the processor 210; when run by the client device, this code causes the client device to execute the various steps of the methods described above.
Fig. 19 shows a flowchart of a method for generating a real-time emoticon package in a game according to an embodiment of the present application; the steps of the method are described in sequence below.
Receiving an instruction to generate a real-time emoticon package of a game character
In the embodiments of the present application, "game character" refers to any virtual character in an electronic game, preferably the virtual character being manipulated by the user. In the embodiments of the present application, the "emoticon package of the game character" refers to an emoticon package that presents the image of the game character, in particular its facial expression features. In the embodiments of the present application, the "real-time emoticon package of the game character" means that the facial expression of the game character presented by the emoticon package is consistent with the real-time expression of the user currently operating the game character. In the embodiments of the present application, the instruction is issued by the game user, for example, when chatting with other game users or NPCs (non-player characters). In the embodiments of the present application, the game user may issue the instruction in a variety of ways, including but not limited to clicking a button, checking an option, or opening the camera module of the game terminal. In the embodiments of the present application, the instruction is issued by the electronic game system, for example, when the game user or the game plot satisfies specific conditions. In the embodiments of the present application, the real-time emoticon package may be generated at any time, preferably while the game user is chatting; in this case, the received instruction is an instruction, triggered in the chat room of the current game, to generate a real-time emoticon package of the game character. In the embodiments of the present application, the real-time emoticon package is produced by the current game user and can be sent to the clients of other game users. In the embodiments of the present application, the real-time emoticon package is produced by other game users and can be sent by them to the client of the current game user. In the embodiments of the present application, "other game users" include peer game users, that is, game users who are chatting and interacting with the current game user. In the embodiments of the present application, there may be one or more peer game users.
Acquiring facial model data of the game character
In the embodiments of the present application, the facial model data of the game character may be acquired at any stage and in any scene; preferably, it is acquired when the game character is created. In the embodiments of the present application, the "facial model data" of the game character, that is, the art production model of the game character, refers to the data that, after rendering, presents the expressionless face of the game character. See Fig. 6B, which shows an exemplary art production model of a game character. In the embodiments of the present application, "model data" and "model" are used interchangeably. In the embodiments of the present application, the art production model of the game character is preset by the game developers and saved in the game data. In the embodiments of the present application, the facial model data may be stored in any data format. In the embodiments of the present application, "preset", as opposed to "non-preset" or "real-time", means prepared in advance rather than produced or generated in real time.
Acquiring the expression feature data of the current game user in real time
In the embodiments of the present application, "expression feature data" is a general term for data capable of representing an expression. In the embodiments of the present application, the acquired expression feature data is real-time. In the embodiments of the present application, the expression feature data includes expression feature coefficients. In the embodiments of the present application, the expression feature data further includes a rotation matrix. In the embodiments of the present application, the expression feature data may be acquired through a camera module (for example, a camera on a mobile phone, tablet, or computer, or a video camera, etc.). In the embodiments of the present application, the camera module may be used to photograph or detect the face of a game user to obtain the expression feature data. In the embodiments of the present application, the expression feature data is acquired in real time from the current game user through the camera module. In the embodiments of the present application, the camera module is the camera module of the client currently being operated by the game user. In the embodiments of the present application, the expression feature data may be stored in any data format. In the embodiments of the present application, "expression feature coefficients" are data that represent facial features: they describe, in the form of coefficients, the offsets of facial features relative to a neutral (expressionless) face, and thereby describe the facial features when an expression is present. In the embodiments of the present application, the rotation matrix has the meaning commonly understood by those skilled in the art. In the embodiments of the present application, software (for example, the known ARKit software of Ark Platforms Inc., details of whose functions are available at http://www.arkit.io/) may be used in conjunction with the camera module to obtain the expression feature coefficients. For example, the ARKit software (in particular, the ARBlendShapeLocation function in the ARFaceAnchor module) provides expression feature coefficients in the form of blendShapes attributes or parameters. In the embodiments of the present application, the expression feature coefficients may also be provided by other software or in other forms. As known in the art, in the ARKit software the expression feature coefficients provide a model that represents the offset coefficients of a series of facial features relative to a neutral face. As known in the art, the ARKit software currently provides 52 expression feature coefficients, distributed over the left eye, right eye, mouth, jaw, eyebrows, cheeks, tongue, and other parts; each coefficient is a floating-point number between 0 and 1, where 0 denotes no expression and 1 denotes the maximum extent of the expression. Fig. 3 shows exemplary expression feature coefficients according to an embodiment of the present application, illustrating the expressions of opening and closing the left eye. In the embodiments of the present application, the invocation and control of the ARKit software and the camera module may be implemented by a hardware module (usually customized), or by a module combining software and hardware.
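As an illustration of the per-frame expression feature data described above (named coefficients in [0, 1] plus a head-pose rotation matrix), the sketch below models one captured frame. The coefficient names follow ARKit's blendShapes naming, but the data structure and validation logic are illustrative assumptions of this sketch, not part of any ARKit API.

```python
from dataclasses import dataclass, field

@dataclass
class ExpressionFrame:
    """One frame of expression feature data: named blend-shape
    coefficients in [0, 1] plus a 3x3 head-pose rotation matrix."""
    coefficients: dict  # e.g. {"eyeBlinkLeft": 0.85, "jawOpen": 0.1}
    rotation: list = field(
        default_factory=lambda: [[1, 0, 0], [0, 1, 0], [0, 0, 1]])

    def __post_init__(self):
        # Each coefficient describes an offset from the neutral face:
        # 0 means no expression, 1 means the maximum expression.
        for name, value in self.coefficients.items():
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"coefficient {name} out of range: {value}")

frame = ExpressionFrame({"eyeBlinkLeft": 1.0, "eyeBlinkRight": 0.0})
print(frame.coefficients["eyeBlinkLeft"])  # 1.0
```

A frame like this is what the camera module would hand to the downstream interpolation step, together with its rotation matrix.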
Generating a real-time expression package of a game character in real time from the facial model data and the expression feature data
In the embodiments of the present application, the real-time expression package of the game character is obtained based on the real-time expression data of the game character. In the embodiments of the present application, an expression package may be generated from the expression data of the game character by any method known in the art. In the embodiments of the present application, the real-time expression data of the game character enables the game character to show the expression represented by the expression feature data. In the embodiments of the present application, the (real-time or non-real-time) expression data of the game character is configured such that the expression of the game character can be generated after rendering. In the embodiments of the present application, the terms "expression data" and "character expression data" of a game character are used interchangeably and refer to the fusion deformation (blend-shape) data relative to the facial model data. In the embodiments of the present application, the expression data is facial-topology mesh data that includes at least one of base mesh information, vertices, textures, triangle patch information, and the individual fusion deformation information of the game character. In the embodiments of the present application, "base mesh information", "vertex", "texture", and "triangle patch information" have the meanings commonly understood by those skilled in the art. For example, the ARKit software, in particular the ARFaceGeometry function in the ARFaceAnchor module, specifies the base mesh and texture of a face, and each model therein may contain 1220 vertices and 2304 triangle patches. In the embodiments of the present application, the numbers of vertices and triangle patches may be chosen as required. In the embodiments of the present application, the expression data may also be defined by other software or methods. Fig. 6B shows exemplary character expression data according to an embodiment of the present application.
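As a minimal illustration of the mesh structure just described (vertices plus triangle patches), the container below is an assumed data layout for this sketch, not the format actually used by ARKit or by the game engine; the particular counts (such as ARKit's 1220 vertices and 2304 triangles) are properties of the data, not of the structure.

```python
class FaceMesh:
    """Minimal face-mesh container: vertex positions plus triangle
    patches given as index triples into the vertex list."""

    def __init__(self, vertices, triangles):
        self.vertices = vertices    # list of (x, y, z) tuples
        self.triangles = triangles  # list of (i, j, k) index triples
        # Every triangle must reference existing vertices.
        for tri in triangles:
            if any(i < 0 or i >= len(vertices) for i in tri):
                raise ValueError(f"triangle {tri} references a missing vertex")

# A single triangle patch over three vertices.
mesh = FaceMesh([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(len(mesh.vertices), len(mesh.triangles))  # 3 1
```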
As can be seen from Fig. 6B, the expression data of the game character reflects the fusion deformation of one or more parts relative to the facial model data. In the embodiments of the present application, the "fusion deformation information" records the parts and/or degrees of the aforementioned fusion deformation. In the embodiments of the present application, the parts and/or degrees subjected to fusion deformation processing may be selected as required. In the embodiments of the present application, the aforementioned parts may be the same as the parts indicated by the expression feature coefficients. In the embodiments of the present application, the values of the aforementioned degrees may be the same as the values of the expression feature coefficients. In the embodiments of the present application, the parts selected for fusion deformation processing are usually concentrated on parts that strongly affect the expression, such as the eyes and the mouth. In the embodiments of the present application, software (for example, the known R3DS Wrap software of Russian3DScanner LLC, details of whose functions are available at https://www.russian3dscanner.com/) may be used to implement the "fusion deformation" processing. In the embodiments of the present application, other methods or software may also be used to implement the above operations, as long as the objectives of the present application can be achieved. Similarly to the ARKit software, a software program may be written, or a hardware module or a combined software-and-hardware module may be provided, to invoke the R3DS Wrap software. In the embodiments of the present application, the module that invokes the ARKit software and the module that invokes the R3DS Wrap software may be the same module, different modules that can exchange data with each other, or different modules under the same system (which can exchange data with each other). As known in the art, R3DS Wrap is node-based software in which functions are realized by selecting and connecting nodes. For example, the embodiments of the present application may use the Blendshapes (fusion deformation) node in the R3DS Wrap software.
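The fusion deformation (blend-shape) principle described above can be sketched as a linear combination of vertex offsets: each expression target stores the deviation of every vertex from the neutral mesh, and a weighted sum of those deviations, using the expression feature coefficients as weights, yields the deformed face. The toy meshes and the weight name below are illustrative, not data from the application.

```python
def blend_shapes(base, targets, weights):
    """Linear blend-shape evaluation: each target mesh contributes
    (target - base), scaled by its coefficient in [0, 1]."""
    out = [list(v) for v in base]  # copy the neutral vertices
    for name, target in targets.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue  # coefficient 0 means this part is at rest
        for i, (bv, tv) in enumerate(zip(base, target)):
            for axis in range(3):
                out[i][axis] += w * (tv[axis] - bv[axis])
    return out

# Two-vertex toy mesh: one "jawOpen" target moves vertex 1 down by 2 units.
base = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
targets = {"jawOpen": [(0.0, 0.0, 0.0), (0.0, -1.0, 0.0)]}
half_open = blend_shapes(base, targets, {"jawOpen": 0.5})
print(half_open[1])  # [0.0, 0.0, 0.0]
```

With the coefficient at 0.5, vertex 1 lands halfway between its neutral position and the target, which is exactly the "degree of deformation" the expression feature coefficient encodes.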
In the embodiments of the present application, the character expression data of the game character is obtained by interpolation between the facial model data and the expression feature data. In the embodiments of the present application, "interpolation" has the meaning commonly understood by those skilled in the art. In the embodiments of the present application, generation of the real-time expression package of the game character may be implemented, for example, by the known ERA engine. In the embodiments of the present application, the acquired expression feature coefficients and the associated rotation matrix may be transmitted to the ERA engine. In the embodiments of the present application, the facial model data may be interpolated with the expression feature coefficients and the rotation matrix to obtain real-time character expression data. In the embodiments of the present application, before the real-time expression package of the game character is formed, the facial model data and the expression feature data within a predetermined time period may first be saved. In the embodiments of the present application, after playback of the real-time expression package is triggered, the previously saved facial model data and the expression feature data of the game user may be loaded and the interpolation performed. In the embodiments of the present application, the real-time expression package generated by the above method may be a static picture, or a dynamic animation or video. In the embodiments of the present application, the character expression data and the real-time expression packages generated above may be stored in a variety of formats and forms. In the embodiments of the present application, received real-time expression packages may also be stored in a variety of formats and forms for later use. In the embodiments of the present application, real-time expression packages may be classified and stored according to at least one of character, expression category, and expression description keyword. In the embodiments of the present application, a real-time expression package may also be edited, for example by adding audio, text, or picture data input in real time.
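The save-then-replay interpolation described above can be sketched as follows: expression feature frames saved over the predetermined time period are interpolated in time during playback, so that rendered frames between two captured frames transition smoothly. The linear scheme and the frame timing are illustrative assumptions, not the ERA engine's actual algorithm.

```python
def lerp_frames(t, t0, frame0, t1, frame1):
    """Linearly interpolate two saved coefficient frames at time t,
    where each frame maps coefficient names to values in [0, 1]."""
    if t1 == t0:
        return dict(frame0)
    a = (t - t0) / (t1 - t0)
    names = set(frame0) | set(frame1)
    # Missing coefficients default to 0 (the neutral position).
    return {n: (1 - a) * frame0.get(n, 0.0) + a * frame1.get(n, 0.0)
            for n in names}

# Coefficients captured at t=0.0 s and t=0.1 s; render a frame at t=0.05 s.
f0 = {"jawOpen": 0.2, "eyeBlinkLeft": 1.0}
f1 = {"jawOpen": 0.6, "eyeBlinkLeft": 0.0}
mid = lerp_frames(0.05, 0.0, f0, 0.1, f1)
print(round(mid["jawOpen"], 3))  # 0.4
```

Feeding each interpolated coefficient frame into a blend-shape evaluation against the saved facial model data yields one rendered frame of the animated expression package.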
In the embodiments of the present application, a received real-time expression package may be parsed. In the embodiments of the present application, the received real-time expression package reflects the same game character as that of the current game player. In the embodiments of the present application, by parsing the real-time expression package, character expression data corresponding to the game character can be generated. As discussed above, in the embodiments of the present application, the expression data of the game character may be rendered by a method known in the art to generate the final presented expression. In the embodiments of the present application, the generated expression of the game character may also be used interactively in subsequent game plots or scenes (for example, for taking photos, screenshots, or screen recordings), as a chat expression package, or as an expression given according to the character's storyline. In the embodiments of the present application, the expression of the game character, in particular an expression generated by rendering the character expression data obtained by parsing, is preferably highlighted in one or more game plots of the game, or is loaded and displayed when the user operating the game character or the game plot meets a specific condition.
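To make the send/receive/parse flow above concrete, the sketch below packs timed coefficient frames into a package and parses it back into character expression data. The JSON layout, field names, and character id are illustrative assumptions; the application does not specify a wire format.

```python
import json

def pack_expression(character_id, frames):
    """Serialize an expression package: a character identifier plus
    timed coefficient frames (the layout is an illustrative choice)."""
    return json.dumps({"character": character_id, "frames": frames})

def parse_expression(blob):
    """Recover the character id and expression frames from a
    received package, ready for rendering on the receiver's side."""
    data = json.loads(blob)
    return data["character"], data["frames"]

blob = pack_expression("hero_01", [{"t": 0.0, "jawOpen": 0.2},
                                   {"t": 0.1, "jawOpen": 0.6}])
who, frames = parse_expression(blob)
print(who, len(frames))  # hero_01 2
```

Because the receiver regenerates character expression data rather than receiving rendered pixels, the same package can be replayed on any client that holds the character's facial model data.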
An exemplary embodiment of the present application in a chat scenario is described in particular below. While an electronic game is in progress, a user (for example, user A) needs to hold chat conversations with multiple different users (for example, user B) or NPCs. In such a chat conversation, user A can send audio, text, or graphics to user B or to an NPC, and can also receive corresponding content from them. The chat content sent may include an expression package, which may be an expression image or an expression animation. For example, when user A sends a message to user B (that is, users A and B are chatting), user A may select a pre-stored expression package in the electronic game system to send, for example by choosing an available expression package from a list. In addition, user A may also send to user B a real-time expression package generated from user A's current expression. Fig. 20 shows a schematic diagram of an interface for creating a real-time expression package according to an embodiment of the present application, in which user A initiates the creation of a real-time expression package by tapping "+" in the "real-time expression" option.
The above real-time expression package is produced by the method or apparatus of the present application. Figs. 21A-21C show schematic diagrams of generating a real-time expression package according to embodiments of the present application, in which the user taps a button on the right side of the screen to start the camera module, which photographs or scans the face of the current game user so as to generate a real-time expression package of the game character currently being operated by the game player. After the real-time expression package has been produced, the user may choose to preview it to check whether it meets the user's needs. Fig. 22 shows a schematic diagram of previewing a generated real-time expression package according to an embodiment of the present application. After production or preview, the user may choose to save the package for sending or for subsequent use. Fig. 23 shows a schematic diagram of an interface for naming a real-time expression package according to an embodiment of the present application, in which the generated real-time expression package is named "Swordsman" and saved. During naming, a thumbnail may be chosen to represent the generated real-time expression package so as to facilitate later retrieval. Optionally, the generated real-time expression package may also be downloaded to other terminals for use, for example to a mobile phone for use as a chat expression package. After naming and saving, the user can select the real-time expression in the chat window and send it. Fig. 24 shows a schematic diagram of an interface for selecting and sending a real-time expression package according to an embodiment of the present application. Before sending, the user may also preview or play the real-time expression package; Figs. 25A and 25B show schematic diagrams of previewing and playing a real-time expression package before sending according to embodiments of the present application.
On the other hand, after another user (for example, user B) receives the above real-time expression package, the user can browse or play it by tapping it. Figs. 26A and 26B show schematic diagrams of receiving and viewing a real-time expression package according to embodiments of the present application. In addition, user B may also save and download the package, or forward it to other users. When making an expression package, the user may also edit it, for example by adding text, graphics, or audio and video. Figs. 27A and 27B show schematic diagrams of editing a real-time expression package and previewing the edited real-time expression package according to embodiments of the present application. In a scenario in which a character's expression is highlighted in the game plot of an electronic game, an exemplary embodiment of the present application is as follows. While the electronic game is in progress, certain special game plots require the face of the game character to be highlighted, for example when the camera zooms in on the face of the game character to show the game character's expression. At this time, the electronic game system may choose to parse the received real-time expression package and use the expression generated from the resulting character expression data as the highlighted expression. For example, when the game character needs to appear very happy, the highlighted face of the game character may use the expression named "happy" prestored in the electronic game system, or the expression named "joy" generated and saved by parsing an expression package; in addition, the user's current expression may also be used. For example, when the current game plot creates a sad atmosphere and the user, so affected, wears a solemn expression, the method of generating the expression of a game character in real time or non-real time according to the present application can make the character controlled by the user show the user's facial expression at that moment. The above game functions may be preset in the electronic game system or selected by the user in the game settings.
In a scenario in which the user of the electronic game or the game plot of the electronic game meets specific conditions, an exemplary embodiment of the present application is as follows. While the electronic game is in progress, when the user or the game plot meets certain specific conditions, an expression prepared by the method of the present application may be used as the expression of the game character. Specific conditions include, but are not limited to, the user reaching a certain level, completing a certain task, attaining a certain achievement, the game plot progressing to a specific stage, triggering a specific hidden plot, and so on. At this time, the electronic game system may choose to use, as the expression of the game character, an expression prestored by the electronic game system or made by the user when the game character was created. Alternatively, at this time the electronic game system may begin to provide an option (for example, a button) for making real-time expressions for the user to operate.
Fig. 16 shows a schematic diagram of an exemplary electronic game system implementing embodiments of the present application. Game users can use the embodiments of the present application to conveniently generate, share, and play character expressions or expression packages and the related animations. Game users can perform the related operations offline at the game client, or in real time while playing the game. As the data source, the user, by means of software, hardware, or combined software-and-hardware modules implementing the embodiments of the present application, photographs his or her own face through a camera device (for example, a mobile phone sold by Apple Inc. or Huawei Technologies Co., Ltd.) and provides the captured photos, or even videos, of the face to the face processing apparatus and the expression or expression-package generating apparatus used in the present application (for example, the ARKit software and the R3DS Wrap software). The game client shown in Fig. 16 is one embodiment of the present application; it can invoke the ARKit software and the R3DS Wrap software and control the camera device. The face processing apparatus and the expression or expression-package generating apparatus process the provided data, generate the relevant models or expression feature information, and provide the relevant data to the ERA engine. Through the client of the game system, the user performs operations related to characters and to expressions or expression packages (these operations may invoke the aforementioned camera device, face processing apparatus, and expression generating apparatus), including taking selfies, generating character expressions or expression packages, disseminating character expressions and playing character expression animations, saving character expressions or animations, retrieving character expressions (retrieval may be performed through the ERA engine), and so on. The client software of the game system communicates with the game server over a computer network, thereby supporting real-time expression synchronization of multiple characters at the client, as well as the dissemination or sharing of character expressions between users.
For example, Fig. 28 shows a client device, such as a smartphone or a tablet computer, that can implement the generation of real-time expression packages in a game according to the present application. The client device conventionally includes a processor 310 and a computer program product or computer-readable medium in the form of a memory 320. The memory 320 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. The memory 320 has a storage space 330 for program code 331 for performing any of the method steps of the methods described above. For example, the storage space 330 for program code may include individual pieces of program code 331 for implementing the various steps of the above methods. The program code can be read from or written into one or more computer program products. These computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. Such a computer program product is usually a portable or fixed storage unit as described with reference to Fig. 29. The storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 320 in the client device of Fig. 28. The program code may, for example, be compressed in an appropriate form. Usually, the storage unit includes computer-readable code 331', that is, code that can be read by a processor such as the processor 310 and that, when run by the client device, causes the client device to perform the various steps of the methods described above.
"One embodiment", "an embodiment", or "one or more embodiments" as used herein means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. In addition, it is noted that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment. In the specification provided here, numerous specific details are set forth. However, it is understood that the embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
It should be noted that the above embodiments illustrate rather than limit the present application, and that those skilled in the art can devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present application can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several apparatuses, several of these apparatuses can be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any order; these words may be interpreted as names. Furthermore, it should be noted that the language used in this specification has been chosen principally for readability and instructional purposes, not to delineate or circumscribe the subject matter of the present application. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present application is illustrative of, and not restrictive of, the scope of the present application, which is defined by the appended claims.

Claims (31)

  1. 一种生成游戏角色的表情的方法,包括:在所述游戏角色被创建时,获取所述游戏角色的面部模型数据;获取表情特征数据;依据所述面部模型数据和所述表情特征数据,生成所述游戏角色的第一角色表情数据,其中所述第一角色表情数据被配置作为所创建的所述游戏角色的数据的一部分;和依据所述第一角色表情数据进行渲染,以生成所述游戏角色的表情。A method for generating the expression of a game character includes: when the game character is created, obtaining facial model data of the game character; obtaining expression characteristic data; generating according to the facial model data and the expression characteristic data The first character expression data of the game character, wherein the first character expression data is configured as a part of the created data of the game character; and rendering is performed according to the first character expression data to generate the The expression of the game character.
  2. 根据权利要求1所述的方法,所述表情特征数据包括:预置的与所述面部模型数据对应的模特无表情模型数据和至少一个模特第一表情模型数据。The method according to claim 1, wherein the expression feature data comprises: preset model data without expression corresponding to the facial model data and at least one model first expression model data.
  3. 根据权利要求2所述的方法,所述预置的至少一个模特第一表情模型数据中的每个具有:通过摄像头模组获取的相对于所述预置的模特无表情模型数据的模特的多个不同类型的表情中的一个。The method according to claim 2, wherein each of the preset first expression model data of at least one model has: the number of models acquired through the camera module relative to the preset model no expression model data One of two different types of emoticons.
  4. 根据权利要求1所述的方法,进一步包括:将所述游戏角色的表情作为预览显示给游戏用户。The method according to claim 1, further comprising: displaying the expression of the game character as a preview to the game user.
  5. 根据权利要求2所述的方法,生成所述游戏角色的第一角色表情数据的操作包括:游戏引擎将所述面部模型数据分别与所述预置的模特无表情模型数据和所述预置的至少一个模特第一表情模型数据进行包裹变形处理,然后将所述包裹变形处理后的数据进行融合变形处理,以得到与所述面部模型数据对应的角色第一表情模型数据,并且作为所述游戏角色的第一角色表情数据。According to the method of claim 2, the operation of generating the first character expression data of the game character comprises: the game engine separately compares the facial model data with the preset model expressionless model data and the preset expression data. The first expression model data of at least one model is wrapped and deformed, and then the wrapped deformed data is fused and deformed to obtain the character's first expression model data corresponding to the facial model data, which is used as the game The character's first character expression data.
  6. 根据权利要求5所述的方法,进一步包括:在所述包裹变形处理中,对所述面部模型数据和所述模特无表情模型数据选取一一对应的标定特征点,其中在眼睛或嘴部位选取的标定特征点多于在其他部位选取的标定特征点;其中,当导入修改后的面部模型数据时,对所述修改后的面部模型数据选取多个相同的所述标定特征点,以得到与所述修改后的面部模型数据对应的角色第一表情模型数据。The method according to claim 5, further comprising: in the package deformation processing, selecting one-to-one corresponding calibration feature points for the facial model data and the model's expressionless model data, wherein the selected points are in the eyes or mouth. The calibration feature points are more than the calibration feature points selected in other parts; wherein, when the modified face model data is imported, a plurality of the same calibration feature points are selected for the modified face model data to obtain the same calibration feature points as The first expression model data of the character corresponding to the modified facial model data.
  7. 根据权利要求5所述的方法,进一步包括:编辑所述面部模型数据和所述第一角色表情数据中的角色第一表情模型数据,将所述编辑 后的数据在所述游戏引擎中通过变形表情编辑器中的插值计算算法,计算从在前表情到所述表情的表情融合的过程,并且将所述融合过程作为动画给游戏用户预览播放。The method according to claim 5, further comprising: editing the facial model data and the character first expression model data in the first character expression data, and transforming the edited data in the game engine The interpolation calculation algorithm in the expression editor calculates the expression fusion process from the previous expression to the expression, and uses the fusion process as an animation for the game user to preview and play.
  8. 根据权利要求2所述的方法,进一步包括:编辑所述预置的至少一个模特第一表情模型数据,以生成模特第二表情模型数据,并且将所述模特第二表情模型数据用于生成所述游戏角色的第二角色表情数据。The method according to claim 2, further comprising: editing the preset first expression model data of at least one model to generate second expression model data of the model, and using the second expression model data of the model to generate all Describe the second character expression data of the game character.
  9. 根据权利要求8所述的方法,进一步包括:依据所述第二角色表情数据进行渲染,以生成所述游戏角色的表情;并且将所述表情在所述游戏的一个或多个游戏情节中进行突出显示;或将所述表情在所述游戏角色的用户或所述游戏的游戏情节满足特定条件时,进行加载显示。The method according to claim 8, further comprising: rendering according to the second character expression data to generate the expression of the game character; and performing the expression in one or more game plots of the game Highlighting; or loading and displaying the expression when the user of the game character or the game plot of the game meets certain conditions.
  10. 根据权利要求1所述的方法,所述表情特征数据包括:从摄像头模组实时获取的当前游戏用户的表情特征系数。The method according to claim 1, wherein the expression characteristic data comprises: the expression characteristic coefficient of the current game user obtained in real time from the camera module.
  11. 根据权利要求10所述的方法,进一步包括:将所述面部模型数据与所述摄像头模组中获取的所述当前游戏用户的表情特征数据中的表情特征系数与旋转矩阵进行插值计算,以得到实时的第一角色表情数据。The method according to claim 10, further comprising: interpolating the facial model data and the facial expression feature coefficients and the rotation matrix in the facial expression feature data of the current game user acquired from the camera module to obtain Real-time facial expression data of the first character.
  12. 根据权利要求11所述的方法,所述实时的第一角色表情数据包括:通过融合变形节点制作的第一角色表情数据;其中所述融合变形节点被配置为被一个或多个替换节点取代。The method according to claim 11, wherein the real-time first character expression data comprises: first character expression data produced by fusion transformation nodes; wherein the fusion transformation nodes are configured to be replaced by one or more replacement nodes.
  13. 根据权利要求1所述的方法,进一步包括:提供接口以使得能够保存、分享或下载所述游戏角色的第一角色表情数据。The method according to claim 1, further comprising: providing an interface to enable saving, sharing or downloading of the first character expression data of the game character.
  14. 一种生成游戏角色的表情的装置,包括:用于在所述游戏角色被创建时,获取所述游戏角色的面部模型数据的模块;用于获取表情特征数据的模块;用于依据所述面部模型数据和所述表情特征数据,生成所述游戏角色的第一角色表情数据的模块,其中所述第一角色表情数据被配置作为所创建的所述游戏角色的数据的一部分;和用于依据所述第一角色表情数据进行渲染,以生成所述游戏角色的表情的模块。An apparatus for generating expressions of a game character, comprising: a module for obtaining facial model data of the game character when the game character is created; a module for obtaining expression characteristic data; Model data and the expression feature data, a module for generating first character expression data of the game character, wherein the first character expression data is configured as a part of the created data of the game character; and used as a basis A module for rendering the first character expression data to generate the expression of the game character.
  15. A computer program comprising computer-readable code which, when run on a client device, causes the client device to perform the method for generating an expression of a game character according to any one of claims 1 to 13.
  16. A computer-readable medium storing the computer program according to claim 15.
  17. A method for generating a real-time emoticon package in a game, comprising: receiving an instruction to generate a real-time emoticon package of a game character; acquiring facial model data of the game character; acquiring expression feature data of a current game user in real time; and generating the real-time emoticon package of the game character in real time according to the facial model data and the expression feature data.
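The four steps of claim 17 could be organized as a simple capture-and-assemble pipeline. The sketch below is purely illustrative; the class and function names, and the assumption that the camera module yields (coefficients, rotation) pairs, are hypothetical rather than taken from the application.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExpressionFrame:
    coefficients: list  # per-blend-shape weights captured from the camera
    rotation: list      # 3x3 head-pose rotation matrix, row-major

@dataclass
class EmoticonPackage:
    character_id: str
    face_model: dict                                  # facial model data
    frames: List[ExpressionFrame] = field(default_factory=list)

def generate_realtime_emoticon(character_id, face_model, capture_stream):
    """Assemble a real-time emoticon package from a stream of captured frames.

    `capture_stream` is assumed to yield (coefficients, rotation) pairs
    produced in real time by the client's camera module.
    """
    package = EmoticonPackage(character_id=character_id, face_model=face_model)
    for coefficients, rotation in capture_stream:
        package.frames.append(ExpressionFrame(coefficients, rotation))
    return package
```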
  18. The method according to claim 17, further comprising: sending the real-time emoticon package of the game character generated in real time by the current game user to a client of a peer game user; or receiving, from a client of a peer game user, a real-time emoticon package of a game character generated in real time by the peer game user.
  19. The method according to claim 17, wherein acquiring the expression feature data of the current game user in real time comprises: acquiring the expression feature data of the current game user by using a camera module of the client being operated by the current game user.
  20. The method according to claim 19, further comprising: performing an interpolation calculation on the facial model data together with the expression feature coefficients and the rotation matrix in the expression feature data of the current game user acquired from the camera module, to obtain real-time expression data of the game character, thereby forming the real-time emoticon package of the game character.
  21. The method according to claim 20, further comprising: before forming the real-time emoticon package of the game character, saving the facial model data as well as the expression feature coefficients and the rotation matrix in the expression feature data of the current game user acquired from the camera module within a predetermined time period.
  22. The method according to claim 21, further comprising: after playback of the real-time emoticon package is triggered, loading and rendering, in a game engine, the saved facial model data and the saved expression feature coefficients and rotation matrix in the expression feature data of the current game user, and performing an interpolation calculation to obtain the real-time expression data of the game character.
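The record-and-replay behavior of claims 21 and 22 amounts to saving captured frames inside a predetermined time window and later interpolating between the saved frames on playback. A minimal sketch under assumptions (timestamped frames, coefficient-only interpolation; rotations would be interpolated analogously, e.g. via quaternion slerp):

```python
def record_window(timed_frames, start, duration):
    """Keep only frames whose timestamp falls in [start, start + duration).

    `timed_frames` is assumed to be an iterable of (timestamp, frame) pairs.
    """
    return [frame for ts, frame in timed_frames if start <= ts < start + duration]

def replay(frames, steps_between=1):
    """Yield playback coefficient vectors, inserting `steps_between`
    interpolated steps between each pair of saved frames."""
    for prev, curr in zip(frames, frames[1:]):
        for i in range(1, steps_between + 1):
            t = i / (steps_between + 1)
            yield [
                (1 - t) * a + t * b
                for a, b in zip(prev["coefficients"], curr["coefficients"])
            ]
        yield list(curr["coefficients"])
```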
  23. The method according to claim 17, wherein the received instruction to generate the real-time emoticon package of the game character is an instruction to generate the real-time emoticon package of the game character triggered in a chat room of the current game.
  24. The method according to claim 18, further comprising: classifying and storing the sent or received real-time emoticon package according to at least one of character, emoticon category, and emoticon description keyword.
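The classified storage of claim 24 can be illustrated with a small in-memory index that filters on any combination of the three attributes. All names here are hypothetical; a real client would likely persist this to local storage or a server.

```python
class EmoticonStore:
    """Store emoticon packages tagged by character, category, and keywords."""

    def __init__(self):
        self._entries = []

    def add(self, package, character, category, keywords=()):
        self._entries.append({
            "package": package,
            "character": character,
            "category": category,
            "keywords": set(keywords),
        })

    def find(self, character=None, category=None, keyword=None):
        """Return packages matching every criterion that is not None."""
        return [
            e["package"] for e in self._entries
            if (character is None or e["character"] == character)
            and (category is None or e["category"] == category)
            and (keyword is None or keyword in e["keywords"])
        ]
```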
  25. The method according to claim 17, further comprising: adding audio, text, or picture data input in real time by the game user to the real-time emoticon package.
  26. The method according to claim 18, further comprising: parsing the real-time emoticon package of the same game character generated in real time by the peer game user and received from the client of the peer game user, and then applying the parsed real-time emoticon package to the game character on the client of the current game user, to generate character expression data of the game character of the current game user.
  27. The method according to claim 26, wherein the character expression data is fusion deformation (blend-shape) data relative to the facial model data, comprising at least one of: base mesh information, vertices, textures, triangle face information, and the individual fusion deformation information of the game character.
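The data fields enumerated in claim 27 map naturally onto a container type. The sketch below is one possible shape for such a record, with all fields optional because the claim requires only "at least one of" them; the class and field names are illustrative, not from the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FusionDeformationData:
    """Fusion deformation (blend-shape) data relative to the facial model."""
    base_mesh: Optional[dict] = None   # base mesh information
    vertices: Optional[list] = None    # vertex positions
    textures: Optional[list] = None    # texture / UV data
    triangles: Optional[list] = None   # triangle face index lists
    blend_shapes: List[dict] = field(default_factory=list)  # per-shape deltas

    def is_valid(self):
        """Claim 27 requires at least one populated field."""
        return any([self.base_mesh, self.vertices, self.textures,
                    self.triangles, self.blend_shapes])
```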
  28. The method according to claim 26, further comprising: performing rendering according to the character expression data to generate the expression of the game character; and highlighting the expression in one or more game plots of the game, or loading and displaying the expression when the user of the game character or a game plot of the game satisfies a specific condition.
  29. An apparatus for generating a real-time emoticon package in a game, comprising: a module configured to receive an instruction to generate a real-time emoticon package of a game character; a module configured to acquire facial model data of the game character; a module configured to acquire expression feature data of a current game user in real time; and a module configured to generate the real-time emoticon package of the game character in real time according to the facial model data and the expression feature data.
  30. A computer program comprising computer-readable code which, when run on a client device, causes the client device to perform the method for generating a real-time emoticon package in a game according to any one of claims 17 to 28.
  31. A computer-readable medium storing the computer program according to claim 30.
PCT/CN2020/112616 2020-04-17 2020-08-31 Method and apparatus for generating expression for game character WO2021208330A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010305320.3 2020-04-17
CN202010305331.1 2020-04-17
CN202010305331.1A CN111530087B (en) 2020-04-17 2020-04-17 Method and device for generating real-time expression package in game
CN202010305320.3A CN111530086B (en) 2020-04-17 2020-04-17 Method and device for generating expression of game role

Publications (1)

Publication Number Publication Date
WO2021208330A1 true WO2021208330A1 (en) 2021-10-21

Family

ID=78083684

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112616 WO2021208330A1 (en) 2020-04-17 2020-08-31 Method and apparatus for generating expression for game character

Country Status (1)

Country Link
WO (1) WO2021208330A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393599A (en) * 2007-09-19 2009-03-25 中国科学院自动化研究所 Game role control method based on human face expression
CN101944163A (en) * 2010-09-25 2011-01-12 德信互动科技(北京)有限公司 Method for realizing expression synchronization of game character through capturing face expression
US20130109478A1 (en) * 2011-11-01 2013-05-02 Konami Digital Entertainment Co., Ltd. Game device, method of controlling a game device, and non-transitory information storage medium
CN107154069A (en) * 2017-05-11 2017-09-12 上海微漫网络科技有限公司 A kind of data processing method and system based on virtual role
CN110517337A (en) * 2019-08-29 2019-11-29 成都数字天空科技有限公司 Cartoon role expression generation method, animation method and electronic equipment
CN110557625A (en) * 2019-09-17 2019-12-10 北京达佳互联信息技术有限公司 live virtual image broadcasting method, terminal, computer equipment and storage medium
CN111530087A (en) * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and device for generating real-time expression package in game
CN111530086A (en) * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and device for generating expression of game role


Similar Documents

Publication Publication Date Title
US10210002B2 (en) Method and apparatus of processing expression information in instant communication
CN107770626A (en) Processing method, image synthesizing method, device and the storage medium of video material
US9381429B2 (en) Compositing multiple scene shots into a video game clip
US20120028707A1 (en) Game animations with multi-dimensional video game data
EP2242281A2 (en) Method and apparatus for producing a three-dimensional image message in mobile terminals
CN111530087B (en) Method and device for generating real-time expression package in game
CN111530086B (en) Method and device for generating expression of game role
CN111294663B (en) Bullet screen processing method and device, electronic equipment and computer readable storage medium
KR101996973B1 (en) System and method for generating a video
US20150130816A1 (en) Computer-implemented methods and systems for creating multimedia animation presentations
KR100481588B1 (en) A method for manufacuturing and displaying a real type 2d video information program including a video, a audio, a caption and a message information
US10965629B1 (en) Method for generating imitated mobile messages on a chat writer server
US20140282000A1 (en) Animated character conversation generator
CN111530088B (en) Method and device for generating real-time expression picture of game role
WO2021208330A1 (en) Method and apparatus for generating expression for game character
CN107204026B (en) Method and device for displaying animation
CN108965101A (en) Conversation message processing method, device, storage medium and computer equipment
US20120021827A1 (en) Multi-dimensional video game world data recorder
US20150371661A1 (en) Conveying Audio Messages to Mobile Display Devices
KR100554374B1 (en) A Method for manufacuturing and displaying a real type 2D video information program including a video, a audio, a caption and a message information, and a memory devices recorded a program for displaying thereof
KR100816783B1 (en) 3d graphic display system and display device, and electronic message transfer system and display device
WO2018049682A1 (en) Virtual 3d scene production method and related device
CN110085244B (en) Live broadcast interaction method and device, electronic equipment and readable storage medium
CN112734940A (en) VR content playing and modifying method and device, computer equipment and storage medium
CN114125552A (en) Video data generation method and device, storage medium and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20931239

Country of ref document: EP

Kind code of ref document: A1