CN111530088B - Method and device for generating a real-time expression picture of a game character


Info

Publication number
CN111530088B
Authority
CN
China
Prior art keywords
data
real
game
expression
time
Prior art date
Legal status
Active
Application number
CN202010305537.4A
Other languages
Chinese (zh)
Other versions
CN111530088A (en)
Inventor
智慧嘉
张鹏
牛莉丽
Current Assignee
Perfect World Chongqing Interactive Technology Co ltd
Original Assignee
Perfect World Chongqing Interactive Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Perfect World Chongqing Interactive Technology Co ltd
Priority to CN202210312096.XA
Priority to CN202010305537.4A
Publication of CN111530088A
Application granted
Publication of CN111530088B

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A63F13/87 Communicating with other players during game play, e.g. by e-mail or chat
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a device for generating a real-time expression picture of a game character. The method comprises: receiving an instruction to generate the real-time expression picture; acquiring face model data of a first game character; acquiring first expression feature data of a first game user in real time; generating real-time expression data of the first game character in real time according to the face model data of the first game character and the first expression feature data; and synthesizing and rendering the real-time expression data together with current game scene image data to obtain the real-time expression picture.

Description

Method and device for generating a real-time expression picture of a game character
Technical Field
The invention relates to augmented reality technology in the field of electronic games, and in particular to a method and a device for generating real-time expression pictures of game characters.
Background
Many electronic game systems allow users (i.e., game players) to save and send game scene images for chat interaction, which fits the current trend of internet development and makes electronic games more engaging. However, the game characters in scene images generated by current electronic game systems typically have monotonous expressions and stiff movements, which cannot meet the expectations of modern users. The field lacks a simple, convenient, and fast method for generating real-time expression pictures during gameplay.
Disclosure of Invention
The invention provides a method for generating a real-time expression picture of a game character, comprising: receiving an instruction to generate the real-time expression picture; acquiring face model data of a first game character; acquiring first expression feature data of a first game user in real time; generating real-time expression data of the first game character in real time according to the face model data of the first game character and the first expression feature data; and synthesizing and rendering the real-time expression data together with current game scene image data to obtain the real-time expression picture.
The invention also provides a device for generating a real-time expression picture of a game character, comprising: a module for receiving an instruction to generate the real-time expression picture; a module for acquiring face model data of a first game character; a module for acquiring first expression feature data of a first game user in real time; a module for generating real-time expression data of the first game character in real time according to the face model data of the first game character and the first expression feature data; and a module for synthesizing and rendering the real-time expression data together with current game scene image data to obtain the real-time expression picture.
The invention further provides a system comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set that, when loaded and executed by the processor, implements the method described above.
The invention further provides a computer-readable medium having stored thereon at least one instruction, at least one program, a code set, or an instruction set that, when loaded and executed by a processor, implements the method described above.
The technical solution of the invention uses augmented reality (AR) technology to produce real-time expression pictures (in particular, real-time expression pictures of game characters). Compared with the prior art, the method and device are simple and convenient to operate; the expressions presented in the resulting pictures are realistic and finely detailed, conform to the facial muscle movement of the human body, and can be generated in real time within an electronic game.
Drawings
Fig. 1 is a flowchart of a method for generating a real-time expression picture of a game character according to an embodiment of the present invention.
Fig. 2 shows exemplary expression feature coefficients according to an embodiment of the present invention.
Fig. 3 shows exemplary character expression data according to an embodiment of the present invention.
Fig. 4A and 4B are schematic diagrams of an interface for generating an expression picture according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of generating an expression picture from preset expression data according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of an interface for generating a real-time expression picture according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of generating a real-time expression picture according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of an exemplary real-time expression picture generated according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of an exemplary electronic game system implementing an embodiment of the present invention.
Detailed Description
The content of the invention will now be described with reference to several exemplary embodiments. It should be understood that these embodiments are discussed only to enable those of ordinary skill in the art to better understand and implement the invention, and do not imply any limitation on the scope of the invention.
As used herein, the term "include" and its variants should be read as open-ended, meaning "including, but not limited to". The term "based on" should be read as "based, at least in part, on". The terms "one embodiment" and "an embodiment" should be read as "at least one embodiment". The term "another embodiment" should be read as "at least one other embodiment".
Fig. 1 shows a flowchart of a method for generating a real-time expression picture of a game character according to an embodiment of the present invention; its steps are described in turn below.
Receiving an instruction for generating a real-time expression picture
In embodiments of the present invention, a "game character" refers to any virtual character in an electronic game, preferably the virtual character that a user is currently manipulating. An "expression picture of a game character" refers to a picture presenting the image of the game character, in particular its facial expression. A "real-time expression picture of a game character" means that the picture shows a facial expression of the game character consistent with the real-time expression of the user currently manipulating that character. In an embodiment of the invention, the expression picture also presents the current game scene image.
In embodiments of the present invention, one or more game users may participate in generating the real-time expression picture. Where multiple game users participate, they may be referred to as a first game user, a second game user, a third game user, and so on, and the game characters they manipulate may likewise be referred to as a first game character, a second game character, a third game character, and so on. These naming rules are merely exemplary, and other rules may also be used. In embodiments of the invention, game users and game characters are numbered in one-to-one correspondence.
In an embodiment of the invention, the instruction is issued by a game user, for example when talking to other game users or to NPCs (non-player characters), or when taking a picture, capturing a screenshot, or recording the screen. In an embodiment, the instruction is issued by one game user and responded to by one or more other game users. A game user may issue the instruction in various ways, including but not limited to clicking a button, checking an option, or opening the camera module of the game terminal. In an embodiment, the instruction is issued by the electronic game system, for example when a game user or a game episode meets a particular condition. The real-time expression picture may be generated at any time, preferably while the game user is chatting; in that case, the received instruction is a generation instruction for the real-time expression picture triggered in the chat room of the current game, so that the real-time expression picture can be sent to other game users.
In one embodiment of the invention, the real-time expression picture is produced by a single game user; in another, it is produced jointly by multiple game users. In an embodiment, all game users can send and receive real-time expression pictures. In embodiments of the present invention, "other game users" include peer game users, i.e., game users who are chatting with the current game user; there may be one or more peer game users.
Obtaining face model data of the game character
In embodiments of the present invention, the face model data of a game character may be acquired at any stage and in any scene; preferably, it is acquired when the game character is created. Face model data for one or more game characters may be obtained. The "face model data" of a game character, i.e., its art-creation model, refers to the data of the character's face that appears expressionless (blank) after rendering; Fig. 3 includes an exemplary art model of a game character. In embodiments of the present invention, "model data" and "model" are used interchangeably. The art model of a game character is preset by the game developer and stored in the game data, and the face model data may be stored in any data format. In embodiments of the invention, "preset" is the opposite of "non-preset" or "real-time": preset data is produced in advance and need not be produced or generated in real time.
Obtaining expression feature data of the game user in real time
In the embodiment of the present invention, "expression feature data" refers to a general term of data capable of representing an expression. In an embodiment of the invention, the obtained expressive feature data is real-time. In an embodiment of the present invention, the expression feature data includes an expression feature coefficient. In an embodiment of the present invention, the expression feature data further includes a rotation matrix. In an embodiment of the present invention, the expression feature data may be acquired through a camera module (e.g., a camera on a mobile phone, a tablet, and a computer, or a video camera and a video camera). In the embodiment of the invention, the camera module can be used for shooting or detecting the face of the game user to acquire the expression characteristic data. In the embodiment of the invention, the expression characteristic data is acquired by the current game user in real time through the camera module. In an embodiment of the present invention, the camera module is a camera module in a client that is currently being manipulated by a game user. In embodiments of the present invention, the expression feature data may be stored in any data format. In the embodiment of the present invention, when there are a plurality of game users, the acquired expression feature data may be referred to as first expression feature data, second expression feature data, third expression feature data, and the like, respectively. In the embodiment of the invention, the game users, the game characters and the numbers of the expression characteristic data are in one-to-one correspondence. In an embodiment of the present invention, when there are a plurality of game users, the camera module is a camera module in a client that each game user is manipulating.
In the embodiment of the present invention, the "expressive feature coefficient" refers to data capable of representing a facial feature, which describes a shift from a facial feature in the absence of an expression in the form of a coefficient, and further describes a facial feature in the presence of an expression. In embodiments of the present invention, the rotation matrix has a meaning generally understood by those skilled in the art. In an embodiment of the present invention, software (e.g., ARKit software available from Ark Platforms Inc., where detailed information on the function is available in http:// www.arkit.io) may be used in conjunction with the camera module to obtain the expressive feature coefficients. The expressive feature coefficients are provided, for example, by the ARKit software (particularly the ARBlendshapeLocation function in the ARFaceAnchor module) in the form of blendShapes attributes or parameters. In embodiments of the present invention, the expressive feature coefficients may also be provided in other software or other forms. As is known in the art, in the ARKit software, the expressive feature coefficients provide a model that represents the shift coefficients of a series of facial features relative to the case of no expression. As is known in the art, the ARKit software currently provides expressive feature coefficients comprising 52 coefficients, distributed over the left eye, right eye, mouth, chin, eyebrows, cheeks, tongue, etc., each coefficient being a floating point number between 0 and 1, where 0 represents a null expression and 1 represents a maximum degree of expression. Fig. 2 illustrates an exemplary expressive feature coefficient in which an expression of opening and closing the left eye is shown, according to an embodiment of the invention. In the embodiment of the present invention, the calling and controlling of the ARKit software and the camera module can be realized by a hardware module (usually customized), and the calling and controlling can also be realized by a module in which the software and the hardware are mixed.
Generating real-time expression data of the game character in real time from the face model data and the expression feature data
In embodiments of the invention, the real-time expression picture of a game character is obtained based on the character's real-time expression data. Different real-time expression data may be generated for different game users or different game characters; for example, since a first game user and a second game user manipulate different game characters, the generated real-time expression data naturally differs as well. The real-time expression data enables the game character to show the expression represented by the expression feature data. The (real-time or non-real-time) expression data of a game character is configured such that, after rendering, it produces the character's expression. In embodiments of the present invention, the "expression data" of a game character (used interchangeably with "character expression data") is fusion deformation data relative to the face model data. In an embodiment, the expression data is mesh structure data of the face topology, including at least one of: base mesh information, vertices, textures, triangle patch information, and corresponding fusion deformation information of the game character. "Base mesh information", "vertices", "textures", and "triangle patch information" have the meanings commonly understood by those skilled in the art. For example, in ARKit the ARFaceGeometry class (exposed through ARFaceAnchor) specifies the underlying mesh and texture coordinates of the face; its mesh contains 1220 vertices and 2304 triangular patches. In embodiments of the present invention, the numbers of vertices and triangular patches may be chosen as needed, and the expression data may also be defined by other software or methods. Fig. 3 illustrates exemplary character expression data according to an embodiment of the present invention.
As can be seen from Fig. 3, the expression data of the game character represents fusion deformations at one or more locations relative to the face model data. In embodiments of the present invention, the "fusion deformation information" records the location and/or degree of these fusion deformations, and both may be chosen as needed. The locations may be the same as those indicated by the expression feature coefficients, and the degree values may be the same as the values of the expression feature coefficients. The locations selected for fusion deformation are generally concentrated in regions that most affect appearance, such as the eyes and the mouth.
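The sketch below writes out one plausible in-memory layout for the mesh and fusion-deformation data just described; every type and field name here is an illustrative assumption, not the patent's actual data format.

```swift
import simd

// Assumed face-topology mesh structure (names are illustrative only).
struct FaceMesh {
    var vertices: [SIMD3<Float>]          // e.g. 1220 vertices, as in ARFaceGeometry
    var textureCoordinates: [SIMD2<Float>]
    var triangleIndices: [Int32]          // e.g. 2304 triangular patches
}

// One fusion-deformation target: per-vertex offsets from the neutral face,
// tied to the facial location it affects (mirroring the coefficient names).
struct FusionDeformation {
    var location: String                  // e.g. "eyeBlinkLeft", "jawOpen"
    var vertexDeltas: [SIMD3<Float>]      // same length as the base mesh's vertices
}

// Character expression data: the "blank" art model plus its deformation targets.
struct CharacterExpressionData {
    var baseMesh: FaceMesh
    var deformations: [FusionDeformation]
}
```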
In an embodiment of the present invention, the "fusion warping" process may be implemented using software (e.g., R3DS Wrap software by the well-known Russian3DScanner LLC, whose functional details are available at https:// www.russian3dscanner.com /). In the embodiment of the present invention, the above-described operations may also be implemented using other methods or software as long as the object of the present invention can be achieved. Similar to the ARKit software, the calling of the R3DS Wrap software can be realized by programming a software program, setting a hardware module or a module mixed by software and hardware. In the embodiment of the invention, the module calling the ARKit software and the module calling the R3DS Wrap software can be the same module, can also be different modules which can exchange data with each other, and can also be different modules under the same system (can exchange data with each other). As is known in the art, R3DS Wrap software is a node-wise software that can be functional by selecting and connecting nodes. For example, embodiments of the invention may use a fusion morph (Blendshapes) node in R3DS Wrap software.
In an embodiment of the invention, the character expression data of the game character is obtained by interpolation calculation over the face model data and the expression feature data; "interpolation calculation" has the meaning generally understood by those skilled in the art. The generation of real-time expression data may be implemented, for example, by the ERA engine: the obtained expression feature coefficients and the associated rotation matrix are transmitted to the engine, and real-time character expression data is obtained by interpolation calculation over the face model data, the expression feature coefficients, and the rotation matrix.
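Since the patent leaves the interpolation unspecified, the sketch below (reusing the structures from the previous sketch) shows the standard blend-shape interpolation one would assume here: each output vertex is the base vertex plus the coefficient-weighted sum of the matching deformation deltas, with the head-pose rotation applied afterward.

```swift
// Assumed interpolation step for the real-time character expression data.
func interpolateExpression(model: CharacterExpressionData,
                           coefficients: [String: Float],
                           rotation: simd_float3x3) -> [SIMD3<Float>] {
    var result = model.baseMesh.vertices
    for deformation in model.deformations {
        // Skip targets whose coefficient is absent or zero.
        guard let weight = coefficients[deformation.location], weight > 0 else { continue }
        for i in result.indices {
            result[i] += weight * deformation.vertexDeltas[i]
        }
    }
    // Apply the head pose so the character's face turns with the user's head.
    return result.map { rotation * $0 }
}
```

A coefficient of 0 leaves the neutral face untouched and a coefficient of 1 applies the full deformation, matching the 0-to-1 range of the expression feature coefficients described earlier.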
The character expression data generated above may be stored in various data formats. In some embodiments, the expression data of a game character is not generated in real time but preset; likewise, the expression feature data may be preset rather than obtained in real time. Expression data generated from preset face model data and preset expression feature data may itself be considered preset.
Synthesizing and rendering the real-time expression data and the current game scene image data together to obtain a real-time expression picture
In embodiments of the invention, the expression picture may be generated using rendering techniques known in the art. The real-time expression data may relate to one or more game users. Where there are multiple game users, for example manipulating a first game character and a second game character, the real-time expression data corresponding to each game character may be composited and rendered together with the current game scene image data into a single real-time expression picture. One or more, or all, of the expression data participating in the composite rendering may be preset. For example, preset expression data of a first game character may be composited and rendered with the current game scene image data to form an expression picture; as another example, preset expression data of the first game character, real-time expression data of a second game character, and the current game scene image data may be composited and rendered together. Preset and non-preset expression data may be combined arbitrarily.
The real-time expression pictures generated above may be stored in a variety of data formats, and may be classified and stored according to at least one of character, expression category, and expression description keyword. A real-time expression picture may be edited, for example by adding audio, text, or picture data input in real time. In one embodiment, any game user can edit the real-time expression picture; in another, only the game users who participated in producing it can do so; in yet another, only the game user who initiated its production can. Multiple continuously composited and rendered real-time expression pictures may be saved as video data or animation data. Besides being saved, a real-time expression picture may also be shared or downloaded.
In an embodiment of the invention, the real-time expression picture can present not only the expression represented by the real-time expression data but also the posture represented by real-time pose data. The real-time pose data reflects the pose of the game character, including but not limited to the measurements (length, width, etc.) and positions of various parts of the body; the pose of the game character is the same as the pose of the game user currently manipulating it. In an embodiment, a method of obtaining real-time pose data comprises obtaining skeletal matrix data from the body model of the game character, and generating, in real time, real-time pose data (including the real-time expression data) of the game character according to the face model data, the expression feature data, and the skeletal matrix data. The skeletal matrix data can be obtained, for example, by using ARKit in conjunction with the camera module. The skeletal matrix data corresponding to the first game user and the first game character may be referred to as first skeletal matrix data.
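The patent does not say how the bone matrices produce the pose; a conventional reading is linear blend skinning, sketched below under that assumption (all names are again illustrative).

```swift
import simd

// Assumed skinning input: each vertex is bound to a few bones with weights.
struct SkinnedVertex {
    var restPosition: SIMD3<Float>
    var boneIndices: [Int]       // bones influencing this vertex
    var boneWeights: [Float]     // weights summing to 1
}

// Linear blend skinning: each posed vertex is the weight-blended sum of its
// bound bones' matrices applied to the rest-pose vertex.
func applyPose(vertices: [SkinnedVertex],
               boneMatrices: [simd_float4x4]) -> [SIMD3<Float>] {
    vertices.map { v in
        let rest = SIMD4<Float>(v.restPosition.x, v.restPosition.y, v.restPosition.z, 1)
        var blended = SIMD3<Float>(repeating: 0)
        for (i, bone) in v.boneIndices.enumerated() {
            let transformed = boneMatrices[bone] * rest
            blended += v.boneWeights[i] * SIMD3(transformed.x, transformed.y, transformed.z)
        }
        return blended
    }
}
```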
Exemplary embodiments of the present invention in the context of generating expression pictures are described below. When a user of the electronic game needs to generate an expression picture, for example when taking a picture, capturing a screenshot, or recording the screen, the user enters the interface shown in Figs. 4A and 4B, where the user can select the elements of the expression picture to be generated and, by clicking "expression", process the expression of the game character in that picture. Fig. 5 is a schematic diagram of generating an expression picture from preset expression data: the user selects preset expression data to generate the corresponding picture; in Fig. 5, the user makes the character show a corresponding expression by selecting preset expression data labeled "smile". The game user can also select real-time expression data: Fig. 6 shows an interface for generating real-time expression pictures, where the game user invokes the method of the present invention by checking the "real-time expression" option. Fig. 7 illustrates the generation of a real-time expression picture in which the game character makes a grimacing expression following the expression of the current game user. Fig. 8 shows an exemplary real-time expression picture, generated by clicking the "take picture" button on the right side of the screen. When the real-time expression picture involves multiple game characters, the different game users can each operate as above on their own game clients, or operate in turn on the same game client.
Fig. 9 shows a schematic diagram of an exemplary electronic game system implementing an embodiment of the present invention. Using this embodiment, a game user can conveniently generate, share, and play character expressions or expression pictures and related animations, either offline at the game client or in real time while playing. As the data source, the user photographs his or her own face with an image pickup apparatus (for example, a commercially sold mobile phone) using software, hardware, or a mixed software-hardware module implementing an embodiment of the invention, and supplies the captured pictures or video of the face to the face processing apparatus and the expression (or expression picture) generating apparatus used by the invention (for example, ARKit and R3DS Wrap). The game client shown in Fig. 9 is one embodiment of the present invention; it invokes ARKit and R3DS Wrap and controls the image pickup apparatus. The face processing apparatus and the expression generating apparatus process the supplied data, generate the related model or expression feature information, and provide the data to the ERA engine. Through the client of the game system, the user performs operations on characters, expressions, or expression pictures (which may invoke the camera apparatus, the face processing apparatus, and the expression generating apparatus), including self-timer capture, generation of character expressions or expression pictures, propagation and playback of character expression animations, saving of character expressions or animations, and retrieval of character expressions (retrieval may be performed through the ERA engine). The client software communicates with the game server over a computer network, so that real-time expression synchronization across multiple characters and clients can be supported, and character expressions can be propagated or shared among users.
The methods and apparatuses of embodiments of the present invention, and the steps and constituent modules thereof, may be implemented as needed as pure software modules (for example, a program written in Java), pure hardware modules (for example, a dedicated ASIC or FPGA chip), or modules combining software and hardware (for example, a firmware system storing fixed code).
The present invention also provides a computer-readable medium having stored thereon computer-readable instructions that, when executed, implement the methods of the embodiments of the present invention.
It will be appreciated by persons skilled in the art that the foregoing description is only exemplary of the invention and is not intended to limit the invention. The present invention may include various modifications and variations. Any modifications and variations within the spirit and scope of the present invention should be included within the scope of the present invention.
Various aspects of various embodiments are defined in the claims. These and other aspects of the various embodiments are specified in the following numbered clauses:
1. A method for generating a real-time expression picture of a game character, comprising the following steps:
receiving an instruction to generate the real-time expression picture;
acquiring face model data of a first game character;
acquiring first expression feature data of a first game user in real time;
generating real-time expression data of the first game character in real time according to the face model data of the first game character and the first expression feature data; and
synthesizing and rendering the real-time expression data together with current game scene image data to obtain the real-time expression picture.
2. The method of clause 1, wherein the operation of acquiring the first expression feature data of the first game user in real time comprises:
acquiring the expression feature data of the first game user by using a camera module in the client operated by the first game user.
3. The method of clause 2, further comprising:
performing interpolation calculation on the face model data of the first game character together with the expression feature coefficients and the rotation matrix in the expression feature data of the first game user acquired by the camera module, to obtain the real-time expression data.
4. The method of clause 1, wherein the received instruction is an instruction, triggered in a chat room of the current game, to generate the real-time expression picture so that it can be sent to the client of a peer game user.
5. The method of clause 1, wherein the real-time expression data of the first game character is fusion deformation data relative to the face model data of the first game character, comprising at least one of: base mesh information, vertices, textures, triangular patch information, and corresponding fusion deformation information of the game character.
6. The method of clause 1, further comprising:
adding audio, text, or picture data input by the first game user in real time into the synthesized and rendered real-time expression picture.
7. The method of clause 1, further comprising:
acquiring face model data of a second game character;
acquiring second expression feature data of a second game user in real time;
generating real-time expression data of the second game character in real time according to the face model data of the second game character and the second expression feature data; and
synthesizing and rendering the real-time expression data of the first game character, the real-time expression data of the second game character, and the current game scene image data together to obtain the real-time expression picture.
8. The method of clause 1, further comprising:
storing continuously synthesized and rendered real-time expression pictures as video data or animation data.
9. The method of clause 1, further comprising:
providing an interface to save, share, or download the synthesized and rendered real-time expression picture.
10. The method of clause 1, further comprising:
acquiring first skeleton matrix data from a body model of the first game character; and
generating, in real time, real-time pose data comprising the real-time expression data of the first game character according to the face model data of the first game character, the first expression feature data, and the first skeleton matrix data.
11. The method of clause 1, further comprising:
acquiring preset expression data of the first game character, wherein the preset expression data of the first game character is generated from the face model data of the first game character and preset expression feature data of the first game user; and
synthesizing and rendering the preset expression data of the first game character together with the current game scene image data to obtain the real-time expression picture.
12. The method of clause 1, further comprising:
classifying and storing the synthesized and rendered real-time expression pictures according to at least one of character, expression category, and expression description keyword.
13. An apparatus for generating a real-time expression picture of a game character, comprising:
a module for receiving an instruction to generate the real-time expression picture;
a module for acquiring face model data of a first game character;
a module for acquiring first expression feature data of a first game user in real time;
a module for generating real-time expression data of the first game character in real time according to the face model data of the first game character and the first expression feature data; and
a module for synthesizing and rendering the real-time expression data together with current game scene image data to obtain the real-time expression picture.
14. The apparatus of clause 13, wherein the operation of acquiring the first expression feature data of the first game user in real time comprises:
acquiring the expression feature data of the first game user by using a camera module in the client operated by the first game user.
15. The apparatus of clause 14, further comprising:
a module for performing interpolation calculation on the face model data of the first game character together with the expression feature coefficients and the rotation matrix in the expression feature data of the first game user acquired by the camera module, to obtain the real-time expression data.
16. The apparatus of clause 13, wherein the received instruction is an instruction, triggered in a chat room of the current game, to generate the real-time expression picture so that it can be sent to the client of a peer game user.
17. The apparatus of clause 13, wherein the real-time expression data of the first game character is fusion deformation data relative to the face model data of the first game character, comprising at least one of: base mesh information, vertices, textures, triangular patch information, and corresponding fusion deformation information of the game character.
18. The apparatus of clause 13, further comprising:
a module for adding audio, text, or picture data input by the first game user in real time into the synthesized and rendered real-time expression picture.
19. The apparatus of clause 13, further comprising:
a module for acquiring face model data of a second game character;
a module for acquiring second expression feature data of a second game user in real time;
a module for generating real-time expression data of the second game character in real time according to the face model data of the second game character and the second expression feature data; and
a module for synthesizing and rendering the real-time expression data of the first game character, the real-time expression data of the second game character, and the current game scene image data together to obtain the real-time expression picture.
20. The apparatus of clause 13, further comprising:
a module for storing continuously synthesized and rendered real-time expression pictures as video data or animation data.
21. The apparatus of clause 13, further comprising:
a module for saving, sharing, or downloading the synthesized and rendered real-time expression picture.
22. The apparatus of clause 13, further comprising:
a module for acquiring first skeleton matrix data from a body model of the first game character; and
a module for generating, in real time, real-time pose data comprising the real-time expression data of the first game character according to the face model data of the first game character, the first expression feature data, and the first skeleton matrix data.
23. The apparatus of clause 13, further comprising:
a module for acquiring preset expression data of the first game character, wherein the preset expression data of the first game character is generated from the face model data of the first game character and preset expression feature data of the first game user; and
a module for synthesizing and rendering the preset expression data of the first game character together with the current game scene image data to obtain the real-time expression picture.
24. The apparatus of clause 13, further comprising:
a module for classifying and storing the synthesized and rendered real-time expression pictures according to at least one of character, expression category, and expression description keyword.
25. A system comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a code set, or an instruction set that, when loaded and executed by the processor, implements the method of any one of clauses 1-12.
26. A computer-readable medium having stored thereon at least one instruction, at least one program, a code set, or an instruction set that, when loaded and executed by a processor, implements the method of any one of clauses 1-12.

Claims (22)

1. A method for generating a real-time expression picture of a game character during game play, comprising the following steps:
receiving an instruction to generate the real-time expression picture, wherein the received instruction is initiated by a first game user to generate the real-time expression picture together with a second game user or an NPC (non-player character);
acquiring face model data of a first game character manipulated by the first game user;
acquiring first expression feature data of the first game user in real time;
generating real-time expression data of the first game character in real time according to the face model data of the first game character and the first expression feature data;
acquiring face model data of a second game character manipulated by the second game user;
acquiring second expression feature data of the second game user in real time;
generating real-time expression data of the second game character in real time according to the face model data of the second game character and the second expression feature data; and
synthesizing and rendering the real-time expression data of the first game character, the real-time expression data of the second game character, and real-time game scene image data together to obtain the real-time expression picture, wherein the first game character and the second game character are currently in the real-time game scene.
2. The method of claim 1, wherein the operation of acquiring the first expression feature data of the first game user in real time comprises:
acquiring the expression feature data of the first game user by using a camera module in the client operated by the first game user.
3. The method of claim 2, further comprising:
performing interpolation calculation on the face model data of the first game character together with the expression feature coefficients and the rotation matrix in the expression feature data of the first game user acquired by the camera module, to obtain the real-time expression data.
4. The method of claim 1, wherein the real-time expression data of the first game character is fusion deformation data relative to the face model data of the first game character, comprising at least one of: base mesh information, vertices, textures, triangular patch information, and corresponding fusion deformation information of the game character.
5. The method of claim 1, further comprising:
adding audio, text, or picture data input by the first game user in real time into the synthesized and rendered real-time expression picture.
6. The method of claim 1, further comprising:
storing continuously synthesized and rendered real-time expression pictures as video data or animation data.
7. The method of claim 1, further comprising:
providing an interface to save, share, or download the synthesized and rendered real-time expression picture.
8. The method of claim 1, further comprising:
acquiring first skeleton matrix data from a body model of the first game character; and
generating, in real time, real-time pose data comprising the real-time expression data of the first game character according to the face model data of the first game character, the first expression feature data, and the first skeleton matrix data.
9. The method of claim 1, further comprising:
acquiring preset expression data of the first game character, wherein the preset expression data of the first game character is generated from the face model data of the first game character and preset expression feature data of the first game user; and
synthesizing and rendering the preset expression data of the first game character together with the real-time game scene image data to obtain the real-time expression picture.
10. The method of claim 1, further comprising:
classifying and storing the synthesized and rendered real-time expression pictures according to at least one of character, expression category, and expression description keyword.
11. An apparatus for generating a real-time expression picture of a game character during game play, comprising:
a module for receiving an instruction to generate the real-time expression picture, wherein the received instruction is initiated by a first game user to generate the real-time expression picture together with a second game user or an NPC (non-player character);
a module for acquiring face model data of a first game character manipulated by the first game user;
a module for acquiring first expression feature data of the first game user in real time;
a module for generating real-time expression data of the first game character in real time according to the face model data of the first game character and the first expression feature data;
a module for acquiring face model data of the second game character;
a module for acquiring second expression feature data of the second game user in real time;
a module for generating real-time expression data of the second game character in real time according to the face model data of the second game character and the second expression feature data; and
a module for synthesizing and rendering the real-time expression data of the first game character, the real-time expression data of the second game character, and real-time game scene image data together to obtain the real-time expression picture, wherein the first game character and the second game character are currently in the real-time game scene.
12. The apparatus of claim 11, wherein the operation of acquiring the first expression feature data of the first game user in real time comprises:
acquiring the expression feature data of the first game user by using a camera module in the client operated by the first game user.
13. The apparatus of claim 12, further comprising:
a module for performing interpolation calculation on the face model data of the first game character together with the expression feature coefficients and the rotation matrix in the expression feature data of the first game user acquired by the camera module, to obtain the real-time expression data.
14. The apparatus of claim 11, wherein the real-time expression data of the first game character is fusion deformation data relative to the face model data of the first game character, comprising at least one of: base mesh information, vertices, textures, triangular patch information, and corresponding fusion deformation information of the game character.
15. The apparatus of claim 11, further comprising:
a module for adding audio, text, or picture data input by the first game user in real time into the synthesized and rendered real-time expression picture.
16. The apparatus of claim 11, further comprising:
a module for storing continuously synthesized and rendered real-time expression pictures as video data or animation data.
17. The apparatus of claim 11, further comprising:
a module for saving, sharing, or downloading the synthesized and rendered real-time expression picture.
18. The apparatus of claim 11, further comprising:
a module for acquiring first skeleton matrix data from a body model of the first game character; and
a module for generating, in real time, real-time pose data comprising the real-time expression data of the first game character according to the face model data of the first game character, the first expression feature data, and the first skeleton matrix data.
19. The apparatus of claim 11, further comprising:
a module for acquiring preset expression data of the first game character, wherein the preset expression data of the first game character is generated from the face model data of the first game character and preset expression feature data of the first game user; and
a module for synthesizing and rendering the preset expression data of the first game character together with the real-time game scene image data to obtain the real-time expression picture.
20. The apparatus of claim 11, further comprising:
a module for classifying and storing the synthesized and rendered real-time expression pictures according to at least one of character, expression category, and expression description keyword.
21. A system comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a code set, or an instruction set that, when loaded and executed by the processor, implements the method of any one of claims 1-10.
22. A computer-readable medium having stored thereon at least one instruction, at least one program, a code set, or an instruction set that, when loaded and executed by a processor, implements the method of any one of claims 1-10.
CN202010305537.4A 2020-04-17 2020-04-17 Method and device for generating a real-time expression picture of a game character Active CN111530088B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210312096.XA 2020-04-17 2020-04-17 Method for generating a real-time expression picture of a game character
CN202010305537.4A 2020-04-17 2020-04-17 Method and device for generating a real-time expression picture of a game character

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010305537.4A 2020-04-17 2020-04-17 Method and device for generating a real-time expression picture of a game character

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210312096.XA Division 2020-04-17 2020-04-17 Method for generating a real-time expression picture of a game character

Publications (2)

Publication Number Publication Date
CN111530088A (en) 2020-08-14
CN111530088B (en) 2022-04-22

Family

ID=71970817

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010305537.4A Active 2020-04-17 2020-04-17 Method and device for generating a real-time expression picture of a game character
CN202210312096.XA Pending 2020-04-17 2020-04-17 Method for generating a real-time expression picture of a game character

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210312096.XA Pending 2020-04-17 2020-04-17 Method for generating a real-time expression picture of a game character

Country Status (1)

Country Link
CN (2) CN111530088B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070873B (en) * 2020-08-26 2021-08-20 完美世界(北京)软件科技发展有限公司 Model rendering method and device
CN112749357B (en) * 2020-09-15 2024-02-06 腾讯科技(深圳)有限公司 Interaction method and device based on shared content and computer equipment
CN113559503B (en) * 2021-06-30 2024-03-12 上海掌门科技有限公司 Video generation method, device and computer readable medium
WO2023184357A1 (en) * 2022-03-31 2023-10-05 云智联网络科技(北京)有限公司 Expression model making method and apparatus, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622774A (en) * 2011-01-31 2012-08-01 微软公司 Living room movie creation
CN105338369A (en) * 2015-10-28 2016-02-17 北京七维视觉科技有限公司 Method and apparatus for synthetizing animations in videos in real time
CN105654537A (en) * 2015-12-30 2016-06-08 中国科学院自动化研究所 Expression cloning method and device capable of realizing real-time interaction with virtual character
CN106447785A (en) * 2016-09-30 2017-02-22 北京奇虎科技有限公司 Method for driving virtual character and device thereof
CN108648251A (en) * 2018-05-15 2018-10-12 深圳奥比中光科技有限公司 3D expressions production method and system
CN108635860A (en) * 2018-07-24 2018-10-12 合肥爱玩动漫有限公司 A kind of game role expression production method in playing
CN108986189A (en) * 2018-06-21 2018-12-11 珠海金山网络游戏科技有限公司 Method and system based on real time multi-human motion capture in three-dimensional animation and live streaming

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622774A (en) * 2011-01-31 2012-08-01 微软公司 Living room movie creation
CN105338369A (en) * 2015-10-28 2016-02-17 北京七维视觉科技有限公司 Method and apparatus for synthetizing animations in videos in real time
CN105654537A (en) * 2015-12-30 2016-06-08 中国科学院自动化研究所 Expression cloning method and device capable of realizing real-time interaction with virtual character
CN106447785A (en) * 2016-09-30 2017-02-22 北京奇虎科技有限公司 Method for driving virtual character and device thereof
CN108648251A (en) * 2018-05-15 2018-10-12 深圳奥比中光科技有限公司 3D expressions production method and system
CN108986189A (en) * 2018-06-21 2018-12-11 珠海金山网络游戏科技有限公司 Method and system based on real time multi-human motion capture in three-dimensional animation and live streaming
CN108635860A (en) * 2018-07-24 2018-10-12 合肥爱玩动漫有限公司 A kind of game role expression production method in playing

Also Published As

Publication number Publication date
CN114984585A (en) 2022-09-02
CN111530088A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
CN111530088B (en) Method and device for generating real-time expression picture of game role
CN112037311B (en) Animation generation method, animation playing method and related devices
CN111530086B (en) Method and device for generating expression of game role
US9747495B2 (en) Systems and methods for creating and distributing modifiable animated video messages
US20120028707A1 (en) Game animations with multi-dimensional video game data
US20100060662A1 (en) Visual identifiers for virtual world avatars
JP2000511368A (en) System and method for integrating user image into audiovisual representation
US20100201693A1 (en) System and method for audience participation event with digital avatars
JP3623415B2 (en) Avatar display device, avatar display method and storage medium in virtual space communication system
JP5169111B2 (en) Composite image output apparatus and composite image output processing program
CN111530087B (en) Method and device for generating real-time expression package in game
CN114095744A (en) Video live broadcast method and device, electronic equipment and readable storage medium
CN115362474A (en) Scoods and hairstyles in modifiable video for custom multimedia messaging applications
KR101996973B1 (en) System and method for generating a video
KR20160134883A (en) Digital actor managing method for image contents
JP2006263122A (en) Game apparatus, game system, game data processing method, program for game data processing method and storage medium
CN116017082A (en) Information processing method and electronic equipment
JP6313003B2 (en) Karaoke apparatus, image output method, and program
WO2021208330A1 (en) Method and apparatus for generating expression for game character
Ballin et al. Personal virtual humans—inhabiting the TalkZone and beyond
JP4168803B2 (en) Image output device
JP2021060779A (en) Image processing program and image processing method
JP2022068550A (en) Game program, game processing method, and information processing device
CN117357890A (en) Archiving processing method and device for game, electronic equipment and computer readable storage medium
Staadt et al. JAPE: A prototyping system for collaborative virtual environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant