CN111028322A - Game animation expression generation method and device and electronic equipment

Game animation expression generation method and device and electronic equipment

Info

Publication number
CN111028322A
CN111028322A (Application No. CN201911315756.4A)
Authority
CN
China
Prior art keywords
expression
animation
facial
video data
game
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911315756.4A
Other languages
Chinese (zh)
Inventor
韩壮壮 (Han Zhuangzhuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Pixel Software Technology Co Ltd
Original Assignee
Beijing Pixel Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Pixel Software Technology Co Ltd filed Critical Beijing Pixel Software Technology Co Ltd
Priority to CN201911315756.4A
Publication of CN111028322A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G06V 40/176 Dynamic expression
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a game animation expression generation method and device and an electronic device. Facial motion video data are first acquired; expression features corresponding to the facial motion video data are then extracted according to the facial motion video data and a preset feature extraction model; an animation expression is generated based on a preset skeleton skin model and the expression features, and the animation expression is saved as a game animation expression in a preset format. Because the method uses a pre-established feature extraction model, the expression features can be obtained efficiently and conveniently, the animation expression generated from those features is saved directly as a game animation expression in the preset format, and the cost of generating game animation expressions is reduced.

Description

Game animation expression generation method and device and electronic equipment
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a game animation expression generation method and device and an electronic device.
Background
In the related art, most animation capture technologies rely on complex and expensive equipment, and motion capture devices have a high learning cost, so they can only be used after a period of training. In addition, most captured data is produced in formats such as C3D and FBX, which are difficult to connect seamlessly to mainstream game engines, so the captured data must be converted before it can be used. As a result, generating game animation expressions with animation capture technology is inefficient and costly.
Disclosure of Invention
In view of the above, the present invention provides a game animation expression generation method and device and an electronic device, so as to generate animation expressions efficiently and conveniently and to reduce the cost of generating animation expressions.
In a first aspect, an embodiment of the present invention provides a game animation expression generation method, including: acquiring facial motion video data; extracting expression features corresponding to the facial motion video data according to the facial motion video data and a preset feature extraction model; generating an animation expression based on a preset skeleton skin model and the expression features; and saving the animation expression as a game animation expression in a preset format.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, wherein the facial motion video data includes a plurality of facial images arranged in a set order; and the step of extracting expression features corresponding to the facial motion video data according to the facial motion video data and a preset feature extraction model includes: sequentially inputting the facial images into the preset feature extraction model, and extracting the real-time facial features of each facial image through the feature extraction model; and determining the real-time facial features, arranged in the set order, as the expression features corresponding to the facial motion video data.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, wherein the step of generating an animation expression based on a preset skeleton skin model and the expression features includes: sequentially inputting the real-time facial features as parameters into the skeleton skin model to generate a plurality of corresponding expression images; and generating the animation expression based on the plurality of expression images.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, wherein the step of saving the animation expression as a game animation expression in a preset format includes: performing animation baking on the animation expression, and saving the baked animation expression as a game animation expression in the preset format.
In a second aspect, an embodiment of the present invention further provides a game animation expression generation device, including: a data acquisition module configured to acquire facial motion video data; a feature extraction module configured to extract expression features corresponding to the facial motion video data according to the facial motion video data and a preset feature extraction model; an animation expression generation module configured to generate an animation expression based on a preset skeleton skin model and the expression features; and a game animation expression storage module configured to save the animation expression as a game animation expression in a preset format.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, wherein the facial motion video data includes a plurality of facial images arranged in a set order; and the feature extraction module is further configured to: sequentially input the facial images into the preset feature extraction model, and extract the real-time facial features of each facial image through the feature extraction model; and determine the real-time facial features, arranged in the set order, as the expression features corresponding to the facial motion video data.
With reference to the first possible implementation manner of the second aspect, an embodiment of the present invention provides a second possible implementation manner of the second aspect, wherein the animation expression generation module is further configured to: sequentially input the real-time facial features as parameters into the skeleton skin model to generate a plurality of corresponding expression images; and generate the animation expression based on the plurality of expression images.
With reference to the second aspect, an embodiment of the present invention provides a third possible implementation manner of the second aspect, wherein the game animation expression storage module is further configured to: perform animation baking on the animation expression, and save the baked animation expression as a game animation expression in the preset format.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor executes the machine executable instructions to implement the game animation expression generation method.
In a fourth aspect, embodiments of the present invention further provide a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to implement the game animation expression generation method.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a game animation expression generation method and device and an electronic device. Facial motion video data are first acquired; expression features corresponding to the facial motion video data are then extracted according to the facial motion video data and a preset feature extraction model; and an animation expression is generated based on a preset skeleton skin model and the expression features and saved as a game animation expression in a preset format. Because the method uses a pre-established feature extraction model, the expression features can be obtained efficiently and conveniently, the animation expression generated from those features is saved directly as a game animation expression in the preset format, and the cost of generating game animation expressions is reduced.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description or may be learned by practice of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a method for generating animation expressions of a game according to an embodiment of the present invention;
FIG. 2 is a flow chart of another game animation expression generation method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a game animation expression generating device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, with the rapid iteration of MMORPG (Massively Multiplayer Online Role-Playing Game) production technology, game characters and their expressions have become vivid and lifelike, and such characters are widely adopted by game manufacturers. Motion capture and facial expression capture techniques have developed accordingly: facial expression data of actors are captured and, after processing, applied to virtual characters to obtain vivid animation expressions.
Most existing animation capture technologies use complex and expensive equipment, such as professional motion capture systems like OptiTrack and FACEGOOD for face capture. Moreover, most of the data generated by capture is in formats such as C3D and FBX, which are difficult to connect seamlessly to mainstream game engines, so the captured data must be converted before use: only after capture and processing by the motion capture toolchain is the animation file available to the game engine. In addition, games often do not need film-level facial expression precision, and storing such high-precision data wastes game resource space.
Existing motion capture devices also have a relatively high learning cost and usually require a period of training before they can be used, and processing the captured data takes additional time.
Based on this, the embodiment of the invention provides a game animation expression generation method, a game animation expression generation device and electronic equipment, which can be applied to animation expression generation scenes in various games.
To facilitate understanding of the embodiment, a detailed description will be first given of a game animation expression generation method disclosed in the embodiment of the present invention.
The embodiment of the invention provides a game animation expression generation method. As shown in FIG. 1, the method comprises the following steps:
step S100, acquiring face motion video data; the facial motion video data can be recorded in the process of making expressions according to game animation expressions which are required to be obtained by related personnel, and the video data can be obtained through image acquisition equipment, such as a mobile phone camera or a camera. The facial motion video data typically includes a plurality of facial expression images.
Step S102: extract expression features corresponding to the facial motion video data according to the facial motion video data and a preset feature extraction model.
The feature extraction model can be built on the ARKit platform. ARKit is an AR (Augmented Reality) development platform that developers can use independently. ARKit uses a technique called visual-inertial odometry: it combines information from the motion sensors of the device that captured the images with computer-vision analysis of the captured images, identifying salient features in the scene, tracking how the positions of these features change from frame to frame of the video, and comparing this information with the motion-sensing data. The result is a high-accuracy model of the device's position and motion.
When the feature extraction model is built, the sensor information of the image-capturing device can be taken into account; alternatively, the device can be assumed to be stationary, and only computer-vision analysis of the captured images is used to obtain the feature extraction model, which is the common way the ARKit platform is applied. When the facial motion video data comprises a plurality of facial expression images, the real-time expression features of each image can be extracted in sequence through the feature extraction model, and the expression features corresponding to the facial motion video data are finally generated from these real-time features. The expression features generally include motion data of the facial features (eyes, nose, mouth, and so on) and the facial bones during the facial movement, that is, during the expression.
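For illustration only, the sketch below shows one plausible form the per-frame "real-time facial features" could take when ARKit is used: the blend-shape coefficients that ARKit reports for a tracked face. The ExpressionFeature type and the function name are assumptions introduced here, not terms from the patent.

```swift
import ARKit

// Illustrative feature type (not from the patent): one frame's expression
// features as a dictionary of ARKit blend-shape coefficients in [0, 1].
struct ExpressionFeature {
    let timestamp: TimeInterval
    let blendShapes: [ARFaceAnchor.BlendShapeLocation: Float]
}

// Extract real-time facial features from one face anchor (one video frame).
func extractFeature(from anchor: ARFaceAnchor, at timestamp: TimeInterval) -> ExpressionFeature {
    var shapes: [ARFaceAnchor.BlendShapeLocation: Float] = [:]
    for (location, value) in anchor.blendShapes {
        shapes[location] = value.floatValue   // NSNumber -> Float
    }
    return ExpressionFeature(timestamp: timestamp, blendShapes: shapes)
}
```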
Step S104: generate an animation expression based on a preset skeleton skin model and the expression features.
Skinning is a production technique for three-dimensional animation used in 3D games. Bones are added to a model created in three-dimensional software; because the skeleton and the model are independent of each other, the model must be bound to the skeleton so that the skeleton can drive the model to move in a reasonable way, and this binding technique is called skinning. The skeleton skin model is built with the skinning technique in combination with a character model in the game, and may include the facial features, facial bones, and so on. The motion data of the facial features and facial bones contained in the expression features is matched to the facial features and facial bones of the skeleton skin model, so that when the expression is performed, the facial features and bones of the model move along the trajectories recorded in the facial motion video, thereby generating the animation expression.
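The following sketch illustrates, under stated assumptions, how one frame of expression features might be matched to the skeleton skin model: each facial bone carries a per-feature influence, and the captured feature weights blend those influences into a pose. The Bone type, the influence table, and the linear blending are illustrative simplifications, not the patent's or any engine's actual data structures.

```swift
import simd

// Illustrative skeleton types (assumptions, not an engine API): each facial
// bone has a rest position and a per-feature influence describing how far it
// moves when a given expression feature (e.g. "jawOpen") is fully active.
struct Bone {
    var name: String
    var restTranslation: SIMD3<Float>
    var influences: [String: SIMD3<Float>]   // feature name -> offset at weight 1.0
}

// Drive the skeleton with one frame of expression features (feature name -> weight),
// i.e. the matching of captured facial-feature motion to the model's facial bones
// described above. Returns the resulting translation of every bone for this frame.
func applyFeatures(_ features: [String: Float], to bones: [Bone]) -> [String: SIMD3<Float>] {
    var pose: [String: SIMD3<Float>] = [:]
    for bone in bones {
        var translation = bone.restTranslation
        for (feature, weight) in features {
            if let offset = bone.influences[feature] {
                translation += offset * weight   // linear blend of feature influences
            }
        }
        pose[bone.name] = translation
    }
    return pose
}
```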
Step S106: save the animation expression as a game animation expression in a preset format. The preset format can be a file format commonly used in game development; an animation expression stored in this format can be used directly during game development and is then referred to as a game animation expression.
The embodiment of the invention provides a game animation expression generation method: facial motion video data are first acquired; expression features corresponding to the facial motion video data are then extracted according to the facial motion video data and a preset feature extraction model; and an animation expression is generated based on a preset skeleton skin model and the expression features and saved as a game animation expression in a preset format. Because the method uses a pre-established feature extraction model, the expression features can be obtained efficiently and conveniently, the animation expression generated from those features is saved directly as a game animation expression in the preset format, and the cost of generating game animation expressions is reduced.
The embodiment of the invention also provides another game animation expression generation method, which is implemented on the basis of the method in the above embodiment; as shown in FIG. 2, the method comprises the following steps:
step S200, acquiring face motion video data; the face motion video data includes face images arranged in a plurality of predetermined orders.
Step S202: sequentially input the facial images into a preset feature extraction model, and extract the real-time facial features of each facial image through the feature extraction model. The feature extraction model is built on the ARKit platform, and a computer-vision analysis method is usually used to extract the real-time facial features from each facial image.
Step S204: determine the real-time facial features, arranged in the set order, as the expression features corresponding to the facial motion video data. The set order is the same as the order of the facial images from which the real-time facial features were extracted.
Step S206: sequentially input the real-time facial features as parameters into the skeleton skin model to generate a plurality of corresponding expression images.
Step S208: generate an animation expression based on the plurality of expression images. The animation expression can be produced by playing back the expression images at a certain frame rate.
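A minimal sketch of what "playing back the expression images at a certain frame rate" can amount to in data terms is given below: an ordered list of per-frame poses together with a frame rate. The AnimationClip type is an assumption made for illustration, not a structure defined by the patent.

```swift
import simd

// Illustrative clip type (assumption): an animation expression as an ordered
// list of per-frame poses played back at a fixed frame rate.
struct AnimationClip {
    let frameRate: Double                       // e.g. 30 frames per second
    let frames: [[String: SIMD3<Float>]]        // one pose per expression image

    var duration: Double { Double(frames.count) / frameRate }

    // Nearest-frame lookup: which pose to show at a given playback time.
    func pose(at time: Double) -> [String: SIMD3<Float>]? {
        guard !frames.isEmpty else { return nil }
        let index = min(Int(time * frameRate), frames.count - 1)
        return frames[max(index, 0)]
    }
}
```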
Step S210: perform animation baking on the animation expression, and save the baked animation expression as a game animation expression in a preset format. The baking process can be tuned to the desired animation effect; after baking, the animation expression plays back more smoothly and naturally.
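Animation baking is not described in detail here, so the sketch below only illustrates the usual meaning of the term: sampling an animated channel at a fixed frame rate and storing explicit keyframes, so that the runtime no longer needs the capture data or the feature-to-bone mapping. The Keyframe type and the bake function are assumptions made for illustration.

```swift
// Minimal sketch of "baking" (an assumption about what the engine does here):
// the continuously driven value of one animated channel is sampled at a fixed
// frame rate and stored as explicit keyframes.
struct Keyframe {
    let time: Double
    let value: Float
}

func bake(channel: (Double) -> Float, duration: Double, frameRate: Double) -> [Keyframe] {
    let frameCount = Int((duration * frameRate).rounded(.up))
    return (0...frameCount).map { frame in
        let t = Double(frame) / frameRate
        return Keyframe(time: t, value: channel(min(t, duration)))
    }
}
```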
In this game animation expression generation method, facial motion video data are first acquired; the facial images are then sequentially input into a preset feature extraction model, the real-time facial features of each facial image are extracted through the feature extraction model, and the real-time facial features arranged in the set order are determined as the expression features corresponding to the facial motion video data; an animation expression is generated based on a preset skeleton skin model and the expression features, the animation expression is baked, and the baked result is saved as a game animation expression in a preset format. Because a pre-established feature extraction model is used, the expression features can be obtained efficiently and conveniently, and the cost of generating game animation expressions is reduced.
In a specific implementation, the game animation expression generation method can be realized by combining ARKit with Unity: a facial expression video (equivalent to the facial motion video above) is captured with a mobile phone, the video is processed by mobile phone software built on the ARKit platform to obtain facial expression features, and the features are sent over the network to a game engine developed with Unity. In the engine, the facial expression features are processed directly into animation assets usable by the game, so a large number of animation clips of adequate fidelity can be recorded quickly, meeting the fast iteration requirements of game development.
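The patent does not specify the wire format used between the phone and the Unity-based engine; the sketch below assumes JSON over TCP, using Apple's Network framework on the phone side, purely for illustration. FeaturePacket, the example host address, and the port number are assumptions, not part of the described system.

```swift
import Foundation
import Network

// Sketch of the phone-to-engine link (assumptions: JSON over TCP; the patent
// only says the features are sent over the network to the Unity-based engine
// on the same LAN).
struct FeaturePacket: Codable {
    let timestamp: Double
    let weights: [String: Float]    // e.g. "jawOpen": 0.42
}

final class FeatureSender {
    private let connection: NWConnection

    // host is the IP address shown on the engine's recording panel (see step two
    // of the workflow below); the port and the JSON format are illustrative.
    init(host: NWEndpoint.Host, port: NWEndpoint.Port) {
        connection = NWConnection(host: host, port: port, using: .tcp)
        connection.start(queue: .global())
    }

    func send(_ packet: FeaturePacket) {
        guard let data = try? JSONEncoder().encode(packet) else { return }
        connection.send(content: data, completion: .contentProcessed { error in
            if let error = error { print("send failed: \(error)") }
        })
    }
}

// Usage (illustrative values): let sender = FeatureSender(host: "192.168.1.20", port: 9000)
```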
The method for generating game animation expressions using ARKit and Unity comprises the following steps:
the method comprises the following steps: creating a suitable bone covering model; the skeletal skin can be established for the relevant technician according to the demo (prototype) of the game; the characteristics of the skeleton skin model are matched with those captured by the mobile phone end, and if the characteristics comprise five sense organs, facial skeleton and the like. The bone skinning model can be built through 3d max software, and after being built, the bone skinning model is imported into a game engine developed based on Unity; the game engine runs on a computer.
Step two: open the recording panel of the game engine, which displays the IP address of the computer. On an iPhone X or later model, open the client of the mobile phone software built on ARKit and establish a communication link between the phone and the computer through that IP address.
Step three: after the communication link is established, record a facial expression video of the operator with the phone camera; the client software built on ARKit generates the facial expression features and sends them to the game engine. During recording, a frame buffer is used to absorb network delay (a minimal sketch of such a buffer is given after these steps), and the recording is stopped at the appropriate moment. Once recording is finished, the animation expression is generated from the skeleton skin model and the facial expression features, the engine automatically bakes the animation, and the animation expression is saved locally in a format usable by the Unity engine.
Step four: play back the animation for review, repair frames and adjust the length as needed, and finally add the result to the resource list as an animation file that can be used in the game.
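As referenced in step three, a minimal sketch of the frame buffer is given here. Its structure, a lock-protected FIFO queue on the phone side, is an assumption; the patent does not describe how the buffer is implemented.

```swift
import Foundation

// Minimal sketch of the frame buffer mentioned in step three (structure is an
// assumption): captured feature frames are queued and drained in order, so a
// brief network stall does not drop or reorder frames.
final class FrameBuffer<Frame> {
    private var frames: [Frame] = []
    private let lock = NSLock()

    func push(_ frame: Frame) {
        lock.lock(); defer { lock.unlock() }
        frames.append(frame)
    }

    func pop() -> Frame? {
        lock.lock(); defer { lock.unlock() }
        return frames.isEmpty ? nil : frames.removeFirst()
    }
}
```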
In this method, ARKit is combined with Unity to capture facial expressions, and recording can be carried out conveniently as long as the devices are in the same network environment. The process is simple, the equipment is widely available, and the learning cost is low, while the precision of the recorded animation expressions is fully sufficient for game use.
Corresponding to the embodiment of the game animation expression generation method, an embodiment of the present invention further provides a game animation expression generation apparatus, as shown in fig. 3, the apparatus includes:
a data obtaining module 300, configured to obtain the video data of the facial motion.
The feature extraction module 302 is configured to extract an expression feature corresponding to the facial motion video data according to the facial motion video data and a preset feature extraction model.
And the animation expression generation module 304 is configured to generate an animation expression based on a preset skeleton skin model and the expression features.
And the game animation expression storage module 306 is configured to store the animation expression as a game animation expression according to a preset format.
The embodiment of the invention provides a game animation expression generating device: facial motion video data are first acquired; expression features corresponding to the facial motion video data are then extracted according to the facial motion video data and a preset feature extraction model; and an animation expression is generated based on a preset skeleton skin model and the expression features and saved as a game animation expression in a preset format. Because the device uses a pre-established feature extraction model, the expression features can be obtained efficiently and conveniently, the animation expression generated from those features is saved directly as a game animation expression in the preset format, and the cost of generating game animation expressions is reduced.
In an actual implementation, the facial motion video data generally includes a plurality of facial images arranged in a set order. Further, the feature extraction module is further configured to: sequentially input the facial images into a preset feature extraction model, and extract the real-time facial features of each facial image through the feature extraction model; and determine the real-time facial features, arranged in the set order, as the expression features corresponding to the facial motion video data.
Further, the animation expression generation module is further configured to: sequentially inputting the real-time facial features as parameters into a skeleton skin model to generate a plurality of corresponding expression images; and generating the animation expression based on the plurality of expression images.
Further, the game animation expression storage module is further configured to: and carrying out animation baking on the animation expression, and storing the animation expression after the animation baking as the game animation expression according to a preset format.
The game animation expression generation device provided by the embodiment of the invention has the same technical characteristics as the game animation expression generation method provided by the above embodiment, so it can solve the same technical problems and achieve the same technical effects.
An embodiment of the present invention further provides an electronic device, shown in FIG. 4, which includes a processor and a memory; the memory stores machine-executable instructions that can be executed by the processor, and the processor executes the machine-executable instructions to implement the above game animation expression generation method.
Further, the electronic device shown in fig. 4 further includes a bus 132 and a communication interface 133, and the processor 130, the communication interface 133 and the memory 131 are connected through the bus 132.
The Memory 131 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 133 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used. The bus 132 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but that does not indicate only one bus or one type of bus.
The processor 130 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 130. The Processor 130 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 131, and the processor 130 reads the information in the memory 131 and completes the steps of the method of the foregoing embodiment in combination with the hardware thereof.
The embodiment of the present invention further provides a machine-readable storage medium storing machine-executable instructions; when the machine-executable instructions are called and executed by a processor, they cause the processor to implement the above game animation expression generation method.
The computer program product of the game animation expression generation method and device and the electronic device provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the method described in the foregoing method embodiments, and specific implementation can be found in the method embodiments, which is not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A game animation expression generation method is characterized by comprising the following steps:
acquiring facial motion video data;
extracting expression features corresponding to the facial motion video data according to the facial motion video data and a preset feature extraction model;
generating an animation expression based on a preset skeleton skin model and the expression characteristics;
and storing the animation expression as a game animation expression according to a preset format.
2. The method according to claim 1, wherein the facial motion video data comprises a plurality of facial images arranged in a set order;
extracting expression features corresponding to the facial motion video data according to the facial motion video data and a preset feature extraction model, wherein the step comprises the following steps of:
sequentially inputting the facial images into a preset feature extraction model, and extracting real-time facial features of the facial images through the feature extraction model;
and determining the real-time facial features arranged according to a set sequence as expression features corresponding to the facial motion video data.
3. The method of claim 2, wherein the step of generating an animation expression based on the preset skeleton skin model and the expression features comprises:
inputting the real-time facial features into the skeleton skin model as parameters in sequence to generate a plurality of corresponding expression images;
and generating an animation expression based on the expression images.
4. The method of claim 1, wherein the step of saving the animated expression in a predetermined format as a game animated expression comprises:
and carrying out animation baking on the animation expression, and storing the animation expression after animation baking as a game animation expression according to a preset format.
5. A game animation expression generation device, comprising:
the data acquisition module is used for acquiring facial motion video data;
the feature extraction module is used for extracting expression features corresponding to the facial motion video data according to the facial motion video data and a preset feature extraction model;
the animation expression generation module is used for generating animation expressions based on a preset skeleton skin model and the expression characteristics;
and the game animation expression storage module is used for storing the animation expression as the game animation expression according to a preset format.
6. The apparatus according to claim 5, wherein the facial motion video data comprises a plurality of facial images arranged in a set order;
the feature extraction module is further to:
sequentially inputting the facial images into a preset feature extraction model, and extracting real-time facial features of the facial images through the feature extraction model;
and determining the real-time facial features arranged according to a set sequence as expression features corresponding to the facial motion video data.
7. The apparatus of claim 6, wherein the animated expression generation module is further configured to:
inputting the real-time facial features into the skeleton skin model as parameters in sequence to generate a plurality of corresponding expression images;
and generating an animation expression based on the expression images.
8. The apparatus of claim 5, wherein the game animation expression storage module is further configured to:
and carrying out animation baking on the animation expression, and storing the animation expression after animation baking as a game animation expression according to a preset format.
9. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the method of any one of claims 1 to 4.
10. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of any of claims 1 to 4.
CN201911315756.4A 2019-12-18 2019-12-18 Game animation expression generation method and device and electronic equipment Pending CN111028322A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911315756.4A CN111028322A (en) 2019-12-18 2019-12-18 Game animation expression generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911315756.4A CN111028322A (en) 2019-12-18 2019-12-18 Game animation expression generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111028322A (en) 2020-04-17

Family

ID=70210535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911315756.4A Pending CN111028322A (en) 2019-12-18 2019-12-18 Game animation expression generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111028322A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470148A (en) * 2021-06-30 2021-10-01 完美世界(北京)软件科技发展有限公司 Expression animation production method and device, storage medium and computer equipment
CN115546868A (en) * 2022-10-25 2022-12-30 湖南芒果无际科技有限公司 Facial animation acquisition apparatus, method and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011156115A2 (en) * 2010-06-09 2011-12-15 Microsoft Corporation Real-time animation of facial expressions
CN107180445A (en) * 2016-03-10 2017-09-19 腾讯科技(深圳)有限公司 The expression control method and device of a kind of animation model
CN107180444A (en) * 2017-05-11 2017-09-19 腾讯科技(深圳)有限公司 A kind of animation producing method, device, terminal and system
CN110570499A (en) * 2019-09-09 2019-12-13 珠海金山网络游戏科技有限公司 Expression generation method and device, computing equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011156115A2 (en) * 2010-06-09 2011-12-15 Microsoft Corporation Real-time animation of facial expressions
CN107180445A (en) * 2016-03-10 2017-09-19 腾讯科技(深圳)有限公司 The expression control method and device of a kind of animation model
CN107180444A (en) * 2017-05-11 2017-09-19 腾讯科技(深圳)有限公司 A kind of animation producing method, device, terminal and system
CN110570499A (en) * 2019-09-09 2019-12-13 珠海金山网络游戏科技有限公司 Expression generation method and device, computing equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
于雨 (Yu Yu): "MAYA中复制角色动画的五种方法比较研究" (A Comparative Study of Five Methods for Copying Character Animation in MAYA), 《明日风尚》 (Ming Ri Feng Shang) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470148A (en) * 2021-06-30 2021-10-01 完美世界(北京)软件科技发展有限公司 Expression animation production method and device, storage medium and computer equipment
CN113470148B (en) * 2021-06-30 2022-09-23 完美世界(北京)软件科技发展有限公司 Expression animation production method and device, storage medium and computer equipment
CN115546868A (en) * 2022-10-25 2022-12-30 湖南芒果无际科技有限公司 Facial animation acquisition apparatus, method and readable storage medium
CN115546868B (en) * 2022-10-25 2023-05-16 湖南芒果无际科技有限公司 Facial animation acquisition device, method and readable storage medium

Similar Documents

Publication Publication Date Title
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN112541445B (en) Facial expression migration method and device, electronic equipment and storage medium
CN112950751B (en) Gesture action display method and device, storage medium and system
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
CN109816758B (en) Two-dimensional character animation generation method and device based on neural network
US10074205B2 (en) Machine creation of program with frame analysis method and apparatus
CN110570500B (en) Character drawing method, device, equipment and computer readable storage medium
CN111028322A (en) Game animation expression generation method and device and electronic equipment
CN113723317B (en) Reconstruction method and device of 3D face, electronic equipment and storage medium
AU2020425673A1 (en) Apparatus for multi-angle screen coverage analysis
CN116115995A (en) Image rendering processing method and device and electronic equipment
CN109816744B (en) Neural network-based two-dimensional special effect picture generation method and device
US20230120883A1 (en) Inferred skeletal structure for practical 3d assets
CN110719415A (en) Video image processing method and device, electronic equipment and computer readable medium
CN106492460B (en) Data compression method and equipment
CN113989442B (en) Building information model construction method and related device
CN113209626B (en) Game picture rendering method and device
CN113724176A (en) Multi-camera motion capture seamless connection method, device, terminal and medium
CN115193039A (en) Interactive method, device and system of game scenarios
CN114125552A (en) Video data generation method and device, storage medium and electronic device
CN109543557B (en) Video frame processing method, device, equipment and storage medium
CN114299370A (en) Internet of things scene perception method and device based on cloud edge cooperation
CN115984943B (en) Facial expression capturing and model training method, device, equipment, medium and product
CN111738087A (en) Method and device for generating face model of game role
CN115937371B (en) Character model generation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination