CN112973122A - Game role makeup method and device and electronic equipment - Google Patents


Info

Publication number
CN112973122A
CN112973122A (application number CN202110232343.0A)
Authority
CN
China
Prior art keywords
makeup
face
category
sample
target
Prior art date
Legal status
Pending
Application number
CN202110232343.0A
Other languages
Chinese (zh)
Inventor
谷长健
袁燚
范长杰
胡志鹏
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110232343.0A
Publication of CN112973122A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

Embodiments of the invention provide a game character makeup method and apparatus, and an electronic device. The method comprises: obtaining a face makeup reference image that a user provides for a game character; determining, according to the makeup feature vectors corresponding to the makeup parts in the face makeup reference image, a target makeup category parameter combination corresponding to the game character in a face-pinching system; and rendering the makeup parts of the game character's three-dimensional model in the face-pinching system based on the target makeup category parameters in that combination, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image. Because the makeup parts of the three-dimensional model are rendered from a target makeup category parameter combination that the face-pinching system can recognize, the embodiments help a player give a game character makeup similar to the makeup expression of the face makeup reference image, improving the game experience.

Description

Game role makeup method and device and electronic equipment
Technical Field
The invention relates to the field of computer technology, and in particular to a game character makeup method and apparatus, and an electronic device.
Background
A face-pinching system is a standard component of most RPGs (role-playing games); through it, a player can create, with a high degree of freedom, a made-up image of a game character that the player likes.
At present, when a game character's makeup is designed according to the makeup expression on a reference image, existing makeup transfer only operates on makeup features between two-dimensional images, and the makeup features of the reference image cannot be recognized by the face-pinching system to make up the game character. The face-pinching system therefore cannot pinch out a game character similar to the makeup expression on the reference image, which degrades the player's game experience.
Disclosure of Invention
In view of the above, the present invention provides a game character makeup method and apparatus, and an electronic device, so that a face-pinching system can pinch out a game character similar to the makeup expression on a reference image.
In a first aspect, an embodiment of the present invention provides a game character makeup method, comprising: obtaining a face makeup reference image that a user provides for a game character, the face makeup reference image being a face image on which makeup has already been applied; determining, according to the makeup feature vectors corresponding to the makeup parts in the face makeup reference image, a target makeup category parameter combination corresponding to the game character in a face-pinching system, where the target makeup category parameters in the combination correspond one-to-one to the makeup parts, and a makeup category parameter set corresponding to each makeup part is pre-stored in the face-pinching system; and rendering the three-dimensional model of the game character in the face-pinching system based on the target makeup category parameter combination, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image.
The step of determining the target makeup category parameter combination corresponding to the game character in the face-pinching system according to the makeup feature vectors corresponding to the makeup parts in the face makeup reference image comprises: inputting the face makeup reference image into a pre-trained makeup matching network; extracting, through the makeup matching network, the makeup feature vectors corresponding to the makeup parts in the image; and outputting, based on those vectors, the target makeup category parameter combination corresponding to the game character in the face-pinching system.
The training process of the makeup matching network comprises: obtaining a face makeup reference sample set and the makeup category reference combination corresponding to each face makeup reference sample in the set, where the makeup category reference parameters in a reference combination match the makeup expressions of the makeup parts in the corresponding sample; selecting training samples from the set and, for each training sample: inputting the sample into an initial makeup network to obtain a makeup category prediction combination for the sample; determining a training loss value from the prediction combination and the sample's makeup category reference combination; and, if the training loss value converges to a preset value or the number of training iterations reaches a set limit, stopping training and taking the current initial makeup network as the makeup matching network.
The step of obtaining the makeup category reference combination corresponding to a face makeup reference sample comprises: cropping, from the sample, a makeup part sample corresponding to each makeup part; obtaining a pre-stored makeup rendering sample atlas corresponding to each makeup part, where each makeup rendering sample in an atlas is obtained by rendering a makeup part bearing a makeup expression on the three-dimensional model through the face-pinching system, and the makeup expression on each makeup rendering sample corresponds to a makeup rendering category; for each makeup part sample, computing the similarity between its makeup expression features and those of every makeup rendering sample corresponding to the same makeup part, and determining the makeup rendering category of the most similar rendering sample as the makeup category reference for that part sample; and determining the makeup category references corresponding to all the makeup part samples as the makeup category reference combination of the face makeup reference sample.
The step of cropping the makeup part samples corresponding to the makeup parts from the face makeup reference sample comprises: inputting the sample into a face key point detection model for face key point detection, obtaining the face key points in the sample; and cropping, based on those key points, a makeup part sample corresponding to each makeup part.
The training loss value is determined by the formula L = ‖X − X₁‖, where L represents the training loss value, X represents the makeup category reference combination, and X₁ represents the makeup category prediction combination.
The step of rendering the three-dimensional model of the game character in the face-pinching system based on the target makeup category parameter combination comprises: looking up the target makeup parameters corresponding to each target makeup category parameter in the combination; determining a texture map based on the target makeup parameters; and calling the texture map to render the makeup part of the three-dimensional model.
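The rendering lookup described above can be sketched as a simple mapping in Python. All identifiers and values below (part names, parameter labels, texture file names) are illustrative assumptions for the sketch, not the face-pinching system's real data or API:

```python
# Hypothetical table of makeup parameters pre-stored per target makeup
# category parameter; each entry selects a texture map for a part.
MAKEUP_PARAMS = {
    "lip_color_1": {"texture": "lip_red.png", "gloss": 0.4},
    "shadow_1-line_2": {"texture": "eye_smoky.png", "gloss": 0.1},
}

def textures_for_combination(combination):
    """Look up the texture map chosen by each part's target category parameter."""
    return {part: MAKEUP_PARAMS[param]["texture"]
            for part, param in combination.items()}

combo = {"lip": "lip_color_1", "eye": "shadow_1-line_2"}
print(textures_for_combination(combo))
# {'lip': 'lip_red.png', 'eye': 'eye_smoky.png'}
```

In a real engine the returned texture maps would then be bound to the corresponding regions of the three-dimensional model before rendering.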
In a second aspect, an embodiment of the present invention further provides a game character makeup apparatus, comprising: an obtaining module for obtaining a face makeup reference image that a user provides for a game character, the image being a face image on which makeup has already been applied; a determining module for determining, according to the makeup feature vectors corresponding to the makeup parts in the image, a target makeup category parameter combination corresponding to the game character in a face-pinching system, where the target makeup category parameters in the combination correspond one-to-one to the makeup parts, and a makeup category parameter set corresponding to each makeup part is pre-stored in the face-pinching system; and a rendering module for rendering the three-dimensional model of the game character in the face-pinching system based on the combination, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image.
In a third aspect, an embodiment of the present invention further provides an electronic device comprising a processor and a memory, where the memory stores computer-executable instructions executable by the processor, and the processor executes those instructions to implement the above method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when invoked and executed by a processor, cause the processor to implement the above method.
The embodiment of the invention has the following beneficial effects:
Embodiments of the invention provide a game character makeup method and apparatus, and an electronic device. A face makeup reference image that a user provides for a game character is obtained; the face makeup reference image is a face image on which makeup has already been applied. A target makeup category parameter combination corresponding to the game character in a face-pinching system is then determined according to the makeup feature vectors corresponding to the makeup parts in the reference image. Because this combination is determined from the makeup category parameter sets pre-stored in the face-pinching system for each makeup part, the target makeup category parameters in it can be recognized by the face-pinching system; the makeup parts of the game character's three-dimensional model can therefore be rendered in the face-pinching system based on those parameters, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image. In this way, the makeup feature vectors corresponding to the makeup parts in the player-provided reference image are converted into a target makeup category parameter combination that the face-pinching system can recognize, helping the player give the game character makeup similar to the makeup expression of the face makeup reference image and improving the game experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for describing them are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a game character makeup method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another game character makeup method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another game character makeup method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a game character makeup method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a game character makeup apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, the face-pinching system cannot recognize the makeup features of a reference image to make up a game character, so it cannot pinch out a game character similar to the makeup representation on the reference image. The game character makeup method and apparatus and the electronic device provided by the embodiments of the invention therefore convert the makeup feature vectors corresponding to the makeup parts in a face makeup reference image into a target makeup category parameter combination that the face-pinching system can recognize, and render the makeup parts of the game character's three-dimensional model in the face-pinching system based on the target makeup category parameters in that combination, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image.
An embodiment of the invention provides a game character makeup method; as shown in the flow chart of FIG. 1, the method comprises the following steps:
Step S102: obtain a face makeup reference image that a user provides for a game character; the face makeup reference image is a face image on which makeup has already been applied.
the face makeup reference image is a face image which is downloaded in advance or is shot in advance and stored in a client of a mobile phone, a computer and the like of a user; in this embodiment, makeup may be performed on one of the game characters in the game with reference to the makeup expression (visual makeup) on the face image so that the makeup expression of the game character after makeup is similar to the makeup expression of the face makeup reference image.
Step S104: determine, according to the makeup feature vectors corresponding to the makeup parts in the face makeup reference image, a target makeup category parameter combination corresponding to the game character in a face-pinching system, where the target makeup category parameters in the combination correspond one-to-one to the makeup parts; a makeup category parameter set corresponding to each makeup part is pre-stored in the face-pinching system.
the makeup feature vector corresponding to the makeup part in the face makeup reference image may be obtained by a makeup matching network having a feature extraction function, and the makeup feature vector may be used to represent makeup information corresponding to the makeup part, where the makeup matching network is described in detail in the following embodiments and will not be specifically described here.
In actual use, a makeup category parameter set corresponding to each makeup part is pre-stored in the face-pinching system. Each makeup category parameter in a set is an identification parameter for one makeup expression; different makeup category parameters in the set for the same makeup part can be understood to correspond to different makeup expressions.
For example, suppose the makeup parts pre-stored in the face-pinching system comprise the lips and the eyes. The makeup expression of the lips is embodied by lip color, so the makeup category parameter set for the lips stores four different makeup category parameters, lip color 1 to lip color 4, each corresponding to one lip-color makeup expression. The makeup expression of the eyes is embodied jointly by eye shadow and eye line, so each makeup category parameter in the eye set is stored as a parameter combination, for example the four combinations eye shadow 1-eye line 1, eye shadow 2-eye line 1, eye shadow 1-eye line 2 and eye shadow 2-eye line 2, each corresponding to one eye makeup expression. In this embodiment, the makeup parts, their makeup expressions, and the number of makeup category parameters in each set may be chosen according to actual needs and are not limited here.
After the makeup feature vectors corresponding to the makeup parts in the face makeup reference image are obtained, a target makeup category parameter can be determined for each makeup part from that part's makeup category parameter set on the basis of its makeup feature vector, and the resulting target makeup category parameters are combined into the target makeup category parameter combination; for example, the target makeup category parameter combination determined from the makeup feature vectors for a game character in the face-pinching system may be lip color 1, eye shadow 1-eye line 2.
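The per-part selection step above can be sketched in Python. The part names, candidate parameter labels, and score vectors below are illustrative assumptions for the sketch, not the patent's actual data:

```python
# Hypothetical pre-stored makeup category parameter sets, one per part.
MAKEUP_CATEGORY_SETS = {
    "lip": ["lip_color_1", "lip_color_2", "lip_color_3", "lip_color_4"],
    "eye": ["shadow_1-line_1", "shadow_2-line_1", "shadow_1-line_2", "shadow_2-line_2"],
}

def select_target_combination(scores_per_part):
    """Pick, for each makeup part, the category parameter with the highest score.

    `scores_per_part` maps a part name to a list of scores, one per
    candidate parameter (e.g. derived from the part's makeup feature vector).
    """
    combination = {}
    for part, scores in scores_per_part.items():
        candidates = MAKEUP_CATEGORY_SETS[part]
        best = max(range(len(candidates)), key=lambda i: scores[i])
        combination[part] = candidates[best]
    return combination

combo = select_target_combination({
    "lip": [0.7, 0.1, 0.1, 0.1],
    "eye": [0.2, 0.1, 0.6, 0.1],
})
print(combo)  # {'lip': 'lip_color_1', 'eye': 'shadow_1-line_2'}
```

The returned dictionary plays the role of the target makeup category parameter combination, e.g. lip color 1, eye shadow 1-eye line 2.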
Step S106: render the three-dimensional model of the game character in the face-pinching system based on the target makeup category parameter combination, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image.
Each makeup part of the game character's three-dimensional model is rendered based on the corresponding target makeup category parameter in the combination, so that the rendered makeup expression of the game character is close to the makeup expression of the face makeup reference image.
The embodiment of the invention thus provides a game character makeup method in which a face makeup reference image that a user provides for a game character is obtained, the image being a face image on which makeup has already been applied, and a target makeup category parameter combination corresponding to the game character in a face-pinching system is determined according to the makeup feature vectors corresponding to the makeup parts in the reference image. Because this combination is determined from the makeup category parameter sets pre-stored in the face-pinching system for each makeup part, the target makeup category parameters in it can be recognized by the face-pinching system; the makeup parts of the game character's three-dimensional model can therefore be rendered in the face-pinching system based on those parameters, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image. In this way, the makeup feature vectors corresponding to the makeup parts in the player-provided reference image are converted into a target makeup category parameter combination that the face-pinching system can recognize, helping the player give the game character makeup similar to the makeup expression of the face makeup reference image and improving the game experience.
This embodiment provides another game character makeup method that builds on the foregoing embodiment, focusing on a specific implementation of determining the target makeup category parameter combination corresponding to a game character in the face-pinching system. As shown in the flow chart of FIG. 2, the method comprises the following steps:
Step S202: obtain a face makeup reference image that a user provides for a game character; the face makeup reference image is a face image on which makeup has already been applied.
Step S204: input the face makeup reference image into a pre-trained makeup matching network.
the dressing matching network is built based on a multilayer convolutional neural network and four fully-connected layers, and in the embodiment, the specific training process of the dressing matching network can be realized through steps A1 to A5:
step A1, obtaining a makeup category reference combination corresponding to the face makeup reference sample in the face makeup reference sample set and the face makeup reference sample in the face makeup reference sample set; wherein, the reference parameters of the makeup category in the reference combination of the makeup category are matched with the makeup expression of the makeup part in the reference sample of the face makeup;
the face makeup reference sample in the face makeup reference sample set can be selected from face images with makeup disclosed on the network, wherein each makeup part of each face makeup reference sample in the face makeup reference sample set has a makeup category reference parameter corresponding to the makeup expression, and the makeup category reference parameter has the same effect as the makeup category parameter and is an identification parameter corresponding to the makeup expression.
Specifically, the process of obtaining the makeup category reference combination corresponding to a face makeup reference sample can be implemented through steps B1 to B4:
Step B1: crop a makeup part sample corresponding to each makeup part from the face makeup reference sample.
during specific implementation, the face makeup reference sample can be input into a face key point detection model to perform face key point detection, so as to obtain face key points in the face makeup reference sample; and intercepting a makeup part sample corresponding to each makeup part from the face makeup reference sample based on the key points of the face.
In this embodiment, applying the face key point detection model to the face makeup reference sample yields the face key points in the sample; the detected key points may or may not be displayed as annotations on the sample, which is not limited here.
Because the face key points obtained by the detection model may include parts other than makeup parts, for example the key points of the face contour, which enclose regions that need no makeup, only the makeup parts delineated by the face key points are cropped from the face makeup reference sample, yielding makeup part samples containing those parts. For example, a makeup part sample corresponding to the lips and one corresponding to the eyes can be obtained; owing to facial symmetry, only one eye sample needs to be taken.
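A minimal sketch of the cropping step, assuming the key points of one makeup part are available as pixel coordinates; the point values and margin below are made up for illustration:

```python
# Hypothetical sketch: derive an axis-aligned crop box (with a margin)
# around the key points of one makeup part. The resulting box would then
# be used to slice the part out of the face image.

def crop_box_from_keypoints(keypoints, margin=4):
    """Bounding box (left, top, right, bottom) around a part's key points.

    `keypoints` is a list of (x, y) pixel coordinates belonging to one
    makeup part, e.g. the lip contour points.
    """
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Example with fictitious lip-contour key points.
lip_points = [(40, 80), (60, 75), (80, 80), (60, 90)]
print(crop_box_from_keypoints(lip_points))  # (36, 71, 84, 94)
```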
Step B2: obtain the pre-stored makeup rendering sample atlas corresponding to each makeup part. Each makeup rendering sample in an atlas is obtained by rendering a makeup part bearing a makeup expression on the three-dimensional model through the face-pinching system, and the makeup expression on each makeup rendering sample corresponds to a makeup rendering category.
before obtaining a makeup rendering sample atlas, a makeup rendering front atlas is obtained, wherein the makeup rendering front atlas is a picture set with makeup expressions on a front face obtained by rendering a three-dimensional model of a game character after inputting different makeup identifiers in a face pinching system, then a makeup rendering sample atlas corresponding to each makeup part is cut from each makeup rendering front atlas, the makeup rendering sample atlas is classified according to the same makeup part to obtain a makeup rendering sample atlas corresponding to each makeup part, the makeup rendering sample atlas includes all the makeup expressions of the makeup part, each makeup expression can be represented by a corresponding makeup rendering category, and the makeup rendering category can be understood as an identifier parameter of the makeup expressions, a reference of the makeup category and a parameter of the makeup category.
Since the method for cutting out the makeup rendering sample map from the makeup rendering front map is the same as the method for cutting out the makeup part sample from the human face makeup reference sample, the description thereof is omitted here.
Step B3: for each makeup part sample, compute the similarity between its makeup expression features and the makeup expression features of every makeup rendering sample corresponding to the same makeup part, and determine the makeup rendering category of the makeup rendering sample with the largest similarity as the makeup category reference for that makeup part sample.
Taking the lips as an example: for the makeup part sample corresponding to the lips, the similarity between its makeup expression features and those of each makeup rendering sample corresponding to the lips is computed, and the makeup rendering category of the rendering sample with the largest computed similarity is determined as the makeup category reference for that makeup part sample. The makeup category references of the other makeup parts are determined in the same way and are not repeated here.
In this embodiment, the feature similarity can be calculated with measures such as the Euclidean distance, the Manhattan distance, or the Mahalanobis distance, which are not limited here.
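The matching in step B3 can be sketched as follows, using the Euclidean distance (one of the measures the embodiment mentions) converted into a similarity; the feature vectors and category labels are illustrative assumptions:

```python
import math

# Sketch: compare a cropped part sample's feature vector against each
# rendering sample's features and keep the closest rendering category.

def euclidean_similarity(a, b):
    """Turn Euclidean distance into a similarity (larger = more alike)."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)

def closest_rendering_category(part_features, rendered_samples):
    """`rendered_samples` maps a makeup rendering category to its features."""
    return max(rendered_samples,
               key=lambda cat: euclidean_similarity(part_features,
                                                    rendered_samples[cat]))

samples = {"lip_color_1": [1.0, 0.0], "lip_color_2": [0.0, 1.0]}
print(closest_rendering_category([0.9, 0.1], samples))  # lip_color_1
```

Swapping `euclidean_similarity` for a Manhattan- or Mahalanobis-based measure changes only the distance line.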
Step B4: determine the makeup category references corresponding to all the makeup part samples as the makeup category reference combination of the face makeup reference sample.
The makeup category reference combination of each face makeup reference sample can be obtained through steps B1 to B4; the makeup category reference parameters in the combination give, for each makeup part of the sample, the corresponding makeup expression in the face-pinching system.
Step A2, selecting training samples from the face makeup reference sample set, and executing the operations from step A3 to step A5 for each training sample;
the number of training samples selected from the face makeup reference sample set can be determined according to actual needs, and the number of training samples is not limited herein.
Step A3, inputting the training sample into the makeup initial network to obtain the makeup category prediction combination of the training sample;
the makeup initial network is the untrained neural network built by the multilayer convolution neural network and the four fully-connected layers, after a training sample is input into the makeup initial network, a makeup category prediction combination corresponding to the training sample can be output through calculation of the makeup initial network, wherein the obtained makeup category prediction combination is an identification combination corresponding to the makeup expression of each makeup part of the training sample.
Step A4, determining a training loss value according to the makeup category prediction combination and the makeup category reference combination corresponding to the training sample;
specifically, the training loss value may be determined by: L = ||X - X1||; where L represents the training loss value, X represents the makeup category reference combination, and X1 represents the makeup category prediction combination. In actual use, the calculation of the training loss value is not limited to the formula given in this embodiment.
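The loss above is the norm of the difference between the reference and prediction combinations. A minimal sketch, treating both combinations as numeric vectors (an assumption) and using the Euclidean norm:

```python
import math

def training_loss(reference, prediction):
    # L = ||X - X1||: the Euclidean norm of the element-wise difference
    # between the makeup category reference combination X and the
    # makeup category prediction combination X1.
    return math.sqrt(sum((x - x1) ** 2 for x, x1 in zip(reference, prediction)))

print(training_loss([1, 0, 2], [1, 0, 0]))  # 2.0
```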
And step A5, if the training loss value converges to the preset value or the training times reach the set times, stopping training, and taking the current makeup initial network as a makeup matching network.
In the process of training the makeup initial network, if the calculated training loss value converges to the preset value, or the number of training iterations reaches the set number, the network training is stopped and the current trained makeup initial network is used as the makeup matching network; the makeup matching network can then output the target makeup category parameter combination corresponding to a face makeup reference image in the face pinching system. The preset value and the set number can be chosen according to the practical application and are not limited here.
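The stopping rule of step A5 can be sketched as follows; the "training step" (here a simple halving of the loss) and the thresholds are stand-ins for a real optimizer and application-chosen values:

```python
def train(initial_loss, preset_value=0.01, max_iters=100):
    """Run toy training steps until the loss converges to the preset
    value or the number of iterations reaches the set number."""
    loss, iters = initial_loss, 0
    while loss > preset_value and iters < max_iters:
        loss *= 0.5  # stand-in for one optimization step on a batch
        iters += 1
    return loss, iters

loss, iters = train(1.0)
print(iters)  # 7 halvings: 1.0 -> 0.0078125 <= 0.01
```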
Step S206, extracting makeup feature vectors corresponding to makeup parts in the face makeup reference image through a makeup matching network, and outputting a target makeup category parameter combination corresponding to a game role in a face pinching system based on the makeup feature vectors;
and S208, rendering the three-dimensional model of the game role in the face pinching system based on the target makeup category parameter combination so as to enable the rendered makeup expression of the game role to be matched with the makeup expression of the face makeup reference image.
The process of step S208 described above may be implemented by steps C1 to C3:
step C1, searching target makeup parameters corresponding to each target makeup category parameter in the target makeup category parameter combination;
the target makeup parameters are specific makeup parameter values for makeup expression, for example, the target makeup parameters corresponding to the lip gloss comprise lip gloss color values and lip gloss brightness values; the target makeup parameters corresponding to the eye shadow comprise an eye shadow color value, an eye shadow brightness value and an eye shadow width value; the target makeup parameters corresponding to the eyeliner include an eyeliner length value and an eyeliner concentration value, and when the makeup machine is actually used, the makeup parameters corresponding to the makeup expression can be set as required, and the makeup parameters are not limited in the embodiment.
For example, the target makeup category parameter lip color 1 corresponds to the target makeup parameters: lip gloss color value 10, lip gloss brightness value 50; the target makeup category parameter eye shadow 1 corresponds to the target makeup parameters: eye shadow color value 20, eye shadow brightness value 30, eye shadow width value 10 mm; and the target makeup category parameter eyeliner 2 corresponds to the target makeup parameters: eyeliner length value 10 mm, eyeliner concentration value 30.
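Step C1's lookup can be represented as a simple table keyed by target makeup category parameter. The identifiers and values below mirror the illustrative numbers in the text; the table itself and its key names are hypothetical:

```python
# Hypothetical mapping from target makeup category parameters to the
# concrete target makeup parameters used for rendering.
MAKEUP_PARAMS = {
    "lip_color_1": {"lip_gloss_color": 10, "lip_gloss_brightness": 50},
    "eye_shadow_1": {"color": 20, "brightness": 30, "width_mm": 10},
    "eyeliner_2": {"length_mm": 10, "concentration": 30},
}

def lookup_target_params(category_combination):
    # Step C1: find the target makeup parameters for every category
    # parameter in the target makeup category parameter combination.
    return {cat: MAKEUP_PARAMS[cat] for cat in category_combination}

combo = ["lip_color_1", "eyeliner_2"]
print(lookup_target_params(combo))
```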
Step C2, determining a material map based on the target makeup parameters;
in practical use, the face-pinching system further stores a material library in which a plurality of material maps and the makeup parameters corresponding to each material map are stored; in this embodiment, the material map whose makeup parameters match the target makeup parameters can be searched for in the material library of the face-pinching system.
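Step C2 can be sketched as a search over the material library. The library contents and the matching rule (exact equality of parameter values) are assumptions for illustration:

```python
# Hypothetical material library: each material map identifier is stored
# together with the makeup parameters it realizes.
MATERIAL_LIBRARY = {
    "lip_map_a.png": {"lip_gloss_color": 10, "lip_gloss_brightness": 50},
    "lip_map_b.png": {"lip_gloss_color": 30, "lip_gloss_brightness": 20},
}

def find_material_map(target_params):
    """Step C2: return the material map whose stored makeup parameters
    match the target makeup parameters, or None if there is no match."""
    for map_id, params in MATERIAL_LIBRARY.items():
        if params == target_params:
            return map_id
    return None

print(find_material_map({"lip_gloss_color": 10, "lip_gloss_brightness": 50}))
```

A real system might instead pick the nearest entry under a distance on the parameter values rather than require exact equality.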
And step C3, calling the material map to render the makeup part of the three-dimensional model.
In step C3, the material map obtained in step C2 is applied to the corresponding makeup part of the three-dimensional model for map rendering, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image.
The embodiment of the invention can input the face makeup reference image provided by the player into a pre-trained makeup matching network, extract the makeup feature vector corresponding to each makeup part in the face makeup reference image through the makeup matching network, and output, based on the makeup feature vectors, the target makeup category parameter combination corresponding to the game character in the face-pinching system.
Further, in order to fully understand the above game character makeup method, fig. 3 shows a flowchart of another game character makeup method. The method in this embodiment can be implemented in three stages: a preparation stage of the makeup rendering sample atlas, a training stage of the makeup matching network, and a makeup stage for the game character. The preparation stage can be implemented by steps S300 to S301, the training stage by steps S302 to S308, and the makeup stage by steps S309 to S310. Specifically, the game character makeup method comprises the following steps:
step S300, obtaining a makeup rendering front atlas based on makeup identifiers in the face pinching system;
for ease of understanding, fig. 4 shows a schematic diagram of the game character makeup method. Each makeup rendering front map in the makeup rendering front atlas is a picture of a front face with a makeup expression, obtained by rendering the three-dimensional model of the game character after a different makeup identifier is input in the face-pinching system.
S301, obtaining a makeup rendering sample atlas corresponding to each makeup part according to the makeup rendering front atlas; wherein, the makeup expression on each makeup rendering sample picture corresponds to the makeup rendering category;
as shown in fig. 4, taking the makeup parts including the brows, eyes, lips and face as an example, the makeup rendering sample maps corresponding to these four makeup parts are cut from each makeup rendering front map and classified by makeup part to obtain a makeup rendering sample atlas for each makeup part. The makeup rendering sample atlas for a makeup part contains all makeup expressions of that part; each makeup expression can be represented by a unique makeup rendering category, and the makeup rendering category is the makeup identifier corresponding to the makeup expression.
Step S302, obtaining a face makeup reference sample set, and intercepting a makeup part sample corresponding to each makeup part from each face makeup reference sample;
after the face makeup reference sample set with makeup effects is obtained, the makeup part sample corresponding to each makeup part is cut from each face makeup reference sample. For ease of understanding, the positions enclosed by the boxes in fig. 4 are the makeup parts in the face makeup reference sample, and cutting out these boxed regions yields the makeup part samples. The makeup part samples cut from each face makeup reference sample include: a makeup part sample corresponding to the eyebrow part, a makeup part sample corresponding to the eye part, a makeup part sample corresponding to the lip part, and a makeup part sample corresponding to the face part.
Step S303, calculating the similarity of the makeup expression characteristics of the makeup part sample and the makeup expression characteristics of each makeup rendering sample picture corresponding to the makeup part, and determining a makeup category reference combination corresponding to the face makeup reference sample in the face makeup reference sample set based on the similarity;
for the makeup part sample corresponding to each makeup part captured from the face makeup reference sample, the similarity between the makeup expression features of the makeup part sample and the makeup expression features of the makeup rendering sample maps corresponding to the same makeup part is calculated, and the makeup rendering category of the makeup rendering sample map with the maximum similarity is determined as the makeup category reference corresponding to the makeup part sample. As shown in fig. 4, the makeup category references corresponding to the makeup part samples are then determined as the makeup category reference combination of the face makeup reference sample; through step S303, the makeup category reference combination corresponding to each face makeup reference sample in the face makeup reference sample set can be determined.
Step S304, selecting training samples from the face makeup reference sample set, and executing the operations from step S305 to step S307 for each training sample;
step S305, inputting the training sample into a makeup initial network to obtain a makeup category prediction combination of the training sample;
step S306, determining a training loss value according to the makeup category prediction combination and the makeup category reference combination corresponding to the training sample;
as shown in fig. 4, a training sample is input into the makeup initial network, and the makeup category prediction combination is output through the prediction of the makeup initial network, where the makeup categories in the makeup category prediction combination correspond one to one to the makeup parts in the training sample; the training loss value is then obtained by comparing the makeup category prediction combination predicted for the training sample with the makeup category reference combination corresponding to the training sample.
Step S307, judging whether the training loss value converges to a preset value or whether the training times reach the set times;
if yes, step S308 is executed; if no, step S304 is executed, and training samples can be selected again from the face makeup reference sample set for network training.
Step S308, stopping training, and taking the current makeup initial network as a makeup matching network;
in the process of training the makeup initial network, if the training requirement set in step S307 is satisfied (the training loss value converges to the preset value, or the number of training iterations reaches the set number), the trained makeup initial network is used as the makeup matching network.
S309, extracting makeup feature vectors corresponding to makeup parts in the face makeup reference image through a makeup matching network, and outputting a target makeup category parameter combination corresponding to a game role in a face pinching system based on the makeup feature vectors;
and step S310, rendering the three-dimensional model of the game role in the face pinching system based on the target makeup category parameter combination so as to enable the rendered makeup expression of the game role to be matched with the makeup expression of the face makeup reference image.
In actual use, as shown in fig. 4, the face makeup reference image is input into the makeup matching network to obtain the target makeup category parameter combination corresponding, in the face-pinching system, to the makeup parts of the face makeup reference image; the makeup parts of the three-dimensional model of the game character are then rendered in the face-pinching system based on the target makeup category parameters in the combination, to obtain the made-up game character.
In the above game character makeup process, the target makeup category parameter combination can be determined, based on the makeup feature vectors of the makeup parts of the face makeup reference image, within the makeup category parameter sets pre-stored in the face-pinching system for each makeup part. The makeup parts of the three-dimensional model of the game character are rendered based on this target makeup category parameter combination, which the face-pinching system can recognize, so as to obtain a game character matching the makeup expression of the face makeup reference image. This effectively solves the problem that an existing face-pinching system cannot make up the three-dimensional model of a game character based on the makeup feature vectors of the makeup parts of a face makeup reference image.
Corresponding to the above game character makeup method embodiment, an embodiment of the present invention provides a game character makeup apparatus, fig. 5 shows a schematic structural diagram of the game character makeup apparatus, and as shown in fig. 5, the apparatus includes:
a providing module 502, configured to obtain a face makeup reference image provided by a user for a game character; wherein, the face makeup reference image is a face image which is already made up;
a determining module 504, configured to determine, according to the makeup feature vectors corresponding to the makeup parts in the face makeup reference image, a target makeup category parameter combination corresponding to the game character in the face pinching system, where target makeup category parameters in the target makeup category parameter combination correspond to the makeup parts one to one; wherein, a makeup category parameter set corresponding to each makeup part is prestored in the face pinching system;
and a rendering module 506, configured to render the three-dimensional model of the game character in the face-pinching system based on the target makeup category parameter combination, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image.
The embodiment of the invention provides a game character makeup apparatus. A face makeup reference image provided by a user for a game character is obtained, where the face makeup reference image is a face image that has been made up, and a target makeup category parameter combination corresponding to the game character in the face-pinching system is determined according to the makeup feature vectors corresponding to the makeup parts in the face makeup reference image. Since the target makeup category parameter combination is determined from the makeup category parameter sets pre-stored in the face-pinching system for each makeup part, the target makeup category parameters in the combination can be recognized by the face-pinching system; the makeup parts of the three-dimensional model of the game character can therefore be rendered in the face-pinching system based on these parameters, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image. The embodiment of the invention can convert the makeup feature vectors corresponding to the makeup parts in the face makeup reference image provided by the player into a target makeup category parameter combination that the face-pinching system can recognize, and render the makeup parts of the three-dimensional model of the game character accordingly, thereby helping the player give the game character a makeup appearance similar to the makeup expression of the face makeup reference image and further improving the game experience.
The determining module 504 is further configured to input the face makeup reference image into a pre-trained makeup matching network; and extracting a makeup feature vector corresponding to a makeup part in the face makeup reference image through a makeup matching network, and outputting a target makeup category parameter combination corresponding to the game role in a face pinching system based on the makeup feature vector.
Wherein, the training process of the makeup matching network comprises the following steps: obtaining a face makeup reference sample set and the makeup category reference combination corresponding to each face makeup reference sample in the set, where the makeup category reference parameters in the makeup category reference combination match the makeup expression of the makeup parts in the face makeup reference sample; selecting training samples from the face makeup reference sample set, and performing the following operations for each training sample: inputting the training sample into a makeup initial network to obtain the makeup category prediction combination of the training sample; determining a training loss value according to the makeup category prediction combination and the makeup category reference combination corresponding to the training sample; and, if the training loss value converges to the preset value or the number of training iterations reaches the set number, stopping training and using the current makeup initial network as the makeup matching network.
Further, the process of obtaining the makeup category reference combination corresponding to a face makeup reference sample in the face makeup reference sample set includes: cutting the makeup part sample corresponding to each makeup part from the face makeup reference sample; obtaining the pre-stored makeup rendering sample atlas corresponding to each makeup part, where each makeup rendering sample map in the atlas is obtained by rendering a makeup part with a makeup expression on the three-dimensional model through the face-pinching system, and the makeup expression on each makeup rendering sample map corresponds to a makeup rendering category; for each makeup part sample, calculating the similarity between the makeup expression feature of the makeup part sample and the makeup expression feature of each makeup rendering sample map corresponding to the same makeup part, and determining the makeup rendering category of the makeup rendering sample map with the maximum similarity as the makeup category reference corresponding to the makeup part sample; and determining the makeup category reference corresponding to each makeup part sample as the makeup category reference combination of the face makeup reference sample.
The process of cutting the makeup part samples corresponding to the makeup parts from the face makeup reference sample comprises the following steps: inputting the face makeup reference sample into a face key point detection model, and performing face key point detection to obtain face key points in the face makeup reference sample; and intercepting a makeup part sample corresponding to each makeup part from the face makeup reference sample based on the key points of the face.
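The keypoint-based cropping described above can be sketched as taking the bounding box of the face keypoints belonging to one makeup part; the keypoint indices, coordinates, and margin below are hypothetical:

```python
def crop_box(keypoints, indices, margin=4):
    """Bounding box (left, top, right, bottom) around the face keypoints
    that belong to one makeup part, padded by a small margin."""
    xs = [keypoints[i][0] for i in indices]
    ys = [keypoints[i][1] for i in indices]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

# Toy keypoints: index -> (x, y); assume indices 0-3 outline the lips.
kps = {0: (40, 60), 1: (60, 58), 2: (50, 70), 3: (45, 65)}
print(crop_box(kps, [0, 1, 2, 3]))  # (36, 54, 64, 74)
```

The makeup part sample is then the region of the face makeup reference sample inside this box, one box per makeup part.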
The training loss value is determined by the following formula: L = ||X - X1||; where L represents the training loss value, X represents the makeup category reference combination, and X1 represents the makeup category prediction combination.
The rendering module 506 is further configured to search for the target makeup parameters corresponding to each target makeup category parameter in the target makeup category parameter combination; determine a material map based on the target makeup parameters; and call the material map to render the makeup part of the three-dimensional model.
The game role makeup device provided by the embodiment of the invention has the same technical characteristics as the game role makeup method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
An embodiment of the present application further provides an electronic device, as shown in fig. 6, which is a schematic structural diagram of the electronic device, where the electronic device includes a processor 121 and a memory 120, the memory 120 stores computer-executable instructions that can be executed by the processor 121, and the processor 121 executes the computer-executable instructions to implement the method for making up the game character.
In the embodiment shown in fig. 6, the electronic device further comprises a bus 122 and a communication interface 123, wherein the processor 121, the communication interface 123 and the memory 120 are connected by the bus 122.
The Memory 120 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 123 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, and the like may be used. The bus 122 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 122 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
The processor 121 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 121. The Processor 121 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor 121 reads the information in the memory and, in combination with its hardware, completes the steps of the game character makeup method of the foregoing embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where computer-executable instructions are stored, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to implement the method for making up a game character, and specific implementation may refer to the foregoing method embodiment, and is not described herein again.
The computer program product of the game character makeup method and apparatus provided in the embodiments of the present application includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and for specific implementation reference may be made to the method embodiments, which are not repeated here.
Unless specifically stated otherwise, the relative steps, numerical expressions, and values of the components and steps set forth in these embodiments do not limit the scope of the present application.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present application, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present application. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of making up a game character, the method comprising:
acquiring a face makeup reference image provided by a user aiming at a game role; wherein, the face makeup reference image is a face image which is made up;
determining a target makeup category parameter combination corresponding to the game role in a face pinching system according to makeup feature vectors corresponding to makeup parts in the face makeup reference image, wherein target makeup category parameters in the target makeup category parameter combination correspond to the makeup parts one by one; wherein, a makeup category parameter set corresponding to each makeup part is prestored in the face pinching system;
rendering the three-dimensional model of the game character in the face-pinching system based on the target makeup category parameter combination so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image.
2. The method according to claim 1, wherein the step of determining a target makeup category parameter combination corresponding to the game character in a face pinching system according to the makeup feature vector corresponding to the makeup part in the face makeup reference image comprises:
inputting the face makeup reference image into a pre-trained makeup matching network;
and extracting a makeup feature vector corresponding to a makeup part in the face makeup reference image through the makeup matching network, and outputting a target makeup category parameter combination corresponding to the game role in a face pinching system based on the makeup feature vector.
3. The method of claim 2, wherein the training process of the makeup matching network comprises:
obtaining a face makeup reference sample set and a makeup category reference combination corresponding to the face makeup reference sample in the face makeup reference sample set; wherein, the makeup category reference parameters in the makeup category reference combination match the makeup expression of the makeup part in the face makeup reference sample;
selecting training samples from the face makeup reference sample set, and performing the following operations for each training sample:
inputting the training sample into a makeup initial network to obtain a makeup category prediction combination of the training sample;
determining a training loss value according to the makeup category prediction combination and a makeup category reference combination corresponding to the training sample;
and if the training loss value converges to a preset value or the training times reach set times, stopping training, and taking the current makeup initial network as a makeup matching network.
4. The method according to claim 3, wherein the step of obtaining the makeup category reference combination corresponding to the face makeup reference sample in the face makeup reference sample set comprises:
cutting a makeup part sample corresponding to each makeup part from the face makeup reference sample;
obtaining a pre-stored makeup rendering sample atlas corresponding to each makeup part; each makeup rendering sample map in the makeup rendering sample map set is obtained by rendering a makeup part with a makeup expression on a three-dimensional model through the face pinching system, and the makeup expression on each makeup rendering sample map corresponds to a makeup rendering category;
calculating, for each makeup part sample, the similarity between the makeup expression features of the makeup part sample and the makeup expression features of each makeup rendering sample map corresponding to the makeup part, and determining the makeup rendering category of the makeup rendering sample map with the maximum similarity as the makeup category reference corresponding to the makeup part sample;
and determining the makeup category references corresponding to the makeup part samples as the makeup category reference combination of the face makeup reference sample.
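The label-assignment step of claim 4 amounts to a nearest-neighbor search over the pre-rendered atlas. A minimal sketch, assuming cosine similarity as the metric (the claim does not fix which similarity measure is used) and assuming feature vectors have already been extracted for both the cropped sample and each rendered map:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_category(part_feature, rendered_features):
    """Pick the makeup rendering category whose pre-rendered sample map is
    most similar to the cropped makeup part sample (claim 4).

    rendered_features: {category_id: feature_vector}, one entry per
    makeup rendering sample map in this part's atlas.
    """
    sims = {cat: cosine(part_feature, feat)
            for cat, feat in rendered_features.items()}
    return max(sims, key=sims.get)  # category with maximum similarity
```

The chosen category then serves as the makeup category reference (training label) for that part sample.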
5. The method according to claim 4, wherein the step of cropping the makeup part sample corresponding to each makeup part from the face makeup reference sample comprises:
inputting the face makeup reference sample into a face key point detection model, and performing face key point detection to obtain face key points in the face makeup reference sample;
and cropping the makeup part sample corresponding to each makeup part from the face makeup reference sample based on the face key points.
6. The method of claim 3, wherein the training loss value is determined by:
L=||X-X1||;
wherein L represents the training loss value, X represents the makeup category reference combination, and X1 represents the makeup category prediction combination.
7. The method of claim 1, wherein the step of rendering the three-dimensional model of the game character in the face-pinching system based on the target makeup category parameter combination comprises:
searching for the target makeup parameter corresponding to each target makeup category parameter in the target makeup category parameter combination;
determining a material map based on the target makeup parameters;
and calling the material map to render the makeup part of the three-dimensional model.
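The rendering step of claim 7 is a two-stage lookup: category parameter to makeup parameters, then makeup parameters to a material (texture) map applied to the model. A minimal sketch with hypothetical table contents (the real face pinching system's parameter names and asset paths are not disclosed in the claim):

```python
# Hypothetical lookup tables standing in for the face pinching system's data.
CATEGORY_TO_PARAMS = {("lips", 3): {"texture": "lips_03", "intensity": 0.8}}
PARAM_TO_MATERIAL = {"lips_03": "textures/lips_03.png"}

def render_part(model, part, category_id):
    """Apply the material map for one makeup part to the three-dimensional
    model, following the lookup chain in claim 7."""
    params = CATEGORY_TO_PARAMS[(part, category_id)]   # target makeup parameters
    material = PARAM_TO_MATERIAL[params["texture"]]    # determine material map
    # "Calling" the material map is modeled here as attaching it to the part.
    model.setdefault(part, {}).update(material=material,
                                      intensity=params["intensity"])
    return model
```

Repeating this per makeup part yields the fully rendered makeup expression on the character model.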
8. A game character makeup apparatus, comprising:
a providing module, configured to acquire a face makeup reference image provided by a user for a game character; wherein the face makeup reference image is a face image with makeup applied;
a determining module, configured to determine, in a face pinching system, a target makeup category parameter combination corresponding to the game character according to the makeup feature vectors corresponding to the makeup parts in the face makeup reference image, wherein the target makeup category parameters in the target makeup category parameter combination are in one-to-one correspondence with the makeup parts; wherein a makeup category parameter set corresponding to each makeup part is prestored in the face pinching system;
and a rendering module, configured to render the three-dimensional model of the game character in the face pinching system based on the target makeup category parameter combination, so that the rendered makeup expression of the game character matches the makeup expression of the face makeup reference image.
9. An electronic device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the method of any of claims 1 to 7.
10. A computer-readable storage medium having computer-executable instructions stored thereon which, when invoked and executed by a processor, cause the processor to implement the method of any of claims 1 to 7.
CN202110232343.0A 2021-03-02 2021-03-02 Game role makeup method and device and electronic equipment Pending CN112973122A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110232343.0A CN112973122A (en) 2021-03-02 2021-03-02 Game role makeup method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112973122A true CN112973122A (en) 2021-06-18

Family

ID=76352216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110232343.0A Pending CN112973122A (en) 2021-03-02 2021-03-02 Game role makeup method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112973122A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781330A (en) * 2021-08-23 2021-12-10 北京旷视科技有限公司 Image processing method, device and electronic system
CN116433827A (en) * 2023-04-07 2023-07-14 广州趣研网络科技有限公司 Virtual face image generation method, virtual face image display method, virtual face image generation and virtual face image display device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022301A (en) * 2017-11-23 2018-05-11 腾讯科技(上海)有限公司 A kind of image processing method, device and storage medium
CN108771868A (en) * 2018-06-14 2018-11-09 广州市点格网络科技有限公司 Game virtual role construction method, device and computer readable storage medium
CN109671016A (en) * 2018-12-25 2019-04-23 网易(杭州)网络有限公司 Generation method, device, storage medium and the terminal of faceform
CN110263617A (en) * 2019-04-30 2019-09-20 北京永航科技有限公司 Three-dimensional face model acquisition methods and device
CN110717977A (en) * 2019-10-23 2020-01-21 网易(杭州)网络有限公司 Method and device for processing face of game character, computer equipment and storage medium
WO2020228389A1 (en) * 2019-05-15 2020-11-19 北京市商汤科技开发有限公司 Method and apparatus for creating facial model, electronic device, and computer-readable storage medium
CN111991808A (en) * 2020-07-31 2020-11-27 完美世界(北京)软件科技发展有限公司 Face model generation method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination