CN114863214A - Image generation model training method, image generation method, apparatus, medium, and device


Info

Publication number: CN114863214A
Application number: CN202210533820.1A
Authority: CN (China)
Prior art keywords: sample, object image, parameters, pinching, face
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 杨司琪
Current assignee: Beijing Zitiao Network Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Beijing Zitiao Network Technology Co., Ltd.
Application filed by: Beijing Zitiao Network Technology Co., Ltd.
Priority to: CN202210533820.1A
Publication of: CN114863214A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6009 Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to an image generation model training method, an image generation method, an apparatus, a medium, and a device. The training method comprises: acquiring a sample object image, a sample virtual object image, and a sample face-pinching parameter corresponding to the sample virtual object image; generating a pseudo label for the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameter; and performing supervised model training using the pseudo label to obtain an image generation model. In this way, the image generation model can learn both the feature distribution of real human faces and the feature distribution specific to virtual faces, which improves the generalization ability of the model and reduces the gap between the two distributions, so that a virtual face resembling the user's face can be rendered quickly from the face-pinching parameters extracted by the image generation model, with high face-pinching efficiency and similarity. In addition, training with pseudo labels avoids manually annotating the corresponding face-pinching parameters, which improves model training efficiency.

Description

Image generation model training method, image generation method, apparatus, medium, and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image generation model training method, an image generation method, an apparatus, a medium, and a device.
Background
With the development of mobile terminals and computer technologies, more and more role-playing games have appeared. To meet the personalized customization needs of different players, a face-pinching (face customization) function is usually offered when a player creates a virtual character, so that players can create characters to their liking, for example a virtual character that resembles their real face. Improving the similarity between the virtual face and the player's real face has therefore become the key to virtual-character face pinching.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides an image generation model training method, including: acquiring a sample object image, a sample virtual object image, and a sample face-pinching parameter corresponding to the sample virtual object image; generating a pseudo label for the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameter; and performing supervised model training using the pseudo label to obtain an image generation model.
In a second aspect, the present disclosure provides an image generation method, including: acquiring a target object image in response to a user request; determining, based on the target object image, a target face-pinching parameter corresponding to the target object image through an image generation model, where the image generation model is trained by the image generation model training method provided in the first aspect of the present disclosure; and performing face rendering based on the target face-pinching parameter to obtain a target virtual object image corresponding to the target object image.
In a third aspect, the present disclosure provides an image generation model training apparatus, including: a first acquisition module, configured to acquire a sample object image, a sample virtual object image, and a sample face-pinching parameter corresponding to the sample virtual object image; a generation module, configured to generate a pseudo label for the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameter acquired by the first acquisition module; and a training module, configured to perform supervised model training using the pseudo label generated by the generation module, to obtain an image generation model.
In a fourth aspect, the present disclosure provides an image generation apparatus, including: a second acquisition module, configured to acquire a target object image in response to a user request; a face-pinching parameter extraction module, configured to determine, based on the target object image acquired by the second acquisition module, a target face-pinching parameter corresponding to the target object image through an image generation model, where the image generation model is trained by the image generation model training method provided in the first aspect of the present disclosure; and a rendering module, configured to perform face rendering based on the target face-pinching parameter extracted by the face-pinching parameter extraction module, to obtain a target virtual object image corresponding to the target object image.
In a fifth aspect, the present disclosure provides a computer-readable medium having stored thereon a computer program which, when executed by a processing apparatus, implements the steps of the method provided in the first or second aspect of the present disclosure.
In a sixth aspect, the present disclosure provides an electronic device comprising: a storage device having a computer program stored thereon; processing means for executing the computer program in the storage means to implement the steps of the method provided by the first or second aspect of the present disclosure.
In the above technical solution, the pseudo label for the sample object image is generated according to the acquired sample object image, sample virtual object image, and sample face-pinching parameter corresponding to the sample virtual object image, so that the pseudo label captures the feature distribution specific to virtual faces. Supervised model training with this pseudo label then lets the image generation model learn both the feature distribution of real faces and the feature distribution specific to virtual faces, which improves the generalization ability of the model and reduces the gap between the two distributions, so that a virtual face resembling the user's face can be rendered quickly from the face-pinching parameters extracted by the image generation model, with high face-pinching efficiency and similarity. In addition, training with pseudo labels containing virtual face features avoids manually annotating face-pinching parameters for the sample object images, which improves training efficiency and saves labor cost.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flow diagram illustrating a method of training an image generation model according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a method for generating a pseudo label for a sample object image according to the sample object image, a sample virtual object image, and a sample face-pinching parameter corresponding to the sample virtual object image, according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating a method of image generation according to an exemplary embodiment.
FIG. 4 is a block diagram illustrating an image generative model training apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating an image generation apparatus according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting, and those skilled in the art will understand that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the requested operation will require acquiring and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control with which the user can choose to "agree" or "disagree" to provide personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
It should also be understood that the data involved in the present disclosure (including but not limited to the data itself and the acquisition or use of the data) should comply with the requirements of relevant laws, regulations, and related provisions.
FIG. 1 is a flow diagram illustrating a method of training an image generation model according to an exemplary embodiment. As shown in fig. 1, the method includes the following S101 to S103.
In S101, a sample object image, a sample virtual object image, and a sample face-pinching parameter corresponding to the sample virtual object image are acquired.
In the present disclosure, the sample object images come from an authorized public data set. A sample object image may be an image containing a human face, for example a sample face image.
In one embodiment, the sample face-pinching parameters include skeletal point parameters, where the skeletal point parameters are parameters related to the shape of the face and the shapes of the facial features.
In another embodiment, the sample face-pinching parameters include makeup parameters, where the makeup parameters include lip color, eye shadow color, hairstyle, and the like.
In yet another embodiment, the sample face-pinching parameters include both skeletal point parameters and makeup parameters. In this way, the image generation model can learn not only parameters related to the shapes of the face and facial features, but also makeup parameters such as lip color, eye shadow color, and hairstyle, which improves the similarity of subsequent face pinching.
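As a concrete illustration, the combined face-pinching parameters could be organized as in the minimal Python sketch below. The field names, the 128-dimensional bone vector, and the integer encoding of makeup choices are assumptions made for illustration; the disclosure does not specify a data layout.
```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class FacePinchParams:
    """Hypothetical container for the sample face-pinching parameters."""

    # Skeletal point parameters: continuous values controlling the shape of
    # the face and of the facial features (e.g. jaw width, eye spacing).
    bone_params: np.ndarray = field(
        default_factory=lambda: np.zeros(128, dtype=np.float32)
    )
    # Makeup parameters: discrete choices such as lip color, eye shadow
    # color, and hairstyle, encoded here as category indices.
    lip_color: int = 0
    eye_shadow_color: int = 0
    hairstyle: int = 0

    def to_vector(self) -> np.ndarray:
        """Flatten all parameters into a single regression target vector."""
        makeup = np.array(
            [self.lip_color, self.eye_shadow_color, self.hairstyle],
            dtype=np.float32,
        )
        return np.concatenate([self.bone_params, makeup])
```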
In S102, a pseudo label for the sample object image is generated based on the sample object image, the sample virtual object image, and the sample face-pinching parameter corresponding to the sample virtual object image.
In S103, supervised model training is performed using the pseudo labels of the sample object images to obtain an image generation model.
In the above technical solution, the pseudo label for the sample object image is generated according to the acquired sample object image, sample virtual object image, and sample face-pinching parameter corresponding to the sample virtual object image, so that the pseudo label captures the feature distribution specific to virtual faces. Supervised model training with this pseudo label then lets the image generation model learn both the feature distribution of real faces and the feature distribution specific to virtual faces, which improves the generalization ability of the model and reduces the gap between the two distributions, so that a virtual face resembling the user's face can be rendered quickly from the face-pinching parameters extracted by the image generation model, with high face-pinching efficiency and similarity. In addition, training with pseudo labels containing virtual face features avoids manually annotating face-pinching parameters for the sample object images, which improves training efficiency and saves labor cost.
A specific embodiment of acquiring the sample virtual object image and the sample face-pinching parameter corresponding to the sample virtual object image in S101 is described in detail below.
In one embodiment, a virtual face resembling a real face may be pinched out by adjusting preset face-pinching parameters; an image containing the pinched virtual face is then used as the sample virtual object image, and the adjusted face-pinching parameters corresponding to the pinched virtual face are used as the sample face-pinching parameters corresponding to the sample virtual object image.
In another embodiment, the sample face-pinching parameters may be randomly generated; face rendering is then performed based on the sample face-pinching parameters to obtain a sample virtual object image corresponding to the sample face-pinching parameters.
Specifically, based on the sample face-pinching parameter, the corresponding sample virtual object image may be obtained as follows: first, face information reconstruction is performed with a preset rendering engine based on the sample face-pinching parameter, to obtain reconstructed face information corresponding to the sample face-pinching parameter; then, the sample virtual object image corresponding to the sample face-pinching parameter is generated based on the reconstructed face information. In this way, sample virtual object images can be generated quickly with the preset rendering engine, saving time and labor.
The reconstructed face information corresponds to a three-dimensional face model that can be determined based on the sample face-pinching parameters. Specifically, the three-dimensional face model may be determined from the parameter values of the sample face-pinching parameters, and the sample virtual object image corresponding to those parameters may then be generated through a three-dimensional-to-two-dimensional conversion.
For example, in the field of game technology, the preset rendering engine may be a game engine.
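A minimal sketch of this sampling-and-rendering procedure is given below. `reconstruct_face` and `project_to_image` are hypothetical stand-ins for the preset rendering engine's API (e.g. a game engine), and the parameter dimension and [0, 1] normalization are assumptions; the disclosure does not name a concrete engine interface.
```python
import numpy as np


def sample_face_pinch_params(dim: int = 131, rng=None) -> np.ndarray:
    """Randomly generate a sample face-pinching parameter vector."""
    rng = rng or np.random.default_rng()
    # Assumed convention: each parameter is normalized to [0, 1].
    return rng.uniform(0.0, 1.0, size=dim).astype(np.float32)


def render_sample(params: np.ndarray, engine) -> np.ndarray:
    """Render a sample virtual object image from face-pinching parameters."""
    # Step 1: face information reconstruction, i.e. build the 3D face model
    # whose bone and makeup attributes are driven by the parameter values.
    face_3d = engine.reconstruct_face(params)
    # Step 2: 3D-to-2D conversion, producing the sample virtual object image.
    return engine.project_to_image(face_3d)


# Usage: each (image, label) training pair comes for free, without manual
# annotation, because the renderer is driven by the sampled parameters:
#   params = sample_face_pinch_params()
#   virtual_image = render_sample(params, engine)
```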
A specific embodiment of generating the pseudo label for the sample object image from the sample object image, the sample virtual object image, and the sample face-pinching parameter corresponding to the sample virtual object image in S102 is described in detail below. Specifically, it can be realized through S201 and S202 shown in fig. 2.
In S201, the model parameters of the first regressor are updated based on the sample virtual object image and the sample face-pinching parameters.
In one embodiment, the sample virtual object image may be input into the first regressor to obtain a first predicted face-pinching parameter; the model parameters of the first regressor are then updated according to the first predicted face-pinching parameter and the sample face-pinching parameter corresponding to the sample virtual object image.
In S202, the pseudo label for the sample object image is generated, from the sample object image, by the first regressor whose model parameters have been updated.
In one embodiment, the sample object image may be input into the updated first regressor to obtain a second predicted face-pinching parameter, and the second predicted face-pinching parameter is used as the pseudo label for the sample object image.
A specific embodiment of updating the model parameters of the first regressor based on the first predicted face-pinching parameter and the sample face-pinching parameter corresponding to the sample virtual object image is described in detail below. Specifically, this can be realized through the following steps (1) and (2):
(1) Calculate a first loss function according to the first predicted face-pinching parameter and the sample face-pinching parameter corresponding to the sample virtual object image.
For example, the absolute value of the difference between the first predicted face-pinching parameter and the sample face-pinching parameter may be taken as the first loss function.
(2) Update the model parameters of the first regressor by stochastic gradient descent according to the first loss function.
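Putting S201 and S202 together, a PyTorch-flavored sketch of one first-regressor update followed by pseudo-label generation is shown below. The toy backbone, the 64x64 input size, the 131-dimensional parameter vector, and the learning rate are assumptions; only the L1 loss and the stochastic gradient descent update come from steps (1) and (2) above.
```python
import torch
import torch.nn as nn

# Toy stand-in for the first regressor: image -> face-pinching parameters.
first_regressor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 131))
optimizer = torch.optim.SGD(first_regressor.parameters(), lr=1e-3)


def update_first_regressor(virtual_images, sample_params):
    """S201: update the first regressor on (virtual image, parameter) pairs."""
    pred = first_regressor(virtual_images)          # first predicted parameters
    loss1 = torch.abs(pred - sample_params).mean()  # (1) absolute difference
    optimizer.zero_grad()
    loss1.backward()
    optimizer.step()                                # (2) SGD update
    return loss1.item()


@torch.no_grad()
def make_pseudo_labels(sample_object_images):
    """S202: the updated first regressor labels the real face images."""
    # Second predicted parameters, used as the pseudo labels.
    return first_regressor(sample_object_images)
```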
A specific embodiment of performing supervised model training using the pseudo label of the sample object image to obtain the image generation model in S103 is described in detail below. Specifically, this can be realized through the following steps [1] to [4]:
[1] Update the model parameters of the second regressor according to the sample object image and the pseudo label.
In one embodiment, the sample object image may be input into the second regressor to obtain a third predicted face-pinching parameter; the model parameters of the second regressor are then updated according to the third predicted face-pinching parameter and the pseudo label.
[2] Update the parameters of the updated first regressor using the parameters of the updated second regressor.
In one embodiment, the first regressor includes a first feature extraction module and a first fully connected module, and the second regressor includes a second feature extraction module and a second fully connected module, where the second feature extraction module has the same structure as the first feature extraction module. In this case, the parameters of the first feature extraction module in the updated first regressor may be updated with the parameters of the second feature extraction module in the updated second regressor.
Illustratively, the first feature extraction module and the second feature extraction module each consist of four sequentially connected residual convolution networks.
The first fully connected module may consist of one fully connected layer or of several sequentially connected fully connected layers, and likewise for the second fully connected module. The number of fully connected layers in each module can be set according to the training objective; for example, both modules may consist of three sequentially connected fully connected layers.
It should be noted that the structures of the first fully connected module and the second fully connected module may be the same or different; the embodiments of the present disclosure impose no specific limitation. A code sketch of this regressor structure is given after step [4] below.
[3] Determine whether a preset training cutoff condition is met.
In the present disclosure, the preset training cutoff condition may be that the number of training iterations reaches a preset number, or that the sum of the loss of the first regressor and the loss of the second regressor is less than a preset loss threshold.
If the preset training cutoff condition is not met, the method returns to acquire a new sample object image, a new sample virtual object image, and the sample face-pinching parameter corresponding to the new sample virtual object image, generates a pseudo label for the new sample object image from these, and performs supervised model training using that pseudo label; that is, the method returns to S101 above until the preset training cutoff condition is met, and then performs the following step [4].
[4] Take the second regressor as the image generation model.
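The regressor structure referenced under step [2] might be sketched as follows; the first and second regressors would be two instances of this class. The residual block design, channel counts, 64x64 input assumption, and parameter dimension are illustrative assumptions; the disclosure only states that each feature extraction module consists of four sequentially connected residual convolution networks and each fully connected module of one or more (e.g. three) fully connected layers.
```python
import torch
import torch.nn as nn


class ResidualConvBlock(nn.Module):
    """One residual convolution network (internal design assumed)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride=2)

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))


class Regressor(nn.Module):
    """Feature extraction module (4 residual blocks) + fully connected module."""

    def __init__(self, param_dim: int = 131):
        super().__init__()
        # Identical structure in the first and second regressors, so the
        # weights of one feature extractor can be copied into the other.
        self.features = nn.Sequential(
            ResidualConvBlock(3, 32),
            ResidualConvBlock(32, 64),
            ResidualConvBlock(64, 128),
            ResidualConvBlock(128, 256),
        )
        # E.g. three sequentially connected fully connected layers; assumes
        # 64x64 inputs, downsampled by the four stride-2 blocks to 4x4.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, param_dim),
        )

    def forward(self, x):
        return self.head(self.features(x))
```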
In the above embodiment, the parameters of the first regressor, which extracts virtual face features, are updated using the sample virtual object image and its corresponding sample face-pinching parameter, so that the first regressor learns the feature distribution specific to virtual faces. The parameter updates of the second regressor, which extracts real face features, are supervised with pseudo labels containing virtual face features; through this transfer learning, the feature distribution specific to virtual faces is migrated to real faces, so that structural similarity information between virtual and real faces (such as the relative positions and proportions of the facial features) assists the second regressor in learning the real face feature distribution. This lets the training of the second regressor converge quickly, shortens the training time, improves the accuracy of the face-pinching parameters extracted by the second regressor, and thus improves face-pinching similarity.
In addition, supervising the parameter updates of the second regressor with pseudo labels containing virtual face features, and then updating the parameters of the first feature extraction module in the first regressor with the parameters of the second feature extraction module in the updated second regressor, enables the two feature extraction modules to learn features common to real and virtual faces, while the first fully connected module learns features specific to virtual faces and the second fully connected module learns features specific to real faces. As a result, the second regressor can extract real face features with higher accuracy, and the features unique to virtual faces are also retained: the combination of the second feature extraction module and the first fully connected module can extract virtual face features with higher accuracy.
A specific embodiment of updating the model parameters of the second regressor based on the third predicted face-pinching parameter and the pseudo label in step [1] is described in detail below. Specifically, this can be realized through the following steps 1) and 2):
1) Calculate a second loss function according to the third predicted face-pinching parameter and the pseudo label.
For example, the absolute value of the difference between the third predicted face-pinching parameter and the pseudo label may be taken as the second loss function.
2) Update the model parameters of the second regressor by stochastic gradient descent according to the second loss function.
In this case, the sum of the loss of the first regressor and the loss of the second regressor is the sum of the first loss function and the second loss function; that is, the preset training cutoff condition is that the sum of the first loss function and the second loss function is less than the preset loss threshold.
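Tying steps [1] to [4] together with S201/S202, one possible training loop is sketched below. It reuses the hypothetical `Regressor` class above; the random-tensor batch function, batch size, learning rates, and cutoff settings are placeholders for the real data pipeline and hyperparameters, which the disclosure does not specify.
```python
import torch


def next_batch():
    # Placeholder for the real data pipeline (S101): real face images,
    # rendered virtual face images, and their sample face-pinching parameters.
    return (
        torch.rand(8, 3, 64, 64),  # sample object images (real faces)
        torch.rand(8, 3, 64, 64),  # sample virtual object images
        torch.rand(8, 131),        # sample face-pinching parameters
    )


first, second = Regressor(), Regressor()
opt1 = torch.optim.SGD(first.parameters(), lr=1e-3)
opt2 = torch.optim.SGD(second.parameters(), lr=1e-3)
max_steps, loss_threshold = 10_000, 0.05  # assumed cutoff settings

for step in range(max_steps):
    real_imgs, virt_imgs, virt_params = next_batch()

    # S201: update the first regressor on virtual faces (first loss).
    loss1 = torch.abs(first(virt_imgs) - virt_params).mean()
    opt1.zero_grad()
    loss1.backward()
    opt1.step()

    # S202: generate pseudo labels for the real face images.
    with torch.no_grad():
        pseudo = first(real_imgs)

    # [1] Update the second regressor under pseudo-label supervision
    # (second loss).
    loss2 = torch.abs(second(real_imgs) - pseudo).mean()
    opt2.zero_grad()
    loss2.backward()
    opt2.step()

    # [2] Copy the second feature extractor's weights into the first, so
    # both branches share features common to real and virtual faces.
    first.features.load_state_dict(second.features.state_dict())

    # [3] Cutoff: sum of the two losses below the preset threshold.
    if loss1.item() + loss2.item() < loss_threshold:
        break

image_generation_model = second  # [4] the trained image generation model
```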
FIG. 3 is a flow chart illustrating a method of image generation according to an exemplary embodiment. As shown in fig. 3, the method includes the following S301 to S303.
In S301, a target object image is acquired in response to a user request.
In the present disclosure, the user request may be an image generation request, and the target object image (i.e., a real face image) may be an image containing the user's face acquired with the user's authorization, for example a target face image. The target object image may be captured by the user with an image acquisition terminal (e.g., a smartphone or tablet computer) or may be pre-stored by the user; the embodiments of the present disclosure impose no specific limitation.
In S302, a target face-pinching parameter corresponding to the target object image is determined through the image generation model based on the target object image.
In the present disclosure, the image generation model is trained by the image generation model training method provided by the present disclosure.
In S303, face rendering is performed based on the target face-pinching parameter to obtain a target virtual object image corresponding to the target object image.
In the present disclosure, face rendering may be performed using a preset rendering engine. Specifically, face information reconstruction may be performed with the preset rendering engine based on the target face-pinching parameter, to obtain reconstructed face information corresponding to the target face-pinching parameter; the target virtual object image corresponding to the target object image is then generated based on that reconstructed face information.
In this technical solution, the target face-pinching parameter corresponding to the acquired target object image is determined using the image generation model, and face rendering is then performed based on the target face-pinching parameter to obtain the target virtual object image corresponding to the target object image. Because the image generation model has learned both the feature distribution of real faces and the feature distribution specific to virtual faces, the gap between the two distributions is reduced, so that a virtual face resembling the user's face can be rendered quickly from the target face-pinching parameter extracted by the image generation model, with high face-pinching efficiency and similarity.
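End to end, the S301-S303 inference path might look like the sketch below; `model` is the trained image generation model (the second regressor above) and `engine` is the same hypothetical preset rendering engine as in the training sketch, with assumed preprocessing.
```python
import torch


@torch.no_grad()
def generate_virtual_face(target_image: torch.Tensor, model, engine):
    """S301-S303: target face image -> face-pinching parameters -> virtual face."""
    model.eval()
    # S302: extract the target face-pinching parameter with the trained model.
    params = model(target_image.unsqueeze(0)).squeeze(0)
    # S303: face information reconstruction, then render the target virtual
    # object image with the preset rendering engine.
    face_3d = engine.reconstruct_face(params.cpu().numpy())
    return engine.project_to_image(face_3d)
```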
The present disclosure also provides an image generation model training apparatus. As shown in fig. 4, the image generation model training apparatus 400 includes:
a first acquisition module 401, configured to acquire a sample object image, a sample virtual object image, and a sample face-pinching parameter corresponding to the sample virtual object image;
a generation module 402, configured to generate a pseudo label for the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameter acquired by the first acquisition module 401;
a training module 403, configured to perform supervised model training using the pseudo label generated by the generation module 402, to obtain an image generation model.
In this technical solution, the pseudo label for the sample object image is generated according to the acquired sample object image, sample virtual object image, and sample face-pinching parameter corresponding to the sample virtual object image, so that the pseudo label captures the feature distribution specific to virtual faces. Supervised model training with this pseudo label then lets the image generation model learn both the feature distribution of real faces and the feature distribution specific to virtual faces, which improves the generalization ability of the model and reduces the gap between the two distributions, so that a virtual face resembling the user's face can be rendered quickly from the face-pinching parameters extracted by the image generation model, with high face-pinching efficiency and similarity. In addition, training with pseudo labels containing virtual face features avoids manually annotating face-pinching parameters for the sample object images, which improves training efficiency and saves labor cost.
Optionally, the generation module 402 includes:
a first updating submodule, configured to update the model parameters of the first regressor according to the sample virtual object image and the sample face-pinching parameters;
a first generating submodule, configured to generate the pseudo label for the sample object image, according to the sample object image, through the first regressor whose model parameters have been updated;
the training module 403 includes:
a second updating submodule, configured to update the model parameters of the second regressor according to the sample object image and the pseudo label;
a third updating submodule, configured to update the parameters of the updated first regressor using the parameters of the updated second regressor;
a determining submodule, configured to take the second regressor as the image generation model when a preset training cutoff condition is met.
Optionally, the first updating submodule includes:
a first input submodule, configured to input the sample virtual object image into the first regressor to obtain a first predicted face-pinching parameter;
a fourth updating submodule, configured to update the model parameters of the first regressor according to the first predicted face-pinching parameter and the sample face-pinching parameter.
Optionally, the fourth updating submodule includes:
a first calculation submodule, configured to calculate a first loss function based on the first predicted face-pinching parameter and the sample face-pinching parameter;
a fifth updating submodule, configured to update the model parameters of the first regressor by stochastic gradient descent according to the first loss function.
Optionally, the first generating submodule is configured to input the sample object image into the updated first regressor to obtain a second predicted face-pinching parameter, and to use the second predicted face-pinching parameter as the pseudo label for the sample object image.
Optionally, the second updating submodule includes:
a second input submodule, configured to input the sample object image into the second regressor to obtain a third predicted face-pinching parameter;
a sixth updating submodule, configured to update the model parameters of the second regressor according to the third predicted face-pinching parameter and the pseudo label.
Optionally, the sixth updating submodule includes:
a second calculation submodule, configured to calculate a second loss function according to the third predicted face-pinching parameter and the pseudo label;
a seventh updating submodule, configured to update the model parameters of the second regressor by stochastic gradient descent according to the second loss function.
Optionally, the first regressor includes a first feature extraction module and a first fully connected module, the second regressor includes a second feature extraction module and a second fully connected module, and the second feature extraction module has the same structure as the first feature extraction module;
the third updating submodule is configured to update the parameters of the first feature extraction module in the updated first regressor using the parameters of the second feature extraction module in the updated second regressor.
Optionally, the first acquisition module 401 includes:
a second generating submodule, configured to randomly generate sample face-pinching parameters;
a rendering submodule, configured to perform face rendering based on the sample face-pinching parameters to obtain a sample virtual object image corresponding to the sample face-pinching parameters.
Optionally, the rendering submodule includes:
a reconstruction submodule, configured to perform face information reconstruction with a preset rendering engine based on the sample face-pinching parameters, to obtain reconstructed face information;
a third generating submodule, configured to generate the sample virtual object image corresponding to the sample face-pinching parameters based on the reconstructed face information.
Optionally, the sample face-pinching parameters include skeletal point parameters and/or makeup parameters.
Fig. 5 is a block diagram illustrating an image generation apparatus according to an exemplary embodiment. As shown in fig. 5, the image generation apparatus 500 includes:
a second acquisition module 501, configured to acquire a target object image in response to a user request;
a face-pinching parameter extraction module 502, configured to determine, based on the target object image acquired by the second acquisition module 501, a target face-pinching parameter corresponding to the target object image through an image generation model, where the image generation model is trained by the image generation model training method provided by the present disclosure;
a rendering module 503, configured to perform face rendering based on the target face-pinching parameter extracted by the face-pinching parameter extraction module 502, to obtain a target virtual object image corresponding to the target object image.
In this technical solution, the target face-pinching parameter corresponding to the acquired target object image is determined using the image generation model, and face rendering is then performed based on the target face-pinching parameter to obtain the target virtual object image corresponding to the target object image. Because the image generation model has learned both the feature distribution of real faces and the feature distribution specific to virtual faces, the gap between the two distributions is reduced, so that a virtual face resembling the user's face can be rendered quickly from the target face-pinching parameter extracted by the image generation model, with high face-pinching efficiency and similarity.
The image generation model training apparatus 400 may be provided independently of the image generation apparatus 500 or integrated into the image generation apparatus 500; the present disclosure imposes no specific limitation.
The present disclosure also provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, implements the steps of the above-mentioned image generation method provided by the present disclosure, or the steps of the above-mentioned image generation model training method.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., a terminal device or a server) 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a sample object image, a sample virtual object image, and a sample face-pinching parameter corresponding to the sample virtual object image; generate a pseudo label for the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameter; and perform supervised model training using the pseudo label to obtain an image generation model.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a target object image in response to a user request; determine, based on the target object image, a target face-pinching parameter corresponding to the target object image through an image generation model, where the image generation model is trained by the image generation model training method provided by the present disclosure; and perform face rendering based on the target face-pinching parameter to obtain a target virtual object image corresponding to the target object image.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a module does not constitute a limitation on the module itself; for example, the first acquisition module may also be described as "a module that acquires a face image of a target user".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides an image generation model training method according to one or more embodiments of the present disclosure, including: acquiring a sample object image, a sample virtual object image and a sample pinching face parameter corresponding to the sample virtual object image; generating a pseudo label of the sample object image according to the sample object image, the sample virtual object image and the sample pinching parameter; and carrying out supervised model training by using the pseudo labels to obtain an image generation model.
Example 2 provides the method of example 1, the generating a pseudo label for the sample object image from the sample object image, the sample virtual object image, and the sample pinching parameters, comprising: updating model parameters of a first regressor according to the sample virtual object image and the sample face pinching parameters; generating a pseudo label of the sample object image through a first regressor obtained after model parameters are updated according to the sample object image; the supervised model training by using the pseudo label to obtain an image generation model comprises: updating model parameters of a second regressor according to the sample object image and the pseudo label; updating the parameters of the first regressor obtained after the model parameters are updated by using the parameters of the second regressor obtained after the model parameters are updated; and when a preset training cut-off condition is met, taking a second regressor as the image generation model.
Example 3 provides the method of example 2, the updating model parameters of the first regressor from the sample virtual object images and the sample pinching face parameters, comprising: inputting the sample virtual object image into a first regressor to obtain a first prediction face pinching parameter; updating the model parameters of the first regressor according to the first predicted pinching parameters and the sample pinching parameters.
Example 4 provides the method of example 3, where updating the model parameters of the first regressor according to the first predicted face-pinching parameters and the sample face-pinching parameters includes: calculating a first loss function according to the first predicted face-pinching parameters and the sample face-pinching parameters; and updating the model parameters of the first regressor by stochastic gradient descent according to the first loss function.
Example 5 provides the method of example 2, where generating, according to the sample object image, the pseudo label of the sample object image through the first regressor with updated model parameters includes: inputting the sample object image into the updated first regressor to obtain second predicted face-pinching parameters, and taking the second predicted face-pinching parameters as the pseudo label of the sample object image.
Example 6 provides the method of example 2, where updating the model parameters of the second regressor according to the sample object image and the pseudo label includes: inputting the sample object image into the second regressor to obtain third predicted face-pinching parameters; and updating the model parameters of the second regressor according to the third predicted face-pinching parameters and the pseudo label.
Example 7 provides the method of example 6, where updating the model parameters of the second regressor according to the third predicted face-pinching parameters and the pseudo label includes: calculating a second loss function according to the third predicted face-pinching parameters and the pseudo label; and updating the model parameters of the second regressor by stochastic gradient descent according to the second loss function.
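Taken together, examples 2 through 7 describe one alternating training step. A minimal sketch of that step is given below, continuing the Regressor sketch above and assuming mean-squared error for both loss functions, a fixed learning rate, and a dummy batch in place of a real data loader; all of these choices are assumptions, as the examples fix neither the loss form nor the hyperparameters.

```python
import torch
import torch.nn as nn

num_params = 128                  # assumed face-pinching parameter dimensionality
first = Regressor(num_params)     # trained on rendered virtual object images
second = Regressor(num_params)    # trained on real object images with pseudo labels
opt_first = torch.optim.SGD(first.parameters(), lr=1e-3)
opt_second = torch.optim.SGD(second.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()            # assumed; the examples do not fix a loss form

# Dummy batch standing in for a real data loader.
loader = [(torch.randn(8, 3, 64, 64),      # sample virtual object images
           torch.randn(8, num_params),     # sample face-pinching parameters
           torch.randn(8, 3, 64, 64))]     # sample object images

for virtual_img, pinch_params, real_img in loader:
    # Examples 3-4: first predicted face-pinching parameters vs. the sample
    # face-pinching parameters; stochastic gradient descent on the first regressor.
    opt_first.zero_grad()
    loss_fn(first(virtual_img), pinch_params).backward()
    opt_first.step()

    # Example 5: the updated first regressor pseudo-labels the real sample.
    with torch.no_grad():
        pseudo_label = first(real_img)

    # Examples 6-7: third predicted face-pinching parameters vs. the pseudo
    # label; stochastic gradient descent on the second regressor.
    opt_second.zero_grad()
    loss_fn(second(real_img), pseudo_label).backward()
    opt_second.step()
    # The parameter update of example 8 is sketched separately below.
```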
Example 8 provides the method of example 2, where the first regressor includes a first feature extraction module and a first fully connected module, the second regressor includes a second feature extraction module and a second fully connected module, and the second feature extraction module is structurally identical to the first feature extraction module; updating the parameters of the updated first regressor using the parameters of the updated second regressor includes: updating the parameters of the first feature extraction module in the updated first regressor using the parameters of the second feature extraction module in the updated second regressor.
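Continuing the sketch above, the update of example 8 might copy only the feature extraction module's weights from the second regressor into the first, leaving the two fully connected modules independent. The direct copy below is one reading of "updating the parameters"; an exponential moving average, common in teacher-student schemes, would be an alternative and is shown commented out as an assumption.

```python
# Example 8: overwrite the first feature extraction module's parameters
# with those of the second; the fully connected modules stay separate.
first.features.load_state_dict(second.features.state_dict())

# Assumed alternative (not stated in the examples): an exponential
# moving average instead of a direct copy.
# for p_t, p_s in zip(first.features.parameters(),
#                     second.features.parameters()):
#     p_t.data.mul_(0.99).add_(p_s.data, alpha=0.01)
```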
Example 9 provides the method of example 1, where acquiring the sample virtual object image and the sample face-pinching parameters corresponding to the sample virtual object image includes: randomly generating the sample face-pinching parameters; and performing face rendering based on the sample face-pinching parameters to obtain the sample virtual object image corresponding to the sample face-pinching parameters.
Example 10 provides the method of example 9, where performing face rendering based on the sample face-pinching parameters to obtain the sample virtual object image corresponding to the sample face-pinching parameters includes: reconstructing face information with a preset rendering engine based on the sample face-pinching parameters to obtain reconstructed face information; and generating the sample virtual object image corresponding to the sample face-pinching parameters based on the reconstructed face information.
Example 11 provides the method of any one of examples 1-10, where the sample face-pinching parameters include bone point parameters and/or makeup parameters.
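A sketch of the sample-synthesis path of examples 9-11 is given below. The parameter dimensions, the uniform sampling range, and the engine's method names are all illustrative assumptions; the actual preset rendering engine is application-specific and is represented here only as a stand-in object.

```python
import numpy as np

def sample_face_pinching_params(num_bone=96, num_makeup=32, rng=None):
    """Randomly generate sample face-pinching parameters (example 9),
    split into bone point and makeup parameters as in example 11.
    The dimensions and the [0, 1] range are assumptions."""
    rng = rng or np.random.default_rng()
    return rng.uniform(0.0, 1.0, size=num_bone + num_makeup)

def render_virtual_face(params, engine):
    """Example 10: the preset rendering engine first reconstructs face
    information from the parameters and then renders the image.
    `engine` and its reconstruct/render methods are hypothetical names."""
    face_info = engine.reconstruct(params)
    return engine.render(face_info)
```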
Example 12 provides an image generation method according to one or more embodiments of the present disclosure, including: acquiring a target object image in response to a user request; determining target face-pinching parameters corresponding to the target object image through an image generation model based on the target object image, where the image generation model is trained by the image generation model training method of any one of examples 1-11; and performing face rendering based on the target face-pinching parameters to obtain a target virtual object image corresponding to the target object image.
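At inference time the pieces compose as sketched below: the trained image generation model regresses target face-pinching parameters from a user photo, and the hypothetical rendering stub above turns them into a virtual object image. Function and variable names are illustrative, not the disclosed interface.

```python
import torch

@torch.no_grad()
def generate_virtual_image(user_photo: torch.Tensor, model, engine):
    """Example 12: photo -> target face-pinching parameters -> rendered
    virtual face. `model` is the trained second regressor; `engine` is
    the same hypothetical rendering stub as above."""
    model.eval()
    params = model(user_photo.unsqueeze(0)).squeeze(0)
    return render_virtual_face(params.cpu().numpy(), engine)
```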
Example 13 provides an image generation model training apparatus according to one or more embodiments of the present disclosure, including: a first acquisition module, configured to acquire a sample object image, a sample virtual object image, and sample face-pinching parameters corresponding to the sample virtual object image; a generating module, configured to generate a pseudo label of the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameters acquired by the first acquisition module; and a training module, configured to perform supervised model training using the pseudo label generated by the generating module to obtain an image generation model.
Example 14 provides, according to one or more embodiments of the present disclosure, an image generation apparatus including: a second acquisition module, configured to acquire a target object image in response to a user request; a face-pinching parameter extraction module, configured to determine, based on the target object image acquired by the second acquisition module, target face-pinching parameters corresponding to the target object image through an image generation model, where the image generation model is trained by the image generation model training method of any one of examples 1-11; and a rendering module, configured to perform face rendering based on the target face-pinching parameters extracted by the face-pinching parameter extraction module to obtain a target virtual object image corresponding to the target object image.
Example 15 provides, according to one or more embodiments of the present disclosure, a computer-readable medium on which a computer program is stored, where the program, when executed by a processing device, implements the steps of the method of any one of examples 1-12.
Example 16 provides, according to one or more embodiments of the present disclosure, an electronic device including: a storage device having a computer program stored thereon; and a processing device, configured to execute the computer program in the storage device to implement the steps of the method of any one of examples 1-12.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Claims (16)

1. An image generation model training method, comprising:
acquiring a sample object image, a sample virtual object image, and sample face-pinching parameters corresponding to the sample virtual object image;
generating a pseudo label of the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameters;
and performing supervised model training using the pseudo label to obtain an image generation model.
2. The method of claim 1, wherein generating the pseudo label of the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameters comprises:
updating model parameters of a first regressor according to the sample virtual object image and the sample face-pinching parameters;
generating, according to the sample object image, the pseudo label of the sample object image through the first regressor with updated model parameters;
wherein performing supervised model training using the pseudo label to obtain the image generation model comprises:
updating model parameters of a second regressor according to the sample object image and the pseudo label;
updating the parameters of the updated first regressor using the parameters of the updated second regressor;
and, when a preset training termination condition is met, taking the second regressor as the image generation model.
3. The method of claim 2, wherein updating the model parameters of the first regressor according to the sample virtual object image and the sample face-pinching parameters comprises:
inputting the sample virtual object image into the first regressor to obtain first predicted face-pinching parameters;
and updating the model parameters of the first regressor according to the first predicted face-pinching parameters and the sample face-pinching parameters.
4. The method of claim 3, wherein updating the model parameters of the first regressor according to the first predicted face-pinching parameters and the sample face-pinching parameters comprises:
calculating a first loss function according to the first predicted face-pinching parameters and the sample face-pinching parameters;
and updating the model parameters of the first regressor by stochastic gradient descent according to the first loss function.
5. The method of claim 2, wherein generating, according to the sample object image, the pseudo label of the sample object image through the first regressor with updated model parameters comprises:
inputting the sample object image into the updated first regressor to obtain second predicted face-pinching parameters, and taking the second predicted face-pinching parameters as the pseudo label of the sample object image.
6. The method of claim 2, wherein updating the model parameters of the second regressor according to the sample object image and the pseudo label comprises:
inputting the sample object image into the second regressor to obtain third predicted face-pinching parameters;
and updating the model parameters of the second regressor according to the third predicted face-pinching parameters and the pseudo label.
7. The method of claim 6, wherein updating the model parameters of the second regressor according to the third predicted face-pinching parameters and the pseudo label comprises:
calculating a second loss function according to the third predicted face-pinching parameters and the pseudo label;
and updating the model parameters of the second regressor by stochastic gradient descent according to the second loss function.
8. The method of claim 2, wherein the first regressor comprises a first feature extraction module and a first fully connected module, the second regressor comprises a second feature extraction module and a second fully connected module, and the second feature extraction module is structurally identical to the first feature extraction module;
wherein updating the parameters of the updated first regressor using the parameters of the updated second regressor comprises:
updating the parameters of the first feature extraction module in the updated first regressor using the parameters of the second feature extraction module in the updated second regressor.
9. The method of claim 1, wherein acquiring the sample virtual object image and the sample face-pinching parameters corresponding to the sample virtual object image comprises:
randomly generating the sample face-pinching parameters;
and performing face rendering based on the sample face-pinching parameters to obtain the sample virtual object image corresponding to the sample face-pinching parameters.
10. The method of claim 9, wherein performing face rendering based on the sample face-pinching parameters to obtain the sample virtual object image corresponding to the sample face-pinching parameters comprises:
reconstructing face information with a preset rendering engine based on the sample face-pinching parameters to obtain reconstructed face information;
and generating the sample virtual object image corresponding to the sample face-pinching parameters based on the reconstructed face information.
11. The method according to any one of claims 1-10, wherein the sample face-pinching parameters comprise bone point parameters and/or makeup parameters.
12. An image generation method, comprising:
acquiring a target object image in response to a user request;
determining target face-pinching parameters corresponding to the target object image through an image generation model based on the target object image, wherein the image generation model is trained by the image generation model training method according to any one of claims 1-11;
and performing face rendering based on the target face-pinching parameters to obtain a target virtual object image corresponding to the target object image.
13. An image generation model training apparatus, comprising:
a first acquisition module, configured to acquire a sample object image, a sample virtual object image, and sample face-pinching parameters corresponding to the sample virtual object image;
a generating module, configured to generate a pseudo label of the sample object image according to the sample object image, the sample virtual object image, and the sample face-pinching parameters acquired by the first acquisition module;
and a training module, configured to perform supervised model training using the pseudo label generated by the generating module to obtain an image generation model.
14. An image generation apparatus, comprising:
a second acquisition module, configured to acquire a target object image in response to a user request;
a face-pinching parameter extraction module, configured to determine, based on the target object image acquired by the second acquisition module, target face-pinching parameters corresponding to the target object image through an image generation model, wherein the image generation model is trained by the image generation model training method according to any one of claims 1-11;
and a rendering module, configured to perform face rendering based on the target face-pinching parameters extracted by the face-pinching parameter extraction module to obtain a target virtual object image corresponding to the target object image.
15. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processing device, implements the steps of the method of any one of claims 1-12.
16. An electronic device, comprising:
a storage device having a computer program stored thereon;
and a processing device, configured to execute the computer program in the storage device to implement the steps of the method according to any one of claims 1-12.
CN202210533820.1A 2022-05-16 2022-05-16 Image generation model training method, image generation device, image generation medium, and image generation device Pending CN114863214A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210533820.1A CN114863214A (en) 2022-05-16 2022-05-16 Image generation model training method, image generation device, image generation medium, and image generation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210533820.1A CN114863214A (en) 2022-05-16 2022-05-16 Image generation model training method, image generation device, image generation medium, and image generation device

Publications (1)

Publication Number Publication Date
CN114863214A 2022-08-05

Family

ID=82636807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210533820.1A Pending CN114863214A (en) 2022-05-16 2022-05-16 Image generation model training method, image generation device, image generation medium, and image generation device

Country Status (1)

Country Link
CN (1) CN114863214A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115253303A (en) * 2022-08-16 2022-11-01 北京字跳网络技术有限公司 Method, device, storage medium and electronic equipment for beautifying virtual object
CN115809696A (en) * 2022-12-01 2023-03-17 支付宝(杭州)信息技术有限公司 Virtual image model training method and device
CN115809696B (en) * 2022-12-01 2024-04-02 支付宝(杭州)信息技术有限公司 Virtual image model training method and device
CN115775024A (en) * 2022-12-09 2023-03-10 支付宝(杭州)信息技术有限公司 Virtual image model training method and device
CN115775024B (en) * 2022-12-09 2024-04-16 支付宝(杭州)信息技术有限公司 Virtual image model training method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination