CN115187450A - Image generation method, image generation device and related equipment

Image generation method, image generation device and related equipment

Info

Publication number
CN115187450A
Authority
CN
China
Prior art keywords
model
image
face
feature
face image
Prior art date
Legal status
Pending
Application number
CN202210735272.0A
Other languages
Chinese (zh)
Inventor
邢晨
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202210735272.0A
Publication of CN115187450A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide an image generation method, an image generation apparatus and related equipment. The method comprises: training a pre-trained StyleGAN model on a target data set having a target face style and adjusting its model parameters to obtain a stylized model, wherein the stylized model is used for generating face images having the target face style; and inputting the same random noise into the StyleGAN model and the stylized model respectively, so that the StyleGAN model obtains a first face image based on the random noise input and the stylized model obtains a second face image based on the random noise input and the first face image, the second face image being a face image with the target face style obtained after style migration of the first face image. The method provided by the embodiments of the invention can reduce the cost and difficulty of obtaining training samples for a style migration model.

Description

Image generation method, image generation device and related equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image generation method, an image generation apparatus, and related equipment.
Background
Face stylization refers to converting a real face image in a video or picture into a face image of a different style. In face stylization, a real face image is generally input into a style migration model, which performs style migration to produce face images of different styles.
Training a style migration model requires at least thousands, and often tens of thousands, of image groups as training samples, where each image group comprises a real face image and a corresponding stylized face image. Because so many image groups are required, they are difficult to collect directly from the Internet and usually have to be drawn manually. The prior art therefore suffers from high cost and difficulty in obtaining the training samples used by style migration models.
Disclosure of Invention
The embodiments of the invention aim to provide an image generation method, an image generation apparatus and related equipment, so as to reduce the cost and difficulty of obtaining training samples for a style migration model. The specific technical scheme is as follows:
in a first aspect of the present invention, there is provided an image generating method, including:
training a pre-trained StyleGAN model and adjusting model parameters by using a target data set with a target face style to obtain a stylized model, wherein the stylized model is used for generating a face image with the target face style;
inputting random noise into the StyleGAN model and the stylized model respectively, so that the StyleGAN model obtains a first face image based on the random noise input and the stylized model obtains a second face image based on the random noise input and the first face image, the second face image being a face image with the target face style obtained after style migration of the first face image.
Optionally, the StyleGAN model derives the first face image based on the random noise input, including:
a first mapping layer of the StyleGAN model performs feature extraction based on the random noise input to obtain a first hidden feature; the first hidden feature is used for representing the attribute of a face area in the first face image;
weighting the first hidden feature and a preset first average feature to obtain a first target feature; the first average feature is determined based on a result output by the first mapping layer at each training of the StyleGAN model;
and a first synthesis module of the StyleGAN model generates an image based on the first target feature input to obtain the first face image.
Optionally, the weighting the first hidden feature and a preset first average feature to obtain a first target feature includes:
weighting the first hidden feature and a preset first average feature to obtain a first intermediate feature;
and adjusting parameters corresponding to the first intermediate features to obtain the first target features.
Optionally, the stylized model derives the second facial image based on the random noise input and the first facial image, including:
a second mapping layer of the stylized model performs feature extraction based on the random noise input to obtain a second hidden feature, wherein the second hidden feature is used for representing the attribute of a face region in the second face image;
weighting the second hidden feature and a preset second average feature to obtain a second target feature; the second average feature is determined based on a result output by the second mapping layer during each training of the stylized model;
and a second synthesis module of the stylized model performs image generation based on the second target feature input and the first face image input to obtain the second face image.
Optionally, the inputting random noise into the StyleGAN model and the stylized model respectively to obtain a first face image and a second face image includes:
mixing the StyleGAN model and the stylized model to obtain a hybrid model; wherein the hybrid model comprises the StyleGAN model and the stylized model, and an output end of a first synthesis module of the StyleGAN model is connected with an input end of a second synthesis module of the stylized model;
and inputting the random noise into the StyleGAN model and the hybrid model for image generation to obtain a first face image and a second face image.
Optionally, after the pre-trained StyleGAN model is trained and the model parameters are adjusted by using the target data set with the target face style to obtain the stylized model, the method further includes:
acquiring a third hidden feature of a sample real image, wherein the third hidden feature is used for representing the attribute of a face area in the sample real image;
weighting the third hidden feature and the preset second average feature to obtain a third target feature;
a first synthesis module of the StyleGAN model generates an image based on the third hidden feature input to obtain a first real image;
a second synthesis module of the stylized model performs image generation based on the third target feature input and the first real image input to obtain a second real image; and the second real image is a face image with the style of the target face after the style of the first real image is transferred.
Optionally, after the random noise is input to the StyleGAN model and the stylized model respectively to obtain the first face image and the second face image, the method further includes:
determining attributes of a background region in the sample real image and determining attributes of a background region in the second real image;
and replacing the attribute of the background area in the second real image by the attribute of the background area in the sample real image.
In a second aspect of the present invention, there is provided an image generating apparatus comprising:
the system comprises a processing module, a storage module and a processing module, wherein the processing module is used for training a pre-trained StyleGAN model and adjusting model parameters by using a target data set with a target face style to obtain a stylized model, and the stylized model is used for generating a face image with the target face style;
an input module, configured to input random noise to the StyleGAN model and the stylized model, respectively, so that the StyleGAN model obtains a first face image based on the random noise input, the stylized model obtains a second face image based on the random noise input and the first face image, and the second face image is a face image with the target face style obtained after style migration is performed on the first face image.
In a third aspect of the present invention, an electronic device is provided, which includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a program;
a processor for implementing the method steps as described in the first aspect when executing a program stored in the memory.
In a fourth aspect of the present invention, a readable storage medium is provided, on which a program is stored which, when executed by a processor, implements the method according to the first aspect.
In the embodiment of the present invention, the first face image and the second face image are obtained by inputting random noise into the StyleGAN model and the stylized model respectively. On one hand, the StyleGAN model obtains the first face image based on the random noise input, where the first face image is a realistic face image. On the other hand, the stylized model obtains the second face image based on the random noise input and the first face image, where the second face image is the stylized image corresponding to the first face image. Thus, the method provided by the embodiment of the invention can stably generate large amounts of sample data for training a style migration model, reducing the cost and difficulty of obtaining the training samples used by the style migration model.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic flow chart illustrating an image generation method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a StyleGAN2 model according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a stylized model in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a hybrid model according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of obtaining a hybrid model according to an embodiment of the present invention;
FIG. 6 is a flow chart illustrating data processing according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an image generation method, including the following steps:
step 101, training a pre-trained Style-based Generative Adversarial Network (StyleGAN) model using a target data set having a target face style and adjusting its model parameters to obtain a stylized model, wherein the stylized model is used for generating face images having the target face style;
step 102, inputting random noise into the StyleGAN model and the stylized model respectively, so that the StyleGAN model obtains a first face image based on the random noise input and the stylized model obtains a second face image based on the random noise input and the first face image, the second face image being a face image with the target face style obtained after style migration of the first face image.
It should be understood that the training data set used to pre-train the StyleGAN model may be any data set; in some embodiments it is an open-source large sample data set. For example, in some embodiments, the training data set used may be the Flickr-Faces-HQ (FFHQ) data set. In other embodiments, the Public Figures Face Database (PubFig) may be used.
It should be understood that the specific structure of the StyleGAN model is not limited herein. In a specific implementation, the StyleGAN model may be a StyleGAN base model or a model optimized on the StyleGAN base model. For example, in some embodiments, the StyleGAN model may also be a StyleGAN2 model.
For convenience of description, the following embodiments refer to the pre-trained StyleGAN model simply as the StyleGAN model, and take the case where the StyleGAN model is a StyleGAN2 model as an example.
It should be understood that the face style may be understood as the display style of the face in an image, covering face attributes such as face shape, facial expression, face orientation, hair style, skin color, lighting on the face, hairline shape, hair color and wrinkles.
The target data set having the target face style may be understood as all sample images within the target data set having the target face style. The target data set may be any data set. For example, in some embodiments, the target data set may be a self-created data set, i.e., the target data set is created by acquiring images having the style of the target face. In other embodiments, the target data set may also be an open source data set.
In specific implementation, the way the facial features are drawn, the brush strokes, the coloring technique and so on are relatively uniform within the works of a single painter, so works by the same painter generally share the same face style. Therefore, a collection of any painter's works can be gathered as the target data set, and the face style of that painter's works is the target face style of the target data set.
It should be noted that the target face style of the second face images generated by the trained stylized model varies with the target face style of the target data set used.
Since the StyleGAN2 model is pre-trained, the target dataset used in training and model parameter adjustment of the StyleGAN2 model may be a large sample dataset or a small sample dataset. For example, in some embodiments, the number of samples included in the target dataset ranges from 50 to 2000. In other embodiments, the number of samples included in the target dataset ranges from 80 to 300.
In a specific implementation, a small sample data set is generally easier to acquire than a large one, while model training with a large sample data set generally yields better results. Therefore, in this embodiment the StyleGAN2 model is first pre-trained on a large sample data set to obtain a good training effect, and is then trained and its model parameters adjusted on the target data set to obtain the stylized model. This arrangement improves the training effect of the stylized model on one hand, and reduces the difficulty of acquiring the target data set on the other.
In some embodiments, since the target data set may be a small sample data set, overfitting may occur when the StyleGAN2 model is trained and its model parameters adjusted using the target data set. Therefore, to improve the training effect, training the StyleGAN2 model may be understood as training it with Adaptive Discriminator Augmentation (StyleGAN-ADA) together with an early stopping method. Still further, in other embodiments, training the StyleGAN2 model may be understood as training it with the StyleGAN-ADA method, the Freeze the Discriminator (FreezeD) method and the early stopping method.
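For illustration, the following is a minimal PyTorch sketch of how this fine-tuning stage might be organized, assuming a StyleGAN2-style generator/discriminator pair and a caller-supplied FID routine for early stopping. The helper names (`compute_fid`, `freeze_discriminator_layers`), the number of frozen discriminator blocks and the hyperparameters are illustrative assumptions, not part of this disclosure, and the ADA augmentation of the discriminator inputs is omitted for brevity.
```python
import copy
import torch
import torch.nn.functional as F

def infinite(loader):
    """Cycle through a DataLoader indefinitely (small target data sets need many passes)."""
    while True:
        for batch in loader:
            yield batch

def freeze_discriminator_layers(discriminator, num_frozen):
    """FreezeD: keep the first `num_frozen` child blocks of the discriminator fixed."""
    for i, block in enumerate(discriminator.children()):
        if i < num_frozen:
            for p in block.parameters():
                p.requires_grad = False

def finetune_stylized(generator, discriminator, loader, compute_fid, z_dim=512,
                      max_steps=5000, eval_every=500, patience=3, lr=2e-4, device="cuda"):
    freeze_discriminator_layers(discriminator, num_frozen=4)   # FreezeD
    g_opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.0, 0.99))
    d_opt = torch.optim.Adam([p for p in discriminator.parameters() if p.requires_grad],
                             lr=lr, betas=(0.0, 0.99))
    data = infinite(loader)
    best_fid, best_state, bad_evals = float("inf"), None, 0
    for step in range(1, max_steps + 1):
        real = next(data).to(device)       # ADA would augment both real and fake inputs here
        z = torch.randn(real.size(0), z_dim, device=device)
        fake = generator(z)
        # Non-saturating GAN losses, as in standard StyleGAN2 training.
        d_loss = (F.softplus(discriminator(fake.detach())).mean()
                  + F.softplus(-discriminator(real)).mean())
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()
        g_loss = F.softplus(-discriminator(fake)).mean()
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        if step % eval_every == 0:         # early stopping on a held-out FID estimate
            fid = compute_fid(generator)
            if fid < best_fid:
                best_fid, best_state, bad_evals = fid, copy.deepcopy(generator.state_dict()), 0
            else:
                bad_evals += 1
                if bad_evals >= patience:
                    break                  # stop before the small data set is overfitted
    if best_state is not None:
        generator.load_state_dict(best_state)
    return generator
```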
Model parameter adjustment of the StyleGAN2 model may be understood as fine-tuning (finetune) the StyleGAN2 model. Specifically, after training yields the trained StyleGAN2 model, the parameters of at least one network layer are adjusted on that basis to obtain the stylized model.
In specific implementation, the StyleGAN2 model can be trained and adjusted multiple times to obtain different stylized models, and the style migration effects of these stylized models are then compared to determine the final stylized model used to generate face images with the target face style.
After inputting the random noise to the StyleGAN2 model, the StyleGAN2 model may generate the first face image based on the random noise input. In this embodiment, the first face image may be regarded as a face image whose face style is a real face style.
After inputting the random noise to the stylized model, the stylized model may generate the second facial image based on the random noise input. In this embodiment, the second face image is a face image with the style of the target face obtained after the style of the first face image is transferred. Specifically, the face content of the second face image is generated based on the face content of the first face image, but the second face image has the target face style.
It should be understood that the first facial image and the second facial image may be regarded as an image group, and the image group includes a real facial image and a corresponding stylized facial image. The first face image can be understood as a generated real face image, and the second face image can be understood as a stylized face image corresponding to the first face image.
It should be noted that, after random noise is input to the StyleGAN2 model and the stylized model, the first face image and the second face image can be obtained. Thus, the random noise input to the StyleGAN2 model and the stylized model is the same.
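A minimal sketch of this paired generation, assuming both models are callable PyTorch modules; the signature `stylized(z, first_face)`, reflecting that the stylized model also consumes the first face image, is an assumption (the hybrid model that wires the two synthesis modules together is described further below).
```python
import torch

@torch.no_grad()
def generate_pairs(stylegan, stylized, num_pairs: int, z_dim: int = 512,
                   device: str = "cuda"):
    """Builds (realistic, stylized) image groups for style-migration training."""
    pairs = []
    for _ in range(num_pairs):
        z = torch.randn(1, z_dim, device=device)  # the shared random noise input
        first_face = stylegan(z)                  # realistic face from StyleGAN2
        second_face = stylized(z, first_face)     # same face in the target style
        pairs.append((first_face.cpu(), second_face.cpu()))
    return pairs
```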
In the embodiment of the present invention, the first face image and the second face image are obtained by inputting random noise into the StyleGAN model and the stylized model respectively. On one hand, the StyleGAN model obtains the first face image based on the random noise input, where the first face image is a realistic face image. On the other hand, the stylized model obtains the second face image based on the random noise input and the first face image, where the second face image is the stylized image corresponding to the first face image. Thus, the method provided by the embodiment of the invention can stably generate large amounts of sample data for training a style migration model, reducing the cost and difficulty of obtaining the training samples used by the style migration model.
It should be understood that the specific method by which the StyleGAN2 model obtains the first face image based on the random noise input is not limited herein. Optionally, in some embodiments, it specifically includes the following steps:
a first mapping layer of the StyleGAN2 model performs feature extraction based on the random noise input to obtain a first hidden feature; the first hidden feature is used for representing the attribute of a face area in the first face image;
weighting the first hidden feature and a preset first average feature to obtain a first target feature; the first average feature is determined based on a result output by the first mapping layer at each training of the StyleGAN2 model;
and a first synthesis module of the StyleGAN2 model generates an image based on the first target feature input to obtain the first face image.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a style gan2 model according to an embodiment of the present invention. As shown in fig. 2, the StyleGAN2 model includes a first mapping layer and a first synthesis module. In some embodiments, the first Mapping layer may also be referred to as a first Mapping Network (Mapping Network), and the first Synthesis module may also be referred to as a first Synthesis Network (Synthesis Network).
Specifically, the first mapping layer is configured to perform feature extraction based on the random noise input to obtain a first hidden feature, which may also be understood as a latent vector (Latent Code) characterizing the attributes of the face region in the first face image. In practice, different attributes of a face region in a face image are generally correlated and highly coupled; the Latent Code is a feature in which these attributes have been decoupled.
As can be seen from the above, in this embodiment, feature extraction is performed on the basis of the random noise input through the first mapping layer to obtain a first hidden feature, so that a characterization effect on attributes of a face region in the first face image can be improved, and convenience in operation of adjusting different attributes of the face region in the first face image is also improved.
After the first hidden feature is obtained in the first mapping layer, the first hidden feature and a preset first average feature need to be weighted to obtain a first target feature. Wherein the first average feature is determined based on a result output by the first mapping layer at each pre-training of the StyleGAN2 model. In this embodiment, in the process of weighting the first hidden feature and a preset first average feature, weights of the first hidden feature and the first average feature are not limited herein.
It should be understood that the specific manner in which the first average feature is determined from the results output by the first mapping layer during training of the StyleGAN2 model is not limited herein. For example, in some embodiments, when the StyleGAN2 model is pre-trained, the result output by the first mapping layer in each training step is recorded, and the mean of these results is taken as the first average feature. Further, in other embodiments, abnormal data may first be removed, and the first average feature determined from the remaining results output by the first mapping layer.
In this embodiment, weighting the first hidden feature with the preset first average feature, and tuning their respective weights, keeps the value range of the first target feature reasonable, which improves both the quality and the stability of first face image generation.
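The weighting described here corresponds to what the StyleGAN literature calls the truncation trick. A minimal sketch, where `w` stands for the first hidden feature, `w_avg` for the preset first average feature, and the weight `psi` is a free parameter (the weights are deliberately left open above):
```python
import torch

def weighted_target_feature(w: torch.Tensor, w_avg: torch.Tensor,
                            psi: float = 0.7) -> torch.Tensor:
    # psi = 1.0 keeps the hidden feature unchanged; smaller psi pulls it toward
    # the average feature, trading diversity for more stable generation.
    return w_avg + psi * (w - w_avg)
```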
Optionally, in some embodiments, the weighting the first hidden feature and a preset first average feature to obtain a first target feature includes:
weighting the first hidden feature and a preset first average feature to obtain a first intermediate feature;
and adjusting parameters corresponding to the first intermediate features to obtain the first target features.
Specifically, the first hidden feature and the first average feature may be both expressed as a high-dimensional vector, and therefore, after the first hidden feature and the first average feature are weighted, the obtained first intermediate feature may also be expressed as a high-dimensional vector.
It should be understood that each element in the high-dimensional vector corresponding to the first intermediate feature may be understood as a parameter of the first intermediate feature, or as an attribute characterizing the face region in the first face image. Therefore, adjusting the parameter corresponding to the first intermediate feature may also be understood as editing an element in a high-dimensional vector, so as to achieve an effect of editing the attribute of the face region in the first face image.
For ease of understanding, an example follows. In one embodiment, the attributes of the face region in the first face image include Asian ethnicity, mouth opening, eye closing, head turning, head raising and head lowering, and all of these attributes can be edited.
In this example, the first intermediate feature is a six-dimensional vector in which each element corresponds to one attribute. Once the correspondence between attributes and elements is determined, an attribute can be edited by adjusting the corresponding element.
For example, when the face in the first face image is expected to have an open mouth, the second element of the six-dimensional vector can be adjusted so that the attributes characterized by the vector include an open-mouth posture. As another example, when the face in the first face image is expected to be Asian and have closed eyes, the first and third elements of the six-dimensional vector can be adjusted accordingly.
In this embodiment, after the first hidden feature and the preset first average feature are weighted to obtain the first intermediate feature, the parameters corresponding to the first intermediate feature may be adjusted to obtain the first target feature. Adjusting these parameters yields different first target features, which give the face region in the first face image different attributes and hence different face poses. Through this arrangement, first face images with different poses can be obtained, further enriching the data.
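A toy sketch of this element-wise editing follows, assuming (as the example above does) a six-element feature with a fixed attribute-to-element correspondence; the dictionary and value conventions are illustrative assumptions, and in practice latent edits usually move the feature along learned semantic directions rather than single coordinates.
```python
import torch

# Hypothetical attribute-to-element mapping, mirroring the six attributes above.
ATTR_INDEX = {"asian": 0, "mouth_open": 1, "eyes_closed": 2,
              "head_turn": 3, "head_up": 4, "head_down": 5}

def edit_attribute(intermediate: torch.Tensor, attr: str, value: float) -> torch.Tensor:
    """Returns a copy of the first intermediate feature with one attribute element set."""
    edited = intermediate.clone()
    edited[..., ATTR_INDEX[attr]] = value   # adjust the element tied to the attribute
    return edited

# e.g. to make the generated face open-mouthed:
# first_target = edit_attribute(first_intermediate, "mouth_open", 1.0)
```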
It should be understood that the first synthesis module of the StyleGAN2 model is configured to perform image generation based on the first target feature input, resulting in the first face image, and the attribute of the face region in the first face image is determined based on the first target feature.
The structure of the first synthesis module is not limited herein. In some embodiments, the first synthesis module comprises at least two first image processing layers connected in series, and the resolution of the result output by each first image processing layer increases from the upper layer to the lower layer. The number and specific structure of the first image processing layers are not limited herein.
Therefore, image generation by the first synthesis module of the StyleGAN2 model based on the first target feature input may also be understood as inputting the first target feature into every first image processing layer of the first synthesis module. In other words, each first image processing layer performs image processing based on the output of the previous layer and the first target feature input, and passes its result to the next layer. The uppermost first image processing layer generates an image from the first target feature input alone, and the result output by the lowermost first image processing layer is the first face image.
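A simplified sketch of such a synthesis module follows, assuming modulation by a single affine style projection and nearest-neighbour upsampling; real StyleGAN2 synthesis layers use weight-demodulated convolutions and per-layer noise, which are omitted here, so this only illustrates the layer-by-layer structure described above.
```python
import torch
import torch.nn as nn

class SynthesisLayer(nn.Module):
    """One image processing layer: upsample, then a w-modulated convolution."""
    def __init__(self, in_ch: int, out_ch: int, w_dim: int = 512):
        super().__init__()
        self.affine = nn.Linear(w_dim, in_ch)      # style from w scales the features
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2)      # each layer raises the resolution

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        style = self.affine(w).view(x.size(0), -1, 1, 1)
        return torch.relu(self.conv(self.up(x * style)))

class SynthesisModule(nn.Module):
    """Stack of layers of increasing resolution; w is fed to every layer."""
    def __init__(self, channels=(512, 256, 128, 64), w_dim: int = 512):
        super().__init__()
        self.const = nn.Parameter(torch.randn(1, channels[0], 4, 4))  # learned 4x4 input
        self.layers = nn.ModuleList(
            SynthesisLayer(c_in, c_out, w_dim)
            for c_in, c_out in zip(channels[:-1], channels[1:]))
        self.to_rgb = nn.Conv2d(channels[-1], 3, 1)

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        x = self.const.expand(w.size(0), -1, -1, -1)
        for layer in self.layers:          # every layer consumes the target feature
            x = layer(x, w)
        return self.to_rgb(x)              # the lowest layer's result is the image

# e.g.: SynthesisModule()(torch.randn(2, 512)) yields a batch of two RGB 32x32 images.
```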
It should be understood that the specific method by which the stylized model derives the second facial image based on the random noise input and the first facial image is not limited herein. Optionally, in some embodiments, the stylized model obtains the second facial image based on the random noise input and the first facial image, and specifically includes the following steps:
a second mapping layer of the stylized model performs feature extraction based on the random noise input to obtain a second hidden feature, wherein the second hidden feature is used for representing the attribute of a face region in the second face image;
weighting the second hidden feature and a preset second average feature to obtain a second target feature; the second average feature is determined based on a result output by the second mapping layer during each training of the stylized model;
and a second synthesis module of the stylized model performs image generation based on the second target feature input and the first face image input to obtain the second face image.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a stylized model according to an embodiment of the present invention. As shown in fig. 3, the stylized model includes a second mapping layer and a second synthesis module. In some embodiments, the second Mapping layer may also be referred to as a second Mapping Network, and the second Synthesis module may also be referred to as a second Synthesis Network.
Specifically, the second mapping layer is configured to perform feature extraction based on the random noise input to obtain a second hidden feature, which may also be understood as a Latent Code characterizing the attributes of the face region in the second face image.
As can be seen from the above, in this embodiment, feature extraction is performed on the basis of the random noise input through the second mapping layer to obtain a second hidden feature, so that a characterization effect on attributes of the face region in the second face image can be improved, and convenience in operation of adjusting different attributes of the face region in the second face image is also improved.
After the second hidden feature is obtained in the second mapping layer, the second hidden feature and a preset second average feature need to be weighted to obtain a second target feature. Wherein the second average feature is determined based on a result output by the second mapping layer at each training of the stylized model. In this embodiment, in the process of performing weighting processing on the second hidden feature and the second average feature, weights of the second hidden feature and the second average feature are not limited herein.
It should be understood that the specific manner in which the second average feature is determined from the results output by the second mapping layer during training of the stylized model is not limited herein. For example, in some embodiments, when the stylized model is trained, the result output by the second mapping layer in each training step is recorded, and the mean of these results is taken as the second average feature. Further, in other embodiments, abnormal data may first be removed, and the second average feature determined from the remaining results output by the second mapping layer.
In this embodiment, weighting the second hidden feature with the preset second average feature, and tuning their respective weights, keeps the value range of the second target feature reasonable, which improves both the quality and the stability of second face image generation.
Meanwhile, because the stylized model performs style migration on the first face image, the strength of that migration, i.e. the stylization strength, can be controlled by adjusting the weights of the second hidden feature and the second average feature. Accordingly, with different weight settings the target face style can comprise multiple target face sub-styles of different stylization strengths, yielding a stylization effect better matched to requirements.
It should be understood that the second synthesis module of the stylized model is configured to perform image generation based on the second target feature input, resulting in the second face image, and the attribute of the face region in the second face image is determined based on the second target feature.
The structure of the second synthesis module is not limited herein. In some embodiments, the second synthesis module comprises at least two second image processing layers connected in series, and the resolution of the result output by each second image processing layer increases from the upper layer to the lower layer. The number and specific structure of the second image processing layers are not limited herein.
Therefore, image generation by the second synthesis module of the stylized model based on the second target feature input may also be understood as inputting the second target feature into every second image processing layer of the second synthesis module. In other words, each second image processing layer performs image processing based on the output of the previous layer and the second target feature input, and passes its result to the next layer. The uppermost second image processing layer generates an image from the second target feature input alone, and the result output by the lowermost second image processing layer is the second face image.
Optionally, in some embodiments, the inputting random noise into the StyleGAN model and the stylized model respectively to obtain the first face image and the second face image specifically includes the following steps:
mixing the StyleGAN model and the stylized model to obtain a hybrid model; wherein the hybrid model comprises the StyleGAN model and the stylized model, and an output end of a first synthesis module of the StyleGAN model is connected with an input end of a second synthesis module of the stylized model;
and inputting the random noise into the StyleGAN model and the hybrid model for image generation to obtain a first face image and a second face image.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a hybrid model according to an embodiment of the present invention. As shown in fig. 4, the hybrid model includes a StyleGAN2 model and the stylized model, and an output terminal of a first synthesis module of the StyleGAN2 model is connected to an input terminal of a second synthesis module of the stylized model.
The output end of the first synthesis module of the StyleGAN2 model is connected with the input end of the second synthesis module of the stylized model, which means that the result output by the first synthesis module of the StyleGAN2 model is input to the second synthesis module of the stylized model.
As shown in FIG. 4, in one case the result output by the first synthesis module of the StyleGAN2 model is only input to the second synthesis module of the stylized model. In another case, the result output by the first synthesis module of the StyleGAN2 model is input to the second synthesis module of the stylized model and is also output as the first face image.
Referring to fig. 5, fig. 5 is a schematic flow chart of obtaining a hybrid model according to this embodiment. As shown in fig. 5, sample data with the target face style is first collected to form the target data set, and the StyleGAN2 model is then trained and fine-tuned on the target data set by the StyleGAN-ADA method, the FreezeD method and the early stopping method to obtain the stylized model. The stylized model and the StyleGAN2 model are then mixed to obtain the hybrid model.
In this embodiment, the StyleGAN model and the stylized model are mixed to obtain the hybrid model, and the random noise is input into the StyleGAN model and the hybrid model for image generation to obtain a first face image and a second face image. In this way, the first face image and the second face image can still be obtained for training the style migration model. Meanwhile, when only the second face image is needed, the random noise can be input into the hybrid model alone to obtain the second face image.
From the above, the method provided in this embodiment yields the StyleGAN2 model shown in fig. 2, the stylized model shown in fig. 3 and the hybrid model shown in fig. 4. When different face images need to be generated, different models can be used, improving the flexibility and convenience of image generation.
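A hypothetical sketch of the hybrid model follows, assuming the two models expose `mapping` and `synthesis` attributes and that the second synthesis module accepts the first face image as an extra input; all attribute names and signatures are assumptions, not the actual interfaces of this disclosure.
```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """Wires the first synthesis module's output into the second synthesis module."""
    def __init__(self, stylegan: nn.Module, stylized: nn.Module):
        super().__init__()
        self.mapping1, self.synthesis1 = stylegan.mapping, stylegan.synthesis
        self.mapping2, self.synthesis2 = stylized.mapping, stylized.synthesis

    def forward(self, z, w_avg1=None, w_avg2=None, psi: float = 0.7):
        w1 = self.mapping1(z)                          # first hidden feature
        if w_avg1 is not None:                         # weighting with first average feature
            w1 = w_avg1 + psi * (w1 - w_avg1)
        first_face = self.synthesis1(w1)               # realistic first face image
        w2 = self.mapping2(z)                          # second hidden feature (same noise)
        if w_avg2 is not None:                         # weighting with second average feature
            w2 = w_avg2 + psi * (w2 - w_avg2)
        second_face = self.synthesis2(w2, first_face)  # stylized second face image
        return first_face, second_face
```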
Optionally, in some embodiments, after the step 101, the method further comprises the steps of:
acquiring a third hidden feature of a sample real image, wherein the third hidden feature is used for representing the attribute of a face area in the sample real image;
weighting the third hidden feature and the preset second average feature to obtain a third target feature;
a first synthesis module of the StyleGAN model generates an image based on the third hidden feature input to obtain a first real image;
a second synthesis module of the stylized model performs image generation based on the third target feature input and the first real image input to obtain a second real image; and the second real image is a face image with the style of the target face after the style of the first real image is transferred.
It should be noted that the first face image may be considered a face image having a real face style, while the sample real image is a real face image collected from the Internet or obtained in other ways. The difference between them is that the person corresponding to the face region in the first face image is fictitious, whereas the person corresponding to the face region in the sample real image is real.
It should be noted that weighting the third hidden feature with the preset second average feature to obtain the third target feature may be understood as replacing the second hidden feature with the third hidden feature. Thus, in this embodiment the second hidden feature is not generated from the random noise but is determined from the sample real image.
The purpose of the stylized model is to migrate the target face style onto the first face image to obtain the second face image, and the second average feature controls the strength of face stylization. Replacing the second hidden feature with the third hidden feature therefore injects the face attributes of the sample real image, while weighting the third hidden feature with the preset second average feature still allows the stylization strength applied to the sample real image to be adjusted.
It should be noted that, the first synthesis module of the StyleGAN2 model performs image generation based on the third hidden feature input, and obtaining the first real image may be understood as replacing the first target feature with the third hidden feature.
Since the purpose of the StyleGAN2 model is to generate face images having a real face style, replacing the first target feature with the third hidden feature improves the similarity between the generated first real image and the sample real image and increases the realism of the first real image. Of course, in some embodiments the sample real image may directly take the place of the first real image, that is, the sample real image and the second real image may be used as one image group.
In this embodiment, the sample real image is collected and its third hidden feature is acquired. Replacing the second hidden feature and the first target feature with the third hidden feature improves the stability of the output first real image and second real image, and also broadens the sources of sample data for training the style migration model, making that sample data more stable and realistic and further improving the training effect of the style migration model.
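A hypothetical sketch of this real-image branch follows; the inversion step that produces the third hidden feature from the sample real image is not specified in this description, so it is abstracted as a caller-supplied `invert_to_latent` function, and the synthesis-module signatures follow the earlier sketches.
```python
import torch

@torch.no_grad()
def stylize_real_image(sample_real, invert_to_latent, synthesis1, synthesis2,
                       w_avg2, psi: float = 0.7):
    """Stylizes a collected real photograph via its inverted latent feature."""
    w3 = invert_to_latent(sample_real)           # third hidden feature of the photo
    first_real = synthesis1(w3)                  # realistic reconstruction (first real image)
    w3_weighted = w_avg2 + psi * (w3 - w_avg2)   # third target feature (stylization strength)
    second_real = synthesis2(w3_weighted, first_real)  # target-style second real image
    return first_real, second_real
```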
Optionally, in some embodiments, after the step 102, the method further comprises the steps of:
determining attributes of a background region in the sample real image and determining attributes of a background region in the second real image;
and replacing the attribute of the background area in the second real image by the attribute of the background area in the sample real image.
It should be understood that the specific method for determining the attributes of the background region in the real image of the sample is not limited herein. Likewise, the specific method for determining the attribute of the background region in the second real image is not limited herein.
For example, in some embodiments, determining the property of the background region in the sample real image may be understood as performing image segmentation on the sample real image to determine the background region of the sample real image. Determining the property of the background region in the second real image may be understood as performing image segmentation on the second real image to determine the background region of the second real image.
In this embodiment, determining the attribute of the background region in the sample real image and determining the attribute of the background region in the second real image are understood as determining the background region in the sample real image and determining the background region in the second real image. Thus, replacing the attribute of the background region in the second real image with the attribute of the background region in the sample real image may be understood as replacing the background in the second real image with the background in the sample real image.
In other embodiments, determining the attribute of the background region in the sample real image may be understood as performing image segmentation on the sample real image to determine an attribute parameter of the background region of the sample real image. Determining the attribute of the background region in the second real image may be understood as performing image segmentation on the second real image, and determining the attribute parameter of the background region of the second real image.
In this embodiment, determining the attribute of the background region in the sample real image and determining the attribute of the background region in the second real image are understood as determining the attribute parameter of the background region in the sample real image and determining the attribute parameter of the background region in the second real image. Thus, replacing the attribute of the background region in the second real image with the attribute of the background region in the sample real image may be understood as replacing the background parameter in the second real image with the background parameter in the sample real image.
During style migration of the sample real image, the background region of the sample real image may also undergo style migration, so that the background region in the second real image differs from the background region in the sample real image.
In this embodiment, therefore, the attribute of the background region in the sample real image and the attribute of the background region in the second real image are determined, and the former replaces the latter. This arrangement makes the background region in the second real image the same as that in the sample real image, ensuring that the images in an image group share the same background.
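A minimal sketch of the background replacement follows, assuming images as HxWx3 NumPy arrays and a caller-supplied `segment_background` function returning a boolean mask that is True on background pixels (no particular segmentation method is named in this description).
```python
import numpy as np

def replace_background(sample_real: np.ndarray, second_real: np.ndarray,
                       segment_background) -> np.ndarray:
    """Pastes the sample real image's background into the stylized second real image."""
    mask = segment_background(second_real)   # boolean HxW mask, True on background
    result = second_real.copy()
    result[mask] = sample_real[mask]         # copy original background pixels back
    return result
```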
It should be noted that, as shown in fig. 6, in some embodiments, after the second real image is obtained, the sample real image and the second real image may be further aligned by a piecewise method to further improve the quality of the second real image.
It should be noted that, in some embodiments, after the second real image is obtained, the clothing in the second real image may also be replaced by the clothing in the sample real image, so as to further improve the matching degree of the clothing in the second real image and the sample real image and the richness of the clothing in the second real image.
It should be noted that, in some embodiments, after obtaining the first facial image, the second facial image, the first real image, and/or the second real image, accessory data may be added to the first facial image, the second facial image, the first real image, and/or the second real image, so as to further enrich the type and quantity of sample data for training the style migration model. Wherein, the accessory data can be glasses data, jewelry data, mask data and the like.
It should be noted that, in some embodiments, after the first face image, the second face image, the first real image and/or the second real image are obtained, data cleaning may be performed on the first face image, the second face image, the first real image and/or the second real image to remove data with a poor effect, so as to improve quality of sample data finally used for training the style migration model.
Referring to fig. 7, fig. 7 is a structural diagram of an image generating apparatus 700 according to an embodiment of the present invention.
As shown in fig. 7, the present embodiment provides an image generating apparatus 700 including:
a processing module 701, configured to train and adjust model parameters of a pre-trained StyleGAN model by using a target data set having a target face style, so as to obtain a stylized model, where the stylized model is used to generate a face image having the target face style;
an input module 702, configured to input random noise into the StyleGAN model and the stylized model respectively, so that the StyleGAN model obtains a first face image based on the random noise input, the stylized model obtains a second face image based on the random noise input and the first face image, and the second face image is a face image with the target face style obtained after style migration of the first face image.
Optionally, the StyleGAN model derives the first face image based on the random noise input, including:
a first mapping layer of the StyleGAN model performs feature extraction based on the random noise input to obtain a first hidden feature; the first hidden feature is used for representing the attribute of a face area in the first face image;
weighting the first hidden feature and a preset first average feature to obtain a first target feature; the first average feature is determined based on a result output by the first mapping layer at each training of the StyleGAN model;
and a first synthesis module of the StyleGAN model generates an image based on the first target feature input to obtain the first face image.
Optionally, the weighting the first hidden feature and a preset first average feature to obtain a first target feature includes:
weighting the first hidden feature and a preset first average feature to obtain a first intermediate feature;
and adjusting parameters corresponding to the first intermediate features to obtain the first target features.
Optionally, the stylized model derives the second facial image based on the random noise input and the first facial image, including:
a second mapping layer of the stylized model performs feature extraction based on the random noise input to obtain a second hidden feature, wherein the second hidden feature is used for representing the attribute of a face region in the second face image;
weighting the second hidden feature and a preset second average feature to obtain a second target feature; the second average feature is determined based on a result output by the second mapping layer during each training of the stylized model;
and a second synthesis module of the stylized model performs image generation based on the second target feature input and the first face image input to obtain the second face image.
Optionally, the input module 702 includes:
the mixing unit is used for mixing the StyleGAN model and the stylized model to obtain a hybrid model; wherein the hybrid model comprises the StyleGAN model and the stylized model, and an output end of a first synthesis module of the StyleGAN model is connected with an input end of a second synthesis module of the stylized model;
and the image generation unit is used for inputting the random noise into the hybrid model for image generation to obtain a first face image and a second face image.
Optionally, the image generating apparatus 700 further comprises:
the acquisition module is used for acquiring third hidden features of a sample real image, wherein the third hidden features are used for representing attributes of a face area in the sample real image;
the weighting processing module is used for weighting the third hidden feature and the preset second average feature to obtain a third target feature;
a first image generation module, configured to perform image generation on the basis of the third hidden feature input by a first synthesis module of the StyleGAN model to obtain a first real image;
a second image generation module, configured to perform image generation on the basis of the third target feature input and the first real image input by a second synthesis module of the stylized model, to obtain a second real image; and the second real image is a face image with the style of the target face after the style of the first real image is transferred.
Optionally, the image generating apparatus 700 further comprises:
a determining module for determining an attribute of a background region in the sample real image and determining an attribute of a background region in the second real image;
a replacing module for replacing the attribute of the background area in the second real image with the attribute of the background area in the sample real image.
The image generating apparatus 700 provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described herein again to avoid repetition.
An embodiment of the present invention further provides an electronic device, as shown in fig. 8, which includes a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 complete mutual communication through the communication bus 804,
a memory 803 for storing programs;
the processor 801 is configured to implement the following steps when executing the program stored in the memory 803:
training a pre-trained StyleGAN model and adjusting model parameters by using a target data set with a target face style to obtain a stylized model, wherein the stylized model is used for generating a face image with the target face style;
inputting random noise into the StyleGAN model and the stylized model respectively, so that the StyleGAN model obtains a first face image based on the random noise input and the stylized model obtains a second face image based on the random noise input and the first face image, the second face image being a face image with the target face style obtained after style migration of the first face image.
Optionally, the StyleGAN model derives the first face image based on the random noise input, including:
a first mapping layer of the StyleGAN model performs feature extraction based on the random noise input to obtain a first hidden feature; the first hidden feature is used for representing the attribute of a face area in the first face image;
weighting the first hidden feature and a preset first average feature to obtain a first target feature; the first average feature is determined based on a result output by the first mapping layer at each training of the StyleGAN model;
and a first synthesis module of the StyleGAN model generates an image based on the first target feature input to obtain the first face image.
Optionally, the weighting the first hidden feature and a preset first average feature to obtain a first target feature includes:
weighting the first hidden feature and a preset first average feature to obtain a first intermediate feature;
and adjusting parameters corresponding to the first intermediate features to obtain the first target features.
Optionally, the stylized model derives the second facial image based on the random noise input and the first facial image, including:
a second mapping layer of the stylized model performs feature extraction based on the random noise input to obtain a second hidden feature, wherein the second hidden feature is used for representing the attribute of a face region in the second face image;
weighting the second hidden feature and a preset second average feature to obtain a second target feature; the second average feature is determined based on a result output by the second mapping layer during each training of the stylized model;
and a second synthesis module of the stylized model performs image generation based on the second target feature input and the first face image input to obtain the second face image.
Optionally, the inputting random noise into the StyleGAN model and the stylized model respectively to obtain a first face image and a second face image includes:
mixing the StyleGAN model and the stylized model to obtain a mixed model; wherein the hybrid model comprises the StyleGAN model and the stylized model, and an output end of a first synthesis module of the StyleGAN model is connected with an input end of a second synthesis module of the stylized model;
and inputting the random noise into the hybrid model for image generation to obtain the first face image and the second face image.
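In the hybrid model, the first synthesis module's output is routed into the second synthesis module, so a single noise vector produces both images in one forward pass. A sketch under the assumption that each model exposes `mapping` and `synthesis` stages and that the stylized synthesis accepts an extra image input; the attribute names and the `initial=` hook are hypothetical, and a real implementation would splice the two synthesis networks at a chosen resolution (cf. layer swapping in the StyleGAN literature).

```python
import torch

@torch.no_grad()
def hybrid_forward(base_model, stylized_model, z):
    """Single pass through the hybrid model: the output end of the first
    synthesis module feeds the input end of the second synthesis module."""
    w1 = base_model.mapping(z)
    first_face = base_model.synthesis(w1)                           # first face image
    w2 = stylized_model.mapping(z)
    second_face = stylized_model.synthesis(w2, initial=first_face)  # second face image
    return first_face, second_face
```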
Optionally, after the pre-trained StyleGAN model is trained and the model parameters are adjusted by using the target data set with the target face style to obtain the stylized model, the method further includes:
acquiring a third hidden feature of a sample real image, wherein the third hidden feature is used for representing the attribute of a face area in the sample real image;
weighting the third hidden feature and the preset second average feature to obtain a third target feature;
the first synthesis module of the StyleGAN model performs image generation based on the third hidden feature input to obtain a first real image;
the second synthesis module of the stylized model performs image generation based on the third target feature input and the first real image input to obtain a second real image, where the second real image is a face image with the target face style obtained after style migration is performed on the first real image.
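The embodiment does not state how the third hidden feature of the sample real image is acquired; in practice this is usually GAN inversion, either by optimizing a latent to reconstruct the image or by running a trained encoder. A sketch of the real-image branch under that assumption, where `invert` is a hypothetical inversion routine and the `synthesis` interfaces are the same assumed hooks as above:

```python
import torch

def stylize_real_image(base_model, stylized_model, sample_real, w_avg2, invert, psi=0.7):
    """Real-image branch: recover the third hidden feature of the sample
    real image, weight it with the second average feature, then run the
    first and second synthesis modules to obtain the paired real images."""
    w3 = invert(base_model, sample_real)          # third hidden feature (via inversion)
    w3_target = w_avg2 + psi * (w3 - w_avg2)      # third target feature
    first_real = base_model.synthesis(w3)         # first real image (source domain)
    second_real = stylized_model.synthesis(w3_target, initial=first_real)
    return first_real, second_real
```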
Optionally, after the first real image and the second real image are obtained, the method further includes:
determining attributes of a background region in the sample real image and determining attributes of a background region in the second real image;
and replacing the attributes of the background region in the second real image with the attributes of the background region in the sample real image.
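One plausible realization of this background replacement is mask-based compositing: segment the background of both images (for example with a face-parsing model) and paste the sample image's background pixels over the stylized result. The mask-based form below is an assumption; the embodiment only speaks of replacing background-region attributes.

```python
import torch

def replace_background(second_real, sample_real, background_mask):
    """Composite the sample real image's background over the stylized image.

    `background_mask` is assumed to be an (N, 1, H, W) tensor holding 1 for
    background pixels and 0 for face pixels, e.g. produced by a hypothetical
    face-parsing model; it broadcasts over the RGB channels.
    """
    return background_mask * sample_real + (1.0 - background_mask) * second_real
```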
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a random access memory (RAM), or may include a non-volatile memory, for example, at least one disk memory. Optionally, the memory may alternatively be at least one storage device located remotely from the foregoing processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; or may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a readable storage medium is further provided. The readable storage medium stores instructions that, when run on a processor, cause the processor to perform the image generation method described in any one of the above embodiments.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the image generation method of any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented by software, the implementation may take the form of a computer program product, in whole or in part. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (e.g., infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid-state disk (SSD)), or the like.
It is noted that, herein, relational terms such as first and second are used only to distinguish one entity or action from another, and do not necessarily require or imply any such actual relationship or order between these entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described relatively simply because it is substantially similar to the method embodiment; for relevant parts, reference may be made to the corresponding description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An image generation method, comprising:
training a pre-trained style generative adversarial network (StyleGAN) model and adjusting model parameters by using a target data set with a target face style to obtain a stylized model, wherein the stylized model is used for generating a face image with the target face style;
respectively inputting random noise into the StyleGAN model and the stylized model, so that the StyleGAN model obtains a first face image based on the random noise input and the stylized model obtains a second face image based on the random noise input and the first face image, wherein the second face image is a face image with the target face style obtained after style migration is performed on the first face image.
2. The method of claim 1, wherein the StyleGAN model obtaining the first face image based on the random noise input comprises:
a first mapping layer of the StyleGAN model performs feature extraction based on the random noise input to obtain a first hidden feature; the first hidden feature is used for representing an attribute of a face area in the first face image;
weighting the first hidden feature and a preset first average feature to obtain a first target feature; the first average feature is determined based on a result output by the first mapping layer during each training of the StyleGAN model;
and a first synthesis module of the StyleGAN model generates an image based on the first target feature input to obtain the first face image.
3. The method according to claim 2, wherein the weighting of the first hidden feature and the preset first average feature to obtain the first target feature comprises:
weighting the first hidden feature and a preset first average feature to obtain a first intermediate feature;
and adjusting parameters corresponding to the first intermediate features to obtain the first target features.
4. The method of claim 2, wherein the stylized model obtaining the second face image based on the random noise input and the first face image comprises:
a second mapping layer of the stylized model performs feature extraction based on the random noise input to obtain a second hidden feature, wherein the second hidden feature is used for representing the attribute of a face region in the second face image;
weighting the second hidden feature and a preset second average feature to obtain a second target feature; the second average feature is determined based on a result output by the second mapping layer during each training of the stylized model;
and a second synthesis module of the stylized model performs image generation based on the second target feature input and the first face image input to obtain the second face image.
5. The method according to claim 4, wherein the inputting of random noise into the StyleGAN model and the stylized model respectively to obtain the first face image and the second face image comprises:
mixing the StyleGAN model and the stylized model to obtain a hybrid model, wherein the hybrid model comprises the StyleGAN model and the stylized model, and an output end of the first synthesis module of the StyleGAN model is connected to an input end of the second synthesis module of the stylized model;
and inputting the random noise into the hybrid model for image generation to obtain the first face image and the second face image.
6. The method of claim 4, wherein after the pre-trained StyleGAN model is trained and the model parameters are adjusted by using the target data set with the target face style to obtain the stylized model, the method further comprises:
acquiring a third hidden feature of a sample real image, wherein the third hidden feature is used for representing the attribute of a face area in the sample real image;
weighting the third hidden feature and the preset second average feature to obtain a third target feature;
a first synthesis module of the StyleGAN model generates an image based on the third hidden feature input to obtain a first real image;
a second synthesis module of the stylized model performs image generation based on the third target feature input and the first real image input to obtain a second real image, wherein the second real image is a face image with the target face style obtained after style migration is performed on the first real image.
7. The method of claim 6, wherein after the first real image and the second real image are obtained, the method further comprises:
determining attributes of a background region in the sample real image and determining attributes of a background region in the second real image;
replacing the attributes of the background region in the second real image with the attributes of the background region in the sample real image.
8. An image generation apparatus, comprising:
the system comprises a processing module, a storage module and a processing module, wherein the processing module is used for training a pre-trained StyleGAN model and adjusting model parameters by using a target data set with a target face style to obtain a stylized model, and the stylized model is used for generating a face image with the target face style;
an input module, configured to input random noise to the StyleGAN model and the stylized model, respectively, so that the StyleGAN model obtains a first face image based on the random noise input, the stylized model obtains a second face image based on the random noise input and the first face image, and the second face image is a face image with the target face style obtained after style migration is performed on the first face image.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a program;
the processor is configured to implement the steps of the method according to any one of claims 1 to 7 when executing the program stored in the memory.
10. A readable storage medium, on which a program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202210735272.0A 2022-06-27 2022-06-27 Image generation method, image generation device and related equipment Pending CN115187450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210735272.0A CN115187450A (en) 2022-06-27 2022-06-27 Image generation method, image generation device and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210735272.0A CN115187450A (en) 2022-06-27 2022-06-27 Image generation method, image generation device and related equipment

Publications (1)

Publication Number Publication Date
CN115187450A true CN115187450A (en) 2022-10-14

Family

ID=83516134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210735272.0A Pending CN115187450A (en) 2022-06-27 2022-06-27 Image generation method, image generation device and related equipment

Country Status (1)

Country Link
CN (1) CN115187450A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953821A (en) * 2023-02-28 2023-04-11 北京红棉小冰科技有限公司 Virtual face image generation method and device and electronic equipment
CN115953821B (en) * 2023-02-28 2023-06-30 北京红棉小冰科技有限公司 Virtual face image generation method and device and electronic equipment
CN116862757A (en) * 2023-05-19 2023-10-10 上海任意门科技有限公司 Method, device, electronic equipment and medium for controlling face stylization degree

Similar Documents

Publication Publication Date Title
US10853987B2 (en) Generating cartoon images from photos
CN105144239B (en) Image processing apparatus, image processing method
CN115187450A (en) Image generation method, image generation device and related equipment
WO2020045236A1 (en) Augmentation device, augmentation method, and augmentation program
JP6994588B2 (en) Face feature extraction model training method, face feature extraction method, equipment, equipment and storage medium
US9449256B1 (en) Providing image candidates based on diverse adjustments to an image
CN107423551B (en) Imaging method and imaging system for performing medical examinations
CN109598307B (en) Data screening method and device, server and storage medium
CN108960269B (en) Feature acquisition method and device for data set and computing equipment
CN110264407B (en) Image super-resolution model training and reconstruction method, device, equipment and storage medium
CN109102885B (en) Automatic cataract grading method based on combination of convolutional neural network and random forest
CN109597858B (en) Merchant classification method and device and merchant recommendation method and device
US11321839B2 (en) Interactive training of a machine learning model for tissue segmentation
CN112132208B (en) Image conversion model generation method and device, electronic equipment and storage medium
CN107609487B (en) User head portrait generation method and device
CN111652257A (en) Sample data cleaning method and system
CN113821296B (en) Visual interface generation method, electronic equipment and storage medium
WO2020135054A1 (en) Method, device and apparatus for video recommendation and storage medium
Pang et al. Convolutional neural network-based sub-pixel line-edged angle detection with applications in measurement
CN113762005B (en) Feature selection model training and object classification methods, devices, equipment and media
CN110880018B (en) Convolutional neural network target classification method
CN110706804B (en) Application method of mixed expert system in lung adenocarcinoma classification
CN112329586A (en) Client return visit method and device based on emotion recognition and computer equipment
WO2018137226A1 (en) Fingerprint extraction method and device
Gao et al. A robust improved network for facial expression recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination