CN112102194A - Face restoration model training method and device


Info

Publication number
CN112102194A
Authority
CN
China
Prior art keywords
image, model, face image, face, feature vector
Legal status
Pending
Application number
CN202010969607.6A
Other languages
Chinese (zh)
Inventor
李果 (Li Guo)
熊宝玉 (Xiong Baoyu)
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202010969607.6A
Publication of CN112102194A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting


Abstract

The present application provides a face restoration model training method and device. A first face image is obtained by decoding an input first feature vector with an image decoding model; a second face image is obtained by reducing the image quality of the first face image; an initial image coding model is called to encode the second face image and obtain a second feature vector corresponding to the second face image; and the initial image coding model is trained based on the difference information of the second feature vector relative to the first feature vector to obtain an image coding model. In this way, a face image of lower image quality can be fed into the image coding model, which provides a feature vector to the image decoding model, and the image decoding model outputs a face image of improved image quality. An image decoding model that already performs well on face generation is thus applied to the face restoration model, which improves the face restoration effect and at the same time broadens the application range of the image decoding model.

Description

Face restoration model training method and device
Technical Field
The present application belongs to the technical field of models, and in particular relates to a face restoration model training method and device.
Background
In the related art, an image decoding model can take a feature vector as input and output a high-definition face image. For example, the StyleGAN network model, a high-definition face generation model released by NVIDIA, can serve as an image decoding model: with a random feature vector as input, it can output a high-definition face image of up to 1024 × 1024. Given how effective image decoding models have become at face generation, how to apply them to face restoration, a field closely related to face generation, is a problem that urgently needs to be solved.
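As an illustration of the decoder contract described above, the following is a minimal PyTorch sketch in which a random feature vector goes in and an image comes out. The tiny ToyImageDecoder is only a stand-in for a pretrained StyleGAN-like generator; its architecture, latent size and output resolution are illustrative assumptions, not the model discussed in this application.
```python
import torch
import torch.nn as nn

class ToyImageDecoder(nn.Module):
    """Stand-in for a StyleGAN-like image decoding model."""

    def __init__(self, latent_dim: int = 512, image_size: int = 64):
        super().__init__()
        self.image_size = image_size
        # Map the latent feature vector to a flat RGB image, then reshape.
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 3 * image_size * image_size),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        img = self.net(z)
        return img.view(-1, 3, self.image_size, self.image_size)

if __name__ == "__main__":
    decoder = ToyImageDecoder()
    z = torch.randn(1, 512)   # random feature vector as input
    face = decoder(z)         # decoded "face" image
    print(face.shape)         # torch.Size([1, 3, 64, 64])
```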
Disclosure of Invention
In view of the above, an object of the present application is to provide a face restoration model training method and device, which improve the face restoration effect and broaden the application range of the image decoding model.
In one aspect, the present application provides a face restoration model training method, where the face restoration model includes an image coding model and an image decoding model. The method includes:
acquiring a first face image, wherein the first face image is obtained by decoding an input first feature vector by the image decoding model;
obtaining a second face image by reducing the image quality of the first face image;
calling an initial image coding model to code the second face image to obtain a second feature vector corresponding to the second face image;
and training the initial image coding model based on the difference information of the second feature vector relative to the first feature vector to obtain the image coding model.
Optionally, the image decoding model is obtained by training an initial image decoding model in advance with a feature vector generated from a random number.
Optionally, the process of training the initial image decoding model in advance with a feature vector generated from a random number includes:
generating a third feature vector according to the first random number;
calling the initial image decoding model to decode the third feature vector to obtain a third face image;
calling an initial discrimination model, and discriminating the image quality of the third face image and a preset face image to obtain a first discrimination result aiming at the third face image and a second discrimination result aiming at the preset face image, wherein the preset face image and the first face image have the same first image quality;
if it is determined from the first discrimination result that the third face image does not have the first image quality, training the initial image decoding model based on the first discrimination result;
and if it is determined from the second discrimination result that the preset face image does not have the first image quality, training the initial discrimination model based on the second discrimination result.
Optionally, the method further includes:
after the training of the initial image coding model is completed, acquiring a fourth face image and a fifth face image corresponding to the fourth face image, wherein the fourth face image and the second face image have the same second image quality, and the fifth face image has the first image quality;
calling the image coding model to code the fourth face image to obtain a fourth feature vector;
calling the image decoding model to decode the fourth feature vector to obtain a sixth face image;
calling a discrimination model obtained by training the initial discrimination model, and performing image quality discrimination on the sixth face image and a preset face image to obtain a third discrimination result for the sixth face image and the preset face image;
and adjusting at least model parameters of the image decoding model and the discrimination model based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image.
Optionally, the invoking the image coding model to code the fourth face image to obtain a fourth feature vector includes: under the condition that the model parameters of the image coding model are kept unchanged, calling the image coding model to code the fourth face image to obtain a fourth feature vector;
the adjusting at least the model parameters of the image decoding model and the discrimination model based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image includes:
if it is determined from the third discrimination result that the sixth face image does not have the first image quality, adjusting model parameters of the image decoding model with a first adjustment step size based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image, wherein the first adjustment step size is smaller than the adjustment step size used when training the initial image decoding model;
and if it is determined from the third discrimination result that the preset face image does not have the first image quality, adjusting model parameters of the discrimination model with a second adjustment step size, wherein the second adjustment step size is smaller than the adjustment step size used when training the initial discrimination model.
Optionally, the method further includes:
after the adjustment of the image decoding model and the discrimination model is completed, acquiring a seventh face image having the second image quality and an eighth face image that corresponds to the seventh face image and has the first image quality;
calling the image coding model to code the seventh face image to obtain a fifth feature vector;
calling the adjusted image decoding model to decode the fifth feature vector to obtain a ninth face image;
calling the adjusted discrimination model, and performing image quality discrimination on the ninth face image and a preset face image to obtain a fourth discrimination result for the ninth face image and the preset face image;
and based on the fourth discrimination result and the difference information of the ninth face image relative to the eighth face image, readjusting the model parameters of the adjusted image decoding model and discrimination model, and adjusting the model parameters of the image coding model.
In another aspect, the present application further provides a face restoration model training device, where the face restoration model includes an image coding model and an image decoding model. The device includes:
the acquisition unit is used for acquiring a first face image, wherein the first face image is obtained by decoding an input first feature vector by the image decoding model;
the image processing unit is used for obtaining a second face image by reducing the image quality of the first face image;
the encoding unit is used for calling an initial image encoding model to encode the second face image to obtain a second feature vector corresponding to the second face image;
and the training unit is used for training the initial image coding model based on the difference information of the second feature vector relative to the first feature vector to obtain the image coding model.
Optionally, the image decoding model is obtained by training an initial image decoding model in advance with a feature vector generated from a random number;
the device further comprises:
a generating unit configured to generate a third feature vector from the first random number;
the decoding unit is used for calling the initial image decoding model to decode the third feature vector to obtain a third face image;
the discrimination unit is used for calling an initial discrimination model and performing image quality discrimination on the third face image and a preset face image to obtain a first discrimination result for the third face image and a second discrimination result for the preset face image, wherein the preset face image and the first face image have the same first image quality;
the training unit is further configured to train the initial image decoding model based on the first discrimination result if it is determined from the first discrimination result that the third face image does not have the first image quality, and to train the initial discrimination model based on the second discrimination result if it is determined from the second discrimination result that the preset face image does not have the first image quality.
In another aspect, the present application further provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the face restoration model training method described above.
In another aspect, the present application further provides a storage medium storing instructions which, when executed, implement the face restoration model training method described above.
With the face restoration model training method and device of the present application, a first face image is acquired, where the first face image is obtained by the image decoding model decoding an input first feature vector; a second face image is obtained by reducing the image quality of the first face image; an initial image coding model is called to encode the second face image to obtain a second feature vector corresponding to the second face image; and the initial image coding model is trained based on the difference information of the second feature vector relative to the first feature vector to obtain the image coding model.
Because the second face image is obtained by reducing the image quality of the first face image, the image quality of the second face image input to the image coding model is lower than that of the first face image output by the image decoding model. Therefore, when the face restoration model is used for face restoration, a face image of lower image quality can be fed into the image coding model, which provides a feature vector to the image decoding model, and the image decoding model outputs a face image of higher image quality, thereby restoring the face image of lower image quality.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a face restoration model training method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of training an initial image coding model according to an embodiment of the present application;
Fig. 3 is a schematic diagram of obtaining an image decoding model according to an embodiment of the present application;
Fig. 4 is a flowchart of obtaining an image decoding model according to an embodiment of the present application;
Fig. 5 is a flowchart of another face restoration model training method provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of adjusting an image decoding model and a discrimination model according to an embodiment of the present application;
Fig. 7 is a schematic diagram of simultaneously adjusting an image coding model, an image decoding model and a discrimination model according to an embodiment of the present application;
Fig. 8 is a flowchart of a further face restoration model training method according to an embodiment of the present application;
Fig. 9 is a schematic diagram illustrating the restoration effect of a trained face restoration model according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a face restoration model training device according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of another face restoration model training device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 shows a flowchart of a face restoration model training method provided in an embodiment of the present application. The face restoration model includes an image coding model and an image decoding model, so that a face image can be restored through the image coding model and the image decoding model. The face restoration model training method shown in Fig. 1 may include the following steps:
101: Acquire a first face image, where the first face image is obtained by the image decoding model decoding an input first feature vector.
The image decoding model has the function of generating a face image: with a feature vector as input, it decodes the feature vector to generate the face image corresponding to that feature vector. For example, the first feature vector is input into the image decoding model, and the image decoding model is called to decode the first feature vector to obtain the first face image corresponding to the first feature vector.
In this embodiment, the first feature vector input to the image decoding model may be a feature vector generated from random numbers. For example, the first feature vector is, but is not limited to, a 512 × 512 feature vector composed of 512 × 512 random numbers. A preset number of the random numbers composing the first feature vector may share the same value, for example 0; this embodiment does not limit the preset number.
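The following is a hedged sketch of generating such a feature vector from random numbers with a preset number of entries fixed to 0; the 512 × 512 shape follows the example above, while the zeroed count and the use of PyTorch are assumptions for illustration only.
```python
import torch

def make_random_feature_vector(shape=(512, 512), num_zeros: int = 16) -> torch.Tensor:
    """Build a feature vector of random numbers with `num_zeros` entries set to 0."""
    vec = torch.randn(shape)                       # random entries
    flat = vec.view(-1)
    zero_idx = torch.randperm(flat.numel())[:num_zeros]
    flat[zero_idx] = 0.0                           # preset number of entries share the value 0
    return vec

first_feature_vector = make_random_feature_vector()
```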
102: and obtaining a second face image by reducing the image quality of the first face image, so that the image quality of the second face image is lower than that of the first face image.
If the first face image has the first image quality, the second face image has the second image quality, for example, the resolution corresponding to the first image quality is greater than the resolution corresponding to the second image quality and/or the noise data corresponding to the first image quality is less than the noise data corresponding to the second image quality, if the second face image is at least one of a low-definition face image and a face image with noise, the first face image is a high-definition face image, and the true value of the first face image is greater than a preset true threshold, that is, the first face image may be a true high-definition face image to be distinguished from a fictional high-definition face image, so that the first face image decoded by the image decoding model achieves an effect of false reality.
The manner in which the image quality of the first face image is reduced may be, but is not limited to, performing degradation processing on the first face image, for example, performing at least one of noise loading, blurring processing, compression processing, and sharpness reduction processing on the first face image, so as to reduce the image quality of the first face image by at least one of these manners, and the detailed procedures of these manners are not further described in this embodiment.
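A minimal sketch of one possible degradation function is given below; the concrete operations and parameters are illustrative choices rather than the required implementation (compression is approximated by a downscale/upscale round trip).
```python
import torch
import torch.nn.functional as F

def degrade(img: torch.Tensor, noise_std: float = 0.05, scale: float = 0.25) -> torch.Tensor:
    """img: (N, 3, H, W) tensor assumed to lie in [-1, 1]; returns a lower-quality version."""
    h, w = img.shape[-2:]
    out = F.avg_pool2d(img, kernel_size=3, stride=1, padding=1)             # mild blur
    out = F.interpolate(out, scale_factor=scale, mode="bilinear",
                        align_corners=False)                                # lose detail
    out = F.interpolate(out, size=(h, w), mode="bilinear",
                        align_corners=False)                                # back to original size
    out = out + noise_std * torch.randn_like(out)                           # add noise
    return out.clamp(-1.0, 1.0)
```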
103: and calling the initial image coding model to code the second face image to obtain a second feature vector corresponding to the second face image.
The initial image coding model has a function of generating a feature vector, and the initial image coding model can code an input face image by taking the face image as input to generate the feature vector corresponding to the face image. For example, the second face image is input into the initial image coding model, and the initial image coding model is called to code the second face image, so as to obtain a second feature vector corresponding to the second face image.
104: and training the initial image coding model based on the difference information of the second feature vector relative to the first feature vector to obtain the image coding model.
The purpose of training the initial image coding model based on the difference information of the second feature vector relative to the first feature vector is to train the initial image coding model to have the capability of coding the first feature vector corresponding to the first face image, so that when one face image with the second image quality is input, the image coding model obtained by training the initial image coding model can output the feature vector corresponding to the face image with the first image quality, and further, the image decoding model can restore the face image with the first image quality by using the feature vector output by the image coding model.
In this embodiment, based on the difference information of the second feature vector relative to the first feature vector, the training of the initial image coding model is mainly as follows: the model parameters of the initial image coding model are adjusted based on the difference information of the second feature vector relative to the first feature vector, the process of adjusting the model parameters of the initial image coding model can be adjusted in a preset adjustment step mode, the model parameters can be randomly adjusted according to the difference information of the second feature vector relative to the first feature vector, and the adjustment step of the model parameters does not need to be limited.
The difference information of the second feature vector relative to the first feature vector is obtained by comparing the second feature vector with the first feature vector. For example, if the similarity between the second feature vector and the first feature vector is calculated, that similarity is used as the difference information.
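A small sketch of computing such difference information follows; cosine similarity and mean squared error are shown as assumed example metrics (the vectors are assumed to carry a leading batch dimension), not the mandated ones.
```python
import torch
import torch.nn.functional as F

def feature_difference_cosine(second_vec: torch.Tensor, first_vec: torch.Tensor) -> torch.Tensor:
    """Difference information based on cosine similarity; 0 when the vectors match."""
    cos = F.cosine_similarity(second_vec.flatten(1), first_vec.flatten(1), dim=1)
    return (1.0 - cos).mean()

def feature_difference_mse(second_vec: torch.Tensor, first_vec: torch.Tensor) -> torch.Tensor:
    """Alternative difference information based on mean squared error."""
    return F.mse_loss(second_vec, first_vec)
```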
If it is determined, based on the difference information of the second feature vector relative to the first feature vector, that the initial image coding model does not yet meet a preset training condition, the model parameters of the initial image coding model are adjusted. The preset training condition indicates that the second feature vector matches the first feature vector and that the number of matches is greater than a preset number.
During the adjustment of the initial image coding model, the second face image may be kept unchanged: the adjusted initial image coding model is called to encode the second face image so as to output the second feature vector again, and whether the preset training condition is still unmet is then determined based on the difference information of the re-output second feature vector relative to the first feature vector. Alternatively, the second face image may be changed during the adjustment: the adjusted initial image coding model is called to encode the changed second face image so as to output the second feature vector again, and whether the preset training condition is still unmet is then determined in the same way. Ways to change the second face image include, but are not limited to, at least one of: changing the first face image corresponding to the image decoding model; and, while keeping the first face image unchanged, changing the processing used to reduce its image quality.
If it is determined, based on the difference information of the second feature vector relative to the first feature vector, that the initial image coding model meets the preset training condition, the second feature vector encoded by the initial image coding model matches the first feature vector, and training has given the initial image coding model the ability to output the feature vector corresponding to a face image with the first image quality; the trained initial image coding model can therefore be taken as the image coding model.
Based on the face restoration model training method shown in Fig. 1, the idea of obtaining the image coding model in the face restoration model is shown in Fig. 2. Given an image decoding model that can decode a feature vector and output the corresponding face image, the initial image coding model is trained with the help of the image decoding model: during training, the initial image coding model is connected after the image decoding model, that is, the output of the image decoding model serves as the input of the initial image coding model. With the model parameters of the image decoding model kept unchanged, the initial image coding model is trained using the difference information between the second feature vector it outputs and the first feature vector input to the image decoding model, so as to obtain the image coding model.
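The following condensed PyTorch sketch mirrors the training idea of Fig. 2: the decoder is frozen, its output is degraded and fed to the encoder, and the encoder is trained so that its output vector matches the decoder's input vector. All modules, sizes and hyperparameters are illustrative stand-ins rather than the actual models.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_size = 512, 64

# Stand-in for the pretrained image decoding model (parameters kept unchanged).
decoder = nn.Sequential(nn.Linear(latent_dim, 3 * image_size * image_size), nn.Tanh())
# Initial image coding model to be trained.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * image_size * image_size, latent_dim))

for p in decoder.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def degrade(img: torch.Tensor) -> torch.Tensor:
    """Simple stand-in degradation: lose detail via a downscale/upscale round trip."""
    small = F.interpolate(img, scale_factor=0.25, mode="bilinear", align_corners=False)
    return F.interpolate(small, size=img.shape[-2:], mode="bilinear", align_corners=False)

for step in range(100):
    first_vec = torch.randn(8, latent_dim)                                  # first feature vector
    first_img = decoder(first_vec).view(-1, 3, image_size, image_size)      # first face image
    second_img = degrade(first_img)                                         # second face image
    second_vec = encoder(second_img)                                        # second feature vector
    loss = F.mse_loss(second_vec, first_vec)                                # difference information
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```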
According to the above technical solution, a first face image is obtained after the image decoding model decodes an input first feature vector; a second face image is obtained by reducing the image quality of the first face image; an initial image coding model is called to encode the second face image to obtain a second feature vector corresponding to the second face image; and the initial image coding model is trained based on the difference information of the second feature vector relative to the first feature vector to obtain the image coding model.
Because the second face image is obtained by reducing the image quality of the first face image, the image quality of the second face image input to the image coding model is lower than that of the first face image output by the image decoding model. Therefore, when the face restoration model is used for face restoration, a face image of lower image quality can be fed into the image coding model, which provides a feature vector to the image decoding model, and the image decoding model outputs a face image of higher image quality, thereby restoring the face image of lower image quality.
In this embodiment, the image decoding model in the face restoration model may be an existing model capable of outputting high-definition face images, for example a StyleGAN network model. Of course, the image decoding model in the face restoration model may also differ from existing models of this kind, for example a custom initial image decoding model; in that case the image decoding model is obtained by training the initial image decoding model in advance with a feature vector generated from a random number.
Fig. 3 shows the idea of training the initial image decoding model in advance with a feature vector generated from a random number: a feature vector is generated from a random number, the initial image decoding model is called to decode the feature vector and generate the corresponding face image, and an initial discrimination model is called to discriminate at least the image quality of that face image, so that the initial image decoding model is trained through the adversarial pair formed by the initial image decoding model and the initial discrimination model. The corresponding flowchart is shown in Fig. 4 and may include the following steps:
201: a third feature vector is generated based on the first random number. The form of the third feature vector may be set by the initial image decoding model, if the feature vector input by the initial image decoding model is a 512 × 512 feature vector, a 512 × 512 feature vector needs to be generated according to the first random number as the third feature vector, and this embodiment does not limit the relationship between the random numbers in the third feature vector, for example, the first random number may be selected from 0 to 255, and the preset number of first random numbers in all the first random numbers constituting the first feature vector may be the same, for example, the value of the preset number of first random numbers is 0, and this embodiment does not limit the preset number.
202: and calling the initial image decoding model to decode the third feature vector to obtain a third face image.
203: and calling an initial discrimination model, and discriminating the image quality of the third face image and the preset face image to obtain a first discrimination result aiming at the third face image and a second discrimination result aiming at the preset face image, wherein the preset face image and the first face image have the same first image quality.
Since the preset face image and the first face image are both real high-definition face images, the initial discrimination model judges whether the third face image and the preset face image are real high-definition face images. With the help of this discrimination, the image decoding model obtained by training can output realistic high-definition face images, and the preset face images constrain the initial discrimination model so that its standard for real high-definition face images remains high.
The first discrimination result for the third face image indicates whether the third face image has the first image quality. For example, the initial discrimination model outputs a probability indicating whether the third face image has the first image quality: if the probability is greater than a preset threshold, the first discrimination result indicates that the third face image has the first image quality; if the probability is less than or equal to the preset threshold, the first discrimination result indicates that it does not. The preset threshold serves as the threshold for discriminating whether the third face image has the first image quality, and this embodiment does not limit its value. The second discrimination result for the preset face image likewise indicates whether the preset face image has the first image quality; its description is the same as that of the first discrimination result and is not repeated here.
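A trivial sketch of turning the probability output into a discrimination result via a preset threshold is shown below; the threshold value 0.5 is an assumption.
```python
import torch

def has_first_image_quality(prob: torch.Tensor, threshold: float = 0.5) -> bool:
    """prob: scalar probability output by the discrimination model."""
    return bool(prob.item() > threshold)
```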
204: and if the third face image is determined not to have the first image quality according to the first judgment result, training an initial image decoding model based on the first judgment result.
If the third face image is determined to have no first image quality according to the first judgment result, it is indicated that the initial image decoding model has no face image with the first image quality, and the initial image decoding model needs to be trained continuously, for example, model parameters of the initial image decoding model are continuously adjusted. The process of adjusting the model parameters of the initial image decoding model can be adjusted by presetting an adjustment step length, and certainly, the model parameters can be randomly adjusted according to the first judgment result, and the adjustment step length of the model parameters does not need to be limited.
In the process of adjusting the initial image decoding model, the third face image can be kept unchanged, the adjusted initial image decoding model is called to decode the third feature vector so as to output the third face image again, and then the image quality of the third face image is judged so as to obtain the first judgment result aiming at the third face image again. Or in the process of adjusting the initial image decoding model, changing the third feature vector, calling the adjusted initial image decoding model to decode the changed third feature vector so as to output the third face image again, and then judging the image quality of the third face image so as to obtain the first judgment result aiming at the third face image again. Ways to alter the third feature vector include, but are not limited to: and at least one of changing the first random number, maintaining the first random number unchanged, and changing the position of the first random number in the third feature vector.
Because the initial image decoding model and the initial discrimination model form a countermeasure model, the model parameters of the initial discrimination model can be adjusted in the process of adjusting the model parameters of the initial image decoding model, the adjustment of the model parameters of the initial discrimination model can be adjusted according to the first discrimination result, and is similar to the adjustment of the model parameters of the initial image decoding model, and the specific embodiment is not described again.
205: and if the preset face image is determined not to have the first image quality according to the second judgment result, training the initial judgment model based on the second judgment result.
If the preset face image is determined not to have the first image quality according to the second judgment result, the problem that the initial judgment model has the recognition error is described, which may be caused by the fact that the requirement of the initial judgment model for the first image quality is too high, so if the preset face image is determined not to have the first image quality according to the second judgment result, the initial judgment model is trained based on the second judgment result, and if the model parameters of the initial judgment model are continuously adjusted. The process of adjusting the model parameters of the initial discrimination model can be adjusted by presetting an adjustment step length, and certainly, the model parameters can be randomly adjusted according to the first discrimination result, and the adjustment step length of the adjusted model parameters does not need to be limited.
In the process of adjusting the initial discrimination model, the preset face image and/or the third face image can be maintained unchanged, and the adjusted initial discrimination model is called to discriminate the image quality of the third face image and the preset face image. Or, in the process of adjusting the initial discrimination model, the preset face image and/or the third face image are/is changed, the adjusted initial discrimination model is called to discriminate the image quality of the third face image and the preset face image, and the change of the third face image can be realized by, but not limited to, changing the third feature vector, and details are not described here.
Similarly, in the process of adjusting the model parameters of the initial discrimination model, the model parameters of the initial image decoding model may also be adjusted, and the adjustment of the model parameters of the initial image decoding model may be adjusted according to the second discrimination result, which is not described in this embodiment.
In this embodiment, the conditions for ending the training of the initial image decoding model and the initial discrimination model may be, but are not limited to, any one of the following: the model parameters of at least one of the initial image decoding model and the initial discrimination model have been adjusted a first preset number of times; the initial discrimination model has determined that the third face image and the preset face image have the first image quality a second preset number of times; or the user issues a training termination instruction based on the third face image. The first preset number indicates the total number of training iterations of the initial image decoding model and the initial discrimination model, and the second preset number indicates the total number of times the third face image and the preset face image are judged to have the first image quality. The first preset number is greater than the second preset number, but this embodiment does not limit their values.
According to the above technical solution, a third feature vector is generated from a first random number; the initial image decoding model is called to decode the third feature vector to obtain a third face image; and the initial discrimination model is called to perform image quality discrimination on the third face image and a preset face image to obtain a first discrimination result for the third face image and a second discrimination result for the preset face image, where the preset face image and the first face image have the same first image quality. If it is determined from the first discrimination result that the third face image does not have the first image quality, the initial image decoding model is trained based on the first discrimination result; if it is determined from the second discrimination result that the preset face image does not have the first image quality, the initial discrimination model is trained based on the second discrimination result. In this way the image decoding model in the face restoration model is obtained by training the initial image decoding model, so the use of the face restoration model is not limited to existing image decoding models.
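The following compact sketch illustrates the adversarial pre-training of steps 201-205, with the initial image decoding model acting as the generator and the initial discrimination model as the discriminator. Architectures, the placeholder source of preset high-definition faces, losses and learning rates are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_size = 512, 64
# Initial image decoding model (generator) and initial discrimination model (stand-ins).
decoder = nn.Sequential(nn.Linear(latent_dim, 3 * image_size * image_size), nn.Tanh())
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(3 * image_size * image_size, 1))

opt_dec = torch.optim.Adam(decoder.parameters(), lr=2e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def preset_hd_faces(batch: int) -> torch.Tensor:
    """Placeholder for real high-definition preset face images."""
    return torch.rand(batch, 3, image_size, image_size) * 2 - 1

for step in range(100):
    third_vec = torch.randn(8, latent_dim)                                  # third feature vector
    third_img = decoder(third_vec).view(-1, 3, image_size, image_size)      # third face image
    real_img = preset_hd_faces(8)

    # Discrimination results for the generated and the preset face images.
    fake_logit = discriminator(third_img.detach())
    real_logit = discriminator(real_img)
    disc_loss = (F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit))
                 + F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)))
    opt_disc.zero_grad()
    disc_loss.backward()
    opt_disc.step()

    # Train the decoder so that its output is judged to have the first image quality.
    gen_logit = discriminator(third_img)
    dec_loss = F.binary_cross_entropy_with_logits(gen_logit, torch.ones_like(gen_logit))
    opt_dec.zero_grad()
    dec_loss.backward()
    opt_dec.step()
```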
Although the initial image decoding model is trained to obtain the image decoding model, during that training the image quality of the third face image output by the initial image decoding model is judged only by the initial discrimination model, and the discrimination result may be fooled, so the image quality of the face images output by the image decoding model may still differ from the first image quality. For this reason, after the image decoding model is obtained, its model parameters can be further adjusted with reference to the difference information between face images. This process is shown in Fig. 5, which illustrates the adjustment of the model parameters of the image decoding model in the face restoration model and may include the following steps:
105: After the training of the initial image coding model is completed, acquire a fourth face image and a fifth face image corresponding to the fourth face image, where the fourth face image and the second face image have the same second image quality and the fifth face image has the first image quality.
After the training of the initial image coding model is completed and the image coding model is obtained, the image coding model can encode a face image to obtain the corresponding feature vector, and that feature vector can be decoded back into a face image by the image decoding model. Unlike the training of the initial image decoding model, the feature vector input to the image decoding model is now obtained by the image coding model encoding a face image. In this process, two face images of the same face are acquired: a low-quality face image of the face (the fourth face image) and a real high-definition face image of the same face (the fifth face image), where the fifth face image can serve as a reference image for the image decoding model.
106: and calling an image coding model to code the fourth face image so as to obtain a fourth feature vector.
107: and calling an image decoding model to decode the fourth feature vector to obtain a sixth face image.
108: and calling a discrimination model obtained through the initial discrimination model training, and discriminating the image quality of the sixth face image and the preset face image to obtain a third discrimination result aiming at the sixth face image and the preset face image.
109: and at least adjusting model parameters of the image decoding model and the discrimination model based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image.
For example, model parameters of the image coding model, the image decoding model and the discrimination model may be adjusted, and the adjustment step size used in the adjustment is smaller than the adjustment step size used in the training, for example, the adjustment step size used in the adjustment of the image coding model is smaller than the adjustment step size used in the training of the initial image coding model, and how much smaller the adjustment step size is, the embodiment is not limited.
In the process of adjusting the model parameters of the image decoding model, if the model parameters of the image coding model can be maintained unchanged, the image coding model is called to code the fourth face image, so as to obtain a fourth feature vector, where the fourth feature vector may be: under the condition that the model parameters of the image coding model are kept unchanged, the image coding model is called to code the fourth face image to obtain a fourth feature vector, so that the model parameters of the image decoding model and the discrimination model are adjusted, and compared with the method for adjusting the image coding model, the image decoding model and the discrimination model simultaneously, the number of model adjustment is reduced, so that the adjustment efficiency can be improved. In the process of adjusting the model parameters of the image decoding model and the discrimination model, the difference information of the sixth face image relative to the fifth face image is introduced, so that the difference information of the sixth face image output by the image decoding model and the fifth face image with the first image quality can be referred to for adjustment, and the image quality of the face image output by the image decoding model is closer to the first image quality.
Based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image, the process of adjusting the model parameters of the image decoding model and the discrimination model is as follows:
if it is determined from the third discrimination result that the sixth face image does not have the first image quality, the model parameters of the image decoding model are adjusted with a first adjustment step size based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image, where the first adjustment step size is smaller than the adjustment step size used when training the initial image decoding model; and if it is determined from the third discrimination result that the preset face image does not have the first image quality, the model parameters of the discrimination model are adjusted with a second adjustment step size, where the second adjustment step size is smaller than the adjustment step size used when training the initial discrimination model.
The differences from the training of the initial image decoding model and the initial discrimination model are that: the first adjustment step size used to adjust the model parameters of the image decoding model is smaller than the step size used when training the initial image decoding model, and the difference information of the sixth face image relative to the fifth face image is introduced; and the second adjustment step size used to adjust the model parameters of the discrimination model is smaller than the step size used when training the initial discrimination model.
The difference information of the sixth face image relative to the fifth face image is obtained by comparing the two images; for example, if the similarity between the sixth face image and the fifth face image is calculated, that similarity is used as the difference information.
If it is determined, based on the difference information of the sixth face image relative to the fifth face image, that the image decoding model does not yet meet a preset adjustment condition, the model parameters of the image decoding model are adjusted with the first adjustment step size, where the preset adjustment condition indicates that the sixth face image matches the fifth face image and that the number of matches is greater than a preset number. While the model parameters of the image decoding model are being adjusted with the first adjustment step size, the fifth face image may be kept unchanged or changed, which is not described further in this embodiment. If it is determined, based on the difference information of the sixth face image relative to the fifth face image, that the image decoding model meets the preset adjustment condition, the adjustment of the model parameters of the image decoding model with the first adjustment step size is finished. Since the discrimination model and the image decoding model are adjusted synchronously, the adjustment of the model parameters of the discrimination model ends at the same time as the adjustment of the model parameters of the image decoding model.
In addition to using the preset adjustment condition to end the adjustment of the model parameters of the image decoding model and the discrimination model, other conditions may be used as constraints, for example any one of the following: the model parameters of at least one of the image decoding model and the discrimination model have been adjusted a third preset number of times; the discrimination model has determined that the sixth face image and the preset face image have the first image quality a fourth preset number of times; or the user issues a training termination instruction based on the sixth face image. The third preset number indicates the total number of adjustment iterations of the image decoding model and the discrimination model, and the fourth preset number indicates the total number of times the sixth face image and the preset face image are judged to have the first image quality. The third preset number is greater than the fourth preset number, but this embodiment does not limit their values.
Fig. 6 shows a schematic diagram of the adjustment of the model parameters of the image decoding model and the discrimination model, which illustrates the following idea: the image coding model is placed in front of the image decoding model with its model parameters kept unchanged, and the output of the image coding model serves as the input of the image decoding model; the fourth feature vector output by the image coding model is decoded to obtain a sixth face image; the discrimination model then performs image quality discrimination on the sixth face image and the preset face image to determine whether they have the first image quality; and the model parameters of the image decoding model and the discrimination model are adjusted based on the third discrimination result of the discrimination model and the difference information of the sixth face image relative to the fifth face image.
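A hedged sketch of this adjustment stage follows: the encoder is frozen, the decoder and the discrimination model are fine-tuned with smaller step sizes, and the decoder loss combines the discrimination result with the difference between the sixth and fifth face images. Modules, data, loss weights and learning rates are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_size = 512, 64
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * image_size * image_size, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 3 * image_size * image_size), nn.Tanh())
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(3 * image_size * image_size, 1))

for p in encoder.parameters():          # the image coding model stays unchanged at this stage
    p.requires_grad_(False)

# First/second adjustment step sizes, smaller than the ones used for the initial training.
opt_dec = torch.optim.Adam(decoder.parameters(), lr=2e-5)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=2e-5)

def paired_batch(batch: int):
    """Placeholder pairs of (fourth, fifth) face images of the same face."""
    fifth = torch.rand(batch, 3, image_size, image_size) * 2 - 1            # high-quality reference
    small = F.interpolate(fifth, scale_factor=0.25, mode="bilinear", align_corners=False)
    fourth = F.interpolate(small, size=(image_size, image_size),
                           mode="bilinear", align_corners=False)            # degraded input
    return fourth, fifth

for step in range(100):
    fourth_img, fifth_img = paired_batch(8)
    fourth_vec = encoder(fourth_img)                                        # fourth feature vector
    sixth_img = decoder(fourth_vec).view(-1, 3, image_size, image_size)     # sixth face image

    # Discrimination model update on restored vs. high-quality face images.
    fake_logit = discriminator(sixth_img.detach())
    real_logit = discriminator(fifth_img)
    disc_loss = (F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit))
                 + F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)))
    opt_disc.zero_grad()
    disc_loss.backward()
    opt_disc.step()

    # Decoder update: discrimination result plus image difference information.
    gen_logit = discriminator(sixth_img)
    dec_loss = (F.binary_cross_entropy_with_logits(gen_logit, torch.ones_like(gen_logit))
                + F.mse_loss(sixth_img, fifth_img))
    opt_dec.zero_grad()
    dec_loss.backward()
    opt_dec.step()
```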
After the adjustment of the image decoding model and the discrimination model is completed, the model parameters of the image coding model, the image decoding model and the discrimination model can all be adjusted, so that the three models are adjusted jointly and the accuracy of the face restoration model is improved. The schematic diagram of this adjustment is shown in Fig. 7, the corresponding flowchart is shown in Fig. 8, and it may include the following steps:
110: and after the adjustment of the image decoding model and the discrimination model is finished, acquiring a seventh face image with second image quality and an eighth face image which corresponds to the seventh face image and has first image quality.
111: and calling an image coding model to code the seventh face image so as to obtain a fifth feature vector.
112: and calling the adjusted image decoding model to decode the fifth feature vector to obtain a ninth face image.
113: and calling the adjusted discrimination model, and discriminating the image quality of the ninth face image and the preset face image to obtain a fourth discrimination result aiming at the ninth face image and the preset face image.
114: and based on the fourth discrimination result and the difference information of the ninth face image relative to the eighth face image, readjusting the model parameters of the image decoding model and the discrimination model after adjustment and adjusting the model parameters of the image coding model.
The differences from the adjustment of the model parameters of the image decoding model and the discrimination model described above are that: the adjustment step size used when readjusting the already adjusted model parameters of the image decoding model and the discrimination model is smaller than the step size used when those parameters were first adjusted, and the adjustment step size used when adjusting the model parameters of the image coding model is smaller than the step size used when training the initial image coding model. For the other processes, reference can be made to the description of adjusting the model parameters of the image decoding model and the discrimination model, which is not repeated here.
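Only the optimizer setup of this joint stage is sketched below, since the loop mirrors the previous stage with the ninth/eighth face images in place of the sixth/fifth; the point illustrated is that all three models are now updated and the step sizes are smaller again. The learning-rate values are assumptions.
```python
import torch
import torch.nn as nn

latent_dim, image_size = 512, 64
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * image_size * image_size, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 3 * image_size * image_size), nn.Tanh())
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(3 * image_size * image_size, 1))

for p in encoder.parameters():          # the image coding model is now fine-tuned as well
    p.requires_grad_(True)

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-5)   # smaller than its training step size
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-5)   # smaller than the previous adjustment
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-5)
```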
By adjusting the model parameters of the image decoding model and the image coding model in this way, the face restoration model can restore a face image with the second image quality into a face image with the first image quality, thereby restoring low-quality face images. Fig. 9 shows the restoration effect of the face restoration model provided by this embodiment, which achieves a good restoration of a low-definition face image.
Corresponding to the foregoing method embodiment, an embodiment of the present application further provides a face restoration model training device, where the face restoration model includes an image coding model and an image decoding model. An optional structure of the face restoration model training device is shown in Fig. 10 and may include: an acquisition unit 100, an image processing unit 200, an encoding unit 300, and a training unit 400.
The acquisition unit 100 is configured to acquire a first face image, where the first face image is obtained by the image decoding model decoding an input first feature vector.
The image decoding model has the function of generating a face image: with a feature vector as input, it decodes the feature vector to generate the face image corresponding to that feature vector. For example, the first feature vector is input into the image decoding model, and the image decoding model is called to decode the first feature vector to obtain the first face image corresponding to the first feature vector.
In this embodiment, the first feature vector input to the image decoding model may be a feature vector generated from random numbers. For example, the first feature vector is, but is not limited to, a 512 × 512 feature vector composed of 512 × 512 random numbers. A preset number of the random numbers composing the first feature vector may share the same value, for example 0; this embodiment does not limit the preset number.
The image processing unit 200 is configured to obtain a second face image by reducing the image quality of the first face image, so that the image quality of the second face image is lower than that of the first face image.
The first face image has a first image quality and the second face image has a second image quality. For example, the resolution corresponding to the first image quality is greater than that corresponding to the second image quality, and/or the noise corresponding to the first image quality is less than that corresponding to the second image quality. If the second face image is a low-definition face image and/or a face image with noise, the first face image is a high-definition face image whose realness exceeds a preset realness threshold; that is, the first face image can pass as a real high-definition face image rather than an obviously synthetic one, so that the face image decoded by the image decoding model looks convincingly real.
The image quality of the first face image may be reduced by, but is not limited to, degradation processing, for example at least one of adding noise, blurring, compression, and sharpness reduction; the detailed procedures of these operations are not further described in this embodiment.
The encoding unit 300 is configured to call the initial image coding model to encode the second face image to obtain a second feature vector corresponding to the second face image. The initial image coding model has the function of generating a feature vector: with a face image as input, it encodes the face image to generate the feature vector corresponding to that face image. For example, the second face image is input into the initial image coding model, and the initial image coding model is called to encode the second face image to obtain the second feature vector corresponding to the second face image.
The training unit 400 is configured to train the initial image coding model based on the difference information of the second feature vector relative to the first feature vector to obtain the image coding model.
The purpose of training the initial image coding model based on this difference information is to give it the ability to encode the first feature vector corresponding to the first face image. In this way, when a face image with the second image quality is input, the image coding model obtained by training can output the feature vector that corresponds to a face image with the first image quality, and the image decoding model can then use the feature vector output by the image coding model to restore a face image with the first image quality.
According to the above technical solution, a first face image is obtained after the image decoding model decodes an input first feature vector; a second face image is obtained by reducing the image quality of the first face image; an initial image coding model is called to encode the second face image to obtain a second feature vector corresponding to the second face image; and the initial image coding model is trained based on the difference information of the second feature vector relative to the first feature vector to obtain the image coding model.
Because the second face image is obtained by reducing the image quality of the first face image, the image quality of the second face image input to the image coding model is lower than that of the first face image output by the image decoding model. Therefore, when the face restoration model is used for face restoration, a face image of lower image quality can be fed into the image coding model, which provides a feature vector to the image decoding model, and the image decoding model outputs a face image of higher image quality, thereby restoring the face image of lower image quality.
In this embodiment, the image decoding model is obtained by training the initial image decoding model in advance with feature vectors generated from random numbers. Another optional structure of the corresponding face restoration model training device is shown in fig. 11, and may further include: a generation unit 500, a decoding unit 600, and a discrimination unit 700.
The generating unit 500 is configured to generate a third feature vector from the first random number. The form of the third feature vector is determined by the initial image decoding model: if the model expects a 512 × 512 feature vector as input, a 512 × 512 feature vector is generated from the first random numbers as the third feature vector. This embodiment does not limit the relationship between the random numbers in the third feature vector. For example, each first random number may be selected from 0 to 255, and a preset number of the first random numbers constituting the third feature vector may take the same value, for example 0; the preset number itself is not limited in this embodiment.
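As a hypothetical illustration of the example above, a 512 × 512 third feature vector with values taken from 0 to 255 and a preset number of entries set to the same value 0 could be built as follows; which entries are zeroed, and the preset count, are assumptions.

```python
# Illustrative latent construction (zeroed positions and count are assumptions).
import torch

def make_third_feature_vector(preset_zero_count: int = 16) -> torch.Tensor:
    latent = torch.randint(0, 256, (512, 512)).float()  # first random numbers in [0, 255]
    latent.view(-1)[:preset_zero_count] = 0.0           # preset number of equal (zero) entries
    return latent
```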
And the decoding unit 600 is configured to invoke the initial image decoding model to decode the third feature vector, so as to obtain a third face image.
The discrimination unit 700 is configured to invoke an initial discrimination model to discriminate the image quality of the third face image and a preset face image, so as to obtain a first discrimination result for the third face image and a second discrimination result for the preset face image, where the preset face image and the first face image have the same first image quality.
Here, both the preset face image and the first face image are real high-definition face images. The initial discrimination model judges whether the third face image and the preset face image are real high-definition face images, so that, guided by this discrimination, the image decoding model obtained by training can output real high-definition face images. Discriminating the preset face image also constrains the initial discrimination model itself, keeping its standard for what counts as a real high-definition face image high.
The first discrimination result for the third face image indicates whether the third face image has the first image quality. For example, the initial discrimination model outputs a probability that the third face image has the first image quality: if the probability is greater than a preset threshold, the first discrimination result indicates that the third face image has the first image quality; if the probability is less than or equal to the preset threshold, the first discrimination result indicates that it does not. The preset threshold serves as the boundary for deciding whether the third face image has the first image quality and is not limited in this embodiment. The second discrimination result for the preset face image likewise indicates whether the preset face image has the first image quality; its description is the same as that of the first discrimination result and is not repeated here.
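Written out directly, the decision rule described here is a simple comparison; the default threshold below is illustrative only, since the embodiment does not limit its value.

```python
# Threshold rule for a discrimination result (default threshold is an assumption).
def has_first_image_quality(probability: float, preset_threshold: float = 0.5) -> bool:
    # probability: the discrimination model's output for one face image
    return probability > preset_threshold
```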
The training unit 400 is further configured to train the initial image decoding model based on the first discrimination result if it is determined from the first discrimination result that the third face image does not have the first image quality, and to train the initial discrimination model based on the second discrimination result if it is determined from the second discrimination result that the preset face image does not have the first image quality. For the training process performed by the training unit 400 on the initial image decoding model and the initial discrimination model, refer to the method embodiment above; it is not repeated here.
According to the technical scheme, a third feature vector is generated from a first random number; the initial image decoding model is invoked to decode the third feature vector to obtain a third face image; and an initial discrimination model is invoked to discriminate the image quality of the third face image and a preset face image, yielding a first discrimination result for the third face image and a second discrimination result for the preset face image, where the preset face image and the first face image have the same first image quality. If it is determined from the first discrimination result that the third face image does not have the first image quality, the initial image decoding model is trained based on the first discrimination result; if it is determined from the second discrimination result that the preset face image does not have the first image quality, the initial discrimination model is trained based on the second discrimination result. In this way, the image decoding model in the face restoration model is obtained by training the initial image decoding model, so that use of the face restoration model is not limited to an existing, pre-built image decoding model.
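One pre-training step of the kind just summarized could look like the following sketch, written in the usual generative-adversarial form. The binary cross-entropy losses, the optimizers passed in, the 512 × 512 latent shape, and the assumption that the discrimination model outputs probabilities are illustrative choices; the embodiment itself only specifies that each model is trained from the discrimination result it fails.

```python
# Adversarial pre-training sketch for the initial image decoding model (decoder)
# and the initial discrimination model (discriminator). Losses and latent shape
# are assumptions; discriminator is assumed to output probabilities in [0, 1].
import torch
import torch.nn.functional as F

def pretrain_step(decoder, discriminator, opt_dec, opt_disc, preset_faces):
    batch = preset_faces.size(0)
    latent = torch.rand(batch, 512, 512) * 255.0          # third feature vectors

    # Discrimination model update: preset (real) faces should be judged as having
    # the first image quality, decoded faces should not.
    fake_faces = decoder(latent).detach()
    d_real = discriminator(preset_faces)                  # second discrimination result
    d_fake = discriminator(fake_faces)
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # Image decoding model update: its output should be judged as having the
    # first image quality.
    d_fake = discriminator(decoder(latent))               # first discrimination result
    g_loss = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_dec.zero_grad(); g_loss.backward(); opt_dec.step()
    return d_loss.item(), g_loss.item()
```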
Although the image quality of the third face image output by the initial image decoding model is checked by the initial discrimination model during training, the discrimination model may still be fooled, so the image quality of the face images output by the image decoding model may differ from the first image quality. Therefore, the face restoration model training device provided in this embodiment may further adjust at least the model parameters of the image decoding model, as follows:
the obtaining unit 100 is further configured to obtain a fourth face image and a fifth face image corresponding to the fourth face image after the training of the initial image coding model is completed, where the fourth face image and the second face image have the same second image quality, and the fifth face image has the first image quality.
The encoding unit 300 is further configured to invoke the image coding model to encode the fourth face image to obtain a fourth feature vector. That is, with the model parameters of the image coding model kept unchanged, the image coding model is invoked to encode the fourth face image, so as to obtain the fourth feature vector.
The decoding unit 600 is further configured to invoke an image decoding model to decode the fourth feature vector, so as to obtain a sixth face image.
The discrimination unit 700 is further configured to invoke the discrimination model obtained by training the initial discrimination model, and to discriminate the image quality of the sixth face image and the preset face image to obtain a third discrimination result for the sixth face image and the preset face image.
The training unit 400 is further configured to adjust at least model parameters of the image decoding model and the discrimination model based on the third discrimination result and difference information of the sixth face image relative to the fifth face image. Based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image, the process of adjusting the model parameters of the image decoding model and the discrimination model is as follows:
if it is determined from the third discrimination result that the sixth face image does not have the first image quality, the model parameters of the image decoding model are adjusted with a first adjustment step length based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image, where the first adjustment step length is smaller than the adjustment step length used when training the initial image decoding model; and if it is determined from the third discrimination result that the preset face image does not have the first image quality, the model parameters of the discrimination model are adjusted with a second adjustment step length, where the second adjustment step length is smaller than the adjustment step length used when training the initial discrimination model.
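A sketch of this adjustment stage is given below. It assumes that the adjustment step length corresponds to the optimizer learning rate, that the difference information is an L1 pixel loss, and that opt_dec and opt_disc were constructed with learning rates smaller than those used during pre-training; none of these choices is fixed by this embodiment.

```python
# Fine-tuning sketch: image coding model frozen; decoder and discriminator
# adjusted with small (assumed) learning rates standing in for the first and
# second adjustment step lengths.
import torch
import torch.nn.functional as F

def finetune_step(encoder, decoder, discriminator,
                  fourth_faces, fifth_faces, preset_faces,
                  opt_dec, opt_disc):
    with torch.no_grad():                                 # image coding model unchanged
        fourth_feature = encoder(fourth_faces)
    sixth_faces = decoder(fourth_feature)

    # Decoder update: third discrimination result plus the difference of the
    # sixth face image relative to the fifth face image (assumed L1 loss).
    d_sixth = discriminator(sixth_faces)
    dec_loss = F.binary_cross_entropy(d_sixth, torch.ones_like(d_sixth)) \
             + F.l1_loss(sixth_faces, fifth_faces)
    opt_dec.zero_grad(); dec_loss.backward(); opt_dec.step()

    # Discrimination model update, driven by its judgment of the preset (real) faces.
    d_preset = discriminator(preset_faces)
    disc_loss = F.binary_cross_entropy(d_preset, torch.ones_like(d_preset))
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()
```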
After the adjustment of the image decoding model and the discrimination model is completed, model parameters of the image coding model, the image decoding model and the discrimination model can be adjusted to jointly adjust the three models, so that the accuracy of the face restoration model is improved, and the adjustment process is as follows:
the obtaining unit 100 is further configured to obtain, after the adjustment of the image decoding model and the discrimination model is completed, a seventh face image with a second image quality and an eighth face image with a first image quality corresponding to the seventh face image.
The encoding unit 300 is further configured to invoke the image coding model to encode the seventh face image to obtain a fifth feature vector.
The decoding unit 600 is further configured to invoke the adjusted image decoding model to decode the fifth feature vector to obtain a ninth face image.
The discrimination unit 700 is further configured to invoke the adjusted discrimination model and discriminate the image quality of the ninth face image and the preset face image to obtain a fourth discrimination result for the ninth face image and the preset face image.
The training unit 400 is further configured to adjust the model parameters of the adjusted image decoding model and discrimination model again and adjust the model parameters of the image coding model based on the fourth discrimination result and the difference information of the ninth face image relative to the eighth face image.
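The joint adjustment stage differs from the previous sketch mainly in that the image coding model is now updated as well. The illustration below reuses the same assumed losses and again treats the adjustment step lengths as small optimizer learning rates; these remain assumptions rather than requirements of this embodiment.

```python
# Joint adjustment sketch: encoder, decoder and discriminator are all updated
# (assumed losses; small learning rates assumed for all three optimizers).
import torch
import torch.nn.functional as F

def joint_step(encoder, decoder, discriminator,
               seventh_faces, eighth_faces, preset_faces,
               opt_enc, opt_dec, opt_disc):
    fifth_feature = encoder(seventh_faces)
    ninth_faces = decoder(fifth_feature)

    # Encoder and decoder update: fourth discrimination result plus the
    # difference of the ninth face image relative to the eighth face image.
    d_ninth = discriminator(ninth_faces)
    loss = F.binary_cross_entropy(d_ninth, torch.ones_like(d_ninth)) \
         + F.l1_loss(ninth_faces, eighth_faces)
    opt_enc.zero_grad(); opt_dec.zero_grad()
    loss.backward()
    opt_enc.step(); opt_dec.step()

    # Discrimination model update against the preset (real) faces.
    d_preset = discriminator(preset_faces)
    disc_loss = F.binary_cross_entropy(d_preset, torch.ones_like(d_preset))
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()
```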
For a description of adjusting the model parameters of the image coding model, the image decoding model and the discrimination model, please refer to the above method embodiment, which is not described herein again.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a processor.
A memory for storing processor-executable instructions.
Wherein the processor is configured to execute the instructions to implement the face restoration model training method described above.
The embodiment of the application further discloses a storage medium, where instructions are stored; when the instructions in the storage medium are executed, the face restoration model training method described above is implemented.
It should be noted that, the embodiments in the present specification may be described in a progressive manner, features described in the embodiments in the specification may be replaced with or combined with each other, each embodiment focuses on differences from other embodiments, and similar parts between the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A face restoration model training method is characterized in that the face restoration model comprises an image coding model and an image decoding model, and the method comprises the following steps:
acquiring a first face image, wherein the first face image is obtained by decoding an input first feature vector by the image decoding model;
obtaining a second face image by reducing the image quality of the first face image;
calling an initial image coding model to code the second face image to obtain a second feature vector corresponding to the second face image;
and training the initial image coding model based on the difference information of the second feature vector relative to the first feature vector to obtain the image coding model.
2. The method of claim 1, wherein the image decoding model is obtained by training an initial image decoding model in advance with feature vectors generated from random numbers.
3. The method of claim 2, wherein training the initial image decoding model in advance with feature vectors generated from random numbers comprises:
generating a third feature vector according to a first random number;
calling the initial image decoding model to decode the third feature vector to obtain a third face image;
calling an initial discrimination model, and discriminating the image quality of the third face image and a preset face image to obtain a first discrimination result for the third face image and a second discrimination result for the preset face image, wherein the preset face image and the first face image have the same first image quality;
if it is determined according to the first discrimination result that the third face image does not have the first image quality, training the initial image decoding model based on the first discrimination result;
and if it is determined according to the second discrimination result that the preset face image does not have the first image quality, training the initial discrimination model based on the second discrimination result.
4. The method of any one of claims 1 to 3, further comprising:
after the training of the initial image coding model is completed, a fourth face image and a fifth face image corresponding to the fourth face image are obtained, wherein the fourth face image and the second face image have the same second image quality, and the fifth face image has the first image quality;
calling the image coding model to code the fourth face image to obtain a fourth feature vector;
calling the image decoding model to decode the fourth feature vector to obtain a sixth face image;
calling a discrimination model obtained through initial discrimination model training, and discriminating the image quality of the sixth face image and a preset face image to obtain a third discrimination result for the sixth face image and the preset face image;
and at least adjusting model parameters of the image decoding model and the discrimination model based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image.
5. The method of claim 4, wherein said invoking the image coding model to code the fourth face image to obtain a fourth feature vector comprises: under the condition that the model parameters of the image coding model are kept unchanged, calling the image coding model to code the fourth face image to obtain a fourth feature vector;
the adjusting at least the model parameters of the image decoding model and the discrimination model based on the third discrimination result and the difference information of the sixth face image relative to the fifth face image includes:
if it is determined according to the third discrimination result that the sixth face image does not have the first image quality, adjusting model parameters of the image decoding model with a first adjustment step length based on the third discrimination result and difference information of the sixth face image relative to the fifth face image, wherein the first adjustment step length is smaller than an adjustment step length adopted in training the initial image decoding model;
and if it is determined according to the third discrimination result that the preset face image does not have the first image quality, adjusting model parameters of the discrimination model with a second adjustment step length, wherein the second adjustment step length is smaller than an adjustment step length adopted in training the initial discrimination model.
6. The method of claim 5, further comprising:
after the image decoding model and the discrimination model are adjusted, acquiring a seventh face image with the second image quality and an eighth face image which corresponds to the seventh face image and has the first image quality;
calling the image coding model to code the seventh face image to obtain a fifth feature vector;
calling the adjusted image decoding model to decode the fifth feature vector to obtain a ninth face image;
calling the adjusted discrimination model, and discriminating the image quality of the ninth face image and a preset face image to obtain a fourth discrimination result for the ninth face image and the preset face image;
and based on the fourth discrimination result and the difference information of the ninth face image relative to the eighth face image, readjusting the model parameters of the adjusted image decoding model and the adjusted discrimination model, and adjusting the model parameters of the image coding model.
7. A face restoration model training device, wherein the face restoration model comprises an image coding model and an image decoding model, the device comprising:
the acquisition unit is used for acquiring a first face image, wherein the first face image is obtained by decoding an input first feature vector by the image decoding model;
the image processing unit is used for obtaining a second face image by reducing the image quality of the first face image;
the encoding unit is used for calling an initial image encoding model to encode the second face image to obtain a second feature vector corresponding to the second face image;
and the training unit is used for training the initial image coding model based on the difference information of the second characteristic vector relative to the first characteristic vector to obtain the image coding model.
8. The apparatus according to claim 7, wherein the image decoding model is obtained by training an initial image decoding model in advance with feature vectors generated from random numbers;
the device further comprises:
a generating unit configured to generate a third feature vector from the first random number;
the decoding unit is used for calling the initial image decoding model to decode the third feature vector to obtain a third face image;
the discrimination unit is used for calling an initial discrimination model, and discriminating the image quality of the third face image and a preset face image to obtain a first discrimination result for the third face image and a second discrimination result for the preset face image, wherein the preset face image and the first face image have the same first image quality;
the training unit is further configured to train the initial image decoding model based on the first discrimination result if it is determined that the third face image does not have the first image quality according to the first discrimination result, and train the initial discrimination model based on the second discrimination result if it is determined that the preset face image does not have the first image quality according to the second discrimination result.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the face restoration model training method according to any one of claims 1 to 6.
10. A storage medium having stored therein instructions which, when executed, implement the face restoration model training method according to any one of claims 1 to 6.
CN202010969607.6A 2020-09-15 2020-09-15 Face restoration model training method and device Pending CN112102194A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010969607.6A CN112102194A (en) 2020-09-15 2020-09-15 Face restoration model training method and device

Publications (1)

Publication Number Publication Date
CN112102194A true CN112102194A (en) 2020-12-18

Family

ID=73760185


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361325A (en) * 2021-04-28 2021-09-07 星宏网络科技有限公司 Method, device and equipment for decoding face feature vector into image



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination