CN111292262B - Image processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111292262B
CN111292262B (application CN202010060550.8A)
Authority
CN
China
Prior art keywords
image
beautified
sample
network model
image sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010060550.8A
Other languages
Chinese (zh)
Other versions
CN111292262A (en)
Inventor
储文青
邰颖
汪铖杰
李季檩
葛彦昊
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010060550.8A
Publication of CN111292262A
Application granted
Publication of CN111292262B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image processing method and apparatus, an electronic device, and a storage medium. The image processing method comprises the following steps: acquiring a material image sample and an image sample to be beautified; beautifying the image sample to be beautified by using a generator in a preset network model and the material image sample to obtain a beautified image sample; generating, through a discriminator in the preset network model, difference information corresponding to the beautified image sample and the image sample to be beautified at different scales; converging the preset network model according to the difference information corresponding to the different scales to obtain a generative adversarial network model; and beautifying an image to be beautified based on the generative adversarial network model to obtain a beautified image.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
With the rapid development of computer technology, terminal applications for processing pictures, such as camera apps and image editing software, can be installed on terminal devices such as smart phones, palmtop computers and tablet computers. Based on these applications, users can add special effects to, decorate, beautify, make up and/or reshape original pictures (such as portraits, landscapes or buildings) or videos. Out of a love of beauty or for fun, people choose to appropriately beautify or modify their face photos before publishing them on social networking sites or live-streaming platforms.
Taking image beautification as an example, a retouching terminal generally processes face image data according to an image beautification algorithm and a material image. In existing image processing schemes, however, the processing result often looks stiff and the beautification effect is poor.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, electronic equipment and a storage medium, which can improve the beautifying effect of images.
The embodiment of the invention provides an image processing method, which comprises the following steps:
acquiring a material image sample and an image sample to be beautified;
beautifying the image sample to be beautified by using a generator in a preset network model and the material image sample to obtain a beautified image sample;
generating, through a discriminator in the preset network model, difference information corresponding to the beautified image sample and the image sample to be beautified at different scales;
converging the preset network model according to the difference information corresponding to the different scales to obtain a generative adversarial network model;
and beautifying an image to be beautified based on the generative adversarial network model to obtain a beautified image.
Correspondingly, the embodiment of the invention also provides an image processing device, which comprises:
the acquisition module is used for acquiring a material image sample and an image sample to be beautified;
the first beautifying module is used for beautifying the image sample to be beautified by using a generator in a preset network model and the material image sample to obtain a beautified image sample;
the generation module is used for generating, through a discriminator in the preset network model, difference information corresponding to the beautified image sample and the image sample to be beautified at different scales;
the convergence module is used for converging the preset network model according to the difference information corresponding to the different scales to obtain a generative adversarial network model;
and the second beautifying module is used for beautifying an image to be beautified based on the generative adversarial network model to obtain a beautified image.
Optionally, in some embodiments of the present invention, the generating module includes:
the scale conversion unit is used for performing image scale conversion on the beautified image sample to obtain a plurality of first image samples, and performing image scale conversion on the image sample to be beautified to obtain a plurality of second image samples;
the adding unit is used for adding a first image sample and a second image sample of the same scale to the same set to obtain a plurality of same-scale sample pairs;
and the generating unit is used for generating the difference information of each same-scale sample pair through the discriminator in the preset network model.
Optionally, in some embodiments of the present invention, the generating unit includes:
an extraction subunit, configured to extract the scale of each same-scale sample pair;
a determining subunit, configured to determine a pair of samples of the same scale with a scale greater than a preset threshold as a first pair of samples, and determine a pair of samples of the same scale with a scale less than or equal to the preset threshold as a second pair of samples;
a construction subunit for constructing a plurality of first regions on a first image sample in the first sample pair;
a generating subunit, configured to generate, by using a discriminator in a preset network model, difference information between each first area and an area of a corresponding second image sample to obtain first difference information, and generate, by using a discriminator in a preset network model, difference information of each second sample pair to obtain second difference information;
the convergence module is specifically configured to: converge the preset network model according to the first difference information and the second difference information to obtain the generative adversarial network.
Optionally, in some embodiments of the present invention, the convergence module includes:
the construction unit is used for constructing a loss function corresponding to the preset network model according to the first difference information and the second difference information to obtain a target loss function;
and the convergence unit is used for converging the preset network model based on the target loss function to obtain the generative adversarial network.
Optionally, in some embodiments of the present invention, the convergence unit is specifically configured to:
extracting an image error value from the first difference information to obtain a first image error value; and
extracting an image error value from the second difference information to obtain a second image error value;
and constructing a loss function corresponding to a preset network model based on the first image error value, the second image error value and a preset gradient optimization algorithm to obtain a target loss function.
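As a hedged illustration of this step (the function name and the equal weighting are hypothetical; the patent only states that a target loss is built from the two image error values together with a preset gradient optimization algorithm), the target loss might be a weighted combination of the two error values:

```python
def build_target_loss(first_image_error, second_image_error,
                      w_first=1.0, w_second=1.0):
    """Combine the error value from the first (patch-wise) sample pairs with
    the error value from the second (small-scale) sample pairs into one
    target loss that a gradient optimizer can then minimize.
    The weights are illustrative assumptions, not taken from the patent."""
    return w_first * first_image_error + w_second * second_image_error

# Toy usage: two error values extracted from the difference information.
target = build_target_loss(0.8, 0.4)
```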
Optionally, in some embodiments of the present invention, the first beautification module includes:
the extraction unit is used for respectively carrying out feature extraction on the material image sample and the image sample to be beautified by utilizing a convolution layer in a generator in a preset network model to obtain a first feature vector corresponding to the material image sample and a second feature vector corresponding to the image sample to be beautified;
and the beautifying unit is used for beautifying the image sample to be beautified based on the first feature vector and the second feature vector to obtain a beautified image sample.
Optionally, in some embodiments of the present invention, the beautifying unit is specifically configured to:
splicing the first feature vector and the second feature vector to obtain a spliced feature vector;
and generating a beautified image sample based on the spliced feature vectors.
Optionally, in some embodiments of the present invention, the second beautifying module is specifically configured to:
receiving an image beautifying request, wherein the image beautifying request carries a material image and an image to be beautified;
and beautifying the image to be beautified based on the generative adversarial network model and the material image to obtain a beautified image.
Optionally, in some embodiments of the present invention, a processing module is further included, where the processing module is specifically configured to:
determining an area to be beautified in an image sample to be beautified according to a preset strategy;
intercepting an image block corresponding to the region to be beautified from the image sample to be beautified to obtain the processed image sample to be beautified;
the first beautifying module is specifically configured to: beautify the processed image sample to be beautified by using the generator in the preset network model and the material image sample to obtain the beautified image sample.
In the embodiment of the invention, after a material image sample and an image sample to be beautified are obtained, the image sample to be beautified is beautified by using a generator in a preset network model and the material image sample to obtain a beautified image sample. Difference information corresponding to the beautified image sample and the image sample to be beautified at different scales is then generated through a discriminator in the preset network model, and the preset network model is converged according to the difference information corresponding to the different scales to obtain a generative adversarial network model. Finally, an image to be beautified is beautified based on the generative adversarial network model to obtain a beautified image. Therefore, the scheme can improve the beautification effect of images.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1a is a schematic view of a scenario of an image processing method according to an embodiment of the present invention;
FIG. 1b is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 1c is a schematic diagram of face beautification in an image processing method according to an embodiment of the present invention;
fig. 1d is a schematic structural diagram of a preset network model in an image processing method according to an embodiment of the present invention;
FIG. 1e is a schematic diagram of a first region in an image processing method according to an embodiment of the present invention;
FIG. 1f is a schematic diagram of a local feature error in an image processing method according to an embodiment of the present invention;
FIG. 2 is another flow chart of an image processing method according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 3b is another schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The embodiment of the invention provides an image processing method, an image processing device, electronic equipment and a storage medium.
First, the concept of image beautification is introduced: image beautification is the act of processing image information with a computer to satisfy human visual psychology or application requirements, such as increasing contrast, removing blur and noise, correcting geometric distortion, swapping a person's face, and the like.
The image processing device may be integrated in a terminal or a server, where the terminal may include a mobile phone, a tablet computer, a personal computer (PC, Personal Computer) or a detection device, and the server may be an independently operating server, a distributed server, or a server cluster composed of a plurality of servers.
For example, referring to fig. 1a, the image processing apparatus is integrated on a terminal, and the terminal may include a camera. In the model training stage, the terminal may first obtain a material image sample and an image sample to be beautified, then beautify the image sample to be beautified by using a generator in a preset network model and the material image sample to obtain a beautified image sample, then generate, through a discriminator in the preset network model, difference information corresponding to the beautified image sample and the image sample to be beautified at different scales, and finally converge the preset network model according to the difference information corresponding to the different scales to obtain a generative adversarial network model. In the use stage, when the terminal receives an image beautification request triggered by a user, the terminal can obtain the corresponding image to be beautified and material image according to the request; for example, the user may want to substitute an expression into a selfie. After obtaining the image to be beautified and the material image, the terminal beautifies the image to be beautified based on the generative adversarial network model to obtain a beautified image.
Compared with existing image processing methods, this scheme generates, through the discriminator in the preset network model, difference information of the beautified image sample and the image sample to be beautified corresponding to different scales, and then converges the preset network model according to the difference information corresponding to the different scales to obtain the generative adversarial network model. That is, in the training stage, the relationship between the beautified image sample and the image sample to be beautified is taken into account, and the parameters of the generative adversarial network model are optimized through the adversarial game between the discriminator and the generator, so that the generator can improve the image beautification effect when in use.
The following will describe in detail. It should be noted that the following description order of embodiments is not a limitation of the priority order of embodiments.
An image processing method, comprising: obtaining a material image sample and an image sample to be beautified; beautifying the image sample to be beautified by using a generator in a preset network model and the material image sample to obtain a beautified image sample; generating, through a discriminator in the preset network model, difference information corresponding to the beautified image sample and the image sample to be beautified at different scales; converging the preset network model according to the difference information corresponding to the different scales to obtain a generative adversarial network model; and beautifying an image to be beautified based on the generative adversarial network model to obtain a beautified image.
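The train-then-use flow just summarized can be sketched as follows. Everything here is a hypothetical stand-in (plain Python callables instead of real networks) intended only to show the order of operations, not the patent's implementation:

```python
def train_and_use(pairs, generate, discriminate, converge, to_beautify, material):
    """Sketch of the claimed flow: for each (material sample, raw sample) pair,
    the generator produces a beautified sample, the discriminator produces
    per-scale difference information, and the model is converged on it.
    The converged model is then used to beautify a new image."""
    model = {"step": 0}
    for material_sample, raw_sample in pairs:
        beautified = generate(model, material_sample, raw_sample)
        diffs = discriminate(model, beautified, raw_sample)  # one entry per scale
        model = converge(model, diffs)
    return generate(model, material, to_beautify)

# Toy stand-ins so the flow is runnable end to end (numbers stand in for images).
gen = lambda m, mat, raw: raw + mat          # "beautify" = add material signal
dis = lambda m, fake, raw: {40: fake - raw, 20: (fake - raw) / 2}
cvg = lambda m, d: {"step": m["step"] + 1}
out = train_and_use([(1.0, 2.0)], gen, dis, cvg, to_beautify=5.0, material=1.0)
```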
Referring to fig. 1b, fig. 1b is a flowchart illustrating an image processing method according to an embodiment of the invention. The specific flow of the image processing method can be as follows:
101. and acquiring a material image sample and an image sample to be beautified.
For example, the image sample to be beautified may be a sample containing a human body image, where the human body image may include an image of a head, an image of a torso, and an image of limbs, and it should be noted that the image of the head may include a human face image, and the material image may be an image containing expressions, an image containing various human body postures, and/or an image containing various wallpaper, which are specifically selected according to practical situations, and will not be described herein. The material image sample and the image sample to be beautified can be obtained in various ways, for example, the material image sample and the image sample to be beautified can be obtained from a local database, the data can be pulled through accessing a network interface, and the data can be obtained through shooting in real time by a camera, and the method is specific according to actual conditions.
102. Beautify the image sample to be beautified by using the generator in the preset network model and the material image sample to obtain a beautified image sample.
The preset network model may be a generative adversarial network (Generative Adversarial Networks, GAN) model. A generative adversarial network is a deep learning model whose framework includes at least two modules: a generator (Generative Model) and a discriminator (Discriminative Model), and a relatively good output is produced through the mutual game learning of the generator and the discriminator. In the original GAN theory, G and D are not required to be neural networks, only functions that can fit the corresponding generation and discrimination; in practice, however, deep neural networks are generally used as G and D. An excellent GAN application also requires a good training method, otherwise the output may be non-ideal due to the freedom of the neural network model.
The discriminator takes a variable as input and predicts a label with some model, while the generator is given some implicit information and randomly generates observation data. For example, the discriminator: given a picture, determine whether the animal in the picture is a cat or a dog. The generator: given a series of cat pictures, generate a new cat (not in the dataset). A conditional feature network may be used to extract specific conditional features, such as specific beautification features. The specific conditional features can be set in various ways, for example flexibly according to the actual application requirements, and the conditional feature network can be trained in advance and stored in the network device. In addition, the specific conditional features may be built into the network device, or stored in a memory and transmitted to the network device, and so on. For example, the conditional feature network may be a binary classification network. As the name implies, a binary classification network classifies the data input into it into two categories, such as 0 or 1, yes or no, and so on. For example, through prior training, the binary classification network may acquire the ability to distinguish an unbeautified image from a beautified image.
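As a toy illustration (not the patent's network), a binary classification network in its simplest form is a single logistic unit that maps a feature to the probability of one of the two classes, e.g. "beautified" vs. "unbeautified". The weight and bias below are hypothetical stand-ins for trained parameters:

```python
import math

def binary_classify(feature, weight=2.0, bias=-1.0):
    """Toy two-class 'network': a single logistic unit.
    Returns P(class = 1), e.g. the probability the image is beautified."""
    return 1.0 / (1.0 + math.exp(-(weight * feature + bias)))

p = binary_classify(1.0)  # probability that the input belongs to class 1
```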
Specifically, feature extraction may be performed on the material image sample and the image sample to be beautified, and image beautification may then be performed based on the extracted features to obtain a beautified image sample. That is, optionally, in some embodiments, the step of "beautifying the image sample to be beautified by using a generator in a preset network model and the material image sample to obtain a beautified image sample" may specifically include:
(11) Respectively extracting features of the material image sample and the image sample to be beautified by using a convolution layer in a generator in a preset network model to obtain a first feature vector corresponding to the material image sample and a second feature vector corresponding to the image sample to be beautified;
(12) And carrying out beautifying treatment on the image sample to be beautified based on the first feature vector and the second feature vector to obtain an image sample after beautifying.
Specifically, the first feature vector and the second feature vector may be spliced in the feature dimension to obtain a spliced feature vector, and a beautified image sample is then generated based on the spliced feature vector. That is, optionally, in some embodiments, the step of "beautifying the image sample to be beautified based on the first feature vector and the second feature vector to obtain a beautified image sample" may specifically include:
(21) Splicing the first feature vector and the second feature vector to obtain a spliced feature vector;
(22) And generating a beautified image sample based on the spliced feature vectors.
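Steps (21) and (22) can be sketched minimally as below, with feature vectors represented as plain Python lists; the real model would use tensors, and the `decode` callable is a hypothetical stand-in for the generator's decoding layers:

```python
def splice_features(first_feature, second_feature):
    """Step (21): splice (concatenate) the material-image feature vector and
    the to-be-beautified-image feature vector along the feature dimension."""
    return first_feature + second_feature

def generate_beautified(spliced, decode):
    """Step (22): generate the beautified sample from the spliced vector.
    `decode` stands in for the generator's decoding layers (assumption)."""
    return decode(spliced)

spliced = splice_features([0.1, 0.2], [0.3, 0.4])
sample = generate_beautified(spliced, decode=lambda v: [x * 2 for x in v])
```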
Both the generator and the discriminator may comprise fully connected layers, multiple convolution layers, and so on.
Convolution layer: mainly used for feature extraction on an input image (such as the image sample to be beautified or the material image sample). The size and number of convolution kernels can be determined according to the practical application; for example, the kernel sizes of the first to fourth convolution layers may be (7, 7), (5, 5), (3, 3) in sequence. Optionally, to reduce computational complexity and improve efficiency, the kernels of all four convolution layers may be set to (3, 3), the activation functions all set to ReLU (Rectified Linear Unit), and the padding (the space between an element's frame and its content) set to "same". The "same" padding mode can simply be understood as padding the edges with zeros, where the number of zeros added on the left (top) is the same as or less than the number added on the right (bottom). Optionally, the convolution layers may be connected by direct connections to increase the network convergence speed. To further reduce the amount of computation, a downsampling (pooling) operation may be performed on all, or any one or two, of the second to fourth convolution layers. The downsampling operation is substantially the same as the convolution operation, except that the downsampling "kernel" only takes the maximum value (max pooling) or the average value (average pooling) of the corresponding positions. For convenience of description, downsampling in the second and third convolution layers is taken as an example.
It should be noted that, for convenience of description, in the embodiment of the present invention the layer where the activation function is located and the downsampling layer (also referred to as the pooling layer) are both counted as part of the convolution layer. It should be understood that the structure may also be considered to comprise a convolution layer, a layer where the activation function is located, a downsampling layer (i.e. a pooling layer) and a fully connected layer, and of course may also include an input layer for inputting data and an output layer for outputting data, which are not described herein again.
Fully connected layer: can map the learned features to the sample label space, and mainly plays the role of a "classifier" in the whole convolutional neural network. Each node of the fully connected layer is connected to all nodes output by the previous layer (such as the downsampling layer within the convolution layer); one node of the fully connected layer is called a neuron, and the number of neurons can be determined according to practical requirements. For example, the number of neurons of the fully connected layer may be set to 512, or to 128, and so on. Similar to the convolution layer, optionally, a non-linear factor can also be introduced in the fully connected layer by adding an activation function, for example the activation function sigmoid (S-shaped function).
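To make the layer descriptions above concrete, here is a small pure-Python sketch (illustrative only, not the patent's implementation) of a (3, 3) convolution with "same" zero padding followed by a ReLU activation, plus a 2×2 max-pooling downsampling step:

```python
def conv2d_same_relu(image, kernel):
    """Convolution with 'same' zero padding followed by ReLU. The top/left
    pad count is <= the bottom/right pad count, matching the 'same'
    padding rule described in the text."""
    kh, kw = len(kernel), len(kernel[0])
    pt, pb = (kh - 1) // 2, kh // 2
    pl, pr = (kw - 1) // 2, kw // 2
    h, w = len(image), len(image[0])
    padded = [[0.0] * (w + pl + pr) for _ in range(h + pt + pb)]
    for i in range(h):
        for j in range(w):
            padded[i + pt][j + pl] = image[i][j]
    return [[max(sum(padded[i + a][j + b] * kernel[a][b]
                     for a in range(kh) for b in range(kw)), 0.0)
             for j in range(w)]
            for i in range(h)]

def max_pool_2x2(image):
    """Downsampling: take the maximum of each 2x2 block (max pooling)."""
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, len(image[0]) - 1, 2)]
            for i in range(0, len(image) - 1, 2)]

img = [[1.0] * 4 for _ in range(4)]
identity = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
feat = conv2d_same_relu(img, identity)   # same spatial size as the input
pooled = max_pool_2x2(feat)              # halved spatial size
```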
It should be noted that, to make the beautification effect of the model more realistic and the processing of image details better, the image sample to be beautified may be preprocessed before being beautified with the generator in the preset network model and the material image sample. For example, the area to be beautified may first be determined in the image sample to be beautified, the image block corresponding to the area to be beautified may then be retained, and finally the retained image block may be beautified by using the generator in the preset network model and the material image sample. That is, optionally, in some embodiments, before the step of "beautifying the image sample to be beautified by using a generator in a preset network model and the material image sample", the method may specifically further include:
(31) Determining an area to be beautified in an image sample to be beautified according to a preset strategy;
(32) And intercepting an image block corresponding to the region to be beautified from the image sample to be beautified to obtain the processed image sample to be beautified.
For example, referring to Fig. 1c, the image sample to be beautified is an image sample containing a face, and the material image sample is expression A. First, the area to be beautified is determined, according to a preset strategy, as the area where the face is located in the image sample to be beautified. Then, the image block corresponding to the area to be beautified is cut out of the image sample to be beautified to obtain the processed image sample to be beautified. After the processed image sample to be beautified is obtained, the generator in the preset network model and the material image sample can be used to beautify it to obtain the beautified image sample, as shown in Fig. 1c.
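Steps (31) and (32) amount to a region crop. The sketch below is a hedged illustration: the region coordinates are hypothetical placeholders for whatever the preset strategy (e.g. a face detector, not shown) returns:

```python
def crop_to_region(image, region):
    """Step (32): cut out the image block for the area to be beautified.
    `region` is (top, left, height, width) as produced by some preset
    strategy such as face detection (assumption; detection not shown)."""
    top, left, height, width = region
    return [row[left:left + width] for row in image[top:top + height]]

# Toy 6x6 "image"; pretend the detected face box is the central 4x4 block.
image = [[float(r * 6 + c) for c in range(6)] for r in range(6)]
face_block = crop_to_region(image, (1, 1, 4, 4))
```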
103. Generate difference information corresponding to the beautified image sample and the image sample to be beautified at different scales through the discriminator in the preset network model.
The preset network model may include a generator and a discriminator, and specific network parameters of the generator and the discriminator may be set according to requirements of practical applications.
The preset network model includes a generator and a discriminator, as shown in fig. 1d. After the material image sample and the image sample to be beautified pass through the generator, a beautified image sample is generated. The discriminator may then generate difference information corresponding to the beautified image sample and the image sample to be beautified at different scales; this difference information may be used to characterize whether the beautified image sample and the image sample to be beautified belong to real images. For example, the discriminator may generate difference information of the beautified image sample and the image sample to be beautified at the scale "40×40" and at the scale "20×20". That is, optionally, in some embodiments, the step of "generating, through a discriminator in the preset network model, difference information corresponding to the beautified image sample and the image sample to be beautified at different scales" may specifically include:
(41) Performing image scale transformation on the beautified image samples to obtain a plurality of first image samples, and performing image scale transformation on the image samples to be beautified to obtain a plurality of second image samples;
(42) Adding a first image sample and a second image sample with the same scale to the same set to obtain a plurality of sample pairs with the same scale;
(43) And generating difference information of each same-scale sample pair through a discriminator in a preset network model.
Specifically, the beautified image sample and the image sample to be beautified may each be subjected to scale transformation to obtain a plurality of scale-transformed beautified image samples (i.e. first image samples) and a plurality of scale-transformed image samples to be beautified (i.e. second image samples), for example 4 first image samples and 4 second image samples. Then, first and second image samples with the same scale are added to the same set to obtain a plurality of same-scale sample pairs; for example, the 40×40 first image sample and the 40×40 second image sample are added to one set to form a same-scale sample pair, and the 20×20 first image sample and the 20×20 second image sample are added to another set to form another same-scale sample pair. Finally, difference information of each same-scale sample pair is generated through the discriminator in the preset network model.
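The scale transformation and pairing described above can be sketched as follows. This is an illustrative NumPy example, not the patent's actual implementation: `downscale` and `build_same_scale_pairs` are hypothetical names, and block-average pooling is only one possible scale transformation; the 40/20/10/5 scales follow the example in the text.

```python
import numpy as np

def downscale(img, s):
    """Average-pool a square (H, W) image down to (s, s); assumes H % s == 0."""
    f = img.shape[0] // s
    # reshape so each (f, f) block can be averaged into one output pixel
    return img.reshape(s, f, s, f).mean(axis=(1, 3))

def build_same_scale_pairs(beautified, to_beautify, scales=(40, 20, 10, 5)):
    """Transform both samples to each scale and collect same-scale pairs.

    Each element is (scale, first image sample, second image sample).
    """
    return [(s, downscale(beautified, s), downscale(to_beautify, s))
            for s in scales]
```

A 40×40 beautified sample and a 40×40 sample to be beautified would thus yield 4 first image samples, 4 second image samples, and 4 same-scale sample pairs.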
Further, the scale corresponding to each same-scale sample pair may be extracted; same-scale sample pairs whose scale is greater than a preset threshold are determined as first sample pairs, and same-scale sample pairs whose scale is less than or equal to the preset threshold are determined as second sample pairs. A plurality of first regions is then constructed on the first image sample of each first sample pair. Difference information between each first region and the corresponding region of the second image sample is then generated through the discriminator in the preset network model to obtain first difference information, and difference information of each second sample pair is generated through the discriminator to obtain second difference information. That is, optionally, in some embodiments, the step of "generating difference information of each same-scale sample pair through the discriminator in the preset network model" may specifically include:
(51) Extracting the scale of each same-scale sample pair;
(52) Determining a pair of co-scale samples with the scale being larger than a preset threshold value as a first pair of samples, and determining a pair of co-scale samples with the scale being smaller than or equal to the preset threshold value as a second pair of samples;
(53) Constructing a plurality of first regions on a first image sample in a first sample pair;
(54) Generating difference information between each first region and a corresponding region of the second image sample through a discriminator in a preset network model to obtain first difference information, and generating difference information of each second sample pair through the discriminator in the preset network model to obtain second difference information.
For example, suppose the preset threshold is 10×10 and the scales of the same-scale sample pairs are 40×40, 20×20, 10×10 and 5×5, respectively. Then the 40×40 and 20×20 same-scale sample pairs may be determined as first sample pairs, and the 10×10 and 5×5 same-scale sample pairs may be determined as second sample pairs. A plurality of first regions is then constructed on the first image sample of each first sample pair; the number of first regions may be 3, 4 or 5, specifically selected according to practical situations. The positions of 4 first regions in a first image sample are shown in fig. 1e. It should be noted that the "corresponding region" of the second image sample is the region at the same position: for example, when the first region a is located at the upper left of the first image sample, the corresponding region b of the second image sample is also located at the upper left of the second image sample. After the first difference information and the second difference information are obtained, step 104 may be performed.
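The threshold split and region construction can be sketched as below. Again this is only an illustrative assumption: the function names are hypothetical, and four quadrant regions are used as one concrete way of constructing first regions (the patent allows 3, 4, 5 or another number, at positions such as those of fig. 1e).

```python
import numpy as np

def split_pairs_by_scale(pairs, threshold=10):
    """Pairs with scale > threshold become first sample pairs (compared
    region by region); the rest become second sample pairs (compared whole)."""
    first = [p for p in pairs if p[0] > threshold]
    second = [p for p in pairs if p[0] <= threshold]
    return first, second

def four_regions(img):
    """Construct 4 first regions as the upper-left, upper-right,
    lower-left and lower-right quadrants of a square (H, W) image."""
    h = img.shape[0] // 2
    return [img[:h, :h], img[:h, h:], img[h:, :h], img[h:, h:]]
```

With scales 40, 20, 10 and 5 and threshold 10, the 40×40 and 20×20 pairs land in the first group and the 10×10 and 5×5 pairs in the second, matching the example in the text.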
104. And converging the preset network model according to the corresponding difference information under different scales to obtain a generated countermeasure network model.
For example, after the first difference information and the second difference information are obtained, the preset network model may be converged according to the first difference information and the second difference information. That is, optionally, in some embodiments, the step of converging the preset network model according to the corresponding difference information under different scales to obtain the generated countermeasure network model may specifically include: converging the preset network model according to the first difference information and the second difference information to obtain the generated countermeasure network.
Specifically, a loss function corresponding to the preset network model may be constructed according to the first difference information and the second difference information to obtain a target loss function, and the preset network model is then converged based on the target loss function to obtain the generated countermeasure network. That is, optionally, in some embodiments, the step of converging the preset network model according to the first difference information and the second difference information to obtain the generated countermeasure network may include:
(61) Constructing a loss function corresponding to a preset network model according to the first difference information and the second difference information to obtain a target loss function;
(62) And converging the preset network model based on the target loss function to obtain a generated countermeasure network.
Further, an image error value between the beautified image sample and the image sample to be beautified may be generated by the discriminator, and a loss function corresponding to the preset network model may then be constructed based on the image error value and a preset gradient optimization algorithm to obtain the target loss function. That is, optionally, in some embodiments, the step of "constructing a loss function corresponding to the preset network model according to the first difference information and the second difference information to obtain a target loss function" may specifically include:
(71) Extracting an image error value from the first difference information to obtain a first image error value, and extracting an image error value from the second difference information to obtain a second image error value;
(72) And constructing a loss function corresponding to a preset network model based on the first image error value, the second image error value and a preset gradient optimization algorithm to obtain a target loss function.
The first image error value may be expressed by a countermeasure (adversarial) error, a pixel error and a feature error between the first image sample and the second image sample in the first sample pair; the second image error value is analogous. In the embodiment of the present invention, the generator processes the image sample to be beautified with the material image sample to obtain the beautified image sample. Accordingly, the countermeasure error refers to the countermeasure loss between the image sample to be beautified and the beautified image sample, the pixel error refers to the pixel loss between the two, and the feature error refers to the feature loss between the two. For example, referring to fig. 1f, a local feature error refers to the feature loss between a region A of the image sample to be beautified and a region B of the beautified image sample; there may be a plurality of such local feature errors, specifically selected according to the actual conditions.
The probability that the image sample to be beautified and the beautified image sample are real can be detected by the discriminator. For example, if the discriminator detects that the probability that the image sample to be beautified is a real image is 0.8, and the probability that the beautified image sample is a real image is 0.3, then the countermeasure error between the image sample to be beautified and the beautified image sample is 0.5. The pixel error and the feature error can be calculated in a manner analogous to the countermeasure error and are not described herein.
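The error values above can be sketched numerically as follows. This is an illustrative sketch, not the patent's loss definition: the function names are hypothetical, the pixel error is shown as a simple L1 mean, and since the patent does not fix how the first and second image error values are weighted, `target_loss` just sums them.

```python
import numpy as np

def countermeasure_error(p_real_src, p_real_beautified):
    """Gap between the discriminator's 'real' probabilities for the image
    sample to be beautified and the beautified image sample (0.8 vs 0.3 -> 0.5)."""
    return abs(p_real_src - p_real_beautified)

def pixel_error(src, beautified):
    # pixel error computed analogously, here as a mean absolute (L1) difference
    return float(np.mean(np.abs(src - beautified)))

def target_loss(first_errors, second_errors):
    """Hypothetical target loss: an unweighted sum of the first image error
    values and the second image error values."""
    return sum(first_errors) + sum(second_errors)
```

A gradient optimization algorithm (e.g. stochastic gradient descent) would then minimize such a target loss to converge the preset network model.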
105. And carrying out beautification processing on the image to be beautified based on the generated countermeasure network model to obtain the beautified image.
For example, specifically, when an image beautification command triggered by a user is received, the image to be beautified and the material image corresponding to the command may be acquired. Taking face swapping as an example, the material image may be a face image of a star and the image to be beautified may be a face image of the user. Beautification processing is then performed on the image to be beautified, with the material image, based on the generated countermeasure network model to obtain the beautified image. That is, optionally, in some embodiments, the step of "performing image beautification on the image to be beautified based on the generated countermeasure network model to obtain the beautified image" may specifically include:
(81) Receiving an image beautifying request, wherein the image beautifying request carries a material image and an image to be beautified;
(82) And carrying out beautification processing on the material image and the image to be beautified based on the generated countermeasure network model to obtain a target image.
In order to improve the quality of image beautification, the material image and the image to be beautified may each be preprocessed before beautification. For example, the background of the image to be beautified may be removed; if the image to be beautified is a face image, the non-face area may be removed to obtain a preprocessed image, and the preprocessed image is then beautified based on the generated countermeasure network model to obtain the beautified image.
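The non-face-area removal can be sketched as below. The function name and the `(top, left, height, width)` face box are hypothetical; in practice the box would come from a face detector, which the patent does not specify.

```python
import numpy as np

def remove_non_face_area(img, face_box):
    """Keep only the pixels inside a detected face box
    (top, left, height, width); zero out the rest as background removal."""
    out = np.zeros_like(img)
    t, l, h, w = face_box
    out[t:t + h, l:l + w] = img[t:t + h, l:l + w]
    return out
```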
After the material image sample and the image sample to be beautified are obtained, the generator in the preset network model and the material image sample are utilized to beautify the image sample to be beautified, obtaining the beautified image sample. Difference information corresponding to the beautified image sample and the image sample to be beautified under different scales is then generated through the discriminator in the preset network model, the preset network model is converged according to the corresponding difference information under different scales to obtain the generated countermeasure network model, and finally beautification is performed on the image to be beautified based on the generated countermeasure network model to obtain the beautified image. Compared with the existing image processing method, the difference information of the beautified image sample and the image sample to be beautified corresponding to different scales is generated through the discriminator in the preset network model, and the preset network model is then converged according to the corresponding difference information of the different scales to obtain the generated countermeasure network model. That is, in the training stage, the relationship between the beautified image sample and the image sample to be beautified is considered and the countermeasure between the discriminator and the generator is realized, so that the parameters of the generated countermeasure network model are optimized and the generator can improve the image beautifying effect in use.
The method according to the embodiment will be described in further detail by way of example.
In this embodiment, an example will be described in which the image processing apparatus is specifically integrated in a terminal.
Referring to fig. 2, a specific process of the image beautifying method may be as follows:
201. and the terminal acquires a material image sample and an image sample to be beautified.
For example, the image sample to be beautified may be a sample containing a human body image, where the human body image may include an image of a head, an image of a torso, and an image of limbs; it should be noted that the image of the head may include a human face image. The material image may be an image containing expressions, an image containing various human body postures, and/or an image containing various wallpapers, specifically selected according to practical situations, and not described herein. The terminal can acquire the material image sample and the image sample to be beautified in various ways: for example, the terminal can acquire them from a local database, pull data by accessing a network interface, or obtain them by shooting in real time with a camera, depending on the actual conditions.
202. And the terminal beautifies the material image sample and the image sample to be beautified by using a generator in a preset network model to obtain an beautified image sample.
Wherein the preset network model may be a generative adversarial network (Generative Adversarial Networks, GAN) model. A generative adversarial network is a deep learning model whose framework includes at least two modules: a generator (Generative Model) and a discriminator (Discriminative Model); a relatively good output is produced through the mutual game learning of the generator and the discriminator.
Specifically, the terminal may perform feature extraction on the material image sample and the image sample to be beautified, and then perform image beautification based on the extracted features to obtain the beautified image sample. For example, the terminal may use a convolution layer of the generator in the preset network model to perform feature extraction on the material image sample and the image sample to be beautified, obtaining a first feature vector corresponding to the material image sample and a second feature vector corresponding to the image sample to be beautified; the terminal then splices the first feature vector and the second feature vector to obtain a spliced feature vector, and finally generates the beautified image sample based on the spliced feature vector.
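The extract–splice–generate flow of the generator can be sketched as follows. This is a heavily simplified NumPy stand-in, not the patent's network: `encode` replaces the convolution layers with a single linear projection, all function names and weight shapes are hypothetical, and the decoder is likewise a single linear map back to image shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(img, w):
    """Stand-in for the generator's convolution layers: project the
    flattened image to a feature vector (a real model would use convs)."""
    return img.reshape(-1) @ w

def generate_beautified(material, to_beautify, w_m, w_t, w_dec):
    f1 = encode(material, w_m)          # first feature vector (material)
    f2 = encode(to_beautify, w_t)       # second feature vector (to beautify)
    spliced = np.concatenate([f1, f2])  # spliced feature vector
    # decoder stand-in: map the spliced features back to image shape
    return (spliced @ w_dec).reshape(to_beautify.shape)
```

Concatenating along the feature dimension lets the decoder condition the beautified output jointly on the material image and the image to be beautified.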
203. The terminal generates difference information corresponding to the beautified image sample and the image sample to be beautified under different scales through a discriminator in a preset network model.
The preset network model may include a generator and a discriminator, and specific network parameters of the generator and the discriminator may be set according to the requirements of practical applications. The terminal may extract the scale corresponding to each same-scale sample pair, determine the same-scale sample pairs with a scale greater than the preset threshold as first sample pairs, and determine the same-scale sample pairs with a scale less than or equal to the preset threshold as second sample pairs. A plurality of first regions is then constructed on the first image sample of each first sample pair, difference information between each first region and the corresponding region of the second image sample is generated through the discriminator in the preset network model to obtain first difference information, and difference information of each second sample pair is generated through the discriminator to obtain second difference information.
204. And the terminal converges the preset network model according to the corresponding difference information under different scales to obtain a generated countermeasure network model.
The terminal can construct a loss function corresponding to a preset network model according to the first difference information and the second difference information to obtain a target loss function, and then the terminal converges the preset network model based on the target loss function to obtain a generated countermeasure network.
205. And the terminal performs beautification processing on the image to be beautified based on the generated countermeasure network model to obtain the beautified image.
For example, specifically, when receiving an image beautification command triggered by a user, the terminal may acquire the image to be beautified and the material image corresponding to the command, and beautify the image to be beautified, with the material image, based on the generated countermeasure network model to obtain the beautified image.
After acquiring the material image sample and the image sample to be beautified, the terminal of the embodiment of the invention beautifies the image sample to be beautified by utilizing the generator in the preset network model and the material image sample to obtain the beautified image sample. The terminal then generates difference information corresponding to the beautified image sample and the image sample to be beautified under different scales through the discriminator in the preset network model, converges the preset network model according to the corresponding difference information under different scales to obtain the generated countermeasure network model, and finally beautifies the image to be beautified based on the generated countermeasure network model to obtain the beautified image. Compared with the existing image processing method, the terminal generates the difference information of the beautified image sample and the image sample to be beautified corresponding to different scales through the discriminator in the preset network model, and then converges the preset network model according to the corresponding difference information of different scales to obtain the generated countermeasure network model. That is, in the training stage, the countermeasure between the discriminator and the generator is realized in consideration of the relationship between the beautified image sample and the image sample to be beautified, so that the parameters of the generated countermeasure network model are optimized and the generator can improve the image beautifying effect in use.
In order to facilitate better implementation of the image processing method according to the embodiment of the present invention, the embodiment of the present invention further provides an image processing apparatus (abbreviated as a processing apparatus) based on the foregoing embodiment of the present invention. Where the meaning of the terms is the same as in the image processing method described above, specific implementation details may be referred to in the description of the method embodiments.
Referring to fig. 3a, fig. 3a is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. The processing apparatus may include an obtaining module 301, a first beautifying module 302, a generating module 303, a converging module 304, and a second beautifying module 305, which may specifically be as follows:
the acquiring module 301 is configured to acquire a material image sample and an image sample to be beautified.
For example, the image sample to be beautified may be a sample containing a human body image, where the human body image may include an image of a head, an image of a torso, and an image of limbs; it should be noted that the image of the head may include a human face image. The material image may be an image containing expressions, an image containing various human body postures, and/or an image containing various wallpapers, specifically selected according to practical situations, and not described herein. The material image sample and the image sample to be beautified may be obtained in various ways: for example, the obtaining module 301 may acquire them from a local database, pull data by accessing a network interface, or obtain them by shooting in real time with a camera, depending on the practical situation.
The first beautifying module 302 is configured to perform beautifying processing on the image sample to be beautified by using the generator and the material image sample in the preset network model, so as to obtain a beautified image sample.
The first beautifying module 302 may perform feature extraction on the material image sample and the image sample to be beautified, and then perform image beautification based on the extracted features to obtain a beautified image sample, that is, optionally, in some embodiments, the first beautifying module 302 may specifically include:
the extraction unit is used for respectively carrying out feature extraction on the material image sample and the image sample to be beautified by utilizing a convolution layer in a generator in the preset network model to obtain a first feature vector corresponding to the material image sample and a second feature vector corresponding to the image sample to be beautified;
and the beautifying unit is used for carrying out beautifying treatment on the image sample to be beautified based on the first feature vector and the second feature vector to obtain an image sample after beautifying.
The beautifying unit may splice the first feature vector and the second feature vector in feature dimensions to obtain a spliced feature vector, and then generate a post-beautifying image sample based on the spliced feature vector, that is, in some embodiments, the beautifying unit may specifically be configured to: and splicing the first feature vector and the second feature vector to obtain a spliced feature vector, and generating a beautified image sample based on the spliced feature vector.
Optionally, in some embodiments, referring to fig. 3b, the processing apparatus may further include a processing module 306, where the processing module 306 may specifically be configured to: determining an area to be beautified in an image sample to be beautified according to a preset strategy, and intercepting an image block corresponding to the area to be beautified in the image sample to be beautified to obtain the processed image sample to be beautified;
the first beautifying module 302 is specifically configured to: perform beautification processing on the processed image sample to be beautified by utilizing the generator in the preset network model and the material image sample, to obtain the beautified image sample.
The generating module 303 is configured to generate, by using a discriminator in a preset network model, difference information corresponding to the beautified image sample and the image sample to be beautified under different scales.
Optionally, in some embodiments, the generating module 303 may specifically include:
the dimension conversion unit is used for carrying out image dimension conversion on the beautified image samples to obtain a plurality of first image samples, and carrying out image dimension conversion on the image samples to be beautified to obtain a plurality of second image samples;
the adding unit is used for adding the first image sample and the second image sample with the same scale into the same set to obtain a plurality of sample pairs with the same scale;
And the generating unit is used for generating the difference information of each same-scale sample pair through a discriminator in the preset network model.
Optionally, in some embodiments, the generating unit may specifically include:
an extraction subunit, configured to extract the scale of each same-scale sample pair;
a determining subunit, configured to determine a pair of samples of the same scale with a scale greater than a preset threshold as a first pair of samples, and determine a pair of samples of the same scale with a scale less than or equal to the preset threshold as a second pair of samples;
a construction subunit for constructing a plurality of first regions on a first image sample in a first sample pair;
the generation subunit is used for generating difference information between each first area and the corresponding area of the second image sample through a discriminator in the preset network model to obtain first difference information, and generating difference information of each second sample pair through the discriminator in the preset network model to obtain second difference information.
And the convergence module 304 is configured to converge the preset network model according to the corresponding difference information under different scales, so as to obtain a generated countermeasure network model.
Alternatively, in some embodiments, the convergence module 304 may be specifically configured to: and converging the preset network model according to the first difference information and the second difference information to obtain a generated countermeasure network.
Optionally, in some embodiments, the convergence module 304 may specifically include:
the construction unit is used for constructing a loss function corresponding to a preset network model according to the first difference information and the second difference information to obtain a target loss function;
and the convergence unit is used for converging the preset network model based on the target loss function to obtain the generated countermeasure network.
Alternatively, in some embodiments, the convergence unit may specifically be configured to: extracting an image error value from the first difference information to obtain a first image error value, extracting an image error value from the second difference information to obtain a second image error value, and constructing a loss function corresponding to a preset network model based on the first image error value, the second image error value and a preset gradient optimization algorithm to obtain a target loss function.
And a second beautifying module 305, configured to perform beautifying processing on the image to be beautified based on generating the countermeasure network model, so as to obtain a beautified image.
For example, specifically, when receiving the image beautification command triggered by the user, the second beautifying module 305 may acquire the image to be beautified and the material image corresponding to the command, and beautify the image to be beautified, with the material image, based on the generated countermeasure network model to obtain the beautified image.
Optionally, in some embodiments, the second beautifying module 305 may be specifically configured to: receive an image beautifying request, where the image beautifying request carries a material image and an image to be beautified, and perform beautification processing on the image to be beautified based on the generated countermeasure network model and the material image to obtain the beautified image.
After the obtaining module 301 of the embodiment of the present invention obtains the material image sample and the image sample to be beautified, the first beautifying module 302 performs beautification on the image sample to be beautified by using the generator in the preset network model and the material image sample to obtain the beautified image sample. The generating module 303 then generates difference information corresponding to the beautified image sample and the image sample to be beautified at different scales through the discriminator in the preset network model, the converging module 304 converges the preset network model according to the corresponding difference information at different scales to obtain the generated countermeasure network model, and finally the second beautifying module 305 performs beautification on the image to be beautified based on the generated countermeasure network model to obtain the beautified image. Compared with the existing image processing method, the difference information of the beautified image sample and the image sample to be beautified corresponding to different scales is generated through the discriminator in the preset network model, and the preset network model is then converged according to the corresponding difference information of the different scales to obtain the generated countermeasure network model. That is, in the training stage, the relationship between the beautified image sample and the image sample to be beautified is considered and the countermeasure between the discriminator and the generator is realized, so that the parameters of the generated countermeasure network model are optimized and the generator can improve the image beautifying effect in use.
In addition, the embodiment of the invention further provides an electronic device, as shown in fig. 4, which shows a schematic structural diagram of the electronic device according to the embodiment of the invention, specifically:
the electronic device may include one or more processing cores 'processors 401, one or more computer-readable storage media's memory 402, power supply 403, and input unit 404, among other components. Those skilled in the art will appreciate that the electronic device structure shown in fig. 4 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or may be arranged in different components. Wherein:
the processor 401 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 402, and calling data stored in the memory 402, thereby performing overall detection of the electronic device. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application program, etc., and the modem processor mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the electronic device, etc. In addition, memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The electronic device further comprises a power supply 403 for supplying power to the various components. Preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as managing charging, discharging, and power consumption are performed by the power management system. The power supply 403 may also include one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
The electronic device may further comprise an input unit 404, which may be used to receive input digital or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described herein. In particular, in this embodiment, the processor 401 in the electronic device loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing the following functions:
acquiring a material image sample and an image sample to be beautified; performing beautification processing on the image sample to be beautified by using a generator in a preset network model and the material image sample, to obtain a beautified image sample; generating, by a discriminator in the preset network model, difference information between the beautified image sample and the image sample to be beautified at different scales; converging the preset network model according to the difference information at the different scales, to obtain a generative adversarial network model; and performing beautification processing on an image to be beautified based on the generative adversarial network model, to obtain a beautified image.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, which are not described herein again.
After the material image sample and the image sample to be beautified are acquired, the generator in the preset network model and the material image sample are used to beautify the image sample to be beautified, yielding a beautified image sample. The discriminator in the preset network model then generates difference information between the beautified image sample and the image sample to be beautified at different scales, and the preset network model is converged according to the difference information at the different scales to obtain a generative adversarial network model. Finally, beautification processing is performed on the image to be beautified based on the generative adversarial network model, to obtain a beautified image. Compared with existing image processing methods, this scheme generates, through the discriminator in the preset network model, difference information between the beautified image sample and the image sample to be beautified at different scales, and then converges the preset network model according to that difference information to obtain the generative adversarial network model. That is, in the training stage, the relationship between the beautified image sample and the image sample to be beautified is taken into account, the adversarial game between the discriminator and the generator is realized, and the parameters of the generative adversarial network model are optimized, so that the generator can improve the image beautification effect in use.
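As an illustrative sketch only (not the patented implementation), the training-time flow described above can be mimicked in plain Python. The generator and discriminator below are toy stand-ins (a blend and a mean absolute difference, respectively), and the average-pooling scale transform is one assumed choice among many:

```python
# Illustrative only: toy stand-ins for the generator, the discriminator,
# and the image scale transform. Images are plain 2D lists of floats.

def downscale(img, factor):
    """Average-pool a 2D image by an integer factor (assumed scale transform)."""
    h, w = len(img) // factor, len(img[0]) // factor
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            block = [img[fi][fj]
                     for fi in range(i * factor, (i + 1) * factor)
                     for fj in range(j * factor, (j + 1) * factor)]
            out[i][j] = sum(block) / len(block)
    return out

def toy_generator(material, to_beautify, alpha=0.5):
    """Toy generator: blend the material image into the image to beautify."""
    return [[alpha * m + (1 - alpha) * b for m, b in zip(mr, br)]
            for mr, br in zip(material, to_beautify)]

def toy_discriminator(a, b):
    """Toy discriminator: mean absolute difference between two images."""
    n = len(a) * len(a[0])
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def multi_scale_difference(beautified, original, factors=(1, 2, 4)):
    """Difference information between the two samples at several scales."""
    return [toy_discriminator(downscale(beautified, f), downscale(original, f))
            for f in factors]
```

In an actual implementation the per-scale difference information would drive the adversarial loss that converges the preset network model.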
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a storage medium having stored therein a plurality of instructions that can be loaded by a processor to perform the steps of any of the image processing methods provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
acquiring a material image sample and an image sample to be beautified; performing beautification processing on the image sample to be beautified by using a generator in a preset network model and the material image sample, to obtain a beautified image sample; generating, by a discriminator in the preset network model, difference information between the beautified image sample and the image sample to be beautified at different scales; converging the preset network model according to the difference information at the different scales, to obtain a generative adversarial network model; and performing beautification processing on an image to be beautified based on the generative adversarial network model, to obtain a beautified image.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, which are not described herein again.
The storage medium may include: read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and the like.
The instructions stored in the storage medium can perform the steps of any image processing method provided by the embodiments of the present invention, and can therefore achieve the beneficial effects of any such image processing method; details are given in the previous embodiments and are not repeated herein.
The image processing method, apparatus, electronic device, and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples have been applied herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in light of the ideas of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (10)

1. An image processing method, comprising:
acquiring a material image sample and an image sample to be beautified;
performing beautification processing on the image sample to be beautified by using a generator in a preset network model and the material image sample, to obtain a beautified image sample;
generating, by a discriminator in the preset network model, difference information corresponding to the same region positions of the beautified image sample and the image sample to be beautified at different scales, comprising: performing image scale transformation on the beautified image sample to obtain a plurality of first image samples, and performing image scale transformation on the image sample to be beautified to obtain a plurality of second image samples; adding a first image sample and a second image sample of the same scale to the same set to obtain a plurality of same-scale sample pairs; determining same-scale sample pairs with a scale larger than a preset threshold as first sample pairs, and determining same-scale sample pairs with a scale smaller than or equal to the preset threshold as second sample pairs; constructing a plurality of first regions on the first image sample in each first sample pair; and generating, by the discriminator in the preset network model, difference information between each first region and the corresponding region of the second image sample to obtain first difference information, and generating, by the discriminator in the preset network model, difference information for each second sample pair to obtain second difference information;
converging the preset network model according to the difference information corresponding to the same region positions at the different scales, to obtain a generative adversarial network model, comprising: converging the preset network model according to the first difference information and the second difference information to obtain the generative adversarial network model; and
performing beautification processing on an image to be beautified based on the generative adversarial network model, to obtain a beautified image, comprising: receiving an image beautification request, wherein the image beautification request carries a material image and the image to be beautified; and performing beautification processing on the image to be beautified based on the generative adversarial network model and the material image, to obtain the beautified image.
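The multi-scale pairing logic recited in claim 1 can be sketched in plain Python as follows. This is an illustrative reading only, not part of the claims: treating the image width as the "scale", and using a fixed 2x2 grid of regions, are assumptions made solely for the example.

```python
# Illustrative only: each sample is a (scale, image) tuple, where the
# "scale" is assumed to be the image width after scale transformation.

def build_sample_pairs(first_images, second_images):
    """Add a first and a second image sample of the same scale to one pair."""
    by_scale = dict(second_images)  # scale -> second image sample
    return [(s, img, by_scale[s]) for s, img in first_images if s in by_scale]

def split_pairs(pairs, threshold):
    """Pairs whose scale exceeds the preset threshold become first sample
    pairs (scored region by region); the rest become second sample pairs
    (scored as whole images)."""
    first = [p for p in pairs if p[0] > threshold]
    second = [p for p in pairs if p[0] <= threshold]
    return first, second

def regions(img, grid=2):
    """Construct a grid x grid set of regions on an image (2D list), so a
    region can be compared with the same region of the paired image."""
    h, w = len(img) // grid, len(img[0]) // grid
    out = []
    for gi in range(grid):
        for gj in range(grid):
            out.append([row[gj * w:(gj + 1) * w]
                        for row in img[gi * h:(gi + 1) * h]])
    return out
```

The discriminator would then score each region of a first sample pair against the same region of its counterpart, and each second sample pair as a whole.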
2. The method of claim 1, wherein, prior to the step of determining the same-scale sample pairs with a scale larger than the preset threshold as the first sample pairs, the method further comprises:
extracting the scale of each same-scale sample pair.
3. The method of claim 1, wherein the converging the preset network model according to the first difference information and the second difference information to obtain the generative adversarial network comprises:
constructing a loss function corresponding to the preset network model according to the first difference information and the second difference information, to obtain a target loss function; and
converging the preset network model based on the target loss function, to obtain the generative adversarial network.
4. The method of claim 3, wherein the constructing a loss function corresponding to the preset network model according to the first difference information and the second difference information, to obtain the target loss function comprises:
extracting an image error value from the first difference information to obtain a first image error value;
extracting an image error value from the second difference information to obtain a second image error value; and
constructing the loss function corresponding to the preset network model based on the first image error value, the second image error value, and a preset gradient optimization algorithm, to obtain the target loss function.
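A minimal sketch of how the two image error values of claim 4 could be combined into a target loss and used with a gradient update. This is not part of the claims: the equal default weights and the plain gradient-descent step are assumptions, since the claim leaves the weighting and the "preset gradient optimization algorithm" unspecified.

```python
def target_loss(first_error, second_error, w1=1.0, w2=1.0):
    """Combine the per-region (first) and whole-image (second) image error
    values into one objective. Equal weights are an illustrative assumption."""
    return w1 * first_error + w2 * second_error

def sgd_step(param, grad, lr=0.01):
    """One plain gradient-descent update, standing in for the 'preset
    gradient optimization algorithm' of the claim."""
    return param - lr * grad
```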
5. The method according to any one of claims 1 to 4, wherein the performing beautification processing on the image sample to be beautified by using the generator in the preset network model and the material image sample, to obtain the beautified image sample comprises:
performing feature extraction on the material image sample and the image sample to be beautified, respectively, by using a convolution layer in the generator in the preset network model, to obtain a first feature vector corresponding to the material image sample and a second feature vector corresponding to the image sample to be beautified; and
performing beautification processing on the image sample to be beautified based on the first feature vector and the second feature vector, to obtain the beautified image sample.
6. The method according to claim 5, wherein the performing beautification processing on the image sample to be beautified based on the first feature vector and the second feature vector, to obtain the beautified image sample comprises:
splicing the first feature vector and the second feature vector to obtain a spliced feature vector; and
generating the beautified image sample based on the spliced feature vector.
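The splicing step of claim 6 can be illustrated with plain Python lists standing in for feature vectors. The one-value `decode` helper is a hypothetical stand-in for the generator's decoding layers, not something recited in the claims.

```python
def splice_features(first_vec, second_vec):
    """Splice (concatenate) the material-image feature vector with the
    feature vector of the image sample to be beautified."""
    return list(first_vec) + list(second_vec)

def decode(spliced, weights):
    """Hypothetical one-value decode of the spliced vector, standing in for
    the generator layers that produce the beautified sample."""
    return sum(v * w for v, w in zip(spliced, weights))
```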
7. The method according to any one of claims 1 to 4, wherein, before the performing beautification processing on the image sample to be beautified by using the generator in the preset network model and the material image sample to obtain the beautified image sample, the method further comprises:
determining a region to be beautified in the image sample to be beautified according to a preset strategy; and
cropping an image block corresponding to the region to be beautified from the image sample to be beautified, to obtain a processed image sample to be beautified;
wherein the performing beautification processing on the image sample to be beautified by using the generator in the preset network model and the material image sample, to obtain the beautified image sample comprises: performing beautification processing on the processed image sample to be beautified by using the generator in the preset network model and the material image sample, to obtain the beautified image sample.
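The cropping step of claim 7, sketched with a 2D list standing in for the image sample; this is illustrative only, and the row/column coordinate convention is an assumption made for the example.

```python
def crop_region(img, top, left, height, width):
    """Cut the image block covering the region to be beautified out of the
    full image sample (assumed row/column coordinate convention)."""
    return [row[left:left + width] for row in img[top:top + height]]
```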
8. An image processing apparatus, comprising:
an acquisition module, configured to acquire a material image sample and an image sample to be beautified;
a first beautification module, configured to perform beautification processing on the image sample to be beautified by using a generator in a preset network model and the material image sample, to obtain a beautified image sample;
a generating module, configured to generate, by a discriminator in the preset network model, difference information corresponding to the same region positions of the beautified image sample and the image sample to be beautified at different scales, comprising: performing image scale transformation on the beautified image sample to obtain a plurality of first image samples, and performing image scale transformation on the image sample to be beautified to obtain a plurality of second image samples; adding a first image sample and a second image sample of the same scale to the same set to obtain a plurality of same-scale sample pairs; determining same-scale sample pairs with a scale larger than a preset threshold as first sample pairs, and determining same-scale sample pairs with a scale smaller than or equal to the preset threshold as second sample pairs; constructing a plurality of first regions on the first image sample in each first sample pair; and generating, by the discriminator in the preset network model, difference information between each first region and the corresponding region of the second image sample to obtain first difference information, and generating, by the discriminator in the preset network model, difference information for each second sample pair to obtain second difference information;
a convergence module, configured to converge the preset network model according to the difference information corresponding to the same region positions at the different scales, to obtain a generative adversarial network model, comprising: converging the preset network model according to the first difference information and the second difference information to obtain the generative adversarial network model; and
a second beautification module, configured to perform beautification processing on an image to be beautified based on the generative adversarial network model, to obtain a beautified image, comprising: receiving an image beautification request, wherein the image beautification request carries a material image and the image to be beautified; and performing beautification processing on the image to be beautified based on the generative adversarial network model and the material image, to obtain the beautified image.
9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the image processing method according to any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1-7.
CN202010060550.8A 2020-01-19 2020-01-19 Image processing method, device, electronic equipment and storage medium Active CN111292262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010060550.8A CN111292262B (en) 2020-01-19 2020-01-19 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111292262A CN111292262A (en) 2020-06-16
CN111292262B true CN111292262B (en) 2023-10-13

Family

ID=71029140


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833242A (en) * 2020-07-17 2020-10-27 北京字节跳动网络技术有限公司 Face transformation method and device, electronic equipment and computer readable medium
CN113222841A (en) * 2021-05-08 2021-08-06 北京字跳网络技术有限公司 Image processing method, device, equipment and medium
CN113256513B (en) * 2021-05-10 2022-07-01 杭州格像科技有限公司 Face beautifying method and system based on antagonistic neural network
CN114387373A (en) * 2021-12-29 2022-04-22 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN117252954A (en) * 2022-06-10 2023-12-19 脸萌有限公司 Image processing method, device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203754A (en) * 2017-05-26 2017-09-26 北京邮电大学 A kind of license plate locating method and device based on deep learning
CN107563977A (en) * 2017-08-28 2018-01-09 维沃移动通信有限公司 A kind of image processing method, mobile terminal and computer-readable recording medium
CN108805828A (en) * 2018-05-22 2018-11-13 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN110135366A (en) * 2019-05-20 2019-08-16 厦门大学 Pedestrian's recognition methods again is blocked based on multiple dimensioned generation confrontation network
CN110211063A (en) * 2019-05-20 2019-09-06 腾讯科技(深圳)有限公司 A kind of image processing method, device, electronic equipment and system
CN110378844A (en) * 2019-06-14 2019-10-25 杭州电子科技大学 Motion blur method is gone based on the multiple dimensioned Image Blind for generating confrontation network is recycled
CN110414345A (en) * 2019-06-25 2019-11-05 北京汉迪移动互联网科技股份有限公司 Cartoon image generation method, device, equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code; Ref country code: HK; Ref legal event code: DE; Ref document number: 40025225; Country of ref document: HK
SE01 Entry into force of request for substantive examination
GR01 Patent grant