CN112861592B - Training method of image generation model, image processing method and device - Google Patents

Training method of image generation model, image processing method and device

Info

Publication number
CN112861592B
CN112861592B
Authority
CN
China
Prior art keywords
image
vector
feature map
matrix
generation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911193533.5A
Other languages
Chinese (zh)
Other versions
CN112861592A (en)
Inventor
黄星 (Huang Xing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201911193533.5A priority Critical patent/CN112861592B/en
Publication of CN112861592A publication Critical patent/CN112861592A/en
Application granted granted Critical
Publication of CN112861592B publication Critical patent/CN112861592B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a training method for an image generation model, an image processing method, and an image processing device, in the technical field of image processing. The feature layers of a feature map are grouped according to the feature map's channel number to obtain a target feature map containing a target number of feature-layer groups; the learnable parameters of a preset normalized conversion function are set according to the random vector input to the model, the channel number of the feature map, and the target number, yielding a target normalized conversion function; and the feature map is normalized according to this target normalized conversion function, thereby completing training of the image generation model. Because the normalization operation takes into account both the input of the image generation model and the characteristics of the object being normalized, the parameters used during normalization are correlated with each other, ensuring that differences do not vanish in the normalization process. This improves the training effect of the model used for image generation, improves the quality of the generated images, and shortens the training time of the model.

Description

Training method of image generation model, image processing method and device
Technical Field
The disclosure relates to the technical field of image processing, and in particular relates to a training method of an image generation model, an image processing method and an image processing device.
Background
With the development of technology, deep learning is applied ever more widely. Deep learning learns the internal regularities and representation levels of sample data, so that a network model acquires an analytic and learning ability similar to a person's and can recognize data such as text, images, and sound.
During training of a deep learning network model, the feature map processed by a network layer is typically normalized to obtain normalized features, so that the data of the feature map follows a distribution with mean 0 and standard deviation 1, or a distribution in the range 0-1; this shortens model convergence time and improves the training effect. The current training method uses a standardized-score approach, which normalizes according to the mean and standard deviation of the original data so that the processed data follows the standard normal distribution, i.e., mean 0 and standard deviation 1. That is, normalization is performed with the formula

    y(x) = γ · (x − E(x)) / √Var(x) + β

where x is the original data, E(x) is the mean of x, Var(x) is the variance of x, y(x) is the normalization result, and γ and β are learnable parameters. Because γ and β are independent values, optimized separately through loss back-propagation during model training and unrelated to the model input, differences can vanish in the normalization process and image quality is reduced.
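The standardized-score normalization described in the background can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the small epsilon added for numerical stability is an assumption (the formula in the text omits it), and γ and β are shown as plain scalars optimized independently of the model input:

```python
import numpy as np

def z_score_normalize(x, gamma, beta, eps=1e-5):
    """Standardized-score normalization: shift x to zero mean and unit
    variance, then scale by gamma and offset by beta (both learnable)."""
    return gamma * (x - x.mean()) / np.sqrt(x.var() + eps) + beta

x = np.array([1.0, 2.0, 3.0, 4.0])
y = z_score_normalize(x, gamma=1.0, beta=0.0)
print(y.mean(), y.var())  # mean close to 0, variance close to 1
```

Because gamma and beta here are detached from the model input, every input is rescaled the same way, which is the loss of difference the disclosure sets out to avoid.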
Disclosure of Invention
The disclosure provides a training method, an image processing method and an image processing device for an image generation model, so as to improve the training effect of the model for image generation and improve the quality of the generated image.
The technical scheme of the present disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, the present disclosure provides a training method of an image generation model, including:
obtaining a random vector, wherein the random vector is a vector with a preset dimension;
inputting the random vector into an image generation model to be trained for processing to obtain a feature map comprising a plurality of channels;
grouping the feature layers of the feature map to obtain a target feature map comprising a target number of feature layer groups, wherein the target number is a divisor of the channel number of the feature map;
setting a learnable parameter of a preset normalized conversion function according to the random vector, the channel number of the feature map and the target number to obtain a target normalized conversion function;
normalizing the target feature map according to the target normalized conversion function to obtain an intermediate feature map;
processing the intermediate feature map by using the image generation model to be trained to obtain a predicted image;
And adjusting parameters of the image generation model to be trained based on the predicted image and a sample image to obtain a trained image generation model, wherein the sample image is an image with specified facial features.
Optionally, the image generation model is a generative adversarial network or a variational autoencoder.
Optionally, the setting a learnable parameter of a preset normalized conversion function according to the random vector, the channel number of the feature map, and the target number to obtain a target normalized conversion function includes:
generating a first matrix and a second matrix respectively according to the random vector, wherein the number of columns of the first matrix and the number of columns of the second matrix are the same as the dimension of the random vector, and generating a first vector and a second vector respectively according to the number of channels of the feature map and the number of targets, and the number of rows of the first matrix, the number of rows of the second matrix, the number of rows of the first vector and the number of rows of the second vector are the same;
and setting a learnable parameter of a preset normalized conversion function according to the first matrix, the random vector and the first vector, and the second matrix and the second vector to obtain a target normalized conversion function.
Optionally, the preset normalized conversion function is y(x) = γ · (x − E(x)) / √Var(x) + β, wherein x is the target feature map, E(x) is the mean of x, Var(x) is the variance of x, y(x) is the intermediate feature map, and γ and β are learnable parameters.
Optionally, the target normalized conversion function is:

    y(x) = (A_γ · Z + B_γ) · (x − E(x)) / √Var(x) + (A_β · Z + B_β)

wherein Z represents the random vector, A_γ represents the first matrix, B_γ represents the first vector, A_β represents the second matrix, and B_β represents the second vector; Z is an m-dimensional column vector, A_γ and A_β are matrices of g rows and m columns composed of g × m numbers, and B_γ and B_β are g-dimensional column vectors, wherein g is set according to the channel number of the feature map as g = C / G, C representing the number of channels of the feature map, G being the target number, and g being an integer.
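As a shape sanity check on the quantities above, the sketch below uses the example values from the embodiment (m = 512, C = 512, G = 16, hence g = C / G = 32); the random initialization of the matrices and vectors is an assumption for illustration only:

```python
import numpy as np

m, C, G = 512, 512, 16       # random-vector dimension, channels, target number
g = C // G                   # channels per feature-layer group, here 32

Z = np.random.randn(m)             # random input vector (m-dimensional column vector)
A_gamma = np.random.randn(g, m)    # first matrix, g rows x m columns
B_gamma = np.random.randn(g)       # first vector, g-dimensional
A_beta = np.random.randn(g, m)     # second matrix, g rows x m columns
B_beta = np.random.randn(g)        # second vector, g-dimensional

gamma = A_gamma @ Z + B_gamma      # input-dependent scale, one value per channel in a group
beta = A_beta @ Z + B_beta         # input-dependent offset
print(gamma.shape, beta.shape)     # (32,) (32,)
```

Because gamma and beta are affine functions of Z, the normalization parameters change with every model input rather than being fixed scalars.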
According to a second aspect of embodiments of the present disclosure, the present disclosure provides an image processing method, including:
acquiring an image to be processed;
inputting the image to be processed into a trained image generation model to be processed, so as to obtain an image with specified facial features, wherein the trained image generation model is obtained by training according to the method of any one of the first aspect.
Optionally, the image with the specified facial feature is an image with a doll face feature.
According to a third aspect of embodiments of the present disclosure, the present disclosure provides a training apparatus of an image generation model, including:
the acquisition module is configured to acquire a random vector, wherein the random vector is a vector with a preset dimension;
the processing module is configured to input the random vector into an image generation model to be trained for processing, and a feature map comprising a plurality of channels is obtained;
the grouping module is configured to group the feature layers of the feature images to obtain target feature images containing target number of feature layer groups, wherein the target number is a divisor of the channel number of the feature images;
the setting module is configured to set a learnable parameter of a preset normalized conversion function according to the random vector, the channel number of the feature map and the target number, so as to obtain a target normalized conversion function;
the normalization module is configured to normalize the target feature map according to the target normalized conversion function to obtain an intermediate feature map;
the processing module is configured to process the intermediate feature map by utilizing the image generation model to be trained to obtain a predicted image;
the training module is configured to adjust parameters of the image generation model to be trained based on the predicted image and a sample image to obtain a trained image generation model, and the sample image is an image with specified facial features.
Optionally, the image generation model is a generative adversarial network or a variational autoencoder.
Optionally, the setting module is specifically configured to:
generating a first matrix and a second matrix respectively according to the random vector, wherein the number of columns of the first matrix and the number of columns of the second matrix are the same as the dimension of the random vector, and generating a first vector and a second vector respectively according to the number of channels of the feature map and the number of targets, and the number of rows of the first matrix, the number of rows of the second matrix, the number of rows of the first vector and the number of rows of the second vector are the same;
and setting a learnable parameter of a preset normalized conversion function according to the first matrix, the random vector and the first vector, and the second matrix and the second vector to obtain a target normalized conversion function.
Optionally, the preset normalized conversion function is y(x) = γ · (x − E(x)) / √Var(x) + β, wherein x is the target feature map, E(x) is the mean of x, Var(x) is the variance of x, y(x) is the intermediate feature map, and γ and β are learnable parameters.
Optionally, the target normalized conversion function is:

    y(x) = (A_γ · Z + B_γ) · (x − E(x)) / √Var(x) + (A_β · Z + B_β)

wherein Z represents the random vector, A_γ represents the first matrix, B_γ represents the first vector, A_β represents the second matrix, and B_β represents the second vector; Z is an m-dimensional column vector, A_γ and A_β are matrices of g rows and m columns composed of g × m numbers, and B_γ and B_β are g-dimensional column vectors, wherein g is set according to the channel number of the feature map as g = C / G, C representing the number of channels of the feature map, G being the target number, and g being an integer.
According to a fourth aspect of embodiments of the present disclosure, the present disclosure provides an image processing apparatus including:
the acquisition module is configured to acquire an image to be processed;
the image processing module is configured to input the image to be processed into a trained image generation model to be processed, so as to obtain an image with specified facial features, and the trained image generation model is trained according to the method of any one of the first aspect.
Optionally, the image with the specified facial feature is an image with a doll face feature.
According to a fifth aspect of embodiments of the present disclosure, the present disclosure provides an electronic device, comprising: a processor;
a memory for storing the processor-executable instructions, wherein the processor is configured to execute the instructions to implement the training method of the image generation model of any one of the first aspect or the image processing method of any one of the second aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a storage medium having stored therein a computer program which, when executed by a processor, implements the training method of the image generation model of any one of the first aspects or the image processing method of any one of the second aspects.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the training method of the image generation model according to any of the first aspects or the image processing method according to any of the second aspects.
The training method, the image processing method and the device for the image generation model provided by the embodiment of the disclosure have at least the following beneficial effects:
the random vector is input into an image generation model to be trained for processing, a feature map comprising a plurality of channels is obtained, feature layers of the feature map are grouped according to the channel number of the feature map, a target feature map comprising target number of feature layer groups is obtained, learning parameters of a preset normalized conversion function are set according to the random vector, the channel number of the feature map and the target number, a target normalized conversion function is obtained, the feature map is normalized according to the target normalized conversion function, an intermediate feature map is obtained, the image generation model to be trained is utilized for processing the intermediate feature map, a predicted image is obtained, the input of the image generation model is considered in the normalization operation process, the characteristics of normalization operation objects are considered, parameters in the normalization operation process are mutually related, the difference is guaranteed not to disappear due to the normalization process, finally, the training of the image generation model is completed based on the predicted image and the sample image, the training effect of the image generation model is improved, and the quality of the generated image is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram illustrating a training method of an image generation model, according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 3 is a block diagram of a training apparatus for generating a model of an image, according to an example embodiment;
fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment;
FIG. 5 is a block diagram of a first electronic device, shown according to an exemplary embodiment;
fig. 6 is a block diagram of a second electronic device, shown in accordance with an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The embodiment of the disclosure discloses a training method, an image processing method and an image processing device for an image generation model, and the training method, the image processing method and the image processing device are respectively described below.
FIG. 1 is a schematic diagram of a training method for an image generation model, as shown in FIG. 1, according to an exemplary embodiment, including the steps of:
in step S110, a random vector is obtained, where the random vector is a vector with a preset dimension.
The training method of the image generation model in the embodiment of the disclosure may be implemented by an electronic device, and in particular, the electronic device may be a server.
The image generation model of the present disclosure is a deep-learning-based generative model; specifically, it may be a GAN (Generative Adversarial Network) model, a VAE (Variational Autoencoder) model, or the like. A random vector is obtained and fed into the image generation model as its input, which processes it to generate an image with a certain characteristic, for example an image with doll-face features, elderly-face features, or female-face features; the target characteristic is set according to actual needs. The random vector is a vector of a preset dimension: for example, when the image generation model is used to generate images with doll-face features, the random vector may be a column vector Z of dimension 512, and inputting Z into the model generates an image with doll-face features. Processing a random vector through an image generation model to produce such an image is prior art and is not described in detail here. During training of the image generation model, the feature map processed by a network layer is typically normalized to obtain normalized features, so that the data of the feature map follows a distribution with mean 0 and standard deviation 1, or a distribution in the range 0-1, thereby shortening convergence time and improving the training effect.
In one possible embodiment, the image generation model is a generative adversarial network or a variational autoencoder.
The GAN model is built from (at least) two modules in one framework: a generative model and a discriminative model, whose mutual adversarial game learning produces a fairly good output. The VAE model aims at learning the underlying probability distribution of the training data so that new data can easily be sampled from the learned distribution to generate an image with the target feature. Using GAN and VAE models to generate images is prior art and is not described in detail here.
In step S120, the random vector is input into an image generation model to be trained for processing, so as to obtain a feature map including a plurality of channels.
The image generation model is a deep learning model and may comprise a plurality of network layers; specifically, the network layers may be convolution layers, connection layers, up-sampling layers, and activation layers. The random vector is input into the image generation model to be trained, feature extraction is performed by each network layer, and a feature map output by a network layer of the model is obtained, the feature map comprising a plurality of channels. For example, when the image generation model is a GAN model, its generative model includes a plurality of network layers, such as convolution, pooling, connection, activation, and up-sampling layers; the random vector is a column vector Z of dimension 512, Z is input into the model, and processing by each network layer finally generates an image of size C × H × W, where C is the number of channels, H the image feature height, and W the image feature width — for example, the image output by the generative model of the final GAN model is 3 × 256 × 256.
For example, the feature map P output by an activation layer in the network of the image generation model is obtained; P has size 512 × 4 × 4, with 512 channels. P is then processed by further network layers of the model, such as convolution and up-sampling layers, and the finally generated image is 3 × 256 × 256.
In step S130, the feature layers of the feature map are grouped to obtain a target feature map including a target number of feature layer groups, where the target number is a divisor of the number of channels of the feature map.
The feature layers of the feature map are grouped to obtain a target feature map comprising a target number of feature-layer groups, where the target number is a divisor of the number of channels of the feature map. For example, for the feature map P with 512 channels, grouping the feature layers with a target number of 16 yields a target feature map of 16 feature-layer groups, each group containing 32 channels.
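The grouping step can be sketched as a simple reshape. This is a NumPy illustration under the embodiment's example values (a 512 × 4 × 4 feature map and a target number of 16); the C × H × W layout is taken from the description, and the group count must divide the channel number:

```python
import numpy as np

C, H, W, G = 512, 4, 4, 16     # channels, height, width, target number
assert C % G == 0              # the target number must be a divisor of C
feature_map = np.random.randn(C, H, W)

# Split the 512 feature layers into 16 groups of 32 channels each,
# giving the target feature map of G feature-layer groups.
grouped = feature_map.reshape(G, C // G, H, W)
print(grouped.shape)  # (16, 32, 4, 4)
```

Each of the 16 groups is then treated as one normalization unit, with its own mean and variance.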
In step S140, a learnable parameter of a preset normalized transformation function is set according to the random vector, the number of channels of the feature map, and the target number, so as to obtain a target normalized transformation function.
A learnable parameter is set in the normalized conversion function to represent the degree of data scaling and the data offset. For example, the preset normalized conversion function is

    y(x) = γ · (x − E(x)) / √Var(x) + β

where x is the target feature map, E(x) is the mean of x, Var(x) is the variance of x, y(x) is the normalization result of the target feature map, and γ and β are learnable parameters: γ represents the degree of data scaling and β represents the data offset. Because γ and β are learnable, and in order that the normalization operation take both the input of the image generation model and the characteristics of the normalized object into account — so that the parameters used during normalization are correlated with each other and differences are not lost in the normalization process — γ and β are made dependent on the model input: they are set according to the random vector, the channel number of the feature map, and the target number. For example, with a random vector Z of dimension 512, a feature map P with 512 channels, and a target number of 16, each target feature map group contains a feature layer of 32 channels; denoting the target feature map by x, its mean E(x) and variance Var(x) are then computed.
According to the dimension 512 of the random vector, the channel number 512 of the feature map, and the target number 16 (each group thus containing a feature layer of 512/16 = 32 channels), a first matrix and a second matrix are randomly generated: the first matrix is a 32 × 512 matrix, denoted A_γ, and the second matrix is also a 32 × 512 matrix, denoted A_β. A first vector and a second vector are generated according to the channel number of the feature map and the target number: the first vector is denoted B_γ, the second vector is denoted B_β, and both are 32-dimensional column vectors. The learnable parameters of the preset normalized conversion function are then set according to the random vector, the channel number of the feature map, and the target number, giving the target normalized conversion function

    y(x) = (A_γ · Z + B_γ) · (x − E(x)) / √Var(x) + (A_β · Z + B_β)

The target feature map is normalized according to this target normalized conversion function to obtain an intermediate feature map. In this way the normalization operation considers both the input of the image generation model and the characteristics of the normalized object, the parameters involved in normalization are correlated with each other, and differences do not vanish in the normalization process, which improves the training effect of the model used for image generation and the quality of the generated images.
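Putting the pieces together, the target normalization might be applied per group as below. This is a hedged sketch, not the patent's implementation: the epsilon for numerical stability, the random initialization of A_γ, B_γ, A_β, B_β, and the assumption that the 32-dimensional γ and β are shared across the 16 groups (one value per channel within a group) are all illustrative choices:

```python
import numpy as np

def target_normalize(x, Z, A_g, B_g, A_b, B_b, G, eps=1e-5):
    """Normalize each of the G feature-layer groups of x (shape C,H,W)
    with input-dependent gamma and beta computed from the random vector Z."""
    C, H, W = x.shape
    g = C // G
    gamma = A_g @ Z + B_g                             # (g,) scale, depends on Z
    beta = A_b @ Z + B_b                              # (g,) offset, depends on Z
    xg = x.reshape(G, g, H, W)
    mean = xg.mean(axis=(1, 2, 3), keepdims=True)     # per-group mean E(x)
    var = xg.var(axis=(1, 2, 3), keepdims=True)       # per-group variance Var(x)
    y = gamma[None, :, None, None] * (xg - mean) / np.sqrt(var + eps) \
        + beta[None, :, None, None]
    return y.reshape(C, H, W)

m, C, G = 512, 512, 16
g = C // G
Z = np.random.randn(m)
x = np.random.randn(C, 4, 4)
out = target_normalize(x, Z, np.random.randn(g, m), np.random.randn(g),
                       np.random.randn(g, m), np.random.randn(g), G)
print(out.shape)  # (512, 4, 4)
```

A different Z produces different gamma and beta, so two distinct random vectors are never flattened onto the same normalized statistics.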
In step S150, the target feature map is normalized according to the target normalized transformation function, so as to obtain an intermediate feature map.
After the target normalized conversion function is determined, normalizing the target feature map according to the target normalized conversion function to obtain an intermediate feature map. And then, the intermediate feature map can be input into other network layers in the image generation model to be trained to perform feature extraction, so that the difference is ensured not to disappear due to the normalization process, the training effect of the model for image generation is improved, and the quality of the generated image is improved.
In step S160, the intermediate feature map is processed by using the image generation model to be trained, so as to obtain a predicted image.
The intermediate feature map, obtained by applying the target normalized conversion function, can be input into the other network layers of the image generation model to be trained for further feature extraction, so that the model finally generates a predicted image.
In step S170, parameters of the image generation model to be trained are adjusted based on the predicted image and the sample image, so as to obtain a trained image generation model, where the sample image is an image with a specified facial feature.
The sample image is an image having a specified facial feature, for example an image with doll-face features. Based on the predicted image and the sample image, the difference between them is computed, and the parameters of the image generation model to be trained are adjusted via a loss function to obtain a trained image generation model, through which an image with the specified facial feature (for example, a doll face) can be generated. Adjusting the parameters of the model to be trained based on the sample image and the predicted image is prior art and is not described here.
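At a high level, one parameter-adjustment iteration might look like the toy sketch below. This is an assumption-laden stand-in: a linear "generator" and a plain mean-squared-error loss replace the real GAN/VAE training objective, which the patent defers to known practice:

```python
import numpy as np

def train_step(params, Z, sample_image, lr=0.01):
    """One toy training step: a linear 'generator' maps the random vector Z
    to a flattened image; its weight matrix is adjusted by gradient descent
    to reduce the squared error against the sample image."""
    W = params["W"]                        # (pixels, dim) weight matrix
    predicted = W @ Z                      # predicted image (flattened)
    error = predicted - sample_image       # difference to the sample image
    loss = float(np.mean(error ** 2))
    grad_W = 2.0 * np.outer(error, Z) / error.size   # dLoss/dW for MSE
    params["W"] = W - lr * grad_W          # adjust the model parameters
    return loss

rng = np.random.default_rng(0)
Z = rng.standard_normal(8)                 # stand-in random input vector
sample = rng.standard_normal(16)           # stand-in sample image
params = {"W": rng.standard_normal((16, 8))}
losses = [train_step(params, Z, sample) for _ in range(50)]
print(losses[0] > losses[-1])  # True: the loss decreases over iterations
```

In the disclosed method the matrices A_γ, A_β and vectors B_γ, B_β would be updated in the same back-propagation pass as the rest of the model's parameters.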
The random vector is input into the image generation model to be trained, yielding a feature map comprising a plurality of channels. The feature layers of the feature map are grouped according to its channel number to obtain a target feature map containing a target number of feature-layer groups. The learnable parameters of a preset normalized conversion function are set according to the random vector, the channel number of the feature map, and the target number, yielding a target normalized conversion function, and the target feature map is normalized according to it to obtain an intermediate feature map. The intermediate feature map is then processed by the image generation model to be trained to obtain a predicted image. Because the normalization operation considers both the input of the image generation model and the characteristics of the normalized object, the parameters involved in normalization are correlated with each other, ensuring that differences do not vanish in the normalization process. Finally, training of the image generation model is completed based on the predicted image and the sample image, improving the training effect of the model and the quality of the generated images.
In one possible implementation, setting the learnable parameters of the preset normalized conversion function according to the random vector, the number of channels of the feature map, and the target number to obtain the target normalized conversion function includes:
generating a first matrix and a second matrix according to the random vector, wherein the number of columns of the first matrix and the number of columns of the second matrix are the same as the dimension of the random vector; and generating a first vector and a second vector according to the number of channels of the feature map and the target number, wherein the number of rows of the first matrix, the number of rows of the second matrix, the number of rows of the first vector, and the number of rows of the second vector are all the same;
and setting the learnable parameters of the preset normalized conversion function according to the first matrix, the random vector, and the first vector, and according to the second matrix and the second vector, to obtain the target normalized conversion function.
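The two setting steps above can be sketched as follows (a minimal illustration with assumed dimensions m = 512 for the random vector and g = 32 rows): the learnable parameters γ and β become affine functions of the random vector Z.

```python
import numpy as np

m, g = 512, 32                     # dimension of Z; rows of the matrices and vectors

Z = np.random.randn(m)             # random vector input to the generator
A_gamma = np.random.randn(g, m)    # first matrix,  g x m
A_beta = np.random.randn(g, m)     # second matrix, g x m
B_gamma = np.random.randn(g)       # first vector,  g-dimensional
B_beta = np.random.randn(g)        # second vector, g-dimensional

# The learnable parameters now depend on the input Z of the model,
# correlating the normalization with the model's input.
gamma = A_gamma @ Z + B_gamma      # shape (g,)
beta = A_beta @ Z + B_beta         # shape (g,)
```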
For example, suppose the dimension of the random vector Z is 512, the number of channels of the feature map P is 512, and the target number is 16, so that each feature layer group of the target feature map contains 512/16 = 32 channel feature layers. Denoting the target feature map as x, the mean E(x) and variance Var(x) of the target feature map are computed. According to the 512-dimensional random vector, the channel number 512 of the feature map, and the target number 16, a first matrix and a second matrix are randomly generated, where the first matrix is a 32×512 matrix denoted A_γ and the second matrix is also a 32×512 matrix denoted A_β. A first vector and a second vector, denoted B_γ and B_β respectively, are generated according to the number of channels of the feature map and the target number; B_γ and B_β are both 32-dimensional column vectors. The learnable parameters of the preset normalized conversion function are then set as γ = A_γZ + B_γ and β = A_βZ + B_β, yielding the target normalized conversion function y(x) = (A_γZ + B_γ)(x − E(x))/√Var(x) + (A_βZ + B_β). The target feature map is normalized according to the random vector, the number of channels of the feature map, and the number of groups to obtain an intermediate feature map. In this way, the input of the image generation model and the features of the object being normalized are both taken into account during the normalization operation; the parameters of the normalization are correlated with one another, differences are not erased by the normalization, the training effect of the image generation model is improved, and the quality of the generated image is improved.
In one possible embodiment, the preset normalized conversion function is y(x) = γ(x − E(x))/√Var(x) + β, wherein x is the target feature map, E(x) is the mean of x, Var(x) is the variance of x, y(x) is the intermediate feature map, and γ and β are the learnable parameters.
The normalized conversion function is used to normalize the feature map produced by a network layer of the deep learning network model, obtaining normalized features, so that the data of the feature map follows a distribution with mean 0 and standard deviation 1, or a distribution in the range 0–1, thereby shortening the model convergence time and improving the model training effect.
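A minimal numerical check of the core normalization step (x − E(x))/√Var(x), using a small made-up vector: the result has mean 0 and standard deviation 1, as stated above.

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])        # toy feature values
y = (x - x.mean()) / np.sqrt(x.var())     # (x - E(x)) / sqrt(Var(x))
# y is now [-3, -1, 1, 3] / sqrt(5): zero mean, unit standard deviation
```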
In one possible implementation, the target normalized transformation function is:

y(x) = (A_γZ + B_γ)(x − E(x))/√Var(x) + (A_βZ + B_β)

wherein Z represents the random vector, A_γ represents the first matrix, B_γ represents the first vector, A_β represents the second matrix, and B_β represents the second vector; Z is an m-dimensional column vector, A_γ and A_β are matrices of g rows and m columns composed of g×m numbers, and B_γ and B_β are g-dimensional column vectors, where g is set according to the number of channels of the feature map as g = C/G, C representing the number of channels of the feature map, G representing the target number, and g being an integer.
In order to correlate γ and β with each other and with the input of the image generation model, both are constructed as affine functions of the same random vector Z: γ = A_γZ + B_γ and β = A_βZ + B_β. Under the example dimensions above (a 512-dimensional random vector Z, a feature map P with 512 channels, and a target number of 16, so that each target feature map contains 32 channel feature layers), A_γ and A_β are randomly generated 32×512 matrices and B_γ and B_β are 32-dimensional column vectors. Normalizing the target feature map with the resulting target normalized conversion function yields the intermediate feature map. In this way, the input of the image generation model and the features of the object being normalized are both taken into account during the normalization operation; the parameters of the normalization are correlated with one another, differences are not erased by the normalization, the training effect of the image generation model is improved, and the quality of the generated image is improved.
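The full target normalized conversion function can be sketched end to end under the dimensions stated above (Z of dimension 512, C = 512 channels, G = 16 groups, hence g = 32 channels per group; the 4×4 spatial size is an assumption for illustration):

```python
import numpy as np

m, C, G, H, W = 512, 512, 16, 4, 4
g = C // G                                  # 32 channels per group

Z = np.random.randn(m)                      # random vector (model input)
x = np.random.randn(C, H, W)                # feature map P

A_gamma, A_beta = np.random.randn(g, m), np.random.randn(g, m)
B_gamma, B_beta = np.random.randn(g), np.random.randn(g)

gamma = A_gamma @ Z + B_gamma               # learnable params tied to Z
beta = A_beta @ Z + B_beta

# y(x) = (A_gamma Z + B_gamma)(x - E(x)) / sqrt(Var(x)) + (A_beta Z + B_beta),
# applied per group: each of the G groups is normalized with its own
# mean and variance, then scaled/shifted per channel within the group.
xg = x.reshape(G, g, H, W)
mean = xg.mean(axis=(1, 2, 3), keepdims=True)
var = xg.var(axis=(1, 2, 3), keepdims=True)
y = gamma[None, :, None, None] * (xg - mean) / np.sqrt(var) \
    + beta[None, :, None, None]
y = y.reshape(C, H, W)                      # intermediate feature map
```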
Fig. 2 is a schematic diagram of an image processing method according to an exemplary embodiment, as shown in fig. 2, including the steps of:
in step S210, an image to be processed or a random vector to be processed is acquired.
The image processing method of the embodiment of the disclosure may be implemented by an electronic device, and in particular, the electronic device may be a server.
After a trained image generation model is obtained by training according to the training method of any image generation model disclosed in the above embodiments, an image to be processed or a random vector to be processed is acquired, where the image to be processed may be represented as a vector of a certain dimension.
In step S220, the image to be processed or the random vector to be processed is input into a trained image generation model for processing to obtain an image with a specified facial feature, where the trained image generation model is trained according to the training method of any one of the image generation models disclosed in the above embodiments.
The image to be processed or the random vector to be processed is input into the trained image generation model for processing, and an image with the specified facial feature is obtained. The image with the specified facial feature may be, for example, an image with doll-face features, an image with elderly-face features, or an image with female facial features; the particular feature can be set according to actual needs. For example, if the trained image generation model obtained by the training method of any of the above embodiments is a model for generating female facial features, then inputting the image to be processed into the trained image generation model produces an image with female facial features.
In one possible embodiment, the image with the specified facial feature is an image with a doll face feature.
The image with the specified facial feature may be an image with doll-face features: if the image generation model trained according to the training method of any of the above embodiments is a model for generating doll-face features, then inputting the image to be processed into the trained model produces an image with doll-face features.
FIG. 3 is a block diagram of a training apparatus for an image generation model according to an exemplary embodiment. Referring to FIG. 3, the apparatus comprises: an acquisition module 310, a processing module 320, a grouping module 330, a setting module 340, a normalization module 350, a processing module 360, and a training module 370.
The acquisition module 310 is configured to acquire a random vector, where the random vector is a vector with a preset dimension;
the processing module 320 is configured to input the random vector into an image generation model to be trained for processing, so as to obtain a feature map comprising a plurality of channels;
A grouping module 330 configured to group feature layers of the feature map to obtain a target feature map including a target number of feature layer groups, where the target number is a divisor of a channel number of the feature map;
A setting module 340, configured to set a learnable parameter of a preset normalized conversion function according to the random vector, the number of channels of the feature map, and the target number, so as to obtain a target normalized conversion function;
the normalization module 350 is configured to normalize the target feature map according to the target normalized transformation function to obtain an intermediate feature map;
a processing module 360, configured to process the intermediate feature map by using the image generation model to be trained, so as to obtain a predicted image;
the training module 370 is configured to adjust parameters of the image generation model to be trained based on the predicted image and a sample image, where the sample image is an image with a specified facial feature, to obtain a trained image generation model.
In one possible embodiment, the image generation model is a generative adversarial network or a variational autoencoder.
In one possible implementation, the setting module 340 is specifically configured to:
generating a first matrix and a second matrix according to the random vector, wherein the number of columns of the first matrix and the number of columns of the second matrix are the same as the dimension of the random vector; and generating a first vector and a second vector according to the number of channels of the feature map and the target number, wherein the number of rows of the first matrix, the number of rows of the second matrix, the number of rows of the first vector, and the number of rows of the second vector are all the same;
And setting the learnable parameters of a preset normalized conversion function according to the first matrix, the random vector and the first vector, and the second matrix and the second vector to obtain a target normalized conversion function.
In one possible embodiment, the preset normalized conversion function is y(x) = γ(x − E(x))/√Var(x) + β, wherein x is the target feature map, E(x) is the mean of x, Var(x) is the variance of x, y(x) is the intermediate feature map, and γ and β are the learnable parameters.
In one possible implementation, the target normalized transformation function is:

y(x) = (A_γZ + B_γ)(x − E(x))/√Var(x) + (A_βZ + B_β)

wherein Z represents the random vector, A_γ represents the first matrix, B_γ represents the first vector, A_β represents the second matrix, and B_β represents the second vector; Z is an m-dimensional column vector, A_γ and A_β are matrices of g rows and m columns composed of g×m numbers, and B_γ and B_β are g-dimensional column vectors, where g is set according to the number of channels of the feature map as g = C/G, C representing the number of channels of the feature map, G representing the target number, and g being an integer.
The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in the embodiments of the method and will not be elaborated here.
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment, see fig. 4, including: the acquisition module 410, the image processing module 420.
An acquisition module 410 configured to acquire an image to be processed or a random vector to be processed;
the image processing module 420 is configured to input the image to be processed or the random vector to be processed into a trained image generation model to be processed, so as to obtain an image with a specified facial feature, where the trained image generation model is trained according to the method of any one of the first aspect.
Optionally, the image with the specified facial feature is an image with a doll face feature.
The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in the embodiments of the method and will not be elaborated here.
Fig. 5 is a block diagram of a first electronic device according to an exemplary embodiment of the present disclosure. Referring to fig. 5, the electronic device 800 may be, for example, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, or a personal digital assistant.
Referring to fig. 5, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, images, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only a boundary of a touch or a sliding action but also a duration and a pressure related to the touch or the sliding operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 described above further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the training method of the image generation model or the image processing method of any of the above embodiments.
Fig. 6 is a schematic diagram of a second electronic device shown in accordance with an exemplary embodiment of the present disclosure. For example, the electronic device 900 may be provided as a server. Referring to fig. 6, electronic device 900 includes a processing component 922 that further includes one or more processors and memory resources represented by memory 932 for storing instructions, such as applications, executable by processing component 922. The application programs stored in memory 932 may include one or more modules that each correspond to a set of instructions. Further, processing component 922 is configured to execute instructions to perform the training method of the image generation model described in any of the above embodiments or the image processing method described in any of the above embodiments.
The electronic device 900 may also include a power supply component 926 configured to perform power management for the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958. The electronic device 900 may operate based on an operating system stored in the memory 932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or a similar operating system.
In an embodiment of the present disclosure, there is also provided a storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the training method of the image generation model or the image processing method of any of the above embodiments. In an exemplary embodiment, a storage medium is also provided, such as a memory 804 including instructions executable by the processor 820 of the electronic device 800 to perform the above-described method. Alternatively, for example, the storage medium may be a non-transitory computer readable storage medium, such as ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
In an embodiment of the present disclosure, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the training method of the image generation model of any of the above embodiments or the image processing method of any of the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A method of training an image generation model, comprising:
obtaining a random vector, wherein the random vector is a vector with a preset dimension;
inputting the random vector into an image generation model to be trained for processing to obtain a feature map comprising a plurality of channels;
grouping the feature layers of the feature map to obtain a target feature map comprising a target number of feature layer groups, wherein the target number is a divisor of the channel number of the feature map;
generating a first matrix and a second matrix respectively according to the random vector, wherein the number of columns of the first matrix and the number of columns of the second matrix are the same as the dimension of the random vector, and generating a first vector and a second vector respectively according to the number of channels of the feature map and the number of targets, and the number of rows of the first matrix, the number of rows of the second matrix, the number of rows of the first vector and the number of rows of the second vector are the same;
According to the first matrix, the random vector and the first vector, the second matrix and the second vector set the learnable parameters of a preset normalized conversion function, and a target normalized conversion function is obtained;
normalizing the feature map according to the target normalized conversion function to obtain an intermediate feature map;
processing the intermediate feature map by using the image generation model to be trained to obtain a predicted image;
and adjusting parameters of the image generation model to be trained based on the predicted image and the sample image to obtain a trained image generation model, wherein the sample image is an image with specified facial features.
2. The method of claim 1, wherein the image generation model is a generative adversarial network or a variational autoencoder.
3. The method of claim 1, wherein the predetermined normalized conversion function is y(x) = γ(x − E(x))/√Var(x) + β, wherein x is a target feature map, E(x) is the mean of x, Var(x) is the variance of x, y(x) is the intermediate feature map, and γ and β are learnable parameters.
4. A method according to claim 3, wherein the target normalized conversion function is:

y(x) = (A_γZ + B_γ)(x − E(x))/√Var(x) + (A_βZ + B_β)

wherein Z represents the random vector, A_γ represents the first matrix, B_γ represents the first vector, A_β represents the second matrix, and B_β represents the second vector; Z is an m-dimensional column vector, A_γ and A_β are matrices of g rows and m columns composed of g×m numbers, and B_γ and B_β are g-dimensional column vectors, wherein g is set according to the number of channels of the feature map as g = C/G, wherein C represents the number of channels of the feature map, G is the target number, and g is an integer.
5. An image processing method, comprising:
acquiring an image to be processed or a random vector to be processed;
inputting the image to be processed or the random vector to be processed into a trained image generation model to be processed, so as to obtain an image with specified facial features, wherein the trained image generation model is trained according to the method of any one of claims 1 to 4.
6. The method of claim 5, wherein the image with the specified facial features is an image with doll face features.
7. A training device for an image generation model, comprising:
the acquisition module is configured to acquire a random vector, wherein the random vector is a vector with a preset dimension;
The processing module is configured to input the random vector into an image generation model to be trained for processing, and a feature map comprising a plurality of channels is obtained;
the grouping module is configured to group the feature layers of the feature images to obtain target feature images containing target number of feature layer groups, wherein the target number is a divisor of the channel number of the feature images;
the setting module is configured to generate a first matrix and a second matrix according to the random vector, wherein the number of columns of the first matrix and the number of columns of the second matrix are identical to the dimension of the random vector, and generate a first vector and a second vector according to the number of channels of the feature map and the number of targets, and the number of rows of the first matrix, the number of rows of the second matrix, the number of rows of the first vector and the number of rows of the second vector are identical; according to the first matrix, the random vector and the first vector, the second matrix and the second vector set the learnable parameters of a preset normalized conversion function, and a target normalized conversion function is obtained;
the normalization module is configured to normalize the feature map according to the target normalized conversion function to obtain an intermediate feature map;
The processing module is configured to process the intermediate feature map by utilizing the image generation model to be trained to obtain a predicted image;
the training module is configured to adjust parameters of the image generation model to be trained based on the predicted image and a sample image to obtain a trained image generation model, and the sample image is an image with specified facial features.
8. The apparatus of claim 7, wherein the image generation model is a generative adversarial network or a variational autoencoder.
9. The apparatus of claim 7, wherein the predetermined normalized conversion function is y(x) = γ(x − E(x))/√Var(x) + β, wherein x is a target feature map, E(x) is the mean of x, Var(x) is the variance of x, y(x) is the intermediate feature map, and γ and β are learnable parameters.
10. The apparatus of claim 9, wherein the target normalized conversion function is:

y(x) = (A_γZ + B_γ)(x − E(x))/√Var(x) + (A_βZ + B_β)

wherein Z represents the random vector, A_γ represents the first matrix, B_γ represents the first vector, A_β represents the second matrix, and B_β represents the second vector; Z is an m-dimensional column vector, A_γ and A_β are matrices of g rows and m columns composed of g×m numbers, and B_γ and B_β are g-dimensional column vectors, wherein g is set according to the number of channels of the feature map as g = C/G, wherein C represents the number of channels of the feature map, G is the target number, and g is an integer.
11. An image processing apparatus, comprising:
the acquisition module is configured to acquire an image to be processed or a random vector to be processed;
the image processing module is configured to input the image to be processed or the random vector to be processed into a trained image generation model to be processed, so as to obtain an image with specified facial features, wherein the trained image generation model is trained according to the method of any one of claims 1 to 4.
12. The apparatus of claim 11, wherein the image with the specified facial features is an image with doll face features.
13. An electronic device, comprising: a processor; a memory for storing the processor-executable instructions, wherein the processor is configured to execute the instructions to implement the training method of the image generation model of any of claims 1-4 or the image processing method of any of claims 5-6.
14. A storage medium having stored therein a computer program which, when executed by a processor, implements the training method of an image generation model according to any one of claims 1-4 or the image processing method according to any one of claims 5-6.
CN201911193533.5A 2019-11-28 2019-11-28 Training method of image generation model, image processing method and device Active CN112861592B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911193533.5A CN112861592B (en) 2019-11-28 2019-11-28 Training method of image generation model, image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911193533.5A CN112861592B (en) 2019-11-28 2019-11-28 Training method of image generation model, image processing method and device

Publications (2)

Publication Number Publication Date
CN112861592A CN112861592A (en) 2021-05-28
CN112861592B true CN112861592B (en) 2023-12-29

Family

ID=75995742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911193533.5A Active CN112861592B (en) 2019-11-28 2019-11-28 Training method of image generation model, image processing method and device

Country Status (1)

Country Link
CN (1) CN112861592B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378911B (en) * 2021-06-08 2022-08-26 北京百度网讯科技有限公司 Image classification model training method, image classification method and related device

Citations (6)

Publication number Priority date Publication date Assignee Title
CN107609598A (en) * 2017-09-27 2018-01-19 武汉斗鱼网络科技有限公司 Image authentication model training method, apparatus and readable storage medium
CN110009059A (en) * 2019-04-16 2019-07-12 北京字节跳动网络技术有限公司 Method and apparatus for generating a model
CN110135349A (en) * 2019-05-16 2019-08-16 北京小米智能科技有限公司 Recognition method, apparatus, device and storage medium
CN110163267A (en) * 2019-05-09 2019-08-23 厦门美图之家科技有限公司 Training method of image generation model and method for generating images
CN110163077A (en) * 2019-03-11 2019-08-23 重庆邮电大学 Lane recognition method based on fully convolutional neural networks
CN110390394A (en) * 2019-07-19 2019-10-29 深圳市商汤科技有限公司 Batch normalization data processing method and apparatus, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN109858524B (en) Gesture recognition method and device, electronic equipment and storage medium
CN107967459B (en) Convolution processing method, convolution processing device and storage medium
EP3086275A1 (en) Numerical value transfer method, terminal, cloud server, computer program and recording medium
CN111047507B (en) Training method of image generation model, image generation method and device
CN108470322B (en) Method and device for processing face image and readable storage medium
CN109543066B (en) Video recommendation method and device and computer-readable storage medium
KR20160021737A (en) Method, apparatus and device for image segmentation
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN107341509B (en) Convolutional neural network training method and device and readable storage medium
CN110288716B (en) Image processing method, device, electronic equipment and storage medium
CN107220614B (en) Image recognition method, image recognition device and computer-readable storage medium
CN109325908B (en) Image processing method and device, electronic equipment and storage medium
CN109543069B (en) Video recommendation method and device and computer-readable storage medium
CN111078170B (en) Display control method, display control device, and computer-readable storage medium
CN107424130B (en) Picture beautifying method and device
CN112188091A (en) Face information identification method and device, electronic equipment and storage medium
CN107239758B (en) Method and device for positioning key points of human face
CN107992894B (en) Image recognition method, image recognition device and computer-readable storage medium
US9665925B2 (en) Method and terminal device for retargeting images
CN112861592B (en) Training method of image generation model, image processing method and device
CN110148424B (en) Voice processing method and device, electronic equipment and storage medium
CN110533006B (en) Target tracking method, device and medium
CN107633490B (en) Image processing method, device and storage medium
CN112184876B (en) Image processing method, image processing device, electronic equipment and storage medium
CN115374256A (en) Question and answer data processing method, device, equipment, storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant