CN107945204B - Pixel-level portrait matting method based on a generative adversarial network - Google Patents

Pixel-level portrait matting method based on a generative adversarial network

Info

Publication number
CN107945204B
Authority
CN
China
Prior art keywords
network
image
generation
portrait
generated
Prior art date
Legal status
Active
Application number
CN201711022184.1A
Other languages
Chinese (zh)
Other versions
CN107945204A (en)
Inventor
王伟
周红丽
王晨吉
方凌
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201711022184.1A priority Critical patent/CN107945204B/en
Publication of CN107945204A publication Critical patent/CN107945204A/en
Application granted granted Critical
Publication of CN107945204B publication Critical patent/CN107945204B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/11 - Region-based segmentation
    • G06T7/143 - Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20076 - Probabilistic image processing
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]

Abstract

The invention discloses a pixel-level portrait matting method based on a generative adversarial network, which addresses the problem that machine matting has required massive, expensively annotated data sets to train and optimize a network. The method presets a generator network and a discriminator network in an adversarial learning mode, where the generator is a deep neural network with skip connections. A real image containing a portrait is input into the generator, which outputs a portrait segmentation image. A first and a second image pair are input into the discriminator to output discrimination probabilities, from which the loss functions of the generator and the discriminator are determined. The configuration parameters of the two networks are adjusted by minimizing the two network loss values, completing the training of the generator. A test image is then input into the trained generator to produce a portrait segmentation image; the generated image is probability-transformed, and the resulting probability matrix is finally sent into a conditional random field for further optimization. The invention greatly reduces the number of training images required and improves both efficiency and segmentation precision.

Description

Pixel-level portrait matting method based on a generative adversarial network
Technical Field
The invention relates to the technical field of computer vision, in particular to a pixel-level portrait matting method based on a generative adversarial network, which is used for separating a portrait from its background.
Background
Image matting has long been a hot problem in the field of computer vision. Pixel-level portrait matting requires that the foreground object be accurately extracted from the background, and belongs to the more refined problem of binary semantic segmentation.
With the rapid development of electronic commerce, portrait matting has very wide application scenarios. For example, more and more people choose to buy clothes online, which has given rise to the image-based product search functions of e-commerce platforms; accurately searching for similar clothes is difficult, so the portrait in the picture must first be segmented. Likewise, with the advent of various portrait-beautifying software, the background-blurring function also needs to accurately distinguish the portrait from the background. In video surveillance for case solving, monitored portraits are preprocessed so that a search target can be located quickly. However, in most images the background is complex, and the technology for accurately distinguishing foreground from background still needs improvement.
Before the deep-learning era, pixel-based clustering methods and graph-partitioning algorithms were mainly used to solve the related semantic segmentation problems. The traditional graph-partitioning approach abstracts an image into a graph and then performs semantic segmentation by means of graph-theoretic algorithms. Its main defects are poor segmentation quality and low speed when the background is complex or highly similar to the target.
With the arrival of the artificial intelligence 2.0 era, the development of deep-learning technology, the growth of computing power, and the emergence of big data have created a favorable environment for semantic segmentation. Many deep-neural-network models exist for semantic segmentation, such as fully convolutional networks (FCNs), which use upsampling deconvolution layers to obtain end-to-end dense predictions. However, because the network structure is simple, many training images are typically required to train the network. In addition, data sets for pixel-level portrait matting are scarce and their annotation cost is huge: one training image takes about half an hour of manual labeling, yet a segmentation model based on a fully convolutional network needs tens of thousands of training images to obtain good results. Building a usable data set by manual annotation is therefore time-consuming and labor-intensive.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a pixel-level portrait matting method based on a generative adversarial network, which obtains better segmentation results using far fewer training images.
The invention relates to a pixel-level portrait matting method based on a generative adversarial network. For each real image containing a portrait, an annotated image with the real portrait separated from the background is obtained by manual labeling. The method comprises the following steps:
(1) presetting the networks: a generator network and a discriminator network are preset and placed in an adversarial learning mode; both are deep neural networks, and the generator is a deep neural network with skip connections, hereinafter simply the generator network;
(2) generating a segmentation image: a real image containing a portrait is input into the generator network, which outputs a segmentation image with the portrait separated from the background, hereinafter the generated portrait segmentation image;
(3) calculating loss function values: the real portrait image paired with the generated portrait segmentation image (the first image pair) and the real portrait image paired with the annotated image (the second image pair) are input into the discriminator network; the discriminator computes the false discrimination probability and the true discrimination probability, and the loss value of the discriminator and the loss value of the generator are obtained from their respective loss function formulas;
(4) updating the network parameters: the discriminator loss value and the generator loss value are minimized respectively, and the parameter values of both networks are updated iteratively with the deep-neural-network back-propagation algorithm, completing the training of the generator and the discriminator;
(5) generating test-set portrait segmentation images: after training, the trained generator receives a real image containing the portrait to be segmented and, through iterative computation in the network, outputs the generated portrait segmentation image of the test set;
(6) optimizing the segmentation image to finish portrait matting: the generated portrait segmentation image of the test set is probability-transformed, the portrait probability matrix of the test set is used as the input of a conditional random field, and the conditional random field further refines the probability-transformed segmentation image, completing portrait matting based on the generative adversarial network. (A code sketch of the training procedure of steps (1) to (4) follows this list.)
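For concreteness, a minimal Python/PyTorch sketch of one adversarial training step covering steps (1) to (4) is given below. The patent does not specify a framework; all names (generator, discriminator, opt_g, opt_d) and the two-argument discriminator signature are illustrative assumptions, while the loss formulas and example weights follow the embodiments described later.

    import torch

    EPS = 1e-13                          # guards the logarithms (see embodiment 4)
    GAN_WEIGHT, L1_WEIGHT = 10.0, 90.0   # example loss weights from embodiment 4

    def train_step(generator, discriminator, opt_g, opt_d, real_img, annotated):
        """One adversarial update; real_img is a real image containing a
        portrait and annotated is its manually labeled segmentation."""
        fake_seg = generator(real_img)                       # step (2)

        # Step (3): discrimination probabilities for the two image pairs.
        p_fake = discriminator(real_img, fake_seg.detach())  # first image pair
        p_real = discriminator(real_img, annotated)          # second image pair

        # Step (4): minimize the discriminator loss ...
        d_loss = -(torch.log(p_real + EPS)
                   + torch.log(1 - p_fake + EPS)).mean()
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # ... then minimize the generator loss (adversarial + weighted L1 term).
        p_fake = discriminator(real_img, fake_seg)
        g_loss = (-GAN_WEIGHT * torch.log(p_fake + EPS).mean()
                  + L1_WEIGHT * (fake_seg - annotated).abs().mean())
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()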
Compared with the prior art, the invention has the following technical advantages:
First, the invention is based on a generative adversarial network rather than a fully convolutional network. The adversarial network's structure is more complex and has more parameters, so it more easily learns fine-grained features such as the shapes and colors of the portrait and the background, yielding higher segmentation precision. An FCN needs tens of thousands of training images to achieve a good portrait/background separation; the training set used by the invention is about two orders of magnitude smaller, which also speeds up training. The invention thus provides a new method for segmenting the portrait from the background.
In addition, the invention uses a deep neural network with skip connections, which helps the generator converge quickly during training and reach a good segmentation result in less time. A random deactivation (dropout) mechanism is added in the decoder layers of the generator to prevent overfitting caused by the large number of parameters and the complex network structure. Together, these two mechanisms strengthen the robustness of the generative adversarial network and speed up training.
With the technical scheme of the invention, after the generator and discriminator networks are preset, they learn by competing against each other, and the generated image is sent into a conditional random field for further optimization, improving the smoothness of the segmented portrait edges.
Drawings
FIG. 1 is a schematic diagram of the generator network of the present invention;
FIG. 2 is a schematic diagram of an encoder layer of the present invention;
FIG. 3 is a schematic diagram of a decoder layer of the present invention;
FIG. 4 is a schematic diagram of the discriminator network of the present invention;
FIG. 5 is a flowchart of optimizing the generated portrait segmentation image of the test set with a conditional random field according to the present invention;
FIG. 6 is a schematic diagram of the pixel-level portrait matting and segmentation process of the present invention;
FIG. 7 shows the segmentation effect of separating the portrait from the background in an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Example 1
The invention provides a pixel-level portrait matting method based on a generative adversarial network, developed to address the inefficiency caused by the massive training sets required by existing matting methods. For real images containing portraits, the invention, like a fully-convolutional-network training model, still needs manually annotated images with the real portrait separated from the background, but far fewer of them. See fig. 7b for an image with a manually annotated portrait separated from the background. Referring to fig. 6, the portrait matting method of the invention comprises the following steps:
(1) Presetting the networks: a generator network and a discriminator network are preset and placed in an adversarial learning mode; that is, the loss function of the generator is derived through the loss function of the discriminator. Both are deep neural networks, and the generator is a deep neural network with skip connections, hereinafter simply the generator network. The generator mainly consists of convolutional layers, a group of encoder layers, a group of decoder layers, and deconvolution layers; see fig. 1. In the direction of information flow, a portrait input layer, a convolutional layer, the encoder layer group, the decoder layer group, a deconvolution layer, and a segmentation-image output layer are connected in sequence. The encoder layers correspond one-to-one with the decoder layers, and each encoder layer has a skip connection to its corresponding decoder layer. The discriminator network consists of five convolutional layers plus activation functions; see fig. 4.
(2) Generating a segmentation image: before a real image containing a portrait is input into the generator, the input image is preprocessed (resolution modification, crop normalization, image flipping, and the like). Through the generator's computation, a segmentation image with the portrait separated from the background is output, hereinafter the generated portrait segmentation image.
(3) Calculating loss function values: the real portrait image with the generated portrait segmentation image output by the generator (first image pair), and the real portrait image with the annotated image (second image pair), are input into the discriminator. The false discrimination probability and the true discrimination probability are computed with the discriminator weights, and the loss values of the discriminator and the generator are obtained from their respective loss function formulas.
(4) Updating the network parameters: the discriminator loss value and the generator loss value are minimized respectively, and the parameter values of both networks are updated iteratively with the deep-neural-network back-propagation algorithm, completing training. The trained generator is then used to produce test-set segmentation images; the discriminator is no longer used at test time.
(5) Generating test-set portrait segmentation images: after training is finished, the generator receives a real image containing the portrait to be segmented, i.e. a test image. Through iterative computation in the network, the trained generator segments the portrait from the test image and outputs the generated portrait segmentation image of the test set.
(6) Optimizing the segmentation image to finish portrait matting: the generated portrait segmentation image of the test set is probability-transformed, the portrait probability matrix of the test set is used as the input of a conditional random field, and the conditional random field further refines the probability-transformed segmentation image. A smaller data set thus yields a more accurate portrait/background segmentation, completing portrait matting based on the generative adversarial network.
The invention trains a generative adversarial network (GAN) with a much smaller training set and still obtains better results. A generator network and a discriminator network are preset, both of them deep neural networks. In addition, the invention adds conditional-random-field optimization to the segmentation output at test time, making the segmentation result finer.
Example 2
The overall technical scheme of the pixel-level portrait matting method based on a generative adversarial network is the same as in embodiment 1. The generator network described in step 1 is a deep neural network with skip connections: skip connections (Skip Connection) form gradient-transfer paths between the N concatenated encoder layers and the N concatenated decoder layers of the generator, each being at the same time an identity mapping; see fig. 1. Specifically, encoder layers 3-8 are connected to decoder layers 11-16. For example, the output of encoder layer 3 is fed simultaneously to encoder layer 4 and to decoder layer 16: the output to encoder layer 4 is the basic output, while the output to decoder layer 16 is the skip connection. The other layers follow analogously, forming the skip-connected neural network structure of the invention. Its effect is to prevent the vanishing-gradient and degradation problems caused by increasing the depth of the generator.
Example 3
Based on the overall technical scheme of the pixel-level portrait matting method of embodiments 1-2, a random deactivation mechanism (dropout) is introduced into the decoder layers of the generator. Specifically, each decoder layer randomly discards deactivated parameters before its final output, i.e. the output values of the deactivated units are randomly set to 0. This simplifies many unnecessary computations and improves the robustness of the network structure.
Example 4
The overall technical scheme of the pixel-level portrait matting method based on a generative adversarial network is the same as in embodiments 1-3. For the loss functions mentioned in steps (3) and (4): the smaller the generator loss, the more faithful the portrait/background segmentation image output by the generator. The generator loss formula weights two losses with two coefficients; specifically:
gen_loss=GAN_weight×GAN_loss+L1_weight×L1_loss
where GAN_weight is the weight of the adversarial loss GAN_loss, with a value between 0 and 50 (10 in this example), and L1_weight is the weight of the L1 norm loss between the generated portrait segmentation image output by the generator and the annotated image, with a value between 0 and 100 (90 in this example). GAN_loss is the adversarial loss, calculated as follows:
GAN_loss = -mean(log(P_fake + EPS))
where P_fake is the probability matrix produced by the discriminator from the real portrait image and the generated portrait segmentation image output by the generator, i.e. the probability that the discriminator judges the generator's output to be real: the false discrimination probability computed with the discriminator weights. To avoid errors when the argument of the logarithm equals 0, a very small number EPS is added to the formula as a simple correction; EPS ranges from 10^-13 to 10^-11 and is 10^-13 in this example.
The L1 norm loss L1_loss, also called the pixel-level regularization loss, describes the degree of difference between the generated portrait segmentation image output by the generator and the annotated image; i.e. the pixel-level regularization loss is determined from that difference.
The L1 norm loss L1_ loss is calculated as follows:
L1_loss = (1/N²) × Σ_{i=1..N} Σ_{j=1..N} |targets_ij - output_ij|
where targets_ij denotes the annotated image and output_ij the generated portrait segmentation image output by the generator; N is the side length, in pixels, of the real portrait image, ranging from 256 to 1024 (256 in this example).
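As a minimal NumPy sketch of this generator loss (variable names are illustrative; p_fake, output, and targets are assumed to be arrays of matching shape):

    import numpy as np

    EPS = 1e-13          # 10^-13 in this example
    GAN_WEIGHT = 10.0    # GAN_weight in this example
    L1_WEIGHT = 90.0     # L1_weight in this example

    def gen_loss(p_fake, output, targets):
        """Weighted sum of the adversarial term and the pixel-level L1 term."""
        gan_loss = -np.mean(np.log(p_fake + EPS))      # adversarial loss
        l1_loss = np.mean(np.abs(targets - output))    # (1/N^2) * sum |t - o|
        return GAN_WEIGHT * gan_loss + L1_WEIGHT * l1_loss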
Example 5
The overall technical scheme of the pixel-level portrait matting method based on a generative adversarial network is the same as in embodiments 1-4. For the loss functions mentioned in steps (3) and (4): the smaller the discriminator loss, the more accurate the discrimination probabilities output by the discriminator. The discriminator loss is calculated as:
discrim_loss = -mean(log(P_real + EPS) + log(1 - P_fake + EPS))
where EPS, as in embodiment 4, is a very small constant in the range 10^-13 to 10^-11, and P_real is the true discrimination probability: the probability that the discriminator judges the annotated image to be real, specifically the probability matrix produced from the second image pair (the real portrait image and the annotated image), computed with the discriminator weights.
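A matching sketch of the discriminator loss, with the same EPS convention (names again illustrative):

    import numpy as np

    EPS = 1e-13

    def discrim_loss(p_real, p_fake):
        """Penalizes judging annotated images as fake or generated ones as real."""
        return -np.mean(np.log(p_real + EPS) + np.log(1.0 - p_fake + EPS))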
The invention makes the generator and discriminator learn by competing against each other: using the deep-neural-network back-propagation algorithm, the parameters of both networks are updated iteratively during propagation, minimizing the discriminator loss and the generator loss. This makes the networks converge quickly and further strengthens the generator's generative capability.
Example 6
The overall technical scheme of the pixel-level portrait matting method based on a generative adversarial network is the same as in embodiments 1-5. In step (6), the probability-transformed portrait segmentation image is further refined with the conditional random field. That is, after the real portrait image is input into the generator and the binary-semantic-segmentation portrait image is output, the generated segmentation image is probability-transformed: the whole image is divided by 255, converting it to values between 0 and 1 that serve as the probability matrix of the portrait part. Subtracting the portrait probability matrix from 1 gives the probability matrix of the background part; both are supplied to the conditional random field for computing its potential functions.
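The probability transformation described here amounts to a few lines; in this sketch, seg_image is assumed to hold the generator's 0-255 output:

    import numpy as np

    def to_probabilities(seg_image):
        """Convert a generated segmentation image into the two probability
        matrices supplied to the conditional random field."""
        fg_prob = seg_image.astype(np.float64) / 255.0   # portrait part
        bg_prob = 1.0 - fg_prob                          # background part
        return fg_prob, bg_prob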
Example 7
The overall technical scheme of the pixel-level portrait matting method based on a generative adversarial network is the same as in embodiments 1-6. For the conditional-random-field optimization involved in the invention, a dedicated library, such as a Python-based DenseCRF library, can be called. Referring to fig. 5, the optimization process is:
(1) Probability-transform the generated portrait segmentation image of the test set output by the generator to obtain a probability matrix.
(2) Obtain position-related features of the portrait and background in the image according to the conditional-random-field algorithm, making the portrait/background segmentation smoother.
(3) Obtain color-related features of the portrait and background according to the conditional-random-field algorithm, and finally output the optimized portrait segmentation image, completing pixel-level portrait matting based on the generative adversarial network.
The invention discloses a pixel-level portrait matting method based on a generative adversarial network, solving the problem that machine matting has required massive, expensively produced data sets to train and optimize a network. The implementation comprises: presetting a generator network and a discriminator network in an adversarial learning mode, the generator being a deep neural network with skip connections; training the generator to output portrait segmentation images; calculating loss function values; updating the network parameters; generating test-set portrait segmentation images; and optimizing the segmentation image to finish the matting. The configuration parameters of the generator and the discriminator are adjusted by minimizing the generator loss and the discriminator loss. The probability-transformed generated segmentation image is input into a conditional random field to further optimize the segmentation result. The training process greatly reduces the number of training images and improves efficiency and segmentation precision. The method is used for separating portraits from backgrounds.
A more detailed example is given below which, in conjunction with the figures and the matting process, further illustrates the invention:
example 8
The invention provides a pixel-level portrait matting method based on a generative adversarial network. For a single real image containing a portrait, it provides an adversarial network framework comprising two deep neural networks, a generator and a discriminator, trained to learn from and compete against each other simultaneously. The purpose of the generator is to produce a portrait segmentation image from a real image containing a portrait; the purpose of the discriminator is to distinguish the annotated image from the segmentation image produced by the generator. Through adversarial training, the generator produces segmentation images as close as possible to the real annotation. In addition, the invention adds conditional-random-field optimization to the generated segmentation output at test time, making the result finer. A more complete embodiment of the invention follows:
First, the network structure of the invention is introduced.
As shown in fig. 1, a schematic structural diagram of the generator in this embodiment, the invention uses a U-Net-based architecture. The meaning of each layer of the generator structure is as follows:
Layer 1 is the portrait-image input layer; its operations mainly comprise modifying the portrait image resolution, crop normalization, and the like.
Layer 2 is a convolutional layer with stride 2, kernel size 4 × 4, and 64 kernels.
Layers 3-9 are encoder layers. As shown in fig. 2, each encoder layer includes three sublayers: a convolutional layer, a ReLU (Rectified Linear Unit) layer, and a batch normalization layer. Each convolutional layer has stride 2 and twice as many kernels as the previous layer; the ReLU layer implements the nonlinear transformation; the batch normalization layer normalizes the layer's output (i.e. the next layer's input).
Layers 10-16 are decoder layers. As shown in fig. 3, each decoder layer contains three sublayers: a deconvolution layer, a ReLU layer, and a batch normalization layer. Each deconvolution layer has stride 2 in both height and width, i.e. its output is twice the input in both directions. The ReLU and batch normalization layers act as described above. In addition, a random deactivation mechanism (dropout) is introduced in the decoder layers, with a deactivation ratio of 0.5 during training.
Layer 17 is a deconvolution layer whose output size is 256 × 256 × 1.
Layer 18 is the segmentation-image output layer; its main operation is image cropping, and it finally outputs the generated portrait segmentation image.
Note that, as described above, there is a skip connection (Skip Connection) between each encoder layer and its decoder layer, serving as a gradient-transfer path. A code sketch of this structure is given below.
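The following PyTorch sketch builds a shallow three-level analogue of the 18-layer generator described above, including the skip connections of embodiment 2 and the dropout of embodiment 3. The framework choice, class names, and reduced depth are illustrative assumptions, not the patent's implementation:

    import torch
    import torch.nn as nn

    class EncoderBlock(nn.Module):
        """Encoder layer (cf. layers 3-9): stride-2 convolution, ReLU,
        batch normalization."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
                nn.ReLU(),
                nn.BatchNorm2d(out_ch),
            )
        def forward(self, x):
            return self.block(x)

    class DecoderBlock(nn.Module):
        """Decoder layer (cf. layers 10-16): stride-2 deconvolution, ReLU,
        batch normalization, and dropout 0.5 (the random deactivation
        mechanism of embodiment 3)."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
                nn.ReLU(),
                nn.BatchNorm2d(out_ch),
                nn.Dropout(0.5),
            )
        def forward(self, x):
            return self.block(x)

    class UNetGenerator(nn.Module):
        """Three-level toy analogue of the 18-layer generator of fig. 1."""
        def __init__(self):
            super().__init__()
            self.enc1 = EncoderBlock(3, 64)
            self.enc2 = EncoderBlock(64, 128)    # kernel count doubles per layer
            self.enc3 = EncoderBlock(128, 256)
            self.dec3 = DecoderBlock(256, 128)
            self.dec2 = DecoderBlock(128 + 128, 64)               # + skip from enc2
            self.dec1 = nn.ConvTranspose2d(64 + 64, 1, 4, 2, 1)   # + skip from enc1
        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(e1)
            e3 = self.enc3(e2)
            d3 = self.dec3(e3)
            d2 = self.dec2(torch.cat([d3, e2], dim=1))   # skip connection
            return torch.sigmoid(self.dec1(torch.cat([d2, e1], dim=1)))

A 3 × 256 × 256 input yields a 1 × 256 × 256 output, matching the 256 × 256 × 1 segmentation image of layer 17 (cropping in layer 18 aside).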
The above describes a specific implementation of the generator in the invention. Note that, in the field of portrait segmentation based on generative adversarial networks, where the input is a real image containing a portrait, other modifications made by professionals in the field, such as parameter adjustments, fall within the scope of protection of this application.
As shown in fig. 4, a schematic diagram of the discriminator structure of the invention, the meaning of each discriminator layer is as follows:
Layer 1 is the discrimination-image input layer; its main operation is to superpose the generated portrait segmentation image output by the generator, or the annotated image, with the corresponding real portrait image of its pair. A code sketch follows the layer list.
Layer 2 is a convolutional layer with stride 2, kernel size 4 × 4, and 64 kernels.
Layer 3 is a convolutional layer with stride 2, kernel size 4 × 4, and 128 kernels.
Layer 4 is a convolutional layer with stride 2, kernel size 4 × 4, and 256 kernels.
Layer 5 is a convolutional layer with stride 1, kernel size 4 × 4, and 512 kernels.
Layer 6 is a convolutional layer with stride 1, kernel size 4 × 4, and 1 kernel.
Layer 7 is the discriminator output layer; its basic operation is activation by a Sigmoid function (a deep-learning activation function), outputting a 30 × 30 matrix of probability values.
The above describes the implementation of the discriminator in this embodiment. Note that, in the field of portrait segmentation based on generative adversarial networks, where the input is a real image containing a portrait, other modifications made by professionals in the field, such as parameter adjustments, fall within the scope of protection of this application.
Next, the invention's process of optimizing the segmentation image with the conditional random field is introduced:
as shown in fig. 5, fig. 5 is a flowchart for optimizing a human-scene segmentation image generated by a test based on a conditional random field, and an output optimization process based on the conditional random field mainly includes the following steps:
and step 1, the generated network output result is subjected to probability, so that a probability matrix for calculating a potential function is provided for the conditional random field.
And 2, adding picture position related features, and obtaining the position related features of the portrait and the background in the image according to the algorithm of the conditional random field, so that the segmentation effect of the portrait and the background can be smoother.
And 3, adding picture color related features, and obtaining the color related features of the portrait and the background in the image according to the algorithm of the conditional random field.
And 4, iteratively solving.
And 5, outputting the optimized human scene segmentation image.
Based on the generator network, discriminator network, and conditional random field described above, the pixel-level portrait matting and segmentation process proposed by the invention is shown in fig. 6 and comprises the following steps:
s301, inputting the real image containing the portrait into the generation network to output a segmentation image of the portrait separated from the background. Conventionally, some preprocessing work, such as scaling, normalization, etc., is required before the real image containing the portrait is input into the network.
S302: the real portrait image with the generated portrait segmentation image output by the generator (first image pair), and the real portrait image with the annotated image (second image pair), are input into the discriminator. The discriminator computes the true discrimination probability P_real and the false discrimination probability P_fake, from which the discriminator loss discrim_loss and the generator loss gen_loss are obtained.
In this embodiment, the training-set labels are annotated manually: the pixel value of the portrait part is 255 and that of the background part is 0; this is called the annotated image in this application. The discriminator loss discrim_loss is calculated as follows:
discrim_loss = -mean(log(P_real + EPS) + log(1 - P_fake + EPS))
where EPS represents a very small constant in the range 10^-13 to 10^-11; its value in this example is 10^-12.
In this embodiment, the real portrait image and the generator's portrait/background segmentation image are input into the discriminator as the first image pair, and the real portrait image and the annotated image as the second image pair. Each time, the discriminator outputs a probability matrix representing the degree of similarity between corresponding parts of the two input images. The probability matrix produced from the real portrait image and the generated segmentation image is therefore the probability that the discriminator judges the generator's output to be real, called P_fake in this application; similarly, the probability matrix produced from the second image pair (real portrait image and annotated image) is the probability that the discriminator judges the annotated image to be real, called P_real. Minimizing the discriminator loss improves the discriminator's ability to tell the generator's output from the annotated image.
In this embodiment, the generator loss comprises two parts. One is the adversarial loss GAN_loss, calculated as follows:
GAN_loss = -mean(log(P_fake + EPS))
The other is the L1 norm loss, which describes the degree of difference between the generated portrait segmentation image output by the generator and the annotated image; it is calculated as follows:
L1_loss = (1/N²) × Σ_{i=1..N} Σ_{j=1..N} |targets_ij - output_ij|
among them, targetsijRepresenting annotated images, outputijShown is a segmented image with the portrait separated from the background, which is generated as a network output.
The two losses are weighted by two coefficients, giving the final generator loss formula:
gen_loss=GAN_weight×GAN_loss+L1_weight×L1_loss
where GAN_weight is the weight of the adversarial loss, with value 1 in this example, and L1_weight is the weight of the L1 norm loss between the generator's output image and the annotated image, with value 100 in this example.
Minimizing the generator loss improves the generator's ability to produce correct portrait/background segmentation images.
S303: by minimizing the discriminator loss value and the generator loss value respectively, the parameters of the generator and the discriminator are updated iteratively using the neural-network back-propagation algorithm.
Note that the parameter adjustment described here is based on the neural-network back-propagation algorithm; its adjustment process and strategy fall within the scope of protection of this application.
S304: after parameter adjustment is completed, receive the real image containing the portrait to be segmented as the input of the generator, and obtain the generator's output.
In this embodiment, after the parameters of the generator and discriminator have been adjusted in steps S301-S303, the real image to be segmented may be preprocessed and fed to the generator to output a segmentation image. The generated portrait segmentation image has continuously distributed pixel values: the closer a value is to 255, the higher the probability that the network considers that pixel to belong to the portrait.
S305: using the probability-transformed generator output as input, further refine the result with the conditional random field.
In this embodiment, to further refine the segmentation and make the portrait edges of the segmentation image smoother, the generator's output is imported into the conditional random field. Since the conditional-random-field algorithm takes a probability matrix as input while the generator outputs pixel values, the output image is first probability-transformed: the whole image is divided by 255 and converted into values between 0 and 1, which form the probability matrix of the portrait part; subtracting it from 1 gives the probability matrix of the background part. Using a Python DenseCRF library, the invention conveniently extracts the position and color features of the image and solves iteratively to obtain a finer segmentation image.
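A sketch of this refinement using pydensecrf, one publicly available Python DenseCRF library (the patent does not name a specific package, and the pairwise parameter values below are illustrative assumptions):

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def refine_with_crf(rgb_image, seg_image, n_iters=5):
        """Refine a generated segmentation image (0-255 values) with a dense CRF.
        rgb_image: H x W x 3 uint8 real image; seg_image: H x W generator output."""
        h, w = seg_image.shape
        fg = np.clip(seg_image.astype(np.float32) / 255.0, 1e-8, 1.0 - 1e-8)
        probs = np.stack([1.0 - fg, fg])           # background, portrait

        crf = dcrf.DenseCRF2D(w, h, 2)
        crf.setUnaryEnergy(unary_from_softmax(probs))
        # Position-related features: smooth the portrait/background boundary.
        crf.addPairwiseGaussian(sxy=3, compat=3)
        # Color-related features: align the boundary with image colors.
        crf.addPairwiseBilateral(sxy=80, srgb=13,
                                 rgbim=np.ascontiguousarray(rgb_image), compat=10)

        q = crf.inference(n_iters)                 # iterative solving
        return np.argmax(np.array(q).reshape(2, h, w), axis=0)  # 1 = portrait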
The technical effects of the invention are further illustrated by the following simulation and its results:
example 9
Based on the overall technical scheme of the pixel-level portrait matting method of embodiments 1-8, this embodiment uses the ACC index to measure the actual generation effect of the adversarial network, specifically on the segmentation result separating the portrait from the background, as shown in fig. 7.
FIG. 7a is a real test-set image containing a portrait; fig. 7b is the corresponding annotated image; fig. 7c is the portrait/background segmentation image produced after training and optimization with the generative adversarial network. By visual inspection, the torso contour of the portrait produced by the adversarial network achieves a good segmentation effect. According to the ACC calculation rule, the more accurate the generated result, the smaller the corresponding ACC value. The ACC value of fig. 7 computed by the ACC formula is 0.0138; 500 samples were tested in this example, and the ACC values of the images are all on the order of 10^-2, at the lower end of that order, which also indicates the high accuracy of the results generated by the invention's network.
In the information age, image information accounts for a large proportion of all information, and acquiring, processing, and exploiting it is a major trend. Although the generated portrait segmentation images still differ somewhat from the annotated images, manual annotation is time-consuming and labor-intensive and not a long-term solution; machine annotation is the direction of future development. In the prior art, segmentation based on a fully convolutional network needs on the order of 100,000 training images to obtain good results. The invention obtains good portrait segmentation images with only about 1,000 training images, a reduction of two orders of magnitude compared with the FCN, greatly improving segmentation speed and precision and helping to bring portrait segmentation technology into engineering applications.
In short, the invention discloses a pixel-level portrait matting method based on a generative adversarial network, solving the problem that machine matting has required massive, expensively produced data sets for network training and optimization. The method presets a generator network and a discriminator network in an adversarial learning mode, the generator being a deep neural network with skip connections; inputs a real image containing a portrait into the generator to output a portrait segmentation image; inputs the first and second image pairs into the discriminator to output discrimination probabilities, from which the generator and discriminator loss functions are determined; adjusts the configuration parameters of the two networks by minimizing the generator and discriminator loss values to complete the generator's training; inputs the test image into the trained generator to produce a portrait segmentation image; and probability-transforms the generated segmentation image, sending the probability matrix into a conditional random field for further optimization. The invention greatly reduces the number of training images, improves efficiency and segmentation precision, and provides an effective new approach to pixel-level portrait matting.

Claims (4)

1. A pixel-level portrait matting method based on a generative adversarial network, wherein, for real images containing portraits, annotated images with the real portrait separated from the background are obtained by manual labeling, the method comprising the following steps:
(1) presetting the networks: presetting a generator network and a discriminator network and placing the two networks in an adversarial learning mode, wherein both are deep neural networks; the generator is a deep neural network with skip connections, hereinafter simply the generator network, in which the skip connections are gradient-transfer paths between the N concatenated encoder layers and the N concatenated decoder layers that form the generator;
(2) generating a segmentation image: inputting a real image containing a portrait into the generator network and outputting a segmentation image with the portrait separated from the background, hereinafter the generated portrait segmentation image;
(3) calculating loss function values: inputting into the discriminator network the real portrait image with the generated portrait segmentation image output by the generator as a first image pair, and the real portrait image with the annotated image as a second image pair; calculating a false discrimination probability and a true discrimination probability through the discriminator; and obtaining the discriminator loss value and the generator loss value from the discriminator loss function formula and the generator loss function formula;
(4) updating the network parameters: minimizing the discriminator loss value and the generator loss value respectively; using the deep-neural-network back-propagation algorithm, with a random deactivation mechanism introduced into the N concatenated decoder layers of the generator, specifically each decoder layer randomly discarding deactivated parameters before its final output; iteratively updating the parameter values of the generator and the discriminator; and completing the training of the generator and the discriminator;
(5) generating test-set portrait segmentation images: after training, the trained generator receives a real image containing the portrait to be segmented and, through iterative computation in the network, outputs the generated portrait segmentation image of the test image;
(6) optimizing the segmentation image to finish portrait matting: probability-transforming the generated portrait segmentation image of the test image to obtain its portrait probability matrix and background probability matrix; using these as the input of a conditional random field; and further refining the probability-transformed segmentation image with the conditional random field to obtain a more accurate image with the portrait separated from the background, completing pixel-level portrait matting based on the generative adversarial network.
2. The pixel-level portrait matting method based on a generative adversarial network of claim 1, wherein, for the loss functions mentioned in steps (3) and (4), the smaller the generator loss, the more faithful the portrait/background segmentation image output by the generator; the generator loss formula weights two losses with two coefficients, specifically the generator loss gen_loss:
gen_loss=GAN_weight×GAN_loss+L1_weight×L1_loss
wherein GAN_weight is the weight of the adversarial loss GAN_loss, with a value between 0 and 50; L1_weight is the weight of the L1 norm loss between the generated portrait segmentation image output by the generator and the annotated image, with a value between 0 and 100; and GAN_loss is the adversarial loss, calculated as follows:
GAN_loss = -mean(log(P_fake + EPS))
wherein P_fake is the probability matrix produced by the discriminator from the real portrait image and the generated portrait segmentation image output by the generator as the first image pair, i.e. the probability that the discriminator judges the generator's output image to be real, and EPS is a very small constant in the range 10^-13 to 10^-11;
and the L1 norm loss L1_loss, describing the degree of difference between the generated portrait segmentation image output by the generator and the annotated image, is calculated as follows:
L1_loss = (1/N²) × Σ_{i=1..N} Σ_{j=1..N} |targets_ij - output_ij|
wherein targets_ij denotes the annotated image and output_ij the generated portrait segmentation image output by the generator.
3. The pixel-level portrait matting method based on a generative adversarial network of claim 1, wherein, for the loss functions mentioned in steps (3) and (4), the smaller the discriminator loss, the more accurate the discrimination probability output by the discriminator, the discriminator loss discrim_loss being:
discrim_loss = -mean(log(P_real + EPS) + log(1 - P_fake + EPS))
wherein EPS is a very small constant in the range 10^-13 to 10^-11, and P_real is the probability matrix produced from the real portrait image and the annotated image as the second image pair, i.e. the probability that the discriminator judges the annotated image to be real.
4. The pixel-level portrait matting method based on a generative adversarial network of claim 1, wherein in step (6) the probability-transformed generated segmentation image of the test image is further refined with the conditional random field: after the real portrait image is input into the generator to output the binary-semantic-segmentation image with the portrait separated from the background, the generated segmentation image of the test image is probability-transformed to provide the conditional random field with the probability matrices for computing its potential functions.
CN201711022184.1A 2017-10-27 2017-10-27 Pixel-level portrait matting method based on a generative adversarial network Active CN107945204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711022184.1A CN107945204B (en) 2017-10-27 2017-10-27 Pixel-level portrait matting method based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711022184.1A CN107945204B (en) 2017-10-27 2017-10-27 Pixel-level portrait matting method based on a generative adversarial network

Publications (2)

Publication Number Publication Date
CN107945204A CN107945204A (en) 2018-04-20
CN107945204B (en) 2021-06-25

Family

ID=61935772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711022184.1A Active CN107945204B (en) 2017-10-27 2017-10-27 Pixel-level portrait matting method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN107945204B (en)

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805789B (en) * 2018-05-29 2022-06-03 厦门市美亚柏科信息股份有限公司 Method, device and equipment for removing watermark based on antagonistic neural network and readable medium
CN108830209B (en) * 2018-06-08 2021-12-17 西安电子科技大学 Remote sensing image road extraction method based on generation countermeasure network
CN112020721A (en) * 2018-06-15 2020-12-01 富士通株式会社 Training method and device for classification neural network for semantic segmentation, and electronic equipment
CN109035267B (en) * 2018-06-22 2021-07-27 华东师范大学 Image target matting method based on deep learning
CN109035253A (en) * 2018-07-04 2018-12-18 长沙全度影像科技有限公司 A kind of stingy drawing method of the deep learning automated graphics of semantic segmentation information guiding
CN109034162B (en) * 2018-07-13 2022-07-26 南京邮电大学 Image semantic segmentation method
CN110728626A (en) * 2018-07-16 2020-01-24 宁波舜宇光电信息有限公司 Image deblurring method and apparatus and training thereof
CN109242000B (en) * 2018-08-09 2021-08-31 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and computer readable storage medium
CN109166126B (en) * 2018-08-13 2022-02-18 苏州比格威医疗科技有限公司 Method for segmenting paint cracks on ICGA image based on condition generation type countermeasure network
CN109214973B (en) * 2018-08-24 2020-10-27 中国科学技术大学 Method for generating countermeasure security carrier aiming at steganalysis neural network
CN110880001A (en) * 2018-09-06 2020-03-13 银河水滴科技(北京)有限公司 Training method, device and storage medium for semantic segmentation neural network
CN109190707A (en) * 2018-09-12 2019-01-11 深圳市唯特视科技有限公司 A kind of domain adapting to image semantic segmentation method based on confrontation study
CN110909754B (en) * 2018-09-14 2023-04-07 哈尔滨工业大学(深圳) Attribute generation countermeasure network and matching clothing generation method based on same
CN109544652B (en) * 2018-10-18 2024-01-05 上海威豪医疗科技有限公司 Nuclear magnetic resonance multi-weighted imaging method based on depth generation antagonistic neural network
CN109448006B (en) * 2018-11-01 2022-01-28 江西理工大学 Attention-based U-shaped dense connection retinal vessel segmentation method
CN109523523B (en) * 2018-11-01 2020-05-05 郑宇铄 Vertebral body positioning, identifying and segmenting method based on FCN neural network and counterstudy
WO2020107264A1 (en) * 2018-11-28 2020-06-04 深圳市大疆创新科技有限公司 Neural network architecture search method and apparatus
CN109754403A (en) * 2018-11-29 2019-05-14 中国科学院深圳先进技术研究院 Tumour automatic division method and system in a kind of CT image
CN109543827B (en) 2018-12-02 2020-12-29 清华大学 Generating type confrontation network device and training method
CN109859113B (en) * 2018-12-25 2021-08-20 北京奇艺世纪科技有限公司 Model generation method, image enhancement method, device and computer-readable storage medium
CN109754447B (en) 2018-12-28 2021-06-22 上海联影智能医疗科技有限公司 Image generation method, device, equipment and storage medium
CN109639710B (en) * 2018-12-29 2021-02-26 浙江工业大学 Network attack defense method based on adversarial training
CN111462162B (en) * 2019-01-18 2023-07-21 上海大学 Foreground segmentation algorithm for images of specific classes
CN109840561A (en) * 2019-01-25 2019-06-04 湘潭大学 An automatic garbage-image generation method usable for garbage classification
CN111582278B (en) * 2019-02-19 2023-12-08 北京嘀嘀无限科技发展有限公司 Portrait segmentation method and device and electronic equipment
CN110188760B (en) * 2019-04-01 2021-10-22 上海卫莎网络科技有限公司 Image processing model training method, image processing method and electronic equipment
CN110334805B (en) * 2019-05-05 2022-10-25 中山大学 JPEG domain image steganography method and system based on generative adversarial network
CN110222722A (en) * 2019-05-14 2019-09-10 华南理工大学 Interactive image stylization processing method, system, computing device and storage medium
CN110276745B (en) * 2019-05-22 2023-04-07 南京航空航天大学 Pathological image detection algorithm based on generative adversarial network
CN110189341B (en) * 2019-06-05 2021-08-10 北京青燕祥云科技有限公司 Image segmentation model training method, image segmentation method and device
CN110287851A (en) * 2019-06-20 2019-09-27 厦门市美亚柏科信息股份有限公司 A target image localization method, device, system and storage medium
CN110458904B (en) * 2019-08-06 2023-11-10 苏州瑞派宁科技有限公司 Method and device for generating capsule endoscope image and computer storage medium
CN110610509B (en) * 2019-09-18 2023-07-21 上海大学 Optimized matting method and system with specifiable categories
WO2021102697A1 (en) * 2019-11-26 2021-06-03 驭势(上海)汽车科技有限公司 Method and system for training generative adversarial network, and electronic device and storage medium
CN111080592B (en) * 2019-12-06 2021-06-01 广州柏视医疗科技有限公司 Rib extraction method and device based on deep learning
CN111222440A (en) * 2019-12-31 2020-06-02 江西开心玉米网络科技有限公司 Portrait background separation method, device, server and storage medium
CN111161272B (en) * 2019-12-31 2022-02-08 北京理工大学 Embryo tissue segmentation method based on generative adversarial network
CN111278085B (en) * 2020-02-24 2023-08-29 北京百度网讯科技有限公司 Method and device for acquiring target network
CN111311485B (en) * 2020-03-17 2023-07-04 Oppo广东移动通信有限公司 Image processing method and related device
CN111524060B (en) * 2020-03-31 2023-04-14 厦门亿联网络技术股份有限公司 System, method, storage medium and device for blurring portrait background in real time
CN111899203B (en) * 2020-07-10 2023-06-20 贵州大学 Realistic image generation method based on label maps under unsupervised training, and storage medium
CN112100908B (en) * 2020-08-31 2024-03-22 西安工程大学 Clothing design method based on multi-condition deep convolutional generative adversarial network
CN112215868B (en) * 2020-09-10 2023-12-26 湖北医药学院 Method for removing gesture image background based on generative adversarial network
CN112529929A (en) * 2020-12-07 2021-03-19 北京邮电大学 Fully convolutional dense network-based portrait matting method
CN113160358A (en) * 2021-05-21 2021-07-23 上海随幻智能科技有限公司 Green-screen-free matting and rendering method
CN114187668B (en) * 2021-12-15 2024-03-26 长讯通信服务有限公司 Silent face liveness detection method and device based on positive-sample training

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009094587A1 (en) * 2008-01-23 2009-07-30 Deering Michael F Eye mounted displays

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104272344A (en) * 2012-10-24 2015-01-07 株式会社摩如富 Image processing device, image processing method, image processing program, and recording medium
CN105160310A (en) * 2015-08-25 2015-12-16 西安电子科技大学 Human behavior recognition method based on 3D (three-dimensional) convolutional neural networks
CN106157319A (en) * 2016-07-28 2016-11-23 哈尔滨工业大学 Saliency detection method based on convolutional neural networks fusing region-level and pixel-level information
CN106683048A (en) * 2016-11-30 2017-05-17 浙江宇视科技有限公司 Image super-resolution method and image super-resolution equipment
CN106845471A (en) * 2017-02-20 2017-06-13 深圳市唯特视科技有限公司 A visual saliency prediction method based on generative adversarial network
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generative adversarial network and sub-pixel convolution
CN107274358A (en) * 2017-05-23 2017-10-20 广东工业大学 Image super-resolution restoration technique based on the cGAN algorithm

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Srivastava N et al; "Dropout: A Simple Way to Prevent Neural Networks from Overfitting"; Journal of Machine Learning Research; Jun. 2014; vol. 15, no. 1; pp. 1929-1958 *
Jun Chu et al; "Learnable contextual regularization for semantic segmentation of indoor scene images"; 2017 IEEE International Conference on Image Processing (ICIP); Sep. 2017; pp. 1267-1271 *
Zhang Wei et al; "Face recognition development based on generative adversarial networks"; Electronics World (电子世界); Oct. 2017; no. 20; pp. 164-165 *
Xu Yifeng et al; "A survey of generative adversarial network theoretical models and applications"; Journal of Jinhua Polytechnic (金华职业技术学院学报); May 2017; vol. 17, no. 3; pp. 81-88 *

Also Published As

Publication number Publication date
CN107945204A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107945204B (en) Pixel-level image matting method based on generation countermeasure network
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN109859190B (en) Target area detection method based on deep learning
Žbontar et al. Stereo matching by training a convolutional neural network to compare image patches
CN108961327A (en) A monocular depth estimation method and corresponding device, equipment and storage medium
CN111476806B (en) Image processing method, image processing device, computer equipment and storage medium
CN110111346B (en) Remote sensing image semantic segmentation method based on parallax information
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN111739037B (en) Semantic segmentation method for indoor scene RGB-D image
CN113822951A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114332578A (en) Image anomaly detection model training method, image anomaly detection method and device
CN116229056A (en) Semantic segmentation method, device and equipment based on double-branch feature fusion
CN111179272B (en) Rapid semantic segmentation method for road scene
CN112668638A (en) Image aesthetic quality evaluation and semantic recognition combined classification method and system
CN112418032A (en) Human behavior recognition method and device, electronic equipment and storage medium
CN112966777B (en) Semi-automatic labeling method and system based on human-computer interaction
CN114021704A (en) AI neural network model training method and related device
CN111914809A (en) Target object positioning method, image processing method, device and computer equipment
CN116596966A (en) Segmentation and tracking method based on attention and feature fusion
CN116543433A (en) Mask wearing detection method and device based on improved YOLOv7 model
CN112862840B (en) Image segmentation method, device, equipment and medium
CN113888603A (en) Loop detection and visual SLAM method based on optical flow tracking and feature matching
CN114492634A (en) Fine-grained equipment image classification and identification method and system
CN113971764A (en) Remote sensing image small target detection method based on improved YOLOv3
CN109871835B (en) Face recognition method based on mutual exclusion regularization technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant