CN115830384A - Image fusion method and system based on a dual-discriminator generative adversarial network

Image fusion method and system based on a dual-discriminator generative adversarial network

Info

Publication number
CN115830384A
CN115830384A (application CN202211586407.8A)
Authority
CN
China
Prior art keywords: discriminator, fused, fusion, generator, module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211586407.8A
Other languages
Chinese (zh)
Inventor
胡若澜
杨晨
王哲
张桂林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202211586407.8A priority Critical patent/CN115830384A/en
Publication of CN115830384A publication Critical patent/CN115830384A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image fusion method and system based on a dual-discriminator generative adversarial network, belonging to the field of computer vision. The method constructs and trains a dual-discriminator generative adversarial image fusion model. The generator comprises a dense feature extraction and fusion module, a feature enhancement module and a decoding and reconstruction module. The dense feature extraction and fusion module extracts and fuses the features of the two frames to be fused to obtain a fused feature map; the feature enhancement module applies a global average pooling operation, two fully connected layers and a Sigmoid activation function to the fused feature map to obtain feature enhancement coefficients, which are then multiplied with the fused feature map to obtain an enhanced feature map; the decoding and reconstruction module decodes and reconstructs the enhanced feature map to obtain the fused image. The method effectively retains the information of the heterogeneous source images, improves the generator's ability to fit and model the fusion of heterogeneous source images, and improves the quality of the fused image.

Description

Image fusion method and system based on a dual-discriminator generative adversarial network
Technical Field
The invention belongs to the field of computer vision, and particularly relates to an image fusion method and system based on a dual-discriminator generative adversarial network.
Background
With the rapid development of sensor technology, sensors of different types, such as visible-light, infrared and laser-radar sensors, are widely applied in systems for target detection, tracking, monitoring and early warning. Different types of sensors acquire different information, and a single sensor cannot provide all the information needed for certain application tasks. Image fusion technology fuses sensor information from different sources to improve the performance of subsequent processing tasks, and is one of the important research directions in this field.
Traditional image fusion methods extract and fuse image features with hand-designed algorithms, which makes it difficult to obtain effective feature representations of different sensor images; the feature fusion strategies are coarse, the specific information in different sensor images is hard to retain effectively, and image fusion performance is limited. Deep-learning-based image fusion methods exploit the hierarchical, distributed feature representation capability of deep networks to extract image features adaptively and obtain activity-level measurements and fusion weights, which helps improve the quality of fused images. However, image fusion lacks ground truth, which makes it difficult to train a deep network with supervised learning and to effectively retain the specific information of each source image. Generative adversarial networks implicitly model the generation process with the fitting capability of deep networks and supervise training with a discriminator, and are an effective approach to image generation. In image fusion, however, it remains an open problem how to design the generator network model, loss function and training method so that the model both fits the fusion of heterogeneous source images and effectively retains their specific information, thereby improving the quality of the fused image.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the invention provides an image fusion method and system based on a dual-discriminator generative adversarial network, and aims to solve the technical problem that the generator network model in GAN-based image fusion methods is difficult to both fit a model for fusing heterogeneous source images and retain the specific information in those source images.
To achieve the above object, according to one aspect of the present invention, there is provided an image fusion method based on a dual-discriminator generative adversarial network, comprising:
S1, constructing and training a dual-discriminator generative adversarial image fusion model; the model comprises a generator and two discriminators, wherein the generator comprises a dense feature extraction and fusion module, a feature enhancement module and a decoding and reconstruction module; the dense feature extraction and fusion module extracts and fuses the features of the two frames to be fused to obtain a fused feature map; the feature enhancement module applies a global average pooling operation, two fully connected layers and a Sigmoid activation function to the fused feature map to obtain feature enhancement coefficients, and then multiplies the coefficients with the fused feature map to obtain an enhanced feature map; the decoding and reconstruction module decodes and reconstructs the enhanced feature map to obtain the fused image;
and S2, inputting the two frames to be fused into the dual-discriminator generative adversarial image fusion model, the generator outputting the image fusion result.
Further, the generator loss function is:
$$L_G = \frac{\mathrm{SSIM}_2}{\mathrm{SSIM}_1 + \mathrm{SSIM}_2}\,L_{adv1} + \frac{\mathrm{SSIM}_1}{\mathrm{SSIM}_1 + \mathrm{SSIM}_2}\,L_{adv2} + \alpha L_{content}$$
wherein $L_{adv1}$ is the adversarial loss between the generator and discriminator 1, $L_{adv2}$ is the adversarial loss between the generator and discriminator 2, $\mathrm{SSIM}_1$ and $\mathrm{SSIM}_2$ are the structural similarity coefficients between the generated image and image 1 and image 2 respectively, $L_{content}$ is the content loss, and $\alpha$ is a balance coefficient;
$$L_{adv1} = -\frac{1}{N}\sum_{n=1}^{N}\log D_1\big(G(v_n, i_n)\big), \qquad L_{adv2} = -\frac{1}{N}\sum_{n=1}^{N}\log D_2\big(G(v_n, i_n)\big)$$
where $v$ and $i$ denote the two frames to be fused, $G(v, i)$ denotes the fused image output by the generator, $D_1(\cdot)$ and $D_2(\cdot)$ denote the outputs of discriminator 1 and discriminator 2, and $N$ denotes the batch size.
Furthermore, the decoding and reconstruction module consists of a plurality of convolutional layers, each using batch normalization; the last convolutional layer uses a tanh activation function and the remaining layers use ReLU activation functions.
Further, each discriminator includes a plurality of convolution layers and 1 linear layer connected in sequence.
Further, the two discriminator loss functions are defined as follows:
$$L_{D_1} = \frac{1}{N}\sum_{n=1}^{N}\Big[\log D_1(v_n) + \log\big(1 - D_1(G(v_n, i_n))\big)\Big], \qquad L_{D_2} = \frac{1}{N}\sum_{n=1}^{N}\Big[\log D_2(i_n) + \log\big(1 - D_2(G(v_n, i_n))\big)\Big]$$
where $v$ and $i$ denote the two frames to be fused, $G(v, i)$ denotes the fused image output by the generator, $D_1(\cdot)$ and $D_2(\cdot)$ denote the outputs of discriminator 1 and discriminator 2, and $N$ denotes the batch size.
Further, the dense feature extraction and fusion module consists of a plurality of convolutional layers; each layer takes the channel-wise concatenation of the outputs of all previous layers as input.
The invention also provides an image fusion system based on a dual-discriminator generative adversarial network, comprising:
a model construction and training module for constructing and training the dual-discriminator generative adversarial image fusion model; the model comprises a generator and two discriminators, wherein the generator comprises a dense feature extraction and fusion module, a feature enhancement module and a decoding and reconstruction module; the dense feature extraction and fusion module extracts and fuses the features of the two frames to be fused to obtain a fused feature map; the feature enhancement module applies a global average pooling operation, two fully connected layers and a Sigmoid activation function to the fused feature map to obtain feature enhancement coefficients, and then multiplies the coefficients with the fused feature map to obtain an enhanced feature map; the decoding and reconstruction module decodes and reconstructs the enhanced feature map to obtain the fused image;
and an online fusion module for inputting the two frames to be fused into the dual-discriminator generative adversarial image fusion model, the generator outputting the image fusion result.
In general, the above technical solutions contemplated by the present invention can achieve the following advantageous effects compared to the prior art.
(1) The invention designs the network structure of the generator: a dense feature extraction and fusion module extracts and fuses multi-layer features, a feature enhancement module further enhances the features, and the enhanced features are input into a decoding and reconstruction module consisting of several serially connected convolutional layers to obtain the fused image. Under the combined action of the three modules, the information in the heterogeneous source images is effectively retained, the generator's ability to fit and model the fusion of heterogeneous source images is improved, and the quality of the fused image is improved.
(2) The invention designs an image fusion method based on a dual-discriminator generative adversarial network that combines the dual discriminators with a composite adversarial-content loss function. By introducing gradient-information and pixel-intensity losses and adopting a balance coefficient based on the structural similarity coefficient to dynamically adjust the balance between the two adversarial losses, it alleviates the instability of adversarial training, effectively retains the specific information contained in the images to be fused, and improves the quality of the fused image.
Drawings
Fig. 1 is a flowchart of the image fusion method based on a dual-discriminator generative adversarial network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the dual-discriminator generative adversarial image fusion network model architecture according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of the generator of the dual-discriminator generative adversarial image fusion network according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a discriminator of the dual-discriminator generative adversarial image fusion network according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a flowchart of the image fusion method based on a dual-discriminator generative adversarial network according to an embodiment of the present invention. With reference to figs. 1 to 4, the method of this embodiment is described in detail below; it comprises the following steps.
S1, constructing and training a dual-discriminator generative adversarial image fusion network model. Referring to fig. 2, the model comprises a generator and two discriminators; referring to fig. 3, the generator comprises a dense feature extraction and fusion module, a feature enhancement module and a decoding and reconstruction module.
The dense feature extraction and fusion module extracts and fuses the features of the two frames to be fused to obtain a fused feature map. The module consists of 6 convolutional layers; each layer has 3×3 convolution kernels with stride 1 and 48 kernels, and the output features of all previous convolutional layers are concatenated on the channel dimension as the input of the current layer. After the two frames to be fused are combined, feature extraction and fusion are performed by this module, which outputs the fused feature map, as sketched below.
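For illustration, a minimal PyTorch sketch of such a dense module follows; the ReLU activations, the inclusion of the raw two-frame input in the dense concatenations, and the resulting 288-channel output width are assumptions the embodiment does not specify:

```python
import torch
import torch.nn as nn

class DenseFusionBlock(nn.Module):
    """Dense feature extraction and fusion module: 6 convolutional layers,
    3x3 kernels, stride 1, 48 kernels each; every layer takes the channel-wise
    concatenation of the combined input frames and all previous outputs."""
    def __init__(self, in_channels=2, growth=48, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, stride=1, padding=1),
                nn.ReLU(inplace=True)))  # activation choice is an assumption
            channels += growth           # next layer sees all previous features

    def forward(self, v, i):
        feats = [torch.cat([v, i], dim=1)]  # combine the two frames to fuse
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats[1:], dim=1)  # 6 x 48 = 288-channel fused feature map
```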
The feature enhancement module performs channel-wise feature enhancement on the fused feature map to obtain an enhanced feature map. Referring to fig. 3, the module applies a global average pooling operation, two fully connected layers and a Sigmoid activation function to the fused feature map to obtain feature enhancement coefficients, which are then multiplied with the fused feature map to obtain the enhanced feature map. The calculation process is as follows:
$$F' = \mathrm{Sigmoid}\big(W_{fc2}(W_{fc1}(\mathrm{AveragePool}(F)))\big) \odot F$$
where $F'$ denotes the enhanced feature map, $F$ the fused feature map, $\mathrm{AveragePool}(\cdot)$ the global average pooling operation, $W_{fc1}(\cdot)$ and $W_{fc2}(\cdot)$ the first and second linear layers, $\mathrm{Sigmoid}(\cdot)$ the Sigmoid function, and $\odot$ the Hadamard product.
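This is a squeeze-and-excitation style channel attention. A sketch under the assumption of a 288-channel fused feature map and a reduction ratio of 16 in the linear bottleneck (neither is specified by the embodiment):

```python
import torch
import torch.nn as nn

class FeatureEnhancement(nn.Module):
    """Channel-wise feature enhancement per the formula above:
    F' = Sigmoid(W_fc2(W_fc1(AveragePool(F)))) (Hadamard product) F."""
    def __init__(self, channels=288, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, f):
        b, c = f.shape[:2]
        w = f.mean(dim=(2, 3))                    # global average pooling -> (B, C)
        w = torch.sigmoid(self.fc2(self.fc1(w)))  # feature enhancement coefficients
        return f * w.view(b, c, 1, 1)             # broadcast Hadamard product
```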
The decoding and reconstruction module decodes and reconstructs the enhanced feature map to obtain the fused image. It consists of 5 convolutional layers; each layer has 3×3 convolution kernels with stride 1, the numbers of kernels are 240, 128, 64, 32 and 1 respectively, batch normalization is applied to each layer, the last convolutional layer uses a tanh activation function, and the remaining layers use ReLU activation functions.
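A corresponding PyTorch sketch of the decoder; the 288-channel input width follows the dense-block sketch above and is an assumption:

```python
import torch.nn as nn

class DecoderReconstruction(nn.Module):
    """Decoding and reconstruction module: 5 conv layers (3x3, stride 1) with
    240, 128, 64, 32 and 1 kernels, batch normalization on every layer,
    tanh on the last layer and ReLU elsewhere, per the embodiment."""
    def __init__(self, in_channels=288):
        super().__init__()
        chans = [in_channels, 240, 128, 64, 32, 1]
        layers = []
        for k in range(5):
            layers += [nn.Conv2d(chans[k], chans[k + 1], 3, stride=1, padding=1),
                       nn.BatchNorm2d(chans[k + 1]),
                       nn.Tanh() if k == 4 else nn.ReLU(inplace=True)]
        self.net = nn.Sequential(*layers)

    def forward(self, f):
        return self.net(f)
```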
Referring to fig. 4, in the present embodiment each discriminator consists of 3 convolutional layers and 1 linear layer; each convolutional layer has 3×3 kernels with stride 2, the numbers of kernels are 16, 32 and 64 respectively, batch normalization is applied to each layer's output, and the activation function is ReLU. Each discriminator receives either the generator's output image or one of the generator's two input frames, and its output feeds the JS-divergence-based adversarial loss.
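A sketch of one discriminator under these specifications; the 128×128 input crop size, the padding, and the sigmoid output producing a probability for a JS-divergence-type loss are assumptions:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """One of the two discriminators: 3 conv layers (3x3, stride 2, with
    16, 32 and 64 kernels), batch norm and ReLU per layer, then 1 linear layer."""
    def __init__(self, in_channels=1, input_size=128):
        super().__init__()
        chans = [in_channels, 16, 32, 64]
        convs = []
        for k in range(3):
            convs += [nn.Conv2d(chans[k], chans[k + 1], 3, stride=2, padding=1),
                      nn.BatchNorm2d(chans[k + 1]),
                      nn.ReLU(inplace=True)]
        self.convs = nn.Sequential(*convs)
        side = input_size // 8                   # three stride-2 convolutions
        self.fc = nn.Linear(64 * side * side, 1)

    def forward(self, x):
        return torch.sigmoid(self.fc(self.convs(x).flatten(1)))
```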
The training objective comprises a generator loss function and two discriminator loss functions, and the dual-discriminator generative adversarial image fusion network model is trained in a two-stage alternating manner.
The generator loss function is a composite adversarial-content loss consisting of the adversarial losses $L_{adv1}$ and $L_{adv2}$ and the content loss $L_{content}$, defined as follows:
$$L_G = \frac{\mathrm{SSIM}_2}{\mathrm{SSIM}_1 + \mathrm{SSIM}_2}\,L_{adv1} + \frac{\mathrm{SSIM}_1}{\mathrm{SSIM}_1 + \mathrm{SSIM}_2}\,L_{adv2} + \alpha L_{content}$$
wherein $L_{adv1}$ is the adversarial loss between the generator and discriminator 1, $L_{adv2}$ is the adversarial loss between the generator and discriminator 2, $\mathrm{SSIM}_1$ and $\mathrm{SSIM}_2$ are the structural similarity coefficients between the generated image and image 1 and image 2 respectively, $L_{content}$ is the content loss, and $\alpha$ is a balance coefficient, preferably 0.6.
The adversarial losses $L_{adv1}$ and $L_{adv2}$ are defined as follows:
$$L_{adv1} = -\frac{1}{N}\sum_{n=1}^{N}\log D_1\big(G(v_n, i_n)\big), \qquad L_{adv2} = -\frac{1}{N}\sum_{n=1}^{N}\log D_2\big(G(v_n, i_n)\big)$$
where $v$ and $i$ denote the two frames to be fused, $G(v, i)$ denotes the fused image output by the generator, $D_1(\cdot)$ and $D_2(\cdot)$ denote the outputs of discriminator 1 and discriminator 2, and $N$ denotes the batch size.
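A minimal sketch of these generator-side adversarial terms, assuming the standard (JS-divergence) GAN objective with discriminator outputs in (0, 1):

```python
import torch

def generator_adversarial_losses(d1_fake, d2_fake, eps=1e-8):
    """Generator-side adversarial terms; d1_fake and d2_fake are the
    discriminator outputs for the fused image G(v, i) over a batch of size N."""
    l_adv1 = -torch.log(d1_fake + eps).mean()  # -1/N sum log D1(G(v, i))
    l_adv2 = -torch.log(d2_fake + eps).mean()  # -1/N sum log D2(G(v, i))
    return l_adv1, l_adv2
```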
The structural similarity coefficient SSIM is calculated as follows:
$$\mathrm{SSIM}_{X,F} = \sum_{x,f} \frac{2\mu_x\mu_f + C_1}{\mu_x^2 + \mu_f^2 + C_1} \cdot \frac{2\sigma_x\sigma_f + C_2}{\sigma_x^2 + \sigma_f^2 + C_2} \cdot \frac{\sigma_{xf} + C_3}{\sigma_x\sigma_f + C_3}$$
$$\mathrm{SSIM} = \mathrm{SSIM}_{A,F} + \mathrm{SSIM}_{B,F}$$
where $A$ and $B$ denote the images to be fused, $F$ denotes the fusion result image, $X$ stands for $A$ or $B$, $x$ and $f$ denote corresponding sliding-window image blocks of image $X$ and image $F$, $\mu_x$ and $\mu_f$ denote the gray-level means of blocks $x$ and $f$, $\sigma_x$ and $\sigma_f$ their gray-level standard deviations, $\sigma_{xf}$ the covariance of blocks $x$ and $f$, and $C_1$, $C_2$ and $C_3$ are constants.
The content loss $L_{content}$ is defined as follows:
$$L_{content} = \frac{1}{HW}\left(\left\|G(v,i) - i\right\|_F^2 + \beta\left\|G(v,i) - v\right\|_{TV}\right)$$
where $v$ and $i$ denote the two frames to be fused, $H$ and $W$ are the height and width of the images, $G(v, i)$ denotes the fused image output by the generator, $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_{TV}$ denotes the total-variation norm, and $\beta$ is a balance coefficient, preferably 2.8.
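A sketch of the content loss and the composite generator loss as reconstructed above; the anisotropic reading of the TV norm and the pairing of the intensity term with $i$ and the gradient term with $v$ are assumptions:

```python
import torch

def tv_norm(x):
    """Anisotropic total-variation norm, one common reading of ||.||_TV."""
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().sum()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().sum()
    return dh + dw

def content_loss(fused, v, i, beta=2.8):
    """Frobenius-norm pixel-intensity term plus TV-norm gradient term,
    averaged over the batch and normalized by the image area H*W."""
    n, _, h, w = fused.shape
    l_pixel = (fused - i).pow(2).sum() / (n * h * w)  # ||G(v,i)-i||_F^2 / (HW)
    l_grad = tv_norm(fused - v) / (n * h * w)         # ||G(v,i)-v||_TV / (HW)
    return l_pixel + beta * l_grad

def generator_loss(l_adv1, l_adv2, l_content, ssim1, ssim2, alpha=0.6):
    """Composite adversarial-content loss with SSIM-based balance weights."""
    w1 = ssim2 / (ssim1 + ssim2)  # the less-similar source is weighted up
    w2 = ssim1 / (ssim1 + ssim2)
    return w1 * l_adv1 + w2 * l_adv2 + alpha * l_content
```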
Two discriminator loss functions are defined as follows:
$$L_{D_1} = \frac{1}{N}\sum_{n=1}^{N}\Big[\log D_1(v_n) + \log\big(1 - D_1(G(v_n, i_n))\big)\Big], \qquad L_{D_2} = \frac{1}{N}\sum_{n=1}^{N}\Big[\log D_2(i_n) + \log\big(1 - D_2(G(v_n, i_n))\big)\Big]$$
where $v$ and $i$ denote the two frames to be fused, $G(v, i)$ denotes the fused image output by the generator, $D_1(\cdot)$ and $D_2(\cdot)$ denote the outputs of discriminator 1 and discriminator 2, and $N$ denotes the batch size.
The two-stage alternating training comprises a discriminator training stage and a generator training stage. In the discriminator training stage, the generator is kept fixed and the two discriminators are trained with the two discriminator loss functions as maximization objectives. In the generator training stage, the discriminators are kept fixed and the generator is trained to minimize the composite adversarial-content loss function, as sketched below.
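A minimal sketch of one alternating round, reusing the loss helpers above; the shared discriminator optimizer and the `ssim_fn` helper (e.g. a library SSIM such as `pytorch_msssim.ssim`) are assumptions:

```python
import torch

def train_step(gen, d1, d2, opt_g, opt_d, v, i, ssim_fn, eps=1e-8):
    """One round of the two-stage alternating scheme."""
    # Stage 1: update both discriminators with the generator frozen.
    # Minimizing the negated objectives is equivalent to maximizing them.
    fused = gen(v, i).detach()
    loss_d = -(torch.log(d1(v) + eps).mean()
               + torch.log(1 - d1(fused) + eps).mean()
               + torch.log(d2(i) + eps).mean()
               + torch.log(1 - d2(fused) + eps).mean())
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Stage 2: update the generator with the discriminators frozen.
    fused = gen(v, i)
    l_adv1, l_adv2 = generator_adversarial_losses(d1(fused), d2(fused))
    loss_g = generator_loss(l_adv1, l_adv2, content_loss(fused, v, i),
                            ssim_fn(fused, v), ssim_fn(fused, i))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()
```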
S2, inputting the two frames to be fused into the dual-discriminator generative adversarial image fusion model, the generator outputting the image fusion result.
To verify the image fusion results of the method of this embodiment, a dataset for image fusion is constructed from the TNO image fusion dataset: 40 pairs of infrared and visible-light images are selected from the TNO dataset as the training set and cropped with overlap at a step size of 14, and 7 further pairs of infrared and visible images are selected from the TNO dataset as the test set.
After the dataset is prepared, the network model is trained. The batch size during training is set to 12 and the initial learning rate to 0.002, decayed exponentially to 0.8 of its previous value after each training batch.
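A sketch of the corresponding optimizer setup, reusing `gen`, `d1` and `d2` from the sketches above; the choice of Adam and the stepping granularity of the schedulers are assumptions:

```python
import torch.optim as optim

# Hyperparameters from the embodiment: batch size 12, initial learning
# rate 0.002, exponential decay by a factor of 0.8 per step.
opt_g = optim.Adam(gen.parameters(), lr=0.002)
opt_d = optim.Adam(list(d1.parameters()) + list(d2.parameters()), lr=0.002)
sched_g = optim.lr_scheduler.ExponentialLR(opt_g, gamma=0.8)
sched_d = optim.lr_scheduler.ExponentialLR(opt_d, gamma=0.8)
# After each decay step: sched_g.step(); sched_d.step()
```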
the method comprises the steps of using a trained dual-discriminator-based generation confrontation image fusion network to test a test set, simultaneously using other classical image fusion methods to test, and comparing average values of 8 individual performance indexes such as information Entropy (EN), correlation Coefficient (CC), mutual Information (MI), standard Deviation (SD), average Gradient (AG), mean Square Error (MSE), difference correlation Sum (SCD), peak signal to noise ratio (PSNR) and the like, wherein the test results are shown in table 1.
TABLE 1. Average performance indexes obtained by various image fusion methods

| Method          | EN     | CC     | MI      | SD     | AG     | MSE    | SCD    | PSNR    |
|-----------------|--------|--------|---------|--------|--------|--------|--------|---------|
| CBF             | 6.7343 | 0.4799 | 13.4686 | 0.1268 | 0.0193 | 0.0256 | 1.2951 | 63.6399 |
| JSR             | 6.7490 | 0.5553 | 13.4980 | 0.1500 | 0.0181 | 0.0615 | 1.6047 | 60.3482 |
| DCHWT           | 6.7654 | 0.5335 | 13.5209 | 0.1242 | 0.0143 | 0.0249 | 1.3141 | 63.9527 |
| GTF             | 6.7755 | 0.4784 | 13.5510 | 0.1358 | 0.0137 | 0.0300 | 0.9289 | 63.6970 |
| MEFGAN          | 6.4260 | 0.5382 | 12.8521 | 0.1113 | 0.0088 | 0.0347 | 1.0914 | 62.9152 |
| FusionGAN       | 6.4260 | 0.5382 | 12.8521 | 0.1113 | 0.0088 | 0.0347 | 1.0914 | 62.9152 |
| Sia-Fusion      | 6.7510 | 0.6399 | 13.7469 | 0.1321 | 0.0155 | 0.0189 | 1.6413 | 65.9007 |
| This embodiment | 7.2440 | 0.5775 | 14.4880 | 0.1868 | 0.0637 | 0.0580 | 1.6519 | 61.9325 |
As can be seen from the experimental results in Table 1, this embodiment achieves the best results on 5 of the 8 performance indexes, namely entropy (EN), mutual information (MI), standard deviation (SD), average gradient (AG) and sum of correlations of differences (SCD), more best scores than any other compared method, and achieves the second-best result on the correlation coefficient (CC) among all fusion methods. This embodiment therefore shows clear advantages over the other image fusion methods.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. An image fusion method based on a dual-discriminator generative adversarial network, characterized by comprising the following steps:
S1, constructing and training a dual-discriminator generative adversarial image fusion model; the model comprises a generator and two discriminators, wherein the generator comprises a dense feature extraction and fusion module, a feature enhancement module and a decoding and reconstruction module; the dense feature extraction and fusion module extracts and fuses the features of the two frames to be fused to obtain a fused feature map; the feature enhancement module applies a global average pooling operation, two fully connected layers and a Sigmoid activation function to the fused feature map to obtain feature enhancement coefficients, and then multiplies the coefficients with the fused feature map to obtain an enhanced feature map; the decoding and reconstruction module decodes and reconstructs the enhanced feature map to obtain the fused image;
and S2, inputting the two frames to be fused into the dual-discriminator generative adversarial image fusion model, the generator outputting the image fusion result.
2. The image fusion method based on a dual-discriminator generative adversarial network of claim 1, wherein the generator loss function is:
$$L_G = \frac{\mathrm{SSIM}_2}{\mathrm{SSIM}_1 + \mathrm{SSIM}_2}\,L_{adv1} + \frac{\mathrm{SSIM}_1}{\mathrm{SSIM}_1 + \mathrm{SSIM}_2}\,L_{adv2} + \alpha L_{content}$$
wherein $L_{adv1}$ is the adversarial loss between the generator and discriminator 1, $L_{adv2}$ is the adversarial loss between the generator and discriminator 2, $\mathrm{SSIM}_1$ and $\mathrm{SSIM}_2$ are the structural similarity coefficients between the generated image and image 1 and image 2 respectively, $L_{content}$ is the content loss, and $\alpha$ is a balance coefficient;
$$L_{adv1} = -\frac{1}{N}\sum_{n=1}^{N}\log D_1\big(G(v_n, i_n)\big), \qquad L_{adv2} = -\frac{1}{N}\sum_{n=1}^{N}\log D_2\big(G(v_n, i_n)\big)$$
where $v$ and $i$ denote the two frames to be fused, $G(v, i)$ denotes the fused image output by the generator, $D_1(\cdot)$ and $D_2(\cdot)$ denote the outputs of discriminator 1 and discriminator 2, and $N$ denotes the batch size.
3. The image fusion method based on a dual-discriminator generative adversarial network of claim 2, wherein the decoding and reconstruction module consists of a plurality of convolutional layers, each using batch normalization; the last convolutional layer uses a tanh activation function and the remaining layers use ReLU activation functions.
4. The image fusion method based on a dual-discriminator generative adversarial network of claim 2, wherein each discriminator comprises a plurality of convolutional layers and 1 linear layer connected in sequence.
5. The image fusion method based on a dual-discriminator generative adversarial network of claim 4, wherein the two discriminator loss functions are defined as follows:
$$L_{D_1} = \frac{1}{N}\sum_{n=1}^{N}\Big[\log D_1(v_n) + \log\big(1 - D_1(G(v_n, i_n))\big)\Big], \qquad L_{D_2} = \frac{1}{N}\sum_{n=1}^{N}\Big[\log D_2(i_n) + \log\big(1 - D_2(G(v_n, i_n))\big)\Big]$$
where $v$ and $i$ denote the two frames to be fused, $G(v, i)$ denotes the fused image output by the generator, $D_1(\cdot)$ and $D_2(\cdot)$ denote the outputs of discriminator 1 and discriminator 2, and $N$ denotes the batch size.
6. The image fusion method based on a dual-discriminator generative adversarial network of any one of claims 1 to 5, wherein the dense feature extraction and fusion module consists of a plurality of convolutional layers; each layer takes the channel-wise concatenation of the outputs of all previous layers as input.
7. An image fusion system based on a dual-discriminator generative adversarial network, comprising:
a model construction and training module for constructing and training the dual-discriminator generative adversarial image fusion model; the model comprises a generator and two discriminators, wherein the generator comprises a dense feature extraction and fusion module, a feature enhancement module and a decoding and reconstruction module; the dense feature extraction and fusion module extracts and fuses the features of the two frames to be fused to obtain a fused feature map; the feature enhancement module applies a global average pooling operation, two fully connected layers and a Sigmoid activation function to the fused feature map to obtain feature enhancement coefficients, and then multiplies the coefficients with the fused feature map to obtain an enhanced feature map; the decoding and reconstruction module decodes and reconstructs the enhanced feature map to obtain the fused image;
and an online fusion module, which inputs the two frames to be fused into the dual-discriminator generative adversarial image fusion model, the generator outputting the image fusion result.
8. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202211586407.8A 2022-12-09 2022-12-09 Image fusion method and system based on a dual-discriminator generative adversarial network Pending CN115830384A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211586407.8A CN115830384A (en) 2022-12-09 2022-12-09 Image fusion method and system based on a dual-discriminator generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211586407.8A CN115830384A (en) 2022-12-09 2022-12-09 Image fusion method and system based on a dual-discriminator generative adversarial network

Publications (1)

Publication Number Publication Date
CN115830384A true CN115830384A (en) 2023-03-21

Family

ID=85546376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211586407.8A Pending CN115830384A (en) 2022-12-09 2022-12-09 Image fusion method and system for generating countermeasure network based on double discriminators

Country Status (1)

Country Link
CN (1) CN115830384A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563147A (en) * 2023-05-04 2023-08-08 北京联合大学 Underwater image enhancement system and method
CN116563147B (en) * 2023-05-04 2024-03-26 北京联合大学 Underwater image enhancement system and method

Similar Documents

Publication Publication Date Title
JP7379787B2 (en) Image haze removal method using generative adversarial network fused with feature pyramids
US10535141B2 (en) Differentiable jaccard loss approximation for training an artificial neural network
CN110490239B (en) Training method, quality classification method, device and equipment of image quality control network
Rahmon et al. Motion U-Net: Multi-cue encoder-decoder network for motion segmentation
CN109614874B (en) Human behavior recognition method and system based on attention perception and tree skeleton point structure
CN111709903A (en) Infrared and visible light image fusion method
CN112784929B (en) Small sample image classification method and device based on double-element group expansion
CN115222998B (en) Image classification method
CN112668638A (en) Image aesthetic quality evaluation and semantic recognition combined classification method and system
CN111639230B (en) Similar video screening method, device, equipment and storage medium
CN111696136A (en) Target tracking method based on coding and decoding structure
CN117079098A (en) Space small target detection method based on position coding
CN115830384A (en) Image fusion method and system based on a dual-discriminator generative adversarial network
CN109766918A (en) Conspicuousness object detecting method based on the fusion of multi-level contextual information
CN116012255A (en) Low-light image enhancement method for generating countermeasure network based on cyclic consistency
CN116469020A (en) Unmanned aerial vehicle image target detection method based on multiscale and Gaussian Wasserstein distance
CN114492634A (en) Fine-grained equipment image classification and identification method and system
CN114782503A (en) Point cloud registration method and system based on multi-scale feature similarity constraint
CN113850811B (en) Three-dimensional point cloud instance segmentation method based on multi-scale clustering and mask scoring
CN113850182A (en) Action identification method based on DAMR-3 DNet
Di et al. FDNet: An end-to-end fusion decomposition network for infrared and visible images
CN113747168A (en) Training method of multimedia data description model and generation method of description information
CN116823983A (en) One-to-many style handwriting picture generation method based on style collection mechanism
CN116665300A (en) Skeleton action recognition method based on space-time self-adaptive feature fusion graph convolution network
CN116595479A (en) Community discovery method, system, equipment and medium based on graph double self-encoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination