CN115690578A - Image fusion method and target identification method and device


Info

Publication number: CN115690578A
Application number: CN202211317045.2A
Authority: CN (China)
Prior art keywords: test, image, module, infrared, neural network
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 关永胜, 马林, 焦连猛, 孙旭敏
Current Assignee: CETC Information Science Research Institute
Original Assignee: CETC Information Science Research Institute
Application filed by CETC Information Science Research Institute; priority to CN202211317045.2A.

Abstract

The invention discloses an image fusion method, a target identification method, and a target identification device, relating to the field of image processing. The invention aims to solve the problem that visible light images and infrared images are difficult to fuse in the prior art. The method comprises the following steps: S10, acquiring a visible light image and an infrared image to be fused; S20, copying the infrared image into a three-channel copied image; S30, processing the visible light image according to a pre-trained convolutional neural network model, and taking intermediate data output by a first convolution module of the pre-trained convolutional neural network model as at least one visible light feature; S40, processing the copied image according to the pre-trained convolutional neural network model, and taking intermediate data output by a second convolution module of the pre-trained convolutional neural network model as at least one infrared feature; and S50, performing image fusion based on the at least one visible light feature and the at least one infrared feature to obtain a fused image.

Description

Image fusion method and target identification method and device
Technical Field
The present invention relates to the field of image processing, and more particularly, to an image fusion method, a target identification method, and an apparatus.
Background
In the field of image processing, a visible light image has high resolution and can intuitively display detail information of a target, such as its texture, edge contour, and color. However, the quality of a visible light image is easily affected by the environment: imaging is poor at night or in extreme weather, and the image is sensitive to illumination changes and prone to overexposure or underexposure. An infrared image generated by infrared thermal imaging carries less detail and has lower resolution, but it is far less affected by the environment and can image clearly at night or in extreme weather such as fog.
If the visible light image and the infrared image can be fused, an image which is less influenced by the environment and has richer details can be obtained. Therefore, how to fuse the visible light image and the infrared image is an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide an image fusion method, a target identification method and a target identification device, which can fuse a visible light image and an infrared image.
In one aspect, an embodiment of the present invention provides an image fusion method, including: s10, acquiring a visible light image and an infrared image to be fused, wherein the visible light image and the infrared image are images shot in the same scene;
S20, copying the infrared image into a three-channel copied image, wherein the data on the three channels of the copied image are the same and correspond to the infrared image;

S30, processing the visible light image according to a pre-trained convolutional neural network model, and taking intermediate data output by a first convolution module of the pre-trained convolutional neural network model as at least one visible light feature, wherein the layer number of the first convolution module is preset and there may be one or more first convolution modules;

S40, processing the copied image according to the pre-trained convolutional neural network model, and taking intermediate data output by a second convolution module of the pre-trained convolutional neural network model as at least one infrared feature, wherein the layer number of the second convolution module is preset, there may be one or more second convolution modules, and the second convolution module differs from the first convolution module in layer number;

and S50, performing image fusion based on the at least one visible light feature and the at least one infrared feature to obtain a fused image.
In another aspect, an embodiment of the present invention provides a target identification method, including: performing image fusion on a visible light image and an infrared image corresponding to a target to be recognized according to an image fusion method to obtain a fusion image of the target to be recognized; and carrying out target identification based on the fusion image of the target to be identified.
In another aspect, an embodiment of the present invention provides an image fusion apparatus, including:
the image acquisition module is used for acquiring a visible light image and an infrared image to be fused, wherein the visible light image and the infrared image are images shot in the same scene;
the image copying module is connected with the image acquisition module and is used for copying the infrared image into copied images of three channels, and the data on the three channels of the copied images are the same and are the data corresponding to the infrared image;
the first processing module is connected with the image acquisition module and used for processing the visible light image according to a pre-trained convolutional neural network model and taking intermediate data output by the first convolutional module of the pre-trained convolutional neural network model as at least one visible light characteristic; the layer number of the first convolution module is preset, and the number of the first convolution modules is one or more;
the second processing module is connected with the image copying module and used for processing the copied image according to the pre-trained convolutional neural network model and taking intermediate data output by the second convolutional module of the pre-trained convolutional neural network model as at least one infrared feature; the layer number of the second convolution modules is preset, and the number of the second convolution modules is one or more; the second convolution module and the first convolution module have different layer numbers;
and the image fusion module is respectively connected with the first processing module and the second processing module and is used for carrying out image fusion based on the at least one visible light characteristic and the at least one infrared characteristic to obtain a fused image.
In another aspect, an embodiment of the present invention provides a target identification apparatus, including:
the target image acquisition module is used for carrying out image fusion on the visible light image and the infrared image corresponding to the target to be recognized by using the image fusion device to obtain a fusion image of the target to be recognized;
and the target identification module is connected with the target image acquisition module and is used for carrying out target identification based on the fusion image of the target to be identified.
The invention has the following beneficial effects: a visible light image and an infrared image captured in the same scene are acquired, the infrared image is copied into a three-channel copied image, the visible light image and the copied image are processed according to a pre-trained convolutional neural network model to obtain visible light features and infrared features, and image fusion is performed based on these features to obtain a fused image. Because the visible light features and infrared features are intermediate data output by the first and second convolution modules of the convolutional neural network model, and different convolution modules output features of different levels, the technical scheme provided by the embodiment of the invention achieves cross-layer fusion and efficient complementary use of the infrared and visible light images, thereby improving the representation capability of the fused image and in turn the accuracy of target identification.
Drawings
Fig. 1 is a first flowchart of an image fusion method provided in embodiment 1 of the present invention;
fig. 2 is a second flowchart of an image fusion method provided in embodiment 1 of the present invention;
fig. 3 is a first flowchart of an image fusion method provided in embodiment 2 of the present invention;
fig. 4 is a second flowchart of an image fusion method provided in embodiment 2 of the present invention;
fig. 5 is a flowchart of a target identification method provided in embodiment 3 of the present invention;
fig. 6 is a first schematic structural diagram of an image fusion apparatus provided in embodiment 4 of the present invention;
fig. 7 is a schematic structural diagram of a second image fusion apparatus provided in embodiment 4 of the present invention;
fig. 8 is a schematic structural diagram three of an image fusion apparatus provided in embodiment 4 of the present invention;
fig. 9 is a schematic structural diagram of a fourth image fusion apparatus provided in embodiment 4 of the present invention;
FIG. 10 is a schematic diagram of an image fusion module of the image fusion apparatus shown in FIG. 6;
fig. 11 is a schematic structural diagram of an object recognition apparatus provided in embodiment 5 of the present invention.
Detailed Description
The technical solution of the present invention is further described below with reference to the following embodiments and the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present invention provides an image fusion method, including:
Step 101: acquire a visible light image and an infrared image to be fused.
In this embodiment, the visible light image and the infrared image in step 101 are images captured in the same scene, that is, images captured by the infrared camera and the visible light camera in the same scene respectively.
Step 102: copy the infrared image into a three-channel copied image.
In this embodiment, since the visible light image is a three-channel Red, Green, Blue (RGB) image while the infrared image is a single-channel image, the infrared image may be copied through step 102 to facilitate fusing the two images; the data on the three channels of the copied image are the same and correspond to the infrared image.
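As a concrete illustration of step 102, the sketch below replicates a single-channel infrared tensor into a three-channel copied image. PyTorch is an assumption here; the patent does not name an implementation framework.

```python
import torch

def replicate_ir(ir: torch.Tensor) -> torch.Tensor:
    """Copy a single-channel infrared image (1 x H x W) into a
    three-channel copied image (3 x H x W); all three channels
    hold the same infrared data."""
    assert ir.shape[0] == 1, "expected a single-channel infrared image"
    return ir.repeat(3, 1, 1)

copied = replicate_ir(torch.rand(1, 480, 640))  # copied.shape == (3, 480, 640)
```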
Step 103: process the visible light image according to the pre-trained convolutional neural network model, taking the intermediate data output by the first convolution module of the pre-trained convolutional neural network model as at least one visible light feature.

In this embodiment, before step 103 processes the visible light image according to the pre-trained convolutional neural network model, the visible light image may first be converted into a first image of a preset size, and the first image is then processed in step 103. The layer number of the first convolution module in step 103 is preset, and there may be one or more first convolution modules.
Step 104: process the copied image according to the pre-trained convolutional neural network model, taking the intermediate data output by the second convolution module of the pre-trained convolutional neural network model as at least one infrared feature.

In this embodiment, before step 104 processes the copied image according to the pre-trained convolutional neural network model, the copied image may first be converted into a second image of a preset size, and the second image is then processed in step 104. The layer number of the second convolution module in step 104 is preset, and there may be one or more second convolution modules; the second convolution module differs from the first convolution module in layer number.

Since the intermediate data output by the first/second convolution modules serve as the at least one visible light/infrared feature, the convolutional neural network model used in this embodiment may specifically be one that contains only convolution modules and pooling modules.
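Steps 103 and 104 can be sketched as collecting the outputs of chosen convolution blocks while an image passes through a pre-trained backbone. The use of torchvision's VGG16 and the block-to-index mapping below are illustrative assumptions (the description later names VGG16 only as one example, with blocks 3 and 5 as the first convolution module and block 4 as the second).

```python
import torch
from torchvision import models

# Assumed mapping from VGG16 convolution blocks to the last layer index of
# each block in torchvision's vgg16().features (each block ends in MaxPool2d).
VGG16_BLOCK_ENDS = {1: 4, 2: 9, 3: 16, 4: 23, 5: 30}

def extract_block_outputs(conv_stack, x, blocks):
    """Run x through the convolution stack, collecting the intermediate
    data output by each requested convolution block."""
    wanted = {VGG16_BLOCK_ENDS[b] for b in blocks}
    feats = []
    for i, layer in enumerate(conv_stack):
        x = layer(x)
        if i in wanted:
            feats.append(x)
    return feats

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
with torch.no_grad():
    # First convolution module = blocks 3 and 5 (visible light image);
    # second convolution module = block 4 (three-channel copied image).
    visible_feats = extract_block_outputs(vgg, torch.rand(1, 3, 224, 224), [3, 5])
    infrared_feats = extract_block_outputs(vgg, torch.rand(1, 3, 224, 224), [4])
```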
Step 105: perform image fusion based on the at least one visible light feature and the at least one infrared feature to obtain a fused image.
In this embodiment, the process of image fusion through step 105 may include: converting the at least one visible light feature and the at least one infrared feature to the same size; and carrying out image fusion based on the converted at least one visible light characteristic and at least one infrared characteristic to obtain a fused image.
Specifically, the size of one feature may be taken as a reference size, and all other visible light and infrared features converted to that reference size; alternatively, a standard size may be set arbitrarily and all visible light and infrared features converted to it, which is not detailed further here. The conversion may use downsampling, deconvolution, or similar operations, and is not limited here.

Image fusion may specifically use additive fusion, maximum fusion, or the like; in particular, image fusion may also adopt cascade average fusion, which completely retains all elements of the input features and uses the complementary features to effectively enhance the detail information of the fused image.
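A sketch of step 105 under the same assumptions as above: the features are first resized to a common reference size (bilinear interpolation standing in for the downsampling or deconvolution mentioned above), then fused by one of the modes described; the cascade branch concatenates along channels, so every element of the input features is retained.

```python
import torch
import torch.nn.functional as F

def fuse(features, mode="cascade"):
    """Resize all feature maps (N x C x H x W) to the spatial size of the
    first one, then fuse. 'add' and 'max' require equal channel counts;
    'cascade' concatenates along channels and keeps every input element."""
    h, w = features[0].shape[-2:]
    aligned = [F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False)
               for f in features]
    if mode == "add":
        return torch.stack(aligned).sum(dim=0)    # additive fusion
    if mode == "max":
        return torch.stack(aligned).amax(dim=0)   # maximum fusion
    return torch.cat(aligned, dim=1)              # cascade fusion

fused = fuse(visible_feats + infrared_feats)      # reuses the features above
```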
Further, as shown in fig. 2, the image fusion method provided in this embodiment further includes, before step 103:
and 106, training the initial convolutional neural network model to obtain a pre-trained convolutional neural network model.
In this embodiment, the specific training process of step 106 may include: acquiring a plurality of test image pairs and the class label corresponding to each test image pair, each test image pair consisting of a test visible light image and a test infrared image captured in the same scene; copying each test infrared image into a three-channel test copied image, where the data on the three channels are the same and correspond to the test infrared image; for any test image pair, processing the test visible light image according to the current initial convolutional neural network model and taking the intermediate data output by its first convolution module as at least one test visible light feature; processing the test copied image according to the current initial convolutional neural network model and taking the intermediate data output by its second convolution module as at least one test infrared feature; performing image fusion on the at least one test visible light feature and the at least one test infrared feature corresponding to each test image pair to obtain a plurality of test fusion images; performing full connection and softmax classification on each test fusion image to obtain the training class corresponding to each test image pair; judging, according to the class label and training class corresponding to each test image pair, whether the classification accuracy is greater than a preset classification threshold; if not, updating the current initial convolutional neural network model according to the class labels and training classes and then repeating the processing of the test visible light images; if so, taking the current initial convolutional neural network model used when the classification accuracy is greater than the preset classification threshold as the pre-trained convolutional neural network model.
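The training cycle above can be sketched as follows, reusing extract_block_outputs and fuse from the earlier snippets. The classification head, the optimizer, the loader format ((visible, copied) pairs with labels), and the threshold value are all assumptions; with cascade fusion of VGG16 blocks 3, 5 and 4 the head would see 256 + 512 + 512 = 1280 channels.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Hypothetical head: global average pooling plus a fully connected
    layer; softmax classification is applied to its logits."""
    def __init__(self, fused_channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(fused_channels, num_classes)

    def forward(self, fused):
        return self.fc(self.pool(fused).flatten(1))

def train_until_threshold(backbone, head, loader, threshold=0.95, max_epochs=100):
    """Repeat the fuse-and-classify cycle over the test image pairs until
    the classification accuracy exceeds the preset classification threshold."""
    params = list(backbone.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()  # applies log-softmax internally
    for _ in range(max_epochs):
        correct = total = 0
        for (vis, ir3), labels in loader:  # ir3: three-channel test copied image
            feats = (extract_block_outputs(backbone, vis, [3, 5])
                     + extract_block_outputs(backbone, ir3, [4]))
            logits = head(fuse(feats))
            loss = loss_fn(logits, labels)
            opt.zero_grad(); loss.backward(); opt.step()
            correct += (logits.argmax(1) == labels).sum().item()
            total += labels.numel()
        if correct / total > threshold:  # preset classification threshold
            break
    return backbone, head
```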
The plurality of test image pairs may be acquired from public data sets. If the number of acquired test image pairs is small, the test image pairs may first be augmented and the augmented images then used in the above training process to improve classification accuracy; specific augmentation modes include one or more of flipping, rotating, adding noise, and the like, and are not limited here. In particular, when performing augmentation, the number of images in each category should be kept as balanced as possible.
In this embodiment, step 106 may be between step 102 and step 103, as shown in FIG. 2; step 106 may also be between step 101 and step 102 or before step 101, without limitation.
In this embodiment, the network structure of the pre-trained convolutional neural network model may be a LeNet-5, AlexNet, VGG, GoogLeNet, or ResNet network, among others. Taking the VGG16 network as an example, the first convolution module comprises the third-layer and fifth-layer convolution modules, and the second convolution module is the fourth-layer convolution module.
The invention has the following beneficial effects: a visible light image and an infrared image captured in the same scene are acquired, the infrared image is copied into a three-channel copied image, the visible light image and the copied image are processed according to a pre-trained convolutional neural network model to obtain visible light features and infrared features, and image fusion is performed based on these features to obtain a fused image. Because the visible light features and infrared features are intermediate data output by the first and second convolution modules of the convolutional neural network model, and different convolution modules output features of different levels, the technical scheme provided by the embodiment of the invention achieves cross-layer fusion and efficient complementary use of the infrared and visible light images, thereby improving the representation capability of the fused image and in turn the accuracy of target identification.
Example 2
As shown in fig. 3, an embodiment of the present invention provides an image fusion method, including:
and 301 to 304, acquiring the visible light image and the infrared image to be fused, copying the infrared image into a copy image, processing the visible light image and the copy image, and acquiring the visible light characteristic and the infrared characteristic. The specific process is similar to steps 101 to 104 shown in fig. 1, and is not described in detail here.
Step 305: process the at least one visible light feature and the at least one infrared feature according to a pre-trained SENet model to obtain at least one visible attention feature corresponding to the at least one visible light feature and at least one infrared attention feature corresponding to the at least one infrared feature.

In this embodiment, to further improve the representation capability of the fused image and thus the accuracy of target recognition, an attention mechanism is introduced after the at least one visible light feature and infrared feature are obtained through steps 303 and 304: the features are further processed by the pre-trained SENet model to obtain a more effective feature representation.
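The patent does not detail the SENet internals; the sketch below is the standard squeeze-and-excitation block (Hu et al.) assumed here as the per-feature attention step of step 305.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention: each channel of the input
    feature map is reweighted by a learned gate in [0, 1]."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.gate(x.mean(dim=(2, 3)))  # squeeze: global average pooling
        return x * w.view(n, c, 1, 1)      # excite: per-channel reweighting

# Step 305 would apply one such block to each extracted feature, e.g.:
# attention_feats = [SEBlock(f.shape[1])(f) for f in visible_feats + infrared_feats]
```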
Step 306: perform image fusion based on the at least one visible attention feature and the at least one infrared attention feature to obtain a fused image.
In this embodiment, the process of performing image fusion in step 306 is similar to step 105 shown in fig. 1, and is not repeated herein.
Further, as shown in fig. 4, before step 303, the method may further include:
and 307, training the initial convolutional neural network model and the initial SENet model to obtain a pre-trained convolutional neural network model and a pre-trained SENet model.
In this embodiment, the specific training process of step 307 may include: acquiring a plurality of test image pairs and the class label corresponding to each test image pair, each test image pair consisting of a test visible light image and a test infrared image captured in the same scene; copying each test infrared image into a three-channel test copied image, where the data on the three channels are the same and correspond to the test infrared image; for any test image pair, processing the test visible light image according to the current initial convolutional neural network model and taking the intermediate data output by its first convolution module as at least one test visible light feature; processing the test copied image according to the current initial convolutional neural network model and taking the intermediate data output by its second convolution module as at least one test infrared feature; processing the at least one test visible light feature and the at least one test infrared feature according to the current initial SENet model to obtain the corresponding test visible attention features and test infrared attention features; performing image fusion on the at least one test visible attention feature and the at least one test infrared attention feature corresponding to each test image pair to obtain a plurality of test attention fusion images; performing full connection and softmax classification on each test attention fusion image to obtain the training class corresponding to each test image pair; judging, according to the class label and training class corresponding to each test image pair, whether the classification accuracy is greater than a preset classification threshold; if not, updating the current initial convolutional neural network model and the current initial SENet model and then repeating the processing of the test visible light images; if so, taking the current initial convolutional neural network model used when the classification accuracy is greater than the preset classification threshold as the pre-trained convolutional neural network model, and the current initial SENet model used at that point as the pre-trained SENet model.
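Under the same assumptions as the training sketch in embodiment 1, the only change step 307 introduces is the SE stage between feature extraction and fusion, with the SE parameters updated jointly with the backbone:

```python
import torch.nn as nn

# Channels of VGG16 blocks 3, 5 and 4, matching the feature order used in
# the embodiment-1 training sketch; one SEBlock per extracted feature.
se_blocks = nn.ModuleList([SEBlock(c) for c in (256, 512, 512)])

# Inside the training loop, insert the attention stage before fusion:
#   feats = [se(f) for se, f in zip(se_blocks, feats)]
# and extend the optimizer to also cover se_blocks.parameters().
```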
The invention has the following beneficial effects: a visible light image and an infrared image captured in the same scene are acquired, the infrared image is copied into a three-channel copied image, the visible light image and the copied image are processed according to a pre-trained convolutional neural network model to obtain visible light features and infrared features, and image fusion is performed based on these features to obtain a fused image. Because the visible light features and infrared features are intermediate data output by the first and second convolution modules of the convolutional neural network model, and different convolution modules output features of different levels, the technical scheme provided by the embodiment of the invention achieves cross-layer fusion and efficient complementary use of the infrared and visible light images, thereby improving the representation capability of the fused image and in turn the accuracy of target identification.
Example 3
As shown in fig. 5, an embodiment of the present invention provides a target identification method, including:
and 501, performing image fusion on the visible light image and the infrared image corresponding to the target to be recognized according to an image fusion method to obtain a fusion image of the target to be recognized.
And 502, performing target identification based on the fusion image of the target to be identified.
In this embodiment, the target identification of step 502 may be performed by applying full connection and softmax processing to the fused image to determine the corresponding target; alternatively, the fused image may be processed by a pre-trained classification model to determine the corresponding target, which is not described further here.
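A sketch of this recognition step, reusing the trained backbone and head from the embodiment-1 training sketch (all names are carried over from those assumed snippets):

```python
import torch

with torch.no_grad():
    feats = (extract_block_outputs(backbone, visible_image, [3, 5])
             + extract_block_outputs(backbone, ir_copied, [4]))
    probs = torch.softmax(head(fuse(feats)), dim=1)  # full connection + softmax
    target_class = probs.argmax(dim=1)               # identified target class
```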
The invention has the following beneficial effects: the visible light image and infrared image corresponding to the target to be recognized are acquired, the infrared image is copied into a three-channel copied image, the visible light image and the copied image are processed according to a pre-trained convolutional neural network model to obtain visible light features and infrared features, image fusion is performed based on these features to obtain a fusion image, and the target is recognized based on the fusion image. Because the visible light features and infrared features are intermediate data output by the first and second convolution modules of the convolutional neural network model, and different convolution modules output features of different levels, the technical scheme provided by the embodiment of the invention achieves cross-layer fusion and efficient complementary use of the infrared and visible light images, thereby improving the representation capability of the fused image and in turn the accuracy of target identification.
Example 4
As shown in fig. 6, an embodiment of the present invention provides an image fusion apparatus, including:
the image acquisition module 601 is used for acquiring a visible light image and an infrared image to be fused, wherein the visible light image and the infrared image are images shot in the same scene;
the image copying module 602 is connected to the image acquiring module, and is configured to copy the infrared image into copied images of three channels, where data on the three channels of the copied image are the same and are data corresponding to the infrared image;
the first processing module 603 is connected to the image acquisition module, and is configured to process the visible light image according to the pre-trained convolutional neural network model, and use intermediate data output by the first convolutional module of the pre-trained convolutional neural network model as at least one visible light feature; the layer number of the first convolution module is preset, and the number of the first convolution modules is one or more;
a second processing module 604, connected to the image replication module, for processing the replicated image according to the pre-trained convolutional neural network model, and taking the intermediate data output by the second convolutional module of the pre-trained convolutional neural network model as at least one infrared feature; the number of layers of the second convolution module is preset, and the number of the layers is one or more; the second convolution module and the first convolution module have different layer numbers;
and an image fusion module 605, connected to the first processing module and the second processing module, respectively, for performing image fusion based on the at least one visible light feature and the at least one infrared feature to obtain a fused image.
In this embodiment, the process of implementing image fusion by the above modules is similar to that provided in embodiment 1 of the present invention and is not described in detail here; a sketch of how the modules compose follows.
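For illustration, the sketch below composes the modules as one pipeline, reusing the helpers from the embodiment-1 snippets; the class itself and its defaults are assumptions, with image acquisition left to the caller.

```python
import torch.nn as nn

class ImageFusionDevice(nn.Module):
    """Hypothetical composition of the apparatus: the image copying module,
    the first and second processing modules, and the image fusion module."""
    def __init__(self, conv_stack, first_blocks=(3, 5), second_blocks=(4,)):
        super().__init__()
        self.conv_stack = conv_stack  # pre-trained convolution stack
        self.first_blocks = list(first_blocks)
        self.second_blocks = list(second_blocks)

    def forward(self, visible, infrared):
        # image copying module: N x 1 x H x W infrared -> N x 3 x H x W
        ir3 = infrared.repeat(1, 3, 1, 1)
        # first/second processing modules: intermediate convolution features
        vis_feats = extract_block_outputs(self.conv_stack, visible, self.first_blocks)
        ir_feats = extract_block_outputs(self.conv_stack, ir3, self.second_blocks)
        # image fusion module
        return fuse(vis_feats + ir_feats)
```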
Further, as shown in fig. 7, the image fusion apparatus provided in this embodiment further includes:
a first training module 606, connected to the first processing module and the second processing module, respectively, for training the initial convolutional neural network model to obtain a pre-trained convolutional neural network model;
a first training module comprising:
the test acquisition sub-module is used for acquiring a plurality of test image pairs and class labels corresponding to each test image pair, and each test image pair consists of a test visible light image and a test infrared image shot in the same scene;
the test copy sub-module is connected with the test acquisition sub-module and is used for copying each test infrared image into test copy images of three channels, and the data on the three channels of the test copy images are the same and are the data corresponding to the test infrared images;
the first test processing sub-module is connected with the test acquisition sub-module and used for processing the test visible light images in any test image pair according to the current initial convolutional neural network model and taking the intermediate data output by the first convolutional module of the current initial convolutional neural network model as at least one test visible light characteristic;
the second test processing submodule is connected with the test replication sub-module and used for processing the test replication image in the test image pair according to the current initial convolutional neural network model and taking intermediate data output by the second convolutional module of the current initial convolutional neural network model as at least one test infrared characteristic;
the first test fusion submodule is respectively connected with the first test processing submodule and the second test processing submodule and is used for respectively carrying out image fusion on at least one test visible light characteristic and at least one test infrared characteristic corresponding to each test image pair to obtain a plurality of test fusion images;
the first test fusion submodule is connected with the first test fusion submodule and is used for respectively carrying out full connection and softmax classification on each test fusion image to obtain a training class corresponding to each test image;
the first test judgment sub-module is connected with the first test classification sub-module and used for judging whether the classification accuracy is greater than a preset classification threshold value or not according to the corresponding class label and training class of each test image pair;
the first test updating submodule is respectively connected with the first test judging submodule and the first test processing submodule and is used for updating the current initial convolutional neural network model and then executing the first test processing submodule according to the class label and the training class corresponding to each test image pair when the classification accuracy is not greater than the preset classification threshold;
and the first model acquisition sub-module is connected with the first test judgment sub-module and is used for taking the current initial convolutional neural network model used when the classification accuracy is greater than the preset classification threshold value as the pre-trained convolutional neural network model when the classification accuracy is greater than the preset classification threshold value.
In this embodiment, when the image fusion apparatus further includes the first training module, the image fusion process is similar to that provided in embodiment 1 of the present invention, and details are not repeated here.
Further, as shown in fig. 8, the image fusion apparatus provided in this embodiment further includes:
the attention processing module 607 is respectively connected to the first processing module and the second processing module, and is configured to respectively process at least one visible light feature and at least one infrared feature according to a pre-trained SENet model to obtain at least one visible attention feature corresponding to the at least one visible light feature and at least one infrared attention feature corresponding to the at least one infrared feature;
the image fusion module is also connected with the attention processing module and is specifically used for carrying out image fusion based on at least one visible attention feature and at least one infrared attention feature to obtain a fused image.
In this embodiment, when the image fusion device further includes the attention processing module, the image fusion process is implemented, which is similar to that provided in embodiment 2 of the present invention, and is not described in detail herein.
At this time, as shown in fig. 9, the image fusion apparatus according to the present embodiment further includes:
a second training module 608, connected to the first processing module, the second processing module, and the attention processing module, respectively, for training the initial convolutional neural network model and the initial SENet model to obtain a pre-trained convolutional neural network model and a pre-trained SENet model;
a second training module comprising:
the test acquisition sub-module is used for acquiring a plurality of test image pairs and a category label corresponding to each test image pair, and each test image pair consists of a test visible light image and a test infrared image shot in the same scene;
the test copy sub-module is connected with the test acquisition sub-module and is used for copying each test infrared image into test copy images of three channels respectively, and the data on the three channels of the test copy images are the same and are the data corresponding to the test infrared images;
the first test processing sub-module is connected with the test acquisition sub-module and used for processing the test visible light images in any test image pair according to the current initial convolutional neural network model and taking the intermediate data output by the first convolutional module of the current initial convolutional neural network model as at least one test visible light characteristic;
the second test processing sub-module is connected with the test replication sub-module and used for processing the test replication image in the test image pair according to the current initial convolutional neural network model and taking intermediate data output by the second convolutional module of the current initial convolutional neural network model as at least one test infrared characteristic;
the attention processing sub-module is respectively connected with the first test processing sub-module and the second test processing sub-module and is used for respectively processing at least one test visible light feature and at least one test infrared feature according to the current initial SENet model to obtain at least one test visible attention feature corresponding to the at least one test visible light feature and at least one test infrared attention feature corresponding to the at least one test infrared feature;
the second test fusion sub-module is connected with the attention processing sub-module and is used for respectively carrying out image fusion on the at least one test visible attention feature and the at least one test infrared attention feature corresponding to each test image pair to obtain a plurality of test attention fusion images;

the second test classification sub-module is connected with the second test fusion sub-module and is used for respectively carrying out full connection and softmax classification on each test attention fusion image to obtain the training class corresponding to each test image pair;
the second test judgment sub-module is connected with the second test classification sub-module and is used for judging whether the classification accuracy is greater than a preset classification threshold value or not according to the corresponding class label and training class of each test image pair;
the second test updating sub-module is respectively connected with the second test judging sub-module and the first test processing sub-module and is used for updating the current initial convolutional neural network model and the current initial SENet model according to the class label and the training class corresponding to each test image pair when the classification accuracy is not greater than the preset classification threshold value, and then executing the first test processing sub-module;
and the second model acquisition sub-module is connected with the second test judgment sub-module and is used for taking the current initial convolutional neural network model used when the classification accuracy is greater than the preset classification threshold as the pre-trained convolutional neural network model, and the current initial SENet model used when the classification accuracy is greater than the preset classification threshold as the pre-trained SENet model.
In this embodiment, when the image fusion apparatus further includes an attention processing module and a second training module, the process of implementing image fusion is similar to that provided in embodiment 2 of the present invention, and is not described in detail herein.
As shown in fig. 10, the image fusion module 605 in the image fusion apparatus provided in the present embodiment includes:
a size conversion sub-module 6051 for converting the at least one visible light feature and the at least one infrared feature to the same size;
and the fusion submodule 6052 is connected with the size conversion submodule and is used for carrying out image fusion on the basis of the converted at least one visible light characteristic and the converted at least one infrared characteristic to obtain a fusion image.
In this embodiment, the process of obtaining the fusion image through the size conversion sub-module 6051 and the fusion sub-module 6052 is similar to that provided in embodiment 1 of the present invention, and is not described in detail herein.
The invention has the following beneficial effects: a visible light image and an infrared image captured in the same scene are acquired, the infrared image is copied into a three-channel copied image, the visible light image and the copied image are processed according to a pre-trained convolutional neural network model to obtain visible light features and infrared features, and image fusion is performed based on these features to obtain a fused image. Because the visible light features and infrared features are intermediate data output by the first and second convolution modules of the convolutional neural network model, and different convolution modules output features of different levels, the technical scheme provided by the embodiment of the invention achieves cross-layer fusion and efficient complementary use of the infrared and visible light images, thereby improving the representation capability of the fused image and in turn the accuracy of target identification.
Example 5
As shown in fig. 11, an embodiment of the present invention provides an object recognition apparatus, including:
the target image acquisition module 1101 is configured to perform image fusion on the visible light image and the infrared image corresponding to the target to be recognized by using an image fusion device, so as to obtain a fusion image of the target to be recognized;
and the target identification module 1102 is connected with the target image acquisition module and is used for carrying out target identification based on the fusion image of the target to be identified.
In this embodiment, the process of implementing target identification through the modules is similar to that provided in embodiment 3 of the present invention, and is not described in detail here.
The invention has the following beneficial effects: the visible light image and infrared image corresponding to the target to be recognized are acquired, the infrared image is copied into a three-channel copied image, the visible light image and the copied image are processed according to a pre-trained convolutional neural network model to obtain visible light features and infrared features, image fusion is performed based on these features to obtain a fusion image, and the target is recognized based on the fusion image. Because the visible light features and infrared features are intermediate data output by the first and second convolution modules of the convolutional neural network model, and different convolution modules output features of different levels, the technical scheme provided by the embodiment of the invention achieves cross-layer fusion and efficient complementary use of the infrared and visible light images, thereby improving the representation capability of the fused image and in turn the accuracy of target identification.
The sequence of the above embodiments is only for convenience of description and does not represent the advantages and disadvantages of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (13)

1. An image fusion method, comprising:
s10, acquiring a visible light image and an infrared image to be fused, wherein the visible light image and the infrared image are images shot in the same scene;
s20, copying the infrared image into copied images of three channels, wherein the data on the three channels of the copied images are the same and are the data corresponding to the infrared image;
s30, processing the visible light image according to a pre-trained convolutional neural network model, and taking intermediate data output by a first convolution module of the pre-trained convolutional neural network model as at least one visible light feature; the layer number of the first convolution module is preset, and the number of the first convolution module is one or more;
s40, processing the copied image according to the pre-trained convolutional neural network model, and taking intermediate data output by a second convolutional module of the pre-trained convolutional neural network model as at least one infrared feature; the layer number of the second convolution modules is preset, and the number of the second convolution modules is one or more; the second convolution module and the first convolution module have different layer numbers;
and S50, carrying out image fusion based on the at least one visible light characteristic and the at least one infrared characteristic to obtain a fusion image.
2. The image fusion method according to claim 1, further comprising, before the S30:
s21, training the initial convolutional neural network model to obtain a pre-trained convolutional neural network model;
the S21 specifically includes:
s211, obtaining a plurality of test image pairs and a class label corresponding to each test image pair, wherein each test image pair consists of a test visible light image and a test infrared image shot in the same scene;
s212, copying each test infrared image into test copy images of three channels respectively, wherein the data on the three channels of the test copy images are the same and are the data corresponding to the test infrared images;
s213, for any test image pair, processing the test visible light image in the test image pair according to the current initial convolutional neural network model, and taking the intermediate data output by the first convolutional module of the current initial convolutional neural network model as at least one test visible light characteristic;
s214, processing the test copy image in the test image pair according to the current initial convolution neural network model, and taking intermediate data output by a second convolution module of the current initial convolution neural network model as at least one test infrared characteristic;
s215, respectively carrying out image fusion on at least one test visible light characteristic and at least one test infrared characteristic corresponding to each test image pair to obtain a plurality of test fusion images;

s216, respectively carrying out full connection and softmax classification on each test fusion image to obtain a training class corresponding to each test image pair;
s217, judging whether the classification accuracy is greater than a preset classification threshold value or not according to the corresponding class label and training class of each test image pair; if so, go to S219; otherwise, go to S218;
s218, updating the current initial convolutional neural network model according to the class label and the training class corresponding to each test image pair, and then executing the step S213;
s219, taking the current initial convolutional neural network model used when the classification accuracy rate is greater than a preset classification threshold value as the pre-trained convolutional neural network model.
3. The image fusion method according to claim 1, further comprising, before the S50:
s41, respectively processing the at least one visible light feature and the at least one infrared feature according to a pre-trained SENet model to obtain at least one visible attention feature corresponding to the at least one visible light feature and at least one infrared attention feature corresponding to the at least one infrared feature;
the S50 specifically comprises the following steps: and performing image fusion based on the at least one visible attention feature and the at least one infrared attention feature to obtain a fused image.
4. The image fusion method according to claim 3, further comprising, before the S30:
s22, training the initial convolutional neural network model and the initial SENet model to obtain a pre-trained convolutional neural network model and a pre-trained SENet model;
the S22 specifically includes:
s221, obtaining a plurality of test image pairs and a class label corresponding to each test image pair, wherein each test image pair consists of a test visible light image and a test infrared image shot in the same scene;
s222, copying each test infrared image into test copy images of three channels respectively, wherein the data on the three channels of the test copy images are the same and are the data corresponding to the test infrared images;
s223, for any test image pair, processing the test visible light image in the test image pair according to the current initial convolutional neural network model, and taking the intermediate data output by the first convolutional module of the current initial convolutional neural network model as at least one test visible light feature;
s224, processing the test copy image in the test image pair according to the current initial convolutional neural network model, and taking intermediate data output by a second convolutional module of the current initial convolutional neural network model as at least one test infrared feature;
s225, respectively processing the at least one test visible light feature and the at least one test infrared feature according to the current initial SENet model to obtain at least one test visible attention feature corresponding to the at least one test visible light feature and at least one test infrared attention feature corresponding to the at least one test infrared feature;

s226, respectively carrying out image fusion on at least one test visible attention feature and at least one test infrared attention feature corresponding to each test image pair to obtain a plurality of test attention fusion images;

s227, respectively performing full connection and softmax classification processing on each test attention fusion image to obtain a training class corresponding to each test image pair;
s228, judging whether the classification accuracy is greater than a preset classification threshold value or not according to the class label and the training class corresponding to each test image pair; if so, go to S230; otherwise, S229 is executed;
s229, according to the class label and the training class corresponding to each test image pair, updating the current initial convolutional neural network model and the current initial SENet model, and then executing the step S223;

and S230, taking the current initial convolutional neural network model used when the classification accuracy rate is greater than the preset classification threshold value as the pre-trained convolutional neural network model, and taking the current initial SENet model used when the classification accuracy rate is greater than the preset classification threshold value as the pre-trained SENet model.
5. The image fusion method according to claim 1, wherein the S50 includes:
s501, converting the at least one visible light characteristic and the at least one infrared characteristic into the same size;
s502, image fusion is carried out on the basis of the converted at least one visible light characteristic and the at least one infrared characteristic, and a fusion image is obtained.
6. The image fusion method according to any one of claims 1 to 5, wherein when the pre-trained convolutional neural network model is a pre-trained VGG16 network,
the first convolution module is a third layer convolution module and a fifth layer convolution module;
the second convolution module is a fourth layer convolution module.
7. A method of object recognition, comprising:
performing image fusion on a visible light image and an infrared image corresponding to a target to be recognized according to the image fusion method of any one of claims 1 to 6 to obtain a fusion image of the target to be recognized;
and carrying out target identification based on the fusion image of the target to be identified.
8. An image fusion apparatus, comprising:
the image acquisition module is used for acquiring a visible light image and an infrared image to be fused, wherein the visible light image and the infrared image are images shot in the same scene;
the image copying module is connected with the image acquisition module and is used for copying the infrared image into copied images of three channels, and the data on the three channels of the copied images are the same and are the data corresponding to the infrared image;
the first processing module is connected with the image acquisition module and used for processing the visible light image according to a pre-trained convolutional neural network model and taking intermediate data output by the first convolutional module of the pre-trained convolutional neural network model as at least one visible light characteristic; the layer number of the first convolution module is preset, and the number of the first convolution module is one or more;
the second processing module is connected with the image copying module and used for processing the copied image according to the pre-trained convolutional neural network model and taking intermediate data output by the second convolutional module of the pre-trained convolutional neural network model as at least one infrared feature; the layer number of the second convolution modules is preset, and the number of the second convolution modules is one or more; the second convolution module and the first convolution module have different layer numbers;
and the image fusion module is respectively connected with the first processing module and the second processing module and is used for carrying out image fusion based on the at least one visible light characteristic and the at least one infrared characteristic to obtain a fused image.
9. The image fusion apparatus according to claim 8, further comprising:
the first training module is connected with the first processing module and the second processing module, respectively, and is configured to train an initial convolutional neural network model to obtain the pre-trained convolutional neural network model;
the first training module comprising:
the test acquisition sub-module, configured to acquire a plurality of test image pairs and a class label corresponding to each test image pair, each test image pair consisting of a test visible light image and a test infrared image captured in the same scene;
the test copy sub-module, connected with the test acquisition sub-module and configured to copy each test infrared image into a test copied image of three channels, wherein the data on the three channels of the test copied image are identical and correspond to the data of the test infrared image;
the first test processing sub-module, connected with the test acquisition sub-module and configured to process the test visible light image in any test image pair according to the current initial convolutional neural network model and to take intermediate data output by the first convolution module of the current initial convolutional neural network model as at least one test visible light feature;
the second test processing sub-module, connected with the test copy sub-module and configured to process the test copied image in the test image pair according to the current initial convolutional neural network model and to take intermediate data output by the second convolution module of the current initial convolutional neural network model as at least one test infrared feature;
the first test fusion sub-module, connected with the first test processing sub-module and the second test processing sub-module, respectively, and configured to perform image fusion on the at least one test visible light feature and the at least one test infrared feature corresponding to each test image pair to obtain a plurality of test fused images;
the first test classification sub-module, connected with the first test fusion sub-module and configured to apply a fully connected layer and softmax classification to each test fused image to obtain a training class corresponding to each test image pair;
the first test judgment sub-module, connected with the first test classification sub-module and configured to determine, according to the class label and the training class corresponding to each test image pair, whether the classification accuracy is greater than a preset classification threshold;
the first test updating sub-module, connected with the first test judgment sub-module and the first test processing sub-module, respectively, and configured to, when the classification accuracy is not greater than the preset classification threshold, update the current initial convolutional neural network model according to the class label and the training class corresponding to each test image pair and then invoke the first test processing sub-module again;
and the first model obtaining sub-module, connected with the first test judgment sub-module and configured to, when the classification accuracy is greater than the preset classification threshold, take the current initial convolutional neural network model as the pre-trained convolutional neural network model.
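A minimal sketch of this training procedure follows: a fully connected layer plus softmax classifies the fused features, and updates repeat until the classification accuracy exceeds the preset threshold. The tiny stand-in CNN, fusion by concatenation, the Adam optimizer, and all sizes are assumptions; the claim fixes only the accuracy-threshold stopping rule.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in CNN and classifier head; the real model is whatever claim 8 uses.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4))
classifier = nn.Linear(2 * 16 * 4 * 4, 5)  # fully connected layer, 5 classes assumed
optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(classifier.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()          # applies log-softmax internally
threshold = 0.95                           # preset classification threshold (assumed value)

# Dummy test image pairs: visible (3-channel), infrared (1-channel), class labels.
vis = torch.randn(32, 3, 32, 32)
ir = torch.randn(32, 1, 32, 32)
labels = torch.randint(0, 5, (32,))

accuracy = 0.0
while accuracy <= threshold:
    vis_f = backbone(vis).flatten(1)                   # test visible light features
    ir_f = backbone(ir.repeat(1, 3, 1, 1)).flatten(1)  # three-channel copies -> infrared features
    logits = classifier(torch.cat([vis_f, ir_f], 1))   # fusion by concatenation (assumed)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    accuracy = (logits.argmax(1) == labels).float().mean().item()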
10. The image fusion apparatus according to claim 8, further comprising:
the attention processing module is connected with the first processing module and the second processing module, respectively, and is configured to process the at least one visible light feature and the at least one infrared feature according to a pre-trained SENet model to obtain at least one visible attention feature corresponding to the at least one visible light feature and at least one infrared attention feature corresponding to the at least one infrared feature;
the image fusion module is further connected with the attention processing module and is specifically configured to perform image fusion based on the at least one visible attention feature and the at least one infrared attention feature to obtain the fused image.
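SENet here denotes squeeze-and-excitation channel attention. A standard SE block is sketched below for reference; the reduction ratio of 16 is the common default and an assumption, as the claim does not specify the SENet configuration.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    # Squeeze-and-excitation: learn one weight per channel, then re-weight
    # the feature map with those weights.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                # x: (B, C, H, W) feature map
        w = x.mean(dim=(2, 3))           # squeeze: global average pooling
        w = self.fc(w)[..., None, None]  # excitation: (B, C, 1, 1) channel weights
        return x * w                     # the corresponding attention feature

feat = torch.randn(1, 64, 56, 56)        # a visible light or infrared feature
attended = SEBlock(64)(feat)             # visible/infrared attention feature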
11. The image fusion apparatus according to claim 10, further comprising:
the second training module is connected with the first processing module, the second processing module and the attention processing module, respectively, and is configured to train an initial convolutional neural network model and an initial SENet model to obtain the pre-trained convolutional neural network model and the pre-trained SENet model;
the second training module comprising:
the test acquisition sub-module, configured to acquire a plurality of test image pairs and a class label corresponding to each test image pair, each test image pair consisting of a test visible light image and a test infrared image captured in the same scene;
the test copy sub-module, connected with the test acquisition sub-module and configured to copy each test infrared image into a test copied image of three channels, wherein the data on the three channels of the test copied image are identical and correspond to the data of the test infrared image;
the first test processing sub-module, connected with the test acquisition sub-module and configured to process the test visible light image in any test image pair according to the current initial convolutional neural network model and to take intermediate data output by the first convolution module of the current initial convolutional neural network model as at least one test visible light feature;
the second test processing sub-module, connected with the test copy sub-module and configured to process the test copied image in the test image pair according to the current initial convolutional neural network model and to take intermediate data output by the second convolution module of the current initial convolutional neural network model as at least one test infrared feature;
the attention processing sub-module, connected with the first test processing sub-module and the second test processing sub-module, respectively, and configured to process the at least one test visible light feature and the at least one test infrared feature according to the current initial SENet model to obtain at least one test visible attention feature corresponding to the at least one test visible light feature and at least one test infrared attention feature corresponding to the at least one test infrared feature;
the second test fusion sub-module, connected with the attention processing sub-module and configured to perform image fusion on the at least one test visible attention feature and the at least one test infrared attention feature corresponding to each test image pair to obtain a plurality of test fused images;
the second test classification sub-module, connected with the second test fusion sub-module and configured to apply a fully connected layer and softmax classification to each test fused image to obtain a training class corresponding to each test image pair;
the second test judgment sub-module, connected with the second test classification sub-module and configured to determine, according to the class label and the training class corresponding to each test image pair, whether the classification accuracy is greater than a preset classification threshold;
the second test updating sub-module, connected with the second test judgment sub-module and the first test processing sub-module, respectively, and configured to, when the classification accuracy is not greater than the preset classification threshold, update the current initial convolutional neural network model and the current initial SENet model according to the class label and the training class corresponding to each test image pair and then invoke the first test processing sub-module again;
and the second model obtaining sub-module, connected with the second test judgment sub-module and configured to, when the classification accuracy is greater than the preset classification threshold, take the current initial convolutional neural network model as the pre-trained convolutional neural network model and the current initial SENet model as the pre-trained SENet model.
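Relative to the claim-9 procedure, claim 11 only adds the SENet model to the set of jointly trained parameters. Continuing the two sketches above (and reusing their assumed names backbone, classifier and SEBlock), the change amounts to:

# Continuing the claim-9 and claim-10 sketches above: the SE blocks are
# optimized jointly with the CNN, so the only changes are the parameter set
# handed to the optimizer and the attention step before fusion.
se_vis = SEBlock(16, reduction=4)   # channel counts follow the stand-in CNN
se_ir = SEBlock(16, reduction=4)
optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(classifier.parameters())
    + list(se_vis.parameters()) + list(se_ir.parameters()), lr=1e-3)

# Inside the training loop, attention is applied before flattening and fusion:
#   vis_f = se_vis(backbone(vis)).flatten(1)
#   ir_f = se_ir(backbone(ir.repeat(1, 3, 1, 1))).flatten(1)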
12. The image fusion apparatus according to claim 8, wherein the image fusion module comprises:
a size conversion sub-module configured to convert the at least one visible light feature and the at least one infrared feature to the same size;
and a fusion sub-module connected with the size conversion sub-module and configured to perform image fusion based on the converted at least one visible light feature and the converted at least one infrared feature to obtain the fused image.
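A minimal sketch of the size conversion follows, assuming PyTorch; bilinear interpolation and fusion by channel concatenation are assumptions, as the claim specifies neither the resampling method nor the fusion operator.

import torch
import torch.nn.functional as F

# Features tapped at different depths have different spatial sizes, so they
# are brought to a common size before fusion.
vis_feat = torch.randn(1, 64, 112, 112)  # visible feature from a shallower module
ir_feat = torch.randn(1, 128, 56, 56)    # infrared feature from a deeper module

target_size = vis_feat.shape[-2:]        # convert both features to the same size
ir_resized = F.interpolate(ir_feat, size=target_size, mode='bilinear',
                           align_corners=False)
fused = torch.cat([vis_feat, ir_resized], dim=1)  # (1, 192, 112, 112) fused result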
13. An object recognition apparatus, comprising:
a target image acquisition module configured to perform image fusion on a visible light image and an infrared image corresponding to a target to be recognized by using the image fusion apparatus according to any one of claims 8 to 12, so as to obtain a fused image of the target to be recognized;
and a target identification module connected with the target image acquisition module and configured to perform target identification based on the fused image of the target to be recognized.
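A minimal end-to-end sketch, with hypothetical names standing in for the claimed modules:

# Hedged sketch of the recognition flow: fuse first, then identify on the
# fused image. `fuse_images` and `recognizer` are hypothetical stand-ins for
# the image fusion apparatus (claims 8-12) and the target identification module.
def identify_target(visible, infrared, fuse_images, recognizer):
    fused = fuse_images(visible, infrared)  # fused image of the target to be recognized
    logits = recognizer(fused)              # e.g., a fully connected + softmax head
    return logits.argmax(dim=1)             # predicted target class index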
CN202211317045.2A 2022-10-26 2022-10-26 Image fusion method and target identification method and device Pending CN115690578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211317045.2A CN115690578A (en) 2022-10-26 2022-10-26 Image fusion method and target identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211317045.2A CN115690578A (en) 2022-10-26 2022-10-26 Image fusion method and target identification method and device

Publications (1)

Publication Number Publication Date
CN115690578A true CN115690578A (en) 2023-02-03

Family

ID=85099409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211317045.2A Pending CN115690578A (en) 2022-10-26 2022-10-26 Image fusion method and target identification method and device

Country Status (1)

Country Link
CN (1) CN115690578A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276736A (en) * 2019-04-01 2019-09-24 厦门大学 A kind of magnetic resonance image fusion method based on weight prediction network
CN111222396A (en) * 2019-10-23 2020-06-02 江苏大学 All-weather multispectral pedestrian detection method
CN111859005A (en) * 2020-07-01 2020-10-30 江西理工大学 Cross-layer multi-model feature fusion and image description method based on convolutional decoding
CN111986132A (en) * 2020-08-12 2020-11-24 兰州交通大学 Infrared and visible light image fusion method based on DLatLRR and VGG & Net
CN111986240A (en) * 2020-09-01 2020-11-24 交通运输部水运科学研究所 Drowning person detection method and system based on visible light and thermal imaging data fusion
CN113361475A (en) * 2021-06-30 2021-09-07 江南大学 Multi-spectral pedestrian detection method based on multi-stage feature fusion information multiplexing
CN113610180A (en) * 2021-08-17 2021-11-05 湖南工学院 Visible light image and infrared image fusion ship classification method and device based on deep learning
CN114066955A (en) * 2021-11-19 2022-02-18 安徽大学 Registration method for registering infrared light image to visible light image
CN114581353A (en) * 2022-03-11 2022-06-03 飞础科智慧科技(上海)有限公司 Infrared image processing method and device, medium and electronic equipment
CN114820555A (en) * 2022-05-11 2022-07-29 山东省立第三医院 Breast cancer pathological image classification method based on SENet channel attention and transfer learning
CN114820408A (en) * 2022-05-12 2022-07-29 中国地质大学(武汉) Infrared and visible light image fusion method based on self-attention and convolutional neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FANGCEN LIU et al.: "Infrared and Visible Cross-Modal Image Retrieval Through Shared Features", IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 11, XP011885209, DOI: 10.1109/TCSVT.2020.3048945 *
QU HAICHENG et al.: "Infrared and visible image fusion combining luminance perception and dense convolution" (结合亮度感知与密集卷积的红外与可见光图像融合), CAAI Transactions on Intelligent Systems (智能系统学报), vol. 17, no. 3
WANG JUNYAO et al.: "Multi-feature adaptive fusion method for infrared and visible images" (红外与可见光图像多特征自适应融合方法), Infrared Technology (红外技术), vol. 44, no. 6

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117612110A (en) * 2023-12-13 2024-02-27 安徽省川佰科技有限公司 Hearth flame intelligent monitoring system and method based on computer vision technology
CN117612110B (en) * 2023-12-13 2024-05-14 安徽省川佰科技有限公司 Hearth flame intelligent monitoring system and method based on computer vision technology

Similar Documents

Publication Publication Date Title
Ying et al. From patches to pictures (PaQ-2-PiQ): Mapping the perceptual space of picture quality
CN109902732B (en) Automatic vehicle classification method and related device
Rijal et al. Ensemble of deep neural networks for estimating particulate matter from images
Li et al. No-reference image quality assessment with deep convolutional neural networks
CN110555465B (en) Weather image identification method based on CNN and multi-feature fusion
CN108875821A (en) Training method and device for a classification model, mobile terminal, and readable storage medium
CN111292264A (en) Image high dynamic range reconstruction method based on deep learning
CN109509156B (en) Image defogging processing method based on generation countermeasure model
CN115496740B (en) Lens defect detection method and system based on convolutional neural network
CN114240939A (en) Method, system, equipment and medium for detecting appearance defects of mainboard components
WO2024021461A1 (en) Defect detection method and apparatus, device, and storage medium
CN111242868A (en) Image enhancement method based on convolutional neural network under dark vision environment
Liu et al. Enhanced image no‐reference quality assessment based on colour space distribution
CN115019340A (en) Night pedestrian detection algorithm based on deep learning
CN115690578A (en) Image fusion method and target identification method and device
CN116385293A (en) Foggy-day self-adaptive target detection method based on convolutional neural network
CN112991236B (en) Image enhancement method and device based on template
CN113486929B (en) Rock slice image identification method based on residual shrinkage module and attention mechanism
CN111179224B (en) Non-reference evaluation method for aerial image restoration quality based on joint learning
Zhou et al. Improving Lens Flare Removal with General-Purpose Pipeline and Multiple Light Sources Recovery
CN113657183A (en) Vehicle 24-color identification method using a smooth neural network based on multi-layer features
CN116664463B (en) Two-stage low-illumination image enhancement method
Liu et al. Low Light Image Enhancement Based on Multi-Scale Network Fusion
TWI780465B (en) Defect-inspecting method of goggles and a system thereof
CN114511462B (en) Visual image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination