CN116778293A - Image fusion method based on mask - Google Patents

Image fusion method based on mask

Info

Publication number
CN116778293A
CN116778293A (application CN202311068607.9A)
Authority
CN
China
Prior art keywords
image
convolution
mask
branch
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311068607.9A
Other languages
Chinese (zh)
Other versions
CN116778293B (en)
Inventor
吕国华
司马超群
高翔
王西艳
张曾彬
宋文廓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology
Priority to CN202311068607.9A priority Critical patent/CN116778293B/en
Publication of CN116778293A publication Critical patent/CN116778293A/en
Application granted granted Critical
Publication of CN116778293B publication Critical patent/CN116778293B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application discloses a mask-based image fusion method and relates to the technical field of new-generation information. The method comprises the following steps: step S1: acquiring a training set and a testing set; step S2: detecting targets, recording their coordinates, and then constructing masks of the targets; step S3: constructing an image fusion network; step S4: training the image fusion network with the training set under the guidance of a mask-based self-attention loss function to obtain an image fusion network model. The application identifies semantically rich areas in the visible light image and the infrared image, which not only realizes automatic mask construction but also, through the mask-based self-attention loss function, provides spatial guidance and weight guidance of salient targets for the image fusion network, thereby improving the image fusion effect.

Description

Image fusion method based on mask
Technical Field
The application belongs to the technical field of new generation information, and particularly relates to an image fusion method based on a mask.
Background
Image fusion technology has great application value in fields such as remote sensing, medical image analysis, environmental protection, traffic monitoring, clear image reconstruction and computer vision. At present, the fusion of infrared and visible light images is the most widely applied and relatively mature case, and its methods mainly comprise traditional image fusion methods and deep-learning-based image fusion methods. Traditional infrared and visible light image fusion methods, such as multi-scale-transformation-based, saliency-based and optimization-based image fusion methods, mainly focus on the design of feature extraction and fusion rules; they rarely consider the requirements of high-level tasks such as target detection on the fused image, and they do not explicitly define or process semantically important target areas. Existing deep-learning-based image fusion methods mainly focus on fusing the whole image when fusing an infrared image and a visible light image, pay little attention to the subsequent high-level task of target detection on the fused image, and lack targeted enhancement of the salient, semantically rich targets in the infrared and visible light images. Therefore, the application provides an image fusion method based on a mask.
Disclosure of Invention
In order to make up for the defects of the prior art, the invention provides an image fusion method based on a mask.
The technical scheme of the invention is as follows:
a mask-based image fusion method comprises the following steps:
step S1: acquiring a training set and a testing set which comprise infrared images and visible light images;
step S2: detecting targets in the infrared image and the visible light image in the training set, recording coordinates of a target detection frame, and respectively constructing a mask of the targets in the infrared image and a mask of the targets in the visible light image according to the coordinates of the target detection frame by using a mask generating method to obtain the mask of the targets in the infrared image and the mask of the targets in the visible light image;
step S3: constructing an image fusion network, wherein the image fusion network comprises a feature extraction network, a Concat layer I and an image reconstruction network IRM; the feature extraction network is used for extracting features of the infrared images and of the Y-channel images of the visible light images in the training set or the test set; the Concat layer I is used for splicing the features of the infrared image extracted by the feature extraction network and the features of the Y-channel image of the visible light image to obtain spliced features; the image reconstruction network IRM is used for fusing the spliced features to obtain a fused image;
Step S4: under the guidance of a mask-based self-attention loss function, the image fusion network is trained by utilizing a training set, and an image fusion network model is obtained.
Preferably, the training set and the test set are obtained in the following manner: firstly, 22 pairs of infrared and visible light images of daytime scenes and 23 pairs of infrared and visible light images of night scenes are selected from the RoadScene dataset as the test set; then, the remaining 176 pairs of infrared and visible light images in the RoadScene dataset are used as the training set.
Preferably, in step S3, the feature extraction network has a dual-branch structure, and both branch A and branch B are composed of a convolution layer with a convolution kernel size of 1×1, a Dense connection module Dense, and a texture enhancement module TEM.
Preferably, in step S3, the Dense connection module Dense includes three convolution modules I sequentially connected, where each convolution module I is composed of a convolution layer with a convolution kernel size of 3×3 and an LReLU activation layer; and the three convolution modules I in the Dense connection module Dense are connected in a Dense connection mode.
Preferably, in step S3, the texture enhancement module TEM includes a first branch, a second branch, a third branch, a fourth branch, and a Concat layer ii;
The first branch comprises three convolution modules II, wherein the first two convolution modules II (namely a first convolution module II and a second convolution module II) are each composed of a convolution layer with a convolution kernel size of 3 multiplied by 3 and an LReLU activation layer, and the third convolution module II is composed of a convolution layer with a convolution kernel size of 1 multiplied by 1 and an LReLU activation layer;
the second branch comprises a Laplace edge detection module, an Add layer and three convolution modules III which are sequentially connected, wherein the first two convolution modules III (namely a first convolution module III and a second convolution module III) are each composed of a convolution layer with a convolution kernel size of 3 multiplied by 3 and an LReLU activation layer, and the third convolution module III is composed of a convolution layer with a convolution kernel size of 1 multiplied by 1 and an LReLU activation layer; the Laplace edge detection module is connected with the Dense connection module Dense, and the Dense connection module Dense and the Laplace edge detection module are both connected with the Add layer;
the third branch is composed of a Sobel edge detector and a convolution module IV; the convolution module IV consists of a convolution layer with the convolution kernel size of 1 multiplied by 1 and an LReLU activation layer;
the fourth branch comprises a convolution layer with a convolution kernel size of 1×1 and an LReLU activation layer;
The first branch, the second branch, the third branch and the fourth branch are all connected with a Concat layer II, and the Concat layer II is used for splicing the characteristics I output by the first branch, the characteristics II output by the second branch, the characteristics III output by the third branch and the characteristics IV output by the fourth branch.
Preferably, in step S3, the image reconstruction network includes four convolution modules V, where each of the first three convolution modules V includes a convolution layer with a convolution kernel size of 3×3 and an LReLU activation layer, and the fourth convolution module V includes a convolution layer with a convolution kernel size of 3×3 and a Tanh activation layer.
Preferably, the step S4 specifically includes the following steps:
s4-1: converting a visible light image in a training set of a fusion network into a YCbCr image by using an image space conversion module, and then separating out a Y-channel image of the visible light image;
s4-2: inputting the infrared image in the training set of the fusion network and the Y-channel image obtained in the step S4-1 into the image fusion network to obtain a single-channel fusion image; and then, carrying out back propagation under the guidance of the self-attention loss based on the mask, which is obtained by calculation of the self-attention loss function based on the mask, updating the weight of the image fusion network, and iterating 100 times to complete the training process of the image fusion network, thereby obtaining the image fusion network model.
Preferably, in step S4, mask-based self-attention loss includes pixel loss, gradient loss, and structural similarity loss.
Compared with the prior art, the application has the following beneficial effects:
the image fusion method utilizes a target detection network to identify semantically rich areas in the visible light image and the infrared image, utilizes a mask generation method to construct masks for the infrared image and the visible light image, and uses the mask-based self-attention loss function to provide spatial guidance and weight guidance of salient targets for the image fusion network, thereby realizing targeted enhancement of the salient, semantically rich targets in the infrared and visible light images and improving the image fusion effect. The fused image obtained by the application not only contains the thermal radiation information of the infrared image but also contains the texture information of the visible light image, so that the fused image better supports the subsequent high-level computer vision task of target detection. Tests show that when the fused image generated by the mask-based image fusion method is tested with the Yolov5 target detection network, good results are obtained on the five evaluation indices F1, Precision, Recall, mAP@.5 and mAP@.5:.95, which shows that the fused image generated by the image fusion method of the application has high precision and a better fusion effect in the target area when the high-level computer vision task of target detection is performed.
Drawings
FIG. 1 is a flow chart of a mask-based image fusion method of the present application;
FIG. 2 is a schematic diagram of the network structure of the image fusion network according to the present application; the concatenation symbol marked in FIG. 2 denotes Concat layer I;
FIG. 3 is a schematic diagram of a network structure of a Dense connection module Dense according to the present application;
FIG. 4 is a schematic diagram of the network structure of the texture enhancement module TEM according to the present application; the symbols marked in FIG. 4 denote Concat layer II and the Add layer, respectively;
FIG. 5 is a schematic diagram of a network architecture of an image reconstruction network IRM according to the present application;
FIG. 6 shows fused images obtained by the image fusion method of the present application and by five existing image fusion methods; in FIG. 6, (a) is the original visible light image input to the image fusion networks of the image fusion method of the present application and of the five existing image fusion methods, (b) is the corresponding original infrared image, and (c) to (h) are the fused images obtained by the DIVFusion image fusion method, the PIAFusion image fusion method, the U2Fusion image fusion method, the FusionGAN image fusion method, the STDFusion image fusion method and the image fusion method of the present application, respectively;
FIG. 7 shows the test effect of target detection tests performed with the Yolov5 target detection network on the fused images obtained by the image fusion method of the present application and by five existing image fusion methods; in FIG. 7, (a) shows the target detection test on a visible light image in the MSRS dataset using the Yolov5 target detection network, (b) shows the target detection test on an infrared image in the MSRS dataset using the Yolov5 target detection network, and (c) to (h) show the target detection tests on the fused images generated by the DIVFusion image fusion method, the PIAFusion image fusion method, the U2Fusion image fusion method, the FusionGAN image fusion method, the STDFusion image fusion method and the image fusion method of the present application, respectively.
Detailed Description
As shown in fig. 1, the present application provides a mask-based image fusion method, which includes the following steps:
step S1: the method comprises the steps of obtaining a training set and a testing set containing infrared images and visible light images, wherein the training set and the testing set are specifically as follows:
the prior art roadsequence dataset (website: https:// gitsub. Com/hanna-xu/RoadScene) contains 221 pairs of infrared and visible images.
Firstly, 22 pairs of infrared and visible light images of daytime scenes and 23 pairs of infrared and visible light images of night scenes are selected from the RoadScene dataset as the test set; then, the remaining 176 pairs of infrared and visible light images in the RoadScene dataset are used as the training set;
step S2: detecting targets in the infrared image and the visible light image in the training set, recording coordinates of a target detection frame, and respectively constructing a mask of the targets in the infrared image and a mask of the targets in the visible light image according to the coordinates of the target detection frame by using a mask generating method to obtain the mask of the targets in the infrared image and the mask of the targets in the visible light image; in particular the number of the elements,
step S2-1: respectively detecting the targets in the infrared image and the targets in the visible light image by using an existing target detection network, and respectively recording the coordinates of the target detection frames; the target detection network is one of the existing YOLO series target detection networks, the R-CNN target detection network, the Fast R-CNN target detection network, the Faster R-CNN target detection network and the SSD target detection network;
step S2-2: respectively constructing a mask of a target in the infrared image and a mask of the target in the visible light image according to coordinates of the target detection frame by using a mask generation method; the mask generation method respectively constructs a mask of a target in an infrared image and a mask of the target in a visible light image according to coordinates of a target detection frame, and comprises the following steps: firstly, reading coordinates output by a target detection network and the length and width of corresponding images in a training set, then generating images with the same size, setting values in a coordinate range to 255, setting values in other positions to 0, and generating a target mask;
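As an illustration of the mask generation described in step S2-2, the sketch below builds a binary 0/255 mask from detection-box coordinates; the function name, the (x1, y1, x2, y2) box format and the clamping to the image border are assumptions made for illustration and are not part of the disclosure.

```python
import numpy as np

def build_target_mask(image_height, image_width, boxes):
    """Illustrative sketch of the mask generation step in S2-2.

    `boxes` is an iterable of detection-box coordinates (x1, y1, x2, y2)
    reported by the target detection network for one source image.
    Pixels inside any box are set to 255, all other pixels to 0.
    """
    mask = np.zeros((image_height, image_width), dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        # Clamp the box to the image border before filling it in.
        x1, x2 = max(0, int(x1)), min(image_width, int(x2))
        y1, y2 = max(0, int(y1)), min(image_height, int(y2))
        mask[y1:y2, x1:x2] = 255
    return mask

# Example: one 512x512 infrared image with two detected targets.
ir_mask = build_target_mask(512, 512, [(40, 60, 120, 200), (300, 310, 380, 420)])
```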
Step S3: constructing an image fusion network:
the specific structure of the image fusion network is shown in fig. 2, wherein the image fusion network comprises a feature extraction network, a Concat layer I and an image reconstruction network IRM; the feature extraction network is used for extracting features of the infrared images and of the Y-channel images of the visible light images in the training set or the test set; the Concat layer I is used for splicing the features of the infrared image extracted by the feature extraction network and the features of the Y-channel image of the visible light image to obtain spliced features, and the number of channels of the spliced features is 192; the image reconstruction network IRM is used for fusing the spliced features to obtain a fused image;
the feature extraction network is of a double-branch structure, the branch A is used for extracting features of infrared light images in a training set or a test set, and the branch B is used for extracting features of Y-channel images of visible light images in the training set or the test set; the branch A and the branch B are composed of a convolution layer with a convolution kernel size of 1 multiplied by 1, a Dense connection module Dense and a texture enhancement module TEM;
the convolution layer with the convolution kernel size of 1 multiplied by 1 in the branch A is used for expanding channels of infrared images in a training set or a test set to obtain a characteristic A with the channel number of 4;
The Dense connection module Dense in the branch A is used for fully extracting the characteristics in the characteristics A and expanding channels to obtain characteristics B with the channel number of 16;
the texture enhancement module TEM in the branch A is used for enhancing texture features in the feature B to obtain a feature C with 96 channels;
the convolution layer with the convolution kernel size of 1 multiplied by 1 in the branch B is used for expanding channels of Y channel images of visible light images in a training set or a test set to obtain a characteristic D with the channel number of 4;
the Dense connection module Dense in the branch B is used for fully extracting the characteristics in the characteristics D and expanding channels to obtain characteristics E with the channel number of 16;
the texture enhancement module TEM in the branch B is used for enhancing texture features in the features E to obtain features F with the channel number of 96;
in the specific structure of the Dense connection module Dense, as shown in fig. 3, the Dense connection module Dense comprises three convolution modules I which are sequentially connected, and each convolution module I consists of a convolution layer with a convolution kernel size of 3×3 and an LReLU activation layer; three convolution modules I in the Dense connection module Dense are connected in a Dense connection mode; in the application, a first convolution module I, a second convolution module I and a third convolution module I in the Dense connection module Dense are all used for expanding channels and extracting features, so as to realize reuse of enhanced features;
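A minimal sketch of such a Dense connection module, with three 3×3 convolution + LReLU modules and dense connections, is given below; the growth of 4 channels per convolution and the LReLU slope of 0.2 are assumptions chosen so that the 4-channel feature A yields the 16-channel feature B described above (4 input channels + 3 × 4 newly produced channels = 16).

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Sketch of the Dense connection module (three 3x3 Conv + LReLU modules)."""

    def __init__(self, in_channels: int = 4, growth: int = 4):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(3):
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_channels + i * growth, growth, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            # Dense connection: every module sees all previously produced features.
            features.append(conv(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

print(DenseBlock()(torch.randn(1, 4, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```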
In the specific structure of the texture enhancement module TEM in the present application, as shown in fig. 4, the texture enhancement module TEM has a four-branch structure, including a first branch, a second branch, a third branch, a fourth branch, and a Concat layer ii; the working principle of the texture enhancement module TEM in the branch A of the feature extraction network is as follows:
the first branch of the texture enhancement module TEM comprises three convolution modules ii; the first two convolution modules II (namely a first convolution module II and a second convolution module II) are composed of a convolution layer with a convolution kernel size of 3 multiplied by 3 and an LReLU activation layer, and the third convolution module II is composed of a convolution layer with a convolution kernel size of 1 multiplied by 1 and an LReLU activation layer;
the first branch of the texture enhancement module TEM is used for extracting information in the feature B and expanding channels to obtain a feature I with the channel number of 32; specifically, a first convolution module II in a first branch of the texture enhancement module TEM extracts information in a feature B with the channel number of 16 and expands the channels to obtain a feature with the channel number of 32; the second convolution module II in the first branch is used for extracting information in the output characteristics of the first convolution module II; the third convolution module II in the first branch is used for extracting information in the output characteristics of the second convolution module II, and the characteristics output by the third convolution module II are the characteristics I with the channel number of 32;
The second branch of the texture enhancement module TEM comprises a Laplace edge detection module, an Add layer and three convolution modules III which are connected in sequence; the first two convolution modules III (namely a first convolution module III and a second convolution module III) are composed of a convolution layer with a convolution kernel size of 3 multiplied by 3 and an LReLU activation layer, and the third convolution module III is composed of a convolution layer with a convolution kernel size of 1 multiplied by 1 and an LReLU activation layer; the Laplace edge detection module is connected with the Dense connection module Dense, and the Dense connection module Dense and the Laplace edge detection module are also connected with the Add layer;
the second branch of the texture enhancement module TEM is used for extracting information in the feature B and expanding channels so as to enhance the texture information in the feature B and obtain a feature II; specifically, the Laplace edge detection module in the second branch is used for extracting texture information in the feature B with 16 channels; the Add layer in the second branch is used for adding, element by element, the feature B output by the Dense connection module Dense and the texture-information feature output by the Laplace edge detection module to obtain a texture-enhanced feature; the first convolution module III in the second branch is used for extracting information from the texture-enhanced feature and expanding the channels to obtain a feature with 32 channels; the second convolution module III in the second branch is used for extracting information from the 32-channel feature output by the first convolution module III; the third convolution module III in the second branch is used for extracting information from the 32-channel feature output by the second convolution module III and outputting the feature II with 32 channels;
The third branch of the texture enhancement module TEM consists of a Sobel edge detector and a convolution module IV; the convolution module IV consists of a convolution layer with the convolution kernel size of 1 multiplied by 1 and an LReLU activation layer;
the third branch of the texture enhancement module TEM is used for extracting texture information in the feature B to obtain a feature III; specifically, the Sobel edge detector in the third branch of the texture enhancement module TEM is used for extracting texture information in the feature B with the channel number of 16, so as to obtain a feature containing the texture information; the convolution module IV in the third branch is used for extracting information in the characteristics including texture information output by the Sobel edge detector and outputting characteristics III with the channel number of 16;
the fourth branch of the texture enhancement module TEM comprises a convolution layer with a convolution kernel size of 1 multiplied by 1 and an LReLU activation layer; the fourth branch of the texture enhancement module TEM is used for extracting information from the feature B with 16 channels to obtain a feature IV with 16 channels; the purpose of the fourth branch in the texture enhancement module TEM is to extract features while effectively alleviating the problems of gradient explosion and gradient vanishing during training of the image fusion network;
In the application, a first branch, a second branch, a third branch and a fourth branch of a texture enhancement module TEM are all connected with a Concat layer II of the texture enhancement module TEM, and the Concat layer II of the texture enhancement module TEM is used for splicing a characteristic I output by the first branch, a characteristic II output by the second branch, a characteristic III output by the third branch and a characteristic IV output by the fourth branch to obtain a characteristic C with 96 channels.
The network structure and the working principle of the texture enhancement module TEM in the branch B are the same as those of the texture enhancement module TEM in the branch A; the input characteristic of the texture enhancement module TEM in the branch B is the characteristic E output by the Dense connection module Dense in the branch B, and the output characteristic of the texture enhancement module TEM in the branch B is the characteristic F with 96 channels.
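The following sketch illustrates a texture enhancement module TEM with the four branches and channel widths described above (32 + 32 + 16 + 16 = 96 channels for a 16-channel input); the fixed Laplacian and Sobel kernels, the summed absolute Sobel responses and the LReLU slope of 0.2 are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def _edge_filter(kernel_2d, channels):
    """Depthwise convolution with a fixed, non-trainable edge-detection kernel."""
    weight = kernel_2d.view(1, 1, 3, 3).repeat(channels, 1, 1, 1)
    conv = nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False)
    conv.weight = nn.Parameter(weight, requires_grad=False)
    return conv

class TEM(nn.Module):
    """Sketch of the texture enhancement module with four branches and Concat layer II."""

    def __init__(self, channels: int = 16):
        super().__init__()
        lrelu = lambda: nn.LeakyReLU(0.2, inplace=True)
        conv = lambda cin, cout, k: nn.Sequential(nn.Conv2d(cin, cout, k, padding=k // 2), lrelu())
        # Branch 1: plain feature extraction, 16 -> 32 channels (3x3, 3x3, 1x1).
        self.branch1 = nn.Sequential(conv(channels, 32, 3), conv(32, 32, 3), conv(32, 32, 1))
        # Branch 2: Laplacian edges added back to the input (Add layer), then 16 -> 32.
        self.laplace = _edge_filter(torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]), channels)
        self.branch2 = nn.Sequential(conv(channels, 32, 3), conv(32, 32, 3), conv(32, 32, 1))
        # Branch 3: Sobel edge detector followed by a 1x1 convolution, 16 -> 16.
        self.sobel_x = _edge_filter(torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]), channels)
        self.sobel_y = _edge_filter(torch.tensor([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]]), channels)
        self.branch3 = conv(channels, 16, 1)
        # Branch 4: 1x1 convolution acting as a shortcut, 16 -> 16.
        self.branch4 = conv(channels, 16, 1)

    def forward(self, x):
        f1 = self.branch1(x)
        f2 = self.branch2(x + self.laplace(x))                       # Add layer: element-wise sum
        f3 = self.branch3(self.sobel_x(x).abs() + self.sobel_y(x).abs())
        f4 = self.branch4(x)
        return torch.cat([f1, f2, f3, f4], dim=1)                    # Concat layer II: 96 channels

print(TEM()(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 96, 64, 64])
```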
The image reconstruction network, as shown in fig. 5, includes four convolution modules V, where the first three convolution modules V (i.e., the first convolution module V, the second convolution module V and the third convolution module V) each include a convolution layer with a convolution kernel size of 3×3 and an LReLU activation layer, and the fourth convolution module V includes a convolution layer with a convolution kernel size of 3×3 and a Tanh activation layer;
The first convolution module V in the image reconstruction network is used for carrying out feature fusion on the spliced features with 192 channels output by the Concat layer I and shrinking the channels to obtain features with 96 channels; the second convolution module V is used for carrying out feature fusion on the features output by the first convolution module V and shrinking the channels to obtain features with 48 channels; the third convolution module V is used for carrying out feature fusion on the features output by the second convolution module V and shrinking the channels to obtain features with 16 channels; the fourth convolution module V is used for carrying out feature fusion on the features output by the third convolution module V and shrinking the channels, and constraining the output tensor values to the range [-1, 1], so as to finally obtain a gray fused image of the Y-channel image and the infrared image.
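A minimal sketch of the image reconstruction network IRM with the channel schedule 192 → 96 → 48 → 16 → 1 described above is given below; the LReLU slope of 0.2 is an assumption.

```python
import torch
import torch.nn as nn

class IRM(nn.Module):
    """Sketch of the image reconstruction network: 3x3 convolutions, LReLU on the
    first three modules and Tanh on the last so the fused output lies in [-1, 1]."""

    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(192, 96, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(96, 48, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(48, 16, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, spliced_features: torch.Tensor) -> torch.Tensor:
        return self.layers(spliced_features)

print(IRM()(torch.randn(1, 192, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```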
Step S4: under the guidance of a mask-based self-attention loss function, the image fusion network is trained by utilizing a training set, and an image fusion network model is obtained. The method specifically comprises the following steps:
s4-1: converting a visible light image in a training set of a fusion network into a YCbCr image by using an existing image space conversion module, and then separating out a Y-channel image of the visible light image;
s4-2: inputting the infrared image in the training set of the fusion network and the Y-channel image obtained in step S4-1 into the image fusion network to obtain a single-channel fused image; then, carrying out back propagation under the guidance of the mask-based self-attention loss calculated by the mask-based self-attention loss function, updating the weights of the image fusion network, and iterating 100 times to complete the training process of the image fusion network, thereby obtaining the image fusion network model.
In the process of training the image fusion network, before the infrared image and the visible light image in the training set are input into the image fusion network, the Y-channel image of the visible light image in the training set is first extracted, the sizes of the infrared image and the Y-channel image are then uniformly resized to a width and height of 512×512, and the images are then input into the image fusion network. The application also uses an Adam optimizer to train the image fusion network model, with the batch size set to 4, the learning rate set to 0.001 and the number of iterations set to 100, and the image fusion network is implemented on the PyTorch platform.
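The training procedure can be sketched as follows; fusion_net, mask_loss_fn and train_set are placeholder names, the two-input network signature is an assumption, and "iterating 100 times" is read here as 100 passes over the training set.

```python
import torch
from torch.utils.data import DataLoader

# `fusion_net`, `mask_loss_fn` and `train_set` stand for the image fusion network,
# the mask-based self-attention loss and the RoadScene training pairs
# (infrared image, visible Y channel, infrared mask, visible mask), all 512x512.
def train(fusion_net, mask_loss_fn, train_set, epochs=100, device="cuda"):
    loader = DataLoader(train_set, batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(fusion_net.parameters(), lr=1e-3)
    fusion_net.to(device).train()
    for _ in range(epochs):
        for ir, vis_y, m_ir, m_vi in loader:
            ir, vis_y, m_ir, m_vi = (t.to(device) for t in (ir, vis_y, m_ir, m_vi))
            fused = fusion_net(ir, vis_y)                    # single-channel fused image
            loss = mask_loss_fn(fused, ir, vis_y, m_ir, m_vi)
            optimizer.zero_grad()
            loss.backward()                                  # back propagation
            optimizer.step()                                 # weight update
    return fusion_net
```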
The mask-based self-attention loss function L_total in the present application is used for calculating the loss of the image fusion network; the mask-based self-attention loss in the present application includes pixel loss, gradient loss and structural similarity loss, and is calculated by the mask-based self-attention loss function L_total.
The pixel loss in the application comprises background pixel loss, infrared image mask pixel loss and visible light image mask pixel loss, which are calculated by the background pixel loss function L_pixel_bg, the infrared image mask pixel loss function L_pixel_ir and the visible light image mask pixel loss function L_pixel_vi, respectively; their calculation formulas are shown in formula (1), formula (2) and formula (3), respectively.
In the process of training or verifying the image fusion network, before the infrared image and the visible light image in the training set are input into the image fusion network, the Y-channel image of the visible light image is first extracted, the sizes of the infrared image and the Y-channel image are then uniformly resized to a width and height of 512×512, and the images are then input into the image fusion network;
L_pixel_bg = (1/(H·W)) · ‖ I_f − max(I_ir, I_vi) ‖₁    (1)
L_pixel_ir = (1/(H·W)) · ‖ m_ir ⊙ I_f − m_ir ⊙ I_ir ‖₁    (2)
L_pixel_vi = (1/(H·W)) · ‖ m_vi ⊙ I_f − m_vi ⊙ I_vi ‖₁    (3)
In formula (1), ‖·‖₁ is the ℓ1 norm, I_f is the gray fused image generated by the image fusion network, I_ir is the infrared image input to the image fusion network, I_vi is the Y-channel image of the visible light image input to the image fusion network, and max(·,·) is the element-wise maximum; H and W respectively represent height and width, and the sizes of the infrared image, the Y-channel image of the visible light image and the fused image output by the image fusion network are consistent, with H being 512 and W being 512;
In formula (2), ‖·‖₁ is the ℓ1 norm, ⊙ denotes pixel-by-pixel multiplication, I_f is the gray fused image generated by the image fusion network, I_ir is the infrared image input to the image fusion network, and m_ir is the mask of the target in the infrared image; H and W respectively represent height and width, with H being 512 and W being 512 in the application;
In formula (3), ‖·‖₁ is the ℓ1 norm, H and W respectively represent height and width, with H being 512 and W being 512 in the application; ⊙ denotes pixel-by-pixel multiplication, m_vi is the mask of the target in the visible light image, I_f is the gray fused image generated by the image fusion network, and I_vi is the Y-channel image of the visible light image input to the image fusion network;
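A sketch of the three pixel-loss terms under the notation above is given below; the element-wise-maximum form of the background term in formula (1) and the single-image tensor shape (1, 1, H, W) are assumptions.

```python
import torch

def pixel_losses(fused, ir, vis_y, m_ir, m_vi):
    """Sketch of formulas (1)-(3) for tensors of shape (1, 1, H, W)."""
    hw = fused.shape[-2] * fused.shape[-1]                     # H x W, 512 x 512 in the text
    l_bg = (fused - torch.maximum(ir, vis_y)).abs().sum() / hw  # formula (1), assumed max form
    l_ir = (m_ir * fused - m_ir * ir).abs().sum() / hw          # formula (2)
    l_vi = (m_vi * fused - m_vi * vis_y).abs().sum() / hw       # formula (3)
    return l_bg, l_ir, l_vi
```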
the gradient loss in the application comprises background gradient loss, infrared image mask gradient loss and visible light image mask gradient loss, which are calculated by the background gradient loss function L_grad_bg, the infrared image mask gradient loss function L_grad_ir and the visible light image mask gradient loss function L_grad_vi, respectively; their calculation formulas are shown in formula (4), formula (5) and formula (6), respectively:
L_grad_bg = (1/(H·W)) · ‖ |∇I_f| − max(|∇I_ir|, |∇I_vi|) ‖₁    (4)
L_grad_ir = (1/(H·W)) · ‖ m_ir ⊙ ∇I_f − m_ir ⊙ ∇I_ir ‖₁    (5)
L_grad_vi = (1/(H·W)) · ‖ m_vi ⊙ ∇I_f − m_vi ⊙ ∇I_vi ‖₁    (6)
In formula (4), ‖·‖₁ is the ℓ1 norm, H and W respectively represent height and width, with H being 512 and W being 512 in the application; ∇ denotes the gradient operator; |∇I_f| denotes operating on the gray fused image generated by the image fusion network with the gradient operator and then taking the absolute value; |∇I_ir| denotes operating on the infrared image input to the image fusion network with the gradient operator and then taking the absolute value; |∇I_vi| denotes operating on the Y-channel image of the visible light image input to the image fusion network with the gradient operator and then taking the absolute value; max(|∇I_ir|, |∇I_vi|) denotes operating on the infrared image and the Y-channel image of the visible light image with the gradient operator, taking the absolute values, and then taking the element-wise maximum;
In formula (5), ∇ denotes the gradient operator, ‖·‖₁ is the ℓ1 norm, H and W respectively represent height and width, with H being 512 and W being 512 in the application; ⊙ denotes pixel-by-pixel multiplication; ∇I_f is the gradient obtained by operating on the gray fused image generated by the image fusion network with the gradient operator; ∇I_ir is the gradient obtained by operating on the infrared image input to the image fusion network with the gradient operator; m_ir is the mask of the target in the infrared image;
In formula (6), ∇ denotes the gradient operator, ‖·‖₁ is the ℓ1 norm, H and W respectively represent height and width, with H being 512 and W being 512 in the application; ⊙ denotes pixel-by-pixel multiplication; m_vi is the mask of the target in the visible light image; ∇I_f is the gradient obtained by operating on the gray fused image generated by the image fusion network with the gradient operator; ∇I_vi is the gradient obtained by operating on the Y-channel image of the visible light image input to the image fusion network with the gradient operator.
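A sketch of the gradient-loss terms in formulas (4)-(6) follows; using the Sobel operator as the gradient operator ∇, and returning its absolute response, are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def sobel_gradient(img):
    """Simple gradient operator (Sobel magnitude) for (1, 1, H, W) tensors,
    used here as a stand-in for the gradient operator in formulas (4)-(6)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(-1, -2)
    gx = F.conv2d(img, kx.to(img), padding=1)
    gy = F.conv2d(img, ky.to(img), padding=1)
    return gx.abs() + gy.abs()

def gradient_losses(fused, ir, vis_y, m_ir, m_vi):
    hw = fused.shape[-2] * fused.shape[-1]
    gf, gi, gv = sobel_gradient(fused), sobel_gradient(ir), sobel_gradient(vis_y)
    l_bg = (gf - torch.maximum(gi, gv)).abs().sum() / hw   # formula (4)
    l_ir = (m_ir * gf - m_ir * gi).abs().sum() / hw        # formula (5)
    l_vi = (m_vi * gf - m_vi * gv).abs().sum() / hw        # formula (6)
    return l_bg, l_ir, l_vi
```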
The structural similarity loss comprises background structural similarity loss, infrared image mask structural similarity loss and visible light image mask structural similarity loss, which are calculated by the background structural similarity loss function L_ssim_bg, the infrared image mask structural similarity loss function L_ssim_ir and the visible light image mask structural similarity loss function L_ssim_vi, respectively; their calculation formulas are shown in formula (7), formula (8) and formula (9), respectively:
L_ssim_bg = (1 − SSIM(I_f, I_ir)) + (1 − SSIM(I_f, I_vi))    (7)
L_ssim_ir = 1 − SSIM(m_ir ⊙ I_f, m_ir ⊙ I_ir)    (8)
L_ssim_vi = 1 − SSIM(m_vi ⊙ I_f, m_vi ⊙ I_vi)    (9)
In formula (7), SSIM(I_f, I_ir) is the structural similarity between the gray fused image generated by the image fusion network and the infrared image, SSIM(I_f, I_vi) is the structural similarity between the gray fused image generated by the image fusion network and the Y-channel image of the visible light image, I_f is the gray fused image generated by the image fusion network, I_ir is the infrared image input to the image fusion network, and I_vi is the Y-channel image of the visible light image;
In formula (8), ⊙ denotes pixel-by-pixel multiplication, I_f is the gray fused image generated by the image fusion network, I_ir is the infrared image input to the image fusion network, and m_ir is the mask of the target in the infrared image; m_ir ⊙ I_f is the region corresponding to the mask in the gray fused image, obtained by multiplying the gray fused image by the mask of the target in the infrared image pixel by pixel; m_ir ⊙ I_ir is the region corresponding to the mask in the infrared image, obtained by multiplying the infrared image by the mask of the target in the infrared image pixel by pixel; SSIM(m_ir ⊙ I_f, m_ir ⊙ I_ir) is the structural similarity between the region corresponding to the mask in the gray fused image and the region corresponding to the mask in the infrared image;
In formula (9), ⊙ denotes pixel-by-pixel multiplication, m_vi is the mask of the target in the Y-channel image of the visible light image, I_f is the gray fused image generated by the image fusion network, and I_vi is the Y-channel image of the visible light image input to the image fusion network; m_vi ⊙ I_f is the region corresponding to the mask in the gray fused image, obtained by multiplying the gray fused image by the mask of the target in the Y-channel image of the visible light image pixel by pixel; m_vi ⊙ I_vi is the region corresponding to the mask in the Y-channel image of the visible light image, obtained by multiplying the Y-channel image of the visible light image by the mask of the target in the Y-channel image of the visible light image pixel by pixel; SSIM(m_vi ⊙ I_f, m_vi ⊙ I_vi) is the structural similarity between the region corresponding to the mask in the gray fused image and the region corresponding to the mask in the Y-channel image of the visible light image.
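A sketch of the structural-similarity terms in formulas (7)-(9) follows; the 1 − SSIM loss form and the use of the third-party pytorch_msssim package (an assumed dependency) are not specified by the source.

```python
import torch
from pytorch_msssim import ssim  # third-party SSIM implementation (assumed dependency)

def ssim_losses(fused, ir, vis_y, m_ir, m_vi, data_range=1.0):
    """Sketch of formulas (7)-(9) for (1, 1, H, W) tensors."""
    l_bg = (1 - ssim(fused, ir, data_range=data_range)) + \
           (1 - ssim(fused, vis_y, data_range=data_range))              # formula (7)
    l_ir = 1 - ssim(m_ir * fused, m_ir * ir, data_range=data_range)      # formula (8)
    l_vi = 1 - ssim(m_vi * fused, m_vi * vis_y, data_range=data_range)   # formula (9)
    return l_bg, l_ir, l_vi
```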
The above losses can also be grouped by loss type into background loss, infrared mask loss and visible light mask loss, so that in the application the background loss, the infrared mask loss and the visible light mask loss can be calculated by the background loss function L_bg, the infrared mask loss function L_ir and the visible light mask loss function L_vi, respectively, as shown in formula (10), formula (11) and formula (12):
L_bg = L_pixel_bg + L_grad_bg + L_ssim_bg    (10)
L_ir = L_pixel_ir + L_grad_ir + L_ssim_ir    (11)
L_vi = L_pixel_vi + L_grad_vi + L_ssim_vi    (12)
In formula (10), L_pixel_bg is the background pixel loss function, L_grad_bg is the background gradient loss function, and L_ssim_bg is the background structural similarity loss function;
In formula (11), L_pixel_ir is the infrared image mask pixel loss function, L_grad_ir is the infrared image mask gradient loss function, and L_ssim_ir is the infrared image mask structural similarity loss function;
In formula (12), L_pixel_vi is the visible light image mask pixel loss function, L_grad_vi is the visible light image mask gradient loss function, and L_ssim_vi is the visible light image mask structural similarity loss function;
The relation between the mask-based self-attention loss function L_total and the infrared mask loss function L_ir, the visible light mask loss function L_vi and the background loss function L_bg is shown in formula (13):
L_total = α · L_ir + β · L_vi + L_bg    (13)
In formula (13), α and β are hyperparameters that control the balance between the infrared mask loss function and the visible light mask loss function, and the values of α and β are both 1.
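Building on the pixel_losses, gradient_losses and ssim_losses sketches above, the combination in formulas (10)-(13) can be sketched as follows, with α = β = 1; equal weighting of the three terms within each group is an assumption.

```python
def mask_based_self_attention_loss(fused, ir, vis_y, m_ir, m_vi, alpha=1.0, beta=1.0):
    """Sketch of formulas (10)-(13), reusing the loss sketches defined above."""
    p_bg, p_ir, p_vi = pixel_losses(fused, ir, vis_y, m_ir, m_vi)
    g_bg, g_ir, g_vi = gradient_losses(fused, ir, vis_y, m_ir, m_vi)
    s_bg, s_ir, s_vi = ssim_losses(fused, ir, vis_y, m_ir, m_vi)
    loss_bg = p_bg + g_bg + s_bg                 # formula (10)
    loss_ir = p_ir + g_ir + s_ir                 # formula (11)
    loss_vi = p_vi + g_vi + s_vi                 # formula (12)
    return alpha * loss_ir + beta * loss_vi + loss_bg   # formula (13)
```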
Testing the image fusion network model obtained in step S4:
before the infrared image and the visible light image in the test set are input into the image fusion network, the Y-channel image, the Cb-channel image and the Cr-channel image of the visible light image in the test set are respectively extracted, the sizes of the infrared image and the Y-channel image are then uniformly resized to a width and height of 512×512, and the infrared image and the Y-channel image are then input into the image fusion network for one forward propagation.
In order to intuitively show the test effect of the application, the application also uses a format conversion module to combine the gray fused image generated by the image fusion network during testing with the Cb-channel image and the Cr-channel image of the visible light image in the test set to obtain a YCbCr color-space image, and then converts it to the RGB color space to obtain a color fused image.
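The recoloring step can be sketched with OpenCV as follows; note that OpenCV stores the channels in Y, Cr, Cb order, and it is assumed here that the fused output has already been rescaled to [0, 255].

```python
import cv2
import numpy as np

def colorize_fusion(fused_y, vis_bgr):
    """Sketch of the format conversion step: the grayscale fused image replaces
    the Y channel of the visible image and the result is converted back to a
    color image (OpenCV keeps the channels in Y, Cr, Cb order)."""
    y, cr, cb = cv2.split(cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2YCrCb))
    fused_y = cv2.resize(fused_y, (cr.shape[1], cr.shape[0])).astype(np.uint8)
    fused_ycrcb = cv2.merge([fused_y, cr, cb])
    return cv2.cvtColor(fused_ycrcb, cv2.COLOR_YCrCb2BGR)

# Example: fused_y is the 512x512 output of the fusion network rescaled to [0, 255],
# vis_bgr is the original visible image read with cv2.imread.
```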
In order to compare the fusion effect of the mask-based image fusion method according to the present application, the application specifically uses the test set divided above to test five existing image fusion methods, namely the U2Fusion image fusion method (from the paper "U2Fusion: A unified unsupervised image fusion network"), the PIAFusion image fusion method (from the paper "PIAFusion: A progressive infrared and visible image fusion network based on illumination aware"), the FusionGAN image fusion method (from the paper "FusionGAN: A generative adversarial network for infrared and visible image fusion"), the STDFusion image fusion method (from the paper "STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection") and the DIVFusion image fusion method (from the paper "DIVFusion: Darkness-free infrared and visible image fusion"), as well as the mask-based image fusion method according to the present application; the test results are shown in Table 1 and FIG. 6.
TABLE 1
In Table 1, PSNR represents the peak signal-to-noise ratio, which measures the pixel-level distortion between the fused image and the source images, with a higher value indicating smaller distortion; CC represents the correlation coefficient, which evaluates the degree of linear correlation between the fused image and the source images, with a value closer to 1 indicating stronger correlation; Q^AB/F represents the edge retention quality, which measures how well edge information is retained during fusion, with a higher value indicating that edge information is better transferred into the fused image; SSIM represents the structural similarity index, which reflects the structural similarity between the fused image and the source images, with a value closer to 1 indicating higher structural similarity; MS_SSIM represents the multi-scale structural similarity, which comprehensively considers the structural similarity at multiple scales and is used to comprehensively evaluate the distortion of the fused image; Ours represents the mask-based image fusion method of the present application.
The application evaluates the fusion effect of the mask-based image fusion method of the present application and of the above five existing image fusion methods with the five indices PSNR (peak signal-to-noise ratio), CC (correlation coefficient), Q^AB/F (edge preservation quality), SSIM (structural similarity index) and MS_SSIM (multi-scale structural similarity).
As can be seen from Table 1, the mask-based image fusion method of the present application achieves good results on the five evaluation indices PSNR, CC, Q^AB/F, SSIM and MS_SSIM, specifically:
PSNR index comparison: compared with the U2Fusion image Fusion method with higher PSNR index in the prior art, the mask-based image Fusion method achieves 64.4236 on the PSNR index, and improves (64.4236-63.4799)/63.4799 multiplied by 100% = 1.48%; this illustrates that the mask-based image fusion method described in the present application has less pixel-level distortion between the fused image and the source image when image fusion is performed;
CC index comparison: compared with the U2Fusion image Fusion method with higher CC index in the prior art, the mask-based image Fusion method achieves 0.6676 in CC index, and is improved by (0.6676-0.6428)/0.6428 multiplied by 100% = 3.85%; this illustrates that the mask-based image fusion method of the present application has a higher degree of linear correlation between the fused image and the source image when performing image fusion;
Q^AB/F index comparison: compared with the U2Fusion image fusion method with the higher Q^AB/F index in the prior art, the mask-based image fusion method of the application reaches 0.5383 on the Q^AB/F index, improving by (0.5383-0.4873)/0.4873 multiplied by 100% = 10.46%; this illustrates that the mask-based image fusion method described in the present application better transfers the edge information of the source images to the fused image when image fusion is performed;
SSIM index contrast: compared with the U2Fusion image Fusion method with higher SSIM index in the prior art, the mask-based image Fusion method achieves 0.9416 on the SSIM index, and improves (0.9416-0.9275)/0.9275 multiplied by 100% = 1.52%; this illustrates that the mask-based image fusion method of the present application has higher structural similarity between the fused image and the source image when performing image fusion;
ms_ssim index contrast: compared with the U2Fusion image Fusion method with higher MS_SSIM index in the prior art, the mask-based image Fusion method achieves 0.9125 on the MS_SSIM index, and improves (0.9125-0.8960)/0.8960 multiplied by 100% = 1.84%; the image fusion method has high multi-scale structure similarity when the image fusion is carried out.
It can also be seen from fig. 6 that the fused image generated by the image fusion method of the application is visually superior to those of the five existing image fusion methods in terms of texture detail; in addition, the fused image generated by the image fusion method of the application is clearer and has better brightness control: on the premise of ensuring the visual effect, it retains strong light information and maintains a certain contrast while controlling brightness, and it better conforms to the semantics expressed in the source images.
In order to compare the advantages of the above five existing image fusion methods and of the fused images generated by the mask-based image fusion method in terms of target detection relative to the source images (i.e., the unfused infrared and visible light images), the application first uses the five existing image fusion methods (the U2Fusion, PIAFusion, FusionGAN, STDFusion and DIVFusion image fusion methods) and the mask-based image fusion method to generate fused images from all infrared and visible light images with target detection labels in the MSRS dataset (the MSRS dataset is available at https://github.com/Linfeng-Tang/MSRS); then, the existing Yolov5 target detection network (available at https://github.com/iscyy/yolair) is used to perform target detection tests on the fused images generated by the five existing image fusion methods and by the mask-based image fusion method; the test results are shown in Table 2 and fig. 7, and the detected targets are marked by the detection frames in the images shown in fig. 7.
The target detection tests performed with the Yolov5 target detection network on the fused images generated by the five existing image fusion methods and by the mask-based image fusion method are denoted in Table 2 by the names of the respective image fusion methods, as shown in the third to ninth rows of the first column of Table 2. In addition, in Table 2, Ours refers to the target detection test on the fused image generated by the image fusion method of the present application; Infrared means that the Yolov5 target detection network is used to perform the target detection test on all infrared images with target detection labels in the MSRS dataset; Visible means that the Yolov5 target detection network is used to perform the target detection test on all visible light images with target detection labels in the MSRS dataset.
TABLE 2
Method      F1      Precision   Recall   mAP@.5   mAP@.5:.95
Infrared    0.789   0.929       0.686    0.813    0.563
Visible     0.828   0.884       0.780    0.808    0.529
U2Fusion    0.871   0.928       0.822    0.900    0.612
FusionGAN   0.786   0.861       0.724    0.821    0.573
STDFusion   0.781   0.844       0.727    0.786    0.507
DIVFusion   0.853   0.919       0.797    0.878    0.584
PIAFusion   0.885   0.934       0.842    0.908    0.622
Ours        0.893   0.939       0.852    0.909    0.641
In Table 2, F1 represents the harmonic mean of the precision and recall of the Yolov5 neural network model; Precision represents the precision, i.e., the proportion of the data predicted to be correct whose true value is indeed correct; Recall represents the recall rate, i.e., the proportion of all data whose true value is correct that is predicted to be correct; mAP@.5 represents the mean average precision of the Yolov5 neural network model when IOU = 0.5; mAP@.5:.95 represents the value obtained by calculating an average precision at every IOU step of 0.05 from 0.5 to 0.95 and then averaging all these average precisions.
As can be seen from Table 2, the detection test performed with the Yolov5 target detection network on the fused image generated by the mask-based image fusion method achieves good results on the five evaluation indices F1, Precision, Recall, mAP@.5 and mAP@.5:.95, specifically:
F1 index comparison: the F1 index obtained by detecting the fused image generated by the image fusion method of the application with the Yolov5 target detection network reaches 0.893, an improvement of (0.893-0.885)/0.885 × 100% = 0.90% over the F1 index obtained by detecting the fused image generated by the PIAFusion image fusion method with the Yolov5 target detection network;
precision index comparison: the precision index obtained by detecting the fusion image generated by the image fusion method by using the Yolov5 target detection network reaches 0.939, and compared with the precision index obtained by detecting the fusion image generated by the PIAFusion image fusion method by using the Yolov5 target detection network, the precision index is improved by (0.939-0.934)/0.934 multiplied by 100% = 0.53%;
Recall index comparison: the Recall index obtained by detecting the fused image generated by the image fusion method of the application with the Yolov5 target detection network reaches 0.852, an improvement of (0.852-0.842)/0.842 × 100% = 1.18% over the Recall index obtained by detecting the fused image generated by the PIAFusion image fusion method with the Yolov5 target detection network;
mAP@.5 index comparison: the mAP@.5 index obtained by detecting the fused image generated by the mask-based image fusion method with the Yolov5 target detection network reaches 0.909, an improvement of (0.909-0.908)/0.908 × 100% = 0.11% over the mAP@.5 index obtained by detecting the fused image generated by the PIAFusion image fusion method with the Yolov5 target detection network;
mAP@.5:.95 index comparison: the mAP@.5:.95 obtained by detecting the fused image generated by the image fusion method of the application with the Yolov5 target detection network reaches 0.641, an improvement of (0.641-0.622)/0.622 × 100% = 3.05% over the mAP@.5:.95 obtained by detecting the fused image generated by the PIAFusion image fusion method with the Yolov5 target detection network.
In conclusion, the detection tests performed with the Yolov5 target detection network on the fused image generated by the mask-based image fusion method achieve good results on the five evaluation indices F1, Precision, Recall, mAP@.5 and mAP@.5:.95, which shows that the fused image generated by the image fusion method of the application has high precision and a better fusion effect in the target area when target detection is performed.
In addition, it can be seen from fig. 7 that the fused image generated by the image fusion method of the application performs better in detection; for example, the targets in the fused image obtained by the image fusion method of the application are clearer, such as the texture details on the person, and the fused image obtained by the image fusion method of the application is better than the five existing image fusion methods at handling small targets; for example, a small car behind the person in the background and two tiny shadows late at night can be detected in the fused image obtained by the image fusion method of the application.
The above disclosure is merely illustrative of specific embodiments of the present application, but the present application is not limited thereto, and any variations that can be considered by those skilled in the art should fall within the scope of the present application.

Claims (7)

1. A mask-based image fusion method is characterized in that: the method comprises the following steps:
step S1: acquiring a training set and a testing set which comprise infrared images and visible light images;
step S2: detecting targets in the infrared image and the visible light image in the training set, recording coordinates of a target detection frame, and then respectively constructing a mask of the targets in the infrared image and a mask of the targets in the visible light image by using a mask generating method;
Step S3: constructing an image fusion network, wherein the image fusion network comprises a feature extraction network, a Concat layer I and an image reconstruction network IRM; the feature extraction network is used for extracting features of the infrared image and of the Y-channel image of the visible light image; the Concat layer I is used for splicing the features of the infrared image extracted by the feature extraction network and the features of the Y-channel image to obtain spliced features; the image reconstruction network IRM is used for fusing the spliced features to obtain a fused image;
step S4: under the guidance of a mask-based self-attention loss function, the image fusion network is trained by utilizing a training set, and an image fusion network model is obtained.
2. A mask-based image fusion method according to claim 1, wherein: in step S3, the feature extraction network has a dual-branch structure, and branch A and branch B are each composed of a convolution layer with a convolution kernel size of 1×1, a Dense connection module Dense, and a texture enhancement module TEM.
3. A mask-based image fusion method according to claim 2, characterized in that: in step S3, the Dense connection module Dense includes three convolution modules I sequentially connected, where each convolution module I is composed of a convolution layer with a convolution kernel size of 3×3 and an LReLU activation layer; and the three convolution modules I in the Dense connection module Dense are connected in a Dense connection mode.
4. A mask-based image fusion method according to claim 2, characterized in that: in step S3, the texture enhancement module TEM includes a first branch, a second branch, a third branch, a fourth branch, and a Concat layer II; the first branch comprises three convolution modules II, wherein the first two convolution modules II are each composed of a convolution layer with a convolution kernel size of 3 multiplied by 3 and an LReLU activation layer, and the third convolution module II is composed of a convolution layer with a convolution kernel size of 1 multiplied by 1 and an LReLU activation layer; the second branch comprises a Laplace edge detection module, an Add layer and three convolution modules III which are sequentially connected, wherein the first two convolution modules III are each composed of a convolution layer with a convolution kernel size of 3 multiplied by 3 and an LReLU activation layer, and the third convolution module III is composed of a convolution layer with a convolution kernel size of 1 multiplied by 1 and an LReLU activation layer; the Laplace edge detection module is connected with the Dense connection module Dense, and the Dense connection module Dense and the Laplace edge detection module are both connected with the Add layer; the third branch is composed of a Sobel edge detector and a convolution module IV; the convolution module IV consists of a convolution layer with a convolution kernel size of 1 multiplied by 1 and an LReLU activation layer; the fourth branch comprises a convolution layer with a convolution kernel size of 1×1 and an LReLU activation layer; the first branch, the second branch, the third branch and the fourth branch are all connected with a Concat layer II, and the Concat layer II is used for splicing the feature I output by the first branch, the feature II output by the second branch, the feature III output by the third branch and the feature IV output by the fourth branch.
5. A mask-based image fusion method according to claim 1, wherein: in step S3, the image reconstruction network comprises four convolution modules V; the first three convolution modules V each comprise a convolution layer with a convolution kernel size of 3×3 and an LReLU activation layer, and the fourth convolution module V comprises a convolution layer with a convolution kernel size of 3×3 and a Tanh activation layer.
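A sketch of the image reconstruction network of claim 5; in_ch would match the width of the concatenated branch features, and the channel widths shown are illustrative.

```python
import torch.nn as nn

def _conv_lrelu(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.LeakyReLU(0.2, inplace=True))

class IRM(nn.Module):
    """Image reconstruction network: three 3x3-conv + LReLU modules followed by
    a 3x3-conv + Tanh module that outputs the single-channel fused image."""
    def __init__(self, in_ch: int = 256, mid_ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            _conv_lrelu(in_ch, mid_ch),
            _conv_lrelu(mid_ch, mid_ch),
            _conv_lrelu(mid_ch, mid_ch),
            nn.Conv2d(mid_ch, 1, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.body(x)
```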
6. A mask-based image fusion method according to claim 1, wherein: step S4 specifically comprises the following steps: S4-1: converting a visible light image in the training set of the image fusion network into a YCbCr image by using an image space conversion module, and then separating out the Y-channel image of the visible light image; S4-2: inputting the infrared image in the training set of the image fusion network and the Y-channel image obtained in step S4-1 into the image fusion network to obtain a single-channel fused image; then performing back propagation under the guidance of the mask-based self-attention loss calculated by the mask-based self-attention loss function, updating the weights of the image fusion network, and iterating 100 times to complete the training of the image fusion network, thereby obtaining the image fusion network model.
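A sketch of the training loop of claim 6, under stated assumptions: the visible image is converted with the standard BT.601 luma coefficients (the claims do not detail the image space conversion module), the data loader is assumed to supply the infrared image, the visible RGB image and the target mask, and the 100 iterations are read here as 100 passes over the training set.

```python
import torch

def rgb_to_y(rgb: torch.Tensor) -> torch.Tensor:
    """Y (luma) channel of an RGB batch in [0, 1] via BT.601 coefficients
    (an assumed stand-in for the patent's conversion module)."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b

def train(model, loader, loss_fn, epochs: int = 100, lr: float = 1e-4):
    """Step S4 sketch: forward pass, mask-based loss, back-propagation,
    repeated for 100 passes over the training set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for ir, vis_rgb, mask in loader:   # infrared, visible RGB, target mask (assumed loader)
            vis_y = rgb_to_y(vis_rgb)      # step S4-1: Y channel of the visible image
            fused = model(ir, vis_y)       # step S4-2: single-channel fused image
            loss = loss_fn(fused, ir, vis_y, mask)
            opt.zero_grad()
            loss.backward()                # mask-guided back-propagation
            opt.step()
```

Here loss_fn would be the mask-based self-attention loss; an illustrative version is sketched under claim 7.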
7. The mask-based image fusion method of claim 6, wherein: in step S4, the mask-based self-attention loss comprises a pixel loss, a gradient loss and a structural similarity loss.
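An illustrative combination of the three claimed loss terms, each emphasised inside the mask. The exact formulation and weights are not given in the claims, and the structural term below is only a local-mean proxy for SSIM to keep the sketch self-contained; a full SSIM implementation would be substituted in practice.

```python
import torch
import torch.nn.functional as F

def mask_based_loss(fused, ir, vis_y, mask, w_pixel=1.0, w_grad=1.0, w_ssim=1.0):
    """Assumed sketch: pixel, gradient and structural terms, each weighted so
    that pixels inside the target mask count more."""
    target = torch.maximum(ir, vis_y)          # assumed per-pixel reference
    weight = 1.0 + mask                        # emphasise masked (salient) regions

    # Pixel loss: mask-weighted L1 between fused image and reference.
    pixel = (weight * (fused - target).abs()).mean()

    # Gradient loss: mask-weighted L1 between finite-difference gradients.
    def grad(img):
        gx = img[..., :, 1:] - img[..., :, :-1]
        gy = img[..., 1:, :] - img[..., :-1, :]
        return gx, gy
    fgx, fgy = grad(fused)
    tgx, tgy = grad(target)
    gradient = ((weight[..., :, 1:] * (fgx - tgx).abs()).mean()
                + (weight[..., 1:, :] * (fgy - tgy).abs()).mean())

    # Structural term: crude local-mean comparison standing in for SSIM.
    mu_f = F.avg_pool2d(fused, 11, stride=1, padding=5)
    mu_t = F.avg_pool2d(target, 11, stride=1, padding=5)
    structural = (weight * (mu_f - mu_t).abs()).mean()

    return w_pixel * pixel + w_grad * gradient + w_ssim * structural
```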
CN202311068607.9A 2023-08-24 2023-08-24 Image fusion method based on mask Active CN116778293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311068607.9A CN116778293B (en) 2023-08-24 2023-08-24 Image fusion method based on mask

Publications (2)

Publication Number Publication Date
CN116778293A (en) 2023-09-19
CN116778293B CN116778293B (en) 2023-12-22

Family

ID=87986390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311068607.9A Active CN116778293B (en) 2023-08-24 2023-08-24 Image fusion method based on mask

Country Status (1)

Country Link
CN (1) CN116778293B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541944A (en) * 2023-11-07 2024-02-09 南京航空航天大学 Multi-mode infrared small target detection method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021139069A1 (en) * 2020-01-09 2021-07-15 南京信息工程大学 General target detection method for adaptive attention guidance mechanism
CN115565035A (en) * 2022-11-08 2023-01-03 长春理工大学 Infrared and visible light image fusion method for night target enhancement
CN115965862A (en) * 2022-12-07 2023-04-14 西安电子科技大学 SAR ship target detection method based on mask network fusion image characteristics
CN116385326A (en) * 2023-03-24 2023-07-04 浙江大学 Multispectral image fusion method, device and equipment based on multi-target segmentation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Zhuo; Fang Ming; Chai Xu; Fu Feiran; Yuan Lihong: "U-GAN model for infrared and visible light image fusion", Journal of Northwestern Polytechnical University, no. 04 *

Also Published As

Publication number Publication date
CN116778293B (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN110378222B (en) Method and device for detecting vibration damper target and identifying defect of power transmission line
CN116778293B (en) Image fusion method based on mask
CN113449727A (en) Camouflage target detection and identification method based on deep neural network
JP2022025008A (en) License plate recognition method based on text line recognition
CN114841244B (en) Target detection method based on robust sampling and mixed attention pyramid
CN113609896A (en) Object-level remote sensing change detection method and system based on dual-correlation attention
Geng et al. An improved helmet detection method for YOLOv3 on an unbalanced dataset
WO2023246921A1 (en) Target attribute recognition method and apparatus, and model training method and apparatus
CN112686261A (en) Grape root system image segmentation method based on improved U-Net
CN110321867A (en) Shelter target detection method based on part constraint network
CN116757988B (en) Infrared and visible light image fusion method based on semantic enrichment and segmentation tasks
CN111881914B (en) License plate character segmentation method and system based on self-learning threshold
CN111275694B (en) Attention mechanism guided progressive human body division analysis system and method
CN117274173A (en) Semantic and structural distillation reference-free image quality evaluation method
CN117197763A (en) Road crack detection method and system based on cross attention guide feature alignment network
CN110930393A (en) Chip material pipe counting method, device and system based on machine vision
CN114550016B (en) Unmanned aerial vehicle positioning method and system based on context information perception
CN108764287B (en) Target detection method and system based on deep learning and packet convolution
CN115527098A (en) Infrared small target detection method based on global mean contrast space attention
CN114998866A (en) Traffic sign identification method based on improved YOLOv4
CN115512174A (en) Anchor-frame-free target detection method applying secondary IoU loss function
CN113610032A (en) Building identification method and device based on remote sensing image
CN117876836A (en) Image fusion method based on multi-scale feature extraction and target reconstruction
CN113888754B (en) Vehicle multi-attribute identification method based on radar vision fusion
CN116309623B (en) Building segmentation method and system with multi-source information fusion enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant