CN110555458A - Multi-band image feature level fusion method based on an attention-mechanism generative adversarial network - Google Patents

Multi-band image feature level fusion method based on an attention-mechanism generative adversarial network

Info

Publication number
CN110555458A
CN110555458A
Authority
CN
China
Prior art keywords
image
feature
multiband
network
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910672081.2A
Other languages
Chinese (zh)
Other versions
CN110555458B (en)
Inventor
蔺素珍
李大威
杨晓莉
王丽芳
田嵩旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN201910672081.2A priority Critical patent/CN110555458B/en
Publication of CN110555458A publication Critical patent/CN110555458A/en
Application granted granted Critical
Publication of CN110555458B publication Critical patent/CN110555458B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Abstract

The invention relates to an image fusion method, in particular to a multiband image fusion method, and specifically to a multiband image feature level fusion method based on an attention-mechanism generative adversarial network. The method comprises the following steps: design and construct a generative adversarial network in which the generator contains a hybrid-attention feature enhancement module, and obtain a generative model through dynamic-balance training of the generator and discriminator. The method realizes end-to-end neural-network fusion of multiband images and markedly improves the detail quality of the fused images.

Description

Multi-band image feature level fusion method based on an attention-mechanism generative adversarial network
Technical Field
The invention relates to an image fusion method, in particular to a multiband image fusion method, and specifically to a multiband image feature level fusion method based on an attention-mechanism generative adversarial network.
Background
Multi-band synchronous detection imaging of the same scene is one of the main technical characteristics of the new generation of high-precision detection systems; its aim is to comprehensively exploit the complementarity of detection information from different bands through image fusion, so as to obtain a more accurate and comprehensive understanding of the scene. Researchers generally divide image fusion into three levels: pixel level, feature level and decision level. Because pixel-level fusion conveniently retains more of the original information, it has been the hot spot of image fusion research for more than a decade. However, an unignorable fact is that fusion at this level is not only computationally expensive but also places extremely high demands on image registration accuracy, which is very unfavorable for applying such algorithms in high-precision detection systems. In contrast, feature-level image fusion not only retains much of the original information but also eliminates part of the redundant information to achieve data compression, and it is therefore widely applied in image data processing tasks such as target recognition, image classification, and target detection and tracking. It is thus particularly necessary to explore multi-band image feature-level fusion in the big-data context.
Feature-level image fusion involves two key technologies: image feature extraction and feature combination. Feature extraction is one of the classic problems of digital image processing. Unlike traditional methods, which require mathematical modeling based on prior knowledge such as target characteristics and imaging mechanisms, deep learning autonomously learns feature representations from large amounts of data and can obtain low-level features and high-level semantic features simultaneously. With the development of deep learning, researchers have explored extracting image features for fusion with neural network models. For example, the paper "Image fusion based on deep stacked convolutional neural network" (vol. 40, no. 11) uses a stacked auto-encoder (SAE) network with preset high-frequency and low-frequency filters to obtain high-frequency and low-frequency feature maps of infrared and visible-light images, but the definition and edge intensity of the fused image are not ideal. DenseFuse ("A fusion approach to infrared and visible images", IEEE Transactions on Image Processing 28(5) (2019) 2614-2623) extracts multi-source image features with an encoding structure of convolution layers and dense blocks; by connecting each convolution layer in a dense block to every subsequent one, it largely solves the information loss caused by convolution operations, but the details of the fused image are still not rich and clear enough. In fact, this problem is handled better in traditional multi-scale transform image fusion research. Therefore, how to guide a neural network to highlight local regions (such as infrared thermal targets and visible-light edges), as multi-scale transforms do, deserves intensive study.
Therefore, a neural-network-based feature-level fusion method is needed that solves the problem of insufficiently rich details in multiband fused images and realizes end-to-end fusion.
Disclosure of Invention
The invention provides a multiband image feature level fusion method based on an attention-mechanism generative adversarial network, aiming at solving the problem that the detail features of fused images are not rich.
The invention is realized by the following technical scheme: a multiband image feature level fusion method based on an attention-mechanism generative adversarial network comprises the following steps:
designing and constructing a generation confrontation network, wherein the confrontation network structure is divided into a generator and a discriminator, and the generator is composed of a feature extraction and enhancement module and a feature fusion module.
The feature extraction and enhancement module uses the first 7 convolution layers and two pooling layers of a VGG-16 network as the backbone network, which serves as the feature extractor. The training-set multiband images pass through the backbone and then undergo global average pooling and reshape to obtain the average information content of the multiband feature maps, reset as a tensor. The multiband feature maps extracted by the backbone are then differenced with the reshaped feature maps; the differences are passed through an activation function to obtain the multiband image weight values; point-multiplying the multiband feature maps with the weight values yields the attention maps; and adding the multiband feature maps to the attention maps constructs the feature enhancement maps. The novel hybrid attention mechanism presented by the invention thus activates the difference between a multiband feature map and its corresponding mean as the attention weight.
The feature fusion module fuses the multi-channel feature enhancement maps along the channel dimension by concatenation; to eliminate the distribution differences of the fused feature maps from different bands, two convolution layers and BN are attached to normalize them to the same distribution; finally, upsampling and three convolution layers recover the original image size and reconstruct the fused image.
Pit the discriminator against the generator: continuously input the fused image and the real image into the discriminator, and optimize the loss functions of the generator and discriminator so that the discriminator output approaches 0.5, i.e., the generator gradually produces plausible data. Training yields the generator model, and image fusion is performed with the trained generator model.
In the above deep-learning-based multiband image feature level fusion method, the training-set multiband images of the generative adversarial network comprise visible light, infrared short-wave and infrared long-wave band images.
In the above method, the generator loss function of the generative adversarial network is a multitask loss.
In the above method, the discriminator adopts a trained VGG-16 network with the channel numbers of its fully connected layers modified to 1024, 512 and 1.
In the above method, the batch size of the adversarial network takes a value between 16 and 36, and the learning rate is 0.0002.
The attention mechanism mimics human vision, selectively focusing on regions of interest while ignoring other visible information. In recent years it has been widely used in natural language processing, image processing and speech recognition. Spatial attention uses an image transformation with rotation and scaling capabilities to convert image information from the original space to different image spaces, greatly improving feature extraction results; channel attention assigns different weights to different channels to describe the correlation between each channel and the key information, thereby suppressing the background and highlighting the target; mixed attention combines the spatial-domain and channel-domain attention mechanisms and, drawing on the residual network idea, effectively extends the network depth while improving image classification. In view of the breakthroughs of the attention mechanism in these fields, the invention applies attention to the image fusion field to improve image detail features and proposes a new mixed attention model for this specific scene: the difference between an image and its mean is used as a weight to guide the neural network to learn and attend to the region of interest, thereby enhancing the salient features in the image.
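To make the channel-attention idea above concrete, here is a minimal squeeze-and-excitation-style sketch in numpy; the bottleneck weights `W1`, `W2` and all shapes are invented for illustration and are not the patent's module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(U, W1, W2):
    """Channel attention sketch: pool each channel to a scalar ("squeeze"),
    map it through a small bottleneck ("excitation"), and rescale the
    channels by the resulting per-channel weights in (0, 1)."""
    z = U.mean(axis=(0, 1))                   # squeeze: (C,) per-channel mean
    w = sigmoid(np.maximum(z @ W1, 0) @ W2)   # excitation: per-channel weight
    return U * w                              # reweight channels

rng = np.random.default_rng(0)
U = rng.normal(size=(8, 8, 16))
W1 = rng.normal(size=(16, 4)) * 0.5           # bottleneck down-projection
W2 = rng.normal(size=(4, 16)) * 0.5           # bottleneck up-projection
V = channel_attention(U, W1, W2)
print(V.shape)  # (8, 8, 16)
```

Because every weight lies in (0, 1), the rescaled map never amplifies a channel; it only attenuates channels judged less relevant to the key information.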
Drawings
Fig. 1 is a general network structure diagram.
Fig. 2 is a feature enhancement module layout.
FIG. 3 is a diagram of a feature fusion module architecture.
FIG. 4 shows attention maps for different channels of different bands.
Fig. 5 is an infrared long-wave image.
Fig. 6 is an infrared short wave image.
Fig. 7 is a visible light image.
FIG. 8 is a fused image of the present invention.
Detailed description of the invention
A multiband image fusion method based on an attention-mechanism generative adversarial network comprises the following steps:
1. Designing and constructing the feature extraction and enhancement module
The backbone adopts the first 7 convolution layers and two pooling layers of the VGG-16 network as the feature extractor. A global average pooling layer is then attached to obtain the average information content of each channel feature map, reflecting the average level of the image features. After global average pooling, a reshape layer resets the result to a 1 × 1 × C tensor, and finally the novel hybrid attention mechanism designed by the invention produces the final feature enhancement map. The specific steps are as follows:
(1) Define the network input multiband images as $X^1, X^2, \dots, X^k$, where $k \geq 2$; here $k = 3$ is assumed. $X^k \in \mathbb{R}^{H' \times W' \times C'}$, where $H'$, $W'$ and $C'$ are the height, width and number of channels of the input. The backbone network extracts the multiband feature maps $U^k \in \mathbb{R}^{H \times W \times C}$, with $c \in \{1, 2, \dots, C\}$, $H = H'/4$, $W = W'/4$, $C = 256$, expressed by:
$$U_c^k = V_c^k * X^k = \sum_{s=1}^{C'} V_c^{k,s} * X^{k,s}$$
where $V_c^k = \{V_c^{k,1}, V_c^{k,2}, \dots, V_c^{k,C'}\}$ is a set of convolution kernels, $*$ denotes the convolution operation, and each $V_c^{k,s}$ is a two-dimensional spatial convolution kernel.
(2) Obtain the average eigenvalue of each channel using global average pooling:
$$z_c^k = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} U_c^k(i, j)$$
Subtracting from each channel feature map $U_c^k$ the mean $z_c^k$ of its corresponding channel (after reshape) reflects well the salient information of each channel in the image, such as edges and targets. After differencing, the result is normalized to $[0, 1]$ with a sigmoid activation to obtain the weight values:
$$A_c^k(i, j) = \sigma\big(U_c^k(i, j) - \mathrm{Reshape}(z_c^k)\big)$$
where $(i, j)$ is a position in the feature map, $A_c^k$ is the activated feature map (the weight values) of the $c$-th channel of the $k$-th band image, $\sigma$ is the sigmoid activation function, and $\mathrm{Reshape}(\cdot)$ denotes the reshape operation.
(3) Multiplying each channel feature map $U_c^k$ by its corresponding weight map $A_c^k$ yields the attention map, which lets the network automatically select effective features and adapt to specific regions. Meanwhile, to enhance the global information content of the feature map, the attention map is added to the convolution-derived channel feature map, enhancing detail features while preserving background information:
$$E_c^k = F_z(U_c^k, S_c^k) = U_c^k + S_c^k, \qquad S_c^k = U_c^k \cdot A_c^k$$
where $S_c^k$ is the attention map of the $c$-th channel of the $k$-th band image, $F_z(\cdot)$ is the enhancement function, $\cdot$ denotes the dot (element-wise) product, and $E_c^k$ is the feature enhancement map of the $c$-th channel of the $k$-th band image.
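The three steps above can be sketched in a few lines of numpy; the shapes are illustrative and the convolutional backbone is omitted (a raw feature map stands in for the VGG-16 output):

```python
import numpy as np

def hybrid_attention(U):
    """Hybrid attention sketch for one band's (H, W, C) feature map:
    weight A = sigmoid(U - channel mean), attention map S = U * A,
    enhanced map E = U + S (background preserved, details boosted)."""
    z = U.mean(axis=(0, 1), keepdims=True)   # global average pool -> (1, 1, C)
    A = 1.0 / (1.0 + np.exp(-(U - z)))       # sigmoid of the difference
    S = U * A                                # element-wise attention map
    E = U + S                                # feature enhancement map
    return E

U = np.random.default_rng(0).normal(size=(8, 8, 4))
E = hybrid_attention(U)
print(E.shape)  # (8, 8, 4)
```

Since the weights lie in (0, 1), each enhanced value stays between the original value and twice the original value, so the map is emphasized where it already deviates from its channel mean.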
2. Designing and constructing feature fusion modules
(1) The multi-channel feature enhancement maps $E^1, E^2, \dots, E^k$ are first concatenated along the channel dimension $c$; the merged feature maps of different bands are then normalized to the same distribution using convolution and BN, eliminating their distribution differences. The process is described as follows:
$$N = f_{1\times1}(f_{3\times3}(M)) = \mathrm{LReLU}\big(W_1 * \mathrm{LReLU}(W_0 * M)\big), \qquad M = F_c(E^1, E^2, \dots, E^k)$$
where $M$ is the merged feature map, $F_c(\cdot)$ denotes the concatenation function, $N$ is the normalized feature map, $f_{3\times3}$ and $f_{1\times1}$ are the standard $3 \times 3$ and $1 \times 1$ convolution operations with corresponding kernels $W_0 \in \mathbb{R}^{3\times3\times1024}$ and $W_1 \in \mathbb{R}^{1\times1\times1024}$, and both are followed by the nonlinear activation function LReLU.
(2) The normalized feature map $N$ is then passed through upsampling and convolution to recover the original image size and reconstruct the fused image:
$$F = W_4 * \big(W_3 * (W_2 * \tilde{N})\big)$$
where $F$ denotes the fused image, $\tilde{N}$ is the feature map after upsampling $N$, and $W_2 \in \mathbb{R}^{3\times3\times512}$, $W_3 \in \mathbb{R}^{3\times3\times256}$, $W_4 \in \mathbb{R}^{3\times3\times1}$.
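A rough numpy sketch of the fusion module's data flow follows. For brevity the 3×3 convolutions and BN are replaced by 1×1 convolutions (per-pixel matrix multiplies) and the learned upsampling by nearest-neighbour repetition, so the channel counts and weights here are illustrative only:

```python
import numpy as np

def lrelu(x, a=0.2):
    return np.where(x > 0, x, a * x)

def conv1x1(x, W):
    # x: (H, W, Cin), W: (Cin, Cout) -- a 1x1 convolution is a per-pixel matmul
    return x @ W

def upsample2x(x):
    # nearest-neighbour upsampling, standing in for the learned upsampling layer
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fusion_module(E_list, rng):
    """Concatenate band-wise enhanced maps on the channel axis, normalize
    with two conv + LReLU stages, upsample toward the input resolution,
    and project to a single-channel fused image."""
    M = np.concatenate(E_list, axis=-1)              # concat: (H, W, k*C)
    W0 = rng.normal(size=(M.shape[-1], 64)) * 0.1
    W1 = rng.normal(size=(64, 64)) * 0.1
    N = lrelu(conv1x1(lrelu(conv1x1(M, W0)), W1))    # normalization stage
    N_up = upsample2x(N)                             # recover spatial size
    W2 = rng.normal(size=(64, 1)) * 0.1
    return conv1x1(N_up, W2)[..., 0]                 # fused image (2H, 2W)

rng = np.random.default_rng(0)
E_list = [rng.normal(size=(8, 8, 4)) for _ in range(3)]  # k = 3 bands
F = fusion_module(E_list, rng)
print(F.shape)  # (16, 16)
```

The key structural point is that the bands interact only after concatenation, so the module scales to any number of input bands by widening the first convolution.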
3. Building the generative adversarial network
The generative adversarial network consists of a generator and a discriminator:
(1) The cascaded feature extraction and enhancement module and feature fusion module form the generator, which outputs the fused image.
(2) The discriminator loads the pre-trained VGG-16 network and modifies the channel numbers of the fully connected layers to 1024, 512 and 1 to reduce the network parameters.
4. Adversarial network training
Train the neural network with back-propagation, as follows:
(1) Train the generator and discriminator alternately: train the generator once, then the discriminator once, and repeat until the two reach dynamic balance;
(2) Design a multitask loss function. The discriminator and generator are trained by maximizing the discriminator loss and minimizing the generator loss. The discriminator loss is
$$L_D = \mathbb{E}_{I_{true}}[\log D(I_{true})] + \mathbb{E}_{I_{pred}}[\log(1 - D(I_{pred}))]$$
where $I_{true}$ and $I_{pred}$ denote the real image and the generated fused image respectively, $\mathbb{E}_{I_{true}}$ is the expectation over real data and $\mathbb{E}_{I_{pred}}$ the expectation over generated data. The generator loss $L_G$ comprises, besides the adversarial loss $L_{adv} = \mathbb{E}_{I_{pred}}[\log(1 - D(I_{pred}))]$, the pixel-level mean-square-error loss $L_{mse} = \|I_{true} - I_{pred}\|_2$ and the structural-similarity loss $L_{ssim} = 1 - SSIM(I_{pred}, I_{true})$, where $\|\cdot\|_2$ denotes the L2 norm and $SSIM(\cdot)$ denotes the structural similarity operation.
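A numpy sketch of these losses, written in gradient-descent form (both as quantities to minimize). The SSIM here is a single-window simplification of the usual windowed SSIM, and the constants `c1`, `c2` are the conventional defaults, not values from the patent:

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    # global (single-window) SSIM, a simplification of the windowed version
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def d_loss(d_real, d_fake, eps=1e-8):
    # binary cross-entropy adversarial loss for the discriminator (minimized)
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

def g_loss(d_fake, I_true, I_pred, eps=1e-8):
    # multitask generator loss: adversarial + pixel MSE + (1 - SSIM)
    l_adv = -np.mean(np.log(d_fake + eps))   # non-saturating adversarial term
    l_mse = np.mean((I_true - I_pred) ** 2)
    l_ssim = 1.0 - ssim(I_pred, I_true)
    return l_adv + l_mse + l_ssim

rng = np.random.default_rng(0)
I_true = rng.uniform(size=(16, 16))
I_pred = I_true + 0.05 * rng.normal(size=(16, 16))
print(round(g_loss(np.array([0.5]), I_true, I_pred), 4))
```

At the equilibrium the text describes, the discriminator outputs 0.5 on fused images, so the adversarial term settles near $-\log 0.5 \approx 0.693$ while the MSE and SSIM terms keep pulling the fused image toward the reference.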
5. Image fusion with the attention-mechanism-based generative adversarial network
(1) Input the multiband image datasets into the generator, perform network feature extraction and adaptive fusion, pit the discriminator against the generator, optimize the loss functions of the generator and discriminator so that the generator gradually produces plausible data, and train to obtain the generator model;
(2) Input the multiband test images to be fused into the trained generative model to obtain the final fused images.
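The alternating (interval) training schedule from step 4 can be reduced to a small skeleton; the step functions and the decaying loss sequences below are toy stand-ins, not the actual network updates:

```python
def train(generator_step, discriminator_step, iters):
    """Alternate one generator update with one discriminator update per
    iteration, recording the pair of losses, until the budget is spent."""
    history = []
    for _ in range(iters):
        history.append((generator_step(), discriminator_step()))
    return history

# toy stand-ins: loss curves decaying toward the GAN equilibrium
g_losses = iter(1.0 / (n + 1) for n in range(100))
d_losses = iter(0.5 + 0.5 / (n + 1) for n in range(100))
h = train(lambda: next(g_losses), lambda: next(d_losses), 10)
print(len(h))  # 10
```

In practice the stopping criterion is the dynamic balance the text describes (discriminator output near 0.5) rather than a fixed iteration budget.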
In the above deep-learning-based multiband image feature level fusion method, the training and test sets of the generative adversarial network comprise equal-proportion mixtures of visible light, infrared short-wave and infrared long-wave three-band images.
In the above method, the generator consists of the feature extraction and enhancement module and the feature fusion module, and the discriminator adopts a trained VGG-16 network with the channel numbers of its fully connected layers modified to 1024, 512 and 1.
In the above method, the batch size takes a value between 16 and 36: too large a value occupies more memory, while too small a value makes convergence difficult. The learning rate, which determines the network convergence speed, is 0.0002: too large a learning rate causes network oscillation and unstable convergence, while too small a rate consumes more time and reduces efficiency, so the learning rate is selected from between 0.02 and 0.00002.

Claims (5)

1. A multiband image feature level fusion method based on an attention-mechanism generative adversarial network, characterized by comprising the following steps:
Design and construct a generative adversarial network whose structure is divided into a generator and a discriminator, the generator consisting of a feature extraction and enhancement module and a feature fusion module;
The feature extraction and enhancement module takes the first 7 convolution layers and two pooling layers of a VGG-16 network as the backbone network; training-set multiband images pass through the backbone and then undergo global average pooling and reshape; the multiband feature maps extracted by the backbone are differenced with the reshaped feature maps; the differences are passed through an activation function to obtain the multiband image weight values; point-multiplying the multiband feature maps with the weight values yields the attention maps; and adding the multiband feature maps to the attention maps constructs the feature enhancement maps;
The feature fusion module fuses the multi-channel feature enhancement maps along the channel dimension by concatenation; to eliminate the distribution differences of the fused feature maps from different bands, two convolution layers and BN are attached to normalize them to the same distribution; finally, upsampling and three convolution layers recover the original image size and reconstruct the fused image;
Pit the discriminator against the generator: continuously input the fused image and the real image into the discriminator, and optimize the loss functions of the generator and discriminator so that the discriminator output approaches 0.5, i.e., the generator gradually produces plausible data; train to obtain the generator model, and perform image fusion with the trained generator model.
2. The multiband image feature level fusion method based on an attention-mechanism generative adversarial network according to claim 1, characterized in that the training-set multiband images of the generator comprise visible light, infrared short-wave and infrared long-wave three-band images.
3. The method according to claim 1 or 2, characterized in that the generator loss function of the generative adversarial network is a multitask loss.
4. The method according to claim 1 or 2, characterized in that the discriminator adopts a trained VGG-16 network with the channel numbers of its fully connected layers modified to 1024, 512 and 1.
5. The method according to claim 1 or 2, characterized in that the batch size of the adversarial network is between 16 and 36, and the learning rate is 0.0002.
CN201910672081.2A 2019-07-24 2019-07-24 Multi-band image feature level fusion method for generating countermeasure network based on attention mechanism Active CN110555458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910672081.2A CN110555458B (en) 2019-07-24 2019-07-24 Multi-band image feature level fusion method for generating countermeasure network based on attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910672081.2A CN110555458B (en) 2019-07-24 2019-07-24 Multi-band image feature level fusion method for generating countermeasure network based on attention mechanism

Publications (2)

Publication Number Publication Date
CN110555458A true CN110555458A (en) 2019-12-10
CN110555458B CN110555458B (en) 2022-04-19

Family

ID=68735884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910672081.2A Active CN110555458B (en) 2019-07-24 2019-07-24 Multi-band image feature level fusion method for generating countermeasure network based on attention mechanism

Country Status (1)

Country Link
CN (1) CN110555458B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018015080A1 (en) * 2016-07-19 2018-01-25 Siemens Healthcare Gmbh Medical image segmentation with a multi-task neural network system
US20190130221A1 (en) * 2017-11-02 2019-05-02 Royal Bank Of Canada Method and device for generative adversarial network training
CN110021051A (en) * 2019-04-01 2019-07-16 浙江大学 A text-guided object image generation method based on generative adversarial networks
CN109978165A (en) * 2019-04-04 2019-07-05 重庆大学 A kind of generation confrontation network method merged from attention mechanism
CN110022422A (en) * 2019-04-19 2019-07-16 吉林大学 A kind of sequence of frames of video generation method based on intensive connection network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵竞超 et al., "Comparison of intuitionistic fuzzification processing methods for multi-band image fusion", Infrared Technology *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080729B (en) * 2019-12-24 2023-06-13 山东浪潮科学研究院有限公司 Training picture compression network construction method and system based on Attention mechanism
CN111080729A (en) * 2019-12-24 2020-04-28 山东浪潮人工智能研究院有限公司 Method and system for constructing training picture compression network based on Attention mechanism
CN111311518A (en) * 2020-03-04 2020-06-19 清华大学深圳国际研究生院 Image denoising method and device based on multi-scale mixed attention residual error network
CN111311518B (en) * 2020-03-04 2023-05-26 清华大学深圳国际研究生院 Image denoising method and device based on multi-scale mixed attention residual error network
CN111127468A (en) * 2020-04-01 2020-05-08 北京邮电大学 Road crack detection method and device
CN111614974B (en) * 2020-04-07 2021-11-30 上海推乐信息技术服务有限公司 Video image restoration method and system
CN111614974A (en) * 2020-04-07 2020-09-01 上海推乐信息技术服务有限公司 Video image restoration method and system
CN111444980A (en) * 2020-04-09 2020-07-24 中国人民解放军国防科技大学 Infrared point target classification method and device
CN111444980B (en) * 2020-04-09 2024-02-20 中国人民解放军国防科技大学 Infrared point target classification method and device
CN111696066A (en) * 2020-06-13 2020-09-22 中北大学 Multi-band image synchronous fusion and enhancement method based on improved WGAN-GP
CN111696168B (en) * 2020-06-13 2022-08-23 中北大学 High-speed MRI reconstruction method based on residual self-attention image enhancement
CN111696168A (en) * 2020-06-13 2020-09-22 中北大学 High-speed MRI reconstruction method based on residual self-attention image enhancement
CN111696066B (en) * 2020-06-13 2022-04-19 中北大学 Multi-band image synchronous fusion and enhancement method based on improved WGAN-GP
CN111915545B (en) * 2020-08-06 2022-07-05 中北大学 Self-supervision learning fusion method of multiband images
CN111915545A (en) * 2020-08-06 2020-11-10 中北大学 Self-supervision learning fusion method of multiband images
CN112241765B (en) * 2020-10-26 2024-04-26 三亚中科遥感研究所 Image classification model and method based on multi-scale convolution and attention mechanism
CN112241765A (en) * 2020-10-26 2021-01-19 三亚中科遥感研究所 Image classification model and method based on multi-scale convolution and attention mechanism
CN112488971A (en) * 2020-11-23 2021-03-12 石家庄铁路职业技术学院 Medical image fusion method for generating countermeasure network based on spatial attention mechanism and depth convolution
CN112668655B (en) * 2020-12-30 2023-08-29 中山大学 Out-of-distribution image detection method based on generating attention enhancement against network uncertainty
CN112668655A (en) * 2020-12-30 2021-04-16 中山大学 Method for detecting out-of-distribution image based on generation of confrontation network uncertainty attention enhancement
CN112750097A (en) * 2021-01-14 2021-05-04 中北大学 Multi-modal medical image fusion based on multi-CNN combination and fuzzy neural network
CN113343705A (en) * 2021-04-26 2021-09-03 山东师范大学 Text semantic based detail preservation image generation method and system
CN113112441A (en) * 2021-04-30 2021-07-13 中北大学 Multi-band low-resolution image synchronous fusion method based on dense network and local brightness traversal operator
CN113112441B (en) * 2021-04-30 2022-04-26 中北大学 Multi-band low-resolution image synchronous fusion method based on dense network and local brightness traversal operator
CN113222846A (en) * 2021-05-18 2021-08-06 北京达佳互联信息技术有限公司 Image processing method and image processing apparatus
CN113435474A (en) * 2021-05-25 2021-09-24 中国地质大学(武汉) Remote sensing image fusion method based on double-generation antagonistic network
CN113205468A (en) * 2021-06-01 2021-08-03 桂林电子科技大学 Underwater image real-time restoration model based on self-attention mechanism and GAN
CN113762277A (en) * 2021-09-09 2021-12-07 东北大学 Multi-band infrared image fusion method based on Cascade-GAN
CN116258658A * 2023-05-11 2023-06-13 齐鲁工业大学(山东省科学院) Swin Transformer-based image fusion method
CN117726979A (en) * 2024-02-18 2024-03-19 合肥中盛水务发展有限公司 Piping lane pipeline management method based on neural network

Also Published As

Publication number Publication date
CN110555458B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN110555458B (en) Multi-band image feature level fusion method for generating countermeasure network based on attention mechanism
Gao et al. CyCU-Net: Cycle-consistency unmixing network by learning cascaded autoencoders
CN108537742B (en) Remote sensing image panchromatic sharpening method based on generation countermeasure network
CN110706302B (en) System and method for synthesizing images by text
Alam et al. Conditional random field and deep feature learning for hyperspectral image classification
Lin et al. Hyperspectral image denoising via matrix factorization and deep prior regularization
Dong et al. Generative dual-adversarial network with spectral fidelity and spatial enhancement for hyperspectral pansharpening
CN112347888B (en) Remote sensing image scene classification method based on bi-directional feature iterative fusion
Wang et al. Hyperspectral image super-resolution via deep prior regularization with parameter estimation
Jiang et al. Hyperspectral image classification with spatial consistence using fully convolutional spatial propagation network
CN112149720A (en) Fine-grained vehicle type identification method
Sannidhan et al. Evaluating the performance of face sketch generation using generative adversarial networks
Osahor et al. Quality guided sketch-to-photo image synthesis
CN113240040A (en) Polarized SAR image classification method based on channel attention depth network
CN114863173B (en) Self-mutual-attention hyperspectral image classification method for land resource audit
CN105550712B (en) Aurora image classification method based on optimization convolution autocoding network
Ahmad et al. Hybrid dense network with attention mechanism for hyperspectral image classification
Zhao et al. High resolution remote sensing bitemporal image change detection based on feature interaction and multi-task learning
CN115861076A (en) Unsupervised hyperspectral image super-resolution method based on matrix decomposition network
Cherian et al. A Novel AlphaSRGAN for Underwater Image Super Resolution.
CN114550305A (en) Human body posture estimation method and system based on Transformer
CN114511735A (en) Hyperspectral image classification method and system of cascade empty spectral feature fusion and kernel extreme learning machine
CN113762277A (en) Multi-band infrared image fusion method based on Cascade-GAN
Chaabane et al. Self-attention generative adversarial networks for times series VHR multispectral image generation
CN112686817A (en) Image completion method based on uncertainty estimation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant