CN114359419B - Attention mechanism-based image compressed sensing reconstruction method - Google Patents


Publication number
CN114359419B
CN114359419B (application CN202111286704.6A)
Authority
CN
China
Prior art keywords
attention
image
feature
feature map
compressed sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111286704.6A
Other languages
Chinese (zh)
Other versions
CN114359419A (en)
Inventor
田金鹏
袁文杰
冯玉田
严军
沈明华
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN202111286704.6A
Publication of CN114359419A
Application granted
Publication of CN114359419B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image compressed sensing reconstruction method based on an attention mechanism. The method applies multiple attention modules and proposes a convolutional attention network (AM-CSNet) built on a compressed sensing algorithm; each attention module (AM) contains a channel attention and a spatial attention mechanism, and the attention modules output the reconstructed image through residual connections and global feature fusion. The invention has the following advantages: within each AM, the channel and spatial attention mechanisms adaptively assign feature weights, which markedly improves image reconstruction quality; global feature fusion and residual connections preserve feature information from different layers, so that the features extracted by different modules are fully exploited and fewer network layers are needed to reconstruct high-quality images.

Description

Attention mechanism-based image compressed sensing reconstruction method
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image compressed sensing reconstruction method based on an attention mechanism.
Background
Compressed sensing reconstruction exploits the fact that when a signal is sparse in some transform domain, a measurement matrix Φ ∈ R^(m×n) can compressively measure a signal x ∈ R^(n×1), projecting it into an m-dimensional low-dimensional space, and the original signal can be accurately reconstructed from the measurement y ∈ R^(m×1). Because compressed sensing can recover the original signal from few samples, it is widely applied in fields such as wireless communication and medical imaging. Current compressed sensing reconstruction models fall into two categories: traditional reconstruction models based on sparse prior knowledge, and data-driven deep learning models.
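As a toy illustration of the measurement model described above (the sizes, sparsity level, and Gaussian Φ are illustrative assumptions, not the patent's settings), the following NumPy sketch projects a sparse n-dimensional signal to m < n measurements:

```python
import numpy as np

# Illustrative compressed sensing measurement: y = Phi @ x with
# Phi in R^{m x n}, m < n. Dimensions and sparsity are assumptions.
rng = np.random.default_rng(0)

n, m = 256, 64                       # signal dimension and measurement count
x = np.zeros((n, 1))
idx = rng.choice(n, size=8, replace=False)
x[idx] = rng.standard_normal((8, 1))  # signal sparse in the canonical basis

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
y = Phi @ x                           # compressed measurement, shape (m, 1)

print(x.shape, y.shape)
```

The reconstruction side then has to invert this underdetermined map, which is what the traditional iterative solvers and the network described below both attempt.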
Traditional algorithms solve an optimization problem iteratively based on sparse prior knowledge. However, real signals such as natural images are not necessarily exactly sparse in a hand-designed transform domain, and because these algorithms recover the original signal through many iterations, their time complexity is high. They therefore struggle to meet application scenarios with strict real-time requirements, which limits both the depth and breadth of compressed sensing applications.
Image compressed sensing reconstruction algorithms built on neural networks do not rely on any signal prior; instead, driven by large amounts of natural image data, they learn the structure of the data. They improve reconstruction quality while also accelerating the image reconstruction process. However, when current reconstruction networks use convolutional layers to reconstruct an image, the feature maps extracted by convolution are not well utilized, and many network layers often have to be stacked to reconstruct the image with high quality. The inability of the prior art to effectively exploit the feature information extracted by each convolutional layer is a problem that urgently needs to be solved.
Disclosure of Invention
The invention aims to solve the problem that the prior art cannot effectively utilize the feature information extracted by each convolutional layer, namely the technical problem that using the same weight for feature maps of different layers and different regions weakens some effective features and amplifies ineffective ones. It provides an image compressed sensing reconstruction method based on an attention mechanism, and thereby proposes an attention-based image compressed reconstruction network model (Attention Mechanism Compressed Sensing Network, AM-CSNet). Conventional residual neural networks merely stack simple convolutional layers and do not make reasonable use of the features extracted by convolution. The proposed AM-CSNet instead uses multiple convolutional attention modules (AM), each of which adaptively assigns feature weights to the convolutional feature map, strengthening important features and weakening inefficient ones so that the original image is accurately reconstructed. In AM-CSNet, the output of each AM is passed through a residual connection as the input of the next attention module, fully preserving information from different layers; finally, the outputs of all AMs are combined by global feature fusion to obtain the reconstructed high-quality image.
In order to achieve the above purpose, the invention adopts the following technical scheme:
An attention mechanism-based image compressed sensing reconstruction method applies at least two attention modules and utilizes an attention-based image compressed sensing reconstruction network, AM-CSNet. The network uses deconvolution to complete the initial reconstruction, and attention modules adaptively assign feature weights in the deep reconstruction network, strengthening important features and weakening invalid ones. The attention modules are connected by residual connections; each attention module (AM) consists of a channel attention part and a spatial attention part, and finally the feature information extracted by the multiple AMs is combined by global feature fusion to obtain the deep reconstructed image. The specific operation steps are as follows:
Step 1: input a natural image and obtain a compressed measurement value through a convolutional compressed sensing measurement;
Step 2: perform initial reconstruction on the obtained measurement y through deconvolution to obtain an initial reconstructed image;
Step 3: extract features from the initial reconstructed image with multiple attention modules and adaptively assign feature weights. Each attention module extracts features through convolution, ReLU, and convolution, and learns feature weights through channel attention (CA) and spatial attention (SA). The channel attention part obtains a descriptor for each channel by max pooling the input features along the channel direction followed by a Sigmoid activation, then multiplies the channel descriptor with the input features to obtain channel attention features with adaptively assigned weights. The spatial attention part applies max pooling and a Sigmoid activation to the channel attention features along the spatial direction to obtain a spatial descriptor, which focuses on the positions of effective feature information; this structure strengthens the more informative regions of the image being processed by the network;
Step 4: multiply the obtained feature map by the feature weights produced by the channel and spatial attention to obtain a feature map with reassigned weights;
Step 5: connect the attention modules with residual connections: the input of the next AM is the sum of the input and output features of the previous AM, which fully preserves feature information while enhancing its validity;
Step 6: repeat steps 3, 4, and 5, and combine the output features of the multiple AM modules with a Concat fusion layer and a convolutional layer. This process is called global feature fusion; it fully exploits feature information from different layers and yields the final high-quality reconstructed image.
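The six steps above can be sketched end to end. In the following NumPy toy, block sizes, channel counts, random weights, and the flat 2-D feature layout are all illustrative stand-ins for the patent's trained convolutional layers:

```python
import numpy as np

# End-to-end sketch of steps 1-6 on flattened blocks (all sizes/weights
# are illustrative assumptions, not the trained AM-CSNet parameters).
rng = np.random.default_rng(1)
B, n, m = 4, 16, 4               # 4 image blocks of 16 pixels, 4 measurements each

def attention_module(F, W1, W2):
    """One AM: conv-ReLU-conv stand-in, then channel/spatial max-pool + sigmoid."""
    H = np.maximum(F @ W1, 0) @ W2                          # feature extraction
    Mc = 1.0 / (1.0 + np.exp(-H.max(axis=0, keepdims=True)))  # channel descriptor
    Hc = H * Mc                                             # channel attention
    Ms = 1.0 / (1.0 + np.exp(-Hc.max(axis=1, keepdims=True))) # spatial descriptor
    return Hc * Ms                                          # step 4: reweighted features

x   = rng.standard_normal((B, n))             # step 1: input blocks
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y   = x @ Phi.T                               # step 1: compressed measurement
x0  = y @ Phi                                 # step 2: initial linear reconstruction

S, c = 3, n                                   # three attention modules
outs, F = [], x0
for s in range(S):                            # steps 3-5, repeated per step 6
    W1 = rng.standard_normal((c, c)) / np.sqrt(c)
    W2 = rng.standard_normal((c, c)) / np.sqrt(c)
    F = F + attention_module(F, W1, W2)       # step 5: residual connection
    outs.append(F)

Wf = rng.standard_normal((S * c, c)) / np.sqrt(S * c)
x_hat = np.concatenate(outs, axis=1) @ Wf     # step 6: global feature fusion
print(x_hat.shape)
```

The point of the sketch is the data flow: measurement, linear initial reconstruction, S residual attention stages, then fusion of all S outputs into the final estimate.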
Preferably, in the above attention mechanism-based image compressed sensing reconstruction method, the specific method for the compressed sensing measurement of the original image is: perform a convolutional compression measurement on the original image using a convolutional layer. This process can be expressed as:
y = Φx = Conv(x)
where the measurement matrix Φ is represented by a convolutional layer Conv.
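To see why a convolutional layer can represent the measurement matrix Φ, the following sketch (block size, channel count, and image size are illustrative assumptions) shows that a convolution whose kernel size equals its stride measures each non-overlapping image block with the same matrix:

```python
import numpy as np

# A conv layer with kernel size == stride applies y = Phi x block-wise:
# each output channel is an inner product of one kernel with one block.
rng = np.random.default_rng(2)
B = 4                                  # block (and stride) size, illustrative
m = 3                                  # measurements per block (output channels)
img = rng.standard_normal((8, 8))      # toy grayscale image

K = rng.standard_normal((m, B, B))     # conv kernels = rows of Phi, reshaped

H, W = img.shape
y = np.empty((m, H // B, W // B))
for i in range(H // B):                # stride-B convolution, no padding
    for j in range(W // B):
        block = img[i*B:(i+1)*B, j*B:(j+1)*B]
        y[:, i, j] = (K * block).sum(axis=(1, 2))  # inner product per kernel

# Identical to multiplying each flattened block by Phi = K.reshape(m, B*B)
Phi = K.reshape(m, B * B)
ref = Phi @ img[:B, :B].reshape(-1)
print(np.allclose(y[:, 0, 0], ref))    # True
```

So learning the convolution kernels amounts to learning the measurement matrix jointly with the reconstruction network.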
Preferably, in the above attention mechanism-based image compressed sensing reconstruction method, in step 2, the specific method for reconstructing the measurement is: first apply a linear transformation to the obtained measurement using deconvolution to obtain an initial reconstructed image, expressed as:
x̂ = Deconv(y)
where x and y respectively represent the original signal and the measurement, and x̂ represents the image initially reconstructed by deconvolution.
Preferably, in the above attention mechanism-based image compressed sensing reconstruction method, in step 4, the specific method for implementing the attention mechanism that adaptively assigns weights to features is as follows:
feature weights are adaptively assigned to the convolutional feature map by the channel attention descriptor and the spatial attention descriptor. The process can be expressed as:
M_C = σ(MaxPool_Channel(F_{s,in}))
M_S = σ(MaxPool_Spatial(F_{s,in} · M_C))
F_{s,out} = F_{s,in} · M_C · M_S
where F_{s,out} denotes the feature map output by the s-th attention module, F_{s,in} the input feature map of the s-th attention module, and M_C and M_S the channel attention descriptor and the spatial attention descriptor, respectively; σ denotes the Sigmoid activation function, MaxPool_Channel denotes max pooling along the channel direction of the feature map, and MaxPool_Spatial denotes max pooling along the spatial direction of the feature map.
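A minimal sketch of this adaptive weighting on a (C, H, W) feature map. The pooling axes follow one plausible reading of the channel/spatial max pooling described above, and the shapes are illustrative:

```python
import numpy as np

# Channel descriptor M_C from max pooling + sigmoid, then spatial
# descriptor M_S from the channel-attended features (assumed reading).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def am_attention(F):                                   # F: (C, H, W)
    Mc = sigmoid(F.max(axis=(1, 2), keepdims=True))    # (C, 1, 1) channel descriptor
    Fc = F * Mc                                        # channel attention
    Ms = sigmoid(Fc.max(axis=0, keepdims=True))        # (1, H, W) spatial descriptor
    return Fc * Ms                                     # reweighted feature map

rng = np.random.default_rng(3)
F = rng.standard_normal((8, 6, 6))
out = am_attention(F)
print(out.shape)
```

Because both descriptors lie in (0, 1), every feature is attenuated in proportion to its learned importance rather than amplified, which matches the "strengthen effective, weaken ineffective" behaviour relative to other features.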
Preferably, in the above attention mechanism-based image compressed sensing reconstruction method, in step 5, the specific method for implementing the residual connection is: the input feature map of an AM is skip-connected to, and summed with, the output feature map of that AM. The process can be expressed as:
F_s = F_{s,in} + F_{s,out}
where F_s is the feature map output by the residual connection of the s-th attention module, F_{s,out} is the feature map output by the s-th attention module, and F_{s,in} is the input feature map of the s-th attention module. This residual learning method effectively improves the performance of the network.
Preferably, in the above attention mechanism-based image compressed sensing reconstruction method, in step 6, the outputs of the residual-connected attention modules in the network undergo global feature fusion to obtain the reconstructed deep features. The process of global feature fusion can be expressed as:
F_out = Conv(Concat(F_1, …, F_s, …, F_S))
where F_out denotes the output of the global feature fusion, i.e., the reconstructed high-quality image; Conv denotes a convolutional layer, Concat denotes a feature fusion layer, and F_1, …, F_s, …, F_S denote the feature maps output by the residual connections of the 1st through S-th attention modules, respectively.
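Global feature fusion can be sketched as a channel-wise concatenation followed by a 1×1 convolution, i.e. a per-pixel linear map over the stacked channels. Channel counts and the single output channel below are illustrative:

```python
import numpy as np

# Concat the S residual outputs along the channel axis, then mix them
# with a 1x1 convolution (illustrative sizes, random weights).
rng = np.random.default_rng(4)
S, C, H, W = 3, 8, 6, 6
feats = [rng.standard_normal((C, H, W)) for _ in range(S)]   # F_1 ... F_S

cat = np.concatenate(feats, axis=0)                  # Concat: (S*C, H, W)
W11 = rng.standard_normal((1, S * C)) / np.sqrt(S * C)  # 1x1 conv -> 1 channel
F_out = np.einsum('oc,chw->ohw', W11, cat)           # Conv(Concat(...))
print(F_out.shape)
```

The 1×1 convolution lets the network weigh shallow and deep module outputs against each other at every pixel, which is how features from all S layers contribute to the final image.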
Compared with the prior art, the invention has the following prominent substantive features and obvious advantages:
1. The invention applies the attention mechanism to the field of image compressed sensing reconstruction; the improved attention module effectively assigns feature weights, strengthening effective features and weakening ineffective ones;
2. Global feature fusion fully exploits the features extracted by attention modules at different layers and, by retaining as many effective features as possible, markedly improves image reconstruction quality;
3. The invention achieves high-precision image reconstruction with few model parameters and low complexity.
Drawings
FIG. 1 is a schematic diagram of the attention mechanism-based image compressed sensing network of the present invention.
Detailed Description
For a better understanding of the objects, technical solutions and advantages of the present invention, the following preferred embodiments are described in further detail with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Embodiment one:
Referring to Fig. 1, an attention mechanism-based image compressed sensing reconstruction method applies at least two attention modules and utilizes an attention-based image compressed sensing reconstruction network, AM-CSNet. Deconvolution completes the initial reconstruction, and the attention modules adaptively assign feature weights in the deep reconstruction network, strengthening important features and weakening invalid ones. The attention modules are connected by residual connections; each attention module AM consists of a channel attention part and a spatial attention part, and finally the feature information extracted by the AMs is combined by global feature fusion to obtain the deep reconstructed image. The specific operation steps are as follows:
Step 1: input a natural image and obtain a compressed measurement value through a convolutional compressed sensing measurement;
Step 2: perform initial reconstruction on the obtained compressed measurement y through deconvolution to obtain an initial reconstructed image;
Step 3: taking the initial reconstructed image as input, first apply convolution, ReLU, and convolution in the attention module AM to obtain a feature map; the feature weights of the obtained feature map are then adaptively learned through channel attention (CA) and spatial attention (SA);
Step 4: multiply the obtained feature map by the feature weights produced by the channel attention CA and the spatial attention SA to obtain a feature map with reassigned weights;
Step 5: skip-connect and sum the input and output feature maps of the AM; this operation is called residual learning;
Step 6: repeat steps 3, 4, and 5, and combine the output features of the multiple AM modules with a Concat fusion layer and a convolutional layer; this process is called global feature fusion. The final output image is obtained after global feature fusion.
In this embodiment, the AM-CSNet network uses multiple convolutional attention modules (AM); each AM adaptively assigns feature weights to the convolutional feature map, strengthening important features and weakening inefficient ones, so that the original image is accurately reconstructed.
Embodiment two:
This embodiment is substantially the same as the first embodiment; the details are as follows:
In step 1, a convolutional layer is used to perform a convolutional compression measurement on the original image. This process is expressed as:
y = Φx = Conv(x)
where the measurement matrix Φ is represented by a convolutional layer Conv, and x and y respectively represent the original signal and the measurement.
In step 2, the specific steps of reconstructing the measurement are as follows:
apply a linear transformation to the obtained measurement using deconvolution to obtain an initial reconstructed image, expressed as:
x̂ = Deconv(y)
where y represents the measurement and x̂ represents the image initially reconstructed by deconvolution.
In step 4, the specific method for the attention mechanism that adaptively assigns weights to features is as follows:
the extracted features are adaptively assigned weights by the channel attention descriptor and the spatial attention descriptor. The process is expressed as:
M_C = σ(MaxPool_Channel(F_{s,in}))
M_S = σ(MaxPool_Spatial(F_{s,in} · M_C))
F_{s,out} = F_{s,in} · M_C · M_S
where F_{s,out} denotes the feature map output by the s-th attention module, F_{s,in} the input feature map of the s-th attention module, and M_C and M_S the channel attention descriptor and the spatial attention descriptor, respectively; σ denotes the Sigmoid activation function, MaxPool_Channel denotes max pooling along the channel direction of the feature map, and MaxPool_Spatial denotes max pooling along the spatial direction of the feature map.
In step 5, the specific method for the residual connection is:
F_s = F_{s,in} + F_{s,out}
where F_s is the feature map output by the residual connection of the s-th attention module, F_{s,out} is the feature map output by the s-th attention module, and F_{s,in} is the input feature map of the s-th attention module.
In step 6, the process of global feature fusion can be expressed as:
F_out = Conv(Concat(F_1, …, F_s, …, F_S))
where F_out denotes the output of the global feature fusion, i.e., the reconstructed high-quality image; Conv denotes a convolutional layer, Concat denotes a feature fusion layer, and F_1, …, F_s, …, F_S denote the feature maps output by the residual connections of the 1st through S-th attention modules, respectively.
As shown in Fig. 1, a conventional residual neural network merely stacks simple convolutional layers and does not make reasonable use of the features extracted by convolution. The AM-CSNet network of this embodiment instead uses multiple convolutional attention modules (AM); each AM adaptively assigns feature weights to the convolutional feature map, strengthening important features and weakening inefficient ones so that the original image is accurately reconstructed. In AM-CSNet, the output of each AM is passed through a residual connection as the input of the next attention module, fully preserving information from different layers; finally, the outputs of all AMs are combined by global feature fusion to obtain the reconstructed high-quality image.
Embodiment III:
In this embodiment, as shown in Fig. 1, AM-CSNet takes a natural image as input, performs compressed sensing measurement by convolution, and reconstructs the image with a deconvolution layer and multiple attention modules. Unlike a conventional residual block, which applies a simple uniform weighting to the convolutional feature map and therefore cannot exploit feature information effectively, the attention module (AM) in AM-CSNet integrates channel and spatial attention mechanisms and can adaptively assign channel and spatial weights to the convolutionally extracted features, strengthening effective features and weakening ineffective ones. AM-CSNet consists of an initial reconstruction sub-network and an attention-module-based deep reconstruction sub-network, whose functions are described below.
1) Initial reconstruction sub-network:
The initial reconstruction network is realized by a deconvolution layer (Deconv). Its input channel count equals the dimension of the compressed measurement, its output channel count is 1, it has no bias, and both its kernel size and stride are 32, realizing the linear mapping from the low-dimensional measurement to the high-dimensional initial reconstructed image. The process can be described as:
x̂ = Deconv(y)
where y represents the image measurement and x̂ represents the image initially reconstructed by deconvolution. Because this is a simple linear mapping, the quality of the reconstructed image is still low at this point, and the deep reconstruction network is needed to improve it.
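Because the kernel size equals the stride (32), each m-channel measurement position maps to an independent 32×32 output block. The following sketch, with an illustrative channel count m, measurement grid h × w, and random kernel, reproduces that shape behaviour:

```python
import numpy as np

# Transposed convolution with kernel size == stride == 32, no bias:
# a (m, h, w) measurement becomes a (1, 32h, 32w) initial image.
# m = 3 and h = w = 2 are illustrative assumptions.
rng = np.random.default_rng(5)
m, h, w, k = 3, 2, 2, 32
y = rng.standard_normal((m, h, w))        # compressed measurements
K = rng.standard_normal((m, k, k))        # deconv kernel, one per input channel

# out[32i:32(i+1), 32j:32(j+1)] = sum_c y[c, i, j] * K[c]
blocks = np.einsum('chw,ckl->hwkl', y, K)               # (h, w, 32, 32)
x0 = blocks.transpose(0, 2, 1, 3).reshape(1, h * k, w * k)
print(x0.shape)
```

With stride equal to kernel size the output blocks do not overlap, so the deconvolution is exactly the block-wise linear inverse map described in the text.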
2) Deep reconstruction sub-network based on attention module:
The attention module is expressed by channel attention and spatial attention. The process can be expressed as:
M_C = σ(MaxPool_Channel(F_{s,in}))
M_S = σ(MaxPool_Spatial(F_{s,in} · M_C))
F_{s,out} = F_{s,in} · M_C · M_S
where F_{s,out} denotes the feature map output by the s-th attention module, F_{s,in} the input feature map of the s-th attention module, and M_C and M_S the channel attention descriptor and the spatial attention descriptor, respectively; σ denotes the Sigmoid activation function, MaxPool_Channel denotes max pooling along the channel direction of the feature map, and MaxPool_Spatial denotes max pooling along the spatial direction of the feature map.
3) Residual connection
In the deep reconstruction network we use residual connections, which significantly improve information flow in the network and effectively improve its performance. The process can be expressed as:
F_s = F_{s,in} + F_{s,out}
where F_s is the feature map output by the residual connection of the s-th attention module, F_{s,out} is the feature map output by the s-th attention module, and F_{s,in} is the input feature map of the s-th attention module.
4) Global feature fusion
To fully retain the feature information obtained by the AMs, a feature fusion layer fuses the output features of AMs at different layers. The process can be described as:
F_out = Conv(Concat(F_1, …, F_s, …, F_S))
where F_out denotes the output of the global feature fusion, i.e., the reconstructed high-quality image; Conv denotes a convolutional layer, Concat denotes a feature fusion layer, and F_1, …, F_s, …, F_S denote the feature maps output by the residual connections of the 1st through S-th attention modules, respectively.
In the embodiment, the attention mechanism is applied to the field of image compressed sensing reconstruction, and the improved attention module can effectively distribute feature weights to achieve the effects of strengthening effective features and weakening ineffective features; the global feature fusion can fully utilize the features extracted by the attention modules of different layers, and the reconstruction quality of the image is obviously improved by reserving as many effective features as possible; the embodiment realizes high-precision reconstruction of the image under the conditions of few model parameters and low complexity.
The attention mechanism-based image compressed sensing reconstruction method disclosed by the above embodiments of the invention applies multiple attention modules and proposes a convolutional attention network (AM-CSNet) built on a compressed sensing algorithm; each attention module (AM) contains a channel attention and a spatial attention mechanism, and the attention modules output the reconstructed image through residual connections and global feature fusion. The above embodiments have the following advantages: within each AM, the channel and spatial attention mechanisms adaptively assign feature weights, markedly improving image reconstruction quality; global feature fusion and residual connections preserve feature information from different layers, so the features extracted by different modules are fully exploited and fewer network layers are needed to reconstruct high-quality images.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the embodiments described above. Various changes, modifications, substitutions, combinations, or simplifications made within the spirit and principles of the technical solution of the present invention are equivalent substitutions as long as they meet the purpose of the invention, and all fall within the scope of protection of the present invention provided they do not depart from its technical principles and inventive concept.

Claims (6)

1. An image compressed sensing reconstruction method based on an attention mechanism, characterized in that at least two attention modules are applied and an attention-based image compressed sensing reconstruction network AM-CSNet is utilized; initial reconstruction is completed through deconvolution, and feature weights are adaptively assigned in the deep reconstruction network by the attention modules; the attention modules are connected by residual connections, each attention module AM consists of a channel attention part and a spatial attention part, and finally the feature information extracted by the AMs is combined by global feature fusion to obtain the deep reconstructed image, with the following specific operation steps:
Step 1: inputting a natural image, and obtaining a compressed measurement value of the image after convolutional compressed sensing measurement;
Step 2: performing initial reconstruction on the obtained compressed measurement y through deconvolution to obtain an initial reconstructed image;
Step 3: taking the initial reconstructed image as input, and performing convolution, ReLU, and convolution in the attention module AM to obtain a feature map; the feature weights of the obtained feature map are adaptively learned through the channel attention CA and the spatial attention SA;
Step 4: multiplying the obtained feature map by the feature weights obtained by the channel attention CA and the spatial attention SA to obtain a feature map with reassigned weights;
Step 5: skip-connecting and summing the input feature map and the output feature map of the AM, an operation called residual learning;
Step 6: repeating the operations of steps 3, 4, and 5, and combining the output features of the plurality of AM modules with a Concat fusion layer and a convolutional layer, a process called global feature fusion; the final output image is obtained after global feature fusion.
2. The attention mechanism-based image compressed sensing reconstruction method according to claim 1, wherein in step 1, a convolutional layer is used to perform a convolutional compression measurement on the original image, the process being expressed as:
y = Φx = Conv(x)
where the measurement matrix Φ is represented by a convolutional layer Conv, and x and y respectively represent the original signal and the measurement.
3. The attention mechanism-based image compressed sensing reconstruction method according to claim 1, wherein in step 2, the specific steps of reconstructing the measurement are:
applying a linear transformation to the obtained measurement using deconvolution to obtain an initial reconstructed image, expressed as:
x̂ = Deconv(y)
where y represents the measurement and x̂ represents the image initially reconstructed by deconvolution.
4. The attention mechanism-based image compressed sensing reconstruction method according to claim 1, wherein in step 4, the specific method for implementing the attention mechanism that adaptively assigns weights to features is:
the extracted features are adaptively assigned weights by the channel attention descriptor and the spatial attention descriptor, the process being expressed as:
M_C = σ(MaxPool_Channel(F_{s,in}))
M_S = σ(MaxPool_Spatial(F_{s,in} · M_C))
F_{s,out} = F_{s,in} · M_C · M_S
where F_{s,out} denotes the feature map output by the s-th attention module, F_{s,in} the input feature map of the s-th attention module, and M_C and M_S the channel attention descriptor and the spatial attention descriptor, respectively; σ denotes the Sigmoid activation function, MaxPool_Channel denotes max pooling along the channel direction of the feature map, and MaxPool_Spatial denotes max pooling along the spatial direction of the feature map.
5. The attention mechanism-based image compressed sensing reconstruction method according to claim 1, wherein in step 5, the specific method of residual learning is:
F_s = F_{s,in} + F_{s,out}
where F_s is the feature map output by the residual connection of the s-th attention module, F_{s,out} is the feature map output by the s-th attention module, and F_{s,in} is the input feature map of the s-th attention module.
6. The attention mechanism-based image compressed sensing reconstruction method according to claim 1, wherein the process of global feature fusion can be expressed as:
F_out = Conv(Concat(F_1, …, F_s, …, F_S))
where F_out denotes the output of the global feature fusion, i.e., the reconstructed high-quality image; Conv denotes a convolutional layer, Concat denotes a feature fusion layer, and F_1, …, F_s, …, F_S denote the feature maps output by the residual connections of the 1st through S-th attention modules, respectively.
CN202111286704.6A (filed 2021-11-02, priority 2021-11-02): Attention mechanism-based image compressed sensing reconstruction method, Active, granted as CN114359419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111286704.6A CN114359419B (en) 2021-11-02 2021-11-02 Attention mechanism-based image compressed sensing reconstruction method

Publications (2)

Publication Number Publication Date
CN114359419A CN114359419A (en) 2022-04-15
CN114359419B true CN114359419B (en) 2024-05-17

Family

ID=81095420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111286704.6A Active CN114359419B (en) 2021-11-02 2021-11-02 Attention mechanism-based image compressed sensing reconstruction method

Country Status (1)

Country Link
CN (1) CN114359419B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330635B (en) * 2022-08-25 2023-08-15 苏州大学 Image compression artifact removing method, device and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN111354051A (en) * 2020-03-03 2020-06-30 昆明理工大学 Image compression sensing method of self-adaptive optimization network
CN111667445A (en) * 2020-05-29 2020-09-15 湖北工业大学 Image compressed sensing reconstruction method based on Attention multi-feature fusion
CN113191948A (en) * 2021-04-22 2021-07-30 中南民族大学 Image compressed sensing reconstruction system with multi-resolution characteristic cross fusion and method thereof
CN113269685A (en) * 2021-05-12 2021-08-17 南通大学 Image defogging method integrating multi-attention machine system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN105761215B (en) * 2016-01-27 2018-11-30 京东方科技集团股份有限公司 A kind of method for compressing image, image reconstructing method, apparatus and system


Similar Documents

Publication Publication Date Title
CN111461983B (en) Image super-resolution reconstruction model and method based on different frequency information
CN109241972B (en) Image semantic segmentation method based on deep learning
CN112150521B (en) Image stereo matching method based on PSMNet optimization
CN110363068B (en) High-resolution pedestrian image generation method based on multiscale circulation generation type countermeasure network
CN112967178B (en) Image conversion method, device, equipment and storage medium
CN113762147B (en) Facial expression migration method and device, electronic equipment and storage medium
CN114359419B (en) Attention mechanism-based image compressed sensing reconstruction method
CN116416375A (en) Three-dimensional reconstruction method and system based on deep learning
CN112669248A (en) Hyperspectral and panchromatic image fusion method based on CNN and Laplacian pyramid
CN117252761A (en) Cross-sensor remote sensing image super-resolution enhancement method
Jin et al. A lightweight scheme for multi-focus image fusion
Lu et al. WDLReconNet: Compressive sensing reconstruction with deep learning over wireless fading channels
CN113807497B (en) Unpaired image translation method for enhancing texture details
Wang et al. An ensemble multi-scale residual attention network (EMRA-net) for image Dehazing
CN117456078B (en) Neural radiation field rendering method, system and equipment based on various sampling strategies
Zheng et al. A multi‐view image fusion algorithm for industrial weld
CN114140357A (en) Multi-temporal remote sensing image cloud region reconstruction method based on cooperative attention mechanism
CN110782396B (en) Light-weight image super-resolution reconstruction network and reconstruction method
CN117408924A (en) Low-light image enhancement method based on multiple semantic feature fusion network
CN116823647A (en) Image complement method based on fast Fourier transform and selective attention mechanism
CN116091319A (en) Image super-resolution reconstruction method and system based on long-distance context dependence
Yang et al. Remote sensing image super‐resolution based on convolutional blind denoising adaptive dense connection
CN116128743A (en) Depth convolution hybrid neural network-based calculation associated imaging reconstruction algorithm
CN112150566A (en) Dense residual error network image compressed sensing reconstruction method based on feature fusion
Bairi et al. Pscs-net: Perception optimized image reconstruction network for autonomous driving systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant