CN111354051A - Image compression sensing method of self-adaptive optimization network - Google Patents

Image compression sensing method of self-adaptive optimization network

Info

Publication number
CN111354051A
Authority
CN
China
Prior art keywords
network
channel
ista
optimization
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010139605.4A
Other languages
Chinese (zh)
Other versions
CN111354051B (en)
Inventor
Li Nanyu
Liu Cuiyin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN202010139605.4A priority Critical patent/CN111354051B/en
Publication of CN111354051A publication Critical patent/CN111354051A/en
Application granted granted Critical
Publication of CN111354051B publication Critical patent/CN111354051B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for realizing image undersampling reconstruction through a self-adaptive optimization network, and belongs to the technical field of natural image compressed sensing. The invention mainly comprises an adaptive part, an optimization algorithm part, a channel attention network part, and a loss function part. By combining the advantages of deep neural networks and optimization algorithms, the method is both fast and accurate. The measurement matrix is learned adaptively with a fully connected network, so that more effective information can be extracted, and the initialization is also learned adaptively with a fully connected network, removing the original complex pseudo-inverse process. The channel attention mechanism is learned with a fully convolutional network, so that more effective features can be extracted and the reconstruction quality is improved.

Description

Image compression sensing method of self-adaptive optimization network
Technical Field
The invention relates to an image compression sensing method of a self-adaptive optimization network, belonging to the technical field of natural image compression sensing.
Background
Compressed Sensing (CS) theory refers to measuring a signal with a measurement matrix at a sampling rate far below the Nyquist rate; if the signal itself is compressible or is sparse in some transform domain, reconstruction can be achieved with high probability. Conventionally, many fast and accurate convex optimization methods, such as ADMM and ISTA, are used for CS reconstruction. Because CS sampling (far below the Nyquist rate) is hardware-friendly, reduces sampling cost, and can even increase imaging speed, it has wide and mature applications in many fields, such as MRI imaging, CT imaging, radio astronomical imaging, single-pixel cameras, and 3D video. However, conventional convex optimization algorithms have the following disadvantages: the optimization parameters, the incoherent measurement matrix, and the optimization transform must be set manually as priors, and hundreds of iterations are needed to reach the optimal solution.
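For reference, the measurement and reconstruction model implied by this description can be written as follows; this is the conventional textbook formulation, and the symbols Φ, Ψ, and λ are illustrative rather than symbols defined in this document:

Y = \Phi X, \qquad \Phi \in \mathbb{R}^{M \times N}, \; M \ll N

\hat{X} = \arg\min_X \tfrac{1}{2}\,\lVert \Phi X - Y \rVert_2^2 + \lambda \lVert \Psi X \rVert_1

The first line is the sub-Nyquist measurement; the second is the sparsity-regularized reconstruction problem that ADMM- and ISTA-type convex optimization methods solve iteratively.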
Disclosure of Invention
The technical problem to be solved by the invention is to provide an image compression sensing method of a self-adaptive optimization network, so as to solve the above problems.
The technical scheme of the invention is as follows: an image compression sensing method of a self-adaptive optimization network comprises the following specific steps:
step 1: an adaptive part;
Using a fully connected network with zero bias, the input is an image X, the output is a measurement result Y, the output dimension is set according to the CS rate, and the weights serve as the measurement matrix; another fully connected network with zero bias is added, whose input is the measurement result and whose output serves as the initialization input of the CS reconstruction network. In contrast to ISTA-Net+, which requires a pseudo-inverse computation over the training set, the measurement and initialization are as follows:
Y = f(X; W_θ) = W_θX,    X^(0) = f(Y; W_Q) = W_QY
f denotes a fully connected network, and W_θ, W_Q are the weights to be trained;
step 2: optimization algorithm part
The orthogonal projection operator is constructed by introducing the canonical dual frame: the dual transform D^T is used to recover X directly, so, following ISTA-Net+, convolution and residual connections are used to network Projected ISTA; the formula is
[The k-th Projected ISTA iteration, given as a formula image in the original document: a gradient step with step size β followed by soft-thresholding under the transform D.]
k is the iteration index of Projected ISTA, β is the step size, soft(·) is the soft-thresholding function, and D is the optimization transform;
step 3: channel attention network part
The input is set as a feature C, which is the multi-channel feature representation of the convolutional network. A channel attention mechanism is introduced: global information is obtained with global average pooling, cross-channel feature rearrangement is then completed with one-dimensional convolutions of different channel numbers together with a ReLU function, and the channel attention representation is obtained with a sigmoid function; the formula is expressed as follows:
a^(k) = sigmoid(f2(ReLU(f1(GAP(C))))), where GAP(·) denotes global average pooling
Two stacked convolutional layers f1 and f2 are used for the rearrangement: the first convolutional layer has w/u channels, the second has w channels, their activation functions are ReLU and sigmoid respectively, and the output channel attention is a^(k).
Step 4: part of loss function
In addition to the error between the reconstructed image and the original image, the loss of the entire network also contains an error term defined on the convolutional network (the formula is given as an image in the original document).
The ratio of the two error terms is 1:0.1, and both are measured with the mean square error function. The network is trained with Adam at a learning rate of 0.001 for 200 epochs until it converges.
The invention has the beneficial effects that: the method combines the advantages of deep neural networks and optimization algorithms and is both fast and accurate. The measurement matrix is learned adaptively with a fully connected network, so that more effective information can be extracted, and the initialization is also learned adaptively with a fully connected network, removing the original complex pseudo-inverse process. The channel attention mechanism is learned with a fully convolutional network, so that more effective features can be extracted and the reconstruction quality is improved.
Drawings
FIG. 1 is a framework diagram of the present invention;
FIG. 2 is a diagram of the natural-image grayscale data set Set11 used in the present invention;
FIG. 3 is a diagram of the natural-image grayscale data set BSD68 used in the present invention.
Detailed Description
The invention is further described with reference to the following drawings and detailed description.
An image compression sensing method of a self-adaptive optimization network comprises the following specific steps:
step 1: an adaptive part;
As shown in the measurement matrix and initialization part of FIG. 1, a fully connected network with zero bias is used: the input is an image X, the output is the measurement result Y, the output dimension is set according to the CS rate, and the weights serve as the measurement matrix. Similarly, another fully connected network with zero bias is added, whose input is the measurement result and whose output serves as the initialization input of the CS reconstruction network. Compared with ISTA-Net+, which requires a pseudo-inverse computation over the training set, the amount of computation is greatly reduced. The measurement and initialization are as follows:
Y = f(X; W_θ) = W_θX,    X^(0) = f(Y; W_Q) = W_QY
f denotes a fully connected network, and W_θ, W_Q are the weights to be trained, i.e., the measurement matrix and the initialization matrix. This makes further networking of ISTA-Net+ [2] easier, satisfies the RIP condition (which requires the measurement matrix and the optimization transform in CS reconstruction to be incoherent), yields a better initialization X^(0), and improves the CS reconstruction result.
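A minimal sketch of this adaptive sampling and initialization stage is given below, written with tf.keras since the patent states the code is implemented in TensorFlow; the block size, CS rate, and variable names are illustrative assumptions rather than values fixed by the patent:

import tensorflow as tf

B = 33 * 33             # assumed flattened block length (33x33 image blocks)
cs_rate = 0.25          # assumed CS (undersampling) rate
M = int(B * cs_rate)    # output dimension set according to the CS rate

# Zero-bias fully connected layer: its weight matrix acts as the learned
# measurement matrix W_theta, so Y = f(X; W_theta) = X @ W_theta.
measure = tf.keras.layers.Dense(M, use_bias=False, name="W_theta")

# A second zero-bias fully connected layer learns the initialization matrix W_Q,
# replacing the pseudo-inverse over the training set used by ISTA-Net+.
initialize = tf.keras.layers.Dense(B, use_bias=False, name="W_Q")

X = tf.random.normal([16, B])    # a batch of flattened image blocks
Y = measure(X)                   # adaptive measurement, shape [16, M]
X0 = initialize(Y)               # initialization X^(0) fed to the reconstruction network, shape [16, B]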
Step 2: optimization algorithm part
Among conventional optimization algorithms, Projected ISTA achieves better performance than the ADMM algorithm by introducing the canonical dual frame to construct an orthogonal projection operator: the dual transform D^T is used to recover X directly. Therefore, following ISTA-Net+, convolution and residual connections are used to network Projected ISTA; the iteration formula is
[The k-th Projected ISTA iteration, given as a formula image in the original document: a gradient step with step size β followed by soft-thresholding under the transform D.]
k is the iteration index of Projected ISTA, i.e., the number of stacked network phases; β is the step size; soft(·) is the soft-thresholding function; and D is the optimization transform, which obeys the orthogonal principle and is simulated with a convolutional network.
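A sketch of one networked Projected ISTA phase consistent with the description above; the kernel sizes, channel width, the residual connection, and the exact placement of the dual transform are illustrative assumptions, since the patent gives the iteration only as a formula image:

import tensorflow as tf

def soft(x, theta):
    # soft-thresholding: sign(x) * max(|x| - theta, 0)
    return tf.sign(x) * tf.nn.relu(tf.abs(x) - theta)

class ProjectedISTAPhase(tf.keras.layers.Layer):
    """One unrolled phase k (assumed: 3x3 kernels, w = 32 feature channels)."""
    def __init__(self, channels=32):
        super().__init__()
        self.beta = tf.Variable(0.5, trainable=True)     # step size beta, learned
        self.theta = tf.Variable(0.01, trainable=True)   # soft threshold, learned
        # D: convolutional optimization transform; D_dual: its (canonical dual) mapping back to image space
        self.D = tf.keras.Sequential([
            tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu"),
            tf.keras.layers.Conv2D(channels, 3, padding="same")])
        self.D_dual = tf.keras.Sequential([
            tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu"),
            tf.keras.layers.Conv2D(1, 3, padding="same")])

    def call(self, x, y, W):
        # gradient step on the data-fidelity term using the learned measurement matrix W (shape [B, M])
        n = tf.shape(x)[0]
        x_flat = tf.reshape(x, [n, -1])
        r = x_flat - self.beta * tf.matmul(tf.matmul(x_flat, W) - y, W, transpose_b=True)
        r = tf.reshape(r, tf.shape(x))
        # projection step: transform with D, soft-threshold, map back with the dual, plus a residual connection
        return r + self.D_dual(soft(self.D(r), self.theta))

Stacking k such phases, each with its own β, threshold, and convolutional transforms, gives the unrolled reconstruction network described above.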
Step 3: channel attention network part
The input is set as a feature C, which is the multi-channel feature representation of the convolutional network and can be regarded as feature representations of different frequencies in the transform domain of convex optimization. A plain convolutional network cannot attend to the interdependency among different channels, so a channel attention mechanism is introduced: global information is obtained with global average pooling, cross-channel feature rearrangement is then completed with one-dimensional convolutions of different channel numbers together with a ReLU function, and the channel attention representation is obtained with a sigmoid function; the formula is expressed as follows:
a^(k) = sigmoid(f2(ReLU(f1(GAP(C))))), where GAP(·) denotes global average pooling
Two stacked convolutional layers f1 and f2 are used for the rearrangement: the first convolutional layer has w/u channels, the second has w channels, their activation functions are ReLU and sigmoid respectively, and the output channel attention is a^(k). Unlike the commonly used SE-Net [5] channel attention mechanism, the fully connected network is replaced with a one-dimensional convolutional network, which reduces the number of parameters to some extent.
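A sketch of this channel attention module; the reduction factor u, the kernel size of the one-dimensional convolutions, and the final channel-wise reweighting are illustrative assumptions:

import tensorflow as tf

class ChannelAttention(tf.keras.layers.Layer):
    """Channel attention a^(k) for a feature map C of shape [batch, H, W, w]."""
    def __init__(self, w=32, u=4):
        super().__init__()
        self.w = w
        # two stacked one-dimensional convolutions replace the fully connected layers of SE-Net:
        # the first has w/u channels with ReLU, the second has w channels with sigmoid
        self.f1 = tf.keras.layers.Conv1D(w // u, kernel_size=1, activation="relu")
        self.f2 = tf.keras.layers.Conv1D(w, kernel_size=1, activation="sigmoid")

    def call(self, C):
        g = tf.reduce_mean(C, axis=[1, 2])       # global average pooling -> [batch, w]
        g = tf.reshape(g, [-1, 1, self.w])       # lay the pooled channels out for 1-D convolution
        a = self.f2(self.f1(g))                  # attention representation a^(k), shape [batch, 1, w]
        a = tf.reshape(a, [-1, 1, 1, self.w])
        return C * a                             # reweight the channels of C with a^(k)

With kernel_size=1 the two layers act as a per-channel bottleneck; a larger kernel would mix neighboring channels, which may be closer to the cross-channel rearrangement the patent describes.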
Step 4: part of loss function
In addition to the error between the reconstructed image and the original image, the loss of the entire network also contains an error term defined on the convolutional network (the formula is given as an image in the original document).
The ratio of the two error terms is 1:0.1, and both are measured with the mean square error function. The network is trained with Adam at a learning rate of 0.001 for 200 epochs until it converges.
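A sketch of this loss under the stated 1:0.1 ratio; here the in-network error term is assumed to be the orthogonality constraint on the transform D implied by step 2 (the patent gives the exact term only as a formula image), and Adam is configured with the stated learning rate:

import tensorflow as tf

def total_loss(x_true, x_rec, constraint_terms, gamma=0.1):
    """x_true, x_rec: original and reconstructed image blocks.
    constraint_terms: list of (D_dual(D(x_k)), x_k) pairs collected from each phase (assumed form).
    gamma = 0.1 implements the 1:0.1 ratio between the two error terms."""
    loss_rec = tf.reduce_mean(tf.square(x_rec - x_true))            # mean square reconstruction error
    loss_net = tf.add_n([tf.reduce_mean(tf.square(dd - xk))         # error term inside the convolutional network
                         for dd, xk in constraint_terms])
    return loss_rec + gamma * loss_net

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)           # Adam, learning rate 0.001, about 200 epochs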
The network code is implemented as follows: the platform is a Titan XP server, and the code is written with TensorFlow. Since the method is data-driven, the network is trained on Image91, and after 200 training epochs the resulting model is used to handle practical problems.
Example 2: experiments are carried out on the Set11 and BSD68 data sets, which are shown in FIG. 2 and FIG. 3; the experimental results are shown in Tables 1 and 2. The comparison methods include neural network methods and optimization programming methods. The adaptive optimization network method follows these experimental steps:
Model selection step: according to the undersampling rate, the corresponding model trained for 200 epochs on TensorFlow with the Image91 training set is selected.
Data preprocessing step: the image is partitioned into blocks and normalized.
Table 1: experimental results (PSNR), given as an image in the original document.
Table 2: experimental results (PSNR), given as an image in the original document.
Testing step: all image blocks are input into the model, and the reconstructed blocks are synthesized to obtain the result.
The experimental results are measured by PSNR. Compared with current methods, the proposed algorithm achieves the highest PSNR at different undersampling rates and the best reconstruction quality, and the method can be extended to practical applications.
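For reference, the PSNR metric used in Tables 1 and 2 can be computed directly with TensorFlow's tf.image.psnr; the max_val of 1.0 assumes blocks normalized to [0, 1] in the preprocessing step, and the tensors below are illustrative placeholders:

import tensorflow as tf

x_true = tf.random.uniform([1, 33, 33, 1])                 # placeholder original block in [0, 1]
noise = 0.01 * tf.random.normal(tf.shape(x_true))
x_rec = tf.clip_by_value(x_true + noise, 0.0, 1.0)         # placeholder reconstructed block
psnr = tf.image.psnr(x_rec, x_true, max_val=1.0)           # PSNR in dB, as reported in Tables 1 and 2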
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit and scope of the present invention.

Claims (1)

1. An image compression sensing method of a self-adaptive optimization network, characterized by comprising the following steps:
step 1: an adaptive part;
Using a fully connected network with zero bias, the input is an image X, the output is a measurement result Y, the output dimension is set according to the CS rate, and the weights serve as the measurement matrix; another fully connected network with zero bias is added, whose input is the measurement result and whose output serves as the initialization input of the CS reconstruction network; in contrast to ISTA-Net+, which requires a pseudo-inverse computation over the training set, the measurement and initialization are as follows:
Y = f(X; W_θ) = W_θX,    X^(0) = f(Y; W_Q) = W_QY
f denotes a fully connected network, and W_θ, W_Q are the weights to be trained;
step 2: optimization algorithm part
The orthogonal projection operator is constructed by introducing the canonical dual frame: the dual transform D^T is used to recover X directly, so, following ISTA-Net+, convolution and residual connections are used to network Projected ISTA; the formula is
[The k-th Projected ISTA iteration, given as a formula image in the original document: a gradient step with step size β followed by soft-thresholding under the transform D.]
k is the iteration index of Projected ISTA, β is the step size, soft(·) is the soft-thresholding function, and D is the optimization transform;
step 3: channel attention network part
The input is set as a feature C, which is the multi-channel feature representation of the convolutional network. A channel attention mechanism is introduced: global information is obtained with global average pooling, cross-channel feature rearrangement is then completed with one-dimensional convolutions of different channel numbers together with a ReLU function, and the channel attention representation is obtained with a sigmoid function; the formula is expressed as follows:
a^(k) = sigmoid(f2(ReLU(f1(GAP(C))))), where GAP(·) denotes global average pooling
Two stacked convolutional layers f1 and f2 are used for the rearrangement: the first convolutional layer has w/u channels, the second has w channels, their activation functions are ReLU and sigmoid respectively, and the output channel attention is a^(k).
Step 4: part of loss function
In addition to the error between the reconstructed image and the original image, the loss of the entire network also contains an error term defined on the convolutional network (the formula is given as an image in the original document).
The ratio of the two error terms is 1:0.1, and both are measured with the mean square error function. The network is trained with Adam at a learning rate of 0.001 for 200 epochs until it converges.
CN202010139605.4A 2020-03-03 2020-03-03 Image compression sensing method of self-adaptive optimization network Active CN111354051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010139605.4A CN111354051B (en) 2020-03-03 2020-03-03 Image compression sensing method of self-adaptive optimization network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010139605.4A CN111354051B (en) 2020-03-03 2020-03-03 Image compression sensing method of self-adaptive optimization network

Publications (2)

Publication Number Publication Date
CN111354051A (en) 2020-06-30
CN111354051B (en) 2022-07-15

Family

ID=71194278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010139605.4A Active CN111354051B (en) 2020-03-03 2020-03-03 Image compression sensing method of self-adaptive optimization network

Country Status (1)

Country Link
CN (1) CN111354051B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884851A (en) * 2021-01-27 2021-06-01 武汉大学 Deep compression sensing network for expanding iterative optimization algorithm
CN113284202A (en) * 2021-06-11 2021-08-20 北京大学深圳研究生院 Image compression sensing method of scalable network based on content self-adaption
CN113516601A (en) * 2021-06-17 2021-10-19 西南大学 Image restoration technology based on deep convolutional neural network and compressed sensing
CN113643261A (en) * 2021-08-13 2021-11-12 江南大学 Lung disease diagnosis method based on frequency attention network
CN114359419A (en) * 2021-11-02 2022-04-15 上海大学 Image compressed sensing reconstruction method based on attention mechanism

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730451A (en) * 2017-09-20 2018-02-23 Institute of Computing Technology, Chinese Academy of Sciences Compressed sensing reconstruction method and system based on a deep residual network
CN108171762A (en) * 2017-12-27 2018-06-15 Changzhou Campus of Hohai University Deep-learning compressed sensing system and method for fast reconstruction of similar images
CN109168002A (en) * 2018-07-26 2019-01-08 Xidian University Video signal measurement-domain estimation method based on compressed sensing and convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUNSONG LIU et al.: "Projected Iterative Soft-thresholding Algorithm for Tight Frames in Compressed Sensing Magnetic Resonance Imaging", IEEE Transactions on Medical Imaging *
LI NANYU et al.: "Compressed sensing for the radioheliograph and Gaussian denoising of dirty maps", Journal of Sichuan University (Natural Science Edition) *
KE JUN et al.: "Applications of compressed sensing in the field of optical imaging", Acta Optica Sinica *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884851A (en) * 2021-01-27 2021-06-01 武汉大学 Deep compression sensing network for expanding iterative optimization algorithm
CN112884851B (en) * 2021-01-27 2022-06-14 武汉大学 Construction method of deep compressed sensing network based on expansion iteration optimization algorithm
CN113284202A (en) * 2021-06-11 2021-08-20 北京大学深圳研究生院 Image compression sensing method of scalable network based on content self-adaption
CN113516601A (en) * 2021-06-17 2021-10-19 西南大学 Image restoration technology based on deep convolutional neural network and compressed sensing
CN113643261A (en) * 2021-08-13 2021-11-12 江南大学 Lung disease diagnosis method based on frequency attention network
CN114359419A (en) * 2021-11-02 2022-04-15 上海大学 Image compressed sensing reconstruction method based on attention mechanism
CN114359419B (en) * 2021-11-02 2024-05-17 上海大学 Attention mechanism-based image compressed sensing reconstruction method

Also Published As

Publication number Publication date
CN111354051B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN111354051B (en) Image compression sensing method of self-adaptive optimization network
Schlemper et al. A deep cascade of convolutional neural networks for dynamic MR image reconstruction
CN108090871B (en) Multi-contrast magnetic resonance image reconstruction method based on convolutional neural network
Yang et al. Lossy image compression with conditional diffusion models
CN107038730B (en) Sparse representation image reconstruction method based on Gaussian scale structure block grouping
CN109064412A (en) A kind of denoising method of low-rank image
Kelkar et al. Compressible latent-space invertible networks for generative model-constrained image reconstruction
CN109003229A (en) Magnetic resonance super resolution ratio reconstruction method based on three-dimensional enhancing depth residual error network
Peng Adaptive ADMM for dictionary learning in convolutional sparse representation
Liu et al. A deep framework assembling principled modules for CS-MRI: unrolling perspective, convergence behaviors, and practical modeling
CN107527371A (en) One kind approaches smooth L in compressed sensing0The design constructing method of the image reconstruction algorithm of norm
CN111612695A (en) Super-resolution reconstruction method for low-resolution face image
CN115456918B (en) Image denoising method and device based on wavelet high-frequency channel synthesis
Zha et al. The power of triply complementary priors for image compressive sensing
CN114202459B (en) Blind image super-resolution method based on depth priori
CN115829834A (en) Image super-resolution reconstruction method based on half-coupling depth convolution dictionary learning
CN106991651B (en) Fast imaging method and system based on synthesis analysis deconvolution network
Zhang et al. LR-CSNet: low-rank deep unfolding network for image compressive sensing
CN106296583B (en) Based on image block group sparse coding and the noisy high spectrum image ultra-resolution ratio reconstructing method that in pairs maps
CN116612009A (en) Multi-scale connection generation countermeasure network medical image super-resolution reconstruction method
Cao et al. Sparse representation of classified patches for CS-MRI reconstruction
CN117036901A (en) Small sample fine adjustment method based on visual self-attention model
Moeller et al. Image denoising—old and new
CN113313175B (en) Image classification method of sparse regularized neural network based on multi-element activation function
Liu et al. SRN-SZ: Deep Leaning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Liu Cuiyin

Inventor after: Li Nanyu

Inventor before: Li Nanyu

Inventor before: Liu Cuiyin

GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Li Nanyu

Inventor after: Liu Cuiyin

Inventor before: Liu Cuiyin

Inventor before: Li Nanyu