CN111354051A - Image compression sensing method of self-adaptive optimization network - Google Patents
Image compression sensing method of self-adaptive optimization network
- Publication number
- CN111354051A (application CN202010139605.4A)
- Authority
- CN
- China
- Prior art keywords
- network
- channel
- ista
- optimization
- function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000005457 optimization Methods 0.000 title claims abstract description 26
- 238000000034 method Methods 0.000 title claims abstract description 21
- 230000006835 compression Effects 0.000 title claims description 8
- 238000007906 compression Methods 0.000 title claims description 8
- 238000005259 measurement Methods 0.000 claims abstract description 21
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 10
- 239000011159 matrix material Substances 0.000 claims abstract description 10
- 230000007246 mechanism Effects 0.000 claims abstract description 9
- 230000003044 adaptive effect Effects 0.000 claims abstract description 5
- 230000006870 function Effects 0.000 claims description 18
- 238000012549 training Methods 0.000 claims description 7
- 230000008707 rearrangement Effects 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 4
- 230000009466 transformation Effects 0.000 claims description 4
- 230000004913 activation Effects 0.000 claims description 3
- 230000009977 dual effect Effects 0.000 claims description 3
- 238000011176 pooling Methods 0.000 claims description 3
- 238000013528 artificial neural network Methods 0.000 abstract description 3
- 230000000694 effects Effects 0.000 abstract description 3
- 230000008569 process Effects 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 3
- 238000003384 imaging method Methods 0.000 description 3
- 238000005070 sampling Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000013170 computed tomography imaging Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000006855 networking Effects 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
- 230000002194 synthesizing effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/002—Image coding using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method for reconstructing undersampled images through an adaptive optimization network, and belongs to the technical field of natural image compressed sensing. The invention mainly comprises an adaptive part, an optimization algorithm part, a channel attention network part and a loss function part. The method combines the advantages of deep neural networks and optimization algorithms and is both fast and accurate. The measurement matrix is learned adaptively by a fully connected network, so that more effective information can be extracted, and the initialization is also learned adaptively by a fully connected network, removing the original complex pseudo-inverse process. The channel attention mechanism is learned by a fully convolutional network, so that more effective features are extracted and the reconstruction effect is improved.
Description
Technical Field
The invention relates to an image compressed sensing method based on an adaptive optimization network, and belongs to the technical field of natural image compressed sensing.
Background
Compressed Sensing (CS) theory states that a signal can be measured with a measurement matrix at a sampling rate far below the Nyquist rate and still be reconstructed with high probability, provided the signal itself is compressible or sparse in some transform domain. Traditionally, many fast and accurate convex optimization methods, such as ADMM and ISTA, are used for CS reconstruction. Because CS sampling (far below the Nyquist rate) is hardware friendly, reduces sampling cost and can even increase imaging speed, it has found wide and mature application in many fields, such as MRI, CT imaging, radio astronomical imaging, single-pixel cameras and 3D video. However, the conventional convex optimization algorithms have the following disadvantages: the optimization parameters, the incoherent measurement matrix and the sparsifying transform must be set manually as priors, and hundreds of iterations are needed to reach the optimal solution.
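For reference, the classical ISTA iteration mentioned above can be sketched in a few lines of NumPy; the step size, threshold and iteration count below are illustrative values only, not parameters of the invention.

import numpy as np

def soft(v, theta):
    # Soft-thresholding, the proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(y, phi, theta=0.01, iters=300):
    # Classical ISTA for min_x 0.5*||phi x - y||^2 + theta*||x||_1,
    # with the step size set from the spectral norm of phi.
    beta = 1.0 / np.linalg.norm(phi, 2) ** 2
    x = np.zeros(phi.shape[1])
    for _ in range(iters):
        grad = phi.T @ (phi @ x - y)            # gradient of the data term
        x = soft(x - beta * grad, beta * theta)
    return x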
Disclosure of Invention
The technical problem to be solved by the invention is to provide an image compression sensing method of a self-adaptive optimization network, so as to solve the above problems.
The technical scheme of the invention is as follows: an image compression sensing method of a self-adaptive optimization network comprises the following specific steps:
step 1: an adaptive part;
using a fully connected network with zero bias, whose input is the image X and whose output is the measurement result Y, setting the output dimension according to the CS rate, the weight of this network being the measurement matrix; adding another fully connected network with zero bias, whose input is the measurement result and whose output serves as the initialization input of the CS reconstruction network; compared with ISTA-Net+, which must perform a pseudo-inverse calculation over the training set, the measurement and initialization are obtained as
Y = f(X; Wθ) = WθX,  X(0) = f(Y; WQ) = WQY,
where f represents a fully connected network and Wθ, WQ are the weights to be trained;
step 2: optimization algorithm part
The orthogonal projection operator is constructed by introducing the canonical dual frame DT, with which X can be solved directly; the convolution and residual connections of ISTA-Net+ are therefore used to turn Projected ISTA into a network, in which k is the iteration index of Projected ISTA, β is the step size, soft(·) is the soft-thresholding function, and D is the optimization transform;
step 3: channel attention network part
Setting the input as a feature C, the multi-channel feature representation of the convolutional network; introducing a channel attention mechanism: global average pooling is used to obtain the global information, one-dimensional convolutions with different channel numbers together with a ReLU function complete the cross-channel feature rearrangement, and a sigmoid function yields the channel attention representation; two stacked convolutional layers f are used for the rearrangement, the first layer having w/u channels and the second w channels, with ReLU and sigmoid activations respectively, and the output channel attention map is a(k);
Step 4: part of loss function
The loss of the entire network contains, in addition to the error between the reconstructed image and the original image, an error term inside the convolutional network; the two errors are weighted 1 : 0.1 and both are measured with the mean squared error function; Adam is used for learning with a learning rate of 0.001, and the network is trained for 200 epochs until it converges.
The invention has the beneficial effects that: the method combines the advantages of deep neural networks and optimization algorithms and is both fast and accurate. The measurement matrix is learned adaptively by a fully connected network, so that more effective information can be extracted, and the initialization is also learned adaptively by a fully connected network, removing the original complex pseudo-inverse process. The channel attention mechanism is learned by a fully convolutional network, so that more effective features are extracted and the reconstruction effect is improved.
Drawings
FIG. 1 is a framework diagram of the present invention;
FIG. 2 is a diagram of the natural image grayscale data set Set11 used in the present invention;
FIG. 3 is a diagram of the natural image grayscale data set BSD68 used in the present invention.
Detailed Description
The invention is further described with reference to the following drawings and detailed description.
An image compression sensing method of a self-adaptive optimization network comprises the following specific steps:
step 1: an adaptive part;
As shown in the measurement matrix and initialization part of FIG. 1, a fully connected network with zero bias is used: its input is the image X, its output is the measurement result Y, the output dimension is set according to the CS rate, and its weight is the measurement matrix. Similarly, another fully connected network with zero bias is added, whose input is the measurement result and whose output serves as the initialization input of the CS reconstruction network. Compared with ISTA-Net+, which requires a pseudo-inverse calculation over the training set, the amount of computation is greatly reduced. The measurement and initialization are
Y = f(X; Wθ) = WθX,  X(0) = f(Y; WQ) = WQY,
where f represents a fully connected network and Wθ, WQ are the weights to be trained, i.e. the measurement matrix and the initialization matrix. This makes the further networking of ISTA-Net+ [2] easier, satisfies the RIP condition (which requires the measurement matrix and the optimization transform in CS reconstruction to be incoherent), yields a better initialization X(0), and improves the CS reconstruction result.
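As a minimal sketch of this adaptive sampling and initialization, the two zero-bias fully connected layers can be written as plain matrix products; the 33 x 33 block size, the CS rate of 0.25 and the random weight initialization below are assumptions for illustration only, not values fixed by the invention.

import numpy as np

n = 33 * 33              # assumed: length of a vectorized image block
cs_rate = 0.25           # assumed CS (undersampling) rate
m = int(cs_rate * n)     # measurement dimension set by the CS rate

rng = np.random.default_rng(0)
W_theta = rng.standard_normal((m, n)) / np.sqrt(n)   # trainable measurement matrix
W_q = rng.standard_normal((n, m)) / np.sqrt(m)       # trainable initialization matrix

def measure(x):
    # Zero-bias fully connected layer: Y = W_theta X.
    return W_theta @ x

def initialize(y):
    # Zero-bias fully connected layer: X(0) = W_q Y,
    # replacing the pseudo-inverse initialization of ISTA-Net+.
    return W_q @ y

x = rng.standard_normal(n)        # a vectorized image block
x0 = initialize(measure(x))       # initialization fed to the reconstruction network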
Step 2: optimization algorithm part
Among conventional optimization algorithms, Projected ISTA achieves better performance than ADMM by introducing the canonical dual frame to construct an orthogonal projection operator: the dual frame DT allows X to be solved directly, so the convolution and residual connections of ISTA-Net+ are used to turn Projected ISTA into a network. In this networked form, k is the iteration index of Projected ISTA, i.e. the number of stacked network stages, β is the step size, soft(·) is the soft-thresholding function, and D is the optimization transform, which conforms to the orthogonality principle and is simulated with a convolutional network.
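The networked Projected ISTA stage can be sketched as follows, with D written as an explicit matrix standing in for the learned convolutional transform and phi for the measurement operator; this is a sketch of the gradient step, soft-thresholding and projection through the canonical dual frame, not the exact layer structure of the invention.

import numpy as np

def soft(v, theta):
    # Soft-thresholding function ("soft" above).
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_ista_step(x, y, phi, D, beta, theta):
    # One networked stage k: gradient step on the data term,
    # soft-thresholding in the D domain, then projection back
    # through the canonical dual frame D^T.
    r = x - beta * phi.T @ (phi @ x - y)
    return D.T @ soft(D @ r, theta)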
Step 3: channel attention network part
The input is set as a feature C, the multi-channel feature representation of the convolutional network, which can be regarded as the representation of different frequencies in the transform domain of the convex optimization. A plain convolutional network cannot attend to the interdependency among different channels, so a channel attention mechanism is introduced: global average pooling is used to obtain the global information, one-dimensional convolutions with different channel numbers together with a ReLU function complete the cross-channel feature rearrangement, and a sigmoid function yields the channel attention representation. Two stacked convolutional layers f are used for the rearrangement, the first layer having w/u channels and the second w channels, with ReLU and sigmoid activations respectively; the output channel attention map is a(k). Unlike the commonly used SE-Net [5] channel attention, the fully connected network is replaced with a one-dimensional convolutional network, which reduces the number of parameters.
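A compact NumPy sketch of this channel attention follows, with the two stacked layers written as pointwise operations on the pooled channel descriptor, which is what they reduce to after global average pooling; the channel count, reduction ratio u and weight values are placeholders, not values from the invention.

import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def channel_attention(features, W1, W2):
    # features: (C, H, W) multi-channel feature maps.
    z = features.mean(axis=(1, 2))        # global average pooling, shape (C,)
    a = sigmoid(W2 @ relu(W1 @ z))        # attention map a(k), shape (C,)
    return features * a[:, None, None]    # reweight every channel

C, H, W, u = 32, 33, 33, 4                # placeholder sizes and reduction ratio
rng = np.random.default_rng(0)
F = rng.standard_normal((C, H, W))
W1 = 0.1 * rng.standard_normal((C // u, C))   # first layer: C -> C/u channels
W2 = 0.1 * rng.standard_normal((C, C // u))   # second layer: C/u -> C channels
out = channel_attention(F, W1, W2)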
Step 4: part of loss function
The loss of the entire network contains, in addition to the error between the reconstructed image and the original image, an error term inside the convolutional network; the two errors are weighted 1 : 0.1 and both are measured with the mean squared error function; Adam is used for learning with a learning rate of 0.001, and the network is trained for 200 epochs until it converges.
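A sketch of this loss under the stated 1 : 0.1 weighting is given below; the auxiliary term is written generically as a second mean squared error inside the network, since its exact form is not spelled out in the text.

import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)

def total_loss(x_rec, x_true, aux_pred, aux_target):
    # Reconstruction error plus the auxiliary error inside the
    # convolutional network, weighted 1 : 0.1, both as mean squared errors;
    # the whole loss is minimized with Adam (learning rate 0.001, 200 epochs).
    return mse(x_rec, x_true) + 0.1 * mse(aux_pred, aux_target)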
The network code is realized as follows: the platform is a Titan Xp server and the code is written with TensorFlow. Since the method is data driven, the network is trained on Image91; after 200 training epochs, the resulting model is used to deal with practical problems.
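As an illustration of the data-driven training setup, the following sketch prepares block-based training samples from a grayscale image; the 33 x 33 block size and the [0, 1] normalization are assumptions, since the text only states that images are partitioned into blocks and normalized.

import numpy as np

def image_to_blocks(img, block=33):
    # Cut a grayscale image into non-overlapping block x block patches
    # and normalize pixel values to [0, 1]; the block size is assumed.
    img = img.astype(np.float64) / 255.0
    H, W = img.shape
    H, W = H - H % block, W - W % block          # drop the ragged border
    return (img[:H, :W]
            .reshape(H // block, block, W // block, block)
            .swapaxes(1, 2)
            .reshape(-1, block, block))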
Example 2: experiments are carried out on Set11 and BSD68 data sets, wherein the data sets are shown in FIG. 2 and FIG. 3, the experimental results are shown in tables 1 and 2, the comparison method comprises the neural network method and the optimization programming method, and the adaptive optimization network method comprises the following experimental steps:
selecting a model: the corresponding model was selected to be trained 200 times on Tensorflow, with a training set of Image91, based on the undersampling rate.
Data preprocessing step: the image is partitioned into blocks and normalized.
TABLE 1
TABLE 2
And (3) testing: and inputting all image blocks into the model, and reconstructing and synthesizing to obtain a result.
The experimental results are measured by PSNR. Compared with current methods, the proposed algorithm attains the highest PSNR and the best reconstruction quality at the different undersampling rates, and the method can be extended to practical applications.
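For reference, PSNR for 8-bit images can be computed as in the short sketch below.

import numpy as np

def psnr(x_rec, x_true, peak=255.0):
    # Peak signal-to-noise ratio in dB, assuming an 8-bit pixel range.
    err = np.mean((x_rec.astype(np.float64) - x_true.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / err)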
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit and scope of the present invention.
Claims (1)
1. An image compression sensing method of a self-adaptive optimization network is characterized in that:
step 1: an adaptive part;
using a fully connected network with zero bias, whose input is an image X and whose output is a measurement result Y, setting the output dimension according to the CS rate, the weight being the measurement matrix; adding another fully connected network with zero bias, whose input is the measurement result and whose output serves as the initialization input of the CS reconstruction network; compared with ISTA-Net+, which must perform a pseudo-inverse calculation over the training set, the measurement and initialization are obtained as
Y = f(X; Wθ) = WθX,  X(0) = f(Y; WQ) = WQY,
where f represents a fully connected network and Wθ, WQ are the weights to be trained;
step 2: optimization algorithm part
The orthogonal projection operator is constructed by introducing the canonical dual frame DT, with which X can be solved directly; the convolution and residual connections of ISTA-Net+ are therefore used to turn Projected ISTA into a network, in which k is the iteration index of Projected ISTA, β is the step size, soft(·) is the soft-thresholding function, and D is the optimization transform;
step 3: channel attention network part
setting the input as a feature C, the multi-channel feature representation of the convolutional network; introducing a channel attention mechanism: global average pooling is used to obtain the global information, one-dimensional convolutions with different channel numbers together with a ReLU function complete the cross-channel feature rearrangement, and a sigmoid function yields the channel attention representation; two stacked convolutional layers f are used for the rearrangement, the first layer having w/u channels and the second w channels, with ReLU and sigmoid activations respectively, and the output channel attention map is a(k);
Step 4: part of loss function
The loss of the entire network contains, in addition to the error between the reconstructed image and the original image, an error term inside the convolutional network; the two errors are weighted 1 : 0.1 and both are measured with the mean squared error function; Adam is used for learning with a learning rate of 0.001, and the network is trained for 200 epochs until it converges.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010139605.4A CN111354051B (en) | 2020-03-03 | 2020-03-03 | Image compression sensing method of self-adaptive optimization network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010139605.4A CN111354051B (en) | 2020-03-03 | 2020-03-03 | Image compression sensing method of self-adaptive optimization network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111354051A true CN111354051A (en) | 2020-06-30 |
CN111354051B CN111354051B (en) | 2022-07-15 |
Family
ID=71194278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010139605.4A Active CN111354051B (en) | 2020-03-03 | 2020-03-03 | Image compression sensing method of self-adaptive optimization network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111354051B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112884851A (en) * | 2021-01-27 | 2021-06-01 | 武汉大学 | Deep compression sensing network for expanding iterative optimization algorithm |
CN113284202A (en) * | 2021-06-11 | 2021-08-20 | 北京大学深圳研究生院 | Image compression sensing method of scalable network based on content self-adaption |
CN113516601A (en) * | 2021-06-17 | 2021-10-19 | 西南大学 | Image restoration technology based on deep convolutional neural network and compressed sensing |
CN113643261A (en) * | 2021-08-13 | 2021-11-12 | 江南大学 | Lung disease diagnosis method based on frequency attention network |
CN114359419A (en) * | 2021-11-02 | 2022-04-15 | 上海大学 | Image compressed sensing reconstruction method based on attention mechanism |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107730451A (en) * | 2017-09-20 | 2018-02-23 | 中国科学院计算技术研究所 | A kind of compressed sensing method for reconstructing and system based on depth residual error network |
CN108171762A (en) * | 2017-12-27 | 2018-06-15 | 河海大学常州校区 | System and method for is reconfigured quickly in a kind of similar image of the compressed sensing of deep learning |
CN109168002A (en) * | 2018-07-26 | 2019-01-08 | 西安电子科技大学 | Vision signal measurement field estimation method based on compressed sensing and convolutional neural networks |
-
2020
- 2020-03-03 CN CN202010139605.4A patent/CN111354051B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107730451A (en) * | 2017-09-20 | 2018-02-23 | 中国科学院计算技术研究所 | A kind of compressed sensing method for reconstructing and system based on depth residual error network |
CN108171762A (en) * | 2017-12-27 | 2018-06-15 | 河海大学常州校区 | System and method for is reconfigured quickly in a kind of similar image of the compressed sensing of deep learning |
CN109168002A (en) * | 2018-07-26 | 2019-01-08 | 西安电子科技大学 | Vision signal measurement field estimation method based on compressed sensing and convolutional neural networks |
Non-Patent Citations (3)
Title |
---|
YUNSONG LIU 等: "Projected Iterative Soft-thresholding Algorithm for Tight Frames in Compressed Sensing Magnetic Resonance Imaging", 《IEEE TRANSACTIONS ON MEDICAL IMAGING》 * |
- LI Nanyu et al.: "Compressed sensing and Gaussian denoising of dirty maps for the radioheliograph", Journal of Sichuan University (Natural Science Edition) *
- KE Jun et al.: "Application of compressed sensing in the field of optical imaging", Acta Optica Sinica *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112884851A (en) * | 2021-01-27 | 2021-06-01 | 武汉大学 | Deep compression sensing network for expanding iterative optimization algorithm |
CN112884851B (en) * | 2021-01-27 | 2022-06-14 | 武汉大学 | Construction method of deep compressed sensing network based on expansion iteration optimization algorithm |
CN113284202A (en) * | 2021-06-11 | 2021-08-20 | 北京大学深圳研究生院 | Image compression sensing method of scalable network based on content self-adaption |
CN113516601A (en) * | 2021-06-17 | 2021-10-19 | 西南大学 | Image restoration technology based on deep convolutional neural network and compressed sensing |
CN113643261A (en) * | 2021-08-13 | 2021-11-12 | 江南大学 | Lung disease diagnosis method based on frequency attention network |
CN114359419A (en) * | 2021-11-02 | 2022-04-15 | 上海大学 | Image compressed sensing reconstruction method based on attention mechanism |
CN114359419B (en) * | 2021-11-02 | 2024-05-17 | 上海大学 | Attention mechanism-based image compressed sensing reconstruction method |
Also Published As
Publication number | Publication date |
---|---|
CN111354051B (en) | 2022-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111354051B (en) | Image compression sensing method of self-adaptive optimization network | |
CN108090871B (en) | Multi-contrast magnetic resonance image reconstruction method based on convolutional neural network | |
CN108765511A (en) | Ultrasonoscopy super resolution ratio reconstruction method based on deep learning | |
Kelkar et al. | Compressible latent-space invertible networks for generative model-constrained image reconstruction | |
CN107038730B (en) | Sparse representation image reconstruction method based on Gaussian scale structure block grouping | |
CN109064412A (en) | A kind of denoising method of low-rank image | |
CN114202459B (en) | Blind image super-resolution method based on depth priori | |
CN111612695B (en) | Super-resolution reconstruction method for low-resolution face image | |
US12045961B2 (en) | Image denoising method and apparatus based on wavelet high-frequency channel synthesis | |
Liu et al. | A deep framework assembling principled modules for CS-MRI: unrolling perspective, convergence behaviors, and practical modeling | |
Peng | Adaptive ADMM for dictionary learning in convolutional sparse representation | |
Zha et al. | The power of triply complementary priors for image compressive sensing | |
CN112270646A (en) | Super-resolution enhancement method based on residual error dense jump network | |
Zhang et al. | LR-CSNet: low-rank deep unfolding network for image compressive sensing | |
CN111640067A (en) | Single image super-resolution reconstruction method based on three-channel convolutional neural network | |
CN113838104B (en) | Registration method based on multispectral and multimodal image consistency enhancement network | |
Wen et al. | The power of complementary regularizers: Image recovery via transform learning and low-rank modeling | |
US12086908B2 (en) | Reconstruction with magnetic resonance compressed sensing | |
Liu et al. | SRN-SZ: Deep Leaning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks | |
CN117278049A (en) | OMP signal reconstruction method and device based on weighted QR decomposition | |
CN116612009A (en) | Multi-scale connection generation countermeasure network medical image super-resolution reconstruction method | |
Cang et al. | Research on hyperspectral image reconstruction based on GISMT compressed sensing and interspectral prediction | |
Cao et al. | Sparse representation of classified patches for CS-MRI reconstruction | |
CN117036901A (en) | Small sample fine adjustment method based on visual self-attention model | |
CN111243047A (en) | Image compression sensing method based on self-adaptive nonlinear network and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Liu Cuiyin
Inventor after: Li Nanyu
Inventor before: Li Nanyu
Inventor before: Liu Cuiyin
|
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Li Nanyu
Inventor after: Liu Cuiyin
Inventor before: Liu Cuiyin
Inventor before: Li Nanyu