CN112150566A - Dense residual error network image compressed sensing reconstruction method based on feature fusion - Google Patents


Info

Publication number
CN112150566A
CN112150566A
Authority
CN
China
Prior art keywords
residual block
output
layer
dense residual
dense
Prior art date
Legal status
Pending
Application number
CN202011034288.6A
Other languages
Chinese (zh)
Inventor
曾春艳
严康
叶佳翔
余琰
王正辉
Current Assignee
Hubei University of Technology
Original Assignee
Hubei University of Technology
Priority date
Filing date
Publication date
Application filed by Hubei University of Technology
Priority to CN202011034288.6A
Publication of CN112150566A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/002 Image coding using neural networks
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of image processing, and in particular to a dense residual error network image compressed sensing reconstruction method based on feature fusion. A plurality of dense residual blocks are applied, and a dense residual network (RDNCS) based on a compressed sensing algorithm is provided; each dense residual block (RDB) comprises a connection memory unit, a local feature fusion unit, and a local residual learning unit. The invention therefore has the following advantages: 1. In each RDB, the memory-unit connection mechanism, the feature fusion mechanism, and residual learning markedly improve image reconstruction quality. 2. The feature fusion mechanism makes the features acquired by the RDB network broader and more effective, adaptively obtains the information required for reconstruction, and reduces the number of feature maps in the network.

Description

Dense residual error network image compressed sensing reconstruction method based on feature fusion
Technical Field
The invention relates to the field of image processing, and in particular to a dense residual error network image compressed sensing reconstruction method based on feature fusion.
Background
The goal of deep-learning-based compressed sensing algorithms is to reconstruct a high-precision image from a measurement signal, which is an ill-posed inverse problem. Compressed sensing algorithms have been quite successful in computer vision, for example in satellite remote sensing imaging and the acquisition of medical images. Earlier data-driven methods often require extensive training to solve such problems; for example, stacked denoising autoencoders, convolutional neural networks, and residual networks have all been applied to compressed sensing reconstruction. Previous reconstruction algorithms make heavy use of CNNs, but improve them only in the size of the convolution kernel, or rearrange the spatial order of the image at the network input to obtain stronger correlation in the image pixel domain. In this family of neural network algorithms, image reconstruction proceeds from the measurement vector to the reconstructed image, and the hierarchical information of the image goes from simple to complex: the closer a convolutional layer is to the final reconstructed image, the richer the hierarchical information it carries, while the convolutional layers close to the measurement acquire a large amount of low-level information. Making full use of this low-level information can guide accurate reconstruction of the image.
Disclosure of Invention
The invention mainly addresses the technical problem that the prior art cannot fully utilize the information acquired by each convolutional layer, that is, low-level information is not fully exploited, so that the reconstructed image is often strongly correlated with only the last convolutional layer, as shown in FIG. 1. In a conventional convolutional neural network it is very difficult to acquire low-level information directly. In the RDNCS network provided by the invention, 2 dense residual blocks (RDBs) are used; the dense residual blocks make full use of the low-level image information to accurately reconstruct the original image. Inside each dense residual block, the RDB uses a memory unit (MU) to transfer low-level information to every layer in the block, and uses residual learning to further improve the image reconstruction quality.
The technical problem of the invention is mainly solved by the following technical scheme:
a dense residual network image compressed sensing reconstruction method based on feature fusion is characterized in that a plurality of dense residual blocks are applied, and a dense residual network (RDNCS) based on a compressed sensing algorithm is provided. Each dense residual block (RDB) comprises a connection memory unit, a local feature fusion unit and a local residual learning unit, and the method comprises the following steps:
step 1, inputting an electric power system image shot by an unmanned aerial vehicle, obtaining a measured value y after CS measurement, and inputting the measured value y into a dense residual block;
step 2, taking the image measured value y as input, performing the convolution and pooling operations in the dense residual block, then applying the connection memory unit, which transmits the output of each layer to the next layer and also transmits the information processed by each layer to every layer in the residual block;
step 3, the local feature fusion unit fuses the output data of the connection memory unit in step 2 with the outputs generated by all the convolutional layers; a convolution operation with a 3 × 3 convolution kernel is then performed to obtain the output of the convolutional layer;
step 4, in the dense residual block, the forward-propagated input and the local feature fusion output of step 3 are combined through a skip connection to form the output; this operation is called residual learning;
step 5, inputting an electric power system image shot by the unmanned aerial vehicle, obtaining the measured value y after CS measurement, inputting the measured value y into the dense residual block, performing the operations of steps 2, 3 and 4 in the dense residual block, outputting the result into the next dense residual block, and repeating the operations to obtain the final reconstructed image.
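The five steps above can be sketched end to end as follows. This is a hypothetical NumPy illustration of the data flow only (measurement y, two dense residual blocks in series, reconstructed output); the feature width F, the layer count C, and the use of plain matrix multiplies in place of real convolution and pooling layers are all assumptions made for the sake of a runnable sketch, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda v: np.maximum(v, 0.0)

F = 8   # feature width (illustrative, not the patent's sizes)
C = 3   # densely connected layers per block (assumed)

def dense_residual_block(x, layer_ws, fuse_w):
    """One RDB: dense connections (memory unit), feature fusion, residual skip."""
    outs = [x]                               # the memory unit keeps every output
    for W in layer_ws:                       # each layer sees all earlier outputs
        outs.append(relu(W @ np.concatenate(outs)))
    fused = fuse_w @ np.concatenate(outs)    # local feature fusion
    return x + fused                         # local residual learning

def make_block():
    layer_ws = [rng.standard_normal((F, F * (c + 1))) * 0.1 for c in range(C)]
    fuse_w = rng.standard_normal((F, F * (C + 1))) * 0.1
    return layer_ws, fuse_w

y = rng.standard_normal(F)                   # CS measurement of the input image
x = y
for layer_ws, fuse_w in [make_block() for _ in range(2)]:  # RDNCS: 2 RDBs
    x = dense_residual_block(x, layer_ws, fuse_w)
print(x.shape)                               # (8,)
```

The output keeps the input's feature width, so any number of RDBs can be chained, exactly as step 5 describes.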
In the above feature-fusion-based dense residual network image compressed sensing reconstruction method, the specific steps for processing the power system image measurement by the connection memory unit are as follows: after the convolution and pooling operations are performed on the power system image measurement, the memory unit retains the output of each layer and transmits it not only to the next layer but also, after processing, to every layer inside the residual block. The function of the connection memory unit is to transmit the processed information of the current layer to every layer in the dense residual block, making full use of the hierarchical information of each layer and realizing information sharing in the forward propagation between layers; later convolutional layers in the network structure thus acquire both the forward-propagated information and more low-level information. The propagation process can be expressed as:
F_{d,c} = σ(W_{d,c}[F_{d,0}, F_{d,1}, ..., F_{d,c-1}])

where σ is the ReLU activation function, d denotes the d-th dense residual block, c denotes the c-th layer within the dense residual block, and W_{d,c} is the weight coefficient of the c-th convolutional layer; when c = 1, F_{d,0} is the output of the previous dense residual block. After the convolution and pooling operations on the power system image measurement, the weight coefficient of each convolutional layer in the dense residual block is multiplied with the retained outputs of the preceding layers, giving the output data of the connection memory unit.
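The propagation rule above can be sketched numerically. In this hedged NumPy illustration (the layer count, feature width, and use of plain matrices instead of convolutions are assumptions), the memory unit retains every layer's output, and layer c consumes the concatenation of all earlier outputs, so each layer's input width grows with depth:

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)
rng = np.random.default_rng(1)

F = 4                              # feature width per layer (illustrative)
F_d0 = rng.standard_normal(F)      # output of the previous dense residual block

outs = [F_d0]                      # the memory unit retains every output
input_widths = []
for c in range(1, 4):              # three densely connected layers
    mem = np.concatenate(outs)     # [F_{d,0}, F_{d,1}, ..., F_{d,c-1}]
    input_widths.append(mem.size)
    W_dc = rng.standard_normal((F, mem.size)) * 0.1
    outs.append(relu(W_dc @ mem))  # F_{d,c} = sigma(W_{d,c} . mem)

print(input_widths)                # [4, 8, 12]
```

The growing input widths are the "information sharing" the text describes: the last layer sees the previous block's output plus every intermediate output.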
In the above feature-fusion-based dense residual network image compressed sensing reconstruction method, the specific fusion method of the local feature fusion unit is as follows: the local feature fusion unit fuses the output data F_{d,c} of the connection memory unit in step 2 with the outputs generated by all the convolutional layers. The output of each convolutional layer fully represents the feature information of the level that layer stands for; fusing this information with the output data of the connection memory unit adaptively acquires the accurate information required for reconstruction. As analysed above, the output of each convolutional layer in the dense residual block is fed into every following convolutional layer, so feature fusion can effectively reduce the number of feature maps. The output of each dense residual block is adaptively controlled using a 1 × 1 convolution. The process can be expressed as:
F_{d,LF} = H_{d}^{LFF}([F_{d,0}, F_{d,1}, ..., F_{d,C}])

where H_{d}^{LFF} denotes the feature fusion layer of 1 × 1 convolution within the d-th residual block, and F_{d,LF} denotes the output of local feature fusion.
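A 1 × 1 convolution acts as a per-pixel linear map over channels, which is what lets local feature fusion cut the number of feature maps after the dense concatenation. The sketch below is a hedged illustration only (channel counts and map sizes are assumed); it fuses five 4-channel outputs down to 8 feature maps:

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 6                                                  # spatial size (assumed)
maps = [rng.standard_normal((4, H, W)) for _ in range(5)]  # MU output + layer outputs
stacked = np.concatenate(maps, axis=0)                     # dense concat: 20 channels

# a 1x1 convolution is just a per-pixel linear map over channels
W_1x1 = rng.standard_normal((8, stacked.shape[0])) * 0.1   # fuse 20 -> 8 feature maps
fused = np.einsum('oc,chw->ohw', W_1x1, stacked)

print(stacked.shape, '->', fused.shape)                    # (20, 6, 6) -> (8, 6, 6)
```

Because the map is learned, the fusion layer can weight each concatenated channel adaptively, which is the "adaptive control" the text refers to.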
In the above feature-fusion-based dense residual network image compressed sensing reconstruction method, the specific method by which the local residual learning unit performs the skip connection is as follows: in the dense residual block, the skip connection commonly used in conventional residual networks is adopted, so the output of the dense residual block is related both to the local feature fusion output F_{d,LF} and to the forward-propagated input F_{d-1}. Including the forward-propagated input F_{d-1} through residual learning can significantly improve the flow of information processed by the convolutional neural network. The process can be expressed as:

F_d = F_{d-1} + F_{d,LF}

where F_d is the output of the dense residual block. Residual learning has a very good effect on dense residual blocks.
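The residual update F_d = F_{d-1} + F_{d,LF} is a plain element-wise addition. The toy values below are hypothetical; the second check shows why the skip connection helps training: when the fused branch contributes nothing, the block reduces to the identity mapping, so stacking blocks cannot degrade the forward-propagated signal.

```python
import numpy as np

F_prev = np.array([1.0, 2.0, 3.0])    # forward-propagated input F_{d-1}
F_LF   = np.array([0.1, -0.2, 0.0])   # local feature fusion output F_{d,LF}

F_d = F_prev + F_LF                   # residual learning: F_d = F_{d-1} + F_{d,LF}
print(F_d)                            # [1.1 1.8 3. ]

# zero fusion output => identity mapping, the key to trainable deep stacks
assert np.allclose(F_prev + np.zeros_like(F_prev), F_prev)
```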
In the above feature-fusion-based dense residual network image compressed sensing reconstruction method, the measured value y is input into the dense residual block, the operations of steps 2, 3 and 4 are performed in the dense residual block, the result is output into the next dense residual block, and the operations are repeated to obtain the final reconstructed image; the image reconstruction quality is markedly improved.
Therefore, the invention has the following advantages: 1. in each RDB, the image reconstruction quality is remarkably improved by the memory unit connection mechanism, the feature fusion mechanism and the residual learning. 2. The feature fusion mechanism enables the features acquired by the RDB network to be more extensive and effective, obtains the information required by reconstruction in a self-adaptive manner, and reduces the number of feature maps of the network.
Drawings
FIG. 1 is a schematic diagram of the dense residual error network based on feature fusion of the present invention.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
Example (b):
As shown in the figure, the RDNCS takes the power system image measurements as input and outputs the reconstructed image. The RDNCS contains 2 dense residual blocks (RDBs). Unlike a conventional residual network, which performs skip connections only inside its residual blocks, this network not only performs skip connections internally but also performs deep feature fusion on the output of each dense residual block. The dense residual block is shown schematically in the figure; inside it are the connection memory unit, local feature fusion, and local residual learning, whose functions are described in turn below.
1) Connecting memory cells
Inside the dense residual block, the connection memory unit (MU) forms the dense connections of the block. The memory unit retains the output of each layer and, through the dense connection mechanism, transmits the output of each layer not only to the next layer but also to every layer inside the residual block. The function of the connection memory unit is to transmit the processed information of the current layer to every layer in the dense residual block, making full use of the hierarchical information of each layer and realizing information sharing in the forward propagation between layers; later convolutional layers in the network structure thus acquire both the forward-propagated information and more low-level information. The propagation process can be expressed as:
F_{d,c} = σ(W_{d,c}[F_{d,0}, F_{d,1}, ..., F_{d,c-1}])

where σ is the ReLU activation function, d denotes the d-th dense residual block, c denotes the c-th layer within the dense residual block, and W_{d,c} is the weight coefficient of the c-th convolutional layer; when c = 1, F_{d,0} is the output of the previous dense residual block. After the convolution and pooling operations on the power system image measurement, the weight coefficient of each convolutional layer in the dense residual block is multiplied with the retained outputs of the preceding layers, giving the output data of the connection memory unit.
2) Local feature fusion
The local feature fusion unit fuses the output data F_{d,c} of the connection memory unit in step 2 with the outputs generated by all the convolutional layers. The output of each convolutional layer fully represents the feature information of the level that layer stands for; fusing this information with the output data of the connection memory unit adaptively acquires the accurate information required for reconstruction. As analysed in the previous section, the output of each convolutional layer in the dense residual block is fed into every following convolutional layer, so feature fusion can effectively reduce the number of feature maps. The output of each dense residual block is adaptively controlled using a 1 × 1 convolution. The process can be expressed as:
F_{d,LF} = H_{d}^{LFF}([F_{d,0}, F_{d,1}, ..., F_{d,C}])

where H_{d}^{LFF} denotes the feature fusion layer of 1 × 1 convolution within the d-th residual block, and F_{d,LF} denotes the output of local feature fusion.
3) Residual learning
In the dense residual block, the skip connection commonly used in conventional residual networks is adopted, so the output of the dense residual block is related both to the local feature fusion output F_{d,LF} and to the forward-propagated input F_{d-1}. Including the forward-propagated input F_{d-1} through residual learning can significantly improve the flow of information processed by the convolutional neural network. The process can be expressed as:

F_d = F_{d-1} + F_{d,LF}

where F_d is the output of the dense residual block. Residual learning has a very good effect on dense residual blocks.
Under the combined action of the connection memory unit, feature fusion, and residual learning, a dense residual block with good performance is formed: the connection memory unit retains low-level information to realize information sharing, feature fusion adaptively acquires the information obtained by the different convolutional layers and effectively reduces the number of feature maps, and residual learning further improves the reconstruction performance.
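Step 3 above applies a 3 × 3 convolution kernel inside each block. As a minimal sketch of that operation (single channel, stride 1, zero padding, written as cross-correlation in the usual deep-learning convention; all of these choices are assumptions, since the patent does not fix them), the computation can be spelled out as:

```python
import numpy as np

def conv3x3_same(img, kernel):
    """Naive single-channel 3x3 'same' convolution (deep-learning convention,
    i.e. cross-correlation), stride 1, zero padding."""
    assert kernel.shape == (3, 3)
    padded = np.pad(img, 1)                  # zero-pad one pixel on every side
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
assert np.allclose(conv3x3_same(img, identity), img)  # identity kernel is a no-op
```

Real implementations would of course use a vectorized library convolution over many channels; the loop form is only meant to make the 3 × 3 windowing explicit.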
The measured value y is input into the dense residual block, the operations of steps 2, 3 and 4 are performed in the dense residual block, the result is output into the next dense residual block, and the operations are repeated to obtain the final reconstructed image; the image reconstruction quality is markedly improved.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (5)

1. A dense residual network image compressed sensing reconstruction method based on feature fusion is characterized in that a dense residual network (RDNCS) based on a compressed sensing algorithm is provided by applying a plurality of dense residual blocks; each dense residual block (RDB) comprises a connection memory unit, a local feature fusion unit and a local residual learning unit, and the method comprises the following steps:
step 1, inputting an electric power system image shot by an unmanned aerial vehicle, obtaining a measured value y after CS measurement, and inputting the measured value y into a dense residual block;
step 2, taking the image measured value y as input, performing the convolution and pooling operations in the dense residual block, then applying the connection memory unit, which transmits the output of each layer to the next layer and also transmits the information processed by each layer to every layer in the residual block;
step 3, the local feature fusion unit fuses the output data of the connection memory unit in step 2 with the outputs generated by all the convolutional layers; a convolution operation with a 3 × 3 convolution kernel is then performed to obtain the output of the convolutional layer;
step 4, in the dense residual block, the forward-propagated input and the local feature fusion output of step 3 are combined through a skip connection to form the output; this operation is called residual learning;
step 5, inputting an electric power system image shot by the unmanned aerial vehicle, obtaining the measured value y after CS measurement, inputting the measured value y into the dense residual block, performing the operations of steps 2, 3 and 4 in the dense residual block, outputting the result into the next dense residual block, and repeating the operations to obtain the final reconstructed image.
2. The dense residual network image compressed sensing reconstruction method based on feature fusion as claimed in claim 1, wherein the specific steps for processing the power system image measurement by the connection memory unit are as follows: after the convolution and pooling operations are performed on the power system image measurement, the memory unit retains the output of each layer, transmitting it not only to the next layer but also to every layer inside the residual block; the function of the connection memory unit is to transmit the processed information of the current layer to every layer in the dense residual block, making full use of the hierarchical information of each layer and realizing information sharing in the forward propagation between layers, so that later convolutional layers in the network structure acquire both the forward-propagated information and more low-level information; the propagation process can be expressed as:
F_{d,c} = σ(W_{d,c}[F_{d,0}, F_{d,1}, ..., F_{d,c-1}])

where σ is the ReLU activation function, d denotes the d-th dense residual block, c denotes the c-th layer within the dense residual block, and W_{d,c} is the weight coefficient of the c-th convolutional layer; when c = 1, F_{d,0} is the output of the previous dense residual block; after the convolution and pooling operations on the power system image measurement, the weight coefficient of each convolutional layer in the dense residual block is multiplied with the retained outputs of the preceding layers, giving the output data of the connection memory unit.
3. The dense residual network image compressed sensing reconstruction method based on feature fusion as claimed in claim 1, wherein the specific fusion method of the local feature fusion unit is as follows: the local feature fusion unit fuses the output data F_{d,c} of the connection memory unit in step 2 with the outputs generated by all the convolutional layers; the output of each convolutional layer fully represents the feature information of the level that layer stands for, and fusing this information with the output data of the connection memory unit adaptively acquires the accurate information required for reconstruction; the output of each convolutional layer in the dense residual block is fed into every following convolutional layer, so feature fusion can effectively reduce the number of feature maps; the output of each dense residual block is adaptively controlled using a 1 × 1 convolution; the process can be expressed as:
F_{d,LF} = H_{d}^{LFF}([F_{d,0}, F_{d,1}, ..., F_{d,C}])

where H_{d}^{LFF} denotes the feature fusion layer of 1 × 1 convolution within the d-th residual block, and F_{d,LF} denotes the output of local feature fusion.
4. The dense residual network image compressed sensing reconstruction method based on feature fusion as claimed in claim 1, wherein the specific method by which the local residual learning unit performs the skip connection is as follows: in the dense residual block, the skip connection commonly used in conventional residual networks is adopted, so the output of the dense residual block is related both to the local feature fusion output F_{d,LF} and to the forward-propagated input F_{d-1}; including the forward-propagated input F_{d-1} through residual learning can significantly improve the flow of information processed by the convolutional neural network, and the process can be expressed as:

F_d = F_{d-1} + F_{d,LF}

where F_d is the output of the dense residual block; residual learning has a very good effect on dense residual blocks.
5. The dense residual network image compressed sensing reconstruction method based on feature fusion as claimed in claim 1, wherein the measured value y is input into the dense residual block, the operations of steps 2, 3 and 4 are performed in the dense residual block, the result is output into the next dense residual block, and these operations are repeated to obtain the final reconstructed image.
CN202011034288.6A (filed 2020-09-27, priority 2020-09-27): Dense residual error network image compressed sensing reconstruction method based on feature fusion. Status: Pending (CN112150566A).

Priority Applications (1)

Application Number: CN202011034288.6A. Priority/Filing Date: 2020-09-27. Title: Dense residual error network image compressed sensing reconstruction method based on feature fusion.


Publications (1)

Publication Number: CN112150566A. Publication Date: 2020-12-29.

Family

ID=73895445

Family Applications (1)

Application Number: CN202011034288.6A (CN112150566A, Pending). Priority/Filing Date: 2020-09-27. Title: Dense residual error network image compressed sensing reconstruction method based on feature fusion.

Country Status (1)

Country Link
CN (1) CN112150566A (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064398A (en) * 2018-07-14 2018-12-21 深圳市唯特视科技有限公司 A kind of image super-resolution implementation method based on residual error dense network


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991472A (en) * 2021-03-19 2021-06-18 华南理工大学 Image compressed sensing reconstruction method based on residual dense threshold network
CN112991472B (en) * 2021-03-19 2023-12-19 华南理工大学 Image compressed sensing reconstruction method based on residual error dense threshold network

Similar Documents

Publication Publication Date Title
CN109509152B (en) Image super-resolution reconstruction method for generating countermeasure network based on feature fusion
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN110659727A (en) Sketch-based image generation method
CN111899168B (en) Remote sensing image super-resolution reconstruction method and system based on feature enhancement
CN111582483A (en) Unsupervised learning optical flow estimation method based on space and channel combined attention mechanism
CN112699844B (en) Image super-resolution method based on multi-scale residual hierarchy close-coupled network
CN113554032B (en) Remote sensing image segmentation method based on multi-path parallel network of high perception
CN112465718A (en) Two-stage image restoration method based on generation of countermeasure network
CN113096239B (en) Three-dimensional point cloud reconstruction method based on deep learning
CN112116601A (en) Compressive sensing sampling reconstruction method and system based on linear sampling network and generation countermeasure residual error network
CN112884668A (en) Lightweight low-light image enhancement method based on multiple scales
CN115170915A (en) Infrared and visible light image fusion method based on end-to-end attention network
CN110930306A (en) Depth map super-resolution reconstruction network construction method based on non-local perception
CN111242999B (en) Parallax estimation optimization method based on up-sampling and accurate re-matching
CN113344869A (en) Driving environment real-time stereo matching method and device based on candidate parallax
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
CN112288690A (en) Satellite image dense matching method fusing multi-scale and multi-level features
CN112150566A (en) Dense residual error network image compressed sensing reconstruction method based on feature fusion
CN115984949B (en) Low-quality face image recognition method and equipment with attention mechanism
CN116823610A (en) Deep learning-based underwater image super-resolution generation method and system
CN114708353B (en) Image reconstruction method and device, electronic equipment and storage medium
CN114359419B (en) Attention mechanism-based image compressed sensing reconstruction method
CN115731280A (en) Self-supervision monocular depth estimation method based on Swin-Transformer and CNN parallel network
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion
CN117853340B (en) Remote sensing video super-resolution reconstruction method based on unidirectional convolution network and degradation modeling

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-12-29