CN112669216A - Super-resolution reconstruction network of parallel cavity new structure based on federal learning - Google Patents


Info

Publication number
CN112669216A
Authority
CN
China
Prior art keywords
output
residual
dense
receptive field
prdb
Prior art date
Legal status
Granted
Application number
CN202110009979.9A
Other languages
Chinese (zh)
Other versions
CN112669216B (en)
Inventor
贾智焱
马丽红
韦岗
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202110009979.9A priority Critical patent/CN112669216B/en
Publication of CN112669216A publication Critical patent/CN112669216A/en
Application granted granted Critical
Publication of CN112669216B publication Critical patent/CN112669216B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a super-resolution reconstruction network with a new parallel cavity structure based on federal learning, which comprises a plurality of local dense connection residual groups (LDCRGs); the local dense connection residual groups are connected in series, and the outputs of all LDCRGs are fused to provide information for up-sampling reconstruction. Each local dense connection residual group consists of receptive-field-matched residual dense blocks (PRDBs); each receptive-field-matched residual dense block comprises a receptive field matching module (PDM) and a residual dense block (RDB), with the receptive field matching module added between the signals at the two ends of the skip connection of the residual dense block RDB. According to the invention, the PDM matches the receptive fields at the two ends of the skip connection, and the LDCRG selectively fuses and learns the outputs of the residual dense blocks through its dense skip connections, so that the performance of the SR network is improved.

Description

Super-resolution reconstruction network of parallel cavity new structure based on federal learning
Technical Field
The invention relates to the field of convolutional neural network structures, in particular to a super-resolution reconstruction network of a new parallel cavity structure based on federal learning.
Background
The Single-frame Image Super-Resolution Reconstruction (SISR) technique has wide application in remote sensing and telemetry, sky-signal and medical information imaging, security monitoring and other fields. SISR based on deep convolutional networks relies on deep and wide networks to produce as many feature atoms as possible, and on the identity mapping of residual networks so that deeper networks can converge. Meanwhile, mechanisms such as dense connection, memory gates and attention, together with training and accurate fitting on very large sample sets, have remarkably improved the reconstruction quality of SR images.
The first image SR convolutional network, SRCNN (C. DONG, C. C. LOY, K. HE, X. TANG. Learning a deep convolutional network for image super-resolution [C]// European Conf. on Computer Vision. Switzerland: Springer-Cham, 2014: 184-189.), is characterized by large samples, strong learning and high computational power; the 20-layer skip-connection residual mechanism of the very deep network VDSR (J. KIM, J. K. LEE, K. MU LEE. Accurate image super-resolution using very deep convolutional networks [C]// IEEE Conf. on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 1646-1654.) is the first guarantee of network convergence. The recursive network DRCN (J. KIM, J. K. LEE, K. MU LEE. Deeply-recursive convolutional network for image super-resolution [C]// IEEE Conf. on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 1637-1645.) realizes multiple recursions without collapse thanks to a supervision mechanism in the inference stage, so that reconstruction performance improves as depth and the number of recursions increase. On the other hand, the enhanced SR residual network EDSR (B. LIM, S. SON, H. KIM. Enhanced deep residual networks for single image super-resolution [C]// IEEE Conf. on Computer Vision and Pattern Recognition Workshops. Honolulu: IEEE, 2017: 1132.) removes the batch normalization layer (BN) of the residual dense network SRRes (T. TONG, G. LI, X. LIU, Q. GAO. Image super-resolution using dense skip connections [C]// IEEE Conf. on Computer Vision. Venice: IEEE, 2017: 4809.), because the BN layer re-normalizes the data distribution, which helps mitigate gradient problems but distorts the original data that SR feature reconstruction depends on. SR networks containing residual connections can be truly deepened: the identity mapping they provide is a prior constraint on network learning that breaks the symmetry of the network matrix, so deep hidden units still respond differently to different inputs and network degradation is reduced; and since the residual is the difference between the input and output of a basic unit, it directly indicates what remains to be learned and lowers the learning difficulty of a residual network. However, even for residual basic units of the same depth and the same width with the fewest parameters, in sub-modules at different positions of the network the receptive fields (PF) of the input and output signals at the two ends of the residual skip connection are mismatched; their difference therefore mixes receptive-field changes with changes in other image characteristics, which prevents the sub-module from extracting feature information efficiently and controllably.
Solving the PF matching problem is even more necessary in dense networks based on residual structures. Whereas from VDSR to EDSR only the output features of the last residual block are used in reconstruction, the residual dense network RDN (Y. ZHANG, Y. TIAN, Y. KONG, B. ZHONG, Y. FU. Residual dense network for image super-resolution [C]// IEEE Conf. on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 2472-2481.) uses the output of every RDB (Residual Dense Block) in its upsampling reconstruction; dense connections are used between the convolutional layers inside an RDB, and the output of each layer is passed directly to all subsequent layers. The cascading residual network CARN (N. AHN, B. KANG, K.-A. SOHN. Fast, accurate, and lightweight super-resolution with cascading residual network [C]// Proceedings of the European Conference on Computer Vision. Munich: Springer-Cham, 2018: 256-272.) adopts dense connections between sub-modules, but such a densely connected structure has the drawback that the number of convolutional layers is too large, and because dense connections are used between every pair of layers, it is difficult to match the receptive field of any convolutional layer inside a sub-module to the unit output.
RCAN (Y. ZHANG, K. P. LI, K. LI, L. C. WANG, et al. Image Super-Resolution Using Very Deep Residual Channel Attention Networks [C]// Proceedings of the European Conf. on Computer Vision. Munich: Springer-Cham, 2018: 1-16.) introduces the RIR (Residual In Residual) structure: through long skip connections, the abundant low-frequency information in the LR image is bypassed, so that the main network focuses more on reconstructing high-frequency information. However, this skip-connection structure does not solve the PF matching problem of residual learning inside the sub-module.
Disclosure of Invention
Aiming at the difficulty of matching the receptive fields of the input and output layers caused by the variable number of layers spanned by residual learning connections, and at efficiently utilizing the output features of the sub-modules, a super-resolution reconstruction network with a new parallel cavity structure based on federal learning is provided, which comprises a receptive field matching module PDM and a local dense connection residual group LDCRG.
The invention is realized by at least one of the following technical schemes.
A super-resolution reconstruction network with a new parallel cavity structure based on federal learning comprises a plurality of local dense connection residual groups (LDCRGs); the local dense connection residual groups are connected in series, and the outputs of all LDCRGs are fused to provide information for up-sampling reconstruction; each local dense connection residual group consists of receptive-field-matched residual dense blocks (PRDBs); each receptive-field-matched residual dense block (PRDB) comprises a receptive field matching module (PDM) and a residual dense block (RDB), the receptive field matching module (PDM) being added between the signals at the two ends of the skip connection of the residual dense block RDB;
the receptive field matching module comprises a plurality of cavity convolution kernels with different expansion rates;
the outputs of the cavity convolution kernels are continuously fused through longitudinal federated learning and iteration, so that the output of the receptive field matching module is improved;
the receptive-field-matched residual dense blocks (PRDBs) are linked through local dense connections and adopt local feature fusion learning, so that adaptive selection of features is realized;
the receptive-field-matched residual dense block (PRDB) performs residual learning between its input and output through local residual learning, which makes the input features of the constructed network sparser and network training easier;
the local dense connection means that PRDBs whose output images differ from the high-resolution image to a similar degree are selected to form a local dense connection residual group LDCRG, and the outputs of all PRDBs in the group are fed forward within the LDCRG and used as input;
the local feature fusion of the local dense connection residual group LDCRG performs feature fusion learning on the outputs of the retained PRDBs, thereby realizing adaptive selection of features.
Preferably, the receptive field matching module PDM is configured to implement receptive field matching of the residual dense block RDB between any number of layers, and a formula of the size of the receptive field between input and output is:
RF_{i+1} = RF_i + η_DR × (S - 1)   (1)
where RF_i is the input receptive field size of the receptive field matching module, RF_{i+1} is the output receptive field size of the receptive field matching module, η_DR is the dilation rate of the cavity convolution kernel, and S is the size of the cavity convolution kernel.
Preferably, the longitudinal federated learning is to align the output features of each cavity convolution kernel in the receptive field matching module, perform fusion training on the aligned features until the output of the receptive field matching module covers all pixel points of the input image, end the training, and improve the output of the receptive field matching module through a decentralized fusion mode and continuous iterative learning.
Preferably, the input and output mathematical expression of the receptive field matching module is as follows:
x_PDM = H_FL(x_1, x_2, ···, x_m)   (2)
where x_1, x_2, ···, x_m are the outputs obtained from the input x_in of the receptive field matching module through the cavity convolution kernels with different dilation rates, H_FL denotes longitudinal federated learning, and x_PDM is the output of the receptive field matching module.
Preferably, the input-output formula of the receptive-field-matched residual dense block PRDB is:
F_i = F_PDM + F_{i,LFF} = H_PDM(F_{i-1}) + σ(W[F_{i,1}, F_{i,2}, ···, F_{i,N}])   (3)
where F_{i-1} is the input of the PRDB, F_i is the output of the PRDB, F_{i,N} is the output of the N-th convolutional layer in the PRDB, F_{i,LFF} is the local feature fusion output, F_PDM is the output of the receptive field matching module, a PRDB has N convolutional layers in total, σ is the ReLU activation function, W is the weight parameter, and H_PDM is the function computed by the receptive field matching module.
Preferably, for the local dense connection residual group, the grouping condition is the degree of similarity of the differences between the output images of the receptive-field-matched residual dense blocks and the high-resolution image.
Preferably, each local dense connection residual group (LDCRG) consists of 4 receptive-field-matched residual dense blocks (PRDBs).
Preferably, the input-output formula of the d-th local dense connection residual group (LDCRG) is:
F_d = F_{d-1} + F_{d,LFF} = F_{d-1} + H_LFF([F_{d-1}, F_{d,1}, F_{d,2}, F_{d,3}, F_{d,4}])   (4)
where F_{d-1} is the input of the d-th local dense connection residual group, F_d is the output of the d-th local dense connection residual group, F_{d,1}, F_{d,2}, F_{d,3}, F_{d,4} are the outputs of the 4 PRDBs in the local dense connection residual group, F_{d,LFF} is the local feature fusion output, and H_LFF is the local feature fusion function.
Preferably, feature fusion is performed on the output of the local dense connection residual group, and the output of the feature fusion is:
F_DF = H_GFF([F_{-1}, F_0, F_1, ···, F_D])   (5)
where [F_{-1}, F_0, F_1, …, F_D] denotes the concatenation of the original shallow features F_{-1}, F_0 and the outputs of the D local dense connection residual groups (LDCRGs), and H_GFF is a convolutional layer with a 1 × 1 kernel.
Preferably, shallow features of the image are first extracted by a Conv convolutional layer and then passed through a plurality of local dense connection residual groups.
Compared with the prior art, the invention has the following beneficial effects:
1. Compared with the traditional hybrid hole convolution structure used in the semantic segmentation field, the hybrid hole convolution structure of the receptive field matching module removes the batch normalization layer (BN) from the convolution unit so that it suits the image super-resolution reconstruction field, keeping only a convolutional layer and a ReLU activation layer; meanwhile, the outputs of the different hole convolutions are fused by longitudinal federated learning as the feature fusion mode.
2. Compared with the traditional residual dense block, the invention notices the mismatch of receptive field sizes between the RDB input image and the RDB output image, adds the receptive field matching module PDM on the residual skip connection, and thereby solves the receptive field mismatch between the input and output of sub-modules spaced by any number of layers in the SR network, improving the performance of the SR network.
3. Different from the dense connection of sub-modules in traditional SR networks, the invention groups sub-modules with a high degree of similarity according to the similarity of the differences between the images output by the sub-modules and the high-resolution image, and continuously passes on the output feature information of the sub-modules within a group, so that feature information is utilized and fused efficiently while the deep network is prevented from over-learning redundant feature information. The sub-modules employed by the present invention are residual dense blocks, but the scheme is equally applicable to other types of basic units.
Drawings
FIG. 1 is a schematic structural diagram of a super-resolution reconstruction network of a new structure of parallel holes based on Federal learning;
FIG. 2 is a block diagram of the receptive field matching module of the present invention;
FIG. 3 is a block diagram of the receptive field matching residual dense block of the present invention;
FIG. 4 is a block diagram of a local dense concatenation residual group of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the examples and drawings, but the embodiments of the present invention are not limited thereto.
Fig. 1 shows the structure of the super-resolution reconstruction network FPDN with a new parallel cavity structure based on federal learning according to the present invention; the FPDN comprises receptive field matching modules PDM and local dense connection residual groups LDCRG. The LR image passes through a Conv convolutional layer for shallow feature extraction and then through the LDCRGs; the LDCRG outputs F_1, F_2, …, F_D pass through a fusion layer Concat, and after residual learning the reconstruction layer Upscale reconstructs the HR image. I_LF is the input of the FPDN, I_HF is the output of the FPDN, F_GF is the global fusion feature output of the LDCRGs in the FPDN, and F_DF is the input feature of the up-sampling reconstruction layer in the FPDN.
FIG. 2 shows the structure of the receptive field matching module PDM: the input x_in of the PDM passes in parallel through a plurality of cavity convolution kernels DCK with different dilation rates, and the resulting outputs x_1, x_2, …, x_m are fused through federated learning into the output x_PDM.
FIG. 3 shows the structure of the receptive field matching residual dense block PRDB, in which residual learning is carried out between the feature extraction output F_{i,LFF} and the output F_PDM of the PDM.
FIG. 4 shows the structure of a local dense connection residual group LDCRG: the d-th LDCRG performs residual learning between its input F_{d-1} and the feature extraction output F_{d,LFF} to obtain F_d, which serves as the input of the (d+1)-th LDCRG.
A super-resolution reconstruction network of a new structure of a parallel cavity based on federal learning comprises a receptive field matching module PDM and a local dense connection residual error group LDCRG;
the PDM consists of a plurality of hole convolution kernels with different expansion rates;
the hole convolution kernel in PDM outputs a plurality of hole convolution kernels in a longitudinal federated learning and iteration mode
Performing continuous fusion so as to improve the output of the PDM;
PDM is added between two end signals of residual learning jump connection to match the receptive field of the two end signals.
The PDM can realize receptive field matching for SR network sub-modules spaced by any number of layers, and the formula for the receptive field size between input and output is:
RF_{i+1} = RF_i + η_DR × (S - 1)   (1)
where RF_i is the input receptive field size of the PDM, RF_{i+1} is the output receptive field size of the PDM, η_DR is the dilation rate of the hole convolution kernel, and S is the size of the hole convolution kernel.
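As a quick numeric illustration of formula (1), the following sketch computes how a single dilated (hole) convolution enlarges the receptive field; the 3 × 3 kernel and the dilation rates 1, 2 and 5 are assumed values for illustration only, not parameters fixed by this description.

```python
def grow_receptive_field(rf_in, dilation, kernel_size=3):
    """Receptive field after one dilated convolution, per formula (1):
    RF_{i+1} = RF_i + eta_DR * (S - 1)."""
    return rf_in + dilation * (kernel_size - 1)

# Hypothetical parallel branches of a PDM acting on a single input pixel (RF = 1).
for eta_dr in (1, 2, 5):
    print(eta_dr, "->", grow_receptive_field(1, eta_dr))
# 1 -> 3, 2 -> 5, 5 -> 11: a larger dilation rate yields a larger output receptive
# field, which is what lets the PDM match the receptive field of a skip connection
# spanning several convolutional layers.
```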
For longitudinal federated learning in the PDM, the output features of each cavity convolution kernel in the PDM are aligned and the aligned features undergo fusion training; training ends once the output of the PDM covers all pixel points of the input image. This training mode keeps the output features of each cavity convolution kernel independent, so that the output of each cavity convolution kernel is not disturbed by feature interaction during fusion training; the number of nodes optimized in the PDM is larger than the number of nodes of any single cavity convolution kernel, and the output of the PDM is improved through continuous iterative learning in a decentralized fusion mode. The input-output expression of the PDM is:
x_PDM = H_FL(x_1, x_2, ···, x_m)   (2)
where x_1, x_2, ···, x_m are the outputs obtained from the PDM input x_in through the hole convolution kernels with different dilation rates, H_FL denotes longitudinal federated learning, and x_PDM is the output of the PDM.
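For concreteness, the following PyTorch sketch shows one possible reading of the PDM: parallel dilated convolutions with different dilation rates applied to the same input, their outputs fused into x_PDM. The dilation rates (1, 2, 5), the channel count, and the use of concatenation followed by a 1 × 1 convolution as the fusion step are illustrative assumptions; the text describes the fusion H_FL as longitudinal federated learning rather than as any fixed layer.

```python
import torch
import torch.nn as nn

class PDM(nn.Module):
    """Sketch of the receptive field matching module (PDM): parallel dilated
    ("cavity") convolutions whose outputs are fused (fusion approximated here
    by concatenation + 1x1 convolution)."""

    def __init__(self, channels=64, dilations=(1, 2, 5)):
        super().__init__()
        # One 3x3 dilated convolution per dilation rate; padding preserves size.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.relu = nn.ReLU(inplace=True)   # no BN, per the description
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x_in):
        outs = [self.relu(branch(x_in)) for branch in self.branches]  # x_1..x_m
        return self.fuse(torch.cat(outs, dim=1))                      # x_PDM
```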
The PDM is added between the signals at the two ends of the skip connection of the residual dense block RDB to form the receptive field matching residual dense block PRDB, so as to realize receptive field matching between the input and output of the RDB. The input-output formula of the PRDB is:
F_i = F_PDM + F_{i,LFF} = H_PDM(F_{i-1}) + σ(W[F_{i,1}, F_{i,2}, ···, F_{i,N}])   (3)
where F_{i-1} is the input of the PRDB, F_i is the output of the PRDB, F_{i,n} is the output of the n-th convolutional layer in the PRDB, F_{i,LFF} is the local feature fusion output, F_PDM is the output of the PDM, a PRDB has N convolutional layers in total, σ is the ReLU activation function, W is the weight parameter, and H_PDM is the function computed by the PDM.
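A corresponding sketch of the PRDB of formula (3) is given below. It reuses the PDM class from the previous sketch; the number of dense layers (N = 4) and the growth rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PRDB(nn.Module):
    """Sketch of the receptive-field-matched residual dense block, formula (3):
    F_i = H_PDM(F_{i-1}) + sigma(W[F_{i,1}, ..., F_{i,N}])."""

    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        # Densely connected 3x3 conv layers: layer n sees the block input plus
        # all previous layer outputs.
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + n * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for n in range(num_layers)
        )
        # Local feature fusion W: 1x1 convolution over the concatenated outputs.
        self.lff = nn.Conv2d(num_layers * growth, channels, 1)
        # Receptive field matching on the skip path instead of an identity map.
        self.pdm = PDM(channels)

    def forward(self, f_prev):                                     # F_{i-1}
        feats = [f_prev]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))           # F_{i,1..N}
        f_lff = torch.relu(self.lff(torch.cat(feats[1:], dim=1)))  # sigma(W[...])
        return self.pdm(f_prev) + f_lff                            # F_i
```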
The LDCRG comprises local dense connection, local feature fusion and local residual learning.
The local dense connection selects PRDBs whose output images differ from the high-resolution image to a similar degree and forms them into one local dense connection residual group LDCRG;
within the LDCRG, the outputs of all the selected receptive field matching residual dense blocks PRDB are passed on inside the group and used as input;
the local feature fusion of the LDCRG performs feature fusion learning on the outputs of the retained PRDBs, so as to realize adaptive selection of features;
the local residual learning of the LDCRG performs residual learning between the input and output of the LDCRG and constructs a network with sparser input features, so that network training is easier;
for the LDCRG, PRDBs whose output-image differences from the high-resolution image are similar are formed into one LDCRG; analysis finds that the output differences of the PRDBs are similar in groups of 4, so every 4 PRDBs form 1 LDCRG, and the FPDN contains 8 LDCRGs in total.
The local dense connection and the local feature fusion of the LDCRG are used for fully utilizing the output of each PRDB in the LDCRG, and the local residual learning is used for constructing a network with more sparse input features so as to facilitate network training. The input and output formula of the d-th LDCRG is as follows:
F_d = F_{d-1} + F_{d,LFF} = F_{d-1} + H_LFF([F_{d-1}, F_{d,1}, F_{d,2}, F_{d,3}, F_{d,4}])   (4)
where F_{d-1} is the input of the d-th LDCRG, F_d is the output of the d-th LDCRG, F_{d,1}, F_{d,2}, F_{d,3}, F_{d,4} are the outputs of the 4 PRDBs in the LDCRG, F_{d,LFF} is the local feature fusion output, and H_LFF is the local feature fusion function.
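The LDCRG of formula (4) can then be sketched as four chained PRDBs whose outputs, together with the group input, are concatenated, fused by a 1 × 1 convolution (H_LFF) and added back to the group input by local residual learning. The sketch reuses the PRDB class above; any intra-group wiring beyond what formula (4) states is an assumption.

```python
import torch
import torch.nn as nn

class LDCRG(nn.Module):
    """Sketch of a local dense connection residual group, formula (4):
    F_d = F_{d-1} + H_LFF([F_{d-1}, F_{d,1}, F_{d,2}, F_{d,3}, F_{d,4}])."""

    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(PRDB(channels) for _ in range(num_blocks))
        # Local feature fusion H_LFF over the group input and all block outputs.
        self.lff = nn.Conv2d(channels * (num_blocks + 1), channels, 1)

    def forward(self, f_prev):                       # F_{d-1}
        feats, x = [f_prev], f_prev
        for block in self.blocks:
            x = block(x)                             # F_{d,1} .. F_{d,4}
            feats.append(x)
        f_lff = self.lff(torch.cat(feats, dim=1))    # F_{d,LFF}
        return f_prev + f_lff                        # F_d
```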
For the super-resolution reconstruction network FPDN with the new structure of the parallel cavity based on the federal learning, the performance is improved by performing feature fusion on the output of the LDCRG, and the output of the feature fusion is as follows:
F_DF = H_GFF([F_{-1}, F_0, F_1, ···, F_D])   (5)
where [F_{-1}, F_0, F_1, …, F_D] denotes the concatenation of the original shallow features F_{-1}, F_0 and the outputs of the D local dense connection residual groups, and H_GFF is a convolutional layer with a 1 × 1 kernel.
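Putting the pieces together, a sketch of the overall FPDN of Fig. 1 and formula (5) follows. The two shallow convolutions producing F_{-1} and F_0, the PixelShuffle-based Upscale layer, the residual connection from F_0 before upsampling, and all channel counts are illustrative assumptions rather than details fixed by the text; the LDCRG class is the one sketched above.

```python
import torch
import torch.nn as nn

class FPDN(nn.Module):
    """Sketch of the overall FPDN: shallow feature extraction, D LDCRGs,
    global feature fusion H_GFF (formula (5)), and up-sampling reconstruction."""

    def __init__(self, channels=64, num_groups=8, scale=2, colors=3):
        super().__init__()
        self.shallow1 = nn.Conv2d(colors, channels, 3, padding=1)    # -> F_{-1}
        self.shallow2 = nn.Conv2d(channels, channels, 3, padding=1)  # -> F_0
        self.groups = nn.ModuleList(LDCRG(channels) for _ in range(num_groups))
        self.gff = nn.Conv2d(channels * (num_groups + 2), channels, 1)  # H_GFF
        self.upscale = nn.Sequential(                 # reconstruction layer Upscale
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, colors, 3, padding=1),
        )

    def forward(self, i_lr):
        f_m1 = self.shallow1(i_lr)                    # F_{-1}
        f_0 = self.shallow2(f_m1)                     # F_0
        feats, x = [f_m1, f_0], f_0
        for group in self.groups:
            x = group(x)                              # F_1 .. F_D
            feats.append(x)
        f_df = self.gff(torch.cat(feats, dim=1))      # F_DF, formula (5)
        return self.upscale(f_df + f_0)               # residual learning, then Upscale
```

Under these assumptions, FPDN(scale=2)(torch.rand(1, 3, 32, 32)) returns a tensor of shape (1, 3, 64, 64), i.e. a ×2 reconstruction of the low-resolution input.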
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A super-resolution reconstruction network with a new parallel cavity structure based on federal learning, characterized by comprising a plurality of local dense connection residual groups (LDCRGs), wherein the local dense connection residual groups are connected in series and the outputs of all LDCRGs are fused to provide information for up-sampling reconstruction; each local dense connection residual group consists of receptive-field-matched residual dense blocks (PRDBs); each receptive-field-matched residual dense block (PRDB) comprises a receptive field matching module (PDM) and a residual dense block (RDB), the receptive field matching module (PDM) being added between the signals at the two ends of the skip connection of the residual dense block RDB;
the receptive field matching module comprises a plurality of cavity convolution kernels with different expansion rates;
the outputs of the cavity convolution kernels are continuously fused through longitudinal federated learning and iteration, so that the output of the receptive field matching module is improved;
the receptive-field-matched residual dense blocks (PRDBs) are linked through local dense connections and adopt local feature fusion learning, so that adaptive selection of features is realized;
the receptive-field-matched residual dense block (PRDB) performs residual learning between its input and output through local residual learning, which makes the input features of the constructed network sparser and network training easier;
the local dense connection means that PRDBs whose output images differ from the high-resolution image to a similar degree are selected to form a local dense connection residual group LDCRG, and the outputs of all PRDBs in the group are fed forward within the LDCRG and used as input;
the local feature fusion of the LDCRG performs feature fusion learning on the outputs of the retained PRDBs, thereby realizing adaptive selection of features.
2. The super-resolution reconstruction network with a new parallel cavity structure based on federal learning according to claim 1, wherein the receptive field matching module PDM is configured to implement receptive field matching of the residual dense block RDB across any number of layers, and the formula for the receptive field size between input and output is:
RF_{i+1} = RF_i + η_DR × (S - 1)   (1)
where RF_i is the input receptive field size of the receptive field matching module, RF_{i+1} is the output receptive field size of the receptive field matching module, η_DR is the dilation rate of the hole convolution kernel, and S is the size of the hole convolution kernel.
3. The super-resolution reconstruction network of a new structure of parallel cavities based on federated learning of claim 2, characterized in that, the longitudinal federated learning aligns the output features of each cavity convolution kernel in the receptive field matching module, performs fusion training on the aligned features until the output of the receptive field matching module covers all pixel points of the input image, ends the training, and improves the output of the receptive field matching module by continuous iterative learning in a decentralized fusion mode.
4. The super-resolution reconstruction network of a new structure of parallel holes based on federated learning according to claim 3, characterized in that the input-output mathematical expression of the receptive field matching module is:
x_PDM = H_FL(x_1, x_2, …, x_m)   (2)
where x_1, x_2, …, x_m are the outputs obtained from the input x_in of the receptive field matching module through the hole convolution kernels with different dilation rates, H_FL denotes longitudinal federated learning, and x_PDM is the output of the receptive field matching module.
5. The super-resolution reconstruction network of a new structure of parallel holes based on federated learning of claim 4 is characterized in that, the input-output formula of the residual dense block PRDB of the receptive field matching is:
F_i = F_PDM + F_{i,LFF} = H_PDM(F_{i-1}) + σ(W[F_{i,1}, F_{i,2}, …, F_{i,N}])   (3)
where F_{i-1} is the input of the PRDB, F_i is the output of the PRDB, F_{i,N} is the output of the N-th convolutional layer in the PRDB, F_{i,LFF} is the local feature fusion output, F_PDM is the output of the receptive field matching module, a PRDB has N convolutional layers in total, σ is the ReLU activation function, W is the weight parameter, and H_PDM is the function computed by the receptive field matching module.
6. The super-resolution reconstruction network with a new parallel cavity structure based on federal learning, wherein for the local dense connection residual group, the grouping condition is the degree of similarity of the differences between the output images of the receptive-field-matched residual dense blocks and the high-resolution image.
7. The super-resolution reconstruction network with a new parallel cavity structure based on federal learning according to claim 6, wherein each local dense connection residual group (LDCRG) consists of 4 receptive-field-matched residual dense blocks (PRDBs).
8. The super-resolution reconstruction network of a new structure of parallel holes based on federated learning of claim 7, wherein the input-output formula of the d-th Local Dense Connection Residual Group (LDCRG) is:
F_d = F_{d-1} + F_{d,LFF} = F_{d-1} + H_LFF([F_{d-1}, F_{d,1}, F_{d,2}, F_{d,3}, F_{d,4}])   (4)
where F_{d-1} is the input of the d-th local dense connection residual group, F_d is the output of the d-th local dense connection residual group, F_{d,1}, F_{d,2}, F_{d,3}, F_{d,4} are the outputs of the 4 PRDBs in the local dense connection residual group, F_{d,LFF} is the local feature fusion output, and H_LFF is the local feature fusion function.
9. The super-resolution reconstruction network of the new structure of the parallel cavity based on the federal learning of claim 8, wherein the feature fusion is performed on the output of the local dense connection residual group, and the output of the feature fusion is:
F_DF = H_GFF([F_{-1}, F_0, F_1, …, F_D])   (5)
where [F_{-1}, F_0, F_1, …, F_D] denotes the concatenation of the original shallow features F_{-1}, F_0 and the outputs of the D local dense connection residual groups (LDCRGs), and H_GFF is a convolutional layer with a 1 × 1 kernel.
10. The super-resolution reconstruction network with a new parallel cavity structure based on federal learning according to claim 9, wherein shallow features of the image are first extracted by a Conv convolutional layer and then passed through a plurality of local dense connection residual groups.
CN202110009979.9A 2021-01-05 2021-01-05 Super-resolution reconstruction network of parallel cavity new structure based on federal learning Expired - Fee Related CN112669216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110009979.9A CN112669216B (en) 2021-01-05 2021-01-05 Super-resolution reconstruction network of parallel cavity new structure based on federal learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110009979.9A CN112669216B (en) 2021-01-05 2021-01-05 Super-resolution reconstruction network of parallel cavity new structure based on federal learning

Publications (2)

Publication Number Publication Date
CN112669216A true CN112669216A (en) 2021-04-16
CN112669216B CN112669216B (en) 2022-04-22

Family

ID=75413063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110009979.9A Expired - Fee Related CN112669216B (en) 2021-01-05 2021-01-05 Super-resolution reconstruction network of parallel cavity new structure based on federal learning

Country Status (1)

Country Link
CN (1) CN112669216B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180232857A1 (en) * 2015-11-04 2018-08-16 Peking University Shenzhen Graduate School Method and device for super-resolution image reconstruction based on dictionary matching
US20180268284A1 (en) * 2017-03-15 2018-09-20 Samsung Electronics Co., Ltd. System and method for designing efficient super resolution deep convolutional neural networks by cascade network training, cascade network trimming, and dilated convolutions
US20200118423A1 (en) * 2017-04-05 2020-04-16 Carnegie Mellon University Deep Learning Methods For Estimating Density and/or Flow of Objects, and Related Methods and Software
US20200167930A1 (en) * 2017-06-16 2020-05-28 Ucl Business Ltd A System and Computer-Implemented Method for Segmenting an Image
CN109711529A (en) * 2018-11-13 2019-05-03 中山大学 A kind of cross-cutting federal learning model and method based on value iterative network
US20200293887A1 (en) * 2019-03-11 2020-09-17 doc.ai, Inc. System and Method with Federated Learning Model for Medical Research Applications
CN110276721A (en) * 2019-04-28 2019-09-24 天津大学 Image super-resolution rebuilding method based on cascade residual error convolutional neural networks
CN111047515A (en) * 2019-12-29 2020-04-21 兰州理工大学 Cavity convolution neural network image super-resolution reconstruction method based on attention mechanism
AU2020100200A4 (en) * 2020-02-08 2020-06-11 Huang, Shuying DR Content-guide Residual Network for Image Super-Resolution
CN111583112A (en) * 2020-04-29 2020-08-25 华南理工大学 Method, system, device and storage medium for video super-resolution
CN111640060A (en) * 2020-04-30 2020-09-08 南京理工大学 Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module
CN111598778A (en) * 2020-05-13 2020-08-28 云南电网有限责任公司电力科学研究院 Insulator image super-resolution reconstruction method
CN111915490A (en) * 2020-08-14 2020-11-10 深圳清研智城科技有限公司 License plate image super-resolution reconstruction model and method based on multi-scale features

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
C Y CHANG ET AL: "Multi-scale Dense Network for Single-image Super-resolution", 《ICASSP 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)》 *
DONGHYEON HAN ET AL: "Extension of Direct Feedback Alignment to Convolutional and Recurrent Neural Network for Bio-plausible Deep Learning", 《ARXIV PREPRINT ARXIV》 *
H XU ET AL: "Human Activity Recognition Based on Gramian Angular Field and Deep Convolutional Neural Network", 《IN IEEE ACCESS》 *
J XU ET AL: "Dense Bynet: Residual Dense Network for Image Super Resolution", 《2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)》 *
JIN X ET AL: "Single image super-resolution with multi-level feature fusion recursive network", 《NEUROCOMPUTING》 *
SHANG T ET AL: "Perceptual Extreme Super Resolution Network with Receptive Field Block", 《PROCEEDINGS OF THE IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS》 *
TAO DAI ET AL: "Second-order Attention Network for Single Image Super-Resolution", 《IEEE CONF. ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 *
Y TAI ET AL: "Memnet: A persistent memory network for image restoration", 《IEEE CONF. ON COMPUTER VISION (ICCV)》 *
Y ZHANG ET AL: "Image Super-Resolution Using Very Deep Residual Channel Attention Networks", 《EUROPEAN CONF. ON COMPUTER VISION (ECCV)》 *
YULUN ZHANG ET AL: "Residual Dense Network for Image Restoration", 《JOURNAL OF LATEX CLASS FILES》 *
YULUN ZHANG ET AL: "Residual dense network for image super-resolution", 《IEEE CONF. ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 *
李诚 等 (LI CHENG ET AL): "Image super-resolution reconstruction with an improved generative adversarial network", 《计算机工程与应用》(Computer Engineering and Applications) *
杨欢 等 (YANG HUAN ET AL): "Multi-scale pedestrian detection based on receptive field matching", 《现代计算机》(Modern Computer) *
王蓉 等 (WANG RONG ET AL): "Intrusion detection method based on federated learning and convolutional neural network", 《信息网络安全》(Netinfo Security) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830400A (en) * 2023-02-10 2023-03-21 南昌大学 Data identification method and system based on federal learning mechanism
CN115830400B (en) * 2023-02-10 2023-05-16 南昌大学 Data identification method and system based on federal learning mechanism

Also Published As

Publication number Publication date
CN112669216B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN111583109B (en) Image super-resolution method based on generation of countermeasure network
CN110782462A (en) Semantic segmentation method based on double-flow feature fusion
CN110728682B (en) Semantic segmentation method based on residual pyramid pooling neural network
CN111259905A (en) Feature fusion remote sensing image semantic segmentation method based on downsampling
CN110223234A (en) Depth residual error network image super resolution ratio reconstruction method based on cascade shrinkage expansion
CN113240683B (en) Attention mechanism-based lightweight semantic segmentation model construction method
CN113361485A (en) Hyperspectral image classification method based on spectral space attention fusion and deformable convolution residual error network
CN112699844A (en) Image super-resolution method based on multi-scale residual error level dense connection network
CN112669216B (en) Super-resolution reconstruction network of parallel cavity new structure based on federal learning
CN115330620A (en) Image defogging method based on cyclic generation countermeasure network
Ahn et al. Neural architecture search for image super-resolution using densely constructed search space: DeCoNAS
CN111027542A (en) Target detection method improved based on fast RCNN algorithm
CN110223224A (en) A kind of Image Super-resolution realization algorithm based on information filtering network
Cong et al. CAN: Contextual aggregating network for semantic segmentation
CN114519384B (en) Target classification method based on sparse SAR amplitude-phase image dataset
CN112529098B (en) Dense multi-scale target detection system and method
Liu et al. Single‐image super‐resolution using lightweight transformer‐convolutional neural network hybrid model
CN113807164A (en) Face recognition method based on cosine loss function
CN113920124A (en) Brain neuron iterative segmentation method based on segmentation and error guidance
Zhuge et al. An improved deep multiscale crowd counting network with perspective awareness
Li et al. Single Image Super-resolution Reconstruction Algorithm Based on Deep Learning
Guan et al. Inception donut convolution for top-down semantic segmentation
Wei et al. Satellite image super-resolution reconstruction based on ACGAN and dual-channel dense residual network
CN112364892B (en) Image identification method and device based on dynamic model
Gandhi et al. Application of deep learning in cartography using UNet and generative adversarial network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220422

CF01 Termination of patent right due to non-payment of annual fee