AU2020100200A4 - Content-guide Residual Network for Image Super-Resolution - Google Patents
- Publication number
- AU2020100200A4
- Authority
- AU
- Australia
- Prior art keywords
- features
- layer
- equation
- image
- cgrg
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
- 238000000034 method Methods 0.000 claims abstract description 26
- 238000000605 extraction Methods 0.000 claims description 7
- 238000004458 analytical method Methods 0.000 claims description 2
- 230000006835 compression Effects 0.000 claims description 2
- 238000007906 compression Methods 0.000 claims description 2
- 230000010339 dilation Effects 0.000 claims 1
- 230000000007 visual effect Effects 0.000 abstract description 4
- 239000000284 extract Substances 0.000 abstract description 2
- 239000012141 concentrate Substances 0.000 abstract 1
- 238000013135 deep learning Methods 0.000 abstract 1
- 230000006870 function Effects 0.000 description 7
- 238000005070 sampling Methods 0.000 description 4
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Image Analysis (AREA)
Abstract
Abstract: Single image super-resolution (SISR), one of the basic tasks of computer vision, has always received much attention. The development and extensive exploration of deep learning have brought remarkable performance gains to SISR. However, most existing CNN-based SISR methods concentrate on using wider or deeper architectures to obtain better results, while ignoring the portability of the network. Such huge network structures expend too much computing resource, which constrains their application on mobile equipment. To address these problems, this patent advances a content-based network that extracts features more efficiently while keeping quality and complexity in balance. Experimental results testify that our network achieves better results than state-of-the-art SISR methods in terms of both objective indicators and visual quality.
Description
BACKGROUND AND PURPOSE
[0001] Image super-resolution, a fundamental and widely studied problem, aims to recover a distinct high-resolution (HR) image from an observed low-resolution (LR) image. However, this is an ill-posed problem, since myriad HR solutions map to a single LR input. Therefore, many SR methods have been proposed to attack the problem, ranging from the early interpolation-based and model-based methods to the recently popular learning-based methods.
[0002] In the early years, interpolation-based methods (bilinear and bicubic interpolation) were used for simple image-magnification applications. Because of their insufficient flexibility and poor restoration quality, they were later displaced by model-based methods. Model-based methods achieved many satisfactory results, such as the wavelet transform, sparse coding, and the total variation model. These methods use prior information about the image to reconstruct the HR image. Although model-based methods are more flexible and restore clearer HR images, they still have fatal defects: (1) they take too much time to produce an HR image of good quality; (2) in the absence of prior information, the reconstruction quality of the image drops rapidly.
[0003] Deep convolutional neural networks (CNNs) have shown great vitality and unprecedented success in many fields because of their powerful ability for feature extraction and expression. The powerful feature expression and end-to-end training of CNNs have made them widely adopted for single image super-resolution (SISR). Recently, a large number of CNN-based SISR methods have emerged. By statistically exploring the images in a dataset, CNN-based SISR methods show their power and achieve state-of-the-art results. However, most of these structures focus on increasing the width or depth of the network, resulting in huge consumption of computing resources.
[0004] To address these problems, we propose a deep content-guide residual network (CGRN) for more powerful feature expression and feature-correlation learning. In particular, we propose a content-guide multi-scale residual module to enhance feature extraction and expression. Moreover, dilated convolution is adopted instead of up- and down-sampling operations: it obtains multi-scale feature information without losing the high-frequency information that is easily lost in up-down sampling, it is easy to train, and it has few parameters. Our method is faster and smaller in model size while obtaining better visual quality and recovering more image details than state-of-the-art SR methods.
[0005] In summary, the main contributions of this patent are as follows: (1) We propose a deep content-guide residual network (CGRN) for accurate image SR. Experimental results on public datasets demonstrate that our CGRN achieves better visual quality and objective indicators than other state-of-the-art SR methods. (2) We propose a multi-scale attention module that extracts image features more efficiently than a deeper or wider single-scale network; it makes our network attend to the structural information of the images as well as their characteristics. (3) We propose to use dilated convolution instead of up- and down-sampling to enlarge the receptive field. In contrast to the up-down sampling operation, which loses high-frequency image details, dilated convolution better retains high frequencies; at the same time, it avoids the large increase in computation caused by using larger convolution kernels. Our network thus strikes an excellent balance among visual quality, speed, and computing resources.
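The receptive-field arithmetic behind contribution (3) can be checked with a short sketch (illustrative only and not part of the patent; the helper name is ours). A k × k kernel with dilation d covers k + (k − 1)(d − 1) pixels per axis while keeping the same k² weights:

```python
def dilated_receptive_field(kernel_size: int, dilation: int) -> int:
    """Per-axis receptive field of a single dilated convolution layer:
    k + (k - 1) * (d - 1)."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)

# A 3x3 kernel keeps 9 weights regardless of dilation, but its
# receptive field grows linearly with the dilation rate:
for d in (1, 2, 4):
    print(d, dilated_receptive_field(3, d))  # 1->3, 2->5, 4->9
```

So a 3 × 3 kernel with dilation 4 sees as far as a 9 × 9 kernel at roughly one ninth of the weight cost, which is the trade-off the contribution describes.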
FRAMEWORK OF OUR CONTENT-GUIDE RESIDUAL NETWORK
[0001] Our CGRN mainly consists of four parts: shallow feature extraction, the content-guide residual group (CGRG), the multi-scale attention block, and the reconstruction part. Define Ilr as the input LR image and Isr as the output of the CGRN. We use only one convolution layer to extract the shallow features F0 from the LR image.
F0 = HSF(Ilr)    Equation 1
where HSF(·) represents the convolution operation. The extracted shallow features F0 are then used as the input of the CGRG to extract deep and structural features, generating the deep features as
Fdf = HCGRG(F0)    Equation 2
where Fdf represents the deep features and structural information extracted by the CGRG, which is composed of four multi-scale attention blocks. The extracted deep features Fdf are then upscaled via the upscale module.
Fup = Hup(Fdf)    Equation 3
where Hup(·) and Fup are the upscale module and the upscaled features, respectively.
There are several options for the reconstruction part, such as transposed convolution and ESPCN. We use the pixel-shuffle function and a convolution layer to form the upscale part.
Isr = HR(Fup) = HCGRN(Ilr)    Equation 4
where HR(·), Hup(·), and HCGRN(·) denote the reconstruction layer, the upscale module, and the overall function of the CGRN, respectively.
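The pixel-shuffle rearrangement used in the upscale part can be sketched in a few lines of NumPy (an illustration of the standard operation, not the patented implementation; PyTorch's nn.PixelShuffle performs the same rearrangement). It trades r² channels for an r-fold increase in spatial resolution:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split the channel axis into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16, dtype=float).reshape(4, 2, 2)  # C*r^2 = 4 channels, r = 2
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 4, 4)
```

Each output 2 × 2 patch gathers one pixel from each of the four input channels, which is why no high-frequency information has to be interpolated.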
[0002] The L1 and L2 loss functions have been widely used in SISR. To show the effectiveness of our network, we use the L1 loss in the current work. Given a training set {Ilr^i, Ihr^i} (i = 1, ..., N) composed of N LR inputs and their HR counterparts, the goal of training the CGRN is to minimize the following L1 loss.
L(Θ) = (1/N) Σ_{i=1}^{N} ||HCGRN(Ilr^i) − Ihr^i||_1    Equation 5
where Θ denotes the parameter set of the CGRN. The loss function is optimized using stochastic gradient descent.
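The loss in Equation 5 is simple enough to sketch directly (illustrative code, not from the patent; the function name is ours):

```python
import numpy as np

def l1_loss(sr_batch, hr_batch):
    """L1 training loss of Equation 5: the mean, over N image pairs, of the
    L1 norm between the reconstructed image and its HR ground truth."""
    n = len(sr_batch)
    return sum(float(np.abs(sr - hr).sum())
               for sr, hr in zip(sr_batch, hr_batch)) / n

sr = [np.array([[1.0, 2.0], [3.0, 4.0]])]  # one reconstructed image
hr = [np.array([[1.0, 2.0], [3.0, 3.0]])]  # its ground truth
print(l1_loss(sr, hr))  # 1.0
```

Compared with L2, the absolute-value penalty is less dominated by large outlier residuals, which is one common reason SISR work prefers it.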
CONTENT GUIDE RESIDUAL GROUP
[0001] Our content-guide residual group (CGRG) contains four multi-scale attention blocks, each of which has its own convolution kernel of the same size (3 × 3) but a different dilation rate, so as to acquire more structural information while reducing parameters. We fuse the intermediate information obtained by each multi-scale attention block to enhance the feature-extraction ability of the CGRG. Given input features Fin, the procedure in the CGRG can be expressed as
Fmsat^1 = Hmsat^1(Fin)    Equation 6
Fmsat^2 = Hmsat^2(Fmsat^1) = Hmsat^2(Hmsat^1(Fin))    Equation 7
Fmsat^3 = Hmsat^3(Fmsat^2) = Hmsat^3(Hmsat^2(Hmsat^1(Fin)))    Equation 8
Fmsat^4 = Hmsat^4(Fmsat^3) = Hmsat^4(Hmsat^3(Hmsat^2(Hmsat^1(Fin))))    Equation 9
Fcgrg = Fin + Reduce(Concat(Fmsat^1, Fmsat^2, Fmsat^3, Fmsat^4))    Equation 10
where Fmsat^i denotes the features obtained by the i-th multi-scale attention block, and Hmsat^i(·) denotes the function of the i-th multi-scale attention block. Concat denotes the concatenation operation along the channel dimension, and Reduce indicates the compression operation along the channel dimension.
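The fusion step of Equation 10 reduces to a channel concatenation, a 1 × 1 convolution (a per-pixel matrix multiply), and a long skip connection. A minimal NumPy sketch, with random stand-ins for the block outputs (all names and weights here are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
c, h, w = 2, 4, 4
f_in = rng.standard_normal((c, h, w))

# Stand-ins for the outputs of the four multi-scale attention blocks.
block_outputs = [rng.standard_normal((c, h, w)) for _ in range(4)]

concat = np.concatenate(block_outputs, axis=0)        # Concat: (4*c, h, w)
w_reduce = rng.standard_normal((c, 4 * c)) / (4 * c)  # 1x1-conv weights
reduced = np.tensordot(w_reduce, concat, axes=1)      # Reduce: back to (c, h, w)
f_cgrg = f_in + reduced                               # long skip of Equation 10
print(f_cgrg.shape)  # (2, 4, 4)
```

The identity branch Fin means the group only has to learn a residual correction, which keeps the deep stack easy to train.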
MULTI-SCALE ATTENTION BLOCK
[0001] Our multi-scale attention block consists of 5 convolution layers (each including a ReLU layer) and 1 SE block. We fuse the features from the first four layers and pass them through the SE block. Then we apply a convolution layer to extract more high-frequency features, and we use a skip connection to inject more details into the later layers of the network. We use the SE block to reweight the features learned from the previous convolution layers, so as to improve the efficiency of feature extraction. The formula of the multi-scale attention block can be expressed as
f_{i,j} = h_{i,j}(f_{i,j−1}) = h_{i,j}(h_{i,j−1}(···h_{i,1}(Fin)···))    Equation 11
Fmsat = f_{i,1} + h_{i,5}(SE(Concat(f_{i,1}, f_{i,2}, f_{i,3}, f_{i,4})))    Equation 12
where f_{i,j} represents the features extracted by the j-th layer of the i-th multi-scale attention block, and h_{i,j}(·) represents its convolution operation (including the ReLU layer). The Concat(·) operation concatenates the outputs of the four convolution layers, and the SE(·) operation stands for the channel attention module. Analysis shows that the channel attention module enhances the expressive ability of the network by assigning weight parameters to the channels.
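The SE(·) channel-attention step can be sketched in NumPy (a generic squeeze-and-excitation illustration under our own naming, not the patented weights): squeeze each channel to a scalar by global average pooling, excite with two fully connected layers, and rescale the input channels.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation channel attention: global-average-pool each
    channel, pass through two FC layers (ReLU then sigmoid), and rescale
    the input channels by the resulting weights."""
    z = x.mean(axis=(1, 2))                # squeeze: one scalar per channel
    s = np.maximum(w1 @ z, 0.0)            # excitation FC 1 + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # excitation FC 2 + sigmoid
    return x * s[:, None, None]            # channel-wise reweighting

x = np.ones((4, 2, 2))
w1 = np.zeros((2, 4))   # hypothetical weights, reduction ratio r = 2
w2 = np.zeros((4, 2))
y = se_block(x, w1, w2)
print(y[0, 0, 0])  # 0.5 (sigmoid(0) scales every channel by one half)
```

Because the sigmoid outputs lie in (0, 1), the block can only attenuate channels, which is how it redistributes importance among the four dilated branches.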
Claims (3)
1. The procedures of the proposed single-image super-resolution method are as follows:
[0001] The proposed method is introduced in detail below. The structure of the proposed method is shown in Figure 1, from which we can see that it consists of four parts.
[0002] Part 1: We use a simple 3×3 convolutional layer to extract shallow features from the low-resolution image.
[0003] Part 2: We design the Content-guide Residual Groups (CGRG) to extract and fuse multi-level features of the image.
[0004] Part 3: We propose the Multi-scale Attention Block, containing convolutional layers with multiple dilation rates, to obtain a larger receptive field and thus more structural features of the image.
[0005] Part 4: The image reconstruction part includes a convolutional layer to integrate the previously obtained features and an upsampling layer to reconstruct the image.
2. The structure of the content-guide residual group is as follows:
[0001] Our content-guide residual group (CGRG) contains four multi-scale attention blocks, each of which has its own convolution kernel of the same size (3 × 3) but a different dilation rate, so as to acquire more structural information while reducing parameters. We fuse the intermediate information obtained by each multi-scale attention block to enhance the feature-extraction ability of the CGRG. Given input features Fin, the procedure in the CGRG can be expressed as
Fmsat^1 = Hmsat^1(Fin)    Equation 1
Fmsat^2 = Hmsat^2(Fmsat^1) = Hmsat^2(Hmsat^1(Fin))    Equation 2
Fmsat^3 = Hmsat^3(Fmsat^2) = Hmsat^3(Hmsat^2(Hmsat^1(Fin)))    Equation 3
Fmsat^4 = Hmsat^4(Fmsat^3) = Hmsat^4(Hmsat^3(Hmsat^2(Hmsat^1(Fin))))    Equation 4
Fcgrg = Fin + Reduce(Concat(Fmsat^1, Fmsat^2, Fmsat^3, Fmsat^4))    Equation 5
where Fmsat^i denotes the features obtained by the i-th multi-scale attention block, and Hmsat^i(·) denotes the function of the i-th multi-scale attention block. Concat denotes the concatenation operation along the channel dimension, and Reduce indicates the compression operation along the channel dimension.
3. The structure of the multi-scale attention block is as follows:
[0001] Our multi-scale attention block consists of 5 convolution layers (each including a ReLU layer) and 1 SE block. We fuse the features from the first four layers and pass them through the SE block. Then we apply a convolution layer to extract more high-frequency features, and we use a skip connection to inject more details into the later layers of the network. We use the SE block to reweight the features learned from the previous convolution layers, so as to improve the efficiency of feature extraction. The formula of the multi-scale attention block can be expressed as
f_{i,j} = h_{i,j}(f_{i,j−1}) = h_{i,j}(h_{i,j−1}(···h_{i,1}(Fin)···))    Equation 6
Fmsat = f_{i,1} + h_{i,5}(SE(Concat(f_{i,1}, f_{i,2}, f_{i,3}, f_{i,4})))    Equation 7
where f_{i,j} represents the features extracted by the j-th layer of the i-th multi-scale attention block, and h_{i,j}(·) represents its convolution operation (including the ReLU layer). The Concat(·) operation concatenates the outputs of the four convolution layers, and the SE(·) operation stands for the channel attention module. Analysis shows that the channel attention module enhances the expressive ability of the network by assigning weight parameters to the channels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020100200A AU2020100200A4 (en) | 2020-02-08 | 2020-02-08 | Content-guide Residual Network for Image Super-Resolution |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020100200A AU2020100200A4 (en) | 2020-02-08 | 2020-02-08 | Content-guide Residual Network for Image Super-Resolution |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2020100200A4 true AU2020100200A4 (en) | 2020-06-11 |
Family
ID=70969047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2020100200A Ceased AU2020100200A4 (en) | 2020-02-08 | 2020-02-08 | Content-guide Residual Network for Image Super-Resolution |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2020100200A4 (en) |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111754404A (en) * | 2020-06-18 | 2020-10-09 | 重庆邮电大学 | Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism |
CN111768340A (en) * | 2020-06-30 | 2020-10-13 | 苏州大学 | Super-resolution image reconstruction method and system based on dense multi-path network |
CN111882485A (en) * | 2020-06-19 | 2020-11-03 | 北京交通大学 | Hierarchical feature feedback fusion depth image super-resolution reconstruction method |
CN112085760A (en) * | 2020-09-04 | 2020-12-15 | 厦门大学 | Prospect segmentation method of laparoscopic surgery video |
CN112233033A (en) * | 2020-10-19 | 2021-01-15 | 中南民族大学 | Progressive high-power face super-resolution system and method for analytic prior fusion |
CN112330539A (en) * | 2020-10-10 | 2021-02-05 | 北京嘀嘀无限科技发展有限公司 | Super-resolution image reconstruction method, device, storage medium and electronic equipment |
CN112419155A (en) * | 2020-11-26 | 2021-02-26 | 武汉大学 | Super-resolution reconstruction method for fully-polarized synthetic aperture radar image |
CN112561838A (en) * | 2020-12-02 | 2021-03-26 | 西安电子科技大学 | Image enhancement method based on residual self-attention and generation countermeasure network |
CN112634238A (en) * | 2020-12-25 | 2021-04-09 | 武汉大学 | Image quality evaluation method based on attention module |
CN112669216A (en) * | 2021-01-05 | 2021-04-16 | 华南理工大学 | Super-resolution reconstruction network of parallel cavity new structure based on federal learning |
CN112734915A (en) * | 2021-01-19 | 2021-04-30 | 北京工业大学 | Multi-view stereoscopic vision three-dimensional scene reconstruction method based on deep learning |
CN112767251A (en) * | 2021-01-20 | 2021-05-07 | 重庆邮电大学 | Image super-resolution method based on multi-scale detail feature fusion neural network |
CN113033448A (en) * | 2021-04-02 | 2021-06-25 | 东北林业大学 | Remote sensing image cloud-removing residual error neural network system, method and equipment based on multi-scale convolution and attention and storage medium |
CN113139899A (en) * | 2021-03-31 | 2021-07-20 | 桂林电子科技大学 | Design method of high-quality light-weight super-resolution reconstruction network model |
CN113222818A (en) * | 2021-05-18 | 2021-08-06 | 浙江师范大学 | Method for reconstructing super-resolution image by using lightweight multi-channel aggregation network |
CN113284067A (en) * | 2021-05-31 | 2021-08-20 | 西安理工大学 | Hyperspectral panchromatic sharpening method based on depth detail injection network |
CN113313644A (en) * | 2021-05-26 | 2021-08-27 | 西安理工大学 | Underwater image enhancement method based on residual double attention network |
CN113393382A (en) * | 2021-08-16 | 2021-09-14 | 四川省人工智能研究院(宜宾) | Binocular picture super-resolution reconstruction method based on multi-dimensional parallax prior |
CN113506222A (en) * | 2021-07-30 | 2021-10-15 | 合肥工业大学 | Multi-mode image super-resolution method based on convolutional neural network |
CN113627487A (en) * | 2021-07-13 | 2021-11-09 | 西安理工大学 | Super-resolution reconstruction method based on deep attention mechanism |
CN113658201A (en) * | 2021-08-02 | 2021-11-16 | 天津大学 | Deep learning colorectal cancer polyp segmentation device based on enhanced multi-scale features |
CN113674156A (en) * | 2021-09-06 | 2021-11-19 | 苏州大学 | Method and system for reconstructing image super-resolution |
CN114092327A (en) * | 2021-11-02 | 2022-02-25 | 哈尔滨工业大学 | Hyperspectral image super-resolution method by utilizing heterogeneous knowledge distillation |
CN114119547A (en) * | 2021-11-19 | 2022-03-01 | 广东工业大学 | Three-dimensional hepatobiliary duct image segmentation algorithm and system |
CN114202481A (en) * | 2021-12-13 | 2022-03-18 | 贵州大学 | Multi-scale feature defogging network and method based on image high-frequency information fusion |
CN114547017A (en) * | 2022-04-27 | 2022-05-27 | 南京信息工程大学 | Meteorological big data fusion method based on deep learning |
CN114821261A (en) * | 2022-05-20 | 2022-07-29 | 合肥工业大学 | Image fusion algorithm |
CN114820302A (en) * | 2022-03-22 | 2022-07-29 | 桂林理工大学 | Improved image super-resolution algorithm based on residual dense CNN and edge enhancement |
CN115908144A (en) * | 2023-03-08 | 2023-04-04 | 中国科学院自动化研究所 | Image processing method, device, equipment and medium based on random wavelet attention |
CN116797456A (en) * | 2023-05-12 | 2023-09-22 | 苏州大学 | Image super-resolution reconstruction method, system, device and storage medium |
CN117078516A (en) * | 2023-08-11 | 2023-11-17 | 济宁安泰矿山设备制造有限公司 | Mine image super-resolution reconstruction method based on residual mixed attention |
CN117132472A (en) * | 2023-10-08 | 2023-11-28 | 兰州理工大学 | Forward-backward separable self-attention-based image super-resolution reconstruction method |
CN114202481B (en) * | 2021-12-13 | 2024-07-02 | 贵州大学 | Multi-scale feature defogging network and method based on image high-frequency information fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |