CN110288524B - Deep learning super-resolution method based on enhanced upsampling and discrimination fusion mechanism
- Publication number
- CN110288524B (granted publication of application CN201910385777.7A)
- Authority
- CN
- China
- Prior art keywords
- features
- resolution
- layer
- resolution image
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Abstract
The invention provides a deep learning super-resolution method based on an enhanced upsampling and discrimination fusion mechanism, comprising the following steps: in the residual branch, directly extracting the original features of the low-resolution input image with a single deep-learning convolutional layer; extracting deep features with 6 cascaded cyclic convolution units based on multilayer feature fusion, and up-sampling the deep features output by each cyclic convolution unit through 6 deconvolution layers; fusing the up-sampled high-resolution features with a feature fusion scheme equipped with a discrimination mechanism, and reducing the dimension of the fused features with a single convolutional layer to obtain the residual of the high-resolution image; in the mapping branch, up-sampling the low-resolution image by bicubic interpolation to obtain the mapping of the high-resolution image; and adding the mapping and the residual of the high-resolution image pixel by pixel to obtain the final high-resolution image. The invention gives the low-resolution features of every stage an opportunity to be up-sampled, and thereby achieves higher reconstruction accuracy.
Description
Technical Field
The invention belongs to the technical field of low-level computer vision super-resolution, and in particular relates to a deep learning super-resolution method based on an enhanced upsampling and discrimination fusion mechanism.
Background
With the progress of science and technology, ever more image resolution formats have appeared in daily life, developing from Standard Definition to High Definition, and further to the 1080p, 2K and even 4K formats common today. Higher resolution means more image detail and therefore potentially a larger amount of information, and a greater amount of information implies greater application potential. However, in the real world, high-resolution images often cannot be acquired because of the physical limitations of the imaging device; on the other hand, in Internet applications, users can often only store and transmit images of relatively low resolution because of limitations of network bandwidth, storage capacity, and the like. In most cases, however, higher-resolution images are desired. Therefore, how to efficiently improve image resolution, and thereby image quality, is an important research topic in the field of computer vision.
The super-resolution technique reconstructs a corresponding high-resolution image from an observed low-resolution image, and is an important technical means of improving image resolution in software. Image super-resolution reconstruction provides important technical support for computers to better observe, analyze and process images, and has very important application value in fields such as high-definition television, medical imaging, satellite imaging, and surveillance systems.
One common conventional method for improving image resolution is interpolation, including bilinear interpolation, bicubic interpolation, and so on. Interpolation is easy to implement, but because each reconstructed pixel is regionally similar to its surrounding pixels, interpolation produces pleasingly smooth images while failing to fully reconstruct the edge regions of the image. Some image detail may consequently be lost; for example, sharp, rapidly varying edge regions may be blurred. Another common conventional method is sparse coding. Sparse coding assumes that any natural image can be sparsely represented in a transform domain, usually a dictionary of image atoms obtained by exploiting the correspondence between low-resolution and high-resolution images. A drawback of sparse coding, however, is that introducing a sparsity constraint into the non-linear reconstruction typically incurs a large computational cost. Convolutional neural networks, which are neural networks designed for data with a grid-like structure (an image can be viewed as a two-dimensional grid of pixels), have succeeded in many different computer vision tasks (e.g., image classification and detection). Many schemes for image super-resolution reconstruction based on convolutional neural networks have been developed, such as the deeply-recursive convolutional network (DRCN) and the residual dense network (RDN).
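As a concrete illustration of the interpolation family discussed above, the sketch below implements bilinear upsampling, the simpler relative of bicubic interpolation, for a single-channel image. It is an illustrative stand-in, not the interpolation routine of any particular library.

```python
import numpy as np

def bilinear_upsample(img, scale):
    """Upsample a 2-D grayscale image by `scale` using bilinear interpolation."""
    h, w = img.shape
    # Sample positions in the source grid (pixel-center convention).
    ys = np.clip((np.arange(h * scale) + 0.5) / scale - 0.5, 0, h - 1)
    xs = np.clip((np.arange(w * scale) + 0.5) / scale - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four surrounding source pixels.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

lr = np.array([[0.0, 1.0], [1.0, 0.0]])
hr = bilinear_upsample(lr, 2)
print(hr.shape)  # (4, 4)
```

Because each output pixel is a weighted average of its neighbours, a constant image is reproduced exactly, while sharp edges are smeared across several output pixels, which is precisely the loss of edge detail described above.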
However, compared with their feature extraction modules, the upsampling modules of these methods are overly simple (a single upsampling layer), which increases the instability and randomness of the network; moreover, during feature fusion the differing importance of the feature channels is not considered, limiting the achievable gains. How to use deep learning to improve image super-resolution performance more efficiently is therefore a problem in urgent need of a solution.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a deep learning super-resolution method based on an enhanced upsampling and discrimination fusion mechanism, which is reasonable in design, high in efficiency and high in reconstruction accuracy.
The technical problem to be solved by the invention is realized by adopting the following technical scheme:
a deep learning super-resolution method based on an enhanced upsampling and discrimination fusion mechanism, comprising the following steps:
step 1, in a residual branch, directly extracting the original features of the low-resolution image with a single-layer convolutional layer based on deep learning;
step 2, extracting deep features by applying 6 cascaded cyclic convolution units based on multilayer feature fusion, and up-sampling the deep features output by each cyclic convolution unit through 6 deconvolution layers;
step 3, fusing the up-sampled high-resolution features by using a feature fusion mode with a discrimination mechanism, and reducing the dimension of the up-sampled features by using a single-layer convolutional layer to obtain a residual error of the high-resolution image;
step 4, on the mapping branch, a bicubic interpolation method is used for carrying out up-sampling on the low-resolution image to obtain the mapping of the high-resolution image;
and 5, adding the mapping and the residual error of the high-resolution image pixel by pixel to obtain a final high-resolution image.
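The five steps above can be sketched end to end as follows. The residual branch is replaced by a zero-residual stand-in and bicubic interpolation by nearest-neighbor upsampling (both hypothetical simplifications), so the sketch shows only the two-branch structure and the final pixel-wise addition, not the patent's network itself.

```python
import numpy as np

scale = 2

def residual_branch(lr):
    # Stand-in for steps 1-3: in the patent this is a deep CNN producing a
    # high-resolution residual; a zero residual keeps the sketch runnable.
    h, w = lr.shape
    return np.zeros((h * scale, w * scale))

def mapping_branch(lr):
    # Step 4 stand-in: nearest-neighbor upsampling in place of bicubic interpolation.
    return np.kron(lr, np.ones((scale, scale)))

rng = np.random.default_rng(0)
lr = rng.random((8, 8))
hr = mapping_branch(lr) + residual_branch(lr)  # step 5: pixel-wise addition
print(hr.shape)  # (16, 16)
```

The point of the decomposition is that the mapping branch carries the easy, low-frequency content, so the residual branch only has to learn the high-frequency detail the interpolation misses.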
Further, the deep learning super-resolution method comprises a residual branch and a mapping branch.
Further, the specific implementation method of step 1 includes the following steps:
at the beginning of the residual branch, by a single layer based on depth scienceDirect extraction of low resolution images I from the convolutional layer of interestlrOriginal feature F of0;
Wherein the size of the convolution layer is 3 multiplied by 64.
Further, the specific implementation method of step 2 includes the following steps:
deeper features are extracted through the first two serially connected convolutional layers and then concatenated with the initial features to obtain the shallow features; the shallow features then pass through the next two serially connected convolutional layers, and the resulting features are concatenated with the shallow features to obtain the deep features; the deep features thus fuse the multilayer features, including the initial features and the shallow features;
wherein the sizes of the first two serially connected convolutional layers are 3 × 3 × 64, and the sizes of the next two are 3 × 3 × 128;
the dimension of the deep features is reduced through a deep-learning convolutional layer, and the dimension-reduced features are added pixel by pixel to the original features F_0 of the low-resolution image I_lr to obtain the output F_k of the k-th cyclic convolution unit;
wherein the dimension-reduction layer size is 1 × 1 × 256 and k = 1, …, 6;
the output features F_k of each cyclic convolution unit are up-sampled through a deconvolution layer to obtain the high-resolution features U_k;
wherein the deconvolution layer size is 3 × 3 × 64.
Further, the feature fusion scheme with a discrimination mechanism is packaged into a discrimination fusion module.
Further, the specific implementation method of step 3 includes the following steps:
(1) the high-resolution features U_k of the different stages are concatenated through a channel combination operation to obtain the feature R;
(2) by integrating the feature R, a one-dimensional channel adjustment factor vector β = [β_1, …, β_i, …, β_{6×64}] is obtained, each channel of the feature R is adaptively recalibrated, and the dimension is reduced through a single convolutional layer to obtain the feature D_hr;
wherein β_i denotes the adjustment factor of the i-th channel, and the dimension-reduction layer size is 3 × 3 × 64;
(3) the feature D_hr is dimension-reduced using a single convolutional layer to obtain the residual I_Rb of the high-resolution image;
wherein the dimension-reduction layer size is 1 × 1 × 1.
Further, adaptively recalibrating the individual channels of the feature R includes:
(1) performing global pooling on each channel of the concatenated feature R to extract the global semantic information of the channels, which then passes through a convolutional layer, a ReLU nonlinear transformation, a convolutional layer and a Sigmoid nonlinear transformation to obtain the channel calibration coefficient vector β;
(2) recalibrating the input features R using the channel calibration coefficient vector β.
Further, the specific implementation method of step 4 includes the following steps:
in the mapping branch, the low-resolution image I_lr is up-sampled using bicubic interpolation to obtain the high-resolution image mapping I_Ib.
Further, the specific implementation method of step 5 includes the following steps:
mapping of high resolution images IIbAnd residual error IRbAdding the pixels one by one to obtain the final high-resolution image Ihr。
The invention has the advantages and positive effects that:
the invention has reasonable design. In the residual branch, it uses a deep convolutional neural network model to directly extract original features from the low-resolution image to obtain more accurate feature representation. The cyclic feature extraction unit efficiently and fully utilizes the hierarchical features by utilizing a multi-layer feature fusion mechanism, enhances the semantic information of the features, and simultaneously reduces reconstruction ambiguity caused by feature information loss. The output features of each feature extraction unit are subjected to up-sampling processing, so that the output features can directly contribute to high-resolution features, and the stability of the network is improved through the integration mode. By utilizing a distinguished feature fusion mechanism, the differences of features at different stages are fully considered, the features are efficiently utilized, and the expression capacity of the network is improved. In the mapping branch, a bicubic linear up-sampling low-resolution image is used for obtaining a high-resolution mapping image, so that the residual error branch is further forced to definitely learn the image residual error by using a strong convolutional neural network, and the detail quality of a reconstructed image is improved. The invention utilizes the characteristic of cyclic convolution, reduces the parameter quantity of the network, improves the image resolution by using the Laplace pyramid structure, and ensures the reconstruction quality, especially for large-scale reconstruction tasks.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a super resolution algorithm network framework diagram of the present invention;
FIG. 2 is a block diagram of a feature extraction module;
FIG. 3 is a block diagram of a discrimination fusion module.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It should be noted that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and all other embodiments obtained by those skilled in the art without any inventive work based on the embodiments of the present invention belong to the protection scope of the present invention.
The invention provides a deep learning super-resolution method based on an enhanced upsampling and discrimination fusion mechanism. As shown in FIGS. 1 to 3, in the residual branch a deep convolutional neural network model directly extracts original features from the low-resolution image. The cyclic feature extraction unit efficiently and fully exploits the hierarchical features using a multilayer feature fusion mechanism. The output features of each feature extraction unit are up-sampled, and a discriminative feature fusion mechanism, which fully considers the differences among features from different stages, uses the features efficiently and improves the expressive capacity of the network. In the mapping branch, the low-resolution image is up-sampled by bicubic interpolation to obtain the high-resolution mapping image. By exploiting cyclic convolution, the invention reduces the number of network parameters, and by using a Laplacian pyramid structure to increase image resolution it guarantees reconstruction quality, especially for large-scale reconstruction tasks. The output of the network is a high-resolution image; the super-resolution accuracy is computed from the low-resolution and high-resolution image pairs, and finally the network is trained with the mean absolute error loss as its objective.
In this embodiment, the 2-fold image super-resolution method based on the enhanced upsampling and discriminant fusion mechanism includes the following steps:
step S1, extracting low resolution image I directly from the residual branch by single-layer convolution layer based on deep learninglrOriginal feature F of0. Wherein the size of the convolution layer is 3 multiplied by 64.
Step S2, deep features are extracted by applying 6 cascaded cyclic convolution units based on multilayer feature fusion, and the deep features output by each cyclic convolution unit are up-sampled through 6 deconvolution layers. The specific implementation of this step is as follows:
s2.1, extracting deeper features through the first two convolutional layers connected in series, and then cascading the deeper features with the initial features to obtain shallow features; and then the shallow layer features are passed through the last two convolution layers connected in series, and the obtained features are cascaded with the shallow layer features to obtain the deep layer features. The deep features now merge the multi-layered features including the initial feature and the shallow features. Wherein the sizes of the first two series-connected convolution layers are 3 multiplied by 64, and the sizes of the last two series-connected convolution layers are 3 multiplied by 128.
Step S2.2, the dimension of the deep features is reduced through a deep-learning convolutional layer, and the dimension-reduced features are added pixel by pixel to the original features F_0 of the low-resolution image I_lr to obtain the output F_k of the k-th cyclic convolution unit. The dimension-reduction layer size is 1 × 1 × 256, and k = 1, …, 6.
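Steps S2.1 and S2.2 can be sketched with plain arrays, tracking only channel counts: the convolutions are replaced by zero stand-ins and the 1 × 1 dimension reduction by a hypothetical channel-mixing matrix, so only the concatenation bookkeeping and the final residual addition are real.

```python
import numpy as np

h, w = 8, 8
f0      = np.zeros((64, h, w))                 # initial features F_0 (64 channels)
deeper  = np.zeros((64, h, w))                 # stand-in output of the two 3x3x64 conv layers
shallow = np.concatenate([deeper, f0])         # concatenation: 64 + 64 = 128 channels
deeper2 = np.zeros((128, h, w))                # stand-in output of the two 3x3x128 conv layers
deep    = np.concatenate([deeper2, shallow])   # concatenation: 128 + 128 = 256 channels

# The 1x1 convolution reduces 256 channels back to 64 so the unit output can be
# added pixel by pixel to F_0; modeled here as a 64x256 channel-mixing matrix.
w1x1 = np.full((64, 256), 1.0 / 256)
fk = np.tensordot(w1x1, deep, axes=1) + f0     # output F_k of the cyclic unit
print(fk.shape)  # (64, 8, 8)
```

This shows why the unit is a *fusion* unit: the 256-channel tensor entering the 1 × 1 layer still contains the initial and shallow features verbatim, so later layers can draw on every earlier level.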
Step S2.3, the output features F_k of each cyclic convolution unit are up-sampled through a deconvolution layer to obtain the high-resolution features U_k, where the deconvolution layer size is 3 × 3 × 64.
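A deconvolution (transposed convolution) can be understood as zero-insertion followed by an ordinary convolution. The single-channel sketch below, with an arbitrary illustrative kernel rather than the learned 3 × 3 × 64 layer, shows how such a layer doubles spatial resolution:

```python
import numpy as np

def deconv2d(feat, kernel, stride=2):
    """Transposed convolution as zero-insertion followed by a 'same'-padded
    convolution -- a minimal single-channel sketch."""
    h, w = feat.shape
    up = np.zeros((h * stride, w * stride))
    up[::stride, ::stride] = feat              # insert zeros between samples
    kh, kw = kernel.shape
    padded = np.pad(up, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(up)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

fk = np.ones((4, 4))
k = np.full((3, 3), 0.25)                      # illustrative 3x3 kernel
uk = deconv2d(fk, k)
print(uk.shape)  # (8, 8)
```

Unlike fixed interpolation, the kernel here would be learned, which is why placing one deconvolution after every cyclic unit (6 in total) gives each stage its own trainable up-sampling path.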
Step S3, the up-sampled high-resolution features are fused using a feature fusion scheme with a discrimination mechanism, and the dimension of the fused features is reduced with a single convolutional layer to obtain the residual of the high-resolution image. The specific implementation of this step is as follows:
step S3.1, high-resolution features U in different stages are combined through channel combination operationkThe concatenation gives the characteristic R.
Step S3.2, by integrating the feature R, a one-dimensional channel adjustment factor vector β = [β_1, …, β_i, …, β_{6×64}] is obtained, each channel of the feature R is adaptively recalibrated, and the dimension is reduced through a single convolutional layer to obtain the feature D_hr, where β_i denotes the adjustment factor of the i-th channel and the dimension-reduction layer size is 3 × 3 × 64. The specific implementation of this step is as follows:
and S3.1.1, performing global pooling on each channel of the cascaded characteristic R, extracting global semantic information of the channel, and then performing convolutional layer, ReLU nonlinear transformation, convolutional layer and Sigmoid nonlinear transformation to obtain a channel calibration coefficient vector beta.
Step S3.1.2, input feature R is recalibrated using the channel calibration coefficient vector β.
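The recalibration of steps S3.2.1 and S3.2.2 follows the familiar squeeze-and-excitation pattern. Below is a minimal NumPy sketch with random weights and an assumed channel-reduction ratio of 16 between the two calibration convolutions (the patent does not specify their sizes); the 1 × 1 convolutions act on pooled per-channel scalars, so they reduce to dense matrices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recalibrate(r, w_down, w_up):
    """Discriminative channel recalibration: global average pooling per channel,
    two 1x1 'conv' (dense) layers with ReLU then Sigmoid, and per-channel
    rescaling of the input features."""
    pooled = r.mean(axis=(1, 2))              # global pooling: shape (C,)
    z = np.maximum(w_down @ pooled, 0.0)      # conv + ReLU (channel reduction)
    beta = sigmoid(w_up @ z)                  # conv + Sigmoid -> calibration vector
    return r * beta[:, None, None], beta      # rescale each channel by beta_i

rng = np.random.default_rng(0)
c, h, w = 384, 4, 4                           # 6 stages x 64 channels = 384
r = rng.random((c, h, w))
w_down = rng.standard_normal((c // 16, c)) * 0.1   # reduction ratio 16 (assumed)
w_up = rng.standard_normal((c, c // 16)) * 0.1
r_cal, beta = recalibrate(r, w_down, w_up)
print(r_cal.shape, beta.shape)  # (384, 4, 4) (384,)
```

Because each β_i lies in (0, 1), channels judged less informative are attenuated before the fused features are dimension-reduced, which is the "discrimination" in the discrimination fusion module.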
Step S4, in the mapping branch, the low-resolution image I_lr is up-sampled by a factor of 2 using bicubic interpolation to obtain the mapping I_Ib of the high-resolution image.
Step S5, the mapping I_Ib of the high-resolution image and the residual I_Rb are added pixel by pixel to obtain the final high-resolution image I_hr.
Step S6, the network is trained with the mean absolute error loss as its objective, and network performance is evaluated using PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index).
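The two evaluation metrics can be computed as follows. PSNR is shown in its standard form; for SSIM a simplified single-window (global) version is used purely to illustrate the index's structure, whereas the standard metric averages the same formula over sliding windows.

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio (dB) between two images with values in [0, peak]."""
    mse = np.mean((x - y) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=1.0):
    """Single-window (global) SSIM -- a simplification of the usual
    sliding-window formulation."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

a = np.linspace(0, 1, 64).reshape(8, 8)
b = np.clip(a + 0.01, 0, 1)
print(round(ssim_global(a, a), 3))  # 1.0 for identical images
print(psnr(a, b) > 35)              # high PSNR for a small perturbation
```

PSNR measures pixel-wise fidelity, while SSIM compares luminance, contrast and structure terms, which is why the two metrics can rank methods differently, as in the ×2 results discussed below.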
The following experiment was conducted in accordance with the method of the present invention to demonstrate the effects of the present invention.
Test environment: Matlab 2015b; the MatConvNet framework; Ubuntu 16.04; an NVIDIA GTX 1080 Ti GPU.
Test data: the image super-resolution datasets Set5, Set14, BSD100 and Urban100, containing 5, 14, 100 and 100 images respectively.
Test metrics: the invention is evaluated with PSNR and SSIM. These metrics are computed for the currently popular algorithms and the results are compared, showing that the proposed method achieves strong results in the field of image super-resolution.
The test results were as follows:
TABLE 1 comparison of Performance of the present invention with other algorithms at different scales of amplification and different data sets (PSNR/SSIM)
As can be seen from the comparison data, the invention achieves a better reconstruction effect, especially at the ×3, ×4 and ×8 scales, where it is superior to the other methods on most datasets. At the ×2 scale the invention achieves a relatively low PSNR but a near-optimal SSIM; notably, SSIM emphasizes the similarity of image structures. Overall, the invention reconstructs low-resolution images well and maintains high accuracy.
It should be emphasized that the embodiments described herein are illustrative rather than restrictive, and thus the present invention is not limited to the embodiments described in the detailed description, but also includes other embodiments that can be derived from the technical solutions of the present invention by those skilled in the art.
Claims (5)
1. A deep learning super-resolution method based on an enhanced upsampling and discrimination fusion mechanism is characterized by comprising the following steps of:
step 1, in a residual branch, directly extracting original features of a low-resolution image by using a single-layer convolution layer based on deep learning;
step 2, extracting deep features by applying 6 cascaded cyclic convolution units based on multilayer feature fusion, and performing up-sampling on the depth features output by each cyclic convolution unit through 6 deconvolution layers;
the specific implementation method comprises the following steps:
(1) extracting deeper features through the first two convolutional layers connected in series, and then cascading the deeper features with the initial features to obtain shallow features; then, the shallow layer features pass through the two subsequent series-connected convolution layers, and the obtained features are cascaded with the shallow layer features to obtain deep layer features; the deep layer features are fused with the multilayer features including the initial features and the shallow layer features;
wherein the sizes of the first two serially connected convolutional layers are 3 × 3 × 64, and the sizes of the next two serially connected convolutional layers are 3 × 3 × 128;
(2) the dimension of the deep features is reduced through a deep-learning convolutional layer, and the dimension-reduced features are added pixel by pixel to the original features F_0 of the low-resolution image I_lr to obtain the output F_k of the k-th cyclic convolution unit;
wherein the dimension-reduction layer size is 1 × 1 × 256 and k = 1, …, 6;
(3) the output features F_k of each cyclic convolution unit are up-sampled through a deconvolution layer to obtain the high-resolution features U_k;
wherein the deconvolution layer size is 3 × 3 × 64;
step 3, fusing the up-sampled high-resolution features using a feature fusion scheme with a discrimination mechanism, and reducing the dimension of the fused features with a single convolutional layer to obtain the residual of the high-resolution image; the feature fusion scheme with the discrimination mechanism is packaged into a discrimination fusion module;
the specific implementation method comprises the following steps:
(1) the high-resolution features U_k of the different stages are concatenated through a channel combination operation to obtain the feature R;
(2) by integrating the feature R, a one-dimensional channel adjustment factor vector β = [β_1, …, β_i, …, β_{6×64}] is obtained, each channel of the feature R is adaptively recalibrated, and the dimension is reduced through a single convolutional layer to obtain the feature D_hr;
wherein β_i denotes the adjustment factor of the i-th channel, and the dimension-reduction layer size is 3 × 3 × 64;
(3) the feature D_hr is dimension-reduced using a single convolutional layer to obtain the residual I_Rb of the high-resolution image;
wherein the dimension-reduction layer size is 1 × 1 × 1;
adaptively recalibrating the individual channels of the feature R includes:
(1) performing global pooling on each channel of the concatenated feature R to extract the global semantic information of the channels, then passing through a convolutional layer, a ReLU nonlinear transformation, a convolutional layer and a Sigmoid nonlinear transformation to obtain the channel calibration coefficient vector β;
(2) recalibrating the input features R using the channel calibration coefficient vector β;
step 4, on the mapping branch, a bicubic interpolation method is used for carrying out up-sampling on the low-resolution image to obtain the mapping of the high-resolution image;
and 5, adding the mapping and the residual error of the high-resolution image pixel by pixel to obtain a final high-resolution image.
2. The enhanced upsampling and discrimination fusion mechanism-based deep learning super-resolution method according to claim 1, wherein: the deep learning super-resolution method comprises a residual branch and a mapping branch.
3. The method for deep learning super-resolution based on enhanced upsampling and discriminative fusion mechanism according to claim 1 or 2, wherein: the step 1 specifically comprises:
at the beginning of the residual branch, the original features F_0 of the low-resolution image I_lr are directly extracted through a single-layer convolutional layer based on deep learning;
wherein the convolutional layer size is 3 × 3 × 64.
4. The enhanced upsampling and discrimination fusion mechanism-based deep learning super-resolution method according to claim 1, wherein: the specific implementation method of the step 4 comprises the following steps:
in the mapping branch, the low-resolution image I_lr is up-sampled using bicubic interpolation to obtain the high-resolution image mapping I_Ib.
5. The enhanced upsampling and discrimination fusion mechanism-based deep learning super-resolution method according to claim 1, wherein: the specific implementation method of the step 5 comprises the following steps:
mapping of high resolution images IIbAnd residual error IRbAdding the pixels one by one to obtain the final high-resolution image Ihr。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910385777.7A CN110288524B (en) | 2019-05-09 | 2019-05-09 | Deep learning super-resolution method based on enhanced upsampling and discrimination fusion mechanism |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910385777.7A CN110288524B (en) | 2019-05-09 | 2019-05-09 | Deep learning super-resolution method based on enhanced upsampling and discrimination fusion mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110288524A CN110288524A (en) | 2019-09-27 |
CN110288524B true CN110288524B (en) | 2020-10-30 |
Family
ID=68001452
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910385777.7A Active CN110288524B (en) | 2019-05-09 | 2019-05-09 | Deep learning super-resolution method based on enhanced upsampling and discrimination fusion mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288524B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111260551A (en) * | 2020-01-08 | 2020-06-09 | 华南理工大学 | Retina super-resolution reconstruction system and method based on deep learning |
CN112508969B (en) * | 2020-02-18 | 2021-12-07 | 广州柏视医疗科技有限公司 | Tubular structure segmentation graph fracture repair system of three-dimensional image based on deep learning network |
CN111507902B (en) * | 2020-04-15 | 2023-09-26 | 京东城市(北京)数字科技有限公司 | High-resolution image acquisition method and device |
CN112258487B (en) * | 2020-10-29 | 2024-06-18 | 成都芯昇动力科技有限公司 | Image detection system and method |
CN112270645B (en) * | 2020-11-03 | 2022-05-03 | 中南民族大学 | Progressive high-power face super-resolution system and method for multi-order feature cycle enhancement |
CN115546505A (en) * | 2022-09-14 | 2022-12-30 | 浙江工商大学 | Unsupervised monocular image depth estimation method based on deep learning |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8411743B2 (en) * | 2010-04-30 | 2013-04-02 | Hewlett-Packard Development Company, L.P. | Encoding/decoding system using feedback |
CN103514580B (en) * | 2013-09-26 | 2016-06-08 | 香港应用科技研究院有限公司 | Method and system for obtaining super-resolution images optimized for visual experience |
CN105787899A (en) * | 2016-03-03 | 2016-07-20 | 河海大学 | Fast image super-resolution method based on adaptive regression |
CN108122197B (en) * | 2017-10-27 | 2021-05-04 | 江西高创保安服务技术有限公司 | Image super-resolution reconstruction method based on deep learning |
CN108428212A (en) * | 2018-01-30 | 2018-08-21 | 中山大学 | Image magnification method based on dual Laplacian pyramid convolutional neural networks |
CN108492248A (en) * | 2018-01-30 | 2018-09-04 | 天津大学 | Depth map super-resolution method based on deep learning |
CN108537733B (en) * | 2018-04-11 | 2022-03-11 | 南京邮电大学 | Super-resolution reconstruction method based on multi-path deep convolutional neural network |
CN108734660A (en) * | 2018-05-25 | 2018-11-02 | 上海通途半导体科技有限公司 | Image super-resolution reconstruction method and device based on deep learning |
CN108805814B (en) * | 2018-06-07 | 2020-05-19 | 西安电子科技大学 | Image super-resolution reconstruction method based on multi-band deep convolutional neural network |
CN109064396B (en) * | 2018-06-22 | 2023-04-07 | 东南大学 | Single image super-resolution reconstruction method based on deep component learning network |
CN109509149A (en) * | 2018-10-15 | 2019-03-22 | 天津大学 | Super-resolution reconstruction method based on dual-channel convolutional network feature fusion |
CN109509192B (en) * | 2018-10-18 | 2023-05-30 | 天津大学 | Semantic segmentation network integrating multi-scale feature space and semantic space |
CN109583321A (en) * | 2018-11-09 | 2019-04-05 | 同济大学 | Deep-learning-based method for detecting small objects on structured roads |
2019-05-09: CN application CN201910385777.7A patented as CN110288524B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN110288524A (en) | 2019-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109886871B (en) | Image super-resolution method based on channel attention mechanism and multi-layer feature fusion | |
CN110288524B (en) | Deep learning super-resolution method based on enhanced upsampling and discrimination fusion mechanism | |
CN111369440B (en) | Model training and image super-resolution processing method, device, terminal and storage medium | |
CN112734646B (en) | Image super-resolution reconstruction method based on feature channel division | |
CN112001847A (en) | Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model | |
CN113139898B (en) | Light field image super-resolution reconstruction method based on frequency domain analysis and deep learning | |
CN111047515A (en) | Dilated convolutional neural network image super-resolution reconstruction method based on attention mechanism | |
CN112184554A (en) | Remote sensing image fusion method based on residual mixed expansion convolution | |
CN112801904B (en) | Hybrid degraded image enhancement method based on convolutional neural network | |
CN112699844A (en) | Image super-resolution method based on multi-scale residual error level dense connection network | |
Guo et al. | Multiscale semilocal interpolation with antialiasing | |
CN111861886B (en) | Image super-resolution reconstruction method based on multi-scale feedback network | |
CN111652804A (en) | Super-resolution reconstruction method based on dilated convolution pyramid and bottleneck network | |
CN116188272B (en) | Two-stage depth network image super-resolution reconstruction method suitable for multiple fuzzy cores | |
CN116486074A (en) | Medical image segmentation method based on local and global context information coding | |
CN112949636A (en) | License plate super-resolution identification method and system and computer readable medium | |
CN114626984A (en) | Super-resolution reconstruction method for Chinese text image | |
Zhang et al. | Deformable and residual convolutional network for image super-resolution | |
Yang et al. | MRDN: A lightweight Multi-stage residual distillation network for image Super-Resolution | |
CN110047038B (en) | Single-image super-resolution reconstruction method based on hierarchical progressive network | |
CN116681592A (en) | Image super-resolution method based on multi-scale self-adaptive non-local attention network | |
CN113962882B (en) | JPEG image compression artifact eliminating method based on controllable pyramid wavelet network | |
CN113674154B (en) | Single image super-resolution reconstruction method and system based on generation countermeasure network | |
Que et al. | Integrating spectral and spatial bilateral pyramid networks for pansharpening | |
CN114359041A (en) | Light field image space super-resolution reconstruction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||