CN107194872A - Remote sensed image super-resolution reconstruction method based on perception of content deep learning network - Google Patents
- Publication number
- CN107194872A (application number CN201710301990.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- complexity
- content
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network. The invention proposes a comprehensive measurement index for image content complexity together with its calculation method; on this basis, sample images are classified by content complexity, and three deep GAN network models of high, medium, and low complexity are built and trained. For an input image to be super-resolved, the network corresponding to its content complexity is then selected for reconstruction. To improve the learning performance of the GAN networks, the invention also gives an optimized loss function definition. The invention overcomes the contradiction between over-fitting and under-fitting that is widespread in machine-learning-based super-resolution reconstruction, and effectively improves the super-resolution reconstruction accuracy of remote sensing images.
Description
Technical Field
The invention belongs to the technical field of image processing, relates to an image super-resolution reconstruction method, and particularly relates to a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.
Background Art
Remote sensing images with high spatial resolution describe ground features more finely and provide rich detail, so images with high spatial resolution are usually desired. With the rapid development of space detection theory and technology, remote sensing images with meter-level and even sub-meter-level spatial resolution (such as IKONOS and QuickBird) have gradually come into use, but their temporal resolution is generally low. In contrast, some sensors with lower spatial resolution (e.g., MODIS) have high temporal resolution and can acquire remote sensing images over a wide area in a short time. If images with high spatial resolution can be reconstructed from these lower-spatial-resolution images, remote sensing images with both high spatial and high temporal resolution can be obtained. It is therefore necessary to reconstruct lower-resolution remote sensing images to obtain higher-resolution ones.
In recent years, deep learning has been widely used to solve various problems in computer vision and image processing. In 2014, C. Dong et al. of the Chinese University of Hong Kong first introduced deep CNN learning into image super-resolution reconstruction, obtaining better results than the previously mainstream sparse representation methods; in 2015, J. Kim et al. of Seoul National University proposed a further improved recursive-network (RNN)-based method, with a further gain in performance; in 2016, Y. Romano et al. of Google developed a fast and accurate learning method; shortly thereafter, C. Ledig et al. of Twitter applied GAN networks (generative adversarial networks) to image super-resolution, achieving the best reconstruction results to date. Moreover, the GAN is at bottom a deep belief network that no longer relies strictly on supervised learning, and it can be trained even without one-to-one pairs of high- and low-resolution image samples.
Once the deep learning model and network architecture are determined, the performance of a deep-learning-based super-resolution method is largely determined by how well the network model is trained. A deep learning network model is not better the more intensively it is trained; rather, it should be trained on samples adequately and appropriately (just as a deeper network model is not always better). For images with complex content, more samples must be trained so that more image features can be learned, but the network then easily over-fits images with simple content, blurring their super-resolution results; conversely, reducing the training intensity avoids over-fitting on simple-content images but causes under-fitting on complex-content images, reducing the naturalness and fidelity of the reconstructed images. How to train a network that can simultaneously meet the high-quality reconstruction requirements of both complex-content and simple-content images is a problem that deep-learning-based methods cannot avoid in practical super-resolution applications.
Disclosure of Invention
In order to solve the technical problem, the invention provides a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.
The technical scheme adopted by the invention is as follows: a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network is characterized by comprising the following steps:
step 1: collecting high and low resolution remote sensing image samples, and performing block processing;
step 2: calculating the complexity of each image block, dividing the image blocks into a high class, a middle class and a low class according to the complexity, and respectively forming a training sample set with high complexity, middle complexity and low complexity;
step 3: respectively training three GAN networks with high, medium and low complexity by using the obtained sample sets;
step 4: calculating the complexity of the input image, and selecting a corresponding GAN network for reconstruction according to the complexity.
Compared with the existing image super-resolution method, the method has the following advantages and positive effects:
(1) by using the simple idea of image classification, the method successfully overcomes the common contradiction of over-fitting and under-fitting in the super-resolution reconstruction based on machine learning, and effectively improves the super-resolution reconstruction precision of the remote sensing image;
(2) the deep learning network model based on the method is a GAN network, and the network does not depend on strictly aligned high-resolution and low-resolution sample blocks one by one during training, so that the application universality is improved, and the method is particularly suitable for the multi-source asynchronous imaging environment of high-resolution and low-resolution images in the field of remote sensing.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Referring to fig. 1, the remote sensing image super-resolution reconstruction method based on the content-aware deep learning network provided by the invention comprises the following steps:
step 1: collecting samples of the high-resolution and low-resolution remote sensing images, and uniformly cutting the high-resolution images into 128x128 image blocks and the low-resolution images into 64x64 image blocks;
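As an illustration of the block processing in step 1, the non-overlapping tiling can be sketched with NumPy as follows (a sketch only: border handling is not specified in the text, so partial blocks at the image borders are simply discarded here):

```python
import numpy as np

def tile(img: np.ndarray, size: int) -> list:
    """Cut an image into non-overlapping size x size blocks,
    discarding partial blocks at the borders."""
    rows, cols = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, rows - size + 1, size)
            for c in range(0, cols - size + 1, size)]

# High-resolution samples become 128x128 blocks, low-resolution ones 64x64 blocks:
hr_blocks = tile(np.zeros((256, 256), dtype=np.uint8), 128)
lr_blocks = tile(np.zeros((128, 128), dtype=np.uint8), 64)
```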
step 2: calculating the complexity of each image block, dividing the image blocks into a high class, a middle class and a low class according to the complexity, and respectively forming a training sample set with high complexity, middle complexity and low complexity;
the computing principle and method of the image complexity are as follows:
the complexity of the image content comprises texture complexity and structural complexity, the information entropy and the gray scale consistency performance well describe the texture complexity, and the structural complexity is suitably described by the edge ratio of an object in the image. The content complexity measurement index C of the image is formed by weighting an information entropy H, a gray level consistency U and an edge ratio R according to the following formula:
C=wh×H+wu×U+we×E;
where w ish,wu,weEach is a respective weight, which is determined experimentally.
The respective calculation methods of the information entropy, gray uniformity, and edge ratio are given below.
(1) Entropy of information
The information entropy reflects the number of gray levels in the image and the distribution of pixels among them; the higher the entropy, the more complex the image texture. The image information entropy H is calculated as:
H = −Σ_{i=1}^{K} (n_i/N)·log(n_i/N);
where n_i is the number of occurrences of gray level i, N is the total number of pixels, and K is the number of gray levels.
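The entropy above can be computed directly from the gray-level histogram; a minimal NumPy sketch (the logarithm base is not specified in the text, so base 2 is assumed here):

```python
import numpy as np

def information_entropy(img: np.ndarray) -> float:
    """H = -sum_i (n_i/N) * log(n_i/N) over the K occurring gray levels."""
    counts = np.bincount(img.ravel(), minlength=256).astype(float)
    probs = counts[counts > 0] / img.size  # n_i / N for gray levels that occur
    return float(-np.sum(probs * np.log2(probs)))
```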
(2) Gray scale uniformity
The gray uniformity reflects the homogeneity of the image: a small value corresponds to a simple image, while a large value corresponds to a complex one. The gray uniformity U is calculated as:
U = Σ_{i=1}^{M} Σ_{j=1}^{N} ( f(i,j) − f̄(i,j) )²;
where M and N are the numbers of rows and columns of the image respectively, f(i,j) is the gray value at pixel (i,j), and f̄(i,j) is the mean gray value of the pixels in the 3×3 neighborhood centered at (i,j).
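A sketch of the gray uniformity measure (edge handling at the image border is not specified in the text; reflective padding is assumed here):

```python
import numpy as np

def gray_uniformity(img: np.ndarray) -> float:
    """U = sum over pixels of (f(i,j) - mean of the 3x3 neighbourhood)^2."""
    f = img.astype(float)
    padded = np.pad(f, 1, mode="reflect")
    # Mean of each 3x3 neighbourhood, built from the nine shifted copies.
    neigh = sum(padded[di:di + f.shape[0], dj:dj + f.shape[1]]
                for di in range(3) for dj in range(3)) / 9.0
    return float(np.sum((f - neigh) ** 2))
```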
(3) Edge ratio
The number of objects in an image directly reflects its complexity: many objects generally mean a complex image, and vice versa. Since counting objects involves complicated image segmentation and is inconvenient to compute, the number of object edges is used instead: it indirectly reflects the number of objects in the image and hence its complexity. The proportion of object edges in the image is described by the edge ratio R, calculated as:
R = E/(M×N);
where M and N are the numbers of rows and columns of the image respectively, and E is the number of edge pixels in the image. Because object edges exhibit significant gray-level changes, they can be obtained by a differencing algorithm; edge pixels are generally detected with an edge detection operator (such as the Canny or Sobel operator).
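The edge ratio and the combined complexity measure can then be sketched as follows (the edge map is assumed to come from any standard detector such as Canny or Sobel, and the weights shown are placeholders; the patent determines them experimentally):

```python
import numpy as np

def edge_ratio(edge_map: np.ndarray) -> float:
    """R = E / (M*N), where E counts the nonzero pixels of a binary edge map."""
    M, N = edge_map.shape
    return float(np.count_nonzero(edge_map)) / (M * N)

def content_complexity(H: float, U: float, R: float,
                       wh: float = 0.4, wu: float = 0.3, we: float = 0.3) -> float:
    """C = wh*H + wu*U + we*R (placeholder weights, determined experimentally)."""
    return wh * H + wu * U + we * R
```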
The number of image blocks in the high-complexity sample set is not less than 500,000, that in the medium-complexity set not less than 300,000, and that in the low-complexity set not less than 200,000.
step 3: respectively training three GAN networks with high, medium and low complexity by using the obtained sample sets;
the loss function for GAN network training is defined as follows:
the loss function of GAN network training contains content loss, production-confrontation loss and total variation loss. Content loss characterizes the distortion of the image content, and the generation-confrontation loss describes the degree of distinction between the statistical properties of the generated result and data such as natural images, and total variation loss characterizes the continuity of the image content. The overall loss function consists of three loss function weights:
where w isv,wg,wtEach is a respective weight, which is determined experimentally.
The calculation method for each loss function is given below.
(1) Content loss
The traditional content loss function is the pixel-wise MSE (mean square error), which examines the loss of image content pixel by pixel; MSE-based network training dilutes the high-frequency structural components of the image, making it over-blurred. To overcome this drawback, a feature loss function of the image is introduced here. Since manually defining and extracting valuable image features is complex work while deep learning can extract features automatically, hidden-layer features obtained from a trained VGG network are used for the measurement. Let φ_{i,j} denote the feature map produced by the j-th convolutional layer before the i-th pooling layer of the VGG network; the feature loss is then defined as the Euclidean distance between the VGG features of the reconstructed image G(I^LR) and the reference image I^HR, i.e.:
l_VGG^SR = (1/(W_{i,j}·H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} ( φ_{i,j}(I^HR)_{x,y} − φ_{i,j}(G(I^LR))_{x,y} )²;
where W_{i,j} and H_{i,j} are the dimensions of the VGG feature map.
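On precomputed feature maps, the feature loss reduces to a normalized squared Euclidean distance; a sketch (in practice the feature maps would come from a pretrained VGG network — the plain arrays here are stand-ins):

```python
import numpy as np

def vgg_content_loss(feat_ref: np.ndarray, feat_sr: np.ndarray) -> float:
    """Squared Euclidean distance between the feature maps of the reference
    and reconstructed images, normalised by the feature-map size W*H."""
    W, H = feat_ref.shape[:2]
    return float(np.sum((feat_ref - feat_sr) ** 2) / (W * H))
```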
(2) Generating-fighting loss
The generation-adversarial loss reflects the generative function of the GAN network, encouraging the network to produce solutions lying on the natural-image manifold so that the discriminator cannot distinguish the generated results from natural images. It is measured from the discriminator's probabilities over all training samples, as follows:
l_GAN^SR = Σ_{n=1}^{N} −log D(G(I_n^LR));
where D(G(I_n^LR)) is the probability with which the discriminator D judges the reconstructed result G(I_n^LR) to be a natural image, and N is the total number of training samples.
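Given the discriminator's output probabilities for the generated samples, this loss is a summed negative log-likelihood; a sketch (the small epsilon is our addition, guarding against log(0)):

```python
import numpy as np

def adversarial_loss(d_probs: np.ndarray, eps: float = 1e-12) -> float:
    """l_GAN = sum_n -log D(G(I_n^LR)) over the N training samples."""
    return float(-np.sum(np.log(d_probs + eps)))
```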
(3) Total variation loss
The total variation loss is added to strengthen the local continuity of the learning result over the image content, and is calculated as:
l_TV^SR = (1/(W·H)) Σ_{x=1}^{W} Σ_{y=1}^{H} ||∇G(I^LR)_{x,y}||;
where W and H denote the width and height of the reconstructed image.
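A sketch of the total variation loss using forward differences for the gradient (the text does not fix the discretization, so this is one common choice):

```python
import numpy as np

def total_variation_loss(img: np.ndarray) -> float:
    """l_TV = (1/(W*H)) * sum of gradient magnitudes of the reconstructed image."""
    f = img.astype(float)
    dx = np.pad(np.diff(f, axis=1), ((0, 0), (0, 1)))  # horizontal forward diff
    dy = np.pad(np.diff(f, axis=0), ((0, 1), (0, 0)))  # vertical forward diff
    H, W = f.shape
    return float(np.sum(np.hypot(dx, dy)) / (W * H))
```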
step 4: calculating the complexity of the input image, and selecting a corresponding GAN network for reconstruction according to the complexity.
The method specifically comprises the following substeps:
step 4.1: uniformly dividing the input image into 16 equal sub-images, calculating the complexity of each sub-image, and classifying it as high, medium or low complexity;
step 4.2: selecting the corresponding GAN network according to the complexity type and performing super-resolution reconstruction.
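The sub-steps above can be sketched as follows (the complexity thresholds and the 4×4 grid layout of the 16 sub-images are assumptions for illustration; the patent only states that the image is divided into 16 equal parts):

```python
import numpy as np

def classify_subimages(img: np.ndarray, complexity_fn, thresholds=(0.33, 0.66)):
    """Split the input image into a 4x4 grid of 16 sub-images and label each
    low/medium/high by its content complexity; the matching GAN network would
    then be used to reconstruct it."""
    rows, cols = img.shape[:2]
    rs, cs = rows // 4, cols // 4
    labels = []
    for i in range(4):
        for j in range(4):
            c = complexity_fn(img[i * rs:(i + 1) * rs, j * cs:(j + 1) * cs])
            labels.append("low" if c < thresholds[0]
                          else "medium" if c < thresholds[1] else "high")
    return labels
```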
The method classifies sample images according to the complexity of their content, constructs and trains deep network models of different complexities, and selects the corresponding network for reconstruction according to the content complexity of the input image to be super-resolved. A remote sensing image records a scene over a large spatial range and is not dominated by the fine detail of individual ground targets; it contains many large spatially homogeneous regions of consistent content complexity, such as urban areas, dry farmland, paddy fields, lakes and mountainous regions, and is therefore particularly suitable for pre-classified training and reconstruction.
The GAN deep learning network model is adopted not only because the GAN network currently gives the best super-resolution performance, but also because the high- and low-spatial-resolution remote sensing images used as training samples come from different sources and are multi-temporal images captured asynchronously, so no one-to-one pixel-level alignment between them can exist. This greatly limits the training of CNN networks, whereas the GAN network, being an unsupervised learning network, does not suffer from this problem.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (13)
1. A remote sensing image super-resolution reconstruction method based on a content-aware deep learning network is characterized by comprising the following steps:
step 1: collecting high and low resolution remote sensing image samples, and performing block processing;
step 2: calculating the complexity of each image block, dividing the image blocks into a high class, a middle class and a low class according to the complexity, and respectively forming a training sample set with high complexity, middle complexity and low complexity;
step 3: respectively training three GAN networks with high, medium and low complexity by using the obtained sample sets;
step 4: calculating the complexity of the input image, and selecting a corresponding GAN network for reconstruction according to the complexity.
2. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, characterized in that: in step 1, the high resolution image is evenly sliced into 128x128 image blocks and the low resolution image is evenly sliced into 64x64 image blocks.
3. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, wherein the complexity of the image block in step 2 is calculated by:
C = w_h×H + w_u×U + w_e×R;
wherein C represents the complexity of the image block, H represents the image information entropy, U represents the image gray uniformity, R represents the image edge ratio, and w_h, w_u and w_e are the respective weights, determined experimentally.
4. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 3, wherein the calculation formula of the image information entropy H is as follows:
H = −Σ_{i=1}^{K} (n_i/N)·log(n_i/N);
wherein n_i is the number of occurrences of gray level i, N is the total number of pixels, and K is the number of gray levels.
5. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 3, wherein the image gray level consistency U formula is as follows:
U = Σ_{i=1}^{M} Σ_{j=1}^{N} ( f(i,j) − f̄(i,j) )²;
wherein M and N are the numbers of rows and columns of the image respectively, f(i,j) is the gray value at pixel (i,j), and f̄(i,j) is the mean gray value of the pixels in the 3×3 neighborhood centered at (i,j).
6. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 3, wherein the image edge ratio R is calculated by the following formula:
R = E/(M×N);
wherein M and N are the numbers of rows and columns of the image respectively, and E is the number of edge pixels in the image, obtained by a differencing algorithm.
7. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to any one of claims 1 to 6, characterized in that: in step 2, among the training sample sets with high, medium and low complexity, the number of image blocks of the high-complexity training sample set is not less than 500000, that of the medium-complexity training sample set is not less than 300000, and that of the low-complexity training sample set is not less than 200000.
8. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network as claimed in claim 1, wherein the loss function of the GAN network training in step 3 is defined as:
C = w_v×l_VGG^SR + w_g×l_GAN^SR + w_t×l_TV^SR;
wherein C represents the loss function of network training, l_VGG^SR represents the content loss function, l_GAN^SR represents the generation-adversarial loss function, and l_TV^SR represents the total variation loss function; w_v, w_g and w_t are the respective weights, determined experimentally.
9. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 8, wherein the content loss function l_VGG^SR is:
l_VGG^SR = (1/(W_{i,j}·H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} ( φ_{i,j}(I^HR)_{x,y} − φ_{i,j}(G(I^LR))_{x,y} )²;
wherein φ_{i,j} represents the feature map obtained by the j-th convolutional layer before the i-th pooling layer in the VGG network; W_{i,j} and H_{i,j} represent the dimensions of the VGG feature map; I^HR represents the reference image, and G(I^LR) represents the reconstructed image.
10. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 8, wherein the generation-adversarial loss function l_GAN^SR is:
l_GAN^SR = Σ_{n=1}^{N} −log D(G(I_n^LR));
wherein G(I_n^LR) represents the reconstructed image, D(G(I_n^LR)) represents the probability with which the discriminator D judges the reconstructed result to be a natural image, and N represents the total number of training samples.
11. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 8, wherein the total variation loss function l_TV^SR is:
l_TV^SR = (1/(W·H)) Σ_{x=1}^{W} Σ_{y=1}^{H} ||∇G(I^LR)_{x,y}||;
wherein G(I^LR) represents the reconstructed image, and W and H represent the width and height of the reconstructed image.
12. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, wherein the specific implementation of step 4 comprises the following sub-steps:
step 4.1: uniformly dividing the input image into sub-images, calculating the complexity of each sub-image, and judging its complexity type as high, medium or low;
step 4.2: selecting a corresponding GAN network according to the complexity type to perform super-resolution reconstruction.
13. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 12, characterized in that: in step 4.1, the input image is divided evenly into 16 equal parts of subgraphs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710301990.6A CN107194872B (en) | 2017-05-02 | 2017-05-02 | Remote sensed image super-resolution reconstruction method based on perception of content deep learning network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107194872A true CN107194872A (en) | 2017-09-22 |
CN107194872B CN107194872B (en) | 2019-08-20 |
Family
ID=59872637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710301990.6A Active CN107194872B (en) | 2017-05-02 | 2017-05-02 | Remote sensed image super-resolution reconstruction method based on perception of content deep learning network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107194872B (en) |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107767384A (en) * | 2017-11-03 | 2018-03-06 | 电子科技大学 | A kind of image, semantic dividing method based on dual training |
CN108346133A (en) * | 2018-03-15 | 2018-07-31 | 武汉大学 | A kind of deep learning network training method towards video satellite super-resolution rebuilding |
CN108665509A (en) * | 2018-05-10 | 2018-10-16 | 广东工业大学 | A kind of ultra-resolution ratio reconstructing method, device, equipment and readable storage medium storing program for executing |
CN108711141A (en) * | 2018-05-17 | 2018-10-26 | 重庆大学 | The motion blur image blind restoration method of network is fought using improved production |
CN108830209A (en) * | 2018-06-08 | 2018-11-16 | 西安电子科技大学 | Based on the remote sensing images method for extracting roads for generating confrontation network |
CN108876870A (en) * | 2018-05-30 | 2018-11-23 | 福州大学 | A kind of domain mapping GANs image rendering methods considering texture complexity |
CN108921791A (en) * | 2018-07-03 | 2018-11-30 | 苏州中科启慧软件技术有限公司 | Lightweight image super-resolution improved method based on adaptive important inquiry learning |
CN108961217A (en) * | 2018-06-08 | 2018-12-07 | 南京大学 | A kind of detection method of surface flaw based on positive example training |
CN109117944A (en) * | 2018-08-03 | 2019-01-01 | 北京悦图遥感科技发展有限公司 | A kind of super resolution ratio reconstruction method and system of steamer target remote sensing image |
CN109785270A (en) * | 2019-01-18 | 2019-05-21 | 四川长虹电器股份有限公司 | A kind of image super-resolution method based on GAN |
CN109903223A (en) * | 2019-01-14 | 2019-06-18 | 北京工商大学 | A kind of image super-resolution method based on dense connection network and production confrontation network |
- 2017-05-02: Application CN201710301990.6A filed in China; granted as CN107194872B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825477A (en) * | 2015-01-06 | 2016-08-03 | 南京理工大学 | Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion |
US20170046816A1 (en) * | 2015-08-14 | 2017-02-16 | Sharp Laboratories Of America, Inc. | Super resolution image enhancement technique |
CN105931179A (en) * | 2016-04-08 | 2016-09-07 | 武汉大学 | Image super-resolution method and system based on joint sparse representation and deep learning |
CN106203269A (en) * | 2016-06-29 | 2016-12-07 | 武汉大学 | Face super-resolution processing method and system based on deformable local patches |
Non-Patent Citations (1)
Title |
---|
Hu Chuanping, et al.: "Research on image super-resolution algorithms based on deep learning", Journal of Railway Police College * |
Cited By (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107767384A (en) * | 2017-11-03 | 2018-03-06 | 电子科技大学 | Image semantic segmentation method based on adversarial training |
CN111712830B (en) * | 2018-02-21 | 2024-02-09 | 罗伯特·博世有限公司 | Real-time object detection using depth sensors |
CN111712830A (en) * | 2018-02-21 | 2020-09-25 | 罗伯特·博世有限公司 | Real-time object detection using depth sensors |
CN108346133B (en) * | 2018-03-15 | 2021-06-04 | 武汉大学 | Deep learning network training method for super-resolution reconstruction of video satellite |
CN108346133A (en) * | 2018-03-15 | 2018-07-31 | 武汉大学 | Deep learning network training method for super-resolution reconstruction of video satellite imagery |
CN108665509A (en) * | 2018-05-10 | 2018-10-16 | 广东工业大学 | Super-resolution reconstruction method, apparatus, device, and readable storage medium |
CN108711141A (en) * | 2018-05-17 | 2018-10-26 | 重庆大学 | Motion-blurred image blind restoration method using an improved generative adversarial network |
CN108711141B (en) * | 2018-05-17 | 2022-02-15 | 重庆大学 | Motion-blurred image blind restoration method using an improved generative adversarial network |
CN108876870A (en) * | 2018-05-30 | 2018-11-23 | 福州大学 | Domain-mapping GANs image colorization method considering texture complexity |
CN108876870B (en) * | 2018-05-30 | 2022-12-13 | 福州大学 | Domain-mapping GANs image colorization method considering texture complexity |
CN108961217A (en) * | 2018-06-08 | 2018-12-07 | 南京大学 | Surface defect detection method based on positive-example training |
CN108830209B (en) * | 2018-06-08 | 2021-12-17 | 西安电子科技大学 | Remote sensing image road extraction method based on generative adversarial network |
CN108961217B (en) * | 2018-06-08 | 2022-09-16 | 南京大学 | Surface defect detection method based on positive-example training |
CN108830209A (en) * | 2018-06-08 | 2018-11-16 | 西安电子科技大学 | Remote sensing image road extraction method based on generative adversarial network |
CN108921791A (en) * | 2018-07-03 | 2018-11-30 | 苏州中科启慧软件技术有限公司 | Lightweight image super-resolution improvement method based on adaptive importance learning |
CN110738597A (en) * | 2018-07-19 | 2020-01-31 | 北京连心医疗科技有限公司 | Size-adaptive preprocessing method for multi-resolution medical images in neural networks |
CN109117944A (en) * | 2018-08-03 | 2019-01-01 | 北京悦图遥感科技发展有限公司 | Super-resolution reconstruction method and system for ship target remote sensing images |
CN109117944B (en) * | 2018-08-03 | 2021-01-15 | 北京悦图数据科技发展有限公司 | Super-resolution reconstruction method and system for ship target remote sensing image |
CN109949219A (en) * | 2019-01-12 | 2019-06-28 | 深圳先进技术研究院 | Super-resolution image reconstruction method, device, and equipment |
CN109949219B (en) * | 2019-01-12 | 2021-03-26 | 深圳先进技术研究院 | Super-resolution image reconstruction method, device, and equipment |
CN109903223B (en) * | 2019-01-14 | 2023-08-25 | 北京工商大学 | Image super-resolution method based on dense connection network and generative adversarial network |
CN109903223A (en) * | 2019-01-14 | 2019-06-18 | 北京工商大学 | Image super-resolution method based on dense connection network and generative adversarial network |
CN109785270A (en) * | 2019-01-18 | 2019-05-21 | 四川长虹电器股份有限公司 | Image super-resolution method based on GAN |
US11356619B2 (en) | 2019-03-06 | 2022-06-07 | Tencent Technology (Shenzhen) Company Limited | Video synthesis method, model training method, device, and storage medium |
WO2020177582A1 (en) * | 2019-03-06 | 2020-09-10 | 腾讯科技(深圳)有限公司 | Video synthesis method, model training method, device and storage medium |
CN110033033A (en) * | 2019-04-01 | 2019-07-19 | 南京谱数光电科技有限公司 | Generator model training method based on CGANs |
CN110163852B (en) * | 2019-05-13 | 2021-10-15 | 北京科技大学 | Conveyor belt real-time deviation detection method based on lightweight convolutional neural network |
CN110163852A (en) * | 2019-05-13 | 2019-08-23 | 北京科技大学 | Conveyor belt real-time deviation detection method based on lightweight convolutional neural network |
US11263726B2 (en) | 2019-05-16 | 2022-03-01 | Here Global B.V. | Method, apparatus, and system for task driven approaches to super resolution |
CN110599401A (en) * | 2019-08-19 | 2019-12-20 | 中国科学院电子学研究所 | Remote sensing image super-resolution reconstruction method, processing device and readable storage medium |
CN110807740A (en) * | 2019-09-17 | 2020-02-18 | 北京大学 | Image enhancement method and system for window image of monitoring scene |
CN110689086A (en) * | 2019-10-08 | 2020-01-14 | 郑州轻工业学院 | Semi-supervised high-resolution remote sensing image scene classification method based on generative adversarial network |
CN112825187A (en) * | 2019-11-21 | 2021-05-21 | 福州瑞芯微电子股份有限公司 | Super-resolution method, medium and device based on machine learning |
CN111144466B (en) * | 2019-12-17 | 2022-05-13 | 武汉大学 | Image-sample-adaptive deep metric learning method |
CN111144466A (en) * | 2019-12-17 | 2020-05-12 | 武汉大学 | Image-sample-adaptive deep metric learning method |
CN111260705A (en) * | 2020-01-13 | 2020-06-09 | 武汉大学 | Prostate MR image multi-task registration method based on deep convolutional neural network |
CN111260705B (en) * | 2020-01-13 | 2022-03-15 | 武汉大学 | Prostate MR image multi-task registration method based on deep convolutional neural network |
CN111275713A (en) * | 2020-02-03 | 2020-06-12 | 武汉大学 | Cross-domain semantic segmentation method based on adversarial self-ensembling network |
CN111275713B (en) * | 2020-02-03 | 2022-04-12 | 武汉大学 | Cross-domain semantic segmentation method based on adversarial self-ensembling network |
CN111915545A (en) * | 2020-08-06 | 2020-11-10 | 中北大学 | Self-supervised learning fusion method for multiband images |
CN111915545B (en) * | 2020-08-06 | 2022-07-05 | 中北大学 | Self-supervised learning fusion method for multiband images |
CN112700003A (en) * | 2020-12-25 | 2021-04-23 | 深圳前海微众银行股份有限公司 | Network structure search method, device, equipment, storage medium and program product |
CN113139576A (en) * | 2021-03-22 | 2021-07-20 | 广东省科学院智能制造研究所 | Deep learning image classification method and system combining image complexity |
CN113139576B (en) * | 2021-03-22 | 2024-03-12 | 广东省科学院智能制造研究所 | Deep learning image classification method and system combining image complexity |
CN113421189A (en) * | 2021-06-21 | 2021-09-21 | Oppo广东移动通信有限公司 | Image super-resolution processing method and device and electronic equipment |
CN113538246A (en) * | 2021-08-10 | 2021-10-22 | 西安电子科技大学 | Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network |
CN116402691A (en) * | 2023-06-05 | 2023-07-07 | 四川轻化工大学 | Image super-resolution method and system based on rapid image feature stitching |
CN116402691B (en) * | 2023-06-05 | 2023-08-04 | 四川轻化工大学 | Image super-resolution method and system based on rapid image feature stitching |
CN117911285A (en) * | 2024-01-12 | 2024-04-19 | 北京数慧时空信息技术有限公司 | Remote sensing image restoration method based on time-series images |
Also Published As
Publication number | Publication date |
---|---|
CN107194872B (en) | 2019-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107194872B (en) | Remote sensing image super-resolution reconstruction method based on content-aware deep learning network | |
US20210264568A1 (en) | Super resolution using a generative adversarial network | |
Wang et al. | Ultra-dense GAN for satellite imagery super-resolution | |
CN111127374B (en) | Pan-sharpening method based on multi-scale dense network | |
Zhou et al. | Scale adaptive image cropping for UAV object detection | |
Gu et al. | Blind image quality assessment via learnable attention-based pooling | |
CN111639587B (en) | Hyperspectral image classification method based on multi-scale spectrum space convolution neural network | |
CN102402685B (en) | Triplet Markov field SAR image segmentation method based on Gabor features | |
CN112233129B (en) | Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device | |
CN109344818B (en) | Light field salient object detection method based on deep convolutional network | |
CN112926652B (en) | Fish fine granularity image recognition method based on deep learning | |
CN104616308A (en) | Multiscale level set image segmentation method based on kernel fuzzy clustering | |
Veeravasarapu et al. | Adversarially tuned scene generation | |
CN114612456B (en) | Billet automatic semantic segmentation recognition method based on deep learning | |
CN109146925A (en) | Salient object detection method in dynamic scenes | |
CN104732230A (en) | Pathology image local-feature extracting method based on cell nucleus statistical information | |
Zhou et al. | Attention transfer network for nature image matting | |
Wang et al. | Single image haze removal via attention-based transmission estimation and classification fusion network | |
CN116630971A (en) | Wheat scab spore segmentation method based on CRF_Resunate++ network | |
Shit et al. | An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection | |
Chen et al. | A no-reference quality assessment metric for dynamic 3D digital human | |
CN111401209B (en) | Action recognition method based on deep learning | |
Bai et al. | Soil CT image quality enhancement via an improved super-resolution reconstruction method based on GAN | |
Zeng et al. | Swgan: A new algorithm of adhesive rice image segmentation based on improved generative adversarial networks | |
Yan et al. | Repeatable adaptive keypoint detection via self-supervised learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||