CN107194872A - Remote sensed image super-resolution reconstruction method based on perception of content deep learning network - Google Patents
- Publication number: CN107194872A (application CN201710301990.6A)
- Authority: CN (China)
- Prior art keywords: image, complexity, perception, deep learning
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4053—Super resolution, i.e. output image resolution higher than sensor resolution
Abstract
The invention discloses a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network. The invention proposes a comprehensive measurement index and computation method for image content complexity. On this basis, sample images are classified by content complexity, and three deep GAN network models of high, medium and low complexity are built and trained; then, according to the content complexity of the input image to be super-resolved, the corresponding network is chosen for reconstruction. To improve the learning performance of the GAN networks, the invention also gives an optimized loss function definition. The invention overcomes the contradiction between the over-fitting and under-fitting that are pervasive in machine-learning-based super-resolution reconstruction, and effectively improves the super-resolution reconstruction accuracy of remote sensing images.
Description
Technical field
The invention belongs to the technical field of image processing and relates to an image super-resolution reconstruction method, in particular to a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.
Background
Remote sensing images of high spatial resolution can describe ground objects in finer detail and provide abundant detail information, so people often wish to obtain images of high spatial resolution. With the rapid development of space exploration theory and technology, remote sensing images of meter-level and even sub-meter-level spatial resolution (such as IKONOS and QuickBird) are progressively moving toward application, but their temporal resolution is generally rather low. In contrast, some sensors with lower spatial resolution (such as MODIS) have very high temporal resolution and can acquire large-scale remote sensing images within a short time. If images of high spatial resolution could be reconstructed from these lower-spatial-resolution images, remote sensing images with both high spatial resolution and high temporal resolution could be obtained. Therefore, reconstructing high-resolution images from low-resolution remote sensing images is very important.
In recent years, deep learning has been widely applied to solving various problems in computer vision and image processing. In 2014, C. Dong et al. of The Chinese University of Hong Kong took the lead in introducing deep CNN learning into image super-resolution reconstruction, achieving better results than the previously mainstream sparse-representation methods. In 2015, J. Kim et al. of Seoul National University further proposed an improved method based on recursive networks, with a further lift in performance. In 2016, Y. Romano et al. of Google developed a fast and accurate learning method; shortly afterwards, C. Ledig et al. of Twitter applied the GAN (generative adversarial network) to image super-resolution, achieving the best reconstruction effect so far. Moreover, the underlying model of the GAN is a deep belief network that no longer strictly depends on supervised learning; it can be trained even without one-to-one high/low-resolution image sample pairs.
Once the deep learning model and network architecture are determined, the performance of a super-resolution method based on deep learning is to a large extent determined by the quality of the network model's training. Training a deep learning network model is not a matter of "the more thorough the better"; rather, sufficient and suitable sample learning should be carried out (just as the number of layers of a deep network model is not "the more the better"). For complex images, more training samples are needed so that more image features can be learned, but such a network easily over-fits images with simple content, causing blurred super-resolution results. Conversely, reducing the training intensity avoids over-fitting for images with simple content, but causes under-fitting for images with complex content, reducing the naturalness and fidelity of the reconstructed image. How to train a network that simultaneously satisfies the high-quality reconstruction demands of both complex and simple images is a problem that deep-learning-based methods cannot avoid in practical super-resolution applications.
Summary of the invention
To solve the above technical problem, the present invention proposes a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.
The technical solution adopted by the present invention is a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network, characterized by comprising the following steps:
Step 1: collect high/low-resolution remote sensing image samples and divide them into blocks;
Step 2: calculate the complexity of each image block, divide the blocks into three classes (high, medium and low) by complexity, and form the training sample sets of high, medium and low complexity respectively;
Step 3: train three GAN networks of high, medium and low complexity respectively, using the sample sets obtained;
Step 4: calculate the complexity of the input image and choose the corresponding GAN network for reconstruction according to its complexity.
Compared with existing image super-resolution methods, the present invention has the following advantages:
(1) Through the simple idea of classifying images, the present invention successfully overcomes the contradiction between the over-fitting and under-fitting pervasive in machine-learning-based super-resolution reconstruction, and effectively improves the super-resolution reconstruction accuracy of remote sensing images;
(2) The deep learning network model on which the method is based is the GAN, whose training does not depend on strictly one-to-one aligned high/low-resolution sample blocks. This improves application universality and makes the method particularly suitable for the asynchronous multi-source imaging circumstances of high/low-resolution remote sensing images.
Brief description of the drawings
Fig. 1 is the flow chart of the embodiment of the present invention.
Detailed description of the embodiments
To help those of ordinary skill in the art understand and implement the present invention, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the implementation examples described herein are merely for illustrating and explaining the present invention, not for limiting it.
Referring to Fig. 1, the remote sensing image super-resolution reconstruction method based on a content-aware deep learning network provided by the present invention comprises the following steps:
Step 1: collect high/low-resolution remote sensing image samples; high-resolution images are evenly cut into 128×128 image blocks, and low-resolution images are evenly cut into 64×64 image blocks.
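As a concrete illustration of the blocking in step 1, the sketch below cuts an image evenly into non-overlapping blocks (NumPy; block sizes 128 and 64 as in the embodiment; the `tile_image` helper and the sample sizes are ours, not part of the patent):

```python
import numpy as np

def tile_image(img, block):
    """Cut an image evenly into non-overlapping block x block patches,
    discarding any partial patch at the right/bottom edge."""
    h, w = img.shape[:2]
    return [img[i:i + block, j:j + block]
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]

# e.g. a 512x512 high-resolution sample and its paired 256x256
# low-resolution version (2x scale) each yield 16 blocks
hr_blocks = tile_image(np.zeros((512, 512), dtype=np.uint8), 128)
lr_blocks = tile_image(np.zeros((256, 256), dtype=np.uint8), 64)
```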
Step 2: calculate the complexity of each image block, divide the blocks into three classes (high, medium and low) by complexity, and form the training sample sets of high, medium and low complexity respectively.
The computing principle and method of image complexity are as follows:
The complexity of image content includes texture complexity and structural complexity. Information entropy and gray uniformity characterize texture complexity well, while the edge ratio of targets describes the structural complexity in the image. The content complexity measurement index C of an image is composed of the information entropy H, the gray uniformity U and the edge ratio R, weighted as follows:
C = w_h × H + w_u × U + w_e × R;
where w_h, w_u and w_e are the respective weights, determined by experiment.
The respective computation methods of the information entropy, gray uniformity and edge ratio are given below.
(1) Information entropy
The information entropy reflects the number of gray levels in an image and the occurrence of pixels at each gray level; a higher entropy indicates a more complex image texture. The calculation formula of the image information entropy H is:
H = -Σ_{i=1}^{K} (n_i/N) · log(n_i/N);
where N is the total number of pixels, n_i is the number of pixels at gray level i, and K is the number of gray levels.
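A minimal NumPy sketch of the entropy computation above, with the gray levels taken from the pixels actually present (the function name is ours):

```python
import numpy as np

def image_entropy(img):
    """H = -sum_{i=1..K} (n_i/N) * log(n_i/N), where n_i counts the pixels
    at gray level i, N is the total pixel count and K the number of gray
    levels actually present in the image."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / img.size
    return float(-(p * np.log(p)).sum())

flat = np.full((8, 8), 7, dtype=np.uint8)                # one gray level -> H = 0
varied = np.arange(256, dtype=np.uint8).reshape(16, 16)  # 256 equiprobable levels
```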
(2) Gray uniformity
The gray uniformity reflects the evenness of an image: a smaller value corresponds to a simple image, a larger value to a complex image. The gray uniformity formula is:
U = Σ_{i=1}^{M} Σ_{j=1}^{N} (f(i,j) − f̄(i,j))²;
where M and N are respectively the number of rows and columns of the image, f(i,j) is the gray value at pixel (i,j), and f̄(i,j) is the gray mean of the 3×3 neighborhood pixels centered on (i,j).
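The uniformity measure can be sketched as a plain double loop (NumPy; border pixels here use the partial neighborhood that fits inside the image, a detail the patent leaves open):

```python
import numpy as np

def gray_uniformity(img):
    """U = sum_{i,j} (f(i,j) - mean of the 3x3 neighborhood of (i,j))^2.
    Border pixels use the partial neighborhood inside the image."""
    f = img.astype(np.float64)
    M, N = f.shape
    u = 0.0
    for i in range(M):
        for j in range(N):
            nb = f[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            u += (f[i, j] - nb.mean()) ** 2
    return u
```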
(3) Edge ratio
The number of targets in a scene directly reflects the complexity of the image: the more targets there are, the more complex the image typically is, and vice versa. Because counting targets involves complex image segmentation and is not easy to compute, while the amount of object edges indirectly reflects the number and complexity of objects in the image, edges can be used to describe image complexity. The proportion of the image occupied by object edges is described by the edge ratio, whose calculation formula is:
R = E / (M × N);
where M and N are respectively the number of rows and columns of the image, and E is the number of edge pixels in the image. Object edges appear where the gray value changes significantly; they can be obtained by difference algorithms, typically by detecting edge pixels with an edge detection operator (such as the Canny or Sobel operator).
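A dependency-free sketch of the edge ratio; in practice an OpenCV detector such as `cv2.Canny` or `cv2.Sobel` would supply the edge pixels, so the hand-rolled Sobel and the threshold value below are just stand-ins:

```python
import numpy as np

def edge_ratio(img, thresh=100.0):
    """R = E / (M*N): fraction of pixels whose Sobel gradient magnitude
    exceeds a threshold (threshold value is an arbitrary placeholder).
    Border pixels are skipped, i.e. never counted as edges."""
    f = img.astype(np.float64)
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    M, N = f.shape
    edges = 0
    for i in range(1, M - 1):
        for j in range(1, N - 1):
            win = f[i - 1:i + 2, j - 1:j + 2]
            if np.hypot((win * kx).sum(), (win * ky).sum()) > thresh:
                edges += 1
    return edges / (M * N)
```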
The high-complexity sample set contains no fewer than 500,000 image blocks, the medium-complexity sample set no fewer than 300,000, and the low-complexity sample set no fewer than 200,000.
Step 3: train three GAN networks of high, medium and low complexity respectively, using the sample sets obtained.
The loss function of GAN network training is defined as follows:
The loss function of GAN network training comprises a content loss, a generative-adversarial loss and a total variation loss. The content loss characterizes the distortion of image content; the generative-adversarial loss describes how distinguishable the statistical properties of the generated result are from those of natural images; the total variation loss characterizes the continuity of image content. The overall loss function is a weighted combination of the three losses:
l = w_v × l_VGG^SR + w_g × l_GAN^SR + w_t × l_TV^SR;
where w_v, w_g and w_t are the respective weights, determined by experiment.
The computation method of each loss is given below.
(1) Content loss
Traditional content loss functions use the MSE (pixel mean square error), which considers the pixel-by-pixel loss of image content; network training based on MSE washes out the high-frequency components of the image structure and makes the image overly blurred. To overcome this defect, a feature loss function of the image is introduced here. Since manually defining and extracting valuable image features is itself a complicated job, and considering that deep learning has the ability to extract features automatically, this method borrows the hidden-layer features obtained by VGG network training for the measurement. Let φ_{i,j} denote the feature map obtained by the j-th convolutional layer before the i-th pooling layer in the VGG network; the feature loss is defined as the Euclidean distance between the VGG features of the reconstructed image G(I^LR) and the reference image I^HR, i.e.:
l_VGG^SR = (1/(W_{i,j} × H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} (φ_{i,j}(I^HR)_{x,y} − φ_{i,j}(G(I^LR))_{x,y})²;
where W_{i,j} and H_{i,j} denote the dimensions of the VGG feature map.
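Assuming the VGG feature maps φ_{i,j} have already been computed (a pretrained VGG network is required for that and is not reproduced here), the feature loss reduces to a normalized squared distance; the sketch below operates on two such feature maps directly:

```python
import numpy as np

def vgg_feature_loss(feat_hr, feat_sr):
    """l_VGG = (1/(W*H)) * sum_{x,y} (phi(I_HR)_{x,y} - phi(G(I_LR))_{x,y})^2.
    feat_hr / feat_sr stand for one VGG hidden-layer feature map phi_{i,j}
    of the reference and reconstructed images."""
    diff = feat_hr.astype(np.float64) - feat_sr.astype(np.float64)
    h, w = diff.shape
    return float((diff ** 2).sum() / (w * h))
```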
(2) Generative-adversarial loss
The generative-adversarial loss attends to the generative function of the GAN network; it encourages the network to produce solutions consistent with the manifold space of natural images, so that the generated result cannot be distinguished from natural images by the discriminator. The generative-adversarial loss is measured by the discriminator's discrimination probabilities over all training samples, with the following formula:
l_GAN^SR = Σ_{n=1}^{N} −log D(G(I_n^LR));
where D(G(I_n^LR)) denotes the probability that the discriminator D judges the reconstruction result G(I_n^LR) to be a natural image, and N denotes the total number of training samples.
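Given the discriminator's output probabilities for a batch of reconstructions, the adversarial loss above can be sketched as (the clipping guard is our addition, to avoid log(0)):

```python
import numpy as np

def adversarial_loss(d_probs):
    """l_GAN = sum_{n=1..N} -log D(G(I_n^LR)): d_probs[n] is the
    discriminator's probability that reconstruction n is a natural image,
    clipped away from zero before taking the log."""
    d = np.clip(np.asarray(d_probs, dtype=np.float64), 1e-12, 1.0)
    return float(-np.log(d).sum())
```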
(3) Total variation loss
The total variation loss is added to strengthen the local coherence of image content in the learning result; its calculation formula is:
l_TV^SR = (1/(W × H)) Σ_{x=1}^{W} Σ_{y=1}^{H} ||∇G(I^LR)_{x,y}||;
where W and H denote the width and height of the reconstructed image G(I^LR).
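A NumPy sketch of the total variation loss, taking the gradient as forward differences with zero gradient at the last row/column (the patent does not fix a particular discretization of ∇):

```python
import numpy as np

def tv_loss(img):
    """l_TV = (1/(W*H)) * sum_{x,y} ||grad(img)_{x,y}||, with forward
    differences; the gradient is set to zero at the last row/column."""
    f = img.astype(np.float64)
    H, W = f.shape
    dx = np.zeros((H, W))
    dy = np.zeros((H, W))
    dx[:, :-1] = f[:, 1:] - f[:, :-1]
    dy[:-1, :] = f[1:, :] - f[:-1, :]
    return float(np.hypot(dx, dy).sum() / (W * H))
```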
Step 4: calculate the complexity of the input image and choose the corresponding GAN network for reconstruction according to its complexity.
This step consists of the following sub-steps:
Step 4.1: evenly divide the input image into 16 equal sub-images, calculate the complexity of each sub-image, and judge whether it belongs to the high, medium or low complexity type;
Step 4.2: choose the corresponding GAN network according to the complexity type and carry out super-resolution reconstruction.
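The sub-steps above can be sketched as follows; the complexity thresholds separating the three classes are hypothetical here, since the patent derives the classes from the training-data complexity distribution:

```python
import numpy as np

def split_16(img):
    """Step 4.1: evenly divide the input image into a 4x4 grid of
    16 equal sub-images."""
    h, w = img.shape
    return [img[i * h // 4:(i + 1) * h // 4, j * w // 4:(j + 1) * w // 4]
            for i in range(4) for j in range(4)]

def pick_network(c, t_low, t_high):
    """Step 4.2 (dispatch only): map a sub-image's complexity score c to
    one of the three trained GANs. t_low / t_high are hypothetical
    thresholds, not values given in the patent."""
    if c < t_low:
        return "low"
    if c < t_high:
        return "mid"
    return "high"
```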
The present invention classifies sample images by image content complexity, builds and trains deep network models of different complexity, and then, according to the content complexity of the input image to be super-resolved, chooses the corresponding network for reconstruction. Remote sensing images record large-scale scenes; because they are little affected by the fine information of ground targets, spatially homogeneous areas of consistent content complexity are numerous and large, such as large-scale ground features like cities, dry land, paddy fields, lakes and mountains. This makes such images comparatively suitable for pre-classified training and reconstruction.
The GAN deep learning network model is used here not only because the GAN currently gives the best super-resolution performance, but also because the high- and low-spatial-resolution remote sensing images used as training samples come from different sources and belong to multi-temporal images shot asynchronously; there cannot be one-by-one alignment in the pixel sense, which greatly limits the training of CNN networks, whereas the GAN is a non-supervised learning network and therefore free of this problem.
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should be understood that the above description of the preferred embodiment is relatively detailed and therefore should not be considered a limitation of the patent protection scope of the present invention. Under the enlightenment of the present invention, one of ordinary skill in the art may make replacements or modifications without departing from the scope protected by the claims of the present invention, all of which fall within the protection scope of the present invention; the claimed scope of the present invention shall be determined by the appended claims.
Claims (13)
1. A remote sensing image super-resolution reconstruction method based on a content-aware deep learning network, characterized by comprising the following steps:
Step 1: collecting high/low-resolution remote sensing image samples and dividing them into blocks;
Step 2: calculating the complexity of each image block, dividing the blocks into three classes (high, medium and low) by complexity, and forming the training sample sets of high, medium and low complexity respectively;
Step 3: training three GAN networks of high, medium and low complexity respectively, using the sample sets obtained;
Step 4: calculating the complexity of the input image and choosing the corresponding GAN network for reconstruction according to its complexity.
2. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that: in step 1, high-resolution images are evenly cut into 128×128 image blocks and low-resolution images are evenly cut into 64×64 image blocks.
3. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that the complexity of an image block in step 2 is computed as:
C = w_h × H + w_u × U + w_e × R;
where C denotes the complexity of the image block, H denotes the image information entropy, U denotes the image gray uniformity, R denotes the image edge ratio, and w_h, w_u, w_e are the respective weights, determined by experiment.
4. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 3, characterized in that the calculation formula of the image information entropy H is:
H = -Σ_{i=1}^{K} (n_i/N) · log(n_i/N);
where N is the total number of pixels, n_i is the number of pixels at gray level i, and K is the number of gray levels.
5. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 3, characterized in that the formula of the image gray uniformity U is:
U = Σ_{i=1}^{M} Σ_{j=1}^{N} (f(i,j) − f̄(i,j))²;
where M and N are respectively the number of rows and columns of the image, f(i,j) is the gray value at pixel (i,j), and f̄(i,j) is the gray mean of the 3×3 neighborhood pixels centered on (i,j).
6. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 3, characterized in that the calculation formula of the image edge ratio R is:
R = E / (M × N);
where M and N are respectively the number of rows and columns of the image, and E is the number of edge pixels in the image, obtained by a difference algorithm.
7. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to any one of claims 1-6, characterized in that: in the training sample sets of high, medium and low complexity in step 2, the high-complexity training sample set contains no fewer than 500,000 image blocks, the medium-complexity training sample set no fewer than 300,000, and the low-complexity training sample set no fewer than 200,000.
8. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that the loss function of GAN network training in step 3 is defined as:
l = w_v × l_VGG^SR + w_g × l_GAN^SR + w_t × l_TV^SR;
where l represents the loss function of network training, l_VGG^SR represents the content loss function, l_GAN^SR represents the generative-adversarial loss function, and l_TV^SR represents the total variation loss function; w_v, w_g and w_t are the respective weights, determined by experiment.
9. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 8, characterized in that the content loss function l_VGG^SR is:
l_VGG^SR = (1/(W_{i,j} × H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} (φ_{i,j}(I^HR)_{x,y} − φ_{i,j}(G(I^LR))_{x,y})²;
where φ_{i,j} represents the feature map obtained by the j-th convolutional layer before the i-th pooling layer in the VGG network, W_{i,j} and H_{i,j} denote the dimensions of the VGG feature map, I^HR represents the reference image, and G(I^LR) represents the reconstructed image.
10. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 8, characterized in that the generative-adversarial loss function l_GAN^SR is:
l_GAN^SR = Σ_{n=1}^{N} −log D(G(I_n^LR));
where G(I_n^LR) represents the reconstructed image and D(G(I_n^LR)) represents the probability that the discriminator D judges the reconstruction result G(I_n^LR) to be a natural image; N represents the total number of training samples.
11. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 8, characterized in that the total variation loss function l_TV^SR is:
l_TV^SR = (1/(W × H)) Σ_{x=1}^{W} Σ_{y=1}^{H} ||∇G(I^LR)_{x,y}||;
where G(I^LR) represents the reconstructed image, and W and H represent the width and height of the reconstructed image.
12. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that step 4 comprises the following sub-steps:
Step 4.1: evenly dividing the input image into sub-images, calculating the complexity of each sub-image, and judging whether it belongs to the high, medium or low complexity type;
Step 4.2: choosing the corresponding GAN network according to the complexity type and carrying out super-resolution reconstruction.
13. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 12, characterized in that: in step 4.1, the input image is evenly divided into 16 equal sub-images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710301990.6A CN107194872B (en) | 2017-05-02 | 2017-05-02 | Remote sensed image super-resolution reconstruction method based on perception of content deep learning network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107194872A true CN107194872A (en) | 2017-09-22 |
CN107194872B CN107194872B (en) | 2019-08-20 |
Family
ID=59872637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710301990.6A Active CN107194872B (en) | 2017-05-02 | 2017-05-02 | Remote sensed image super-resolution reconstruction method based on perception of content deep learning network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107194872B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825477A (en) * | 2015-01-06 | 2016-08-03 | 南京理工大学 | Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion |
CN105931179A (en) * | 2016-04-08 | 2016-09-07 | 武汉大学 | Joint sparse representation and deep learning-based image super resolution method and system |
CN106203269A (en) * | 2016-06-29 | 2016-12-07 | 武汉大学 | A kind of based on can the human face super-resolution processing method of deformation localized mass and system |
US20170046816A1 (en) * | 2015-08-14 | 2017-02-16 | Sharp Laboratories Of America, Inc. | Super resolution image enhancement technique |
Non-Patent Citations (1)
Title |
---|
Hu Chuanping, et al.: "Research on image super-resolution algorithms based on deep learning" (基于深度学习的图像超分辨率算法研究), Journal of Railway Police College (《铁道警察学院学报》) * |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107767384A (en) * | 2017-11-03 | 2018-03-06 | 电子科技大学 | A kind of image, semantic dividing method based on dual training |
CN111712830A (en) * | 2018-02-21 | 2020-09-25 | 罗伯特·博世有限公司 | Real-time object detection using depth sensors |
CN111712830B (en) * | 2018-02-21 | 2024-02-09 | 罗伯特·博世有限公司 | Real-time object detection using depth sensors |
CN108346133A (en) * | 2018-03-15 | 2018-07-31 | 武汉大学 | Deep learning network training method for video satellite super-resolution reconstruction |
CN108346133B (en) * | 2018-03-15 | 2021-06-04 | 武汉大学 | Deep learning network training method for super-resolution reconstruction of video satellite |
CN108665509A (en) * | 2018-05-10 | 2018-10-16 | 广东工业大学 | Super-resolution reconstruction method, device, equipment and readable storage medium |
CN108711141A (en) * | 2018-05-17 | 2018-10-26 | 重庆大学 | Blind restoration method for motion-blurred images using an improved generative adversarial network |
CN108711141B (en) * | 2018-05-17 | 2022-02-15 | 重庆大学 | Motion blurred image blind restoration method using improved generation type countermeasure network |
CN108876870B (en) * | 2018-05-30 | 2022-12-13 | 福州大学 | Domain mapping GANs image coloring method considering texture complexity |
CN108876870A (en) * | 2018-05-30 | 2018-11-23 | 福州大学 | Domain mapping GANs image coloring method considering texture complexity |
CN108961217B (en) * | 2018-06-08 | 2022-09-16 | 南京大学 | Surface defect detection method based on regular training |
CN108961217A (en) * | 2018-06-08 | 2018-12-07 | 南京大学 | Surface defect detection method based on positive example training |
CN108830209B (en) * | 2018-06-08 | 2021-12-17 | 西安电子科技大学 | Remote sensing image road extraction method based on generation countermeasure network |
CN108830209A (en) * | 2018-06-08 | 2018-11-16 | 西安电子科技大学 | Remote sensing image road extraction method based on generative adversarial network |
CN108921791A (en) * | 2018-07-03 | 2018-11-30 | 苏州中科启慧软件技术有限公司 | Lightweight image super-resolution improvement method based on adaptive importance learning |
CN110738597A (en) * | 2018-07-19 | 2020-01-31 | 北京连心医疗科技有限公司 | Size self-adaptive preprocessing method of multi-resolution medical image in neural network |
CN109117944A (en) * | 2018-08-03 | 2019-01-01 | 北京悦图遥感科技发展有限公司 | Super-resolution reconstruction method and system for ship target remote sensing images |
CN109117944B (en) * | 2018-08-03 | 2021-01-15 | 北京悦图数据科技发展有限公司 | Super-resolution reconstruction method and system for ship target remote sensing image |
CN109949219A (en) * | 2019-01-12 | 2019-06-28 | 深圳先进技术研究院 | Super-resolution image reconstruction method, device and equipment |
CN109949219B (en) * | 2019-01-12 | 2021-03-26 | 深圳先进技术研究院 | Reconstruction method, device and equipment of super-resolution image |
CN109903223B (en) * | 2019-01-14 | 2023-08-25 | 北京工商大学 | Image super-resolution method based on dense connection network and generation type countermeasure network |
CN109903223A (en) * | 2019-01-14 | 2019-06-18 | 北京工商大学 | Image super-resolution method based on densely connected network and generative adversarial network |
CN109785270A (en) * | 2019-01-18 | 2019-05-21 | 四川长虹电器股份有限公司 | Image super-resolution method based on GAN |
US11356619B2 (en) | 2019-03-06 | 2022-06-07 | Tencent Technology (Shenzhen) Company Limited | Video synthesis method, model training method, device, and storage medium |
WO2020177582A1 (en) * | 2019-03-06 | 2020-09-10 | 腾讯科技(深圳)有限公司 | Video synthesis method, model training method, device and storage medium |
CN110033033A (en) * | 2019-04-01 | 2019-07-19 | 南京谱数光电科技有限公司 | Generator model training method based on CGANs |
CN110163852A (en) * | 2019-05-13 | 2019-08-23 | 北京科技大学 | Real-time conveyor belt deviation detection method based on lightweight convolutional neural network |
CN110163852B (en) * | 2019-05-13 | 2021-10-15 | 北京科技大学 | Conveying belt real-time deviation detection method based on lightweight convolutional neural network |
US11263726B2 (en) | 2019-05-16 | 2022-03-01 | Here Global B.V. | Method, apparatus, and system for task driven approaches to super resolution |
CN110599401A (en) * | 2019-08-19 | 2019-12-20 | 中国科学院电子学研究所 | Remote sensing image super-resolution reconstruction method, processing device and readable storage medium |
CN110807740A (en) * | 2019-09-17 | 2020-02-18 | 北京大学 | Image enhancement method and system for window image of monitoring scene |
CN110689086A (en) * | 2019-10-08 | 2020-01-14 | 郑州轻工业学院 | Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network |
CN111144466B (en) * | 2019-12-17 | 2022-05-13 | 武汉大学 | Image sample adaptive deep metric learning method |
CN111144466A (en) * | 2019-12-17 | 2020-05-12 | 武汉大学 | Image sample adaptive deep metric learning method |
CN111260705A (en) * | 2020-01-13 | 2020-06-09 | 武汉大学 | Prostate MR image multi-task registration method based on deep convolutional neural network |
CN111260705B (en) * | 2020-01-13 | 2022-03-15 | 武汉大学 | Prostate MR image multi-task registration method based on deep convolutional neural network |
CN111275713B (en) * | 2020-02-03 | 2022-04-12 | 武汉大学 | Cross-domain semantic segmentation method based on countermeasure self-integration network |
CN111275713A (en) * | 2020-02-03 | 2020-06-12 | 武汉大学 | Cross-domain semantic segmentation method based on countermeasure self-integration network |
CN111915545B (en) * | 2020-08-06 | 2022-07-05 | 中北大学 | Self-supervised learning fusion method for multiband images |
CN111915545A (en) * | 2020-08-06 | 2020-11-10 | 中北大学 | Self-supervised learning fusion method for multiband images |
CN113139576A (en) * | 2021-03-22 | 2021-07-20 | 广东省科学院智能制造研究所 | Deep learning image classification method and system combining image complexity |
CN113139576B (en) * | 2021-03-22 | 2024-03-12 | 广东省科学院智能制造研究所 | Deep learning image classification method and system combining image complexity |
CN113538246A (en) * | 2021-08-10 | 2021-10-22 | 西安电子科技大学 | Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network |
CN116402691A (en) * | 2023-06-05 | 2023-07-07 | 四川轻化工大学 | Image super-resolution method and system based on rapid image feature stitching |
CN116402691B (en) * | 2023-06-05 | 2023-08-04 | 四川轻化工大学 | Image super-resolution method and system based on rapid image feature stitching |
Also Published As
Publication number | Publication date |
---|---|
CN107194872B (en) | 2019-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107194872B (en) | Remote sensing image super-resolution reconstruction method based on content-aware deep learning network | |
CN107437092B (en) | Classification method for retinal OCT images based on three-dimensional convolutional neural network | |
CN105744256B (en) | Objective stereo image quality evaluation method based on graph visual saliency | |
CN111524135B (en) | Method and system for detecting defects of tiny hardware fittings of power transmission line based on image enhancement | |
CN103440654B (en) | LCD foreign-body defect detection method | |
CN110211045A (en) | Face image super-resolution method based on SRGAN network | |
CN109191476A (en) | Automatic segmentation of biomedical images based on U-net network structure | |
CN104951799B (en) | SAR remote sensing image oil spill detection and recognition method | |
CN106462771A (en) | 3D image saliency detection method | |
CN107105223B (en) | Objective quality evaluation method for tone-mapped images based on global features | |
CN104282008B (en) | Method and apparatus for texture segmentation of images | |
CN106780546B (en) | Identification method for motion-blurred coded points based on convolutional neural networks | |
CN104102928B (en) | Remote sensing image classification method based on texture primitives | |
CN110110646A (en) | Gesture image key frame extraction method based on deep learning | |
CN104484886B (en) | MR image segmentation method and device | |
CN104036493B (en) | No-reference image quality evaluation method based on multifractal spectrum | |
CN104574381B (en) | Full-reference image quality assessment method based on local binary patterns | |
CN110516716A (en) | No-reference image quality assessment method based on multi-branch similarity network | |
Guo et al. | Liver steatosis segmentation with deep learning methods | |
CN101976444A (en) | Pixel-based objective image quality assessment method using structural similarity | |
CN104021567B (en) | Image Gaussian blur tampering detection method based on the first-digit law | |
CN110322403A (en) | Multi-supervised image super-resolution reconstruction method based on generative adversarial network | |
CN106127234A (en) | No-reference image quality assessment method based on feature dictionary | |
CN106327501A (en) | Full-reference quality evaluation method for repaired thangka images | |
Luo et al. | Bi-GANs-ST for perceptual image super-resolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||