CN107194872A - Remote sensed image super-resolution reconstruction method based on perception of content deep learning network - Google Patents

Remote sensed image super-resolution reconstruction method based on perception of content deep learning network Download PDF

Info

Publication number
CN107194872A
Authority
CN
China
Prior art keywords
image
complexity
content
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710301990.6A
Other languages
Chinese (zh)
Other versions
CN107194872B (en)
Inventor
王中元
韩镇
杜博
邵振峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201710301990.6A priority Critical patent/CN107194872B/en
Publication of CN107194872A publication Critical patent/CN107194872A/en
Application granted granted Critical
Publication of CN107194872B publication Critical patent/CN107194872B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network. The invention proposes a comprehensive metric and a computation method for the complexity of image content. On this basis, sample images are classified by content complexity, and three deep GAN models of high, medium, and low complexity are built and trained; the network corresponding to the content complexity of the input image to be super-resolved is then selected for reconstruction. To improve the learning performance of the GAN, the invention also provides an optimized loss function definition. The method overcomes the conflict between over-fitting and under-fitting that is pervasive in machine-learning-based super-resolution reconstruction and effectively improves the super-resolution reconstruction accuracy of remote sensing images.

Description

Super-resolution reconstruction method for remote sensing images based on a content-aware deep learning network

Technical Field

The invention belongs to the technical field of image processing and relates to an image super-resolution reconstruction method, in particular to a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.

Technical Background

Remote sensing images with high spatial resolution describe ground objects more finely and provide rich detail, so high-spatial-resolution imagery is often desired. With the rapid development of space detection theory and technology, remote sensing images with meter-level or even sub-meter-level spatial resolution (such as IKONOS and QuickBird) have gradually come into use, but their temporal resolution is generally low. Conversely, some sensors with lower spatial resolution (such as MODIS) have high temporal resolution and can acquire large-area remote sensing images in a short time. If high-spatial-resolution images can be reconstructed from these lower-resolution images, remote sensing imagery with both high spatial and high temporal resolution can be obtained. It is therefore highly desirable to reconstruct higher-resolution images from lower-resolution remote sensing images.

In recent years, deep learning has been widely used to solve various problems in computer vision and image processing. In 2014, C. Dong et al. of the Chinese University of Hong Kong pioneered the introduction of deep CNN learning into image super-resolution reconstruction, achieving better results than the previously mainstream sparse-representation methods. In 2015, J. Kim et al. of Seoul National University, Korea, proposed an improved RNN-based method with further performance gains. In 2016, Y. Romano et al. of Google developed a fast and accurate learning method; shortly thereafter, C. Ledig et al. of Twitter applied the GAN (generative adversarial network) to image super-resolution and achieved the best reconstruction results to date. Moreover, the underlying model of a GAN is a deep belief network that no longer relies strictly on supervised learning and can be trained even without one-to-one pairs of high- and low-resolution image samples.

Once the deep learning model and network architecture are fixed, the performance of a deep-learning-based super-resolution method is largely determined by how well the network model is trained. Training a deep network is not a matter of "the more thorough, the better"; rather, sample learning should be sufficient and appropriate (just as more layers is not always better for a deep network). Complex images require more training samples so that more image features can be learned, but such a network tends to overfit on images with simple content, blurring the super-resolution results. Conversely, reducing the training intensity avoids overfitting on simple images but causes underfitting on complex ones, lowering the naturalness and fidelity of the reconstructed images. How to train a network that simultaneously meets the high-quality reconstruction needs of both complex and simple images is an unavoidable problem for deep-learning-based methods in practical super-resolution applications.

Summary of the Invention

To solve the above technical problems, the present invention proposes a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.

The technical solution adopted by the present invention is a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network, characterized by comprising the following steps:

Step 1: Collect high- and low-resolution remote sensing image samples and cut them into blocks.

Step 2: Compute the complexity of each image block, divide the blocks into high-, medium-, and low-complexity classes, and form high-, medium-, and low-complexity training sample sets accordingly.

Step 3: Use the obtained sample sets to train three GAN networks for high, medium, and low complexity, respectively.

Step 4: Compute the complexity of the input image and select the corresponding GAN network for reconstruction according to that complexity.

Compared with existing image super-resolution methods, the present invention has the following advantages and positive effects:

(1) By applying the simple idea of image classification, the present invention overcomes the conflict between over-fitting and under-fitting that is pervasive in machine-learning-based super-resolution reconstruction, effectively improving the super-resolution reconstruction accuracy of remote sensing images.

(2) The deep learning model underlying the method is a GAN, which does not rely on strictly one-to-one aligned high- and low-resolution sample blocks during training. This improves the general applicability of the method and makes it especially suitable for the multi-source, asynchronous imaging environment of high- and low-resolution images in remote sensing.

Brief Description of the Drawings

Fig. 1 is a flowchart of an embodiment of the present invention.

Detailed Description

To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the examples described here serve only to illustrate and explain the invention, not to limit it.

Referring to Fig. 1, the remote sensing image super-resolution reconstruction method based on a content-aware deep learning network provided by the present invention comprises the following steps:

Step 1: Collect high- and low-resolution remote sensing image samples; cut each high-resolution image evenly into 128x128 blocks and each low-resolution image evenly into 64x64 blocks.
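The block slicing of Step 1 can be sketched as follows. `tile_image` is an illustrative helper name, and discarding border pixels that do not fill a whole block is our assumption; the patent only states that the images are cut evenly.

```python
import numpy as np

def tile_image(img, block):
    """Cut a 2-D image into non-overlapping block x block patches.

    Rows/columns that do not fill a whole block are discarded, so every
    patch has the exact size expected by the network (128x128 for HR
    samples, 64x64 for LR samples in this method).
    """
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patches.append(img[y:y + block, x:x + block])
    return patches

# A 256x384 HR image yields (256 // 128) * (384 // 128) = 6 patches.
hr = np.zeros((256, 384), dtype=np.uint8)
hr_patches = tile_image(hr, 128)
```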

Step 2: Compute the complexity of each image block, divide the blocks into high-, medium-, and low-complexity classes, and form high-, medium-, and low-complexity training sample sets accordingly.

The principle and method for computing image complexity are as follows:

Image content complexity comprises texture complexity and structural complexity. Information entropy and gray-level consistency describe texture complexity well, while structural complexity is best described by the edge ratio of the objects in the image. The content complexity metric C is a weighted combination of the information entropy H, the gray-level consistency U, and the edge ratio R:

C = w_h × H + w_u × U + w_e × R;

Here w_h, w_u, and w_e are the respective weights, determined experimentally.

The computation of the information entropy, gray-level consistency, and edge ratio is given below.

(1) Information entropy

Information entropy reflects the number of gray levels in an image and the frequency of pixels at each gray level; a higher entropy indicates a more complex texture. The image information entropy H is computed as:

H = −Σ_{i=1..K} (n_i / N) · log(n_i / N);

where K is the number of gray levels, n_i is the number of pixels at gray level i, and N is the total number of pixels (N = Σ n_i).
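A minimal numpy sketch of the entropy formula for 8-bit gray images; the function name and the 256-level range are our assumptions, not the patent's.

```python
import numpy as np

def information_entropy(img):
    """H = -sum_i (n_i / N) * log(n_i / N) over the K occupied gray levels.

    n_i is the pixel count at gray level i and N the total pixel count;
    empty gray levels contribute nothing to the sum.
    """
    counts = np.bincount(img.ravel(), minlength=256)
    n = counts[counts > 0].astype(float)
    p = n / img.size                       # probability of each occupied level
    return float(-(p * np.log(p)).sum())

flat = np.full((8, 8), 42, dtype=np.uint8)                      # one level -> H = 0
halves = np.repeat(np.array([0, 255], dtype=np.uint8), 32)      # two equal levels -> H = ln 2
```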

(2) Gray-level consistency

Gray-level consistency reflects the uniformity of an image: a small value corresponds to a simple image, a large value to a complex one. The gray-level consistency formula is:

U = Σ_{i=1..M} Σ_{j=1..N} (f(i, j) − f̄(i, j))²;

where M and N are the numbers of rows and columns of the image, f(i, j) is the gray value at pixel (i, j), and f̄(i, j) is the mean gray value of the 3×3 neighborhood centered at (i, j).
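The gray-level consistency measure can be sketched as below; edge replication at the image border is our assumption, since the patent does not specify how the 3×3 neighborhood is handled there.

```python
import numpy as np

def gray_consistency(img):
    """U = sum over pixels of (f(i,j) - mean of the 3x3 neighborhood)^2."""
    f = img.astype(float)
    padded = np.pad(f, 1, mode="edge")     # replicate borders (assumption)
    # Sum the 9 shifted views to get the 3x3 neighborhood mean at every pixel.
    acc = np.zeros_like(f)
    for dy in range(3):
        for dx in range(3):
            acc += padded[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    mean3 = acc / 9.0
    return float(((f - mean3) ** 2).sum())

uniform = np.full((5, 7), 9, dtype=np.uint8)                    # constant image -> U = 0
checker = (np.indices((4, 4)).sum(axis=0) % 2) * 255            # alternating pattern -> U > 0
u_flat = gray_consistency(uniform)
```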

(3) Edge ratio

The number of objects in a scene directly reflects image complexity: more objects generally mean a more complex image, and vice versa. Counting objects would require complex image segmentation and is inconvenient to compute, but the amount of object edge indirectly reflects the number and complexity of objects in the image and can therefore describe image complexity. The proportion of object-edge pixels in the image is described by the edge ratio:

R = E / (M × N);

where M and N are the numbers of rows and columns of the image and E is the number of edge pixels. Object edges appear where the gray level changes significantly; they can be obtained by difference operations, typically using an edge-detection operator (such as the Canny or Sobel operator) to detect the edge pixels.
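The edge ratio and the combined metric C can be sketched as follows. A simple forward-difference gradient test stands in for a full Canny/Sobel detector, and both the threshold and the unit weights are placeholders; the patent determines the weights experimentally.

```python
import numpy as np

def edge_ratio(img, threshold=32):
    """R = E / (M*N), with edge pixels E found by a gradient-magnitude test.

    Forward differences stand in for Canny/Sobel; the threshold is an
    illustrative assumption.
    """
    f = img.astype(float)
    gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))   # horizontal gradient
    gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))   # vertical gradient
    edges = np.maximum(gx, gy) > threshold
    return float(edges.sum()) / img.size

def content_complexity(h, u, r, w_h=1.0, w_u=1.0, w_e=1.0):
    """C = w_h*H + w_u*U + w_e*R; the unit weights are placeholders."""
    return w_h * h + w_u * u + w_e * r

flat = np.zeros((8, 8), dtype=np.uint8)
step = np.zeros((8, 8), dtype=np.uint8)
step[:, 4:] = 255       # one vertical edge: 8 edge pixels out of 64 -> R = 0.125
```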

The high-complexity training sample set contains no fewer than 500,000 image blocks, the medium-complexity set no fewer than 300,000, and the low-complexity set no fewer than 200,000.

Step 3: Use the obtained sample sets to train three GAN networks for high, medium, and low complexity, respectively.

The loss function for GAN network training is defined as follows:

The loss function comprises a content loss, a generative-adversarial loss, and a total-variation loss. The content loss characterizes distortion of the image content; the generative-adversarial loss characterizes how distinguishable the statistics of the generated result are from natural-image data; and the total-variation loss characterizes the coherence of the image content. The overall loss is a weighted combination of the three:

C = w_v × l_VGG^SR + w_g × l_GAN^SR + w_t × l_TV^SR;

Here w_v, w_g, and w_t are the respective weights, determined experimentally.

The computation of each loss term is given below.

(1) Content loss

The traditional content loss is the pixel-wise MSE (mean squared error), which measures content loss pixel by pixel; training with an MSE loss attenuates the high-frequency components of image structure and over-blurs the result. To overcome this defect, a feature loss is introduced here. Since manually defining and extracting valuable image features is itself a complex task, and since deep learning can extract features automatically, this method measures the loss with hidden-layer features from a trained VGG network. Let φ_{i,j} denote the feature map produced by the j-th convolutional layer before the i-th pooling layer of the VGG network. The feature loss is defined as the Euclidean distance between the VGG features of the reconstructed image G(I^LR) and the reference image I^HR:

l_VGG^SR = (1 / (W_{i,j} H_{i,j})) Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} (φ_{i,j}(I^HR)_{x,y} − φ_{i,j}(G(I^LR))_{x,y})²;

Here W_{i,j} and H_{i,j} denote the dimensions of the VGG feature map.
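A minimal sketch of the feature-space distance, with plain arrays standing in for the VGG feature maps φ_{i,j}(I^HR) and φ_{i,j}(G(I^LR)); running both images through a pretrained VGG is assumed to happen elsewhere.

```python
import numpy as np

def vgg_content_loss(feat_ref, feat_rec):
    """l_VGG = (1 / (W*H)) * sum_xy (phi(I_HR)_xy - phi(G(I_LR))_xy)^2.

    feat_ref / feat_rec are the VGG feature maps of the reference and
    reconstructed images; here stand-in arrays take their place.
    """
    w, h = feat_ref.shape[:2]
    return float(((feat_ref - feat_rec) ** 2).sum() / (w * h))

ref = np.ones((4, 4))
rec = np.zeros((4, 4))
loss = vgg_content_loss(ref, rec)   # 16 unit squared errors / 16 = 1.0
```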

(2) Generative-adversarial loss

The generative-adversarial loss takes the generative function of the GAN into account, encouraging the network to produce solutions that lie on the natural-image manifold so that the discriminator cannot distinguish the generated results from natural images. It is measured by the discriminator's classification probabilities over all training samples:

l_GAN^SR = Σ_{n=1..N} −log D(G(I_n^LR));

Here D(G(I_n^LR)) is the probability that the discriminator D classifies the reconstructed result G(I_n^LR) as a natural image, and N is the total number of training samples.
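The adversarial term reduces to a sum of negative log discriminator probabilities, sketched below; the clamp that avoids log(0) is an implementation detail the patent does not specify.

```python
import numpy as np

def adversarial_loss(d_probs):
    """l_GAN = sum_n -log D(G(I_n^LR)) over the N training samples.

    d_probs holds the discriminator's probability that each reconstructed
    sample is a natural image.
    """
    p = np.clip(np.asarray(d_probs, dtype=float), 1e-12, 1.0)  # avoid log(0)
    return float(-np.log(p).sum())

perfect = adversarial_loss([1.0, 1.0, 1.0])   # discriminator fully fooled
```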

(3) Total-variation loss

The total-variation loss is added to strengthen the local coherence of the learned result over the image content. It is computed as:

l_TV^SR = (1 / (W H)) Σ_{x=1..W} Σ_{y=1..H} ||∇G(I^LR)_{x,y}||;

Here W and H are the width and height of the reconstructed image.
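The TV term can be sketched with a forward-difference discretization of the gradient (one common choice; the patent does not fix the discretization):

```python
import numpy as np

def total_variation_loss(img):
    """l_TV = (1 / (W*H)) * sum_xy ||grad G(I^LR)_{x,y}||.

    Forward differences, zero at the last row/column.
    """
    f = img.astype(float)
    gx = np.diff(f, axis=1, append=f[:, -1:])   # horizontal forward difference
    gy = np.diff(f, axis=0, append=f[-1:, :])   # vertical forward difference
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    return float(grad_norm.sum() / f.size)

flat = np.full((8, 8), 7.0)
tv_flat = total_variation_loss(flat)   # constant image has zero variation
```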

Step 4: Compute the complexity of the input image and select the corresponding GAN network for reconstruction according to that complexity.

Specifically, this consists of the following sub-steps:

Step 4.1: Divide the input image evenly into 16 equal sub-images, compute the complexity of each sub-image, and classify it as high, medium, or low complexity;

Step 4.2: Select the corresponding GAN network for super-resolution reconstruction according to the complexity class.
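The routing of Step 4 can be sketched as below. The `networks` mapping and the class thresholds are placeholders: the patent derives the three classes from the training-set complexity distribution, and the reconstruction callables would be the three trained GAN generators.

```python
import numpy as np

def route_and_reconstruct(img, complexity_fn, networks, thresholds=(0.3, 0.6)):
    """Split the input into a 4x4 grid of 16 equal sub-images, score each
    with the complexity metric, and run it through the matching network.
    """
    h, w = img.shape
    sh, sw = h // 4, w // 4
    out = []
    for y in range(4):
        for x in range(4):
            sub = img[y * sh:(y + 1) * sh, x * sw:(x + 1) * sw]
            c = complexity_fn(sub)
            cls = "low" if c < thresholds[0] else "mid" if c < thresholds[1] else "high"
            out.append((cls, networks[cls](sub)))   # reconstruct with the chosen GAN
    return out

nets = {k: (lambda s: s) for k in ("low", "mid", "high")}   # identity stand-ins
subs = route_and_reconstruct(np.zeros((64, 64)), lambda s: 0.0, nets)
```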

The present invention classifies sample images by content complexity, builds and trains deep network models of differing complexity, and then selects the corresponding network for reconstruction according to the content complexity of the input image to be super-resolved. Remote sensing images record large-scale scenes and are not dominated by the fine details of individual ground targets, so they contain many large, spatially homogeneous regions of consistent content complexity, such as urban areas, dry fields, paddy fields, lakes, and mountains. This makes them well suited to pre-classified training and reconstruction.

The GAN deep learning model is adopted here not only because GANs currently give the best super-resolution performance, but also because the high- and low-spatial-resolution remote sensing images used as training samples come from different sources and are multi-temporal images captured asynchronously, so pixel-level one-to-one alignment cannot exist. This greatly limits the training of a CNN, whereas the GAN is an unsupervised learning network and does not suffer from this problem.

It should be understood that the parts not described in detail in this specification belong to the prior art.

It should be understood that the above description of the preferred embodiments is relatively detailed and should not therefore be regarded as limiting the scope of patent protection of the present invention. Under the teaching of the present invention, those of ordinary skill in the art may make substitutions or modifications without departing from the scope of the appended claims; all such substitutions and modifications fall within the protection scope of the present invention, which shall be determined by the appended claims.

Claims (13)

1.一种基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,包括以下步骤:1. A remote sensing image super-resolution reconstruction method based on content-aware deep learning network, is characterized in that, comprises the following steps: 步骤1:收集高低分辨率遥感图像样本,并进行分块处理;Step 1: Collect high and low resolution remote sensing image samples and perform block processing; 步骤2:计算每个图像块的复杂度,按复杂度分成高、中、低三类,分别构成高、中、低复杂度的训练样本集;Step 2: Calculate the complexity of each image block, divide it into three categories according to the complexity: high, medium, and low, and form high, medium, and low complexity training sample sets respectively; 步骤3:利用获得的样本集分别训练高、中、低复杂度的三种GAN网络;Step 3: Use the obtained sample sets to train three GAN networks with high, medium and low complexity respectively; 步骤4:计算输入图像的复杂度,根据复杂度选取对应的GAN网络重建。Step 4: Calculate the complexity of the input image, and select the corresponding GAN network reconstruction according to the complexity. 2.根据权利要求1所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于:步骤1中,将高分辨率图像均匀地切分成128x128的图像块、低分辨率图像均匀地切分成64x64的图像块。2. The remote sensing image super-resolution reconstruction method based on content-aware deep learning network according to claim 1, characterized in that: in step 1, the high-resolution image is evenly divided into 128x128 image blocks, low-resolution images Divide evenly into 64x64 image blocks. 3.根据权利要求1所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,步骤2中所述图像块的复杂度,其计算方法为:3. 
the remote sensing image super-resolution reconstruction method based on content perception deep learning network according to claim 1, is characterized in that, the complexity of image block described in step 2, its computing method is: C=wh×H+wu×U+we×E;C=w h ×H+w u ×U+w e ×E; 其中,C表图像块的复杂度,H表示图像信息熵,U表示图像灰度一致性,R表示图像边缘比率,wh,wu,we分别是各自的权重,权重由实验确定。Among them, C represents the complexity of the image block, H represents the image information entropy, U represents the image gray level consistency, R represents the image edge ratio, w h , w u , we e are respective weights, and the weights are determined by experiments. 4.根据权利要求3所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,所述图像信息熵H的计算公式为:4. the remote sensing image super-resolution reconstruction method based on content perception deep learning network according to claim 3, is characterized in that, the computing formula of described image information entropy H is: <mrow> <mi>H</mi> <mo>=</mo> <mo>-</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>K</mi> </munderover> <msub> <mi>n</mi> <mi>i</mi> </msub> <mo>/</mo> <mi>N</mi> <mo>.</mo> <mi>log</mi> <mrow> <mo>(</mo> <msub> <mi>n</mi> <mi>i</mi> </msub> <mo>/</mo> <mi>N</mi> <mo>)</mo> </mrow> <mo>;</mo> </mrow> <mrow> <mi>H</mi> <mo>=</mo> <mo>-</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>K</mi> </munderover> <msub> <mi>n</mi> <mi>i</mi> </msub> <mo>/</mo> <mi>N</mi> <mo>.</mo> <mi>log</mi> <mrow> <mo>(</mo> <msub> <mi>n</mi> <mi>i</mi> </msub> <mo>/</mo> <mi>N</mi> <mo>)</mo> </mrow> <mo>;</mo> </mrow> 其中,N为灰度级的个数,ni为每个灰度级出现的个数,K为灰度级数目。Among them, N is the number of gray levels, n i is the number of occurrences of each gray level, and K is the number of gray levels. 5.根据权利要求3所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,所述图像灰度一致性U公式为:5. 
the remote sensing image super-resolution reconstruction method based on content perception deep learning network according to claim 3, is characterized in that, described image grayscale consistency U formula is: <mrow> <mi>U</mi> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>M</mi> </munderover> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <msup> <mrow> <mo>(</mo> <mi>f</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>,</mo> <mi>j</mi> <mo>)</mo> </mrow> <mo>-</mo> <mover> <mi>f</mi> <mo>&amp;OverBar;</mo> </mover> <mrow> <mo>(</mo> <mi>i</mi> <mo>,</mo> <mi>j</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>;</mo> </mrow> <mrow> <mi>U</mi> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>M</mi> </munderover> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <msup> <mrow> <mo>(</mo> <mi>f</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>,</mo> <mi>j</mi> <mo>)</mo> </mrow> <mo>-</mo> <mover> <mi>f</mi> <mo>&amp;OverBar;</mo> </mover> <mrow> <mo>(</mo> <mi>i</mi> <mo>,</mo> <mi>j</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>;</mo> </mrow> 其中,M、N分别为图像的行数和列数,f(i,j)是像素(i,j)处的灰度值,是以(i,j)为中心的3×3邻域像素的灰度均值。Among them, M and N are the number of rows and columns of the image respectively, f(i, j) is the gray value at the pixel (i, j), is the gray mean value of the 3×3 neighborhood pixels centered on (i,j). 6.根据权利要求3所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,所述图像边缘比率R计算公式为:6. 
the remote sensing image super-resolution reconstruction method based on content-aware deep learning network according to claim 3, is characterized in that, described image edge ratio R computing formula is: <mrow> <mi>R</mi> <mo>=</mo> <mfrac> <mi>E</mi> <mrow> <mi>M</mi> <mo>&amp;times;</mo> <mi>N</mi> </mrow> </mfrac> <mo>;</mo> </mrow> <mrow> <mi>R</mi> <mo>=</mo> <mfrac> <mi>E</mi> <mrow> <mi>M</mi> <mo>&amp;times;</mo> <mi>N</mi> </mrow> </mfrac> <mo>;</mo> </mrow> 其中,M和N分别为图像的行数和列数;E为图像中边缘像素的个数,由差分算法来求取。Among them, M and N are the number of rows and columns of the image respectively; E is the number of edge pixels in the image, which is calculated by the difference algorithm. 7.根据权利要求1-6任意一项所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于:步骤2所述高、中、低复杂度的训练样本集,其中高复杂度的训练样本集图像块数量不少于500000,中复杂度的训练样本集图像块数量不少于300000,低复杂度的训练样本集图像块数量不少于200000。7. The remote sensing image super-resolution reconstruction method based on content-aware deep learning network according to any one of claims 1-6, characterized in that: the high, medium and low complexity training sample sets described in step 2, wherein The number of image blocks in the high-complexity training sample set is not less than 500,000, the number of image blocks in the medium-complexity training sample set is not less than 300,000, and the number of image blocks in the low-complexity training sample set is not less than 200,000. 8.根据权利要求1所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,步骤3中GAN网络训练的损失函数定义为:8. 
the remote sensing image super-resolution reconstruction method based on content-aware deep learning network according to claim 1, is characterized in that, the loss function of GAN network training in step 3 is defined as: <mrow> <mi>C</mi> <mo>=</mo> <msub> <mi>w</mi> <mi>v</mi> </msub> <mo>&amp;times;</mo> <msubsup> <mi>l</mi> <mrow> <mi>V</mi> <mi>G</mi> <mi>G</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>+</mo> <msub> <mi>w</mi> <mi>g</mi> </msub> <mo>&amp;times;</mo> <msubsup> <mi>l</mi> <mrow> <mi>G</mi> <mi>A</mi> <mi>N</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>+</mo> <msub> <mi>w</mi> <mi>t</mi> </msub> <mo>&amp;times;</mo> <msubsup> <mi>l</mi> <mrow> <mi>T</mi> <mi>V</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>;</mo> </mrow> 1 <mrow> <mi>C</mi> <mo>=</mo> <msub> <mi>w</mi> <mi>v</mi> </msub> <mo>&amp;times;</mo> <msubsup> <mi>l</mi> <mrow> <mi>V</mi> <mi>G</mi> <mi>G</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>+</mo> <msub> <mi>w</mi> <mi>g</mi> </msub> <mo>&amp;times;</mo> <msubsup> <mi>l</mi> <mrow> <mi>G</mi> <mi>A</mi> <mi>N</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>+</mo> <msub> <mi>w</mi> <mi>t</mi> </msub> <mo>&amp;times;</mo> <msubsup> <mi>l</mi> <mrow> <mi>T</mi> <mi>V</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>;</mo> </mrow> 1 其中,C表示网络训练的损失函数,表示内容损失函数,表示生成-对抗损失函数,表示全变差损失函数;wv,wg,wt分别是各自的权重,权重由实验确定。Among them, C represents the loss function of network training, Denotes the content loss function, Denotes the generative-adversarial loss function, Represents the total variation loss function; w v , w g , w t are their respective weights, and the weights are determined by experiments. 9.根据权利要求8所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,所述内容损失函数为:9. 
the remote sensing image super-resolution reconstruction method based on content perception deep learning network according to claim 8, is characterized in that, described content loss function for: 其中,φi,j表示VGG网络中第i个池化层前面的第j个卷积层得到的特征图,Wi,j,Hi,j表示VGG特征图的维度;表示参考图像,表示重构图像。Among them, φ i,j represents the feature map obtained by the jth convolutional layer in front of the i-th pooling layer in the VGG network, W i,j , H i,j represent the dimension of the VGG feature map; represents the reference image, Represents the reconstructed image. 10.根据权利要求8所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,所述生成-对抗损失函数为:10. the remote sensing image super-resolution reconstruction method based on content-aware deep learning network according to claim 8, is characterized in that, described generation-adversarial loss function for: <mrow> <msubsup> <mi>l</mi> <mrow> <mi>G</mi> <mi>A</mi> <mi>N</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>n</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <mo>-</mo> <mi>log</mi> <mi> </mi> <mi>D</mi> <mrow> <mo>(</mo> <mi>G</mi> <mo>(</mo> <msubsup> <mi>I</mi> <mi>n</mi> <mrow> <mi>L</mi> <mi>R</mi> </mrow> </msubsup> <mo>)</mo> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>l</mi> <mrow> <mi>G</mi> <mi>A</mi> <mi>N</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>n</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <mo>-</mo> <mi>log</mi> <mi> </mi> <mi>D</mi> <mrow> <mo>(</mo> <mi>G</mi> <mo>(</mo> <msubsup> <mi>I</mi> <mi>n</mi> <mrow> <mi>L</mi> <mi>R</mi> </mrow> </msubsup> <mo>)</mo> <mo>)</mo> </mrow> </mrow> 其中,表示重构图像,D(G(ILR))表示判别器D将重构结果判别为自然图像的概率;N表示训练样本的总数。in, Indicates the reconstructed image, D(G(I LR )) indicates that the discriminator D will reconstruct the result The probability of identifying a natural image; N represents the total number of training samples. 
11.根据权利要求8所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,所述全变差损失函数为:11. the remote sensing image super-resolution reconstruction method based on content-aware deep learning network according to claim 8, is characterized in that, the total variation loss function for: <mrow> <msubsup> <mi>l</mi> <mrow> <mi>T</mi> <mi>V</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mi>W</mi> <mi>H</mi> </mrow> </mfrac> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>x</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>W</mi> </munderover> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>y</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>H</mi> </munderover> <mo>|</mo> <mo>|</mo> <mo>&amp;dtri;</mo> <mi>G</mi> <msub> <mrow> <mo>(</mo> <msup> <mi>I</mi> <mrow> <mi>L</mi> <mi>R</mi> </mrow> </msup> <mo>)</mo> </mrow> <mrow> <mi>x</mi> <mo>,</mo> <mi>y</mi> </mrow> </msub> <mo>|</mo> <mo>|</mo> <mo>;</mo> </mrow> <mrow> <msubsup> <mi>l</mi> <mrow> <mi>T</mi> <mi>V</mi> </mrow> <mrow> <mi>S</mi> <mi>R</mi> </mrow> </msubsup> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mi>W</mi> <mi>H</mi> </mrow> </mfrac> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>x</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>W</mi> </munderover> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>y</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>H</mi> </munderover> <mo>|</mo> <mo>|</mo> <mo>&amp;dtri;</mo> <mi>G</mi> <msub> <mrow> <mo>(</mo> <msup> <mi>I</mi> <mrow> <mi>L</mi> <mi>R</mi> </mrow> </msup> <mo>)</mo> </mrow> <mrow> <mi>x</mi> <mo>,</mo> <mi>y</mi> </mrow> </msub> <mo>|</mo> <mo>|</mo> <mo>;</mo> </mrow> 其中,G(ILR)表示重构图像,W、H表示重构图像的宽度和高度。Among them, G(I LR ) represents the reconstructed image, and W and H represent the width and height of the reconstructed image. 12.根据权利要求1所述的基于内容感知深度学习网络的遥感图像超分辨率重建方法,其特征在于,步骤4的具体实现包括以下子子步骤:12. 
the remote sensing image super-resolution reconstruction method based on content-aware deep learning network according to claim 1, characterized in that the specific implementation of step 4 comprises the following sub-steps:

Step 4.1: Evenly divide the input image, compute the complexity of each sub-image, and determine whether it belongs to the high-, medium-, or low-complexity type;

Step 4.2: Select the corresponding GAN network for super-resolution reconstruction according to the complexity type.

13. The remote sensing image super-resolution reconstruction method based on content-aware deep learning network according to claim 12, characterized in that in step 4.1 the input image is evenly divided into 16 equal sub-images.
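Steps 4.1–4.2 can be sketched as below. The variance-based complexity metric, the thresholds, and the `gans` dispatch table are illustrative placeholders: the patent's actual complexity measure is defined in earlier claims not included in this excerpt.

```python
import numpy as np

def split_into_tiles(img, n=4):
    """Step 4.1 / claim 13: evenly divide the input image into
    n x n (16 by default) equal sub-images."""
    img = np.asarray(img)
    h, w = img.shape[:2]
    return [img[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            for i in range(n) for j in range(n)]

def complexity_class(tile, low_th, high_th):
    """Classify a sub-image as low/medium/high complexity.
    Variance is a placeholder metric, not the patent's measure."""
    c = float(np.var(np.asarray(tile, dtype=float)))
    if c < low_th:
        return "low"
    if c < high_th:
        return "medium"
    return "high"

def reconstruct(img, gans, low_th, high_th, n=4):
    """Step 4.2: dispatch each tile to the GAN trained for its
    complexity class; `gans` maps class name -> callable."""
    return [gans[complexity_class(t, low_th, high_th)](t)
            for t in split_into_tiles(img, n)]
```

Dispatching per tile rather than per image is what makes the network "content-aware": textured tiles and flat tiles are reconstructed by models trained on matching complexity, then reassembled.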
CN201710301990.6A 2017-05-02 2017-05-02 Super-resolution reconstruction method of remote sensing images based on content-aware deep learning network Active CN107194872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710301990.6A CN107194872B (en) 2017-05-02 2017-05-02 Super-resolution reconstruction method of remote sensing images based on content-aware deep learning network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710301990.6A CN107194872B (en) 2017-05-02 2017-05-02 Super-resolution reconstruction method of remote sensing images based on content-aware deep learning network

Publications (2)

Publication Number Publication Date
CN107194872A true CN107194872A (en) 2017-09-22
CN107194872B CN107194872B (en) 2019-08-20

Family

ID=59872637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710301990.6A Active CN107194872B (en) 2017-05-02 2017-05-02 Super-resolution reconstruction method of remote sensing images based on content-aware deep learning network

Country Status (1)

Country Link
CN (1) CN107194872B (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767384A (en) * 2017-11-03 2018-03-06 电子科技大学 A kind of image, semantic dividing method based on dual training
CN108346133A (en) * 2018-03-15 2018-07-31 武汉大学 A kind of deep learning network training method towards video satellite super-resolution rebuilding
CN108665509A (en) * 2018-05-10 2018-10-16 广东工业大学 A kind of ultra-resolution ratio reconstructing method, device, equipment and readable storage medium storing program for executing
CN108711141A (en) * 2018-05-17 2018-10-26 重庆大学 The motion blur image blind restoration method of network is fought using improved production
CN108830209A (en) * 2018-06-08 2018-11-16 西安电子科技大学 Based on the remote sensing images method for extracting roads for generating confrontation network
CN108876870A (en) * 2018-05-30 2018-11-23 福州大学 A kind of domain mapping GANs image rendering methods considering texture complexity
CN108921791A (en) * 2018-07-03 2018-11-30 苏州中科启慧软件技术有限公司 Lightweight image super-resolution improved method based on adaptive important inquiry learning
CN108961217A (en) * 2018-06-08 2018-12-07 南京大学 A kind of detection method of surface flaw based on positive example training
CN109117944A (en) * 2018-08-03 2019-01-01 北京悦图遥感科技发展有限公司 A kind of super resolution ratio reconstruction method and system of steamer target remote sensing image
CN109785270A (en) * 2019-01-18 2019-05-21 四川长虹电器股份有限公司 A kind of image super-resolution method based on GAN
CN109903223A (en) * 2019-01-14 2019-06-18 北京工商大学 An Image Super-Resolution Method Based on Densely Connected Networks and Generative Adversarial Networks
CN109949219A (en) * 2019-01-12 2019-06-28 深圳先进技术研究院 A method, device and device for reconstructing super-resolution images
CN110033033A (en) * 2019-04-01 2019-07-19 南京谱数光电科技有限公司 A kind of Maker model training method based on CGANs
CN110163852A (en) * 2019-05-13 2019-08-23 北京科技大学 The real-time sideslip detection method of conveyer belt based on lightweight convolutional neural networks
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN110689086A (en) * 2019-10-08 2020-01-14 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN110738597A (en) * 2018-07-19 2020-01-31 北京连心医疗科技有限公司 Size self-adaptive preprocessing method of multi-resolution medical image in neural network
CN110807740A (en) * 2019-09-17 2020-02-18 北京大学 An image enhancement method and system for vehicle window images in surveillance scenes
CN111144466A (en) * 2019-12-17 2020-05-12 武汉大学 A deep metric learning method for image sample adaptation
CN111260705A (en) * 2020-01-13 2020-06-09 武汉大学 A multi-task registration method for prostate MR images based on deep convolutional neural network
CN111275713A (en) * 2020-02-03 2020-06-12 武汉大学 A Cross-Domain Semantic Segmentation Method Based on Adversarial Self-Integrated Networks
WO2020177582A1 (en) * 2019-03-06 2020-09-10 腾讯科技(深圳)有限公司 Video synthesis method, model training method, device and storage medium
CN111712830A (en) * 2018-02-21 2020-09-25 罗伯特·博世有限公司 Real-time object detection using depth sensors
CN111915545A (en) * 2020-08-06 2020-11-10 中北大学 A Self-Supervised Learning Fusion Method for Multiband Images
CN112700003A (en) * 2020-12-25 2021-04-23 深圳前海微众银行股份有限公司 Network structure search method, device, equipment, storage medium and program product
CN112825187A (en) * 2019-11-21 2021-05-21 福州瑞芯微电子股份有限公司 Super-resolution method, medium and device based on machine learning
CN113139576A (en) * 2021-03-22 2021-07-20 广东省科学院智能制造研究所 Deep learning image classification method and system combining image complexity
CN113421189A (en) * 2021-06-21 2021-09-21 Oppo广东移动通信有限公司 Image super-resolution processing method and device and electronic equipment
CN113538246A (en) * 2021-08-10 2021-10-22 西安电子科技大学 Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
US11263726B2 (en) 2019-05-16 2022-03-01 Here Global B.V. Method, apparatus, and system for task driven approaches to super resolution
CN116402691A (en) * 2023-06-05 2023-07-07 四川轻化工大学 Image super-resolution method and system based on rapid image feature stitching
CN117911285A (en) * 2024-01-12 2024-04-19 北京数慧时空信息技术有限公司 Remote sensing image restoration method based on time series images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825477A (en) * 2015-01-06 2016-08-03 南京理工大学 Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion
CN105931179A (en) * 2016-04-08 2016-09-07 武汉大学 Joint sparse representation and deep learning-based image super resolution method and system
CN106203269A (en) * 2016-06-29 2016-12-07 武汉大学 A kind of based on can the human face super-resolution processing method of deformation localized mass and system
US20170046816A1 (en) * 2015-08-14 2017-02-16 Sharp Laboratories Of America, Inc. Super resolution image enhancement technique

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825477A (en) * 2015-01-06 2016-08-03 南京理工大学 Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion
US20170046816A1 (en) * 2015-08-14 2017-02-16 Sharp Laboratories Of America, Inc. Super resolution image enhancement technique
CN105931179A (en) * 2016-04-08 2016-09-07 武汉大学 Joint sparse representation and deep learning-based image super resolution method and system
CN106203269A (en) * 2016-06-29 2016-12-07 武汉大学 A kind of based on can the human face super-resolution processing method of deformation localized mass and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HU CHUANPING et al.: "Research on Image Super-Resolution Algorithms Based on Deep Learning", Journal of Railway Police College *

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767384A (en) * 2017-11-03 2018-03-06 电子科技大学 A kind of image, semantic dividing method based on dual training
CN111712830B (en) * 2018-02-21 2024-02-09 罗伯特·博世有限公司 Real-time object detection using depth sensors
CN111712830A (en) * 2018-02-21 2020-09-25 罗伯特·博世有限公司 Real-time object detection using depth sensors
CN108346133B (en) * 2018-03-15 2021-06-04 武汉大学 Deep learning network training method for super-resolution reconstruction of video satellite
CN108346133A (en) * 2018-03-15 2018-07-31 武汉大学 A kind of deep learning network training method towards video satellite super-resolution rebuilding
CN108665509A (en) * 2018-05-10 2018-10-16 广东工业大学 A kind of ultra-resolution ratio reconstructing method, device, equipment and readable storage medium storing program for executing
CN108711141A (en) * 2018-05-17 2018-10-26 重庆大学 The motion blur image blind restoration method of network is fought using improved production
CN108711141B (en) * 2018-05-17 2022-02-15 重庆大学 Motion blurred image blind restoration method using improved generation type countermeasure network
CN108876870A (en) * 2018-05-30 2018-11-23 福州大学 A kind of domain mapping GANs image rendering methods considering texture complexity
CN108876870B (en) * 2018-05-30 2022-12-13 福州大学 Domain mapping GANs image coloring method considering texture complexity
CN108961217A (en) * 2018-06-08 2018-12-07 南京大学 A kind of detection method of surface flaw based on positive example training
CN108830209B (en) * 2018-06-08 2021-12-17 西安电子科技大学 Remote sensing image road extraction method based on generation countermeasure network
CN108961217B (en) * 2018-06-08 2022-09-16 南京大学 Surface defect detection method based on regular training
CN108830209A (en) * 2018-06-08 2018-11-16 西安电子科技大学 Based on the remote sensing images method for extracting roads for generating confrontation network
CN108921791A (en) * 2018-07-03 2018-11-30 苏州中科启慧软件技术有限公司 Lightweight image super-resolution improved method based on adaptive important inquiry learning
CN110738597A (en) * 2018-07-19 2020-01-31 北京连心医疗科技有限公司 Size self-adaptive preprocessing method of multi-resolution medical image in neural network
CN109117944A (en) * 2018-08-03 2019-01-01 北京悦图遥感科技发展有限公司 A kind of super resolution ratio reconstruction method and system of steamer target remote sensing image
CN109117944B (en) * 2018-08-03 2021-01-15 北京悦图数据科技发展有限公司 Super-resolution reconstruction method and system for ship target remote sensing image
CN109949219A (en) * 2019-01-12 2019-06-28 深圳先进技术研究院 A method, device and device for reconstructing super-resolution images
CN109949219B (en) * 2019-01-12 2021-03-26 深圳先进技术研究院 Reconstruction method, device and equipment of super-resolution image
CN109903223B (en) * 2019-01-14 2023-08-25 北京工商大学 An Image Super-resolution Method Based on Densely Connected Network and Generative Adversarial Network
CN109903223A (en) * 2019-01-14 2019-06-18 北京工商大学 An Image Super-Resolution Method Based on Densely Connected Networks and Generative Adversarial Networks
CN109785270A (en) * 2019-01-18 2019-05-21 四川长虹电器股份有限公司 A kind of image super-resolution method based on GAN
US11356619B2 (en) 2019-03-06 2022-06-07 Tencent Technology (Shenzhen) Company Limited Video synthesis method, model training method, device, and storage medium
WO2020177582A1 (en) * 2019-03-06 2020-09-10 腾讯科技(深圳)有限公司 Video synthesis method, model training method, device and storage medium
CN110033033A (en) * 2019-04-01 2019-07-19 南京谱数光电科技有限公司 A kind of Maker model training method based on CGANs
CN110163852B (en) * 2019-05-13 2021-10-15 北京科技大学 Real-time deviation detection method of conveyor belt based on lightweight convolutional neural network
CN110163852A (en) * 2019-05-13 2019-08-23 北京科技大学 The real-time sideslip detection method of conveyer belt based on lightweight convolutional neural networks
US11263726B2 (en) 2019-05-16 2022-03-01 Here Global B.V. Method, apparatus, and system for task driven approaches to super resolution
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN110807740A (en) * 2019-09-17 2020-02-18 北京大学 An image enhancement method and system for vehicle window images in surveillance scenes
CN110689086A (en) * 2019-10-08 2020-01-14 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN112825187A (en) * 2019-11-21 2021-05-21 福州瑞芯微电子股份有限公司 Super-resolution method, medium and device based on machine learning
CN111144466B (en) * 2019-12-17 2022-05-13 武汉大学 A deep metric learning method for image sample adaptation
CN111144466A (en) * 2019-12-17 2020-05-12 武汉大学 A deep metric learning method for image sample adaptation
CN111260705A (en) * 2020-01-13 2020-06-09 武汉大学 A multi-task registration method for prostate MR images based on deep convolutional neural network
CN111260705B (en) * 2020-01-13 2022-03-15 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network
CN111275713A (en) * 2020-02-03 2020-06-12 武汉大学 A Cross-Domain Semantic Segmentation Method Based on Adversarial Self-Integrated Networks
CN111275713B (en) * 2020-02-03 2022-04-12 武汉大学 A Cross-Domain Semantic Segmentation Method Based on Adversarial Self-Integrated Networks
CN111915545A (en) * 2020-08-06 2020-11-10 中北大学 A Self-Supervised Learning Fusion Method for Multiband Images
CN111915545B (en) * 2020-08-06 2022-07-05 中北大学 Self-supervision learning fusion method of multiband images
CN112700003A (en) * 2020-12-25 2021-04-23 深圳前海微众银行股份有限公司 Network structure search method, device, equipment, storage medium and program product
CN113139576A (en) * 2021-03-22 2021-07-20 广东省科学院智能制造研究所 Deep learning image classification method and system combining image complexity
CN113139576B (en) * 2021-03-22 2024-03-12 广东省科学院智能制造研究所 Deep learning image classification method and system combining image complexity
CN113421189A (en) * 2021-06-21 2021-09-21 Oppo广东移动通信有限公司 Image super-resolution processing method and device and electronic equipment
CN113538246A (en) * 2021-08-10 2021-10-22 西安电子科技大学 Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN116402691A (en) * 2023-06-05 2023-07-07 四川轻化工大学 Image super-resolution method and system based on rapid image feature stitching
CN116402691B (en) * 2023-06-05 2023-08-04 四川轻化工大学 Image super-resolution method and system based on rapid image feature stitching
CN117911285A (en) * 2024-01-12 2024-04-19 北京数慧时空信息技术有限公司 Remote sensing image restoration method based on time series images

Also Published As

Publication number Publication date
CN107194872B (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN107194872B (en) Super-resolution reconstruction method of remote sensing images based on content-aware deep learning network
Ledig et al. Photo-realistic single image super-resolution using a generative adversarial network
US11024009B2 (en) Super resolution using a generative adversarial network
Hua et al. A normalized convolutional neural network for guided sparse depth upsampling.
CN110136062B (en) A Super-Resolution Reconstruction Method for Joint Semantic Segmentation
CN109978762A (en) A kind of super resolution ratio reconstruction method generating confrontation network based on condition
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
CN107909015A (en) Hyperspectral image classification method based on convolutional neural networks and empty spectrum information fusion
CN108537102A (en) High Resolution SAR image classification method based on sparse features and condition random field
CN102915527A (en) Face image super-resolution reconstruction method based on morphological component analysis
CN114170088A (en) Relational reinforcement learning system and method based on graph structure data
CN111369442A (en) Remote sensing image super-resolution reconstruction method based on fuzzy kernel classification and attention mechanism
Cai et al. Multiscale attentive image de-raining networks via neural architecture search
CN109146925A (en) Conspicuousness object detection method under a kind of dynamic scene
CN118230131B (en) Image recognition and target detection method
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection
Zhou et al. Attention transfer network for nature image matting
Luo et al. Bi-GANs-ST for perceptual image super-resolution
CN107330854A (en) A kind of image super-resolution Enhancement Method based on new type formwork
Hu et al. Hierarchical discrepancy learning for image restoration quality assessment
Morimitsu et al. Recurrent partial kernel network for efficient optical flow estimation
Hu et al. Perceptual quality evaluation for motion deblurring
CN105654070A (en) Low-resolution face recognition method
Wang et al. A CBAM‐GAN‐based method for super‐resolution reconstruction of remote sensing image
Yan et al. Repeatable adaptive keypoint detection via self-supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant