CN107194872B - Remote sensing image super-resolution reconstruction method based on content-aware deep learning network - Google Patents


Info

Publication number
CN107194872B
CN107194872B (application CN201710301990.6A)
Authority
CN
China
Prior art keywords
image
complexity
content
super
deep learning
Prior art date
Legal status
Active
Application number
CN201710301990.6A
Other languages
Chinese (zh)
Other versions
CN107194872A (en)
Inventor
王中元 (Wang Zhongyuan)
韩镇 (Han Zhen)
杜博 (Du Bo)
邵振峰 (Shao Zhenfeng)
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201710301990.6A
Publication of CN107194872A
Application granted
Publication of CN107194872B
Status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 — Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network. The invention proposes a comprehensive measurement index and calculation method for image content complexity; on this basis, sample images are classified by content complexity, and three deep GAN network models of high, medium and low complexity are built and trained separately. Then, according to the content complexity of the input image to be super-resolved, the corresponding network is chosen for reconstruction. To improve the learning performance of the GAN network, the invention also gives an optimized loss function definition. The invention overcomes the contradiction between over-fitting and under-fitting that is common in machine-learning-based super-resolution reconstruction, and effectively improves the super-resolution reconstruction accuracy of remote sensing images.

Description

Remote sensing image super-resolution reconstruction method based on content-aware deep learning network
Technical Field
The invention belongs to the technical field of image processing, relates to an image super-resolution reconstruction method, and particularly relates to a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.
Background
A remote sensing image with high spatial resolution can describe ground features more finely and provide rich detail information, so images with high spatial resolution are usually desired. With the rapid development of space detection theory and technology, remote sensing images with meter-level and even sub-meter-level spatial resolution (such as IKONOS and QuickBird) have gradually come into use, but their temporal resolution is generally low. In contrast, some sensors with lower spatial resolution (e.g., MODIS) have high temporal resolution and can acquire a wide range of remote sensing images in a short time. If an image with high spatial resolution can be reconstructed from these lower-spatial-resolution images, a remote sensing image with both high spatial resolution and high temporal resolution can be obtained. It is therefore necessary to reconstruct lower-resolution remote sensing images to obtain higher-resolution images.
In recent years, deep learning has been widely used to solve various problems in computer vision and image processing. In 2014, C. Dong et al. of the Chinese University of Hong Kong first introduced deep CNN learning into image super-resolution reconstruction and obtained better results than the then-mainstream sparse representation methods; in 2015, J. Kim et al. of Seoul National University in Korea proposed a recursive-network-based improvement with further gains in performance; in 2016, Y. Romano et al. of Google developed a fast and accurate learning method; shortly thereafter, C. Ledig et al. of Twitter applied GAN (generative adversarial network) models to image super-resolution, achieving the best reconstruction results to date. Moreover, the GAN does not rely strictly on supervised learning and can be trained even without one-to-one pairs of high- and low-resolution image samples.
Once the deep learning model and network architecture are determined, the performance of a deep-learning-based super-resolution method is determined to a great extent by the quality of the network model training. A deep learning network model is not trained better the more intensively it is trained; rather, it should learn from samples sufficiently and appropriately (just as more layers do not always make a deep network model better). Complex images require more training samples so that more image features can be learned, but the network then easily overfits images with simple content, blurring their super-resolution results; conversely, reducing the training intensity avoids over-fitting on simple-content images but causes under-fitting on complex-content images, reducing the naturalness and fidelity of the reconstruction. How to train a network that simultaneously meets the high-quality reconstruction requirements of both complex and simple images is a problem that deep-learning-based methods cannot avoid in practical super-resolution applications.
Disclosure of Invention
In order to solve the technical problem, the invention provides a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.
The technical scheme adopted by the invention is as follows: a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network is characterized by comprising the following steps:
step 1: collecting high and low resolution remote sensing image samples, and performing block processing;
step 2: calculating the complexity of each image block, dividing the image blocks into a high class, a middle class and a low class according to the complexity, and respectively forming a training sample set with high complexity, middle complexity and low complexity;
step 3: respectively training three GAN networks of high, medium and low complexity using the obtained sample sets;
and 4, step 4: and calculating the complexity of the input image, and selecting a corresponding GAN network for reconstruction according to the complexity.
Compared with the existing image super-resolution method, the method has the following advantages and positive effects:
(1) by using the simple idea of image classification, the method successfully overcomes the common contradiction of over-fitting and under-fitting in the super-resolution reconstruction based on machine learning, and effectively improves the super-resolution reconstruction precision of the remote sensing image;
(2) the deep learning network model of the method is a GAN network, which during training does not depend on strictly aligned one-to-one high- and low-resolution sample blocks; this improves application universality and is particularly suitable for the multi-source asynchronous imaging environment of high- and low-resolution images in the remote sensing field.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
To facilitate understanding and implementation by those of ordinary skill in the art, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Referring to fig. 1, the remote sensing image super-resolution reconstruction method based on the content-aware deep learning network provided by the invention comprises the following steps:
step 1: collecting samples of the high-resolution and low-resolution remote sensing images, and uniformly cutting the high-resolution images into 128x128 image blocks and the low-resolution images into 64x64 image blocks;
step 2: calculating the complexity of each image block, dividing the image blocks into a high class, a middle class and a low class according to the complexity, and respectively forming a training sample set with high complexity, middle complexity and low complexity;
the computing principle and method of the image complexity are as follows:
the complexity of the image content comprises texture complexity and structural complexity, the information entropy and the gray scale consistency performance well describe the texture complexity, and the structural complexity is suitably described by the edge ratio of an object in the image. The content complexity measurement index C of the image is formed by weighting an information entropy H, a gray level consistency U and an edge ratio R according to the following formula:
C=wh×H+wu×U+we×R;
where w ish,wu,weEach is a respective weight, which is determined experimentally.
The calculation methods of the information entropy, gray-level uniformity, and edge ratio are given below.
(1) Entropy of information
The information entropy reflects the number of image gray levels and the frequency with which pixels of each gray level occur; the higher the entropy, the more complex the image texture. The image information entropy H is calculated as:
H = -Σ_{i=1}^{k} p_i log2 p_i,  with p_i = N_i / Σ_{j=1}^{k} N_j,
where N_i is the number of occurrences of gray level i and k is the number of gray levels.
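A minimal numpy sketch of this entropy computation. The base-2 logarithm is an assumption, as the surviving text does not state the logarithm base.

```python
import numpy as np

def entropy(img):
    """Shannon entropy H = -sum_i p_i * log2(p_i) over the gray
    levels actually present; p_i = N_i / (total pixel count)."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / img.size
    return float(-np.sum(p * np.log2(p)))

flat = np.full((8, 8), 7, dtype=np.uint8)                    # one gray level
checker = (np.indices((8, 8)).sum(0) % 2).astype(np.uint8)   # two equal levels
print(entropy(flat))      # 0.0 — the simplest possible texture
print(entropy(checker))   # 1.0 — one bit per pixel
```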
(2) Gray scale uniformity
Gray-level uniformity reflects how uniform the image is: a small value corresponds to a simple image, and a large value to a complex one. The gray-level uniformity U is calculated as:
U = Σ_{i=1}^{M} Σ_{j=1}^{N} ( f(i,j) - f̄(i,j) )²
where M and N are the number of rows and columns of the image, f(i,j) is the gray value at pixel (i,j), and f̄(i,j) is the mean gray value of the 3×3 neighborhood centered at (i,j).
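A sketch of the gray-level uniformity under the stated definition. The edge-replicated padding and the normalization by image size are assumptions, since the original formula image is not reproduced in the text.

```python
import numpy as np

def gray_uniformity(img):
    """Sum of squared deviations of each pixel from the mean of its
    3x3 neighborhood (edge-replicated), normalized by image size.
    Padding mode and normalization are assumptions."""
    f = img.astype(np.float64)
    padded = np.pad(f, 1, mode='edge')
    # mean of the 3x3 neighborhood centered at every pixel
    mean = sum(padded[dr:dr + f.shape[0], dc:dc + f.shape[1]]
               for dr in range(3) for dc in range(3)) / 9.0
    return float(np.sum((f - mean) ** 2) / f.size)

flat = np.full((16, 16), 100, dtype=np.uint8)
noisy = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(gray_uniformity(flat))                              # 0.0 — perfectly uniform
print(gray_uniformity(flat) < gray_uniformity(noisy))     # True
```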
(3) Edge ratio
The number of objects in an image directly reflects its complexity: many objects generally mean a complex image, and vice versa. Since counting objects involves complicated image segmentation and is inconvenient to compute, the number of object edges is used instead to indirectly reflect the number of objects and hence the image complexity. The proportion of object edges in the image is described by the edge ratio R, calculated as:
R = E / (M × N)
where M and N are the number of rows and columns of the image, and E is the number of edge pixels. Object edges show significant gray-level changes, so edges can be obtained by a difference algorithm; edge pixels are generally detected with an edge detection operator (such as the Canny or Sobel operator).
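The edge ratio and the weighted combination of the three indices into C can be sketched as follows. The simple gradient threshold (32) and the unit weights are placeholder assumptions: the patent suggests Canny or Sobel for edge detection and determines the weights experimentally.

```python
import numpy as np

def edge_ratio(img, thresh=32):
    """R = E / (M*N), with edge pixels E found here by a crude
    gradient-magnitude threshold (a Canny/Sobel detector would be
    the usual choice; the threshold value is an assumption)."""
    f = img.astype(np.float64)
    gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))
    gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))
    edges = (gx + gy) > thresh
    return float(edges.sum() / img.size)

def complexity_index(H, U, R, wh=1.0, wu=1.0, we=1.0):
    """C = w_h*H + w_u*U + w_e*R; the unit weights are placeholders
    for the experimentally determined values the patent leaves open."""
    return wh * H + wu * U + we * R

step = np.zeros((8, 8), dtype=np.uint8)
step[:, 4:] = 255                        # one vertical edge
print(edge_ratio(step))                  # 0.125 = 8 edge pixels / 64
```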
The number of image blocks in the high-complexity sample set is not less than 500,000; in the medium-complexity set, not less than 300,000; and in the low-complexity set, not less than 200,000.
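Partitioning the scored blocks into the three training sample sets might look like this. The percentile cut points are an assumption; the patent does not state how the high/medium/low boundaries are chosen.

```python
import numpy as np

def split_by_complexity(blocks, scores, lo_cut, hi_cut):
    """Partition image blocks into low/medium/high complexity sets
    by two cut points on the complexity score C."""
    low  = [b for b, s in zip(blocks, scores) if s < lo_cut]
    mid  = [b for b, s in zip(blocks, scores) if lo_cut <= s < hi_cut]
    high = [b for b, s in zip(blocks, scores) if s >= hi_cut]
    return low, mid, high

scores = np.array([0.1, 0.5, 0.9, 0.2, 0.8, 0.6])
blocks = list(range(len(scores)))            # stand-ins for image blocks
# 33rd/66th percentile cut points are an illustrative assumption
lo_cut, hi_cut = np.percentile(scores, [33, 66])
low, mid, high = split_by_complexity(blocks, scores, lo_cut, hi_cut)
print(len(low) + len(mid) + len(high))   # 6 — every block lands in one set
```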
step 3: respectively training three GAN networks of high, medium and low complexity using the obtained sample sets;
the loss function for GAN network training is defined as follows:
the loss function of GAN network training contains content loss, production-confrontation loss and total variation loss. Content loss characterizes the distortion of the image content, and the generation-confrontation loss describes the degree of distinction between the statistical properties of the generated result and data such as natural images, and total variation loss characterizes the continuity of the image content. The overall loss function consists of three loss function weights:
where w isv,wg,wtEach is a respective weight, which is determined experimentally.
The calculation method for each loss function is given below.
(1) Content loss
The traditional content loss function is the MSE (pixel-wise mean square error), which examines the loss of image content pixel by pixel; MSE-based network training dilutes the high-frequency structural components of the image, making it over-blurred. To overcome this drawback, a feature loss function is introduced here. Because manually defining and extracting valuable image features is complex work, while deep learning can extract features automatically, hidden-layer features obtained from VGG network training are used for the measurement. Let φ_{i,j} denote the feature map produced by the j-th convolutional layer before the i-th pooling layer of the VGG network. The feature loss is defined as the Euclidean distance between the VGG features of the reconstructed image G(I_LR) and the reference image I_HR, i.e.:
L_content = (1 / (W_{i,j} H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} ( φ_{i,j}(I_HR)_{x,y} - φ_{i,j}(G(I_LR))_{x,y} )²
where W_{i,j} and H_{i,j} are the dimensions of the VGG feature map.
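The feature loss above can be sketched with numpy arrays standing in for the φ_{i,j} activations; real use would take these maps from a pretrained VGG network, and the 14×14×256 shape below is a hypothetical example.

```python
import numpy as np

def vgg_feature_loss(feat_ref, feat_rec):
    """Squared Euclidean distance between two feature maps,
    normalized by the map's spatial size W*H. The random maps in the
    usage below only stand in for real VGG activations."""
    W, H = feat_ref.shape[:2]
    return float(np.sum((feat_ref - feat_rec) ** 2) / (W * H))

rng = np.random.default_rng(0)
phi_ref = rng.normal(size=(14, 14, 256))   # hypothetical VGG feature map
phi_rec = phi_ref.copy()
print(vgg_feature_loss(phi_ref, phi_rec))  # 0.0 for identical features
```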
(2) Generating-fighting loss
The generative-adversarial loss exploits the generative function of the GAN network, encouraging the network to produce solutions lying on the natural image manifold, so that the discriminator cannot distinguish the generated result from a natural image. The generative-adversarial loss is measured from the discriminator's classification probabilities over all training samples, as follows:
L_adv = Σ_{n=1}^{N} -log D(G(I_LR))
where D(G(I_LR)) is the probability that the discriminator D judges the reconstructed result G(I_LR) to be a natural image, and N is the total number of training samples.
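Given the discriminator's output probabilities, the generative-adversarial loss reduces to a small sum; a numpy sketch (the probability values below are illustrative, not from the patent):

```python
import numpy as np

def adversarial_loss(d_probs, eps=1e-12):
    """Sum over the N training samples of -log D(G(I_LR)), where
    d_probs are the discriminator's probabilities that each
    reconstruction is a natural image (clipped to avoid log 0)."""
    p = np.clip(np.asarray(d_probs, dtype=np.float64), eps, 1.0)
    return float(-np.sum(np.log(p)))

confident = adversarial_loss([0.99, 0.98, 0.97])   # discriminator fooled
fooled_not = adversarial_loss([0.10, 0.05, 0.02])  # discriminator sure it's fake
print(confident < fooled_not)   # True: loss falls as the generator improves
```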
(3) Total variation loss
The total variation loss is added to strengthen the local continuity of the learning result on the image content; it is calculated as:
L_tv = (1 / (W H)) Σ_{x=1}^{W} Σ_{y=1}^{H} ||∇G(I_LR)_{x,y}||
where W and H denote the width and height of the reconstructed image.
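A numpy sketch of the total variation term. The anisotropic (L1) form of the gradient norm is used here as an assumption; whether the patent intends the L1 or the isotropic L2 variant is not recoverable from the text.

```python
import numpy as np

def tv_loss(img):
    """Total variation of the reconstructed image, normalized by W*H:
    sums the absolute horizontal and vertical neighbor differences
    (anisotropic/L1 variant; an assumption)."""
    f = img.astype(np.float64)
    dh = np.abs(np.diff(f, axis=1)).sum()
    dv = np.abs(np.diff(f, axis=0)).sum()
    return float((dh + dv) / f.size)

flat = np.full((32, 32), 128, dtype=np.uint8)
stripes = np.tile(np.array([0, 255], dtype=np.uint8), (32, 16))
print(tv_loss(flat))                     # 0.0 — perfectly smooth
print(tv_loss(flat) < tv_loss(stripes))  # True — stripes pay a heavy TV penalty
```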
step 4: calculating the complexity of the input image, and selecting the corresponding GAN network for reconstruction according to that complexity.
The method specifically comprises the following substeps:
step 4.1: uniformly dividing an input image into 16 equal parts of subgraphs, calculating the complexity of each subgraph, and judging the subgraphs to belong to high, medium and low complexity types;
step 4.2: and selecting a corresponding GAN network according to the complexity type to perform super-resolution reconstruction.
The method classifies sample images according to the complexity of their content, constructs and trains deep network models of different complexities, and selects the corresponding network for reconstruction according to the content complexity of the input image to be super-resolved. A remote sensing image records a large-scale scene; unlike imagery dominated by fine ground-target detail, it contains many large spatially homogeneous regions of consistent content complexity, such as urban areas, dry farmland, paddy fields, lakes and mountainous regions, so it is especially well suited to pre-classified training and reconstruction.
The GAN deep learning network model is adopted not only because the GAN network currently gives the best super-resolution performance, but also because the high- and low-spatial-resolution remote sensing images used as training samples come from different sources and are multi-temporal images captured asynchronously, so no pixel-wise one-to-one alignment can exist between them. This greatly limits the training of CNN networks, whereas the GAN network, being an unsupervised learning network, does not suffer from this problem.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A remote sensing image super-resolution reconstruction method based on a content-aware deep learning network is characterized by comprising the following steps:
step 1: collecting high and low resolution remote sensing image samples, and performing block processing;
step 2: calculating the complexity of each image block, dividing the image blocks into a high class, a middle class and a low class according to the complexity, and respectively forming a training sample set with high complexity, middle complexity and low complexity;
the complexity of the image block is calculated by the following method:
C = w_h × H + w_u × U + w_e × R;
wherein C represents the complexity of the image block, H represents the image information entropy, U represents the gray-level uniformity of the image, R represents the edge ratio of the image, and w_h, w_u, w_e are the respective weights, determined by experiments;
step 3: respectively training three GAN networks of high, medium and low complexity using the obtained sample sets;
wherein the loss function of the GAN network training is defined as:
L = w_v × L_content + w_g × L_adv + w_t × L_tv
wherein L represents the loss function of network training, L_content represents the content loss function, L_adv represents the generative-adversarial loss function, and L_tv represents the total variation loss function; w_v, w_g, w_t are the respective weights, determined by experiments;
step 4: calculating the complexity of the input image, and selecting the corresponding GAN network for reconstruction according to that complexity;
the specific implementation of the step 4 comprises the following substeps:
step 4.1: uniformly dividing an input image, calculating the complexity of each sub-image, and judging the types of high, medium and low complexity;
step 4.2: and selecting a corresponding GAN network according to the complexity type to perform super-resolution reconstruction.
2. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, characterized in that: in step 1, the high resolution image is evenly sliced into 128x128 image blocks and the low resolution image is evenly sliced into 64x64 image blocks.
3. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, wherein the calculation formula of the image information entropy H is:
H = -Σ_{i=1}^{k} p_i log2 p_i,  with p_i = N_i / Σ_{j=1}^{k} N_j,
wherein N_i is the number of occurrences of gray level i and k is the number of gray levels.
4. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, wherein the image gray-level uniformity U is given by:
U = Σ_{i=1}^{M} Σ_{j=1}^{N} ( f(i,j) - f̄(i,j) )²
wherein M and N are the number of rows and columns of the image, f(i,j) is the gray value at pixel (i,j), and f̄(i,j) is the mean gray value of the 3×3 neighborhood centered at (i,j).
5. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, wherein the image edge ratio R is calculated by:
R = E / (M × N)
wherein M and N are the number of rows and columns of the image, and E is the number of edge pixels in the image, obtained by a difference algorithm.
6. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to any one of claims 1 to 5, characterized in that: in the training sample sets of step 2, the number of image blocks in the high-complexity training sample set is not less than 500,000, the number in the medium-complexity set is not less than 300,000, and the number in the low-complexity set is not less than 200,000.
7. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, wherein the content loss function L_content is:
L_content = (1 / (W_{i,j} H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} ( φ_{i,j}(I_HR)_{x,y} - φ_{i,j}(G(I_LR))_{x,y} )²
wherein φ_{i,j} represents the feature map obtained from the j-th convolutional layer before the i-th pooling layer of the VGG network, W_{i,j} and H_{i,j} represent the dimensions of the VGG feature map, I_HR represents the reference image, and G(I_LR) represents the reconstructed image.
8. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, wherein the generative-adversarial loss function L_adv is:
L_adv = Σ_{n=1}^{N} -log D(G(I_LR))
wherein G(I_LR) represents the reconstructed image, D(G(I_LR)) represents the probability that the discriminator D judges the reconstructed result to be a natural image, and N represents the total number of training samples.
9. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, wherein the total variation loss function L_tv is:
L_tv = (1 / (W H)) Σ_{x=1}^{W} Σ_{y=1}^{H} ||∇G(I_LR)_{x,y}||
wherein G(I_LR) represents the reconstructed image and W, H represent the width and height of the reconstructed image.
10. The remote sensing image super-resolution reconstruction method based on the content-aware deep learning network according to claim 1, characterized in that: in step 4.1, the input image is divided evenly into 16 equal parts of subgraphs.
CN201710301990.6A 2017-05-02 2017-05-02 Remote sensing image super-resolution reconstruction method based on content-aware deep learning network Active CN107194872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710301990.6A CN107194872B (en) 2017-05-02 2017-05-02 Remote sensing image super-resolution reconstruction method based on content-aware deep learning network


Publications (2)

Publication Number Publication Date
CN107194872A CN107194872A (en) 2017-09-22
CN107194872B 2019-08-20

Family

ID=59872637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710301990.6A Active CN107194872B (en) 2017-05-02 2017-05-02 Remote sensing image super-resolution reconstruction method based on content-aware deep learning network

Country Status (1)

Country Link
CN (1) CN107194872B (en)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767384B (en) * 2017-11-03 2021-12-03 电子科技大学 Image semantic segmentation method based on countermeasure training
CN111712830B (en) * 2018-02-21 2024-02-09 罗伯特·博世有限公司 Real-time object detection using depth sensors
CN108346133B (en) * 2018-03-15 2021-06-04 武汉大学 Deep learning network training method for super-resolution reconstruction of video satellite
CN108665509A (en) * 2018-05-10 2018-10-16 广东工业大学 A kind of ultra-resolution ratio reconstructing method, device, equipment and readable storage medium storing program for executing
CN108711141B (en) * 2018-05-17 2022-02-15 重庆大学 Motion blurred image blind restoration method using improved generation type countermeasure network
CN108876870B (en) * 2018-05-30 2022-12-13 福州大学 Domain mapping GANs image coloring method considering texture complexity
CN108961217B (en) * 2018-06-08 2022-09-16 南京大学 Surface defect detection method based on regular training
CN108830209B (en) * 2018-06-08 2021-12-17 西安电子科技大学 Remote sensing image road extraction method based on generation countermeasure network
CN108921791A (en) * 2018-07-03 2018-11-30 苏州中科启慧软件技术有限公司 Lightweight image super-resolution improved method based on adaptive important inquiry learning
CN110738597A (en) * 2018-07-19 2020-01-31 北京连心医疗科技有限公司 Size self-adaptive preprocessing method of multi-resolution medical image in neural network
CN109117944B (en) * 2018-08-03 2021-01-15 北京悦图数据科技发展有限公司 Super-resolution reconstruction method and system for ship target remote sensing image
CN109949219B (en) * 2019-01-12 2021-03-26 深圳先进技术研究院 Reconstruction method, device and equipment of super-resolution image
CN109903223B (en) * 2019-01-14 2023-08-25 北京工商大学 Image super-resolution method based on dense connection network and generation type countermeasure network
CN109785270A (en) * 2019-01-18 2019-05-21 四川长虹电器股份有限公司 A kind of image super-resolution method based on GAN
CN109951654B (en) * 2019-03-06 2022-02-15 Tencent Technology (Shenzhen) Co., Ltd. Video synthesis method, model training method and related device
CN110033033B (en) * 2019-04-01 2023-04-18 南京谱数光电科技有限公司 Generator model training method based on CGANs
CN110163852B (en) * 2019-05-13 2021-10-15 北京科技大学 Conveying belt real-time deviation detection method based on lightweight convolutional neural network
US11263726B2 (en) 2019-05-16 2022-03-01 Here Global B.V. Method, apparatus, and system for task driven approaches to super resolution
CN110599401A (en) * 2019-08-19 2019-12-20 Institute of Electronics, Chinese Academy of Sciences Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN110807740B (en) * 2019-09-17 2023-04-18 北京大学 Image enhancement method and system for monitoring scene vehicle window image
CN110689086B (en) * 2019-10-08 2020-09-25 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN112825187A (en) * 2019-11-21 2021-05-21 福州瑞芯微电子股份有限公司 Super-resolution method, medium and device based on machine learning
CN111144466B (en) * 2019-12-17 2022-05-13 武汉大学 Image sample self-adaptive depth measurement learning method
CN111260705B (en) * 2020-01-13 2022-03-15 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network
CN111275713B (en) * 2020-02-03 2022-04-12 武汉大学 Cross-domain semantic segmentation method based on countermeasure self-integration network
CN111915545B (en) * 2020-08-06 2022-07-05 中北大学 Self-supervision learning fusion method of multiband images
CN112700003A (en) * 2020-12-25 2021-04-23 深圳前海微众银行股份有限公司 Network structure search method, device, equipment, storage medium and program product
CN113139576B (en) * 2021-03-22 2024-03-12 广东省科学院智能制造研究所 Deep learning image classification method and system combining image complexity
CN113421189A (en) * 2021-06-21 2021-09-21 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image super-resolution processing method and device and electronic equipment
CN113538246B (en) * 2021-08-10 2023-04-07 西安电子科技大学 Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN116402691B (en) * 2023-06-05 2023-08-04 四川轻化工大学 Image super-resolution method and system based on rapid image feature stitching
CN117911285A (en) * 2024-01-12 2024-04-19 北京数慧时空信息技术有限公司 Remote sensing image restoration method based on time sequence image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825477A (en) * 2015-01-06 2016-08-03 南京理工大学 Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion
CN105931179A (en) * 2016-04-08 2016-09-07 武汉大学 Joint sparse representation and deep learning-based image super resolution method and system
CN106203269A (en) * 2016-06-29 2016-12-07 武汉大学 A kind of based on can the human face super-resolution processing method of deformation localized mass and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9589323B1 (en) * 2015-08-14 2017-03-07 Sharp Laboratories Of America, Inc. Super resolution image enhancement technique


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Image Super-Resolution Algorithms Based on Deep Learning; Hu Chuanping et al.; Journal of Railway Police College (铁道警察学院学报); 31 Dec. 2016; Vol. 26, No. 121; pp. 5-10

Also Published As

Publication number Publication date
CN107194872A (en) 2017-09-22

Similar Documents

Publication Publication Date Title
CN107194872B (en) Remote sensing image super-resolution reconstruction method based on content-aware deep learning network
US20210264568A1 (en) Super resolution using a generative adversarial network
Bergmann et al. Improving unsupervised defect segmentation by applying structural similarity to autoencoders
Zhou et al. Scale adaptive image cropping for UAV object detection
CN105139395B (en) SAR image segmentation method based on small echo pond convolutional neural networks
Gu et al. Blind image quality assessment via learnable attention-based pooling
CN107067405B (en) Remote sensing image segmentation method based on scale optimization
CN102402685B (en) Method for segmenting three Markov field SAR image based on Gabor characteristic
CN112926652B (en) Fish fine granularity image recognition method based on deep learning
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN109344818B (en) Light field significant target detection method based on deep convolutional network
CN104616308A (en) Multiscale level set image segmenting method based on kernel fuzzy clustering
Veeravasarapu et al. Adversarially tuned scene generation
CN109146925A (en) Conspicuousness object detection method under a kind of dynamic scene
CN113177592B (en) Image segmentation method and device, computer equipment and storage medium
CN114612456B (en) Billet automatic semantic segmentation recognition method based on deep learning
CN115205672A (en) Remote sensing building semantic segmentation method and system based on multi-scale regional attention
Asheghi et al. A comprehensive review on content-aware image retargeting: From classical to state-of-the-art methods
CN110717531A (en) Method for detecting classified change type based on uncertainty analysis and Bayesian fusion
Zhou et al. Attention transfer network for nature image matting
CN104732230A (en) Pathology image local-feature extracting method based on cell nucleus statistical information
Luo et al. Bi-GANs-ST for perceptual image super-resolution
CN116630971A (en) Wheat scab spore segmentation method based on CRF_Resunate++ network
Wang et al. Single image haze removal via attention-based transmission estimation and classification fusion network
Wagner et al. River water segmentation in surveillance camera images: A comparative study of offline and online augmentation using 32 CNNs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant