CN107123089B - Remote sensing image super-resolution reconstruction method and system based on depth convolution network - Google Patents


Info

Publication number: CN107123089B (application CN201710271199.5A)
Authority: CN (China)
Prior art keywords: image, space, remote sensing, layer, sensing image
Legal status: Active (the legal status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN107123089A
Inventors: 张洪群, 李欣, 韦宏卫, 吴业炜
Current and original assignee: Institute of Remote Sensing and Digital Earth of CAS
Application filed by Institute of Remote Sensing and Digital Earth of CAS
Priority to CN201710271199.5A
Publication of CN107123089A; application granted and published as CN107123089B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 3/4007: Interpolation-based scaling, e.g. bilinear interpolation
    • G06T 3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • Y02T 10/40: Engine management systems (under Y02T, climate change mitigation technologies related to transportation)

Abstract

The invention provides a remote sensing image super-resolution reconstruction method and system based on a depth convolution network. The method comprises the following steps: converting the remote sensing image to be processed from RGB space to YCbCr space and separating its brightness space from its chromaticity space; constructing a multi-layer depth convolution network, building a super-resolution reconstruction model on that network, and reconstructing the brightness space with the model to obtain the reconstructed brightness space; performing joint bilateral filtering on the chromaticity space with the reconstructed brightness space as the guide map to obtain the reconstructed chromaticity space; and integrating the reconstructed brightness space with the reconstructed chromaticity space and converting the integrated image from YCbCr space back to RGB space to obtain a super-resolution image whose resolution is higher than that of the remote sensing image to be processed. The method and system achieve super-resolution reconstruction of remote sensing images without relying on a multi-temporal image sequence of the same scene, and improve image resolution.

Description

Remote sensing image super-resolution reconstruction method and system based on depth convolution network
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a remote sensing image super-resolution reconstruction method and system based on a depth convolution network.
Background
With the deepening application of remote sensing technology in fields such as ground object observation and target recognition, the demand for high-resolution remote sensing images keeps increasing. Improving the hardware is the most direct way to obtain high-resolution remote sensing images, but it suffers from high cost, long development cycles, high maintenance difficulty, and poor flexibility. Super Resolution (SR) reconstruction, by contrast, is an economical and convenient technique that improves image resolution by reconstructing a higher-resolution image from the information in one or more low-resolution images. Applying SR reconstruction in the remote sensing field can reduce the resolution differences between images, which in turn eases the registration, mosaicking, and fusion of multi-source images, lays a foundation for multi-temporal ground feature observation and data processing, and facilitates target detection and recognition.
SR reconstruction was first proposed by Harris et al. in the 1960s. According to the number of low-resolution images required at reconstruction time, it can be classified into single-image and multi-image reconstruction, and three main families of methods are in current use: interpolation, reconstruction-based, and learning-based methods.
Interpolation is the earliest SR reconstruction approach and includes nearest-neighbor, bilinear, and bicubic interpolation; the gray value of each pixel to be interpolated is generated from the gray values of its known neighboring pixels. Among all reconstruction methods it has the lowest complexity and good real-time performance, but the results show obvious edge artifacts and poor detail recovery. Reconstruction-based methods, proposed by Tsai et al. and developed by Tekalp, Stark, and others, model the image formation process: a specific degradation model and an observed low-resolution image sequence provide constraints for reconstructing the high-resolution image, and different information about the same scene is fused to obtain a high-quality result.
Learning-based methods have developed rapidly in recent years. They overcome the limitation that the resolution-improvement factor of reconstruction-based methods is hard to determine, can operate on a single image, and learn the intrinsic correspondence between high- and low-resolution images from samples by constructing paired high- and low-resolution image libraries. The sparse representation method proposed by Yang et al., from the compressed sensing perspective, assumes that an input low-resolution image block can be sparsely and linearly represented by the atoms of an overcomplete image-block dictionary; reconstruction is based on a joint dictionary of high- and low-resolution image blocks and works well, and Jin et al. applied it to improve the resolution of remote sensing cloud images, but its solving speed is low, its requirements on the overcomplete dictionary are high, and its generality is limited. Later, combining ideas from deep learning, Chao et al. built the SRCNN (Super Resolution Convolutional Neural Network) model on a three-layer convolutional neural network structure, directly learning an end-to-end mapping between low-resolution and high-resolution images; its efficiency and reconstruction quality rank among the best of current SR methods, but the required training time is long, the overfitting that may occur during learning is not addressed, and the chromaticity-space interpolation result is not processed. In China, Xu Ran et al. proposed an SR reconstruction algorithm based on a two-channel convolutional network, which trades model complexity and efficiency for an improved reconstruction signal-to-noise ratio, but still suffers from overfitting and edge problems.
Disclosure of Invention
In view of the problems, the invention provides a remote sensing image super-resolution reconstruction method and system based on a depth convolution network, which have short training time, strong adaptability and good super-resolution effect.
According to an aspect of the present invention, there is provided a remote sensing image super-resolution reconstruction method based on a depth convolution network, including: step S1, converting a remote sensing image to be processed from RGB space to YCbCr space, and separating out the brightness space and the chromaticity space of the remote sensing image to be processed; step S2, constructing a multi-layer depth convolution network, constructing a super-resolution reconstruction model based on the multi-layer depth convolution network, and reconstructing the brightness space of the remote sensing image to be processed with the super-resolution reconstruction model to obtain the reconstructed brightness space; step S3, guiding the chromaticity space of the remote sensing image to be processed to perform joint bilateral filtering with the reconstructed brightness space as the guide map, to obtain the reconstructed chromaticity space; and step S4, integrating the reconstructed brightness space and the reconstructed chromaticity space, and converting the integrated remote sensing image to be processed from YCbCr space back to RGB space to obtain a super-resolution image whose resolution is higher than that of the remote sensing image to be processed.
According to another aspect of the present invention, there is provided a remote sensing image super-resolution reconstruction system based on a depth convolution network, including: a space conversion module, which converts the remote sensing image to be processed from RGB space to YCbCr space, separates out the brightness space and the chromaticity space of the remote sensing image to be processed, and sends them to the brightness space reconstruction module and the chromaticity space reconstruction module respectively; a brightness space reconstruction module, which constructs a multi-layer depth convolution network, builds a super-resolution reconstruction model based on the multi-layer depth convolution network, reconstructs the brightness space sent from the space conversion module using the super-resolution reconstruction model to obtain the reconstructed brightness space of the remote sensing image to be processed, and sends it to the chromaticity space reconstruction module and the integration module; a chromaticity space reconstruction module, which guides the chromaticity space of the remote sensing image to be processed to perform joint bilateral filtering with the reconstructed brightness space sent from the brightness space reconstruction module as the guide map, obtains the reconstructed chromaticity space, and sends it to the integration module; and an integration module, which integrates the reconstructed brightness space and the reconstructed chromaticity space sent by the brightness space reconstruction module and the chromaticity space reconstruction module, and converts the integrated remote sensing image to be processed from YCbCr space back to RGB space to obtain a super-resolution image whose resolution is higher than that of the remote sensing image to be processed.
The remote sensing image super-resolution reconstruction method and system based on the depth convolution network have high applicability: reconstruction does not require inputting remote sensing image sequences of the same scene at different times, only each remote sensing image to be reconstructed together with a pre-trained reconstruction model, so reconstruction is fast. In addition, using the reconstructed brightness space as the guide map for joint bilateral filtering when reconstructing the chromaticity space effectively weakens the blocking effect of the interpolated chromaticity space of the remote sensing image.
Drawings
Other objects and results of the present invention will become more apparent and readily appreciated by reference to the following detailed description and claims, taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 is a schematic flow chart of a remote sensing image super-resolution reconstruction method based on a depth convolution network;
FIG. 2 is a schematic flow chart of the present invention for constructing a super-resolution reconstruction model;
FIG. 3 is a schematic flow chart of the chromaticity space reconstruction of a remote sensing image to be processed according to the invention;
FIG. 4 is a block diagram of a remote sensing image super-resolution reconstruction system based on a depth convolution network;
FIG. 5 is a schematic flow chart of a preferred embodiment of performing the super-resolution reconstruction of the remote sensing image according to the present invention by using the super-resolution reconstruction system of the remote sensing image according to the present invention;
FIG. 6 is a schematic structural diagram of a super-resolution reconstruction model according to the present invention;
FIG. 7 is a comparison before and after chromaticity-space joint bilateral filtering, in which FIG. 7a is the Cb space of the remote sensing image to be processed before joint filtering, FIG. 7b is the Cb space after joint filtering, FIG. 7c is the Cr space before joint filtering, and FIG. 7d is the Cr space after joint filtering;
FIG. 8 is a graph comparing the reconstruction effect of the present remote sensing image super-resolution reconstruction method with that of other SR reconstruction methods, wherein FIG. 8a is the 2x reconstruction result of the bicubic interpolation method, FIG. 8b is the 2x reconstruction result of the SRCNN method, FIG. 8c is the 2x reconstruction result of the present remote sensing image super-resolution reconstruction method, and FIG. 8d is the original high-resolution remote sensing image.
In the drawings, like reference numerals designate similar or corresponding features or functions.
Detailed Description
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more embodiments. It may be evident, however, that such embodiment(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more embodiments.
Various embodiments according to the present invention will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a remote sensing image super-resolution reconstruction method based on a depth convolution network, and as shown in fig. 1, the remote sensing image super-resolution reconstruction method comprises the following steps:
step S1, converting a remote sensing image to be processed from an RGB space to a YCbCr space, and separating out a brightness space and a chromaticity space of the remote sensing image to be processed;
step S2, constructing a multi-layer depth convolution network, constructing a super-resolution reconstruction model based on the multi-layer depth convolution network, and reconstructing the brightness space of the remote sensing image to be processed by using the super-resolution reconstruction model to obtain the brightness space of the remote sensing image to be processed after reconstruction;
step S3, guiding the chromaticity space of the remote sensing image to be processed to carry out joint bilateral filtering by taking the reconstructed brightness space as the guide map, so as to obtain the reconstructed chromaticity space of the remote sensing image to be processed, wherein the joint bilateral filtering expression is shown in formula (1):

J_p = (1/K_p) · Σ_{q∈Ω} I_q · f(‖p − q‖) · g(‖Ĩ_p − Ĩ_q‖)   (1)

wherein K_p is a regularization factor (the sum of the weights f·g over Ω), Ĩ is the introduced guide map, which is the reconstructed brightness space, p and q denote pixel coordinates in the remote sensing image to be processed, I_q is the input (chromaticity) value at q, J_p is the output at the corresponding position, f and g are weight distribution functions, and Ω is the scope of the functions;
and step S4, integrating the reconstructed brightness space and the reconstructed chromaticity space of the remote sensing image to be processed, and converting the integrated remote sensing image to be processed from the YCbCr space back to the RGB space to obtain a super-resolution image of the remote sensing image to be processed, wherein the resolution of the super-resolution image is higher than that of the remote sensing image to be processed.
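Steps S1 and S4 hinge on a lossless round trip between RGB and YCbCr. The sketch below, using NumPy and the ITU-R BT.601 full-range matrix (an assumption; the patent does not name a specific YCbCr variant), separates the three planes and recombines them:

```python
import numpy as np

# ITU-R BT.601 full-range conversion matrix (an assumption; the patent
# does not specify which YCbCr definition it uses).
RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114    ],
                      [-0.168736, -0.331264,  0.5      ],
                      [ 0.5,      -0.418688, -0.081312 ]])

def rgb_to_ycbcr(img):
    """Separate an RGB image (H, W, 3), float in [0, 1], into Y, Cb, Cr planes."""
    ycc = img @ RGB2YCBCR.T
    ycc[..., 1:] += 0.5          # centre the chroma planes around 0.5
    return ycc[..., 0], ycc[..., 1], ycc[..., 2]

def ycbcr_to_rgb(y, cb, cr):
    """Recombine (possibly reconstructed) Y, Cb, Cr planes back into RGB."""
    ycc = np.stack([y, cb - 0.5, cr - 0.5], axis=-1)
    return ycc @ np.linalg.inv(RGB2YCBCR).T

rgb = np.random.default_rng(0).random((4, 4, 3))
y, cb, cr = rgb_to_ycbcr(rgb)
back = ycbcr_to_rgb(y, cb, cr)
```

In the patent's pipeline, the Y plane between these two calls would be replaced by the model output and the Cb/Cr planes by the joint bilateral filtering result.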
Preferably, the above remote sensing image super-resolution reconstruction method further comprises:
before step S1, preprocessing such as geometric correction, radiometric correction, atmospheric correction, and denoising is performed on the remote sensing image to be processed, to eliminate geometric distortion, radiometric distortion, atmospheric extinction, and similar problems in the image to be reconstructed;
between step S3 and step S4, adaptive nonlinear unsharp masking (Adaptive Nonlinear Unsharp Mask, ANUSM) is applied to the reconstructed brightness space and the reconstructed chromaticity space of the remote sensing image to be processed, that is, the Y, Cb, and Cr components of the reconstructed remote sensing image are each enhanced according to formula (2):

g(x, y) = f(x, y) + k(x, y) · [f(x, y) − f̄(x, y)]   (2)

wherein (x, y) are the coordinates of a pixel in the brightness space or chromaticity space of the remote sensing image to be processed, g(x, y) is the enhanced Y, Cb, or Cr result, f(x, y) is the brightness-space or chromaticity-space reconstruction result, i.e. the reconstructed pixel value, f̄(x, y) is the passivated (blurred) image, and k(x, y) is the adaptive enhancement factor given by formula (3), in which f_max(x, y) is the maximum pixel value in the corresponding spatial reconstruction result.
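The enhancement of formula (2) can be sketched as follows; a 3x3 box blur stands in for the patent's passivation method, and a constant gain stands in for the adaptive k(x, y) of formula (3), both of which are assumptions of this sketch:

```python
import numpy as np

def box_blur(f):
    """3x3 mean filter used as the 'passivated' image f-bar (an illustrative
    stand-in; the patent's specific passivation method is not reproduced)."""
    p = np.pad(f, 1, mode='edge')
    H, W = f.shape
    return sum(p[i:i+H, j:j+W] for i in range(3) for j in range(3)) / 9.0

def unsharp_mask(f, k=0.8):
    """Formula (2): g = f + k * (f - blurred f), amplifying high-frequency
    detail. A constant gain k replaces the adaptive k(x, y) of formula (3)."""
    return f + k * (f - box_blur(f))

img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # vertical step edge
sharp = unsharp_mask(img)
```

On the step edge the result overshoots above 1 on the bright side and undershoots below 0 on the dark side, which is the sharpening behaviour the ANUSM step relies on.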
In the remote sensing image super-resolution reconstruction method, ANUSM enhancement is performed on the reconstructed brightness space and chromaticity space, so that the visual effect of the reconstructed remote sensing image to be processed is further enhanced, and the remote sensing image super-resolution reconstruction method can automatically perform self-adaptive adjustment according to different remote sensing images to be processed and has strong universality.
In step S2, as shown in fig. 2, the method for constructing a super-resolution reconstruction model based on the deep convolutional network includes:
step S21, constructing an image training set. Specifically, a set number of first images are selected and each first image is converted into YCbCr space; the brightness space component of each first image is downsampled to obtain the brightness space component of a second image, and bicubic interpolation is applied to the brightness space component of the second image so that it has the same size as the brightness space component of the first image; each second image and first image are then cut into blocks to obtain a plurality of one-to-one corresponding second training image blocks and first training image blocks, wherein the resolution of the first image is higher than that of the second image, and the resolution of the first training image block is higher than that of the second training image block;
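Step S21 can be sketched as follows. The function name is hypothetical, and nearest-neighbour zooming stands in for the patent's bicubic interpolation to keep the sketch dependency-free:

```python
import numpy as np

def make_training_pairs(hr_y, scale=2, patch=8, stride=8):
    """Build (low-res, high-res) luminance patch pairs as in step S21:
    the HR brightness plane is downsampled, upsampled back to full size
    (np.kron nearest-neighbour zoom stands in for bicubic interpolation),
    and both planes are cut into aligned blocks."""
    H, W = hr_y.shape
    lr = hr_y[::scale, ::scale]                            # downsample
    lr_up = np.kron(lr, np.ones((scale, scale)))[:H, :W]   # crude upsample
    pairs = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            pairs.append((lr_up[i:i+patch, j:j+patch],     # second (input) block
                          hr_y[i:i+patch, j:j+patch]))     # first (target) block
    return pairs

hr = np.random.default_rng(2).random((32, 32))
pairs = make_training_pairs(hr)                            # 4 x 4 = 16 pairs
```

Each pair keeps the one-to-one correspondence the patent requires: same spatial location, same block size, differing only in effective resolution.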
Step S22, constructing a multi-layer depth convolution network structure. Specifically, an m-layer depth convolution network structure with parameters is constructed, in which a parameter-corrected linear unit (Parametric Rectified Linear Unit, PReLU) layer and a local response normalization (Local Response Normalization, LRN) layer are added after each of the first m−1 layers. The mapping relation between the input and the output of the multi-layer depth convolution network structure is shown in formula (4):

X_i = F_m(Y_i, Θ)   (4)

wherein X_i is the output image, Y_i is the input image, F_m is the mapping relation from the input image Y_i to the output image X_i through the m layers, and Θ is the set of learned parameters.

The PReLU layer is activated according to formula (5):

P(x) = max(0, x) + λ · min(0, x)   (5)

wherein P is the PReLU operator and λ is the learned PReLU parameter (the slope for negative inputs).

The LRN layer is activated according to formula (6):

N(x_c) = x_c / (k + α · Σ_{c′ ∈ n(c)} x_{c′}²)^β   (6)

wherein k is an initialization constant, n is the local size for normalization (the number of adjacent feature maps c′ over which the sum runs), α is a scaling factor, and β is an exponential term.
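Formulas (5) and (6) can be exercised numerically; the hyper-parameter values below are common defaults, not values taken from the patent:

```python
import numpy as np

def prelu(x, lam=0.25):
    """Formula (5): identity for positive inputs, slope lam for negative ones.
    lam is learned in the patent; it is fixed here for illustration."""
    return np.maximum(0, x) + lam * np.minimum(0, x)

def lrn(x, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Formula (6), local response normalisation across the channel axis
    (axis 0 of x). The hyper-parameters are the common AlexNet-style
    defaults, an assumption of this sketch."""
    C = x.shape[0]
    out = np.empty_like(x)
    for c in range(C):
        lo, hi = max(0, c - n // 2), min(C, c + n // 2 + 1)
        out[c] = x[c] / (k + alpha * np.sum(x[lo:hi] ** 2, axis=0)) ** beta
    return out

vals = prelu(np.array([-2.0, -0.5, 0.0, 3.0]))   # negative side scaled by lam
normed = lrn(np.ones((8, 4)))                    # 8 channels of 4 values each
```

PReLU keeps gradients alive on the negative side (unlike plain ReLU), and LRN damps each activation by the energy of its neighbouring feature maps; the patent credits both layers with faster convergence and resistance to overfitting.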
step S23, constructing the super-resolution reconstruction model, that is, determining the parameters of the above multi-layer depth convolution network structure to form the super-resolution reconstruction model, where the model is obtained by training in the multi-layer depth convolution network structure with the second training image blocks of the image training set as input and the first training image blocks as the target output, preferably including:

training on the image training set in the multi-layer depth convolution network structure;
adopting as the loss function the root mean square error between the outputs of the second training image blocks in the convolution network structure and their corresponding first training image blocks in the image training set:

L(Θ) = √( (1/N) · Σ_{i=1}^{N} ‖F_m(Y_i, Θ) − X_i‖² )   (7)

wherein N is the number of samples in the image training set;
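A minimal sketch of the loss of formula (7), with the function name chosen for illustration:

```python
import numpy as np

def rmse_loss(pred, target):
    """Formula (7): root mean square error between the network outputs on
    the second (low-resolution) training blocks and the corresponding
    first (high-resolution) training blocks."""
    return np.sqrt(np.mean((pred - target) ** 2))

batch_pred = np.zeros((4, 8, 8))   # stand-in network outputs
batch_true = np.ones((4, 8, 8))    # stand-in high-resolution targets
loss = rmse_loss(batch_pred, batch_true)
```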
updating the parameters of the multi-layer depth convolution network by gradient descent so as to minimize the loss function, wherein the parameter values corresponding to the minimum of the loss function are the parameters of the multi-layer depth convolution network structure. The convolution kernel update process is:

Δ_{t+1} = μ · Δ_t − η · ∂L/∂W_l,   W_l ← W_l + Δ_{t+1}   (8)

wherein W_l is the convolution kernel of layer l, Δ is the weight variation, μ is the impulse (momentum) unit, and η is the learning rate.

The PReLU parameter λ_l is updated by the same momentum rule:

δ_{t+1} = μ · δ_t − η · ∂L/∂λ_l,   λ_l ← λ_l + δ_{t+1}   (9)
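The momentum updates of formulas (8) and (9) can be illustrated on a toy one-parameter problem; the quadratic loss stands in for the actual network loss, and the μ and η values are illustrative assumptions:

```python
def momentum_step(w, grad, velocity, mu=0.9, eta=0.01):
    """Formulas (8)-(9): the velocity (weight variation Delta) accumulates
    mu times its previous value minus eta times the gradient, and is then
    added to the parameter."""
    velocity = mu * velocity - eta * grad
    return w + velocity, velocity

# Minimise L(w) = w^2 (gradient 2w) as a stand-in for the RMSE loss (7).
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, 2 * w, v)
```

After 200 steps the parameter has converged close to the minimiser w = 0; the same rule is applied per convolution kernel W_l and per PReLU slope λ_l in the patent.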
preferably, in step S23, training is performed in the above multi-layer depth convolution network structure using the second training image blocks of the image training set as input, and the convolution result of each layer is as shown in formulas (10) and (11):

F_l(Y_i) = P(W_l * N_{l−1}(Y_i) + B_l)   (10)

N_l(Y_i) = LRN(F_l(Y_i))   (11)

wherein l is the layer index, m is the total number of layers, i is the image block index, and Y_i is the i-th second training image block; W_l is the convolution kernel of layer l, "*" is the convolution operator, B_l is the bias vector of the layer-l convolution, and N_{l−1}(Y_i) is the output of the (l−1)-th LRN layer, with N_0(Y_i) = Y_i; λ_l is the PReLU parameter of layer l, F_l(Y_i) is the output of the i-th second training image block Y_i at layer l, and n_l is the normalized local size of the LRN at layer l, applied as in formula (6). Since only the first m−1 layers carry PReLU and LRN, the final layer output is the linear convolution F_m(Y_i) = W_m * N_{m−1}(Y_i) + B_m.
Preferably, in step S2, reconstructing the luminance space of the remote sensing image to be processed using the super-resolution reconstruction model includes: performing bicubic interpolation on the brightness space component of the remote sensing image to be processed, and reconstructing the brightness space component by using a super-resolution reconstruction model to obtain the brightness space of the remote sensing image to be processed after reconstruction.
In step S3, as shown in fig. 3, the method for reconstructing a chromaticity space from the reconstructed luminance space includes:
step S31, performing bicubic interpolation on the to-be-reconstructed chroma space component (Cb space component and Cr space component) to obtain a chroma space primary reconstruction result.
Step S32, taking the brightness space reconstruction result of the remote sensing image to be processed as the guide map, joint bilateral filtering is applied to the chromaticity space primary reconstruction result to obtain the final chromaticity space reconstruction result of the remote sensing image to be processed.
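Steps S31 and S32 can be sketched in NumPy as follows. Gaussian choices for f and g and the σ values are illustrative assumptions, since the patent only names them as weight distribution functions:

```python
import numpy as np

def joint_bilateral_filter(I, guide, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Joint bilateral filter in the shape of formula (1): the spatial
    weight f depends on the pixel offset p - q, while the range weight g
    is computed on the *guide* image (the reconstructed brightness plane),
    so chroma edges follow luma edges. Parameter values are illustrative."""
    H, W = I.shape
    pad = radius
    Ip = np.pad(I, pad, mode='edge')
    Gp = np.pad(guide, pad, mode='edge')
    out = np.zeros_like(I)
    norm = np.zeros_like(I)          # accumulates the regularisation factor K_p
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_I = Ip[pad+dy:pad+dy+H, pad+dx:pad+dx+W]
            shifted_G = Gp[pad+dy:pad+dy+H, pad+dx:pad+dx+W]
            w = (np.exp(-(dx*dx + dy*dy) / (2 * sigma_s**2)) *
                 np.exp(-(guide - shifted_G)**2 / (2 * sigma_r**2)))
            out += w * shifted_I
            norm += w
    return out / norm                # division by K_p

rng = np.random.default_rng(1)
guide = np.tile(np.linspace(0, 1, 8), (8, 1))        # smooth luma gradient
chroma = guide + 0.05 * rng.standard_normal((8, 8))  # noisy chroma plane
filtered = joint_bilateral_filter(chroma, guide)
```

Because the range weight is taken on the guide rather than on the chroma plane itself, noise in the chroma plane is averaged away while edges present in the luma guide are preserved, which is how the patent suppresses the blocking effect of chroma interpolation.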
According to the remote sensing image super-resolution reconstruction method based on the depth convolution network, the PReLU and LRN layers in the multi-layer depth convolution network of the super-resolution reconstruction model can accelerate the model convergence speed, the training efficiency is improved, and the super-resolution reconstruction method has stronger anti-overfitting capability.
Fig. 4 is a block diagram of a remote sensing image super-resolution reconstruction system based on a depth convolution network according to the present invention, and as shown in fig. 4, the remote sensing image super-resolution reconstruction system 100 based on a depth convolution network includes:
the space conversion module 120 converts the remote sensing image to be processed from RGB space to YCbCr space, separates out the brightness space and the chromaticity space of the remote sensing image to be processed, and sends the brightness space and the chromaticity space to the brightness space reconstruction module 130 and the chromaticity space reconstruction module 140 respectively;
the luminance space reconstruction module 130 constructs a multi-layer depth convolution network, constructs a super-resolution reconstruction model based on the multi-layer depth convolution network, reconstructs the luminance space of the remote sensing image to be processed transmitted from the space conversion module 120 by using the super-resolution reconstruction model, obtains the reconstructed luminance space of the remote sensing image to be processed, and transmits the reconstructed luminance space to the chromaticity space reconstruction module 140 and the integration module 160;
the chromaticity space reconstruction module 140 guides the chromaticity space of the remote sensing image to be processed to perform joint bilateral filtering by taking the reconstructed luminance space of the remote sensing image to be processed transmitted from the luminance reconstruction module 130 as a guide map, so as to obtain the reconstructed chromaticity space of the remote sensing image to be processed, and sends the reconstructed chromaticity space to the integration module 160, wherein the joint bilateral filtering performs filtering according to the formula (1);
The integrating module 160 integrates the reconstructed luminance space and the reconstructed chrominance space of the remote sensing image to be processed, which are sent by the luminance space reconstructing module 130 and the chrominance space reconstructing module 140, and converts the integrated remote sensing image to be processed from the YCbCr space back to the RGB space, so as to obtain a super-resolution image of the remote sensing image to be processed, wherein the resolution of the super-resolution image is higher than that of the remote sensing image to be processed.
Preferably, the remote sensing image super-resolution reconstruction system 100 further includes:
a preprocessing module 110, configured to preprocess the remote sensing image to be processed, and send the preprocessed remote sensing image to the space conversion module 120, where the preprocessing includes one or more of geometric correction, radiation correction, atmospheric correction, and denoising;
the post-processing enhancement module 150 is respectively connected with the luminance space reconstruction module 130, the chromaticity space reconstruction module 140 and the integration module 160, and is used for receiving the reconstructed luminance space and the reconstructed chromaticity space of the remote sensing image to be processed from the luminance space reconstruction module 130 and the chromaticity space reconstruction module 140, performing adaptive nonlinear unsharp mask processing on the reconstructed luminance space and the reconstructed chromaticity space, and sending the processed luminance space and chromaticity space to the integration module 160.
According to the remote sensing image super-resolution reconstruction system based on the depth convolution network, a user can realize super-resolution reconstruction and enhancement of a single image under the condition of not depending on a multi-temporal remote sensing image sequence of the same scene, the image resolution is improved, and good balance is achieved between reconstruction efficiency and processing effect.
In addition, the remote sensing image super-resolution reconstruction system 100 based on the depth convolution network preferably further includes a storage module 170, connected to the preprocessing module 110, the space conversion module 120, the brightness space reconstruction module 130, the chromaticity space reconstruction module 140, the post-processing enhancement module 150, and the integration module 160 respectively, and configured to store the intermediate results of each module.
As shown in fig. 4, the luminance space reconstruction module 130 includes:
the image training set construction unit 131, which selects a set number of first images, converts each first image into YCbCr space, downsamples the brightness space component of each first image to obtain the brightness space component of a second image, applies bicubic interpolation to the brightness space component of the second image so that it has the same size as that of the first image, and cuts each second image and first image into blocks to obtain a plurality of one-to-one corresponding second training image blocks and first training image blocks forming the image training set, which is sent to the super-resolution reconstruction model construction unit 133, wherein the resolution of the first image is higher than that of the second image, and the resolution of the first training image block is higher than that of the second training image block;
A network structure construction unit 132, which constructs an m-layer deep convolutional network structure with parameters, where each of the first m-1 layers is followed by a parameter correction linear unit layer (PReLU layer) and a local response normalization layer (LRN layer);
the super-resolution reconstruction model construction unit 133 includes a training subunit 133-1, which calls the multi-layer deep convolutional network structure constructed by the network structure construction unit and the image training set sent by the image training set construction unit, takes the second training image blocks of the image training set as input and the first training image blocks as output to train the multi-layer deep convolutional network structure, determines the parameters of the multi-layer deep convolutional network structure, and obtains the super-resolution reconstruction model. When the second training image blocks are taken as input for training in the multi-layer deep convolutional network structure, the convolution results of each layer are as shown in formulas (10) and (11):
F_l(Y_i) = P(W_l * N_{l-1}(Y_i) + B_l)    (10)

N_l(Y_i) = F_l(Y_i) / (k + (α/n_l) · Σ F_l(Y_i)²)^β    (11)

wherein l is the layer index, m is the total number of layers, i is the image block index, and Y_i is the ith second training image block (with N_0(Y_i) = Y_i); W_l is the convolution kernel of the lth layer, "*" is the convolution symbol, B_l is the bias group vector of the lth layer convolution, and N_{l-1}(Y_i) is the output of the (l-1)th LRN layer; λ_l is the PReLU parameter of the lth layer, F_l(Y_i) is the output of the ith second training image block Y_i at the lth layer, and n_l is the normalized local dimension size of the lth layer.
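The per-layer computation described above can be sketched as follows: a minimal NumPy illustration of a valid 2-D convolution followed by a PReLU activation on a single-channel feature map (kernel sizes, channel counts and the LRN step are omitted here; all names are illustrative, not the patent's implementation):

```python
import numpy as np

def conv2d_valid(x, w, b):
    """Valid 2-D convolution of a single-channel image x with kernel w plus bias b."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # flip the kernel for a true convolution (vs. correlation)
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w[::-1, ::-1]) + b
    return out

def prelu(x, lam):
    """PReLU activation: identity for positive inputs, slope lam for negative ones."""
    return np.maximum(x, lam * x)

x = np.arange(16, dtype=float).reshape(4, 4) - 8.0   # toy feature map with negatives
w = np.zeros((3, 3)); w[1, 1] = 1.0                  # identity kernel
y = prelu(conv2d_valid(x, w, 0.0), 0.25)             # negatives scaled by 0.25
```

Note how valid convolution shrinks a 4×4 input to 2×2; the same border loss is why the label blocks in the training set are smaller than the input blocks.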
The training subunit 133-1 may obtain multiple candidate values of each parameter from multiple pairs of training image blocks, and may determine the final parameter value by fitting, averaging, or a similar method; preferably, the final parameter value is determined by minimizing a loss function. That is, the super-resolution reconstruction model construction unit 133 includes a training subunit 133-1, an updating subunit 133-2, and a screening subunit 133-3, wherein,
the training subunit 133-1 invokes the multi-layer deep convolutional network structure constructed by the network structure construction unit and the image training set sent by the image training set construction unit, takes a second training image block of the image training set as input, and takes a first training image block as output to train in the multi-layer deep convolutional network structure;
an updating subunit 133-2, for updating the parameters of the multi-layer deep convolutional network structure by the gradient descent method so that the loss function in the screening subunit 133-3 decreases, wherein the convolution kernel W_l is updated according to the above formula (7), and the weight of the PReLU parameter λ_l is updated according to formula (8);
and the screening subunit 133-3, which uses, as the loss function, the root mean square error between the first training image blocks and the outputs produced by their corresponding second training image blocks in the multi-layer deep convolutional network, and screens out the parameter values corresponding to the minimum of the loss function.
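The screening criterion above can be illustrated with a short NumPy sketch of a root-mean-square-error loss between predicted blocks and their high-resolution references (the array contents are illustrative only):

```python
import numpy as np

def rmse_loss(pred_blocks, ref_blocks):
    """Root mean square error between predicted and reference training blocks."""
    diff = pred_blocks.astype(float) - ref_blocks.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
ref  = np.array([[1.0, 2.0], [3.0, 6.0]])
loss = rmse_loss(pred, ref)   # sqrt(mean([0, 0, 0, 4])) = 1.0
```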
In a preferred embodiment of the present invention, the above image super-resolution processing is performed on the remote sensing image by using the above remote sensing image super-resolution reconstruction system, as shown in fig. 5, including:
Step S100, the preprocessing unit performs preprocessing on a selected, read-in remote sensing image; for example, the remote sensing image to be processed may be simulated by downsampling a Pleiades high-resolution remote sensing image. The ENVI software is called: its geometric correction module selects corresponding control points and a correction model for the remote sensing image to be reconstructed, and basic parameters are input to realize geometric correction of the remote sensing image; the radiation correction module and the atmospheric correction module similarly complete the corresponding radiation correction and atmospheric correction processing. The result of the remote sensing image preprocessing is sent to the storage module for later use by the space conversion module. Preferably, wavelet denoising with a threshold value of 70 is adopted to denoise the remote sensing image.
In step S200, the space conversion unit reads the preprocessed remote sensing image in step S100, converts the remote sensing image from RGB space to YCbCr space, and stores the obtained luminance space component (i.e., luminance Y component) and chrominance space component (i.e., cb space component and Cr space component) of the remote sensing image in the storage unit.
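The RGB-to-YCbCr separation in step S200 can be sketched with the ITU-R BT.601 full-range conversion; the patent does not state which coefficient set is used, so BT.601 is an assumption here:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an HxWx3 RGB array (0-255) to Y, Cb, Cr planes (BT.601, full range)."""
    img = img.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b            # luminance component
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0  # blue-difference chroma
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0  # red-difference chroma
    return y, cb, cr

gray = np.full((2, 2, 3), 128.0)       # a neutral gray patch
y, cb, cr = rgb_to_ycbcr(gray)         # neutral gray: Y=128, Cb=Cr=128
```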
In step S300, the luminance space component of the remote sensing image from step S200 is read, and the component is amplified to a set multiple by bicubic interpolation (2 times is taken as an example in this embodiment, but it is not limited thereto). A super-resolution reconstruction model is constructed, and the amplified luminance space component of the remote sensing image is reconstructed using the super-resolution reconstruction model; the reconstructed luminance space of the remote sensing image is stored in the storage unit in preparation for merging by the later integration unit. Preferably, the construction of the super-resolution reconstruction model in step S300 can be subdivided into three steps S310, S320 and S330, wherein:
In step S310, an image training set is constructed. For example, the UC Merced Land Use remote sensing image data set with a resolution of 0.3 m is used as the data source; it contains 21 classes of remote sensing image samples, and 6 remote sensing images of size 256×256 are randomly selected from it as first images. Each selected high-resolution first image is converted into YCbCr space, and its luminance space Y component is downsampled to obtain a simulated low-resolution second image. Each second image is then enlarged by bicubic interpolation with the set multiple, and each interpolation result is cut into blocks, the block size preferably being 38×38; the first training image blocks corresponding to the second training image blocks are trimmed at the edges with the image center as the reference, giving blocks of size 24×24, which together form the image training set.
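The block-cutting of step S310 can be sketched as below: 38×38 input blocks are cut from the interpolated second image, and each label is the centered 24×24 crop of the corresponding first-image region (the stride value of 14 is an assumption, as the patent does not state it):

```python
import numpy as np

def cut_blocks(lr_img, hr_img, in_size=38, label_size=24, stride=14):
    """Cut aligned (input, label) block pairs; each label is the centered crop of
    the corresponding input region, mimicking the border lost to valid convolutions."""
    margin = (in_size - label_size) // 2   # 7 pixels trimmed on each side
    pairs = []
    H, W = lr_img.shape
    for i in range(0, H - in_size + 1, stride):
        for j in range(0, W - in_size + 1, stride):
            x = lr_img[i:i + in_size, j:j + in_size]
            y = hr_img[i + margin:i + margin + label_size,
                       j + margin:j + margin + label_size]
            pairs.append((x, y))
    return pairs

lr = np.zeros((256, 256))   # stands in for the interpolated second image
hr = np.ones((256, 256))    # stands in for the first (high-resolution) image
pairs = cut_blocks(lr, hr)
```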
In step S320, as shown in FIG. 6, a four-layer deep convolutional network is constructed, and a PReLU layer and an LRN layer are added after each of the first three convolution layers. The convolution layer 1, the parameter correction linear unit layer 1 and the local response normalization layer 1 form the feature extraction part of the network, extracting remote sensing image edge information and texture features through the corresponding convolution kernels; the convolution layer 2, the parameter correction linear unit layer 2 and the local response normalization layer 2 together realize feature enhancement, further improving the edge and luminance performance of the feature map produced at this layer; the convolution layer 3, the parameter correction linear unit layer 3 and the local response normalization layer 3 nonlinearly map each image block obtained in the previous step from 32 dimensions to 16 dimensions; the convolution layer 4 uses a linear filter bank of size 5×5 to complete the reconstruction of the high-resolution remote sensing image. Specifically:
For a low-resolution second training image block Y_i as input, a high-resolution first image X_i is output, and the mapping between input and output can be represented by equation (12):

X_i = F_4(Y_i, Θ)    (12)

where Θ denotes the learned parameters.
Equation (13) is used as the loss function of the deep convolutional network:

L(Θ) = sqrt( (1/N) · Σ_{i=1}^{N} ||F_4(Y_i, Θ) − X_i||² )    (13)

where N is the number of training samples.
the PReLU activation function representation is deformable as shown in equation (14):
P(X i )=max(X i ,λX i )λ∈[0,1] (14)
Wherein P is a PReLU operator, lambda is a PReLU parameter, and the value is dynamically updated along with training.
Further, the convolution result of the lth layer is shown in formula (15):

F_l(Y_i) = P(W_l * N_{l-1}(Y_i) + B_l)    (15)

wherein B_l is initialized to 0.
Preferably, the convolution kernel and bias group parameter configurations are as shown in Table 1:
TABLE 1
The relationship between the output N_l(Y) of the lth LRN layer and its underlying input F_l(Y) is expressed by formula (16):

N_l(Y) = F_l(Y) / (k + (α/n_l) · Σ F_l(Y)²)^β    (16)

Preferably, α = 0.0001, β = 0.5, and k = 1. The LRN layer mimics the lateral inhibition mechanism of the biological nervous system, achieving "nearby inhibition": it normalizes a local input region so that large responses become relatively larger than their neighbors, thereby improving the generalization ability of the model and preventing overfitting during training.
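A minimal NumPy sketch of this LRN step for a stack of feature maps, using the preferred α, β and k values above; normalizing across the channel axis is an assumption consistent with the usual LRN definition:

```python
import numpy as np

def lrn(feat, n=5, k=1.0, alpha=1e-4, beta=0.5):
    """Local response normalization across the channel axis of a CxHxW stack."""
    C = feat.shape[0]
    out = np.empty_like(feat, dtype=float)
    for c in range(C):
        # window of up to n neighboring channels centered on channel c
        lo, hi = max(0, c - n // 2), min(C, c + n // 2 + 1)
        denom = (k + (alpha / n) * np.sum(feat[lo:hi] ** 2, axis=0)) ** beta
        out[c] = feat[c] / denom
    return out

feat = np.ones((3, 4, 4))   # three identical toy feature maps
norm = lrn(feat)            # responses are slightly damped by their neighbors
```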
Further, the model is trained by continuously updating the parameter weights using gradient descent in back propagation so as to minimize the loss function value, where the weight update process of W_l is given by formula (17):

Δ_{i+1} = μ · Δ_i − η · ∂L/∂W_l^i,  W_l^{i+1} = W_l^i + Δ_{i+1}    (17)

wherein, according to empirical values, the impulse (momentum) unit μ is 0.9 and η is the learning rate, preferably η = 10^-4 for the first three convolution layers and η = 10^-5 for the fourth convolution layer. The initial filter weights of each layer are drawn randomly from a Gaussian distribution with standard deviation 0.001, and the biases are initialized to 0.
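The momentum-style update described above can be sketched as follows; the toy quadratic loss is purely illustrative and not part of the patent:

```python
import numpy as np

def momentum_step(w, delta, grad, mu=0.9, eta=1e-4):
    """One gradient-descent-with-momentum update: accumulate velocity, then apply."""
    delta = mu * delta - eta * grad
    return w + delta, delta

# minimize the toy loss L(w) = 0.5 * w^2, whose gradient is simply w
w, delta = 5.0, 0.0
for _ in range(20000):
    w, delta = momentum_step(w, delta, grad=w, mu=0.9, eta=1e-4)
# with momentum, the effective step size is roughly eta / (1 - mu),
# so w decays toward the minimum at 0 despite the small learning rate
```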
Similarly, the weight update process of the PReLU parameter λ_l is shown in formula (18):

Δ_{i+1} = μ · Δ_i − η · ∂L/∂λ_l^i,  λ_l^{i+1} = λ_l^i + Δ_{i+1}    (18)
In step S330, the four-layer deep convolutional network of step S320 is trained using the deep learning Caffe framework under Linux, with a preferred iteration count of 2.5 million, to obtain the super-resolution reconstruction model. The super-resolution reconstruction model is reusable and does not need to be rebuilt after its first construction; it is stored in the storage unit as the basis of luminance space reconstruction of remote sensing images. The luminance space component of the remote sensing image is reconstructed using the super-resolution reconstruction model, and the reconstructed luminance space is stored in the storage unit for the chromaticity space reconstruction unit and the post-processing enhancement unit to call.
In step S400, the chrominance space components of the remote sensing image from step S200 are read and reconstructed; this step is divided into two steps S410 and S420.
Step S410, respectively performing set multiple bicubic interpolation on the chrominance space Cb and Cr components, and obtaining the chrominance space (Cb and Cr) interpolation result of the remote sensing image at the end of the step.
In step S420, the interpolation result is optimized: the luminance space reconstruction result of step S300 is read and, using it as guidance, joint bilateral filtering is applied to the chrominance space Cb and Cr interpolation results according to formula (1). Preferably, the filtering uses a Gaussian function with a mean of 3 and a variance of 0.1, and the filtering window size is 5×5. As shown in FIGS. 7a-7d, FIG. 7a is the remote sensing image Cb space before joint filtering, FIG. 7b is the Cb space after joint filtering, FIG. 7c is the Cr space before joint filtering, and FIG. 7d is the Cr space after joint filtering. Comparing FIG. 7a with FIG. 7b and FIG. 7c with FIG. 7d shows that using the abundant information contained in the reconstructed luminance space to guide the chrominance space filtering eliminates noise and blocking effects while retaining the real details of the image to the greatest extent, so that the color remote sensing image has better color performance after reconstruction.
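The joint bilateral filtering of step S420 can be sketched as below: spatial weights depend on pixel distance, while range weights are computed from the luminance guide rather than from the chrominance plane being filtered (the window radius and sigma values here are illustrative):

```python
import numpy as np

def joint_bilateral(chroma, guide, radius=2, sigma_s=3.0, sigma_r=0.1):
    """Filter `chroma` with weights computed from the `guide` (luminance) image.
    Both arrays are HxW floats; the guide is assumed scaled to [0, 1]."""
    H, W = chroma.shape
    cp = np.pad(chroma, radius, mode='edge')
    gp = np.pad(guide, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))  # distance weights
    out = np.empty_like(chroma, dtype=float)
    for i in range(H):
        for j in range(W):
            gwin = gp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            cwin = cp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weights come from the guide, so chroma smoothing stops at
            # luminance edges instead of chroma noise
            rng = np.exp(-((gwin - guide[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = spatial * rng
            out[i, j] = np.sum(w * cwin) / np.sum(w)
    return out

chroma = np.random.default_rng(0).normal(0.5, 0.05, (16, 16))  # noisy chroma plane
guide = np.full((16, 16), 0.5)     # flat guide -> behaves like Gaussian smoothing
smoothed = joint_bilateral(chroma, guide)
```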
In step S500, the luminance space reconstruction result and the chrominance space reconstruction results are read from the storage unit, and ANUSM enhancement is applied to the Y, Cb and Cr reconstruction results of the remote sensing image according to formula (2). The enhanced luminance and chrominance space reconstruction results of the remote sensing image are stored in the storage unit in preparation for merging by the integration unit. Preferably, the passivation (blurring) step uses a Gaussian low-pass filter with a mean of 0 and a window size of 3×3.
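A simplified sketch of this unsharp-mask step: blur with a small 3×3 Gaussian low-pass filter, then add back a scaled detail layer. A constant gain is used here in place of the patent's adaptive factor k(x, y), so this shows only the basic mechanism:

```python
import numpy as np

def gaussian_kernel3():
    """3x3 Gaussian low-pass kernel, normalized to sum to 1."""
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    return k / k.sum()

def unsharp_mask(img, gain=1.0):
    """g = f + gain * (f - blurred(f)); the border is replicated for padding."""
    k = gaussian_kernel3()
    p = np.pad(img, 1, mode='edge')
    H, W = img.shape
    blurred = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            blurred[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return img + gain * (img - blurred)

step = np.zeros((8, 8)); step[:, 4:] = 1.0   # a vertical step edge
sharp = unsharp_mask(step)                   # over/undershoot appears at the edge
```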
And S600, an integration unit reads the enhanced brightness space reconstruction result and the enhanced chromaticity space reconstruction result in the storage unit, combines the discrete enhanced reconstruction results, converts the discrete enhanced reconstruction results from the YCbCr space back to the RGB space and stores the RGB space into the storage unit to obtain a super-resolution image of the remote sensing image.
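The final merge in step S600 inverts the earlier color conversion; a sketch using the BT.601 full-range inverse transform (the coefficient set is an assumption matching the forward conversion):

```python
import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    """Invert the BT.601 full-range YCbCr conversion back to an HxWx3 RGB array."""
    cb = cb - 128.0
    cr = cr - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    # clip to the valid 8-bit range before stacking the channels
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0)

y  = np.full((2, 2), 128.0)
cb = np.full((2, 2), 128.0)
cr = np.full((2, 2), 128.0)
rgb = ycbcr_to_rgb(y, cb, cr)   # neutral gray round-trips to (128, 128, 128)
```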
Fig. 8 is a comparison of effects obtained by 2× reconstruction of remote sensing images with different SR reconstruction methods: fig. 8a is the reconstruction effect of bicubic interpolation, fig. 8b is the effect of the SRCNN method, fig. 8c is the single remote sensing image super-resolution reconstruction and enhancement effect achieved by the reconstruction method and system of the present invention, and fig. 8d is the original high-resolution remote sensing image. By comparison, the method of the invention obtains a better visual effect on remote sensing image reconstruction, with more prominent edge detail. Compared with the prior art, the image super-resolution method and system provided by the invention take a four-layer convolution structure as the core to acquire the luminance space detail information of the remote sensing image, and add PReLU and LRN layers for optimization, which enhances the over-fitting resistance of the obtained model and improves the training speed; combined with joint bilateral filtering interpolation of the chrominance space and post-processing enhancement, the final reconstruction effect of the remote sensing image is further improved. The method has wide application prospects in remote sensing image target identification and extraction and multi-temporal ground feature observation, can play a guiding role in registration, mosaicking and fusion of multi-source remote sensing images, provides a new idea for remote sensing image decompression, and helps save remote sensing image transmission time and storage space.
The remote sensing image super-resolution reconstruction method and system based on the depth convolution network can process both color and black-and-white remote sensing images; when a black-and-white remote sensing image is processed, its gray scale is used as the chromaticity. The resolution improvement on the edges and textures of color remote sensing images is more pronounced than that on black-and-white remote sensing images.
While the foregoing disclosure shows exemplary embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.

Claims (8)

1. A remote sensing image super-resolution reconstruction method based on a depth convolution network is characterized by comprising the following steps:
step S1, converting a remote sensing image to be processed from an RGB space to a YCbCr space, and separating out a brightness space and a chromaticity space of the remote sensing image to be processed;
S2, constructing a multi-layer depth convolution network, constructing a super-resolution reconstruction model based on the multi-layer depth convolution network, and reconstructing the brightness space of the remote sensing image to be processed by using the super-resolution reconstruction model to obtain the brightness space of the remote sensing image to be processed after reconstruction;
step S3, guiding the chromaticity space of the remote sensing image to be processed to carry out joint bilateral filtering by taking the reconstructed brightness space as a guide map, so as to obtain the reconstructed chromaticity space of the remote sensing image to be processed;
step S4, integrating the brightness space and the chromaticity space after the reconstruction of the remote sensing image to be processed, converting the integrated remote sensing image to be processed from the YCbCr space back to the RGB space to obtain a super-resolution image of the remote sensing image to be processed, wherein the resolution of the super-resolution image is higher than that of the remote sensing image to be processed,
the method for constructing the super-resolution reconstruction model based on the depth convolution network comprises the following steps:
constructing an image training set, comprising: selecting a set number of first images, converting each first image into a YCbCr space, taking a brightness space component of the first images to perform downsampling to obtain a brightness space component of a second image, performing bicubic interpolation operation on the brightness space component of the second image to enable the brightness space component to have the same size as the brightness space component of the first image, performing block cutting on each second image and the first image to obtain a plurality of second training image blocks and first training image blocks which are in one-to-one correspondence, wherein the first training image blocks and the second training image blocks form an image training set, the resolution of the first image is higher than that of the second image, and the resolution of the first training image blocks is higher than that of the second training image blocks;
Constructing a multi-layer deep convolutional network structure, comprising: adopting a first m-1 layer to add a PReLU layer and an LRN layer to construct an m-layer depth convolution network structure with parameters, wherein the mapping relation between the input and the output of the multi-layer depth convolution network structure is shown as a formula (4):
X i =F m (Y i ,Θ) (4)
wherein X is i To output an image, Y i To input an image F m For inputting image Y i And output image X i In the mapping relation of the m layer, Θ is a learned parameter;
the PReLU layer is activated according to equation (5):

P(X) = max(X, λX), λ ∈ [0, 1]    (5)

wherein P is the PReLU operator, and λ is the PReLU parameter;
the LRN layer is activated according to equation (6):

N(x) = x / (k + (α/n) · Σ x²)^β    (6)

wherein k is an initialization constant, n is the local size used for normalization, α is a scaling factor, and β is an exponential term;
constructing a super-resolution reconstruction model, comprising: training in the multi-layer depth convolution network structure by taking a second training image block of the image training set as an input and a first training image block as an output, determining parameters of the multi-layer depth convolution network structure, and obtaining the super-resolution reconstruction model, wherein the training is performed in the multi-layer depth convolution network structure by taking the second training image block of the image training set as an input, and the convolution result of each layer is shown as formulas (10) and (11):
F_l(Y_i) = P(W_l * N_{l-1}(Y_i) + B_l)    (10)

N_l(Y_i) = F_l(Y_i) / (k + (α/n_l) · Σ F_l(Y_i)²)^β    (11)

wherein l is the layer index, m is the total number of layers, i is the image block index, and Y_i is the ith second training image block (with N_0(Y_i) = Y_i); W_l is the convolution kernel of the lth layer, "*" is the convolution symbol, B_l is the bias group vector of the lth layer convolution, and N_{l-1}(Y_i) is the output of the (l-1)th LRN layer; λ_l is the PReLU parameter of the lth layer, F_l(Y_i) is the output of the ith second training image block Y_i at the lth layer, and n_l is the normalized local dimension size of the lth layer.
2. The method for super-resolution reconstruction of a remote sensing image according to claim 1, wherein the constructing the super-resolution reconstruction model further comprises: the method comprises the steps of adopting root mean square errors of output of a first training image block and a second training image block corresponding to the first training image block in an image training set in a convolution network structure as a loss function, wherein a parameter value corresponding to the minimum loss function is a parameter of the multi-layer depth convolution network structure, and the loss function is as follows:
L(Θ) = sqrt( (1/N_sample) · Σ_{i=1}^{N_sample} ||F_m(Y_i, Θ) − X_i||² )

wherein N_sample is the number of samples of the image training set.
3. The method for super-resolution reconstruction of a remote sensing image according to claim 2, wherein the constructing the super-resolution reconstruction model further comprises: and updating parameters of the multi-layer deep convolution network structure by a gradient descent method, wherein the convolution kernel updating process is as follows:
Δ_{i+1} = μ · Δ_i − η · ∂L/∂W_l^i,  W_l^{i+1} = W_l^i + Δ_{i+1}

wherein W_l^i is the convolution kernel of the input image in the lth layer, Δ is the weight variation, μ is the impulse unit, and η is the learning rate;
the weight update process of the PReLU parameter λ_l is as follows:

Δ_{i+1} = μ · Δ_i − η · ∂L/∂λ_l^i,  λ_l^{i+1} = λ_l^i + Δ_{i+1}
4. the method of claim 1, further comprising:
before step S1, preprocessing the remote sensing image to be processed, and then carrying out the processing of the steps S1 to S4 on the preprocessed remote sensing image to be processed, wherein the preprocessing comprises one or more of geometric correction, radiation correction, atmospheric correction and denoising; and/or
After step S3, performing adaptive nonlinear unsharp masking processing on the luminance space and the reconstructed chromaticity space after the remote sensing image to be processed is reconstructed, and then performing the processing of step S4 on the luminance space and the chromaticity space after the processing.
5. The method for super-resolution reconstruction of a remote sensing image according to claim 4, wherein the method for performing adaptive nonlinear unsharp masking on the reconstructed luminance space and the reconstructed chrominance space of the remote sensing image to be processed comprises:
y, cb and Cr of the remote sensing image to be processed after reconstruction of the luminance space and the chrominance space respectively are subjected to ANUSM enhancement according to the following formula (2),
g(x, y) = f(x, y) + k(x, y) · [f(x, y) − f̄(x, y)]    (2)

wherein (x, y) are the coordinates of a pixel point in the luminance space or chrominance space of the remote sensing image to be processed, g(x, y) is the enhanced Y, Cb or Cr result of the reconstructed remote sensing image to be processed, f(x, y) is the luminance space or chrominance space reconstruction result of the remote sensing image to be processed, that is, the pixel value of the reconstructed pixel point, and f̄(x, y) is the passivated image, wherein the passivation method adopts an enhancement factor k(x, y) of the following formula (3),
wherein f_max(x, y) is the maximum pixel value in the corresponding spatial reconstruction result.
6. A remote sensing image super-resolution reconstruction system based on a depth convolution network is characterized by comprising the following steps:
the space conversion module converts the remote sensing image to be processed from RGB space to YCbCr space, separates out the brightness space and the chromaticity space of the remote sensing image to be processed, and sends the brightness space and the chromaticity space to the brightness space reconstruction module and the chromaticity space reconstruction module respectively, wherein the remote sensing image to be processed is a color image;
the system comprises a space conversion module, a luminance space reconstruction module, a chromaticity space reconstruction module and an integration module, wherein the space conversion module is used for converting the luminance space of a remote sensing image to be processed into a space;
The chromaticity space reconstruction module is used for guiding the chromaticity space of the remote sensing image to be processed to carry out joint bilateral filtering by taking the brightness space of the remote sensing image to be processed, which is transmitted by the brightness reconstruction module, as a guide map, so as to obtain the reconstructed chromaticity space of the remote sensing image to be processed, and transmitting the reconstructed chromaticity space to the integration module;
an integrating module for integrating the brightness space and the reconstructed chromaticity space of the remote sensing image to be processed sent by the brightness space reconstructing module and the chromaticity space module, converting the integrated remote sensing image to be processed from the YCbCr space back to the RGB space to obtain a super-resolution image of the remote sensing image to be processed, wherein the resolution of the super-resolution image is higher than that of the remote sensing image to be processed,
the luminance space reconstruction module includes:
the image training set construction unit is used for selecting a set number of first images, converting each first image into YCbCr space, downsampling the luminance space component of each first image to obtain the luminance space component of a second image, performing bicubic interpolation on the luminance space component of the second image so that it has the same size as the luminance space component of the first image, cutting each second image and first image into blocks to obtain a plurality of second training image blocks and first training image blocks in one-to-one correspondence, which form the image training set, and sending the image training set to the model parameter determination unit, wherein the resolution of the first image is higher than that of the second image, and the resolution of the first training image block is higher than that of the second training image block;
The network structure construction unit adopts a first m-1 layer added PReLU layer and an LRN layer to construct an m-layer depth convolution network structure with parameters, wherein the mapping relation between the input and the output of the multi-layer depth convolution network structure is shown as a formula (4):
X_i = F_m(Y_i, Θ)    (4)

wherein X_i is the output image, Y_i is the input image, F_m is the mapping relation between the input image Y_i and the output image X_i over the m layers, and Θ is the learned parameter;
the PReLU layer is activated according to equation (5):

P(X) = max(X, λX), λ ∈ [0, 1]    (5)

wherein P is the PReLU operator, and λ is the PReLU parameter;
the LRN layer is activated according to equation (6):

N(x) = x / (k + (α/n) · Σ x²)^β    (6)

wherein k is an initialization constant, n is the local size used for normalization, α is a scaling factor, and β is an exponential term;
the super-resolution reconstruction model construction unit comprises a training subunit, which calls the multi-layer depth convolution network structure constructed by the network structure construction unit and the image training set sent by the image training set construction unit, takes the second training image blocks of the image training set as input and the first training image blocks as output to train the multi-layer depth convolution network structure, determines the parameters of the multi-layer depth convolution network structure, and obtains the super-resolution reconstruction model, wherein the second training image blocks are taken as input for training in the multi-layer depth convolution network structure, and the convolution results of each layer are as shown in formulas (10) and (11):
F_l(Y_i) = P(W_l * N_{l-1}(Y_i) + B_l)    (10)

N_l(Y_i) = F_l(Y_i) / (k + (α/n_l) · Σ F_l(Y_i)²)^β    (11)

wherein l is the layer index, m is the total number of layers, i is the image block index, and Y_i is the ith second training image block (with N_0(Y_i) = Y_i); W_l is the convolution kernel of the lth layer, "*" is the convolution symbol, B_l is the bias group vector of the lth layer convolution, and N_{l-1}(Y_i) is the output of the (l-1)th LRN layer; λ_l is the PReLU parameter of the lth layer, F_l(Y_i) is the output of the ith second training image block Y_i at the lth layer, and n_l is the normalized local dimension size of the lth layer.
7. The remote sensing image super-resolution reconstruction system according to claim 6, wherein the super-resolution reconstruction model construction unit further comprises a screening subunit or/and an updating subunit, wherein:
the filtering subunit adopts root mean square error of output of a first training image block and a second training image block corresponding to the first training image block in the image training set in the convolution network structure as a loss function, and filters out a corresponding parameter value when the loss function is minimum as a parameter of the multi-layer convolution network structure, wherein the loss function is as follows:
L(Θ) = sqrt( (1/N_sample) · Σ_{i=1}^{N_sample} ||F_m(Y_i, Θ) − X_i||² )

wherein N_sample is the number of samples of the image training set;
the updating subunit updates parameters of the super-resolution reconstruction model by a gradient descent method, wherein the convolution kernel updating process is as follows:
Δ_{i+1} = μ · Δ_i − η · ∂L/∂W_l^i,  W_l^{i+1} = W_l^i + Δ_{i+1}

wherein W_l^i is the convolution kernel of the input image in the lth layer, Δ is the weight variation, and μ is the impulse unit;
the weight update process of the PReLU parameter λ_l is as follows:

Δ_{i+1} = μ · Δ_i − η · ∂L/∂λ_l^i,  λ_l^{i+1} = λ_l^i + Δ_{i+1}
8. the remote sensing image super-resolution reconstruction system according to claim 6, further comprising a preprocessing module or/and a post-processing enhancement module, wherein:
the preprocessing module is used for preprocessing the remote sensing image to be processed and sending the preprocessed remote sensing image to the space conversion module, wherein the preprocessing comprises one or more of geometric correction, radiation correction, atmospheric correction and denoising;
the post-processing enhancement module is respectively connected with the brightness space reconstruction module, the chromaticity space reconstruction module and the integration module, receives the brightness space and the chromaticity space after the remote sensing image to be processed is reconstructed from the brightness space reconstruction module and the chromaticity space reconstruction module, carries out self-adaptive nonlinear unsharp mask processing on the brightness space and the chromaticity space after the reconstruction, and sends the processed brightness space and chromaticity space to the integration module.
CN201710271199.5A 2017-04-24 2017-04-24 Remote sensing image super-resolution reconstruction method and system based on depth convolution network Active CN107123089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710271199.5A CN107123089B (en) 2017-04-24 2017-04-24 Remote sensing image super-resolution reconstruction method and system based on depth convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710271199.5A CN107123089B (en) 2017-04-24 2017-04-24 Remote sensing image super-resolution reconstruction method and system based on depth convolution network

Publications (2)

Publication Number Publication Date
CN107123089A CN107123089A (en) 2017-09-01
CN107123089B true CN107123089B (en) 2023-12-12

Family

ID=59725321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710271199.5A Active CN107123089B (en) 2017-04-24 2017-04-24 Remote sensing image super-resolution reconstruction method and system based on depth convolution network

Country Status (1)

Country Link
CN (1) CN107123089B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610192B (en) * 2017-09-30 2021-02-12 西安电子科技大学 Self-adaptive observation compressed sensing image reconstruction method based on deep learning
CN108022222A (en) * 2017-12-15 2018-05-11 西北工业大学 Thin cloud removal method for remote sensing images based on a convolution-deconvolution network
CN110059796B (en) * 2018-01-19 2021-09-21 杭州海康威视数字技术股份有限公司 Method and device for generating convolutional neural network
CN109118428B (en) * 2018-06-07 2023-05-19 西安电子科技大学 Image super-resolution reconstruction method based on feature enhancement
CN109102469B (en) * 2018-07-04 2021-12-21 华南理工大学 Remote sensing image panchromatic sharpening method based on convolutional neural network
CN109376267B (en) * 2018-10-30 2020-11-13 北京字节跳动网络技术有限公司 Method and apparatus for generating a model
CN109493280B (en) * 2018-11-02 2023-03-14 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium
CN109859110B (en) * 2018-11-19 2023-01-06 华南理工大学 Hyperspectral image panchromatic sharpening method based on spectrum dimension control convolutional neural network
CN109712074A (en) * 2018-12-20 2019-05-03 黑龙江大学 Remote sensing image super-resolution reconstruction method based on a two-parameter Beta process joint dictionary
CN115442515B (en) * 2019-03-25 2024-02-02 华为技术有限公司 Image processing method and apparatus
CN111800629A (en) * 2019-04-09 2020-10-20 华为技术有限公司 Video decoding method, video encoding method, video decoder and video encoder
CN110378254B (en) * 2019-07-03 2022-04-19 中科软科技股份有限公司 Method and system for identifying vehicle damage image modification trace, electronic device and storage medium
CN110599416B (en) * 2019-09-02 2022-10-11 太原理工大学 Non-cooperative target image blind restoration method based on spatial target image database
CN110751699B (en) * 2019-10-15 2023-03-10 西安电子科技大学 Color reconstruction method of optical remote sensing image based on convolutional neural network
CN111272277A (en) * 2020-01-21 2020-06-12 中国工程物理研究院激光聚变研究中心 Laser pulse waveform measurement distortion correction method and system based on neural network
CN111784571A (en) * 2020-04-13 2020-10-16 北京京东尚科信息技术有限公司 Method and device for improving image resolution
CN111598964B (en) * 2020-05-15 2023-02-14 厦门大学 Quantitative magnetic susceptibility image reconstruction method based on space adaptive network
CN112330541A (en) * 2020-11-11 2021-02-05 广州博冠信息科技有限公司 Live video processing method and device, electronic equipment and storage medium
CN112634391B (en) * 2020-12-29 2023-12-29 华中科技大学 Gray image depth reconstruction and fault diagnosis system based on compressed sensing
CN112785506A (en) * 2021-02-25 2021-05-11 北京中科深智科技有限公司 Image super-resolution reconstruction method and device for real-time video stream
CN114792115B (en) * 2022-05-17 2023-04-07 哈尔滨工业大学 Telemetry signal outlier removing method, device and medium based on deconvolution reconstruction network
CN115953297B (en) * 2022-12-27 2023-12-22 二十一世纪空间技术应用股份有限公司 Remote sensing image super-resolution reconstruction and enhancement method and device
CN116777797A (en) * 2023-06-28 2023-09-19 广州市明美光电技术有限公司 Method and system for clearing bright field microscopic image through anisotropic guide filtering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142137A (en) * 2011-03-10 2011-08-03 西安电子科技大学 High-resolution dictionary based sparse representation image super-resolution reconstruction method
WO2014174087A1 (en) * 2013-04-25 2014-10-30 Thomson Licensing Method and device for performing super-resolution on an input image
CN104657962A (en) * 2014-12-12 2015-05-27 西安电子科技大学 Image super-resolution reconstruction method based on cascading linear regression
CN106204449A (en) * 2016-07-06 2016-12-07 安徽工业大学 Single-image super-resolution reconstruction method based on a symmetric deep network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨仁忠; 张洁; 韦宏卫; 石璐. GPU-based real-time decompression processing for Landsat-8 data. 计算机工程 (Computer Engineering). 2016, (03), 301-307. *

Also Published As

Publication number Publication date
CN107123089A (en) 2017-09-01

Similar Documents

Publication Publication Date Title
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN111062872B (en) Image super-resolution reconstruction method and system based on edge detection
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN108830796B (en) Hyperspectral image super-resolution reconstruction method based on spectral-spatial combination and gradient domain loss
Kappeler et al. Video super-resolution with convolutional neural networks
CN111080567B (en) Remote sensing image fusion method and system based on multi-scale dynamic convolutional neural network
CN111127336B (en) Image signal processing method based on self-adaptive selection module
CN112287940A (en) Semantic segmentation method of attention mechanism based on deep learning
CN108269244B (en) Image defogging system based on deep learning and prior constraint
Li et al. Underwater image high definition display using the multilayer perceptron and color feature-based SRCNN
CN110634147A (en) Image matting method based on bilateral boot up-sampling
CN111951164B (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
US11887218B2 (en) Image optimization method, apparatus, device and storage medium
CN111986084A (en) Multi-camera low-illumination image quality enhancement method based on multi-task fusion
CN113284061B (en) Underwater image enhancement method based on gradient network
CN114841856A (en) Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention
CN115393191A (en) Method, device and equipment for reconstructing super-resolution of lightweight remote sensing image
CN115511708A (en) Depth map super-resolution method and system based on uncertainty perception feature transmission
CN115115516A (en) Real-world video super-resolution algorithm based on Raw domain
CN115222614A (en) Priori-guided multi-degradation-characteristic night light remote sensing image quality improving method
Wang et al. Underwater image super-resolution using multi-stage information distillation networks
CN111553856B (en) Image defogging method based on depth estimation assistance
CN117333359A (en) Mountain-water painting image super-resolution reconstruction method based on separable convolution network
US20220247889A1 (en) Raw to rgb image transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant