CN116578821A - Prediction method of atmospheric turbulence phase screen based on deep learning - Google Patents

Prediction method of atmospheric turbulence phase screen based on deep learning

Info

Publication number
CN116578821A
Authority
CN
China
Prior art keywords
atmospheric turbulence
phase screen
neural network
phase
atmospheric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310631745.7A
Other languages
Chinese (zh)
Inventor
李明
吴治庚
田立峰
张鹏鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Normal University
Original Assignee
Tianjin Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Normal University filed Critical Tianjin Normal University
Priority to CN202310631745.7A priority Critical patent/CN116578821A/en
Publication of CN116578821A publication Critical patent/CN116578821A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Optimization (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a prediction method of an atmospheric turbulence phase screen based on deep learning. First, according to the intrinsic relation among the distorted beam, the undistorted beam and the ideal output phase, a low-complexity convolutional neural network and a mean square error loss function with a pixel penalty term are designed. Because a neural network can in theory approximate any nonlinear function with arbitrary precision, once the model parameters are fixed after continuous optimization and tuning, the corresponding atmospheric turbulence phase screen can be predicted from the distorted beam alone. The network model has few parameters, high speed and high precision.

Description

Prediction method of atmospheric turbulence phase screen based on deep learning
Technical Field
The invention belongs to the field of free space optical communication, and particularly relates to prediction of an atmospheric turbulence phase screen based on deep learning.
Background
Atmospheric transmission distortion is one of the major challenges hindering practical application of vortex beams carrying orbital angular momentum. Accurately predicting or capturing the phase distortion caused by atmospheric turbulence in the atmospheric channel is an important basis for realizing atmospheric turbulence compensation. Many studies have proposed solutions to improve the robustness of vortex beams against atmospheric turbulence. Adaptive optics is the most common method: a Shack-Hartmann wavefront sensor acquires the wavefront phase of the distorted beam, and a deformable mirror or spatial light modulator then corrects the distorted phase; such instruments are expensive to purchase and to maintain.
In recent years, adaptive optics without wavefront sensors has received considerable attention. Such systems differ from conventional adaptive optics in that they employ no wavefront sensor; instead, random, local or global search algorithms extract the wavefront phase of the beam, and the phase distortion is then compensated by phase conjugation. Common algorithms are the Gerchberg-Saxton phase extraction algorithm and the stochastic parallel gradient descent phase extraction algorithm. However, both of these phase extraction algorithms require many iterations, so the system needs a long processing time to extract the wavefront phase. In addition, these algorithms have no learning or memory capability and often stagnate in local minima during the iterative calculation, so accurate wavefront phase information cannot be obtained.
In summary, neither the adaptive optics system with a wavefront sensor nor the one without can balance speed and accuracy.
Prediction of the atmospheric turbulence phase screen can be used to estimate the atmospheric optical link of a free-space optical communication system, extract the wavefront phase distortion of the optical signal, and perform phase compensation, thereby markedly improving system performance. The transmitting end of a free-space optical communication system sends the modulated optical signal into the atmospheric channel, where atmospheric turbulence distorts the wavefront phase of the signal. By the time the optical signal reaches the receiving end after a distance of atmospheric transmission, the accumulated phase distortion causes a drastic drop in the channel capacity. Predicting and compensating the phase distortion caused by atmospheric turbulence is therefore one of the key problems free-space optical communication systems must address.
The atmospheric turbulence phase screen prediction method based on deep learning can effectively solve the problems.
Disclosure of Invention
The present invention aims to solve the above technical problems. To this end, the invention proposes a deep-learning-based atmospheric turbulence phase screen prediction method.
The technical scheme adopted for solving the technical problems is as follows:
the prediction method of the atmospheric turbulence phase screen based on deep learning is characterized by comprising the following steps of:
S1: according to the inherent relation between the distorted and undistorted beams and the real phase screen, and the property that a neural network can approximate a function with arbitrary precision, the feasibility of using a neural network to predict the phase screen is established, and a low-complexity convolutional neural network and a mean square error loss function with a pixel penalty term are designed;
S2: the light intensities of the distorted and undistorted Gaussian beams and the corresponding real phase screen are obtained as training data; to make the simulated atmospheric turbulence better fit the actual situation, the Hill-Andrews atmospheric refractive index power spectral density model is adopted, with the expression:

$$\Phi_n(\kappa_x,\kappa_y)=0.033\,C_n^2\,\frac{\exp\!\left(-\kappa^2/\kappa_l^2\right)}{\left(\kappa^2+\kappa_0^2\right)^{11/6}}\left[1+1.802\,\frac{\kappa}{\kappa_l}-0.254\left(\frac{\kappa}{\kappa_l}\right)^{7/6}\right] \quad (1)$$

where $\Phi_n(\kappa_x,\kappa_y)$ denotes the atmospheric refractive index power spectrum when the beam is transmitted in the $z$ direction, $\exp$ denotes the exponential function with the natural constant $e$ as base, $C_n^2$ denotes the atmospheric turbulence intensity, $l_0$ and $L_0$ denote the inner and outer scale factors of the atmospheric turbulence with $\kappa_l=3.3/l_0$ and $\kappa_0=2\pi/L_0$, and $\kappa_x$ and $\kappa_y$ are the wavenumbers in the $x$ and $y$ directions, with $\kappa^2=\kappa_x^2+\kappa_y^2$;
S3: the designed convolutional neural network model is trained and its parameters continuously optimized; after the model parameters are fixed, a distorted beam intensity map is read into the trained convolutional neural network to predict the corresponding atmospheric turbulence phase screen.
The loss function of the invention uses the mean square error with a pixel penalty term as the criterion for parameter learning; the loss function $L(w)$ is:

$$L(w)=\frac{1}{N}\sum_{i=1}^{N}\left[\left\|f(X_i;w)-Y_i\right\|_2^2+\lambda\sum_{p}\left|f(X_i;w)_p-(Y_i)_p\right|\right] \quad (2)$$

where $w$ represents the weight parameters, $N$ the number of training samples, $X_i$ the intensity profile of the $i$-th Gaussian beam pair, $Y_i$ the $i$-th real atmospheric turbulence phase screen ($i$ taking integer values), $f(X_i;w)$ the predicted atmospheric turbulence phase screen, $f$ the nonlinear function realized by the network, $\lambda$ the penalty factor, $\sum$ the summation operation, and $p$ indexes the pixels. The predicted atmospheric turbulence phase screen is characterized in that, after the optimal weight parameters $w^{*}$ are determined, the predicted phase screen is obtained from the input data $X$:

$$\hat{Y}=f(X;w^{*}),\qquad w^{*}=\arg\min_{w}L(w;X,Y) \quad (3)$$

where $\hat{Y}$ represents the predicted atmospheric turbulence phase screen, $\arg\min$ the minimization operation, $X$ and $\hat{Y}$ the input data and output phase respectively, and $f$ can be regarded as an abstract nonlinear function approximated by the neural network.
The low-complexity convolutional neural network is characterized by small model complexity: few parameters need to be trained, and training requires little computational power. The whole process is end to end; neither data preparation nor the prediction result requires post-processing, and no precise and expensive optical instruments are needed. The prediction method further comprises implementing the method to predict the atmospheric turbulence phase screen.
The invention further discloses an application of the deep-learning-based prediction method of the atmospheric turbulence phase screen in accurately and rapidly extracting the phase distortion caused by atmospheric turbulence and performing phase compensation. The experimental results show that the trained neural network model can accurately and rapidly predict the atmospheric turbulence phase screen; the prediction time is on the order of milliseconds and is independent of the atmospheric turbulence intensity, and the method is superior to the traditional Gerchberg-Saxton algorithm in both the speed and the accuracy of phase extraction.
The invention is described in more detail below:
the atmospheric turbulence phase screen prediction method based on deep learning is characterized by comprising the following steps of:
(1) Based on the inherent correlation between the light intensity of a beam and its wavefront phase, a convolutional neural network structure dedicated to atmospheric turbulence phase prediction and a suitable loss function are designed. The model has a simple structure and few parameters, and is easy to train and to run in real time.
The loss function in this patent consists essentially of three parts: the mean square error term, the pixel penalty term, and the penalty factor that weights it (see formula (2) above), where $w$ represents the weight parameters, $N$ the number of training samples, $X_i$ the intensity profile of the $i$-th Gaussian beam pair, $Y_i$ the $i$-th real atmospheric turbulence phase screen ($i$ taking integer values), $f(X_i;w)$ the predicted atmospheric turbulence phase screen, $f$ the nonlinear function realized by the network, $\lambda$ the penalty factor, and $\sum$ the summation operation.
(2) The light intensity data of the distorted and undistorted Gaussian beams and the corresponding real atmospheric turbulence phase screens are acquired as training data. In this patent, the Hill-Andrews atmospheric refractive index power spectral density model, consistent with actual atmospheric turbulence behavior, is adopted to numerically simulate the atmospheric turbulence phase screen.
The atmospheric refractive index power spectral density function $\Phi_n(\kappa_x,\kappa_y)$ defined by the model has the expression given in formula (1) above, where $\Phi_n(\kappa_x,\kappa_y)$ denotes the atmospheric refractive index power spectrum when the beam is transmitted in the $z$ direction, $\exp$ the exponential function with the natural constant $e$ as base, $C_n^2$ the atmospheric turbulence intensity, $l_0$ and $L_0$ the inner and outer scale factors of the atmospheric turbulence, and $\kappa_x$ and $\kappa_y$ the wavenumbers in the $x$ and $y$ directions.
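As a concrete illustration, the spectrum in formula (1) can be evaluated numerically. The sketch below assumes the standard modified (Hill-Andrews) spectrum constants $\kappa_l=3.3/l_0$ and $\kappa_0=2\pi/L_0$; the function name and parameter names are illustrative, not taken from the patent:

```python
import numpy as np

def hill_andrews_spectrum(kx, ky, cn2, l0, L0):
    """Modified Hill-Andrews refractive-index power spectrum Phi_n(kx, ky).

    cn2    : structure constant C_n^2 (atmospheric turbulence intensity)
    l0, L0 : inner and outer scales of the turbulence [m]
    """
    k2 = kx**2 + ky**2                    # squared transverse wavenumber
    kl = 3.3 / l0                         # inner-scale wavenumber
    k0 = 2.0 * np.pi / L0                 # outer-scale wavenumber
    kappa = np.sqrt(k2)
    # Hill bump correction factor fitted analytically by Andrews
    correction = 1.0 + 1.802 * (kappa / kl) - 0.254 * (kappa / kl) ** (7.0 / 6.0)
    return 0.033 * cn2 * np.exp(-k2 / kl**2) / (k2 + k0**2) ** (11.0 / 6.0) * correction
```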
(3) The distorted and undistorted Gaussian beam intensity maps obtained in step (2) and the corresponding real atmospheric turbulence phase screens are used as training and test data sets to train the established deep convolutional neural network model; after the model parameters are fixed, the atmospheric turbulence phase screen is predicted from the input beam intensity distribution.
The invention mainly solves the problem that existing methods of acquiring the atmospheric turbulence phase are cumbersome and difficult to meet practical requirements. It mainly examines the feasibility of extracting phase information from the light intensities of the distorted and undistorted Gaussian beams with a deep neural network, and optimizes the phase extraction process so that extraction becomes faster while the accuracy still meets the requirements.
Compared with the prior art, the atmospheric turbulence phase screen prediction based on the deep convolutional neural network has the following positive effects:
(1) The mapping relation between the original light beam, the distorted light beam and the real phase is fitted through a neural network, so that end-to-end atmospheric turbulence phase screen prediction is realized.
(2) The trained neural network has stronger generalization, and can theoretically realize the phase screen prediction of any turbulence intensity.
The invention further discloses a feasible scheme for deep-learning-based atmospheric turbulence phase screen prediction. We describe the working mechanism of the proposed CNN model as a mathematical problem: for the light intensity distribution of the standard Gaussian probe beam E_1 unaffected by turbulence, the light intensity distribution of the Gaussian probe beam E_2 affected by turbulence, and the ideal output turbulence phase P, the following relation holds: E_2 = f(P, E_1), with f representing the light-field transfer function by which P acts on E_1. After the transformation P = g(E_1, E_2), where g is the inverse mapping associated with f, the CNN can learn the mapping g from a large number of pairs (E_1, E_2) and the corresponding ideal output phase screens P. This means the turbulence phase can be predicted well regardless of turbulence intensity. From an algorithmic point of view, the reason the information extraction capability of a CNN is unaffected by global changes of the object is that a CNN automatically extracts intrinsic features of an image through its multi-layer structure, in which convolution operations and sub-sampling correlate the object's features locally. This enables CNN-based turbulence phase prediction to adapt to a variety of turbulence environments.
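To make the relation E_2 = f(P, E_1) concrete, the following sketch imprints a phase screen on a Gaussian probe beam and propagates it numerically. The paraxial angular-spectrum propagator and all numerical values (grid size, beam waist, wavelength, distance) are illustrative assumptions, since the patent only states the abstract relation:

```python
import numpy as np

def propagate_through_screen(E1, phase_screen, wavelength, dx, dz):
    """Distorted field E2 = f(P, E1): imprint the turbulence phase P on E1,
    then propagate a distance dz with the paraxial angular-spectrum method."""
    n = E1.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                          # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))  # free-space transfer fn
    E_after_screen = E1 * np.exp(1j * phase_screen)       # apply the phase screen P
    return np.fft.ifft2(np.fft.fft2(E_after_screen) * H)

# Example: one (E_1, E_2) pair from a collimated Gaussian probe beam.
n, dx = 128, 1e-3                                         # grid and spacing (illustrative)
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
E1 = np.exp(-(X**2 + Y**2) / (2 * (8e-3) ** 2))           # 8 mm waist (illustrative)
P = np.zeros((n, n))                                      # stand-in screen; see the FFT
                                                          # generator in the Detailed Description
E2 = propagate_through_screen(E1, P, 633e-9, dx, 1000.0)  # 633 nm over 1 km (illustrative)
I1, I2 = np.abs(E1) ** 2, np.abs(E2) ** 2                 # intensity maps for the CNN data
```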
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
In the drawings:
FIG. 1 is a flow chart of deep learning based atmospheric turbulence phase screen prediction;
FIG. 2 is a block diagram of a convolutional neural network employed in this patent;
FIG. 3 is a training loss curve for a neural network;
FIG. 4 is a true atmospheric turbulence phase screen;
FIG. 5 is a predicted atmospheric turbulence phase screen after model parameters are fixed;
FIG. 6 is the atmospheric turbulence phase screen predicted by the model at the start of training.
Detailed Description
The invention is described below by means of specific embodiments. Unless specifically stated, the technical means used in the invention are methods well known to those skilled in the art. The embodiments should be construed as illustrative and not limiting the scope of the invention, which is defined solely by the claims. Reference will now be made in detail to embodiments of the invention, examples of which are illustrated in the accompanying drawings; the embodiments described below with reference to the drawings are exemplary, intended to illustrate the invention, and not to be construed as limiting it.
Example 1
Step one:
the convolutional neural network designed by the patent comprises 15 convolutional layers and 3 upsamples, and adopts an encoder-decoder (encoder-decoder) architecture. Wherein the first 8-layer constituent encoder of the network performs downsampling to obtain a feature map. The layer 9 to 18 layer composition decoder performs up-sampling to restore the feature map to the resolution of the original image.
In the encoder section, the network takes a two-dimensional grayscale light intensity image as input; after 9 convolutions and 3 pooling operations, a feature map with 128 channels is obtained. In a convolutional layer, the input matrix is convolved with a particular feature detector (also called a kernel). Convolving the image with the kernel generates a new matrix, called a feature map. Each kernel can extract a particular feature of the image, and different kernels extract different features. The generated feature map enters the next layer as the output of the convolutional layer. The first convolutional layer uses a larger kernel; all remaining layers share the same smaller kernel size. The feature maps then pass into a pooling layer, for which we choose max pooling: each pooling element takes as input a non-overlapping sub-region of the convolutional feature map and outputs the maximum value of that sub-region, greatly reducing the computational complexity. In the decoder section, the feature map passes through 7 convolutions and 3 upsampling operations to output the predicted phase image. The upsampling operation uses not deconvolution (which can be regarded as the inverse operation of convolution) but bilinear interpolation; the combination of convolution and bilinear interpolation avoids the checkerboard artifacts caused by deconvolution, so the upsampled image has better quality. Each upsampling doubles the length and width of the image, and the final output of the network is the predicted phase image at the input resolution. Throughout the network, we split the training dataset into equal batches to enhance the stability of convergence. A batch normalization layer follows every convolutional layer except the last, normalizing its input and reducing network training time. Rectified linear units (ReLU) provide the nonlinear activation, preventing gradient vanishing or explosion and accelerating training. A sigmoid activation function is used at the final convolutional layer to control the magnitude of the output values.
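A minimal PyTorch sketch of an encoder-decoder consistent with the description above (9 convolutions and 3 max-poolings in the encoder; 7 convolutions and 3 bilinear upsamplings in the decoder; batch normalization and ReLU after every convolution except the last, which uses a sigmoid). The channel widths and kernel sizes are not recoverable from the text, so the 3×3 kernels and single distorted-intensity input channel below are illustrative assumptions:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # convolution + batch normalization + ReLU, as described in the text
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class PhaseScreenNet(nn.Module):
    """Encoder-decoder CNN: max-pool downsampling, bilinear upsampling,
    sigmoid output; channel widths are illustrative."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(      # 9 convolutions, 3 poolings
            conv_block(in_ch, 32), conv_block(32, 32), conv_block(32, 32),
            nn.MaxPool2d(2),
            conv_block(32, 64), conv_block(64, 64), conv_block(64, 64),
            nn.MaxPool2d(2),
            conv_block(64, 128), conv_block(128, 128), conv_block(128, 128),
            nn.MaxPool2d(2),               # -> 128-channel feature map
        )
        up = lambda: nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.decoder = nn.Sequential(      # 7 convolutions, 3 upsamplings
            up(), conv_block(128, 64), conv_block(64, 64),
            up(), conv_block(64, 32), conv_block(32, 32),
            up(), conv_block(32, 32), conv_block(32, 32),
            nn.Conv2d(32, out_ch, kernel_size=3, padding=1),  # no BN on the last layer
            nn.Sigmoid(),                  # bounds the (normalized) predicted phase
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))
```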
Step two:
The loss function of this patent can be divided into three parts:

$$L(w)=\frac{1}{N}\sum_{i=1}^{N}\left[\left\|f(X_i;w)-Y_i\right\|_2^2+\lambda\sum_{p}\left|f(X_i;w)_p-(Y_i)_p\right|\right] \quad (9)$$

where $N$ represents the number of samples, $w$ the weight parameters, $X_i$ the $i$-th input light intensity map, $Y_i$ the label (phase screen) corresponding to the $i$-th light intensity map, and $f$ the nonlinear activation function. The mean square error term, usually used in regression tasks to approximate the true value, here makes the predicted phase approach the true phase; $\lambda$ is a penalty factor controlling the strength of the penalty term, which penalizes errors between pixels. The optimal neural network model parameters are then obtained by minimizing the loss function: $w^{*}=\arg\min_{w}L(w)$.
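A sketch of this loss in PyTorch; since the exact algebraic form of the pixel penalty is not recoverable from the text, a per-pixel L1 term weighted by λ is assumed as one consistent reading, and the default value of `lam` is illustrative:

```python
import torch

def phase_loss(pred, target, lam=0.1):
    """Mean square error with an additional per-pixel penalty term.

    pred, target : (batch, 1, H, W) predicted and true phase screens
    lam          : penalty factor lambda (illustrative default)
    """
    mse = torch.mean((pred - target) ** 2)                 # regression term
    pixel_penalty = torch.mean(torch.abs(pred - target))   # per-pixel error term
    return mse + lam * pixel_penalty
```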
step three:
The two kinds of beam data and the corresponding real phase screens are obtained. This patent adopts the classical and widely used model developed by Hill and defined analytically by Andrews (formula (1) above), in which atmospheric turbulence is simulated by a random phase screen loaded with the refractive-index fluctuation spectrum, where $\Phi_n(\kappa_x,\kappa_y)$ represents the atmospheric refractive index power spectrum when a beam is transmitted in the $z$ direction, $\exp$ the exponential function with the natural constant $e$ as base, $C_n^2$ the atmospheric turbulence intensity, $l_0$ and $L_0$ the inner and outer scale radii, and $\kappa_x$ and $\kappa_y$ the wavenumbers in the $x$ and $y$ directions. The wavefront phase fluctuations are further modeled by a random distribution with variance $\sigma^2$.
In a random phase screen, the perturbation of the refractive index is approximately represented by the Kolmogorov spectrum, and the phase power spectrum of a single screen is

$$\Phi_\varphi(\kappa_x,\kappa_y)=2\pi k^{2}\,\Delta z\,\Phi_n(\kappa_x,\kappa_y)$$

where $\Delta x$ and $\varphi$ represent the grid spacing and the random phase, respectively, the wavenumber $k=2\pi/\lambda$ is the optical wavenumber, and $\Delta z$ is the separation distance between successive phase screens.

For ease of computation, the random distribution of phase disturbances is described in a rectangular coordinate system, and the phase screen is then obtained from the frequency domain by a fast Fourier transform operation:

$$\varphi(x,y)=\operatorname{Re}\left\{\mathrm{FFT}\left[M\,\sqrt{\Phi_\varphi(\kappa_x,\kappa_y)}\,\Delta\kappa\right]\right\}$$

where FFT represents the fast Fourier transform and $M$ is a complex random matrix with mean 0 and variance 1.
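The FFT phase-screen method described above might be sketched as follows, reusing the `hill_andrews_spectrum` function from the earlier sketch. The normalization (the $\Delta\kappa$ scaling and piston removal) follows the common textbook recipe and is an assumption where the patent's own formula is not recoverable:

```python
import numpy as np

def make_phase_screen(n, dx, cn2, l0, L0, wavelength, dz, rng=None):
    """One random turbulence phase screen via FFT filtering of complex
    Gaussian noise with the refractive-index spectrum (standard FFT method)."""
    rng = np.random.default_rng() if rng is None else rng
    k = 2.0 * np.pi / wavelength              # optical wavenumber
    dkappa = 2.0 * np.pi / (n * dx)           # frequency-domain grid spacing
    kx = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    phi_n = hill_andrews_spectrum(KX, KY, cn2, l0, L0)   # from the earlier sketch
    phi_n[0, 0] = 0.0                         # remove the piston (DC) component
    # phase power spectrum of one screen of thickness dz
    phi_phase = np.clip(2.0 * np.pi * k**2 * dz * phi_n, 0.0, None)
    # complex Gaussian random matrix with mean 0 and variance 1
    M = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return np.real(np.fft.fft2(M * np.sqrt(phi_phase) * dkappa))
```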
According to the flow shown in FIG. 1, after the neural network structure shown in FIG. 2 is designed, numerical simulation is used to obtain the data required for training and testing; the simulation parameters are those listed in Table 1.
Table 1 simulation parameters
30000 grayscale pictures were generated as training data, of which 3000 were used as the test set; the test data are mainly used to verify the effect of training. Because a corresponding validation image is generated during each round of validation, training can be stopped in time if the effect remains poor for several consecutive rounds, the corresponding parameters modified, and training restarted rather than fine-tuned to the end, which saves considerable training time. With a good parameter setting, the relationship between the loss value and the training epochs after training is shown in FIG. 3: as the epochs increase, the loss decreases first quickly and then slowly, and converges to a low value. In addition, once model training is complete, the outputs of the model at different stages can be compared through FIGS. 4, 5 and 6, which show, in order, the real phase screen, the phase screen predicted after training is complete, and the phase screen predicted when training begins; it is clear that the predicted phase screen approaches the real phase screen as training proceeds.
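A training-loop sketch matching this procedure (27000 training / 3000 test images, per-epoch validation so training can be stopped early), reusing `PhaseScreenNet` and `phase_loss` from the earlier sketches. The optimizer, learning rate, batch size and 128×128 resolution are illustrative assumptions, since the contents of Table 1 are not recoverable here:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# intensities: (30000, 1, 128, 128) distorted-beam intensity maps
# screens:     (30000, 1, 128, 128) phase screens normalized to [0, 1] for the sigmoid
def train(model, intensities, screens, epochs=100, batch_size=64, lr=1e-3):
    train_ds = TensorDataset(intensities[:27000], screens[:27000])
    test_ds = TensorDataset(intensities[27000:], screens[27000:])
    loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        model.train()
        for x, y in loader:
            opt.zero_grad()
            loss = phase_loss(model(x), y)   # MSE + pixel penalty, earlier sketch
            loss.backward()
            opt.step()
        # validate every epoch so training can be stopped early if loss stalls
        model.eval()
        with torch.no_grad():
            x_t, y_t = test_ds.tensors
            val = phase_loss(model(x_t), y_t).item()
        print(f"epoch {epoch}: validation loss {val:.4f}")

# After the parameters are fixed, one forward pass predicts the screen:
# predicted_screen = model(distorted_intensity)   # millisecond-scale on a GPU
```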

Claims (5)

1. The prediction method of the atmospheric turbulence phase screen based on deep learning is characterized by comprising the following steps of:
S1: according to the inherent relation between the distorted and undistorted beams and the real phase screen, and the property that a neural network can approximate a function with arbitrary precision, the feasibility of using a neural network to predict the phase screen is established, and a low-complexity convolutional neural network and a mean square error loss function with a pixel penalty term are designed;
S2: the light intensities of the distorted and undistorted Gaussian beams and the corresponding real phase screen are obtained as training data; to make the simulated atmospheric turbulence better fit the actual situation, the Hill-Andrews atmospheric refractive index power spectral density model is adopted, with the expression:

$$\Phi_n(\kappa_x,\kappa_y)=0.033\,C_n^2\,\frac{\exp\!\left(-\kappa^2/\kappa_l^2\right)}{\left(\kappa^2+\kappa_0^2\right)^{11/6}}\left[1+1.802\,\frac{\kappa}{\kappa_l}-0.254\left(\frac{\kappa}{\kappa_l}\right)^{7/6}\right]$$

where $\Phi_n(\kappa_x,\kappa_y)$ denotes the atmospheric refractive index power spectrum when the beam is transmitted in the $z$ direction, $C_n^2$ denotes the atmospheric turbulence intensity, $l_0$ and $L_0$ denote the inner and outer scale factors of the atmospheric turbulence with $\kappa_l=3.3/l_0$ and $\kappa_0=2\pi/L_0$, and $\kappa_x$ and $\kappa_y$ are the wavenumbers in the $x$ and $y$ directions, with $\kappa^2=\kappa_x^2+\kappa_y^2$;
S3: the designed convolutional neural network model is trained and its parameters continuously optimized; after the model parameters are fixed, a distorted beam intensity map is read into the trained convolutional neural network to predict the corresponding atmospheric turbulence phase screen; wherein the loss function uses the mean square error with a pixel penalty term as the criterion for parameter learning, the loss function $L(w)$ being:

$$L(w)=\frac{1}{N}\sum_{i=1}^{N}\left[\left\|f(X_i;w)-Y_i\right\|_2^2+\lambda\sum_{p}\left|f(X_i;w)_p-(Y_i)_p\right|\right] \quad (2)$$

where $w$ represents the weight parameters, $N$ the number of training samples, $X_i$ the intensity profile of the $i$-th Gaussian beam pair, $Y_i$ the $i$-th real atmospheric turbulence phase screen ($i$ taking integer values), $f(X_i;w)$ the predicted atmospheric turbulence phase screen, $f$ the nonlinear function realized by the network, $\lambda$ the penalty factor, and $\sum$ the summation operation; after the optimal weight parameters $w^{*}$ are determined, the predicted atmospheric turbulence phase screen is obtained from the input data $X$:

$$\hat{Y}=f(X;w^{*}),\qquad w^{*}=\arg\min_{w}L(w;X,Y) \quad (3)$$

where $\hat{Y}$ represents the predicted atmospheric turbulence phase screen, $\arg\min$ the minimization operation, $L$ the loss function, $X$ and $\hat{Y}$ the input data and output phase respectively, $w^{*}$ the optimal weight parameters, and $f$ can be regarded as an abstract nonlinear function approximated by the neural network.
2. The prediction method of claim 1, wherein the low-complexity convolutional neural network has small model complexity, requires few parameters to be trained, and requires little computational power for training.
3. The prediction method of claim 1, wherein the method is an end-to-end process: neither data preparation nor the prediction result requires post-processing, and no precise and expensive optical instruments are required.
4. The prediction method of claim 1, further comprising: implementing the method to predict the atmospheric turbulence phase screen.
5. Use of the prediction method of the atmospheric turbulence phase screen based on deep learning according to claim 1 for accurately and rapidly extracting the atmospheric turbulence phase screen.
CN202310631745.7A 2023-05-31 2023-05-31 Prediction method of atmospheric turbulence phase screen based on deep learning Pending CN116578821A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310631745.7A CN116578821A (en) 2023-05-31 2023-05-31 Prediction method of atmospheric turbulence phase screen based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310631745.7A CN116578821A (en) 2023-05-31 2023-05-31 Prediction method of atmospheric turbulence phase screen based on deep learning

Publications (1)

Publication Number Publication Date
CN116578821A true CN116578821A (en) 2023-08-11

Family

ID=87541259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310631745.7A Pending CN116578821A (en) 2023-05-31 2023-05-31 Prediction method of atmospheric turbulence phase screen based on deep learning

Country Status (1)

Country Link
CN (1) CN116578821A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117056732A (en) * 2023-10-11 2023-11-14 山东科技大学 Non-isotropic NARX troposphere delay grid prediction method
CN117056732B (en) * 2023-10-11 2023-12-15 山东科技大学 Non-isotropic NARX troposphere delay grid prediction method

Similar Documents

Publication Publication Date Title
CN113129247B (en) Remote sensing image fusion method and medium based on self-adaptive multi-scale residual convolution
CN111353424B (en) Remote sensing image spatial spectrum fusion method of depth recursion residual error network and electronic equipment
CN101819325A (en) Optical system and method for producing optical system
CN110930439B (en) High-grade product automatic production system suitable for high-resolution remote sensing image
CN116578821A (en) Prediction method of atmospheric turbulence phase screen based on deep learning
CN112801904B (en) Hybrid degraded image enhancement method based on convolutional neural network
Suárez Gómez et al. Improving adaptive optics reconstructions with a deep learning approach
Gao et al. Stacked convolutional auto-encoders for single space target image blind deconvolution
Zeng et al. Cascade neural network-based joint sampling and reconstruction for image compressed sensing
CN111353939A (en) Image super-resolution method based on multi-scale feature representation and weight sharing convolution layer
Wang et al. Automated clustering method for point spread function classification
CN113237554B (en) Method and device for generating surface temperature image under cloud and terminal equipment
CN111277809A (en) Image color correction method, system, device and medium
CN113256733B (en) Camera spectral sensitivity reconstruction method based on confidence voting convolutional neural network
CN116299247B (en) InSAR atmospheric correction method based on sparse convolutional neural network
CN113705340A (en) Deep learning change detection method based on radar remote sensing data
CN110895790B (en) Scene image super-resolution method based on posterior degradation information estimation
CN116883799A (en) Hyperspectral image depth space spectrum fusion method guided by component replacement model
Cang et al. Research on hyperspectral image reconstruction based on GISMT compressed sensing and interspectral prediction
CN107085843A (en) System and method for estimating the modulation transfer function in optical system
Wei et al. High-quality blind defocus deblurring of multispectral images with optics and gradient prior
Wang et al. Single-frame super-resolution for high resolution optical remote-sensing data products
Li et al. Multi-sensor multispectral reconstruction framework based on projection and reconstruction
Lu et al. Multi-Supervised Recursive-CNN for Hyperspectral and Multispectral Image Fusion
CN111223044A (en) Method for fusing full-color image and multispectral image based on dense connection network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination