CN111968047A - Adaptive optical image blind restoration method based on generative adversarial network - Google Patents


Info

Publication number
CN111968047A
CN111968047A (application number CN202010713133.9A)
Authority
CN
China
Prior art keywords
image
network
convolution
layer
clear image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010713133.9A
Other languages
Chinese (zh)
Inventor
史江林
张荣之
徐蓉
郭世平
刘长海
谌钊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Xian Satellite Control Center
Original Assignee
China Xian Satellite Control Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Xian Satellite Control Center
Priority to CN202010713133.9A
Publication of CN111968047A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an adaptive optical image blind restoration method based on a generative adversarial network, implemented according to the following steps: step 1, building a training set of space-target adaptive-optics blurred images and the corresponding sharp images; step 2, constructing a generative adversarial network model for training; step 3, inputting the training set of adaptive-optics blurred images and corresponding sharp images from step 1 into the generative adversarial network model built in step 2 to obtain a trained generator network model; and step 4, applying size-normalization preprocessing to the blurred image to be restored and inputting it into the generator network model trained in step 3 to obtain the restored sharp image. Because the method does not need to estimate the PSF, single-frame image restoration efficiency is improved.

Description

Adaptive optical image blind restoration method based on generative adversarial network
Technical Field
The invention belongs to the technical field of optical image processing, and relates to an adaptive optical image blind restoration method based on a generative adversarial network.
Background
When a space target is detected and imaged by a ground-based adaptive optical telescope, factors such as atmospheric turbulence cause optical wavefront distortion, so high-frequency information in the target image is strongly suppressed and attenuated, and the observed target image is severely degraded and blurred.
When a ground-based optical telescope observes space, the images acquired by the optical system generally suffer severe degradation blur owing to atmospheric-turbulence disturbance and other factors. To obtain sharper images, most modern large telescopes employ an Adaptive Optics (AO) system. Thanks to wavefront correction, the image quality achieved by an AO system is improved to a great extent; however, the number of correction units and the speed of the AO system cannot fully meet the correction requirements of dynamic atmospheric disturbance, and a correction residual remains. The high-frequency information of the target in an AO image is therefore still suppressed and attenuated to a large degree. Such degraded blurred images generally cannot meet the requirements of space observation and must be sharpened by means of image restoration.
In the field of image processing, the degraded imaging process is generally described as:

i(x) = h(x) ⊗ o(x) + n(x)    (1)

where x = (x, y) denotes the image spatial coordinates, i(x) the degraded imaging result, o(x) the ideal sharp image, h(x) the point spread function (PSF) of the optical system, n(x) the optical-path noise, and ⊗ the convolution operation. For an ideal imaging system, i.e., one with no degradation in the imaging process, the PSF is a unit impulse function. The process of estimating the sharp image o(x) from the degraded image i(x) is called image deconvolution or image restoration, as shown in FIG. 6; if the PSF h(x) is unknown, it is called blind image deconvolution or blind image restoration.
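As a minimal illustration of formula (1), the degradation can be sketched in NumPy with an FFT-based circular convolution; the delta-PSF check mirrors the "ideal system, unit impulse" remark above. The function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def degrade(o, h, noise_sigma=0.0, rng=None):
    """Simulate i(x) = (h * o)(x) + n(x), with * a circular convolution via FFT.

    o : ideal sharp image (2-D array)
    h : point spread function, same shape as o
    """
    if rng is None:
        rng = np.random.default_rng(0)
    i_img = np.real(np.fft.ifft2(np.fft.fft2(o) * np.fft.fft2(h)))
    if noise_sigma > 0:
        i_img = i_img + rng.normal(0.0, noise_sigma, o.shape)
    return i_img

# An ideal system: the PSF is a unit impulse, so the image passes unchanged.
o = np.zeros((8, 8)); o[3, 4] = 1.0
h = np.zeros((8, 8)); h[0, 0] = 1.0   # unit impulse PSF
i_img = degrade(o, h)
```

With a nonzero `noise_sigma` the same call adds the n(x) term of formula (1).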
In the actual detection imaging process of the space target, the PSF of the imaging optical system is often unknown, and in this case, a blind deconvolution method needs to be adopted to restore an ideal image.
Blind image deconvolution is an ill-posed problem: as shown in formula (1), two unknowns, o(x) and h(x), must be recovered from a single equation, so blind deconvolution carries considerable mathematical difficulty and uncertainty, and during the solution physical constraints must be applied to the target image and the point spread function throughout the iterations to ensure a correct and unique solution.
The existing blind restoration technical scheme comprises:
(1) RL-IBD algorithm
In recent years, research abroad on blind deconvolution of ground-based optical images in the field of space-target detection and recognition has focused mainly on applying more advanced physical constraints on the target and the PSF to improve deconvolution performance. Whether single-frame or multi-frame, the basic framework is generally the classic Richardson-Lucy iterative blind deconvolution (RL-IBD) algorithm, whose flow is shown in FIGS. 2 and 7. RL-IBD inherits the computational simplicity of the IBD algorithm, while the RL iteration preserves the non-negativity and energy conservation of the restored image, so a certain restoration quality is achieved. However, the restoration performance of RL-IBD is strongly influenced by the initial estimate of the PSF support domain: too small a support visibly degrades the restoration, while too large a support increases the computational load of image restoration and reduces the algorithm's efficiency.
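The RL update at the heart of RL-IBD can be sketched as one generic Richardson-Lucy step under circular boundary conditions; this is the textbook iteration, not the patent's own algorithm, and `rl_step` and `fftconv` are illustrative names:

```python
import numpy as np

def fftconv(a, b):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def rl_step(o_k, i_obs, h, eps=1e-12):
    """One Richardson-Lucy update: o_{k+1} = o_k * [h(-x) conv (i / (h conv o_k))].

    The update is multiplicative, so non-negativity of the estimate is
    preserved, and a normalized PSF preserves total energy: the two
    properties the text credits to the RL iteration.
    """
    est = fftconv(o_k, h)
    ratio = i_obs / (est + eps)
    h_mirror = np.roll(np.flip(h), 1, axis=(0, 1))  # h(-x) under circular indexing
    return o_k * fftconv(ratio, h_mirror)

rng = np.random.default_rng(0)
o_true = rng.random((8, 8)) + 0.5      # strictly positive "sharp" image
h = np.zeros((8, 8)); h[0, 0] = 1.0    # unit-impulse PSF (no blur)
i_obs = fftconv(o_true, h)             # noiseless observation
o_next = rl_step(o_true, i_obs, h)     # the true image is a fixed point
```

In a full (blind) RL-IBD loop this step alternates with an analogous update of the PSF estimate.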
(2) Blind restoration method based on sparse representation
Detail issues that must be handled in sparse-representation-based blind image restoration include dictionary learning, smoothness enhancement, and so on.
The common shortcomings of existing adaptive-optics blind restoration methods are: 1. the sharp image is obtained by first estimating the PSF; 2. before estimating the PSF and the sharp image, physical constraints (e.g., sparsity constraints) must be imposed to provide prior information; 3. alternating iterations are required, repeatedly re-estimating the PSF and the sharp image, so restoring one frame of image takes a long time.
Disclosure of Invention
The invention aims to provide an adaptive optical image blind restoration method based on a generative adversarial network which, because it does not need to estimate the PSF, improves single-frame image restoration efficiency.
The technical scheme adopted by the invention is an adaptive optical image blind restoration method based on a generative adversarial network, implemented according to the following steps:
step 1, building a training set of space-target adaptive-optics blurred images and the corresponding sharp images;
step 2, constructing a generative adversarial network model for training;
step 3, inputting the training set of adaptive-optics blurred images and corresponding sharp images from step 1 into the generative adversarial network model built in step 2 to obtain a trained generator network model;
and step 4, applying size-normalization preprocessing to the blurred image to be restored and inputting it into the generator network model trained in step 3 to obtain the restored sharp image.
The present invention is also characterized in that,
the step 1 specifically comprises the following steps:
step 1.1, acquiring a real space target simulation 3D model data set;
step 1.2, rendering each target in the 3D model data set obtained in the step 1.1 under different postures to obtain a spatial target clear image data set;
and 1.3, simulating each clear image o (x) of the spatial target clear image data set by adopting a Zernike polynomial method to perform atmospheric turbulence degradation simulation to obtain a corresponding fuzzy observation image i (x) and obtain a training set of the spatial target adaptive optics fuzzy image i (x) and the corresponding clear image o (x).
The blurred observation image i(x) is computed as:

i(x) = o(x) ⊗ h(x) + n(x)    (2)

where

h(x) = | F{ P(ρ) exp[ j φ(ρ) ] } |²,  φ(ρ) = Σ_k a_k Z_k(ρ)    (3)

in which φ(ρ) is the atmospheric-turbulence-degraded wavefront expressed by the Zernike polynomial method, a_k is the coefficient of the k-th term of the Zernike polynomial, each term Z_k(ρ) is called a wavefront mode, the pupil function P(ρ) equals 1/π inside the unit circular domain and zero outside it, F is the Fourier transform, and ρ is the polar radius in the polar coordinate system.
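A NumPy sketch of this simulation is given below, using only three hypothetical low-order Zernike modes (tip, tilt, defocus) on an arbitrary 64-sample grid; a realistic turbulence simulation would use many more modes with coefficients drawn from atmospheric statistics:

```python
import numpy as np

def turbulence_psf(coeffs, n=64):
    """PSF h(x) = |F{ P(rho) exp(j*phi(rho)) }|^2 on an n-by-n grid.

    coeffs: dict of Zernike coefficients a_k for a small illustrative subset,
            e.g. {'tip': ..., 'tilt': ..., 'defocus': ...}; the polar forms
            below are the standard low-order Zernike modes.
    """
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    P = (rho <= 1.0) / np.pi          # 1/pi inside the unit circle, 0 outside
    phi = (coeffs.get('tip', 0.0) * 2 * rho * np.cos(theta)
           + coeffs.get('tilt', 0.0) * 2 * rho * np.sin(theta)
           + coeffs.get('defocus', 0.0) * np.sqrt(3) * (2 * rho ** 2 - 1))
    pupil = P * np.exp(1j * phi)
    h = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return h / h.sum()                # normalize PSF energy to 1

h0 = turbulence_psf({})               # zero aberration: diffraction-limited PSF
h1 = turbulence_psf({'defocus': 2.0}) # defocused wavefront spreads the PSF
```

The blurred image i(x) then follows by convolving the rendered sharp image with such a PSF and adding noise, as in formula (2).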
The step 2 specifically comprises: the generative adversarial network model comprises a generator network G_θG and a discriminator network D_θD.
The generator network G_θG is structured as follows: it comprises 15 layers. Layer 1 is an input convolution layer with 7×7 convolution kernels, 64 kernels in number. Layers 2-3 are strided convolution units with 3×3 kernels, stride 2, and 128 and 256 kernels respectively. Layers 4-12 are residual convolution units, each containing two 3×3 convolution layers. Layers 13-14 are transposed convolution units with 3×3 kernels, stride 2, and 128 and 64 kernels respectively. Layer 15 is an output convolution layer with 7×7 kernels, 64 kernels in number. Except for the layer-15 output convolution, every convolution layer is followed by an instance normalization unit and a ReLU activation unit.
The discriminator network D_θD is structured as follows: it comprises 5 layers. Layers 1-3 are convolution layers with 4×4 kernels, stride 2, and 64, 128, and 256 kernels respectively. Layers 4-5 are convolution layers with 4×4 kernels, stride 1, and 512 and 1 kernels respectively.
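The layer specifications above can be sanity-checked with standard convolution size arithmetic. The 256×256 input size, "same"-style padding p = k//2, and output padding of 1 on the transposed convolutions are assumptions made for this sketch; the patent does not state them:

```python
def conv_out(n, k, s, p):
    """Spatial size after a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s, p, op=0):
    """Spatial size after a transposed convolution: s*(n-1) + k - 2p + op."""
    return s * (n - 1) + k - 2 * p + op

# Generator G_thetaG on an assumed 256x256 input
n = conv_out(256, 7, 1, 3)        # layer 1: input conv keeps 256
n = conv_out(n, 3, 2, 1)          # layer 2: strided conv -> 128
n = conv_out(n, 3, 2, 1)          # layer 3: strided conv -> 64
# layers 4-12: residual units with 'same' 3x3 convs keep 64
n = deconv_out(n, 3, 2, 1, 1)     # layer 13: transposed conv -> 128
n = deconv_out(n, 3, 2, 1, 1)     # layer 14: transposed conv -> 256
g_out = conv_out(n, 7, 1, 3)      # layer 15: output conv keeps 256

# Discriminator D_thetaD on the same 256x256 input
m = 256
for _ in range(3):
    m = conv_out(m, 4, 2, 1)      # layers 1-3: 256 -> 128 -> 64 -> 32
m = conv_out(m, 4, 1, 1)          # layer 4: stride 1 -> 31
d_out = conv_out(m, 4, 1, 1)      # layer 5: stride 1 -> 30
```

Under these assumptions the generator returns an image of the input size, and the discriminator emits a grid of patch scores rather than a single scalar, consistent with a critic-style output.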
The loss function of the generative adversarial network model is formed as a weighted sum of the content loss L_content of the generator network and the adversarial loss L_adv of the discriminator network:

L = L_content + λ L_adv    (4)

where λ is a weight coefficient.
The content loss of the generator network uses the perceptual loss:

L_content = (1 / (W_{i,j} H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} [ φ_{i,j}(o)_{x,y} − φ_{i,j}(G_θG(i))_{x,y} ]²    (5)

where φ_{i,j} is the feature map output by the j-th convolution layer before the i-th pooling layer of a pretrained convolutional network model, and W_{i,j} and H_{i,j} are the dimensions of that feature map.
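On a single feature map, the perceptual loss reduces to a scaled sum of squared differences. In the sketch below the toy arrays stand in for the pretrained-CNN activations φ_{i,j}(·); no actual network is involved:

```python
import numpy as np

def perceptual_loss(feat_sharp, feat_restored):
    """L_content = (1/(W*H)) * sum over (x, y) of squared feature differences.

    feat_sharp, feat_restored: stand-ins for phi_{i,j}(o) and
    phi_{i,j}(G(i)), here plain W x H arrays for illustration.
    """
    w, h = feat_sharp.shape[:2]
    return np.sum((feat_sharp - feat_restored) ** 2) / (w * h)

a = np.ones((4, 4))
b = np.zeros((4, 4))
loss = perceptual_loss(a, b)   # 16 unit squared differences over 16 cells
```

Identical feature maps give zero loss, so minimizing L_content pushes the restored image's features toward those of the sharp image.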
The adversarial loss of the discriminator network uses the Wasserstein distance with gradient penalty as its loss, so the discriminator outputs a score rather than the probability that a reconstructed image is sharp; the loss is computed as:

L_adv = (1/N) Σ_{n=1}^{N} [ − D_θD( G_θG( i_n ) ) ]    (6)

where N is the number of samples in a batch.
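A toy numerical reading of the two terms: the generator-side loss is the negated mean critic score, and the gradient penalty is sketched for a linear critic D(x) = w·x, whose input gradient is w everywhere. In practice the gradient is taken at random interpolates by automatic differentiation, and the λ_gp = 10 default is the usual WGAN-GP choice, not a value stated in the patent:

```python
import numpy as np

def adversarial_loss(critic_scores_on_restored):
    """Generator-side WGAN loss: (1/N) * sum of -D(G(i_n))."""
    return -np.mean(critic_scores_on_restored)

def gradient_penalty_linear(w, lam=10.0):
    """Gradient penalty lam * (||grad_x D||_2 - 1)^2 for a toy linear critic
    D(x) = w . x, whose gradient with respect to x is w everywhere.
    Illustrative only: real training penalizes gradients at interpolates
    between sharp and restored samples, computed by autodiff."""
    return lam * (np.linalg.norm(w) - 1.0) ** 2

scores = np.array([0.5, 1.5, 1.0])          # critic scores on restored images
g_loss = adversarial_loss(scores)           # negated mean score
gp = gradient_penalty_linear(np.array([3.0, 4.0]))
```

Driving g_loss down means raising the critic's scores on restored images, while the penalty keeps the critic's gradient norm near 1 to enforce the Lipschitz condition of the Wasserstein formulation.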
The step 3 specifically comprises:
Step 3.1, pre-training
The space-target adaptive-optics blurred images i(x) and the corresponding sharp images o(x) of the training set obtained in step 1 are input in turn to the discriminator network D_θD of the generative adversarial network model; a termination condition is preset, and iteration continues until it is reached, so that the discriminator network acquires basic discrimination ability.
Step 3.2, formal training
The space-target adaptive-optics blurred images i(x) of the training set obtained in step 1 are input to the generator network of the generative adversarial model to obtain restoration results, and the restoration results and the corresponding sharp images o(x) are input separately to the pre-trained discriminator network D_θD. While D_θD can still distinguish the sharp images o(x) from the restored images, training of the generator network G_θG continues; when D_θD can no longer distinguish the sharp images o(x) from the restored images, training stops and network training is complete.
In step 3.2, when the restoration result and the corresponding sharp image o(x) are input separately to the pre-trained discriminator network D_θD, the discriminator judges the input image: a sharp image is judged true with output 1, and a restored image is judged false with output 0; when the discriminator can no longer tell a restored image from a sharp one, its output approaches 0.5.
If, on these inputs, D_θD judges sharp images true but restored images false, training returns to the generator network: its parameters are updated by gradient descent on the loss function for a specified number of iterations, the restored images are regenerated, and the inputs of step 3.2 are presented to the pre-trained discriminator network D_θD again, until restored images are judged true just as sharp images are.
If, on these inputs, D_θD judges a sharp image false, training returns to the discriminator network: its parameters are updated by gradient descent on the loss function for a specified number of iterations, and the inputs of step 3.2 are presented to the updated discriminator network D_θD again, until sharp images are judged true and restored images are also judged true.
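The alternating scheme of steps 3.1-3.2 can be summarized as a training-loop skeleton. The 5:1 discriminator-to-generator update ratio is a typical WGAN-style choice assumed here for illustration; the patent only specifies "a specified number of iterations":

```python
def train_gan(n_epochs=3, d_steps=5):
    """Skeleton of the alternating scheme: several discriminator updates
    per generator update, repeated for a number of epochs. Returns the
    update schedule so the alternation can be inspected."""
    log = []
    for _ in range(n_epochs):
        for _ in range(d_steps):
            # discriminator step: score real o(x) up, restored G(i(x)) down
            log.append('D')
        # generator step: descend on content loss + weighted adversarial loss
        log.append('G')
    return log

log = train_gan()
```

Training stops when the discriminator can no longer separate restored images from sharp ones, i.e., its output on both approaches the same value.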
The invention has the advantage that
the adaptive optical image blind restoration method based on a generative adversarial network is an end-to-end blind restoration method. Using the adversarial property of the generative adversarial network model, combined with the content-loss information of the generated restored image, the network is trained to convergence. Feeding the blurred observation image to be restored into the generator model then produces a restored sharp image with a clearer target contour and more concrete details, effectively improving both the restoration accuracy of the target image and the restoration efficiency for a single frame.
Drawings
FIG. 1 is a schematic diagram of the image degradation and restoration process in the adaptive optical image blind restoration method based on a generative adversarial network of the present invention;
FIG. 2 is a flow chart of the existing RL-IBD algorithm;
FIG. 3 is a flow chart of the adaptive optical image blind restoration method based on a generative adversarial network of the present invention;
FIG. 4 is a schematic diagram of the generator network architecture in the adaptive optical image blind restoration method based on a generative adversarial network of the present invention;
FIG. 5 shows a blurred observation (left) and the GAN-based blind-restored sharp image (right) in the adaptive optical image blind restoration method based on a generative adversarial network of the present invention;
FIG. 6 is a diagram of a prior-art image degradation and restoration process;
FIG. 7 is a flow chart of a prior-art RL-IBD algorithm.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention discloses an adaptive optical image blind restoration method based on a generative adversarial network, whose flow is shown in FIG. 3 and which is implemented according to the following steps:
Step 1, building a training set of space-target adaptive-optics blurred images and the corresponding sharp images;
as shown in FIG. 1, specifically:
step 1.1, acquiring a data set of simulated 3D models of real space targets;
step 1.2, rendering each target in the 3D model data set obtained in step 1.1 under different attitudes to obtain a space-target sharp-image data set;
and step 1.3, performing atmospheric-turbulence degradation simulation on each sharp image o(x) of the space-target sharp-image data set by the Zernike polynomial method to obtain the corresponding blurred observation image i(x), yielding the training set of space-target adaptive-optics blurred images i(x) and corresponding sharp images o(x).
The blurred observation image i(x) is computed as:

i(x) = o(x) ⊗ h(x) + n(x)    (2)

where

h(x) = | F{ P(ρ) exp[ j φ(ρ) ] } |²,  φ(ρ) = Σ_k a_k Z_k(ρ)    (3)

in which φ(ρ) is the atmospheric-turbulence-degraded wavefront expressed by the Zernike polynomial method, a_k is the coefficient of the k-th term of the Zernike polynomial, each term Z_k(ρ) is called a wavefront mode, the pupil function P(ρ) equals 1/π inside the unit circular domain and zero outside it, F is the Fourier transform, and ρ is the polar radius in the polar coordinate system.
Step 2, constructing the generative adversarial network model for training; specifically: the generative adversarial network model comprises a generator network G_θG and a discriminator network D_θD.
As shown in FIG. 4, the generator network G_θG is structured as follows: it comprises 15 layers. Layer 1 is an input convolution layer with 7×7 convolution kernels, 64 kernels in number. Layers 2-3 are strided convolution units with 3×3 kernels, stride 2, and 128 and 256 kernels respectively. Layers 4-12 are residual convolution units, each containing two 3×3 convolution layers. Layers 13-14 are transposed convolution units with 3×3 kernels, stride 2, and 128 and 64 kernels respectively. Layer 15 is an output convolution layer with 7×7 kernels, 64 kernels in number. Except for the layer-15 output convolution, every convolution layer is followed by an instance normalization unit and a ReLU activation unit.
The discriminator network D_θD is structured as follows: it comprises 5 layers. Layers 1-3 are convolution layers with 4×4 kernels, stride 2, and 64, 128, and 256 kernels respectively. Layers 4-5 are convolution layers with 4×4 kernels, stride 1, and 512 and 1 kernels respectively.
The loss function of the generative adversarial network model is formed as a weighted sum of the content loss L_content of the generator network and the adversarial loss L_adv of the discriminator network:

L = L_content + λ L_adv    (4)

where λ is a weight coefficient.
The content loss of the generator network uses the perceptual loss:

L_content = (1 / (W_{i,j} H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} [ φ_{i,j}(o)_{x,y} − φ_{i,j}(G_θG(i))_{x,y} ]²    (5)

where φ_{i,j} is the feature map output by the j-th convolution layer before the i-th pooling layer of a pretrained convolutional network model, and W_{i,j} and H_{i,j} are the dimensions of that feature map.
The adversarial loss of the discriminator network uses the Wasserstein distance with gradient penalty as its loss, so the discriminator outputs a score rather than the probability that a reconstructed image is sharp; the loss is computed as:

L_adv = (1/N) Σ_{n=1}^{N} [ − D_θD( G_θG( i_n ) ) ]    (6)

where N is the number of samples in a batch.
Step 3, the training set of adaptive-optics blurred images and corresponding sharp images from step 1 is input into the generative adversarial network model built in step 2 to obtain a trained generator network model; specifically:
Step 3.1, pre-training
The space-target adaptive-optics blurred images i(x) and the corresponding sharp images o(x) of the training set obtained in step 1 are input in turn to the discriminator network D_θD of the generative adversarial network model; a termination condition is preset, and iteration continues until it is reached, so that the discriminator network acquires basic discrimination ability.
Step 3.2, formal training
The space-target adaptive-optics blurred images i(x) of the training set obtained in step 1 are input to the generator network of the generative adversarial model to obtain restoration results, and, as shown in FIG. 5, the restoration results and the corresponding sharp images o(x) are input separately to the pre-trained discriminator network D_θD. While D_θD can still distinguish the sharp images o(x) from the restored images, training of the generator network G_θG continues; when D_θD can no longer distinguish the sharp images o(x) from the restored images, training stops and network training is complete.
In step 3.2, when the restoration result and the corresponding sharp image o(x) are input separately to the pre-trained discriminator network D_θD, the discriminator judges the input image: a sharp image is judged true with output 1, and a restored image is judged false with output 0; when the discriminator can no longer tell a restored image from a sharp one, its output approaches 0.5.
If, on these inputs, D_θD judges sharp images true but restored images false, training returns to the generator network: its parameters are updated by gradient descent on the loss function for a specified number of iterations, the restored images are regenerated, and the inputs of step 3.2 are presented to the pre-trained discriminator network D_θD again, until restored images are judged true just as sharp images are.
If, on these inputs, D_θD judges a sharp image false, training returns to the discriminator network: its parameters are updated by gradient descent on the loss function for a specified number of iterations, and the inputs of step 3.2 are presented to the updated discriminator network D_θD again, until sharp images are judged true and restored images are also judged true.
Step 4, size-normalization preprocessing is applied to the blurred image to be restored, which is then input to the generator network model trained in step 3, yielding the restored sharp image.
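Step 4's size normalization might be sketched as below. The 256×256 target size, the [-1, 1] value range, and nearest-neighbor resampling are all assumptions for this sketch; the patent only requires size normalization before the image enters the generator:

```python
import numpy as np

def preprocess(img, size=256):
    """Size-normalize a blurred image before feeding it to the generator:
    nearest-neighbor resize to size x size, then scale pixel values from
    [0, 255] to [-1, 1]."""
    h, w = img.shape[:2]
    rows = (np.arange(size) * h // size).clip(0, h - 1)   # source row per output row
    cols = (np.arange(size) * w // size).clip(0, w - 1)   # source col per output col
    resized = img[rows[:, None], cols]
    return resized.astype(np.float32) / 127.5 - 1.0

# A white 100x80 input becomes a 64x64 array of ones.
x = preprocess(np.full((100, 80), 255, dtype=np.uint8), size=64)
```

The generator's output would then be mapped back from [-1, 1] to display range by the inverse transform.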
Interpretation of terms:
Generative adversarial network: a Generative Adversarial Network (GAN) is a deep learning model and one of the most promising approaches of recent years to unsupervised learning on complex distributions. The model produces good output through the mutual game between (at least) two modules in its framework: the generative model and the discriminative model.
Adaptive optics: Adaptive Optics (AO) is one of the most promising techniques for compensating, in near real time, the optical wavefront distortion introduced into the imaging process by atmospheric turbulence or other factors.
Blind restoration of an image: blind image restoration is an image-processing approach that estimates the original image and the imaging point spread function from the observed degraded blurred image when prior knowledge of both is unknown or incomplete.
Point spread function: the Point Spread Function (PSF) is the light-field distribution of the output image of an optical system when the input object is a point light source. Mathematically, a point source can be represented by a point impulse, and the light-field distribution of the output image is then called the impulse response; the point spread function is therefore the impulse-response function of the optical system.

Claims (9)

1. An adaptive optical image blind restoration method based on a generative adversarial network, characterized by comprising the following steps:
step 1, building a training set of space-target adaptive-optics blurred images and the corresponding sharp images;
step 2, constructing a generative adversarial network model for training;
step 3, inputting the training set of adaptive-optics blurred images and corresponding sharp images from step 1 into the generative adversarial network model built in step 2 to obtain a trained generator network model;
and step 4, applying size-normalization preprocessing to the blurred image to be restored and inputting it into the generator network model trained in step 3 to obtain the restored sharp image.
2. The adaptive optical image blind restoration method based on a generative adversarial network according to claim 1, wherein step 1 specifically comprises:
step 1.1, acquiring a data set of simulated 3D models of real space targets;
step 1.2, rendering each target in the 3D model data set obtained in step 1.1 under different attitudes to obtain a space-target sharp-image data set;
and step 1.3, performing atmospheric-turbulence degradation simulation on each sharp image o(x) of the space-target sharp-image data set by the Zernike polynomial method to obtain the corresponding blurred observation image i(x), yielding the training set of space-target adaptive-optics blurred images i(x) and corresponding sharp images o(x).
3. The adaptive optical image blind restoration method based on a generative adversarial network according to claim 2, wherein the blurred observation image i(x) is computed as:

i(x) = o(x) ⊗ h(x) + n(x)

where

h(x) = | F{ P(ρ) exp[ j φ(ρ) ] } |²,  φ(ρ) = Σ_k a_k Z_k(ρ)

in which φ(ρ) is the atmospheric-turbulence-degraded wavefront expressed by the Zernike polynomial method, a_k is the coefficient of the k-th term of the Zernike polynomial, each term Z_k(ρ) is called a wavefront mode, the pupil function P(ρ) equals 1/π inside the unit circular domain and zero outside it, F is the Fourier transform, and ρ is the polar radius in the polar coordinate system.
4. The adaptive optical image blind restoration method based on the generative adversarial network according to claim 2, wherein step 2 is specifically: the generative adversarial network model comprises a generator network GθG and a discriminator network DθD;
The generator network GθG has the following structure: it comprises 15 layers. Layer 1 is an input convolution layer with convolution kernels of size 7×7, 64 in number; layers 2 to 3 are strided convolution units with kernels of size 3×3, stride 2, and 128 and 256 kernels respectively; layers 4 to 12 are residual convolution units, each comprising two 3×3 convolution layers; layers 13 to 14 are transposed convolution units with kernels of size 3×3, stride 2, and 128 and 64 kernels respectively; layer 15 is an output convolution layer with kernels of size 7×7, 64 in number. Except for the layer-15 output convolution, every convolution layer is followed by an instance normalization unit and a ReLU activation unit;
The discriminator network DθD has the following structure: it comprises 5 layers. Layers 1 to 3 are convolution layers with kernels of size 4×4, stride 2, and 64, 128 and 256 kernels respectively; layers 4 to 5 are convolution layers with kernels of size 4×4, stride 1, and 512 and 1 kernels respectively.
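As a sanity check, the layer shapes of the claim-4 generator can be traced with the standard convolution size formulas. The padding values (3 for the 7×7 convolutions, 1 for the 3×3 convolutions, with output padding 1 on the transposed convolutions) and the 256×256 input size are assumptions chosen so the generator preserves resolution; the claim fixes only kernel sizes, strides, and kernel counts:

```python
def conv_out(size, kernel, stride, pad):
    """Output spatial size of a convolution (floor division, as in most frameworks)."""
    return (size + 2 * pad - kernel) // stride + 1

def tconv_out(size, kernel, stride, pad, out_pad):
    """Output spatial size of a transposed convolution."""
    return (size - 1) * stride - 2 * pad + kernel + out_pad

# Trace an illustrative 256x256 input through the claim-4 generator.
s = 256
s = conv_out(s, 7, 1, 3)          # layer 1: 7x7 input conv, 64 kernels
for _ in range(2):
    s = conv_out(s, 3, 2, 1)      # layers 2-3: strided 3x3 convs (128, 256 kernels)
# layers 4-12: nine residual units of two 3x3 convs each keep the size unchanged
for _ in range(2):
    s = tconv_out(s, 3, 2, 1, 1)  # layers 13-14: transposed convs (128, 64 kernels)
s = conv_out(s, 7, 1, 3)          # layer 15: 7x7 output conv
print(s)                          # → 256: the restored image matches the input size
```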
5. The adaptive optical image blind restoration method based on the generative adversarial network according to claim 4, wherein the loss function of the generative adversarial network model is a weighted combination of the content loss function L_content of the generator network and the adversarial loss function L_adv of the discriminator network:

L = L_content + λ·L_adv

where λ is a weight coefficient.
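A minimal sketch of the claim-5 weighted combination; placing λ on the adversarial term rather than the content term, and the sample value λ=100, are assumptions, since the claim leaves the weighting unspecified:

```python
def total_loss(l_content, l_adv, lam=100.0):
    """Composite GAN loss: content term plus a weighted adversarial term.
    The weight placement and default lam are assumptions, not claim values."""
    return l_content + lam * l_adv
```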
6. The adaptive optical image blind restoration method based on the generative adversarial network according to claim 5, wherein the content loss function of the generator network employs the perceptual loss:

L_content = (1/(W_{i,j}·H_{i,j})) · Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} (φ_{i,j}(o)_{x,y} − φ_{i,j}(GθG(i))_{x,y})²

wherein φ_{i,j} is the feature map output by the j-th convolution layer before the i-th pooling layer of a convolutional network model trained in advance, and W_{i,j} and H_{i,j} are the dimensions of the corresponding feature map.
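The perceptual loss of claim 6 compares feature maps of the clear and restored images rather than raw pixels. The sketch below assumes the feature extractor (commonly a pretrained VGG for perceptual losses, though the claim does not name one) has already been applied, so the inputs are the φ_{i,j} feature maps themselves:

```python
import numpy as np

def perceptual_loss(feat_clear, feat_restored):
    """Mean squared feature-map difference, normalized by the feature-map
    dimensions W x H (channel handling is left out of this sketch)."""
    w, h = feat_clear.shape[:2]
    return float(np.sum((feat_clear - feat_restored) ** 2) / (w * h))
```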
7. The adaptive optical image blind restoration method based on the generative adversarial network according to claim 5, wherein the adversarial loss function of the discriminator network adopts the Wasserstein distance with gradient penalty as the loss, so that the discriminator outputs a scalar score rather than the probability that the reconstructed image is a sharp image; the loss is calculated as follows:

L_adv = (1/N) Σ_{n=1}^{N} −DθD(GθG(i_n))

where N is the number of samples in a batch.
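The generator-side Wasserstein term of claim 7 reduces to the negated mean critic score over a batch. The gradient-penalty term named in the claim constrains the critic's own training objective and is omitted from this one-line sketch:

```python
import numpy as np

def adversarial_loss(critic_scores):
    """Negated mean Wasserstein critic score over N restored images;
    higher critic scores on restorations mean lower generator loss."""
    return float(-np.mean(critic_scores))
```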
8. The adaptive optical image blind restoration method based on the generative adversarial network according to claim 5, wherein step 3 is specifically:
Step 3.1, pre-training
Sequentially input the spatial-target adaptive-optics blurred images i(x) obtained in step 1 and the corresponding clear images o(x) of the training set into the discriminator network DθD of the generative adversarial network model; preset a termination condition and iterate continuously until it is reached, so that the discriminator network acquires basic discrimination capability;
Step 3.2, formal training
Input the spatial-target adaptive-optics blurred images i(x) obtained in step 1, which correspond to the clear images o(x) of the training set, into the generator network of the generative adversarial model to obtain restoration results, and input each restoration result and the corresponding clear image o(x) respectively into the pre-trained discriminator network DθD. While the discriminator network DθD can still distinguish the clear image o(x) from the restored image, continue training the generator network GθG; when the discriminator network DθD can no longer distinguish the clear image o(x) from the restored image, stop training; the network training is then complete.
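The two-phase schedule of claim 8 (pre-train DθD, then train GθG until DθD can no longer separate restored from clear) can be sketched with hypothetical stub networks; the stub classes, the update rules, the quality scores, and the 0.05 indistinguishability threshold are all illustrative assumptions:

```python
import random

class StubDiscriminator:
    """Hypothetical stand-in for DθD: scores images by a toy quality value."""
    def update(self, real, fake):
        pass                                  # a real critic would take a gradient step

    def score(self, image):
        return image["quality"]               # toy score in place of a learned one

class StubGenerator:
    """Hypothetical stand-in for GθG: each update improves restoration quality."""
    def __init__(self):
        self.quality = 0.0

    def restore(self, blurred):
        return {"quality": self.quality}

    def update(self, restored, clear):
        self.quality = min(1.0, self.quality + 0.2)

def train(generator, discriminator, pairs, pretrain_steps=3, max_epochs=50):
    for _ in range(pretrain_steps):           # step 3.1: pre-train the discriminator
        blurred, clear = random.choice(pairs)
        discriminator.update(real=clear, fake=blurred)
    epochs = 0
    for _ in range(max_epochs):               # step 3.2: formal training
        blurred, clear = random.choice(pairs)
        restored = generator.restore(blurred)
        if abs(discriminator.score(clear) - discriminator.score(restored)) < 0.05:
            break                             # D can no longer tell them apart: stop
        generator.update(restored, clear)     # D still distinguishes: keep training G
        epochs += 1
    return epochs

pairs = [({"quality": 0.0}, {"quality": 1.0})]
g, d = StubGenerator(), StubDiscriminator()
steps = train(g, d, pairs)
```

With real networks the stubs would be replaced by gradient updates on the claim-5 composite loss.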
9. The adaptive optical image blind restoration method based on the generative adversarial network according to claim 8, wherein in step 3.2 the restoration result and the corresponding clear image o(x) are respectively input into the pre-trained discriminator network DθD, and the discriminator network discriminates the input image: a clear image should be judged true, with output 1, and a restored image judged false, with output 0; once training converges, an input restored image is judged true and the discriminator output approaches 0.5;
If, when the restoration result and the corresponding clear image o(x) are respectively input into the pre-trained discriminator network DθD, the clear image is judged true and the restored image is judged false, return to continue training the generator network: update the network parameters of the loss function by a gradient descent algorithm for the specified number of iterations, regenerate the restored image, and input the restoration result and the corresponding clear image o(x) respectively into the pre-trained discriminator network DθD of step 3.2 again, until an input clear image is judged true and an input restored image is also judged true;
If, when the restoration result and the corresponding clear image o(x) are respectively input into the pre-trained discriminator network DθD, the clear image is judged false, return to continue training the discriminator network: update the network parameters by a gradient descent algorithm on the loss function for the specified number of iterations, return to step 3.2, and input the restoration result and the corresponding clear image o(x) into the updated discriminator network DθD, until an input clear image is judged true and an input restored image is also judged true.
CN202010713133.9A 2020-07-22 2020-07-22 Adaptive optical image blind restoration method based on generating type countermeasure network Pending CN111968047A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010713133.9A CN111968047A (en) 2020-07-22 2020-07-22 Adaptive optical image blind restoration method based on generating type countermeasure network

Publications (1)

Publication Number Publication Date
CN111968047A true CN111968047A (en) 2020-11-20

Family

ID=73362449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010713133.9A Pending CN111968047A (en) 2020-07-22 2020-07-22 Adaptive optical image blind restoration method based on generating type countermeasure network

Country Status (1)

Country Link
CN (1) CN111968047A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561828A (en) * 2020-12-23 2021-03-26 北京环境特性研究所 Gas turbulence fuzzy image reconstruction method based on generation countermeasure network
CN113077540A (en) * 2021-03-31 2021-07-06 点昀技术(南通)有限公司 End-to-end imaging equipment design method and device
CN113129232A (en) * 2021-04-15 2021-07-16 中山大学 Weak light speckle imaging recovery method based on countermeasure network generated by deep convolution
CN114742779A (en) * 2022-04-01 2022-07-12 中国科学院光电技术研究所 High-resolution self-adaptive optical image quality evaluation method based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520504A (en) * 2018-04-16 2018-09-11 湘潭大学 A kind of blurred picture blind restoration method based on generation confrontation network end-to-end
CN108711141A (en) * 2018-05-17 2018-10-26 重庆大学 The motion blur image blind restoration method of network is fought using improved production
CN110969589A (en) * 2019-12-03 2020-04-07 重庆大学 Dynamic scene fuzzy image blind restoration method based on multi-stream attention countermeasure network
CN111275637A (en) * 2020-01-15 2020-06-12 北京工业大学 Non-uniform motion blurred image self-adaptive restoration method based on attention model




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination