CN112330554B - Structure learning method for deconvolution of astronomical image - Google Patents


Info

Publication number
CN112330554B
CN112330554B (application CN202011186703.XA)
Authority
CN
China
Prior art keywords
image
astronomical
network
input
deconvolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011186703.XA
Other languages
Chinese (zh)
Other versions
CN112330554A (en)
Inventor
马龙
杨薮博
舒聪
黄姗姗
李彦龙
段笑晗
李世飞
喻钧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Technological University
Original Assignee
Xi'an Technological University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Technological University
Priority to CN202011186703.XA
Publication of CN112330554A
Application granted
Publication of CN112330554B
Legal status: Active

Links

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation

Abstract

The invention provides a structure learning method for deconvolution of astronomical images, which comprises the following steps: an astronomical image input step, in which the input of an astronomical image network is received and passed to a backbone network; and a backbone network processing step, comprising a feature extraction step and a signal estimation step. In the feature extraction step, features are extracted through two convolution layer operations and stacked residual learning. In the signal estimation step, the feature extraction result is fed into two branches for separate computation; the output signals of the first branch and the second branch are fused, and two further convolution layer operations are applied to obtain the final result.

Description

Structure learning method for deconvolution of astronomical image
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a structure learning method for deconvolution of astronomical images.
Background
Astronomical images are an effective medium for exploring the universe and monitoring space. Through such images, one can directly observe the changes of celestial bodies and the evolution of galaxies, and effectively monitor the operational status of artificial satellites and other man-made objects. Because of the long distances involved, astronomical images generally have to be acquired with telescopes. Due to the characteristics of the optics and the influence of the imaging environment, images acquired by a telescope are degraded to varying degrees. These degraded images are blurred and lose detail, and may even suffer serious loss of object structure and contours; they therefore often fail to meet practical needs. Many measures have been taken to obtain sharper imaging results. Of these, adaptive optics (AO) is the best known: it compensates for imaging blur caused by atmospheric turbulence or other factors by correcting wavefront distortion. However, because of the complexity and cost of AO systems, their effectiveness and response speed cannot always fully meet the correction requirements, and they sometimes leave significant correction residuals that result in blur. In this case, image post-processing techniques can further improve the quality of AO images. The aim of this work is to improve the post-processing of such images: its cost is negligible compared with that of building and operating a more complex AO system, yet its effect is very satisfactory.
In an ideal digital imaging system, one surface element in the scene (usually treated as a point) corresponds to one pixel on the image plane. When the imaging system is out of focus, has large aberrations, or undergoes severe motion, that point is spread over a finite area on the image plane. This causes the images formed by adjacent surface elements in the scene to overlap, resulting in image degradation. In this process, each pixel in the image is obtained by superimposing, at that pixel location, the images of the corresponding surface element and its neighbors. If the imaging system is linear and shift-invariant, the degraded imaging result can be described by the following equation:
y = x * k (1)
where y is the degraded image, x is its corresponding ideal sharp image, and k is the point spread function (PSF), defined as the response of the imaging system to a point source or point object; the asterisk denotes the convolution operation. The equation states that each pixel of the degraded image can be regarded as a weighted sum of the corresponding pixel and its neighborhood pixels in the ideal image.
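As a concrete illustration of Eq. (1), the following sketch simulates the degradation of a point source. It is not part of the patented method; the Gaussian PSF, image size, and function names are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    """Build a normalized Gaussian PSF; an assumed stand-in for a real AO PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()  # PSF sums to 1, so total image energy is preserved

def degrade(x, k):
    """Eq. (1): y = x * k, implemented as a 2-D convolution."""
    return fftconvolve(x, k, mode='same')

# Toy example: a single "star" (point source) is spread into the PSF shape.
x = np.zeros((64, 64))
x[32, 32] = 1.0
y = degrade(x, gaussian_psf())
```

Running this spreads the single bright pixel over a finite area, reproducing exactly the overlap effect described above.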
The task of image restoration is to recover the sharp image from the known degraded image. Equation (1) shows that this is a deconvolution problem. Image deconvolution is a common operation in many fields, including astronomical imaging (Starck et al. 2002; La Camera et al. 2015; Ramos et al. 2018), electron microscopy (Kenig et al. 2010; Preibisch et al. 2014; Li et al. 2018), anti-shake photography (Xu et al. 2014; Sun et al. 2015; Chakrabarti 2016; Nah et al. 2017; Kupyn et al. 2018), and medical imaging (Campisi & Egiazarian 2007; Xie et al. 2016), and it remains a continuing hot-spot problem in the image processing field. Depending on whether the PSF is known, image deconvolution can be categorized into blind deconvolution (PSF unknown) (Biggs 1988; You & Kaveh 1996; Prato et al. 2013) and non-blind deconvolution (PSF known) (Prato et al. 2012; Lefkimmiatis & Unser 2013; Schuler et al. 2013; Chen 2014).
Although a large number of deconvolution methods have been proposed in recent years, most are directed at natural images or other types of images, whereas deconvolution methods applicable to astronomical images are rare. These methods typically assume that the degraded image is perturbed by Poisson noise and achieve image deconvolution by some optimization method, such as iterative methods (Richardson 1972; Lucy 1974), scaled gradient projection methods (Prato et al. 2012), accelerated linearized alternating minimization (Chen 2014), or matrix-valued regularization operators (Lefkimmiatis & Unser 2013). Blind deconvolution is more challenging because the PSF is unknown; its main difficulty is the lack of information. One typical solution is to introduce various kinds of prior knowledge about the PSF and the image to compensate for this deficiency, and to use iterative methods to alternately estimate the PSF and the sharp image (Biggs 1988; You & Kaveh 1996; Prato et al. 2013). For a broader overview of traditional deconvolution methods, we refer the reader to (Starck et al. 2002; Campisi & Egiazarian 2007; Levin et al. 2011).
Disclosure of Invention
The invention provides a structure learning method for deconvolution of astronomical images, which comprises the following steps: an astronomical image input step, in which the input of an astronomical image network is received and passed to a backbone network; and a backbone network processing step, comprising a feature extraction step and a signal estimation step. In the feature extraction step, features are extracted through two convolution layer operations, and the feature extraction result is then computed through stacked residual learning operations. In the signal estimation step, the feature extraction result is fed into two branches for separate computation; the output signals of the first branch and the second branch are fused, and two further convolution layer operations are applied to obtain the final result.
In particular, the convolution layer operation comprises the following steps: performing a convolution operation on the received input data; applying batch normalization to the result of the convolution; and feeding the normalized result into a nonlinear excitation layer for computation.
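As a concrete illustration, such a convolution layer operation (convolution → batch normalization → nonlinear excitation) can be sketched in PyTorch as follows; the channel counts and the kernel size are assumptions, since the patent does not fix them.

```python
import torch.nn as nn

class ConvBlock(nn.Module):
    """One 'convolution layer operation': Conv -> BatchNorm -> ReLU."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        self.bn = nn.BatchNorm2d(out_ch)   # keeps feature scales consistent
        self.relu = nn.ReLU(inplace=True)  # nonlinear excitation layer

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))
```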
In particular, the nonlinear excitation layer employs a rectified linear unit (ReLU) to perform the operation.
Specifically, the first branch first performs a convolution layer operation in which the size of the output data is controlled by the convolution kernel, and the resulting column signal is then fed into a fully connected layer to obtain the first branch result; the second branch is computed through an upsampling layer to obtain the second branch result.
In particular, the astronomical image input step further comprises: the received astronomical image is the input of a network for point spread function (PSF) estimation (PSFNet), a network for non-blind deconvolution (NBDNet), or a network for blind deconvolution (BDNet).
In particular, when the input network is the PSF estimation network, the received astronomical image is a degraded image, and the final result is the point spread function (PSF) of that degraded image.
In particular, when the input network is the non-blind deconvolution network (NBDNet), the received input is a degraded image together with the PSF corresponding to that degraded image; the final result is the sharp image corresponding to the degraded image.
In particular, when the input network is the blind deconvolution network (BDNet), the received astronomical image is a degraded image; the network first estimates the PSF corresponding to the degraded image and then uses it together with the degraded image, the final result being a sharp image estimated from the degraded image.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly described below. The drawings described below show some embodiments of the invention; other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a backbone network structure according to the present invention;
FIG. 2 is a schematic diagram of the steps of the astronomical image deconvolution;
fig. 3 is a schematic diagram illustrating calculation of a residual module according to the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings, which show some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The following detailed description of specific embodiments of the invention refers to the accompanying drawings.
The invention provides a structure learning method for deconvolution of astronomical images, which comprises the following steps:
step S1: an astronomical image input step, namely receiving the input of an astronomical image input network and inputting the input of the astronomical image into a backbone network; the input network of the astronomical image can be a network for estimating a point spread function PSF (Point Spread Function), PSFNet for short and NBDNet for short; they all use the backbone structure directly; the only difference between the two networks from the outside is their input and output. For PSFNet, our goal is to use it to estimate the PSF that leads to image degradation. Thus, the input of the PSFNet is a degraded image and its output is the corresponding PSF. The number of input channels is equal to the number of channels of the degraded image. For nbdnaet, our goal is to use it to perform a non-blind deconvolution operation, i.e., to recover the sharp image corresponding to the degraded image, given the degraded image and its corresponding PSF. Thus, the input of nbdnaet is a degraded image and a corresponding PSF, and the output thereof is a clear image corresponding to the degraded image. When inputting data, we normalize the degraded image and the PSF separately and then input them in series into the network. Therefore, the number of channels input to nbdnat is equal to the sum of the number of channels of the degraded image and the number of channels of the PSF. The input network also includes a blind deconvolution network (BDNet for short, blind deconvolution problem is more challenging than deconvolution problem), the solution of this problem is to estimate PSF first, then to solve blind deconvolution problem by means of solving non-blind deconvolution problem (Levin et al 2011). In this process, though PSF estimation links exist, the network can be regarded as a black box from outside to form a blind deconvolution network, the first one is used for PSF estimation and can be regarded as PSF estimation sub-network
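For concreteness, the following sketch shows how the PSFNet and NBDNet inputs could be assembled under the channel rule just described. The min-max normalization scheme, the tensor sizes, and the resizing of the PSF to the image size are illustrative assumptions, not details fixed by the patent.

```python
import torch
import torch.nn.functional as F

def normalize(t):
    """Min-max normalize a tensor to [0, 1]; an assumed normalization scheme."""
    return (t - t.min()) / (t.max() - t.min() + 1e-8)

# PSFNet input: the degraded image alone (in_channels = image channels).
degraded = torch.rand(1, 1, 64, 64)  # (batch, channel, H, W)
psfnet_input = normalize(degraded)

# NBDNet input: degraded image and its PSF, normalized separately and
# concatenated along the channel axis, so in_channels = C_image + C_psf.
psf = torch.rand(1, 1, 15, 15)
psf_resized = F.interpolate(psf, size=degraded.shape[-2:], mode='bilinear',
                            align_corners=False)  # assumed: match spatial size
nbdnet_input = torch.cat([normalize(degraded), normalize(psf_resized)], dim=1)
print(nbdnet_input.shape)  # torch.Size([1, 2, 64, 64])
```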
Step S2: a backbone network processing step, comprising S21, a feature extraction step, and S22, a signal estimation step; a schematic diagram of the backbone network structure is shown in Fig. 1.
S21: the feature extraction step: feature extraction is performed through two convolution layer operations followed by stacked residual learning. The convolution layer operation comprises the following steps: a convolution operation is performed on the received input data; batch normalization (BN) is applied to the result of the convolution, which is then fed into a nonlinear excitation layer. Batch normalization serves mainly to keep feature scales consistent, accelerating training and improving accuracy. The nonlinear excitation layer is needed because convolution is a linear operation; adding a nonlinear excitation layer increases the nonlinear descriptive capacity of the convolution computation, and here it uses a rectified linear unit (Rectified Linear Unit, ReLU). The feature extraction result is then obtained through the computation of stacked residual learning modules. The residual learning module is prior art ("Deep Residual Learning for Image Recognition", He et al.) that approximates an objective function quickly by constructing residual units. The idea is that, instead of hoping each stack of layers directly fits a desired mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping by H(x), we let the stacked nonlinear layers fit another mapping F(x) = H(x) − x, so the original mapping can be rewritten as F(x) + x; we hypothesize that the residual mapping is easier to optimize than the original one. In the extreme case, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping with a stack of nonlinear layers. The calculation process is shown in Fig. 3.
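A minimal PyTorch sketch of such a residual unit follows: the stacked layers fit F(x), and the block returns F(x) + x. The two-convolution layout follows He et al., while the channel count and kernel size are assumptions.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual unit: the stacked layers fit F(x); the output is F(x) + x."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # identity shortcut: F(x) + x
```

If the optimal mapping were the identity, training only needs to drive `self.body(x)` toward zero, which is the ease-of-optimization argument made above.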
S22: the signal estimation step: the feature extraction result is fed into two branches for separate computation; the output signals of the first branch and the second branch are fused, and two further convolution layer operations are applied to obtain the final result.
The signal estimation step adopts a "split first, then merge" information processing strategy: two branches are led out from the end of the feature extraction part. One branch adopts an upsampling structure starting from the output of the feature extraction part, expanding the size of the feature map layer by layer, and finally outputting a signal with the same size as the original input of the backbone network. The other branch adopts a fully connected structure: a large convolution kernel adjusts the output size of the feature extraction part, a column signal is then produced, and finally the column signal is reshaped into a two-dimensional signal with the same size as the original input of the feature extraction part. At the ends of the two branches, two convolution layers fuse the branch outputs and produce the final result. This combination of upsampling and fully connected networks is very beneficial for structure learning. The upsampling branch uses a layer-by-layer upsampling method combining interpolation and convolution, which helps preserve the structure of the signal. The fully connected branch processes the two-dimensional signal column by column; although this breaks the original structure of the signal, it relaxes the constraints between signal neighborhoods and helps recover sharp details within the signal. By combining the fully connected network and the upsampling network, the estimation part of the backbone network lets the two complement each other, effectively retaining their respective advantages while overcoming their respective disadvantages. A sketch of this two-branch head is given below.
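Under stated assumptions (the feature map size, channel widths, kernel sizes, and interpolation mode, none of which the patent specifies), the two-branch estimation head could look like the following sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EstimationHead(nn.Module):
    """Two-branch signal estimation: an upsampling branch and a fully
    connected branch, fused by two final convolution layers."""
    def __init__(self, feat_ch=64, feat_size=16, out_size=64):
        super().__init__()
        self.out_size = out_size
        # Branch 1: a large-kernel conv shrinks the feature map, then a fully
        # connected layer emits a column signal reshaped to (out_size, out_size).
        self.shrink = nn.Conv2d(feat_ch, 1, kernel_size=7, stride=2, padding=3)
        self.fc = nn.Linear((feat_size // 2) ** 2, out_size * out_size)
        # Branch 2: layer-by-layer interpolation + convolution.
        self.up_conv1 = nn.Conv2d(feat_ch, 32, 3, padding=1)
        self.up_conv2 = nn.Conv2d(32, 1, 3, padding=1)
        # Fusion: two convolution layers over the concatenated branch outputs.
        self.fuse = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, feat):
        b = feat.size(0)
        # Fully connected branch: column-by-column processing relaxes
        # neighborhood constraints, helping recover sharp details.
        col = self.shrink(feat).flatten(1)
        fc_out = self.fc(col).view(b, 1, self.out_size, self.out_size)
        # Upsampling branch: interpolate + conv preserves signal structure.
        u = F.interpolate(feat, scale_factor=2, mode='bilinear',
                          align_corners=False)
        u = F.relu(self.up_conv1(u))
        u = F.interpolate(u, size=(self.out_size, self.out_size),
                          mode='bilinear', align_corners=False)
        up_out = self.up_conv2(u)
        # Merge the two branch outputs and apply two final conv layers.
        return self.fuse(torch.cat([fc_out, up_out], dim=1))

# Shape check under the assumed sizes:
head = EstimationHead()
print(head(torch.rand(2, 64, 16, 16)).shape)  # torch.Size([2, 1, 64, 64])
```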
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (4)

1. A structure learning method for deconvolution of astronomical images, characterized by comprising the following steps:
an astronomical image input step, in which the input of an astronomical image network is received and passed to a backbone network; and a backbone network processing step, comprising a feature extraction step and a signal estimation step;
wherein the feature extraction step performs feature extraction through two convolution layer operations and stacked residual learning;
the astronomical image input step further comprises: the received astronomical image is the input of a network for point spread function (PSF) estimation (PSFNet), a network for non-blind deconvolution (NBDNet), or a network for blind deconvolution (BDNet);
when the input network is the PSF estimation network, the received astronomical image is a degraded image, and the final result obtained is the PSF of the degraded image;
when the input network is the non-blind deconvolution network (NBDNet), the received input is a degraded image and the PSF corresponding to the degraded image, and the final result is the sharp image corresponding to the degraded image;
when the input network is the blind deconvolution network (BDNet), the received astronomical image is a degraded image; the PSF corresponding to the degraded image is obtained first and is then used together with the degraded image, the final result being a sharp image estimated from the degraded image;
and the signal estimation step feeds the feature extraction result into two branches for separate computation, fuses the output signals of the first branch and the second branch, and applies two further convolution layer operations to the fused signal to obtain the final result.
2. The astronomical image deconvolution oriented structure learning method of claim 1, wherein the convolution layer operation comprises the following steps: performing a convolution operation on the received input data; applying batch normalization to the result of the convolution; and feeding the normalized result into a nonlinear excitation layer for computation.
3. The astronomical image deconvolution oriented structure learning method of claim 2, wherein the nonlinear excitation layer employs a rectified linear unit (ReLU) to complete the operation.
4. The astronomical image deconvolution oriented structure learning method of claim 1, wherein the first branch first performs a convolution layer operation in which the size of the output data is controlled by the convolution kernel, and the resulting column signal is then fed into a fully connected layer to obtain the first branch result; and the second branch is computed through an upsampling layer to obtain the second branch result.
CN202011186703.XA 2020-10-30 2020-10-30 Structure learning method for deconvolution of astronomical image Active CN112330554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011186703.XA CN112330554B (en) 2020-10-30 2020-10-30 Structure learning method for deconvolution of astronomical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011186703.XA CN112330554B (en) 2020-10-30 2020-10-30 Structure learning method for deconvolution of astronomical image

Publications (2)

Publication Number Publication Date
CN112330554A CN112330554A (en) 2021-02-05
CN112330554B (en) 2024-01-19

Family

ID=74297423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011186703.XA Active CN112330554B (en) 2020-10-30 2020-10-30 Structure learning method for deconvolution of astronomical image

Country Status (1)

Country Link
CN (1) CN112330554B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013148139A1 (en) * 2012-03-29 2013-10-03 Nikon Corporation Algorithm for minimizing latent sharp image and point spread function cost functions with spatial mask fidelity
CN105046659A (en) * 2015-07-02 2015-11-11 中国人民解放军国防科学技术大学 Sparse representation-based single lens calculation imaging PSF estimation method
WO2020087607A1 (en) * 2018-11-02 2020-05-07 北京大学深圳研究生院 Bi-skip-net-based image deblurring method
CN111553866A (en) * 2020-05-11 2020-08-18 西安工业大学 Point spread function estimation method for large-field-of-view self-adaptive optical system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
卜丽静; 卜欣彤; 张过; 武文波; 张正鹏. Blind image restoration with optimized point spread function estimation and sparse constraints. Science of Surveying and Mapping, 2017, (10), full text. *
李岚; 张云; 杜佳; 马少斌. Research on super-resolution image reconstruction based on an improved residual sub-pixel convolutional neural network. Journal of Changchun Normal University, 2020, (08), full text. *

Also Published As

Publication number Publication date
CN112330554A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
KR100911890B1 (en) Method, system, program modules and computer program product for restoration of color components in an image model
RU2716843C1 (en) Digital correction of optical system aberrations
CN109644230B (en) Image processing method, image processing apparatus, image pickup apparatus, and storage medium
CN103826033B (en) Image processing method, image processing equipment, image pick up equipment and storage medium
JP5868076B2 (en) Image processing apparatus and image processing method
KR20100139030A (en) Method and apparatus for super-resolution of images
Wu et al. A multifocus image fusion method by using hidden Markov model
WO2011132415A1 (en) Imaging device and image restoration method
WO2011132416A1 (en) Imaging device and image restoration method
CA2554989A1 (en) Super-resolution image processing
CN112104847B (en) SONY-RGBW array color reconstruction method based on residual error and high-frequency replacement
CN115115516B (en) Real world video super-resolution construction method based on Raw domain
Dudhane et al. Burstormer: Burst image restoration and enhancement transformer
CN110378850B (en) Zoom image generation method combining block matching and neural network
CN106846250B (en) Super-resolution reconstruction method based on multi-scale filtering
He et al. A regularization framework for joint blur estimation and super-resolution of video sequences
CN112330554B (en) Structure learning method for deconvolution of astronomical image
JP2015115733A (en) Image processing method, image processor, imaging device, and image processing program
CN112330555B (en) Network training method for deconvolution of astronomical image
CN109379532B (en) Computational imaging system and method
JP2013069012A (en) Multi-lens imaging apparatus
Faramarzi et al. Space-time super-resolution from multiple-videos
KR101568743B1 (en) Device and method for the joint of color demosaic and lens distortion correction
JP2020061129A (en) Method for processing image, image processor, imaging device, image processing system, program, and storage medium
Evdokimova et al. Study of GAN-based image reconstruction for diffractive optical systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant