CN115457255B - Ground-based telescope image non-uniformity correction method based on deep learning - Google Patents

Ground-based telescope image non-uniformity correction method based on deep learning Download PDF

Info

Publication number
CN115457255B
CN115457255B (application CN202211082290.XA)
Authority
CN
China
Prior art keywords
uniform
image
generator
training
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211082290.XA
Other languages
Chinese (zh)
Other versions
CN115457255A (en)
Inventor
刘俊池
郭祥吉
王建立
李洪文
赵金宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN202211082290.XA priority Critical patent/CN115457255B/en
Publication of CN115457255A publication Critical patent/CN115457255A/en
Application granted granted Critical
Publication of CN115457255B publication Critical patent/CN115457255B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/243 Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks


Abstract

A deep-learning-based non-uniformity correction method for ground-based telescope images, relating to the technical field of image processing. It addresses the problems that existing filtering methods often fit the non-uniform background inaccurately, giving a poor correction result, and that polynomial-fitting methods require a large amount of computation time, resulting in low efficiency. The method comprises three steps: constructing a generative adversarial network, training the network, and correcting with the inferred non-uniform background image. The invention adopts a supervised-learning approach, so a corresponding data set is required before training. In fitting the non-uniform background, no prior knowledge of the non-uniformity model is used; the background is inferred by the network, so the method can handle a wider range of space-image non-uniformity correction tasks. Correction efficiency is high: the correction step contains no complex iterative algorithm, and the non-uniform image passes only through convolution, pooling, and similar operations in the generator's inference, so the non-uniform background is recovered quickly.

Description

Ground-based telescope image non-uniformity correction method based on deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to a deep-learning-based non-uniformity correction method for ground-based telescope images.
Background
Obtaining space information through a ground-based telescope is an important method in space situational awareness, but the ground-based telescope is susceptible to various optical problems. The detector is subject to vignetting when generating an image, and stray light then causes image non-uniformity. Non-uniform images affect subsequent image stitching, foreground-background segmentation, and detection of space targets. Thus, non-uniformity correction of the space image is an important image preprocessing step.
The patent "An infrared image non-uniformity correction method and system" discloses an infrared image non-uniformity correction method and system, relating to the field of infrared image processing. The method can eliminate the influence of detector temperature and scene change on infrared imaging and improve infrared imaging quality.
The document "Vignetting correction for a single star-sky observation image" discloses a non-uniformity correction method for ground-based telescope images. The non-uniform background is calculated through EM-algorithm iteration, achieving accurate correction of the space image.
The main purpose of image non-uniformity correction is to remove the non-uniformity caused by vignetting and by stray-light information irrelevant to the star map, so as to guarantee target detection and recognition performance. However, some existing non-uniformity correction algorithms require a known model, so images affected by unknown, complex vignetting and stray light cannot be corrected accurately. Filtering-based methods, meanwhile, often fit the background inaccurately, which leads to an unsatisfactory correction result. In addition, traditional correction algorithms take a long time and cannot achieve real-time correction.
In view of the above drawbacks, the deep-learning-based method proposed by the present invention models the non-uniform background directly. The method does not need the non-uniformity function in advance: through the network's learning on non-uniform images, the non-uniform background is obtained directly from the non-uniform image. The method of the invention not only achieves accurate non-uniform background modeling but also achieves real-time correction, processing tens of frames per second.
Disclosure of Invention
The invention aims to solve the problems that existing filtering-based methods often fit the non-uniform background inaccurately, resulting in a poor non-uniformity correction effect, and that polynomial-fitting-based methods require a large amount of computation time, resulting in low efficiency; to this end, a deep-learning-based space-image non-uniformity correction method is provided.
The deep-learning-based space-image non-uniformity correction method is realized by the following steps:
step one, constructing a generative adversarial network structure;
the generative adversarial network structure includes a generator and a discriminator; a non-uniform image is input into the generator, and the generator outputs a non-uniform background image;
the non-uniform background image and the real background image are input into the discriminator, which outputs a probability value that the image is a real background;
step two, network training;
performing adversarial training of the generator and the discriminator; when the discrimination probability reaches 0.5, training is complete;
step three, correcting with the non-uniform background image;
generating the non-uniform background image corresponding to the non-uniform image with the trained generator, and obtaining a uniform image according to the following formula:

I = I′ / V

wherein I is the observed image, I′ is the non-uniform image, and V is the non-uniform function.
Further, the generator is a convolutional neural network consisting of an encoder and a decoder;
the non-uniform image enters the encoder, which comprises several convolution, pooling, and activation-function layers used for feature extraction and pixel compression; the decoder comprises convolution, batch-normalization, and activation-function layers; up-sampling of the image uses deconvolution so that the network learns more image information.
Further, the specific process of the network training is as follows:
first, parameters of both the generator G and the discriminator D are initialized.
The data set for training is then a training set of non-uniform image-non-uniform background pairs, a number of non-uniform background samples are extracted from the training set, and a generator generates the same number of samples using the non-uniform image distribution. The generator G is fixed and the discriminator D is trained to distinguish as far as possible between true and false.
Finally, after k times of updating the discriminator D, 1 time of generator G is updated so that the discriminator is as indistinguishable as possible from true or false. After multiple updating iterations, in an ideal state, the final discriminator D can not distinguish whether the image is from a real training sample set or a sample generated by the generator G, and at this time, the discriminating probability is 0.5, so that training is completed.
The invention has the beneficial effects that: the invention provides a deep-learning-based space-image non-uniformity correction method. The deep convolutional neural network comprises a generator and a discriminator: the generator is responsible for inferring the non-uniform background from the input non-uniform image, and the discriminator is responsible for judging how realistic the generated non-uniform background is. The two networks improve through adversarial training, and finally the background is obtained by feeding the non-uniform image into the generator.
The method has the following advantages:
1. No non-uniformity model is required. The invention adopts a supervised-learning method, so a corresponding data set is required before training. In fitting the non-uniform background, no prior knowledge of the non-uniformity model is used; the background is inferred by the network, so the method can handle a wider range of space-image non-uniformity correction tasks.
2. Correction efficiency is high. The correction step contains no complex iterative algorithm; the non-uniform image passes only through convolution, pooling, and similar operations in the generator's inference, so the non-uniform background is recovered quickly.
Drawings
Fig. 1 is a schematic diagram of the generative adversarial network in the deep-learning-based space-image non-uniformity correction method of the present invention.
Detailed Description
This embodiment is described with reference to Fig. 1. The deep-learning-based space-image non-uniformity correction method is implemented as follows:
the imaging system set undisturbed by the optical system can be expressed as i=f (r+epsilon) 1 )+ε 2 . Wherein I, f, R, ε 12 Representing the observed image, radiation response function, scene radiation, scattering noise, and additive noise (e.g., amplifier noise, a/D, D/a noise, etc.), respectively. Considering that the gain of the exposure camera ignores the additional noise, the imaging system can be expressed as i=f·e (r+epsilon) 1 ). Asymptotic vignetting occurs in front of a lens group, and non-uniformity and stray light caused by camera inclination are obtained, so that an imaging model with non-uniformity influence is obtained:
wherein the method comprises the steps ofIs a non-uniform function. But->Assuming that the vignetting changes slowly and uniformly, the stray light does not cause abnormal exposure of some images, then +.>σ 1 Is the standard deviation of the Gaussian distribution, then +.>Thus, the corrected image can be obtained by:
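The imaging model above can be illustrated with a short numerical sketch. The explicit Gaussian surface used for V(x, y) below is an assumed example, since the description only states that the non-uniform function follows a Gaussian distribution with standard deviation σ₁:

```python
import numpy as np

# Assumed Gaussian-shaped non-uniform function V(x, y) with standard
# deviation sigma1, centered on the image; the exact form is illustrative.
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
sigma1 = 40.0
cy, cx = H / 2, W / 2
V = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma1 ** 2))

# Uniform scene radiance I, observed non-uniform image I' = V * I (eq. 1),
# and correction by division I = I' / V (eq. 2).
I = np.random.default_rng(0).uniform(0.2, 1.0, (H, W))
I_prime = V * I
I_corrected = I_prime / V
```

Since V is strictly positive here, the division recovers the uniform image exactly; in practice the background comes from the generator rather than a known analytic form.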
therefore, an object of the present embodiment is to directly obtain a non-uniform background image by inputting a non-uniform image, and to obtain a uniform optical image by dividing the non-uniform image with the non-uniform background image.
In this embodiment, the method further includes constructing a convolutional neural network: a generative adversarial structure is adopted, i.e., the network comprises a generator and a discriminator. The structure of the network is shown in Fig. 1, which gives the overall structure of the generative adversarial network. Here y is the non-uniform image input to the generator, G(y) is the normalized non-uniform background image output by the generator, and λ is a parameter for adjusting the gray value of the whole image; the resulting background corresponds to the non-uniform function in formula (2). The generator realizes obtaining the non-uniform background image from the fed-in non-uniform image; the purpose of the discriminator is to determine whether the image fed into it is a real background image or a background image generated by the generator.
The generator is a convolutional neural network with an encoder-decoder structure. The image first enters the encoder, which contains several convolution, pooling, and activation-function layers for feature extraction and pixel compression. To suppress overfitting, batch normalization is added between the pooling and activation functions. The decoder likewise comprises convolution, batch-normalization, and activation-function layers. Up-sampling of the image uses deconvolution so that the network can learn more image information.
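A minimal PyTorch sketch of such an encoder-decoder generator follows. The layer counts, channel widths, and the final sigmoid are assumptions; the description only names the layer types (convolution, pooling, batch normalization, activation, deconvolution):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of the encoder-decoder generator: non-uniform image -> background."""
    def __init__(self):
        super().__init__()
        # Encoder: convolution for feature extraction, pooling for pixel
        # compression, batch normalization between pooling and activation.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.MaxPool2d(2), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.MaxPool2d(2), nn.BatchNorm2d(32), nn.ReLU(),
        )
        # Decoder: deconvolution (transposed convolution) for up-sampling.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.BatchNorm2d(16), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, y):
        return self.decoder(self.encoder(y))

g = Generator()
y = torch.rand(1, 1, 64, 64)   # a non-uniform input image
background = g(y)              # inferred non-uniform background, same size as input
```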
Since the discriminator only judges whether the input image is real, it does not require a complex network structure. Its input is either the label data (real background data) from the data set or background data generated by the generator, and its output is a probability value that the image is a real background. The discriminator therefore uses several convolution, pooling, batch-normalization, and activation-function layers.
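A matching discriminator sketch, again with assumed sizes; it reduces the image to a single probability that the input is a real background:

```python
import torch
import torch.nn as nn

# Sketch of the simple discriminator: a few conv/pool/batch-norm/activation
# layers, then a sigmoid probability. Layer sizes are illustrative assumptions.
discriminator = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.MaxPool2d(2), nn.BatchNorm2d(8), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.MaxPool2d(2), nn.BatchNorm2d(16), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),  # probability that the image is a real background
)

x = torch.rand(2, 1, 64, 64)  # batch of candidate background images
p = discriminator(x)          # one probability per image
```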
In this embodiment, training the network means adversarial training of the generator and the discriminator; network performance improves through cyclic parameter updates, as follows:
(a) The parameters of both the generator G and the discriminator D are initialized.
(b) A number of samples are drawn from the training set, and the generator produces the same number of samples from the non-uniform image distribution. The generator G is fixed, and the discriminator D is trained to distinguish real from generated as well as possible.
(c) After every k updates of the discriminator D, the generator G is updated once so that the discriminator becomes as unable as possible to tell real from generated. After many update iterations, ideally the final discriminator D cannot distinguish whether a picture comes from the real training sample set or was generated by the generator G; at this point the discrimination probability is 0.5 and training is complete.
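The schedule in steps (a)-(c) can be sketched as follows with toy stand-in networks; the value of k, the learning rates, and the iteration count are assumptions chosen only to make the loop run:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Linear(4, 4)                                # toy generator: image -> background
D = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())   # toy discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
k = 3  # discriminator updates per generator update (assumed value)

for step in range(20):
    y = torch.rand(8, 4)  # batch of non-uniform images
    x = torch.rand(8, 4)  # batch of real non-uniform backgrounds
    # (b) fix G, update D k times to tell real from generated
    for _ in range(k):
        opt_d.zero_grad()
        loss_d = bce(D(x), torch.ones(8, 1)) + bce(D(G(y).detach()), torch.zeros(8, 1))
        loss_d.backward()
        opt_d.step()
    # (c) one G update so D cannot tell generated backgrounds from real ones
    opt_g.zero_grad()
    loss_g = bce(D(G(y)), torch.ones(8, 1))
    loss_g.backward()
    opt_g.step()

# At the ideal equilibrium D outputs 0.5 for both real and generated samples.
p_fake = float(D(G(torch.rand(8, 4))).mean())
```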
In this embodiment, to train the generator to produce values as close as possible to the ground truth, two losses are designed to enhance generator performance; the two loss functions are as follows.
L_L1(G) = E_{y,x}[ ‖x − G(y)‖₁ ] (3)
L_cGAN(G, D) = E_{y,x}[ log D(y, x) ] + E_y[ log(1 − D(y, G(y))) ] (4)
where y and x are the input non-uniform image and the true non-uniform background image, respectively, and E is the expectation of the corresponding expression. L_L1 is the mean absolute error (MAE) loss, used to evaluate the point-wise regression after network mapping. L_cGAN incorporates the result of the discriminator so that the generator is more accurate in high-frequency details and the generated image is closer to the real image. The final loss function combines the two, L = L_cGAN(G, D) + γ·L_L1(G), where γ is a parameter that adjusts the ratio of the two loss functions.
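The two losses can be written out directly from equations (3) and (4). The discriminator scores and the value of γ below are placeholder assumptions:

```python
import torch

def l1_loss(x, g_y):
    # L_L1(G) = E[ ||x - G(y)||_1 ]  (equation 3): mean absolute error
    return (x - g_y).abs().mean()

def cgan_loss(d_real, d_fake):
    # L_cGAN(G, D) = E[log D(y, x)] + E[log(1 - D(y, G(y)))]  (equation 4)
    return torch.log(d_real).mean() + torch.log(1.0 - d_fake).mean()

x = torch.rand(4, 1, 8, 8)        # real non-uniform background
g_y = torch.rand(4, 1, 8, 8)      # generator output G(y)
d_real = torch.full((4, 1), 0.9)  # placeholder discriminator scores on real pairs
d_fake = torch.full((4, 1), 0.1)  # placeholder scores on generated pairs

gamma = 100.0  # assumed mixing weight; the patent leaves gamma unspecified
total = cgan_loss(d_real, d_fake) + gamma * l1_loss(x, g_y)
```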
In this embodiment, for inference and correction of the non-uniform background image, the trained generator generates the non-uniform background corresponding to the non-uniform image, and a uniform image is then obtained according to formula (2).
The non-uniformity correction method of this embodiment generates the non-uniform background with a trained generative adversarial model. The traditional polynomial method for fitting the non-uniform background has a large computational cost, whereas generating the non-uniform background with the generator outputs it faster and more accurately; the method is highly innovative.
In this embodiment, non-uniform background information is learned: unlike tasks such as image translation, the network learns the non-uniform background from non-uniform images rather than a mapping from non-uniform images to uniform images. This learning mode lets the network learn simpler content and thus obtain more accurate results. Meanwhile, since the non-uniform background is usually low-frequency data in the frequency domain, the image resolution can be appropriately reduced during training and inference to speed up network inference.
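The reduced-resolution inference idea can be sketched as follows: infer the low-frequency background at a lower resolution, up-sample it back, and divide. The down-sampling factor, the nearest-neighbour resizing, and the placeholder background-inference function are all assumptions:

```python
import numpy as np

def downsample(img, factor):
    # Simple strided down-sampling; a real system might use area averaging.
    return img[::factor, ::factor]

def upsample_nearest(img, factor):
    # Nearest-neighbour up-sampling back to the original resolution.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def correct_with_downsampling(non_uniform, infer_background, factor=4):
    small = downsample(non_uniform, factor)              # cheaper network input
    bg = upsample_nearest(infer_background(small), factor)
    return non_uniform / np.maximum(bg, 1e-6)            # formula (2)

# Placeholder "generator" returning a constant background for illustration.
img = np.full((16, 16), 0.3)
out = correct_with_downsampling(img, lambda s: np.full_like(s, 0.5))
```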

Claims (3)

1. A ground-based telescope image non-uniformity correction method based on deep learning, characterized in that the method is realized by the following steps:
step one, constructing a generative adversarial network structure;
the generative adversarial network structure includes a generator and a discriminator; a non-uniform image is input into the generator, and the generator outputs a non-uniform background image;
the non-uniform background image and the real background image are input into the discriminator, which outputs a probability value that the image is a real background;
step two, network training;
performing adversarial training of the generator and the discriminator; when the discrimination probability reaches 0.5, training is complete;
step three, correcting with the non-uniform background image;
generating the non-uniform background image corresponding to the non-uniform image with the trained generator, and obtaining a uniform image according to the following formula:

I = I′ / V

wherein I is the observed image, I′ is the non-uniform image, and V is the non-uniform function.
2. The deep-learning-based ground-based telescope image non-uniformity correction method according to claim 1, characterized in that: the generator is a convolutional neural network consisting of an encoder and a decoder;
the non-uniform image enters an encoder, and the encoder comprises a plurality of convolution layers, a pooling layer and an activation function layer which are respectively used for feature extraction and pixel compression; the decoder comprises a convolution layer, a batch normalization layer and an activation function layer; the up-sampling of the image takes the form of deconvolution so that the network learns more image information.
3. The deep-learning-based ground-based telescope image non-uniformity correction method according to claim 1, characterized in that: the specific process of the network training is as follows:
first, initializing the parameters of both the generator G and the discriminator D;
then, taking non-uniform image / non-uniform background image pairs as the training set, extracting several non-uniform background image samples from the training set, and having the generator produce the same number of samples from the non-uniform image distribution; training the discriminator D to distinguish real from generated as well as possible;
finally, after every k updates of the discriminator D, updating the generator G once; after several update iterations, the discriminator D can no longer distinguish whether the input image is a real training sample or a sample generated by the generator G, and training is finished.
CN202211082290.XA 2022-09-06 2022-09-06 Ground-based telescope image non-uniformity correction method based on deep learning Active CN115457255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211082290.XA CN115457255B (en) 2022-09-06 2022-09-06 Ground-based telescope image non-uniformity correction method based on deep learning


Publications (2)

Publication Number Publication Date
CN115457255A CN115457255A (en) 2022-12-09
CN115457255B true CN115457255B (en) 2024-04-02

Family

ID=84302340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211082290.XA Active CN115457255B (en) 2022-09-06 2022-09-06 Ground-based telescope image non-uniformity correction method based on deep learning

Country Status (1)

Country Link
CN (1) CN115457255B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6496309B1 (en) * 1999-06-18 2002-12-17 Genomic Solutions, Inc. Automated, CCD-based DNA micro-array imaging system
CN113947555A (en) * 2021-09-26 2022-01-18 国网陕西省电力公司西咸新区供电公司 Infrared and visible light fused visual system and method based on deep neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10410327B2 (en) * 2017-06-02 2019-09-10 Apple Inc. Shallow depth of field rendering


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Fast radiometric calibration of ground-based large-aperture infrared optoelectronic equipment; 刘俊池; 李洪文; 王建立; 殷丽梅; Acta Optica Sinica; 2015-03-10 (03); 25-34 *
Research on key technologies of infrared image processing based on convolutional neural networks; 匡小冬; Doctoral Dissertation Database, Information Science and Technology; 2022-01-15 (No. 1); 1-133 *
Research on key technologies of dim space target detection based on deep learning; 郭祥吉; China Doctoral Dissertations Full-text Database, Engineering Science and Technology I; 2023-06-01 (No. 6); 1-123 *
Application of registration-based airborne infrared non-uniformity correction technology; 吕宝林; 佟首峰; 徐伟; 冯钦评; 王德江; Chinese Optics; 2020-10-13 (05); 1124-1137 *
Sub-aperture stitching flat-field calibration method for a large-aperture space survey telescope; 逯诗桐; 张天一; 张晓辉; Chinese Optics; 2020-10-13 (No. 05); 1094-1102 *
Research progress on deep-learning-driven underwater image enhancement and restoration; 丛润民; 张禹墨; 张晨; 李重仪; 赵耀; Journal of Signal Processing; 2020-09-15 (No. 09); 1377-1389 *

Also Published As

Publication number Publication date
CN115457255A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
Niu et al. A conditional adversarial network for change detection in heterogeneous images
Golts et al. Unsupervised single image dehazing using dark channel prior loss
Koziarski et al. Image recognition with deep neural networks in presence of noise–dealing with and taking advantage of distortions
CN107945204B (en) Pixel-level image matting method based on generation countermeasure network
Liu et al. Learning converged propagations with deep prior ensemble for image enhancement
CN109754017B (en) Hyperspectral image classification method based on separable three-dimensional residual error network and transfer learning
Wang et al. Enhancing low light videos by exploring high sensitivity camera noise
Liang et al. GIFM: An image restoration method with generalized image formation model for poor visible conditions
Zhou et al. Infrared image segmentation based on Otsu and genetic algorithm
CN112836820B (en) Deep convolution network training method, device and system for image classification task
Fang et al. Laser stripe image denoising using convolutional autoencoder
Wei et al. Non-homogeneous haze removal via artificial scene prior and bidimensional graph reasoning
Yan et al. Hybrur: A hybrid physical-neural solution for unsupervised underwater image restoration
CN113627240B (en) Unmanned aerial vehicle tree species identification method based on improved SSD learning model
US20220414453A1 (en) Data augmentation using brain emulation neural networks
CN115457255B (en) Foundation telescope image non-uniform correction method based on deep learning
Babu et al. ABF de-hazing algorithm based on deep learning CNN for single I-Haze detection
CN116486150A (en) Uncertainty perception-based regression error reduction method for image classification model
CN111460943A (en) Remote sensing image ground object classification method and system
CN113496486B (en) Kiwi fruit shelf life rapid discrimination method based on hyperspectral imaging technology
CN115439669A (en) Feature point detection network based on deep learning and cross-resolution image matching method
Geng et al. Robust core tensor dictionary learning with modified gaussian mixture model for multispectral image restoration
CN110751144B (en) Canopy plant hyperspectral image classification method based on sparse representation
CN113627480A (en) Polarized SAR image classification method based on reinforcement learning
CN114882346A (en) Underwater robot target autonomous identification method based on vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant