CN113158487B - Wavefront phase difference detection method based on long-short term memory depth network - Google Patents
- Publication number
- CN113158487B (application CN202110501935.8A)
- Authority
- CN
- China
- Prior art keywords
- training
- focus
- image
- lstm
- psf
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention discloses a wavefront phase difference detection method based on a long-short-term memory (LSTM) deep network, comprising the following steps: inputting the characteristic parameters of an actual optical system and generating a training data set; generating paired focal-plane PSF images i_1(x, y) and defocused PSF images i_2(x, y) from the training data set according to Fourier-optics principles; extracting the feature vectors of the paired focal-plane PSF image i_1(x, y) and defocused PSF image i_2(x, y) as input data; extracting feature vectors from a PSF image sequence collected from the actual optical system; inputting the PSF image sequence, in time order t, into the trained convolutional neural network model to determine the wavefront distortion phase, thereby obtaining a series of distortion phase parameters for training; and generating input and output data for LSTM deep-network training, initializing the network parameters and repeatedly training the LSTM deep network until the loss function converges.
Description
Technical Field
The invention relates to the field of network model training, and in particular to a wavefront phase difference detection method based on a long-short-term memory (LSTM) deep network.
Background
The advent of adaptive optics has allowed large-aperture telescopes to largely overcome random atmospheric disturbance of the wavefront, and wavefront phase-difference detection is one of its key technologies. The phase diversity (PD) method collects wavefront-distorted images with two cameras, one at the focal plane and one at a defocused plane of the optical system, and uses digital image processing to compute the wavefront phase difference in real time under a randomly extended target, so as to reconstruct a high-definition target image.
Classified by the reference they require, detectors with wavefront phase detection capability fall into three types: (1) those with active illumination inside the measuring device, such as laser-based measurement; (2) those requiring a point light source as reference, such as Hartmann sensors and shearing interferometers; (3) those independent of point sources, which can perform wavefront detection on extended targets, such as correlation Hartmann sensors and PD-based wavefront sensors.
PD technology has the advantages of a simple optical structure, low cost and strong practicality, but its large iterative computation load has restricted the development of PD applications. In recent years, PD techniques using convolutional neural networks (CNN) have emerged, demonstrating that deep networks can bypass the analytical model and extend the design space of wavefront sensors. However, CNN-based PD still trades accuracy against computation, and only recovers the image deviation caused by wavefront phase distortion under special conditions. The influence of the time-varying atmospheric environment parameters on dynamic wavefront estimation is not considered, so the result is inconsistent with the instantaneous atmospheric disturbance within the telescope's field of view.
Disclosure of Invention
In view of the problems in the prior art, the invention discloses a wavefront phase difference detection method based on a long-short-term memory deep network, which specifically comprises the following steps:
inputting the wavelength, aperture size, focal length, detector pixel size, defocus distance and pupil shape parameters of an actual optical system; inputting or randomly generating the distortion phase parameters of the optical system; and fusing the system parameters with the distortion phase parameters to obtain a group of raw data;
calculating a paired focal-plane PSF image i_1(x, y) and defocused PSF image i_2(x, y) from each group of raw data according to Fourier-optics principles, repeating until a training data set is generated;
extracting the feature vectors of the paired focal-plane PSF image i_1(x, y) and defocused PSF image i_2(x, y) as input data, using the corresponding distortion phase parameters as output data, and repeatedly training the convolutional neural network model until the loss function converges;
extracting a feature vector of a PSF image sequence collected from an actual optical system;
inputting the PSF image sequence, in time order t, into the trained convolutional neural network model to determine the wavefront distortion phase, obtaining a series of distortion phase parameters for training;
generating input and output data for LSTM deep-network training, initializing the network parameters and repeatedly training the LSTM deep network until the loss function converges;
extracting feature vectors from the PSF image sequence collected from the actual optical system for input into the trained LSTM model;
inputting the actual optical system parameters and the obtained feature vectors into the trained LSTM model to obtain the predicted distortion phase parameters;
from the focal-plane image i_{t1}(x, y) and defocused image i_{t2}(x, y) of the actual optical system and the predicted distortion phase parameter φ_t, the object o(x, y) is reconstructed according to:
i_{tk}(x, y) = o(x, y) * h_{tk}(x, y)
where the PSF is h_{tk}(x, y) = |F^{-1}{p·exp[j(φ_t + θ_k)]}|²,
in which p is the pupil distribution, F^{-1} denotes the inverse Fourier transform, φ_t is the distortion phase at a given moment, and θ_k is an introduced defocus phase difference of known magnitude.
Further, when the convolutional neural network model is trained:
S31: preprocessing the defocused PSF image i_2(x, y) with smoothing denoising, intensity regularization and sub-pixel conversion to obtain a new defocused PSF image i_2′(x, y);
S32: the discrete orthogonal Chebyshev polynomials {t_n(x)} of the defocused PSF image satisfy a recursion formula; according to this recursion, the discrete orthogonal Chebyshev moments of the paired image i_1(x, y) and of the new defocused PSF image i_2′(x, y) are computed respectively and used as the feature vector, where p = 1 or p = 2;
S33: using the extracted feature vectors as columns of the input matrix and the true distortion phase parameters y as columns of the output matrix, generating the input and output data for convolutional neural network training;
S34: initializing the number of network layers n of the convolutional neural network, the number of input/output samples m, and the weights w and biases b of each layer;
S35: performing forward propagation of the convolutional neural network for the current input/output sample: the input feature vector is processed by a convolution layer, reduced in dimension by a pooling layer, and integrated by a fully connected layer, finally yielding the predicted value of the distortion phase parameter sequence;
S36: performing back propagation of the convolutional neural network: the function characterizing the error between the true and predicted values is called the loss function; the error δ_l of the last layer l is computed from the loss, the weights w_{l-1} and biases b_{l-1} of layer l-1 are deduced backward from layer l, and so on until the weights and biases of the second layer are updated;
S37: selecting the next input/output sample and returning to S35, repeatedly executing to keep reducing the loss value until the samples are exhausted, which completes the training process of the convolutional neural network.
Further, when the LSTM deep network is trained:
S61: assuming the intensity distribution at the image plane is i = o ∗ s; to establish an accurate nonlinear mapping between the wavefront phase aberration s and the focal-plane image i, the formula is converted to the frequency domain as I = O·S, where I, O and S are the Fourier transforms of i, o and s respectively; the feature image F_0 of the in-focus and defocused image pair is then computed from:
P_1(ψ) = p·exp{jψ}
P_2(ψ) = p·exp{j(ψ + Δψ)}
where subscripts 1 and 2 indicate that the variables correspond to the images at the two focal planes, in-focus and defocused respectively; p is the pupil distribution vector, F (F^{-1}) denotes the (inverse) Fourier transform, ψ is the distortion phase vector at a given moment, and Δψ is an introduced defocus phase-difference vector of known magnitude;
S62: decomposing the extracted feature image into a sequence of image patches, using the sequence as columns of the input matrix and the corresponding true distortion phase parameters y as columns of the output matrix, generating the input and output data for LSTM network training;
S63: initializing the number of LSTM network layers n, the number of input/output samples m, and the weights w and biases b of each layer;
S64: performing forward propagation of the LSTM for the current input/output sample: the three gates of the LSTM are the forget gate f_t = σ(w_f·[h_{t-1}, x_t] + b_f), the input gate i_t = σ(w_i·[h_{t-1}, x_t] + b_i) and the output gate o_t = σ(w_o·[h_{t-1}, x_t] + b_o); in addition there are the candidate state a_t = tanh(w_a·[h_{t-1}, x_t] + b_a) and the internal state C_t = f_t·C_{t-1} + i_t·a_t, where h_t = o_t·tanh(C_t) and h_t gives the predicted distortion phase parameter sequence;
S65: performing back propagation of the LSTM: the function characterizing the error between the true and predicted values is called the loss function; the error δ_l of the last layer l is computed from the loss, the weights w_{l-1} and biases b_{l-1} of o_t, a_t, i_t and f_t at layer l-1 are deduced backward from layer l, and so on until the weights and biases of the second layer are updated;
S66: selecting the next input/output sample and returning to S64, repeatedly executing to keep reducing the loss value until the samples are exhausted, which completes the LSTM deep-network training process.
By adopting the above technical scheme, the wavefront phase difference detection method based on the long-short-term memory deep network can establish a mathematical model of wavefront distortion under a randomly extended target for an optical system, generate atmospheric disturbances of different intensities for simulation of the optical system, estimate the image phase difference in real time, and separate out the influence of atmospheric disturbance. The proposed algorithm is simple, efficient, fast and fully automatic, and can meet practical requirements such as wavefront distortion correction of an optical system with a known target.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a convolutional neural network model design of the present invention;
FIG. 3 is a diagram of the LSTM network model design in the present invention;
FIG. 4 is a schematic diagram of the h_t calculation process in training the LSTM network model of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the drawings in the embodiments of the present invention:
In the implementation of the wavefront phase difference detection method based on the long-short-term memory deep network shown in FIG. 1, the optical system uses the focal point of an interferometer as a point light source; through a beam splitter, the beams form PSF images on a mirror surface and on a detector respectively. The interferometer introduces phase differences into the optical system by slightly tilting or translating the lens, and can directly measure the distorted phase differences of the optical system. The specific steps of the disclosed method are as follows:
S1: inputting the optical system parameters: wavelength 0.6328 μm, aperture size 8.5 mm, focal length 180 mm, detector pixel size 5.5 μm, defocus distance 2 mm, pupil diameter 8.5 mm; inputting 1000 groups of distortion phase parameters measured by an interferometer; within the ranges C4 ∈ [-0.5, 0.5], C5 ∈ [-0.7, 0.7], C6 ∈ [-0.7, 0.7], C7 ∈ [-0.3, 0.3], C8 ∈ [-0.3, 0.3] and C9 ∈ [-0.1, 0.1], randomly and repeatedly generating 50000 groups of distortion phase parameters C4 to C9; fusing the system parameters with the distortion phase parameters to obtain 51000 groups of raw data;
S2: generating a paired focal-plane PSF image i_1(x, y) and defocused PSF image i_2(x, y) from each group of raw data under the above system parameters according to Fourier-optics principles, obtaining a training data set consisting of 51000 pairs of PSF images;
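Step S2 can be illustrated with a short simulation. The sketch below generates an in-focus/defocused PSF pair from a single distortion phase via the relation h = |F^{-1}{p·exp[j(φ + θ)]}|² given later in the text; the helper names, the grid size, and the use of a single Zernike defocus mode (instead of the full C4 to C9 set) are our own simplifications, and the physical scaling of the 8.5 mm pupil and 2 mm defocus distance is omitted.

```python
import numpy as np

def circular_pupil(n):
    """Binary circular pupil on an n x n grid (illustrative stand-in for
    the physical 8.5 mm pupil; absolute scaling is omitted)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    return ((x**2 + y**2) <= 1.0).astype(float)

def defocus_term(n):
    """Zernike defocus term sqrt(3)*(2*rho^2 - 1) on the same grid, used
    both as one distortion mode and as the known defocus offset theta."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    return np.sqrt(3.0) * (2.0 * (x**2 + y**2) - 1.0)

def psf_pair(phase, pupil, defocus_rad=2.0):
    """Paired in-focus / defocused PSFs i1, i2 from one distortion phase,
    following h = |F^-1{ p exp[j(phi + theta)] }|^2."""
    z4 = defocus_term(pupil.shape[0])
    def psf(extra):
        field = pupil * np.exp(1j * (phase + extra))
        h = np.abs(np.fft.fftshift(np.fft.ifft2(field))) ** 2
        return h / h.sum()                      # unit total energy
    return psf(0.0), psf(defocus_rad * z4)

pupil = circular_pupil(64)
phi = 0.3 * defocus_term(64) * pupil            # a toy distortion phase
i1, i2 = psf_pair(phi, pupil)
```

In a full data-generation loop, `psf_pair` would be called once per random coefficient group to build the 51000 image pairs.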
S3: as shown in FIG. 2, extracting the feature vectors of the paired focal-plane PSF image i_1(x, y) and defocused PSF image i_2(x, y) as input data, using the corresponding distortion phase parameters as output data, and repeatedly training the convolutional neural network model until the loss function converges;
S4: extracting feature vectors from the PSF image sequence collected from the optical system;
S5: inputting the PSF image sequence, in time order t, into the trained convolutional neural network model to determine the wavefront distortion phase, obtaining a series of distortion phase parameters for training;
S6: generating input and output data for LSTM deep-network training, initializing the network parameters and repeatedly training the LSTM deep network until the loss function converges;
S7: extracting feature vectors from the PSF image sequence collected from the optical system for input into the trained LSTM model;
S8: inputting the optical system parameters and the obtained feature vectors into the trained LSTM model to obtain the predicted distortion phase parameters;
S9: from the focal-plane image i_{t1}(x, y) and defocused image i_{t2}(x, y) of the optical system and the predicted distortion phase parameter φ_t, reconstructing the object o(x, y) according to:
i_{tk}(x, y) = o(x, y) * h_{tk}(x, y)
where the PSF is h_{tk}(x, y) = |F^{-1}{p·exp[j(φ_t + θ_k)]}|²,
in which p is the pupil distribution, F^{-1} denotes the inverse Fourier transform, φ_t is the distortion phase at a given moment, and θ_k is an introduced defocus phase difference of known magnitude.
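Step S9 inverts i_{tk}(x, y) = o(x, y) * h_{tk}(x, y) for the object. One standard way to do this, sketched here under our own naming, is frequency-domain least squares over the available frames, O = Σ_k conj(H_k)·I_k / (Σ_k |H_k|² + ε); the regularization constant ε is an assumption, not specified by the patent.

```python
import numpy as np

def reconstruct_object(images, psfs, eps=1e-3):
    """Least-squares multi-frame deconvolution of i_tk = o * h_tk in the
    frequency domain. eps keeps the division stable at frequencies where
    all PSFs carry little power (an assumed regularization choice)."""
    num = sum(np.conj(np.fft.fft2(h)) * np.fft.fft2(i)
              for i, h in zip(images, psfs))
    den = sum(np.abs(np.fft.fft2(h)) ** 2 for h in psfs) + eps
    return np.real(np.fft.ifft2(num / den))
```

With the in-focus and defocused frames and their predicted PSFs, `reconstruct_object([i_t1, i_t2], [h_t1, h_t2])` gives the object estimate.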
Further, as shown in fig. 3 and 4, the feature extraction, the convolutional network design and the training in S3 specifically adopt the following modes:
S31: preprocessing the defocused PSF image i_2(x, y) with smoothing denoising, intensity regularization and sub-pixel conversion to obtain a new defocused PSF image i_2′(x, y);
S32: respectively calculating, according to the recursion formula, the discrete orthogonal Chebyshev moments of the paired image i_1(x, y) and of the new defocused PSF image i_2′(x, y), and using them as the feature vector, where p = 1 or p = 2.
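The recursion formula itself is rendered as an image in the source and is not reproduced here, but discrete orthonormal Chebyshev (Tchebichef) polynomials can be constructed equivalently, up to normalization, by Gram-Schmidt orthogonalization of monomials over the pixel grid. A sketch with assumed helper names:

```python
import numpy as np

def chebyshev_basis(n_pts, n_max):
    """Discrete orthonormal Chebyshev polynomials t_0..t_{n_max} sampled
    on x = 0..n_pts-1, built by Gram-Schmidt on monomials (equivalent,
    up to normalization, to evaluating the three-term recursion)."""
    x = np.arange(n_pts, dtype=float)
    basis = []
    for n in range(n_max + 1):
        v = x ** n
        for b in basis:                 # remove components along lower orders
            v = v - (v @ b) * b
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)              # shape (n_max+1, n_pts)

def chebyshev_moments(img, n_max):
    """Moment matrix T[p, q] = sum_{x,y} t_p(x) t_q(y) img[x, y]."""
    tx = chebyshev_basis(img.shape[0], n_max)
    ty = chebyshev_basis(img.shape[1], n_max)
    return tx @ img @ ty.T
```

For a constant image only the (0, 0) moment is non-zero, since every higher-order polynomial is orthogonal to the constant one.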
S33: using the extracted feature vectors as columns of the input matrix and the true distortion phase parameters y as columns of the output matrix, generating the input and output data for convolutional neural network (CNN) training;
S34: initializing the number of network layers n of the CNN, the number of input/output samples m, and the weights w and biases b of each layer;
S35: performing forward propagation of the CNN for the current input/output sample: the input feature vector is processed by a convolution layer, reduced in dimension by a pooling layer, and integrated by a fully connected layer, finally yielding the predicted value of the distortion phase parameter sequence;
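The S35 forward pass (convolution, pooling, fully connected integration) can be sketched as follows; the single 3x3 kernel, ReLU activation, 2x2 mean pooling and six-output head (one value per coefficient C4 to C9) are illustrative assumptions rather than the patent's actual layer configuration.

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D cross-correlation, standing in for the conv layer."""
    oh, ow = x.shape[0] - k.shape[0] + 1, x.shape[1] - k.shape[1] + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(x[r:r + k.shape[0], c:c + k.shape[1]] * k)
    return out

def cnn_forward(x, kernel, w_fc, b_fc):
    """Conv -> ReLU -> 2x2 mean pool -> fully connected head."""
    a = np.maximum(conv2d_valid(x, kernel), 0.0)          # conv + ReLU
    ph, pw = a.shape[0] // 2, a.shape[1] // 2
    pooled = a[:ph * 2, :pw * 2].reshape(ph, 2, pw, 2).mean(axis=(1, 3))
    return w_fc @ pooled.ravel() + b_fc    # predicted phase parameters

rng = np.random.default_rng(0)
y_hat = cnn_forward(rng.normal(size=(8, 8)),   # a toy feature "image"
                    rng.normal(size=(3, 3)),
                    rng.normal(size=(6, 9)), np.zeros(6))
```

Here an 8x8 input gives a 6x6 conv map, a 3x3 pooled map, and six predicted coefficients.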
S36: performing back propagation of the CNN: the function characterizing the error between the true and predicted values is called the loss function; the error δ_l of the last layer l is computed from the loss, the weights w_{l-1} and biases b_{l-1} of layer l-1 are deduced backward from layer l, and so on until the weights and biases of the second layer are updated;
S37: selecting the next input/output sample and returning to S35, repeatedly executing to keep reducing the loss value until the samples are exhausted, which completes the CNN training process.
Further, in S6, for a given phase-difference parameter range, C4 ∈ [-0.5, 0.5], C5 ∈ [-0.7, 0.7], C6 ∈ [-0.7, 0.7], C7 ∈ [-0.3, 0.3], C8 ∈ [-0.3, 0.3] and C9 ∈ [-0.1, 0.1], 50000 phase differences are randomly generated, and the corresponding 50000 sets of in-focus and defocused PSF images can be computed under the given parameter set. For each set of PSF images, the feature image is extracted and then decomposed into a sequence of image patches. The generated phase-difference parameters and the image patch sequences form the output and input data sets respectively, which can then be used to train the LSTM deep network: each set of optical system parameters and extracted feature vectors serves as a column of the input matrix, the corresponding series of distortion phase parameters as a column of the output matrix, and the process is repeated following the time order of the PSF image sequence to generate the input and output data for LSTM deep-network training. The specific LSTM network design and training process comprises the following steps:
S61: assuming the intensity distribution at the image plane is i = o ∗ s; to establish an accurate nonlinear mapping between the wavefront phase difference s and the focal-plane image i, the formula is converted to the frequency domain as I = O·S, where I, O and S are the Fourier transforms of i, o and s respectively; the feature image F_0 of the in-focus and defocused image pair is then computed from:
P_1(ψ) = p·exp{jψ}
P_2(ψ) = p·exp{j(ψ + Δψ)}
where subscripts 1 and 2 indicate that the variables correspond to the images at the two focal planes, in-focus and defocused respectively; p is the pupil distribution vector, F (F^{-1}) denotes the (inverse) Fourier transform, ψ is the distortion phase vector at a given moment, and Δψ is an introduced defocus phase-difference vector of known magnitude;
S62: decomposing the extracted feature image into a sequence of image patches, using the sequence as columns of the input matrix and the corresponding true distortion phase parameters y as columns of the output matrix, generating the input and output data for LSTM network training;
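The patch decomposition of S62 can be sketched as a raster scan of non-overlapping patches, one flattened patch per LSTM time step; the patch size is an assumption:

```python
import numpy as np

def to_patch_sequence(feature_img, patch=8):
    """Decompose a feature image into a raster-ordered sequence of
    non-overlapping flattened patches (one LSTM time step per patch)."""
    h, w = feature_img.shape
    return np.array([feature_img[r:r + patch, c:c + patch].ravel()
                     for r in range(0, h - patch + 1, patch)
                     for c in range(0, w - patch + 1, patch)])

seq = to_patch_sequence(np.arange(256.0).reshape(16, 16), patch=8)
```

A 16x16 feature image with 8x8 patches yields a sequence of four 64-element vectors.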
S63: initializing the number of LSTM network layers n, the number of input/output samples m, and the weights w and biases b of each layer;
S64: performing forward propagation of the LSTM for the current input/output sample: the three gates of the LSTM are the forget gate f_t = σ(w_f·[h_{t-1}, x_t] + b_f), the input gate i_t = σ(w_i·[h_{t-1}, x_t] + b_i) and the output gate o_t = σ(w_o·[h_{t-1}, x_t] + b_o); in addition there are the candidate state a_t = tanh(w_a·[h_{t-1}, x_t] + b_a) and the internal state C_t = f_t·C_{t-1} + i_t·a_t, where h_t = o_t·tanh(C_t) and h_t gives the predicted distortion phase parameter sequence;
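The gate equations of S64 translate directly into code. In the sketch below the four weight matrices w_f, w_i, w_o, w_a are stacked into a single matrix `w` acting on [h_{t-1}, x_t], which is an implementation convenience rather than part of the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w, b):
    """One LSTM time step; w has shape (4*hdim, hdim+xdim), stacking the
    forget, input, output and candidate weight matrices."""
    hdim = h_prev.size
    z = w @ np.concatenate([h_prev, x_t]) + b        # gate pre-activations
    f_t = sigmoid(z[0:hdim])                         # forget gate
    i_t = sigmoid(z[hdim:2 * hdim])                  # input gate
    o_t = sigmoid(z[2 * hdim:3 * hdim])              # output gate
    a_t = np.tanh(z[3 * hdim:4 * hdim])              # candidate state
    c_t = f_t * c_prev + i_t * a_t                   # internal state C_t
    h_t = o_t * np.tanh(c_t)                         # hidden output h_t
    return h_t, c_t
```

Iterating `lstm_step` over the patch sequence yields the h_t from which the distortion phase parameters are predicted.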
S65: performing back propagation of the LSTM: the function characterizing the error between the true and predicted values is called the loss function; the error δ_l of the last layer l is computed from the loss, the weights w_{l-1} and biases b_{l-1} of o_t, a_t, i_t and f_t at layer l-1 are deduced backward from layer l, and so on until the weights and biases of the second layer are updated;
S66: selecting the next input/output sample and returning to S64, repeatedly executing to keep reducing the loss value until the samples are exhausted, which completes the LSTM training process.
The above description covers only a preferred embodiment of the present invention, but the scope of the invention is not limited thereto; any equivalent substitution or change that a person skilled in the art can readily conceive, based on the technical solutions and inventive concept of the present invention and within the technical scope disclosed herein, shall fall within the scope of the invention.
Claims (3)
1. A wavefront phase difference detection method based on a long-short term memory depth network is characterized by comprising the following steps:
inputting the wavelength, aperture size, focal length, detector pixel size, defocus distance and pupil shape parameters of an actual optical system; inputting or randomly generating the distortion phase parameters of the optical system; and fusing the system parameters with the distortion phase parameters to obtain a group of raw data;
calculating a paired focal-plane PSF image i_1(x, y) and defocused PSF image i_2(x, y) from each group of raw data according to Fourier-optics principles, repeating until a training data set is generated;
extracting the feature vectors of the paired focal-plane PSF image i_1(x, y) and defocused PSF image i_2(x, y) as input data, using the corresponding distortion phase parameters as output data, and repeatedly training the convolutional neural network model until the loss function converges;
assuming the intensity distribution at the image plane is i = o ∗ s; to establish an accurate nonlinear mapping between the wavefront phase aberration s and the focal-plane image i, the formula is converted to the frequency domain as I = O·S, where I, O and S are the Fourier transforms of i, o and s respectively; the feature image F_0 of the in-focus and defocused image pair is then computed from:
P_1(ψ) = p·exp{jψ}
P_2(ψ) = p·exp{j(ψ + Δψ)}
where subscripts 1 and 2 indicate that the variables correspond to the images at the two focal planes, in-focus and defocused respectively; p is the pupil distribution vector, F (F^{-1}) denotes the (inverse) Fourier transform, Ψ is the distortion phase vector at a given moment, and ΔΨ is an introduced defocus aberration vector of known magnitude;
extracting a feature vector of a PSF image sequence collected from an actual optical system;
decomposing the extracted characteristic image into an image patch sequence, taking the sequence as an input matrix column, correspondingly taking the distorted phase parameters obtained by the prediction of the convolutional neural network model as an output matrix column, and generating input data and output data for LSTM neural network training;
inputting the PSF image sequence, in time order t, into the trained convolutional neural network model to determine the wavefront distortion phase, obtaining a series of distortion phase parameters for training the LSTM; initializing the network parameters and repeatedly training the LSTM deep network until the loss function converges;
extracting feature vectors from the PSF image sequence collected from the actual optical system for input into the trained LSTM model;
inputting the actual optical system parameters and the obtained feature vectors into the trained LSTM model to obtain the predicted distortion phase parameters;
from the focal-plane image i_{t1}(x, y) and defocused image i_{t2}(x, y) of the actual optical system and the predicted distortion phase parameter φ_t, the object o(x, y) is reconstructed according to:
i_{tk}(x, y) = o(x, y) * h_{tk}(x, y)
where the PSF is h_{tk}(x, y) = |F^{-1}{p·exp[j(φ_t + θ_k)]}|²,
in which p is the pupil distribution, F^{-1} denotes the inverse Fourier transform, φ_t is the distortion phase at a given moment, and θ_k is an introduced defocus phase difference of known magnitude.
2. The method of claim 1, wherein: when the convolutional neural network model is trained:
S31: preprocessing the defocused PSF image i_2(x, y) with smoothing denoising, intensity regularization and sub-pixel conversion to obtain a new defocused PSF image i_2′(x, y);
S32: the discrete orthogonal Chebyshev polynomials {t_n(x)} of the defocused PSF image satisfy a recursion formula; according to this recursion, the discrete orthogonal Chebyshev moments of the paired image i_1(x, y) and of the new defocused PSF image i_2′(x, y) are computed respectively and used as the feature vector, where p = 1 or p = 2;
S33: using the feature vectors of the paired focal-plane PSF image i_1(x, y) and defocused PSF image i_2(x, y) as input data of the convolutional neural network model, and the corresponding distortion phase parameters as output data, to train the convolutional neural network model;
S34: initializing the number of network layers n of the convolutional neural network, the number of input/output samples m, and the weights w and biases b of each layer;
S35: performing forward propagation of the convolutional neural network for the current input/output sample: the input feature vector is processed by a convolution layer, reduced in dimension by a pooling layer, and integrated by a fully connected layer, finally yielding the predicted value of the distortion phase parameter sequence;
S36: performing back propagation of the convolutional neural network: the function characterizing the error between the true and predicted values is called the loss function; the error δ_l of the last layer l is computed from the loss, the weights w_{l-1} and biases b_{l-1} of layer l-1 are deduced backward from layer l, and so on until the weights and biases of the second layer are updated;
S37: selecting the next input/output sample and returning to S35, repeatedly executing to keep reducing the loss value until the samples are exhausted, which completes the training process of the convolutional neural network.
3. The method of claim 1, wherein: after generating input data and output data for LSTM neural network training;
then initializing the number n of LSTM network layers, the number m of input and output samples, the weight w of each layer and the bias b;
performing forward propagation of the LSTM for the current input/output sample: the three gates of the LSTM are the forget gate f_t = σ(w_f·[h_{t-1}, x_t] + b_f), the input gate i_t = σ(w_i·[h_{t-1}, x_t] + b_i) and the output gate o_t = σ(w_o·[h_{t-1}, x_t] + b_o); in addition there are the candidate state a_t = tanh(w_a·[h_{t-1}, x_t] + b_a) and the internal state C_t = f_t·C_{t-1} + i_t·a_t, where h_t = o_t·tanh(C_t) and h_t gives the predicted distortion phase parameter sequence;
performing back propagation of the LSTM: the function characterizing the error between the true and predicted values is called the loss function; the error δ_l of the last layer l is computed from the loss, the weights w_{l-1} and biases b_{l-1} of o_t, a_t, i_t and f_t at layer l-1 are deduced backward from layer l, and so on until the weights and biases of the second layer are updated;
and selecting the next input/output sample and returning to the forward propagation step, repeatedly executing to keep reducing the loss value until the samples are exhausted, which completes the LSTM deep-network training process.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110501935.8A CN113158487B (en) | 2021-05-08 | 2021-05-08 | Wavefront phase difference detection method based on long-short term memory depth network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113158487A CN113158487A (en) | 2021-07-23 |
CN113158487B true CN113158487B (en) | 2022-04-12 |
Family
ID=76873848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110501935.8A Active CN113158487B (en) | 2021-05-08 | 2021-05-08 | Wavefront phase difference detection method based on long-short term memory depth network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113158487B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114129175A (en) * | 2021-11-19 | 2022-03-04 | 江苏科技大学 | LSTM and BP based motor imagery electroencephalogram signal classification method |
CN114004342B (en) * | 2021-11-29 | 2023-05-30 | 中国科学院光电技术研究所 | Laser communication system distortion wavefront prediction method based on LSTM network |
CN115641376B (en) * | 2022-10-17 | 2023-07-21 | 中国科学院长春光学精密机械与物理研究所 | Telescope on-orbit pose offset detection method, device, equipment and medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6787747B2 (en) * | 2002-09-24 | 2004-09-07 | Lockheed Martin Corporation | Fast phase diversity wavefront correction using a neural network |
CN111968099B (en) * | 2020-08-24 | 2023-01-24 | 中国科学院长春光学精密机械与物理研究所 | Large-caliber splicing telescope common-phase method, device, equipment and storage medium |
CN112179504A (en) * | 2020-09-27 | 2021-01-05 | 中国科学院光电技术研究所 | Single-frame focal plane light intensity image depth learning phase difference method based on grating modulation |
- 2021-05-08: application CN202110501935.8A filed; granted as patent CN113158487B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN113158487A (en) | 2021-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113158487B (en) | Wavefront phase difference detection method based on long-short term memory depth network | |
Ma et al. | Numerical study of adaptive optics compensation based on convolutional neural networks | |
CN106845024B (en) | Optical satellite in-orbit imaging simulation method based on wavefront inversion | |
CN111579097B (en) | High-precision optical scattering compensation method based on neural network | |
CN112880986B (en) | Spliced telescope translation error detection method based on convolutional neural network | |
CN111221123A (en) | Wavefront-sensor-free self-adaptive optical correction method based on model | |
EP2555161A1 (en) | Method and device for calculating a depth map from a single image | |
Suárez Gómez et al. | Improving adaptive optics reconstructions with a deep learning approach | |
Ma et al. | Piston sensing for sparse aperture systems with broadband extended objects via a single convolutional neural network | |
KR102501402B1 (en) | Method for determining the complex amplitude of the electromagnetic field associated with a scene | |
CN111103120B (en) | Optical fiber mode decomposition method based on deep learning and readable medium | |
Pinilla et al. | Unfolding-aided bootstrapped phase retrieval in optical imaging: Explainable AI reveals new imaging frontiers | |
CN113298700A (en) | High-resolution image reconstruction method in scattering scene | |
WO2023144519A1 (en) | Determining optical aberration | |
CN112484968B (en) | Method, system, computing device and storage medium for optical metrology | |
CN115524018A (en) | Solving method and system for phase difference wavefront detection | |
Weddell et al. | Reservoir computing for prediction of the spatially-variant point spread function | |
KR20110089973A (en) | Wavefront aberration retrieval method by 3d beam measurement | |
Allan et al. | Deep neural networks to improve the dynamic range of Zernike phase-contrast wavefront sensing in high-contrast imaging systems | |
Yu et al. | Microscopy image reconstruction method based on convolution network feature fusion | |
Hu et al. | Hybrid method for accurate phase retrieval based on higher order transport of intensity equation and multiplane iteration | |
CN116704070B (en) | Method and system for reconstructing jointly optimized image | |
Hashimoto et al. | Numerical estimation method for misalignment of optical systems using machine learning | |
Cheng et al. | Dual-camera phase retrieval based on fast adaption image restoration and transport of intensity equation | |
Taghinia et al. | Surrogate model-based wavefront sensorless adaptive optics system for correcting atmospheric distorted images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||