CN116929570A - Compressed wavefront detection method based on deep learning - Google Patents

Compressed wavefront detection method based on deep learning

Info

Publication number
CN116929570A
Authority
CN
China
Prior art keywords
wavefront
slope
layer
compressed
dnncws
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310922074.XA
Other languages
Chinese (zh)
Inventor
胡立发
华晟骁
姜律
杨燕燕
冯佳濠
王红燕
张琪
胡鸣
徐星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University
Priority to CN202310922074.XA
Publication of CN116929570A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01J: MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 9/00: Measuring optical phase difference; Determining degree of coherence; Measuring optical wavelength
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/048: Activation functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01J: MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 9/00: Measuring optical phase difference; Determining degree of coherence; Measuring optical wavelength
    • G01J 2009/002: Wavefront phase distribution


Abstract

The application discloses a compressed wavefront detection method based on deep learning, belonging to the field of adaptive optics. To rapidly recover sparse slopes, a 9-layer neural network structure is designed: the first through sixth layers form a two-path structure that processes the input x-direction and y-direction slope distributions in parallel, and the seventh through ninth layers merge the two paths and output the predicted wavefront slopes. The network can restore a sparsified wavefront slope to the original slope with high accuracy in a short time, enabling high-precision wavefront reconstruction. The network is trained on 30000 sets of wavefront and slope data covering different compression ratios. Once the optimal model is obtained, any sparse wavefront slope can be recovered with high accuracy, with good noise robustness.

Description

Compressed wavefront detection method based on deep learning
Technical Field
The application relates to a compressed wavefront detection method based on deep learning, and belongs to the field of adaptive optics.
Background
When astronomical observations are made with a ground-based telescope, atmospheric turbulence introduces dynamic errors into the optical system and degrades imaging quality. Adaptive optics is a very effective means of compensating for the wavefront distortion created by atmospheric turbulence [Lifa Hu, Li Xuan, Yongjun Liu, Zhaoliang Cao, Dayu Li, and Quanquan Mu. Phase-only liquid-crystal spatial light modulator for wave-front correction with high precision. Opt. Express, 2004, 12:6403-6409]: the disturbance is measured and corrected in real time, overcoming the dynamic perturbation and improving image resolution.
The wavefront sensor is one of the key components of an adaptive optics system; the most common is the Shack-Hartmann wavefront sensor (SHWFS), whose structure mainly comprises a microlens array and a camera, and which reconstructs the atmospheric-turbulence wavefront from slopes calculated from the spot centroids. When such a sensor is used in the adaptive optics system of a ground-based optical telescope, atmospheric turbulence causes the intensity of individual spots in a detected spot image to fluctuate; some spots have a poor signal-to-noise ratio, which increases the error of the wavefront reconstruction. At the same time, measuring atmospheric turbulence demands both high spatial resolution and high measurement speed from the SHWFS. The number of microlenses, however, involves a trade-off: on the one hand, the more microlenses in the array, the finer the wavefront segmentation and the higher the measurement accuracy; on the other hand, for the same incident energy, more microlenses means less energy per microlens, a reduced ability to detect dim targets, and a lower signal-to-noise ratio, which increases the measurement error. The traditional Shack-Hartmann wavefront detection method cannot resolve this contradiction, which motivates introducing compressed sensing into the field of wavefront detection.
Compressed sensing, also known as compressive sampling or sparse sampling, is used to acquire and reconstruct sparse or compressible signals. By exploiting the sparsity of a signal, it can recover the original signal from far fewer measurements than the Nyquist theory requires. In 2014, James Polans et al. first applied compressed sensing to wavefront measurement; their proposed algorithm uses Zernike polynomials to sparsify the slopes [Polans J, McNabb R P, Izatt J A, et al. Compressed wavefront sensing. Optics Letters, 2014, 39(5):1189-1192], with the advantage of requiring fewer microlenses, which helps increase wavefront detection speed without degrading the ability to detect dim targets. Gregory A. Howland et al. proposed a compressive single-pixel-camera wavefront sensor for dim-signal measurement that applies random binary patterns on a high-resolution spatial light modulator and recovers a high-quality 256×256-pixel wavefront from 10000 projections [Howland G A, Lum D J, Howell J C. Compressive wavefront sensing with weak values. Optics Express, 2014, 22(16):18870]. In 2018, Eddy Chow Mun Tik used compressed wavefront sensing to measure freeform surface contours [Eddy Mun Tik Chow, Ningqun Guo, Edwin Chong, and Xin Wang. Surface measurement using compressed wavefront sensing. Photonic Sensors, 2019, 9(2):02115]. In 2022, Ke et al. used compressed wavefront sensing for wavefront calibration experiments [Ke X, Wu J, Hao J. Distorted wavefront reconstruction based on compressed sensing. Applied Physics B, 2022, 128:107]. In these works, the classical sparsification and reconstruction methods significantly increase the wavefront reconstruction error at small compression ratios.
Moreover, the target wavefronts in those works are mainly low-order wavefronts, in which higher-order aberrations account for a relatively small proportion. In recent years, deep learning has been widely studied for wavefront reconstruction, where the wavefront is reconstructed directly from the spot image rather than actually detected by compression; such approaches depend heavily on hardware, and the computation time is typically tens of milliseconds, so they cannot currently be used for on-sky atmospheric turbulence correction.
In previously reported sparse-slope reconstruction methods, slopes with small values are usually set to zero, which significantly increases the wavefront reconstruction error at small compression ratios; accordingly, those methods mainly reconstruct and verify simple wavefronts composed of low-order Zernike modes and cannot measure the complex wavefront distortion caused by atmospheric turbulence. Because turbulence-induced aberrations are very complex, the slope values span a wide range, and small slope values also contribute to the high-frequency components of a complex wavefront; setting them directly to zero loses part of the high-frequency information. If the accuracy of slope recovery is low, the accuracy of wavefront reconstruction is correspondingly low, so existing compressed wavefront detection algorithms are difficult to apply to adaptive optics systems on large-aperture optical telescopes.
In the deep neural network proposed by the present application, the input and output data are slopes rather than spot or wavefront images with a large number of grid points; even for an SHWFS with 30×30 microlenses, the number of slope values does not exceed 1800. The designed deep neural network can therefore balance fast recovery speed against high wavefront reconstruction accuracy, improving the accuracy of the compressed wavefront detection algorithm by recovering the slopes with high precision.
Disclosure of Invention
To solve these problems, the application provides a compressed wavefront detection method based on deep learning. First, during wavefront detection with an SHWFS, a sparse spot array is obtained and the wavefront slopes are calculated; second, a deep neural network recovers the complete slope data from the measured sparse slopes; finally, the recovered slopes are used for wavefront reconstruction. The proposed deep neural network takes the sparse slopes as input, and the output slopes are close to the original slopes.
A compressed wavefront detection method based on deep learning, the method comprising:
step 1, constructing a compressed wavefront sensing network DNNCWS based on a deep neural network;
step 2, generating simulated spot array images and corresponding wavefront Zernike coefficients as training data, so as to train the constructed deep-neural-network-based compressed wavefront sensing network DNNCWS;
step 3, acquiring the spot array image corresponding to the wavefront to be reconstructed, and performing slope recovery with the trained deep-neural-network-based compressed wavefront sensing network DNNCWS;
and 4, reconstructing the wavefront according to the recovered slope.
Optionally, the deep-neural-network-based compressed wavefront sensing network DNNCWS has a 9-layer structure comprising:
a first layer, a Conv1 layer, with a 3×3 convolution kernel, stride 1, padding 1, normalized with Batch Normalization;
second through sixth layers, BasicLayers, i.e., the residual modules used in ResNet, each containing two conv layers with 3×3 kernels, stride 1, padding 1, using the ReLU function as the activation function and applying normalization;
seventh and eighth layers, DoubleConv layers, each composed of two conv layers with 3×3 kernels, stride 1, padding 1, normalized with Batch Normalization and using the ReLU function as the activation function;
a ninth layer, an OutConv layer, with a 1×1 convolution kernel, containing neither Batch Normalization nor the ReLU activation function.
Optionally, the step 2 further includes:
calculating slopes from the spot array image, sparsifying the slopes to simulate the sparse acquisition of spots, and taking the spot array image, the sparsified slopes, and the corresponding wavefront Zernike coefficients as training data.
Optionally, when training the constructed deep-neural-network-based compressed wavefront sensing network DNNCWS with the training data, multiple sets of slopes are used for training and testing to obtain the optimal network model.
Optionally, the sparsifying of the slopes in step 2 includes:
sparsifying the slope signals S_x and S_y:
S_x = Ψθ_x, S_y = Ψθ_y, (5)
where θ_x ∈ R^(M×N) and θ_y ∈ R^(M×N) are the sparse representations (i.e., sparsified matrices) of S_x and S_y; a discrete cosine transform (DCT) matrix is used as the sparsifying matrix, whose number of non-zero elements is much smaller than N^2, where N is the number of elements of the slope arrays θ_x and θ_y;
designing an observation matrix Φ of size M×N and observing S_x and S_y to obtain the corresponding slope observations S′_x and S′_y.
Optionally, the observation matrix is a gaussian matrix.
Optionally, in step 4 the final measured wavefront is reconstructed using a modal method or a zonal method.
The application also provides application of the method in the astronomical observation field.
The application has the beneficial effects that:
the depth neural network provided by the application is based on a depth residual error network (ResNet), and a reasonable network structure is used for obtaining a high-precision reconstruction result under the condition of not increasing calculation time, so that the problem of high-frequency component loss in the traditional compression detection is solved; and in the deep neural network proposed by the present application, the input and output data are slopes, not the spot and wavefront image with a large grid number, and the number of data of the slopes is not more than 1800 even for SHWFS with 30×30 microlenses. Therefore, the depth neural network designed by the application can balance the rapid recovery speed and the high wavefront reconstruction precision, and the wavefront reconstruction precision in the compressed wavefront detection algorithm is improved by recovering the slope with high precision.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a deep neural network designed for the requirement of rapid recovery of sparse slope according to the method of the present application.
FIG. 2 is a graph of a comparison simulation of the wavefront restoration effect, wherein (a) is the x-direction slope of the original wavefront, (b) is the y-direction slope of the original wavefront, (c) is the x-direction slope of the GCWS restoration, (d) is the y-direction slope of the GCWS restoration, (e) is the error of the x-direction slope of the GCWS restoration, (f) is the error of the y-direction slope of the GCWS restoration, (g) is the x-direction slope of the DNNCWS restoration of the present application, (h) is the y-direction slope of the DNNCWS restoration of the present application, (i) is the x-direction slope error of the DNNCWS restoration of the present application, and (j) is the y-direction slope error of the DNNCWS restoration of the present application.
Fig. 3 is a diagram of a wavefront simulation of slope reconstruction based on DNNCWS algorithm recovery, where (a) is the original wavefront diagram, (b) is the slope reconstructed wavefront recovered by DNNCWS algorithm, and (c) is the residual wavefront.
FIG. 4 is a diagram of a wavefront simulation of a slope reconstruction based on GCWS algorithm restoration, where (a) is the original wavefront diagram, (b) is the reconstructed wavefront of the slope restored by GCWS algorithm, and (c) is the residual wavefront.
FIG. 5a is a graph showing the PV value of the wavefront error of 30 sets of random data versus simulation results;
FIG. 5b is a graph of RMS values versus simulation results for wavefront errors for 30 sets of random data.
FIG. 6a is a graph showing the PV value of wavefront residual versus simulation results for different compression ratios;
FIG. 6b is a graph showing the RMS value of wavefront residuals at different compression ratios versus simulation results.
FIG. 7 is a graph of comparison simulation results for different algorithm reconstruction accuracy, wherein (a) is an original wavefront, (b) is a wavefront reconstructed from a slope recovered by the method of the present application using the DNNCWS algorithm, (c) is a residual wavefront of a wavefront reconstructed from a slope recovered by the method of the present application using the DNNCWS algorithm, (d) is a wavefront reconstructed from a slope recovered by the existing GCWS algorithm, and (e) is a residual wavefront of a wavefront reconstructed from a slope recovered by the existing GCWS algorithm.
FIG. 8a is a graph of residual wavefront PV versus simulation results for 30 sets of slopes;
FIG. 8b is a graph of residual wavefront RMS versus simulation results for 30 sets of slopes;
FIG. 9a is a schematic diagram of comparison simulation results for the residual wavefront PV of recovered wavefronts at different stellar magnitudes;
FIG. 9b is a schematic diagram of comparison simulation results for the residual wavefront RMS of recovered wavefronts at different stellar magnitudes.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Embodiment one:
This embodiment provides a compressed wavefront detection method based on deep learning. First, a spot array is generated by simulating the wavefront detection process; second, slopes are calculated from the spot array; third, the slopes are sparsified to simulate the sparse acquisition of spots (in the compressed detection algorithm the slopes satisfy sparsity, so slope recovery can be completed by a nonlinear reconstruction algorithm); fourth, the sparse slopes are recovered; finally, the wavefront is reconstructed from the slopes by a conventional wavefront reconstruction algorithm.
In practical astronomical observation and similar applications, some spots in the spot array obtained by the Shack-Hartmann wavefront detector have a poor signal-to-noise ratio and their slopes cannot be calculated, so only partial, i.e., sparse, slope data is available, and a wavefront recovered from it has a large error. Existing methods calculate the centroid offsets by the center-of-gravity method, compute all slopes from them, and finally reconstruct the measured wavefront by a modal or zonal method. The present method does not need to compute centroid offsets and all slopes directly: it designs a deep-neural-network-based compressed wavefront sensing network DNNCWS, generates training data by simulating the wavefront detection process, uses the trained DNNCWS to recover all slopes from the sparse slopes in practical applications, and then performs wavefront reconstruction with an existing wavefront reconstruction method.
An SHWFS reconstructs the phase information of the wavefront from measurements of the wavefront slope. In an SHWFS, regularly arranged sub-lenses form a microlens array, and the wavefront forms a spot array after passing through the microlenses. The average local tilt of the wavefront over a single sub-aperture corresponds to the spot focused on the camera, and the tilt can be measured by comparing the centroid positions of the spots obtained for the reference wavefront and the distorted wavefront. The centroid is therefore usually calculated by the center-of-gravity method:
x_c = Σ_{i,j} x_j I(i,j) / Σ_{i,j} I(i,j), (1)
y_c = Σ_{i,j} y_i I(i,j) / Σ_{i,j} I(i,j), (2)
where I(i,j) is the intensity of the pixel in row i, column j. The wavefront slopes are then calculated from the spot offsets Δx and Δy. Denoting the atmospheric-turbulence wavefront phase distribution at coordinates (x,y) by φ(x,y), the slope signals of the phase distribution in the x and y directions, S_x and S_y, are defined as
S_x = Δx/f, (3)
S_y = Δy/f, (4)
where f is the focal length of the microlens. Once the slopes are obtained, the final measured wavefront is reconstructed using a modal or zonal method.
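As a concrete illustration, the center-of-gravity and slope calculations above can be sketched in a few lines of numpy; the function names, the reference-centroid argument, and the pixel-size parameter are illustrative choices, not taken from the patent:

```python
import numpy as np

def centroid(spot):
    """Center of gravity (Eqs. 1-2) of a single sub-aperture spot image I(i, j)."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)        # row index i -> y, column index j -> x
    return (xs * spot).sum() / total, (ys * spot).sum() / total

def slope_xy(spot, ref_xy, focal_length, pixel_size):
    """Wavefront slope (Eqs. 3-4): spot displacement over the microlens focal length."""
    cx, cy = centroid(spot)
    dx = (cx - ref_xy[0]) * pixel_size     # offset in physical units
    dy = (cy - ref_xy[1]) * pixel_size
    return dx / focal_length, dy / focal_length   # S_x, S_y
```

In a full SHWFS pipeline this would be applied per sub-aperture, with `ref_xy` taken from a calibration with a flat reference wavefront.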
Under a sparse basis Ψ ∈ R^(M×N), the slope signals S_x and S_y are sparsified:
S_x = Ψθ_x, S_y = Ψθ_y, (5)
where θ_x ∈ R^(M×N) and θ_y ∈ R^(M×N) are the sparse representations (i.e., sparsified matrices) of the slope signals S_x and S_y; here a discrete cosine transform (DCT) matrix is used as the sparsifying matrix, and its number of non-zero elements should be much smaller than N^2.
An observation matrix Φ of size M×N, incoherent with the sparse basis, is designed, and S_x and S_y are observed to obtain the slope observations S′_x and S′_y. The application uses a Gaussian matrix as the observation matrix:
S′_x = ΦS_x, S′_y = ΦS_y, (6)
where S′_x ∈ R^(M×N) and S′_y ∈ R^(M×N).
From equations (5) and (6):
S′_x = ΦΨθ_x = Aθ_x, S′_y = ΦΨθ_y = Aθ_y, (7)
where A = ΦΨ ∈ R^(M×N) is the sensing matrix.
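Equations (5) through (7) can be illustrated with a small numpy sketch; the grid size, the compression ratio, the random seed, and the use of a random stand-in slope map (rather than a simulated turbulence slope) are assumptions for demonstration only:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, used here as the sparsifying basis Psi."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    psi = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    psi[0, :] = np.sqrt(1.0 / n)
    return psi

rng = np.random.default_rng(0)
N = 30                                           # slope grid size (30x30 microlenses)
M = 21                                           # number of observations, M < N (r = 0.7)
Psi = dct_matrix(N)                              # sparse basis Psi
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian observation matrix Phi
S_x = rng.standard_normal((N, N))                # stand-in x-direction slope map
theta_x = Psi.T @ S_x                            # sparse representation: S_x = Psi @ theta_x (Eq. 5)
S_x_obs = Phi @ S_x                              # compressed observation S'_x = Phi @ S_x    (Eq. 6)
A = Phi @ Psi                                    # sensing matrix A = Phi @ Psi               (Eq. 7)
```

Because Psi here is orthonormal, Psi @ theta_x reproduces S_x exactly, and A @ theta_x equals the observation Phi @ S_x, consistent with equation (7).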
Because the number of observations M is much smaller than the signal length N, exact or approximate values of S_x and S_y are solved through an optimization problem in the 1-norm sense:
min ‖Ψ^T S_x‖_1 s.t. A_CS S_x = ΦΨS_x = S′_x, (8)
min ‖Ψ^T S_y‖_1 s.t. A_CS S_y = ΦΨS_y = S′_y. (9)
This is the traditional compressed detection algorithm for slope recovery, referred to here as GCWS (Gaussian-based compressive wavefront sensing); however, this traditional algorithm has a large error when recovering turbulent wavefronts.
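The patent does not state which solver GCWS uses for problems (8) and (9); as a generic stand-in, the sketch below applies ISTA (iterative shrinkage-thresholding) to the equivalent relaxed LASSO form, one common way of solving such l1 problems:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=1e-3, n_iter=500):
    """ISTA for min_theta 0.5*||A @ theta - y||^2 + lam*||theta||_1,
    a relaxed form of the equality-constrained problems (8)-(9)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant of the gradient
    theta = np.zeros((A.shape[1],) + y.shape[1:])
    for _ in range(n_iter):
        grad = A.T @ (A @ theta - y)              # gradient of the quadratic term
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta
```

Given a recovered theta, the slope would then be obtained as S = Psi @ theta with the DCT basis of equation (5).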
The application proposes a deep-neural-network compressed wavefront detection algorithm, DNNCWS (Deep-neural-networks-based compressive wavefront sensing). It is based on a deep residual network (ResNet) and uses a reasonable network structure to obtain high-precision reconstruction results without increasing computation time, solving the problem of high-frequency component loss in traditional compressed detection. The network structure is shown in Fig. 1. The network is trained on 30000 sets of slope data; slopes with different compression ratios are used during training to obtain the optimal model, realizing high-precision recovery from any sparse wavefront slope to the original wavefront slope.
Compared with existing U-Net or ResNet architectures, the structure proposed in this embodiment is very simple, enabling fast data processing. The main difference is that the method abandons the complex network structure typical of deep learning and adapts the residual module to the recovery process of sparse slopes, which both improves the accuracy of slope recovery and reduces the data processing time.
Referring to Fig. 1, a 9-layer neural network structure is designed for fast recovery of sparse slopes.
The first layer is a Conv1 layer with a 3×3 convolution kernel, stride 1, padding 1, normalized with Batch Normalization.
The second through sixth layers are BasicLayers, i.e., the residual modules used in ResNet; each contains two conv layers with 3×3 kernels, stride 1, padding 1, using the ReLU function as the activation function and applying normalization.
The seventh and eighth layers are DoubleConv layers, each composed of two conv layers with 3×3 kernels, stride 1, padding 1, normalized with Batch Normalization and using the ReLU function as the activation function.
The ninth layer is an OutConv layer with a 1×1 convolution kernel, containing neither Batch Normalization nor the ReLU activation function.
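A possible PyTorch rendering of this 9-layer structure is sketched below. The channel width, the point at which the two branches are merged, and the 2-channel output (x- and y-slope maps) are assumptions; the exact wiring is given by Fig. 1 of the patent:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual module as used in ResNet (layers 2-6): two 3x3 convs plus a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=1, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1), nn.BatchNorm2d(ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)

class DoubleConv(nn.Module):
    """Two 3x3 convs with Batch Normalization and ReLU (layers 7-8)."""
    def __init__(self, cin, cout):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(cin, cout, 3, stride=1, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, stride=1, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class DNNCWS(nn.Module):
    """Sketch: two parallel branches for the x- and y-slope maps (layers 1-6),
    merged and processed by layers 7-9."""
    def __init__(self, ch=32):
        super().__init__()
        def branch():  # layer 1 (Conv1 + BN) followed by five BasicBlocks (layers 2-6)
            return nn.Sequential(
                nn.Conv2d(1, ch, 3, stride=1, padding=1), nn.BatchNorm2d(ch),
                *[BasicBlock(ch) for _ in range(5)],
            )
        self.branch_x, self.branch_y = branch(), branch()
        self.layer7 = DoubleConv(2 * ch, ch)
        self.layer8 = DoubleConv(ch, ch)
        self.layer9 = nn.Conv2d(ch, 2, kernel_size=1)  # OutConv: no BN, no ReLU

    def forward(self, sx, sy):
        merged = torch.cat([self.branch_x(sx), self.branch_y(sy)], dim=1)
        return self.layer9(self.layer8(self.layer7(merged)))
```

For a 30×30-microlens SHWFS, `sx` and `sy` would each be a 1-channel 30×30 sparse slope map, and the two output channels would be the recovered x and y slopes.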
For a sparsified wavefront slope, the network can restore the original slope with high accuracy in a short time, enabling high-precision wavefront reconstruction. 30000 sets of wavefront and slope data were generated for training. Slopes with different compression ratios are used during training, and training is considered complete when the difference of the loss function between two adjacent iterations is less than 10^-5; the loss function may be a common one, such as the mean square error. Once the optimal model is obtained, any sparse wavefront slope can be recovered with high accuracy.
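The training procedure with this stopping rule can be sketched as a generic loop; the function name `train_dnncws`, the loader format (pairs of sparse and full slope tensors), and the choice of the Adam optimizer are assumptions not specified in the text:

```python
import torch
import torch.nn as nn

def train_dnncws(net, loader, max_epochs=500, tol=1e-5, lr=1e-3):
    """Train until the mean loss change between two successive epochs falls below tol."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                 # mean square error, as suggested in the text
    prev = float("inf")
    for _ in range(max_epochs):
        total = 0.0
        for sparse_slopes, full_slopes in loader:
            opt.zero_grad()
            loss = loss_fn(net(sparse_slopes), full_slopes)
            loss.backward()
            opt.step()
            total += loss.item()
        mean = total / len(loader)
        if abs(prev - mean) < tol:         # |L_k - L_{k-1}| < 1e-5: converged
            break
        prev = mean
    return net
```

In practice the loader would iterate over the 30000 simulated wavefront/slope pairs, mixing different compression ratios as described above.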
Compared with traditional neural network structures, this network is simpler and faster to compute; by drawing on the deep residual network structure, it avoids problems such as vanishing gradients, exploding gradients, and network degradation, and ensures high-accuracy, fast recovery of the sparse slopes.
Existing compressed detection algorithms usually ignore slopes with small values, which reduces the wavefront reconstruction accuracy at small compression ratios; in addition, different slope recovery algorithms yield different reconstruction accuracy. The present algorithm is based on a deep neural network: an optimal network model is obtained by training and testing multiple sets of slopes, and, combined with the high accuracy of the deep network, the wavefront slopes can be recovered more precisely and the wavefront reconstructed with higher accuracy.
In order to verify the effect of the method of the application, simulation experiments are carried out, and the results are analyzed as follows:
1. wavefront reconstruction accuracy of DNNCWS of the method of the application
For the same compression ratio r=0.7, DNNCWS and GCWS were used to reconstruct the wavefront slope and its wavefront, respectively. Fig. 2 shows the wavefront slope in the x and y directions and its reconstruction errors.
Fig. 2 (a) shows the original slope in the x direction, which ranges from-0.7 rad to 0.8rad, and fig. 2 (b) shows the original slope in the y direction, which ranges from-0.9 rad to 1.1rad.
The slope in the x and y directions of the GCWS restoration is given in FIGS. 2 (c) and (d).
The error in the slope in the x and y directions of GCWS recovery is given in FIGS. 2 (e) and (f), respectively, with a PV of 0.5rad and an RMS of 0.03rad.
The slopes in the x and y directions of DNNCWS recovery are given in fig. 2 (g) and (h), respectively.
Corresponding errors are given in FIGS. 2 (i) and (j), with a PV of 0.04rad and an RMS value of 0.003rad. It can be seen that the accuracy of the inventive method DNNCWS is an order of magnitude higher than conventional compressed wavefront sensing algorithms.
Wavefront reconstruction was performed from the obtained slopes, giving the reconstructed wavefronts shown in Fig. 3 and Fig. 4. On an AMD Ryzen 7 CPU with 8 GB of memory, the time from slope to wavefront reconstruction was 4.4 ms. Fig. 3(a), (b) and (c) show the original wavefront, the DNNCWS-reconstructed wavefront, and the residual wavefront, respectively; Fig. 4(a), (b) and (c) show the original wavefront, the GCWS-reconstructed wavefront, and the residual wavefront. The original wavefront in Fig. 3(a) is the same as in Fig. 4(a), with PV 0.464 μm and RMS 0.0635 μm. The wavefront in Fig. 3(b) has PV 0.460 μm and RMS 0.0633 μm; the wavefront in Fig. 4(b) has PV 0.379 μm and RMS 0.0615 μm. The RMS of the DNNCWS residual wavefront in Fig. 3(c) is 0.0014 μm, far smaller than the 0.006 μm RMS of the GCWS residual in Fig. 4(c). The accuracy obtained by DNNCWS is thus much higher than that of GCWS, because DNNCWS recovers the slopes more accurately and thereby reduces the wavefront reconstruction error.
To check the stability of the algorithm, 30 sets of data were selected for testing. Figs. 5a and 5b show the PV and RMS of the residual wavefronts recovered from 30 sets of random slopes using the two methods (the abscissa is the index of the slope data set, 30 sets in total). Compared with GCWS, DNNCWS shows remarkable stability and high accuracy: the RMS of the DNNCWS residual wavefront is about 0.001 μm, with a small range of variation.
2. Influence of different slope compression ratios on wavefront reconstruction accuracy
Fig. 6a and 6b show a comparison of PV and RMS, respectively, of residual wavefronts recovered by the two algorithms at different compression ratios. The residual wavefront reconstructed based on DNNCWS had a PV range of 0.005 μm to 0.014 μm and an RMS range of 0.0007 μm to 0.002 μm; whereas the residual wavefront PV range based on GCWS reconstruction is 0.07 μm to 0.4 μm, the RMS range is 0.004 μm to 0.066 μm; the result shows that the wavefront measurement accuracy of DNNCWS is far better than that of GCWS. Similar to GCWS, the RMS of the residual wavefront reconstructed by DNNCWS according to the method of the present application decreases from 0.002 μm to 0.0007 μm as the slope compression ratio increases. In all cases, the RMS of the residual wavefront of the DNNCWS method of the application is around 0.001 μm. The method DNNCWS of the application uses the deep neural network to improve the accuracy of the wavefront slope recovered from the sparse wavefront slope, so the reconstructed wavefront effect is better. At the same time, DNNCWS can use fewer microlenses for measurement for the same wavefront measurement accuracy.
3. Accuracy of compressed wavefront detection at low signal-to-noise ratio
For dim targets such as magnitude-11 stars, the signal-to-noise ratio is as low as SNR = 8.11; the compression ratio is r = 0.7. Fig. 7 (a), (b) and (c) show the original wavefront, the wavefront reconstructed from the slopes recovered by the DNNCWS algorithm, and the corresponding residual wavefront; Fig. 7 (d) and (e) show the wavefront reconstructed from the slopes recovered by the GCWS algorithm and its residual wavefront. The original wavefront in Fig. 7 (a) has a PV of 0.448 μm and an RMS of 0.0641 μm; the DNNCWS-recovered wavefront in Fig. 7 (b) has a PV of 0.458 μm and an RMS of 0.0626 μm; the GCWS-recovered wavefront in Fig. 7 (d) has a PV of 0.407 μm and an RMS of 0.0620 μm. The RMS of the DNNCWS residual in Fig. 7 (c) is 0.0009 μm, versus 0.004 μm for the GCWS residual in Fig. 7 (e). The wavefront recovered by the DNNCWS method of the present application is thus closer to the original wavefront than that of GCWS: at low signal-to-noise ratio, the reconstruction accuracy of DNNCWS is clearly higher.
Figs. 8a and 8b show the RMS and PV values, respectively, of the wavefront residuals obtained for 30 sets of random data at low signal-to-noise ratio (the abscissa is the index of the slope data set, 30 sets in total). The PV of the residuals reconstructed by the DNNCWS algorithm lies in the range 0.004 μm to 0.013 μm and the RMS in the range 0.0007 μm to 0.0019 μm, whereas the GCWS residuals have PV values of 0.066 μm to 0.198 μm and RMS values of 0.004 μm to 0.008 μm. DNNCWS clearly has higher accuracy and better stability than GCWS.
Figs. 9a and 9b compare the RMS and PV, respectively, of the residual wavefronts reconstructed at different star magnitudes. The DNNCWS residual wavefront has a PV between 0.005 μm and 0.011 μm and an RMS between 0.0009 μm and 0.0017 μm; by comparison, the GCWS residual wavefront has a PV range of 0.06 μm to 0.11 μm and an RMS range of 0.004 μm to 0.005 μm. The wavefront reconstruction accuracy of DNNCWS is higher, and differences in signal amplitude have less influence on its measurement accuracy.
Some steps in the embodiments of the present application may be implemented by using software, and the corresponding software program may be stored in a readable storage medium, such as an optical disc or a hard disk.
The foregoing description of the preferred embodiments of the application is not intended to limit the application to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the application are intended to be included within the scope of the application.

Claims (8)

1. A compressed wavefront sensing method based on deep learning, the method comprising:
step 1, constructing a compressed wavefront sensing network DNNCWS based on a deep neural network;
step 2, generating spot array images and the corresponding wavefront Zernike coefficients by simulation as training data, so as to train the constructed deep neural network-based compressed wavefront sensing network DNNCWS;
step 3, acquiring the spot array image corresponding to the wavefront to be reconstructed, and performing slope recovery using the trained deep neural network-based compressed wavefront sensing network DNNCWS;
and 4, reconstructing the wavefront according to the recovered slope.
2. The method of claim 1, wherein the deep neural network-based compressed wavefront sensing network DNNCWS is a 9-layer neural network structure, comprising:
the first layer is a conv1 layer with a convolution kernel size of 3×3, stride 1 and padding 1, normalized using Batch Normalization;
the second to sixth layers are BasicBlock layers, i.e., the residual modules used in ResNet; each of them contains two conv layers with a convolution kernel size of 3×3, stride 1 and padding 1, using ReLU as the activation function and normalized;
the seventh and eighth layers are DoubleConv layers, each consisting of two conv layers with a convolution kernel size of 3×3, stride 1 and padding 1, normalized using Batch Normalization and using ReLU as the activation function;
the ninth layer is an OutConv layer with a convolution kernel size of 1×1, without Batch Normalization or the ReLU activation function.
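The 9-layer structure of claim 2 can be sketched in PyTorch. The channel width, the input/output channel counts and the class names below are assumptions for illustration; the claim only fixes the layer types, kernel sizes, strides, padding and normalization:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual module as used in ResNet: two 3x3 convs plus a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(ch))
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(self.body(x) + x)

class DoubleConv(nn.Module):
    """Two 3x3 conv layers, each followed by Batch Normalization and ReLU."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.body(x)

class DNNCWSSketch(nn.Module):
    """Hypothetical rendering of the 9-layer DNNCWS of claim 2.
    in_ch/out_ch=2 (x and y slope maps) and ch=64 are assumptions."""
    def __init__(self, in_ch=2, out_ch=2, ch=64):
        super().__init__()
        # layer 1: conv1, 3x3, stride 1, padding 1, Batch Normalization
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=1, padding=1), nn.BatchNorm2d(ch))
        # layers 2-6: five residual BasicBlocks
        self.res = nn.Sequential(*[BasicBlock(ch) for _ in range(5)])
        # layers 7-8: two DoubleConv layers
        self.dconv = nn.Sequential(DoubleConv(ch), DoubleConv(ch))
        # layer 9: OutConv, 1x1 kernel, no BN and no ReLU
        self.out = nn.Conv2d(ch, out_ch, 1)
    def forward(self, x):
        return self.out(self.dconv(self.res(self.conv1(x))))
```

Because every conv uses stride 1 with matching padding, the output slope map keeps the spatial size of the input spot/slope map.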
3. The method according to claim 2, wherein the step 2 further comprises:
calculating the slope from the spot array image, sparsifying the slope to simulate a sparse acquisition of the spots, and taking the spot array image, the sparsified slope and the corresponding wavefront Zernike coefficients as training data.
4. The method of claim 3, wherein, when training the constructed deep neural network-based compressed wavefront sensing network DNNCWS with the training data, multiple groups of slopes are used for training and testing to obtain an optimal network model.
5. The method of claim 4, wherein the sparsification of the slope in step 2 comprises:
sparsifying the slope signals Sx and Sy:
Sx = Ψθx, Sy = Ψθy, (5)
where θx ∈ R^(M×N) and θy ∈ R^(M×N) are the sparse representations (i.e., sparse matrices) of Sx and Sy; a Discrete Cosine Transform (DCT) matrix is used as the sparsifying matrix Ψ, and the number of non-zero elements of the matrix is much smaller than N², N denoting the number of elements of the slope arrays θx and θy;
designing an observation matrix Φ of size M×N, and observing Sx and Sy to obtain the corresponding slope observations Sx′ and Sy′.
6. The method of claim 5, wherein the observation matrix is a Gaussian matrix.
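Claims 5 and 6 amount to standard compressed sensing of the slope signals with a DCT sparsifying basis and a Gaussian observation matrix. A minimal numpy sketch under stated assumptions: the function names, the compression-ratio parameter `r`, and the 1/√M column scaling of Φ are illustrative, not specified by the claims:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix Psi (n x n); rows are frequencies."""
    k = np.arange(n)
    Psi = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    Psi[0, :] *= np.sqrt(1.0 / n)   # DC row scaling
    Psi[1:, :] *= np.sqrt(2.0 / n)  # AC row scaling
    return Psi

def compress_slopes(S, r, rng=None):
    """Observe a slope array S with a Gaussian matrix Phi of size M x N,
    where M = round(r * N) for compression ratio r; returns (Phi @ S, Phi)."""
    rng = np.random.default_rng(rng)
    N = S.shape[0]
    M = max(1, int(round(r * N)))
    Phi = rng.normal(size=(M, N)) / np.sqrt(M)  # Gaussian observation matrix
    return Phi @ S, Phi
```

With `Psi = dct_matrix(N)` orthonormal, the sparse representation of equation (5) is `theta = Psi.T @ S`, and the observations `Sx' = Phi @ Sx` are what the network later inverts.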
7. The method of claim 6, wherein step 4 reconstructs the finally measured wavefront using a modal method or a zonal method.
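A modal reconstruction as named in claim 7 is commonly a least-squares fit of mode coefficients (e.g. Zernike coefficients) to the recovered slopes; the zonal alternative instead integrates the slopes over sub-aperture zones. A hedged numpy sketch of the modal variant, with a hypothetical mode-response matrix `D`:

```python
import numpy as np

def modal_reconstruct(slopes, D):
    """Modal (least-squares) reconstruction: slopes ≈ D @ a, where D stacks
    the x/y slope responses of each mode over the sub-apertures and a is the
    vector of mode coefficients. Returns the fitted coefficients."""
    a, *_ = np.linalg.lstsq(D, slopes, rcond=None)
    return a

# toy usage: 3 slope measurements, 2 hypothetical modes
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
a_true = np.array([0.3, -0.2])
a = modal_reconstruct(D @ a_true, D)
```

The wavefront is then the coefficient-weighted sum of the mode shapes; in the patent's setting `D` would hold Zernike-derivative responses over the Hartmann sub-apertures.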
8. Use of the method according to any one of claims 1-7 in the field of astronomical observations.
CN202310922074.XA 2023-07-26 2023-07-26 Compressed wavefront detection method based on deep learning Pending CN116929570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310922074.XA CN116929570A (en) 2023-07-26 2023-07-26 Compressed wavefront detection method based on deep learning


Publications (1)

Publication Number Publication Date
CN116929570A true CN116929570A (en) 2023-10-24

Family

ID=88387522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310922074.XA Pending CN116929570A (en) 2023-07-26 2023-07-26 Compressed wavefront detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN116929570A (en)

Similar Documents

Publication Publication Date Title
CN110044498B (en) Hartmann wavefront sensor mode wavefront restoration method based on deep learning
CN105589210B (en) Digital synthetic aperture imaging method based on pupil modulation
CN104168429B (en) A kind of multiple aperture subrane high resolution imaging apparatus and its imaging method
CN110146180B (en) Large-view-field image sharpening device and method based on focal plane Hartmann wavefront sensor
CN111458045A (en) Large-view-field wavefront detection method based on focal plane Hartmann wavefront sensor
CN111626997B (en) Method for directly detecting optical distortion phase by high-speed single image based on deep learning
CN106845024A (en) A kind of in-orbit imaging simulation method of optical satellite based on wavefront inverting
CN105203213B (en) Method for calculating composite wavefront sensing adaptive optical system recovery voltage
CN106546326A (en) The wavefront sensing methods of multinomial pattern in Hartman wavefront detector sub-aperture
CN111579097A (en) High-precision optical scattering compensation method based on neural network
CN114186664B (en) Mode wavefront restoration method based on neural network
CN115031856A (en) Sub-light spot screening-based wavefront restoration method for shack-Hartmann wavefront sensor
CN106482838B (en) Wavefront sensor based on self-adaptive fitting
CN114323310A (en) High-resolution Hartmann wavefront sensor
CN117195960A (en) Atmospheric turbulence wave front detection method based on shack Hartmann UT conversion model
CN108181007A (en) The facula mass center computational methods of Hartman wavefront detector weak signal
CN116929570A (en) Compressed wavefront detection method based on deep learning
CN113405676B (en) Correction method for micro-vibration influence in phase difference wavefront detection of space telescope
CN111998962B (en) Hartmann wavefront sensor based on array type binary phase modulation
CN117451189A (en) Wavefront detection method based on Hartmann detector
CN109413302B (en) Dynamic interference fringe distortion correction method for pixel response frequency domain measurement
Engler Pyramid wavefront sensing in the context of extremely large telescopes
Yatawatta Shapelets and related techniques in radio-astronomical imaging
Norouzi et al. CNN to mitigate atmospheric turbulence effect on Shack-Hartmann Wavefront Sensing: A case study on the Magdalena Ridge Observatory Interferometer.
CN116465503A (en) High-spatial-resolution wavefront restoration method based on sub-aperture curved surface information extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination