CN117451190A - Deep learning defocusing scattering wavefront sensing method - Google Patents


Info

Publication number
CN117451190A
CN117451190A · Application CN202311385568.5A
Authority
CN
China
Prior art keywords
light
light path
scattering
defocused
wavefront sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311385568.5A
Other languages
Chinese (zh)
Inventor
薛晋宇
秦成兵
胡建勇
陈瑞云
张国峰
肖连团
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi University
Original Assignee
Shanxi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi University filed Critical Shanxi University
Priority to CN202311385568.5A priority Critical patent/CN117451190A/en
Publication of CN117451190A publication Critical patent/CN117451190A/en
Pending legal-status Critical Current

Classifications

    • G01J9/00 — Measuring optical phase difference; determining degree of coherence; measuring optical wavelength
    • G01N21/53 — Scattering, i.e. diffuse reflection, within a body or fluid within a flowing fluid, e.g. smoke
    • G01N21/538 — Scattering within a flowing fluid, e.g. smoke, for determining atmospheric attenuation and visibility
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/084 — Backpropagation, e.g. using gradient descent
    • G06N3/0985 — Hyperparameter optimisation; meta-learning; learning-to-learn
    • G06T7/0002 — Inspection of images, e.g. flaw detection
    • G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30168 — Image quality inspection
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a deep-learning defocused-scattering wavefront sensing method, relating to the technical field of wavefront sensing and correction. By using pairs of defocused scattered spot images, the method resolves the multiple-solution problem caused by the ambiguity of Zernike coefficients in a single-frame far-field picture; collecting defocused, scattered, distorted spot images also amplifies the picture's features so that the network model extracts them better. This removes the many iterations that wavefront sensing otherwise requires in an adaptive optics system and improves the real-time performance of the whole system.

Description

Deep learning defocusing scattering wavefront sensing method
Technical Field
The invention relates to the technical field of wavefront sensing and correction, and in particular to a deep-learning defocused-scattering wavefront sensing method.
Background
Atmospheric turbulence introduces aberrations as a laser propagates through the atmosphere, so that the transmitted energy changes from concentrated to dispersed and the resolution of an optical imaging system deteriorates. Adaptive optics enables the whole imaging system to overcome the dynamic disturbances created by the atmosphere, thereby improving imaging capability.
Wavefront sensors in adaptive optics are expensive and slow, and are therefore difficult to apply well in current real-time systems.
A mapping method that derives the corresponding phase information from measured light-intensity distributions is called a phase inversion technique, also known as phase retrieval (PR). The term generally covers all techniques that recover phase from intensity information, though it sometimes refers specifically to recovery from a single intensity pattern. Because the same far field can correspond to multiple wavefronts, such a method easily falls into local extrema and converges with low accuracy. If two intensity patterns are used for phase recovery, one from the focal plane and one from a defocused plane, the method is called phase diversity (PD) inversion. Since the whole method needs only intensity images, phase inversion places far lower demands on optical hardware than other wavefront sensors.
With the continuous development of computing, the strong fitting ability of deep learning has advanced greatly across many fields, and it has been combined to some extent with traditional wavefront sensing and sensor-less methods in adaptive optics. Ma Huimin et al. of Anhui Agricultural University modified AlexNet, simulated focal-plane and defocused-plane images under different atmospheric turbulence parameters, and trained the network on these inputs to output Zernike coefficients up to order 35 (Numerical study of adaptive optics compensation based on convolutional neural networks [J]. Optics Communications, 2019, 433: 283-289). Similarly, in 2020 Wu Yuhe, Guo Youming, and others of the Institute of Optics and Electronics, Chinese Academy of Sciences, modified LeNet-5 into a network they named PD-CNN, compared a single focal-plane image and a single defocused-plane image against simultaneous focal-plane and defocused-plane input, and output Zernike coefficients up to order 13. It was verified that the network model that receives the focal-plane and defocused-plane images simultaneously achieves higher prediction accuracy (Sub-millisecond phase retrieval for phase-diversity wavefront sensor [J]. Sensors, 2020, 20: 4877).
Nishizaki et al. of Osaka University, Japan, proposed estimating Zernike coefficients directly from a single light-intensity image using CNN models that were state of the art at the time, and further proposed preprocessing the intensity image so that the target energy spreads across the image sensor, allowing more pixel units of the sensor to acquire information (see Nishizaki Y, Valdivia M, Horisaki R, et al. Deep learning wavefront sensing [J]. Optics Express, 2019, 27(1): 240-251).
Therefore, the invention proposes a deep-learning-based defocused-scattering wavefront sensing method and designs it into an adaptive optics system.
Disclosure of Invention
The invention aims to provide a deep-learning defocused-scattering wavefront sensing method that implements the wavefront sensing part of adaptive optics with deep learning, uses phase diversity inversion to resolve the multiple wavefronts consistent with one far field, and uses scattering as a preprocessing step to spread the energy of the target laser across the image sensor so that more information-carrying pixels are collected, thereby improving the fitting ability of the neural network.
In order to achieve the above object, the present invention provides a deep-learning defocused-scattering wavefront sensing method comprising the following steps:
step 1: based on Kolmogorov turbulence statistics, Zernike polynomial coefficients consistent with the statistical theory are generated as the distorted wavefronts loaded onto a spatial light modulator that produces the aberrations;
step 2: laser light emitted by a 532 nm laser is expanded by a beam-expanding system and directed onto the spatial light modulator to obtain a distorted spot;
step 3: the beam is split by a first 5:5 beam splitter (BS) into a correction light path and a collection light path;
step 4: the correction light path is split by a second 5:5 BS into a reference light path and a corrected light path;
step 5: the collection light path is split by a third 5:5 BS into a scattering light path and a defocused-scattering light path;
step 6: the corrected light path first strikes a deformable mirror for correction, and the corrected light then passes back through the second 5:5 BS and is imaged by a focusing lens onto a first CMOS camera;
step 7: light in the scattering light path passes through an imaging lens and is imaged onto a second CMOS camera placed at the focal length of the imaging lens, with a diffuser inserted at the midpoint of that focal distance; the second CMOS camera records the required scattered distorted spot;
step 8: light in the defocused-scattering light path passes through the imaging lens and is imaged onto a third CMOS camera whose distance is adjusted away from the focal length of the imaging lens, with a diffuser inserted in between; the third CMOS camera records the required defocused scattered distorted spot.
Preferably, the Zernike polynomial coefficients obtained in step 1, consistent with the statistical theory, serve as the labels in the deep-learning neural network.
Preferably, the deformable mirror in step 6 is controlled by the output of a neural network module on the PC side.
Preferably, before use the neural network module learns from a sufficiently large dataset; the training-set and test-set data consist of image pairs matched to labels, from which the module learns the correct mapping between images and wavefront information. The evaluation index for residual analysis is the root mean square of the residual wavefront.
Preferably, the scattered and defocused-scattered distorted spots from steps 7 and 8 are used together as one image pair, forming the input pictures of the deep-learning neural network module.
Therefore, the defocused-scattering wavefront sensing method with this structure has the following beneficial effects:
(1) The spatial light modulator serves as an aberration-simulation system generating distorted wavefronts with atmospheric turbulence. The resulting distorted beam is captured as distorted spots by the CMOS cameras; from the defocused scattered spot images, the deep-learning neural network carried by the wavefront controller in the computer predicts the distortion information, the prediction is converted into the control voltages of the deformable mirror, and the deformation of the mirror cancels the atmospheric distortion loaded in the spatial light modulator, achieving correction without a dedicated wavefront sensor.
(2) Using pairs of defocused scattered spot images solves the multiple-solution problem caused by the ambiguity of Zernike coefficients in a single-frame far-field picture; collecting defocused, scattered, distorted spots also amplifies the picture's features so that the network model extracts them better, while avoiding the many iterations of sensor-less wavefront sensing in an adaptive optics system and improving the real-time performance of the whole system.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a schematic diagram of an adaptive optics system for a deep learning defocused scattered wavefront sensing method of the present invention;
FIG. 2 is a flow chart of a method of deep learning defocused scattered wavefront sensing according to the present invention;
FIG. 3 is a schematic diagram of a deep learning network model of a deep learning defocused scattered wavefront sensing method according to the present invention;
fig. 4 is a working principle diagram of a network model control algorithm based on deep learning of the defocusing scattering wavefront sensing method of the present invention.
Detailed Description
The technical scheme of the invention is further described below through the attached drawings and the embodiments.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by those of ordinary skill in the art to which this invention belongs. The terms "first", "second", and the like do not denote any order, quantity, or importance, but merely distinguish one element from another. The words "comprising", "comprises", and the like mean that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected" and the like are not limited to physical or mechanical connections but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", etc. indicate only relative positional relationships, which may change when the absolute position of the described object changes.
Examples
Referring to figs. 1-4: as shown in fig. 1, the whole apparatus of the deep-learning defocused-scattering wavefront sensing method, i.e. the adaptive optics system, comprises the following components: a) a 532 nm laser, emitting laser light with a wavelength of 532 nm; b) a beam-expanding system, intercepting the uniform central portion of the laser source as the illumination field; c) a spatial light modulator, generating distorted spots as produced under atmospheric turbulence; d) CMOS cameras, receiving the spots passing through the beam splitters (BS), building the dataset for deep learning, and feeding the network model after training is complete; e) a computer, acting as the controller of the whole adaptive optics system and driving the deformable mirror to correct the distorted spots; f) a deformable mirror, correcting the distorted spots formed by the spatial light modulator.
The specific steps of the deep learning defocused scattering wavefront sensing method are as follows:
step 1: based on the Kolmogorov turbulence statistical theory, zernike polynomial coefficients conforming to the statistical theory are obtained as a distorted wavefront given to the spatial light modulator that produces aberrations. And (3) obtaining the Zernike polynomial coefficient which accords with the statistical theory in the step (1) as a label in the deep learning neural network.
Step 2: after laser emitted by a 532nm laser is subjected to beam expansion by a beam expansion system, the laser is beaten on a spatial light modulator to obtain a distorted light spot.
Step 3: by the first 5: and 5BS beam splitting to form a correction light path and collecting the light path.
Step 4: the correction light path passes through the second 5: and obtaining a reference light path and a corrected light path after 5BS beam splitting.
Step 5: the collection light path passes through a third 5: and (5) splitting the beam by the 5BS to obtain a scattered light path and a defocused scattered light path.
Step 6: the corrected light path firstly irradiates on the deformable mirror to correct imaging, and the corrected light passes through the second 5:5BS beam splitting and imaging to a first CMOS camera through a focusing lens; wherein, the deformable mirror is controlled by the output of the PC end neural network module. Before the neural network module is used, a certain amount of data sets are required to learn, the data of the training set and the test set meet the requirement that the image pair corresponds to the label, the neural network module can learn the correct mapping relation between the image and the wave-front information, the evaluation index of residual analysis is the root mean square value (RMS) of the residual wave-front, and the smaller the root mean square value (RMS), the better the correction effect is proved.
Step 7: the light of the scattered light path is imaged to a second CMOS camera after passing through an imaging lens, the distance of the light is the focal length of the imaging lens, a scattering sheet is added in the midpoint of the focal length, and a required scattered distortion light spot is obtained through the second CMOS camera.
Step 8: the light of the defocused scattering light path is imaged to a third CMOS camera after passing through the imaging lens, the distance of the light is adjusted to be the focal length of the non-imaging lens, a scattering sheet is added in the middle, and the required defocused scattering distortion light spot is obtained through the third CMOS camera.
The scattered and defocused-scattered distorted spots from steps 7 and 8 form one image pair, used as the input pictures of the deep-learning neural network module.
Step 9: if the above steps are achievable, the 11000 sets of Zernike mode coefficients prepared in step 1 serve as the labels in the neural network, corresponding to 11000 distorted wavefronts loaded on the spatial light modulator.
Step 10: the corresponding 11000 pairs of distorted images are captured by the second and third CMOS cameras of steps 7 and 8 as the input images of the dataset.
Step 11: 10000 image pairs are drawn at random as the training set, from which the network learns the direct mapping between far-field distorted image pairs and Zernike mode coefficients; the remaining 1000 pairs form the validation set, used to tune the network's hyperparameters and verify its fitting ability.
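The random 10000/1000 split of step 11 can be sketched as follows (a minimal illustration; the patent does not specify the splitting code, so the seed and index-based approach are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)           # fixed seed for reproducibility
n_total, n_train = 11000, 10000
perm = rng.permutation(n_total)          # shuffle the 11000 image-pair indices
train_idx = perm[:n_train]               # 10000 pairs -> training set
val_idx = perm[n_train:]                 # remaining 1000 pairs -> validation set
```

The same index arrays are applied to both the image-pair tensor and the label matrix so each pair stays matched to its Zernike coefficients.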
Step 12: configure the deep-learning environment and build the convolutional neural network. Fig. 3 shows the architecture of the deep-learning network model. The convolutional neural network has 11 layers in total. The input layer takes 256×256×2 samples, the 2 channels being a scattered spot image and a scattered defocused spot image. The convolution kernels of the 4 convolutional layers are 5×5, 3×3 and 3×3, with channel counts of 32, 64 and 64 respectively. Each convolutional layer is followed by a pooling layer; all pooling layers use max pooling with stride 2. The two fully connected layers have 1024 and 13 nodes respectively, and the output is the Zernike coefficients of orders 3-15. The network uses the Adam optimizer for gradient descent with an initial learning rate of 10⁻³; to avoid overfitting, batch normalization is applied to each convolutional layer. The number of epochs is set to 200 and the batch size to 100.
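The feature-map sizes implied by step 12 can be traced with a short sketch. Since the patent does not state the convolution padding, 'same' padding is assumed here so that only the stride-2 max pooling changes the spatial size; the four-layer channel list (32, 32, 64, 64) is a guessed completion of the three values given in the text.

```python
def cnn_output_shapes(h=256, w=256,
                      conv_channels=(32, 32, 64, 64),
                      pool_stride=2):
    """Trace feature-map sizes through the 4 conv + max-pool stages
    described in step 12, assuming 'same' padding on the convolutions
    (an assumption - the patent does not state the padding)."""
    shapes = []
    for c in conv_channels:
        h //= pool_stride          # each max-pool halves the spatial size
        w //= pool_stride
        shapes.append((h, w, c))
    flat = h * w * conv_channels[-1]   # flattened vector into the dense layers
    return shapes, flat, [1024, 13]

shapes, flat, fc = cnn_output_shapes()
# 4 stride-2 pools take 256 down to 16, so the dense stack sees
# a 16 * 16 * 64 = 16384-element vector before the 1024 and 13 nodes
```

This makes the 11-layer count plausible: 4 conv + 4 pool + 2 fully connected layers, plus the input layer.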
After the network training converges, only the spot images from the CMOS cameras of steps 7 and 8 are used as input; the neural network model outputs 13 neurons corresponding to Zernike mode coefficients of orders 3-15, from which the wavefront phase is obtained. This is compared with the known wavefront information given to the spatial light modulator to distort the incident light: the smaller the root mean square (RMS) of the residual wavefront, the better the correction.
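Converting the 13 output coefficients into a wavefront phase map amounts to summing coefficient-weighted Zernike modes over the pupil. The sketch below uses the standard Noll-normalized polynomials for indices 3-15; the grid size and square-grid pupil sampling are illustrative choices, not values from the patent.

```python
import numpy as np

# Zernike polynomials, Noll indices 3-15 (Noll 1976 normalization),
# matching the 13 network outputs described above.
ZERNIKE = {
    3:  lambda r, t: 2 * r * np.sin(t),
    4:  lambda r, t: np.sqrt(3) * (2 * r**2 - 1),
    5:  lambda r, t: np.sqrt(6) * r**2 * np.sin(2 * t),
    6:  lambda r, t: np.sqrt(6) * r**2 * np.cos(2 * t),
    7:  lambda r, t: np.sqrt(8) * (3 * r**3 - 2 * r) * np.sin(t),
    8:  lambda r, t: np.sqrt(8) * (3 * r**3 - 2 * r) * np.cos(t),
    9:  lambda r, t: np.sqrt(8) * r**3 * np.sin(3 * t),
    10: lambda r, t: np.sqrt(8) * r**3 * np.cos(3 * t),
    11: lambda r, t: np.sqrt(5) * (6 * r**4 - 6 * r**2 + 1),
    12: lambda r, t: np.sqrt(10) * (4 * r**4 - 3 * r**2) * np.cos(2 * t),
    13: lambda r, t: np.sqrt(10) * (4 * r**4 - 3 * r**2) * np.sin(2 * t),
    14: lambda r, t: np.sqrt(10) * r**4 * np.cos(4 * t),
    15: lambda r, t: np.sqrt(10) * r**4 * np.sin(4 * t),
}

def wavefront_from_coeffs(coeffs, npix=256):
    """Sum coefficient-weighted Zernike modes 3-15 over the unit pupil."""
    y, x = np.mgrid[-1:1:npix * 1j, -1:1:npix * 1j]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    pupil = r <= 1.0
    w = np.zeros((npix, npix))
    for c, j in zip(coeffs, range(3, 16)):
        w += c * ZERNIKE[j](r, t)
    return w * pupil
```

The same routine serves both sides of the comparison: once with the network's predicted coefficients and once with the coefficients loaded on the spatial light modulator.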
The root mean square error formula is:
RMS = sqrt( (1/N) · Σⱼ (x_jp − x_jt − x̄)² )
where N is the pixel resolution of the wavefront, i.e. the number of wavefront pixels; x_jp is the predicted value at distorted-wavefront pixel j, obtained from the Zernike mode coefficients predicted by the neural network model; x_jt is the true value at pixel j of the simulated distorted wavefront, modulated by the Zernike mode coefficients given to the spatial light modulator; and x̄ is the average of the residual x_jp − x_jt over all pixels of the distorted wavefront.
The limited dynamic range of CMOS or CCD image sensors is a very common problem in computational imaging, and it is even more pronounced in wavefront sensing: because residual aberration exists, most of the spot energy is concentrated on a few pixels, and the image changes that the aberration produces on the sensor are very small. Deep learning fits through image features, so this sensor limitation would leave very few features for the network to extract. The diffusers in steps 7 and 8 spread the energy of the captured image widely across the detector array; compared with the concentrated image, the features extracted by the deep-learning method become more pronounced, which effectively improves the network model's fitting ability for this problem, reduces the error between the Zernike mode coefficients output by the network model and the known Zernike mode coefficient labels, and thus effectively improves the accuracy of the overall wavefront sensing.
The fitting ability of different convolutional network models is evaluated, and the best model is finally selected as the controller of the whole adaptive optics system; its output Zernike mode coefficients serve as the signal that drives the deformable mirror, so that the distorted spot modulated by the spatial light modulator is corrected by the adaptive optics system. No iterative operation is involved in this process, so the computational cost is greatly reduced.
Therefore, in the defocused-scattering wavefront sensing method with this structure, the spatial light modulator serves as an aberration-simulation system generating distorted wavefronts with atmospheric turbulence; the distorted beam is captured as distorted spots by the CMOS cameras; the defocused scattered spot images are fed to the deep-learning neural network in the computer to predict the distortion information, realizing wavefront sensing; the prediction is converted into the control voltages of the deformable mirror; and the mirror's deformation cancels the atmospheric distortion loaded in the spatial light modulator, achieving correction without a dedicated wavefront sensor.
Finally, it should be noted that the above embodiments only illustrate, and do not limit, the technical solution of the invention. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical solution may be modified or equivalently replaced without departing from its spirit and scope.

Claims (5)

1. A deep-learning defocused-scattering wavefront sensing method, characterized by comprising the following steps:
step 1: based on Kolmogorov turbulence statistics, Zernike polynomial coefficients consistent with the statistical theory are generated as the distorted wavefronts loaded onto a spatial light modulator that produces the aberrations;
step 2: laser light emitted by a 532 nm laser is expanded by a beam-expanding system and directed onto the spatial light modulator to obtain a distorted spot;
step 3: the beam is split by a first 5:5 beam splitter (BS) into a correction light path and a collection light path;
step 4: the correction light path is split by a second 5:5 BS into a reference light path and a corrected light path;
step 5: the collection light path is split by a third 5:5 BS into a scattering light path and a defocused-scattering light path;
step 6: the corrected light path first strikes a deformable mirror for correction, and the corrected light then passes back through the second 5:5 BS and is imaged by a focusing lens onto a first CMOS camera;
step 7: light in the scattering light path passes through an imaging lens and is imaged onto a second CMOS camera placed at the focal length of the imaging lens, with a diffuser inserted at the midpoint of that focal distance; the second CMOS camera records the required scattered distorted spot;
step 8: light in the defocused-scattering light path passes through the imaging lens and is imaged onto a third CMOS camera whose distance is adjusted away from the focal length of the imaging lens, with a diffuser inserted in between; the third CMOS camera records the required defocused scattered distorted spot.
2. The deep-learning defocused-scattering wavefront sensing method according to claim 1, wherein the Zernike polynomial coefficients obtained in step 1, consistent with the statistical theory, serve as labels in the deep-learning neural network.
3. The deep-learning defocused-scattering wavefront sensing method according to claim 2, wherein the deformable mirror in step 6 is controlled by the output of a neural network module on the PC side.
4. The deep-learning defocused-scattering wavefront sensing method according to claim 3, wherein before use the neural network module learns from a sufficiently large dataset; the training-set and test-set data consist of image pairs matched to labels, from which the module learns the correct mapping between images and wavefront information, and the evaluation index for residual analysis is the root mean square of the residual wavefront.
5. The deep-learning defocused-scattering wavefront sensing method according to claim 4, wherein the scattered and defocused-scattered distorted spots of steps 7 and 8 are used together as one image pair, forming the input pictures of the deep-learning neural network module.
CN202311385568.5A 2023-10-24 2023-10-24 Deep learning defocusing scattering wavefront sensing method Pending CN117451190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311385568.5A CN117451190A (en) 2023-10-24 2023-10-24 Deep learning defocusing scattering wavefront sensing method


Publications (1)

Publication Number Publication Date
CN117451190A 2024-01-26

Family

ID=89594102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311385568.5A Pending CN117451190A (en) 2023-10-24 2023-10-24 Deep learning defocusing scattering wavefront sensing method

Country Status (1)

Country Link
CN (1) CN117451190A (en)


Legal Events

Date Code Title Description
PB01 Publication