CN114022730B - Point target phase retrieval method based on self-supervision learning neural network - Google Patents

Point target phase retrieval method based on self-supervision learning neural network

Info

Publication number
CN114022730B
CN114022730B (application CN202111260725.0A; published as CN114022730A)
Authority
CN
China
Prior art keywords
neural network
self-supervision learning
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111260725.0A
Other languages
Chinese (zh)
Other versions
CN114022730A (en)
Inventor
郭弘扬
徐杨杰
唐薇
王子豪
黄永梅
贺东
王强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Optics and Electronics of CAS
Original Assignee
Institute of Optics and Electronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Optics and Electronics of CAS filed Critical Institute of Optics and Electronics of CAS
Priority to CN202111260725.0A priority Critical patent/CN114022730B/en
Publication of CN114022730A publication Critical patent/CN114022730A/en
Application granted granted Critical
Publication of CN114022730B publication Critical patent/CN114022730B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/047: Probabilistic or stochastic networks
    • G06N3/08: Learning methods

Abstract

The invention discloses a point target phase retrieval method based on a self-supervision learning neural network. The system used comprises an imaging detector and a self-supervision learning neural network module. During training, the self-supervision learning neural network maps each input light spot image to Zernike coefficients, the optical imaging system model then computes a light spot image from those coefficients, and the loss function is evaluated from the similarity between the input and output light spot images, so that self-supervised learning is achieved. During testing, a single defocused light spot image acquired by the imaging detector is fed into the self-supervision learning neural network, which outputs the wavefront parameters required by the system according to the learned mapping, thereby realizing phase retrieval. The method does not depend on any label values and establishes the mapping between collected light spot samples solely from the intrinsic characteristics of the distorted light spot data. It effectively avoids the technical bottleneck of traditional supervised learning, which requires a large number of labelled samples, and facilitates practical application of neural network phase retrieval.

Description

Point target phase retrieval method based on self-supervision learning neural network
Technical Field
The invention relates to the field of phase retrieval, and in particular to a point target phase retrieval method based on a self-supervision learning neural network, which achieves efficient detection of point target wavefront aberration by establishing a self-supervision learning neural network model.
Background
A neural network is a nonlinear, self-adaptive information processing system that learns and stores a large number of nonlinear mappings between inputs and outputs. A neural network phase retrieval model uses a light intensity detector to collect distorted light spots and learns the relationship among samples to recover the phase. The approach offers stable detection performance and shows promise under strong phase distortion. However, in practical use, training a traditional supervised learning neural network requires acquiring a large number of labelled data samples, which greatly increases the cost and difficulty of application.
To address this problem, the invention provides a point target phase retrieval method based on a self-supervision learning neural network. An imaging detector collects distorted light spot images at a defocused position and supplies data samples to the self-supervision learning network. The self-supervision learning neural network module is trained to improve the similarity between the input and output light spots. During testing, a single defocused light spot image acquired by the imaging detector is fed into the self-supervision learning neural network, which outputs the wavefront parameters required by the system according to the learned mapping, thereby realizing phase retrieval. The method does not depend on any label values and establishes the relationship among collected light spot samples solely from the intrinsic characteristics of the distorted light spot data. It avoids the technical bottleneck of traditional supervised learning, which requires a large number of labelled samples, and facilitates practical application of neural network phase retrieval.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the point target phase retrieval method based on the self-supervision learning neural network removes the technical bottleneck of traditional supervised learning neural networks, which require a large number of labelled samples, by performing phase retrieval with a self-supervision learning neural network, thereby facilitating practical application of neural network phase retrieval.
The invention adopts the technical scheme that: a point target phase retrieval method based on a self-supervision learning neural network comprises the following steps:
step one, an imaging detector acquires distorted light spot images;
acquiring defocused position distortion facula images according to an imaging detector, and performing self-supervision learning neural network training; the convolutional neural network module uses the classical network mobiletv 1, concrete structure:
1) MobileNet V1 begins with a 3x3 standard convolution followed by stacked depthwise separable convolutions, some of which downsample with stride=2; global average pooling then reduces the feature map to 1x1, and a fully connected layer sized to the number of predicted outputs follows;
2) Since this is not a classification problem, the final softmax layer is removed. The core component is the depthwise separable convolution, which reduces the computational complexity of the model and greatly shrinks its size;
3) A depthwise separable convolution is a factorized convolution that can be decomposed into two smaller operations: a depthwise convolution and a pointwise convolution. The depthwise convolution applies a separate convolution kernel to each input channel, and the pointwise convolution uses a 1x1 kernel; the input channels are first convolved independently by the depthwise convolution, and their outputs are then combined by the pointwise convolution (a minimal code sketch of this block is given after this list).
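The encoding network described above can be written down concretely. The following is a minimal sketch in PyTorch; the framework, layer widths, abbreviated block configuration and number of output Zernike coefficients are assumptions for illustration, not values fixed by the patent:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """One MobileNet V1 block: 3x3 depthwise conv followed by 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 kernel per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # Pointwise: 1x1 convolution mixes the channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

class MobileNetV1Encoder(nn.Module):
    """Encoding part: standard 3x3 conv, stacked depthwise separable convs,
    global average pooling, and a fully connected layer that outputs Zernike
    coefficients (the softmax of the original classifier is removed)."""
    def __init__(self, num_zernike=20):  # number of coefficients is an assumption
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        # Abbreviated (in_ch, out_ch, stride) configuration; stride=2 blocks downsample
        cfg = [(32, 64, 1), (64, 128, 2), (128, 128, 1),
               (128, 256, 2), (256, 256, 1), (256, 512, 2)]
        self.blocks = nn.Sequential(*[DepthwiseSeparableConv(i, o, s) for i, o, s in cfg])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_zernike)

    def forward(self, spot):  # spot: (B, 1, H, W) defocused intensity image
        x = self.blocks(self.stem(spot))
        return self.fc(self.pool(x).flatten(1))  # predicted Zernike coefficients
```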
step three, after learning and training on a large amount of light spot data, defocused distorted light spot images detected by the imaging system are fed to the self-supervision learning neural network, which outputs the wavefront parameters required by the system, realizing phase retrieval.
Further, the imaging detector is responsible for collecting distorted light spot images at the defocused position and provides training and test data for the self-supervision learning network.
Further, self-supervision learning neural network training is performed on the light spot images acquired by the imaging detector:
1) Build the self-supervision learning neural network structure, comprising network encoding, a hidden layer vector and network decoding;
2) The encoding part of the self-supervision learning neural network comprises several convolution layers, pooling layers and a fully connected layer;
3) The decoding part of the self-supervision learning neural network is designed according to the imaging principle of the optical system, so that it outputs the light spot distribution;
4) Train the self-supervision learning neural network: the data samples of the input light spot images are mapped to the hidden layer vector, i.e. the Zernike coefficients, and the hidden layer vector is then mapped back to a light spot image through the optical imaging system model. The loss function is computed from the similarity between the input and output light spot images, for example the pixel-wise mean squared error between the two light spot images, finally yielding the self-supervision learning neural network for point target phase retrieval (a sketch of one training step is given after this list).
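Step 4) amounts to a single self-supervised update per batch. Below is a minimal sketch under stated assumptions: the encoder and the differentiable imaging model are placeholders standing in for the encoding and decoding parts described above, and the pixel-wise mean squared error is used as the example loss:

```python
import torch
import torch.nn.functional as F

def train_step(encoder, imaging_model, spot_batch, optimizer):
    """One self-supervised update: spot -> Zernike coefficients -> re-rendered spot -> loss.

    encoder       : encoding part (e.g. the MobileNetV1Encoder sketched above)
    imaging_model : decoding part built from the optical imaging principle,
                    assumed to map Zernike coefficients to a spot image and to
                    be differentiable so gradients reach the encoder
    spot_batch    : (B, 1, H, W) measured defocused spot images (no labels needed)
    """
    optimizer.zero_grad()
    zernike = encoder(spot_batch)            # hidden-layer vector = Zernike coefficients
    rendered = imaging_model(zernike)        # spot image computed from the coefficients
    loss = F.mse_loss(rendered, spot_batch)  # similarity between input and output spots
    loss.backward()
    optimizer.step()
    return loss.item()
```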
Further, during testing, a single defocused light spot image acquired by the imaging detector is fed into the self-supervision learning neural network, which outputs the wavefront parameters required by the system according to the learned mapping, realizing phase retrieval.
Compared with the prior art, the method has the following advantages:
(1) Compared with a traditional wavefront sensor, the invention retrieves the phase using a neural network combined with an imaging sensor; the optical path is simple and the method is convenient to apply.
(2) Compared with existing neural network phase retrieval methods, this method does not depend on any label values and establishes the relationship among collected light spot samples solely from the intrinsic characteristics of the distorted light spot data. It avoids the technical bottleneck of traditional supervised learning, which requires a large number of labelled samples, and facilitates practical application of neural network phase retrieval.
(3) The invention has simple structure and easy realization.
Drawings
FIG. 1 is a flow chart of a point target phase retrieval method based on a self-supervised learning neural network of the present invention;
FIG. 2 is a block diagram of a self-supervised learning neural network model of the present invention;
FIG. 3 is a flow chart of the self-supervised learning neural network training process of the present invention.
Detailed Description
The following describes the embodiments of the present invention in detail with reference to the drawings.
As shown in FIG. 1, the point target phase retrieval method based on the self-supervision learning neural network comprises the following steps:
step one, an imaging detector acquires distorted light spot images;
the system utilized comprises: an imaging detector and a self-supervision learning neural network module. The imaging detector is responsible for collecting distorted facula images of the defocusing position and providing training and testing data for the self-supervision learning network.
The generalized pupil function of an optical system can be described as:
P(u,v) = O(u,v)·exp(2πiφ(u,v))  (1)
where O(u,v) is the aperture function and φ(u,v) is the aberration to be measured, typically expressed as a weighted combination of Zernike polynomials.
The point spread function of the optical system is:
psf(u,v) = |FFT(P(u,v))|²  (2)
the defocus camera is typically placed around 10mm from the focal point and a specific defocus distance needs to be known.
step two, self-supervision learning neural network training is performed on the defocused distorted light spot images acquired by the imaging detector;
the self-supervision learning neural network module mainly comprises two parts, namely network training and network testing, and specifically comprises the steps that in the training process, the self-supervision learning neural network maps data samples of an input facula image to a hidden layer vector (namely a Zernike coefficient), then the hidden layer vector is calculated into a facula image through an optical imaging system, and loss function calculation is carried out according to the similarity between the input facula image and the output facula image, so that self-supervision learning is realized.
The similarity loss function is the correlation coefficient, which can be described as:

r = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / sqrt( [Σ_m Σ_n (A_mn − Ā)²] · [Σ_m Σ_n (B_mn − B̄)²] )  (3)

where A_mn and B_mn are the pixel values of the two spot images, Ā and B̄ are the mean values of A_mn and B_mn respectively, and m and n index the pixels in the lateral and longitudinal directions.
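For reference, a short sketch of this similarity measure (equation (3)) applied to two spot images, assuming real-valued NumPy arrays of the same shape:

```python
import numpy as np

def correlation_coefficient(A, B):
    """2-D correlation coefficient between spot images A and B, eq. (3)."""
    A = A - A.mean()
    B = B - B.mean()
    return (A * B).sum() / np.sqrt((A**2).sum() * (B**2).sum())
```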
step three, after learning and training on a large amount of light spot data, defocused distorted light spot images detected by the imaging system are fed to the self-supervision learning neural network, which outputs the wavefront parameters required by the system, realizing phase retrieval.
During testing, a single defocused light spot image acquired by the imaging detector is fed into the self-supervision learning neural network, which outputs the wavefront parameters required by the system according to the learned mapping, realizing phase retrieval.
The correlation coefficient on the test set is 97%; the phase root mean square of the test set is 0.6284λ, the residual phase root mean square after self-supervised detection and correction is 0.0648λ, and the residual phase root mean square after detection and correction by a supervised network under the same conditions is 0.0447λ.
The imaging detector is responsible for collecting distorted light spot images at the defocused position and provides training and test data for the self-supervision learning network.
As shown in FIG. 2, self-supervision learning neural network training is performed on the light spot images acquired by the imaging detector:
1) Build the self-supervision learning neural network structure, comprising network encoding, a hidden layer vector and network decoding;
2) The encoding part of the self-supervision learning neural network comprises several convolution layers, pooling layers and a fully connected layer;
3) The decoding part of the self-supervision learning neural network is designed according to the imaging principle of the optical system, so that it outputs the light spot distribution;
4) Train the self-supervision learning neural network: the data samples of the input light spot images are mapped to the hidden layer vector, i.e. the Zernike coefficients, and the hidden layer vector is then mapped back to a light spot image through the optical imaging system model. The loss function is computed from the similarity between the input and output light spot images, for example the pixel-wise mean squared error between the two light spot images, finally yielding the self-supervision learning neural network for point target phase retrieval.
During testing, a single defocused light spot image acquired by the imaging detector is fed into the self-supervision learning neural network, which outputs the wavefront parameters required by the system according to the learned mapping, realizing phase retrieval.
FIG. 3 shows the training flow of the self-supervision learning neural network. The convolutional neural network is first trained: a large number of distorted images are fed into the self-supervision learning neural network, and self-supervised learning is performed according to the similarity between the input light spot samples and the output light spot images. During testing, a distorted image acquired by the actual system is fed into the trained self-supervision learning neural network, which outputs the wavefront parameters required by the system according to the learned mapping, realizing phase retrieval.

Claims (3)

1. A point target phase retrieval method based on a self-supervision learning neural network, characterized by comprising the following steps:
step one, an imaging detector acquires distorted light spot images;
step two, self-supervision learning neural network training is performed on the defocused distorted light spot images acquired by the imaging detector; the self-supervision learning neural network comprises a convolutional neural network module and an imaging model module, wherein the convolutional neural network module uses the classic MobileNet V1 network and the imaging model module realizes the physical imaging from phase to light spot;
step three, after learning and training on a large amount of light spot data, defocused distorted light spot images detected by the imaging system are obtained, the convolutional neural network module of the self-supervision learning neural network outputs the wavefront parameters required by the system, and phase retrieval is realized;
according to the light spot images acquired by the imaging detector, self-supervision learning neural network training is performed:
1) Build the self-supervision learning neural network structure, comprising network encoding, a hidden layer vector and network decoding;
2) The encoding part of the self-supervision learning neural network comprises several convolution layers, pooling layers and a fully connected layer;
3) The decoding part of the self-supervision learning neural network is designed according to the imaging principle of the optical system, so that it outputs the light spot distribution;
4) Train the self-supervision learning neural network: the data samples of the input light spot images are mapped to the hidden layer vector, namely the Zernike coefficients, and the hidden layer vector is then mapped back to a light spot image through the optical imaging system; the loss function is computed from the similarity between the input and output light spot images, taking the pixel-wise mean squared error between the two light spot images as the loss function, finally yielding the self-supervision learning neural network for point target phase retrieval.
2. The point target phase retrieval method based on the self-supervision learning neural network according to claim 1, characterized in that: the imaging detector is responsible for collecting distorted light spot images at the defocused position and provides training and test data for the self-supervision learning network, and the convolutional neural network module uses the classic MobileNet V1 network with the following structure:
1) MobileNet V1 begins with a 3x3 standard convolution followed by stacked depthwise separable convolutions, some of which downsample with stride=2; global average pooling reduces the feature map to 1x1, and a fully connected layer sized to the number of predicted outputs is added;
2) Since the method is not a classification problem, the final softmax layer is removed; the core is the depthwise separable convolution, which reduces the computational complexity of the model and greatly shrinks its size;
3) The depthwise separable convolution is a factorized convolution that can be decomposed into two smaller operations: a depthwise convolution and a pointwise convolution; the depthwise convolution applies a separate convolution kernel to each input channel, the pointwise convolution uses a 1x1 kernel; the input channels are first convolved independently by the depthwise convolution, and their outputs are then combined by the pointwise convolution.
3. The point target phase retrieval method based on the self-supervision learning neural network according to claim 1, characterized in that: during testing, a single defocused light spot image acquired by the imaging detector is fed into the self-supervision learning neural network, which outputs the wavefront parameters required by the system according to the learned mapping, realizing phase retrieval.
CN202111260725.0A 2021-10-28 2021-10-28 Point target phase retrieval method based on self-supervision learning neural network Active CN114022730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111260725.0A CN114022730B (en) 2021-10-28 2021-10-28 Point target phase retrieval method based on self-supervision learning neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111260725.0A CN114022730B (en) 2021-10-28 2021-10-28 Point target phase retrieval method based on self-supervision learning neural network

Publications (2)

Publication Number Publication Date
CN114022730A CN114022730A (en) 2022-02-08
CN114022730B true CN114022730B (en) 2023-08-15

Family

ID=80058136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111260725.0A Active CN114022730B (en) 2021-10-28 2021-10-28 Point target phase retrieval method based on self-supervision learning neural network

Country Status (1)

Country Link
CN (1) CN114022730B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115333621B (en) * 2022-08-10 2023-07-18 长春理工大学 Facula centroid prediction method fusing space-time characteristics under distributed framework

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109212735A (en) * 2018-10-10 2019-01-15 浙江大学 High-speed adaptive optics ring-shaped light spot based on machine learning corrects system and method
CN110020719A (en) * 2019-03-26 2019-07-16 中国科学院光电技术研究所 A kind of wave front correction method based on image moment characteristics
CN110207835A (en) * 2019-05-23 2019-09-06 中国科学院光电技术研究所 A kind of wave front correction method based on out-of-focus image training
CN110349095A (en) * 2019-06-14 2019-10-18 浙江大学 Learn the adaptive optics wavefront compensation method of prediction wavefront zernike coefficient based on depth migration
CN110648298A (en) * 2019-11-01 2020-01-03 中国工程物理研究院流体物理研究所 Optical aberration distortion correction method and system based on deep learning
CN111351450A (en) * 2020-03-20 2020-06-30 南京理工大学 Single-frame stripe image three-dimensional measurement method based on deep learning
CN112561831A (en) * 2020-12-24 2021-03-26 中国计量大学 Distortion correction method based on neural network
CN113383225A (en) * 2018-12-26 2021-09-10 加利福尼亚大学董事会 System and method for propagating two-dimensional fluorescence waves onto a surface using deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6787747B2 (en) * 2002-09-24 2004-09-07 Lockheed Martin Corporation Fast phase diversity wavefront correction using a neural network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109212735A (en) * 2018-10-10 2019-01-15 浙江大学 High-speed adaptive optics ring-shaped light spot based on machine learning corrects system and method
CN113383225A (en) * 2018-12-26 2021-09-10 加利福尼亚大学董事会 System and method for propagating two-dimensional fluorescence waves onto a surface using deep learning
CN110020719A (en) * 2019-03-26 2019-07-16 中国科学院光电技术研究所 A kind of wave front correction method based on image moment characteristics
CN110207835A (en) * 2019-05-23 2019-09-06 中国科学院光电技术研究所 A kind of wave front correction method based on out-of-focus image training
CN110349095A (en) * 2019-06-14 2019-10-18 浙江大学 Learn the adaptive optics wavefront compensation method of prediction wavefront zernike coefficient based on depth migration
CN110648298A (en) * 2019-11-01 2020-01-03 中国工程物理研究院流体物理研究所 Optical aberration distortion correction method and system based on deep learning
CN111351450A (en) * 2020-03-20 2020-06-30 南京理工大学 Single-frame stripe image three-dimensional measurement method based on deep learning
CN112561831A (en) * 2020-12-24 2021-03-26 中国计量大学 Distortion correction method based on neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guo Hongyang. Research on spatial light coupling technology based on liquid crystal correction. China Doctoral Dissertations Full-text Database, 2021, No. 08, pp. I136-12. *

Also Published As

Publication number Publication date
CN114022730A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN112150493B (en) Semantic guidance-based screen area detection method in natural scene
CN109766805B (en) Deep learning-based double-layer license plate character recognition method
CN111222396A (en) All-weather multispectral pedestrian detection method
CN111008576B (en) Pedestrian detection and model training method, device and readable storage medium
CN114821390B (en) Method and system for tracking twin network target based on attention and relation detection
CN114972208B (en) YOLOv 4-based lightweight wheat scab detection method
CN115690542A (en) Improved yolov 5-based aerial insulator directional identification method
CN114022730B (en) Point target phase retrieval method based on self-supervision learning neural network
CN113901897A (en) Parking lot vehicle detection method based on DARFNet model
CN114170537A (en) Multi-mode three-dimensional visual attention prediction method and application thereof
CN116342894A (en) GIS infrared feature recognition system and method based on improved YOLOv5
CN112288772A (en) Channel attention target tracking method based on online multi-feature selection
CN113327271B (en) Decision-level target tracking method and system based on double-optical twin network and storage medium
CN116681885B (en) Infrared image target identification method and system for power transmission and transformation equipment
CN111881914B (en) License plate character segmentation method and system based on self-learning threshold
US20240005635A1 (en) Object detection method and electronic apparatus
CN116091793A (en) Light field significance detection method based on optical flow fusion
CN115761667A (en) Unmanned vehicle carried camera target detection method based on improved FCOS algorithm
CN111833363A (en) Detection method and device
CN115187982A (en) Algae detection method and device and terminal equipment
CN113870311A (en) Single-target tracking method based on deep learning
CN113112522A (en) Twin network target tracking method based on deformable convolution and template updating
CN113609904B (en) Single-target tracking algorithm based on dynamic global information modeling and twin network
CN114880953B (en) Rapid wavefront restoration method of four-step phase type Fresnel zone plate
CN117689892B (en) Remote sensing image focal plane discriminating method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant