CN117237236B - Fringe pattern model optimization method, fringe pattern model optimization device, fringe pattern model optimization equipment and computer-readable storage medium - Google Patents


Info

Publication number: CN117237236B (application number CN202311497957.7A; earlier publication CN117237236A)
Authority: CN (China)
Prior art keywords: model, optimization, fringe pattern, characteristic parameters, neural network
Legal status: Active (granted)
Inventors: 吕赐兴, 王明丰, 秦毅
Original and current assignee: Dongguan University of Technology
Application filed by Dongguan University of Technology; priority to CN202311497957.7A

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a fringe pattern model optimization method, a fringe pattern model optimization device, fringe pattern model optimization equipment, and a computer-readable storage medium, wherein the fringe pattern model optimization method comprises the following steps: S1, establishing a fringe pattern model based on each pixel of consecutive multi-frame images, wherein the characteristic parameters of the fringe pattern model comprise fringe background light intensity, amplitude, phase, and phase-shift-related parameters; S2, creating a corresponding optimization objective function for each group of characteristic parameters; S3, optimizing each group of characteristic parameters one by one using an optimization model, wherein when any group of characteristic parameters is optimized, that group is treated as the variables and the other two groups as known quantities, and the optimization model comprises a weighted least squares model and a neural network model; S4, repeating step S3 until the final convergence condition is reached, and outputting all finally optimized groups of characteristic parameters. By obtaining the weighting coefficients of the weighted least squares method from a neural network model, the method solves the problem that inaccurate weighting coefficients interfere with the selection of accurate ones.

Description

Fringe pattern model optimization method, fringe pattern model optimization device, fringe pattern model optimization equipment and computer-readable storage medium
Technical Field
The present disclosure relates to the field of fringe pattern reconstruction technologies, and in particular, to a fringe pattern model optimization method, apparatus, device, and computer readable storage medium.
Background
Phase-shift interferometry (PSI) is widely used in optical metrology as a simple and accurate technique. In PSI, a series of phase-shifted fringe patterns is generated and captured, from which the phase distribution associated with a measured quantity (e.g., shape, deformation, or refractive index) is extracted. In practical measurement, however, factors such as rigid and non-rigid motion between the reference and the measured object, vibration caused by air turbulence, highly and weakly reflective regions in the image, large shadow areas cast by objects, and random noise from the interference devices cause the captured fringe patterns to have non-uniform phase-shift distributions in both time and space, i.e., non-uniform phase shifts between frames and between pixels within a frame. To accurately extract the phase in the presence of these error sources, a number of phase-shift algorithms (PSAs) have been developed. Typical algorithms for unknown phase shifts include the advanced iterative algorithm (AIA), based on an alternating linear least-squares fitting (LSF) framework; the principal component analysis (PCA) algorithm, which offers good accuracy and fast computation; and algorithms based on Euclidean matrix norms and Gram-Schmidt orthonormalization. However, all of these algorithms address only the case of uniform intra-frame and non-uniform inter-frame phase shifts.
To handle inter-frame and intra-frame non-uniform phase shifts simultaneously, the improved general iterative algorithm (GIA) was derived from the AIA. It is a general iterative algorithm that uses the most general fringe model. In the GIA, the many unknowns of the fringe pattern model are grouped into three groups, namely (i) fringe background light intensity and amplitude, (ii) phase, and (iii) phase-shift-related parameters, and are optimized group by group with a univariate search technique to improve accuracy and convergence. Because the Levenberg-Marquardt method (hereinafter LM) has good accuracy and robustness, each group of unknowns is optimized with the LM method so as to minimize the least-squares error between the noise-free fringe intensity and the actually measured fringe intensity; the unknown vector that minimizes this error is returned, from which the phase, the non-uniform phase shifts, and other related data are extracted.
In practice, however, the captured image may suffer from high reflection due to the material of the object, so that originally correct sampling data are contaminated. For example, high reflection raises the values of both black and white fringes, but the maximum readable image value is only 255, so the black and white fringes appear progressively clipped. Such contaminated fringe data make the GIA's Maclaurin-polynomial fit of the intra-frame phase shifts inaccurate and may cause gaps or cliffs at the fringe boundaries of the reconstructed object. To overcome this problem, a weighted least-squares phase unwrapping algorithm for discontinuous optical phase maps based on directional coherence has been proposed. In that algorithm, directional coherence is introduced to define a new weighting coefficient, which makes the reconstructed object surface smoother and better connected. However, its weights depend on the wrapped phase map, so inaccurate regions of the wrapped phase map affect the weight selection in accurate regions. Experiments with that algorithm also show that the unwrapped phase values in continuous regions are not smooth enough.
Disclosure of Invention
The present application is directed to a fringe pattern model optimization method, apparatus, device, and computer-readable storage medium that can solve at least one of the technical problems described in the background art.
In order to achieve the above object, the present application provides a fringe pattern model optimization method, which includes:
S1, establishing a fringe pattern model based on each pixel of consecutive multi-frame images, wherein the characteristic parameters of the fringe pattern model comprise fringe background light intensity, amplitude, phase, and phase-shift-related parameters;
S2, creating a corresponding optimization objective function for each group of characteristic parameters;
S3, optimizing each group of characteristic parameters one by one using an optimization model, wherein when any group of characteristic parameters is optimized, that group is treated as the variables and the other two groups as known quantities; the optimization model comprises a weighted least squares model and a neural network model, and each group of characteristic parameters is optimized as follows:
S31, constructing the weighted least squares model and the neural network model based on each group of characteristic parameters, the weight coefficients and cost coefficients of each pixel of the consecutive multi-frame images, and the set optimization parameters;
S32, inputting a group of variables to be optimized and the corresponding known quantities into the weighted least squares model;
S33, optimizing the variables to be optimized with the weighted least squares model, and updating the weight coefficients and cost coefficients;
S34, transmitting the updated weight coefficients to the neural network model, wherein the neural network model updates the neural network parameters using the updated weight coefficients and transmits them back to the weighted least squares model;
S35, taking the updated neural network parameters as the new optimization parameters of the weighted least squares model, and repeating steps S33 and S34;
S36, substituting the re-optimized variables and the corresponding known quantities into the corresponding optimization objective function and judging whether the optimization objective function has converged; if so, proceeding to step S38, and if not, transmitting the re-optimized variables back to the weighted least squares model and proceeding to step S37;
S37, repeating steps S35 and S36 until the optimization objective function converges, then proceeding to step S38;
S38, outputting the optimized variables;
S4, repeating step S3 until the final convergence condition is reached, and outputting all finally optimized groups of characteristic parameters.
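Steps S1-S4 describe a group-wise alternating (coordinate-descent) scheme. As a rough illustration only, the following pure-Python sketch alternates the three groups on a toy single-pixel, single-harmonic model. All function names, the grid-search sub-solvers, and the initial guesses are assumptions of this sketch, not the patent's; the actual method uses the weighted least squares and neural network machinery described later, and a one-pixel problem is under-determined, so only the decrease of the residual is meaningful here:

```python
import math

def model(a, b, phi, delta):
    # simplified single-harmonic fringe model: I = a + b*cos(phi + delta)
    return a + b * math.cos(phi + delta)

def objective(params, data):
    # least-squares error between modeled and measured intensities
    a, b, phi, deltas = params
    return sum((y - model(a, b, phi, d)) ** 2 for y, d in zip(data, deltas))

def fit_background_amplitude(data, phi, deltas):
    # group (i): closed-form linear least squares for (a, b) with phi, deltas fixed
    c = [math.cos(phi + d) for d in deltas]
    n, sc = len(data), sum(c)
    scc = sum(ci * ci for ci in c)
    sy = sum(data)
    syc = sum(y * ci for y, ci in zip(data, c))
    det = n * scc - sc * sc
    return (sy * scc - sc * syc) / det, (n * syc - sc * sy) / det

def grid_min(f, lo, hi, steps=2000):
    # crude 1-D minimizer standing in for the patent's LM optimizer
    best_x, best_v = lo, f(lo)
    for k in range(1, steps + 1):
        x = lo + (hi - lo) * k / steps
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def optimize(data, iters=20):
    # S3: optimize the three groups one by one, the active group as variables,
    # the other two as known quantities; S4: repeat (a fixed budget here)
    F = len(data)
    a = sum(data) / F
    b = (max(data) - min(data)) / 2 or 1.0
    phi = 0.0
    deltas = [2 * math.pi * i / F for i in range(F)]  # rough initial phase shifts
    for _ in range(iters):
        a, b = fit_background_amplitude(data, phi, deltas)            # group (i)
        phi = grid_min(lambda p: objective((a, b, p, deltas), data),
                       0.0, math.pi)                                  # group (ii)
        deltas = [0.0] + [                                            # group (iii)
            grid_min(lambda d, i=i: (data[i] - model(a, b, phi, d)) ** 2,
                     0.0, 2 * math.pi)
            for i in range(1, F)
        ]  # first frame's phase shift fixed at 0, as in the description
    return a, b, phi, deltas
```

Each group is minimized in turn while the other two are held as known quantities, mirroring step S3; the fixed iteration budget stands in for the final convergence condition of step S4.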
Optionally, the fringe pattern model is:

$I_{ij} = a_j + \sum_{h=1}^{H} b_{jh}\cos\big(h(\varphi_j + \delta_{ij})\big)$

The optimization objective function corresponding to the fringe background light intensity and the amplitude (for each pixel $j$) is:

$f_1(a_j, b_{j1},\dots,b_{jH}) = \sum_{i=1}^{F}\big(I^{o}_{ij} - I_{ij}\big)^2$

The optimization objective function corresponding to the phase (for each pixel $j$) is:

$f_2(\varphi_j) = \sum_{i=1}^{F}\big(I^{o}_{ij} - I_{ij}\big)^2$

The optimization objective function corresponding to the phase-shift-related parameters (for each frame $i$) is:

$f_3 = \sum_{j=1}^{N}\big(I^{o}_{ij} - I_{ij}\big)^2$

where $I_{ij}$ is the fringe pattern model, $a_j$ the fringe background light intensity, $b_{jh}$ the amplitude, $\varphi_j$ the phase, $\delta_{ij}$ the phase shift, $h = 1,\dots,H$ the harmonic order of the fringe pattern model, $I^{o}_{ij}$ the actually measured fringe pattern, $F$ the total number of frames, $N$ the total number of pixels per frame, $i$ the frame number, and $j$ the pixel number.
Optionally, the parameters of the weighted least squares model include: an unknown vector, a residual vector, and a Jacobian matrix.

For the fringe background light intensity and amplitude (established for a target pixel $j$), the unknown vector is $\theta = [a_j, b_{j1},\dots,b_{jH}]$, the residual vector stacks the residuals $I^{o}_{ij} - I_{ij}$ over the frames, and the Jacobian matrix stacks the partial derivatives of those residuals with respect to the entries of $\theta$.

For the phase, the unknown vector is $\theta = [\varphi_j]$, with the residual vector and Jacobian matrix formed in the same way over the frames.

For the phase-shift-related parameters (established for a target frame $i$), the unknown vector collects the Maclaurin coefficients $p_{uv}$ of the intra-frame phase-shift distribution

$\delta_{ij} = \sum_{u=0}^{Q}\sum_{v=0}^{u} p_{uv}\, x_j^{\,u-v}\, y_j^{\,v}$

and the residual vector and Jacobian matrix are formed over the $N$ pixels of the frame.

Here $a_j$ is the fringe background light intensity, $b_{jh}$ the amplitude, $\varphi_j$ the phase, $\delta_{ij}$ the phase shift, $p_{uv}$ the phase-shift-related parameters, $h = 1,\dots,H$ the harmonic order of the fringe pattern model, $N$ the total number of pixels per frame, $F$ the total number of frames, $i$ the frame number, $j$ the pixel number, $x_j$ and $y_j$ the coordinates corresponding to pixel $j$, $u - v$ and $v$ the Maclaurin orders in the $x$ and $y$ directions, respectively, and $Q$ the Maclaurin order ($u = 0,1,\dots,Q$; $v = 0,1,\dots,u$).
Optionally, the loss function of the weighted least squares model is:

$S(\theta; v) = \sum_{i} w_i\, c_i\, r_i(\theta)^2$

where $w_i$ is a weight coefficient, $c_i$ a cost coefficient, $v$ the optimization parameters, $\theta$ the variables to be optimized, and $r$ the residual vector; each weighted residual is expressed as the product of the weight coefficient and the cost coefficient with the residual.

The loss function of the neural network model is as follows:

where $v^{(k-1)}$ denotes the previous optimization parameters and $v^{(k)}$ the currently updated optimization parameters.
Optionally, the neural network model includes: an input layer, a hidden layer, and an output layer;
the layers of the neural network model are fully connected;
wherein the numbers of neurons in the input layer and the output layer match the size of the unknown vector;
wherein the number of neurons in the hidden layer matches the number of weight coefficients.
Optionally, convergence of the optimization objective function is determined by the following inequality:

$\big|f^{(k)} - f^{(k-1)}\big| < \varepsilon$

where $f$ is the optimization objective function corresponding to the variable to be optimized, $k$ the iteration number, and $\varepsilon$ the threshold.
Optionally, the final convergence condition includes: selecting the optimization objective function corresponding to any group of characteristic parameters, substituting it into the following inequality, and requiring that the inequality hold:

$\big|f^{(k)} - f^{(k-1)}\big| < \varepsilon$

where $f$ is the optimization objective function corresponding to the selected group of characteristic parameters, $k$ the iteration number, and $\varepsilon$ the threshold.
To achieve the above object, the present application provides a fringe pattern model optimizing apparatus, including:
a setting module for establishing a fringe pattern model based on each pixel of consecutive multi-frame images, wherein the characteristic parameters of the fringe pattern model comprise fringe background light intensity, amplitude, phase, and phase-shift-related parameters;
a creation module for creating a corresponding optimization objective function for each group of characteristic parameters;
an optimization module for optimizing each group of characteristic parameters one by one using an optimization model, wherein when any group of characteristic parameters is optimized, that group is treated as the variables and the other two groups as known quantities; the optimization model comprises a weighted least squares model and a neural network model, and each group of characteristic parameters is optimized as follows:
S31, constructing the weighted least squares model and the neural network model based on each group of characteristic parameters, the weight coefficients and cost coefficients of each pixel of the consecutive multi-frame images, and the set optimization parameters;
S32, inputting a group of variables to be optimized and the corresponding known quantities into the weighted least squares model;
S33, optimizing the variables to be optimized with the weighted least squares model, and updating the weight coefficients and cost coefficients;
S34, transmitting the updated weight coefficients to the neural network model, wherein the neural network model updates the neural network parameters using the updated weight coefficients and transmits them back to the weighted least squares model;
S35, taking the updated neural network parameters as the new optimization parameters of the weighted least squares model, and repeating steps S33 and S34;
S36, substituting the re-optimized variables and the corresponding known quantities into the corresponding optimization objective function and judging whether the optimization objective function has converged; if so, proceeding to step S38, and if not, transmitting the re-optimized variables back to the weighted least squares model and proceeding to step S37;
S37, repeating steps S35 and S36 until the optimization objective function converges, then proceeding to step S38;
S38, outputting the optimized variables;
and an output module for outputting all finally optimized groups of characteristic parameters when the final convergence condition is reached.
To achieve the above object, the present application further provides an apparatus, including:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform a fringe pattern model optimization method as described above via execution of the executable instructions.
To achieve the above object, the present application also provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the fringe pattern model optimizing method as described above.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device performs the fringe pattern model optimization method as described above.
The method uses a weighted least squares model combined with a neural network model to optimize the fringe background light intensity, amplitude, phase, and phase-shift-related parameters of the fringe pattern model group by group. During optimization, the weighted least squares model transmits the updated weight coefficients forward to the neural network model; the neural network model updates the neural network parameters with the received weight coefficients and transmits them back to the weighted least squares model, which re-optimizes using the updated neural network parameters as new optimization parameters. The re-optimized variables and the corresponding known quantities are then substituted into the corresponding optimization objective function to judge convergence: if the objective has converged, the optimized variables are output; if not, the re-optimized variables are transmitted back to the weighted least squares model and optimization is repeated until the objective converges and the optimized variables are output. The method repeatedly optimizes the fringe background light intensity, amplitude, phase, and phase-shift-related parameters one by one until the final convergence condition is reached, and outputs all finally optimized groups of characteristic parameters. Because a neural network model builds abstract representations of features and patterns by learning the statistical regularities of large amounts of data during training, and keeps its output stable on accurate data, this method uses a neural network model to obtain the weighting coefficients of the weighted least squares method, which solves the problem that inaccurate weighting coefficients interfere with the selection of accurate ones.
Secondly, the neural network has end-to-end learning capability. Combining the neural network model with the weighted least squares model enables end-to-end learning: the weight coefficients can be learned directly from the raw input data, and the neural network model can better adapt to different tasks and data.
Drawings
FIG. 1 is a flow chart of a fringe pattern model optimization method in accordance with an embodiment of the present application.
FIG. 2 is a flow chart of a method of optimizing a set of characteristic parameters in an embodiment of the present application.
FIG. 3 is a neural network model neuron structure when the variables to be optimized are stripe background light intensity and amplitude according to the embodiment of the present application.
FIG. 4 is a schematic diagram of an optimization model of an embodiment of the present application.
FIG. 5 is a flow chart of a fringe pattern model optimization method in accordance with another embodiment of the present application.
FIG. 6 is a schematic block diagram of a fringe pattern optimizing apparatus according to an embodiment of the present application.
Fig. 7 is a schematic block diagram of an apparatus of an embodiment of the present application.
Detailed Description
In order to describe the technical content, constructional features, achieved objects and effects of the present application in detail, the following description is made in connection with the embodiments and the accompanying drawings.
Example 1
Referring to fig. 1 to 5, the present application discloses a fringe pattern model optimization method, which includes:
s1, setting up a fringe pattern model based on each pixel of continuous multi-frame images, wherein characteristic parameters of the fringe pattern model comprise fringe background light intensity, amplitude, phase and phase shift related parameters. The background light intensity and amplitude of the stripes are a set of characteristic parameters, the phase is a set of characteristic parameters, and the phase shift related parameters are a set of characteristic parameters.
Specifically, the fringe pattern model is:

$I_{ij} = a_j + \sum_{h=1}^{H} b_{jh}\cos\big(h(\varphi_j + \delta_{ij})\big)$

where $I_{ij}$ is the fringe pattern model, $a_j$ the fringe background light intensity, $b_{jh}$ the amplitude, $\varphi_j$ the phase, $\delta_{ij}$ the phase shift, and $h = 1,\dots,H$ the harmonic order of the fringe pattern model.
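Assuming the model form just given (per-pixel background $a_j$, harmonic amplitudes $b_{jh}$, phase $\varphi_j$, and per-frame phase shift $\delta_{ij}$), one pixel's intensity can be evaluated as follows; the function name is illustrative:

```python
import math

def fringe_model(a, b, phi, delta):
    """Fringe intensity at one pixel: a + sum_{h=1}^{H} b[h-1]*cos(h*(phi + delta)).

    a     -- fringe background light intensity a_j
    b     -- list of harmonic amplitudes [b_j1, ..., b_jH] (H = len(b))
    phi   -- phase phi_j
    delta -- phase shift delta_ij of the current frame
    """
    return a + sum(bh * math.cos(h * (phi + delta))
                   for h, bh in enumerate(b, start=1))
```

With H = 2 (the value used in the experiments), `b` holds two harmonic amplitudes.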
S2, respectively creating a corresponding optimization objective function for each group of characteristic parameters.
Specifically, the optimization objective function corresponding to the fringe background light intensity and the amplitude is:

$f_1(a_j, b_{j1},\dots,b_{jH}) = \sum_{i=1}^{F}\big(I^{o}_{ij} - I_{ij}\big)^2$

The optimization objective function corresponding to the phase is:

$f_2(\varphi_j) = \sum_{i=1}^{F}\big(I^{o}_{ij} - I_{ij}\big)^2$

The optimization objective function corresponding to the phase-shift-related parameters is:

$f_3 = \sum_{j=1}^{N}\big(I^{o}_{ij} - I_{ij}\big)^2$

where $I_{ij}$ is the fringe pattern model, $I^{o}_{ij}$ the actually measured fringe pattern, $F$ the total number of frames, $N$ the total number of pixels per frame, $i$ the frame number, and $j$ the pixel number.
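All three objective functions share the same least-squares form and differ only in the summation axis: over the frames of one pixel for the background/amplitude and phase groups, over the pixels of one frame for the phase-shift group. A minimal helper, named here for illustration only:

```python
def ls_objective(observed, modeled):
    """Least-squares objective f = sum (I_obs - I_model)^2.

    For the background/amplitude group and the phase group the sum runs over
    the F frames of one pixel; for the phase-shift group it runs over the N
    pixels of one frame.
    """
    return sum((o - m) ** 2 for o, m in zip(observed, modeled))
```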
S3, optimizing each group of characteristic parameters one by one using an optimization model, wherein when any group of characteristic parameters is optimized, that group is treated as the variables and the other two groups as known quantities; the optimization model comprises a weighted least squares model (optimized with the Levenberg-Marquardt method, i.e., the optimizer LM in Fig. 4) and a neural network model (the optimizer Adam). It will be appreciated that when a group of variables is optimized, optimization continues from the latest results, and both the input variables and the known quantities are the results of the most recent optimization.
Specifically, the parameters of the weighted least squares model include: an unknown vector, a residual vector, and a Jacobian matrix.

The unknown vector, residual vector, and Jacobian matrix corresponding to the fringe background light intensity and amplitude (established for a target pixel $j$) are, respectively: the unknown vector $\theta = [a_j, b_{j1},\dots,b_{jH}]$ of size $1\times(H+1)$; the residual vector of size $F\times 1$ stacking the residuals $I^{o}_{ij} - I_{ij}$ over the frames; and the Jacobian matrix of size $F\times(H+1)$ stacking the partial derivatives of those residuals with respect to the entries of $\theta$, where $a_j$ is the fringe background light intensity, $b_{jh}$ the amplitude, $\varphi_j$ the phase, $\delta_{ij}$ the phase shift, $h = 1,\dots,H$ the harmonic order of the fringe pattern model, $F$ the total number of frames, $i$ the frame number, and $j$ the pixel number. $H$ is user-defined; $H = 2$ in the specific experiments.

Specifically, when the weighted least squares model optimizes the fringe background light intensity and amplitude, the parameters to be passed in further include: the maximum iteration number LM_MaxIter, the damping factor $\lambda$, and the tolerance Tol. Based on experiments, LM_MaxIter is set to 200, the damping factor $\lambda$ to 1, and the tolerance Tol to a small fixed threshold, although these settings are not limiting.
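To make the roles of LM_MaxIter, the damping factor, and the tolerance Tol concrete, here is a minimal unweighted Levenberg-Marquardt iteration for the background/amplitude group of a single-harmonic pixel. This is a simplified sketch: the patent's formulation additionally carries the weight and cost coefficients, which are omitted here, and the function name is an assumption of this sketch:

```python
import math

def lm_fit_ab(y, phi, deltas, a0=0.0, b0=1.0, max_iter=200, lam=1.0, tol=1e-8):
    """Levenberg-Marquardt for theta = (a, b) in I_i = a + b*cos(phi + delta_i).

    max_iter, lam and tol play the roles of LM_MaxIter, the damping factor
    and the tolerance Tol; weight and cost coefficients are omitted.
    """
    a, b = a0, b0
    for _ in range(max_iter):
        c = [math.cos(phi + d) for d in deltas]
        r = [yi - (a + b * ci) for yi, ci in zip(y, c)]  # residual vector
        # Jacobian of the residuals: dr_i/da = -1, dr_i/db = -c_i
        A00 = len(y) + lam                         # (J^T J + lam*I)[0][0]
        A01 = sum(c)                               # (J^T J)[0][1]
        A11 = sum(ci * ci for ci in c) + lam       # (J^T J + lam*I)[1][1]
        g0 = sum(r)                                # (-J^T r)[0]
        g1 = sum(ri * ci for ri, ci in zip(r, c))  # (-J^T r)[1]
        # damped normal equations: (J^T J + lam*I) step = -J^T r
        det = A00 * A11 - A01 * A01
        da = (A11 * g0 - A01 * g1) / det
        db = (A00 * g1 - A01 * g0) / det
        a, b = a + da, b + db
        if abs(da) + abs(db) < tol:  # Tol: stop once the step is negligible
            break
    return a, b
```

Because the model is linear in (a, b), the damped iteration contracts geometrically and stops on the tolerance well before the 200-iteration cap.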
The unknown vector, residual vector, and Jacobian matrix corresponding to the phase (established for a target pixel $j$) are, respectively: the unknown vector $\theta = [\varphi_j]$ of size $1\times 1$; the residual vector of size $F\times 1$ stacking the residuals $I^{o}_{ij} - I_{ij}$ over the frames; and the Jacobian matrix of size $F\times 1$ stacking the partial derivatives of those residuals with respect to $\varphi_j$, where $b_{jh}$ is the amplitude, $\varphi_j$ the phase, $\delta_{ij}$ the phase shift, $h = 1,\dots,H$ the harmonic order of the fringe pattern model, $i$ the frame number, and $j$ the pixel number.

Specifically, when the weighted least squares model optimizes the phase, the parameters to be passed in further include: the maximum iteration number LM_MaxIter, the damping factor $\lambda$, and the tolerance Tol. Based on experiments, LM_MaxIter is set to 200, the damping factor $\lambda$ to 1, and the tolerance Tol to a small fixed threshold. Because these three parameters are generic, the same values are used here as when the weighted least squares model optimizes the fringe background light intensity and amplitude.
The unknown vector, residual vector, and Jacobian matrix corresponding to the phase-shift-related parameters are, respectively: the unknown vector of Maclaurin coefficients $\theta = [p_{uv}]$ (i.e., the phase-shift-related parameters) of size $1\times[(Q+1)(Q+2)/2]$; the residual vector of size $N\times 1$ stacking the residuals $I^{o}_{ij} - I_{ij}$ over the pixels of the frame; and the Jacobian matrix of size $N\times[(Q+1)(Q+2)/2]$ stacking the partial derivatives of those residuals with respect to the $p_{uv}$. The intra-frame phase-shift distribution is modeled by the Maclaurin polynomial

$\delta_{ij} = \sum_{u=0}^{Q}\sum_{v=0}^{u} p_{uv}\, x_j^{\,u-v}\, y_j^{\,v}$

where $b_{jh}$ is the amplitude, $\varphi_j$ the phase, $\delta_{ij}$ the phase shift, $p_{uv}$ the phase-shift-related parameters, $h = 1,\dots,H$ the harmonic order of the fringe pattern model, $N$ the total number of pixels per frame, $i$ the frame number, $j$ the pixel number, $x_j$ and $y_j$ the coordinates corresponding to pixel $j$, and $u - v$ and $v$ the Maclaurin orders in the $x$ and $y$ directions, respectively. In particular, $j = 1, 2, \dots, N$, $u = 0, 1, \dots, Q$, and $v = 0, 1, \dots, u$. $Q$ may be set according to the image; $Q = 4$ in the specific experiments, but this is not limiting.

For the phase-shift-related parameters, the optimization proceeds frame by frame, and the phase shift of the first frame is assumed to be 0. The unknown vector, residual vector, and Jacobian matrix above are established for the target frame $i$.
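The Maclaurin phase-shift polynomial above can be evaluated directly; note that the number of coefficients matches the stated unknown-vector size $(Q+1)(Q+2)/2$. The helper name and the triangular-table layout of the coefficients are illustrative:

```python
def maclaurin_phase_shift(p, x, y, Q):
    """Intra-frame phase shift at pixel coordinates (x, y):

        delta = sum_{u=0}^{Q} sum_{v=0}^{u} p[u][v] * x**(u-v) * y**v

    p is a triangular coefficient table (row u has u+1 entries); the total
    number of coefficients is (Q+1)*(Q+2)/2, matching the unknown-vector
    size given in the text.
    """
    return sum(p[u][v] * x ** (u - v) * y ** v
               for u in range(Q + 1) for v in range(u + 1))
```

With Q = 4 (the experimental setting) there are 15 coefficients per frame.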
Specifically, when the weighted least squares model optimizes the phase-shift-related parameters, the parameters to be passed in further include: the maximum iteration number LM_MaxIter, the damping factor $\lambda$, and the tolerance Tol. Based on experiments, LM_MaxIter is set to 200, the damping factor $\lambda$ to 1, and the tolerance Tol to a small fixed threshold. Because these three parameters are generic, the same values are used here as when the weighted least squares model optimizes the fringe background light intensity and amplitude or the phase.
Specifically, the neural network model includes: an input layer, a hidden layer and an output layer;
the connection mode of the neural network model adopts a full connection mode for connection;
the number of neurons of the input layer and the output layer is consistent with the size of the unknown vector;
wherein the number of neurons of the hidden layer is consistent with the number of weight coefficients.
Fig. 3 shows the neuron structure of the neural network model when the variables to be optimized are the fringe background light intensity and amplitude. The neurons of the hidden layer represent the weight coefficients $w$ transmitted forward; the number of neurons is $n$, which can be set according to the image size.
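A minimal fully connected network with the stated layer sizes (input and output widths equal to the unknown-vector size m, hidden width n equal to the number of weight coefficients) might look as follows. The sigmoid activation, the absence of bias terms, and the random initialization are assumptions of this sketch, not taken from the patent:

```python
import math
import random

class TinyFCN:
    """Fully connected net: input(m) -> hidden(n) -> output(m), no biases.

    m mirrors the unknown-vector size, n the number of weight coefficients;
    the hidden activations are read off as the forward-transmitted weight
    coefficients.
    """
    def __init__(self, m, n, seed=0):
        rng = random.Random(seed)
        self.W1 = [[rng.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)]
        self.W2 = [[rng.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(m)]

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(self, x):
        # hidden activations play the role of the n weight coefficients
        hidden = [self._sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.W1]
        out = [sum(w * h for w, h in zip(row, hidden)) for row in self.W2]
        return hidden, out
```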
The optimization method of each group of characteristic parameters comprises the following steps:
s31, constructing a weighted least square method model and a neural network model based on each group of characteristic parameters, weight coefficients and cost coefficients of each pixel of the continuous multi-frame image and set optimization parameters (initial value is set to be 1).
Specifically, the loss function of the weighted least squares model is:

$S(\theta; v) = \sum_{i} w_i\, c_i\, r_i(\theta)^2$

where $w_i$ is the weight coefficient (of the $i$-th frame), $c_i$ the cost coefficient (of the $i$-th frame), $v$ the optimization parameters, $\theta$ the variables to be optimized, and $r$ the residual vector; each weighted residual is expressed as the product of the weight coefficient and the cost coefficient with the residual. The value $\theta^{*}$ of the corresponding least-squares solution is processed by the hidden layer of the optimization model and, once the convergence criterion is met, is finally transferred to the output layer of the optimization model.
The weighted least squares model is:
the loss function of the neural network model is:
where $v^{(k-1)}$ denotes the previous optimization parameters and $v^{(k)}$ the currently updated optimization parameters.
The optimization function of the neural network model is:
s32, inputting a group of variables to be optimized and the corresponding known quantities into a weighted least square model.
Where the variables to be optimized are fringe background light intensity and amplitude, the known quantities include phase and phase shift related parameters. When the variable to be optimized is phase, the known quantities are fringe background light intensity and amplitude, and phase shift related parameters. When the variables to be optimized are phase shift related parameters, the known quantities are fringe background light intensity and amplitude, as well as phase.
And S33, optimizing the variable to be optimized by the weighted least square method model, and updating the weight coefficient and the cost coefficient. Specifically, the weighted least squares model calculates a new S (θ; v) from the loss function.
And S34, transmitting the updated weight coefficients to the neural network model, which updates the neural network parameters using the updated weight coefficients and transmits them back to the weighted least squares model.
Specifically, based on the loss function, the neural network model performs standard deep-learning gradient-descent optimization of the neural network parameters $v$, generates the updated parameters, and transmits them back to the LM optimizer.
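The "standard gradient-descent type optimization in deep learning" performed by the optimizer Adam can be sketched as a single bias-corrected Adam update of the parameters v. This is a textbook Adam step on an arbitrary gradient, not the patent's exact implementation:

```python
def adam_step(v, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update of the optimization parameters v.

    state = (m, s, t): first-moment list, second-moment list, step count.
    Returns the updated parameters and the updated state.
    """
    m, s, t = state
    t += 1
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]          # momentum
    s = [b2 * si + (1 - b2) * g * g for si, g in zip(s, grad)]      # RMS term
    mh = [mi / (1 - b1 ** t) for mi in m]                           # bias correction
    sh = [si / (1 - b2 ** t) for si in s]
    v = [vi - lr * mi / (si ** 0.5 + eps) for vi, mi, si in zip(v, mh, sh)]
    return v, (m, s, t)
```

Repeatedly calling this with the gradient of the neural network loss drives v toward a minimizer, after which the updated v is handed back to the LM optimizer as in step S35.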
S35, taking the updated neural network parameters as the new optimization parameters of the weighted least squares model, and repeating steps S33 and S34.
S36, substituting the re-optimized variables and the corresponding known quantities into the corresponding optimization objective function and judging whether it converges; if so, entering step S38, and if not, transmitting the re-optimized variables back to the weighted least squares model and entering step S37.
Specifically, when the variables are the fringe background light intensity and amplitude, the substituted optimization objective function is:
When the variable is the phase, the substituted optimization objective function is:
When the variables are the phase shift related parameters, the substituted optimization objective function is:
specifically, the optimization objective function convergence is determined by the following inequality:
wherein the symbols denote, respectively, the optimization objective function corresponding to the variable to be optimized, the number of iterations, and a threshold.
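For orientation, the objective functions above amount to a sum of squared residuals between the fringe model (background plus harmonic cosine terms, as described for the model of claim 2) and the actual fringe pattern, with convergence judged by the change in the objective. The symbol names, harmonic convention and exact convergence inequality below are assumptions, since the equation images of the original are not reproduced here:

```python
import numpy as np

def fringe_model(B, A, phi, delta, M=1):
    """Fringe intensity B + sum_{m=1..M} A[m-1] * cos(m * (phi + delta)).
    B: background, A: harmonic amplitudes, phi: phase, delta: phase shift.
    The shape conventions here are assumptions for illustration."""
    A = np.atleast_1d(A)
    return B + sum(A[m - 1] * np.cos(m * (phi + delta)) for m in range(1, M + 1))

def objective(I_actual, B, A, phi, delta, M=1):
    """Sum-of-squared-residuals objective for one pixel over all frames."""
    r = fringe_model(B, A, phi, delta, M) - I_actual
    return float(np.sum(r ** 2))

def objective_converged(F_curr, F_prev, eps=1e-9):
    """Assumed form of the convergence inequality: the change in the
    objective between successive iterations falls below a threshold."""
    return abs(F_curr - F_prev) < eps
```

With the other two parameter groups fixed, each of the three objective functions above is this residual sum viewed as a function of the remaining group only.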
S37, repeating the step S35 and the step S36 until the optimization objective function converges, and entering the step S38.
S38, outputting the optimized variable.
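The loop of steps S32 to S38 can be sketched as follows. This is a minimal illustration under stated assumptions, not this application's implementation: the damped weighted step is a standard Levenberg-Marquardt-style form, and the neural network's update of the weight coefficients is stood in for by a caller-supplied `update_weights` function (both names are assumptions):

```python
import numpy as np

def lm_step(theta, residual_fn, jac_fn, w, lam=1e-3):
    """One damped weighted least-squares (Levenberg-Marquardt style) update:
    theta <- theta - (J^T W J + lam*I)^{-1} J^T W r, with W = diag(w)."""
    r = residual_fn(theta)
    J = jac_fn(theta)
    W = np.diag(w)
    H = J.T @ W @ J + lam * np.eye(len(theta))
    return theta - np.linalg.solve(H, J.T @ W @ r)

def alternate(theta, residual_fn, jac_fn, w, update_weights,
              eps=1e-10, max_iter=50):
    """Sketch of steps S33-S38: optimize the variables (S33/S35), let a
    stand-in 'network' update the weight coefficients (S34), and stop when
    the weighted objective stops changing (S36-S38)."""
    F_prev = np.inf
    for _ in range(max_iter):
        theta = lm_step(theta, residual_fn, jac_fn, w)   # S33 / S35
        w = update_weights(w, residual_fn(theta))        # S34 (NN stand-in)
        r = residual_fn(theta)
        F = float(r @ (w * r))                           # weighted objective
        if abs(F_prev - F) < eps:                        # S36 convergence test
            break
        F_prev = F
    return theta, w
```

For a linear residual r(theta) = X @ theta - y with constant weights, this reduces to ordinary damped weighted least squares and converges in a few iterations.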
For the whole optimization model, as shown in fig. 4, the model can be divided into an input layer, a hidden layer and an output layer: the input layer receives a group of variables to be optimized together with the corresponding known quantities; the output layer outputs the variables that meet the iteration termination condition in the hidden layer, i.e. the variables that have completed optimization; and the hidden layer is responsible for all work other than input and output.
Referring to fig. 5, specifically, before optimizing each group of characteristic parameters one by one using the optimization model, the method further includes: initializing all characteristic parameters and phase shifts using the AIA iteration method.
The initialization data affects the accuracy of the final convergence data of the present application. When the initialization data deviates too far from the convergence data, the accuracy of the final result is low, and the number of convergence iterations increases, slowing convergence. To improve the accuracy of the final result, data as close to the real result as possible should be selected; therefore the present application uses the data calculated by AIA as the initial values, i.e. the fringe background light intensity, amplitude, phase shift and phase shift related parameters are initialized using AIA.
Further, when initializing all the characteristic parameters by AIA iteration, the phase shift related parameter is set to zero (i.e. the Maclaurin coefficient vector is set to 0, an all-zero one-dimensional vector, ensuring that the phase shift plane of the first frame starts from 0), and the amplitudes of the higher harmonics of the fringes are also set to zero to facilitate better final convergence.
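For reference, an AIA-style initialization alternates two linear least-squares fits: the per-pixel phase given the frame phase shifts, then the per-frame phase shifts given the phase. The sketch below is a simplified, assumed version (single harmonic, no Maclaurin phase-shift surface); the function and variable names are illustrative only:

```python
import numpy as np

def aia_init(I, deltas, n_iter=30):
    """Simplified AIA-style alternation for I_i = a + b*cos(phi + delta_i).
    I: (n_frames, n_pixels) intensities; deltas: initial phase-shift guess.
    Returns background a, amplitude b, per-pixel phase phi, per-frame deltas."""
    for _ in range(n_iter):
        # Step 1: per-pixel fit of [a, b*cos(phi), -b*sin(phi)] given deltas.
        G = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
        X, *_ = np.linalg.lstsq(G, I, rcond=None)         # (3, n_pixels)
        phi = np.arctan2(-X[2], X[1])
        # Step 2: per-frame fit of [a, b*cos(delta), -b*sin(delta)] given phi.
        H = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
        Y, *_ = np.linalg.lstsq(H, I.T, rcond=None)       # (3, n_frames)
        deltas = np.arctan2(-Y[2], Y[1])
        deltas = deltas - deltas[0]   # pin the first frame's shift to zero
    a, b = X[0], np.hypot(X[1], X[2])
    return a, b, phi, deltas
```

Pinning the first frame's shift to zero mirrors the zero-initialization described above for the phase shift related parameters.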
S4, repeating the step S3 until the final convergence condition is reached, and outputting all groups of characteristic parameters which are finally optimized.
Specifically, the final convergence condition includes: selecting an optimized objective function corresponding to any group of characteristic parameters, substituting the following inequality and enabling the inequality to be established:
wherein the symbols denote, respectively, the optimization objective function corresponding to the selected group of characteristic parameters, the overall number of iterations, and a threshold. Specifically, the threshold here is user-defined; a fixed value was used in the experiments.
Specifically, the final convergence condition further includes a maximum iteration number MaxIter. Optionally, the maximum iteration number MaxIter = 20 to 50.
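Assuming the final convergence condition has the usual form of "objective change below a threshold, or iteration cap reached" (the inequality image of the original is not reproduced here), it can be sketched as:

```python
def final_converged(F_hist, eps=1e-9, max_iter=30):
    """Assumed overall stopping rule: stop when the selected optimization
    objective changes by less than eps between outer iterations, or when the
    outer iteration count reaches max_iter (MaxIter, suggested 20 to 50)."""
    k = len(F_hist) - 1                 # index of the latest outer iteration
    if k >= max_iter:
        return True
    return k >= 1 and abs(F_hist[k] - F_hist[k - 1]) < eps
```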
In the example provided in fig. 5, when optimizing each group of characteristic parameters one by one, the optimization order is: first the fringe background light intensity and amplitude, then the phase, and finally the phase shift related parameters. Of course, the optimization is not limited to this order.
The present application uses a weighted least squares model combined with a neural network model to optimize the fringe background light intensity, amplitude, phase and phase shift related parameters of the fringe pattern model one by one. During optimization, the weighted least squares model forward-transmits the updated weight coefficients to the neural network model; the neural network model updates the neural network parameters using the received weight coefficients and transmits them back to the weighted least squares model; and the weighted least squares model re-optimizes with the updated neural network parameters as new optimization parameters. The re-optimized variables and the corresponding known quantities are then substituted into the corresponding optimization objective function to judge whether it converges: if so, the optimized variables are output; if not, the re-optimized variables are transmitted back to the weighted least squares model and optimization is repeated until the objective function converges and the optimized variables are output. The fringe background light intensity, amplitude, phase and phase shift related parameters are repeatedly optimized one by one in this way until the final convergence condition is reached, at which point all groups of finally optimized characteristic parameters are output. Because a neural network model, by learning the statistical regularities of a large amount of data during training, establishes abstract representations of features and patterns and maintains stable, accurate outputs, the present application uses the neural network model to obtain the weight coefficients of the weighted least squares method, thereby addressing the difficulty of selecting accurate weight coefficients.
Secondly, neural networks have end-to-end learning capability. Combining the neural network model with the weighted least squares model therefore enables end-to-end learning: the weight coefficients can be learned directly from the raw input data, and the neural network model can better adapt to different tasks and data.
Example two
Referring to fig. 2 and 6, the present application discloses a fringe pattern model optimizing apparatus, which includes:
a setting module 201, configured to set up a fringe pattern model based on each pixel of the continuous multi-frame images, where characteristic parameters of the fringe pattern model include fringe background light intensity and amplitude, phase, and phase shift related parameters.
A creation module 202 is configured to create a corresponding optimization objective function for each set of feature parameters.
The optimization module 203 is configured to optimize each group of characteristic parameters one by one using an optimization model; when any group of characteristic parameters is optimized, that group serves as the variables and the other two groups serve as known quantities. The optimization model includes a weighted least squares model and a neural network model. The optimization method for each group of characteristic parameters includes the following steps:
S31, constructing a weighted least squares model and a neural network model based on each group of characteristic parameters, the weight coefficients and cost coefficients of each pixel of the continuous multi-frame images, and the set optimization parameters;
S32, inputting a group of variables to be optimized and the corresponding known quantities into the weighted least squares model;
S33, optimizing the variables to be optimized by the weighted least squares model, and updating the weight coefficients and cost coefficients;
S34, transmitting the updated weight coefficients to the neural network model; the neural network model updates the neural network parameters using the updated weight coefficients and transmits them back to the weighted least squares model;
S35, taking the updated neural network parameters as the new optimization parameters of the weighted least squares model, and repeating steps S33 and S34;
S36, substituting the re-optimized variables and the corresponding known quantities into the corresponding optimization objective function and judging whether it converges; if so, entering step S38, and if not, transmitting the re-optimized variables back to the weighted least squares model and entering step S37;
S37, repeating steps S35 and S36 until the optimization objective function converges, and entering step S38;
S38, outputting the optimized variables.
And the output module 204 is used for outputting various groups of characteristic parameters which finish final optimization when the final convergence condition is reached.
The present application uses a weighted least squares model combined with a neural network model to optimize the fringe background light intensity, amplitude, phase and phase shift related parameters of the fringe pattern model one by one. During optimization, the weighted least squares model forward-transmits the updated weight coefficients to the neural network model; the neural network model updates the neural network parameters using the received weight coefficients and transmits them back to the weighted least squares model; and the weighted least squares model re-optimizes with the updated neural network parameters as new optimization parameters. The re-optimized variables and the corresponding known quantities are then substituted into the corresponding optimization objective function to judge whether it converges: if so, the optimized variables are output; if not, the re-optimized variables are transmitted back to the weighted least squares model and optimization is repeated until the objective function converges and the optimized variables are output. The fringe background light intensity, amplitude, phase and phase shift related parameters are repeatedly optimized one by one in this way until the final convergence condition is reached, at which point all groups of finally optimized characteristic parameters are output. Because a neural network model, by learning the statistical regularities of a large amount of data during training, establishes abstract representations of features and patterns and maintains stable, accurate outputs, the present application uses the neural network model to obtain the weight coefficients of the weighted least squares method, thereby addressing the difficulty of selecting accurate weight coefficients.
Secondly, neural networks have end-to-end learning capability. Combining the neural network model with the weighted least squares model therefore enables end-to-end learning: the weight coefficients can be learned directly from the raw input data, and the neural network model can better adapt to different tasks and data.
Example III
Referring to fig. 7, the present application discloses an apparatus comprising:
a processor 30;
a memory 40 having stored therein executable instructions of the processor 30;
wherein the processor 30 is configured to perform the fringe pattern model optimization method of embodiment one via execution of the executable instructions.
Example IV
The application discloses a computer readable storage medium having a program stored thereon, which when executed by a processor implements a fringe pattern model optimization method as described in embodiment one.
Example five
Embodiments of the present application disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device performs the fringe pattern model optimizing method as described in the first embodiment.
It should be appreciated that in embodiments of the present application, the processor may be a central processing unit (Central Processing Unit, CPU), and may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Those skilled in the art will appreciate that all or part of the processes in the methods of the embodiments described above may be implemented by hardware associated with computer program instructions, where the program may be stored on a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The foregoing disclosure is only a preferred embodiment of the present application and is not intended to limit the scope of the claims; equivalent changes made according to the claims of the present application still fall within the scope of the present application.

Claims (10)

1. A fringe pattern model optimization method, comprising:
S1, setting up a fringe pattern model based on each pixel of continuous multi-frame images, wherein characteristic parameters of the fringe pattern model comprise fringe background light intensity, amplitude, phase and phase shift related parameters;
S2, respectively creating a corresponding optimization objective function for each group of characteristic parameters;
S3, optimizing each group of characteristic parameters one by one using an optimization model, wherein, when any group of characteristic parameters is optimized, that group of characteristic parameters is used as the variables and the other two groups of characteristic parameters are used as known quantities; the optimization model comprises a weighted least squares model and a neural network model, and the optimization method for each group of characteristic parameters comprises the following steps:
S31, constructing the weighted least squares model and the neural network model based on each group of characteristic parameters, the weight coefficients and cost coefficients of each pixel of the continuous multi-frame images, and the set optimization parameters;
S32, inputting a group of variables to be optimized and the corresponding known quantities into the weighted least squares model;
S33, optimizing the variables to be optimized by the weighted least squares model, and updating the weight coefficients and cost coefficients;
S34, transmitting the updated weight coefficients to the neural network model, wherein the neural network model updates the neural network parameters using the updated weight coefficients and transmits them back to the weighted least squares model;
S35, taking the updated neural network parameters as the new optimization parameters of the weighted least squares model, and repeating steps S33 and S34;
S36, substituting the re-optimized variables and the corresponding known quantities into the corresponding optimization objective function and judging whether the optimization objective function converges; if so, entering step S38, and if not, transmitting the re-optimized variables back to the weighted least squares model and entering step S37;
S37, repeating steps S35 and S36 until the optimization objective function converges, and entering step S38;
S38, outputting the variables that have completed optimization;
S4, repeating step S3 until the final convergence condition is reached, and outputting all groups of finally optimized characteristic parameters.
2. The fringe pattern optimization method of claim 1, wherein the fringe pattern model is:
the optimization objective function corresponding to the stripe background light intensity and the amplitude is as follows:
the optimization objective function corresponding to the phase is as follows:
the optimization objective function corresponding to the phase shift related parameter is as follows:
wherein the symbols in the above formulas denote, respectively: the fringe pattern model, the fringe background light intensity, the amplitude, the phase, the phase shift, the harmonic index and order of the fringe pattern model, the actual fringe pattern, the frame number, and the pixel number.
3. The fringe pattern model optimizing method of claim 2, wherein the parameters of the weighted least squares model comprise: an unknown vector, a residual vector and a jacobian matrix;
the unknown quantity vector, residual vector and jacobian matrix corresponding to the stripe background light intensity and the amplitude are respectively as follows:
the unknown vector, residual vector and jacobian matrix corresponding to the phase are respectively as follows:
the unknown vector, residual vector and jacobian matrix corresponding to the phase shift related parameter are respectively as follows:
wherein the symbols in the above formulas denote, respectively: the fringe background light intensity, the amplitude, the phase, the phase shift, the phase shift related parameter, the unknown vector, the residual vector, the Jacobian matrix, the harmonic index and order of the fringe pattern model, the total number of pixels per frame of image, the total number of frames, the frame number, the pixel number, the coordinates corresponding to the pixel, the total number of pixels in each of the two coordinate directions, and the Maclaurin order in each of the two coordinate directions.
4. The fringe pattern model optimization method of claim 3, wherein,
the loss function of the weighted least square method model is as follows:
wherein the symbols denote, respectively, the weight coefficient, the cost coefficient, the optimization parameter and the variable to be optimized, and the residual vector is expressed as the product of the weight coefficient and the cost coefficient;
the loss function of the neural network model is as follows:
wherein the two arguments of the loss function are the previous optimization parameter and the currently updated optimization parameter.
5. The fringe pattern model optimization method of claim 3, wherein,
the neural network model includes: an input layer, a hidden layer and an output layer;
the connection mode of the neural network model adopts a full connection mode for connection;
wherein the number of neurons of the input layer and the output layer is consistent with the magnitude of the unknown vector;
wherein the number of neurons of the hidden layer is consistent with the number of the weight coefficients.
6. The fringe pattern optimization method of claim 2, wherein the optimization objective function convergence is determined by the following inequality:
wherein the symbols denote, respectively, the optimization objective function corresponding to the variable to be optimized, the number of iterations, and a threshold.
7. The fringe pattern optimization method of claim 2, wherein the final convergence condition comprises:
selecting an optimized objective function corresponding to any group of characteristic parameters, substituting the following inequality and enabling the inequality to be established:
wherein the symbols denote, respectively, the optimization objective function corresponding to the selected group of characteristic parameters, the number of iterations, and a threshold.
8. A fringe pattern optimizing apparatus, comprising:
the system comprises a setting module, a setting module and a control module, wherein the setting module is used for setting a fringe pattern model based on each pixel of continuous multi-frame images, and characteristic parameters of the fringe pattern model comprise fringe background light intensity, amplitude, phase and phase shift related parameters;
the creation module is used for creating a corresponding optimization objective function for each group of characteristic parameters respectively;
the optimization module is used for optimizing each group of characteristic parameters one by utilizing an optimization model, when any group of characteristic parameters are optimized, the group of characteristic parameters are used as variables, the other two groups of characteristic parameters are used as known quantities, the optimization model comprises a weighted least square method model and a neural network model, and the optimization method of each group of characteristic parameters comprises the following steps:
s31, constructing the weighted least square method model and the neural network model based on each group of characteristic parameters, weight coefficients and cost coefficients of each pixel of the continuous multi-frame image and set optimization parameters;
s32, inputting a group of variables to be optimized and corresponding known quantities into the weighted least square model;
s33, optimizing the variable to be optimized by the weighted least square method model, and updating a weight coefficient and a cost coefficient;
s34, transmitting the updated weight coefficient to the neural network model, wherein the neural network model updates the neural network parameter by using the updated weight coefficient and reversely transmits the neural network parameter to the weighted least square model;
s35, taking the updated neural network parameters as the new optimized parameters of the weighted least square method model, and repeating the steps S33 and S34;
s36, substituting the re-optimized variable and the corresponding known quantity into the corresponding optimized objective function and judging whether the optimization objective function is converged, if so, entering a step S38, and if not, reversely transmitting the re-optimized variable to the weighted least square model and entering a step S37;
s37, repeating the step S35 and the step S36 until the optimization objective function converges, and entering the step S38;
s38, outputting a variable for completing optimization;
and the output module is used for outputting various groups of characteristic parameters which finish final optimization when the final convergence condition is reached.
9. An apparatus, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the fringe pattern model optimization method of any one of claims 1-7 via execution of the executable instructions.
10. A computer-readable storage medium having a program stored thereon, which when executed by a processor implements the fringe pattern model optimization method of any one of claims 1 to 7.
CN202311497957.7A 2023-11-13 2023-11-13 Fringe pattern model optimization method, fringe pattern model optimization device, fringe pattern model optimization equipment and computer-readable storage medium Active CN117237236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311497957.7A CN117237236B (en) 2023-11-13 2023-11-13 Fringe pattern model optimization method, fringe pattern model optimization device, fringe pattern model optimization equipment and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN117237236A CN117237236A (en) 2023-12-15
CN117237236B 2024-01-12

Family

ID=89095174


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105091781A (en) * 2015-05-21 2015-11-25 中国科学院光电技术研究所 Optical surface measuring method and apparatus using single-frame interference fringe pattern
CN112082512A (en) * 2020-09-08 2020-12-15 深圳广成创新技术有限公司 Calibration optimization method and device for phase measurement deflection technique and computer equipment
CN116645466A (en) * 2023-04-12 2023-08-25 杭州华橙软件技术有限公司 Three-dimensional reconstruction method, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant