CN110210119A - An efficient phase unwrapping method based on a deep convolutional neural network - Google Patents

An efficient phase unwrapping method based on a deep convolutional neural network

Info

Publication number
CN110210119A
Authority
CN
China
Prior art keywords
phase
deep convolutional neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910467171.8A
Other languages
Chinese (zh)
Inventor
王辰星
汪懋荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201910467171.8A
Publication of CN110210119A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 - Computer-aided design [CAD]
    • G06F 30/20 - Design optimisation, verification or simulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The present invention provides an efficient phase unwrapping method based on a deep convolutional neural network. The method includes the following steps. Step 1: obtain wrapped phase and phase wrap count data by software simulation, and thereby establish a training sample database. Step 2: construct a deep convolutional neural network with residual connections using convolutional layers, pooling layers, batch normalization, the ReLU activation function, upsampling layers and a Softmax classifier. Step 3: preprocess the data set obtained in Step 1, use the preprocessed images as training data, and train the deep convolutional neural network model to obtain the network model parameters. Step 4: input the wrapped phase to be unwrapped, and use the convolutional neural network model from Step 3 to unwrap and visualize it. The present invention solves the problem of constructing the sample database and achieves high-precision phase unwrapping while guaranteeing measurement efficiency.

Description

An efficient phase unwrapping method based on a deep convolutional neural network
Technical field
The present invention relates to an efficient phase unwrapping method based on a deep convolutional neural network, and belongs to the fields of optics, computer vision and artificial intelligence.
Background art
In recent years, with the development of computer technology, three-dimensional profile measurement of objects has become an important branch of the computational measurement field and is widely used in many areas, such as face recognition and reconstruction, and satellite radar interferometry. Among the many three-dimensional profile measurement methods, optical three-dimensional shape measurement based on phase analysis has received extensive attention and research owing to its advantages of non-contact operation, fast measurement speed and high measurement accuracy. Optical measurement based on phase analysis can be roughly divided into three steps: phase retrieval, phase unwrapping, and the mapping from phase to three-dimensional shape depth.
In three-dimensional profile measurement based on phase analysis, the phase distribution obtained during phase retrieval is computed through an arctangent operation, so the calculated phase values are truncated to the range (-π, π] of the arctangent function; this is called wrapped phase. In order for the phase to fully reflect the surface of the three-dimensional object, the wrapped phase must be corrected into the absolute phase; this process is called phase unwrapping.
Because of the complexity of the phase unwrapping problem, the accuracy of phase unwrapping largely determines the measurement accuracy of the entire measurement system. Common factors affecting phase unwrapping accuracy include interference noise, sharp phase variations and phase discontinuities. Currently used phase unwrapping methods are mainly divided into temporal phase unwrapping and spatial phase unwrapping. The former requires several auxiliary measurement images, so achieving dynamic, rapid measurement places high demands on the frame rate of the hardware; the latter is based on a single phase map, but its measurement accuracy and robustness are lower and its computational cost is high, so it still cannot meet the needs of practical rapid measurement. At present there is no method that combines robustness and measurement efficiency.
Deep learning has developed rapidly in recent years and has been shown to possess powerful feature-extraction capability. In many fields, such as artificial intelligence, computer vision and optical measurement, methods based on deep learning in most cases represent the state of the art. Deep learning is a framework that contains many important algorithms, such as convolutional neural networks, autoencoders, restricted Boltzmann machines and recurrent neural networks.
As a classical solution in deep learning, the convolutional neural network occupies a self-evident position in the field. In theory, deeper convolutional neural networks possess stronger feature-extraction and classification capability; in practice, however, deep convolutional neural networks suffer from the degradation problem as depth increases, which makes them difficult to train well. A relatively mature approach to the degradation problem in the field is to use structures with residual connections, such as residual networks. The present invention exploits the powerful classification capability of deep convolutional neural networks and uses residual connections to aid network training, thereby achieving phase unwrapping.
Summary of the invention
To solve the above problems, the invention discloses an efficient phase unwrapping method based on a deep convolutional neural network, which solves the problem of constructing the sample database and achieves high-precision phase unwrapping while guaranteeing measurement efficiency.
The above purpose is achieved through the following technical solution:
An efficient phase unwrapping method based on a deep convolutional neural network comprises the following steps:
Step 1: obtain wrapped phase and phase wrap count data by software simulation, and thereby establish a training sample database;
Step 2: construct a deep convolutional neural network with residual connections using convolutional layers, pooling layers, batch normalization, the ReLU activation function, upsampling layers and a Softmax classifier;
Step 3: preprocess the data set obtained in Step 1, use the preprocessed images as training data, and train the deep convolutional neural network model to obtain the network model parameters;
Step 4: input the wrapped phase to be unwrapped, unwrap it using the convolutional neural network model from Step 3, and visualize the result.
In the above efficient phase unwrapping method based on a deep convolutional neural network, when the wrapped phase is generated by simulation in Step 1, carriers of different frequencies are superimposed on the wrapped phase, or no carrier is superimposed, i.e. the carrier frequency is 0; at the same time, the phase wrap count is added as a label according to the wrapped phase before superposition. According to the carrier frequency, databases are established for no carrier, low carrier frequency (fewer than 5 complete carrier periods in one image), medium carrier frequency (5 to 10 complete carrier periods in one image) and high carrier frequency (more than 10 complete carrier periods in one image). The data are standardized in format, so that the processed wrapped-phase images and wrap-count labels are 8-bit single-channel images.
In the above efficient phase unwrapping method based on a deep convolutional neural network, when the wrapped phase is generated by simulation in Step 1, the wrapped phase φ(x, y), the unwrapped phase Φ(x, y), the phase wrap count k(x, y) and the carrier phase principal value f(x, y) satisfy:
Φ(x, y) = φ(x, y) + 2πk(x, y) - f(x, y)
where the phase wrap count k(x, y) is an integer, the value range of the wrapped phase φ(x, y) is [0, 2π), and the value range of the carrier phase f(x, y) is [0, 2π).
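For illustration, a minimal numpy sketch (not part of the patent; the toy Gaussian surface and the 5-period carrier are assumptions) that generates a wrapped phase and the integer wrap-count label satisfying the relation above:

import numpy as np

x = np.linspace(0, 1, 256)
X, Y = np.meshgrid(x, x)
Phi = 6*np.pi * np.exp(-((X - 0.4)**2 + (Y - 0.6)**2) / 0.05)   # toy absolute phase (radians)
f = np.mod(2*np.pi * 5 * X, 2*np.pi)                            # carrier principal value in [0, 2*pi), 5 periods
wrapped = np.mod(Phi + f, 2*np.pi)                              # wrapped phase in [0, 2*pi)
k = np.rint((Phi + f - wrapped) / (2*np.pi)).astype(int)        # integer wrap count, used as the label
assert np.allclose(Phi, wrapped + 2*np.pi*k - f)                # the stated relation holds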
In the above efficient phase unwrapping method based on a deep convolutional neural network, during the construction of the deep convolutional neural network in Step 2, residual connections are added to the deep convolutional neural network for phase unwrapping in order to avoid the degradation problem of deep convolutional neural networks; the function of this step can also be realized by adding or deleting some residual connections, or by using a residual network or its residual modules and their variants.
In the above efficient phase unwrapping method based on a deep convolutional neural network, during the construction of the deep convolutional neural network in Step 2, a Softmax classifier is used at the end of the deep convolutional neural network for classification; the function of this step can also be realized with a support vector machine classifier or a k-nearest-neighbor classifier.
In the above efficient phase unwrapping method based on a deep convolutional neural network, during the construction of the deep convolutional neural network in Step 2, the ReLU function is used as the nonlinear activation function of the deep convolutional neural network; the function of this step can also be realized with the sigmoid function or the hyperbolic tangent function.
In the above efficient phase unwrapping method based on a deep convolutional neural network, the preprocessed phase data in Step 3 are divided into three parts: training samples, validation samples and test samples. The deep convolutional neural network is trained with the training samples, the network parameters are updated using a supervised training method and the back-propagation algorithm, and the deep convolutional neural network is trained on a graphics processing unit (GPU). During training, the validation samples are used to observe the training process and the network's performance. Finally, the test samples are fed into the deep convolutional neural network for testing.
Beneficial effects:
Compared with traditional phase unwrapping methods, the method provided by the invention combines high efficiency with high precision, and achieves better efficiency and accuracy than conventional solutions;
Compared with other phase unwrapping methods based on deep learning, such as the method based on a feed-forward multilayer perceptron network proposed by Schwartzkopf et al., the phase unwrapping method of the invention has higher accuracy, and its unwrapping result is path-independent;
Compared with the convolutional neural network used by Spoorthi et al., the network of the invention occupies significantly less memory, is more accurate, and is noticeably easier to train.
Brief description of the drawings
Fig. 1 is a schematic diagram of the overall algorithm of the invention, including the training of the deep convolutional neural network and the principle of performing phase unwrapping with the trained network.
Fig. 2 is a schematic diagram of the structure of the deep convolutional neural network used by the present invention.
Specific embodiment
Referring to Fig. 1, in order to solve the phase unwrapping problem, the technical solution adopted by the present invention provides a method based on a deep convolutional neural network, comprising the following steps:
Step 1: constructing the data set. On a plane, four two-dimensional Gaussian functions with random positions, standard deviations and peak values are superimposed to simulate the absolute phase of a three-dimensional shape found in the real world; the principal value of a carrier phase is superimposed on the absolute phase, and the principal value of the superimposed result is taken to simulate the wrapped phase obtained by three-dimensional profile measurement; the phase wrap count corresponding to the wrapped phase is determined by the number of periods cut off when the principal value is taken from the absolute phase. The following Matlab code implements the key algorithms of data-set construction and data preprocessing in this embodiment.
First, the absolute phase of the three-dimensional shape is simulated:
phase(x,y) = exp(-(((x-128)+ox1).^2 + ((y-128)+oy1).^2)./sig1^2)*p1 ...
           + exp(-(((x-128)+ox2).^2 + ((y-128)+oy2).^2)./sig2^2)*p2 ...
           + exp(-(((x-128)+ox3).^2 + ((y-128)+oy3).^2)./sig3^2)*p3 ...
           + exp(-(((x-128)+ox4).^2 + ((y-128)+oy4).^2)./sig4^2)*p4;
where ox1, ox2, ox3, ox4, oy1, oy2, oy3, oy4 are eight randomly generated Gaussian position-offset parameters; sig1, sig2, sig3, sig4 are four randomly generated Gaussian standard-deviation parameters; and p1, p2, p3, p4 are four randomly generated Gaussian peak parameters whose signs are random.
Then, the wrapped phase and the phase wrap count corresponding to the above absolute phase of the three-dimensional shape are simulated:
n(x,y) = floor(phase(x,y));
phasei(x,y) = phase(x,y) - n(x,y) + carry(x,y) - floor(phase(x,y) - n(x,y) + carry(x,y));
where n(x,y) is the simulated phase wrap count, carry(x,y) is the principal value of the simulated carrier phase, and phasei(x,y) is the simulated wrapped phase.
Finally, the obtained phase data are preprocessed and written into the phase database.
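A minimal preprocessing sketch (illustrative only, not the patent's actual code; the quantization to 256 grey levels, the wrap-count offset k_min, the class count and the file names are assumptions) that writes the wrapped phase and the wrap-count label as 8-bit single-channel images, as required in Step 1:

import numpy as np
from PIL import Image

def save_sample(wrapped, k, index, k_min=-10, n_classes=21):
    # Wrapped phase in [0, 2*pi) -> 8-bit grey levels 0..255 (single channel).
    img = np.uint8(np.round(wrapped / (2*np.pi) * 255.0))
    # Integer wrap count -> class index 0..n_classes-1, stored as an 8-bit label image.
    lbl = np.uint8(np.clip(k - k_min, 0, n_classes - 1))
    Image.fromarray(img, mode="L").save(f"wrapped_{index:05d}.png")
    Image.fromarray(lbl, mode="L").save(f"label_{index:05d}.png")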
Step 2: construct a deep convolutional neural network with residual connections using convolutional layers, pooling layers, batch normalization, the ReLU activation function, upsampling layers and a Softmax classifier.
Convolutional layer: an operation layer that convolves the input image with different convolution kernels to obtain feature maps one by one; it is a very traditional module in convolutional neural networks.
Pooling layer: a module that compresses the input data and removes unimportant samples from the feature maps, thereby reducing the number of training parameters, lowering the network's memory consumption and preventing overfitting; it is a conventional module in convolutional neural networks. There are mainly two kinds of pooling, average pooling and max pooling; based on experience, the present invention selects max pooling.
Batch normalization layer: a layer that normalizes the output of the convolutional layer in order to avoid vanishing and exploding gradients during network training. Batch normalization has become a fairly standard operation in the field of convolutional neural networks in recent years.
Activation function ReLU: the purpose of the activation function is to introduce nonlinearity during network training and thereby avoid vanishing gradients. The activation function used in this embodiment is ReLU; activation functions such as sigmoid or tanh can also realize this function.
Upsampling layer: the counterpart of the pooling layer; it expands the features extracted by the network so that the output image has the same size as the wrapped-phase image to be unwrapped.
Softmax classifier: the mainstream classifier for classification in recent years; other classifiers, such as support vector machines (SVM), can also realize this function.
Residual connection: the network of the invention is characterized by its depth, which exploits the powerful feature-extraction ability of deep convolutional neural networks; the side effect of such a deep network is that network degradation makes it difficult to train. To solve this problem, the common practice in the field is to introduce residual connections into the network (see the reference "Deep Residual Learning for Image Recognition" by Kaiming He et al.). A residual connection only applies a scaling operation to the original data and performs no complicated operations, so it preserves the gradient of the original data to the greatest extent and avoids the network degradation problem.
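As an illustration of such a residual block, the following PyTorch sketch assumes two 3x3 convolutions per block with batch normalization and ReLU; the original implementation is in Caffe and its exact block layout is not reproduced here:

import torch.nn as nn

class ResBlock(nn.Module):
    # Convolution + batch normalization + ReLU, with the input added back unchanged,
    # so the gradient of the original data is preserved and degradation is avoided.
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # residual connection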
Referring to Fig. 2, the figure shows one embodiment of the deep convolutional neural network provided by the invention. The novelty of the invention lies in being the first to combine recent techniques such as residual connections, batch normalization and the Softmax classifier to build a phase unwrapping network. Thanks to the residual connections, the network can be very deep, with 38 cascaded convolutional layers in total, which guarantees powerful feature-extraction capability. At the same time, the invention uses various optimization techniques to compress the memory requirement of network training to 4 GB while preserving the network's performance, which is sufficient to meet the configuration of most mainstream hardware. If a more cutting-edge half-precision deep learning framework is used, the memory requirement of network training can be reduced to 2 GB.
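A compact PyTorch sketch of the overall layout the description refers to, using the ResBlock sketched above (the channel count, number of blocks and the 21 wrap-count classes are illustrative assumptions; the patented Fig. 2 network with 38 cascaded convolutional layers is not reproduced here):

import torch.nn as nn

class UnwrapNet(nn.Module):
    def __init__(self, n_classes=21, ch=64, n_blocks=4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1),
                                  nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2)                         # max pooling, as selected above
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.up = nn.Upsample(scale_factor=2)               # upsampling restores the input size
        self.head = nn.Conv2d(ch, n_classes, 1)             # per-pixel logits over wrap-count classes

    def forward(self, x):                                   # x: (B, 1, H, W) wrapped phase
        x = self.stem(x)
        x = self.blocks(self.pool(x))
        x = self.up(x)
        return self.head(x)   # Softmax is applied inside the cross-entropy loss during training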
Step 3: preprocess the data set obtained in Step 1, use the preprocessed images as training data, and train the deep convolutional neural network model with the supervised back-propagation algorithm to obtain the network model parameters. The detailed process is as follows.
The data in the database obtained in Step 1 are preprocessed into a form that Caffe, the deep learning framework used by the invention, can accept. The fairly standard back-propagation algorithm is used, with the Softmax cross entropy as the loss function of this embodiment and the momentum optimization algorithm, to update the parameters of the network. Meanwhile, through repeated trials, a set of network training hyperparameters with relatively good results was obtained: the learning rate is 0.0008, the momentum is 0.9, and the batch size is 4; after training for 50-100 epochs, relatively good training results are finally obtained. The purpose of this step can also be achieved by fine-tuning the given network hyperparameters.
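The embodiment trains in Caffe; the following PyTorch sketch (an illustrative equivalent with an assumed data loader, not the original training script) applies the same recipe: Softmax cross-entropy loss, momentum optimization, learning rate 0.0008, momentum 0.9, batch size 4, and 50-100 epochs:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, dataset, epochs=100, device="cuda"):
    loader = DataLoader(dataset, batch_size=4, shuffle=True)                  # batch size 4
    optimizer = torch.optim.SGD(model.parameters(), lr=0.0008, momentum=0.9)  # momentum optimization
    loss_fn = nn.CrossEntropyLoss()                                           # Softmax cross-entropy
    model.to(device).train()
    for _ in range(epochs):                                                   # 50-100 epochs
        for wrapped, label in loader:   # wrapped: (B,1,H,W) float, label: (B,H,W) int64 wrap-count classes
            wrapped, label = wrapped.to(device), label.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(wrapped), label)
            loss.backward()                                                   # back-propagation
            optimizer.step()                                                  # update network parameters
    return model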
Step 4: referring to Fig. 1, using the deep convolutional neural network model trained in Step 3, the wrapped-phase information to be unwrapped is fed to the network input, and the network computes the phase wrap count of the wrapped phase. Combined with visualization code written by the invention in Python, the wrapped phase and the phase wrap count in Fig. 1 are superimposed, which finally achieves the purpose of phase unwrapping and yields a high-precision absolute phase.
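A sketch of this final step (illustrative only; the matplotlib display and the class-to-count offset k_min are assumptions, and the superposition wrapped + 2πk follows the relation stated in Step 1):

import torch
import numpy as np
import matplotlib.pyplot as plt

def unwrap_and_show(model, wrapped, k_min=-10, device="cuda"):
    # wrapped: 2-D numpy array in [0, 2*pi); the network predicts one wrap-count class per pixel.
    model.to(device).eval()
    with torch.no_grad():
        inp = torch.from_numpy(wrapped).float()[None, None].to(device)   # shape (1, 1, H, W)
        k = model(inp).argmax(dim=1)[0].cpu().numpy() + k_min            # class index -> integer wrap count
    absolute = wrapped + 2*np.pi*k          # superimpose wrapped phase and wrap count
    plt.imshow(absolute)
    plt.colorbar()
    plt.title("Unwrapped phase")
    plt.show()
    return absolute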
The above is only one embodiment of the present invention. By adding or deleting some convolutional layers, adding or deleting some residual connections, changing some or all of the network training hyperparameters, or using other network training optimization algorithms, the network structure provided by the invention can have many variants. Network variants obtained by modifying the network in the above ways without creative effort by those skilled in the art are all included within the scope of the present invention.

Claims (7)

1. An efficient phase unwrapping method based on a deep convolutional neural network, characterized in that the method comprises the following steps:
Step 1: obtaining wrapped phase and phase wrap count data by software simulation, and thereby establishing a training sample database;
Step 2: constructing a deep convolutional neural network with residual connections using convolutional layers, pooling layers, batch normalization, the ReLU activation function, upsampling layers and a Softmax classifier;
Step 3: preprocessing the data set obtained in Step 1, using the preprocessed images as training data, and training the deep convolutional neural network model to obtain the network model parameters;
Step 4: inputting the wrapped phase to be unwrapped, unwrapping it using the convolutional neural network model from Step 3, and visualizing the result.
2. The efficient phase unwrapping method based on a deep convolutional neural network according to claim 1, characterized in that: when the wrapped phase is generated by simulation in Step 1, carriers of different frequencies are superimposed on the wrapped phase, or no carrier is superimposed, i.e. the carrier frequency is 0; at the same time, the phase wrap count is added as a label according to the wrapped phase before superposition; according to the carrier frequency, databases are established for no carrier, low carrier frequency, i.e. fewer than 5 complete carrier periods in one image, medium carrier frequency, i.e. 5 to 10 complete carrier periods in one image, and high carrier frequency, i.e. more than 10 complete carrier periods in one image; the data are standardized in format, and the processed wrapped-phase images and wrap-count labels are 8-bit single-channel images.
3. The efficient phase unwrapping method based on a deep convolutional neural network according to claim 1, characterized in that: when the wrapped phase is generated by simulation in Step 1, the wrapped phase φ(x, y), the unwrapped phase Φ(x, y), the phase wrap count k(x, y) and the carrier phase principal value f(x, y) satisfy:
Φ(x, y) = φ(x, y) + 2πk(x, y) - f(x, y)
where the phase wrap count k(x, y) is an integer, the value range of the wrapped phase φ(x, y) is [0, 2π), and the value range of the carrier phase f(x, y) is [0, 2π).
4. The efficient phase unwrapping method based on a deep convolutional neural network according to claim 1, characterized in that: during the construction of the deep convolutional neural network in Step 2, residual connections are added to the deep convolutional neural network for phase unwrapping in order to avoid the degradation problem of deep convolutional neural networks; the function of this step can also be realized by adding or deleting some residual connections, or by using a residual network or its residual modules and their variants.
5. The efficient phase unwrapping method based on a deep convolutional neural network according to claim 1, characterized in that: during the construction of the deep convolutional neural network in Step 2, a Softmax classifier is used at the end of the deep convolutional neural network for classification; the function of this step can also be realized with a support vector machine classifier or a k-nearest-neighbor classifier.
6. The efficient phase unwrapping method based on a deep convolutional neural network according to claim 1, characterized in that: during the construction of the deep convolutional neural network in Step 2, the ReLU function is used as the nonlinear activation function of the deep convolutional neural network; the function of this step can also be realized with the sigmoid function or the hyperbolic tangent function.
7. The efficient phase unwrapping method based on a deep convolutional neural network according to claim 1, characterized in that: the preprocessed phase data in Step 3 are divided into three parts, namely training samples, validation samples and test samples; the deep convolutional neural network is trained with the training samples, the network parameters are updated using a supervised training method and the back-propagation algorithm, and the deep convolutional neural network is trained on a graphics processing unit (GPU); during training, the validation samples are used to observe the training process and the network's performance; finally, the test samples are fed into the deep convolutional neural network for testing.
CN201910467171.8A 2019-05-30 2019-05-30 An efficient phase unwrapping method based on a deep convolutional neural network Pending CN110210119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910467171.8A CN110210119A (en) 2019-05-30 2019-05-30 An efficient phase unwrapping method based on a deep convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910467171.8A CN110210119A (en) 2019-05-30 2019-05-30 An efficient phase unwrapping method based on a deep convolutional neural network

Publications (1)

Publication Number Publication Date
CN110210119A true CN110210119A (en) 2019-09-06

Family

ID=67789821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910467171.8A Pending CN110210119A (en) 2019-05-30 2019-05-30 An efficient phase unwrapping method based on a deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN110210119A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103940371A (en) * 2014-05-12 2014-07-23 电子科技大学 High-precision three-dimensional shape measurement method for jump object
CN109253708A (en) * 2018-09-29 2019-01-22 南京理工大学 A kind of fringe projection time phase method of deploying based on deep learning
CN109307483A (en) * 2018-11-20 2019-02-05 西南石油大学 A kind of phase developing method based on structured-light system geometrical constraint

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
G. E. SPOORTHI et al.: "PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping", IEEE Signal Processing Letters *
KAIQIANG WANG et al.: "One-step robust deep learning phase unwrapping", Optics Express *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110500957B (en) * 2019-09-10 2021-09-14 中国科学院苏州纳米技术与纳米仿生研究所 Active three-dimensional imaging method, device, equipment and storage medium
CN110500957A (en) * 2019-09-10 2019-11-26 中国科学院苏州纳米技术与纳米仿生研究所 A kind of active three-D imaging method, device, equipment and storage medium
CN110751268B (en) * 2019-09-27 2022-07-26 北京理工大学 Phase aliasing error removing method and device based on end-to-end convolutional neural network
CN110751268A (en) * 2019-09-27 2020-02-04 北京理工大学 Phase aliasing error removing method and device based on end-to-end convolutional neural network
CN111189414A (en) * 2020-01-09 2020-05-22 西安知象光电科技有限公司 Real-time single-frame phase extraction method
CN111189414B (en) * 2020-01-09 2021-09-03 西安知象光电科技有限公司 Real-time single-frame phase extraction method
CN111351450A (en) * 2020-03-20 2020-06-30 南京理工大学 Single-frame stripe image three-dimensional measurement method based on deep learning
CN111351450B (en) * 2020-03-20 2021-09-28 南京理工大学 Single-frame stripe image three-dimensional measurement method based on deep learning
CN111461224B (en) * 2020-04-01 2022-08-16 西安交通大学 Phase data unwrapping method based on residual self-coding neural network
CN111461224A (en) * 2020-04-01 2020-07-28 西安交通大学 Phase data unwrapping method based on residual self-coding neural network
CN111561877A (en) * 2020-04-24 2020-08-21 西安交通大学 Variable resolution phase unwrapping method based on point diffraction interferometer
CN111561877B (en) * 2020-04-24 2021-08-13 西安交通大学 Variable resolution phase unwrapping method based on point diffraction interferometer
CN111797678A (en) * 2020-05-15 2020-10-20 华南师范大学 Phase unwrapping method and device based on composite neural network
CN111797678B (en) * 2020-05-15 2023-07-07 华南师范大学 Phase unwrapping method and device based on composite neural network
CN111928794B (en) * 2020-08-04 2022-03-11 北京理工大学 Closed fringe compatible single interference diagram phase method and device based on deep learning
CN111928794A (en) * 2020-08-04 2020-11-13 北京理工大学 Closed fringe compatible single interference diagram phase method and device based on deep learning
CN112116616B (en) * 2020-08-05 2022-06-07 西安交通大学 Phase information extraction method based on convolutional neural network, storage medium and equipment
CN112116616A (en) * 2020-08-05 2020-12-22 西安交通大学 Phase information extraction method based on convolutional neural network, storage medium and equipment
CN111794741A (en) * 2020-08-11 2020-10-20 中国石油天然气集团有限公司 Method for realizing sliding directional drilling simulator
CN111794741B (en) * 2020-08-11 2023-08-18 中国石油天然气集团有限公司 Method for realizing sliding directional drilling simulator
CN112836422A (en) * 2020-12-31 2021-05-25 电子科技大学 Interference and convolution neural network mixed scheme measuring method
CN112836422B (en) * 2020-12-31 2022-03-18 电子科技大学 Interference and convolution neural network mixed scheme measuring method
CN113093379A (en) * 2021-03-25 2021-07-09 上海交通大学 Photon Itanium machine-oriented orthogonal space phase modulation method
CN113093379B (en) * 2021-03-25 2022-02-25 上海交通大学 Photon Itanium machine-oriented orthogonal space phase modulation method
CN113314216A (en) * 2021-06-01 2021-08-27 南方科技大学 Functional brain network construction method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN110210119A (en) An efficient phase unwrapping method based on a deep convolutional neural network
US11574097B2 (en) Deep learning based identification of difficult to test nodes
Howard et al. Mobilenets: Efficient convolutional neural networks for mobile vision applications
US20190295228A1 (en) Image in-painting for irregular holes using partial convolutions
US20190035113A1 (en) Temporally stable data reconstruction with an external recurrent neural network
US20190294972A1 (en) Representing a neural network utilizing paths within the network to improve a performance of the neural network
CN110176054A (en) For training the generation of the composograph of neural network model
CN108416327A (en) A kind of object detection method, device, computer equipment and readable storage medium storing program for executing
US11062471B1 (en) Neural network system for stereo image matching
CN112950775A (en) Three-dimensional face model reconstruction method and system based on self-supervision learning
US11961001B2 (en) Parallel forward and backward propagation
CN111210498A (en) Reducing the level of detail of a polygon mesh to reduce the complexity of rendered geometry
CN109936745A (en) For improving the method and system of the decompression of original video data
Denninger et al. 3d scene reconstruction from a single viewport
DE102021121109A1 (en) RECOVERY OF THREE-DIMENSIONAL MODELS FROM TWO-DIMENSIONAL IMAGES
DE102019106996A1 (en) PRESENTING A NEURONAL NETWORK USING PATHS INSIDE THE NETWORK TO IMPROVE THE PERFORMANCE OF THE NEURONAL NETWORK
DE102022107232A1 (en) PACKED ERROR CORRECTION CODE (ECC) FOR COMPRESSED PRIVACY
CN116736624A (en) Parallel mask rule checking for evolving mask shapes in an optical proximity correction stream
Hattori et al. Learning self-prior for mesh denoising using dual graph convolutional networks
DE112019001978T5 (en) IMPROVING THE REALISM OF SCENES WITH WATER SURFACES DURING RENDERING
CN103678888B (en) The flowing of a kind of heart blood based on Euler's fluid simulation algorithm schematically shows method
DE102021114013A1 (en) TECHNIQUES FOR EFFICIENT SCANNING OF AN IMAGE
Mlakar et al. Subdivision‐specialized linear algebra kernels for static and dynamic mesh connectivity on the gpu
CN106910246A (en) Speckle three-D imaging method and device that space-time is combined
CN112861977B (en) Migration learning data processing method, system, medium, equipment, terminal and application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190906