CN111947599A - Three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation - Google Patents


Info

Publication number
CN111947599A
CN111947599A
Authority
CN
China
Prior art keywords
phase
fringe
learning
speckle
dimensional measurement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010721119.3A
Other languages
Chinese (zh)
Other versions
CN111947599B (en)
Inventor
尹维
左超
陈钱
冯世杰
孙佳嵩
胡岩
尚昱昊
陶天阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202010721119.3A priority Critical patent/CN111947599B/en
Publication of CN111947599A publication Critical patent/CN111947599A/en
Application granted granted Critical
Publication of CN111947599B publication Critical patent/CN111947599B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254Projection of a pattern, viewing through a pattern, e.g. moiré

Abstract

The invention discloses a learning-based fringe phase retrieval and speckle correlation three-dimensional measurement method. An initial disparity map is first generated from a speckle pattern using a stereo matching network. A wrapped phase map is then extracted with high accuracy from an additional fringe pattern using a U-net network. The wrapped phase map serves as an additional constraint to optimize the initial disparity map, thereby finally realizing high-speed, high-accuracy absolute three-dimensional topography measurement. The invention requires only two projected patterns to achieve high-speed, high-accuracy absolute three-dimensional shape measurement.

Description

Three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation
Technical Field
The invention belongs to the technical field of optical measurement, and particularly relates to a three-dimensional measurement method based on learning fringe phase recovery and speckle correlation.
Background
At present, rapid three-dimensional topography measurement technology is widely applied in many fields, such as intelligent monitoring, industrial quality control and three-dimensional face recognition. Among the numerous three-dimensional topography measurement methods, fringe projection profilometry, based on structured light and the triangulation principle, is one of the most practical techniques because it is non-contact, full-field, high-accuracy and high-efficiency. Mainstream fringe projection profilometry generally requires three processes to realize three-dimensional measurement: phase recovery, phase unwrapping and phase-to-height mapping. Among phase recovery techniques, the two most commonly used methods are Fourier profilometry and phase-shift profilometry. Fourier profilometry can extract the phase from only one fringe pattern, but it suffers from spectrum aliasing, so the quality of the measurement result is poor and objects with complex topography cannot be measured. Compared with Fourier profilometry, phase-shift profilometry is insensitive to ambient light and achieves pixel-wise phase measurement, which makes it suitable for measuring objects with complex surfaces. However, this method typically requires the projection of multiple (at least three) phase-shifted fringe patterns to achieve phase extraction. With the rapid development of high-speed cameras and DLP projection technology, phase-shift profilometry can also be used for rapid three-dimensional measurement. However, both Fourier and phase-shift profilometry extract the phase with an arctangent function, which confines the phase to a principal range spanning 2π; therefore both methods can only obtain wrapped phase maps, in which phase jumps of 2π exist.
Therefore, a phase unwrapping technique is necessary to convert the wrapped phase map into an absolute phase map. The mainstream phase unwrapping methods at present are time-domain (temporal) phase unwrapping and spatial phase unwrapping. On the one hand, spatial phase unwrapping requires only a single wrapped phase map, but it cannot effectively measure complex objects or multiple isolated objects, and phase unwrapping errors easily occur. On the other hand, temporal phase unwrapping can stably unwrap the wrapped phase, but requires multiple wrapped phase maps of different frequencies, which greatly reduces the efficiency of phase unwrapping and thus the speed of three-dimensional measurement. Three temporal phase unwrapping techniques are commonly used: the multi-frequency method, the multi-wavelength method and the number-theory method. Among them, the multi-frequency method achieves the best phase unwrapping result, while the multi-wavelength method is the most sensitive to noise (see "Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review" by Chao Zuo et al.). The principle of the multi-frequency method is to use a single-period low-frequency absolute phase to unwrap a high-frequency wrapped phase map; due to noise in the measurement process, the multi-frequency method can typically only unwrap directly a wrapped phase map with a frequency of about 20. Since higher-frequency phase maps yield higher accuracy, multiple groups of fringe patterns with different frequencies must be projected to realize high-precision three-dimensional measurement. This further reduces the measurement efficiency of fringe projection profilometry and thus limits its ability to measure moving objects.
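The multi-frequency unwrapping principle described above can be sketched numerically. This is a minimal illustration, not the patent's implementation: it assumes a noise-free synthetic signal and a single-period (frequency-1) reference phase.

```python
import numpy as np

def unwrap_multifrequency(phi_low, phi_high, freq_ratio):
    """Unwrap a high-frequency wrapped phase using a single-period low-frequency phase.

    phi_low    : absolute phase of the single-period fringe
    phi_high   : wrapped high-frequency phase, in (-pi, pi]
    freq_ratio : ratio f_high / f_low
    """
    # Fringe order k counts the 2*pi jumps to be restored in the high-frequency phase.
    k = np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Simulated ground truth over one low-frequency period
x = np.linspace(0, 1, 1000, endpoint=False)
phi_true = 2 * np.pi * 16 * x                  # high-frequency absolute phase (f = 16)
phi_low = 2 * np.pi * x                        # single-period phase, already absolute
wrap = lambda p: np.angle(np.exp(1j * p))      # wrap into (-pi, pi]
phi_unwrapped = unwrap_multifrequency(phi_low, wrap(phi_true), 16)
assert np.allclose(phi_unwrapped, phi_true)
```

With real, noisy data the rounding step fails once the noise in `freq_ratio * phi_low` exceeds pi, which is why the frequency ratio that can be bridged in one step is limited (about 20, as noted above).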
Therefore, for the three-dimensional imaging technology based on fringe projection profilometry, a method with both measurement accuracy and measurement efficiency is not available at present.
Disclosure of Invention
The invention aims to provide a three-dimensional measurement method based on learning fringe phase recovery and speckle correlation.
The technical solution for realizing the purpose of the invention is as follows: a three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation comprises the following specific steps:
step 1: synchronously acquiring a high-frequency fringe pattern and a speckle pattern with two cameras;
step 2: obtaining an initial disparity map, based on a stereo matching network, from the speckle patterns synchronously acquired by the two cameras;
step 3: inputting the high-frequency fringe pattern into the trained U-net network to obtain the wrapped phase;
step 4: optimizing the initial disparity map using the wrapped phase constraint to obtain a high-precision disparity map;
step 5: converting the high-precision disparity map into three-dimensional information according to the calibration parameters of the cameras to complete the three-dimensional measurement.
Preferably, the specific method for obtaining the initial disparity map based on the stereo matching network is as follows:
performing stereo rectification on the two speckle patterns from the two different viewing angles, and cropping the speckle patterns into block data of different sizes;
inputting the block data into the trained stereo matching network to obtain the initial matching cost;
optimizing the initial matching cost with a cost aggregation method based on semi-global matching;
taking the candidate disparity with the lowest cost value as the integer-pixel disparity value through the winner-takes-all algorithm, and obtaining a disparity map with sub-pixel precision through a five-point quadratic curve fitting model.
Preferably, the stereo matching network comprises a shared sub-network and a plurality of fully connected layers with shared weights; the shared sub-network comprises a convolution layer and a plurality of residual blocks stacked sequentially at the front end, and nine convolution layers stacked at the back end.
Preferably, the labels in the U-net network training are the numerator term M(x, y) and denominator term D(x, y) of the arctangent function, obtained using a three-step phase-shift algorithm and with the background information removed.
Preferably, the specific method for removing the background information is as follows:
and comparing the actual fringe modulation degree corresponding to the pixel point with a set threshold, and if the actual fringe modulation degree corresponding to the pixel point is lower than the threshold, setting the numerator item and the denominator item of the arctangent function corresponding to the pixel point to be 0.
Preferably, the actual fringe modulation degree corresponding to the pixel point is specifically:
Modulation(x, y) = (1/3) √( 3[I1(x, y) − I3(x, y)]² + [2I2(x, y) − I1(x, y) − I3(x, y)]² )
where I1(x, y), I2(x, y), I3(x, y) are the intensities of the corresponding three-step phase-shifted fringe patterns, and Modulation(x, y) is the actual fringe modulation.
Preferably, inputting the high-frequency fringe pattern into the trained U-net network to obtain the wrapped phase specifically comprises:
inputting the high-frequency fringe pattern into the trained U-net network to obtain the numerator term M(x, y) and denominator term D(x, y) of the arctangent function;
obtaining the wrapped phase map φ(x, y) from M(x, y) and D(x, y), specifically:
φ(x, y) = arctan( M(x, y) / D(x, y) )
Preferably, the specific method for optimizing the initial disparity map using the wrapped phase constraint to obtain the high-precision disparity map is as follows:
according to the initial disparity map and the wrapped phase information, stereo matching based on the phase information is realized by minimizing the difference between the wrapped phases of the two views, obtaining integer-pixel matching points;
sub-pixel matching is completed through a linear interpolation algorithm using the phase information of each integer-pixel matching point and its neighbouring points, obtaining a high-precision disparity map.
Compared with the prior art, the invention has the following remarkable advantages: the invention can realize high-speed and high-precision absolute three-dimensional shape measurement only by two projection patterns.
The present invention is described in further detail below with reference to the attached drawings.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a basic schematic diagram of the robust stereo matching algorithm based on deep learning according to the present invention.
Fig. 3 is a basic schematic diagram of the high-precision phase extraction algorithm based on the improved U-net type network.
Fig. 4 is a diagram showing the results of the three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation according to the present invention.
Detailed Description
A three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation can realize high-speed and high-precision absolute three-dimensional shape measurement with only two projected patterns, and comprises the following steps:
Step 1: a high-frequency fringe pattern and a speckle pattern are synchronously acquired by two cameras.
In a further embodiment, the acquired high frequency fringe pattern is:
I(x, y) = A(x, y) + B(x, y) cos Φ(x, y)
where I(x, y) is the intensity of the high-frequency fringe pattern, (x, y) are the pixel coordinates in the camera plane, A(x, y) is the background intensity, B(x, y) is the fringe modulation, and Φ(x, y) is the phase to be calculated. The intensity of the collected speckle pattern is Ispk(x, y).
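The fringe model above can be illustrated with a short synthetic-pattern sketch. The background A, modulation B, fringe frequency f and image size below are assumed values for illustration only, as is the binary speckle generator.

```python
import numpy as np

H, W = 480, 640
x = np.arange(W)
A, B, f = 127.5, 100.0, 64           # assumed background, modulation, fringe frequency
Phi = 2 * np.pi * f * x / W          # linear carrier phase across the image width
I = A + B * np.cos(Phi)              # I(x, y) = A(x, y) + B(x, y) cos Phi(x, y)
I_fringe = np.tile(I, (H, 1))        # identical fringe profile on every row

rng = np.random.default_rng(0)       # assumed random binary speckle pattern Ispk(x, y)
I_spk = 255.0 * (rng.random((H, W)) > 0.5)
```

In a real system these two patterns would be projected by the DLP projector and captured by both cameras; here they only serve to make the intensity model concrete.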
Step 2: an initial disparity map is obtained, based on a stereo matching network, from the speckle patterns synchronously acquired by the two cameras.
further, a specific method for obtaining the initial disparity map based on the stereo matching network is as follows:
the two speckle patterns from two different perspectives are respectively subjected to stereo correction, and the speckle patterns are cut into block data with different sizes so as to be input into the network during the training of the network.
Specifically, for the left camera, cropping is performed centred on a pixel (x, y) to be matched in the speckle pattern acquired from the left camera, and the block data BlockL has size L × L. Correspondingly, for the right camera, cropping is performed centred on the pixel (x, y) in the speckle pattern acquired from the right camera, and the block data BlockR has size L × (L + D − 1). The block data BlockR is a rectangular block centred on pixel (x, y) in the speckle pattern collected from the right camera; the upper and lower radii of BlockR are both
(L − 1) / 2,
its left radius is
(L − 1) / 2 + Dmax,
and its right radius is
(L − 1) / 2 − Dmin,
where Dmin and Dmax respectively denote the minimum and maximum disparity values of the stereo matching system, and D is the absolute disparity range of the system, D = Dmax − Dmin + 1.
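A quick consistency check of the block sizes above. The patch size and disparity range are illustrative values, and assigning the larger radius to the left side assumes the common sign convention d = xL − xR; a system with the opposite convention would mirror the radii.

```python
L = 19                       # assumed patch size (odd)
D_min, D_max = 0, 256        # assumed disparity search range of the system
D = D_max - D_min + 1        # absolute disparity range value

r = (L - 1) // 2             # upper/lower radius of both blocks
left_radius = r + D_max      # right-camera block extends further to the left ...
right_radius = r - D_min     # ... so every candidate disparity is covered

block_L_width = 2 * r + 1
block_R_width = left_radius + right_radius + 1
assert block_L_width == L
assert block_R_width == L + D - 1    # matches the L x (L + D - 1) size in the text
```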
Inputting the block data into a trained stereo matching network to obtain initial matching cost;
the stereo matching network adopts a Siamese structure as a matching strategy and comprises a sharing sub-network and a plurality of full connection layers with sharing weight.
The shared sub-network comprises, at the front end, a convolution layer and a plurality of sequentially stacked residual blocks to enhance the feature extraction capability of the network, and, at the back end, nine stacked convolution layers with valid padding. For example, with L = 19 pixels, inputting the block data BlockL and BlockR into the shared sub-network yields feature tensors of sizes 1 × 1 × 64 and 1 × D × 64 respectively. Concatenating the two feature tensors along the width dimension yields a feature tensor of size 1 × (D + 1) × 64, which is input to the weight-sharing fully connected layers to perform the matching cost calculation, thereby obtaining the initial matching cost.
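The tensor shapes in the example above can be verified with simple bookkeeping; this only mirrors the stated shapes with an assumed candidate count D, not the network itself.

```python
import numpy as np

D = 257                                   # assumed number of disparity candidates
feat_L = np.zeros((1, 1, 64))             # left sub-network output feature tensor
feat_R = np.zeros((1, D, 64))             # right sub-network output feature tensor
feat = np.concatenate([feat_L, feat_R], axis=1)   # stack along the width axis
assert feat.shape == (1, D + 1, 64)       # the 1 x (D + 1) x 64 tensor in the text
```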
The initial matching cost is then further optimized by a cost aggregation method based on semi-global matching (SGM).
Through the winner-takes-all (WTA) algorithm, the candidate disparity with the lowest cost value is directly taken as the integer-pixel disparity value, and a disparity map with sub-pixel precision is obtained through a five-point quadratic curve fitting model.
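The WTA-plus-quadratic-fit refinement might be sketched as follows; the least-squares five-point parabola and the synthetic cost curve are illustrative assumptions, not the invention's exact fitting model.

```python
import numpy as np

def subpixel_disparity(cost, d_int):
    """Refine an integer WTA disparity with a five-point quadratic fit.

    cost  : 1-D matching-cost curve over candidate disparities
    d_int : integer disparity with the lowest cost (winner-takes-all)
    """
    d = np.arange(d_int - 2, d_int + 3)      # five samples centred on the minimum
    a, b, c = np.polyfit(d, cost[d], 2)      # least-squares parabola a*d^2 + b*d + c
    return -b / (2 * a)                      # parabola vertex = sub-pixel minimum

cost = (np.arange(40) - 17.3) ** 2           # synthetic cost, true minimum at 17.3
d_int = int(np.argmin(cost))                 # WTA gives the integer disparity 17
d_sub = subpixel_disparity(cost, d_int)
assert abs(d_sub - 17.3) < 1e-6
```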
Step 3: the high-frequency fringe pattern is input into the trained U-net network to extract the wrapped phase.
In a further embodiment, the input for U-net network training is the collected high-frequency fringe pattern I(x, y), and the labels are the numerator term M(x, y) and denominator term D(x, y) of the arctangent function with the background information removed.
The method for acquiring the label specifically comprises the following steps:
the data items M (x, y) and D (x, y) required in acquiring the wrapped phases are obtained using a three-step phase shift algorithm. Then removing background information in M (x, y) and D (x, y) to further enhance the learning ability of the network to effective information in the detected scene, specifically:
and comparing the actual fringe modulation degree corresponding to the pixel point with a set threshold, and if the actual fringe modulation degree corresponding to the pixel point is lower than the threshold, setting the numerator term and denominator term M (x, y) and D (x, y) of the arctan function corresponding to the pixel point to be 0 so as to remove background information in M (x, y) and D (x, y).
The actual fringe modulation degree calculation formula corresponding to the pixel point is as follows:
Modulation(x, y) = (1/3) √( 3[I1(x, y) − I3(x, y)]² + [2I2(x, y) − I1(x, y) − I3(x, y)]² )
where I1(x, y), I2(x, y), I3(x, y) are the intensities of the corresponding three-step phase-shifted fringe patterns, and Modulation(x, y) is the actual fringe modulation.
The Modulation(x, y) of pixels belonging to the background is much smaller than that of pixels on the object to be measured, so in some embodiments a suitable threshold is set to remove the background information in M(x, y) and D(x, y), specifically according to the following formulas:
M(x,y)=0 if Modulation(x,y)≤0.01
D(x,y)=0 if Modulation(x,y)≤0.01
then, M (x, y) and D (x, y) with background information removed are used as tags of the network. Through this optimization strategy, it can be found that the prediction result directly output by the U-net network will have a valid value only in the foreground and a negligible value in the background.
During network inference, the input to the U-net network is the collected high-frequency fringe pattern I(x, y), and the outputs are the numerator term M(x, y) and denominator term D(x, y) of the arctangent function. The wrapped phase map φ(x, y) is then obtained from M(x, y) and D(x, y), specifically:
φ(x, y) = arctan( M(x, y) / D(x, y) )
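A short sketch of recovering the wrapped phase from network outputs M and D. Using the four-quadrant arctangent (atan2) here is an implementation assumption: it resolves the quadrant ambiguity of a plain arctan and returns 0 at the zeroed background pixels where M = D = 0.

```python
import numpy as np

M = np.array([0.0, 1.0, -1.0, 0.5])   # assumed network outputs (numerator term)
D = np.array([1.0, 0.0, 0.0, -0.5])   # assumed network outputs (denominator term)
phi = np.arctan2(M, D)                # four-quadrant arctangent, result in (-pi, pi]
assert np.allclose(phi, [0.0, np.pi / 2, -np.pi / 2, 3 * np.pi / 4])
```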
based on the optimization strategies, the high-precision phase extraction algorithm based on the U-net type network can successfully extract the high-precision wrapping phase.
Step 4: the initial disparity map is optimized using the wrapped phase constraint to obtain a high-precision disparity map, specifically as follows:
The initial disparity map obtained in step 2 provides a rough matching position for each valid point to be matched. From the initial disparity map and the wrapped phase information, stereo matching based on the phase information is achieved by minimizing the difference between the wrapped phases of the two views, yielding integer-pixel matching points. Sub-pixel matching is then realized through a linear interpolation algorithm using the phase information of each integer-pixel matching point and its neighbouring points, thereby obtaining a high-precision, dense disparity map.
Step 5: based on the disparity data between the two camera views, the disparity data are converted into three-dimensional information using the calibration parameters of the cameras, finally realizing high-speed, high-accuracy absolute three-dimensional shape measurement.
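The disparity-to-depth conversion relies on standard rectified-stereo triangulation; the focal length and baseline below are assumed calibration values for illustration only.

```python
import numpy as np

def disparity_to_depth(disp, focal_px, baseline_mm):
    """Standard rectified-stereo triangulation: Z = f * b / d.

    disp is in pixels, focal_px the rectified focal length in pixels,
    baseline_mm the distance between the two camera centres.
    """
    return focal_px * baseline_mm / disp

# Assumed calibration values for illustration
Z = disparity_to_depth(np.array([100.0, 200.0]), focal_px=1200.0, baseline_mm=100.0)
assert np.allclose(Z, [1200.0, 600.0])   # larger disparity -> closer surface
```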
The present invention first uses a stereo matching network to generate an initial disparity map (of limited accuracy) from a speckle pattern. A wrapped phase map is then extracted with high accuracy (but with depth ambiguity) from one additional fringe pattern using an improved U-net network. Finally, the wrapped phase map is used as an additional constraint to optimize the initial disparity map, realizing high-speed, high-accuracy absolute three-dimensional topography measurement. The invention requires only two projected patterns to achieve high-speed, high-accuracy absolute three-dimensional shape measurement.
Example (b):
to verify the effectiveness of the present invention, a three-dimensional measurement device based on learning fringe phase recovery and speckle correlation was constructed using two cameras (model acA640-750um, Basler), a DLP projector (model LightCraft 4500PRO, TI) and a computer. The shooting speed of the device when the three-dimensional measurement of the object is carried out is 25 frames per second. Using a stereo matching network to generate an initial disparity map from a speckle pattern (but with less accuracy) as described in steps 1 and 2. Fig. 2 is a basic schematic diagram of the robust stereo matching algorithm based on deep learning according to the present invention. Using step/3, the wrapped phase map is extracted with high accuracy (but with depth ambiguity) from an additional one of the fringe patterns using an improved U-net network. Fig. 3 is a basic schematic diagram of a high-precision phase extraction algorithm based on an improved U-net type network of the present invention. And (4) optimizing the initial parallax map by using the wrapped phase map as an additional constraint so as to finally realize high-speed and high-precision absolute three-dimensional topography measurement. Throughout the experiment, 1200 sets of data were projected and photographed, with 800 as the training set, 200 as the validation set, and 200 as the test set. Notably, none of the data in the training set, validation set, and test set is reused. The loss function of the network is set as Mean Square Error (MSE), the optimizer is Adam, and the training period of the network is set as 500 rounds. Fig. 4 is a diagram showing the results of the three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation according to the present invention. The results of fig. 4 demonstrate that the present invention requires only two projected patterns to achieve high speed and high accuracy absolute three-dimensional topography measurements.

Claims (8)

1. A three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation is characterized by comprising the following specific steps:
step 1: synchronously acquiring a high-frequency fringe pattern and a speckle pattern with two cameras;
step 2: obtaining an initial disparity map, based on a stereo matching network, from the speckle patterns synchronously acquired by the two cameras;
step 3: inputting the high-frequency fringe pattern into the trained U-net network to obtain the wrapped phase;
step 4: optimizing the initial disparity map using the wrapped phase constraint to obtain a high-precision disparity map;
step 5: converting the high-precision disparity map into three-dimensional information according to the calibration parameters of the cameras to complete the three-dimensional measurement.
2. The three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation as claimed in claim 1, wherein the specific method for obtaining the initial disparity map based on the stereo matching network is as follows:
performing stereo rectification on the two speckle patterns from the two different viewing angles, and cropping the speckle patterns into block data of different sizes;
inputting the block data into the trained stereo matching network to obtain the initial matching cost;
optimizing the initial matching cost with a cost aggregation method based on semi-global matching;
taking the candidate disparity with the lowest cost value as the integer-pixel disparity value through the winner-takes-all algorithm, and obtaining a disparity map with sub-pixel precision through a five-point quadratic curve fitting model.
3. The three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation according to claim 2, characterized in that the stereo matching network comprises a shared sub-network and a plurality of fully connected layers with shared weights; the shared sub-network comprises a convolution layer and a plurality of residual blocks stacked sequentially at the front end, and nine convolution layers stacked at the back end.
4. The three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation according to claim 1, characterized in that the labels in the U-net network training are the numerator term M(x, y) and denominator term D(x, y) of the arctangent function, obtained using a three-step phase-shift algorithm and with the background information removed.
5. The three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation as claimed in claim 4, wherein the specific method for removing background information is:
and comparing the actual fringe modulation degree corresponding to the pixel point with a set threshold, and if the actual fringe modulation degree corresponding to the pixel point is lower than the threshold, setting the numerator item and the denominator item of the arctangent function corresponding to the pixel point to be 0.
6. The three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation according to claim 5, wherein the actual fringe modulation degree corresponding to the pixel point is specifically:
Modulation(x, y) = (1/3) √( 3[I1(x, y) − I3(x, y)]² + [2I2(x, y) − I1(x, y) − I3(x, y)]² )
where I1(x, y), I2(x, y), I3(x, y) are the intensities of the corresponding three-step phase-shifted fringe patterns, and Modulation(x, y) is the actual fringe modulation.
7. The three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation according to claim 1, characterized in that inputting the high-frequency fringe pattern into the trained U-net network to obtain the wrapped phase specifically comprises:
inputting the high-frequency fringe pattern into the trained U-net network to obtain the numerator term M(x, y) and denominator term D(x, y) of the arctangent function;
obtaining the wrapped phase map φ(x, y) from M(x, y) and D(x, y), specifically:
φ(x, y) = arctan( M(x, y) / D(x, y) )
8. The three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation according to any one of claims 1 to 7, characterized in that the initial disparity map is optimized using the wrapped phase constraint, and the specific method for obtaining the high-precision disparity map is as follows:
according to the initial disparity map and the wrapped phase information, stereo matching based on the phase information is realized by minimizing the difference between the wrapped phases of the two views, obtaining integer-pixel matching points;
sub-pixel matching is completed through a linear interpolation algorithm using the phase information of each integer-pixel matching point and its neighbouring points, obtaining a high-precision disparity map.
CN202010721119.3A 2020-07-24 2020-07-24 Three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation Active CN111947599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010721119.3A CN111947599B (en) 2020-07-24 2020-07-24 Three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010721119.3A CN111947599B (en) 2020-07-24 2020-07-24 Three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation

Publications (2)

Publication Number Publication Date
CN111947599A true CN111947599A (en) 2020-11-17
CN111947599B CN111947599B (en) 2022-03-22

Family

ID=73337922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010721119.3A Active CN111947599B (en) 2020-07-24 2020-07-24 Three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation

Country Status (1)

Country Link
CN (1) CN111947599B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381172A (en) * 2020-11-28 2021-02-19 桂林电子科技大学 InSAR interference image phase unwrapping method based on U-net
CN113450460A (en) * 2021-07-22 2021-09-28 四川川大智胜软件股份有限公司 Phase-expansion-free three-dimensional face reconstruction method and system based on face shape space distribution
CN114252020A (en) * 2021-12-22 2022-03-29 西安交通大学 Multi-station full-field fringe pattern phase shift auxiliary speckle large length-width ratio gap measurement method
CN114777677A (en) * 2022-03-09 2022-07-22 南京理工大学 Single-frame dual-frequency multiplexing fringe projection three-dimensional surface type measuring method based on deep learning
CN114777677B (en) * 2022-03-09 2024-04-26 南京理工大学 Single-frame double-frequency multiplexing stripe projection three-dimensional surface type measurement method based on deep learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101608908A (en) * 2009-07-20 2009-12-23 杭州先临三维科技股份有限公司 The three-dimension digital imaging method that digital speckle projection and phase measuring profilometer combine
WO2018067962A1 (en) * 2016-10-06 2018-04-12 Google Llc Image processing neural networks with separable convolutional layers
CN108088391A (en) * 2018-01-05 2018-05-29 深度创新科技(深圳)有限公司 A kind of method and system of measuring three-dimensional morphology
AU2016393639A1 (en) * 2016-02-18 2018-09-06 Google Llc Image classification neural networks
CN110197505A (en) * 2019-05-30 2019-09-03 西安电子科技大学 Remote sensing images binocular solid matching process based on depth network and semantic information
CN110487216A (en) * 2019-09-20 2019-11-22 西安知象光电科技有限公司 A kind of fringe projection 3-D scanning method based on convolutional neural networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101608908A (en) * 2009-07-20 2009-12-23 杭州先临三维科技股份有限公司 The three-dimension digital imaging method that digital speckle projection and phase measuring profilometer combine
AU2016393639A1 (en) * 2016-02-18 2018-09-06 Google Llc Image classification neural networks
WO2018067962A1 (en) * 2016-10-06 2018-04-12 Google Llc Image processing neural networks with separable convolutional layers
CN108088391A (en) * 2018-01-05 2018-05-29 深度创新科技(深圳)有限公司 A kind of method and system of measuring three-dimensional morphology
CN110197505A (en) * 2019-05-30 2019-09-03 西安电子科技大学 Remote sensing images binocular solid matching process based on depth network and semantic information
CN110487216A (en) * 2019-09-20 2019-11-22 西安知象光电科技有限公司 A kind of fringe projection 3-D scanning method based on convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YIN WEI ET AL.: "High-speed 3D shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system", 《OPTICS EXPRESS》 *
ZHONG JINXIN ET AL.: "Speckle projection profilometry based on deep learning", 《Infrared and Laser Engineering》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381172A (en) * 2020-11-28 2021-02-19 Guilin University of Electronic Technology InSAR interferogram phase unwrapping method based on U-Net
CN113450460A (en) * 2021-07-22 2021-09-28 Sichuan Chuanda Zhisheng Software Co., Ltd. Three-dimensional face reconstruction method and system without phase unwrapping, based on facial shape space distribution
CN114252020A (en) * 2021-12-22 2022-03-29 Xi'an Jiaotong University Multi-station full-field fringe-pattern phase-shift-assisted speckle measurement method for gaps with large aspect ratio
CN114252020B (en) * 2021-12-22 2022-10-25 Xi'an Jiaotong University Multi-station full-field fringe-pattern phase-shift-assisted speckle measurement method for gaps with large aspect ratio
CN114777677A (en) * 2022-03-09 2022-07-22 Nanjing University of Science and Technology Single-frame dual-frequency multiplexed fringe projection three-dimensional surface profile measurement method based on deep learning
CN114777677B (en) * 2022-03-09 2024-04-26 Nanjing University of Science and Technology Single-frame dual-frequency multiplexed fringe projection three-dimensional surface profile measurement method based on deep learning

Also Published As

Publication number Publication date
CN111947599B (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN109253708B (en) Fringe projection temporal phase unwrapping method based on deep learning
CN111947599B (en) Three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation
CN111563564B (en) Speckle image pixel-by-pixel matching method based on deep learning
CN111351450B (en) Single-frame fringe image three-dimensional measurement method based on deep learning
CN111563952B (en) Method and system for realizing stereo matching based on phase information and spatial texture characteristics
CN106032976A (en) Three-fringe projection phase unwrapping method based on wavelength selection
CN112504165A (en) Composite stereo phase unwrapping method based on bilateral filtering optimization
CN113379818A (en) Phase analysis method based on multi-scale attention mechanism network
CN110375675B (en) Binocular grating projection measurement method based on spatial phase unwrapping
WO2013012054A1 (en) Image processing method and apparatus
CN111947600B (en) Robust three-dimensional phase unwrapping method based on phase-level cost filtering
CN112212806B (en) Three-dimensional phase unwrapping method based on phase information guidance
Furukawa et al. Multiview projectors/cameras system for 3d reconstruction of dynamic scenes
CN113137939B (en) Unwrapping method based on binary characteristic pattern matching
CN113551617B (en) Binocular dual-frequency complementary three-dimensional surface profile measurement method based on fringe projection
Zhang et al. Determination of edge correspondence using color codes for one-shot shape acquisition
CN111815697B (en) Thermal deformation dynamic three-dimensional measurement method
CN114739322A (en) Three-dimensional measurement method, equipment and storage medium
Kawasaki et al. One-shot scanning method using an uncalibrated projector and camera system
CN113450460A (en) Three-dimensional face reconstruction method and system without phase unwrapping, based on facial shape space distribution
CN112212805B (en) Efficient three-dimensional phase unwrapping method based on composite coding
Li et al. Single shot dual-frequency structured light based depth sensing
CN114777677B (en) Single-frame dual-frequency multiplexed fringe projection three-dimensional surface profile measurement method based on deep learning
CN113503832B (en) Absolute phase recovery method based on object transverse dimension assistance
CN113709442B (en) Single-pixel imaging method based on projection reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant