CN110163817A - Phase principal value extraction method based on a fully convolutional neural network - Google Patents

Phase principal value extraction method based on a fully convolutional neural network

Info

Publication number
CN110163817A (application CN201910347403.6A)
Authority
CN
China
Legal status: Granted
Application number
CN201910347403.6A
Other languages
Chinese (zh)
Other versions
CN110163817B (en)
Inventor
王海霞
吴晨阳
胡苏杭
陈朋
梁荣华
Current Assignee
Zhejiang University of Technology (ZJUT)
Original Assignee
Zhejiang University of Technology (ZJUT)
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910347403.6A (granted as CN110163817B)
Publication of application CN110163817A
Application granted
Publication of grant CN110163817B
Legal status: Active

Classifications

    • G01B11/254 — Measuring contours or curvatures by optical techniques; projection of a pattern (e.g. moiré fringes), viewing through a pattern
    • G06N3/045 — Neural network architectures; combinations of networks
    • G06N3/08 — Neural networks; learning methods
    • G06T5/70 — Image enhancement or restoration; denoising, smoothing
    • G06T7/97 — Image analysis; determining parameters from multiple pictures


Abstract

A phase principal value extraction method based on a fully convolutional neural network comprises the following steps: 1) encode a sinusoidally distributed fringe pattern on a computer, project the pre-encoded pattern onto the object under test, and capture the fringe image of the object with an industrial camera; 2) build a fully convolutional neural network model, set the training parameters and loss function, feed the image obtained in step 1) into the network, and run it to obtain the required phase principal value; 3) unwrap the phase principal value obtained in step 2) with a quality-map-guided method to obtain the accurate phase value. The invention provides a phase principal value extraction method based on a fully convolutional neural network that needs few image acquisitions, requires no training dataset or training process, and achieves high accuracy.

Description

Phase principal value extraction method based on a fully convolutional neural network
Technical field
The present invention relates to an image processing method, in particular to a phase principal value extraction method based on a fully convolutional neural network.
Background art
With the rapid development of information technology and the diversification of social demand, spatial three-dimensional measurement of object contours is widely applied in fields such as industrial automatic inspection, product quality control, reverse engineering, biomedicine, virtual reality, cultural-relic reproduction, and anthropometry. Optical three-dimensional measurement in particular, being non-contact, highly accurate, capable of acquiring large amounts of data, and well suited to integrated optical, mechanical, electronic, and computer systems, has developed greatly over the last decade.
Phase measuring profilometry (PMP) is a three-dimensional measurement method that combines sinusoidal grating projection with phase-shifting. Its basic idea is as follows: when a sinusoidal grating pattern is projected onto a three-dimensional diffusely reflecting surface, the imaging system captures fringes deformed by the surface profile. Discrete phase shifting yields N deformed fringe images, and the phase distribution is then computed with an N-step phase-shift algorithm. Mainstream phase principal value extraction algorithms, such as the three-step and four-step phase-shift methods, require at least three images; moreover, to suppress noise and background effects and to guarantee unwrapping accuracy, practical measurements often capture images at several fringe frequencies. For example, a single measurement using the four-step method with four frequencies requires 16 pictures. Such a large number of exposures significantly slows three-dimensional reconstruction, and under routine conditions with poor experimental control the object under test often moves during the capture of a dozen or more pictures, degrading measurement accuracy.
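As a concrete illustration of the N-step idea described above, the following NumPy sketch (not code from the patent; the synthetic phase map and constants are illustrative) recovers the phase principal value from four π/2-shifted fringe images with the standard four-step formula:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase (principal value) from four pi/2-shifted fringe images.

    With I_k = a + b*cos(phi + k*pi/2), k = 0..3:
        I3 - I1 = 2*b*sin(phi),  I0 - I2 = 2*b*cos(phi)
    so phi = atan2(I3 - I1, I0 - I2), which lies in (-pi, pi].
    """
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: build four shifted fringes from a known phase map.
h, w = 4, 64
true_phi = np.tile(np.linspace(-2.5, 2.5, w), (h, 1))   # stays inside (-pi, pi)
a, b = 0.5, 0.4                                          # background, modulation
frames = [a + b * np.cos(true_phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

This is the baseline the invention improves on: with four fringe frequencies, the four captures above must be repeated four times, giving the 16 exposures mentioned in the text.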
Summary of the invention
To overcome the shortcomings of conventional phase principal value extraction methods — too many pictures required in practice, overly long capture times, and low accuracy — and to extract an accurate phase principal value from a single fringe image, the present invention proposes a high-accuracy phase principal value extraction method based on a fully convolutional neural network.
The technical solution adopted by the present invention is as follows:
A phase principal value extraction method based on a fully convolutional neural network, comprising the following steps:
1) encode a sinusoidally distributed fringe pattern on a computer, project the pre-encoded pattern onto the object under test, and capture the fringe image of the object with an industrial camera;
2) build a fully convolutional neural network model, set the training parameters and loss function, feed the image obtained in step 1) into the network, and run it to obtain the required phase principal value;
3) unwrap the phase principal value obtained in step 2) with a quality-map-guided method to obtain the accurate phase value.
Further, in step 1), the fringe image acquisition comprises the following steps:
1.1) pre-encode the fringes on a computer, constructing the required sinusoidally distributed fringe pattern according to
I(x, y) = 0.5 + 0.5·cos(2πx/T + φ0)
where I(x, y) is the image gray value, x is the abscissa, and T is the fringe period;
1.2) design the shooting light path, and place the industrial camera, DLP projector, and object under test accordingly;
1.3) capture high-definition fringe images in sequence with the industrial camera.
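Step 1.1) can be sketched in NumPy as follows (the period, initial phase, and gamma value are placeholder choices, not values from the patent):

```python
import numpy as np

def make_fringe(width, height, period, phi0=0.0, gamma=1.0):
    """Vertical sinusoidal fringe pattern, optionally gamma pre-compensated.

    I(x, y) = (0.5 + 0.5*cos(2*pi*x/period + phi0)) ** (1/gamma)
    The intensity depends only on the abscissa x, giving vertical fringes.
    """
    x = np.arange(width)
    row = (0.5 + 0.5 * np.cos(2 * np.pi * x / period + phi0)) ** (1.0 / gamma)
    return np.tile(row, (height, 1))

pattern = make_fringe(512, 512, period=32)
img8 = np.round(pattern * 255).astype(np.uint8)  # 8-bit image for the projector
```

The 512 × 512 size matches the network input described later; the gamma exponent pre-compensates the projector's nonlinearity when a calibrated γ is available.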
Further, in step 2), the phase principal value extraction comprises the following steps:
2.1) Constructing the neural network
When the projector projects the pre-encoded fringe pattern onto the surface of the object under test, the fringes are modulated by the surface height, and the deformed fringe image captured by the CCD camera can be expressed as
I(x, y) = a(x, y) + b(x, y)·cos(φ(x, y))
where a(x, y) and b(x, y) reflect the variations of the background light and the surface reflectivity, respectively, and φ(x, y) is the relative phase value to be computed, also called the phase principal value, which encodes the height information at the corresponding object point.
A fully convolutional neural network model is constructed whose layers comprise ten parts. The first to fifth layers are encoding layers, each consisting of two convolutional layers and two BN layers. The first convolutional layer has 128 kernels of size 3 × 3 with stride 2, and the second has 128 kernels of size 3 × 3 with stride 1. The input image first passes through the first convolutional layer, a BN layer, and a Leaky ReLU activation, and is then processed by the second convolutional layer, a BN layer, and a Leaky ReLU activation.
The sixth to tenth layers are decoding layers, each consisting of two convolutional layers, three BN layers, and one up-sampling layer. Both convolutional layers have 128 kernels of size 3 × 3 with stride 1. The input features pass in turn through a BN layer, the first convolutional layer, a BN layer, and a Leaky ReLU activation, then through the second convolutional layer, a BN layer, and a Leaky ReLU activation, and finally through the up-sampling layer, which up-samples the features by bilinear or nearest-neighbor interpolation.
In addition, skip connections are added between the first and tenth layers, the second and ninth layers, the third and eighth layers, the fourth and seventh layers, and the fifth and sixth layers. Each skip connection consists of a convolutional layer, a BN layer, and a Leaky ReLU activation, where the convolutional layer has 4 kernels of size 1 × 1.
An image of size 512 × 512 × 1 is fed into the network. After the first to fifth layers the output feature size is 16 × 16 × 128; after the sixth to tenth layers it is 512 × 512 × 128.
The last convolutional layer, with a single 1 × 1 kernel followed by a ReLU activation, produces an output of size 512 × 512 × 1.
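The feature sizes quoted above can be checked with a small shape-tracing sketch (this only tracks tensor shapes through the described hourglass; it is not an implementation of the network):

```python
def hourglass_shapes(size=512, channels=128, depth=5):
    """Feature-map sizes through the 10-part encoder-decoder described above.

    Layers 1-5 each halve the spatial size (their first conv has stride 2);
    layers 6-10 each double it (2x up-sampling); a final 1x1 conv reduces
    the channels back to 1.
    """
    shapes = [(size, size, 1)]                 # input image
    s = size
    for _ in range(depth):                     # encoding layers 1-5
        s //= 2
        shapes.append((s, s, channels))
    for _ in range(depth):                     # decoding layers 6-10
        s *= 2
        shapes.append((s, s, channels))
    shapes.append((size, size, 1))             # last 1x1 conv + ReLU
    return shapes

shapes = hourglass_shapes()
```

Five stride-2 stages reduce 512 to 512 / 2^5 = 16, which agrees with the 16 × 16 × 128 bottleneck stated in the text.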
2.2) Running the neural network
The input of the network is three uniform-noise images, randomly generated with the same size as the image shot in step 1), which are fitted respectively to the background a(x, y), the surface reflectivity b(x, y), and the phase principal value φ(x, y). Two methods can be used to construct the input images: first, filling the whole image with random numbers drawn from the interval [0, 0.1]; second, using the meshgrid function to generate a grid over the interval [0, 1] as input.
The three images are fed into the network, fitted, and combined, thereby determining the parameters of the fully convolutional neural network. The energy function is the minimization of the mean squared error (MSE) between the combined image and the image shot in step 1), where the MSE is defined as
MSE = (1/n)·Σ_i (y_i − ŷ_i)²
where y_i is the image shot in step 1) and ŷ_i is the combination of the three fitted input images.
The AdamOptimizer is used for optimization; after many iterations an optimal solution is obtained, at which point the three input images have been fitted respectively to the required phase principal value, surface reflectivity, and background light intensity.
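The fitting objective of step 2.2) can be sketched as follows (the network that maps the noise inputs to a, b, and φ is omitted; this only shows how the three fitted maps are recombined and scored against the captured image — names and sizes are illustrative):

```python
import numpy as np

def compose(a, b, phi):
    """Reassemble a fringe image from the three fitted maps: I = a + b*cos(phi)."""
    return a + b * np.cos(phi)

def mse(pred, target):
    """Mean squared error driving the optimisation (minimised here with Adam)."""
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
h, w = 64, 64
# A stand-in for the captured deformed fringe image of step 1).
captured = 0.5 + 0.4 * np.cos(np.tile(np.linspace(0, 8 * np.pi, w), (h, 1)))
# Initialisation option 1: three noise images with values in [0, 0.1].
a0, b0, phi0 = (rng.uniform(0.0, 0.1, (h, w)) for _ in range(3))
loss_start = mse(compose(a0, b0, phi0), captured)
# A perfect fit would reproduce the captured image exactly, giving zero loss.
loss_ideal = mse(captured, captured)
```

At convergence the φ map passed to `compose` is the phase principal value sought, so a single captured image suffices and no training dataset is needed.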
Further, in step 3), the quality-map-guided phase unwrapping proceeds as follows:
3.1) starting from the pixel of highest quality, examine the 4 pixels adjacent to it and unwrap their phase, where the unwrapping formula is
φ_u = φ_w + 2kπ   (4)
in which φ_w is the wrapped phase, φ_u the unwrapped phase, and k an integer chosen so that the difference between neighboring unwrapped phases lies within (−π, π];
3.2) store the pixels adjacent to already-unwrapped pixels (and not yet unwrapped themselves) in an "adjacency queue";
3.3) according to the phase quality map, select the highest-quality pixel in the adjacency queue, unwrap it, and update the queue;
3.4) repeat steps 3.2) and 3.3) until all pixels are unwrapped;
where the quality M is defined as
M(i, j) = 1 / D(i, j)
and D is the second difference, defined as
D(i, j) = sqrt(V² + H² + D1² + D2²)
with
V = unwrap(A(i, j−1) − A(i, j)) − unwrap(A(i, j) − A(i, j+1))
H = unwrap(A(i−1, j) − A(i, j)) − unwrap(A(i, j) − A(i+1, j))
D1 = unwrap(A(i−1, j−1) − A(i, j)) − unwrap(A(i, j) − A(i+1, j+1))
D2 = unwrap(A(i−1, j+1) − A(i, j)) − unwrap(A(i, j) − A(i+1, j−1))
where unwrap(·) denotes the unwrapping operation of formula (4).
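The second-difference quality measure can be sketched in NumPy as follows (the `wrap` helper plays the role of the unwrap(·) operator applied to phase differences; taking the quality as the reciprocal of D is a common convention assumed here, not reproduced from the patent):

```python
import numpy as np

def wrap(d):
    """Wrap a phase difference into [-pi, pi) (unwrap(.) applied to a difference)."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def second_difference(A):
    """Second-difference map D = sqrt(V^2 + H^2 + D1^2 + D2^2), interior pixels only."""
    c = A[1:-1, 1:-1]                                            # A(i, j)
    V = wrap(A[1:-1, :-2] - c) - wrap(c - A[1:-1, 2:])           # j-1, j+1 neighbours
    H = wrap(A[:-2, 1:-1] - c) - wrap(c - A[2:, 1:-1])           # i-1, i+1 neighbours
    D1 = wrap(A[:-2, :-2] - c) - wrap(c - A[2:, 2:])             # one diagonal
    D2 = wrap(A[:-2, 2:] - c) - wrap(c - A[2:, :-2])             # other diagonal
    return np.sqrt(V**2 + H**2 + D1**2 + D2**2)

# A smooth (linear) wrapped phase has zero second difference everywhere ...
smooth = wrap(np.tile(np.linspace(0.0, 6.0, 8), (8, 1)))
# ... while a defect raises D (i.e. lowers quality) around the disturbed pixel.
noisy = smooth.copy()
noisy[4, 4] += 1.0
```

Low D means locally consistent phase, so those pixels are unwrapped first and errors are prevented from propagating out of noisy regions.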
The beneficial effects of the present invention are mainly: the number of image acquisitions is reduced; the dataset and training process indispensable to conventional neural networks are not required; and the hardware requirements and running time of the neural network are reduced.
Detailed description of the invention
Fig. 1 is a flow chart of the neural-network-based three-dimensional reconstruction system of the present invention;
Fig. 2 is a hardware schematic of the neural-network-based three-dimensional reconstruction system of the present invention;
Fig. 3 is a structure diagram of the neural network of the present invention, in which the labels denote, respectively, the convolutional layers, the down-sampling layers, the BN layers (B), the Leaky ReLU activations (R), and the up-sampling layers.
Specific embodiment
The present invention is described further below with reference to the accompanying drawings.
Referring to Figs. 1–3, a phase principal value extraction method based on a fully convolutional neural network comprises the following steps:
1) Referring to Fig. 2, the fringe image is acquired by projecting the pre-encoded fringe pattern onto the object under test and capturing the fringe image of the object with an industrial camera, comprising the following steps:
1.1) pre-encode the fringes on a computer, constructing the required sinusoidally distributed fringe pattern according to
I(x, y) = [0.5 + 0.5·cos(2πx/T + φ0)]^(1/γ)
where x is the abscissa, T is the fringe period, φ0 is the initial phase, and γ is the pre-calibrated gamma value;
1.2) project the pre-encoded fringe pattern onto the surface of the object under test with a DLP projector;
1.3) capture high-definition fringe images in sequence with the industrial camera.
2) Build the fully convolutional neural network model, set the training parameters and loss function, feed the image obtained in step 1) into the network, and run it to obtain the required phase principal value, comprising the following steps:
2.1) Constructing the neural network
When the projector projects the pre-encoded fringe pattern onto the surface of the object under test, the fringes are modulated by the surface height, and the deformed fringe image captured by the CCD camera can be expressed as
I(x, y) = a(x, y) + b(x, y)·cos(φ(x, y))
where a(x, y) and b(x, y) reflect the variations of the background light and the surface reflectivity, respectively, and φ(x, y) is the relative phase value to be computed, also called the phase principal value, which encodes the height information at the corresponding object point.
Referring to Fig. 3, a fully convolutional neural network model is constructed whose layers comprise ten parts. The first to fifth layers are encoding layers, each consisting of two convolutional layers and two BN layers. The first convolutional layer has 128 kernels of size 3 × 3 with stride 2, and the second has 128 kernels of size 3 × 3 with stride 1. The input image first passes through the first convolutional layer, a BN layer, and a Leaky ReLU activation, and is then processed by the second convolutional layer, a BN layer, and a Leaky ReLU activation.
The sixth to tenth layers are decoding layers, each consisting of two convolutional layers, three BN layers, and one up-sampling layer. Both convolutional layers have 128 kernels of size 3 × 3 with stride 1. The input features pass in turn through a BN layer, the first convolutional layer, a BN layer, and a Leaky ReLU activation, then through the second convolutional layer, a BN layer, and a Leaky ReLU activation, and finally through the up-sampling layer, which up-samples the features by bilinear or nearest-neighbor interpolation.
In addition, skip connections are added between the first and tenth layers, the second and ninth layers, the third and eighth layers, the fourth and seventh layers, and the fifth and sixth layers. Each skip connection consists of a convolutional layer, a BN layer, and a Leaky ReLU activation, where the convolutional layer has 4 kernels of size 1 × 1.
An image of size 512 × 512 × 1 is fed into the network. After the first to fifth layers the output feature size is 16 × 16 × 128; after the sixth to tenth layers it is 512 × 512 × 128.
The last convolutional layer, with a single 1 × 1 kernel followed by a ReLU activation, produces an output of size 512 × 512 × 1.
2.2) Running the neural network
The input of the network is three uniform-noise images, randomly generated with the same size as the image shot in step 1), which are fitted respectively to the background a(x, y), the surface reflectivity b(x, y), and the phase principal value φ(x, y). Two methods can be used to construct the input images: first, filling the whole image with random numbers drawn from the interval [0, 0.1]; second, using the meshgrid function to generate a grid over the interval [0, 1] as input.
The three images are fed into the network, fitted, and combined, thereby determining the parameters of the fully convolutional neural network. The energy function is the minimization of the mean squared error (MSE) between the combined image and the image shot in step 1), where the MSE is defined as
MSE = (1/n)·Σ_i (y_i − ŷ_i)²
where y_i is the image shot in step 1) and ŷ_i is the combination of the three fitted input images.
The AdamOptimizer is used for optimization; after many iterations an optimal solution is obtained, at which point the three input images have been fitted respectively to the required phase principal value, surface reflectivity, and background light intensity.
3) The phase principal value obtained in step 2) is unwrapped using the quality-map-guided method to obtain the accurate phase value, as follows:
3.1) starting from the pixel of highest quality, examine the 4 pixels adjacent to it and unwrap their phase, where the unwrapping formula is
φ_u = φ_w + 2kπ   (4)
in which φ_w is the wrapped phase, φ_u the unwrapped phase, and k an integer chosen so that the difference between neighboring unwrapped phases lies within (−π, π];
3.2) store the pixels adjacent to already-unwrapped pixels (and not yet unwrapped themselves) in an "adjacency queue";
3.3) according to the phase quality map, select the highest-quality pixel in the adjacency queue, unwrap it, and update the queue;
3.4) repeat steps 3.2) and 3.3) until all pixels are unwrapped;
where the quality M is defined as
M(i, j) = 1 / D(i, j)
and D is the second difference, defined as
D(i, j) = sqrt(V² + H² + D1² + D2²)
with
V = unwrap(A(i, j−1) − A(i, j)) − unwrap(A(i, j) − A(i, j+1))
H = unwrap(A(i−1, j) − A(i, j)) − unwrap(A(i, j) − A(i+1, j))
D1 = unwrap(A(i−1, j−1) − A(i, j)) − unwrap(A(i, j) − A(i+1, j+1))
D2 = unwrap(A(i−1, j+1) − A(i, j)) − unwrap(A(i, j) − A(i+1, j−1))
where unwrap(·) denotes the unwrapping operation of formula (4).

Claims (4)

1. A phase principal value extraction method based on a fully convolutional neural network, characterized in that the method comprises the following steps:
1) encoding a sinusoidally distributed fringe pattern on a computer, projecting the pre-encoded pattern onto the object under test, and capturing the fringe image of the object with an industrial camera;
2) building a fully convolutional neural network model, setting the training parameters and loss function, feeding the image obtained in step 1) into the neural network, and running the neural network to obtain the required phase principal value;
3) unwrapping the phase principal value obtained in step 2) with a quality-map-guided method to obtain the accurate phase value.
2. The phase principal value extraction method based on a fully convolutional neural network according to claim 1, characterized in that in step 1) the fringe image acquisition comprises the following steps:
1.1) pre-encoding the fringes on a computer, constructing the required sinusoidally distributed fringe pattern according to
I(x, y) = 0.5 + 0.5·cos(2πx/T + φ0)
where I(x, y) is the image gray value, x is the abscissa, and T is the fringe period;
1.2) designing the shooting light path, and placing the industrial camera, DLP projector, and object under test accordingly;
1.3) capturing high-definition fringe images in sequence with the industrial camera.
3. The phase principal value extraction method based on a fully convolutional neural network according to claim 2, characterized in that in step 2) the phase principal value extraction comprises the following steps:
2.1) Constructing the neural network
When the projector projects the pre-encoded fringe pattern onto the surface of the object under test, the fringes are modulated by the surface height, and the deformed fringe image captured by the CCD camera can be expressed as
I(x, y) = a(x, y) + b(x, y)·cos(φ(x, y))
where a(x, y) and b(x, y) reflect the variations of the background light and the surface reflectivity, respectively, and φ(x, y) is the relative phase value to be computed, also called the phase principal value, which encodes the height information at the corresponding object point;
A fully convolutional neural network model is constructed whose layers comprise ten parts. The first to fifth layers are encoding layers, each consisting of two convolutional layers and two BN layers; the first convolutional layer has 128 kernels of size 3 × 3 with stride 2, and the second has 128 kernels of size 3 × 3 with stride 1. The input image first passes through the first convolutional layer, a BN layer, and a Leaky ReLU activation, and is then processed by the second convolutional layer, a BN layer, and a Leaky ReLU activation;
The sixth to tenth layers are decoding layers, each consisting of two convolutional layers, three BN layers, and one up-sampling layer; both convolutional layers have 128 kernels of size 3 × 3 with stride 1. The input features pass in turn through a BN layer, the first convolutional layer, a BN layer, and a Leaky ReLU activation, then through the second convolutional layer, a BN layer, and a Leaky ReLU activation, and finally through the up-sampling layer, which up-samples the features by bilinear or nearest-neighbor interpolation;
In addition, skip connections are added between the first and tenth layers, the second and ninth layers, the third and eighth layers, the fourth and seventh layers, and the fifth and sixth layers; each skip connection consists of a convolutional layer, a BN layer, and a Leaky ReLU activation, where the convolutional layer has 4 kernels of size 1 × 1;
An image of size 512 × 512 × 1 is fed into the network; after the first to fifth layers the output feature size is 16 × 16 × 128, and after the sixth to tenth layers it is 512 × 512 × 128;
The last convolutional layer, with a single 1 × 1 kernel followed by a ReLU activation, produces an output of size 512 × 512 × 1;
2.2) Running the neural network
The input of the network is three uniform-noise images, randomly generated with the same size as the image shot in step 1), which are fitted respectively to the background a(x, y), the surface reflectivity b(x, y), and the phase principal value φ(x, y); two methods can be used to construct the input images: first, filling the whole image with random numbers drawn from the interval [0, 0.1]; second, using the meshgrid function to generate a grid over the interval [0, 1] as input;
The three images are fed into the network, fitted, and combined, thereby determining the parameters of the fully convolutional neural network; the energy function is the minimization of the mean squared error (MSE) between the combined image and the image shot in step 1), where the MSE is defined as
MSE = (1/n)·Σ_i (y_i − ŷ_i)²
where y_i is the image shot in step 1) and ŷ_i is the combination of the three fitted input images;
The AdamOptimizer is used for optimization; after many iterations an optimal solution is obtained, at which point the three input images have been fitted respectively to the required phase principal value, surface reflectivity, and background light intensity.
4. The phase principal value extraction method based on a fully convolutional neural network according to claim 1 or 2, characterized in that in step 3) the quality-map-guided phase unwrapping proceeds as follows:
3.1) starting from the pixel of highest quality, examine the 4 pixels adjacent to it and unwrap their phase, where the unwrapping formula is
φ_u = φ_w + 2kπ   (4)
in which φ_w is the wrapped phase, φ_u the unwrapped phase, and k an integer chosen so that the difference between neighboring unwrapped phases lies within (−π, π];
3.2) store the pixels adjacent to already-unwrapped pixels in an "adjacency queue";
3.3) according to the phase quality map, select the highest-quality pixel in the adjacency queue, unwrap it, and update the queue;
3.4) repeat steps 3.2) and 3.3) until all pixels are unwrapped;
where the quality M is defined as
M(i, j) = 1 / D(i, j)
and D is the second difference, defined as
D(i, j) = sqrt(V² + H² + D1² + D2²)
with
V = unwrap(A(i, j−1) − A(i, j)) − unwrap(A(i, j) − A(i, j+1))
H = unwrap(A(i−1, j) − A(i, j)) − unwrap(A(i, j) − A(i+1, j))
D1 = unwrap(A(i−1, j−1) − A(i, j)) − unwrap(A(i, j) − A(i+1, j+1))
D2 = unwrap(A(i−1, j+1) − A(i, j)) − unwrap(A(i, j) − A(i+1, j−1))
where unwrap(·) denotes the unwrapping operation of formula (4).
CN201910347403.6A 2019-04-28 2019-04-28 Phase principal value extraction method based on full convolution neural network Active CN110163817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910347403.6A CN110163817B (en) 2019-04-28 2019-04-28 Phase principal value extraction method based on full convolution neural network


Publications (2)

Publication Number Publication Date
CN110163817A true CN110163817A (en) 2019-08-23
CN110163817B CN110163817B (en) 2021-06-18

Family

ID=67638752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910347403.6A Active CN110163817B (en) 2019-04-28 2019-04-28 Phase principal value extraction method based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN110163817B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110487216A (en) * 2019-09-20 2019-11-22 西安知象光电科技有限公司 A kind of fringe projection 3-D scanning method based on convolutional neural networks
CN110500957A (en) * 2019-09-10 2019-11-26 中国科学院苏州纳米技术与纳米仿生研究所 A kind of active three-D imaging method, device, equipment and storage medium
CN111189414A (en) * 2020-01-09 2020-05-22 西安知象光电科技有限公司 Real-time single-frame phase extraction method
CN111461295A (en) * 2020-03-20 2020-07-28 南京理工大学 Single-frame stripe analysis method for generating antagonistic neural network based on multiple scales
CN111812647A (en) * 2020-07-11 2020-10-23 桂林电子科技大学 Phase unwrapping method for interferometric synthetic aperture radar
CN113340211A (en) * 2021-08-03 2021-09-03 中国工程物理研究院激光聚变研究中心 Interference image phase demodulation method based on deep learning
CN113985566A (en) * 2021-09-10 2022-01-28 西南科技大学 Scattered light focusing method based on spatial light modulation and neural network
CN114152217A (en) * 2022-02-10 2022-03-08 南京南暄励和信息技术研发有限公司 Binocular phase expansion method based on supervised learning
CN117523344A (en) * 2024-01-08 2024-02-06 南京信息工程大学 Interference phase unwrapping method based on phase quality weighted convolution neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103543453A (en) * 2013-10-28 2014-01-29 北京理工大学 Elevation inversion method for geosynchronous orbit synthetic aperture radar interference
WO2018107584A1 (en) * 2016-12-15 2018-06-21 东南大学 Error correction method for grating projection three-dimensional measurement system
CN109253708A (en) * 2018-09-29 2019-01-22 南京理工大学 A kind of fringe projection time phase method of deploying based on deep learning
CN109459923A (en) * 2019-01-02 2019-03-12 西北工业大学 A kind of holographic reconstruction algorithm based on deep learning
CN109596227A (en) * 2018-12-06 2019-04-09 浙江大学 A kind of phase recovery detection system of the optical element intermediate frequency error of convolutional neural networks priori enhancing


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110500957A (en) * 2019-09-10 2019-11-26 Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences Active three-dimensional imaging method, device, equipment and storage medium
CN110500957B (en) * 2019-09-10 2021-09-14 Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences Active three-dimensional imaging method, device, equipment and storage medium
CN110487216A (en) * 2019-09-20 2019-11-22 Xi'an Chishine Optoelectronics Technology Co., Ltd. Fringe projection three-dimensional scanning method based on a convolutional neural network
CN110487216B (en) * 2019-09-20 2021-05-25 Xi'an Chishine Optoelectronics Technology Co., Ltd. Fringe projection three-dimensional scanning method based on a convolutional neural network
CN111189414B (en) * 2020-01-09 2021-09-03 Xi'an Chishine Optoelectronics Technology Co., Ltd. Real-time single-frame phase extraction method
CN111189414A (en) * 2020-01-09 2020-05-22 Xi'an Chishine Optoelectronics Technology Co., Ltd. Real-time single-frame phase extraction method
CN111461295B (en) * 2020-03-20 2022-08-16 Nanjing University of Science and Technology Single-frame fringe analysis method based on multi-scale generative adversarial networks
CN111461295A (en) * 2020-03-20 2020-07-28 Nanjing University of Science and Technology Single-frame fringe analysis method based on multi-scale generative adversarial networks
CN111812647A (en) * 2020-07-11 2020-10-23 Guilin University of Electronic Technology Phase unwrapping method for interferometric synthetic aperture radar
CN113340211A (en) * 2021-08-03 2021-09-03 Laser Fusion Research Center, China Academy of Engineering Physics Interference image phase demodulation method based on deep learning
CN113985566A (en) * 2021-09-10 2022-01-28 Southwest University of Science and Technology Scattered light focusing method based on spatial light modulation and neural network
CN113985566B (en) * 2021-09-10 2023-09-12 Southwest University of Science and Technology Scattered light focusing method based on spatial light modulation and neural network
CN114152217A (en) * 2022-02-10 2022-03-08 Nanjing Nanxuan Lihe Information Technology Research and Development Co., Ltd. Binocular phase unwrapping method based on supervised learning
CN114152217B (en) * 2022-02-10 2022-04-12 Nanjing Nanxuan Lihe Information Technology Research and Development Co., Ltd. Binocular phase unwrapping method based on supervised learning
CN117523344A (en) * 2024-01-08 2024-02-06 Nanjing University of Information Science and Technology Interference phase unwrapping method based on a phase-quality-weighted convolutional neural network
CN117523344B (en) * 2024-01-08 2024-03-19 Nanjing University of Information Science and Technology Interference phase unwrapping method based on a phase-quality-weighted convolutional neural network

Also Published As

Publication number Publication date
CN110163817B (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN110163817A (en) Phase main value extraction method based on a fully convolutional neural network
Laine et al. Modular primitives for high-performance differentiable rendering
Yariv et al. Bakedsdf: Meshing neural sdfs for real-time view synthesis
US20160307368A1 (en) Compression and interactive playback of light field pictures
JP5830546B2 (en) Determination of model parameters based on model transformation of objects
CN104541127B (en) Image processing system and image processing method
CN109945802B (en) Structured light three-dimensional measurement method
CN106101535B (en) Video stabilization method based on local and global motion disparity compensation
US8803902B2 (en) Computing level of detail for anisotropic filtering
CN108955571B (en) Three-dimensional measurement method combining dual-frequency heterodyne with phase-shift coding
CN106408523A (en) Denoising filter
CN108955574A (en) Three-dimensional measurement method and system
WO2020169983A1 (en) Facial shape representation and generation system and method
Andersson et al. Adaptive texture space shading for stochastic rendering
CN104025155B (en) Variable depth compresses
CN117011478B (en) Single image reconstruction method based on deep learning and stripe projection profilometry
CN109993701A (en) Depth map super-resolution reconstruction method based on a pyramid structure
CN113008163A (en) Encoding and decoding method based on frequency-shift fringes in a structured light three-dimensional reconstruction system
CN117132704A (en) Dynamic structured light three-dimensional reconstruction method, system and computing device
CN112802084A (en) Three-dimensional topography measuring method, system and storage medium based on deep learning
Fu et al. High Dynamic Range Structured Light 3-D Measurement Based on Region Adaptive Fringe Brightness
CN116645466A (en) Three-dimensional reconstruction method, electronic equipment and storage medium
CN107644393A (en) GPU-based parallel implementation method for an abundance estimation algorithm
Law et al. Projector placement planning for high quality visualizations on real-world colored objects
CN114166150B (en) Stripe reflection three-dimensional measurement method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant