CN1387167A - Method for creating 3D dual-vision model with structural light - Google Patents

Method for creating 3D dual-vision model with structural light

Info

Publication number: CN1387167A
Authority: CN (China)
Legal status: Granted
Application number: CN 01118302
Other languages: Chinese (zh)
Other versions: CN1168045C (en)
Inventors: 张广军, 魏振忠, 李鑫, 贺俊吉
Current Assignee: Beihang University (Beijing University of Aeronautics and Astronautics)
Original Assignee: Beihang University
Priority date: 2001-05-22
Filing date: 2001-05-22
Publication date: 2002-12-25
Application filed by Beihang University
Priority to CNB011183020A
Publication of CN1387167A
Application granted
Publication of CN1168045C
Anticipated expiration
Current legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)

Abstract

A structured-light method for creating a 3D dual-vision model, used for industrial inspection and visual guidance, is disclosed. The RBF neural network consists of three layers: the input layer has two nodes, the output layer has three nodes, and the activation function of the hidden-layer nodes is the Gaussian kernel function. Calibration point data collected in the global coordinate system are used to train the dual-vision neural network. Its advantages are high precision, fast training, and no blind zone.

Description

Method for creating 3D dual-vision model with structural light
The present invention relates to a structured-light three-dimensional dual-vision model modeling method for industrial inspection and visual guidance.
Vision inspection technology, with its wide measurement range, large field of view, fast measurement speed, easily extracted light-stripe images, and relatively high accuracy, has found increasingly wide use in industrial environments. Structured-light three-dimensional vision inspection is widely applied to measuring workpiece integrity and surface flatness; automatic inspection of microelectronic components (IC chips, PC boards, BGAs); inspection of soft or fragile parts; inspection of the 3D shapes of various molds; robot vision guidance; and so on. Such systems are flexible, the measurement is non-contact with fast dynamic response, they can meet the short production-cycle ("takt") requirements of mass production, and the whole measurement process is highly automated.
Establishing a sound vision inspection model is an important step in structured-light three-dimensional vision inspection. At present there are two main modeling methods for the vision inspection model: the conventional modeling method and the modeling method based on the BP (Back Propagation) neural network.
(1) The conventional modeling method is based on the camera pinhole imaging model. The pinhole model, however, is only an approximation; if it held exactly, no light intensity at all would reach the camera's photosensitive surface. Strictly speaking, therefore, the general vision inspection model is inaccurate, and the approximation becomes even worse away from the optical axis.
In addition, a vision inspection system is precise and complex. The parameters that affect system accuracy include, besides the error of the mathematical model, many system parameters and camera intrinsic and extrinsic parameters, optical system adjustment errors, non-uniformity errors of the CCD photosensitive elements, video signal conversion errors, and so on. Some of these can be described by a mathematical model, while others are difficult to model. Consequently, a certain gap remains between the mathematical model established by the conventional method for structured-light three-dimensional vision inspection and the real system. If these small errors and disturbances are ignored, the measurement accuracy of the system is reduced. At present, the accuracy of many structured-light three-dimensional vision inspection systems is about 0.5-1 mm.
(2) Vision model modeling method based on the BP neural network: this method has been reported in the literature both at home and abroad, for example by Ming Cheng et al. (Optical Engineering, Vol. 34, No. 12, 1995, pp. 3572-3576) and Deng Wenyi et al. (Journal of Huazhong University of Science and Technology, Vol. 27, No. 1, 1999, pp. 78-80). These reports, however, all adopt the commonly used BP network and apply it to a single-camera vision inspection model. The BP network has shortcomings that are difficult to overcome: structurally it usually requires more hidden layers, which complicates the network and slows training; in practical training it suffers from local optima, slow convergence, and low efficiency; and its accuracy is often not high, the accuracy reported in the literature being only 0.31-0.34 mm.
The object of the present invention is to establish a dual-vision inspection model with few hidden layers, high accuracy, and fast convergence.
The technical solution of the present invention is as follows. A radial basis function neural network (RBF neural network) is composed of three layers: the input layer has two nodes, the output layer has three nodes, and the activation function of the hidden nodes is the Gaussian kernel function

u_j = \exp\left[ -\frac{(X - C_j)^T (X - C_j)}{2 \sigma_j^2} \right], \quad j = 1, 2, \ldots, N_h.

The network output is a linear combination of the hidden-node outputs, i.e.

y_i = \sum_{j=1}^{N_h} \omega_{ij} u_j - \theta = W_i^T U, \quad i = 1, 2, 3.
After the network structure is determined, the calibration point data collected in the global coordinate system are used to train the dual-vision neural network. The training steps are:
(1) Start training and initialize the network structure parameters, including the center values of the Gaussian kernel functions, the variances σ, and the hidden-to-output layer weight matrix W.
(2) Set the network training parameters: the learning rate μ(k), the momentum term α(k), and the expected minimum overall training error of the network.
(3) Take one group of samples in order from the N training samples stored in memory, and feed its input part (x_{1i}, x_{2i}) into the input layer of the network.
(4) Compute the actual outputs (y_{o1i}, y_{o2i}, y_{o3i}) of the three output-layer nodes, compute the sum of squared residuals between them and the desired outputs (y_{e1i}, y_{e2i}, y_{e3i}) of the training sample, and add it to SUM, i.e.

SUM = SUM + \sum_{j=1}^{3} (y_{eji} - y_{oji})^2 ;
(5) Adjust each network structure parameter with the gradient descent method with momentum. The specific adjustment algorithm is as follows: define the criterion function J(E), compute its partial derivative with respect to each network structure parameter, \partial J(E) / \partial E \,|_{E = E(k)}, and adjust the value of each network structure parameter by the following formulas:
The adjustment formula for the hidden-to-output layer connection weight matrix W is

W(k+1) = W(k) + \mu(k) \left( -\frac{\partial J}{\partial W} \right)\Big|_{W = W(k)} + \alpha(k) \left[ W(k) - W(k-1) \right].

The adjustment formula for the hidden layer center matrix C is

C(k+1) = C(k) + \mu(k) \left( -\frac{\partial J}{\partial C} \right)\Big|_{C = C(k)} + \alpha(k) \left[ C(k) - C(k-1) \right].

The adjustment formula for the hidden layer variance matrix σ is

\sigma(k+1) = \sigma(k) + \mu(k) \left( -\frac{\partial J}{\partial \sigma} \right)\Big|_{\sigma = \sigma(k)} + \alpha(k) \left[ \sigma(k) - \sigma(k-1) \right].
(6) Check whether all N groups of training samples have been fed in, i.e. one training pass has been completed. If not, go back to (3); if so, compute the overall training error of the network as the root mean square error E_{RMS} = \sqrt{SUM / N};
(7) Check whether E_{RMS} is smaller than the expected value. If not, go back to (3) and, based on observation, appropriately adjust the learning rate μ(k) and the momentum term α(k). If so, use the current network model and the test sample set to compute the test error of each output node of the network, i.e. E_x = \sqrt{ \frac{1}{N_1} \sum_{i=1}^{N_1} ( Y_{xi} - \hat{Y}_{xi} )^2 } for the x coordinate, and likewise for the y and z coordinates.
(8) Judge the test error. If it does not meet the requirement, the expected minimum overall training error can be reduced, after which training either continues from (3) or restarts from (1); if the requirement is met, save the network structure parameters and finish training.
The present invention is the first to establish a structured-light three-dimensional dual-vision model based on the RBF neural network. It not only effectively overcomes the deficiencies of the conventional modeling method and of the BP network, but also solves the blind-zone problem of the single-camera vision inspection system. It markedly improves the convergence speed of the network, avoids the local minimum problem, possesses globally optimal approximation capability, and improves the modeling accuracy. Because the present invention trains the network with the gradient descent method with momentum, the modeling converges quickly and stably, the training accuracy is high, and the training method is fast and easy to carry out.
Fig. 1 is the network structure of the present invention;
Fig. 2 is the training flow chart of the present invention;
Fig. 3 is a schematic diagram of the dual-vision calibration point generating device of the present invention.
The network has three layers. The input layer has two nodes, representing the two-dimensional image coordinates (x_1, x_2); the output layer has three nodes, representing the three-dimensional object coordinates (y_1, y_2, y_3). The activation function of the hidden nodes is the Gaussian kernel function, given by

u_j = \exp\left[ -\frac{(X - C_j)^T (X - C_j)}{2 \sigma_j^2} \right], \quad j = 1, 2, \ldots, N_h \qquad (1)
where u_j is the output of the j-th hidden node, X = (x_1, x_2)^T is the input sample, i.e. the two-dimensional image coordinates, C_j is the center of the Gaussian function, σ_j is its variance, and N_h is the number of hidden nodes.
The network output is a linear combination of the hidden-node outputs, i.e.

y_i = \sum_{j=1}^{N_h} \omega_{ij} u_j - \theta = W_i^T U, \quad i = 1, 2, 3 \qquad (2)
where W_i = (\omega_{i1}, \omega_{i2}, \ldots, \omega_{i N_h}, -\theta)^T is the weight vector between the hidden layer and the output layer, and U = (u_1, u_2, \ldots, u_{N_h}, 1)^T is the output vector of the hidden layer.
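For illustration, the forward pass of equations (1) and (2) can be sketched in Python as follows. This is a minimal sketch, not part of the patent; the array shapes, variable names, and example parameter values are assumptions.

```python
import numpy as np

def rbf_forward(x, centers, sigmas, W):
    """Forward pass of the three-layer RBF network of equations (1) and (2):
    2 image coordinates -> N_h Gaussian hidden nodes -> 3 object coordinates.
    W is the 3 x (N_h + 1) hidden-to-output weight matrix, whose i-th row is
    W_i = (w_i1, ..., w_iNh, -theta)."""
    diff = x - centers                                         # (N_h, 2)
    u = np.exp(-np.sum(diff**2, axis=1) / (2.0 * sigmas**2))   # eq. (1)
    U = np.append(u, 1.0)                                      # U = (u_1, ..., u_Nh, 1)^T
    return W @ U                                               # eq. (2): y_i = W_i^T U

# Example usage with arbitrary initial parameters (N_h = 16)
rng = np.random.default_rng(0)
N_h = 16
centers = rng.uniform(-1.0, 1.0, (N_h, 2))
sigmas  = rng.uniform(0.5, 1.5, N_h)
W       = rng.uniform(-0.1, 0.1, (3, N_h + 1))
y = rbf_forward(np.array([0.3, -0.2]), centers, sigmas, W)     # 3D object coordinates
```

Folding the bias -θ into the last column of W, with U padded by a constant 1, follows the definition of W_i and U given above.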
After the structure of the dual-vision RBF network model is determined, the next step is to use the calibration point data collected in the global coordinate system to train the dual-vision RBF neural network and thereby establish the dual-vision model.
The training procedure of the dual-vision model based on the RBF neural network is shown in Fig. 2; the detailed steps are as follows:
(1) Start training and initialize the network structure parameters, including the center values of the Gaussian kernel functions, the variances σ, and the hidden-to-output layer weight matrix W.
(2) Set the network training parameters: the learning rate μ(k), the momentum term α(k), and the expected minimum overall training error of the network.
(3) Take one group of samples in order from the N training samples stored in memory, and feed its input part (x_{1i}, x_{2i}) into the input layer of the network.
(4) Compute the actual outputs (y_{o1i}, y_{o2i}, y_{o3i}) of the three output-layer nodes, compute the sum of squared residuals between them and the desired outputs (y_{e1i}, y_{e2i}, y_{e3i}) of the training sample, and add it to SUM, i.e.

SUM = SUM + \sum_{j=1}^{3} (y_{eji} - y_{oji})^2
(5) Adjust each network structure parameter with the gradient descent method with momentum. The specific adjustment algorithm is as follows:
Define the criterion function

J(E) = \frac{1}{2}\, \varepsilon^T(E, k)\, \varepsilon(E, k), \qquad \varepsilon(E, k) = Y(k) - \hat{Y}(E, k),

where Y(k) denotes the desired output, \hat{Y}(E, k) is the actual output of the network, E is the vector formed by all the network parameters (the hidden layer centers, the hidden layer variances, and the output weights), and \varepsilon(E, k) is the residual of \hat{Y}(E, k) with respect to Y(k).
Compute the partial derivative of the criterion function J(E) with respect to each network structure parameter, \partial J(E) / \partial E \,|_{E = E(k)}, and adjust the value of each network structure parameter by the following formulas:
The adjustment formula for the hidden-to-output layer connection weight matrix W is

W(k+1) = W(k) + \mu(k) \left( -\frac{\partial J}{\partial W} \right)\Big|_{W = W(k)} + \alpha(k) \left[ W(k) - W(k-1) \right].

The adjustment formula for the hidden layer center matrix C is

C(k+1) = C(k) + \mu(k) \left( -\frac{\partial J}{\partial C} \right)\Big|_{C = C(k)} + \alpha(k) \left[ C(k) - C(k-1) \right].

The adjustment formula for the hidden layer variance matrix σ is

\sigma(k+1) = \sigma(k) + \mu(k) \left( -\frac{\partial J}{\partial \sigma} \right)\Big|_{\sigma = \sigma(k)} + \alpha(k) \left[ \sigma(k) - \sigma(k-1) \right].
Here k is the number of the adjustment, μ(k) is the learning rate, and α(k) is the forgetting factor at step k, also called the momentum or damping term. The (k+1)-th value of a network parameter is obtained from its k-th value: the k-th value, plus the product of μ(k) and the negative gradient of the criterion function with respect to that parameter, plus the product of α(k) and the difference between the k-th and (k-1)-th parameter values. μ(k) controls the training speed of the network: when μ(k) is large, the adjustment amplitude of the parameters is large, and vice versa. α(k) acts as a damping force: when the training error of the network is dropping rapidly, it makes the convergence gradually smoother, and when the training error is rising rapidly, it slows the divergence down more and more. In this way the convergence process of the network avoids excessive oscillation, which helps the network converge smoothly.
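As an illustration of this update rule, the following minimal Python sketch applies one momentum step to the weight matrix W. The analytic gradient used in the example follows from J = ½‖ε‖² and y = W·U; the function name and numerical values are assumptions, not part of the patent.

```python
import numpy as np

def momentum_step(P_k, P_km1, grad_J, mu_k, alpha_k):
    """One gradient-descent-with-momentum adjustment, following the formulas above:
    P(k+1) = P(k) + mu(k) * (-dJ/dP) + alpha(k) * [P(k) - P(k-1)]."""
    return P_k + mu_k * (-grad_J) + alpha_k * (P_k - P_km1)

# Example: update the hidden-to-output weight matrix W for one training sample.
# For J = 0.5 * ||eps||^2 and y = W @ U, the gradient is dJ/dW = -outer(eps, U).
N_h = 16
W_k, W_km1 = np.zeros((3, N_h + 1)), np.zeros((3, N_h + 1))
U   = np.append(np.random.rand(N_h), 1.0)        # hidden outputs plus bias input
eps = np.array([0.1, -0.2, 0.05])                # desired minus actual output
grad_W = -np.outer(eps, U)
W_kp1 = momentum_step(W_k, W_km1, grad_W, mu_k=0.1, alpha_k=0.01)
```

The same step applies unchanged to the center matrix C and the variance vector σ, each with its own gradient.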
(6) Check whether all N groups of training samples have been fed in, i.e. one training pass has been completed. If not, go back to (3); if so, compute the overall training error of the network as the root mean square error

E_{RMS} = \sqrt{SUM / N}
(7) Check whether E_{RMS} is smaller than the expected value. If not, go back to (3) and, based on observation, appropriately adjust the learning rate μ(k) and the momentum term α(k). If so, use the current network model and the test sample set to compute the test error of each output node of the network. The test error is computed by the following formulas:
E_x = \sqrt{ \frac{1}{N_1} \sum_{i=1}^{N_1} \left( Y_{xi} - \hat{Y}_{xi} \right)^2 }

and likewise for the y and z coordinates, where N_1 is the total number of test samples, Y_{xi} is the desired output of the x coordinate for the i-th test sample, and \hat{Y}_{xi} is the actual output of the x coordinate of the neural network; the test errors of the other coordinates are computed by analogy.
(8) The test error is the final accuracy of the network model. If the test error does not meet the requirement, the expected minimum overall training error can be reduced, after which training either continues from (3) or restarts from (1). If the requirement is met, save the network structure parameters and finish training.
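Putting steps (3)-(7) together, one training pass over the N samples can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the analytic gradients of J with respect to W, C and σ are derived here from J = ½‖ε‖², since the patent does not write them out, and all names are assumptions.

```python
import numpy as np

def rbf_train_epoch(samples, targets, centers, sigmas, W, prev, mu, alpha):
    """One pass over the N training samples: forward pass, accumulation of the
    squared residuals into SUM (step (4)), momentum updates of W, C and sigma
    (step (5)), and the overall training error E_RMS = sqrt(SUM / N) (step (6)).
    'prev' is a dict holding the previous parameter values for the momentum terms."""
    SUM = 0.0
    for x, y_e in zip(samples, targets):
        diff = x - centers
        d2 = np.sum(diff**2, axis=1)
        u = np.exp(-d2 / (2.0 * sigmas**2))            # hidden outputs, eq. (1)
        U = np.append(u, 1.0)
        eps = y_e - W @ U                              # residual, step (4)
        SUM += np.sum(eps**2)

        # Analytic gradients of J = 0.5 * ||eps||^2
        gW = -np.outer(eps, U)
        s = (W[:, :-1].T @ eps) * u
        gC = -(s / sigmas**2)[:, None] * diff
        gS = -(s * d2) / sigmas**3

        # Momentum updates, step (5)
        for name, P, g in (("W", W, gW), ("C", centers, gC), ("sigma", sigmas, gS)):
            new = P + mu * (-g) + alpha * (P - prev[name])
            prev[name] = P.copy()
            P[...] = new

    return np.sqrt(SUM / len(samples))                 # E_RMS, step (6)

def per_axis_test_error(y_desired, y_actual):
    """RMS test error of each output coordinate over the N_1 test samples (step (7))."""
    return np.sqrt(np.mean((y_desired - y_actual)**2, axis=0))
```

Before the first pass, prev would be initialized as {"W": W.copy(), "C": centers.copy(), "sigma": sigmas.copy()}.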
The calibration point data used for network training, comprising the two-dimensional image coordinates (x_1, x_2) and the three-dimensional object coordinates (y_1, y_2, y_3), are obtained with a high-precision three-dimensional dual-vision calibration point generating device, shown in Fig. 3. In the figure, 1 and 2 are laser projectors; 3 and 4 are CCD cameras, which capture the scene images; 5 is a two-way photoelectric aiming device; 6 is a three-dimensional translation stage; 7 is an image acquisition card; and 8 is a computer for control and data processing. (The technical content of this generating device is disclosed in the patent application No. 01115655.4 and is not elaborated here.) For the left vision inspection system, the device is moved in steps of 4 mm in the x and z directions to collect calibration points within a 60 mm × 60 mm region of the light plane, 256 calibration points in total, of which 64 serve as test samples and 192 as training samples. For the right vision inspection system, the device is likewise moved in 4 mm steps in the x and z directions to collect calibration points within a 60 mm × 60 mm region of the light plane, again 256 points in total, of which 64 serve as test samples and 192 as training samples.
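The calibration grid and the training/test split described above can be reproduced with a short sketch like the following; the grid origin and the rule for choosing the 64 test points are not specified in the patent, so the coordinates and the random split below are assumptions.

```python
import numpy as np

# 4 mm steps over a 60 mm x 60 mm light-plane region gives a 16 x 16 grid
# of 256 calibration points; 192 are used for training and 64 for testing.
xs = np.arange(0.0, 61.0, 4.0)                     # 16 positions in x
zs = np.arange(0.0, 61.0, 4.0)                     # 16 positions in z
grid = np.array([(x, z) for x in xs for z in zs])  # 256 calibration positions

rng = np.random.default_rng(0)
idx = rng.permutation(len(grid))
test_points, train_points = grid[idx[:64]], grid[idx[64:]]
```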
For this structured-light dual-vision model based on the RBF neural network, the training samples and test samples obtained above were used to train the left and right vision inspection systems patiently and thoroughly, each following the training steps described above. The optimal structured-light three-dimensional dual-vision RBF neural network models finally obtained are as follows:
(1) Structural parameters of the optimal RBF neural network model of the right vision inspection system (network test accuracy 0.080 mm)
A three-layer RBF network is adopted, with 2 input nodes, 16 hidden nodes, and 3 output nodes. The structures of the adjustable parameter arrays are then as follows: ① the hidden layer variance matrix \sigma_{16 \times 1} = [\sigma_0, \sigma_1, \ldots, \sigma_{15}]^T; ② the hidden layer center matrix

C_{2 \times 16} = \begin{bmatrix} C_{0,0} & C_{0,1} & \cdots & C_{0,15} \\ C_{1,0} & C_{1,1} & \cdots & C_{1,15} \end{bmatrix};

③ the hidden-to-output layer weight matrix

W_{3 \times 16} = \begin{bmatrix} w_{0,0} & w_{0,1} & \cdots & w_{0,15} \\ w_{1,0} & w_{1,1} & \cdots & w_{1,15} \\ w_{2,0} & w_{2,1} & \cdots & w_{2,15} \end{bmatrix}.
The network training parameters, the initial values of the network structure parameters, and the finally trained values are as follows:
The learning rate μ has initial value 0.1 and final value 0.003; the momentum term α has initial value 0.01 and final value 0. The hidden-to-output layer connection weights W are initialized in the interval [-0.1, 0.1], the hidden layer variances σ in the interval [-1, 1], and the hidden layer centers in the interval [-1, 1]. After the network has been trained for 13500 iterations, the model structure parameters of the resulting network are: ① the hidden layer variance matrix \sigma_{16 \times 1} = [2.525, 0.665, -0.914, 2.369, -1.679, -0.160, 1.987, -0.057, -0.067, 1.443, -1.125, -1.395, 1.415, -0.592, 0.629, -0.327]^T; ② the hidden layer center matrix C_{2 \times 16} and ③ the hidden-to-output layer weight matrix W_{3 \times 16}, whose numerical values are given in figures of the original patent and are not reproduced here.
(2) Structural parameters of the optimal RBF neural network model of the left vision inspection system (network test accuracy 0.081 mm)
A three-layer RBF network is adopted, with 2 input nodes, 20 hidden nodes, and 3 output nodes. The structures of the adjustable parameter arrays are then as follows: ① the hidden layer variance matrix \sigma_{20 \times 1} = [\sigma_0, \sigma_1, \ldots, \sigma_{19}]^T; ② the hidden layer center matrix

C_{2 \times 20} = \begin{bmatrix} C_{0,0} & C_{0,1} & \cdots & C_{0,19} \\ C_{1,0} & C_{1,1} & \cdots & C_{1,19} \end{bmatrix};

③ the hidden-to-output layer weight matrix

W_{3 \times 20} = \begin{bmatrix} w_{0,0} & w_{0,1} & \cdots & w_{0,19} \\ w_{1,0} & w_{1,1} & \cdots & w_{1,19} \\ w_{2,0} & w_{2,1} & \cdots & w_{2,19} \end{bmatrix}.

The network training parameters, the initial values of the network structure parameters, and the finally trained values are as follows:
The learning rate μ has initial value 0.1 and final value 0.003; the momentum term α has initial value 0.01 and final value 0. The hidden-to-output layer connection weights W are initialized in the interval [-0.1, 0.1], the hidden layer variances σ in the interval [-1, 1], and the hidden layer centers in the interval [-1, 1]. After the network has been trained for 15800 iterations, the model structure parameters of the resulting network are:
① the hidden layer variance matrix \sigma_{20 \times 1} = [-0.493, -1.376, -2.769, -3.075, -0.370, -3.025, -0.069, -0.013, 2.464, 3.272, -0.251, 0.207, 0.575, 0.011, -0.001, 0.368, -0.003, 0.198, 0.431, 0.276]^T; ② the hidden layer center matrix C_{2 \times 20} and ③ the hidden-to-output layer weight matrix W_{3 \times 20}, whose numerical values are given in figures of the original patent and are not reproduced here.

Claims (2)

1. A method for creating a 3D dual-vision model with structured light, characterized in that a radial basis function neural network (RBF neural network) is composed of three layers: the input layer has two nodes, the output layer has three nodes, and the activation function of the hidden nodes is the Gaussian kernel function

u_j = \exp\left[ -\frac{(X - C_j)^T (X - C_j)}{2 \sigma_j^2} \right], \quad j = 1, 2, \ldots, N_h;

the network output is a linear combination of the hidden-node outputs, i.e.

y_i = \sum_{j=1}^{N_h} \omega_{ij} u_j - \theta = W_i^T U, \quad i = 1, 2, 3;
after the network structure is determined, the calibration point data collected in the global coordinate system are used to train the dual-vision neural network.
2. The structured-light three-dimensional dual-vision model modeling method according to claim 1, characterized in that the steps of training the dual-vision neural network with the calibration point data are as follows:
(1) start training and initialize the network structure parameters, including the center values of the Gaussian kernel functions, the variances σ, and the hidden-to-output layer weight matrix W;
(2) set the network training parameters: the learning rate μ(k), the momentum term α(k), and the expected minimum overall training error of the network;
(3) take one group of samples in order from the N training samples stored in memory, and feed its input part (x_{1i}, x_{2i}) into the input layer of the network;
(4) compute the actual outputs (y_{o1i}, y_{o2i}, y_{o3i}) of the three output-layer nodes, compute the sum of squared residuals between them and the desired outputs (y_{e1i}, y_{e2i}, y_{e3i}) of the training sample, and add it to SUM, i.e.

SUM = SUM + \sum_{j=1}^{3} (y_{eji} - y_{oji})^2 ;
(5) adjust each network structure parameter with the gradient descent method with momentum; the specific adjustment algorithm is as follows: define the criterion function

J(E) = \frac{1}{2}\, \varepsilon^T(E, k)\, \varepsilon(E, k), \qquad \varepsilon(E, k) = Y(k) - \hat{Y}(E, k),

where Y(k) is the desired output and \hat{Y}(E, k) is the actual output of the network; compute the partial derivative of the criterion function J(E) with respect to each network structure parameter, \partial J(E) / \partial E \,|_{E = E(k)}, and adjust the value of each network structure parameter by the following formulas:
the adjustment formula for the hidden-to-output layer connection weight matrix W is

W(k+1) = W(k) + \mu(k) \left( -\frac{\partial J}{\partial W} \right)\Big|_{W = W(k)} + \alpha(k) \left[ W(k) - W(k-1) \right];

the adjustment formula for the hidden layer center matrix C is

C(k+1) = C(k) + \mu(k) \left( -\frac{\partial J}{\partial C} \right)\Big|_{C = C(k)} + \alpha(k) \left[ C(k) - C(k-1) \right];

the adjustment formula for the hidden layer variance matrix σ is

\sigma(k+1) = \sigma(k) + \mu(k) \left( -\frac{\partial J}{\partial \sigma} \right)\Big|_{\sigma = \sigma(k)} + \alpha(k) \left[ \sigma(k) - \sigma(k-1) \right];
(6) check whether all N groups of training samples have been fed in, i.e. one training pass has been completed; if not, go back to (3); if so, compute the overall training error of the network as the root mean square error E_{RMS} = \sqrt{SUM / N};
(7) check whether E_{RMS} is smaller than the expected value; if not, go back to (3) and, based on observation, appropriately adjust the learning rate μ(k) and the momentum term α(k); if so, use the current network model and the test sample set to compute the test error of each output node of the network, the test error being the root mean square error of each output coordinate over the test samples, e.g. E_x = \sqrt{ \frac{1}{N_1} \sum_{i=1}^{N_1} ( Y_{xi} - \hat{Y}_{xi} )^2 } for the x coordinate;
(8) judge the test error: if it does not meet the requirement, the expected minimum overall training error can be reduced, after which training either continues from (3) or restarts from (1); if the requirement is met, save the network structure parameters and finish training.
CNB011183020A 2001-05-22 2001-05-22 Method for creating 3D dual-vision model with structural light Expired - Fee Related CN1168045C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB011183020A CN1168045C (en) 2001-05-22 2001-05-22 Method for creating 3D dual-vision model with structural light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB011183020A CN1168045C (en) 2001-05-22 2001-05-22 Method for creating 3D dual-vision model with structural light

Publications (2)

Publication Number Publication Date
CN1387167A true CN1387167A (en) 2002-12-25
CN1168045C CN1168045C (en) 2004-09-22

Family

ID=4663088

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB011183020A Expired - Fee Related CN1168045C (en) 2001-05-22 2001-05-22 Method for creating 3D dual-vision model with structural light

Country Status (1)

Country Link
CN (1) CN1168045C (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100367310C (en) * 2004-04-08 2008-02-06 复旦大学 Wild size variable hierarchical network model of retina ganglion cell sensing and its algorithm
CN101187649B (en) * 2007-12-12 2010-04-07 哈尔滨工业大学 Heterogeneous material diffusion welding interface defect automatic identification method
CN105319655A (en) * 2014-06-30 2016-02-10 北京世维通科技发展有限公司 Automatic coupling method and system for optical integrated chip and optical fiber assembly
CN105319655B (en) * 2014-06-30 2017-02-01 北京世维通科技发展有限公司 Automatic coupling method and system for optical integrated chip and optical fiber assembly
CN111383281A (en) * 2018-12-29 2020-07-07 天津大学青岛海洋技术研究院 Video camera calibration method based on RBF neural network

Also Published As

Publication number Publication date
CN1168045C (en) 2004-09-22

Similar Documents

Publication Publication Date Title
EP3715780B1 (en) Method for the establishment and the spatial calibration of a 3d measurement model based on a 1d displacement sensor
CN109323650B (en) Unified method for measuring coordinate system by visual image sensor and light spot distance measuring sensor in measuring system
CN110031829B (en) Target accurate distance measurement method based on monocular vision
CN101907448B (en) Depth measurement method based on binocular three-dimensional vision
CN1975324A (en) Double-sensor laser visual measuring system calibrating method
Giovannetti et al. Uncertainty assessment of coupled Digital Image Correlation and Particle Image Velocimetry for fluid-structure interaction wind tunnel experiments
Hsu et al. A triplane video-based experimental system for studying axisymmetrically inflated biomembranes
CN111220120B (en) Moving platform binocular ranging self-calibration method and device
CN113446957B (en) Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking
CN111823221A (en) Robot polishing method based on multiple sensors
CN107329233A (en) A kind of droplet type PCR instrument Atomatic focusing method based on neutral net
CN1387167A (en) Method for creating 3D dual-vision model with structural light
CN110940358A (en) Laser radar and inertial navigation combined calibration device and calibration method
CN112562006B (en) Large-view-field camera calibration method based on reinforcement learning
CN112525106B (en) Three-phase machine cooperative laser-based 3D detection method and device
CN113702384A (en) Surface defect detection device, detection method and calibration method for rotary component
CN111709998B (en) ELM space registration model method for TOF camera depth data measurement error correction
CN111553954A (en) Direct method monocular SLAM-based online luminosity calibration method
CN1916566A (en) System for testing life buoy for military use on controlling pose of human body in water
Tanyeri Image processing based tactile tactical sensor development and sensitivity determination to extract the 3D surface topography of objects
CN110966937A (en) Large member three-dimensional configuration splicing method based on laser vision sensing
CN101672631A (en) Surface form deviation measurement method of flat optical element
CN114998444A (en) Robot high-precision pose measurement system based on two-channel network
Rajaei et al. Vision-based large-field measurements of bridge deformations
CN106556350A (en) A kind of measuring method and microscope of microslide curved surface height value

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee