CN115682976A - Stereoscopic vision deformation measurement method for inhibiting heat flow disturbance - Google Patents


Info

Publication number
CN115682976A
Authority
CN
China
Prior art keywords: speckle, measured, camera, heat flow, image
Legal status: Pending
Application number
CN202211380567.7A
Other languages
Chinese (zh)
Inventor
刘聪 (Liu Cong)
汪立诚 (Wang Licheng)
章闯 (Zhang Chuang)
徐志洪 (Xu Zhihong)
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202211380567.7A
Publication of CN115682976A

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a stereoscopic vision deformation measurement method that suppresses heat flow disturbance. A multi-camera system combines digital image correlation with machine learning to measure object deformation: the multi-camera system is calibrated and the internal and external parameters of each camera are solved; a neural network improves the quality of speckle images disturbed by heat flow; the three-dimensional displacement of the object is solved from the positions of corresponding points in the images before and after deformation, and the three-dimensional strain is derived from the displacement. The method suppresses the influence of heat flow disturbance on stereoscopic vision deformation measurement and applies well to measurements with different numbers of cameras.

Description

Stereoscopic vision deformation measurement method for inhibiting heat flow disturbance
Technical Field
The invention relates to the fields of optical measurement, experimental solid mechanics, and image measurement technology, and in particular to a stereoscopic vision deformation measurement method that suppresses heat flow disturbance.
Background
Digital image correlation is a non-contact optical measurement technique that measures object deformation by spraying random speckles on the object surface, acquiring speckle images before and after deformation, and accurately matching corresponding points. During imaging, temperature variation makes the propagation medium (air) non-uniform, which deflects light rays and causes imaging drift, degrading measurement precision. The imaging error introduced by heat-flow-induced air disturbance is difficult to analyze quantitatively or to capture in a mathematical model. By exploiting the ability of neural networks to classify random complex patterns and to map multidimensional functions, and by including heat flow disturbance in the training set, the imaging error caused by heat flow disturbance can be separated out, thereby suppressing the disturbance.
Disclosure of Invention
The object of the present invention is to provide a stereoscopic vision deformation measurement method that suppresses heat flow disturbance.
The technical solution that realizes the purpose of the invention is as follows: a stereoscopic vision deformation measurement method for suppressing heat flow disturbance. The experimental setup comprises industrial cameras, lenses, an optical platform, camera fixtures, an electronic computer, and an object to be measured. The measurement method comprises the following steps:
step 1, fixing an experimental device: the method comprises the following steps of orthogonally arranging four industrial cameras, fixing the industrial cameras on an optical platform, fixing an object to be measured on the optical platform, adjusting the directions of lenses of the four cameras to point to the object to be measured, enabling the object to be measured to be located at a central position in the visual angle of the cameras, and adjusting the aperture and the focal length of the cameras to be proper;
Step 2, calibrating the system parameters of the multiple cameras: calibrating the four cameras pairwise and determining the internal and external parameters of each camera;
step 3, obtaining training data: the speckle pattern is sprayed on the object to be measured, the object to be measured is randomly placed in the field of view of the multi-camera system, and different operations are carried out aiming at different neural networks:
(1) For the convolutional neural network: when heat flow disturbance exists, moving the object to be measured to different positions, each camera collecting N speckle images; when no heat flow disturbance exists, moving the object to the same collection positions used when heat flow disturbance exists, each camera again collecting N speckle images;
(2) For the BP neural network: when heat flow disturbance exists, moving the object to different positions, each camera collecting N speckle images, and solving for 8N image coordinates (one speckle image from one camera yields image coordinates in 2 directions, so N speckle images from 4 cameras yield 8N image coordinates); when no heat flow disturbance exists, moving the object to the same collection positions used when heat flow disturbance exists, each camera collecting N speckle images, and solving for 3N world coordinate errors (one speckle image captured by the 4 cameras yields the object's three-dimensional coordinate errors in 3 directions, so N speckle images from the 4 cameras yield 3N world coordinate errors);
step 4, building a neural network and training, and carrying out different operations aiming at different neural networks:
(1) For convolutional neural networks: building a convolution neural network with N speckle image inputs and N speckle image outputs, taking the speckle image with heat flow disturbance as the neural network input, carrying out normalization processing on the input, taking the speckle image without heat flow disturbance as the neural network output, and training a convolution neural network model;
(2) For the BP neural network: building a BP neural network with 8N data inputs and 3N data outputs, taking image coordinates with heat flow disturbance as the neural network inputs, normalizing the inputs, taking world coordinate errors without heat flow disturbance as the neural network outputs, and training a BP neural network model;
Step 5, obtaining experimental data: when heat flow disturbance exists, moving the object to be measured within the range of positions used in step 3, and collecting speckle images of the object;
and 6, inputting experimental data, and performing different operations aiming at different neural networks:
(1) For the convolutional neural network: taking the speckle images with heat flow disturbance before and after deformation in step 5 as the network input, normalizing the input, and outputting speckle images with the heat flow disturbance suppressed;
(2) For the BP neural network: taking the image coordinates with heat flow disturbance before and after deformation in step 5 as the network input, normalizing the input, and outputting the world coordinate errors with the heat flow disturbance suppressed.
Step 7, calculating the deformation of the object to be measured, performing different operations for different neural networks:
(1) For the convolutional neural network: calculating the world coordinates of the object in the world coordinate system from the speckle images predicted in step 6, and then solving the deformation of the object in step 5;
(2) For the BP neural network: calculating the world coordinates of the object in the world coordinate system from the world coordinate errors predicted in step 6, and then solving the deformation of the object in step 5.
Further, in step 2, the multi-camera system is calibrated pairwise to determine the internal and external parameters of each camera, as follows:
Step 2.1, designating one camera in the multi-camera system as the central camera, with which the other 3 cameras are calibrated pairwise;
Step 2.2, placing a black-and-white checkerboard of suitable size in the camera field of view, so that the checkerboard occupies about half of the field of view;
Step 2.3, collecting images of the checkerboard in different poses, changing the pose of the checkerboard at least 10 times;
Step 2.4, taking the images of the checkerboard poses from the central camera and the 1st camera, and determining the internal parameters of the 2 cameras and the external parameters between them by recognizing the positions of the checkerboard corner points;
Step 2.5, taking the checkerboard images from the central camera together with cameras 2 and 3 in turn, and repeating step 2.4 to obtain the internal parameters of all cameras and the external parameters among them.
Further, in step 3, acquiring training data, spraying speckle patterns on the object to be measured, randomly placing the object to be measured in the field of view of the multi-camera system, and performing different operations aiming at different neural networks, wherein the specific process is as follows:
(1) For convolutional neural networks:
Step 3A.1, spraying speckle patterns on the object to be measured, the speckle patterns being generated from a digital speckle field; the digital speckle field is designed by controlling the number of spots, the circle-center coordinates, and the circle radius, and is generated by the following 4 formulas:
X_i = X_1 + (i−1)·a,  Y_i = Y_1 + (i−1)·a    (1)
X_i′ = X_i + a·f(r)    (2)
Y_i′ = Y_i + a·f(r)    (3)
n = ρA/(0.25·πd²)    (4)
where (X_1, Y_1) are the center coordinates of the first spot, (X_i, Y_i) and (X_i′, Y_i′) are the center coordinates of the spots in the regularly distributed and randomly distributed speckle fields respectively, a is the distance between the centers of two spots in the regular field, ρ is the duty ratio, d is the speckle diameter, f(r) is a random function on the interval (−r, r) with r in the range (0, 1], and n is the number of speckles, which is related to the camera resolution A;
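A minimal sketch of the speckle-field generation described by the 4 formulas above, assuming a square grid layout and uniform jitter; the image size and parameter values are illustration choices, not prescribed by the patent:

```python
import numpy as np

def speckle_field(shape=(512, 512), rho=0.5, d=8, r=0.5, seed=0):
    """Digital speckle field: a regular grid of dots with spacing a,
    each jittered by a*f(r), with f(r) uniform on (-r, r)."""
    rng = np.random.default_rng(seed)
    H, W = shape
    n = int(rho * H * W / (0.25 * np.pi * d ** 2))  # dot count from duty ratio
    m = int(round(np.sqrt(n)))                      # assumed square m x m layout
    a = W / m                                       # center-to-center spacing
    ys, xs = np.mgrid[0:m, 0:m]
    X = (xs + 0.5) * a + a * rng.uniform(-r, r, (m, m))  # jittered centers
    Y = (ys + 0.5) * a + a * rng.uniform(-r, r, (m, m))
    img = np.full((H, W), 255, np.uint8)            # white background
    rad = d / 2.0
    for cx, cy in zip(X.ravel(), Y.ravel()):        # paint each black dot
        x0, x1 = max(int(cx - rad) - 1, 0), min(int(cx + rad) + 2, W)
        y0, y1 = max(int(cy - rad) - 1, 0), min(int(cy + rad) + 2, H)
        if x0 >= x1 or y0 >= y1:
            continue
        py, px = np.mgrid[y0:y1, x0:x1]
        img[y0:y1, x0:x1][(px - cx) ** 2 + (py - cy) ** 2 <= rad ** 2] = 0
    return img

field = speckle_field()
```

The resulting pattern is binary (black dots on white) with a black-area fraction close to the duty ratio ρ, reduced slightly by dot overlap after jittering.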
Step 3A.2, collecting speckle images, as follows:
when heat flow disturbance exists, moving the object to be measured to different positions, and acquiring N speckle images by each camera; when no heat flow disturbance exists, moving the object to be measured to the same collecting position when the heat flow disturbance exists, and collecting N speckle images by each camera;
(2) For the BP neural network:
step 3B.1, spraying a speckle pattern on the object to be measured, which is the same as the step 3A.1;
step 3B.2, collecting speckle images, which is the same as the step 3A.2;
step 3B.3, solving the image coordinates by the speckle images, wherein the specific mode is as follows:
First, a speckle image before deformation is selected as the reference image and a point in it is taken as the point to be measured; its image coordinates (u_0, v_0) are determined (since the point is chosen manually, its image coordinates are known). A reference subregion is set centered on the point to be measured, and the corresponding target subregion on the target image is found by maximizing the cross-correlation coefficient C_cc; the center of the target subregion is the image coordinates (u_1, v_1) of the point to be measured in the target image. The cross-correlation coefficient C_cc is expressed as follows:
C_cc = Σ_i [f(x_i, y_i) − f̄][g(x_i′, y_i′) − ḡ] / √( Σ_i [f(x_i, y_i) − f̄]² · Σ_i [g(x_i′, y_i′) − ḡ]² )    (5)
where f(x_i, y_i) is the gray value at coordinates (x_i, y_i) in the reference-image subregion, g(x_i′, y_i′) is the gray value at coordinates (x_i′, y_i′) in the target-image subregion (all coordinates are local coordinates centered on the subregion midpoint), f̄ is the average gray value of the reference subregion, and ḡ is the average gray value of the equally sized subregion in the target image;
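The subset matching driven by C_cc can be sketched as an integer-pixel search; the subset half-width and search radius below are arbitrary illustration values, and a practical DIC code would add sub-pixel refinement:

```python
import numpy as np

def zncc(f, g):
    """Cross-correlation coefficient C_cc between two equal-size subsets."""
    fm, gm = f - f.mean(), g - g.mean()
    return float((fm * gm).sum() / np.sqrt((fm ** 2).sum() * (gm ** 2).sum()))

def match_point(ref, tgt, u0, v0, half=10, search=8):
    """Slide the reference subset centred at (u0, v0) over the target image
    and keep the integer position maximising C_cc."""
    sub = ref[v0 - half:v0 + half + 1, u0 - half:u0 + half + 1].astype(float)
    best_c, best_uv = -2.0, (u0, v0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            v, u = v0 + dv, u0 + du
            cand = tgt[v - half:v + half + 1, u - half:u + half + 1].astype(float)
            if cand.shape != sub.shape:
                continue
            c = zncc(sub, cand)
            if c > best_c:
                best_c, best_uv = c, (u, v)
    return best_uv, best_c

# Synthetic check: shift a random image by (3, 5) pixels and recover the shift
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (100, 100)).astype(np.uint8)
tgt = np.roll(ref, (5, 3), axis=(0, 1))   # content moves down 5, right 3
(u1, v1), c = match_point(ref, tgt, 50, 50)
```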
Step 3B.4, solving the world coordinates from the image coordinates, as follows:
Taking four cameras as an example, a point P in the world coordinate system has coordinates (X, Y, Z) and image coordinates (u, v) in a camera; according to the pinhole imaging model, the world and image coordinates are related as follows:
s·[u, v, 1]^T = A_i [R_i | T_i] [X, Y, Z, 1]^T,  where  A_i = [ f_x  f_s  c_x ; 0  f_y  c_y ; 0  0  1 ]
where A_i is the intrinsic parameter matrix of the camera, R_i and T_i are the rotation and translation matrices of the camera, r and t are elements of those matrices, s is the projection of the object-point-to-optical-center distance onto the optical axis, c_x and c_y are the image coordinates of the intersection of the optical axis (the symmetry axis of the optical system) with the image plane, f_s is the tilt factor between the two image-plane coordinate axes, and f_x, f_y are the equivalent focal lengths, i.e. the ratios of the focal length f to the horizontal and vertical physical sizes of a single pixel. For a four-camera system, superscripts 0, 1, 2, 3 denote the left, right, top, and bottom cameras; the projection of a spatial point onto each camera plane is then:
s^j·[u^j, v^j, 1]^T = A^j [R^j | T^j] [X, Y, Z, 1]^T,  j = 0, 1, 2, 3
obtaining world coordinates of three-dimensional points in a world coordinate system by using image coordinates of corresponding points in a camera:
(u^j·p_3^j − p_1^j)·[X, Y, Z, 1]^T = 0,  (v^j·p_3^j − p_2^j)·[X, Y, Z, 1]^T = 0,  j = 0, 1, 2, 3
where p_k^j denotes the k-th row of the projection matrix P^j = A^j [R^j | T^j]; solving this over-determined linear system in the least-squares sense yields the world coordinates (X, Y, Z);
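Recovering the world coordinates from the image coordinates of the corresponding points is a linear least-squares (DLT-style) triangulation; the sketch below uses four synthetic cameras whose intrinsics and translations are invented for the example:

```python
import numpy as np

def triangulate(Ps, uvs):
    """Least-squares world point from >= 2 cameras: each camera with
    projection matrix P (3x4) and image point (u, v) contributes two rows
    of an over-determined homogeneous system, solved by SVD."""
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.array(rows))
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

# Four synthetic cameras sharing intrinsics, displaced from one another
K = np.array([[1000., 0., 320.], [0., 1000., 240.], [0., 0., 1.]])
ts = [(0., 0., 0.), (-0.5, 0., 0.), (0., -0.5, 0.), (0.3, 0.2, 0.)]
Ps = [K @ np.hstack([np.eye(3), np.array(t).reshape(3, 1)]) for t in ts]

X_true = np.array([0.1, -0.2, 5.0])          # point in the world frame
uvs = []
for P in Ps:
    h = P @ np.append(X_true, 1.0)
    uvs.append((h[0] / h[2], h[1] / h[2]))   # ideal image coordinates

X_est = triangulate(Ps, uvs)
```

With noise-free correspondences the estimate matches the true point; with real, disturbed image coordinates the least-squares solution spreads the residual over all four views.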
Step 3B.5, solving the world coordinate errors from the world coordinates, as follows:
Under the condition of no heat flow, 100 groups of static speckle patterns of the object to be measured are taken at each position, and the world coordinate mean is calculated:
X̄ = (1/100)·Σ_{i=1}^{100} X_i,  Ȳ = (1/100)·Σ_{i=1}^{100} Y_i,  Z̄ = (1/100)·Σ_{i=1}^{100} Z_i
the world coordinate error is therefore expressed as:
ΔX_i = X_i − X̄,  ΔY_i = Y_i − Ȳ,  ΔZ_i = Z_i − Z̄
where X̄, Ȳ, Z̄ are the world coordinate means of the point to be measured, and X_i, Y_i, Z_i are the world coordinates of the point to be measured.
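The mean and error over the 100 static captures amount to a per-point average and deviation; a small NumPy sketch, where the true position and the noise level are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# 100 static captures of one point under no heat flow (units: mm, assumed)
coords = np.array([200.0, -50.0, 800.0]) + rng.normal(0.0, 0.02, (100, 3))

mean = coords.mean(axis=0)      # world-coordinate mean over the 100 captures
errors = coords - mean          # per-capture world-coordinate error
```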
Further, in step 4, a neural network is built and trained, different operations are performed on different neural networks, and the specific process is as follows:
(1) For convolutional neural networks:
Taking the N speckle images with heat flow disturbance obtained in step 3 as input and the N speckle images without heat flow disturbance as output, a convolutional neural network is constructed to train on the speckle images;
Step 4A.1, feature extraction: the input speckle image first passes through the first convolution layer and the first batch-normalization layer to give the shallow feature X_1; X_1 passes through the second convolution layer, the second batch-normalization layer, and the first dropout layer in sequence to give feature X_2; X_2 passes through the third convolution layer, the third batch-normalization layer, and the second dropout layer to give feature X_3; and X_3 passes through the fourth convolution layer, the fourth batch-normalization layer, and the third dropout layer to give the deep feature X_4;
Step 4A.2, feature fusion: X_4 passes through the first deconvolution layer, the fifth batch-normalization layer, and the fourth dropout layer in sequence, and is fused with X_3 by the first adder to give X_5; X_5 passes through the second deconvolution layer, the sixth batch-normalization layer, and the fifth dropout layer, and is fused with X_2 by the second adder to give X_6; X_6 passes through the third deconvolution layer, the seventh batch-normalization layer, and the sixth dropout layer, and is fused with X_1 by the third adder to give X_7; X_7 passes through the fourth deconvolution layer to give X_8; X_8 passes through a set of residual structures to give X_9; finally X_9 passes through the fifth convolution layer and is fused with X_8 by the fourth adder to give the final feature X_10, which is the speckle image to be output;
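Steps 4A.1 and 4A.2 describe an encoder-decoder with additive skip connections; a minimal PyTorch sketch follows, in which the channel widths, kernel sizes, dropout rate, and activations are assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn

class Down(nn.Module):
    """Conv + batch-norm (+ dropout) encoder stage (step 4A.1)."""
    def __init__(self, cin, cout, drop):
        super().__init__()
        layers = [nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                  nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
        if drop:
            layers.append(nn.Dropout2d(0.1))
        self.body = nn.Sequential(*layers)
    def forward(self, x):
        return self.body(x)

class Up(nn.Module):
    """Deconv + batch-norm + dropout decoder stage (step 4A.2)."""
    def __init__(self, cin, cout):
        super().__init__()
        self.body = nn.Sequential(
            nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
            nn.BatchNorm2d(cout), nn.ReLU(inplace=True), nn.Dropout2d(0.1))
    def forward(self, x):
        return self.body(x)

class SpeckleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.e1 = Down(1, 16, drop=False)     # -> X1
        self.e2 = Down(16, 32, drop=True)     # -> X2
        self.e3 = Down(32, 64, drop=True)     # -> X3
        self.e4 = Down(64, 128, drop=True)    # -> X4
        self.d1, self.d2, self.d3 = Up(128, 64), Up(64, 32), Up(32, 16)
        self.d4 = nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1)  # -> X8
        self.res = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1),      # residual
                                 nn.ReLU(inplace=True),              # structure
                                 nn.Conv2d(8, 1, 3, padding=1))      # -> X9
        self.out = nn.Conv2d(1, 1, 3, padding=1)                     # fifth conv
    def forward(self, x):
        x1 = self.e1(x); x2 = self.e2(x1); x3 = self.e3(x2); x4 = self.e4(x3)
        x5 = self.d1(x4) + x3                 # first adder
        x6 = self.d2(x5) + x2                 # second adder
        x7 = self.d3(x6) + x1                 # third adder
        x8 = self.d4(x7)
        x9 = x8 + self.res(x8)                # residual skip
        return self.out(x9) + x8              # fourth adder -> X10

net = SpeckleNet().eval()
with torch.no_grad():
    y = net(torch.randn(2, 1, 64, 64))
```

The output has the same shape as the input speckle image, as required for the image-to-image restoration task.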
(2) For the BP neural network:
Taking the 8N image coordinates with heat flow disturbance obtained in step 3 as input and the 3N world coordinate errors without heat flow disturbance as output, a BP neural network is constructed to train on the data;
Step 4B.1, forward propagation of the signal: the image coordinates first pass through the nodes of the input layer to give the input-layer output O_j; O_j then propagates to the nodes of the hidden layer to give the hidden-layer output P_j; finally P_j propagates to the nodes of the output layer to give the output-layer output Q_k;
Step 4B.2, error back-propagation: an error function E between the output-layer values and the true values is established; the output-layer weights are adjusted to minimize E, and then the hidden-layer weights are adjusted to minimize E;
Step 4B.3, repeating steps 4B.1 and 4B.2 until the error function E meets the required tolerance; the output value Q_k of the output layer is then the world coordinate error to be output.
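Steps 4B.1 to 4B.3 are standard back-propagation. The self-contained NumPy sketch below trains one hidden layer on a toy 8-input/3-output mapping standing in for the image-coordinate-to-error data; the layer sizes, learning rate, and target function are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the 8N -> 3N mapping: 8 image coordinates in, 3 errors out
X = rng.uniform(-1.0, 1.0, (256, 8))
Y = np.tanh(X @ rng.normal(0.0, 0.5, (8, 3)))   # target function to learn

W1 = rng.normal(0.0, 0.3, (8, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(0.0, 0.3, (16, 3)); b2 = np.zeros(3)    # output layer

def forward(X):
    H = np.tanh(X @ W1 + b1)          # step 4B.1: hidden-layer output P_j
    return H, H @ W2 + b2             # output-layer value Q_k

_, Q0 = forward(X)
mse0 = float(((Q0 - Y) ** 2).mean())  # error before training

lr = 0.3
for _ in range(4000):
    H, Q = forward(X)
    E = Q - Y                                        # gradient of 1/2 * MSE
    gW2, gb2 = H.T @ E / len(X), E.mean(axis=0)      # step 4B.2: output layer
    dH = (E @ W2.T) * (1.0 - H ** 2)                 # back-propagated error
    gW1, gb1 = X.T @ dH / len(X), dH.mean(axis=0)    # hidden layer
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((forward(X)[1] - Y) ** 2).mean())       # step 4B.3: check E
```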
Further, in step 7, the deformation of the object to be measured is calculated, and different operations are performed for different neural networks, and the specific process is as follows:
(1) For convolutional neural networks:
step 7A.1, calculating the world coordinates of the object to be measured through the speckle image predicted by the convolutional neural network in the step 6;
Step 7A.2, calculating the three-dimensional displacement of the object in step 5 from the world coordinates: if the world coordinates of the object before deformation are (X_0, Y_0, Z_0) and after deformation are (X_i, Y_i, Z_i), the three-dimensional displacement is:
U = X_i − X_0,  V = Y_i − Y_0,  W = Z_i − Z_0,  r_i = √(U² + V² + W²)
where U, V, W are the displacements in the three directions and r_i is the total displacement;
Step 7A.3, calculating the three-dimensional strain of the object in step 5 from the three-dimensional displacement: a local coordinate system O_e is established, and the three-dimensional coordinates and displacements of the grid points in the world coordinate system before deformation are converted into O_e to give (X_e, Y_e, Z_e) and (U_e, V_e, W_e); the displacement field functions U_e, V_e, W_e over (X_e, Y_e, Z_e) are then obtained by quadric-surface fitting, and the full-field strain is expressed as follows:
ε_xx = ∂U_e/∂X_e,  ε_yy = ∂V_e/∂Y_e,  ε_zz = ∂W_e/∂Z_e,
ε_xy = ε_yx = (1/2)(∂U_e/∂Y_e + ∂V_e/∂X_e),
ε_yz = ε_zy = (1/2)(∂V_e/∂Z_e + ∂W_e/∂Y_e),
ε_zx = ε_xz = (1/2)(∂W_e/∂X_e + ∂U_e/∂Z_e)
where ε_xx, ε_yy, ε_zz, ε_yz, ε_zy, ε_xy, ε_yx, ε_zx, ε_xz are the components of the strain tensor, and (X_e, Y_e, Z_e) and (U_e, V_e, W_e) are the coordinates and displacements in the local coordinate system;
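The surface fit and differentiation of step 7A.3 can be sketched in 2-D (one displacement component over a plane of grid points); the displacement field and evaluation point below are invented so that the strain has a known exact value:

```python
import numpy as np

# Grid points in the local frame and a displacement field with known gradients:
# U_e = 0.002*X_e + 0.001*Y_e, so dU_e/dX_e = 0.002 and dU_e/dY_e = 0.001
Xg, Yg = np.meshgrid(np.linspace(0.0, 10.0, 11), np.linspace(0.0, 10.0, 11))
x, y = Xg.ravel(), Yg.ravel()
U = 0.002 * x + 0.001 * y

# Quadric-surface fit: U_e ~ a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
A = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
coef, *_ = np.linalg.lstsq(A, U, rcond=None)

x0, y0 = 5.0, 5.0                                     # evaluate at patch centre
dU_dx = coef[1] + 2.0 * coef[3] * x0 + coef[4] * y0   # eps_xx = dU_e/dX_e
dU_dy = coef[2] + coef[4] * x0 + 2.0 * coef[5] * y0   # contributes to eps_xy
```

Differentiating the fitted surface instead of finite-differencing the raw displacements smooths the noise that DIC displacement fields typically carry.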
(2) For the BP neural network:
Step 7B.1, calculating the world coordinates of the object from the world coordinate errors predicted by the BP neural network in step 6, as follows:
X_i = X̄ + ΔX_i,  Y_i = Ȳ + ΔY_i,  Z_i = Z̄ + ΔZ_i
where X̄, Ȳ, Z̄ are the world coordinate means and ΔX_i, ΔY_i, ΔZ_i are the world coordinate errors;
Step 7B.2, calculating the three-dimensional displacement of the object in step 5 from the world coordinates, the same as step 7A.2;
Step 7B.3, calculating the three-dimensional strain of the object in step 5 from the three-dimensional displacement, the same as step 7A.3.
Compared with the prior art, the invention has the following notable advantages: 1) a machine learning model is applied to the digital image correlation technique for the first time, resolving the influence of heat flow disturbance on imaging quality, effectively suppressing the light deflection and imaging drift that heat flow disturbance causes in the image, and improving the calculation accuracy of digital image correlation; 2) the constructed neural network improves accuracy for the specific application scenario, achieves rapid measurement with few training data, and performs well in experiments; 3) the method applies well to measurements with different numbers of cameras and generalizes across specific measurement procedures.
Drawings
FIG. 1 is a schematic view of an experimental apparatus according to the present invention.
FIG. 2 is a flow chart of the method of the present invention.
FIG. 3 is a comparison of the effect of the present method and the conventional method for different numbers of cameras: (a) a two-camera system; (b) a three-camera system; (c) a four-camera system.
1: an electronic computer;
2: an optical platform;
3: a camera fixture;
4: an industrial camera;
5: a high resolution lens;
6: an object to be measured.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
From the standpoint of both algorithm design and experimental detection, the method conveniently and quickly resolves the influence of heat flow disturbance on stereoscopic vision imaging quality. It offers high calculation accuracy, simple equipment requirements, convenience and practicality, low algorithm complexity, and high calculation speed.
As shown in fig. 1, a stereoscopic vision deformation measurement method for suppressing heat flow disturbance uses the following devices: an electronic computer 1, an optical platform 2, a camera fixture 3, an industrial camera 4, a high-resolution lens 5, and an object to be measured 6. The industrial cameras used in the test experiment have a resolution of 4 megapixels, and the lens focal length is 35 mm. The method comprises the following steps:
step 1, fixing an experimental device: the method comprises the following steps of orthogonally arranging four industrial cameras, fixing the industrial cameras on an optical platform, fixing an object to be measured on the optical platform, adjusting the lens directions of the four cameras to point to the object to be measured, enabling the object to be measured to be in a central position in the visual angle of the cameras, and adjusting the aperture and the focal distance of the cameras to be appropriate;
Step 2, calibrating the system parameters of the multiple cameras: calibrating the four cameras pairwise to determine the internal and external parameters of each camera, as follows:
Step 2.1, designating one camera in the multi-camera system as the central camera, with which the other 3 cameras are calibrated pairwise;
Step 2.2, placing a black-and-white checkerboard of suitable size in the camera field of view, so that the checkerboard occupies about half of the field of view;
Step 2.3, acquiring images of the checkerboard in different poses, changing the pose of the checkerboard at least 10 times;
Step 2.4, taking the images of the checkerboard poses from the central camera and the 1st camera, and determining the internal parameters of the 2 cameras and the external parameters between them by identifying the positions of the checkerboard corner points;
Step 2.5, taking the checkerboard images from the central camera together with cameras 2 and 3 in turn, and repeating step 2.4 to obtain the internal parameters of all cameras and the external parameters among them.
Step 3, obtaining training data: the speckle pattern is sprayed on the object to be measured, the object to be measured is randomly placed in the field of view of the multi-camera system, and different operations are carried out aiming at different neural networks:
(1) For convolutional neural networks: when heat flow disturbance exists, moving the object to be measured to different positions, and acquiring N speckle images by each camera; when no heat flow disturbance exists, moving the object to be measured to the same collecting position when the heat flow disturbance exists, and collecting N speckle images by each camera;
(2) For the BP neural network: when heat flow disturbance exists, the object to be measured is moved to different positions and each camera collects N speckle images; 8N image coordinates are obtained by solving (one speckle image from one camera yields image coordinates in 2 directions, so N speckle images from 4 cameras yield 8N image coordinates); when no heat flow disturbance exists, the object is moved to the same collection positions used when heat flow disturbance exists, each camera collects N speckle images, and 3N world coordinate errors are obtained by solving (one speckle image captured by the 4 cameras yields the object's three-dimensional coordinate errors in 3 directions, so N speckle images from the 4 cameras yield 3N world coordinate errors);
the specific process is as follows:
(1) For convolutional neural networks:
step 3.1, spraying speckle patterns on the object to be measured, wherein the speckle patterns are generated according to a digital speckle field, the digital speckle field is designed and manufactured by controlling the number of spots, the coordinates of the circle center and the radius of a circle, and the digital speckle field is generated by the following 4 formulas:
X_i = X_1 + (i−1)·a,  Y_i = Y_1 + (i−1)·a    (1)
X_i′ = X_i + a·f(r)    (2)
Y_i′ = Y_i + a·f(r)    (3)
n = ρA/(0.25·πd²)    (4)
where (X_1, Y_1) are the center coordinates of the first spot, (X_i, Y_i) and (X_i′, Y_i′) are the center coordinates of the spots in the regularly distributed and randomly distributed speckle fields respectively, a is the distance between the centers of two spots in the regular field, ρ is the duty ratio, d is the speckle diameter, f(r) is a random function on the interval (−r, r) with r in the range (0, 1], and n is the number of speckles, which is related to the camera resolution A;
step 3.2, collecting speckle images, wherein the specific modes are as follows:
when heat flow disturbance exists, moving the object to be measured to different positions, and acquiring N speckle images by each camera; when no heat flow disturbance exists, moving the object to be measured to the same position, and collecting N speckle images by each camera;
(2) For the BP neural network:
and 3.3, spraying a speckle pattern on the object to be measured, which is the same as the step 3.1.
And 3.4, collecting the speckle images, which is the same as the step 3.2.
And 3.5, solving the image coordinates by the speckle images, wherein the specific mode is as follows:
First, a speckle image before deformation is selected as the reference image, a point in it is taken as the point to be measured, and its image coordinates (u_0, v_0) are determined (since the point is chosen manually, its image coordinates are known); a reference subregion is set centered on the point to be measured, and the corresponding target subregion on the target image is found by maximizing the cross-correlation coefficient C_cc; the center of the target subregion is the image coordinates (u_1, v_1) of the point to be measured in the target image, where the cross-correlation coefficient C_cc is expressed as follows:
C_cc = Σ_i [f(x_i, y_i) − f̄][g(x_i′, y_i′) − ḡ] / √( Σ_i [f(x_i, y_i) − f̄]² · Σ_i [g(x_i′, y_i′) − ḡ]² )    (5)
where f(x_i, y_i) is the gray value at coordinates (x_i, y_i) in the reference-image subregion, g(x_i′, y_i′) is the gray value at coordinates (x_i′, y_i′) in the target-image subregion (all coordinates are local coordinates centered on the subregion midpoint), f̄ is the average gray value of the reference subregion, and ḡ is the average gray value of the equally sized subregion in the target image;
and 3.6, solving the world coordinate by the image coordinate, wherein the specific mode is as follows:
taking four cameras as an example, a point P coordinate in a world coordinate system is (X, Y, Z), an image coordinate in a camera coordinate system is (u, v), and the world coordinate and the image coordinate have the following relationship according to a pinhole imaging model:
s·[u, v, 1]^T = A_i [R_i | T_i] [X, Y, Z, 1]^T,  where  A_i = [ f_x  f_s  c_x ; 0  f_y  c_y ; 0  0  1 ]
where A_i is the intrinsic parameter matrix of the camera, R_i and T_i are the rotation and translation matrices of the camera, and r and t are elements of those matrices. s is the projection of the object-point-to-optical-center distance onto the optical axis; the image coordinates of the intersection of the optical axis with the image plane are c_x and c_y, the optical axis being the symmetry axis of the optical system. f_s is the tilt factor (also called the distortion parameter) between the two image-plane coordinate axes and is generally neglected. The ratios of the focal length f to the horizontal and vertical physical sizes of a single pixel are the equivalent focal lengths f_x and f_y. For a four-camera system, superscripts 0, 1, 2, 3 denote the left, right, top, and bottom cameras, and the projection of a spatial point onto each camera plane is:
Figure BDA0003927705090000124
conversely, once the image coordinates of the corresponding point in the camera are known, the world coordinates of the three-dimensional points in the world coordinate system can be obtained:
Figure BDA0003927705090000125
the world coordinate of the point to be measured can be solved by solving the hyperstatic equation
Step 3.7, solving the world coordinate error from the world coordinates, in the following specific way:
under the condition of no heat flow, 100 static speckle patterns of the object to be measured are taken at each position the object is moved to, and the world coordinate mean is calculated:

X̄ = (1/100) Σ_{i=1}^{100} X_i,  Ȳ = (1/100) Σ_{i=1}^{100} Y_i,  Z̄ = (1/100) Σ_{i=1}^{100} Z_i

the world coordinate error is therefore expressed as:

ΔX_i = X_i − X̄,  ΔY_i = Y_i − Ȳ,  ΔZ_i = Z_i − Z̄

in the formula, (X̄, Ȳ, Z̄) is the world coordinate mean of the point to be measured and X_i, Y_i, Z_i are the world coordinates of the point to be measured in the i-th frame.
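The mean and error computation above can be sketched in a few lines; the coordinate values here are hypothetical stand-ins for 100 static triangulated positions of one point:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 100 static measurements of one point's world coordinates (mm):
# a fixed position plus small per-frame triangulation noise.
coords = np.array([10.0, 5.0, 250.0]) + 0.01 * rng.standard_normal((100, 3))

mean = coords.mean(axis=0)   # (X-bar, Y-bar, Z-bar)
errors = coords - mean       # (dX_i, dY_i, dZ_i): labels for the BP network
```

The per-frame errors, not the raw coordinates, are what the BP network is trained to predict, so the static mean acts as the ground-truth reference.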
Step 4, building a neural network and training, and carrying out different operations aiming at different neural networks:
(1) For convolutional neural networks: building a convolution neural network with N speckle image inputs and N speckle image outputs, taking the speckle image with heat flow disturbance as the neural network input, carrying out normalization processing on the input, taking the speckle image without heat flow disturbance as the neural network output, and training a convolution neural network model;
(2) For the BP neural network: and constructing a BP neural network with 8N data inputs and 3N data outputs, taking the image coordinates with heat flow disturbance as the neural network inputs, normalizing the inputs, taking the world coordinate error without heat flow disturbance as the neural network outputs, and training a BP neural network model.
The specific process is as follows:
(1) For convolutional neural networks:
through step 3, N speckle images with heat flow disturbance are obtained as input and N speckle images without heat flow disturbance as output; the following convolutional neural network is constructed and trained on these data.
Step 4.1, feature extraction: the input speckle image first passes through the first convolution layer and the first batch normalization layer to obtain the shallow feature X_1; X_1 passes in turn through the second convolution layer, the second batch normalization layer and the first dropout layer to obtain the further feature X_2; X_2 passes in turn through the third convolution layer, the third batch normalization layer and the second dropout layer to obtain the further feature X_3; X_3 passes in turn through the fourth convolution layer, the fourth batch normalization layer and the third dropout layer to obtain the deep feature X_4.

Step 4.2, feature fusion: feature X_4 passes in turn through the first deconvolution, the fifth batch normalization layer and the fourth dropout layer, and is fused with feature X_3 through the first adder to obtain feature X_5; X_5 passes in turn through the second deconvolution, the sixth batch normalization layer and the fifth dropout layer, and is fused with feature X_2 through the second adder to obtain feature X_6; X_6 passes in turn through the third deconvolution, the seventh batch normalization layer and the sixth dropout layer, and is fused with feature X_1 through the third adder to obtain feature X_7; X_7 passes through the fourth deconvolution to obtain feature X_8; X_8 passes through a set of residual structures to obtain feature X_9; finally X_9 passes through the fifth convolution layer and is fused with X_8 through the fourth adder to obtain the final fused feature X_10, which is the speckle image to be output.
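The X_1…X_10 pipeline is an encoder-decoder with additive skip connections. The patent does not give channel counts, strides or kernel sizes, so everything dimensional below (stride-2 3×3 convolutions, 4×4 stride-2 transposed convolutions, a single-channel tail, the class name) is an illustrative assumption only:

```python
import torch
import torch.nn as nn

class SpeckleRestorer(nn.Module):
    """Sketch of the X1..X10 structure; layer sizes are assumed, not from the patent."""
    def __init__(self, ch=16):
        super().__init__()
        def down(cin, cout, dropout):
            layers = [nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.BatchNorm2d(cout)]
            if dropout:
                layers.append(nn.Dropout2d(0.1))
            return nn.Sequential(*layers)
        def up(cin, cout):
            return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                                 nn.BatchNorm2d(cout), nn.Dropout2d(0.1))
        self.e1 = down(1, ch, dropout=False)          # X1: conv1 + BN1
        self.e2 = down(ch, 2 * ch, dropout=True)      # X2
        self.e3 = down(2 * ch, 4 * ch, dropout=True)  # X3
        self.e4 = down(4 * ch, 8 * ch, dropout=True)  # X4
        self.d1 = up(8 * ch, 4 * ch)                  # + X3 -> X5 (first adder)
        self.d2 = up(4 * ch, 2 * ch)                  # + X2 -> X6 (second adder)
        self.d3 = up(2 * ch, ch)                      # + X1 -> X7 (third adder)
        self.d4 = nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1)   # X8
        self.res = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(1, 1, 3, padding=1))       # residual -> X9
        self.conv5 = nn.Conv2d(1, 1, 3, padding=1)    # fifth conv; X10 = conv5(X9) + X8

    def forward(self, x):
        x1 = self.e1(x); x2 = self.e2(x1); x3 = self.e3(x2); x4 = self.e4(x3)
        x5 = self.d1(x4) + x3
        x6 = self.d2(x5) + x2
        x7 = self.d3(x6) + x1
        x8 = self.d4(x7)
        x9 = x8 + self.res(x8)        # a set of residual structures
        return self.conv5(x9) + x8    # X10: restored speckle image
```

With four stride-2 stages, input height and width must be divisible by 16 so that the skip additions line up; the additive skips let the decoder reuse the encoder's speckle texture while the residual tail refines the heat-flow correction.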
(2) For the BP neural network:
through step 3, 8N image coordinates are obtained as input and 3N world coordinate errors as output; the following BP neural network is constructed and trained on these data.
Step 4.3, forward propagation of the signal: the image coordinates first pass through each node of the input layer to give the input-layer output O_j; O_j then propagates to each node of the hidden layer to give the hidden-layer output P_j; finally P_j propagates to each node of the output layer to give the output-layer output Q_k.

Step 4.4, error back propagation: an error function E between the output value of the output layer and the true value is first established; the weights of the output layer are adjusted so as to minimize E, and then the weights of the hidden layer are adjusted so as to minimize E.

Step 4.5, steps 4.3 and 4.4 are repeated until the error function E meets the requirement; the output value Q_k of the output layer is then the world coordinate error to be output.
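A minimal NumPy sketch of this forward/backward loop, using the 8-10-3 layout with the tansig activation and mse error mentioned in the embodiment; the initialization scale, learning rate and training data here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# 8 inputs (image coordinates), 10 hidden nodes, 3 outputs (coordinate errors)
W1 = rng.normal(0, 0.1, (10, 8)); b1 = np.zeros((10, 1))
W2 = rng.normal(0, 0.1, (3, 10)); b2 = np.zeros((3, 1))

def tansig(x):
    return np.tanh(x)  # MATLAB's tansig is the hyperbolic tangent

def forward(x):
    p = tansig(W1 @ x + b1)  # hidden-layer output P_j
    q = W2 @ p + b2          # output-layer output Q_k (linear)
    return p, q

def train_step(x, t, lr=0.05):
    """One gradient-descent step on the squared error E = mean((q - t)^2)."""
    global W1, b1, W2, b2
    p, q = forward(x)
    e = q - t                          # dE/dq (up to a constant factor)
    dW2, db2 = e @ p.T, e              # output-layer gradients
    dp = (W2.T @ e) * (1 - p ** 2)     # back-propagated through tanh'
    dW1, db1 = dp @ x.T, dp            # hidden-layer gradients
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1
    return float(np.mean(e ** 2))

x = rng.normal(size=(8, 1))                   # dummy image-coordinate vector
t = np.array([[0.1], [0.0], [-0.1]])          # dummy coordinate-error target
losses = [train_step(x, t) for _ in range(200)]
```

Each repetition of steps 4.3 and 4.4 corresponds to one `train_step` call; training stops once the loss meets the chosen tolerance.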
Step 5, obtaining experimental data: when heat flow disturbance exists, moving the object to be detected within the moving range of the object to be detected in the step 3, and collecting an image of the object to be detected;
step 6, inputting experimental data:
(1) For convolutional neural networks: performing normalization processing on the speckle images with heat flow disturbance before and after deformation in the step 5, inputting the speckle images into a trained convolutional neural network, and outputting to obtain the speckle images after heat flow disturbance suppression;
(2) For the BP neural network: the image coordinates with heat flow disturbance before and after movement in step 5 are normalized and input into the trained BP neural network, and the world coordinate error with the heat flow disturbance suppressed is output.
And 7, calculating the deformation of the object to be detected, and performing different operations aiming at different neural networks:
(1) For convolutional neural networks: calculating the world coordinate of the object to be measured in the world coordinate system through the speckle images predicted in the step 6, and further solving the deformation of the object to be measured in the step 5;
(2) For the BP neural network: and (5) calculating the world coordinates of the object to be measured in the world coordinate system through the predicted world coordinate errors in the step 6, and further solving the deformation of the object to be measured in the step 5.
The method comprises the following specific steps:
(1) For convolutional neural networks:
7.1, calculating the world coordinates of the object to be measured from the speckle images predicted by the convolutional neural network in step 6, in the same way as step 3.6;
and 7.2, calculating the deformation of the object to be measured in step 5 from the world coordinates. The world coordinates of the object to be measured before deformation are (X_0, Y_0, Z_0) and after deformation (X_i, Y_i, Z_i); the three-dimensional displacement is then:

U = X_i − X_0,  V = Y_i − Y_0,  W = Z_i − Z_0,  r_i = √(U² + V² + W²)

in the formula, U, V, W are the displacements in the three directions and r_i is the total displacement;
and 7.3, calculating the three-dimensional strain of the object to be measured in step 5 from the three-dimensional displacement. A local coordinate system O_e is established, and the three-dimensional coordinates and three-dimensional displacements of the grid points in the world coordinate system before deformation are converted into O_e, obtaining (X_e, Y_e, Z_e) and (U_e, V_e, W_e). Displacement field functions are then obtained by quadric surface fitting, expressed as follows:

U_e(X_e, Y_e) = a_0 + a_1 X_e + a_2 Y_e + a_3 X_e² + a_4 X_e Y_e + a_5 Y_e²
V_e(X_e, Y_e) = b_0 + b_1 X_e + b_2 Y_e + b_3 X_e² + b_4 X_e Y_e + b_5 Y_e²
W_e(X_e, Y_e) = c_0 + c_1 X_e + c_2 Y_e + c_3 X_e² + c_4 X_e Y_e + c_5 Y_e²

wherein (a_0, …, a_5), (b_0, …, b_5) and (c_0, …, c_5) are the fitting coefficients of the displacement field functions U_e, V_e, W_e respectively; the full-field strain is then expressed as follows:

ε_xx = ∂U_e/∂X_e,  ε_yy = ∂V_e/∂Y_e,  ε_zz = ∂W_e/∂Z_e
ε_xy = ε_yx = (1/2)(∂U_e/∂Y_e + ∂V_e/∂X_e)
ε_yz = ε_zy = (1/2)(∂V_e/∂Z_e + ∂W_e/∂Y_e)
ε_zx = ε_xz = (1/2)(∂W_e/∂X_e + ∂U_e/∂Z_e)

in the formula, ε_xx, ε_yy, ε_zz, ε_yz, ε_zy, ε_xy, ε_yx, ε_zx, ε_xz represent the components of the strain tensor, and (X_e, Y_e, Z_e) and (U_e, V_e, W_e) represent the coordinates and displacements in the local coordinate system.
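The quadric-surface fit and the strain evaluation can be sketched with an ordinary least-squares fit of the six polynomial coefficients; the in-plane fit over (X_e, Y_e) and the synthetic displacement field below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Grid points in the local coordinate system (in-plane fit, an assumption here)
Xe, Ye = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
# Synthetic displacement field with a known gradient: U_e = 0.01*Xe + 0.002*Xe^2
Ue = 0.01 * Xe + 0.002 * Xe ** 2

# Design matrix for U_e = a0 + a1*Xe + a2*Ye + a3*Xe^2 + a4*Xe*Ye + a5*Ye^2
A = np.column_stack([np.ones_like(Xe), Xe, Ye, Xe ** 2, Xe * Ye, Ye ** 2])
a, *_ = np.linalg.lstsq(A, Ue, rcond=None)  # least-squares quadric fit

def eps_xx(x, y):
    """Strain component d(U_e)/d(X_e) from the fitted polynomial."""
    return a[1] + 2 * a[3] * x + a[4] * y
```

Differentiating the fitted polynomial instead of the raw displacements smooths out measurement noise, which is why local surface fitting is the standard route from DIC displacements to strain.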
(2) For a BP neural network:
and 7.4, calculating the world coordinates of the object to be measured from the world coordinate error predicted by the BP neural network in step 6, as follows:

X_i = X̄ + ΔX_i,  Y_i = Ȳ + ΔY_i,  Z_i = Z̄ + ΔZ_i

in the formula, (X̄, Ȳ, Z̄) is the world coordinate mean and ΔX_i, ΔY_i, ΔZ_i are the world coordinate errors.
Step 7.5, calculating the three-dimensional displacement of the object to be measured in step 5 from the world coordinates, in the same way as step 7.2.
Step 7.6, calculating the three-dimensional strain of the object to be measured in step 5 from the three-dimensional displacement, in the same way as step 7.3.
Examples
To verify the effectiveness of the inventive protocol, the following experiment was performed.
1) Data acquisition and preprocessing
The speckle pattern is sprayed on the object to be measured, which is placed randomly in the field of view of the multi-camera system. With heat flow disturbance present, the object to be measured is moved and 100 images are acquired at displacements of 0 mm, 2.75 mm, 5.5 mm, 8.25 mm and 11 mm respectively; after normalization these serve as the training set input. Without heat flow disturbance, the object is moved to the same positions and 100 images are acquired at each position as the training set output.
2) Building neural network model and training
A BP neural network with an input layer of 8 nodes, an output layer of 3 nodes and a hidden layer of 10 nodes is built; tansig is selected as the activation function and mse as the error function for training.
A convolutional neural network with 3 speckle images as input and 3 speckle images as output is built; adaptive moment estimation (ADAM) is selected as the optimizer for training.
3) Predicting actual data
In the actual experiment, speckles are sprayed on the surface of a flat plate, and the plate is moved by 1 mm, 2 mm, 3 mm, 4 mm, 5 mm, 6 mm, 7 mm, 8 mm, 9 mm and 10 mm. The acquired images are normalized and input into the trained neural network to obtain images with the heat flow disturbance suppressed, from which the three-dimensional coordinates are solved. The prediction results are shown in fig. 2: the black solid line is the displacement error calculated by the conventional three-dimensional reconstruction method, and the red solid line is the displacement error calculated by the present method (where a is a dual-camera system, b is a three-camera system and c is a four-camera system). The neural network constructed by the method effectively eliminates the influence of heat flow disturbance on the speckle images and improves digital-image-based measurement accuracy under heat flow disturbance for multi-camera systems with different numbers of cameras.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application should be subject to the appended claims.

Claims (5)

1. A stereoscopic vision deformation measurement method for inhibiting heat flow disturbance is characterized in that an experimental device comprises an industrial camera, a lens, an optical platform, a camera fixing device, an electronic computer and an object to be measured, and the measurement method comprises the following steps:
step 1, fixing an experimental device: the method comprises the following steps of orthogonally arranging four industrial cameras, fixing the industrial cameras on an optical platform, fixing an object to be measured on the optical platform, adjusting the directions of lenses of the four cameras to point to the object to be measured, enabling the object to be measured to be located at a central position in the visual angle of the cameras, and adjusting the aperture and the focal length of the cameras to be proper;
step 2, calibrating the system parameters of the multiple cameras: calibrating every two of the four cameras, and determining internal and external parameters of each camera;
step 3, obtaining training data: the speckle pattern is sprayed on the object to be measured, the object to be measured is randomly placed in the field of view of the multi-camera system, and different operations are carried out aiming at different neural networks:
(1) For convolutional neural networks: when heat flow disturbance exists, moving the object to be measured to different positions, and acquiring N speckle images by each camera; when no heat flow disturbance exists, moving the object to be measured to the same collecting position when the heat flow disturbance exists, and collecting N speckle images by each camera;
(2) For a BP neural network: when heat flow disturbance exists, the position of an object to be measured is moved to different positions, each camera collects N speckle images, and 8N image coordinates are obtained through solving; when no heat flow disturbance exists, moving the position of the object to be measured to the same collecting position when the heat flow disturbance exists, collecting N speckle images by each camera, and solving to obtain 3N world coordinate errors;
step 4, building a neural network and training, and carrying out different operations aiming at different neural networks:
(1) For convolutional neural networks: building a convolution neural network with N speckle image inputs and N speckle image outputs, taking the speckle image with heat flow disturbance as the neural network input, carrying out normalization processing on the input, taking the speckle image without heat flow disturbance as the neural network output, and training a convolution neural network model;
(2) For the BP neural network: building a BP neural network with 8N data inputs and 3N data outputs, taking image coordinates with heat flow disturbance as the neural network inputs, normalizing the inputs, taking world coordinate errors without heat flow disturbance as the neural network outputs, and training a BP neural network model;
step 5, obtaining experimental data: when heat flow disturbance exists, moving the object to be detected within the moving range of the object to be detected in the step 3, and collecting a speckle image of the object to be detected;
and 6, inputting experimental data, and performing different operations aiming at different neural networks:
(1) For convolutional neural networks: taking the speckle images with heat flow disturbance before and after deformation in the step 5 as the input of the neural network, carrying out normalization processing on the input, and outputting the speckle images after heat flow disturbance is inhibited;
(2) For the BP neural network: and (5) taking the image coordinates with heat flow disturbance before and after deformation in the step 5 as the input of the neural network, carrying out normalization processing on the input, and outputting the world coordinate error after heat flow disturbance is restrained.
Step 7, calculating the deformation of the object to be measured, and performing different operations aiming at different neural networks:
(1) For convolutional neural networks: and (5) calculating the world coordinates of the object to be measured in the world coordinate system by using the speckle images predicted in the step (6), and further solving the deformation of the object to be measured in the step (5).
(2) For the BP neural network: and (5) calculating the world coordinates of the object to be measured in the world coordinate system by using the world coordinate errors predicted in the step 6, and further solving the deformation of the object to be measured in the step 5.
2. The method for measuring stereoscopic vision deformation for inhibiting heat flow disturbance according to claim 1, wherein in the step 2, the multi-camera system is calibrated two by two to determine the internal and external parameters of each camera, and the process is as follows:
step 2.1, determining a certain camera in the multi-camera system as a central camera, and calibrating the other 3 cameras with the camera;
2.2, placing a black and white checkerboard with a proper size in the field of view of the camera, so that the checkerboard occupies half of the field of view of the camera;
2.3, acquiring images of different poses of the checkerboard, wherein the poses of the checkerboard need to be changed for at least 10 times;
step 2.4, obtaining images of different positions of checkerboards of the center camera and the 1 st camera, and determining internal parameters of the 2 cameras and external parameters between the 2 cameras by identifying the positions of the corner points of the checkerboards;
and 2.5, taking images of different positions of the checkerboard of the central camera and the cameras 2 and 3, and repeating the step 2.4 to obtain internal parameters of all the cameras and external parameters among the cameras.
3. The stereoscopic vision deformation measurement method for suppressing the heat flow disturbance according to claim 1, wherein in step 3, training data is obtained, the object to be measured is sprayed with a speckle pattern, the speckle pattern is randomly placed in the field of view of the multi-camera system, and different operations are performed for different neural networks, and the specific process is as follows:
(1) For convolutional neural networks:
step 3A.1, spraying speckle patterns on the object to be measured, wherein the speckle patterns are generated according to a digital speckle field, the digital speckle field is designed and manufactured by controlling the number of spots, coordinates of circle centers and the radius of a circle, and the digital speckle field is generated by the following 4 formulas:
Figure FDA0003927705080000021
Figure FDA0003927705080000031
Figure FDA0003927705080000032
n = ρA / (0.25·πd²) (4)
wherein (X_1, Y_1) are the center coordinates of the first speckle, (X_i, Y_i) and (X_i′, Y_i′) are the center coordinates of the speckles in the regularly distributed speckle field and the randomly distributed speckle field respectively, a is the distance between the centers of two speckles in the regularly distributed speckle field, ρ is the duty ratio, d is the speckle diameter, f(r) represents a random function on the interval (−r, r) with r in the range (0, 1], and n is the number of speckles, which is related to the resolution A of the camera;
step 3A.2, gather speckle image, the concrete mode is as follows:
when heat flow disturbance exists, moving the object to be measured to different positions, and acquiring N speckle images by each camera; when no heat flow disturbance exists, moving the object to be measured to the same collecting position when the heat flow disturbance exists, and collecting N speckle images by each camera;
(2) For a BP neural network:
step 3B.1, spraying a speckle pattern on the object to be measured, which is the same as the step 3A.1;
step 3B.2, collecting speckle images, which is the same as the step 3A.2;
step 3B.3, solving the image coordinates by the speckle images, wherein the specific mode is as follows:
firstly, selecting a certain speckle image before deformation as the reference image, taking a point in the reference image as the point to be measured, and determining its image coordinates (u_0, v_0); setting a reference sub-region with the point to be measured at its center, and finding the corresponding target sub-region on the target image by maximizing the cross-correlation coefficient C_cc, the center of the target sub-region giving the image coordinates (u_1, v_1) of the point to be measured in the target image, wherein the cross-correlation coefficient C_cc is expressed as follows:

C_cc = Σ_i [f(x_i, y_i) − f_m][g(x_i′, y_i′) − g_m] / √( Σ_i [f(x_i, y_i) − f_m]² · Σ_i [g(x_i′, y_i′) − g_m]² )

in the formula, f(x_i, y_i) is the gray value at coordinates (x_i, y_i) in the reference image sub-region, g(x_i′, y_i′) is the gray value at coordinates (x_i′, y_i′) in the target image sub-region, f_m is the average gray value of the reference sub-region, and g_m is the average gray value of the sub-region of the same size as the reference sub-region in the target image;
and 3B.4, solving the world coordinate by the image coordinate, wherein the specific mode is as follows:
taking four cameras as an example, a point P has coordinates (X, Y, Z) in the world coordinate system and image coordinates (u, v) in a camera coordinate system; according to the pinhole imaging model the world coordinates and the image coordinates are related as follows:

s [u, v, 1]^T = A_i [R_i T_i] [X, Y, Z, 1]^T,  A_i = [[f_x, f_s, c_x], [0, f_y, c_y], [0, 0, 1]]

wherein A_i is the internal reference matrix of the camera, R_i and T_i are the rotation matrix and the translation matrix of the camera, r and t are elements of these matrices, s is the projection of the distance from the object point to the optical center onto the optical axis direction, c_x and c_y are the image coordinates of the intersection of the optical axis with the image plane, the optical axis being the symmetry axis of the optical system, f_s is the tilt factor between the two coordinate axes of the image plane, and the ratios of the focal length f to the horizontal and vertical physical sizes of a single pixel are the equivalent focal lengths f_x and f_y; for a four-camera system, superscripts 0, 1, 2, 3 denote the left, right, top and bottom cameras, and the projection of a spatial point onto the camera planes is:

s^j [u^j, v^j, 1]^T = A^j [R^j T^j] [X, Y, Z, 1]^T,  j = 0, 1, 2, 3

and the world coordinates of the three-dimensional point in the world coordinate system are obtained from the image coordinates of the corresponding points in the cameras by solving the resulting overdetermined system of equations;
and 3B.5, solving a world coordinate error by the world coordinate, wherein the specific mode is as follows:
under the condition of no heat flow, 100 static speckle patterns of the object to be measured are taken at each position the object is moved to, and the world coordinate mean is calculated:

X̄ = (1/100) Σ_{i=1}^{100} X_i,  Ȳ = (1/100) Σ_{i=1}^{100} Y_i,  Z̄ = (1/100) Σ_{i=1}^{100} Z_i

the world coordinate error is therefore expressed as:

ΔX_i = X_i − X̄,  ΔY_i = Y_i − Ȳ,  ΔZ_i = Z_i − Z̄

in the formula, (X̄, Ȳ, Z̄) is the world coordinate mean of the point to be measured and X_i, Y_i, Z_i are the world coordinates of the point to be measured in the i-th frame.
4. The stereoscopic vision deformation measurement method for inhibiting heat flow disturbance according to claim 1, wherein in the step 4, a neural network is built and trained, different operations are performed on different neural networks, and the specific process is as follows:
(1) For convolutional neural networks:
taking the N speckle images with heat flow disturbance obtained in step 3 as input and the N speckle images without heat flow disturbance as output, a convolutional neural network is constructed and trained;
step 4A.1, feature extraction: the input speckle image first passes through the first convolution layer and the first batch normalization layer to obtain the shallow feature X_1; X_1 passes in turn through the second convolution layer, the second batch normalization layer and the first dropout layer to obtain the further feature X_2; X_2 passes in turn through the third convolution layer, the third batch normalization layer and the second dropout layer to obtain the further feature X_3; X_3 passes in turn through the fourth convolution layer, the fourth batch normalization layer and the third dropout layer to obtain the deep feature X_4;

step 4A.2, feature fusion: feature X_4 passes in turn through the first deconvolution, the fifth batch normalization layer and the fourth dropout layer, and is fused with feature X_3 through the first adder to obtain feature X_5; X_5 passes in turn through the second deconvolution, the sixth batch normalization layer and the fifth dropout layer, and is fused with feature X_2 through the second adder to obtain feature X_6; X_6 passes in turn through the third deconvolution, the seventh batch normalization layer and the sixth dropout layer, and is fused with feature X_1 through the third adder to obtain feature X_7; X_7 passes through the fourth deconvolution to obtain feature X_8; X_8 passes through a set of residual structures to obtain feature X_9; finally X_9 passes through the fifth convolution layer and is fused with X_8 through the fourth adder to obtain the final fused feature X_10, which is the speckle image to be output;
(2) For the BP neural network:
taking the 8N image coordinates with heat flow disturbance obtained in step 3 as input and the 3N world coordinate errors without heat flow disturbance as output, a BP neural network is constructed and trained;
step 4B.1, forward propagation of the signal: the image coordinates first pass through each node of the input layer to give the input-layer output O_j; O_j then propagates to each node of the hidden layer to give the hidden-layer output P_j; finally P_j propagates to each node of the output layer to give the output-layer output Q_k;

step 4B.2, error back propagation: an error function E between the output value of the output layer and the true value is first established; the weights of the output layer are adjusted so as to minimize E, and then the weights of the hidden layer are adjusted so as to minimize E;

step 4B.3, steps 4B.1 and 4B.2 are repeated until the error function E meets the requirement; the output value Q_k of the output layer is then the world coordinate error to be output.
5. The method of claim 1, wherein in step 7, the deformation of the object to be measured is calculated, and different operations are performed on different neural networks, and the specific process is as follows:
(1) For convolutional neural networks:
step 7A.1, calculating the world coordinates of the object to be measured through the speckle image predicted by the convolutional neural network in the step 6;
step 7A.2, calculating the three-dimensional displacement of the object to be measured in step 5 from the world coordinates, wherein the world coordinates of the object to be measured before deformation are (X_0, Y_0, Z_0) and after deformation (X_i, Y_i, Z_i); the three-dimensional displacement is then:

U = X_i − X_0,  V = Y_i − Y_0,  W = Z_i − Z_0,  r_i = √(U² + V² + W²)

in the formula, U, V, W are the displacements in the three directions and r_i is the total displacement;
step 7A.3, calculating the three-dimensional strain of the object to be measured in step 5 from the three-dimensional displacement, establishing a local coordinate system O_e, and converting the three-dimensional coordinates and three-dimensional displacements of the grid points in the world coordinate system before deformation into O_e to obtain (X_e, Y_e, Z_e) and (U_e, V_e, W_e); displacement field functions are obtained by quadric surface fitting, expressed as follows:

U_e(X_e, Y_e) = a_0 + a_1 X_e + a_2 Y_e + a_3 X_e² + a_4 X_e Y_e + a_5 Y_e²
V_e(X_e, Y_e) = b_0 + b_1 X_e + b_2 Y_e + b_3 X_e² + b_4 X_e Y_e + b_5 Y_e²
W_e(X_e, Y_e) = c_0 + c_1 X_e + c_2 Y_e + c_3 X_e² + c_4 X_e Y_e + c_5 Y_e²

wherein (a_0, …, a_5), (b_0, …, b_5) and (c_0, …, c_5) are the fitting coefficients of the displacement field functions U_e, V_e, W_e respectively; the full-field strain is then expressed as follows:

ε_xx = ∂U_e/∂X_e,  ε_yy = ∂V_e/∂Y_e,  ε_zz = ∂W_e/∂Z_e
ε_xy = ε_yx = (1/2)(∂U_e/∂Y_e + ∂V_e/∂X_e)
ε_yz = ε_zy = (1/2)(∂V_e/∂Z_e + ∂W_e/∂Y_e)
ε_zx = ε_xz = (1/2)(∂W_e/∂X_e + ∂U_e/∂Z_e)

in the formula, ε_xx, ε_yy, ε_zz, ε_yz, ε_zy, ε_xy, ε_yx, ε_zx, ε_xz represent the components of the strain tensor, and (X_e, Y_e, Z_e) and (U_e, V_e, W_e) represent the coordinates and displacements in the local coordinate system;
(2) For the BP neural network:
step 7B.1, calculating the world coordinates of the object to be measured from the world coordinate error predicted by the BP neural network in step 6, as follows:

X_i = X̄ + ΔX_i,  Y_i = Ȳ + ΔY_i,  Z_i = Z̄ + ΔZ_i

in the formula, (X̄, Ȳ, Z̄) is the world coordinate mean and ΔX_i, ΔY_i, ΔZ_i are the world coordinate errors;
step 7B.2, calculating the three-dimensional displacement of the object to be measured in step 5 from the world coordinates, in the same way as step 7A.2;
step 7B.3, calculating the three-dimensional strain of the object to be measured in step 5 from the three-dimensional displacement, in the same way as step 7A.3.
CN202211380567.7A 2022-11-04 2022-11-04 Stereoscopic vision deformation measurement method for inhibiting heat flow disturbance Pending CN115682976A (en)

Publications (1)

Publication Number Publication Date
CN115682976A true CN115682976A (en) 2023-02-03



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination