CN116105628A - High-precision three-dimensional morphology and deformation measurement method based on projection imaging - Google Patents


Info

Publication number
CN116105628A
CN116105628A (application CN202211126873.8A)
Authority
CN
China
Prior art keywords
camera
projector
deformation
image
measured
Prior art date
Legal status: Pending
Application number
CN202211126873.8A
Other languages
Chinese (zh)
Inventor
刘聪 (Liu Cong)
章闯 (Zhang Chuang)
汪立诚 (Wang Licheng)
徐志洪 (Xu Zhihong)
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202211126873.8A
Publication of CN116105628A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/16 Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
    • G01B11/167 Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge, by projecting a pattern on the object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a high-precision three-dimensional morphology and deformation measurement method based on projection imaging, which combines grid-line projection with digital image correlation to measure the three-dimensional morphology and deformation of an object using a camera-projector system. The camera-projector system is calibrated to solve the internal and external parameters of the camera and the projector; a convolutional neural network is used to improve the quality of the grid-line images degraded by speckle; the grid-line phase is solved by a phase-shift method, and the three-dimensional morphology of the object before and after deformation is computed by combining the calibration result; the three-dimensional displacement of the object is solved from the positions of corresponding points matched in the speckle images before and after deformation; and the three-dimensional strain is solved from the three-dimensional displacement. The invention improves the measurement precision of three-dimensional morphology and deformation.

Description

High-precision three-dimensional morphology and deformation measurement method based on projection imaging
Technical Field
The invention relates to the field of photomechanics (optical experimental solid mechanics) and image measurement technology, in particular to a high-precision three-dimensional morphology and deformation measurement method based on projection imaging.
Background
Grid-line projection and digital image correlation have developed rapidly and are widely used in the field of three-dimensional measurement. In grid-line projection, the grid lines modulated by the surface shape of the object are used to solve the phase value at each pixel position point by point, and the height information of the object is then calculated from the phase values. Digital image correlation is a non-contact optical measurement method that measures displacement, strain and other quantities by spraying random speckle on the object surface and precisely matching corresponding points between the speckle images taken before and after deformation. In a camera-projector system, grid-line projection can reconstruct the three-dimensional morphology of the object before and after deformation but cannot locate the corresponding points across the deformation, whereas digital image correlation can precisely match the corresponding points using the speckle on the object surface; combining the two allows both the three-dimensional morphology and the deformation of the object to be measured. However, digital image correlation requires speckle to be sprayed on the object surface, which severely degrades the quality of the grid lines acquired by the camera and in turn the accuracy and precision of the phase calculation, and at present there is no accurate and fast method, at home or abroad, that solves this problem.
Disclosure of Invention
In order to solve the problems, the invention provides a high-precision three-dimensional morphology and deformation measurement method based on projection imaging.
The technical solution for realizing the purpose of the invention is as follows: a high-precision three-dimensional morphology and deformation measurement method based on projection imaging, using an industrial camera, a lens, an optical platform, an electronic computer, a projector and an object to be measured, the measurement method comprising the following steps:
step 1, fixing an experimental device: fixing an industrial camera and a projector on an optical platform, fixing an object to be measured on the optical platform, adjusting the direction of a camera lens to point to the object to be measured, enabling the object to be measured to be in a central position in a camera view angle, positioning a projector lens to be aligned to the maximum plane direction of the object to be measured, and focusing the camera and the projector on the object to be measured;
step 2, system calibration: calibrating a camera-projector system to obtain internal and external parameters of a camera and a projector;
step 3, training data acquisition: respectively projecting sinusoidal grid lines and sinusoidal grid lines with speckles to an object to be measured by using a projector, rotating or moving the position of the object to be measured to acquire data of different scenes, and carrying out normalization processing to construct a training data set of the neural network;
step 4, building a convolutional neural network and training: a convolutional neural network with N image inputs and N image outputs is built; sinusoidal grid-line images with speckle are taken as the input of the neural network and the corresponding speckle-free sinusoidal grid-line images as its output, and the convolutional neural network model is trained;
step 5, obtaining experimental data: a speckle pattern is sprayed on the object to be measured, the object is placed randomly within the fields of view of the projector and the camera, and the camera and the projector are focused on it; the projector projects sinusoidal grid lines and white light in turn, a deformation is then applied to the object, and the camera collects the sinusoidal grid-line images with speckle and the speckle images before and after the deformation;
step 6, normalizing the grid-line images with speckle acquired before and after deformation in step 5 and inputting them into the trained convolutional neural network, which eliminates the speckle in the grid-line images and improves their quality;
step 7, calculating the pixel displacement of the speckle from the speckle images acquired before and after deformation in step 5, calculating the phase of the object to be measured from the grid-line images predicted in step 6, and then calculating the three-dimensional morphology and deformation of the object to be measured in the world coordinate system by combining the internal and external parameters of the camera and the projector from step 2.
Further, in step 2, the camera-projector system is calibrated to obtain the internal and external parameters of the camera and the projector. The specific method treats the projector as a second camera that captures images, forming a dual-camera calibration system; the specific operation steps are as follows:
(1) Preparing a red/blue checkerboard calibration plate and grid lines in the horizontal and vertical directions;
(2) Placing the red/blue checkerboard into the field of view of the camera such that it occupies half of the field of view of the camera;
(3) The projector projects red or blue light so that the red/blue checkerboard appears as a black/white checkerboard in the captured image; the camera aperture is adjusted so that the gray values in the checkerboard area lie between 150 and 200, and a checkerboard image is acquired;
(4) The projector projects the horizontal and vertical grid-line patterns using white light, and the camera collects the grid-line images;
(5) The checkerboard is rotated to a new position and operations (3) and (4) are repeated; to suppress the influence of noise, the checkerboard should be placed in at least 10 positions;
(6) The corner-point coordinates are found on the checkerboard imaged under red or blue light, for camera calibration and subsequent projector calibration;
(7) The absolute phases of the horizontal and vertical grid lines prepared in (1) and of those acquired in (4) are calculated, serving respectively as the standard phases and the actual phases in the two grid-line directions;
(8) The standard phases of each corner point in the two directions are determined from the corner coordinates and the actual grid-line phases; taking the horizontal-direction standard phase as the row coordinate and the vertical-direction standard phase as the column coordinate gives the corner-point coordinates in the projector image;
(9) The respective internal and external parameters of the camera and the projector are calibrated from the corner-point coordinates of (6) and (8), each using the single-camera calibration principle:
sI = A[R, t]X_W    (1)
where I = [u, v, 1]^T is the pixel coordinate vector of a corner point, X_W = [x_w, y_w, z_w, 1]^T is its world coordinate vector, s is the scale factor, A is the internal parameter matrix of the camera or projector, R is the rotation matrix and t the translation vector between the world coordinate system and the camera or projector coordinate system, and [R, t] constitutes the external parameters of the camera or projector.
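The imaging model of equation (1) can be sketched numerically. In the following minimal example the intrinsic matrix A and the extrinsics R, t are hypothetical values chosen for illustration only, not parameters prescribed by the invention:

```python
import numpy as np

def project(A, R, t, Xw):
    """Project world points Xw (3, M) to pixel coordinates via s*I = A[R, t]X_W (eq. 1)."""
    Xc = R @ Xw + t[:, None]   # world coordinates -> camera/projector coordinates
    uv = A @ Xc                # homogeneous pixel coordinates; the scale factor s is Xc_z
    return uv[:2] / uv[2]      # divide by s to obtain (u, v)

# Hypothetical calibration values for illustration.
A = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 512.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 500.0])   # object plane 500 mm in front of the camera
Xw = np.array([[0.0, 10.0],        # two world points: origin, and 10 mm along x_w
               [0.0,  0.0],
               [0.0,  0.0]])
uv = project(A, R, t, Xw)          # pixel coordinates of the two points
```

The origin lands at the principal point (640, 512); shifting a point 10 mm along x_w moves it 24 pixels in u at this hypothetical depth and focal length.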
Further, step 3, a projector is used to project sinusoidal grid lines and sinusoidal grid lines with speckles to an object to be measured respectively, the position of the object to be measured is rotated or moved to collect data of different scenes, normalization processing is carried out, and a training data set of the neural network is constructed, wherein the specific method comprises the following steps:
step 3.1, the projector projects sinusoidal grid lines, and the basic mode is as follows:
In a grayscale image the maximum gray value is 255 and the minimum is 0; the closer a gray value is to 255, the brighter the pixel, and the closer to 0, the darker. The sinusoidal grid lines are therefore generated by the expression

I_n(x) = (255/2)[1 + cos(2πωx + 2πn/N)]    (2)

where I_n, n = 1, 2, 3, …, N, denotes the generated grid-line pattern, ω is the grid-line frequency, x is the coordinate across the width of the grid-line image, and N is the number of phase steps;
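The grid-line generation of equation (2) can be sketched as follows; the image size, frequency ω and step count N are illustrative values, not values prescribed by the invention:

```python
import numpy as np

def sinusoidal_fringes(width, height, omega, N):
    """Generate N phase-shifted sinusoidal grid-line patterns, gray values in [0, 255] (eq. 2)."""
    x = np.arange(width)
    patterns = []
    for n in range(1, N + 1):
        # One row of the n-th pattern; every row is identical (vertical fringes).
        row = 127.5 * (1.0 + np.cos(2 * np.pi * omega * x + 2 * np.pi * n / N))
        patterns.append(np.tile(row, (height, 1)))
    return np.stack(patterns)   # shape (N, height, width)

# Illustrative parameters: 1024x768 patterns, fringe period 64 px, 4-step phase shift.
pats = sinusoidal_fringes(1024, 768, 1 / 64, 4)
```

With a fringe period that divides the image width exactly, each pattern averages to the mid-gray value 127.5.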
step 3.2, the projector projects a grid line with sinusoidal speckles, and the basic mode is as follows:
The sinusoidal grid-line speckle image is obtained by adding a digital speckle field to the sinusoidal grid-line image of step 3.1. The digital speckle field is designed and manufactured by controlling the number of spots and the centre coordinates and radius of each circular spot, and is generated by the following four formulas:

(X_i, Y_i) = (X_1 + α·[(i − 1) mod m], Y_1 + α·⌊(i − 1)/m⌋)    (3)
X_i' = X_i + α·f(r)    (4)
Y_i' = Y_i + α·f(r)    (5)
n = ρA/(0.25·πd²)    (6)

where (X_1, Y_1) is the centre coordinate of the first speckle, (X_i, Y_i) and (X_i', Y_i') are the speckle centre coordinates in the regularly distributed and the randomly distributed speckle fields, m is the number of speckles per row of the regular field, α is the centre distance between two adjacent speckles in the regular field, ρ is the duty cycle, d is the speckle diameter, f(r) is a random function on the interval (−r, r) with r in the range (0, 1], and n is the number of speckles, which is related to the camera resolution A;
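A minimal sketch of the digital speckle-field generation: the speckle count follows equation (6), while the regular grid of centres and the uniform jitter f(r) are one plausible reading of equations (3) to (5); all numeric parameters are illustrative:

```python
import numpy as np

def speckle_field(width, height, rho, d, r=0.5, seed=0):
    """Binary digital speckle field: a regular grid of disc centres, each jittered by a
    uniform random offset in (-r, r) grid spacings (one reading of eqs. 3-5; count per eq. 6)."""
    n = int(rho * width * height / (0.25 * np.pi * d ** 2))   # eq. (6): speckle count
    alpha = np.sqrt(width * height / n)                        # regular centre spacing
    rng = np.random.default_rng(seed)
    xs = np.arange(alpha / 2, width, alpha)
    ys = np.arange(alpha / 2, height, alpha)
    Xi, Yi = np.meshgrid(xs, ys)                               # regular field (X_i, Y_i)
    Xr = Xi + alpha * rng.uniform(-r, r, Xi.shape)             # random field (X_i', Y_i')
    Yr = Yi + alpha * rng.uniform(-r, r, Yi.shape)
    img = np.zeros((height, width))
    yy, xx = np.mgrid[0:height, 0:width]
    for cx, cy in zip(Xr.ravel(), Yr.ravel()):
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= (d / 2) ** 2] = 255.0   # draw one disc
    return img

# Illustrative parameters: 128x128 field, 30 % duty cycle, 6 px speckle diameter.
img = speckle_field(128, 128, rho=0.3, d=6)
```

The white-pixel fraction of the result approximates the duty cycle ρ, reduced slightly by discs clipped at the image border.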
step 3.3, the object to be measured is rotated or moved to acquire data of different scenes, normalization is performed, and the training data set of the neural network is constructed, in the following basic way:
Since the gray values of the acquired images lie between 0 and 255, each image is normalized by dividing its gray values by 255, giving the training data set.
Further, in step 4, a convolutional neural network with N image inputs and N image outputs is built; sinusoidal grid-line images with speckle serve as the network input, speckle-free sinusoidal grid-line images as the network output, and the convolutional neural network model is trained. The model comprises a feature-extraction part and a feature-fusion part, and adaptive moment estimation (ADAM) is selected as the optimizer for training, in the following specific way:
(a) Feature extraction
First, the input image data passes through a first convolution layer and a first batch-normalization layer to obtain shallow features X_1; X_1 passes sequentially through a second convolution layer, a second batch-normalization layer and a first dropout layer to obtain features X_2; X_2 passes sequentially through a third convolution layer, a third batch-normalization layer and a second dropout layer to obtain features X_3; and X_3 passes sequentially through a fourth convolution layer, a fourth batch-normalization layer and a third dropout layer to obtain deep features X_4.
(b) Feature fusion
Features X_4 pass sequentially through a first deconvolution, a fifth batch-normalization layer and a fourth dropout layer, and are fused with X_3 by a first adder to obtain X_5; X_5 passes sequentially through a second deconvolution, a sixth batch-normalization layer and a fifth dropout layer, and is fused with X_2 by a second adder to obtain X_6; X_6 passes sequentially through a third deconvolution, a seventh batch-normalization layer and a sixth dropout layer, and is fused with X_1 by a third adder to obtain X_7; X_7 passes through a fourth deconvolution to obtain X_8; X_8 passes through a group of residual structures to obtain X_9; finally, X_9, after a fifth convolution layer, is combined with X_8 by a fourth adder to obtain the final fused features X_10, i.e. the image data to be output.
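The feature flow X_1 to X_10 can be traced with a toy stand-in: average pooling replaces the convolution plus batch-normalization blocks and nearest-neighbour upsampling replaces the deconvolutions. This checks only that the skip-connection shapes line up, not the learned behaviour of the network:

```python
import numpy as np

def down(x):
    """Stand-in for a conv + batch-norm (+ dropout) block: 2x2 average pooling."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Stand-in for a deconvolution: 2x nearest-neighbour upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def encoder_decoder(img):
    X1 = down(img)        # shallow features
    X2 = down(X1)
    X3 = down(X2)
    X4 = down(X3)         # deep features
    X5 = up(X4) + X3      # first adder (skip connection with X3)
    X6 = up(X5) + X2      # second adder (skip connection with X2)
    X7 = up(X6) + X1      # third adder (skip connection with X1)
    X8 = up(X7)           # fourth deconvolution: back to the input size
    X9 = X8 - X8.mean()   # stand-in for the residual structures + fifth convolution
    X10 = X9 + X8         # fourth adder: final fused features
    return X10

out = encoder_decoder(np.random.default_rng(1).random((64, 64)))
```

Each adder requires its two operands to have equal shape, which is why every deconvolution must exactly undo one downsampling step.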
Further, in step 7, the pixel displacement of the speckle is calculated from the speckle images acquired before and after deformation in step 5, the phase of the object to be measured is calculated from the grid-line images predicted in step 6, and the three-dimensional morphology and deformation of the object in the world coordinate system are then calculated by combining the internal and external parameters of the camera and the projector from step 2. The specific method is as follows:
step 7.1, calculating the pixel displacement of the speckle by using the speckle images before and after the deformation in the step 5, wherein the basic mode is as follows:
The camera acquires speckle images before and after deformation. The correlation coefficient between each search window and the reference subregion is calculated, and the search window with the maximum correlation coefficient is selected as the target subregion; the centre point of the target subregion is taken as the deformed position of the centre of the reference subregion, and subtracting the centre coordinates of the reference and target subregions gives the pixel displacement. The correlation coefficient C_cc is expressed as

C_cc = Σ_i [f(x_i, y_i) − f̄][g(x_i', y_i') − ḡ] / √{ Σ_i [f(x_i, y_i) − f̄]² · Σ_i [g(x_i', y_i') − ḡ]² }    (7)

where f(x_i, y_i) is the gray value at the point with coordinates (x_i, y_i) in the reference subregion, g(x_i', y_i') is the gray value at (x_i', y_i') in the target subregion (coordinates are local, centred on the subregion midpoint), f̄ is the mean gray value of the reference subregion, and ḡ is the mean gray value of the equally sized subregion in the target image;
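A sketch of the subregion matching around equation (7), tested on a synthetic image shifted by a known integer displacement; the window size and search range are illustrative:

```python
import numpy as np

def zncc(f, g):
    """Zero-normalised cross-correlation C_cc between two equal-size subregions (eq. 7)."""
    fm, gm = f - f.mean(), g - g.mean()
    return (fm * gm).sum() / np.sqrt((fm ** 2).sum() * (gm ** 2).sum())

def match_subset(ref, cur, cx, cy, half, search):
    """Integer-pixel displacement of the subregion centred at (cx, cy): scan the
    search range and keep the window with the largest correlation coefficient."""
    f = ref[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best, disp = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            g = cur[cy + dy - half:cy + dy + half + 1,
                    cx + dx - half:cx + dx + half + 1]
            c = zncc(f, g)
            if c > best:
                best, disp = c, (dx, dy)
    return disp, best

rng = np.random.default_rng(2)
ref = rng.random((80, 80))                            # synthetic "speckle" image
cur = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)    # shift down 3 px, left 2 px
disp, c = match_subset(ref, cur, 40, 40, half=10, search=5)
```

The maximum of C_cc correctly recovers the imposed shift of (−2, +3) pixels with a coefficient close to 1; sub-pixel refinement (not shown) would follow in a full implementation.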
step 7.2, calculating the phase of the object to be measured by using the grid line image predicted in step 6, wherein the basic mode is as follows:
Ideally, the image I_n captured by the camera is expressed as

I_n(u, v) = a(u, v) + b(u, v) cos[φ(u, v) + θ_n]    (8)

where a(u, v) is the background light intensity, b(u, v) is the reflectivity of the object surface, φ(u, v) is the phase to be calculated, (u, v) is the position of the pixel point, and θ_n, n = 1, 2, 3, …, N, is the phase-shift phase, expressed as

θ_n = 2π(n − 1)/N    (9)

The least-squares phase solution formula is expressed as

φ(u, v) = arctan[ −Σ_{n=1}^{N} I_n(u, v) sin θ_n / Σ_{n=1}^{N} I_n(u, v) cos θ_n ]    (10)

The φ(u, v) obtained from the above lies in the range [−π, π]; since the period of the trigonometric functions is 2π, the complete phase value Φ(u, v) is expressed as

Φ(u, v) = φ(u, v) + 2πk(u, v)    (11)

where k(u, v) is the number of compensated periods;
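The least-squares phase solution of equations (8) to (10) can be checked on synthetic fringes with a known phase ramp; the wrapped result lies in [−π, π] and would still require the period compensation of equation (11) for a full-range phase:

```python
import numpy as np

def solve_phase(I):
    """Least-squares wrapped phase from N phase-shifted images I of shape (N, H, W),
    with theta_n = 2*pi*(n - 1)/N (eqs. 8-10). Result lies in [-pi, pi]."""
    N = I.shape[0]
    theta = 2 * np.pi * np.arange(N) / N
    num = -(I * np.sin(theta)[:, None, None]).sum(axis=0)
    den = (I * np.cos(theta)[:, None, None]).sum(axis=0)
    return np.arctan2(num, den)   # atan2 keeps the full [-pi, pi] quadrant information

# Synthetic check: recover a known phase ramp from a 4-step phase shift.
H, W, N = 32, 64, 4
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, W)[None, :].repeat(H, axis=0)
theta = 2 * np.pi * np.arange(N) / N
I = 120.0 + 100.0 * np.cos(phi[None] + theta[:, None, None])   # eq. (8) with a=120, b=100
phi_hat = solve_phase(I)
```

On noise-free synthetic fringes the recovered wrapped phase matches the imposed ramp to machine precision, independently of the background a and reflectivity b.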
step 7.3, the pixel displacements matched from the speckle images before and after deformation determine the positions of corresponding points on the object to be measured; combining these with the calibrated internal and external parameters of the camera and the projector gives the three-dimensional morphology before and after deformation, and the three-dimensional deformation (comprising the three-dimensional displacement and the full-field strain) is then obtained from the positional relationship between the corresponding points, specifically as follows:
Let (X, Y, Z) and (X_a, Y_a, Z_a) be the three-dimensional coordinates of a pair of corresponding points before and after deformation of the object to be measured; the three-dimensional displacement is

U = X_a − X,  V = Y_a − Y,  W = Z_a − Z    (12)

where U, V and W are the displacements in the three directions.

A local coordinate system O_e is then built, and the three-dimensional coordinates and three-dimensional displacements of the grid points in the world coordinate system before deformation are converted into O_e as (X_e, Y_e, Z_e) and (U_e, V_e, W_e). A quadric-surface fitting method is used to obtain the displacement-field functions, each of the form

U_e(X_e, Y_e) = a_0 + a_1 X_e + a_2 Y_e + a_3 X_e² + a_4 X_e Y_e + a_5 Y_e²    (13)

and likewise for V_e and W_e. The full-field strain is obtained from the partial derivatives of the displacement-field functions U_e, V_e, W_e:

ε_xx = ∂U_e/∂X_e,  ε_yy = ∂V_e/∂Y_e,  ε_zz = ∂W_e/∂Z_e,
ε_xy = ε_yx = (1/2)(∂U_e/∂Y_e + ∂V_e/∂X_e),
ε_yz = ε_zy = (1/2)(∂V_e/∂Z_e + ∂W_e/∂Y_e),
ε_zx = ε_xz = (1/2)(∂W_e/∂X_e + ∂U_e/∂Z_e)    (14)

where ε_xx, ε_yy, ε_zz, ε_yz, ε_zy, ε_xy, ε_yx, ε_zx, ε_xz are the components of the strain tensor.
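A sketch of the strain evaluation of equations (13) and (14), restricted to the in-plane components: the displacement fields are fitted as quadrics by least squares and the strains follow from their first derivatives at the window centre; the synthetic uniform stretch is illustrative:

```python
import numpy as np

def fit_strain(Xe, Ye, Ue, Ve):
    """Fit quadric surfaces U_e(X, Y), V_e(X, Y) by least squares (eq. 13) and evaluate
    the in-plane small-strain components at the window centre (eq. 14)."""
    x, y = Xe - Xe.mean(), Ye - Ye.mean()                      # centre the local coordinates
    M = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    cu, *_ = np.linalg.lstsq(M, Ue, rcond=None)                # quadric coefficients of U_e
    cv, *_ = np.linalg.lstsq(M, Ve, rcond=None)                # quadric coefficients of V_e
    dUdX, dUdY = cu[1], cu[2]                                   # gradients at x = y = 0
    dVdX, dVdY = cv[1], cv[2]
    exx, eyy = dUdX, dVdY
    exy = 0.5 * (dUdY + dVdX)
    return exx, eyy, exy

# Synthetic check: a uniform 1 % stretch in X and 0.5 % stretch in Y.
rng = np.random.default_rng(3)
Xe = rng.uniform(0, 10, 200)
Ye = rng.uniform(0, 10, 200)
Ue = 0.01 * Xe
Ve = 0.005 * Ye
exx, eyy, exy = fit_strain(Xe, Ye, Ue, Ve)
```

For uniform stretches the quadric terms fit to zero and the linear coefficients return the imposed strains exactly; on noisy measured displacements the same fit acts as a local smoother.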
Compared with the prior art, the invention has the following remarkable advantages: 1) a deep-learning model is creatively applied to high-precision object morphology measurement, solving the degradation of grid-line quality caused by the speckle pattern when grid-line projection is combined with digital image correlation; the speckle in the grid lines is effectively eliminated, improving both the grid-line quality and the solving precision; 2) the convolutional neural network built by the invention improves precision for specific application scenes, achieves fast measurement with only a small data set, and performs well in experiments; 3) the invention is highly universal across different specific measurement processes.
Drawings
FIG. 1 is a schematic diagram of an experimental apparatus according to the present invention.
Fig. 2 is a diagram of the neural network architecture of the method of the present invention.
FIG. 3 shows the neural network prediction results of the method of the present invention.
FIG. 4 is a phase comparison of a predicted gate line with an original gate line solution according to the method of the present invention.
1: an electronic computer;
2: a projector;
3: an optical platform;
4: a white plane plate and its fixing device;
5: an industrial camera;
6: a high-resolution lens;
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The method provided by the invention conveniently and rapidly resolves the influence of speckle on grid-line quality from the standpoint of both algorithm design and experimental detection. It offers high calculation accuracy, simple equipment requirements, convenience and practicality, low algorithm complexity and high calculation speed.
As shown in FIG. 1, the apparatus for the high-precision three-dimensional morphology and deformation measurement method based on projection imaging comprises: an electronic computer (1), a projector (2), an optical platform (3), a plane plate to be tested with its fixing device (4), an industrial camera (5) and a high-resolution lens (6). The industrial camera used in the test experiment has 4 megapixels, and the lens focal length is 35 mm. The method comprises the following steps:
step 1, fixing an experimental device: fixing an industrial camera and a projector on an optical platform, fixing a plane plate on the optical platform, adjusting the direction of a camera lens to point to the plane of the plane plate, enabling the plane plate to be in a central position in a camera view angle, positioning the projector lens to be aligned to the maximum plane direction of the plane plate, and focusing the camera and the projector on the plane plate;
step 2, system calibration: the camera-projector system is calibrated to obtain the internal and external parameters of the camera and the projector. The specific method treats the projector as a second camera that captures images, forming a dual-camera calibration system; the specific operation steps are as follows:
(1) Preparing a red/blue checkerboard calibration plate and grid lines in the horizontal and vertical directions;
(2) Placing the red/blue checkerboard into the field of view of the camera such that it occupies half of the field of view of the camera;
(3) The projector projects red or blue light so that the red/blue checkerboard appears as a black/white checkerboard in the captured image; the camera aperture is adjusted so that the gray values in the checkerboard area lie between 150 and 200, and a checkerboard image is acquired;
(4) The projector projects the horizontal and vertical grid-line patterns using white light, and the camera collects the grid-line images;
(5) The checkerboard is rotated to a new position and operations (3) and (4) are repeated; to suppress the influence of noise, the checkerboard should be placed in at least 10 positions;
(6) The corner-point coordinates are found on the checkerboard imaged under red or blue light, for camera calibration and subsequent projector calibration;
(7) The absolute phases of the horizontal and vertical grid lines prepared in (1) and of those acquired in (4) are calculated, serving respectively as the standard phases and the actual phases in the two grid-line directions;
(8) The standard phases of each corner point in the two directions are determined from the corner coordinates and the actual grid-line phases; taking the horizontal-direction standard phase as the row coordinate and the vertical-direction standard phase as the column coordinate gives the corner-point coordinates in the projector image;
(9) The respective internal and external parameters of the camera and the projector are calibrated from the corner-point coordinates of (6) and (8), each using the single-camera calibration principle:
sI = A[R, t]X_W    (1)
where I = [u, v, 1]^T is the pixel coordinate vector of a corner point, X_W = [x_w, y_w, z_w, 1]^T is its world coordinate vector, s is the scale factor, A is the internal parameter matrix of the camera or projector, R is the rotation matrix and t the translation vector between the world coordinate system and the camera or projector coordinate system, and [R, t] constitutes the external parameters of the camera or projector.
Step 3, training data acquisition: the projector is used for respectively projecting sinusoidal grid lines and sinusoidal grid lines with speckles to the plane plate, the position of the plane plate is rotated or moved to acquire data of different scenes, normalization processing is carried out, and a training data set of the neural network is constructed, wherein the specific method comprises the following steps:
step 3.1, the projector projects sinusoidal grid lines, and the basic mode is as follows:
In a grayscale image the maximum gray value is 255 and the minimum is 0; the closer a gray value is to 255, the brighter the pixel, and the closer to 0, the darker. The sinusoidal grid lines are therefore generated by the expression

I_n(x) = (255/2)[1 + cos(2πωx + 2πn/N)]    (2)

where I_n, n = 1, 2, 3, …, N, denotes the generated grid-line pattern, ω is the grid-line frequency, x is the coordinate across the width of the grid-line image, and N is the number of phase steps;
step 3.2, the projector projects a grid line with sinusoidal speckles, and the basic mode is as follows:
The sinusoidal grid-line speckle image is obtained by adding a digital speckle field to the sinusoidal grid-line image of step 3.1. The digital speckle field is designed and manufactured by controlling the number of spots and the centre coordinates and radius of each circular spot, and is generated by the following four formulas:

(X_i, Y_i) = (X_1 + α·[(i − 1) mod m], Y_1 + α·⌊(i − 1)/m⌋)    (3)
X_i' = X_i + α·f(r)    (4)
Y_i' = Y_i + α·f(r)    (5)
n = ρA/(0.25·πd²)    (6)

where (X_1, Y_1) is the centre coordinate of the first speckle, (X_i, Y_i) and (X_i', Y_i') are the speckle centre coordinates in the regularly distributed and the randomly distributed speckle fields, m is the number of speckles per row of the regular field, α is the centre distance between two adjacent speckles in the regular field, ρ is the duty cycle, d is the speckle diameter, f(r) is a random function on the interval (−r, r) with r in the range (0, 1], and n is the number of speckles, which is related to the camera resolution A;
and 3.3, rotating or moving the position of the plane plate to acquire data of different scenes, performing normalization processing, and constructing a training data set of the neural network, wherein the basic mode is as follows:
since the gray values of the acquired images lie between 0 and 255, each gray value is divided by 255 for normalization, yielding the training data set.
Step 4, building a convolutional neural network and training: a convolutional neural network with N image inputs and N image outputs is constructed; the sinusoidal grid-line images with speckles are taken as the network input and the clean sinusoidal grid-line images as the network output, and the convolutional neural network model is trained. The model comprises two parts, feature extraction and feature fusion, and adaptive moment estimation (ADAM) is selected as the optimizer for training, in the following specific manner:
(a) Feature extraction
First, the input image data passes through a first convolution layer and a first batch normalization layer to obtain shallow feature X 1 ; X 1 passes sequentially through a second convolution layer, a second batch normalization layer and a first dropout layer to obtain feature X 2 ; X 2 passes sequentially through a third convolution layer, a third batch normalization layer and a second dropout layer to obtain feature X 3 ; and X 3 passes sequentially through a fourth convolution layer, a fourth batch normalization layer and a third dropout layer to obtain deep feature X 4 .
(b) Feature fusion
Feature X 4 passes sequentially through a first deconvolution layer, a fifth batch normalization layer and a fourth dropout layer, and is fused with feature X 3 by a first adder to obtain feature X 5 ; X 5 passes sequentially through a second deconvolution layer, a sixth batch normalization layer and a fifth dropout layer, and is fused with feature X 2 by a second adder to obtain feature X 6 ; X 6 passes sequentially through a third deconvolution layer, a seventh batch normalization layer and a sixth dropout layer, and is fused with feature X 1 by a third adder to obtain feature X 7 ; X 7 passes through a fourth deconvolution layer to obtain feature X 8 ; X 8 passes through a group of residual structures to obtain feature X 9 ; finally, X 9 passes through a fifth convolution layer and is combined with X 8 by a fourth adder to obtain the final fused feature X 10 , i.e. the output image data.
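The encoder–decoder layout above can be checked with simple shape bookkeeping. Assuming stride-2 3×3 convolutions for downsampling and stride-2 4×4 deconvolutions for upsampling (the strides and kernel sizes are not stated in the text, so these are illustrative), the decoder outputs line up with the encoder features X_3, X_2, X_1 that the adders fuse:

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    """Spatial size after a stride-2 convolution (downsampling assumed)."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, pad=1):
    """Spatial size after a stride-2 transposed convolution."""
    return (size - 1) * stride - 2 * pad + kernel

def encoder_decoder_shapes(size, depth=3):
    """Trace the X1..X7-style feature sizes: `depth` downsampling stages,
    then mirrored upsampling stages.  Each decoder output must match the
    encoder feature it is summed with (the adder-based skip connections)."""
    enc = [size]                       # X1 at full resolution
    for _ in range(depth):             # X2, X3, X4
        enc.append(conv_out(enc[-1]))
    dec = enc[-1]
    matches = []
    for level in range(depth - 1, -1, -1):  # X5, X6, X7 fused with X3, X2, X1
        dec = deconv_out(dec)
        matches.append(dec == enc[level])
    return enc, matches
```

With a 1024×1024 input (the crop size used in the embodiment) the encoder sizes are 1024 → 512 → 256 → 128 and every skip addition is shape-consistent.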
Step 5, obtaining experimental data: spraying speckle patterns on a plane plate, randomly placing the speckle patterns in fields of view of a projector and a camera, focusing the camera and the projector on the plane plate, respectively projecting sinusoidal grid lines and white light by using the projector, then applying deformation to the plane plate, and respectively collecting sinusoidal grid line images with speckle and speckle images before and after the deformation of the plane plate by using the camera;
step 6, carrying out normalization processing on the grid line images with the speckles before and after the deformation in the step 5, and inputting the grid line images into a trained convolutional neural network, so that the speckles in the grid line images can be eliminated, and the quality of the grid line images is improved;
step 7, calculating the pixel displacement of the speckles by using the speckle images before and after deformation from step 5, calculating the phase of the plane plate by using the grid-line images predicted in step 6, and further calculating the three-dimensional morphology and deformation of the plane plate in the world coordinate system by combining the internal and external parameters of the camera and the projector obtained in step 2;
step 7.1, calculating the pixel displacement of the speckle by using the speckle images before and after the deformation in the step 5, wherein the basic mode is as follows:
the camera collects speckle images before and after deformation, and correlation matching is carried out with a correlation function: the correlation coefficient between each search window and the reference subregion is computed, and the search window with the maximum correlation coefficient is selected as the target subregion. The centre point of the target subregion is regarded as the deformed position of the point to be measured, and subtracting the centre coordinates of the reference subregion and of the target subregion gives the pixel displacement. The correlation coefficient C cc is expressed as:
C cc = Σ i [f(x i , y i ) − f m ]·[g(x i ′, y i ′) − g m ] / √( Σ i [f(x i , y i ) − f m ]² · Σ i [g(x i ′, y i ′) − g m ]² ) (7)
wherein f(x i , y i ) is the gray value of the point with coordinates (x i , y i ) in the reference subregion, g(x i ′, y i ′) is the gray value of the point with coordinates (x i ′, y i ′) in the target subregion (coordinates are local coordinates centred on the midpoint of the subregion), f m is the mean gray value of the reference subregion, and g m is the mean gray value of the subregion of the same size as the reference subregion in the target image;
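A minimal integer-pixel version of the matching in step 7.1, using the zero-normalised correlation coefficient C_cc; subpixel refinement and the gray-level interpolation a practical DIC code would add are omitted, and the helper names are illustrative:

```python
import math

def zncc(f, g):
    """Zero-normalised cross-correlation C_cc of two equal-size gray
    subsets (lists of lists): means are subtracted, then normalised."""
    fv = [v for row in f for v in row]
    gv = [v for row in g for v in row]
    fm = sum(fv) / len(fv)
    gm = sum(gv) / len(gv)
    num = sum((a - fm) * (b - gm) for a, b in zip(fv, gv))
    den = math.sqrt(sum((a - fm) ** 2 for a in fv) *
                    sum((b - gm) ** 2 for b in gv))
    return num / den

def match_displacement(ref, target, sub, search):
    """Integer-pixel DIC: slide a sub x sub reference window (taken here
    from the top-left corner) over a search range in the target image and
    return the (dx, dy) offset with maximal C_cc."""
    ref_sub = [row[:sub] for row in ref[:sub]]
    best = (-2.0, (0, 0))
    for dy in range(search):
        for dx in range(search):
            tgt = [row[dx:dx + sub] for row in target[dy:dy + sub]]
            c = zncc(ref_sub, tgt)
            if c > best[0]:
                best = (c, (dx, dy))
    return best[1]
```

Shifting a synthetic speckle-like pattern by a known integer offset and matching it back recovers that offset, which is exactly the pixel displacement described above.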
step 7.2, calculating the phase of the plane plate by using the grid line image predicted in step 6, wherein the basic mode is as follows:
ideally, the image I n captured by the camera is expressed as:
I n (u, v) = a(u, v) + b(u, v)·cos[φ(u, v) + θ n ] (8)
where a(u, v) is the background light intensity, b(u, v) is the reflectivity of the object surface, φ(u, v) is the phase to be calculated, (u, v) is the position of the pixel point, and θ n , n = 1, 2, 3, …, N is the phase-shift phase, expressed as:
θ n = 2π(n − 1)/N (9)
The least-squares phase solution formula is expressed as:
φ(u, v) = arctan[ −Σ n=1..N I n sin θ n / Σ n=1..N I n cos θ n ] (10)
The φ(u, v) obtained by the above solution lies in the range [−π, π]; since the period of the trigonometric function is 2π, the complete phase value Φ(u, v) is expressed as:
Φ(u, v) = φ(u, v) + 2π·k(u, v) (11)
wherein k(u, v) is the number of compensated periods;
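The per-pixel least-squares phase solution of equations (8)–(11) can be sketched as follows, under the assumption θ_n = 2π(n−1)/N; the names are illustrative:

```python
import math

def wrapped_phase(samples):
    """Least-squares N-step phase:
    phi = atan2(-sum(I_n * sin(theta_n)), sum(I_n * cos(theta_n))),
    with theta_n = 2*pi*(n-1)/N.  Result lies in [-pi, pi]."""
    n_steps = len(samples)
    s = sum(i * math.sin(2 * math.pi * k / n_steps) for k, i in enumerate(samples))
    c = sum(i * math.cos(2 * math.pi * k / n_steps) for k, i in enumerate(samples))
    return math.atan2(-s, c)

def unwrap(phi, k):
    """Absolute phase Phi = phi + 2*pi*k, with k the compensated period count."""
    return phi + 2 * math.pi * k
```

Simulating I_n = a + b·cos(φ + θ_n) for any a, b and feeding the samples back through `wrapped_phase` recovers φ, which is the self-consistency this formula provides; determining k(u, v) itself requires a separate temporal or spatial unwrapping step not covered here.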
and 7.3, combining the internal and external parameters of the camera and the projector obtained in step 2, calculating the three-dimensional morphology and deformation of the plane plate in the world coordinate system, in the following basic manner:
first, the pixel displacements before and after deformation of the object are matched using the speckle images, and the positions of the corresponding points before and after deformation are found; then, combining the calibrated internal and external parameters of the camera and the projector, the three-dimensional morphology of the object before and after deformation is obtained. The three-dimensional deformation is derived from the positional relation of the corresponding points before and after deformation, specifically as follows:
let the three-dimensional coordinates of corresponding points before and after deformation of the object be (X, Y, Z) and (X a , Y a , Z a ); the three-dimensional displacement is:
(U, V, W) = (X a − X, Y a − Y, Z a − Z) (12)
wherein U, V and W are the displacements in the three directions respectively;
then a local coordinate system O e is established, and the three-dimensional coordinates and three-dimensional displacements of the grid points in the world coordinate system before deformation are converted into the coordinate system O e to obtain (X e , Y e , Z e ) and (U e , V e , W e ); a quadric surface fitting method is used to obtain the displacement field functions, expressed as follows:
U e (X e , Y e ) = a 0 + a 1 X e + a 2 Y e + a 3 X e ² + a 4 X e Y e + a 5 Y e ² (13)
with V e and W e expanded in the same quadric form with coefficients b 0 , …, b 5 and c 0 , …, c 5 respectively (formulas (14) and (15), rendered as images in the original document, give these coefficients of the displacement field functions). The full-field strain is expressed as follows:
ε ij = (1/2)·(∂u i /∂x j + ∂u j /∂x i ), u = (U e , V e , W e ), x = (X e , Y e , Z e ) (16)
in the formula, ε xx , ε yy , ε zz , ε yz , ε zy , ε xy , ε yx , ε zx , ε xz represent the components of the strain tensor.
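For step 7.3, once the quadric displacement-field coefficients are known, the in-plane strains follow by analytic differentiation. The quadric form and coefficient names (a_0…a_5, b_0…b_5) are illustrative assumptions, and only the in-plane strain components are computed here:

```python
def quadric(coeffs, x, y):
    """U(x, y) = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
    (the quadric form assumed for the displacement field functions)."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1 * x + a2 * y + a3 * x * x + a4 * x * y + a5 * y * y

def quadric_grad(coeffs, x, y):
    """Analytic in-plane gradient (dU/dx, dU/dy) of the fitted quadric."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return (a1 + 2 * a3 * x + a4 * y, a2 + a4 * x + 2 * a5 * y)

def surface_strain(u_coeffs, v_coeffs, x, y):
    """Small-strain components at (x, y):
    eps_xx = dU/dx, eps_yy = dV/dy, eps_xy = 0.5*(dU/dy + dV/dx)."""
    ux, uy = quadric_grad(u_coeffs, x, y)
    vx, vy = quadric_grad(v_coeffs, x, y)
    return ux, vy, 0.5 * (uy + vx)
```

A uniform stretch, e.g. U = 0.01·x and V = 0.02·y, gives constant strains ε_xx = 0.01, ε_yy = 0.02, ε_xy = 0 at every point, which is a quick sanity check on the differentiation.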
Examples
In order to verify the effectiveness of the scheme of the invention, a plane plate is used as a measuring object to collect data, and the following simulation experiment is carried out.
1) Data acquisition and preprocessing
Taking three-step phase shift as an example: first, the projector projects sinusoidal grid lines onto the plane plate, and the camera collects three images as the output data of the neural network; then the projector projects grid lines with sinusoidal speckles onto the plane plate, and the camera collects three images as the input data of the neural network. 60 groups of data of different scenes are acquired by moving and rotating the plane plate, and the 60 groups are normalized to obtain the training data.
2) Building convolutional neural network model and training
A convolutional neural network with 3 image inputs and 3 image outputs is built, and adaptive moment estimation (ADAM) is selected as the optimizer. The specific structure is shown in fig. 2:
3) Prediction of actual data
In the practical experiment, the surface of the plane plate is sprayed with speckles, sinusoidal grid lines are projected by the projector and acquired by the camera, and a 1024×1024 region of the acquired image is input into the trained neural network to obtain a grid-line image with the speckles removed, improving the quality of the grid lines. The prediction result is shown in fig. 3: the neural network constructed by the invention can effectively eliminate speckles in the grid-line image and improve the quality of the grid lines. As shown in FIG. 4, the green solid line represents the original phase result containing speckles and the red dotted line represents the phase solved from the grid lines output by the neural network; it is obvious that the invention can effectively improve the phase accuracy of grid lines contaminated by speckles.

Claims (5)

1. The high-precision three-dimensional morphology and deformation measurement method based on projection imaging is characterized in that an experimental device comprises an industrial camera, a lens, an optical platform, an electronic computer, a projector and an object to be measured, and the measurement method comprises the following steps:
step 1, fixing an experimental device: fixing an industrial camera and a projector on an optical platform, fixing an object to be measured on the optical platform, adjusting the direction of a camera lens to point to the object to be measured, enabling the object to be measured to be in a central position in a camera view angle, positioning a projector lens to be aligned to the maximum plane direction of the object to be measured, and focusing the camera and the projector on the object to be measured;
step 2, system calibration: calibrating a camera-projector system to obtain internal and external parameters of a camera and a projector;
step 3, training data acquisition: respectively projecting sinusoidal grid lines and sinusoidal grid lines with speckles to an object to be measured by using a projector, rotating or moving the position of the object to be measured to acquire data of different scenes, and carrying out normalization processing to construct a training data set of the neural network;
step 4, building a convolutional neural network and training: building a convolution neural network with N image inputs and N image outputs, taking a grid line image with sinusoidal speckles as the input of the neural network, taking the sinusoidal grid line image as the output of the neural network, and training a convolution neural network model;
step 5, obtaining experimental data: spraying speckle patterns on an object to be measured, randomly placing the speckle patterns in fields of view of a projector and a camera, focusing the camera and the projector on the object to be measured, respectively projecting sinusoidal grid lines and white light by the projector, then applying deformation to the object to be measured, and respectively collecting sinusoidal grid line images with speckle and speckle images before and after the deformation of the object to be measured by the camera;
step 6, carrying out normalization processing on the grid line images with the speckles before and after the deformation in the step 5, and inputting the grid line images into a trained convolutional neural network, so that the speckles in the grid line images can be eliminated, and the quality of the grid line images is improved;
and 7, calculating the pixel displacement of the speckles by using the speckle images before and after deformation from step 5, calculating the phase of the object to be measured by using the grid-line image predicted in step 6, and further calculating the three-dimensional morphology and deformation of the object to be measured in the world coordinate system by combining the internal and external parameters of the camera and the projector obtained in step 2.
2. The projection imaging-based high-precision three-dimensional morphology and deformation measurement method of claim 1, wherein in step 2, camera-projector system calibration is performed to obtain internal and external parameters of a camera and a projector, and the specific method is as follows: the projector is taken as another camera to capture an image, so that a dual-camera system calibration system is formed, and the specific operation steps are as follows:
(1) Preparing a red/blue checkerboard calibration plate and grid lines in the horizontal and vertical directions;
(2) Placing the red/blue checkerboard into the field of view of the camera such that it occupies half of the field of view of the camera;
(3) The projector projects red light or blue light, so that the red/blue checkerboard appears as a black/white checkerboard in the captured image; the camera aperture is adjusted so that the gray values in the checkerboard area are between 150 and 200, and a checkerboard image is acquired;
(4) The projector projects horizontal and vertical grating line patterns by using white light, and a camera is used for collecting grating line images;
(5) Rotate the checkerboard to a new position and repeat operations (3) and (4); to suppress the influence of noise, the checkerboard should be placed in at least 10 different positions;
(6) Search the corner point coordinates on the checkerboard under red or blue light for camera calibration and subsequent projector calibration;
(7) Calculating absolute phases of the horizontal and vertical grid lines prepared in (1) and absolute phases of the horizontal and vertical grid lines acquired in (4) respectively serving as standard phases and actual phases in two directions of the grid lines;
(8) Determine the standard phases of the corner points in the two directions from the corner point coordinates and the actual phases of the grid lines; taking the standard phase in the horizontal direction as the row coordinate and the standard phase in the vertical direction as the column coordinate, the row and column coordinates of the corner points in the projector image are formed;
(9) Calibrating respective internal and external parameters of the camera and the projector based on the angular point coordinates in the step (6) and the step (8) by using a single-camera calibration principle respectively;
sI = A[R, t]X W (1)
wherein I = [u, v, 1] T is the pixel coordinate of a corner point, X W = [x w , y w , z w , 1] T is the world coordinate of the corner point, s is a scale factor, A is the internal parameter matrix of the camera or projector, R is the rotation matrix between the world coordinate system and the camera or projector coordinate system, t is the translation vector between the world coordinate system and the camera or projector coordinate system, and [R, t] constitutes the external parameters of the camera or projector.
3. The projection imaging-based high-precision three-dimensional morphology and deformation measurement method of claim 1, characterized in that step 3, a projector is utilized to project sinusoidal grid lines and grid lines with sinusoidal speckles to an object to be measured respectively, the position of the object to be measured is rotated or moved to collect data of different scenes, normalization processing is carried out, and a training data set of a neural network is constructed, specifically, the method comprises the following steps:
step 3.1, the projector projects sinusoidal grid lines, and the basic mode is as follows:
in a grayscale image, the maximum gray value is 255 and the minimum is 0; the closer a gray value is to 255, the brighter the pixel, and the closer it is to 0, the darker the pixel. A sinusoidal grid line is therefore generated by the following expression:
I n (x) = (255/2)·[1 + cos(2πωx + 2π(n − 1)/N)] (2)
wherein I n , n = 1, 2, 3, …, N denotes the generated grid-line pattern, ω denotes the frequency of the grid lines, x denotes the pixel coordinate along the width of the grid-line image, and N denotes the number of phase steps;
step 3.2, the projector projects a grid line with sinusoidal speckles, and the basic mode is as follows:
the sinusoidal grid-line speckle image is obtained by adding a digital speckle field to the sinusoidal grid-line image of step 3.1. The digital speckle field is designed and manufactured by controlling the number of speckles, their centre coordinates and their radius, and is generated by the following four formulas:
(formulas (3)–(5), which define the centre coordinates of the first speckle, of the regularly distributed speckles, and of the randomly perturbed speckles, are rendered as images in the original document)
n = ρA/(0.25·πd²) (6)
wherein (X 1 , Y 1 ) is the centre coordinate of the first speckle, (X i , Y i ) and (X i ′, Y i ′) are the speckle centre coordinates in the regularly distributed and the randomly distributed speckle fields respectively, α is the centre-to-centre distance of two adjacent speckles in the regularly distributed field, ρ is the duty cycle, d is the speckle diameter, f(r) is a random function on the interval (−r, r) with r in the range (0, 1], and n is the number of speckles, which is related to the image area A set by the camera resolution;
and 3.3, rotating or moving the position of the object to be measured to acquire data of different scenes, performing normalization processing, and constructing a training data set of the neural network, wherein the basic mode is as follows:
since the gray values of the acquired images lie between 0 and 255, each gray value is divided by 255 for normalization, yielding the training data set.
4. The projection imaging-based high-precision three-dimensional morphology and deformation measurement method of claim 1, characterized in that in step 4, a convolutional neural network with N image inputs and N image outputs is built; the grid-line images with sinusoidal speckles are taken as the network input and the sinusoidal grid-line images as the network output, and the convolutional neural network model is trained. The model comprises two parts, feature extraction and feature fusion, and adaptive moment estimation (ADAM) is selected as the optimizer for training, in the following specific manner:
(a) Feature extraction
First, the input image data passes through a first convolution layer and a first batch normalization layer to obtain shallow feature X 1 ; X 1 passes sequentially through a second convolution layer, a second batch normalization layer and a first dropout layer to obtain feature X 2 ; X 2 passes sequentially through a third convolution layer, a third batch normalization layer and a second dropout layer to obtain feature X 3 ; and X 3 passes sequentially through a fourth convolution layer, a fourth batch normalization layer and a third dropout layer to obtain deep feature X 4 .
(b) Feature fusion
Feature X 4 passes sequentially through a first deconvolution layer, a fifth batch normalization layer and a fourth dropout layer, and is fused with feature X 3 by a first adder to obtain feature X 5 ; X 5 passes sequentially through a second deconvolution layer, a sixth batch normalization layer and a fifth dropout layer, and is fused with feature X 2 by a second adder to obtain feature X 6 ; X 6 passes sequentially through a third deconvolution layer, a seventh batch normalization layer and a sixth dropout layer, and is fused with feature X 1 by a third adder to obtain feature X 7 ; X 7 passes through a fourth deconvolution layer to obtain feature X 8 ; X 8 passes through a group of residual structures to obtain feature X 9 ; finally, X 9 passes through a fifth convolution layer and is combined with X 8 by a fourth adder to obtain the final fused feature X 10 , i.e. the output image data.
5. The projection imaging-based high-precision three-dimensional morphology and deformation measurement method according to claim 1, characterized in that in step 7, the pixel displacement of the speckles is calculated by using the speckle images before and after deformation from step 5, the phase of the object to be measured is calculated by using the grid-line image predicted in step 6, and the three-dimensional morphology and deformation of the object to be measured in the world coordinate system is then calculated by combining the internal and external parameters of the camera and the projector obtained in step 2, in the following specific manner:
step 7.1, calculating the pixel displacement of the speckle by using the speckle images before and after the deformation in the step 5, wherein the basic mode is as follows:
the camera acquires speckle images before and after deformation and calculates the correlation coefficient between each search window and the reference subregion; the search window with the maximum correlation coefficient is selected as the target subregion, whose centre point is regarded as the corresponding point of the reference subregion centre after deformation, and subtracting the centre coordinates of the reference subregion and of the target subregion gives the pixel displacement. The correlation coefficient C cc is expressed as:
C cc = Σ i [f(x i , y i ) − f m ]·[g(x i ′, y i ′) − g m ] / √( Σ i [f(x i , y i ) − f m ]² · Σ i [g(x i ′, y i ′) − g m ]² ) (7)
wherein f(x i , y i ) is the gray value of the point with coordinates (x i , y i ) in the reference subregion, g(x i ′, y i ′) is the gray value of the point with coordinates (x i ′, y i ′) in the target subregion (coordinates are local coordinates centred on the midpoint of the subregion), f m is the mean gray value of the reference subregion, and g m is the mean gray value of the subregion of the same size as the reference subregion in the target image;
step 7.2, calculating the phase of the object to be measured by using the grid line image predicted in step 6, wherein the basic mode is as follows:
ideally, the image I n captured by the camera is expressed as:
I n (u, v) = a(u, v) + b(u, v)·cos[φ(u, v) + θ n ] (8)
where a(u, v) is the background light intensity, b(u, v) is the reflectivity of the object surface, φ(u, v) is the phase to be calculated, (u, v) is the position of the pixel point, and θ n , n = 1, 2, 3, …, N is the phase-shift phase, expressed as:
θ n = 2π(n − 1)/N (9)
The least-squares phase solution formula is expressed as:
φ(u, v) = arctan[ −Σ n=1..N I n sin θ n / Σ n=1..N I n cos θ n ] (10)
The φ(u, v) obtained by the above solution lies in the range [−π, π]; since the period of the trigonometric function is 2π, the complete phase value Φ(u, v) is expressed as:
Φ(u, v) = φ(u, v) + 2π·k(u, v) (11)
wherein k(u, v) is the number of compensated periods;
step 7.3, matching the pixel displacements before and after deformation of the object by using the speckle images, determining the positions of the corresponding points before and after deformation of the object to be measured, and calculating the three-dimensional morphology of the object to be measured before and after deformation by combining the calibrated internal and external parameters of the camera and the projector; the three-dimensional deformation is obtained from the positional relation of the corresponding points before and after deformation, specifically as follows:
let the three-dimensional coordinates of corresponding points before and after deformation of the object to be measured be (X, Y, Z) and (X a , Y a , Z a ); the three-dimensional displacement is:
(U, V, W) = (X a − X, Y a − Y, Z a − Z) (12)
wherein U, V and W are the displacements in the three directions respectively;
then a local coordinate system O e is established, and the three-dimensional coordinates and three-dimensional displacements of the grid points in the world coordinate system before deformation are converted into the coordinate system O e to obtain (X e , Y e , Z e ) and (U e , V e , W e ); a quadric surface fitting method is used to obtain the displacement field functions, expressed as follows:
U e (X e , Y e ) = a 0 + a 1 X e + a 2 Y e + a 3 X e ² + a 4 X e Y e + a 5 Y e ² (13)
with V e and W e expanded in the same quadric form with coefficients b 0 , …, b 5 and c 0 , …, c 5 respectively (formulas (14) and (15), rendered as images in the original document, give the displacement field functions U e , V e , W e ). The full-field strain is expressed as follows:
ε ij = (1/2)·(∂u i /∂x j + ∂u j /∂x i ), u = (U e , V e , W e ), x = (X e , Y e , Z e ) (16)
in the formula, ε xx , ε yy , ε zz , ε yz , ε zy , ε xy , ε yx , ε zx , ε xz represent the components of the strain tensor.
CN202211126873.8A 2022-09-16 2022-09-16 High-precision three-dimensional morphology and deformation measurement method based on projection imaging Pending CN116105628A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211126873.8A CN116105628A (en) 2022-09-16 2022-09-16 High-precision three-dimensional morphology and deformation measurement method based on projection imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211126873.8A CN116105628A (en) 2022-09-16 2022-09-16 High-precision three-dimensional morphology and deformation measurement method based on projection imaging

Publications (1)

Publication Number Publication Date
CN116105628A true CN116105628A (en) 2023-05-12

Family

ID=86260424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211126873.8A Pending CN116105628A (en) 2022-09-16 2022-09-16 High-precision three-dimensional morphology and deformation measurement method based on projection imaging

Country Status (1)

Country Link
CN (1) CN116105628A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117685899A (en) * 2023-12-08 2024-03-12 魅杰光电科技(上海)有限公司 Method for measuring pattern structure morphology parameters


Similar Documents

Publication Publication Date Title
CN110514143B (en) Stripe projection system calibration method based on reflector
CN103649674B (en) Measuring equipment and messaging device
CN112815843B (en) On-line monitoring method for printing deviation of workpiece surface in 3D printing process
Yalla et al. Very high resolution 3D surface scanning using multi-frequency phase measuring profilometry
CN113446957B (en) Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking
CN112132890B (en) Calibration method of digital grating projection measurement system for enlarging calibration space
Gao et al. Stereo camera calibration for large field of view digital image correlation using zoom lens
CN111462246B (en) Equipment calibration method of structured light measurement system
CN116105628A (en) High-precision three-dimensional morphology and deformation measurement method based on projection imaging
Hou et al. Camera lens distortion evaluation and correction technique based on a colour CCD moiré method
WO2014181581A1 (en) Calibration device, calibration system, and imaging device
CN112489109A (en) Three-dimensional imaging system method and device and three-dimensional imaging system
CN112308930A (en) Camera external parameter calibration method, system and device
Cabo et al. A hybrid SURF-DIC algorithm to estimate local displacements in structures using low-cost conventional cameras
Cheng et al. A practical micro fringe projection profilometry for 3-D automated optical inspection
CN113012143B (en) Test piece quality detection method based on two-dimensional digital image correlation method
JP2913021B2 (en) Shape measuring method and device
Yu et al. An improved projector calibration method for structured-light 3D measurement systems
CN114241059B (en) Synchronous calibration method for camera and light source in photometric stereo vision system
CN115839677A (en) Method and system for measuring three-dimensional topography of surface of object with high dynamic range
CN112686960B (en) Method for calibrating entrance pupil center and sight direction of camera based on ray tracing
CN115082538A (en) System and method for three-dimensional reconstruction of surface of multi-view vision balance ring part based on line structure light projection
Miyasaka et al. Development of real-time 3D measurement system using intensity ratio method
TWI837061B (en) System and method for 3d profile measurements using color fringe projection techniques
Kim et al. Development of Structured Light 3D Scanner Based on Image Processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination