CN116518869A - Metal surface measurement method and system based on photometric stereo and binocular structured light - Google Patents


Info

Publication number
CN116518869A
Authority
CN
China
Prior art keywords: point, normal vector, light source, data, point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310230086.6A
Other languages
Chinese (zh)
Inventor
严思杰
刘宏伟
杨一帆
叶松涛
丁汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN202310230086.6A
Publication of CN116518869A
Current legal status: Pending


Classifications

    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G01B11/2433 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures, for measuring outlines by shadow casting
    • G01B11/254 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern; projection of a pattern, viewing through a pattern, e.g. moiré
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30244 Camera pose

Abstract

The invention belongs to the technical field of three-dimensional measurement, and particularly discloses a metal surface measurement method and system based on photometric stereo and binocular structured light, comprising the following steps: acquiring, with a binocular camera, an original depth point cloud in which the data of the specular region of the surface to be measured are missing; acquiring complete normal vector data of the surface to be measured by the photometric stereo method; and fusing the original depth point cloud with the normal vector data to calculate the surface point cloud of the surface to be measured. By combining binocular structured light and near-field photometric stereo measurement data, the invention not only fills in the measurement data missing due to specular reflection on the metal surface, but also improves the accuracy of surface reconstruction through the mutual fusion and optimization of the height data and the normal vector data.

Description

Metal surface measurement method and system based on photometric stereo and binocular structured light
Technical Field
The invention belongs to the field of three-dimensional measurement, and particularly relates to a metal surface measurement method and system based on photometric stereo and binocular structured light.
Background
At present, machine vision is very widely applied in the industrial field. Structured light three-dimensional measurement recovers surface height by projecting a fringe pattern onto the target surface; the method offers high measurement precision and speed, but tends to over-smooth fine details. Photometric stereo, by contrast, obtains a high-precision surface normal vector by illuminating the object surface with light sources from different directions, and then recovers the surface height by integrating the normal vectors. Although photometric stereo captures rich surface detail, integration error accumulates when solving for height, so the accuracy of the recovered three-dimensional surface information is poor.
To address these problems, the two methods can be fused to further improve measurement accuracy. Patent document CN1130487214A discloses a product-qualification detection system and method based on the combination of photometric stereo and structured light. That method measures surface three-dimensional information with surface structured light, computes the normal vector at each point, combines it with the normal vector data from photometric stereo measurement to obtain an optimized normal vector, and then performs three-dimensional reconstruction, yielding three-dimensional data with higher precision and clearer detail. However, the method requires both techniques to produce a complete surface point cloud for fusion; when measuring a specular, highly reflective metal surface, the data measured by the structured light contain missing holes, which degrades accuracy.
In view of these defects and shortcomings, there is a need in the art for a metal surface fusion measurement method based on photometric stereo and binocular structured light that solves the problem of missing measurement data caused by specular reflection of the metal surface in the prior art.
Disclosure of Invention
Aiming at the above defects or improvement demands of the prior art, the invention provides a metal surface measurement method and system based on photometric stereo and binocular structured light. The method combines binocular structured light, which offers high measurement precision among structured light methods, with a near-field point-light-source photometric stereo technique optimized under perspective projection. First, binocular structured light is used to obtain an original depth point cloud in which the specular region data are missing, while photometric stereo is used to obtain complete, high-precision normal vector data of the surface to be measured; the three-dimensional point cloud and the normal vector data are then fused, improving both the completeness and the accuracy of the result. The specific fusion method estimates a position error from the three-dimensional point cloud and a normal error from the normal data, and iteratively minimizes their sum with an optimization method; different weighting factor values are designed to distinguish data in the specular highlight region from data in other regions. In addition, because both techniques share the same camera system, additional equipment, calibration, and registration errors are effectively avoided, the equipment parameters need not be repeatedly adjusted, and the complete point cloud can be estimated from a single measurement, improving surface reconstruction efficiency.
To achieve the above object, according to one aspect of the present invention, there is provided a metal surface measuring method based on photometric stereo and binocular structured light, comprising the steps of:
S1, acquiring, with a binocular camera, an original depth point cloud in which the data of the specular region of the surface to be measured are missing. In this step, the binocular structured light technique projects a fringe pattern with a sinusoidal intensity distribution onto the target surface (i.e., fringe projection profilometry); combined with stereoscopic vision, two cameras photograph the target surface and stereo matching is performed using phase information, thereby obtaining three-dimensional information of the surface.
S2, acquiring complete normal vector data of the surface to be measured by the photometric stereo method. In this step, based on the Lambertian reflection model, the brightness of the object surface under light source illumination is described by the surface irradiance equation. To solve the problem of specular reflection on the metal surface, the specular reflection components in the image must be separated; shadows also degrade recovery accuracy, so they are removed as well. In addition, this step uses a more realistic perspective projection camera model and a near-field point light source model to construct the mapping relation between coordinate points (x, y) on the image plane and target surface points (X, Y, Z), and derives a new surface irradiance equation from this mapping:
where N_(x,y) = (n1, n2, n3) and k = 1, 2, ..., m. Equation (8) is the reconstruction model proposed by the invention. Because perspective projection and the near-field point light source model are introduced, the model defined by this equation is nonlinear and contains five unknowns (Z, ρ, n1, n2, n3); since the normal vector (n1, n2, n3) is a unit vector, only two of its components need to be solved. Because the normal vector is highly coupled with the surface height, a reference plane Z0 is used in place of the true target surface height: the direction between each point on the reference plane and each light source position approximates the light source direction at the corresponding point on the target surface, as shown in fig. 7. The surface reflectivity ρ and the normal vector (n1, n2, n3) are then solved by the least squares method.
S3, fusing the original depth point cloud with the normal vector data to calculate the surface point cloud of the surface to be measured.
Furthermore, before S1, the measurement system of the present invention needs to be calibrated to obtain the internal and external parameters of the camera. In the invention, the calibration of the camera is a conventional technical means, and the binocular camera calibration method in the prior art is suitable for the invention, so that the description is omitted.
As a further preference, step S1 comprises the steps of:
s11, acquiring the light intensity of a group of phase-shifted sine waves at any spatial point;
s12, sequentially projecting the phase-shifted sine waves to a surface to be measured to obtain distortion fringe distribution;
and S13, unwrapping the distorted stripe distribution, obtaining a continuous phase diagram, and calculating three-dimensional coordinates of the points of the surface to be detected according to the continuous phase diagram, so as to obtain an original depth point cloud of the data loss of the mirror surface area of the surface to be detected.
As a further preferred aspect, in step S13, the solution equation of the wrapping phase is as follows:
where I_n(x, y) is the distorted fringe distribution and n = 0, 1, ..., N-1 is the phase-shift index.
As a further preference, step S2 comprises the steps of:
s21, describing the brightness of the surface of the object when the light source irradiates by adopting a surface irradiation equation;
s22, removing highlight and shadow areas by adopting a pixel intensity separation method, and extracting lambertian area calculation method vectors;
s23, constructing a mapping relation from a coordinate point on an image plane to a target surface point according to the perspective projection camera model and the near-field point light source model, obtaining a light source direction at each point on the surface according to the mapping relation and the perspective projection relation, and solving the surface reflectivity and the normal vector according to the light source direction.
As a further preferred aspect, in step S21, the brightness calculation model is:

I_(x,y)^k = ρ · (L_k · N_(x,y))

where I_(x,y)^k is the surface point brightness under the k-th light source, L_k is the light source direction vector, N_(x,y) is the surface normal vector, ρ is the surface reflection coefficient, and the vectors in the equation are unit vectors;
in step S22, the Lambertian-region extraction model is:

O_d^k = { i_k(u, v) ∈ O | α·i_k^min < i_k(u, v) < β·i_k^max }

where k is the light source index, O_d^k is the set of Lambertian components extracted from the image, i_k(u, v) is the pixel intensity at position (u, v) in the image, O is the set of all pixel intensities in the image, i_k^min and i_k^max are the minimum and maximum pixel values in the image target region, respectively, and α, β are separation coefficients;
in step S22, with the light source directions known, the normal vector at each pixel point is solved from at least three Lambertian observations.
As a further preferred aspect, step S23 includes:
s231, constructing a perspective projection camera model and a near-field point light source model, wherein the mapping relation between coordinate points (X, Y) on an image plane and target surface points (X, Y, Z) under the model is as follows:
wherein f is the focal length of the camera;
s232 each point P on the target surface under the model k Light source direction at (X, Y) = (X, Y, Z)And the position P of the LED point light source k =(X k ,Y k ,Z k ) Parallel to the connecting line of the point, substituting the relation of perspective projection to obtain the direction of the light source at each point on the target surface;
s233 constructs a new surface irradiance equation from the calculation model of the luminance and the direction of the light source at each point on the target surface.
As a further preference, step S3 comprises the steps of:
s31, reconstructing the surface by adopting a path integration method according to the complete normal vector data of the surface to be detected to obtain height data;
s32, assuming that a curved surface generated by an image method is formed by connecting a plurality of tiny triangular planes, wherein the vertex of each triangular plane corresponds to the central point of a single pixel point in an imaging plane, so that the normal vector data and the three-dimensional coordinates of the original depth point cloud fall on the central point;
s33, adopting a normal vector at the vertex of the triangular plane to adjust the three-dimensional coordinates of each point of the surface, and using the three-dimensional coordinates as position constraint to deduce a position error and a normal error expression, thereby obtaining the optimized position of each point of the target surface, and further obtaining the surface point cloud.
As a further preferred aspect, step S33 specifically includes:
s331 for a given vertex S, position error E P Defined as the optimized vertex position P s = (X, Y, Z) and original vertex position P s 0 =(X 0 ,Y 0 ,Z 0 ) The square sum of the distances between the two, the normal vector direction is utilized to restrict the adjustment direction of the vertex in the optimization process, if deltas is the optimized adjustment value, and as the normal vector is a unit vector, the adjustment vector is deltasN s
S332, optimizing the error of the included angle theta between the tangent line and the normal vector on the curved surface to define a normal error E N Is the sum of squares of cosine values of the included angles;
s333 assuming r and t are two adjacent vertexes of a triangle mesh with S as vertexes, the tangent of S vertex on the mesh can be used as a vectorRepresented, and where the three-dimensional coordinates of r and t are to be used with optimized values, i.e
S334, considering that the two errors have different influence under different conditions, introducing a weighting factor λ to construct the combined position-error and normal-error expression, which serves as the objective function;
S335, optimizing the objective function by the weighted least squares method, finally obtaining the optimized position P of each target surface point and thereby the surface point cloud.
More specifically, in step S33, to improve reconstruction accuracy, the three-dimensional coordinates of each surface point are adjusted using the normal vector at the vertex, with the three-dimensional coordinates serving as a position constraint; position-error and normal-error expressions are then derived to obtain the optimized three-dimensional coordinates.
For a given vertex s, the position error E_P is defined as the sum of squared distances between the optimized vertex position P_s = (X, Y, Z) and the original vertex position P_s^0 = (X_0, Y_0, Z_0). The normal vector direction constrains the adjustment direction of the vertex during optimization: if δs is the optimized adjustment value, then, since the normal vector is a unit vector, the adjustment vector is δs·N_s, and the position error is defined as:
On a curved surface, the normal vector at a point should be perpendicular to the tangent of the surface at that point, so the error in the included angle θ between tangent and normal vector can be optimized. For convenience of calculation, the normal error E_N is defined as the sum of squares of the cosines of the included angles:
where T_s is a tangent of the triangular facet. Assuming r and t are two adjacent vertices of the triangular mesh having s as a vertex, the tangent at vertex s on the mesh can be represented by the vectors T_s^r = P_r - P_s and T_s^t = P_t - P_s, where the three-dimensional coordinates of r and t take their optimized values. Substituting into formula (14) yields:
considering that the two errors have different effects under different conditions, the objective function is defined as follows after the weighting factor lambda is introduced:
the weighting factor lambda epsilon [0,1] is used for controlling the action of the vector of the method in the optimization process, and the larger lambda is, the larger the adjustment action of the vector of the method in the optimization process is. Since the recovery effect of binocular structured light measurement is better outside the highlight region, the lambda value is set smaller in these regions; in the area of the missing holes, the height value calculated by the normal vector integral needs to be greatly adjusted, so that the value of lambda is set to be larger in the highlight area, and specific numerical values need to be selected empirically. And the optimization process adopts a weighted least square method to finally obtain the optimized position P of each point on the target surface, thereby obtaining the surface point cloud.
According to another aspect of the present invention there is also provided a metal surface measurement system based on photometric stereo and binocular structured light for implementing the above method.
In general, compared with the prior art, the above technical solution conceived by the present invention mainly has the following technical advantages:
1. The invention obtains an original depth point cloud with missing specular region data using binocular structured light and complete, high-precision normal vector data of the surface to be measured using photometric stereo, then fuses the three-dimensional point cloud and the normal vector data, improving the completeness and accuracy of the result. The specific fusion method estimates a position error from the three-dimensional point cloud and a normal error from the normal data, and iteratively minimizes their sum with an optimization method; different weighting factor values are designed to distinguish data in the specular highlight region from data in other regions. In addition, because both techniques share the same camera system, additional equipment, calibration, and registration errors are effectively avoided, the equipment parameters need not be repeatedly adjusted, and the complete point cloud can be estimated from a single measurement, improving surface reconstruction efficiency.
2. The invention improves the applicability and accuracy of the photometric stereo method by deriving a near-field point-light-source photometric stereo model under perspective projection.
3. The invention removes the influence of highlights and shadows via pixel-intensity separation and recovers the complete normal vector field of the metal surface.
4. By combining binocular structured light and near-field photometric stereo measurement data, the invention not only fills in the measurement data missing due to specular reflection on the metal surface, but also improves the accuracy of surface reconstruction through the mutual fusion and optimization of the height data and the normal vector data.
Drawings
FIG. 1 is a flow chart of a metal surface measurement method based on photometric stereo and binocular structured light in accordance with a preferred embodiment of the present invention;
FIG. 2 is a flow chart of a binocular structured light algorithm involved in the method of the present invention;
FIG. 3 is a schematic diagram of a binocular structured light measuring apparatus involved in the method of the present invention;
fig. 4 (a) is an original image captured by the camera when the fringes are projected, and fig. 4 (b) is a restored point cloud missing condition diagram of the magnified region;
FIG. 5 is a flow chart of a near field photometric stereo algorithm involved in the method of the present invention;
FIG. 6 is a schematic view of a perspective projection model of a camera involved in the method of the present invention;
FIG. 7 is a schematic view of a reference plane involved in the method of the present invention;
fig. 8 is a schematic diagram of triangulating an imaging plane in accordance with the method of the present invention.
Like reference numerals denote like technical features throughout the drawings, in particular: 1-camera, 2-target surface, 3-projector.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, the metal surface measurement method based on photometric stereo and binocular structured light provided by the embodiment of the invention adopts binocular structured light with higher measurement precision in the structured light method and near-field point light source photometric stereo technology under optimized perspective projection, and provides a fusion measurement method for solving the mirror reflection problem of the metal surface.
First, binocular structured light is used to obtain an original depth point cloud in which the specular region data are missing, while photometric stereo is used to obtain complete, high-precision normal vector data of the surface to be measured; the three-dimensional point cloud and the normal vector data are then fused, improving both the completeness and the accuracy of the result. The specific fusion method estimates a position error from the three-dimensional point cloud and a normal error from the normal data, and iteratively minimizes their sum with an optimization method; different weighting factor values are designed to distinguish data in the specular highlight region from data in other regions. In addition, because both techniques share the same camera system, additional equipment, calibration, and registration errors are effectively avoided, the equipment parameters need not be repeatedly adjusted, and the complete point cloud can be estimated from a single measurement, improving surface reconstruction efficiency.
Step 1: system calibration
Both methods used in the fusion (binocular structured light measurement and near-field photometric stereo measurement) require calibration. The invention uses the Zhang Zhengyou calibration method to obtain the internal parameters and relative positions of the two cameras, and calibrates the near-field photometric stereo system using a specular ceramic sphere.
Step 2: binocular structured light measurement
The invention uses binocular structured light measuring technology, and a specific flow chart is shown in figure 2.
The binocular structured light technique used in the invention projects a fringe pattern with a sinusoidal intensity distribution onto the target surface (fringe projection profilometry); combined with stereoscopic vision, two cameras photograph the target surface and stereo matching is performed using phase information to obtain three-dimensional surface information. A schematic diagram of the device is shown in fig. 3.
When a set of phase-shifted sinusoidal patterns is employed, the intensity value at a projector point (x_p, y_p) is expressed as:

I_n^p(x_p, y_p) = a_p + b_p · cos(2π f x_p + 2πn/N)

where n = 0, 1, ..., N-1 is the phase-shift index, the mean value a_p and amplitude b_p are typically 0.5 so as to cover the entire dynamic range of the projector, and f is the frequency of the sinusoidal fringes (periods/pixel). The present invention uses a four-step phase-shift method, i.e. N = 4.
After the patterns are sequentially projected onto the object surface, the distorted fringe distribution captured by the camera (denoted I_n(x, y)) is:
the corresponding wrap phase can be found by the following equation:
it can be seen that the phase determined using the arctangent formula is limited to a range of [ -pi, pi ], known as the wrapped phase, and that unwrapping is required to obtain a continuous phase map. According to the method, a three-frequency phase unwrapping method is used for respectively solving absolute phases under the left camera and the right camera, and then three-dimensional coordinates of each point on the surface of a target are calculated by adopting a three-dimensional matching idea according to the previously calibrated relative positions of the cameras, so that point cloud data are obtained.
The intensity range of the 8-bit gray-scale camera used in the invention after imaging is 0-255. Saturation occurs when the pixel intensity captured by the camera exceeds 255 due to specular reflection on the object surface. The metal surface produced by machining is a non-lambertian surface, and for a highlight region, the image intensity I is saturated, so that the stripe information of the point cannot be accurately obtained, and therefore, the phase information of the highlight region cannot be obtained. Therefore, for the binocular structured light, the generation of the point cloud missing holes in the highlight region is unavoidable. Fig. 4 shows the point cloud missing when measuring a metal blade.
Step 3: near field photometric stereo measurement
Photometric stereo techniques are techniques that use images illuminated by light sources in different directions to estimate the normal vector of a surface. A specific flow chart is shown in fig. 5.
Classical photometric stereo is proposed based on lambertian reflection model, and describes the brightness of the object surface when illuminated by a light source by using the surface irradiance equation:
I_(x,y)^k = ρ · (L_k · N_(x,y))    (4)

where I_(x,y)^k is the surface point brightness under the k-th light source, L_k is the direction vector of the light source, N_(x,y) is the surface normal vector, ρ is the surface reflection coefficient, and the vectors in the equation are unit vectors.
The invention mainly solves the problem of specular reflection on the metal surface, so the specular reflection components in the image must be separated; shadows also affect recovery accuracy, so they are removed at the same time. The method removes highlight and shadow regions by pixel-intensity separation and extracts the Lambertian region used to compute the normal vectors:
O_d^k = { i_k(u, v) ∈ O | α·i_k^min < i_k(u, v) < β·i_k^max }    (5)
where k is the light source index, O_d^k is the set of Lambertian components extracted from the image, i_k(u, v) is the pixel intensity at position (u, v) in the image, O is the set of all pixel intensities in the image, i_k^min and i_k^max are the minimum and maximum pixel values in the image target region, respectively, and α, β are separation coefficients. With the light source directions known, the normal vector at each pixel can be obtained from as few as three Lambertian observations; with 14 light sources arranged, there is in theory more than enough intensity information to recover the surface normal vector.
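The pixel-intensity separation of formula (5) and the per-pixel least-squares normal solve can be sketched as below. The α, β values and the explicit pixel loop are illustrative choices, not taken from the patent.

```python
import numpy as np

def lambertian_mask(img, alpha=1.2, beta=0.9):
    """Keep pixels whose intensity lies strictly between alpha*i_min and
    beta*i_max (formula (5)): highlights and shadows drop out.
    alpha and beta here are illustrative, not the patent's values."""
    i_min, i_max = img.min(), img.max()
    return (img > alpha * i_min) & (img < beta * i_max)

def solve_normals(intensities, light_dirs, masks):
    """Least-squares photometric stereo: at each pixel solve
    I_k = rho * (L_k . N) over the lights where the pixel is Lambertian."""
    H, W = intensities[0].shape
    normals = np.zeros((H, W, 3))
    albedo = np.zeros((H, W))
    L = np.asarray(light_dirs, float)   # (K, 3) unit light directions
    I = np.stack(intensities)           # (K, H, W)
    M = np.stack(masks)                 # (K, H, W) boolean Lambertian masks
    for y in range(H):
        for x in range(W):
            k = M[:, y, x]
            if k.sum() < 3:             # need at least three observations
                continue
            g, *_ = np.linalg.lstsq(L[k], I[k, y, x], rcond=None)
            rho = np.linalg.norm(g)     # g = rho * N, so |g| = rho
            if rho > 0:
                albedo[y, x] = rho
                normals[y, x] = g / rho
    return normals, albedo

# demo: a flat surface with normal (0, 0, 1) under four known distant lights
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [-1, 0, 1]], float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
true_n = np.array([0.0, 0.0, 1.0])
ims = [np.full((4, 4), l @ true_n) for l in L]
masks = [np.ones((4, 4), bool) for _ in L]
normals, albedo = solve_normals(ims, L, masks)
```

For real captures the masks differ per light source, so each pixel is solved from its own subset of valid observations, which is exactly why more than three light sources (14 in the text) are arranged.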
Meanwhile, the invention uses the more realistic perspective projection camera model and near-field point light source model shown in fig. 6.
The mapping relation under this model between a coordinate point (x, y) on the image plane and a target surface point (X, Y, Z) is:

x = fX/Z,  y = fY/Z    (6)

where f is the focal length of the camera. Under this model, the light source direction L_k at each point P = (X, Y, Z) on the target surface is parallel to the line connecting that point to the LED point light source at position P_k = (X_k, Y_k, Z_k). Substituting the perspective projection relation X = xZ/f, Y = yZ/f gives the light source direction at each point on the surface:

L_k = (X_k − xZ/f, Y_k − yZ/f, Z_k − Z) / ‖P_k − P‖    (7)
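The per-pixel light direction of the near-field model can be sketched as follows (function and parameter names are illustrative; units must be consistent, e.g. millimeters):

```python
import numpy as np

def near_field_light_dir(x, y, Z, light_pos, f):
    # Recover the surface point from image coordinates via perspective
    # projection, P = (x*Z/f, y*Z/f, Z), then take the unit vector from P
    # toward the LED point source at light_pos = (Xk, Yk, Zk).
    P = np.array([x * Z / f, y * Z / f, Z], dtype=float)
    d = np.asarray(light_pos, dtype=float) - P
    return d / np.linalg.norm(d)

# An LED directly above the recovered surface point (12.5, 25, 100)
# should give the direction (0, 0, 1).
L_dir = near_field_light_dir(x=1.0, y=2.0, Z=100.0,
                             light_pos=(12.5, 25.0, 300.0), f=8.0)
print(np.round(L_dir, 6))
```

Note the dependence on Z: the same pixel yields a different light direction for a different surface height, which is exactly why the model becomes nonlinear.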
it can be seen that the direction of the light source at each point on the target surface is directly related to the Z coordinate of that point, which also makes the solution more complex. From the above derivation, a new surface irradiance equation is obtained:
wherein N is (x,y) =(n 1 ,n 2 ,n 3 ) K=1, 2,..m. Equation (8) is a reconstruction model proposed by the invention, and the model defined by the equation is nonlinear due to the introduction of perspective projection and a near-field point light source model, and the model has five unknown quantities (Z, ρ, n 1 ,n 2 ,n 3 ) Wherein the normal vector (n 1 ,n 2 ,n 3 ) Is a unit vector, so only any two components need to be solved. Because the normal vector is highly coupled to the surface, it is proposed to use the reference plane Z 0 Instead of the height of the real target surface, the coordinates of each point on the reference plane and the direction between each light source position are approximated as the light source direction of each point on the target surface, as shown in fig. 7. Then, the surface reflectivity ρ and the normal vector (n) are solved by using a least square method 1 ,n 2 ,n 3 )。
Step 4: three-dimensional data and normal vector data are fused to calculate surface point cloud
After the complete normal vector data of the target surface is obtained, the surface must be reconstructed from the normal vectors to obtain height data. Any curved surface can be represented by the following formula:
Z=f(X,Y) (9)
from equation (2.10), the unit normal vector of any point P (X, Y, Z) on the object surface can be expressed as:
assume that the measured value of the unit normal vector at a point on the target surface is (n 1 ,n 2 ,n 3 ) The following steps are:
the partial derivative is integrated along path W to obtain a curved surface, namely:
where W is an arbitrary curve from some fixed starting point to the point (x, y), and C is an integration constant representing the surface height at the starting point.
In the measurement system, only the height values inside the data-missing holes need to be computed, so the heights measured by binocular structured light can serve as boundary conditions during integration, yielding height values for the subsequent fusion optimization.
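A minimal single-path sketch of this integration (assuming unit pixel spacing; the anchor z0 plays the role of the binocular boundary height; robust implementations average many paths or solve a Poisson system):

```python
import numpy as np

def integrate_normals(normals, z0=0.0):
    # Integrate a normal map into heights along a row-then-column path.
    # Gradients follow eq. (11): dZ/dx = -n1/n3, dZ/dy = -n2/n3,
    # and z0 anchors the starting height (the boundary condition).
    n = np.asarray(normals, dtype=float)      # (H, W, 3) unit normals
    p = -n[..., 0] / n[..., 2]                # dZ/dx
    q = -n[..., 1] / n[..., 2]                # dZ/dy
    H, W = p.shape
    Z = np.zeros((H, W))
    Z[0, 0] = z0
    Z[0, 1:] = z0 + np.cumsum(p[0, 1:])                 # along the first row
    Z[1:, :] = Z[0, :] + np.cumsum(q[1:, :], axis=0)    # then down each column
    return Z

# A plane tilted in x: normals (-0.6, 0, 0.8) give slope dZ/dx = 0.75.
normals = np.tile(np.array([-0.6, 0.0, 0.8]), (3, 4, 1))
Z = integrate_normals(normals, z0=5.0)
print(np.round(Z[0], 3))
```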
Any three non-collinear neighboring points on the target surface define a plane, so the surface generated from the normal map can be regarded as a patchwork of small triangular planes whose vertices correspond to the center points of individual pixels in the imaging plane. Moreover, since the binocular structured-light measurement and the near-field photometric stereo use the same camera pair, the normal map and depth map are naturally aligned, and the normal vectors and three-dimensional surface coordinates obtained in the previous steps fall on those center points. A schematic of the triangulated imaging plane is shown in fig. 8.
To improve reconstruction accuracy, the three-dimensional coordinates of each surface point are adjusted using the normal vector at the vertex, with the coordinates serving as position constraints. Position-error and normal-error expressions are derived to obtain the optimized three-dimensional coordinates.
For a given vertex s, the position error E_P is defined as the sum of squared distances between the optimized vertex position P_s = (X, Y, Z) and the original vertex position P_s^0 = (X_0, Y_0, Z_0). The normal vector direction constrains the adjustment direction of the vertex during optimization: if Δs is the optimized adjustment value, then, since the normal vector is a unit vector, the adjustment vector is Δs·N_s and P_s = P_s^0 + Δs·N_s, so the position error is:

E_P = Σ_s ‖P_s − P_s^0‖² = Σ_s (Δs)²    (13)
On a curved surface, the normal vector at a point should be perpendicular to every tangent of the surface at that point, so the angle θ between tangent and normal can be used as the error to optimize. For ease of calculation, the normal error E_N is defined as the sum of squared cosines of these angles:

E_N = Σ cos²θ = Σ ((N_s · T_s) / (‖N_s‖·‖T_s‖))²    (14)
wherein T_s is a tangent lying in a triangular plane. Assuming r and t are two vertices adjacent to s in a triangular mesh with s as a vertex, the tangents at s on the mesh can be expressed by the edge vectors from s to r and from s to t, where the three-dimensional coordinates of r and t also take their optimized values, i.e. P_r = P_r^0 + Δr·N_r and P_t = P_t^0 + Δt·N_t. Substituting into equation (14) yields equation (15).
considering that the two errors have different effects under different conditions, the objective function is defined as follows after the weighting factor lambda is introduced:
the weighting factor lambda epsilon [0,1] is used for controlling the action of the vector of the method in the optimization process, and the larger lambda is, the larger the adjustment action of the vector of the method in the optimization process is. Since the recovery effect of binocular structured light measurement is better outside the highlight region, the lambda value is set smaller in these regions; in the area of the missing holes, the height value calculated by the normal vector integral needs to be greatly adjusted, so that the value of lambda is set to be larger in the highlight area, and specific values need to be selected empirically. And the optimization process adopts a weighted least square method to finally obtain the optimized position P of each point on the target surface, thereby obtaining the surface point cloud.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (9)

1. The metal surface measuring method based on photometric stereo and binocular structured light is characterized by comprising the following steps of:
S1, acquiring, by a binocular camera, an original depth point cloud of the surface to be measured in which data is missing in the specular area;
S2, acquiring complete normal vector data of the surface to be measured by adopting a photometric stereo method;
and S3, fusing the original depth point cloud with normal vector data to calculate the surface point cloud of the surface to be measured.
2. The method according to claim 1, wherein step S1 comprises the steps of:
s11, acquiring the light intensity of a group of phase-shifted sine waves at any spatial point;
s12, sequentially projecting the phase-shifted sine waves to a surface to be measured to obtain distortion fringe distribution;
and S13, unwrapping the distorted fringe distribution to obtain a continuous phase diagram, and calculating the three-dimensional coordinates of the points of the surface to be measured from the continuous phase diagram, thereby obtaining the original depth point cloud with data missing in the specular area of the surface to be measured.
3. The method according to claim 1, wherein in step S13, the solution equation of the wrapping phase is as follows:
φ(x, y) = arctan[ Σ_{n=0}^{N−1} I_n(x, y)·sin(2πn/N) / Σ_{n=0}^{N−1} I_n(x, y)·cos(2πn/N) ]

wherein I_n(x, y) is the distorted fringe distribution, N is the number of phase shifts, and n = 0, 1, 2, …, N−1 is the phase-shift index.
4. The method according to claim 1, wherein step S2 comprises the steps of:
s21, describing the brightness of the surface of the object when the light source irradiates by adopting a surface irradiation equation;
S22, removing highlight and shadow areas by adopting a pixel-intensity separation method, and calculating the normal vector from the extracted Lambertian region;
s23, constructing a mapping relation from a coordinate point on an image plane to a target surface point according to the perspective projection camera model and the near-field point light source model, obtaining a light source direction at each point on the surface according to the mapping relation and the perspective projection relation, and solving the surface reflectivity and the normal vector according to the light source direction.
5. The method according to claim 1, wherein in step S21, the calculation model of the brightness includes:
I_(x,y)^k = ρ(L_k · N_(x,y))

wherein I_(x,y)^k is the brightness of the surface point under the k-th light source, L_k is the direction vector of the light source, N_(x,y) is the surface normal vector, ρ is the surface reflection coefficient, and the vectors in the equation are unit vectors;
in step S22, the extraction model used to calculate the normal vector from the Lambertian region is:

O_d^k = {i_k(u,v) ∈ O | α·i_k^min < i_k(u,v) < β·i_k^max}

wherein k is the light source index, O_d^k is the set of Lambertian components extracted from the image, i_k(u,v) is the pixel intensity at position (u,v) in the image, O is the set of all pixel intensities in the image, i_k^min and i_k^max are respectively the minimum and maximum pixel values in the image target region, and α, β are separation coefficients;
in step S22, the normal vector at each pixel point is found by at least three lambertian points, with the direction of the light source known.
6. The method according to claim 5, wherein step S23 includes:
S231, constructing a perspective projection camera model and a near-field point light source model, under which the mapping relation between a coordinate point (x, y) on the image plane and a target surface point (X, Y, Z) is:

x = fX/Z,  y = fY/Z
wherein f is the focal length of the camera;
S232, under the model, the light source direction L_k at each point P = (X, Y, Z) on the target surface is parallel to the line connecting that point with the LED point light source at position P_k = (X_k, Y_k, Z_k); substituting the perspective projection relation yields the light source direction at each point on the target surface;
s233 constructs a new surface irradiance equation from the calculation model of the luminance and the direction of the light source at each point on the target surface.
7. The method according to claim 1, wherein step S3 comprises the steps of:
s31, reconstructing the surface by adopting a path integration method according to the complete normal vector data of the surface to be detected to obtain height data;
S32, assuming that the curved surface generated from the normal map is formed by connecting many small triangular planes, the vertex of each triangular plane corresponding to the center point of a single pixel in the imaging plane, so that the normal vector data and the three-dimensional coordinates of the original depth point cloud fall on the center points;
s33, adopting a normal vector at the vertex of the triangular plane to adjust the three-dimensional coordinates of each point of the surface, and using the three-dimensional coordinates as position constraint to deduce a position error and a normal error expression, thereby obtaining the optimized position of each point of the target surface, and further obtaining the surface point cloud.
8. The method according to any one of claims 1-7, wherein step S33 specifically comprises:
S331, for a given vertex s, defining the position error E_P as the sum of squared distances between the optimized vertex position P_s = (X, Y, Z) and the original vertex position P_s^0 = (X_0, Y_0, Z_0); the normal vector direction constrains the adjustment direction of the vertex during optimization: if Δs is the optimized adjustment value, then, since the normal vector is a unit vector, the adjustment vector is Δs·N_s;
S332, optimizing the error of the included angle θ between tangent and normal vector on the curved surface, defining the normal error E_N as the sum of squared cosines of the included angles;
S333, assuming r and t are two vertices adjacent to s in a triangular mesh with s as a vertex, expressing the tangents at s on the mesh by the edge vectors from s to r and from s to t, where the three-dimensional coordinates of r and t take their optimized values, i.e. P_r = P_r^0 + Δr·N_r and P_t = P_t^0 + Δt·N_t;
S334, considering the different influence of the two errors under different conditions, introducing a weighting factor λ to combine the position-error and normal-error expressions into an objective function;
and S335, optimizing the objective function by adopting a weighted least square method, and finally obtaining an optimized position P of each point of the target surface, thereby obtaining the surface point cloud.
9. A metal surface measurement system based on photometric stereo and binocular structured light for implementing the method of any one of claims 1-8.
CN202310230086.6A 2023-03-10 2023-03-10 Metal surface measurement method and system based on photometric stereo and binocular structured light Pending CN116518869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310230086.6A CN116518869A (en) 2023-03-10 2023-03-10 Metal surface measurement method and system based on photometric stereo and binocular structured light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310230086.6A CN116518869A (en) 2023-03-10 2023-03-10 Metal surface measurement method and system based on photometric stereo and binocular structured light

Publications (1)

Publication Number Publication Date
CN116518869A true CN116518869A (en) 2023-08-01

Family

ID=87407090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310230086.6A Pending CN116518869A (en) 2023-03-10 2023-03-10 Metal surface measurement method and system based on photometric stereo and binocular structured light

Country Status (1)

Country Link
CN (1) CN116518869A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635679A (en) * 2023-12-05 2024-03-01 之江实验室 Curved surface efficient reconstruction method and device based on pre-training diffusion probability model


Similar Documents

Publication Publication Date Title
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN110514143B (en) Stripe projection system calibration method based on reflector
WO2018171384A1 (en) Highly efficient three-dimensional image acquisition method based on multi-mode composite encoding and epipolar constraint
CN111473744B (en) Three-dimensional shape vision measurement method and system based on speckle embedded phase shift stripe
CN110207614B (en) High-resolution high-precision measurement system and method based on double telecentric camera matching
WO2013076605A1 (en) Method and system for alignment of a pattern on a spatial coded slide image
CN108195313A (en) A kind of high dynamic range method for three-dimensional measurement based on Intensity response function
CN107610183B (en) Calibration method of fringe projection phase height conversion mapping model
JP2014115109A (en) Device and method for measuring distance
CN109307483A (en) A kind of phase developing method based on structured-light system geometrical constraint
CN115775303B (en) Three-dimensional reconstruction method for high-reflection object based on deep learning and illumination model
CN110345882A (en) A kind of adaptive structure light three-dimension measuring system and method based on geometrical constraint
CN110425998A (en) The components three-dimensional measurement method of gray level image coupling feature point height
CN110692084A (en) Deriving topology information for a scene
CN116518869A (en) Metal surface measurement method and system based on photometric stereo and binocular structured light
CN111353997A (en) Real-time three-dimensional surface defect detection method based on fringe projection
JP2004537732A (en) Three-dimensional imaging by projecting interference fringes and evaluating absolute phase mapping
CN116295113A (en) Polarization three-dimensional imaging method integrating fringe projection
CN113551617B (en) Binocular double-frequency complementary three-dimensional surface type measuring method based on fringe projection
CN116242277A (en) Automatic measurement method for size of power supply cabinet structural member based on full-field three-dimensional vision
CN112294453B (en) Microsurgery surgical field three-dimensional reconstruction system and method
CN112325799A (en) High-precision three-dimensional face measurement method based on near-infrared light projection
CN107941147B (en) Non-contact online measurement method for three-dimensional coordinates of large-scale system
KR100500406B1 (en) 3d shape measuring device and method using the correction of phase errors
Pineda et al. Developing a robust acquisition system for fringe projection profilometry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination