CN117804381A - Three-dimensional reconstruction method for object based on camera array focusing structure light

Three-dimensional reconstruction method for object based on camera array focusing structure light

Info

Publication number
CN117804381A
Authority
CN
China
Prior art keywords
camera
camera array
dimensional
stripe
phase shift
Prior art date
Legal status
Granted
Application number
CN202410234573.4A
Other languages
Chinese (zh)
Other versions
CN117804381B (en)
Inventor
袁寒
邓淋文
李米可
王亚品
肖朝
胡兴成
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Information Technology
Priority to CN202410234573.4A
Publication of CN117804381A
Application granted
Publication of CN117804381B
Legal status: Active


Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional reconstruction method for objects based on camera-array focused structured light, which aims to solve the single-camera defocusing problem that traditional measurement algorithms suffer when the surface of the object to be measured contains regions of large depth variation. According to the invention, a camera array is used to obtain clear fringe patterns of the object to be measured at different depth planes; super-resolution four-dimensional fringes are generated from the calibration parameters; the contrast of the super-resolution four-dimensional fringes is used to determine the best-focused fringe gray value for the region corresponding to each pixel of the reference camera's image plane, yielding best-focus deformed fringes; and finally the three-dimensional information of the object is reconstructed from the deformed fringes.

Description

Three-dimensional reconstruction method for object based on camera array focusing structure light
Technical Field
The invention belongs to the technical field of structured-light three-dimensional measurement, and relates to a three-dimensional reconstruction method for objects based on camera-array focused structured light.
Background
Three-dimensional measurement technology is one of the solutions for realizing full-size online precision measurement of products and helping manufacturers achieve refined, intelligent and digital product management. Existing mature measurement methods can be roughly divided into contact methods and optics-based non-contact methods. Contact methods measure the three-dimensional shape of an object mainly by touching its surface directly with a probe; they are little affected by the reflectance, color and curvature of the object surface and achieve high measurement accuracy, but measurement is slow and unsuitable for non-rigid objects. Optics-based non-contact methods mainly use modulated light-field information to measure the geometry of the object surface. Their non-contact nature gives them great application value in measuring objects that are soft, must not be touched, furred or easily deformed. Meanwhile, the rapid development of area-array light-field modulation devices and computing hardware in recent years has markedly improved the measurement efficiency of non-contact methods.
With the acceleration of industrialization, integrally formed products account for an ever-growing share of the market, and the parts to be measured keep growing in size. The optical imaging system limits a structured-light measurement device to a certain depth-of-field range, and imaging sharpness directly affects the accuracy of structured-light measurement, so the imaging device must be refocused many times to obtain an effective depth-of-field range. In the measurement of large-depth scenes, it is difficult for a single measurement to guarantee that the imaging device covers the full depth range; multiple measurements and calibrations are therefore needed, which increases the complexity of the measurement. Moreover, when applied to online measurement, refocusing the device further degrades the stability and accuracy of the measurement.
Disclosure of Invention
The invention aims to provide a three-dimensional reconstruction method for objects based on camera-array focused structured light: a camera array is used to obtain clear fringe patterns of the object to be measured at imaging planes of different depths; super-resolution four-dimensional fringes are generated from the calibration parameters; the contrast of the super-resolution four-dimensional fringes is used to determine the best-focused fringe gray value for the region corresponding to each pixel of the reference camera's image plane, yielding best-focus deformed fringes; and finally the three-dimensional information of the object is reconstructed from the deformed fringes.
The invention is realized by the following technical scheme:
a three-dimensional reconstruction method of an object based on camera array focusing structure light is realized based on a camera array and a projector, a camera array reference plane is established, and N frames of stripes are projected to the surface of the object to be detected by adopting the projector, wherein N is more than or equal to 3; establishing a plurality of parallel depth imaging planes which are different in distance from a reference plane of the camera array, and calculating by utilizing a homography matrix of the camera array and images on the plurality of depth imaging planes to obtain a deformed fringe pattern; then synthesizing the deformed fringe pattern into super-resolution four-dimensional fringes comprising phase shift information, and determining the optimal focusing deformed fringe of the camera array on each depth imaging plane by utilizing the phase shift information of the super-resolution four-dimensional fringes; calculating an absolute phase by using the optimal focusing deformation stripes, calculating the polar line corresponding relation between the camera array and the projector, calculating the three-dimensional point cloud of the surface of the object to be measured by using the absolute phase and the polar line corresponding relation, and completing the three-dimensional reconstruction of the surface of the object to be measured by using the three-dimensional point cloud.
In order to better realize the invention, the method specifically comprises the following steps:
step 1, calibrating a camera array and a projector to obtain calibration parameters, establishing a camera array reference plane through the calibration parameters, establishing a plurality of parallel depth imaging planes which are different in distance from the camera array reference plane, and calculating a homography matrix of the camera array by using the depth imaging planes;
step 2, projecting N frames of phase shift stripes and N frames of additional stripes to the surface of an object to be detected by using a projector, acquiring images on each depth imaging plane by using a camera array, and resolving to obtain N frames of deformed phase shift stripes and N frames of deformed additional stripes by combining a homography matrix of the camera array;
step 3, projecting N frames of deformed phase shift stripes and N frames of deformed additional stripes on each depth imaging plane to a camera array reference plane by utilizing the homography matrix obtained in the step 1, synthesizing the N frames of deformed phase shift stripes into super-resolution four-dimensional phase shift stripes, and synthesizing the N frames of deformed additional stripes into super-resolution four-dimensional additional stripes;
step 4, calculating the optimal contrast of the super-resolution four-dimensional phase-shift stripe on each depth imaging plane, and updating according to the optimal contrast index to obtain the gray value of each pixel on the super-resolution four-dimensional phase-shift stripe and the gray value of each pixel on the super-resolution four-dimensional additional stripe; the gray value of each pixel on the super-resolution four-dimensional phase shift stripe is utilized to calculate the optimal focusing deformation phase shift stripe, and the gray value of each pixel on the super-resolution four-dimensional additional stripe is utilized to calculate the optimal focusing deformation additional stripe;
and 5, calculating the absolute phase of the object to be measured by using the optimal focusing deformation phase shift stripes and the optimal focusing deformation additional stripes, calculating the three-dimensional point cloud of the surface of the object to be measured by combining the absolute phase, the epipolar correspondence between the camera array and the projector, and the calibration parameters of the camera array and the projector, and completing the three-dimensional reconstruction of the surface of the object to be measured by using the three-dimensional point cloud.
In order to better implement the present invention, further, the step 4 specifically comprises the following steps:
step 4.1, establishing a contrast function related to the super-resolution four-dimensional phase-shift stripe based on the maximum value of the super-resolution four-dimensional phase-shift stripe and the minimum value of the super-resolution four-dimensional phase-shift stripe;
step 4.2, establishing a contrast root mean square error function based on the contrast function, and searching the optimal distance between the corresponding depth imaging plane and the camera array reference plane when the contrast root mean square error function value is minimum;
step 4.3, updating the contrast function according to the optimal distance index;
step 4.4, searching the corresponding optimal camera code when the contrast function value after index updating is maximum;
step 4.5, updating according to the optimal distance and the optimal camera coding index to obtain the gray value of each pixel on the super-resolution four-dimensional phase shift stripe, and calculating the optimal focusing deformation phase shift stripe through the gray value of each pixel on the super-resolution four-dimensional phase shift stripe; and updating according to the optimal distance and the optimal camera coding index to obtain the gray value of each pixel on the super-resolution four-dimensional additional stripe, and calculating the optimal focusing deformation additional stripe through the gray value of each pixel on the super-resolution four-dimensional additional stripe.
In order to better implement the present invention, further, the contrast function of the super-resolution four-dimensional phase-shift stripes is:

$$V(x,y,m,d)=\frac{\max_n\left\{I_n^{(4D)}(x,y,m,d)\right\}-\min_n\left\{I_n^{(4D)}(x,y,m,d)\right\}}{\max_n\left\{I_n^{(4D)}(x,y,m,d)\right\}+\min_n\left\{I_n^{(4D)}(x,y,m,d)\right\}}$$

wherein: $I_n^{(4D)}$ represents the nth frame super-resolution four-dimensional phase shift stripe; $\max_n\{\cdot\}$ represents the maximum over the stripe index n; $\min_n\{\cdot\}$ represents the minimum over the stripe index n; $V(x,y,m,d)$ represents the contrast of the super-resolution four-dimensional phase-shift stripe pixel at coordinates (x, y) for the mth camera at a distance d from the camera array reference plane; $x_d^m$ is the coordinate on the camera array reference plane to which the x coordinate of the pixel of the image acquired by the mth camera projects when the distance between the depth imaging plane and the camera array reference plane is d; $y_d^m$ is the corresponding projected y coordinate.
In order to better implement the present invention, further, the contrast root mean square error function, taken over the per-camera contrasts about their mean, is:

$$E(x,y,d)=\sqrt{\frac{1}{M}\sum_{m=1}^{M}\left[V(x,y,m,d)-\bar{V}(x,y,d)\right]^{2}},\qquad \bar{V}(x,y,d)=\frac{1}{M}\sum_{m=1}^{M}V(x,y,m,d)$$

wherein: M represents the number of cameras in the camera array; m denotes the mth camera in the camera array.
In order to better implement the present invention, further, the step 5 specifically includes:
step 5.1, calculating the wrapping phase of the surface of the object to be measured according to the optimal focusing deformation phase shift stripe, and calculating the additional phase of the surface of the object to be measured according to the optimal focusing deformation additional stripe;
step 5.2, calculating the phase difference and the phase shift period between the wrapping phase and the additional phase, and calculating the fringe order of the surface of the object to be measured through the phase difference and the phase shift period;
step 5.3, calculating the absolute phase of the stripes on the surface of the object to be measured through wrapping phases and stripe orders;
and 5.4, calculating the epipolar correspondence between the camera array and the projector by using their calibration parameters, obtaining the imaging coordinate correspondence between the projector and the camera array from the epipolar correspondence, and solving the three-dimensional point cloud of the surface of the object to be measured by using the epipolar constraint and ray intersection, so as to finish the three-dimensional reconstruction of the surface of the object to be measured.
In order to better implement the present invention, further, the step 2 specifically includes:
projecting N frames of phase shift stripes and N frames of additional stripes to the surface of an object to be detected by using a projector, setting the imaging aperture sizes and focusing distances of a plurality of cameras in a camera array to be different values, and then collecting N frames of deformation phase shift stripes and N frames of deformation additional stripes modulated by the surface of the object to be detected by a plurality of cameras in the camera array;
the N-frame deformation phase shift stripes are as follows:
wherein:representing the background light intensity of the deformed phase-shift fringes at the (x, y) coordinates on the imaging plane of the mth camera; />Representing the modulated signal of the deformed phase-shifted fringes at coordinates (x, y) on the imaging plane of the mth camera; />Representing phase information of deformed phase shift fringes modulated by the object at coordinates (x, y) on an imaging plane of an mth camera; m represents the number of cameras in the camera array; n represents the number of deformed phase shift stripes, and N is an integer greater than or equal to 3; n represents an image index corresponding to N frames of deformed phase shift stripes; />Representing the amount of phase shift of the deformed phase shift stripe; m represents the mth camera in the camera array;
the N frame deformation additional stripes are as follows:
wherein:representing the background light intensity of the deformed additional stripe at the (x, y) coordinate on the imaging plane of the mth camera; />Representing the modulated signal of the deformed additional stripe at the coordinates (x, y) on the imaging plane of the mth camera; />Adding phase information of fringes at coordinates (x, y) on an imaging plane of an mth camera for deformation modulated by the object; m represents the number of cameras in the camera array; n represents the number of deformed phase shift stripes, and N is an integer greater than or equal to 3; n represents an image index corresponding to N frames of deformed phase shift stripes; />Representing the amount of phase shift of the deformed phase shift stripe; m denotes the mth coded camera in the camera array.
In order to better implement the present invention, further, the step 1 specifically includes:
step 1.1, selecting any camera in a camera array as a reference camera, and selecting other cameras as non-reference cameras, wherein an imaging plane of the reference camera is taken as a camera array reference plane;
step 1.2, establishing a plurality of depth imaging planes which are parallel to the camera array reference plane and are different in distance from the camera array reference plane, and setting feature voxels on each depth imaging plane;
and 1.3, calculating imaging position relations of feature voxels on each depth imaging plane on the imaging plane of the reference camera and the imaging plane of the non-reference camera according to calibration parameters of the camera array, and calculating a homography matrix between the non-reference camera and the reference camera through the imaging position relations.
In order to better implement the present invention, further, in the step 3, the super-resolution four-dimensional phase shift stripes synthesized through the homography matrix are:

$$I_n^{(4D)}(x,y,m,d)=I_n^m\left(x_d^m,\,y_d^m\right)$$

and the super-resolution four-dimensional additional stripes synthesized through the homography matrix are:

$$I_n'^{(4D)}(x,y,m,d)=I_n'^m\left(x_d^m,\,y_d^m\right)$$

wherein:

$$\begin{bmatrix}x\\y\\1\end{bmatrix}\sim H_d^m\begin{bmatrix}x_d^m\\y_d^m\\1\end{bmatrix}$$

$x_d^m$ represents the x coordinate of the image acquired by the mth camera when the depth imaging plane is at distance d from the camera array reference plane; $y_d^m$ represents the corresponding y coordinate; $H_d^m$ represents the homography matrix of the mth camera at distance d from the camera array reference plane; x and y represent the coordinates of $(x_d^m, y_d^m)$ projected onto the camera array reference plane.
Compared with the prior art, the invention has the following advantages:
the invention utilizes the camera array to obtain the multi-focusing deformed fringe pattern, which can break through the restriction of the depth of field range of the optical lens, thereby ensuring that the clear deformed fringe pattern is obtained in the measurement of large-size objects and ensuring the real-time performance and convenience of the system; meanwhile, the multi-plane calibration technology increases the stability of measurement.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of an arrangement of a camera array and a projector;
FIG. 3 is a schematic diagram of a projector projecting fringes onto an object to be measured;
FIG. 4 is a schematic diagram of a super-resolution four-dimensional phase shift stripe;
FIG. 5 is a schematic diagram of a best focus deformed phase shift stripe;
fig. 6 is a schematic diagram of a result of three-dimensional reconstruction of an object to be measured.
Detailed Description
The following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the present invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; it is further to be understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
For convenience of description, the words "upper," "lower," "left" and "right," where used, merely indicate directions consistent with those of the drawings themselves and do not limit the structure; they serve only to facilitate and simplify the description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
The terms "mounted," "connected," "secured," and the like are to be construed broadly and refer to either a fixed connection, a removable connection, or an integral body, for example; the terms are used herein as specific meanings as understood by those of ordinary skill in the art, and are not limited to the following terms.
Example 1:
According to the three-dimensional reconstruction method for an object based on camera-array focused structured light, as shown in fig. 1 and 2, the method is realized with a camera array and a projector. A camera-array reference plane is established, and N frames of fringes are projected onto the surface of the object to be measured by the projector, where N ≥ 3. Several parallel depth imaging planes at different distances from the camera-array reference plane are established, and deformed fringe patterns are computed from the homography matrices of the camera array and the images on these depth imaging planes. The deformed fringe patterns are then synthesized into super-resolution four-dimensional fringes containing phase-shift information, and the phase-shift information of the super-resolution four-dimensional fringes is used to determine the best-focus deformed fringes of the camera array on each depth imaging plane. The absolute phase is calculated from the best-focus deformed fringes, the epipolar correspondence between the camera array and the projector is calculated, the three-dimensional point cloud of the object surface is computed from the absolute phase and the epipolar correspondence, and the three-dimensional reconstruction of the object surface is completed with the three-dimensional point cloud.
The method comprises the following steps:
step 1, calibrating a camera array and a projector to obtain calibration parameters, establishing a camera array reference plane through the calibration parameters, establishing a plurality of parallel depth imaging planes which are different in distance from the camera array reference plane, and calculating a homography matrix of the camera array by using the depth imaging planes;
the camera array consists of M cameras, calibration parameters are obtained by using the existing camera array calibration method, and the calibration parameters of the cameras comprise an internal reference matrix K of the cameras cm Distortion matrix D of camera cm External reference matrix W between cameras cm . Meanwhile, calibrating the projector to obtain calibration parameters of the projector, wherein the calibration parameters of the projector comprise an internal reference matrix K of the projector p Distortion matrix D of projector p External reference matrix W between projector and camera p And an eigen matrix E.
The index of a camera is denoted m, i.e., m refers to the mth camera in the camera array. The camera with m = 1 is selected as the reference camera, and the remaining cameras in the camera array are non-reference cameras. The camera-array reference plane is established from the calibration parameters, and the imaging position relation of each feature voxel on the depth imaging plane between each non-reference camera and the reference camera of the camera array is calculated under different depth conditions according to the camera-array calibration parameters, so as to obtain the homography matrices. The method comprises the following steps:
A feature voxel P(X_p, Y_p, Z_p) is established, where X_p, Y_p and Z_p are the three-dimensional space coordinates of the point P. The position of the feature voxel P(X_p, Y_p, Z_p) on the imaging plane of each camera is then expressed as:

$$S_m\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}K_{cm}&O\end{bmatrix}\begin{bmatrix}R_{cm}&T_{cm}\\\mathbf{0}^{T}&1\end{bmatrix}\begin{bmatrix}X_p\\Y_p\\Z_p\\1\end{bmatrix}$$

wherein: $S_m$ is the imaging scale factor of the mth camera; $K_{cm}$ is the intrinsic matrix of the mth camera; O is the zero matrix; $R_{cm}$ is the rotation matrix from the current depth imaging plane to the imaging plane of the mth camera, and $T_{cm}$ is the corresponding translation matrix; (x, y) are the coordinates of the feature voxel transformed onto the imaging plane of the mth camera.
Then the distortion matrix D_cm of each camera is used to calculate the distorted pixel points corresponding to the feature voxels on the imaging plane of each camera. Distorted pixel points of U (U ≥ 4) feature voxels are obtained on the imaging planes of the different cameras, giving U pairs of distorted pixel points and feature voxels, from which the homography matrix between each non-reference camera and the reference camera can be calculated for the different depth conditions.
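As an illustrative sketch of this computation (the function name, the use of OpenCV and the RANSAC option are assumptions of this example, not the patent's reference implementation), the homography for one non-reference camera and one depth imaging plane can be fitted from the U ≥ 4 matched point pairs:

```python
import numpy as np
import cv2

def homography_to_reference(cam_pts, ref_pts):
    """Fit the homography mapping the mth camera's distorted pixel points
    of the U feature voxels (cam_pts) onto the matching points on the
    reference camera's plane (ref_pts); both arrays are (U, 2), U >= 4."""
    cam_pts = np.asarray(cam_pts, dtype=np.float64)
    ref_pts = np.asarray(ref_pts, dtype=np.float64)
    if len(cam_pts) < 4:
        raise ValueError("at least 4 point pairs are required")
    # Direct linear transform with RANSAC to reject mismatched pairings.
    H, _ = cv2.findHomography(cam_pts, ref_pts, cv2.RANSAC)
    return H  # 3x3 matrix, maps [x, y, 1]^T up to scale
```

One such matrix is computed per non-reference camera and per depth imaging plane, giving the set H_d^m used in step 3.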
Step 2, as shown in fig. 3, projecting N frames of phase-shift stripes and N frames of additional stripes to the surface of an object to be detected by using a projector, acquiring images on each depth imaging plane by using a camera array, and resolving to obtain N frames of deformed phase-shift stripes and N frames of deformed additional stripes by combining a homography matrix of the camera array;
step 3, as shown in fig. 4, using the homography matrix obtained in step 1, projecting N-frame deformed phase shift stripes and N-frame deformed additional stripes on each depth imaging plane to a camera array reference plane, synthesizing the N-frame deformed phase shift stripes into super-resolution four-dimensional phase shift stripes, and synthesizing the N-frame deformed additional stripes into super-resolution four-dimensional additional stripes;
step 4, as shown in fig. 5, calculating the optimal contrast of the super-resolution four-dimensional phase-shift stripe on each depth imaging plane, and updating according to the optimal contrast index to obtain the gray value of each pixel on the super-resolution four-dimensional phase-shift stripe and the gray value of each pixel on the super-resolution four-dimensional additional stripe; the gray value of each pixel on the super-resolution four-dimensional phase shift stripe is utilized to calculate the optimal focusing deformation phase shift stripe, and the gray value of each pixel on the super-resolution four-dimensional additional stripe is utilized to calculate the optimal focusing deformation additional stripe;
and 5, as shown in fig. 6, calculating the absolute phase of the object to be measured by using the optimal focusing deformation phase shift stripes and the optimal focusing deformation additional stripes, calculating the three-dimensional point cloud of the surface of the object to be measured by combining the absolute phase, the epipolar correspondence between the camera array and the projector, and the calibration parameters of the camera array and the projector, and completing the three-dimensional reconstruction of the surface of the object to be measured by using the three-dimensional point cloud.
Example 2:
the method for reconstructing an object in three dimensions based on camera array focusing structure light of the present embodiment is improved on the basis of embodiment 1, and the step 1 specifically includes:
step 1.1, selecting any camera in a camera array as a reference camera, and selecting other cameras as non-reference cameras, wherein an imaging plane of the reference camera is taken as a camera array reference plane;
step 1.2, establishing a plurality of depth imaging planes which are parallel to the camera array reference plane and are different in distance from the camera array reference plane, and setting feature voxels on each depth imaging plane;
and 1.3, calculating imaging position relations of feature voxels on each depth imaging plane on the imaging plane of the reference camera and the imaging plane of the non-reference camera according to calibration parameters of the camera array, and calculating a homography matrix between the non-reference camera and the reference camera through the imaging position relations.
The step 2 specifically comprises the following steps:
projecting N frames of phase shift stripes and N frames of additional stripes to the surface of an object to be detected by using a projector, setting the imaging aperture sizes and focusing distances of a plurality of cameras in a camera array to be different values, and then collecting N frames of deformation phase shift stripes and N frames of deformation additional stripes modulated by the surface of the object to be detected by a plurality of cameras in the camera array;
the N frames of deformation phase shift stripes are as follows:

$$I_n^m(x,y)=A^m(x,y)+B^m(x,y)\cos\left[\varphi^m(x,y)+\delta_n\right],\qquad n=1,\dots,N$$

wherein: $A^m(x,y)$ represents the background light intensity of the deformed phase-shift stripes at the (x, y) coordinates on the imaging plane of the mth camera; $B^m(x,y)$ represents the modulation signal of the deformed phase-shift stripes at coordinates (x, y) on the imaging plane of the mth camera; $\varphi^m(x,y)$ represents the phase information of the deformed phase-shift stripes modulated by the object at coordinates (x, y) on the imaging plane of the mth camera; M represents the number of cameras in the camera array; N represents the number of deformed phase-shift stripes, an integer with N ≥ 3; n represents the image index of the N frames of deformed phase-shift stripes; $\delta_n$ represents the phase-shift amount of the deformed phase-shift stripes; m denotes the mth camera in the camera array;
the N frames of deformation additional stripes are as follows:

$$I_n'^m(x,y)=A'^m(x,y)+B'^m(x,y)\cos\left[\varphi'^m(x,y)+\delta_n\right],\qquad n=1,\dots,N$$

wherein: $A'^m(x,y)$ represents the background light intensity of the deformed additional stripes at the (x, y) coordinates on the imaging plane of the mth camera; $B'^m(x,y)$ represents the modulation signal of the deformed additional stripes at coordinates (x, y) on the imaging plane of the mth camera; $\varphi'^m(x,y)$ represents the phase information of the deformed additional stripes modulated by the object at coordinates (x, y) on the imaging plane of the mth camera; M represents the number of cameras in the camera array; N represents the number of deformed stripes, an integer with N ≥ 3; n represents the image index; $\delta_n$ represents the phase-shift amount; m denotes the mth camera in the camera array.
In the step 3, the super-resolution four-dimensional phase shift stripes synthesized by the homography matrix are:

$$I_n^{(4D)}(x,y,m,d)=I_n^m\left(x_d^m,\,y_d^m\right)$$

and the super-resolution four-dimensional additional stripes synthesized by the homography matrix are:

$$I_n'^{(4D)}(x,y,m,d)=I_n'^m\left(x_d^m,\,y_d^m\right)$$

wherein:

$$\begin{bmatrix}x\\y\\1\end{bmatrix}\sim H_d^m\begin{bmatrix}x_d^m\\y_d^m\\1\end{bmatrix}$$

$x_d^m$ represents the x coordinate of the image acquired by the mth camera when the depth imaging plane is at distance d from the camera array reference plane; $y_d^m$ represents the corresponding y coordinate; $H_d^m$ represents the homography matrix of the mth camera at distance d from the camera array reference plane; x and y represent the coordinates of $(x_d^m, y_d^m)$ projected onto the camera array reference plane.
The step 4 specifically comprises the following steps:
step 4.1, establishing a contrast function related to the super-resolution four-dimensional phase-shift stripe based on the maximum value of the super-resolution four-dimensional phase-shift stripe and the minimum value of the super-resolution four-dimensional phase-shift stripe;
step 4.2, establishing a contrast root mean square error function based on the contrast function, and searching the optimal distance between the corresponding depth imaging plane and the camera array reference plane when the contrast root mean square error function value is minimum;
step 4.3, updating the contrast function according to the optimal distance index;
step 4.4, searching the corresponding optimal camera code when the contrast function value after index updating is maximum;
step 4.5, updating according to the optimal distance and the optimal camera coding index to obtain the gray value of each pixel on the super-resolution four-dimensional phase shift stripe, and calculating the optimal focusing deformation phase shift stripe through the gray value of each pixel on the super-resolution four-dimensional phase shift stripe; and updating according to the optimal distance and the optimal camera coding index to obtain the gray value of each pixel on the super-resolution four-dimensional additional stripe, and calculating the optimal focusing deformation additional stripe through the gray value of each pixel on the super-resolution four-dimensional additional stripe.
The step 5 specifically comprises the following steps:
step 5.1, calculating the wrapping phase of the surface of the object to be measured according to the optimal focusing deformation phase shift stripe, and calculating the additional phase of the surface of the object to be measured according to the optimal focusing deformation additional stripe;
step 5.2, calculating the phase difference and the phase shift period between the wrapping phase and the additional phase, and calculating the fringe order of the surface of the object to be measured through the phase difference and the phase shift period;
step 5.3, calculating the absolute phase of the stripes on the surface of the object to be measured through wrapping phases and stripe orders;
and 5.4, calculating the epipolar correspondence between the camera array and the projector by using their calibration parameters, obtaining the imaging coordinate correspondence between the projector and the camera array from the epipolar correspondence, and solving the three-dimensional point cloud of the surface of the object to be measured by using the epipolar constraint and ray intersection, so as to finish the three-dimensional reconstruction of the surface of the object to be measured.
Other portions of this embodiment are the same as those of embodiment 1, and thus will not be described in detail.
Example 3:
the three-dimensional reconstruction method of the object based on the camera array focusing structure light of the embodiment is improved on the basis of embodiment 1 or 2, and specifically comprises the following steps:
step 1: computing homography matrix
The camera array is formed by 4 cameras, as shown in fig. 2. The calibration parameters of the camera array are obtained with an existing method, including the intrinsic matrix K_cm of the mth camera, the distortion matrix D_cm of the mth camera, and the extrinsic matrix W_cm between each non-reference camera and the reference camera.
The projector calibration parameters are also obtained, including the intrinsic matrix K_p of the projector and the distortion matrix D_p of the projector, as well as the extrinsic matrix W_p between the projector and the reference camera and the essential matrix E.
The camera with m = 1 in the camera array is selected as the reference camera, and the remaining cameras are non-reference cameras. According to the camera-array calibration parameters, the imaging position relation of each feature voxel on the depth imaging plane between each non-reference camera and the reference camera is calculated under different depth conditions, and the homography matrices are obtained.
Each depth imaging plane is parallel to the camera array reference plane, and the imaging position of the current feature voxel P(X_p, Y_p, Z_p) on the imaging plane of each camera is given by the projection relation of step 1.3:

$$S_m\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}K_{cm}&O\end{bmatrix}\begin{bmatrix}R_{cm}&T_{cm}\\\mathbf{0}^{T}&1\end{bmatrix}\begin{bmatrix}X_p\\Y_p\\Z_p\\1\end{bmatrix}$$
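For illustration, the distorted imaging positions of the feature voxels can be computed with OpenCV's projectPoints, which applies the intrinsic, pose and distortion parameters in one call; all numeric calibration values below are placeholders, not values from the patent:

```python
import numpy as np
import cv2

# Feature voxels on one depth imaging plane (assumed plane distance d = 500 mm).
P = np.array([[0.0, 0.0, 500.0], [100.0, 0.0, 500.0],
              [0.0, 100.0, 500.0], [100.0, 100.0, 500.0]])  # U = 4 voxels

# Placeholder calibration of the mth camera: intrinsics K_cm, distortion D_cm
# and the pose (R_cm, T_cm) of the depth plane in that camera's frame.
K_cm = np.array([[2500.0, 0.0, 960.0],
                 [0.0, 2500.0, 540.0],
                 [0.0, 0.0, 1.0]])
D_cm = np.array([0.05, -0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3
rvec = np.zeros(3)          # R_cm as a Rodrigues rotation vector
tvec = np.zeros(3)          # T_cm

# Distorted pixel positions of the feature voxels on this camera's image plane.
pix, _ = cv2.projectPoints(P, rvec, tvec, K_cm, D_cm)
print(pix.reshape(-1, 2))   # U x 2 array of (x, y) pixel coordinates
```

Pairing these distorted pixel points between the reference and each non-reference camera yields the U point pairs from which each homography is fitted, as in step 1.3.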
step 2: collecting deformed stripes
The 3 frames of phase-shift fringes required by the 3-step phase-shift algorithm are encoded according to the phase-measuring-profilometry principle, and the 3 frames of additional fringes required for determining the fringe order are encoded by the dual-frequency heterodyne method. The coded stripes are projected onto the surface of the object to be measured by the projector, while the camera array acquires the deformed stripes modulated by the object surface. The 3 frames of deformation phase shift stripes acquired by the mth camera are:

$$I_n^m(x,y)=A^m(x,y)+B^m(x,y)\cos\left[\varphi^m(x,y)+\frac{2\pi n}{3}\right],\qquad n=1,2,3$$

wherein: $A^m(x,y)$ represents the background light intensity of the deformed phase-shift stripes at the (x, y) coordinates on the imaging plane of the mth camera; $B^m(x,y)$ represents the modulation signal of the deformed phase-shift stripes at coordinates (x, y) on the imaging plane of the mth camera; $\varphi^m(x,y)$ represents the phase information of the deformed phase-shift stripes modulated by the object at coordinates (x, y) on the imaging plane of the mth camera; n represents the image index of the 3 frames of deformed phase-shift stripes; $2\pi n/3$ is the phase-shift amount of the equal-step 3-step algorithm; m represents the mth camera in the camera array;
the 3 frames of deformation additional stripes obtained by acquisition are:

$$I_n'^m(x,y)=A'^m(x,y)+B'^m(x,y)\cos\left[\varphi'^m(x,y)+\frac{2\pi n}{3}\right],\qquad n=1,2,3$$

wherein: $A'^m(x,y)$ represents the background light intensity of the deformed additional stripes at the (x, y) coordinates on the imaging plane of the mth camera; $B'^m(x,y)$ represents the modulation signal of the deformed additional stripes at coordinates (x, y) on the imaging plane of the mth camera; $\varphi'^m(x,y)$ represents the phase information of the deformed additional stripes modulated by the object at coordinates (x, y) on the imaging plane of the mth camera; n represents the image index; $2\pi n/3$ is the phase-shift amount; m denotes the mth camera in the camera array.
Wherein, the focusing position and aperture size of each camera in the camera array are different.
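As a sketch of the projected patterns (the projector resolution and the fringe periods T1 = 18 and T2 = 21 pixels are assumed example values, not taken from the patent):

```python
import numpy as np

def fringe_patterns(width, height, period, n_steps=3):
    """3-step sinusoidal patterns I_n = 0.5 + 0.5*cos(2*pi*x/T + 2*pi*n/3)."""
    x = np.arange(width)
    pats = []
    for n in range(1, n_steps + 1):
        row = 0.5 + 0.5 * np.cos(2.0 * np.pi * x / period
                                 + 2.0 * np.pi * n / n_steps)
        pats.append(np.tile(row, (height, 1)))  # constant along columns
    return pats  # list of (height, width) arrays with values in [0, 1]

# Phase-shift fringes with period T1 and additional (heterodyne) fringes
# with the slightly longer period T2.
shift_pats = fringe_patterns(1920, 1080, period=18.0)
extra_pats = fringe_patterns(1920, 1080, period=21.0)
```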
Step 3: synthesis of super-resolution four-dimensional stripes
The 3 frames of deformation phase shift stripes and the 3 frames of deformation additional stripes acquired by the non-reference cameras in the camera array are projected onto the camera array reference plane, plane by plane. When the distance between the depth imaging plane and the camera array reference plane is d, the pixel coordinates $(x_d^m, y_d^m)$ of the image collected by the mth camera project to the following coordinates on the camera array reference plane:

$$\begin{bmatrix}x\\y\\1\end{bmatrix}\sim H_d^m\begin{bmatrix}x_d^m\\y_d^m\\1\end{bmatrix}$$

wherein: $x_d^m$ represents the x coordinate of the image acquired by the mth camera at distance d from the camera array reference plane; $y_d^m$ represents the corresponding y coordinate; $H_d^m$ represents the homography matrix of the mth camera at distance d from the camera array reference plane; x and y represent the coordinates of $(x_d^m, y_d^m)$ projected onto the camera array reference plane.
According to this pixel coordinate relation, the synthesized super-resolution four-dimensional phase shift stripes are:

$$I_n^{(4D)}(x,y,m,d)=I_n^m\left(x_d^m,\,y_d^m\right)$$

and the synthesized super-resolution four-dimensional additional stripes are:

$$I_n'^{(4D)}(x,y,m,d)=I_n'^m\left(x_d^m,\,y_d^m\right)$$

wherein n = 1, 2, 3 and m = 1, 2, 3, 4.
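One way to realize this synthesis (cv2.warpPerspective and the array layout are implementation assumptions of this sketch) is to warp every camera's fringe images onto the reference plane with the per-depth homographies and stack the results:

```python
import numpy as np
import cv2

def synthesize_sr_fringes(images, homographies, out_size):
    """images[m][n]       : nth fringe image of camera m
    homographies[m][d]    : 3x3 H mapping camera m's pixels onto the
                            reference plane for depth-plane index d
    out_size              : (width, height) of the reference plane
    Returns sr with sr[n, m, d] = fringe resampled on the reference plane."""
    M, N, D = len(images), len(images[0]), len(homographies[0])
    w, h = out_size
    sr = np.zeros((N, M, D, h, w), dtype=np.float32)
    for m in range(M):
        for d in range(D):
            for n in range(N):
                sr[n, m, d] = cv2.warpPerspective(
                    np.asarray(images[m][n], dtype=np.float32),
                    homographies[m][d], (w, h), flags=cv2.INTER_LINEAR)
    return sr
```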
step 4: calculating the best focusing deformation stripe
The contrast function of the super-resolution four-dimensional phase shift stripes is used:

$$V(x,y,m,d)=\frac{\max_n\left\{I_n^{(4D)}(x,y,m,d)\right\}-\min_n\left\{I_n^{(4D)}(x,y,m,d)\right\}}{\max_n\left\{I_n^{(4D)}(x,y,m,d)\right\}+\min_n\left\{I_n^{(4D)}(x,y,m,d)\right\}}$$

wherein: $I_n^{(4D)}$ represents the nth frame super-resolution four-dimensional phase shift stripe; $\max_n\{\cdot\}$ represents the maximum over the stripe index n, with 1 ≤ n ≤ N; $\min_n\{\cdot\}$ represents the minimum over the stripe index n, with 1 ≤ n ≤ N; $V(x,y,m,d)$ represents the contrast of the super-resolution four-dimensional phase-shift stripe pixel at coordinates (x, y) for the mth camera at a distance d from the camera array reference plane; $x_d^m$ and $y_d^m$ are the coordinates of the mth camera's pixel that projects to (x, y) on the camera array reference plane when the depth imaging plane is at distance d.
The contrast root mean square error, taken over the per-camera contrasts about their mean, is established from the fringe contrast:

$$E(x,y,d)=\sqrt{\frac{1}{M}\sum_{m=1}^{M}\left[V(x,y,m,d)-\bar{V}(x,y,d)\right]^{2}},\qquad \bar{V}(x,y,d)=\frac{1}{M}\sum_{m=1}^{M}V(x,y,m,d)$$

Taking M = 4, this gives:

$$E(x,y,d)=\sqrt{\frac{1}{4}\sum_{m=1}^{4}\left[V(x,y,m,d)-\bar{V}(x,y,d)\right]^{2}}$$
E(x, y, d) is traversed to find, for each pixel, the value of d that minimizes the root mean square error; the current d value is recorded as d_min. The value of m that maximizes V(x, y, m, d_min) is then found and recorded as m_max. The gray value of each pixel is updated by the indices d_min and m_max, and the best focusing deformation phase shift stripes are calculated:

$$I_n^{best}(x,y)=I_n^{(4D)}\left(x,y,m_{max},d_{min}\right)$$

together with the best focusing deformation additional stripes:

$$I_n'^{best}(x,y)=I_n'^{(4D)}\left(x,y,m_{max},d_{min}\right)$$
step 5: reconstructing three-dimensional information of an object
The wrapped phase $\varphi(x,y)$ of the object surface is calculated from the 3 frames of best focusing deformation phase shift stripes; for the 3-step algorithm with phase shifts $2\pi n/3$ (n = 1, 2, 3) this is:

$$\varphi(x,y)=\arctan\frac{\sqrt{3}\left[I_2^{best}(x,y)-I_1^{best}(x,y)\right]}{2I_3^{best}(x,y)-I_1^{best}(x,y)-I_2^{best}(x,y)}$$

The additional phase $\varphi'(x,y)$ is calculated from the 3 frames of best focusing deformation additional stripes by the same formula.
Subtracting $\varphi'(x,y)$ from $\varphi(x,y)$ gives the phase difference:

$$\Delta\varphi(x,y)=\left[\varphi(x,y)-\varphi'(x,y)\right]\bmod 2\pi$$
With $T_1$ the period of the best focusing deformation phase shift stripes and $T_2$ the period of the best focusing deformation additional stripes ($T_2 > T_1$), the phase shift period can be calculated:

$$T_{12}=\frac{T_1 T_2}{T_2-T_1}$$

wherein: $T_{12}$ represents the phase shift (equivalent heterodyne) period; the two spatial frequencies beat as $1/T_{12}=1/T_1-1/T_2$, so the phase difference $\Delta\varphi$ is unambiguous over the longer period $T_{12}$.
Then the fringe order K(x, y) of the object surface is obtained by the standard dual-frequency heterodyne relation:

$$K(x,y)=\operatorname{round}\left[\frac{\left(T_{12}/T_1\right)\Delta\varphi(x,y)-\varphi(x,y)}{2\pi}\right]$$
by wrapping the phaseAnd the cord levelK(x,y) Absolute phase +.>
The epipolar correspondence between the reference camera and the projector can be obtained from the essential matrix E computed in step 1, and combining the epipolar correspondence with the absolute phase $\Phi(x,y)$ gives the correspondence between the projector and reference-camera image plane coordinates. Combining the intrinsic matrix, distortion matrix and extrinsic matrix of the reference camera and the projector, the three-dimensional point cloud of the object surface is obtained by the epipolar constraint and ray intersection, completing the three-dimensional reconstruction of the object surface.
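A heavily simplified triangulation sketch: the absolute phase fixes the projector column for each reference-camera pixel, and cv2.triangulatePoints intersects the two rays. Treating the camera row as the projector row is a simplifying assumption of this example (the patent resolves the second coordinate through the epipolar correspondence), and P_cam, P_proj are placeholder 3x4 projection matrices built from the calibration:

```python
import numpy as np
import cv2

def triangulate(abs_phase, T1, P_cam, P_proj):
    """abs_phase: (H, W) absolute phase on the reference camera.
    T1: phase-shift fringe period in projector pixels.
    P_cam, P_proj: 3x4 projection matrices of camera and projector."""
    h, w = abs_phase.shape
    yy, xx = np.mgrid[0:h, 0:w]
    u_proj = abs_phase * T1 / (2.0 * np.pi)   # projector column from phase
    cam_pts = np.stack([xx.ravel(), yy.ravel()]).astype(np.float64)
    proj_pts = np.stack([u_proj.ravel(), yy.ravel()]).astype(np.float64)
    X = cv2.triangulatePoints(P_cam, P_proj, cam_pts, proj_pts)  # 4 x K
    return (X[:3] / X[3]).T.reshape(h, w, 3)  # (H, W, 3) point cloud
```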
Other portions of this embodiment are the same as those of embodiment 1 or 2, and thus will not be described in detail.
The above is only a preferred embodiment of the present invention; it does not limit the present invention in any way, and any simple modification or equivalent change made to the above embodiments according to the technical substance of the present invention falls within the protection scope of the present invention.

Claims (9)

1. A three-dimensional reconstruction method for an object based on camera array focusing structure light, realized with a camera array and a projector, characterized in that: a camera array reference plane is established, and N frames of stripes are projected onto the surface of the object to be measured by the projector, where N ≥ 3; several parallel depth imaging planes at different distances from the camera array reference plane are established, and deformed fringe patterns are computed from the homography matrices of the camera array and the images on these depth imaging planes; the deformed fringe patterns are then synthesized into super-resolution four-dimensional fringes containing phase-shift information, and the phase-shift information of the super-resolution four-dimensional fringes is used to determine the best-focus deformed fringes of the camera array on each depth imaging plane; the absolute phase is calculated from the best-focus deformed fringes, the epipolar correspondence between the camera array and the projector is calculated, the three-dimensional point cloud of the object surface is computed from the absolute phase and the epipolar correspondence, and the three-dimensional reconstruction of the object surface is completed with the three-dimensional point cloud.
2. The method for reconstructing an object based on camera array focusing structured light according to claim 1, comprising the steps of:
step 1, calibrating a camera array and a projector to obtain calibration parameters, establishing a camera array reference plane through the calibration parameters, establishing a plurality of parallel depth imaging planes which are different in distance from the camera array reference plane, and calculating a homography matrix of the camera array by using the depth imaging planes;
step 2, projecting N frames of phase shift stripes and N frames of additional stripes to the surface of an object to be detected by using a projector, acquiring images on each depth imaging plane by using a camera array, and resolving to obtain N frames of deformed phase shift stripes and N frames of deformed additional stripes by combining a homography matrix of the camera array;
step 3, projecting N frames of deformed phase shift stripes and N frames of deformed additional stripes on each depth imaging plane to a camera array reference plane by utilizing the homography matrix obtained in the step 1, synthesizing the N frames of deformed phase shift stripes into super-resolution four-dimensional phase shift stripes, and synthesizing the N frames of deformed additional stripes into super-resolution four-dimensional additional stripes;
step 4, calculating the optimal contrast of the super-resolution four-dimensional phase-shift stripe on each depth imaging plane, and updating according to the optimal contrast index to obtain the gray value of each pixel on the super-resolution four-dimensional phase-shift stripe and the gray value of each pixel on the super-resolution four-dimensional additional stripe; the gray value of each pixel on the super-resolution four-dimensional phase shift stripe is utilized to calculate the optimal focusing deformation phase shift stripe, and the gray value of each pixel on the super-resolution four-dimensional additional stripe is utilized to calculate the optimal focusing deformation additional stripe;
and 5, calculating the absolute phase of the object to be measured by using the optimal focusing deformation phase shift stripes and the optimal focusing deformation additional stripes, calculating the three-dimensional point cloud of the surface of the object to be measured by combining the absolute phase, the epipolar correspondence between the camera array and the projector, and the calibration parameters of the camera array and the projector, and completing the three-dimensional reconstruction of the surface of the object to be measured by using the three-dimensional point cloud.
3. The method for reconstructing an object based on camera array focusing structured light according to claim 2, wherein the step 4 specifically comprises:
step 4.1, establishing a contrast function related to the super-resolution four-dimensional phase-shift stripe based on the maximum value of the super-resolution four-dimensional phase-shift stripe and the minimum value of the super-resolution four-dimensional phase-shift stripe;
step 4.2, establishing a contrast root mean square error function based on the contrast function, and searching the optimal distance between the corresponding depth imaging plane and the camera array reference plane when the contrast root mean square error function value is minimum;
step 4.3, updating the contrast function according to the optimal distance index;
step 4.4, searching the corresponding optimal camera code when the contrast function value after index updating is maximum;
step 4.5, updating according to the optimal distance and the optimal camera coding index to obtain the gray value of each pixel on the super-resolution four-dimensional phase shift stripe, and calculating the optimal focusing deformation phase shift stripe through the gray value of each pixel on the super-resolution four-dimensional phase shift stripe; and updating according to the optimal distance and the optimal camera coding index to obtain the gray value of each pixel on the super-resolution four-dimensional additional stripe, and calculating the optimal focusing deformation additional stripe through the gray value of each pixel on the super-resolution four-dimensional additional stripe.
4. A method for three-dimensional reconstruction of an object based on camera array focusing structured light as defined in claim 3, wherein the contrast function of the super-resolution four-dimensional phase-shift fringes is:

$$V(x,y,m,d)=\frac{\max_n\left\{I_n^{(4D)}(x,y,m,d)\right\}-\min_n\left\{I_n^{(4D)}(x,y,m,d)\right\}}{\max_n\left\{I_n^{(4D)}(x,y,m,d)\right\}+\min_n\left\{I_n^{(4D)}(x,y,m,d)\right\}}$$

wherein: $I_n^{(4D)}$ represents the nth frame super-resolution four-dimensional phase shift stripe; $\max_n\{\cdot\}$ represents the maximum over the stripe index n; $\min_n\{\cdot\}$ represents the minimum over the stripe index n; $V(x,y,m,d)$ represents the contrast of the super-resolution four-dimensional phase-shift stripe pixel at coordinates (x, y) for the mth camera at a distance d from the camera array reference plane; $x_d^m$ is the coordinate on the camera array reference plane to which the x coordinate of the pixel of the image acquired by the mth camera projects when the distance between the depth imaging plane and the camera array reference plane is d; $y_d^m$ is the corresponding projected y coordinate.
5. The method for three-dimensional reconstruction of an object based on camera array focusing structured light of claim 4, wherein the contrast root mean square error function is:

$$E(x,y,d)=\sqrt{\frac{1}{M}\sum_{m=1}^{M}\left[V(x,y,m,d)-\bar{V}(x,y,d)\right]^{2}},\qquad \bar{V}(x,y,d)=\frac{1}{M}\sum_{m=1}^{M}V(x,y,m,d)$$

wherein: M represents the number of cameras in the camera array; m denotes the mth camera in the camera array.
6. The method for reconstructing an object based on camera array focusing structured light according to any one of claims 2 to 5, wherein said step 5 specifically comprises:
step 5.1, calculating the wrapping phase of the surface of the object to be measured according to the optimal focusing deformation phase shift stripe, and calculating the additional phase of the surface of the object to be measured according to the optimal focusing deformation additional stripe;
step 5.2, calculating the phase difference and the phase shift period between the wrapping phase and the additional phase, and calculating the fringe order of the surface of the object to be measured through the phase difference and the phase shift period;
step 5.3, calculating the absolute phase of the stripes on the surface of the object to be measured through wrapping phases and stripe orders;
and 5.4, calculating the epipolar correspondence between the camera array and the projector by using their calibration parameters, obtaining the imaging coordinate correspondence between the projector and the camera array from the epipolar correspondence, and solving the three-dimensional point cloud of the surface of the object to be measured by using the epipolar constraint and ray intersection, so as to finish the three-dimensional reconstruction of the surface of the object to be measured.
7. The method for reconstructing an object based on camera array focusing structured light according to any one of claims 2 to 5, wherein said step 2 specifically comprises:
projecting N frames of phase shift stripes and N frames of additional stripes to the surface of an object to be detected by using a projector, setting the imaging aperture sizes and focusing distances of a plurality of cameras in a camera array to be different values, and then collecting N frames of deformation phase shift stripes and N frames of deformation additional stripes modulated by the surface of the object to be detected by a plurality of cameras in the camera array;
the N frames of deformation phase shift stripes are as follows:

$$I_n^m(x,y)=A^m(x,y)+B^m(x,y)\cos\left[\varphi^m(x,y)+\delta_n\right],\qquad n=1,\dots,N$$

wherein: $A^m(x,y)$ represents the background light intensity of the deformed phase-shift stripes at the (x, y) coordinates on the imaging plane of the mth camera; $B^m(x,y)$ represents the modulation signal of the deformed phase-shift stripes at coordinates (x, y) on the imaging plane of the mth camera; $\varphi^m(x,y)$ represents the phase information of the deformed phase-shift stripes modulated by the object at coordinates (x, y) on the imaging plane of the mth camera; M represents the number of cameras in the camera array; N represents the number of deformed phase-shift stripes, an integer with N ≥ 3; n represents the image index of the N frames of deformed phase-shift stripes; $\delta_n$ represents the phase-shift amount of the deformed phase-shift stripes; m denotes the mth camera in the camera array;
the N frames of deformation additional stripes are as follows:

$$I_n'^m(x,y)=A'^m(x,y)+B'^m(x,y)\cos\left[\varphi'^m(x,y)+\delta_n\right],\qquad n=1,\dots,N$$

wherein: $A'^m(x,y)$ represents the background light intensity of the deformed additional stripes at the (x, y) coordinates on the imaging plane of the mth camera; $B'^m(x,y)$ represents the modulation signal of the deformed additional stripes at coordinates (x, y) on the imaging plane of the mth camera; $\varphi'^m(x,y)$ represents the phase information of the deformed additional stripes modulated by the object at coordinates (x, y) on the imaging plane of the mth camera; M represents the number of cameras in the camera array; N represents the number of deformed stripes, an integer with N ≥ 3; n represents the image index; $\delta_n$ represents the phase-shift amount; m denotes the mth camera in the camera array.
8. The method for reconstructing an object based on camera array focusing structured light according to any one of claims 2 to 5, wherein said step 1 specifically comprises:
step 1.1, selecting any camera in a camera array as a reference camera, and selecting other cameras as non-reference cameras, wherein an imaging plane of the reference camera is taken as a camera array reference plane;
step 1.2, establishing a plurality of depth imaging planes which are parallel to the camera array reference plane and are different in distance from the camera array reference plane, and setting feature voxels on each depth imaging plane;
and 1.3, calculating imaging position relations of feature voxels on each depth imaging plane on the imaging plane of the reference camera and the imaging plane of the non-reference camera according to calibration parameters of the camera array, and calculating a homography matrix between the non-reference camera and the reference camera through the imaging position relations.
9. The method for reconstructing an object based on camera array focusing structure light according to any one of claims 2 to 5, wherein in the step 3 the super-resolution four-dimensional phase shift stripes synthesized through the homography matrix are:

$$I_n^{(4D)}(x,y,m,d)=I_n^m\left(x_d^m,\,y_d^m\right)$$

and the super-resolution four-dimensional additional stripes synthesized through the homography matrix are:

$$I_n'^{(4D)}(x,y,m,d)=I_n'^m\left(x_d^m,\,y_d^m\right)$$

wherein:

$$\begin{bmatrix}x\\y\\1\end{bmatrix}\sim H_d^m\begin{bmatrix}x_d^m\\y_d^m\\1\end{bmatrix}$$

$x_d^m$ represents the x coordinate of the image acquired by the mth camera when the depth imaging plane is at distance d from the camera array reference plane; $y_d^m$ represents the corresponding y coordinate; $H_d^m$ represents the homography matrix of the mth camera at distance d from the camera array reference plane; x and y represent the coordinates of $(x_d^m, y_d^m)$ projected onto the camera array reference plane.
CN202410234573.4A, priority and filing date 2024-03-01: Three-dimensional reconstruction method for object based on camera array focusing structure light. Granted as CN117804381B; legal status Active.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410234573.4A CN117804381B (en) 2024-03-01 2024-03-01 Three-dimensional reconstruction method for object based on camera array focusing structure light


Publications (2)

Publication Number Publication Date
CN117804381A true CN117804381A (en) 2024-04-02
CN117804381B CN117804381B (en) 2024-05-10

Family

ID=90423661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410234573.4A Active CN117804381B (en) 2024-03-01 2024-03-01 Three-dimensional reconstruction method for object based on camera array focusing structure light

Country Status (1)

Country Link
CN (1) CN117804381B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002500369A * 1997-12-31 2002-01-08 The Research Foundation of State University of New York Method and apparatus for three-dimensional surface contouring using a digital video projection system
JP2004317495A (en) * 2003-03-31 2004-11-11 Mitsutoyo Corp Method and instrument for measuring noncontactly three-dimensional shape
US20200166333A1 * 2016-12-07 2020-05-28 Xi'an Chishine Optoelectronics Technology Co., Ltd. Hybrid light measurement method for measuring three-dimensional profile
WO2018144828A1 (en) * 2017-02-03 2018-08-09 Massachusetts Institute Of Technology Tunable microlenses and related methods
CN108592824A (en) * 2018-07-16 2018-09-28 清华大学 A kind of frequency conversion fringe projection structural light measurement method based on depth of field feedback
CN110288642A (en) * 2019-05-25 2019-09-27 西南电子技术研究所(中国电子科技集团公司第十研究所) Three-dimension object fast reconstructing method based on camera array
US20240037765A1 (en) * 2019-11-08 2024-02-01 Nanjing University Of Science And Technology High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection
CN111288925A (en) * 2020-01-18 2020-06-16 武汉烽火凯卓科技有限公司 Three-dimensional reconstruction method and device based on digital focusing structure illumination light field
EP4179978A1 (en) * 2021-11-16 2023-05-17 Koninklijke Philips N.V. 3d ultrasound imaging with fov adaptation
CN115307577A (en) * 2022-08-09 2022-11-08 中北大学 Target three-dimensional information measuring method and system
CN115468513A (en) * 2022-09-01 2022-12-13 南京信息工程大学 Rapid projection strategy method, device and storage medium for three-dimensional measurement
CN117450955A (en) * 2023-12-21 2024-01-26 成都信息工程大学 Three-dimensional measurement method for thin object based on space annular feature

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HONG-LAN XIE et al.: "Methodology development and application of X-ray imaging beamline at SSRF", Nuclear Science and Techniques, no. 10, 15 October 2020 (2020-10-15), pages 76-96 *
LI XUEXING: "Research on techniques for three-dimensional information acquisition of dynamic scenes based on grating projection phase measurement", China Doctoral Dissertations Full-text Database, Information Science and Technology Series, no. 03, 15 March 2019 (2019-03-15), pages 138-25 *
LI HONGMEI et al.: "Single-frame three-dimensional measurement method based on gray-scale extended composite grating", Infrared and Laser Engineering, vol. 49, no. 06, 25 June 2020 (2020-06-25), pages 92-99 *
DOU YUNFU et al.: "An improved three-dimensional measurement method based on fringe contrast", Opto-Electronic Engineering, vol. 38, no. 08, 15 August 2011 (2011-08-15), pages 84-89 *
XIAO CHAO et al.: "Multi-projector display blending method based on fringe modulation", Acta Optica Sinica, vol. 36, no. 04, 10 April 2016 (2016-04-10), pages 182-189 *

Also Published As

Publication number Publication date
CN117804381B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN110514143B (en) Stripe projection system calibration method based on reflector
CN109506589B (en) Three-dimensional profile measuring method based on structural light field imaging
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN110296667B (en) High-reflection surface three-dimensional measurement method based on line structured light multi-angle projection
CN111288925B (en) Three-dimensional reconstruction method and device based on digital focusing structure illumination light field
CN109727290B (en) Zoom camera dynamic calibration method based on monocular vision triangulation distance measurement method
CN113237435B (en) High-light-reflection surface three-dimensional vision measurement system and method
CN112465912B (en) Stereo camera calibration method and device
CN112525107B (en) Structured light three-dimensional measurement method based on event camera
CN113205592B (en) Light field three-dimensional reconstruction method and system based on phase similarity
CN107990846B (en) Active and passive combination depth information acquisition method based on single-frame structured light
CN111189416B (en) Structural light 360-degree three-dimensional surface shape measuring method based on characteristic phase constraint
CN109307483A (en) A kind of phase developing method based on structured-light system geometrical constraint
CN113506348B (en) Gray code-assisted three-dimensional coordinate calculation method
Dekiff et al. Three-dimensional data acquisition by digital correlation of projected speckle patterns
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
CN117450955B (en) Three-dimensional measurement method for thin object based on space annular feature
CN116295113A (en) Polarization three-dimensional imaging method integrating fringe projection
Zhou et al. 3D shape measurement based on structured light field imaging
CN114993207B (en) Three-dimensional reconstruction method based on binocular measurement system
CN117804381B (en) Three-dimensional reconstruction method for object based on camera array focusing structure light
CN116433841A (en) Real-time model reconstruction method based on global optimization
CN113160393B (en) High-precision three-dimensional reconstruction method and device based on large depth of field and related components thereof
Hongsheng et al. Three-dimensional reconstruction of complex spatial surface based on line structured light

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant