CN118067035A - 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation - Google Patents

4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation

Info

Publication number
CN118067035A
CN118067035A (application CN202410060600.0A)
Authority
CN
China
Prior art keywords
dimensional
imaging
spectrum
aperture coding
spectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410060600.0A
Other languages
Chinese (zh)
Inventor
Han Jing
Cai Shuqi
Yu Haotian
Zheng Dongliang
Lü Nenqing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202410060600.0A
Publication of CN118067035A

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a 4D multi-frame spectral three-dimensional imaging method based on aperture coding and digital image correlation. A 4D imaging system tightly couples two different mechanisms, spectral imaging and three-dimensional imaging, for cooperative imaging. The mask plate of the aperture-coded spectral imaging module is removed, and an aperture coding pattern actively illuminates the target object instead, so that an image modulated in spatial light intensity is obtained and an effect similar to that of a mask plate is achieved. The invention combines active coded aperture snapshot spectral imaging with digital image correlation (DIC), creatively merging the digital speckle images used for 3D imaging with the random codes required for spectral imaging, so that spectral reconstruction and three-dimensional reconstruction can proceed synchronously and the system speed is greatly improved. Because spectral imaging is realized with an active CASSI scheme that contains neither a mechanical motion structure nor a mask plate, the device is structurally stable and samples rapidly.

Description

4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation
Technical Field
The invention relates to a 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation, and belongs to the technical field of optical measurement.
Background
The spectrum of a point in a scene is the distribution of its electromagnetic radiation over a range of wavelengths. Conventional digital imaging devices measure the spectrum with three-channel red, green, blue (RGB) sensors, which are designed to match the tristimulus color measurement of the human visual system. However, owing to the diversity and complexity of illumination and reflectance spectra in the real world, a three-channel representation cannot capture the fine details of natural scene spectra. Because the properties of many materials and objects can be inferred from detailed spectra, acquisition systems for accurate spectral measurement are an effective tool for scientific research and engineering applications.
With the increasing maturity of undersampled spectral imaging systems and the greatly improved computing power of back-end hardware, available spectral data are expected to possess higher dimensionality and accuracy. With recent advances in optical metrology for three-dimensional (3D) shape measurement, hyperspectral imaging systems are no longer limited to spectral analysis over a two-dimensional (2D) image plane. Triangulation-based optical methods for three-dimensional shape measurement have repeatedly been combined with spectroscopic measurement to obtain four-dimensional (4D) data of a target. The three-dimensional imaging solutions used in these works can be divided into passive and active ones. Passive methods include stereo vision and structure from motion (SfM), among others; active methods include structured light (SL) and laser triangulation.
Implementation of the above schemes relies on combining a spectral imaging module with a three-dimensional imaging module, so the system structure becomes correspondingly complex, which leads to drawbacks such as low coupling between modules, difficult registration, and high manufacturing cost. Furthermore, most spectral imaging modules in the above systems contain mechanically moving structures that impair system stability and limit the sampling rate. Filter-based spectral imaging modules, such as acousto-optic tunable filters (AOTF) or Fabry-Pérot interference (FPI) filters, have also been incorporated; these simplify the system architecture but come at a high price.
Therefore, a new 4D multi-frame spectral three-dimensional imaging method is needed to solve the above-mentioned problems.
Disclosure of Invention
The invention aims to provide a 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation, so as to solve the problems in the background art.
A 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation adopts a 4D imaging system, wherein the 4D imaging system comprises a projector, an aperture coding spectrum imaging module and a second vision module,
The projector, the aperture coding spectrum imaging module and the second vision module are opposite to the target object;
The aperture coding spectrum imaging module comprises a collimating objective lens L1, an Amici dispersion prism L2, a first focusing lens L3 and a first camera, wherein the collimating objective lens L1, the Amici dispersion prism L2, the first focusing lens L3 and the first camera are sequentially arranged in front of the target object;
With the Amici dispersion prism L2 removed from the aperture coding spectrum imaging module, the aperture coding spectrum imaging module becomes a first vision module;
The second vision module comprises a second focusing lens L3 and a second camera, and the second focusing lens L3 and the second camera are sequentially arranged in front of the target object;
the first vision module and the second vision module form a binocular stereoscopic vision module;
The method comprises the following steps:
Step 1: the projector projects an aperture coding pattern and a digital speckle pattern onto the surface of a target object, the aperture coding spectrum imaging module is used for acquiring a random coding image, and the binocular stereoscopic vision module is used for acquiring a digital speckle image;
Step 2: carrying out spectral reconstruction on the random coded image acquired in step 1 by using the TwIST reconstruction algorithm to obtain a spectral reconstruction result; carrying out binocular vision three-dimensional measurement on the digital speckle image acquired in step 1 through the DIC method to obtain a three-dimensional point cloud;
step 3: and (3) fusing the spectrum reconstruction result in the step (2) with the three-dimensional point cloud to obtain the 4D multi-frame spectrum three-dimensional imaging.
Further, the measured value c at the detector of the first camera of step 1 is represented by the following formula:
c=Hf
where H is a linear operator and f is the spectral source density.
Further, the light projected onto the object by the projector is white.
Further, in the first step, the projector sequentially projects the same aperture coding pattern to the target object by using red light and blue light.
Further, in the second step, the offset distance Θ of the random coded image is obtained by using the following formula:
Θ = Q1(λ⁻²)² + Q2(λ⁻²) + Q3

where Q1, Q2 and Q3 are weight coefficients of the wavelength position distribution in the coordinates of the first camera.
Further, the distortion status P of the randomly encoded image in the second step is represented by the following formula:
P = Norm(histeq(Pb1/Pb2) + histeq(Pr1/Pr2))

where histeq(·) is the histogram equalization function, Norm(·) is the normalization function, Pb1 and Pr1 capture the actual shape of the random coding pattern, and Pb2 and Pr2 capture its negative template.
Further, in step 3, fusing the spectral reconstruction result of step 2 with the three-dimensional point cloud comprises the following steps:
Step 31, taking the peak images of the three red, green and blue peaks in the spectral reconstruction result of step 2 as RGB channels and performing fused color matching to obtain a two-dimensional color image;
Step 32, matching the three-dimensional space points of the three-dimensional point cloud of step 2 with the point coordinates of the two-dimensional image to obtain three-dimensional space point coordinates;
Step 33, matching the coordinates of the two-dimensional color image of step 31 to the three-dimensional space point coordinates to obtain the 4D multi-frame spectral three-dimensional image, as sketched below.
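As an illustration of steps 31 to 33, the following minimal Python sketch fuses a reconstructed spectral cube with the DIC point cloud; the function name, array shapes and nominal red/green/blue peak wavelengths are assumptions made for the example, not values fixed by the method:

```python
import numpy as np

def fuse_spectrum_with_cloud(cube, wavelengths, cloud_xyz,
                             rgb_peaks=(610.0, 540.0, 460.0)):
    """Steps 31-33: render RGB from spectral peaks and attach it to the cloud.

    cube        : (H, W, K) spectral reconstruction result from step 2
    wavelengths : (K,) band centers in nm
    cloud_xyz   : (H, W, 3) per-pixel 3D coordinates from the DIC measurement
    rgb_peaks   : assumed red/green/blue peak wavelengths in nm
    """
    # Step 31: take the bands nearest the three peaks as RGB channels.
    bands = [int(np.argmin(np.abs(wavelengths - p))) for p in rgb_peaks]
    rgb = cube[:, :, bands].astype(np.float64)
    rgb /= rgb.max() + 1e-12
    # Steps 32-33: pixel (i, j) indexes both the color image and the cloud,
    # so concatenating per pixel yields the 4D (x, y, z, R, G, B) result.
    return np.concatenate([cloud_xyz.reshape(-1, 3),
                           rgb.reshape(-1, 3)], axis=1)
```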
Furthermore, in the second step, binocular vision three-dimensional measurement is performed by a DIC method, and a reference subset and a target subset in the DIC method are matched by using a ZNCC coefficient:
C_ZNCC(P) = 1 − 0.5 × C_ZNSSD(P)

where f(x, y) is the gray-scale intensity at the (x, y) coordinate position of the original picture, g(x′, y′) is the gray-scale intensity at the (x′, y′) coordinate position of the deformed picture, f̄ is the average gray intensity of the original picture, ḡ is the average gray intensity of the target picture, P is the parameter vector of the displacement mapping function, and N is the total number of valid object pixels in the subset.
Further, the aperture coding pattern adopts multi-frame coding. Multi-frame coding effectively improves the quality of spectral recovery and, in turn, the quality of the three-dimensional reconstruction.
The beneficial effects are that: the 4D multi-frame spectral three-dimensional imaging method based on aperture coding and digital image correlation combines aperture coding with digital image correlation, creatively merging the digital speckle images used for 3D imaging with the random coded images required for spectral imaging, so that three-dimensional reconstruction and spectral reconstruction can proceed synchronously and the system speed is greatly improved. Spectral imaging is realized by an aperture-coded spectral imaging module containing neither a mechanical motion structure nor a mask plate, so the device is structurally stable, samples rapidly, is simple in structure, and is easy to maintain.
Drawings
FIG. 1 is a schematic diagram of a 4D spectral imaging system;
FIG. 2 is a schematic diagram of a conventional digital image correlation three-dimensional measurement imaging system;
FIG. 3 is a diagram of the three-dimensional reconstruction result of a standard sphere;
FIG. 4 is a graph comparing the results of FPP and DIC three-dimensional reconstruction;
FIG. 5 is a diagram of a verification of system spectral measurements using a standard color doll;
FIG. 6 is a graph of the 4D map measurement of the color doll;
FIG. 7 is a graph of the 4D map measurement of a mango.
Detailed Description
The present application is further illustrated by the accompanying drawings and the following detailed description, which are to be understood as merely illustrative of the application and not limiting of its scope. Upon reading the application, various equivalent modifications made by those skilled in the art will fall within the scope defined by the appended claims.
A. 4D spectrum imaging system
As shown in FIG. 1, a projector (DLP6500) projects a series of templates onto the surface of the observation target, and the spectral imaging module captures a dispersed image of the target after illumination modulation by the projector. L1 is a collimating objective lens; after collimation by L1, the light reaches the Amici dispersion prism L2. The light is dispersed over the spectral range of 420-660 nm by the Amici dispersion prism L2 and then focused by the focusing lens L3 onto a camera (Hikvision CE120-10UM). The required patterns are projected by the projector (DLP6500), and the resulting images are acquired by two cameras (Hikvision CE120-10UM) for the three-dimensional reconstruction and spectral reconstruction models.
The collimating objective lens L1, the Amici prism L2, the focusing lens L3 and the camera form a mask-free single-disperser coded aperture snapshot spectral imaging (SD-CASSI) system. The structured light modulates the spatial information of all wavelengths in the detection target with a coded pattern, which produces multiple coded images of the scene at wavelength-dependent locations on the detector array plane. The spatial intensity pattern on this plane contains a coded mixture of the spatial and spectral information of the scene. The cameras on the left and right sides acquire images simultaneously, and the acquired digital speckle patterns are used for three-dimensional reconstruction; the whole pipeline is fast and runs synchronously, realizing high-precision, high-speed 4D imaging.
B. Mathematical model of system operation
The system is mainly divided into two modules: an aperture coding spectrum imaging module and a binocular stereo vision module.
The spectral intensity entering the instrument under uniform illumination (without structured light encoding) can be expressed as f0(x, y; λ). Let the transfer function of the coded pattern be T(x, y); then the spectral intensity entering the aperture is:
f1(x, y; λ) = f0(x, y; λ) T(x, y) (1)
After propagation through the dispersive element, the spectral density of the detector plane is:
f2(x, y; λ) = ∫∫ δ(x′ − [x + Θ(λ − λc)]) δ(y′ − y) f1(x′, y′; λ) dx′ dy′
= f0(x + Θ(λ − λc), y; λ) T(x + Θ(λ − λc), y) (2)
Unlike the previous analysis, this embodiment uses an Amici prism, so the dispersion shifts in different wavelength bands are not approximated as linear. Here Θ represents the dispersion offset of the dispersive element; under near-field conditions, the offset position Θ of the image at each wavelength is linearly and positively correlated with the deflection angle θ. The δ(·) function describes propagation through unit-magnification imaging optics and a dispersive element with linear dispersion α and center wavelength λc. The detector array is wavelength-insensitive and measures the intensity of the incident light rather than the spectral density. Thus, the continuous image on the detector array can be represented as:
c(x, y) = ∫ f0(x + Θ(λ − λc), y; λ) T(x + Θ(λ − λc), y) dλ (3)
Notably, this image is the sum over the wavelength dimension of the mask-modulated and sheared data cube. The detector pixelates the optical density with pixel size Δ, where each pixel location (m, n) in image c corresponds to a physical location (x, y). The captured image c(m, n) can then be described as:

c(m, n) = ∫∫ c(x, y) rect(x/Δ − m, y/Δ − n) dx dy (4)

Letting each element of the coded aperture be the same size as a detector pixel Δ, the mask function T(x, y) can be expressed as a discrete Boolean function of a two-dimensional square pinhole array T(m′, n′):

T(x, y) = Σ_{m′,n′} T(m′, n′) rect(x/Δ − m′, y/Δ − n′) (5)

Thus, it is possible to obtain:

c(m, n) = ∫∫∫ f0(x + Θ(λ − λc), y; λ) T(x + Θ(λ − λc), y) rect(x/Δ − m, y/Δ − n) dx dy dλ (6)

The discrete form of the spectral source density f0(x, y; λ) is denoted f_ijk and the aperture coding function T(x, y) is denoted T_ij, so the detector measurements can be written in matrix form:

c_mn = Σ_k T_(m+d_k),n f_(m+d_k),n,k (7)

where d_k is the discretized dispersion offset of the k-th spectral band. The above equation can also be described as a matrix-vector equation:
c = Hf (8)
where H is a linear operator representing the forward model of the system. The matrix H is established, according to the dispersion coefficient, by registering the mask pattern at the target wavelengths. H projects the voxels of the three-dimensionally sampled and sheared information f onto the pixels of the detector array c. By minimizing ‖c − Hf‖₂², f is estimated as the hyperspectral image of the object. The invention employs TwIST as the reconstruction algorithm.
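As a minimal sketch (not the patent's implementation), the discrete forward model of equations (6)-(8) can be written in a few lines of NumPy, assuming integer per-band offsets d_k; a TwIST-style solver would then recover f by minimizing ‖c − Hf‖₂² with a sparsity or total-variation prior:

```python
import numpy as np

def cassi_forward(f, mask, offsets):
    """SD-CASSI forward model c = Hf for a discrete data cube.

    f       : (H, W, K) spectral data cube f_ijk
    mask    : (H, W) binary coded-aperture pattern T_ij
    offsets : (K,) non-negative integer dispersion offsets d_k in pixels
    """
    H_, W, K = f.shape
    c = np.zeros((H_, W + max(offsets)))
    for k in range(K):
        coded = f[:, :, k] * mask                  # aperture-code modulation
        c[:, offsets[k]:offsets[k] + W] += coded   # wavelength-dependent shear
    return c                                       # detector measurement c
```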
Figure 2 illustrates a typical DIC three-dimensional measurement imaging system. The difference from FIG. 1 is that the collimating objective and the dispersive element required for spectral imaging are removed. During operation, the projector projects a structured pattern (typically a digital speckle pattern) onto the surface of the object; the two cameras record images of the digital speckle pattern, and the correspondence between points is obtained through stereo matching; the three-dimensional coordinates are then calculated from the camera calibration parameters.
To track the same point in I_left and I_right, a reference subset centered on the point of interest is extracted from I_left. Then, in each iteration, a target subset extracted from I_right, centered on an initial guess, is transformed and compared with the reference subset. Once the transformed target subset that best matches the reference subset is found, the corresponding point between I_left and I_right is determined. The criterion used to determine the best match between the reference subset and the target subset is the zero-mean normalized sum of squared differences (ZNSSD) criterion, which is insensitive to potential scale and offset variations in the subset intensities. The ZNSSD coefficient can be expressed as:

C_ZNSSD(P) = Σ_{i=1}^{N} [ (f(x_i, y_i) − f̄) / sqrt(Σ_{i=1}^{N} (f(x_i, y_i) − f̄)²) − (g(x_i′, y_i′) − ḡ) / sqrt(Σ_{i=1}^{N} (g(x_i′, y_i′) − ḡ)²) ]² (9)
where f(x, y) is the gray-scale intensity at the (x, y) coordinate position of the original picture and g(x′, y′) is the gray-scale intensity at the (x′, y′) coordinate position of the deformed picture; f̄ is the average gray intensity of the original picture and ḡ is the average gray intensity of the target picture; P is the parameter vector of the displacement mapping function; N is the total number of valid object pixels in the subset.
The relationship between the ZNSSD coefficient and the zero-mean normalized cross-correlation (ZNCC) coefficient is:

C_ZNCC(P) = 1 − 0.5 × C_ZNSSD(P) (10)
Since the ZNCC coefficient ranges from −1 to 1, with larger values indicating higher similarity between the target subset and the reference subset, it is more convenient for assessing similarity.
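The two criteria translate directly into code; the following minimal sketch (function names assumed) evaluates a candidate subset pair:

```python
import numpy as np

def znssd(ref, tgt):
    """Zero-mean normalized sum of squared differences (eq. 9)."""
    fz, gz = ref - ref.mean(), tgt - tgt.mean()
    fn = fz / np.sqrt((fz ** 2).sum())   # insensitive to scale/offset changes
    gn = gz / np.sqrt((gz ** 2).sum())
    return ((fn - gn) ** 2).sum()

def zncc(ref, tgt):
    """ZNCC similarity (eq. 10): ranges over [-1, 1], larger is better."""
    return 1.0 - 0.5 * znssd(ref, tgt)
```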
In the stereoscopic vision method, when a point in three-dimensional space is captured by two cameras simultaneously, its three-dimensional data are obtained from the two-dimensional coordinates of the point in the two camera coordinate systems, combined with the device calibration parameters. Through stereo matching, a point (X, Y, Z) on the object under measurement maps to (u_l, v_l) and (u_r, v_r) in the two-dimensional pixel coordinate systems of the left and right cameras. Having found the corresponding points in the two-dimensional coordinates of the two imaging devices, the previously calibrated device parameters can be used to obtain:

z_l [u_l, v_l, 1]^T = M1 [X, Y, Z, 1]^T (11)
z_r [u_r, v_r, 1]^T = M2 [X, Y, Z, 1]^T (12)

where M1 is the product of the intrinsic and extrinsic matrices of the left camera, M2 is the product of the intrinsic and extrinsic matrices of the right camera, and z_l, z_r are projective depth factors. Four linear equations in the three-dimensional coordinates (X, Y, Z) can be derived by combining the two equations:

(u_l M1^(3) − M1^(1)) [X, Y, Z, 1]^T = 0, (v_l M1^(3) − M1^(2)) [X, Y, Z, 1]^T = 0,
(u_r M2^(3) − M2^(1)) [X, Y, Z, 1]^T = 0, (v_r M2^(3) − M2^(2)) [X, Y, Z, 1]^T = 0 (13)

where Mi^(j) denotes the j-th row of Mi.
This system comprises four equations in three unknowns, so it is overdetermined and can be solved; the optimal solution is obtained by the least-squares method.
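A minimal sketch of this least-squares triangulation, assuming M1 and M2 are the calibrated 3×4 projection matrices and the function name is illustrative:

```python
import numpy as np

def triangulate(M1, M2, uv_l, uv_r):
    """Recover (X, Y, Z) from matched pixels in the left and right cameras."""
    rows = []
    for M, (u, v) in ((np.asarray(M1), uv_l), (np.asarray(M2), uv_r)):
        rows.append(u * M[2] - M[0])   # u * (row 3) - (row 1) = 0
        rows.append(v * M[2] - M[1])   # v * (row 3) - (row 2) = 0
    A = np.array(rows)                 # the four homogeneous linear equations
    xyz, *_ = np.linalg.lstsq(A[:, :3], -A[:, 3], rcond=None)
    return xyz                         # least-squares optimal (X, Y, Z)
```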
C. Coding mode extraction and geometric accuracy verification
Unlike imaging spectrometers that use spatial light modulators such as DMDs, the aperture coding pattern required by active CASSI is distorted by the shape of the object surface rather than remaining a regular pattern. After dispersion by the Amici prism, spatial aliasing occurs, so the specific shape of the aperture coding pattern cannot be obtained directly, as shown in FIG. 3. The system solves this problem by exploiting the composition of the projector's white light: narrow-bandwidth red and blue light and wider-bandwidth green light. The projector projects the same aperture coding pattern with red light and then with blue light. Because the dispersion length of the Amici prism is short, white light with a bandwidth of more than 200 nm occupies only about 20 pixels after dispersion; for the red and blue light, whose bandwidth is only about 20 nm, the dispersion length is only 2-3 pixels. After reducing the camera exposure time, the dispersion length can be reduced to within one pixel, yielding the actual shape of the aperture coding pattern, i.e., Pb1 and Pr1. The system also captures a negative template with the "0"s and "1"s inverted, i.e., Pb2 and Pr2. Dividing the positive template by the negative template effectively reduces uneven gray levels caused by different colors or shadows. With this method, the distorted state P of the coding template can be extracted, and the start and end positions of the dispersion can be obtained accurately, which facilitates selection of the ROI. P is acquired as follows:
P = Norm(histeq(Pb1/Pb2) + histeq(Pr1/Pr2)) (14)
where histeq(·) is the histogram equalization function and Norm(·) is the normalization function.
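A possible rendering of this extraction step with OpenCV, assuming the four captures are single-channel arrays and mapping histeq(·) to cv2.equalizeHist and Norm(·) to min-max normalization:

```python
import cv2
import numpy as np

def extract_pattern_state(pb1, pb2, pr1, pr2):
    """Recover the distorted coding pattern P per equation (14)."""
    def ratio_histeq(pos, neg):
        # Positive/negative division suppresses color- and shadow-induced
        # gray-level nonuniformity before equalization.
        r = pos.astype(np.float64) / (neg.astype(np.float64) + 1e-6)
        r8 = cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        return cv2.equalizeHist(r8).astype(np.float64)
    p = ratio_histeq(pb1, pb2) + ratio_histeq(pr1, pr2)
    return cv2.normalize(p, None, 0.0, 1.0, cv2.NORM_MINMAX)
```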
In terms of three-dimensional accuracy, a standard ceramic sphere of 50.798 mm diameter was used in the experiments to verify the accuracy of the system. The final three-dimensional reconstruction error (RMSE) of the digital speckle method was 0.0912 mm. To verify accuracy and effect, fringe projection profilometry (FPP), a structured-light three-dimensional reconstruction method with relatively high accuracy, was used as a reference for comparison. The final three-dimensional reconstruction error (RMSE) of FPP was 0.0857 mm. In summary, of the two methods, FPP is slightly more accurate than digital speckle.
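The sphere check can be reproduced with a short script; the sketch below (helper name assumed) fits the sphere center by linear least squares and reports the RMSE of the point-to-surface distances against the known 50.798 mm diameter:

```python
import numpy as np

def sphere_rmse(points, diameter=50.798):
    """RMSE of a reconstructed point cloud against a standard sphere.

    points : (N, 3) reconstructed 3D points on the sphere surface, in mm
    """
    # Linear least-squares center fit: |p|^2 = 2 p.c + (r^2 - |c|^2)
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    d = np.linalg.norm(points - center, axis=1) - diameter / 2.0
    return np.sqrt((d ** 2).mean())
```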
In this regard, a white plastic doll was reconstructed three-dimensionally with FPP and with the digital speckle method, and the results were compared in detail using the same number of point cloud points. As can be seen from the details in parts a and b of FIG. 4, the detail recovery of FPP is relatively better than that of digital speckle, but FPP uses a 12-step, 8-Gray-code projection sequence, which is much slower than the single-frame projection of digital speckle. Overall, digital speckle is slightly lower in accuracy than FPP but has a clear advantage in speed.
D. Spectral calibration
For a single independent wedge prism, its deflection angle θ can be expressed simply as:
θ = (n − 1)α (16)
This shows that when light is incident perpendicularly or nearly perpendicularly, the deflection angle depends only on the vertex angle α and the refractive index n, and θ is positively correlated with n.
Normal dispersion is described by the empirical formula derived by Augustin-Louis Cauchy (1789-1857) in 1836:

n(λ) = P1 + P2/λ² + P3/λ⁴ (17)

where the three coefficients P1, P2 and P3 are called the Cauchy constants.
Combining the above equations, the relationship between the deflection angle θ and the wavelength λ can be expressed as:

θ(λ) = P1′ + P2′λ⁻² + P3′λ⁻⁴ (18)

where P1′ = (P1 − 1)α, P2′ = αP2 and P3′ = αP3. The Amici prism used in this embodiment can be regarded as a cemented assembly of three wedge prisms, so its deflection angle can still be expressed in the form of the above formula. Under near-field conditions, the offset distance Θ of the image is linearly related to the deflection angle θ. Thus:
Θ=Q1-2)2+Q2-2)+Q3 (19)
where Q1, Q2 and Q3 are weight coefficients of the wavelength position distribution in camera coordinates.
For a CMOS array with equally spaced pixels, the number of wavelength-shifted pixels n_p corresponds to the discretized offset distance Θ.
A color doll, shown in FIG. 5, was used to calibrate the wavelength and check the quality of the spectral recovery. Part b of FIG. 5 shows the fitted curve for the different wavelengths and the positions at which they were measured. In the wavelength calibration of this experiment, spectral images of the color doll from 420 to 660 nm were collected sequentially using narrow-band filters (Thorlabs, 10 nm bandwidth), and the position n_p of the image in each band was recorded. Part c of FIG. 5 shows the spectral curves of several color regions of the doll, where the standard curves are taken from the data cube captured with the filter set. After curve fitting and downsampling, the average SAM of each color region of the standard color plate reached 4.2.
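Fitting equation (19) to the recorded band positions is an ordinary quadratic polynomial fit in λ⁻²; a minimal sketch, assuming the recorded positions n_p serve directly as the offsets Θ:

```python
import numpy as np

def fit_dispersion(wavelengths_nm, n_p):
    """Fit Theta = Q1*(lambda^-2)^2 + Q2*(lambda^-2) + Q3 (eq. 19).

    wavelengths_nm : band centers from the 10 nm filter scan (420-660 nm)
    n_p            : recorded pixel offset of the image in each band
    """
    x = 1.0 / np.asarray(wavelengths_nm, dtype=np.float64) ** 2
    Q1, Q2, Q3 = np.polyfit(x, np.asarray(n_p, dtype=np.float64), 2)
    predict = lambda lam: np.polyval([Q1, Q2, Q3], 1.0 / lam ** 2)
    return (Q1, Q2, Q3), predict
```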
E. System actual target observation result
The system used in the experiment is the mask-free single-disperser coded aperture snapshot spectral imaging (SD-CASSI) system described above, composed of a collimating objective lens, a focusing lens, an Amici prism, a DLP6500 projector with a resolution of 1920×1080, and two Hikvision CE120-10UM cameras with a resolution of 4000×3036.
In the experiments, 4D imaging was first performed on a color doll. The doll, resembling a Kitty cat, has plasticine adhered to part of its upper side, as shown in part a of FIG. 6. The color rendering of the spectral images follows the CIE 1931 standard.
The spectral information of the different regions is clearly revealed by the imaging system provided by the invention, as shown in part b of FIG. 6.
In the experiments a mango was then selected as the subject. As shown in part a of FIG. 7, glue was applied to some areas to simulate damaged regions, the aim being to distinguish the damaged regions by spectroscopy. Part b of FIG. 7 shows the 4D imaging result of the mango, with a single-band image used to distinguish the glued and glue-free areas. In FIG. 7, comparing the spectral curve of the glued area with that of the glue-free area, it is obvious that the two differ across the different wavebands.
In summary, the invention provides a 4D multi-frame spectral three-dimensional imaging system based on aperture coding and digital image correlation, realizing four-dimensional spatial-spectral imaging. The system combines CASSI and DIC, adopts an active projection mode that projects the patterns required by both modalities, and performs three-dimensional reconstruction and spectral reconstruction synchronously. The spectral characteristics of the white LED projector light source are used to determine the dispersion length, and the dispersed "mask" distorted by the modulation of the target surface topography is solved by correlation calculation.
The above detailed description is merely an illustration of preferred embodiments of the invention and is not intended to limit its scope. Various modifications, substitutions and improvements of the technical scheme of the invention made by those skilled in the art from the description and drawings, without departing from its spirit and scope, fall within the scope of the invention as defined by the appended claims.

Claims (9)

1. A 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation is characterized in that a 4D imaging system is adopted, the 4D imaging system comprises a projector, an aperture coding spectrum imaging module and a second vision module,
The projector, the aperture coding spectrum imaging module and the second vision module are opposite to the target object;
The aperture coding spectrum imaging module comprises a collimating objective lens L1, an Amici dispersion prism L2, a first focusing lens L3 and a first camera, wherein the collimating objective lens L1, the Amici dispersion prism L2, the first focusing lens L3 and the first camera are sequentially arranged in front of the target object;
With the Amici dispersion prism L2 removed from the aperture coding spectrum imaging module, the aperture coding spectrum imaging module becomes a first vision module;
The second vision module comprises a second focusing lens L3 and a second camera, and the second focusing lens L3 and the second camera are sequentially arranged in front of the target object;
the first vision module and the second vision module form a binocular stereoscopic vision module;
The method comprises the following steps:
Step 1: the projector projects an aperture coding pattern and a digital speckle pattern onto the surface of a target object, the aperture coding spectrum imaging module is used for acquiring a random coding image, and the binocular stereoscopic vision module is used for acquiring a digital speckle image;
Step 2: carrying out spectral reconstruction on the random coded image acquired in step 1 by using the TwIST reconstruction algorithm to obtain a spectral reconstruction result; carrying out binocular vision three-dimensional measurement on the digital speckle image acquired in step 1 through the DIC method to obtain a three-dimensional point cloud;
step 3: and (3) fusing the spectrum reconstruction result in the step (2) with the three-dimensional point cloud to obtain the 4D multi-frame spectrum three-dimensional imaging.
2. The method of 4D multi-frame spectral three-dimensional imaging based on aperture coding and digital image correlation as claimed in claim 1, wherein the measured value c at the detector of the first camera of step 1 is represented by the following formula:
c=Hf
where H is a linear operator and f is the spectral source density.
3. The method of 4D multi-frame spectral three-dimensional imaging based on aperture coding and digital image correlation of claim 1, wherein the light projected by the projector onto the target object is white.
4. The method of 4D multi-frame spectral three-dimensional imaging based on aperture coding and digital image correlation of claim 1, wherein in step one the projector projects the same aperture coding pattern onto the target object with red light and blue light sequentially.
5. The method for 4D multi-frame spectral three-dimensional imaging based on aperture coding and digital image correlation as claimed in claim 1, wherein in the second step, the offset distance Θ of the randomly coded image is obtained by using the following formula:
Θ=Q1-2)2+Q2-2)+Q3
Where Q 1,Q2 and Q 3 are weight coefficients of wavelength position distribution at camera coordinates of the first video camera.
6. The method for 4D multi-frame spectral three-dimensional imaging based on aperture coding and digital image correlation as claimed in claim 1, wherein the distortion state P of the randomly coded image in the second step is represented by the following formula:
P = Norm(histeq(Pb1/Pb2) + histeq(Pr1/Pr2))

where histeq(·) is the histogram equalization function, Norm(·) is the normalization function, Pb1 and Pr1 capture the actual shape of the random coding pattern, and Pb2 and Pr2 capture its negative template.
7. The method for 4D multi-frame spectral three-dimensional imaging based on aperture coding and digital image correlation as claimed in claim 1, wherein the step 3 of fusing the spectral reconstruction result of the step 2 with a three-dimensional point cloud comprises the steps of:
Step 31, taking the peak images of the three red, green and blue peaks in the spectral reconstruction result of step 2 as RGB channels and performing fused color matching to obtain a two-dimensional color image;
Step 32, matching the three-dimensional space points of the three-dimensional point cloud of step 2 with the point coordinates of the two-dimensional image to obtain three-dimensional space point coordinates;
Step 33, matching the coordinates of the two-dimensional color image of step 31 to the three-dimensional space point coordinates to obtain the 4D multi-frame spectral three-dimensional image.
8. The 4D multi-frame spectral three-dimensional imaging method based on aperture coding and digital image correlation of claim 1, wherein in step two, binocular vision three-dimensional measurement is performed by DIC method, and the reference subset and the target subset in the DIC method are matched by ZNCC coefficients:
C_ZNCC(P) = 1 − 0.5 × C_ZNSSD(P)

where f(x, y) is the gray-scale intensity at the (x, y) coordinate position of the original picture, g(x′, y′) is the gray-scale intensity at the (x′, y′) coordinate position of the deformed picture, f̄ is the average gray intensity of the original picture, ḡ is the average gray intensity of the target picture, P is the parameter vector of the displacement mapping function, and N is the total number of valid object pixels in the subset.
9. The method for 4D multi-frame spectral three-dimensional imaging based on aperture coding and digital image correlation as recited in claim 1, wherein the aperture coding pattern uses multi-frame coding.
CN202410060600.0A 2024-01-15 2024-01-15 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation Pending CN118067035A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410060600.0A CN118067035A (en) 2024-01-15 2024-01-15 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410060600.0A CN118067035A (en) 2024-01-15 2024-01-15 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation

Publications (1)

Publication Number Publication Date
CN118067035A 2024-05-24

Family

ID=91106627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410060600.0A Pending CN118067035A (en) 2024-01-15 2024-01-15 4D multi-frame spectrum three-dimensional imaging method based on aperture coding and digital image correlation

Country Status (1)

Country Link
CN (1) CN118067035A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination