CN108592824B - Variable-frequency fringe projection structured light measuring method based on depth of field feedback - Google Patents

Variable-frequency fringe projection structured light measuring method based on depth of field feedback

Info

Publication number
CN108592824B
CN108592824B (application CN201810777820.XA)
Authority
CN
China
Prior art keywords
camera
projector
pixel
kernel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810777820.XA
Other languages
Chinese (zh)
Other versions
CN108592824A (en)
Inventor
徐静
陈恳
饶刚
吴丹
王国磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201810777820.XA
Publication of CN108592824A
Application granted
Publication of CN108592824B
Legal status: Active


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical means
    • G01B11/24Measuring arrangements characterised by the use of optical means for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical means for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254Projection of a pattern, viewing through a pattern, e.g. moiré

Abstract

The invention provides a variable-frequency fringe projection structured light measurement method based on depth-of-field feedback, belonging to the field of non-contact optical three-dimensional measurement. First, the relationship between the defocus kernel and the depth of field is calibrated for both the camera and the projector. A workpiece with varying depth of field is then measured once with a uniform-frequency sinusoidal fringe image sequence to obtain the depth of field of the camera and of the projector, and the synthetic defocus kernel size over the workpiece surface is calculated from the calibrated kernel-versus-depth relationships. Based on the defocus kernel size within the common field of view of the projector and the camera, N horizontal and N vertical variable-frequency sinusoidal phase-shift fringe image sequences are optimally designed. The variable-frequency fringe sequences are projected and captured, and the projector pixel coordinates matched to the camera pixel coordinates in the common field of view are obtained again by a variable-frequency decoding method. Finally, a three-dimensional coordinate point cloud is reconstructed from the matched projector and camera pixel coordinates. The invention effectively improves the overall measurement accuracy for workpieces with varying depth of field and thereby improves the robustness of the fringe projection structured light measurement method.

Description

Variable-frequency fringe projection structured light measuring method based on depth of field feedback
Technical Field
The invention belongs to the field of non-contact optical three-dimensional measurement, in particular to temporal phase-shift fringe projection structured light three-dimensional measurement, and provides a variable-frequency fringe projection structured light measurement method based on depth-of-field feedback.
Background
The fringe projection structured light measurement method uses a camera, a projector and a computer to form a measurement system: the projector actively projects coded images onto the surface of the measured object, the camera captures the corresponding coded images projected on the object surface, the pixel matching relationship between the projector and the camera is calculated, and the three-dimensional coordinates of the measured object surface are then obtained by the triangulation principle. The fringe projection structured light three-dimensional measurement method is non-contact, highly accurate, yields dense point clouds and is highly flexible; it has strong advantages in high-speed, high-precision and high-resolution three-dimensional measurement tasks and therefore has important applications in surface quality inspection, reverse engineering, three-dimensional structure scanning and modeling, and other fields.
The existing fringe projection structured light three-dimensional measurement methods mainly use an N-step sinusoidal phase-shift fringe image sequence as the coded images: the sinusoidal phase-shift fringe sequence is projected and captured, a phase-shift decoding algorithm is used to solve for high-precision matched pixels of the camera and the projector, and the three-dimensional coordinates of the measured object are then reconstructed by the triangulation principle. In the existing N-step sinusoidal phase-shift fringe image sequence, every image is a sinusoidal fringe pattern of uniform frequency. The phase-shift decoding algorithm offers high resolution, high precision and high speed and is easy to implement, so it is widely used in fringe projection structured light three-dimensional measurement. An N-step sinusoidal phase-shift fringe projection three-dimensional reconstruction method generally uses two groups of sinusoidal phase-shift fringe image sequences, one of low frequency and one of high frequency, to calculate the matching relationship between the camera and the projector. During measurement, the two groups of N sinusoidal phase-shift fringe images are projected onto the surface of the measured object in phase-shift order, the fringe images on the object surface are captured in sequence, the pixel matching relationship between the camera and the projector is calculated pixel by pixel from the high-frequency and low-frequency sequences with an unwrapping algorithm, and the corresponding three-dimensional coordinates of the measured object are then calculated by the triangulation principle. No optimization is required, the computation is efficient, and the robustness and generality are high, so the method is widely used in fringe projection three-dimensional measurement.
However, commercial projectors usually have a large clear aperture to achieve high light efficiency, so the projector's depth of field is small: a sharp image is obtained only near the focal plane, and the projector becomes defocused at positions far from the focal plane. The defocus effect is usually characterized by a Gaussian point spread function, and the degree of defocus is represented by the defocus kernel; the larger the defocus kernel, the more severe the defocus. The Gaussian defocus function is a low-pass filter, so for a given degree of defocus the amplitude of a sinusoidal fringe image decreases as its frequency increases. The higher the frequency of the sinusoidal fringe image, the smaller the pixel displacement corresponding to the same change in light intensity; however, noise is inevitably introduced during projection and image acquisition due to temperature changes, unequal exposure times, environmental conditions, digital quantization and other causes, and as the frequency of the coded fringes increases, the low-pass filtering effect of defocus reduces the amplitude of the pattern projected on the object surface, so the pixel error caused by the same light intensity error increases. The pixel decoding error is therefore not a monotonic function of the sinusoidal fringe frequency, and for different depths of field the sinusoidal fringe frequency that minimizes the decoding error caused by light intensity noise is different.
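To make this trade-off concrete, the short Python sketch below (illustrative only, not part of the patent) evaluates a simple error model in which the N-step phase error scales as sqrt(2/N)·noise/amplitude and Gaussian defocus with kernel σ attenuates a fringe of period T by exp(−2π²σ²/T²); the values of N, amplitude, noise and σ are arbitrary placeholders. Under this assumed model the decoding error in projector pixels is minimized at a period proportional to the defocus kernel, which is the reciprocal frequency-kernel relationship exploited below.

```python
import numpy as np

# Illustrative sketch (not the patent's derivation): phase-shift decoding error vs.
# fringe period under Gaussian defocus, with an assumed noise/attenuation model.
N, B, noise, sigma = 20, 100.0, 1.0, 5.0              # steps, amplitude, gray-level noise, defocus kernel (px)
T = np.linspace(4, 200, 500)                           # candidate fringe periods in projector pixels
B_eff = B * np.exp(-2 * np.pi**2 * sigma**2 / T**2)    # low-pass attenuated fringe amplitude
phase_err = np.sqrt(2.0 / N) * noise / B_eff           # rms wrapped-phase error (rad)
pixel_err = phase_err * T / (2 * np.pi)                # projector-pixel decoding error
T_opt = T[np.argmin(pixel_err)]
print(f"error-minimizing period ~ {T_opt:.1f} px  (2*pi*sigma = {2*np.pi*sigma:.1f} px)")
```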
Both the camera and the projector have a limited depth-of-field working range; away from the in-focus position the image becomes blurred, which is the defocus of the camera and of the projector. For the projector in particular, the blur of the projected image is more pronounced at positions away from the focal plane. The defocus effect can be characterized by a Gaussian point spread function, and the degree of defocus can be represented by a Gaussian defocus kernel. In some existing studies, compensating and correcting the defocus of a camera or a projector requires quantitative measurement of the defocus kernel size of the camera or projector when an image is acquired or projected: a secondary blur (re-blur) method is used for the camera defocus kernel, calculating the degree of blur of the image acquired by the camera and thereby obtaining the defocus kernel size, while the projector defocus kernel is generally measured by a correlation analysis method, in which the projected image is compared by correlation analysis with standard blurred template images to obtain the projector defocus kernel size.
For a measured object with varying depth of field, such as a curved surface or a step, the degree of defocus differs from position to position as the depth of field changes. When the surface of such an object is measured with a uniform sinusoidal fringe image of constant frequency, that frequency yields high measurement accuracy only in the region at one specific depth of field; at the other positions the frequency is not optimal, so the measurement accuracy is not optimal and the robustness of the method is poor.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a variable-frequency fringe projection structured light measuring method based on depth of field feedback. The invention can effectively improve the comprehensive measurement precision of the workpiece with the changed depth of field, thereby being beneficial to improving the robustness of the structured light measurement method of the fringe projection.
The invention provides a variable-frequency fringe projection structured light measurement method based on depth of field feedback, which is characterized by comprising the following steps of:
1) building a projector defocus kernel calibration system; calibrating the projector defocus kernel size at different depths of field with the built projector defocus kernel calibration system, and fitting to obtain the relationship between the projector defocus kernel and the projector depth of field;
2) building a camera defocus kernel calibration system; calibrating the camera defocus kernel size at different depths of field with the built camera defocus kernel calibration system, and fitting to obtain the relationship between the camera defocus kernel and the camera depth of field;
3) measuring the surface of the measured object with a uniform-frequency sinusoidal fringe image sequence to obtain a first three-dimensional coordinate point cloud of the measured object; from this point cloud, obtaining the projector depth of field corresponding to each projector pixel and the camera depth of field corresponding to each camera pixel coordinate in the common field of view, and using the calibration results of step 1) and step 2) to calculate the projector defocus kernel size and the camera defocus kernel size on the measured object surface within the common field of view of the camera and the projector;
4) optimally designing the sinusoidal phase-shift fringe image sequence according to the camera defocus kernel and the projector defocus kernel fed back from the depth of field; according to the reciprocal relationship between the optimal frequency and the defocus kernel, optimally designing, from the synthetic defocus kernel in the common field of view of the camera and the projector obtained in step 3), the frequency of a sinusoidal fringe image sequence that varies with the depth of field of the measured object, and generating the variable-frequency sinusoidal fringe image sequence projection images of the N-step phase-shift method from the optimally designed fringe frequency;
5) using the computer to control the projector to project the optimized N-step longitudinal variable-frequency sinusoidal fringe image sequence and N-step transverse variable-frequency sinusoidal fringe image sequence, and synchronously capturing the corresponding N-step variable-frequency sinusoidal fringe image sequences with the camera; obtaining the monotonic relative phase corresponding to each camera pixel coordinate in the common field of view of the projector and the camera with the N-step phase-shift method, and calculating the phase cycle number corresponding to each camera pixel coordinate with the projector pixel coordinates matched in step 3-2), using the uniform-frequency sinusoidal fringe image sequence, to the camera pixel coordinates in the common field of view of the camera and the projector, thereby obtaining the absolute phase; the projector pixel coordinates re-optimally matched to the camera pixel coordinates are obtained by decoding;
6) reconstructing a new three-dimensional coordinate point cloud of the measured object from the projector pixel coordinates matched to the camera pixels in the common field of view obtained in step 5), the camera intrinsic parameter matrix and projector intrinsic parameter matrix calibrated in step 3-1), and the rotation matrix and translation vector of the camera relative to the projector.
The invention has the following characteristics and beneficial effects:
The invention calibrates the relationship between the defocus kernel and the depth of field for both the projector and the camera, measures the surface of a workpiece with varying depth of field once with conventional uniform-frequency fringes, obtains the depth of field in the projector and camera coordinate systems, and calculates the synthetic defocus kernel size. Since the optimal frequency is in reciprocal relation to the defocus kernel, different depths of field lead to different optimized frequencies. The varying defocus kernel obtained from the varying depth of field of the first measurement is used to design a non-uniform fringe frequency that changes with the depth of field, and a variable-frequency phase coding and decoding method is designed accordingly. The camera and projector pixel coordinates are then re-matched with the variable-frequency phase-shift fringe images, which improves the matching accuracy and thus the overall measurement accuracy.
The invention optimizes the frequency of the sinusoidal phase-shift fringe images using the depth of field of the measured object obtained from a first measurement, thereby improving the overall measurement accuracy and robustness of the sinusoidal phase-shift fringe projection structured light system. The method can be applied to quality inspection of industrial products, three-dimensional surface measurement, reverse engineering, three-dimensional visual navigation, and the like.
Drawings
FIG. 1 is a flow chart of a variable frequency fringe projection structured light measurement method based on depth of field feedback according to the present invention;
FIG. 2 is a schematic diagram of a defocus calibration system of the present invention;
FIG. 3 is a schematic diagram of the rectangular-array image used to calibrate the projector defocus kernel;
FIG. 4 is a schematic diagram of the alternating black-and-white stripe image used to calibrate the camera defocus kernel;
FIG. 5 is a schematic view of the fringe projection structured light system measuring a workpiece with varying depth of field and of the variable-frequency design;
FIG. 6 is a schematic diagram of the piecewise fitting of the defocus kernel;
wherein: 1 — slide rail; 2 — sliding table; 3 — projector; 4 — camera; 5 — computer; 6 — projection screen; 7 — measured object.
Detailed Description
The invention provides a variable-frequency fringe projection structured light measurement method based on depth of field feedback, which is further described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a variable-frequency fringe projection structured light measurement method based on depth-of-field feedback; the overall flow is shown in FIG. 1, and the method comprises the following steps:
1) Build the projector defocus kernel calibration system; calibrate the projector defocus kernel size at different depths of field with the built system, and fit the functional relationship between the projector defocus kernel σ_p and the projector depth of field H. The specific steps are as follows:
1-1) Build the projector defocus kernel calibration system. As shown in FIG. 2, it comprises a slide rail 1 (no particular model is required; the rail should have a measurable scale and move smoothly — in this embodiment a 40 mm × 40 mm aluminium slide rail is used), a sliding table 2 (no particular model is required; in this embodiment a 50 mm × 50 mm × 20 mm aluminium seat is used), a projector 3 (a digital light projector with good linearity that needs no correction; in this embodiment a LightCrafter 4500 DLP digital projector with a resolution of 912 × 1140 pixels is used), a camera 4 (an industrial camera with a resolution of 1000 × 1000 pixels or more; in this embodiment a JAI GO-5000C-USB camera with a resolution of 2560 × 2048 pixels is used), a computer 5 (a computer with 3 GB or more of memory; in this embodiment a notebook computer with a 2.6 GHz CPU running Windows 7), and a projection screen 6 (a flat, smooth, white diffuse-reflection board with an area of 800 mm × 1000 mm).
The slide rail 1 is fixed on a flat, stable desktop, and the projection screen 6 is fixed at one end of the slide rail 1 so that it is perpendicular to the moving direction of the slide rail 1. The sliding table 2 is mounted on the slide rail 1 and can move back and forth along it; the projector 3 is fixed on the sliding table 2 so that, as the sliding table 2 moves along the slide rail 1, the optical axis of the projector 3 remains perpendicular to the projection screen 6. The camera 4 is placed on the desktop to one side of the slide rail 1 and focused on the projection screen 6; the camera 4 and the projector 3 are each connected to the computer 5 through a USB data line, and the hardware trigger output of the projector is connected to the hardware trigger input of the camera with a flat cable, forming the projector defocus kernel calibration system.
1-2) Move the sliding table to change the distance between the projector and the projection screen, calibrate the projector defocus kernel σ_p at different projector depths of field H with a cross-correlation analysis method, and then fit the relationship between the projector defocus kernel σ_p and the depth of field H. The specific steps are as follows:
1-2-1) Using the computer, generate a projector defocus kernel calibration image consisting of an S_w × S_h rectangular array, denoted I_calib^p (the superscript p denotes an image generated for projection, and the subscript calib denotes a calibration image); FIG. 3 is a schematic illustration of the projector defocus kernel calibration image, whose resolution is W_p pixels wide by H_p pixels high. S_w and S_h are the numbers of columns and rows into which the calibration image is divided; they are chosen according to the projector resolution so that the width and height of each rectangular sub-block lie between 100 and 250 pixels. Each rectangular sub-block is a pattern with a white square at the center and a black border, L_w1 pixels wide and L_h1 pixels high; the central white square is L_w2 pixels wide and L_h2 pixels high, with W_p = S_w·L_w1 and H_p = S_h·L_h1. The sub-block sizes are chosen so that the black border around each sub-block is wider than 25 pixels. In this embodiment, W_p = 912, H_p = 1140, S_w × S_h = 4 × 5, L_w1 = L_h1 = 228, L_w2 = L_h2 = 168. In addition, generate N uniform longitudinal low-frequency sinusoidal fringe images (V denotes longitudinal; the superscript u denotes low frequency, i.e. one sinusoidal period across the transverse direction; the subscript i denotes the phase-shift index; the number of phase-shift steps N must be greater than 3, and the higher the number of steps the higher the measurement accuracy — in this embodiment N = 20), N uniform longitudinal high-frequency sinusoidal fringe images (the superscript h denotes high frequency, i.e. more than one period across the transverse direction), N uniform transverse low-frequency sinusoidal fringe images (H denotes transverse; the superscript u denotes low frequency), and N uniform transverse high-frequency sinusoidal fringe images.
1-2-2) With the projector at its focal position, measure the depth of field H_f from the projector to the projection screen. Project onto the projection screen, in sequence, the projector defocus kernel calibration image, the N uniform longitudinal low-frequency sinusoidal fringe images, the N uniform longitudinal high-frequency sinusoidal fringe images, the N uniform transverse low-frequency sinusoidal fringe images, and the N uniform transverse high-frequency sinusoidal fringe images.
After each projection, the camera captures the corresponding image on the projection screen. According to the phase-shift three-dimensional reconstruction principle, for each camera pixel coordinate (U_c, V_c) in the common field of view of the camera and the projector (the common field of view is the region illuminated by the projector that can also be captured by the camera), the matched projector low-frequency longitudinal relative phase Vφ^u(U_c, V_c) and high-frequency longitudinal relative phase Vφ^h(U_c, V_c) are calculated as:

Vφ^u(U_c, V_c) = arctan[ Σ_{i=1}^{N} VI_i^{cu}(U_c, V_c)·sin(2πi/N) / Σ_{i=1}^{N} VI_i^{cu}(U_c, V_c)·cos(2πi/N) ]

Vφ^h(U_c, V_c) = arctan[ Σ_{i=1}^{N} VI_i^{ch}(U_c, V_c)·sin(2πi/N) / Σ_{i=1}^{N} VI_i^{ch}(U_c, V_c)·cos(2πi/N) ]

where VI_i^{cu}(U_c, V_c) and VI_i^{ch}(U_c, V_c) are the gray values at camera pixel coordinate (U_c, V_c) of the i-th image of the longitudinal low-frequency and longitudinal high-frequency sinusoidal fringe sequences captured by the camera in the common field of view.

Similarly, the matched projector transverse low-frequency relative phase Hφ^u(U_c, V_c) and transverse high-frequency relative phase Hφ^h(U_c, V_c) are calculated for each camera pixel coordinate (U_c, V_c) in the common field of view as:

Hφ^u(U_c, V_c) = arctan[ Σ_{i=1}^{N} HI_i^{cu}(U_c, V_c)·sin(2πi/N) / Σ_{i=1}^{N} HI_i^{cu}(U_c, V_c)·cos(2πi/N) ]

Hφ^h(U_c, V_c) = arctan[ Σ_{i=1}^{N} HI_i^{ch}(U_c, V_c)·sin(2πi/N) / Σ_{i=1}^{N} HI_i^{ch}(U_c, V_c)·cos(2πi/N) ]

where HI_i^{cu}(U_c, V_c) and HI_i^{ch}(U_c, V_c) are the gray values at camera pixel coordinate (U_c, V_c) of the i-th image of the transverse low-frequency and transverse high-frequency sinusoidal fringe sequences captured by the camera in the common field of view.
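For reference, the wrapped-phase computation described above can be sketched in Python/NumPy as follows (an illustration of the standard N-step formula, not the patent's implementation; the image stack layout and the 2πi/N shift convention are assumptions):

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase-shift wrapped phase from a stack of fringe images.

    `images` is an (N, H, W) array of gray values; the i-th image is assumed
    to carry a phase shift of 2*pi*i/N, as in the expressions above."""
    N = images.shape[0]
    i = np.arange(1, N + 1).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * i / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * i / N), axis=0)
    return np.arctan2(num, den)          # wrapped phase in (-pi, pi]

# usage: phi_low = wrapped_phase(stack_low); phi_high = wrapped_phase(stack_high)
```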
1-2-3) From the longitudinal low-frequency relative phase Vφ^u(U_c, V_c), the longitudinal high-frequency relative phase Vφ^h(U_c, V_c), the transverse low-frequency relative phase Hφ^u(U_c, V_c) and the transverse high-frequency relative phase Hφ^h(U_c, V_c) obtained in step 1-2-2) for each camera-image pixel coordinate (U_c, V_c) in the common field of view of the camera and the projector, the matched projector abscissa U_p^m(U_c, V_c) and ordinate V_p^m(U_c, V_c) are calculated by unwrapping the high-frequency phase with the low-frequency phase:

U_p^m(U_c, V_c) = (T_h/2π)·[ Vφ^h(U_c, V_c) + 2π·round( ( (T_u/T_h)·Vφ^u(U_c, V_c) − Vφ^h(U_c, V_c) ) / 2π ) ]

V_p^m(U_c, V_c) = (T_h/2π)·[ Hφ^h(U_c, V_c) + 2π·round( ( (T_u/T_h)·Hφ^u(U_c, V_c) − Hφ^h(U_c, V_c) ) / 2π ) ]

where round() is the rounding function, T_u is the period of the uniform low-frequency sinusoidal fringes and T_h is the period of the uniform high-frequency sinusoidal fringes.
1-2-4) Using the bilinear interpolation function griddata in MATLAB, calculate the gray value of the projector image at each integer-point projector coordinate, giving a geometrically corrected image I_rect^p with the same resolution as the projector defocus kernel calibration image I_calib^p:

I_rect^p(U_p, V_p) = griddata( {U_p^m}, {V_p^m}, {I^c(U_c, V_c)}, {U_p}, {V_p} )

where {U_p^m} and {V_p^m} are the sequences of projector abscissas and ordinates matched to the camera pixel coordinates {(U_c, V_c)} in the common field of view, {I^c(U_c, V_c)} is the sequence of gray values of the rectangular-array image captured by the camera in the common field of view, and {U_p} and {V_p} are the integer-point pixel coordinate sequences of the projector image:

({U_p}, {V_p}) = meshgrid(1:W_p, 1:H_p)

where meshgrid() is the grid-point generating function and W_p and H_p are the horizontal and vertical pixel resolutions of the projector image.
1-2-5) Using the computer, apply Gaussian blur of different degrees to I_rect^p: starting from a defocus kernel of 1 and increasing in steps of 0.2 pixel up to 20, generate the rectangular-array standard-template blurred image sequence tI_{1+(i−1)·0.2} (i = 1, 2, …, 96):

tI_{1+(i−1)·0.2}(U_p, V_p) = I_rect^p(U_p, V_p) ⊗ G_{1+(i−1)·0.2}(U_p, V_p)

where G_{1+(i−1)·0.2}(U_p, V_p) is a Gaussian blur function with defocus kernel σ = 1+(i−1)·0.2 and ⊗ denotes the convolution operation.
1-2-6) Perform a cross-correlation calculation between the geometrically corrected image I_rect^p obtained in 1-2-4) and each image of the rectangular-array standard-template blurred sequence. For each rectangular sub-block {w, h} of the geometrically corrected image, compute the cross-correlation coefficients r_1{w,h}, r_1.2{w,h}, r_1.4{w,h}, …, r_20{w,h} with the corresponding sub-blocks tI_1{w,h}, tI_1.2{w,h}, tI_1.4{w,h}, …, tI_20{w,h} of the standard-template blurred image sequence, and take the defocus kernel σ_{w,h} of the most correlated template sub-block as the defocus kernel size of the sub-block in column w and row h of the geometrically corrected image. Calculating the defocus kernel of every rectangular sub-block in this way gives the defocus kernel distribution array of the geometrically corrected image:

{ σ_{w,h} },  w = 1, 2, …, S_w;  h = 1, 2, …, S_h

The defocus kernel distribution array is then averaged:

σ_p = (1 / (S_w·S_h)) · Σ_{w=1}^{S_w} Σ_{h=1}^{S_h} σ_{w,h}

and this average is taken as the projector defocus kernel size at the current position.
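The template-matching loop of steps 1-2-5) and 1-2-6) can be sketched as follows (illustrative Python; the pattern generator and the normalized cross-correlation shown are one common choice, not necessarily the patent's exact implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def calib_pattern(Hp=1140, Wp=912, Sw=4, Sh=5, Lw2=168, Lh2=168):
    """Ideal sharp calibration pattern: Sw x Sh sub-blocks, each a centred white square."""
    img = np.zeros((Hp, Wp))
    bh, bw = Hp // Sh, Wp // Sw
    for h in range(Sh):
        for w in range(Sw):
            r0, c0 = h * bh + (bh - Lh2) // 2, w * bw + (bw - Lw2) // 2
            img[r0:r0 + Lh2, c0:c0 + Lw2] = 255.0
    return img

def projector_defocus_kernel(rect, Sw=4, Sh=5):
    """Cross-correlation estimate of the projector defocus kernel from the rectified image."""
    Hp, Wp = rect.shape
    sigmas = np.arange(1.0, 20.0 + 1e-9, 0.2)
    templates = [gaussian_filter(calib_pattern(Hp, Wp, Sw, Sh), s) for s in sigmas]
    bh, bw = Hp // Sh, Wp // Sw
    est = np.zeros((Sh, Sw))
    for h in range(Sh):
        for w in range(Sw):
            sl = np.s_[h * bh:(h + 1) * bh, w * bw:(w + 1) * bw]
            a = rect[sl] - rect[sl].mean()
            r = [(a * (t[sl] - t[sl].mean())).sum() /
                 np.sqrt((a * a).sum() * ((t[sl] - t[sl].mean()) ** 2).sum())
                 for t in templates]
            est[h, w] = sigmas[int(np.argmax(r))]   # kernel of the most correlated template
    return est.mean()                               # average over all sub-blocks
```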
1-2-7) Move the sliding table along the slide rail by an equal spacing ΔH (ΔH is chosen according to the required resolution of the projector depth of field versus defocus kernel relationship, between 5 mm and 15 mm; 5 mm in this embodiment) so that the projector moves away from the projection screen, and repeat steps 1-2-2) to 1-2-6) to calculate the projector defocus kernel size at the current projector depth of field. This yields the corresponding sequence of projector depths of field H and projector defocus kernel sizes σ_p(H).
Since the projector defocus kernel σ_p varies approximately linearly with the projector depth of field H, the relationship between the projector defocus kernel and the depth of field is obtained by straight-line fitting:

σ_p = a_p·H + b_p

where a_p and b_p are the linear fitting coefficients of the projector defocus kernel versus the depth of field, and H is the projector depth of field.
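The two fits (the straight line here for the projector, and the cubic polynomial used for the camera in step 2-4)) reduce to ordinary polynomial fitting; a minimal sketch with placeholder data:

```python
import numpy as np

# Placeholder calibration data; real values come from steps 1-2-7) and 2-4).
H  = np.arange(400.0, 600.0, 5.0)                              # projector depths of field (mm)
sp = 0.02 * H - 5.0 + 0.05 * np.random.randn(H.size)           # projector defocus kernels (px)
a_p, b_p = np.polyfit(H, sp, 1)                                 # sigma_p = a_p*H + b_p

L  = np.arange(350.0, 550.0, 5.0)                               # camera depths of field (mm)
sc = 1.0 + 1e-4 * (L - 450.0) ** 2 + 0.05 * np.random.randn(L.size)
b3, b2, b1, b0 = np.polyfit(L, sc, 3)                           # sigma_c = b0 + b1*L + b2*L^2 + b3*L^3
```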
It should be added here that each sub-block of the rectangular-array image generated by the computer for calibrating the projector defocus kernel is a rectangle; as shown in FIG. 3, each rectangular sub-block is a white block in the center surrounded by a black border. The width L_w1 and height L_h1 of each sub-block, in pixels, are:

L_w1 = W_p / S_w,   L_h1 = H_p / S_h
in the step 1-2-4), when geometric correction is carried out on the rectangular array image collected by the camera in the public view, the first projected image is a pure white image, the view field of the camera is larger than that of the projector, and the image range projected by the projector is in the view field of the camera by adjusting the position relation between the camera and the projector. By threshold segmentation, the projection in the camera view can be obtainedPixel coordinates of the region. According to the projected horizontal and vertical uniform sine stripe image sequence, the horizontal and vertical coordinates of the projector matched with the camera pixel coordinates in the public visual field can be calculated through a phase shift algorithmDue to the calculation principle of phase shift method matching, the matched projector pixel coordinates are of a floating point type, and when autocorrelation calculation is performed, the resolution of the images needs to be consistent. Thereby generating a coordinate sequence { U ] with the same resolution as the projection gray scale by using a mesegrid () function in MATLABpAnd { V }pAnd is the coordinate of an integer point. Using bilinear interpolation function grirdata () in MATALAB to make bilinear interpolation to obtain gray value of integral point pixel coordinate position, and making it and projected imageCorrected image of the same resolution
In step 1-2-5), when generating the standard template images with known blur kernels, the defocus kernel of the i-th template image is set to:

σ_i = 1 + (i − 1)·0.2

The fspecial function in MATLAB is used to generate the Gaussian blur kernel:

G_{σ_i}(u, v) = (1 / (2π·σ_i²)) · exp( −(u² + v²) / (2σ_i²) )

Each standard blurred template is then generated by filtering the corrected image with this kernel using an image filtering function:

tI_{σ_i}(U_p, V_p) = I_rect^p(U_p, V_p) ⊗ G_{σ_i}(U_p, V_p)

In step 1-2-6), the cross-correlation coefficient between a corrected-image sub-block I_rect^p{w, h} and the corresponding template-image sub-block tI_{1+(i−1)·0.2}{w, h} (w = 1, 2, …, S_w; h = 1, 2, …, S_h) is calculated as the normalized cross-correlation:

r_{1+(i−1)·0.2}{w, h} = Σ (A − Ā)(B − B̄) / sqrt( Σ (A − Ā)² · Σ (B − B̄)² )

where A and B denote the gray values of the corrected-image sub-block and of the template sub-block, Ā and B̄ their mean values, and the sums run over all pixels of the sub-block.
2) Build the camera defocus kernel calibration system; calibrate the camera defocus kernel size at different depths of field with the built camera defocus kernel calibration system, and fit the relationship between the camera defocus kernel σ_c and the camera depth of field L. The specific steps are as follows:
2-1) Build the camera defocus kernel calibration system using the same equipment as the projector defocus kernel calibration system of step 1-1). The slide rail 1 is fixed on a stable desktop, and the projection screen 6 is fixed at one end of the slide rail 1 so that it is perpendicular to the moving direction of the slide rail 1. The sliding table 2 is mounted on the slide rail 1 and can move back and forth along it; the camera 4 is fixed on the sliding table 2 so that, as the sliding table 2 moves along the slide rail 1, the optical axis of the camera 4 remains perpendicular to the projection screen 6. The camera 4 is connected to the computer 5 through a USB data line.
2-2) Fit the relationship between the camera defocus kernel and the depth of field. The specific steps are as follows: 2-2-1) Using the computer, generate a camera defocus kernel calibration image I_calib^c (the superscript c denotes the camera; the image is printed). FIG. 4 shows the black-and-white vertical stripe pattern used for camera defocus kernel calibration, with a resolution of W_c pixels wide by H_c pixels high; the widths of the black stripes and the white stripes are both L_s pixels, with L_s > 25 pixels required (in this embodiment, W_c = 2560, H_c = 2048, L_s = 80). Print the calibration image (the print size depends on the field of view at the camera's focal position; in this embodiment the printed stripe pattern is A3 paper size) and fix it flat on the projection screen. Move the sliding table back and forth while observing the images captured by the camera on the computer in real time until the image is sharpest; the projection screen is then at the focal position of the camera.
Measure the depth of field L_f from the camera to the projection screen at the focal position and capture an image I^c with the camera (the superscript c denotes an image captured by the camera, with the camera's resolution; 2560 × 2048 pixels in this embodiment). Using an edge extraction algorithm in MATLAB or OpenCV, first extract all black-white boundary lines of the captured image I^c with the Canny operator to obtain a binary image I_bw^c with white boundaries on a black background (the subscript bw denotes the black-and-white binary image; in this embodiment, the edge function in MATLAB is used for the edge extraction). Then, to remove other noise, use a Hough transform algorithm in MATLAB or OpenCV to find the pixel coordinates of all black-white stripe boundary positions {(x_ei, y_ei), i = 1, 2, …, N_e}, where N_e is the number of pixel coordinates on the boundary (in this embodiment, the Hough transform functions in MATLAB are used to extract the black-white boundary edges).
2-3) Re-blur the captured image I^c with a Gaussian function whose defocus kernel is σ_0, obtaining the blurred image I_b^c. Compute the image gradients ∇I^c and ∇I_b^c of I^c and I_b^c. For each point (x_ei, y_ei) of the extracted black-white stripe boundary coordinates {(x_ei, y_ei), i = 1, 2, …, N_e}, the defocus kernel at that point is calculated by the following sub-steps:

2-3-1) The gradient ratio at the black-white boundary coordinate (x_ei, y_ei) is:

R(x_ei, y_ei) = |∇I^c(x_ei, y_ei)| / |∇I_b^c(x_ei, y_ei)| = sqrt( (σ_ei² + σ_0²) / σ_ei² )

where σ_ei is the defocus kernel size at the boundary coordinate (x_ei, y_ei).

2-3-2) The defocus kernel size σ_ei at the boundary pixel (x_ei, y_ei) can therefore be calculated as:

σ_ei = σ_0 / sqrt( R(x_ei, y_ei)² − 1 )

The defocus kernel sizes of all pixel points on the black-white stripe boundary lines are calculated according to 2-3-1) and 2-3-2), and their average is taken as the camera defocus kernel size at the current depth of field:

σ_c = (1 / N_e) · Σ_{i=1}^{N_e} σ_ei
2-4) Move the sliding table in equal steps ΔL (chosen according to the required depth-of-field and defocus kernel resolution, between 5 mm and 15 mm; 5 mm in this embodiment) so that the camera moves away from or towards the projection screen, repeat steps 2-1) to 2-3), and calculate the camera defocus kernel size at each camera depth of field L. This yields the corresponding sequence of camera defocus kernel σ_c and camera depth of field L, with L ranging from L_f − l_1·ΔL to L_f + l_2·ΔL, where l_1 and l_2 are the numbers of moves from the focal depth of field L_f towards and away from the projection screen 6, respectively (l_1 and l_2 are determined by the camera's working depth of field and the step ΔL of each move, such that L_f − l_1·ΔL is greater than the minimum working distance at which the camera image remains sharp and L_f + l_2·ΔL is less than the maximum such working distance). The camera defocus kernel σ_c varies nonlinearly with the camera depth of field L, and a cubic polynomial is fitted to obtain the relationship between the camera defocus kernel σ_c and the depth of field L:

σ_c = b_0 + b_1·L + b_2·L² + b_3·L³

where b_i (i = 0, 1, 2, 3) are the cubic curve fitting coefficients and L is the camera depth of field.
It should be added that, in the re-blurring method of step 2-3) for calibrating the relationship between the camera defocus kernel and the camera depth of field, the paper size is selected according to the camera's field of view and the generated black-and-white vertical stripe image is printed at high precision. The printed stripe image is then laid flat on the whiteboard directly in front of the camera, with a fill light placed in front of the whiteboard. The field of view of the camera is smaller than the printed black-and-white stripe sheet, so black and white stripes fill the captured image. The captured image I^c is first edge-extracted with the Canny operator in MATLAB to generate a binary image, and, to remove noise, a Hough transform operator is used to find the coordinates {(x_ei, y_ei), i = 1, 2, …, N_e} on the boundary lines of the black-and-white stripe pattern. To obtain the camera defocus kernel at depth of field L, the fspecial function in MATLAB is used to generate a Gaussian defocus function with known kernel σ_0:

G_{σ_0}(u, v) = (1 / (2π·σ_0²)) · exp( −(u² + v²) / (2σ_0²) )

The image I^c captured by the camera at depth of field L is re-blurred with this Gaussian defocus function to obtain the re-blurred image I_b^c:

I_b^c = I^c ⊗ G_{σ_0}

The gradient function in MATLAB is used to compute the gradients of the captured image I^c and of the re-blurred image I_b^c along the U_c axis and the V_c axis. The total gradient magnitudes |∇I^c| and |∇I_b^c| are then:

|∇I^c| = sqrt( (∂I^c/∂U_c)² + (∂I^c/∂V_c)² ),   |∇I_b^c| = sqrt( (∂I_b^c/∂U_c)² + (∂I_b^c/∂V_c)² )
3) Measure the surface of the measured object with the uniform-frequency sinusoidal fringe image sequences to obtain a first three-dimensional coordinate point cloud of the measured object. From this first point cloud, obtain the projector depth of field at each projector pixel and the camera depth of field at each camera pixel coordinate in the common field of view (as shown in FIG. 2, A is the camera field of view and B is the projector field of view; when B lies entirely within A, the common field of view of the camera and the projector is B), and use the calibration results of steps 1) and 2) to calculate the projector defocus kernel size and the camera defocus kernel size on the measured object surface within the common field of view of the camera and the projector. The specific steps are as follows:
3-1) Fix the relative position of the camera and the projector so that they share a common field of view, forming the fringe projection structured light measurement system. As shown in FIG. 5, the measured object 7 is placed within the common field of view, the camera 4 and the projector 3 are connected to the computer 5 with USB data lines, and the camera 4 and the projector 3 are connected by a hardware trigger line. The structured light measurement system is calibrated with a structured light measurement system calibration method to obtain the camera intrinsic parameter matrix:

K_C = [ f_cu, 0, U_c0; 0, f_cv, V_c0; 0, 0, 1 ]

and the projector intrinsic parameter matrix:

K_P = [ f_pu, 0, U_p0; 0, f_pv, V_p0; 0, 0, 1 ]

where f_cu and f_cv are the camera focal lengths in the U_c and V_c directions, f_pu and f_pv are the projector focal lengths in the U_p and V_p directions, (U_c0, V_c0) are the coordinates of the camera principal point in the U_c and V_c directions, and (U_p0, V_p0) are the coordinates of the projector principal point in the U_p and V_p directions. The camera resolution is W_c × H_c and the projector resolution is W_p × H_p.

Calibrating the structured light measurement system also yields the transformation (rotation) matrix of the camera relative to the projector:

R_pc = [r_ij]_{3×3}

and the translation vector:

t_pc = [t_1, t_2, t_3]^T

where r_ij is the element in row i and column j of the matrix R_pc, and t_1, t_2, t_3 are the translations in the X, Y and Z directions, respectively.
3-2) Using the uniform longitudinal low-frequency, uniform longitudinal high-frequency, uniform transverse low-frequency and uniform transverse high-frequency sinusoidal fringe image sequences generated in 1-2-1), measure the measured object 7 in the common field of view of the camera and the projector by the phase-shift method (the measured object 7 is a non-reflective curved workpiece; in this embodiment it is a plastic curved-surface part). This yields the three-dimensional coordinate point cloud of the measured object 7 (as shown in FIG. 5, the measured object 7 is placed on a fixed worktable, not shown, and the camera-projector measurement system measures from above), the camera pixel coordinates {(U_ck, V_ck), k = 1, 2, …, N_sv} in the common field of view, and the corresponding camera depth-of-field values {L_k, k = 1, 2, …, N_sv}, where N_sv is the number of camera pixels in the common field of view. The projector pixel coordinates matched to all of these camera pixel coordinates by the phase-shift method are {(U_pk, V_pk), k = 1, 2, …, N_sv}, with corresponding projector depths of field {H_k, k = 1, 2, …, N_sv}. Using the relationship between the projector defocus kernel σ_p and the projector depth of field H calibrated in step 1), the projector defocus kernel sizes in the common field of view are calculated as:

{ σ_pk: σ_pk = a_p·H_k + b_p,  k = 1, 2, …, N_sv }

and using the relationship between the camera defocus kernel and the depth of field calibrated in step 2), the camera defocus kernel sizes in the common field of view are calculated as:

{ σ_ck: σ_ck = b_0 + b_1·L_k + b_2·L_k² + b_3·L_k³,  k = 1, 2, …, N_sv }
From the composition property of Gaussian blurs (the variances of successive Gaussian blurs add), the synthetic defocus kernel at each matched point is calculated as:

σ_k = sqrt( σ_pk² + σ_ck² ),  k = 1, 2, …, N_sv

Using the MATLAB bilinear interpolation function griddata, the synthetic defocus kernel is then interpolated at the integer-point projector coordinates in the common field of view, giving:

{ σ_j^p,  j = 1, 2, …, N_sv^p }

where N_sv^p is the number of integer-point projector coordinates in the common field of view.
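A compact sketch of this depth-of-field feedback step, combining the two calibrated kernel relationships and interpolating onto the integer projector grid (illustrative Python; the argument names are assumptions):

```python
import numpy as np
from scipy.interpolate import griddata

def synthetic_kernel_on_projector_grid(Hk, Lk, Upk, Vpk, a_p, b_p, b_coef, Wp, Hp):
    """Per-projector-pixel synthetic defocus kernel from the first measurement (step 3-2)).

    Hk, Lk     : projector/camera depths of field at each matched point
    Upk, Vpk   : matched (floating-point) projector coordinates
    b_coef     : (b0, b1, b2, b3) cubic coefficients for the camera kernel"""
    sig_p = a_p * Hk + b_p
    sig_c = b_coef[0] + b_coef[1] * Lk + b_coef[2] * Lk**2 + b_coef[3] * Lk**3
    sig = np.sqrt(sig_p**2 + sig_c**2)                    # Gaussian blurs compose by adding variances
    Up, Vp = np.meshgrid(np.arange(1, Wp + 1), np.arange(1, Hp + 1))
    return griddata((Upk, Vpk), sig, (Up, Vp), method="linear")   # kernel at integer projector pixels
```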
It should be noted here that calibrating the fringe projection structured light three-dimensional measurement system in step 3-1) is a conventional method. A calibration board (not shown) is placed in the common field of view of the projector and the camera after their relative position has been fixed; the calibration board is a standard board consisting of one large marker point, which defines the world coordinate system, and a number of small marker points. The position of the calibration board is changed several times to obtain the world coordinates of the marker points at the different positions; at each position, the projector projects the measurement template images onto the calibration board and the camera photographs the board. The computer processes the images captured by the camera to obtain the coordinates and intensity values of the marker points in the camera image coordinate system, and decodes the intensity values to obtain the coordinates of the marker points in the projector image coordinate system. From the marker point coordinates in the world coordinate system, in the camera image coordinate system and in the projector image coordinate system, the projector intrinsic parameter matrix K_P, the camera intrinsic parameter matrix K_C, the rotation matrix R_pc between the projector and the camera, and the translation vector t_pc are obtained using OpenCV or a binocular camera calibration toolbox. The camera coordinate system is the coordinate system with its origin at the optical center of the camera, in millimeters (mm); the camera image coordinate system is the coordinate system with its origin at the upper-left corner of the image on the camera imaging plane, in pixels; the projector coordinate system is the coordinate system with its origin at the optical center of the projector, in millimeters (mm); the projector image coordinate system is the coordinate system with its origin at the upper-left corner of the image on the projector imaging plane, in pixels. The projector intrinsic parameter matrix K_P and the camera intrinsic parameter matrix K_C contain the focal lengths of the projector and the camera in the horizontal and vertical directions and the image principal points, and express the transformation between the camera coordinate system and the camera image coordinate system and between the projector coordinate system and the projector image coordinate system, respectively. The rotation matrix R_pc and the translation vector t_pc between the projector and the camera express the transformation between the projector coordinate system and the camera coordinate system.
4) Optimally design the sinusoidal phase-shift fringe image sequences according to the camera defocus kernel and the projector defocus kernel fed back from the depth of field. According to the reciprocal relationship between the optimal frequency and the defocus kernel σ (the optimal fringe frequency is inversely proportional to the defocus kernel size), and using the synthetic defocus kernels { σ_j^p, j = 1, 2, …, N_sv^p } in the common field of view of the camera and the projector obtained in step 3), the frequency of the sinusoidal fringe image sequence is optimally designed so that it varies with measurement position and depth of field, and the variable-frequency sinusoidal fringe image sequence of the N-step phase-shift method is generated from the position-dependent optimized frequency. The specific steps are:
4-1) Using the calibrated camera intrinsic parameters, calculate the normalized pixel coordinates (u_ck, v_ck) of the common-view camera pixels in the camera coordinate system:

u_ck = (U_ck − U_c0) / f_cu,   v_ck = (V_ck − V_c0) / f_cv

Using the calibrated projector intrinsic parameters, calculate the normalized pixel coordinates (u_pk, v_pk) of the common-view projector pixels:

u_pk = (U_pk − U_p0) / f_pu,   v_pk = (V_pk − V_p0) / f_pv
4-2) FIG. 5 also illustrates the longitudinal variable-frequency sinusoidal fringe image sequence designed from the three-dimensional coordinate point cloud of the first measurement. On the projector image plane, when designing the longitudinal variable-frequency sinusoidal fringe image, consider an abscissa pixel coordinate U_p^j whose ordinate varies over the range [V_1, V_2] (V_2 > V_1) within the common field of view. For this abscissa pixel coordinate, the optimized equivalent defocus kernel is calculated, the corresponding optimal frequency is designed from it, and the variable-frequency sinusoidal fringe image sequence is thereby optimally designed. The specific steps are:
4-2-1) At the abscissa pixel coordinate U_p^j, the measurement error coefficient k_j is calculated from the normalized projector coordinates (u_pj, v_pj) of the projector pixel, the normalized coordinates (u_cj, v_cj) of the matched camera pixel, and the intermediate variables E and J:

E = r_11·u_cj + r_12·v_cj + r_13

J = r_31·u_cj + r_32·v_cj + r_33

4-2-2) The optimized equivalent defocus kernel at the abscissa pixel coordinate U_p^j is then calculated as a weighted combination of the synthetic defocus kernels in that column, where σ_j^p is the synthetic defocus kernel size at the pixel and λ_j is a weighting factor computed from the measurement error coefficients k_j.
4-3) Within the common field of view of the camera and the projector, let the minimum and maximum values of the abscissa pixel coordinate be U_p,min and U_p,max, respectively. Following the method of 4-2), the optimized equivalent defocus kernel of each column of pixels is calculated in turn from U_p,min to U_p,max, giving the corresponding sequence of optimized equivalent defocus kernels and abscissa pixel coordinates.
and 4-4) performing piecewise linear fitting on the optimized equivalent defocusing kernel and the abscissa pixel coordinate sequence to obtain piecewise fitting parameters. As shown in FIG. 6, the section length is set to WsThe width of the projector image plane is WprjPerforming linear fitting on the mth segment optimized equivalent defocusing kernel and the abscissa pixel coordinate in the public visual field to obtain a piecewise linear relation
σ m=αmUpm,m=1,2,…,Su
In the formula, αmAnd βmAnd optimizing the slope and intercept of the equivalent defocusing kernel and the coordinate straight line fitting of the abscissa pixel for the mth section. SuThe number of segmentation intervals for the projector abscissa pixel in the public view is calculated as:
where the round () function represents a rounding function. Abscissa pixel intervalInner defocus kernel set to constantTime, abscissa pixel coordinateAnd (3) calculating the phase:
when αmWhen not equal to 0, the abscissa pixel coordinateAnd (3) calculating the phase:
in the formula, C0Is composed ofIn the phase value of m isThe number of segments in which it is located. Cm-1Is the phase value at the end of the m-1 th segment interval αmAnd βmThe slope and intercept of the m-th segment of the straight line fit are respectively.When αmWhen 0, the abscissa pixel coordinateAnd (3) calculating the phase:
interval(s)Inner defocus kernel set to constantTime, abscissa pixel coordinateAnd (3) calculating the phase:
in the formula, αSuIs the S thuSlope of the segment, βSuIs the S thuSlope of segment
The ordinate pixel coordinate can be designed according to the same method as the steps 4-2) to 4-4)To phase position
4-5) From the variable-frequency phases φ(U_p) and φ(V_p) obtained in step 4-4), generate the N-step longitudinal variable-frequency sinusoidal fringe image sequence and the N-step transverse variable-frequency sinusoidal fringe image sequence, respectively:

VI_i^p(U_p, V_p) = I_p′ + I_p″·cos( φ(U_p) + 2πi/N ),   i = 1, 2, …, N

HI_i^p(U_p, V_p) = I_p′ + I_p″·cos( φ(V_p) + 2πi/N ),   i = 1, 2, …, N

where I_p′ and I_p″ are the mean value and the amplitude of the generated variable-frequency sinusoidal images, respectively.
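A sketch of the fringe-generation step: here the local fringe period is taken proportional to the column-wise synthetic defocus kernel, T(U_p) = c·σ(U_p), and the phase is accumulated numerically by a cumulative sum rather than with the closed-form piecewise expressions of step 4-4); the constant c, the image height and the gray-level range are placeholders:

```python
import numpy as np

def variable_frequency_fringes(sigma_col, N=20, c=2 * np.pi, Hp=1140, I_mean=127.5, I_amp=100.0):
    """Generate an N-step longitudinal variable-frequency fringe sequence from the
    per-column synthetic defocus kernel (illustrative sketch of steps 4-3) to 4-5))."""
    T = c * np.asarray(sigma_col, dtype=float)        # assumed optimal period per projector column (px)
    phase = 2 * np.pi * np.cumsum(1.0 / T)            # phi(U) accumulated from the local frequency 1/T
    imgs = [np.tile(I_mean + I_amp * np.cos(phase + 2 * np.pi * i / N), (Hp, 1))
            for i in range(1, N + 1)]                 # vertical stripes: one row repeated Hp times
    return np.stack(imgs), phase                      # also return phi(U_p) for decoding
```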
5) The computer controls the projector to project the optimized N-step longitudinal variable-frequency sinusoidal fringe image sequence and N-step transverse variable-frequency sinusoidal fringe image sequence, and the camera synchronously captures the corresponding N-step variable-frequency sinusoidal fringe image sequences. The monotonic relative phase corresponding to each camera pixel coordinate in the common field of view of the projector and the camera is obtained with the N-step phase-shift method; using the projector pixel coordinates {(U_pk, V_pk), k = 1, 2, …, N_sv} initially matched in step 3-2), with the uniform-frequency sinusoidal fringe image sequences, to the camera pixel coordinates {(U_ck, V_ck), k = 1, 2, …, N_sv} in the common field of view, the phase cycle number Q(U_ck, V_ck), k = 1, 2, …, N_sv, corresponding to each camera pixel coordinate is calculated, giving the absolute phase; the projector pixel coordinates re-optimally matched to the camera pixel coordinates are then obtained by decoding. The specific steps are as follows:
5-1) Using the projector of the structured light measurement system, project the transverse variable-frequency sinusoidal fringe image sequence and the longitudinal variable-frequency sinusoidal fringe image sequence onto the surface of the measured object, and capture the corresponding transverse and longitudinal variable-frequency sinusoidal fringe image sequences with the camera. From the captured longitudinal variable-frequency fringe image sequence, the wrapped relative phase (encoding the projector abscissa) at each common-view camera pixel coordinate (U_ck, V_ck) (k = 1, 2, …, N_sv) is calculated with the phase-shift method:

φ(U_ck, V_ck) = arctan[ Σ_{i=1}^{N} I_i^c(U_ck, V_ck)·sin(2πi/N) / Σ_{i=1}^{N} I_i^c(U_ck, V_ck)·cos(2πi/N) ]

where I_i^c(U_ck, V_ck) is the gray value of the i-th captured longitudinal variable-frequency fringe image. This is converted to a monotonic relative phase:

φ_m(U_ck, V_ck) = φ(U_ck, V_ck)  if φ(U_ck, V_ck) ≥ 0,  and  φ(U_ck, V_ck) + 2π  otherwise
5-2) Using the projector pixel coordinates {(U_pk, V_pk), k = 1, 2, …, N_sv} matched in step 3-2), with the uniform-frequency sinusoidal fringe image sequences, to the camera pixel coordinates {(U_ck, V_ck), k = 1, 2, …, N_sv} in the common field of view, together with the monotonic variable-frequency relative phase obtained in step 5-1), the number of 2π cycles of the absolute variable-frequency phase at camera pixel (U_ck, V_ck) is calculated as:

Q(U_ck, V_ck) = round( ( φ(U_pk) − φ_m(U_ck, V_ck) ) / 2π )

where round() is the rounding function and φ(U_pk) is the designed variable-frequency phase of step 4-4) evaluated at the initially matched projector abscissa U_pk. The absolute variable-frequency phase at camera pixel coordinate (U_ck, V_ck) can then be calculated as:

Φ(U_ck, V_ck) = φ_m(U_ck, V_ck) + 2π·Q(U_ck, V_ck)
5-3) From the obtained absolute variable-frequency phase Φ(U_ck, V_ck) at camera pixel coordinate (U_ck, V_ck), the matched projector pixel abscissa is calculated by inverting the piecewise phase relationship of step 4-4): when α_m ≠ 0, the abscissa is obtained from the phase expression of the segment containing Φ(U_ck, V_ck); when α_m = 0, it is obtained from the corresponding linear phase expression. The refined abscissa is denoted U_pk^new(U_ck, V_ck).

Using the captured transverse variable-frequency fringe image sequence and the same method as in steps 5-1) to 5-3), the matched projector ordinate V_pk^new(U_ck, V_ck) of each common-view camera pixel coordinate (U_ck, V_ck) (k = 1, 2, …, N_sv) is calculated, giving the re-matched projector abscissa and ordinate sequence:

{ (U_pk^new, V_pk^new),  k = 1, 2, …, N_sv }
6) Using the matched camera and projector pixel coordinates obtained by the re-matching in step 5), the camera intrinsic parameter matrix K_C and projector intrinsic parameter matrix K_P calibrated in step 3-1), and the rotation matrix R_pc = [r_ij]_{3×3} and translation vector t_pc = [t_1, t_2, t_3]^T of the camera relative to the projector, a new three-dimensional coordinate point cloud of the measured object is reconstructed with the three-dimensional reconstruction algorithm.
6-1) Camera and projector common View, Pixel coordinate sequence { (U) in Camera coordinate Systemck,Vck),k=1,2,…,NsvThe corresponding three-dimensional coordinate sequence isFrom the camera intrinsic parameters, pixel coordinates (U), calibrated in step 3-1)ck,Vck) And three-dimensional coordinatesSatisfy a first equation:
re-matching the corresponding projector pixel coordinate sequence under the projector coordinate systemCorresponding three-dimensional coordinate sequence isThe projector internal parameters and the projector pixel coordinates calibrated in the step 3-1)And three-dimensional coordinatesSatisfy the second equation:
transformation relation matrix R of camera relative to projectorpc=[rij]3×3And a translation vector tpc=[t1,t2,t3]TRelation, three-dimensional coordinate points in the camera coordinate systemCorresponding to three-dimensional coordinate points in the projector coordinate systemSatisfy a third equation:
according to the first equation, the second equation and the third equation, calculating to obtain a new three-dimensional coordinate point cloud of the surface of the measured object in the common view of the camera and the projector, wherein the new three-dimensional coordinate point cloud under the projector coordinate is as follows:
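Taken together, the first, second and third equations amount to a linear triangulation of each matched camera-projector pixel pair. The sketch below shows one common way to solve them with the calibrated intrinsics and extrinsics; the frame convention used here (camera frame as reference, projector point X_p = R_pc·X_c + t_pc) is one consistent reading of the third equation, and the SVD-based solver and variable names are assumptions for illustration:

```python
import numpy as np

def triangulate(Kc, Kp, Rpc, tpc, cam_px, proj_px):
    """Reconstruct one 3-D point from a matched camera/projector pixel pair."""
    tpc = np.asarray(tpc, dtype=float).reshape(3)
    Pc = Kc @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera projection (camera frame)
    Pp = Kp @ np.hstack([Rpc, tpc.reshape(3, 1)])        # projector projection via R_pc, t_pc
    u, v = cam_px
    up, vp = proj_px
    A = np.vstack([u * Pc[2] - Pc[0],
                   v * Pc[2] - Pc[1],
                   up * Pp[2] - Pp[0],
                   vp * Pp[2] - Pp[1]])                  # DLT constraints on the homogeneous point
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    Xc = X[:3] / X[3]                                    # point in the camera coordinate system
    return Rpc @ Xc + tpc                                # same point in the projector coordinate system
```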
when a new measured object is reselected, only the steps 3) to 6) need to be repeated.

Claims (7)

1. A variable-frequency fringe projection structured light measurement method based on depth of field feedback, characterized by comprising the following steps:

1) building a projector defocus kernel calibration system; calibrating the projector defocus kernel size at different depths of field with the built calibration system, and fitting to obtain the functional relation between the projector defocus kernel and the projector depth of field;

2) building a camera defocus kernel calibration system; calibrating the camera defocus kernel size at different depths of field with the built calibration system, and fitting to obtain the relation between the camera defocus kernel and the camera depth of field;

3) building and calibrating a fringe projection structured light measurement system to obtain the camera intrinsic parameter matrix, the projector intrinsic parameter matrix, and the transformation relation matrix and translation vector of the camera relative to the projector; measuring the surface of the measured object with a uniform-frequency sinusoidal fringe image sequence to obtain a first-measurement three-dimensional coordinate point cloud of the measured object; from this point cloud, obtaining the projector depth of field of each corresponding projector pixel and the camera depth of field of each corresponding camera pixel coordinate in the common field of view of the camera and the projector, and calculating the projector defocus kernel size and the camera defocus kernel size on the measured object surface in the common field of view using the calibration results of step 1) and step 2), respectively;

4) optimally designing the sinusoidal phase-shift fringe image sequence according to the camera defocus kernel and the projector defocus kernel fed back by the depth of field; according to the reciprocal relation between the optimal frequency and the defocus kernel, and the comprehensive defocus kernel in the common field of view of the camera and the projector obtained in step 3), optimally designing the frequency of the sinusoidal fringe image sequence so that it varies with the measurement position and the depth of field, and generating the N projected images of the variable-frequency sinusoidal fringe image sequence for the N-step phase shift method, where N > 3;

5) controlling the projector with the computer to project the optimized N-step longitudinal variable-frequency sinusoidal fringe image sequence and N-step transverse variable-frequency sinusoidal fringe image sequence, and acquiring them synchronously with the camera to obtain the corresponding N-step variable-frequency sinusoidal fringe image sequences; obtaining the monotonic relative phase at each camera pixel coordinate in the common field of view of the projector and the camera by the N-step phase shift method, and calculating the phase cycle number at each camera pixel coordinate using the projector pixel coordinates matched to the camera pixel coordinates in the common field of view obtained in step 3), thereby obtaining the absolute phase; obtaining the projector pixel coordinates re-optimally matched to the camera pixel coordinates by decoding;

6) using the camera pixel coordinates and the re-optimally matched projector pixel coordinates obtained in step 5), together with the camera intrinsic parameter matrix, the projector intrinsic parameter matrix, and the transformation relation matrix and translation vector of the camera relative to the projector obtained in step 3), reconstructing a new three-dimensional coordinate point cloud of the measured object with a three-dimensional reconstruction algorithm.
2. The method according to claim 1, wherein the step 1) comprises the following steps:
1-1) building a projector defocus kernel calibration system; the projector defocus kernel calibration system comprises a slide rail, a sliding table, a projector, a camera, a computer and a projection screen; the slide rail is fixed on a flat, stable desktop, the projection screen is fixed at one end of the slide rail, and the projection screen is perpendicular to the movement direction of the slide rail; the sliding table is mounted on the slide rail and can move back and forth along it; the projector is fixed on the sliding table so that, as the sliding table moves along the slide rail, the optical axis of the projector remains perpendicular to the projection screen; the camera is placed on the desktop on one side of the slide rail and focused on the projection screen; the camera and the projector are each connected to the computer through USB data lines, and the hard trigger output of the projector is connected to the hard trigger input of the camera through a flat cable, forming the projector defocus kernel calibration system;

1-2) moving the sliding table to change the distance between the projector and the projection screen, calibrating the projector defocus kernel size at different projector depths of field by a cross-correlation analysis method, and fitting to obtain the functional relation between the projector defocus kernel and the depth of field; the specific steps are as follows:

1-2-1) using the computer, generate a rectangular-array projector defocus kernel calibration image;

1-2-2) at the focus position of the projector, measure the depth of field H_f from the projector to the projection screen, and project in sequence the projector defocus kernel calibration image, a sequence of N uniform longitudinal low-frequency sinusoidal fringe images, a sequence of N uniform longitudinal high-frequency sinusoidal fringe images, a sequence of N uniform transverse low-frequency sinusoidal fringe images and a sequence of N uniform transverse high-frequency sinusoidal fringe images onto the projection screen;
After each projection, the camera acquires the corresponding projected image on the projection screen. According to the phase shift method, the projector low-frequency longitudinal relative phase Vφ_u(U_c, V_c) and high-frequency longitudinal relative phase Vφ_h(U_c, V_c) matched to each camera pixel coordinate (U_c, V_c) in the common field of view of the camera and the projector are calculated, expressed as:

where the terms in the formulas are the gray values at camera pixel coordinate (U_c, V_c) of the i-th image of the longitudinal low-frequency and longitudinal high-frequency sinusoidal fringe image sequences acquired by the camera in the common field of view of the camera and the projector;

Likewise, the projector transverse low-frequency relative phase and transverse high-frequency relative phase matched to each camera pixel coordinate (U_c, V_c) in the common field of view are calculated, expressed as:

where the terms in the formulas are the gray values at camera pixel coordinate (U_c, V_c) of the i-th image of the transverse low-frequency and transverse high-frequency sinusoidal fringe image sequences acquired by the camera in the common field of view;

1-2-3) From the longitudinal low-frequency relative phase Vφ_u(U_c, V_c), longitudinal high-frequency relative phase Vφ_h(U_c, V_c), transverse low-frequency relative phase Hφ_u(U_c, V_c) and transverse high-frequency relative phase Hφ_h(U_c, V_c) obtained in step 1-2-2) for each pixel coordinate (U_c, V_c) of the camera image coordinate system in the common field of view, calculate the matched projector abscissa and ordinate of each pixel coordinate (U_c, V_c), expressed as:

where round() is the rounding function, T_u is the period of the uniform low-frequency sinusoidal fringes, and T_h is the period of the uniform high-frequency sinusoidal fringes;
1-2-4) Using the bilinear interpolation function griddata in MATLAB, calculate the gray value of the projector image at each integer pixel coordinate, thereby obtaining a geometrically corrected image with the same resolution as the projector defocus kernel calibration image, expressed as:

where the first two arguments are the abscissa and ordinate sequences corresponding to the camera pixel coordinates {(U_c, V_c)} in the common field of view, the third argument is the sequence of gray values of the rectangular-array image acquired by the camera in the common field of view, and {U_p} and {V_p} are the integer pixel coordinate sequences of the projector image, expressed as:

({U_p}, {V_p}) = meshgrid(1:W_p, 1:H_p)

where meshgrid() is the grid point generating function, and W_p and H_p are the transverse and longitudinal pixel resolutions of the projector image, respectively;

1-2-5) Using the computer, apply Gaussian blurs of different degrees to the geometrically corrected image to generate a rectangular-array standard template blurred image sequence with known defocus kernels, I_{1+(i-1)·0.2}, i = 1, 2, …, 96, expressed as:

where G_{1+(i-1)·0.2}(U_p, V_p) is a Gaussian blur function with defocus kernel σ = 1 + (i-1)·0.2, applied to the geometrically corrected image by convolution;
1-2-6) Perform cross-correlation between the geometrically corrected image obtained in step 1-2-4) and each image in the rectangular-array standard template blurred image sequence. In the calculation, each rectangular sub-block {w, h} of the geometrically corrected image is cross-correlated with the corresponding rectangular sub-blocks tI_1{w, h}, tI_1.2{w, h}, tI_1.4{w, h}, …, tI_20{w, h} of the standard template blurred image sequence, giving cross-correlation coefficients r_1{w, h}, r_1.2{w, h}, r_1.4{w, h}, …, r_20{w, h}. The defocus kernel size of the standard template blurred image with the maximum cross-correlation coefficient for sub-block {w, h} is taken as the defocus kernel size of the rectangular sub-block in column w, row h. Calculating the defocus kernel size of every rectangular sub-block of the geometrically corrected image in this way gives the defocus kernel distribution array of the geometrically corrected image:

The obtained defocus kernel distribution array is averaged:

and the average is taken as the defocus kernel size at the current position of the projector;
1-2-7) Move the sliding table along the slide rail in equal steps ΔH so that the projector moves away from the projection screen, repeat steps 1-2-2) to 1-2-6), and calculate the projector defocus kernel size at the corresponding current depth of field, thereby obtaining the corresponding sequence of projector depth of field H and defocus kernel size σ_p(H):

Since the projector defocus kernel σ_p varies approximately linearly with the projector depth of field H, a straight line is fitted to obtain the relation between the projector defocus kernel and the depth of field:

σ_p = a_p·H + k_p

where a_p and k_p are the linear fitting coefficients of the projector defocus kernel with respect to the depth of field, and H is the projector depth of field.
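A compact Python sketch of the sub-block cross-correlation search described in step 1-2-6) is given below. The block size, image names and the use of scipy's gaussian_filter are illustrative assumptions; only the overall scheme (a bank of 96 known kernels from 1 to 20 in steps of 0.2, matched per sub-block by the maximum normalized cross-correlation, then averaged) follows the claim text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_defocus_map(corrected, template, block=64,
                         sigmas=np.arange(1.0, 20.2, 0.2)):   # 96 kernels: 1.0, 1.2, ..., 20.0
    """Per-block defocus kernel map for the geometrically corrected capture."""
    bank = [gaussian_filter(template.astype(float), s) for s in sigmas]
    H, W = corrected.shape
    kmap = np.zeros((H // block, W // block))
    for h in range(H // block):
        for w in range(W // block):
            sl = np.s_[h * block:(h + 1) * block, w * block:(w + 1) * block]
            p = corrected[sl].astype(float)
            p = (p - p.mean()) / (p.std() + 1e-9)
            best_sigma, best_r = sigmas[0], -np.inf
            for s, b in zip(sigmas, bank):
                t = b[sl]
                t = (t - t.mean()) / (t.std() + 1e-9)
                r = float(np.mean(p * t))                     # normalized cross-correlation
                if r > best_r:
                    best_r, best_sigma = r, s
            kmap[h, w] = best_sigma
    # After repeating the calibration at several depths (step 1-2-7), the linear relation
    # sigma_p = a_p*H + k_p can be fitted, e.g. a_p, k_p = np.polyfit(H_list, kernel_list, 1).
    return kmap, float(kmap.mean())                           # distribution array and its average
```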
3. The method according to claim 2, wherein the step 2) comprises the following steps:
2-1) building a camera defocus kernel calibration system; the same equipment as the projector defocus kernel calibration system of step 1-1) is used, in which the slide rail is fixed on a stable desktop, the projection screen is fixed at one end of the slide rail, and the projection screen is perpendicular to the movement direction of the slide rail; the sliding table is mounted on the slide rail and can move back and forth along it; the camera is fixed on the sliding table so that, as the sliding table moves along the slide rail, the optical axis of the camera remains perpendicular to the projection screen; the camera is connected to the computer through a USB data line;

2-2) fitting to obtain the relation between the camera defocus kernel and the depth of field; the specific steps are as follows:

2-2-1) Using the computer, generate a black-and-white vertical stripe image for camera defocus kernel calibration, print it and fix it flat on the projection screen; then move the sliding table back and forth until the captured image is sharpest, so that the projection screen lies at the focus position of the camera;

Measure the depth of field L_f from the camera to the projection screen at the focus position and capture an image with the camera. Extract all black-and-white stripe boundaries from the captured image with the Canny operator to obtain a binary image with white boundaries on a black background, and from this image extract the black-and-white stripe boundary pixel coordinates {(x_ei, y_ei), i = 1, 2, …, N_e}, where N_e is the number of boundary pixel coordinates;

2-3) Re-blur the acquired image with a Gaussian function whose defocus kernel is σ_0 to obtain a re-blurred image, and compute the image gradients of the original image and of the re-blurred image. For one point (x_ei, y_ei) of the extracted black-and-white stripe boundary coordinates {(x_ei, y_ei), i = 1, 2, …, N_e}, calculating the defocus kernel at that point comprises the following sub-steps:

2-3-1) Compute the gradient ratio at the black-and-white boundary image coordinate (x_ei, y_ei), expressed as:

where σ_ei is the defocus kernel size at the boundary coordinate (x_ei, y_ei);

2-3-2) The defocus kernel size σ_ei at the boundary pixel coordinate (x_ei, y_ei) is then calculated as:

Calculate the defocus kernel size of every pixel on the boundary line by the methods of 2-3-1) and 2-3-2), and take the average as the camera defocus kernel size at the current depth of field L_f, calculated as:
2-4) Move the sliding table successively in equal steps ΔL so that the camera moves away from or toward the projection screen; repeat steps 2-1) to 2-3) to calculate the camera defocus kernel size at different camera depths of field L, thereby obtaining the corresponding sequence of camera defocus kernel σ_c and camera depth of field L:

where l_1 and l_2 are the numbers of movements from the camera focus position (depth of field L_f) toward and away from the projection screen, respectively; using cubic polynomial fitting, the relation between the camera defocus kernel σ_c and the depth of field L is obtained as:

σ_c = b_0 + b_1·L + b_2·L² + b_3·L³

where b_i (i = 0, 1, 2, 3) are the cubic curve fitting coefficients and L is the camera depth of field.
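The sketch below illustrates the re-blur, gradient-ratio style of camera defocus estimation at the Canny edge pixels described in steps 2-3-1) and 2-3-2). The closed-form relation σ = σ_0 / sqrt(R² − 1) is the standard result for a Gaussian-blurred step edge re-blurred with σ_0; the patent's own formulas are given only as images, so that relation and all names here are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def camera_defocus_at_depth(image, edge_points, sigma0=1.0):
    """Average defocus kernel over black/white stripe boundary pixels."""
    img = image.astype(float)
    reblur = gaussian_filter(img, sigma0)                 # re-blur with known kernel sigma0
    grad = lambda a: np.hypot(sobel(a, axis=0), sobel(a, axis=1))
    g_orig, g_reblur = grad(img), grad(reblur)
    sigmas = []
    for (x, y) in edge_points:                            # (x, y) = (column, row) of an edge pixel
        R = g_orig[y, x] / max(g_reblur[y, x], 1e-9)      # gradient ratio at the boundary point
        if R > 1.0:
            sigmas.append(sigma0 / np.sqrt(R * R - 1.0))  # assumed closed-form inversion
    # With kernels estimated at several depths L (step 2-4), a cubic fit follows via
    # b3, b2, b1, b0 = np.polyfit(L_list, kernel_list, 3).
    return float(np.mean(sigmas))                         # defocus kernel at this depth of field
```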
4. The method according to claim 3, wherein the step 3) comprises the following steps:
3-1) Fix the relative position of the camera and the projector so that they share a common field of view, forming a fringe projection structured light measurement system; calibrate the formed system with a structured light measurement system calibration method to obtain the camera intrinsic parameter matrix:

and the intrinsic parameter matrix of the projector:

where f_cu and f_cv are the focal lengths of the camera in the U_c and V_c directions, respectively; f_pu and f_pv are the focal lengths of the projector in the U_p and V_p directions, respectively; U_c0 and V_c0 are the center-point coordinates of the camera in the U_c and V_c directions; U_p0 and V_p0 are the center-point coordinates of the projector in the U_p and V_p directions; the camera resolution is W_c × H_c and the projector resolution is W_p × H_p;

Calibrating the structured light measurement system also yields the transformation relation matrix of the camera relative to the projector:

R_pc = [r_ij]_3×3

and the translation vector:

t_pc = [t_1, t_2, t_3]^T

where r_ij is the element in row i, column j of the matrix R_pc, and t_1, t_2, t_3 are the translation amounts in the X, Y and Z directions, respectively;
3-2) Measure the measured object in the common field of view of the camera and the projector using the phase shift method to obtain the camera pixel coordinates {(U_ck, V_ck), k = 1, 2, …, N_sv} and the corresponding camera depths of field {L_k, k = 1, 2, …, N_sv} in the common field of view, where N_sv is the number of camera pixels in the common field of view; the projector pixel coordinates matched to all camera pixel coordinates by the phase shift method are {(U_pk, V_pk), k = 1, 2, …, N_sv}, with corresponding projector depths of field {H_k, k = 1, 2, …, N_sv}. Using the relation between the projector defocus kernel σ_p and the projector depth of field H calibrated in step 1), the corresponding projector defocus kernel sizes in the common field of view are calculated as:

{σ_pk = a_p·H_k + k_p, k = 1, 2, …, N_sv}

Using the relation between the camera defocus kernel σ_c and the camera depth of field L calibrated in step 2), the corresponding camera defocus kernel sizes in the common field of view are calculated as:

{σ_ck = b_0 + b_1·L_k + b_2·L_k² + b_3·L_k³, k = 1, 2, …, N_sv}

The comprehensive defocus kernel in the common field of view is then calculated from the properties of Gaussian blur as:

Using the MATLAB bilinear interpolation function griddata, the comprehensive defocus kernels at the integer projector pixel coordinates in the common field of view are calculated as:

where the length of this sequence is the number of integer-point projector coordinates in the common field of view.
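The sketch below shows one way to carry out these two steps: combining the per-point camera and projector defocus kernels and resampling them onto the integer projector pixel grid, with scipy's griddata playing the role of MATLAB's griddata. The root-sum-of-squares combination follows the usual rule that cascaded Gaussian blurs add in variance; the patent's combined-kernel formula itself is given only as an image, so that form and the function name are assumptions:

```python
import numpy as np
from scipy.interpolate import griddata

def combined_kernel_on_grid(Upk, Vpk, sigma_ck, sigma_pk, Wp, Hp):
    """Comprehensive defocus kernel at every integer projector pixel."""
    sigma = np.sqrt(np.asarray(sigma_ck) ** 2 + np.asarray(sigma_pk) ** 2)   # assumed combination
    gU, gV = np.meshgrid(np.arange(1, Wp + 1), np.arange(1, Hp + 1))         # meshgrid(1:Wp, 1:Hp)
    return griddata(np.column_stack([Upk, Vpk]), sigma, (gU, gV), method='linear')
```

Here Upk and Vpk are the matched projector pixel coordinates {(U_pk, V_pk)}, and sigma_ck, sigma_pk are the camera and projector defocus kernel sequences computed above.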
5. The method according to claim 4, wherein the step 4) comprises the following steps:
4-1) Using the calibrated camera intrinsic parameters, calculate the normalized pixel coordinates (u_ck, v_ck) of the common-field-of-view camera pixels in the camera coordinate system, expressed as:

Using the calibrated projector intrinsic parameters, calculate the normalized pixel coordinates (u_pk, v_pk) of the common-field-of-view projector pixels, expressed as:

4-2) Using the three-dimensional coordinate point cloud of the measured object from the first measurement, optimally design the longitudinal variable-frequency sinusoidal fringe image sequence over the abscissa pixel coordinates of the projector image plane. For a given abscissa pixel coordinate, the ordinate within the common field of view varies over the range [V_1, V_2] (V_2 > V_1); calculate the optimized equivalent defocus kernel at that abscissa pixel coordinate, thereby design the optimal frequency and optimally design the variable-frequency sinusoidal fringe image sequence; the specific steps are as follows:

4-2-1) At the pixel coordinate, the measurement error coefficient k_j is calculated as:

where the first factor is the normalized coordinate of the projector coordinate, u_cj and v_cj are the normalized coordinates of the camera pixel coordinate corresponding to the projector coordinate, and the intermediate variables E and J are, respectively:

E = r_11·u_cj + r_12·v_cj + r_13

J = r_31·u_cj + r_32·v_cj + r_33

4-2-2) The optimized equivalent defocus kernel at the abscissa pixel coordinate is calculated as:

where the first term is the comprehensive defocus kernel size at the pixel coordinate, and λ_j is a weighting factor calculated as:
4-3) In the common field of view of the camera and the projector, the minimum value of the abscissa pixel coordinate is:

and the maximum value is:

According to the method of 4-2), calculate the optimized equivalent defocus kernel size of each pixel column in turn from the minimum to the maximum abscissa, obtaining the corresponding sequence of optimized equivalent defocus kernels and abscissa pixel coordinates:

4-4) Perform piecewise linear fitting on the optimized equivalent defocus kernels and the abscissa pixel coordinate sequence to obtain the piecewise fitting parameters; linear fitting of the m-th segment of optimized equivalent defocus kernels against the abscissa pixel coordinates in the common field of view gives the piecewise linear relation:

σ_m = α_m·U_p + β_m, m = 1, 2, …, S_u

where α_m and β_m are the slope and intercept of the m-th segment's linear fit of the optimized equivalent defocus kernel against the abscissa pixel coordinate, and S_u is the number of segmentation intervals of the projector abscissa pixels in the common field of view, calculated as:
where W_s is the interval length and round() is the rounding function. Within the abscissa pixel interval whose defocus kernel is set to a constant, the phase at the abscissa pixel coordinate is calculated as:

When α_m ≠ 0, the phase at the abscissa pixel coordinate is calculated as:

where C_0 is the phase value at the starting coordinate, m is the segment number, C_{m-1} is the phase value at the end of the (m-1)-th segment interval, and α_m and β_m are the slope and intercept of the m-th straight-line segment. When α_m = 0, the phase at the abscissa pixel coordinate is calculated as:

Within the last interval, whose defocus kernel is set to a constant, the phase at the abscissa pixel coordinate is calculated as:

where the two fitted parameters are the slope and the intercept of the S_u-th segment, respectively;

The phase over the ordinate pixel coordinates is obtained by the same method as steps 4-2) to 4-4).
4-5) According to the variable-frequency phases obtained in step 4-4), generate the N-step longitudinal variable-frequency sinusoidal fringe image sequence and the N-step transverse variable-frequency sinusoidal fringe image sequence, respectively, as:

where I_p′ and I_p″ are, respectively, the mean value and the amplitude of the generated variable-frequency sinusoidal image.
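For illustration, a minimal sketch of generating an N-step longitudinal variable-frequency fringe sequence from a per-column phase profile is shown below. The cosine/phase-shift convention, the mean and amplitude values and the resolution are assumptions; only the idea of driving the fringe intensity with the designed variable-frequency phase follows the claim text:

```python
import numpy as np

def longitudinal_fringes(phase_u, n_steps=4, mean=127.5, amp=100.0, height=800):
    """N-step fringe images from a designed phase profile phase_u (one value per
    projector abscissa); the phase is constant along each image column."""
    phase_map = np.tile(np.asarray(phase_u, dtype=float), (height, 1))
    frames = []
    for i in range(n_steps):
        shift = 2.0 * np.pi * i / n_steps                      # N-step phase shift
        img = mean + amp * np.cos(phase_map + shift)
        frames.append(np.clip(img, 0, 255).astype(np.uint8))
    return frames
```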
6. The method according to claim 5, wherein the step 5) comprises the following steps:
5-1) Using the projector of the structured light measurement system, project the transverse variable-frequency sinusoidal fringe image sequence and the longitudinal variable-frequency sinusoidal fringe image sequence onto the surface of the measured object, and acquire the corresponding transverse and longitudinal variable-frequency sinusoidal fringe image sequences with the camera; from the acquired longitudinal variable-frequency fringe image sequence, calculate by the phase shift method the transverse relative phase at each camera pixel coordinate (U_ck, V_ck) (k = 1, 2, …, N_sv) in the common field of view:

which is then converted to a monotonic transverse relative phase, calculated as:

5-2) Using the projector pixel coordinates {(U_pk, V_pk), k = 1, 2, …, N_sv} matched to the camera pixel coordinates {(U_ck, V_ck), k = 1, 2, …, N_sv} in the camera-projector common field of view, obtained in the first measurement with the uniform-frequency sinusoidal fringe image sequence in step 3-2), together with the monotonic variable-frequency relative phase calculated in step 5-1), calculate the number of 2π cycles of the variable-frequency absolute phase at camera pixel (U_ck, V_ck), expressed as:

where round() is the rounding function; the variable-frequency absolute phase at camera pixel coordinate (U_ck, V_ck) is then calculated as:

5-3) From the obtained variable-frequency absolute phase at camera pixel coordinate (U_ck, V_ck): when α_m ≠ 0, the projector pixel abscissa matched to camera pixel coordinate (U_ck, V_ck) is calculated as:

when α_m = 0, it can be calculated as:

Using the acquired transverse variable-frequency fringe image sequence and the same procedure as steps 5-1) to 5-3), calculate the matched projector ordinate for each camera pixel coordinate (U_ck, V_ck) (k = 1, 2, …, N_sv) in the common field of view, thereby obtaining the re-matched projector abscissa and ordinate sequences:
7. the method according to claim 6, wherein the step 6) comprises the following steps:
6-1) In the common field of view of the camera and the projector, the pixel coordinate sequence {(U_ck, V_ck), k = 1, 2, …, N_sv} in the camera coordinate system corresponds to a three-dimensional coordinate sequence; from the camera intrinsic parameters calibrated in step 3-1), the pixel coordinates (U_ck, V_ck) and the corresponding three-dimensional coordinates satisfy a first equation:

Under the projector coordinate system, the re-matched projector pixel coordinate sequence corresponds to a three-dimensional coordinate sequence; from the projector intrinsic parameters calibrated in step 3-1), the projector pixel coordinates and the corresponding three-dimensional coordinates satisfy a second equation:

From the transformation relation matrix R_pc = [r_ij]_3×3 and the translation vector t_pc = [t_1, t_2, t_3]^T of the camera relative to the projector, a three-dimensional coordinate point in the camera coordinate system and the corresponding three-dimensional coordinate point (X_pk, Y_pk, Z_pk)^T in the projector coordinate system satisfy a third equation:

From the first, second and third equations, the new three-dimensional coordinate point cloud of the measured object surface in the common field of view of the camera and the projector is calculated; in the projector coordinate system the new point cloud is:
CN201810777820.XA 2018-07-16 2018-07-16 Variable-frequency fringe projection structured light measuring method based on depth of field feedback Active CN108592824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810777820.XA CN108592824B (en) 2018-07-16 2018-07-16 Variable-frequency fringe projection structured light measuring method based on depth of field feedback

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810777820.XA CN108592824B (en) 2018-07-16 2018-07-16 Variable-frequency fringe projection structured light measuring method based on depth of field feedback

Publications (2)

Publication Number Publication Date
CN108592824A CN108592824A (en) 2018-09-28
CN108592824B true CN108592824B (en) 2020-06-30

Family

ID=63617589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810777820.XA Active CN108592824B (en) 2018-07-16 2018-07-16 Variable-frequency fringe projection structured light measuring method based on depth of field feedback

Country Status (1)

Country Link
CN (1) CN108592824B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109357621B (en) * 2018-12-10 2020-08-11 福州大学 Three-dimensional vibration displacement measuring device and method based on linear array camera and position sensing stripes
CN111750803B (en) * 2019-03-26 2021-07-30 天津理工大学 Fringe projection measuring method based on dynamic focusing principle
CN110470219A (en) * 2019-08-16 2019-11-19 福建农林大学 The out-of-focus image distance measuring method and device retained based on edge spectrum
CN110645919B (en) * 2019-08-23 2021-04-20 安徽农业大学 Structured light three-dimensional measurement method based on airspace binary coding
CN110940294B (en) * 2019-11-22 2020-12-29 华中科技大学 Image coding and decoding method in surface structured light measurement system
CN113327317A (en) * 2021-08-04 2021-08-31 浙江清华柔性电子技术研究院 Three-dimensional point cloud picture acquisition method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8472744B2 (en) * 2008-05-27 2013-06-25 Nikon Corporation Device and method for estimating whether an image is blurred
US8201951B2 (en) * 2008-11-19 2012-06-19 Seiko Epson Corporation Catadioptric projectors
CN103942830B (en) * 2014-04-04 2016-08-17 浙江大学 Directly utilize and there is the method that the phase place of nonlinearity erron realizes scene three-dimensional reconstruction
US9646225B2 (en) * 2015-08-21 2017-05-09 Sony Corporation Defocus estimation from single image based on Laplacian of Gaussian approximation
CN107622514B (en) * 2017-09-30 2020-10-16 常州工学院 Autonomous calibration method for convex lens model of camera
CN108230399B (en) * 2017-12-22 2019-11-08 清华大学 A kind of projector calibrating method based on structured light technique

Also Published As

Publication number Publication date
CN108592824A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN108592824B (en) Variable-frequency fringe projection structured light measuring method based on depth of field feedback
KR101475382B1 (en) Method for extracting self adaptive window fourie phase of optical three dimensionl measurement
Huang et al. Bezier interpolation for 3-D freehand ultrasound
CN107917679B (en) Dynamic detection and compensation method for highlight and dark regions
CN109631798B (en) Three-dimensional surface shape vertical measurement method based on pi phase shift method
CN104111038A (en) Method for using phase fusion algorithm to repair phase error caused by saturation
CN109631796B (en) Three-dimensional surface shape vertical measurement method based on two-dimensional S-transform ridge-taking method
US9157874B2 (en) System and method for automated x-ray inspection
CN103940370A (en) Target object three-dimensional information acquisition method based on periodical co-prime hybrid coding
Malik et al. Depth map estimation using a robust focus measure
CN109974624A (en) A method of the reduction projected image quantity based on multifrequency phase shift
Amir et al. High precision laser scanning of metallic surfaces
CN110702034A (en) High-light-reflection surface three-dimensional surface shape measuring method, server and system
Sutton et al. Development and assessment of a single-image fringe projection method for dynamic applications
Jia et al. A field measurement method for large objects based on a multi-view stereo vision system
CN110500970B (en) Multi-frequency structured light three-dimensional measurement method
CN109443250B (en) Structured light three-dimensional surface shape vertical measurement method based on S transformation
CN109373912B (en) Binocular vision-based non-contact six-degree-of-freedom displacement measurement method
Espino et al. Vision system for 3D reconstruction with telecentric lens
CN107588741B (en) Method and system for measuring camera depth change based on moire fringes
CN111028295A (en) 3D imaging method based on coded structured light and dual purposes
CN110349193A (en) Fast image registration method suitable for Fourier transform spectrometer,
Sui et al. Active Stereo 3-D Surface Reconstruction Using Multistep Matching
KR101819141B1 (en) Method for decoding line structured light patterns by using fourier analysis
Liu et al. Investigation of Phase Pattern Modulation for Digital Fringe Projection Profilometry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant