WO2016145582A1 - Phase offset calibration method, 3D shape detection method and system, and projection system - Google Patents

Phase offset calibration method, 3D shape detection method and system, and projection system

Info

Publication number
WO2016145582A1
WO2016145582A1 (PCT/CN2015/074254)
Authority
WO
WIPO (PCT)
Prior art keywords
phase
value
target
invalid
pixel
Prior art date
Application number
PCT/CN2015/074254
Other languages
English (en)
French (fr)
Inventor
王曌
王冠
吴昌力
Original Assignee
香港应用科技研究院有限公司 (Hong Kong Applied Science and Technology Research Institute Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 香港应用科技研究院有限公司 (Hong Kong Applied Science and Technology Research Institute Co., Ltd.)
Priority to PCT/CN2015/074254 priority Critical patent/WO2016145582A1/zh
Priority to CN201510115382.7A priority patent/CN104713497B/zh
Publication of WO2016145582A1 publication Critical patent/WO2016145582A1/zh

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes, on the object

Definitions

  • The present invention relates to a system and method for measuring and detecting a target's shape, and more particularly to a phase offset calibration method, a system and method for detecting the 3D shape of a target using optical projection pattern calibration and phase offset calibration, and a projection system for detecting 3D shapes.
  • In current techniques for measuring and detecting target shapes, a projector is typically used to project specific light (e.g., stripe or structured light) onto a target surface and a reference plane. The image of the stripe light formed on the target surface and the reference plane is then captured, information such as the position and height of the target is calculated by a phase method from the changes in the optical signal caused by the target's shape in the captured image, and the 3D shape of the target is finally formed.
  • As shown in Figure 1, the projector 101 projects stripe light through a grating, such as a sinusoidal grating, onto the target surface and the reference plane, and the camera 102 then photographs it to form an image. Because of the target's shape, the points of the target surface lie at unequal distances from the reference plane behind it, so the stripe light projected onto the target is deformed: as shown in Figure 2, the sinusoidal light emitted by the projector becomes a deformed pattern.
  • As shown in Figure 10, the distance z(x, y) between the target surface and the reference plane at the point p with coordinates (x, y) on the target can be calculated by the following formula (1):

    $$z(x, y) = \frac{l_0\,\Delta\phi(x, y)}{\Delta\phi(x, y) + 2\pi f_0 B} \qquad (1)$$

    where the plane formed by the projector and the camera is parallel to the reference plane, l_0 denotes the distance between that plane and the reference plane, Δφ(x, y) denotes the difference in phase value between the target and the reference plane, B denotes the distance between the projector and the camera, and f_0 denotes the period of the sinusoidal grating of the projected stripe light.
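Purely as an illustration of formula (1): a minimal numpy sketch of the height calculation. The function name and argument layout are our own, not the patent's, and the formula is the standard phase-measuring-profilometry form reconstructed above.

```python
import numpy as np

def height_from_phase(dphi, l0, B, f0):
    """Height map per formula (1): z = l0*dphi / (dphi + 2*pi*f0*B).

    dphi : 2-D array of phase differences (target minus reference), radians
    l0   : distance from the projector/camera plane to the reference plane
    B    : distance between the projector and the camera
    f0   : period of the projected sinusoidal grating
    """
    dphi = np.asarray(dphi, dtype=float)
    return l0 * dphi / (dphi + 2.0 * np.pi * f0 * B)
```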
  • The phase value in the above equation is actually calculated by the phase shift method and usually requires at least three phase-shift steps to determine; in actual measurement, a four-step phase shift or the like may also be used. The phase value is generally calculated by first performing the phase wrap calculation and then the phase unwrap calculation, finally obtaining the phase value.
  • Figure 3 shows a schematic of phase wrapping and phase unwrapping.
  • Specifically, the wrapped phase is calculated first: multiple frames of stripe light are projected onto the target, and for a three-step phase shift the wrapped phase value is calculated according to the following formula (2), where the wrapped phase value θ(x, y) of the point (x, y) is

    $$\theta(x, y) = \arctan\!\left(\frac{\sqrt{3}\,\bigl(I_1(x, y) - I_3(x, y)\bigr)}{2I_2(x, y) - I_1(x, y) - I_3(x, y)}\right) \qquad (2)$$

    where I_i(x, y) denotes the intensity of the phase-shifted map of the i-th frame at point (x, y). Because the phase θ(x, y) is calculated through the inverse tangent function, the phase value obtained is contained in [−π, +π] and differs from the continuous, true phase value by 2Nπ; to obtain a continuous true phase value it therefore needs to be unwrapped, where N is the order of phase unwrapping.
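The patent's formula (2) is the three-step instance of the standard N-step phase-shift (synchronous detection) formula. As a hedged illustration only, a minimal numpy sketch of the general N-step form follows; the function name and frame layout are our assumptions, and the sign convention depends on the direction of the phase shifts.

```python
import numpy as np

def wrapped_phase(frames):
    """Wrapped phase from N phase-shifted intensity frames.

    frames : array of shape (N, H, W); frames[i] is the intensity map
             I_{i+1}(x, y) of the frame shifted by 2*pi*(i+1)/N.
    Returns theta(x, y) in [-pi, +pi].
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    shifts = 2.0 * np.pi * np.arange(1, n + 1) / n
    num = np.tensordot(np.sin(shifts), frames, axes=1)  # sum_i I_i*sin(s_i)
    den = np.tensordot(np.cos(shifts), frames, axes=1)  # sum_i I_i*cos(s_i)
    return np.arctan2(num, den)
```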
  • In general, the steps of measuring a 3D shape using structured light are as follows: capture the reflected image; phase wrap; phase unwrap; generate the relative phase; calculate the height.
  • Moreover, current target-shape measurement and detection uses off-axis projection systems, whose key problem is distortion. This distortion changes the period of the sinusoidal pattern, and the spacing of the stripes becomes uneven. Calibration is therefore usually performed using a coordinate calibration method, for example by projecting pattern light onto a checkerboard, but this method is complicated to process, low in accuracy, and limited by the board's finite resolution.
  • In view of the above problems, the present invention proposes a system and method for detecting the 3D shape of a target using phase offset calibration, with short calculation time and high accuracy.
  • The invention provides a method for detecting the 3D shape of a target, comprising the following steps:
  • Step S1: project patterned structured light along at least two different directions onto the target, acquire the structured-light patterns formed by reflection from the target along the at least two different directions, and calculate the wrapped phase values of the structured-light patterns along the at least two different directions at the desired pixels on the target;
  • Step S2: analyze the structured-light patterns acquired along the at least two different directions and, from the gray values of the structured-light patterns, obtain the pixel positions having invalid wrapped phase values in a first of the at least two different directions;
  • Step S3: calibrate the invalid wrapped phase values of the first direction using the valid wrapped phase values, at the pixels having invalid wrapped phase values, along a second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating each invalid wrapped phase value into a compensated invalid wrapped phase value;
  • Step S4: merge the valid wrapped phase values along the first direction and the compensated invalid wrapped phase values at the desired pixels into one set of merged wrapped phase values, in order to calculate the depth values at the desired pixels on the target.
  • In step S2, the pixel positions having invalid wrapped phase values in the first of the at least two different directions are obtained as follows: analyze the gray value and/or brightness value of each pixel acquired along the first direction; if the gray value or brightness value of a pixel is greater than a predetermined first threshold or less than a predetermined second threshold, the wrapped phase value along the first direction at that pixel is judged to be an invalid wrapped phase value.
  • The calibration processing in step S3 includes the following steps:
  • Step S301: use the phase offset calibration table to obtain, for a pixel position having an invalid wrapped phase value along the first direction, the phase offset value of the first direction and the phase offset value of the second direction serving as the compensation direction;
  • Step S302: calculate, from the phase offset values and the valid wrapped phase value along the second direction, the compensating wrapped phase value corresponding to the invalid wrapped phase value along the first direction;
  • Step S303: replace the original invalid wrapped phase value with the compensating wrapped phase value, obtaining the compensated invalid wrapped phase value.
  • The method for obtaining the phase offset calibration table includes the following steps:
  • Step S401: from the structured-light pattern on the reference plane captured separately in each of the at least two different directions, calculate the wrapped phase values at the desired pixels of the structured-light pattern captured along each direction, where the structured light is structured light projected onto the reference plane along the at least two different directions and reflected by the reference plane;
  • Step S402: from the wrapped phase values, calculate the phase values at the desired pixels on the reference plane of the structured-light patterns along the at least two different directions;
  • Step S403: calculate the phase offset value of each pixel along each direction; and
  • Step S404: take the phase offset value of each pixel along each direction as a calibration value and record it, obtaining the phase offset calibration table.
  • In step S302, the phase offset value and the valid wrapped phase value are added or subtracted to obtain the compensating wrapped phase value corresponding to the invalid wrapped phase value, as sketched below.
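A minimal sketch of steps S301-S303 under our reading of them: at each invalid pixel, the direction-1 wrapped phase is rebuilt from the valid direction-2 wrapped phase plus the per-pixel offset from the calibration table and re-wrapped into [−π, +π]. The helper names and the re-wrapping step are assumptions, not the patent's notation.

```python
import numpy as np

def rewrap(theta):
    """Re-wrap a phase array into [-pi, +pi]."""
    return np.angle(np.exp(1j * theta))

def compensate_invalid(theta1, theta2, offset_table, invalid_mask):
    """Replace invalid direction-1 wrapped phases (steps S301-S303 sketch).

    theta1, theta2 : wrapped phase maps for directions 1 and 2
    offset_table   : per-pixel calibration offsets from the table
    invalid_mask   : boolean map, True where theta1 is invalid
    """
    compensated = rewrap(theta2 + offset_table)       # step S302: add offset
    merged = theta1.copy()
    merged[invalid_mask] = compensated[invalid_mask]  # step S303: replace
    return merged
```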
  • The method further comprises, before step S1, performing mesh-fitting optical calibration on the projection system, with its projector and camera, that performs the method, comprising the following steps:
  • Step S501: optically simulate projecting a predetermined grid pattern of the projector onto a predetermined plane;
  • Step S502: fit the optical parameters of the projection distortion according to the deformation of the projected grid pattern;
  • Step S503: modulate the light emitted by the projector according to the fitted optical distortion parameters, so that the light emitted by the projector is an orthographic projection.
  • There are two projectors, projecting spaced stripe light along two directions; the light stripes of the two directions emitted by the modulated projectors are parallel within the same phase period, and after modulation the light stripes emitted by the projector of each direction have the same width spacing on the same plane.
  • The wrapped phase value in step S2 is calculated using an at-least-four-point comparison algorithm, as follows (see the sketch after this list): first denote the wrapped phase value θ₁(x, y) along the first direction of the point with coordinates (x, y) on the target as θ′₀, and denote the wrapped phase values along the first direction of the four points with coordinates (x, y+1), (x, y−1), (x−1, y), (x+1, y) as θ′ᵢ:
  • θ′₁ is the wrapped phase value θ(x−1, y) along the first direction of the point (x−1, y),
  • θ′₂ is the wrapped phase value θ(x+1, y) along the first direction of the point (x+1, y),
  • θ′₃ is the wrapped phase value θ(x, y−1) along the first direction of the point (x, y−1),
  • θ′₄ is the wrapped phase value θ(x, y+1) along the first direction of the point (x, y+1).
  • If (θ′₀ − θ′ᵢ) > phase threshold for some i, or (θ′₀ − θ′ᵢ) < −phase threshold, a phase boundary is present and the compensated value is corrected accordingly; otherwise no boundary correction is applied. Here θ₁(x, y) denotes the wrapped phase value of the point (x, y) along the first direction and θ₂(x, y) denotes the wrapped phase value of the point (x, y) along the second direction; the phase threshold is a suitable value selected according to actual needs.
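For illustration, a sketch of the four-point comparison at a single invalid pixel, under one reading of the algorithm in which θ′₀ is the candidate value rebuilt from direction 2 plus the calibration offset. The original gives the three compensation formulas only as images, so the ±2π boundary correction below is an assumption.

```python
import numpy as np

def four_point_compensate(theta2_p, offset_p, theta1_nbrs,
                          phase_threshold=np.pi):
    """Four-point comparison sketch at one invalid pixel (x, y).

    theta2_p    : direction-2 wrapped phase at the pixel
    offset_p    : calibration-table offset at the pixel
    theta1_nbrs : direction-1 wrapped phases theta'_1..theta'_4 of the
                  neighbours (x-1, y), (x+1, y), (x, y-1), (x, y+1)
    """
    theta0 = theta2_p + offset_p              # candidate theta'_0 (assumed)
    diffs = theta0 - np.asarray(theta1_nbrs)
    if np.any(diffs > phase_threshold):       # boundary crossed upward
        return theta0 - 2.0 * np.pi           # assumed -2*pi correction
    if np.any(diffs < -phase_threshold):      # boundary crossed downward
        return theta0 + 2.0 * np.pi           # assumed +2*pi correction
    return theta0                             # no phase boundary
```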
  • Calculating the depth values at the desired pixels on the target in step S4 comprises: performing phase unwrapping on the merged wrapped phase values to obtain unwrapped phase values, and calculating the depth values at the desired pixels on the target from the unwrapped phase values.
  • The present invention also proposes a projection system for detecting the 3D shape of a target, comprising the following components:
  • at least two projectors, for projecting patterned structured light along at least two different directions onto the target;
  • a camera, for acquiring the structured-light patterns formed when the structured light is reflected by the target;
  • a memory; and
  • a processor, which calculates, from the structured-light patterns acquired by the camera, the wrapped phase values at the desired pixels on the target of the structured-light patterns along the at least two different directions; analyzes the structured-light patterns acquired along the at least two different directions and obtains, from their gray values, the pixel positions having invalid wrapped phase values in the first of the at least two different directions; calibrates the invalid wrapped phase values of the first direction using the valid wrapped phase values, at those pixels, along the second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating the invalid wrapped phase values into compensated invalid wrapped phase values; and merges the valid wrapped phase values along the first direction and the compensated invalid wrapped phase values at the desired pixels into one set of merged wrapped phase values, in order to calculate the depth values at the desired pixels on the target.
  • Here, calculating the depth value at the desired pixel on the target comprises: performing phase unwrapping on the merged wrapped phase values to obtain unwrapped phase values, and calculating the depth value at the desired pixel on the target from the unwrapped phase values.
  • The invention also proposes a phase offset calibration method, comprising the following steps:
  • Step S601: from the structured-light pattern on a reference plane captured separately in each of at least two different directions, calculate the wrapped phase values at the desired pixels of the structured-light pattern captured along each direction, where the structured light is structured light projected onto the reference plane along the at least two different directions and reflected by the reference plane;
  • Step S602: from the wrapped phase values, calculate the phase values at the desired pixels on the reference plane of the structured light along the at least two different directions;
  • Step S603: calculate the phase offset value of each pixel along each direction; and
  • Step S604: record the phase offset value of each pixel along each direction to obtain the phase offset calibration table and, during actual measurement, calibrate the invalid phase values of the desired direction according to the phase offset calibration table so as to compensate those invalid phase values.
  • The invention also proposes a system for detecting the 3D shape of a target, comprising the following modules:
  • a first calculation module, which calculates the wrapped phase values, at the desired pixels on the target, of the structured-light patterns formed by reflection from the target and acquired along the at least two different directions;
  • an analysis module, which analyzes the structured-light patterns acquired along the at least two different directions;
  • a second calculation module, which obtains, from the gray values of the structured-light patterns, the pixel positions having invalid wrapped phase values in a first of the at least two different directions;
  • a calibration module, which calibrates the invalid wrapped phase values of the first direction using the valid wrapped phase values, at the pixels having invalid wrapped phase values, along a second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating the invalid wrapped phase values into compensated invalid wrapped phase values;
  • a merging module, which merges the valid wrapped phase values along the first direction and the compensated invalid wrapped phase values at the desired pixels into one set of merged wrapped phase values; and
  • a third calculation module, which calculates the depth values at the desired pixels on the target.
  • With the method and system of the invention, the calculation time and difficulty of computing the target's 3D shape can be greatly reduced: for a computer of a given performance, the calculation time can drop from 3738 ms to 2846 ms, a 24% increase in calculation speed, and the phase wrapping time can be as low as 538 ms. In addition, the merged phase map remains valid over the entire area along the calibrated projection direction, which improves the efficiency and accuracy of phase unwrapping, and a consistent phase map is better suited to implementing the phase wrapping algorithm as a parallel computation on a GPU.
  • Figure 1 is a schematic diagram showing the principle of structured light projection in the prior art.
  • Figure 2 shows the deformation of sinusoidal light projected onto a 3D target in the prior art.
  • Figure 3 shows a schematic diagram of phase wrapping and phase unwrapping.
  • FIG. 4 is a schematic diagram of a projection system and an object in accordance with a preferred embodiment of the present invention.
  • FIG. 5 is a flow chart showing a method of detecting a 3D shape of a target using phase offset calibration, in accordance with a preferred embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing optical alignment of a projection pattern using a mesh fitting method in accordance with a preferred embodiment of the present invention.
  • Figure 7 illustrates a non-optically calibrated projection pattern and an optically calibrated projection pattern in accordance with a preferred embodiment of the present invention.
  • Figure 8 is a graph showing the intensity distribution of one column in the non-optically-calibrated projection pattern and in the optically calibrated projection pattern shown in Figure 7.
  • FIG. 9 is a schematic diagram showing an ideal situation and an actual situation of projected light after phase offset calibration according to a preferred embodiment of the present invention.
  • Figure 10 is a diagram showing the relationship between the position and distance of components in an optical system projected to a 3D object in accordance with a preferred embodiment of the present invention.
  • Figure 11A is a schematic illustration of a structured light pattern projected in two directions in accordance with a preferred embodiment of the present invention.
  • Figure 11B is a diagram showing a phase value curve projected in two directions and a resultant phase value curve thereof, in accordance with a preferred embodiment of the present invention.
  • Figure 11C is a schematic illustration of adjacent points on a target in accordance with a preferred embodiment of the present invention.
  • Figure 12 is a flow chart showing a method of synthesizing phase values in accordance with a preferred embodiment of the present invention.
  • FIGS. 13A-13C are schematic diagrams showing the manner in which the detection system is arranged in accordance with various preferred embodiments of the present invention.
  • Figure 14 is a diagram showing the arrangement of a detection system in accordance with a preferred embodiment of the present invention.
  • Figures 15A-15B are diagrams showing the calibration steps performed by an angled detection system and the image of the structured light formed at each step, in accordance with a preferred embodiment of the present invention.
  • Figure 16 is a view showing a calibration pattern obtained with the detection-system arrangement shown in Figure 14.
  • Figure 17 shows a simplified flow chart of a detection method in accordance with a preferred embodiment of the present invention.
  • Figure 18 is a diagram showing the determination of a shaded area and a highly reflective area of an image based on the acquired image, in accordance with a preferred embodiment of the present invention.
  • As shown in Figure 4, a projection system according to a preferred embodiment includes projectors 401 and 402 and a camera 403 located on the same plane. The camera 403 can also be replaced by an image sensor. The target 405 is located on a reference plane 404 spaced a distance from the plane in which the projectors 401, 402 and the camera 403 lie.
  • A flowchart of a method of detecting the 3D shape of a target with this projection system is shown in Figure 5; the following steps are performed in sequence.
  • Step 501: suitable projectors and a camera are selected to form a projection system in which the projectors lie in the same plane as the camera.
  • In this embodiment two projectors are employed, but any number of projectors, such as three or four, may be used.
  • Step 502: before the target is measured, an optical pattern calibration step is performed, that is, optical projection pattern calibration is performed on the projection system in advance. A grid fitting method is used to perform the optical projection pattern calibration.
  • This step 502 is not required for every target measurement. Since the distance between the reference plane and the plane where the camera and projectors lie is fixed, only one calibration and one setting of the projection system are required for the same reference plane, or for reference planes at the same distance, and once that setting is fixed no further calibration is required. This step is therefore optional rather than strictly necessary.
  • Step 503: the projector projects an image toward the target along direction 1, and the camera captures the projected image.
  • Step 504: the projector projects an image toward the target along direction 2, and the camera captures the projected image.
  • Step 505: the processing device processes the image data captured along direction 1 to obtain the wrapped phase values of the desired pixels on the target for direction 1.
  • Step 506: the processing device processes the image data captured along direction 2 to obtain the wrapped phase values of the desired pixels on the target for direction 2.
  • Step 507: the obtained wrapped phase values are calibrated, that is, reference phase offset calibration is performed on them using the phase offset calibration table. The phase offset calibration table in step 507 is predetermined; the specific determination method is detailed in the embodiments below.
  • Step 508: compensation of the wrapped phase values is performed according to the result of the phase offset calibration.
  • Step 509: phase unwrapping is performed on the compensated wrapped phase values to obtain the phase values.
  • Step 510: the height values of the target are calculated from the phase values.
  • Figure 6 shows the pre-measurement optical projection pattern calibration of the projection system using the grid fitting method according to a preferred embodiment of the present invention, that is, a specific implementation of step 502 above, which occurs before the measurement step.
  • Because of defects inherent in the projector's off-axis light-engine design, the light it emits is usually tilted. The present invention simulates the deviation between the actual projected light and an ideal grid by simulating the optical model of the system, and modulates the light emitted by the projector using a grid projection method. Unlike existing methods that modulate the projector after optical projection, the grid fitting method in the present invention performs the fitting using an optical model, so the optical projection pattern calibration can be performed more accurately and conveniently.
  • As shown on the left of Figure 6, the optical model of the projection system is simulated optically: the structured light emitted by the projector is projected onto the standard plane of the system. The field of view (FOV) of the projected light is, for example, 25 mm × 25 mm, and the sampling points, shown as cross symbols on the right of Figure 6, may for example number 20 × 20, or other values such as 30 × 30, with a maximum of 100 × 100 sampling points.
  • As seen on the right of Figure 6, the light carrying the sampling points and the ideal grid is projected by the projector onto the standard plane as a trapezoid rather than a square, and the sampling points do not fall within the corresponding cells of the ideal grid.
  • To eliminate the deformation of the pattern generated by the off-axis projected light, that is, to make the projection orthographic, the light emitted by the projector is modulated. Fitting according to the deformation of the projected grid pattern, the position of each sampling point in the projected grid is compared with the position of the corresponding cell in the ideal grid, and the differences obtained from the comparison are used to fit the deviation coefficients, for example with a multi-order nonlinear equation, yielding a fitting function. Applying the fitting function to the projected structured-light pattern generated by the projection system makes the projected light coincide essentially completely, at each sampling point, with the corresponding cell of the ideal grid, ensuring that the periodic structured-light pattern still has the same period spacing after projection. In simple cases, the deviation fit can be performed along a single direction only, according to the direction of the projected light stripes, to improve fitting accuracy and efficiency; a sketch follows below.
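The fit itself can be sketched as an ordinary polynomial least-squares problem between the simulated sample points and the ideal grid nodes; the function below is a minimal, assumption-laden illustration (names, polynomial model, and order are ours), not the patent's fitting procedure.

```python
import numpy as np

def fit_grid_distortion(observed, ideal, order=3):
    """Fit a 2-D polynomial mapping observed sample points to the ideal grid.

    observed, ideal : arrays of shape (M, 2) holding the (x, y) positions
                      of the projected sampling points and the ideal nodes
    order           : order of the polynomial distortion model
    Returns the coefficient vectors (cx, cy) of the fitted correction.
    """
    x, y = observed[:, 0], observed[:, 1]
    # Design matrix of monomials x**i * y**j with i + j <= order
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    cx, *_ = np.linalg.lstsq(A, ideal[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, ideal[:, 1], rcond=None)
    return cx, cy
```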
  • When a sinusoidal pattern is projected using the grid fitting method, the captured uncalibrated pattern and calibrated pattern are shown in Figure 7. As can be seen from Figures 7 and 8, compared with the intensity distribution of one column of the uncalibrated projection pattern, the intensity of the calibrated projection pattern is distributed more evenly along its angle.
  • It follows that optical projection pattern calibration by the grid fitting method is very simple, can generate sampling points of sufficiently high resolution, and yields a calibrated projection pattern of high accuracy.
  • The determination of the phase offset calibration table in step 507 is as follows. Ideally, after the preceding optical projection pattern calibration, the situation should be as shown on the left of Figure 9: the patterns projected by the two projectors should be completely symmetric in the relevant direction, no distortion occurs, the pattern light of the two directions is completely parallel, and all the calibrated pattern light has the same period. For the point p in the left diagram of Figure 9, the phase values $\phi_1^p$ and $\phi_2^p$ of the light projected by the projectors of transmission directions 1 and 2 should ideally satisfy $\phi_1^p = \phi_2^p$, where $\phi_1^p$ is the phase value calculated for the pattern light projected by projector 1 and $\phi_2^p$ is the phase value calculated for the pattern light projected by projector 2.
  • The actual situation is far from this ideal: owing to problems such as system setup errors, a phase difference (phase offset) arises between the two projection directions, as shown in the right diagram of Figure 9, where for the point p the phase values are unequal, $\phi_1^p \neq \phi_2^p$.
  • The phase offset calibration table is generated, after the projectors and camera of the projection system have been determined and before the target is projected, by projecting onto the reference plane and performing calculations on it. The principle of the phase offset calibration table is as follows.
  • In actual measurement, the projection system projects the structured light of the two directions onto the target above the reference plane. As shown in Figure 10, for a point p on the target, the height $z_1$ of point p is first calculated from the illumination pattern of the projector of the first direction (direction 1):

    $$z_1 = \frac{l_0\,\bigl(\phi_1^p - \phi_1^r\bigr)}{2\pi f_0 B}$$

    where l_0 denotes the distance between the plane formed by the projectors and the camera and the reference plane, B denotes the distance between the projector and the camera, f_0 denotes the period of the structured grating of the projected stripe light, $\phi_1^p$ is the phase value of point p on the measured object obtained from the pattern illuminated by the projector of the first direction, and $\phi_1^r$ is the corresponding phase value of point p on the reference plane when the projector of the first direction illuminates the reference plane.
  • The phase values at each pixel of the patterns illuminated onto the reference plane by the projectors of the first direction and the second direction are calculated respectively, and the phase offset calibration table can be generated from the obtained phase values, as follows:
  • first, project the calibrated pattern light onto the reference plane;
  • second, calculate separately, in the two directions, the phase values of the pattern projected onto the reference plane;
  • third, record the phase values of each pixel in the two directions in the reference-plane phase offset calibration table, where the calibration value of each pixel is the difference between its phase values in the two directions, as shown in Table 1.
  • The values recorded in the phase offset calibration table are in fact the phase values, at points on the reference plane, of each projected structured light as obtained by the projection system from the reference plane. If the projection system has more than two projectors, that is, more than two structured-light directions (for example three, four, etc.), the phase value along each direction is recorded in the phase offset calibration table. The projection system may already have had its optical projection pattern calibrated by the grid fitting method, or may not have undergone optical projection pattern calibration. Such calculation and recording are performed at each sampled pixel on the reference plane, thereby obtaining a phase offset calibration table for every pixel of the entire reference plane; a sketch follows below.
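Under our reading of Table 1, the per-pixel calibration value is the difference between the two reference-plane phases, so building the table reduces to one subtraction per pixel; a minimal sketch (names ours) follows.

```python
import numpy as np

def build_offset_table(ref_phase_dir1, ref_phase_dir2):
    """Reference-plane phase offset calibration table (steps S401-S404 sketch).

    ref_phase_dir1, ref_phase_dir2 : per-pixel phase maps of the calibrated
        pattern projected onto the reference plane along directions 1 and 2.
    Returns the per-pixel calibration value, re-wrapped into [-pi, +pi]
    (the re-wrapping is our assumption).
    """
    return np.angle(np.exp(1j * (ref_phase_dir1 - ref_phase_dir2)))
```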
  • After the optical projection pattern calibration and the generation of the phase offset calibration table, the projection system can perform illumination of the target and height calculation.
  • The structured light is projected by the projectors onto the target along directions 1 and 2 respectively, so as to form the structured-light projection on the surface of the target facing the projection system, the surface of the target opposite that surface being at the same distance from the projectors as the reference plane was during the optical projection pattern calibration. The camera then captures the image projected onto the aforementioned surface of the target.
  • The structured light is spaced stripe light; the light stripes of the two directions emitted by the optically pattern-modulated projectors are parallel within the same phase period, and the modulated stripes emitted by the projector of each direction have the same width spacing on the same plane.
  • In steps 503 and 504, the projectors project images onto the target along directions 1 and 2 respectively, and the camera captures the projected images. The specific method is as follows.
  • In steps 505 and 506, the processing device processes the image data captured along directions 1 and 2 respectively, obtaining the wrapped phase values of the desired pixels on the target for directions 1 and 2.
  • The wrapped phase value is calculated by the phase-shift formula, which for the four-step case takes the standard form

    $$\theta(x, y) = \arctan\!\left(\frac{I_4(x, y) - I_2(x, y)}{I_1(x, y) - I_3(x, y)}\right)$$

    with I_i(x, y) (i = 1, 2, 3 or 4) the intensity of the i-th phase-shifted frame at point (x, y). The wrapped phase values at the desired pixels on the target are thus obtained from the structured-light patterns of the two different directions.
  • In the present invention, four light-intensity values are used to calculate the wrapped phase value of the point (x, y), but in actual calculation the wrapped phase value θ(x, y) can also be calculated using the light-intensity values of other numbers of frames, such as three frames or five frames; this is not repeated here.
  • At the same time, the structured-light patterns acquired along the two directions are analyzed, and the pixel positions having invalid wrapped phase values in the first of the two different directions are obtained from the gray values of the structured-light patterns. Because the two projectors differ in projection direction, the target surface contains reflective, occluded, and shadowed regions, so the pixel positions of the invalid wrapped phase values must be determined; the specific method is as follows.
  • The structured-light pattern is first transformed into a grayscale image, so that the structured-light pattern on the target becomes a texture pattern representing the target; this transformation is known in the art and is not repeated here. To judge the gray-level variation accurately, the structured-light pattern may be transformed into several texture patterns, for example a higher-brightness pattern and a lower-brightness pattern; preferably into 2, 3, 4, or 5 texture patterns, more preferably 4.
  • The gray value and/or brightness of each pixel acquired along the first direction is analyzed: if the gray value of a pixel is greater than some predetermined threshold or its brightness value is less than some predetermined threshold, the pixel lies, for example, in a shadowed region; if the gray value of a pixel is less than some other predetermined threshold or its brightness value is greater than some other predetermined threshold, the pixel lies, for example, in a reflective region. The wrapped phase value at such a pixel is thereby judged to be an invalid wrapped phase value, that is, the wrapped phase value along the first direction at that pixel is inaccurate.
  • As shown in Figure 18, the acquired image illuminated by the stripe light is converted into grayscale images having different overall gray levels, divided as shown into two measured-object grayscale images. For the image with higher gray values and lower brightness values (the upper measured-object grayscale image), the light intensity is analyzed: if the light intensity (that is, the brightness value) of a region is greater than light-intensity threshold 1, that region is judged to be a highly reflective region. For the image with lower gray values and higher brightness (the lower measured-object grayscale image), if the light intensity (that is, the brightness value) of a region is less than light-intensity threshold 2, that region is judged to be a shadowed region. Of course, brightness values are not the only basis for identifying abnormal regions such as shadowed and reflective regions; other suitable values, such as gray values, may also be used.
  • The thresholds can be set based on experience or on existing thresholds. For example, if light-intensity thresholds are chosen, a maximum light-intensity threshold and/or a minimum light-intensity threshold may be set; these may be judged and calculated from the overall gray level formed by the transformation, or obtained in other ways. The thresholds may be calculated by methods such as the maximum between-class variance method (Otsu's method), the P-parameter method, or the maximum-entropy threshold method, or they may be set and input in advance by the operator. Each image with a different overall gray level may have different light-intensity and/or gray-level threshold levels.
  • The intensity distributions are then analyzed. Owing to the shadow effect or to the highly reflective material of the measured object itself, the light intensity of an occluded or reflective region differs significantly from other regions; from the variation in light intensity, invalid-phase-value regions such as reflective and shadowed regions can be delimited in the image, while the other regions, whose light intensity lies within the threshold range, are valid-phase-value regions.
  • As in Figure 11B, the light-intensity value at a position of the first of the two directions is less than a preset threshold; at that pixel, the wrapped phase value of the first direction at the corresponding position is defined as an invalid wrapped phase value, so the point p is a point in the invalid-phase region. A sketch of such masking follows below.
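A minimal sketch of the thresholding just described, marking shadowed and highly reflective pixels as invalid; in practice the thresholds would come from experience or a method such as Otsu's, and all names here are our own.

```python
import numpy as np

def invalid_region_mask(gray, low_thresh, high_thresh):
    """Boolean mask of pixels whose wrapped phase is treated as invalid.

    gray        : grayscale (brightness) image of the illuminated target
    low_thresh  : intensity below this marks a shadowed region
    high_thresh : intensity above this marks a highly reflective region
    """
    shadow = gray < low_thresh
    reflective = gray > high_thresh
    return shadow | reflective
```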
  • The invalid wrapped phase value can be compensated in the following manner.
  • Step 1101: determine the invalid-phase region of the image projected by projector 1 onto the target, using the method described above.
  • Step 1102: for a point P in the invalid-phase region determined in step 1101, determine the wrapped phase value of the second direction when projector 2 projects at the same point.
  • Step 1103: determine whether a phase boundary (jump) exists. In this embodiment, an at-least-four-point comparison algorithm is used, with the positions of the points distributed as shown in Figure 11C; specifically, the algorithm is:
  • phase wrap value ⁇ 1 (x, y) of the point on the target with coordinates (x, y) along the first direction as ⁇ ′ 0 , and the coordinates on the target as (x, y+1),
  • the phase wrap values of the four points of x, y-1), (x-1, y), (x+1, y) in the first direction represent ⁇ ' i
  • ⁇ ' 1 is the (x-1, y
  • ⁇ ' 1 is the (x-1, y
  • ⁇ ' 2 is the phase wrap value ⁇ (x+1, y) along the first direction of the (x+1, y)th point
  • ⁇ ′ 3 is the phase wrapping value ⁇ (x, y-1) along the first direction of the (x, y-1)th point
  • ⁇ ′ 4 is the first direction of the (x, y+1)th point Phase wrap value ⁇ (x, y+1),
  • phase threshold is based on a value selected for selection, for example, a value between (0-2 ⁇ ) is selected, for example, ⁇ is selected as the phase threshold.
  • the most ideal state is that the height of the measuring object is within the measurement range of the stripe period.
  • phase wrapping value ⁇ 1 along the first direction of the desired pixel point (x, y) can be obtained.
  • Step 1104: from the obtained wrapped phase value θ₁(x, y) along the first direction, the compensated wrapped phase value at the desired pixel along the first direction is obtained. Repeating this step for all pixels that are invalid along the first direction yields compensated wrapped phase values for all invalid pixels along the first direction.
  • The compensated wrapped phase values of all the invalid pixels along the first direction, together with the wrapped phase values of the valid pixels along the first direction, constitute accurate wrapped phase values for every pixel along the first direction.
  • While the invalid wrapped phase values along the first direction are being computed, the wrapped phase values of the valid pixels along the first direction may be recorded directly, as described above, for the subsequent unwrapping calculation; alternatively, the phase offset calibration table may optionally first be used to optimize the valid wrapped phase values of the first direction, so as to obtain more accurate wrapped phase values along the first direction, after which the optimized wrapped phase values are recorded and further processed. Specifically, a method such as least squares or weighted optimization is used, with reference to the phase offset calibration table, to compute over the valid wrapped phase values of the first direction and the wrapped phase values of the second direction, obtaining optimized wrapped phase values of the first direction.
  • Phase unwrapping is then performed on the merged wrapped phase values along the first direction to obtain the phase values. The formula for phase unwrapping is formula (3):

    $$\phi(x, y) = \theta(x, y) + 2N\pi \qquad (3)$$

    where N is the order of phase unwrapping, for example a suitable order calculated by the least-squares method, the path-integration method, or the like; a sketch follows below.
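As an illustration of formula (3) only: numpy's `unwrap` performs the simple one-dimensional path-integration variant, adding the multiple of 2π that keeps successive samples within π of each other. The patent also mentions least-squares unwrapping, which this sketch does not cover.

```python
import numpy as np

def unwrap_rows(theta_merged):
    """Row-wise path-integration unwrap of the merged wrapped phase map.

    Implements phi = theta + 2*N*pi per formula (3), with N chosen along
    each row so that successive samples never jump by more than pi.
    """
    return np.unwrap(np.asarray(theta_merged, dtype=float), axis=1)
```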
  • Finally, the height value of the target is calculated from the phase values. The distance z(x, y) between the target surface and the reference plane at the point p with coordinates (x, y) on the target can be calculated by formula (1), where the plane formed by the projectors and the camera is parallel to the reference plane, l_0 denotes the distance between that plane and the reference plane, Δφ(x, y) denotes the difference between the phase value $\phi_1(x, y)$ of the point p with coordinates (x, y) on the target along the first direction and the phase $\phi_1^r(x, y)$ of the point with coordinates (x, y) on the reference plane along the first direction, B denotes the distance between the projector and the camera, and f_0 denotes the period of the sinusoidal grating of the projected stripe light.
  • The invalid-phase-region determination of steps 1101 and 1102 above also appears in the flowchart of generating merged phase values shown in Figure 12.
  • Step 1201: select the projectors, which project multi-frame structured light onto the target along the first direction and the second direction respectively; then perform steps 1202 and 1203 simultaneously;
  • Step 1202: project the structured light of the optically calibrated pattern onto the target along direction 1, capture the pattern on the target along direction 1, and go to step 1204;
  • Step 1203: project the structured light of the optically calibrated pattern onto the target along direction 2, capture the pattern on the target along direction 2, and go to step 1208;
  • Step 1204: generate grayscale texture images from the captured direction-1 pattern and analyze them to obtain the positions of the pixels on the target having invalid phase values along direction 1; although only two texture images are shown, more or fewer texture images may in fact be generated for analysis;
  • Step 1205: while performing step 1204, calculate from the captured pattern the phase value of each pixel on the target along direction 1, where the wrapped phase value θ₁(x, y) along direction 1 at the point p with coordinates (x, y) on the target is given by the phase-shift formula, with I_i(x, y) (i = 1, 2, 3 or 4) the light intensity at point (x, y) of the i-th phase-shifted frame formed along the first direction; the light intensities of three-frame or five-frame patterns may also be used, so as to determine the most suitable wrapped phase value;
  • Step 1206: calculate the valid phase segments and the invalid phase segments from the obtained positions of the pixels having invalid phase values along direction 1 and the phase value of each pixel;
  • Step 1207: judge whether a pixel carries a valid phase of direction 1; if yes, go to step 1208; if no, go to step 1211;
  • Step 1208: calculate the wrapped phase value of each pixel on the target along direction 2, where the wrapped phase value θ₂(x, y) along direction 2 at the point p with coordinates (x, y) on the target is likewise given by the phase-shift formula;
  • Step 1209: optimize the wrapped phase values of the first and second directions, for example with the least-squares method or weighted optimization, computing over the valid wrapped phase values of the first direction and the wrapped phase values of the second direction with reference to the phase offset calibration table, to obtain preferred values of the valid wrapped phase values along the first direction; then go to step 1212;
  • Step 1210: perform reference phase offset calibration to obtain the phase offset calibration table;
  • Step 1211: at the positions of the pixels having invalid wrapped phase values along direction 1, use the wrapped phase values along direction 2 obtained in step 1208 and the reference phase offset values at those pixels in the phase offset calibration table obtained in step 1210 to compensate the wrapped phase values of the first direction, obtaining compensated wrapped phase values for the invalid region;
  • Step 1212: merge the compensated wrapped phase values of the invalid region of the first direction obtained in step 1211 with the optimized wrapped phase values of the valid region of the first direction obtained in step 1209, so that they become the wrapped phase values of the desired pixels in the valid and invalid regions of the first direction;
  • Step 1213: perform phase unwrapping, where the phase value along the first direction is $\phi_1(x, y) = \theta_1(x, y) + 2N\pi$, N being the order of phase unwrapping and θ₁(x, y) the wrapped phase value at the point p with coordinates (x, y) on the target; the height z(x, y) can then be calculated by formula (1).
  • The arrangement of the projectors and the camera in the projection system can be determined as needed. As shown in Figures 13A-13C, the two projectors P1, P2 and the camera can be arranged in a straight line as seen in plan view, or a plurality of projectors, for example three (P1, P2, P3) or four (P1, P2, P3, P4), can be arranged around the camera.
  • With a plurality of projectors, the setting angles of the projection system should be taken into account in the pattern calibration step, so that the captured image of each projector has a uniform periodic distribution. The direction of the phase offset calibration table used to compensate the wrapped phase values of the first direction may be selected according to need and the positions of the projectors. The other processes are similar to the two-projector case and are not repeated here.
  • As shown in Figure 14, the two projectors and the camera can also be arranged at an obtuse angle as seen in plan view, the line through projector P2 forming an angle with the line through projector P1 (marked in the figure). For projector P1, T is the number of pixels per stripe, I is the light intensity, and i is the number of the pixel sample point. The light intensity of the light projected by projector P2 is calculated as follows:
  • I′ = (1 − sin((x − (j − i·tan(angle)) + 1)/T · 2π)) · I/2 · 255,
  • where angle denotes the angle formed between the line of projector P2 and that of projector P1 (shown in the figure), T is the number of pixels per stripe, i denotes the number of the pixel sample point, I′ denotes the calculated light intensity, x denotes the abscissa of the measured point, and j denotes the ordinate of the measured point; a sketch follows below.
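As a hedged illustration of the intensity expression above, whose "=" sign and operator grouping we inferred from the garbled original: a sketch that renders the tilted fringe image for projector P2. Treating the sample number i as the row index j, and taking the overall intensity factor I as 1, are our simplifying assumptions.

```python
import numpy as np

def tilted_fringe(height, width, T, angle_rad):
    """Fringe image for the angled projector P2 (reconstructed formula).

    Evaluates I' = (1 - sin((x - (j - i*tan(angle)) + 1)/T * 2*pi)) / 2 * 255
    on a height-by-width grid, with x the column (abscissa), j the row
    (ordinate), and the sample number i assumed equal to j.
    """
    j, x = np.mgrid[0:height, 0:width]
    i = j                                   # simplifying assumption
    phase = (x - (j - i * np.tan(angle_rad)) + 1) / T * 2.0 * np.pi
    return (1.0 - np.sin(phase)) / 2.0 * 255.0
```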
  • In this way, the stripe light that illuminates the target is substantially the same, for example all of sinusoidal pattern. A phase offset calibration table covering all directions is generated, with a phase offset value calculated for each pixel along each direction; and from the structured-light patterns acquired along the at least two different directions, the wrapped phase value of each pixel of the structured-light pattern along each direction on the target is calculated from the obtained patterns.
  • Which projector angle, that is, which other direction's value in the phase offset calibration table, is used for compensation may be selected appropriately from parameters previously input into the system, choosing the direction with the best effect in the invalid region, with as little shadow as possible and no reflection.
  • Figure 16 shows the image of the two projectors illuminating the target and the field of view of the camera when the projectors are aligned with the camera as in Figure 14, where the field of view is the region in which the field of view of P1 overlaps that of P2.
  • The camera can use two different lenses, each capturing one projected pattern, to match the best field of view between the two projected patterns; the two projectors can also have different projection fields of view.
  • Figure 17 is a simplified flowchart of a detection method in accordance with a preferred embodiment of the present invention, which simply comprises the following steps:
  • Step 1701: select the projectors;
  • Step 1702: projector 1 projects along direction 1;
  • Step 1703: obtain the image and calculate wrapped phase value 1;
  • Step 1704: projector 2 projects along direction 2;
  • Step 1705: obtain the image and calculate wrapped phase value 2;
  • Step 1706: from wrapped phase values 1 and 2 and the previously established phase offset calibration table, calculate the valid wrapped phase value 1 along direction 1;
  • Step 1707: perform phase unwrapping on wrapped phase value 1;
  • finally, the height value is calculated.
  • In other words, the present invention performs the merging of the wrapped phase values of the two directions after the phase wrapping step, obtaining wrapped phase values that are all valid in one direction, and then unwraps only the valid wrapped phase values of that direction, completing the unwrapping in one pass, thereby reducing the calculation time while ensuring the calculation accuracy; an end-to-end sketch follows below.
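To tie the simplified flow of Figure 17 together, a compact end-to-end toy sketch follows; every function and value is ours, the synthetic `capture` stands in for real projector/camera data, and the offset table degenerates here because both toy captures see the same flat scene.

```python
import numpy as np

# Toy end-to-end sketch of the Figure 17 flow (all names and values ours).
H, W, N, T = 64, 64, 4, 16
rng = np.random.default_rng(0)

def capture(direction):
    """Stand-in for projecting and capturing N phase-shifted frames."""
    x = np.arange(W) / T * 2 * np.pi + (0.3 if direction == 2 else 0.0)
    shifts = 2 * np.pi * np.arange(1, N + 1) / N
    return np.array([128 * (1 + np.cos(x + s)) * np.ones((H, 1))
                     for s in shifts])

def wrap(frames):
    s = 2 * np.pi * np.arange(1, N + 1) / N
    return np.arctan2(np.tensordot(np.sin(s), frames, 1),
                      np.tensordot(np.cos(s), frames, 1))

theta1, theta2 = wrap(capture(1)), wrap(capture(2))       # steps 1702-1705
offset = np.angle(np.exp(1j * (theta1 - theta2)))         # calibration table
invalid = rng.random((H, W)) < 0.05                       # stand-in mask
theta1[invalid] = np.angle(np.exp(1j * (theta2 + offset)))[invalid]  # 1706
phi1 = np.unwrap(theta1, axis=1)                          # step 1707
# The height would follow from formula (1) applied to phi1 minus the
# reference-plane phase.
```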

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A phase offset calibration method, a 3D shape detection method, and a projection system for detecting 3D shapes. The 3D shape detection method comprises: projecting patterned structured light in at least two different directions onto a target (405), acquiring the structured-light patterns, and calculating wrapped phase values; analyzing the structured-light patterns and obtaining, from their gray values, the pixel positions having invalid wrapped phase values; performing calibration processing to compensate the phase offset; and merging the valid wrapped phase values along the first direction and the compensated invalid wrapped phase values at the desired pixels into one set of merged wrapped phase values, so as to calculate the depth values at the desired pixels on the target (405). The method enables fast calculation to accurately obtain the 3D shape data of the target (405).

Description

Phase offset calibration method, 3D shape detection method and system, and projection system

Technical Field

The present invention relates to a system and method for measuring and detecting a target's shape, and in particular to a phase offset calibration method, a system and method for detecting the 3D shape of a target using optical projection pattern calibration and phase offset calibration, and a projection system for detecting 3D shapes.
Background Art

In current techniques for measuring and detecting target shapes, a projector is typically used to project specific light (for example stripe light or structured light) onto the target surface and a reference plane; the image of the stripe light formed on the target surface and the reference plane is then captured, information such as the position and height of the target is calculated by a phase method from the changes in the optical signal caused by the target's shape in the captured image, and the 3D shape of the target is finally formed. As shown in Figure 1, the projector 101 projects stripe light passed through a grating, such as a sinusoidal grating, onto the target surface and the reference plane, and the camera 102 then photographs it to form an image.
Because of the target's shape, the points of the target surface lie at unequal distances from the reference plane behind it, so the stripe light projected onto the target is deformed. As shown in Figure 2, the light emitted by the projector is sinusoidal and then takes on a deformed shape. As shown in Figure 10, the distance z(x, y) between the target surface and the reference plane at the point p with coordinates (x, y) on the target can be calculated by the following formula (1):

$$z(x, y) = \frac{l_0\,\Delta\phi(x, y)}{\Delta\phi(x, y) + 2\pi f_0 B} \qquad (1)$$

where the plane formed by the projector and the camera is parallel to the reference plane, l_0 denotes the distance between the plane formed by the projector and the camera and the reference plane, Δφ(x, y) denotes the difference in phase value between the target and the reference plane, B denotes the distance between the projector and the camera, and f_0 denotes the period of the sinusoidal grating of the projected stripe light.
The phase value in the above formula is actually calculated by the phase shift method, which usually requires at least three phase-shift steps to determine the phase value; in actual measurement, a four-step phase shift or the like may also be used. The phase value is generally calculated by first performing the phase wrap calculation and then the phase unwrap calculation, finally obtaining the phase value. Figure 3 shows a schematic diagram of phase wrapping and phase unwrapping.

Specifically, the wrapped phase is calculated first: N frames of stripe light are projected onto the target, from frame 1 to frame N, N being an integer chosen as needed, for example N = 3 or 4. When N equals three, the wrapped phase value is calculated according to the following formula (2), where the wrapped phase value θ(x, y) of the point (x, y) is

$$\theta(x, y) = \arctan\!\left(\frac{\sqrt{3}\,\bigl(I_1(x, y) - I_3(x, y)\bigr)}{2I_2(x, y) - I_1(x, y) - I_3(x, y)}\right) \qquad (2)$$

where I_i(x, y) denotes the light intensity of the i-th phase-shifted frame at point (x, y).

Because the phase θ(x, y) is calculated through the inverse tangent function, the solved phase value is contained in [−π, +π] and differs from the continuous, true phase value by 2Nπ; to obtain the continuous true phase value it must therefore also be unwrapped.

The result of phase unwrapping, that is, the unwrapped phase value φ(x, y) of the point (x, y), is calculated with the following formula (3):

$$\phi(x, y) = \theta(x, y) + 2N\pi \qquad (3)$$

where N denotes the order of phase unwrapping.
In general, the steps of measuring a 3D shape using structured light are as follows: capture the reflected image; phase wrap; phase unwrap; generate the relative phase; calculate the height.

In addition, current target-shape measurement and detection uses off-axis projection systems, whose key problem is distortion. This distortion changes the period of the sinusoidal pattern, and the spacing of the stripes becomes uneven. Calibration is therefore usually performed with a coordinate calibration method, for example by projecting pattern light onto a checkerboard, but this method is complicated to process and low in accuracy, and is limited by the board's finite resolution.

US patent application publication US20120127305A1 discloses a method and apparatus for obtaining a surface profile, in which data from at least two directions are used to obtain a synthesized height value. That method requires the phases of the two directions to be separately wrapped, unwrapped, and turned into height values, after which the two height values are synthesized into a combined height value; it therefore requires multiple computations in two directions and is very time-consuming.

US patent application publication US20140253929A1 discloses a method and apparatus for 3D surfaces, in which the unwrapped phases produced separately from the data of at least two directions are synthesized and then formed into a final height value. This method also requires the phases of the two directions to be separately wrapped and unwrapped, and needs two cameras and two projectors; it is likewise time-consuming and structurally complex.
Summary of the Invention

In view of the above problems, the present invention proposes a system and method for detecting the 3D shape of a target using phase offset calibration, with short calculation time and high accuracy.

The invention provides a method for detecting the 3D shape of a target, comprising the following steps:

Step S1: project patterned structured light along at least two different directions onto the target, acquire the structured-light patterns formed by reflection from the target along the at least two different directions, and calculate the wrapped phase values of the structured-light patterns along the at least two different directions at the desired pixels on the target;

Step S2: analyze the structured-light patterns acquired along the at least two different directions and, from the gray values of the structured-light patterns, obtain the pixel positions having invalid wrapped phase values in a first of the at least two different directions;

Step S3: calibrate the invalid wrapped phase values of the first direction using the valid wrapped phase values, at the pixels having invalid wrapped phase values, along a second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating the invalid wrapped phase values into compensated invalid wrapped phase values;

Step S4: merge the valid wrapped phase values along the first direction and the compensated invalid wrapped phase values at the desired pixels into one set of merged wrapped phase values, in order to calculate the depth values at the desired pixels on the target.
In step S2, the pixel positions having invalid wrapped phase values in the first of the at least two different directions are obtained as follows: analyze the gray value and/or brightness value of each pixel acquired along the first direction; if the gray value or brightness value of a pixel is greater than a predetermined first threshold or less than a predetermined second threshold, the wrapped phase value along the first direction at that pixel is judged to be an invalid wrapped phase value.

The calibration processing in step S3 includes the following steps:

Step S301: use the phase offset calibration table to obtain, for a pixel position having an invalid wrapped phase value along the first direction, the phase offset value of the first direction and the phase offset value of the second direction serving as the compensation direction;

Step S302: calculate, from the phase offset values and the valid wrapped phase value along the second direction, the compensating wrapped phase value corresponding to the invalid wrapped phase value along the first direction;

Step S303: replace the original invalid wrapped phase value with the compensating wrapped phase value, obtaining the compensated invalid wrapped phase value.

The method for obtaining the phase offset calibration table includes the following steps:

Step S401: from the structured-light pattern on the reference plane captured separately in each of the at least two different directions, calculate the wrapped phase values at the desired pixels of the structured-light pattern captured along each direction, where the structured light is structured light projected onto the reference plane along the at least two different directions and reflected by the reference plane;

Step S402: from the wrapped phase values, calculate the phase values at the desired pixels on the reference plane of the structured-light patterns along the at least two different directions;

Step S403: calculate the phase offset value of each pixel along each direction; and

Step S404: take the phase offset value of each pixel along each direction as a calibration value and record it, obtaining the phase offset calibration table.

In step S302, the phase offset value and the valid wrapped phase value are added or subtracted to obtain the compensating wrapped phase value corresponding to the invalid wrapped phase value.
The method further comprises, before step S1, performing mesh-fitting optical calibration on the projection system, having a projector and a camera, that executes the method, comprising the following steps:

Step S501: optically simulate projecting a predetermined grid pattern of the projector onto a predetermined plane;

Step S502: fit the optical parameters of the projection distortion according to the deformation of the projected grid pattern;

Step S503: modulate the light emitted by the projector according to the fitted optical distortion parameters, so that the light emitted by the projector is an orthographic projection.

There are two projectors, projecting spaced stripe light along two directions; the light stripes of the two directions emitted by the modulated projectors are parallel within the same phase period, and after modulation the light stripes emitted by the projector of each direction have the same width spacing on the same plane.
The wrapped phase value in step S2 is calculated using an at-least-four-point comparison algorithm, the calculation being as follows:

First denote the wrapped phase value θ₁(x, y) along the first direction of the point with coordinates (x, y) on the target as θ′₀, and denote the wrapped phase values along the first direction of the four points with coordinates (x, y+1), (x, y−1), (x−1, y), (x+1, y) as θ′ᵢ: θ′₁ is the wrapped phase value θ(x−1, y) along the first direction of the point (x−1, y), θ′₂ is the wrapped phase value θ(x+1, y) along the first direction of the point (x+1, y), θ′₃ is the wrapped phase value θ(x, y−1) along the first direction of the point (x, y−1), and θ′₄ is the wrapped phase value θ(x, y+1) along the first direction of the point (x, y+1).

If (θ′₀ − θ′ᵢ) > phase threshold for some i: [compensation formula, given only as an image in the original];

if (θ′₀ − θ′ᵢ) < −phase threshold: [compensation formula, given only as an image in the original];

otherwise: [compensation formula, given only as an image in the original];

where θ₁(x, y) denotes the wrapped phase value of the point (x, y) along the first direction, θ₂(x, y) denotes the wrapped phase value of the point (x, y) along the second direction, and the phase threshold is a suitable value selected according to actual needs.
In step S4, calculating the depth values at the desired pixels on the target comprises: performing phase unwrapping on the merged wrapped phase values to obtain unwrapped phase values, and calculating the depth values at the desired pixels on the target from the unwrapped phase values.

The invention also proposes a projection system for detecting the 3D shape of a target, comprising the following components:

at least two projectors, for projecting patterned structured light along at least two different directions onto the target;

a camera, for acquiring the structured-light patterns of the structured light after reflection by the target;

a memory;

a processor, which calculates, from the structured-light patterns acquired by the camera, the wrapped phase values at the desired pixels on the target of the structured-light patterns along the at least two different directions; analyzes the structured-light patterns acquired along the at least two different directions and obtains, from the gray values of the structured-light patterns, the pixel positions having invalid wrapped phase values in the first of the at least two different directions; calibrates the invalid wrapped phase values of the first direction using the valid wrapped phase values, at the pixels having invalid wrapped phase values, along the second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating the invalid wrapped phase values into compensated invalid wrapped phase values; and merges the valid wrapped phase values along the first direction and the compensated invalid wrapped phase values at the desired pixels into one set of merged wrapped phase values, in order to calculate the depth values at the desired pixels on the target.

Here, calculating the depth value at the desired pixel on the target comprises: performing phase unwrapping on the merged wrapped phase values to obtain unwrapped phase values, and calculating the depth value at the desired pixel on the target from the unwrapped phase values.
The invention also proposes a phase offset calibration method, comprising the following steps:

Step S601: from the structured-light pattern on a reference plane captured separately in each of at least two different directions, calculate the wrapped phase values at the desired pixels of the structured-light pattern captured along each direction, where the structured light is structured light projected onto the reference plane along the at least two different directions and reflected by the reference plane;

Step S602: from the wrapped phase values, calculate the phase values at the desired pixels on the reference plane of the structured light along the at least two different directions;

Step S603: calculate the phase offset value of each pixel along each direction; and

Step S604: record the phase offset value of each pixel along each direction to obtain the phase offset calibration table and, during actual measurement, calibrate the invalid phase values of the desired direction according to the phase offset calibration table so as to compensate those invalid phase values.
The invention also proposes a system for detecting the 3D shape of a target, comprising the following modules:

a first calculation module, which calculates the wrapped phase values, at the desired pixels on the target, of the structured-light patterns formed by reflection from the target and acquired along the at least two different directions;

an analysis module, which analyzes the structured-light patterns acquired along the at least two different directions;

a second calculation module, which obtains, from the gray values of the structured-light patterns, the pixel positions having invalid wrapped phase values in a first of the at least two different directions;

a calibration module, which calibrates the invalid wrapped phase values of the first direction using the valid wrapped phase values, at the pixels having invalid wrapped phase values, along a second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating the invalid wrapped phase values into compensated invalid wrapped phase values;

a merging module, which merges the valid wrapped phase values along the first direction and the compensated invalid wrapped phase values at the desired pixels into one set of merged wrapped phase values; and

a third calculation module, which calculates the depth values at the desired pixels on the target.
With the method and system of the invention, the calculation time and difficulty of computing the target's 3D shape can be greatly reduced: for a computer of a given performance, the calculation time can drop from 3738 ms to 2846 ms, a 24% increase in calculation speed, and the phase wrapping time can be as low as 538 ms. Moreover, with the method of the invention the merged phase map remains valid over the entire area along the calibrated projection direction, which improves the efficiency and accuracy of phase unwrapping, and a consistent phase map is better suited to implementing the phase wrapping algorithm as a parallel computation on a GPU.
Brief Description of the Drawings

Figure 1 is a schematic diagram of the principle of structured-light projection in the prior art.
Figure 2 shows the deformation of sinusoidal light projected onto a 3D target in the prior art.
Figure 3 is a schematic diagram of phase wrapping and phase unwrapping.
Figure 4 is a schematic diagram of a projection system and a target according to a preferred embodiment of the present invention.
Figure 5 is a flowchart of a method of detecting the 3D shape of a target using phase offset calibration according to a preferred embodiment of the present invention.
Figure 6 is a schematic diagram of optical calibration of the projection pattern using the grid fitting method according to a preferred embodiment of the present invention.
Figure 7 shows a non-optically-calibrated projection pattern and an optically calibrated projection pattern according to a preferred embodiment of the present invention.
Figure 8 is a graph of the intensity distribution of one column in the non-optically-calibrated and optically calibrated projection patterns shown in Figure 7.
Figure 9 is a schematic diagram of the ideal situation and the actual situation of the projected light after phase offset calibration according to a preferred embodiment of the present invention.
Figure 10 is a schematic diagram of the positions of, and distances between, the components of an optical system projecting onto a 3D target according to a preferred embodiment of the present invention.
Figure 11A is a schematic diagram of structured-light patterns projected along two directions according to a preferred embodiment of the present invention.
Figure 11B is a schematic diagram of the phase-value curves projected along two directions and their merged phase-value curve according to a preferred embodiment of the present invention.
Figure 11C is a schematic diagram of adjacent points on the target according to a preferred embodiment of the present invention.
Figure 12 is a flowchart of a method of merging phase values according to a preferred embodiment of the present invention.
Figures 13A-13C are schematic diagrams of arrangements of the detection system according to several preferred embodiments of the present invention.
Figure 14 is a schematic diagram of an arrangement of the detection system according to a preferred embodiment of the present invention.
Figures 15A-15B are schematic diagrams of the calibration steps of an angled detection system and the structured-light image formed at each step according to a preferred embodiment of the present invention.
Figure 16 is a schematic diagram of a calibration pattern obtained with the detection-system arrangement shown in Figure 14.
Figure 17 is a simplified flowchart of a detection method according to a preferred embodiment of the present invention.
Figure 18 is a schematic diagram of determining the shadowed and highly reflective regions of an image from the acquired image according to a preferred embodiment of the present invention.
Detailed Description of the Embodiments

It should be noted that, provided there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another. The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
As shown in Figure 4, a projection system according to a preferred embodiment of the present invention includes projectors 401, 402 and a camera 403 located on the same plane. The camera 403 can also be replaced by an image sensor. The target 405 is located on a reference plane 404 spaced a distance from the plane in which the projectors 401, 402 and the camera 403 lie.

A flowchart of the method of detecting the 3D shape of a target with this projection system is shown in Figure 5; the following steps are performed in sequence.
Step 501: suitable projectors and a camera are selected to form a projection system in which the projectors lie in the same plane as the camera. In this embodiment two projectors are used, but any number of projectors, such as three or four, may be used.

Step 502: before the target is measured, an optical pattern calibration step is performed, that is, optical projection pattern calibration is performed in advance on the projection system. In this application, the grid fitting method is used for the optical projection pattern calibration. Step 502 is not required for every target measurement: since the distance between the reference plane and the plane where the camera and projectors lie is fixed, only one calibration and one setting of the projection system are needed for the same reference plane, or for reference planes at the same distance, and once that setting is fixed no further calibration is required. This step is therefore optional rather than strictly necessary.

Step 503: the projector projects an image toward the target along direction 1, and the camera captures the projected image.

Step 504: the projector projects an image toward the target along direction 2, and the camera captures the projected image.

Step 505: the processing device processes the image data captured along direction 1 to obtain the wrapped phase values of the desired pixels on the target for direction 1.

Step 506: the processing device processes the image data captured along direction 2 to obtain the wrapped phase values of the desired pixels on the target for direction 2.

Step 507: the obtained wrapped phase values are calibrated, that is, reference phase offset calibration is performed on them using the phase offset calibration table. The phase offset calibration table in step 507 is predetermined; the specific determination method is detailed in the embodiments below.

Step 508: compensation of the wrapped phase values is performed according to the result of the phase offset calibration.

Step 509: phase unwrapping is performed on the compensated wrapped phase values to obtain the phase values.

Step 510: the height values of the target are calculated from the phase values.

Some steps of the above method of detecting the 3D shape of a target are described in detail below to set out the invention more clearly.
The grid fitting method indicated in step 502 is now detailed. As shown in Figure 6, which represents the pre-measurement optical projection pattern calibration of the projection system using the grid fitting method according to a preferred embodiment of the present invention, that is, the specific implementation of step 502 above, this calibration takes place before the measurement step. Because of defects inherent in the projector's off-axis light-engine design, the light it emits is usually tilted; the present invention simulates the deviation between the actual projected light and an ideal grid by simulating the optical model of the system, and modulates the light emitted by the projector using a grid projection method. Unlike existing methods that modulate the projector after optical projection, the grid fitting method in the present invention performs the fitting with an optical model, so the optical projection pattern calibration can be carried out more accurately and conveniently.

As shown on the left of Figure 6, the optical model of the projection system is simulated optically, and the structured light emitted by the projector is projected onto the standard plane of the system. The field of view (FOV) of the projected light is, for example, 25 mm × 25 mm; the sampling points, shown as cross symbols on the right of Figure 6, may for example number 20 × 20, or other values such as 30 × 30, with a maximum of 100 × 100. As seen on the right of Figure 6, the light carrying the sampling points and the ideal grid is projected by the projector onto the standard plane as a trapezoid rather than a square, and the sampling points do not fall within the cells of the ideal grid. To eliminate the deformation of the pattern generated by the off-axis projected light, that is, to make the projection orthographic, the light emitted by the projector is modulated. Fitting according to the deformation of the projected grid pattern, the position of each sampling point in the projected grid is compared with the position of the corresponding cell in the ideal grid, and the differences obtained from the comparison are used to fit the deviation coefficients, for example with a multi-order nonlinear equation, yielding a fitting function. Applying the fitting function to the projected structured-light pattern generated by the projection system makes the projected light coincide essentially completely, at each sampling point, with the corresponding cell of the ideal grid, ensuring that the periodic structured-light pattern still has the same period spacing after projection. In simple cases, the deviation fit can be performed along a single direction only, according to the direction of the projected light stripes, to improve fitting accuracy and efficiency.

When a sinusoidal pattern is projected using the grid fitting method, the captured uncalibrated and calibrated patterns are as shown in Figure 7. As can be seen from Figures 7 and 8, compared with the intensity distribution of one column of the uncalibrated projection pattern, the intensity of the calibrated projection pattern is distributed more evenly along its angle.

It follows that the optical projection pattern calibration achieved with the grid fitting method is very simple, can generate sampling points of sufficiently high resolution, and yields a calibrated projection pattern of high accuracy.
The determination of the phase offset calibration table in step 507 is now detailed. The reason for generating the phase offset calibration table is as follows: ideally, after the preliminary optical projection pattern calibration, the situation obtained should be as shown on the left of Figure 9; that is, the patterns projected by the two projectors should be completely symmetric in the relevant direction, no distortion occurs, the pattern light of the two directions is completely parallel, and all the calibrated pattern light has the same period. For the point p in the left diagram of Figure 9, the phase values $\phi_1^p$ and $\phi_2^p$ of the light projected by the projectors of transmission directions 1 and 2 should ideally satisfy $\phi_1^p = \phi_2^p$, where $\phi_1^p$ is the phase value calculated for the pattern light projected by projector 1 and $\phi_2^p$ is the phase value calculated for the pattern light projected by projector 2.

The actual situation is far from this ideal: owing to problems such as system setup errors, a phase difference (phase offset) arises between the projections of the two directions, as shown in the right diagram of Figure 9, where for the point p the phase values are unequal, $\phi_1^p \neq \phi_2^p$. Therefore, for a pixel whose phase value along the first direction is invalid, the phase value along the second direction at that pixel must be used to compensate the phase value along the first direction, so as to obtain the phase value along the first direction. A compensation method is thus needed that, taking the phase offset into account, computes the phase value along the first direction from the phase value of the second direction. This compensation method requires the phase offset calibration table.
The phase offset calibration table is generated, after the projectors and camera of the projection system have been determined and before the target is projected, by projecting onto the reference plane and performing calculations on it. The principle of the phase offset calibration table is as follows.

In practice, when a specific target is measured, the projection system projects the structured light of the two directions onto the target on the reference plane. As shown in Figure 10, for a point p on the target, the height $z_1$ of point p is first calculated from the illumination pattern of the projector of the first direction (direction 1):

$$z_1 = \frac{l_0\,\bigl(\phi_1^p - \phi_1^r\bigr)}{2\pi f_0 B}$$

where l_0 denotes the distance between the plane formed by the projectors and the camera and the reference plane, B denotes the distance between the projector and the camera, f_0 denotes the period of the structured grating of the projected stripe light, $\phi_1^p$ is the phase value of point p on the measured object obtained from the pattern illuminated by the projector of the first direction, and $\phi_1^r$ is the phase value on the reference plane corresponding to point p when the projector of the first direction illuminates the reference plane.

The height $z_2$ of point p is likewise calculated from the illumination pattern of the projector of the second direction (direction 2):

$$z_2 = \frac{l_0\,\bigl(\phi_2^p - \phi_2^r\bigr)}{2\pi f_0 B}$$

where $\phi_2^p$ is the phase value of point p on the measured object obtained from the pattern illuminated by the projector of the second direction. The above formulas for $z_1$ and $z_2$ differ from formula (1) because, relative to the distances $z_1$ and $z_2$, the phase-difference term in the denominator of formula (1) is very small and can be neglected in the calculation.

Since in theory $z_1$ and $z_2$ should be equal, that is, $z_1 = z_2$, it follows that

$$\phi_1^p - \phi_1^r = \phi_2^p - \phi_2^r,$$

whence

$$\phi_1^p - \phi_2^p = \phi_1^r - \phi_2^r,$$

and the calibration value at point p is therefore $\phi_1^r - \phi_2^r$.

It can thus be seen that the calibration value at point p can be obtained from the phase values of the points on the reference plane; a numeric check follows below.
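To make the derivation concrete, a tiny numeric check (toy values ours) that $z_1 = z_2$ forces $\phi_1^p - \phi_2^p = \phi_1^r - \phi_2^r$, which is why the reference-plane phases alone suffice for the calibration value:

```python
import numpy as np

# Toy values (ours): reference-plane phases and a common target height.
l0, B, f0 = 100.0, 20.0, 0.5
phi1_r, phi2_r = 1.10, 0.40        # reference phases, directions 1 and 2
z = 3.0                            # the same height seen by both projectors
k = 2 * np.pi * f0 * B / l0        # from z = l0*(phi_p - phi_r)/(2*pi*f0*B)
phi1_p, phi2_p = phi1_r + k * z, phi2_r + k * z
# Since z1 == z2 by construction, the phase differences agree:
assert np.isclose(phi1_p - phi2_p, phi1_r - phi2_r)
print(phi1_p - phi2_p, phi1_r - phi2_r)  # both equal the calibration value
```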
According to the above principle, the phase values at each pixel of the patterns illuminated onto the reference plane by the projectors of the first direction and the second direction are calculated respectively, and the phase offset calibration table can be generated from the obtained phase values, with the following steps.

First step: project the calibrated pattern light onto the reference plane;

Second step: calculate separately, in the two directions, the phase values of the pattern projected onto the reference plane;

Third step: record the phase values of each pixel in the two directions in the reference-plane phase offset calibration table below, where the calibration value of each pixel is $\phi_1^r - \phi_2^r$, as shown in Table 1.

[Table 1, given as an image in the original: the reference-plane phase offset calibration table recording, for each pixel, the phase values along each direction and the calibration value.]
The values recorded in the phase offset calibration table are in fact the phase values, at points on the reference plane, of each projected structured light as obtained by the projection system from the reference plane. If the projection system has more than two projectors, that is, more than two structured-light directions (for example three, four, etc.), the phase value along each direction is recorded in the phase offset calibration table. The projection system may already have had its optical projection pattern calibrated by the grid fitting method, or may not have undergone optical projection pattern calibration. Such calculation and recording are performed at each sampled pixel on the reference plane, thereby obtaining a phase offset calibration table for every pixel of the entire reference plane.

After the calibration of the optical projection pattern and the generation of the phase offset calibration table, the projection system can carry out illumination of the target and height calculation.

The structured light is projected by the projectors onto the target along directions 1 and 2 respectively, so as to form the structured-light projection on the surface of the target facing the projection system, the surface of the target opposite that surface being at the same distance from the projectors as the reference plane was during the optical projection pattern calibration. The camera then captures the image projected onto the aforementioned surface of the target. The structured light is spaced stripe light; the light stripes of the two directions emitted by the optically pattern-modulated projectors are parallel within the same phase period, and the modulated light stripes emitted by the projector of each direction have the same width spacing on the same plane.
In steps 503 and 504, the projectors project images onto the target along directions 1 and 2 respectively, and the camera captures the projected images. The specific method is as follows.

The projector illuminates the target with multiple frames of stripe light, from frame 1 to frame N, for example N = 4; the camera then captures the projected images, obtaining, as each frame of light illuminates the image, the light intensity I_i(x, y) of the i-th frame of stripe light at the point (x, y) of the image.

In steps 505 and 506, the processing device processes the image data captured along directions 1 and 2 respectively, obtaining the wrapped phase values of the desired pixels on the target for directions 1 and 2.

The wrapped phase value is calculated by the phase-shift formula, which for the four-step case takes the standard form

$$\theta(x, y) = \arctan\!\left(\frac{I_4(x, y) - I_2(x, y)}{I_1(x, y) - I_3(x, y)}\right)$$

where I_i(x, y) (i = 1, 2, 3 or 4) denotes the light intensity of the i-th phase-shifted frame at the point (x, y).

The wrapped phase values at the desired pixels on the target are thus obtained by calculation from the structured-light patterns of the two different directions in the manner above. In the present invention four light-intensity values are used to calculate the wrapped phase value of the point (x, y), but in actual calculation the wrapped phase value θ(x, y) can also be calculated using the light-intensity values of other numbers of frames, such as three frames or five frames; this is not repeated here.
同时,对沿两个方向获取的结构光图案进行分析,针对结构光图案的灰度值得到两个不同方向的第一方向的具有无效相位包裹值的像素位置。
由于两个投影仪在投射方向的不同造成目标表面有反光、遮挡、黑影等区域,需要判断无效相位包裹值所在的像素位置。针对结构光图案的灰度值得到两个不同方向的第一方向的具有无效相位包裹值的像素位置的具体方法如下所述。首先将结构光图案变换为灰度图,使得目标上的结构光图案变成表示目标的纹理图案,这种变换方式是现有技术中已知的,在此不再赘述。同时,为了准确判断灰度变换情况,可以将结构光图案变换成多个纹理图案,例如亮度较高的图案、亮度较低的图案,优选地,可以变换为2个、3个、4个、5个,更优选地,可以变换为4个纹理图案。分析沿第一方向获取的每个像素的灰度值和/或亮度,如果某个像素的灰度值大于某一个预先给定的阈值或亮度值小于某一个预定的阈值,则该像素处例如为阴影区域,如果某个像素的灰度值小于某另外一个预先给定的阈值或亮度值大于某另外一个预定的阈值,则该像素处例如为反光区域,由此判定该像素处的相位包裹值为无效相位包裹值,也就是说,在该像素处沿第一方向的相位包裹值是不准确的。
As shown in Figure 18, the captured image illuminated by the stripe light is converted into grayscale images having different overall gray levels; as illustrated, it is divided into two object grayscale images for measurement. For the image with the higher gray values and lower brightness (the upper measured-object grayscale image), the intensity is analyzed: if the intensity (i.e. brightness value) of some region is greater than intensity threshold 1, that region is judged to be a highly reflective region. For the image with the lower gray values and higher brightness (the lower measured-object grayscale image), if the intensity (i.e. brightness value) of some region is less than intensity threshold 2, that region is judged to be a shadow region. Of course, abnormal regions of the image such as shadow regions and specular regions can be judged not only from brightness values but also from other suitable values, for example gray values; the judgment is not limited to the above. The thresholds can be set from experience or from existing threshold settings. For example, if intensity thresholds are chosen, a maximum intensity threshold and/or a minimum intensity threshold can be set; these may be determined and computed from the overall gray level produced by the conversion, or obtained in other ways. The thresholds can be computed by methods such as the maximum between-class variance (Otsu) method, the P-parameter method or the maximum-entropy threshold method, or they may be set and entered in advance by the operator. Each image with a different overall gray level may have different intensity and/or gray threshold levels.
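A hedged sketch of this validity masking; the threshold values and the single-image form are illustrative assumptions (in practice several texture images with different overall gray levels may each get their own thresholds, as described above):

    import numpy as np

    def invalid_phase_mask(gray, low_thresh, high_thresh):
        # gray: (H, W) texture image derived from the captured pattern.
        # Pixels darker than low_thresh are treated as shadow/occlusion,
        # pixels brighter than high_thresh as specular highlights.
        shadow = gray < low_thresh
        specular = gray > high_thresh
        return shadow | specular   # True where the wrapped phase is invalid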
As shown in Figures 11A-C, the intensity distributions are analyzed. Owing to shadowing effects or the highly reflective material of the measured object itself, the intensity of occluded or specular regions differs markedly from that of other regions; from the intensity variation, invalid-phase regions such as specular regions and shadow regions can be delimited in the image, while the regions whose intensity lies within the threshold range are valid-phase regions.
As shown in Figure 11B, where the intensity value at a position along the first of the two directions is less than the preset threshold, the wrapped phase value of the first direction corresponding to that position is defined as an invalid wrapped phase value; point p is therefore a point in the invalid-phase region.
Such an invalid wrapped phase value can be compensated in the following way.
Step 1101: determine the invalid-phase region of the image projected onto the target by projector 1, using the method described above.
Step 1102: for a point p in the invalid-phase region determined in step 1101, determine the wrapped phase value of the second direction when projector 2 projects onto the same point.
Step 1103: determine whether a phase jump exists. In this embodiment, an at-least-four-point comparison algorithm is used, for example, the points being positioned as shown in Figure 11C. Specifically, the algorithm is as follows:
First, denote the wrapped phase value θ1(x, y) along the first direction of the point with coordinates (x, y) on the target by θ′0, and denote the wrapped phase values along the first direction of the four points with coordinates (x, y+1), (x, y−1), (x−1, y), (x+1, y) by θ′i: θ′1 is the first-direction wrapped phase value θ(x−1, y) of point (x−1, y), θ′2 is the first-direction wrapped phase value θ(x+1, y) of point (x+1, y), θ′3 is the first-direction wrapped phase value θ(x, y−1) of point (x, y−1), and θ′4 is the first-direction wrapped phase value θ(x, y+1) of point (x, y+1).

If there exists (θ′0 − θ′i) > phase threshold, then

θ1(x, y) = θ2(x, y) + Δφ(x, y) − 2π;

if there exists (θ′0 − θ′i) < −phase threshold, then

θ1(x, y) = θ2(x, y) + Δφ(x, y) + 2π;

otherwise

θ1(x, y) = θ2(x, y) + Δφ(x, y);

where θ1(x, y) denotes the wrapped phase value of point (x, y) along the first direction, θ2(x, y) denotes the wrapped phase value of point (x, y) along the second direction, these two quantities having been computed in steps 505 and 506 using formula (2), and Δφ(x, y) is the reference phase-offset value at that pixel taken from the phase-offset calibration table. The phase threshold is a suitably chosen value, for example a value between 0 and 2π, such as π. Ideally, the height of the measured object lies within the measurement range of one stripe period.
After the above computation, the wrapped phase value θ1 along the first direction at the desired pixel (x, y) is obtained.
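The compensation of a single invalid pixel can be sketched as follows; the interior-pixel indexing and the sign convention of the 2π correction follow the reconstruction above and are assumptions on our part:

    import numpy as np

    def compensate_pixel(x, y, theta1, theta2, calib, phase_thresh=np.pi):
        # theta1, theta2: wrapped phase maps of directions 1 and 2 (H, W);
        # calib: phase-offset calibration table. (x, y) must not lie on the
        # image border, since the four axis neighbours are inspected.
        t0 = theta1[y, x]
        neighbours = (theta1[y, x - 1], theta1[y, x + 1],
                      theta1[y - 1, x], theta1[y + 1, x])
        value = theta2[y, x] + calib[y, x]        # raw compensation
        if any(t0 - ti > phase_thresh for ti in neighbours):
            value -= 2 * np.pi                    # downward phase jump
        elif any(t0 - ti < -phase_thresh for ti in neighbours):
            value += 2 * np.pi                    # upward phase jump
        return value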
Step 1104: from the obtained wrapped phase value θ1(x, y) along the first direction, the compensated wrapped phase value at the desired pixel along the first direction is obtained. Repeating this step for all pixels that are invalid along the first direction yields the compensated wrapped phase values of all invalid pixels along the first direction. The compensated wrapped phase values of all invalid pixels along the first direction, together with the wrapped phase values of the valid pixels along the first direction, make up the accurate wrapped phase values of all pixels along the first direction. This realizes the compensation of wrapped phase values based on the result of the phase-offset calibration described in step 508 above.
Furthermore, while the invalid wrapped phase values along the first direction are being computed, the wrapped phase values of the valid pixels along the first direction may be recorded directly, as described above, for the subsequent unwrapping computation; alternatively, the valid wrapped phase values of the first direction may optionally first be optimized using the phase-offset calibration table, to obtain more accurate wrapped phase values along the first direction, after which the optimized wrapped phase values are recorded and processed further.
Specifically, using a method such as least squares or weighted optimization, the valid wrapped phase values of the first direction and the wrapped phase values of the second direction are computed together with reference to the phase-offset calibration table, yielding optimized wrapped phase values for the first direction.
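One possible reading of this optimization step, sketched as a per-pixel weighted fusion on the unit circle (the weighting scheme is an assumption; the invention only names least squares and weighted optimization as example methods):

    import numpy as np

    def fuse_valid_phase(theta1, theta2, calib, w1=0.5):
        # Two estimates of the direction-1 wrapped phase: the measured theta1
        # and the direction-2 phase mapped through the calibration table.
        est2 = theta2 + calib
        s = w1 * np.sin(theta1) + (1 - w1) * np.sin(est2)
        c = w1 * np.cos(theta1) + (1 - w1) * np.cos(est2)
        return np.arctan2(s, c)   # circular mean avoids wrap artifacts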
For step 509 above, phase unwrapping is applied to the merged wrapped phase values along the first direction, yielding the phase values. The unwrapping formula is

φ(x, y) = θ(x, y) + 2πN,

where N is the order (number of periods) of the phase unwrapping, for example a suitable order computed by methods such as least squares or path integration.
Compared with the prior art, only the wrapped phase values of one direction need to be unwrapped, which reduces computation time.
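For illustration, the simplest path-integration unwrapping (the θ + 2πN accumulation applied along rows and columns; robust least-squares unwrapping is outside this sketch):

    import numpy as np

    def unwrap_2d(theta):
        # Naive unwrapping of an (H, W) wrapped-phase map: make the leftmost
        # column continuous, then unwrap every row starting from it.
        phi = np.unwrap(theta, axis=1)          # per-row unwrapping
        col = np.unwrap(theta[:, 0])            # continuous first column
        phi += (col - theta[:, 0])[:, None]     # re-anchor each row
        return phi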
For step 510 above, the height values of the target are computed from the phase values. The distance z(x, y) between the target surface and the reference plane at point p with coordinates (x, y) on the target can be computed by

z(x, y) = l0 · Δφ(x, y) / (Δφ(x, y) + 2π · f0 · B),   (1)

where the plane formed by the projector and the camera is parallel to the reference plane; l0 denotes the distance between the plane formed by the projector and the camera and the reference plane; Δφ(x, y) denotes the difference between the phase value φ1(x, y) along the first direction of point p with coordinates (x, y) on the target and the phase φ1r(x, y) along the first direction of the point with coordinates (x, y) on the reference plane; B denotes the distance between the projector and the camera; and f0 denotes the period of the sinusoidal grating of the projected stripe light.
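A direct transcription of formula (1) for whole phase maps (parameter names follow the definitions above; the function name is ours):

    import numpy as np

    def height_map(phi_obj, phi_ref, l0, B, f0):
        # phi_obj, phi_ref: unwrapped direction-1 phase on the target and on
        # the reference plane; returns z(x, y) per formula (1).
        dphi = phi_obj - phi_ref
        return l0 * dphi / (dphi + 2 * np.pi * f0 * B)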
The steps of judging the invalid-phase region in steps 1101 and 1102 above are also represented in the flowchart of Figure 12 for generating the merged phase values; an end-to-end sketch follows the flowchart steps below.
Step 1201: select the projectors for projecting several frames of structured light onto the target along the first and second directions respectively; steps 1202 and 1203 are then performed simultaneously;

Step 1202: project structured light with the optically calibrated pattern onto the target along direction 1, capture the pattern on the target along direction 1, and go to step 1204;

Step 1203: project structured light with the optically calibrated pattern onto the target along direction 2, capture the pattern on the target along direction 2, and go to step 1208;

Step 1204: generate grayscale texture images from the captured direction-1 pattern and analyze them to obtain the positions of the pixels on the target having invalid phase values along direction 1; although only two texture images are shown, more or fewer texture images may in practice be generated for analysis;

Step 1205: while step 1204 is performed, compute from the captured patterns the phase value of each pixel on the target along direction 1, where the wrapped phase value θ1(x, y) along direction 1 at point p with coordinates (x, y) on the target is

θ1(x, y) = arctan[(I4(x, y) − I2(x, y)) / (I1(x, y) − I3(x, y))],

where Ii(x, y) denotes the intensity at point (x, y) of the i-th phase-shifted image formed along the first direction, i = 1, 2, 3 or 4; the intensities of three or five frames of patterns may of course also be used in the computation, so as to measure the most suitable wrapped phase value;

Step 1206: compute the valid-phase segments and invalid-phase segments from the obtained positions of the pixels having invalid phase values along direction 1 and the phase value of each pixel;

Step 1207: judge whether a pixel has a valid direction-1 phase; if yes, go to step 1208; if no, go to step 1211;

Step 1208: compute the wrapped phase value of each pixel on the target along direction 2, where the wrapped phase value θ2(x, y) along direction 2 at point p with coordinates (x, y) on the target is

θ2(x, y) = arctan[(I4(x, y) − I2(x, y)) / (I1(x, y) − I3(x, y))],

where Ii(x, y) denotes the intensity at point (x, y) of the i-th phase-shifted image formed along the second direction, i = 1, 2, 3 or 4;

Step 1209: optimize the first- and second-direction wrapped phase values, for example by least squares or weighted optimization, computing the valid wrapped phase values of the first direction together with the wrapped phase values of the second direction with reference to the phase-offset calibration table to obtain optimized values of the valid wrapped phase values along the first direction; then go to step 1212;

Step 1210: perform the reference phase-offset calibration to obtain the phase-offset calibration table;

Step 1211: at the positions of the pixels having invalid wrapped phase values along direction 1, compensate the first-direction wrapped phase values using the direction-2 wrapped phase values obtained in step 1208 and the reference phase-offset values at those pixels from the phase-offset calibration table obtained in step 1210, obtaining the compensated wrapped phase values of the invalid region;

Step 1212: merge the compensated wrapped phase values of the invalid region of the first direction obtained in step 1211 with the optimized wrapped phase values of the valid region of the first direction obtained in step 1209, forming the wrapped phase values of the desired pixels in the valid and invalid regions of the first direction;

Step 1213: perform phase unwrapping, the phase value along the first direction being

φ1(x, y) = θ1(x, y) + 2πN,

where N denotes the order of the phase unwrapping and θ1(x, y) denotes the wrapped phase value at point p with coordinates (x, y) on the target;

Step 1214: compute the height. The height z(x, y) can be computed by

z(x, y) = l0 · Δφ(x, y) / (Δφ(x, y) + 2π · f0 · B),

with Δφ(x, y) = φ1(x, y) − φ1r(x, y) defined as in formula (1).
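Tying the flowchart together, an end-to-end sketch reusing the helper functions introduced above (wrapped_phase, invalid_phase_mask, unwrap_2d and height_map, all our names, with calib produced by build_calibration_table); the per-pixel four-point 2π correction of step 1103 is omitted here for brevity:

    import numpy as np

    def measure_height(frames1, frames2, gray1, calib, phi_ref,
                       l0, B, f0, low=30, high=220):
        theta1 = wrapped_phase(frames1)                  # step 1205
        theta2 = wrapped_phase(frames2)                  # step 1208
        invalid = invalid_phase_mask(gray1, low, high)   # steps 1204/1206
        est2 = theta2 + calib                            # step 1211
        est2 = np.arctan2(np.sin(est2), np.cos(est2))    # re-wrap
        merged = np.where(invalid, est2, theta1)         # step 1212
        phi1 = unwrap_2d(merged)                         # step 1213
        return height_map(phi1, phi_ref, l0, B, f0)      # step 1214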
Moreover, the arrangement of the projectors and the camera in the projection system can be chosen as required. As shown in Figures 13A-13C, viewed from above, the two projectors P1, P2 and the camera may be arranged in a straight line, or several projectors, for example three (P1, P2, P3) or four (P1, P2, P3, P4), may be arranged around the camera. In that case, because the positions of the projectors relative to the camera differ, the image each projector casts onto the target also differs. The setup angles of the projection system should be taken into account in the pattern-calibration step, so that the captured image of each projector has a uniform periodic distribution.
In the computation, optical pattern calibration is performed for all three or four projectors, and the phase values of all three or four directions are recorded when the phase-offset calibration table is computed. In the subsequent compensation, the direction of the calibration-table values used to compensate the first-direction wrapped phase values can be chosen according to need and the positions of the projectors. The remaining procedure is similar to the two-projector case and is not repeated here.
As shown in Figure 14, viewed from above, the two projectors and the camera may be arranged at an obtuse angle, the angle between projector P2 and the line on which projector P1 lies being θ.
The calibration steps of such an angled detection system and the structured light it forms are shown in Figures 15A and 15B.
For the case where the angle θ between projector P2 and the line on which projector P1 lies is less than 90°, the ideal sinusoidal pattern and a sinusoidal pattern at an arbitrary angle are first shown; optical pattern calibration is performed using the optical calibration parameters described above, yielding the calibrated pattern. The intensity of the light projected by projector P1 is computed as follows:
I = (1 − sin(i/T · 2π)) / 2 · 255,

where T is the number of pixels per stripe, I denotes the light intensity, and i denotes the index of the pixel sampling point.
The intensity of the light projected by projector P2 is computed as:
I′ = (1 − sin((x − (j − i·tan(angle)) + 1)/T · 2π)) / 2 · 255,

where angle denotes the angle between projector P2 and the line on which projector P1 lies, shown as θ in the figure; T is the number of pixels per stripe; i denotes the index of the pixel sampling point; I′ denotes the light intensity; x denotes the horizontal coordinate of the measured point; and j denotes the vertical coordinate of the measured point.
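A small sketch generating both fringe images from these formulas, under the assumption that the sampling index i runs along the horizontal coordinate x, with intensities quantized to 8 bits:

    import numpy as np

    def fringe_patterns(height, width, T, angle_rad):
        # Row (j) and column (x) coordinate grids of the projected image.
        j, x = np.mgrid[0:height, 0:width]
        I1 = (1 - np.sin(x / T * 2 * np.pi)) / 2 * 255           # projector P1
        I2 = (1 - np.sin((x - (j - x * np.tan(angle_rad)) + 1)
                         / T * 2 * np.pi)) / 2 * 255             # projector P2
        return I1.astype(np.uint8), I2.astype(np.uint8)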
Ultimately, whichever direction a projector is placed in, the stripe light cast onto the target can be made essentially the same, for example sinusoidal.
When more than two projectors are used in the computation, phase-offset calibration tables are generated for all directions and the phase-offset value of each pixel along each direction is computed; the patterns of the structured light on the target acquired along the at least two different directions are computed, and from the resulting patterns the wrapped phase values at each pixel on the target are computed for the structured-light pattern of each direction.
In the compensation of the region containing invalid phase values, once the invalid region along the first direction has been obtained, which of the other directions of the phase-offset calibration table to use for compensation can be decided from the projector angles. A suitable choice can be made from parameters entered into the system in advance, namely the direction that illuminates the invalid region best, with as little shadow and specular reflection as possible.
Figure 16 shows the images cast onto the target by the two projectors, and the field of view of the camera, when the projectors and the camera are arranged as in Figure 14, the field of view being the region where the fields of view of P1 and P2 overlap. The camera may use two different lenses, each capturing one projected pattern, so as to match the best field of view between the two projected patterns. Moreover, the two projectors may have different projection fields of view.
Figure 17 shows a simplified flowchart of the detection method according to a preferred embodiment of the present invention, which briefly comprises the following steps:
Step 1701: select the projectors;

Step 1702: projector 1 projects along direction 1;

Step 1703: obtain the images and compute wrapped phase values 1;

Step 1704: projector 2 projects along direction 2;

Step 1705: obtain the images and compute wrapped phase values 2;

Step 1706: from wrapped phase values 1 and 2 and the pre-generated phase-offset calibration table, compute all the valid wrapped phase values 1 along direction 1;

Step 1707: perform phase unwrapping on wrapped phase values 1;

Step 1708: compute the final height values.
It can be clearly seen from the figure that the present invention merges the wrapped phase values of the two directions immediately after the phase-wrapping step, obtaining all the valid wrapped phase values of one direction, and then unwraps only the valid wrapped phase values of that one direction, completing the unwrapping in a single step, thereby reducing computation time while maintaining computational accuracy.
The above are merely preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention admits various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall fall within the scope of protection of the present invention.

Claims (13)

  1. A method of detecting the 3D shape of a target, characterized by comprising the following steps:
    Step S1: projecting patterned structured light of at least two different directions onto the target, acquiring, along the at least two different directions, the structured-light patterns formed after reflection by the target, and computing the wrapped phase values at desired pixels on the target of the structured-light patterns along the at least two different directions;
    Step S2: analyzing the structured-light patterns acquired along the at least two different directions, and obtaining, from the gray values of the structured-light patterns, the pixel positions having invalid wrapped phase values along a first of the at least two different directions;
    Step S3: performing calibration processing on the invalid wrapped phase values of the first direction using the valid wrapped phase values, at the pixels having the invalid wrapped phase values, along a second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating the invalid wrapped phase values into compensated invalid wrapped phase values;
    Step S4: merging the valid wrapped phase values and the compensated invalid wrapped phase values at the desired pixels along the first direction into one set of merged wrapped phase values, so as to compute depth values at the desired pixels on the target.
  2. The method according to claim 1, characterized in that the step of obtaining, in step S2, the pixel positions having invalid wrapped phase values along the first of the at least two different directions is: analyzing the gray value and/or brightness value of each pixel acquired along the first direction, and, if the gray value or brightness value of a pixel is greater than a predetermined first threshold or less than a predetermined second threshold, judging the wrapped phase value along the first direction at that pixel to be an invalid wrapped phase value.
  3. The method according to any one of claims 1-2, characterized in that the calibration processing in step S3 comprises the following steps:
    Step S301: obtaining, using a phase-offset calibration table, the phase-offset value of the first direction and the phase-offset value of the second direction serving as the compensation direction at the pixel positions having invalid wrapped phase values along the first direction;
    Step S302: computing, from the phase-offset values and the valid wrapped phase values along the second direction, compensation wrapped phase values corresponding to the invalid wrapped phase values along the first direction;
    Step S303: replacing the original invalid wrapped phase values with the compensation wrapped phase values, obtaining the compensated invalid wrapped phase values.
  4. The method according to claim 3, characterized in that the method of obtaining the phase-offset calibration table comprises the following steps:
    Step S401: computing, from the structured-light patterns on a reference plane captured separately in each of at least two different directions, the wrapped phase values at desired pixels of the structured-light pattern captured along each direction, the structured light being structured light projected onto the reference plane along the at least two different directions and reflected by the reference plane;
    Step S402: computing, from the wrapped phase values, the phase values at the desired pixels on the reference plane of the structured-light patterns along the at least two different directions;
    Step S403: computing the phase-offset value of each pixel along each direction; and
    Step S404: recording the phase-offset value of each pixel along each direction as a calibration value, obtaining the phase-offset calibration table.
  5. The method according to claim 3, characterized in that, in step S302, the phase-offset value and the valid wrapped phase value are added or subtracted, obtaining the compensation wrapped phase value corresponding to the invalid wrapped phase value.
  6. The method according to any one of claims 1-5, characterized by further comprising, before step S1, a step of performing grid-fitting optical calibration of the projection system, having projectors and a camera, that performs the method, comprising the following steps:
    Step S501: optically simulating the projection of a predetermined grid pattern of a projector onto a predetermined plane;
    Step S502: fitting the optical parameters of the projection distortion according to the deformation of the projected grid pattern;
    Step S503: modulating the light emitted by the projector according to the fitted optical distortion parameters, so that the light emitted by the projector is an orthographic projection.
  7. The method according to claim 6, characterized in that there are two projectors projecting spaced stripe light along two directions, the light strips of the two directions emitted by the modulated projectors are parallel within the same phase period, and the light strips emitted by each modulated projector in each direction have the same width spacing in the same plane.
  8. The method according to claim 3, characterized in that the wrapped phase values in step S2 are computed using an at-least-four-point comparison algorithm, the computation being as follows:
    first, denote the wrapped phase value θ1(x, y) along the first direction of the point with coordinates (x, y) on the target by θ′0, and denote the wrapped phase values along the first direction of the four points with coordinates (x, y+1), (x, y−1), (x−1, y), (x+1, y) by θ′i: θ′1 is the first-direction wrapped phase value θ(x−1, y) of point (x−1, y), θ′2 is the first-direction wrapped phase value θ(x+1, y) of point (x+1, y), θ′3 is the first-direction wrapped phase value θ(x, y−1) of point (x, y−1), and θ′4 is the first-direction wrapped phase value θ(x, y+1) of point (x, y+1);
    if there exists (θ′0 − θ′i) > phase threshold, then θ1(x, y) = θ2(x, y) + Δφ(x, y) − 2π;
    if there exists (θ′0 − θ′i) < −phase threshold, then θ1(x, y) = θ2(x, y) + Δφ(x, y) + 2π;
    otherwise θ1(x, y) = θ2(x, y) + Δφ(x, y);
    where θ1(x, y) denotes the wrapped phase value of point (x, y) along the first direction, θ2(x, y) denotes the wrapped phase value of point (x, y) along the second direction, Δφ(x, y) is the phase-offset value at that pixel from the phase-offset calibration table, and the phase threshold is a suitable value chosen according to actual need.
  9. The method according to any one of claims 1-8, characterized in that computing the depth values at the desired pixels on the target in step S4 comprises: performing phase unwrapping on the set of merged wrapped phase values to obtain unwrapped phase values, and computing the depth values at the desired pixels on the target from the unwrapped phase values.
  10. A projection system for detecting the 3D shape of a target, characterized by comprising the following components:
    at least two projectors for projecting patterned structured light of at least two different directions onto the target;
    a camera for acquiring the structured-light patterns after the structured light is reflected by the target;
    a memory;
    a processor that computes, from the structured-light patterns acquired by the camera, the wrapped phase values at desired pixels on the target of the structured-light patterns along the at least two different directions; analyzes the structured-light patterns acquired along the at least two different directions and obtains, from the gray values of the structured-light patterns, the pixel positions having invalid wrapped phase values along a first of the at least two different directions; performs calibration processing on the invalid wrapped phase values of the first direction using the valid wrapped phase values, at the pixels having the invalid wrapped phase values, along a second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating the invalid wrapped phase values into compensated invalid wrapped phase values; and merges the valid wrapped phase values and the compensated invalid wrapped phase values at the desired pixels along the first direction into one set of merged wrapped phase values, so as to compute depth values at the desired pixels on the target.
  11. The projection system according to claim 10, characterized in that computing the depth values at the desired pixels on the target comprises: performing phase unwrapping on the set of merged wrapped phase values to obtain unwrapped phase values, and computing the depth values at the desired pixels on the target from the unwrapped phase values.
  12. A phase-offset calibration method, characterized by comprising the following steps:
    Step S601: computing, from the structured-light patterns on a reference plane captured separately in each of at least two different directions, the wrapped phase values at desired pixels of the structured-light pattern captured along each direction, the structured light being structured light projected onto the reference plane along the at least two different directions and reflected by the reference plane;
    Step S602: computing, from the wrapped phase values, the phase values at the desired pixels on the reference plane of the structured light along the at least two different directions;
    Step S603: computing the phase-offset value of each pixel along each direction; and
    Step S604: recording the phase-offset value of each pixel along each direction to obtain a phase-offset calibration table, and, during actual measurement, calibrating the invalid phase values of the desired direction according to the phase-offset calibration table so as to compensate the invalid phase values.
  13. A system for detecting the 3D shape of a target, characterized by comprising the following modules:
    a first computing module that computes the wrapped phase values at desired pixels on the target of the structured-light patterns formed after reflection by the target and acquired along at least two different directions;
    an analysis module that analyzes the structured-light patterns acquired along the at least two different directions;
    a second computing module that obtains, from the gray values of the structured-light patterns, the pixel positions having invalid wrapped phase values along a first of the at least two different directions;
    a calibration module that performs calibration processing on the invalid wrapped phase values of the first direction using the valid wrapped phase values, at the pixels having the invalid wrapped phase values, along a second direction serving as the compensation direction, so as to compensate the phase offset, thereby compensating the invalid wrapped phase values into compensated invalid wrapped phase values;
    a merging module that merges the valid wrapped phase values and the compensated invalid wrapped phase values at the desired pixels along the first direction into one set of merged wrapped phase values; and
    a third computing module that computes the depth values at the desired pixels on the target.
PCT/CN2015/074254 2015-03-13 2015-03-13 Phase-offset calibration method, 3D shape detection method and system, and projection system WO2016145582A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2015/074254 WO2016145582A1 (zh) 2015-03-13 2015-03-13 Phase-offset calibration method, 3D shape detection method and system, and projection system
CN201510115382.7A CN104713497B (zh) 2015-03-13 2015-03-17 Phase-offset calibration method, 3D shape detection method and system, and projection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/074254 WO2016145582A1 (zh) 2015-03-13 2015-03-13 Phase-offset calibration method, 3D shape detection method and system, and projection system

Publications (1)

Publication Number Publication Date
WO2016145582A1 true WO2016145582A1 (zh) 2016-09-22

Family

ID=56918155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/074254 WO2016145582A1 (zh) 2015-03-13 2015-03-13 Phase-offset calibration method, 3D shape detection method and system, and projection system

Country Status (1)

Country Link
WO (1) WO2016145582A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108253907A * 2018-02-01 2018-07-06 深圳市易尚展示股份有限公司 Three-dimensional measurement method and apparatus based on Hilbert-transform phase error correction
CN112198494A * 2019-06-20 2021-01-08 北京小米移动软件有限公司 Time-of-flight module calibration method, apparatus and system, and terminal device
CN113188478A * 2021-04-28 2021-07-30 伏燕军 Hybrid calibration method for a telecentric microscopic three-dimensional measurement system
CN114023232A * 2021-11-05 2022-02-08 京东方科技集团股份有限公司 Display calibration method and apparatus, display, and smart watch
CN114485404A * 2022-01-30 2022-05-13 嘉兴市像景智能装备有限公司 Path-based height-mapping calibration compensation method
CN114543704A * 2021-12-29 2022-05-27 西安邮电大学 End-to-end absolute phase resolution method
CN115052136A * 2022-05-10 2022-09-13 合肥的卢深视科技有限公司 Structured-light projection method, electronic device, and storage medium
CN115211849A * 2021-04-16 2022-10-21 天津大学 Method for non-invasive detection of blood glucose concentration at the wrist using microwave S21 unwrapped phase
CN116571875A * 2023-07-13 2023-08-11 西南交通大学 Integrated laser machining and inspection device based on active projection technology, and inspection method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101466998A * 2005-11-09 2009-06-24 几何信息学股份有限公司 Method and apparatus for three-dimensional absolute coordinate surface imaging
US20110080471A1 * 2009-10-06 2011-04-07 Iowa State University Research Foundation, Inc. Hybrid method for 3D shape measurement
CN102538706A * 2010-11-19 2012-07-04 株式会社高永科技 Method and apparatus for surface contour mapping
CN103673924A * 2012-09-11 2014-03-26 株式会社其恩斯 Shape measuring device, shape measuring method, and shape measuring program
WO2014091214A1 * 2012-12-12 2014-06-19 The University Of Birmingham Surface geometry imaging
CN104111038A * 2014-07-07 2014-10-22 四川大学 Method for repairing saturation-induced phase errors using a phase-fusion algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15884966; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15884966; Country of ref document: EP; Kind code of ref document: A1)