WO2022052313A1 - Calibration method for 3d structured light system, and electronic device and storage medium - Google Patents

Calibration method for 3d structured light system, and electronic device and storage medium Download PDF

Info

Publication number
WO2022052313A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
coordinates
projector
point
calibration plate
Prior art date
Application number
PCT/CN2020/130543
Other languages
French (fr)
Chinese (zh)
Inventor
殷习全
彭思龙
Original Assignee
苏州中科全象智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 苏州中科全象智能科技有限公司
Publication of WO2022052313A1 publication Critical patent/WO2022052313A1/en

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254: Projection of a pattern, viewing through a pattern, e.g. moiré
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2536: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object, using several gratings with variable grating pitch, projected on the object with the same angle of incidence

Definitions

  • the present application relates to the field of computer vision technology, for example, to a calibration method of a 3D structured light system, an electronic device, and a storage medium.
  • 3D structured light imaging technology is mainly used in industries such as precision measurement and defect detection, covering manufacturing fields such as surface assembly technology, automotive, aviation, semiconductor, medical, pharmaceutical, and food processing.
  • Compared with two-dimensional inspection, 3D inspection technology provides an additional dimension of height information. In the actual measurement process it can not only accurately locate the measured object and measure two-dimensional information such as size and color, but also acquire the surface contour of the object, so that complex properties such as flatness, slope, curvature and flaws of the object surface can be measured and analyzed.
  • 3D structured light measurement has a series of advantages such as non-contact operation, high speed, high precision and strong anti-interference ability, and is a very important direction of precision measurement in the 3D inspection industry.
  • The 3D structured light calibration algorithm determines the detection accuracy of the 3D measurement system.
  • The 3D structured light calibration algorithm mainly establishes the internal parameters of the camera and of the structured light projector, as well as the external parameters of the system they form; only by determining these system model parameters can the 3D depth information of the measured object be recovered.
  • In the related art, structured light 3D systems are calibrated according to the principle of Zhang Zhengyou's calibration method, but the specific implementations differ, and so, considerably, does the accuracy; providing a higher-precision calibration method has therefore gradually become a research topic.
  • A method for calibrating a 3D structured light module includes the following steps: S0, making a calibration board and printing on it a checkerboard that covers the entire board; S1, taking a calibration photo: placing the 3D structured light module directly in front of the plane of the calibration board, fixing the relative position of the module and the board, and successively capturing an infrared photo of the calibration board and a photo of a dot matrix projected onto the board, the two photos forming one group of calibration photos; S2, taking calibration photo groups: changing the relative position of the 3D structured light module and the calibration board and repeating step S1 to capture multiple groups of calibration photos; S3, calibrating the 3D structured light module: using a calibration algorithm to calibrate the infrared camera from the multiple groups of calibration photos, and then calibrating the dot matrix projector to obtain its geometric relationship relative to the infrared camera.
  • The calibration board has a white background, the checkerboard cells are square, and the gray level of the black cells in the checkerboard is adjustable;
  • the calibration algorithm is the Zhang Zhengyou plane calibration method;
  • the calibration process includes: S31, extracting pixel coordinates: extracting the pixel coordinates of the checkerboard grid points from the infrared photo of the calibration board and setting the corresponding world coordinate values;
  • S32, calibrating the infrared camera: computing the homography mapping from the world coordinates of the checkerboard grid points to the pixel coordinates, and computing the corresponding internal and external parameters of the infrared camera from the constraints on the rotation matrix;
  • S33, computing the conversion relationship: computing the conversion between the spot coordinates projected by the dot matrix and the dot-matrix geometry of the projector, converting the checkerboard grid-point pixel coordinates of the infrared photo into pixel coordinates of the dot matrix projector, and computing the internal and external parameters of the dot matrix projector with the Zhang Zhengyou plane calibration algorithm.
  • This scheme uses a checkerboard calibration board and needs to detect the corner points of the board as mark points. In actual calibration applications, when the pose of the calibration board changes, corner extraction accuracy is easily degraded by illumination and uneven imaging quality, whereas a calibration board with circular targets allows the circles to be extracted reliably and is less easily disturbed.
  • a combination pattern calibration plate and a structured light camera parameter calibration method are disclosed.
  • the pattern on the calibration plate includes an identification mark and a code block area;
  • the identification mark is a closed quadrilateral delimited by a black-and-white boundary, with black pixels inside the boundary and white pixels outside;
  • the code block area is located inside the quadrilateral boundary, and the code block is encoded information composed of N ⁇ N black and white square color blocks.
  • The calibration method includes the following steps: S1: using a plurality of the combination pattern calibration boards, each having a code block area with a different pattern; S2: acquiring images of the multiple distributed combination pattern calibration boards and preprocessing them; S3: computing the local gradient of each image pixel and setting pixel regions with consistent gradient magnitude and direction as connected domains; S4: taking connected domains whose gradient exceeds a set threshold as the edges of the calibration pattern and fitting the edge pixels into different line segments with straight lines; S5: traversing all line segments, detecting whether four successively adjacent segments can form a complete quadrilateral, and checking whether every region with a complete quadrilateral contains a valid coding area; S6: matching the detected code with the patterns of the code library, and computing the coordinates of the corresponding pattern on the calibration board and the image coordinates of the intersections of the four segments of the quadrilateral; S7: calibrating the structured light camera parameters from the pixel coordinates in the image and the corresponding calibration board coordinates.
  • the related technologies have at least the following deficiencies:
  • the present application provides a calibration method, electronic device and storage medium for a 3D structured light system.
  • The method uses a calibration board with a dot pattern, which offers higher precision though a more complex calibration procedure, and uses higher-precision multi-frequency phase-shift coding to find the correspondence between the camera and the projection sensor, locally improving phase accuracy so that the positional relationship between the sensors can be determined more precisely.
  • A flexible automatic keystone correction algorithm is used to sort the dots, so that the calibration board can be placed in essentially any posture and more calibration data can be extracted.
  • the present application provides a method for calibrating a 3D structured light system, comprising the following steps:
  • The step of making a calibration board includes: making a system calibration board and burning it, where the pattern on the board is a plurality of circular targets, the center-to-center distances of adjacent circular targets are equal, the circular targets are arranged in rows and columns, and the circular targets are of different sizes;
  • the step of collecting the calibration plate image includes:
  • Adjust the posture of the calibration board and repeat the step of acquiring calibration board images until the number of calibration board postures reaches a preset target value, obtaining a plurality of calibration board images;
  • the step of calibrating the structured light system according to a plurality of the collected calibration plate images includes:
  • the absolute phase value at each circle-center coordinate is obtained by a bilinear interpolation algorithm;
  • the projector-image coordinates corresponding to the camera-image coordinates of each circle center are obtained;
  • the internal parameters, distortion coefficients and external parameters of the camera and the projector are obtained by Zhang Zhengyou's calibration method; and
  • the calibrated transformation matrix between the camera and projector coordinate systems is obtained.
  • the application also provides an electronic device, comprising:
  • a processor; and a memory configured to store a program which, when executed by the processor, causes the processor to implement the calibration method for a 3D structured light system as described above.
  • the present application also provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are used to execute the calibration method for a 3D structured light system as described above.
  • FIG. 3 is a schematic diagram of the sorting of dot patterns in an angled calibration image according to an optional embodiment of the present application
  • FIG. 4 is a schematic structural diagram of an electronic device according to an optional embodiment of the present application.
  • the present application provides a method for calibrating a 3D structured light system, comprising the following steps:
  • The step of making a calibration board is to make a system calibration board and burn it.
  • The pattern on the calibration board is a plurality of circular targets; the center-to-center distances of adjacent circular targets are equal, the circular targets are arranged in rows and columns, and the circular targets are of different sizes;
  • Adjust the posture of the calibration board and repeat the calibration board image acquisition step until the number of calibration board postures reaches the preset target value, obtaining a plurality of calibration board images;
  • the calibration step is to calibrate the structured light system according to a plurality of the collected calibration plate images, which may include the following steps:
  • the absolute phase value at each circle-center coordinate is obtained by a bilinear interpolation algorithm;
  • the projector-image coordinates corresponding to the camera-image coordinates of each circle center are obtained;
  • the internal parameters, distortion coefficients and external parameters of the camera and the projector are obtained by Zhang Zhengyou's calibration method;
  • the calibrated transformation matrix between the camera and projector coordinate systems is obtained.
  • The parameters at corresponding positions in all the obtained transformation matrices are summed and averaged, and the result is used as the final transformation matrix between the projector and camera coordinate systems.
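  • As a minimal illustration of this averaging step (the per-pose results are assumed here to be 4x4 homogeneous rigid-transform matrices; variable names are hypothetical), the element-wise mean can be computed as follows:

    import numpy as np

    def average_transforms(per_pose_transforms):
        # Element-wise average of the camera-to-projector transformation matrices
        # obtained under each calibration-board pose (assumed 4x4 homogeneous matrices).
        stack = np.stack(per_pose_transforms, axis=0)   # shape: (num_poses, 4, 4)
        return stack.mean(axis=0)                       # average each (i, j) entry over all poses

    # Hypothetical example with two poses: identity and a small translation
    T1 = np.eye(4)
    T2 = np.eye(4)
    T2[:3, 3] = [0.1, 0.0, -0.2]
    print(average_transforms([T1, T2]))

  Note that a plain element-wise mean is what the passage describes; a rotation-aware average (for example via quaternions) would be a refinement beyond what is stated here.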
  • the parameters obtained by calibration include: camera internal parameters and distortion coefficients, projector internal parameters and distortion coefficients, and rigid transformation relationship matrix between camera and projector coordinate systems.
  • The calibration method for the 3D structured light system further includes: before obtaining the absolute phase value at each circle-center coordinate by bilinear interpolation, using a multi-frequency multi-step phase-shift algorithm to obtain the absolute phase values of the calibration board plane phase field in the horizontal and vertical directions.
  • A ten-step phase-shift code and the three-frequency heterodyne method can be used to obtain the absolute phase values in the horizontal and vertical directions, including the following steps:
  • sinusoidal grating fringe patterns of different frequencies are projected onto the calibration board in the horizontal and vertical directions;
  • the sinusoidal grating fringe patterns use three different wavelengths λ1, λ2, λ3, whose corresponding phase principal values are φ1, φ2, φ3; ten images are acquired for each projected pattern for the ten-step phase-shift calculation;
  • n_i is the fringe order of a point on the surface of the measured object in the corresponding grating pattern, and n_i consists of an integer part N_i and a fractional part Δn_i;
  • φ_i is the wrapped phase of the corresponding grating;
  • Φ_i is the absolute phase of the corresponding grating;
  • Extracting the center positions of all circular targets in the calibration board image may include the following steps:
  • Least-squares ellipse fitting is performed on the obtained sub-pixel edge contour points of each circle to obtain the center of each ellipse, which is taken as the circle-center position of the corresponding circular target.
  • sorting all circle centers may include the following steps:
  • the sorting is performed as follows:
  • determining the order of the center of each circle may include the following steps:
  • the coordinates of each circular target are obtained by blob analysis, and the distance from each of these points to every estimated point is calculated;
  • the point with the smallest distance is taken as the point at the corresponding position, and the points are sorted according to the coordinate order.
  • determining the position coordinates of the five circular targets in the calibration plate image may include the following steps:
  • the point with the smaller distance sum is the first large circle C1 of the five circular targets; the first large circle lies on the horizontal centerline of the pattern, to the left of the vertical centerline;
  • the point with the larger distance sum is the second large circle C2 of the five circular targets; the second large circle lies on the horizontal centerline of the pattern, to the right of the vertical centerline;
  • In size, the five large circles have equal radii, slightly larger than the radii of the remaining circles on the calibration board. In arrangement, the two large circles that are farthest apart lie in the same row as the center circle of the calibration board pattern, to the left and right of the pattern's middle circle, each separated from the center circle by two small circles.
  • The two large circles that are closest together lie in the same row and are adjacent to each other.
  • One of these circles is in the same column as the pattern's center circle and is separated from it by one small circle.
  • When these two circles are below the horizontal centerline of the pattern, one of the large circles is to the right of the vertical centerline; the last large circle is also in the same column as the pattern's center circle, on the other side of the pattern's horizontal centerline, likewise separated by one small circle.
  • During calibration the calibration board is placed in an arbitrary pose; once the positions of the five large circles are determined, direction vectors along the horizontal and vertical directions of the pattern can be obtained, giving the approximate positions of all points in the pattern.
  • The real circle centers extracted by ellipse fitting are compared with the estimated centers by center-to-center distance, and the center with the smallest distance is the circle at the corresponding position.
  • estimating the coordinates of the small dots may include the following steps:
  • the coordinates of the small dots are estimated from the known coordinates of the large circle C1 and the two direction components along the horizontal and vertical directions of the calibration board, and the estimated position coordinates of all circular targets are stored.
  • using a bilinear interpolation algorithm to obtain the absolute phase values of each circle center coordinate in the horizontal and vertical directions may include the following steps:
  • the main phase value corresponding to the center of the circle is obtained by bilinear interpolation.
  • the interpolation formula is as follows:
  • ⁇ (i+u,j+v) (1-u)(1-v) ⁇ (i,j)+(1-u)v ⁇ (i,j+1)+u(1-v) ⁇ ( i+1,j)+uv ⁇ (i+1,j+1);
  • is the main value of the phase
  • i, j are the integer parts of the calculated circle center coordinates in the vertical and horizontal directions in the image coordinate system;
  • u, v are the fractional parts of the calculated circle center coordinates in the vertical and horizontal directions in the image coordinate system;
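  • A minimal sketch of this interpolation, assuming the phase is available as a per-pixel array and that the circle-center coordinate splits into the integer parts i, j and fractional parts u, v defined above:

    import numpy as np

    def interpolate_phase(phase, center_rc):
        # Bilinearly interpolate a per-pixel phase map at a sub-pixel circle center.
        # phase     : 2-D array of phase values, indexed [row, col]
        # center_rc : (row, col) sub-pixel coordinates of the circle center
        i, j = int(np.floor(center_rc[0])), int(np.floor(center_rc[1]))   # integer parts
        u, v = center_rc[0] - i, center_rc[1] - j                          # fractional parts
        return ((1 - u) * (1 - v) * phase[i, j]
                + (1 - u) * v * phase[i, j + 1]
                + u * (1 - v) * phase[i + 1, j]
                + u * v * phase[i + 1, j + 1])

    # Hypothetical usage: a 4x4 phase map and a sub-pixel center at (1.25, 2.5)
    phase = np.linspace(0, 2 * np.pi, 16).reshape(4, 4)
    print(interpolate_phase(phase, (1.25, 2.5)))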
  • phase principal value is obtained as follows:
  • the light intensity distribution function of the multi-step phase-shift grating is:
  • I_k(x,y) = I′(x,y) + I″(x,y)·cos(φ(x,y) + 2π(k-1)/n)
  • n is the number of steps in the multi-step phase shift method
  • x, y are pixel coordinates
  • I_k(x, y) is the gray value of the pixel at position (x, y) in the k-th phase-shifted image
  • I'(x,y) is the average gray level of the image
  • I′′(x,y) is the grayscale modulation of the image
  • φ(x,y), the principal value of the phase, is then computed from the n phase-shifted images, where:
  • n is the number of steps in the multi-step phase shift method
  • x, y are pixel coordinates
  • I_k is the gray value of the pixel
  • N is the number of grating fringe periods
  • ⁇ (x,y) is the main value of the phase.
  • the light-intensity distribution function of the ten-step phase-shift method grating is:
  • I_k(x,y) = I′(x,y) + I″(x,y)·cos(φ(x,y) + 2π(k-1)/10)
  • k = 1, ..., 10;
  • x, y are pixel coordinates
  • I_k(x, y) is the gray value of the pixel at position (x, y) in the k-th phase-shifted image
  • I'(x,y) is the average gray level of the image
  • I′′(x,y) is the grayscale modulation of the image
  • φ(x,y), the principal value of the phase, is then computed from the ten phase-shifted images, where:
  • k = 1, ..., 10;
  • x, y are pixel coordinates
  • I_k is the gray value of the pixel at this point
  • N is the number of grating fringe periods
  • ⁇ (x,y) is the main value of the phase.
  • the pixel point corresponding to the phase principal value obtained by the above method is an integer coordinate.
  • the phase principal value corresponding to the circle center is obtained by bilinear interpolation.
  • the interpolation formula is as follows:
  • ⁇ (i+u,j+v) (1-u)(1-v) ⁇ (i,j)+(1-u)v ⁇ (i,j+1)+u(1-v) ⁇ ( i+1,j)+uv ⁇ (i+1,j+1)
  • is the main value of the phase
  • i, j are the coordinates in the vertical and horizontal directions of the point in the camera image used for interpolation
  • u, v are the coordinates in the vertical and horizontal directions of the point in the projector image used for interpolation
  • the coordinates in the projector image corresponding to the coordinates of each circle center in the camera image are obtained by the following formula:
  • u_p is the coordinate of the circle-center point c in the u direction of the projector image;
  • v_p is the coordinate of the circle-center point c in the v direction of the projector image;
  • N is the number of grating fringe periods;
  • W is the resolution of the projector in the horizontal direction;
  • H is the resolution of the projector in the vertical direction;
  • Φ_u(u_c, v_c) is the absolute phase value in the vertical direction at the center point c;
  • Φ_v(u_c, v_c) is the absolute phase value in the horizontal direction at the center point c;
  • The calibration method for the 3D structured light system further includes: establishing the mapping relationship between the camera, the projector and the world coordinate system according to the internal parameters, distortion coefficients and external parameters of the camera and the projector and the coordinate-system transformation matrix between the camera and the projector; this can include the following steps:
  • the absolute phase grayscale image is obtained; here, the three-frequency heterodyne method can be used;
  • N is the number of grating fringe periods
  • W is the resolution of the projector in the horizontal direction
  • ⁇ (u c ,v c ) is the absolute phase value of the pixel
  • u p is the coordinate in the vertical direction in the projector image corresponding to the pixel.
  • the three-dimensional coordinates (X_W, Y_W, Z_W) of the unique point p are obtained by the following imaging-principle formula:
  • A_C and A_P are the internal parameter matrices of the camera and the projector, respectively;
  • M_C and M_P are the external parameter matrices of the camera and the projector, respectively;
  • S_C and S_P are the scale factors of the camera and the projector, respectively;
  • (u_c, v_c) and (u_p, v_p) are the image coordinates of the camera and the projector, respectively, both of which are distortion-corrected using the pre-calibrated system distortion parameters;
  • a pixel coordinate point in the camera corresponds to a coordinate in a horizontal or vertical direction in the projector image.
  • A vertical corresponding line in the projector image can be determined from the vertical absolute phase value; that is, the coordinate of the corresponding projector-image point in the vertical direction is determined, giving the column coordinate of the point. Similarly, a horizontal corresponding line can be determined from the horizontal absolute phase value;
  • that is, the coordinate of the corresponding projector-image point in the horizontal direction is determined, giving the row coordinate of the point.
  • Dot sorting includes the following steps:
  • the coordinates of the first point are pre-estimated, so that from the coordinates of the first point and the two direction components along the horizontal and vertical directions of the calibration board
  • the coordinates of all 99 points are pre-estimated (the coordinates of the large circles have already been extracted preferentially and their positions can be stored directly without estimation);
  • from the centroid coordinates of all points obtained by blob analysis, the distance from each point to the 99 pre-estimated points is calculated; the point with the smallest distance is the point at the corresponding position, which completes the sorting and gives the result shown in the figure.
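  • A minimal sketch of this nearest-predicted-point assignment, assuming the pre-estimated grid positions and the detected blob centroids are already available as arrays (variable names are hypothetical; a fuller implementation would also prevent two detections from claiming the same slot):

    import numpy as np

    def sort_dots(detected, predicted):
        # Assign to each pre-estimated grid position the nearest detected blob centroid.
        # detected  : (M, 2) array of blob centroids from the camera image
        # predicted : (K, 2) array of pre-estimated dot positions, in the desired order
        d = np.linalg.norm(predicted[:, None, :] - detected[None, :, :], axis=2)
        nearest = np.argmin(d, axis=1)          # closest detection for each predicted slot
        return detected[nearest]                # detections re-ordered to match `predicted`

    # Hypothetical example: three slots, detections given in scrambled order
    predicted = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
    detected = np.array([[19.7, 0.2], [0.1, -0.3], [10.2, 0.1]])
    print(sort_dots(detected, predicted))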
  • The three-dimensional world coordinates in space are (X_W, Y_W, Z_W), i.e., the physical coordinates of the calibration board; (u, v) are the camera-image or projector-image coordinates of the extracted circle center, and m_ij are the elements of the correspondence matrix.
  • Since the center-to-center distances are equal, a plane coordinate system can be established on the board.
  • From the obtained circle-center coordinates of each group of camera images and projector images, together with the physical coordinates of the calibration board, Zhang Zhengyou's calibration algorithm yields the internal parameters and distortion coefficients of the camera and the projector, and each group of images yields a set of external parameters for the corresponding camera and projector.
  • The internal parameters, distortion coefficients and a set of external parameters determine a set of homography matrices, namely the M matrices above.
  • A new matrix can then be obtained, which is the relationship matrix between the camera and the projector under one pose; the elements at each (i, j) position of the relationship matrices obtained under all poses are then summed and averaged to give the transformation matrix between the camera and the projector.
  • The pinhole model in step 5 contains three equations; after eliminating Z_C, two linear equations involving the m_ij can be obtained, namely:
  • From the camera and projector homography matrices obtained by Zhang Zhengyou's calibration method, four such linear equations are obtained; since there are only three unknowns X_W, Y_W and Z_W, the three-dimensional coordinates of the point can be obtained by solving the simultaneous equations.
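  • A sketch of this linear triangulation, assuming M_c and M_p are the full 3x4 camera and projector projection matrices from calibration and that the image coordinates are already undistorted; each view contributes two linear equations and the over-determined system is solved by least squares:

    import numpy as np

    def triangulate(Mc, Mp, uv_cam, uv_proj):
        # Recover (Xw, Yw, Zw) from one camera point and its projector correspondence.
        # Each view contributes two equations: (u*m3 - m1).X = 0 and (v*m3 - m2).X = 0,
        # with X = [Xw, Yw, Zw, 1] in homogeneous coordinates.
        rows = []
        for M, (u, v) in ((Mc, uv_cam), (Mp, uv_proj)):
            rows.append(u * M[2] - M[0])
            rows.append(v * M[2] - M[1])
        A = np.asarray(rows)                                   # 4x4 system
        X, *_ = np.linalg.lstsq(A[:, :3], -A[:, 3], rcond=None)  # solve for the 3 unknowns
        return X

    # Hypothetical example with two simple projection matrices:
    Mc = np.hstack([np.eye(3), np.zeros((3, 1))])                # camera at the origin
    Mp = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # projector shifted along X
    point = np.array([0.2, 0.1, 2.0, 1.0])
    uvc = (Mc @ point)[:2] / (Mc @ point)[2]
    uvp = (Mp @ point)[:2] / (Mp @ point)[2]
    print(triangulate(Mc, Mp, uvc, uvp))                         # ~ [0.2, 0.1, 2.0]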
  • Since the extrinsic-parameter and transformation relationship matrices contain only the rigid relationship of rotation and translation, they do not change the shape or size of the object.
  • Through the external parameter matrix, three-dimensional coordinates in space can be converted into the camera coordinate system, and the position of a point in the camera coordinate system under the projector coordinate system can be obtained using the conversion relationship matrix between the camera and the projector.
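  • For example, with a rotation R and translation t from the camera coordinate system to the projector coordinate system (the numbers below are made up for illustration), a point is transferred between the two frames as follows:

    import numpy as np

    # Hypothetical camera-to-projector rigid transform: rotation about Y plus a baseline shift
    theta = np.deg2rad(10.0)
    R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    t = np.array([-0.15, 0.0, 0.02])              # made-up baseline, in meters

    T_cam_to_proj = np.eye(4)
    T_cam_to_proj[:3, :3] = R
    T_cam_to_proj[:3, 3] = t

    P_cam = np.array([0.05, -0.02, 0.80, 1.0])    # a point in camera coordinates (homogeneous)
    P_proj = T_cam_to_proj @ P_cam                # the same point in projector coordinates
    print(P_proj[:3])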
  • FIG. 4 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment. As shown in FIG. 4 , the electronic device includes: one or more processors 110 and a memory 120 . A processor 110 is taken as an example in FIG. 4 .
  • the electronic device may further include: an input device 130 and an output device 140 .
  • the processor 110 , the memory 120 , the input device 130 and the output device 140 in the electronic device may be connected by a bus or in other ways, and the connection by a bus is taken as an example in FIG. 4 .
  • the memory 120 can be configured to store software programs, computer-executable programs, and modules.
  • the processor 110 executes various functional applications and data processing by running the software programs, instructions and modules stored in the memory 120 to implement any one of the methods in the foregoing embodiments.
  • the memory 120 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the electronic device, and the like.
  • the memory may include volatile memory such as random access memory (Random Access Memory, RAM), and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage devices.
  • Memory 120 may be a non-transitory computer storage medium or a transitory computer storage medium.
  • The non-transitory computer storage medium may be, for example, at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • memory 120 may optionally include memory located remotely from processor 110, which may be connected to the electronic device via a network. Examples of such networks may include the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 130 may be configured to receive input numerical or character information, and to generate key signal input related to user settings and function control of the electronic device.
  • the output device 140 may include a display device such as a display screen.
  • This embodiment further provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are used to execute the above method.
  • non-transitory computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a RAM, or the like.
  • The application adopts a calibration board with circular targets and obtains the sub-pixel position of each circle center by extracting the sub-pixel contour of the circle and performing ellipse fitting, which gives higher accuracy.
  • Even though the calibration board is placed at random poses, which results in inconsistent illumination brightness, the circular target features remain distinct, so the method adapts to illumination changes and accurately extracts the circle-center positions;
  • the present application uses bilinear interpolation to obtain the absolute phase value corresponding to each circle center coordinate, and then uses the absolute phase value to calculate the corresponding projector image coordinate.

Abstract

A calibration method for a 3D structured light system, the method comprising: manufacturing a calibration board; collecting calibration board images; and calibrating a structured light system according to the collected plurality of calibration board images. Further provided are an electronic device and a storage medium.

Description

Calibration Method for 3D Structured Light System, Electronic Device and Storage Medium
The present disclosure claims priority to Chinese patent application No. 202010951187.9, filed with the Chinese Patent Office on September 11, 2020, the entire contents of which are incorporated into the present disclosure by reference.
Technical Field
The present application relates to the field of computer vision technology, and relates, for example, to a calibration method for a 3D structured light system, an electronic device, and a storage medium.
Background
In the related art, 3D structured light imaging technology is mainly applied in industrial scenarios such as precision measurement and defect detection, covering manufacturing fields including surface-mount technology, automotive, aviation, semiconductor, medical, pharmaceutical, and food processing. Compared with two-dimensional inspection, 3D inspection technology provides an additional dimension of height information: in the actual measurement process it can not only accurately locate the measured object and measure two-dimensional information such as size and color, but also acquire the surface contour of the object, so that complex properties such as flatness, slope, curvature and flaws of the object surface can be measured and analyzed. 3D structured light measurement has a series of advantages such as non-contact operation, high speed, high precision and strong anti-interference ability, and is a very important direction of precision measurement in the 3D inspection industry.
The 3D structured light calibration algorithm determines the detection accuracy of the 3D measurement system. It mainly establishes the internal parameters of the camera and of the structured light projector, as well as the external parameters of the system they form; only by determining these system model parameters can the 3D depth information of the measured object be recovered.
In the related art, structured light 3D systems are calibrated according to the principle of Zhang Zhengyou's calibration method, but the specific implementations differ, and so, considerably, does the accuracy; providing a higher-precision calibration method has therefore gradually become a research topic.
Chinese patent application CN110443856A discloses a method for calibrating a 3D structured light module. The method includes the following steps: S0, making a calibration board and printing on it a checkerboard that covers the entire board; S1, taking a calibration photo: placing the 3D structured light module directly in front of the plane of the calibration board, fixing the relative position of the module and the board, and successively capturing an infrared photo of the calibration board and a photo of a dot matrix projected onto the board, the two photos forming one group of calibration photos; S2, taking calibration photo groups: changing the relative position of the 3D structured light module and the calibration board and repeating step S1 to capture multiple groups of calibration photos; S3, calibrating the 3D structured light module: using a calibration algorithm to calibrate the infrared camera from the multiple groups of calibration photos, and then calibrating the dot matrix projector to obtain its geometric relationship relative to the infrared camera. The calibration board has a white background, the checkerboard cells are square, and the gray level of the black cells is adjustable; the calibration algorithm is the Zhang Zhengyou plane calibration method; the calibration process includes: S31, extracting pixel coordinates: extracting the pixel coordinates of the checkerboard grid points from the infrared photo of the calibration board and setting the corresponding world coordinate values; S32, calibrating the infrared camera: computing the homography mapping from the world coordinates of the checkerboard grid points to the pixel coordinates, and computing the corresponding internal and external parameters of the infrared camera from the constraints on the rotation matrix; S33, computing the conversion relationship: computing the conversion between the spot coordinates projected by the dot matrix and the dot-matrix geometry of the projector, converting the checkerboard grid-point pixel coordinates of the infrared photo into pixel coordinates of the dot matrix projector, and computing the internal and external parameters of the dot matrix projector with the Zhang Zhengyou plane calibration algorithm. This scheme uses a checkerboard calibration board and needs to detect the corner points of the board as mark points; in actual calibration applications, when the pose of the calibration board changes, corner extraction accuracy is easily degraded by illumination and uneven imaging quality, whereas a calibration board with circular targets allows the circles to be extracted reliably and is less easily disturbed.
Chinese patent application CN110827357A discloses a combined-pattern calibration board and a structured light camera parameter calibration method. The pattern on the calibration board includes an identification mark and a code block area; the identification mark is a closed quadrilateral delimited by a black-and-white boundary, with black pixels inside the boundary and white pixels outside; the code block area is located inside the quadrilateral boundary, and the code block carries encoded information composed of N×N black and white square blocks. The calibration method includes the following steps: S1: using a plurality of the combined-pattern calibration boards, each having a code block area with a different pattern; S2: acquiring images of the multiple distributed calibration boards and preprocessing them; S3: computing the local gradient of each image pixel and setting pixel regions with consistent gradient magnitude and direction as connected domains; S4: taking connected domains whose gradient exceeds a set threshold as the edges of the calibration pattern and fitting the edge pixels into different line segments with straight lines; S5: traversing all line segments, detecting whether four successively adjacent segments can form a complete quadrilateral, and checking whether every region with a complete quadrilateral contains a valid coding area; S6: matching the detected code with the patterns of the code library, and computing the coordinates of the corresponding pattern on the calibration board and the image coordinates of the intersections of the four segments of the quadrilateral; S7: calibrating the structured light camera parameters from the pixel coordinates in the image and the corresponding calibration board coordinates. In this scheme the calibration board cannot be tilted too much; too large a tilt prevents the pattern from being matched, so only small tilts are suitable, and the calibrated parameters therefore cannot fully describe the system model.
The related art has at least the following deficiencies:
1. The accuracy of the established correspondence between the camera sensor and the projector sensor is insufficient.
2. The states of the acquired calibration images are limited and cannot fully describe the system model.
Summary of the Invention
The present application provides a calibration method for a 3D structured light system, an electronic device and a storage medium. The method uses a calibration board with a dot pattern, which offers higher precision though a more complex calibration procedure, and uses higher-precision multi-frequency phase-shift coding to find the correspondence between the camera and the projection sensor, locally improving phase accuracy so that the positional relationship between the sensors can be determined more precisely. At the same time, a flexible automatic keystone correction algorithm is used to sort the dots, so that the calibration board can be placed in essentially any posture and more calibration data can be extracted.
The present application provides a calibration method for a 3D structured light system, comprising the following steps:
making a calibration board;
acquiring calibration board images; and
calibrating the structured light system according to the plurality of acquired calibration board images;
wherein the step of making a calibration board includes: making a system calibration board and burning it, the pattern on the board being a plurality of circular targets, the center-to-center distances of adjacent circular targets being equal, the circular targets being arranged in rows and columns, and the circular targets being of different sizes;
wherein the step of acquiring calibration board images includes:
setting a calibration board posture, projecting sinusoidal grating fringe patterns of multiple frequencies in the horizontal and vertical directions onto the calibration board, projecting, for a further image, light without grating fringes onto the calibration board, and capturing with the camera, at each projection, an image of the calibration board with the projected pattern in that posture; and
adjusting the posture of the calibration board and repeating the step of acquiring calibration board images until the number of calibration board postures reaches a preset target value, to obtain a plurality of calibration board images;
wherein the step of calibrating the structured light system according to the plurality of acquired calibration board images includes:
extracting the circle-center positions of all the circular targets in the calibration board images;
sorting all the circle centers according to the extracted circle-center positions of all the circular targets;
obtaining the absolute phase value at each circle-center coordinate by a bilinear interpolation algorithm;
obtaining, from the absolute phase value of each circle-center coordinate, the projector-image coordinates corresponding to the camera-image coordinates of each circle center;
obtaining the internal parameters, distortion coefficients and external parameters of the camera and the projector by Zhang Zhengyou's calibration method from the obtained circle-center coordinates in the camera and projector images;
calculating, from the external parameters of the camera and the projector under the same calibration board posture, the transformation matrix between the camera and projector coordinate systems under each calibration board posture; and
obtaining the calibrated transformation matrix between the camera and projector coordinate systems from the transformation matrices obtained under each calibration board posture.
The present application also provides an electronic device, comprising:
a processor; and
a memory configured to store a program,
wherein, when the program is executed by the processor, the processor implements the calibration method for a 3D structured light system as described above.
The present application also provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the calibration method for a 3D structured light system as described above.
Brief Description of the Drawings
FIG. 1 is a flowchart of a calibration method according to an optional embodiment of the present application;
FIG. 2 is a flowchart of the calibration step according to an optional embodiment of the present application;
FIG. 3 is a schematic diagram of the sorting of the dot pattern in an angled calibration image according to an optional embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device according to an optional embodiment of the present application.
Detailed Description
Specific embodiments of the present application are described below with reference to FIGS. 1-3.
The present application provides a calibration method for a 3D structured light system, comprising the following steps:
a calibration board making step: making a system calibration board and burning it, the pattern on the board being a plurality of circular targets, the center-to-center distances of adjacent circular targets being equal, the circular targets being arranged in rows and columns, and the circular targets being of different sizes;
a calibration board image acquisition step:
setting a calibration board posture, projecting sinusoidal grating fringe patterns of multiple frequencies in the horizontal and vertical directions onto the calibration board, projecting, for a further image, light without grating fringes onto the calibration board, and capturing with the camera, at each projection, an image of the calibration board with the projected pattern in that posture;
adjusting the posture of the calibration board and repeating the calibration board image acquisition step until the number of calibration board postures reaches the preset target value, to obtain a plurality of calibration board images;
a calibration step: calibrating the structured light system according to the plurality of acquired calibration board images, which may include the following steps:
extracting the circle-center positions of all the circular targets in the calibration board images;
sorting all the circle centers according to the extracted circle-center positions of all the circular targets;
obtaining the absolute phase value at each circle-center coordinate by a bilinear interpolation algorithm;
obtaining, from the absolute phase value of each circle-center coordinate, the projector-image coordinates corresponding to the camera-image coordinates of each circle center;
obtaining the internal parameters, distortion coefficients and external parameters of the camera and the projector by Zhang Zhengyou's calibration method from the circle-center coordinates in the camera and projector images;
calculating, from the external parameters of the camera and the projector under the same calibration board posture, the transformation matrix between the camera and projector coordinate systems under each calibration board posture;
obtaining the calibrated transformation matrix between the camera and projector coordinate systems from the transformation matrices obtained under each calibration board posture: the parameters at corresponding positions in all the obtained transformation matrices are summed and averaged, and the result is used as the final transformation matrix between the projector and camera coordinate systems.
The parameters obtained by calibration include: the camera internal parameters and distortion coefficients, the projector internal parameters and distortion coefficients, and the rigid transformation relationship matrix between the camera and projector coordinate systems.
As an optional embodiment, the calibration method for the 3D structured light system further includes: before obtaining the absolute phase value at each circle-center coordinate by bilinear interpolation, using a multi-frequency multi-step phase-shift algorithm to obtain the absolute phase values of the calibration board plane phase field in the horizontal and vertical directions. In one embodiment, a ten-step phase-shift code and the three-frequency heterodyne method can be used to obtain the absolute phase values in the horizontal and vertical directions, including the following steps:
in the calibration image acquisition step, sinusoidal grating fringe patterns of different frequencies are projected onto the calibration board in the horizontal and vertical directions; the sinusoidal grating fringe patterns use three different wavelengths λ1, λ2 and λ3, whose corresponding phase principal values are φ1, φ2 and φ3; ten images are acquired for each projected pattern for the ten-step phase-shift calculation;
the absolute phase values are calculated by the three-frequency heterodyne method:
the first phase φ1 of the first wavelength λ1 and the second phase φ2 of the second wavelength λ2 are superimposed to obtain the phase φ12; the wavelength λ12 corresponding to the phase φ12 is obtained from the superposition as
λ12 = λ1·λ2 / (λ2 - λ1);
the second phase φ2 of the second wavelength λ2 and the third phase φ3 of the third wavelength λ3 are superimposed to obtain the phase φ23; the wavelength λ23 corresponding to the phase φ23 is obtained from the superposition as
λ23 = λ2·λ3 / (λ3 - λ2);
λ12 and λ23 are then superimposed to obtain the final superimposed wavelength λ123.
The superimposed absolute phase values are obtained from the following fringe-order relations (the detailed formulas are given in the original document as equation images), with
Δn_i ∈ [0, 1), i = 1, 2, 3, 12, 23, 123;
where:
n_i is the fringe order of a point on the surface of the measured object in the corresponding grating pattern, and n_i consists of an integer part N_i and a fractional part Δn_i;
φ_i is the wrapped phase of the corresponding grating;
Φ_i is the absolute phase of the corresponding grating.
Grating 123 is generated by superimposing gratings 12 and 23; λ1, λ2 and λ3 are chosen so that the wavelength λ123 of grating 123 covers the whole field, making N_123 = 0, from which the absolute phase of grating 123 is obtained.
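To make the heterodyne step concrete, the sketch below follows the standard three-frequency heterodyne procedure (beat phases φ12, φ23, φ123 and the corresponding synthetic wavelengths). Since the exact formulas appear in the source only as equation images, this is an illustration of the conventional computation under the assumption that λ123 covers the whole field, not a transcription of the patent's equations:

    import numpy as np

    def beat(phi_a, phi_b):
        # Wrapped beat (heterodyne) phase of two wrapped phase maps, in [0, 2*pi).
        return np.mod(phi_a - phi_b, 2.0 * np.pi)

    def unwrap_with(coarse_abs, coarse_wl, fine_wrapped, fine_wl):
        # Unwrap a fine wrapped phase using a coarser absolute phase (standard heterodyne step).
        k = np.round((coarse_abs * coarse_wl / fine_wl - fine_wrapped) / (2.0 * np.pi))
        return fine_wrapped + 2.0 * np.pi * k

    def three_freq_absolute_phase(phi1, phi2, phi3, lam1, lam2, lam3):
        # Absolute phase of the finest grating (wavelength lam1) via three-frequency heterodyne.
        # Assumes lam1 < lam2 < lam3, chosen so that lam123 covers the whole field (N123 = 0).
        lam12 = lam1 * lam2 / (lam2 - lam1)
        lam23 = lam2 * lam3 / (lam3 - lam2)
        lam123 = lam12 * lam23 / (lam23 - lam12)
        phi12, phi23 = beat(phi1, phi2), beat(phi2, phi3)
        phi123 = beat(phi12, phi23)                    # spans one period over the field -> absolute
        Phi12 = unwrap_with(phi123, lam123, phi12, lam12)
        return unwrap_with(Phi12, lam12, phi1, lam1)

    # Hypothetical check: a linearly increasing phase over wavelengths of 20, 24 and 26 pixels
    x = np.arange(0, 190, dtype=float)
    lam1, lam2, lam3 = 20.0, 24.0, 26.0
    true = 2 * np.pi * x / lam1
    phi1, phi2, phi3 = (np.mod(2 * np.pi * x / l, 2 * np.pi) for l in (lam1, lam2, lam3))
    print(np.allclose(three_freq_absolute_phase(phi1, phi2, phi3, lam1, lam2, lam3), true))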
As an optional embodiment, extracting the circle-center positions of all the circular targets in the calibration board image may include the following steps:
binarizing the calibration board image captured by the camera without projected grating fringes;
performing blob analysis on the binarized calibration board image to obtain the centroid positions of the calibration board circles, achieving coarse localization of the circular targets;
obtaining the pixel-level edge positions of the circular targets by Canny edge extraction;
obtaining the sub-pixel edge contour of each circle by gray-level interpolation of the edge pixels of the circular targets followed by parameter fitting;
performing least-squares ellipse fitting on the obtained sub-pixel edge contour points of each circle to obtain the center of each ellipse, and taking the center of each ellipse as the circle-center position of the corresponding circular target.
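A compact sketch of this extraction chain using OpenCV (threshold, connected components for coarse blob centroids, then contour-based ellipse fitting). The dedicated sub-pixel gray-level interpolation step described above is simplified here to cv2.fitEllipse on the detected contour, and the threshold values and the assumption of dark circles on a bright board are illustrative choices rather than values from this application:

    import cv2
    import numpy as np

    def circle_centers(gray):
        # Approximate the described extraction: coarse blob centroids, then ellipse-fit centers.
        # gray: 8-bit grayscale image of the calibration board without projected fringes.
        _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # binarize
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)            # blob analysis
        centers = []
        for i in range(1, n):                               # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] < 30:             # drop tiny noise blobs (assumed size)
                continue
            x, y, w, h = stats[i, :4]
            roi = bw[y:y + h, x:x + w]
            # Edge/contour extraction and least-squares ellipse fit (OpenCV 4 return signature)
            contours, _ = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            c = max(contours, key=cv2.contourArea)
            if len(c) >= 5:                                  # fitEllipse needs at least 5 points
                (cx, cy), _, _ = cv2.fitEllipse(c)
                centers.append((x + cx, y + cy))
        return centers

    # Hypothetical usage:
    # img = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
    # print(circle_centers(img))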
As an optional embodiment, sorting all the circle centers may include the following steps:
determining, from the obtained centers of the ellipses of the calibration board, the four vertices of the irregular quadrilateral in the captured calibration board pattern;
connecting the four vertices in order to form a quadrilateral;
when the rotation angle of the quadrilateral does not exceed a preset value, determining the position coordinates of each dot and determining the order of the circle centers according to the order of the dot position coordinates;
when the rotation angle of the quadrilateral exceeds the preset value, sorting as follows:
determining all the dots on each side of the quadrilateral and connecting the corresponding pairs of dots on two opposite sides;
determining all the dots inside each quadrilateral formed by the connecting lines, and determining the order of the dots.
As an optional embodiment, when the rotation angle of the quadrilateral does not exceed the preset value, determining the order of the circle centers may include the following steps:
determining the position coordinates of the five large circular targets in the calibration board image;
estimating, from these large dots with known array coordinates and the straight-line projection invariance of projective geometry, the image coordinates of the small dots located at the other positions of the array;
obtaining the coordinates of each circular target by blob analysis and computing the distance from each point to all the estimated points; the point with the smallest distance is the point at the corresponding position, and the points are sorted according to the coordinate order.
As an optional implementation, determining the position coordinates of the five large circular targets in the calibration plate image may include the following steps (a code sketch follows the list):
Select the five circular targets of equal radius, whose radius is larger than that of the other circular targets in the calibration plate pattern;
Compute the distances between these points; the two dots with the largest distance are F1 and F2, and the two dots with the smallest distance are N1 and N2, so the remaining dot is identified as the fifth large dot C5;
Compute the sums of the distances from F1 and from F2 to N1 and N2;
The point with the smaller distance sum is the first large dot C1, located at the intersection of the horizontal center line of the pattern with the vertical line to the left of the vertical center line;
The point with the larger distance sum is the second large dot C2, located at the intersection of the horizontal center line of the pattern with the vertical line to the right of the vertical center line;
Compute the distances from N1 and N2 to C1; the closer point is the third large dot C3, and the farther point is the fourth large dot C4.
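The geometric identification of the five large dots can be expressed directly in NumPy. The sketch below assumes that the centres of exactly those five largest targets have already been selected (for example by ellipse size) and are passed in as an array of shape (5, 2); the function name and return convention are illustrative.

```python
import numpy as np

def identify_large_dots(pts):
    """Identify C1..C5 among the five large circle centres `pts` (shape (5, 2))."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.nan)

    i, j = np.unravel_index(np.nanargmax(d), d.shape)    # farthest pair -> F1, F2
    k, l = np.unravel_index(np.nanargmin(d), d.shape)    # closest pair  -> N1, N2
    F1, F2, N1, N2 = pts[i], pts[j], pts[k], pts[l]
    c5_idx = list({0, 1, 2, 3, 4} - {i, j, k, l})[0]     # the remaining point is C5
    C5 = pts[c5_idx]

    def dist_sum(p):                                     # distance of p to N1 plus to N2
        return np.linalg.norm(p - N1) + np.linalg.norm(p - N2)

    # Smaller sum -> C1, larger sum -> C2; closer of N1/N2 to C1 -> C3, the other -> C4.
    C1, C2 = (F1, F2) if dist_sum(F1) < dist_sum(F2) else (F2, F1)
    C3, C4 = (N1, N2) if np.linalg.norm(N1 - C1) < np.linalg.norm(N2 - C1) else (N2, N1)
    return C1, C2, C3, C4, C5
```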
In terms of size, the five large circles have equal radii, slightly larger than the radii of the remaining circles on the calibration plate. In terms of arrangement, the two circles that are farthest apart lie in the same row as the center circle of the pattern, to its left and right, each separated from the center circle by two small circles. The two circles that are closest together lie in the same row at adjacent positions; one of them is in the same column as the center circle, separated from it by one small circle, and when these two circles are below the horizontal center line of the pattern, one of them lies to the right of the vertical center line. The last large circle is also in the same column as the center circle, on the other side of the horizontal center line, likewise separated from it by one small circle.
During calibration the calibration plate can be placed in an arbitrary pose. Once the positions of the five large circles are determined, the direction vectors along the horizontal and vertical directions of the pattern can be obtained, which gives the approximate positions of all points in the pattern. The true circle centers extracted by blob analysis and ellipse fitting are then compared with the estimated centers; for each estimated position, the extracted center at the smallest distance is the circle belonging to that position.
As an optional implementation, estimating the coordinates of the small dots may include the following steps (see the sketch after this list):
Determine a horizontal straight line from the two known large dots C1 and C2; compute the distance between two known points in the second row parallel to this line, together with the X and Y components of the line connecting them, to obtain the two components of the horizontal direction of the calibration plate;
Determine a vertical straight line from the two known large dots C3 and C4; compute the distance between two known points in the second column parallel to this line, together with the X and Y components of the line connecting them, to obtain the two components of the vertical direction of the calibration plate;
From the known coordinates of the large dot C1 and the two components of the horizontal and vertical directions of the calibration plate, estimate the coordinates of all circular targets and store all the circular target position coordinates.
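The prediction-and-matching step can be sketched as follows, assuming the grid size of the plate (for example 9 × 11 dots for the 99-dot pattern of the embodiment), the image position of the dot chosen as grid origin (point 1 in the embodiment, predicted from C1) and the two direction vectors have already been determined. The helper is illustrative and ignores lens distortion, which is acceptable only for the rough assignment described here.

```python
import numpy as np

def assign_grid_positions(detected, origin, step_x, step_y, rows, cols):
    """Match detected circle centres to predicted grid positions.

    detected : (M, 2) sub-pixel centres from blob analysis / ellipse fitting
    origin   : image position of the grid-origin dot
    step_x   : per-column displacement in the image (horizontal direction of the plate)
    step_y   : per-row displacement in the image (vertical direction of the plate)
    Returns an array of indices into `detected`, one per grid node, in grid order."""
    # Predict where every grid node should appear in the image.
    r, c = np.mgrid[0:rows, 0:cols]
    predicted = origin + c[..., None] * step_x + r[..., None] * step_y
    predicted = predicted.reshape(-1, 2)

    # For every predicted node, pick the detected centre at the smallest distance.
    d = np.linalg.norm(predicted[:, None, :] - detected[None, :, :], axis=-1)
    return np.argmin(d, axis=1)
```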
As an optional implementation, using a bilinear interpolation algorithm to obtain the absolute phase value of each circle center coordinate in the horizontal and vertical directions may include the following steps:
According to the extracted sub-pixel position coordinates of each circle center, obtain the principal phase value at that center by bilinear interpolation, using the following formula:
φ(i+u, j+v) = (1-u)(1-v)·φ(i,j) + (1-u)·v·φ(i,j+1) + u·(1-v)·φ(i+1,j) + u·v·φ(i+1,j+1);
where
φ is the principal phase value;
i, j are the integer parts of the computed circle center coordinate in the vertical and horizontal directions of the image coordinate system, respectively;
u, v are the fractional parts of the computed circle center coordinate in the vertical and horizontal directions of the image coordinate system, respectively.
The principal phase value is obtained as follows.
In the captured images onto which the sinusoidal fringe code is projected, the fringe intensity distribution of the multi-step phase-shifting method is

$$I_k(x,y)=I'(x,y)+I''(x,y)\cos\big(\varphi(x,y)+2\pi(k-1)/n\big)$$

where
k takes the values 1 to n, with n the number of steps of the multi-step phase-shifting method;
x, y are pixel coordinates;
I_k(x,y) is the gray value of the pixel at position (x,y) in the k-th phase-shifted image;
I'(x,y) is the average gray level of the image;
I''(x,y) is the gray-level modulation of the image;
φ is the principal phase value.
From the intensity distribution, the principal phase value at each pixel is obtained (a code sketch follows the symbol list):

$$\varphi(x,y)=-\arctan\frac{\sum_{k=1}^{n}I_k(x,y)\sin\big(2\pi(k-1)/n\big)}{\sum_{k=1}^{n}I_k(x,y)\cos\big(2\pi(k-1)/n\big)}$$

where
k takes the values 1 to n, with n the number of steps of the multi-step phase-shifting method;
x, y are pixel coordinates;
I_k is the gray value of the pixel;
N is the number of grating fringe periods;
φ(x,y) is the principal phase value.
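A direct NumPy implementation of this wrapped-phase computation is sketched below. It assumes the n phase-shifted images are stacked along the first axis and that the shifts are equally spaced by 2π/n, consistent with the formula above; the function name is illustrative.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped (principal) phase from an (n, H, W) stack of phase-shifted images."""
    n = images.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)          # shift index k-1 = 0 .. n-1
    delta = 2.0 * np.pi * k / n                 # phase shift of each image
    s = np.sum(images * np.sin(delta), axis=0)
    c = np.sum(images * np.cos(delta), axis=0)
    # atan2 keeps the correct quadrant; the result lies in (-pi, pi].
    return -np.arctan2(s, c)
```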
When the ten-step phase-shifting method is adopted, in the captured images onto which the sinusoidal fringe code is projected, the fringe intensity distribution of the ten-step phase-shifting method is

$$I_k(x,y)=I'(x,y)+I''(x,y)\cos\big(\varphi(x,y)+2\pi(k-1)/10\big)$$

where
k takes the values 1 to 10;
x, y are pixel coordinates;
I_k(x,y) is the gray value of the pixel at position (x,y) in the k-th phase-shifted image;
I'(x,y) is the average gray level of the image;
I''(x,y) is the gray-level modulation of the image;
φ is the principal phase value.
From the intensity distribution, the principal phase value at each pixel is obtained:

$$\varphi(x,y)=-\arctan\frac{\sum_{k=1}^{10}I_k(x,y)\sin\big(2\pi(k-1)/10\big)}{\sum_{k=1}^{10}I_k(x,y)\cos\big(2\pi(k-1)/10\big)}$$

where
k takes the values 1 to 10;
x, y are pixel coordinates;
I_k is the gray value of the pixel at that point;
N is the number of grating fringe periods;
φ(x,y) is the principal phase value.
The pixel positions for which the principal phase values are obtained by the above method have integer coordinates. According to the extracted sub-pixel position coordinates of each circle center, the principal phase value at that center is obtained by bilinear interpolation, using the following formula:
φ(i+u, j+v) = (1-u)(1-v)·φ(i,j) + (1-u)·v·φ(i,j+1) + u·(1-v)·φ(i+1,j) + u·v·φ(i+1,j+1)
where
φ is the principal phase value;
i, j are the integer parts of the circle center coordinate in the vertical and horizontal directions of the camera image, respectively;
u, v are the fractional parts of the circle center coordinate in the vertical and horizontal directions of the camera image, respectively.
As an optional implementation, using the obtained absolute phase values of each circle center coordinate in the horizontal and vertical directions, the coordinates in the projector image corresponding to the coordinates of each circle center in the camera image are obtained by the following formulas (a code sketch follows the symbol list):

$$u_p=\frac{\Phi_u(u_c,v_c)}{2\pi N}\,W$$

$$v_p=\frac{\Phi_v(u_c,v_c)}{2\pi N}\,H$$

where
u_p is the coordinate of the circle center c in the u direction of the projector image;
v_p is the coordinate of the circle center c in the v direction of the projector image;
N is the number of grating fringe periods;
W is the horizontal resolution of the projector;
H is the vertical resolution of the projector;
Φ_u(u_c, v_c) is the absolute phase value in the vertical direction at the circle center c;
Φ_v(u_c, v_c) is the absolute phase value in the horizontal direction at the circle center c.
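Combining the bilinear interpolation above with these formulas, a circle centre detected at sub-pixel camera coordinates can be mapped to projector coordinates as sketched below; the absolute-phase maps, fringe period count and projector resolution are assumed inputs, and border handling is omitted for brevity.

```python
import numpy as np

def bilinear(phase, x, y):
    """Bilinearly interpolate a phase map at sub-pixel column x, row y."""
    j, i = int(np.floor(x)), int(np.floor(y))     # integer parts (column, row)
    u, v = y - i, x - j                            # fractional parts (row, column)
    return ((1 - u) * (1 - v) * phase[i, j] + (1 - u) * v * phase[i, j + 1]
            + u * (1 - v) * phase[i + 1, j] + u * v * phase[i + 1, j + 1])

def center_to_projector(xc, yc, abs_phase_u, abs_phase_v, N, W, H):
    """Map a camera-image circle centre (xc, yc) to projector coordinates (u_p, v_p)."""
    Phi_u = bilinear(abs_phase_u, xc, yc)          # absolute phase from vertical fringes
    Phi_v = bilinear(abs_phase_v, xc, yc)          # absolute phase from horizontal fringes
    u_p = Phi_u * W / (2.0 * np.pi * N)
    v_p = Phi_v * H / (2.0 * np.pi * N)
    return u_p, v_p
```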
As an optional implementation, the calibration method of the 3D structured light system further includes establishing the mapping relationship between the camera and the projector and the world coordinate system, based on the camera and projector intrinsic parameters, distortion coefficients and extrinsic parameters obtained by calibration and on the coordinate system transformation matrix between the camera and the projector. This may include the following steps:
Obtain the absolute phase map by the multi-frequency heterodyne method; the three-frequency heterodyne method may be used here.
For any point p[x_w, y_w, z_w] in space, its projected image coordinates in the camera are p(u_c, v_c). From the absolute phase value obtained at each pixel of the captured image, a corresponding straight line in the projector image is obtained; the correspondence between the camera image and the projector image is

$$u_p=\frac{\Phi(u_c,v_c)}{2\pi N}\,W$$

where
N is the number of grating fringe periods;
W is the horizontal resolution of the projector;
Φ(u_c, v_c) is the absolute phase value of the pixel;
u_p is the corresponding coordinate of that pixel in the projector image.
According to the above correspondence between the camera image and the projector image, the unique three-dimensional coordinates (X_W, Y_W, Z_W) of the point p are obtained from the following imaging equations:
S_C·[u_c, v_c, 1]^T = A_C·M_C·[X_W, Y_W, Z_W, 1]^T
S_P·[u_p, v_p, 1]^T = A_P·M_P·[X_W, Y_W, Z_W, 1]^T
where
A_C and A_P are the intrinsic parameter matrices of the camera and the projector, respectively;
M_C and M_P are the extrinsic parameter matrices of the camera and the projector, respectively;
S_C and S_P are the scale factors of the camera and the projector, respectively;
(u_c, v_c) and (u_p, v_p) are the image coordinates of the camera and the projector, respectively, both corrected for distortion using the pre-calibrated system distortion parameters;
(X_W, Y_W, Z_W) are the unique three-dimensional coordinates of the point p.
From the correspondence expression between the camera image and the projector image it follows that, when horizontal or vertical unidirectional fringes are projected, a pixel coordinate in the camera corresponds to a single horizontal or vertical coordinate in the projector image. The vertical absolute phase value determines a vertical corresponding line in the projector image, i.e. it fixes the coordinate of the corresponding point that serves as its column coordinate; similarly, the horizontal absolute phase value determines a horizontal corresponding line, i.e. it fixes the coordinate of the corresponding point that serves as its row coordinate.
Embodiment 1
According to a specific embodiment of the present application, the circle center sorting method of the present application is described in detail below. The circular target pattern on the calibration plate is shown in Figure 3. Dot sorting includes the following steps.
First, the position coordinates of the five large circular targets in the middle are determined:
1. Compute the distances between the points and find the two points with the largest distance (denoted F1 and F2) and the two points with the smallest distance (denoted N1 and N2); the remaining point is then identified as point 72 (as shown in Figure 3);
2. Compute the sums of the distances from F1 and from F2 to N1 and N2; the point with the smaller sum is point 47 and the point with the larger sum is point 53.
3. Compute the distances from N1 and N2 to point 47; the closer point is point 27 and the farther point is point 28.
Then, from these large dots with known array coordinates, the image positions of the small dots at the other array positions are estimated using the straight-line projection invariance of projective geometry. The main steps are as follows:
1. Determine a straight line from the two known points 47 and 53; compute the distance between the two known points of the second row, together with the X and Y components of the line connecting them, to obtain the two components of the horizontal direction of the calibration plate;
2. Similarly, determine the two components of the vertical direction from points 28 and 72;
3. From the known coordinates of point 47 and the two components of the horizontal and vertical directions of the calibration plate, predict the coordinates of point 1, and from the coordinates of point 1 and these components predict the coordinates of all 99 points (the coordinates of the large circles have already been extracted, so they can be stored directly without prediction);
4. Using the centroid coordinates of all points obtained by blob analysis, compute the distance from each detected point to each of the 99 predicted points; the predicted point at the smallest distance gives the corresponding position, which completes the sorting and produces the sorting result shown in the figure.
Embodiment 2
According to a specific embodiment of the present application, a specific implementation of the calibration step of the present application is described in detail below.
1. By processing the image without projected grating fringes, obtain the sub-pixel coordinates of the center of each circle;
2. For the horizontal and vertical grating images, obtain the principal phase value of each pixel in the horizontal and vertical directions by the ten-step phase-shifting and multi-frequency heterodyne methods;
3. Obtain the absolute phase values of the point in the horizontal and vertical directions by bilinear interpolation;
4. Obtain the projector image coordinates of the point from the correspondence formula between the camera image and the projector image;
5. The pinhole model of the camera is known:

$$Z_C\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}&m_{12}&m_{13}&m_{14}\\ m_{21}&m_{22}&m_{23}&m_{24}\\ m_{31}&m_{32}&m_{33}&m_{34}\end{bmatrix}\begin{bmatrix}X_W\\ Y_W\\ Z_W\\ 1\end{bmatrix}$$

In the calibration, the spatial three-dimensional world coordinates (X_W, Y_W, Z_W) are the physical coordinates of the calibration plate, (u, v) are the camera image coordinates or projector image coordinates of the extracted circle centers, and m_ij are the elements of the correspondence matrix.
In the calibration plate pattern the circle center spacing is uniform, so a planar coordinate system can be established. From the circle center coordinates obtained for each group of camera images and projector images, together with the physical coordinates of the calibration plate, Zhang Zhengyou's calibration algorithm yields the intrinsic parameters and distortion coefficients of the camera and of the projector. At the same time, for the calibration plate in each pose, Zhang Zhengyou's calibration method yields a set of extrinsic parameters for the camera and for the projector; from the intrinsic parameters, the distortion coefficients and a set of extrinsic parameters, the corresponding homography matrix, namely the matrix M above, can be determined (a calibration sketch follows this paragraph).
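A sketch of this step with OpenCV is shown below. The variable names are illustrative: object_points are the per-pose physical dot positions on the plate (Z = 0), cam_points and proj_points are the per-pose circle-centre lists in camera and projector image coordinates obtained above, and cam_size/proj_size are the image and pattern resolutions.

```python
import cv2

def calibrate_camera_and_projector(object_points, cam_points, proj_points,
                                   cam_size, proj_size):
    """Zhang-style calibration of the camera and of the projector.

    object_points : per-pose (K, 3) float32 arrays of plate dot positions (Z = 0)
    cam_points    : per-pose (K, 1, 2) float32 circle centres in the camera image
    proj_points   : per-pose (K, 1, 2) float32 circle centres in the projector image
    cam_size, proj_size : (width, height) of the camera image and projector pattern"""
    _, A_c, dist_c, rvecs_c, tvecs_c = cv2.calibrateCamera(
        object_points, cam_points, cam_size, None, None)
    # The projector is calibrated as an "inverse camera", using the projector-image
    # coordinates of the same circle centres as its observations.
    _, A_p, dist_p, rvecs_p, tvecs_p = cv2.calibrateCamera(
        object_points, proj_points, proj_size, None, None)
    return (A_c, dist_c, rvecs_c, tvecs_c), (A_p, dist_p, rvecs_p, tvecs_p)
```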
Multiplying the camera extrinsic matrix obtained for each pose by the inverse of the projector extrinsic matrix for the same pose gives a new matrix, which is the relationship matrix between the camera and the projector. The relationship matrix is computed for each pose, and the elements at position (i, j) of the resulting matrices are summed and averaged to obtain the transformation matrix between the camera and the projector (see the sketch below).
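Assuming the per-pose rotation and translation vectors from the previous step are available, a minimal sketch of this averaging is shown below; it follows the description literally, and in practice the rotation part of the averaged matrix would be re-orthogonalised.

```python
import cv2
import numpy as np

def camera_projector_relation(rvecs_c, tvecs_c, rvecs_p, tvecs_p):
    """Average, over all poses, of (camera extrinsic) @ inv(projector extrinsic)."""
    acc = np.zeros((4, 4))
    for rc, tc, rp, tp in zip(rvecs_c, tvecs_c, rvecs_p, tvecs_p):
        Tc = np.eye(4); Tc[:3, :3] = cv2.Rodrigues(rc)[0]; Tc[:3, 3] = tc.ravel()
        Tp = np.eye(4); Tp[:3, :3] = cv2.Rodrigues(rp)[0]; Tp[:3, 3] = tp.ravel()
        acc += Tc @ np.linalg.inv(Tp)   # per-pose camera-projector relationship matrix
    # Element-wise averaging, as in the description above.
    return acc / len(rvecs_c)
```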
6. Taking point 47 in Figure 3 as an example, the pinhole model of step 5 contains three equations; eliminating Z_C and rearranging yields two linear equations in the m_ij, namely:
m_11·X_W + m_12·Y_W + m_13·Z_W + m_14 - u·X_W·m_31 - u·Y_W·m_32 - u·Z_W·m_33 = u·m_34
m_21·X_W + m_22·Y_W + m_23·Z_W + m_24 - v·X_W·m_31 - v·Y_W·m_32 - v·Z_W·m_33 = v·m_34
With the camera and projector matrices obtained separately by Zhang Zhengyou's calibration method, four such linear equations are available. Since there are only the three unknowns X_W, Y_W and Z_W, the spatial three-dimensional coordinates of the point can be obtained by solving the simultaneous equations (a triangulation sketch follows).
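A sketch of this triangulation is given below, assuming the camera and projector 3×4 matrices M_cam and M_proj (intrinsics times extrinsics) and the distortion-corrected image coordinates are available; with four equations and three unknowns, it solves in the least-squares sense.

```python
import numpy as np

def triangulate(M_cam, M_proj, uc, vc, up, vp):
    """Solve the four linear equations above for (X_W, Y_W, Z_W)."""
    rows = []
    for M, (u, v) in ((M_cam, (uc, vc)), (M_proj, (up, vp))):
        rows.append(M[0, :3] - u * M[2, :3])    # (m11 - u*m31, m12 - u*m32, m13 - u*m33)
        rows.append(M[1, :3] - v * M[2, :3])    # (m21 - v*m31, m22 - v*m32, m23 - v*m33)
    A = np.array(rows)
    # Right-hand sides: u*m34 - m14 and v*m34 - m24 for each device.
    b = np.array([uc * M_cam[2, 3] - M_cam[0, 3],
                  vc * M_cam[2, 3] - M_cam[1, 3],
                  up * M_proj[2, 3] - M_proj[0, 3],
                  vp * M_proj[2, 3] - M_proj[1, 3]])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X                                     # (X_W, Y_W, Z_W)
```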
Because the extrinsic parameters and the transformation relationship matrix contain only the rigid relationship of rotation and translation, they do not change the shape or size of the object. Through the extrinsic parameter matrix, the spatial three-dimensional coordinates can be transformed into the camera coordinate system, and the position of a point of the camera coordinate system in the projector coordinate system can then be obtained using the camera-projector transformation relationship matrix.
Figure 4 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment. As shown in Figure 4, the electronic device includes one or more processors 110 and a memory 120; one processor 110 is taken as an example in Figure 4.
The electronic device may further include an input device 130 and an output device 140.
The processor 110, the memory 120, the input device 130 and the output device 140 in the electronic device may be connected by a bus or in other ways; connection by a bus is taken as an example in Figure 4.
As a computer-readable storage medium, the memory 120 may be configured to store software programs, computer-executable programs and modules. The processor 110 executes various functional applications and data processing by running the software programs, instructions and modules stored in the memory 120, thereby implementing any one of the methods in the foregoing embodiments.
The memory 120 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. In addition, the memory may include volatile memory such as random access memory (RAM), and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-transitory solid-state storage device.
The memory 120 may be a non-transitory computer storage medium or a transitory computer storage medium. The non-transitory computer storage medium is, for example, at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some embodiments, the memory 120 may optionally include memory located remotely from the processor 110, and such remote memory may be connected to the electronic device via a network. Examples of such networks include the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The input device 130 may be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output device 140 may include a display device such as a display screen.
This embodiment further provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are used to execute the above method.
All or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-transitory computer-readable storage medium, and when executed may include the processes of the embodiments of the above methods, where the non-transitory computer-readable storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a RAM or the like.
Compared with the related art, the present application has the following advantages:
(1) The present application uses a circular-target calibration plate and performs ellipse fitting on the extracted sub-pixel contour of each circle to obtain the sub-pixel position of the circle center, which gives higher accuracy. During calibration the plate is placed arbitrarily, which causes inconsistent illumination brightness; the circular target features remain distinct, so the method also adapts to illumination changes and extracts the circle center positions accurately.
(2) The present application uses bilinear interpolation to obtain the absolute phase value corresponding to each circle center coordinate, and then uses the absolute phase values to compute the corresponding projector image coordinates.

Claims (12)

  1. A calibration method for a 3D structured light system, comprising:
    making a calibration plate;
    acquiring calibration plate images; and
    calibrating the structured light system according to the acquired plurality of calibration plate images;
    wherein the step of making the calibration plate comprises: making and burning in a system calibration plate, wherein the pattern on the calibration plate is a plurality of circular targets, the center distances of adjacent circular targets are equal, the plurality of circular targets are arranged in rows and columns, and the circular targets differ in size;
    wherein the step of acquiring calibration plate images comprises:
    setting a calibration plate pose, projecting onto the calibration plate sinusoidal grating fringe patterns of a plurality of frequencies in the horizontal and the vertical direction, projecting, for a further calibration plate image, light without grating fringes, and capturing with a camera, at each projection, a calibration plate image of the projected pattern in that pose; and
    adjusting the calibration plate pose and repeating the step of acquiring calibration plate images until the number of calibration plate poses reaches a preset target value, to obtain a plurality of calibration plate images;
    wherein the step of calibrating the structured light system according to the acquired plurality of calibration plate images comprises:
    extracting the center positions of all the circular targets in the calibration plate images;
    sorting all circle centers according to the extracted center positions of all the circular targets;
    obtaining the absolute phase value of each circle center coordinate by a bilinear interpolation algorithm;
    obtaining, from the obtained absolute phase value of each circle center coordinate, the coordinates in the projector image corresponding to the coordinates of each circle center in the camera image;
    obtaining the intrinsic parameters, distortion coefficients and extrinsic parameters of the camera and of the projector by Zhang Zhengyou's calibration method, according to the obtained circle center coordinates in the camera image and in the projector image;
    computing, from the extrinsic parameters of the camera and the projector under the same calibration plate pose, the transformation matrix between the camera and projector coordinate systems for each calibration plate pose; and
    obtaining the calibrated camera-projector coordinate system transformation matrix from the obtained transformation matrices for the calibration plate poses.
  2. The calibration method for a 3D structured light system according to claim 1, wherein, before the absolute phase value of each circle center coordinate is obtained by the bilinear interpolation algorithm, the calibration method further comprises: obtaining the absolute phase values of the calibration plate plane phase field in the horizontal and vertical directions by a multi-frequency, multi-step phase-shifting algorithm.
  3. The calibration method for a 3D structured light system according to claim 1, wherein the step of extracting the center positions of all the circular targets in the calibration plate image comprises:
    binarizing the calibration plate image captured by the camera without projected grating fringes;
    performing blob analysis on the binarized calibration plate image to obtain the centroid position of each calibration plate circle, so as to realize coarse localization of the circular targets;
    obtaining the pixel-level edge positions of the circular targets by Canny edge extraction;
    obtaining the sub-pixel edge contour of each circle by performing grayscale interpolation on the edge pixels of the circular target and then performing parameter fitting; and
    performing least-squares ellipse fitting on the obtained sub-pixel edge contour points of each circle to obtain the center position of each ellipse, and taking the center position of each ellipse as the circle center position of each circular target.
  4. The calibration method for a 3D structured light system according to claim 3, wherein the step of sorting all circle centers comprises:
    determining the four vertices of the irregular quadrilateral in the captured calibration plate pattern according to the obtained center position of each ellipse of the calibration plate;
    connecting the four vertices in order to form a quadrilateral; and
    when the rotation angle of the quadrilateral does not exceed a preset value, determining the position coordinates of each dot and determining the order of the circle centers according to the order of the position coordinates of the dots;
    wherein, when the rotation angle of the quadrilateral exceeds the preset value, sorting is performed as follows:
    determining all the dots on each side of the quadrilateral, and connecting each pair of opposite dots on two opposite sides; and
    determining all the dots inside each quadrilateral formed by the connecting lines, and determining the order of the dots.
  5. The calibration method for a 3D structured light system according to claim 4, wherein, when the rotation angle of the quadrilateral does not exceed the preset value, determining the order of the circle centers comprises the following steps:
    determining the position coordinates of the five large circular targets in the calibration plate image;
    estimating, from these large dots with known array coordinates and according to the straight-line projection invariance of projective geometry, the image coordinates of the small dots located at the other positions of the array; and
    computing, from the coordinates of each circular target obtained by blob analysis, the distance from each point to all the estimated points, taking the point at the smallest distance as the point of the corresponding position, and sorting according to the coordinate order.
  6. The calibration method for a 3D structured light system according to claim 5, wherein determining the position coordinates of the five large circular targets in the calibration plate image comprises:
    selecting five circular targets of equal radius, the radius of the five circular targets of equal radius being larger than the radii of the other circular targets in the calibration plate pattern;
    computing the distances between the points to obtain the two dots with the largest distance, F1 and F2, and the two dots with the smallest distance, N1 and N2, the remaining dot being determined as the fifth large dot C5 of the five circular targets;
    computing the sums of the distances from F1 and from F2 to N1 and N2;
    the point with the smaller distance sum being the first large dot C1 of the five circular targets, the first large dot being located at the intersection of the horizontal center line of the pattern with the vertical line to the left of the vertical center line;
    the point with the larger distance sum being the second large dot C2 of the five circular targets, the second large dot being located at the intersection of the horizontal center line of the pattern with the vertical line to the right of the vertical center line; and
    computing the distances from N1 and N2 to C1, the point at the smaller distance being the third large dot C3 of the five circular targets and the point at the larger distance being the fourth large dot C4 of the five circular targets.
  7. The calibration method for a 3D structured light system according to claim 5, wherein estimating the coordinates of the small dots comprises:
    determining a horizontal straight line from the two known large dots C1 and C2, and computing the distance between two known points of the second row parallel to this line and the X and Y components of the line connecting the two known points, to obtain the two components of the horizontal direction of the calibration plate;
    determining a vertical straight line from the two known large dots C3 and C4, and computing the distance between two known points of the second column parallel to this line and the X and Y components of the line connecting the two known points, to obtain the two components of the vertical direction of the calibration plate; and
    estimating the coordinates of all circular targets from the known coordinates of the large dot C1 and the two components of the horizontal and of the vertical direction of the calibration plate, and storing the position coordinates of all the circular targets.
  8. The calibration method for a 3D structured light system according to claim 3, wherein the step of obtaining the absolute phase value of each circle center coordinate in the horizontal and vertical directions by a bilinear interpolation algorithm comprises:
    obtaining, according to the extracted sub-pixel position coordinates of each circle center, the principal phase value corresponding to that circle center by bilinear interpolation, using the following interpolation formula:
    φ(i+u, j+v) = (1-u)(1-v)·φ(i,j) + (1-u)·v·φ(i,j+1) + u·(1-v)·φ(i+1,j) + u·v·φ(i+1,j+1);
    where
    φ is the principal phase value;
    i, j are the integer parts of the computed circle center coordinate in the vertical and horizontal directions of the image coordinate system, respectively;
    u, v are the fractional parts of the computed circle center coordinate in the vertical and horizontal directions of the image coordinate system, respectively;
    the principal phase value being obtained as follows:
    in the captured images onto which the sinusoidal fringe code is projected, the fringe intensity distribution of the multi-step phase-shifting method is
    $$I_k(x,y)=I'(x,y)+I''(x,y)\cos\big(\varphi(x,y)+2\pi(k-1)/n\big)$$
    where
    k takes the values 1 to n, n being the number of steps of the multi-step phase-shifting method;
    x, y are pixel coordinates;
    I_k(x,y) is the gray value of the pixel at position (x,y) in the k-th phase-shifted image;
    I'(x,y) is the average gray level of the image;
    I''(x,y) is the gray-level modulation of the image;
    φ is the principal phase value;
    from the intensity distribution, the principal phase value corresponding to each pixel is obtained:
    $$\varphi(x,y)=-\arctan\frac{\sum_{k=1}^{n}I_k(x,y)\sin\big(2\pi(k-1)/n\big)}{\sum_{k=1}^{n}I_k(x,y)\cos\big(2\pi(k-1)/n\big)}$$
    where
    k takes the values 1 to n, n being the number of steps of the multi-step phase-shifting method;
    x, y are pixel coordinates;
    I_k is the gray value of the pixel;
    N is the number of grating fringe periods;
    φ(x,y) is the principal phase value.
  9. The calibration method for a 3D structured light system according to claim 8, further comprising: obtaining, from the obtained absolute phase values of each circle center coordinate in the horizontal and vertical directions, the coordinates in the projector image corresponding to the coordinates of each circle center in the camera image, by the following formulas:
    $$u_p=\frac{\Phi_u(u_c,v_c)}{2\pi N}\,W$$
    $$v_p=\frac{\Phi_v(u_c,v_c)}{2\pi N}\,H$$
    where
    u_p is the coordinate of the circle center c in the u direction of the projector image;
    v_p is the coordinate of the circle center c in the v direction of the projector image;
    N is the number of grating fringe periods;
    W is the horizontal resolution of the projector;
    H is the vertical resolution of the projector;
    Φ_u(u_c, v_c) is the absolute phase value in the vertical direction at the circle center c;
    Φ_v(u_c, v_c) is the absolute phase value in the horizontal direction at the circle center c.
  10. The calibration method for a 3D structured light system according to claim 1, further comprising: establishing the mapping relationship between the camera and the projector and the world coordinate system according to the camera and projector intrinsic parameters, distortion coefficients and extrinsic parameters obtained by calibration and the coordinate system transformation matrix between the camera and the projector;
    wherein the step of establishing the mapping relationship between the camera and the projector and the world coordinate system according to the camera and projector intrinsic parameters, distortion coefficients and extrinsic parameters obtained by calibration and the coordinate system transformation matrix between the camera and the projector comprises:
    obtaining an absolute phase map by the multi-frequency heterodyne method;
    for any point p[x_w, y_w, z_w] in space, the image coordinates of this point projected in the camera being p(u_c, v_c), obtaining a corresponding straight line in the projector image according to the obtained absolute phase value of each pixel on the captured image, the correspondence between the camera image and the projector image being
    $$u_p=\frac{\Phi(u_c,v_c)}{2\pi N}\,W$$
    where
    N is the number of grating fringe periods;
    W is the horizontal resolution of the projector;
    Φ(u_c, v_c) is the absolute phase value of the pixel;
    u_p is the corresponding coordinate of the pixel in the projector image; and
    obtaining, according to the above correspondence between the camera image and the projector image, the unique three-dimensional coordinates (X_W, Y_W, Z_W) of the point p from the following imaging equations:
    S_C·[u_c, v_c, 1]^T = A_C·M_C·[X_W, Y_W, Z_W, 1]^T
    S_P·[u_p, v_p, 1]^T = A_P·M_P·[X_W, Y_W, Z_W, 1]^T
    where
    A_C and A_P are the intrinsic parameter matrices of the camera and the projector, respectively;
    M_C and M_P are the extrinsic parameter matrices of the camera and the projector, respectively;
    S_C and S_P are the scale factors of the camera and the projector, respectively;
    (u_c, v_c) and (u_p, v_p) are the image coordinates of the camera and the projector, respectively, both corrected for distortion using the pre-calibrated system distortion parameters;
    (X_W, Y_W, Z_W) are the unique three-dimensional coordinates of the point p.
  11. An electronic device, comprising:
    a processor; and
    a memory configured to store a program,
    wherein, when the program is executed by the processor, the processor implements the calibration method for a 3D structured light system according to any one of claims 1-10.
  12. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are used to execute the calibration method for a 3D structured light system according to any one of claims 1-10.
PCT/CN2020/130543 2020-09-11 2020-11-20 Calibration method for 3d structured light system, and electronic device and storage medium WO2022052313A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010951187.9 2020-09-11
CN202010951187.9A CN112097689B (en) 2020-09-11 2020-09-11 Calibration method of 3D structured light system

Publications (1)

Publication Number Publication Date
WO2022052313A1 true WO2022052313A1 (en) 2022-03-17

Family

ID=73752106

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/130543 WO2022052313A1 (en) 2020-09-11 2020-11-20 Calibration method for 3d structured light system, and electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN112097689B (en)
WO (1) WO2022052313A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111811433B (en) * 2020-07-15 2022-03-08 河北工业大学 Structured light system calibration method and device based on red and blue orthogonal stripes and application
CN112991462A (en) * 2021-03-15 2021-06-18 扬州大学 Camera calibration method based on dot diagram
CN113989386B (en) * 2021-10-27 2023-05-30 武汉高德智感科技有限公司 Infrared camera calibration method and system
CN116277979B (en) * 2023-05-24 2023-09-08 南京铖联激光科技有限公司 Optical machine distortion correction method for DLP printer


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100520285C (en) * 2006-07-13 2009-07-29 黑龙江科技学院 Vision measuring method for projecting multiple frequency grating object surface tri-dimensional profile
CN102853783A (en) * 2012-09-18 2013-01-02 天津工业大学 High-precision multi-wavelength three-dimensional measurement method
CN110132431B (en) * 2019-03-29 2020-12-25 黑龙江科技大学 Multi-frequency heterodyne grating absolute phase calculation method for image gray scale interval expansion

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7061628B2 (en) * 2001-06-27 2006-06-13 Southwest Research Institute Non-contact apparatus and method for measuring surface profile
US20050280831A1 (en) * 2004-06-17 2005-12-22 Konica Minolta Sensing, Inc. Phase measurement system
CN1789906A (en) * 2004-12-17 2006-06-21 北京航空航天大学 Detector for three-dimensional appearance of micro-member through-hole inner surface and its marking and using method
CN103234482A (en) * 2013-04-07 2013-08-07 哈尔滨工程大学 Structured light measuring system calibration method based on sinusoidal grating
CN108717715A (en) * 2018-06-11 2018-10-30 华南理工大学 A kind of line-structured light vision system automatic calibration method for arc welding robot
CN110595387A (en) * 2019-08-01 2019-12-20 佛山市南海区广工大数控装备协同创新研究院 Calibration method of three-dimensional reconstruction system based on multi-frequency structured light

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI ZHONGWEI: "Research on Structured Light 3D Measuring Technology and System Based on Digital Fringe Projection", CHINESE DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, 1 May 2009 (2009-05-01), pages 1 - 128, XP055910823 *
LIU HAIJUN: "Research on Calibration of Structured Light 3D Topography and Deformation Measurement System", CHINESE MASTER'S THESES FULL-TEXT DATABASE, 1 June 2015 (2015-06-01), pages 1 - 68, XP055910816 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882110A (en) * 2022-05-10 2022-08-09 中国人民解放军63921部队 Relative pose measurement and target design method suitable for micro-nano satellite self-assembly
CN114882110B (en) * 2022-05-10 2024-04-12 中国人民解放军63921部队 Relative pose measurement and target design method suitable for micro-nano satellite self-assembly
CN115082557A (en) * 2022-06-29 2022-09-20 中交第二航务工程局有限公司 Tower column hoisting relative attitude measurement method based on binocular vision
CN115082557B (en) * 2022-06-29 2024-03-15 中交第二航务工程局有限公司 Binocular vision-based tower column hoisting relative attitude measurement method
CN115201796A (en) * 2022-07-26 2022-10-18 白犀牛智达(北京)科技有限公司 External reference correction method for vehicle sensor
CN115307576A (en) * 2022-08-02 2022-11-08 清华大学 Step boundary compensation method and device in structured light measurement
CN115307576B (en) * 2022-08-02 2024-04-09 清华大学 Step boundary compensation method and device in structured light measurement
CN115388874A (en) * 2022-08-12 2022-11-25 北京航空航天大学 Monocular camera-based circular target pose estimation method
CN115930784A (en) * 2023-01-09 2023-04-07 广州市易鸿智能装备有限公司 Point inspection method of visual inspection system
CN115930784B (en) * 2023-01-09 2023-08-25 广州市易鸿智能装备有限公司 Point inspection method of visual inspection system
CN116182702B (en) * 2023-01-31 2023-10-03 桂林电子科技大学 Line structure light sensor calibration method and system based on principal component analysis
CN116182702A (en) * 2023-01-31 2023-05-30 桂林电子科技大学 Line structure light sensor calibration method and system based on principal component analysis
CN116563388A (en) * 2023-04-28 2023-08-08 北京优酷科技有限公司 Calibration data acquisition method and device, electronic equipment and storage medium
CN117029705A (en) * 2023-06-27 2023-11-10 苏州瑞威盛科技有限公司 Gear bar span measuring system and method based on non-contact 3D vision
CN116817794B (en) * 2023-06-27 2024-02-13 浙江大学 Underwater high-precision three-dimensional imaging device and method based on structured light
CN116817794A (en) * 2023-06-27 2023-09-29 浙江大学 Underwater high-precision three-dimensional imaging device and method based on structured light
CN117029705B (en) * 2023-06-27 2024-03-22 苏州瑞威盛科技有限公司 Gear bar span measuring system and method based on non-contact 3D vision
CN117017496A (en) * 2023-09-28 2023-11-10 真健康(北京)医疗科技有限公司 Flexible body surface positioning device and puncture operation navigation positioning system
CN117017496B (en) * 2023-09-28 2023-12-26 真健康(北京)医疗科技有限公司 Flexible body surface positioning device and puncture operation navigation positioning system
CN117369197A (en) * 2023-12-06 2024-01-09 深圳市安思疆科技有限公司 3D structure optical module, imaging system and method for obtaining depth map of target object

Also Published As

Publication number Publication date
CN112097689A (en) 2020-12-18
CN112097689B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
WO2022052313A1 (en) Calibration method for 3d structured light system, and electronic device and storage medium
Chen et al. High-accuracy multi-camera reconstruction enhanced by adaptive point cloud correction algorithm
Huang et al. Research on multi-camera calibration and point cloud correction method based on three-dimensional calibration object
TWI729995B (en) Generating a merged, fused three-dimensional point cloud based on captured images of a scene
JP6363863B2 (en) Information processing apparatus and information processing method
US20130127998A1 (en) Measurement apparatus, information processing apparatus, information processing method, and storage medium
CN107917679B (en) Dynamic detection and compensation method for highlight and dark regions
JP5633058B1 (en) 3D measuring apparatus and 3D measuring method
WO2007015059A1 (en) Method and system for three-dimensional data capture
CN102954770A (en) Three-dimensional measurement apparatus, three-dimensional measurement method
US10771776B2 (en) Apparatus and method for generating a camera model for an imaging system
US20160025591A1 (en) Automated deflectometry system for assessing reflector quality
JP6836561B2 (en) Image processing device and image processing method
CN112815843B (en) On-line monitoring method for printing deviation of workpiece surface in 3D printing process
CN109727277B (en) Body surface positioning tracking method for multi-eye stereo vision
CN108362205B (en) Space distance measuring method based on fringe projection
CN109373912A (en) A kind of non-contact six-freedom displacement measurement method based on binocular vision
CN116188558B (en) Stereo photogrammetry method based on binocular vision
CN111220235B (en) Water level monitoring method and device
CN106952262A (en) A kind of deck of boat analysis of Machining method based on stereoscopic vision
CN113505626A (en) Rapid three-dimensional fingerprint acquisition method and system
KR102023087B1 (en) Method for camera calibration
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
Tehrani et al. A new approach to 3D modeling using structured light pattern
TWI659390B (en) Data fusion method for camera and laser rangefinder applied to object detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20953098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20953098

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 22/09/2023)