WO2023010565A1 - Calibration method, device and terminal for a monocular speckle structured light system - Google Patents


Info

Publication number
WO2023010565A1
WO2023010565A1 (PCT/CN2021/111313; CN2021111313W)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
speckle
image
coordinates
correction
Prior art date
Application number
PCT/CN2021/111313
Other languages
English (en)
French (fr)
Inventor
谷飞飞
宋展
Original Assignee
中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Priority date
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Priority to PCT/CN2021/111313 priority Critical patent/WO2023010565A1/zh
Publication of WO2023010565A1 publication Critical patent/WO2023010565A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • the present application belongs to the technical field of optical measurement, and in particular relates to a calibration method, device and terminal for a monocular speckle structured light system.
  • Monocular speckle structured light technology can realize three-dimensional reconstruction of objects based on a single speckle image, and is one of the important dynamic measurement methods.
  • This technology generally uses an active infrared speckle projector to project the speckle pattern onto the surface of the scene, then uses a camera to collect the corresponding scene image, and realizes depth estimation based on the triangulation principle and thereby 3D reconstruction of the scene.
  • A monocular speckle structured light system has the advantages of low cost and a compact structure compared with a binocular speckle structured light system, and the technology is widely used in the field of depth camera measurement.
  • In existing systems, the pose relationship between the speckle projector and the camera is determined by the structural design of the system, which requires manual adjustment to ensure the relative position of the speckle projector and the camera; during installation, the optical axes must be kept as parallel as possible to ensure the three-dimensional reconstruction quality of the system.
  • The embodiments of the present application provide a calibration method, device, terminal, and storage medium for a monocular speckle structured light system, to solve the problem that manual assembly and structural adjustment of the monocular speckle structured light system are difficult and introduce installation errors, so that the accuracy of depth estimation and three-dimensional reconstruction of the measured object cannot be guaranteed.
  • the first aspect of the embodiments of the present application provides a calibration method for a monocular speckle structured light system, including:
  • controlling the speckle projector to project a speckle coding pattern onto the calibration plate, the speckle coding pattern including N main coding points with corner features, where N is an integer greater than or equal to 2;
  • obtaining, under M different postures of the calibration plate relative to the camera, the M projection point coordinates of each main coding point on the calibration plate, where M is an integer greater than or equal to 3; determining the optical center position of the speckle projector according to the M projection point coordinates of the N main coding points; and obtaining, according to the optical center position of the speckle projector and the optical center position of the camera, the extrinsic parameters of the camera relative to the speckle projector after epipolar correction.
  • the second aspect of the embodiments of the present application provides a calibration device for a monocular speckle structured light system, including:
  • the speckle projection module is configured to control the speckle projector to project a speckle coding pattern onto the calibration plate, the speckle coding pattern including N main coding points with corner features, where N is an integer greater than or equal to 2;
  • a coordinate acquisition module, configured to obtain, under M different postures of the calibration plate relative to the camera, the M projection point coordinates of each main coding point on the calibration plate, where M is an integer greater than or equal to 3;
  • An optical center position determining module configured to determine the optical center position of the speckle projector according to the coordinates of the M projection points of the N main coding points;
  • the parameter calibration module is configured to obtain the external parameters of the camera relative to the speckle projector after epipolar correction according to the optical center position of the speckle projector and the optical center position of the camera.
  • The third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the method described in the first aspect are implemented.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the method described in the first aspect are implemented.
  • a fifth aspect of the present application provides a computer program product, which, when running on a terminal, causes the terminal to execute the steps of the method described in the first aspect above.
  • By controlling the speckle projector to project the speckle coding pattern onto the calibration plate under different postures of the calibration plate relative to the camera, the optical center position of the speckle projector is determined based on the M projection point coordinates of each main coding point of the speckle coding pattern on the calibration plate. Epipolar correction is then performed on the camera based on the optical center position of the speckle projector and the optical center position of the camera, and the extrinsic parameters of the camera relative to the speckle projector after epipolar correction are obtained. This calibrates the pose relationship between the camera and the speckle projector, changes the status quo of assembling the system according to a preset pose relationship, avoids installation errors, reduces the difficulty of system assembly, and improves the object measurement accuracy of the monocular speckle structured light system.
  • Fig. 1 is a first flowchart of a calibration method for a monocular speckle structured light system provided by an embodiment of the present application;
  • Fig. 2 is a schematic diagram of the distribution of projection points of the speckle coding pattern on the calibration plate under different placement postures, provided by an embodiment of the present application;
  • Fig. 3 is a schematic diagram of the camera coordinate system adjustment in camera epipolar correction, provided by an embodiment of the present application;
  • Fig. 4 is a second flowchart of a calibration method for a monocular speckle structured light system provided by an embodiment of the present application;
  • Fig. 5 is a structural diagram of a calibration device for a monocular speckle structured light system provided by an embodiment of the present application;
  • FIG. 6 is a structural diagram of a terminal provided by an embodiment of the present application.
  • the term “if” may be construed as “when”, “once”, “in response to determining”, or “in response to detecting”, depending on the context.
  • the phrase “if determined” or “if [the described condition or event] is detected” may be construed, depending on the context, to mean “once determined”, “in response to the determination”, “once [the described condition or event] is detected”, or “in response to detection of [the described condition or event]”.
  • the terminals described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be appreciated that in some embodiments, the device is not a portable communication device but rather a desktop computer with a touch-sensitive surface (e.g., a touchscreen display and/or a touchpad).
  • a terminal including a display and a touch-sensitive surface is described.
  • a terminal may include one or more other physical user interface devices such as a physical keyboard, mouse and/or joystick.
  • the terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk burning application, a spreadsheet application, a gaming application, a telephony application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
  • Various applications that can be executed on the terminal can use at least one common physical user interface device, such as a touch-sensitive surface.
  • One or more functions of the touch-sensitive surface, and the corresponding information displayed on the terminal, may be adjusted and/or changed between applications and/or within a given application. In this way, the common physical architecture of the terminal (e.g., the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
  • FIG. 1 is a flowchart 1 of a calibration method for a monocular speckle structured light system provided by an embodiment of the present application.
  • the calibration method for a monocular speckle structured light system includes the following steps:
  • Step 101 controlling a speckle projector to project a speckle coding pattern onto a calibration plate.
  • the speckle coding pattern includes N main coding points with corner features, and N is an integer greater than or equal to 2.
  • the corner features that can be used for precise image positioning are added to the random speckle pattern as the main coding points to form a speckle coding pattern.
  • the corner features that can be used for precise image positioning are, for example, pattern features such as grid corners, cross intersections, inflection points, rhombus corners, and checkerboard points.
  • a cross point formed by a horizontal line and a vertical line may be used as a corner point feature.
  • the parameters of the speckle filled in the speckle encoding pattern can be adjusted.
  • Step 102 under M different postures of the calibration board relative to the camera, M projection point coordinates of each main coding point on the calibration board are acquired.
  • M is an integer greater than or equal to 3.
  • A checkerboard calibration board is used for calibration.
  • the M positions of the calibration board relative to the camera form M different poses between the calibration board and the camera.
  • Pose refers to the positional relationship of the calibration board relative to the camera, generally described by a rotation matrix R and a translation matrix T. Different poses refer to different positions of the calibration board relative to the camera, that is, different R and T.
  • Specifically, a world coordinate system may be constructed with a point on the calibration board (such as the upper-left corner vertex) as the origin, and the projection point coordinates of each main coding point on the calibration board are obtained for each posture; the M projection point coordinates corresponding to each main coding point are thus formed on the calibration board under the M poses.
  • The speckle projector is turned on to project the speckle pattern outward, and for each posture of the calibration plate, the speckle pattern is projected exactly onto the checkerboard surface of the calibration plate.
  • the camera may use a visible light camera, and the corresponding speckle projector projects visible light patterns; the camera may also use an infrared camera, and the corresponding speckle projector projects infrared light patterns.
  • Controlling the speckle projector to project a speckle coding pattern onto the calibration plate specifically means projecting a fixed (that is, the same) speckle pattern onto the calibration plate; by adjusting the M different placement postures of the calibration plate relative to the camera, the M projection point coordinates of each main coding point on the calibration plate under the M postures are obtained.
  • Step 103 determine the optical center position of the speckle projector according to the M projection point coordinates of the N main coding points.
  • The emitting optical center of the speckle projector is a spatial point light source, and the process of projecting speckle patterns can be simplified as a pinhole camera imaging model without lens distortion.
  • the determination of the optical center position of the speckle projector according to the M projection point coordinates of the N main code points includes:
  • the spatial projection straight line corresponding to each main coding point is obtained; the intersection point of the N spatial projection straight lines is determined as the optical center position of the speckle projector.
  • When the speckle projector projects the speckle pattern outward, the light emanates from the optical center O_p, so the speckle pattern projected by the speckle projector forms projection points on the checkerboard calibration plate: the main coding point P_i forms one projection point on the calibration plate in the first pose, another on the calibration plate in the second pose, and so on, up to a projection point on the calibration plate in the M-th pose.
  • the position of the optical center of the speckle projector needs to be reversely determined by means of the coordinates of these projection points.
  • The M projection point coordinates of each main coding point are fitted in space to obtain the spatial projection line corresponding to those M projection points; the N main coding points correspondingly generate N spatial projection lines, and the intersection point of the N spatial projection lines is determined as the optical center position of the speckle projector.
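The fit-then-intersect procedure above can be sketched numerically. The following is an illustrative NumPy sketch (not the patent's implementation; the function names and the least-squares intersection formulation are our own): each spatial projection line is fitted to its M projection points by PCA, and the optical center is recovered as the least-squares intersection of the N lines.

```python
import numpy as np

def fit_line_3d(points):
    """Fit a 3D line through M points via PCA: returns (centroid, unit direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction = right-singular vector with the largest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect_lines_3d(lines):
    """Least-squares intersection of N 3D lines given as (point, unit direction).

    Minimizes sum_i || (I - d_i d_i^T)(x - c_i) ||^2 by solving A x = b.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in lines:
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Toy check: three lines radiating from a known "optical center".
Op = np.array([0.1, -0.2, 0.0])
dirs = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0])]
lines = []
for d in dirs:
    d = d / np.linalg.norm(d)
    pts = [Op + t * d for t in (1.0, 2.0, 3.0)]   # M = 3 projection points per line
    lines.append(fit_line_3d(pts))
estimate = intersect_lines_3d(lines)
```

With noise-free synthetic points the estimate recovers the chosen center exactly; with real data the least-squares solution averages out measurement noise across the N lines.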
  • the corresponding spatial projection straight line of each main coding point is obtained, including:
  • The M projection point coordinates of each main coding point on the calibration board are transformed into the camera coordinate system to obtain the spatial coordinates of each projection point in the camera coordinate system; these spatial coordinates are then fitted to obtain the spatial projection line corresponding to each main coding point.
  • the main coding points of the speckle pattern are projected onto the calibration boards of different poses, and the projection points of a main coding point on different calibration boards are located on the same straight line in space.
  • The upper-left corner vertex is used as the origin to construct the world coordinate system, the world coordinates of the projection point of each main coding point on the calibration plate are obtained for each posture, and the camera collects the speckle image on the calibration plate at each posture to obtain the imaging pixel coordinate p of the main coding point on the camera plane. Let the coordinates of the projected main coding point P_i in the world coordinate system and the camera coordinate system be denoted P_i^w and P_i^c respectively, and let p be the coordinates of the projected pixel point on the camera plane; the coordinates of the main coding point P_i in the camera coordinate system can then be obtained as P_i^c = R·P_i^w + T, where R and T are the rotation and translation of the calibration board pose relative to the camera.
  • the coordinates of the M projection points of each main coding point on the calibration board are transformed into the camera coordinate system respectively, and the spatial coordinates of the coordinates of each projection point in the camera coordinate system are obtained.
  • In this way, the projection point coordinates on the calibration plate planes under different postures are unified into the same coordinate system, which facilitates spatial coordinate fitting to obtain the spatial projection line corresponding to each main coding point.
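The coordinate unification step can be illustrated as follows; this is a minimal sketch assuming per-pose extrinsics (R, T) from the calibration board's world frame to the camera frame, as calibrated in step 402:

```python
import numpy as np

def board_to_camera(points_world, R, T):
    """Transform board-plane points (world frame, Z = 0 on the board) into the
    camera frame using the per-pose extrinsics: P_c = R @ P_w + T."""
    pts = np.asarray(points_world, dtype=float)
    return (R @ pts.T).T + np.asarray(T, dtype=float)

# An identity rotation with a pure translation just shifts the points:
pts = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0]])  # e.g. millimeters on the board
R = np.eye(3)
T = np.array([0.0, 0.0, 500.0])   # board 500 mm in front of the camera (illustrative)
cam_pts = board_to_camera(pts, R, T)
```

Applying this per pose expresses all M projection points of a main coding point in one common (camera) frame, ready for the line fitting above.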
  • Step 104 according to the optical center position of the speckle projector and the optical center position of the camera, obtain the extrinsic parameters of the camera relative to the speckle projector after epipolar correction.
  • After correction, the X-axes of the camera coordinate system and the projector coordinate system are both consistent with the line connecting the two optical centers. A stereo-correction relationship model between the projector and the camera can thus be established, from which the extrinsic parameters of the camera relative to the speckle projector after correction are calculated.
  • Suppose the optical center of the speckle projector is O_p and the optical center of the camera is O_c. Since the projector itself cannot image, the projected speckle pattern can only be observed by the camera; therefore, for convenience, in the stereo correction the X-axis of the projector coordinate system O_p-x_p y_p z_p is set to be consistent with the direction O_p O_c, and the corresponding Y-axis and Z-axis of the projector coordinate system are established following the construction rules of a Cartesian coordinate system.
  • the external parameters of the camera relative to the speckle projector after epipolar correction are obtained, including:
  • Epipolar correction is performed on the camera to obtain the rotation angle and rotation axis required for the camera to reach the corrected state, where the optical axis of the camera after epipolar correction and the optical axis of the speckle projector are parallel to each other; based on the rotation angle and rotation axis, the translation matrix and rotation matrix of the epipolar-corrected camera relative to the speckle projector are calculated, and the extrinsic parameters including the translation matrix and the rotation matrix are obtained.
  • Let the camera coordinate system before correction be O_c-x_c y_c z_c and the camera coordinate system after correction be O_c-x'_c y'_c z'_c, whose X-axis is consistent with O_p O_c.
  • The unit baseline direction before correction is T_0 = (O_c - O_p)/||O_c - O_p||, and the rotation angle required for correction is θ = arccos[x_c · T_0], where x_c is the X-axis direction of the camera coordinate system before correction and arccos[·] represents the arccosine operation; the rotation axis is the unit vector r perpendicular to both x_c and T_0.
  • The rotation matrix can then be obtained from the rotation axis and rotation angle by the Rodrigues formula: R_rec = cos θ · I + (1 - cos θ) · r r^T + sin θ · [r]_×, where I is a 3 × 3 identity matrix and [·]_× represents the antisymmetric (cross-product) matrix of a vector. The corrected translation is T_rec = R_rec · T_0.
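Putting the axis-angle and Rodrigues steps together, a minimal numeric sketch follows. The convention chosen here (rotating the unit baseline direction T_0 onto the camera X-axis, so that the corrected translation lies along X) is one common rectification choice and may differ in sign from the patent's exact derivation:

```python
import numpy as np

def rectify_rotation(Oc, Op):
    """Rectifying rotation aligning the baseline O_p O_c with the camera X-axis.

    Returns (R_rec, T_rec) using the Rodrigues formula
    R = cos(t) I + (1 - cos(t)) r r^T + sin(t) [r]_x.
    """
    x_c = np.array([1.0, 0.0, 0.0])             # camera X-axis before correction
    T0 = (Oc - Op) / np.linalg.norm(Oc - Op)    # unit baseline direction
    c = float(np.clip(T0 @ x_c, -1.0, 1.0))     # cos(theta)
    axis = np.cross(T0, x_c)                    # rotation axis (takes T0 -> x_c)
    s = np.linalg.norm(axis)                    # sin(theta)
    if s < 1e-12:                               # already aligned
        return np.eye(3), T0
    r = axis / s
    K = np.array([[0.0, -r[2], r[1]],
                  [r[2], 0.0, -r[0]],
                  [-r[1], r[0], 0.0]])          # [r]_x, the antisymmetric matrix
    theta = np.arccos(c)
    R = (np.cos(theta) * np.eye(3)
         + (1 - np.cos(theta)) * np.outer(r, r)
         + np.sin(theta) * K)
    return R, R @ T0

Oc = np.array([50.0, 10.0, 5.0])   # hypothetical camera optical center
Op = np.zeros(3)                   # projector optical center at the origin
R_rec, T_rec = rectify_rotation(Oc, Op)
```

After this rotation the baseline is mapped onto the X-axis, which is exactly the row-alignment condition epipolar correction requires.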
  • By controlling the speckle projector to project the speckle coding pattern onto the calibration plate under different postures of the calibration plate relative to the camera, the optical center position of the speckle projector is determined based on the M projection point coordinates of each main coding point of the speckle coding pattern on the calibration plate. Epipolar correction is then performed on the camera based on the optical center position of the speckle projector and the optical center position of the camera, and the extrinsic parameters of the camera relative to the speckle projector after epipolar correction are obtained. This calibrates the pose relationship between the camera and the speckle projector, changes the status quo of assembling the system according to a preset pose relationship, eliminates the parameter errors caused by the installation operation, reduces the difficulty of system assembly, and improves the object measurement accuracy of the monocular speckle structured light system.
  • Embodiments of the present application also provide different implementations of a calibration method for a monocular speckle structured light system.
  • FIG. 4 is a second flow chart of a calibration method for a monocular speckle structured light system provided by an embodiment of the present application.
  • a calibration method for a monocular speckle structured light system includes the following steps:
  • Step 401, under M different postures of the calibration board relative to the camera, the camera is controlled to collect an image of the calibration board in each posture, obtaining M calibration board images.
  • M is an integer greater than or equal to 3.
  • Step 402, based on the M calibration board images, calibrate the internal parameters of the camera and the extrinsic parameters of each pose of the calibration board relative to the camera.
  • the internal parameters include the optical center position of the camera.
  • The internal parameters mainly include the camera focal length, the image center, and the lens distortion coefficients, as well as derived parameters such as the camera optical center position.
  • the extrinsic parameters are the pose relationship between each calibration board and the camera (including a rotation matrix and a translation matrix).
  • the calibration board can be a checkerboard calibration board, and the classic Zhang Zhengyou checkerboard calibration method can be used to calibrate the internal and external parameters of the camera. No specific limitation is made here.
  • Step 403 controlling the speckle projector to project a speckle coding pattern onto the calibration plate.
  • the speckle coding pattern includes N main coding points with corner features, and N is an integer greater than or equal to 2.
  • Step 404 under M different postures of the calibration board relative to the camera, M projection point coordinates of each main coding point on the calibration board are obtained.
  • The implementation process of this step is the same as that of step 102 in the foregoing embodiment, and will not be repeated here.
  • Step 405 Determine the optical center position of the speckle projector according to the coordinates of the M projection points of the N main coding points.
  • The implementation process of this step is the same as that of step 103 in the foregoing embodiment, and will not be repeated here.
  • Step 406 according to the optical center position of the speckle projector and the optical center position of the camera, obtain the extrinsic parameters of the camera relative to the speckle projector after epipolar correction.
  • The implementation process of this step is the same as that of step 104 in the foregoing embodiment, and will not be repeated here.
  • The internal parameters specifically include the focal length of the camera and the radial and tangential distortion coefficients of the lens; correspondingly, after obtaining the extrinsic parameters of the camera relative to the speckle projector after epipolar correction according to the optical center position of the speckle projector and the optical center position of the camera, the method also includes:
  • In order to eliminate the effect of lens distortion, a corrected focal length is defined based on the calibrated focal length, the lens distortion coefficients, and the image resolution; f_x and f_y respectively represent the focal length components in the x and y directions, and the two are generally the same.
  • (f x ,f y ) is the focal length of the camera that has been calibrated in the previous step
  • (k 1 ,k 2 ) and (p 1 ,p 2 ) are the radial and tangential distortion coefficients of the lens respectively
  • The resolution of the image collected by the camera is W × H, which is a known parameter of the camera.
  • An affine transformation is applied to obtain the corrected target vertex coordinates; based on the target vertex coordinates, the corrected camera image center is obtained; the internal parameters are then updated with the distortion-corrected camera focal length and the corrected camera image center.
  • The traditional epipolar correction method generally takes the image center in the camera's calibrated intrinsics directly as the epipolar-corrected image center, which is not accurate in practice; the corrected image will therefore suffer from problems such as distortion and rotational offset, and the visible area of effective image information may be reduced. Therefore, the optimal image center is calculated using the calculated extrinsic parameters and the corrected camera focal length.
  • The corrected image center is obtained as the geometric center of the corrected trapezoidal area formed by the four corrected vertices.
  • the internal parameters of the camera are updated by correcting the focal length of the camera and the center of the camera image.
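As a sketch of the image-center update, assuming "geometric center" is taken as the mean of the four affine-corrected vertex coordinates (an assumption; the patent's exact formula is not reproduced in this text):

```python
import numpy as np

def corrected_image_center(vertices, A, t):
    """Map the four image corner vertices through an affine transform x' = A x + t
    and take their mean as the corrected image center (assumed definition)."""
    v = np.asarray(vertices, dtype=float)
    corrected = (A @ v.T).T + np.asarray(t, dtype=float)
    return corrected.mean(axis=0)

W, H = 640, 480
corners = [(0, 0), (W - 1, 0), (W - 1, H - 1), (0, H - 1)]
A = np.eye(2)        # identity affine: the center stays at the image midpoint
t = np.zeros(2)
center = corrected_image_center(corners, A, t)
```

With the identity transform the result is simply the midpoint of the image; under a real rectifying transform the four corners map to a trapezoid and the returned center shifts accordingly.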
  • Based on the above, the speckle images collected by the camera can be subjected to epipolar correction so that the row coordinates of the corrected images are aligned. Once line alignment is achieved, in the subsequent disparity estimation step the pixel feature matching search between two images only needs to be carried out along the row direction instead of over the entire image, greatly improving matching efficiency.
  • Step 407 controlling the speckle projector to project the speckle image to the reference plane, and controlling the camera to collect images on the reference plane to obtain the speckle image of the reference plane.
  • Step 408 based on the internal parameters and the extrinsic parameters of the camera relative to the speckle projector after epipolar correction, image correction is performed on the speckle image of the reference surface to obtain a corrected speckle image of the reference surface.
  • A reference plane is selected, the speckle projector projects the speckle pattern onto it, and the camera collects the speckle image projected on the reference plane; this image is saved, and all subsequent measurement images are stereo-matched with it to realize disparity estimation.
  • the vertical relationship between the reference plane and the optical axis of the camera may not be strictly limited.
  • epipolar correction can be performed on the collected images to make up for the parameter errors caused by the manual installation and operation of the measurement system, reduce the complexity of system assembly adjustment, and improve the accuracy of 3D reconstruction of the system.
  • the reference plane no longer needs to be adjusted multiple times to keep it absolutely parallel to the optical axis of the camera, but its position can be roughly adjusted, and then the ideal speckle pattern of the reference plane can be obtained through epipolar correction.
  • Step 409 controlling the speckle projector to project a speckle image on the surface of the object to be measured, and controlling the camera to collect images on the surface of the object to be measured to obtain a speckle image on the surface of the object to be measured.
  • Step 410 Perform image correction on the speckle image on the surface of the object to be measured based on the internal parameters and the external parameters of the camera relative to the speckle projector after epipolar correction, to obtain the corrected speckle image on the surface of the object to be measured.
  • Based on the internal parameters and the extrinsic parameters of the camera relative to the speckle projector after epipolar correction, image correction processing such as distortion correction, image center correction, rotation correction, and translation correction can be performed on the speckle image of the surface of the object to be measured, so that the corrected speckle image of the object surface is row-aligned with the reference-plane speckle image corrected by the same standard, and disparity estimation is then performed.
  • Step 411 based on the corrected speckle image of the reference surface, perform parallax estimation and three-dimensional shape reconstruction on the corrected surface speckle image of the object to be measured.
  • Local block matching, or global/semi-global methods such as SGM (semi-global matching), can be used to achieve dense disparity estimation and obtain the disparity information of the object under test; the triangulation principle is then used to realize depth estimation and 3D reconstruction.
  • the corrected speckle image of the reference surface can be obtained by using the internal parameters of the camera through nonlinear mapping.
  • the speckle projector projects the speckle pattern onto the surface of the object to be measured, and the camera collects the corresponding image and performs epipolar correction, so as to obtain the measured speckle image line-aligned with the speckle image of the reference surface.
  • The matching method can use local matching algorithms such as SAD (sum of absolute differences), SSD (sum of squared differences), and NCC (normalized cross-correlation), or semi-global algorithms such as SGM; this yields a disparity map between the two images.
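A minimal sketch of row-wise NCC block matching on rectified (row-aligned) images follows; the window size and search range are illustrative choices, not values from the patent:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def disparity_ncc(ref, tgt, y, x, half=3, max_disp=16):
    """Search along the row (images are row-aligned after epipolar correction)
    for the integer disparity maximizing NCC between patches."""
    patch = ref[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_score = 0, -np.inf
    for d in range(max_disp + 1):
        if x - d - half < 0:
            break
        cand = tgt[y - half:y + half + 1, x - d - half:x - d + half + 1]
        score = ncc(patch, cand)
        if score > best_score:
            best_score, best_d = score, d
    return best_d

# Synthetic check: the target is the reference speckle shifted 5 px toward smaller x.
rng = np.random.default_rng(0)
ref = rng.random((40, 60))
tgt = np.zeros_like(ref)
tgt[:, :-5] = ref[:, 5:]
d = disparity_ncc(ref, tgt, y=20, x=30)
```

Because the images are row-aligned, the search is one-dimensional; sub-pixel refinement (e.g. parabola fitting over neighboring scores) is typically added on top of the integer search.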
  • The obtained disparity estimation results can be used, together with the corrected camera calibration parameters (the corrected camera focal length and the corrected image center), to restore the depth information of the object to be measured and realize 3D reconstruction according to formula (8).
  • In the formula, d is the disparity information, z is the depth information, and (x, y, z) is the 3D reconstruction information: once z is obtained, x and y can be calculated from z, and three-dimensional reconstruction refers to calculating all (x, y, z).
  • (x, y, z)^T are the three-dimensional coordinates of the spatial point reconstructed from the feature point (u, v)^T on the image collected by the camera, where (u, v)^T represents the pixel coordinates of the point on that image, (c'_x, c'_y) is the camera image center after epipolar correction, and f' is the camera focal length after epipolar correction.
  • z_0 is the distance from the reference plane to the camera, which is a known quantity determined when the reference plane is selected.
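As an illustrative sketch of formula (8)'s role, assuming the common reference-plane depth model 1/z = 1/z_0 + d/(f·b) (an assumption, since the exact formula is not reproduced in this text; b denotes the camera-projector baseline length) together with the standard pinhole back-projection for x and y:

```python
import numpy as np

def reconstruct_point(u, v, d, f, cx, cy, z0, baseline):
    """Back-project pixel (u, v) with disparity d (measured relative to the
    reference plane) into 3D.

    Assumed depth model: 1/z = 1/z0 + d / (f * baseline); the x and y
    back-projection from z is the standard pinhole relation, with f the
    epipolar-corrected focal length (fx = fy after rectification)."""
    z = 1.0 / (1.0 / z0 + d / (f * baseline))
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.array([x, y, z])

# Zero disparity w.r.t. the reference plane -> the point lies on that plane.
p = reconstruct_point(u=320, v=240, d=0.0, f=600.0, cx=320.0, cy=240.0,
                      z0=800.0, baseline=50.0)
```

The parameter values here are hypothetical; the sign convention of d depends on how the disparity map is defined relative to the reference image.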
  • By controlling the speckle projector to project the speckle coding pattern onto the calibration plate under different postures of the calibration plate relative to the camera, the optical center position of the speckle projector is determined based on the M projection point coordinates of each main coding point of the speckle coding pattern on the calibration plate. Epipolar correction is then performed on the camera based on the optical center position of the speckle projector and the optical center position of the camera, the extrinsic parameters of the camera relative to the speckle projector after epipolar correction are obtained, and the internal parameters of the camera are corrected at the same time, realizing the calibration of both the internal and the external parameters of the camera. This changes the status quo of assembling the system according to a preset pose relationship, eliminates the parameter errors caused by the installation operation, reduces the difficulty of system assembly, and improves the object measurement accuracy of the monocular speckle structured light system.
  • FIG. 5 is a structural diagram of a calibration device for a monocular speckle structured light system provided by an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
  • the calibration device 500 of the monocular speckle structured light system includes:
  • the speckle projection module 501 is configured to control the speckle projector to project a speckle coding pattern onto the calibration board, the speckle coding pattern containing N main coding points with corner features, N being an integer greater than or equal to 2;
  • the coordinate acquisition module 502 is configured to obtain, under M different postures of the calibration board relative to the camera, the M projection point coordinates of each of the main coding points on the calibration board, M being an integer greater than or equal to 3;
  • An optical center position determination module 503, configured to determine the optical center position of the speckle projector according to the coordinates of the M projection points of the N main coding points;
  • the parameter calibration module 504 is configured to obtain an epipolar-corrected external parameter of the camera relative to the speckle projector according to the optical center position of the speckle projector and the optical center position of the camera.
  • the parameter calibration module is specifically configured to:
  • perform epipolar correction on the camera based on the optical center positions of the speckle projector and the camera, obtaining the rotation angle and rotation axis required for the camera to reach the epipolar-corrected state, in which the optical axes of the camera and the speckle projector are parallel to each other;
  • calculate, from the rotation angle and rotation axis, the translation matrix and rotation matrix of the epipolar-corrected camera relative to the speckle projector, obtaining external parameters comprising the translation matrix and the rotation matrix.
  • the optical center position determination module is specifically configured to:
  • obtain, from the M projection point coordinates of each of the main coding points, the spatial projection line corresponding to each main coding point;
  • determine the intersection of the N spatial projection lines as the optical center position of the speckle projector.
  • the optical center position determination module is more specifically configured to:
  • transform, based on the external parameters of the camera relative to the calibration board in each posture, the M projection point coordinates of each of the main coding points on the calibration board into the camera coordinate system, obtaining the spatial coordinates of each projection point in the camera coordinate system;
  • fit the spatial coordinates of the projection points in the camera coordinate system, obtaining the spatial projection line corresponding to each of the main coding points.
  • the device also includes:
  • a preliminary calibration module, which, under M different postures of the calibration board relative to the camera, controls the camera to capture images of the board in each posture, obtaining M calibration board images, and which calibrates, from the M calibration board images, the internal parameters of the camera and the external parameters of the camera relative to the board in each posture, the internal parameters including the optical center position of the camera;
  • the internal parameters also include the camera focal length, the radial distortion coefficient and the tangential distortion coefficient of the lens; the device also includes:
  • the first correction module is used to input the focal length of the camera, the radial distortion coefficient, the tangential distortion coefficient and the resolution of the image collected by the camera into the lens distortion model to obtain the focal length of the camera after distortion correction.
  • the device also includes:
  • a second correction module, configured to:
  • acquire the vertex coordinates of the image collected by the camera;
  • obtain corrected target vertex coordinates by affine transformation, based on the vertex coordinates together with the distortion-corrected camera focal length and the epipolar-corrected external parameters of the camera relative to the speckle projector;
  • obtain the corrected camera image center from the target vertex coordinates;
  • update the internal parameters according to the distortion-corrected camera focal length and the corrected camera image center.
  • the device also includes:
  • a measurement module, configured to:
  • control the speckle projector to project a speckle image onto a reference plane, and control the camera to capture the reference plane, obtaining a reference plane speckle image; correct the reference plane speckle image based on the internal parameters and the epipolar-corrected external parameters of the camera relative to the speckle projector, obtaining a corrected reference plane speckle image;
  • control the speckle projector to project a speckle image onto the surface of the object to be measured, and control the camera to capture that surface, obtaining an object surface speckle image; correct it in the same way, obtaining a corrected object surface speckle image;
  • perform disparity estimation and three-dimensional topography reconstruction on the corrected object surface speckle image, based on the corrected reference plane speckle image.
  • The calibration device for a monocular speckle structured light system provided in the embodiments of the present application can realize each process of the above embodiments of the calibration method and achieve the same technical effects; to avoid repetition, they are not described again here.
  • FIG. 6 is a structural diagram of a terminal provided by an embodiment of the present application. As shown in the figure, the terminal 6 of this embodiment includes: at least one processor 60 (only one is shown in FIG. 6), a memory 61, and a computer program 62 stored in the memory 61 and runnable on the at least one processor 60; when the processor 60 executes the computer program 62, the steps in any of the above method embodiments are implemented.
  • the terminal 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server.
  • the terminal 6 may include, but not limited to, a processor 60 and a memory 61 .
  • FIG. 6 is only an example of the terminal 6 and does not constitute a limitation on the terminal 6; it may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal may also include input and output devices, network access devices, a bus, and the like.
  • the processor 60 can be a central processing unit (Central Processing Unit, CPU), and can also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and the like.
  • the memory 61 may be an internal storage unit of the terminal 6 , such as a hard disk or memory of the terminal 6 .
  • the memory 61 can also be an external storage device of the terminal 6, such as a plug-in hard disk equipped on the terminal 6, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, Flash card (Flash Card), etc.
  • the memory 61 may also include both an internal storage unit of the terminal 6 and an external storage device.
  • the memory 61 is used to store the computer program and other programs and data required by the terminal.
  • the memory 61 can also be used to temporarily store data that has been output or will be output.
  • the disclosed device/terminal and method may be implemented in other ways.
  • the device/terminal embodiments described above are only illustrative.
  • the division of the modules or units is only a logical functional division; in actual implementation there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated module/unit is implemented as a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application can also be completed by instructing the relevant hardware through computer programs.
  • the computer programs can be stored in a computer-readable storage medium, and the computer When the program is executed by the processor, the steps in the above-mentioned various method embodiments can be realized.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.
  • All or part of the processes in the methods of the above embodiments of the present application may also be realized by a computer program product: when the computer program product runs on a terminal, the terminal, when executing it, implements the steps in each of the above method embodiments.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Applicable to the technical field of optical measurement: a calibration method, apparatus and terminal for a monocular speckle structured light system. The method includes: controlling a speckle projector to project, onto a calibration board, a speckle coding pattern containing N main coding points (101); under M different postures of the calibration board relative to the camera, obtaining the M projection point coordinates of each main coding point on the calibration board (102); determining the optical center position of the speckle projector from the M projection point coordinates of the N main coding points (103); and obtaining, from the optical center positions of the speckle projector and the camera, the epipolar-corrected external parameters of the camera relative to the speckle projector (104). The method avoids the parameter errors introduced by manual installation operations, lowers the difficulty of system assembly, and improves the object measurement accuracy of the monocular speckle structured light system.

Description

Calibration method, apparatus and terminal for a monocular speckle structured light system — Technical Field
The present application belongs to the technical field of optical measurement, and in particular relates to a calibration method, apparatus and terminal for a monocular speckle structured light system.
Background
Monocular speckle structured light technology can reconstruct an object in three dimensions from a single speckle image and is one of the important dynamic measurement methods. The technology generally uses an active infrared speckle projector to project a speckle pattern onto the scene surface, then uses a camera to capture the corresponding scene image, estimates depth based on the triangulation principle, and thereby reconstructs the scene in 3D.
In applications, a monocular speckle structured light system has advantages over a binocular speckle structured light system, such as lower cost and a more compact structure, and the technology is widely used in the field of depth cameras.
At present, when a monocular speckle structured light system is assembled, the pose relationship between the speckle projector and the camera is determined by the structural design of the system. This requires manual adjustment to keep the relative pose of the projector and the camera as close as possible to a parallel-optical-axis arrangement during installation, so as to guarantee the system's 3D reconstruction quality.
However, manual assembly and structural adjustment are difficult in practice, and the relative positions of the components of a monocular speckle structured light system can hardly be brought to the ideal state by manual adjustment. Installation errors therefore arise, and the accuracy of depth estimation and 3D reconstruction of the measured object cannot be guaranteed.
Technical Problem
Embodiments of the present application provide a calibration method, apparatus, terminal and storage medium for a monocular speckle structured light system, to solve the problem that manual assembly and structural adjustment of such a system are difficult in practice, causing installation errors and making the accuracy of depth estimation and 3D reconstruction of the measured object impossible to guarantee.
Technical Solution
To solve the above technical problem, the embodiments of the present application adopt the following technical solutions:
A first aspect of the embodiments of the present application provides a calibration method for a monocular speckle structured light system, including:
controlling a speckle projector to project a speckle coding pattern onto a calibration board, the speckle coding pattern containing N main coding points with corner features, N being an integer greater than or equal to 2;
under M different postures of the calibration board relative to a camera, obtaining M projection point coordinates of each main coding point on the calibration board, M being an integer greater than or equal to 3;
determining the optical center position of the speckle projector according to the M projection point coordinates of the N main coding points;
obtaining, according to the optical center position of the speckle projector and the optical center position of the camera, the epipolar-corrected external parameters of the camera relative to the speckle projector.
A second aspect of the embodiments of the present application provides a calibration apparatus for a monocular speckle structured light system, including:
a speckle projection module, configured to control a speckle projector to project a speckle coding pattern onto a calibration board, the speckle coding pattern containing N main coding points with corner features, N being an integer greater than or equal to 2;
a coordinate acquisition module, configured to obtain, under M different postures of the calibration board relative to a camera, M projection point coordinates of each main coding point on the calibration board, M being an integer greater than or equal to 3;
an optical center position determination module, configured to determine the optical center position of the speckle projector according to the M projection point coordinates of the N main coding points;
a parameter calibration module, configured to obtain, according to the optical center position of the speckle projector and the optical center position of the camera, the epipolar-corrected external parameters of the camera relative to the speckle projector.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the method of the first aspect are implemented.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
A fifth aspect of the present application provides a computer program product which, when run on a terminal, causes the terminal to perform the steps of the method of the first aspect.
Beneficial Effects
In this embodiment, the speckle projector is controlled to project a speckle coding pattern onto the calibration board under different postures of the board relative to the camera; the optical center position of the speckle projector is determined from the M projection point coordinates of each main coding point of the pattern on the board; epipolar correction of the camera is then performed based on the optical centers of the projector and the camera, yielding the epipolar-corrected external parameters of the camera relative to the projector. This calibrates the pose relationship between camera and projector, changes the current practice of assembling the system to a preset pose relationship, avoids installation errors, lowers assembly difficulty, and improves the object measurement accuracy of the monocular speckle structured light system.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a first flowchart of a calibration method for a monocular speckle structured light system provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of the distribution of projection points of the speckle coding pattern on calibration boards in different postures, provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the camera coordinate system adjustment during camera epipolar correction, provided by an embodiment of the present application;
FIG. 4 is a second flowchart of a calibration method for a monocular speckle structured light system provided by an embodiment of the present application;
FIG. 5 is a structural diagram of a calibration apparatus for a monocular speckle structured light system provided by an embodiment of the present application;
FIG. 6 is a structural diagram of a terminal provided by an embodiment of the present application.
Embodiments of the Invention
In the following description, specific details such as particular system structures and techniques are set forth for purposes of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present application.
It should be understood that, when used in this specification and the appended claims, the term "including" indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or collections thereof.
It should also be understood that the terms used in this specification are for the purpose of describing particular embodiments only and are not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should further be understood that the term "and/or" used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on context, as "once determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminals described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers or tablet computers having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad). It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the following discussion, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user interface devices such as a physical keyboard, a mouse and/or a joystick.
The terminal supports various applications, for example one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application and/or a digital video player application.
The various applications executable on the terminal can use at least one common physical user interface device such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within a given application. In this way, the common physical architecture of the terminal (for example, the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the numbering of the steps in this embodiment does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In this specification, to solve the problem that manual assembly and structural adjustment of a monocular speckle structured light system are difficult in practice, causing installation errors and making the accuracy of depth estimation and 3D reconstruction of the measured object impossible to guarantee, a calibration method for a monocular speckle structured light system is proposed. With this calibration method, the system parameters can be calibrated accurately, the relative position between the speckle projector and the camera no longer needs to meet strict requirements, parameter errors caused by installation and handling can be eliminated, and the object reconstruction accuracy of monocular speckle structured light in applications is significantly improved.
To illustrate the technical solutions described in the present application, specific embodiments are described below.
Referring to FIG. 1, FIG. 1 is a first flowchart of a calibration method for a monocular speckle structured light system provided by an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step 101: control the speckle projector to project a speckle coding pattern onto the calibration board.
The speckle coding pattern contains N main coding points with corner features, N being an integer greater than or equal to 2.
Corner features that allow precise image localization are added to a random speckle pattern as main coding points, forming the speckle coding pattern.
Corner features allowing precise image localization are, for example, pattern features such as grid corners, cross intersections, inflection points, diamond corners or checkerboard points. As one implementation, the cross intersection formed by a horizontal line and a vertical line can be used as the corner feature.
The brightness, size, distribution and scale of the speckles filling the speckle coding pattern are all adjustable.
Step 102: under M different postures of the calibration board relative to the camera, obtain the M projection point coordinates of each main coding point on the calibration board.
M is an integer greater than or equal to 3.
The calibration board is a checkerboard calibration board. The M postures of the board relative to the camera form M different poses between the board and the camera. A pose is the positional relationship of the calibration board relative to the camera, generally described by a rotation matrix R and a translation matrix T; different poses mean the board is placed differently relative to the camera, i.e. different R|T.
The M projection point coordinates of each main coding point on the calibration board can be obtained by building a world coordinate system with some point on the board (for example the top-left vertex) as origin and reading the coordinates of the projection point of each main coding point on each board; the boards in the M poses then carry M projection point coordinates corresponding to each main coding point.
The speckle projector is switched on to project the speckle image outward; for each posture of the calibration board, the speckle pattern is made to fall exactly on the checkerboard surface of the board.
The camera may be a visible-light camera, with the speckle projector projecting a visible-light pattern; it may also be an infrared camera, with the projector projecting an infrared pattern.
In this step of the present application, controlling the speckle projector to project a speckle coding pattern onto the calibration board specifically means projecting one fixed (i.e. identical) speckle pattern onto the board; by varying the M different postures of the board relative to the camera, the M projection point coordinates of each main coding point on the board are obtained under the M postures.
Step 103: determine the optical center position of the speckle projector from the M projection point coordinates of the N main coding points.
The light-emitting optical center of the speckle projector is a spatial point light source, and its projection of the speckle pattern can be simplified to a camera pinhole imaging model without lens distortion.
As shown in FIG. 2, the projector optical center O_p therefore needs to be located first, so that epipolar correction of the camera can subsequently be performed with respect to it.
As an optional implementation, determining the optical center position of the speckle projector from the M projection point coordinates of the N main coding points includes:
obtaining, from the M projection point coordinates of each main coding point, the spatial projection line corresponding to that main coding point; and determining the intersection of the N spatial projection lines as the optical center position of the speckle projector.
As shown in FIG. 2, when the speckle projector projects the speckle pattern outward, the rays emanate from the optical center O_p, so the projected speckle pattern forms projection points on the checkerboard calibration board: a main coding point P_i of the projected pattern forms projection point P_i^1 on the board in pose 1, projection point P_i^2 on the board in pose 2, and so on, up to projection point P_i^M on the board in pose M.
The optical center position of the speckle projector has to be determined in reverse with the help of the coordinates of these projection points.
Specifically, this is achieved via the M projection point coordinates on the calibration board of each main coding point of the speckle coding pattern, which has a precise localization function.
Here, the M projection point coordinates of each main coding point are fitted in space to obtain the spatial projection line corresponding to those M coordinates; the N main coding points then generate N spatial projection lines, and the intersection of the N lines is determined as the optical center position of the speckle projector.
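The line-fitting and intersection steps just described can be sketched numerically. This is a minimal illustration, not code from the patent: each main coding point's M board projections (already in the camera frame) are fitted to a 3D line by PCA, and the projector optical center is recovered as the least-squares point closest to all N lines. The function names and synthetic data are our own assumptions.

```python
import numpy as np

def fit_line(points):
    """Fit a 3D line to M points: returns (centroid, unit direction) via PCA/SVD."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)  # top right-singular vector = direction
    return centroid, vt[0]

def intersect_lines(lines):
    """Least-squares point closest to all lines, each given as (point p, direction d).
    Minimizes sum ||(I - d d^T)(x - p)||^2, a 3x3 linear system."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in lines:
        proj = np.eye(3) - np.outer(d, d)  # projector onto plane normal to d
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Synthetic check: N = 3 rays through a common optical center O_p,
# each sampled at M = 3 "board" depths.
O_p = np.array([0.1, -0.2, 0.0])
directions = [np.array([1.0, 0.0, 1.0]),
              np.array([0.0, 1.0, 1.0]),
              np.array([1.0, 1.0, 1.0])]
lines = []
for d in directions:
    d = d / np.linalg.norm(d)
    pts = [O_p + t * d for t in (1.0, 2.0, 3.0)]
    lines.append(fit_line(pts))
center = intersect_lines(lines)  # recovers O_p
```

Because the three synthetic rays intersect exactly, the least-squares solution coincides with the true optical center.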
Obtaining the spatial projection line corresponding to each main coding point from its M projection point coordinates includes:
transforming, based on the external parameters of the camera relative to the calibration board in each posture, the M projection point coordinates of each main coding point on the board into the camera coordinate system, to obtain the spatial coordinates of each projection point in the camera coordinate system; and fitting those spatial coordinates to obtain the spatial projection line corresponding to each main coding point.
As shown in FIG. 2, when the main coding points of the speckle pattern are projected onto boards in different poses, the projection points of one main coding point on the different boards lie on the same straight line in space. Locating the projector optical center proceeds as follows:
Given the external parameters of the camera relative to the calibration board in each posture and the poses M_i (i = 1, …, M), let the external parameters of the board relative to the camera be [R_i^c | T_i^c], where R_i^c is the rotation matrix and T_i^c the translation matrix; from these, the coordinates in the camera coordinate system of the main coding points projected onto the i-th board can be located.
When transforming the M projection point coordinates of each main coding point on the board into the camera coordinate system to obtain the spatial coordinates of each projection point, a world coordinate system can be built with the top-left vertex of the board as origin, and the world coordinates of the projection point of each main coding point on the board in each posture are obtained. The camera captures the speckle image on the board in each posture, giving the imaging pixel coordinates p of the main coding point on the camera plane. Let the projections of the main coding point P_i in the world and camera coordinate systems be P_i^w and P_i^c respectively, with p the projected pixel coordinates of P_i^c on the camera plane; the coordinates of P_i in the camera coordinate system are then
P_i^c = R_i^c · P_i^w + T_i^c.
That is, the external parameters of the camera relative to the board transform the M projection point coordinates of each main coding point into the camera coordinate system, giving the spatial coordinates of each projection point in that system; the projection point coordinates on the board planes in different postures are thus unified into a single coordinate system, which facilitates fitting the spatial projection line corresponding to each main coding point.
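The board-to-camera transformation P_i^c = R_i^c · P_i^w + T_i^c can be illustrated directly. The pose values below are made up for the example:

```python
import numpy as np

def board_to_camera(points_w, R, T):
    """Map board-frame (world) points into the camera frame: P_c = R @ P_w + T."""
    pts = np.asarray(points_w, dtype=float)
    return (R @ pts.T).T + np.asarray(T, dtype=float)

# Hypothetical pose: board rotated 90 degrees about z, shifted 100 mm along camera z.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([0.0, 0.0, 100.0])
P_c = board_to_camera([[10.0, 0.0, 0.0]], R, T)  # one projection point on the board
```

Applying this per posture puts all M projections of a main coding point into the one camera frame, ready for line fitting.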
Step 104: obtain, from the optical center positions of the speckle projector and the camera, the epipolar-corrected external parameters of the camera relative to the speckle projector.
Because of installation errors it is difficult to keep the optical axes of the projector and the camera strictly parallel, so stereo epipolar correction of the two is required. After correction, the X axes of the camera coordinate system and the projector coordinate system coincide with the line joining the two optical centers. Using the calibrated optical center positions of the speckle projector and the camera, a stereo rectification model between projector and camera can be established, and the corrected external parameters of the camera relative to the projector are obtained by computation.
As shown in FIG. 3, the projector optical center is O_p and the camera optical center is O_c. Since the projector itself cannot form images and the projected speckle pattern can only be acquired through the camera, for convenience the internal parameters of the projector after stereo rectification are set equal to those of the camera, i.e. K_proj = K_cam. The X axis of the projector coordinate system O_p-x_p y_p z_p is set to coincide with O_pO_c, and the corresponding Y and Z axes of the projector coordinate system are then established according to the construction rules of a Cartesian coordinate system.
As a specific implementation, obtaining the epipolar-corrected external parameters of the camera relative to the projector from the two optical center positions includes:
performing epipolar correction of the camera based on the optical center positions of the projector and the camera, and obtaining the rotation angle and rotation axis needed for the camera to reach the epipolar-corrected state, in which the optical axes of camera and projector are parallel to each other; and computing, from the rotation angle and rotation axis, the translation matrix and rotation matrix of the epipolar-corrected camera relative to the projector, yielding external parameters comprising the translation matrix and the rotation matrix.
As shown in FIG. 3, the camera coordinate system before correction is O_c-x_c y_c z_c and after correction is O_c-x'_c y'_c z'_c, whose X axis coincides with O_pO_c.
The pre-correction translation vector is T_0 = (O_c − O_p)/||O_c − O_p||. Let the corrected x-axis unit vector be ζ = [1, 0, 0]^T; correction means bringing T_0 into agreement with ζ by rotating through an angle β, so that the camera optical axis becomes perpendicular to the baseline direction O_pO_c.
The corrected external parameters (rotation matrix and translation matrix) are computed as follows:
First, compute the rotation angle β between the camera coordinate systems before and after correction:
β = arccos[(T_0 · ζ)/(||T_0|| ||ζ||)]   (3);
where arccos[·] denotes the inverse cosine.
Second, compute the rotation axis ξ from the rotation angle β and the axis direction:
ξ = (T_0 × ζ)/||T_0 × ζ||   (4);
Third, compute the corrected rotation matrix R_rec from the rotation axis via the Rodrigues formula:
R_rec = I + sin β · [ξ]_× + (1 − cos β) · [ξ]_ײ   (5);
where I is the 3×3 identity matrix and [·]_× denotes the antisymmetric (skew-symmetric) matrix.
Finally, compute the corrected translation matrix T_rec:
T_rec = R_rec · T_0.
In this process, epipolar correction yields high-precision baseline data (the line joining the camera and projector optical centers) while ensuring that the corrected projector and camera optical axes are exactly parallel. The parameters of the monocular speckle structured light system are thus calibrated without strict requirements on assembly precision, improving the subsequent object measurement accuracy of the system, avoiding unnecessary and tedious assembly adjustment, and lowering assembly difficulty.
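Formulas (3)–(5) together with T_rec = R_rec · T_0 can be checked numerically. The sketch below uses assumed optical-center values; the function and variable names are ours, not the patent's:

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix [v]_x such that [v]_x @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rectify_rotation(O_c, O_p):
    """Rotation R_rec that brings the normalized baseline T0 onto the x axis zeta."""
    T0 = (O_c - O_p) / np.linalg.norm(O_c - O_p)
    zeta = np.array([1.0, 0.0, 0.0])
    beta = np.arccos(np.clip(T0 @ zeta, -1.0, 1.0))   # formula (3)
    axis = np.cross(T0, zeta)
    axis = axis / np.linalg.norm(axis)                 # formula (4)
    K = skew(axis)
    # Rodrigues formula (5): R = I + sin(b) K + (1 - cos(b)) K^2
    R_rec = np.eye(3) + np.sin(beta) * K + (1.0 - np.cos(beta)) * (K @ K)
    return R_rec, T0

O_p = np.zeros(3)
O_c = np.array([30.0, 5.0, 2.0])      # hypothetical optical centers (mm)
R_rec, T0 = rectify_rotation(O_c, O_p)
T_rec = R_rec @ T0                     # rectified baseline lies along x
```

After rectification the translation vector points exactly along the x axis, which is what makes the subsequent row-aligned matching possible.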
In this embodiment, the speckle projector is controlled to project a speckle coding pattern onto the calibration board under different postures of the board relative to the camera; the optical center position of the projector is determined from the M projection point coordinates of each main coding point of the pattern on the board; epipolar correction of the camera is then performed based on the optical centers of the projector and the camera, yielding the epipolar-corrected external parameters of the camera relative to the projector. The pose relationship between camera and projector is thereby calibrated, the current practice of assembling the system to a preset pose relationship is changed, parameter errors caused by installation operations are eliminated, assembly difficulty is reduced, and the object measurement accuracy of the monocular speckle structured light system is improved.
The embodiments of the present application also provide different implementations of the calibration method for a monocular speckle structured light system.
Referring to FIG. 4, FIG. 4 is a second flowchart of a calibration method for a monocular speckle structured light system provided by an embodiment of the present application. As shown in FIG. 4, the method includes the following steps:
Step 401: under M different postures of the calibration board relative to the camera, control the camera to capture the board in each posture, obtaining M calibration board images.
M is an integer greater than or equal to 3.
Step 402: from the M calibration board images, calibrate the internal parameters of the camera and the external parameters of the camera relative to the board in each posture.
The internal parameters include the optical center position of the camera.
First, system calibration is performed with the calibration board images captured by the camera, yielding the internal parameters of the monocular camera before epipolar correction and its external parameters relative to the board. The internal parameters mainly include the camera focal length, image center and lens distortion coefficients, as well as parameters derivable from them such as the camera optical center position. The external parameters are the pose relationship between each board and the camera (comprising a rotation matrix and a translation matrix).
In a specific implementation, after the monocular speckle structured light system is set up, the speckle projector is first switched off and the camera captures N calibration board images in different postures; the projector is then switched on so that the speckle falls entirely on the board plane, and the camera captures the corresponding N speckle images. For calibrating the camera internal parameters and the external parameters relative to the board in each posture, the board may be a checkerboard, and the classical Zhang Zhengyou checkerboard calibration method may be used to calibrate the internal and external parameters of the camera; no specific limitation is imposed here.
Step 403: control the speckle projector to project a speckle coding pattern onto the calibration board.
The speckle coding pattern contains N main coding points with corner features, N being an integer greater than or equal to 2.
This step is implemented in the same way as step 101 of the foregoing embodiment and is not repeated here.
Step 404: under M different postures of the calibration board relative to the camera, obtain the M projection point coordinates of each main coding point on the board.
This step is implemented in the same way as step 102 of the foregoing embodiment and is not repeated here.
Step 405: determine the optical center position of the speckle projector from the M projection point coordinates of the N main coding points.
This step is implemented in the same way as step 103 of the foregoing embodiment and is not repeated here.
Step 406: obtain, from the optical center positions of the speckle projector and the camera, the epipolar-corrected external parameters of the camera relative to the speckle projector.
This step is implemented in the same way as step 104 of the foregoing embodiment and is not repeated here.
In one implementation, the internal parameters further include the camera focal length and the radial and tangential distortion coefficients of the lens. Correspondingly, after obtaining the epipolar-corrected external parameters of the camera relative to the projector from the two optical center positions, the method further includes:
inputting the camera focal length, the radial distortion coefficients, the tangential distortion coefficients and the resolution of the images captured by the camera into a lens distortion model, to obtain the distortion-corrected camera focal length.
After the epipolar-corrected external parameters of the camera relative to the projector have been computed, to improve the accuracy of the epipolar correction the influence of camera lens distortion on 3D reconstruction must also be considered; the camera internal parameters are corrected to cancel the influence of lens distortion on imaging.
Specifically, the camera focal length needs to be corrected. To eliminate the influence of lens distortion, the corrected focal length f̂ = (f̂_x, f̂_y) is defined by the lens distortion model (formula (6)) as a function of the calibrated focal length (f_x, f_y), the radial distortion coefficients (k_1, k_2), the tangential distortion coefficients (p_1, p_2), and the resolution W×H of the images captured by the camera. Here f̂_x and f̂_y denote the focal length components in the x and y directions, which are generally equal; (f_x, f_y) is the camera focal length already calibrated in the preceding steps, and the image resolution W×H is a known parameter of the camera.
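The patent's exact corrected-focal-length expression (formula (6)) is rendered as an image in the source and is not reproduced here. As background, the coefficients (k_1, k_2) and (p_1, p_2) belong to the standard Brown-Conrady lens model, sketched below on normalized image coordinates; this is an illustrative model, not the patent's formula:

```python
def distort_normalized(x, y, k1, k2, p1, p2):
    """Brown-Conrady model: radial (k1, k2) and tangential (p1, p2) distortion
    applied to a normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity.
xd, yd = distort_normalized(0.3, -0.2, 0.0, 0.0, 0.0, 0.0)
# With a small radial coefficient, points are pushed outward slightly.
xd2, yd2 = distort_normalized(0.1, 0.0, 0.1, 0.0, 0.0, 0.0)
```

A corrected focal length compensates for how this distortion scales the effective field of view at the image border.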
Further, after inputting the camera focal length, the distortion coefficients and the image resolution into the lens distortion model to obtain the distortion-corrected camera focal length, the method further includes:
acquiring the vertex coordinates of the image captured by the camera; obtaining corrected target vertex coordinates by affine transformation, based on the vertex coordinates together with the distortion-corrected focal length and the epipolar-corrected external parameters of the camera relative to the projector; obtaining the corrected camera image center from the target vertex coordinates; and updating the internal parameters according to the distortion-corrected focal length and the corrected image center.
Traditional epipolar correction methods generally take the image center from the calibrated camera intrinsics directly as the image center after epipolar correction. In practice this is inaccurate: the corrected image may as a result suffer distortion and rotational offset, and the visible area of useful image information may shrink. We therefore compute the optimal image center using the already-computed external parameters and the corrected camera focal length.
Let the four vertices of the image captured before correction be {p_1, p_2, p_3, p_4}. First assume the corrected image center to be (0, 0)^T; the corrected vertex positions p̂_i (i = 1, …, 4) are then obtained by affine transformation of the vertices using the corrected focal length and the rectifying rotation (formula (7)). Taking the geometric center of the irregular quadrilateral formed by the four corrected vertices as reference, the corrected image center is obtained as
ĉ = c_0 − (p̂_1 + p̂_2 + p̂_3 + p̂_4)/4,
where c_0 = [W/2, H/2]^T.
This yields the corrected camera image center ĉ = (ĉ_x, ĉ_y).
Correcting the camera focal length and the camera image center thus updates the camera internal parameters.
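One plausible reading of the centering step (our interpretation, since the source renders the formula as an image) is that the corrected principal point offsets the nominal center c_0 = [W/2, H/2]^T by the centroid of the four rectified vertices computed under an assumed (0, 0) center:

```python
import numpy as np

def corrected_center(rectified_vertices, width, height):
    """Place the geometric center of the rectified image quadrilateral at the
    nominal center c0 = [W/2, H/2]; the centroid of the rectified vertices
    (computed with an assumed (0, 0) center) gives the offset."""
    c0 = np.array([width / 2.0, height / 2.0])
    centroid = np.mean(np.asarray(rectified_vertices, dtype=float), axis=0)
    return c0 - centroid

# Symmetric rectified vertices -> centroid at origin -> center stays at c0.
verts = [[-320.0, -240.0], [320.0, -240.0], [320.0, 240.0], [-320.0, 240.0]]
c_hat = corrected_center(verts, 640, 480)
```

For an asymmetric (rotated or sheared) quadrilateral the returned center shifts accordingly, which is the effect the patent attributes to a naively reused principal point.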
Once the corrected camera internal parameters and the external parameters relative to the speckle projector are obtained, the speckle images captured by the camera can be epipolar-corrected so that the row coordinates of every corrected image are aligned. With row alignment achieved, the pixel feature matching search between two images in the subsequent disparity estimation step only needs to proceed along the row direction instead of searching the whole image, which greatly improves matching efficiency.
Captured images can also be processed directly with the epipolar- and distortion-corrected camera intrinsics and extrinsics, moving operations such as distortion correction and error correction up front, so that a separate distortion correction pass is not needed for every single image; this improves image processing speed and efficiency.
The processing flow of 3D reconstruction of the object under test based on the epipolar- and distortion-corrected camera intrinsics and extrinsics is described below.
Step 407: control the speckle projector to project a speckle image onto a reference plane, and control the camera to capture the reference plane, obtaining a reference plane speckle image.
Step 408: correct the reference plane speckle image using the internal parameters and the epipolar-corrected external parameters of the camera relative to the projector, obtaining a corrected reference plane speckle image.
When measuring objects with the calibrated monocular speckle structured light system, a standard plane (the reference plane) must be determined; the projector projects the speckle onto it and the camera captures the speckle image projected on it. This image is saved, and all subsequent measurement images are stereo-matched against it to estimate disparity.
When setting up the reference plane, the perpendicularity between the reference plane and the camera optical axis need not be strictly constrained. The captured images can be epipolar-corrected with the already-calibrated system parameters, compensating the parameter errors introduced by manual installation of the measurement system, reducing the complexity of assembly adjustment, and improving the 3D reconstruction accuracy of the system.
Correcting the reference plane speckle image may include distortion correction, image center correction, rotation correction, translation correction and similar operations, based on the internal parameters and the epipolar-corrected external parameters of the camera relative to the projector.
In this process the reference plane no longer needs repeated adjustment to keep it in the ideal orientation perpendicular to the camera optical axis; its position can be set approximately, and an ideal reference plane speckle pattern is subsequently obtained through epipolar correction.
Step 409: control the speckle projector to project a speckle image onto the surface of the object under test, and control the camera to capture that surface, obtaining an object surface speckle image.
Step 410: correct the object surface speckle image using the internal parameters and the epipolar-corrected external parameters of the camera relative to the projector, obtaining a corrected object surface speckle image.
Correcting the object surface speckle image may include distortion correction, image center correction, rotation correction, translation correction and similar operations based on the internal parameters and the epipolar-corrected external parameters, so that the corrected object surface speckle image is row-aligned with the reference plane speckle image corrected to the same standard, for disparity estimation.
Step 411: perform disparity estimation and 3D topography reconstruction on the corrected object surface speckle image, based on the corrected reference plane speckle image.
Finally, dense disparity estimation between the corrected object surface speckle image and the corrected reference plane speckle image can be achieved by local block matching or by global/semi-global methods such as SGM (semi-global stereo matching), yielding the disparity information of the object under test; depth estimation and 3D reconstruction then follow from the triangulation principle.
For disparity estimation, with the calibration method for a monocular speckle structured light system proposed in this embodiment, the placement of the reference plane need not be strictly constrained; it suffices to keep it roughly perpendicular to the camera optical axis. After the reference plane speckle image is captured, the corrected reference plane speckle image Î_ref is obtained by nonlinear mapping using the corrected camera internal parameters.
The speckle projector projects the speckle pattern onto the object surface, and the camera captures the corresponding image, which is epipolar-corrected to obtain the measurement speckle image Î_obj, row-aligned with the reference plane speckle image.
Î_ref and Î_obj are stereo-matched along the image row direction. Local matching algorithms such as SAD (sum of absolute differences), SSD (sum of squared differences) or NCC (normalized cross correlation) can be used, as can semi-global algorithms such as SGM. The disparity map between the two images is thereby obtained.
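The row-aligned matching can be sketched with a 1-D SAD search along the epipolar (row) direction. The toy signal below is deterministic so the recovered shift is exact; the function name, window size and search range are illustrative choices, not values from the patent:

```python
import numpy as np

def sad_disparity_row(ref_row, meas_row, x, half_win, max_disp):
    """1-D SAD block match along a rectified row: find the shift d minimizing
    sum |ref[x+i] - meas[x-d+i]| over a window of 2*half_win+1 pixels."""
    best_d, best_cost = 0, float("inf")
    patch = ref_row[x - half_win: x + half_win + 1].astype(float)
    for d in range(max_disp + 1):
        xm = x - d
        if xm - half_win < 0:          # candidate window would leave the row
            break
        cand = meas_row[xm - half_win: xm + half_win + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Deterministic toy rows: the measured row is the reference shifted by 3 pixels.
ref_row = np.arange(64)
meas_row = np.roll(ref_row, -3)        # meas[i] = ref[i+3], so d = 3 at x = 20
d_hat = sad_disparity_row(ref_row, meas_row, x=20, half_win=5, max_disp=10)
```

Because rectification guarantees row alignment, this 1-D search replaces a full 2-D correspondence search, which is the efficiency gain the description points out.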
After matched point pairs are obtained by image comparison, the depth information of the object under test can be recovered from the disparity estimate using the corrected camera calibration parameters (the corrected camera focal length and the corrected image center), and 3D reconstruction is achieved according to formula (8):
z = f̂ · B · z_0 / (f̂ · B + z_0 · d),  x = (u − ĉ_x) · z / f̂,  y = (v − ĉ_y) · z / f̂   (8).
Here d is the disparity information, z the depth information, and (x, y, z) the 3D reconstruction information; once z is obtained, x and y can be calculated from z. Three-dimensional reconstruction refers to computing all of (x, y, z).
In the formula, (x, y, z)^T are the three-dimensional coordinates of the spatial point reconstructed from the feature point (u, v)^T on the camera image; (ĉ_x, ĉ_y) is the camera image center after epipolar correction, and f̂ the camera focal length after epipolar correction. (u, v)^T denotes the pixel coordinates of the point on the camera image. B is the distance between the projector optical center and the camera optical center, called the baseline; it equals the length of the corrected translation vector, B = ||T_rec||. z_0 is the distance from the reference plane to the camera; it becomes known at the time the reference plane is chosen and is therefore a known quantity.
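A common concrete form of formula (8), consistent with the variables just listed but stated here as an assumption (the source renders the formula as an image): disparity measured against the reference plane satisfies d = f̂·B·(1/z − 1/z_0), giving z = f̂·B·z_0/(f̂·B + z_0·d), with x and y from the pinhole model:

```python
def reconstruct_point(u, v, d, fx, fy, cx, cy, B, z0):
    """Reference-plane triangulation (assumed form of formula (8)):
    d = f*B*(1/z - 1/z0)  =>  z = f*B*z0 / (f*B + z0*d);
    x, y follow from the pinhole model with center (cx, cy)."""
    z = fx * B * z0 / (fx * B + z0 * d)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Zero disparity means the point lies exactly on the reference plane.
x, y, z = reconstruct_point(u=320.0, v=240.0, d=0.0,
                            fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                            B=40.0, z0=500.0)
# Positive disparity moves the point closer to the camera.
_, _, z_near = reconstruct_point(320.0, 240.0, 12.0,
                                 600.0, 600.0, 320.0, 240.0, 40.0, 500.0)
```

All numeric values (focal length, baseline, reference distance) are hypothetical and serve only to exercise the formula.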
In this embodiment, the speckle projector is controlled to project a speckle coding pattern onto the calibration board under different postures of the board relative to the camera; the projector optical center is determined from the M projection point coordinates of each main coding point on the board; epipolar correction of the camera is then performed based on the optical centers of the projector and the camera, obtaining the epipolar-corrected external parameters of the camera relative to the projector while correcting the camera internal parameters at the same time. The camera intrinsics and extrinsics are thereby calibrated, the current practice of assembling the system to a preset pose relationship is changed, parameter errors caused by installation operations are eliminated, assembly difficulty is reduced, and the object measurement accuracy of the monocular speckle structured light system is improved.
Referring to FIG. 5, FIG. 5 is a structural diagram of a calibration apparatus for a monocular speckle structured light system provided by an embodiment of the present application; for ease of description, only the parts related to the embodiment of the present application are shown.
The calibration apparatus 500 of the monocular speckle structured light system includes:
a speckle projection module 501, configured to control the speckle projector to project a speckle coding pattern onto the calibration board, the pattern containing N main coding points with corner features, N being an integer greater than or equal to 2;
a coordinate acquisition module 502, configured to obtain, under M different postures of the calibration board relative to the camera, the M projection point coordinates of each main coding point on the board, M being an integer greater than or equal to 3;
an optical center position determination module 503, configured to determine the optical center position of the speckle projector from the M projection point coordinates of the N main coding points;
a parameter calibration module 504, configured to obtain, from the optical center positions of the speckle projector and the camera, the epipolar-corrected external parameters of the camera relative to the projector.
The parameter calibration module is specifically configured to:
perform epipolar correction of the camera based on the optical center positions of the projector and the camera, obtaining the rotation angle and rotation axis needed for the camera to reach the epipolar-corrected state, in which the optical axes of camera and projector are parallel to each other;
compute, from the rotation angle and rotation axis, the translation matrix and rotation matrix of the epipolar-corrected camera relative to the projector, obtaining external parameters comprising the translation matrix and the rotation matrix.
The optical center position determination module is specifically configured to:
obtain, from the M projection point coordinates of each main coding point, the spatial projection line corresponding to that point;
determine the intersection of the N spatial projection lines as the optical center position of the speckle projector.
The optical center position determination module is more specifically configured to:
transform, based on the external parameters of the camera relative to the board in each posture, the M projection point coordinates of each main coding point on the board into the camera coordinate system, obtaining the spatial coordinates of each projection point in that system;
fit the spatial coordinates of the projection points in the camera coordinate system, obtaining the spatial projection line corresponding to each main coding point.
The apparatus further includes:
a preliminary calibration module, which, under M different postures of the board relative to the camera, controls the camera to capture the board in each posture, obtaining M calibration board images;
and which calibrates, from the M board images, the internal parameters of the camera and the external parameters of the camera relative to the board in each posture, the internal parameters including the camera optical center position.
The internal parameters further include the camera focal length and the radial and tangential distortion coefficients of the lens; the apparatus further includes:
a first correction module, configured to input the camera focal length, the radial distortion coefficients, the tangential distortion coefficients and the resolution of the images captured by the camera into the lens distortion model, obtaining the distortion-corrected camera focal length.
The apparatus further includes:
a second correction module, configured to:
acquire the vertex coordinates of the image captured by the camera;
obtain corrected target vertex coordinates by affine transformation, based on the vertex coordinates together with the distortion-corrected focal length and the epipolar-corrected external parameters of the camera relative to the projector;
obtain the corrected camera image center from the target vertex coordinates;
update the internal parameters according to the distortion-corrected focal length and the corrected image center.
The apparatus further includes:
a measurement module, configured to:
control the speckle projector to project a speckle image onto a reference plane, and control the camera to capture the reference plane, obtaining a reference plane speckle image;
correct the reference plane speckle image based on the internal parameters and the epipolar-corrected external parameters of the camera relative to the projector, obtaining a corrected reference plane speckle image;
control the speckle projector to project a speckle image onto the surface of the object under test, and control the camera to capture that surface, obtaining an object surface speckle image;
correct the object surface speckle image based on the internal parameters and the epipolar-corrected external parameters, obtaining a corrected object surface speckle image;
perform disparity estimation and 3D topography reconstruction on the corrected object surface speckle image, based on the corrected reference plane speckle image.
The calibration apparatus for a monocular speckle structured light system provided in the embodiments of the present application can realize each process of the above embodiments of the calibration method and achieve the same technical effects; to avoid repetition, they are not described again here.
FIG. 6 is a structural diagram of a terminal provided by an embodiment of the present application. As shown in the figure, the terminal 6 of this embodiment includes: at least one processor 60 (only one is shown in FIG. 6), a memory 61, and a computer program 62 stored in the memory 61 and runnable on the at least one processor 60; when the processor 60 executes the computer program 62, the steps in any of the above method embodiments are implemented.
The terminal 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal 6 may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that FIG. 6 is only an example of the terminal 6 and does not limit it; the terminal may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal may also include input/output devices, network access devices, a bus and the like.
The processor 60 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal 6, such as a hard disk or memory of the terminal 6. The memory 61 may also be an external storage device of the terminal 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card fitted to the terminal 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal 6. The memory 61 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or will be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional units and modules is used as an example; in practice, the above functions may be assigned to different functional units and modules as needed, i.e. the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or of software functional units. In addition, the specific names of the functional units and modules are only for mutual distinction and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules of the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the apparatus/terminal embodiments described above are merely illustrative; the division into modules or units is only a logical functional division, and there may be other division manners in actual implementation — for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be through interfaces, and the indirect couplings or communication connections of devices or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or of software functional units.
If implemented as software functional units and sold or used as independent products, the integrated modules/units may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes of the methods of the above embodiments of the present application may also be completed by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
All or part of the processes of the methods of the above embodiments of the present application may also be implemented by a computer program product; when the computer program product runs on a terminal, the terminal, when executing it, implements the steps of each of the above method embodiments.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features; such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (17)

  1. A calibration method for a monocular speckle structured light system, characterized by comprising:
    controlling a speckle projector to project a speckle coding pattern onto a calibration board, the speckle coding pattern containing N main coding points with corner features, N being an integer greater than or equal to 2;
    under M different postures of the calibration board relative to a camera, obtaining M projection point coordinates of each of the main coding points on the calibration board, M being an integer greater than or equal to 3;
    determining the optical center position of the speckle projector according to the M projection point coordinates of the N main coding points;
    obtaining, according to the optical center position of the speckle projector and the optical center position of the camera, epipolar-corrected external parameters of the camera relative to the speckle projector.
  2. The method according to claim 1, characterized in that obtaining, according to the optical center position of the speckle projector and the optical center position of the camera, the epipolar-corrected external parameters of the camera relative to the speckle projector comprises:
    performing epipolar correction on the camera based on the optical center positions of the speckle projector and the camera, and obtaining the rotation angle and rotation axis required for the camera to reach the epipolar-corrected state, wherein after epipolar correction the optical axis of the camera and the optical axis of the speckle projector are parallel to each other;
    calculating, based on the rotation angle and the rotation axis, the translation matrix and rotation matrix of the epipolar-corrected camera relative to the speckle projector, obtaining external parameters comprising the translation matrix and the rotation matrix.
  3. The method according to claim 1, characterized in that determining the optical center position of the speckle projector according to the M projection point coordinates of the N main coding points comprises:
    obtaining, from the M projection point coordinates of each of the main coding points, the spatial projection line corresponding to each main coding point;
    determining the intersection of the N spatial projection lines as the optical center position of the speckle projector.
  4. The method according to claim 3, characterized in that obtaining, from the M projection point coordinates of each of the main coding points, the spatial projection line corresponding to each main coding point comprises:
    transforming, based on the external parameters of the camera relative to the calibration board in each posture, the M projection point coordinates of each main coding point on the calibration board into the camera coordinate system, obtaining the spatial coordinates of each projection point in the camera coordinate system;
    fitting the spatial coordinates of the projection points in the camera coordinate system, obtaining the spatial projection line corresponding to each main coding point.
  5. The method according to any one of claims 1 to 4, characterized in that, before controlling the speckle projector to project the speckle coding pattern onto the calibration board, the method further comprises:
    under M different postures of the calibration board relative to the camera, controlling the camera to capture images of the calibration board in each posture, obtaining M calibration board images;
    calibrating, from the M calibration board images, the internal parameters of the camera and the external parameters of the camera relative to the calibration board in each posture, the internal parameters including the optical center position of the camera.
  6. The method according to claim 5, characterized in that the internal parameters further include the camera focal length and the radial and tangential distortion coefficients of the lens; after obtaining the epipolar-corrected external parameters of the camera relative to the speckle projector, the method further comprises:
    inputting the camera focal length, the radial distortion coefficients, the tangential distortion coefficients and the resolution of images captured by the camera into a lens distortion model, obtaining the distortion-corrected camera focal length.
  7. The method according to claim 6, characterized in that, after obtaining the distortion-corrected camera focal length, the method further comprises:
    acquiring the vertex coordinates of the image captured by the camera;
    obtaining corrected target vertex coordinates by affine transformation, based on the vertex coordinates together with the distortion-corrected camera focal length and the epipolar-corrected external parameters of the camera relative to the speckle projector;
    obtaining the corrected camera image center from the target vertex coordinates;
    updating the internal parameters according to the distortion-corrected camera focal length and the corrected camera image center.
  8. The method according to claim 5, characterized in that, after obtaining the epipolar-corrected external parameters of the camera relative to the speckle projector, the method further comprises:
    controlling the speckle projector to project a speckle image onto a reference plane, and controlling the camera to capture the reference plane, obtaining a reference plane speckle image;
    correcting the reference plane speckle image based on the internal parameters and the epipolar-corrected external parameters of the camera relative to the speckle projector, obtaining a corrected reference plane speckle image;
    controlling the speckle projector to project a speckle image onto the surface of an object under test, and controlling the camera to capture that surface, obtaining an object surface speckle image;
    correcting the object surface speckle image based on the internal parameters and the epipolar-corrected external parameters, obtaining a corrected object surface speckle image;
    performing disparity estimation and three-dimensional topography reconstruction on the corrected object surface speckle image, based on the corrected reference plane speckle image.
  9. A calibration apparatus for a monocular speckle structured light system, characterized by comprising:
    a speckle projection module, configured to control a speckle projector to project a speckle coding pattern onto a calibration board, the speckle coding pattern containing N main coding points with corner features, N being an integer greater than or equal to 2;
    a coordinate acquisition module, configured to obtain, under M different postures of the calibration board relative to a camera, M projection point coordinates of each of the main coding points on the calibration board, M being an integer greater than or equal to 3;
    an optical center position determination module, configured to determine the optical center position of the speckle projector according to the M projection point coordinates of the N main coding points;
    a parameter calibration module, configured to obtain, according to the optical center position of the speckle projector and the optical center position of the camera, epipolar-corrected external parameters of the camera relative to the speckle projector.
  10. The apparatus according to claim 9, characterized in that the parameter calibration module is specifically configured to:
    perform epipolar correction on the camera based on the optical center positions of the speckle projector and the camera, and obtain the rotation angle and rotation axis required for the camera to reach the epipolar-corrected state, wherein after epipolar correction the optical axis of the camera and the optical axis of the speckle projector are parallel to each other;
    calculate, based on the rotation angle and the rotation axis, the translation matrix and rotation matrix of the epipolar-corrected camera relative to the speckle projector, obtaining external parameters comprising the translation matrix and the rotation matrix.
  11. The apparatus according to claim 9, characterized in that the optical center position determination module is specifically configured to:
    obtain, from the M projection point coordinates of each of the main coding points, the spatial projection line corresponding to each main coding point;
    determine the intersection of the N spatial projection lines as the optical center position of the speckle projector.
  12. The apparatus according to claim 11, characterized in that the optical center position determination module is more specifically configured to:
    transform, based on the external parameters of the camera relative to the calibration board in each posture, the M projection point coordinates of each main coding point on the calibration board into the camera coordinate system, obtaining the spatial coordinates of each projection point in the camera coordinate system;
    fit the spatial coordinates of the projection points in the camera coordinate system, obtaining the spatial projection line corresponding to each main coding point.
  13. The apparatus according to any one of claims 9 to 12, characterized in that the apparatus further comprises:
    a preliminary calibration module, configured to control the camera, under M different postures of the calibration board relative to the camera, to capture images of the calibration board in each posture, obtaining M calibration board images;
    and to calibrate, from the M calibration board images, the internal parameters of the camera and the external parameters of the camera relative to the calibration board in each posture, the internal parameters including the optical center position of the camera.
  14. The apparatus according to claim 13, characterized in that the internal parameters further include the camera focal length and the radial and tangential distortion coefficients of the lens; the apparatus further comprises:
    a first correction module, configured to input the camera focal length, the radial distortion coefficients, the tangential distortion coefficients and the resolution of images captured by the camera into the lens distortion model, obtaining the distortion-corrected camera focal length.
  15. The apparatus according to claim 14, characterized in that the apparatus further comprises:
    a second correction module, configured to acquire the vertex coordinates of the image captured by the camera; obtain corrected target vertex coordinates by affine transformation, based on the vertex coordinates together with the distortion-corrected camera focal length and the epipolar-corrected external parameters of the camera relative to the speckle projector; obtain the corrected camera image center from the target vertex coordinates; and update the internal parameters according to the distortion-corrected camera focal length and the corrected camera image center.
  16. The apparatus according to claim 13, characterized in that the apparatus further comprises:
    a measurement module, configured to control the speckle projector to project a speckle image onto a reference plane, and control the camera to capture the reference plane, obtaining a reference plane speckle image; correct the reference plane speckle image based on the internal parameters and the epipolar-corrected external parameters of the camera relative to the speckle projector, obtaining a corrected reference plane speckle image; control the speckle projector to project a speckle image onto the surface of an object under test, and control the camera to capture that surface, obtaining an object surface speckle image; correct the object surface speckle image based on the internal parameters and the epipolar-corrected external parameters, obtaining a corrected object surface speckle image; and perform disparity estimation and three-dimensional topography reconstruction on the corrected object surface speckle image, based on the corrected reference plane speckle image.
  17. A terminal, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that, when the processor executes the computer program, the steps of the method according to any one of claims 1 to 8 are implemented.
PCT/CN2021/111313 2021-08-06 2021-08-06 单目散斑结构光系统的标定方法、装置及终端 WO2023010565A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/111313 WO2023010565A1 (zh) 2021-08-06 2021-08-06 单目散斑结构光系统的标定方法、装置及终端

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/111313 WO2023010565A1 (zh) 2021-08-06 2021-08-06 单目散斑结构光系统的标定方法、装置及终端

Publications (1)

Publication Number Publication Date
WO2023010565A1 true WO2023010565A1 (zh) 2023-02-09

Family

ID=85154772

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/111313 WO2023010565A1 (zh) 2021-08-06 2021-08-06 单目散斑结构光系统的标定方法、装置及终端

Country Status (1)

Country Link
WO (1) WO2023010565A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116399874A (zh) * 2023-06-08 2023-07-07 华东交通大学 剪切散斑干涉无损检测缺陷尺寸的方法和程序产品
CN117369197A (zh) * 2023-12-06 2024-01-09 深圳市安思疆科技有限公司 3d结构光模组、成像系统及获得目标物体深度图的方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170103509A1 (en) * 2015-10-08 2017-04-13 Christie Digital Systems Usa, Inc. System and method for online projector-camera calibration from one or more images
US20200186768A1 (en) * 2017-08-11 2020-06-11 Hilti Aktiengesellschaft System and Method for Recalibrating a Projector System
CN111540004A (zh) * 2020-04-16 2020-08-14 北京清微智能科技有限公司 单相机极线校正方法及装置
CN112669362A (zh) * 2021-01-12 2021-04-16 四川深瑞视科技有限公司 基于散斑的深度信息获取方法、装置及系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116399874A (zh) * 2023-06-08 2023-07-07 华东交通大学 Method and program product for non-destructive detection of defect size by shearing speckle interferometry
CN116399874B (zh) * 2023-06-08 2023-08-22 华东交通大学 Method and program product for non-destructive detection of defect size by shearing speckle interferometry
CN117369197A (zh) * 2023-12-06 2024-01-09 深圳市安思疆科技有限公司 3D structured light module, imaging system, and method for obtaining a depth map of a target object
CN117369197B (zh) * 2023-12-06 2024-05-07 深圳市安思疆科技有限公司 3D structured light module, imaging system, and method for obtaining a depth map of a target object

Similar Documents

Publication Publication Date Title
WO2020207191A1 (zh) Method, apparatus and terminal device for determining the occluded region of a virtual object
US20210233275A1 Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium
WO2020035002A1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
TWI607412B Multi-dimensional size measurement system and method thereof
WO2021196548A1 (zh) Distance determination method, apparatus and system
WO2023000595A1 (zh) Curved-screen-based phase deflectometry measurement method, system and terminal
WO2023010565A1 (zh) Calibration method, device and terminal for monocular speckle structured light system
US20150348266A1 Techniques for Rapid Stereo Reconstruction from Images
CN113793387A (zh) Calibration method, device and terminal for monocular speckle structured light system
US20190096092A1 Method and device for calibration
US8487927B2 Validating user generated three-dimensional models
WO2022021680A1 (zh) Three-dimensional object reconstruction method fusing structured light and photometry, and terminal device
WO2022267285A1 (zh) Method and apparatus for determining robot pose, robot, and storage medium
CN106570907B (zh) Camera calibration method and device
WO2023201578A1 (zh) Extrinsic parameter calibration method and device for monocular laser speckle projection system
CN112288826A (zh) Calibration method, device and terminal for binocular camera
CN116309880A (zh) Object pose determination method, apparatus, device and medium based on three-dimensional reconstruction
Jiang et al. An accurate and flexible technique for camera calibration
WO2024021654A1 (zh) Error correction method and device for line structured light 3D camera
WO2023160445A1 (zh) Simultaneous localization and mapping method and apparatus, electronic device, and readable storage medium
CN109242941B (zh) Synthesizing a three-dimensional object as part of a two-dimensional digital image using visual guidance
TWM594322U Camera configuration system for omnidirectional stereo vision
CN110675445B (zh) Visual positioning method, device and storage medium
Yan et al. A decoupled calibration method for camera intrinsic parameters and distortion coefficients
JP5464671B2 (ja) Image processing apparatus, image processing method, and image processing program

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21952438

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE