WO2024031809A1 - Calibration method, calibration system, depth camera and readable storage medium - Google Patents


Info

Publication number
WO2024031809A1
WO2024031809A1 (PCT/CN2022/123159)
Authority
WO
WIPO (PCT)
Prior art keywords
depth camera
calibration
distance
intersection point
image plane
Application number
PCT/CN2022/123159
Other languages
English (en)
French (fr)
Inventor
张凌鹏
俞涛
王飞
Original Assignee
奥比中光科技集团股份有限公司
深圳奥芯微视科技有限公司
Application filed by 奥比中光科技集团股份有限公司 and 深圳奥芯微视科技有限公司
Publication of WO2024031809A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85: Stereo camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the technical field of depth cameras, and in particular to a calibration method, a calibration system, a depth camera and a readable storage medium.
  • The measurement results and measurement accuracy of a depth camera are affected by many factors, including the camera's internal components and the external environment. Therefore, to obtain higher-precision depth information, the depth camera must undergo depth value calibration.
  • The depth camera calibration method commonly used in the industry is: obtain the actual distance between the depth camera and a calibration position; move the calibration plate to that position; measure the distance between the camera and the calibration plate with the depth camera to obtain the measured distance; and compare the measured distance with the actual distance to obtain calibration parameters, completing the calibration of the depth camera.
  • However, each component in the depth camera may carry a tilt error introduced by improper assembly. The common calibration method does not take this tilt error into account, so the measurement results of the depth camera are inaccurate and its measurement accuracy remains low.
  • This application provides a calibration method, calibration system, depth camera and readable storage medium, aiming to solve the prior-art problem of inaccurate measurement results and low measurement accuracy caused by failing to consider the tilt error of the depth camera.
  • the first aspect of the embodiment of the present application provides a calibration method, which is applied to a calibration system.
  • the calibration system includes a depth camera, a calibration board, and a control and processor.
  • The calibration method includes: controlling the transmitting module of the depth camera to emit the same light beam toward a first calibration plate and a second calibration plate that are parallel to each other and at different distances, and activating the acquisition module of the depth camera to receive the echo signals reflected by each calibration plate, wherein the light beam has a first intersection point with the first calibration plate and a second intersection point with the second calibration plate; obtaining the camera coordinates of each intersection point and calculating, with the depth camera's preset internal and external parameters, the first distance between the intersection points in the ideal state, where the ideal state indicates that the image plane of the depth camera is parallel to the calibration plates; calculating the second distance between the first intersection point and the second intersection point based on the echo signals and the measurement principle of the depth camera; and determining, from the first distance and the second distance, whether the depth camera requires calibration.
  • The second aspect of the embodiment of the present application provides a calibration system, including a guide rail, a calibration plate slidingly connected to the guide rail, a base, a depth camera, and a control and processor, wherein the depth camera is placed on the base, and the base and the calibration plate are respectively arranged at opposite ends of the guide rail.
  • The control and processor is used to control the calibration plate to slide along the guide rail, control the transmitting module of the depth camera to emit light signals toward the calibration plate, activate the acquisition module to receive the echo signals reflected by the calibration plate at different distances, and execute the calibration method described in the first aspect of the embodiment of the present application according to the received echo signals to complete the calibration of the depth camera.
  • The third aspect of the embodiment of the present application provides a depth camera, including a projection module, an acquisition module, a processing module and a storage module, wherein: the storage module stores the calibration parameters obtained by executing the calibration method described in the first aspect of the embodiment of the present application; the projection module projects light signals toward a target area; the acquisition module receives the echo signals reflected by the target area; and the processing module generates a depth image of the target area from the reflected echo signals and corrects the depth image with the calibration parameters in the storage module to obtain a corrected depth image.
  • The fourth aspect of the embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed, carry out the calibration method described in the first aspect of the embodiment of the present application.
  • The beneficial effect of this application is that the depth camera emits the same light beam toward the first calibration plate and the second calibration plate, which are parallel to each other and at different distances, and receives the echo signals reflected by the plates; the same light beam has a first intersection point with the first calibration plate and a second intersection point with the second calibration plate.
  • Based on the camera coordinates of the first and second intersection points and the preset internal and external parameters of the depth camera, this application first calculates the first distance between the two intersection points in the ideal state (in which the image plane of the depth camera is parallel to the calibration plates), which serves as a reference. It then calculates, from the measurement principle of the depth camera and the echo signals, the second distance between the intersection points (the actual distance). Finally, by comparing the first distance with the second distance, it judges whether the image plane of the depth camera is parallel to the calibration plates and, based on the result, decides whether the depth camera needs calibration: when the image plane is parallel to the plates, the camera is in the ideal state and no calibration correction is needed; when the image plane is tilted relative to the plates, the camera is in a non-ideal state and must be calibrated.
  • When calibrating the depth camera, this application fully accounts for the tilt error of the depth camera (that is, the non-ideal state). When a tilt error exists, the depth camera is further calibrated and corrected, largely eliminating the adverse effects of the tilt error and thereby effectively improving the measurement accuracy of the depth camera and the accuracy of its results.
  • Figure 1 is a schematic flow chart of the calibration method provided by the embodiment of the present application.
  • Figure 2 is an optical path diagram during the calibration process of the calibration method provided by the embodiment of the present application.
  • Figure 3 is a diagram of the correction effect of the calibration method provided by the embodiment of the present application at 1000 mm.
  • Figure 4 is a diagram of the correction effect of the calibration method provided by the embodiment of the present application at 2000 mm.
  • Figure 5 is a module block diagram of a depth camera provided by an embodiment of the present application.
  • Figure 6 is a module block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • FIG 1 is a schematic flow chart of a calibration method provided by an embodiment of the present application.
  • This calibration method is applied to a depth camera.
  • The application environment of this calibration method is shown in Figure 2: the calibration plates are mounted on a guide rail (not shown in the figure) in a sliding fit with it, and the surfaces of the first calibration plate B1 and the second calibration plate B2 are perpendicular to the extension direction of the guide rail.
  • the same light beam G emitted by the depth camera has a first intersection point A1 with the first calibration plate B1 and a second intersection point A2 with the second calibration plate B2.
  • The first calibration plate B1 is parallel to the second calibration plate B2, and the two plates are at different distances from the depth camera. The number of calibration plates is not limited to two (the first calibration plate B1 and the second calibration plate B2): there may be a single plate (the first calibration plate B1 slides along the guide rail to take the position of the second calibration plate B2), or three or more plates. What is described here is only an example.
  • In the ideal state, all points on the same beam G correspond to the same pixel on the image plane of the depth camera (M1 or M2). That is, after the same beam G is reflected by the first calibration plate B1 and the second calibration plate B2, it illuminates the pixel point P on M1; the first intersection point A1 and the second intersection point A2 both correspond to the pixel P on M1.
  • In the non-ideal state, after being reflected by the first calibration plate B1 and the second calibration plate B2, the same beam G illuminates the pixel point P′ on M2; the first intersection point A1 and the second intersection point A2 both correspond to P′ on M2. Because the non-ideal state carries a tilt error relative to the ideal state, the pixel P on M1 (ideal state) differs from the pixel P′ on M2 (non-ideal state). However, the actual distance between the first intersection point A1 and the second intersection point A2 in external space is independent of whether the image plane of the depth camera is tilted: it is always the Euclidean distance between A1 and A2.
  • the calibration method provided by the embodiment of the present application includes the following steps 101 to 105.
  • Step 101 Control the transmitting module of the depth camera to emit the same light beam toward the first calibration plate and the second calibration plate, which are parallel to each other and at different distances, and activate the acquisition module of the depth camera to receive the echo signals reflected by each calibration plate.
  • When calibrating the depth camera, it is necessary to control the depth camera to emit the same light beam G toward the first calibration plate B1 and the second calibration plate B2, which are parallel and at different distances, and to receive the echo signals reflected by each plate. The intersection of the beam G with the first calibration plate B1 is the first intersection point A1, and its intersection with the second calibration plate B2 is the second intersection point A2.
  • Here "the same light beam G" means that the light emitted by the transmitting module of the depth camera toward the first calibration plate B1 and the second calibration plate B2 travels in a single direction; it may also comprise multiple beams in different directions, in which case the corresponding intersection points are obtained per beam direction and the calculation proceeds accordingly. This is not limited here.
  • Step 102 Obtain the camera coordinates of each intersection point, and calculate the first distance between the first intersection point and the second intersection point in the ideal state based on those coordinates and the depth camera's preset internal and external parameters.
  • Specifically, the camera coordinates of the first intersection point A1 and the second intersection point A2 are obtained from their world coordinates and the depth camera's preset internal and external parameters.
  • Step 103 Calculate the second distance between the first intersection point and the second intersection point based on the calculation principle of the depth camera and the echo signal.
  • Specifically, the distance between the first intersection point A1 and the second intersection point A2 is obtained as the second distance based on the measurement principle of the depth camera; that is, it is calculated from the echo signals reflected by each calibration plate and received by the depth camera.
  • Step 104 Compare the first distance with the second distance, and determine whether the image plane of the depth camera is parallel to the calibration plates.
  • Determining whether the image plane of the depth camera is parallel to the first calibration plate B1 and the second calibration plate B2 is equivalent to determining whether the depth camera is currently in the ideal state or a non-ideal state.
  • If the first distance equals the second distance, the image plane of the depth camera is parallel to the first calibration plate B1 and the second calibration plate B2, i.e. the depth camera is currently in the ideal state. If the first distance differs from the second distance, the image plane is tilted relative to the two calibration plates, i.e. the depth camera is currently in a non-ideal state (M2 in Figure 2 represents the image plane of the depth camera in this case); since the image plane is tilted relative to the first calibration plate B1 and the second calibration plate B2, the depth camera has a tilt error.
  • Step 105 Select whether the depth camera needs to be calibrated based on the judgment result.
  • Calibration correction refers to correcting the tilt error of the depth camera to eliminate the adverse effects caused by the tilt error.
  • If the image plane of the depth camera is parallel to the first calibration plate B1 and the second calibration plate B2, that is, the depth camera is currently in the ideal state, the method outputs that no calibration correction is required; if the image plane is tilted relative to the plates, that is, the depth camera is currently in a non-ideal state, the depth camera is calibrated and corrected to eliminate the adverse effects of the tilt error.
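The decision in steps 104 and 105 can be sketched as a simple comparison. This is an illustrative sketch, not the patent's implementation; in particular the tolerance `tol` is an assumption, since real measurements are noisy and exact equality of the two distances is not practical:

```python
def needs_calibration(first_distance, second_distance, tol=1e-6):
    """Return True when the ideal-state first distance and the measured
    second distance disagree, i.e. the image plane is tilted relative to
    the calibration plates and calibration correction is required.
    `tol` is an illustrative assumption, not a value from the patent."""
    return abs(first_distance - second_distance) > tol
```

When the two distances agree, the camera is treated as being in the ideal state and no correction is applied.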
  • In summary, when calibrating the depth camera, the first distance and the second distance are used to judge whether the image plane of the depth camera is parallel to the first calibration plate B1 and the second calibration plate B2, and the judgment result determines whether calibration correction is performed. For example, when the image plane is parallel to the two plates, the depth camera is currently in the ideal state and no calibration correction is needed; when the image plane is tilted relative to the two plates, the depth camera is currently in a non-ideal state and is calibrated and corrected.
  • When calibrating the depth camera, the embodiment of the present application fully accounts for the tilt error of the depth camera (that is, the non-ideal state). When a tilt error exists, the depth camera is further calibrated and corrected, largely eliminating the adverse effects of the tilt error and thereby effectively improving the measurement accuracy of the depth camera and the accuracy of its results.
  • the "first distance" in step 102 is calculated based on the camera coordinate system, which may specifically include: calculating the first distance according to the first formula; wherein the first formula is expressed as:
  • D(P) represents the first distance
  • u and v represent the abscissa and ordinate of the pixel point P corresponding to the first intersection point A1 and the second intersection point A2 on the image plane M1 of the depth camera; (u, v) can be calculated from the world coordinates of the intersection points and the preset external parameters of the depth camera
  • f represents the focal length of the depth camera
  • d1 represents the distance between the first calibration plate B1 and the depth camera
  • d2 represents the distance between the second calibration plate B2 and the depth camera
  • cx and cy represent the center abscissa and ordinate of the image plane M1 of the depth camera in the ideal state
  • This implementation corresponds to the ideal state, in which the first distance is easy to obtain: first slide the first calibration plate B1 and the second calibration plate B2 along the guide rail so that their distances from the depth camera are d1 and d2 respectively; then, based on the principle that all points on the same beam G share the same pixel on the image plane of the depth camera (M1 or M2), use the internal and external parameters of the depth camera to compute the pixel coordinates (u, v) of the pixel point P on M1; finally, calculate the first distance from the pixel coordinates (u, v) of P on M1.
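Under the geometry just described, a plausible reading of the first formula is that A1 and A2 are the back-projections of the pixel (u, v) at depths d1 and d2, and the first distance is their Euclidean distance. The sketch below assumes a single focal length f in pixel units; it illustrates the described geometry and is not the patent's literal formula:

```python
import math

def first_distance(u, v, cx, cy, f, d1, d2):
    """Ideal-state distance between the two intersection points.

    Both A1 and A2 lie on the ray through pixel (u, v); back-project
    the pixel at depths d1 and d2 and take the Euclidean distance.
    """
    a1 = ((u - cx) * d1 / f, (v - cy) * d1 / f, d1)  # camera coords of A1
    a2 = ((u - cx) * d2 / f, (v - cy) * d2 / f, d2)  # camera coords of A2
    return math.dist(a1, a2)
```

For the principal point (u = cx, v = cy) this reduces to d2 - d1, as expected for a beam along the optical axis.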
  • Since depth cameras come in different types (i.e., i-TOF cameras and d-TOF cameras), the type of camera must be fully considered when calculating the second distance based on the measurement principle of the depth camera, because the second distance is calculated differently for each type.
  • For an i-TOF camera, the calculation of the "second distance" in step 103 may include: obtaining the phase difference between the first intersection point and the second intersection point, and calculating the second distance from the obtained phase difference.
  • Specifically, the phase difference between the first intersection point A1 and the second intersection point A2 is substituted into the second formula to calculate the second distance; where:
  • D′(P′) represents the second distance
  • c represents the speed of light
  • π represents the ratio of a circle's circumference to its diameter
  • f_m represents the modulation frequency of the i-TOF camera
  • For different i-TOF cameras, the process for obtaining the phase difference between the first intersection point A1 and the second intersection point A2 may also differ. In general, the phases corresponding to the first intersection point A1 and the second intersection point A2 are calculated from the echo signals, reflected via A1 and A2, that the corresponding pixel of the depth camera receives; the phase difference between A1 and A2 is then obtained from these phases, and the second distance is calculated from it according to the above-mentioned second formula.
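The i-TOF relation described above (the second formula involving c, π and the modulation frequency f_m) can be sketched as follows; the function name and argument layout are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def second_distance_itof(phase_a1, phase_a2, f_m):
    """Distance between the two intersection points from the phase
    difference of their echo signals: D' = c * dphi / (4 * pi * f_m).
    The factor 4*pi accounts for the round trip of the light."""
    dphi = phase_a2 - phase_a1
    return C * dphi / (4.0 * math.pi * f_m)
```

At a modulation frequency of 100 MHz, a phase difference of π radians corresponds to roughly 0.75 m.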
  • For a d-TOF camera, the calculation of the "second distance" in step 103 may include: obtaining the time difference between the first flight time and the second flight time, and calculating the second distance from the obtained time difference. The first flight time is the time elapsed from when the depth camera emits the same light beam G until it receives the beam (i.e., the echo signal) reflected back via the first intersection point A1; the second flight time is the time elapsed from when the depth camera emits the same light beam G until it receives the beam reflected back via the second intersection point A2.
  • the time difference between the first flight time and the second flight time is substituted into the third formula to calculate the second distance; where the third formula is expressed as:
  • D'(P') represents the second distance
  • c represents the speed of light
  • t 2 represents the second flight time
  • t 1 represents the first flight time
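The third formula above reduces to halving the flight-time difference multiplied by the speed of light. A minimal sketch (names are illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def second_distance_dtof(t1, t2):
    """Distance between the intersection points from the two round-trip
    flight times: D' = c * (t2 - t1) / 2. The division by 2 accounts
    for the light travelling out and back."""
    return C * (t2 - t1) / 2.0
```

For example, a flight-time difference of 1 ns corresponds to about 0.15 m between the two intersection points.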
  • If the first distance is the same as the second distance, that is, D(P) is the same as D′(P′), the depth camera is in the ideal state; otherwise calibration correction is required.
  • Calibrating the depth camera may include: obtaining the rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state, and performing calibration correction on the depth camera based on the obtained rotation matrix.
  • z represents the distance between the guide rail and the first calibration plate B1/second calibration plate B2
  • K represents the internal parameter matrix of the depth camera
  • q represents the pixel coordinate corresponding to the first intersection point A1 and the second intersection point A2 on the image plane M1 of the depth camera (ideal state)
  • q′ represents the pixel coordinate corresponding to the first intersection point A1 and the second intersection point A2 on the image plane M2 of the depth camera (non-ideal state)
  • R represents the rotation matrix between the image plane M2 of the depth camera in the non-ideal state and the image plane M1 of the depth camera in the ideal state
  • In some embodiments, "obtaining the rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state" may include: obtaining, from the pose expression of the depth camera, the initial rotation matrix between the image plane in the non-ideal state and the image plane in the ideal state; the pose expression is as follows:
  • the depth camera includes n pixels, where n is a positive integer greater than 1; D(P,i) represents the first distance obtained by the i-th pixel in the depth camera; D′(P′,i) represents the second distance obtained by the i-th pixel; e_i represents the error between the first distance and the second distance obtained by the i-th pixel; R represents the rotation matrix; T represents the translation matrix of the depth camera; and J represents the Jacobian matrix operation. The R that minimizes the total error is the initial rotation matrix.
  • Because the rotation matrix between the image plane M2 of the depth camera in the non-ideal state and the image plane M1 in the ideal state is unknown, the correspondence between the pixel point P′ on M2 and the pixel point P on M1 is also unknown, so the position of P′ can only be found from the current estimate of the depth camera's extrinsic parameters. If that estimate is poor, the distance between the pixel point P′ on M2 and the pixel point P on M1 is large; to reduce this difference, the extrinsic parameters of the depth camera are optimized to find the P′ that best matches P. The R in the pose expression obtained in this way is the initial rotation matrix.
  • In some embodiments, "obtaining the rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state" may further include: calculating, from the initial rotation matrix, the pixel coordinates corresponding to the first intersection point and the second intersection point in the depth camera; calculating, from those pixel coordinates, the third distance between the first intersection point and the second intersection point; differentiating the difference between the third distance and the first distance and expressing the derivative as a Jacobian matrix; and processing the Jacobian matrix by computing increments and solving iteratively with a nonlinear optimization algorithm to obtain the optimal rotation matrix.
  • Subsequently, the depth camera can be calibrated and corrected with the obtained optimal rotation matrix to eliminate the adverse effects caused by the tilt error. It can be understood that, since the optimal rotation matrix eliminates these effects better than the initial rotation matrix, once the optimal rotation matrix is obtained the depth camera is no longer calibrated with the initial rotation matrix but with the optimal rotation matrix.
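The iterative optimization described above (incremental updates to a rotation, solved with a nonlinear optimizer) can be sketched generically. This example is an illustrative stand-in, not the patent's implementation: it uses NumPy, an axis-angle (Lie algebra) parameterization via Rodrigues' formula, a numerical Jacobian in place of the analytic one, and a residual that aligns rotated 3D points with their ideal counterparts:

```python
import numpy as np

def rot_from_axis_angle(phi):
    """Rodrigues' formula: so(3) vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def gauss_newton_rotation(points_ideal, points_measured, iters=20):
    """Find R minimizing sum ||R @ p_measured - p_ideal||^2 by
    Gauss-Newton on the axis-angle parameters, with a numerically
    estimated Jacobian of the residual vector."""
    phi = np.zeros(3)

    def residuals(p):
        R = rot_from_axis_angle(p)
        return (points_measured @ R.T - points_ideal).ravel()

    for _ in range(iters):
        r = residuals(phi)
        J = np.empty((r.size, 3))
        eps = 1e-7
        for j in range(3):
            step = np.zeros(3)
            step[j] = eps
            J[:, j] = (residuals(phi + step) - r) / eps  # forward difference
        # Gauss-Newton increment: least-squares solve of J @ delta = -r.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        phi = phi + delta
        if np.linalg.norm(delta) < 1e-12:
            break
    return rot_from_axis_angle(phi)
```

The Gauss-Newton step (solve the linearized least-squares problem, apply the increment, repeat) mirrors the "calculating increments and iteratively solving" procedure described above.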
  • the initial rotation matrix solved through the previous specific implementation can associate p, q, and s, expressed as follows:
  • z represents the distance between the guide rail and the first calibration plate B1/second calibration plate B2
  • K represents the internal parameter matrix of the depth camera.
  • Note that s is not the same as the pixel coordinate (u′, v′) of the pixel point P′ on the image plane M2 mentioned above: s is calculated through the initial rotation matrix of the previous specific implementation, i.e. it is the rotated pixel coordinate. The third distance between the first intersection point A1 and the second intersection point A2 is then calculated through the pixel coordinate s. This third distance corresponds to the second distance above, but whereas the second distance is calculated from the echo signal received by the corresponding pixel of the acquisition module, the third distance is calculated through the initial rotation matrix; that is, the third distance obtained here is in effect the second distance after rotation.
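The relation between the ideal-plane pixel q and the rotated pixel coordinate s can be sketched as a pure-rotation homography, s ~ K R K^(-1) q in homogeneous coordinates. This reading is an assumption consistent with the association of p, q and s above, not the patent's literal formula:

```python
import numpy as np

def rotated_pixel(q_uv, K, R):
    """Map a pixel q on the ideal image plane M1 to its rotated
    coordinate s on the tilted plane M2 via s ~ K @ R @ inv(K) @ q,
    then dehomogenize to get a 2D pixel coordinate."""
    q = np.array([q_uv[0], q_uv[1], 1.0])  # homogeneous pixel
    s = K @ R @ np.linalg.inv(K) @ q
    return s[:2] / s[2]
```

With R equal to the identity the pixel is unchanged, matching the ideal (untilted) state.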
  • the Lie algebra form of the initial rotation matrix R and the disturbance (perturbation) term appear in the expression above
  • f_x and f_y are the pixel sizes
  • the camera coordinate of the pixel point P′ is q(X, Y)
  • z represents the distance between the guide rail and the first calibration plate B1/second calibration plate B2
  • Finally, the above Jacobian matrix can be processed by computing increments and solving iteratively with a nonlinear optimization algorithm (such as the Gauss-Newton algorithm) to obtain the optimal rotation matrix, which is then used to calibrate and correct the depth camera.
  • After the real distance between the first intersection point A1 and the second intersection point A2 is obtained through the optimal rotation matrix, the subsequent calibration can follow existing i-TOF camera calibration procedures, such as wiggling and FPPN error calibration.
  • This embodiment provides a technique for calculating the real distance between the first intersection point A1 and the second intersection point A2 with a known rotation matrix (i.e., the initial rotation matrix or the optimal rotation matrix): in the ideal state, the camera coordinates of the landmark points (i.e., the first intersection point A1 and the second intersection point A2) on the depth camera image plane are known; from the rotation matrix, the rotated camera coordinates of the corresponding pixels on the image plane can be obtained, and a plane is then fitted to the rotated marker plate (i.e., the first calibration plate B1 or the second calibration plate B2). Since the imaging of a marker point on the depth camera image plane satisfies a perspective transformation, the position information of the image plane M2 of the depth camera in the non-ideal state can be obtained.
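The plane-fitting step mentioned above can be sketched with a least-squares fit. Rotating the known landmark camera coordinates and fitting z = a*x + b*y + c to the rotated plate points is an illustrative stand-in for the procedure described, with assumed function names:

```python
import numpy as np

def rotate_points(points, R):
    """Apply rotation matrix R to each 3D point (rows of `points`)."""
    return points @ R.T

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) array of
    rotated marker-plate points; returns the coefficients (a, b, c)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs
```

The fitted plane describes the rotated calibration plate; an untilted plate at constant depth would fit with a = b = 0.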
  • Embodiments of the present application also provide a calibration system, which includes a guide rail, a calibration plate slidingly connected to the guide rail, a base, a depth camera, and a control and processor, wherein the depth camera is placed on the base, and the base and the calibration plate are respectively arranged at opposite ends of the guide rail.
  • the control and processor can control the calibration plate to slide on the guide rail, control the depth camera to emit light signals (i.e., the same beam G mentioned above) to the calibration plate and receive images passing through different distances.
  • the echo signal reflected back by the calibration plate (such as the first calibration plate B1 and the second calibration plate B2 mentioned above), and the above calibration method is performed according to the received echo signal to complete the calibration of the depth camera.
  • the tilt problem of the depth camera can be regarded as the image plane of the depth camera having a rotation angle about the optical center.
  • once this rotation angle (equivalent to the initial rotation matrix or the optimal rotation matrix) is calculated, the real distance between the first intersection point A1 and the second intersection point A2 can be solved.
  • Figure 3 is a diagram showing the correction effect of the calibration method provided by the embodiment of the present application at 1000 mm.
  • Figure 4 is a diagram showing the correction effect of the calibration method provided by the embodiment of the present application at 2000mm.
  • Figure 5 is a module block diagram of a depth camera provided by an embodiment of the present application.
  • the embodiment of the present application also provides a depth camera, including a projection module 501, a collection module 502, a processing module 503 and a storage module 504.
  • the storage module 504 is used to store the calibration parameters (such as the initial rotation matrix or optimal rotation matrix mentioned above) obtained when executing the calibration method provided by the embodiment of the present application.
  • the projection module 501 is used to project a light signal (i.e., the same light beam G mentioned above) to the target area
  • the acquisition module 502 is used to receive the echo signal reflected back by the target area
  • the processing module 503 is used to generate a depth image of the target area from the reflected echo signal, and to correct the depth image based on the calibration parameters in the storage module 504 to obtain a corrected depth image.
  • Figure 6 is a module block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • This embodiment of the present application also provides a computer-readable storage medium 600.
  • the computer-readable storage medium 600 stores executable instructions 610. When the executable instructions 610 are executed, the calibration method provided by the embodiment of the present application is executed.
  • a software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
  • a computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means.
  • Computer-readable storage media can be any available media that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. Available media may be magnetic media (e.g., floppy disk, hard disk, tape), optical media (e.g., DVD), or semiconductor media (e.g., Solid State Disk), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of Optical Distance (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

This application provides a calibration method, a calibration system, a depth camera, and a readable storage medium. The calibration method is applied to a calibration system comprising a depth camera, calibration plates, and a control and processor, and includes: calculating, from the camera coordinates of the first and second intersection points and the preset intrinsic and extrinsic parameters of the depth camera, a first distance between the first intersection point and the second intersection point with the depth camera in the ideal state; calculating a second distance between the first and second intersection points based on the computation principle of the depth camera and the echo signals; and judging, by comparing the first distance with the second distance, whether the image plane of the depth camera is parallel to the calibration plates, and selecting according to the judgment result whether the depth camera needs calibration correction (for example, when the image plane of the depth camera is tilted relative to the calibration plates, the depth camera is in a non-ideal state and is calibrated and corrected). This application eliminates the adverse effects of tilt error and improves the measurement precision and accuracy of the depth camera.

Description

Calibration method, calibration system, depth camera, and readable storage medium
This application claims priority to Chinese patent application No. 202210971812.5, filed with the China National Intellectual Property Administration on August 12, 2022 and entitled "Calibration method, calibration system, depth camera and readable storage medium", the entire contents of which are incorporated herein by reference.
【Technical Field】
This application relates to the technical field of depth cameras, and in particular to a calibration method, a calibration system, a depth camera, and a readable storage medium.
【Background】
In the prior art, due to the presence of systematic and random errors, both the measurement results and the measurement precision of a depth camera are affected by many factors such as the camera's internals and the external environment. Therefore, to obtain higher-precision depth information, the depth values of the depth camera need to be calibrated.
At present, the depth camera calibration method commonly used in the industry is: obtain the actual distance between the depth camera and a calibration position; move the calibration plate to the calibration position; measure the distance between the depth camera and the calibration plate with the depth camera to obtain a corresponding measured distance; and compare the measured distance with the actual distance to obtain calibration parameters, thereby completing the calibration of the depth camera.
However, the components of a depth camera may exhibit tilt errors caused by improper assembly, and the above calibration method does not take such tilt errors into account, which makes the measurement results of the depth camera inaccurate and keeps the measurement precision at a low level.
Therefore, it is necessary to improve the above depth camera calibration method.
【Technical Solution】
This application provides a calibration method, a calibration system, a depth camera, and a readable storage medium, aiming to solve the problem in the prior art that the measurement results of a depth camera are inaccurate and its measurement precision is low because the tilt error of the depth camera is not taken into account.
To solve the above technical problem, a first aspect of the embodiments of this application provides a calibration method applied to a calibration system, the calibration system comprising a depth camera, calibration plates, and a control and processor. The calibration method includes: controlling the emission module of the depth camera to emit the same beam to a first calibration plate and a second calibration plate that are parallel to each other and located at different distances, and actuating the acquisition module of the depth camera to receive the echo signals reflected back by each calibration plate, wherein the beam has a first intersection point with the first calibration plate and a second intersection point with the second calibration plate; obtaining the camera coordinates of each intersection point and calculating, from these coordinates and the preset intrinsic and extrinsic parameters of the depth camera, the first distance between the intersection points in the ideal state, wherein the ideal state indicates that the image plane of the depth camera is parallel to the calibration plates; calculating the second distance between the first and second intersection points according to the echo signals and the computation principle of the depth camera; and judging, based on the first distance and the second distance, whether the image plane of the depth camera is parallel to the calibration plates, and selecting according to the judgment result whether the depth camera needs calibration correction.
A second aspect of the embodiments of this application provides a calibration system comprising a guide rail, a calibration plate slidably connected to the guide rail, a base, a depth camera, and a control and processor, wherein: the depth camera is placed on the base; the base and the calibration plate are respectively arranged at the two ends of the guide rail; and the control and processor is configured to control the calibration plate to slide on the guide rail, control the emission module of the depth camera to emit an optical signal to the calibration plate and actuate the acquisition module to receive the echo signals reflected back by the calibration plate at different distances, and execute the calibration method of the first aspect according to the received echo signals to complete the calibration of the depth camera.
A third aspect of the embodiments of this application provides a depth camera comprising a projection module, an acquisition module, a processing module, and a storage module, wherein: the storage module is configured to store the calibration parameters obtained when executing the calibration method of the first aspect; the projection module is configured to project an optical signal to a target area; the acquisition module is configured to receive the echo signal reflected back by the target area; and the processing module is configured to generate a depth image of the target area from the reflected echo signal and correct the depth image based on the calibration parameters in the storage module to obtain a corrected depth image.
A fourth aspect of the embodiments of this application provides a computer-readable storage medium storing executable instructions which, when executed, perform the calibration method of the first aspect.
【Beneficial Effects】
As can be seen from the above description, compared with the prior art, the beneficial effects of this application are as follows: the depth camera emits the same beam to the mutually parallel first and second calibration plates at different distances and receives the echo signals reflected back by the calibration plates, the same beam having a first intersection point with the first calibration plate and a second intersection point with the second calibration plate. When calibrating the depth camera, this application first calculates, from the camera coordinates of the two intersection points and the preset intrinsic and extrinsic parameters of the depth camera, the first distance between them in the ideal state (which indicates that the image plane of the depth camera is parallel to the calibration plates; this distance serves as a reference); then calculates the second distance between them (i.e., the actual distance) based on the computation principle of the depth camera and the echo signals; and finally judges, by comparing the first and second distances, whether the image plane of the depth camera is parallel to the calibration plates, and selects according to the judgment result whether the depth camera needs calibration correction (for example, if the image plane is parallel to the calibration plates, the depth camera is currently in the ideal state and no calibration correction is needed; if the image plane is tilted relative to the calibration plates, the depth camera is currently in a non-ideal state and can be calibrated and corrected). In short, when calibrating the depth camera, this application fully takes into account the tilt error of the depth camera (i.e., the non-ideal state), and when such a tilt error exists, this application further calibrates and corrects the depth camera, thereby largely eliminating the adverse effects of the tilt error and effectively improving the measurement precision of the depth camera and the accuracy of its measurement results.
【Description of the Drawings】
To explain the technical solutions in the related art or in the embodiments of this application more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application rather than all of them; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a schematic flowchart of the calibration method provided by an embodiment of this application;
Figure 2 is an optical path diagram of the calibration method provided by an embodiment of this application during the calibration process;
Figure 3 is a correction effect diagram of the calibration method provided by an embodiment of this application at 1000 mm;
Figure 4 is a correction effect diagram of the calibration method provided by an embodiment of this application at 2000 mm;
Figure 5 is a block diagram of the depth camera provided by an embodiment of this application;
Figure 6 is a block diagram of the computer-readable storage medium provided by an embodiment of this application.
【Embodiments of the Invention】
To make the purpose, technical solutions, and advantages of this application clearer and easier to understand, this application is described clearly and completely below with reference to its embodiments and the corresponding drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. It should be understood that the embodiments described below are only intended to explain this application and not to limit it; that is, based on these embodiments, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application. In addition, the technical features involved in the embodiments described below may be combined with one another as long as they do not conflict.
Figure 1 is a schematic flowchart of the calibration method provided by an embodiment of this application. The method is applied to a depth camera, and its application environment is shown in Figure 2: a first calibration plate B1 and a second calibration plate B2 arranged on a guide rail (not shown) and slidably engaged with it; a base (not shown) arranged at one end of the guide rail; and a depth camera (not shown) arranged on the base. The surfaces of the plates B1 and B2 are perpendicular to the extension direction of the guide rail; the same beam G emitted by the depth camera has a first intersection point A1 with B1 and a second intersection point A2 with B2; B1 is parallel to B2; and the distances from B1 and B2 to the depth camera differ. The number of calibration plates is not limited to two (i.e., B1 and B2): there may be a single plate (i.e., B1 is moved along the guide rail to become B2), or three or more; the description here is only an example.
In the above application environment, all points on the same beam G correspond to one and the same pixel on the image plane (M1 or M2) of the depth camera. That is, in the ideal state the same beam G, after being reflected by B1 and B2, illuminates pixel P on M1 (i.e., in the ideal state the pixels on M1 corresponding to A1 and A2 are both P), while in the non-ideal state it illuminates pixel P′ on M2 (i.e., in the non-ideal state the pixels on M2 corresponding to A1 and A2 are both P′). Because the non-ideal state carries a tilt error relative to the ideal state, pixel P on M1 in the ideal state differs from pixel P′ on M2 in the non-ideal state. However, the actual distance between A1 and A2 in external space is independent of whether the image plane of the depth camera is tilted: it is always the Euclidean distance between A1 and A2.
Specifically, the calibration method provided by the embodiment of this application includes the following steps 101 to 105.
Step 101: control the emission module of the depth camera to emit the same beam to the mutually parallel first and second calibration plates at different distances, and actuate the acquisition module of the depth camera to receive the echo signals reflected back by the calibration plates.
In this embodiment, when calibrating the depth camera, the depth camera is controlled to emit the same beam G to the mutually parallel plates B1 and B2 at different distances and to receive the echo signals reflected back by each calibration plate; the intersection of beam G with B1 is the first intersection point A1, and its intersection with B2 is the second intersection point A2. Note that "the same beam G" means that the beams emitted by the emission module of the depth camera to B1 and B2 are one beam of light in the same direction; they may also be multiple beams in different directions. When multiple beams in different directions are used, the corresponding intersection points are obtained and computed beam by beam according to each beam's direction, which is not limited here.
Step 102: obtain the camera coordinates of the intersection points, and calculate, from the camera coordinates of the first and second intersection points and the preset intrinsic and extrinsic parameters of the depth camera, the first distance between the first and second intersection points with the depth camera in the ideal state.
In this embodiment, the camera coordinates of the two intersection points are obtained from the world coordinates of A1 and A2 and the preset intrinsic and extrinsic parameters of the depth camera, and the first distance between A1 and A2 is then calculated. Since this distance is obtained by coordinate transformation, it can be regarded as the first distance in the ideal state, where the ideal state indicates that the image plane of the depth camera is parallel to B1 and B2 (M1 in Figure 2 denotes the image plane of the depth camera in the ideal state); the first distance serves as a reference.
Step 103: calculate the second distance between the first and second intersection points based on the computation principle of the depth camera and the echo signals.
In this embodiment, the distance between A1 and A2 obtained from the computation principle of the depth camera is the second distance, i.e., the distance between A1 and A2 calculated from the echo signals reflected by each calibration plate and received by the depth camera. Note that when computing the second distance, it is unknown whether the depth camera is in the ideal state; the distance obtained from the camera's computation principle is therefore the actual distance, intended to be compared with the reference first distance of step 102.
Step 104: compare the first and second distances, and judge whether the image plane of the depth camera is parallel to the calibration plates.
In this embodiment, after the first and second distances are obtained, it must be judged from them whether the image plane of the depth camera is parallel to B1 and B2, which amounts to judging whether the depth camera is currently in the ideal state or a non-ideal state. As an example, if the first distance equals the second distance, the image plane is determined to be parallel to B1 and B2, i.e., the depth camera is currently in the ideal state; if they differ, the image plane is determined to be tilted relative to B1 and B2, i.e., the depth camera is currently in a non-ideal state, where the non-ideal state indicates that the image plane of the depth camera is tilted relative to B1 and B2 (M2 in Figure 2 denotes the image plane of the depth camera in the non-ideal state); in the non-ideal state, because of this tilt, the depth camera has a tilt error.
Step 105: select, according to the judgment result, whether the depth camera needs calibration correction.
In this embodiment, after judging whether the image plane of the depth camera is parallel to B1 and B2, it is further necessary to select, according to the judgment result, whether the depth camera needs calibration correction; "calibration correction" herein means correcting the tilt error of the depth camera so as to eliminate its adverse effects. As an example, if the image plane is parallel to B1 and B2, i.e., the depth camera is currently in the ideal state, a result indicating that no calibration correction is needed is output; if the image plane is tilted relative to B1 and B2, i.e., the depth camera is currently in a non-ideal state, the depth camera is calibrated and corrected to eliminate the adverse effects of the tilt error.
When calibrating the depth camera, this embodiment judges from the first and second distances whether the image plane of the depth camera is parallel to B1 and B2 and selects accordingly whether calibration correction is needed: for example, if the image plane is parallel to B1 and B2, the depth camera is currently in the ideal state and no calibration correction is necessary; if the image plane is tilted relative to B1 and B2, the depth camera is currently in a non-ideal state and can be calibrated and corrected. In other words, when calibrating the depth camera, this embodiment fully takes into account the tilt error of the depth camera (i.e., the non-ideal state), and when such an error exists it further calibrates and corrects the camera, thereby largely eliminating the adverse effects of the tilt error and effectively improving the measurement precision of the depth camera and the accuracy of its measurement results.
As an implementation, the "first distance" in step 102 is calculated in the camera coordinate system, specifically by calculating the first distance according to a first formula, expressed as:
D(P) = (d₂ − d₁) · √( ((u − c_x)/f)² + ((v − c_y)/f)² + 1 )
where D(P) denotes the first distance; u and v denote the horizontal and vertical coordinates of the pixel P on the image plane M1 of the depth camera corresponding to the first intersection point A1 and the second intersection point A2 ((u, v) can be computed from the world coordinates of the intersection points and the preset extrinsic parameters of the depth camera); f denotes the focal length of the depth camera; d₁ denotes the distance between the first calibration plate B1 and the depth camera; d₂ denotes the distance between the second calibration plate B2 and the depth camera; and c_x and c_y denote the horizontal and vertical center coordinates of the image plane M1 of the depth camera in the ideal state.
This implementation corresponds to the ideal state, in which the first distance is easy to obtain: first slide B1 and B2 along the guide rail so that their distances to the depth camera are d₁ and d₂ respectively; then, based on the principle that all points on the same beam G correspond to one and the same pixel on the image plane (M1 or M2) of the depth camera, compute the pixel coordinates (u, v) of pixel P on M1 using the intrinsic and extrinsic parameters of the depth camera; finally compute the first distance from the pixel coordinates (u, v) of pixel P on M1.
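The ideal-state first-distance computation described above can be sketched numerically: back-project the shared pixel (u, v) to camera coordinates at depths d₁ and d₂ under the pinhole model and take the Euclidean distance between the two points. This is a minimal illustration; the function name and the numeric values are assumptions for the example, not part of the patent.

```python
import math

def first_distance(u, v, f, d1, d2, cx, cy):
    """Ideal-state Euclidean distance between the two beam/plate
    intersections A1 (depth d1) and A2 (depth d2) that share the
    image-plane pixel (u, v), under the pinhole camera model."""
    # Back-project the shared pixel to camera coordinates at each depth.
    x1, y1 = (u - cx) * d1 / f, (v - cy) * d1 / f
    x2, y2 = (u - cx) * d2 / f, (v - cy) * d2 / f
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (d2 - d1) ** 2)

# At the principal point the beam coincides with the optical axis,
# so the distance reduces to d2 - d1.
print(first_distance(320, 240, 500.0, 1000.0, 2000.0, 320, 240))  # → 1000.0
```

Off the principal point, the distance grows by the factor √(((u−c_x)/f)² + ((v−c_y)/f)² + 1), since the beam is no longer parallel to the optical axis.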
In addition, since depth cameras come in different types (i.e., i-TOF cameras and d-TOF cameras), the different types should be fully considered when computing the second distance based on the camera's computation principle; the second distance is computed differently for each type.
As one implementation, the "second distance" of step 103 may be computed by: obtaining the phase difference between the first intersection point and the second intersection point, and computing the second distance from the obtained phase difference. Specifically, this implementation substitutes the phase difference between A1 and A2 into a second formula, expressed as:
D′(P′) = c · (φ₂ − φ₁) / (4π · f_m)
where D′(P′) denotes the second distance, c denotes the speed of light, φ₁ denotes the phase corresponding to the first intersection point A1, φ₂ denotes the phase corresponding to the second intersection point A2, π denotes the circular constant, and f_m denotes the modulation frequency of the i-TOF camera.
Because different TOF cameras differ in the number of taps and the modulation scheme, the computation of the phase difference between A1 and A2 also differs among them. However, for the pixel P′ with coordinates (u′, v′) on the image plane M2 of the depth camera corresponding to A1 and A2, the phase φ₁ corresponding to A1 and the phase φ₂ corresponding to A2 can be computed from the echo signals reflected by A1 and A2 and received at that pixel, giving the phase difference Δφ = φ₂ − φ₁ between A1 and A2, from which the second distance can be computed by the second formula above.
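The phase-difference relation above can be sketched numerically, assuming the standard i-TOF conversion c·Δφ/(4π·f_m); the function name and the 20 MHz modulation frequency are illustrative assumptions for the example.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def second_distance_from_phase(phi1, phi2, f_mod):
    """i-TOF: each one-way distance is c*phi/(4*pi*f_mod), so the
    distance between the two intersections is the scaled phase difference."""
    return C * (phi2 - phi1) / (4 * math.pi * f_mod)

# 1 m of extra one-way range at 20 MHz modulation corresponds to a
# phase shift of 4*pi*f_mod/c radians.
dphi = 4 * math.pi * 20e6 * 1.0 / C
print(round(second_distance_from_phase(0.0, dphi, 20e6), 9))  # → 1.0
```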
As another implementation, the "second distance" of step 103 may be computed by: obtaining the time difference between a first time of flight and a second time of flight, and computing the second distance from the obtained time difference; the first time of flight is the time elapsed from when the depth camera emits the same beam G until the depth camera receives the beam G reflected back via the first intersection point A1 (i.e., the echo signal), and the second time of flight is the time elapsed from when the depth camera emits the same beam G until the depth camera receives the beam G reflected back via the second intersection point A2. Specifically, this implementation substitutes the time difference between the first and second times of flight into a third formula, expressed as:
D′(P′) = c · (t₂ − t₁) / 2
where D′(P′) denotes the second distance, c denotes the speed of light, t₂ denotes the second time of flight, and t₁ denotes the first time of flight.
In this implementation, for the pixel P′ on the image plane M2 of the depth camera corresponding to A1 and A2, the first time of flight t₁ and the second time of flight t₂ are obtained from the echo signals received at the coordinates (u′, v′) of pixel P′, giving the time difference Δt = t₂ − t₁, from which the second distance can be computed by the third formula above.
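The d-TOF variant can be sketched the same way: round-trip times give one-way distances c·t/2, and their difference is the plate-to-plate separation along the beam. The helper name and the 1 m / 3 m plate positions are illustrative assumptions.

```python
def second_distance_from_time(t1, t2, c=299_792_458.0):
    """d-TOF: round-trip times give one-way distances c*t/2; their
    difference is the distance between the two intersections."""
    return c * (t2 - t1) / 2

# A 2 m separation between the plates adds 4 m of round trip.
c = 299_792_458.0
t1 = 2 * 1.0 / c   # round trip to the first plate at 1 m
t2 = 2 * 3.0 / c   # round trip to the second plate at 3 m
print(round(second_distance_from_time(t1, t2), 9))  # → 2.0
```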
Further, if the first distance equals the second distance, i.e., D(P) = D′(P′), the image plane of the depth camera is determined to be parallel to B1 and B2, the depth camera is determined to be currently in the ideal state, and the camera coordinates of A1 and A2 on the image plane coincide, i.e., (u, v) = (u′, v′); if the first and second distances differ, the image plane of the depth camera is determined to be tilted relative to B1 and B2, i.e., the depth camera is determined to be currently in a non-ideal state, and the camera coordinates of A1 and A2 on the image plane differ, i.e., (u, v) ≠ (u′, v′).
Based on this, as an implementation of step 105, if the image plane of the depth camera is tilted relative to B1 and B2 (i.e., the depth camera is currently in a non-ideal state), the depth camera must be calibrated and corrected to eliminate the adverse effects of the tilt error. In this implementation, calibrating and correcting the depth camera may include: obtaining the rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state, and calibrating and correcting the depth camera according to the obtained rotation matrix.
It can be understood that, since the non-ideal state carries a tilt error relative to the ideal state (i.e., the image plane M2 of the depth camera is tilted relative to B1 and B2), a rotation matrix exists between the image plane M2 in the non-ideal state and the image plane M1 in the ideal state. Specifically, the coordinates (u, v) of pixel P on M1 corresponding to A1 and A2 in the ideal state satisfy the following relation:
z · [u, v, 1]ᵀ = K · q
where z denotes the distance between the guide rail and the first calibration plate B1 / second calibration plate B2, K denotes the intrinsic matrix of the depth camera, and q denotes the coordinates, in the camera coordinate system, of the pixel P on M1 corresponding to A1 and A2.
The coordinates (u′, v′) of pixel P′ on M2 corresponding to A1 and A2 in the non-ideal state satisfy the following relation:
z · [u′, v′, 1]ᵀ = K · R · q
where z denotes the distance between the guide rail and B1/B2, K denotes the intrinsic matrix of the depth camera, q′ = R·q denotes the coordinates of pixel P′ in the camera coordinate system, and R denotes the rotation matrix between the image plane M2 in the non-ideal state and the image plane M1 in the ideal state. It follows that once the rotation matrix R between M2 and M1 is known, the correspondence between pixel P′ on M2 and pixel P on M1 can easily be established.
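The pixel correspondence between P on M1 and P′ on M2 described above, for a pure rotation R of the image plane, can be sketched as a rotation homography H = K·R·K⁻¹ applied to homogeneous pixel coordinates. The intrinsic values and function name below are illustrative assumptions.

```python
import numpy as np

def rotated_pixel(u, v, K, R):
    """Map a pixel on the ideal image plane M1 to the tilted plane M2
    via the pure-rotation homography H = K @ R @ inv(K)."""
    H = K @ R @ np.linalg.inv(K)
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # dehomogenize

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# An identity rotation (no tilt) leaves every pixel fixed.
u2, v2 = rotated_pixel(100.0, 50.0, K, np.eye(3))
print(round(u2, 6), round(v2, 6))  # → 100.0 50.0
```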
In one specific implementation, "obtaining the rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state" in this implementation may include: obtaining, from the pose expression of the depth camera, an initial rotation matrix between the image plane in the non-ideal state and the image plane in the ideal state. The pose expression is:
min over R, T of ½ · Σ(i=1..n) ‖e_i‖₂², with e_i = D(P, i) − D′(P′, i)
where the depth camera comprises n pixels, n being a positive integer greater than 1; D(P, i) denotes the first distance obtained at the i-th pixel of the depth camera; D′(P′, i) denotes the second distance obtained at the i-th pixel; e_i denotes the error between the first and second distances at the i-th pixel; R denotes the rotation matrix; T denotes the translation matrix of the depth camera; and J denotes the Jacobian matrix operation. The R for which Σ‖e_i‖² is minimal is the initial rotation matrix. After the initial rotation matrix is obtained, the depth camera can be calibrated and corrected with it to eliminate the adverse effects of the tilt error.
For this specific implementation: since the rotation matrix between the image plane M2 in the non-ideal state and the image plane M1 in the ideal state is unknown, i.e., the correspondence between pixel P′ on M2 and pixel P on M1 is unknown, the position of P′ can only be sought from the current estimate of the depth camera's extrinsic parameters. If the current extrinsic parameters are not ideal, the distance discrepancy between pixel P′ on M2 and pixel P on M1 is large; to reduce this discrepancy, the extrinsic parameters must be optimized to find a P′ more similar to P.
The criterion for judging whether pixel P′ on M2 is similar to pixel P on M1 is the error between the first distance and the second distance (i.e., their difference), defined as e = D(P) − D′(P′). Considering that the depth camera comprises multiple pixels (say n, with n a positive integer greater than 1), the pose-estimation problem of the depth camera becomes the pose expression above: when Σ(i=1..n)‖e_i‖² in the pose expression is minimal, the R in the pose expression is the initial rotation matrix.
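The least-squares criterion above, half the sum of squared per-pixel errors between the reference first distances and the measured second distances, can be sketched as follows; the function name and toy numbers are illustrative assumptions.

```python
import numpy as np

def objective(first_dist, second_dist):
    """Pose objective: 0.5 * sum of squared per-pixel errors
    e_i = D(P, i) - D'(P', i)."""
    e = np.asarray(first_dist) - np.asarray(second_dist)
    return 0.5 * float(e @ e)

# Zero error when the reference and measured distances agree at
# every pixel, i.e. the image plane is parallel to the plates.
print(objective([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
print(objective([1.0, 2.0], [1.5, 1.0]))            # → 0.625
```

The initial rotation matrix is the R whose predicted distances make this objective smallest.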
In another specific implementation, building on the previous one, "obtaining the rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state" in this implementation may further include: computing, from the initial rotation matrix, the pixel coordinates in the depth camera corresponding to the first and second intersection points; computing, from those pixel coordinates, a third distance between the first and second intersection points; differentiating the difference between the third distance and the first distance and converting the result into a Jacobian matrix; and processing the Jacobian matrix by computing increments with a nonlinear optimization algorithm and solving iteratively, to obtain an optimal rotation matrix. Further, after the optimal rotation matrix is obtained, the depth camera can be calibrated and corrected with it to eliminate the adverse effects of the tilt error. Understandably, since the optimal rotation matrix eliminates these adverse effects better than the initial rotation matrix, once the optimal rotation matrix is obtained, the calibration correction no longer uses the initial rotation matrix but the optimal rotation matrix.
For a clear understanding, this specific implementation is elaborated in detail below:
Let the camera coordinates of pixel P on image plane M1 (ideal state) corresponding to A1 and A2 be p, and let the camera coordinates and pixel coordinates of pixel P′ on image plane M2 (non-ideal state) be q and s respectively. Then the initial rotation matrix solved in the previous specific implementation relates p, q, and s as:
q = R · p,  z · [s, 1]ᵀ = K · q
where z denotes the distance between the guide rail and the first calibration plate B1 / second calibration plate B2, and K denotes the intrinsic matrix of the depth camera. Note that this s is not the same as the pixel coordinates (u′, v′) of pixel P′ on M2 mentioned earlier: s here is computed from the initial rotation matrix of the previous specific implementation, i.e., what is obtained here is the rotated pixel coordinate s.
Then the third distance between A1 and A2 is computed from the pixel coordinate s. Just as s is not the same as the earlier pixel coordinates (u′, v′) of P′ on M2, the third distance here, although it corresponds to the earlier second distance, is computed from the pixel coordinate obtained via the initial rotation matrix of the previous specific implementation and from the echo signal received at the corresponding pixel of the depth camera's receiving module; that is, the third distance obtained here is in effect the rotated second distance. After the third distance is obtained, it is differenced with the first distance, i.e., the error e = D(p) − D′(s) between the third and first distances is formed, where D(p) denotes the first distance and D′(s) the third distance. From e = D(p) − D′(s) it can be seen that e varies with D′(s), D′(s) depends on the pixel coordinate s, and s varies with the initial rotation matrix R; therefore, to further optimize the initial rotation matrix R, the error e between the third and first distances must be minimized. The error e is then differentiated, with the expression:
∂e/∂δξ = −(∂D′(s)/∂s) · (∂s/∂δξ)
where ξ is the Lie-algebra form of the initial rotation matrix R and δξ is a perturbation term.
The derivative expression of the error e between the third distance and the first distance is converted into a Jacobian matrix as follows:
J = −(∂D′(s)/∂s) · (∂s/∂δξ)
where ∂D′(s)/∂s is the gradient of the distance difference at the pixel coordinate s.
Equivalently,
∂s/∂δξ = (∂s/∂q) · (∂q/∂δξ)
where ∂s/∂q is the derivative of s with respect to q, and ∂q/∂δξ can be obtained by Lie-algebra differentiation. On this basis, ∂s/∂q can be expressed as:
∂s/∂q = [ f_x/z  0  −f_x·X/z² ; 0  f_y/z  −f_y·Y/z² ]
where f_x and f_y are the pixel sizes, the camera coordinates of pixel P′ are q(X, Y), and z denotes the distance between the guide rail and the first calibration plate B1 / second calibration plate B2. Further, combining this with ∂q/∂δξ yields the more concrete Jacobian matrix of the derivative expression of the error e between the third and first distances.
After the above Jacobian matrix is obtained, it can be processed by computing increments with a nonlinear optimization algorithm (such as the Gauss-Newton algorithm) and solving iteratively, so as to obtain the optimal rotation matrix, which is then used to calibrate and correct the depth camera. In the process of calibrating and correcting the depth camera with the optimal rotation matrix, the real distance between A1 and A2 can be obtained through the optimal rotation matrix; the subsequent calibration flow can follow the existing calibration flows of i-TOF cameras, such as wiggling and FPPN error calibration.
This implementation provides a technical means of computing the real distance between A1 and A2 given a known rotation matrix (i.e., the initial rotation matrix or the optimal rotation matrix): in the ideal state, the camera coordinates of the pixels on the depth camera's image plane corresponding to the marker points (i.e., A1 and A2) are known; from the initial or optimal rotation matrix, the rotated camera coordinates of those pixels can be obtained; plane fitting is then performed on the rotated marker plates (i.e., B1 and B2); and since the imaging of the marker points on the image plane satisfies a perspective transformation, the position information of the corresponding image plane, i.e., of the image plane M2 in the non-ideal state, can be obtained.
It should be understood that the above implementations are only preferred realizations of the embodiments of this application and are not the only limitation on the specific flow of step 105; on this basis, those skilled in the art can flexibly configure it according to the actual application scenario.
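The increment-and-iterate scheme described above can be illustrated with a generic Gauss-Newton loop on a toy one-parameter tilt model. This is a sketch of the optimization pattern only, not the patent's actual residual or Jacobian; the model m_i = a_i·cos(θ) and all names are illustrative assumptions.

```python
import math

def gauss_newton(residual, jacobian, theta0, iters=20):
    """Generic scalar Gauss-Newton: at each step solve
    (J^T J) * delta = J^T r and update theta <- theta - delta."""
    theta = theta0
    for _ in range(iters):
        r = residual(theta)
        J = jacobian(theta)
        JTJ = sum(j * j for j in J)
        JTr = sum(j * ri for j, ri in zip(J, r))
        theta -= JTr / JTJ  # Gauss-Newton increment
    return theta

# Toy tilt model: measurements m_i = a_i * cos(theta_true).
a = [1.0, 2.0, 3.0]
theta_true = 0.1
m = [ai * math.cos(theta_true) for ai in a]
residual = lambda th: [ai * math.cos(th) - mi for ai, mi in zip(a, m)]
jacobian = lambda th: [-ai * math.sin(th) for ai in a]
print(round(gauss_newton(residual, jacobian, 0.3), 6))  # → 0.1
```

The real problem replaces θ with the Lie-algebra increment δξ of the rotation and the toy Jacobian with the chained Jacobian derived above, but the solve-increment-update loop is the same.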
Further, an embodiment of this application also provides a calibration system comprising a guide rail, a calibration plate slidably connected to the guide rail, a base, a depth camera, and a control and processor, wherein the depth camera is placed on the base, the base and the calibration plate are respectively arranged at the two ends of the guide rail, and the control and processor can control the calibration plate to slide on the guide rail, control the depth camera to emit an optical signal (i.e., the same beam G described above) to the calibration plate, receive the echo signals reflected back by the calibration plate at different distances (such as the first calibration plate B1 and second calibration plate B2 described above), and execute the above calibration method according to the received echo signals to complete the calibration of the depth camera.
In summary, the tilt problem of the depth camera can be regarded as its image plane having a rotation angle about the optical center; once this rotation angle (equivalent to the initial rotation matrix or the optimal rotation matrix) is computed, the real distance between A1 and A2 can be solved. Experiments show that the embodiments of this application correct the tilt of the depth camera well; for the specific correction effect see Figures 3 and 4, where Figure 3 shows the correction effect of the calibration method provided by an embodiment of this application at 1000 mm and Figure 4 shows the correction effect at 2000 mm.
Figure 5 is a block diagram of the depth camera provided by an embodiment of this application. An embodiment of this application also provides a depth camera comprising a projection module 501, an acquisition module 502, a processing module 503, and a storage module 504, wherein the storage module 504 is configured to store the calibration parameters obtained when executing the calibration method provided by the embodiments of this application (such as the initial or optimal rotation matrix described above). Specifically, the projection module 501 is configured to project an optical signal (i.e., the same beam G described above) to a target area, the acquisition module 502 is configured to receive the echo signal reflected back by the target area, and the processing module 503 is configured to generate a depth image of the target area from the reflected echo signal and correct the depth image based on the calibration parameters in the storage module 504 to obtain a corrected depth image.
Figure 6 is a block diagram of the computer-readable storage medium provided by an embodiment of this application. An embodiment of this application also provides a computer-readable storage medium 600 storing executable instructions 610 which, when executed, perform the calibration method provided by the embodiments of this application.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. A computer program product comprises one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the flows or functions described in this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), semiconductor media (e.g., Solid State Disk), and the like.
It should be noted that the embodiments in this application are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments may be referred to one another. For product-type embodiments, since they are similar to the method-type embodiments, the description is relatively brief; refer to the relevant parts of the method-type embodiments where needed.
It should also be noted that, in this application, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between them. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device comprising that element.
The above description of the disclosed embodiments enables those skilled in the art to implement or use this application. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of this application. Therefore, this application is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

  1. A calibration method, applied to a calibration system, the calibration system comprising a depth camera, a calibration plate, and a control and processor, characterized in that the method comprises:
    controlling an emission module of the depth camera to emit the same beam to a first calibration plate and a second calibration plate that are parallel to each other and located at different distances, and actuating an acquisition module of the depth camera to receive echo signals reflected back by each calibration plate; wherein the beam has a first intersection point with the first calibration plate and a second intersection point with the second calibration plate;
    obtaining camera coordinates of each intersection point, and calculating, from the camera coordinates and preset intrinsic and extrinsic parameters of the depth camera, a first distance between the first intersection point and the second intersection point with the depth camera in an ideal state; wherein the ideal state indicates that an image plane of the depth camera is parallel to the calibration plates;
    calculating a second distance between the first intersection point and the second intersection point according to a computation principle of the depth camera and the echo signals;
    judging, based on the first distance and the second distance, whether the image plane of the depth camera is parallel to the calibration plates, and selecting, according to the judgment result, whether the depth camera needs calibration correction.
  2. The calibration method of claim 1, characterized in that the computation of the first distance comprises:
    calculating the first distance according to a first formula, the first formula being expressed as:
    D(P) = (d₂ − d₁) · √( ((u − c_x)/f)² + ((v − c_y)/f)² + 1 )
    where D(P) denotes the first distance; u and v denote the horizontal and vertical coordinates of the pixel on the image plane of the depth camera in the ideal state corresponding to the first intersection point and the second intersection point; f denotes the focal length of the depth camera; d₁ denotes the distance between the first calibration plate and the depth camera; d₂ denotes the distance between the second calibration plate and the depth camera; and c_x and c_y denote the horizontal and vertical center coordinates of the image plane of the depth camera in the ideal state.
  3. The calibration method of claim 1, characterized in that the computation of the second distance comprises:
    obtaining, from the echo signals, a phase difference between the first intersection point and the second intersection point;
    substituting the phase difference into a second formula to calculate the second distance, the second formula being expressed as:
    D′(P′) = c · (φ₂ − φ₁) / (4π · f_m)
    where D′(P′) denotes the second distance, c denotes the speed of light, φ₁ denotes the phase corresponding to the first intersection point, φ₂ denotes the phase corresponding to the second intersection point, π denotes the circular constant, and f_m denotes the modulation frequency of the depth camera.
  4. The calibration method of claim 1, characterized in that the computation of the second distance comprises:
    obtaining a time difference between a first time of flight and a second time of flight; wherein the first time of flight is the time elapsed from when the depth camera emits the same beam until the depth camera receives the echo signal reflected back via the first calibration plate, and the second time of flight is the time elapsed from when the depth camera emits the same beam until the depth camera receives the echo signal reflected back via the second calibration plate;
    substituting the time difference into a third formula to calculate the second distance, the third formula being expressed as:
    D′(P′) = c · (t₂ − t₁) / 2
    where D′(P′) denotes the second distance, c denotes the speed of light, t₂ denotes the second time of flight, and t₁ denotes the first time of flight.
  5. The calibration method of any one of claims 1-4, characterized in that judging, based on the first distance and the second distance, whether the image plane of the depth camera is parallel to the calibration plates comprises:
    if the first distance equals the second distance, determining that the image plane of the depth camera is parallel to the calibration plates and the depth camera is in the ideal state;
    if the first distance differs from the second distance, determining that the image plane of the depth camera is tilted relative to the calibration plates and the depth camera is in a non-ideal state; wherein the non-ideal state indicates that the image plane of the depth camera is tilted relative to the calibration plates.
  6. The calibration method of claim 5, characterized in that selecting, according to the judgment result, whether the depth camera needs calibration correction comprises:
    if the depth camera is in the ideal state, outputting a result that no calibration correction of the depth camera is needed;
    if the depth camera is in the non-ideal state, performing calibration correction on the depth camera.
  7. The calibration method of claim 6, characterized in that performing calibration correction on the depth camera comprises:
    obtaining a rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state;
    performing calibration correction on the depth camera according to the rotation matrix.
  8. The calibration method of claim 7, characterized in that obtaining the rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state comprises:
    obtaining, from a pose expression of the depth camera, an initial rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state; the pose expression being:
    min over R, T of ½ · Σ(i=1..n) ‖D(P, i) − D′(P′, i)‖₂²
    wherein the depth camera comprises n pixels, n being a positive integer greater than 1; D(P, i) denotes the first distance obtained at the i-th pixel of the depth camera; D′(P′, i) denotes the second distance obtained at the i-th pixel of the depth camera; R denotes the rotation matrix; T denotes the translation matrix of the depth camera; and J denotes the Jacobian matrix operation; the R for which the sum of squared errors is minimal is the initial rotation matrix.
  9. The calibration method of claim 8, characterized in that obtaining the rotation matrix between the image plane of the depth camera in the non-ideal state and the image plane of the depth camera in the ideal state further comprises:
    calculating, from the initial rotation matrix, pixel coordinates in the depth camera corresponding to the first intersection point and the second intersection point;
    calculating a third distance between the first intersection point and the second intersection point from the echo signals received at the pixels of the receiving module of the depth camera corresponding to the pixel coordinates;
    differentiating the difference between the third distance and the first distance, and converting the result of the differentiation into a Jacobian matrix;
    processing the Jacobian matrix by computing increments with a nonlinear optimization algorithm and solving iteratively, to obtain an optimal rotation matrix.
  10. A calibration system, characterized by comprising a guide rail, a calibration plate slidably connected to the guide rail, a base, a depth camera, and a control and processor, wherein:
    the depth camera is placed on the base, and the base and the calibration plate are respectively arranged at the two ends of the guide rail;
    the control and processor is configured to control the calibration plate to slide on the guide rail, control the depth camera to emit an optical signal to the calibration plate and actuate the acquisition module of the depth camera to receive the echo signals reflected back by the calibration plate at different distances, and execute the method of any one of claims 1-9 according to the received echo signals to complete the calibration of the depth camera.
  11. A depth camera, characterized by comprising a projection module, an acquisition module, a processing module, and a storage module, wherein:
    the storage module is configured to store calibration parameters obtained when executing the calibration method of any one of claims 1-9;
    the projection module is configured to project an optical signal to a target area;
    the acquisition module is configured to receive an echo signal reflected back by the target area;
    the processing module is configured to generate a depth image of the target area from the reflected echo signal, and to correct the depth image based on the calibration parameters in the storage module to obtain a corrected depth image.
  12. A computer-readable storage medium, characterized in that executable instructions are stored on the computer-readable storage medium, and when executed, the executable instructions perform the method of any one of claims 1-9.
PCT/CN2022/123159 2022-08-12 2022-09-30 Calibration method, calibration system, depth camera and readable storage medium WO2024031809A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210971812.5 2022-08-12
CN202210971812.5A CN115423877A (zh) 2022-08-12 Calibration method, calibration system, depth camera and readable storage medium

Publications (1)

Publication Number Publication Date
WO2024031809A1 true WO2024031809A1 (zh) 2024-02-15

Family

ID=84199028

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/123159 WO2024031809A1 (zh) 2022-08-12 2022-09-30 Calibration method, calibration system, depth camera and readable storage medium

Country Status (2)

Country Link
CN (1) CN115423877A (zh)
WO (1) WO2024031809A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228889B * 2023-04-27 2023-08-15 合肥工业大学 Mobile calibration device, camera array system calibration device and method
CN117876502B * 2024-03-08 2024-06-28 荣耀终端有限公司 Depth calibration method, depth calibration device, and depth calibration system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014106118A (ja) * 2012-11-28 2014-06-09 Kokusai Kogyo Co Ltd Numerical surface layer model creation method and numerical surface layer model creation device
CN110570477A (zh) * 2019-08-28 2019-12-13 贝壳技术有限公司 Method, device and storage medium for calibrating the relative pose of a camera and a rotation axis
CN112198529A (zh) * 2020-09-30 2021-01-08 上海炬佑智能科技有限公司 Reference plane adjustment and obstacle detection method, depth camera, and navigation device
CN112686961A (zh) * 2020-12-31 2021-04-20 杭州海康机器人技术有限公司 Method and device for correcting calibration parameters of a depth camera
CN114792342A (zh) * 2022-02-28 2022-07-26 中国铁建重工集团股份有限公司 Line structured light calibration method, apparatus, device, and storage medium


Also Published As

Publication number Publication date
CN115423877A (zh) 2022-12-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22954753

Country of ref document: EP

Kind code of ref document: A1