WO2024109403A1 - 3D camera calibration method, point cloud image acquisition method, and camera calibration system - Google Patents


Info

Publication number
WO2024109403A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
coordinate system
position coordinates
points
posture
Prior art date
Application number
PCT/CN2023/125286
Other languages
English (en)
French (fr)
Inventor
赵顺顺
宋启原
汪力骁
李鹏飞
丁有爽
邵天兰
Original Assignee
梅卡曼德(北京)机器人科技有限公司
Priority date
Filing date
Publication date
Application filed by 梅卡曼德(北京)机器人科技有限公司
Publication of WO2024109403A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present disclosure relates to the technical field of industrial cameras, and in particular to a 3D camera calibration method, a point cloud image acquisition method, and a camera calibration system.
  • machine vision technology can be used to identify objects to be detected.
  • a three-dimensional (3D) point cloud image of the object to be detected must be obtained first, and then the object to be detected is identified based on the 3D point cloud image.
  • a 3D point cloud image of an object to be detected is obtained in the following manner: a 3D camera with fixed parameters is used to collect the position coordinates of multiple position points on the surface of the object to be detected in the camera coordinate system, and a 3D point cloud image of the object to be detected is generated based on the position coordinates.
  • the present disclosure provides a 3D camera calibration method, a point cloud image acquisition method and a camera calibration system to solve the problem that the position coordinates of multiple position points on the surface of an object to be detected collected by related technologies in a camera coordinate system are not accurate enough, which leads to low accuracy of the generated three-dimensional point cloud image.
  • the present disclosure provides a 3D camera calibration method, comprising:
  • the compensation matrix for the 3D camera is determined according to the measured position coordinates of multiple position points of the calibration object in the camera coordinate system and the initial position and posture of the 3D camera in the translation stage coordinate system.
  • the present disclosure provides a point cloud image acquisition method, comprising:
  • the position coordinates of the surface position points of the target object in the camera coordinate system are compensated according to the compensation matrix to obtain the position coordinates after compensation.
  • the compensation matrix is determined according to the measured position coordinates of multiple position points of the calibration object in the camera coordinate system and the initial position and posture of the 3D camera in the translation stage coordinate system.
  • a point cloud image corresponding to the target object is generated.
  • the present disclosure provides a camera calibration system, including a 3D camera and a translation stage;
  • the 3D camera is used to obtain the measured position coordinates of multiple position points of the calibration object in the camera coordinate system; determine the initial position and posture of the 3D camera in the translation stage coordinate system; determine the compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial position and posture of the 3D camera in the translation stage coordinate system;
  • the translation stage is used to move the calibration object through the base.
  • the present disclosure provides a 3D camera calibration device, comprising:
  • the first determination module is used to determine the initial position and posture of the 3D camera in the coordinate system of the translation stage;
  • the second determination module is used to determine the compensation matrix for the 3D camera according to the measured position coordinates of multiple position points of the calibration object in the camera coordinate system and the initial position and posture of the 3D camera in the translation stage coordinate system.
  • the present disclosure provides a point cloud image acquisition device, comprising:
  • a processing module used for compensating the position coordinates of the surface position points of the target object in the coordinate system of the point cloud image acquisition device according to the compensation matrix to obtain the position coordinates after the compensation process, wherein the compensation matrix is determined according to the measured position coordinates of the plurality of position points of the calibration object in the coordinate system of the point cloud image acquisition device and the initial position and posture of the point cloud image acquisition device in the coordinate system of the translation stage;
  • the generation module is used to generate a point cloud image corresponding to the target object according to the position coordinates after compensation processing.
  • the present disclosure provides an electronic device, comprising: a processor, and a memory communicatively connected to the processor;
  • the memory stores computer-executable instructions;
  • the processor executes the computer-executable instructions stored in the memory to implement the 3D camera calibration method as described in the first aspect of the present disclosure or the point cloud image acquisition method as described in the second aspect of the present disclosure.
  • the present disclosure provides a computer-readable storage medium, in which computer program instructions are stored.
  • the computer program instructions are executed by a processor, the 3D camera calibration method described in the first aspect of the present disclosure or the point cloud image acquisition method described in the second aspect of the present disclosure is implemented.
  • the present disclosure provides a computer program product, including a computer program, which, when executed by a processor, implements the 3D camera calibration method described in the first aspect of the present disclosure or the point cloud image acquisition method described in the second aspect of the present disclosure.
  • with the 3D camera calibration method, point cloud image acquisition method, and camera calibration system, the measured position coordinates of multiple position points of the calibration object in the camera coordinate system are obtained; the initial pose of the 3D camera in the translation stage coordinate system is determined; and the compensation matrix for the 3D camera is determined according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system. Since the present disclosure uses a translation stage when determining the compensation matrix for the 3D camera, and the translation stage is a high-precision device, the error introduced by the translation stage itself can be ignored.
  • the compensation matrix for the 3D camera itself can be determined to improve the accuracy of the 3D camera according to the compensation matrix, that is, the position coordinates of the surface position points of the target object in the camera coordinate system can be obtained more accurately according to the compensation matrix, and then a three-dimensional point cloud image with higher precision can be obtained.
  • the position coordinates of the surface position points of the target object in the camera coordinate system can be compensated by the compensation matrix, the accuracy requirements of the initial position coordinates of the surface position points of the target object photographed by the 3D camera in the camera coordinate system can be reduced, thereby reducing the accuracy requirements of the 3D camera.
  • FIG1 is a schematic diagram of an application scenario of a 3D camera calibration method provided by an embodiment of the present disclosure
  • FIG2 is a flow chart of a 3D camera calibration method provided by an embodiment of the present disclosure.
  • FIG3 is a flow chart of a 3D camera calibration method provided by another embodiment of the present disclosure.
  • FIG4 is a schematic diagram of the principle of spatial region segmentation provided by an embodiment of the present disclosure.
  • FIG5 is a flow chart of point cloud image acquisition provided by an embodiment of the present disclosure.
  • FIG6 is a schematic diagram of the structure of a 3D camera calibration device provided by an embodiment of the present disclosure.
  • FIG7 is a schematic diagram of the structure of a point cloud image acquisition device provided by an embodiment of the present disclosure.
  • FIG8 is a schematic diagram of a camera calibration system provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of the structure of an electronic device provided by the present disclosure.
  • the collection, storage, use, processing, transmission, provision and disclosure of information such as financial data or user data involved shall comply with the provisions of relevant laws and regulations and shall not violate public order and good morals.
  • Camera calibration is the process of determining the geometric model parameters of camera imaging; these geometric model parameters are the camera parameters, which include the camera intrinsic parameters and extrinsic parameters.
  • the calibration object may be a calibration plate, three or more calibration balls, or other calibration objects that can form a plane coordinate system.
  • in applications such as machine vision, image measurement, photogrammetry, and three-dimensional reconstruction, camera calibration can be used to correct lens distortion, etc.
  • the following embodiments take the calibration plate as an example.
  • the intrinsic and extrinsic parameters of the 3D camera will directly affect the accuracy of the position coordinates of multiple position points on the surface of the object in the camera coordinate system.
  • the present disclosure provides a 3D camera calibration method, a point cloud image acquisition method and a camera calibration system.
  • by determining a compensation matrix according to the measured position coordinates of multiple position points of a calibration object in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system, the position coordinates of the surface position points of the target object in the camera coordinate system are compensated.
  • the position coordinates of the surface position points of the target object in the camera coordinate system can be obtained more accurately, and thus a three-dimensional point cloud image with higher accuracy can be obtained.
  • FIG1 is a schematic diagram of an application scenario of a 3D camera calibration method provided by an embodiment of the present disclosure.
  • the front end of the translation stage holds the calibration plate via the base
  • the translation stage can move the calibration plate via the base to change its position
  • the 3D camera is fixed at a preset position.
  • the 3D camera can be fixed on a bracket outside the translation stage, or it can be fixed on the translation stage.
  • the following takes the case where the 3D camera is fixed on a bracket outside the translation stage as an example.
  • the translation stage moves the calibration plate to multiple spatial positions via the base
  • the 3D camera obtains multiple images of the corresponding calibration plate at different spatial positions, multiple first poses of the base in the displacement stage coordinate system, and multiple second poses of the calibration plate in the camera coordinate system; the 3D camera determines the compensation matrix for the 3D camera based on these multiple images, multiple first poses, and multiple second poses.
  • in the process of the translation stage moving the calibration plate to multiple spatial positions via the base, the calibration plate can be moved along the z-axis of the camera coordinate system, moved in the plane formed by the x-axis and y-axis of the camera coordinate system, or rotated about the x-axis or the y-axis, etc.; the origin of the camera coordinate system is the optical center of the 3D camera, the x-axis and y-axis of the camera coordinate system are parallel to the x-axis and y-axis of the calibration plate image respectively, and the z-axis of the camera coordinate system is the optical axis of the 3D camera.
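The stage motions just described (steps along the camera z-axis, motion in the x-y plane, and small rotations about the x- or y-axis) can be sketched as a set of homogeneous transforms. The step sizes, angles, and helper names below are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def rot_x(a):
    """Homogeneous rotation about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[1:3, 1:3] = [[c, -s], [s, c]]
    return T

def rot_y(a):
    """Homogeneous rotation about the y-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[0, 0], T[0, 2], T[2, 0], T[2, 2] = c, s, -s, c
    return T

def trans(x, y, z):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Illustrative set of stage poses: steps along z, steps in the x-y plane,
# and small tilts about x and y (all magnitudes are arbitrary examples).
stage_poses = (
    [trans(0, 0, dz) for dz in (0.0, 0.05, 0.10)] +
    [trans(dx, dy, 0) for dx, dy in ((0.05, 0.0), (0.0, 0.05))] +
    [rot_x(np.deg2rad(5)), rot_y(np.deg2rad(5))]
)
print(len(stage_poses))  # 7 candidate poses
```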
  • embodiments of the present disclosure can be applied in scenarios where point cloud images are acquired.
  • FIG. 1 is merely a schematic diagram of an application scenario provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure does not limit the devices included in FIG. 1 , nor does it limit the positional relationship between the devices in FIG. 1 .
  • FIG2 is a flow chart of a 3D camera calibration method provided by an embodiment of the present disclosure. As shown in FIG2 , the method of the embodiment of the present disclosure includes:
  • the calibration plate image can be directly captured by a 3D camera, and then the measurement position coordinates of multiple position points of the calibration plate in the camera coordinate system can be obtained based on the calibration plate image.
  • S202 Determine the initial position and posture of the 3D camera in the translation stage coordinate system.
  • the position and posture of the base in the translation stage coordinate system and the position and posture of the position points on the calibration plate in the camera coordinate system can be collected multiple times, thereby determining the initial position and posture of the 3D camera in the translation stage coordinate system.
  • this is described in the subsequent embodiments and will not be detailed here.
  • the compensation matrix for the 3D camera can be determined according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial position and posture of the 3D camera in the translation stage coordinate system. It can be understood that the compensation matrix contains compensation for the internal parameters of the 3D camera, and the internal parameters of the 3D camera will affect the accuracy of the position coordinates of the position points collected by the 3D camera. The error caused by the deviation of the internal parameters of the 3D camera can be compensated by the compensation matrix.
  • the multiple position points of the calibration plate can be divided into multiple groups of position points according to different spatial regions in the camera coordinate system, and the compensation matrix for the 3D camera is determined according to the measured position coordinates of each group of position points in the camera coordinate system and the initial position and posture of the 3D camera in the translation stage coordinate system.
  • the compensation matrix for the 3D camera is determined according to the measured position coordinates of each group of position points in the camera coordinate system and the initial position and posture of the 3D camera in the translation stage coordinate system.
  • the position coordinates of the position points in the camera coordinate system can be compensated according to the compensation matrix to obtain more accurate position coordinates, thereby obtaining a point cloud image with higher accuracy.
  • the 3D camera calibration method of the embodiment of the present disclosure obtains the measured position coordinates of multiple position points of the calibration plate in the camera coordinate system; determines the initial pose of the 3D camera in the translation stage coordinate system; and determines the compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system. Since the embodiment of the present disclosure uses a translation stage when determining the compensation matrix for the 3D camera, and the translation stage is a high-precision device, the error introduced by the translation stage itself can be ignored.
  • the compensation matrix for the 3D camera itself can be determined to improve the accuracy of the 3D camera according to the compensation matrix, that is, the position coordinates of the surface position points of the target object in the camera coordinate system can be obtained more accurately according to the compensation matrix, and then a three-dimensional point cloud image with higher precision can be obtained.
  • the position coordinates of the surface position points of the target object in the camera coordinate system can be compensated by the compensation matrix, the accuracy requirements of the initial position coordinates of the surface position points of the target object photographed by the 3D camera in the camera coordinate system can be reduced, thereby reducing the accuracy requirements of the 3D camera.
  • FIG3 is a flow chart of a 3D camera calibration method provided by another embodiment of the present disclosure. Based on the above embodiment, the present disclosure further describes the 3D camera calibration method. As shown in FIG3, the method of the present disclosure embodiment may include:
  • step S201 in FIG. 2 may further include the following two steps S301 and S302:
  • the translation stage moves the calibration plate via the base to adjust its position, so that the calibration plate can be moved to multiple spatial positions.
  • the calibration plate can be controlled to change in multiple positions so that the 3D camera can obtain the measured position coordinates of multiple position points at different spatial positions in the camera coordinate system.
  • in response to the calibration plate moving to multiple spatial positions, the 3D camera can directly capture multiple calibration plate images.
  • S302 Obtain measured position coordinates of multiple position points of the calibration plate in a camera coordinate system according to the multiple calibration plate images.
  • the measured position coordinates of a plurality of position points of the calibration plate in the camera coordinate system may be obtained according to the plurality of calibration plate images.
  • step S202 in FIG. 2 may further include the following two steps S303 and S304:
  • the base is, for example, an axis. It can be understood that when the translation stage moves the calibration plate to multiple spatial positions via the base, each time the calibration plate moves to a spatial position, the first pose of the base in the translation stage coordinate system and the second pose of the calibration plate in the camera coordinate system can be obtained; the first pose of the base in the translation stage coordinate system is a known quantity that can be measured by the sensor on the translation stage, and the second pose of the calibration plate in the camera coordinate system can be acquired by the 3D camera.
  • any set of the first poses and the second poses can be selected to obtain the initial pose of the 3D camera in the translation stage coordinate system according to the following formula 1:
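Formula 1 itself is missing from this extract. As a hedged sketch only: if the pose of the calibration plate relative to the base were known, the initial pose of the 3D camera in the translation stage coordinate system could be obtained by chaining one pair of first and second poses. The helper names and the known-relative-pose assumption are illustrative, not the disclosure's formulation.

```python
import numpy as np

def inv_pose(T):
    """Invert a 4x4 rigid transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def initial_camera_pose(T_stage_base, T_base_plate, T_cam_plate):
    """Chain stage->base->plate, then undo camera->plate, giving the
    camera pose expressed in the translation stage coordinate system.
    (T_base_plate being known is an assumption for this sketch.)"""
    return T_stage_base @ T_base_plate @ inv_pose(T_cam_plate)

# Toy example with identity rotations and pure translations (meters).
T_stage_base = np.eye(4); T_stage_base[:3, 3] = [0.0, 0.0, 0.1]
T_base_plate = np.eye(4); T_base_plate[:3, 3] = [0.0, 0.0, 0.02]
T_cam_plate  = np.eye(4); T_cam_plate[:3, 3]  = [0.0, 0.0, 0.5]

T_stage_cam = initial_camera_pose(T_stage_base, T_base_plate, T_cam_plate)
print(T_stage_cam[:3, 3])  # camera origin in stage coordinates
```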
  • step S203 in FIG. 2 may further include the following two steps S305 and S306:
  • multiple position points can be divided according to different segmentation regions in the space in the camera coordinate system, specifically, they can be divided into multiple layers according to height, and then each layer is divided into multiple partitions according to the set area to obtain multiple groups of position points.
  • multiple position points are divided into multiple groups of position points according to different spatial regions, including: in the camera coordinate system, the space is divided into multiple layers according to different heights, and each layer is divided into multiple partitions; according to the position coordinates of the multiple position points in the camera coordinate system, the position points in the same partition in the same layer are divided into a group of position points.
  • Figure 4 is a schematic diagram of the principle of spatial area segmentation provided by an embodiment of the present disclosure.
  • multiple position points are divided according to different segmentation areas in the spatial area (i.e., the area represented by 401 in Figure 4). Specifically, they can be divided into multiple layers according to height, and then each layer is divided into multiple partitions according to the set area to obtain multiple groups of position points.
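The layer-and-partition grouping described above can be sketched as follows; the layer height and the x-y cell size are illustrative assumptions, not values from the disclosure.

```python
import numpy as np
from collections import defaultdict

def group_points(points, layer_height=0.1, cell_size=0.05):
    """Group Nx3 camera-frame points: the layer index comes from the z
    coordinate (height), the partition index from an x-y grid.
    Both bin sizes are illustrative."""
    groups = defaultdict(list)
    for p in np.asarray(points, dtype=float):
        layer = int(np.floor(p[2] / layer_height))
        cell = (int(np.floor(p[0] / cell_size)),
                int(np.floor(p[1] / cell_size)))
        groups[(layer, cell)].append(p)
    return {k: np.array(v) for k, v in groups.items()}

pts = [(0.01, 0.01, 0.05), (0.02, 0.02, 0.06), (0.01, 0.01, 0.15)]
groups = group_points(pts)
print(sorted(groups))  # two groups: layers 0 and 1, same x-y cell
```

Each group then gets its own compensation fit, so different depths and image regions receive different corrections.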
  • the position and posture of the calibration plate relative to the base can be determined according to the initial position and posture of the 3D camera in the coordinate system of the translation stage, the measured position coordinates of the position point in the camera coordinate system, and the position and posture of the base in the coordinate system of the translation stage; further, according to the position coordinates of each position point in the calibration plate in the coordinate system of the calibration plate, the position and posture of the calibration plate relative to the base, and the position and posture of the base in the coordinate system of the translation stage, the theoretical position coordinates of multiple position points on the calibration plate in the camera coordinate system can be determined.
  • the compensation matrix for the 3D camera can be determined according to the measured position coordinates of each group of position points in the camera coordinate system and the initial posture of the 3D camera in the translation stage coordinate system.
  • determining the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose includes: determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose of the 3D camera in the translation stage coordinate system; fitting the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine an initial compensation matrix; obtaining multiple adjusted poses of the 3D camera in the translation stage coordinate system and determining the corresponding theoretical position coordinates of the multiple groups of position points in the camera coordinate system after each adjustment; and adjusting the initial compensation matrix according to the adjusted poses in the translation stage coordinate system, the measured position coordinates of the multiple groups of position points in the camera coordinate system, and the current theoretical position coordinates after adjustment, until the Euclidean distance of the error between the measured position coordinates of each group of position points in the camera coordinate system and the current theoretical position coordinates is less than a preset threshold and/or the adjustment reaches a preset number of times.
  • the position points in the calibration plate may be imported into the camera coordinate system by seeking an intermediate medium between each group of position points and the camera coordinate system, thereby determining the initial theoretical position coordinates of each group of position points in the camera coordinate system.
  • Determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial position and posture of the 3D camera in the translation stage coordinate system can further include: obtaining the position and posture of the base in the translation stage coordinate system and the position and posture of the calibration plate in the camera coordinate system corresponding to each group of position points; determining the position and posture of the calibration plate relative to the base according to the initial position and posture of the 3D camera in the translation stage coordinate system, the position and posture of the base in the translation stage coordinate system and the position and posture of the calibration plate in the camera coordinate system; determining the position coordinates of each group of position points in the translation stage coordinate system according to the position coordinates of each group of position points in the calibration plate coordinate system, the position and posture of the calibration plate relative to the base and the position and posture of the base in the translation stage coordinate system; determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the position coordinates and initial position of each group of position points in the translation stage coordinate system.
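The pose chain described above (plate-frame coordinates, then plate-relative-to-base, then base-in-stage, then back into the camera frame through the camera's initial pose) can be sketched as follows; the transform names are hypothetical.

```python
import numpy as np

def theoretical_in_camera(p_plate, T_base_plate, T_stage_base, T_stage_cam):
    """Map a plate-frame point into the camera frame:
    plate -> base -> stage, then stage -> camera via the inverse of
    the camera's (initial) pose in the stage coordinate system."""
    p = np.append(np.asarray(p_plate, dtype=float), 1.0)  # homogeneous
    p_stage = T_stage_base @ T_base_plate @ p
    return (np.linalg.inv(T_stage_cam) @ p_stage)[:3]

# Toy transforms with identity rotations (translations in meters).
T_base_plate = np.eye(4); T_base_plate[:3, 3] = [0.0, 0.0, 0.02]
T_stage_base = np.eye(4); T_stage_base[:3, 3] = [0.0, 0.0, 0.10]
T_stage_cam  = np.eye(4); T_stage_cam[:3, 3]  = [0.0, 0.0, 0.60]

p_cam = theoretical_in_camera([0.01, 0.0, 0.0],
                              T_base_plate, T_stage_base, T_stage_cam)
print(p_cam)
```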
  • when fitting the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine the initial compensation matrix for the 3D camera, multiple implementation schemes are possible; for example, the fitting can be performed by the least squares method.
  • the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system can be fitted according to the following formula 2 to determine the initial compensation matrix for the 3D camera:
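Formula 2 is not reproduced in this extract. As an illustrative stand-in (not the disclosure's exact formulation), a per-group compensation matrix can be fitted by least squares as a 3x4 affine map from measured homogeneous coordinates to theoretical coordinates; all names below are hypothetical.

```python
import numpy as np

def fit_compensation(measured, theoretical):
    """Least-squares fit of a 3x4 affine compensation matrix M such that
    M @ [x, y, z, 1]^T of each measured point approximates the
    corresponding theoretical point."""
    measured = np.asarray(measured, dtype=float)
    theoretical = np.asarray(theoretical, dtype=float)
    X = np.hstack([measured, np.ones((len(measured), 1))])  # N x 4
    M, *_ = np.linalg.lstsq(X, theoretical, rcond=None)     # 4 x 3
    return M.T                                              # 3 x 4

# Synthetic data: theoretical coordinates differ from measured ones by a
# small scale and bias (an exactly affine error, so the fit is exact).
rng = np.random.default_rng(0)
measured = rng.uniform(-0.5, 0.5, size=(20, 3))
theoretical = measured * 1.01 + np.array([0.001, -0.002, 0.003])

M = fit_compensation(measured, theoretical)
residual = M @ np.hstack([measured, np.ones((20, 1))]).T - theoretical.T
print(np.abs(residual).max())  # near zero for this exactly-affine example
```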
  • the pose of the 3D camera in the translation stage coordinate system is adjusted multiple times until the Euclidean distance of the error between the measured position coordinates of each group of position points in the camera coordinate system and the current theoretical position coordinates is less than a preset threshold and/or the adjustment reaches a preset number of times, at which point the compensation matrix M for the 3D camera is obtained.
  • the compensation matrix includes the compensation amount for each position area in each layer.
  • the 3D camera calibration method of the embodiment of the present disclosure obtains a plurality of calibration plate images in response to the calibration plate moving to a plurality of spatial positions; obtains the measured position coordinates of the plurality of position points of the calibration plate in the camera coordinate system according to the plurality of calibration plate images; obtains the first poses of the base in the translation stage coordinate system and the second poses of the calibration plate in the camera coordinate system while the translation stage moves the calibration plate via the base; determines the initial pose of the 3D camera in the translation stage coordinate system according to any group of the first poses and the second poses; divides the plurality of position points of the calibration plate into a plurality of groups of position points according to different spatial regions in the camera coordinate system; and determines the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system.
  • the compensation matrix for the 3D camera itself can be determined to improve the accuracy of the 3D camera according to the compensation matrix; wherein, in the camera coordinate system, the multiple position points of the calibration plate are divided according to the spatial area, so that different spatial depths and areas can have corresponding compensation matrices, which can effectively improve the accuracy of the position point coordinates in the camera coordinate system, so that the position coordinates of the surface position points of the target object in the camera coordinate system can be obtained more accurately according to the compensation matrix, and then a more accurate three-dimensional point cloud image can be obtained.
  • FIG5 is a flow chart of point cloud image acquisition provided by an embodiment of the present disclosure. As shown in FIG5, the method of the embodiment of the present disclosure includes:
  • the image of the target object can be directly captured by a 3D camera, and then the position coordinates of the surface position points of the target object in the camera coordinate system can be obtained based on the image of the target object.
  • the position coordinates may be compensated according to the following formula 3:
  • M represents the compensation matrix
  • A represents the position coordinates of the surface position point of the target object in the camera coordinate system
  • the position coordinates are compensated according to the compensation matrix to obtain the compensated position coordinates, including: determining the layer and partition in which the surface position point is located according to the position coordinates of the surface position point of the target object in the camera coordinate system and the internal parameters of the 3D camera; and compensating the position coordinates of the surface position point in the camera coordinate system according to the compensation matrix corresponding to the layer and partition in which the surface position point is located to obtain the compensated position coordinates.
  • the z-axis coordinates, x-axis coordinates, and y-axis coordinates of the position coordinates of the surface position point of the target object in the camera coordinate system can be extracted respectively, thereby determining the layer and partition where the surface position point of the target object is located.
  • determining the layer and partition where the surface position point is located based on the position coordinates of the surface position point of the target object in the camera coordinate system and the internal parameters of the 3D camera can include: determining the layer where the surface position point is located based on the z-axis coordinate of the surface position point in the camera coordinate system; determining the pixel coordinates corresponding to the surface position point based on the x-axis coordinates and y-axis coordinates of the surface position point in the camera coordinate system and the internal parameters of the 3D camera; and determining the partition where the surface position point is located based on the pixel coordinates.
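Under a pinhole model, the layer follows from the z coordinate and the partition from the projected pixel coordinates; in this sketch the intrinsics (fx, fy, cx, cy) and the bin sizes are assumed values, not parameters from the disclosure.

```python
import numpy as np

def locate(point, fx=600.0, fy=600.0, cx=320.0, cy=240.0,
           layer_height=0.1, cell_px=80):
    """Return (layer, partition) for a camera-frame point: the layer from
    its z coordinate, the partition from its projected pixel coordinates."""
    x, y, z = point
    layer = int(np.floor(z / layer_height))
    u = fx * x / z + cx          # pinhole projection with assumed intrinsics
    v = fy * y / z + cy
    partition = (int(u // cell_px), int(v // cell_px))
    return layer, partition

print(locate((0.05, -0.02, 0.5)))  # layer from z=0.5, partition from (u, v)
```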
  • the position coordinates of the surface position point in the camera coordinate system can be compensated by the above formula three according to the compensation matrix corresponding to the layer and partition where the surface position point is located, so as to obtain the compensated position coordinates.
  • the compensation of the position coordinates of the surface position points of the target object in the camera coordinate system can further include the following: when a surface position point spans multiple layers and/or partitions, the weight of each spanned layer and/or partition is determined according to the distance between the surface position point and that layer and/or partition; the position coordinates of the surface position point in the camera coordinate system are compensated according to the compensation matrix of each spanned layer and/or partition to obtain multiple compensated position coordinates for the point; these multiple compensated position coordinates are then summed with the corresponding weights, and the weighted sum is taken as the final compensated position coordinates.
  • when the surface position point belongs to different partitions within the same layer, the compensation results of the point are weighted and summed according to the weights corresponding to the distances between the point and the different partitions, so as to obtain the weighted compensated position coordinates for the point.
  • when the surface position point belongs to different layers, the compensation results of the point are weighted and summed according to the weights corresponding to the distances between the point and the different layers, so as to obtain the weighted compensated position coordinates for the point.
  • when the surface position point belongs to both different layers and different partitions, the compensation results are weighted and summed according to the distances between the point and the different layers and the distances between the point and the different partitions, so as to obtain the weighted compensated position coordinates for the point. Therefore, by weighting and summing the compensation results of surface position points located in boundary areas, the accuracy of the resulting coordinates of those boundary points can be improved.
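A hedged sketch of this weighted summation for boundary points; inverse-distance weights are one plausible choice, since the disclosure only states that the weights are determined from the distances to the spanned layers/partitions:

```python
import numpy as np

def weighted_compensation(point, regions):
    """Blend per-region compensation results for a boundary point.

    regions is a list of (distance, M) pairs: the point's distance to each
    spanned layer/partition and that region's 4x4 compensation matrix.
    Weights are taken inversely proportional to distance (an assumed
    choice), the point is compensated with each matrix, and the results
    are weighted and summed.
    """
    A = np.append(point, 1.0)
    inv_d = np.array([1.0 / (d + 1e-9) for d, _ in regions])
    weights = inv_d / inv_d.sum()                        # normalized weights
    results = []
    for _, M in regions:
        c = M @ A
        results.append(c[:3] / c[3])                     # per-region compensated point
    return np.average(results, axis=0, weights=weights)  # weighted sum

# two regions whose matrices shift the point's z by different amounts
M1, M2 = np.eye(4), np.eye(4)
M1[2, 3], M2[2, 3] = 0.02, 0.04
print(weighted_compensation(np.array([0.0, 0.0, 1.0]), [(0.01, M1), (0.01, M2)]))
```

With equal distances the two results are averaged; a point closer to one region is pulled toward that region's compensation.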
  • S503: Generate a point cloud image corresponding to the target object according to the position coordinates after compensation processing.
  • a point cloud image corresponding to the target object with higher accuracy can be generated according to the position coordinates after the compensation process.
  • the point cloud image acquisition method provided by the embodiments of the present disclosure obtains the position coordinates of the surface position points of the target object in the camera coordinate system; compensates those position coordinates according to the compensation matrix to obtain the compensated position coordinates, where the compensation matrix is determined according to the measured position coordinates of multiple position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system; and generates the point cloud image corresponding to the target object according to the compensated position coordinates.
  • since the compensation matrix is determined according to the measured position coordinates of multiple position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system, and since the translation stage is a high-precision device whose own error can be neglected, a compensation matrix specific to the 3D camera itself can be determined. The accuracy of the 3D camera can thus be improved according to this compensation matrix; that is, the position coordinates of the surface position points of the target object in the camera coordinate system can be obtained more accurately, and a more accurate three-dimensional point cloud image can then be obtained.
  • the compensation matrix in the above point cloud image acquisition method is obtained by a 3D camera calibration method as in any of the above method embodiments.
  • FIG6 is a schematic diagram of the structure of a 3D camera calibration device provided by an embodiment of the present disclosure.
  • the 3D camera calibration device 600 of this embodiment of the present disclosure includes: an acquisition module 601, a first determination module 602, and a second determination module 603. Among them:
  • the acquisition module 601 is used to acquire the measured position coordinates of multiple position points of the calibration plate in the camera coordinate system.
  • the first determination module 602 is used to determine the initial position and posture of the 3D camera in the translation stage coordinate system.
  • the second determination module 603 is used to determine a compensation matrix for the 3D camera according to the measured position coordinates of the plurality of position points in the camera coordinate system and the initial position and posture of the 3D camera in the translation stage coordinate system.
  • the second determination module 603 can be specifically used to: divide multiple position points into multiple groups of position points according to different spatial regions in the camera coordinate system; determine the compensation matrix for the 3D camera according to the measured position coordinates and initial posture of each group of position points in the camera coordinate system.
  • when used to divide the multiple position points into multiple groups of position points according to different spatial regions in the camera coordinate system, the second determination module 603 can be specifically used to: divide the space into multiple layers according to different heights in the camera coordinate system, with each layer divided into multiple partitions; and divide the position points located in the same partition of the same layer into one group of position points according to the position coordinates of the multiple position points in the camera coordinate system.
  • when used to determine the compensation matrix for the 3D camera based on the measured position coordinates of each group of position points in the camera coordinate system and the initial pose, the second determination module 603 can be specifically used to: determine the initial theoretical position coordinates of each group of position points in the camera coordinate system based on the initial pose; fit the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine an initial compensation matrix; obtain multiple adjusted poses of the 3D camera in the translation stage coordinate system, and determine the theoretical position coordinates of each group of position points after each adjustment of the 3D camera; and adjust the initial compensation matrix based on the current poses returned after the multiple adjustments of the 3D camera in the translation stage coordinate system, the measured position coordinates of each group of position points in the camera coordinate system, and the current position coordinates after the adjustments, until the Euclidean distance of the error between the measured position coordinates and the current theoretical position coordinates of each group of position points in the camera coordinate system is less than a preset threshold and/or the number of adjustments reaches a preset number, and determine the compensation matrix as the current compensation matrix.
  • when used to determine the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose, the second determination module 603 can be specifically used to: obtain the pose of the base in the translation stage coordinate system and the pose of the calibration plate in the camera coordinate system corresponding to each group of position points; determine the pose of the calibration plate relative to the base according to the initial pose of the 3D camera in the translation stage coordinate system, the pose of the base in the translation stage coordinate system, and the pose of the calibration plate in the camera coordinate system; determine the position coordinates of each group of position points in the translation stage coordinate system according to the position coordinates of each group of position points in the calibration plate coordinate system, the pose of the calibration plate relative to the base, and the pose of the base in the translation stage coordinate system; and determine the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the position coordinates of each group of position points in the translation stage coordinate system and the initial pose.
  • the acquisition module 601 may be specifically used to: acquire multiple calibration plate images in response to the calibration plate moving to multiple spatial positions; and acquire measured position coordinates of multiple position points in the camera coordinate system based on the multiple calibration plate images.
  • the first determination module 602 can be specifically used to: obtain the first poses of multiple bases in the translation stage coordinate system and the second poses of multiple calibration plates in the camera coordinate system in the process of the translation stage driving the calibration plate to move through the base; determine the initial pose of the 3D camera in the translation stage coordinate system based on any set of the first pose and the second pose.
  • the device of this embodiment can be used to execute the technical solution of the 3D camera calibration method in any of the above method embodiments. Its implementation principle and technical effects are similar and will not be repeated here.
  • FIG7 is a schematic diagram of the structure of a point cloud image acquisition device provided by an embodiment of the present disclosure.
  • the point cloud image acquisition device 700 of this embodiment of the present disclosure includes: an acquisition module 701, a processing module 702, and a generation module 703. Among them:
  • the acquisition module 701 is used to acquire the position coordinates of the surface position points of the target object in the coordinate system of the point cloud image acquisition device.
  • the processing module 702 is used to compensate the position coordinates according to the compensation matrix to obtain the compensated position coordinates.
  • the compensation matrix is determined based on the measured position coordinates of multiple position points of the calibration plate in the coordinate system of the point cloud image acquisition device and the initial posture of the 3D point cloud image acquisition device in the coordinate system of the translation stage.
  • the generating module 703 is used to generate a point cloud image corresponding to the target object according to the position coordinates after compensation processing.
  • the compensation matrix is obtained by calibrating the point cloud image acquisition device in any of the above method embodiments.
  • the processing module 702 can be specifically used to: determine the layer and partition in which the surface position point is located based on the position coordinates of the surface position point of the target object in the coordinate system of the point cloud image acquisition device and the internal parameters of the point cloud image acquisition device; and compensate the position coordinates of the surface position point in the coordinate system of the point cloud image acquisition device according to the compensation matrix corresponding to the layer and partition in which the surface position point is located to obtain the compensated position coordinates.
  • when used to determine the layer and partition of the surface position point according to the position coordinates of the surface position point of the target object in the coordinate system of the point cloud image acquisition device and the internal parameters of the point cloud image acquisition device, the processing module 702 can be specifically used to: determine the layer of the surface position point according to the z-axis coordinate of the surface position point in the coordinate system of the point cloud image acquisition device; determine the pixel coordinates corresponding to the surface position point according to the x-axis and y-axis coordinates of the surface position point in the coordinate system of the point cloud image acquisition device and the internal parameters of the point cloud image acquisition device; and determine the partition of the surface position point according to the pixel coordinates.
  • when used to compensate the position coordinates of the surface position point in the coordinate system of the point cloud image acquisition device according to the compensation matrix corresponding to the layer and partition where the surface position point is located to obtain the compensated position coordinates, the processing module 702 can be specifically used to: when the surface position point spans layers and/or partitions, determine the weight of each spanned layer and/or partition according to the distance between the surface position point and that layer and/or partition; compensate the position coordinates of the surface position point in the coordinate system of the point cloud image acquisition device according to the compensation matrix of each spanned layer and/or partition, to obtain multiple compensated position coordinates of the surface position point; and perform a weighted summation of the multiple compensated position coordinates according to the weights of the spanned layers and/or partitions, taking the weighted sum as the final compensated position coordinates.
  • the device of this embodiment can be used to execute the technical solution of the point cloud image acquisition method in any of the above method embodiments. Its implementation principle and technical effects are similar and will not be repeated here.
  • FIG8 is a schematic diagram of a camera calibration system provided in an embodiment of the present disclosure.
  • the camera calibration system 800 in the embodiment of the present disclosure includes: a 3D camera 801 and a translation stage 802 .
  • the 3D camera 801 is used to obtain the measured position coordinates of multiple position points of the calibration plate in the camera coordinate system; determine the initial posture of the 3D camera in the translation stage coordinate system; and determine the compensation matrix for the 3D camera based on the measured position coordinates of the multiple position points in the camera coordinate system and the initial posture of the 3D camera in the translation stage coordinate system.
  • the translation stage 802 is used to drive the calibration plate to move via the base.
  • the 3D camera 801 can be used to execute a solution of the 3D camera calibration method in any of the above method embodiments, and correspondingly, the structure of the device embodiment of Figure 6 can be adopted, and its implementation principle and technical effect are similar, which will not be repeated here.
  • FIG9 is a schematic diagram of the structure of an electronic device provided by the present disclosure.
  • the electronic device 900 may include: at least one processor 901 and a memory 902 .
  • the memory 902 is used to store programs.
  • the programs may include program codes
  • the program codes include computer-executable instructions.
  • the memory 902 may include a high-speed random access memory (RAM), and may also include a non-volatile memory, such as at least one disk memory.
  • the processor 901 is used to execute the computer-executable instructions stored in the memory 902 to implement the 3D camera calibration method or the point cloud image acquisition method described in the aforementioned method embodiment.
  • the processor 901 may be a central processing unit (CPU), or an application specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure.
  • the electronic device may be, for example, an electronic device with processing functions such as a 3D camera.
  • the electronic device 900 may further include a communication interface 903.
  • when the communication interface 903, the memory 902, and the processor 901 are implemented independently, they may be interconnected through a bus and communicate with each other.
  • the bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, etc.
  • the bus may be divided into an address bus, a data bus, a control bus, etc., but it does not mean that there is only one bus or one type of bus.
  • when the communication interface 903, the memory 902, and the processor 901 are integrated on a chip, they can communicate through an internal interface.
  • the present disclosure also provides a computer-readable storage medium, in which computer-executable instructions are stored.
  • when a processor executes the computer-executable instructions, the above-mentioned 3D camera calibration method or point cloud image acquisition method is implemented.
  • the present disclosure also provides a computer program product, including a computer program, which, when executed by a processor, implements the above-mentioned 3D camera calibration method solution or point cloud image acquisition method solution.
  • the computer-readable storage medium mentioned above may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • the readable storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • An exemplary readable storage medium is coupled to a processor so that the processor can read information from the readable storage medium and write information to the readable storage medium.
  • the readable storage medium can also be a component of the processor.
  • the processor and the readable storage medium can be located in an application specific integrated circuit (ASIC).
  • the processor and the readable storage medium can also exist as discrete components in a 3D camera calibration device or a point cloud image acquisition device.
  • the aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the above-mentioned method embodiments are performed. The aforementioned storage medium includes media that can store program code, such as ROM, RAM, magnetic disk, or optical disk.


Abstract

The present disclosure provides a 3D camera calibration method, a point cloud image acquisition method, and a camera calibration system, relating to the technical field of industrial cameras. The 3D camera calibration method includes: acquiring the measured position coordinates of multiple position points of a calibration object in the camera coordinate system; determining the initial pose of the 3D camera in the translation stage coordinate system; and determining a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system. The present disclosure can determine a compensation matrix for the camera itself, so that the position coordinates of the surface position points of a target object in the camera coordinate system can be obtained more accurately according to the compensation matrix, and a higher-precision three-dimensional point cloud image can then be obtained.

Description

3D camera calibration method, point cloud image acquisition method, and camera calibration system
This disclosure claims priority to the Chinese patent application filed with the China Patent Office on November 24, 2022, with application number 202211483388.6 and entitled "3D camera calibration method, point cloud image acquisition method, and camera calibration system", the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of industrial cameras, and in particular to a 3D camera calibration method, a point cloud image acquisition method, and a camera calibration system.
Background
In recent years, machine vision technology has been applied increasingly widely. For example, machine vision technology can be used to identify an object to be detected. Before the object to be detected is identified, a three-dimensional (3D) point cloud image of the object must first be acquired, and the object is then identified based on that 3D point cloud image.
In the related art, the 3D point cloud image of the object to be detected is acquired as follows: a 3D camera with fixed parameters collects the position coordinates of multiple position points on the surface of the object to be detected in the camera coordinate system, and the three-dimensional point cloud image of the object is generated from those position coordinates.
However, the position coordinates of the multiple surface position points of the object to be detected collected in this way in the camera coordinate system are not accurate enough, which results in a low-precision three-dimensional point cloud image.
Technical Solution
The present disclosure provides a 3D camera calibration method, a point cloud image acquisition method, and a camera calibration system, to solve the problem in the related art that the position coordinates of multiple surface position points of the object to be detected collected in the camera coordinate system are not accurate enough, resulting in a low-precision three-dimensional point cloud image.
In a first aspect, the present disclosure provides a 3D camera calibration method, including:
determining the initial pose of the 3D camera in the translation stage coordinate system;
determining a compensation matrix for the 3D camera according to the measured position coordinates of multiple position points of a calibration object in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system.
In a second aspect, the present disclosure provides a point cloud image acquisition method, including:
compensating the position coordinates of the surface position points of a target object in the camera coordinate system according to a compensation matrix to obtain compensated position coordinates, where the compensation matrix is determined according to the measured position coordinates of multiple position points of a calibration object in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system;
generating a point cloud image corresponding to the target object according to the compensated position coordinates.
In a third aspect, the present disclosure provides a camera calibration system, including a 3D camera and a translation stage;
the 3D camera is used to acquire the measured position coordinates of multiple position points of a calibration object in the camera coordinate system; determine the initial pose of the 3D camera in the translation stage coordinate system; and determine a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system;
the translation stage is used to drive the calibration object to move via the base.
In a fourth aspect, the present disclosure provides a 3D camera calibration device, including:
a first determination module, used to determine the initial pose of the 3D camera in the translation stage coordinate system;
a second determination module, used to determine a compensation matrix for the 3D camera according to the measured position coordinates of multiple position points of a calibration object in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system.
In a fifth aspect, the present disclosure provides a point cloud image acquisition device, including:
a processing module, used to compensate the position coordinates of the surface position points of a target object in the coordinate system of the point cloud image acquisition device according to a compensation matrix to obtain compensated position coordinates, where the compensation matrix is determined according to the measured position coordinates of multiple position points of a calibration object in the coordinate system of the point cloud image acquisition device and the initial pose of the point cloud image acquisition device in the translation stage coordinate system;
a generation module, used to generate a point cloud image corresponding to the target object according to the compensated position coordinates.
In a sixth aspect, the present disclosure provides an electronic device, including: a processor, and a memory communicatively connected to the processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to implement the 3D camera calibration method of the first aspect of the present disclosure or the point cloud image acquisition method of the second aspect of the present disclosure.
In a seventh aspect, the present disclosure provides a computer-readable storage medium in which computer program instructions are stored; when the computer program instructions are executed by a processor, the 3D camera calibration method of the first aspect of the present disclosure or the point cloud image acquisition method of the second aspect of the present disclosure is implemented.
In an eighth aspect, the present disclosure provides a computer program product, including a computer program, which, when executed by a processor, implements the 3D camera calibration method of the first aspect of the present disclosure or the point cloud image acquisition method of the second aspect of the present disclosure.
The 3D camera calibration method, point cloud image acquisition method, and camera calibration system provided by the present disclosure acquire the measured position coordinates of multiple position points of a calibration object in the camera coordinate system; determine the initial pose of the 3D camera in the translation stage coordinate system; and determine a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system. Since the present disclosure uses a translation stage when determining the compensation matrix for the 3D camera, and the translation stage is a high-precision device whose own error can be neglected, a compensation matrix specific to the 3D camera itself can be determined, and the accuracy of the 3D camera can be improved according to the compensation matrix; that is, the position coordinates of the surface position points of a target object in the camera coordinate system can be obtained more accurately according to the compensation matrix, and a higher-precision three-dimensional point cloud image can then be obtained. In addition, since the position coordinates of the surface position points of the target object in the camera coordinate system can be compensated by the compensation matrix, the requirement on the accuracy of the initial position coordinates of the surface position points captured by the 3D camera in the camera coordinate system can be relaxed, thereby reducing the precision requirement on the 3D camera.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario of a 3D camera calibration method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of a 3D camera calibration method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of a 3D camera calibration method provided by another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the principle of spatial region partitioning provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of point cloud image acquisition provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a 3D camera calibration device provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a point cloud image acquisition device provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a camera calibration system provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an electronic device provided by the present disclosure.
Embodiments of the Invention
To make the purposes, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are some, but not all, of the embodiments of the present disclosure. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of information such as financial data or user data involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
First, some technical terms involved in the present disclosure are explained:
Camera parameters: in image measurement processes and machine vision applications, a geometric model of camera imaging is established to determine the relationship between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image; the parameters of this geometric model are the camera parameters, which include the intrinsic and extrinsic parameters of the camera;
Camera calibration: the process of determining the parameters of the geometric model of camera imaging (the camera parameters);
Calibration object: may be a calibration plate, three or more calibration spheres, or another calibration object, as long as a planar coordinate system can be formed; in applications such as machine vision, image measurement, photogrammetry, and three-dimensional reconstruction, it can be used to correct lens distortion, etc. A calibration plate is used as an example in the following embodiments.
In some application scenarios, when a 3D camera collects the position coordinates of multiple position points on an object surface in the camera coordinate system, the intrinsic and extrinsic parameters of the 3D camera directly affect the accuracy of those position coordinates.
In the related art, the multiple position points of a 3D point cloud image are obtained with the same camera intrinsic parameters. However, after the 3D camera is calibrated, the intrinsic parameters do not apply equally well to position points on the object surface at different depths from the 3D camera; that is, the intrinsic parameters do not apply to all position points in the entire space. If the 3D point cloud image is obtained with the same intrinsic parameters throughout, its accuracy inside a preset interval and outside that interval will differ; that is, the precision of the resulting 3D point cloud image is low. Therefore, how to determine accurate position coordinates of multiple position points on the object surface in the camera coordinate system is the technical problem to be solved by the present disclosure.
Based on the above problems, the present disclosure provides a 3D camera calibration method, a point cloud image acquisition method, and a camera calibration system. By determining a compensation matrix according to the measured position coordinates of multiple position points of a calibration object in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system, and compensating the position coordinates of the surface position points of a target object in the camera coordinate system, the position coordinates of the surface position points of the target object in the camera coordinate system can be obtained more accurately, and a higher-precision three-dimensional point cloud image can then be obtained.
First, an application scenario of the solution provided by the present disclosure is described by way of example.
FIG. 1 is a schematic diagram of an application scenario of a 3D camera calibration method provided by an embodiment of the present disclosure. As shown in FIG. 1, in this application scenario, the calibration plate is fixed to the front end of the translation stage through a base, the translation stage can drive the calibration plate via the base to change its position, and the 3D camera is fixed at a preset position. There are multiple options for the placement of the 3D camera in practice; for example, the 3D camera may be fixed on a bracket outside the translation stage or on the translation stage itself. In the embodiments of the present disclosure, the 3D camera fixed on a bracket outside the translation stage is taken as an example. Specifically, the translation stage drives the calibration plate via the base to move to multiple spatial positions; the 3D camera acquires multiple images of the calibration plate at the different spatial positions, multiple first poses of the base in the translation stage coordinate system, and multiple second poses of the calibration plate in the camera coordinate system; the 3D camera then determines the compensation matrix for the 3D camera according to these images, first poses, and second poses.
It should be noted that, while the translation stage drives the calibration plate via the base to multiple spatial positions, the calibration plate may be moved along the z-axis of the camera coordinate system, moved within the plane formed by the x-axis and y-axis of the camera coordinate system, or rotated about the x-axis or y-axis, etc. The origin of the camera coordinate system is the optical center of the 3D camera, the x-axis and y-axis of the camera coordinate system are respectively parallel to the x-axis and y-axis of the calibration plate image, and the z-axis of the camera coordinate system is the optical axis of the 3D camera.
In addition, the embodiments of the present disclosure can be applied in point cloud image acquisition scenarios.
It should be noted that FIG. 1 is only a schematic diagram of an application scenario provided by an embodiment of the present disclosure; the embodiments of the present disclosure do not limit the devices included in FIG. 1 or the positional relationships between them.
The technical solutions of the present disclosure are described in detail below through specific embodiments. It should be noted that the following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
FIG. 2 is a flowchart of a 3D camera calibration method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method of this embodiment of the present disclosure includes:
S201: Acquire the measured position coordinates of multiple position points of the calibration plate in the camera coordinate system.
For example, referring to FIG. 1, when the translation stage drives the calibration plate via the base to a certain spatial position, an image of the calibration plate can be captured directly by the 3D camera, and the measured position coordinates of multiple position points of the calibration plate in the camera coordinate system can then be acquired from that image.
S202: Determine the initial pose of the 3D camera in the translation stage coordinate system.
In this step, for example, referring to FIG. 1, the pose of the base in the translation stage coordinate system and the pose of the position points on the calibration plate in the camera coordinate system can be collected multiple times, and the initial pose of the 3D camera in the translation stage coordinate system can then be determined. For the details of how this initial pose is determined, reference can be made to the subsequent embodiments, which are not repeated here.
S203: Determine a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system.
In this step, after the measured position coordinates of the multiple position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system are obtained, the compensation matrix for the 3D camera can be determined from them. It can be understood that the compensation matrix includes compensation for the intrinsic parameters of the 3D camera; the intrinsic parameters affect the accuracy of the position coordinates of the position points collected by the 3D camera, and the error caused by intrinsic parameter deviation can be compensated by the compensation matrix. For example, the multiple position points of the calibration plate can be divided into multiple groups of position points according to different spatial regions in the camera coordinate system, and the compensation matrix for the 3D camera can be determined according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system. By dividing the position points according to spatial regions in the camera coordinate system, different spatial depths and regions each have a corresponding compensation matrix, which can improve the accuracy of the coordinates of position points in the camera coordinate system. For the details of how the compensation matrix is determined, reference can be made to the subsequent embodiments, which are not repeated here.
After the compensation matrix for the 3D camera is determined, the position coordinates of position points in the camera coordinate system can be compensated according to the compensation matrix to obtain more accurate position coordinates, and a higher-precision point cloud image can then be obtained.
The 3D camera calibration method provided by the embodiments of the present disclosure acquires the measured position coordinates of multiple position points of the calibration plate in the camera coordinate system; determines the initial pose of the 3D camera in the translation stage coordinate system; and determines a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system. Since the embodiments of the present disclosure use a translation stage when determining the compensation matrix for the 3D camera, and the translation stage is a high-precision device whose own error can be neglected, a compensation matrix specific to the 3D camera itself can be determined, and the accuracy of the 3D camera can be improved according to the compensation matrix; that is, the position coordinates of the surface position points of a target object in the camera coordinate system can be obtained more accurately according to the compensation matrix, and a higher-precision three-dimensional point cloud image can then be obtained. In addition, since the position coordinates of the surface position points of the target object in the camera coordinate system can be compensated by the compensation matrix, the requirement on the accuracy of the initial position coordinates of the surface position points captured by the 3D camera in the camera coordinate system can be relaxed, thereby reducing the precision requirement on the 3D camera.
FIG. 3 is a flowchart of a 3D camera calibration method provided by another embodiment of the present disclosure. On the basis of the above embodiment, this embodiment of the present disclosure further describes the 3D camera calibration method. As shown in FIG. 3, the method of this embodiment of the present disclosure may include:
In this embodiment of the present disclosure, step S201 in FIG. 2 may further include the following two steps S301 and S302:
S301: Acquire multiple calibration plate images in response to the calibration plate moving to multiple spatial positions.
For example, the translation stage drives the calibration plate via the base to adjust the position of the calibration plate so that it can move to multiple spatial positions. It can be understood that, since the applicability of the intrinsic parameters of the 3D camera differs across the whole spatial region, when adjusting the position of the calibration plate, the calibration plate can be moved through multiple positions so that the 3D camera can acquire the measured position coordinates of multiple position points at different spatial positions in the camera coordinate system. In this step, in response to the calibration plate moving to multiple spatial positions, the 3D camera can directly capture multiple calibration plate images.
S302: Acquire the measured position coordinates of the multiple position points of the calibration plate in the camera coordinate system according to the multiple calibration plate images.
In this step, after the multiple calibration plate images are obtained, the measured position coordinates of the multiple position points of the calibration plate in the camera coordinate system can be acquired from them.
In this embodiment of the present disclosure, step S202 in FIG. 2 may further include the following two steps S303 and S304:
S303: During the process of the translation stage driving the calibration plate to move via the base, acquire multiple first poses of the base in the translation stage coordinate system and multiple second poses of the calibration plate in the camera coordinate system.
In this step, the base is, for example, a shaft. It can be understood that when the translation stage drives the calibration plate via the base to multiple spatial positions, each time the calibration plate moves to a spatial position, the first pose of the base in the translation stage coordinate system and the second pose of the calibration plate in the camera coordinate system can be acquired. The first pose of the base in the translation stage coordinate system is a known quantity, which can be measured by a sensor on the translation stage; the second pose of the calibration plate in the camera coordinate system can be collected by the 3D camera.
S304: Determine the initial pose of the 3D camera in the translation stage coordinate system according to any set of the first pose and the second pose.
For example, after the multiple first poses of the base in the translation stage coordinate system and the multiple second poses of the calibration plate in the camera coordinate system are obtained, any set of the first pose and the second pose can be selected, and the initial pose of the 3D camera in the translation stage coordinate system can be obtained according to the following Formula 1:
T_SC = T_SB · T_BP · (T_CP)^(-1)    (Formula 1)
where T_SC represents the initial pose of the 3D camera in the translation stage coordinate system; T_SB represents the pose of the base in the translation stage coordinate system; T_BP represents the pose of the calibration plate relative to the base (when acquiring the initial pose of the 3D camera in the translation stage coordinate system, T_BP is treated as a known quantity and its initial value is used); and T_CP represents the pose of the calibration plate relative to the camera coordinate system.
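The pose relationship just described, where the camera pose in the stage frame is composed from the base pose in the stage frame, the plate pose relative to the base, and the inverse of the observed plate pose in the camera frame, can be sketched with 4x4 homogeneous transforms; the function and matrix names below are illustrative assumptions:

```python
import numpy as np

def camera_pose_in_stage(T_stage_base, T_base_plate, T_cam_plate):
    """Compose the initial camera pose in the translation stage frame.

    T_stage_base: base pose in the stage frame; T_base_plate: calibration
    plate pose relative to the base (initial value, treated as known);
    T_cam_plate: plate pose observed in the camera frame. All are 4x4
    homogeneous transforms. Sketch under assumed naming conventions.
    """
    return T_stage_base @ T_base_plate @ np.linalg.inv(T_cam_plate)

def translation(x, y, z):
    """Build a pure-translation 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# with pure translations the composition reduces to adding/subtracting offsets
T = camera_pose_in_stage(translation(1, 0, 0), translation(0, 1, 0), translation(0, 0, 2))
print(T[:3, 3])  # → [ 1.  1. -2.]
```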
In this embodiment of the present disclosure, step S203 in FIG. 2 may further include the following two steps S305 and S306:
S305: Divide the multiple position points of the calibration plate into multiple groups of position points according to different spatial regions in the camera coordinate system.
It can be understood that dividing the position points according to spatial regions in the camera coordinate system allows different spatial depths and regions to each have a corresponding compensation matrix, which can improve the accuracy of the coordinates of position points in the camera coordinate system. For example, the multiple position points can be divided according to different partitioned regions of the space in the camera coordinate system; specifically, the space can be divided into multiple layers by height, and each layer can then be divided into multiple partitions according to set regions, yielding multiple groups of position points.
Further, optionally, dividing the multiple position points into multiple groups of position points according to different spatial regions in the camera coordinate system includes: dividing the space into multiple layers according to different heights in the camera coordinate system, with each layer divided into multiple partitions; and dividing the position points located in the same partition of the same layer into one group of position points according to the position coordinates of the multiple position points in the camera coordinate system.
For example, FIG. 4 is a schematic diagram of the principle of spatial region partitioning provided by an embodiment of the present disclosure. As shown in FIG. 4, the multiple position points are divided according to different partitioned regions of the space in the camera coordinate system (i.e., the region indicated by 401 in FIG. 4); specifically, the space can be divided into multiple layers by height, and each layer can then be divided into multiple partitions according to set regions, yielding multiple groups of position points.
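As a rough sketch of this layering-and-partitioning scheme (the uniform grid, the bounds, and the function names below are illustrative assumptions; the disclosure only requires that the space be cut into height layers and each layer into partitions):

```python
import numpy as np
from collections import defaultdict

def group_points(points, z_bounds, xy_bounds, cells):
    """Group camera-frame points by (layer, partition).

    Layers are height (z) slices delimited by z_bounds; within each layer
    the x-y extent xy_bounds = (xmin, xmax, ymin, ymax) is cut into a
    cells x cells grid of partitions. Returns {(layer, partition): [points]}.
    The uniform grid is an assumed choice.
    """
    xmin, xmax, ymin, ymax = xy_bounds
    groups = defaultdict(list)
    for p in points:
        x, y, z = p
        layer = int(np.searchsorted(z_bounds, z))                 # height slice
        col = min(int((x - xmin) / (xmax - xmin) * cells), cells - 1)
        row = min(int((y - ymin) / (ymax - ymin) * cells), cells - 1)
        groups[(layer, row * cells + col)].append(p)              # partition cell
    return groups

pts = [(-0.4, -0.4, 0.6), (0.4, 0.4, 0.6), (0.0, 0.0, 1.8)]
g = group_points(pts, z_bounds=[1.0, 2.0], xy_bounds=(-0.5, 0.5, -0.5, 0.5), cells=2)
print(sorted(g.keys()))  # → [(0, 0), (0, 3), (1, 3)]
```

Each resulting group would then be fitted with its own compensation matrix.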
Optionally, the pose of the calibration plate relative to the base can be determined according to the initial pose of the 3D camera in the translation stage coordinate system, the measured position coordinates of the position points in the camera coordinate system, and the pose of the base in the translation stage coordinate system; further, according to the position coordinates of each position point of the calibration plate in the calibration plate coordinate system, the pose of the calibration plate relative to the base, and the pose of the base in the translation stage coordinate system, the theoretical position coordinates B of the multiple position points of the calibration plate in the camera coordinate system can be determined, where B represents the matrix formed by the theoretical position coordinates of the multiple position points of the calibration plate in the camera coordinate system; that is, the theoretical coordinates corresponding to the n position points can be expressed as B = [b_1, b_2, …, b_n].
S306: Determine the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system.
In this step, after the multiple position points of the calibration plate are divided into multiple groups of position points, the compensation matrix for the 3D camera can be determined according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system.
Further, optionally, determining the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose includes: determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose of the 3D camera in the translation stage coordinate system; fitting the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine an initial compensation matrix; acquiring multiple adjusted poses of the 3D camera in the translation stage coordinate system, and determining the theoretical position coordinates of each group of position points after each adjustment of the 3D camera; and adjusting the initial compensation matrix according to the current poses returned after the multiple adjustments of the 3D camera in the translation stage coordinate system, the measured position coordinates of each group of position points in the camera coordinate system, and the current position coordinates after the adjustments of the 3D camera, until the Euclidean distance of the error between the measured position coordinates and the current theoretical position coordinates of each group of position points in the camera coordinate system is less than a preset threshold and/or the number of adjustments reaches a preset number, and determining the compensation matrix as the current compensation matrix.
For example, based on the above embodiments, there are multiple implementations for determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose of the 3D camera in the translation stage coordinate system. For example, an intermediate medium between each group of position points and the camera coordinate system can be sought so as to transfer the position points of the calibration plate into the camera coordinate system, and the initial theoretical position coordinates of each group of position points in the camera coordinate system are thereby determined. Determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose of the 3D camera in the translation stage coordinate system may further include: acquiring the pose of the base in the translation stage coordinate system and the pose of the calibration plate in the camera coordinate system corresponding to each group of position points; determining the pose of the calibration plate relative to the base according to the initial pose of the 3D camera in the translation stage coordinate system, the pose of the base in the translation stage coordinate system, and the pose of the calibration plate in the camera coordinate system; determining the position coordinates of each group of position points in the translation stage coordinate system according to the position coordinates of each group of position points in the calibration plate coordinate system, the pose of the calibration plate relative to the base, and the pose of the base in the translation stage coordinate system; and determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the position coordinates of each group of position points in the translation stage coordinate system and the initial pose.
In this embodiment, after the initial theoretical position coordinates of each group of position points in the camera coordinate system are determined, there are multiple implementations for fitting the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine the initial compensation matrix for the 3D camera; for example, the fitting can be performed by the least squares method. Specifically, the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system can be fitted according to the following Formula 2 to determine the initial compensation matrix for the 3D camera:
E = B − M · A    (Formula 2)
where E represents the error of the position coordinates of the camera in the translation stage coordinate system; B represents the theoretical position coordinates; A represents the measured position coordinates, which can be obtained directly from images captured by the 3D camera; and M represents the compensation matrix.
In specific implementations, according to the measured position coordinates and theoretical position coordinates corresponding to the multiple groups of position points and the above Formula 2, the pose of the 3D camera in the translation stage coordinate system is adjusted multiple times until the Euclidean distance of the error between the measured position coordinates and the current theoretical position coordinates of each group of position points in the camera coordinate system is less than a preset threshold and/or the number of adjustments reaches a preset number, and the compensation matrix M for the 3D camera can then be obtained. The compensation matrix contains the compensation amounts for the positions corresponding to each position region in each layer.
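As one hedged illustration of the least-squares fit underlying the compensation matrix for a single group of points (treating M as a 4x4 homogeneous matrix and using numpy's lstsq solver; both are assumptions, since the disclosure does not fix the matrix shape or the solver):

```python
import numpy as np

def fit_compensation_matrix(A_measured, B_theory):
    """Fit M minimizing ||B - M @ A|| in the least-squares sense (Formula 2).

    A_measured, B_theory: (n, 3) arrays of measured and theoretical point
    coordinates for one layer/partition group. Points are lifted to
    homogeneous form and M is solved with lstsq. Sketch only.
    """
    n = A_measured.shape[0]
    A_h = np.hstack([A_measured, np.ones((n, 1))])  # (n, 4) homogeneous points
    B_h = np.hstack([B_theory, np.ones((n, 1))])
    # solve A_h @ M.T ≈ B_h, i.e. M @ a ≈ b for every measured/theoretical pair
    M_T, *_ = np.linalg.lstsq(A_h, B_h, rcond=None)
    return M_T.T

rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, (50, 3))
B = A + np.array([0.01, -0.02, 0.03])  # simulated constant coordinate error
M = fit_compensation_matrix(A, B)
print(np.round(M[:3, 3], 4))
```

Here the fitted M recovers the simulated offset, so applying M to a measured point reproduces its theoretical position; in the full method this fit would be repeated per group and then refined iteratively as described above.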
The 3D camera calibration method provided by the embodiments of the present disclosure acquires multiple calibration plate images in response to the calibration plate moving to multiple spatial positions; acquires the measured position coordinates of the multiple position points of the calibration plate in the camera coordinate system according to the multiple calibration plate images; acquires multiple first poses of the base in the translation stage coordinate system and multiple second poses of the calibration plate in the camera coordinate system during the process of the translation stage driving the calibration plate to move via the base; determines the initial pose of the 3D camera in the translation stage coordinate system according to any set of the first pose and the second pose; divides the multiple position points of the calibration plate into multiple groups of position points according to different spatial regions in the camera coordinate system; and determines the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system. Since the embodiments of the present disclosure use a translation stage when determining the compensation matrix for the 3D camera, and the translation stage is a high-precision device whose own error can be neglected, a compensation matrix specific to the 3D camera itself can be determined, and the accuracy of the 3D camera can be improved according to the compensation matrix. By dividing the multiple position points of the calibration plate according to spatial regions in the camera coordinate system, different spatial depths and regions each have a corresponding compensation matrix, which can effectively improve the accuracy of the coordinates of position points in the camera coordinate system; the position coordinates of the surface position points of a target object in the camera coordinate system can thus be obtained more accurately according to the compensation matrix, and a higher-precision three-dimensional point cloud image can then be obtained.
On the basis of the above embodiments, FIG. 5 is a flowchart of point cloud image acquisition provided by an embodiment of the present disclosure. As shown in FIG. 5, the method of this embodiment of the present disclosure includes:
S501: Acquire the position coordinates of the surface position points of the target object in the camera coordinate system.
For example, an image of the target object can be captured directly by the 3D camera, and the position coordinates of the surface position points of the target object in the camera coordinate system can then be acquired from that image.
S502: Compensate the position coordinates according to the compensation matrix to obtain compensated position coordinates, where the compensation matrix is determined according to the measured position coordinates of the multiple position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system.
For example, the position coordinates can be compensated according to the following Formula 3:
A′ = M · A    (Formula 3)
where M represents the compensation matrix; A represents the position coordinates of a surface position point of the target object in the camera coordinate system; and A′ represents the compensated position coordinates of the surface position point of the target object in the camera coordinate system. Further, A′ can be normalized to obtain the final compensated position coordinates.
Further, optionally, compensating the position coordinates according to the compensation matrix to obtain compensated position coordinates includes: determining the layer and partition in which a surface position point is located according to the position coordinates of the surface position point of the target object in the camera coordinate system and the intrinsic parameters of the 3D camera; and compensating the position coordinates of the surface position point in the camera coordinate system according to the compensation matrix corresponding to the layer and partition in which the surface position point is located, to obtain the compensated position coordinates.
For example, the z-axis, x-axis, and y-axis coordinates of the position coordinates of a surface position point of the target object in the camera coordinate system can be extracted respectively, and the layer and partition in which the surface position point is located can thereby be determined. Further, optionally, determining the layer and partition in which the surface position point is located according to the position coordinates of the surface position point of the target object in the camera coordinate system and the intrinsic parameters of the 3D camera may include: determining the layer in which the surface position point is located according to the z-axis coordinate of the surface position point in the camera coordinate system; determining the pixel coordinates corresponding to the surface position point according to the x-axis and y-axis coordinates of the surface position point in the camera coordinate system and the intrinsic parameters of the 3D camera; and determining the partition in which the surface position point is located according to the pixel coordinates.
Since different layers and partitions correspond to different compensation matrices, the position coordinates of the surface position point in the camera coordinate system can be compensated through the above Formula 3 according to the compensation matrix corresponding to the layer and partition in which the surface position point is located, to obtain the compensated position coordinates.
Since a surface position point of the target object may in practice lie between layers or partitions and not belong entirely to any single layer or partition, in order to improve the compensation accuracy for such cross-layer or cross-partition surface position points, compensating the position coordinates of the surface position points of the target object in the camera coordinate system may further include the following: when a surface position point of the target object spans layers and/or partitions, the weight of each spanned layer and/or partition is determined according to the distance between the surface position point and that spanned layer and/or partition; the position coordinates of the surface position point in the camera coordinate system are compensated according to the compensation matrix of each spanned layer and/or partition, respectively, to obtain multiple compensated position coordinates of the surface position point; and the multiple compensated position coordinates of the surface position point are weighted and summed according to the weights of the spanned layers and/or partitions, and the weighted sum is taken as the final compensated position coordinates.
For example, when the surface position point belongs to different partitions of the same layer, the compensation results of the surface position point are weighted and summed according to the weights corresponding to the distances between the surface position point and the different partitions, to obtain the weighted compensated position coordinates corresponding to the surface position point. When the surface position point belongs to different layers, the compensation results are weighted and summed according to the weights corresponding to the distances between the surface position point and the different layers. When the surface position point belongs to both different layers and different partitions, the compensation results are weighted and summed according to the distances between the surface position point and the different layers and the distances between the surface position point and the different partitions. Therefore, by weighting and summing the compensation results of surface position points located in boundary areas, the accuracy of the resulting coordinates of those boundary surface position points can be improved.
S503: Generate a point cloud image corresponding to the target object according to the compensated position coordinates.
In this step, after the compensated position coordinates are obtained, a higher-precision point cloud image corresponding to the target object can be generated from them.
The point cloud image acquisition method provided by the embodiments of the present disclosure acquires the position coordinates of the surface position points of the target object in the camera coordinate system; compensates those position coordinates according to the compensation matrix to obtain compensated position coordinates, where the compensation matrix is determined according to the measured position coordinates of the multiple position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system; and generates the point cloud image corresponding to the target object according to the compensated position coordinates. Since the embodiments of the present disclosure compensate the position coordinates according to the compensation matrix, which is determined according to the measured position coordinates of the multiple position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system, and since the translation stage is a high-precision device whose own error can be neglected, a compensation matrix specific to the 3D camera itself can be determined, and the accuracy of the 3D camera can be improved according to the compensation matrix; that is, the position coordinates of the surface position points of the target object in the camera coordinate system can be obtained more accurately, and a higher-precision three-dimensional point cloud image can then be obtained.
On the basis of the above embodiments, optionally, the compensation matrix in the above point cloud image acquisition method is obtained by the 3D camera calibration method in any of the above method embodiments.
下述为本公开装置实施例,可以用于执行本公开方法实施例。对于本公开装置实施例中未披露的细节,请参照本公开方法实施例。
FIG. 6 is a schematic structural diagram of a 3D camera calibration apparatus provided by an embodiment of the present disclosure. As shown in FIG. 6, the 3D camera calibration apparatus 600 of this embodiment includes: an acquisition module 601, a first determination module 602, and a second determination module 603, where:
The acquisition module 601 is configured to acquire measured position coordinates of multiple position points of the calibration board in the camera coordinate system.
The first determination module 602 is configured to determine the initial pose of the 3D camera in the displacement stage coordinate system.
The second determination module 603 is configured to determine a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the displacement stage coordinate system.
In some embodiments, the second determination module 603 may be specifically configured to: divide the multiple position points into groups according to the spatial regions they occupy in the camera coordinate system; and determine the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose.
Optionally, when dividing the multiple position points into groups according to the spatial regions they occupy in the camera coordinate system, the second determination module 603 may be specifically configured to: divide the space in the camera coordinate system into multiple layers by height, each layer being divided into multiple partitions; and, according to the position coordinates of the multiple position points in the camera coordinate system, group together the position points located in the same partition of the same layer.
Optionally, when determining the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose, the second determination module 603 may be specifically configured to: determine initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose; fit the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine an initial compensation matrix; acquire multiple adjusted poses of the 3D camera in the displacement stage coordinate system and determine the theoretical position coordinates of each group of position points after each adjustment of the 3D camera; and adjust the initial compensation matrix according to the current poses returned after each adjustment of the 3D camera in the displacement stage coordinate system, the measured position coordinates of each group of position points in the camera coordinate system, and their current position coordinates after each adjustment, until the Euclidean distance of the error between the measured position coordinates and the current theoretical position coordinates of each group of position points in the camera coordinate system is smaller than a preset threshold and/or a preset number of adjustments is reached, whereupon the current compensation matrix is taken as the compensation matrix.
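The fit step of this procedure, fitting measured coordinates against theoretical coordinates for one group of points and then checking the Euclidean error against the threshold, can be sketched as a homogeneous least-squares fit (a hedged illustration: the original text does not specify the fitting algorithm, and the function names are assumptions):

```python
import numpy as np

def fit_compensation_matrix(measured, theoretical):
    """Least-squares fit of a 4x4 matrix M so that M maps the measured
    points (in homogeneous form) onto the theoretical points of one group."""
    P = np.hstack([measured, np.ones((len(measured), 1))])       # N x 4
    Q = np.hstack([theoretical, np.ones((len(theoretical), 1))])
    Mt, *_ = np.linalg.lstsq(P, Q, rcond=None)   # solves P @ Mt ~= Q
    return Mt.T

def mean_euclidean_error(M, measured, theoretical):
    """Mean Euclidean distance between compensated and theoretical points,
    i.e. the quantity compared against the preset threshold."""
    P = np.hstack([measured, np.ones((len(measured), 1))])
    comp = (M @ P.T).T
    comp = comp[:, :3] / comp[:, 3:4]
    return float(np.linalg.norm(comp - theoretical, axis=1).mean())

# A pure offset between measured and theoretical points is fit exactly:
measured = np.array([[0., 0., 1.], [1., 0., 1.], [0., 1., 1.],
                     [1., 1., 2.], [2., 0., 1.], [0., 2., 3.]])
theoretical = measured + np.array([0.1, -0.2, 0.05])
M = fit_compensation_matrix(measured, theoretical)
err = mean_euclidean_error(M, measured, theoretical)
```

In the iterative procedure, the fit would be redone (or the matrix adjusted) after each camera pose adjustment until `err` falls below the threshold or the adjustment count is exhausted.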
Optionally, when determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose, the second determination module 603 may be specifically configured to: acquire the pose of the base platform corresponding to each group of position points in the displacement stage coordinate system and the pose of the calibration board in the camera coordinate system; determine the pose of the calibration board relative to the base platform according to the initial pose of the 3D camera in the displacement stage coordinate system, the pose of the base platform in the displacement stage coordinate system, and the pose of the calibration board in the camera coordinate system; determine the position coordinates of each group of position points in the displacement stage coordinate system according to their position coordinates in the calibration board coordinate system, the pose of the calibration board relative to the base platform, and the pose of the base platform in the displacement stage coordinate system; and determine the initial theoretical position coordinates of each group of position points in the camera coordinate system according to their position coordinates in the displacement stage coordinate system and the initial pose.
In some embodiments, the acquisition module 601 may be specifically configured to: acquire multiple calibration board images in response to the calibration board moving to multiple spatial positions; and acquire the measured position coordinates of the multiple position points in the camera coordinate system from the multiple calibration board images.
In some embodiments, the first determination module 602 may be specifically configured to: while the displacement stage moves the calibration board via the base platform, acquire multiple first poses of the base platform in the displacement stage coordinate system and multiple second poses of the calibration board in the camera coordinate system; and determine the initial pose of the 3D camera in the displacement stage coordinate system from any pair of first and second poses.
The apparatus of this embodiment may be used to carry out the technical solution of the 3D camera calibration method in any of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
FIG. 7 is a schematic structural diagram of a point cloud image acquisition apparatus provided by an embodiment of the present disclosure. As shown in FIG. 7, the point cloud image acquisition apparatus 700 of this embodiment includes: an acquisition module 701, a processing module 702, and a generation module 703, where:
The acquisition module 701 is configured to acquire the position coordinates of surface position points of the target object in the coordinate system of the point cloud image acquisition apparatus.
The processing module 702 is configured to compensate the position coordinates according to a compensation matrix to obtain compensated position coordinates, the compensation matrix being determined from the measured position coordinates of multiple position points of the calibration board in the coordinate system of the point cloud image acquisition apparatus and the initial pose of the 3D point cloud image acquisition apparatus in the displacement stage coordinate system.
The generation module 703 is configured to generate the point cloud image corresponding to the target object according to the compensated position coordinates.
Optionally, the compensation matrix is obtained through the point cloud image acquisition apparatus calibration method of any of the above method embodiments.
In some embodiments, the processing module 702 may be specifically configured to: determine the layer and partition in which a surface position point of the target object is located according to the point's position coordinates in the coordinate system of the point cloud image acquisition apparatus and the intrinsic parameters of the apparatus; and compensate the point's position coordinates in that coordinate system according to the compensation matrix corresponding to that layer and partition, to obtain the compensated position coordinates.
Optionally, when determining the layer and partition in which a surface position point of the target object is located according to the point's position coordinates in the coordinate system of the point cloud image acquisition apparatus and the intrinsic parameters of the apparatus, the processing module 702 may be specifically configured to: determine the layer from the point's z-axis coordinate in that coordinate system; determine the pixel coordinates corresponding to the point from its x-axis and y-axis coordinates in that coordinate system and the intrinsic parameters of the apparatus; and determine the partition from those pixel coordinates.
Optionally, when compensating the point's position coordinates in the coordinate system of the point cloud image acquisition apparatus according to the compensation matrix corresponding to the layer and partition in which the point is located, the processing module 702 may be specifically configured to: when the point spans layers and/or partitions, determine a weight for each spanned layer and/or partition according to the distance between the point and that layer and/or partition; compensate the point's position coordinates separately with the compensation matrix of each spanned layer and/or partition, to obtain multiple compensated position coordinates for the point; and compute a weighted sum of these compensated position coordinates according to the weights, the weighted-sum result being taken as the final compensated position coordinates.
The apparatus of this embodiment may be used to carry out the technical solution of the point cloud image acquisition method in any of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
On the basis of the above embodiments, FIG. 8 is a schematic diagram of a camera calibration system provided by an embodiment of the present disclosure. As shown in FIG. 8, the camera calibration system 800 of this embodiment includes: a 3D camera 801 and a displacement stage 802.
The 3D camera 801 is configured to acquire measured position coordinates of multiple position points of the calibration board in the camera coordinate system; determine the initial pose of the 3D camera in the displacement stage coordinate system; and determine a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the displacement stage coordinate system.
The displacement stage 802 is configured to move the calibration board via the base platform.
Optionally, the 3D camera 801 may be configured to carry out the 3D camera calibration method of any of the above method embodiments and, correspondingly, may adopt the structure of the apparatus embodiment of FIG. 6; the implementation principles and technical effects are similar and are not repeated here.
FIG. 9 is a schematic structural diagram of an electronic device provided by the present disclosure. As shown in FIG. 9, the electronic device 900 may include: at least one processor 901 and a memory 902.
The memory 902 is configured to store a program. Specifically, the program may include program code, and the program code includes computer-executable instructions.
The memory 902 may include high-speed random access memory (RAM) and may also include non-volatile memory, for example at least one disk memory.
The processor 901 is configured to execute the computer-executable instructions stored in the memory 902 to implement the 3D camera calibration method or the point cloud image acquisition method described in the foregoing method embodiments. The processor 901 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure. Specifically, when implementing the 3D camera calibration method or the point cloud image acquisition method described in the foregoing method embodiments, the electronic device may be, for example, a device with processing capability such as a 3D camera.
Optionally, the electronic device 900 may further include a communication interface 903. In a specific implementation, if the communication interface 903, the memory 902, and the processor 901 are implemented independently, they may be connected to and communicate with one another through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on, but this does not mean there is only one bus or one type of bus.
Optionally, in a specific implementation, if the communication interface 903, the memory 902, and the processor 901 are integrated on a single chip, they may communicate through internal interfaces.
The present disclosure further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above 3D camera calibration method or point cloud image acquisition method.
The present disclosure further provides a computer program product including a computer program which, when executed by a processor, implements the above 3D camera calibration method or point cloud image acquisition method.
The above computer-readable storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc. The readable storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
An exemplary readable storage medium is coupled to the processor so that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Alternatively, the processor and the readable storage medium may exist as discrete components in the 3D camera calibration apparatus or the point cloud image acquisition apparatus.
A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be accomplished by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present disclosure. Although the present disclosure has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims (19)

  1. A 3D camera calibration method, comprising:
    determining an initial pose of a 3D camera in a displacement stage coordinate system; and
    determining a compensation matrix for the 3D camera according to measured position coordinates of multiple position points of a calibration object in a camera coordinate system and the initial pose of the 3D camera in the displacement stage coordinate system.
  2. The 3D camera calibration method according to claim 1, wherein determining the compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the displacement stage coordinate system comprises:
    dividing the multiple position points into groups according to the spatial regions they occupy in the camera coordinate system; and
    determining the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose.
  3. The 3D camera calibration method according to claim 2, wherein dividing the multiple position points into groups according to the spatial regions they occupy in the camera coordinate system comprises:
    dividing the space in the camera coordinate system into multiple layers by height, each layer being divided into multiple partitions; and
    grouping together, according to the position coordinates of the multiple position points in the camera coordinate system, the position points located in the same partition of the same layer.
  4. The 3D camera calibration method according to claim 2 or 3, wherein determining the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose comprises:
    determining initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose;
    fitting the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine an initial compensation matrix;
    acquiring multiple adjusted poses of the 3D camera in the displacement stage coordinate system, and determining theoretical position coordinates of each group of position points after each adjustment of the 3D camera; and
    adjusting the initial compensation matrix according to the current poses returned after each adjustment of the 3D camera in the displacement stage coordinate system, the measured position coordinates of each group of position points in the camera coordinate system, and their current position coordinates after each adjustment of the 3D camera, until a Euclidean distance of an error between the measured position coordinates and current theoretical position coordinates of each group of position points in the camera coordinate system is smaller than a preset threshold and/or a preset number of adjustments is reached, and taking the current compensation matrix as the compensation matrix.
  5. The 3D camera calibration method according to claim 4, wherein determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose comprises:
    acquiring a pose of a base platform corresponding to each group of position points in the displacement stage coordinate system and a pose of the calibration object in the camera coordinate system;
    determining a pose of the calibration object relative to the base platform according to the initial pose of the 3D camera in the displacement stage coordinate system, the pose of the base platform in the displacement stage coordinate system, and the pose of the calibration object in the camera coordinate system;
    determining position coordinates of each group of position points in the displacement stage coordinate system according to their position coordinates in a calibration object coordinate system, the pose of the calibration object relative to the base platform, and the pose of the base platform in the displacement stage coordinate system; and
    determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to their position coordinates in the displacement stage coordinate system and the initial pose.
  6. The 3D camera calibration method according to any one of claims 1 to 5, further comprising, before determining the compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points of the calibration object in the camera coordinate system and the initial pose of the 3D camera in the displacement stage coordinate system:
    acquiring multiple calibration object images in response to the calibration object moving to multiple spatial positions; and
    acquiring the measured position coordinates of the multiple position points in the camera coordinate system from the multiple calibration object images.
  7. The 3D camera calibration method according to any one of claims 1 to 6, wherein determining the initial pose of the 3D camera in the displacement stage coordinate system comprises:
    acquiring, while the displacement stage moves the calibration object via a base platform, multiple first poses of the base platform in the displacement stage coordinate system and multiple second poses of the calibration object in the camera coordinate system; and
    determining the initial pose of the 3D camera in the displacement stage coordinate system from any pair of the first poses and the second poses.
  8. A point cloud image acquisition method, comprising:
    compensating position coordinates of surface position points of a target object in a camera coordinate system according to a compensation matrix to obtain compensated position coordinates, wherein the compensation matrix is determined from measured position coordinates of multiple position points of a calibration object in the camera coordinate system and an initial pose of a 3D camera in a displacement stage coordinate system; and
    generating a point cloud image corresponding to the target object according to the compensated position coordinates.
  9. The point cloud image acquisition method according to claim 8, wherein the compensation matrix is obtained through the 3D camera calibration method according to any one of claims 1 to 7.
  10. The point cloud image acquisition method according to claim 8 or 9, wherein compensating the position coordinates according to the compensation matrix to obtain the compensated position coordinates comprises:
    determining, according to the position coordinates of the surface position points of the target object in the camera coordinate system and intrinsic parameters of the 3D camera, a layer and a partition in which each surface position point is located; and
    compensating the position coordinates of the surface position point in the camera coordinate system according to a compensation matrix corresponding to the layer and partition in which the surface position point is located, to obtain the compensated position coordinates.
  11. The point cloud image acquisition method according to claim 10, wherein determining, according to the position coordinates of the surface position points of the target object in the camera coordinate system and the intrinsic parameters of the 3D camera, the layer and partition in which each surface position point is located comprises:
    determining the layer in which the surface position point is located according to its z-axis coordinate in the camera coordinate system;
    determining pixel coordinates corresponding to the surface position point according to its x-axis and y-axis coordinates in the camera coordinate system and the intrinsic parameters of the 3D camera; and
    determining the partition in which the surface position point is located according to the pixel coordinates.
  12. The point cloud image acquisition method according to claim 10 or 11, wherein compensating the position coordinates of the surface position point in the camera coordinate system according to the compensation matrix corresponding to the layer and partition in which the surface position point is located, to obtain the compensated position coordinates, comprises:
    determining, when the surface position point spans layers and/or partitions, a weight for each spanned layer and/or partition according to a distance between the surface position point and the spanned layer and/or partition;
    compensating the position coordinates of the surface position point in the camera coordinate system separately with the compensation matrix of each spanned layer and/or partition, to obtain multiple compensated position coordinates of the surface position point; and
    computing a weighted sum of the multiple compensated position coordinates of the surface position point according to the weights of the spanned layers and/or partitions, the weighted-sum result being the compensated position coordinates.
  13. A camera calibration system, comprising a 3D camera and a displacement stage;
    the 3D camera being configured to acquire measured position coordinates of multiple position points of a calibration object in a camera coordinate system; determine an initial pose of the 3D camera in a displacement stage coordinate system; and determine a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the displacement stage coordinate system; and
    the displacement stage being configured to move the calibration object via a base platform.
  14. The camera calibration system according to claim 13, wherein the 3D camera is configured to perform the 3D camera calibration method according to any one of claims 1 to 7.
  15. A 3D camera calibration apparatus, comprising:
    a first determination module configured to determine an initial pose of a 3D camera in a displacement stage coordinate system; and
    a second determination module configured to determine a compensation matrix for the 3D camera according to measured position coordinates of multiple position points of a calibration object in a camera coordinate system and the initial pose of the 3D camera in the displacement stage coordinate system.
  16. A point cloud image acquisition apparatus, comprising:
    a processing module configured to compensate position coordinates of surface position points of a target object in a coordinate system of the point cloud image acquisition apparatus according to a compensation matrix to obtain compensated position coordinates, wherein the compensation matrix is determined from measured position coordinates of multiple position points of a calibration object in the coordinate system of the point cloud image acquisition apparatus and an initial pose of the point cloud image acquisition apparatus in a displacement stage coordinate system; and
    a generation module configured to generate a point cloud image corresponding to the target object according to the compensated position coordinates.
  17. An electronic device, comprising: a processor, and a memory communicatively connected to the processor;
    the memory storing computer-executable instructions; and
    the processor executing the computer-executable instructions stored in the memory to implement the method according to any one of claims 1 to 12.
  18. A computer-readable storage medium storing computer program instructions which, when executed by a processor, implement the method according to any one of claims 1 to 12.
  19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 12.
PCT/CN2023/125286 2022-11-24 2023-10-18 3d相机标定方法、点云图像获取方法及相机标定系统 WO2024109403A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211483388.6 2022-11-24
CN202211483388.6A CN115719387A (zh) 2022-11-24 2022-11-24 3d相机标定方法、点云图像获取方法及相机标定系统

Publications (1)

Publication Number Publication Date
WO2024109403A1 true WO2024109403A1 (zh) 2024-05-30

Family

ID=85256350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/125286 WO2024109403A1 (zh) 2022-11-24 2023-10-18 3d相机标定方法、点云图像获取方法及相机标定系统

Country Status (2)

Country Link
CN (1) CN115719387A (zh)
WO (1) WO2024109403A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115719387A (zh) * 2022-11-24 2023-02-28 梅卡曼德(北京)机器人科技有限公司 3d相机标定方法、点云图像获取方法及相机标定系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132891A (zh) * 2020-11-26 2020-12-25 三代光学科技(天津)有限公司 一种扩大标定空间的方法
CN112223302A (zh) * 2020-12-17 2021-01-15 国网瑞嘉(天津)智能机器人有限公司 基于多传感器的带电作业机器人的快速标定方法及装置
CN114371472A (zh) * 2021-12-15 2022-04-19 中电海康集团有限公司 一种激光雷达和相机的自动化联合标定装置及方法
CN114519738A (zh) * 2022-01-24 2022-05-20 西北工业大学宁波研究院 一种基于icp算法的手眼标定误差修正方法
US20220189062A1 (en) * 2020-12-15 2022-06-16 Kwangwoon University Industry-Academic Collaboration Foundation Multi-view camera-based iterative calibration method for generation of 3d volume model
CN115719387A (zh) * 2022-11-24 2023-02-28 梅卡曼德(北京)机器人科技有限公司 3d相机标定方法、点云图像获取方法及相机标定系统
CN115810052A (zh) * 2021-09-16 2023-03-17 梅卡曼德(北京)机器人科技有限公司 相机的标定方法、装置、电子设备及存储介质


Also Published As

Publication number Publication date
CN115719387A (zh) 2023-02-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23893495

Country of ref document: EP

Kind code of ref document: A1