WO2022205845A1 - Pose calibration method and apparatus, robot, and computer-readable storage medium - Google Patents

Pose calibration method and apparatus, robot, and computer-readable storage medium

Info

Publication number
WO2022205845A1
WO2022205845A1 (PCT/CN2021/125046)
Authority
WO
WIPO (PCT)
Prior art keywords
pose
depth camera
robot
target
point cloud
Prior art date
Application number
PCT/CN2021/125046
Other languages
English (en)
French (fr)
Inventor
黄祥斌
徐文质
黄高波
Original Assignee
深圳市优必选科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市优必选科技股份有限公司
Priority to US17/721,313 (published as US20220327739A1)
Publication of WO2022205845A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1692Calibration of manipulator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39057Hand eye calibration, eye, camera on hand, end effector
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • the present application belongs to the field of robotics, and in particular, relates to a method, device, robot, and computer-readable storage medium for pose calibration.
  • the depth camera needs to be used to accurately detect the ground, obstacles and cliffs to avoid obstacles, so it is very important to accurately calibrate the pose of the depth camera.
  • in the prior art, a special calibration board (such as a black-and-white checkerboard) usually needs to be made to calibrate the pose of the depth camera, and the accuracy and efficiency of such calibration are low.
  • the embodiments of the present application provide a pose calibration method, device, robot, and computer-readable storage medium, which can improve the accuracy and efficiency of depth camera calibration.
  • an embodiment of the present application provides a method for calibrating a pose, which is applied to a depth camera, where the depth camera is installed on a robot, and the method may include: obtaining, through the depth camera, a depth image containing a target plane, and determining point cloud data corresponding to the depth image, where the target plane is the plane where the robot is located; and
  • calibrating the target pose of the depth camera according to the point cloud data and a preset optimization method, where the target pose includes the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system.
  • the calibration of the target pose of the depth camera according to the point cloud data and a preset optimization method may include:
  • obtaining a candidate pose of the depth camera, where the candidate pose includes the initial pose of the depth camera; and calibrating the target pose of the depth camera according to the candidate pose, the point cloud data, and the optimization method.
  • calibrating the target pose of the depth camera according to the candidate pose, the point cloud data and the optimization method may include:
  • substituting the point cloud data and the candidate pose into the objective function corresponding to the optimization method to obtain operation results; and determining a target operation result that satisfies a preset condition, and calibrating the candidate pose corresponding to the target operation result as the target pose of the depth camera.
  • the objective function corresponding to the optimization method is set according to the following formula:

    f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

  • where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
  • the determining of the target operation result satisfying the preset condition may include: determining the smallest operation result as the target operation result.
  • the robot coordinate system is a coordinate system established with the projection of the robot's center point on the ground as the origin, the direction directly in front of the robot as the X axis, the direction directly to the left of the robot as the Y axis, and the vertically upward direction as the Z axis.
  • an embodiment of the present application provides a pose calibration device, which is applied to a depth camera, where the depth camera is installed on a robot, and the pose calibration device may include:
  • a point cloud data determination module configured to obtain a depth image including a target plane through the depth camera, and determine point cloud data corresponding to the depth image, where the target plane is the plane where the robot is located;
  • a pose calibration module for calibrating the target pose of the depth camera according to the point cloud data and a preset optimization method, where the target pose includes the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system.
  • the pose calibration module may include:
  • a candidate pose obtaining unit configured to obtain a candidate pose of the depth camera, where the candidate pose includes the initial pose of the depth camera
  • a target pose calibration unit configured to calibrate the target pose of the depth camera according to the candidate pose, the point cloud data and the optimization method.
  • the target pose calibration unit may include:
  • an operation sub-unit used for substituting the point cloud data and the candidate pose into the objective function corresponding to the optimization method for operation to obtain an operation result
  • the target pose calibration sub-unit is used for determining the target operation result satisfying the preset condition, and calibrating the candidate pose corresponding to the target operation result as the target pose of the depth camera.
  • the objective function corresponding to the optimization method is set according to the following formula:

    f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

  • where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
  • the target pose calibration sub-unit is further configured to determine the minimum operation result as the target operation result.
  • the robot coordinate system is a coordinate system established with the projection of the robot's center point on the ground as the origin, the direction directly in front of the robot as the X axis, the direction directly to the left of the robot as the Y axis, and the vertically upward direction as the Z axis.
  • an embodiment of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the pose calibration method according to any one of the above first aspects.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the pose calibration method according to any one of the above first aspects.
  • an embodiment of the present application provides a computer program product that, when the computer program product runs on a robot, enables the robot to perform the pose calibration method described in any one of the first aspects above.
  • a depth image containing the target plane (that is, the plane where the robot is located) can be obtained through a depth camera on the robot, and the point cloud data corresponding to the depth image can be determined, so that the target pose of the depth camera, that is, the pitch angle and roll angle of the depth camera and its height in the robot coordinate system, can be calibrated according to the point cloud data and a preset optimization method. This can effectively improve the accuracy of the target pose calibration; moreover, the implementation is simple and the amount of computation is small, which can effectively improve the efficiency of the target pose calibration and improve user experience.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a pose calibration method provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a pose calibration device provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a robot provided by an embodiment of the present application.
  • the term "if" may be contextually interpreted as "when", "once", "in response to determining", or "in response to detecting".
  • similarly, the phrases "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to determining", "once the [described condition or event] is detected", or "in response to detecting the [described condition or event]".
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise.
  • the terms "include", "comprise", "have" and their variants mean "including but not limited to", unless specifically emphasized otherwise.
  • a depth camera is installed in a movable device such as a robot.
  • the depth camera can be used to detect the ground, obstacles and cliffs to avoid obstacles.
  • the pose of the depth camera needs to be calibrated in advance. Because of the installation error and measurement error of the depth camera, the current approach of calibrating the pose of the depth camera with a specially made calibration board (such as a black-and-white checkerboard) suffers from low accuracy and low efficiency.
  • an embodiment of the present application provides a pose calibration method, which can obtain a depth image containing the target plane (that is, the plane where the robot is located) through a depth camera on the robot and determine the point cloud data corresponding to the depth image.
  • the target pose of the depth camera is then calibrated according to the point cloud data and a preset optimization method, that is, the pitch angle and roll angle of the depth camera and its height in the robot coordinate system are calibrated, which can effectively improve the accuracy of the target pose calibration.
  • the implementation is simple and the amount of computation is small, which can effectively improve the efficiency of the target pose calibration, improve user experience, and offers strong ease of use and practicability.
  • FIG. 1 shows a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the depth camera 100 can be installed on the robot 101, for example on the upper part of the robot 101 and tilted obliquely downward, so that the field of view of the depth camera 100 covers the ground 102, as shown by the dotted lines in FIG. 1. In this way, a depth image containing the ground where the robot 101 is located can be obtained through the depth camera 100 to guide the robot 101 in avoiding obstacles.
  • the target pose of the depth camera 100 may be calibrated according to the depth image obtained by the depth camera 100 .
  • during the calibration of the target pose, both the camera coordinate system and the robot coordinate system are involved.
  • the robot coordinate system may take the projection of the center point of the robot 101 on the ground 102 as the origin O, the direction directly in front of the robot 101 as the X axis, the direction directly to the left of the robot 101 as the Y axis, and the vertically upward direction as the Z axis.
  • in the robot coordinate system, the position coordinates of the depth camera 100 may be expressed as (Xs, Ys, Zs), and the attitude parameters may be expressed as (Ro, Po, Yo).
  • Ro is the roll angle of the depth camera 100, that is, the rotation angle of the depth camera 100 around the X axis; Po is the pitch angle of the depth camera 100, that is, the rotation angle of the depth camera 100 around the Y axis; and Yo is the yaw angle of the depth camera 100, that is, the rotation angle of the depth camera 100 around the Z axis.
  • the parameters that mainly affect ground detection are Zs, Ro, and Po; the remaining parameters do not affect it. The pose calibration method therefore mainly calibrates Zs, Ro, and Po of the depth camera 100, that is, the height of the depth camera in the robot coordinate system and the roll angle and pitch angle of the depth camera.
  • the height in the robot coordinate system may be a Z-axis component or a Z-axis coordinate in the robot coordinate system. That is, the Z-axis components and the Z-axis coordinates described in the embodiments of the present application have the same meaning.
  • FIG. 2 shows a schematic flowchart of a pose calibration method provided by an embodiment of the present application.
  • the pose calibration method can be applied to the application scenario shown in FIG. 1 .
  • the pose calibration method may include:
  • the robot can be placed on a level open ground.
  • the target plane is the ground where the robot is located.
  • the depth camera on the robot can obtain a depth image containing the target plane, that is, a depth image of the ground within the field of view of the depth camera, and the depth image can be converted into point cloud data, thereby obtaining the point cloud data corresponding to the depth image.
  • the number of point cloud points corresponds to the resolution of the depth camera. For example, when the resolution of the depth camera is 640×480, 640×480 point cloud points can be obtained.
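  • as an illustration, the conversion from a depth image to camera-frame point cloud data can be sketched with standard pinhole back-projection. This is a minimal sketch, assuming the depth camera's intrinsics (fx, fy, cx, cy) are known from its factory calibration; neither the intrinsics nor any conversion code appear in the original text, and the numeric values below are placeholders.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project every pixel (u, v) with depth z into the camera frame:
    # x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no depth reading (reported as 0 by many sensors).
    return points[points[:, 2] > 0]

# A 640x480 depth image yields up to 640*480 points, matching the text.
depth = np.full((480, 640), 1.5)   # synthetic: flat surface at 1.5 m
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                 # (307200, 3)
```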
  • the calibration of the target pose of the depth camera according to the point cloud data and a preset optimization method may include:
  • Step a: obtaining a candidate pose of the depth camera, where the candidate pose includes the initial pose of the depth camera;
  • Step b: calibrating the target pose of the depth camera according to the candidate pose, the point cloud data, and the optimization method.
  • the robot detects obstacles or cliffs on the ground mainly based on the heights of objects on the ground: in theory, a height greater than 0 indicates an obstacle, and a height less than 0 indicates a cliff.
  • here, height refers to the height of an object in the robot coordinate system, that is, the Z-axis component obtained by transforming the point cloud data corresponding to the depth image captured by the depth camera into the robot coordinate system.
  • the point cloud data therefore needs to be transformed from the camera coordinate system to the robot coordinate system. The transformation consists of a rotation around the X axis and a rotation around the Y axis, followed by a translation along the Z axis, that is, T = LZ·RY·RX, where LZ is the translation matrix along the Z axis, RY is the rotation matrix around the Y axis, and RX is the rotation matrix around the X axis:

    LZ = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & Zs \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad RY = \begin{pmatrix} \cos Po & 0 & \sin Po & 0 \\ 0 & 1 & 0 & 0 \\ -\sin Po & 0 & \cos Po & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad RX = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos Ro & -\sin Ro & 0 \\ 0 & \sin Ro & \cos Ro & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

  • the coordinate transformation matrix T from the camera coordinate system to the robot coordinate system is therefore:

    T = LZ \, RY \, RX = \begin{pmatrix} \cos Po & \sin Po \sin Ro & \sin Po \cos Ro & 0 \\ 0 & \cos Ro & -\sin Ro & 0 \\ -\sin Po & \cos Po \sin Ro & \cos Po \cos Ro & Zs \\ 0 & 0 & 0 & 1 \end{pmatrix}

  • with the homogeneous coordinates S(x_0, y_0, z_0, 1) in the camera coordinate system and R(x_1, y_1, z_1, 1) in the robot coordinate system related by R = T·S, the Z-axis component after transformation into the robot coordinate system satisfies:

    z_1 = -x_0 \sin Po + y_0 \cos Po \sin Ro + z_0 \cos Po \cos Ro + Zs
  • in the robot coordinate system, when the robot is on an absolutely level surface and the depth camera has neither installation error nor measurement error, the Z-axis coordinates of all point cloud data transformed into the robot coordinate system are 0.
  • in practice, however, because of installation and measurement errors, the Z-axis coordinates of the transformed point cloud data fluctuate within a certain range.
  • the objective function corresponding to the optimization method can therefore be set according to the above relationship between the Z-axis component and Po, Ro, and Zs, so that the target pose of the depth camera can be accurately calibrated according to the objective function and the point cloud data.
  • the objective function corresponding to the optimization method can be set as:

    f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

  • where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
  • the candidate poses may include the initial pose of the depth camera and all possible poses of the depth camera.
  • the initial pose of the depth camera may be the pose of the depth camera actually installed on the robot. All possible poses of the depth camera may be determined according to the actual situation, which is not specifically limited in this embodiment of the present application.
  • after the objective function is determined, the point cloud data and the candidate poses can be substituted into the objective function corresponding to the optimization method to obtain operation results, the target operation result satisfying the preset condition can be determined, and the candidate pose corresponding to the target operation result can then be calibrated as the target pose of the depth camera.
  • the preset condition may be that the operation result is the smallest, that is, the smallest operation result may be determined as the target operation result, so that the candidate pose with the smallest operation result is calibrated as the target pose of the depth camera. The calibrated target pose then transforms each point cloud point to a Z-axis coordinate in the robot coordinate system that is closest to 0, which reduces installation and measurement errors, improves the accuracy of the depth camera pose calibration, and enables the robot to effectively detect the ground for obstacle avoidance.
  • for example, the initial pose of the depth camera can be used as the first candidate pose, and the first candidate pose and each point cloud point (for example, 640*480 points) can be substituted into the objective function corresponding to the optimization method to calculate the operation result corresponding to the first candidate pose.
  • the second candidate pose can then be obtained and substituted, together with each point cloud point, into the objective function to calculate the operation result corresponding to the second candidate pose, and so on, until the last candidate pose and each point cloud point are substituted into the objective function and the operation result corresponding to the last candidate pose is calculated.
  • finally, the smallest operation result is found among all the operation results, and the candidate pose corresponding to the smallest operation result (for example, the fifth candidate pose) can be calibrated as the target pose of the depth camera.
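  • a minimal sketch of this exhaustive search follows, assuming the objective is the sum of squared robot-frame Z components, a least-squares form consistent with the z_1 relation above (the published formula image is not reproduced in this text, so the exact aggregation is an assumption); the candidate tuples are illustrative placeholders.

```python
import numpy as np

def objective(points, po, ro, zs):
    # Sum of squared Z components after transforming camera-frame points
    # into the robot frame, using
    # z1 = -x*sin(Po) + y*cos(Po)*sin(Ro) + z*cos(Po)*cos(Ro) + Zs.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    z1 = (-x * np.sin(po)
          + y * np.cos(po) * np.sin(ro)
          + z * np.cos(po) * np.cos(ro)
          + zs)
    return float(np.sum(z1 ** 2))

def calibrate(points, candidates):
    # Evaluate every candidate (Po, Ro, Zs) and keep the one whose
    # result is the smallest, as the method describes.
    results = [objective(points, *c) for c in candidates]
    return candidates[int(np.argmin(results))]

# Illustrative only: the first candidate is the installed (initial) pose.
points = np.random.rand(640 * 480, 3)
candidates = [(0.60, 0.00, 0.35), (0.62, 0.01, 0.34), (0.58, -0.01, 0.36)]
print(calibrate(points, candidates))
```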
  • a depth image containing the target plane (that is, the plane where the robot is located) can be obtained through a depth camera on the robot, and the point cloud data corresponding to the depth image can be determined, so that the target pose of the depth camera, that is, the pitch angle and roll angle of the depth camera and its height in the robot coordinate system, can be calibrated according to the point cloud data and a preset optimization method. This can effectively improve the accuracy of the target pose calibration; moreover, the implementation is simple and the amount of computation is small, which can effectively improve the efficiency of the target pose calibration and improve user experience.
  • FIG. 3 shows a structural block diagram of the pose calibration apparatus provided by the embodiments of the present application. For convenience of description, only the parts related to the embodiments of the present application are shown.
  • the pose calibration device is applied to a depth camera, and the depth camera is installed on the robot.
  • the pose calibration device may include:
  • a point cloud data determination module 301 configured to obtain a depth image including a target plane through the depth camera, and determine point cloud data corresponding to the depth image, where the target plane is the plane where the robot is located;
  • a pose calibration module 302 configured to calibrate the target pose of the depth camera according to the point cloud data and a preset optimization method, where the target pose includes the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system.
  • the pose calibration module 302 may include:
  • a candidate pose obtaining unit configured to obtain a candidate pose of the depth camera, where the candidate pose includes the initial pose of the depth camera
  • a target pose calibration unit configured to calibrate the target pose of the depth camera according to the candidate pose, the point cloud data and the optimization method.
  • the target pose calibration unit may include:
  • an operation sub-unit used for substituting the point cloud data and the candidate pose into the objective function corresponding to the optimization method for operation to obtain an operation result
  • the target pose calibration sub-unit is used for determining the target operation result satisfying the preset condition, and calibrating the candidate pose corresponding to the target operation result as the target pose of the depth camera.
  • the objective function corresponding to the optimization method is set according to the following formula:

    f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

  • where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
  • the target pose calibration sub-unit is further configured to determine the minimum operation result as the target operation result.
  • the robot coordinate system is a coordinate system established with the projection of the robot's center point on the ground as the origin, the direction directly in front of the robot as the X axis, the direction directly to the left of the robot as the Y axis, and the vertically upward direction as the Z axis.
  • FIG. 4 is a schematic structural diagram of a robot according to an embodiment of the present application.
  • the robot of this embodiment includes: at least one processor 40 (only one is shown in FIG. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40. The processor 40, when executing the computer program 42, implements the steps in any of the foregoing embodiments of the pose calibration method.
  • the robot may include, but is not limited to, a processor 40 and a memory 41 .
  • FIG. 4 is only an example of the robot 4 and does not constitute a limitation on the robot 4; the robot may include more or fewer components than shown in the figure, combine certain components, or use different components, and may, for example, also include input and output devices, network access devices, and the like.
  • the processor 40 can be a central processing unit (CPU), and the processor 40 can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 41 may be an internal storage unit of the robot 4 in some embodiments, such as a hard disk or a memory of the robot 4. In other embodiments, the memory 41 may also be an external storage device of the robot 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the robot 4. Further, the memory 41 may include both an internal storage unit of the robot 4 and an external storage device.
  • the memory 41 is used to store an operating system, an application program, a boot loader (Boot Loader), data, and other programs, such as program codes of the computer program.
  • the memory 41 can also be used to temporarily store data that has been output or will be output.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • the embodiments of the present application provide a computer program product, when the computer program product runs on a robot, the steps in the foregoing method embodiments can be implemented when the robot executes.
  • the integrated units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments can be implemented by a computer program to instruct the relevant hardware.
  • the computer program can be stored in a computer-readable storage medium, and the computer program, when executed by a processor, can implement the steps of the above method embodiments.
  • the computer program includes computer program code, and the computer program code can be in the form of source code, object code, executable file or some intermediate form, etc.
  • the computer-readable storage medium may include at least: any entity or device capable of carrying the computer program code to the device/robot, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
  • in some jurisdictions, according to legislation and patent practice, computer-readable storage media may not be electrical carrier signals and telecommunication signals.
  • the disclosed apparatus/robot and method may be implemented in other ways.
  • the device/robot embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.

Abstract

A pose calibration method and apparatus, a robot, and a computer-readable storage medium, applicable to the field of robotics. The pose calibration method can obtain, through a depth camera on a robot, a depth image containing a target plane (that is, the plane where the robot is located), and can determine the point cloud data corresponding to the depth image, so as to calibrate the target pose of the depth camera according to the point cloud data and a preset optimization method, that is, to calibrate the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system. This can effectively improve the accuracy of target pose calibration; moreover, the implementation is simple and the amount of computation is small, which can effectively improve the efficiency of target pose calibration and improve user experience.

Description

Pose calibration method and apparatus, robot, and computer-readable storage medium
This application claims priority to Chinese patent application No. 202110344237.1, filed with the Chinese Patent Office on March 30, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the field of robotics, and in particular relates to a pose calibration method and apparatus, a robot, and a computer-readable storage medium.
Background
During the movement of a robot, a depth camera needs to be used to accurately detect the ground, obstacles, cliffs, and the like for obstacle avoidance, so accurately calibrating the pose of the depth camera is very important. In the prior art, a special calibration board (such as a black-and-white checkerboard) usually needs to be made to calibrate the pose of the depth camera, and the accuracy and efficiency of such calibration are low.
Technical Problem
Embodiments of the present application provide a pose calibration method and apparatus, a robot, and a computer-readable storage medium, which can improve the accuracy and efficiency of depth camera calibration.
Technical Solution
In a first aspect, an embodiment of the present application provides a pose calibration method applied to a depth camera, the depth camera being installed on a robot. The pose calibration method may include:
obtaining, through the depth camera, a depth image containing a target plane, and determining point cloud data corresponding to the depth image, the target plane being the plane where the robot is located;
calibrating a target pose of the depth camera according to the point cloud data and a preset optimization method, the target pose including a pitch angle and a roll angle of the depth camera and a height of the depth camera in a robot coordinate system.
Exemplarily, the calibrating the target pose of the depth camera according to the point cloud data and the preset optimization method may include:
obtaining candidate poses of the depth camera, the candidate poses including an initial pose of the depth camera;
calibrating the target pose of the depth camera according to the candidate poses, the point cloud data, and the optimization method.
Specifically, the calibrating the target pose of the depth camera according to the candidate poses, the point cloud data, and the optimization method may include:
substituting the point cloud data and the candidate poses into an objective function corresponding to the optimization method to obtain operation results;
determining a target operation result that satisfies a preset condition, and calibrating the candidate pose corresponding to the target operation result as the target pose of the depth camera.
Optionally, the objective function corresponding to the optimization method is set according to the following formula:

f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
Specifically, the determining the target operation result that satisfies the preset condition may include: determining the smallest operation result as the target operation result.
It can be understood that the robot coordinate system is a coordinate system established with the projection of the robot's center point on the ground as the origin, the direction directly in front of the robot as the X axis, the direction directly to the left of the robot as the Y axis, and the vertically upward direction as the Z axis.
In a second aspect, an embodiment of the present application provides a pose calibration apparatus applied to a depth camera, the depth camera being installed on a robot. The pose calibration apparatus may include:
a point cloud data determination module, configured to obtain, through the depth camera, a depth image containing a target plane and to determine point cloud data corresponding to the depth image, the target plane being the plane where the robot is located;
a pose calibration module, configured to calibrate a target pose of the depth camera according to the point cloud data and a preset optimization method, the target pose including a pitch angle and a roll angle of the depth camera and a height of the depth camera in a robot coordinate system.
Exemplarily, the pose calibration module may include:
a candidate pose obtaining unit, configured to obtain candidate poses of the depth camera, the candidate poses including an initial pose of the depth camera;
a target pose calibration unit, configured to calibrate the target pose of the depth camera according to the candidate poses, the point cloud data, and the optimization method.
Specifically, the target pose calibration unit may include:
an operation sub-unit, configured to substitute the point cloud data and the candidate poses into an objective function corresponding to the optimization method to obtain operation results;
a target pose calibration sub-unit, configured to determine a target operation result that satisfies a preset condition and to calibrate the candidate pose corresponding to the target operation result as the target pose of the depth camera.
Optionally, the objective function corresponding to the optimization method is set according to the following formula:

f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
Specifically, the target pose calibration sub-unit is further configured to determine the smallest operation result as the target operation result.
It can be understood that the robot coordinate system is a coordinate system established with the projection of the robot's center point on the ground as the origin, the direction directly in front of the robot as the X axis, the direction directly to the left of the robot as the Y axis, and the vertically upward direction as the Z axis.
In a third aspect, an embodiment of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the pose calibration method according to any one of the above first aspects.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the pose calibration method according to any one of the above first aspects.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a robot, causes the robot to perform the pose calibration method according to any one of the above first aspects.
Beneficial Effects
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
In the embodiments of the present application, a depth image containing the target plane (that is, the plane where the robot is located) can be obtained through a depth camera on the robot, and the point cloud data corresponding to the depth image can be determined, so that the target pose of the depth camera, that is, the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system, can be calibrated according to the point cloud data and a preset optimization method. This can effectively improve the accuracy of target pose calibration; moreover, the implementation is simple and the amount of computation is small, which can effectively improve the efficiency of target pose calibration and improve user experience.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a pose calibration method provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a pose calibration apparatus provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a robot provided by an embodiment of the present application.
Embodiments of the Invention
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary details do not obscure the description of the present application.
It should be understood that, when used in the specification and the appended claims of the present application, the term "comprise" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.
It should also be understood that the term "and/or" used in the specification and the appended claims of the present application refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in the specification and the appended claims of the present application, the term "if" may be contextually interpreted as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to determining", "once the [described condition or event] is detected", or "in response to detecting the [described condition or event]".
In addition, in the description of the specification and the appended claims of the present application, the terms "first", "second", "third", and the like are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.
References in this specification to "one embodiment" or "some embodiments" and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like appearing in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "include", "comprise", "have" and their variants all mean "including but not limited to", unless specifically emphasized otherwise.
A depth camera is installed in a movable device such as a robot, and during the movement of the movable device, the depth camera can be used to detect the ground, obstacles, cliffs, and the like for obstacle avoidance. In order for the movable device to accurately detect the ground, obstacles, cliffs, and the like, the pose of the depth camera needs to be calibrated in advance. Because of the installation error and measurement error of the depth camera, the current approach of calibrating the pose of the depth camera with a specially made calibration board (such as a black-and-white checkerboard) suffers from low accuracy and low efficiency.
To solve the above problems, an embodiment of the present application provides a pose calibration method. The method can obtain, through a depth camera on a robot, a depth image containing the target plane (that is, the plane where the robot is located) and determine the point cloud data corresponding to the depth image, so as to calibrate the target pose of the depth camera according to the point cloud data and a preset optimization method, that is, to calibrate the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system. This can effectively improve the accuracy of target pose calibration; moreover, the implementation is simple and the amount of computation is small, which can effectively improve the efficiency of target pose calibration and improve user experience, with strong ease of use and practicability.
Referring to FIG. 1, FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application. As shown in FIG. 1, a depth camera 100 can be installed on a robot 101, for example on the upper part of the robot 101 and tilted obliquely downward, so that the field of view of the depth camera 100 covers the ground 102, as shown by the dotted lines in FIG. 1. In this way, a depth image containing the ground where the robot 101 is located can be obtained through the depth camera 100 to guide the robot 101 in avoiding obstacles.
It should be understood that, in the embodiments of the present application, the target pose of the depth camera 100 can be calibrated according to the depth image obtained by the depth camera 100. During the calibration of the target pose, both the camera coordinate system and the robot coordinate system are involved. The robot coordinate system may take the projection of the center point of the robot 101 on the ground 102 as the origin O, the direction directly in front of the robot 101 as the X axis, the direction directly to the left of the robot 101 as the Y axis, and the vertically upward direction as the Z axis.
In the robot coordinate system, the position coordinates of the depth camera 100 may be expressed as (Xs, Ys, Zs), and the attitude parameters may be expressed as (Ro, Po, Yo), where Ro is the roll angle of the depth camera 100, that is, its rotation angle around the X axis; Po is the pitch angle of the depth camera 100, that is, its rotation angle around the Y axis; and Yo is the yaw angle of the depth camera 100, that is, its rotation angle around the Z axis.
It should be noted that the main parameters that affect ground detection by the depth camera 100 are Zs, Ro, and Po; the remaining parameters do not affect ground detection. The pose calibration method provided by the embodiments of the present application therefore mainly calibrates Zs, Ro, and Po of the depth camera 100, that is, the height of the depth camera in the robot coordinate system and the roll angle and pitch angle of the depth camera. It should be understood that the height in the robot coordinate system may be the Z-axis component, or Z-axis coordinate, in the robot coordinate system; the Z-axis component and the Z-axis coordinate described in the embodiments of the present application have the same meaning.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a pose calibration method provided by an embodiment of the present application. The pose calibration method can be applied to the application scenario shown in FIG. 1. As shown in FIG. 2, the pose calibration method may include:
S201: obtaining, through the depth camera, a depth image containing a target plane, and determining point cloud data corresponding to the depth image, the target plane being the plane where the robot is located.
In the embodiments of the present application, the robot can be placed on level, open ground, and the target plane is the ground where the robot is located. On this basis, the depth camera on the robot can obtain a depth image containing the target plane, that is, a depth image of the ground within the field of view of the depth camera, and the depth image can be converted into point cloud data, thereby obtaining the point cloud data corresponding to the depth image. The number of point cloud points corresponds to the resolution of the depth camera; for example, when the resolution of the depth camera is 640×480, 640×480 point cloud points can be obtained.
S202: calibrating the target pose of the depth camera according to the point cloud data and a preset optimization method, the target pose including the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system.
Exemplarily, the calibrating the target pose of the depth camera according to the point cloud data and the preset optimization method may include:
step a: obtaining candidate poses of the depth camera, the candidate poses including an initial pose of the depth camera;
step b: calibrating the target pose of the depth camera according to the candidate poses, the point cloud data, and the optimization method.
It can be understood that the robot detects obstacles or cliffs on the ground mainly based on the heights of objects on the ground: in theory, a height greater than 0 indicates an obstacle, and a height less than 0 indicates a cliff. Height here refers to the height of an object in the robot coordinate system, that is, the Z-axis component obtained by transforming the point cloud data corresponding to the depth image captured by the depth camera into the robot coordinate system.
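A minimal sketch of this height rule follows, assuming the robot-frame heights of the points have already been computed; the function name and the small tolerance band around 0 are illustrative assumptions added to absorb sensor noise and are not part of the original text.

```python
import numpy as np

def classify_heights(z_robot, tol=0.02):
    # Per the rule above: height > 0 marks an obstacle, height < 0 marks a
    # cliff; a +/- tol band (assumed, 2 cm here) is treated as ground.
    labels = np.full(z_robot.shape, "ground", dtype=object)
    labels[z_robot > tol] = "obstacle"
    labels[z_robot < -tol] = "cliff"
    return labels

print(classify_heights(np.array([0.15, 0.0, -0.30])))  # obstacle, ground, cliff
```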
Therefore, a transformation from the camera coordinate system to the robot coordinate system is needed to transform the point cloud data into the robot coordinate system. In the embodiments of the present application, the transformation from the camera coordinate system to the robot coordinate system consists of a rotation around the X axis and a rotation around the Y axis, followed by a translation along the Z axis; that is, the coordinate transformation matrix T from the camera coordinate system to the robot coordinate system may be T = LZ·RY·RX, where LZ is the translation matrix along the Z axis, RY is the rotation matrix around the Y axis, and RX is the rotation matrix around the X axis.
Specifically,

LZ = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & Zs \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad RY = \begin{pmatrix} \cos Po & 0 & \sin Po & 0 \\ 0 & 1 & 0 & 0 \\ -\sin Po & 0 & \cos Po & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad RX = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos Ro & -\sin Ro & 0 \\ 0 & \sin Ro & \cos Ro & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

so the coordinate transformation matrix T from the camera coordinate system to the robot coordinate system is:

T = LZ \, RY \, RX = \begin{pmatrix} \cos Po & \sin Po \sin Ro & \sin Po \cos Ro & 0 \\ 0 & \cos Ro & -\sin Ro & 0 \\ -\sin Po & \cos Po \sin Ro & \cos Po \cos Ro & Zs \\ 0 & 0 & 0 & 1 \end{pmatrix}

Accordingly, the conversion between the homogeneous coordinates S(x0, y0, z0, 1) in the camera coordinate system and the homogeneous coordinates R(x1, y1, z1, 1) in the robot coordinate system may be R = TS, and the Z-axis component after conversion into the robot coordinate system relates to Po, Ro, and Zs as:

z_1 = -x_0 \sin Po + y_0 \cos Po \sin Ro + z_0 \cos Po \cos Ro + Zs
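As a sketch, the matrix T and the z_1 relation can be written and cross-checked as follows; the numeric pose values are placeholders rather than values from the text.

```python
import numpy as np

def camera_to_robot_transform(po, ro, zs):
    # T = LZ @ RY @ RX, built from the matrices given above.
    rx = np.array([[1, 0, 0, 0],
                   [0, np.cos(ro), -np.sin(ro), 0],
                   [0, np.sin(ro),  np.cos(ro), 0],
                   [0, 0, 0, 1]])
    ry = np.array([[ np.cos(po), 0, np.sin(po), 0],
                   [0, 1, 0, 0],
                   [-np.sin(po), 0, np.cos(po), 0],
                   [0, 0, 0, 1]])
    lz = np.eye(4)
    lz[2, 3] = zs
    return lz @ ry @ rx

# Cross-check: the third row of T reproduces
# z1 = -x0*sin(Po) + y0*cos(Po)*sin(Ro) + z0*cos(Po)*cos(Ro) + Zs.
po, ro, zs = 0.6, 0.05, 0.35            # placeholder pose
x0, y0, z0 = 0.1, 0.2, 1.5              # placeholder camera-frame point
T = camera_to_robot_transform(po, ro, zs)
z1 = (T @ np.array([x0, y0, z0, 1.0]))[2]
expected = (-x0 * np.sin(po) + y0 * np.cos(po) * np.sin(ro)
            + z0 * np.cos(po) * np.cos(ro) + zs)
assert np.isclose(z1, expected)
```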
It can be understood that, in the robot coordinate system, when the robot is on an absolutely level surface and the depth camera has neither installation error nor measurement error, the Z-axis coordinates of all point cloud data transformed into the robot coordinate system are 0. However, because of installation and measurement errors, the Z-axis coordinates of the transformed point cloud data fluctuate within a certain range. To reduce the installation and measurement errors, the target pose of the depth camera needs to be calibrated so that, when the robot is on a level surface, the Z-axis coordinates of the point cloud data transformed into the robot coordinate system are close to 0. Therefore, in the embodiments of the present application, the objective function corresponding to the optimization method can be set according to the above relationship between the Z-axis component and Po, Ro, and Zs, so as to accurately calibrate the target pose of the depth camera according to the objective function and the point cloud data. Specifically, the objective function corresponding to the optimization method can be set as:

f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
It should be understood that the candidate poses may include the initial pose of the depth camera and all possible poses of the depth camera, where the initial pose of the depth camera may be the pose in which the depth camera is actually installed on the robot. All possible poses of the depth camera may be determined according to the actual situation, which is not specifically limited in the embodiments of the present application; one way to enumerate them is sketched below.
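A minimal sketch of one such enumeration, assuming the candidates form a regular grid around the installed pose; the spans and step sizes are illustrative assumptions only, since the text leaves the candidate set to the actual situation.

```python
import numpy as np
from itertools import product

def candidate_poses(po0, ro0, zs0,
                    angle_span=np.deg2rad(3.0), angle_step=np.deg2rad(0.5),
                    z_span=0.03, z_step=0.005):
    # The initial (installed) pose comes first, followed by a grid of
    # perturbed poses around it; spans/steps are assumptions, not from the text.
    pos = po0 + np.arange(-angle_span, angle_span + 1e-9, angle_step)
    ros = ro0 + np.arange(-angle_span, angle_span + 1e-9, angle_step)
    zss = zs0 + np.arange(-z_span, z_span + 1e-9, z_step)
    return [(po0, ro0, zs0)] + list(product(pos, ros, zss))

candidates = candidate_poses(0.60, 0.00, 0.35)   # placeholder initial pose
print(len(candidates))                           # 1 + 13*13*13 = 2198
```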
After the objective function corresponding to the optimization method is determined, the point cloud data and the candidate poses can be substituted into the objective function to obtain operation results, the target operation result satisfying the preset condition can be determined, and the candidate pose corresponding to the target operation result can then be calibrated as the target pose of the depth camera. The preset condition may be that the operation result is the smallest, that is, the smallest operation result may be determined as the target operation result, so that the candidate pose with the smallest operation result is determined as the target pose of the depth camera. The calibrated target pose thus transforms each point cloud point to a Z-axis coordinate in the robot coordinate system that is closest to 0, which reduces installation and measurement errors and improves the accuracy of the depth camera pose calibration, so that the robot can effectively detect the ground for obstacle avoidance.
For example, the initial pose of the depth camera can be used as the first candidate pose, and the first candidate pose and each point cloud point (for example, 640*480 points) can be substituted into the objective function corresponding to the optimization method to calculate the operation result corresponding to the first candidate pose. Then the second candidate pose can be obtained and substituted, together with each point cloud point, into the objective function to calculate the operation result corresponding to the second candidate pose, and so on, until the last candidate pose and each point cloud point are substituted into the objective function and the operation result corresponding to the last candidate pose is calculated. Finally, the smallest operation result is found among all the operation results, and the candidate pose corresponding to the smallest operation result (for example, the fifth candidate pose) can be calibrated as the target pose of the depth camera.
In the embodiments of the present application, a depth image containing the target plane (that is, the plane where the robot is located) can be obtained through the depth camera on the robot, and the point cloud data corresponding to the depth image can be determined, so that the target pose of the depth camera, that is, the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system, can be calibrated according to the point cloud data and a preset optimization method. This can effectively improve the accuracy of target pose calibration; moreover, the implementation is simple and the amount of computation is small, which can effectively improve the efficiency of target pose calibration and improve user experience.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the pose calibration method described in the above embodiments, FIG. 3 shows a structural block diagram of the pose calibration apparatus provided by the embodiments of the present application. For ease of description, only the parts related to the embodiments of the present application are shown.
Referring to FIG. 3, the pose calibration apparatus is applied to a depth camera, the depth camera being installed on a robot, and the pose calibration apparatus may include:
a point cloud data determination module 301, configured to obtain, through the depth camera, a depth image containing a target plane and to determine point cloud data corresponding to the depth image, the target plane being the plane where the robot is located;
a pose calibration module 302, configured to calibrate the target pose of the depth camera according to the point cloud data and a preset optimization method, the target pose including the pitch angle and roll angle of the depth camera and the height of the depth camera in the robot coordinate system.
Exemplarily, the pose calibration module 302 may include:
a candidate pose obtaining unit, configured to obtain candidate poses of the depth camera, the candidate poses including an initial pose of the depth camera;
a target pose calibration unit, configured to calibrate the target pose of the depth camera according to the candidate poses, the point cloud data, and the optimization method.
Specifically, the target pose calibration unit may include:
an operation sub-unit, configured to substitute the point cloud data and the candidate poses into the objective function corresponding to the optimization method to obtain operation results;
a target pose calibration sub-unit, configured to determine the target operation result satisfying the preset condition and to calibrate the candidate pose corresponding to the target operation result as the target pose of the depth camera.
Optionally, the objective function corresponding to the optimization method is set according to the following formula:

f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
Specifically, the target pose calibration sub-unit is further configured to determine the smallest operation result as the target operation result.
It can be understood that the robot coordinate system is a coordinate system established with the projection of the robot's center point on the ground as the origin, the direction directly in front of the robot as the X axis, the direction directly to the left of the robot as the Y axis, and the vertically upward direction as the Z axis.
It should be noted that, because the information interaction and execution processes between the above apparatuses/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is only used as an example. In practical applications, the above functions can be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
FIG. 4 is a schematic structural diagram of a robot provided by an embodiment of the present application. As shown in FIG. 4, the robot of this embodiment includes: at least one processor 40 (only one is shown in FIG. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40. The processor 40, when executing the computer program 42, implements the steps in any of the foregoing embodiments of the pose calibration method.
The robot may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art can understand that FIG. 4 is only an example of the robot 4 and does not constitute a limitation on the robot 4; the robot may include more or fewer components than shown in the figure, combine certain components, or use different components, and may, for example, also include input and output devices, network access devices, and the like.
The processor 40 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In some embodiments, the memory 41 may be an internal storage unit of the robot 4, such as a hard disk or a memory of the robot 4. In other embodiments, the memory 41 may also be an external storage device of the robot 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the robot 4. Further, the memory 41 may include both an internal storage unit of the robot 4 and an external storage device. The memory 41 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 41 can also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, can implement the steps in the foregoing method embodiments.
Embodiments of the present application provide a computer program product which, when run on a robot, causes the robot to implement the steps in the foregoing method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of the above method embodiments. The computer program includes computer program code, and the computer program code can be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include at least: any entity or device capable of carrying the computer program code to the apparatus/robot, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, according to legislation and patent practice, computer-readable storage media may not be electrical carrier signals and telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not detailed or recorded in a certain embodiment, reference may be made to the related descriptions of other embodiments.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the apparatus/robot embodiments described above are only illustrative; the division of the modules or units is only a logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of the technical features can be equivalently replaced; and these modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and should all be included within the protection scope of the present application.

Claims (10)

  1. A pose calibration method, applied to a depth camera installed on a robot, the pose calibration method comprising:
    obtaining, through the depth camera, a depth image containing a target plane, and determining point cloud data corresponding to the depth image, the target plane being the plane where the robot is located;
    calibrating a target pose of the depth camera according to the point cloud data and a preset optimization method, the target pose comprising a pitch angle and a roll angle of the depth camera and a height of the depth camera in a robot coordinate system.
  2. The pose calibration method according to claim 1, wherein the calibrating the target pose of the depth camera according to the point cloud data and the preset optimization method comprises:
    obtaining candidate poses of the depth camera, the candidate poses comprising an initial pose of the depth camera;
    calibrating the target pose of the depth camera according to the candidate poses, the point cloud data, and the optimization method.
  3. The pose calibration method according to claim 2, wherein the calibrating the target pose of the depth camera according to the candidate poses, the point cloud data, and the optimization method comprises:
    substituting the point cloud data and the candidate poses into an objective function corresponding to the optimization method to obtain operation results;
    determining a target operation result satisfying a preset condition, and calibrating the candidate pose corresponding to the target operation result as the target pose of the depth camera.
  4. The pose calibration method according to claim 3, wherein the objective function corresponding to the optimization method is set according to the following formula:

    f(Po_j, Ro_j, Zs_j) = \sum_{i=1}^{N} \left( -x_i \sin Po_j + y_i \cos Po_j \sin Ro_j + z_i \cos Po_j \cos Ro_j + Zs_j \right)^2

    where f(Po, Ro, Zs) is the objective function corresponding to the optimization method, N is the total number of point cloud points, (x_i, y_i, z_i) are the coordinates of the ith point cloud point in the camera coordinate system, Po_j is the pitch angle corresponding to the jth candidate pose, Ro_j is the roll angle corresponding to the jth candidate pose, and Zs_j is the height corresponding to the jth candidate pose.
  5. The pose calibration method according to claim 3, wherein the determining the target operation result satisfying the preset condition comprises:
    determining the smallest operation result as the target operation result.
  6. The pose calibration method according to any one of claims 1 to 5, wherein the robot coordinate system is a coordinate system established with the projection of the robot's center point on the ground as the origin, the direction directly in front of the robot as the X axis, the direction directly to the left of the robot as the Y axis, and the vertically upward direction as the Z axis.
  7. A pose calibration apparatus, applied to a depth camera installed on a robot, the pose calibration apparatus comprising:
    a point cloud data determination module, configured to obtain, through the depth camera, a depth image containing a target plane and to determine point cloud data corresponding to the depth image, the target plane being the plane where the robot is located;
    a pose calibration module, configured to calibrate a target pose of the depth camera according to the point cloud data and a preset optimization method, the target pose comprising a pitch angle and a roll angle of the depth camera and a height of the depth camera in a robot coordinate system.
  8. The pose calibration apparatus according to claim 7, wherein the pose calibration module comprises:
    a candidate pose obtaining unit, configured to obtain candidate poses of the depth camera, the candidate poses comprising an initial pose of the depth camera;
    a target pose calibration unit, configured to calibrate the target pose of the depth camera according to the candidate poses, the point cloud data, and the optimization method.
  9. A robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the pose calibration method according to any one of claims 1 to 6.
  10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the pose calibration method according to any one of claims 1 to 6.
PCT/CN2021/125046 2021-03-30 2021-10-20 Pose calibration method and apparatus, robot, and computer-readable storage medium WO2022205845A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/721,313 US20220327739A1 (en) 2021-03-30 2022-04-14 Pose calibration method, robot and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110344237.1 2021-03-30
CN202110344237.1A CN112967347B (zh) Pose calibration method and apparatus, robot, and computer-readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/721,313 Continuation US20220327739A1 (en) 2021-03-30 2022-04-14 Pose calibration method, robot and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2022205845A1 (zh)

Family

ID=76280671

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125046 WO2022205845A1 (zh) 2021-03-30 2021-10-20 位姿标定方法、装置、机器人及计算机可读存储介质

Country Status (3)

Country Link
US (1) US20220327739A1 (zh)
CN (1) CN112967347B (zh)
WO (1) WO2022205845A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967347B (zh) * 2021-03-30 2023-12-15 深圳市优必选科技股份有限公司 Pose calibration method and apparatus, robot, and computer-readable storage medium
CN113609985B (zh) * 2021-08-05 2024-02-23 诺亚机器人科技(上海)有限公司 Object pose detection method, detection device, robot, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018048353A1 (en) * 2016-09-09 2018-03-15 Nanyang Technological University Simultaneous localization and mapping methods and apparatus
CN107808407A (zh) * 2017-10-16 2018-03-16 亿航智能设备(广州)有限公司 Binocular-camera-based visual SLAM method for an unmanned aerial vehicle, unmanned aerial vehicle, and storage medium
CN112365542A (zh) * 2020-11-26 2021-02-12 上海禾赛科技股份有限公司 Pose calibration method, pose calibration device, and automatic control system
CN112541950A (zh) * 2019-09-20 2021-03-23 杭州海康机器人技术有限公司 Method and device for calibrating extrinsic parameters of a depth camera
CN112967347A (zh) * 2021-03-30 2021-06-15 深圳市优必选科技股份有限公司 Pose calibration method and apparatus, robot, and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2581843B (en) * 2019-03-01 2021-06-02 Arrival Ltd Calibration system and method for robotic cells
CN111965624B (zh) * 2020-08-06 2024-04-09 阿波罗智联(北京)科技有限公司 Calibration method, apparatus, and device for lidar and camera, and readable storage medium


Also Published As

Publication number Publication date
US20220327739A1 (en) 2022-10-13
CN112967347B (zh) 2023-12-15
CN112967347A (zh) 2021-06-15

Similar Documents

Publication Publication Date Title
WO2022205845A1 (zh) Pose calibration method and apparatus, robot, and computer-readable storage medium
WO2021115331A1 (zh) Triangulation-based coordinate positioning method, apparatus, device, and storage medium
CN107633536B (zh) Camera calibration method and system based on a two-dimensional planar template
CN110561423B (zh) Pose transformation method, robot, and storage medium
WO2022160787A1 (zh) Robot hand-eye calibration method and apparatus, readable storage medium, and robot
CN109828250B (zh) Radar calibration method, calibration apparatus, and terminal device
US20220319050A1 (en) Calibration method and apparatus, processor, electronic device, and storage medium
CN113510703A (zh) Robot pose determination method and apparatus, robot, and storage medium
WO2022160266A1 (zh) Calibration method and apparatus for a vehicle-mounted camera, and terminal device
US20240001558A1 (en) Robot calibration method, robot and computer-readable storage medium
CN111381224A (zh) Laser data calibration method and apparatus, and mobile terminal
US20220327740A1 (en) Registration method and registration apparatus for autonomous vehicle
CN111311671B (zh) Workpiece measurement method and apparatus, electronic device, and storage medium
CN111429530A (zh) Coordinate calibration method and related apparatus
CN111368927A (zh) Method, apparatus, device, and storage medium for processing annotation results
CN111832634A (zh) Foreign object detection method and system, terminal device, and storage medium
CN111336938A (zh) Robot and object distance detection method and apparatus thereof
CN114359400A (zh) Extrinsic parameter calibration method and apparatus, computer-readable storage medium, and robot
WO2022160879A1 (zh) Method and apparatus for determining conversion parameters
CN115018922A (zh) Distortion parameter calibration method, electronic device, and computer-readable storage medium
CN114693769A (zh) Calibration method and apparatus for a C-arm machine
CN112183524A (zh) Robot wired network docking method and system, terminal device, and storage medium
WO2022204953A1 (zh) Method and apparatus for determining a pitch angle, and terminal device
CN114998426B (zh) Robot ranging method and apparatus
CN112446928B (zh) External parameter determination system and method for a photographing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21934481

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21934481

Country of ref document: EP

Kind code of ref document: A1