WO2024045274A1 - Hand-eye calibration method, system, terminal and medium for a robot vision system (机器人视觉系统的手眼标定方法、系统、终端及介质) - Google Patents

Hand-eye calibration method, system, terminal and medium for a robot vision system (机器人视觉系统的手眼标定方法、系统、终端及介质)

Info

Publication number
WO2024045274A1
WO2024045274A1, PCT/CN2022/125016, CN2022125016W
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
point cloud
robot
ball
camera
Prior art date
Application number
PCT/CN2022/125016
Other languages
English (en)
French (fr)
Inventor
张文卿
付傲然
Original Assignee
上海智能制造功能平台有限公司
Priority date
Filing date
Publication date
Application filed by 上海智能制造功能平台有限公司
Publication of WO2024045274A1 publication Critical patent/WO2024045274A1/zh

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1692Calibration of manipulator
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/0095Means or methods for testing manipulators

Definitions

  • the present invention relates to a camera-robot system calibration technology in the field of recognition and grasping in industrial production; specifically, it relates to a hand-eye calibration method, system, terminal and medium for a high-precision robot vision system.
  • the currently commonly used calibration system is based on a plane calibration plate.
  • the camera is used to identify the pose of the calibration plate, and combined with the coordinates of the robot, the camera-robot coordinate relationship is solved.
  • This method requires the two-dimensional calibration plate to be within the field of view of the camera.
  • Another common method is based on auxiliary devices, such as laser trackers or three-axis trackers, to achieve hand-eye calibration. This method will greatly increase the cost and operational difficulty of the calibration system. Therefore, a fast, versatile and low-cost 3D camera-robot calibration system is of great significance.
  • the Chinese invention patent with authorization announcement number CN110842901B discloses a "robot hand-eye calibration method and device based on a novel three-dimensional calibration block". By adjusting the robot pose and the placement of the three-dimensional calibration block, the three-dimensional vision device can capture a point cloud of the block containing three key points; from the coordinates of these key points in the camera coordinate system and the robot coordinate system, the camera-robot hand-eye relationship can be solved.
  • the Chinese invention patent with authorization announcement number CN112091971B discloses a "robot hand-eye calibration method, device, electronic equipment and system", which includes the following steps: 1. obtain the three-dimensional point cloud image information captured by the three-dimensional camera, which includes point cloud information of at least three marker balls that are not all on the same straight line; 2. from the point cloud image information, calculate the first position data of each marker ball's centroid in the camera coordinate system; 3. obtain the second position data of the centroid in the robot base coordinate system; 4. compute the transformation matrix between the camera coordinate system and the robot base coordinate system from the first and second position data.
  • the present invention provides a hand-eye calibration method, system, terminal and medium for a robot vision system.
  • a hand-eye calibration method for a robot vision system including:
  • the hand-eye calibration relationship between the camera and the robot is calculated to complete the hand-eye calibration of the robot vision system.
  • setting the workspace of the three-dimensional point cloud camera includes:
  • the calibration ball includes a calibration piece and a positioning piece; the calibration piece includes a metal ball, a connecting rod and a base flange, and the positioning piece is provided with a cylindrical groove matched to the radius of the metal ball;
  • the metal ball is fixed to the end of the robot through the connecting rod and the base flange in turn; the positioning piece is fixed on the working platform; contact between the metal ball and the bottom and side surfaces of the cylindrical groove gives a determinate mating state between the calibration piece and the positioning piece, fixing the position of the calibration ball in the workspace;
  • the method further includes: changing the position of the calibration ball in the work space multiple times to obtain the center coordinates of the calibration ball in the robot coordinate system at the corresponding position.
  • the point cloud screening includes:
  • the extraction of the calibration ball includes:
  • the point cloud is further screened according to the fitting result;
  • the random sample consensus (RANSAC) algorithm is performed again to extract the fine coordinates of the calibration ball and obtain the center coordinates of the calibration ball in the three-dimensional camera coordinate system.
  • calculating the hand-eye calibration relationship between the camera and the robot based on the coordinates of the center of the calibration ball in the camera coordinate system and the robot coordinate system each time includes:
  • a hand-eye calibration system for a robot vision system including:
  • the robot module includes a calibration ball pre-installed at the end of the robot.
  • Point cloud acquisition module which, based on a three-dimensional point cloud camera, acquires the three-dimensional point cloud information of the calibration ball in the working space of the camera, performs point cloud screening to obtain the calibration ball point cloud information, and sends it to the calibration module;
  • Calibration module which is used to control the position of the robot module in the working space of the three-dimensional point cloud camera, extract the calibration ball according to the point cloud information of the calibration ball, and obtain the center coordinates of the calibration ball in the three-dimensional camera coordinate system.
  • a terminal including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it can be used to execute any of the methods described above.
  • a computer-readable storage medium on which a computer program is stored.
  • when the program is executed by a processor, it can be used to perform any of the above methods.
  • the present invention has at least one of the following beneficial effects:
  • the hand-eye calibration method, system, terminal and medium of the robot vision system provided by the present invention are easy to deploy, low in cost and widely applicable, which is of great significance for reducing enterprise production costs and improving production efficiency.
  • Figure 1 is a workflow diagram of a hand-eye calibration method of a robot vision system in an embodiment of the present invention
  • Figure 2 is a schematic diagram of the component modules of the hand-eye calibration system of the robot vision system in one embodiment of the present invention
  • Figure 3 is a schematic structural diagram of a calibration ball in a preferred embodiment of the present invention.
  • Figure 4 is a schematic diagram of the positioning of the calibration link of the calibration ball in the robot coordinate system in a preferred embodiment of the present invention.
  • 1 is the calibration part
  • 2 is the positioning part
  • 11 is the metal ball
  • 12 is the connecting rod
  • 13 is the base flange
  • 21 is the cylindrical groove.
  • An embodiment of the present invention provides a hand-eye calibration method for a robot vision system.
  • the hand-eye calibration method of the robot vision system may include the following steps:
  • S100: set the workspace of the three-dimensional point cloud camera;
  • S200: install a calibration ball at the end of the robot, move it into the workspace of the camera, control its position in the workspace, and calculate its center coordinates in the robot coordinate system;
  • S300: acquire the three-dimensional point cloud information of the calibration ball in the workspace and perform point cloud screening to obtain calibration ball point cloud information;
  • S400: extract the calibration ball from the calibration ball point cloud information and obtain its center coordinates in the three-dimensional camera coordinate system;
  • S500: from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, calculate the hand-eye calibration relationship between the camera and the robot, completing the hand-eye calibration of the robot vision system.
  • setting the working space of the three-dimensional point cloud camera includes:
  • installing a calibration ball at the end of the robot, moving it into the workspace of the three-dimensional point cloud camera, controlling its position in the workspace, and calculating the center coordinates of the calibration ball in the robot coordinate system includes:
  • the calibration ball includes a calibration piece and a positioning piece;
  • the calibration piece includes a metal ball, a connecting rod and a base flange, and the positioning piece is equipped with a cylindrical groove that matches the radius of the metal ball;
  • the metal ball is fixed to the end of the robot through the connecting rod and the base flange in turn; the positioning piece is fixed on the working platform; through the contact between the metal ball and the bottom and side surfaces of the cylindrical groove, a determinate mating state between the calibration piece and the positioning piece is obtained, fixing the position of the calibration ball in the workspace;
  • the position of the calibration ball in the work space is changed multiple times to obtain the coordinates of the center of the calibration ball in the robot coordinate system at the corresponding position.
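The tool-calibration idea above (re-posing the robot while the groove holds the ball center fixed) can be posed as a small linear least-squares problem. The following sketch is an illustrative reconstruction, not the patent's stated procedure; the function name and the assumption that flange poses arrive as rotation matrices plus translations are choices of this example.

```python
import numpy as np

def estimate_tool_offset(flange_rotations, flange_positions):
    """Estimate the tool offset t (ball center in the flange frame) and the
    fixed world point c from n >= 3 flange poses that all keep the ball
    center in the same spot, i.e. R_i @ t + p_i = c for every pose i.

    The constraints stack into one linear least-squares system
    [R_i | -I] @ [t; c] = -p_i with six unknowns."""
    n = len(flange_rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, p) in enumerate(zip(flange_rotations, flange_positions)):
        A[3 * i:3 * i + 3, 0:3] = R
        A[3 * i:3 * i + 3, 3:6] = -np.eye(3)
        b[3 * i:3 * i + 3] = -np.asarray(p)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # tool offset t, fixed ball center c
```

With at least three sufficiently different flange orientations the six unknowns (tool offset plus fixed center) are fully determined; with identical orientations the system is rank-deficient and t and c cannot be separated.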
  • point cloud screening includes:
  • extracting the calibration ball includes:
  • the clustering algorithm is used to divide the point cloud into different parts, and according to the volume of the point cloud bounding box, the point cloud where the calibration sphere is located is selected; in this step, the point cloud bounding box is the smallest cube that surrounds the specific point cloud.
  • the point cloud where the calibration ball is located is spherical, so the length, width, and height of its bounding box are similar to the radius of the ball.
  • the bounding boxes of other point clouds, such as the end of the robot, debris, noise, etc., do not have the above characteristics, so the calibration ball point cloud can be identified;
  • the point cloud is further screened
  • the random sample consensus (RANSAC) algorithm is performed again to extract the fine coordinates of the calibration ball and obtain its center coordinates in the three-dimensional camera coordinate system. In this step, because the point cloud of the calibration ball still contains part of the connecting rod during the first RANSAC pass, a looser threshold is used to make sure the ball is detected at all, yielding ball parameters of limited accuracy. Before the second RANSAC pass, the points outside the initially identified sphere are filtered out, so that essentially only the ball's point cloud remains; a high-precision RANSAC pass then recomputes the ball parameters, giving a more accurate center.
  • Fine coordinates and sphere center coordinates both refer to the coordinates of the sphere center in the camera coordinate system; the fine coordinates are those obtained after the second RANSAC pass, meaning the result is more accurate.
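The two-pass scheme above (a loose RANSAC pass to find the ball despite rod points, then a tight pass on the filtered cloud) can be sketched in a few lines of numpy. This is a hedged illustration: the 4-point exact sphere parameterization, the iteration count, and the radius gate are assumptions of this sketch, not values from the patent.

```python
import numpy as np

def fit_sphere_ransac(points, radius, tol, iters=200, seed=None):
    """One RANSAC pass: fit exact spheres through random 4-point samples
    and keep the candidate whose surface lies within `tol` of the most
    points; `radius` gates out implausible candidates."""
    rng = np.random.default_rng(seed)
    best_center, best_inliers = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 4, replace=False)]
        # x^2 + y^2 + z^2 + D*x + E*y + F*z + G = 0 is linear in D, E, F, G
        A = np.c_[sample, np.ones(4)]
        b = -(sample ** 2).sum(axis=1)
        try:
            D, E, F, G = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue  # the 4 sampled points were coplanar
        center = -0.5 * np.array([D, E, F])
        r = np.sqrt(max(center @ center - G, 0.0))
        if abs(r - radius) > 2 * tol:
            continue  # wrong size: probably rod or noise points
        dist = np.abs(np.linalg.norm(points - center, axis=1) - r)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_center, best_inliers = center, inliers
    return best_center, best_inliers

def two_stage_sphere_center(points, radius, coarse_tol, fine_tol, seed=None):
    """Loose pass to detect the ball at all, then a tight refit on only
    the points near the detected sphere (the high-precision second pass)."""
    _, inliers = fit_sphere_ransac(points, radius, coarse_tol, seed=seed)
    center, _ = fit_sphere_ransac(points[inliers], radius, fine_tol, seed=seed)
    return center
```

The coarse tolerance tolerates rod contamination; the fine tolerance then operates on an almost pure ball cloud, which is exactly why the second pass can afford to be strict.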
  • the hand-eye calibration relationship between the camera and the robot is calculated based on the coordinates of the center of each calibration ball in the camera coordinate system and the robot coordinate system, including:
  • the pose matrix is a 4x4 matrix.
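The centering-and-SVD solution referenced here is the classical least-squares rigid alignment (the Kabsch/Arun method); below is a minimal sketch, assuming paired sphere-center measurements and the convention that a robot-frame point equals R times the camera-frame point plus t.

```python
import numpy as np

def hand_eye_svd(cam_pts, rob_pts):
    """Closed-form rigid alignment: find R, t with rob_i = R @ cam_i + t
    (in the least-squares sense) from paired sphere-center coordinates,
    via the covariance matrix H and its singular value decomposition;
    returns the 4x4 homogeneous pose matrix."""
    cam = np.asarray(cam_pts, dtype=float)
    rob = np.asarray(rob_pts, dtype=float)
    cam_mean, rob_mean = cam.mean(axis=0), rob.mean(axis=0)
    H = (cam - cam_mean).T @ (rob - rob_mean)  # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = rob_mean - R @ cam_mean
    T = np.eye(4)  # 4x4 pose matrix
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

At least three non-collinear center pairs are needed, which matches the document's requirement of at least three calibration ball positions.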
  • the hand-eye calibration method of the robot vision system obtains the point cloud information of the calibration piece and performs noise reduction, segmentation, etc. on it based on the captured data; a specially made spherical calibration ball is fixed at the end of the robot, and the purpose-built ball and flange keep the installation error of the calibration piece below 0.1 mm; by controlling the robot, transforming the spatial coordinates of the calibration ball, identifying its position and combining multiple recognition results, the hand-eye transformation matrix of the three-dimensional camera-robot system is solved.
  • An embodiment of the present invention provides a hand-eye calibration system for a robot vision system.
  • the hand-eye calibration system of the robot vision system may include the following modules:
  • Robot module which includes a calibration ball pre-installed at the end of the robot.
  • Point cloud acquisition module which is based on a three-dimensional point cloud camera, obtains the three-dimensional point cloud information of the calibration ball in the working space of the three-dimensional point cloud camera, and performs point cloud screening to obtain the calibration ball point cloud information and sends it to the calibration module;
  • Calibration module which is used to control the position of the robot module in the workspace of the three-dimensional point cloud camera, extract the calibration ball according to the calibration ball point cloud information, and obtain the center coordinates of the calibration ball in the three-dimensional camera coordinate system; from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, the hand-eye calibration relationship between the camera and the robot is calculated, completing the hand-eye calibration of the robot vision system.
  • the hand-eye calibration system of the robot vision system includes the following modules:
  • Robot module which contains a calibration ball adapted to the robot (an industrial robot or a collaborative robot). The calibration ball is fixed at the end of the robot, and a purpose-built calibration tool (i.e. the positioning piece) is used to determine the coordinates of the ball center in the robot coordinate system. During the calibration phase this module changes the position of the calibration ball multiple times and outputs the center coordinates of the calibration ball in the robot coordinate system to the calibration module.
  • the calibration ball can be customized according to the accuracy and optimal working distance of the three-dimensional camera; it includes a metal ball with a specific radius and a connecting rod with a specific height, fixed to the end of the robot through the base flange. The ball is surface-treated to avoid reflections that would prevent the 3D camera from imaging it.
  • the cylindrical groove of the calibration ball calibration tooling can be matched with the metal ball of the calibration ball with high precision, and is used for calibrating the calibration ball in the robot coordinate system.
  • Point cloud acquisition module which, based on a three-dimensional camera, acquires the three-dimensional point cloud information of the calibration ball in the workspace and adjusts the working parameters of the camera according to environmental factors such as ambient lighting and shooting noise, reducing their interference with the captured data. It then performs point cloud screening on the three-dimensional point cloud information to filter out irrelevant points such as the workbench, material frame, robot, flange and connecting rod; after each acquisition, only the point cloud of the calibration ball is retained, greatly reducing the size of the point cloud file. The resulting calibration ball point cloud information is then passed to the calibration module for subsequent calibration piece inspection and hand-eye calibration;
  • Calibration module controls the movement of the robot module and calls the point cloud acquisition module to obtain the calibration ball point cloud information at different positions.
  • the calibration ball is extracted, the ball is fitted based on the spherical point cloud of the calibration ball, and the center coordinates of the sphere in the three-dimensional camera coordinate system are obtained.
  • calculate the hand-eye calibration relationship between the camera and the robot based on the coordinates of each sphere center in the camera coordinate system and the robot coordinate system, and output the calibration error.
  • the point cloud collection module includes:
  • Point cloud acquisition module obtains the 3D point cloud information of the calibration ball in the workspace based on a 3D point cloud camera; the camera can capture 3D point clouds of objects within 2 meters, and the point cloud information includes each point's three-dimensional coordinates in the 3D camera coordinate system as well as its grayscale and RGB color information.
  • the 3D camera type is a structured light camera, and there is no need to configure additional light sources for the system.
  • Camera bracket which fixes the 3D point cloud camera in a specific working position.
  • Point cloud screening algorithm module uses a point cloud screening algorithm to delete unnecessary point cloud data (i.e. irrelevant point cloud information) based on the pre-entered working distance and the position and size parameters of the material frame, obtaining the calibration ball point cloud information.
  • the robot module includes:
  • a special calibration ball is used and fixed at the end of the robot; after the calibration ball is installed, the robot's own calibration program is used, combined with the calibration tooling adapted to the calibration ball, to complete the tool calibration of the calibration ball.
  • the structure of the calibration ball is shown in Figure 3.
  • the positioning of the calibration ball in the robot coordinate system is shown in Figure 4.
  • the calibration module includes:
  • Calibration ball extraction algorithm module which uses the calibration ball extraction algorithm to extract the precise position of the calibration ball from the complex point cloud and returns its center coordinates;
  • Hand-eye calibration algorithm module calculates the hand-eye relationship between the camera and the robot based on the coordinates of the calibration ball in at least three positions in the robot coordinate system and the camera coordinate system, and returns the calibration matrix and error results.
  • the point cloud screening algorithm includes the following steps:
  • Step 1 According to the distance between the 3D point cloud camera and the workbench and the size of the workspace, remove the point clouds outside the workspace and only retain the point clouds within the workspace;
  • Step 2 Use the voxel filtering method or the straight-through filtering method to filter out irrelevant point cloud information, reduce the density of the point cloud, and maintain the distribution characteristics of the point cloud to obtain the calibration ball point cloud information.
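The two screening steps above can be sketched with plain numpy: a passthrough crop to the workspace box, then a voxel-grid downsample that keeps one centroid per occupied voxel. The bounds format and voxel size are illustrative assumptions of this sketch, not values from the patent.

```python
import numpy as np

def passthrough_filter(points, bounds):
    """Step 1: keep only points inside the axis-aligned workspace box.
    bounds = ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in the camera frame."""
    mask = np.ones(len(points), dtype=bool)
    for axis, (lo, hi) in enumerate(bounds):
        mask &= (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[mask]

def voxel_downsample(points, voxel):
    """Step 2: replace all points falling in one voxel by their centroid,
    thinning the cloud while keeping its spatial distribution."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.empty((len(counts), 3))
    for axis in range(3):
        out[:, axis] = np.bincount(inverse, weights=points[:, axis]) / counts
    return out
```

Point cloud libraries such as Open3D and PCL ship equivalent crop and voxel filters; the numpy version is shown only to make the two steps explicit.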
  • the calibration ball extraction algorithm includes the following steps:
  • the outlier removal algorithm is used to remove point clouds that are not distributed centrally, thereby achieving the effect of retaining the calibration ball point cloud and separating the calibration ball from other point clouds;
  • the clustering algorithm is used to divide the point cloud into different parts, and according to the volume of the point cloud bounding box, the point cloud where the calibration sphere is located is selected;
  • the third step is to use the RANSAC (random sample consensus) algorithm to perform a preliminary fit of the calibration ball;
  • the fourth step is to further screen the point cloud based on the fitting results
  • the fifth step is to run the RANSAC algorithm again, extract the fine coordinates of the calibration ball, and return the computed sphere center coordinates as the result.
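The first two steps of the extraction (clustering, then selecting the cluster whose bounding box is ball-sized) might look like the sketch below. The naive single-linkage clustering and the slack factor are simplifications chosen for illustration, not the patent's exact algorithm; the size test compares the box side lengths against the ball diameter.

```python
import numpy as np

def euclidean_clusters(points, radius):
    """Single-linkage clustering by region growing: points closer than
    `radius` end up in the same cluster (O(n^2); fine after screening)."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            near = np.flatnonzero(
                (labels == -1)
                & (np.linalg.norm(points - points[i], axis=1) < radius))
            labels[near] = current
            stack.extend(near.tolist())
        current += 1
    return labels

def pick_ball_cluster(points, labels, ball_radius, slack=0.5):
    """Select the cluster whose axis-aligned bounding box has all three
    side lengths close to the ball diameter; rods, robot links and noise
    fail this size test."""
    target = 2.0 * ball_radius
    for label in range(labels.max() + 1):
        cluster = points[labels == label]
        extent = cluster.max(axis=0) - cluster.min(axis=0)
        if np.all(np.abs(extent - target) < slack * target):
            return cluster
    return None
```

In practice a library routine (e.g. DBSCAN-style Euclidean cluster extraction) would replace the O(n^2) loop, but the bounding-box size test stays the same.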
  • RANSAC: random sample consensus.
  • the robot module is pre-installed with a calibration ball and computes the tool coordinate system of the calibration ball; it moves the ball into the working space of the camera and transmits the ball's center coordinates at that moment to the calibration module;
  • the point cloud acquisition module is used to obtain the three-dimensional point cloud information in the workspace and to downsample it (point cloud screening); the processed point cloud is passed to the calibration module;
  • the calibration module starts working after obtaining at least three sets of sphere center coordinates and point cloud data of the calibration ball at different positions: it extracts the calibration ball, obtains its center coordinates in the camera coordinate system and, based on the calibration algorithm, solves the camera-robot calibration matrix.
  • An embodiment of the present invention provides a terminal, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it can be used to execute the method of any one of the above embodiments of the present invention.
  • the memory is used to store programs; it may include volatile memory such as random-access memory (RAM), e.g. static RAM (SRAM) or double data rate synchronous dynamic RAM (DDR SDRAM), and may also include non-volatile memory such as flash memory.
  • the memory is used to store computer programs (such as application programs, functional modules, etc. that implement the above methods), computer instructions, etc.
  • the above-mentioned computer programs, computer instructions, etc. can be stored in one or more memories in partitions. And the above-mentioned computer programs, computer instructions, data, etc. can be called by the processor.
  • the processor is configured to execute the computer program stored in the memory to implement each step in the method involved in the above embodiments. For details, please refer to the relevant descriptions in the previous method embodiments.
  • the processor and memory can be independent structures or integrated structures integrated together. When the processor and memory are independent structures, the memory and processor can be connected through bus coupling.
  • An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored.
  • when the program is executed by a processor, it can be used to perform the method of any of the above embodiments of the present invention.
  • the hand-eye calibration method, system, terminal and medium of the robot vision system provided by the above embodiments of the present invention are easy to deploy, low in cost and widely applicable, which is of great significance for industrial development, reducing enterprise production costs and improving production efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A hand-eye calibration method and system for a robot vision system. The hand-eye calibration method includes: setting the workspace of a three-dimensional point cloud camera (S100); installing a calibration ball at the end of the robot, moving it into the workspace of the camera, controlling its position in the workspace, and calculating the center coordinates of the calibration ball in the robot coordinate system (S200); acquiring the three-dimensional point cloud information of the calibration ball in the workspace and performing point cloud screening to obtain calibration ball point cloud information (S300); extracting the calibration ball from this point cloud information and obtaining its center coordinates in the three-dimensional camera coordinate system (S400); and, from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, calculating the hand-eye calibration relationship between the camera and the robot, completing the hand-eye calibration of the robot vision system (S500). The method is easy to deploy, low in cost and widely applicable, and is of great significance for advancing manufacturing, reducing enterprise production costs and improving production efficiency.

Description

Hand-eye calibration method, system, terminal and medium for a robot vision system
Technical Field
The present invention relates to a camera-robot system calibration technology in the field of recognition and grasping in industrial production, and specifically to a hand-eye calibration method, system, terminal and medium for a high-precision robot vision system.
Background Art
With the development of machine vision and robotics, depth cameras and industrial robots cooperate in more and more industrial settings, with the robot perceiving scene information in real time through the camera; this requires the three-dimensional camera-robot system to be calibrated to high precision in advance. A typical camera-robot system changes the position of the camera or the robot as work tasks change, so the three-dimensional camera-robot system must be recalibrated; a fast and high-precision calibration system is therefore needed.
The currently common calibration systems are based on a planar calibration board: the camera identifies the pose of the board and, combined with the robot's coordinates, the camera-robot coordinate relationship is solved. This method requires the two-dimensional calibration board to be within the camera's field of view, and because the board is bulky and the robot's motion space is limited, the method is considerably constrained. Another common method relies on auxiliary devices, such as laser trackers or three-axis trackers, to achieve hand-eye calibration, which greatly increases the cost and operational difficulty of the calibration system. A fast, versatile and low-cost three-dimensional camera-robot calibration system is therefore of great significance.
A search of the prior art found the following:
The Chinese invention patent with authorization announcement number CN110842901B discloses a "robot hand-eye calibration method and device based on a novel three-dimensional calibration block". By adjusting the robot pose and the placement of the three-dimensional calibration block, the three-dimensional vision device can capture a point cloud of the block containing three key points; from the coordinates of these key points in the camera coordinate system and the robot coordinate system, the camera-robot hand-eye relationship can be solved.
The Chinese invention patent with authorization announcement number CN112091971B discloses a "robot hand-eye calibration method, device, electronic equipment and system", which includes the following steps: 1. obtain the three-dimensional point cloud image information captured by the three-dimensional camera, which includes point cloud information of at least three marker balls that are not all on the same straight line; 2. from the point cloud image information, calculate the first position data of each marker ball's centroid in the camera coordinate system; 3. obtain the second position data of the centroid in the robot base coordinate system; 4. compute the transformation matrix between the camera coordinate system and the robot base coordinate system from the first and second position data.
The above two patents are low-cost and fast, but the calibration pieces are overly complex, information from three key points must be extracted simultaneously, the pose of the calibration piece is constrained, and applicability is limited.
Summary of the Invention
In view of the above deficiencies in the prior art, the present invention provides a hand-eye calibration method, system, terminal and medium for a robot vision system.
According to one aspect of the present invention, a hand-eye calibration method for a robot vision system is provided, including:
setting the workspace of a three-dimensional point cloud camera;
installing a calibration ball at the end of the robot, moving the calibration ball into the workspace of the three-dimensional point cloud camera, controlling the position of the calibration ball in the workspace, calculating the center coordinates of the calibration ball in the robot coordinate system, and sending them to a calibration module;
acquiring the three-dimensional point cloud information of the calibration ball in the workspace and performing point cloud screening to obtain calibration ball point cloud information;
extracting the calibration ball from the calibration ball point cloud information and obtaining its center coordinates in the three-dimensional camera coordinate system;
repeating the above steps to obtain the center coordinates of the calibration ball in the robot coordinate system and the three-dimensional camera coordinate system multiple times;
from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, calculating the hand-eye calibration relationship between the camera and the robot, completing the hand-eye calibration of the robot vision system.
Optionally, setting the workspace of the three-dimensional point cloud camera includes:
fixing the three-dimensional point cloud camera in a specific working position and adjusting its working parameters according to environmental factors to obtain the workspace of the camera.
Optionally, installing a calibration ball at the end of the robot, moving it into the workspace of the three-dimensional point cloud camera, controlling its position in the workspace, and calculating its center coordinates in the robot coordinate system includes:
customizing a calibration ball that includes a calibration piece and a positioning piece, wherein the calibration piece includes a metal ball, a connecting rod and a base flange, and the positioning piece is provided with a cylindrical groove matched to the radius of the metal ball;
fixing the metal ball to the end of the robot through the connecting rod and the base flange in turn; fixing the positioning piece on the working platform; through contact between the metal ball and the bottom and side surfaces of the cylindrical groove, obtaining a determinate mating state between the calibration piece and the positioning piece and fixing the position of the calibration ball in the workspace;
changing the pose of the robot multiple times, using the mating state between the calibration piece and the positioning piece to ensure that the position of the ball center remains unchanged, and calculating the coordinates of the ball center in the robot coordinate system.
Optionally, the method further includes: changing the position of the calibration ball in the workspace multiple times to obtain the center coordinates of the calibration ball in the robot coordinate system at each position.
Optionally, the point cloud screening includes:
removing point clouds outside the workspace according to the distance between the three-dimensional point cloud camera and the workbench and the size of the workspace, retaining only the point cloud within the workspace;
filtering out irrelevant point cloud information using a voxel filter or a passthrough filter to obtain the calibration ball point cloud information.
Optionally, extracting the calibration ball includes:
removing loosely distributed points using an outlier removal algorithm;
segmenting the point cloud into parts using a clustering algorithm and selecting, according to the volume of each cluster's bounding box, the point cloud of the sphere surface of the calibration ball;
performing a preliminary fit of the calibration ball using the random sample consensus (RANSAC) algorithm;
further screening the point cloud according to the fitting result;
running the RANSAC algorithm again to extract the fine coordinates of the calibration ball and obtain its center coordinates in the three-dimensional camera coordinate system.
Optionally, calculating the hand-eye calibration relationship between the camera and the robot from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position includes:
calculating the mean of the ball center coordinates in the camera coordinate system and in the robot coordinate system respectively, and taking the difference between the two means as the translation between the two coordinate systems;
obtaining a set of ball center coordinates in the camera coordinate system and a set in the robot coordinate system, subtracting the respective means to complete centering, and obtaining two centered sets of coordinates;
calculating the covariance matrix H of the two sets of coordinates and performing singular value decomposition on H to obtain the pose matrix of the robot coordinate system relative to the camera coordinate system, and thereby the hand-eye calibration relationship between the camera and the robot.
According to another aspect of the present invention, a hand-eye calibration system for a robot vision system is provided, including:
a robot module, which includes a calibration ball pre-installed at the end of the robot; by moving the calibration ball into the workspace of the three-dimensional point cloud camera, the position of the ball in the workspace is obtained and its center coordinates in the robot coordinate system are calculated;
a point cloud acquisition module, which, based on the three-dimensional point cloud camera, acquires the three-dimensional point cloud information of the calibration ball in the camera's workspace, performs point cloud screening to obtain the calibration ball point cloud information, and sends it to a calibration module;
a calibration module, which controls the position of the robot module in the workspace of the camera, extracts the calibration ball from the calibration ball point cloud information, obtains its center coordinates in the three-dimensional camera coordinate system and, from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, calculates the hand-eye calibration relationship between the camera and the robot, completing the hand-eye calibration of the robot vision system.
According to a third aspect of the present invention, a terminal is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it can be used to execute the method of any one of the above.
According to a fourth aspect of the present invention, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program can be used to execute the method of any one of the above.
Owing to the above technical solutions, the present invention has at least one of the following beneficial effects over the prior art:
the hand-eye calibration method, system, terminal and medium of the robot vision system provided by the present invention are easy to deploy, low in cost and widely applicable; they can be rapidly deployed in a system composed of any model of industrial robot and three-dimensional camera, and are of great significance for advancing manufacturing, reducing enterprise production costs and improving production efficiency.
Brief Description of the Drawings
Other features, objects and advantages of the present invention will become more apparent from the detailed description of non-limiting embodiments with reference to the following drawings:
Figure 1 is a workflow diagram of the hand-eye calibration method of a robot vision system in an embodiment of the present invention;
Figure 2 is a schematic diagram of the modules of the hand-eye calibration system of a robot vision system in an embodiment of the present invention;
Figure 3 is a schematic structural diagram of the calibration ball in a preferred embodiment of the present invention;
Figure 4 is a schematic diagram of the positioning of the calibration ball during the calibration step in the robot coordinate system in a preferred embodiment of the present invention.
In the figures: 1 is the calibration piece, 2 is the positioning piece, 11 is the metal ball, 12 is the connecting rod, 13 is the base flange, and 21 is the cylindrical groove.
Detailed Description
Embodiments of the present invention are described in detail below. The embodiments are implemented on the premise of the technical solution of the present invention and give detailed implementations and specific operating procedures. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, all of which fall within the scope of protection of the present invention.
An embodiment of the present invention provides a hand-eye calibration method for a robot vision system.
As shown in Figure 1, the hand-eye calibration method of this embodiment may include the following steps:
S100, setting the workspace of the three-dimensional point cloud camera;
S200, installing a calibration ball at the end of the robot, moving it into the workspace of the camera, controlling its position in the workspace, and calculating its center coordinates in the robot coordinate system;
S300, acquiring the three-dimensional point cloud information of the calibration ball in the workspace and performing point cloud screening to obtain calibration ball point cloud information;
S400, extracting the calibration ball from the calibration ball point cloud information and obtaining its center coordinates in the three-dimensional camera coordinate system;
repeating the above steps to obtain the center coordinates of the calibration ball in the robot coordinate system and the three-dimensional camera coordinate system multiple times;
S500, from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, calculating the hand-eye calibration relationship between the camera and the robot, completing the hand-eye calibration of the robot vision system.
In a preferred embodiment of S100, setting the workspace of the three-dimensional point cloud camera includes:
fixing the three-dimensional point cloud camera in a specific working position and adjusting its working parameters according to environmental factors to obtain the workspace of the camera.
In a preferred embodiment of S200, installing a calibration ball at the end of the robot, moving it into the workspace of the three-dimensional point cloud camera, controlling its position in the workspace, and calculating its center coordinates in the robot coordinate system includes:
customizing a calibration ball that includes a calibration piece and a positioning piece; the calibration piece includes a metal ball, a connecting rod and a base flange, and the positioning piece is provided with a cylindrical groove matched to the radius of the metal ball;
fixing the metal ball to the end of the robot through the connecting rod and the base flange in turn; fixing the positioning piece on the working platform; through contact between the metal ball and the bottom and side surfaces of the cylindrical groove, obtaining a determinate mating state between the calibration piece and the positioning piece and fixing the position of the calibration ball in the workspace;
changing the pose of the robot multiple times, using the mating state between the calibration piece and the positioning piece to ensure that the position of the ball center remains unchanged, and calculating the coordinates of the ball center in the robot coordinate system.
In a preferred embodiment of S200, the position of the calibration ball in the workspace is changed multiple times to obtain the center coordinates of the calibration ball in the robot coordinate system at each position.
In a preferred embodiment of S300, the point cloud screening includes:
removing point clouds outside the workspace according to the distance between the three-dimensional point cloud camera and the workbench and the size of the workspace, retaining only the point cloud within the workspace;
filtering out irrelevant point cloud information using a voxel filter or a passthrough filter to obtain the calibration ball point cloud information.
In a preferred embodiment of S400, extracting the calibration ball includes:
removing loosely distributed points using an outlier removal algorithm;
segmenting the point cloud into parts using a clustering algorithm and selecting, according to the volume of each cluster's bounding box, the point cloud of the sphere surface of the calibration ball. In this step the bounding box is the smallest box enclosing a given cluster; the cluster belonging to the calibration ball is spherical, so the length, width and height of its bounding box are close to the ball radius, whereas the bounding boxes of other clusters (the robot end, debris, noise, etc.) do not have this property, which allows the calibration ball cluster to be identified;
performing a preliminary fit of the calibration ball using the random sample consensus (RANSAC) algorithm;
further screening the point cloud according to the fitting result;
running the RANSAC algorithm again to extract the fine coordinates of the calibration ball and obtain its center coordinates in the three-dimensional camera coordinate system. In this step, because the point cloud of the calibration ball still contains part of the connecting rod during the first RANSAC pass, a looser threshold is used to make sure the ball is detected, yielding ball parameters of limited accuracy. Before the second pass, points outside the initially identified sphere are filtered out, so that essentially only the ball's point cloud remains; a high-precision RANSAC pass then recomputes the ball parameters, giving a more accurate center. Fine coordinates and sphere center coordinates both refer to the coordinates of the sphere center in the camera coordinate system; the fine coordinates are those obtained after the second pass, meaning the result is more accurate.
In a preferred embodiment of S500, calculating the hand-eye calibration relationship between the camera and the robot from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position includes:
calculating the mean of the ball center coordinates in the camera coordinate system and in the robot coordinate system respectively, and taking the difference between the two means as the translation between the two coordinate systems;
obtaining a set of ball center coordinates in the camera coordinate system and a set in the robot coordinate system, subtracting the respective means to complete centering, and obtaining two centered sets of coordinates;
calculating the covariance matrix H of the two sets of coordinates and performing singular value decomposition on H to obtain the pose matrix of the robot coordinate system relative to the camera coordinate system, and thereby the hand-eye calibration relationship between the camera and the robot. In one specific application instance, the pose matrix is a 4x4 matrix.
The hand-eye calibration method provided by the above embodiments of the present invention obtains the point cloud information of the calibration piece and performs noise reduction, segmentation, etc. on it based on the captured data; a specially made spherical calibration ball is fixed at the end of the robot, and the purpose-built ball and flange keep the installation error of the calibration piece below 0.1 mm; by controlling the robot, transforming the spatial coordinates of the calibration ball, identifying its position and combining multiple recognition results, the hand-eye transformation matrix of the three-dimensional camera-robot system is solved.
An embodiment of the present invention provides a hand-eye calibration system for a robot vision system.
As shown in Figure 2, the hand-eye calibration system of this embodiment may include the following modules:
a robot module, which includes a calibration ball pre-installed at the end of the robot; by moving the calibration ball into the workspace of the three-dimensional point cloud camera, the position of the ball in the workspace is obtained, and its center coordinates in the robot coordinate system are calculated and sent to the calibration module;
a point cloud acquisition module, which, based on the three-dimensional point cloud camera, acquires the three-dimensional point cloud information of the calibration ball in the camera's workspace, performs point cloud screening to obtain the calibration ball point cloud information, and sends it to the calibration module;
a calibration module, which controls the position of the robot module in the workspace of the camera, extracts the calibration ball from the calibration ball point cloud information, obtains its center coordinates in the three-dimensional camera coordinate system and, from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, calculates the hand-eye calibration relationship between the camera and the robot, completing the hand-eye calibration of the robot vision system.
It should be noted that the steps of the method provided by the present invention can be implemented with the corresponding modules, devices, units, etc. of the system; those skilled in the art can realize the composition of the system with reference to the technical solution of the method, i.e. the embodiments of the method can be understood as preferred examples for constructing the system.
Further, the hand-eye calibration system of the robot vision system provided by the present invention includes the following modules:
Robot module: this module contains a calibration ball adapted to the robot (an industrial robot or a collaborative robot). The calibration ball is fixed at the end of the robot, and a purpose-built calibration tool (i.e. the positioning piece) is used to determine the coordinates of the ball center in the robot coordinate system. During the calibration phase this module changes the position of the calibration ball multiple times and outputs the center coordinates of the calibration ball in the robot coordinate system to the calibration module. In a preferred embodiment, the calibration ball can be customized according to the accuracy and optimal working distance of the three-dimensional camera; it includes a metal ball with a specific radius and a connecting rod with a specific height, fixed to the end of the robot through the base flange. The ball is surface-treated to avoid reflections that would prevent the three-dimensional camera from imaging it. The cylindrical groove of the calibration tool mates with the metal ball to high precision and is used for calibrating the ball in the robot coordinate system.
Point cloud acquisition module: based on a three-dimensional camera, this module acquires the three-dimensional point cloud information of the calibration ball in the workspace and adjusts the working parameters of the camera according to environmental factors such as ambient lighting and shooting noise, reducing their interference with the captured data. It then performs point cloud screening on the three-dimensional point cloud information to filter out irrelevant points such as the workbench, material frame, robot, flange and connecting rod; after each acquisition, only the point cloud of the calibration ball is retained, greatly reducing the size of the point cloud file. The resulting calibration ball point cloud information is then passed to the calibration module for subsequent calibration piece inspection and hand-eye calibration;
Calibration module: this module controls the motion of the robot module and, at different positions, calls the point cloud acquisition module to obtain the calibration ball point cloud information. It then extracts the calibration ball, fits a sphere to the spherical point cloud and obtains the center coordinates in the three-dimensional camera coordinate system. After at least three center identifications, it calculates the hand-eye calibration relationship between the camera and the robot from the coordinates of each center in the camera coordinate system and the robot coordinate system, and outputs the calibration error.
进一步地,点云采集模块包括:
点云获取模块,该模块基于三维点云相机获取工作空间内标定球的三维点云信息,该相机可拍摄2米以内的物体三维点云信息,三维点云信息包括点在三维相机坐标系下的三维坐标,以及点的灰度和RGB颜色信息。三维相机类型为结构光相机,无需为系统配置额外的光源。
相机支架,该相机支架将三维点云相机固定在特定的工作位置。
点云筛选算法模块,该模块采用点云筛选算法,根据预先输入的工作距离以及物料框的位置参数、大小参数,删除不需要的点云数据(即无关点云信息),得到标定球云信息。
Further, the robot module includes:
an industrial or collaborative robot with a purpose-built calibration ball fixed to its end; once the ball is mounted, the robot's built-in calibration routine, together with the calibration tool matched to the ball, is used to complete the tool calibration of the ball. The structure of the calibration ball is shown in Fig. 3; the positioning of the ball during the calibration step in the robot coordinate system is shown in Fig. 4.
Further, the calibration module includes:
a ball extraction algorithm module, which uses a ball extraction algorithm to extract the precise position of the calibration ball from a cluttered point cloud and return its ball-center coordinates;
a hand-eye calibration algorithm module, which, from the coordinates of the calibration ball at no fewer than three positions in the robot and camera coordinate systems, computes the hand-eye relationship between the camera and the robot and returns the calibration matrix and the error result.
Further, the point cloud filtering algorithm includes the following steps:
Step 1: according to the distance between the 3D point cloud camera and the workbench and the size of the work space, remove all points outside the work space, keeping only the points inside it;
Step 2: use voxel filtering or pass-through filtering to remove irrelevant points, lowering the density of the point cloud while preserving its spatial distribution, to obtain the calibration-ball point cloud.
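The two steps above can be sketched in NumPy: the work-space crop is a pass-through filter, and the density reduction a voxel-grid downsample in which each occupied voxel is replaced by the centroid of its points. The box bounds and the 5 cm voxel size are illustrative assumptions, not values from the patent.

```python
import numpy as np

def pass_through(cloud, lo, hi):
    """Step 1: keep only points inside the axis-aligned work-space box [lo, hi]."""
    keep = np.all((cloud >= lo) & (cloud <= hi), axis=1)
    return cloud[keep]

def voxel_downsample(cloud, voxel):
    """Step 2: collapse each occupied voxel to the centroid of its points,
    lowering density while preserving the cloud's spatial distribution."""
    keys = np.floor(cloud / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, cloud)           # accumulate per-voxel coordinate sums
    counts = np.bincount(inv).astype(float)[:, None]
    return sums / counts                  # per-voxel centroids

# Illustrative scene: random points; work space = a 0.6 m box in front of the camera.
cloud = np.random.default_rng(2).uniform(-1.0, 2.0, size=(5000, 3))
lo = np.array([-0.3, -0.3, 0.5])
hi = np.array([0.3, 0.3, 1.1])
inside = pass_through(cloud, lo, hi)
sparse = voxel_downsample(inside, voxel=0.05)
```

Centroid-based voxel downsampling (rather than picking one representative point per voxel) is the choice that best matches the stated goal of preserving the distribution while cutting density.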
Further, the ball extraction algorithm includes the following steps:
First, use an outlier removal algorithm to discard sparsely distributed points, so that the calibration ball's point cloud is retained and separated from the other points;
Second, use a clustering algorithm to segment the point cloud into parts, and select the cluster containing the ball's surface according to the volume of each cluster's bounding box;
Third, use the RANSAC (random sample consensus) algorithm to fit the calibration ball coarsely;
Fourth, filter the point cloud further according to the fitting result;
Fifth, run the RANSAC (random sample consensus) algorithm again to extract the refined coordinates of the calibration ball, and return the ball-center coordinates as the result.
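The clustering and volume-based selection steps can be illustrated with a naive Euclidean clustering over the pairwise-distance graph, which is adequate for the few thousand points left after filtering (a production version would use a KD-tree). The scene, the 3 cm linking radius and the 25 mm ball radius are illustrative assumptions.

```python
import numpy as np

def euclidean_clusters(pts, radius):
    """Naive Euclidean clustering: BFS over the graph linking points closer
    than `radius`. O(n^2) memory; fine for small, pre-filtered clouds."""
    adj = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) < radius
    labels = np.full(len(pts), -1, dtype=int)
    cur = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cur
        while stack:
            nbrs = np.where(adj[stack.pop()] & (labels == -1))[0]
            labels[nbrs] = cur
            stack.extend(nbrs.tolist())
        cur += 1
    return labels

def pick_ball_cluster(pts, labels, ball_radius):
    """Select the cluster whose axis-aligned bounding-box volume is closest
    to that of the calibration ball's bounding box."""
    target = (2.0 * ball_radius) ** 3
    best, best_err = None, np.inf
    for k in range(labels.max() + 1):
        c = pts[labels == k]
        err = abs(np.prod(c.max(axis=0) - c.min(axis=0)) - target)
        if err < best_err:
            best, best_err = c, err
    return best

# Illustrative scene: a 25 mm ball surface next to a flat slab of clutter.
rng = np.random.default_rng(3)
u = rng.normal(size=(300, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
ball = np.array([0.0, 0.0, 0.6]) + 0.025 * u
slab = rng.uniform([0.2, -0.2, 0.5], [0.5, 0.2, 0.55], size=(400, 3))
pts = np.vstack([ball, slab])
labels = euclidean_clusters(pts, radius=0.03)
ball_pts = pick_ball_cluster(pts, labels, ball_radius=0.025)
```

The selected cluster then feeds the two-pass RANSAC fit described earlier.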
In the hand-eye calibration system for a robot vision system provided by the above embodiments of the present invention, the robot module mounts the calibration ball in advance and computes the ball's tool coordinate system, moves the ball into the camera's work space, and transmits the ball-center coordinates in the robot coordinate system at that moment to the calibration module; the point cloud acquisition module acquires the 3D point cloud of the work space and downsamples it (point cloud filtering), passing the processed cloud to the calibration module; once the calibration module has collected the ball-center coordinates and point cloud data for at least three different ball positions, it extracts the calibration ball, obtains the ball-center coordinates in the camera coordinate system, and solves the camera-robot calibration matrix with the calibration algorithm.
An embodiment of the present invention provides a terminal, comprising a memory, a processor and a computer program stored in the memory and runnable on the processor; when executing the program, the processor can be used to perform the method of any one of the above embodiments of the present invention.
Optionally, the memory is used to store the program. The memory may include volatile memory, e.g. random-access memory (RAM) such as static random-access memory (SRAM) or double data rate synchronous dynamic random-access memory (DDR SDRAM); it may also include non-volatile memory, e.g. flash memory. The memory stores the computer program (such as an application implementing the above method, functional modules, etc.), computer instructions and the like; the computer program, computer instructions and so on may be stored partitioned across one or more memories, and may be invoked by the processor together with the data.
The processor is used to execute the computer program stored in the memory, so as to realize each step of the methods involved in the above embodiments; for details, see the related descriptions in the foregoing method embodiments.
The processor and the memory may be independent structures or an integrated structure; when they are independent structures, they may be coupled via a bus.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program can be used to perform the method of any one of the above embodiments of the present invention.
The hand-eye calibration method, system, terminal and medium for a robot vision system provided by the above embodiments of the present invention are easy to deploy, low-cost and widely applicable; they can be rapidly deployed in any system composed of an industrial robot of any model and a 3D camera, which is of great significance for advancing the manufacturing industry, lowering enterprises' production costs and raising production efficiency.
Matters not covered in the above embodiments of the present invention are well-known techniques in the art.
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the particular embodiments described; a person skilled in the art may make various variations or modifications within the scope of the claims without affecting the substance of the present invention.

Claims (10)

  1. A hand-eye calibration method for a robot vision system, characterized by comprising:
    setting a work space of a 3D point cloud camera;
    mounting a calibration ball on the robot end, moving the calibration ball into the work space of the 3D point cloud camera, controlling the position of the calibration ball within the work space, and computing the ball-center coordinates of the calibration ball in the robot coordinate system;
    acquiring the 3D point cloud of the calibration ball within the work space and performing point cloud filtering to obtain a calibration-ball point cloud;
    extracting the calibration ball from the calibration-ball point cloud to obtain the ball-center coordinates of the calibration ball in the 3D camera coordinate system;
    repeating the above steps to obtain the ball-center coordinates of the calibration ball in the robot coordinate system and the 3D camera coordinate system multiple times;
    computing the hand-eye calibration relationship between the camera and the robot from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, thereby completing the hand-eye calibration of the robot vision system.
  2. The hand-eye calibration method for a robot vision system according to claim 1, characterized in that setting the work space of the 3D point cloud camera comprises:
    fixing the 3D point cloud camera at a specified working position, and adjusting the working parameters of the 3D point cloud camera according to ambient factors, to obtain the work space of the 3D point cloud camera.
  3. The hand-eye calibration method for a robot vision system according to claim 1, characterized in that mounting a calibration ball on the robot end, moving the calibration ball into the work space of the 3D point cloud camera, controlling the position of the calibration ball within the work space, and computing the ball-center coordinates of the calibration ball in the robot coordinate system comprises:
    custom-making the calibration ball, the calibration ball comprising a calibration piece and a positioning piece, wherein the calibration piece comprises a metal ball, a connecting rod and a base flange, and the positioning piece is provided with a cylindrical groove matched to the radius of the metal ball;
    fixing the metal ball to the robot end through the connecting rod and the base flange in turn; fixing the positioning piece on the work platform; establishing a determinate mating state between the calibration piece and the positioning piece through the contact of the metal ball with the bottom and side surfaces of the cylindrical groove, thereby fixing the position of the calibration ball in the work space;
    changing the pose of the robot several times while using the mating state between the calibration piece and the positioning piece to ensure that the position of the ball center remains unchanged, and computing the coordinates of the ball center in the robot coordinate system.
  4. The hand-eye calibration method for a robot vision system according to claim 3, characterized by further comprising:
    changing the position of the calibration ball within the work space several times, and obtaining the ball-center coordinates of the calibration ball in the robot coordinate system at each corresponding position.
  5. The hand-eye calibration method for a robot vision system according to claim 1, characterized in that the point cloud filtering comprises:
    removing all points outside the work space according to the distance between the 3D point cloud camera and the workbench and the size of the work space, keeping only the points inside the work space;
    removing irrelevant points using voxel filtering or pass-through filtering, to obtain the calibration-ball point cloud.
  6. The hand-eye calibration method for a robot vision system according to claim 1, characterized in that extracting the calibration ball comprises:
    removing sparsely distributed points using an outlier removal algorithm;
    segmenting the point cloud into parts using a clustering algorithm, then selecting the cluster containing the ball's surface according to the volume of each cluster's bounding box;
    fitting the calibration ball coarsely using the random sample consensus algorithm;
    filtering the point cloud further according to the fitting result;
    running the random sample consensus algorithm again to extract refined coordinates of the calibration ball, obtaining the ball-center coordinates of the calibration ball in the 3D camera coordinate system.
  7. The hand-eye calibration method for a robot vision system according to claim 1, characterized in that computing the hand-eye calibration relationship between the camera and the robot from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position comprises:
    computing the mean of the ball-center coordinates in the camera coordinate system and in the robot coordinate system respectively, and taking the difference between the two means as the translation between the two coordinate systems;
    obtaining a set of ball-center coordinates in the camera coordinate system and a set in the robot coordinate system, and subtracting the respective mean from each set to complete the centring step, yielding two centred sets of ball-center coordinates;
    computing the covariance matrix H of the two centred sets, and performing singular value decomposition on the covariance matrix H to obtain the pose matrix of the robot coordinate system relative to the camera coordinate system, and hence the hand-eye calibration relationship between the camera and the robot.
  8. A hand-eye calibration system for a robot vision system, characterized by comprising:
    a robot module, which includes a calibration ball mounted in advance on the robot end; by moving the calibration ball into the work space of a 3D point cloud camera, it obtains the position of the calibration ball within the work space, computes the ball-center coordinates of the calibration ball in the robot coordinate system, and sends them to a calibration module;
    a point cloud acquisition module, which uses the 3D point cloud camera to acquire the 3D point cloud of the calibration ball within the camera's work space, performs point cloud filtering to obtain a calibration-ball point cloud, and sends it to the calibration module;
    a calibration module, which controls the position of the robot module within the work space of the 3D point cloud camera; based on the calibration-ball point cloud, it extracts the calibration ball and obtains the ball-center coordinates of the calibration ball in the 3D camera coordinate system; from the coordinates of the ball center in the camera coordinate system and the robot coordinate system at each position, it computes the hand-eye calibration relationship between the camera and the robot, completing the hand-eye calibration of the robot vision system.
  9. A terminal, comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, characterized in that when executing the program, the processor can be used to perform the method of any one of claims 1-7.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that when executed by a processor, the program can be used to perform the method of any one of claims 1-7.
PCT/CN2022/125016 2022-08-29 2022-10-13 Hand-eye calibration method, system, terminal and medium for robot vision system WO2024045274A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211041992.3 2022-08-29
CN202211041992.3A CN115488878A (zh) Hand-eye calibration method, system, terminal and medium for robot vision system

Publications (1)

Publication Number Publication Date
WO2024045274A1 true WO2024045274A1 (zh) 2024-03-07

Family

ID=84466307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125016 WO2024045274A1 (zh) 2022-08-29 2022-10-13 Hand-eye calibration method, system, terminal and medium for robot vision system

Country Status (2)

Country Link
CN (1) CN115488878A (zh)
WO (1) WO2024045274A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103925879A (zh) * 2014-04-24 2014-07-16 中国科学院合肥物质科学研究院 基于3d图像传感器的室内机器人视觉手眼关系标定方法
CN110116411A (zh) * 2019-06-06 2019-08-13 浙江汉振智能技术有限公司 一种基于球目标的机器人3d视觉手眼标定方法
CN110355754A (zh) * 2018-12-15 2019-10-22 深圳铭杰医疗科技有限公司 机器人手眼系统、控制方法、设备及存储介质
CN110355755A (zh) * 2018-12-15 2019-10-22 深圳铭杰医疗科技有限公司 机器人手眼系统标定方法、装置、设备及存储介质
CN112091971A (zh) * 2020-08-21 2020-12-18 季华实验室 机器人手眼标定方法、装置、电子设备和系统
CN113362396A (zh) * 2021-06-21 2021-09-07 上海仙工智能科技有限公司 一种移动机器人3d手眼标定方法及装置
JP2022122648A (ja) * 2021-02-10 2022-08-23 株式会社キーエンス 制御装置


Also Published As

Publication number Publication date
CN115488878A (zh) 2022-12-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22957091

Country of ref document: EP

Kind code of ref document: A1