WO2021109575A1 - Robot vision guidance method and device based on integrating global vision and local vision - Google Patents

Robot vision guidance method and device based on integrating global vision and local vision

Info

Publication number
WO2021109575A1
WO2021109575A1 · PCT/CN2020/101337 · CN2020101337W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
point
overall
processed
vision
Prior art date
Application number
PCT/CN2020/101337
Other languages
English (en)
French (fr)
Inventor
郑振兴
刁世普
秦磊
Original Assignee
广东技术师范大学
广东汇博机器人技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东技术师范大学, 广东汇博机器人技术有限公司 filed Critical 广东技术师范大学
Priority to JP2021537213A priority Critical patent/JP7212236B2/ja
Publication of WO2021109575A1 publication Critical patent/WO2021109575A1/zh

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Definitions

  • the invention relates to the field of robot vision, in particular to a robot vision guidance method and device based on integrating global vision and local vision.
  • as a powerful tool of a manufacturing nation, automation equipment (robot systems) must advance toward higher speed and greater intelligence.
  • an important way to make automation equipment intelligent is to fit the machine with "eyes" and a "brain" that can work together with those eyes.
  • This "eye” can be a monocular camera, a binocular camera, a multi-eye camera, a three-dimensional scanner, or an RGB-D (RGB+Depth) sensor.
  • the core work of intelligent automation equipment includes analyzing the image data acquired by these "eyes" (for example, image recognition) and then, based on the analysis results, guiding the robot system to complete specific machining or assembly operations.
  • as machining technology advances, the surfaces of the parts to be machined become more complex and the required accuracy higher, so surface treatment of the parts (grinding and polishing) is an indispensable and important process.
  • to automate this surface treatment, the above "eyes" must capture images of the part to be processed and the above "brain" must analyze them, so that the part is precisely located in space and the target precisely detected, after which the tool at the end of the grinding and polishing robot is guided to work on the machining target of the part.
  • the detection accuracy of the existing robot vision guidance solution with a larger detection field of view is usually not high, and cannot meet the processing accuracy requirements.
  • to obtain high-precision spatial positioning information, the robot vision guidance solution needs to set a small detection field of view; larger processing targets therefore have to be detected block by block, so the computational complexity is high, a large amount of calculation is required, the calculation time is long, the overall system works inefficiently, and the demands on the software and hardware of the above "brain" are high, making real-time processing hard to achieve and failing to meet the needs of the current high-speed industrial production process. Therefore, the present invention discloses a high-precision vision guidance method and device for a grinding and polishing robot that integrates global vision and local vision for large-volume processing targets, so as to meet the accuracy requirements for high-efficiency machining of such targets.
  • the main purpose of the present invention is to provide a robot vision guidance method and device based on integrating global vision and local vision, aiming to solve the technical problems that existing robot vision guidance schemes with a larger detection field of view usually cannot reach the required machining accuracy, while schemes that obtain high-precision spatial positioning information must set a small detection field of view, so that larger processing targets have to be detected block by block, which makes the computational complexity and the amount of calculation high and the calculation time long, leaves the overall system inefficient, places high demands on software and hardware performance, makes real-time processing hard to achieve, and does not meet the needs of the current high-speed industrial production process.
  • the robot vision guidance method based on the integration of global vision and local vision includes:
  • Step 1: obtain the overall registered RGB two-dimensional image I_RGB and the overall registered depth data I_D of the target to be processed through the overall RGB-D composite sensor set at the end of the detection robot, obtain the overall region S_RGB of the target from I_RGB, and, using the calibration matrix of the overall RGB-D composite sensor, extract the overall 3D point cloud data S_3D of the target from I_D according to S_RGB;
  • Step 2: analyze S_3D to obtain the overall processing guidance path point set {AX_j} j=1->n of the target, where j is the serial number of the path point AX_j, the value range of j is [1, n], and n is the total number of path points;
  • Step 3: according to {AX_j} j=1->n and the preset high-precision detection parameters of the sub-target areas to be processed, segment S_3D into the sub-target-area point cloud set {S_3D-j} j=1->n;
  • Step 4: sort {S_3D-j} j=1->n with an optimal path planning algorithm to generate the optimal sequence {S_3D-i} i=1->n, where i corresponds one-to-one with j, and convert {AX_j} j=1->n into the path point sequence {AX_i} i=1->n according to that correspondence;
  • Step 5: setting i from 1 to n in turn, convert the overall processing guidance path point AX_i corresponding to the sub-target-area point cloud S_3D-i into the detection robot base coordinate system path point BX_i, and then convert BX_i into the grinding and polishing robot base coordinate system path point CX_i;
  • Step 6: setting i from 1 to n in turn, guide the local RGB-D composite sensor set at the end of the grinding and polishing robot to scan the target to be processed according to the base coordinate system path point CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the area corresponding to the sub-target-area point cloud S_3D-i;
  • Step 7: setting i from 1 to n in turn, use SS_3D-i as a template to find, from the preset sample point cloud RS_3D through a registration algorithm, the local high-precision point cloud RS_3D-i registered with SS_3D-i, compute the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i, and analyze and search DS_3D-i to obtain high-precision 3D processing guidance information for the grinding and polishing robot.
  • the optimal path planning algorithm in step 4 is a simulated annealing intelligent optimization algorithm.
  • the registration algorithm in step 7 is an iterative closest point algorithm based on normal distribution transformation.
  • the method of calculating the difference point cloud DS_3D-i corresponding to the local high-precision point cloud SS_3D-i and the local high-precision point cloud RS_3D-i in step 7 is a fast nearest-neighbor search algorithm.
  • the preset high-precision detection parameters of the sub-target area to be processed in the step 3 include a high-precision detection field size, detection accuracy, and detection distance.
  • the present invention further provides a robot vision guidance device based on integrating global vision and local vision, including:
  • the processing target data acquisition module is used to obtain the overall registered RGB two-dimensional image I_RGB and the overall registered depth data I_D of the target to be processed through the overall RGB-D composite sensor set at the end of the detection robot, obtain the overall region S_RGB of the target from I_RGB, and, using the calibration matrix of the overall RGB-D composite sensor, extract the overall 3D point cloud data S_3D of the target from I_D according to S_RGB;
  • the overall processing guidance path module is used to analyze S_3D and obtain the overall processing guidance path point set {AX_j} j=1->n of the target, where j is the serial number of the path point AX_j, the value range of j is [1, n], and n is the total number of path points AX_j;
  • the sub-target area segmentation module is used to segment S_3D into the sub-target-area point cloud set {S_3D-j} j=1->n according to {AX_j} j=1->n and the preset high-precision detection parameters of the sub-target areas;
  • the optimal processing path planning module is used to sort {S_3D-j} j=1->n with an optimal path planning algorithm to generate the optimal sequence {S_3D-i} i=1->n, where i corresponds one-to-one with j, and to convert {AX_j} j=1->n into the path point sequence {AX_i} i=1->n according to that correspondence;
  • the processing guidance point conversion module is used to set i from 1 to n in turn, convert the overall processing guidance path point AX_i corresponding to the sub-target-area point cloud S_3D-i into the detection robot base coordinate system path point BX_i, and then convert BX_i into the grinding and polishing robot base coordinate system path point CX_i, thereby converting the overall path point sequence {AX_i} i=1->n into the guidance point sequences {BX_i} i=1->n and {CX_i} i=1->n;
  • the local high-precision point cloud acquisition module is used to set i from 1 to n in turn and guide the local RGB-D composite sensor set at the end of the grinding and polishing robot to scan the target to be processed according to the base coordinate system path point CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the target in the area corresponding to the sub-target-area point cloud S_3D-i;
  • the high-precision processing guidance information module is used to set i from 1 to n in turn, use the local high-precision point cloud SS_3D-i as a template to find, from the preset sample point cloud RS_3D through a registration algorithm, the local high-precision point cloud RS_3D-i registered with SS_3D-i, calculate the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i, and analyze and search DS_3D-i to obtain the high-precision 3D processing guidance information of the grinding and polishing robot.
  • the optimal path planning algorithm of the optimal processing path planning module is a simulated annealing intelligent optimization algorithm.
  • the registration algorithm of the high-precision processing guidance information module is an iterative closest point algorithm based on normal distribution transformation.
  • the method for calculating the difference point cloud DS_3D-i corresponding to the local high-precision point cloud SS_3D-i and the local high-precision point cloud RS_3D-i of the high-precision processing guidance information module is a fast nearest-neighbor search algorithm.
  • the preset high-precision detection parameters of the sub-target area to be processed of the sub-target area segmentation module include a high-precision detection field size, detection accuracy, and detection distance.
  • the present invention can not only meet the accuracy requirements for high-efficiency machining of large-volume machining targets, but also greatly reduce the amount and complexity of calculation, accelerate processing, shorten calculation time and meet real-time processing requirements; at the same time, it lowers the demands on software and hardware performance, which saves costs, reduces development difficulty, and suits the high-speed, large-scale production mode.
  • FIG. 1 is a schematic flowchart of the first embodiment of the robot vision guidance method based on the integration of global vision and local vision according to the present invention
  • FIG. 2 is a schematic diagram of the functional modules of the first embodiment of the robot vision guidance device based on the integration of global vision and local vision according to the present invention
  • FIG. 3 is a schematic diagram of an RGB-D composite sensor implementing the present invention.
  • Figure 4 is a schematic diagram of a robot implementing the present invention.
  • FIG. 1 is a schematic flowchart of a first embodiment of a robot vision guidance method based on the integration of global vision and local vision according to the present invention.
  • the robot vision guidance method based on integrating global vision and local vision includes the following steps:
  • the overall registered RGB two-dimensional image I_RGB and overall registered depth data I_D of the target to be processed are acquired by the overall RGB-D composite sensor D70 provided at the end of the detection robot; the overall region S_RGB of the target O10 is obtained from I_RGB, and the calibration matrix of the overall RGB-D composite sensor is used to extract the overall 3D point cloud data S_3D of the target O10 from I_D according to S_RGB.
  • the RGB-D composite sensor D70 is set at the tip of the robot arm D40 of the detection robot, with the RGB camera D20 in the middle of the RGB-D composite vision sensor; the color image data is compressed before being transmitted to the computer to keep the RGB data analysis fast.
  • the sensors D10 and D30 on the left and right sides of the RGB-D composite vision sensor are respectively responsible for transmitting and receiving infrared light: first, infrared light is emitted toward the target O10 through the infrared emitter D10 on the left; because this light is highly random, the speckle it forms when reflected at any two different positions in space differs, forming a three-dimensional "light code" of the environment; the infrared receiver D30 on the right then collects the infrared image of the field of view; finally, the calibration matrix of the RGB-D composite vision sensor D70 is applied to this infrared image through a series of computations, yielding the depth data of the field of view.
  • the i corresponds one-to-one with the j, and according to this correspondence the overall processing guidance path point set {AX_j} j=1->n is converted into the path point sequence {AX_i} i=1->n.
  • the optimal sub-target-area point cloud sequence {S_3D-i} i=1->n generated by the optimal path planning algorithm puts the sub-processing areas S_3D-j in an order that matches the actual robot machining process, that is, the grinding and polishing robot D50 passes through every area of the target O20 without repetition, so that the total working time of D50 is minimized.
  • the overall processing guidance path point AX_i corresponding to the point cloud S_3D-i of the sub-target O30 area, obtained from the robot arm D40 of the detection robot, is converted into the detection robot D40 base coordinate system path point BX_i, which is then converted into the grinding and polishing robot D50 base coordinate system path point CX_i, thereby guiding the grinding tool D60 at the end of the robot D50 to start the subsequent operations.
  • the i is set sequentially from 1 to n, and the local RGB-D composite sensor set at the end of the grinding and polishing robot is guided to scan the target to be processed according to the overall processing guidance path point CX_i of the grinding and polishing robot base coordinate system; in this way, the local high-precision point cloud SS_3D-i corresponding to the sub-target-area point cloud S_3D-i is obtained.
  • the i is set sequentially from 1 to n, and the local high-precision point cloud SS_3D-i is used as a template to find, from the preset sample point cloud RS_3D through the registration algorithm, the local high-precision point cloud RS_3D-i registered with SS_3D-i; the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i is calculated, and DS_3D-i is analyzed and searched to obtain high-precision 3D processing guidance information for the grinding and polishing robot.
  • the processing steps described above are adopted for the larger-volume processing target O20, integrating the global vision system and the local vision system.
  • the vision system D70 of the detection robot D40 is used to roughly locate the processing target O20, divide it into blocks, and plan the path.
  • the high-precision visual inspection system on the grinding and polishing robot D50 then detects the target precisely, and the tool D60 at the end of the robot D50 is guided to carry out high-precision, high-efficiency automated grinding and polishing on the processing target area O30.
  • the optimal path planning algorithm in step S40 is a simulated annealing intelligent optimization algorithm.
  • the simulated annealing intelligent optimization algorithm has the characteristics of being mature, reliable, and easy to implement in engineering.
  • the registration algorithm in the step S70 is an iterative closest point algorithm based on normal distribution transformation.
  • the iterative nearest point algorithm based on normal distribution transformation has the characteristics of being mature, reliable, and easy to implement in engineering.
  • the method of calculating the difference point cloud DS_3D-i corresponding to the local high-precision point cloud SS_3D-i and the local high-precision point cloud RS_3D-i in step S70 is a fast nearest-neighbor search algorithm.
  • the fast nearest neighbor search algorithm is mature, reliable, and easy to implement in engineering.
  • the preset high-precision detection parameters of the sub-target area to be processed in the step S30 include a high-precision detection field size, detection accuracy, and detection distance.
  • the parameters are easy to measure and obtain, and have the characteristics of high reliability.
  • the robot vision guidance method of the above first method embodiment of the present invention can be implemented by the robot vision guidance device based on integrating global vision and local vision provided by the first device embodiment of the present invention.
  • FIG. 2 shows a robot vision guidance device 1 based on integrating global vision and local vision provided by the first device embodiment of the present invention; the device 1 includes:
  • the processing target data acquisition module 10 is used to obtain the overall registered RGB two-dimensional image I_RGB and the overall registered depth data I_D of the object to be processed through the overall RGB-D composite sensor set at the end of the detection robot, obtain the overall region S_RGB of the target from I_RGB, and, using the calibration matrix of the overall RGB-D composite sensor, extract the overall 3D point cloud data S_3D of the target from I_D according to S_RGB.
  • the RGB-D composite sensor D70 is set at the tip of the robot arm D40 of the detection robot, with the RGB camera D20 in the middle of the RGB-D composite vision sensor; the color image data is compressed before being transmitted to the computer to keep the RGB data analysis fast.
  • the sensors D10 and D30 on the left and right sides of the RGB-D composite vision sensor are respectively responsible for transmitting and receiving infrared light: first, infrared light is emitted toward the target O10 through the infrared emitter D10 on the left; because this light is highly random, the speckle it forms when reflected at any two different positions in space differs, forming a three-dimensional "light code" of the environment; the infrared receiver D30 on the right then collects the infrared image of the field of view; finally, the calibration matrix of the RGB-D composite vision sensor D70 is applied to this infrared image through a series of computations, yielding the depth data of the field of view.
  • the optimal sub-target-area point cloud sequence {S_3D-i} i=1->n generated by the optimal path planning algorithm puts the sub-processing areas S_3D-j in an order that matches the actual robot machining process, that is, the grinding and polishing robot D50 passes through every area of the target O20 without repetition, so that the total working time of D50 is minimized.
  • the processing guidance point conversion module 50 is configured to set i from 1 to n in turn, convert the overall processing guidance path point AX_i corresponding to the sub-target-area point cloud S_3D-i into the detection robot base coordinate system path point BX_i, and then convert BX_i into the grinding and polishing robot base coordinate system path point CX_i, thereby converting the overall path point sequence accordingly.
  • the overall processing guidance path point AX_i corresponding to the point cloud S_3D-i of the sub-target O30 area, obtained from the robot arm D40 of the detection robot, is converted into the detection robot D40 base coordinate system path point BX_i, which is then converted into the grinding and polishing robot D50 base coordinate system path point CX_i, thereby guiding the grinding tool D60 at the end of the robot D50 to start the subsequent operations.
  • the local high-precision point cloud acquisition module 60 is used to set i from 1 to n in turn and guide the local RGB-D composite sensor set at the end of the grinding and polishing robot to scan the target to be processed according to the base coordinate system path point CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the target in the area corresponding to the sub-target-area point cloud S_3D-i;
  • the high-precision processing guidance information module 70 is configured to set i from 1 to n in turn, use the local high-precision point cloud SS_3D-i as a template to find, from the preset sample point cloud RS_3D through a registration algorithm, the local high-precision point cloud RS_3D-i registered with SS_3D-i, calculate the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i, and analyze and search DS_3D-i to obtain the high-precision 3D processing guidance information of the grinding and polishing robot.
  • the processing steps described above are adopted for the larger-volume processing target O20, integrating the global vision system and the local vision system.
  • the vision system D70 of the detection robot D40 is used to roughly locate the processing target O20, divide it into blocks, and plan the path.
  • the high-precision visual inspection system on the grinding and polishing robot D50 then detects the target precisely, and the tool D60 at the end of the robot D50 is guided to carry out high-precision, high-efficiency automated grinding and polishing on the processing target area O30.
  • the optimal path planning algorithm of the optimal processing path planning module 40 is a simulated annealing intelligent optimization algorithm.
  • the simulated annealing intelligent optimization algorithm has the characteristics of being mature, reliable, and easy to implement in engineering.
  • the registration algorithm of the high-precision processing guidance information module 70 is an iterative closest point algorithm based on normal distribution transformation.
  • the iterative nearest point algorithm based on normal distribution transformation has the characteristics of being mature, reliable, and easy to implement in engineering.
  • the method of calculating the difference point cloud DS_3D-i corresponding to the local high-precision point cloud SS_3D-i and the local high-precision point cloud RS_3D-i of the high-precision processing guidance information module 70 is a fast nearest-neighbor search algorithm.
  • the fast nearest neighbor search algorithm is mature, reliable, and easy to implement in engineering.
  • the preset high-precision detection parameters of the sub-target area to be processed of the sub-target area segmentation module 30 include a high-precision detection field size, detection accuracy, and detection distance.
  • the parameters are easy to measure and obtain, and have the characteristics of high reliability.
  • the module units or steps of the present invention can be implemented by a general-purpose computing device; alternatively, they can be implemented by program code executable by the computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps shown or described can be performed in a different order than here, or they can be made into individual integrated circuit modules, or several of the modules or steps can be made into a single integrated circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.
  • the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to enable a terminal device (which can be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the method described in each embodiment of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

A robot vision guidance method and device (1) based on integrating global vision and local vision. A global vision system and a local vision system are combined: the processing target is first roughly located and divided into blocks, and a path is planned; a high-precision vision inspection system then detects the target precisely and guides the robot to carry out high-precision, high-efficiency automated grinding and polishing, meeting the accuracy requirements for efficiently machining large-volume processing targets.

Description

Robot vision guidance method and device based on integrating global vision and local vision
Technical Field
The present invention relates to the field of robot vision, and in particular to a robot vision guidance method and device based on integrating global vision and local vision.
Background Art
As a powerful tool of a manufacturing nation, automation equipment (robot systems) must advance toward higher speed and greater intelligence. An important way to make automation equipment intelligent is to fit the machine with "eyes" and a "brain" that can work together with those eyes. The "eyes" may be a monocular camera, a binocular camera, a multi-camera rig, a three-dimensional scanner, or an RGB-D (RGB + Depth) sensor. The core work of intelligent automation equipment includes analyzing the image data acquired by these "eyes" (for example, image recognition) and then, based on the analysis results, guiding the robot system to complete specific machining or assembly operations. As machining technology advances, the surfaces of the parts to be machined become more complex and the required machining accuracy higher, so surface treatment of the parts (grinding and polishing) is an indispensable and important process. To automate and make intelligent the surface treatment of the parts to be processed, the above "eyes" must capture images of the part and the above "brain" must analyze them, so that the part is precisely located in space and the target precisely detected, after which the tool at the end of the grinding and polishing robot is guided to work on the machining target of the part. Existing robot vision guidance schemes with a larger detection field of view usually do not reach the required detection accuracy and cannot meet the machining accuracy requirements. To obtain high-precision spatial positioning information, a robot vision guidance scheme needs to set a small detection field of view, so larger machining targets have to be detected block by block; the computational complexity and amount of calculation are therefore high, the calculation time is long, the overall system works inefficiently, the demands on the software and hardware of the above "brain" are high, real-time processing is hard to achieve, and the needs of the current high-speed industrial production process are not met. The present invention therefore discloses a high-precision vision guidance method and device for a grinding and polishing robot that integrates global vision and local vision for large-volume machining targets, so as to meet the accuracy requirements for high-efficiency machining of such targets.
Summary of the Invention
The main purpose of the present invention is to provide a robot vision guidance method and device based on integrating global vision and local vision, aiming to solve the technical problems that existing robot vision guidance schemes with a larger detection field of view usually cannot reach the required machining accuracy, while schemes that obtain high-precision spatial positioning information must set a small detection field of view, so that larger machining targets have to be detected block by block, which makes the computational complexity and the amount of calculation high and the calculation time long, leaves the overall system inefficient, places high demands on software and hardware performance, makes real-time processing hard to achieve, and does not meet the needs of the current high-speed industrial production process.
To solve the above problems, the robot vision guidance method based on integrating global vision and local vision provided by the present invention includes:
Step 1: acquire an overall registered RGB two-dimensional image I_RGB and overall registered depth data I_D of the target to be processed with an overall RGB-D composite sensor mounted at the end of a detection robot; obtain the overall region S_RGB of the target from I_RGB; and, using the calibration matrix of the overall RGB-D composite sensor, extract the overall 3D point cloud data S_3D of the target from I_D according to S_RGB;
Step 2: analyze the overall 3D point cloud data S_3D to obtain the overall processing guidance path point set {AX_j} j=1->n of the target, where AX_j is an overall processing guidance path point of the target in S_3D, j is the serial number of AX_j, the value range of j is [1, n], and n is the total number of path points AX_j;
Step 3: according to {AX_j} j=1->n and preset high-precision detection parameters of the sub-target areas to be processed, segment the overall 3D point cloud data S_3D of the target into the sub-target-area point cloud set {S_3D-j} j=1->n, where S_3D-j is the sub-target-area point cloud corresponding to the path point AX_j;
Step 4: sort {S_3D-j} j=1->n with an optimal path planning algorithm to generate the optimal sub-target-area point cloud sequence {S_3D-i} i=1->n, where S_3D-i is a sub-target-area point cloud within that sequence and i is its serial number; i corresponds one-to-one with j, and according to that correspondence the path point set {AX_j} j=1->n is converted into the path point sequence {AX_i} i=1->n;
Step 5: setting i from 1 to n in turn, convert the path point AX_i corresponding to the sub-target-area point cloud S_3D-i into the detection-robot base-frame path point BX_i, and then convert BX_i into the grinding-and-polishing-robot base-frame path point CX_i, so that the sequence {AX_i} i=1->n is converted into the detection-robot base-frame guidance point sequence {BX_i} i=1->n, which is in turn converted into the grinding-and-polishing-robot base-frame guidance point sequence {CX_i} i=1->n;
Step 6: setting i from 1 to n in turn, guide the local RGB-D composite sensor mounted at the end of the grinding and polishing robot to scan the target according to the base-frame path point CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the target in the area corresponding to S_3D-i;
Step 7: setting i from 1 to n in turn, use SS_3D-i as a template and, with a registration algorithm, find from the preset sample point cloud RS_3D the local high-precision point cloud RS_3D-i registered with SS_3D-i; compute the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i; and analyze and search DS_3D-i to obtain high-precision 3D processing guidance information for the grinding and polishing robot.
Preferably, the optimal path planning algorithm of step 4 is a simulated annealing intelligent optimization algorithm.
Preferably, the registration algorithm of step 7 is an iterative closest point algorithm based on the normal distributions transform.
Preferably, the method of step 7 for computing the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i is a fast nearest-neighbor search algorithm.
Preferably, the preset high-precision detection parameters of step 3 for the sub-target areas include the field-of-view size, detection accuracy, and detection distance of the high-precision detection.
The present invention further provides a robot vision guidance device based on integrating global vision and local vision, including:
a processing target data acquisition module, configured to acquire an overall registered RGB two-dimensional image I_RGB and overall registered depth data I_D of the target to be processed with an overall RGB-D composite sensor mounted at the end of a detection robot, obtain the overall region S_RGB of the target from I_RGB, and, using the calibration matrix of the overall RGB-D composite sensor, extract the overall 3D point cloud data S_3D of the target from I_D according to S_RGB;
an overall processing guidance path module, configured to analyze S_3D to obtain the overall processing guidance path point set {AX_j} j=1->n of the target, where AX_j is an overall processing guidance path point of the target in S_3D, j is the serial number of AX_j, the value range of j is [1, n], and n is the total number of path points AX_j;
a sub-target area segmentation module, configured to segment S_3D into the sub-target-area point cloud set {S_3D-j} j=1->n according to {AX_j} j=1->n and the preset high-precision detection parameters of the sub-target areas, where S_3D-j is the sub-target-area point cloud corresponding to AX_j;
an optimal processing path planning module, configured to sort {S_3D-j} j=1->n with an optimal path planning algorithm to generate the optimal sub-target-area point cloud sequence {S_3D-i} i=1->n, where S_3D-i is a sub-target-area point cloud within that sequence and i is its serial number; i corresponds one-to-one with j, and according to that correspondence the path point set {AX_j} j=1->n is converted into the path point sequence {AX_i} i=1->n;
a processing guidance point conversion module, configured to set i from 1 to n in turn, convert the path point AX_i corresponding to S_3D-i into the detection-robot base-frame path point BX_i and then into the grinding-and-polishing-robot base-frame path point CX_i, so that the sequence {AX_i} i=1->n is converted into the detection-robot base-frame guidance point sequence {BX_i} i=1->n and that in turn into the grinding-and-polishing-robot base-frame guidance point sequence {CX_i} i=1->n;
a local high-precision point cloud acquisition module, configured to set i from 1 to n in turn and guide the local RGB-D composite sensor at the end of the grinding and polishing robot to scan the target according to CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the target in the area corresponding to S_3D-i;
a high-precision processing guidance information module, configured to set i from 1 to n in turn, use SS_3D-i as a template and, with a registration algorithm, find from the preset sample point cloud RS_3D the local high-precision point cloud RS_3D-i registered with SS_3D-i, compute the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i, and analyze and search DS_3D-i to obtain high-precision 3D processing guidance information for the grinding and polishing robot.
Preferably, the optimal path planning algorithm of the optimal processing path planning module is a simulated annealing intelligent optimization algorithm.
Preferably, the registration algorithm of the high-precision processing guidance information module is an iterative closest point algorithm based on the normal distributions transform.
Preferably, the method of the high-precision processing guidance information module for computing the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i is a fast nearest-neighbor search algorithm.
Preferably, the preset high-precision detection parameters of the sub-target area segmentation module include the field-of-view size, detection accuracy, and detection distance of the high-precision detection.
Through the above high-precision vision guidance scheme for a grinding and polishing robot that integrates global vision and local vision for large-volume machining targets, the present invention not only meets the accuracy requirements for high-efficiency machining of such targets, but also greatly reduces the amount and complexity of computation, speeds up processing, shortens calculation time and satisfies real-time processing requirements, while lowering the demands on software and hardware performance, saving cost, reducing development difficulty, and suiting the high-speed, large-scale production mode.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a first embodiment of the robot vision guidance method based on integrating global vision and local vision according to the present invention;
Fig. 2 is a schematic diagram of the functional modules of a first embodiment of the robot vision guidance device based on integrating global vision and local vision according to the present invention;
Fig. 3 is a schematic diagram of an RGB-D composite sensor implementing the present invention;
Fig. 4 is a schematic diagram of a robot implementing the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
The mobile terminal implementing the various embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "component" or "unit" used to denote elements are only intended to facilitate the description of the present invention and have no specific meaning in themselves; therefore "module" and "component" can be used interchangeably.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the first embodiment of the robot vision guidance method based on integrating global vision and local vision of the present invention. As shown in Fig. 1, the method includes the following steps:
S10: processing target data acquisition.
That is, acquire the overall registered RGB two-dimensional image I_RGB and overall registered depth data I_D of the target to be processed with the overall RGB-D composite sensor D70 mounted at the end of the detection robot, obtain the overall region S_RGB of the target O10 from I_RGB, and, using the calibration matrix of the overall RGB-D composite sensor, extract the overall 3D point cloud data S_3D of the target O10 from I_D according to S_RGB.
Referring to Fig. 3, the RGB-D composite sensor D70 is mounted at the tip of the arm D40 of the detection robot, with the RGB camera D20 in the middle of the sensor; the color image data is compressed before being transmitted to the computer to keep the RGB data analysis fast. The sensors D10 and D30 on the left and right of the RGB-D composite vision sensor transmit and receive infrared light respectively: first, the infrared emitter D10 on the left projects infrared light onto the target O10; because this light is highly random, the speckle it forms when reflected at any two different positions in space differs, giving the environment a three-dimensional "light code"; the infrared receiver D30 on the right then captures the infrared image of the field of view; finally, the calibration matrix of the RGB-D composite vision sensor D70 is applied to this infrared image through a series of computations to obtain the depth data of the field of view.
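As a concrete illustration of this acquisition step (not part of the original disclosure), the sketch below back-projects the registered depth pixels that fall inside the segmented region S_RGB into the overall point cloud S_3D. It assumes the sensor calibration can be used as a 3x3 pinhole intrinsic matrix and that the depth map is pixel-aligned with the RGB image; all function and parameter names are illustrative.

```python
import numpy as np

def extract_region_point_cloud(depth_map, region_mask, K, depth_scale=0.001):
    """Back-project depth pixels inside a segmented RGB region into a 3D point cloud.

    depth_map   : (H, W) depth image registered to the RGB image (raw units, e.g. mm)
    region_mask : (H, W) boolean mask of the target region S_RGB
    K           : 3x3 intrinsic/calibration matrix of the RGB-D sensor
    depth_scale : factor converting raw depth units to metres
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # pixel coordinates of the region with valid (non-zero) depth
    v, u = np.nonzero(region_mask & (depth_map > 0))
    z = depth_map[v, u].astype(np.float64) * depth_scale

    # pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))   # (N, 3) point cloud S_3D in camera coordinates
```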
S20: overall processing guidance path.
That is, analyze the overall 3D point cloud data S_3D to obtain the overall processing guidance path point set {AX_j} j=1->n of the target, where AX_j is an overall processing guidance path point of the target in S_3D, j is the serial number of AX_j, the value range of j is [1, n], and n is the total number of path points AX_j.
The AX_i are coordinate vectors corresponding to the overall 3D point cloud data S_3D, and the path point set {AX_j} j=1->n corresponds to the set of all such AX_i of S_3D.
S30: sub-target area segmentation.
That is, according to {AX_j} j=1->n and the preset high-precision detection parameters of the sub-target areas, segment the overall 3D point cloud data S_3D of the target into the sub-target-area point cloud set {S_3D-j} j=1->n, where S_3D-j is the sub-target-area point cloud corresponding to the path point AX_j.
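The text does not spell out how the segmentation is carried out; a minimal sketch, assuming each sub-target area is simply the neighbourhood of its guidance path point bounded by the preset high-precision detection field size (function and parameter names are illustrative), could look like this:

```python
import numpy as np
from scipy.spatial import cKDTree

def split_into_subregions(points, path_points, field_radius):
    """Partition the overall point cloud S_3D into sub-target clouds {S_3D-j},
    one per guidance path point AX_j, using the preset detection field size
    as a spherical neighbourhood (an illustrative choice, not the patent's own rule)."""
    tree = cKDTree(points)
    subregions = []
    for ax in path_points:                          # ax is a 3-vector AX_j
        idx = tree.query_ball_point(ax, r=field_radius)
        subregions.append(points[idx])              # S_3D-j
    return subregions
```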
S40: optimal processing path planning.
That is, sort {S_3D-j} j=1->n with the optimal path planning algorithm to generate the optimal sub-target-area point cloud sequence {S_3D-i} i=1->n, where S_3D-i is a sub-target-area point cloud within that sequence, i is its serial number, i corresponds one-to-one with j, and according to that correspondence the path point set {AX_j} j=1->n is converted into the path point sequence {AX_i} i=1->n.
The optimal sequence {S_3D-i} i=1->n generated by the path planning algorithm puts the sub-processing areas S_3D-j in an order that matches the actual robot machining process, that is, the grinding and polishing robot D50 passes through every area of the target O20 without repetition, so that the total working time of D50 is minimized.
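A minimal sketch of this ordering step, assuming the sub-target areas are represented by their centroids and that "shortest total working time" is approximated by the shortest travel distance between consecutive areas; this is generic simulated annealing with segment-reversal (2-opt) moves, not the authors' exact formulation:

```python
import numpy as np

def anneal_order(centroids, T0=1.0, T_end=1e-3, alpha=0.995, seed=0):
    """Order sub-region centroids so every area is visited once with an
    (approximately) minimal total travel distance."""
    rng = np.random.default_rng(seed)
    n = len(centroids)

    def tour_length(o):
        return np.linalg.norm(np.diff(centroids[o], axis=0), axis=1).sum()

    cur = np.arange(n)
    cur_len = tour_length(cur)
    best, best_len = cur.copy(), cur_len
    T = T0
    while T > T_end:
        i, j = sorted(rng.choice(n, size=2, replace=False))
        cand = cur.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]          # reverse a segment (2-opt move)
        cand_len = tour_length(cand)
        # accept improvements always, worsenings with Boltzmann probability
        if cand_len < cur_len or rng.random() < np.exp((cur_len - cand_len) / T):
            cur, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = cur.copy(), cur_len
        T *= alpha                                   # geometric cooling schedule
    return best                                      # ordering i = 1..n over the indices j
```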
S50: processing guidance point conversion.
That is, setting i from 1 to n in turn, convert the path point AX_i corresponding to S_3D-i into the detection-robot base-frame path point BX_i and then into the grinding-and-polishing-robot base-frame path point CX_i, so that the sequence {AX_i} i=1->n is converted into the detection-robot base-frame guidance point sequence {BX_i} i=1->n, which is in turn converted into the grinding-and-polishing-robot base-frame guidance point sequence {CX_i} i=1->n.
Referring to Fig. 4, the path point AX_i corresponding to the sub-target O30 area point cloud S_3D-i, obtained from the arm D40 of the detection robot, is converted into the detection robot D40 base-frame path point BX_i, which is then converted into the grinding-and-polishing robot D50 base-frame path point CX_i, thereby guiding the grinding tool D60 at the end of the robot D50 to start the subsequent operations.
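Assuming the hand-eye calibration of the overall sensor and the base-to-base calibration between the two robots are available as 4x4 homogeneous matrices (the text does not give them; names below are illustrative), the conversion AX_i -> BX_i -> CX_i reduces to two matrix multiplications:

```python
import numpy as np

def to_homogeneous(p):
    """Append a 1 so a 3-vector can be multiplied by a 4x4 transform."""
    return np.append(p, 1.0)

def convert_guidance_point(ax, T_base_cam, T_polish_detect):
    """Convert a guidance path point through the two calibrated frames:
       AX_i (overall-sensor frame) -> BX_i (detection-robot base frame) via T_base_cam,
       BX_i -> CX_i (grinding/polishing-robot base frame) via T_polish_detect.
    Both matrices are assumed to come from prior hand-eye / base-to-base calibration."""
    bx = T_base_cam @ to_homogeneous(ax)       # detection robot base coordinates
    cx = T_polish_detect @ bx                  # grinding/polishing robot base coordinates
    return bx[:3], cx[:3]
```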
S60: local high-precision point cloud acquisition.
That is, setting i from 1 to n in turn, guide the local RGB-D composite sensor at the end of the grinding and polishing robot to scan the target according to the base-frame path point CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the target in the area corresponding to S_3D-i.
S70: high-precision processing guidance information.
That is, setting i from 1 to n in turn, use SS_3D-i as a template and, with the registration algorithm, find from the preset sample point cloud RS_3D the local high-precision point cloud RS_3D-i registered with SS_3D-i; compute the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i; and analyze and search DS_3D-i to obtain the high-precision 3D processing guidance information of the grinding and polishing robot.
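The step names an iterative closest point algorithm initialized by the normal distributions transform. As a simplified stand-in (not the authors' exact method), the sketch below runs plain point-to-point ICP with an SVD (Kabsch) update and nearest neighbours from a KD-tree; it assumes a rough pre-alignment, e.g. from an NDT coarse registration that is not shown, and that the relevant portion of RS_3D has already been cropped out:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(template, sample, iters=30):
    """Align `sample` (a rough patch of RS_3D) to the scanned template SS_3D-i.
    Returns the registered points (an estimate of RS_3D-i) and the 4x4 pose."""
    src = sample.copy()
    tree = cKDTree(template)
    T = np.eye(4)
    for _ in range(iters):
        _, idx = tree.query(src)                 # closest template point per source point
        tgt = template[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                      # apply the incremental rigid transform
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return src, T
```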
Thus, for a larger-volume machining target O20, the above processing steps combine a global vision system and a local vision system: the vision system D70 of the detection robot D40 first roughly locates the machining target O20, divides it into blocks, and plans the path; the high-precision vision inspection system on the grinding and polishing robot D50 then detects the target precisely and guides the tool D60 at the end of D50 to carry out high-precision, high-efficiency automated grinding and polishing on the machining target area O30. This not only meets the accuracy requirements for efficiently machining large-volume targets, but also greatly reduces the amount and complexity of computation, speeds up processing, shortens calculation time and satisfies real-time processing requirements, while lowering the demands on software and hardware performance, saving cost, reducing development difficulty, and suiting the high-speed, large-scale production mode.
Further, the optimal path planning algorithm of step S40 is a simulated annealing intelligent optimization algorithm, which is mature, reliable, and easy to implement in engineering practice.
Further, the registration algorithm of step S70 is an iterative closest point algorithm based on the normal distributions transform, which is mature, reliable, and easy to implement in engineering practice.
Further, the method of step S70 for computing the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i is a fast nearest-neighbor search algorithm, which is mature, reliable, and easy to implement in engineering practice.
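A minimal sketch of the difference-point-cloud computation, using a KD-tree as the "fast nearest-neighbor search": points of the scanned cloud SS_3D-i whose nearest neighbour in the registered reference patch RS_3D-i lies beyond a tolerance are kept as DS_3D-i (the tolerance value is illustrative, e.g. material still to be ground off):

```python
import numpy as np
from scipy.spatial import cKDTree

def difference_cloud(ss, rs, tol=0.2e-3):
    """Return DS_3D-i: points of SS_3D-i farther than `tol` (metres) from their
    nearest neighbour in the registered reference patch RS_3D-i."""
    dist, _ = cKDTree(rs).query(ss)              # fast nearest-neighbour distances
    return ss[dist > tol]
```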
Further, the preset high-precision detection parameters of the sub-target areas in step S30 include the field-of-view size, detection accuracy, and detection distance of the high-precision detection; these parameters are easy to measure and obtain and are highly reliable.
The robot vision guidance method of the above first method embodiment of the present invention can be implemented by the robot vision guidance device based on integrating global vision and local vision provided by the first device embodiment of the present invention.
Referring to Fig. 2, Fig. 2 shows a robot vision guidance device 1 based on integrating global vision and local vision provided by the first device embodiment of the present invention; the device 1 includes:
a processing target data acquisition module 10, configured to acquire the overall registered RGB two-dimensional image I_RGB and overall registered depth data I_D of the target to be processed through the overall RGB-D composite sensor set at the end of the detection robot, obtain the overall region S_RGB of the target from I_RGB, and, using the calibration matrix of the overall RGB-D composite sensor, extract the overall 3D point cloud data S_3D of the target from I_D according to S_RGB.
Referring to Fig. 3, the RGB-D composite sensor D70 is mounted at the tip of the arm D40 of the detection robot, with the RGB camera D20 in the middle of the sensor; the color image data is compressed before being transmitted to the computer to keep the RGB data analysis fast. The sensors D10 and D30 on the left and right of the RGB-D composite vision sensor transmit and receive infrared light respectively: first, the infrared emitter D10 on the left projects infrared light onto the target O10; because this light is highly random, the speckle it forms when reflected at any two different positions in space differs, giving the environment a three-dimensional "light code"; the infrared receiver D30 on the right then captures the infrared image of the field of view; finally, the calibration matrix of the RGB-D composite vision sensor D70 is applied to this infrared image through a series of computations to obtain the depth data of the field of view.
an overall processing guidance path module 20, configured to analyze the overall 3D point cloud data S_3D to obtain the overall processing guidance path point set {AX_j} j=1->n of the target, where AX_j is an overall processing guidance path point of the target in S_3D, j is the serial number of AX_j, the value range of j is [1, n], and n is the total number of path points AX_j.
The AX_i are coordinate vectors corresponding to the overall 3D point cloud data S_3D, and the path point set {AX_j} j=1->n corresponds to the set of all such AX_i of S_3D.
a sub-target area segmentation module 30, configured to segment the overall 3D point cloud data S_3D of the target into the sub-target-area point cloud set {S_3D-j} j=1->n according to {AX_j} j=1->n and the preset high-precision detection parameters of the sub-target areas, where S_3D-j is the sub-target-area point cloud corresponding to the path point AX_j;
an optimal processing path planning module 40, configured to sort {S_3D-j} j=1->n with an optimal path planning algorithm to generate the optimal sub-target-area point cloud sequence {S_3D-i} i=1->n, where S_3D-i is a sub-target-area point cloud within that sequence, i is its serial number, i corresponds one-to-one with j, and, according to that correspondence, to convert {AX_j} j=1->n into the path point sequence {AX_i} i=1->n.
The optimal sequence {S_3D-i} i=1->n generated by the path planning algorithm puts the sub-processing areas S_3D-j in an order that matches the actual robot machining process, that is, the grinding and polishing robot D50 passes through every area of the target O20 without repetition, so that the total working time of D50 is minimized.
a processing guidance point conversion module 50, configured to set i from 1 to n in turn, convert the path point AX_i corresponding to S_3D-i into the detection-robot base-frame path point BX_i and then into the grinding-and-polishing-robot base-frame path point CX_i, so that the sequence {AX_i} i=1->n is converted into the detection-robot base-frame guidance point sequence {BX_i} i=1->n and that in turn into the grinding-and-polishing-robot base-frame guidance point sequence {CX_i} i=1->n.
Referring to Fig. 4, the path point AX_i corresponding to the sub-target O30 area point cloud S_3D-i, obtained from the arm D40 of the detection robot, is converted into the detection robot D40 base-frame path point BX_i, which is then converted into the grinding-and-polishing robot D50 base-frame path point CX_i, thereby guiding the grinding tool D60 at the end of the robot D50 to start the subsequent operations.
a local high-precision point cloud acquisition module 60, configured to set i from 1 to n in turn and guide the local RGB-D composite sensor at the end of the grinding and polishing robot to scan the target according to the base-frame path point CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the target in the area corresponding to S_3D-i.
a high-precision processing guidance information module 70, configured to set i from 1 to n in turn, use SS_3D-i as a template to find, from the preset sample point cloud RS_3D through a registration algorithm, the local high-precision point cloud RS_3D-i registered with SS_3D-i, compute the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i, and analyze and search DS_3D-i to obtain the high-precision 3D processing guidance information of the grinding and polishing robot.
Thus, for a larger-volume machining target O20, the above processing combines a global vision system and a local vision system: the vision system D70 of the detection robot D40 first roughly locates the machining target O20, divides it into blocks, and plans the path; the high-precision vision inspection system on the grinding and polishing robot D50 then detects the target precisely and guides the tool D60 at the end of D50 to carry out high-precision, high-efficiency automated grinding and polishing on the machining target area O30. This not only meets the accuracy requirements for efficiently machining large-volume targets, but also greatly reduces the amount and complexity of computation, speeds up processing, shortens calculation time and satisfies real-time processing requirements, while lowering the demands on software and hardware performance, saving cost, reducing development difficulty, and suiting the high-speed, large-scale production mode.
Further, the optimal path planning algorithm of the optimal processing path planning module 40 is a simulated annealing intelligent optimization algorithm, which is mature, reliable, and easy to implement in engineering practice.
Further, the registration algorithm of the high-precision processing guidance information module 70 is an iterative closest point algorithm based on the normal distributions transform, which is mature, reliable, and easy to implement in engineering practice.
Further, the method of the high-precision processing guidance information module 70 for computing the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i is a fast nearest-neighbor search algorithm, which is mature, reliable, and easy to implement in engineering practice.
Further, the preset high-precision detection parameters of the sub-target area segmentation module 30 include the field-of-view size, detection accuracy, and detection distance of the high-precision detection; these parameters are easy to measure and obtain and are highly reliable.
It should be noted that, in this document, the terms "comprise", "include" or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device; alternatively, they can be implemented by program code executable by the computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps shown or described can be performed in a different order than here, or they can be made into individual integrated circuit modules, or several of the modules or steps can be made into a single integrated circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for enabling a terminal device (which can be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the various embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention; any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

  1. A robot vision guidance method based on integrating global vision and local vision, characterized by comprising:
    Step 1: acquiring an overall registered RGB two-dimensional image I_RGB and overall registered depth data I_D of the target to be processed with an overall RGB-D composite sensor mounted at the end of a detection robot, obtaining the overall region S_RGB of the target from I_RGB, and, using the calibration matrix of the overall RGB-D composite sensor, extracting the overall 3D point cloud data S_3D of the target from I_D according to S_RGB;
    Step 2: analyzing the overall 3D point cloud data S_3D to obtain the overall processing guidance path point set {AX_j} j=1->n of the target, where AX_j is an overall processing guidance path point of the target in S_3D, j is the serial number of AX_j, the value range of j is [1, n], and n is the total number of path points AX_j;
    Step 3: according to {AX_j} j=1->n and preset high-precision detection parameters of the sub-target areas to be processed, segmenting S_3D into the sub-target-area point cloud set {S_3D-j} j=1->n, where S_3D-j is the sub-target-area point cloud corresponding to the path point AX_j;
    Step 4: sorting {S_3D-j} j=1->n with an optimal path planning algorithm to generate the optimal sub-target-area point cloud sequence {S_3D-i} i=1->n, where S_3D-i is a sub-target-area point cloud within that sequence, i is its serial number, and i corresponds one-to-one with j, and, according to that correspondence, converting {AX_j} j=1->n into the path point sequence {AX_i} i=1->n;
    Step 5: setting i from 1 to n in turn, converting the path point AX_i corresponding to S_3D-i into the detection-robot base-frame path point BX_i and then into the grinding-and-polishing-robot base-frame path point CX_i, thereby converting the sequence {AX_i} i=1->n into the detection-robot base-frame guidance point sequence {BX_i} i=1->n and that in turn into the grinding-and-polishing-robot base-frame guidance point sequence {CX_i} i=1->n;
    Step 6: setting i from 1 to n in turn, guiding the local RGB-D composite sensor mounted at the end of the grinding and polishing robot to scan the target according to CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the target in the area corresponding to S_3D-i;
    Step 7: setting i from 1 to n in turn, using SS_3D-i as a template to find, from the preset sample point cloud RS_3D through a registration algorithm, the local high-precision point cloud RS_3D-i registered with SS_3D-i, computing the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i, and analyzing and searching DS_3D-i to obtain high-precision 3D processing guidance information for the grinding and polishing robot.
  2. The robot vision guidance method based on integrating global vision and local vision according to claim 1, characterized in that the optimal path planning algorithm of step 4 is a simulated annealing intelligent optimization algorithm.
  3. The robot vision guidance method based on integrating global vision and local vision according to claim 1, characterized in that the registration algorithm of step 7 is an iterative closest point algorithm based on the normal distributions transform.
  4. The robot vision guidance method based on integrating global vision and local vision according to claim 1, characterized in that the method of step 7 for computing the difference point cloud DS_3D-i between the local high-precision point cloud SS_3D-i and the local high-precision point cloud RS_3D-i is a fast nearest-neighbor search algorithm.
  5. The robot vision guidance method based on integrating global vision and local vision according to claim 1, characterized in that the preset high-precision detection parameters of step 3 for the sub-target areas include the field-of-view size, detection accuracy, and detection distance of the high-precision detection.
  6. A robot vision guidance device based on integrating global vision and local vision, characterized by comprising:
    a processing target data acquisition module, configured to acquire an overall registered RGB two-dimensional image I_RGB and overall registered depth data I_D of the target to be processed with an overall RGB-D composite sensor mounted at the end of a detection robot, obtain the overall region S_RGB of the target from I_RGB, and, using the calibration matrix of the overall RGB-D composite sensor, extract the overall 3D point cloud data S_3D of the target from I_D according to S_RGB;
    an overall processing guidance path module, configured to analyze S_3D to obtain the overall processing guidance path point set {AX_j} j=1->n of the target, where AX_j is an overall processing guidance path point of the target in S_3D, j is the serial number of AX_j, the value range of j is [1, n], and n is the total number of path points AX_j;
    a sub-target area segmentation module, configured to segment S_3D into the sub-target-area point cloud set {S_3D-j} j=1->n according to {AX_j} j=1->n and the preset high-precision detection parameters of the sub-target areas, where S_3D-j is the sub-target-area point cloud corresponding to AX_j;
    an optimal processing path planning module, configured to sort {S_3D-j} j=1->n with an optimal path planning algorithm to generate the optimal sub-target-area point cloud sequence {S_3D-i} i=1->n, where S_3D-i is a sub-target-area point cloud within that sequence, i is its serial number, and i corresponds one-to-one with j, and to convert {AX_j} j=1->n into the path point sequence {AX_i} i=1->n according to that correspondence;
    a processing guidance point conversion module, configured to set i from 1 to n in turn, convert the path point AX_i corresponding to S_3D-i into the detection-robot base-frame path point BX_i and then into the grinding-and-polishing-robot base-frame path point CX_i, thereby converting the sequence {AX_i} i=1->n into the detection-robot base-frame guidance point sequence {BX_i} i=1->n and that in turn into the grinding-and-polishing-robot base-frame guidance point sequence {CX_i} i=1->n;
    a local high-precision point cloud acquisition module, configured to set i from 1 to n in turn and guide the local RGB-D composite sensor at the end of the grinding and polishing robot to scan the target according to CX_i, so as to obtain the local high-precision point cloud SS_3D-i of the target in the area corresponding to S_3D-i;
    a high-precision processing guidance information module, configured to set i from 1 to n in turn, use SS_3D-i as a template to find, from the preset sample point cloud RS_3D through a registration algorithm, the local high-precision point cloud RS_3D-i registered with SS_3D-i, compute the difference point cloud DS_3D-i between SS_3D-i and RS_3D-i, and analyze and search DS_3D-i to obtain high-precision 3D processing guidance information for the grinding and polishing robot.
  7. The robot vision guidance device based on integrating global vision and local vision according to claim 6, characterized in that the optimal path planning algorithm of the optimal processing path planning module is a simulated annealing intelligent optimization algorithm.
  8. The robot vision guidance device based on integrating global vision and local vision according to claim 6, characterized in that the registration algorithm of the high-precision processing guidance information module is an iterative closest point algorithm based on the normal distributions transform.
  9. The robot vision guidance device based on integrating global vision and local vision according to claim 6, characterized in that the method of the high-precision processing guidance information module for computing the difference point cloud DS_3D-i between the local high-precision point cloud SS_3D-i and the local high-precision point cloud RS_3D-i is a fast nearest-neighbor search algorithm.
  10. The robot vision guidance device based on integrating global vision and local vision according to claim 6, characterized in that the preset high-precision detection parameters of the sub-target area segmentation module include the field-of-view size, detection accuracy, and detection distance of the high-precision detection.
PCT/CN2020/101337 2019-12-02 2020-07-10 Robot vision guidance method and device based on integrating global vision and local vision WO2021109575A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021537213A JP7212236B2 (ja) 2019-12-02 2020-07-10 オーバービュー視覚およびローカル視覚の一体化によるロボット視覚案内方法及び装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911215508.2A CN111216124B (zh) 2019-12-02 2019-12-02 基于融入全局视觉和局部视觉的机器人视觉引导方法和装置
CN201911215508.2 2019-12-02

Publications (1)

Publication Number Publication Date
WO2021109575A1 true WO2021109575A1 (zh) 2021-06-10

Family

ID=70830739

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/101337 WO2021109575A1 (zh) 2019-12-02 2020-07-10 基于融入全局视觉和局部视觉的机器人视觉引导方法和装置

Country Status (3)

Country Link
JP (1) JP7212236B2 (zh)
CN (1) CN111216124B (zh)
WO (1) WO2021109575A1 (zh)

Cited By (7)

Publication number Priority date Publication date Assignee Title
CN114888794A (zh) * 2022-04-24 2022-08-12 天津工程机械研究院有限公司 一种机器人人机交互运行路径标记方法及装置
CN115026396A (zh) * 2022-06-27 2022-09-09 天津杰福德自动化技术有限公司 基于3d视觉引导的引熄弧板切割系统
CN115138527A (zh) * 2022-06-22 2022-10-04 深圳市双翌光电科技有限公司 通过视觉引导的快速加工路径生成方法
CN115159149A (zh) * 2022-07-28 2022-10-11 深圳市罗宾汉智能装备有限公司 一种基于视觉定位的取料卸货方法及其装置
CN115592501A (zh) * 2022-10-11 2023-01-13 中国第一汽车股份有限公司(Cn) 一种基于3d线激光视觉引导的顶盖钎焊自适应打磨方法
WO2023047879A1 (ja) * 2021-09-24 2023-03-30 村田機械株式会社 ワーク位置判定装置、レーザ加工装置、及びワーク位置判定方法
CN116468764A (zh) * 2023-06-20 2023-07-21 南京理工大学 基于超点空间引导的多视图工业点云高精度配准系统

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN111216124B (zh) * 2019-12-02 2020-11-06 广东技术师范大学 基于融入全局视觉和局部视觉的机器人视觉引导方法和装置
CN113386052A (zh) * 2021-05-12 2021-09-14 华南理工大学 一种船体磨料水射流除漆除锈设备及其实现方法
CN116652951B (zh) * 2023-06-08 2024-04-05 广州鑫帅机电设备有限公司 一种非结构化大作业空间的机器人视觉定位方法及装置

Citations (8)

Publication number Priority date Publication date Assignee Title
CN107127755A (zh) * 2017-05-12 2017-09-05 华南理工大学 一种三维点云的实时采集装置及机器人打磨轨迹规划方法
CN107598918A (zh) * 2017-08-16 2018-01-19 广东工业大学 基于打磨机器人的表面打磨处理自动编程方法和装置
CN108297105A (zh) * 2018-01-17 2018-07-20 广东工业大学 一种机械臂任务级时间最优轨迹规划方法
CN108326853A (zh) * 2018-01-17 2018-07-27 广东工业大学 一种打磨机器人系统
CN108994844A (zh) * 2018-09-26 2018-12-14 广东工业大学 一种打磨操作臂手眼关系的标定方法和装置
US20190122425A1 (en) * 2017-10-24 2019-04-25 Lowe's Companies, Inc. Robot motion planning for photogrammetry
CN110103118A (zh) * 2019-06-18 2019-08-09 苏州大学 一种打磨机器人的路径规划方法、装置、系统及存储介质
CN111216124A (zh) * 2019-12-02 2020-06-02 广东技术师范大学 基于融入全局视觉和局部视觉的机器人视觉引导方法和装置

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN103862341A (zh) * 2014-04-01 2014-06-18 重庆大学 铸件打磨设备
CN106931969A (zh) * 2015-12-29 2017-07-07 黑龙江恒和沙科技开发有限公司 一种基于Kinect的机器人三维导航地图生成方法
US10150213B1 (en) * 2016-07-27 2018-12-11 X Development Llc Guide placement by a robotic device
US20180348730A1 (en) * 2017-06-01 2018-12-06 X Development Llc Automatic Generation of Toolpaths
CN108858193B (zh) * 2018-07-06 2020-07-03 清华大学深圳研究生院 一种机械臂抓取方法及系统
CN109102547A (zh) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 基于物体识别深度学习模型的机器人抓取位姿估计方法
CN109108942B (zh) * 2018-09-11 2021-03-02 武汉科技大学 基于视觉实时示教与自适应dmps的机械臂运动控制方法和系统
CN109509215B (zh) * 2018-10-30 2022-04-01 浙江大学宁波理工学院 一种KinFu的点云辅助配准装置及其方法
CN109960402B (zh) * 2018-12-18 2022-04-01 重庆邮电大学 一种基于点云和视觉特征融合的虚实注册方法

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN107127755A (zh) * 2017-05-12 2017-09-05 华南理工大学 一种三维点云的实时采集装置及机器人打磨轨迹规划方法
CN107598918A (zh) * 2017-08-16 2018-01-19 广东工业大学 基于打磨机器人的表面打磨处理自动编程方法和装置
US20190122425A1 (en) * 2017-10-24 2019-04-25 Lowe's Companies, Inc. Robot motion planning for photogrammetry
CN108297105A (zh) * 2018-01-17 2018-07-20 广东工业大学 一种机械臂任务级时间最优轨迹规划方法
CN108326853A (zh) * 2018-01-17 2018-07-27 广东工业大学 一种打磨机器人系统
CN108994844A (zh) * 2018-09-26 2018-12-14 广东工业大学 一种打磨操作臂手眼关系的标定方法和装置
CN110103118A (zh) * 2019-06-18 2019-08-09 苏州大学 一种打磨机器人的路径规划方法、装置、系统及存储介质
CN111216124A (zh) * 2019-12-02 2020-06-02 广东技术师范大学 基于融入全局视觉和局部视觉的机器人视觉引导方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DIAO, SHIPU: "Research on 3D Machining Target Detection and Motion Planning of Ceramic Billet Grinding Robot", INFORMATION & TECHNOLOGY, CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE(ELECTRONIC JOURNAL), no. 2, 15 February 2019 (2019-02-15), pages 1 - 127, XP055817315, ISSN: 1674-022X *

Cited By (10)

Publication number Priority date Publication date Assignee Title
WO2023047879A1 (ja) * 2021-09-24 2023-03-30 村田機械株式会社 ワーク位置判定装置、レーザ加工装置、及びワーク位置判定方法
CN114888794A (zh) * 2022-04-24 2022-08-12 天津工程机械研究院有限公司 一种机器人人机交互运行路径标记方法及装置
CN114888794B (zh) * 2022-04-24 2024-01-30 天津工程机械研究院有限公司 一种机器人人机交互运行路径标记方法及装置
CN115138527A (zh) * 2022-06-22 2022-10-04 深圳市双翌光电科技有限公司 通过视觉引导的快速加工路径生成方法
CN115138527B (zh) * 2022-06-22 2023-12-26 深圳市双翌光电科技有限公司 通过视觉引导的快速加工路径生成方法
CN115026396A (zh) * 2022-06-27 2022-09-09 天津杰福德自动化技术有限公司 基于3d视觉引导的引熄弧板切割系统
CN115159149A (zh) * 2022-07-28 2022-10-11 深圳市罗宾汉智能装备有限公司 一种基于视觉定位的取料卸货方法及其装置
CN115159149B (zh) * 2022-07-28 2024-05-24 深圳市罗宾汉智能装备有限公司 一种基于视觉定位的取料卸货方法及其装置
CN115592501A (zh) * 2022-10-11 2023-01-13 中国第一汽车股份有限公司(Cn) 一种基于3d线激光视觉引导的顶盖钎焊自适应打磨方法
CN116468764A (zh) * 2023-06-20 2023-07-21 南京理工大学 基于超点空间引导的多视图工业点云高精度配准系统

Also Published As

Publication number Publication date
CN111216124B (zh) 2020-11-06
JP2022516852A (ja) 2022-03-03
JP7212236B2 (ja) 2023-01-25
CN111216124A (zh) 2020-06-02

Similar Documents

Publication Publication Date Title
WO2021109575A1 (zh) 基于融入全局视觉和局部视觉的机器人视觉引导方法和装置
CN111179324B (zh) 基于颜色和深度信息融合的物体六自由度位姿估计方法
CN105729468B (zh) 一种基于多深度摄像机增强的机器人工作台
CN111089569B (zh) 一种基于单目视觉的大型箱体测量方法
CN108107444B (zh) 基于激光数据的变电站异物识别方法
WO2021103558A1 (zh) 基于rgb-d数据融合的机器人视觉引导方法和装置
US9836673B2 (en) System, method and computer program product for training a three dimensional object indentification system and identifying three dimensional objects using semantic segments
CN111598172B (zh) 基于异构深度网络融合的动态目标抓取姿态快速检测方法
Ma et al. Binocular vision object positioning method for robots based on coarse-fine stereo matching
Yang et al. Recognition and localization system of the robot for harvesting Hangzhou White Chrysanthemums
CN107895166B (zh) 基于特征描述子的几何哈希法实现目标鲁棒识别的方法
CN116476070B (zh) 大型筒件局部特征机器人扫描测量路径调整方法
CN111275758A (zh) 混合型3d视觉定位方法、装置、计算机设备及存储介质
Xiang Industrial automatic assembly technology based on machine vision recognition
Spevakov et al. Detecting objects moving in space from a mobile vision system
CN113932712A (zh) 一种基于深度相机和关键点的瓜果类蔬菜尺寸测量方法
Kheng et al. Stereo vision with 3D coordinates for robot arm application guide
Wang et al. A binocular vision method for precise hole recognition in satellite assembly systems
CN111354031A (zh) 基于深度学习的3d视觉引导系统
Daqi et al. An industrial intelligent grasping system based on convolutional neural network
Zhao et al. Dmvo: A multi-motion visual odometry for dynamic environments
CN110780276B (zh) 一种基于激光雷达的托盘识别方法、系统和电子设备
Guo Research on vision measurement system of mechanical workpiece based on machine vision
WO2021082380A1 (zh) 一种基于激光雷达的托盘识别方法、系统和电子设备
KR102518014B1 (ko) 부품 스캔 장치 및 방법

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021537213

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20897174

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20897174

Country of ref document: EP

Kind code of ref document: A1