WO2021042374A1 - Three-dimensional environment modeling method and device for an industrial robot, computer storage medium, and industrial robot operating platform - Google Patents

Three-dimensional environment modeling method and device for an industrial robot, computer storage medium, and industrial robot operating platform

Info

Publication number
WO2021042374A1
WO2021042374A1 · PCT/CN2019/104728
Authority
WO
WIPO (PCT)
Prior art keywords
model
environment
industrial robot
unit
calibrated
Prior art date
Application number
PCT/CN2019/104728
Other languages
English (en)
French (fr)
Inventor
丁万
鄢留鹏
王炜
Original Assignee
Robert Bosch GmbH (罗伯特·博世有限公司)
丁万
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH (罗伯特·博世有限公司), 丁万
Priority to CN201980100000.XA (CN114364942A)
Priority to PCT/CN2019/104728 (WO2021042374A1)
Priority to DE112019007466.0T (DE112019007466T5)
Priority to TW109130228A (TWI832002B)
Publication of WO2021042374A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/17 Mechanical parametric or variational design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads

Definitions

  • the invention relates to a three-dimensional environment modeling method and equipment for an industrial robot, a computer storage medium and an industrial robot operating platform.
  • in some specific industrial application scenarios, the industrial robot needs to plan an optimal trajectory during each cycle time. The robot then needs to know its surrounding environment first in order to plan a collision-free trajectory; a complete 3D workbench model should therefore be obtained offline in advance.
  • to obtain a complete three-dimensional workbench model, the existing approach is geometric measurement combined with 3D software design. Specifically, the operator measures the size of each object and its position relative to the world coordinate system/robot base coordinate system in the actual environment. Then, according to the geometric models and position information of those objects, a complete three-dimensional workbench model in the simulation environment is finally obtained.
  • in this measurement-based approach, the size and position of each object in the actual environment need to be manually modeled and measured, and these objects must then be configured in the three-dimensional design software. This takes a lot of time and requires professional design skills. Moreover, with such measurements, the positional relationships between objects are not accurate in the final three-dimensional workbench model, and when the environment changes the measurement work needs to be repeated.
  • a three-dimensional environment modeling method for an industrial robot includes: performing three-dimensional modeling of the surrounding environment of the industrial robot based on color and depth information of the surrounding environment, so as to obtain a first environment model; calibrating the coordinate system of the first environment model to the base coordinate system of the industrial robot; and obtaining the workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model.
  • the aforementioned 3D environment modeling method is faster and more efficient than the existing 3D geometric measurement-based modeling solution, and does not require manual modeling and measurement of the size and position of each object.
  • the technical solution of the present invention provides a standardized configuration method for all workbench objects (without excessive manual intervention and design skills), so as to obtain the final three-dimensional workbench model.
  • the color and depth information is acquired by a handheld RGB-D camera.
  • the above-mentioned three-dimensional environment modeling method may further include: after calibrating the coordinate system of the first environment model to the base coordinate system of the industrial robot, cropping the calibrated first environment model according to the workspace range of the industrial robot.
  • the coordinate system of the first environment model may be random before calibration.
  • the coordinate system of the first environment model can be calibrated to the robot base coordinate system through the Iterative Closest Point (ICP) method.
  • the calibrated first environment model can be cut according to the scope of the working space of the industrial robot.
  • "Cut" can be based on the pre-defined working range of the industrial robot and the calibrated robot coordinate system information, segmenting the first environment model within the robot working space, and at the same time reconstructing the robot according to the size range of the robot The model is removed to obtain the first environment model within the robot workspace that does not contain the robot model.
  • the above-mentioned three-dimensional environment modeling method may further include: after calibrating the coordinate system of the first environment model to the base coordinate system of the industrial robot, using a grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete.
  • the above-mentioned three-dimensional environment modeling method may further include: after using the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete, evaluating the accuracy of the calibrated first environment model.
  • the mesh cutting and filling includes: selecting an incomplete plane to be reconstructed; obtaining parameters of a plane fitted to the vertices on that plane by the least squares method; and creating a new triangular patch according to the parameters to replace the incomplete plane.
  • the mesh cutting and filling includes: adding the boundary model of the robot workspace to the calibrated first environment model.
  • the mesh cutting and filling includes: sequentially selecting holes smaller than a set boundary threshold; and determining the curvature of the triangular patch used to fill a hole according to the curvature information and its rate of change around the boundary of the selected hole.
  • obtaining the workbench model of the industrial robot includes: performing a mesh simplification process after the mesh cutting and filling of the calibrated first environment model; and obtaining the workbench model of the industrial robot after the mesh simplification process.
  • the mesh simplification process includes: determining the number of target patches or the target optimization percentage; and using an extraction algorithm to obtain the determined number of patches.
  • the above-mentioned three-dimensional environment modeling method may further include: obtaining a model of a new object when the surrounding environment of the industrial robot changes; and, based on the iterative closest point method, adding the model of the new object to the workbench model or deleting it from the workbench model.
  • a three-dimensional environment modeling device for an industrial robot, the device comprising: a first acquisition unit configured to perform three-dimensional modeling of the surrounding environment of the industrial robot based on color and depth information of the surrounding environment, so as to obtain a first environment model; a calibration unit configured to calibrate the coordinate system of the first environment model to the base coordinate system of the industrial robot; and a second acquisition unit configured to obtain the workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model.
  • the first acquisition unit is configured to acquire the color and depth information from a handheld RGB-D camera.
  • the above-mentioned three-dimensional environment modeling device may further include: a cropping unit configured to crop the calibrated first environment model according to the workspace range of the industrial robot after the calibration unit calibrates the coordinate system of the first environment model to the base coordinate system of the industrial robot.
  • the above-mentioned three-dimensional environment modeling equipment may further include: a grid integration degree evaluation unit configured to use the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete after the calibration unit calibrates the coordinate system of the first environment model to the base coordinate system of the industrial robot.
  • the above-mentioned three-dimensional environment modeling device may further include: an accuracy evaluation unit configured to evaluate the accuracy of the calibrated first environment model after the grid integration evaluation unit has used the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete.
  • the second acquisition unit includes a mesh cutting and filling unit, wherein the mesh cutting and filling unit is configured to: select an incomplete plane to be reconstructed; obtain parameters of a plane fitted to the vertices on that plane by the least squares method; and create a new triangular patch according to the parameters to replace the incomplete plane.
  • the second acquisition unit includes a mesh cutting and filling unit, wherein the mesh cutting and filling unit is configured to: add the robot workspace boundary model to the calibrated first environment model.
  • the second acquisition unit includes a mesh cutting and filling unit, wherein the mesh cutting and filling unit is configured to: sequentially select holes smaller than a set boundary threshold; and determine the curvature of the triangular patch used to fill a hole according to the curvature information and its rate of change around the boundary of the selected hole.
  • the second acquiring unit further includes: a grid simplification unit configured to perform a grid simplification process after the mesh cutting and filling of the calibrated first environment model; and a third obtaining unit configured to obtain the workbench model of the industrial robot after the grid simplification unit performs the grid simplification process.
  • the mesh simplification unit is configured to: determine the number of target patches or the target optimization percentage; and use an extraction algorithm to obtain the determined number of patches.
  • the above-mentioned three-dimensional environment modeling device may further include: a fourth acquisition unit configured to acquire a model of a new object when the surrounding environment of the industrial robot changes; and an addition/deletion unit configured to add the model of the new object to the workbench model based on the iterative closest point method.
  • Yet another aspect of the present invention provides a computer storage medium comprising instructions which, when run, perform the aforementioned three-dimensional environment modeling method.
  • Another solution of the present invention provides an industrial robot operating platform, which includes the aforementioned three-dimensional environment modeling device.
  • the aforementioned 3D environment modeling solution for industrial robots is faster and more efficient, and does not require manual modeling and measurement of the size and position of each object.
  • the three-dimensional environment modeling solution of the present invention performs grid cutting and filling on the constructed original environment model, and solves the problem of incomplete reconstruction of transparent objects. This method can also cut discrete triangular faces, fill in holes, and automatically add boundaries outside of the robot's workspace.
  • Fig. 1 shows a three-dimensional environment modeling method for industrial robots according to an embodiment of the present invention
  • Figure 2 shows a three-dimensional environment modeling method for industrial robots according to an embodiment of the present invention.
  • Fig. 3 shows a three-dimensional environment modeling device for an industrial robot according to an embodiment of the present invention.
  • FIG. 1 shows a three-dimensional environment modeling method 1000 for an industrial robot according to an embodiment of the present invention.
  • in step S110, three-dimensional modeling of the surrounding environment of the industrial robot is performed based on the color and depth information of the surrounding environment, so as to obtain a first environment model.
  • in step S120, the coordinate system of the first environment model is calibrated to the base coordinate system of the industrial robot.
  • in step S130, the workbench model of the industrial robot is obtained at least by performing mesh cutting and filling on the calibrated first environment model.
  • the fast 3D environment modeling and configuration method of the present invention, based on a 3D reconstruction algorithm, can be divided into three parts: 1) fast 3D environment reconstruction; 2) optimization of the 3D environment model; and 3) virtual environment configuration of the simulated workstation.
  • the solution of the present invention is faster and more efficient, and does not require manual modeling and measurement of the size and position of each object.
  • the technical solution of the present invention provides a standardized configuration method for all workbench objects (without excessive manual intervention and design skills), so as to obtain the final three-dimensional workbench model.
  • the color and depth information in step S110 can be obtained by the operator holding an RGB-D camera. In one embodiment, the color and depth information is obtained by fixing the camera on the robot.
  • the three-dimensional environment modeling method 1000 may include: cropping the calibrated first environment model according to the working space range of the industrial robot .
  • the coordinate system of the first environment model may be random, so the coordinate system of the first environment model is calibrated to the base coordinate system of the robot.
  • the calibrated first environment model can be cut according to the scope of the working space of the industrial robot. This can eliminate some redundant points in the first environment model and reduce the complexity of the model.
  • the coordinate system of the first environment model can be calibrated to the robot base coordinate system through the Iterative Closest Point (ICP) method.
  • "cutting" can mean segmenting out the part of the first environment model that lies within the robot workspace, based on the predefined working range of the industrial robot and the calibrated robot coordinate system information, while at the same time removing the reconstructed robot model according to the robot's size range, so as to obtain a first environment model within the robot workspace that does not contain the robot model.
  • various iterative closest point (ICP) methods can be used, including but not limited to the point-to-plane precise registration method of searching for nearest points proposed by Chen and Medioni and by Bergevin et al., the point-to-projection fast registration method of searching for nearest points proposed by Rusinkiewicz and Levoy, and the contractive-projection-point registration method of searching for nearest points proposed by Soon-Yong and Murali, among others.
  • the three-dimensional environment modeling method 1000 may include: using a grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete .
  • the grid integration evaluation method may include: firstly, according to the predefined robot working range and the robot coordinate system information calibrated in the previous step, segmenting out the reconstructed model within the robot workspace while removing the reconstructed robot model according to the robot's size range, so as to obtain an environment model within the robot workspace that does not contain the robot model; then, dividing the robot workspace into a certain number of subspaces according to certain rules (such as sector segmentation) and displaying the environment model in each subspace in turn; and finally, determining in turn whether each part has been completely reconstructed.
  • the three-dimensional environment modeling method 1000 may further include: after using the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete, evaluating the accuracy of the calibrated first environment model.
  • the accuracy evaluation may include: first obtaining the true values of the relative distances between key objects in the robot workspace; then measuring the corresponding distance values in the reconstructed model and obtaining a series of distance errors from |measured value − true value|; and finally determining whether the largest distance error is smaller than a preset threshold, so as to decide whether the accuracy meets the requirements.
  • performing mesh cutting and filling on the first environment model includes: selecting an incomplete plane to be reconstructed; obtaining parameters of a plane fitted to the vertices on that plane by the least squares method; and creating a new triangular face according to the parameters to replace the incomplete plane. In this way, the problem of incomplete reconstruction of transparent objects can be solved.
  • the mesh cutting and filling of the first environment model can also be used to delete unnecessary reconstruction parts in the first environment model.
  • the operator can subjectively select the unneeded triangular patch area, and then delete it.
  • the mesh cutting and filling of the first environment model can also be used to add a workspace boundary.
  • the robot workspace boundary model can be quickly added to the reconstructed environment model based on the predefined robot workspace boundary model and the previously calibrated robot coordinate system information.
  • the mesh clipping and filling of the first environment model can also be used to fill the holes.
  • according to a set boundary threshold, holes smaller than the boundary-count threshold are selected in turn.
  • to prevent erroneous hole filling, the operator can also subjectively decide whether to fill the hole. Then, based on the curvature information and its rate of change around the hole boundary, the curvature of the triangular patch used to fill the hole is determined.
  • step S130 may specifically include: performing a mesh simplification process after performing mesh cutting and filling on the calibrated first environment model; and, after the mesh simplification process, obtaining the workbench model of the industrial robot.
  • the mesh simplification process may use an extraction algorithm (for example, the Quadric edge decimation algorithm), and by setting the target number of faces or the target optimization percentage, a simplified environment model with a significantly reduced number of faces is finally obtained.
  • FIG. 2 shows a three-dimensional environment modeling method 2000 for an industrial robot according to an embodiment of the present invention.
  • in step S210, the operator acquires a color and depth information stream of the surrounding environment of the industrial robot using a handheld RGB-D depth camera.
  • in step S220, the original environment model is obtained from the color and depth information stream through a three-dimensional reconstruction algorithm.
  • in step S230, the original environment model is quickly calibrated.
  • the coordinate system of the original environment model can be calibrated to a coordinate system of the industrial robot (for example, the base coordinate system or the world coordinate system).
  • in step S240, grid integration evaluation is performed.
  • the grid integration evaluation method may include: firstly, according to the predefined robot working range and the robot coordinate system information calibrated in the previous step, segmenting out the reconstructed model within the robot workspace while removing the reconstructed robot model according to the robot's size range, so as to obtain an environment model within the robot workspace that does not contain the robot model; then, dividing the robot workspace into a certain number of subspaces according to certain rules (such as sector segmentation) and displaying the environment model in each subspace in turn; and finally, determining in turn whether each part has been completely reconstructed.
  • by performing the grid integration evaluation, it is possible to measure whether the mapping of the calibrated model is complete.
  • if the mapping of the calibrated model is not complete, step S220 is executed again. If the calibrated model is determined to be complete, step S250 is executed.
  • in step S250, grid accuracy evaluation is performed.
  • the accuracy evaluation may include: first obtaining the true values of the relative distances between key objects in the robot workspace; then measuring the corresponding distance values in the reconstructed model and obtaining a series of distance errors from |measured value − true value|; and finally determining whether the largest distance error is smaller than a preset threshold, so as to decide whether the accuracy meets the requirements.
  • by performing the grid accuracy evaluation, various accuracy parameters and methods can be used to measure whether the accuracy of the calibrated model meets the requirements of the three-dimensional workbench model.
  • if the accuracy of the calibrated model does not meet the requirements, step S220 is executed again. If the accuracy of the calibrated model is determined to meet the requirements, step S260 is executed.
  • in step S260, fast mesh cutting and filling is performed.
  • step S260 may include: first selecting an incompletely reconstructed plane, obtaining the parameters of the fitted plane from the vertices on the plane by the least squares method, and then creating a new triangular face according to the parameters to replace the original incomplete plane. In this way, the problem of incomplete reconstruction of transparent objects can be solved by performing fast mesh cutting and filling.
  • step S260 can also be used to delete unnecessary reconstruction parts. Specifically, the operator subjectively selects the unnecessary triangular patch area, and then deletes it.
  • step S260 can also be used to add a workspace boundary. For example, based on the pre-defined robot workspace boundary model and the previously calibrated robot coordinate system information, the robot workspace boundary model can be quickly added to the reconstructed environment model.
  • step S260 can also be used to fill holes.
  • according to a set boundary threshold, holes smaller than the boundary-count threshold are selected in turn.
  • to prevent erroneous hole filling, the operator subjectively decides whether to fill the hole; the curvature of the triangular patch used to fill the hole is then determined from the curvature information and its rate of change around the hole boundary.
  • in step S270, a mesh simplification process is performed.
  • the mesh simplification process may use an extraction algorithm (for example, the Quadric edge decimation algorithm), and by setting the target number of faces or the target optimization percentage, a simplified environment model with a significantly reduced number of faces is finally obtained.
  • in step S280, the final three-dimensional workbench model is obtained.
  • Fig. 3 shows a three-dimensional environment modeling device 3000 for an industrial robot according to an embodiment of the present invention.
  • the three-dimensional environment modeling device 3000 includes: a first acquiring unit 310, a calibration unit 320, and a second acquiring unit 330.
  • the first acquiring unit 310 is configured to perform a three-dimensional modeling of the surrounding environment of the industrial robot based on the color and depth information of the surrounding environment of the industrial robot, so as to obtain the first environment model.
  • the calibration unit 320 is configured to calibrate the coordinate system of the first environment model to the base coordinate system of the industrial robot.
  • the second obtaining unit 330 is configured to obtain the workbench model of the industrial robot at least by performing grid cutting and filling on the calibrated first environment model.
  • the first obtaining unit 310 is configured to obtain the color and depth information from a handheld RGB-D camera.
  • the three-dimensional environment modeling device 3000 may further include: a cropping unit, wherein the cropping unit is configured to crop the calibrated first environment model according to the workspace range of the industrial robot after the calibration unit 320 calibrates the coordinate system of the first environment model to the base coordinate system of the industrial robot.
  • the three-dimensional environment modeling device 3000 may further include: a grid integration degree evaluation unit, wherein the grid integration degree evaluation unit is configured to use the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete after the calibration unit 320 calibrates the coordinate system of the first environment model to the base coordinate system of the industrial robot.
  • the three-dimensional environment modeling device 3000 may further include: an accuracy evaluation unit configured to evaluate the accuracy of the calibrated first environment model after the grid integration evaluation unit has used the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete.
  • the second acquiring unit 330 may include a mesh cutting and filling unit, wherein the mesh cutting and filling unit is configured to: select an incomplete plane to be reconstructed; obtain parameters of a plane fitted to the vertices on that plane by the least squares method; and create a new triangular patch according to the parameters to replace the incomplete plane.
  • the grid cutting and filling unit is configured to add the boundary model of the robot workspace to the calibrated first environment model.
  • the grid cutting and filling unit is configured to: according to a set boundary threshold, sequentially select holes smaller than the boundary threshold; and determine the curvature of the triangular patch used to fill a hole according to the curvature information and its rate of change around the boundary of the selected hole.
  • the second acquiring unit 330 further includes: a grid simplification unit and a third acquiring unit.
  • the grid simplification unit is configured to perform a grid simplification process after performing grid cutting and filling on the calibrated first environment model.
  • the mesh simplification unit is configured to: determine the number of target patches or the target optimization percentage; and use an extraction algorithm to obtain the determined number of patches.
  • the third acquiring unit is configured to acquire the workbench model of the industrial robot after the grid simplification unit performs the grid simplification process.
  • the three-dimensional environment modeling device 3000 may further include: a fourth acquisition unit and an addition and deletion unit.
  • the fourth obtaining unit is used to obtain a model of a new object when the surrounding environment of the industrial robot changes.
  • the addition and deletion unit is used to add the model of the new object to the workbench model based on the iterative closest point method.
  • those skilled in the art will readily appreciate that the three-dimensional environment modeling method provided by one or more embodiments of the present invention can be implemented by a computer program. For example, when a computer storage medium (such as a USB flash drive) storing the computer program is connected to a computer, running the computer program can execute the three-dimensional environment modeling method of the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)

Abstract

A three-dimensional environment modeling method for an industrial robot, the method comprising: performing three-dimensional modeling of the surrounding environment of the industrial robot based on color and depth information of the surrounding environment of the industrial robot, so as to obtain a first environment model; calibrating the coordinate system of the first environment model to the base coordinate system of the industrial robot; and obtaining a workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model. Also provided are a three-dimensional environment modeling device for an industrial robot, a computer storage medium, and an industrial robot operating platform.

Description

Three-dimensional environment modeling method and device for an industrial robot, computer storage medium, and industrial robot operating platform [Technical Field]
The present invention relates to a three-dimensional environment modeling method and device for an industrial robot, a computer storage medium, and an industrial robot operating platform.
[Background Art]
In some specific industrial application scenarios, the industrial robot is required to plan an optimal trajectory during each cycle time. The robot then first needs to know its surrounding environment in order to plan a collision-free trajectory, so a complete three-dimensional workbench model should be obtained offline in advance.
To obtain a complete three-dimensional workbench model, the existing approach is geometric measurement combined with three-dimensional software design. Specifically, the operator measures the size of each object and its position relative to the world coordinate system/robot base coordinate system in the actual environment. Then, according to the geometric models and position information of those objects, a complete three-dimensional workbench model in the simulation environment is finally obtained.
In the approach based on three-dimensional geometric measurement, the size and position of each object in the actual environment must be manually modeled and measured, and these objects must then be configured in the three-dimensional design software. This takes a great deal of time and requires professional design skills. Moreover, with such measurements, the positional relationships between objects are not accurate in the final three-dimensional workbench model. Whenever the environment changes, the measurement work has to be repeated.
An improved three-dimensional environment modeling solution is therefore desirable.
[Summary of the Invention]
According to one aspect of the present invention, a three-dimensional environment modeling method for an industrial robot is provided, the method comprising: performing three-dimensional modeling of the surrounding environment of the industrial robot based on color and depth information of the surrounding environment of the industrial robot, so as to obtain a first environment model; calibrating the coordinate system of the first environment model to the base coordinate system of the industrial robot; and obtaining a workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model.
The aforementioned three-dimensional environment modeling method is faster and more efficient than existing modeling solutions based on three-dimensional geometric measurement, and it does not require manual modeling and measurement of the size and position of each object. Moreover, the technical solution of the present invention provides a standardized configuration method for all workbench objects (without excessive manual intervention or design skills), so as to obtain the final three-dimensional workbench model.
Preferably, in the above three-dimensional environment modeling method, the color and depth information is acquired with a handheld RGB-D camera.
Preferably, the above three-dimensional environment modeling method may further comprise: after the coordinate system of the first environment model has been calibrated to the base coordinate system of the industrial robot, cropping the calibrated first environment model according to the workspace range of the industrial robot. The coordinate system of the first environment model may be arbitrary before calibration.
For example, the coordinate system of the first environment model can be calibrated to the robot base coordinate system by the iterative closest point (ICP) method. When cropping, the calibrated first environment model can be cropped according to the workspace range of the industrial robot. "Cropping" can mean segmenting out the part of the first environment model that lies within the robot workspace, based on the predefined working range of the industrial robot and the calibrated robot coordinate system information, while at the same time removing the reconstructed robot model according to the robot's size range, so as to obtain a first environment model within the robot workspace that does not contain the robot model.
Preferably, the above three-dimensional environment modeling method may further comprise: after the coordinate system of the first environment model has been calibrated to the base coordinate system of the industrial robot, using a grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete.
Preferably, the above three-dimensional environment modeling method may further comprise: after using the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete, evaluating the accuracy of the calibrated first environment model.
Preferably, in the above three-dimensional environment modeling method, the mesh cutting and filling comprises: selecting an incomplete plane to be reconstructed; obtaining parameters of a plane fitted to the vertices on the plane by the least squares method; and creating new triangular patches according to the parameters to replace the incomplete plane.
Preferably, in the above three-dimensional environment modeling method, the mesh cutting and filling comprises: adding a robot workspace boundary model to the calibrated first environment model.
Preferably, in the above three-dimensional environment modeling method, the mesh cutting and filling comprises: sequentially selecting holes smaller than a set boundary threshold; and determining the curvature of the triangular patches used to fill a hole according to the curvature information and its rate of change around the boundary of the selected hole.
Preferably, in the above three-dimensional environment modeling method, obtaining the workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model comprises: performing a mesh simplification process after the mesh cutting and filling of the calibrated first environment model; and obtaining the workbench model of the industrial robot after the mesh simplification process.
Preferably, in the above three-dimensional environment modeling method, the mesh simplification process comprises: determining a target number of faces or a target optimization percentage; and obtaining the determined number of faces using an extraction algorithm.
Preferably, the above three-dimensional environment modeling method may further comprise: obtaining a model of a new object when the surrounding environment of the industrial robot changes; and adding the model of the new object to, or deleting it from, the workbench model based on the iterative closest point method.
Another aspect of the present invention provides a three-dimensional environment modeling device for an industrial robot, the device comprising: a first acquisition unit configured to perform three-dimensional modeling of the surrounding environment of the industrial robot based on color and depth information of the surrounding environment of the industrial robot, so as to obtain a first environment model; a calibration unit configured to calibrate the coordinate system of the first environment model to the base coordinate system of the industrial robot; and a second acquisition unit configured to obtain the workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model.
Preferably, in the above three-dimensional environment modeling device, the first acquisition unit is configured to acquire the color and depth information from a handheld RGB-D camera.
Preferably, the above three-dimensional environment modeling device may further comprise: a cropping unit configured to crop the calibrated first environment model according to the workspace range of the industrial robot after the calibration unit has calibrated the coordinate system of the first environment model to the base coordinate system of the industrial robot.
Preferably, the above three-dimensional environment modeling device may further comprise: a grid integration evaluation unit configured to use the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete after the calibration unit has calibrated the coordinate system of the first environment model to the base coordinate system of the industrial robot.
Preferably, the above three-dimensional environment modeling device may further comprise: an accuracy evaluation unit configured to evaluate the accuracy of the calibrated first environment model after the grid integration evaluation unit has used the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete.
Preferably, in the above three-dimensional environment modeling device, the second acquisition unit comprises a mesh cutting and filling unit, wherein the mesh cutting and filling unit is configured to: select an incomplete plane to be reconstructed; obtain parameters of a plane fitted to the vertices on the plane by the least squares method; and create new triangular patches according to the parameters to replace the incomplete plane.
Preferably, in the above three-dimensional environment modeling device, the second acquisition unit comprises a mesh cutting and filling unit, wherein the mesh cutting and filling unit is configured to: add a robot workspace boundary model to the calibrated first environment model.
Preferably, in the above three-dimensional environment modeling device, the second acquisition unit comprises a mesh cutting and filling unit, wherein the mesh cutting and filling unit is configured to: sequentially select holes smaller than a set boundary threshold; and determine the curvature of the triangular patches used to fill a hole according to the curvature information and its rate of change around the boundary of the selected hole.
Preferably, in the above three-dimensional environment modeling device, the second acquisition unit further comprises: a mesh simplification unit configured to perform a mesh simplification process after the mesh cutting and filling of the calibrated first environment model; and a third acquisition unit configured to obtain the workbench model of the industrial robot after the mesh simplification unit has performed the mesh simplification process.
Preferably, in the above three-dimensional environment modeling device, the mesh simplification unit is configured to: determine a target number of faces or a target optimization percentage; and obtain the determined number of faces using an extraction algorithm.
Preferably, the above three-dimensional environment modeling device may further comprise: a fourth acquisition unit for obtaining a model of a new object when the surrounding environment of the industrial robot changes; and an addition/deletion unit for adding the model of the new object to the workbench model based on the iterative closest point method.
Yet another aspect of the present invention provides a computer storage medium comprising instructions which, when run, perform the three-dimensional environment modeling method described above.
Yet another aspect of the present invention provides an industrial robot operating platform comprising the three-dimensional environment modeling device described above.
Compared with existing three-dimensional environment modeling solutions based on three-dimensional geometric measurement, the aforementioned three-dimensional environment modeling solution for industrial robots is faster and more efficient, and it does not require manual modeling and measurement of the size and position of each object. Moreover, the three-dimensional environment modeling solution of the present invention performs mesh cutting and filling on the constructed original environment model, which solves the problem of incomplete reconstruction of transparent objects. This approach can also cut away discrete triangular patches, fill holes, and automatically add boundaries outside the robot workspace.
[Brief Description of the Drawings]
The disclosure of the present invention will become easier to understand with reference to the accompanying drawings. Those skilled in the art will readily appreciate that these drawings are for illustration purposes only and are not intended to limit the scope of protection of the present invention. In the figures:
Fig. 1 shows a three-dimensional environment modeling method for an industrial robot according to an embodiment of the present invention;
Fig. 2 shows a three-dimensional environment modeling method for an industrial robot according to an embodiment of the present invention; and
Fig. 3 shows a three-dimensional environment modeling device for an industrial robot according to an embodiment of the present invention.
[Detailed Description of Embodiments]
The following description sets forth specific embodiments of the present invention to teach those skilled in the art how to make and use the best mode of the invention. Some conventional aspects have been simplified or omitted in order to teach the principles of the invention. Those skilled in the art will appreciate that variations derived from these embodiments fall within the scope of the invention, and that the features described below can be combined in various ways to form multiple variations of the invention. The invention is therefore not limited to the specific embodiments described below, but is defined only by the claims and their equivalents.
Referring to Fig. 1, Fig. 1 shows a three-dimensional environment modeling method 1000 for an industrial robot according to an embodiment of the present invention.
In step S110, three-dimensional modeling of the surrounding environment of the industrial robot is performed based on color and depth information of the surrounding environment, so as to obtain a first environment model.
In step S120, the coordinate system of the first environment model is calibrated to the base coordinate system of the industrial robot.
In step S130, the workbench model of the industrial robot is obtained at least by performing mesh cutting and filling on the calibrated first environment model.
According to one or more embodiments of the present invention, the fast three-dimensional environment modeling and configuration method based on a three-dimensional reconstruction algorithm can be divided into three parts: 1) fast three-dimensional environment reconstruction; 2) optimization of the three-dimensional environment model; and 3) virtual environment configuration of the simulated workstation. Compared with existing modeling solutions based on three-dimensional geometric measurement, the solution of the present invention is faster and more efficient, and it does not require manual modeling and measurement of the size and position of each object. Moreover, the technical solution of the present invention provides a standardized configuration method for all workbench objects (without excessive manual intervention or design skills), so as to obtain the final three-dimensional workbench model.
In one embodiment, the color and depth information in step S110 can be acquired by an operator holding an RGB-D camera. In one embodiment, the color and depth information is acquired with a camera fixedly mounted on the robot.
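By way of illustration only, the following is a minimal sketch of how such an RGB-D stream might be fused into a first environment model. The patent does not prescribe any particular library or reconstruction algorithm; the use of the open-source Open3D library, the TSDF parameters, the camera intrinsics, and the assumption that per-frame camera poses are available from some odometry/SLAM front end are all illustrative assumptions.

```python
import open3d as o3d
import numpy as np

# Assumed intrinsics of the handheld RGB-D camera (illustrative values).
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=640, height=480, fx=525.0, fy=525.0, cx=319.5, cy=239.5)

# TSDF volume that fuses the color/depth stream into one environment model.
volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01,            # 1 cm voxels (assumption)
    sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

def integrate_frame(color_path, depth_path, camera_pose):
    """Fuse one RGB-D frame; camera_pose is the 4x4 camera-to-world transform
    supplied by an external odometry/SLAM front end (an assumption)."""
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=3.0, convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, np.linalg.inv(camera_pose))

# After all frames have been integrated, extract the "first environment model".
first_environment_model = volume.extract_triangle_mesh()
first_environment_model.compute_vertex_normals()
```

The mesh returned by extract_triangle_mesh() plays the role of the first environment model that is subsequently calibrated, cropped, and cut and filled in the steps described below.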
In one embodiment, after step S120 and before step S130, although not shown, the three-dimensional environment modeling method 1000 may include: cropping the calibrated first environment model according to the workspace range of the industrial robot. The coordinate system of the first environment model may initially be arbitrary, which is why it is calibrated to the robot base coordinate system. When cropping, the calibrated first environment model can be cropped according to the workspace range of the industrial robot, which removes some redundant points from the first environment model and reduces the complexity of the model.
For example, the coordinate system of the first environment model can be calibrated to the robot base coordinate system by the iterative closest point (ICP) method. "Cropping" can mean segmenting out the part of the first environment model that lies within the robot workspace, based on the predefined working range of the industrial robot and the calibrated robot coordinate system information, while at the same time removing the reconstructed robot model according to the robot's size range, so as to obtain a first environment model within the robot workspace that does not contain the robot model.
Those skilled in the art will understand that various iterative closest point (ICP) methods can be used, including but not limited to the point-to-plane precise registration method of searching for nearest points proposed by Chen and Medioni and by Bergevin et al., the point-to-projection fast registration method of searching for nearest points proposed by Rusinkiewicz and Levoy, and the contractive-projection-point registration method of searching for nearest points proposed by Soon-Yong and Murali, among others.
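As a sketch of this calibration step, the following uses Open3D's point-to-plane ICP variant. The library, the 2 cm correspondence gate, and the idea of registering against a reference point cloud already expressed in the robot base frame (for example, sampled from the robot CAD model or from measured fiducials) are assumptions made only for illustration; any of the ICP variants mentioned above could be substituted.

```python
import open3d as o3d
import numpy as np

def calibrate_to_robot_base(environment_mesh, base_reference_cloud, init_T=np.eye(4)):
    """Estimate the transform that maps the first environment model into the
    robot base coordinate system by registering it against a reference point
    cloud already expressed in the base frame (an assumption for illustration)."""
    source = environment_mesh.sample_points_uniformly(number_of_points=50000)
    base_reference_cloud.estimate_normals()   # point-to-plane ICP needs target normals
    result = o3d.pipelines.registration.registration_icp(
        source, base_reference_cloud,
        max_correspondence_distance=0.02,      # 2 cm gate (assumption)
        init=init_T,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation               # 4x4 model-to-base transform

# Applying the transform expresses the whole model in the robot base frame:
# environment_mesh.transform(calibrate_to_robot_base(environment_mesh, ref_cloud))
```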
In one embodiment, after step S120 and before step S130, although not shown, the three-dimensional environment modeling method 1000 may include: using a grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete. In one embodiment, the grid integration evaluation method may include: first, based on the predefined robot working range and the robot coordinate system information calibrated in the previous step, segmenting out the reconstructed model within the robot workspace while removing the reconstructed robot model according to the robot's size range, so as to obtain an environment model within the robot workspace that does not contain the robot model; then, dividing the robot workspace into a certain number of subspaces according to certain rules (such as sector segmentation) and displaying the environment model in each subspace in turn; and finally, determining in turn whether each part has been completely reconstructed.
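The workspace segmentation and sector-wise inspection described above might be supported by a small helper such as the following sketch (pure NumPy). The workspace bounds, the cylindrical robot-exclusion radius, and the number of sectors are illustrative assumptions, and the final completeness judgement remains a visual, operator-driven one.

```python
import numpy as np

def split_workspace_into_sectors(vertices, workspace_min, workspace_max,
                                 robot_radius=0.4, n_sectors=8):
    """vertices: (N, 3) array of model vertices in the robot base frame.
    Returns a list of per-sector vertex index arrays inside the workspace,
    excluding a cylinder of radius robot_radius around the robot base."""
    v = np.asarray(vertices)
    in_box = np.all((v >= workspace_min) & (v <= workspace_max), axis=1)
    outside_robot = np.linalg.norm(v[:, :2], axis=1) > robot_radius
    keep = np.where(in_box & outside_robot)[0]

    # Sector (fan) segmentation around the robot base z-axis.
    angles = np.arctan2(v[keep, 1], v[keep, 0])                  # range [-pi, pi]
    sector_id = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    return [keep[sector_id == s] for s in range(n_sectors)]

# Each sector's sub-model can then be displayed in turn and judged complete or not.
```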
In one embodiment, after step S120 and before step S130, although not shown, the three-dimensional environment modeling method 1000 may further include: after using the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete, evaluating the accuracy of the calibrated first environment model. In one embodiment, the accuracy evaluation may include: first obtaining the true values of the relative distances between key objects in the robot workspace; then measuring the corresponding distance values in the reconstructed model and obtaining a series of distance errors from |measured value − true value|; and finally determining whether the largest distance error is smaller than a preset threshold, so as to decide whether the accuracy meets the requirements.
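In code form, the |measured value − true value| check might look like the following sketch; the 5 mm threshold is an illustrative assumption.

```python
import numpy as np

def accuracy_ok(true_distances, measured_distances, threshold=0.005):
    """true_distances / measured_distances: matching lists of key-object
    distances (metres); accuracy is accepted if the largest absolute error
    stays below the preset threshold (5 mm here, an illustrative value)."""
    errors = np.abs(np.asarray(measured_distances) - np.asarray(true_distances))
    return errors.max() <= threshold, errors
```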
In one embodiment, performing mesh cutting and filling on the first environment model includes: selecting an incomplete plane to be reconstructed; obtaining parameters of a plane fitted to the vertices on that plane by the least squares method; and creating new triangular patches (faces) according to the parameters to replace the incomplete plane. In this way, the problem of incomplete reconstruction of transparent objects can be solved.
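A minimal sketch of the least-squares plane fit and its replacement by fresh triangles is given below (pure NumPy). The SVD-based fit and the bounding-rectangle replacement patch are simplifying assumptions used only to illustrate the idea of rebuilding an incompletely reconstructed plane.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through the selected vertices: returns a point on
    the plane (the centroid) and the unit normal (smallest-variance direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                       # direction of least variance
    return centroid, normal

def rebuild_plane_patch(points):
    """Replace an incompletely reconstructed planar region by two triangles
    spanning the bounding rectangle of its vertices projected onto the fitted
    plane (a simple stand-in for the new triangular patches of the method)."""
    centroid, normal = fit_plane(points)
    # Build an orthonormal basis (u, v) in the plane.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    local = (np.asarray(points) - centroid) @ np.stack([u, v]).T
    lo, hi = local.min(axis=0), local.max(axis=0)
    corners = [centroid + a * u + b * v
               for a, b in [(lo[0], lo[1]), (hi[0], lo[1]), (hi[0], hi[1]), (lo[0], hi[1])]]
    vertices = np.array(corners)
    faces = np.array([[0, 1, 2], [0, 2, 3]])   # two triangles covering the rectangle
    return vertices, faces
```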
In addition, the mesh cutting and filling of the first environment model can also be used to delete unneeded reconstructed parts of the first environment model. For example, the operator can subjectively select unneeded triangular patch regions and then delete them.
Furthermore, the mesh cutting and filling of the first environment model can also be used to add a workspace boundary. In one embodiment, the robot workspace boundary model can be quickly added to the reconstructed environment model based on the predefined robot workspace boundary model and the previously calibrated robot coordinate system information.
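A sketch of merging a predefined workspace boundary mesh into the calibrated model might look as follows; the Open3D calls and the file-based boundary model are assumptions for illustration.

```python
import open3d as o3d

def add_workspace_boundary(environment_mesh, boundary_mesh_path, boundary_to_base_T):
    """Load the predefined robot workspace boundary model, bring it into the
    robot base frame using the calibrated transform, and merge it with the
    reconstructed environment model."""
    boundary = o3d.io.read_triangle_mesh(boundary_mesh_path)
    boundary.transform(boundary_to_base_T)   # identity if already in the base frame
    return environment_mesh + boundary       # Open3D concatenates the two meshes
```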
Finally, the mesh cutting and filling of the first environment model can also be used to fill holes. In one embodiment, holes smaller than a set boundary-count threshold are selected in turn. To prevent erroneous hole filling, the operator may also subjectively decide whether to fill a given hole. The curvature of the triangular patches used to fill the hole is then determined from the curvature information and its rate of change around the hole boundary.
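The following sketch shows one way to locate hole boundaries and fill the small ones. It uses a flat centroid fan rather than the curvature-matched patches described above, so it should be read only as an illustration of the boundary-threshold selection, not as the method itself.

```python
import numpy as np

def find_boundary_loops(faces):
    """Return hole boundaries as lists of vertex indices. A boundary half-edge
    is a directed edge whose opposite direction never occurs in any triangle."""
    half_edges = set()
    for a, b, c in faces:
        half_edges.update([(a, b), (b, c), (c, a)])
    boundary = {e for e in half_edges if (e[1], e[0]) not in half_edges}
    nxt = {a: b for a, b in boundary}        # each boundary vertex has one successor
    loops, visited = [], set()
    for start in list(nxt):
        if start in visited:
            continue
        loop, v = [], start
        while v not in visited:
            visited.add(v)
            loop.append(v)
            v = nxt[v]
        loops.append(loop)
    return loops

def fill_small_holes(vertices, faces, boundary_threshold=20):
    """Fan-fill every hole whose boundary has fewer edges than the threshold;
    larger holes are left for the operator to judge."""
    vertices, faces = np.asarray(vertices, float), np.asarray(faces, int)
    for loop in find_boundary_loops(faces):
        if len(loop) >= boundary_threshold:
            continue
        centroid = vertices[loop].mean(axis=0)
        c_idx = len(vertices)
        vertices = np.vstack([vertices, centroid])
        new = [[loop[i], loop[(i + 1) % len(loop)], c_idx] for i in range(len(loop))]
        faces = np.vstack([faces, new])
    return vertices, faces
```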
In one embodiment, step S130 may specifically include: performing a mesh simplification process after performing mesh cutting and filling on the calibrated first environment model; and obtaining the workbench model of the industrial robot after the mesh simplification process. In one embodiment, the mesh simplification process may use an extraction algorithm (for example, the Quadric edge decimation algorithm); by setting a target number of faces or a target optimization percentage, a simplified environment model with a significantly reduced number of faces is finally obtained.
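A minimal sketch of such a simplification step, assuming Open3D's quadric edge decimation implementation (the target face count and the 10% default ratio are illustrative):

```python
import open3d as o3d

def simplify_workbench_mesh(mesh, target_faces=None, target_ratio=0.1):
    """Reduce the face count either to an absolute target number of faces or
    to a target percentage of the current count (quadric edge decimation)."""
    if target_faces is None:
        target_faces = max(1, int(len(mesh.triangles) * target_ratio))
    simplified = mesh.simplify_quadric_decimation(
        target_number_of_triangles=target_faces)
    simplified.compute_vertex_normals()
    return simplified
```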
Referring to Fig. 2, Fig. 2 shows a three-dimensional environment modeling method 2000 for an industrial robot according to an embodiment of the present invention.
In step S210, the operator acquires a color and depth information stream of the surrounding environment of the industrial robot using a handheld RGB-D depth camera.
In step S220, the original environment model is obtained from the color and depth information stream through a three-dimensional reconstruction algorithm.
In step S230, the original environment model is quickly calibrated. For example, the coordinate system of the original environment model can be calibrated to a coordinate system of the industrial robot (for example, the base coordinate system or the world coordinate system).
In step S240, grid integration evaluation is performed. In one embodiment, the grid integration evaluation method may include: first, based on the predefined robot working range and the robot coordinate system information calibrated in the previous step, segmenting out the reconstructed model within the robot workspace while removing the reconstructed robot model according to the robot's size range, so as to obtain an environment model within the robot workspace that does not contain the robot model; then, dividing the robot workspace into a certain number of subspaces according to certain rules (such as sector segmentation) and displaying the environment model in each subspace in turn; and finally, determining in turn whether each part has been completely reconstructed. By performing the grid integration evaluation, it is possible to measure whether the mapping of the calibrated model is complete.
In one embodiment, if the mapping of the calibrated model is not complete, step S220 is executed again. If the calibrated model is determined to be complete, step S250 is executed.
In step S250, grid accuracy evaluation is performed. In one embodiment, the accuracy evaluation may include: first obtaining the true values of the relative distances between key objects in the robot workspace; then measuring the corresponding distance values in the reconstructed model and obtaining a series of distance errors from |measured value − true value|; and finally determining whether the largest distance error is smaller than a preset threshold, so as to decide whether the accuracy meets the requirements. By performing the grid accuracy evaluation, various accuracy parameters and methods can be used to measure whether the accuracy of the calibrated model meets the requirements of the three-dimensional workbench model.
In one embodiment, if the accuracy of the calibrated model does not meet the requirements, step S220 is executed again. If the accuracy of the calibrated model is determined to meet the requirements, step S260 is executed.
In step S260, fast mesh cutting and filling is performed.
In one embodiment, step S260 may include: first selecting an incompletely reconstructed plane, obtaining the parameters of the fitted plane from the vertices on the plane by the least squares method, and then creating new triangular patches (faces) according to the parameters to replace the original incomplete plane. In this way, the problem of incomplete reconstruction of transparent objects can be solved by performing fast mesh cutting and filling.
In one embodiment, step S260 can also be used to delete unneeded reconstructed parts. Specifically, the operator subjectively selects unneeded triangular patch regions and then deletes them.
In one embodiment, step S260 can also be used to add a workspace boundary. For example, the robot workspace boundary model is quickly added to the reconstructed environment model based on the predefined robot workspace boundary model and the previously calibrated robot coordinate system information.
In one embodiment, step S260 can also be used to fill holes. For example, holes smaller than a set boundary-count threshold are selected in turn; to prevent erroneous hole filling, the operator subjectively decides whether to fill a given hole; the curvature of the triangular patches used to fill the hole is then determined from the curvature information and its rate of change around the hole boundary.
In step S270, a mesh simplification process is performed.
In one embodiment, the mesh simplification process may use an extraction algorithm (for example, the Quadric edge decimation algorithm); by setting a target number of faces or a target optimization percentage, a simplified environment model with a significantly reduced number of faces is finally obtained.
In step S280, the final three-dimensional workbench model is obtained.
Although not shown in Fig. 2, after the three-dimensional workbench model has been obtained, if the external environment changes, object models can be quickly added to or removed from the workbench model as follows: first, an existing three-dimensional reconstruction algorithm is run to obtain a model of the new object; then, based on the iterative closest point (ICP) method, the model of the new object is added to the original workbench model, or the object model is removed from the original workbench model.
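A sketch of how a newly reconstructed scan containing the new object might be registered to the existing workbench model and merged is given below. The use of Open3D, point-to-point ICP, the sampling densities, and the assumption that the new scan overlaps enough already-modeled geometry for ICP to converge are all illustrative assumptions; removal of an object would proceed analogously by deleting the registered region instead of appending it.

```python
import open3d as o3d
import numpy as np

def add_object_to_workbench(workbench_mesh, new_scan_mesh, init_T=np.eye(4)):
    """Register the new reconstruction (containing the new object plus some
    already-modeled surroundings) against the workbench model with ICP, then
    merge the aligned geometry into the workbench model."""
    src = new_scan_mesh.sample_points_uniformly(number_of_points=20000)
    dst = workbench_mesh.sample_points_uniformly(number_of_points=50000)
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_correspondence_distance=0.02, init=init_T,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    new_scan_mesh.transform(result.transformation)
    return workbench_mesh + new_scan_mesh
```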
Fig. 3 shows a three-dimensional environment modeling device 3000 for an industrial robot according to an embodiment of the present invention.
As shown in Fig. 3, the three-dimensional environment modeling device 3000 includes: a first acquisition unit 310, a calibration unit 320, and a second acquisition unit 330. The first acquisition unit 310 is configured to perform three-dimensional modeling of the surrounding environment of the industrial robot based on color and depth information of the surrounding environment, so as to obtain a first environment model. The calibration unit 320 is configured to calibrate the coordinate system of the first environment model to the base coordinate system of the industrial robot. The second acquisition unit 330 is configured to obtain the workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model.
In one embodiment, the first acquisition unit 310 is configured to acquire the color and depth information from a handheld RGB-D camera.
Although not shown in Fig. 3, the three-dimensional environment modeling device 3000 may further include: a cropping unit, wherein the cropping unit is configured to crop the calibrated first environment model according to the workspace range of the industrial robot after the calibration unit 320 has calibrated the coordinate system of the first environment model to the base coordinate system of the industrial robot.
In one embodiment, the three-dimensional environment modeling device 3000 may further include: a grid integration evaluation unit, wherein the grid integration evaluation unit is configured to use the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete after the calibration unit 320 has calibrated the coordinate system of the first environment model to the base coordinate system of the industrial robot.
In one embodiment, the three-dimensional environment modeling device 3000 may further include: an accuracy evaluation unit configured to evaluate the accuracy of the calibrated first environment model after the grid integration evaluation unit has used the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete.
In one embodiment, the second acquisition unit 330 may include a mesh cutting and filling unit, wherein the mesh cutting and filling unit is configured to: select an incomplete plane to be reconstructed; obtain parameters of a plane fitted to the vertices on the plane by the least squares method; and create new triangular patches according to the parameters to replace the incomplete plane. In another embodiment, the mesh cutting and filling unit is configured to: add the robot workspace boundary model to the calibrated first environment model. In yet another embodiment, the mesh cutting and filling unit is configured to: sequentially select holes smaller than a set boundary threshold; and determine the curvature of the triangular patches used to fill a hole according to the curvature information and its rate of change around the boundary of the selected hole.
In one embodiment, the second acquisition unit 330 further includes a mesh simplification unit and a third acquisition unit. The mesh simplification unit is configured to perform a mesh simplification process after the mesh cutting and filling of the calibrated first environment model. In one embodiment, the mesh simplification unit is configured to: determine a target number of faces or a target optimization percentage; and obtain the determined number of faces using an extraction algorithm. The third acquisition unit is configured to obtain the workbench model of the industrial robot after the mesh simplification unit has performed the mesh simplification process.
In one embodiment, the three-dimensional environment modeling device 3000 may further include a fourth acquisition unit and an addition/deletion unit. The fourth acquisition unit is used to obtain a model of a new object when the surrounding environment of the industrial robot changes. The addition/deletion unit is used to add the model of the new object to the workbench model based on the iterative closest point method.
Those skilled in the art will readily appreciate that the three-dimensional environment modeling method provided by one or more embodiments of the present invention can be implemented by a computer program. For example, when a computer storage medium (such as a USB flash drive) storing the computer program is connected to a computer, running the computer program can execute the three-dimensional environment modeling method of the embodiments of the present invention.
In summary, multiple embodiments of the present invention provide three-dimensional environment modeling solutions. Although only some specific embodiments of the present invention have been described, those of ordinary skill in the art will appreciate that the present invention can be implemented in many other forms without departing from its spirit and scope, for example on an industrial robot operating platform. Accordingly, the examples and embodiments shown are to be regarded as illustrative rather than restrictive, and the present invention may cover various modifications and substitutions without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (24)

  1. A three-dimensional environment modeling method for an industrial robot, the method comprising:
    performing three-dimensional modeling of the surrounding environment of the industrial robot based on color and depth information of the surrounding environment of the industrial robot, so as to obtain a first environment model;
    calibrating the coordinate system of the first environment model to the base coordinate system of the industrial robot; and
    obtaining a workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model.
  2. The three-dimensional environment modeling method according to claim 1, wherein the color and depth information is acquired with a handheld RGB-D camera.
  3. The three-dimensional environment modeling method according to claim 1, further comprising:
    after calibrating the coordinate system of the first environment model to the base coordinate system of the industrial robot, cropping the calibrated first environment model according to the workspace range of the industrial robot.
  4. The three-dimensional environment modeling method according to claim 1, further comprising:
    after calibrating the coordinate system of the first environment model to the base coordinate system of the industrial robot, using a grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete.
  5. The three-dimensional environment modeling method according to claim 4, further comprising:
    after using the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete, evaluating the accuracy of the calibrated first environment model.
  6. The three-dimensional environment modeling method according to claim 1, wherein the mesh cutting and filling comprises:
    selecting an incomplete plane to be reconstructed;
    obtaining parameters of a plane fitted to the vertices on the plane by the least squares method; and
    creating new triangular patches according to the parameters to replace the incomplete plane.
  7. The three-dimensional environment modeling method according to claim 1, wherein the mesh cutting and filling comprises:
    adding a robot workspace boundary model to the calibrated first environment model.
  8. The three-dimensional environment modeling method according to claim 1, wherein the mesh cutting and filling comprises:
    sequentially selecting holes smaller than a set boundary threshold; and
    determining the curvature of the triangular patches used to fill a hole according to the curvature information and its rate of change around the boundary of the selected hole.
  9. The three-dimensional environment modeling method according to claim 1, wherein obtaining the workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model comprises:
    performing a mesh simplification process after performing mesh cutting and filling on the calibrated first environment model; and
    obtaining the workbench model of the industrial robot after the mesh simplification process.
  10. The three-dimensional environment modeling method according to claim 9, wherein the mesh simplification process comprises:
    determining a target number of faces or a target optimization percentage; and
    obtaining the determined number of faces using an extraction algorithm.
  11. The three-dimensional environment modeling method according to claim 1, further comprising:
    obtaining a model of a new object when the surrounding environment of the industrial robot changes; and
    adding the model of the new object to the workbench model based on the iterative closest point method.
  12. A three-dimensional environment modeling device for an industrial robot, the device comprising:
    a first acquisition unit configured to perform three-dimensional modeling of the surrounding environment of the industrial robot based on color and depth information of the surrounding environment of the industrial robot, so as to obtain a first environment model;
    a calibration unit configured to calibrate the coordinate system of the first environment model to the base coordinate system of the industrial robot; and
    a second acquisition unit configured to obtain a workbench model of the industrial robot at least by performing mesh cutting and filling on the calibrated first environment model.
  13. The three-dimensional environment modeling device according to claim 12, wherein the first acquisition unit is configured to acquire the color and depth information from a handheld RGB-D camera.
  14. The three-dimensional environment modeling device according to claim 12, further comprising:
    a cropping unit configured to crop the calibrated first environment model according to the workspace range of the industrial robot after the calibration unit has calibrated the coordinate system of the first environment model to the base coordinate system of the industrial robot.
  15. The three-dimensional environment modeling device according to claim 12, further comprising:
    a grid integration evaluation unit configured to use a grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete after the calibration unit has calibrated the coordinate system of the first environment model to the base coordinate system of the industrial robot.
  16. The three-dimensional environment modeling device according to claim 15, further comprising:
    an accuracy evaluation unit configured to evaluate the accuracy of the calibrated first environment model after the grid integration evaluation unit has used the grid integration evaluation method to measure whether the mapping of the calibrated first environment model is complete.
  17. The three-dimensional environment modeling device according to claim 12, wherein the second acquisition unit comprises a mesh cutting and filling unit, and the mesh cutting and filling unit is configured to:
    select an incomplete plane to be reconstructed;
    obtain parameters of a plane fitted to the vertices on the plane by the least squares method; and
    create new triangular patches according to the parameters to replace the incomplete plane.
  18. The three-dimensional environment modeling device according to claim 12, wherein the second acquisition unit comprises a mesh cutting and filling unit, and the mesh cutting and filling unit is configured to:
    add a robot workspace boundary model to the calibrated first environment model.
  19. The three-dimensional environment modeling device according to claim 12, wherein the second acquisition unit comprises a mesh cutting and filling unit, and the mesh cutting and filling unit is configured to:
    sequentially select holes smaller than a set boundary threshold; and
    determine the curvature of the triangular patches used to fill a hole according to the curvature information and its rate of change around the boundary of the selected hole.
  20. The three-dimensional environment modeling device according to claim 12, wherein the second acquisition unit further comprises:
    a mesh simplification unit configured to perform a mesh simplification process after the mesh cutting and filling of the calibrated first environment model; and
    a third acquisition unit configured to obtain the workbench model of the industrial robot after the mesh simplification unit has performed the mesh simplification process.
  21. The three-dimensional environment modeling device according to claim 20, wherein the mesh simplification unit is configured to:
    determine a target number of faces or a target optimization percentage; and
    obtain the determined number of faces using an extraction algorithm.
  22. The three-dimensional environment modeling device according to claim 12, further comprising:
    a fourth acquisition unit for obtaining a model of a new object when the surrounding environment of the industrial robot changes; and
    an addition/deletion unit for adding the model of the new object to the workbench model based on the iterative closest point method.
  23. A computer storage medium comprising instructions which, when run, perform the three-dimensional environment modeling method according to any one of claims 1 to 11.
  24. An industrial robot operating platform comprising the three-dimensional environment modeling device according to any one of claims 12 to 22.
PCT/CN2019/104728 2019-09-06 2019-09-06 用于工业机器人的三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台 WO2021042374A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201980100000.XA CN114364942A (zh) 2019-09-06 2019-09-06 用于工业机器人的三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台
PCT/CN2019/104728 WO2021042374A1 (zh) 2019-09-06 2019-09-06 用于工业机器人的三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台
DE112019007466.0T DE112019007466T5 (de) 2019-09-06 2019-09-06 Verfahren und Vorrichtung zur Modellierung einer dreidimensionalen Umgebung eines industriellen Roboters, Computerspeichermedium sowie Arbeitsplattform für industriellen Roboter
TW109130228A TWI832002B (zh) 2019-09-06 2020-09-03 用於工業機器人的三維環境建模方法及設備、電腦儲存媒體以及工業機器人操作平台

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/104728 WO2021042374A1 (zh) 2019-09-06 2019-09-06 用于工业机器人的三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台

Publications (1)

Publication Number Publication Date
WO2021042374A1 true WO2021042374A1 (zh) 2021-03-11

Family

ID=74852975

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104728 WO2021042374A1 (zh) 2019-09-06 2019-09-06 用于工业机器人的三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台

Country Status (4)

Country Link
CN (1) CN114364942A (zh)
DE (1) DE112019007466T5 (zh)
TW (1) TWI832002B (zh)
WO (1) WO2021042374A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105674991A (zh) * 2016-03-29 2016-06-15 深圳市华讯方舟科技有限公司 一种机器人定位方法和装置
CN107607107A (zh) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 一种基于先验信息的Slam方法和装置
CN107729295A (zh) * 2017-10-19 2018-02-23 广东工业大学 一种羽毛球的实时落点预判方法、平台及设备
CN108573221A (zh) * 2018-03-28 2018-09-25 重庆邮电大学 一种基于视觉的机器人目标零件显著性检测方法
US20190022863A1 (en) * 2017-07-20 2019-01-24 Tata Consultancy Services Limited Systems and methods for detecting grasp poses for handling target objects
CN109676604A (zh) * 2018-12-26 2019-04-26 清华大学 机器人曲面运动定位方法及其运动定位系统
CN109961406A (zh) * 2017-12-25 2019-07-02 深圳市优必选科技有限公司 一种图像处理的方法、装置及终端设备

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9384585B2 (en) * 2012-10-23 2016-07-05 Electronics And Telecommunications Research Institute 3-dimensional shape reconstruction device using depth image and color image and the method
US9083960B2 (en) * 2013-01-30 2015-07-14 Qualcomm Incorporated Real-time 3D reconstruction with power efficient depth sensor usage
KR101687017B1 (ko) * 2014-06-25 2016-12-16 한국과학기술원 머리 착용형 컬러 깊이 카메라를 활용한 손 위치 추정 장치 및 방법, 이를 이용한 맨 손 상호작용 시스템
US10573018B2 (en) * 2016-07-13 2020-02-25 Intel Corporation Three dimensional scene reconstruction based on contextual analysis
CN106384079B (zh) * 2016-08-31 2019-04-30 东南大学 一种基于rgb-d信息的实时行人跟踪方法
CA3040599C (en) * 2016-11-29 2022-09-06 Continental Automotive Gmbh Method and system for generating environment model and for positioning using cross-sensor feature point referencing
DE102018201801A1 (de) * 2018-02-06 2019-08-08 Siemens Aktiengesellschaft Vorrichtung zur Wahrnehmung eines Bereichs, Verfahren zum Betreiben einer Vorrichtung und Computerprogrammprodukt

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105674991A (zh) * 2016-03-29 2016-06-15 深圳市华讯方舟科技有限公司 一种机器人定位方法和装置
US20190022863A1 (en) * 2017-07-20 2019-01-24 Tata Consultancy Services Limited Systems and methods for detecting grasp poses for handling target objects
CN107607107A (zh) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 一种基于先验信息的Slam方法和装置
CN107729295A (zh) * 2017-10-19 2018-02-23 广东工业大学 一种羽毛球的实时落点预判方法、平台及设备
CN109961406A (zh) * 2017-12-25 2019-07-02 深圳市优必选科技有限公司 一种图像处理的方法、装置及终端设备
CN108573221A (zh) * 2018-03-28 2018-09-25 重庆邮电大学 一种基于视觉的机器人目标零件显著性检测方法
CN109676604A (zh) * 2018-12-26 2019-04-26 清华大学 机器人曲面运动定位方法及其运动定位系统

Also Published As

Publication number Publication date
DE112019007466T5 (de) 2022-03-31
TWI832002B (zh) 2024-02-11
CN114364942A (zh) 2022-04-15
TW202111667A (zh) 2021-03-16

Similar Documents

Publication Publication Date Title
JP6978330B2 (ja) オブジェクト形状および設計からのずれの監視
WO2018059155A1 (zh) 带有几何误差的三维实体模型的构建方法及计算机可读存储介质
JP5703396B2 (ja) 減数された測定点による公差評価
US9275461B2 (en) Information processing apparatus, information processing method and storage medium
WO2017195228A1 (en) Process and system to analyze deformations in motor vehicles
CN106959075B (zh) 利用深度相机进行精确测量的方法和系统
JP2017533414A (ja) ボリューム・データから抽出されるサーフェスデータの局所品質を決定する方法およびシステム
US20160138914A1 (en) System and method for analyzing data
EP3455799B1 (en) Process and system for computing the cost of usable and consumable materials for painting of motor vehicles, from the analysis of deformations in motor vehicles
CN106952331B (zh) 一种基于三维模型的纹理映射方法和装置
US6944564B2 (en) Method for the automatic calibration-only, or calibration and qualification simultaneously of a non-contact probe
WO2021042376A1 (zh) 用于工业机器人的标定方法及装置、三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台
WO2021042374A1 (zh) 用于工业机器人的三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台
JP4393962B2 (ja) 光源推定システムおよびプログラム
WO2019087032A1 (en) Method for the reconstruction of cad models through parametric adaptation
CN110415210B (zh) 一种基于点云贪婪三角投影构建模型的孔洞检测和修补方法
CN108646669B (zh) 一种曲面加工零件表面轮廓误差的近似评估方法
US20240078764A1 (en) Method for creating a 3d digital model of one or more aircraft elements in order to produce augmented reality images
JP3739209B2 (ja) 点群からのポリゴン自動生成システム
CN113396441A (zh) 用于在测量数据中确定表面的计算机实现方法
CN116625242B (zh) 光学三坐标测量机路径规划方法、系统、电子设备及介质
JP6149561B2 (ja) 評価方法および評価装置
US20240135733A1 (en) Fillet detection method
JP6632502B2 (ja) 開口部を有する部品モデルの開口面積を測定するための支援装置および方法
CN114266144A (zh) 基于ug/nx的多工件零件的装配配合检测方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19944634

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19944634

Country of ref document: EP

Kind code of ref document: A1