WO2021189194A1 - Three-dimensional environment modeling method and device, computer storage medium, and industrial robot operating platform - Google Patents

Three-dimensional environment modeling method and device, computer storage medium, and industrial robot operating platform Download PDF

Info

Publication number
WO2021189194A1
Authority
WO
WIPO (PCT)
Prior art keywords
augmented reality
point cloud
codes
reality codes
surrounding environment
Prior art date
Application number
PCT/CN2020/080684
Other languages
French (fr)
Chinese (zh)
Inventor
丁万 (Ding Wan)
察普夫·马克·帕特里克 (Zapf, Marc Patrick)
Original Assignee
罗伯特博世有限公司 (Robert Bosch GmbH)
丁万 (Ding Wan)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 罗伯特博世有限公司 (Robert Bosch GmbH) and 丁万 (Ding Wan)
Priority to CN202080098898.4A priority Critical patent/CN115244581A/en
Priority to PCT/CN2020/080684 priority patent/WO2021189194A1/en
Priority to DE112020006265.1T priority patent/DE112020006265T5/en
Publication of WO2021189194A1 publication Critical patent/WO2021189194A1/en

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general involving 3D image data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30204: Marker
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/56: Particle system, point based geometry or rendering

Definitions

  • the invention relates to a three-dimensional environment modeling method and equipment, a computer storage medium and an industrial robot operating platform.
  • the depth sensor of the RGB-D camera may encounter problems when capturing reflective surfaces such as glass.
  • the part of the reconstructed three-dimensional model that was originally a solid surface may be missing or there may be noise.
  • An aspect of the present invention provides a three-dimensional environment modeling method, the method comprising: scanning the surrounding environment to obtain a point cloud stream; detecting one or more augmented reality codes in the surrounding environment; and operating on the point cloud stream at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes, so as to model the surrounding environment (thereby accelerating modeling and automatically optimizing and completing the point cloud stream).
  • scanning the surrounding environment includes: using an RGB-D camera to scan the surrounding environment from multiple angles.
  • the aforementioned three-dimensional environment modeling method may further include: recording the point cloud stream.
  • detecting one or more augmented reality codes in the surrounding environment includes: while scanning the surrounding environment, using an augmented reality tracking library to detect one or more augmented reality codes in the surrounding environment and their degree-of-freedom poses relative to the RGB-D camera; and recording the poses and registering them in the point cloud coordinate system.
  • the one or more augmented reality codes are placed on the surface of the target object.
  • operating on the point cloud stream at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes includes: constructing, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, wherein the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.
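The geometry-construction operation described above can be sketched as follows. This is an illustrative sketch only: the publication does not specify an implementation, and the function name, the use of a 4x4 homogeneous marker pose, and the grid spacing are assumptions.

```python
import numpy as np

def construct_wall_points(marker_pose, width, height, spacing=0.01):
    # Synthesize a grid of points forming a rectangular "wall" patch in the
    # marker's own plane, then transform it into the point cloud frame.
    # marker_pose: 4x4 homogeneous transform (marker frame -> cloud frame).
    nu = int(round(width / spacing)) + 1
    nv = int(round(height / spacing)) + 1
    u = np.linspace(-width / 2, width / 2, nu)
    v = np.linspace(-height / 2, height / 2, nv)
    uu, vv = np.meshgrid(u, v)
    local = np.stack([uu.ravel(), vv.ravel(),
                      np.zeros(uu.size), np.ones(uu.size)])  # 4 x N
    return (marker_pose @ local)[:3].T  # N x 3 points in cloud coordinates

# Example: a 1 m x 0.5 m wall at a marker translated 2 m along x.
pose = np.eye(4)
pose[0, 3] = 2.0
wall = construct_wall_points(pose, 1.0, 0.5, spacing=0.1)
```

The synthesized points would then simply be appended to the recorded point cloud at the marker's registered pose.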
  • the geometric structure is a polygonal boundary/wall, a rectangular parallelepiped, a cylinder, or the like.
  • operating on the point cloud stream at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes includes: removing, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the one or more augmented reality codes in the point cloud stream, wherein the extent and orientation of the region of points to be removed are encoded in the information.
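A minimal sketch of the removal operation, assuming (hypothetically) that the code's payload decodes to an axis-aligned box around the marker; the publication leaves the region encoding open, so the function name and box parameterization are illustrative.

```python
import numpy as np

def remove_points_near_marker(cloud, marker_position, extent):
    # Drop all points inside an axis-aligned box of half-size `extent`
    # centred on the marker position (a simplified stand-in for the
    # range/orientation encoded in the code's payload).
    cloud = np.asarray(cloud)
    inside = np.all(np.abs(cloud - marker_position) <= extent, axis=1)
    return cloud[~inside]

# Example: remove a 0.2 m cube of points around a marker at the origin.
pts = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [1.0, 1.0, 1.0]])
kept = remove_points_near_marker(pts, np.zeros(3), 0.1)
```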
  • operating on the point cloud stream at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes includes: performing region operations on the point cloud stream, wherein the region operations include region segmentation, filtering, and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.
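The filtering and clustering region operations can be illustrated with the following sketch; the pass-through bounds and cluster radius stand in for parameters that would, in the described method, come from the code's payload. Both helpers are hypothetical names, and the O(N^2) clustering is for illustration only (a production version would use a k-d tree, e.g. PCL's Euclidean cluster extraction).

```python
import numpy as np

def passthrough_filter(cloud, axis, lo, hi):
    # Keep points whose coordinate on `axis` lies in [lo, hi].
    cloud = np.asarray(cloud)
    mask = (cloud[:, axis] >= lo) & (cloud[:, axis] <= hi)
    return cloud[mask]

def euclidean_clusters(cloud, radius):
    # Naive single-linkage clustering: points closer than `radius`
    # share a cluster label. Quadratic cost, illustration only.
    cloud = np.asarray(cloud)
    n = len(cloud)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], current
        while stack:
            j = stack.pop()
            near = np.where(np.linalg.norm(cloud - cloud[j], axis=1) < radius)[0]
            for k in near:
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

pts = np.array([[0, 0, 0], [0.01, 0, 0], [5, 5, 5]], dtype=float)
labels = euclidean_clusters(pts, radius=0.1)
```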
  • Another aspect of the present invention provides a three-dimensional environment modeling method, the method comprising: scanning the surrounding environment of the industrial robot to obtain a point cloud stream; detecting one or more augmented reality codes in the surrounding environment; operating on the point cloud stream at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes; and processing the operated point cloud stream into a surface mesh for modeling the surrounding environment (thereby accelerating modeling and automatically optimizing and completing the point cloud stream).
  • Another aspect of the present invention provides a three-dimensional environment modeling device, the device comprising: a scanning device for scanning the surrounding environment to obtain a point cloud stream; a detection device for detecting one or more augmented reality codes in the surrounding environment; and an execution device for operating on the point cloud stream at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes, so as to model the surrounding environment.
  • the scanning device is configured to scan the surrounding environment from multiple angles using an RGB-D camera.
  • the above-mentioned three-dimensional environment modeling device may further include: a recording device for recording the point cloud stream.
  • the detection device includes: a first detection unit configured to use an augmented reality tracking library, while scanning the surrounding environment, to detect one or more augmented reality codes in the surrounding environment and their degree-of-freedom poses relative to the RGB-D camera; and a first recording unit for recording the poses and registering them to the point cloud coordinate system.
  • the one or more augmented reality codes are placed on the surface of the target object.
  • the execution device is configured to construct, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, wherein the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.
  • the geometric structure is a polygonal boundary/wall, a rectangular parallelepiped, a cylinder, or the like.
  • the execution device is further configured to remove, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the one or more augmented reality codes in the point cloud stream, wherein the extent and orientation of the region of points to be removed are encoded in the information.
  • the execution device is configured to perform region operations on the point cloud stream, wherein the region operations include region segmentation, filtering, and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.
  • Another aspect of the present invention provides a three-dimensional environment modeling device for an industrial robot, the device comprising: a scanning device for scanning the surrounding environment of the industrial robot to obtain a point cloud stream; a detection device for detecting one or more augmented reality codes in the surrounding environment; an execution device for operating on the point cloud stream at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes; and a processing device for processing the operated point cloud stream into a surface mesh, so that the surrounding environment can be modeled.
  • Another aspect of the present invention provides a computer storage medium comprising instructions which, when run, execute the aforementioned three-dimensional environment modeling method.
  • Another solution of the present invention provides an industrial robot operating platform, which includes the three-dimensional environment modeling device as described above.
  • the aforementioned 3D environment modeling solution utilizes one or more augmented reality codes in the environment, avoiding or reducing the manual work involved in data processing after the environment scan, making it more convenient and efficient.
  • the information carried by the augmented reality code can be customized as needed to modify or perform other operations on the point cloud or grid to be reconstructed.
  • Fig. 1 shows a three-dimensional environment modeling method according to an embodiment of the present invention.
  • Fig. 2 shows a three-dimensional environment modeling method for industrial robots according to an embodiment of the present invention.
  • Fig. 3 shows a three-dimensional environment modeling device according to an embodiment of the present invention.
  • FIG. 1 shows a three-dimensional environment modeling method 1000 according to an embodiment of the present invention.
  • In step S110, the surrounding environment is scanned to obtain a point cloud stream.
  • In step S120, one or more augmented reality codes in the surrounding environment are detected.
  • In step S130, the point cloud stream is operated on at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes, so as to model the surrounding environment.
  • an augmented reality code refers to a marker that triggers a specific operation on the point cloud stream obtained after scanning.
  • the augmented reality code can take the form of a QR code, a Data Matrix code, or a MaxiCode, which is not limited in the present invention.
  • the operations performed on the point cloud stream can be encoded or defined in the augmented reality code.
  • one or more augmented reality codes can be placed in the work area, for example on planes and on objects to be segmented.
  • these augmented reality codes may carry the following information: whether adjacent surfaces should be included in the reconstructed model or deleted from the reconstructed model, or whether certain point cloud operations (such as segmentation) should be performed.
  • augmented reality codes can be used to mark surfaces that cannot be detected by a 3D scanner (such as an RGB-D camera).
  • the augmented reality code can define a glass wall, including its dimensions, or it can add an artificial wall to the reconstruction model where the augmented reality code is placed.
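The publication does not fix how this information is encoded in a code's payload, so the following is only a hypothetical scheme: a compact JSON payload naming the operation ("add a wall", "remove a region", "segment") and its parameters. All field names are assumptions for illustration.

```python
import json

# Hypothetical payload carried by one augmented reality code; any compact
# scheme (JSON, key-value pairs, binary) could serve the same purpose.
payload = json.dumps({
    "op": "add_wall",        # construct geometry at the marker
    "size": [2.0, 1.5],      # wall width x height in metres
    "normal": [0, 0, 1],     # orientation relative to the marker plane
})

def decode_marker_payload(raw):
    # Parse the operation request encoded in a detected code and check
    # that it names a supported point cloud operation.
    info = json.loads(raw)
    assert info["op"] in {"add_wall", "remove_region", "segment"}
    return info

info = decode_marker_payload(payload)
```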
  • the above-mentioned three-dimensional environment modeling solution utilizes one or more augmented reality codes in the environment, avoids or reduces manual operations involved in the data processing process after environment scanning, and is more convenient and efficient.
  • step S110 includes: using an RGB-D camera to scan the surrounding environment from multiple angles.
  • examples of RGB-D depth cameras include Microsoft Kinect, ASUS Xtion, Orbbec (Obi Zhongguang), Intel RealSense, etc., which are not limited by the present invention.
  • multiple RGB-D cameras may be used to scan the surrounding environment from multiple angles, and the multiple RGB-D cameras may be located at different positions; for example, the first RGB-D camera is located at a first position and the second RGB-D camera is located at a second position different from the first position.
  • the three-dimensional environment modeling method 1000 may further include: recording the point cloud stream.
  • the recording step can be located between step S110 and step S120.
  • step S120 includes: while scanning the surrounding environment, using an augmented reality tracking library to detect one or more augmented reality codes in the surrounding environment and their degree-of-freedom poses relative to the RGB-D camera; and recording the poses and registering them to the point cloud coordinate system.
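Registering a detected pose into the point cloud coordinate system amounts to chaining homogeneous transforms. A minimal sketch, assuming the tracking library reports the marker pose in the camera frame and the camera pose in the cloud/world frame is known from the reconstruction; the function name is illustrative.

```python
import numpy as np

def register_marker_to_cloud(T_cam_marker, T_world_cam):
    # Compose: pose of the marker in the camera frame, chained with the
    # camera pose in the (point cloud / world) frame.
    return T_world_cam @ T_cam_marker

# Example: camera translated 1 m along z in the world frame; marker
# detected 0.5 m in front of the camera.
T_world_cam = np.eye(4); T_world_cam[2, 3] = 1.0
T_cam_marker = np.eye(4); T_cam_marker[2, 3] = 0.5
T_world_marker = register_marker_to_cloud(T_cam_marker, T_world_cam)
```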
  • the detected augmented reality code (such as a QR code), together with its six-degree-of-freedom pose and/or other information stored in the augmented reality tracking library, is registered to the coordinate system of the point cloud stream.
  • the one or more augmented reality codes are placed on the surface of the target object and remain stationary while the camera (e.g., an RGB-D depth camera) scans.
  • one or more augmented reality codes can be placed on the glass wall to help the RGB-D depth camera recognize the wall.
  • the dimensions of the wall surface may also be encoded in the one or more augmented reality codes.
  • one or more augmented reality codes may be placed at the boundary of the plane, so as to fill a boundary of a predetermined size in the point cloud stream at the position of the one or more augmented reality codes.
  • one or more augmented reality codes may be placed on the target object, so as to remove the target object or a part of it from the point cloud stream.
  • step S130 includes: constructing, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, wherein the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.
  • the geometric structure may be a polygonal boundary/wall, a rectangular parallelepiped or a cylinder, etc., which is not limited in the present invention.
  • step S130 includes: removing, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the one or more augmented reality codes in the point cloud stream, wherein the extent and orientation of the region of points to be removed are encoded in the information.
  • step S130 includes: performing region operations on the point cloud stream, wherein the region operations include region segmentation, filtering, and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.
  • step S130 may delete a region of points and replace them with other points, that is, simultaneously perform: (1) removing, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the one or more augmented reality codes in the point cloud stream, where the extent and orientation of the region to be removed are encoded in the information; and (2) constructing, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, where the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.
  • Fig. 2 shows a three-dimensional environment modeling method 2000 for an industrial robot according to an embodiment of the present invention.
  • In step S210, the surrounding environment of the industrial robot is scanned to obtain a point cloud stream.
  • In step S220, one or more augmented reality codes in the surrounding environment are detected.
  • In step S230, the point cloud stream is operated on at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes.
  • In step S240, the operated point cloud stream is processed into a surface mesh, so that the surrounding environment can be modeled.
  • the point cloud stream is acquired and recorded by a handheld RGB-D camera driven by a portable microcomputer.
  • the augmented reality code is detected by the RGB sensor in the camera. This can be achieved, for example, by using a tracking library (such as ArUco).
  • the library outputs the 6-DOF pose of the augmented reality code relative to the camera. All the augmented reality code poses will be recorded and co-registered to the recorded point cloud coordinate system.
  • the recorded point cloud is streamed to a dedicated PC for offline post-processing, or post-processed on the scanning PC itself.
  • the point cloud stream is input into a state-of-the-art reconstruction algorithm.
  • the reconstruction may use, for example, the Point Cloud Library (PCL).
  • registration may use ICP (iterative closest point).
  • Those skilled in the art will understand that various iterative closest point (ICP) methods can be used, including but not limited to the point-to-plane nearest-point precise registration method proposed by Chen and Medioni and by Bergevin et al., the point-to-projection fast registration method proposed by Rusinkiewicz and Levoy, and the contractive-projection-point registration method proposed by Soon-Yong and Murali, among others.
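The inner alignment step shared by these ICP variants can be sketched in a few lines: given matched point pairs, the least-squares rigid transform is recovered in closed form via SVD (the Kabsch/Procrustes solution). This is a generic sketch of that common step, not an implementation of any of the cited methods.

```python
import numpy as np

def best_rigid_transform(src, dst):
    # One ICP inner step: least-squares rigid transform (R, t) mapping
    # src -> dst for already-matched point pairs (Kabsch via SVD).
    src, dst = np.asarray(src), np.asarray(dst)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Example: recover a pure translation between two matched clouds.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dst = src + np.array([0.5, -0.2, 0.1])
R, t = best_rigid_transform(src, dst)
```

A full ICP loop would alternate this solve with a nearest-neighbour correspondence search until convergence.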
  • the location of each detected augmented reality code and the information of each code are imported into the processing sequence.
  • part of the point cloud stream will be deleted or the defined area will be added to the point cloud, or specific operations will be applied to the point cloud stream.
  • the finally obtained enhanced point cloud stream will be further processed into a surface mesh.
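The final point-cloud-to-mesh step can be illustrated with a toy example: triangulating a regular height grid into vertices and faces. Real pipelines would use a surface reconstruction method such as Poisson or greedy triangulation (e.g. in PCL); this stand-in only shows the vertex/face output format.

```python
import numpy as np

def heightfield_mesh(z):
    # Triangulate a regular H x W height grid into a vertex/face mesh:
    # two triangles per grid cell. Toy stand-in for surface reconstruction.
    h, w = z.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.stack([xs.ravel(), ys.ravel(), z.ravel()], axis=1)
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            faces.append([i, i + 1, i + w])          # upper-left triangle
            faces.append([i + 1, i + w + 1, i + w])  # lower-right triangle
    return verts, np.array(faces)

# Example: a flat 3 x 3 grid yields 9 vertices and 8 triangles.
verts, faces = heightfield_mesh(np.zeros((3, 3)))
```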
  • simply by adding augmented reality codes to the environment, the above-mentioned 3D environment modeling method 2000 realizes the automatic construction of boundaries or shapes in the 3D model, using the augmented reality codes to automatically remove unwanted surfaces, define reflective surfaces, eliminate noise, and trigger point cloud operations (for example, filtering and segmentation).
  • Fig. 3 shows a three-dimensional environment modeling device 3000 according to an embodiment of the present invention.
  • the three-dimensional environment modeling device 3000 includes: a scanning device 310, a detection device 320, and an execution device 330.
  • the scanning device 310 is used to scan the surrounding environment to obtain a point cloud stream.
  • the detection device 320 is used to detect one or more augmented reality codes in the surrounding environment.
  • the execution device 330 is configured to operate on the point cloud stream at least on the basis of the information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes, so as to model the surrounding environment.
  • an augmented reality code refers to a marker that triggers a specific operation on the point cloud stream obtained after scanning.
  • the augmented reality code can take the form of a QR code, a Data Matrix code, or a MaxiCode, which is not limited in the present invention.
  • the operations performed on the point cloud stream can be encoded or defined in the augmented reality code.
  • one or more augmented reality codes can be placed in the work area, for example on planes and on objects to be segmented.
  • these augmented reality codes may carry the following information: whether adjacent surfaces should be included in the reconstructed model or deleted from the reconstructed model, or whether certain point cloud operations (such as segmentation) should be performed.
  • augmented reality codes can be used to mark surfaces that cannot be detected by a 3D scanner (such as an RGB-D camera).
  • the augmented reality code can define a glass wall, including its dimensions, or it can add an artificial wall to the reconstruction model where the augmented reality code is placed.
  • the above-mentioned three-dimensional environment modeling device 3000 utilizes one or more augmented reality codes in the environment, avoiding or reducing the manual work involved in data processing after the environment scan, making it more convenient and efficient.
  • the scanning device 310 is configured to use an RGB-D camera to scan the surrounding environment from multiple angles.
  • examples of RGB-D cameras include Microsoft Kinect, ASUS Xtion, Orbbec (Obi Zhongguang), Intel RealSense, etc., which are not limited by the present invention.
  • the scanning device 310 may use multiple RGB-D cameras to scan the surrounding environment from multiple angles, and the multiple RGB-D cameras may be located at different positions; for example, the first RGB-D camera is located at a first position and the second RGB-D camera is located at a second position different from the first position.
  • the above-mentioned three-dimensional environment modeling device 3000 may further include: a recording device for recording the point cloud stream.
  • the detection device 320 may include: a first detection unit and a first recording unit.
  • the first detection unit is configured to use an augmented reality tracking library to detect one or more augmented reality codes in the surrounding environment and their degrees of freedom relative to the RGB-D camera when scanning the surrounding environment. posture.
  • the first recording unit is used to record the degree of freedom pose and register it to the point cloud coordinate system.
  • the detection device 320 may be configured to detect an augmented reality code (such as a QR code) and its six-degree-of-freedom pose (and/or other information stored in the augmented reality tracking library), and to register them together to the coordinate system of the point cloud stream.
  • the one or more augmented reality codes are placed on the surface of the target object and remain stationary while the camera (e.g., an RGB-D depth camera) scans.
  • one or more augmented reality codes can be placed on the glass wall to help the RGB-D depth camera recognize the wall.
  • the dimensions of the wall surface may also be encoded in the one or more augmented reality codes.
  • one or more augmented reality codes may be placed at the boundary of the plane, so as to fill a boundary of a predetermined size in the point cloud stream at the position of the one or more augmented reality codes.
  • one or more augmented reality codes may be placed on the target object, so as to remove the target object or a part of it from the point cloud stream.
  • the position and pattern of the augmented reality code may contain information related to customized operations during post-processing for the recorded point cloud stream.
  • the execution device 330 is configured to construct, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, wherein the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.
  • the geometric structure may be a polygonal boundary/wall, a rectangular parallelepiped or a cylinder, etc., which is not limited in the present invention.
  • the execution device 330 is further configured to remove, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the one or more augmented reality codes in the point cloud stream, wherein the extent and orientation of the region of points to be removed are encoded in the information.
  • the execution device 330 is configured to perform region operations on the point cloud stream, wherein the region operations include region segmentation, filtering, and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.
  • the execution device 330 may be configured to delete a region of points and replace them with other points, that is, to perform the following two actions at the same time: (1) removing, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the one or more augmented reality codes in the point cloud stream, where the extent and orientation of the region of points to be removed are encoded in the information; and (2) constructing, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, where the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.
  • the three-dimensional environment modeling device may include the following devices: a scanning device, a detection device, an execution device, and a processing device.
  • the scanning device is used to scan the surrounding environment of the industrial robot, so as to obtain a point cloud stream.
  • the detection device is used to detect one or more augmented reality codes in the surrounding environment.
  • the execution device is configured to operate the point cloud stream based at least on the information carried by the one or more augmented reality codes and the location of the one or more augmented reality codes.
  • the processing device is used to process the operated point cloud stream into a surface grid, so that the surrounding environment can be modeled.
  • the three-dimensional environment modeling method provided by one or more embodiments of the present invention can be implemented by a computer program.
  • the computer program may be stored in a computer storage medium, such as a USB flash drive.
  • running the computer program can execute the three-dimensional environment modeling method of the embodiment of the present invention.
  • multiple embodiments of the present invention provide a three-dimensional environment modeling solution that uses augmented reality codes to accelerate environment reconstruction, and automatically completes and optimizes the portions of the point cloud stream containing the augmented reality codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a three-dimensional environment modeling method. The method comprises: scanning a surrounding environment to obtain a point cloud stream; detecting one or more augmented reality codes in the surrounding environment; and carrying out improvement and optimization operations on the point cloud stream at least on the basis of information carried by the one or more augmented reality codes and the positions of the one or more augmented reality codes, so as to model the surrounding environment. The present invention further relates to a three-dimensional environment modeling device, a computer storage medium, and an industrial robot operating platform.

Description

三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台Three-dimensional environment modeling method and equipment, computer storage medium and industrial robot operating platform 【技术领域】【Technical Field】
本发明涉及三维环境建模方法及设备、计算机存储介质以及工业机器人操作平台。The invention relates to a three-dimensional environment modeling method and equipment, a computer storage medium and an industrial robot operating platform.
【背景技术】【Background technique】
在某些特定的工业应用场景中,尝试使用相机(例如RGB-D相机)来进行环境扫描并利用扫描和记录的数据进行环境重建。但在后续进行环境重建时,通常会涉及大量的人工工作,特别是在后处理期间,用户必须指定x,y或z平面中工作区域的边界,删除场景中在记录时存在但不属于场景的临时三维对象,或填充仅部分扫描的对象。In some specific industrial application scenarios, try to use a camera (such as an RGB-D camera) to scan the environment and use the scanned and recorded data to reconstruct the environment. However, in the subsequent environmental reconstruction, a lot of manual work is usually involved, especially during post-processing, the user must specify the boundary of the working area in the x, y, or z plane, and delete the scenes that existed at the time of recording but did not belong to the scene. Temporary 3D objects, or fill objects that are only partially scanned.
另外,RGB-D相机的深度传感器在捕获诸如玻璃之类的反射表面时会遇到问题,例如重建的三维模型中原为固体表面的部分可能会缺失或存在噪声。In addition, the depth sensor of the RGB-D camera may encounter problems when capturing reflective surfaces such as glass. For example, the part of the reconstructed three-dimensional model that was originally a solid surface may be missing or there may be noise.
因此,期望一种改进的三维环境建模方案。Therefore, an improved three-dimensional environment modeling solution is desired.
【发明内容】[Summary of the invention]
One aspect of the present invention provides a three-dimensional environment modeling method, the method comprising: scanning the surrounding environment to obtain a point cloud stream; detecting one or more augmented reality codes in the surrounding environment; and operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their positions, so as to model the surrounding environment (thereby accelerating modeling and automatically optimizing and refining the point cloud stream operations).

Preferably, in the above three-dimensional environment modeling method, scanning the surrounding environment comprises: scanning the surrounding environment from multiple angles with an RGB-D camera.

Preferably, the aforementioned three-dimensional environment modeling method may further comprise: recording the point cloud stream.

Preferably, in the above three-dimensional environment modeling method, detecting one or more augmented reality codes in the surrounding environment comprises: while scanning the surrounding environment, using an augmented reality tracking library to detect one or more augmented reality codes in the surrounding environment together with their degree-of-freedom poses relative to the RGB-D camera; and recording those poses and registering them into the point cloud coordinate system.

Preferably, in the above three-dimensional environment modeling method, the one or more augmented reality codes are placed on the surface of a target object.

Preferably, in the above three-dimensional environment modeling method, operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their positions comprises: constructing, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, wherein the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.

Preferably, in the above three-dimensional environment modeling method, the geometric structure is a polygonal boundary/wall, a cuboid, a cylinder, or the like.

Preferably, in the above three-dimensional environment modeling method, operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their positions comprises: removing, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the one or more augmented reality codes from the point cloud stream, wherein the extent and orientation of the region of points to be removed are encoded in the information.

Preferably, in the above three-dimensional environment modeling method, operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their positions comprises: performing region operations on the point cloud stream, wherein the region operations include region segmentation, filtering and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.

Another aspect of the present invention provides a three-dimensional environment modeling method for an industrial robot, the method comprising: scanning the surrounding environment of the industrial robot to obtain a point cloud stream; detecting one or more augmented reality codes in the surrounding environment; operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their positions; and processing the operated point cloud stream into a surface mesh, so as to model the surrounding environment (thereby accelerating modeling and automatically optimizing and refining the point cloud stream operations).
Yet another aspect of the present invention provides a three-dimensional environment modeling device, the device comprising: a scanning means for scanning the surrounding environment so as to obtain a point cloud stream; a detection means for detecting one or more augmented reality codes in the surrounding environment; and an execution means for operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their positions, so as to model the surrounding environment.

Preferably, in the above three-dimensional environment modeling device, the scanning means is configured to scan the surrounding environment from multiple angles with an RGB-D camera.

Preferably, the above three-dimensional environment modeling device may further comprise: a recording means for recording the point cloud stream.

Preferably, in the above three-dimensional environment modeling device, the detection means comprises: a first detection unit for detecting, while the surrounding environment is being scanned, one or more augmented reality codes in the surrounding environment together with their degree-of-freedom poses relative to the RGB-D camera using an augmented reality tracking library; and a first recording unit for recording those poses and registering them into the point cloud coordinate system.

Preferably, in the above three-dimensional environment modeling device, the one or more augmented reality codes are placed on the surface of a target object.

Preferably, in the above three-dimensional environment modeling device, the execution means is configured to construct, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, wherein the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.

Preferably, in the above three-dimensional environment modeling device, the geometric structure is a polygonal boundary/wall, a cuboid, a cylinder, or the like.

Preferably, in the above three-dimensional environment modeling device, the execution means is further configured to remove, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the one or more augmented reality codes from the point cloud stream, wherein the extent and orientation of the region of points to be removed are encoded in the information.

Preferably, in the above three-dimensional environment modeling device, the execution means is configured to perform region operations on the point cloud stream, wherein the region operations include region segmentation, filtering and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.

Yet another aspect of the present invention provides a three-dimensional environment modeling device for an industrial robot, the device comprising: a scanning means for scanning the surrounding environment of the industrial robot so as to obtain a point cloud stream; a detection means for detecting one or more augmented reality codes in the surrounding environment; an execution means for operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their positions; and a processing means for processing the operated point cloud stream into a surface mesh, so as to model the surrounding environment.
Yet another aspect of the present invention provides a computer storage medium comprising instructions which, when executed, perform the three-dimensional environment modeling method described above.

Yet another aspect of the present invention provides an industrial robot operating platform comprising the three-dimensional environment modeling device described above.

Compared with existing three-dimensional environment modeling solutions, the solutions described above make use of one or more augmented reality codes in the environment, avoiding or reducing the manual operations involved in processing the data after the environment has been scanned, and are therefore more convenient and efficient. The information carried by an augmented reality code can be customized as needed in order to modify, or perform other operations on, the point cloud or mesh to be reconstructed.
[Brief Description of the Drawings]

The disclosure of the present invention will become easier to understand with reference to the accompanying drawings. As those skilled in the art will readily appreciate, these drawings are for illustrative purposes only and are not intended to limit the scope of protection of the present invention. In the drawings:

Fig. 1 shows a three-dimensional environment modeling method according to an embodiment of the present invention;

Fig. 2 shows a three-dimensional environment modeling method for an industrial robot according to an embodiment of the present invention; and

Fig. 3 shows a three-dimensional environment modeling device according to an embodiment of the present invention.
[Detailed Description of the Embodiments]

The following description sets out specific embodiments of the present invention to teach those skilled in the art how to make and use its best mode. Some conventional aspects have been simplified or omitted in order to teach the principles of the invention. Those skilled in the art will appreciate that variations derived from these embodiments fall within the scope of the present invention, and that the features described below can be combined in various ways to form multiple variations of the invention. The present invention is therefore not limited to the specific embodiments described below, but only by the claims and their equivalents.
Referring to Fig. 1, Fig. 1 shows a three-dimensional environment modeling method 1000 according to an embodiment of the present invention.

In step S110, the surrounding environment is scanned to obtain a point cloud stream.

In step S120, one or more augmented reality codes in the surrounding environment are detected.

In step S130, the point cloud stream is operated on based at least on the information carried by the one or more augmented reality codes and on their positions, so as to model the surrounding environment.
In the context of the present invention, the term "augmented reality code" refers to a marker that triggers a specific operation on the point cloud stream obtained after scanning. An augmented reality code may take the form of a QR code, a Data Matrix, a MaxiCode, or the like; the present invention places no limitation on this. The operations to be performed on the point cloud stream can be encoded or defined in the augmented reality code. In practice, one or more augmented reality codes can be placed in the working area, for example on planes and on objects to be segmented. As one example, such codes may carry the following information: whether an adjacent surface should be included in or deleted from the reconstructed model, or whether certain point cloud operations (such as segmentation) should be performed; moreover, the parameters of those operations can be stored in the code. As a further example, augmented reality codes can be used to mark surfaces that a 3D scanner (for example, an RGB-D camera) cannot detect: a code can define a glass wall, including its dimensions, or an artificial wall can be added to the reconstructed model at the position where the code is placed.
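The disclosure leaves the concrete payload format of an augmented reality code open. As a purely illustrative sketch (the field names and the semicolon-separated encoding below are assumptions, not part of the disclosure), a code payload could be a compact string that names the point cloud operation and its parameters:

```python
from dataclasses import dataclass

@dataclass
class CodeOperation:
    """One point cloud operation decoded from an AR code (hypothetical format)."""
    action: str   # e.g. "add_wall", "remove_box", "segment"
    params: dict  # operation parameters, e.g. dimensions in metres

def parse_payload(payload: str) -> CodeOperation:
    """Parse a payload such as 'add_wall;w=2.0;h=1.5' (assumed format)."""
    fields = payload.split(";")
    params = {}
    for field in fields[1:]:
        key, value = field.split("=")
        params[key] = float(value)
    return CodeOperation(fields[0], params)
```

For instance, `parse_payload("remove_box;x=0.4;y=0.4;z=0.3")` would yield a removal operation with a 0.4 m × 0.4 m × 0.3 m extent; any real implementation would define its own schema.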
The three-dimensional environment modeling solution described above makes use of one or more augmented reality codes in the environment, avoiding or reducing the manual operations involved in processing the data after the environment has been scanned, and is therefore more convenient and efficient.

In one embodiment, step S110 comprises: scanning the surrounding environment from multiple angles with an RGB-D camera. Current RGB-D depth cameras include the Microsoft Kinect, ASUS Xtion, Orbbec, and Intel RealSense, among others; the present invention places no limitation on this. In some embodiments, multiple RGB-D cameras at different positions may be used to scan the surrounding environment from multiple angles; for example, a first RGB-D camera is located at a first position and a second RGB-D camera is located at a second position different from the first.

Although not shown in Fig. 1, in one embodiment the three-dimensional environment modeling method 1000 may further comprise: recording the point cloud stream. This recording step may be located between step S110 and step S120.
In one embodiment, step S120 comprises: while scanning the surrounding environment, using an augmented reality tracking library to detect one or more augmented reality codes in the surrounding environment together with their degree-of-freedom poses relative to the RGB-D camera; and recording those poses and registering them into the point cloud coordinate system. For example, in step S120 an augmented reality code (such as a QR code) and its six-degree-of-freedom pose (and/or other information stored in the augmented reality tracking library) are detected and jointly registered, or calibrated, to the coordinate system of the point cloud stream.
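Registering a code's pose into the point cloud coordinate system amounts to composing rigid transforms: the tracking library reports the code's pose in the camera frame, and the camera's pose in the cloud frame is known from reconstruction. A minimal sketch, assuming 4×4 homogeneous matrices stored as nested row-major lists (the function names are illustrative):

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def register_code_pose(T_world_camera, T_camera_code):
    """Express the AR code pose in the point cloud (world) frame:
    T_world_code = T_world_camera . T_camera_code."""
    return matmul4(T_world_camera, T_camera_code)
```

With the camera at the world origin (identity transform), the code's world pose is simply its camera-frame pose; otherwise the composition carries the code into the shared point cloud frame in which all subsequent operations are defined.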
In one embodiment, the one or more augmented reality codes are stationary relative to the camera (for example, an RGB-D depth camera) and are placed on the surface of a target object. For example, one or more codes can be placed on a glass wall to help the RGB-D depth camera recognize that wall, with the dimensions of the wall also encoded in the codes. As another example, one or more codes can be placed at the edge of a plane so that a boundary of predetermined size is filled into the point cloud stream at their position. As yet another example, one or more codes can be placed on a target object so that the object, or part of it, is removed from the point cloud stream.
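Filling a glass wall of encoded dimensions into the point cloud can be pictured as sampling synthetic points on a plane anchored at the code's pose. The sketch below is one possible realization under stated assumptions: the wall lies in the code's local x–y plane and is centred on the code, and `T_world_code` is a 4×4 homogeneous transform as nested lists; none of this is prescribed by the disclosure.

```python
def synthesize_wall(T_world_code, width, height, step=0.05):
    """Sample points of a width x height plane centred on the AR code,
    lying in the code's local x-y plane, expressed in world coordinates."""
    points = []
    nx = int(width / step) + 1
    ny = int(height / step) + 1
    for i in range(nx):
        for j in range(ny):
            # local coordinates centred on the code, z = 0 on the code plane
            x = -width / 2 + i * step
            y = -height / 2 + j * step
            ph = [x, y, 0.0, 1.0]
            # take only the first three rows: the world x, y, z of the sample
            q = [sum(T_world_code[r][c] * ph[c] for c in range(4))
                 for r in range(3)]
            points.append(q)
    return points
```

The resulting synthetic points would then be appended to the recorded point cloud so the otherwise invisible surface appears in the reconstruction.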
In some embodiments, the positions and patterns of the augmented reality codes may contain information about customized operations to be carried out during post-processing of the recorded point cloud stream. In one embodiment, step S130 comprises: constructing, according to the information carried by the one or more augmented reality codes, a geometric structure at the position of the one or more augmented reality codes, wherein the orientation of the geometric structure relative to the codes and the content of the geometric structure are encoded in the information. The geometric structure may be a polygonal boundary/wall, a cuboid, a cylinder, or the like; the present invention places no limitation on this. In another embodiment, step S130 comprises: removing, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the codes from the point cloud stream, wherein the extent and orientation of the region of points to be removed are encoded in the information. In yet another embodiment, step S130 comprises: performing region operations on the point cloud stream, wherein the region operations include region segmentation, filtering and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes. Those skilled in the art will appreciate that the different embodiments of step S130 described above can be combined as needed. For example, step S130 may delete a pattern of points and replace it with other points, that is, simultaneously perform both of the following: (1) removing, according to the information carried by the one or more augmented reality codes, obstacles or boundaries at or near the position of the codes from the point cloud stream, the extent and orientation of the region of points to be removed being encoded in the information; and (2) constructing, according to that information, a geometric structure at the position of the codes, the orientation of the geometric structure relative to the codes and the content of the geometric structure being encoded in the information.
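The removal operation can be sketched as culling every point that falls inside an oriented box whose half-extents are decoded from the code and whose frame is the code's registered pose. This is an illustrative implementation under assumed conventions (4×4 rigid transforms as nested lists, points as `[x, y, z]`), not the patented method itself:

```python
def invert_rigid(T):
    """Invert a rigid 4x4 transform: [R t; 0 1]^-1 = [R^T -R^T t; 0 1]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [T[i][3] for i in range(3)]
    t_inv = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [t_inv[0]],
            Rt[1] + [t_inv[1]],
            Rt[2] + [t_inv[2]],
            [0.0, 0.0, 0.0, 1.0]]

def remove_box(points, T_world_code, half_extents):
    """Drop every point inside a box centred on the AR code.
    half_extents = (hx, hy, hz), as decoded from the code's information."""
    T_code_world = invert_rigid(T_world_code)
    kept = []
    for p in points:
        ph = [p[0], p[1], p[2], 1.0]
        # express the point in the code's local frame
        q = [sum(T_code_world[r][c] * ph[c] for c in range(4)) for r in range(3)]
        if not all(abs(q[k]) <= half_extents[k] for k in range(3)):
            kept.append(p)
    return kept
```

A production system would operate on the full registered cloud with a spatial index rather than a linear scan, but the geometry is the same.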
Fig. 2 shows a three-dimensional environment modeling method 2000 for an industrial robot according to an embodiment of the present invention.

In step S210, the surrounding environment of the industrial robot is scanned to obtain a point cloud stream.

In step S220, one or more augmented reality codes in the surrounding environment are detected.

In step S230, the point cloud stream is operated on based at least on the information carried by the one or more augmented reality codes and on their positions.

In step S240, the operated point cloud stream is processed into a surface mesh, so as to model the surrounding environment.
In one embodiment, the point cloud stream is acquired and recorded by a handheld RGB-D camera driven by a portable microcomputer. While the area to be recorded and reconstructed is being scanned by the RGB-D camera, the augmented reality codes are detected by the camera's RGB sensor. This can be achieved, for example, by including a tracking library (such as ArUco), which outputs the 6-DOF pose of each augmented reality code relative to the camera. All code poses are recorded and co-registered into the coordinate system of the recorded point cloud.
In one embodiment, after the recording session is completed, the recorded point cloud stream is transferred to a dedicated PC for offline post-processing, or is post-processed on the scanning PC itself. Specifically, the point cloud stream is fed into a state-of-the-art reconstruction algorithm. In one embodiment, noise is first removed from the point cloud stream using the Point Cloud Library (PCL). The point cloud data are then registered and stitched using PCL and an iterative closest point (ICP) algorithm to reconstruct the entire environment. Those skilled in the art will appreciate that various ICP variants may be used, including but not limited to the accurate point-to-plane nearest-point registration method of Chen and Medioni and of Bergevin et al., the fast point-to-projection nearest-point registration method of Rusinkiewicz and Levoy, and the contractive-projection-point nearest-point registration method of Soon-Yong and Murali.
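The noise-removal step is only named above. As an illustrative stand-in (a brute-force sketch of the idea behind PCL's statistical outlier removal, not PCL's actual implementation), one can keep only the points whose mean distance to their k nearest neighbours stays within the mean plus a multiple of the standard deviation of that quantity over the whole cloud:

```python
import math

def mean_knn_distance(points, idx, k):
    """Mean Euclidean distance from points[idx] to its k nearest neighbours."""
    dists = sorted(math.dist(points[idx], q)
                   for i, q in enumerate(points) if i != idx)
    return sum(dists[:k]) / k

def remove_outliers(points, k=3, alpha=1.0):
    """Keep points whose mean k-NN distance is <= mean + alpha * stddev
    of the per-point mean distances (statistical outlier filtering)."""
    means = [mean_knn_distance(points, i, k) for i in range(len(points))]
    mu = sum(means) / len(means)
    sigma = math.sqrt(sum((m - mu) ** 2 for m in means) / len(means))
    return [p for p, m in zip(points, means) if m <= mu + alpha * sigma]
```

The quadratic neighbour search is only for clarity; a real pipeline would use a k-d tree, as PCL does.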
Then, using custom software code (as encoded in the augmented reality codes), the position of each detected code and the information it carries are imported into the processing sequence. Depending on the stored information, parts of the point cloud stream are deleted, defined regions are added to the point cloud, or specific operations are applied to the stream. After the predefined or customized actions or operations have been executed, the resulting enhanced point cloud stream is further processed into a surface mesh.

By simply adding augmented reality codes to the environment, the three-dimensional environment modeling method 2000 described above achieves automatic construction of boundaries or shapes in the three-dimensional model, automatic removal of unwanted surfaces, definition of reflective surfaces, elimination of noise, and specification of three-dimensional starting points for operations on the point cloud (such as filtering and segmentation).
Fig. 3 shows a three-dimensional environment modeling device 3000 according to an embodiment of the present invention.

As shown in Fig. 3, the three-dimensional environment modeling device 3000 comprises a scanning means 310, a detection means 320 and an execution means 330. The scanning means 310 is used to scan the surrounding environment so as to obtain a point cloud stream. The detection means 320 is used to detect one or more augmented reality codes in the surrounding environment. The execution means 330 is used to operate on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their positions, so as to model the surrounding environment.
In the context of the present invention, the term "augmented reality code" refers to a marker that triggers a specific operation on the point cloud stream obtained after scanning. An augmented reality code may take the form of a QR code, a Data Matrix, a MaxiCode, or the like; the present invention places no limitation on this. The operations to be performed on the point cloud stream can be encoded or defined in the augmented reality code. In practice, one or more augmented reality codes can be placed in the working area, for example on planes and on objects to be segmented. As one example, such codes may carry the following information: whether an adjacent surface should be included in or deleted from the reconstructed model, or whether certain point cloud operations (such as segmentation) should be performed; moreover, the parameters of those operations can be stored in the code. As a further example, augmented reality codes can be used to mark surfaces that a 3D scanner (for example, an RGB-D camera) cannot detect: a code can define a glass wall, including its dimensions, or an artificial wall can be added to the reconstructed model at the position where the code is placed.

The three-dimensional environment modeling device 3000 described above makes use of one or more augmented reality codes in the environment, avoiding or reducing the manual operations involved in processing the data after the environment has been scanned, and is therefore more convenient and efficient.
In one embodiment, the scanning means 310 is configured to scan the surrounding environment from multiple angles with an RGB-D camera. Current RGB-D cameras include the Microsoft Kinect, ASUS Xtion, Orbbec, and Intel RealSense, among others; the present invention places no limitation on this. In some embodiments, the scanning means 310 may use multiple RGB-D cameras at different positions to scan the surrounding environment from multiple angles; for example, a first RGB-D camera is located at a first position and a second RGB-D camera is located at a second position different from the first.

Although not shown in Fig. 3, the three-dimensional environment modeling device 3000 may further comprise a recording means for recording the point cloud stream.

In one embodiment, the detection means 320 may comprise a first detection unit and a first recording unit. The first detection unit is used to detect, while the surrounding environment is being scanned, one or more augmented reality codes in the surrounding environment together with their degree-of-freedom poses relative to the RGB-D camera using an augmented reality tracking library. The first recording unit is used to record those poses and register them into the point cloud coordinate system. In some embodiments, the detection means 320 may be configured to detect an augmented reality code (such as a QR code) and its six-degree-of-freedom pose (and/or other information stored in the augmented reality tracking library) and to jointly register, or calibrate, them to the coordinate system of the point cloud stream.
In one embodiment, the one or more augmented reality codes are stationary relative to the camera (for example, an RGB-D depth camera) and are placed on the surface of a target object. For example, one or more codes can be placed on a glass wall to help the RGB-D depth camera recognize that wall, with the dimensions of the wall also encoded in the codes. As another example, one or more codes can be placed at the edge of a plane so that a boundary of predetermined size is filled into the point cloud stream at their position. As yet another example, one or more codes can be placed on a target object so that the object, or part of it, is removed from the point cloud stream.

In some embodiments, the positions and patterns of the augmented reality codes may contain information about customized operations to be carried out during post-processing of the recorded point cloud stream.
在一个实施例中,执行装置330配置成根据所述一个或多个增强现实代码所携带的信息,在所述一个或多个增强现实代码的位置处构造几何结构,其中所述几何结构相对于所述一个或多个增强现实代码的取向和所述几何结构的内容被编码在所述信息中。该几何结构可以是多边形边界/墙、长方体或圆柱体等等,本发明对此不作限定。In one embodiment, the execution device 330 is configured to construct a geometric structure at the location of the one or more augmented reality codes according to the information carried by the one or more augmented reality codes, wherein the geometric structure is relative to The orientation of the one or more augmented reality codes and the content of the geometric structure are encoded in the information. The geometric structure may be a polygonal boundary/wall, a rectangular parallelepiped or a cylinder, etc., which is not limited in the present invention.
In another embodiment, the execution device 330 is further configured to remove, from the point cloud stream, obstacles or boundaries at or near the location of the one or more augmented reality codes according to the information they carry, where the extent and orientation of the region of points to be removed are encoded in that information.
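A sketch of such a removal, under the assumption that the encoded extent describes a box in the code's local frame: points are mapped into that frame and those falling inside the box are dropped. The box-shaped region is an assumption for illustration; the patent leaves the region's shape open.

```python
import numpy as np

def remove_region(points, T_cloud_code, extent):
    # Map points into the code's local frame and delete those inside a box
    # of the encoded extent, centred on the code. The box's orientation is
    # the code's orientation, as the encoded information specifies.
    T_code_cloud = np.linalg.inv(T_cloud_code)
    homo = np.hstack([points, np.ones((len(points), 1))])
    local = (T_code_cloud @ homo.T)[:3].T
    inside = np.all(np.abs(local) <= np.asarray(extent) / 2, axis=1)
    return points[~inside]

cloud = np.array([[0.0, 0.0, 0.0],   # obstacle at the code's position
                  [5.0, 0.0, 0.0]])  # unrelated point, far away
kept = remove_region(cloud, np.eye(4), extent=(1.0, 1.0, 1.0))
print(len(kept))  # → 1
```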
In yet another embodiment, the execution device 330 is configured to perform region operations on the point cloud stream, where the region operations include region segmentation, filtering and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.
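A toy illustration of one such region operation — single-linkage Euclidean clustering, with the distance tolerance playing the role of a parameter decoded from the code's payload. A production system would use an optimized neighbor search (e.g. a k-d tree); this O(n²) version exists only to make the idea concrete.

```python
import numpy as np

def euclidean_cluster(points, tolerance):
    # Greedy single-linkage clustering: points closer than `tolerance`
    # end up with the same integer label.
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((dists < tolerance) & (labels == -1))[0]:
                labels[k] = current
                stack.append(k)
        current += 1
    return labels

# `tolerance` as it might be decoded from the code's information.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [5.0, 0.0, 0.0]])
labels = euclidean_cluster(pts, tolerance=0.5)
print(labels)  # → [0 0 1]
```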
Those skilled in the art will understand that the different embodiments of the execution device 330 described above can be combined as required. For example, the execution device 330 may be configured to delete a pattern of points and replace it with other points, that is, to perform the following two actions together: (1) according to the information carried by the one or more augmented reality codes, remove from the point cloud stream obstacles or boundaries at or near the location of those codes, where the extent and orientation of the region of points to be removed are encoded in that information; and (2) according to the same information, construct a geometric structure at the location of the codes, where the orientation of the structure relative to the codes and the content of the structure are encoded in that information.
In some embodiments, for example when three-dimensional environment modeling is applied to an industrial robot, the three-dimensional environment modeling device may include a scanning device, a detection device, an execution device and a processing device. Specifically, the scanning device scans the surrounding environment of the industrial robot to obtain a point cloud stream. The detection device detects one or more augmented reality codes in the surrounding environment. The execution device operates on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their locations. The processing device processes the operated point cloud stream into a surface mesh, so that the surrounding environment is modeled.
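The four devices read naturally as stages of one pipeline: scan → detect codes → operate on the cloud → mesh the result. The sketch below wires them together with toy stand-ins; the stage signatures are assumptions for illustration, not the patent's interfaces.

```python
import numpy as np

def model_environment(scan, detect, operate, mesh):
    # Chain the four stages described above.
    cloud = scan()
    codes = detect(cloud)
    cloud = operate(cloud, codes)
    return mesh(cloud)

# Toy stand-ins for each stage: a fixed cloud, one detected code with an
# identity pose and empty payload, a pass-through operation, and a trivial
# "mesh" that just wraps the vertices.
result = model_environment(
    scan=lambda: np.zeros((10, 3)),
    detect=lambda cloud: [{"pose": np.eye(4), "info": {}}],
    operate=lambda cloud, codes: cloud,
    mesh=lambda cloud: {"vertices": cloud, "faces": []},
)
print(len(result["vertices"]))  # → 10
```

In a real deployment, `operate` is where the code-driven fills, removals, and region operations of the preceding paragraphs would run, and `mesh` would be a surface reconstruction step.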
Those skilled in the art will readily appreciate that the three-dimensional environment modeling method provided by one or more embodiments of the present invention can be implemented by a computer program. For example, when a computer storage medium (such as a USB flash drive) storing the computer program is connected to a computer, running the program executes the three-dimensional environment modeling method of the embodiments of the present invention.
In summary, the embodiments of the present invention provide a three-dimensional environment modeling solution that uses augmented reality codes to accelerate environment reconstruction and automatically completes and optimizes the parts of the point cloud stream that contain augmented reality codes. Although only some specific embodiments of the present invention have been described, those of ordinary skill in the art will appreciate that the present invention can be implemented in many other forms without departing from its spirit and scope, for example on an industrial robot operating platform. The examples and implementations shown are therefore to be regarded as illustrative rather than restrictive, and the present invention may cover various modifications and replacements without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (22)

  1. A three-dimensional environment modeling method, the method comprising:
    scanning a surrounding environment to obtain a point cloud stream;
    detecting one or more augmented reality codes in the surrounding environment; and
    operating on the point cloud stream based at least on information carried by the one or more augmented reality codes and on the locations of the one or more augmented reality codes, so as to model the surrounding environment.
  2. The three-dimensional environment modeling method according to claim 1, wherein scanning the surrounding environment comprises:
    scanning the surrounding environment from multiple angles with an RGB-D camera.
  3. The three-dimensional environment modeling method according to claim 1, further comprising:
    recording the point cloud stream.
  4. The three-dimensional environment modeling method according to claim 2, wherein detecting one or more augmented reality codes in the surrounding environment comprises:
    while scanning the surrounding environment, using an augmented reality tracking library to detect one or more augmented reality codes in the surrounding environment and their degree-of-freedom poses relative to the RGB-D camera; and
    recording the degree-of-freedom poses and registering them into the point cloud coordinate system.
  5. The three-dimensional environment modeling method according to claim 1, wherein the one or more augmented reality codes are placed on a surface of a target object.
  6. The three-dimensional environment modeling method according to claim 1, wherein operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their locations comprises:
    constructing a geometric structure at the location of the one or more augmented reality codes according to the information carried by them, wherein the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.
  7. The three-dimensional environment modeling method according to claim 6, wherein the geometric structure is a polygonal boundary/wall, a cuboid or a cylinder.
  8. The three-dimensional environment modeling method according to claim 1 or 6, wherein operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their locations comprises:
    removing, from the point cloud stream, obstacles or boundaries at or near the location of the one or more augmented reality codes according to the information carried by them, wherein the extent and orientation of the region of points to be removed are encoded in the information.
  9. The three-dimensional environment modeling method according to claim 5, wherein operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their locations comprises:
    performing region operations on the point cloud stream, wherein the region operations include region segmentation, filtering and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.
  10. A three-dimensional environment modeling method, the method comprising:
    scanning the surrounding environment of an industrial robot to obtain a point cloud stream;
    detecting one or more augmented reality codes in the surrounding environment;
    operating on the point cloud stream based at least on information carried by the one or more augmented reality codes and on the locations of the one or more augmented reality codes; and
    processing the operated point cloud stream into a surface mesh, so that the surrounding environment is modeled.
  11. A three-dimensional environment modeling device, the device comprising:
    a scanning device for scanning a surrounding environment to obtain a point cloud stream;
    a detection device for detecting one or more augmented reality codes in the surrounding environment; and
    an execution device for operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their locations, so as to model the surrounding environment.
  12. The three-dimensional environment modeling device according to claim 11, wherein the scanning device is configured to scan the surrounding environment from multiple angles with an RGB-D camera.
  13. The three-dimensional environment modeling device according to claim 11, further comprising:
    a recording device for recording the point cloud stream.
  14. The three-dimensional environment modeling device according to claim 12, wherein the detection device comprises:
    a first detection unit for using an augmented reality tracking library, while the surrounding environment is being scanned, to detect one or more augmented reality codes in the surrounding environment and their degree-of-freedom poses relative to the RGB-D camera; and
    a first recording unit for recording the degree-of-freedom poses and registering them into the point cloud coordinate system.
  15. The three-dimensional environment modeling device according to claim 11, wherein the one or more augmented reality codes are placed on a surface of a target object.
  16. The three-dimensional environment modeling device according to claim 11, wherein the execution device is configured to construct a geometric structure at the location of the one or more augmented reality codes according to the information carried by them, wherein the orientation of the geometric structure relative to the one or more augmented reality codes and the content of the geometric structure are encoded in the information.
  17. The three-dimensional environment modeling device according to claim 16, wherein the geometric structure is a polygonal boundary/wall, a cuboid or a cylinder.
  18. The three-dimensional environment modeling device according to claim 11 or 16, wherein the execution device is further configured to remove, from the point cloud stream, obstacles or boundaries at or near the location of the one or more augmented reality codes according to the information carried by them, wherein the extent and orientation of the region of points to be removed are encoded in the information.
  19. The three-dimensional environment modeling device according to claim 15, wherein the execution device is configured to perform region operations on the point cloud stream, wherein the region operations include region segmentation, filtering and/or clustering of the target object, and the parameters of the operations are defined in the information carried by the one or more augmented reality codes.
  20. A three-dimensional environment modeling device for an industrial robot, the device comprising:
    a scanning device for scanning the surrounding environment of the industrial robot to obtain a point cloud stream;
    a detection device for detecting one or more augmented reality codes in the surrounding environment;
    an execution device for operating on the point cloud stream based at least on the information carried by the one or more augmented reality codes and on their locations; and
    a processing device for processing the operated point cloud stream into a surface mesh, so that the surrounding environment is modeled.
  21. A computer storage medium comprising instructions which, when run, perform the three-dimensional environment modeling method according to any one of claims 1 to 10.
  22. An industrial robot operating platform comprising the three-dimensional environment modeling device according to any one of claims 11 to 20.

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202080098898.4A CN115244581A (en) 2020-03-23 2020-03-23 Three-dimensional environment modeling method and equipment, computer storage medium and industrial robot operating platform
PCT/CN2020/080684 WO2021189194A1 (en) 2020-03-23 2020-03-23 Three-dimensional environment modeling method and device, computer storage medium, and industrial robot operating platform
DE112020006265.1T DE112020006265T5 (en) 2020-03-23 2020-03-23 Method and device for modeling a three-dimensional environment, computer storage medium and work platform for industrial robots

Publications (1)

Publication Number Publication Date
WO2021189194A1 true WO2021189194A1 (en) 2021-09-30

Family

ID=77890791

Country Status (3)

Country Link
CN (1) CN115244581A (en)
DE (1) DE112020006265T5 (en)
WO (1) WO2021189194A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279786A (en) * 2014-07-03 2016-01-27 顾海松 Method and system for obtaining object three-dimensional model
US20170193700A1 (en) * 2016-01-04 2017-07-06 Electronics And Telecommunications Research Institute Providing apparatus for augmented reality service, display apparatus and providing system for augmented reality service comprising the same
CN107992643A (en) * 2017-10-25 2018-05-04 刘界鹏 It is a kind of to be produced and installation accuracy control technique with the construction industry structure component calculated based on 3-D scanning cloud data and artificial intelligence identification
CN108492356A (en) * 2017-02-13 2018-09-04 苏州宝时得电动工具有限公司 Augmented reality system and its control method
CN110163968A (en) * 2019-05-28 2019-08-23 山东大学 RGBD camera large-scale three dimensional scenario building method and system
CN110378995A (en) * 2019-05-29 2019-10-25 中德(珠海)人工智能研究院有限公司 A method of three-dimensional space modeling is carried out using projection feature


Also Published As

Publication number Publication date
DE112020006265T5 (en) 2022-10-13
CN115244581A (en) 2022-10-25


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20926882; Country of ref document: EP; Kind code of ref document: A1)
122 EP: PCT application non-entry in European phase (Ref document number: 20926882; Country of ref document: EP; Kind code of ref document: A1)