WO2020258314A1 - Cutting method, apparatus and system for point cloud model - Google Patents

Cutting method, apparatus and system for point cloud model

Info

Publication number
WO2020258314A1
WO2020258314A1 PCT/CN2019/093894 CN2019093894W WO2020258314A1 WO 2020258314 A1 WO2020258314 A1 WO 2020258314A1 CN 2019093894 W CN2019093894 W CN 2019093894W WO 2020258314 A1 WO2020258314 A1 WO 2020258314A1
Authority
WO
WIPO (PCT)
Prior art keywords
cutting
point cloud
cutting window
target object
window
Prior art date
Application number
PCT/CN2019/093894
Other languages
English (en)
French (fr)
Inventor
王海峰
费涛
Original Assignee
西门子(中国)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西门子(中国)有限公司 filed Critical 西门子(中国)有限公司
Priority to EP19935005.9A priority Critical patent/EP3971829B1/en
Priority to CN201980096739.8A priority patent/CN113906474A/zh
Priority to US17/620,790 priority patent/US11869143B2/en
Priority to PCT/CN2019/093894 priority patent/WO2020258314A1/zh
Publication of WO2020258314A1 publication Critical patent/WO2020258314A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/56Particle system, point based geometry or rendering

Definitions

  • the present invention relates to the field of modeling, in particular to a method, device and system for cutting a point cloud model.
  • RGB-D cameras can provide point cloud images with depth information, which means that many real-world scenes can be captured by RGB-D cameras, such as digital twins in autonomous factories and environment awareness in autonomous robots; point cloud images from RGB-D cameras have therefore been studied intensively.
  • for end users who have access to different types of RGB-D cameras, RGB-D cameras can help them easily obtain color and depth information.
  • however, end users have only limited techniques and tools for pruning and editing the 3D data captured by RGB-D cameras.
  • Point cloud cutting, as the main method of processing point cloud images, is very important in the field of computer vision, because it can select targets from point cloud images for many secondary tasks.
  • the secondary tasks include point cloud registration, point cloud positioning, and robot grasping.
  • the sensor itself can bring a lot of noise.
  • the raw point cloud image includes many irrelevant point clouds belonging to the background or to other targets. Such irrelevant point clouds greatly hinder the cutting of 3D models, so a practical 3D point cloud cutting tool must be used.
  • the traditional point cloud cutting tool cannot allow the user to select a specific depth of the point cloud image from a certain viewpoint, which means that it is not always possible to select the point cloud that the user wants.
  • traditional point cloud cutting tools rely on user interfaces without semantic functions, which results in users not being able to automatically select the target they want.
  • semi-automatic or automatic methods for pruning and extracting 3D point cloud data to obtain the geometric features of a target remain major open problems for 3D reconstruction and 3D robot cognition.
  • the prior art also provides several manual methods or tools, such as CloudCompare3D, which can assist users in selecting and trimming 3D point cloud data.
  • such software does not support semi-automatic or automatic methods.
  • in addition, the CloudCompare3D mechanism lacks depth information when the user selects or trims 3D point cloud data.
  • the prior art also provides some 3D point cloud cutting mechanisms, which provide a system including an optical camera data processing unit, which can obtain a 3D point cloud scene including a target object.
  • using an interactive input device, the user can input a seed, where the seed indicates the location of the target object.
  • the segmentation method generates segmented point clouds corresponding to the target object by trimming the 3D point cloud based on the position reference input by the user.
  • the first aspect of the present invention provides a point cloud model cutting method, which includes the following steps: S1, using a two-dimensional first cutting window to select a point cloud structure including a target object from a point cloud model, the first cutting window having a length and a width; S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; S3, identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selecting the third cutting window with the largest volume ratio and judging that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
  • step S3 further includes the following steps: S31, calculating the number k of the third cutting windows; S32, randomly selecting k points in all the point cloud structures in the second cutting window as centroids, then taking these seed centroids as cluster centers, calculating the distances from all other points in the point cloud structures to the centroids, and assigning every other point to its nearest centroid to form a cluster; steps S31 and S32 are performed iteratively until the positions of the k centroids no longer change; S33, identifying and marking all point cloud structures in the second cutting window to form k three-dimensional third cutting windows, where the k third cutting windows include the k clusters.
  • step S3 further includes the following steps: using the data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  • the second aspect of the present invention provides a point cloud model cutting system, including: a processor; and a memory coupled with the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions including: S1, using a two-dimensional first cutting window to select a point cloud structure including a target object from a point cloud model, the first cutting window having a length and a width; S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; S3, identifying and marking all the point cloud structures in the second cutting window to form multiple three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selecting the third cutting window with the largest volume ratio and judging that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window and the second cutting window is smaller than the first cutting window.
  • the action S3 further includes: S31, calculating the number k of the third cutting windows; S32, randomly selecting k points in all the point cloud structures in the second cutting window as centroids, then taking these seed centroids as cluster centers, calculating the distances from all other points in the point cloud structures to the centroids, and assigning every other point to its nearest centroid to form a cluster; steps S31 and S32 are performed iteratively until the positions of the k centroids no longer change; S33, identifying and marking all point cloud structures in the second cutting window to form k three-dimensional third cutting windows, where the k third cutting windows include the k clusters.
  • the action S3 further includes: using the data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  • the third aspect of the present invention provides a point cloud model cutting device, which includes: a first cutting device, which uses a two-dimensional first cutting window to select a point cloud structure including a target object from a point cloud model, the first cutting window having a length and a width; a depth adjusting device, which adjusts the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; a second cutting device, which identifies and marks all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; and a computing device, which calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio and judges that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
  • the second cutting device is also used to calculate the number k of the third cutting windows, randomly select k points in all the point cloud structures in the second cutting window as centroids, then take these seed centroids as cluster centers, calculate the distances from all other points in the point cloud structures to the centroids, and assign every other point to its nearest centroid to form a cluster until the positions of the k centroids no longer change, and to identify and mark all point cloud structures in the second cutting window to form k three-dimensional third cutting windows, where the k third cutting windows include the k clusters.
  • the second cutting device is also used to train a sample of the target object using the data set of the target object, and to use the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  • the fourth aspect of the present invention provides a computer program product, which is tangibly stored on a computer-readable medium and includes computer-executable instructions, which when executed, cause at least one processor to execute The method described in the first aspect of the present invention.
  • the fifth aspect of the present invention provides a computer-readable medium on which computer-executable instructions are stored, and when executed, the computer-executable instructions cause at least one processor to perform the method according to the first aspect of the present invention.
  • the cutting mechanism of the point cloud model provided by the present invention takes into account the depth information of the 3D point cloud model that is otherwise ignored, and the present invention can automatically filter out the target object and deliver it to the client.
  • the present invention uses clustering methods and deep learning methods to screen target objects.
  • Fig. 1 is an architecture diagram of a point cloud model cutting system according to a specific embodiment of the present invention
  • FIG. 2 is a schematic diagram of a point cloud model of a point cloud model cutting mechanism and a first cutting window according to a specific embodiment of the present invention
  • FIG. 3 is a schematic diagram of a target object of a point cloud model and a second cutting window of a point cloud model cutting mechanism according to a specific embodiment of the present invention
  • FIG. 4 is a schematic diagram of a target object of a point cloud model and a third cutting window of the point cloud model cutting mechanism according to a specific embodiment of the present invention
  • Fig. 5 is a schematic diagram of a clustering manner of a cutting mechanism of a point cloud model according to a specific embodiment of the present invention.
  • the present invention provides a point cloud model cutting mechanism, which uses a three-dimensional cutting window to accurately lock the target object in the point cloud model, and uses the volume ratio to select the target object.
  • the point cloud model cutting system includes software modules and hardware devices.
  • the hardware devices include screen S and computing device D.
  • the screen S has a hardware interface to the computing device D, such as an HDMI or VGA port; it can display the graphical data of the computing device D and presents the data to the client C.
  • the computing device D has hardware interfaces with the screen S, the mouse M, and the keyboard K, and it has the computing ability to download the point cloud model or the point cloud structure.
  • the mouse M and the keyboard K are the input devices of the client C, and the computing device D can display data to the client C through the screen S.
  • the software module includes a first cutting device 110, a downloading device 120, a depth adjusting device 130, a generating device 140, a second cutting device 150, a calculating device 160, and recommending devices 170 and 180.
  • the download device 120 is used to download a large amount of data of a point cloud model 200 and display the point cloud model 200 on the screen S, and the first cutting device 110 uses a two-dimensional first cutting window to select, from the point cloud model, the point cloud structure that includes the target object.
  • the depth adjusting device 130 adjusts the depth of the first cutting window to form a three-dimensional second cutting window.
  • the generating device 140 receives the configuration and parameters of the downloading device 120 and the depth adjusting device 130, and generates, based on the user's input, the second cutting window serving as a bounding box.
  • the second cutting device 150 recognizes and marks all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  • the calculating device 160 calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio, and determines that the point cloud structure in that third cutting window is the target object.
  • the first aspect of the present invention provides a method for cutting a point cloud model, which includes the following steps:
  • step S1 is executed.
  • the first cutting device 110 uses a two-dimensional first cutting window to select a point cloud structure including a target object from a point cloud model, the first cutting window having a length and a width.
  • the download device 120 is used to download a large amount of data of a point cloud model 200 and display the point cloud model 200 on the screen S.
  • the point cloud model 200 includes a first cylinder 210, a second cylinder 220, a first cube 230, a second cube 240, and other redundant point cloud structures (not shown).
  • the first cylinder 210, the second cylinder 220, the first cube 230, and the second cube 240 are all point cloud structures.
  • the target object is the first cylinder 210.
  • the first cylinder 210, the second cylinder 220, the first cube 230 and the second cube 240 shown in FIG. 2 are all point cloud structures, and all the objects shown in FIGS. 3 and 4 are also point cloud structures; for convenience and brevity of description, the point-cloud rendering is omitted. That is, the first cylinder 210 and the part of the second cylinder 220 in FIG. 3 are also point cloud structures, and the first cylinder 210, the first redundant point cloud structure 250 and the second redundant point cloud structure 260 in FIG. 4 are also point cloud structures.
  • the first cutting device 110 obtains the position of the mouse M relative to the screen S, input by the user C through the keyboard K and the mouse M, and generates a rectangular first cutting window W1.
  • the first cutting window W1 is two-dimensional: it has a length l and a width h but no depth.
  • the first cylinder 210 serving as the target object is accommodated in the first cutting window W1.
  • the first cutting window W1 also contains some redundant point cloud structures (not shown).
  • then step S2 is executed: the depth adjusting device 130 adjusts the depth of the first cutting window; the length, width and depth of the first cutting window constitute a three-dimensional second cutting window, and the target object is located in the second cutting window.
  • the depth adjusting device 130 automatically generates a sliding bar (not shown) for the user, and the user C slides the sliding bar on the screen S through the mouse M to input the desired depth.
  • the slider can display two end values of minimum depth and maximum depth for users to choose.
  • the depth indicated by the slide bar before the user input is d'; after the user inputs the desired depth, the depth is adjusted from d' to d.
  • the length l, width h and depth d of the first cutting window constitute a three-dimensional second cutting window W2, and the first cylinder 210 serving as the target object is located in the second cutting window W2.
  • a part of the second cylinder 220 was originally inside the cutting window; through the adjustment of the depth, that part of the second cylinder 220 is no longer accommodated in the second cutting window W2.
  • the user C can also use the mouse M to switch the field of view and viewing angle of the point cloud model 200 displayed on the screen S. Comparing FIG. 3 and FIG. 4, the field of view and viewing angle of the point cloud model 200 are different, and by adjusting them the depth of the second cutting window W2 can be adjusted more precisely.
  • after the user C selects a satisfactory depth, the generating device 140 receives the configuration and parameters of the downloading device 120 and the depth adjusting device 130, and generates, based on the user's input, the second cutting window W2 serving as a bounding box.
  • step S3 the second cutting device 150 identifies and marks all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows .
  • Step S3 can be implemented in a variety of ways, such as a clustering method, a deep learning method, or a supervoxel clustering method.
  • step S3 further includes sub-step S31, sub-step S32, and sub-step S33.
  • the number k of the third cutting windows is calculated.
  • the second cutting window W2 is only a coarse selection: it accommodates the first cylinder 210 serving as the target object as well as other redundant point cloud structures, including the first redundant point cloud structure 250 and the second redundant point cloud structure 260.
  • each point cloud structure in the second cutting window W2 is accommodated by a third cutting window: the first cylinder 210 is in the third cutting window W31, the first redundant point cloud structure 250 is in the third cutting window W32, and the second redundant point cloud structure 260 is in the third cutting window W33, so k = 3.
  • in sub-step S32, k points in all the point cloud structures in the second cutting window are randomly selected as centroids; then, taking these seed centroids as cluster centers, the distances from all other points in the point cloud structures to the centroids are calculated, and every other point is assigned to its nearest centroid to form a cluster.
  • Steps S31 and S32 are performed iteratively until the positions of the k centroids no longer change.
  • step S33 is executed to identify and mark all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, where the k third cutting windows include k clusters.
  • Figure 5 shows the principle of the clustering method.
  • Figure 5 includes multiple point cloud structures, among which three points are selected as centroids: the first centroid z1, the second centroid z2 and the third centroid z3. Taking z1, z2 and z3 as cluster centers, the distances from all other points of the point cloud structures in Figure 5 to z1, z2 and z3 are calculated.
  • taking the third centroid z3 as an example, its distances to the points are d1, d2, ..., dn; the points whose nearest centroid is z3 are assigned to z3 and form one cluster together with it.
  • in the same way, the first centroid z1 and the second centroid z2 each form a cluster, giving three clusters in total.
  • any three points in all the point cloud structures in FIG. 4 are randomly selected as centroids, and three clusters can be separated according to the above clustering principle: a first cluster for the first cylinder 210 serving as the target object, a second cluster for the first redundant point cloud structure 250, and a third cluster for the second redundant point cloud structure 260.
  • according to the deep learning approach, the step S3 further includes the following steps: using the data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all point cloud structures in the second cutting window to form multiple three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  • according to the supervoxel clustering approach, the number of third cutting windows is calculated, and then k points in all the point cloud structures in the second cutting window are randomly selected as seed voxels; LCCP is used to mark the concave-convex relationships between different surfaces, so that points in the second cutting window that have no intersecting region are divided into different supervoxel clusters, thereby identifying and marking all point cloud structures in the second cutting window to form k three-dimensional third cutting windows, where the k third cutting windows include the k clusters.
  • supervoxel clustering first selects a few seed voxels and then performs region growing to obtain large point cloud clusters, the points inside each cluster being similar. Because supervoxel clustering is an over-segmentation, adjacent objects may be gathered together and segmented into a single object; LCCP therefore marks the concave-convex relationships of the different surfaces, after which a region-growing algorithm clusters the small regions into large objects. This region growing is constrained by convexity, that is, a region is only allowed to grow across convex edges. Since the first cylinder 210 and the first redundant point cloud structure 250 in the figure have no intersecting region, supervoxel clustering alone can divide them into different point clouds.
  • a plurality of data related to the first cylinder 210 (for example, a CAD model of the first cylinder 210) will be used as a data set to train a sample of the first cylinder 210, and the sample will be used to compare against the first cylinder 210, so as to identify and mark all point cloud structures in the second cutting window and form multiple three-dimensional third cutting windows, where the first cylinder 210 is located in one of the third cutting windows.
  • Object recognition is based on the user knowing what the target object to be found is; the target object has a boundary and differs greatly from the other impurities, so a binary classification is performed to distinguish the target object from the impurities.
  • a large amount of data is used to train the features of the object, and each sample has a corresponding label; during the training process, the results are continuously compared with the labels to reduce the error.
  • step S4 is executed.
  • the calculation device 160 calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio, and determines that the point cloud structure in that third cutting window is the target object.
  • the calculation device 160 calculates the volumes of the first cylinder 210, the first redundant point cloud structure 250 and the second redundant point cloud structure 260 respectively.
  • suppose the volume of the first cylinder 210 in the third cutting window W31 is V1, the volume of the first redundant point cloud structure 250 in the third cutting window W32 is V2, and the volume of the second redundant point cloud structure 260 in the third cutting window W33 is V3.
  • the volume of the second cutting window W2 is V, so the volume ratio of the first cylinder 210 is V1/V, and the volume ratios of the first redundant point cloud structure 250 and the second redundant point cloud structure 260 are V2/V and V3/V respectively. When V1/V > V2/V and V1/V > V3/V,
  • the third cutting window with the largest volume ratio is determined and the point cloud structure in the third cutting window is determined to be the target object, that is, the first cylinder 210 is the target object.
  • the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window. Therefore, the recommendation device 170 recommends the recommended target object 180 to the client C; the target object is the first cylinder 210.
  • the second aspect of the present invention provides a point cloud model cutting system, including: a processor; and a memory coupled with the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions including: S1, using a two-dimensional first cutting window to select a point cloud structure including a target object from a point cloud model, the first cutting window having a length and a width; S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; S3, identifying and marking all the point cloud structures in the second cutting window to form multiple three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selecting the third cutting window with the largest volume ratio and judging that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window and the second cutting window is smaller than the first cutting window.
  • the action S3 further includes: S31, calculating the number k of the third cutting windows; S32, randomly selecting k points in all the point cloud structures in the second cutting window as centroids, then taking these seed centroids as cluster centers, calculating the distances from all other points in the point cloud structures to the centroids, and assigning every other point to its nearest centroid to form a cluster; steps S31 and S32 are performed iteratively until the positions of the k centroids no longer change; S33, identifying and marking all point cloud structures in the second cutting window to form k three-dimensional third cutting windows, where the k third cutting windows include the k clusters.
  • the action S3 further includes: using the data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  • the third aspect of the present invention provides a point cloud model cutting device, which includes: a first cutting device, which uses a two-dimensional first cutting window to select a point cloud structure including a target object from a point cloud model, the first cutting window having a length and a width; a depth adjusting device, which adjusts the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; a second cutting device, which identifies and marks all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; and a computing device, which calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio and judges that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
  • the second cutting device is also used to calculate the number k of the third cutting windows, randomly select k points in all the point cloud structures in the second cutting window as centroids, then take these seed centroids as cluster centers, calculate the distances from all other points in the point cloud structures to the centroids, and assign every other point to its nearest centroid to form a cluster until the positions of the k centroids no longer change, and to identify and mark all point cloud structures in the second cutting window to form k three-dimensional third cutting windows, where the k third cutting windows include the k clusters.
  • the second cutting device is also used to train a sample of the target object using the data set of the target object, and to use the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  • the fourth aspect of the present invention provides a computer program product, which is tangibly stored on a computer-readable medium and includes computer-executable instructions, which when executed, cause at least one processor to execute The method described in the first aspect of the present invention.
  • the fifth aspect of the present invention provides a computer-readable medium on which computer-executable instructions are stored, and when executed, the computer-executable instructions cause at least one processor to perform the method according to the first aspect of the present invention.
  • the cutting mechanism of the point cloud model provided by the present invention takes into account the depth information of the 3D point cloud model that is otherwise ignored, and the present invention can automatically filter out the target object and deliver it to the client.
  • the present invention uses clustering methods and deep learning methods to screen target objects.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The present invention provides a cutting method, apparatus and system for a point cloud model, the method including the following steps: S1, using a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object; S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; S3, identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, and selecting the third cutting window with the largest volume ratio. The cutting mechanism for a point cloud model provided by the present invention takes into account the depth information of the 3D point cloud model that is otherwise ignored, and the present invention can automatically filter out the target object and deliver it to the client.

Description

Cutting method, apparatus and system for point cloud model
Technical field
The present invention relates to the field of modeling, and in particular to a cutting method, apparatus and system for a point cloud model.
Background
The application of RGB-D cameras is now receiving more and more attention. Compared with RGB cameras, RGB-D cameras can provide point cloud images with depth information, which means that many real-world scenes can be captured by RGB-D cameras, for example digital twins in autonomous factories and environment awareness in autonomous robots; point cloud images from RGB-D cameras have therefore been studied intensively.
For end users who have access to different types of RGB-D cameras, RGB-D cameras can help them easily obtain color and depth information. However, end users have only limited techniques and tools for pruning and editing the 3D data captured by RGB-D cameras.
Point cloud cutting, as the main way of processing point cloud images, is very important in the field of computer vision, because it can select targets from point cloud images for many secondary tasks, including point cloud registration, point cloud localization and robotic grasping. However, the sensor itself introduces a great deal of noise. In addition, a raw point cloud image contains many irrelevant point clouds belonging to the background or to other targets. Such irrelevant point clouds greatly hinder the cutting of 3D models, so a practical 3D point cloud cutting tool must be used. Traditional point cloud cutting tools do not allow the user to select a specific depth of the point cloud image from a given viewpoint, which means that the point cloud the user wants cannot always be selected. Moreover, traditional point cloud cutting tools rely on user interfaces without semantic functions, so users cannot automatically select the targets they want.
Until now, semi-automatic or automatic methods for pruning and extracting 3D point cloud data in order to obtain the geometric features of a target have remained major open problems for 3D reconstruction and 3D robot cognition. The prior art also provides several manual methods or tools, such as CloudCompare3D, which can assist users in selecting and trimming 3D point cloud data. However, such software does not support semi-automatic or automatic methods. In addition, the CloudCompare3D mechanism lacks depth information when the user selects or trims 3D point cloud data.
The prior art also provides some 3D point cloud cutting mechanisms, which provide a system including an optical camera data processing unit that can obtain a 3D point cloud scene containing a target object. Using an interactive input device, the user can input a seed, the seed indicating the location of the target object. Finally, the segmentation method trims the 3D point cloud based on the position reference input by the user, producing a segmented point cloud corresponding to the target object.
Summary of the invention
A first aspect of the present invention provides a cutting method for a point cloud model, comprising the following steps: S1, using a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object, the first cutting window having a length and a width; S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; S3, identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selecting the third cutting window with the largest volume ratio and determining that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
Further, step S3 comprises the following steps: S31, calculating the number k of the third cutting windows; S32, randomly selecting k points among all the point cloud structures in the second cutting window as centroids, then taking these seed centroids as cluster centers, calculating the distances from all other points in the point cloud structures to the centroids, and assigning every other point to its nearest centroid to form a cluster; iteratively performing steps S31 and S32 until the positions of the k centroids no longer change; S33, identifying and marking all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, wherein the k third cutting windows contain the k clusters.
Further, step S3 comprises the following steps: using a data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
A second aspect of the present invention provides a cutting system for a point cloud model, comprising: a processor; and a memory coupled to the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions comprising: S1, using a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object, the first cutting window having a length and a width; S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; S3, identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selecting the third cutting window with the largest volume ratio and determining that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
Further, action S3 comprises: S31, calculating the number k of the third cutting windows; S32, randomly selecting k points among all the point cloud structures in the second cutting window as centroids, then taking these seed centroids as cluster centers, calculating the distances from all other points in the point cloud structures to the centroids, and assigning every other point to its nearest centroid to form a cluster; iteratively performing steps S31 and S32 until the positions of the k centroids no longer change; S33, identifying and marking all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, wherein the k third cutting windows contain the k clusters.
Further, action S3 comprises: using a data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
A third aspect of the present invention provides a cutting apparatus for a point cloud model, comprising: a first cutting device, which uses a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object, the first cutting window having a length and a width; a depth adjusting device, which adjusts the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; a second cutting device, which identifies and marks all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; and a calculating device, which calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio and determines that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
Further, the second cutting device is also used to calculate the number k of the third cutting windows, randomly select k points among all the point cloud structures in the second cutting window as centroids, then take these seed centroids as cluster centers, calculate the distances from all other points in the point cloud structures to the centroids, and assign every other point to its nearest centroid to form a cluster until the positions of the k centroids no longer change, and to identify and mark all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, wherein the k third cutting windows contain the k clusters.
Further, the second cutting device is also used to train a sample of the target object using a data set of the target object, and to use the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
A fourth aspect of the present invention provides a computer program product, which is tangibly stored on a computer-readable medium and comprises computer-executable instructions which, when executed, cause at least one processor to perform the method according to the first aspect of the present invention.
A fifth aspect of the present invention provides a computer-readable medium on which computer-executable instructions are stored, the computer-executable instructions, when executed, causing at least one processor to perform the method according to the first aspect of the present invention.
The cutting mechanism for a point cloud model provided by the present invention takes into account the depth information of the 3D point cloud model that is otherwise ignored, and the present invention can automatically filter out the target object and deliver it to the client. In addition, the present invention uses clustering and deep learning approaches, among others, to screen the target object.
Brief description of the drawings
FIG. 1 is an architecture diagram of a point cloud model cutting system according to a specific embodiment of the present invention;
FIG. 2 is a schematic diagram of the point cloud model and the first cutting window of a point cloud model cutting mechanism according to a specific embodiment of the present invention;
FIG. 3 is a schematic diagram of the target object of the point cloud model and the second cutting window of a point cloud model cutting mechanism according to a specific embodiment of the present invention;
FIG. 4 is a schematic diagram of the target object of the point cloud model and the third cutting windows of a point cloud model cutting mechanism according to a specific embodiment of the present invention;
FIG. 5 is a schematic diagram of the clustering approach of a point cloud model cutting mechanism according to a specific embodiment of the present invention.
Detailed description of the embodiments
Specific embodiments of the present invention are described below with reference to the accompanying drawings.
The present invention provides a cutting mechanism for a point cloud model, which uses a three-dimensional cutting window to accurately lock onto the target object in the point cloud model and uses the volume ratio to select that target object.
As shown in FIG. 1, the point cloud model cutting system includes software modules and hardware devices. The hardware devices include a screen S and a computing device D. The screen S has a hardware interface to the computing device D, such as an HDMI or VGA port; it can display the graphical data of the computing device D and presents the data to the client C. The computing device D has hardware interfaces to the screen S, the mouse M and the keyboard K, and has the computing capability to download the point cloud model or point cloud structures. The mouse M and the keyboard K are the input devices of the client C, and the computing device D can display data to the client C through the screen S.
The software modules include a first cutting device 110, a downloading device 120, a depth adjusting device 130, a generating device 140, a second cutting device 150, a calculating device 160, and recommending devices 170 and 180. The downloading device 120 is used to download the bulk data of a point cloud model 200 and display the point cloud model 200 on the screen S, and the first cutting device 110 uses a two-dimensional first cutting window to select, from the point cloud model, the point cloud structure that includes the target object. The depth adjusting device 130 adjusts the depth of the first cutting window to form a three-dimensional second cutting window. The generating device 140 receives the configuration and parameters of the downloading device 120 and the depth adjusting device 130 and, based on the user's input, generates the second cutting window serving as a bounding box. The second cutting device 150 identifies and marks all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, the target object being located in one of the third cutting windows. The calculating device 160 calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio, and determines that the point cloud structure in that third cutting window is the target object.
A first aspect of the present invention provides a cutting method for a point cloud model, which includes the following steps.
First, step S1 is executed: the first cutting device 110 uses a two-dimensional first cutting window to select, from a point cloud model, the point cloud structure that includes the target object, the first cutting window having a length and a width.
As shown in FIG. 2, the downloading device 120 downloads the bulk data of a point cloud model 200 and displays the point cloud model 200 on the screen S. In this embodiment, the point cloud model 200 includes a first cylinder 210, a second cylinder 220, a first cube 230, a second cube 240, and other redundant point cloud structures (not shown). The first cylinder 210, the second cylinder 220, the first cube 230 and the second cube 240 are all point cloud structures. The target object is the first cylinder 210.
It should be noted that the first cylinder 210, the second cylinder 220, the first cube 230 and the second cube 240 shown in FIG. 2 are all point cloud structures, and all objects shown in FIGS. 3 and 4 are also point cloud structures; for convenience and brevity of description, the point-cloud rendering is omitted. That is, the first cylinder 210 and the part of the second cylinder 220 in FIG. 3 are also point cloud structures, and the first cylinder 210, the first redundant point cloud structure 250 and the second redundant point cloud structure 260 in FIG. 4 are also point cloud structures.
Specifically, the first cutting device 110 obtains the position of the mouse M relative to the screen S, input by the user C through the keyboard K and the mouse M, and generates a rectangular first cutting window W1. The first cutting window W1 is two-dimensional: it has only a length l and a width h, and no depth. As shown in FIG. 2, the first cylinder 210 serving as the target object is accommodated in the first cutting window W1. In addition, the first cutting window W1 also contains some redundant point cloud structures (not shown).
Then step S2 is executed: the depth adjusting device 130 adjusts the depth of the first cutting window; the length, width and depth of the first cutting window constitute a three-dimensional second cutting window, and the target object is located in the second cutting window.
The depth adjusting device 130 automatically generates a slider bar (not shown) for the user, and the user C drags the slider on the screen S with the mouse M to input the desired depth. The slider can display the two end values, minimum depth and maximum depth, for the user to choose from. As shown in FIG. 3, the depth indicated by the slider before the user input is d'; after the user inputs the desired depth, the depth is adjusted from d' to d. The length l, width h and depth d of the first cutting window now constitute a three-dimensional second cutting window W2, and the first cylinder 210 serving as the target object is located in the second cutting window W2. A part of the second cylinder 220 was originally inside the cutting window; through the adjustment of the depth, that part of the second cylinder 220 is no longer accommodated in the second cutting window W2.
In addition, the user C can also use the mouse M to switch the field of view and viewing angle of the point cloud model 200 displayed on the screen S. Comparing FIG. 3 and FIG. 4, the field of view and viewing angle of the point cloud model 200 are different; by adjusting them, the depth of the second cutting window W2 can be adjusted more precisely.
After the user C has selected a satisfactory depth, the generating device 140 receives the configuration and parameters of the downloading device 120 and the depth adjusting device 130 and, based on the user's input, generates the second cutting window W2 serving as a bounding box.
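As an illustration of how such a bounding box can be applied to raw point cloud data, the following minimal sketch keeps only the points that fall inside the second cutting window W2; the function name crop_second_window, the choice of NumPy, the convention that depth runs along the z axis, and the example ranges are assumptions made for this sketch and are not prescribed by the patent.

```python
import numpy as np

def crop_second_window(points, x_range, y_range, depth_range):
    """Keep only the points of a cloud that fall inside the 3D second cutting window W2.

    points      : (N, 3) array of XYZ coordinates
    x_range     : (x_min, x_max), derived from the length l of the 2D first window
    y_range     : (y_min, y_max), derived from the width h of the 2D first window
    depth_range : (d_min, d_max), the depth chosen by the user with the slider
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (
        (x >= x_range[0]) & (x <= x_range[1]) &
        (y >= y_range[0]) & (y <= y_range[1]) &
        (z >= depth_range[0]) & (z <= depth_range[1])
    )
    return points[mask]

# Example: crop a random cloud to a 0.8 x 0.8 x 0.5 box
cloud = np.random.rand(10_000, 3)
w2_points = crop_second_window(cloud, (0.2, 1.0), (0.2, 1.0), (0.0, 0.5))
print(w2_points.shape)
```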
Next, step S3 is executed: the second cutting device 150 identifies and marks all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, the target object being located in one of the third cutting windows.
Step S3 can be implemented in a variety of ways, for example a clustering approach, a deep learning approach or a supervoxel clustering approach.
According to the clustering approach, step S3 further includes sub-step S31, sub-step S32 and sub-step S33.
In sub-step S31, the number k of third cutting windows is calculated. As shown in FIG. 4, the second cutting window W2 is only a coarse selection; it accommodates the first cylinder 210 serving as the target object as well as other redundant point cloud structures, including the first redundant point cloud structure 250 and the second redundant point cloud structure 260. Each point cloud structure in the second cutting window W2 is accommodated by a third cutting window: specifically, the first cylinder 210 is in the third cutting window W31, the first redundant point cloud structure 250 is in the third cutting window W32, and the second redundant point cloud structure 260 is in the third cutting window W33, so k = 3.
In sub-step S32, k points among all the point cloud structures in the second cutting window are randomly selected as centroids; then, taking these seed centroids as cluster centers, the distances from all other points in the point cloud structures to the centroids are calculated, and every other point is assigned to its nearest centroid to form a cluster.
Steps S31 and S32 are performed iteratively until the positions of the k centroids no longer change.
Finally, sub-step S33 is executed: all point cloud structures in the second cutting window are identified and marked to form k three-dimensional third cutting windows, where the k third cutting windows contain the k clusters.
Specifically, FIG. 5 illustrates the principle of the clustering approach. FIG. 5 contains multiple point cloud structures, among which three points are selected as centroids: a first centroid z1, a second centroid z2 and a third centroid z3. Taking z1, z2 and z3 as cluster centers, the distances from all the other points of the point cloud structures in FIG. 5 to z1, z2 and z3 are calculated. Taking the third centroid z3 as an example, its distances to the points are d1, d2, ..., dn; the points closest to the third centroid z3 are assigned to it and form one cluster with z3. By analogy, the first centroid z1 and the second centroid z2 each form a cluster as well. At this point, three clusters have been obtained.
In this embodiment, any three points among all the point cloud structures in FIG. 4 are randomly selected as centroids, and three clusters can be separated according to the above clustering principle: a first cluster for the first cylinder 210 serving as the target object, a second cluster for the first redundant point cloud structure 250, and a third cluster for the second redundant point cloud structure 260.
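Sub-steps S31 to S33 correspond to a k-means-style clustering of the XYZ coordinates. The sketch below is a plain Lloyd-type k-means with k = 3, assumed here only to illustrate the idea; the function name, the stopping test and the handling of empty clusters are choices made for this sketch, not details taken from the patent.

```python
import numpy as np

def kmeans_point_cloud(points, k=3, max_iter=100, seed=0):
    """Pick k random points as centroids, assign every point to its nearest
    centroid, recompute the centroids, and repeat until they stop moving."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # distance of every point to every centroid, shape (N, k)
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centroids = np.array([
            points[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
            for i in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Each label value then corresponds to one third cutting window
pts = np.random.rand(5_000, 3)
labels, centroids = kmeans_point_cloud(pts, k=3)
```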
According to the deep learning approach, step S3 further includes the following steps: using a data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all point cloud structures in the second cutting window and form a plurality of three-dimensional third cutting windows, the target object being located in one of the third cutting windows.
According to the supervoxel clustering approach, the number of third cutting windows is calculated, then k points among all the point cloud structures in the second cutting window are randomly selected as seed voxels, and LCCP is used to mark the concave-convex relationships between different surfaces, so that points in the second cutting window that share no intersecting region are divided into different supervoxel clusters; all point cloud structures in the second cutting window are thereby identified and marked to form k three-dimensional third cutting windows, where the k third cutting windows contain the k clusters.
Specifically, supervoxel clustering first selects a few seed voxels and then performs region growing to obtain large point cloud clusters, the points inside each cluster being similar. Because supervoxel clustering is an over-segmentation, adjacent objects may be gathered together and segmented into a single object; LCCP therefore marks the concave-convex relationships of the different surfaces, after which a region-growing algorithm clusters the small regions into large objects. This region growing is constrained by convexity, that is, a region is only allowed to grow across convex edges. Since the first cylinder 210 and the first redundant point cloud structure 250 in the figure have no intersecting region, supervoxel clustering alone can divide them into different point clouds.
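The patent relies on supervoxel clustering combined with LCCP. The sketch below is not an LCCP implementation; it is a simplified stand-in that only illustrates why point cloud structures with no intersecting region (such as the first cylinder 210 and the first redundant point cloud structure 250) end up in different clusters, using plain Euclidean connectivity over a SciPy k-d tree. The connection radius and the function name are assumptions made for this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.05):
    """Group points into spatially connected components: two points belong to
    the same cluster if they can be linked by a chain of neighbours closer
    than `radius`.  Structures that do not intersect are therefore separated."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        # flood fill over the neighbourhood graph starting from point i
        stack = [i]
        labels[i] = current
        while stack:
            j = stack.pop()
            for n in tree.query_ball_point(points[j], radius):
                if labels[n] == -1:
                    labels[n] = current
                    stack.append(n)
        current += 1
    return labels
```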
Specifically, in this embodiment, a plurality of data related to the first cylinder 210 (for example, a CAD model of the first cylinder 210) are used as a data set to train a sample of the first cylinder 210, and the sample is used to compare against the first cylinder 210, so as to identify and mark all point cloud structures in the second cutting window and form a plurality of three-dimensional third cutting windows, the first cylinder 210 being located in one of the third cutting windows. Object recognition is based on the user knowing what the target object to be found is; the target object has a boundary and differs greatly from the other impurities, so a binary classification is performed to distinguish the target object from the impurities. A large amount of data is used to train the features of the object; each sample has a corresponding label, and during training the results are continuously compared with the labels so as to reduce the error.
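As a hedged illustration of the binary target-versus-impurity decision described above, the sketch below trains a classifier on labelled example clusters. Instead of the deep learning model of the patent it uses hand-crafted bounding-box features and scikit-learn logistic regression; the feature choice, the function names and the assumption that labelled positive samples come from the CAD model of the target are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def shape_features(points):
    """Very coarse global descriptor of one cluster: bounding-box extents
    plus the standard deviation of the points along each axis."""
    extent = points.max(axis=0) - points.min(axis=0)
    spread = points.std(axis=0)
    return np.concatenate([extent, spread])

def train_target_classifier(positive_clouds, negative_clouds):
    """positive_clouds: clusters sampled from the target (label 1);
    negative_clouds: background or impurity clusters (label 0)."""
    X = np.array([shape_features(p) for p in positive_clouds + negative_clouds])
    y = np.array([1] * len(positive_clouds) + [0] * len(negative_clouds))
    return LogisticRegression(max_iter=1000).fit(X, y)

def is_target(classifier, cluster_points):
    return classifier.predict(shape_features(cluster_points)[None, :])[0] == 1
```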
Finally, step S4 is executed: the calculating device 160 calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio, and determines that the point cloud structure in that third cutting window is the target object.
The calculating device 160 calculates the volumes of the first cylinder 210, the first redundant point cloud structure 250 and the second redundant point cloud structure 260 respectively. Suppose the volume of the first cylinder 210 in the third cutting window W31 is V1, the volume of the first redundant point cloud structure 250 in the third cutting window W32 is V2, and the volume of the second redundant point cloud structure 260 in the third cutting window W33 is V3. The volume of the second cutting window W2 is V, so the volume ratio of the first cylinder 210 is V1/V, and the volume ratios of the first redundant point cloud structure 250 and the second redundant point cloud structure 260 are V2/V and V3/V respectively.
When V1/V > V2/V and V1/V > V3/V, the third cutting window with the largest volume ratio is determined and the point cloud structure in that third cutting window is judged to be the target object, i.e. the first cylinder 210 is the target object.
The third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window. Therefore, the recommending device 170 recommends the recommended target object 180 to the client C; the target object is the first cylinder 210.
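A minimal sketch of step S4 follows. It assumes that the volume of each point cloud structure is approximated by the volume of its axis-aligned third cutting window (the patent does not prescribe how the volume is measured), and the function names are assumptions for this sketch.

```python
import numpy as np

def bbox_volume(points):
    """Volume of the axis-aligned third cutting window enclosing one cluster."""
    extent = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extent))

def pick_target_cluster(clusters, v_w2):
    """Compute each cluster's volume ratio V_i / V against the second cutting
    window W2 and return the index of the cluster with the largest ratio."""
    ratios = [bbox_volume(c) / v_w2 for c in clusters]
    return int(np.argmax(ratios)), ratios

# Example with the three clusters of the embodiment (210, 250, 260):
# best, ratios = pick_target_cluster([pts_210, pts_250, pts_260], v_w2=l * h * d)
```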
A second aspect of the present invention provides a cutting system for a point cloud model, comprising: a processor; and a memory coupled to the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions comprising: S1, using a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object, the first cutting window having a length and a width; S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; S3, identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selecting the third cutting window with the largest volume ratio and determining that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
Further, action S3 comprises: S31, calculating the number k of the third cutting windows; S32, randomly selecting k points among all the point cloud structures in the second cutting window as centroids, then taking these seed centroids as cluster centers, calculating the distances from all other points in the point cloud structures to the centroids, and assigning every other point to its nearest centroid to form a cluster; iteratively performing steps S31 and S32 until the positions of the k centroids no longer change; S33, identifying and marking all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, wherein the k third cutting windows contain the k clusters.
Further, action S3 comprises: using a data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
A third aspect of the present invention provides a cutting apparatus for a point cloud model, comprising: a first cutting device, which uses a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object, the first cutting window having a length and a width; a depth adjusting device, which adjusts the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; a second cutting device, which identifies and marks all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows; and a calculating device, which calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio and determines that the point cloud structure in that third cutting window is the target object, wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
Further, the second cutting device is also used to calculate the number k of the third cutting windows, randomly select k points among all the point cloud structures in the second cutting window as centroids, then take these seed centroids as cluster centers, calculate the distances from all other points in the point cloud structures to the centroids, and assign every other point to its nearest centroid to form a cluster until the positions of the k centroids no longer change, and to identify and mark all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, wherein the k third cutting windows contain the k clusters.
Further, the second cutting device is also used to train a sample of the target object using a data set of the target object, and to use the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
A fourth aspect of the present invention provides a computer program product, which is tangibly stored on a computer-readable medium and comprises computer-executable instructions which, when executed, cause at least one processor to perform the method according to the first aspect of the present invention.
A fifth aspect of the present invention provides a computer-readable medium on which computer-executable instructions are stored, the computer-executable instructions, when executed, causing at least one processor to perform the method according to the first aspect of the present invention.
The cutting mechanism for a point cloud model provided by the present invention takes into account the depth information of the 3D point cloud model that is otherwise ignored, and the present invention can automatically filter out the target object and deliver it to the client. In addition, the present invention uses clustering and deep learning approaches, among others, to screen the target object.
Although the content of the present invention has been described in detail through the preferred embodiments above, it should be recognized that the above description should not be regarded as limiting the present invention. After reading the above content, various modifications and substitutions of the present invention will be obvious to those skilled in the art. Therefore, the scope of protection of the present invention should be defined by the appended claims. In addition, any reference signs in the claims should not be construed as limiting the claims concerned; the word "comprising" does not exclude devices or steps not listed in other claims or in the description; words such as "first" and "second" are used only to denote names and do not denote any particular order.

Claims (11)

  1. A cutting method for a point cloud model, comprising the following steps:
    S1, using a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object, the first cutting window having a length and a width;
    S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window;
    S3, identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows;
    S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selecting the third cutting window with the largest volume ratio and determining that the point cloud structure in that third cutting window is the target object,
    wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
  2. The cutting method for a point cloud model according to claim 1, wherein the step S3 further comprises the following steps:
    S31, calculating the number k of the third cutting windows;
    S32, randomly selecting k points among all the point cloud structures in the second cutting window as centroids, then taking these seed centroids as cluster centers, calculating the distances from all other points in the point cloud structures to the centroids, and assigning every other point in the point cloud structures to its nearest centroid to form a cluster,
    iteratively performing steps S31 and S32 until the positions of the k centroids no longer change,
    S33, identifying and marking all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, wherein the k third cutting windows contain the k clusters.
  3. The cutting method for a point cloud model according to claim 1, wherein the step S3 further comprises the following steps:
    using a data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  4. A cutting system for a point cloud model, comprising:
    a processor; and
    a memory coupled to the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions comprising:
    S1, using a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object, the first cutting window having a length and a width;
    S2, adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window;
    S3, identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows;
    S4, calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selecting the third cutting window with the largest volume ratio and determining that the point cloud structure in that third cutting window is the target object,
    wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
  5. The cutting system for a point cloud model according to claim 4, wherein the action S3 further comprises:
    S31, calculating the number k of the third cutting windows;
    S32, randomly selecting k points among all the point cloud structures in the second cutting window as centroids, then taking these seed centroids as cluster centers, calculating the distances from all other points in the point cloud structures to the centroids, and assigning every other point in the point cloud structures to its nearest centroid to form a cluster,
    iteratively performing steps S31 and S32 until the positions of the k centroids no longer change,
    S33, identifying and marking all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, wherein the k third cutting windows contain the k clusters.
  6. The cutting system for a point cloud model according to claim 1, wherein the action S3 further comprises:
    using a data set of the target object to train a sample of the target object, and using the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  7. A cutting apparatus for a point cloud model, comprising:
    a first cutting device, which uses a two-dimensional first cutting window to select, from a point cloud model, a point cloud structure that includes a target object, the first cutting window having a length and a width;
    a depth adjusting device, which adjusts the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window;
    a second cutting device, which identifies and marks all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows;
    a calculating device, which calculates the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, selects the third cutting window with the largest volume ratio and determines that the point cloud structure in that third cutting window is the target object,
    wherein the third cutting window is smaller than the second cutting window, and the second cutting window is smaller than the first cutting window.
  8. The cutting apparatus for a point cloud model according to claim 1, wherein the second cutting device is further used to calculate the number k of the third cutting windows, randomly select k points among all the point cloud structures in the second cutting window as centroids, then take these seed centroids as cluster centers, calculate the distances from all other points in the point cloud structures to the centroids, and assign every other point in the point cloud structures to its nearest centroid to form a cluster until the positions of the k centroids no longer change, and identify and mark all the point cloud structures in the second cutting window to form k three-dimensional third cutting windows, wherein the k third cutting windows contain the k clusters.
  9. The cutting apparatus for a point cloud model according to claim 1, wherein the second cutting device is further used to train a sample of the target object using a data set of the target object, and to use the sample of the target object to compare against the target object, so as to identify and mark all the point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, wherein the target object is located in one of the third cutting windows.
  10. A computer program product, tangibly stored on a computer-readable medium and comprising computer-executable instructions which, when executed, cause at least one processor to perform the method according to any one of claims 1 to 3.
  11. A computer-readable medium on which computer-executable instructions are stored, the computer-executable instructions, when executed, causing at least one processor to perform the method according to any one of claims 1 to 3.
PCT/CN2019/093894 2019-06-28 2019-06-28 Cutting method, apparatus and system for point cloud model WO2020258314A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP19935005.9A EP3971829B1 (en) 2019-06-28 2019-06-28 Cutting method, apparatus and system for point cloud model
CN201980096739.8A CN113906474A (zh) 2019-06-28 2019-06-28 点云模型的切割方法、装置和系统
US17/620,790 US11869143B2 (en) 2019-06-28 2019-06-28 Cutting method, apparatus and system for point cloud model
PCT/CN2019/093894 WO2020258314A1 (zh) 2019-06-28 2019-06-28 点云模型的切割方法、装置和系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/093894 WO2020258314A1 (zh) 2019-06-28 2019-06-28 点云模型的切割方法、装置和系统

Publications (1)

Publication Number Publication Date
WO2020258314A1 true WO2020258314A1 (zh) 2020-12-30

Family

ID=74059997

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093894 WO2020258314A1 (zh) 2019-06-28 2019-06-28 Cutting method, apparatus and system for point cloud model

Country Status (4)

Country Link
US (1) US11869143B2 (zh)
EP (1) EP3971829B1 (zh)
CN (1) CN113906474A (zh)
WO (1) WO2020258314A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114492619A (zh) * 2022-01-22 2022-05-13 电子科技大学 Point cloud data set construction method and apparatus based on statistics and convexity
CN115205717A (zh) * 2022-09-14 2022-10-18 广东汇天航空航天科技有限公司 Obstacle point cloud data processing method and flight device
CN116310849A (zh) * 2023-05-22 2023-06-23 深圳大学 Individual tree point cloud extraction method based on three-dimensional morphological features

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3839793A1 (en) * 2019-12-16 2021-06-23 Dassault Systèmes Interactive object selection
CN113298866B (zh) * 2021-06-11 2024-01-23 梅卡曼德(北京)机器人科技有限公司 Object classification method and apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130293532A1 (en) * 2012-05-04 2013-11-07 Qualcomm Incorporated Segmentation of 3d point clouds for dense 3d modeling
CN105678753A (zh) * 2015-12-31 2016-06-15 百度在线网络技术(北京)有限公司 Object segmentation method and apparatus
CN105785462A (zh) * 2014-06-25 2016-07-20 同方威视技术股份有限公司 Method for locating a target in a three-dimensional CT image and security inspection CT system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100721536B1 (ko) * 2005-12-09 2007-05-23 한국전자통신연구원 Method for reconstructing a three-dimensional structure using silhouette information on a two-dimensional plane
CN102222352B (zh) * 2010-04-16 2014-07-23 株式会社日立医疗器械 Image processing method and image processing apparatus
US10768304B2 (en) * 2017-12-13 2020-09-08 Luminar Technologies, Inc. Processing point clouds of vehicle sensors having variable scan line distributions using interpolation functions
US11341663B2 (en) * 2019-06-03 2022-05-24 Zebra Technologies Corporation Method, system and apparatus for detecting support structure obstructions

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130293532A1 (en) * 2012-05-04 2013-11-07 Qualcomm Incorporated Segmentation of 3d point clouds for dense 3d modeling
CN105785462A (zh) * 2014-06-25 2016-07-20 同方威视技术股份有限公司 Method for locating a target in a three-dimensional CT image and security inspection CT system
CN105678753A (zh) * 2015-12-31 2016-06-15 百度在线网络技术(北京)有限公司 Object segmentation method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3971829A4 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114492619A (zh) * 2022-01-22 2022-05-13 电子科技大学 Point cloud data set construction method and apparatus based on statistics and convexity
CN114492619B (zh) * 2022-01-22 2023-08-01 电子科技大学 Point cloud data set construction method and apparatus based on statistics and convexity
CN115205717A (zh) * 2022-09-14 2022-10-18 广东汇天航空航天科技有限公司 Obstacle point cloud data processing method and flight device
CN115205717B (zh) * 2022-09-14 2022-12-20 广东汇天航空航天科技有限公司 Obstacle point cloud data processing method and flight device
CN116310849A (zh) * 2023-05-22 2023-06-23 深圳大学 Individual tree point cloud extraction method based on three-dimensional morphological features
CN116310849B (zh) * 2023-05-22 2023-09-19 深圳大学 Individual tree point cloud extraction method based on three-dimensional morphological features

Also Published As

Publication number Publication date
EP3971829A1 (en) 2022-03-23
EP3971829B1 (en) 2024-01-31
EP3971829A4 (en) 2023-01-18
US11869143B2 (en) 2024-01-09
CN113906474A (zh) 2022-01-07
EP3971829C0 (en) 2024-01-31
US20220358717A1 (en) 2022-11-10

Similar Documents

Publication Publication Date Title
WO2020258314A1 (zh) 点云模型的切割方法、装置和系统
US10872416B2 (en) Object oriented image editing
US10956784B2 (en) Neural network-based image manipulation
US10523916B2 (en) Modifying images with simulated light sources
US11538096B2 (en) Method, medium, and system for live preview via machine learning models
TWI485650B (zh) 用於多相機校準之方法及配置
CN112836734A (zh) 一种异源数据融合方法及装置、存储介质
US10977767B2 (en) Propagation of spot healing edits from one image to multiple images
JP7490784B2 (ja) 拡張現実マップキュレーション
US20180357819A1 (en) Method for generating a set of annotated images
EP3274964B1 (en) Automatic connection of images using visual features
US20190362551A1 (en) System and techniques for automated mesh retopology
CN107949851A (zh) 在场景内的物体的端点的快速和鲁棒识别
US20130301938A1 (en) Human photo search system
CN115147599A (zh) 一种面向遮挡和截断场景的多几何特征学习的物体六自由度位姿估计方法
JP2019211981A (ja) 情報処理装置、情報処理装置の制御方法およびプログラム
WO2024021321A1 (zh) 模型生成的方法、装置、电子设备和存储介质
CN103617616A (zh) 一种仿射不变的图像匹配方法
Baldacci et al. Presentation of 3D scenes through video example
US20230196645A1 (en) Extracted image segments collage
He et al. Viewpoint selection for photographing architectures
WO2023273271A1 (zh) 目标位姿估计方法、装置、计算设备、存储介质及计算机程序
Perez-Yus et al. RGB-D based tracking of complex objects
Guerrero et al. RGB-D Based Tracking of Complex Objects
He et al. Viewpoint Selection for Taking a good Photograph of Architecture.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19935005

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019935005

Country of ref document: EP

Effective date: 20211213

NENP Non-entry into the national phase

Ref country code: DE